A professor at Brigham Young University (BYU) is working on a new facial recognition algorithm that matches facial features and facial expressions at the same time. The goal is to deliver two-factor authentication with a single product, while also providing strong liveness detection to guard against various forms of spoofing.
The algorithm is the work of electrical and computer engineering professor D.J. Lee, who is conducting the research with the help of Ph.D. student Zheng Sun. Lee has already applied for a patent for his Concurrent Two-Factor Identity Verification system, and believes that it has a wide range of potential access control applications. For example, the system could be used to unlock a smartphone, or to verify someone’s identity at a hotel room door or an ATM.
“We could build this very tiny device with a camera on it and this device could be deployed at so many different locations,” said Lee. “How great would it be to know that even if you lost your car key, no one can steal your vehicle because they don’t know your secret facial action?”
Instead of capturing a static photo, the Concurrent Two-Factor Identity Verification system asks users to record a one- to two-second video when they register their face. During that video, the user performs a facial action, such as blinking or smiling, and the algorithm records both the user’s face and that action as part of the template. The user must then repeat the same action the next time they use the system to verify their identity.
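The matching logic itself has not been published, but the workflow Lee describes can be sketched at a high level: enrollment extracts both an identity signature and a motion signature from the same short clip, and verification succeeds only when both match. The Python sketch below illustrates that idea under assumptions of our own; the extract_face_embedding and extract_action_embedding functions are toy stand-ins for whatever trained networks the real system uses, and the similarity thresholds are arbitrary.

import numpy as np

def extract_face_embedding(frames: np.ndarray) -> np.ndarray:
    # Toy stand-in: average appearance across frames. A real system
    # would use a trained face-recognition network here.
    return frames.mean(axis=0).ravel()

def extract_action_embedding(frames: np.ndarray) -> np.ndarray:
    # Toy stand-in: frame-to-frame differences as a crude motion
    # signature for the secret facial action (e.g. a blink or smile).
    return np.diff(frames, axis=0).mean(axis=0).ravel()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def enroll(frames: np.ndarray) -> dict:
    """Build a template from the 1-2 second enrollment video."""
    return {"face": extract_face_embedding(frames),
            "action": extract_action_embedding(frames)}

def verify(frames: np.ndarray, template: dict,
           face_thresh: float = 0.8, action_thresh: float = 0.8) -> bool:
    """Accept only if BOTH factors match: the face (who you are)
    and the secret facial action (what only you know how to do)."""
    face_ok = cosine_similarity(
        extract_face_embedding(frames), template["face"]) >= face_thresh
    action_ok = cosine_similarity(
        extract_action_embedding(frames), template["action"]) >= action_thresh
    return face_ok and action_ok

# Example: a clip is a (num_frames, height, width) grayscale array.
clip = np.random.rand(30, 64, 64)   # placeholder enrollment video
template = enroll(clip)
print(verify(clip, template))       # True: the same clip matches itself

Because both factors come from a single clip, a stolen photo fails the action check and a stolen action (performed by someone else) fails the face check, which is what makes the two factors concurrent rather than sequential.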
“The biggest problem we are trying to solve is to make sure the identity verification process is intentional,” continued Lee. “If someone is unconscious, you can still use their finger to unlock a phone and get access to their device or you can scan their retina. It’s pretty unique to add another level of protection that doesn’t cause more trouble for the user.”
Lee’s system currently verifies identities with 90 percent accuracy, though he is confident that he can increase that number with a larger data set. Thus far, Lee has trained the system with 8,000 video clips from only 50 subjects. The templates themselves could be stored on a server, or embedded locally on a device.
Many facial recognition providers have emphasized the importance of liveness detection, especially as deepfake technology improves and the threat becomes more significant. It is worth noting that Lee’s system is an active solution, one that demands a deliberate action from the user, which distinguishes it from passive solutions that run without any explicit user input. In the meantime, iProov recently established a new Security Operations Centre to monitor deepfake activity.
March 16, 2021 – by Eric Weiss