As if AI couldn’t get any scarier, it can now be turned against biometric security. Researchers from New York University and Michigan State University have discovered flaws in fingerprint-based biometric systems that AI can readily exploit.
Phillip Bontrager, a doctoral student at NYU and lead author of the paper, said:
“Fingerprint-based authentication is still a strong way to protect a device or system, but at this point, most systems don’t verify whether a fingerprint or other biometric is coming from a real person or replica.”
The flaw lies in the way most biometric systems work. They rarely rely on a complete fingerprint; instead, they match a partial print to identify a user and grant access. The majority of systems also let users enroll multiple “snapshots” of their fingerprint.
The system then compares an incoming print against all of these snapshots, and if it matches any one of the partial prints, access is granted. This weakness was thoroughly examined in earlier work by NYU professor Nasir Memon.
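To make the any-snapshot weakness concrete, here is a minimal sketch in Python. The data structures and similarity measure are purely hypothetical stand-ins (real matchers compare minutiae geometry, not ID sets); the point is only the `any()` logic, where matching a single enrolled snapshot is enough to get in.

```python
def similarity(print_a: set, print_b: set) -> float:
    """Toy similarity score: fraction of shared minutiae IDs (hypothetical)."""
    if not print_a or not print_b:
        return 0.0
    return len(print_a & print_b) / min(len(print_a), len(print_b))

def matches_any_snapshot(candidate: set, snapshots: list, threshold: float = 0.8) -> bool:
    """Grant access if the candidate matches ANY enrolled partial snapshot."""
    return any(similarity(candidate, snap) >= threshold for snap in snapshots)

# Example: three enrolled partial prints, each a set of toy minutiae IDs.
enrolled = [{1, 2, 3, 4, 5}, {4, 5, 6, 7, 8}, {8, 9, 10, 11, 12}]
attempt = {4, 5, 6, 7}  # overlaps heavily with the second snapshot only
print(matches_any_snapshot(attempt, enrolled))  # True
```

Notice that the attempt fails against two of the three snapshots but is still accepted; every extra snapshot a user enrolls gives an attacker another target to hit.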
That work led Memon and Professor Arun Ross of Michigan State University to coin the term “MasterPrint” for partial prints capable of matching the stored prints of many different users. The research was funded by a United States National Science Foundation grant, and it won the best paper award at a conference on biometrics and cybersecurity last month.
The scientists then expanded this research, designing an algorithm that can generate artificial fingerprints. These fake prints could be matched against the real fingerprints stored on devices, and a successful match could unlock any device that relies on fingerprint authentication.
The scientists used neural networks to create realistic digital fingerprints that performed even better than the images used in the earlier study. In the new paper, the fake fingerprints not only resembled real ones; they also contained subtle properties imperceptible to the human eye. This means the fakes could fool some fingerprint scanners.
The researchers’ main concern is that this flaw could enable brute-force attacks. Just as scripts try different password combinations until one succeeds, hackers could run scripts that try different fingerprint images against fingerprint-protected systems until one logs in.
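The dictionary-attack analogy can be sketched as follows. Everything here is illustrative: the "matcher" is simple set membership, and the user templates and attacker dictionary are random toy numbers, not real biometric data. The structure mirrors a password dictionary attack, with a small fixed set of candidates tried against many accounts.

```python
import random

random.seed(0)  # deterministic toy demo

def fake_match(candidate: int, enrolled: set) -> bool:
    """Stand-in for a real fingerprint matcher (hypothetical)."""
    return candidate in enrolled

# 100 toy "users", each enrolled with a few random templates.
users = [set(random.sample(range(1000), 5)) for _ in range(100)]

# The attacker's small dictionary of candidate "prints".
dictionary = random.sample(range(1000), 20)

# Count accounts where ANY dictionary entry matches ANY enrolled template.
compromised = sum(
    1 for enrolled in users
    if any(fake_match(c, enrolled) for c in dictionary)
)
print(f"{compromised} of {len(users)} toy accounts matched by the dictionary")
```

The takeaway is that the attacker never targets a specific person: a single dictionary of MasterPrint-style candidates is amortized across every account it is tried against.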
The algorithm hasn’t been tested in real-world scenarios, so it remains to be seen whether the researchers’ concerns hold up. It is just a hypothesis at this point, but a very plausible and scary one.
It is good to see so much time and effort going into researching potential security flaws like this. Many devices, including smartphones, tablets, and even laptops, are secured with fingerprints, and some banks now offer customers the option of securing their accounts the same way.
This is why biometric manufacturers need to develop more secure algorithms and technology for their devices. That may come at a cost: making biometrics more secure could slow down the fingerprint scanning and recognition process. But that will be a small price to pay for security.
With time, this performance hit will likely shrink, so users would not have to worry about speed or about their devices getting hacked by nefarious AI.
Stay tuned for more!