Op-Ed

Is Artificial Intelligence making fingerprint security obsolete?

Although Apple went all-in on facial recognition, most manufacturers still use fingerprint sensors. To improve “convenience,” even major banks such as Wells Fargo and HSBC are increasingly letting customers use fingerprints to log in to their checking accounts. However, the results of the DeepMasterPrints experiment highlight how criminals can deploy AI to bypass such security measures. Furthermore, this vulnerability will be (or already is being) exploited by state actors to gain access to dissidents’ devices.

Building on last year’s MasterPrints paper, researchers published their improvements in the DeepMasterPrints article in October. The researchers discovered that fingerprint sensors can be tricked with digitally altered or partial images of real fingerprints. These “MasterPrints” deceive biometric sensors that match only partial prints instead of complete fingerprints. To the naked eye, MasterPrints are easy to spot as fakes because they contain only partial fingerprints; current fingerprint software, however, can be duped. The improved DeepMasterPrints are in some cases 30 times more successful than real fingerprints because they are created with generative adversarial networks (GANs), a deep neural network (DNN) technique trained on real fingerprint data, which produces realistic-looking digital fingerprints with covert properties that sensors cannot detect.
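To see why partial-print matching is so vulnerable, consider a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from the paper: if a device stores many partial templates per finger and unlocks when any one of them matches, a small per-template false-match rate compounds quickly.

```python
# Back-of-the-envelope: why matching against partial prints is risky.
# Assumes a per-template false match rate of 0.01% and that a device
# stores several partial templates, any one of which unlocks it.
# These numbers are illustrative, not taken from the DeepMasterPrints paper.

fmr = 0.0001          # false match rate for a single partial template
templates = 30        # partial templates stored per enrolled finger

# Probability that a random (or engineered) print matches at least one
# stored partial template: 1 minus the chance it matches none of them.
p_accept = 1 - (1 - fmr) ** templates
print(f"Chance of a false accept: {p_accept:.2%}")  # roughly 0.30%
```

Under these assumptions, the attacker’s odds are about 30 times better than against a single full-print comparison, which is exactly the gap MasterPrints exploit.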

Examples of real fingerprints (left) and AI-generated fake fingerprint images (right). Philip Bontrager et al., “DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution,” 2018.

GANs have also been used to create fabricated “deepfake” videos and images that can trick image-recognition software. Deepfakes could have far-reaching consequences: a deepfake video using President Trump’s image could, for example, appear to declare war, and even if it were quickly debunked, markets could plunge and create chaos around the world. In another case, Google’s image-recognition software was fooled by an adversarially crafted image of a turtle, which it mistook for a rifle; this was achieved by embedding subtle rifle-like patterns in the turtle’s texture. Google has since worked on the Pentagon’s Project Maven program to track ISIS elements in Syria. Such programs have better security than open-source software, but they are not foolproof.

GANs work by pitting a pair of neural networks against each other to create realistic images laced with hidden features that can trick image-recognition software. Using open-source fingerprint databases, the researchers trained one DNN to identify real fingerprints, while the other DNN was trained to fabricate fakes. They then used the second DNN’s fake fingerprints to test the first DNN’s judgment. After millions of rounds, the second DNN adapted and began producing fingerprint imagery realistic enough to outsmart the first.
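For readers who want to see the mechanics, here is a minimal sketch of that two-network game in Python (PyTorch). It is not the authors’ architecture; the network sizes, image dimensions, and training details are assumptions chosen for brevity.

```python
# Minimal sketch of the two-network GAN game described above, in PyTorch.
# One network (D) learns to spot real prints; the other (G) learns to fool it.
import torch
import torch.nn as nn

IMG = 64 * 64          # flattened grayscale fingerprint patch (assumption)
Z = 100                # size of the random latent vector fed to the generator

G = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    # real_batch: (n, IMG) tensor of real prints, scaled to [-1, 1]
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator: real prints -> 1, generated fakes -> 0.
    fake = G(torch.randn(n, Z)).detach()
    d_loss = loss(D(real_batch), ones) + loss(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: make the discriminator label its fakes as real.
    fake = G(torch.randn(n, Z))
    g_loss = loss(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to `train_step` is one round of the adversarial game; over many rounds the generator’s fingerprints become progressively harder for the discriminator to reject.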

After generating realistic fingerprints, the researchers tested them against fingerprint-matching software from two manufacturers, Innovatrics and Neurotechnology. Whenever the commercial matchers were fooled, the researchers tweaked their system to create even more credible fakes. Like the turtle image, DeepMasterPrints contain so-called “noisy data” that can fool sensors consistently, and the researchers calibrated that noise by employing an evolutionary algorithm. Unlike the turtle image, however, this technique is a black box, meaning the researchers do not know exactly how it alters the input imagery.
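The paper’s title calls this search “Latent Variable Evolution.” A rough sketch of the idea, reusing the trained generator `G` from the sketch above: candidate latent vectors are scored against the matcher, and the best ones are mutated into the next generation. The `matcher_score` function here is a dummy stand-in, since the real experiment queries commercial matching software as a black box.

```python
# Rough sketch of an evolutionary search over the generator's latent space.
# Reuses the trained generator G from the previous sketch.
import torch

def matcher_score(images):
    # Stand-in for a commercial matcher's similarity score; the paper
    # treats the real matcher as a black box. Dummy metric so this runs.
    return images.mean(dim=1)

def evolve_masterprint(G, z_dim=100, pop_size=50, generations=200, sigma=0.1):
    pop = torch.randn(pop_size, z_dim)              # random latent vectors
    with torch.no_grad():
        for _ in range(generations):
            scores = matcher_score(G(pop))          # fitness of each candidate
            elite = pop[scores.argsort(descending=True)[: pop_size // 5]]
            # Next generation: the best vectors plus Gaussian mutations.
            pop = elite.repeat(5, 1) + sigma * torch.randn(pop_size, z_dim)
        best = pop[matcher_score(G(pop)).argmax()]
    return best                # latent vector for the most deceptive fake
```

Because the loop only ever observes the matcher’s scores, not its internals, the optimization works even when no one knows why a given perturbation fools the sensor, which is precisely the black-box property noted above.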

Luckily, it is not all doom and gloom. Firstly, many fingerprint readers use additional measures, such as heat or pressure sensors, to confirm that a real finger is present. Secondly, biometric companies can raise the security level of their matchers, although that also raises the failure rate for legitimate users; we all know the annoyance of a phone fingerprint sensor that refuses a slightly wet finger. To keep systems secure, manufacturers need to stay up to date and patch vulnerabilities, because AI methods are advancing by the day.
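As a rough illustration of that security-versus-convenience trade-off, the toy simulation below (with entirely made-up score distributions) shows how raising a matcher’s threshold lowers the false accept rate for impostors while raising the false reject rate for the legitimate owner.

```python
# Toy illustration of the threshold trade-off: stricter matching blocks
# more attackers (lower FAR) but also rejects the owner more often (higher FRR).
# The score distributions are invented purely for illustration.
import random

random.seed(0)
genuine  = [random.gauss(0.80, 0.10) for _ in range(10_000)]  # owner's finger
impostor = [random.gauss(0.40, 0.10) for _ in range(10_000)]  # other prints

for threshold in (0.5, 0.6, 0.7):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    print(f"threshold {threshold:.1f}: FAR {far:.2%}, FRR {frr:.2%}")
```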


Written by NEWSREP guest author Ahmed Hassan, the CEO and Co-Founder of Grey Dynamics in London. He has worked in the Security and Intelligence industry in Africa for the last 8 years. He also holds a master’s degree in Intelligence and Security Studies with a focus on Machine Learning and Intelligence Analysis.
