Although Apple went all-in on facial recognition, most manufacturers still use fingerprint sensors. In the name of convenience, major banks such as Wells Fargo and HSBC are increasingly letting customers use fingerprints to log in to their checking accounts. However, the results of the DeepMasterPrints experiment highlight how criminals can deploy AI to bypass such security measures. Furthermore, this vulnerability will be (or already is being) exploited by state actors to gain access to dissidents’ devices.

Building on last year’s MasterPrints paper, researchers published their improvements in the DeepMasterPrints article in October. The original researchers had discovered that fingerprint sensors could be tricked with digitally altered or partial images of real fingerprints. These “MasterPrints” deceive biometric security sensors that match only partial prints instead of complete fingerprints. To the naked eye, MasterPrints are easily distinguishable because they contain only partial fingerprints; current fingerprint software, however, could still be duped. The improved DeepMasterPrints are in some cases 30 times more successful than real fingerprints because they are produced with generative adversarial networks (GANs), a deep neural network (DNN) technique in which two networks are trained against each other, yielding realistic-looking digital fingerprints whose covert properties are hard to detect.
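As a rough illustration of how a GAN works, here is a minimal training loop in PyTorch. The two-network setup, with a generator producing fake prints and a discriminator judging them, is the core of the technique; the layer sizes, image dimensions, and hyperparameters below are illustrative assumptions, not the DeepMasterPrints authors’ actual architecture.

```python
# Minimal GAN sketch (illustrative only; not the DeepMasterPrints architecture).
import torch
import torch.nn as nn

LATENT_DIM = 100      # size of the random noise vector fed to the generator (assumed)
IMG_PIXELS = 64 * 64  # a flattened 64x64 grayscale fingerprint patch (assumed)

# Generator: maps random noise to a fake fingerprint image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores whether an image looks real (1) or generated (0).
D = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update: D learns to spot fakes, G learns to fool D.

    `real_images` is a (batch, IMG_PIXELS) tensor of flattened real prints.
    """
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fakes = G(noise)

    # 1) Train the discriminator on real vs. generated images.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fakes.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator so that D labels its fakes as "real".
    opt_g.zero_grad()
    g_loss = loss_fn(D(fakes), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

Over many such steps the generator’s output becomes increasingly difficult for the discriminator, and by extension a fingerprint matcher, to tell apart from the real thing.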

Figure: Examples of real fingerprints (left) and AI-generated fake fingerprint images (right). Source: Philip Bontrager et al., “DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution,” 2018.

GANs have also been used to create fabricated videos known as “deepfakes,” as well as images that can trick image-recognition software. Deepfakes could have incredibly far-reaching consequences. For example, a deepfake video of President Trump could appear to declare war; even if it were quickly debunked, markets could plunge, creating chaos around the world. In a related demonstration, researchers fooled Google’s image-recognition software with an adversarially modified image of a turtle, which the model classified as a rifle. The attack worked not by tampering with the training data but by subtly perturbing the turtle’s appearance until the model misread it. Google has since contributed image-recognition technology to the Pentagon’s Project Maven program, which is used to track ISIS elements in Syria. That program is better secured than open-source software, but it is not foolproof.
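Attacks like the turtle example rely on adversarial perturbations: tiny input changes computed from the model’s own gradients. Below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest and best-known such attacks; the `model`, label, and `epsilon` values are placeholders assumed for illustration, and this is not the exact technique used against Google’s classifier.

```python
# FGSM sketch: nudge each pixel in the direction that increases the model's
# loss, so a classifier mislabels an otherwise ordinary image.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                true_label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to maximize the model's error.

    `image` is a (batch, ...) tensor with pixel values in [0, 1];
    `true_label` holds the correct class indices. `epsilon` (assumed here)
    controls how visible the perturbation is.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly along the sign of the gradient of the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in the valid range
```

The perturbation is small enough that a human still sees a turtle, while the classifier, reading the shifted pixel statistics, confidently reports a rifle.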