18/12/2018 | Security - Is Artificial Intelligence making fingerprint security obsolete?

NewsRep

Although Apple went all-in on facial recognition, most manufacturers still use fingerprint sensors. In the name of “convenience,” even major banks such as Wells Fargo and HSBC increasingly let customers use fingerprints to log in to their checking accounts. However, the results of the DeepMasterPrints experiment highlight how criminals could deploy AI to bypass these security measures. Furthermore, this vulnerability will be (or already is being) exploited by state actors to gain access to dissidents’ devices.

 

Building on last year’s MasterPrints paper, researchers published their improvements in the DeepMasterPrints article in October. The original researchers discovered that fingerprint sensors could be tricked with digitally altered or partial images of real fingerprints. These “MasterPrints” can deceive biometric sensors that match only partial prints instead of complete fingerprints. To the naked eye, MasterPrints are easy to spot because they contain only fragments of fingerprints; current fingerprint software, however, can be duped. The improved DeepMasterPrints are in some cases 30 times more successful than real fingerprints because they are produced with generative adversarial networks (GANs), a technique that pits two deep neural networks (DNNs) against each other during training, creating realistic-looking digital fingerprints with covert properties that are hard to detect.

Examples of real fingerprints (left) and AI-generated fake fingerprint images (right). Philip Bontrager et al., “DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution,” 2018.
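A rough way to see why partial-print matching is risky: a sensor typically enrolls several partial templates per finger and accepts a match against any one of them, so the odds of a fabricated print matching somebody multiply. A back-of-the-envelope sketch in Python (the per-template match rate and template count below are illustrative assumptions, not figures from the paper):

```python
# Illustrative only: probability that one fake partial print matches
# at least one of k enrolled partial templates, assuming independent
# comparisons with a per-template false-match rate p.
def any_match_probability(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

p = 0.001   # hypothetical per-template false-match rate
k = 12      # hypothetical partial templates stored per enrolled finger
print(f"chance of fooling one enrolled finger: {any_match_probability(p, k):.4%}")

# Against a population of n users, the attack only has to match someone.
n = 1000
print(f"chance of matching at least one of {n} users: "
      f"{1 - (1 - any_match_probability(p, k)) ** n:.2%}")
```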

GANs have also been used to create fabricated videos known as “deepfakes,” as well as images that can trick image-recognition software. Deepfakes could have incredibly far-reaching consequences: a fabricated video of President Trump declaring war, even if quickly debunked, could send markets plunging and create chaos around the world. Image classifiers are vulnerable too: Google’s image-recognition software was fooled into mistaking a turtle for a rifle after researchers embedded subtle, rifle-like adversarial patterns into the turtle image itself. Google has since worked on the Pentagon’s Project Maven program, which tracks ISIS elements in Syria. That program is better secured than open-source software, but it is not foolproof.
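The turtle result rests on adversarial perturbations: tiny, targeted pixel changes that flip a classifier’s decision while remaining nearly invisible to people. As a hedged illustration of the general idea (not the exact attack used on the turtle), here is the classic one-step fast gradient sign method in PyTorch; the model, images, and labels are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, eps=0.03):
    """One-step fast gradient sign method: nudge each pixel in the
    direction that increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Perturb by eps in the sign of the gradient, then clamp to valid range.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (classifier and inputs are assumed placeholders):
# adv = fgsm_attack(pretrained_classifier, turtle_images, turtle_labels)
# pretrained_classifier(adv).argmax(dim=1)  # may now predict "rifle"
```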

GANs are usually deployed as a pair of neural networks that compete to create realistic images laced with hidden features that can trick image-recognition software. Using open-source fingerprint databases, the researchers trained one DNN to identify real fingerprints, while the other DNN was trained to fabricate fakes. The fakes produced by the second DNN were then used to test the first. Over millions of rounds, the second DNN adapted and produced ever more realistic fingerprint imagery in order to outsmart the first.
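In code, that adversarial training loop looks roughly like the PyTorch sketch below. The tiny fully connected networks, image size, and data loader are stand-in assumptions; the paper’s actual architectures are convolutional and considerably larger:

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_PIXELS = 100, 64 * 64  # assumed sizes, not the paper's

# Discriminator: learns to tell real fingerprints from fakes.
D = nn.Sequential(nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())
# Generator: learns to turn random noise into fingerprint-like images.
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_PIXELS), nn.Tanh())

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

for real in fingerprint_loader:  # assumed DataLoader of real print images
    real = real.view(real.size(0), -1)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # 1) Train D to score real prints high and generated prints low.
    fake = G(torch.randn(real.size(0), LATENT_DIM))
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G so that D scores its fakes as real.
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```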

After generating realistic fingerprints, the researchers tested them against commercial fingerprint-matching software from Innovatrics and Neurotechnology. Whenever the commercial matchers were fooled, the researchers tweaked their own software to create even more credible fakes. Like the turtle image, DeepMasterPrints contain so-called “noisy data” that fools sensors consistently, and the researchers could calibrate that noise by employing an evolutionary algorithm. Unlike the turtle image, however, this technique is a black box, meaning the researchers do not know exactly how it alters the input imagery.
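The paper’s name for this combination is Latent Variable Evolution: an evolutionary algorithm searches the generator’s latent space for the vectors whose output images score highest against the target matcher, which is queried purely as a black box. The paper uses CMA-ES; the sketch below substitutes a simpler elite-selection evolution strategy, and the generator and matcher wrapper are assumed placeholders:

```python
import torch

def evolve_masterprint(G, matcher_score, latent_dim=100,
                       generations=200, pop_size=32, elite=8, sigma=0.5):
    """Search the GAN's latent space for images that maximize the
    black-box fingerprint matcher's score. Simple elite-selection ES,
    standing in for the CMA-ES used in the paper."""
    mean = torch.zeros(latent_dim)
    for _ in range(generations):
        # Sample a population of latent vectors around the current mean.
        pop = mean + sigma * torch.randn(pop_size, latent_dim)
        with torch.no_grad():
            scores = torch.tensor(
                [matcher_score(G(z.unsqueeze(0))) for z in pop])
        # Keep the best-scoring candidates and recenter on their average.
        mean = pop[scores.topk(elite).indices].mean(dim=0)
    return G(mean.unsqueeze(0))  # best fake print found

# matcher_score: assumed wrapper returning the commercial matcher's
# similarity score for a generated image; the matcher is never inspected,
# only queried, which is why the attack works on it as a black box.
```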

Luckily, it is not all doom and gloom. Firstly, many fingerprint readers pair the print match with other checks, such as heat or pressure sensors, to detect real fingers. Secondly, biometric companies can raise the security level, at the cost of higher failure rates and more inconvenience; we all know the annoyance of a phone’s fingerprint sensor refusing a slightly wet finger. To keep systems secure, manufacturers need to stay up to date and patch vulnerabilities, because AI methods are advancing by the day.
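That “security level” is essentially where the match-score threshold sits: raising it turns away more fakes but also rejects more legitimate (for example, slightly wet) fingers. A small sketch with made-up score distributions shows the trade-off:

```python
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.80, 0.10, 10_000)   # hypothetical scores, real finger
impostor = rng.normal(0.40, 0.10, 10_000)  # hypothetical scores, fake print

for threshold in (0.5, 0.6, 0.7):
    far = (impostor >= threshold).mean()  # false accepts: fakes let in
    frr = (genuine < threshold).mean()    # false rejects: users locked out
    print(f"threshold {threshold:.1f}: FAR {far:.2%}, FRR {frr:.2%}")
# Raising the threshold drives FAR down but FRR (user annoyance) up.
```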

 

***Written by NEWSREP guest author Ahmed Hassan, CEO and co-founder of Grey Dynamics in London. He has worked in the security and intelligence industry in Africa for the last eight years and holds a master’s degree in Intelligence and Security Studies with a focus on machine learning and intelligence analysis.


***More:

https://thenewsrep.com/111506/is-artificial-intelligence-making-fingerprint-security-obsolete/

NewsRep (United States)

 



 