Gartner Forecasts Doubt in Biometric Solutions Amid Rising AI-Generated Deepfakes

In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient.

SMEStreet Edit Desk

By 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider such identity verification and authentication solutions to be reliable in isolation, according to Gartner, Inc.

 

“In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP Analyst at Gartner. “As a result, organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”

 

Identity verification and authentication processes using face biometrics today rely on presentation attack detection (PAD) to assess the user’s liveness. “Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.

 

According to Gartner research, presentation attacks are the most common attack vector, but injection attacks increased 200% in 2023. Preventing such attacks will require a combination of PAD, injection attack detection (IAD) and image inspection.
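The layered defense Gartner describes, in which no single check is trusted on its own, can be sketched as a simple decision pipeline. The detector names, score ranges and threshold below are illustrative assumptions for the sketch, not real vendor APIs or standards.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Illustrative scores from three hypothetical detectors (0.0 = benign, 1.0 = attack)."""
    pad_score: float               # presentation attack detection (liveness of the subject)
    iad_score: float               # injection attack detection (virtual cameras, replayed streams)
    image_inspection_score: float  # artifact analysis of the captured frame itself

def is_verification_trusted(s: VerificationSignals, threshold: float = 0.5) -> bool:
    """Reject if ANY layer flags an attack: a deepfake injected past the
    camera may fool liveness checks, so no single detector is treated
    as sufficient on its own."""
    return all(score < threshold for score in
               (s.pad_score, s.iad_score, s.image_inspection_score))

# A deepfake injection that fools liveness (low PAD score) is still caught by IAD.
print(is_verification_trusted(VerificationSignals(0.1, 0.9, 0.2)))  # False
print(is_verification_trusted(VerificationSignals(0.1, 0.1, 0.1)))  # True
```

The "reject on any flag" rule reflects the press release's point that PAD alone misses digital injection attacks; a real deployment would tune thresholds per detector rather than share one.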

 

Combine IAD and Image Inspection Tools to Mitigate Deepfake Threats

To help organizations protect themselves against AI-generated deepfakes beyond face biometrics, chief information security officers (CISOs) and risk management leaders must choose vendors that can demonstrate capabilities and a plan going beyond current standards, and that are monitoring, classifying and quantifying these new types of attacks.

 

“Organizations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection,” said Khan.

 

Once the strategy is defined and the baseline is set, CISOs and risk management leaders must include additional risk and recognition signals, such as device identification and behavioral analytics, to increase the chances of detecting attacks on their identity verification processes.
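As a rough illustration of layering such additional signals alongside the face check, one might fold device identification and behavioral analytics into a combined risk score. The signal names and weights below are invented for the sketch and are not a calibrated model.

```python
def takeover_risk(face_match: float, known_device: bool, behavior_anomaly: float) -> float:
    """Toy weighted risk score in [0, 1]; higher means riskier.
    Weights are arbitrary for illustration only."""
    risk = 0.5 * (1.0 - face_match)                # weak face match raises risk
    risk += 0.2 * (0.0 if known_device else 1.0)   # unrecognized device raises risk
    risk += 0.3 * behavior_anomaly                 # unusual typing/navigation raises risk
    return round(risk, 3)

# A strong face match on an unrecognized device with anomalous behavior
# still yields an elevated score, so the attack can be caught even if
# the deepfake passes the biometric check.
print(takeover_risk(face_match=0.95, known_device=False, behavior_anomaly=0.8))  # 0.465
```

The point of the sketch is that signals independent of the face image remain informative even when the biometric layer itself is fooled.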

 

Above all, security and risk management leaders responsible for identity and access management should take steps to mitigate the risks of AI-driven deepfake attacks by selecting technology that can prove genuine human presence and by implementing additional measures to prevent account takeover.

 

Gartner clients can learn more in “Predicts 2024: AI & Cybersecurity — Turning Disruption into an Opportunity.”

 

Learn how to build a strong plan for handling security incidents in the complimentary Gartner ebook 3 Must-Haves in Your Cybersecurity Incident Response Plan.

 

Gartner Security & Risk Management Summit 

Gartner analysts will present the latest research and advice for security and risk management leaders at the Gartner Security & Risk Management Summits, taking place February 12-13 in Dubai, February 26-27 in India, March 18-19 in Sydney, June 3-5 in National Harbor, July 24-26 in Tokyo and September 23-25 in London. Follow news and updates from the conferences on X using #GartnerSEC.

 
