Fighting Benefits Fraud Requires a Holistic Approach to ID Verification

Social safety net agencies need a layered security strategy that includes biometrics, digital IDs, artificial intelligence and strong data governance.

Getting benefit money to the people who need it while protecting those dollars from cybercriminals is one of the most confounding riddles facing state and local government leaders today.

The nation is still tallying the cost of identity fraud during the pandemic when government benefit programs dramatically expanded. A Government Accountability Office report released in September 2023 estimates state unemployment insurance programs alone paid out as much as $135 billion in fraudulent claims, although the GAO says the full extent of UI fraud may never be known.

How can agencies clamp down on this type of abuse? One solution is right in front of us, thanks to the evolution of facial-recognition technology.

“With face ID, it’s instantaneous to the point where you may not even realize that it’s unlocking your phone,” says Arun Vemury, senior advisor with the Biometric and Identity Technology Center at the U.S. Department of Homeland Security’s Science and Technology Directorate. Vemury notes the technology has grown beyond its original role in border-security and passport checks to become an effective solution for a broad range of identity-proofing needs.

For all its promise, however, facial recognition is just one weapon for fighting fraud and preserving the mission of social-welfare agencies. Moreover, facial-ID techniques create privacy and fairness issues that are still being resolved.

Securing the social safety net is thus an immense challenge, requiring a holistic and multi-layered strategy that combines:
  • Biometrics like fingerprints, facial recognition and voice sampling to establish physical proof of a person’s identity.
  • Digital IDs like mobile driver’s licenses to provide standardized, trustworthy verification for those applying for government benefits.
  • Artificial intelligence and machine learning (AI/ML) technologies, including generative AI (GenAI), to scan massive datasets and detect anomalies pointing to criminals and nation-state actors while confirming legitimate identities.
  • Governance to protect the privacy of constituent data and ensure fair, equitable treatment of people who depend on safety-net programs.
  • Best-practices guidance from recognized cybersecurity authorities.
Unfortunately, state and local safety-net agencies are often ill-equipped for today’s identity challenges. “A lot of the existing legacy tools just don’t work anymore,” said identity expert Jeremy Grant in a recent Government Technology webinar. Grant is coordinator of the Better Identity Coalition and led online identity initiatives for the federal government during the Obama administration. Attackers can easily circumvent knowledge-based tactics that use security questions to confirm a user’s identity, he says.

Identity proofing and validation have also outgrown their home base in agencies issuing driver’s licenses and birth certificates. “We’re seeing a lot of other state agencies being thrust into the identity business that don’t really have a history of trying to figure out who’s who,” Grant said in the webinar. Grant also penned an August 2023 op-ed in The Hill chiding the White House and congressional leaders for inaction on critical digital-ID issues. “Choosing to do nothing is also an active policy choice,” he wrote. “It’s a decision to embrace the status quo, to do nothing as other forces change things.”


COVID-19 was one of those forces of change — and it inflicted expensive lessons.

Federal and state governments spent trillions of dollars to strengthen the safety net for Americans affected by the pandemic. Waves of fraud ensued because identity validation was weak or nonexistent. “A lot of the narrative today is that identity verification technologies failed,” says Ryan Galluzzo, digital identity program lead for the Applied Cybersecurity Division at the National Institute of Standards and Technology (NIST). “Sometimes that was the case, but a lot of times there was just nothing being put in place at all.”

Everybody from local fraudsters to foreign criminal gangs joined the feeding frenzy. And legitimate recipients of benefits — who often desperately needed help — faced delays and frustrations. The reverberations continue four years later. A quick visit to the U.S. Justice Department’s COVID-19 webpage reveals one press release after another announcing convictions for benefits fraud. A 45-year-old Tennessee woman, for instance, faces an 18-month prison sentence for using other people’s personally identifiable information (PII) to make mass online applications for unemployment benefits. She was one of four people in on the scheme, which hauled in more than $550,000.

How did things go so wrong? Short-term haste was a factor, but long-term identity trends also played a role.

“The problem is our identities are no longer static,” says Deborah Snyder, senior fellow with the Center for Digital Government and former New York state chief information security officer. Details that define us, like jobs or names or addresses, can change quickly, leaving government databases out of date.

Crooks with access to fresh data can easily defeat legacy verification programs. “Some even used AI/ML to perform recon, identify ripe targets, and craft more creative and hard-to-detect attacks,” Snyder says.


Synthetic IDs and generative AI (GenAI) are shaping up as twin engines of digital fraud. “Synthetic identities combine real and fake information,” Snyder says. “Familial fraud” using deceased people’s Social Security numbers, for instance, is an emerging synthetic ID tactic. Fraudsters harvest this kind of genuine information and bolster it with fabricated data or documents.

GenAI makes this threat even more worrisome. “We’ve seen folks using GenAI and neural networks to produce really, really good-looking fake documents that in some cases have been able to defeat document validation systems,” Galluzzo says. GenAI can also generate phishing messages that look authentic enough to overcome a user’s immediate suspicions.

Public welfare agencies must guard against a tactic called account takeover, where adversaries sneak into an established account, revise user data and give themselves full access to the person’s benefits.

Galluzzo also cites the emergence of “morph” attacks on facial-recognition systems. “This is where you take the image of a legitimate user and meld it using software with an illegitimate user,” he says. An adversary would use a morphed image to enroll in a benefit program. While the image has enough data to generate a thumbs-up match to a legitimate user, it can also trick the system into approving an illegitimate user.
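In its simplest form, a morph is just a pixel-level blend of two aligned face images; real morphing tools also warp facial landmarks, but even a plain weighted average shows how one enrollment photo can sit “between” two people. A hypothetical sketch (the tiny grayscale “images” are made up for illustration):

```python
def morph_images(face_a, face_b, alpha=0.5):
    """Naive pixel-level morph: a weighted average of two aligned grayscale
    images, represented as lists of pixel rows. alpha controls how much of
    each face ends up in the blended result."""
    if len(face_a) != len(face_b) or len(face_a[0]) != len(face_b[0]):
        raise ValueError("images must be aligned to the same dimensions")
    return [
        [round(alpha * pa + (1 - alpha) * pb) for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(face_a, face_b)
    ]

# Toy 2x2 "faces": at alpha=0.5 every pixel lands midway between the inputs,
# which is why a morph can score as a plausible match to both source identities
a = [[0, 0], [0, 0]]
b = [[200, 200], [200, 200]]
print(morph_images(a, b))  # [[100, 100], [100, 100]]
```

Detecting morphs is an active research area; defenses include analyzing images for blending artifacts and comparing the enrollment photo against a live capture of the applicant.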

The news from the threat front is not all frightening, however. “We have an increasingly diverse group of people attempting to tackle these challenges,” Galluzzo says. “And we have our hands on many of the same technologies the bad guys do. This will give us the ability to more rapidly adapt to emerging threats and deploy new techniques and technologies to mitigate the latest attacks.”


The realities of identity and anti-fraud technologies are shifting. “Password problems are getting easier to solve,” Grant said in the Government Technology webinar. “But identity proofing is getting harder.”

The easier part reflects the rise of phishing-resistant credentials, which use cryptography and automation to confirm a user’s identity without requiring a conventional typed password. Combining multifactor authentication (MFA) with phishing-resistant credentials goes a long way toward thwarting attackers who steal login credentials. For all the speculation about the extinction of passwords, they remain an important verification tool in many use cases.
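Many MFA authenticator apps, for example, implement the time-based one-time password (TOTP) algorithm from RFC 6238: the server and the user’s device share a secret and independently derive the same short-lived code, so a stolen password alone is not enough. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password (RFC 6238). Both sides hash a shared
    secret with the current 30-second time window; the resulting code
    expires almost immediately, blunting credential-theft attacks."""
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", digits=8, now=59))  # 94287082
```

On the server side, the submitted code should be compared with a constant-time check such as `hmac.compare_digest` to avoid timing leaks.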

The harder part is identity proofing, which establishes a trustworthy link between an individual human and documentation confirming they are who they say they are. GenAI and other AI/ML apps raise the stakes for technologists working to strengthen the proofing side of the identity equation. Here’s a quick look at the technologies that are becoming essential for authenticating and verifying digital identities.

Biometrics. Vemury anticipates broader adoption of facial-, voice- and iris-recognition use cases. He concedes, however, that facial ID has limitations because attackers can generate plausible replicas of people’s faces and other biomarkers. NIST is vetting vendors’ facial-recognition algorithms to help technology buyers make more informed judgments about these tools. “We can evaluate hundreds of algorithms per year and provide objective data on how these things work,” he says.
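Under the hood, a recognition algorithm typically reduces a face to a numeric embedding and compares embeddings with a similarity score against a tuned threshold; the error rates NIST measures are the rates at which that threshold decision is wrong. A hypothetical sketch (the embeddings and threshold are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def is_match(enrolled, probe, threshold=0.8):
    """Match decision: accept if similarity clears the threshold. Raising the
    threshold cuts false accepts but rejects more legitimate users."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = [0.9, 0.1, 0.4]      # embedding captured at enrollment
probe_same = [0.85, 0.15, 0.42]  # same person, new photo
probe_other = [0.1, 0.9, 0.2]    # different person
print(is_match(enrolled, probe_same), is_match(enrolled, probe_other))  # True False
```

The threshold embodies a policy trade-off between false accepts and false rejects, which is why objective evaluation data of the kind NIST publishes matters to buyers.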

Digital driver’s licenses/mobile IDs. A standardized digital equivalent of a driver’s license or similar ID credential would be an immense boon to safety net agencies. Some states are making progress along these lines, though much more remains to be done. Vemury says states need to update their technology infrastructure, ensure interoperability, and develop accurate, cost-effective solutions that are secure and difficult to duplicate.

Blockchain. Databases built with blockchain-style technologies can create encrypted, immutable records that are hard to fake. “This can put individuals in greater control of their own identities and data,” Snyder says. Using blockchain for identity verification, however, has limitations. “If you need decentralization, blockchains make a lot of sense,” Vemury says. The challenge, especially in the United States, is that centralized authorities like state and local governments are widely considered trustworthy for ID verification. If these agencies already have trusted data and verification processes, then distributed blockchains present a less-attractive alternative, he says.
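The immutability property comes from hash chaining: each record stores a hash of its own content plus the previous record’s hash, so altering any earlier entry invalidates everything after it. A minimal sketch of the idea (not a full blockchain — no consensus or distribution, and the record fields are invented):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first record

def append_record(chain, record):
    """Append an identity record linked to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else GENESIS
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
    return True

chain = []
append_record(chain, {"id": "A123", "event": "credential issued"})
append_record(chain, {"id": "A123", "event": "address updated"})
print(verify_chain(chain))  # True
chain[0]["record"]["event"] = "tampered"
print(verify_chain(chain))  # False — the altered record invalidates the chain
```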

AI/ML and GenAI. Learning algorithms that spot anomalies that humans could never catch have long been the core of fighting fraud in the private sector. While GenAI is implicated in a broad range of cyber threats, its efficacy against fraud gets less attention. For starters, agency leaders can ask an app like ChatGPT to summarize identity-proofing and anti-fraud tactics. Also, security companies are working on training large language models (LLMs) to scan email addresses for evidence that they were created by malware bots, which can help flag malicious activity.
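The core idea behind automated anomaly detection can be shown with a simple statistical score; production fraud systems use learned models over far richer features, but the principle is the same — flag observations that sit far outside the normal pattern. A toy sketch with made-up application counts:

```python
import statistics

def flag_anomalies(values, z_threshold=2.5):
    """Flag observations whose distance from the mean, measured in standard
    deviations (a z-score), exceeds the threshold. A simple stand-in for
    the anomaly scoring fraud teams automate at scale."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Hypothetical daily benefit applications from one IP address; the spike
# on the last day is the kind of signal a human reviewer would never scan for
daily_claims = [2, 3, 1, 2, 4, 2, 3, 250]
print(flag_anomalies(daily_claims))  # [7]
```

Flagged indexes would feed a review queue rather than an automatic denial, keeping a human in the loop for consequential decisions.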

Identity platforms. Identity-verification vendors are pulling all these capabilities together into a centralized application to give their customers a comprehensive toolbox against fraud. “There isn’t a silver bullet, but a few things taken together can go quite a long way,” Grant said in the Government Technology webinar.


Organizations like NIST and the Better Identity Coalition are crafting in-depth guidance (see sidebar) on identity validation and anti-fraud efforts. Key best practices include:

Managing risks. How can agency leaders prioritize risk management? “The answer is being able to understand the actual risks related to your applications and then determine how to apply controls from there,” Galluzzo says. “Think about the application from start to finish — the entire business process — and think about the risks and how they might be mitigated across the entire interaction with the end user.”

Optimizing user experience. Test applications to make sure they are not imposing needless friction during proofing and verification processes.

Tracking progress. Use data to discover how well your applications detect fraud and flag adversaries that are trying to exploit vulnerabilities.

Vetting vendors. Consult with technology vendors across the identity and anti-fraud spectrum. Organizations like NIST offer valuable guidance. Pursue public-private partnerships to gain insights and control costs.

Validating continuously. Systems should always be probing for evidence of malicious behavior and adapting in real time to changing threats.

Upgrading passwords. Phishing-resistant credentials and multifactor authentication can provide a solid first line of defense in some use cases.


Effective identity verification technology delivers fast, accurate approvals to people applying for government benefits. The tricky part is doing all this while protecting data and avoiding privacy intrusions.

Agencies need to guard against implementing controls for their own sake without confirming their impact. It’s too easy, for instance, to apply a control in one place while adversaries are doing the most damage somewhere else. “You’ve just applied a friction point to one of your end users that might be impacting their ability to get life-or-death services for the sake of applying a control,” Galluzzo says.

“As you’re starting to think about, ‘Hey, we want to do biometric verification for all our users,’ make sure you’re having a conversation with your privacy and civil liberties team,” he adds. “There are a lot of different pieces that go into making these decisions. You need to do it with a whole view of the impact to your system and the end user.”

Snyder concurs. “I see it as a right-sizing exercise,” she says. Transactions with little risk of data or monetary loss or minimal traces of malicious activity would not necessarily require sophisticated approval processes. Applying tight security to these transactions adds unnecessary friction for users. By contrast, interactions approving financial payouts or involving sensitive data require more stringent verification.
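Snyder’s right-sizing exercise can be expressed as a tiered policy. The sketch below is purely hypothetical — the tier names and checks are invented for illustration, loosely inspired by the assurance-level approach in NIST’s digital identity guidance:

```python
def required_checks(risk: str) -> list[str]:
    """Map a transaction's risk tier to the verification steps it warrants.
    Low-risk interactions stay low-friction; high-risk ones (payouts,
    sensitive data) justify stronger proofing."""
    tiers = {
        "low": ["password_or_passkey"],
        "medium": ["password_or_passkey", "mfa"],
        "high": ["password_or_passkey", "mfa", "document_check", "biometric_match"],
    }
    if risk not in tiers:
        raise ValueError(f"unknown risk tier: {risk}")
    return tiers[risk]

print(required_checks("low"))   # e.g., updating a mailing preference
print(required_checks("high"))  # e.g., approving a financial payout
```

Encoding the policy this way also makes it auditable: privacy and civil liberties reviewers can see exactly which checks apply to which interactions.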

Proofing and verification must provide the appropriate level of security while ensuring the user experience leaves nobody behind. “Addressing accessibility considerations during design helps make sure everything’s accessible to a diverse population,” Snyder says. “Your processes must be inclusive not only for those who have a long credit history but for everyone.”