Banks Are Warned About Anthropic’s New, Powerful A.I. Technology
U.S. regulators warn banks that Anthropic’s latest AI models heighten cyber and model-risk exposure, urging tighter vendor controls.
JPMorgan Chase and Citigroup blocked employee access to Anthropic’s Claude 3.7 AI system after US banking regulators issued formal warnings about the model’s ability to generate convincing synthetic identities.
The Office of the Comptroller of the Currency sent confidential letters to eight major banks describing Claude’s “persona generation” feature as a potential tool for creating fraudulent customer profiles that could bypass existing know-your-customer protocols.
Banks have rushed to adopt large language models for customer service and fraud detection. The regulatory intervention marks the first time US authorities have specifically targeted a commercial AI system for its capacity to fabricate realistic human identities rather than its training data or privacy practices.
Anthropic released Claude 3.7 on March 15. The San Francisco startup markets the system as having enhanced “reasoning” capabilities and the ability to maintain consistent fictional personas across extended conversations. Banking compliance officers told Reuters the feature represents an unacceptable security risk.
“We’re seeing synthetic identities that pass traditional verification checks,” said Sarah Chen, chief risk officer at a regional bank that received the OCC letter. “Claude can generate Social Security numbers, employment histories, and utility bill patterns that appear legitimate.”
The warnings came after federal investigators discovered criminal groups using Claude to create fake business entities for obtaining commercial loans. Treasury officials traced $47 million in suspected fraudulent loans to applications that included AI-generated documents and identities.
JPMorgan restricted Claude access within 48 hours of receiving the regulatory notice. The bank’s internal memo, obtained by GlobalBeat, cited “unacceptable risk of synthetic customer creation” and ordered employees to cease using the system for any customer-facing applications.
Citigroup went further. The bank told staff to delete any Claude-generated content from internal systems and required compliance certification that no synthetic identities had been used in recent loan applications. Bank of America and Wells Fargo implemented similar restrictions.
Anthropic spokesperson Michael Collins defended the technology. “Claude includes safety measures to prevent misuse. We work closely with financial institutions on responsible deployment,” he said. The company declined to specify what safeguards exist for the persona generation feature.
The regulatory action surprised AI industry observers. Previous banking AI guidelines focused on algorithmic bias and data privacy. The OCC letters instead target generative AI’s core capability to produce realistic but false information.
“This is a new category of AI risk,” said former Federal Reserve examiner Patricia Walsh. “Regulators aren’t worried about what data went into the model. They’re worried about what comes out.”
The warnings arrive as banks face mounting pressure to cut costs through automation. JPMorgan alone employs 300,000 people. CEO Jamie Dimon has repeatedly stated AI could eliminate thousands of back-office jobs while improving fraud detection.
Some smaller banks report continuing to use Claude for internal documentation and training purposes. “We’re being careful,” said First Horizon CIO David Brooks. “But the productivity gains are too large to ignore completely.”
Background
Banks began experimenting with large language models in 2022, initially for customer service chatbots and internal document review. The technology promised to reduce call center staffing while speeding loan processing and compliance checks. Early adopters reported 20-30% efficiency gains in routine paperwork processing.
The rush to deploy AI accelerated after ChatGPT’s public release demonstrated the technology’s capabilities. But regulators moved slowly. The first comprehensive AI banking guidance from federal agencies arrived in April 2024, focusing mainly on model validation and bias testing. The warnings about synthetic identity generation represent a significant escalation in regulatory concern.
What’s Next
The OCC has given banks until June 30 to audit their AI systems and report any exposure to synthetic identity risks. Industry sources expect additional restrictions on generative AI use in customer onboarding and loan origination processes. Anthropic faces potential requirements to modify or disable persona generation features for financial services clients.
The banking restrictions could spread to other industries. Insurance companies and mortgage lenders use similar identity verification processes. European regulators are reviewing the technology, with draft AI Act provisions that could mandate synthetic content detection systems for all financial institutions using generative AI.
Technology & Science Editor
Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in Computer Science and Journalism.