Officials from the Bank of England, the Financial Conduct Authority (FCA), and HM Treasury are working alongside the National Cyber Security Centre (NCSC) to evaluate whether the new model could expose vulnerabilities in critical financial IT infrastructure, the report said, citing people familiar with the matter.
The discussions come amid growing global concern over how rapidly advancing AI systems may interact with sensitive financial networks and cybersecurity frameworks.
Industry-wide briefing expected
Major UK banks, insurers, and stock exchange operators are expected to receive a formal briefing from regulators within the next fortnight, focusing on the potential cyber risks associated with the model, referred to in the report as Claude Mythos Preview.
While details of the model’s deployment remain limited, the consultations reflect heightened caution among regulators as AI tools become increasingly capable of identifying system weaknesses and interacting with complex digital environments.
The Bank of England, the FCA, and the NCSC all declined to comment on the discussions. HM Treasury also declined to respond, and Anthropic did not immediately provide a statement when contacted.
Broader international scrutiny of AI cyber risks
The UK review follows similar attention in the United States, where reports indicate that Treasury Secretary Scott Bessent has convened meetings with major Wall Street banks to assess potential cybersecurity implications linked to the same AI model.
According to earlier disclosures from Anthropic, the model is being deployed under a controlled initiative known as “Project Glasswing,” which allows select organisations to test its capabilities in defensive cybersecurity scenarios.
The company has previously stated that the system has already identified thousands of vulnerabilities across operating systems, browsers, and widely used software platforms—raising both interest and concern among security experts about its dual-use potential.
Regulatory caution grows alongside AI capabilities
As AI systems become more advanced in detecting software flaws and system weaknesses, regulators are increasingly weighing the benefits of improved cybersecurity against the risks of potential misuse.
The consultations underscore an effort by financial regulators and cybersecurity agencies to stay ahead of threats posed by next-generation AI tools, particularly those with capabilities that could be turned against critical financial systems.
