The European Commission has confirmed ongoing discussions with leading U.S. artificial intelligence companies OpenAI and Anthropic as European regulators continue efforts to strengthen oversight and cooperation in the rapidly evolving AI sector.

Speaking during a daily press briefing on Monday, European Commission spokesperson Thomas Regnier said the Commission has been actively engaging both companies, although discussions with each firm are currently at different stages.

According to Regnier, the Commission welcomed OpenAI’s proactive approach toward regulatory engagement, particularly the company’s willingness to provide access to its latest AI model as part of ongoing discussions with European authorities.

“With one (OpenAI), you have a company proactively offering to give access to the company,” Regnier said.

He explained that the Commission views such cooperation as an important step in improving transparency, regulatory understanding, and responsible development of advanced AI systems.

The spokesperson also disclosed that the Commission has already held four or five meetings with Anthropic as part of broader exchanges on artificial intelligence governance and regulatory compliance.

However, he clarified that discussions with Anthropic have not yet reached the stage where access to the company’s AI models is being considered.

“With the other one (Anthropic), we have good exchanges though we're not at a stage where we can speculate on potential access or not,” Regnier added.

The talks come as European regulators intensify scrutiny of advanced AI technologies following the entry into force of stricter rules under the European Union’s AI Act, the bloc’s evolving regulatory framework for artificial intelligence.

The EU has been positioning itself as one of the world’s leading regulators of artificial intelligence, focusing heavily on transparency, safety standards, accountability, and the management of risks associated with powerful generative AI systems.

OpenAI, the developer behind widely used AI products including ChatGPT, has increasingly engaged with governments and regulators globally as concerns grow around AI safety, data governance, misinformation, copyright issues, and competition.

Anthropic, another major AI company known for developing the Claude family of AI models, has also emerged as a key player in global conversations around responsible AI development and safety-focused research.

The European Commission has not disclosed further details regarding the nature of the discussions, timelines, or possible agreements with either company. However, analysts say the engagements reflect Europe’s growing effort to balance innovation with regulatory control as competition in the AI industry accelerates globally.