Artificial intelligence company OpenAI has banned multiple ChatGPT accounts suspected of having ties to Chinese government entities and Russian-speaking criminal groups, citing violations of its national security policies.

In its latest public threat report released on Tuesday, the San Francisco-based firm said some Chinese-linked users had asked ChatGPT to draft proposals for monitoring social media conversations and to help develop “listening” tools, activities OpenAI said breached its safety and security guidelines.

The company also disclosed that several Chinese-language accounts were removed for using ChatGPT to support phishing and malware campaigns, and for requesting the chatbot’s help in researching automation methods related to China’s DeepSeek AI system.

OpenAI said it took similar action against Russian-affiliated users who allegedly leveraged the chatbot to assist in malware development.

The company emphasized that while it regularly detects such misuse attempts, its models do not give threat actors new offensive capabilities. “We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities,” the report stated.

Since beginning its public threat monitoring program in February 2024, OpenAI has disrupted and reported more than 40 malicious networks, many of which attempted to exploit generative AI for cyber or disinformation purposes.

The latest findings come amid heightened U.S.–China tensions over AI governance and national security, as both countries race to shape the future of the fast-evolving technology.

OpenAI, backed by Microsoft, now reports more than 800 million weekly ChatGPT users and recently reached a $500 billion valuation following a secondary share sale, making it the world’s most valuable startup.

The Chinese embassy in Washington has yet to comment on the report.