In a blog post published Monday, Anthropic said the companies — DeepSeek, Moonshot AI and MiniMax — generated more than 16 million interactions with Claude through roughly 24,000 fake accounts. According to Anthropic, the activity violated its terms of service and regional access restrictions.
The company said the firms employed a technique known as “distillation,” a process in which a smaller or less capable model is trained on the outputs of a more advanced system. While distillation can be a legitimate AI development method, Anthropic alleges that in this case it was used to extract capabilities from Claude without authorization.
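For readers unfamiliar with the technique, the core idea of distillation is simple: a "student" model is trained to reproduce the outputs of a "teacher" model rather than learning from labeled data directly. The toy sketch below illustrates this with a stand-in linear teacher and student in plain NumPy; the models, sizes, and training loop here are illustrative assumptions, not a depiction of any system named in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear map standing in for a large
# model's output logits. In alleged cases like those described above,
# the teacher's outputs would be collected via API queries.
W_teacher = rng.normal(size=(4, 3))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

X = rng.normal(size=(256, 4))        # stand-in for a batch of prompts
targets = softmax(X @ W_teacher)     # teacher's soft labels

# "Student": a model trained only on the teacher's outputs, never on
# the original training data. Initialized to zero weights.
W_student = np.zeros((4, 3))

lr = 0.5
for _ in range(500):
    probs = softmax(X @ W_student)
    # Gradient of cross-entropy between student and teacher distributions
    grad = X.T @ (probs - targets) / len(X)
    W_student -= lr * grad

# After training, the student's output distribution closely tracks
# the teacher's, despite never seeing the teacher's weights or data.
final_gap = np.abs(softmax(X @ W_student) - targets).mean()
```

In practice, distillation against a commercial model works through its public API at scale, which is why Anthropic's allegations center on high-volume automated querying rather than any breach of its systems.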
“These campaigns are growing in intensity and sophistication,” Anthropic wrote. “The window to act is narrow, and the threat extends beyond any single company or region.”
National Security Concerns
Anthropic warned that models created through illicit distillation may lack important safety guardrails and could pose national security risks. If such systems are later open-sourced, the company argued, the risks could multiply as advanced capabilities spread beyond regulatory oversight.
The allegations come shortly after OpenAI issued a memo to U.S. lawmakers warning that Chinese firms were seeking to replicate leading American AI systems, including those behind ChatGPT, for use in their own training efforts.
Anthropic also used its blog post to advocate for stricter export controls on advanced semiconductor chips, arguing that restrictions on chip access could limit both direct large-scale model training and the effectiveness of improper distillation efforts.
Targeted Capabilities
According to Anthropic, each company focused on distinct areas of Claude’s functionality. DeepSeek allegedly targeted reasoning capabilities across a broad range of tasks, including the development of censorship-resistant responses to sensitive policy queries. Moonshot AI was said to be interested in agentic reasoning, tool use, coding and data analysis.
MiniMax, meanwhile, focused on agentic coding, orchestration and tool use. Anthropic said it detected MiniMax’s campaign while it was still active and before the company released the model it was training.
“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” the blog post stated.
DeepSeek, Moonshot AI and MiniMax did not immediately respond to requests for comment.
Anthropic, which recently raised $30 billion in a funding round that valued the company at $380 billion, is among a small group of firms competing at the frontier of generative AI. The dispute underscores mounting geopolitical and commercial tensions as companies and governments vie for leadership in rapidly advancing AI technologies.
