Internal documents filed in a New Mexico state court case and made public Monday indicate that Meta CEO Mark Zuckerberg approved allowing minors access to AI chatbot companions that safety staff warned could facilitate sexual interactions.

The lawsuit, brought by New Mexico Attorney General Raul Torrez and set for trial next month, alleges that Meta “failed to stem the tide of damaging sexual material and sexual propositions delivered to children” on Facebook and Instagram.

Court Filing Reveals Internal Disputes Over AI Chatbots

The documents include internal emails and messages obtained through legal discovery. The state claims they show that Meta, under Zuckerberg’s direction, rejected safety staff recommendations and did not implement adequate safeguards to prevent children from being exposed to sexually exploitative conversations with AI chatbots.

Safety staff reportedly raised concerns that Meta was building chatbots designed for companionship, including sexual and romantic interactions, before the chatbots were released in early 2024. The filing did not include any messages authored by Zuckerberg himself.

Meta has denied the allegations, with spokesperson Andy Stone describing the state’s portrayal as inaccurate and based on selective information.

Concerns Over Adult-Minor Interactions

Messages highlighted in the filing show safety staff expressing particular concern about the chatbots being used for romantic scenarios involving minors, referred to as “U18s.”

In January 2024, Ravi Sinha, Meta’s head of child safety policy, wrote:

“I don’t believe that creating and marketing a product that creates U18 romantic AI’s for adults is advisable or defensible.”

Meta’s global safety head, Antigone Davis, agreed that adults should be blocked from creating underage romantic companions because it “sexualizes minors.”

A February 2024 message reportedly relayed that Zuckerberg believed AI companions should be restricted from sexually explicit conversations with younger teens and that adults should not be able to interact with “U18 AIs for romance purposes.”

However, a meeting summary dated February 20, 2024, suggested Zuckerberg wanted the company to adopt a less restrictive approach. It said he preferred framing the issue around “general principles of choice and non-censorship” and that he wanted to “allow adults to engage in racier conversation on topics like sex.”

Stone, the Meta spokesperson, argued that the documents show Zuckerberg directed that explicit AI should not be available to younger users and that adults should not be able to create under-18 AI companions for romance.

Parental Controls and “Romance AI”

The filing also includes messages from March 2024 indicating that Zuckerberg rejected parental controls for the chatbots. Employees reportedly noted they were working on “Romance AI chatbots” that would be available to users under 18.

One employee wrote:

“We pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark decision.”

Nick Clegg, Meta’s former head of global policy, also criticized the company’s approach in an email included in the documents, warning that sexual interactions could become the dominant use case for teenage users and create significant backlash.

Public Backlash and Policy Changes

Meta’s AI chatbot policies drew widespread attention and criticism, particularly after a Wall Street Journal report in April 2025 found that the chatbots included sexualized underage characters and engaged in sexual roleplay across age groups.

In August 2025, Reuters reported that Meta’s guidelines had stated it was “acceptable to engage a child in conversations that are romantic or sensual.” Meta said it was changing its policies and claimed the internal document was in error.

Last week, Meta announced it had removed teen access to AI companions entirely while it develops a new version of the chatbots.