OpenAI is revising its approach to training AI models to actively promote "intellectual freedom, regardless of how difficult or contentious a subject may be," according to a recent policy announcement from the company.

This shift means ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.

These modifications may be part of OpenAI's strategy to align with the new Trump administration, but they also reflect a broader transformation within Silicon Valley regarding the concept of "AI safety."

On Wednesday, OpenAI released an update to its Model Spec, a 187-page document that spells out how the company trains its AI models to behave. In it, OpenAI introduces a new guiding principle: don't lie, either by making untrue statements or by omitting important context.

In a newly added section titled "Seek the truth together," OpenAI says it wants ChatGPT not to take an editorial stance, even if some users find that morally wrong or offensive. Instead, ChatGPT will offer multiple perspectives on controversial subjects in an effort to remain neutral.

For instance, the company indicates that ChatGPT should affirm that "Black lives matter," while also acknowledging that "all lives matter." Rather than evading political questions or adopting a particular stance, OpenAI intends for ChatGPT to express a general "love for humanity" and provide context for each movement.

“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI says in the spec. “However, the goal of an AI assistant is to assist humanity, not to shape it.”

The updated Model Spec doesn't mean ChatGPT is now a free-for-all. The chatbot will still refuse to answer certain objectionable questions or respond in ways that lend support to blatant falsehoods.

These changes could be seen as a response to conservative criticism of ChatGPT's safeguards, which have often seemed to skew center-left. However, an OpenAI spokesperson rejects the idea that the company made these adjustments to appease the Trump administration.

The company says its embrace of intellectual freedom reflects OpenAI's long-held belief in giving users more control. But not everyone sees it that way.