OpenAI is introducing new measures to give content rights holders more control over how their characters are used within Sora, its recently launched AI video-generation platform. The company also plans to share revenue with those who allow their intellectual property to appear in user-generated videos.

Chief Executive Officer Sam Altman announced the initiative on Friday in a blog post, stating that OpenAI will offer rights holders “more granular control over generation of characters.” This means television and film studios, as well as other copyright owners, will be able to block or approve the use of their creations in AI-generated videos.

The move comes amid mounting scrutiny over how artificial intelligence tools are trained and how they handle copyrighted materials. As AI content creation accelerates, companies are under pressure to balance technological innovation with fair compensation and legal protection for creators.

Sora, launched this week as a standalone app in the United States and Canada, lets users generate and share 10-second AI videos. The app quickly gained traction by allowing users to remix and reinterpret scenes and characters, a feature that has sparked copyright concerns, particularly within the entertainment industry.

According to reports, at least one major Hollywood studio — Disney — has already opted out of having its material appear in Sora. The company’s decision underscores the tension between AI developers and traditional media producers over ownership and creative rights.

Altman revealed that OpenAI intends to roll out a revenue-sharing model for copyright owners who choose to make their characters available in Sora. He noted that the framework is still being refined and will require “some trial and error” before a consistent approach is adopted across OpenAI’s products.

The goal, he explained, is to support a sustainable ecosystem where rights holders benefit financially as users create increasingly diverse and niche video content through the platform.

Backed by Microsoft, OpenAI has been expanding its reach in multimodal AI, which integrates text, images, and video. Its Sora model, first introduced to the public last year, competes with text-to-video offerings from Meta and Google. Meta recently unveiled Vibes, a social platform for sharing short AI-generated clips, underscoring the intensifying race to define the future of creative media powered by artificial intelligence.