Major technology firms are making a concerted effort to convince the European Union to adopt a lenient regulatory framework for artificial intelligence, aiming to avoid potential fines amounting to billions of dollars.

In May, EU legislators reached an agreement on the AI Act, marking the first comprehensive regulatory framework for AI technology after extensive negotiations among various political factions.

However, the specifics of how regulations will apply to "general purpose" AI (GPAI) systems, such as OpenAI’s ChatGPT, remain uncertain until the associated codes of practice are finalized. This uncertainty raises questions about the potential for copyright litigation and substantial financial penalties that companies might encounter.

The EU has called upon businesses, scholars, and other stakeholders to assist in drafting the code of practice, receiving nearly 1,000 submissions—an unusually high figure, according to a source who requested anonymity due to the sensitive nature of the information.

While the AI code of practice will not be legally binding when it takes effect late next year, it will serve as a checklist that companies can use to demonstrate their adherence to the law. Firms that claim compliance while disregarding the code may face legal challenges.

"The code of practice is essential. If we get it right, we can continue to foster innovation," stated Boniface de Champris, a senior policy manager at the trade organization CCIA Europe, which represents members like Amazon, Google, and Meta.

"If it is overly restrictive or too detailed, it could pose significant challenges," he cautioned.

Information Extraction

Companies like Stability AI and OpenAI have come under scrutiny regarding the legality of utilizing popular books and photo collections for training their models without obtaining permission from the original creators, raising potential copyright infringement issues.

According to the AI Act, these companies will be required to provide "detailed summaries" of the datasets employed in training their models. In principle, a content creator who finds their work has been incorporated into an AI model's training data may pursue compensation, although this matter is currently being adjudicated in the courts.

Some industry leaders argue that the summaries should include minimal information to safeguard trade secrets, while others contend that copyright holders deserve to be informed if their content has been utilized without authorization.

OpenAI, which has faced backlash over its lack of transparency regarding the data used to train its models, has applied to participate in the relevant working groups, according to a source who requested anonymity.

Google has also filed an application, a spokesperson told Reuters. Amazon, meanwhile, said it intends to "contribute our expertise and ensure the code of practice is effective."

Maximilian Gahntz, the AI policy lead at the Mozilla Foundation, which is known for the Firefox web browser, voiced concerns that companies are actively seeking to evade transparency.

"The AI Act offers a significant opportunity to shed light on this vital issue and clarify at least some aspects of the opaque processes involved," he stated.

Large Enterprises and Their Focus Areas

The business community has expressed concerns regarding the EU's focus on technology regulation at the expense of innovation, prompting those responsible for drafting the code of practice to seek a balanced approach.

Recently, Mario Draghi, the former head of the European Central Bank, emphasized the necessity for the EU to adopt a more coordinated industrial policy, expedite decision-making processes, and secure substantial investments to remain competitive with China and the United States.

Thierry Breton, a vocal champion of EU regulation and critic of non-compliant tech companies, resigned as European Commissioner for the Internal Market this week after clashing with Ursula von der Leyen, the president of the EU's executive branch.

In light of increasing protectionist sentiments within the EU, domestic tech firms are advocating for exemptions in the AI Act that would support emerging European businesses.

Maxime Ricard, policy manager at Allied for Startups, a coalition of trade organizations representing smaller tech enterprises, stated, "We have emphasized that these obligations should be feasible and, where possible, tailored to startups."

Once the code is released early next year, tech companies will have until August 2025 to bring their compliance efforts into line with its requirements.

Various non-profit organizations, such as Access Now, the Future of Life Institute, and Mozilla, have also expressed interest in contributing to the drafting of the code.

Gahntz remarked, "As we move into a phase where the AI Act's requirements are detailed further, we must ensure that major AI companies do not dilute essential transparency provisions."