European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity.
Negotiators from the European Parliament and the bloc’s 27
member countries overcame big differences on controversial points including
generative AI and police use of face recognition surveillance to sign a
tentative political agreement for the Artificial Intelligence Act.
“Deal!” tweeted European Commissioner Thierry Breton just
before midnight. “The EU becomes the very first continent to set clear rules
for the use of AI.”
The result came after marathon closed-door talks this week,
with the initial session lasting 22 hours before a second round kicked off
Friday morning.
Officials were under the gun to secure a political victory
for the flagship legislation. Civil society groups, however, gave it a cool
reception as they wait for technical details that will need to be ironed out in
the coming weeks. They said the deal didn’t go far enough in protecting people
from harm caused by AI systems.
“Today’s political deal marks the beginning of important and
necessary technical work on crucial details of the AI Act, which are still
missing,” said Daniel Friedlaender, head of the European office of the Computer
and Communications Industry Association, a tech industry lobby group.
The EU took an early lead in the global race to draw up AI
guardrails when it unveiled the first draft of its rulebook in 2021. The recent
boom in generative AI, however, sent European officials scrambling to update a
proposal poised to serve as a blueprint for the world.
The European Parliament will still need to vote on the act
early next year, but with the deal done that’s a formality, Brando Benifei, an
Italian lawmaker co-leading the body’s negotiating efforts, told The Associated
Press late Friday.
“It’s very very good,” he said by text message after being
asked if it included everything he wanted. “Obviously we had to accept some
compromises but overall very good.” The eventual law wouldn’t fully take effect
until 2025 at the earliest, and threatens stiff financial penalties for
violations of up to 35 million euros ($38 million) or 7% of a company’s global
turnover.
Generative AI systems like OpenAI’s ChatGPT have exploded
into the world’s consciousness, dazzling users with the ability to produce
human-like text, photos and songs but raising fears about the risks the rapidly
developing technology poses to jobs, privacy and copyright protection, and even
human life itself.
Now, the U.S., U.K., China and global coalitions like the
Group of 7 major democracies have jumped in with their own proposals to
regulate AI, though they’re still catching up to Europe.
Strong and comprehensive rules from the EU “can set a
powerful example for many governments considering regulation,” said Anu
Bradford, a Columbia Law School professor who’s an expert on EU law and digital
regulation. Other countries “may not copy every provision but will likely
emulate many aspects of it.”
AI companies subject to the EU’s rules will also likely
extend some of those obligations outside the continent, she said. “After all,
it is not efficient to re-train separate models for different markets,” she
added.
The AI Act was originally designed to mitigate the dangers
from specific AI functions based on their level of risk, from low to
unacceptable. But lawmakers pushed to expand it to foundation models, the
advanced systems that underpin general purpose AI services like ChatGPT and
Google’s Bard chatbot.
Foundation models looked set to be one of the biggest
sticking points for Europe. However, negotiators managed to reach a tentative
compromise early in the talks, despite opposition led by France, which called
instead for self-regulation to help homegrown European generative AI companies
compete with big U.S. rivals, including OpenAI’s backer Microsoft.
Also known as large language models, these systems are
trained on vast troves of written works and images scraped off the internet.
They give generative AI systems the ability to create something new, unlike
traditional AI, which processes data and completes tasks using predetermined
rules.
The companies building foundation models will have to draw
up technical documentation, comply with EU copyright law and detail the content
used for training. The most advanced foundation models that pose “systemic
risks” will face extra scrutiny, including assessing and mitigating those
risks, reporting serious incidents, putting cybersecurity measures in place and
reporting their energy efficiency.
Researchers have warned that powerful foundation models,
built by a handful of big tech companies, could be used to supercharge online
disinformation and manipulation, cyberattacks or the creation of bioweapons.
Rights groups also caution that the lack of transparency
about data used to train the models poses risks to daily life, because the
models act as basic structures for software developers building AI-powered services.
The thorniest topic turned out to be AI-powered face recognition surveillance
systems, and negotiators found a compromise only after intensive bargaining.
European lawmakers wanted a full ban on public use of face
scanning and other “remote biometric identification” systems because of privacy
concerns. But governments of member countries succeeded in negotiating
exemptions so law enforcement could use them to tackle serious crimes like
child sexual exploitation or terrorist attacks.
Rights groups said they were concerned about the exemptions
and other big loopholes in the AI Act, including lack of protection for AI
systems used in migration and border control, and the option for developers to
opt out of having their systems classified as high risk.
“Whatever the victories may have been in these final
negotiations, the fact remains that huge flaws will remain in this final text,”
said Daniel Leufer, a senior policy analyst at the digital rights group Access
Now. -AP
