A committee of European Union lawmakers on Thursday reached a preliminary agreement on a European Artificial Intelligence Act, which would pave the way for the first-ever regulation of AI.
"Against conservative wishes for more surveillance and
leftist fantasies of over-regulation, parliament found a solid compromise that
would regulate AI proportionately, protect citizens' rights, as well as foster
innovation and boost the economy," said Svenja Hahn, a European Parliament
deputy.
The European Commission proposed the draft rules nearly two
years ago in a bid to protect citizens from the dangers of the emerging
technology, which has experienced a boom in investment and consumer popularity
in recent months.
The draft still needs to be thrashed out between EU countries and
EU lawmakers, in a process called a trilogue, before the rules can become law.
Under the proposals, companies that make generative AI
tools such as ChatGPT would have to disclose whether they have used copyrighted
material in their systems.
Legislators have sought to strike a balance between
encouraging innovation and protecting citizens' fundamental rights.
This led to different AI tools being classified according to
their perceived risk level: from minimal through to limited, high, and
unacceptable. High-risk tools won't be banned, but will require companies to be
highly transparent in their operations.
In the US, the chair of the Senate Intelligence Committee on
Wednesday urged the CEOs of several artificial intelligence companies to
prioritize security measures, combat bias, and responsibly roll out new
technologies.
Democratic Senator Mark Warner raised concerns about
potential risks posed by AI technology. "Beyond industry commitments,
however, it is also clear that some level of regulation is necessary in this
field," said Warner, who sent letters to the CEOs of OpenAI, Scale AI,
Meta Platforms, Alphabet's Google, Apple, Stability AI, Midjourney, Anthropic,
Percipient.ai, and Microsoft. © Reuters