Austria on Monday called for fresh efforts to regulate the use of artificial intelligence in weapons systems that could create so-called "killer robots", as it hosted a conference aimed at reviving largely stalled discussions on the issue.
With AI technology advancing rapidly, weapons systems that
could kill without human intervention are coming ever closer, posing ethical
and legal challenges that most countries say need addressing soon.
"We cannot let this moment pass without taking action.
Now is the time to agree on international rules and norms to ensure human
control," Austrian Foreign Minister Alexander Schallenberg told the
meeting of non-governmental and international organisations as well as envoys
from 143 countries.
"At least let us make sure that the most profound and
far-reaching decision, who lives and who dies, remains in the hands of humans
and not of machines," he said in an opening speech to the conference
entitled "Humanity at the Crossroads: Autonomous Weapons Systems and the
Challenge of Regulation".
Years of discussions at the United Nations have produced few
tangible results and many participants at the two-day conference in Vienna said
the window for action was closing rapidly.
"It is so important to act and to act very fast,"
the president of the International Committee of the Red Cross, Mirjana
Spoljaric, told a panel discussion at the conference.
"What we see today in the different contexts of
violence are moral failures in the face of the international community. And we
do not want to see such failures accelerating by giving the responsibility for
violence, for the control over violence, over to machines and algorithms,"
she added.
AI is already being used on the battlefield. Drones in
Ukraine are designed to find their own way to their target when signal-jamming
technology cuts them off from their operator, diplomats say.
The United States said this month it was looking into a
media report that the Israeli military has been using AI to help identify
bombing targets in Gaza.
"We have already seen AI making selection errors in
ways both large and small, from misrecognizing a referee's bald head as a
football, to pedestrian deaths caused by self-driving cars unable to recognize
jaywalking," Jaan Tallinn, a software programmer and tech investor, said
in a keynote speech.
"We must be extremely cautious about relying on the
accuracy of these systems, whether in the military or civilian sectors."