The rapid growth of generative artificial intelligence (AI),
which can create text, images and video in seconds in response to prompts, has
heightened fears that the new technology could be used to sway major elections
this year, as more than half of the world's population is set to head to the
polls.
Signatories of the tech accord, which was announced at the
Munich Security Conference, include companies that are building generative AI
models used to create content, including OpenAI, Microsoft and Adobe. Other
signatories include social media platforms that will face the challenge of
keeping harmful content off their sites, such as Meta Platforms, TikTok and X,
formerly known as Twitter.
The agreement includes commitments to collaborate on
developing tools for detecting misleading AI-generated images, video and audio,
creating public awareness campaigns to educate voters on deceptive content and
taking action on such content on their services.
Technology to identify AI-generated content or certify its
origin could include watermarking or embedding metadata, the companies said.
The accord did not specify a timeline for meeting the
commitments or how each company would implement them.
“I think the utility of this (accord) is the breadth of the
companies signing up to it,” said Nick Clegg, president of global affairs at
Meta Platforms.
“It’s all good and well if individual platforms develop new
policies of detection, provenance, labeling, watermarking and so on, but unless
there is a wider commitment to do so in a shared interoperable way, we’re going
to be stuck with a hodgepodge of different commitments,” Clegg said.
Generative AI is already being used to influence politics
and even convince people not to vote.
In January, a robocall using fake audio of U.S. President
Joe Biden circulated to New Hampshire voters, urging them to stay home during
the state's presidential primary election.
Despite the popularity of text-generation tools like
OpenAI's ChatGPT, the tech companies will focus on preventing the harmful effects
of AI-generated photos, videos and audio, partly because people tend to be more
skeptical of text, said Dana Rao, Adobe's chief trust officer, in an
interview.
"There's an emotional connection to audio, video and
images," he said. "Your brain is wired to believe that kind of
media."