The Center for Countering Digital Hate (CCDH), a nonprofit
that monitors online hate speech, used generative AI tools to create images of
U.S. President Joe Biden lying in a hospital bed and election workers smashing
voting machines, raising worries about falsehoods ahead of the U.S.
presidential election in November.
"The potential for such AI-generated images to serve as
'photo evidence' could exacerbate the spread of false claims, posing a
significant challenge to preserving the integrity of elections," CCDH
researchers said in the report.
CCDH tested OpenAI's ChatGPT Plus, Microsoft's Image
Creator, Midjourney and Stability AI's DreamStudio, which can each generate
images from text prompts.
The report follows an announcement last month that OpenAI,
Microsoft and Stability AI were among a group of 20 tech companies that signed
an agreement to work together to prevent deceptive AI content from interfering
with elections taking place globally this year. Midjourney was not among the
initial group of signatories.
CCDH said the AI tools generated misleading images in 41% of the
researchers' tests and were most susceptible to prompts asking for photos
depicting election fraud, such as voting ballots in the trash, rather than
images of Biden or former U.S. President Donald Trump.
ChatGPT Plus and Image Creator blocked all prompts asking for
images of the candidates, the report said.
Midjourney performed worst of the tools tested, generating
misleading images in 65% of the researchers' tests, it said.
Some Midjourney images are available publicly to other
users, and CCDH said there is evidence some people are already using the tool
to create misleading political content. One successful prompt used by a
Midjourney user was "donald trump getting arrested, high quality,
paparazzi photo."
In an email, Midjourney founder David Holz said
"updates related specifically to the upcoming U.S. election are coming
soon," adding that images created last year were not representative of the
research lab's current moderation practices.
A Stability AI spokesperson said the startup updated its
policies on Friday to prohibit "fraud or the creation or promotion of
disinformation."
An OpenAI spokesperson said the company was working to
prevent abuse of its tools, while Microsoft did not respond to a request for
comment. -Reuters