Over the coming months, Amazon will hire a small group of
people in its Amazon Web Services (AWS) division to develop expertise and work
with outside researchers to monitor for future threats, one of the sources
familiar with the matter said.
It could turn Amazon, the leading cloud service provider
worldwide with 40 percent market share according to research firm Gartner, into
one of the world's most powerful arbiters of content allowed on the internet,
experts say.
Amazon made headlines in the Washington Post last week for
shutting down a website hosted on AWS that featured Islamic State propaganda
celebrating the suicide bombing that killed an estimated 170 Afghans
and 13 US troops in Kabul last Thursday. Amazon acted after the news
organization contacted it, according to the Post.
The proactive approach to content comes after Amazon kicked
social media app Parler off its cloud service shortly after the January 6
Capitol riot for permitting content promoting violence.
"AWS Trust & Safety works to protect AWS customers,
partners, and Internet users from bad actors attempting to use our services for
abusive or illegal purposes. When AWS Trust & Safety is made aware of
abusive or illegal behavior on AWS services, they act quickly to investigate
and engage with customers to take appropriate actions," AWS said in a
statement.
"AWS Trust & Safety does not pre-review content
hosted by our customers. As AWS continues to expand, we expect this team to
continue to grow," it added.
Activists and human rights groups are increasingly holding
accountable for harmful content not just websites and apps, but also the
underlying tech infrastructure that enables those sites to operate, while
political conservatives decry the curtailing of free speech.
AWS already prohibits the use of its services for a variety
of purposes, such as illegal or fraudulent activity, inciting or threatening
violence, or promoting child sexual exploitation and abuse, according to its
acceptable use policy.
Amazon first asks customers to remove content that violates its
policies, or to put a system in place to moderate content. If Amazon cannot
reach an acceptable agreement with the customer, it may take down the website.
Amazon aims to develop an approach toward content issues
that it and other cloud providers are more frequently confronting, such as
determining when misinformation on a company's website reaches a scale that
requires AWS action, the source said.
The new team within AWS does not plan to sift through the
vast amounts of content that companies host on the cloud, but will aim to get
ahead of future threats, such as emerging extremist groups whose content could
make it onto the AWS cloud, the source added.
Amazon is currently hiring for a global head of policy on
the AWS trust and safety team, which is responsible for "protecting AWS
against a wide variety of abuse," according to a job posting on its
website.
AWS's offerings include cloud storage and virtual servers,
and the unit counts major companies like Netflix, Coca-Cola and Capital One as
clients, according to its website.
Proactive moves
Better preparation against certain types of content could
help Amazon avoid legal and public relations risk.
"If (Amazon) can get some of this stuff off proactively
before it's discovered and becomes a big news story, there's value in avoiding
that reputational damage," said Melissa Ryan, founder of CARD Strategies,
a consulting firm that helps organizations understand extremism and online
toxicity threats.
Cloud services such as AWS and other entities like domain
registrars are considered the "backbone of the internet," but have
traditionally been politically neutral services, according to a 2019 report
from Joan Donovan, a Harvard researcher who studies online extremism and
disinformation campaigns.
But cloud service providers have removed content before,
such as in the aftermath of the 2017 alt-right rally in Charlottesville,
Virginia, helping to slow the organizing ability of alt-right groups, Donovan
wrote.
"Most of these companies have understandably not wanted
to get into content and not wanting to be the arbiter of thought," Ryan
said. "But when you're talking about hate and extremism, you have to take
a stance." - Reuters