Google's new review procedure asks that researchers consult
with legal, policy, and public relations teams before pursuing topics such as
face and sentiment analysis and categorisations of race, gender or political
affiliation, according to internal webpages explaining the policy.
"Advances in technology and the growing complexity of
our external environment are increasingly leading to situations where seemingly
inoffensive projects raise ethical, reputational, regulatory or legal
issues," one of the pages for research staff stated. Reuters could not
determine the date of the post, though three current employees said the policy
began in June.
Google declined to comment for this story.
The "sensitive topics" process adds a round of
scrutiny to Google's standard review of papers for pitfalls such as disclosing
of trade secrets, eight current and former employees said.
For some projects, Google officials have intervened in later
stages. A senior Google manager reviewing a study on content recommendation
technology shortly before publication this summer told authors to "take
great care to strike a positive tone," according to internal
correspondence read to Reuters.
The manager added, "This doesn't mean we should hide
from the real challenges" posed by the software.
Subsequent correspondence from a researcher to reviewers
shows that the authors "updated to remove all references to Google products."
A draft seen by Reuters had mentioned Google-owned YouTube.
Four staff researchers, including senior scientist Margaret
Mitchell, said they believe Google is starting to interfere with crucial
studies of potential technology harms.
"If we are researching the appropriate thing given our
expertise, and we are not permitted to publish that on grounds that are not in
line with high-quality peer review, then we're getting into a serious problem
of censorship," Mitchell said.
Google states on its public-facing website that its
scientists have "substantial" freedom.
Tensions between Google and some of its staff broke into
view this month after the abrupt exit of scientist Timnit Gebru, who led a
12-person team with Mitchell focused on ethics in artificial intelligence (AI)
software.
Gebru says Google fired her after she questioned an order
not to publish research claiming AI that mimics speech could disadvantage
marginalised populations. Google said it accepted and expedited her
resignation. Reuters could not determine whether Gebru's paper underwent a
"sensitive topics" review.
Google Senior Vice President Jeff Dean said in a statement
this month that Gebru's paper dwelled on potential harms without discussing
efforts underway to address them.
Dean added that Google supports AI ethics scholarship and is
"actively working on improving our paper review processes, because we know
that too many checks and balances can become cumbersome."
'Sensitive topics'
The explosion in research and development of AI across the
tech industry has prompted authorities in the United States and elsewhere to
propose rules for its use. Some have cited scientific studies showing that
facial analysis software and other AI can perpetuate biases or erode privacy.
Google in recent years incorporated AI throughout its
services, using the technology to interpret complex search queries, select
recommendations on YouTube and autocomplete sentences in Gmail. Its researchers
published more than 200 papers in the last year about developing AI
responsibly, among more than 1,000 projects in total, Dean said.
Studying Google services for biases is among the
"sensitive topics" under the company's new policy, according to an
internal webpage. Among dozens of other "sensitive topics" listed
were the oil industry, China, Iran, Israel, COVID-19, home security, insurance,
location data, religion, self-driving vehicles, telecoms, and systems that
recommend or personalise web content.
The Google paper for which authors were told to strike a
positive tone discusses recommendation AI, which services like YouTube employ
to personalise users' content feeds. A draft reviewed by Reuters included
"concerns" that this technology can promote "disinformation,
discriminatory or otherwise unfair results," and "insufficient
diversity of content," as well as lead to "political
polarisation."
The final publication instead says the systems can promote
"accurate information, fairness, and diversity of content." The
published version, titled "What are you optimising for? Aligning
Recommender Systems with Human Values," omitted credit to Google
researchers. Reuters could not determine why.
A paper this month on AI for understanding a foreign
language softened a reference to how the Google Translate product was making
mistakes following a request from company reviewers, a source said. The
published version says the authors used Google Translate, and a separate
sentence says part of the research method was to "review and fix
inaccurate translations."
For a paper published last week, a Google employee described
the process as a "long-haul," involving more than 100 email exchanges
between researchers and reviewers, according to the internal correspondence.
The researchers found that AI can cough up personal data and
copyrighted material - including a page from a Harry Potter novel - that had
been pulled from the internet to develop the system.
A draft described how such disclosures could infringe
copyrights or violate European privacy law, a person familiar with the matter
said. Following company reviews, the authors removed the mentions of those
legal risks, and Google published the paper.
© Reuters