What are community notes?
Announced in early 2021, two years before Musk bought the platform, the feature, initially called Birdwatch, allowed volunteers in the United States to flag questionable posts and add notes providing context.
At first the notes were visible only on a separate site, but they were later displayed directly beneath the posts themselves.
Twitter had just booted Donald Trump off the platform for
inciting violent riots at the US Capitol and was seeking to promote healthier
conversations.
In November 2022, a community note prompted the White House
to retract a tweet that had exaggerated the impact of President Joe Biden’s
policies on retirement benefits.
Having become the platform’s new owner, Musk relaxed
moderation but still said his “goal is to make Twitter the most accurate source
of information on Earth, without regard to political affiliation”.
The publication of community notes has since been rolled out
to 44 countries.
How does it work?
Users who have signed up as contributors draft proposed notes, which are not edited by staffers at X. They must nevertheless comply with X’s moderation rules.
The proposed notes are then submitted to other contributors for rating. Those that enough contributors rate as helpful may then be selected by an algorithm to be posted publicly.
The algorithm is inspired by the one Netflix uses to recommend content and aims to “identify notes with broad appeal across viewpoints” rather than simply those that receive the most helpful ratings.
It is regularly updated; one recent change limits how notes can be replaced after publication.
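To make that idea concrete, here is a minimal sketch of how such a “bridging” matrix-factorisation model can work, in the spirit of what X has described. Everything in it is an illustrative assumption rather than X’s production code: the data is synthetic, and the variable names, model sizes and the 0.15 publication threshold are invented for the example. Each rating is modelled as a baseline plus a rater term, a note term and a viewpoint interaction; a note is surfaced only when its own intercept, the helpfulness left over once raters’ viewpoints are accounted for, is high.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy data: two ideological camps of raters, five notes -------------
n_raters, n_notes = 40, 5
camp = np.where(np.arange(n_raters) < 20, 1.0, -1.0)  # +1 / -1 = two camps

# Note 0 is "bridging" (both camps tend to rate it helpful); notes 1-4
# are partisan (only one camp rates them helpful).
true_quality = np.array([0.9, 0.1, 0.1, 0.1, 0.1])
true_slant = np.array([0.0, 1.0, 1.0, -1.0, -1.0])

# Probability a rater marks a note helpful rises with shared slant.
p = np.clip(true_quality[None, :]
            + 0.45 * camp[:, None] * true_slant[None, :], 0.02, 0.98)
R = rng.binomial(1, p).astype(float)  # observed helpful (1) / not (0)

# --- Fit: rating(u, n) ~ mu + b_u + b_n + f_u * f_n --------------------
# b_n, the note intercept, captures appeal NOT explained by a rater's
# viewpoint factor f_u; that is what "broad appeal" means here.
mu = R.mean()
b_u = np.zeros(n_raters); b_n = np.zeros(n_notes)
f_u = rng.normal(0, 0.1, n_raters); f_n = rng.normal(0, 0.1, n_notes)
lr, reg = 0.05, 0.03  # learning rate and regularisation (toy values)

for _ in range(2000):
    err = R - (mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, f_n))
    b_u += lr * (err.mean(axis=1) - reg * b_u)
    b_n += lr * (err.mean(axis=0) - reg * b_n)
    f_u += lr * (err @ f_n / n_notes - reg * f_u)
    f_n += lr * (err.T @ f_u / n_raters - reg * f_n)

# A note is shown when its intercept clears a threshold; 0.15 is an
# arbitrary value chosen for this toy example.
for n in range(n_notes):
    status = "shown" if b_n[n] > 0.15 else "not shown"
    print(f"note {n}: intercept {b_n[n]:+.2f} -> {status}")
```

In this toy run the bridging note, which both camps rate helpful, ends up with a clearly positive intercept while the four partisan notes do not. That is the behaviour behind the “broad appeal across viewpoints” goal, and also why the system needs many raters from different camps to estimate the viewpoint factors at all.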
Is it effective in fighting disinformation?
According to X, people are on average 30 percent less likely to agree with the contents of a post after reading an accompanying community note, and they are also less likely to repost it to their followers.
But Alex Mahadevan, programme director of the Poynter Institute’s MediaWise digital literacy initiative, pointed to a flaw in the algorithm: for notes to be published, they must win consensus across ideological divides.
“Maybe that would have worked four years ago,” he said at a conference in South Korea in June.
“That does not work anymore because 100 people on the left
and 100 people on the right are not going to agree that vaccines are
effective.”
What was required, he explained, was a “cross-ideological agreement on truth”, and in an increasingly partisan environment achieving that consensus is almost impossible.
Musk is considered by some to be partly responsible for that environment: since taking over X, he has made deep cuts to content moderation teams and pulled the platform out of the EU’s voluntary code of best practices on disinformation.
For Musk, community notes are an economical way to moderate content, including political advertising, which has recently been re-authorised on the platform.
For Mahadevan, the system has proven effective at identifying and adding context to non-political content, such as pointing out AI-generated images, misleading advertising and already debunked conspiracy theories.
Is the algorithm neutral?
While the algorithm’s complex mathematical formula is meant to discourage manipulation, it is not foolproof and requires a large number of evaluators.
Julien Pain, the host of the “True or Fake” programme on
Franceinfo, said there are groups in France that game the algorithm to promote
their ideas.
He said hard-right groups flood the platform with notes on posts by leftist politicians and try to get notes on conservatives’ posts removed.
“Their contributions aren’t always accepted but some get through,” he told AFP.
“A note is interesting when it adds a bit of factual context
to nuance a statement. But these just declare it’s false and try to discredit
the person,” he added.
For French Green lawmaker Julien Bayou, the notes have become “a tool for ideological jousting which, unfortunately, can be distorted”.