Twitter will test sending users a prompt when they reply to a tweet using "offensive or hurtful language," in an effort to clean up conversations on the social media platform, the company said in a tweet on Tuesday.
When users hit "send" on their reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed both by users flagging rule-breaking tweets and by technology.
"We're trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret," Sunita Saligram, Twitter's global head of site policy for trust and safety, said in an interview with Reuters.
The company took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.
Asked whether the experiment would instead give users a playbook to find loopholes in Twitter's rules on offensive language, Saligram said that it was targeted at the majority of rule breakers who are not repeat offenders.
Twitter said the experiment, the first of its kind for the company, will start on Tuesday and last at least a few weeks. It will run globally but only for English-language tweets.
Twitter's policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.

"When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful," Twitter Support (@TwitterSupport) said in a tweet on May 5, 2020.