At present, videos uploaded to the platform pass through
automated technology tools that recognize and flag potential violations, which
are then reviewed by a safety team member. If a violation is confirmed, the
video is removed and the user is notified, TikTok said.
The ByteDance-owned company added that over the next few
weeks it will begin automatically removing some types of content that violate
its policies on minor safety, adult nudity and sexual activities, violent and
graphic content, and illegal activities and regulated goods.
This will be in addition to the removals confirmed by the
safety team.
The company said this will allow its safety team to
concentrate on highly contextual and nuanced areas, such as bullying and
harassment, misinformation and hateful behavior.
TikTok added that it will send an in-app warning upon a
first violation. In the case of repeated violations, the user will be
notified and the account may be permanently removed.
The changes come as social media networks, including
Facebook and TikTok, have come under fire for amplifying hate speech and
misinformation globally across their platforms.
© Reuters