The ambitious but controversial online safety bill would
give regulators wide-ranging powers to crack down on digital and social media
companies like Google, Facebook, Twitter and TikTok.
Authorities in the United Kingdom are at the vanguard of a
global movement to rein in the power of tech platforms and make them more
responsible for the harmful material, such as child sexual abuse material,
racist content, bullying and fraud, that proliferates on their platforms.
Similar efforts are underway in the European Union and the United States.
While the internet has transformed people's lives, “tech firms
haven't been held to account when harm, abuse and criminal behaviour have run
riot on their platforms,” UK Digital Secretary Nadine Dorries said in a
statement. “If we fail to act, we risk sacrificing the wellbeing and innocence
of countless generations of children to the power of unchecked algorithms.”
The bill now faces debate in Parliament, where it could be
amended before lawmakers vote on whether to pass it into law.
The government has toughened the legislation since it was
first drafted, after a committee of lawmakers recommended improvements. Changes
include giving users more power to block anonymous trolls, requiring porn sites
to verify that users are 18 or older, and making cyberflashing (sending someone
unsolicited graphic images) a criminal offence.
Tech executives would become criminally liable two months after
the law takes effect, rather than two years afterward as proposed in the original
draft. Companies could be fined up to 10 percent of their annual global revenue
for violations.
The updated draft also includes a wider range of criminal offences that could
result in prison sentences of up to two years.
Initially, tech executives faced prison time only for failing to
quickly provide regulators with accurate information needed to assess whether
their companies were complying with the rules.
Now they would also face prison for suppressing, destroying or
altering requested information, or for failing to cooperate with regulators, who
would have the power to enter a tech company's premises to inspect data and
equipment and to interview employees.
Tech companies would have to proactively take down illegal
content involving revenge porn, hate crime, fraud, ads for drugs or weapons,
suicide promotion or assistance, human trafficking and sexual exploitation, in
addition to the terrorism and child sexual abuse material covered in the original
proposal.
The government said it would outline categories of harmful
but legal material that the biggest online platforms such as Google and
Facebook would have to tackle, instead of leaving it up to the “whim of
internet executives.”
That provision is aimed at addressing the concerns of digital activists who
worried the law would crimp freedom of speech and expression because companies
would be overzealous in removing material that upsets or offends people but
isn't prohibited.