Sales could be significant for Clearview, which presented on
Wednesday at the Montgomery Summit investor conference in California. The move fuels
an emerging debate over the ethics of leveraging disputed data to design
artificial intelligence systems such as facial recognition.
Clearview's use of publicly available photos to train its
tool has earned it high marks for accuracy. But the UK and Italy fined Clearview for
breaking privacy laws by collecting online images without consent, and the
company this month settled with US rights activists over similar allegations.
Clearview primarily helps police identify people through
social media images, but that business is under threat due to regulatory
investigations.
The settlement with the American Civil Liberties Union bans
Clearview from providing the social-media capability to corporate clients.
Instead of online photo comparisons, the new private-sector
offering matches people to ID photos and other data that clients collect with
subjects' permission. It is meant to verify identities for access to physical
or digital spaces.
Vaale, a Colombian app-based lending startup, said it was
adopting Clearview to match selfies to user-uploaded ID photos.
Vaale will save about 20 percent in costs and gain in
accuracy and speed by replacing Amazon.com Inc's Rekognition service, said
Chief Executive Santiago Tobón.
"We can't have duplicate accounts and we have to avoid
fraud," he said. "Without facial recognition, we can't make Vaale
work."
Amazon declined to comment.
Clearview AI CEO Hoan Ton-That said a US company selling
visitor management systems to schools had signed up as well.
He said a customer's photo database is stored as long as
they wish and not shared with others, nor used to train Clearview's AI.
But the face-matching system that Clearview is selling to companies
was trained on social media photos. The company said the diverse collection of public
images reduces racial bias and other weaknesses that affect rival systems
constrained by smaller datasets.
"Why not have something more accurate that prevents
mistakes or any kind of issues?" Ton-That said.
Nathan Freed Wessler, an ACLU attorney involved in the
group's case against Clearview, said using ill-gotten data is an inappropriate
way to develop less-biased algorithms.
Regulators and others must have the right to force companies
to drop algorithms that benefit from disputed data, he said, noting that the
recent settlement did not include such a provision for reasons he could not
disclose.
"It's an important deterrent," he said. "When a company
chooses to ignore legal protections to collect data, they should bear the risk
that they will be held to account." © Reuters