In many busy households around the world, it’s not uncommon for children to shout commands at Apple’s Siri or Amazon’s Alexa. They may turn requests to the voice-activated personal assistant (VAPA) into a game, asking what time it is or calling up a popular song.
While this may seem like a mundane part of family life,
there is so much more to it. VAPAs continuously listen to, record, and process
surrounding audio in a process that has been called “eavesmining,” a portmanteau of
eavesdropping and data mining. This raises significant concerns about
privacy and surveillance, as well as discrimination, as the sonic traces
of people’s lives are captured and scrutinized by data algorithms.
These concerns are heightened when applied to children.
Their data accumulates over their lifetimes in ways that go far beyond anything
ever collected about their parents, with far-reaching consequences that we
have not yet begun to understand.
Always listening
VAPAs are being adopted at an astounding rate, expanding across
mobile phones, smart speakers, and a growing number of
products connected to the Internet. These include digital toys for kids, home
security systems that listen for intruders, and smart doorbells that can pick
up sidewalk conversations.
There are pressing issues stemming from the collection,
storage and analysis of audio data relating to parents, youth and children.
Alarms have been raised before: in 2014, privacy advocates raised
concerns about how much the Amazon Echo was listening to, what data was being
collected, and how that data would be used by Amazon’s recommendation engines.
And yet, despite these concerns, VAPAs and other eavesmining
systems have spread exponentially. Recent market research predicts that by
2024, the number of voice-activated devices will grow to more than 8.4 billion.
Recording is more than just speech
There is more to these recordings than uttered sentences: VAPAs and
other eavesmining systems pick up the personal characteristics of voices,
which can inadvertently reveal biometric and behavioural attributes such as age, sex,
health, intoxication and personality.
Information about the acoustic environment (such as a noisy
apartment) or specific acoustic events (such as broken glass) can also be
gathered through “auditory scene analysis” to make judgments about what is
happening in that environment.
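To make this concrete, here is a minimal sketch of what frame-level auditory scene analysis can look like. The feature choices (RMS energy and zero-crossing rate), the thresholds, and the labels are all hypothetical illustrations invented for this example; they are not drawn from any vendor’s actual pipeline.

```python
# A minimal sketch of "auditory scene analysis"-style event labelling.
# All thresholds, labels and function names are hypothetical illustrations.
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 1024, hop: int = 512):
    """Split a mono signal into frames and compute two classic features:
    RMS energy (loudness) and zero-crossing rate (crude spectral brightness)."""
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        # Fraction of adjacent samples whose sign flips.
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        features.append((rms, zcr))
    return features

def label_scene(features, loud_thresh: float = 0.1, bright_thresh: float = 0.3):
    """Map each frame to a coarse scene label using hand-picked thresholds."""
    labels = []
    for rms, zcr in features:
        if rms < loud_thresh:
            labels.append("quiet")
        elif zcr > bright_thresh:
            labels.append("impulsive event (glass-like)")  # loud AND bright
        else:
            labels.append("speech/household noise")
    return labels

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(0)
    quiet = 0.01 * rng.standard_normal(sr)       # 1 s of near-silence
    crash = 0.5 * rng.standard_normal(sr // 4)   # noisy burst: loud, high ZCR
    audio = np.concatenate([quiet, crash])
    print(label_scene(frame_features(audio)))
```

Even this toy version shows how little audio is needed to make coarse judgments about a household, which is the heart of the concern.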
Eavesmining systems have a recent track record of cooperating
with law enforcement agencies and being subpoenaed for data in criminal
investigations. This raises concerns about other forms of surveillance and
profiling of children and families.
For example, smart speaker data can be used to create
profiles such as “noisy household”, “disciplined parenting style” or “difficult
youth”. Such profiles could one day be used by governments to identify welfare
recipients or families in crisis, with potentially dire consequences.
New eavesmining systems called “aggression detectors” are also being
introduced as a solution for keeping children safe. These
technologies consist of microphone systems loaded with machine-learning software,
and their makers claim they can help predict incidents of violence by listening
for signs of rising volume and emotion in voices, as well as other sounds such
as breaking glass.
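As a rough illustration of why such systems misfire, consider this minimal sketch of a loudness-based trigger. The window length, threshold and function name are hypothetical; commercial detectors are proprietary and certainly more elaborate, but any detector keyed to sustained volume shares the core failure mode shown here.

```python
# A minimal sketch of the kind of loudness trigger an "aggression detector"
# might rely on, and why it misfires. Threshold and window are hypothetical.
import numpy as np

def flag_aggression(signal: np.ndarray, sr: int,
                    window_s: float = 0.5, rms_thresh: float = 0.2) -> bool:
    """Flag the clip if any half-second window is sustained-loud.
    Note this cannot distinguish an angry shout from a cheer or a cough:
    both produce the same sustained rise in RMS energy."""
    win = int(window_s * sr)
    for start in range(0, len(signal) - win + 1, win):
        window = signal[start:start + win]
        if np.sqrt(np.mean(window ** 2)) > rms_thresh:
            return True
    return False

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(1)
    # A simulated cheer: loud but entirely benign -- still gets flagged.
    cheer = 0.4 * rng.standard_normal(sr * 2)
    print(flag_aggression(cheer, sr))  # True: a false positive
```

A cheering crowd and an angry confrontation look identical to this trigger, which is consistent with the misclassifications described below.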
Surveillance in schools
Aggression detectors are advertised in school safety magazines
and at law enforcement conferences. They have been deployed in public spaces,
hospitals and high schools under the guise of being able to prevent and detect
mass shootings and other instances of deadly violence.
But there are serious problems with the efficacy
and reliability of these systems. One brand of detector repeatedly misinterpreted
children’s audible cues, including coughing, screaming and cheering, as signs of
aggression. This raises questions about who is being protected and who is
made less secure by their design.
Some children and young people will be disproportionately harmed
by this form of securitized listening, and the interests of all families will
not be equally protected or served. A repeated criticism of voice-activated
technology is that it reproduces cultural and racial bias by enforcing
vocal norms and misrecognizing diverse forms of speech culture relating to
language, accent, dialect and slang.
We can predict that the words and voices of racialized children
and young people will be disproportionately misinterpreted as sounding
aggressive. This troubling prediction should come as no surprise: it
follows colonial histories and deeply entrenched white supremacy that have
long policed a “sonic color line.”
The right policy
Eavesmining presents a rich site of monitoring and data extraction, as the audio
of children’s and families’ lives has become a
valuable source of data to be collected, tracked, stored, analyzed and sold,
without the subjects’ knowledge, to thousands of third parties. These companies
are for-profit, with little moral obligation toward children and their data.
There is no legal requirement to delete this data, which
will accumulate over children’s lifetimes and may persist indefinitely. It
is not clear how long and how far these digital footprints will follow children
as they grow older, how ubiquitous this data will become, or how much it
will be cross-referenced with other data. These questions have serious
implications for children’s lives, both now and as they grow up.
Eavesmining poses countless privacy threats, along with risks of
surveillance and discrimination. Individualized recommendations, such
as information privacy education and digital skills training, will not be
effective in addressing these issues; they place too heavy a burden on families
to develop the knowledge needed to combat eavesmining in
public and private spaces.
We need to consider developing a collective
framework that protects against the unique risks and realities of eavesmining. Perhaps the
development of Fair Listening Principles, an auditory spin on the “Fair Information
Practice Principles,” would help in assessing the platforms and processes that
impact the audio lives of children and families.