10 reasons to be concerned about facial recognition technology

It’s everywhere you look and is set to be worth $8.5bn by 2025. But here’s why surveillance technology shouldn’t be taken at face value.

By Nigel Jones

Co-founder of The Privacy Compliance Hub

August 2021

From unlocking your iPhone and auto-tagging Facebook photos, to employers tracking productivity and police forces surveilling protests – facial recognition technology is becoming more and more embedded in our everyday lives. Experts believe the value of the sector will more than double from $3.8bn in 2020 to $8.5bn by 2025. But here are 10 reasons why we're concerned about facial recognition software and privacy.

It’s hard to avoid

Airports, concert venues, sports stadiums, hotels, casinos, shopping centres, police forces – many of these places and organisations are using facial recognition technology. In 2019, at least 75 countries were using AI-enabled surveillance. Critics argue that users have not consented to being scanned and that going out in public should not be considered consent to be tracked. Woodrow Hartzog, a computer scientist and law professor at Northeastern University in Boston, says individuals often don't understand the risks, or have any real recourse to say no. Even knocking on a friend's door (complete with a Ring doorbell) could see you added to a police database of images.

It’s biased 

Facial recognition technology has been found to recognise Caucasian men better than women and other ethnic groups. Research by the National Institute of Standards and Technology in the US found the majority of facial recognition algorithms exhibited more false positives against people of colour, and there have been at least three cases of Black men being wrongfully arrested on the basis of facial recognition evidence. Like all AI systems, it's only as smart as the data it's trained on. Few companies use truly representative data sets, which disadvantages under-represented demographics.


It’s also often wrong

Even for Caucasian men, the technology often fails to work. In South Wales, police testing a facial recognition system saw 91% of matches labelled as false positives: the system made 2,451 incorrect identifications and only 234 correct ones when matching faces to names on the watchlist. Similarly, in New York City, the transport authority halted a pilot that had a 100% error rate. Facial recognition tools work a lot better in the lab than they do in the real world. Small factors such as light, shade and how the image is captured can affect the result.

There’s the potential for fraud

Companies selling facial recognition software have compiled huge databases to power their algorithms – controversial Clearview AI, for example, has 3 billion images (scraped from Google, Facebook, YouTube, LinkedIn and Venmo) it can search against. These systems are a real security risk: hackers have broken into databases containing facial scans used by banks, police departments and defence firms in the past, and criminals can use this information to commit identity fraud, or to harass or stalk victims. Biometric data is not something you'd want to fall into the wrong hands, as US Senator Edward Markey warned after Clearview suffered a data breach in 2020. “If your password gets breached, you can change your password. If your credit card number gets breached, you can cancel your card. But you can’t change biometric information like your facial characteristics.”

It’s being used to monitor children

Using our faces to unlock our iPhones or computers may seem harmless, but this technology is increasingly also being used to capture images of children. In China, the gaming giant Tencent is using facial recognition to stop children playing games between 10pm and 8am. Between these times, players have to pass a facial scan to prove they're an adult. In America, one Texas school district ran a pilot using surveillance technology in its school corridors. Over seven days there were 164,000 facial detections, including one student who was detected 1,100 times. And in Argentina, police forces are using it to track alleged offenders as young as four.

It’s insufficiently regulated

The use of facial recognition tools is already governed by the GDPR in the EU and the UK, but technology companies themselves are calling for stronger regulation. IBM, Microsoft and Amazon have all either pulled out of the facial recognition software market altogether, or are limiting their work with police forces in the US. In 2021, the European Union’s lead data protection supervisor called for a ban on remote biometric surveillance in public places, and on the use of AI to predict people’s ethnicity, gender, or political or sexual orientation.

It’s not just monitoring your face

The way you look, how you think and feel – firms are developing new and alarming ways to track everything we do. Irish startup Liopa, for example, is trialling a phone application that can interpret phrases mouthed by people, and VisionLabs, which is based in Amsterdam, claims it is able to tell if someone is showing anger, disgust, fear, happiness, surprise or sadness. Such technology is increasingly being used by companies to track productivity and even make hiring decisions. And other biometric technology can track how you move, identify you by the shape of your ear, or match your iris.

Your private life is no longer private

Would you act differently if you knew you were being watched? What if the watcher could find out who you are? Freedom of speech campaigners say the use of facial recognition software has real implications for fundamental human rights, including the right to protest and the right to a private life. More than a third (38%) of 16-24 year olds polled in London, and 28% of ethnic minority respondents, said they would stay away from events monitored with live facial recognition. In 2020, a court also found in favour of British man Ed Bridges, who argued that the use of automatic facial recognition technology caused him distress. Bridges’ face was one of 500,000 captured by South Wales Police in Cardiff, the majority belonging to people not suspected of any wrongdoing.


There’s not enough transparency

One of the challenges faced by regulators is that the technology is moving incredibly fast and there isn’t enough transparency about what happens to stored images, how they’re processed and for how long. This is particularly true for the closed-box technology systems used by police forces. In 2019, the Ada Lovelace Institute found the majority of Brits supported this technology when they could see a public benefit, such as in criminal investigations, unlocking smartphones or checking passports. But 29% were uncomfortable with the police using the technology because they didn’t trust them to use it ethically. There was almost no support for its use in schools, at work, or in supermarkets.

We might start to see it as normal

One of the risks of monitoring children at a young age is that they grow into adults who accept facial recognition technology as normal. Surveillance has ramped up in the past 18 months because of Covid-19 – even France has used AI to monitor social distancing and whether people were wearing masks (although authorities insist nobody was identified). Campaigners worry these ‘emergency measures’ won’t be rolled back. “The more we use face recognition, the less we start to think of it, the less we think of it as risky and we become accustomed to it,” says Jennifer Lynch of the campaign group the Electronic Frontier Foundation. “It’s a slippery slope.”
