Facial Recognition Technology: Should It Be Banned?
Because facial recognition technology has changed how visual data and biometric information can be captured and recorded, opinions about this groundbreaking software vary widely. On one hand, facial recognition cameras give business owners and law enforcement officials a helpful tool to deter crime and ensure that bad actors cannot steal goods without consequence. On the other hand, the way these cameras have been deployed in practice has led many citizens and privacy advocates to raise concerns about government surveillance, since residents of a particular community or jurisdiction are often unaware that such cameras are present. For these reasons, the choice to allow or ban the use of facial recognition technology is currently a contentious subject on a global scale.
Types of facial recognition cameras
Broadly speaking, facial recognition software falls into two distinct categories. The first compares the picture of a person's face to a larger database of photos, an approach often called one-to-many identification. Many law enforcement agencies and retail businesses currently employ facial recognition cameras in this way, using the technology to check photos of suspected criminals or shoplifters, among other relevant individuals, against existing records. The second simply compares two photographs, known as one-to-one verification, such as the facial recognition feature embedded in recent iPhones, which enables users to unlock their phones by matching a new picture against one previously stored on the device.
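The difference between these two modes can be sketched in a few lines of Python. The embeddings, names, and 0.8 match threshold below are illustrative assumptions rather than details of any real system; in practice, the vectors would come from a trained face-encoding model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one verification: does the probe match the single enrolled face?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """One-to-many identification: best match in a database, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy embeddings standing in for the output of a real face-encoding model.
alice = np.array([0.9, 0.1, 0.3])
bob = np.array([0.1, 0.95, 0.2])
probe = np.array([0.88, 0.12, 0.28])  # a new photo resembling Alice

print(verify(probe, alice))                            # unlock-style 1:1 check
print(identify(probe, {"alice": alice, "bob": bob}))   # watchlist-style 1:N search
```

The phone-unlock scenario is a single `verify` call against one stored template, while the law-enforcement scenario is an `identify` call that scans an entire database, which is part of why the two uses raise such different privacy concerns.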
Facial recognition and bias
What's more, all forms of facial recognition technology, irrespective of how they are implemented, have been linked to bias at various points in the past. This is due in large part to how the software is developed: an engineer who fails to assemble diverse training data for a facial recognition model will invariably create a program prone to misidentifying members of the general populace. This misidentification is one of the primary factors that have made everyday citizens and privacy advocates question the use of facial recognition software, as companies that deploy these systems can shift blame onto the cameras themselves, leaving people to wonder who should be held responsible in such circumstances.
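One common way this kind of bias is made visible is by auditing a matcher's false-match rate per demographic group: how often pairs of different people are wrongly declared a match. The sketch below uses entirely made-up similarity scores and hypothetical group labels purely to illustrate the calculation, under the assumption that an under-represented group tends to receive higher impostor scores.

```python
THRESHOLD = 0.8  # illustrative match threshold, not from any real system

# Hypothetical similarity scores for impostor pairs (photos of *different* people).
impostor_scores = {
    "group_a": [0.31, 0.45, 0.52, 0.60, 0.71],  # well represented in training data
    "group_b": [0.55, 0.79, 0.83, 0.86, 0.91],  # under-represented in training data
}

def false_match_rate(scores, threshold=THRESHOLD):
    """Fraction of impostor pairs scored at or above the match threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

for group, scores in impostor_scores.items():
    print(f"{group}: false-match rate {false_match_rate(scores):.0%}")
```

In this toy setup the under-represented group is misidentified far more often at the same threshold, which is exactly the pattern auditors look for when evaluating whether a deployed system treats different populations equitably.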
The banning of facial recognition
With all this being said, many major U.S. cities have gone back and forth about whether to ban the use of facial recognition software. For example, New Orleans, Louisiana banned the use of facial recognition cameras in late 2020, only to reverse course in July of 2022, when the city reinstated the technology for law enforcement operations. Similarly, the state of Virginia banned law enforcement use of facial recognition software in July of 2021, on the condition that the state legislature would first need to enact rules regulating the technology before it could be used again. The Virginia state legislature ultimately passed such regulations, and law enforcement officials and campus police officers are now permitted to use facial recognition software in certain situations.
Facial recognition on an international level
Like many other polarizing issues in modern society, facial recognition technology has proven controversial in other countries around the world as well. Many shopping and retail centers within the UK currently employ facial recognition technology in a number of different formats, much to the chagrin of UK-based privacy watchdogs such as Big Brother Watch, which recently filed a complaint with the UK's Information Commissioner's Office (ICO). More specifically, the complaint states that the facial recognition surveillance systems in question "use novel technology and highly invasive processing of personal data, creating a biometric profile of every visitor to stores where its cameras are installed." The group said the independent grocery chain had installed the surveillance technology in 35 stores across Portsmouth, Bournemouth, Bristol, Brighton and Hove, Chichester, Southampton, and London.
Moving to another part of the world, the Chinese ride-hailing application DiDi was recently fined $1.2 billion by regulators in the country in response to a string of alleged privacy violations. Most notably, Chinese government regulators claim that DiDi collected as many as 107 million facial recognition profiles, in addition to 12 million screenshots from the smartphone photo albums of Chinese customers, in a manner that was both unauthorized and illegal. On top of this, the Chinese government mandated that the application be removed from major online marketplaces such as the Google Play store and the Apple App Store, while the company has simultaneously come under further scrutiny over its listing on the U.S. stock market.
The conversation surrounding facial recognition technology is notable in that it is predicated on personal data privacy protections that have yet to be established in many nations around the world. For instance, the U.S. has yet to pass a comprehensive data protection law at the federal level, so states currently have the authority to permit or ban such technology at their sole discretion. Likewise, very few regulations enacted by governments worldwide specifically govern the use of facial recognition software. Consequently, these cameras will likely continue to be deployed wherever there is a business or legal precedent for their use, and then banned when members of the general public and privacy advocates raise concerns, as lawmakers and tech companies attempt to strike a balance between surveillance and privacy protections.