Australia Considers New Rules on Facial Recognition Tech
As advances in artificial intelligence and machine learning have made facial recognition software more common around the world in recent years, many governments have been torn over whether such technologies should be regulated or banned altogether. Against this backdrop, privacy advocates and civil society groups within Australia have called on the nation’s federal government to enact a new law governing the use of facial recognition software across the country, citing the ways in which such technologies have been abused by businesses in recent years.
To illustrate this point, Kmart, a big-box department store chain, Bunnings Warehouse, a household hardware chain, and The Good Guys, a consumer electronics retailer, three of the most prominent retailers in Australia, came under fire in August of this year for their use of facial recognition technology across their retail locations. An investigation conducted by the Australian Consumers’ Association, more commonly known as CHOICE, subsequently found that many of the nation’s citizens were not even aware that facial recognition software was being used within these stores.
A lack of regulation
Meanwhile, legislative loopholes in the Australian legal system have created an environment in which businesses can implement facial recognition technology with little to no accountability or regulation. For this reason, the Human Technology Institute (HTI), described as “a leading voice in Australia and globally on human-centered tech”, has proposed a new model law that would “impose new obligations on both companies developing or distributing facial recognition systems and any entity deploying them, including police and employers.” Beyond the retail sector, many privacy advocates within Australia also take issue with the use of facial recognition software by the nation’s many law enforcement agencies.
Facial recognition and law enforcement
As in many other developed countries in recent years, the use of facial recognition by law enforcement agencies has generated a great deal of concern. For instance, several U.S. states implemented bans on the use of facial recognition technology last year, only to reverse course and lift those bans at various points throughout 2022. Furthermore, many proposed laws aimed at regulating facial recognition technology contain provisions that would exempt law enforcement agencies from the scope of the law.
What’s more, Australia’s system of government stands in the way of a single law that could ban the use of facial recognition technology across the country. Law enforcement agencies within Australia are typically governed by state law, much as individual U.S. states maintain different regulations on a given political issue or business practice. In spite of this, consumer privacy groups and politicians alike are optimistic about the potential enactment of a federal law regulating the use of facial recognition technology in all sectors of business.
Risk-based approach
To this last point, the model law proposed by Australia’s Human Technology Institute takes a risk-based approach to the regulation of facial recognition technology rather than calling for an outright ban. The law would establish three levels of risk for facial recognition deployments: base-level, elevated, and high risk. For instance, the use of a facial recognition camera within a workplace would raise the level of risk, as the employees working there have little control over their own environment on a day-to-day basis. The model law would also prohibit the use of facial recognition technology under certain high-risk conditions, such as placing cameras within heavily populated areas without a legal basis.
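To make the tiering concrete, the sketch below shows how such a classification might look in code. It is purely illustrative: the tier names, risk factors, and decision rules are assumptions made for the sake of the example, not provisions drawn from the HTI’s actual model law.

```python
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tier labels loosely mirroring the model law's
    # base-level, elevated, and high risk categories.
    BASE = "base-level"
    ELEVATED = "elevated"
    HIGH = "high"


def assess_deployment(in_workplace: bool,
                      mass_public_surveillance: bool,
                      has_legal_basis: bool) -> RiskTier:
    """Sketch of sorting a facial recognition deployment into a risk tier.

    The factors and ordering here are assumptions for illustration only.
    """
    # Broad surveillance of heavily populated public areas without a
    # legal basis is treated as high risk (and would be prohibited).
    if mass_public_surveillance and not has_legal_basis:
        return RiskTier.HIGH
    # Workplace use raises the risk level because employees have little
    # control over their day-to-day environment.
    if in_workplace:
        return RiskTier.ELEVATED
    return RiskTier.BASE


if __name__ == "__main__":
    tier = assess_deployment(in_workplace=True,
                             mass_public_surveillance=False,
                             has_legal_basis=False)
    print(tier)  # RiskTier.ELEVATED
```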
On the other hand, the law also calls for a greater level of transparency concerning the use of facial recognition technology overall. More specifically, “A developer or organization seeking to use facial recognition technology would typically have to undertake a ‘Facial Recognition Impact Assessment’ and make it publicly accessible. It could then be challenged and audited by the regulator or interested parties.” In this way, members of the general public would gain a better understanding of how facial recognition technology should be used without infringing on the personal privacy rights of other individuals.
Returning to the Australian Consumers’ Association (CHOICE), many of the association’s members have called on Attorney-General Mark Dreyfus to comment on both the proposed model law and the regulation of facial recognition software in general. While Dreyfus has declined to answer specific questions, a spokesman for the Attorney-General stated that he was considering “what privacy protections should apply to the collection and use of sensitive information using facial recognition technology.” As such, it remains unclear whether the model law will ultimately be passed.