A person's face is unique, and it is both public and private at the same time. Sensitive information about us, such as our gender, emotions and health, can be read from our faces. What follows was written with Australia in mind, but the issue matters to the whole world.
Australian legislators, like those in other countries, did not foresee that our facial data would be collected at scale and used in everything from our mobile phones to police CCTV cameras. So we should not be surprised that our laws have failed to keep pace with the extraordinary growth of facial recognition technology.
But what kinds of laws are needed? Since the technology can be used for both good and ill, an outright ban and the current free-for-all both seem undesirable.
These gaps in regulation have left our society exposed to dangerous facial recognition practices. To close them, a "model law" is proposed: a framework that governments across Australia can adopt or adapt, permitting safe uses of facial recognition technology while controlling dangerous ones.
The challenges of facial recognition technology
The applications of facial recognition technology seem limited only by human imagination. Most of us think nothing of unlocking our smart devices with our faces. But in Australia the technology has also been trialed or used in settings as varied as schools, airports, shops, nightclubs and casinos, and by the police.
With facial recognition technology projected to grow by 20% a year, the risks to people are rising, especially in high-stakes settings such as law enforcement. Reliance on unreliable facial recognition in the US has already produced many injustices, particularly for Black people. These include the wrongful arrest and detention of Robert Williams and the wrongful expulsion of a young Black girl from a Detroit skating rink.
Many of the world's biggest technology companies, including Meta, Amazon and Microsoft, have scaled back or stopped offering facial recognition services, citing ineffective regulation and concerns about consumer safety. Admirable as this is, it has also produced a kind of "regulatory market failure".
As these businesses have withdrawn their services, others with fewer scruples have expanded their share of the facial recognition market.
Consider the American company Clearview AI. After unlawfully scraping billions of facial images from social media and other websites, it built a face-matching service that it has offered to the Australian Federal Police and to law enforcement agencies around the world.
In 2021 the Australian Information Commissioner found that the AFP and Clearview AI had violated Australia's privacy laws, but such sanctions are rare.
Meanwhile, Australians are calling for tighter regulation of facial recognition.
This is evident in several reports, including the Australian Human Rights Commission's 2021 report, the 2022 CHOICE investigation into major retailers' use of facial recognition technology, and research commissioned by the Human Technology Institute as part of our work on the model law.
Options for regulating facial recognition
What are Australia's options? The first is to do nothing. But that means accepting that we will remain unprotected from harmful facial recognition practices, and that we will continue drifting towards mass surveillance.
Another option is to ban facial recognition technology outright. The moratoriums some governments have imposed carve out many exceptions for beneficial uses, but they are at best a temporary solution.
We argue that legislation regulating facial recognition technology according to its level of risk is the better reform option. Such regulation would encourage uses of facial recognition with clear public benefits while protecting against harmful applications.
A risk-based law for the regulation of facial recognition technologies
Under our model law, anyone developing or deploying facial recognition technology in Australia would be required to undertake a comprehensive impact assessment of its potential harm to human rights.
Legal requirements and restrictions would scale with the level of risk. Developers would also have to comply with a technical standard for facial recognition, aligned with international norms, covering AI performance and sound data management.
The model law prohibits high-risk applications of facial recognition technology. For example, it would be illegal to use "face analysis" software to infer a person's sexual orientation and make decisions about them on that basis.
The model law also includes three exceptions to this prohibition on high-risk facial recognition:
1) The regulator may approve a high-risk practice if it is satisfied the practice is justified under international human rights law.
2) Law enforcement agencies would operate under a specific legal framework, including a "face search warrant" scheme providing independent oversight similar to that for other search warrants.
3) High-risk applications could be used in academic research under appropriate conditions.
Enforcing any law requires a regulator with the necessary powers and resources. Who should that be?
The majority of the parties we interviewed, including business users, technology companies and civil society representatives, recommended the Office of the Australian Information Commissioner (OAIC) as the best candidate to oversee facial recognition. In addition, certain sensitive users, such as the military and some security agencies, may need a dedicated oversight regime.
Never before have so many organizations and individuals from civil society, business and government come together in support of facial recognition reform. The backing of the model law by CHOICE and the Australian Technology Council is one sign of this.
Given the significant growth in the use of facial recognition and the growing consensus among stakeholders, the federal attorney general should seize this opportunity and lead national reform by introducing a federal bill as soon as possible. That bill could be based on our model law, and the attorney general should work with the states and territories to harmonize facial recognition law across Australia.
This reform matters in its own right, because we cannot allow facial recognition technology to operate unchecked. It would also show how Australia can use the law to curb dangerous uses of new technology while promoting innovation in the public good.
Our report, “Face recognition technology: Toward a model law,” includes more details on the model law.