Australians are broadly okay with facial recognition being used by police and healthcare providers, but get squeamish when private tech companies get involved, according to findings published in a new academic report.
The report, titled “Australian Public Attitudes to Facial Recognition Technology” and produced by researchers from Monash University, the Australian National University, and Deakin University, analyzes the Australian public’s understanding, experiences, and opinions regarding facial recognition technology (FRT). Based on a nationally representative survey of 2,006 Australian adults conducted in April and May 2024, the study reveals a significant gap in public knowledge about FRT despite widespread awareness of the term. Most respondents reported knowing little about the technology, with only a minority — primarily those with computer science or IT backgrounds — indicating they knew “a fair bit” or “a lot.”
Support for FRT varies significantly depending on its application. A majority of respondents expressed support for using FRT for age verification in online gambling (sixty-one percent) and pornography (fifty-one percent). Similarly, fifty-seven percent supported the use of FRT for accessing government services, while eighteen percent opposed it.
Experiences with FRT are most common in contexts such as unlocking smartphones (fifty-nine percent) and airport security (thirty-six percent). The public generally perceives FRT as accurate, with fifty-eight percent considering it “very accurate” or “accurate,” though concerns about misuse and privacy are prevalent.
Trust in institutions using FRT also varies. The police and healthcare organizations are trusted the most, with sixty percent and fifty-seven percent of respondents, respectively, expressing trust in these entities to use FRT responsibly. In contrast, retail outlets and tech companies like Google and Facebook are trusted the least, with only nineteen percent and seventeen percent of respondents, respectively, expressing trust.
Public opposition is strongest against using FRT in workplaces for monitoring productivity (sixteen percent support) and in retail outlets for personalized advertising (fifteen percent support).
The report highlights a strong demand for transparency and informed consent regarding FRT use. Eighty-nine percent of respondents support being informed each time FRT is used in public spaces, with forty-five percent advocating for notifications at entrances and a small minority (two percent) believing notification is unnecessary. The public is hopeful about FRT’s potential for improving safety and solving crimes but remains concerned about privacy, data security, and potential misuse.
The report underscores the need for robust public education and deliberation on FRT, emphasizing the importance of developing informed consent models and clear regulatory frameworks. As lead author Professor Mark Andrejevic notes in an article for The Conversation, Australian states have built extensive face databases from public records, which could be used for digital ID matching, while the Australian Federal Police have used facial recognition technology from the controversial firm Clearview AI.
Andrejevic argues that there is an urgent need for better public education about the technology, and for legislation aimed at minimizing the risk of fostering “an automated surveillance society” while attempting to reap the benefits of FRT.
Source: The Conversation
July 30, 2024 – by Cass Kennedy