With biometric technologies becoming increasingly commonplace, more academic experts are weighing in on their ethical and regulatory implications. In a lecture at The New York Academy of Sciences, Harvard’s Juana Catalina Becerra Sandoval recently explored the ethics of voice biometrics, particularly their impact on marginalized communities. And in an article recently published in the MIT Sloan Management Review, a trio of academics from the University of Victoria and Queen’s University flag the privacy risks of biometrics along with regulatory considerations.
Vocal Warnings About the Vulnerable
The development and use of voice biometrics raise significant ethical concerns, particularly in how these technologies might affect marginalized communities. In her seminar at The New York Academy of Sciences, Juana Catalina Becerra Sandoval, a PhD candidate at Harvard University and a research scientist at IBM Research, discussed these issues. Her presentation, titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies,” delved into the historical and contemporary implications of voice biometrics.
Becerra Sandoval emphasized that while the technology appears innovative, it is rooted in older ideas about the body and identity, some of which date back to 19th-century eugenic science. These historical underpinnings persist in modern applications, influencing how voice biometrics are developed and deployed. She cautioned that the interests driving the adoption of these technologies – particularly from sectors like finance and security – might not always align with public welfare, potentially leading to over-policing and surveillance of vulnerable populations.
One of the significant concerns Becerra Sandoval highlighted is the assumption that the relationship between a person’s voice and their identity is fixed. This could marginalize individuals whose voices change due to injury, illness, or other factors, leading to exclusion from systems that rely on voice biometrics. She argued for the need to design these technologies with equity in mind, ensuring they serve the needs of all people, rather than merely facilitating corporate or institutional efficiency.
On the other hand, Becerra Sandoval expressed optimism about the potential for AI and voice biometrics to be used ethically if driven by the needs and desires of individuals rather than profit motives. She underscored the importance of a multidisciplinary approach to understanding and developing these technologies, combining historical, technical, and socio-political perspectives to create more inclusive and equitable systems.
Navigating the Ethical Boundaries of Biometric Data
The ethics surrounding data collected by biometric systems are also of growing importance as these technologies continue their rapid expansion across a range of sectors.
In their article, titled “Managing the Human Risks of Biometric Applications,” academics Andrew Park and Jan Kietzmann of the University of Victoria and Jayson Killoran of Queen’s University in Kingston, Ontario, argue that the increasing integration of biometric technologies into various applications raises significant ethical concerns regarding privacy and human dignity.
The authors argue that as the use of these technologies expands beyond simple identity verification into areas like behavioral analysis, the potential for misuse and erosion of consumer trust grows.
Kietzmann, Killoran, and Park write of the importance of balancing the use of biometric technologies with privacy concerns, arguing that while biometrics can enhance security and efficiency, they also pose significant privacy risks if not managed properly. These risks include unauthorized data access, misuse, and the potential for surveillance and discrimination.
Regarding the ethical deployment of biometric technologies, they suggest that organizations should focus on maintaining the dignity and respect of individuals when implementing these systems. This, they argue, involves considering less intrusive alternatives to direct biometric scanning, such as using object recognition technologies.
One recommendation they lay out is the exploration of technological alternatives that do not require the collection of sensitive biometric data. For instance, instead of using facial recognition, the authors posit that organizations could employ other methods like typing patterns or object recognition to achieve similar goals without compromising personal privacy.
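To make the typing-pattern idea concrete, here is a minimal sketch of how keystroke timing could stand in for a more invasive biometric check. This is our own illustration rather than code from the article, and the function names, sample values, and threshold are hypothetical:

```python
# Illustrative sketch only: a toy typing-pattern (keystroke-dynamics) check.
# Function and variable names are hypothetical, not drawn from the article.
from statistics import mean, stdev

def enroll(timing_samples):
    """Build a simple profile from several samples of inter-key intervals (seconds)."""
    # Each sample covers the same passphrase, so the interval lists line up by position.
    per_position = list(zip(*timing_samples))
    return [(mean(p), stdev(p)) for p in per_position]

def verify(profile, attempt, threshold=2.0):
    """Accept the attempt if every interval falls within `threshold` std devs of the profile."""
    deviations = [
        abs(t - m) / s if s > 0 else 0.0
        for t, (m, s) in zip(attempt, profile)
    ]
    return max(deviations) <= threshold

# Example: three enrollment samples and two login attempts for a 5-keystroke passphrase.
samples = [
    [0.21, 0.34, 0.18, 0.40],
    [0.23, 0.31, 0.20, 0.38],
    [0.19, 0.36, 0.17, 0.42],
]
profile = enroll(samples)
print(verify(profile, [0.22, 0.33, 0.19, 0.39]))  # timing close to the profile -> True
print(verify(profile, [0.60, 0.10, 0.55, 0.05]))  # very different rhythm -> False
```

The point of such an approach is that the system stores only timing statistics rather than an image of a face or a recording of a voice.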
The article also recommends the adoption of data management practices that limit the scope and scale of biometric data collection. This includes using smaller, localized data processing models that do not require data to be sent to external servers, thereby reducing the risk of data breaches and enhancing individual control over personal information.
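As a rough illustration of that localized pattern, the sketch below keeps the enrolled template and the comparison entirely on the device, so only a pass/fail decision is ever shared. Again, this is our own example rather than the authors’ implementation, and the class name, feature vectors, and threshold are hypothetical:

```python
# Illustrative sketch only: a toy "local matching" pattern in which biometric
# features are compared on the device and only a yes/no decision leaves it.
# Class and method names are hypothetical, not drawn from the article.
from math import sqrt

class LocalMatcher:
    """Keeps the enrolled template in local storage; nothing is uploaded."""

    def __init__(self, template, threshold=0.9):
        self._template = template      # stays on the device only
        self._threshold = threshold

    def _cosine(self, a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def verify(self, sample):
        # Only this boolean ever leaves the matcher; the raw sample is discarded.
        return self._cosine(self._template, sample) >= self._threshold

# Example with made-up feature vectors standing in for locally computed embeddings.
matcher = LocalMatcher(template=[0.12, 0.88, 0.45, 0.31])
print(matcher.verify([0.13, 0.86, 0.44, 0.33]))  # similar sample -> True
print(matcher.verify([0.91, 0.05, 0.12, 0.77]))  # dissimilar sample -> False
```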
The article also stresses the need for robust regulatory and ethical frameworks governing the use of biometric technologies, including compliance with privacy laws and the adoption of best practices to prevent adverse outcomes for both businesses and individuals.
Bringing Ethical Considerations to Bear on Policy
Attempts to address these concerns are underway in some jurisdictions, where governments are taking measures to ensure such ethical considerations are factored into authorities’ decision-making.
In the UK, the Biometrics and Forensics Ethics Group (BFEG) recently announced the appointment of six new members: Giles Herdale of Herdale Digital Consulting; Matt James, an associate professor of bioethics and medical law; Criminal Law Commissioner Penney Lewis; cybersecurity consultant Elisabeth Mackay; Malcolm Oswald, Director of Citizens Juries; and Marion Oswald, a university professor and Senior Research Associate at the Alan Turing Institute.
Established in 2017 as the successor to the National DNA Database Ethics Group, the BFEG provides independent ethical advice on the use, collection, and retention of biometric and forensic materials. The group operates with transparency, publishing its findings without requiring Home Office approval, and its work is a mix of self-initiated projects and requests from the Home Office.
For more on the topics of transparency and ethics in the biometrics space, listen to our conversation with Youzec Kurp, VP of Identity & Biometric Solutions at Thales. Youzec joined us on our ID Talk podcast earlier this year to discuss Thales’ TrUE Biometrics initiative – a philosophical approach to identity technologies with a strong moral core that positions education, collaboration, and compliance as guiding principles for innovation.
Source: MIT Sloan Management Review, The New York Academy of Sciences
–
August 31, 2024 – by Tony Bitzionis