A group of prominent academics and researchers is advocating for a new approach to digital ID and online identity in an age when artificial intelligence has effectively passed the Turing Test.
In a paper titled “Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online,” the researchers examine the growing challenge of online deception, especially with the rise of advanced artificial intelligence. The authors argue that as AI becomes more capable of mimicking human behavior, distinguishing real humans from AI-powered entities online is increasingly difficult. This creates a significant risk of AI being used for malicious purposes, such as spreading disinformation or committing fraud.
To address this issue, the paper proposes the concept of “personhood credentials” (PHCs), which are digital credentials that allow users to prove they are real people without revealing any personal information.
The paper emphasizes that traditional methods of combating online deception, such as CAPTCHAs or identity verification based on personal information, are becoming inadequate in the face of sophisticated AI. CAPTCHAs, for example, can now be solved by AI, while identity verification methods often compromise user privacy.
PHCs, on the other hand, offer a more privacy-preserving solution: users demonstrate their personhood through cryptographic proofs that disclose neither their identity nor any other personal data. These credentials could be issued by trusted institutions, such as governments or other established organizations, and used across various online services.
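Schemes of this kind typically build on cryptographic techniques such as blind signatures or zero-knowledge proofs. As a rough illustration of the general idea, not the paper’s specific construction, the Python sketch below shows a classic Chaum-style blind signature: the issuer signs a credential token without ever seeing it, so the signed token it later verifies cannot be linked back to the issuance session. The RSA parameters are deliberately tiny and insecure, chosen only for readability.

```python
# Toy Chaum-style RSA blind signature: the issuer signs a credential
# token without seeing it, so the verified token cannot be linked
# back to issuance. NOT secure and NOT the paper's exact construction.
import secrets
from math import gcd

# --- Issuer's toy RSA key (real systems use 2048+ bit primes) ---
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

# --- User side: create and blind a random credential token ---
token = secrets.randbelow(n)             # the personhood token
while True:
    r = secrets.randbelow(n)             # blinding factor
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n     # the issuer sees only this

# --- Issuer side: sign the blinded value (token stays hidden) ---
blinded_sig = pow(blinded, d, n)

# --- User side: unblind to obtain a valid signature on the token ---
sig = (blinded_sig * pow(r, -1, n)) % n

# --- Any verifier: check with the issuer's public key (n, e) ---
assert pow(sig, e, n) == token
print("credential token verified; issuer never saw it")
```

Because the issuer signs blindly, even an issuer that later colludes with a service cannot match the verified token to the session in which it was issued.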
One Person, One Credential, and Many Issuers
One of the key benefits of PHCs is their ability to limit the scale of deceptive activity by enforcing a one-person, one-credential policy. This would prevent bad actors from creating multiple fake identities to carry out large-scale attacks or manipulations. Moreover, PHCs provide unlinkable pseudonymity, meaning that a user’s interactions across different services cannot be traced back to them or linked together, even if the service providers or issuers collude. This ensures that users can maintain their privacy while proving their authenticity.
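As a toy illustration of unlinkable pseudonymity, the sketch below derives a distinct, stable identifier for each service from a single credential secret using a keyed hash. The `credential_secret` and `pseudonym` helper are hypothetical; a real PHC scheme would bind such pseudonyms to the issued credential with zero-knowledge proofs rather than exposing raw HMAC outputs. The sketch shows only the linkage properties.

```python
# Minimal sketch of per-service pseudonyms: one secret yields a
# different, stable identifier for each service, and the identifiers
# cannot be correlated across services without the secret.
import hashlib
import hmac
import secrets

credential_secret = secrets.token_bytes(32)  # held only by the user

def pseudonym(service_id: str) -> str:
    """Derive a stable, service-specific pseudonym from the secret."""
    return hmac.new(credential_secret, service_id.encode(),
                    hashlib.sha256).hexdigest()

# Same service -> same pseudonym (supports one person, one account);
# different services -> unrelated pseudonyms (supports unlinkability).
assert pseudonym("forum.example") == pseudonym("forum.example")
assert pseudonym("forum.example") != pseudonym("shop.example")
print(pseudonym("forum.example")[:16], pseudonym("shop.example")[:16])
```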
The paper considers biometrics as one method of ensuring that a credential is tied to a real individual, by measuring unique physical attributes such as fingerprints, irises, or facial features. However, it also highlights several challenges with biometric systems: risks to the integrity of the hardware used to collect biometric data, privacy concerns around the storage and handling of this sensitive information, and the possibility of accuracy biases across different demographic groups.
The paper also discusses the potential challenges in implementing PHCs, such as ensuring equitable access to these credentials, maintaining free expression, and preventing the concentration of power among credential issuers. The authors suggest that a decentralized approach with multiple issuers could help mitigate these risks. They also highlight the need for robust systems to manage and revoke credentials in case of misuse or theft, without compromising the privacy of legitimate users.
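The paper does not prescribe a particular revocation mechanism. One common privacy-preserving pattern, sketched hypothetically below, is to make credentials short-lived and renewable each epoch: revoking a holder then reduces to refusing renewal, and the issuer never needs to learn where the credential was used. The names here (`Issuer`, `Credential`, `EPOCH_SECONDS`) are illustrative assumptions, not the paper’s design.

```python
# Hypothetical sketch of epoch-based revocation: credentials expire
# after each epoch, so the issuer revokes a holder simply by refusing
# to renew, without ever tracking where the credential was used.
from dataclasses import dataclass
import time

EPOCH_SECONDS = 7 * 24 * 3600  # e.g., weekly renewal (illustrative)

@dataclass
class Credential:
    holder_id: str   # known only to the issuer at issuance
    epoch: int       # epoch in which the credential is valid

class Issuer:
    def __init__(self) -> None:
        self.revoked: set[str] = set()

    def current_epoch(self) -> int:
        return int(time.time()) // EPOCH_SECONDS

    def renew(self, holder_id: str) -> Credential | None:
        # Refusing renewal is the only revocation step needed.
        if holder_id in self.revoked:
            return None
        return Credential(holder_id, self.current_epoch())

issuer = Issuer()
cred = issuer.renew("alice")
assert cred is not None and cred.epoch == issuer.current_epoch()
issuer.revoked.add("alice")
assert issuer.renew("alice") is None  # revoked: no credential next epoch
```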
The ‘Overwhelming’ Threat of AI-powered Deception
In addition to addressing current challenges, the paper explores the broader implications of PHCs for the future of the internet. The authors argue that as AI continues to advance, it is crucial to develop tools like PHCs to maintain trust in online interactions. They stress the importance of involving the public, policymakers, technologists, and standards bodies in the development and deployment of PHCs to ensure that these tools are effective and widely adopted.
The paper concludes with actionable recommendations for advancing the use of PHCs. These include investing in the development and piloting of PHC systems, encouraging their adoption across various services, and reexamining existing standards for identity verification and authentication in light of the challenges posed by AI.
The authors warn that without proactive measures, there is a significant risk that AI-powered deception could overwhelm the internet, prompting invasive countermeasures that would undermine the principles of online freedom and privacy.
Prominent Voices
The paper features several notable authors, including experts from leading institutions in artificial intelligence, technology, and privacy. Among them are Steven Adler from OpenAI, Zoë Hitzig from the Harvard Society of Fellows, and Shrey Jain from Microsoft, all of whom contribute significant expertise in AI and technology. Wayne Chang from SpruceID and Renée DiResta from the Stanford Internet Observatory bring a focus on decentralized identity and the misuse of information technology, respectively.
Other prominent authors include Sean McGregor from UL Research Institutes, Brian Christian, a well-known author and researcher, and Andrew Critch from the Center for Human-Compatible AI at UC Berkeley, who focuses on AI safety and ethics. The authors’ diverse backgrounds underscore the paper’s interdisciplinary approach to the intertwined challenges of AI-powered deception and online privacy.
Source: arXiv
August 20, 2024 – by Cass Kennedy and Alex Perala