Countries across the Asia-Pacific region are expanding their use of biometric technology to detect and prevent deepfake content, building on existing digital identity infrastructure. India's Ministry of Electronics and Information Technology (MeitY) has launched multiple artificial intelligence initiatives, including a deep learning framework for fake speech detection running from December 2021 to December 2024.
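MeitY has not published the framework's internals, so the sketch below is only a generic illustration of how spectrogram-based fake-speech detection is commonly structured: audio is converted to log-mel features and scored by a small classifier. The architecture, sample rate, and file name here are illustrative assumptions, not details of the MeitY project.

```python
# Illustrative sketch only: a generic spectrogram-based fake-speech classifier.
# It is NOT MeitY's framework; the architecture and file names are placeholders.
import torch
import torch.nn as nn
import torchaudio

# Log-mel spectrogram front end: a common feature representation for
# distinguishing synthetic from genuine speech.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()


class SpoofClassifier(nn.Module):
    """Tiny CNN over log-mel features producing a single 'synthetic' logit."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)


def score_clip(path, model):
    """Return the probability that a WAV clip contains synthetic speech."""
    waveform, sr = torchaudio.load(path)
    if sr != 16000:
        waveform = torchaudio.functional.resample(waveform, sr, 16000)
    features = to_db(mel(waveform.mean(0, keepdim=True)))  # mono log-mel
    with torch.no_grad():
        logit = model(features.unsqueeze(0))  # shape: (batch, channel, mels, time)
    return torch.sigmoid(logit).item()


model = SpoofClassifier().eval()  # trained weights would be loaded in practice
print(score_clip("sample.wav", model))
```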
A separate MeitY-funded project, running from January 2022 to March 2024, developed a prototype tool called FakeCheck. Created by the Centre for Development of Advanced Computing (C-DAC) in Kolkata and Hyderabad, the desktop and web application detects deepfakes without requiring an internet connection. The tool is currently in the testing phase and represents India's first major domestic effort to build deepfake detection technology.
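The sketch below is not FakeCheck itself, whose method has not been detailed publicly, but a rough illustration of how an offline, frame-level video deepfake detector can be put together: frames are sampled, faces are cropped with OpenCV's bundled Haar cascade (no network access required), and a placeholder classifier's per-frame scores are averaged. The model, threshold, and file name are assumptions for illustration.

```python
# Illustrative sketch only: a generic frame-level deepfake scoring pipeline.
# This is NOT FakeCheck's actual method; the model and threshold are placeholders.
import cv2
import torch
import torch.nn as nn
import torchvision.transforms as T

# Haar cascade face detector ships with OpenCV, so the pipeline runs offline.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

PREPROCESS = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


class FrameClassifier(nn.Module):
    """Small stand-in CNN that outputs a single 'fake' logit per face crop."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def score_video(path, model, every_n=10, threshold=0.5):
    """Sample frames, crop the largest detected face, and average fake scores."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = FACE_DETECTOR.detectMultiScale(gray, 1.1, 5)
            if len(faces):
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
                crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logit = model(PREPROCESS(crop).unsqueeze(0))
                scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    mean_score = sum(scores) / len(scores) if scores else 0.0
    return mean_score, mean_score > threshold


model = FrameClassifier().eval()  # in practice, trained weights would be loaded here
print(score_video("sample.mp4", model))
```

Averaging per-frame scores is only one possible design choice; production detectors often add temporal models or artifact-specific features, and a real tool would ship trained weights rather than the untrained stand-in used here.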
Under India’s IndiaAI mission, the government is developing indigenous solutions for AI-related threats, focusing on tools for assessment and regulation. The program encourages educational institutions to undertake AI projects that strengthen the country’s digital infrastructure capabilities, following several high-profile incidents of deepfake content targeting public figures in the region.
In Singapore, the TechX Summit 2024 featured discussions on biometric technology applications, particularly in homeland security. The Biometrics Institute, in collaboration with Singapore’s Immigration & Checkpoints Authority and HTX’s Biometrics & Profiling Centre of Expertise, led sessions on biometric standards development and border crossing applications. The initiative follows Singapore’s recent success in implementing biometric processing at Changi Airport, where immigration screening times have been reduced to just 10 seconds.
The implementation of these technologies has prompted discussion about their privacy implications. "Systems avoiding personal identification are preferable," said Carissa Véliz, associate professor at the University of Oxford's Institute for Ethics in AI, who also expressed concern about the potential expansion of surveillance capabilities.
The deployment of biometric systems has shown varied impacts globally. In Uganda, a 2023 study by the African Center for Media Excellence found that biometric and digital identity programs have enabled increased surveillance capabilities affecting journalism and media operations.
Meanwhile, the emergence of AI-powered tools capable of creating face-swapped videos has underscored the need for detection systems that can identify such manipulated content. The challenge is particularly acute for electoral integrity, with recent incidents of deepfake content being used to influence political campaigns across multiple jurisdictions.
Sources: CyberPeace Blogs, SurveillanceCapitalism, TechX Summit 2024, CyberWire Daily Podcast, Keeping The Public Safe From Festive Scams
—
December 26, 2024 – by the ID Tech Editorial Team