The Department of Homeland Security (DHS) has signed off on a Privacy Impact Assessment that raises significant concerns about the use of facial recognition by the Department's Immigration and Customs Enforcement (ICE) agency. The Assessment notes that ICE has access to vast amounts of personal and biometric information, and that the existing safeguards may not be enough to prevent the agency from abusing that technology.
As FCW reports, much of the concern stems from the sheer volume of data that is available to ICE. DHS itself currently maintains two biometric databases, the legacy Automated Biometric Identification System (IDENT) and its incoming Homeland Advanced Recognition Technology (HART) replacement. ICE has access to both of those databases.
However, the Office of Biometric Identity Management, which maintains those DHS databases, is taking steps to link them with a slew of other federal, state, local, and commercial databases. The full lineup includes databases run by the FBI, the Department of Defense, and the Department of State, as well as those run by state and local law enforcement agencies and commercial vendors.
The scope of the program, coupled with the involvement of third parties, creates a high risk of a data breach if personal information is mishandled. For example, roughly 100,000 people were affected when a US government subcontractor was breached in 2019; that particular contractor had stored files on its own database in violation of DHS security policies.
The Privacy Impact Assessment went on to warn that ICE could run facial recognition searches outside the purview of its investigations, and that the agency might be using low-quality images that would make the system less accurate and increase racial bias. ICE can also run searches against databases that have not been vetted in "exigent circumstances."
For its part, ICE claims that its searches are carried out with a minimal amount of information to reduce the threat of exposure. Subcontractors must also be able to audit their own systems, and DHS and ICE have established training and rules of behavior for those organizations.
Many privacy advocates believe that those protections are a step in the right direction, but are still insufficient given the agency’s widespread use of facial recognition.
“We need a better accounting of the types of training data that is being used,” said Brookings Institution technology fellow Nicol Turner Lee. Lee noted that many facial recognition systems still exhibit racial bias, and that ICE could exacerbate that problem if its algorithms lead to the “over-profiling and overrepresentation” of minority populations.
DHS has faced harsh criticism for its lack of transparency, and for its efforts to expand its facial recognition program. The department suspended the Trusted Traveler Program in New York after the state passed a law that prevented its DMV from sharing information with federal immigration authorities, while ICE is one of the paying customers of the controversial Clearview AI platform.
Source: FCW
May 28, 2020 – by Eric Weiss