Privacy advocates are in an uproar in the wake of a bombshell report from the New York Times. The report details the activities of a company called Clearview AI, an American startup that has quietly provided facial recognition services to more than 600 law enforcement agencies.
Unlike other facial recognition tools – which often limit searches to government databases that contain official images like mug shots and license photos – Clearview allows law enforcement officers to upload a suspect’s photo and search the entire web for a potential match. Clearview’s database includes more than 3 billion unique images, many of which are pulled from social media platforms like Facebook, YouTube, and Twitter. (For reference, the FBI’s database only had 641 million images as of last April.)
Clearview has also been operating without any meaningful quality control or public scrutiny, and has given critics several reasons to question its operational ethics. The company scrapes images from social media platforms like Facebook and Twitter, many of which have terms of service that explicitly forbid that kind of data collection. Meanwhile, its facial recognition algorithm has not been tested by an independent body like NIST, which raises questions about its accuracy. At the very least, the law enforcement agencies that license the system appear to be doing so without vetting its error rates or potential biases.
In that regard, Clearview has shown far less restraint than its larger peers. Major corporations like Microsoft have avoided facial recognition deals with law enforcement specifically because of concerns about mass surveillance. Clearview seems to have deliberately moved to fill that void, marketing its app directly to law enforcement agencies with free trials and cheap annual licenses.
Clearview was founded by Hoan Ton-That and Richard Schwartz, the latter of whom served as an aide during Rudy Giuliani’s tenure as mayor of New York City. The company has raised $7 million from investors, including $200,000 from Peter Thiel. It has leveraged political contacts to raise awareness and attest to the legality of its platform.
New laws like the California Consumer Privacy Act could complicate those assurances, but privacy advocates argue that much stronger legislation (like San Francisco’s municipal ban on facial recognition) is needed to curtail Clearview’s unregulated activity. Without it, they warn, a mass surveillance state is virtually inevitable.
“Absent a very strong federal privacy law, we’re all screwed,” said Stanford Law Professor Al Gidari.
Law enforcement agencies are already using Clearview in active criminal investigations. Any image uploaded for a Clearview search is added to the company’s database, and at the moment there is no way for people to have their images removed. The company has indicated that it is working on an opt-out tool, though it would only remove an image that has already been deleted from its original source on the web.
Source: The New York Times
January 20, 2020 – by Eric Weiss