According to a recent report from The Globe and Mail, Canada’s federal government used facial recognition technology on millions of travellers arriving via the country’s largest airport over a six-month period in 2016.
Details regarding the project, referred to as “Faces on the Move” and conducted in partnership with Ottawa-based Face4 Systems Inc., were acquired by The Globe through a freedom of information request. The documents show that the Canada Border Services Agency (CBSA) deployed 31 facial recognition-capable cameras in the international arrivals border control area of Toronto’s Pearson International Airport between July and December of 2016.
According to a statement from the CBSA, the pilot was conducted to identify individuals from an existing database of 5,000 people whom the agency suspected might attempt to enter the country using fake credentials. When the cameras matched an individual’s facial biometric data to the database, an officer on the terminal floor would be notified and the traveller in question would be pulled aside for a “secondary inspection”.
The Globe report points to presentation slides posted by Face4 indicating that the pilot resulted in 47 positive matches, though the CBSA clarified in a statement to news outlets that none of the matches resulted in a deportation, and that the facial recognition technology “would not have been the only indicator used in the traveller’s border clearance process or in determining their admissibility.”
A CBSA spokesperson further revealed that over the course of the six-month trial, the facial recognition cameras were used on somewhere between 15,000 and 20,000 travellers per day, and that a total of nearly three million people passed through the international arrivals control area in that same period.
The news comes at the end of a controversial stretch for the facial recognition industry, which has faced intense scrutiny since a January 2020 front-page story in The New York Times revealed that the startup Clearview AI had been scraping social media platforms to build a facial recognition database that it then sold to law enforcement agencies around the world.
Just one month prior to that, a National Institute of Standards and Technology (NIST) study found that many facial recognition algorithms show a clear racial bias, misidentifying people of color, and especially women of color, more often than they do middle-aged white men.
Though the CBSA outlined the pilot project on its website, the information provided didn’t include when or where the project was being conducted, an omission that some privacy advocates find troubling.
“This was deployed in a context where there was no public discussion in advance, with a technology that’s known to have flaws in terms of both accuracy and, in particular, racial biases,” Tamir Israel, a lawyer at the University of Ottawa’s Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, told The Globe. “In such a high-stakes environment, that’s really concerning.”
CBSA spokesperson Jacqueline Callin released an emailed statement stressing that the agency “takes the issue of personal information and privacy seriously,” while Face4 Systems Inc. CEO Robert Bell said that the pilot was “carefully designed to account for privacy and security at all levels” and that the algorithms were selected with the minimization of racial bias in mind.
Source: The Globe and Mail
July 20, 2021 — by Tony Bitzionis