Oosto continues to advocate for ethical uses of facial recognition technology. To that end, the company participated in a recent conference at Fordham University that specifically addressed the ethical and regulatory issues surrounding facial recognition applications and their development.
The symposium – titled “Ethical Vision Artificial Intelligence: Creating an Effective AI Compliance Framework” – took place on November 29, and was hosted by Shlomit Yanisky-Ravid, a visiting professor at Fordham Law School and a fellow at Yale Law School. As host, Yanisky-Ravid noted that facial recognition technology is becoming increasingly popular with law enforcement agencies all over the world, and warned that it could violate people’s civil rights if it is integrated into body cameras and state surveillance apparatuses.
Unfortunately, the lack of a strong regulatory framework means that there is little to deter those potentially invasive uses of facial recognition technology. The conference was set up to bring representatives from the public and private sectors together to discuss guidelines that would allow facial recognition development to proceed while still protecting the privacy of individuals, with Yanisky-Ravid describing it as an incubator that would advance research and policy discourse.
“Our goal is to fill the existing gap resulting from the lack of US laws and regulations relating to AI systems,” said Yanisky-Ravid. “We share the same goals in establishing ethical-legal principles, guidelines and norms. These principles should be based upon fairness, equality, privacy, responsibility, accountability, transparency and accuracy of AI systems.”
Oosto Chief Marketing Officer Dean Nicolls served as an industry representative during the event. He introduced the concept of a scale of sensitivity, noting that using facial recognition to unlock a smartphone is not nearly as invasive (or troubling) as a mass surveillance scheme. With that in mind, he argued that a good facial recognition policy needs to be nuanced enough to account for different applications of the technology.
“The media’s focus on law enforcement’s use of facial recognition and the wrongful arrests resulting from its application have cast a negative perception of facial recognition technology — even though these examples represent a small fraction of the total use cases in production,” said Nicolls.
The participants did suggest that the private sector needs to accept some accountability for its own actions, and that businesses should work with the government to create ethical guidelines before facial recognition gets out of hand. For its part, Oosto addressed the issue directly in a recent e-book that makes the case for the ethical use of facial recognition. The company has also been openly critical of Clearview AI, whose unethical data collection practices have run afoul of privacy laws in several countries. Oosto itself emphasized the need to mitigate bias in a separate Fordham facial recognition conference at the beginning of the year.
December 8, 2021 – by Eric Weiss