Researchers from Peking University have developed a deep learning model that estimates age from 3D face scans. The model was trained on a collection of non-registered 3D face point clouds, according to Tech Xplore.
To protect privacy, they introduced a method called coordinate-wise monotonic transformation. This technique treats a 3D face scan as a cloud of data points, each with X, Y, and Z coordinates defining its location in space, and modifies these coordinates independently. It doesn’t scramble the points or change their order; instead, it applies a monotonic mathematical function (such as an exponential or a logarithm) to each individual coordinate value.
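To make the idea concrete, here is a minimal sketch of a coordinate-wise monotonic transformation in Python. The specific functions below (exponential, shifted logarithm, cubing) are illustrative assumptions, not the ones used in the paper; the key property is that each is strictly increasing, so the ordering of points along every axis is preserved.

```python
import numpy as np

def coordinate_wise_monotonic(points: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud by applying an independent,
    strictly increasing function to each coordinate axis.

    The function choices here are hypothetical examples."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x_t = np.exp(0.5 * x)          # exponential: strictly increasing
    y_t = np.log1p(y - y.min())    # shifted logarithm: strictly increasing
    z_t = z ** 3                   # odd power: strictly increasing
    return np.stack([x_t, y_t, z_t], axis=1)

# A tiny synthetic point cloud standing in for a face scan.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
transformed = coordinate_wise_monotonic(cloud)

# Because each per-axis function is monotonic, the rank order of
# points along every axis is unchanged by the transformation.
for axis in range(3):
    assert np.array_equal(np.argsort(cloud[:, axis]),
                          np.argsort(transformed[:, axis]))
```

Note how the absolute coordinate values (the "face shape") change, while rank-based relationships along each axis survive, which is the kind of structure an age-estimation model could still exploit.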
The transformation protects privacy by disguising the original face: the adjustments make it difficult to recognize the person from the scan, much like blurring a photograph. However, the mathematical functions are chosen carefully so they don’t significantly alter the information relevant to age estimation, such as the distances between facial features or wrinkle patterns.
In essence, it’s like applying a mathematical filter that protects privacy while maintaining the details needed for accurate age prediction.
The researchers’ model remained accurate to within 2.5 years of a person’s actual age even after applying these transformations. Notably, the transformed faces were significantly harder for both humans and machines to recognize, suggesting the method disrupts facial recognition systems without impairing age estimation.
Based on these results, the researchers proposed guidelines for managing facial data centers, advocating for coordinate-wise monotonic transformations and selective data sharing to achieve a balance between accurate data analysis and strong privacy protection. The full paper has been published in Science China Life Sciences.
Source: Tech Xplore
April 24, 2024 – by Ali Nassar-Smith