A research team from Imperial College London and AI and machine learning startup FaceSoft.io has announced a new solution called AvatarMe that can reconstruct a photorealistic 3D bust from a single source photograph.
The team, who announced their work in a paper accepted to the Conference on Computer Vision and Pattern Recognition (CVPR) 2020, say that AvatarMe outperforms similar existing systems and can generate high-resolution 3D faces with detailed reflectance from low-resolution images.
As VentureBeat reports, the team captured pore-level reflectance maps of 200 different people's faces using a sphere rig consisting of 168 LED lights and 9 DSLR cameras. Those maps were used to train GANFIT, a generative adversarial network (GAN) model, to create its own realistic face maps while simultaneously optimizing them for a facial recognition identity match with the source image.
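That identity-matching step can be pictured as optimizing the code that drives a face generator against a fixed face-recognition network, so the generated face keeps the same identity embedding as the source photo. The Python sketch below illustrates the idea in PyTorch; the generator, embedding network, dimensions, and data are all hypothetical placeholders, not GANFIT's actual architecture or training setup.

```python
# Illustrative sketch of an identity-matching objective: optimize a latent
# code so a face-recognition network gives the rendered face the same
# identity embedding as the source photograph. All networks here are
# untrained placeholders, not the models used by the AvatarMe authors.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Linear(64, 256)        # placeholder: latent code -> face map
face_embedder = nn.Linear(256, 128)   # placeholder: face map -> identity embedding

# Freeze network weights; only the latent code is optimized.
for p in list(generator.parameters()) + list(face_embedder.parameters()):
    p.requires_grad_(False)

source_image = torch.randn(1, 256)    # stand-in for the input photograph
latent = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([latent], lr=0.01)

target_id = face_embedder(source_image).detach()  # identity of the source photo

for step in range(200):
    rendered = generator(latent)
    rendered_id = face_embedder(rendered)
    # Identity loss: pull the rendered face's embedding toward the source's.
    loss = 1 - F.cosine_similarity(rendered_id, target_id).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```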
Like any GAN, the model consists of two parts: a generator that creates samples, and a discriminator that attempts to tell the generated samples apart from real-world ones. The two networks are trained against each other until the discriminator can no longer distinguish generated samples from real ones any better than chance, i.e. 50% expected accuracy.
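In code, that back-and-forth is an alternating optimization between the two networks. Below is a minimal, generic GAN training loop in PyTorch, assuming toy placeholder networks and random stand-in data rather than the face maps used in the paper.

```python
# Minimal, generic GAN training loop: alternate discriminator and generator
# updates. Network sizes and "real" data are toy placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 256

# Generator: maps random noise to a synthetic sample (e.g. a texture map).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real training samples
    fake = G(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

At equilibrium, the discriminator's output hovers around 0.5 for both real and generated samples, which is the 50%-accuracy stopping point described above.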
Though impressive, AvatarMe still has limitations, the research team acknowledges. One of its major shortcomings stems from training data that did not contain enough samples from various ethnicities; as a result, the model performs poorly when trying to reconstruct the faces of individuals with some skin types, highlighting the importance of diverse sampling in machine learning.
Source: VentureBeat
June 18, 2020 – by Tony Bitzionis