A team of researchers from The Hong Kong Polytechnic University (PolyU) and Seoul’s Yonsei University has developed a new sensor that could revolutionize the computer vision industry. The sensor’s design is inspired by the human eye, and the device is able to adapt to changes in lighting conditions in a way that mimics, and in some respects exceeds, the capabilities of the eye itself.
The sensors would dramatically improve the accuracy of computer vision systems, since they would allow those systems to recognize objects in a wider range of conditions. In that regard, the researchers noted that the human eye is highly adaptable, and starts filtering visual information before it reaches the brain. The pupil can expand or contract to adjust the amount of light entering the eye, while the retina interprets that information in real time.
The researchers tried to recreate that functionality in the design of their new sensor. According to the team, the problem with conventional computer vision sensors is that all of the visual information is processed after the fact. Modern algorithms can try to compensate for different lighting levels, but they are still performing those calculations after all of the information has been collected. That creates a delay between an input and a result, slows down overall identification times, and makes computer vision systems less adaptable in rapidly shifting circumstances.
The new system, on the other hand, can “reduce hardware complexity and greatly increase the image contrast under different lighting conditions, thus delivering high image recognition efficiency,” according to lead researcher Dr. Chai Yang. To do so, the sensors rely on an array of dual-layer phototransistors (developed by the researchers) that function as light detectors. The phototransistors are made with atomically thin molybdenum disulphide, a semiconductor material that allows the sensors to control the flow of electrons and create “trap states” where light information can be stored.
The ability to control the flow of electrons also makes the sensor more sensitive to light. The phototransistors essentially act like the eye’s rods and cones, automatically adjusting to the level of brightness to reduce the amount of information that needs to be processed.
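To illustrate the general principle of light adaptation (not the PolyU device’s actual physics), the sketch below uses a simple Naka-Rushton-style response model, a standard way of describing how a photoreceptor’s operating point shifts with ambient light; the function name and parameter values are illustrative assumptions, not part of the published work.

```python
import numpy as np

def adapt_response(intensity, adaptation_level, n=1.0):
    """Naka-Rushton-style response: output saturates, and the
    half-saturation point tracks the adaptation level, compressing a
    huge input range into a narrow, contrast-preserving output range."""
    return intensity**n / (intensity**n + adaptation_level**n)

# Scene intensities spanning several orders of magnitude (arbitrary units)
scene = np.array([1e-2, 1e0, 1e2, 1e4])

# Without adaptation (fixed half-saturation point), dim and bright
# regions are crushed toward 0 or 1 and local contrast is lost.
fixed = adapt_response(scene, adaptation_level=1.0)

# With adaptation, the operating point follows the ambient level,
# so the response stays in its sensitive mid-range at every brightness.
adapted = np.array([adapt_response(i, adaptation_level=i) for i in scene])

print(fixed)    # ~[0.0099, 0.5, 0.9901, 0.9999] -> saturated at the extremes
print(adapted)  # [0.5, 0.5, 0.5, 0.5] -> mid-range response at every level
```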
Natural light spans a dynamic range of 280 dB. The human eye can adapt to see objects across 160 dB of that range, while current computer vision cameras cover a range of 70 dB. The new PolyU sensor, meanwhile, achieves an astonishing 199 dB. The researchers believe that the tech has a wide variety of applications, from facial recognition to self-driving cars.
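For context, a dynamic range quoted in decibels corresponds to a ratio between the brightest and dimmest detectable intensities. The short conversion below assumes the 20 * log10(max/min) convention commonly used for image-sensor dynamic range; the article does not state which convention the figures follow.

```python
import math

def db_to_ratio(db, factor=20.0):
    """Convert a dynamic range in decibels to a max/min intensity ratio,
    assuming dB = factor * log10(max/min)."""
    return 10 ** (db / factor)

for label, db in [("natural light", 280), ("human eye", 160),
                  ("typical camera", 70), ("PolyU sensor", 199)]:
    # e.g. 160 dB -> a brightest-to-dimmest ratio of about 1e8 : 1
    print(f"{label:>15}: {db} dB ~ {db_to_ratio(db):.1e} : 1")
```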
Source: Harbour Times
June 2, 2022 – by Eric Weiss