Scientists with the National Institute of Standards and Technology (NIST) and Michigan State University say they have developed a new algorithm that could eliminate the need for humans to perform a key step in fingerprint analysis, and thereby reduce human error in forensic investigations.
The algorithm tackles the first step in a latent fingerprint investigation – assessing how much useful information the sample contains. The researchers had 31 fingerprint experts each score 100 fingerprint samples on quality, then fed those scores to a machine-learning algorithm, training it to determine whether a given fingerprint contains enough useful data for biometric matching against another sample.
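The paper's actual model isn't reproduced here, but the crowd-based learning setup it describes can be sketched in a few lines: average the experts' quality scores into a consensus label for each print, then train a classifier to predict that label from image-derived features. Everything below is a hypothetical stand-in – the random feature vectors, the 1–5 scoring scale, and the value threshold are illustrative assumptions, not details from the study.

```python
# Minimal sketch of crowd-based quality learning, NOT the authors' model.
# Expert scores are averaged into a consensus label per print, and a
# classifier learns to predict that label from per-print features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_prints, n_experts, n_features = 100, 31, 16

# Hypothetical stand-ins: feature vectors that would come from the latent
# image (e.g. ridge clarity, minutiae counts), and each expert's 1-5 score.
features = rng.normal(size=(n_prints, n_features))
expert_scores = rng.integers(1, 6, size=(n_prints, n_experts))

# Consensus label: a print has "value" for matching if the experts'
# mean score clears a threshold (3 here, chosen arbitrarily).
has_value = expert_scores.mean(axis=1) >= 3

X_train, X_test, y_train, y_test = train_test_split(
    features, has_value, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With synthetic data the score is near chance; on real image features
# it would estimate how well the model reproduces the expert consensus.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```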
The researchers then put the algorithm to work on a new set of prints, using those prints to search for matches in an Automated Fingerprint Identification System (AFIS). By measuring how often the algorithm's low-scored prints produced erroneous matches, and how often its high-scored prints produced correct matches, they found that it performed slightly better than the average of the human examiners who participated in the study.
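That evaluation logic can likewise be approximated with a short sketch, not the paper's exact protocol: given the algorithm's value/no-value call on each print and whether the corresponding AFIS search returned a correct match, tally the two rates the article describes. The function name and the toy arrays below are illustrative assumptions.

```python
# Hedged sketch of the evaluation described above, not the paper's protocol.
import numpy as np

def quality_call_rates(predicted_value, afis_correct):
    """predicted_value: True where the model deems a print usable.
    afis_correct: True where the AFIS search returned a correct match."""
    predicted_value = np.asarray(predicted_value)
    afis_correct = np.asarray(afis_correct)
    # How often high-scored prints led to correct AFIS matches.
    correct_on_high = afis_correct[predicted_value].mean()
    # How often low-scored prints nonetheless produced erroneous matches.
    erroneous_on_low = (~afis_correct)[~predicted_value].mean()
    return correct_on_high, erroneous_on_low

# Toy illustration with made-up outcomes for eight prints.
pred = np.array([True, True, True, False, False, True, False, True])
hit = np.array([True, True, False, False, True, True, False, True])
print(quality_call_rates(pred, hit))  # (0.80, 0.67) on this toy data
```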
The results suggest that forensic investigators could use the algorithm to determine whether biometric fingerprint matching is likely to be useful in a given investigation, which in turn could help prevent innocent people from being falsely matched to fingerprints found at crime scenes, among other benefits. But the researchers say they first need to train and test the system on many more fingerprint samples – millions more – to ensure that it is reliable.
For now, their initial, encouraging findings are available in the paper "Latent Fingerprint Value Prediction: Crowd-based Learning," published in the journal IEEE Transactions on Information Forensics and Security.
Source: NIST
August 14, 2017 – by Alex Perala