It said:
A couple of points that I thought were interesting: "Perhaps the most valuable fingerprint-related information I have heard in the past year is FBI Latent Print Examiner Kyle Tom’s ongoing research about NGI candidate scores. His preliminary research showed that when the matching scores of the #1 and #2 candidates have a difference of 1,250 or more, 83.5% of the time it will be an identification.
This is important because it means all agencies should consider implementing a policy that any such 1,250 or more difference in NGI candidate responses should require review by more than one examiner - either because the first examiner made an identification, or because there is an increased chance of an erroneous exclusion. Hopefully, the FBI will publish research on this topic in the future.
Some agencies are of the opinion that allowing examiners to see the matching scores in candidate lists biases them and should be precluded. The current chair of the OSAC Friction Ridge Subcommittee AFIS Best Practices Task Group (Mike French) and I are both of the opinion that AFIS matching scores are an important objective (not subjective) measurement which can lend valuable information to the examination process and aid quality assurance."
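The score-gap policy described in the quote can be sketched in a few lines. This is a minimal illustration, not anything from the FBI or NGI itself: the function name, the list-of-scores input, and the review-flag logic are all hypothetical; only the 1,250 threshold comes from the quoted text.

```python
# Hypothetical sketch of the score-gap review policy described above.
# Only the 1,250 threshold comes from the quoted text; the data
# structures and names here are illustrative assumptions.

SCORE_GAP_THRESHOLD = 1250  # gap between #1 and #2 candidate scores

def needs_second_review(candidate_scores):
    """Return True when the gap between the top two AFIS candidate
    scores meets the threshold, flagging the case for review by a
    second examiner (per the policy suggested in the quoted text)."""
    if len(candidate_scores) < 2:
        return False
    ranked = sorted(candidate_scores, reverse=True)
    return (ranked[0] - ranked[1]) >= SCORE_GAP_THRESHOLD

# Example: top candidate at 3100, runner-up at 1700 -> gap of 1400
print(needs_second_review([3100, 1700, 1500]))  # True
print(needs_second_review([2100, 1600]))        # False (gap is 500)
```

Note that under the quoted policy the flag fires regardless of the first examiner's conclusion, since a large gap followed by an exclusion is exactly the case with an increased chance of error.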
- Is it problematic to have someone who works for an AFIS provider on the board of a best practices committee that is pushing the use of a product they're involved in?
- Considering NGI does not require you to mark a decision, the data on individualization is most likely coming from the FBI itself. Are the practices at the FBI sufficient to extrapolate to other agencies? (e.g., on what standards are their IDs based? Might that metric change if they engage in practices like the 'Show me the Print' thread?)
- What is the correlation between the data and the Quality Metrics in the ULW? If you're running only 10P-quality latents, the data will be skewed.
- What is the correlation with factors like known orientation, pattern type, and presumptive finger position, and how do these affect the number?
- How can AFIS scores be claimed to be objective when they're either the result of manual encoding or, in the case of an LFIS, produced by an encoding/matching algorithm that was itself trained, with significance determined algorithmically? Neither is necessarily objective.
The AFIS score dilemma has even been the topic of some research on its biasing effects, so beware of everything you read on the internet.