New method- Examiner-based likelihood ratios


Posts: 7
Joined: Wed Oct 05, 2016 1:28 pm

New method- Examiner-based likelihood ratios

Post by tombusey »

We have developed a new method to convert black box/error rate data into true likelihood ratios. We've applied it to fingerprint data and found that the likelihood ratios are much smaller than previously reported. In our view, in many cases the term "Identification" is not justified as a conclusion and may overstate the strength of support for the same-source proposition. The good news is that this approach addresses the issue by providing a quantitative likelihood ratio that calibrates images in the database against their actual strength of support, and it may also work in casework. Our approach eliminates the problem of erroneous identifications, because no conclusions are reported. We also eliminate the problem of transposing the conditional, because the likelihood ratio is a statement about the strength of support for a proposition, not the likelihood of that proposition.
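In rough terms, the likelihood ratio for a given examiner response is the probability of seeing that response for mated (same-source) pairs divided by the probability of seeing it for non-mated pairs. Here is a toy sketch of that calculation; the response scale and all counts are made up for illustration and are not our actual data:

```python
# Toy illustration: estimate an examiner-based likelihood ratio from
# black-box-style response counts. All numbers are hypothetical.

# Counts of examiner responses on a 1-9 similarity scale, tallied
# separately for mated (same-source) and non-mated (different-source) pairs.
mated_counts = {1: 2, 2: 3, 3: 5, 4: 10, 5: 20, 6: 40, 7: 80, 8: 140, 9: 200}
nonmated_counts = {1: 200, 2: 140, 3: 80, 4: 40, 5: 20, 6: 10, 7: 5, 8: 3, 9: 2}

def likelihood_ratio(response, mated, nonmated):
    """LR = P(response | mated) / P(response | non-mated)."""
    p_mated = mated[response] / sum(mated.values())
    p_nonmated = nonmated[response] / sum(nonmated.values())
    return p_mated / p_nonmated

# LR for a response of 9 (with these hypothetical counts, about 100)
print(likelihood_ratio(9, mated_counts, nonmated_counts))
```

The real modeling in the paper is considerably more involved (smoothing, calibration), but the ratio-of-response-probabilities idea is the core.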

The paper is available at the following link until February 24th, 2023; after that, email us for a reprint:

The math and modeling are pretty complicated, so we've put together a presentation that explains things:

We would love to hear your thoughts on this paper, here on CLPEX or via email. We may also do a Zoom tutorial/Q&A in the coming months to answer questions.

Tom Busey, Indiana University
Meredith Coon, Baltimore Police Department
Posts: 35
Joined: Tue Aug 07, 2018 7:36 am

Re: New method- Examiner-based likelihood ratios

Post by 4n6Dave »

It is an interesting approach, but does this run into some of the same problems as the FRStat model, where it doesn't really provide a measure of the strength of the evidence itself but instead, in this case, the strength of our opinion relative to what other LP examiners might have said? That is, "my opinion has this likelihood of occurring given a mated vs. non-mated pair" versus "this amount of agreement has this likelihood of occurring given a mated vs. non-mated pair." I know that is kind of splitting hairs, but it matters when comparing our LRs to DNA.

For example, if I see 9 "mu" on your scale and I think it's an identification, and the score of 9 ends up having an LR of 100, does the positive predictive value of thinking it is an ID also need to be factored in? When I say ID I am correct 99.9% of the time, but when I say exclusion I am only right 93% of the time; or does that not factor into the curves?
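To make the LR-vs-PPV distinction concrete: an LR describes the evidence itself, while a positive predictive value also depends on the prior odds of a mated pair. A toy sketch, with every number made up rather than taken from the paper:

```python
# Toy illustration of why an LR and a positive predictive value differ.
# All numbers are hypothetical.

def posterior_odds(prior_odds, lr):
    # Bayes in odds form: posterior odds = prior odds * likelihood ratio
    return prior_odds * lr

def prob_from_odds(odds):
    # Convert odds in favour of same-source into a probability
    return odds / (1 + odds)

lr = 100.0        # strength of the evidence (the score-of-9 example)
prior_odds = 1/9  # e.g. one mated pair among ten candidate comparisons

# Posterior probability of same source: about 0.92 here, even though
# the same LR of 100 would give a much higher posterior with strong priors.
print(prob_from_odds(posterior_odds(prior_odds, lr)))
```

So the same LR can correspond to very different "chance I'm right" numbers depending on the prior, which is exactly why the two quantities shouldn't be conflated.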

You mentioned increasing the "of value" threshold to increase the LR. Would a two-tier approach be appropriate, where for really clear comparisons you would use one set of curves and for the more difficult comparisons another set?

Did these curves include comparisons where the prints were clearly different? If you are only including close prints (those will be the interesting ones), would that change the LRs compared to including all comparisons? A lot of our exclusions are very clear, and my intuition says that would push the red curve further to the left. Does the number of exclusions in the database matter? Even for a matched pair of fingerprints in casework, we would have at least 9 exclusions and 1 identification.
If the LR from this dataset only applies when examiners think there is at least some similarity, wouldn't that push the LR numbers down?