The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by josher89 »

Article for review...
The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence 2018.pdf
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

Re: The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by ER »

Here is a summary of this examiner's results on the proficiency test:
Accurate Identifications: 98 out of 100 pairs
Erroneous Identifications: 2 out of 100 pairs
Error Breakdown:
Mistaken Matches: 1
Mistaken Non‐matches: 1
Am I reading this right? Did the study design present errors as "Mistaken Matches" and "Mistaken Non-matches" and then lump the two together as "Erroneous Identifications"?!?

So with this being the 'both'-errors condition, I'm assuming that even the condition with 0 "Mistaken Matches" also had up to 34 of the 100 samples labeled as "Erroneous Identifications".

This is just an initial read, and maybe I'm missing something, but this fundamental misunderstanding of latent print examination terms raises serious doubts about the usefulness of the entire study. To spell out the distinction the paper seems to blur, see the sketch below.
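A minimal Python sketch of how the discipline normally keeps the two error types separate (the function and variable names are mine; only the counts come from the quoted testimony):

def error_rates(mistaken_matches, mistaken_nonmatches, total_pairs):
    # Erroneous identifications (false positives) and erroneous exclusions
    # (false negatives) are distinct error types and are reported separately.
    return {
        "erroneous_identification_rate": mistaken_matches / total_pairs,
        "erroneous_exclusion_rate": mistaken_nonmatches / total_pairs,
    }

# The quoted testimony: 1 mistaken match and 1 mistaken non-match in 100 pairs.
print(error_rates(1, 1, 100))
# {'erroneous_identification_rate': 0.01, 'erroneous_exclusion_rate': 0.01}
# The study instead pools both under "2 Erroneous Identifications",
# which is exactly the terminology clash described above.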
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by Boyd Baumgartner »

I would agree with ER about the utility of the study, but for different reasons. Talking about errors lacks any real teeth without also talking about what constitutes an error, which types of comparisons produce errors, who is producing the errors, what standards are being applied, and what conclusions are being reported.

Aside from all the directions one could be pulled in by the articles linked above, I'm not sure how relevant proficiency tests even are to the case at hand. I think the paper nails it when it cites:

The consensus among forensic evidence scholars is that the fingerprint comparisons found on proficiency tests commonly used by American crime laboratories are considerably less challenging than fingerprint comparisons found in real case work (see, e.g., Cole, 2005; Koehler, 2017; President's Council of Advisors on Science and Technology (PCAST), 2016). Indeed, the head of the Collaborative Testing Service (CTS), a leading provider of forensic proficiency tests, has conceded that "easy tests are favored by the forensic community" (PCAST, 2016, p. 57). The latent fingerprint impressions used in proficiency tests tend to be of higher quality than those obtained from a typical crime scene, and typically the tests are declared tests (i.e., known to be tests rather than real cases) that are not monitored by an independent administrator and thus may sometimes involve collaboration among analysts to arrive at answers to the test.
(emphasis mine)

For case-specific error aversion, wouldn't it be more informative to a jury to know when a latent comparison has certain qualities that might be error inducing? We've tried to do this by citing the factors that make a comparison complex and then performing additional QA on those conclusions. In addition to exclusion guidelines (self-evident orientation + low-ambiguity data + target groups), we've also tried to put some transparency around the conditions contained in an identification, such as: features not commonly used (pore placement, shape/ridge edge shape), ambiguous data (it could be interpreted differently by different examiners), and/or limited data (examiners are bound to the same data in the print, and it's right at the edge of sufficiency). A rough sketch of what flagging those conditions might look like follows below.
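To make that concrete, here's a hypothetical Python sketch of that kind of complexity flagging (the factor names mirror the list above; the scheme itself is illustrative, not our actual documentation system):

from dataclasses import dataclass

@dataclass
class LatentComparison:
    uses_uncommon_features: bool  # e.g., pore placement, ridge edge shape
    ambiguous_data: bool          # could be read differently by different examiners
    limited_data: bool            # right at the edge of sufficiency

def needs_additional_qa(c: LatentComparison) -> bool:
    # Any one error-inducing quality routes the conclusion to additional QA.
    return c.uses_uncommon_features or c.ambiguous_data or c.limited_data

# Example: an identification leaning on pore placement gets flagged for review.
print(needs_additional_qa(LatentComparison(True, False, False)))  # True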


For me, this strategy gets to the heart of one of my critiques of likelihood ratios in fingerprint examination. Specifically, when error aversion is needed most, i.e., in complex comparisons, statistical models are least effective, because they rely on the most subjective or qualitative aspects of the comparison. We saw this anecdotally in the Show me the print! thread. A statistic calculated from what was marked in the known versus what was agreed upon between examiners (including their varying confidence) should produce very different metrics.
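Just to make that last point concrete, a toy illustration (the probabilities are invented; only the standard form of the likelihood ratio, LR = P(evidence | same source) / P(evidence | different source), is real):

def likelihood_ratio(p_evidence_given_same_source, p_evidence_given_diff_source):
    return p_evidence_given_same_source / p_evidence_given_diff_source

# Two examiners mark the same complex latent differently, so the model is fed
# different feature sets, and the resulting "statistic" swings accordingly:
print(likelihood_ratio(0.90, 0.001))  # examiner A's markup -> LR of 900
print(likelihood_ratio(0.60, 0.010))  # examiner B's markup -> LR of 60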
NRivera
Posts: 138
Joined: Fri Oct 16, 2009 8:04 am
Location: Atlanta, GA

Re: The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by NRivera »

Jurors who had no information about proficiency gave similar weight to the testimony as jurors exposed to highly proficient examiners, suggesting that jurors assume fingerprint examiners perform at high levels of proficiency unless informed otherwise.
This is exactly as it should be. In accredited labs, proficiency testing speaks to the basic level of ability of the examiner AND of the laboratory's quality assurance system. Speaking to error aversion requires in-depth knowledge of the established procedures for identifying, addressing and, if necessary, mitigating potential or actual non-conforming examination work. PTs are not designed to address, and offer no insight into, error aversion. They do speak to the examiner's basic qualifications and credibility as an expert on voir dire, but they should not bear any weight on the testimony regarding the specific evidence at hand.
2.2 | How proficient are fingerprint examiners?
There is no such animal. Examiners are either proficient or they are not; there is no "how much." Do certain examiners have a better "eye" than others? Absolutely! But proficiency tests are not the appropriate tool for measuring that, and as a discipline we can't even agree on how to measure it objectively.

We could probably run several concurrent threads on types of errors, error mitigation, factors that contribute to error, etc. Ultimately, it boils down to an unfortunate reality: considering and documenting all the things that could significantly affect the complexity of an examination is inherently time consuming. Categorizing them all into a documentation scheme is no easy task, and you will always find yourself wrestling with those who argue that "the cases have to go out the door; we don't have time for all that." More research will be needed on all of these things before we reach a reasonable consensus on when, what, and how to document the variables affecting latent print examination complexity. I chalk this paper up as one more example of how not to make a light bulb.
"If at first you don't succeed, skydiving was not for you."
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

Re: The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by ER »

I totally get those points, but let me back the freight train up here. For a moment, set aside all those other issues and let's assume, for argument's sake, that we simply want to measure how reliably potential jurors view fingerprint evidence when they are presented with different rates of error. I get that there's plenty else at play, but, you know, just for fun.

But then you tell jurors that an examiner had 2 Erroneous Identifications out of 100 samples and that half of them were Mistaken Matches. Or, stranger still, that an examiner had 2 Erroneous Identifications and that neither was a Mistaken Match. Wait, what?

To paraphrase the great Chandler Bing, "Could you BE more confusing in your study design?!?"
NRivera
Posts: 138
Joined: Fri Oct 16, 2009 8:04 am
Location: Atlanta, GA

Re: The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

Post by NRivera »

ER wrote: Wed Mar 20, 2019 2:48 pm I totally get those points, but let me back the freight train up here. For a moment, set aside all those other issues and let's assume, for argument's sake, that we simply want to measure how reliably potential jurors view fingerprint evidence when they are presented with different rates of error. I get that there's plenty else at play, but, you know, just for fun.

But then you tell jurors that an examiner had 2 Erroneous Identifications out of 100 samples and that half of them were Mistaken Matches. Or, stranger still, that an examiner had 2 Erroneous Identifications and that neither was a Mistaken Match. Wait, what?

To paraphrase the great Chandler Bing, "Could you BE more confusing in your study design?!?"
Objection!!! Argumentative! :lol: :lol: :lol:
"If at first you don't succeed, skydiving was not for you."