Fingerprint Error

Discuss, Discover, Learn, and Share. Feel free to share information.

Moderators: orrb, saw22


Poll: What has the most impact on error mitigation?

- Training on advanced topics: 8 votes (21%)
- Clear Quality Assurance guidelines in an SOP: 29 votes (76%)
- Proficiency testing: 1 vote (3%)

Total votes: 38

Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Fingerprint Error

Post by Boyd Baumgartner »

Link to Article. The article is behind a paywall on desktop; the link goes to the offline mobile version, which displays fine, but I've pasted it below.

Question of the day, posed as a poll in the thread, based on a quote in the article:
The FDLE review found that the fingerprint unit needed stronger policies, procedures and training protocols. Nader and Starling both said they’ve gone to workshops and conferences, but Starling said she did not have “much training on advanced topics” dealing with science.
--- Begin Article ---
AUSTIN L. MILLER | OCALA STAR-BANNER | 6:20 pm EDT March 21, 2021

The Marion County Sheriff’s Office is making changes, as recommended by the Florida Department of Law Enforcement, to improve the agency’s fingerprinting unit after an employee self-reported an error and touched off a series of reviews and legal moves.


Tiffany Nader, an MCSO fingerprint analyst, misidentified a print during a comparison in March 2019. It was a non-criminal matter. No one was arrested or prosecuted as a result of the mistake, according to official accounts of what happened.

The mistake wasn’t detected until November 2020, when the print was checked by Jeana Starling and Samuel Durrett, who also worked in the fingerprint unit. Their check was part of a standard verification process.

Documents in the case show that once Nader realized the error, she reported it to her supervisor. The sheriff's office contacted the FDLE, which interviewed Nader and Starling, looked at the agency's equipment, examined its technical procedures, and of course reviewed the fingerprint file in question. Durrett wasn’t interviewed because he resigned in September 2020.

The FDLE review found that the fingerprint unit needed stronger policies, procedures and training protocols. Nader and Starling both said they’ve gone to workshops and conferences, but Starling said she did not have “much training on advanced topics” dealing with science.

Nader and Starling said they were not pressured “to work a case and provide results.” But both said they put pressure on themselves to get results for deputies, detectives and prosecutors, according to the FDLE report.

The FDLE concluded that “the examiners involved in this event display the necessary visual acuity and technical knowledge to perform the duties of latent examination.” Still, nine “corrective actions” were recommended.


Corrective actions recommended by the FDLE
Among them: development and implementation of latent print specific policies and procedures to include analysis, comparison, evaluation and verification of latent prints for quality assurance/quality control; annual external proficiency testing of latent print examiners; and increased usage of software when analyzing and comparing distorted, difficult and “close non-match” latent prints.

In addition to that technical review, the FDLE, at Sheriff Billy Woods' request, conducted an audit of the unit. In January, a team looked at a random sampling of work done by Nader and Starling in 2019 and 2020. FDLE officials said 109 latent print identifications were reviewed; of them, the FDLE team agreed with 106 of the findings.

In an email to Chief Assistant State Attorney Walter Forgie, Lisa Zeller, chief of forensic services for FDLE, discussed the three cases.

“In each case, there was a print or lift for which FDLE did not agree with the conclusion," the analyst wrote. "However, in each case there was at least one other print identified to the same individual for which FDLE did agree with the conclusion.”


Before the audit, Forgie sent an email to prosecutors notifying them about the misidentified fingerprint and the fallout. He told prosecutors that they should review cases in which Nader, Starling and Durrett had served or were serving as witnesses. Notices also were sent to defense lawyers in those cases.

Forgie said they looked at open and closed cases as far back as 2003, which is when Nader was hired by the MCSO. The cases – including major felonies such as murder, attempted murder, sex charges, robbery and child abuse – numbered about 1,000.

Forgie said they did not find any problems based on their search. There is no indication that anyone was convicted on the basis of an inaccurate fingerprint examination.

“We want lawyers and the public to have confidence and trust in what we’re doing here,” Forgie said.


One of the cases at issue was that of Christopher Tahkvar, who is serving life in prison for second-degree murder and grand theft.

Prosecutors don't doubt the conviction. Still, "because fingerprint evidence was introduced during trial...through analyst Tiffany Nader, and such testimony could have played a role in the jury’s decision, in the interest of justice, the State is requesting said fingerprint evidence to be re-analyzed by an independent expert in order to confirm the identification testimony presented to the jury,” the state wrote in a motion.

In that motion, the state asked that Tahkvar provide a fresh set of fingerprints “to provide to the independent fingerprint analyst.”

He told the judge that he refused to have his fingerprints taken again and that doing so would violate his rights, so the judge denied the state's request.

The Sheriff’s Office internal affairs division conducted an administrative review of the matter and concluded that Nader and Starling did nothing wrong. The review said a “policy failure” was to blame.

Forgie has met with the command staff of the MCSO and said the agency has been “transparent and forthcoming in this process.”

Long-term solution

Forgie said his agency is working on a long-term solution that includes several measures, such as proficiency testing, training and having another analyst confirm the accuracy of all prints by Nader and Starling for a period of time.
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Fingerprint Error

Post by josher89 »

I know nothing of this case, but this sorta sounds like an ID that was made to a victim in a case and didn't get verified. Obviously, that ID was incorrect, hence the "non-criminal matter" aspect.

Up until around 2008 or so, we didn't routinely have victim elim ID's verified. I don't know the reason but my guess would be there's no (or not much) probative value to identifying a victim to their own property and until around that time, examiners had a "zero percent error rate" so it couldn't be a wrong conclusion.

If the FDLE is specifically mentioning ACE-V, then my guess is they aren't using any 'methodology' to perform comparisons or at least couldn't articulate it. I get wanting PTs but I wonder what software they are talking about when analyzing distorted or close-non-match prints.

I hope this does get the examiners to training they need and not get vilified by the public or courts. I feel the response of the internal investigation by the agency is somewhat typical (they did nothing wrong) but they obviously wanted someone else to take a second look which is a very good thing. She self-reported which is also very good. The three conclusions out of 109 that the FDLE didn't agree with do not necessarily mean they felt it was a wrong ID; it could have been an inconclusive with support for same source that was called an ID; the article doesn't say.

I hope clarified policies and QA measures are added to this unit to make them better examiners, and I hope they are able to attend training to become more proficient at casework. Unfortunately, in some agencies, money for sending officers to training or for buying more guns or vests is prioritized over sending a non-sworn employee to training. I hope this isn't the case here, as we need both.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: Fingerprint Error

Post by Boyd Baumgartner »

My only pushback re: Victim IDs is that those designators are really irrelevant to which fingerprint belongs to whom. Those designators are relevant for court, but who knows, the purported victim may become a suspect later on in the case. It's kind of like providing a narrative on a proficiency test. Who cares? The V in ACE-V doesn't stand for 'Victims are irrelevant'. Or maybe it does... I haven't read an OSAC update lately :lol:
Texas Pat
Posts: 35
Joined: Mon Aug 17, 2020 7:15 am

Re: Fingerprint Error

Post by Texas Pat »

Help me understand why there is a major problem here. An examiner made a mistake, then self-reported as soon as it was discovered. Numerous other cases were reexamined, but no additional errors were found. Policies and procedures were tightened to prevent a future occurrence like the one that did happen. Prosecutors and defense attorneys were advised of the situation.

Are we going in the direction of demanding 100% error free latent print examinations? Zero tolerance for anything short of absolute perfection? That's sort of an unrealistic goal when human beings are involved, no matter how skilled or conscientious they are.
"A pretty good 20th Century latent print examiner, stuck now in the 21st Century with no way to go back."
Alan C
Posts: 77
Joined: Mon Aug 08, 2005 10:50 pm
Location: King County SO, Seattle

Re: Fingerprint Error

Post by Alan C »

What about public shaming? :D
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Fingerprint Error

Post by josher89 »

Texas Pat wrote: Wed Mar 24, 2021 9:14 am
Dangit Pat, you said in two short paragraphs what I was trying to say that took several.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: Fingerprint Error

Post by Boyd Baumgartner »

Texas Pat wrote: Wed Mar 24, 2021 9:14 am Help me understand why there is a major problem here.
I don't know if I'd characterize the problem as major, as the article is not specific enough to illustrate that. I think the most problematic element is usually the root cause analysis, because there are different levels on which such an analysis could occur, and they usually serve more as convenient narratives than actual root causes. See: The OIG Report on Mayfield. The corrective actions mentioned in the article would give me more info. If anyone can find it, please share.

Texas Pat wrote: Wed Mar 24, 2021 9:14 am Numerous other cases were reexamined, but no additional errors were found.
Fact check: False

The article mentions 3 conclusions found by the FDLE review in which there were disagreements, but see point #1.
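For scale, the audit figures quoted in the article (109 identifications reviewed, 106 agreed with) work out like this; a throwaway back-of-envelope sketch, nothing from the FDLE report itself:

```python
# Quick arithmetic on the audit figures quoted in the article:
# 109 latent print identifications reviewed, 106 agreed with.
reviewed = 109
agreed = 106
disagreed = reviewed - agreed  # the 3 discrepancies

agreement_rate = agreed / reviewed * 100
disagreement_rate = disagreed / reviewed * 100

print(f"Agreement: {agreement_rate:.1f}%")        # 97.2%
print(f"Disagreement: {disagreement_rate:.1f}%")  # 2.8%
```

Whether a 2.8% disagreement rate across a random sample is alarming or reassuring depends entirely on what those disagreements were, which is point #1 above.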
Texas Pat wrote: Wed Mar 24, 2021 9:14 am Are we going in the direction of demanding 100% error free latent print examinations?
Maybe that expectation is put on the examiners through impractical Operating Procedures, or the lack of them, ultimately with the unintended consequence of contributing to this scenario.

It seems plausible given an article from 2011 which states:
“Someone’s life is going to be impacted by our decisions, so we are very careful to be 100 percent accurate,” says Nader, who says her work is like solving a small piece of the puzzle.
While I commend the examiners for their ethical and transparent behavior, agency procedures that leave the majority of error mitigation to ethics seem to be the biggest potential problem given the limited information in the article, especially if there's an expectation of perfection. You want to incentivize quality assurance and control along the way, not after the fact in the form of an outside error audit.
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Fingerprint Error

Post by josher89 »

Some excerpts from the report by the FDLE:
Root cause analysis:

The examiners involved in this event display the necessary visual acuity and technical knowledge to perform the duties of latent print examination. They would benefit greatly from some suggested quality assurance quality control measures such as latent print specific policies and procedures being implemented in their agency. They would also benefit from remedial training specific to ACE-V methodology with emphasis on proper verification techniques and additional training on advanced topics that include close non-matches.

Analysis is the foundation of latent print comparison. It is the phase of examination where the examiner gathers the data they observe in the latent print. These data include clarity, distortion, pattern type, minutia, spatial relationship of the minutia, pores, creases, etc. and factor in to the comparison phase. Analysis should always be conducted free of any pressure, independent of the standards, and conducted thoroughly before any comparisons are attempted, including comparisons during verification. Failure to perform a proper analysis of a latent print can potentially cause an erroneous comparison which results in an erroneous conclusion. The ACE-V methodology is not linear and analysis can be re-performed by the examiner in light of new data.

In cognitive psychology there is a phenomenon where humans can be essentially 'blinded' by an overwhelming amount of data, to the extent that they can miss other data when processing information (i.e. cognitive bias). Based upon my interviews with the examiners, review of their equipment and procedures, and review of the latent print in question, it is my determination that this event is the result of improper verification technique that bypassed, or only minimally involved, an independent analysis that lead to cognitive bias. That cognitive bias rendered the verifier, and perhaps the case examiner, unable to recognize data that was not corresponding between the latent print and erroneous standards.


Suggested Corrective Actions:


1) Development and implementation of latent print specific policies and procedures to include analysis, comparison, evaluation and verification of latent prints for quality assurance/quality control.
2) Annual external proficiency testing of latent print examiners.
3) Development and implementation of a formal latent print training program to include the various topics of the science (ie, history, biology of friction ridge skin, ACE-V, etc.) with required readings, written exams and practical exercises.
4) Audit of past and/or current cases worked by Nader and Starling to determine the impact of technical errors and number of cases affected.
5) Remedial training for Nader and Starling to improve ability to analyze and compare latent prints, particular emphasis on proper analysis during comparison and verification, and "close non-matches”, for a yet-to-be-determined amount of time.
6) Increase usage of software when analyzing and comparing distorted, difficult and "close non-match" latent prints.
7) FDLE BIS training to include how to delete latent prints from the ULF/ULP that have already been identified by the agency to reduce the number of reverse hit notifications.
8) Increase verifications to include other technical decisions and encourage more consultation between examiners when dealing with "close non-matches" and difficult latent prints.
9) SAO requests for additional comparisons in casework before trial be routed through management to track how many, how often and the impact to examiners' caseload.
So this looks like not only a close non-match but also an AFIS close non-match. We now know that those should require additional QA measures, as AFIS databases are getting much more populated. It was also apparent that the correct ID resulted from the failure to remove the latent from the ULF of their AFIS, as a later reverse hit revealed the true ID. The FDLE did report that they only verify identifications, and this went through a total of three examiners. I'm thinking that having a written set of policies with direction on the QA measures necessary for difficult comparisons will absolutely push this lab forward. I would like them to at least look at the OSAC documents that are coming out at the Tier 3 level to get a starting point when creating their policies.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Fingerprint Error

Post by josher89 »

This probably goes without saying, but I will just in case: this isn't about public shaming at all! I know Alan C was kidding, so this isn't for you, sir.

This is about how we can learn from mistakes (others or our own) to be better examiners. Pure and simple. If we find deficiencies, we must improve upon them. If we can learn from others, all the better. If we can help others fix their mistakes, we must do that as well.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
Michele
Posts: 384
Joined: Tue Dec 06, 2005 10:40 am

Re: Fingerprint Error

Post by Michele »

Josh,

From reading what you posted, there are 3 paragraphs on Root Cause Analysis (RCA). Most of it is suggestions for Corrective Action (CA), not a determination of Root Cause. It says “improper verification” but that is part of the verification, not a ROOT cause of what led to the initial error.

From #7 of the suggested CAs: I would assume the error was found due to the latent being left in AFIS after it was identified. If that is the case, then it seems like AFIS contributed to finding the error (which is good), and therefore recommending taking ID'd prints out of AFIS doesn't seem to be a good CA. Playing devil's advocate, I would say leaving all latents in AFIS may be a better CA (it could be a way to find future errors). I know this would create more work and be time intensive; I am simply throwing the idea out there as a good QA measure (and as a way of assessing the RCA and associated CAs).

#6 recommends the use of software when comparing close non-matches. I agree software is helpful, however I don’t think a lack of software led to errors before software was developed (no errors that I recall). I would think the lack of equipment would result in false inconclusives or exclusions, not erroneous ID’s.

The CAs listed are all great recommendations; however, I don't see any of them as being actions that will diminish errors in the future. I'm a little leery of the RCA, but maybe there was more that we aren't seeing.

#6 mentions close non-matches. Playing devil’s advocate again, if it is a close non-match, why did FDLE determine it not to be a match? I’m not so sure it was because they had better equipment.

After thinking about it more, I kind of want to change my vote in the poll. QA measures may not have to be in your SOPs. I think utilizing good procedures is essential, even if they aren't in writing (which brings up the question: what are good procedures?)
Michele
The best way to escape from a problem is to solve it. Alan Saporta
There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
(Applies to a full A prior to C and blind verification)
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: Fingerprint Error

Post by Boyd Baumgartner »

I'm attaching the full FDLE report.

I'll just make my most prominent point up front. A 5-conclusion scale would most likely have put this in the 'support for same source' category, and we wouldn't be having this discussion, which is more problematic than the error in the first place.

That being said, as predicted, the usual suspects make their appearance:

'Close non-match'
The latent print in question and the erroneous subject from BIS is the result of what is colloquially called a "close non-match". This term refers to a latent print searched in an automated system that yields a potential match candidate standard print that is extremely similar to the latent print in question. Generally, the overall ridge flow and pattern type are extremely similar as well as a number of minutia shapes and general location. These types of search results are so similar that a cursory glance by an examiner could potentially result in an erroneous conclusion.
Can we just put this notion to bed already? It's fallacious reasoning known as a 'motte and bailey' fallacy.

The Bailey: 'This is a close non-match'

Rebuttal: On what dimension are you measuring 'closeness'? And if it's a non-match, doesn't that mean it's risen above a threshold of sufficiency for exclusion, which makes it not close? What is that standard for exclusion?

The Motte: AFIS algorithms are getting so good and databases so big, you're going to get very similar candidates back.

Any time you get a 'close non-match' discussion in root cause analysis, all I hear is 'we have no AFIS standard, no AFIS inconclusive standard and no conclusion standard'.
The interview of case analyst Nader and verifier Starling included questions regarding technical policy and procedure specific to latent print examination. They stated they don't have any policies and procedures specific to latent print examination and verification and that their policies and procedures essentially fall under the Evidence Management procedures and are brief.
Boyd wrote: Any time you get a 'close non-match' discussion in root cause analysis, all I hear is 'we have no AFIS standard, no AFIS inconclusive standard and no conclusion standard'.
Would you look at that.

Root cause analysis:

The examiners involved in this event display the necessary visual acuity and technical knowledge to perform the duties of latent print examination. They would benefit greatly from some suggested quality assurance quality control measures such as latent print specific policies and procedures being implemented in their agency. They would also benefit from remedial training specific to ACE-V methodology with emphasis on proper verification techniques and additional training on advanced topics that include close nonmatches.
This just comes across as a 'no true scotsman' fallacy.
Person 1: These verification techniques produced an error. Obviously improper verification techniques were used.

Person 2: Which ones should we use?

Person 1: The proper ones! No true verification technique would produce an error.
It's all very reminiscent of the infallibility of fingerprint identification.



Then there's this beauty:
Analysis is the foundation of latent print comparison. It is the phase of examination where the examiner gathers the data they observe in the latent print. These data include clarity, distortion, pattern type, minutia, spatial relationship of the minutia, pores, creases, etc. and factor in to the comparison phase. Analysis should always be conducted free of any pressure, independent of the standards, and conducted thoroughly before any comparisons are attempted, including comparisons during verification. Failure to perform a proper analysis of a latent print can potentially cause an erroneous comparison which results in an erroneous conclusion. The ACE-V methodology is not linear and analysis can be re-performed by the examiner in light of new data.

Translation: Analysis should unequivocally be done before any comparisons and failure to do so can cause an error, but ACE-V is not linear you guys.

This is literally a meaningless statement. Notice the use of the word 'proper' again.


Last, but not least:
In cognitive psychology there is a phenomenon where humans can be essentially 'blinded' by an overwhelming amount of data, to the extent that they can miss other data when processing information (i.e. cognitive bias). Based upon my interviews with the examiners, review of their equipment and procedures, and review of the latent print in question, it is my determination that this event is the result of improper verification technique that bypassed, or only minimally involved, an independent analysis that lead to cognitive bias. That cognitive bias rendered the verifier, and perhaps the case examiner, unable to recognize data that was not corresponding between the latent print and erroneous standards.
Independent analysis and blind verification can perpetuate errors if they are systemic and should be techniques of QA/QC, not standard issue. An office culture where the conclusions are known and picked apart by the second person should be acknowledged.

I'm also interested in how the examiners 'stated they don't have any policies and procedures specific to latent print examination and verification', yet somehow they used improper verification techniques?

They could've just written:

Root cause: No functional policies
Corrective Action: Accreditation

FDLE Review 1.pdf
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Fingerprint Error

Post by josher89 »

I was able to find the reason why the FDLE didn't agree with three of the 109 conclusions they reviewed as a result of this case.
The FDLE reviewers agreed with 106 of the 109 latent print identification conclusions reviewed. The following three (3) discrepancies were noted:

One comparison that was reported by MCSO as an identification was independently, but unanimously determined to be inconclusive by the review team based on utilization of the FDLE procedures for an inconclusive determination. There were points of similarity between the latent print and the standard, but the team deemed them insufficient for identification.

Two latent prints that were reported as identifications by MCSO were independently, but unanimously determined to be of no value for comparison by the review team based on utilization of FDLE procedures for value determination.
So the FDLE utilized their own procedures to make value determinations and comparison conclusions. Not saying this is a bad thing (in fact, it's probably the opposite) but the reviewer should at least look at the policies of the agency and hold them to that 'standard' (at least for now). In this case, the MCSO might not have had ANY policies so they had to use FDLE's.

It also appears, based on the additional case reviews, that the FDLE is suggesting the use of software for the analysis and comparison of latent prints. This leads me to believe they are still doing everything under the glass, or maybe doing minimal on-screen work (as in, probably viewing the prints only as taken and not utilizing PS or a clone to clarify the image further). Either way, kudos for doing what was necessary to improve.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893