Mnookin: fingerprint error rates "not too high"


Dr. Borracho
Posts: 157
Joined: Sun May 03, 2015 11:40 am

Mnookin: fingerprint error rates "not too high"

Post by Dr. Borracho »

Interview with Jennifer Mnookin, published by The Verge, December 20, 2018
The Dean of UCLA Law Explains the Uncertain Future of Forensic Science
Many of the techniques are faulty, but still admissible in court
By Angela Chen@chengela Dec 20, 2018, 1:57pm EST

Shows like Law and Order and CSI have taught a generation of Americans that blood spatters and handwriting analysis are crucial for catching criminals. The reality, says UCLA School of Law dean Jennifer Mnookin, is that many of these so-called pattern evidence techniques used in forensic science are faulty and not supported by evidence.

In fact, when it comes to wrongful conviction cases (where new DNA evidence proves that someone was innocent), bad forensic science is the second most frequent contributing factor, behind only eyewitness testimony. There are real, and harmful, consequences to forensic science in the courtroom.

The Verge spoke to Mnookin, who recently wrote a paper on the uncertain future of forensic science, about how forensic science is different from laboratory research, which techniques might be credible, and why she’s not optimistic that the system is going to change anytime soon. This interview has been lightly edited for clarity.

Many people don’t realize that forensic science developed separately from laboratory science and is much less credible. How did that happen?

A lot of these traditional kinds of forensic science began outside of the university context and outside of any research framework. They each have an interesting history behind them. For example, take handwriting identification evidence. Many of the early handwriting experts had previously been bank tellers, who were in the habit of looking at people’s handwriting to decide whether to honor checks, or clerks. For a long time, being a clerk who wrote things down was an honorable middle-class profession; the advent of the typewriter made that work much less necessary, and some of these clerks went on to assert expertise in handwriting and other techniques.

A lot of the early developments came from police-adjacent settings that wanted to figure out how to prove things better. There’s nothing wrong with any of these origin stories, except that they don’t develop in ways that invite careful scrutiny and scientific study. When you have a scientific crime detection lab adjacent to law enforcement trying to figure out how to better solve cases, you’re not necessarily asking how to test these new techniques and make sure they’re valid.

If you have early handwriting examiners advertising their authority and hoping that lawyers will come to them for help, there may not be any situation where anyone’s doing careful scrutiny of whether they can really do what they claim to do. It’s not that anybody was trying to commit fraud or do something wrong, but these techniques did develop in ways that didn’t lead to them being tested carefully because the judges in these early cases didn’t require it. They just say, “you claim to be an expert? Sure.”

I’m sure that in the realm of forensic science, there are some forms of pattern evidence that are more credible and less credible. What are some examples of that?

Bite marks are, in my opinion, one of the most distressing forms of evidence that continues to be used right now. Not only do we not have good evidence to support the validity of bite mark identification, we actually have very good evidence that it is not valid.

With some of the other techniques, we really don’t know whether they are valid, but with bite marks there are numerous studies showing that bite mark identification experts have a distressingly high error rate and can’t even accurately identify whether a mark left on skin is a bite mark or not, much less whether it belongs to a particular person. So the fact that the courts haven’t consistently and loudly said that bite mark identification should not be permitted is both distressing and, I think, surprising. The Texas Forensic Science Commission has put a moratorium on bite mark evidence and others have expressed doubts, but there has not yet been a trial court that has excluded it from evidence on the grounds of it being insufficiently reliable, and that’s shocking.

On the other hand, fingerprint evidence has been used since the early 20th century, yet for most of that time there was remarkably little serious study of its accuracy or error rates. That’s begun to change in the wake of the 2009 National Academy of Sciences report on forensic science. Meaningful evidence has started to accumulate. It’s not as substantial as I wish it were, but it now exists in meaningful quantity, and a number of the studies are well done. There’s pretty clear evidence that fingerprint experts are more accurate than lay people or novices. There is a craft knowledge.

There’s been some accuracy and error rate studies that show that, while fingerprint experts do make mistakes, those error rates appear not to be too high in many circumstances. I think fingerprint evidence carefully expressed and limited does have enough validity that it deserves to be a brick in the evidentiary wall. I’m not sure it’s enough to support a conviction without any other evidence.

What are the consequences of all this? I was surprised at the stat that forensic science is the second most frequently found contributing factor in certain wrongful conviction cases. Do we have numbers or a way to quantify what harm has been done?

It’s incredibly hard to get accurate numbers about the wrongful conviction rate. It’s a heck of a lot higher than zero, but we don’t have any way of assessing it across all cases. That makes it very challenging to answer the question of how often forensic science evidence introduced in court is mistaken or erroneous, because we don’t know how many mistakes we’re making overall.

People have been sounding the alarm about faulty forensic science for years. Some hope that there will be widespread change, but you’re less optimistic. Why is that?

I’m not wildly optimistic. In the time since the National Academy of Sciences report was issued, we really have seen some important forms of engagement and some modest forms of change. It would be a mistake not to recognize and even celebrate that. There’s a new degree of engagement by forensic practitioners, even parts of the law enforcement community, by scholars, and by some judges to take these questions seriously.

At the same time, a lot of the changes seem pretty modest, and in many ways judges are still exhibiting somewhat ostrich-like behavior about forensic science and don’t seem interested in or willing to confront the hard questions that insufficiently validated forms of evidence raise.

Plus, we have no institutional space that has both authority and broad stakeholder engagement. I don’t believe there’s a lot of reason to think that we’re going to have a lot of force for change. This administration’s Justice Department has been less interested in thinking about these questions than the Obama administration, and frankly, the Obama administration wasn’t as interested in taking these questions seriously as I wish they would have been.

All this is related to my next question. Change is hard in general, but what are some specific factors that are keeping the courts from changing?

There are several factors. One is the power of precedent in legal decision-making. You have these techniques and some have been around for a long time, and there’s a bunch of judicial opinions that say they’re admissible and legitimate. They may not be well-reasoned. They may not be based on a thoughtful examination of the underlying validity of the science, but there they are. So you have busy trial court judges making admissibility decisions about techniques that have been around for a long time and the easy thing to do, no question, is to preserve the status quo.

Given that we have a system that emphasizes precedent, that’s an even easier thing for judges to do. Many judges have been reluctant even to hold hearings about the question of adequate reliability, and some who do permit such hearings end up shrugging and saying, “it could go either way, but we’ve used it for a long time so it’s good enough.” It probably doesn’t help matters that more judges with criminal law backgrounds come from the prosecuting side than the defense side, and these techniques feel like they’re in the realm of common sense.

That’s the judicial side. On the forensic science side, many don’t have any science background. They come to law enforcement and don’t necessarily have a college degree, either. Now many forensic departments do require an undergrad science degree, but it’s very rare to have PhD-level science training, and many forensic scientists are not themselves scientific researchers, so they’re not well-positioned to research their own discipline or think about it from a research perspective. That doesn’t mean they’re not professionals trying to do a good job, but they’re not well-situated to be engaged in the exercise of establishing validity or to deeply understand what that requires. There’s begun to be some spaces within universities looking at these questions, but still not a lot.

So we continue to have a sort of guild mentality with forensic science, judges who have institutional incentives not to look deeply, and prosecutors who often tend to have more resources than the defense attorneys. That’s not an encouraging recipe for change.
Full article with photos (& ads):
https://www.theverge.com/2018/12/20/181 ... -ucla-dean
"The times, they are a changin' "
-- Bob Dylan, 1964
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: Mnookin: fingerprint error rates "not too high"

Post by Boyd Baumgartner »

Dr. Borracho wrote: Fri Dec 21, 2018 11:40 am Many people don’t realize that forensic science developed separately from laboratory science and is much less credible. How did that happen?
How is this question even taken seriously?

Appeal to prestige. How is that working out for the Social Sciences?
Bill Schade
Posts: 243
Joined: Mon Jul 11, 2005 1:46 pm
Location: Clearwater, Florida

Re: Mnookin: fingerprint error rates "not too high"

Post by Bill Schade »

In the same way that DNA analysis was originally called "DNA Fingerprinting"

Sounding good is half the battle.

Worked for the pattern evidence disciplines for years. :lol:
Dr. Borracho
Posts: 157
Joined: Sun May 03, 2015 11:40 am

Re: Mnookin: fingerprint error rates "not too high"

Post by Dr. Borracho »

Boyd Baumgartner wrote: Wed Dec 26, 2018 6:14 am
Dr. Borracho wrote: Fri Dec 21, 2018 11:40 am Many people don’t realize that forensic science developed separately from laboratory science and is much less credible. How did that happen?
Just for the record, I did not say that. Jennifer Mnookin did. I believe a clearer statement, and one that I could agree with, would be to rephrase her sentence thus: "Many people don't realize that forensic science developed separately from academic science and is deemed much less credible by academicians and theoretical scientists."

My point in posting her interview was to highlight her statement,
Jennifer Mnookin said,
"There’s been some accuracy and error rate studies that show that, while fingerprint experts do make mistakes, those error rates appear not to be too high in many circumstances. I think fingerprint evidence carefully expressed and limited does have enough validity that it deserves to be a brick in the evidentiary wall."
That is quite an admission from a critic who, until now, has been prominent in leading the charge against fingerprints, along with the other forensic sciences.
"The times, they are a changin' "
-- Bob Dylan, 1964
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: Mnookin: fingerprint error rates "not too high"

Post by josher89 »

Just remember; it's very easy to point out a problem but if you don't have a solution on how to fix it, you're not really helping out.

We always hear, with pinkies in the air, how 'wrong' we are, but no one (within academia, or at least outside of forensics) is telling us how to be 'right'. Sure, we're cranking out great research and they are saying, "Well, you're getting closer but still no cigar!" Yet there has not been one person saying how to do it 'right'.

Our criminal justice system is adversarial and it's designed to be that way. If you tell me I'm 'wrong', you should at least tell me how 'wrong' I am and more importantly, show me (or the judge/jury) how you're 'right'.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
Bill Schade
Posts: 243
Joined: Mon Jul 11, 2005 1:46 pm
Location: Clearwater, Florida

Re: Mnookin: fingerprint error rates "not too high"

Post by Bill Schade »

You make some good points, Josh, but instead of taking comfort in the overall accuracy of the discipline (which I always have), I think the question now is "how do you measure and then state the accuracy and strength of this particular conclusion?"

That is the question the critics ask.

I think the answer must lie in finding a way to express this accuracy in either statistical or verbal terms. I've seen papers on both approaches. I'm not sure which direction we will travel, but falling back on our record, while it sounds good, will not stop the questions.
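To make the "statistical terms" option concrete: one common way to state accuracy is to report an observed error rate together with a confidence interval, so the strength of the conclusion reflects how much testing backs it up. Here is a minimal sketch in Python using a Wilson score interval, with purely hypothetical counts (the 7-in-3,000 figure is made up for illustration, not taken from any study):

```python
import math

def wilson_interval(errors, trials, z=1.96):
    """95% Wilson score interval for an observed error rate (errors/trials).

    The Wilson interval behaves better than the naive p +/- z*sqrt(p(1-p)/n)
    when the rate is very small, as forensic error rates tend to be.
    """
    p = errors / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical: 7 false positives observed in 3,000 comparisons
low, high = wilson_interval(7, 3000)
print(f"observed rate {7/3000:.4f}, 95% CI ({low:.4f}, {high:.4f})")
```

The point of the interval is exactly the one raised above: "we were right 7 times out of 3,000" and "our error rate is plausibly between about 0.1% and 0.5%" are different claims, and only the second tells a judge or jury how much uncertainty surrounds this particular conclusion.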

I saw a funny cartoon recently that seems to sum up your point, though. It also reminds me of my training back in the seventies, which consisted of "Keep looking, you'll get it with more experience."
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: Mnookin: fingerprint error rates "not too high"

Post by Boyd Baumgartner »

Sorry, Borracho. Didn't mean to imply those were your words, just used the quote function.

Given that the general state of journalism is to present some clickbait headline and only publish the view of the critic while citing the original premise as 'contentious' in some fashion, I'm ready to play along.

So, for instance, considering that Simon Cole is now on both the OSAC and the ASB, once the OSAC guidelines are published and adopted and an error occurs under their banner, can we then say that he caused the problem, and that part of the blame lies in his being a social scientist, which isn't real science given the field's lack of replication?
'Prominent fingerprint critic to blame for errors'