Saks & Koehler article in Science
-
L.J.Steele
- Posts: 430
- Joined: Mon Aug 22, 2005 6:26 am
- Location: Massachusetts
- Contact:
Saks & Koehler article in Science
Assumptions of Uniqueness
I have to disagree that the proof of uniqueness has been well developed for firearms ID. I'm willing to accept it, until proven otherwise, for fingerprints, based on the fetal development and twin studies. For firearms ID, if you read the AFTE Journal studies carefully, all but one of them were not done in a blind or double-blind manner. In each case, the examiner stating that he can distinguish two consecutively manufactured firearms KNOWS that they are consecutively manufactured and should not match. Or knows that the bullets fired after 5, 50, 500, etc. rounds were all fired from the same firearm and should match. That leaves a fatal hole for arguments about confirmation bias.
The next question is how well one can distinguish uniqueness, particularly given the non-pristine state of typical crime scene materials. That's a harder question, and gets into those issues of confirmation bias, tunnel vision, pressure of high-profile cases mentioned in Stacey's report on Mayfield.
The number of reported errors is small. 18 to 20 known, published cases in the past 80 or so years in the UK and US. But most of those known cases are known because of flukes -- Cowans is only known because exculpatory DNA existed and was subsequently tested; Mayfield is only known because the Spanish authorities stuck to their suspect and fought with the FBI. We don't know how many other Cowans and Mayfields are out there because we don't have good mechanisms to look for them.
Study
I'd second the comments about the lamentable state of understanding of forensic science among the criminal defense bar, an unwillingness to question forensic results by cross-exam, and blatant misstatements often left unchallenged. (Some may recall my questions last fall about testimony in a CT homicide case that the clearest print in a "high traffic" area (an external storm door) was left by the last person to depart the crime scene. Unquestioned by trial counsel, who likely didn't realize the "high traffic" theory was not widely accepted, tested, etc.)
The legal standard in this area, however, is constitutionally ineffective assistance of counsel, which is a high bar. The lack of understanding is typical of judges and attorneys, so it is not regarded as so unusual as to be constitutionally ineffective. (Heck, your attorney can be asleep for part of the trial and apparently not be legally ineffective.) The distinction between an ill-educated lawyer and a constitutionally inadequate one may explain the figures in the graph.
Error, Testing
I think the authors have a point about courts and science, judges and science, and that a fair number of mistakes likely do slip through unchallenged and undiscovered.
From my own experience litigating eyewitness ID appeals, courts seem incredibly resistant to peer-reviewed, published research in prestigious journals spanning two or more decades that contradicts their "common sense" notions about witness reliability. I suspect the folks litigating challenges to forensics on both sides of the issue are facing a similar problem just getting a judge to understand the science itself.
Testing, certification -- those are difficult questions. We know from Mayfield that experienced examiners can make a mistake. I don't know how one could set up a minimum standard that would reduce the risk of errors or how one could set up standards to know when an examiner is testifying about something beyond the accepted limits of the science.
I'm not sure how one might find an error rate, but it is fair to ask examiners to acknowledge that mistakes have happened and ask what they did in this case to avoid error.
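As one hedged illustration of how an error rate might at least be estimated rather than asserted: give examiners blind comparisons with known ground truth and count the false positives. The sketch below is purely illustrative Python -- the trial and error counts are invented assumptions, not data from any study cited in this thread -- but it shows how a blind proficiency test would yield a measured rate with quantifiable uncertainty instead of a claimed zero.

import math

# HYPOTHETICAL blind-test results, invented purely for illustration
n_trials = 500       # blind comparisons with known ground truth
n_false_pos = 2      # erroneous identifications observed

p_hat = n_false_pos / n_trials
z = 1.96             # ~95% confidence

# Wilson score interval for a binomial proportion
denom = 1 + z**2 / n_trials
center = (p_hat + z**2 / (2 * n_trials)) / denom
half = z * math.sqrt(p_hat * (1 - p_hat) / n_trials + z**2 / (4 * n_trials**2)) / denom

print(f"observed false-positive rate: {p_hat:.3%}")
print(f"~95% interval: {max(0.0, center - half):.3%} to {center + half:.3%}")

# If zero errors were observed, the 'rule of three' gives a rough upper bound: 3 / n_trials
print(f"upper bound with 0 errors in {n_trials} trials: ~{3 / n_trials:.3%}")

The point is not these particular numbers, which are made up, but that a blind design of this sort would let an examiner testify to a measured error rate with stated uncertainty rather than to zero.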
-
Guest
I agree with the critics who say we need more research, testing, and validation in our science. But then, for what science could it be said that no further research, testing, or validation is needed?
I also believe that knowledge of fingerprint science by the defense attorney combined with a strong cross examination is the best safeguard against mistakes in the courtroom. George Reis made a similar comment last week in another thread on this discussion board, and his posting hit the nail on the head with the topic there. Problem is, most defense attorneys do not know much about science, they accept fingerprint identification as absolute themselves ("confirmation bias?"), and they do not know how to cross examine even the most incompetent fingerprint examiner (and there are those testifying in this country who ARE incompetent – I suspect they are responsible for the majority of the mistakes cited by the critics). So who should take the blame for the failure of the defense to question bad evidence – the fingerprint community?
I disagree with the critics not in the steps that should be taken to improve the science, but in what we should do in the meantime. The critics want all fingerprint testimony discontinued until more research, testing, and validation is complete. Well, the truth is you NEVER complete research, testing, and validation of any science. Discontinue the use of fingerprint evidence until that is done? Talk about "throwing the baby out with the bath water!"
But even after the most strenuous research, testing, and validation, there will still be mistakes. As long as humans are involved, mistakes will be made. We can hope the number of mistakes gets smaller and smaller, and we can strive energetically toward that objective, but the number of mistakes will never be zero. In any field of human endeavor, there exists the potential for human error. So we should strive for perfection, but with the understanding that true perfection can never be attained.
While the critics cite errors, and they criticize us for not having established an error rate, I would submit that if you took the number of known errors and compared it to the number of correct identifications (a number we do not know), the error rate would be almost infinitesimally small. So the critics then argue, "But there must be other errors we don't know about," and I would agree with that, also. So let's double or triple or quadruple the known number of errors, and I would submit that, considering the number of correct identifications, the error rate would still be almost infinitesimally small. It's a shame we do not know the number of correct identifications made in the US in a year's time, or over the time period of the errors cited by the critics, but collecting that information would be practically impossible. If the critics want to include erroneous identifications made in other countries, then in calculating an error rate we must include all the correct identifications made in those countries, too.
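To put rough numbers on that arithmetic (a back-of-the-envelope sketch only -- both figures below are hypothetical placeholders, since the true count of correct identifications has never been collected), a few lines of Python:

# All figures are HYPOTHETICAL placeholders for illustration only
known_errors = 20                  # roughly the documented cases discussed in this thread
assumed_correct_ids = 10_000_000   # invented guess at correct identifications over the same period

for multiplier in (1, 2, 3, 4):    # suppose the true error count is 1x to 4x the known count
    errors = known_errors * multiplier
    rate = errors / (errors + assumed_correct_ids)
    print(f"{multiplier}x known errors -> error rate of roughly {rate:.5%}")

Quadrupling the numerator barely moves the result; the rate is driven almost entirely by the assumed denominator, which is exactly the number nobody has collected.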
Yes, by all means, let's continue to do the research, testing, and validation studies to improve our science. Let's continue to try and upgrade our profession through training and proficiency testing. Let’s try to assure accuracy through certification and accreditation. But the small number of errors compared to the very large number of correct identifications does not justify the drastic measures our critics would impose. Our science has withstood every Daubert challenge filed, has been accepted by every court that has examined it critically, and DOES pass the test “beyond a reasonable doubt.” Fingerprint identification is HIGHLY reliable. Let’s work to make it better. But to discontinue the use of fingerprints in court in the meantime would impose on society a terrible increase in burglaries, rapes, and murders by large numbers of criminals who are correctly identified every day in fingerprint laboratories and identification units by honest, competent, hard working fingerprint examiners.
-
Pat A. Wertheim
- Posts: 872
- Joined: Thu Jul 07, 2005 6:48 am
- Location: Fort Worth, Texas
My name is Pat A. Wertheim. Is anybody else having trouble getting their name attached to their postings? More than half of what I post goes down as "Guest," even though I am registered on the discussion board at the time I write and post. I did not sign that last post, the second in this thread, because I thought my name would be attached to it rather than "Guest."
-
Les Bush
- Posts: 229
- Joined: Tue Jul 05, 2005 4:29 am
- Location: Australia
Who are the editors of Science
Thanks Lisa for this posting. I was wondering how to reply to Glenn and the editor of Science at the same time.
I'm sure the editor of Science will read the response by Glenn and classify its content as just a counter-argument to the views expressed by Saks and co. That leaves the editor in the position of judging whether an opposing view is worthy of interest to his membership. It is not difficult to see that there are two agendas forming. Saks has an agenda to diminish the profile of fingerprint science, a vendetta based on the outcome of Daubert. The editor of Science has a membership agenda to put pressure upon the fingerprint community. That membership probably views fingerprint experts as non-academic and hence non-scientific. This would explain why the content of Saks's article was not reviewed by a subject specialist contracted by the staff of Science. The problem here is that our science is vulnerable to these publications and their intentions. What we express on this website can also be used as information about our personal agendas. I've found that the safest zone to shelter in is within the science itself. This was highlighted recently with the McKie case and the question of a British consultant's credibility to offer expert opinion.
I fully support the long list of questions that Glenn raises in his response. Members of the fingerprint community should react as Glenn has, with information about our science, and defend its truth. I believe we need greater support from the staff of the JFI to produce publications that counter the inferences that Science is allowing to take hold. The same goes for other fingerprint associations and their publications. If they value the credibility of their publications, it is because of the foundation of fingerprint science.
It appears the main issue being addressed is still the question of HOW we make identifications and how we prove our results to be correct. We should submit to the JFI articles that bring innovative ideas about a solution. I've tried, and will continue to refine my submissions until I'm shown that the effort is no longer valid or needed. There is little point submitting these solutions to Science, as it appears they don't appreciate the real problem.
-
L.J.Steele
- Posts: 430
- Joined: Mon Aug 22, 2005 6:26 am
- Location: Massachusetts
- Contact:
Saks & Koehler article in Science
Anonymous wrote: I also believe that knowledge of fingerprint science by the defense attorney combined with a strong cross examination is the best safeguard against mistakes in the courtroom. George Reis made a similar comment last week in another thread on this discussion board, and his posting hit the nail on the head with the topic there. Problem is, most defense attorneys do not know much about science, they accept fingerprint identification as absolute themselves ("confirmation bias?"), and they do not know how to cross examine even the most incompetent fingerprint examiner (and there are those testifying in this country who ARE incompetent – I suspect they are responsible for the majority of the mistakes cited by the critics). So who should take the blame for the failure of the defense to question bad evidence – the fingerprint community?

As attorneys, we depend on you folks to follow the right procedures, testify honestly to your results within the limits of your field, and take appropriate precautions against error and subconscious bias. In effect, we trust you folks to be the experts. We trust you to be fair and impartial. And we have to.
Step back for a sec and think about a complex homicide case. I'm an appellate attorney. I can take as much time as I need to read and re-read the testimony and cross-exam, do some library work, and figure out whether there's any substance to my gut feeling that something's wrong with an expert's comments. The trial attorney doesn't have that luxury. He or she is in the courtroom, having to take notes as you talk, and has to be ready to cross-examine with whatever prep and resources the attorney put together ahead of time. Generally there's no time to run off to the library and pull a copy of Ashbaugh or dig up copies of the JFI to double-check a hunch.
For the recent homicide case I worked on, the trial attorney had to cross-examine experts about fingerprints, crime scene reconstruction (mostly blood pattern analysis and firearms distance determinations for that case), firearms ID, and gunshot residue testing, and deal with the fact witnesses. Plus master the legal aspects of some very complex evidentiary issues. It is hard to be well-enough read in multiple separate forensic disciplines to know when the expert is stretching the limits of their field (often honestly believing their theory is valid) and to be able to do a thorough cross-exam. Most of my cases are indigent defense cases, so limited funds are available for experts. Yes, I would dearly love my peers to do a better job on the science, but I do recognize there aren't enough hours in the day for them to master all the key forensic and legal fields well enough to challenge folks for whom it is a full-time profession.
Anonymous wrote: The critics want all fingerprint testimony discontinued until more research, testing, and validation is complete.

Some probably do. For myself, I'd be mostly content to see the 2nd Plaza holding used in more courts -- the examiner needs to establish adequate training and testing, needs to use an established methodology, and must provide the jury with the originals when available and sufficient enlargements for the jury to see and comprehend the method and result. (And I would like to see the latter, in particular, spread to firearms ID, where photos of the match seem rare.) I'd also like to see the Stacey report recommendation of blind verification adopted, and see precautions against the examiner's judgment being subconsciously influenced by extraneous information about the offense.
Anonymous wrote: but the number of mistakes will never be zero.

Is it possible to get examiners to stop claiming that the error rate for ACE-V is zero? And that there's no chance that they have made an error? I think Saks admits that the error rate for fingerprints, while unknown, is probably low.
Remember that for a defense attorney, the key concern at any given moment is this specific client. An error rate of 1 in a million is great, unless you are the 1. If the examiner is wrong, and the jury convicts, an innocent person goes to jail and the real bad guy remains at large. That's not good for anybody. Cases like Cowans and Mayfield, where there's a way to find the error and exonerate the client, are very rare. What we as attorneys want to know is what factors led to Cowans or Mayfield or McKie or the 20 or so other widely-known error cases, and whether those factors apply to this specific case.
Anonymous wrote: Our science has withstood every Daubert challenge filed, has been accepted by every court that has examined it critically, and DOES pass the test "beyond a reasonable doubt."

Some of the Daubert challenges have been better litigated than others, and some of the courts have given the issue more thoughtful and open-minded review than others. I suspect that if the problems from Mayfield remain unaddressed, the critics are simply ignored, and more errors crop up, then a fair-minded judge may rule for the critics.
-
Michele Triplett
Lisa,
I think trusting that people are experts is a dangerous concept. In any field there are always people pretending to be knowledgeable just to make money. A few key questions should be asked to ensure any person is somewhat knowledgeable. This can be done in court or in a pre-court interview. Some simple questions are:
“What scientific training do you have?”
“What methodology did you use?”
“How long has this methodology been used?”
“Has it been tested, by whom and when?”
“Can you show me (the attorney) how you came to your conclusion?” (chart enlargement)
“Is this a conclusion any examiner would agree with, or does it require a skilled examiner?”
“Do you keep current with fingerprint literature and research? Can you give me some examples?”
“Can you explain the Plaza case?”
“Can you explain the basics of the Mayfield case?”
I think the list that John Nielson wrote, called “Are You Dead Yet?” has some perfect questions.
Blind Verification would be great, but people have to understand the difference between a quality control procedure, a scientific testing procedure, and a peer review process. Implementing a procedure without understanding it wouldn't help in eliminating errors; it would just produce the perception of better standards. I've heard that SWGFAST will be working on a position statement regarding Blind Verification during their next meeting.
Saks doesn't admit that the error rate for fingerprints is probably low. During the e-symposium he stated, "there are numerous false positives". He later gives the number as 21 documented cases. After looking at his list, not all of those cases are false positives. Personally, I would guess that the true number of errors is even higher than this, but my point is that he seems to intentionally misrepresent the data to make the situation sound worse than it is. Granted, if you are the one person falsely imprisoned, then the situation is awful.
Michele
-
g.
- Posts: 247
- Joined: Wed Jul 06, 2005 1:27 pm
- Location: St. Paul, MN
LES: Thanks for the comments, and I agree about the Science editors. Hopefully if enough forensic scientists write them, they will be inclined to print some of the letters---NOTHING SELLS LIKE CONTROVERSY and we scientists LOVE A GOOD ELITIST COGNOSCENTI CATFIGHT!!!!
LISA: Always good to hear your input! Sorry, I am out of town and haven't had the chance to reply. You made some good points, particularly about differences in litigation abilities, and that since errors do occur, why can't we drop this "0 methodology (theoretical) error rate" silliness (my words--I am not a big fan of this argument), etc. You and I need to chat sometime and consider teaming up for a presentation at an IAI--maybe in Boston this year. Anyone who can quote Han Solo is way cool in my book...
I have a few points of clarification, and I apologize if this wasn't clear in my response to Saks. So I will address your major points by topic:
Assumptions of Uniqueness
You mention that firearms ID has not proved its foundation of uniqueness. This is a misunderstanding that occurs with non-scientists and, unfortunately, too many scientists as well. I am talking about LAWS and THEORIES in science (because SAKS and KOEHLER said these fields lack an empirical and theoretical basis for this discernible uniqueness). The problem is they are using terms that have specific meanings to scientists, and they are using them incorrectly OR they are just plain wrong.
Laws are general descriptions of behavior or objects in the universe. Theories are the explanations for these laws. Either laws or theories can be tested with experimentation (hypothesis testing and attempting to refute).
An object dropped will fall to the earth at a constant acceleration, gases expand with heat, friction ridge skin is unique and persistent, all physically machined objects are unique, etc. All are examples of LAWS.
All are supported with empirical evidence and tests.
THEORIES explain the WHY and the HOW. Friction ridge skin is unique BECAUSE of stresses during fetal development, machined objects are unique because of microscopic imperfections on the surface of the tool, etc.
The problem is that when we go from general science to APPLIED SCIENCE, a specific application of the theory and/or law, we are now applying a test and a methodology. THIS IS THE AREA THAT IS THE ISSUE. DOES THE TEST adequately answer these questions? Does the test adequately differentiate two consecutively machined tools, etc.?
By this reasoning, we should never lose a Daubert challenge on the SCIENCE and its foundation, only on the methodology (referring to fingerprints, firearms, handwriting, footwear, etc.).
Can the test discern and differentiate the uniqueness? The test is the key. IF all OBJECT Xs were unique at the molecular level, and you compared the OBJECT Xs with your eyes and brain, you would not discern the difference. If you used a magnifier or microscope you still would not discern the difference, and you might conclude, WELL, THE EVIDENCE CLEARLY SHOWS THESE OBJECT Xs AREN'T UNIQUE. No, the test you have chosen just doesn't show it. But then you choose to use a Scanning Electron Microscope and, lo and behold, you can reliably and consistently (repeatably) see that they are different.
It is the methodology (the test) that is the issue. Is the methodology sufficient to differentiate? STR testing can differentiate everyone but identical twins, yet twins DO NOT have IDENTICAL DNA -- only identical as far as the STR test can see. Years from now, if we have a new test, small imperfections and mutational differences might be detected and, lo and behold, identical twins are not identical (and empirically we know this to be true).
Study and defense attys
Yep, agreed, there is a legal standard for defense incompetence. The authors did not state their criteria for making those decisions and categorizations. That was my point. They literally took this data from the Innocence Project website (check it out, it's there) and retooled it in their graph, without knowing any of the collection methods, sampling, criteria, tests, stats, procedures, etc., and then have the gall to publish it in a science publication. That's BAD SCIENCE. The very definition of it, and why I say we forensic scientists should go on the offense. They don't often stick their necks out this beautifully. Let's take this opportunity.
Error, Testing
<<I think the authors have a point about courts and science, judges and some errors...getting in>>
Yep. As guest (Pat?) said, "errors can and do happen." Bad errors. And we need to be very transparent about this. But he erroneously cites CTS data as false positives, when SAKS KNOWS BETTER (and he knows it from Daubert hearings, from us contacting him, and from all his associations through QD). He knew this and yet still called these FALSE POSITIVES. So again, either he's incredibly ignorant (and he's not) or he's deliberately and UNETHICALLY reporting false data. And he has a history (a well-documented one, by the QD folks) of doing this. I hope Max Houck responds too, as Saks also misused their data, and he can speak personally about the misused hair data in the article.
Thanks Lisa for the comments, but hopefully you can see why this just gets under my skin.
You made good points, and we appreciate your well-informed criticism; you know where to draw the line of your expertise. Saks does not, and I believe he is deliberately misinforming people to elevate his own persona and ego.
LAST POINT: Keep an eye open. Kasey and I performed, and are completing (right this moment), a Practitioner Error Rate study, and we presented the pilot results at the IAI. We will be publishing this data very soon....more to come.
Thanks and may the Force be with you,
g.
-
Allan Bayle
Assumptions of Uniqueness
What an interesting subject.
I find myself in a unique position. I have taught ACE-V and report writing using Ashbaugh's methodology. I have had to change the headings to be accepted by the British court system. Training is very important, and so is choosing the right system.
The police establishments are going to have to make some important changes. Should the police have control of fingerprint experts and the training of experts?
Do we need to train all fingerprint personnel to give evidence in court? No.
Technicians can complete most of the work. Being an expert court witness is a separate profession.
I believe competency testing is not the way forward for future experts giving evidence in court. My experience has shown that it is better to place difficult marks in front of a person and have them use their skills to describe the evidence before them and write a report on their findings. Charting has now changed; shape and measurement have also become part of the tools of our trade.
Fingerprints has to move into the 21st century. It's no good saying we have so many points in agreement and therefore he/she must be guilty. We have to be more scientific. Ashbaugh has given us the key. We now have to comment on the position of the latent marks, deposition pressure, etc., because it is an independent report on the evaluation of the evidence before you.
Those of you who cannot keep up, beware, the dark forces will seek you out!
-
L.J.Steele
- Posts: 430
- Joined: Mon Aug 22, 2005 6:26 am
- Location: Massachusetts
- Contact:
Saks & Koehler article in Science
Michele:
I think the judicial system is stuck with a certain level of trust in the experts. Were I a trial attorney, I wouldn't have the time or the budget to re-test every bit of forensic evidence with my own experts, nor would most judges be happy with an attorney who did a full-day cross on every forensic expert. I'd have to pick my battles and make some judgment call about where I think the expert(s) have come to reasonable conclusions and about what evidence does the most damage to my case.
The questions you pose are all good ones that should be in every attorney's basic checklist. Though a couple do assume the attorney is well read on Plaza, Mayfield, Cowans, etc. (Which he or she ought to be, but you'd be surprised how many defense attorneys aren't.)
The other problem is the limits of cross-exam. Assume Examiner X honestly believes that he can tell the age of a fingerprint by its appearance and honestly thinks he recalls this from his training. (Memory is an odd thing.) He's going to testify that yes, this theory is supported by his training, even if he can't name a specific article or study. Unless I know that's untrue and I've got the contrary articles in my trial file to use for cross-exam, as an attorney I don't have a good way to test his theory by simple cross-exam. If Examiner X is intentionally lying about education, training, or results, there's no reason to think that he or she won't continue to lie on the stand about methods, supporting reasons, etc. Again, unless the attorney has done a fair amount of reading in the area, and has the right contrary authority handy, mistakes and lies are likely to get through.
I suspect Saks' list of 21 is the same list that's in Cole's More than Zero, Accounting for Error in Latent Fingerprint Identification, 95:3 J. Crim. L. & Criminology 985 (2005) (either just published or about to be).
G
I'd be glad to do a presentation, schedule permitting, for/with IAI. I expect to try to visit the Burlington meeting to see Pat W. and Cole debate.
Accepted distinction on theory/law vs. applied science. Query, though: has the firearms/toolmark theory been adequately blind-tested? The theory says that because of wear on the tools there should be differences on each surface, but don't the differences need to be observable to be able to test/falsify the theory? I can try to falsify friction ridge uniqueness and permanence with observation and tests using statistically valid samples over sufficient time, but again, I need to guard against observer error to know that the test of the theory/law is valid.
g. wrote: As guest (Pat?) said "errors can and do happen" Bad errors. And we need to be very transparent about this.

Agreed. Though given the potential for severe consequences to an examiner's career from admitting error, I do have concerns about how willing someone is going to be to admit a mistake or challenge someone in a non-blind verification. I don't know if there is a way to encourage folks to admit mistakes without getting penalized for it.
I've seen all sorts of stuff about the CTS data. Is it publicly available somewhere that an interested person can read it?
Allan B
There are all sorts of interesting issues in lab control and supervision. I recognize that law enforcement needs a certain amount of privacy and protection for its investigations, and that the vast majority of lab work is likely to be law-enforcement oriented. OTOH, it would be nice to feel that the labs and experts are impartial, honest brokers when it comes to their tests and results.
-
g.
- Posts: 247
- Joined: Wed Jul 06, 2005 1:27 pm
- Location: St. Paul, MN
Saks Koehler article
Lisa,
<<Query, though: has the firearms/toolmark theory been adequately blind-tested? The theory says that because of wear on the tools there should be differences on each surface, but don't the differences need to be observable to be able to test/falsify the theory? I can try to falsify friction ridge uniqueness and permanence with observation and tests using statistically valid samples over sufficient time, but again, I need to guard against observer error to know that the test of the theory/law is valid.>>
Now we are speaking the same language. Excellent point. I don't know those answers, not being a firearms examiner. Whether blind testing will refute the theory (as opposed to testing an expected outcome) is a great point. That's where it comes back to choosing the correct test to falsify the theory. The theory (the explanation) is still a theory and a possible explanation. The test (and the resulting data) either refutes the theory or does not refute it (tests never "PROVE" a theory, they only refute or fail to refute it, in a true philosophy-of-science manner).
The tests they have done do not refute the theory, but maybe they need better tests to keep trying to refute it (i.e., the blind testing you suggest). Couldn't hurt, could it?
Good points, and I will have to give you a call (about presenting at the IAI) when I get back (I am in Colorado teaching a Ridgeology Science Workshop!),
g. (glenn langenburg)
-
Guest
Michele,

Michele Triplett wrote: I think trusting that people are experts is a dangerous concept. In any field there are always people pretending to be knowledgeable just to make money. A few key questions should be asked to ensure any person is somewhat knowledgeable. This can be done in court or in a pre-court interview. Some simple questions are:
“What scientific training do you have?”
“What methodology did you use?”
“How long has this methodology been used?”
“Has it been tested, by whom and when?”
“Can you show me (the attorney) how you came to your conclusion?” (chart enlargement)
“Is this a conclusion any examiner would agree with, or does it require a skilled examiner?”
“Do you keep current with fingerprint literature and research? Can you give me some examples?”
“Can you explain the Plaza case?”
“Can you explain the basics of the Mayfield case?”
I think the list that John Nielson wrote, called “Are You Dead Yet?” has some perfect questions.
Michele
It looks like one must do an extensive amount of reading to truly become an expert. I know many fp examiners consider reading a chore and would rather learn things more visually. Do you have any suggestions here?
-
Pat Wertheim
Define "EXPERT" --
If you said, "Somebody who knows a little more than the average person about a topic," you will lose BIG TIME in court.
When I got started in this business in the 1970s, it was good enough to go to court and testify, "Fingerprints are unique and permanent. I compared the fingerprint from the crime scene to the defendant's fingerprints and they match." And the defense attorney would stand up and say, "No questions." If you think that is still the case today, welcome to Fantasyland.
"Guest" says, "It looks like one must do an extensive amount of reading to truly become an expert. I know many fp examiners consider reading a chore and would rather learn things more visually. Do you have any suggestions here?"
My reply: Yes, you must do an extensive amount of reading to truly become an expert. Almost all jobs have some component that is a chore, but you do it because it is a necessary part of the job and you like the rest of the job enough to put up with the chore of reading. Or else you find a job that doesn't require reading skills. There are plenty of those out there, if that's what you want. Unfortunately, they don't pay as well, nor do they have any benefits. (Do you like picking strawberries?) Suggestions? Sure: get over it -- and READ. And learn. It will be a long time before the special-effects action movie comes out.
Are you willing to do what it takes to be a true professional and a true expert, or would you rather get slaughtered in court and do nothing but whine about how mean the defense attorney was?
-
Les Bush
- Posts: 229
- Joined: Tue Jul 05, 2005 4:29 am
- Location: Australia
An expert reply
Well done, Pat,
It's been a while since I've read a response that shows the determination and ownership needed to be at the top of the profession. All things are not equal, but we must preserve the highest standard possible to ensure our science keeps pace with contemporary knowledge and expectations. To do otherwise would be to become mediocre, and that cap doesn't sit well. Regards from Oz. Les
-
Michele Triplett
Guest,
For those who don’t particularly like reading, I do have some suggestions to minimize the amount of reading.
One suggestion would be to listen to audio material that comes out, like the e-symposium or the recent NPR presentation, “Talk of the Nation: Faulty Forensics”. Granted, there’s not a lot of this kind of information around.
For articles available in Adobe Reader, you can select the “read out loud” option.
Talking and asking questions is a great tool (my particular favorite). Many departments have some sort of discussion group; it might meet monthly or quarterly. If your department doesn’t do this, call around to neighboring departments and see what you come up with.
Attend conferences and presentations.
Here’s my best suggestion. Our field has an amazing number of different topics we need to know about:
Chemical processing
Photography
Computer Enhancement
History
Scientific Methodology
Statistical studies
AFIS computer system
Cognitive Aspects of making identifications
Training
Safety
Critics and their gripes
Skin structure and formation
Important court cases
Organizations and their standards
Latent development of certain substrates
Court testimony
Etc.
Am I interested in all of these topics? NO
Do I read articles on all these topics? NO
Recognizing that it’s hard to read and comprehend what we aren’t interested in, try to narrow down your own interests and read articles on that topic. Soon you’ll find that other topics seem to sneak into what you’re reading about and you may become more interested in expanding your reading.
For those topics you don’t want to read about, get to know people who are interested in those areas; then, when you need information, you’ll know who to go to. These people can also be extremely beneficial because they can direct you to good articles, so you don’t waste your time reading articles that don’t have the information you’re looking for. This goes back to the “talking and asking questions” mentioned above.
In my opinion, getting to know good resources is what this website is all about!!
Michele
michele.triplett@metrokc.gov
-
Cindy Homer
- Posts: 23
- Joined: Wed Aug 10, 2005 7:51 am
- Location: Augusta, Maine
- Contact:
Sorry, but it's taken me a while to get around to reading this article.
One of my pet peeves about citing error rates and proficiency tests in articles such as the one written by Saks and Koehler is the failure to ever examine what an error rate is and how error rates are generated and used. I was taught that to use statistics appropriately and have meaningful data, you need a large population size. First, define the “population”. If we define “population” in terms of an individual examiner, then how many proficiency tests would I have to take in order to have a population size that meaningfully represents my “error rate”? I will need to have a long career, that’s for sure. If the “population” is all the friction ridge proficiency tests, then, whoa! Are Saks and Koehler seriously suggesting that that often misinterpreted and misrepresented “data” be used to determine the probability that I will commit an error? Give me a break. In what other scientific discipline does this occur? I can see it now… Dr. X, who has 20 PhD candidates doing research under him, calculates the error rate of each candidate and uses that to determine the probability that the information given to him by one of his candidates is erroneous. Ridiculous.
Joe has taken 6 proficiency tests in his three-year career (not counting all the exams he took in training), and on one test he believed there wasn’t enough information in the fingerprint to call it a match, so he submitted his test reflecting that. 76% of the other people who took the exam called it a match. Here comes a letter from ASCLD-LAB asking the lab management to explain this discrepancy. Was Joe wrong? Is that truly an error? What is considered an “error” is not as black and white as these “authors” who decry the “high error rate” in the identification sciences would like to profess.
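To put a rough number on the sample-size problem, here is a back-of-the-envelope sketch (my own illustration with hypothetical figures; nothing like this appears in Saks and Koehler). Even if you counted Joe's one disputed call as a genuine error, an exact binomial confidence interval shows how little six tests actually say about his true error rate:

# Back-of-the-envelope: how much do six proficiency tests really tell us
# about one examiner's "error rate"? (Hypothetical numbers, for illustration.)
from scipy.stats import beta

def clopper_pearson(errors, tests, conf=0.95):
    # Exact (Clopper-Pearson) confidence interval for a binomial proportion.
    alpha = 1 - conf
    lower = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, tests - errors + 1)
    upper = 1.0 if errors == tests else beta.ppf(1 - alpha / 2, errors + 1, tests - errors)
    return lower, upper

low, high = clopper_pearson(errors=1, tests=6)
print(f"point estimate {1/6:.0%}, 95% interval {low:.1%} to {high:.1%}")
# Prints roughly: point estimate 17%, 95% interval 0.4% to 64.1%

An interval that runs from well under one percent to nearly two-thirds tells you essentially nothing about an individual examiner, which is exactly why a handful of proficiency tests cannot stand in for a meaningful error rate.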
I am tired of reading non-scientists who belittle and criticize science. I see many areas for improvement within our science, and guess what? That’s what science is all about: growth, change, discovery, initiative, excitement, debate, thinking outside of the box, etc. If you don’t like it or don’t understand it, get out or educate yourself.
Here’s another hint: look up their references and read them. It seems that misquoting and misrepresenting information is rather habitual. I’m thankful that people who twist facts and numbers like this are NOT in our profession.