Confirmation Bias Study on Experienced Examiners

L.J.Steele
Posts: 430
Joined: Mon Aug 22, 2005 6:26 am
Location: Massachusetts

Confirmation Bias Study on Experienced Examiners

Post by L.J.Steele »

http://www.newscientist.com/article/mg18725174.500

Examiners' objectivity called into question

Fingerprint examiners can be heavily influenced by external factors when making judgements, according to research in which examiners were duped into thinking matching prints actually came from different people.

The study, by Itiel Dror and Ailsa Péron at the University of Southampton, UK, suggests that subjective bias can creep into situations in which a match between two prints is ambiguous. So influential can this bias be that experts may contradict evidence they have previously given in court. "I think it's pretty damning," says Simon Cole, a critic of fingerprint evidence at the University of California, Irvine.

Dror and Péron arranged for five fingerprint examiners to determine whether a "latent" print matched an inked exemplar obtained from a suspect. A latent print is an impression left at a crime scene and visualised by a technique such as dusting. The examiners were also told by a colleague that these prints were the same ones that had notoriously and incorrectly been matched by FBI fingerprint examiners last year in the investigation into the Madrid bombings. That mismatch led to Portland lawyer Brandon Mayfield being incorrectly identified as one of the bombers.

What the five examiners didn't know was that the prints were not from the bombing case at all. Each pair of prints, different for each examiner, had previously been presented in court by that same expert as a definite match.

Yet in the experiment only one of the experts correctly deemed their pair a match. "The other four participants changed their identification decision from the original decision they themselves had made five years earlier," says Dror. Three claimed the pair were a definite mismatch, while the fourth said there was insufficient information to make a definite decision. Dror will present the results at the Biometrics 2005 conference in London next month.

One solution, says Cole, might be for each forensics lab to have an independent official who distributes evidence anonymously to the forensic scientists. This would help to rule out any external case-related influences by forcing the scientists to work in isolation, knowing no more about each case than is necessary. At the moment fingerprint examiners asked to verify decisions made by their colleagues do not receive the evidence "blind". They already know the decision colleagues have made.

Paul Chamberlain, a fingerprint examiner with the UK Forensic Science Service who has more than 23 years' experience, says: "The FSS was aware of the need for a more robust scientific approach for fingerprint comparison." But he questions the relevance of the study. "The bias is unusual and it is, in effect, an artificial scenario," he says.
KaseyWertheim

The beginning portion of this article:

Post by KaseyWertheim »

How far should fingerprints be trusted?
17 September 2005
NewScientist.com news service
Andy Coghlan
James Randerson

A high-profile court case in Massachusetts is once again casting doubt on the claimed infallibility of fingerprint evidence. If the case succeeds it could open the door to numerous legal challenges.

The doubts follow cases in which the testimony of fingerprint examiners has turned out to be unreliable. The most high-profile mistake involved Brandon Mayfield, a Portland lawyer, who was incorrectly identified from crime scene prints taken at one of the Madrid terrorist bombings on 11 March 2004. Despite three FBI examiners plus an external expert agreeing on the identification, Spanish authorities eventually matched the prints to an Algerian.

Likewise, Stephan Cowans served six years in a Massachusetts prison for shooting a police officer before being released last year after the fingerprint evidence on which he had been convicted was trumped by DNA.

No one disputes that fingerprinting is a valuable and generally reliable police tool, but despite more than a century of use, fingerprinting has never been scientifically validated. This is significant because of the criteria governing the admission of scientific evidence in the US courts.

The so-called Daubert ruling introduced by the Supreme Court in 1993 set out five criteria for admitting expert testimony. One is that forensic techniques must have a known error rate, something that has never been established for fingerprinting.

The reliability of fingerprinting is at the centre of an appeal which opened earlier this month at the Massachusetts Supreme Court in Boston. Defence lawyers acting for Terry Patterson, who was convicted of murdering an off-duty policeman in 1993, have launched a so-called "interlocutory" appeal midway through the case itself to test the admissibility of fingerprinting. Patterson's conviction relies heavily on prints found on a door of the vehicle in which the victim died.

A key submission to the appeal court is a dossier signed by 16 leading fingerprint sceptics, citing numerous reasons for challenging the US Department of Justice's long-standing contention that fingerprint evidence has a "zero error rate", and so is beyond legal dispute. Indeed, fingerprint examiners have to give all-or-nothing judgements. The International Association for Identification, the oldest and largest professional forensic association in the world, states in a 1979 resolution that any expert giving "testimony of possible, probable or likely [fingerprint] identification shall be deemed to be engaged in conduct unbecoming".

Material in the dossier includes correspondence sent to New Scientist in 2004 by Stephen Meagher of the FBI's Latent Fingerprint Section in Quantico, Virginia, author of a pivotal but highly controversial study backing fingerprinting. The so-called "50K study" took a set of 50,000 pre-existing images of fingerprints and compared each one electronically against the whole of the data set, producing a grand total of 2.5 billion comparisons. It concluded that the chances of each image being mistaken for any of the other 49,999 images were vanishingly small, at 1 in 10^97.

But Meagher's study continues to be severely criticised. Critics say that showing an image is more like itself than other similar images is irrelevant. The study does not mimic what happens in real life, where messy, partial prints from a crime scene are compared with inked archive prints of known criminals.

When New Scientist highlighted these issues in 2004 (31 January 2004, p 6), Meagher's response to our questions arrived too late for publication. He wrote that critics misunderstood the purpose of his study, which sought to establish that individual fingerprints are effectively unique - unlike any other person's print. "This is not a study on error rate, or an effort to demonstrate what constitutes an identification," he wrote (the letter can be read at www.newscientist.com/article.ns?id=dn7983). By the time New Scientist went to press, the FBI had not responded to our requests for comment.

But critics of fingerprinting have seized on this admission and included it in the dossier as evidence that the 50K study doesn't back up the infallibility of fingerprinting. "It shows that the author of the study says it doesn't have anything to do with reliability," says Simon Cole, a criminologist at the University of California, Irvine and one of the 16 co-signatories of the dossier.

Cole says that Meagher's replies to New Scientist demolish claims by the courts, the FBI and prosecution lawyers that the 50K study is evidence of infallibility. He says the letter has already helped to undermine fingerprint evidence in a recent case in New Hampshire.

Whatever the decision in the Patterson case, the pressure is building for fingerprinting's error rate to be scientifically established.

One unpublished study may go some way to answering the critics. It documents the results of exercises in which 92 students with at least one year's training had to match archive and mock "crime scene" prints. Only two of the 5861 comparisons were incorrect, an error rate of 0.034 per cent. Kasey Wertheim, a private consultant who co-authored the study, told New Scientist that the results have been submitted for publication.

But evidence from qualified fingerprint examiners suggests a higher error rate. These are the results of proficiency tests cited by Cole in the Journal of Criminal Law & Criminology (vol 93, p 985). From these he estimates that false matches occurred at a rate of 0.8 per cent on average, and in one year were as high as 4.4 per cent. Even if the lower figure is correct, this would equate to 1900 mistaken fingerprint matches in the US in 2002 alone.
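
As a quick check of the arithmetic in the two preceding paragraphs, here is a minimal Python sketch; note that the total number of US matches in 2002 is back-calculated from the article's figures rather than stated directly:

```python
# Back-of-the-envelope check of the error-rate figures quoted above.

# Wertheim's student study: 2 errors in 5861 comparisons.
student_rate = 2 / 5861
print(f"student error rate: {student_rate:.3%}")               # ~0.034%

# Cole's proficiency-test estimate: 0.8% false matches on average.
# If that rate produced ~1900 mistaken matches in 2002, the implied
# number of matches made in the US that year would be:
implied_total_matches = 1900 / 0.008
print(f"implied total matches: {implied_total_matches:,.0f}")  # 237,500
```
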
Michele Triplett

Post by Michele Triplett »

As with any scientific endeavor, we should continually strive to improve our processes, both to become more efficient and to reduce errors.

I think one of the best things our profession can do is start documenting supporting data that leads us to our conclusions. Our industry seems to fear this process, claiming it would be so time consuming that it would backlog every office. I don’t believe this is true. To start, every conclusion wouldn’t have to have documentation, perhaps just the identifications that aren’t clearly apparent to the average examiner. Documentation is a standard process in most scientific fields. Ours seems to be the only field that avoids this process.

Having said that, let’s look at the conclusions of the 5 examiners in the above case, 4 of whom reached different conclusions than they originally had. If these examiners had been required to document the supporting data behind their conclusions, we would have been able to see what went wrong and rectify the situation. Maybe if the examiners had to explain how they arrived at their conclusions, they would have realized their conclusions weren’t based on reproducible, observable data (as science requires), but that outside information had biased them.

I’m not trying to imply that nobody in our field uses documentation (I know several who do), but from what I’ve seen this isn’t a standard requirement; in fact, most people don’t even recognize its value. I don’t believe SWGFAST has it as a recommendation. The CTS proficiency test doesn’t require it, and neither does the IAI Certification test.

Documentation can be of great value and I see those agencies that require its use (in specific cases) as being leaders in setting appropriate scientific procedures.

My personal view is that without documentation, or the ability to document if asked, a conclusion cannot be claimed to be arrived at scientifically.

I’ll finally get to my point! The study with the 5 examiners seems to show that we can be influenced by outside information and that perhaps doing our analysis blindly would reduce errors. I believe that doing an analysis scientifically reduces errors (which may include blind verification), and without documentation we can’t be assured that the examiners in this case did their analysis scientifically. Perhaps the large error rate in this study is due to improper use of methodology and not strictly to outside influences.

From the portion Kasey posted, finding an error rate for our industry may be important, but if we can establish why these errors occur (which would require documentation of the analysis), then we could stop many of these errors from occurring.

Michele
David Fairhurst
Posts: 196
Joined: Wed Jul 06, 2005 4:11 am
Location: UK

50k Study

Post by David Fairhurst »

For those who don't know, the result of the 50k study mentioned above was 1 in 10^97.
I guess Kasey just cut and pasted this and the superscript didn't transcribe.
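
For a sense of the scale involved, the comparison count is simple arithmetic; here is a minimal Python sketch using the article's figures (and assuming self-comparisons are included, which makes the round 2.5 billion exact):

```python
# Scale of the 50K study, using the figures quoted in the article.
n_images = 50_000

# Every image compared against the whole data set, including itself:
total_comparisons = n_images * n_images
print(f"{total_comparisons:,} comparisons")   # 2,500,000,000 (~2.5 billion)

# The study's claimed chance of one image being mistaken for another:
claimed_probability = 10.0 ** -97             # i.e. 1 in 10^97
print(claimed_probability)                    # 1e-97
```
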
Wayne

Documentation

Post by Wayne »

"To start, every conclusion wouldn’t have to have documentation, perhaps just the identifications that aren’t clearly apparent to the average examiner."

Michele,
I too document my analysis and agree that not all conclusions need to be documented. However, I am having a difficult time wording this in my policy manual. What guidelines do you use (or what wording is in your policy) to delineate when you do this? How do you describe the "not so obvious" identifications that do need to be documented?
Michele Triplett

Post by Michele Triplett »

Wayne,

Luckily I’m not in a management position, so I don’t have the responsibility for writing policies or ensuring that employees live up to them. Regardless, I am a working examiner, and if I claim to be an expert, with that comes the responsibility of understanding and practicing proper procedures, whether they exist in writing or not.

Scientific conclusions require that we’re able to explain how we came to every conclusion, if we’re ever asked. Those conclusions that can be easily reproduced in the future (whether for attorneys, a verifier, or management) wouldn’t need to be documented now, because they can be documented at any time. I think any conclusion that isn’t “clearly apparent” or is “atypical” in any way should have documentation attached. The amount of documentation depends on the print and on anyone peer reviewing the conclusion. While one peer reviewer might accept an identification without documentation, another might want a lot of documentation. It also depends on the experience of the practitioner: newer examiners should document more information until it’s clearly established that they’re seeing all the relevant information. It’s also important to realize that in science documentation can take several forms. It could be a picture of a charted impression, a sketch, notes, or a formal report. We have to make sure to follow scientific guidelines along with agency and industry policies and standards.

Example 1: A small, clear impression is identified to the left palm, where the level one detail doesn’t give an indication of direction or area. In 2 years, when this goes to court, could you find this identification easily again? Maybe not. Documentation could be as simple as indicating the area and the direction.

Example 2: A latent print is identified to the #7 finger. Over 30 level 2 characteristics exist, but there is one area of the impression where the level 2 characteristics don’t match. Did the original examiner notice this area? Did they ignore data that didn’t fit their conclusion, or was there supporting data to explain that this was a double impression or smearing? Documentation should be asked for by the verifier, or done by the examiner as a reminder in 2 years that they had noticed this area.

Example 3: A latent impression on a check doesn’t appear to be of value for individualization. If the background is taken out using some form of computer imaging technique and then the latent is individualized, this procedure (since it isn’t standard) should be documented.
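
To make the idea concrete, here is a hypothetical sketch of a minimal examination record covering cases like these three examples; the structure and field names are invented for illustration, not drawn from any agency's policy:

```python
# Hypothetical minimal examination record; the fields map roughly onto
# the three examples above and are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExaminationRecord:
    case_id: str
    conclusion: str                  # e.g. "identification", "exclusion"
    anatomical_area: str             # e.g. "left palm" (Example 1)
    orientation_notes: str           # area and direction (Example 1)
    # Non-matching areas and their explanations, e.g. "one area of
    # level 2 disagreement; supporting data shows a double impression"
    # (Example 2):
    discrepancies: list[str] = field(default_factory=list)
    # Non-standard processes, e.g. "background removed with digital
    # imaging" (Example 3):
    nonstandard_processes: list[str] = field(default_factory=list)
    examined_on: date = field(default_factory=date.today)
```
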

If I were writing a manual, I would try to keep it very general and probably write something like:

“Documentation of conclusions will be done according to scientific guidelines and may be asked for in any situation. Documentation is required for any conclusion that isn’t clearly apparent to the average examiner and when any process is performed that is not standard.”

I can already anticipate someone asking, “What are scientific guidelines?” Scientific guidelines are dependent on the practitioner’s abilities and the situation. I wouldn’t ask Einstein to tell me how he determined 9x8=72, but I would ask a 3rd grader (someone in training). I would need to know if the 3rd grader came to his or her conclusion by guessing, by memorization, or by knowing that the conclusion could be arrived at by adding 9+9+9+9+9+9+9+9. So I may add to the manual:
“Scientific guidelines are determined by the abilities of the practitioner as well as the given situation”.

I know this seems overly simplified but I would use this and modify it as needed, continually looking at ways to improve the policy.

Michele
Michele.triplett@metrokc.gov
Norberto Rivera
Posts: 33
Joined: Mon Aug 15, 2005 7:10 am
Location: Griffin, GA

Post by Norberto Rivera »

Great thread!
As a new examiner I'm interested in knowing how different LPEs document comparisons. I've seen reports that are short, sweet, and to the point (i.e., the latent print submitted on blah blah blah was examined and found to match the #x finger on the tenprint card bearing the name so and so). I've also seen reports that are considerably more involved.

I am more inclined to include every relevant detail available to me, in chronological order, beginning with the original reason for police response if I know it. Don't misunderstand me, I don't mean re-writing a police incident report, but enough information to let my peer reviewer know how and where a particular latent print originated, how it was developed, how the suspect was suggested or identified, whether or not the latent was processed through AFIS, etc. I also prefer to describe how I examined the prints (i.e., what characteristic stood out the most to me, which area I used to begin my comparison, how many matching characteristics I observed, how many, if any, non-matching characteristics I observed [theoretically this should be none if dealing with an individualization, unless part of the print is distorted], etc.). The end result is usually about a two-page dissertation that anyone with some knowledge of LPE can pick up and follow from beginning to end.

Granted, I only do this type of report when I make an individualization. If a particular comparison results in a non-identification and an investigator requires more than verbal notice, then I apply the short method and usually include at what level of detail I stopped the comparison (i.e., level 1 detail was in obvious disagreement between the latent print and the known inked impressions of suspect so and so). Enough of my soapbox. I do agree that documentation should be included, and be as complete as possible, so that anyone who may review your findings is able to reproduce them.
"We're all here 'cuz we ain't all there!"
"How long a minute is depends on what side of the bathroom door you're standing on."
L.J.Steele
Posts: 430
Joined: Mon Aug 22, 2005 6:26 am
Location: Massachusetts

Post by L.J.Steele »

Norberto Rivera wrote:I am more inclined to include every relevant detail available to me, in chronological order, beginning with the original reason for police response if I know it. Don't misunderstand me, I don't mean re-writing a police incident report, but enough information to let my peer reviewer know how and where a particular latent print originated, how it was developed, how the suspect was suggested or identified * * * *
Knowing why the police suggested a specific suspect, particularly if there's external information that seems to confirm a match (confession, DNA, informant's tip, etc.), is precisely the problem the Dror study is discussing and was mentioned in the Stacey report on the Mayfield mis-ID.

The problem, in a nutshell, is "confirmation bias". The human brain is a very odd thing sometimes. One of the things we all do, subconsciously, is to seek out data that confirms our pre-existing beliefs and to discount data that challenges them. Dror gave the examiners an external reason to believe the latent they saw was a non-match. That information, it seems, overrode their previous documented conclusion that the latent was a match. In Mayfield's case, apparently the information the examiners learned about Mayfield, and what the verifiers knew about the folks who made the initial match, overrode the warning signs in the latent that it was not a match.

The psych folks will tell you that confirmation bias is not something you can overcome by strength of will or rigid adherence to standards. Your brain will play tricks on you -- that's why most medical studies are "double-blind" -- neither the subject nor the person giving the test and recording the results knows whether the patient is getting an actual drug or a placebo.

IMHO, and speaking as an attorney, not an examiner, the examiner should know as little as practical about the case and the suspect until after the ACE-V analysis is made and documented. In cases where this information is relevant, the examiner should mention what he or she knew in the report, as well as the reasons for the conclusion.
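
To make that concrete, here is a hypothetical sketch of how an intake step might strip case context before evidence reaches the examiner, along the lines of the anonymous-distribution idea Cole suggests in the article above; the function and field names are invented for illustration:

```python
# Hypothetical sketch of blind case distribution: an intake official
# strips case context and forwards only the prints under an opaque
# tracking ID. Names and structure are invented for illustration.
import secrets

def anonymize_case(case: dict) -> tuple[str, dict]:
    """Return only what the examiner needs, keyed by a random tracking ID."""
    tracking_id = secrets.token_hex(8)
    evidence_only = {
        "tracking_id": tracking_id,
        "latent_print": case["latent_print"],
        "exemplar": case["exemplar"],
        # Deliberately omitted: suspect name, criminal record, prior
        # examiners' conclusions, and any other case context.
    }
    return tracking_id, evidence_only
```

The intake official would keep the tracking-ID-to-case mapping, so results can be reconciled with the case after the examination is documented.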
Patrick Warrick
Posts: 37
Joined: Mon Jul 11, 2005 7:46 am
Location: Minnesota BCA-Northern Minnesota

confirmation bias and double blind exams

Post by Patrick Warrick »

"The psych folks will tell you that confirmation bais is not something you can overcome by strength of will or rigid adherence to standards. Your brain will play tricks on you -- that's why most medical studies are "double-blind" -- neither the subject nor the person giving the test and recording the results knows whether the patient is getting an actual drug or a placebo."


That might be fine for medical studies, but when doctors give a second opinion on a real case, they either concur with the first prognosis or they don't... and that is not a double-blind exam. I do not see how it would be possible, much less practical, to completely eliminate every form of bias in your mind. You just can't do it. Everyone, every day, has that going on in their head, for almost every decision we make. I cannot think of any discipline in forensic science that can say it performs double-blind exams for its individualizations in casework. We must do our best to recognize that the bias is there, be aware of it, and try to be as objective as possible.

Patrick Warrick
"Rather leave the crime of the guilty unpunished than condemn the innocent."-Marcus Tullius Cicero, Roman statesman (106–43 B.C.)
Mary McCarthy

Experimental design

Post by Mary McCarthy »

A few comments about the design of the Dror and Peron study:
In a controlled experiment there are several factors. The treatment is the variable that is manipulated to produce results that will support or refute competing hypotheses. The competing hypotheses of the Dror and Peron study could be expressed as:
H1: fingerprint examiners are influenced by information they receive about a comparison prior to conducting the comparison
H2 : fingerprint examiners are not influenced by information they receive about a comparison prior to conducting the comparison

The treatment in this study is the information received before the comparison. Good experimental design requires a control, i.e., a group which was not given the treatment.
One group of examiners should be given a set of latents and standards and asked to conduct comparisons and render opinions.
A second group of examiners should be given a set of latents and standards, given information about each latent to influence their decisions, and then asked to conduct comparisons and render opinions.
The difference between the two groups is then attributed to the effect the information had on the decisions rendered.
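
As a hypothetical illustration of how results from such a two-group design might be analysed, here is a minimal Python sketch; the counts are invented purely for illustration and are not from any real study:

```python
# Hypothetical analysis of the control/treatment design described above.
# All counts are invented for illustration only.
from scipy.stats import fisher_exact

# Each examiner re-examines a pair they previously reported as a match;
# "changed" means they departed from their original conclusion.
control = {"changed": 1, "unchanged": 19}   # no biasing information given
treated = {"changed": 8, "unchanged": 12}   # told the pair was a known error

# Fisher's exact test suits the small samples typical of such studies.
odds_ratio, p_value = fisher_exact([
    [control["changed"], control["unchanged"]],
    [treated["changed"], treated["unchanged"]],
])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```
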
In this study, there was no control group. The sample size of the experiment was five, much too small to support conclusions applicable to thousands of latent print examiners. Further, it appears that the researchers are attempting to explore confirmation bias with respect to identifications by studying a bias not to identify, a bias of caution. The gravity of a false identification is so great that information that a latent has been falsely identified can and should introduce caution into the comparison process.
The real question is: does the knowledge that another examiner reached a conclusion of identification for a particular comparison influence the verification of that latent by a second examiner? This is a question all latent print examiners should recognize so that we can take steps to avoid this potential problem. The information provided by the design of this experiment concerned the influence that previous knowledge of an erroneous identification has on the decision rendered. Most latent print examiners know that the opinion that a latent print was not made by a subject, if false, is a much less grievous error than a false positive identification. In most cases, failing to identify a latent print to a subject is of little consequence: it does not mean the subject was not present, nor does it mean the subject did not touch the surface.
The topic of confirmation bias is too important to ignore, but the results of this study provide little information on the issue.
Steve Everist
Site Admin
Posts: 551
Joined: Sun Jul 03, 2005 4:27 pm
Location: Bellevue, WA

Post by Steve Everist »

L.J.Steele wrote:
The psych folks will tell you that confirmation bias is not something you can overcome by strength of will or rigid adherence to standards. Your brain will play tricks on you -- that's why most medical studies are "double-blind" -- neither the subject nor the person giving the test and recording the results knows whether the patient is getting an actual drug or a placebo.
I think this statement, to a degree, outlines many of the problems that come with the processes of blind verification and double-blind testing as many try to apply them to fingerprint comparison. First of all, the term blind verification is confusing because it contains the word “verification.” This is not synonymous with the Verification phase of ACE-V; it is actually a part of the testing, or Comparison, phase of ACE-V. Thus the term blind testing (as used in the term double-blind testing) is a more accurate representation of what is occurring. In all sciences, testing is done prior to arriving at the conclusion.

Similar to what Pat Warrick stated, the Verification phase of a medical prognosis is not normally done blind to the first physician’s results. Applying this to the comment regarding medical studies: are the results of those studies not normally known (published) when other studies of a similar nature are performed? These additional studies are the Verification phase; the goal is to see whether the previous results are repeatable and whether they were arrived at correctly. There may be different testing (Comparison) methods in the subsequent studies, but the final result will either support or not support the original study’s results. That is the Verification phase of ACE-V.
Steve E.
sharon cook

error rates/double blind/documentation

Post by sharon cook »

You can write down the time of day, the weather, each and every individual ridge and what it looks like at the edges, right down to how many times you took a breath during the examination...and it still won't verify an identification. The amount of "documentation" done only shows how anal retentive you are; it doesn't prove anything about the latent print or whether or not it is an identification. So what's the point of documentation? YOU HAVE TO LOOK AT THE PRINT!

As for error rates...am I just stupid, or are the sceptics trying to disprove the science of dactyloscopy by showing that some examiners are bad at it? What does a study showing that five or even five hundred examiners messed up have to do with the science of fingerprints? An analogy seen before: does the fact that someone can't add disprove the science of mathematics?

Double blind testing would be wonderful if you have one or two cases per year. Try having a backlog of 3 or 4 hundred cases per month and see how much blind verification OR "documentation" you'd be doing.

The point I'm laboring to make is: bad examiners don't equal bad science. It is what it is. It is the ability of the individual examiner that should be tested rigorously.
Ernie Hamm

Confirmation Bias

Post by Ernie Hamm »

This discussion on confirmation bias of opinions regarding latent prints, or even other types of marks, is interesting. I do not believe I am alone, by a long shot, but I feel this was never an issue in my casework in various areas of comparative examinations, whether I knew a lot or a little about the incident. The comparison was mark-to-mark, and what was seen to be in agreement or disagreement was the ONLY aspect under consideration. I appreciate the reference to psychological and unconscious influences, but I do not believe that was ever the case. I know there were a number of especially difficult cases in which I was really, really interested in contributing by placing a suspect at the scene. I tried very hard to make that happen, but not to the extent that I would start making the latent fit the record. I even know of past cases where, in ‘my heart of hearts’, I felt the scene mark could be associated with a suspect, but I still would not make the ident call; I let it slide. If identification happened, it happened, but I would not make it happen.

As I stated, I know I am not alone or singularly exceptional on this point. There are undoubtedly many, many examiners with the same perceptions toward their responsibilities in this regard. The last overhead (apology for the gender) I usually display in some of my presentations and training sessions is one (similar to Pat Warrick's quote) from the title page of “Science Catches the Criminal” (aka “Science Versus Crime”) by Robinson: “If the law has made you a witness, remain a man of Science. You have no victim to avenge, no guilty or innocent person to ruin or save. You must bear witness within the limits of Science.” (Paul C.H. Brouardel (1837-1906), Chief of Forensic Medicine, Sorbonne, Paris, 1897)
TAS

Real World

Post by TAS »

[quote="L.J.Steele"][
Knowing why the police suggested a specific suspect, particularly if there's external information that seems to confirm a match (confession, DNA, informant's tip, etc.), is precisely the problem the Dror study is discussing and was mentioned in the Stacey report on the Mayfield mis-ID.



Ms. Steele,
Let me suggest a hypothetical scenario where your personal residence was burglarized and a family heirloom was stolen, say, your great grandmother's engagement ring. The crime scene examination results in the recovery of latent prints at the point of entry to YOUR HOME, obviously those of the intruder. "Meanwhile back at the office", the Detectives get a call regarding someone trying to sell an old engagement ring at a tavern near your home. The person is a known thief with a drug habit.

Now, how would you suggest that this 'suspect' information be communicated to the Ident Office, knowing the more time that passes, the more the likelihood exists that you'll never, ever, see that ring again?

My point simply, psychology aside, there is a job to be done. Also consider, in a typical week, we exclude suspects far more frequently than we individualize.
L.J.Steele
Posts: 430
Joined: Mon Aug 22, 2005 6:26 am
Location: Massachusetts

Re: Real World

Post by L.J.Steele »

TAS wrote: Let me suggest a hypothetical scenario where your personal residence was burglarized and a family heirloom was stolen, say, your great grandmother's engagement ring. The crime scene examination results in the recovery of latent prints at the point of entry to YOUR HOME, obviously those of the intruder. "Meanwhile back at the office", the Detectives get a call regarding someone trying to sell an old engagement ring at a tavern near your home. The person is a known thief with a drug habit.

Now, how would you suggest that this 'suspect' information be communicated to the Ident Office, knowing the more time that passes, the more the likelihood exists that you'll never, ever, see that ring again?
And is there no way for the police to contact the examiner and say, "Could you look at Suspect X on this case?", without communicating why they are interested in X? The major bias problem would be the examiner knowing Suspect X's record.

My hypothetical involvement doesn't matter -- the question is whether the police get the right guy, and can make the conviction stick. There's no reason in your hypothetical that a patrol officer can't go to the bar and ask some questions, see if the ring matches the burglary report -- the seller may be the burglar, his buddy, someone the burglar traded the jewelry to for a rock of cocaine, or some other damned fool selling his own family's heirlooms for drug money. As victim, I'd want the case to stick and the bad guy to go away for a while. As a defense attorney, my biggest concern is having a client who is factually innocent take a plea because he doesn't want to risk trial, or go to jail after trial.

Last year, I taught part of a CLE in Connecticut on eyewitness ID. The two final speakers were victims of sexual assaults -- brutal, shocking crimes. Both had identified their assailants and were 100% confident that they were right. The men they identified each went to jail. In Penny Beerntsen's case, DNA exonerated the man she identified 18 years later. Beerntsen said, “When my attorney told me that the judge had reversed the verdict, I wanted the earth to swallow me. After all, I was partly responsible for identifying the wrong man, and no one can give Steve back those lost years. Not a day goes by when I don't think about the woman Gregory Allen raped in 1995, or wonder how many other women’s lives were drastically altered in those years when he was walking free.”
The second speaker was Jennifer Thompson-Cannino. Ten years later, DNA proved she too had identified the wrong man. Thompson was devastated. “I remember feeling sick, but also I remember feeling just an overwhelming sense of just guilt that if indeed we had made a mistake and I had contributed to taking away 11 years of this man's life, and if indeed we had been wrong--I felt so bad. I fell apart. I cried and cried and I wept and I was angry at me and I beat myself up for it for a long time.”
This is what happens when folks take shortcuts -- in Beerntsen and Cannino's case, it was shortcuts with eyewitness ID procedures that are designed these days to minimize suggestion and bias. Think for a sec, folks. You listen to the biasing info. Subconsciously it affects your decision. Your testimony helps put a man in jail. A decade later, DNA or something else conclusive shows you were wrong -- you made a good faith mistake. How do you feel? And not just you, but the victim, the victims of the real bad guy uncaught due to the error, the innocent guy who was in jail, and so on.

Yes, I know there are practical problems with trying to make the verifying examiner in ACE-V blind to the 1st examiner's results. I know there are practical problems with trying to insulate the examiner doing the ACE analysis from biasing crime scene info -- but is it not worth at least considering how to minimize the chance of bias when the consequences of error are so grave?

There were the same kinds of debates when sequential double-blind photo arrays and line-ups were proposed for police. Small departments said we can't make this work, we don't have the staff, everyone knows about all the cases. But some bright guys found cost-effective ways to do it.