"In my opinion"

Discuss, Discover, Learn, and Share. Feel free to share information.

Moderators: orrb, saw22

Post Reply
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

"In my opinion"

Post by ER »

Thought this was interesting in email form and wanted to repost my response here.
Subject: FW: A call for more science in forensic science

Very interesting article! It raises the question: if we think more science should be implemented, then why aren’t we implementing it?

For example, many practitioners (and agencies) hold tight to the idea that conclusions are ‘opinions’.
The DOJ Uniform Language document recommends ‘… is an examiner’s decision that the observed corresponding friction ridge skin features provide extremely strong support for the proposition that the two impressions came from the same source and extremely weak support for the proposition that the two impressions came from different sources’
The AAAS response to the DOJ document, released in March 2018, recommends verbiage that includes ‘… it is my opinion…’

Doesn’t science try to limit human interpretations, not promote them?
If an examiner can give an opinion that it’s this person, why can’t they give an opinion that all others would be excluded?
Do all ID’s have ‘extremely strong support’ or are some still making or recommending overstatements?

Again, interesting article that could promote discussion. Let’s hope it promotes some changes as well.

Thanks for sharing with us Shelley!
Have a great weekend,
Michele

Michele Triplett

I’ll try to answer a few of these questions.

It seems that Michele is suggesting that testifying “in my opinion” increases human interpretation, and that one goal of science is to limit human interpretation. Further, that since forensic science should be more scientific, forensic scientists should not be testifying “in my opinion”. I acknowledge that I could be misreading this, but I’m going to respond as if this is the proposed argument.

First off, the idea that science (or even good science) does not involve human interpretation, judgement, or bias is incorrect. Good science knows that all observations are subject to these factors and makes clear that observations may be mistaken because of them. Good science does not avoid human interpretation; it seeks to replicate results to minimize the risk that the human interpretation was wrong, and it is willing to change conclusions based on new data, new observations, and new interpretations. Short version: saying that the result is “in my opinion” does not make the result less scientific or more questionable. This type of “opinion” is very different from someone’s “opinion” on Coldplay b-sides (which are all terrible).

Further, the courts understand this and expect us to testify in this manner. According to the Federal Rules of Evidence:

Rule 702. Testimony by Expert Witnesses
A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:
(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b) the testimony is based on sufficient facts or data;
(c) the testimony is the product of reliable principles and methods; and
(d) the expert has reliably applied the principles and methods to the facts of the case.

A forensic scientist’s expert witness opinion is based on scientific knowledge, will help in the understanding of a fact, is based on data, and results from reliable methods correctly applied. (Again, very different from someone’s opinion on Coldplay.)

Returning to the questions in this email:

“Doesn’t science try to limit human interpretations, not promote them?”
No. Science IS human interpretations of human observations. In any case, stating an opinion based on knowledge, data, and reliable methods correctly applied is exactly how the courts want scientific expert witnesses to testify. And rightly so.

“If an examiner can give an opinion that it’s this person, why can’t they give an opinion that all others would be excluded?”
Because the opinion must be based on knowledge, data, and reliable methods correctly applied. We compare friction ridge impressions, not friction ridges. It is just flatly incorrect, unsupportable, and unreliable to suggest that an identification precludes the possibility that another fingerprint card somewhere in the world has a friction ridge impression that looks close enough to the identified latent that some latent print examiner would be fooled by the similarities. Again, a scientific expert witness opinion can’t just be whatever you think, but must be based on knowledge, data, and reliable methods correctly applied.

“Do all ID’s have ‘extremely strong support’ or are some still making or recommending overstatements?”
From my experience, the vast majority of latent print identifications provide extremely strong support. I have seen some identifications on rare occasions where I personally do not think that there is enough data to support that conclusion. So, no. Not all identifications have extremely strong support.

-Eric Ray
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: "In my opinion"

Post by Boyd Baumgartner »

I think you raise some good issues (especially the Coldplay one), which is why I would say I tend to nerd out on the philosophy of it all. Otherwise it devolves very quickly into word games which fatigue people to the point of 'just tell me what to say', which is dangerous in its own right. The more word games we play, the more it seems we either arrive at more and more technical language that only people in the industry know but expect to be obvious to lay people, or we're constantly trying to bracket a definition to dodge a criticism. Ironically, this is all lost on juries.
It didn’t seem to matter whether they were simply told that a print at the scene “matched” or was “individualized” to the defendant, or whether the examiner offered further justification—the chance of an error is “so remote that it is considered to be a practical impossibility,” for example.
I would start by saying that there is no unified 'goal of science', and this point has actually been written about (nerd link) quite extensively. Science literally just means 'having knowledge', so it makes sense that science is stratified: the most reliable means of generating knowledge reign supreme (I'm looking at you, physics) on one end of the spectrum, and on the other end are fields where the knowledge generated comes from more complex systems that deal with qualitative (read: interpretive) data that doesn't fit so nicely into algorithms (cough * social sciences *). The point here is that science doesn't have a unified goal, and by consequence it doesn't have a unified voice.

That being said, I don't think you ever remove human interpretation from the equation, for the simple fact that fingerprints do not compare themselves (that's a very reliable experiment, btw...); it takes humans, or algorithms which systematize human values, to do so. Furthermore, interpretation or evaluation is not based solely on the data in the print but on the attitudes that people have about that data. This is the essence of why sticking a definition to sufficiency is so hard. There is no unit of measurement for 'attitude', and furthermore 'attitudes' don't necessarily map to 'truth'. It's what I always say about the notion of confidence. You can be confident, but wrong.

Layer all of this on top of a judicial system which articulates the value of specialized knowledge, uses language like 'opinion evidence', and tempers it with scrutiny in the adversarial arena, and you've got an onion of complexity that's quite nuanced and not so easily disentangled.
NRivera
Posts: 138
Joined: Fri Oct 16, 2009 8:04 am
Location: Atlanta, GA

Re: "In my opinion"

Post by NRivera »

Am I the only one who thought he was going to claim Coldplay was the voice of science? :lol:
"If at first you don't succeed, skydiving was not for you."
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

Re: "In my opinion"

Post by ER »

Glad to clarify. I was referring to the best way to arrive at conclusions, to protect against errors, not who is allowed to testify by the courts. Just because the courts allow something doesn’t mean it is a best practice for scientific conclusions. If we allow conclusions to be opinions, then we shouldn’t be surprised by conclusions like those of Michael West (a well-known bite mark expert who was allowed to testify frequently, but whose conclusions were wrong).

Experts are definitely different from those following scientific protocols. I hope people aren’t taking the FRE as requirements of science; they are requirements for testimony.

Perhaps the answer to improving forensic science conclusions is to change the FRE to have one rule for expert testimony and another for scientific testimony. Scientific conclusions should be based on data, accepted methods and testing … not based on the training, experience or ability of a person arriving at the conclusion. In logic, there are fallacies and one fallacy is in regard to putting weight in a person’s credentials or notoriety (or even putting too much weight in our own abilities). They call it an appeal to authority, and it is noted as a fallacy (something to try to protect against).

Here's a statement from the NAS Report:

“Two very important questions should underlie the law’s admission of and reliance upon forensic evidence in criminal trials: (1) the extent to which a particular forensic discipline is founded on a reliable scientific methodology that gives it the capacity to accurately analyze evidence and report findings and (2) the extent to which practitioners in a particular forensic discipline rely on human interpretation that could be tainted by error, the threat of bias, or the absence of sound operational procedures and robust performance standards.”

Again, expert conclusions may rely on experience, but scientific conclusions should rely on data, methods and testing. Which brings up something you mentioned: replicated results. Yes, this is a protocol of science, but it’s for physical events/experiments (if yeast makes bread rise, it should work for any person, not just one person); however, replication is not a protocol for analytic conclusions. For analytic conclusions, if you have replicated results, you simply have two people who think the same thing (or 3 or 4… like in Mayfield). In science, the protocol for analytic conclusions is to have the results be well supported to the point of holding up to scrutiny (there’s that magic word).

One acceptable scientific method is deduction (I excluded the others by logical deduction… a valid scientific method). Or I could exclude due to rules. I think everyone should be wondering what acceptable rules are (I don’t think ‘try looking twice and then trust your expertise’ is an acceptable rule; some have tried it and it doesn’t work).

My point is, if we want stronger conclusions, and if we really want to protect against errors, there are ways to do it, but sometimes it seems like we’re digging our heels in and resisting change. Other times it seems like we’re too willing to follow authority figures when their statements are not well justified. Hopefully some will recognize this and move in a better direction.

Love the discussion!
Michele
“Scientific conclusions should be based on data, accepted methods and testing … not based on the training, experience or ability of a person arriving at the conclusion.”

You seem to be assigning an unwarranted definition to opinion. You’ve decided that this word is defined by a lack of data, methods, and testing, and solely relies on someone’s training, experience, and ability. There are innumerable examples of scientific conclusions based on data, methods, and testing being wrong or being adjusted over time or being debated over time. The essence of science is being able to always consider new evidence and adjust your conclusion based on new data. In this context, an opinion is only as good as the data that supports it. This is very different from the common definition of opinion. My opinion that Coldplay sucks isn’t based on much data, but instead on personal tastes. That doesn’t invalidate using the term in a scientific context to mean a conclusion strongly supported by evidence, data, and reliable methods correctly applied. In the end, if someone were to come to court and demonstrate that one of my ID’s was invalid or lacked sufficient evidence, I would change my conclusion/opinion based on the new evidence.

Back on the exclusion topic, reaching an identification should in no way be the basis for an assumption that all other friction ridge RECORDINGS will be sufficiently different that no latent print examiner would ever make a mistake when comparing the identified latent. This isn’t an argument based on logic. We have all seen plenty of examples of close non-matches. There is no force in nature that guarantees that once two impressions are identified, there won’t EVER be another impression distorted in just the right way to fool a latent print examiner. Differential growth only makes this possibility extremely unlikely. The logical deduction fails. Even more so, there is insufficient data to support the statement because there has been insufficient testing of the hypothesis that once an identification is declared, no examiner could ever be fooled into identifying the wrong person later. In fact, current research suggests that, with a large enough testing group and a large enough testing sample, there will eventually be a latent that will fool different examiners into identifying different people (and by extension, you can’t just “exclude” everyone else once you identify one person).
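To make that last point concrete, here is a rough back-of-envelope sketch (my illustration, not taken from the research mentioned above), assuming a purely hypothetical per-comparison false-positive probability and treating comparisons as independent: even a tiny chance of error on any single comparison pushes the probability of at least one erroneous identification toward certainty as the number of comparisons grows.

```python
# Hypothetical illustration only: p is an assumed per-comparison false-positive
# probability, not a measured latent print error rate, and comparisons are
# treated as independent for simplicity.

def prob_at_least_one_false_id(p: float, n_comparisons: int) -> float:
    """Probability that at least one of n independent comparisons
    results in an erroneous identification."""
    return 1.0 - (1.0 - p) ** n_comparisons

if __name__ == "__main__":
    p = 0.001  # assumed 0.1% chance of a false ID on any one comparison
    for n in (100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}: P(at least one false ID) = {prob_at_least_one_false_id(p, n):.4f}")
```

With those assumed numbers, the probability climbs from roughly 10% at 100 comparisons to essentially 100% at 100,000, which is the intuition behind the close non-match problem.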

I also know this from personal experience. I have this latent. I present it to examiners all the time with a non-matching exemplar, and 10-20% misidentify it, every class.

Logically,
It is possible for any identification to be incorrect.
It is also possible that another comparison of that identified latent to the true source would result in a correct identification.
Therefore, it is not logical that the initial identification (by itself) results in exclusions to all other fingers.

-Eric Ray
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

Re: "In my opinion"

Post by ER »

I think you’re confusing the issue by discussing 4 different topics. There’s a big difference between scientific conclusions, accuracy, the sample group considered, and the parameters/criteria to ID or exclude. I think we need to discuss one before we can discuss the others.

Accuracy (better stated as correctness, since we don’t ever know the ground truth) is the result of procedures and criteria. If you don’t have criteria, how can you judge correctness? We often mistake agreement for correctness (it’s an ID because everyone agrees). However, I think you are exactly right: a theoretical exclusion is only as solid as the ID. If the ID is strong (full clear impression) then the exclusion is strong (so theoretical exclusions can be very strong if arrived at in the right situations). Along with your scenario, we have to look at the parameters for both ID’s and exclusions. I would bet in your case, people are giving their opinion (personal opinion), based on their feelings, not based on criteria. If there are parameters for an ID, then perhaps the better conclusion would be ‘inconclusive’ and not ID. As I stated, there need to be parameters for exclusions as well (for both exclusions from a comparison and theoretical exclusions).

It seems very shortsighted to say I’m wrong if you haven’t even considered the parameters I use. Isn’t that a definition of bias (jumping to a conclusion based on preconceived ideas or feelings, without considering all the information)?

I hope if you post this to a chat board, you post it all.

Sincerely,
Michele
You're definitely right about the multiple topics. And Boyd above is correct about how things start to bleed over into other topics quickly.

So, in my scenario (I assume you're referring to my sample latent from class) did examiners give their "opinion (personal opinion), based on feelings"? Well, no. What latent print examiner is presenting an identification as a personal opinion based on feelings? They gave a result, a conclusion, or an opinion based on observable data and their prior experience and training, within the bounds and parameters of the latent print field.

So at the risk of falling into the trap of word games that Boyd warned about on the clpex board, why define the word opinion solely to mean "personal opinion, based on feelings"? Opinion has multiple definitions. One is a personal view or belief, but others include a professional judgment or any belief less than complete certainty. Is an identification decision a fact? No. Like any other scientific opinion, if further study and additional data supports a different conclusion, could an identification decision be changed? Yes. I see no reason to decry the idea that conclusions are opinions.

Leaving all other topics aside for now, this is the comment that started me writing a response to your original email. Are latent print conclusions opinions? Why not? Opinion doesn't imply that the conclusion is solely based on feelings. It doesn't imply an increase in human interpretation. It doesn't decrease the observable basis for the conclusion. From either the scientific or the legal perspective, there's no problem with calling our conclusions opinions.

Touching quickly on the exclusion topic...
I agree with you about establishing parameters for conclusions. For a long time, those parameters were well understood and well established for ID's, even if we had a difficult time explaining them to non-experts. After training we all had a pretty good idea of what was "enough" for an ID. Again, it was an ill-defined and somewhat nebulous parameter, but it was something. I'm very pleased when I now see more and more agencies establishing parameters for exclusions.

I guess in the end, I don't see the purpose of the theoretical exclusion. I identified the finger. Why do I need to talk about all of the un-compared fingers? I mean, why even bring it up? To bolster the ID? Just let it stand as is. To emphasize the strength of the conclusion? An ID is plenty strong without mentioning theoretical exclusions.

-Eric Ray
NRivera
Posts: 138
Joined: Fri Oct 16, 2009 8:04 am
Location: Atlanta, GA

Re: "In my opinion"

Post by NRivera »

I have to agree with Eric. I don't see anything wrong with calling it an opinion. A doctor's diagnosis is an opinion; a judge's interpretation of law is an opinion. Merriam-Webster defines opinion as ..."a formal expression of judgment or advice by an expert"... Contrary to Eric's opinion about Coldplay, our opinions are informed and based on observable data. Nothing wrong or "unscientific" about that. I might add it also serves to convey certain limitations to our conclusions as being short of fact.
"If at first you don't succeed, skydiving was not for you."
Boyd Baumgartner
Posts: 567
Joined: Sat Aug 06, 2005 11:03 am

Re: "In my opinion"

Post by Boyd Baumgartner »

So.....I finally read the article that prompted the discussion in the first place. It comes across as clickbait that turns into a kickstarter, asking for money.

A couple of points: Any time you hear someone attempting to draw a line between what's 'real' science and what's not, your skepto-meter should start peaking. The Demarcation Problem (NerdLink) is literally thousands of years old and not resolved. What this means is that what counts as progress can and will vary by discipline.

At the moment, it seems that progress in our discipline is marked by transparency and by recognizing that comparisons vary in their complexity. The rush to probabilities was, in part, little more than the phenomenon outlined by Jerry Muller in 'The Tyranny of Metrics', namely that metrics may inform, but do not replace, human judgement, because there are intangibles that matter. I would argue that AFIS itself is the perfect example of this phenomenon in our discipline.

I think that the PCAST report explicitly recognizes this when it says:
Proficiency testing. Proficiency testing is essential for assessing an examiner’s capability and performance in making accurate judgments. As discussed elsewhere in this report, proficiency testing needs to be improved by making it more rigorous, by incorporating it systematically within the flow of casework, and by disclosing tests for evaluation by the scientific community.

Scientific validity as applied, then, requires that an expert: (1) has undergone relevant proficiency testing to test his or her accuracy and reports the results of the proficiency testing; (2) discloses whether he or she documented the features in the latent print in writing before comparing it to the known print; (3) provides a written analysis explaining the selection and comparison of the features; (4) discloses whether, when performing the examination, he or she was aware of any other facts of the case that might influence the conclusion; and (5) verifies that the latent print in the case at hand is similar in quality to the range of latent prints considered in the foundational studies.
And this is where it gets interesting and nuanced, because, to Michele's point, 'opinion' and 'training and experience' seem to run counter to the values of transparency and complexity that the discipline has currently outlined. Since the past is littered with 'trust me, I'm an expert', 'My conclusion is 100% certain', and 'If you can't see this is an ID, you're not as gifted as me', there's a point to be made regarding people's justification by 'training and experience', which puts the weight of the conclusion on the person, not the conditions present in the comparison. A 'Mayfield' print, a 'Zero Point ID' and a 10Print Comparison all present the Examiner with very different conditions. This isn't a new critique. It's echoed by contributors to The Human Factors report (pg 3) and outlined in This Paper from 2013, which says
“We contend that the use of verbal scales is inappropriate because it relies on the examiner’s impression and privileges untested ‘experience’. It cannot be readily assessed and it is not easy to explain the methodological frailties at trial—especially when it is the accused challenging the opinion of an experienced analyst.”

It's also been articulated in 2009 by John Collins in his article on Stochasticity which recognizes a gap between the inference of the data in two prints and its significance.
“In the meantime, a more responsible and perhaps a more compelling approach is for forensic scientists to simply state the truth about identifications. I have never seen, nor would I expect to see, this amount of similarity in friction ridge patterns that came from different sources. This is a statement of fact and is supported by the practitioner’s education, training, and experience. Arguing, on the other hand, that an identification excludes every possible source that ever did, does, or could exist is probably correct, but it cannot be entirely defended under the scientific expectations that our courts now have. And when identifications fall increasingly close to those marginal gray areas, the risks naturally increase.”
And it only took about 10 years for the DOJ's Uniform Language to say basically the same thing.
'Source identification' is an examiner's conclusion that two friction ridge skin impressions originated from the same source. This conclusion is an examiner's decision that the observed friction ridge skin features are in sufficient correspondence such that the examiner would not expect to see the same arrangement of features repeated in an impression that came from a different source and insufficient friction ridge skin features in disagreement to conclude that the impressions came from different sources.
Michele's critique piggybacks on all of this, especially in light of her complexity/strength-of-conclusion work. She's attempting to tie the strength of the conclusion to the properties of the print, which is exactly what the paper Error Rates for Latent Fingerprinting as a Function of Visual Complexity and Cognitive Difficulty is trying to do. The nuanced difference is that the paper makes error rates dependent on the examiner, not the print, which seems like more of the same. 'Difficulty' is not as standard between examiners as 'Distortion', 'Orientation Clues' or 'Ambiguity of features'. One is a function of the person, the other a function of the object. This at least gets us in the neighborhood of doing what we say are our values. I would argue directly against the notion that this needs to be a top-down implementation like the original paper proposes; it is better done at the local lab level where varying ideas can compete.
josher89
Posts: 509
Joined: Mon Aug 21, 2006 10:32 pm
Location: NE USA

Re: "In my opinion"

Post by josher89 »

Here I go (some of the following statements may not necessarily reflect my own beliefs but are put out there for more discussion)...
As science—and forensic science more specifically—continues to advance, it becomes increasingly absurd to ask or expect lawyers, judges, and juries to take sole responsibility for critically evaluating the quality and validity of scientific evidence and testimony.
Where does that leave Daubert? Should Daubert be re-evaluated and no longer hold the judges as gatekeepers?
The evolution of other forensic disciplines, particularly those related to pattern evidence, followed a different course, having been developed primarily within law enforcement environments or at the behest of law enforcement. Disciplines, such as fingerprints...
Try telling that to Herschel when he just wanted to pay people who built the infrastructure he was tasked with constructing in the 1850s.

Not that DNA has it made in the shade, but with DNA, the allele is either there or it isn't (simplistically it's binary; a 1 or a 0). Fingerprints, on the other hand, are harder to "quantify". A ridge ending may or may not be there, but if it is there, is it pointing north, south, east, or west (or northeast, south-southwest, etc.)?
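A minimal sketch of that contrast, using illustrative data structures of my own (not any standard DNA or fingerprint data format): an allele call reduces more or less to a present/absent flag, while a friction ridge feature carries continuous attributes like position and orientation that resist simple binary coding.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified representations to illustrate the contrast;
# real DNA profiles and fingerprint feature records are far richer than this.

@dataclass
class AlleleCall:
    locus: str      # e.g., a CODIS STR locus
    allele: str
    present: bool   # simplistically binary: the allele is there or it isn't

@dataclass
class Minutia:
    x_mm: float           # position within the impression
    y_mm: float
    kind: str             # "ridge ending", "bifurcation", ...
    direction_deg: float  # orientation is continuous, not a yes/no value

# An allele call is effectively a 1 or a 0 ...
d13 = AlleleCall(locus="D13S317", allele="11", present=True)

# ... whereas a ridge ending may point north, northeast, south-southwest, etc.,
# and distortion can make even its presence a judgment call.
ending = Minutia(x_mm=6.2, y_mm=4.1, kind="ridge ending", direction_deg=203.0)
```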
As examples, the NCFS recommended the creation of postdoctoral training programs in forensic science to encourage the emergence of an inquisitive and investigative scientific culture, which the National Institute of Justice (part of the DOJ) quickly embraced.
In a personal communication I had with a very prominent PhD (in forensics), he said this regarding the pursuit of a post-doc in forensics:
I don't think it is a realistic degree path to pursue in an intended way.
This was, in large part, due to the expansiveness of what forensic science could actually mean. PhD's are specific to a particular area. Chemistry, for example. There are not a lot of PhD's simply in Chemistry; they are more likely PhD's in Organic Chemistry or Molecular Chemistry or Martian Subterranean Soil Sample Chemistry (I jest). I do believe that PhD programs in forensics would not be specific enough to really matter. One would need specialization in techniques or methods and not the larger, generic body of forensics (jack-of-all-trades need not apply).

Back to the article...call me hyper-sensitive, but the second paragraph states:
The issue is of particular importance in light of the decision by the Department of Justice (DOJ) in April 2017 to terminate the National Commission on Forensic Science (NCFS), a group (on which we served) that was charged with advising the federal government on improving the parlous state of the forensic science.
While I admit that the cessation of the NCFS was a silly move, nowhere in this article do they mention OSAC, which is composed of stakeholders in the legal community from within and outside of forensics. Was this written as a way to whine that they are no longer involved in the pursuit of forensic science improvement?

To briefly piggyback off of what ER said, science is based on human observation. Knowledge, gained through observation, is the oldest form of science I know. Imagine early man seeing this big bright thing in the sky and wondering (maybe, anyway) what it was. I have to believe that when it went away, and got colder, there was some sort of mental click that said, when that fire isn’t in the sky, it’s colder. (visual)

When fire was captured and controlled, I have to believe that through observation, it was quickly realized that fire is hot and that you shouldn’t touch it because it hurt if you did. Our bodies are wired for this (reflexes). (touch)

When we are driving and we hear a siren, we begin looking for its source so we can get out of the way if needed. (auditory)

If we are working at a job site and are digging a hole and begin to smell rotten eggs, we get the heck outta there ‘cuz you know, natural gas. (olfactory)

You get my point. At its basal level, this is science--learning through observation. To me, I can't prove that the sun will rise in the east tomorrow morning, but I would be willing to bet a lot of money that it will happen. How can you disprove this? Wait until tomorrow and see (observe) if the sun rises in the east. I think it's okay to say that, based on your experience and what you know about how the Earth rotates, there's a strong chance that the aforementioned will happen. The only way to disprove it is to observe the contrary. Is there some sort of magic number or phrase that I need to know to say this? I think I would be okay saying that I would not expect to see the contrary happening because I haven't seen it before.

Is that really scientific? Depends on what your definition is. To me, knowing that fire is hot (because I've been burned before) and knowing not to touch it is science. Sure, this is vastly different than Stephen Hawking's research into the cosmos but it's still science to me.
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
Alan C
Posts: 77
Joined: Mon Aug 08, 2005 10:50 pm
Location: King County SO, Seattle

Re: "In my opinion"

Post by Alan C »

NRivera wrote: Mon Apr 16, 2018 10:18 am Am I the only one who thought he was going to claim Coldplay was the voice of science? :lol:
Of course it isn't. Thomas Dolby is the voice of science. https://www.youtube.com/watch?v=-FIMvSp01C8
Post Reply