Poll: Does your agency have an Exclusion standard/guideline?
-
Boyd Baumgartner
- Posts: 567
- Joined: Sat Aug 06, 2005 11:03 am
Poll: Does your agency have an Exclusion standard/guideline?
This poll comes from a tangent in the proficiency test thread...
(from here on out, guests will always be able to vote in polls)
If your agency has a standard/guideline for Exclusions and you're able to post it, let's hear it.
We have one and it involves knowing and having:
Anatomical Source (Finger, Palm (specific area: Thenar/Hypothenar), Foot)
Orientation
Clear Target Areas (can include focal points: core, delta)
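A standard like the one above can be read as a simple checklist: an exclusion is only on the table once all three elements are in hand. The sketch below is purely illustrative — the class, field names, and the set of allowed sources are my own invention, not KCSO's or any agency's actual standard:

```python
# Purely illustrative checklist for the exclusion criteria above.
# Class and field names are hypothetical, not any agency's actual standard.
from dataclasses import dataclass

# Assumed set of anatomical sources, per the list above.
ANATOMICAL_SOURCES = {"finger", "palm_thenar", "palm_hypothenar", "foot"}

@dataclass
class ExclusionBasis:
    anatomical_source: str    # e.g. "finger" or "palm_thenar"
    orientation_known: bool   # orientation of the latent is known
    clear_target_areas: int   # count of clear target areas (may include core/delta focal points)

    def supports_exclusion(self) -> bool:
        # All three elements must be present before an exclusion is reported.
        return (self.anatomical_source in ANATOMICAL_SOURCES
                and self.orientation_known
                and self.clear_target_areas > 0)

print(ExclusionBasis("finger", True, 2).supports_exclusion())   # True
print(ExclusionBasis("finger", False, 2).supports_exclusion())  # False: orientation unknown
```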
-
Bill Schade
- Posts: 243
- Joined: Mon Jul 11, 2005 1:46 pm
- Location: Clearwater, Florida
Re: Poll: Does your agency have an Exclusion standard/guideline?
We just recently approached this issue and incorporated standards for when an exclusion is the appropriate conclusion. The intent is to "protect" examiners from making erroneous exclusions.
We copied KCSO ideas so there is no need to post them here. However, I think you left out an important part of the standard in the description of your guidelines.
The anatomical source and orientation should be "self evident". That means it is known with no doubt to someone with rudimentary training.
AFIS is turning up hits on cases that were thought to be finished and it is apparent to me that we are not as good at exclusions as we think we are. I'm sure that in most of these past misses, the examiner was "confident", "looked twice" and was certain they were correct. However, just because you think you "know" area and orientation based on training and experience, we see now that good examiners are making mistakes.
The previous conclusion was "Not Identified" so although it might be embarrassing to have missed one, it was not fatal to either your career or IAI certification. Since that is no longer the case, examiners must be guided to be more cautious. I think that agencies owe it to their staff to provide guidance on this subject and not wait for standards or guidance to come from the discipline itself.
-
anwilson
- Posts: 28
- Joined: Tue Mar 24, 2015 1:25 pm
Re: Poll: Does your agency have an Exclusion standard/guideline?
I would agree that agencies owe it to their staff to provide guidelines. KC also gives examiners the ability to reach an Inconclusive decision due to ambiguity. The specific wording is "Inconclusive - due to ambiguity which results in an inability to locate corresponding detail." I think it may be similar to how "not identified" has been/is used; in simple terms it just means "I can't find it because the print is crappy, but it could be there." While I've only been in the discipline since 2011, I was trained that you will find it if you "look harder," which isn't helpful to an examiner (especially one going through a training program).
-
Boyd Baumgartner
- Posts: 567
- Joined: Sat Aug 06, 2005 11:03 am
Re: Poll: Does your agency have an Exclusion standard/guideline?
There's a deep point in your statement, and I think the section in the Black Box study on examiner skill touches on it.

Bill Schade wrote: ↑Tue May 29, 2018 11:23 am
"AFIS is turning up hits on cases that were thought to be finished and it is apparent to me that we are not as good at exclusions as we think we are." (emphasis added)
I think one of the most substantive points here is the distinction between inappropriate decisions and errors. The RSA proficiency test fails to distinguish between those two types of non-correct conclusions by labelling them both as 'inconsistent' conclusions. I don't know if I agree that we think we're good at exclusions, though, between the Black Box study's finding of 7.5% erroneous exclusions and the discussion in the Case Specific AFIS article where previous inconclusives were subsequently identified:

"Although most discussions of examiner skill focus on error rates (e.g., ref. 13), the other aspects of examiner skill are important not just to the examiner’s organization, but to the criminal justice system as well; e.g., an examiner who is frequently inconclusive is ineffective and thereby fails to serve justice. Both individual examiners and organizations must strike a proper balance between the societal costs of errors and inappropriate decisions, and the operational costs of detection. Contradictory verification decisions, whether involving erroneous conclusions or inappropriate inconclusive decisions, should be internally documented and addressed through an organization’s continual improvement processes."
I think we know the prevalence of such types of conclusions:

"No erroneous exclusions were detected during the review of the cases; however, seven additional identifications were found using CAFIS. The original examiner reported the results as inconclusive but identifications were made by the examiner reviewing the CAFIS search results. Four of these identifications were individual names not previously reported to investigators."
Bill Schade wrote: ↑Fri May 25, 2018 6:18 am
"The previous conclusion was 'Not Identified' so although it might be embarrassing to have missed one, it was not fatal to either your career or IAI certification."

Another deep point, and combined with the recent proficiency test discussion it boils down to something like 'what is the utility of external certification or external proficiency tests?' If you look at the discussion of errors in the Black Box study, we see that:

"The six errors were committed by five examiners, three of whom were certified (including one examiner who made two errors); one was not certified; one did not respond to our background survey."

Add to that that there were 'inconsistent' answers associated with certified examiners in the RSA Proficiency Test, and what guarantee do you have that being certified has any benefit to a case or a conclusion? Should the IAI be required to carry a surety bond in the same way a contractor is bonded to ensure the surety of a property owner?
-
Bill Schade
- Posts: 243
- Joined: Mon Jul 11, 2005 1:46 pm
- Location: Clearwater, Florida
ineffective??
"Although most discussions of examiner skill focus on error rates (e.g., ref. 13), the other aspects of examiner skill are important not just to the examiner’s organization, but to the criminal justice system as well; e.g., an examiner who is frequently inconclusive is ineffective and thereby fails to serve justice. Both individual examiners and organizations must strike a proper balance between the societal costs of errors and inappropriate decisions, and the operational costs of detection."
Instead of measuring the frequency of the inconclusive decision, we should be measuring and judging the "appropriateness" of the decision. If an agency establishes criteria, then examiners (and customers) would have a way to judge what is the appropriate answer.
-
Boyd Baumgartner
- Posts: 567
- Joined: Sat Aug 06, 2005 11:03 am
Re: Poll: Does your agency have an Exclusion standard/guideline?
Bill Schade wrote: ↑Wed May 30, 2018 9:00 am
"we should be measuring and judging the 'appropriateness' of the decision"

That's the deeper point, though, is it not? This all boils down to the aim of the policy in the office. Protecting examiners and protecting victims is something that needs to be balanced. That was the point of some of Tom Busey's research.
Appropriate conclusions aren't necessarily accurate conclusions. In the Case AFIS article, Inconclusive was the appropriate decision according to their SOP, and it prevented the examiner from being forced into an erroneous exclusion (had Inconclusive not been an option); it just wasn't accurate, because identifications ultimately proved to be the correct conclusion once the prints were run through AFIS.
-
NRivera
- Posts: 138
- Joined: Fri Oct 16, 2009 8:04 am
- Location: Atlanta, GA
Re: Poll: Does your agency have an Exclusion standard/guideline?
"Although most discussions of examiner skill focus on error rates (e.g., ref. 13), the other aspects of examiner skill are important not just to the examiner’s organization, but to the criminal justice system as well; e.g., an examiner who is frequently inconclusive is ineffective and thereby fails to serve justice. Both individual examiners and organizations must strike a proper balance between the societal costs of errors and inappropriate decisions, and the operational costs of detection. Contradictory verification decisions, whether involving erroneous conclusions or inappropriate inconclusive decisions, should be internally documented and addressed through an organization’s continual improvement processes."
I think the issue is getting a little confused here. The implication is that the examiner is using the inconclusive decision (as much as I hate this phrase, I'll go ahead and use it) as a crutch. In my mind, that's not a discussion of skill or lack thereof; it's a discussion of laziness (more often than not, but not always) if we're talking about individuals. When you start seeing it in a group of individuals, it should be a sign that there is a more systemic problem. Until someone smarter than me can answer the question "How much is enough?", we won't be able to pin down exactly what is "effective" either.

I agree with Boyd that right now it depends on the purpose of the SOP for the agency. I could work a case with 30 prints, ID one, and call the other 29 inconclusive (hypothetically speaking, of course). How is that different than all the agencies out there doing one-and-done or deferring comparisons to any degree? I'm not saying that approach is inappropriate at all. Quite the contrary: when you consider the volume of work labs get these days and the fact that 98% of cases end in a plea agreement, deferring comparisons to a certain extent makes complete sense.

There should be a distinction between a knowledge, skills, and abilities issue and a management issue, and that line is not easy to draw either. Having standards for exclusion also makes sense from the error-mitigation standpoint, but one should understand that having them will invariably increase the number of inconclusives reported to some extent.
"If at first you don't succeed, skydiving was not for you."
-
Stephany Louk-Denney
- Posts: 17
- Joined: Mon Aug 08, 2005 6:31 am
Re: Poll: Does your agency have an Exclusion standard/guideline?
My agency does not currently have an exclusion policy, but would like to implement one. If you are willing and able to share your policy, I would greatly appreciate it. My email is stephany.louk-denney@mshp.dps.mo.gov if you wish to provide the policy via email. Thank you in advance for your time.
-
josher89
- Posts: 509
- Joined: Mon Aug 21, 2006 10:32 pm
- Location: NE USA
Re: Poll: Does your agency have an Exclusion standard/guideline?
NRivera wrote:
"In my mind, that's not a discussion of skill or lack thereof, it's a discussion of laziness..."

If using an inconclusive decision is lazy, then how do you feel about the expanded conclusion set OSAC FRS is looking at implementing? Would that be giving examiners more opportunities to be 'lazy'?
"...he wrapped himself in quotations—as a beggar would enfold himself in the purple of emperors." - R. Kipling, 1893
-
ER
- Posts: 351
- Joined: Tue Dec 18, 2007 3:23 pm
- Location: USA
Re: Poll: Does your agency have an Exclusion standard/guideline?
Now that there is research available on the accuracy of exclusion decisions, I've been pushing agencies to consider that research when establishing an exclusion policy. In my mind, exclusion is all about establishing sufficient differences to reach an accurate conclusion. So what's out there?

The FBI White Box paper on exclusions found the presence of a delta or core to be the most significant factor in lowering the false negative rate (FNR). The Kellman paper had a similar result when evaluating how various aspects of the images affect error rates. Finally, the White Box paper on changing your markup showed a drastic reduction in FNR when minutiae were moved, added, and deleted.

So: start with a core or delta, choose a clear target group near it, and then search the knowns in the same area near the core or delta. If you don't find it, choose a different target group in another area of the print, and also try replotting some minutiae with connective ambiguity. Consider alternate orientations, locations, and patterns, and how distortion might affect them. If there are clear differences anchored to a core or delta, exclude; if not, inconclusive. (And maybe the "Support for Different Source" conclusion, but we've got to figure out what Exclusion is before we figure out what "Almost Exclusion" is.)
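The decision logic described above can be sketched as a tiny Python function. This is purely illustrative — the Comparison structure, field names, and decision labels are my own invention, not any agency's SOP or a real AFIS/workflow API:

```python
# Illustrative sketch only: the Comparison structure and decision labels
# are hypothetical, not any agency's SOP or a real AFIS/workflow API.
from dataclasses import dataclass

@dataclass
class Comparison:
    """Result of checking one target group against the knowns."""
    anchored_to_core_or_delta: bool  # target group tied to a core or delta
    clear_differences_found: bool    # unambiguous disagreement in that area

def exclusion_decision(comparisons: list[Comparison]) -> str:
    """Exclude only when clear differences are anchored to a core or delta;
    otherwise fall back to Inconclusive."""
    for c in comparisons:
        if c.anchored_to_core_or_delta and c.clear_differences_found:
            return "Exclusion"
    return "Inconclusive"

# A well-anchored disagreement supports exclusion...
print(exclusion_decision([Comparison(True, True)]))   # Exclusion
# ...but differences without a core/delta anchor do not.
print(exclusion_decision([Comparison(False, True)]))  # Inconclusive
```

The point of anchoring the check to a core or delta mirrors the White Box finding quoted above: differences found without such an anchor are exactly the ones most likely to produce an erroneous exclusion.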
Verification is the check against inappropriate conclusions and errors. If everything is being verified, then there's no need to bring up the bogeymen of laziness and crutches.