
RS&A FALL 2018 Proficiency Results are out

Posted: Wed Nov 21, 2018 12:43 pm
by Boyd Baumgartner
Download the results here

It seems as though some of the trending topics we've discussed this year are making their way into the comments on the tests, and that's good. Specifically, standards around Exclusion and the need for an Inconclusive option. A related topic, also mentioned, is the issue of performing to the test, akin to something NRivera said in the Spring test thread: things that may not meet your casework standards, you'll be lax on just because you know the test's standards. That was echoed in the comments (see emphasis in the quote below).

This comment sums these issues up nicely:
Three of our certified latent print examiners took this test and felt that two of the palm standards were of extremely low quality in the interdigital region, which was needed for comparison with L1. All three examiners indicated on the test that in active casework they may have concluded an inconclusive result for L1, as better standards were needed. The low quality of the standards caused concerns as to the value of the test, as pushing examiners to form a conclusion is not necessarily a test of their proficiency. Inconclusive may be stated as a valid conclusion, however, analysts know that ID or EXC are supported in the testing environment. (emphasis mine)

Since exclusions are where we make the most errors, many agencies are moving in the direction of more detailed procedures for exclusion determinations; including the necessity of comparison of two target groups, sufficient level two discrepancies be present, standards be of sufficient quality, known orientation and anatomical source, etc. Our concern is that L1 did not appear to support those trends in the community.
It would seem that a good first step towards moving proficiency tests to a blind state (incorporated into casework unbeknownst to the examiner) would be to start offering tests for labs with varying conclusions. There should be a version with 'not identified' as an answer, because some labs don't exclude. You could have one with an exclusion standard. You could have one where the prints are less than stellar and include some sort of 'incomplete-need better exemplars' option. You could also have one with consistency, but not enough to ID, and include an 'inconclusive' result. Then agencies could purchase the test that best matches the SOPs and conclusions they follow.

Nothing really salacious in the results of this test, although these two statements on the very last page seemed odd.
1. A total of seventy (70) participants representing seventeen (17) agencies/entities submitted responses to test 18102. Of those 17 agencies, a total of only one (1) agency submitted responses inconsistent with our assigned values.

2. The total number of inconsistent responses was six (6) and these inconsistencies did not show a pattern and were spread across multiple demographics.
I'd say the first observation negates the second observation as all of the inconsistent answers were confined to one agency. That, by all definitions, constitutes a pattern.
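Just to put a rough number on that (purely illustrative, and not something in the report): if the 6 inconsistent responses had been spread at random across the 70 participants, the chance of all of them landing inside one six-person agency (the size that shows up in the appendix breakdown discussed below) would be tiny. A minimal Python sketch, assuming one inconsistent response per person and a random draw:

from math import comb

# Illustrative assumptions only: 70 participants, 6 inconsistent responses,
# each from a different person, and a flagged agency of 6 test-takers.
total_participants = 70
agency_size = 6
inconsistent = 6

# Hypergeometric chance that all 6 fall inside that one agency at random.
p = comb(agency_size, inconsistent) / comb(total_participants, inconsistent)
print(f"P(all {inconsistent} in one 6-person agency by chance) = {p:.1e}")  # about 7.6e-09

The exact number depends entirely on those assumptions; the point is just that concentration in one agency is itself informative, which is what the first observation says.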

Also, Appendix 11 seems odd since all the inconsistent responses came from one agency and that agency apparently has two ways of doing things.

What else did you notice?

Re: RS&A FALL 2018 Proficiency Results are out

Posted: Sat Nov 24, 2018 11:42 am
by Steve Everist
Boyd Baumgartner wrote: Wed Nov 21, 2018 12:43 pm Download the results here

I'd say the first observation negates the second observation as all of the inconsistent answers were confined to one agency. That, by all definitions, constitutes a pattern.

Also, Appendix 11 seems odd since all the inconsistent responses came from one agency and that agency apparently has two ways of doing things.

What else did you notice?
All the appendices end up being that one agency versus everyone else who took the test (and anyone at that one agency who did get consistent answers, I guess).

I'll try to break down what I observe (from the agency with the inconsistent results):

4 civilian/2 sworn
2 accredited/4 non-accredited
1 CLPE/5 non-CLPE
3 CSI/3 LPE
3 reviewed/3 non reviewed
3 went INC on L1
2 went EXC on L6
1 went INC on L10

Nobody was inconsistent on more than one of their answers. The three INC results on L1 may have been justified, but others who either don't use INC or knew it was a test situation may have answered outside of their usual casework SOP. As Boyd noted from NRivera's point in the Spring test thread: "Things that may not meet your casework standards, you'll be lax on just because you know the test's standards." It could also be that INC was used in place of Incomplete, and the test-takers weren't comfortable excluding without additional exemplars. And then there are those who use 'not identified'.

Also, the experience, training, and schooling were scattered for this agency.

It's always nice to be able to have the data to go through, so we can jump to conclusions.

Re: RS&A FALL 2018 Proficiency Results are out

Posted: Mon Nov 26, 2018 9:12 am
by NRivera
Steve Everist wrote: Sat Nov 24, 2018 11:42 am
It's always nice to be able to have the data to go through, so we can jump to conclusions.


:lol: :lol: :lol:

The first question that came to my mind has to do with the appendices. If all 6 inconsistent answers came from the same agency, how is it possible that 2 inconsistent answers came from an accredited agency and 4 came from a non-accredited agency? How can an agency be accredited and non-accredited at the same time? Something doesn't jive there.

I didn't take this particular test, so I can't comment on the appropriateness of the INC decisions. Two of the test-takers with inconsistent INCs commented that there was insufficient data in the knowns, which is hard to argue with without seeing the mated pairs provided. Take the INCs away and you are left with just 2 false negatives out of 700 comparison decisions. That seems reasonable to me.
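For what it's worth, a quick back-of-the-envelope on that rate (assuming, though the report isn't quoted on this, that the 700 figure reflects 70 participants making 10 latent comparisons each):

total_decisions = 70 * 10   # 700 comparison decisions across all participants
false_negatives = 2         # the two erroneous EXC calls on L6
all_inconsistent = 6        # all inconsistent responses, INCs included

print(f"Erroneous exclusions: {false_negatives / total_decisions:.2%}")        # about 0.29%
print(f"All inconsistent responses: {all_inconsistent / total_decisions:.2%}")  # about 0.86%

Under a percent either way, which is why it seems reasonable.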

I don't think you necessarily need different versions of a PT. As a rule, accreditation-compliant PTs should be ground-truth comparisons, so you're always going to have "consistent" and "inconsistent" answers. There is nothing to prevent an agency from setting their own PT pass/fail criteria based on their needs, to address exclusion standards and to mitigate error rates, if they so choose (which they totally should).
Since exclusions are where we make the most errors, many agencies are moving in the direction of more detailed procedures for exclusion determinations; including the necessity of comparison of two target groups, sufficient level two discrepancies be present, standards be of sufficient quality, known orientation and anatomical source, etc. Our concern is that L1 did not appear to support those trends in the community.
The latent prints will never "support those trends". What should support those trends are the agency's conclusion thresholds, and those are totally up for debate based on the agency's needs. All the criteria mentioned in the comment could be acceptable, but agencies need to consider each one and decide if it makes sense for them and their customers. An examiner working with clear decision thresholds spelled out in an agency procedure will have zero problem calling a comparison INC if it doesn't meet the stated ID or EXC criteria. It doesn't matter if it's on a case or a PT. I think the issue lies in the reluctance of many agencies to critically evaluate and take a position on those criteria and implement SOPs accordingly. It takes a lot of work and research to justify and support adopting certain standards, and many agencies just can't or won't do it, or they are waiting on the rest of the community to dictate those thresholds, and that's not necessarily appropriate at this point.

Re: RS&A FALL 2018 Proficiency Results are out

Posted: Mon Nov 26, 2018 1:05 pm
by Steve Everist
NRivera wrote: Mon Nov 26, 2018 9:12 am
The first question that came to my mind has to do with the appendices. If all 6 inconsistent answers came from the same agency, how is it possible that 2 inconsistent answers came from an accredited agency and 4 came from a non-accredited agency? How can an agency be accredited and non-accredited at the same time? Something doesn't jive there.
Since the agency appears to be three CSI and three LPE, it could be that the LP unit is accredited while the crime scene unit is not (or vice versa), and one person didn't answer that question correctly (or assumed it was relative to the other unit).

It's also interesting that one agency submits an original test for six people, instead of sending one for the agency and having the rest take the test internally.