That being said, the summary report then breaks the 'inconsistencies' out across various dimensions of the data in a series of charts and graphs. Looking at the beginning of the report, we can see right away that the inconsistencies are lopsided towards a very specific demographic of people/agencies in the discipline: non-certified, sworn personnel from non-accredited labs (at least based on this test, anyway).
The report actually lists all the responses from everyone who submitted the test, anonymized by TEST ID, with the inconsistencies highlighted in yellow. As we saw from the previous thread regarding proficiency tests, not all agencies submit them in the same fashion: some choose only one person from the agency, while others submit more than one (or all) of their examiners. The thing that actually got me interested in this data was the 'Participants Additional Comments' section, where each comment is also associated with a TEST ID.
TEST ID 3199Z18101 wrote: Brilliant test of latent Print Comparison skills, you have to look beyond what you see, take into consideration pressure, slippage and thickness of ridges. Your ridge counting and ridge tracing plays a vital role. As a manager it was great to lead by example. Thanking you.
I found the statement 'you have to look beyond what you see' a rather interesting turn of phrase, so I scrolled up to the data for that same TEST ID, and what did I find but 5 bum IDs. But that wasn't all: there were actually 3 other people who made the exact same bum IDs, including the finger to which each latent was falsely attributed.
That wasn't the only cluster of matching answers, either; another group of 3 had the same thing happen: errors on the same latents with the same incorrect attributions.
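For what it's worth, spotting these clusters doesn't take anything sophisticated. Here's a minimal sketch of the idea, assuming the per-participant responses were tabulated as (TEST ID, latent, attributed finger, correct or not). Everything here is hypothetical except the one real TEST ID from the comment: the second ID, the latent labels, and the finger designations are all placeholders, since the actual report is just a table I read by eye.

```python
from collections import defaultdict

# Hypothetical rows transcribed from the report's response table:
# (test_id, latent, attributed_finger, correct)
responses = [
    ("3199Z18101", "L3", "R-index", False),
    ("3199Z18101", "L7", "L-thumb", False),
    ("XXXXX00000", "L3", "R-index", False),
    ("XXXXX00000", "L7", "L-thumb", False),
    # ... the rest of the report's rows would go here
]

# Build each participant's "error signature": the set of
# (latent, attributed finger) pairs they got wrong.
signatures = defaultdict(set)
for test_id, latent, finger, correct in responses:
    if not correct:
        signatures[test_id].add((latent, finger))

# Group participants who share an identical, non-empty error signature.
clusters = defaultdict(list)
for test_id, sig in signatures.items():
    if sig:
        clusters[frozenset(sig)].append(test_id)

# Any cluster with more than one TEST ID is a group that made the
# exact same erroneous attributions on the same latents.
for sig, ids in clusters.items():
    if len(ids) > 1:
        print(sorted(ids), "->", sorted(sig))
```

The point is just that identical wrong attributions on the same latents stand out immediately once the responses are in a table, which is exactly the pattern that jumped out at me here.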
The common sense and most charitable reading of the data is that these two agencies verified their proficiency tests, but rather than catching the errors, verification propagated them. This becomes concerning in light of the pie charts and graphs plotting results against hours of training and years of experience, especially since 'training and experience' is already becoming a problematic concept in how we articulate our process and conclusions: it doesn't ensure, or even speak to, accuracy.
Obviously, I don't know exactly what happened, such as whether the manager associated with that comment influenced their staff. Did people compare notes prior to submission, or was this just extremely ineffective verification? What do you think accounts for this data? Also, considering there is a huge difference between a best effort that produces bad results and sloppiness or lack of effort, is there an ethical obligation for proficiency test providers to report this level of error?