Quality Assurance Measures
Posted: Tue Nov 28, 2006 1:01 pm
In Charles Parker's post on "Who or What is the Latent Print Community?" he raises a great topic in the following statement, referring to SWGFAST:
"In looking at 'Trained to Competency' (ver. 2.1), I see several key words such as 'Understanding', 'Knowledge', 'Ability', and 'Application'. These are all fine words, but there is nothing in the document to measure when a person reaches understanding, or knowledge. There is nothing there to test ability or to check on application."
In this week's Detail, Dave Charlton states:
“Findings in earlier studies have highlighted that fingerprint experts are vulnerable to contextual influences. It will be important to try to ascertain how these influences manifest themselves, how such influences might be minimized or eradicated, and what steps could be taken to enhance training, accreditation and methodology to improve the resistance of experts to such influences.”
The need for better quality assurance measures is noted continuously, yet little seems to be done to implement such measures.
First, as Charles points out, there is nothing to measure an examiner's understanding. Many industries have some sort of testing process to determine competency. I've looked into a few of these tests (Professional Engineers, CPAs, American Medical Association exams, and the Bar exam), and it appears that the most credible professions use tests that are justification based, not just conclusion based. In the US we've had the IAI Certification Test and the CTS proficiency test available to us; both require conclusions only, with no explanation of how the examiner arrived at the conclusion. Recently other companies have begun offering proficiency tests (two that I'm aware of), and these also require only a conclusion. I asked one company: if their goal is a better test, why not make it justification based? Their reply was that the industry, neither ASCLD nor SWGFAST, promotes that at this time. I've also heard that the IAI is looking into an additional certification test (for those who do this job but don't yet qualify for the present test).

It seems to me that if we want to be considered competent, we need to show that we understand what we are doing. External testing seems more objective than internal testing, but if the available external tests aren't sufficient, then I think agencies should boycott them and use internal testing until adequate external tests are available. Or perhaps continue the external tests and add quality internal tests that assure examiners reached their conclusions reasonably.

I also think justification could be implemented in actual casework. If the examiners in the Mayfield case (and in Dror's study) had been required to document their reasoning, it would have been easier to determine why the error happened. Without this information, people are left to speculate about the cause. A hypothetical sketch of what such a record might look like follows below.
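To make the distinction concrete, here is a minimal sketch in Python of what a justification-based conclusion record might capture beyond a conclusion-only one. This is purely illustrative: the `Feature` and `ConclusionRecord` classes and every field name are my own invention, not a SWGFAST, IAI, or vendor format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    """One corresponding detail the examiner relied on."""
    location: str        # e.g. "three ridges left of the core" (hypothetical)
    observation: str     # what was seen in the latent print
    correspondence: str  # how it maps to the known exemplar

@dataclass
class ConclusionRecord:
    """A conclusion plus the reasoning behind it.

    A conclusion-only test collects just `conclusion`; a
    justification-based test would also require `features` and
    `explained_dissimilarities`, so a reviewer can trace WHY the
    examiner concluded what they did, not merely WHAT they concluded.
    """
    case_id: str
    conclusion: str  # "identification", "exclusion", or "inconclusive"
    features: List[Feature] = field(default_factory=list)
    explained_dissimilarities: List[str] = field(default_factory=list)

    def is_justified(self) -> bool:
        # An identification with no documented features is exactly the
        # situation that makes an error like Mayfield hard to reconstruct.
        return self.conclusion != "identification" or len(self.features) > 0
```

The point is simply that the reasoning travels with the conclusion, so a grader or reviewer can check the "why" as well as the "what".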
In addition, Dave mentioned enhancing training, accreditation, and methodology. When Roy Huber first articulated ACE + Verification, he said the verification process was "the most reliable form of proof". The entire method (referring to scientific methodology) is "constantly critical of itself, entertains no dogma, maintains no absolutes or infallibilities. It is both cautious and skeptical" … "In childlike fashion it constantly probes by asking why? And constantly provokes by challenging 'so what?'" (1962)
I know I'm reading a lot into this, but it seems like Huber advocated verification as complete peer review, and over the years our industry has accepted a limited view of verification as merely reproducing the conclusion. I think reproducibility is fine for 99% of conclusions, but to ignore that complete peer review may be needed in some instances is dangerous to our conclusions, our industry, and our credibility as experts. In some cases the verifier should be saying to themselves, "Yes, I can reproduce this conclusion, but (as Huber states) 'so what?' Does that mean the original examiner saw this or that? As the verifier (or perhaps the peer reviewer), I think additional documentation is needed, or I think the examiner needs to explain a certain dissimilarity."
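As a rough illustration of what Huber's "so what?" might look like in practice (again hypothetical; the `peer_review` function and the record fields are invented for this example, not an established procedure), a complete peer review would audit the documented reasoning instead of stopping once the conclusion is reproduced:

```python
from typing import Dict, List

def peer_review(verifier_conclusion: str, record: Dict) -> List[str]:
    """Return the questions a verifier would send back to the original
    examiner; an empty list means the conclusion is both reproducible
    and justified. `record` holds the examiner's documented work, e.g.
    {"conclusion": "identification", "features": [...],
     "unexplained_dissimilarities": [...]}."""
    questions: List[str] = []
    if verifier_conclusion != record.get("conclusion"):
        questions.append("Our conclusions differ; walk me through your comparison.")
    if not record.get("features"):
        # Reproducing the conclusion alone answers "what", not "so what":
        # without documented features we cannot tell what the examiner saw.
        questions.append("What were you seeing and basing your conclusion on?")
    for d in record.get("unexplained_dissimilarities", []):
        questions.append(f"Please explain the dissimilarity at {d}.")
    return questions
```

A conclusion-only verification would stop at the first check; everything after it is the "complete peer review" part.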
In summary, I think expanding our methodology to include complete peer review (the verifier asking, "What were you seeing and basing your conclusion on?", even if used in a limited capacity) and acknowledging the benefits of justification-based conclusions, not conclusions alone, would help examiners. I believe solutions to some problems are right in front of us, but we may be ignoring their value. These might not be perfect solutions, but could they be solutions headed in the right direction?