Posted: Mon Aug 14, 2006 6:22 am
I am the person who wrote last week's 'Detail' about statistics and the Shirley McKie case. I hope nobody minds me jumping into this discussion with some observations.
I think that Pat Wertheim made an interesting point in his Aug 9 posting regarding AFIS. Statistical theory says that if you are working to identify or exclude a person who has come to you from an AFIS search of 1 million records, you have to be a million times more careful than in a case where a detective hands you a single suspect. This is simply because the AFIS is a million times more likely to present you with an innocent person whose print is extraordinarily like the latent. So a practice that might be safe for a one-to-one comparison may not be safe with AFIS.
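The scaling argument above can be sketched numerically. This is only an illustration: the match probability p is a made-up figure, standing in for whatever the true chance is that a random innocent person's print looks extraordinarily like the latent.

```python
p = 1e-6        # assumed chance a random innocent print resembles the latent (illustrative)
n = 1_000_000   # number of records searched by the AFIS

# One-to-one comparison: the chance the single suspect is a coincidental look-alike.
one_to_one = p

# AFIS search: the chance that at least one of n innocent records is a look-alike.
afis_search = 1 - (1 - p) ** n

print(f"One-to-one comparison: {one_to_one:.7f}")
print(f"AFIS search of {n:,} records: {afis_search:.3f}")
```

With these (purely illustrative) numbers, a single comparison carries a one-in-a-million risk, while the database search makes a coincidental look-alike more likely than not, which is why a standard safe for the first case may fail in the second.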
You know, I cannot see any way to come to conclusions about any of the points in this thread by discussion alone. In all walks of life, just discussing things NEVER gets at the truth. Experience can give authority, but unless there is a feedback mechanism for practitioners to learn from their mistakes, experience counts for nothing, or may even be dangerous. Theory must always be tested by experiment.
Pat's 'playing' with the AFIS system to produce similar-looking prints is the kind of thing that can get at the truth. I don't know if you already do 'whole system' or 'closed loop' quality assurance tests; if not, Pat's idea could form the basis of them. Set up a permanent department to find similar-looking latents and inked prints, and send them out to every fingerprint organisation in the world. Don't say whether they are from the same person or not. Judgements are returned to the research department for analysis. It would be best if these tests could be 'slipped in' with the normal casework, so that the whole system is tested: individual skill, training, management practices, everything. This would all have to be done in a spirit of striving for 'continuous improvement' (to quote the great man W. Edwards Deming).
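The analysis step of such a scheme could be as simple as the sketch below. The trial records are hypothetical, just to show the shape of the book-keeping: for each blind test, the research department knows the ground truth and compares it against the examiner's returned judgement.

```python
# Hypothetical blind-test results: ground truth vs. examiner judgement.
trials = [
    {"same_source": True,  "judged_match": True},   # correct identification
    {"same_source": False, "judged_match": False},  # correct exclusion
    {"same_source": False, "judged_match": True},   # false identification
    {"same_source": True,  "judged_match": False},  # missed identification
]

# Rate of false identifications among truly different-source pairs.
negatives = [t for t in trials if not t["same_source"]]
false_ids = sum(1 for t in negatives if t["judged_match"])
false_id_rate = false_ids / len(negatives)

print(f"False identification rate: {false_id_rate:.0%}")
```

In practice the interesting output is the trend of these rates over time, broken down by organisation and by difficulty of the test pairs, which is exactly the feedback loop the paragraph above calls for.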