Conclusion scales
-
- Posts: 46
- Joined: Fri Apr 18, 2008 5:39 am
- Location: London
Conclusion scales
Hi,
How do those of you who use a scale of conclusions, or who have views on how one should be used, define what is of no value? For example a latent with three characteristics could reveal strong support for same source or a latent with one characteristic could reveal extremely strong support for different sources. So in practical terms how do you decide what not to compare?
Thanks
Dan
-
- Posts: 382
- Joined: Tue Dec 06, 2005 10:40 am
Re: Conclusion scales
Dan,
My opinion is that in order to use any conclusion scale, criteria need to be set for each conclusion. If this isn't done, the information will not be interpreted consistently among practitioners (it can and will be interpreted differently by different practitioners).
What criteria are you using when you say that 3 features can result in 'strong support for same source' and 1 feature can result in 'extremely strong support for different sources'? Can you share examples of each with us? With those criteria it seems as though every latent with 1 feature is considered to have value, and only ridges with no features would be considered 'no value'.
Michele
The best way to escape from a problem is to solve it. Alan Saporta
There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
(Applies to a full A prior to C and blind verification)
-
- Posts: 46
- Joined: Fri Apr 18, 2008 5:39 am
- Location: London
Re: Conclusion scales
Hi Michele,
Thanks for your response.
Those were just examples - I suppose I was thinking of studies of marks deemed of no value by examiners that, when put through a statistical model, indicated that the LR would be high if a corresponding print were found revealing those details. With the one characteristic for different source, I guess if you were confident you were comparing the same area, and there was one clear characteristic present in one impression and absent in the other, then that could be extremely strong support for different sources?
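For a sense of how a model's LR output is sometimes translated into words, here is a rough sketch of a mapping from an LR to a verbal conclusion. The band boundaries below are illustrative assumptions loosely following published verbal scales, not any agency's actual criteria:

```python
# Illustrative mapping from a likelihood ratio (LR) to a verbal
# conclusion.  The band boundaries are assumptions for illustration
# only, loosely modelled on published verbal equivalence scales.
def verbal_conclusion(lr: float) -> str:
    if lr <= 0:
        raise ValueError("LR must be positive")
    if lr < 1:  # evidence favours different sources: use the reciprocal
        return verbal_conclusion(1 / lr).replace("same", "different")
    bands = [
        (1, "no support either way"),
        (10, "weak support for same source"),
        (100, "moderate support for same source"),
        (1000, "moderately strong support for same source"),
        (10000, "strong support for same source"),
    ]
    for upper, label in bands:
        if lr <= upper:
            return label
    return "very strong support for same source"
```

For example, `verbal_conclusion(3)` gives weak support for same source, while `verbal_conclusion(0.005)` (an LR of 200 in favour of different sources) gives moderately strong support for different sources.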
As you say, this approach seems impractical - but what I was asking was how do people define the criteria for what is considered of value if a scale is being used?
Thanks
Dan
-
- Posts: 382
- Joined: Tue Dec 06, 2005 10:40 am
Re: Conclusion scales
Dan,
As I understand it (but I could be wrong), most LRs are not 'the LR'; rather they are 'the specific LPE's LR'. For example, I've heard that if the features plotted by the FBI examiners in the Mayfield case were run through some statistical models, it would look like there was 'high correspondence'. If another person ran their charted features through the same statistical model, it might show very low correspondence.
It sounds like you follow the one-dissimilarity rule. I have tons of examples of this rule not being valid (but can't figure out how to attach a picture). So I would never say that a clear discrepancy/dissimilarity was strong support for a different source. Maybe this is a limitation of statistical models (I'm not sure).
At our agency, our initial assessment of a print is a quick triage, not an in-depth analysis. Therefore, we say an impression has 'potential value'. During the comparison process, while performing an in-depth examination, we may determine that the features aren't as reliable as we initially thought, and may then label the print as having 'no value'. This is normally due to distortion. We do use a scaled approach but not a scale of LRs. Our scale is more like hospital condition levels (good, fair, serious, and critical), where the levels are defined with criteria instead of determined statistically. We have:
basic ID
advanced ID
complex ID
Inconclusive with considerable similarities
Inconclusive with marginal similarities (we'd expect to see this in others)
Inconclusive (I have no idea)
AFIS inconclusive (an investigative lead when we are reporting out a name not already associated with a case)
Exclusion due to another person being ID'd and
Exclusion due to a direct comparison
So, we do use a scaled approach but not a statistical scale. Sorry, I didn't realize you were asking about LR's in your original question.
Michele
The best way to escape from a problem is to solve it. Alan Saporta
There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
(Applies to a full A prior to C and blind verification)
-
- Posts: 46
- Joined: Fri Apr 18, 2008 5:39 am
- Location: London
Re: Conclusion scales
Hi Michele,
Yes, I’d agree that if you put the characteristics in different places in a model you will get a different LR - in the same way that if you put them in different places on an AFIS system you will get a different score (or a different respondent if you put them in really different places).
I’m familiar with the one dissimilarity rule and I don’t follow it.
The point I was making was that I think latents with very few characteristics could provide support for a conclusion. Such latents could be considered no value if the traditional categoric conclusions are being used, but the use of a broader scale may allow their value to be considered. In which case, how do we decide which latents progress to the comparison stage and which do not?
Thanks
Dan
-
- Posts: 382
- Joined: Tue Dec 06, 2005 10:40 am
Re: Conclusion scales
Dan,
As you can see, we don't use 'support for exclusion' as a conclusion. It seems like the discipline was surprised a few years ago when the erroneous exclusion rate was higher than expected. Perhaps that's why some are developing a conclusion of 'support for exclusion'. I haven't seen any criteria for how to use this new conclusion, and therefore I feel 'inconclusive' is the better conclusion when a solid exclusion cannot be made.
Could very few characteristics provide support for an exclusion? Maybe, but I'd want to see (and test) criteria for when to use this conclusion before jumping on board.
Michele
The best way to escape from a problem is to solve it. Alan Saporta
There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
(Applies to a full A prior to C and blind verification)
-
- Posts: 382
- Joined: Tue Dec 06, 2005 10:40 am
Re: Conclusion scales
Sorry, I didn't answer your question about what is 'of value'.
To me, as the amount of information goes down, we may not know whether an impression has value. Therefore I prefer that impressions with small amounts of information be labeled as having 'potential value'. Whether or not impressions with 'potential value' move forward to be compared depends on many factors (staffing, importance to the case, etc.).
Michele
The best way to escape from a problem is to solve it. Alan Saporta
There is nothing so useless as doing efficiently that which should not be done at all. Peter Drucker
(Applies to a full A prior to C and blind verification)
-
- Posts: 35
- Joined: Tue Aug 07, 2018 7:36 am
Re: Conclusion scales
In practical terms, we define 'suitable for comparison' as latent prints that will result in reliable conclusions. We define this further by listing the amount of detail needed in most cases (fingers: 8, palms: 12), along with features that allow the print to be searchable. Strong support does not matter when you cannot find the correct area, so we add orientation, focal points, and target groups to the suitability criteria. You cannot identify that which you cannot find.
We recognize that our criteria are incomplete and do not cover all situations, so there is a carve-out for highly selective detail.
In our case we would not mark a print with one minutia as suitable, since without additional features (focal point, etc.) we could not be sure we were looking in the correct area.
Also, from a practical side, being able to exclude one person does not provide value; if you end up inconclusive for 99% of the people you compare and can exclude 1%, that does not make a very effective standard. So from a resource/time perspective you need a higher quantity/quality of information to be able to exclude a majority of the relevant population. On the flip side, 3 minutiae may provide strong evidence of an association according to the model, but it is also likely that there will be numerous close non-match individuals that would also provide strong evidence.
If you are going to be able to ID/strongly associate one person to a print you should be able to exclude most other people in the relevant population.
I am drawn to the comparison to signal detection theory when trying to determine what is enough. The signal-to-noise ratio has to be sufficient not only for you to detect the pattern but also to find that pattern within a set of exemplars. And your signal in the latent cannot be the same as the noise you find within exemplars.
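As a toy illustration of that signal-detection framing, sensitivity is often summarised as d′, computed from hit and false-alarm rates. The rates below are invented for illustration, not taken from any study:

```python
# Sketch of a signal-detection view of "enough detail": compute d'
# (sensitivity) from a hit rate and a false-alarm rate using the
# inverse of the standard normal CDF.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    z = NormalDist().inv_cdf  # z-transform of a probability
    return z(hit_rate) - z(false_alarm_rate)

# A hypothetical examiner who detects 90% of true correspondences but
# also "finds" correspondence in 10% of non-matches has d' of about 2.56.
```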
-
- Posts: 46
- Joined: Fri Apr 18, 2008 5:39 am
- Location: London
Re: Conclusion scales
Thanks Dave