Principle of Non-Specificity

Discuss, Discover, Learn, and Share. Feel free to share information.

Moderators: orrb, saw22

C. Coppock
Posts: 54
Joined: Wed Apr 26, 2006 8:48 pm
Location: Mossyrock, Washington

Re: Principle of Non-Specificity/Recursive Process/Training

Post by C. Coppock »

Experience with a particular type of problem set will no doubt increase efficiency in solving that type of problem, whether on a practical or formal problem-solving level. Part of this efficiency is a general increase in the speed at which problems can be solved. Professional athletes provide a simple example. Of all athletes in a particular sport, only a small fraction will be able to go professional. Their skill level is not only well above average, it is consistently well above average. Their problem set is their particular game in all its aspects. Each play within each game is unique, yet cumulative practice on similar problem sets has provided them with the skills and experience to quickly, accurately, and consistently solve the problem at hand. It does not mean that someone with lesser skill could not have done so, nor does it mean that there might not be a player with even better skills who could have been more accurate, faster, and more consistent. Yet in this example, the problem was sufficiently solved. Minimum sufficiency may be found everywhere, but the idea is to be better than that.

In a recent baseball game, Milwaukee vs. Houston, the pitcher intercepted a ground ball and, while on the run, intended to throw the ball to first. The problem set was to take the fielded ball and transfer it to the first baseman before the runner could advance to the base. A pitcher does a lot of ball throwing from a standing position, but this pitcher was on the run. His solution to maintaining an accurate throw in time was to reevaluate his potential accuracy within the time allowed, abort his overhand throw, and convert to an underhand toss, thus successfully solving the problem. It seemed the pitcher did not trust his overhand throw accuracy while on the run this particular time. There were infinite solutions to this unique problem, and it was successfully solved. Yes, the problem was unique, as we have never had this exact arrangement of information before. It is very similar to perhaps tens of thousands of other scenarios in baseball, yet unique nonetheless. The difference is that the professional is well practiced, accurate, and consistent. Lesser Joes, proficient at automotive repair, may have missed the ground ball in the first place. If they did accomplish the problem set, it would be with a different efficiency and accuracy level. Probabilistically speaking, their effort would be unlikely to reach a professional level. Now try to get that pitcher to fix your car, and you will be walking to first!

Experience, including practice and continuing education, allows us to learn the micro details that may be encountered in any given profession. Latent print examiners use just such experience to learn and understand their forensic comparison process. Each problem set is very similar to many others, yet unique. The application of their solution will also be unique (non-specificity). The solution to the problem is their hypothesis, which often arrives as a moment of positive recognition, an “aha” epiphany. Yet this is just the tipping point. The problem is solved with minimum sufficiency. The examiner can continue with the recursive process, gathering additional supporting information, essentially, formally going professional. The formal approach is to slow down and be methodical, objective, and critical of the process and its variability within a reasonably standardized format. This format is a learned process that takes into consideration specific types of variability. The goal is, of course, accuracy and consistency well above the level of minimum sufficiency. If it took an automotive mechanic two months to solve each car’s issue, it could be said that he met the rule of minimum sufficiency, yet he would be out of business in just two months. A latent print examiner must also find a professional level of competency that is consistently well above minimum sufficiency, or they too will be out of business.

The main issue is the phase transition from practical to formal problem solving. ACE-V allows for a scientific level of detailed analysis, comparison, cross-comparison, evaluation, and reevaluation, combined with the ability to iterate the procedure until all differences have been eliminated or until some failure criterion is exceeded. (1) It also allows us to factor out known psychological phenomena, thus minimizing their negative impacts. You can’t do that on the run.
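
To make that iterate-until-resolved idea a bit more concrete, here is a minimal, purely illustrative sketch in Python. Every name in it is a hypothetical placeholder (ACE-V has no such software interface); it only pictures a loop that keeps re-evaluating apparent differences until they are all accounted for, an unexplained difference remains, or a failure criterion (here, an iteration limit) is exceeded.

```python
# Illustrative sketch only; all names are hypothetical placeholders.
def explainable(difference):
    """Stand-in for the examiner's judgment about one apparent difference."""
    return difference.get("likely_distortion", False)

def iterate_comparison(differences, max_iterations=10):
    """Repeat the evaluate/re-evaluate cycle until a stopping condition is met."""
    remaining = list(differences)
    for _ in range(max_iterations):
        if not remaining:
            return "all apparent differences eliminated"
        if explainable(remaining[0]):
            remaining.pop(0)          # difference accounted for; keep iterating
        else:
            return "unexplained difference remains"
    return "failure criterion exceeded"

# Example: two apparent differences, only one attributable to distortion.
print(iterate_comparison([{"likely_distortion": True},
                          {"likely_distortion": False}]))
```
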
Fundamentally, it is not such a big leap to go from practical, on-the-fly problem solving to detailed formal problem solving. Depending on the time constraints in solving a problem, one may have the option of trying multiple solutions based on past experience, inference, deduction, recursion, etc., within the practical problem set. However, in some cases, the ability to make adjustments and corrections “on the fly” may be extremely limited. This is where a high level of practice and a wide knowledge base prove very beneficial in improving the probability of finding a correct solution to a particular problem. One can imagine driving a car as a cascading set of problems, where all the smaller problem sets are subsets of the goal of moving the vehicle from point A to point B. Many of the subset problems phase from one challenge to the next. Thus, the next problem is directly related to the current problem, and the current micro problem’s outcome is linked with the next problem, and so on. The cognitive attention needed to solve this cascade of problem sets is highly variable. Some linked issues require only occasional minor adjustments to the vehicle’s steering wheel, with no adjustments needed to brakes or acceleration. Here you have cruise control, with additional time to solve the problem of slowly drifting out of your lane. Now you are on a very steep, tight, winding road, on snow and ice, at night, in heavy fog. Now you have to focus nearly all your attention on the cascade of problems as gravity tries to accelerate you down the hill. Stress is high, and as the fog thickens, you lose your main reference point, the road! Vertigo is your brain’s answer to your failure to maintain the minimum information input relevant to the problem set.

How does this chronological cascade of inter-related problem sets parallel friction skin comparison? Consider one of our most basic comparison scenarios, an “ink print to ink print” comparison. The overarching problem set is to compare the two impressions and render a conclusion. Did they originate from one and the same source, or not? Within that set are inter-related, invariably linked sub-sets of problems to solve, consisting of small challenges such as comparing the one ink impression, a bit smudged at the core, to the second, an older 300 ppi live-scan that was obviously printed when the inkjet printer was out of calibration! Analyze, compare, evaluate, re-evaluate, compare, re-compare, evaluate, re-evaluate; each is a different subset, yet approached with an eye to the whole and in the context of our knowledge base and applied skill. Is this process recursive-like? Perhaps not in a strict mathematical sense, yet the repetition of process is a fundamental aspect found to different degrees at all cognitive levels. Recursive, as defined by Merriam-Webster: “of, relating to, or constituting a procedure that can repeat itself indefinitely.” In the real world there would be practical limits even within the realm of infinity, such as with the infinite possibilities in holistic processes, even if the final end state is death and taxes or, of course, the Individualization, Exclusion, or Inconclusive hypotheses. The word recursive is often used within the forensic sciences to describe the formal process of ACE-V. The word recursive, with a somewhat looser, non-mathematical usage, is in common use, and this should allow us to port it over to our forensic comparison. And we need it. Recursiveness smoothly bridges the gap between practical and formal cognitive problem sets. Our question should then be: how does the formal process differ, and what can we do to better understand it and fine-tune it?
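
As a side note on the word itself, the contrast between recursion in the strict sense and the looser “repeat the same process over sub-problems” sense can be pictured in a few lines of Python; the function and region names below are made up purely for illustration.

```python
# Hypothetical illustration: strict recursion versus plain repetition.
def compare_recursive(regions):
    """Strictly recursive: handle the first sub-region, then call itself on the rest."""
    if not regions:                          # base case: nothing left to compare
        return []
    head, *rest = regions
    return ["compared " + head] + compare_recursive(rest)

def compare_repetitive(regions):
    """Looser, practical sense: the same process simply repeated region by region."""
    return ["compared " + region for region in regions]

regions = ["core", "delta", "left side"]
assert compare_recursive(regions) == compare_repetitive(regions)
```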

It is important that we improve our understanding of our cognitive process as applied to formal problem sets such as forensic comparisons. It should also help to understand the formal process in its relationship with more practical cognitive processes. In addition, it may help expose our process weaknesses, where we tend to skim over information that truly warrants more analysis, comparison, and re-comparison. Distortion is a prime example.

Reference:
(1) A. Newell outlined this approach in his 1969 publication “Progress in Operations Research,” referencing means-end analysis. / Hofstadter, Douglas (2000). “Analogy as the Core of Cognition,” The Best American Science Writing 2000, Ecco Press, New York.
Further related information can be found in the CLPE posts for “Circular reasoning vs Circular Process.”
C. Coppock
Posts: 54
Joined: Wed Apr 26, 2006 8:48 pm
Location: Mossyrock, Washington

Re: Principle of Non-Specificity

Post by C. Coppock »

Intuition, Insight, and Degrees of Solution.

Our problem solving process is a circular process embedded within a chronological scale. This includes a repetitive similarity of process, though not of the data analyzed. Each cognitive analytical event evaluates, compares, and reasons, then moves to the next cycle, quickly running through large and small data sets within a much larger set, all of it hopefully relevant to the problem set to be solved. A recursive process iterates toward a goal of solution. But what is really happening?

Recursive inventory analysis may at some point tip towards a tentative hypothesis of uniqueness and, by insidious criminal association… individualization. You just can’t separate the uniqueness from the individualization, nor the crime from the criminal. But how do we measure the information needed for such a hypothesis? How do we know that we have reached this tipping point, this minimally sufficient information that allows us to understand individualization, a moment of positive recognition in context, when “No predetermined number of friction ridge features is required to establish an individualization”? [1] We usually try to articulate some sort of ratio between the quantity/quality of information and the lack of it. An intuitive ratio? When we reach that tipping point we can typically illustrate such a feat with a simplified point chart. In most cases it is a level 2 chart that marks points of interest with red dots. These are points that we, in part, analyzed. Can it be said that we intuitively know we have reached the minimally sufficient information that supports a hypothesis of individualization? If the tipping point is relative to the examiner’s ability to evaluate the problem set, and the analysis is considered a unique development (non-specificity), it can’t be otherwise, can it? This is, fundamentally, degrees of solution. One examiner will require more or less information than another to solve the problem. The important point is that they each successfully surpass the minimally sufficient relative threshold. Of course, any additional analysis beyond this threshold would add hypothesis support and lower uncertainty. An analogy is that we don’t need to compare each and every minutia of a palm print to form a reliable hypothesis.

We must admit that our understanding of what constitutes a “minimally sufficient” information set, one that allows us to recognize individualization when it exists, is a cumulative compilation of our experience and our capability applied to the study of that problem. Therefore, it must also be based, to a significant degree, on intuitive probability. There is no sharp, well-defined threshold; each examiner will have the potential to reach a “minimally sufficient” tipping point from their nonlinear analysis according to the available information and their relative ability to correlate and comprehend that information at that particular point in time. Of course, as the analytical process progresses, one’s knowledge also changes. The old saying that “you can never step in the same river twice” also applies to our cognitive process. Our mental database is always changing, from moment to moment and year to year. We gain new knowledge and forget some as well. What is the probability of my hypothesis being correct? As this is not computable to any sufficient degree of accuracy, we are forced to rely on our ability to estimate probability on an intuitive level while we study our rough probability models in the hope that they are at least reasonably accurate.
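
For what it is worth, one way to picture such a “rough probability model” is sketched below in Python. The weights and threshold are entirely made up, and the function is not a validated statistic or anything an examiner actually computes; it only illustrates the idea of cumulative support crossing an examiner-relative tipping point.

```python
# Deliberately crude, made-up model of a "tipping point"; not a real metric.
def cumulative_support(features, threshold=6.0):
    """Accumulate weighted support until it crosses the (relative) threshold."""
    total = 0.0
    for quality, weight in features:   # quality in [0, 1], weight per feature type
        total += quality * weight
        if total >= threshold:
            return "minimally sufficient support reached", round(total, 2)
    return "insufficient support so far; keep analyzing", round(total, 2)

# Example: eight corresponding features of varying clarity (invented numbers).
features = [(0.9, 1.0), (0.8, 1.0), (0.4, 1.0), (0.9, 1.5),
            (0.7, 1.0), (0.6, 1.0), (0.95, 1.5), (0.5, 1.0)]
print(cumulative_support(features))    # crosses the threshold on the 7th feature
```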

In a nutshell, the solution, at a minimally sufficient threshold, has the potential to be discovered holistically and recursively from the inter-related informational problem set while drawing from related experience such as training and practice. In the end, a hypothesis is made of this newly discovered solution that we can illustrate as correct based on past experience and the concept of uniqueness. How we adjust our applied knowledge at specific points within the problem solving process is also critical to our ability to efficiently solve problems. If one particular approach does not work, then we need to recognize that it may be due to our process, not a lack of information present. An examiner who keeps using a certain incorrectly acquired target group may reach a conclusion of No Match. In fact, several examiners may use this same false key, whereas the solution was to use a different starting point, as subtle distortion has had its effects. In such a scenario it could be said that both the comparison process was incorrectly applied and the intuitive probability assessment was based on false information. It illustrates the potential limits of the human element in our process. This is an error to be sure, yet it is not evidence that the concept of individualization is inconsistent. Had the examiner(s) had the insight to try a new target group, they most likely would have found the solution.

“Modern research on problem solving has continued to wrestle with emergent phenomena, such as insight. In the dominant modern formulation, discontinuous change in problem-solving behavior is assumed to involve a restructuring of an internal representation of the problem.” [2] How do we fit insight, emergence, and intuition into our problem solving process? Insight can be largely attributed to efficient utilization of our base knowledge relevant to specific relationships; in effect, a pool of information and experience we can successfully access and understand in context as needed. Thus, insight can be the ability to see options. With this outline it is easy to see how insights can be limited and unrealized.

Emergence can also be described as “the rapid appearance of novel structure” within the recursive process. “Examples of emergent cognitive structure can be traced back to early Gestalt research in problem solving.” [3] In this light, emergence can be thought of as strategic application and is directly related to insight. While forensic comparison may not seem to require significant degrees of strategy, the base comparison search would. That false negative conclusion may have been the result of poor strategy that failed to produce the needed insight. Or perhaps our knowledge base and experience did not contain sufficient information for us to understand the relative connections and thus solve the problem.

I think it is safe to say that we do indeed use intuition to understand degrees of uncertainty in our evaluations. We must, because we can’t run the hard numbers. We are not allowed to run the numbers. We simply do not have all aspects and variables quantified; however, we may indeed have the information we need to solve the problem itself, even if we don’t realize it.

I see no way to shake the “intuitive probability” aspect, since we cannot have all the variables inventoried and well understood. We cannot even make a precise prediction regarding the degree of uncertainty within any of the relevant variables. That sounds catastrophic; however, it is not a bad thing and could even be considered perfectly normal for this type of problem solving. It is just that most critics want, and in some cases demand, solid numbers for their comprehension of proof. I consider this ignorance on their part rather than a weakness on the part of the examiners. While we will never be able to identify and calculate all the aspects of uncertainty, we can, with proper training and experience, minimize their negative impacts on our accuracy and get the job done. Uncertainty, including the application of intuition, does not mean a hypothesis is not accurate; it simply means we can’t understand and utilize all the information within a real world problem set. A hypothesis of individualization is based on a non-algorithmic, nonlinear assessment and is made in the face of uncertainty, albeit with the odds generally on our side. Absolute proof is an eccentric mathematical concept that seems to exist simply to cause us mere mortals mental grief. Luckily, we don’t have much use for absolute proof. Intuitively, less than absolute seems to work just fine in the real world.

I like the concept of breaking things down to better understand their sub-systems and components, yet the best approach may be a recursive one that studies relationships and insight within a framework of education and experience, coupled with intuitive probability. Informational sets and processes are revisited and insights applied as new relationships are better understood in context during our investigation of the problem. Many processes seem to only work as interrelated groups; by themselves, they may simply become novelties. In our motorsport analogy of a few posts back, the problem set was to navigate a lap of a race track. This problem set, not unlike any other problem set, is full of relationships and groups of interrelated actions that surge within a chronological order and subsequently must work in concert to solve the problem. I would think this is scalable to the simple problem of comparing two prints. There are many relationships and groups of inter-related data that, when separated out, have little value, but in particular relationships mean everything. A key is to comprehend the value of the relationships and interrelationships as one works through the problem. Again, I think the value of these relationships must be assigned intuitively.

Perhaps specific training in probability will help. I understand general research has shown that our intuitive probability skills are underdeveloped.

References:
[1] SWGFAST, Standards for Examining Friction Ridge Impressions and Resulting Conclusions, 100910 draft.

[2] Stephen, Damian G.; Dixon, James A. “The Self-Organization of Insight: Entropy and Power Laws in Problem Solving.” Citing (Bowden, Jung-Beeman, Fleck, & Kounios, 2005; Chronicle, MacGregor, & Ormerod, 2004; Fleck & Weisberg, 2004; Gilhooly & Murphy, 2005; Knoblich, Ohlsson, & Raney, 2001). “Insight entails an observable discontinuity in a solver’s approach to a problem indicating a restructuring of the solver’s representation of the problem” (Chronicle et al., 2004; Weisberg, 1996).

[3] Stephen, Damian G.; Dixon, James A. “The Self-Organization of Insight: Entropy and Power Laws in Problem Solving.”
C. Coppock
Posts: 54
Joined: Wed Apr 26, 2006 8:48 pm
Location: Mossyrock, Washington

Re: Principle of Non-Specificity

Post by C. Coppock »

Non-specificity and “Within-Expert” problem solving consistency.

Our search for hypothesis consistency when solving forensic comparison problems is often practically measured by known-error and proficiency testing. Then there is “within-expert” testing, in which known problems are repeatedly presented to experts for evaluation. Inconsistency in conclusions has been a topic of discussion relative to the accuracy of comparative forensic science, which of course includes fingerprint individualization. Studies have shown that there is an error rate not only in between-expert hypotheses, but in within-expert analysis. This is where an expert faced with the very same problem arrives at a different conclusion. [1]
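
As a trivial illustration of what such “within-expert” figures measure, the toy Python below (with invented data, not numbers from the cited study) simply counts how often an examiner’s repeated conclusions on the same comparison agree.

```python
# Toy calculation with invented data; not results from Dror & Rosenthal.
def within_expert_consistency(repeated_pairs):
    """repeated_pairs: (first conclusion, repeat conclusion) for the same comparison."""
    same = sum(1 for first, second in repeated_pairs if first == second)
    return same / len(repeated_pairs)

trials = [("ID", "ID"), ("ID", "Inconclusive"), ("Exclusion", "Exclusion"),
          ("Inconclusive", "Inconclusive"), ("ID", "ID")]
print(within_expert_consistency(trials))   # 4 of 5 repeats agree -> 0.8
```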

An interesting point is that if the comparison is not recognized as being identical to one encountered in the past, then only the testers have a concept of identical. From the examiner’s perspective, they will have to holistically approach the problem again using their expertise and strategy, which will be different from the last time they analyzed the problem. Even recall of experience-based detail will be slightly different. Since the analytical process will again be applied uniquely (non-specificity), there is a real and practical probability of a new and possibly conflicting hypothesis. At first, this sounds serious. However, I would think this could be said of most any human endeavor. In our previous (thread) example of the car lapping the racetrack, each lap is different, even though the racetrack stays the same. The most important point was the successful solution of the problem, not exactly how the solution was achieved. With forensic comparison, each comparison is also approached uniquely. Even if the difference is slight, it can be enough that the examiner fails to recognize the correct solution, or enough for the discovery of a correct solution, or perhaps an expanded solution where new aspects are recognized.

The important point is not a reminder that humans err, but rather to continue to focus on advanced training, research, and proper application of our methodologies to reduce error. The Analysis stage of ACE-V is very important for understanding quality variables such as distortion. Bias is also real for examiners and investigators at all levels. Understanding what bias is and how it affects our work can help us minimize its negative influence on our science. With the rapid expansion of forensics in law enforcement and the military over the last decade, we need to take a closer look at how to improve training quality and quantity, as well as scientific guidelines.

There is an immeasurable need for continued education with advanced comparison training. Academia can help us focus on specific issues, yet it is up to us to fill in the gaps. With the combination of new examiners in the field and a reduction in agency-funded education, we need to be proactive and develop new training solutions to maintain high levels of accuracy.

[1] Dror, Itiel E., Ph.D., and Rosenthal, Robert, Ph.D. “Meta-analytically Quantifying the Reliability and Biasability of Forensic Experts.”
ER
Posts: 351
Joined: Tue Dec 18, 2007 3:23 pm
Location: USA

Re: Principle of Non-Specificity

Post by ER »

While examiners should never reach opposite conclusions on subsequent examinations, there can be some difference even 'within-expert'. In other words, if an examiner said ID first, then later conducted the same examination and said Exclusion, one of those answers is an error. However, if an examiner first says ID, then later says inconclusive, there may not be an error. The answer is different, but both answers might be valid. Does variability equal error? If both answers are valid, neither is an error. If during a first comparison an examiner sees sufficient detail in disagreement to exclude, then unknowingly recompares the same latent and exemplars but decides to give the more conservative answer of inconclusive, is either answer wrong? How do you decide which is an error? I believe there is a group of comparisons that fall on or near the line that divides ID from inconclusive, and another group that falls on or near the line that divides exclusion from inconclusive, where there is more than one correct answer.

Thankfully, most comparisons do not fit this situation. Most comparisons only have one answer.
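
One way to picture that boundary idea is a toy threshold model like the Python sketch below. There is no real numeric scale behind an examiner's judgment, so the "score" and thresholds are invented; the point is only that comparisons sitting near a threshold are the ones where repeated examinations can legitimately differ between a definitive conclusion and inconclusive, without ever flipping between ID and exclusion.

```python
# Toy model only; examiners do not compute a numeric score like this.
def conclusion(score, exclude_below=-3.0, identify_above=3.0):
    if score <= exclude_below:
        return "Exclusion"
    if score >= identify_above:
        return "ID"
    return "Inconclusive"

for score in (-3.1, -2.9, 0.0, 2.9, 3.1):
    print(score, conclusion(score))
# 2.9 and 3.1 straddle the ID threshold: a slightly different evaluation of the
# same prints could reasonably land on either side of it.
```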
Ernie Hamm
Posts: 214
Joined: Sun Jan 22, 2006 10:24 am
Location: Fleming Island, Florida
Contact:

Re: Principle of Non-Specificity

Post by Ernie Hamm »

Opinions of “identification” and “exclusion” are both definitive conclusions with vastly different repercussions.

“An erroneous identification can be a miscarriage OF justice”

“An erroneous exclusion can be a miscarriage TO justice”

Avoid making either!!
C. Coppock
Posts: 54
Joined: Wed Apr 26, 2006 8:48 pm
Location: Mossyrock, Washington

Re: Principle of Non-Specificity

Post by C. Coppock »

It is great to see new postings on this topic. I am trying to illustrate the gray areas of our cognitive functions. If we are more aware of the details of our methodology and the limitations of its application, we can improve our process and minimize error.

An analogy I see as similar is that of aircraft flights. When jets fly fine, they draw no attention, yet when they crash... the world wants to know why! In this thread I'm bringing some of our "normal flying time" to light. What frame of reference do we really work in?

Science is a process that is full of estimates and probability rather than exactness and absoluteness. Mother Nature absolutely does not like to provide exactness. If we can sort out how and where we encounter such "estimation/probability issues," such as levels of distortion, thresholds, etc., we can invariably improve our process.

Some of these gray areas may be the points we ponder, or perhaps view correctly from different perspectives. It is the holistic nature of the process coupled with our experience-based inferences, which are variable from moment to moment as new information is recognized and evaluated. It is the old saying “you never step in the same river twice,” just revised: “You never utilize the same information sequence twice,” as with the non-specificity principle. There is always new information in the information set as it is evaluated forward in time. This can be relationships, degree, insights, and of course, the information we simply forgot!

It is not wrong, and these issues do not invalidate the process; it is simply the limitation of the human mind operating within the confines of physics, and according to quantum physics... well, you know... it's all about probability. What does this mean for us? How can we expand our research and improve our training by studying our "normal flying time"?
C. Coppock
Posts: 54
Joined: Wed Apr 26, 2006 8:48 pm
Location: Mossyrock, Washington

Re: Principle of Non-Specificity

Post by C. Coppock »

I finally figured out how this concept (Non-Specificity) fits in the really big scheme of things.
Here it is: Scientific Method 2.0, beyond ACE-V.
https://fingerprintindividualization.bl ... m/2017/06/