ANAB Draft For Comments
Posted: Mon Apr 29, 2019 12:12 pm
by Boyd Baumgartner
ANAB just put out a Draft for Comments today regarding their
17025 and 17020 programs.
They're looking for comments and the link is buried a couple layers deep, so here's the
direct link. Make your voice heard!
There are 17 pages of changes just in the
17020 document itself, so it's not small!
One of the more interesting changes is this:
7.1.3.5 All inspection methods that involve the comparison of an unknown to a known shall require the evaluation of the unknown item(s) to identify characteristics suitable for comparison prior to comparison to one or more known item(s). NOTE 1 Characteristics include, but are not limited to, friction ridge detail in a latent print or striation detail on a bullet.
The question I have is this: are you reading this as a mandate for a linear application of ACE-V, meaning you must chart each latent in the case prior to comparing it? Or do you read it as applying a categorical status to your latent? For instance, here we label latent prints as No Value, Subject Value, or AFIS Value when analyzing a case, prior to comparing the prints.
I've added a poll to this thread as well so you can answer whether or not you're already performing a linear workflow.
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 5:43 am
by josher89
I think we get caught up in trying to use the correct verb to describe something, and if we use the wrong verb, it becomes confusing as to what we are really trying to say. So, ANAB says "evaluation" when what they really mean is "analysis" (as we would say).
(I think) ANAB is saying we have to deem something suitable for comparison before we can compare. Well, if we are comparing something, we are saying it's suitable, right? All this says is that you evaluate (analyze) the latent before you begin comparisons. It doesn't say you have to mark or record all characteristics first.
Taking this phrase literally, it says that characteristics are needed before beginning a comparison. It cannot dictate what or how many characteristics we must have in order to begin one. Sometimes L1D and a few ridge features are enough for me to begin a comparison. I am already marking (recording) pattern type (if known), anatomical source (if known), and presence of L2D (if available) prior to a comparison anyway, so I think I'm still meeting this standard.
There doesn't seem to be too much to worry about here. I haven't read through the rest yet, however.
I would encourage everyone to please read through this and even if you are not accredited, please take the time to comment. OSAC is looking closely at these standards and supplemental requirements when they are writing standards and best practice recommendations so your input is very valuable.
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 7:21 am
by Mark
Only the 17020 AR 3120 document is up for comment, not the 17020 standard itself. 17025 recently went through a revision, as did its forensic-specific AR 3125 document. AR 3125 has the same "already approved" requirement as the one cited from AR 3120 above (for those under 17020), and yes, I believe the intent is to document the basis of your suitability or analysis decision. That would seem to necessitate marking up the "what" (minutiae, ridge tracings, etc.) you will use for comparison to a known exemplar, not just categorically labeling prints without something to support that decision prior to comparison to an exemplar.
This standard would seem to require a linear approach for initial suitability determination because the concept is to mark the things that you deem make the print suitable for comparison/identification prior to potentially being influenced by the details contained in the known exemplar. I think for those using a system like GYRO, it is acceptable to go back to the analysis to chart some things in orange that are being documented to support the comparison that were introduced after having seen the known and so being transparent to that effect (this is in addition to the "characteristics" (data) already marked to determine initial suitability).
The intent of this standard, as understood by many accredited to 17025, is that it is not good enough to just say "suitable for comparison/identification" prior to comparison; per ANAB, you must "identify characteristics suitable for comparison," which seems to necessitate documenting the data (features) supporting that determination before introducing an exemplar for comparison. It doesn't come right out and say it, but clearly this is an attempt to reduce the bias of a known exemplar, at least when determining suitability in the analysis stage. If what you're "evaluating" suitability on isn't documented in some form, that would seem to be a non-compliance with the standard: although you may have mentally and competently determined that a sufficient quality and quantity of information exists to move to comparison, the standard seems to require that the specific information the determination rests on be accounted for (by documenting it). How you choose to implement this in policy is up to the agency.

For every AFIS search, you'd be meeting this by default, because either you or AFIS (or both) are assigning minutiae marks for the search, thus documenting at least some of the data that will be used for comparison. This standard just extends that step up front to all latent print determinations prior to comparing to a known print, which we (and others I've talked to) interpret as being able to prove what features you relied on (i.e., by marking them) before comparing to an exemplar. Admittedly, this is much easier to accomplish in a completely digital environment and harder if working strictly from lifts or printed 1:1 photos. To my knowledge there hasn't yet been guidance from ANAB on exactly what this standard requires, and I doubt there ever will be in black and white.
This is just how we are interpreting it moving forward and fortunately we've been operating this way since 2012 anyway.
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 7:54 am
by Boyd Baumgartner
josher89 wrote: ↑Tue Apr 30, 2019 5:43 am
I think we get caught up in trying to use the correct verb to describe something, and if we use the wrong verb, it becomes confusing as to what we are really trying to say. So, ANAB says "evaluation" when what they really mean is "analysis" (as we would say).
I don't disagree with your overall assessment, and it's not necessarily the word "evaluation" that makes it an odd read. Rather, it's the fact that they say you have to 'identify characteristics suitable for comparison'. Characteristics are what make a print suitable; it's not the characteristics themselves that are suitable. We talk about characteristics being clear, rare, abundant, objective, having orientation, or having anatomical indicators. The wording just seems to conflate the result of an analysis (suitability) with the information used in the analysis (characteristics).
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 7:58 am
by Heather Baxter
The specific standard listed is already part of ANAB AR 3125 (ISO 17025:2017). We recently passed our 3125 assessment, which included this specific standard. Our SOPs state that an analysis of the latent must be completed, followed by an analysis of the exemplars, prior to performing a comparison. A written analysis (not a markup of the latent itself) is required as part of our documentation. I think the examples provided in the standard have led some to (pardon the pun) overanalyze the standard and call for the markup of all latents prior to comparison. ANAB is not requiring that.
During numerous transition discussions, there was more concern in my lab in the Toxicology and Controlled Substances Units about this particular standard than in the comparison disciplines. The comparative disciplines already had SOPs requiring an analysis of the unknowns prior to a comparative analysis. The Tox and Controlled Substances Units had several conversations about how they were going to meet this standard.
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 8:30 am
by Snyder22
Heather, can you provide an example of what your written analysis looks like and how extensive it is for each print? I'm trying to gather all possible options, especially from someone who has already gone through an inspection.
Re: ANAB Draft For Comments
Posted: Tue Apr 30, 2019 9:44 am
by Heather Baxter
Sent you a PM, Snyder22.
Re: ANAB Draft For Comments
Posted: Wed May 01, 2019 7:46 am
by Mark
Heather, sent you a PM (assuming I did it correctly!)...thx
Re: ANAB Draft For Comments
Posted: Wed May 01, 2019 11:52 am
by Boyd Baumgartner
Heather,
if your agency policy permits, could you just upload a sample or copy/paste the verbiage into a reply? This thread might be helpful down the line for any agency going for accreditation.
thanks.
Re: ANAB Draft For Comments
Posted: Thu May 16, 2019 6:29 am
by josher89
I'm sorta hijacking this thread for a minute to see what you think about another proposed change:
6.1.5.3 talks about the process for monitoring performance via PT tests and lists four things (I'm paraphrasing): a) ensure the results aren't known beforehand, b) use approved methods, c) determine what constitutes a successful result, and d) ensure the quality of the PT test prior to the monitoring activity.
Nowhere in there does it say that the PT test shall be treated as actual casework. Thoughts?
Re: ANAB Draft For Comments
Posted: Thu May 16, 2019 6:42 am
by NRivera
josher89 wrote: ↑Thu May 16, 2019 6:29 am
I'm sorta hijacking this thread for a minute to see what you think about another proposed change:
6.1.5.3 talks about the process for monitoring performance via PT tests and lists four things (I'm paraphrasing): a) ensure the results aren't known beforehand, b) use approved methods, c) determine what constitutes a successful result, and d) ensure the quality of the PT test prior to the monitoring activity.
Nowhere in there does it say that the PT test shall be treated as actual casework. Thoughts?
The benchmarks for "b" and "c" are set by your agency's quality system or SOPs. It's not just a test of your ability to apply the ACE-V methodology; it also looks at your ability to follow your own agency's procedures in general.
Re: ANAB Draft For Comments
Posted: Thu May 16, 2019 8:12 am
by Steve Everist
josher89 wrote: ↑Thu May 16, 2019 6:29 am
I'm sorta hijacking this thread for a minute to see what you think about another proposed change:
...
Nowhere in there does it say that the PT test shall be treated as actual casework. Thoughts?
Since many agencies have to use PT tests that can apply towards their accreditation, it's hard for a third party to create a test that can be treated as actual casework while still meeting the accreditation requirements. And workflows can differ significantly across agencies.
At our agency, the LPE does the comparison, AFIS searching of any prints that are not matched and can be searched, reviews candidate lists, compares any prints of viable candidates, and enters all of this information into our own electronic tracking system. Then the entire case goes through a verification process. If any IDs are determined to be complex, an additional QA measure is put in place. And this doesn't get into how we treat inconclusive and incomplete prints/comparisons.
It would be very difficult to create a test that would be treated, in our agency, as casework. It would almost be easier to submit lift cards that would be worked blindly than to create a "case" that's known to be a test and work it like a case.
Re: ANAB Draft For Comments
Posted: Thu May 16, 2019 8:54 am
by josher89
And that's my point. I've heard for so long that our PT tests are not a valid test because we know they are PTs, and simulating casework is almost impossible. There's research that speaks extensively to that (some say we work harder on PTs; others say we don't care about PTs because they aren't casework). I'll try to find and link it.
blood tox PT
To me, a PT isn't just about getting the correct answer; it's about letting the process you use in the lab work. That means you should be able to have your PT test verified before submitting, and you should be able to do a root cause analysis and corrective action (if necessary) if you get an incorrect answer and still 'pass' the PT. Accrediting bodies give agencies the ability to determine the criteria for a successful PT, and it truly shouldn't be based on a pass/fail metric alone.
Some other notable, yet still only proposed, changes (specifically what they are removing - see AR3055):
7.1.2.3 - no longer requiring a procedure for the correct sequence of application of reagents, when essential
7.1.2.4 - no longer requiring (+) and (-) controls for presumptive tests
7.3.1.2 - no longer requiring what is included in the case file
7.3.1.7 - no longer requiring a procedure to deal with differences of opinions on conclusions
7.3.1.8 - no longer requiring pagination of case files
There are several more that don't particularly pertain to latents, but they still affect the overall reports/documentation.
Re: ANAB Draft For Comments
Posted: Thu May 16, 2019 2:38 pm
by anwilson
6.1.5.3 talks about the process for monitoring performance via PT tests and lists four things (I'm paraphrasing): a) ensure the results aren't known beforehand, b) use approved methods, c) determine what constitutes a successful result, and d) ensure the quality of the PT test prior to the monitoring activity.
Nowhere in there does it say that the PT test shall be treated as actual casework. Thoughts?
I wouldn't want PTs treated as actual casework. There are other, better ways to monitor the effectiveness of a quality management system. Steve mentioned fake evidence being submitted as if it were real (a blind PT), which is one way. Another way of seeing whether a system is working is through regular auditing, which is already a requirement for accreditation. For instance, you could have an audit specifically designed to look for cases that aren't in the correct ownership/location. With an audit like that, I can almost guarantee that every once in a while you will catch something in the wrong ownership, which would require an investigation, RCA, and corrective action. I view PTs as being designed to establish that you have maintained the ability to do a comparison. I don't really see a need to treat them like casework; however, I do believe there is a good argument that the prints used should mimic, as closely as possible, the prints examiners get in real casework. That's hard, though, since each agency sets its own standard on what is considered comparable.
7.1.2.3 - no longer requiring a procedure for the correct sequence of application of reagents, when essential
7.1.2.4 - no longer requiring (+) and (-) controls for presumptive tests
7.3.1.2 - no longer requiring what is included in the case file
7.3.1.7 - no longer requiring a procedure to deal with differences of opinions on conclusions
7.3.1.8 - no longer requiring pagination of case files
For 7.1.2.3: so the people writing this standard would be OK with an agency opting to use physical developer prior to other, less destructive methods? There's a reason why applying techniques in a particular order is important. I'd be interested to know the reasoning behind dropping the requirement for a sequential processing procedure. I worked at an agency where, if you only wanted to powder and never wanted to dye stain, you could. I feel like that's the intent behind this change, but I'm not sure the standard makes that clear.
For 7.1.2.4: is this specific to presumptive tests for drugs and blood? The reason for positive controls in latent processing is to know the reagent is working as intended, but this standard doesn't seem to be about latent processing, since we don't do presumptive tests.
For 7.3.1.2: I'm confused by this one, so it's probably just a difference in agency workflow. Are there agencies that have been required to list somewhere exactly what is included in an individual case file? We have in our SOP the types of documents that can be retained in certain circumstances, along with what documents are required if a particular task was performed. For instance, if processing was done, processing notes must be retained in the case file. However, we don't have a per-case list of what is included in the file.
For 7.3.1.7: are they advocating that there isn't a need for a procedure on how conflicts are handled? I think that's one procedure that is absolutely needed, so this change is really odd to me. With no procedure on disagreements, staff can handle them however they want, which could include verification shopping if one examiner doesn't agree with another. That just doesn't seem appropriate.
For 7.3.1.8: I'm all for this not being a requirement as it's currently written. If your SOP is clear on what documents need to be retained, a person reviewing the work would be able to tell if something was missing even without page numbers. The exception I can see is internally created controlled documents, where it wouldn't be obvious whether there were multiple pages. As an example, we don't paginate AFIS runs because our SOP requires the case file to include a printout of the encoded image, a printout of the candidate list, and a copy of the side-by-side of a hit. So on a hit case, if one of those documents were missing from the case file, I would know. Another example is lift cards: we have an evidence form where we document how many lift cards were present in the envelope when we opened it. So if one were missing, it would be obvious, and no page numbers are needed.
I would love to know the reasoning behind the proposed changes.
Re: ANAB Draft For Comments
Posted: Mon May 20, 2019 12:31 pm
by josher89
Another change concerns when observations are recorded. The current standard says "in a timely manner"; the proposed change says they "shall be recorded at the time they are made." This makes things somewhat difficult for CSI units, as well as for latent units in the middle of processing or examination. They are applying a laboratory standard to disciplines that fit better under an inspection environment than a testing environment. ARG!