Posted: Tue Aug 14, 2007 5:13 am
by mdavis
The problem with so many accusations of potential bias is that these arguments all seem to exclude the reality of independent thought.
The crux of the matter. So how are examiners to "prove" (document) the fact that their verification was really independent thought? It stems back to accusations of bias from the defense, at which examiners take offense, resulting in excessive, time-consuming documentation aimed at countering such allegations.
I do not think that ongoing casework is the place for training exercises IF we are trying to demonstrate lack of bias. Using a more experienced or "better" examiner to talk you through an ident you missed (didn't see or don't see, not a "bad" call) would be pounced upon by the defense in a heartbeat if they knew that's what was being done. My color example was perhaps not a good one, because true colors can be measured with a spectrometer, uninfluenced by ambient light or by the reverse effect of the human eye/brain when subjected to an over-abundance of a single color.
We all learn by comparing our results to others'. But the time to do this is AFTER the case is completed, ACE'd and V'd, not during the V process, in which the ACE attempts to convince the V'er that a valid ident exists based on what #1 saw when #2 does not see it.
The car story is a good one, but it assumes that #3 and #4 are around and that they both agree on what was seen. If they are truly interviewed independently and had absolutely no possible way of corroborating stories prior to being interviewed, we might call it an ident. But what if it were an out-of-state plate on a less conspicuous vehicle, say a beige Toyota, the plates had a combination of letters including either "O" or "0", and the state of origin was neither obvious nor readable? Two observers described the car as a beige 4-door but were unsure of the model and/or year, and there was an inconsistency in reporting the tag as having an "O" or a zero. This would be a far more common scenario.
Posted: Tue Aug 14, 2007 8:23 am
by L.J.Steele
mdavis wrote:So far, I know of no Daubert challenges that have been successful in any of over 40 states. It would seem to be a last-ditch attempt by a desperate defense with no clear evidence of possible innocence.
I count Plaza (with its concerns expressed about some aspects of FBI testing, and its requirement that the latent, the exemplar, and enlargements sufficient for the jury to understand the match be presented) as a partial win. Ditto Patterson (concerns expressed, and exclusion of certain simultaneous-impression matches). We'll have to see how Langill plays out on appeal. (Anyone know what the status is?)
I think there are potentially serious issues with confirmation bias that can and should be litigated via Daubert. Also, for some departments, issues of training and methodology.
Posted: Tue Aug 14, 2007 8:28 am
by L.J.Steele
mdavis wrote:If the defense truly believed that the ident was bad, the rather obvious solution would be to enlist their own "independent" (already biased) legitimate LPE to re-examine the latent(s) in question. I guess it could be argued that by creating such havoc that case processing times are extended by a factor of 2X, they are increasing backlogs and consequently pushing cases beyond the statute of limitations, thus creating the potential of winning by default.
Problem is, that safeguard didn't work in Mayfield (court-appointed expert agreed, provisionally, with the FBI) or in Cowans (two defense experts agreed with the Boston PD). And the public defenders (who get the bulk of these cases) don't have the budget to hire outside experts for every case where the defendant says he didn't do it.
I agree with you that there are some serious practical issues, and perhaps one needs to narrow the calls for blind (which can be different from independent) verification to those cases where confirmation bias is more likely to be an issue -- difficult matches, cases where the examiner has potentially contaminating info at the time of the match, high-pressure cases, cases where the print match is the only evidence against the defendant, etc.
Posted: Tue Aug 14, 2007 8:39 am
by L.J.Steele
mdavis wrote:I was told (grapevine) that our public defender office has a budget about twenty times greater than the combined budget for all the crime labs in the state. No wonder they can afford to fly in their "independent" experts for rebuttal.
You should be able to find your public defender's office's budget by examining your state's budget. I can say that in Mass and Conn, the two states I work with, the public defender is always underfunded. (Both Mass and CT ran out of money in their FY 06-07 budget -- CT decided to pay me 6 weeks late, Mass still owes me money from a 7/1 bill.) Caseloads are high. Pay rates lag well behind the private sector. (And for the folks like me who work as special public defenders, factor into our rates that we have to pay for our own offices, utilities, staff, travel, sick days, and retirement.) If one wants an expert in CT, one has to justify it to one's superiors, who take a skeptical view of expensive outside experts. If one wants an expert in Mass, the request is done thru the trial court, which similarly takes a skeptical view of the matter.
But the problem isn't public defenders (or the private defense bar) vs. the labs. The question starts with some basic principles. How well does the "V" of ACE-V work if the examiner and/or verifier are exposed to biasing information when they make their decisions? Is there a practical way to reduce the risk of that information contributing to a mis-ID?
You are right that we could use much better training in science for lawyers and judges. I gather the law schools are doing more with science and law classes, and there are ongoing classes for practicing lawyers on these topics, but it will take time for scientific literacy to trickle thru the system.
Posted: Tue Aug 14, 2007 9:45 am
by Michele
Problem is, that safeguard didn't work in Mayfield (court-appointed expert agreed, provisionally, with the FBI) or in Cowans (two defense experts agreed with the Boston PD).
I just wanted to clarify that verification as a 'confirmation' process didn't work in these cases. Verification as a peer review process may have worked; we'll never know, because it wasn't done.
I could be remembering incorrectly, but I thought the Patterson decision mentioned that verification wasn't adequate peer review. I think it may have been the Stacey report that mentioned that verification works in most cases but additional measures may be needed in high-profile cases (I'm sure someone will correct me if I'm wrong about how and where this was stated). We talk about wanting and needing high QA measures, but I can't seem to figure out why we (as a profession) ignore peer review and the value of it and keep looking at 'blind' as the only way to protect against biasing information.
The question starts with some basic principles. How well does the "V" of ACE-V work if the examiner and/or verifier are exposed to biasing information when they make their decisions?
When ACE-V was initially articulated, V was described as peer review. Over the years we've re-articulated V to be 'confirmation' (by 'we' I mean 'not me'). I guess the question comes down to this: do we want good QA measures, or do we want the method to be easy to use and easy to describe?
Posted: Mon Aug 20, 2007 6:31 pm
by Norberto Rivera
Not trying to knock you, Michele, but here we go again: identification vs. individualization, peer review vs. confirmation... is it possible to come to a consensus at all?
This is precisely why I opt for detailed documentation. Do I think it's too much? I most certainly do. Do I think it could be done better? Absolutely. It takes me two pages to document one ID, not counting the chart. Thankfully I can get away with that because my current caseload allows it. This is the only way I know to keep my work transparent. Just about anyone can pick up one of my reports, read it, follow it and understand what the process was and why a particular conclusion was reached. I'm certain that as I gain more training and experience that will change. I am considering the suggestion that was mentioned about a checklist for common observations. That documentation, IMHO, should be disclosed along with the final report as part of the case file though, so I reach a quandary.
It's frustrating to me as a relatively new examiner (3 yrs) to see such dissension amongst LPE's who are long-time experts and accomplished professionals in this field over something as simple as identification vs. individualization. My experience with jurors has been that they want to know whether a print was "matched" or not. That's their vocabulary. Is that necessarily correct? Maybe, maybe not, but we should keep that in mind.
I look at the "V" process as a complete review of the original examiner's work with the purpose of ensuring that their methodology and observations are sound and correct. How can I do that if I don't know what he/she saw or did? The report can say "at least 10 points of minutiae" or however it's worded. What if I only see 5? What if I see 18? My bottom line is making sure that a misidentification is not made and the wrong person does not go to prison. /soapbox.
Posted: Mon Aug 20, 2007 8:20 pm
by mdavis
Whatever happened to giving the original lift (photo) and control to another examiner and having it verified without comment, coercion, prompting, suggestion, or bias? Do I have the time to write 2 pages of detailed description of my intuitive thought processes during my evaluation? And my argument continues to be that if I do write down each and every detail that I "think" I see, then that very description is a prompt to the verifier to "see" what I saw, which I argue is bias.
If the identification ("individualization" to those of you who need to use that terminology) is so close to the edge of the envelope that you have to "sell" it to another qualified examiner, then it's too close to call. If it's clear, it's clear. If it isn't clear, then maybe you should back off.
I just finished a case in which a briefcase full of stolen checks was recovered. There were over 100 checks in the case in various stages of forgery completion. There were latent prints of value on nearly every check, more than one on most of them. There were about 150 prints of value in total. If I were to write a 2-page description of every identification (sorry, "individualization") I made on each latent print in that case, I'd still be working on it months after the fact. Once you establish such over the top P&P, you are stuck with it forever. It isn't necessary, and it doesn't demonstrate lack of bias and independence. Any examiner can open one of my case files, pull out enlargements of the original idents (sorry again, "individualizations") and see the comparison between the impressions for themselves. They don't need (and shouldn't need) my biased description of what I "saw".
I envy anyone who has the time to work in that fashion, but I don't think it demonstrates independent verification.
Posted: Mon Aug 20, 2007 9:47 pm
by Norberto Rivera
mdavis wrote:... if I do write down each and every detail that I "think" I see, then that very description is a prompt to the verifier to "see" what I saw, which I argue is bias.
After reading the paper on observer effects by Risinger et al., I have to revise my stance and agree with you on this point.
mdavis wrote: If the identification ("individualization" to those of you who need to use that terminology) is so close to the edge of the envelope that you have to "sell" it to another qualified examiner, then it's too close to call. If it's clear, it's clear. If it isn't clear, then maybe you should back off.
I think you misunderstand the motivation behind this. The purpose is not to "sell" anything to anyone, but merely to document the steps followed to make an individualization (sorry, identification).
I understand your point now though.
Posted: Tue Aug 21, 2007 5:05 am
by mdavis
I don't think we're at odds here for the most part. My comment regards the questionable use of vast amounts of time to describe what amounts to a personal evaluation of a picture.
I was not trained in a large lab. My training was probably like that of most examiners, beginning with learning classification rules and working with thousands of control cards to get a feel for patterns, ridge flow, commonalities of ridge detail, and to see what is usual and what is relatively unique. I was trained in the art of identification (based on science, yes) by simply doing it. I attended one of the last FBI Advanced Administrative Latent Print classes at Quantico -- a month long -- in which we spent essentially one week working on identification of latent prints. We spent little or no time learning "what" to see; we simply practiced to sharpen our skills. That suggests that each of us learned our own techniques of elimination by doing it, not so much by learning HOW to do it (although you do learn to work "smart" by elimination processes).
What a detailed, written description of an ident (individualization) does is to consume vast amounts of time to "teach" another examiner (or perhaps the court ... one wonders at the purpose) exactly how you began your process of elimination. But if you started at the core, I may start at the delta. If you began with a crossover ridge, I may find a trifurcation elsewhere as my "key". But why bother in the first place? If we reach a common conclusion (ID or not ... hey, why not use ID for both identification or individualization ... but I digress), then does it matter how we reached that conclusion? And if I am asked to verify, either as your "V" or as a defense expert, I want a clean slate. I don't want you to tell me what you think you saw. If we don't agree, I don't want you to try to "sell" me on your method to prove mine is inadequate after I've examined the print. What if I see something you don't see? How is that adjudicated ... or does that mean the ID was too close to the edge to be called? I'd vote for the latter.
So the detailed description may help in a training process, but shouldn't be used to bias another examiner. Aren't we really doing it for the courts? And if so, are we trying to "sell" them on the ID? Well, yes, in a way, but are they adequately trained and experienced to recognize common and uncommon characteristics, L2D and L3D and know how much is needed to ID? That comes only from years of experience in which you teach yourself by repetition, not by reading volumes of descriptions made by "competent" examiners of valid IDs. It has been years since I used visual comparison charts in court. They only cause problems as untrained observers attempt to "see" your conclusion without adequate experience to evaluate the ID. We wouldn't prepare comparison charts for a defense expert, why do we use them in court? And is not a written detailed description the same, rather shallow attempt at describing a picture in 1,000 words?
I'm being the devil's advocate here. I fear that the latent print community is over-reacting to a couple of rogue judges and creating vast amounts of time-wasting documentation preparing for the unknown. We should stand on our history of competence, and weed out those who do not meet the ethical standards of our profession. By responding in a knee-jerk reaction to every threat from untrained judicial personnel, we are cheapening our image, implying that we've been doing it wrong all these years, despite our success rate. Valid, honest, independent (truly independent) verification should be the goal.
Posted: Tue Aug 21, 2007 8:45 am
by Michele
Norberto,
I take no offense at your comments; you’re absolutely right, there’s a huge lack of consensus on several terms and topics. I don’t know that it’s necessary that we all agree (that would be nice, but I don’t see it happening), but I do think it’s important to understand that people use the terms differently. Communication is hard enough when we use terms in different ways, but communication is completely ineffective if we don’t realize that terms are being used differently.
I agree with you about the value of documentation; it’s a great way to show what led the examiner from the data to their conclusion (and recommended as a tool in science), but I think we need to be practical. Some (maybe most) IDs are so obvious that documenting every detail may be considered excessive and not scientifically recommended. Even if it’s not recommended for every print, my personal opinion is that it should be done a lot during training.
I understand why people like blind testing: it ensures a conclusion is reliable and independently reproduced, and it ensures that bias isn’t introduced. I’m not against it, I just think we need to recognize when blind testing is valuable and use it in those cases. I thought the experts on bias have shown that bias is only a problem with close calls. If this is true, then using blind testing to protect against bias in other IDs (that aren’t close calls) would mean that you’re protecting against bias in cases where bias isn’t a problem. You may have procedures that you follow, but that’s not the same as having good quality control measures.
Blind testing, documentation, confirming the conclusion, and full peer review are all valuable quality assurance measures. I think we need to understand when each has value and use them at the appropriate times.
By responding in a knee-jerk reaction to every threat from untrained judicial personnel, we are cheapening our image, implying that we've been doing it wrong all these years, despite our success rate.
Unfortunately, the route to advancement does require that we acknowledge that we might not have been doing things as well as we could have; that’s just one of the sacrifices we have to make to progress. I’d rather explain in court that the changes we’re making are a proactive attempt at improving our profession than wait until a huge mistake happens (Mayfield) before we revisit our procedures.
Posted: Tue Aug 21, 2007 5:23 pm
by mdavis
But (devil's advocate again), we are allowing untrained defense attorneys and judges to decide that we might not be doing our jobs properly....jobs that have been honed to an extremely high level of accuracy (despite what trolls may suggest) by the court system, training methods, pride and results for over a century. If our courts are now willing to throw the baby out with the bathwater, as they are doing by attacking the methodology rather than the results, that is their loss and a loss to the people they serve.
It seems a rather long time now since we debated the validity of a given ID. Instead, we are hung up on rendering well-accepted terminology invalid (or at least suspect), and on creating volumes of time-consuming documentation in response to one or two court opinions. There may be labs in this country (U.S.) that are on time with their latent print work, but I don't know of any. If we continue to lengthen the case-by-case process to the point where we cannot provide accurate and timely information to investigators, if we over-burden an already understaffed lab with unprofitable paperwork (unprofitable in terms of guaranteeing validity), if redundant paperwork extends court deadlines past the statute of limitations, if we continue to frustrate professional examiners with mindless requirements, forcing them to seek other careers, then the defense attorneys will be "high-fiving" in the law library as their clients are released, either because time ran out on the crime or because a perfectly valid ID was disallowed on a newly manufactured technicality.
Our job is to provide expert, factual, and verified opinion to the very best of our human abilities. Michele's point is well taken, that only the close ones should be considered for novel-writing status. But if that is truly necessary, then either the first or second examiner is not adequately trained to competency, or the ID is too close to risk calling. In other words, if I feel the ID is so questionable that I must produce a long narrative "sales job" to convince or show my "V" or a legitimate defense expert why I made the call, then I may not be certain enough to believe it myself. If you can't see it, then maybe I didn't see it either. The eye of faith is strong and must be avoided.
Posted: Wed Aug 22, 2007 7:59 pm
by Norberto Rivera
There has to be a happy medium somewhere though. I wholeheartedly agree that addressing every minute detail is too much, but on the flip side I don't think that the following is sufficient documentation for an ID:
On <date> the laboratory received the following evidence:
1. latent impression
2. inked impressions bearing the name John Doe
The latent impression on Item #1 has been visually examined and found to be of sufficient quality for comparison purposes. The impression on Item #1 was compared to the #2 (right index) finger on Item #2 with a positive result.
John Q. Examiner
Where is the ACE-V process documented there? Who verified that comparison? What quality controls were used, if any? What standards for conclusion were used? I've seen many like this and I just can't bring myself to agree that it's a properly documented "scientific method". Or maybe I'm just totally nuts.

Posted: Thu Aug 23, 2007 5:27 am
by L.J.Steele
mdavis wrote:But (devil's advocate again), we are allowing untrained defense attorneys and judges to decide that we might not be doing our jobs properly.
The courts are charged as gatekeepers to ensure that only good science and qualified experts testify in court. The proponent of the evidence (for forensics, usually the prosecutor) is charged with proving to the court's satisfaction that the evidence is good science and the expert is qualified. The defense's job is to challenge that evidence when appropriate and hold the prosecutor to his or her job. That means that yes, trial judges make decisions about standards for all sorts of technical fields, and it's the proponent's job to help them understand the field well enough to make the right decision.
mdavis wrote:If our courts are now willing to throw the baby out with the bathwater, as they are doing by attacking the methodology rather than the results, that is their loss and a loss to the people they serve.
The problem is that we can't know how reliable the results are, because it is hard to test whether a convicted defendant who asserted his innocence really is guilty, or even whether defendants who plead guilty do so because they are actually guilty or because they have been given a good offer and fear that they'll lose at trial and get a much higher sentence. Listen to Pat W's talk on fabrication cases, and how many defendants in one scheme pled rather than fight a bogus print.
I'm willing to agree that the number of genuine mis-identifications is probably small. But Mayfield and Cowans, and many of the score or so of other mis-ID cases, were discovered almost by accident -- had there been no DNA in the Cowans case, he'd still be in jail. Had the Mayfield case come up in a department that didn't have an alternative suspect and wasn't willing to fight the FBI, he might still be in jail on a material witness warrant. We don't know how big the iceberg of undiscovered mis-IDs is.
The experts have been saying in the Daubert challenges that print evidence is reliable because we use a specific method (ACE-V) that has been determined by experience and study to provide accurate results when applied by properly trained experts. The courts have generally agreed. Which is why the focus is now on training and application of the method. The courts generally don't allow any expert to come in and say "let me testify because my opinion has been right 100% so far" without knowing the method behind that opinion.
mdavis wrote:then the defense attorneys will be "high-fiving" in the law library as their clients are released either because time ran out on the crime, or because a perfectly valid ID was disallowed on a newly manufactured technicality.
I can't speak for all attorneys, but I can say that the usual reaction when there's a published story of a lab scandal or a forensic method found to be flawed is a mixture of disappointment at the failure of the system and vindication that long-fought problems have finally been recognized. (Usually, for scandals, the defense bar has been challenging the lab for a while before someone finally listened. The folks who've been questioning the method likewise have probably been fighting about it and losing for a while.)
Remember that statutes of limitations are generally long. Speedy trial deadlines are shorter, but can still be extended for quite some time before the prosecutor has to go forward with whatever he or she has. Most of my clients, at least, are indigent and can't afford bail in serious cases. That means that while you guys are running your tests, my guy is in jail, unable to test the prosecution's case against him. If he's not in jail, but the police have identified him as a suspect, then he's dealing with public reaction and possible family and job issues, and still can't challenge the case and clear his name. Cold cases on the fringes of the StL are a pain for both sides. Witnesses' memories are no longer fresh and are likely contaminated with all sorts of post-incident information. Exculpatory evidence, if not collected and preserved by the police, is long gone. Alibi witnesses may have moved, or died. Not an easy situation for either attorney.
And set that aside for a second. Defense attorneys are taxpayers too. We want to see the labs use their time efficiently. (Just as you folks, as taxpayers, want to see us use our time efficiently.) We, like you, are keenly aware of the damage crime causes and want to live in safe communities -- so we have an interest in the labs identifying and catching the right bad guys. Our job is to test your evidence, but we don't have any interest in making your job impossible.
I find the defense attorney bashing troubling. For Mass, the local continuing legal education provider (MCLE) frequently runs classes for attorneys on forensics and invites experts in to talk to prosecutors and defense attorneys outside the arena of a specific case, so that we understand more about what you do, and you get to understand what we do. This may be something worth doing more of -- getting folks talking instead of just fighting or regarding each other with mutual suspicion.
Posted: Thu Aug 23, 2007 2:12 pm
by Michele
Norberto,
The impression on Item #1 was compared to the #2 (right index) finger on Item #2 with a positive result.
Where is the ACE-V process documented there? Who verified that comparison? What quality controls were used, if any? What standards for conclusion were used? I've seen many like this and I just can't bring myself to agree that it's a properly documented "scientific method". Or maybe I'm just totally nuts.
A good answer to this question would be like writing a chapter of a book. I’ll attempt to oversimplify it (fully expecting lots of criticism), but I just want to say from the outset that I don’t have the time or energy to fully defend my opinions on this.
“How much is enough?” isn’t only a question about the sufficiency of an ID; it’s been a question in most scientific endeavors for hundreds of years. How many participants are needed for a good statistical study? How long do we need to note observations before we can say a scientific theory has been established? Most scientists have decided that the answer to “how much is enough?” (for anything) is: however much will satisfy others. One scientific rule seems to be that the conclusions of many people have more weight than the conclusion of an individual. This is usually stated as wanting conclusions that others would come to (consensus). But that’s not the only requirement.
There’s no list because each situation is different. Some endeavors need more research, documentation, and testing than others.
The main scientific requirements (that I can think of without opening a book) are:
Is the conclusion justified, or can it be (usually by documentation)?
Is the conclusion repeatable by others, or can it be?
Are other conclusions possible?
Will the conclusion hold up to scrutiny?
Will the conclusion stand the test of time?
As you can see, verification isn’t a scientific requirement. Conclusions just need to be open to verification. The requirement that verification be done is an industry requirement, not a scientific requirement. I’m going to assume that our profession has this requirement because we take our conclusions very seriously, and this is an additional QA measure that we use since people’s lives are on the line.
Documentation also isn’t a scientific requirement, but the conclusion must be documentable if anyone (your reviewer, a peer reviewer, an attorney, etc.) ever wants it.
For conclusions that are simple, yes, it is scientifically acceptable not to show documentation.
Just to show you how this works in other sciences, here’s an example (I’m sure I’ve posted this before). In math (let’s just assume we’re all in agreement that this is a science, even though I know this is arguable), suppose we have the problem 12 x 12 = ?
If I were to tell you the answer is 144, does that require documentation? While in training (elementary school), society (and our federal educational organizations) says ‘show your work’, but after training, when you’re a rocket scientist, documentation isn’t usually required for simple conclusions like this. BUT if anyone doubted the conclusion (‘anyone’ does mean ‘anyone’, not only people trained to competency), then the rocket scientist would be required to show the methodology, the principles, and the procedures.
He may have used normal multiplication, he may have added the number 12 together 12 times, or he may have multiplied 12 by 6 and then doubled it. Someone may ask him how he tested his conclusion, and he might say he used a calculator. These are all valid methods of arriving at and/or testing a conclusion. But I can guarantee you that when the engineers (applied scientists) at NASA give their calculations for how much fuel is needed to get the space shuttle off the ground, they do not need to document the obvious.
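To make that concrete, here’s a toy sketch (purely illustrative, in Python, nothing an examiner would actually write) of the same simple conclusion reached by those independent methods:
[code]
# Toy illustration only: one simple conclusion, three independent routes to it.

def by_multiplication():
    return 12 * 12

def by_repeated_addition():
    # add the number 12 together 12 times
    return sum(12 for _ in range(12))

def by_doubling():
    # multiply 12 by 6, then double it
    return (12 * 6) * 2

results = [by_multiplication(), by_repeated_addition(), by_doubling()]
assert len(set(results)) == 1  # every method agrees on 144
print(results)  # [144, 144, 144]
[/code]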
Am I overly simplifying this? Sure I am, but doing fingerprint comparisons isn’t rocket science. For the most part, I think it is a pretty simple process.
One more comment (so much for simple): even though this isn’t a scientific requirement, that doesn’t mean documentation isn’t a good personal standard, agency standard, or industry standard.
Posted: Thu Aug 23, 2007 6:24 pm
by L.J.Steele
[quote="Michele Triplett"]when you’re a rocket scientist, then documentation isn’t usually required for simple conclusions like this.
* * *
But I can guarantee you that when the engineers (an applied scientist) at NASA give their calculations for how much fuel is needed to get the space shuttle off the ground, they do not need to document the obvious. [/qutote]
Do you recall the loss of the Mars Climate Orbiter probe due to a simple inconsistency in unit conversions?
http://mars.jpl.nasa.gov/msp98/orbiter/
Even the obvious isn't so obvious when assumptions come into the picture.
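To make the orbiter example concrete, here's a toy sketch (hypothetical code, nothing to do with the actual flight software) of how an undocumented unit assumption slips past review:
[code]
# Toy sketch of the Mars Climate Orbiter failure mode: one program reports
# thruster impulse in pound-force seconds, another assumes newton-seconds.

LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds

def reported_impulse():
    # ground software output; undocumented assumption: units are lbf*s
    return 100.0

def update_trajectory(impulse_n_s):
    # navigation software; expects newton-seconds
    return impulse_n_s  # stand-in for the real calculation

raw = reported_impulse()
wrong = update_trajectory(raw)                 # assumption left implicit
right = update_trajectory(raw * LBF_S_TO_N_S)  # assumption made explicit
print(wrong, right)  # the two differ by a factor of ~4.45
[/code]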
The problem with comparing prints to arithmetic is that numbers are pretty unambiguous. Absent dyslexia, everyone will read the same numbers the same way. And some print comparisons are probably that simple. The problem appears if, for example, one were trying to reconstruct my household checkbook (in my husband's and my chicken-scrawl handwriting) after it had been thru the washing machine a few times. At that point, the expert would have to make some judgment calls about what number certain blurry marks represented. One hopes a verifying expert would see the marks the same way, but he could be influenced by the sorts of context bias that one hopes to reduce (or at least minimize) by contemporaneous documentation by the original examiner and/or having the second examiner work blind to the first's assumptions and conclusions.
To go to Norberto's example, I see that kind of report fairly frequently. I'd be happier with a report that:
Affirmed that the examiner used the ACE-V method, and included the name of the verifier and when both the original conclusion and the verification occurred. (Would take another sentence in the report.)
Mentioned how the examiner obtained the exemplar -- suspect came from AFIS, from examiner's recall of similar case, from name provided by police, etc. (Again, could be a sentence or a set of check boxes.)
Had a space for the examiner to mention if he or she had any unusual information about the case prior to making the report (knew that the suspect had confessed, or had a history of similar offenses, or that there was a witness ID, or a DNA match, FREX). Wouldn't have to be a novel, just a couple of sentences. Ditto for the verifier.
Certainly, I wouldn't mind more about the things the examiner found significant and how he/she accounted for dissimilarities, but I understand that may not be practical for all comparisons. (A rough sketch of how those fields might be captured follows below.)
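For what it's worth, here's a hypothetical sketch (my own invention, not any lab's actual form) of how those fields might be captured as a structured record -- a few names, dates, and check-box style entries rather than a novel:
[code]
# Hypothetical record layout for the report fields suggested above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ComparisonReport:
    method: str = "ACE-V"                # affirms the method used
    examiner: str = ""
    verifier: str = ""
    conclusion_date: str = ""
    verification_date: str = ""
    exemplar_source: str = ""            # AFIS hit, police-supplied name, etc.
    examiner_context: List[str] = field(default_factory=list)  # unusual case info known
    verifier_context: List[str] = field(default_factory=list)
    notes: Optional[str] = None          # significant features, dissimilarities

# Example with hypothetical names:
report = ComparisonReport(
    examiner="John Q. Examiner",
    verifier="Jane R. Verifier",
    conclusion_date="2007-08-20",
    verification_date="2007-08-21",
    exemplar_source="AFIS candidate list",
)
[/code]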