
Houston, we have a problem... with Houston

Posted: Wed Nov 15, 2023 8:31 am
by Boyd Baumgartner
Who shaves the barber? - Bertrand Russell

Bertrand Russell, British mathematician and philosopher, in challenging the naive set theory of his day, showed that it suffered from a self-referential paradox. Namely, the set of all sets that do not contain themselves must contain itself if and only if it does not, so no such set can exist.

It was stated as follows:
"Imagine a town where there is a rule: everyone has to be clean-shaven, and the barber is the only person who can shave you. But the barber has a special rule: he only shaves people who do not shave themselves.

So, everyone in the town has to be clean-shaven, but no one can shave themselves.

So, who shaves the barber?"
This is a paradox: if the barber shaves himself, he violates his own rule, and if he does not, the rule requires that he does. It is a situation that cannot exist.

This same paradox rears its head from time to time, reformulated not in the domain of mathematical set theory but in the socio-political sphere. No doubt you’ve read a newspaper headline asking something like ‘Who polices the police?’ And now it seems that this dilemma has spilled over into the socio-forensic disciplines.

Who examines the Examiner?

These questions pose the same problem as Bertrand Russell’s Paradox of the Barber, an infinite regress. It takes sense and turns it into nonsense very quickly. Let’s take a look...

If I say, ‘I’m an expert’, then who is qualified to judge my expertise? And if you admit that there is someone qualified to judge my expertise, who is qualified to judge the expertise of the expert on expertise? I won’t belabor the example, but you get the point: sense-making becomes nonsensical very quickly. This is the essence of infinite regress. It poses solutions that cause problems that need solutions that cause problems that never resolve. In the case about to be discussed we’d call this a grift, because it involves a stream of money and prestige in which the same people causing problems offer solutions, write papers, and form committees, which find new problems to which they offer solutions, write papers, and form committees, blissfully unaware that there are people laughing at the emperor’s proverbial new clothes.

Considering this is a post about fingerprints, let’s get to it. In 2009, it came to light that the Houston Police Department’s (HPD) fingerprint unit was having some problems, some of them potentially criminal.

https://www.chron.com/news/houston-texa ... 622998.php

Houston hired an outside auditor, a company by the name of Ron Smith and Associates (RSA/RS&A). The auditor was hired to check the work of the people thought to be involved in problematic casework. The scope of the contract was later extended to work down a backlog of about 6,000 cases, to the tune of around 5 million dollars.

The written findings of the audit were published by the City of Houston.

https://www.houstontx.gov/police/audit/ ... tPrint.pdf

The findings of the initial audit state:
“It should be noted that, generally speaking, the most significant error which can be found in friction ridge comparisons is an “erroneous identification”, which is the incorrect determination that two areas of friction ridge impressions originated from the same source."

The findings go on to say:
“Based upon the previously established criteria, there were however a significant amount of technical errors which may, or may not have had an impact on the investigations which were represented by these cases”

The technical errors as defined by Ron Smith and Associates included:
“Cases which were reported as not being ‘sufficient for further analysis’, when in fact they did indeed contain latent prints which were sufficient for comparison purposes.” (we’ll come back to this)

It appears that the auditor examines the examiner.

Who audits the Auditor?

During the period in which Ron Smith and Associates was performing backlog reduction, they were asked to compare prints as part of a review of a cold case from 2001. That case ultimately went to appeal based in part on the fingerprint evidence, and later resulted in an ethics investigation.

A longer version of the case synopsis given as part of the appeal can be found here:

https://cases.justia.com/texas/first-co ... 1499127228

An abbreviated synopsis of the case is as follows:

2001: A woman known to be a prostitute was murdered; her body was found in an alley with what were presumed to be bloody fingerprints both on and near it. The Houston Police Department responded to the scene and was able to enhance and preserve the latent prints via photography, as well as preserve the portion of the actual item that contained the palm prints (a metal pole). Fingernail clippings and semen stains were also preserved by the Harris County Medical Examiner. HPD investigated the case and exhausted the leads with no viable suspects.

2006: DNA from the fingernail clippings was a mixed sample from two males, with a major and a minor contributor. DNA from the semen had a single contributor. The DNA profiles were created by an outside laboratory.

2009: CODIS was searched using the DNA profiles and two individuals were identified: Joseph Webster from the mixed sample taken from the fingernail clippings, and Lorenzo Jones from the semen stains. Both individuals admitted to soliciting prostitutes but denied being involved in the murder.

2010: The HPD Cold Case Unit requests that the HPD Latent Print Unit compare 51 individuals, including the 2 individuals developed as part of the CODIS hits. No identifications were made.

2011: Ron Smith and Associates, as part of the contract work for the City of Houston, is asked to re-examine the prints and compare them against the 51 individuals again. No identifications were made.

2012: Detective Holbrook, the original detective on the case in 2001, returns to HPD Homicide, re-reviews the case and, per the appeal, “instructed Ron Smith to reexamine their prints, believing that the bloody print was of sufficient quality to render an identification”. (We’ll come back to this statement as well.) Ron Smith and Associates find similarities, ask for better quality prints from Webster, and subsequently make an identification.

2013: HPD asks for the poles bearing potential blood and bloody prints to be processed for 1) confirmation of blood and 2) DNA samples. Blood was not confirmed, nor were DNA profiles obtained.

2015: A DNA profile taken from Webster was compared to the minor-contributor DNA profile developed from the fingernail clippings in 2006. Webster could not be ruled out as a contributor, with the chance that the DNA profile came from a random person at about 0.43% (1 in 230).

2015: A new DNA profile is obtained from the fingernail clippings using an advanced testing technique and compared to Joseph Webster. The result is that Webster could not be ruled out as a contributor, with the chance that a random person could have the same DNA profile at about 0.0000015% (1 in 68 million).
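
(A quick sanity check on those ‘chance’ figures, which are just the ‘1 in N’ probabilities expressed as percentages. A minimal Python snippet, with the N values taken from the appeal synopsis above:)

    # Convert the "1 in N" random-match figures above to percentages.
    for n in (230, 68_000_000):
        print(f"1 in {n:,} = {100 / n:.7f}%")
    # 1 in 230        = 0.4347826%  (about 0.43%)
    # 1 in 68,000,000 = 0.0000015%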

As a result of the DNA found underneath the victim Herbert's fingernails and the identification of Webster as the source of the bloody palm print, Webster was indicted and tried for Herbert's murder.

During the trial, the State presented evidence that:

• Herbert was a prostitute who worked downtown, and that Webster admitted to possibly having sex with her;

• Herbert's body was found in a narrow and secluded gated alleyway that would not normally be used by the public—a location where someone might reasonably be expected to lure a prostitute he intended to kill;

• the DNA underneath Herbert's fingernails was Webster's—indicating that Herbert attempted to defend herself from Webster or at least had direct physical contact with him before she died;

• the palm-print found next to Herbert's bloody body was Webster's—placing Webster at the scene of the crime, crouched right in front of the narrow stairwell and Herbert's body; and

• the palm-print was made in blood—indicating that Webster moved or had some sort of contact with Herbert's body after she was dead.

Webster was convicted.

Upon appeal, Webster’s counsel argued that the evidence was insufficient to convict. Specifically, as it relates to the latent print evidence, Webster argued that the HPD examiners and the RSA analysts suffered from bias, since they missed the identification the first time and only made an identification after they were allegedly aware of the DNA hit to Webster.

As a result of the allegations of bias by Webster’s counsel, a complaint was submitted to the Texas Forensic Science Commission (TFSC), and an ethics investigation followed. The complaint was accepted for investigation at the Commission’s 4/21/2022 meeting.

The Commission offered to:
“evaluate the integrity and reliability of the forensic analysis, offer recommendations for best practices in the discipline of latent print analysis, and make other appropriate recommendations.”

https://www.txcourts.gov/media/1455318/ ... 22-1-1.pdf

This is what the Commission is tasked with doing by law, although a more thorough reading of that task would demand more ethical rigor. As written, it comes across more as an ethnography than an investigation.

https://www.txcourts.gov/media/1440362/sb01287f.pdf

The Commission is charged with the preparation of a written report that contains:
(1) observations of the commission regarding the integrity and reliability of the forensic analysis conducted;
(2) best practices identified by the commission during the course of the investigation; or
(3) other recommendations that are relevant, as determined by the commission.
It appears the Commission audits the auditor.

Who does the Commission commission?

The TFSC had provided relatively scant information on the state of the ethics complaint, other than brief updates in the quarterly meeting minutes posted on its site. One such document, from a 09/15/2022 meeting, states:
“The Commission will contract with Friction Ridge expert Glenn Langenburg to assist the Commission in the development of its report that will highlight key issues and recommendations that address the historical background and evolution of the friction ridge discipline, identify methods for avoiding cognitive bias in the discipline, and recommend changes for a positive impact on the field in Texas and nationwide.”

https://www.txcourts.gov/media/1455833/ ... 1522-3.pdf

Then, on 10/20/2023, the Forensic Science Commission released a video of its quarterly meeting discussing a forthcoming 73-page full report on the complaint. In the video they thank:
  • Glenn Langenburg
  • HCPDO
  • HCDAO
  • HFSC
  • Henry Swofford
  • RS&A
  • OSAC Program Office
  • ANAB
  • Texas DPS Latent Print Advisory board

https://youtu.be/SeXGM6bFqA8?si=GADf88b7LLmDT4ek

It appears that the Commission commissions experts. It seems we have come full circle.

How do we investigate the ethics of an ethics investigation?

Since we’ve come full circle with the inclusion of new experts who have been brought in to scrutinize the ethics of the original experts, we’re in infinite regress. Who will scrutinize the ethics of these experts? Let’s put the microscope on the experts before we address the complaint. Surely, we wouldn’t want an ethics investigation to have compromised ethics, right?

Glenn Langenburg teaches for Ron Smith and Associates. Someone who gets paid by an agency under investigation, and who would stand to lose payment or prestige, presents at the very minimum the appearance of an ethical dilemma. He does, however, have at least 13 years of latent print examination under his belt working in an agency.

https://www.ronsmithandassociates.com/p ... urg_CV.pdf

Henry Swofford is more of a bureaucrat than a bench examiner. If you look at his résumé, it’s not apparent that he has any significant examination experience, with the majority of his forensic examination experience coming from blood alcohol content testing. While he has held chair positions on standards boards, held management positions, and pursued research, he seems like an odd pick for an actual analysis.

https://dfs.dc.gov/sites/default/files/ ... ord_CV.pdf

This is evident in a leaked document associated with the case, which states:
“Henry Swofford did an informal review of the palm print on the pole and could not identify it as belonging to Webster.”

Why wouldn’t Swofford perform a detailed analysis of the print? And why would his inability to identify the print be indicative of anything? The answer, as stated above, is that this is not a person who has spent any significant time looking at fingerprints, let alone complex ones in dispute. How he qualifies for, let alone holds, certifications from the International Association for Identification in Latent Prints, Footwear and Crime Scene Investigation is a discussion for another time, but it would seem that the IAI is little more than a puppy mill for forensic certifications.

Mr. Swofford also seems to be at odds with himself over what he deems appropriate techniques. From the leaked document:
“Mr. Swofford analyzed the quality of the palm print itself using the newly adopted best practice recommendations made by the OSAC Friction Ridge Subcommittee in September 2020. He took a copy of the latent print and feature markings of the original RSA examine[r] and analyzed the quality using LQMetric software.”

Compare what he did, namely use LQMetric software, with what he has published about LQMetric software. According to his Ph.D. thesis, LQMetric is:
“...geared entirely towards optimizing or predicting AFIS match performance rather than focused on assessing local ridge clarity (discernibility of feature data) and predicting human performance using image quality attributes considered by human analysts during manual comparisons. Consequently, these types of predictive models are often based on the aggregate of qualitative and quantitative attributes of the entire impression to provide a single estimate of utility or quality. These approaches often lack transparency and often do not necessarily correspond to the same features considered by human analysts during traditional examinations. The motivation behind this focus is largely driven by industry desires to optimize the performance of AFIS in a “lights-out” environment. "
https://serval.unil.ch/resource/serva ... 01/REF.pdf

He’s saying two things here. First, a holistic metric is inappropriate because the algorithm measures AFIS viability, which stands in opposition to what examiners look at, namely individual features. Second, such tools and models are biased towards the performance of a product, not towards assessing the quality of a print, which is what the tool purports to do. However, the thesis is also offering a product, one that directly competes with LQMetric. Would it be fair to characterize Swofford as also having a motivation, an industry desire to optimize his proceeds from the licensing of a software product?
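
To make the distinction concrete, here is a minimal Python sketch, not LQMetric’s actual algorithm, of why a single holistic quality score can hide exactly the local clarity information a human examiner works from. All values are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented per-block "clarity" values for an 8x8 grid of image regions (0..1).
    clarity = np.clip(rng.normal(0.8, 0.05, (8, 8)), 0, 1)
    clarity[5:, 5:] = 0.15  # one badly smeared corner of the impression

    holistic_score = clarity.mean()           # one number for the whole print
    weak_blocks = int((clarity < 0.4).sum())  # what a local assessment reveals

    print(f"holistic score: {holistic_score:.2f}")  # still looks acceptable
    print(f"blocks below usable clarity: {weak_blocks}/64")

The aggregate number stays comfortably high while a corner of the impression is effectively unreadable, which is the substance of the thesis’s criticism.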

Salesmanship is not a new concept when it comes to Swofford and Langenburg, however. Both have a financial incentive beyond their use as consultants in this case. Each has a statistical model which, should it be used to reconcile a dispute, would act as a pseudo-validation of that model, which could then be incorporated into a ‘best practice’ paradigm. That is to say, there is something to be gained by means of bureaucracy, not science.

Swofford’s model (FRstat): https://forensiccoe.org/statistical-int ... ns-frstat/

Langenburg’s model (Xena): https://assets.publishing.service.gov.u ... ite_up.pdf
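
For readers unfamiliar with what such models do: both belong, broadly speaking, to the family of score-based probabilistic approaches. A generic, hedged sketch of that family in Python, with invented distribution parameters that reproduce neither product’s actual math nor data:

    from statistics import NormalDist

    # Invented score distributions: similarity scores for same-source (mated)
    # and different-source (non-mated) print pairs.
    mated = NormalDist(mu=0.80, sigma=0.10)
    non_mated = NormalDist(mu=0.30, sigma=0.12)

    def likelihood_ratio(score: float) -> float:
        """How much more probable the observed score is if the prints share a source."""
        return mated.pdf(score) / non_mated.pdf(score)

    for s in (0.35, 0.55, 0.75):
        print(f"similarity score {s:.2f}: LR = {likelihood_ratio(s):,.1f}")

The dispute in the field is not over this arithmetic but over whether the score distributions feeding it have been validated, which is exactly the point of the public comment quoted below.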

There have been accusations of pseudoscience and snake oil between the two camps previously.

https://docplayer.net/170998794-Defence ... rstat.html

Furthermore, given Swofford’s seat on the Friction Ridge Subcommittee and on the AAFS Academy Standards Board, his influence in defining ‘best practices’ is problematic at best. Is it any surprise that statistical models have made their way into the conversation, even though no model has been validated for use or demonstrated to produce more accurate conclusions?

Even the latent print community has grown skeptical of what are, in effect, bureaucrats pushing agendas for personal gain. In a recent request-for-comments period run by the AAFS Academy Standards Board, the community pushed back when statistical models were proposed:
“May use statistical or probabilistic systems is meaningless at this point in time. Does this mean FRStats or Xena? Could it mean an 8 point standard versus a 12‐point standard? Until a published, peer‐reviewed, validated, and accepted "probabilistic system" has been recognized, it should not be left wide open to interpretation.” (pg 27 of 55)

https://www.aafs.org/sites/default/file ... und01.pdf

Did you catch that? A standard was being proposed that was so vague that non-validated statistical models, developed by people with influence over the standards board, could be admitted. This is not science or best practices; it’s lobbying.

So an investigation into an alleged ethics violation by experts reveals, at minimum, the potential for an ethics violation by the investigating experts themselves through perverse incentives, namely personal gain even at the expense of truth.

Complaints about the complaint

Next, we need to look at the complaint. We already know that the complaint alleged the conclusion reached was not reproducible and therefore not reliable, and that there was bias based upon the DNA hit to Webster.

It’s out of fashion to blame examiners and fire them for mistakes. If we’ve learned anything from past high-profile errors like the Brandon Mayfield case, they will instead blame 1) the images, 2) the process, and 3) a boogeyman like bias: unmeasurable, but somehow in everything, everywhere, all at once, and only ever retroactively inferred.

From the Mayfield case:

The images:
“The FBI issued a rare public apology after Mayfield’s release — but maintained the error was due to the low resolution of the print.”

https://www.nbcnews.com/id/wbna6505083

The process:
“Second, the OIG examined whether the FBI's verification procedures contributed to the error. FBI procedures require that every identification be verified by a second examiner.”

https://oig.justice.gov/sites/default/f ... /final.pdf (pg 10)

The boogeyman:

Everything:
“However, whether Mayfield's religion was a factor in the Laboratory's failure to revisit its identification and discover the error in the weeks following the initial identification is a more difficult question.” (pg 12)
Everywhere:
“However, under procedures in place at the time of the Mayfield identification, the verifier was aware that an identification had already been made by a prior FBI examiner at the time he was requested to conduct the verification. Critics of this procedure assert that it may contribute to the expectation that the second examiner will concur with his colleague.” (pg 10)

All at once:
“The OIG found that a significant cause of the misidentification was that the LPU examiners' interpretation of some features in LFP17 was adjusted or influenced by reasoning "backward" from features that were visible in the known prints of Mayfield. This bias is sometimes referred to as "circular reasoning," and is an important pitfall to be avoided.”

https://oig.justice.gov/sites/default/f ... final.pdf

So what do we see in the ethics investigation?

The images:

According to the quarterly meeting:
“TFSC staff forwarded the complaint to RS&A, and they raised questions about the quality of images the blind examiners were given”
And
“RS&A maintained ‘any competent examiner’ would reach an identification conclusion so there must have been an issue with the images the blind examiners used"

We know that there were two sets of images of the latent prints: film photographs taken at the scene of the crime, and digital images taken by the DEA, which had the actual poles on which the prints were found.

The TFSC’s presentation offers something of a red herring when it states:
“The other set of photos of L-1 were taken by the DEA using digital photography. Digital photography was in its infancy at the time and the images were taken at a low resolution 384 ppi. (Today’s standard is 1000 ppi)"

This statement betrays ignorance of digital imaging, however, and represents the problem with an appeal to ‘best practices’ when those practices are made by fiat and not evidence. Let’s take a look. Since the TFSC thanks the Organization of Scientific Area Committees for Forensic Science (OSAC), we’ll go to OSAC’s digital imaging standards and look at what they say about resolution:
“5.1 The procedure described in this document is in accordance with current SWGFAST guidelines (6), as well as National Institute of Standards and Technology (NIST) standard (7), which specify 1000 pixels per inch (ppi) at 1:1 as the minimum scanning resolution for latent print evidence. This standard appears primarily to be historical and directed towards scanners, rather than cameras, though recent studies suggest that it is suitable for capturing Level 3 detail (8).”

And
“5.2 While the 1000 ppi resolution standard permits the capture of level three detail in latent prints, it does not mean that any image recorded at a lower resolution would necessarily be of no value for comparison purposes. Such an image could have captured level two details sufficiently for comparison.”

https://compass.astm.org/document/?cont ... -US&page=1

So, the standard is historical, pertains to scanners, and is aimed primarily at capturing level three detail, which would most likely not even be present in a viscous blood print anyway.
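
Some rough arithmetic makes the point. Assuming a typical ridge period of about 0.5 mm and pore-scale (level three) features of about 0.1 mm, common textbook figures rather than case measurements, a Python back-of-the-envelope:

    MM_PER_INCH = 25.4

    def pixels_per_feature(ppi: float, feature_mm: float) -> float:
        """How many pixels span one feature of the given size at a given resolution."""
        return ppi * feature_mm / MM_PER_INCH

    for ppi in (384, 1000):
        ridge = pixels_per_feature(ppi, 0.5)  # level 2 scale (ridges, minutiae)
        pore = pixels_per_feature(ppi, 0.1)   # level 3 scale (pores, edge shapes)
        print(f"{ppi:4d} ppi: {ridge:4.1f} px per ridge period, {pore:3.1f} px per pore")

At 384 ppi a ridge period still spans roughly 7 to 8 pixels, ample for level two detail, while pore-scale features get barely more than a pixel, which is why the 1000 ppi figure matters for level three detail and little else.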

This is an appeal to historicism and not science.

The process:

The TFSC presentation states:
“The report emphasizes the importance of a linear sequential approach to the ACE-V process in which documentation of the features in the questioned impression occurs prior to an examination of a known exemplar.”

It’s fashionable to say this, and it was stated both in the OIG report on the Mayfield error and in the PCAST report, but is it true? Like, based in science, true?

This very topic was researched in 2015 in a paper titled ‘Changes in latent fingerprint examiners’ markup between Analysis and Comparison’.

The paper admits:
“.... the details of how to document analysis and comparison are mostly unspecified, and SWGFAST's standards are unenforced, leaving the details to be sorted out by agency standard operating procedures or by the examiners’ judgments.”

The paper’s discussion finds neither a correlation between changes in markup and error, nor an error rate different from that of the black box study:
“We observed frequent changes in markups of latents between the Analysis and Comparison phases. All examiners revised at least some markups during the Comparison phase, and almost all examiners changed their markup of minutiae in the majority of comparisons when they individualized. However, the mere occurrence of deleted or added minutiae during Comparison is not an indication of error: most changes were not associated with erroneous conclusions; the error rates on this test were similar to those we reported previously” [22]

https://noblis.org/wp-content/uploads/2 ... _Final.pdf

The footnote associated with that quote is from a study titled ‘Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners’.

https://journals.plos.org/plosone/artic ... ne.0032800

The boogeyman:

There is a sleight of hand in the complaint and the investigation if you look closely, namely in the allegation of bias. The allegation states that the bias is due to Webster being identified via a DNA hit, and NOT due to a non-blind verification or to circular reasoning via the exemplars, as was the allegation in the Mayfield error. There are two problems with this. First, RS&A missed the identification the first time, when Webster was included among the 51 subjects. If the inclusion of Webster was biasing, then the implications are twofold: 1) any investigative lead into a suspect developed by a detective is biasing, and 2) overwhelming the examiner with 51 names was apparently more biasing still, because the identification was missed. Neither of these issues is addressed, nor is the mechanism by which getting a DNA hit suddenly makes minutiae appear that weren’t there.

Additionally:
“The complaint incorporated information that the crime scene mark was submitted to other examiners along with relevant known exemplars in blind examinations and the blind examinations reached an ‘inconclusive’ conclusion” stating that, “The blind examiners include an independent latent print examiner and two HFSC latent print examiners”

The TFSC presentation says that they took into consideration some national reports on forensic science and ‘pertinent empirical research in friction ridge examination’ such as the Noblis black box study. So what does ‘the pertinent empirical research’ say about blind verifications?

Before we get to the Noblis study, what does Glenn Langenburg’s own research into bias state?
“The results showed that fingerprint experts were influenced by contextual information during fingerprint comparisons, but not towards making errors. Instead, fingerprint experts under the biasing conditions provided significantly fewer definitive and erroneous conclusions than the control group.”

https://onlinelibrary.wiley.com/doi/abs ... 09.01025.x

Unless the blind examinations were provided in the context of an unworked case (a near impossibility given that the case was decades old and would have included film, CDs, old reports, etc.), the examiner would have been given a comparison they knew would be scrutinized, which itself introduces bias. According to Langenburg’s own research, when this happens, ‘significantly fewer definitive and erroneous conclusions’ are the result. Read: ‘inconclusive’.

This is exactly what the blind examinations found, meaning it is just as likely that the ‘inconclusive’ results reflect a methodological error on the TFSC’s part in introducing blind examinations in this fashion, and not necessarily a problem with RS&A or the print.

Further highlighting the deficiency of the investigation is the following research, which shows that collaboration reduces bias in fingerprint conclusions, as opposed to pitting conclusions against each other, examiner versus blind examiner, and thereby reifying reproducibility. Pooling decisions reflects a wisdom-of-the-crowds approach to complex comparisons.

From the article:
“That is, the pooling of decisions systematically decreases the number of situations where decision-making agents disagree and increases the number of situations where they agree. Pooling decisions thus also reduces outcome variation between decision-making systems at the level of individual cases.”

https://www.sciencedirect.com/science/a ... via%3Dihub
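
The effect is easy to demonstrate. A minimal Monte Carlo sketch in Python, with an assumed per-examiner accuracy that is purely illustrative and not a figure from the cited article:

    import random

    random.seed(1)
    P_CORRECT = 0.85  # assumed per-examiner accuracy on a difficult comparison
    TRIALS = 10_000

    def examiner() -> bool:
        """One examiner's conclusion on a same-source pair (True = correct)."""
        return random.random() < P_CORRECT

    def panel(n: int = 3) -> bool:
        """Majority vote of n independent examiners."""
        return sum(examiner() for _ in range(n)) > n // 2

    solo = sum(examiner() != examiner() for _ in range(TRIALS)) / TRIALS
    pooled = sum(panel() != panel() for _ in range(TRIALS)) / TRIALS

    print(f"two solo examiners disagree:  {solo:.1%}")    # ~25% at these settings
    print(f"two 3-person panels disagree: {pooled:.1%}")  # ~11% at these settings

Pooling doesn’t make any individual a better examiner; it makes the system’s output less dependent on which individual happened to look at the print, which is precisely the reproducibility the complaint was worried about.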

Even the previous research cited above, the Noblis study on repeatability and reproducibility, recognizes that:
“Blind verification can be expected to be effective in detecting most errors and flagging debatable decisions, and should not be limited to individualization decisions.”

And
“Examiner assessments of difficulty may be useful in targeted quality control, which could focus on difficult decisions: operating procedures could provide means for an examiner to indicate when a particular decision is complex. Quality control measures, however, should not focus solely on difficult decisions, since even easy or obvious decisions were not always repeated or reproduced.”

And
“Metrics derived from the quality and quantity of features used in making a decision may assist examiners in preventing mistakes, and in making appropriate decisions in complex comparisons. Such metrics may be used to flag complex decisions that should go through additional quality assurance review and in arbitration of disagreements between examiners.”

And
“Procedures for detailed documentation of the features used in analysis or comparison decisions could be used to assist in arbitrating inter-examiner disagreements at the feature level.”

Lastly
“Repeatability and reproducibility are useful surrogate measures of the appropriateness of decisions when there is no “correct” decision, as when deciding between individualization and inconclusive. The reproducibility of decisions has operational relevance in situations where more than one examiner makes a decision on the same prints. Reproducibility as assessed in our study can be seen as an estimate of the effects of blind verification– not consulting or non-blind verification.”

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3299696/


The last quote is especially pertinent given that the TFSC explicitly states in the presentation:
“Important to note neither the complaint nor the report allege the identification was wrong, as we do not have ground truth for the sample”.
Simply stated, there is no ‘correct’ decision when deciding between individualization and inconclusive, exactly as stated in the study above. If ever there were a prescription, this is it, and the TFSC and their ‘experts’ failed miserably.

Based upon best practices informed by the research, blind examinations were an inappropriate investigative method here.

The Final Analysis:
“Incompetence annoys me. Overconfidence terrifies me.” - Malcolm Gladwell

The TFSC stated that the result of their investigation was to:
“...identify methods for avoiding cognitive bias in the discipline and recommend changes for a positive impact on the field in Texas and nationwide.”

A couple of things to note. The TFSC is not a standards board or a research arm, nor does it have any means of funding research. Furthermore, if these goals were attainable, you would think they would be requirements of accreditation or of the licensure of experts in the state of Texas, considering that the TFSC has been responsible for both since 2015. Lastly, that the TFSC thinks it could actually accomplish such discipline-wide goals while being clearly illiterate in the current scientific literature, lacking funding for additional research, and choosing to drape itself in snake-oil experts and fashionable policy prescriptions comes across as pathological.

It would seem that the TFSC is more concerned with the appearance of competence than with its actual practice. Not even mentioned in the presentation is the fact that the cold case detective is stated to have told Ron Smith and Associates that the impression could be identified. If bias was a concern, having police staff dictate what is and is not of identification value should be alarming, especially given that the TFSC is in charge of licensing experts. The appeals record states it explicitly:

2012: Detective Holbrook, the original detective on the case in 2001, returns to HPD Homicide, re-reviews the case and, per the appeal, “instructed Ron Smith to reexamine their prints, believing that the bloody print was of sufficient quality to render an identification”. Ron Smith and Associates find similarities, ask for better quality prints from Webster, and subsequently make an identification.

Additionally, we know an error was made somewhere, as this is the very type of error that Ron Smith and Associates documented in its audit of the Houston Police Department:
“Based upon the previously established criteria, there were however a significant amount of technical errors which may, or may not have had an impact on the investigations which were represented by these cases”

The technical errors as defined by Ron Smith and Associates included:
“Cases which were reported as not containing any latent print identifications in which there was a latent print identification” (pg 4)

In summary, an ethics investigation sidestepped the question of truth by hiring experts with questionable credentials and perverse incentives to ignore the science, in order to publish a self-congratulatory work that overlooks deficits the Commission has the authority to change through accreditation, and that will have zero effect on the fingerprint industry other than contempt for those who wrote it. While it comes across as the house that Jack built, it is really a house of cards.

When magical thinking is involved, anything is possible.

Re: Houston, we have a problem... with Houston

Posted: Thu Nov 16, 2023 1:09 pm
by NRivera
This right here is why this board has died. Let's just toss out all the standards and recommendations. Let it be the wild, wild west again. Who needs accreditation? It's all just a litany of made-up lies so people can be on committees and make money (on government owned software that can't be sold). You just have to trust me because I'm the expert, plus I'm not a PhD so I'm better than those guys. Give me a f()$% break.

Re: Houston, we have a problem... with Houston

Posted: Thu Nov 16, 2023 1:56 pm
by Boyd Baumgartner
*OSAC has entered the chat*

Re: Houston, we have a problem... with Houston

Posted: Thu Nov 16, 2023 4:48 pm
by ER
@NRivera

Love to see posts that speak truth.
Best response on this site in the past decade. ❤️

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 5:55 am
by josher89
I'm pretty sure that when Daubert v. Merrell Dow was decided back in 1993, it was determined that the judge is the gatekeeper of experts. They decide if the expert can testify to facts that will assist the trier of fact. So, a judge examines the expert, maybe even performs a voir dire of the expert right then and there, and they decide.

If you are throwing shade around, might as well throw it on judges as well. I think a lot of your words border on, if not cross into, the realm of slander. I'll leave it up to those you mentioned to see if they feel the same way.

NRivera is right: if our discipline is fracturing from within, and the naysayers are complaining about everything that is wrong, that is the real problem. Why not offer improvements? Why not get on committees to help effect change? Why? Because it's easier to sit on the sidelines and complain.

Oh, but "committees don't take public comments seriously and they aren't bound to address public comments." In fact, those committees are bound to address all public comments. It's just that sometimes the comments don't have merit, or don't address the spirit or intent of the document, so the document isn't changed. In other words, just because I pray to God for a miracle doesn't mean I'm going to get one. His answer might be "No".

To be honest, who cares who shaves the barber? Maybe his wife is into beards.

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 6:11 am
by ekuadam
I just want to add, you can like or respect Henry, or not, that’s up to you. To say he doesn’t have much latent print experience is far from the truth. Look at his resume that you posted. Five years with the GBI processing latent prints. Six years at the army crime lab as a latent examiner and research coordinator. Four years as the manager of the section. If my math is correct, that is 15 years in latent prints. The only part of his CV that shows anything about blood alcohol is 3 years.

So again, you can like him or not, but that part of your story is very incorrect. He is a very good examiner and very knowledgeable about latent prints.

Also, the final TFSC report hasn’t been released, so why not wait to get all of the facts about the situation before you complete this article?

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 10:58 am
by Boyd Baumgartner
NRivera wrote: "This right here is why this board has died. Let's just toss out all the standards and recommendations. Let it be the wild, wild west again. Who needs accreditation? It's all just a litany of made-up lies so people can be on committees and make money (on government owned software that can't be sold). You just have to trust me because I'm the expert, plus I'm not a PhD so I'm better than those guys. Give me a f()$% break."
There really couldn’t have been a better response. Why? Because this is exactly the type of toxic leadership that’s emerged in the discipline. You can tell by the form it takes. They judge themselves by their intentions, and you by the way you make them feel. This is called emotional reasoning. Notice what results from it: actual tantrums by grown adults. Noticing there is a problem becomes the problem.

Having been 3 years old myself at one time, I can empathize with where Noberto is coming from.

Notice that in my post I didn’t claim to be some fingerprint Jesus, savior of the discipline. That is the position that you who are on OSAC have taken. Why would I want to be a part of OSAC when it’s clearly full of emotionally immature, toxic and ineffectual people?

And what is the result of all of this bloviating by our fingerprint saviors?

Credentialism

Credentialism is the ultimate in the emperor’s new clothes.

Credentials are only as good as the rigor that produces them and the integrity of the people that hold them.

Case in point: Noberto Rivera (nrivera) is on OSAC, and OSAC, in addition to producing standards, has Standards of Conduct. And just as my original post is about ignoring scientific standards in favor of ‘best practices’ formed by fiat, or applied arbitrarily and capriciously, the ethical standards of the OSAC are apparently merely suggestions. I'll say it again: credentials are only as good as the integrity of the people that hold them.

See, at the end of the day, I do the work; I don’t tell other people how to do the work. The key to good work is a bottom-up approach, not a top-down approach. You know who else’s work I do, Noberto? The ATF’s, precisely because they use us. I’ll let you speculate on why.

If credentials saved the day we wouldn’t have had incidents like DC:

https://dcist.com/story/23/02/01/dc-tro ... ntil-2024/

When you want the prestige of being on a committee but throw tantrums from behind anonymous screen names, you reveal your character. Telling me you produce best practices is much like telling me you have class.

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 11:30 am
by josher89
Bill:
"Hey Fred - you know those fancy new things they call automobiles? They sure have taken over the trails now-a-days. And it seems that people tend to get injured more when them there automobiles hit each other - much more than if a rider gets throwed off of his there horse. I think maybe we can put some kind of strap inside of that box on wheels and keep those fine folks from banging around inside or getting tossed out if they collide with another. Heck, maybe we can even make it standard to have every vehicle have something like that."

Fred:
"Well Bill. That sounds like a great idea but, what happens if those fine folks are strapped in tight like a bug in a rug inside of said box on wheels and they hit a tree and the vehicle catches on fire because the go-juice ignited? Those fine folks might not be able to get out in time and could get burned or maybe even die."

Bill:
"Fred, you're right. That was a dumb idea to try an improve the safety of that automobile. We should just let people do their own thing and hope for the best."

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 11:59 am
by Boyd Baumgartner
The key to wit is brevity, Josh.

Josh: come help us fix the discipline, guys!

Guys: Ok, what about this?!

Josh: those have no merit


You do see that I appealed to the recommendations that are in scientific publications, right Josh? When appealing to analogy, you should aim to get the analogy correct.

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 12:15 pm
by bficken
One thing I find sad is the choice some people are making to respond to this case with slander of other examiners. And I'm saying this not with an angry voice myself, but with a sincere and saddened heart.

This article personally attacks Henry, Glenn, and the TFSC. And I'm confused on why. None of those people or entities have attacked anyone else. None of them have called anyone incompetent, nor wrong, nor any other negative thing. Was there disagreement about the conclusion of a comparison? Sure. That HAPPENS. When examiners disagree, why are we not discussing what they each saw? Or looking at the mark ups of the original examinations, and the images, and talking about what they saw and their decision-making process on how they reached their conclusions?

The worst these people did was disagree with a comparison conclusion out of RS&A, and in response this article attempts to obliterate their reputations and discredit them personally. It saddens me, because it is just so unnecessary. And it isn't helpful to anything. It never needed to turn that way and it sucks when people taint the industry with this sort of anger and emotional response to a situation.

(Brianne Breedlove)

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 1:03 pm
by NRivera
You can call it a tantrum if that makes you feel better about yourself. My screen name is anything but anonymous, and I won't apologize for being human and showing some passion for what I do. You see, I do the job too. I've been doing it for almost 20 years now. Came up from a small 2-person local operation flying by the seat of our pants. That was back when SWGFAST "guidelines" and this forum was about all there was and a lot of us were grateful to at least have that to go on.

Now we get to see you post threads to throw mud at pretty much everyone, with helpful links you can twist to fit your narrative. The real problem is that while you can sling mud and name names of people you may not personally like nor agree with, who you're really trashing is the small armies of people on both the OSAC and ASB sides, dozens of practitioners at every level, academics, lawyers, judges, etc., all of whom have a part to play in the production of standards and BPRs. Contrary to what you suggest, Henry is NOT the OSAC FRS; he was but one voice and one vote. Just like SWGFAST, there are a ton of other people who VOLUNTEER their time and expertise to offer up documents that agencies big and small can use to produce reliable results, whether they are accredited or not.

A lot of considerations, hard work and late hours go into writing these documents, but we have to sit here and take it when someone says "oh, these aren't really national standards because there's no way to enforce them" or claims it's being done for "credentialism" or whatever other excuse you want to throw out. It's not right and I won't sit here and just take it without putting up my competing argument. Let's be clear, this isn't me as the OSAC member talking. It's me, the bench examiner, that has a keen interest in having some quality standards to rely on and is willing to volunteer my time and effort for it.

If you don't want to contribute to the discipline and you're happy just sitting there doing casework then go forth and be happy; but you don't get to be taken seriously when all you're willing to do is complain without being part of a solution. Where I come from that's called whining and it doesn't do anyone any good. I couldn't care less about any "prestige" that may or may not go with being on a committee. What I care about is doing my part so the new people out there where I came from have something more to rely on than dubious conjecture on a forum post. If you don't like the product, the invitation to join stands. We'd love to have your input.

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 2:08 pm
by josher89
Boyd Baumgartner wrote: "You do see that I appealed to the recommendations that are in scientific publications, right Josh?"
I don't see King Co. AFIS on this list:
https://www.nist.gov/organization-scien ... plementers

It's incredible what points one picks and chooses to follow. I'm not suggesting you swallow everything hook, line, and sinker (if that analogy is wrong, Boyd, please correct me since I'm not so good with them), but if all you do is criticize the ones that are trying to make it better without being a part of the change, it's kind of an effort in futility. If you are appealing to scientific publications, I will admit I got lost in your citations, so I must have overlooked your appeals.

I'd love for you to show which comments or suggestions I've said have no merit. But I will give you an example of one that has come up.

"I don't like this document because you aren't telling me who I should send my latent print comparison to for a third party review because two of my examiners can't agree."

If you'd like me to pass on your name, agency, and information to them, Boyd, so you can be their third, I'd be happy to. But it is not the point of the document or OSAC to decide who you should send it to. The doc was merely giving options to labs for how to handle consultations. That comment factually has no merit with respect to the document.

Show me other examples where I've said a comment has no merit when in fact it did. As Bert said,
"If you don't like the product, the invitation to join stands. We'd love to have your input."

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 4:45 pm
by Boyd Baumgartner
Josh, you may want to consider joining bitemark, firearms or footwear subcommittees, because every time you open your mouth, you shoot yourself in the foot.

I'd be happy to collaborate with OSAC and go query the LIMS of your list of participating agencies to show the exact extent and effect of the error reduction that implementing OSAC standards has had. I'm proficient in SQL, which most of your agencies that have a LIMS system should be able to make use of.

Re: Houston, we have a problem... with Houston

Posted: Fri Nov 17, 2023 5:30 pm
by josher89
Whelp - OSAC doesn't have a bitemark subcommittee, Matt has a lock on the footwear committee, and I'm not qualified to join the firearms subcommittee.

Again with the criticisms but no real offer for making it better...just bragging about being able to point out all of our mistakes.


youtu.be/DxNzuCCh1T8

Re: Houston, we have a problem... with Houston

Posted: Thu Dec 21, 2023 4:55 pm
by Boyd Baumgartner
Let's just put this to rest once and for all.

We'll start by taking a look at a consensus document

https://www.ojp.gov/pdffiles1/nij/225320.pdf

According to Chapter 14 of The Fingerprint Sourcebook, 'Scientific Research Supporting the Foundations of Friction Ridge Examinations', Glenn Langenburg wrote:
"14.2.1 Science and Falsifiability. The word science is derived from the Latin scientia (meaning knowledge), which is itself derived from the Latin verb scire (to know). Science can be defined as a body of knowledge obtained by systematic observation or experimentation. This definition is very broad, and, under such a permissive definition, many fields of study may be defined as science. Scientific creationism, theological science, Freudian psychoanalysis, and homeopathic medicine could arguably be classified as sciences. Sir Karl Popper (1902–1994) recognized the difficulty of defining science. Popper, perhaps one of the most respected and widely known philosophers of science, separated science from nonscience with one simple principle: falsifiability. Separation, or demarcation, could be done if a theory or law could possibly be falsified or proven wrong (Popper, 1959, 1972). A theory or law would fail this litmus test if there was no test or experiment that could be performed to prove the theory or law incorrect. Popper believed that a theory or law can never be proven conclusively, no matter the extent of testing, data, or experimentation. However, testing that provides results which contradict a theory or law can conclusively refute the theory or law, or in some instances, give cause to alter the theory or law. Thus, a scientific law or theory is conclusively falsifiable although it is not conclusively verifiable (Carroll, 2003)."
And if you look at that Popper reference, it points to a work called Conjectures and Refutations.

https://www.dpi.inpe.br/gilberto/cursos ... ations.pdf

While I don't expect that you'll actually read it, in essence Popper states that we should regard our conceptual frameworks, and all our other theories, as conjectures that, through criticism, may be improved or replaced with something better.

It actually has a name. Critical Rationalism. Say it with me.....criticism that is rational.

What it doesn't say is that improvement happens by elevating people pleasers with fragile egos and savior complexes to bureaucratic positions that self-declare 'best practices' while remaining unaccountable when violating the very principles they profess. You know, the actual substance of what I posted, which you have yet to point out any actual problems with.

https://en.wikipedia.org/wiki/Critical_ ... 0evaluated

So what do we have here? A consensus document, written by Glenn, citing a philosophy professed by the discipline, which says that knowledge claims should be rationally criticized, and that through criticism those claims can be refined and strengthened.

Since that's exactly what I did, you must have a problem with Glenn and a consensus body.

I'm old enough to remember five minutes ago, when that was considered blasphemy and worthy of OSAC members engaging in a high-school-musical-level production of The Outsiders, in which Ponyboy and Johnny trade insults from the '50s and run to the town square to defend their honor and get into an unwinnable fight.


And now that Glenn has been mentioned three times, Eric Ray will appear and attempt to steal someone's firstborn child until they can trick him into saying Idemia backwards.
But back to the issue at hand. If your assertion is that only by joining the 'do something gang' can something be done, I've got news for you. You see, if you actually read Popper, and then read who Popper read, and then read who came after Popper, and you go all the way back to Aristotle and all the way forward to Kripke, you arrive at the following understanding: identity is a paradox, precisely because it's a semantic problem. This has been known to everyone outside of the fingerprint field since the 4th century BCE, when the paradox was first articulated.

https://www.cs.utexas.edu/~dnp/frege/th ... radox.html

And if you don't want to do all the reading, I asked an AI to explain it to someone with an 8th grade education. Enjoy:

https://g.co/bard/share/3b48c0efd90f

And this problem has embedded itself in our field precisely because identification is a precise form of classification, which needs definitional boundaries. How many points are sufficient? At what point does a yellow become a red in GYRO? How much more confidence does a yellow get over a red in GYRO? At what point does an incipient become a dot? At what point does a dot become a short ridge? At what point does an identification become an inconclusive? At what point does a dissimilarity become a discrepancy? At what point does a print become complex? At what point does a central pocket whorl become a loop? Ad infinitum...

And when you realize that these semantic problems are bound to identity problems, attempting to add more precise concepts will result in more problems, precisely because a concept needs to be identified by a definition, which is necessarily vague (hello, Zeno's paradox). So much so that Wittgenstein (whom Popper mentions in Conjectures and Refutations) is associated with the notion of the rule-following paradox.

https://en.wikipedia.org/wiki/Wittgenst ... 20rule%22.

You can see it in action here:

https://noblis.org/wp-content/uploads/2 ... mbined.pdf
"The rules for counting points and the operational and legal implications of point thresholds vary by country: for example, in Spain, the point thresholds are adjusted when 'unusual' points are present [13]; in the Netherlands, a higher point threshold is required to testify to an identification in court than for an identification within the agency."
Literally, the rule 'count points until you ID' could not be applied consistently in Spain. It required a rule change depending on the data (at what point does a point become 'unusual'?). It also varied by country: England required 16 points and France 12.

https://en.wikipedia.org/wiki/Fingerprint
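
The arbitrariness is trivially demonstrable. A toy Python sketch, using the historical 16- and 12-point thresholds mentioned above; the 14-point count is hypothetical:

    # One print, one fixed number of corresponding points -- and two official
    # conclusions, depending purely on jurisdiction. Thresholds per the text
    # above (England 16, France 12).
    THRESHOLDS = {"England (historical)": 16, "France (historical)": 12}

    def conclusion(points: int, threshold: int) -> str:
        return "identification" if points >= threshold else "inconclusive"

    points_found = 14  # hypothetical comparison result
    for country, threshold in THRESHOLDS.items():
        print(f"{country}: {conclusion(points_found, threshold)}")

Same print, same examiner, opposite reported outcomes; the rule, not the evidence, did the deciding.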

You see, unaccountable alphabet committees and agencies will produce vague rules (you really have no choice) and then blame examiners for not following them rather than take responsibility. And while you clearly weren't on the OIG committee, if you think it failed only because you weren't on it, you are a narcissist.

From Chapter 3 of the OIG report:

https://oig.justice.gov/sites/default/f ... apter3.pdf
"The Examination SOPs in effect in the FBI Laboratory at the time of the identification of LFP 17 did not describe the ACE-V process in detail."
From Chapter 4 of the OIG report:

https://oig.justice.gov/sites/default/f ... apter4.pdf
"The panelists identified the following as the primary causes of the misidentification:
• Failure to follow properly the Analysis, Comparison, Evaluation and Verification (ACE-V) steps in fingerprint examination."
See, Josh, people who make policy don't suffer the consequences of their crap policies. Examiners do.

Back to the point. Synthesizing the implications of the identity paradox and the rule-following paradox: the more concepts you have, the more boundaries between concepts you have. The more boundary cases you have, the more rules you'll need to account for those boundary cases. The more rules you have, the more interpretation you'll have, which practically means more judgment. The more judgment, the more error. So in an effort to make things more specific, you'll inevitably make things fuzzier and more error prone.

That was already demonstrated here:

https://forum.clpex.com/viewtopic.php?t=2552

The link to the study in that article shows exactly what I just stated above: more concepts (scales, in this case) result in more errors at the boundaries of those scales.
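
You can reproduce the effect in a few lines. A hedged Python simulation with an arbitrary noise level; the exact numbers don't matter, only the trend:

    import random

    random.seed(2)
    NOISE = 0.05      # assumed observation noise; any small value shows the trend
    TRIALS = 20_000

    def binned(value: float, k: int) -> int:
        """Assign a value to one of k equal-width categories on [0, 1)."""
        clamped = min(max(value, 0.0), 0.999)
        return int(clamped * k)

    for k in (2, 3, 5, 9):
        disagree = 0
        for _ in range(TRIALS):
            truth = random.random()            # one underlying "true" quality
            a = binned(truth + random.gauss(0, NOISE), k)  # first read
            b = binned(truth + random.gauss(0, NOISE), k)  # second read
            disagree += (a != b)
        print(f"{k} categories: two reads disagree {disagree / TRIALS:.1%}")

More categories mean more boundaries, and every boundary is a place where the same underlying impression gets two different labels.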

And in order to account for that, you'll end up with jabberwocky, meaningless statements like this:

https://www.nist.gov/system/files/docum ... usions.pdf
"An examiner shall not assert that a source identification is the conclusion that two impressions were made by the same source or imply an individualization to the exclusion of all other sources."
I dare you to tell me that you don't believe that the prints you ID were made by the same person, because that is what that says. And see, you as an OSAC member, and/or Simon Cole, wouldn't bear the repercussions of this document in the wild if it were evaluated by an ethics investigation. Examiners would.


All of this has already been pointed out multiple times:

Here: https://forum.clpex.com/viewtopic.php?p=19238#p19238

From that paper RE: OSAC:

"They usually state few requirements.
Their stated requirements are often vague.
Compliance with their stated requirements can be achieved with little effort; the bar is set very low.
Compliance with their stated requirements would not be sufficient to lead to scientifically valid results."
So I guess all those people on your list that you were touting earlier are low-bar agencies, who through little effort achieved vague requirements and as a result didn't produce scientifically valid results. Or maybe these authors are just mad at Glennry's (that's their celebrity couple name) PhDs and have resorted to bragging or whatever such nonsense you've been spouting up to this point.

Meet the new SWGFAST, same as the old SWGFAST

https://noblis.org/wp-content/uploads/2 ... _Final.pdf
"That said, the details of how to document Analysis and Comparison are mostly unspecified, and SWGFAST’s standards are unenforced, leaving the details to be sorted out by agency standard operating procedures or by the examiners’ judgments."
Wait!? Vague predicates resulted in individual judgments for application?? Where have we heard this before? This all sounds so familiar...

It was also pointed out here:


https://noblis.org/wp-content/uploads/2 ... withSI.pdf
"In addition to differences in interpretation, a lack of clear criteria in the latent print discipline specifying when and how to mark features may have contributed to much of the observed variability in annotations [50,20,49,51]."
'The Science' has indicated that OSAC has a mandate to provide a document on when and how to mark features with a level of specificity such that a repeat of the White Box study would show no measurable difference in markup between the examiners in the study. Chop, chop! I'll hold my breath.

It gets worse here:
"We designed our experiment to allow us to measure the extent to which various factors played a role in determining sufficiency for individualization, following the publication by SWGFAST of a conceptual Sufficiency Graph that depicts a complementary role between quality (an assessment of the overall clarity of the impression) and the quantity of minutiae for sufficiency for individualization. We found, contrary to the SWGFAST proposition, that models accounting for clarity and minutiae count performed no better than models that only accounted for minutiae count: we assume clarity influences which minutiae are marked rather than providing additional complementary information."
While Popper was a rationalist, this study went a step further by empirically refuting a claim by a standards board. Beyond being merely falsifiable, the claim was falsified.


Since I'm a pattern recognition expert, let me help you see the pattern here. So-called leaders in the industry ignore the professed values of the industry stated in authoritative documents, ignore published scientific findings, deflect criticism and take zero accountability for producing less-than-stellar results. Exactly what I pointed out in the original post. And the real tragedy here is that your insults weren't even witty.

You see, I don't need OSAC to do my job; OSAC needs people like me to give it legitimacy through acceptance. Well, I don't give you that, and I'm not the only one. I will also continue to refute the nonsense that makes its way down to the stand. I know defense attorneys read this board, so who knows, maybe I'll find my way to Nebraska via some pro bono work. 'Tis the season for charity and all.

When asked why my agency doesn't follow OSAC guidelines, I'll simply repeat what was published in the paper.

"Compliance with their stated requirements would not be sufficient to lead to scientifically valid results." This has been mentioned in numerous peer reviewed articles.


Look, I get it, you don't like my approach, 'you catch more flies with honey' or whatever, but what I've come to understand from my time in forensics is that the thing that attracts the most flies is a bloated corpse, which is exactly what OSAC is.

OSAC is dead! Long live OSAC!

P.S. I saw you look at my LinkedIn profile where I posted the original article as well. I don't have a problem saying these things publicly. Only people who are ashamed of themselves hide.

Per the classic Evett and Williams review of the 16-point standard:

https://www.ojp.gov/ncjrs/virtual-libra ... -and-wales
Instead (the authors) present recommendations designed to produce the kind of atmosphere in the forensics profession that would facilitate its members in debating the issues more freely than in the past. In particular, the training process should encourage a questioning attitude rather than a doctrinaire obedience to dogma. More junior, yet progressive, members should have the freedom to put their views forward in discussion arenas that in the past were apparently dominated by a small number of influential senior individuals whose objective has been to maintain the status quo.


You're 'that guy' now, Josh. So much for that paradigm shift everyone is always talking about.

It appears all things old are new again.