Forensic Science in Criminal Courts: The PCAST Report

In 2015 President Obama tasked the President’s Council of Advisors on Science and Technology (PCAST) with reviewing the forensic sciences and determining whether there were areas that could be improved. In September 2016 the council released its report, which has generated surprisingly little press. If you’re a criminal defense lawyer who tries cases, you need to read it.

The background for this report was the groundbreaking report issued by the National Academy of Sciences in 2009, which strongly criticized the use of forensics in criminal cases. That report, titled Strengthening Forensic Science in the United States: A Path Forward, pointed out that much of the testimony typically used in court had very little, if any, research backing it up. Despite the lack of research, experts routinely testified about the validity and reliability of their particular disciplines. For example, fingerprint examiners typically testified with near-100% certainty (and in some cases absolute certainty) that a fingerprint could have come from no one else. They made those claims even though no study had ever confirmed them.

The NAS report called for increased research and validation, and one purpose of the most recent report was to determine how well that call had been answered. While the PCAST committee noted some improvement, it found that the state of the field still leaves a lot to be desired.

The PCAST report focused on what it called “Forensic Feature-Comparison Methods” and addressed two separate concepts. The first was “foundational validity,” which involves the following requirements:

1.   Foundational validity requires that a method has been subjected to empirical testing by multiple groups, under conditions appropriate to its intended use. The studies must (a) demonstrate that the method is repeatable and reproducible, and (b) provide valid estimates of the method’s accuracy (that is, how often the method reaches an incorrect conclusion) that indicate the method is appropriate to the intended application.

2.   For objective methods, the foundational validity of the method can be established by measuring the accuracy, reproducibility, and consistency of each of its individual steps.

3.   For subjective feature-comparison methods, because the individual steps are not objectively specified, the method must be evaluated as if it were a “black box” in the examiner’s head. Evaluations of validity and reliability must therefore be based on “black box studies,” in which many examiners render decisions about independent tests (typically involving “questioned” samples and one or more “known” samples) and the error rates are determined.

4.   Without appropriate estimates of accuracy, an examiner’s statement that two samples are similar—or even indistinguishable—is scientifically meaningless: it has no probative value and considerable potential for prejudicial impact.

The language alone should give you some ammunition for cross-examination. The report also states that “statements claiming or implying greater certainty than demonstrated by empirical evidence are scientifically invalid.” Chew on that for a while.

The other concept addressed was “validity as applied,” which involves two tests:

1.   The forensic examiner must have been shown to be capable of reliably applying the method and must actually have done so. Demonstrating that an expert is capable of reliably applying the method is crucial, especially for subjective methods, in which human judgment plays a central role. From a scientific standpoint, the ability to apply a method reliably can be demonstrated only through empirical testing that measures how often the expert reaches the correct answer. Determining whether an expert has actually applied the method requires that the procedures actually used in the case, the results obtained, and the laboratory notes be made available for scientific review by others.

2.   The practitioner’s assertions about the probative value of proposed identifications must be scientifically valid. The expert should report the overall false-positive rate and sensitivity for the method established in the studies of foundational validity and should demonstrate that the samples used in the foundational studies are relevant to the facts of the case. Where applicable, the expert should report the probative value of the observed match based on the specific features observed in the case. And the expert should not make claims or implications that go beyond the empirical evidence and the applications of valid statistical principles to that evidence.

Here, the committee is talking about much more than the routine proficiency testing used by most agencies: testing must mimic real-world conditions and be done in a way that the examiner does not know he or she is being tested.

The following is a summary of the disciplines that were addressed along with the recommendations and findings. This summary, of course, is no substitute for obtaining and reading the whole PCAST report. It is a relatively short read and available to download for free at https://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf.

DNA Analysis of Complex Mixture Samples

This is a complex issue, and if you have a case where the report involves a mixture, you need to consider getting an expert to assist with the DNA portion of the case. Despite what most people think, this is not simply a matter of reading the results and reporting them. When more than one contributor is involved, several decisions must be made, and many of them are subjective, left to the analyst alone. Those decisions include which markers to consider, how to interpret them, and how many possible sources exist. The committee found that “subjective analysis of complex DNA mixtures has not been established to be foundationally valid, and is not a reliable methodology.”

If you have a case where DNA is an important part of the case and the results are reported as a mixture, you need to get an expert to review them.

Bitemark Analysis

The problems with bitemarks have already been pointed out by the Texas Forensic Science Commission (FSC). The FSC issued a report in 2016 recommending that bitemarks not be used in court without further validation. Attempts to validate bitemarks have produced results showing that experts cannot even agree on the fundamental question of what is or is not a bitemark. The committee found that “bitemark analysis is far from meeting the scientific standards for foundational validity.” Hopefully, we have seen the end of this testimony. Again, if you have a case where a bitemark is central evidence, an expert is necessary to combat this testimony.

Latent Fingerprint Analysis

The committee found that fingerprint analysis is a “foundationally valid subjective methodology”—albeit with a false positive rate that is substantial and is likely higher than expected by many jurors, based on longstanding claims about the infallibility of fingerprint analysis.

Fingerprint examiners are fond of saying things like “no two people have ever been found to have the same fingerprint,” or “I’ve never made a mistake.” The implication is that fingerprint comparisons are extremely accurate. As in many other areas, those statements have never been validated. Until recently, there had been almost no effort to determine the accuracy of fingerprint comparisons. The few studies that have been done reveal an error rate far higher than most people expect. According to the PCAST report, the false-positive rate could be as high as 1 error in 306 cases, based on an FBI study, and 1 error in 18 cases, based on another study.
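To make those numbers concrete (a rough illustration, not a figure from the report): a false-positive rate of 1 in 306 is roughly 0.3%, and 1 in 18 is about 5.6%. A juror who assumes fingerprint identification is practically infallible, say one error in a million, is off by a factor of thousands.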

The committee made three recommendations to improve accuracy:

1)   Require examiners to complete and document their analysis before looking at any known fingerprints, and separately document any additional data used during their comparison and evaluation;

2)   Ensure that examiners are not exposed to any irrelevant information about the facts of the case before conducting their examination;

3)   Implement rigorous proficiency testing, and report those results for evaluation by the scientific community.

The committee also recommended additional studies to determine the error rate for latent prints of varying quality and completeness: in other words, the error rates for comparing poor-quality prints as well as better-quality prints.

This is an area where we need to do better at challenging experts. The depth of their knowledge—or lack thereof—about these studies should be explored, as well as the information they had when making their comparisons. We also need to ensure that juries are informed about error rates—which is most likely going to contradict common assumptions about the accuracy of fingerprint testing.

Firearms Analysis

The committee found that “the current evidence falls short of the scientific criteria for foundational validity.” This is another area where there has been almost no legitimate effort to validate the accuracy of comparisons. The committee noted that an independent study funded by the Department of Defense established that the error rate was most likely 1 in 66, and could be as high as 1 in 46. Unfortunately, that study has not been published, and no such study has appeared in a peer-reviewed scientific journal. The committee recommended that such research be done; without it, there is no scientific support for the foundational validity of firearms comparisons. Studies are also needed to determine the reliability of such comparisons.

Hair Analysis

The committee noted the need for scientific studies to establish the reliability and validity of such comparisons. The FBI has admitted problems with hair comparisons done by its examiners. The Texas Forensic Science Commission has also undertaken a review of cases where testimony about hair comparisons has been given. To date, the FSC has noted problems in the language used by examiners; instead of testifying that two hairs are “similar,” examiners have used language suggesting that the two hairs are “identical” or come from the same source.

This is one of many disciplines that has been called into question by DNA testing. The committee noted that in 2002 the FBI used mitochondrial DNA analysis to look at 170 cases where microscopic comparison had been done. They found that in 11% of the cases, the examiners had reached the wrong result.

Conclusion

These findings may be surprising to most lay people, and probably even most lawyers. Most areas of forensics—e.g., fingerprints and DNA—have long been considered almost infallible. For years experts have been able to get away with claims that had no scientific support. Thankfully that is starting to come to an end.

As you might expect, the PCAST report has not been well received by prosecutors. The Justice Department has criticized many of the recommendations and indicated it does not intend to follow them. With a new administration taking over, it is unknown how these issues will be addressed. However, the fact remains that the concerns pointed out by the committee, as well as by the NAS, are valid. There has been little effort to validate these disciplines in a scientifically appropriate manner. That failure is not surprising, since rigorous validation could drastically change the use of forensics in criminal cases.

Law often lags science. It is up to us to ensure that “good” science is used in court. We can only do that by educating prosecutors and judges, by challenging the science, the experts, and their conclusions, and, when the evidence is admitted, by making sure jurors understand the error rates. If we continue doing our job, perhaps convictions based on “junk science” will become a relic of the past.

Walter Reaves
Waco, Texas, lawyer Walter Reaves has gained a reputation as the lawyer other lawyers go to when they need answers. Over the last 35 years he has gained national recognition and been featured in Time, Newsweek, Slate, the Wall Street Journal, Texas Monthly, and the Texas Observer. He is also a regular presenter at CLE. He has devoted his entire career to representing the citizen accused, and proudly claims never to have been a prosecutor. He is Board Certified in both Criminal Law and Criminal Appellate Law by the Texas Board of Legal Specialization. He has been an active member of the Innocence Project of Texas and has served on the Board of that organization for the last several years. He has also served on the Board and Executive Committee of TCDLA, and served as president of the McLennan County Criminal Defense Lawyers Association. He can be reached at .
