21 March 2012

The myth of fingerprints

Could you have made a mistake?

If you are a fingerprint examiner in court giving testimony, the answer was once, “No,” according to Mnookin (2001).

(T)he primary professional organization for fingerprint examiners, the International Association for Identification, passed a resolution in 1979 making it professional misconduct for any fingerprint examiner to provide courtroom testimony that labeled a match “possible, probable or likely” rather than “certain.”

(I’ve been unable to find out whether this is still true.)

A new paper by Ulery and colleagues is a follow-up to a paper they published last year on fingerprint analysis. The previous paper found that 85% of fingerprint examiners made errors in which two fingerprints were judged to be from different people when they were in fact from the same person (false negatives). There was much more analysis, but you get the idea.

The researchers wanted to see how consistent the decisions were after time had passed. For this paper, they used some of the same fingerprint examiners who had been tested before (72 of the 169 from the previous paper). It had been seven months since the fingerprint examiners had seen these prints. They were all prints that they’d seen for the previous research, but Ulery and colleagues didn’t tell them that.

Because the experimenters wanted to see if examiners who had made a mistake before would make the same mistakes again, the choice of which pairs of fingerprints to present was somewhat complicated. But all examiners saw nine pairs of fingerprints that were not matched (from different people) and sixteen pairs that were matched (same person). It’s also important to note that the fingerprint pairs were chosen in part because they were difficult.

In the original test, the fingerprint examiners only rarely said two fingerprints were from the same person when they weren’t (false positives). On the retest, there were no cases of false positives, either repeated mistakes from the previous test or entirely new mistakes.

The reverse mistakes, false negatives, were more common. Of the false negative errors made in the previous paper, about 30% were made again in the new study. And the examiners made new mistakes that hadn’t been made before.

There is some good news here, however. In some cases, the examiners’ ratings of difficulty were correlated with the probability that they would make the same decisions as before. The examiners’ ratings of difficulty, however, only weakly predicted the errors that they made.

Another important finding is evidence that the best way to reduce errors is to have fingerprints examined by multiple people, rather than multiple examinations by the same person. The authors write:

Much of the observed lack of reproducibility is associated with prints on which individual examiners were not consistent, rather than persistent differences among examiners.

Nevertheless, even with two examiners checking fingerprints, Ulery and colleagues estimate that 19% of false negatives would not be picked out by having another examiner check the prints.
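The logic of that comparison can be sketched with a toy simulation. This is my own illustration, not the authors’ model, and the error and repetition rates in it are assumed for the example rather than taken from the paper: the idea is simply that if an examiner’s errors tend to repeat when *they* recheck, but a different examiner errs more or less independently, then independent verification catches more false negatives than re-examination.

```python
import random

random.seed(0)

# Toy model (illustrative numbers, NOT figures from Ulery et al.):
# assume each mated pair is missed by an examiner with probability 0.10,
# and an examiner repeats their own earlier miss 30% of the time.
N_PAIRS = 10_000
FN_RATE = 0.10            # assumed per-comparison false-negative rate
SELF_REPEAT = 0.30        # chance the SAME examiner repeats their own error

def first_exam():
    """True if the first examiner misses the match (false negative)."""
    return random.random() < FN_RATE

def recheck_same(errored):
    """Same examiner re-examines: their errors tend to repeat."""
    return errored and random.random() < SELF_REPEAT

def recheck_other(errored):
    """A different examiner checks, independently of the first error."""
    return errored and random.random() < FN_RATE

initial_errors = same_missed = other_missed = 0
for _ in range(N_PAIRS):
    if first_exam():
        initial_errors += 1
        same_missed += recheck_same(True)
        other_missed += recheck_other(True)

print(f"errors surviving same-examiner recheck:  {same_missed / initial_errors:.0%}")
print(f"errors surviving other-examiner recheck: {other_missed / initial_errors:.0%}")
```

With these made-up rates, rechecking by the same examiner lets roughly three times as many false negatives slip through as verification by a second, independent examiner, which is the qualitative point the authors’ reproducibility data makes.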

These papers all concern decisions made by experts, which is obviously the logical place to start from a policy and pragmatic point of view. As an exercise in seeing how expertise develops, it would be interesting to see whether beginners showed the same patterns in decision making.

References

Mnookin JL. 2001. Fingerprint evidence in an age of DNA profiling. Brooklyn Law Review 67: 13.

Saks M. 2005. The coming paradigm shift in forensic identification science. Science 309(5736): 892-895. DOI: 10.1126/science.1111565

Ulery B, Hicklin R, Buscaglia J, Roberts M. 2012. Repeatability and reproducibility of decisions by latent fingerprint examiners. PLoS ONE 7(3). DOI: 10.1371/journal.pone.0032800

Photo by Vince Alongi on Flickr; used under a Creative Commons license.

1 comment:

  1. Patty Newton, 9:08 AM

    In our agency we have very good quality control measures by doing blind verifications in all of our examinations, not just verifying identifications but verifying non-idents as well. This procedure has proven to be very successful.
    Before our current administrator came to our agency, only idents were verified, and in reviewing prior cases, we found several missed identifications. Although time consuming, if all cases are not verified, cases tend to go unsolved for years, creating even more loss of time and money. Another interesting thing to note is that we have solved a major portion of our cases by having the agency implement the collection of elimination prints. It is nothing short of astounding, how many latent prints belong to people who are victims.

