Diagnostic Tools for Automatic Cartridge Case Comparisons

Joseph Zemmels, Heike Hofmann, Susan VanderPlas

Acknowledgements

Funding statement

This work was partially funded by the Center for Statistics and Applications in Forensic Evidence (CSAFE) through Cooperative Agreement 70NANB20H019 between NIST and Iowa State University, which includes activities carried out at Carnegie Mellon University, Duke University, University of California Irvine, University of Virginia, West Virginia University, University of Pennsylvania, Swarthmore College and University of Nebraska, Lincoln.

Automatic Cartridge Case Comparison

Obtain an objective measure of similarity between two cartridge cases

  • Step 1: Independently pre-process scans to isolate breech face impressions
  • Step 2: Compare two cartridge cases to extract a set of numerical features that distinguish matches from non-matches
  • Step 3: Combine the numerical features into a single similarity score (e.g., a value between 0 and 1)
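As an illustration of Step 3, the features could be combined with a logistic model so the score lands in (0, 1). This sketch is ours, not the published method: the function name, feature values, and weights are all hypothetical, and in practice the weights would be fit on comparisons with known ground truth.

```python
import math

def similarity_score(features, weights, bias=0.0):
    """Combine numerical comparison features into a single similarity
    score in (0, 1) via a logistic function (one common choice; purely
    illustrative here)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features (e.g., a correlation and a cell count).
# With zero weights the score is maximally uncertain: 0.5.
print(similarity_score([0.8, 12.0], [0.0, 0.0]))
```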

An examiner can take the similarity score into account during an examination

It is challenging to know how and when these steps work correctly

Cartridge Case Data

  • 3D topographic images captured using a Cadre\(^{\text{TM}}\) TopMatch scanner at the Roy J. Carver High Resolution Microscopy Facility

  • An x3p file contains surface measurements at a lateral resolution of 1.8 micrometers (“microns”) per pixel

Step 1: Pre-process

Isolate region in scan that consistently contains breech face impressions

How do we know when a scan is adequately pre-processed?

Step 2: Compare Cells

  • Registration: Determine rotation and translation to align two scans

  • Cross-correlation function (CCF) measures similarity between scans

  • Split one scan into a grid of cells that are each registered to the other scan (Song 2013)

  • For a matching pair, we assume that cells will agree on the same rotation & translation
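The translation component of a registration can be estimated efficiently with the cross-correlation theorem: the CCF over all circular shifts is the inverse FFT of the cross-power spectrum. A minimal NumPy sketch (the function name and toy surfaces are ours, not the authors'):

```python
import numpy as np

def estimate_shift(ref, target):
    """Return the signed circular shift (dy, dx) that best aligns
    `target` to `ref`, found at the peak of the FFT-based CCF."""
    # Cross-correlation theorem: CCF = IFFT(FFT(ref) * conj(FFT(target)))
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(target))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert circular peak indices to signed shifts.
    return tuple(int(p) - n if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

# Toy check: shift a random surface and recover the shift.
rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
target = np.roll(ref, shift=(5, 12), axis=(0, 1))
print(estimate_shift(ref, target))  # (-5, -12): undoes the applied shift
```

In the cell-based approach, this translation search is run per cell (and repeated over a grid of candidate rotations); matching pairs produce cells that agree on the result.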

Why does the algorithm “choose” a particular registration?

Step 3: Score

  • Measure of similarity for two cartridge cases

  • Congruent Matching Cells (11 CMCs in the example below) (Song 2013)
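A sketch of the congruent-cell idea: each cell contributes its own estimated rotation/translation, and cells consistent with the consensus registration (within thresholds) count as CMCs. The data layout, threshold values, and use of the median as the consensus below are our simplifying assumptions, not the exact published criteria:

```python
import numpy as np

def count_cmcs(cell_regs, theta_tol=3.0, trans_tol=5.0, ccf_min=0.5):
    """Count cells congruent with the consensus registration.
    cell_regs: one (theta_deg, dx, dy, ccf) tuple per cell.
    Thresholds are illustrative placeholders."""
    regs = np.asarray(cell_regs, dtype=float)
    consensus = np.median(regs[:, :3], axis=0)  # rotation + translation
    congruent = (
        (np.abs(regs[:, 0] - consensus[0]) <= theta_tol)
        & (np.abs(regs[:, 1] - consensus[1]) <= trans_tol)
        & (np.abs(regs[:, 2] - consensus[2]) <= trans_tol)
        & (regs[:, 3] >= ccf_min)  # cell must also correlate well
    )
    return int(congruent.sum())

# Three cells agree on roughly (30 deg, +2, +1); one outlier does not.
cells = [(30, 2, 1, 0.8), (31, 3, 0, 0.7), (29, 1, 2, 0.9), (90, 40, -20, 0.3)]
print(count_cmcs(cells))  # 3
```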

What factors influence the final similarity score?

Visual Diagnostics for Algorithms

  • A number of questions arise when using comparison algorithms
  • How do we know when a scan is adequately pre-processed?
  • Why does the algorithm “choose” a particular registration?
  • What factors influence the final similarity score?
  • We wanted to create tools to address these questions

  • Well-constructed visuals are intuitive and persuasive

  • Useful for both researchers and practitioners to understand the algorithm’s behavior

X3P Plot

  • Emphasizes extreme values in scan that may need to be removed during pre-processing

  • Allows for comparison of multiple scans on the same color scheme

  • Map quantiles of surface values to a divergent color scheme
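The quantile mapping behind the X3P plot can be sketched in a few lines: each height is replaced by its empirical quantile, so extremes saturate the ends of a divergent palette and multiple scans share a comparable scale. The function name and toy surface are ours:

```python
import numpy as np

def quantile_map(surface):
    """Replace each surface height with its empirical quantile in [0, 1].
    Passed through a divergent color scheme (0 -> one hue, 0.5 -> neutral,
    1 -> the opposing hue), extreme values stand out and scans with
    different absolute height ranges become comparable. NaN pixels
    (missing data) are preserved."""
    vals = np.sort(surface[~np.isnan(surface)])
    ranks = np.searchsorted(vals, surface, side="right") / vals.size
    return np.where(np.isnan(surface), np.nan, ranks)

surface = np.array([[0.0, 10.0],
                    [-5.0, np.nan]])
print(quantile_map(surface))
```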

Comparison Plot

  • Separate aligned cells into similarities and differences

  • Useful for understanding a registration

  • Similarities: Element-wise average of the two scans, keeping only elements that are less than 1 micron apart

  • Differences: Elements of both scans that are at least 1 micron apart
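The similarities/differences split described above amounts to a thresholded mask over two aligned surfaces; a minimal NumPy sketch (function name and toy values are ours):

```python
import numpy as np

def split_similarities_differences(scan_a, scan_b, tol=1.0):
    """Separate two aligned scans (heights in microns) into a
    'similarities' surface (element-wise average where the scans are
    within `tol` microns of each other) and a 'differences' pair
    (each scan's elements that are at least `tol` microns apart).
    Excluded elements become NaN so they render as missing."""
    close = np.abs(scan_a - scan_b) < tol
    similarities = np.where(close, (scan_a + scan_b) / 2, np.nan)
    diff_a = np.where(close, np.nan, scan_a)
    diff_b = np.where(close, np.nan, scan_b)
    return similarities, diff_a, diff_b

a = np.array([[0.0, 5.0]])
b = np.array([[0.5, 1.0]])
print(split_similarities_differences(a, b))
```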

Cell Comparison Plot

Thank You!

References

Song, John. 2013. “Proposed ‘NIST Ballistics Identification System (NBIS)’ Based on 3D Topography Measurements on Correlation Cells.” American Firearm and Tool Mark Examiners Journal 45 (2): 11. https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=910868.

Appendix

Step 2: Compare Full Scans

  • Registration: Determine rotation and translation to align two scans

  • Cross-correlation function (CCF) measures similarity between scans

  • Choose the rotation/translation that maximizes the CCF
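The full-scan registration is a grid search: for each candidate rotation, find the translation that maximizes the FFT-based CCF, then keep the best rotation/translation pair overall. The sketch below uses only 90-degree rotations (`np.rot90`) so it stays dependency-free; the real algorithm searches a much finer angular grid, and the function name is ours:

```python
import numpy as np

def best_registration(ref, target, quarter_turns=(0, 1, 2, 3)):
    """Coarse registration sketch: search rotations (here only multiples
    of 90 degrees, for simplicity) and, for each, all circular shifts via
    the FFT-based CCF. Returns (peak_ccf, degrees, (dy, dx))."""
    best = None
    for k in quarter_turns:
        rotated = np.rot90(target, k)
        corr = np.fft.ifft2(np.fft.fft2(ref) *
                            np.conj(np.fft.fft2(rotated))).real
        peak = corr.max()
        if best is None or peak > best[0]:
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            best = (float(peak), 90 * k, (int(dy), int(dx)))
    return best

# Toy check: rotate a random surface by -90 degrees and recover it.
rng = np.random.default_rng(1)
ref = rng.normal(size=(32, 32))
target = np.rot90(ref, -1)
print(best_registration(ref, target))  # best rotation 90, shift (0, 0)
```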

X3P Plot Pre-processing Example

  • Useful for diagnosing when scans need additional pre-processing

Full Scan Comparison Plot