Open-source tools for dense facial tissue depth mapping of computed tomography models

Terrie Simmons-Ehrhardt, Catyana Falsetti, Anthony Falsetti, Christopher J. Ehrhardt

Research output: Contribution to journal › Article › peer-review

6 Scopus citations


Computed tomography (CT) scans provide anthropologists with a resource to generate three-dimensional (3D) digital skeletal material to expand quantification methods and build more standardized reference collections. The ability to visualize and manipulate the bone and skin of the face simultaneously in a 3D digital environment introduces a new way for forensic facial approximation practitioners to access and study the face. Craniofacial relationships can be quantified with landmarks or with surface-processing software that can quantify the geometric properties of the entire 3D facial surface. This article describes tools for the generation of dense facial tissue depth maps (FTDMs) using deidentified head CT scans of modern Americans from the Cancer Imaging Archive public repository and the open-source program Meshlab. CT scans of 43 females and 63 males from the archive were segmented and converted to 3D skull and face models using Mimics and exported as stereolithography files. All subsequent processing steps were performed in Meshlab. Heads were transformed to a common orientation and coordinate system using the coordinates of nasion, left orbitale, and left and right porion. Dense FTDMs were generated on hollowed, cropped face shells using the Hausdorff sampling filter. Two new point clouds consisting of the 3D coordinates for both skull and face were colorized on an RGB (red-green-blue) scale from 0.0 (red) to 40.0-mm (blue) depth values and exported as polygon (PLY) file format models with tissue depth values saved in the "vertex quality" field. FTDMs were also split into 1.0-mm increments to facilitate viewing of common depths across all faces. In total, 112 FTDMs were generated for 106 individuals.
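The common-orientation step described above (aligning each head using nasion, left orbitale, and the two porions) can be sketched as a rigid transform built from those landmark coordinates. The sketch below assumes a Frankfort-style convention (x-axis along the porion-porion line, the porion-orbitale plane horizontal, origin at nasion); the paper's exact axis convention may differ, and the function names are illustrative, not taken from the authors' workflow.

```python
import numpy as np

def frankfort_transform(nasion, l_orbitale, l_porion, r_porion):
    """Build a rigid transform (R, t) placing a head in an assumed
    Frankfort-style frame. Landmarks are 3D coordinate arrays."""
    x = r_porion - l_porion
    x /= np.linalg.norm(x)                      # left-right axis
    # normal of the plane through both porions and left orbitale
    n = np.cross(x, l_orbitale - l_porion)
    z = n / np.linalg.norm(n)                   # assumed "up" axis
    y = np.cross(z, x)                          # completes the right-handed frame
    R = np.vstack([x, y, z])                    # rows are the new basis vectors
    t = -R @ nasion                             # origin moves to nasion
    return R, t

def apply_transform(R, t, pts):
    """Apply the rigid transform to one point or an (N, 3) array of points."""
    return pts @ R.T + t
```

After this transform, nasion sits at the origin and the two porions differ only in their x-coordinate, so every head in the sample shares the same orientation regardless of how it was scanned.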
Minimum depth values ranged from 1.2 mm to 3.4 mm, indicating a common range of starting depths for most faces regardless of weight, as well as common locations for these values over the nasal bones, lateral orbital margins, and forehead superior to the supraorbital border. Maximum depths were found in the buccal region and neck, excluding the nose. Individuals with multiple scans at visibly different weights presented the greatest differences within larger depth areas such as the cheeks and neck, with little to no difference in the thinnest areas. A few individuals with minimum tissue depths at the lateral orbital margins and thicker tissues over the nasal bones (> 3.0 mm) suggested the potential influence of nasal bone morphology on tissue depths. This study produced visual quantitative representations of the face and skull for forensic facial approximation research and practice that can be further analyzed or interacted with using free software. The presented tools can be applied to preexisting CT scans, traditional or cone beam, adult or subadult individuals, with or without landmarks, and regardless of head orientation, for forensic applications as well as for studies of facial variation and facial growth. In contrast with other facial mapping studies, this method produced both skull and face points based on replicable geometric relationships, producing multiple data outputs that are easily readable with software that is openly accessible.

Original language: English (US)
Pages (from-to): 63-76
Number of pages: 14
Journal: Human Biology
Issue number: 1
State: Published - Dec 1 2018


Keywords

  • Computed tomography (CT)
  • Craniofacial identification
  • Facial approximation
  • Facial tissue depth mapping (FTDM)
  • Forensic anthropology
  • Forensic art
  • Forensic science
  • Meshlab

ASJC Scopus subject areas

  • Ecology, Evolution, Behavior and Systematics
  • Genetics
  • Genetics(clinical)


