Degree

Doctor of Philosophy (PhD)

Department

Electrical Engineering

Document Type

Dissertation

Abstract

Geometric modeling is a fundamental and critical problem in computer graphics and computer vision. A common goal is to build a parameterization that establishes a low-distortion one-to-one map between the geometric data and a canonical parametric domain; such a parameterization is a fundamental tool supporting many visual data processing tasks, such as mesh generation, finite element simulation, visual analytics, and geometric reconstruction. In this dissertation, I study two problems. The first is to construct low-distortion canonical (polycube) parameterizations of large-scale geometric regions, from which high-quality semi-structured quad meshes can be generated to support more efficient scientific computing. The second is to compute inter-surface parameterizations among many scanned 3D human faces, from which a consistently sampled and tessellated face database can be constructed to support forensic facial reconstruction.

In the first part, we develop a distributed poly-square mapping algorithm for large-scale 2D geometric regions, suitable for generating huge quadrilateral meshes in parallel on computer clusters. Our approach adopts a divide-and-conquer strategy based on domain decomposition: we first partition the data into solvable subregions, balancing their size, geometry, and communication cost; poly-square maps are then solved on the subregions; finally, these maps are merged and optimized globally through a multi-pass optimization algorithm. We demonstrate that our meshing framework can efficiently handle very large and complex geometric datasets on high-performance clusters and generate high-quality semi-structured quad meshes.

In the second part, we develop a new algorithm for facial reconstruction from a given skull. This technique has forensic applications in identifying skeletal remains when other information is unavailable. Unlike most existing strategies that reconstruct the face directly from the skull, we use a database of portrait photos to create many face candidates, perform skull-face superimposition to select a well-matched face, and then revise it according to the superimposition result. To support this pipeline, we build an effective autoencoder for image-based facial reconstruction and a generative model for constrained face inpainting. Our experiments demonstrate that the proposed pipeline is stable and accurate.
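To make the divide-and-conquer structure of the first part concrete, the following Python sketch mirrors the three stages described above (partition, per-subregion poly-square solve, global merge and optimization). It is only a minimal illustration under assumed placeholder functions; partition_region, solve_polysquare_map, and merge_and_optimize are hypothetical names with toy bodies, not the dissertation's actual implementation or API.

```python
"""Minimal structural sketch of a divide-and-conquer poly-square pipeline.
All function bodies are toy placeholders, not the dissertation's method."""

from concurrent.futures import ProcessPoolExecutor
import numpy as np


def partition_region(points, num_parts):
    """Split a 2D point set into subregions.
    A real partitioner balances size, geometry, and communication cost;
    here we simply chunk the points along the x-axis."""
    order = np.argsort(points[:, 0])
    return [points[idx] for idx in np.array_split(order, num_parts)]


def solve_polysquare_map(subregion):
    """Toy stand-in for the per-subregion poly-square solve: rotate the
    subregion so its principal direction snaps to the nearest coordinate
    axis, mimicking axis-aligned boundary constraints."""
    centered = subregion - subregion.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))   # principal direction
    d = vecs[:, -1]
    angle = np.arctan2(d[1], d[0])
    snap = np.round(angle / (np.pi / 2)) * (np.pi / 2)
    c, s = np.cos(snap - angle), np.sin(snap - angle)
    R = np.array([[c, -s], [s, c]])
    return centered @ R.T + subregion.mean(axis=0)


def merge_and_optimize(mapped_subregions, passes=3):
    """Toy stand-in for the multi-pass global optimization that stitches
    the per-subregion maps: concatenate and lightly smooth."""
    merged = np.vstack(mapped_subregions)
    for _ in range(passes):
        merged = 0.9 * merged + 0.1 * merged.mean(axis=0)
    return merged


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    region = rng.uniform(0.0, 10.0, size=(1000, 2))   # toy 2D region samples
    parts = partition_region(region, num_parts=4)
    # Subregion solves are independent, so they can run on separate workers
    # (or on separate cluster nodes in the full distributed setting).
    with ProcessPoolExecutor() as pool:
        local_maps = list(pool.map(solve_polysquare_map, parts))
    result = merge_and_optimize(local_maps)
    print(result.shape)
```

The point of the sketch is the control flow, not the numerics: the per-subregion solves are embarrassingly parallel, and only the merge/optimization stage requires global coordination.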

Date

3-18-2019

Committee Chair

Li, Xin

DOI

10.31390/gradschool_dissertations.4890

Available for download on Monday, March 16, 2026
