Face analysis and recognition have a large number of applications, such as security, communication, and entertainment. Current two-dimensional, image-based face recognition systems encounter difficulties with large facial appearance variations due to pose, illumination, and expression changes. We have developed a face recognition system that utilizes three-dimensional shape information to make the system more robust to large head pose changes. Two modalities provided by a facial scan, namely shape and intensity, are utilized and integrated for face matching. While the 3D shape of a face is invariant to head pose (a rigid transformation) and lighting changes, it is not invariant to non-rigid facial movements such as expressions. Collecting and storing multiple templates per subject, covering multiple deformations, is impractical for a large database. We have therefore designed a hierarchical geodesic-based resampling scheme that derives a facial surface representation for establishing correspondence across expressions and subjects. Based on this representation, we extract and model three-dimensional non-rigid facial deformations, such as expression changes, for expression transfer and synthesis. For 3D face matching, a user-specific 3D deformable model driven by facial expressions is built. An alternating optimization scheme fits the deformable model to a test facial scan, yielding a matching distance. To make the matching system fully automatic, an automatic facial feature point extractor is developed. The resulting 3D recognition system handles large head pose changes and expressions simultaneously.
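The alternating fitting scheme mentioned above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a simple linear deformable model (mean shape plus expression basis), pre-established point correspondences, and alternates between a closed-form rigid alignment (Kabsch/Procrustes) and a linear least-squares solve for the deformation coefficients, returning the residual as a matching distance. All function and variable names are illustrative.

```python
import numpy as np

def fit_deformable_model(scan, mean_shape, basis, n_iters=50):
    """Alternating-optimization sketch (illustrative, not the paper's code).

    scan:       (N, 3) test scan points, in correspondence with the model.
    mean_shape: (N, 3) neutral model shape.
    basis:      (K, N, 3) expression deformation basis.
    Returns (alpha, R, t, dist): deformation coefficients, rigid pose,
    and the RMS residual used as a matching distance.
    """
    alpha = np.zeros(basis.shape[0])           # deformation coefficients
    R, t = np.eye(3), np.zeros(3)              # rigid pose
    for _ in range(n_iters):
        # Step 1: fix alpha, solve the rigid pose in closed form (Kabsch)
        model = mean_shape + np.tensordot(alpha, basis, axes=1)
        mu_m, mu_s = model.mean(0), scan.mean(0)
        H = (model - mu_m).T @ (scan - mu_s)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U))])
        R = Vt.T @ D @ U.T                     # reflection-safe rotation
        t = mu_s - R @ mu_m
        # Step 2: fix the pose, solve alpha by linear least squares
        target = (scan - t) @ R                # scan mapped into model frame
        A = basis.reshape(basis.shape[0], -1).T    # (3N, K) design matrix
        b = (target - mean_shape).ravel()
        alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Matching distance: RMS residual between fitted model and the scan
    model = mean_shape + np.tensordot(alpha, basis, axes=1)
    dist = np.sqrt(np.mean(np.sum((model @ R.T + t - scan) ** 2, axis=1)))
    return alpha, R, t, dist
```

Each step has a closed-form solution, so the residual decreases monotonically; a small matching distance indicates that the test scan is well explained by that subject's model under some pose and expression.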