This is a Python package for fitting 3D morphable models (3DMMs) to images of faces. It provides classes to work with and render 3DMMs, and functions that use them to optimize the fitting objective against a source RGB image or a video for facial performance capture.
- Fit a 3DMM shape model to an RGB image
- Jointly optimize rendered pixel error and landmark fitting
- Fit a 3DMM texture model with spherical harmonic lighting to a source RGB image
- Recover the barycentric coordinates of the underlying vertices of the 3DMM mesh triangles that contribute to each pixel of a person's face in an image
- Extract per-vertex texture
- Track expressions and spherical harmonic lighting over a sequence of images (or a video)
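The barycentric recovery in the feature list above boils down to standard barycentric interpolation: each face pixel is covered by one mesh triangle, and the pixel's attributes are a weighted blend of that triangle's three vertices. A minimal NumPy sketch (toy 2D triangle and colors, not this package's API):

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

# A pixel at the triangle's centroid gets equal weights (1/3, 1/3, 1/3)
a, b, c = np.array([0., 0.]), np.array([3., 0.]), np.array([0., 3.])
weights = barycentric_coords(np.array([1., 1.]), a, b, c)

# Per-vertex texture is interpolated with the same weights
vertex_colors = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], float)
pixel_color = weights @ vertex_colors
```

Storing these weights per pixel lets the fitter re-render attributes (texture, lighting normals) without re-rasterizing.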
- Python 3
- Install all requirements with pip:
pip install -r requirements.txt
- Install the face2face library:
pip install -e .
You need to download the 2017 Basel Face Model (BFM) yourself, as we aren't allowed to redistribute it:
- Create a models folder under Facial-Capture
- Download the Basel 2017 model from here into the models folder
- Process it via
python processBFM2017.py
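The processed BFM is a linear PCA model: a face shape is the mean shape plus a weighted sum of basis components. A minimal sketch of that reconstruction with toy arrays (the names `mean_shape`, `shape_basis`, and the sizes are illustrative, not the actual BFM file layout):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_components = 100, 5  # toy sizes; the real BFM has tens of thousands of vertices

# Linear morphable model: vertices = mean + basis @ coefficients
mean_shape = rng.normal(size=3 * n_vertices)               # flattened (x, y, z) per vertex
shape_basis = rng.normal(size=(3 * n_vertices, n_components))
coeffs = np.zeros(n_components)                            # zero coefficients -> mean face

vertices = (mean_shape + shape_basis @ coeffs).reshape(n_vertices, 3)
```

Fitting the shape model amounts to searching for the `coeffs` vector whose reconstructed vertices best explain the image.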
You will also need the trained dlib landmark predictor:
- Download and extract shape_predictor_68_face_landmarks from here into the models folder
- First, create a face identity (use 1 to 3 images max) using
python cli/initialize.py --input_dir path_to_init_images --output_dir path_to_save_identity
- After creating the identity, you can track the expressions using:
python cli/tracker.py --input_dir path_to_tracking_images --output_dir path_to_save_tracking --parameters path_to_save_identity/params.npy
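Conceptually, the tracker holds the identity fixed and solves for expression (and lighting) coefficients frame by frame. A hedged NumPy sketch of one such per-frame step, here reduced to a regularized linear least-squares fit of expression coefficients to observed landmarks (all names and the linear model are illustrative, not this package's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
n_landmarks, n_expr = 68, 10

# Toy linear model: observed 2D landmarks = neutral pose + expr_basis @ expr_coeffs
neutral = rng.normal(size=2 * n_landmarks)
expr_basis = rng.normal(size=(2 * n_landmarks, n_expr))

def fit_frame(observed, neutral, expr_basis, reg=1e-3):
    """Solve for one frame's expression coefficients (Tikhonov-regularized normal equations)."""
    A, b = expr_basis, observed - neutral
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)

# Synthesize a frame from known coefficients and recover them
true_coeffs = rng.normal(size=n_expr)
observed = neutral + expr_basis @ true_coeffs
est = fit_frame(observed, neutral, expr_basis)
```

The real objective also includes the rendered pixel error and spherical-harmonic lighting terms, but the per-frame solve follows the same pattern: warm-start from the previous frame and minimize the residual.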