NIPIG: Neural Implicit Avatar Conditioned on Human Pose, Identity and Gender

Proceedings of CVMP 2025, London, UK

InterDigital · Inria · Univ. Rennes · CNRS · IRISA

Abstract

The creation of realistic avatars in motion is a hot topic in academia and the creative industries. Recent advances in deep learning and implicit representations have opened new avenues of research, especially for enhancing avatar detail with implicit methods based on neural networks. The state-of-the-art implicit method Fast-SNARF encodes various poses of a given identity, but is specialized for that single identity. This paper proposes NIPIG, a method that extends the Fast-SNARF model to handle multiple and novel identities. Our main contribution is to condition the model on identity and pose features, such as an identity code, a gender indicator, and a weight estimate. Extensive experiments led us to a compact model capable of interpolating and extrapolating between training identities. We test several conditioning techniques and network sizes to find the best trade-off between parameter count and result quality. We also propose an efficient fine-tuning approach to handle new out-of-distribution identities without degrading reconstruction performance for in-distribution identities.

Teaser Video


Method Overview

NIPIG extends Fast-SNARF with forward skinning in canonical space and an occupancy network conditioned on identity and pose. Given a posed query point, we iteratively find its canonical correspondence via the LBS field (conditioned on identity), evaluate occupancy (conditioned on identity and pose), and reconstruct the surface. The model can also be conditioned on other attributes such as a weight or gender parameter.

figure_method_overview.png placeholder
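For intuition, below is a simplified sketch of the forward-skinning correspondence step: starting from the posed query point, we alternate between evaluating the (identity-conditioned) skinning weights and inverting the blended bone transform until the canonical estimate stabilizes; the canonical point is then passed to the occupancy network together with the identity and pose codes. This is a minimal fixed-point illustration rather than the Broyden-based solver used in Fast-SNARF, and skinning_weight_fn, the tensor shapes, and the iteration count are assumptions of the example.

import torch

def find_canonical(x_posed, bone_transforms, skinning_weight_fn, n_iters=10):
    # Simplified fixed-point search for canonical correspondences (illustrative only).
    # x_posed:            (N, 3) query points in posed space
    # bone_transforms:    (B, 4, 4) rigid bone transforms (canonical -> posed)
    # skinning_weight_fn: maps (N, 3) canonical points to (N, B) blend weights;
    #                     in NIPIG this field is additionally conditioned on identity.
    ones = torch.ones(x_posed.shape[0], 1, device=x_posed.device)
    x_d_h = torch.cat([x_posed, ones], dim=-1).unsqueeze(-1)   # homogeneous (N, 4, 1)
    x_c = x_posed.clone()                                      # initialize in posed space
    for _ in range(n_iters):
        w = skinning_weight_fn(x_c)                            # (N, B) blend weights
        T = torch.einsum('nb,bij->nij', w, bone_transforms)    # blended LBS transform
        x_c = torch.linalg.solve(T, x_d_h).squeeze(-1)[:, :3]  # x_c ~ T^{-1} x_posed
    return x_c  # fed to the occupancy network with the identity and pose codes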

Conditioning Techniques

We review three standard mechanisms to condition a layer \(l\) with a code \(c \in \mathbb{R}^{n_c}\). Let \(x^{(l)} \in \mathbb{R}^{n_l}\) be the input activation and \(\sigma\) a nonlinearity. The unconditioned layer computes \[ x^{(l+1)} = f^{(l)}\!\big(x^{(l)}\big) \quad \text{with} \quad f^{(l)}(x) = \sigma\!\big(W\, x + B\big), \] where \(W\) and \(B\) are the layer's weight matrix and bias; each mechanism injects the code \(c\) into this computation (see the sketch below).

conditioning.jpg placeholder
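As an illustration of the general form above, the sketch below implements three common ways to inject the code c into a linear layer: concatenating it to the input, feature-wise modulation of the output (FiLM-style), and a hypernetwork that predicts the layer's weights and bias. These are standard examples under assumed layer sizes, not necessarily the exact set compared in the paper.

import torch
import torch.nn as nn

class ConcatLayer(nn.Module):
    # Condition by concatenating the code c to the layer input.
    def __init__(self, n_in, n_out, n_c):
        super().__init__()
        self.fc = nn.Linear(n_in + n_c, n_out)
    def forward(self, x, c):
        return torch.relu(self.fc(torch.cat([x, c], dim=-1)))

class FiLMLayer(nn.Module):
    # Condition by predicting a per-feature scale and shift from c.
    def __init__(self, n_in, n_out, n_c):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.film = nn.Linear(n_c, 2 * n_out)
    def forward(self, x, c):
        gamma, beta = self.film(c).chunk(2, dim=-1)
        return torch.relu(gamma * self.fc(x) + beta)

class HyperLayer(nn.Module):
    # Condition by predicting the layer's weight matrix and bias from c.
    def __init__(self, n_in, n_out, n_c):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.hyper = nn.Linear(n_c, n_out * n_in + n_out)
    def forward(self, x, c):
        params = self.hyper(c)
        W = params[..., :self.n_out * self.n_in].reshape(-1, self.n_out, self.n_in)
        b = params[..., self.n_out * self.n_in:]
        return torch.relu(torch.einsum('boi,bi->bo', W, x) + b)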

Teacher-Student Fine-Tuning

To add a new out-of-distribution identity without catastrophic forgetting, we duplicate the model into a frozen teacher and a trainable student. The student is supervised by ground truth for the new identity and by the teacher's predictions for prior identities. This allows fast adaptation (≈1k iterations) while preserving performance on the original identities.

figure_finetune_teacher_student.jpg placeholder
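Below is a minimal sketch of one fine-tuning step under this scheme, assuming student and teacher are occupancy networks that return logits and that batches carry the conditioning codes; all names are illustrative, not the paper's API.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, new_batch, prior_batch, optimizer, lam=1.0):
    # new_batch:   (points, gt_occupancy, cond) for the new out-of-distribution identity
    # prior_batch: (points, cond) sampled for identities the frozen teacher already knows
    pts_new, occ_gt, cond_new = new_batch
    pts_old, cond_old = prior_batch

    # Ground-truth supervision on the new identity
    loss_new = F.binary_cross_entropy_with_logits(student(pts_new, cond_new), occ_gt)

    # Teacher supervision on prior identities to limit forgetting
    with torch.no_grad():
        occ_teacher = torch.sigmoid(teacher(pts_old, cond_old))
    loss_old = F.binary_cross_entropy_with_logits(student(pts_old, cond_old), occ_teacher)

    loss = loss_new + lam * loss_old
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()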

Pose Generalization in Extreme Motions

NIPIG reposes identities robustly on sequences far from the training distribution (e.g., MPI PosePrior). The forward-skinning field avoids the instability of inverse-skinning approaches and yields clean surfaces even for new identities (see image below).

figure_pose_extremes.png placeholder

Comparison to COAP and Fast-SNARF

Qualitatively, NIPIG reduces the part-boundary seams typical of compositional models like COAP and produces sharper facial details than Fast-SNARF, largely thanks to our higher-density sampling near the face. The model is as compact as Fast-SNARF while representing many more identities.

figure_sota_comparison.jpg placeholder

Quantitative Results & Model Size

Across seen and unseen settings, NIPIG matches or surpasses Fast-SNARF and COAP on IoU and Chamfer distance while using substantially fewer parameters. The single neutral-gender model also performs strongly, with a footprint of about 0.52M parameters (base) or 0.13M (light).

table1_quantitative.jpg placeholder
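For reference, a minimal sketch of the two reported metrics, assuming occupancy is compared at sampled points (IoU) and surfaces are compared as point sets (Chamfer); the threshold and sampling strategy are assumptions of the example.

import torch

def occupancy_iou(pred_occ, gt_occ, threshold=0.5):
    # IoU between predicted and ground-truth occupancy at sampled points.
    pred, gt = pred_occ > threshold, gt_occ > threshold
    inter = (pred & gt).float().sum()
    union = (pred | gt).float().sum().clamp(min=1)
    return (inter / union).item()

def chamfer_distance(pts_a, pts_b):
    # Symmetric Chamfer distance between two surface point sets (N, 3) and (M, 3).
    d = torch.cdist(pts_a, pts_b)  # (N, M) pairwise Euclidean distances
    return (d.min(dim=1).values.mean() + d.min(dim=0).values.mean()).item()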

Fine-Tuning Results & Forgetting

Supervised teacher-student fine-tuning mitigates forgetting on previously seen identities while retaining quality on the new identity. In both cases, fine-tuning converges within a few hundred iterations.

fine-tune.gif placeholder

BibTeX

@inproceedings{nipig_cvmp25,
  title={NIPIG: Neural Implicit Avatar Conditioned on Human Pose, Identity and Gender},
  author={Guillaume Loranchet and Pierre Hellier and Adnane Boukhayma and João Regateiro and Franck Multon},
  booktitle={Proceedings of CVMP 2025},
  year={2025}
}