The creation of realistic avatars in motion is a hot topic in academia and the creative industries. Recent advances in deep learning and implicit representations have opened new research avenues, in particular neural implicit methods that enhance the level of detail of avatars. The state-of-the-art implicit method Fast-SNARF encodes various poses of a given identity, but is specialized to that single identity. This paper proposes NIPIG, a method that extends the Fast-SNARF model to handle multiple and novel identities. Our main contribution is to condition the model on identity and pose features, such as an identity code, a gender indicator, and a weight estimate. Extensive experiments led us to a compact model capable of interpolating and extrapolating between training identities. We evaluate several conditioning techniques and network sizes to find the best trade-off between parameter count and result quality. We also propose an efficient fine-tuning approach that handles new out-of-distribution identities without degrading reconstruction performance on in-distribution identities.