RGBD cameras, such as the Kinect, have recently revolutionized the field of real-time geometry and appearance acquisition. While impressive 3D reconstruction results have been obtained, combining data acquired by multiple RGBD cameras remains a technical challenge. Several methods have been proposed to estimate the internal parameters of each RGBD camera (such as the depth mapping function and focal length). Although the textured geometry obtained by each RGBD camera individually is visually attractive, even state-of-the-art methods have difficulty correctly combining the textured geometries obtained by several RGBD cameras via a single rigid transformation. Based on this observation, our approach registers the RGBD cameras using a smooth field of rigid transformations instead of a single rigid transformation. Experimental results on challenging data demonstrate the validity of the proposed approach.
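To make the central idea concrete, the following is a minimal sketch (not the paper's actual implementation) of evaluating a smooth field of rigid transformations: each of a set of hypothetical control nodes carries its own rotation and translation, and a 3D point is transformed by a spatially weighted blend of the per-node rigid motions, with the averaged rotation projected back onto SO(3) via SVD. The node layout, Gaussian weighting, and parameter names are illustrative assumptions.

```python
import numpy as np

def gaussian_weights(p, nodes, sigma=0.5):
    """Normalized Gaussian weights of point p w.r.t. control-node positions."""
    d2 = np.sum((nodes - p) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def blend_rigid(p, nodes, rotations, translations, sigma=0.5):
    """Apply a smooth field of rigid transformations to a 3D point p.

    Each control node i carries a rotation R_i and translation t_i.
    The rotations are averaged with spatial weights and projected back
    onto SO(3) via SVD; translations are averaged directly.
    """
    w = gaussian_weights(p, nodes, sigma)
    R_avg = sum(wi * Ri for wi, Ri in zip(w, rotations))
    U, _, Vt = np.linalg.svd(R_avg)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    t = sum(wi * ti for wi, ti in zip(w, translations))
    return R @ p + t

# Sanity check: when all nodes share one rigid motion, the field
# reduces to that single rigid transformation.
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
rotations = [Rz, Rz]
translations = [np.array([0.1, 0.0, 0.0])] * 2
p = np.array([0.5, 0.2, 0.0])
q = blend_rigid(p, nodes, rotations, translations)
```

When the per-node transformations differ, the Gaussian weights make the field vary smoothly in space, which is what allows the registration to absorb the residual, non-rigid misalignment between cameras that a single rigid transformation cannot.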