Applying motion capture data for multi-person interaction to virtual characters is challenging because one needs to preserve the interaction semantics in addition to satisfying the general requirements for motion retargeting, such as preventing penetration and preserving naturalness. An efficient method for representing the scene semantics of interaction motions is to define the spatial relationships between the body parts of the characters. However, existing methods of this kind consider only the character skeleton, and thus may require post-processing to refine the interaction motions and remove artifacts from the viewpoint of the skin meshes. This paper proposes a novel method for retargeting interaction motions with respect to character skins. To this end, we introduce the aura mesh, which surrounds a character's skin, in order to represent skin-level spatial relationships between body parts. Using the aura mesh, we can retarget interaction motions while preserving skin-level spatial relationships and reducing skin inter-penetrations.