Haptic virtual fixtures play a key role in telepresence systems, enhancing spatial perception and improving task efficiency. However, combining real and virtual constraints is difficult because their geometric representations differ substantially. This paper proposes a unified virtual fixture model for telepresence systems that combines autonomously acquired real-world obstacles with manually defined virtual surfaces to prevent undesired motions. The real environment is scanned by an RGB-D camera in the form of a point cloud, and implicit virtual surfaces are added to the task environment through an augmented reality technique. When the user interacts with the task environment through a haptic device, our method detects collisions between the proxy and the environmental constraints and estimates the local information of the contact point without reconstructing surface topologies. Once collisions are detected, an integrated constraint solver computes the optimal proxy position that resolves them in a convergent manner. The proposed model provides stable and faithful haptic feedback from heterogeneous, unstructured geometric representations. Experiments verify that our method is fast enough for real-time haptic rendering and renders the target geometries correctly.
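To illustrate the kind of constrained proxy update the abstract describes, the sketch below shows a standard proxy-based resolution step, not the paper's actual solver: the proxy is iteratively projected onto the feasible side of each contact plane (a point and unit normal, as might be estimated locally from point-cloud neighbors), converging toward the feasible point closest to the haptic device position. The function name `resolve_proxy` and the half-space constraint format are assumptions for illustration.

```python
def resolve_proxy(device_pos, constraints, iters=50):
    """Illustrative sketch of convergent proxy resolution.

    device_pos:  (x, y, z) position commanded by the haptic device.
    constraints: list of (point, unit_normal) pairs; each defines a
                 half-space the proxy must stay on the positive side of
                 (a hypothetical local-plane contact model).
    """
    x = list(device_pos)
    for _ in range(iters):
        moved = False
        for p, n in constraints:
            # Signed distance of the proxy to this contact plane.
            d = sum(n[k] * (x[k] - p[k]) for k in range(3))
            if d < 0.0:
                # Penetration: project the proxy back onto the plane
                # along its normal.
                for k in range(3):
                    x[k] -= d * n[k]
                moved = True
        if not moved:
            break  # all constraints satisfied; converged
    return tuple(x)
```

For a single horizontal plane through the origin, a device position below the surface, e.g. `(0, 0, -0.5)`, yields a proxy clamped onto the surface at `(0, 0, 0)`, which is the behavior that produces the restoring haptic force in proxy-based rendering.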