A collaborative virtual environment (CVE) is a shared environment that allows geographically separated users to view and manipulate the same object simultaneously. This paper presents a fast haptic rendering method for users who interact with a virtual object at multiple points or areas in a CVE. Previously, we proposed the Shape-retaining Chain Linked Model for real-time volume haptic rendering. Haptic rendering of an object represented by this model guarantees real-time performance because the deformation of the object is computed locally and then propagated outward through its volume. However, this local computation approach raises a new issue when handling virtual deformable objects in a CVE, where interactions occur at multiple points or areas. To overcome this limitation of local computation, we construct a modeling framework that supports colliding interactions among geographically separated users: the forces generated at the interaction points or areas are summed vectorially. To inspect the behavior of objects modeled with our method, we conduct experiments with volumetric objects consisting of about 500,000 nodes at a haptic update rate of 1000 Hz. We perform a calculation-time analysis to verify real-time performance and conduct human-factor studies to show that the feedback force from our model is realistic. Our experiments verify that our model provides realistic haptic sensations to participants in real time in a CVE.
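At its core, the framework's handling of simultaneous contacts reduces to a vector sum of the per-contact reaction forces. The sketch below illustrates only that summation step; the function name, data representation, and sample values are hypothetical and not taken from the paper.

```python
import math

def net_reaction_force(contact_forces):
    """Vectorially sum 3-D reaction forces, one per interaction point or area.

    `contact_forces` is an illustrative list of (fx, fy, fz) tuples in newtons;
    the real system would derive each vector from its deformation model.
    """
    fx = sum(f[0] for f in contact_forces)
    fy = sum(f[1] for f in contact_forces)
    fz = sum(f[2] for f in contact_forces)
    return (fx, fy, fz)

# Two remote users pressing the same deformable object from different sides:
f_user_a = (0.0, -1.5, 0.2)   # hypothetical force at user A's contact (N)
f_user_b = (0.5, 0.8, 0.0)    # hypothetical force at user B's contact (N)
total = net_reaction_force([f_user_a, f_user_b])
```

Because the sum is a constant-time operation per contact, adding it on top of locally computed deformations does not threaten the 1000 Hz haptic update rate.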