Advances in hardware capability have made it possible to perform ML inference tasks at the edge on the large volumes of sensory data generated by IoT devices such as cameras. As cameras become more pervasive, edge systems need to process streams from multiple sources with overlapping fields of view. In this position paper, we describe a collaborative sensing mechanism at the edge for such cases. We introduce a View Mapping Database (DB) that maps regions in one camera's field of view to corresponding regions in other cameras' views. We analyze the characteristics of five video streams that capture an intersection from multiple angles, prototype a View Mapping DB, and present our preliminary results.
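As a rough illustration of the idea, the sketch below shows one way such a View Mapping DB might be represented. It assumes axis-aligned rectangular regions in pixel coordinates and hypothetical camera identifiers (`cam0`, `cam1`); the paper's actual schema and lookup semantics may differ.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Region:
    """Axis-aligned rectangle in a camera's image plane (pixel coordinates).
    This rectangular representation is an assumption for illustration."""
    x: int
    y: int
    w: int
    h: int

class ViewMappingDB:
    """Maps a region in one camera's field of view to the regions in other
    cameras that observe the same physical area."""

    def __init__(self):
        # (camera_id, Region) -> list of (camera_id, Region) correspondences
        self._mappings = defaultdict(list)

    def add_mapping(self, src_cam, src_region, dst_cam, dst_region):
        """Record that src_region in src_cam overlaps dst_region in dst_cam."""
        self._mappings[(src_cam, src_region)].append((dst_cam, dst_region))

    def lookup(self, cam, region):
        """Return the corresponding (camera, region) pairs in other views."""
        return self._mappings.get((cam, region), [])

# Example: two cameras observing the same corner of an intersection.
db = ViewMappingDB()
db.add_mapping("cam0", Region(100, 50, 200, 150),
               "cam1", Region(420, 80, 180, 160))
print(db.lookup("cam0", Region(100, 50, 200, 150)))
```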