Recent years have witnessed considerable advances in the development and application of robotic technologies. In particular, unmanned underwater vehicles (UUVs) are increasingly being applied to a variety of science and engineering tasks. Operating a UUV autonomously underwater is challenging, however, because global positioning system (GPS) signals cannot be received below the surface. One approach to overcoming this limitation is simultaneous localization and mapping (SLAM). As computer vision algorithms have grown more sophisticated, information from vision sensors has come into wide use in the context of SLAM; this approach, called visual SLAM, exploits the relative motion estimated between images.

This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment that can be applied to a low-cost UUV equipped with a single monocular camera as its main measurement sensor. Visual SLAM with monocular vision poses a variety of challenges when relative motion is determined by matching pairs of images. Among these, this research focuses on the loop-closure problem, one of the most important issues in SLAM. Specifically, a robust loop-closure algorithm is proposed to improve operational performance in terms of both navigation and mapping by efficiently reconstructing image-matching edges.

To demonstrate and evaluate the effectiveness of the proposed loop-closure methodology, experimental datasets obtained in underwater environments are used, and the validity of the proposed algorithm is confirmed by a series of comparative results.
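The loop-closure idea discussed above — proposing an image-matching edge when the current view resembles a place visited much earlier, rather than a recent frame — can be illustrated with a minimal sketch. The descriptor format, function names, similarity measure, and thresholds below are illustrative assumptions for exposition only; they are not the algorithm proposed in this work.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two global image descriptors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def loop_closure_candidates(descriptors, query_idx, min_gap=3, threshold=0.9):
    """Return indices of earlier keyframes whose global descriptor is
    similar enough to the query frame to propose a loop-closure edge.
    Frames within `min_gap` of the query are skipped, so that ordinary
    sequential overlap is not mistaken for a loop closure."""
    query = descriptors[query_idx]
    return [i for i in range(query_idx - min_gap)
            if cosine_similarity(descriptors[i], query) >= threshold]

# Toy global descriptors (e.g., bag-of-visual-words histograms); in
# practice these would be computed from monocular camera images.
descs = [
    [1.0, 0.0, 0.0],    # frame 0
    [0.0, 1.0, 0.0],    # frame 1
    [0.0, 0.0, 1.0],    # frame 2
    [0.5, 0.5, 0.0],    # frame 3
    [0.0, 0.5, 0.5],    # frame 4
    [0.95, 0.05, 0.0],  # frame 5: revisits the area seen in frame 0
]

print(loop_closure_candidates(descs, query_idx=5))  # → [0]
```

In a full visual SLAM pipeline, each proposed candidate edge would then be geometrically verified (e.g., by feature matching between the two images) before being added to the pose graph, since a false loop closure can corrupt both the trajectory estimate and the map.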