DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Kyungsu | ko |
dc.contributor.author | Kim, Jun Hee | ko |
dc.contributor.author | Lee, Haeyun | ko |
dc.contributor.author | Park, Juhum | ko |
dc.contributor.author | Choi, Jihwan | ko |
dc.contributor.author | Hwang, Jae Youn | ko |
dc.date.accessioned | 2023-01-11T09:00:10Z | - |
dc.date.available | 2023-01-11T09:00:10Z | - |
dc.date.created | 2023-01-11 | - |
dc.date.issued | 2022-01 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, v.60 | - |
dc.identifier.issn | 0196-2892 | - |
dc.identifier.uri | http://hdl.handle.net/10203/304231 | - |
dc.description.abstract | Various deep learning-based segmentation models have been developed to segment buildings in aerial images. However, the segmentation maps predicted by the conventional convolutional neural network-based methods cannot accurately determine the shapes and boundaries of segmented buildings. In this article, to improve the prediction accuracy for the boundaries and shapes of segmented buildings in aerial images, we propose the boundary-oriented binary building segmentation model (B3SM). To construct the B3SM for boundary-enhanced semantic segmentation, we present two-scheme learning (Schemes I and II), which uses the upsampling interpolation method (USIM) as a new operator and a boundary-oriented loss function (B-Loss). In Scheme I, a raw input image is processed and transformed into a presegmented map. In Scheme II, the presegmented map from Scheme I is transformed into a more fine-grained representation. To connect these two schemes, we use the USIM operator. In addition, the novel B-Loss function is implemented in B3SM to extract the features of the boundaries of buildings effectively. To perform quantitative evaluation of the shapes and boundaries of segmented buildings generated by B3SM, we develop a new metric called the boundary-oriented intersection over union (B-IoU). After evaluating the effectiveness of two-scheme learning, USIM, and B-Loss for building segmentation, we compare the performance of B3SM to those of other state-of-the-art methods using public and custom datasets. The experimental results demonstrate that the B3SM outperforms other state-of-the-art models, resulting in more accurate shapes and boundaries for segmented buildings in aerial images. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Boundary-Oriented Binary Building Segmentation Model With Two Scheme Learning for Aerial Images | - |
dc.type | Article | - |
dc.identifier.wosid | 000732759100001 | - |
dc.identifier.scopusid | 2-s2.0-85112647483 | - |
dc.type.rims | ART | - |
dc.citation.volume | 60 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | - |
dc.identifier.doi | 10.1109/TGRS.2021.3089623 | - |
dc.contributor.localauthor | Choi, Jihwan | - |
dc.contributor.nonIdAuthor | Lee, Kyungsu | - |
dc.contributor.nonIdAuthor | Kim, Jun Hee | - |
dc.contributor.nonIdAuthor | Lee, Haeyun | - |
dc.contributor.nonIdAuthor | Park, Juhum | - |
dc.contributor.nonIdAuthor | Hwang, Jae Youn | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Image segmentation | - |
dc.subject.keywordAuthor | Buildings | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Shape | - |
dc.subject.keywordAuthor | Architecture | - |
dc.subject.keywordAuthor | Semantics | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Aerial images | - |
dc.subject.keywordAuthor | boundary enhancement | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | semantic segmentation | - |
dc.subject.keywordPlus | SEMANTIC SEGMENTATION | - |
dc.subject.keywordPlus | FOCAL LOSS | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | FEATURES | - |
dc.subject.keywordPlus | EXTRACTION | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.