SpherePHD: Applying CNNs on 360° Images with Non-Euclidean Spherical PolyHeDron Representation

Cited 11 times in Web of Science · Cited 0 times in Scopus
  • Hit : 922
  • Download : 0
DC Field  Value  Language
dc.contributor.author  Lee, Yeonkun  ko
dc.contributor.author  Jeong, Jaeseok  ko
dc.contributor.author  Yun, Jongseob  ko
dc.contributor.author  Cho, Wonjune  ko
dc.contributor.author  Yoon, Kuk-Jin  ko
dc.date.accessioned  2022-01-18T06:40:51Z  -
dc.date.available  2022-01-18T06:40:51Z  -
dc.date.created  2020-05-12  -
dc.date.issued  2022-02  -
dc.identifier.citation  IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.2, pp.834 - 847  -
dc.identifier.issn  0162-8828  -
dc.identifier.uri  http://hdl.handle.net/10203/291851  -
dc.description.abstract  Omni-directional images are becoming more prevalent for understanding the scene in all directions around a camera, as they provide a much wider field-of-view (FoV) than conventional images. In this work, we present a novel approach to representing omni-directional images and show how to apply CNNs to the proposed representation. The proposed representation uses a spherical polyhedron to reduce the distortion inevitably introduced when sampling pixels on a non-Euclidean spherical surface around the camera center. To apply the convolution operation to our representation, we stack the neighboring pixels on top of each pixel and multiply them by trainable parameters. This approach lets us apply the same CNN architectures used on conventional Euclidean 2D images to our representation in a straightforward manner. Compared to the previous work, we additionally compare different kernel designs that can be applied to our method. We also show that our method outperforms other state-of-the-art omni-directional image representations on the monocular depth estimation task. In addition, we propose a novel method to fit bounding ellipses of arbitrary orientation using object detection networks and apply it to an omni-directional real-world human detection dataset.  -
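The convolution described in the abstract (stack each pixel's neighbors on top of it, then multiply by trainable parameters) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the neighbor-index table, channel sizes, and kernel shape are assumptions for the example.

```python
import numpy as np

def neighbor_stack_conv(features, neighbor_idx, weights, bias):
    """Sketch of a spherical-polyhedron convolution:
    gather each pixel's K neighbors, stack them into one vector,
    and apply the same trainable kernel at every pixel.

    features:     (N, C_in)       per-pixel feature vectors
    neighbor_idx: (N, K)          precomputed neighbor table
                                  (assumption: includes the pixel itself)
    weights:      (K*C_in, C_out) trainable parameters
    bias:         (C_out,)        trainable bias
    """
    N, C_in = features.shape
    K = neighbor_idx.shape[1]
    # Gather neighbors and stack them -> (N, K*C_in)
    stacked = features[neighbor_idx].reshape(N, K * C_in)
    # One matrix multiply applies the shared kernel at every pixel,
    # just like an ordinary convolution on a 2D grid
    return stacked @ weights + bias

# Toy example: 4 pixels, 2 input channels, 3 neighbors each (incl. self)
feats = np.arange(8, dtype=np.float32).reshape(4, 2)
nbrs = np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0], [3, 2, 1]])
w = np.ones((6, 1), dtype=np.float32)
out = neighbor_stack_conv(feats, nbrs, w, np.zeros(1, dtype=np.float32))
print(out.shape)  # (4, 1)
```

Because the gather-and-stack step reduces the spherical layout to a regular (N, K*C_in) tensor, standard 2D-CNN building blocks can be reused unchanged, which is the point the abstract makes.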
dc.language  English  -
dc.publisher  IEEE COMPUTER SOC  -
dc.title  SpherePHD: Applying CNNs on 360° Images with Non-Euclidean Spherical PolyHeDron Representation  -
dc.type  Article  -
dc.identifier.wosid  000740006100023  -
dc.identifier.scopusid  2-s2.0-85122779850  -
dc.type.rims  ART  -
dc.citation.volume  44  -
dc.citation.issue  2  -
dc.citation.beginningpage  834  -
dc.citation.endingpage  847  -
dc.citation.publicationname  IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE  -
dc.identifier.doi  10.1109/TPAMI.2020.2997045  -
dc.contributor.localauthor  Yoon, Kuk-Jin  -
dc.contributor.nonIdAuthor  Lee, Yeonkun  -
dc.contributor.nonIdAuthor  Cho, Wonjune  -
dc.description.isOpenAccess  N  -
dc.type.journalArticle  Article  -
dc.subject.keywordAuthor  Omni-directional cameras; 360 degree; convolutional neural network; detection network; semantic segmentation; depth estimation; icosahedron; non-Euclidean deep learning  -
Appears in Collection
ME-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS