DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Hwang, Sung Ju | - |
dc.contributor.advisor | 황성주 | - |
dc.contributor.author | Rhee, Hyunsu | - |
dc.date.accessioned | 2023-06-22T19:31:30Z | - |
dc.date.available | 2023-06-22T19:31:30Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032327&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/308235 | - |
dc.description | Thesis (Master's) - KAIST : Kim Jaechul Graduate School of AI, 2023.2, [iii, 24 p.] | - |
dc.description.abstract | Real-time video segmentation is a crucial task for many real-world applications such as autonomous driving and robot control. Since state-of-the-art semantic segmentation models are often too heavy for real-time applications despite their impressive performance, researchers have proposed lightweight architectures with speed-accuracy trade-offs, achieving real-time speed at the expense of reduced accuracy. In this paper, we propose a novel framework to speed up any architecture with skip-connections for real-time vision tasks by exploiting the temporal locality in videos. Specifically, at the arrival of each frame, we transform the features from the previous frame to reuse them at specific spatial bins. We then perform partial computation of the backbone network on the regions of the current frame that capture temporal differences between the current and previous frames. This is done by dynamically dropping out residual blocks using a gating mechanism that decides which blocks to drop based on inter-frame distortion. We validate our Spatial-Temporal Mask Generator (STMG) on video semantic segmentation benchmarks with multiple backbone networks, and show that our method substantially speeds up inference with minimal loss of accuracy. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Semantic segmentation; Real-time vision; Network pruning | - |
dc.subject | 의미론적 분할; 실시간 시각인식; 신경망 가지치기 | - |
dc.title | Distortion-aware network pruning and feature reuse for real-time video segmentation | - |
dc.title.alternative | 실시간 의미론적 영상 분할을 위한 왜곡 인식 신경망 가지치기와 특징 재사용 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST : Kim Jaechul Graduate School of AI | - |
dc.contributor.alternativeauthor | 이현수 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
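The abstract describes distortion-gated computation: per spatial bin, a residual block is recomputed only when inter-frame distortion exceeds a threshold, and the cached previous-frame feature is reused otherwise. The following is a minimal sketch of that idea, not the thesis's actual implementation; the function names, the `tanh` stand-in for a residual block, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def residual_block(x):
    # Illustrative stand-in for a convolutional residual block.
    return x + np.tanh(x)

def distortion_gate(curr_bins, prev_bins, threshold=0.1):
    # Per-bin mean absolute inter-frame difference; True means "recompute".
    return np.abs(curr_bins - prev_bins).mean(axis=(-2, -1)) > threshold

def gated_forward(curr_bins, prev_bins, prev_features, threshold=0.1):
    """All inputs are (num_bins, H, W) arrays of per-bin features.

    Recomputes the block only on bins whose inter-frame distortion is
    above the threshold; otherwise reuses the cached previous feature.
    """
    recompute = distortion_gate(curr_bins, prev_bins, threshold)
    out = np.empty_like(prev_features)
    for i in range(curr_bins.shape[0]):
        if recompute[i]:
            out[i] = residual_block(curr_bins[i])  # changed region: compute
        else:
            out[i] = prev_features[i]              # static region: reuse cache
    return out, recompute
```

In the actual framework the gate would additionally transform the reused features before pasting them in, and the decision is made per residual block across the backbone; this sketch only shows the reuse-vs-recompute branching for a single block.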