A descriptor generation processor for low-power object recognition in video frames

In recent years, object recognition has played a key role in mobile vision applications such as head-mounted displays (HMDs), robot vision, and mini-UAVs. Several types of object recognition processors have been reported, achieving real-time 30 frames per second (fps) object recognition with power consumption of 260~350 mW. Although this power consumption is acceptable for non-mobile or less power-constrained applications, such as surveillance systems and driver assistance systems, in mobile applications the limited battery capacity requires that the power consumption of object recognition be reduced further. In a recent object recognition processor, the descriptor generation processor consumes almost half of the total recognition power. The two main bottlenecks of descriptor generation processors are (1) the heavy workload from real-time 30 fps HD-resolution input and (2) the large number of non-linear operations in the descriptor generation process. To address these bottlenecks, a new object recognition flow with a feature reuse model and an LUT-based descriptor generation processor are proposed, respectively. In the new object recognition flow with the feature reuse model, inter-frame similarity between video frames is exploited to reduce the workload of the descriptor generation processor; with the proposed flow, over 50% of the keypoints in the current frame reuse features from the previous frame. The LUT-based descriptor generation processor has two main features for efficient descriptor generation: high-cost non-linear operations are replaced with LUT-based operations with no accuracy reduction, and a highly utilized keypoint-level pipeline and a pixel-level pipeline are used to increase energy efficiency. The proposed descriptor generation processor with the feature reuse object recognition flow is implemented in a 65 nm CMOS technology.
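To make the feature-reuse idea concrete, the following is a minimal software sketch of the flow the abstract describes. All names, the distance threshold, and the brute-force matching strategy are illustrative assumptions, not the thesis's hardware implementation: a keypoint in the current frame reuses the previous frame's descriptor when a sufficiently similar keypoint existed nearby, skipping the expensive descriptor generation step.

```python
# Hedged sketch of the feature-reuse flow. The matching rule, threshold,
# and descriptor function are illustrative assumptions only.

def generate_descriptor(patch):
    """Stand-in for the expensive descriptor computation (e.g., SIFT)."""
    return [sum(patch)]  # placeholder for a real descriptor vector

def descriptors_with_reuse(curr_keypoints, prev_cache, dist_thresh=2.0):
    """For each current-frame keypoint, reuse the previous frame's cached
    descriptor when a close-enough keypoint exists; otherwise compute a
    fresh descriptor. Returns the descriptors and the reuse count."""
    out, reused = {}, 0
    for kp, patch in curr_keypoints.items():
        # Nearest previous-frame keypoint (brute force, for clarity).
        best = min(prev_cache,
                   key=lambda p: (p[0] - kp[0]) ** 2 + (p[1] - kp[1]) ** 2,
                   default=None)
        if best is not None and \
           ((best[0] - kp[0]) ** 2 + (best[1] - kp[1]) ** 2) ** 0.5 <= dist_thresh:
            out[kp] = prev_cache[best]   # reuse: descriptor generation skipped
            reused += 1
        else:
            out[kp] = generate_descriptor(patch)
    return out, reused

# A keypoint within the threshold of a previous-frame keypoint reuses
# that frame's descriptor; the other keypoint is computed fresh.
prev_cache = {(11, 10): [9], (100, 100): [7]}
curr = {(10, 10): [1, 2], (50, 50): [3, 4]}
descs, reused = descriptors_with_reuse(curr, prev_cache)
```

In the thesis's setting, the reuse rate (over 50% of keypoints, per the abstract) directly scales down the descriptor generation workload, since a cache hit costs one lookup instead of a full descriptor computation.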
The feature reuse engine operates at 100 MHz and the descriptor generation processor at 50 MHz, both under a 1.2 V supply voltage. The design consumes 9 mW on average and achieves a throughput of 385K descriptors/s without the feature reuse scheme. Compared to a state-of-the-art vision processor, 22.9 times higher energy efficiency and 3.5 times higher area efficiency are achieved.
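The LUT substitution for non-linear operations can be illustrated in software. The sketch below replaces atan2-based gradient-orientation binning, a non-linear step common in SIFT-style descriptor generation, with a table indexed by quantized gradients; the gradient range and bin count here are assumptions, not the thesis's parameters. Because the table is filled from the exact computation over the full quantized input range, the lookup matches the direct computation bit-for-bit, consistent with the abstract's "no accuracy reduction" claim:

```python
import math

# Hedged sketch of replacing a non-linear operation with a look-up table.
# The 8 orientation bins and the [-128, 127] gradient range are
# illustrative assumptions.

BINS = 8  # orientation bins, as in a SIFT-style descriptor

def orientation_bin(dx, dy):
    """Reference: direct non-linear computation via atan2."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi) * BINS) % BINS

# Build the LUT once (256 x 256 entries for 8-bit signed gradients);
# thereafter each orientation lookup is a single memory access.
LUT = [[orientation_bin(dx, dy) for dy in range(-128, 128)]
       for dx in range(-128, 128)]

def orientation_bin_lut(dx, dy):
    """LUT version: no trigonometry at run time."""
    return LUT[dx + 128][dy + 128]
```

In hardware, the same idea trades a small ROM/SRAM for the arithmetic units that would otherwise evaluate the non-linear function, which is where the energy saving comes from.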
Advisors
Yoo, Hoi-Jun
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2015
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Department of Electrical Engineering, 2015.2, [iv, 30 p.]

Keywords

descriptor generation; object recognition; real-time; video frames; inter-frame similarity; feature reuse; look-up table (LUT); pipeline; SIFT; pipeline architecture

URI
http://hdl.handle.net/10203/266694
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=849306&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
