DSPU: A 281.6mW Real-Time Depth Signal Processing Unit for Deep Learning-Based Dense RGB-D Data Acquisition with Depth Fusion and 3D Bounding Box Extraction in Mobile Platforms

Emerging mobile platforms, such as autonomous robots and AR devices, require RGB-D data and 3D bounding-box (BB) information for accurate navigation and seamless interaction with the surrounding environment. Specifically, RGB-D data and 3D BBs must be extracted in real time (> 30fps) at low power (< 1W) due to limited battery capacity. However, a conventional depth processing system consumes high power because it relies on a high-performance (HP) time-of-flight (ToF) sensor with an illuminator (> 3W) [1]. Moreover, even the HP ToF fails to extract depth in areas of extreme reflectance, leading to failure in navigation or AR interaction. In addition, software implementation on an application processor suffers from high latency (> 0.1s) to preprocess the depth data and process the 3D point cloud-based neural network (PNN) [2]. Therefore, this paper proposes an SoC for low-power, low-latency depth estimation and 3D object detection with high accuracy, as shown in Fig. 33.4.1. The system implements depth fusion [3], [4] to allow accurate RGB-D extraction without hollows, while using a low-power (LP) ToF sensor (< 0.4W). The SoC fully accelerates the depth-processing pipeline, achieving a maximum of 45.6fps.
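The depth-fusion idea described above — keeping trusted LP ToF measurements and filling the hollows with a dense depth map predicted from the RGB image — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `fuse_depth` function, the zero-valued invalid-pixel convention, and the median-based scale alignment are all assumptions made for the example.

```python
import numpy as np

def fuse_depth(tof_depth, predicted_depth, valid_mask=None):
    """Hedged sketch of depth fusion: keep valid ToF depth pixels and
    fill hollows with an RGB-predicted dense depth map, scale-aligned
    to the valid ToF region.

    tof_depth       : HxW array; assumed 0 where the LP ToF failed
                      (e.g., extreme reflectance).
    predicted_depth : HxW dense depth predicted from the RGB image,
                      assumed known only up to scale.
    """
    if valid_mask is None:
        valid_mask = tof_depth > 0  # assumed invalid-pixel convention
    # Align the prediction's scale to the metric ToF measurements
    # using the ratio of medians over the valid region.
    scale = np.median(tof_depth[valid_mask]) / np.median(predicted_depth[valid_mask])
    # Trust ToF where it is valid; use the scaled prediction elsewhere.
    return np.where(valid_mask, tof_depth, scale * predicted_depth)
```

For example, a ToF map with holes at zero-valued pixels gets those pixels replaced by the scaled prediction, yielding a dense RGB-D frame suitable for the downstream PNN.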
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2022-02
Language
English
Citation

2022 IEEE International Solid-State Circuits Conference, ISSCC 2022, pp.510 - 512

ISSN
0193-6530
DOI
10.1109/ISSCC42614.2022.9731699
URI
http://hdl.handle.net/10203/299786
Appears in Collection
EE-Conference Papers(학술회의논문)
