Defect and change detections robust to domain shift for industrial monitoring system

Recent advances in deep learning have enabled monitoring systems to perform complex monitoring tasks such as defect detection and change detection, which play a significant role in maintaining productivity and security. However, these learning-based detection methods are particularly vulnerable to domain shift, that is, the distribution mismatch between training and deployment. To tackle this problem, we propose 1) a defect detection network robust to variations in input density and size, achieved by employing domain-knowledge-inspired feature extraction, and 2) a change detection network robust to non-targeted environmental variations, achieved by utilizing an end-to-end structure. For defect detection, we aim to detect defects of a solder paste printer in surface mount technology (SMT) using defective solder paste pattern (DSPP) images. Since DSPP images are sparse, vary in size, and are difficult to collect, existing CNN-based classifiers tend to overfit to the training set and fail to generalize. Moreover, existing studies that employ only multi-label classifiers are less helpful because, when two or more defects are observed in a DSPP image, the location of each defect cannot be specified. To solve these problems, we propose a dual-level defect detection PointNet (D3PointNet), which extracts point-cloud features from DSPP images and then performs segmentation and multi-label classification simultaneously. Experimental results show that the proposed D3PointNet is robust to sparsity and size changes in DSPP images, and its exact match score was 10.2% higher than that of the existing CNN-based state-of-the-art multi-label classification model on the DSPP image dataset. For change detection, we aim to robustly detect change regions between two scenes under a domain shift scenario, that is, when non-targeted environmental variation occurs between the two scenes.
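The dissertation does not give implementation details here, but the core idea behind D3PointNet — treating a sparse defect image as a point cloud so that the representation is insensitive to image size and point density — can be sketched as follows. This is an illustrative toy, not the author's code; the function names, the single-layer "shared MLP", and all parameters are hypothetical, chosen only to show resolution-normalized point extraction and PointNet-style symmetric pooling.

```python
import numpy as np

def image_to_point_cloud(img, threshold=0.0):
    """Convert a sparse 2-D defect image into an (N, 3) point cloud:
    resolution-normalized (x, y) coordinates plus the pixel value."""
    ys, xs = np.nonzero(img > threshold)
    h, w = img.shape
    # Normalizing coordinates to [0, 1) makes the representation
    # independent of the original image resolution (size robustness).
    return np.stack([xs / w, ys / h, img[ys, xs]], axis=1)

def global_feature(points, weight):
    """PointNet-style global descriptor: a shared per-point transform
    followed by symmetric (max) pooling, so the result is invariant to
    point ordering and insensitive to the number of points."""
    per_point = np.maximum(points @ weight, 0.0)  # shared linear map + ReLU
    return per_point.max(axis=0)                  # symmetric pooling
```

In the full model such a global feature would feed a multi-label classification head, while the pre-pooling per-point features would feed a segmentation head, giving the dual-level output described above.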
To tackle this problem, we propose SimFaC, a novel end-to-end scene change detection network that simultaneously conducts correspondence and mis-correspondence estimation. To verify the robustness of the proposed SimFaC, we also propose a highly challenging scene change detection dataset, ChangeSim, which includes non-targeted environmental variations such as air turbidity and lighting changes. Experimental results show that SimFaC outperforms state-of-the-art methods by a large margin (29%p), especially under extreme domain shift scenarios. Furthermore, SimFaC achieves new state-of-the-art scores on the TSUNAMI, GSV, and VL-CMU-CD datasets, with improvements of 0.8%p, 1.8%p, and 4.1%p, respectively. Moreover, the proposed SimFaC can be applied directly to visual place recognition without additional modules. When place recognition and change detection are performed simultaneously using the proposed SimFaC, execution speed increases about 3 times compared to a baseline in which each task is performed separately, while accuracy is maintained.
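The idea of deciding change via mis-correspondence — a region is "changed" when no good cross-scene match exists for it — can be illustrated with a minimal sketch. This is not the SimFaC architecture; the function name, the cosine-similarity matching, and the threshold are assumptions used only to make the correspondence/mis-correspondence intuition concrete.

```python
import numpy as np

def change_mask_from_features(feat_t0, feat_t1, tau=0.5):
    """feat_t0, feat_t1: (N, C) arrays of L2-normalized descriptors for
    N locations in two scenes. A location whose best cross-scene cosine
    similarity falls below tau has no reliable correspondence
    (mis-correspondence) and is flagged as changed."""
    sim = feat_t0 @ feat_t1.T      # (N, N) cosine-similarity cost volume
    best = sim.max(axis=1)         # best correspondence score per location
    return best < tau              # True where no good match exists
```

The same cost volume supports place recognition (aggregate the best-match scores into a scene-level similarity), which is why a joint model can serve both tasks with one forward pass, as the abstract's 3x speedup suggests.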
Advisors
Kim, Jong-Hwan
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2022.2, [vii, 68 p.]

URI
http://hdl.handle.net/10203/309080
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=996253&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
