Recent advances in deep learning have enabled monitoring systems to perform complex tasks such as defect detection and change detection, which play a significant role in maintaining productivity and security. However, these learning-based detection methods are particularly vulnerable to domain shift, that is, the distribution mismatch between training and deployment. To tackle this problem, we propose 1) a defect detection network robust to variations in input density and size, achieved by employing domain-knowledge-inspired feature extraction, and 2) a change detection network robust to non-targeted environmental variations, achieved by adopting an end-to-end structure.
For defect detection, we aim to detect defects of a solder paste printer in surface mount technology (SMT) using defective solder paste pattern (DSPP) images. Since DSPP images are sparse, vary in size, and are difficult to collect, existing CNN-based classifiers tend to overfit the training set and fail to generalize. Moreover, existing studies that employ only multi-label classifiers are of limited use because, when two or more defects appear in a DSPP image, the location of each defect cannot be identified. To solve these problems, we propose a dual-level defect detection PointNet (D3PointNet), which extracts point cloud features from DSPP images and then performs segmentation and multi-label classification simultaneously. Experimental results show that the proposed D3PointNet is robust to sparsity and size changes of the DSPP image, and its exact match score was 10.2% higher than that of the existing CNN-based state-of-the-art multi-label classification model on the DSPP image dataset.
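The image-to-point-cloud step is not detailed in this abstract; as a minimal sketch of one plausible conversion, each nonzero pixel of a sparse DSPP image can be treated as a point, with coordinates normalized by image size so that clouds from differently sized images are comparable. The function name and the normalization choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_to_point_cloud(img: np.ndarray) -> np.ndarray:
    """Convert a sparse 2-D defect image into an (N, 3) point cloud.

    Each nonzero pixel becomes a point (x, y, intensity); coordinates are
    normalized by the image size so clouds from differently sized images
    are directly comparable (an illustrative choice, not the paper's).
    """
    ys, xs = np.nonzero(img)            # indices of defect pixels
    h, w = img.shape
    pts = np.stack([xs / w, ys / h, img[ys, xs]], axis=1)
    return pts.astype(np.float32)

# Toy 4x4 "DSPP" image with two active pixels
img = np.zeros((4, 4))
img[1, 2] = 0.5
img[3, 0] = 1.0
cloud = image_to_point_cloud(img)
print(cloud.shape)  # (2, 3)
```

A point-set representation like this is naturally invariant to the image's overall resolution, which is consistent with the robustness to size changes reported above.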
For change detection, we aim to robustly detect change regions given two scenes under the domain shift scenario, that is, when non-targeted environmental variation occurs between the two scenes. To tackle this problem, we propose SimFaC, a novel end-to-end scene change detection network that simultaneously conducts correspondence and mis-correspondence estimation. To verify the robustness of the proposed SimFaC, we also propose a highly challenging scene change detection dataset, ChangeSim, which includes non-targeted environmental variations such as air turbidity and lighting changes. Experimental results show that SimFaC outperforms state-of-the-art methods by a large margin (29%p), especially under extreme domain shift scenarios. Furthermore, SimFaC also achieves new state-of-the-art scores on the TSUNAMI, GSV, and VL-CMU-CD datasets, with improvements of 0.8%p, 1.8%p, and 4.1%p, respectively. Moreover, the proposed SimFaC can be directly applied to visual place recognition without additional modules. When place recognition and change detection are performed simultaneously using the proposed SimFaC, execution speed increases about 3 times compared to the baseline in which each task is performed separately, while accuracy is maintained.
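The network internals are beyond the scope of this abstract; to illustrate the core idea of mis-correspondence estimation, a toy non-learned sketch can flag as changed those pixels whose features in one scene do not match their counterparts in the other. The actual network learns this jointly with dense correspondence end to end, which the fixed cosine-similarity threshold below only crudely approximates; all names, the threshold, and the assumption of pre-aligned feature maps are illustrative.

```python
import numpy as np

def miscorrespondence_map(feat_a, feat_b, thresh=0.7):
    """Toy mis-correspondence estimation on per-pixel feature maps.

    feat_a, feat_b: (H, W, C) feature maps of two scenes, assumed already
    aligned. A pixel is flagged as changed when the cosine similarity
    between its two features falls below `thresh` (illustrative choice).
    """
    a = feat_a / (np.linalg.norm(feat_a, axis=-1, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=-1, keepdims=True) + 1e-8)
    sim = (a * b).sum(axis=-1)          # per-pixel cosine similarity
    return sim < thresh                 # True where features disagree

# Two 2x2 feature maps differing only at pixel (0, 0)
f1 = np.ones((2, 2, 4))
f2 = np.ones((2, 2, 4))
f2[0, 0] = [-1, 1, -1, 1]               # dissimilar feature: a change
change = miscorrespondence_map(f1, f2)
print(change)  # True only at (0, 0)
```

A learned, jointly estimated correspondence replaces the "already aligned" assumption in practice, which is what makes the end-to-end design robust to viewpoint and environmental variation.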