Empowering construction site safety with portable AI and computer vision: real-time detection of personal protective equipment and fall incidents

Abstract

On construction sites, fall and injury prevention is crucial to ensure worker safety and to minimize the serious consequences of fall incidents. This thesis focuses on empowering construction site safety with portable AI and computer vision for real-time automated detection of personal protective equipment (PPE) usage and fall incidents.

First, we introduce an improved YOLOv8 model for accurate detection of the proper usage of PPE (helmet, harness, lanyard) on edge devices. A novel large-scale multi-class PPE dataset is constructed. To balance detection accuracy and lightweight design, we improve YOLOv8 by combining the coordinate attention module, the ghost convolution module, transfer learning, and merge non-maximum suppression (merge-NMS). The proposed model outperforms the original YOLOv8, improving mAP50 by 1.58% and mAP50-95 by 3.04% while reducing the computational cost. Deployed on an edge device, the Jetson Xavier NX, it achieves 9.11 FPS with a 92.52% mAP50.

Second, we develop a real-time multi-person fall detection model for construction sites. Based on the constructed large-scale fall dataset and the improved YOLOv8, we achieve 93.60% accuracy in human detection. Further integration of AlphaPose and SORT enables the extraction of skeleton keypoints for multiple persons across consecutive frames. Using a 1D CNN-LSTM, our model classifies activities into fall and non-fall incidents based on consecutive keypoints, achieving an accuracy of 98.66% (sensitivity: 97.32%, specificity: 99.10%). In addition, this model shows robust performance under different occlusion levels. Deployed on the Jetson Xavier NX, it achieves 6.44 FPS.

Finally, we design an integrated portable safety monitoring system on edge devices that can simultaneously monitor PPE usage and detect fall incidents. The system provides flexibility and adaptability for real-world applications at construction sites, enabling remote monitoring. These technologies hold great promise for improving construction safety, laying a foundation for efficient, real-time automated safety monitoring and enhancing worker safety in the construction industry.
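The merge-NMS step mentioned for the PPE detector can be illustrated with a short sketch. The following is a minimal NumPy example of the general merge-NMS idea, not the thesis implementation: instead of discarding lower-scoring overlapping boxes as standard NMS does, detections above an IoU threshold are fused by a confidence-weighted average. The function names, threshold, and weighting scheme are illustrative assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_nms(boxes, scores, iou_thr=0.5):
    """Fuse overlapping boxes by confidence-weighted averaging (merge-NMS sketch)."""
    order = np.argsort(scores)[::-1]          # process highest-confidence boxes first
    boxes, scores = boxes[order], scores[order]
    merged_boxes, merged_scores = [], []
    used = np.zeros(len(boxes), dtype=bool)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = (iou(boxes[i], boxes) >= iou_thr) & ~used   # cluster of overlapping boxes
        used |= group
        w = scores[group][:, None]                          # confidence weights
        merged_boxes.append((boxes[group] * w).sum(0) / w.sum())
        merged_scores.append(scores[group].max())
    return np.array(merged_boxes), np.array(merged_scores)
```

A typical call would pass the detector's raw boxes as an (N, 4) array and confidences as an (N,) array; the weighted fusion keeps localization evidence from all overlapping candidates rather than only the top-scoring one.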
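To make the fall-classification stage concrete, below is a minimal PyTorch sketch of a 1D CNN-LSTM that classifies a window of tracked pose keypoints into fall vs. non-fall. The keypoint count (17 COCO-style joints), window length (30 frames), and layer sizes are assumptions for illustration, not the thesis configuration.

```python
import torch
import torch.nn as nn

class FallClassifier(nn.Module):
    """Illustrative 1D CNN-LSTM over per-frame keypoint coordinates."""
    def __init__(self, n_keypoints=17, hidden=64, n_classes=2):
        super().__init__()
        in_ch = n_keypoints * 2                  # (x, y) per keypoint, per frame
        self.cnn = nn.Sequential(                # 1D convolutions along the time axis
            nn.Conv1d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # logits for fall vs. non-fall

    def forward(self, x):                        # x: (batch, frames, in_ch)
        x = self.cnn(x.transpose(1, 2))          # -> (batch, 64, frames)
        _, (h, _) = self.lstm(x.transpose(1, 2)) # last hidden state summarizes the window
        return self.head(h[-1])                  # -> (batch, n_classes)

# Usage: a batch of 8 tracked persons, each with 30 frames of 34 coordinates.
logits = FallClassifier()(torch.randn(8, 30, 34))
```

In such a pipeline, each SORT track would contribute one sliding window of AlphaPose keypoints, so multiple workers can be classified independently in the same video stream.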
Advisors
셔핑숑
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: Graduate School of Data Science, 2024.2, [iv, 53 p.]

Keywords

Construction safety; Falls; Personal protective equipment; Artificial intelligence (AI); Computer vision; Object detection; Edge device

URI
http://hdl.handle.net/10203/321425
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096210&flag=dissertation
Appears in Collection
IE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
