Reinforcement learning for predicting traffic accidents

As the demand for autonomous driving increases, ensuring safety becomes paramount. Early accident prediction using deep learning methods for driving safety has recently gained much attention. Given a dashcam video as input, the task is to predict an impending accident as early as possible and to predict the point where the driver should look. We propose to exploit the double actors and regularized critics (DARC) method, for the first time, on this accident-forecasting platform. We draw inspiration from DARC because it is a state-of-the-art reinforcement learning (RL) method for continuous action spaces, which suits accident anticipation. Results show that by utilizing DARC, we can make predictions 5% earlier on average while improving on multiple precision metrics compared with existing methods. These results imply that our RL-based problem formulation could significantly increase the safety of autonomous driving.
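To make the double-actor, regularized-critic idea behind the abstract concrete, the sketch below shows a minimal, hypothetical layout of such an agent in Python/PyTorch. It is not the authors' released code: the feature dimension, the 3-dimensional continuous action (an accident score plus a 2-D fixation point), the network sizes, and the mixing weight `nu` are all assumptions for illustration only.

```python
# Hypothetical sketch of a DARC-style agent for accident anticipation.
# All dimensions and hyper-parameters below are assumptions, not the paper's values.
import torch
import torch.nn as nn

OBS_DIM = 512   # assumed size of a per-frame dashcam feature vector
ACT_DIM = 3     # assumed continuous action: [accident score, fixation x, fixation y]

class Actor(nn.Module):
    """Maps an observation to a continuous action in [-1, 1]^ACT_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Scores an (observation, action) pair with a scalar Q-value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# DARC keeps two actors and two critics; rather than taking a hard minimum
# over critics (as in TD3), the target blends the two critic estimates.
actors = [Actor(), Actor()]
critics = [Critic(), Critic()]

obs = torch.randn(1, OBS_DIM)                       # stand-in for a frame feature
actions = [actor(obs) for actor in actors]          # each actor proposes an action
values = [critics[i](obs, actions[i]) for i in range(2)]

# Blended target value; nu = 0.5 is an illustrative choice of the mixing weight.
nu = 0.5
target = nu * torch.min(values[0], values[1]) + (1 - nu) * torch.max(values[0], values[1])
```

In a full training loop, this blended target would drive the critic updates (together with a regularizer that keeps the two critics' estimates close), and each actor would be updated against its paired critic; those details are omitted here for brevity.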
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2023-02-20
Language
English
Citation
5th International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2023), pp. 684-688
ISSN
2831-6991
DOI
10.1109/ICAIIC57133.2023.10067034
URI
http://hdl.handle.net/10203/316409
Appears in Collection
GT-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.