As artificial intelligence (AI) systems become pervasive across many fields, the reliability
of deep learning models in real-world applications is increasingly important. This dissertation
studies methods for improving the robustness of deep learning models so that they can operate safely
in the diverse situations that may arise in practice. In the first study, an anomaly detection
method is proposed so that a deep learning model can operate safely when faced with abnormal data
unseen during training. Among the various classes of deep learning models, this study focuses on
deep generative models for detecting anomalies. By formulating anomaly detection as a Bayesian
hypothesis test, a locally powerful Bayesian hypothesis test based on a deep generative model is proposed.
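The specific test statistic of the proposed method is not reproduced here; as a rough illustration only, anomaly detection with a generative density model can be phrased as a likelihood-based hypothesis test: score each point by the model's log-likelihood and reject the "normal" hypothesis when the score falls below a threshold calibrated on normal data. In this sketch a Gaussian kernel density estimate stands in for the deep generative model, and the 1% calibration threshold is an assumed choice, not the dissertation's method.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# "Normal" training data: a stand-in for the distribution a deep
# generative model would learn (here, a 2-D Gaussian blob).
train = rng.normal(loc=0.0, scale=1.0, size=(2000, 2))

# Density model fit on normal data only; the KDE plays the role of
# the deep generative model in this sketch.
density = gaussian_kde(train.T)

# Calibrate a rejection threshold: reject H0 ("x is normal") when the
# log-likelihood falls below the 1st percentile of training scores.
train_scores = density.logpdf(train.T)
threshold = np.percentile(train_scores, 1.0)

def is_anomaly(x: np.ndarray) -> bool:
    """Flag x as anomalous if its log-likelihood is below threshold."""
    return float(density.logpdf(x.reshape(2, 1))) < threshold

print(is_anomaly(np.array([0.1, -0.2])))  # near the training mode -> False
print(is_anomaly(np.array([8.0, 8.0])))   # far outside the support -> True
```

A deep generative model would replace `gaussian_kde` with a learned density whose log-likelihood is evaluated the same way.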
second study, a density estimation method that is robust to adversarial examples is proposed so that
the performance of deep learning models is maintained under external disturbances. The flow-based
generative model, one class of deep generative models, is extended to a Bayesian flow-based
generative model. Compared with existing models, the Bayesian flow-based generative model performs
robustly even on adversarially generated test data. In the third study, extending the second,
a method for improving the generalization ability of deep learning models is proposed to enhance
their performance in practical applications. A new prior distribution that can be applied generally to Bayesian deep learning models, called the ‘inverse reference prior distribution’, is proposed. The inverse reference prior distribution regularizes the Fisher information matrix of a Bayesian deep learning model and effectively improves its generalization ability.
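The inverse reference prior itself is not spelled out in this summary; as a loose illustration of the quantity it acts on, the sketch below computes the empirical Fisher information of a small model and adds a trace penalty to the training loss. The logistic-regression model and the trace-penalty surrogate are assumptions for illustration, not the dissertation's prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data and a logistic-regression "model" standing in for a
# Bayesian deep learning model.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def empirical_fisher(w):
    """Empirical Fisher information F = E[s s^T], where s is the
    per-example score (gradient of the log-likelihood w.r.t. w)."""
    p = sigmoid(X @ w)
    scores = (y - p)[:, None] * X        # (n, d) per-example scores
    return scores.T @ scores / len(y)    # (d, d)

def penalized_loss(w, lam=0.1):
    """Negative log-likelihood plus a trace penalty on the Fisher
    information -- an illustrative surrogate for a Fisher-regularizing
    prior, not the inverse reference prior itself."""
    p = sigmoid(X @ w)
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return nll + lam * np.trace(empirical_fisher(w))

w0 = np.zeros(3)
F = empirical_fisher(w0)
print(F.shape)                 # (3, 3)
print(penalized_loss(w0) > 0)  # True
```

In a Bayesian treatment, a term of this kind would enter through the log-prior rather than as an ad hoc penalty.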