The recent growth in the use of machine learning in decision making has drawn attention to the problem of opacity. This thesis analyses the epistemic and ethical implications of the opacity of machine learning through a case study of the use of offender risk assessment tools in judicial decision making. Owing to a trade-off between explanatory power and predictive accuracy, machine learning produces an epistemic paradox: the improvement of knowledge necessitates a restriction of knowledge. This paradox gives rise to epistemic opacity, which in turn entails ethical opacity, namely the impossibility of verifying the ethical values incorporated in the construction of machine learning models. This thesis argues that epistemic opacity, ethical opacity, external opacity, and internal opacity are intertwined in machine learning. Furthermore, it suggests that an understanding of these different kinds of opacity can contribute to resolving the ethical and social problems that characterize the opacity of machine learning.