Despite recent advances in speech synthesis models, the evaluation of such models still relies purely on human judgment expressed as a single naturalness score, such as the Mean Opinion Score (MOS). A score-based metric gives no further information about which parts of the speech are unnatural or why human judges perceive them as unnatural. We present RedPen, a novel speech dataset with human annotations of unnatural speech regions and their corresponding reasons. RedPen consists of 180 synthesized speech samples whose unnatural regions were annotated by crowd workers; these regions are then explained and categorized by error type, such as voice trembling and background noise. We find that our dataset explains unnatural speech regions better than model-driven unnaturalness prediction. Our analysis also shows that each synthesis model exhibits a different distribution of error types. In addition, we develop an audio naturalness prediction pipeline consisting of RegionMOS and a reason classifier, which predicts the MOS, unnatural regions, and their reasons for each speech sample. We show that RegionMOS improves MOS prediction performance over a model that predicts MOS alone. In sum, our dataset demonstrates that diverse error regions and error types lie beneath a single naturalness score. We believe RedPen will shed light on the evaluation of speech synthesis.