Adversarial examples, which are imperceptibly crafted by adversarial attacks, can fool neural networks. Many defense methods have been proposed, but new and stronger attacks can still threaten existing defenses. This possibility highlights the importance of certified defense methods, which train deep neural networks with verifiably robust guarantees. Interval bound propagation (IBP)-based methods have been demonstrated to be the most effective for certified defense. However, we observe that these methods suffer from Low Epsilon Overfitting (LEO), a problem arising from their training schedule, which gradually increases the input perturbation bound ($\epsilon$). In this paper, we show that LEO can disturb the learning of even a simple linear classifier at higher $\epsilon$, and we provide experimental evidence of LEO. Based on these observations, we propose a new training strategy, BatchMix, which mixes various $\epsilon$ values within a mini-batch to alleviate LEO. Experimental results on the MNIST and CIFAR-10 datasets show that BatchMix can improve the performance of IBP-based methods.
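The core idea of mixing perturbation bounds within a mini-batch can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name `batchmix_epsilons` and the mixing fractions are hypothetical, and stand in for whichever per-example $\epsilon$ assignment rule BatchMix actually uses.

```python
import numpy as np

def batchmix_epsilons(eps_target, batch_size, mix_fracs=(0.25, 0.5, 1.0), rng=None):
    """Assign each example in a mini-batch one of several perturbation
    bounds, sampled from fractions of the current scheduled epsilon.
    (Hypothetical sketch; the paper's exact mixing rule may differ.)"""
    rng = np.random.default_rng() if rng is None else rng
    candidates = np.asarray(mix_fracs) * eps_target
    return rng.choice(candidates, size=batch_size)

# In IBP training, interval bounds would then be formed per example,
# e.g. lower = x - eps[i], upper = x + eps[i], instead of using a
# single scheduled epsilon for the whole batch.
eps_batch = batchmix_epsilons(eps_target=0.3, batch_size=8)
print(eps_batch)
```

Under a standard $\epsilon$ schedule every example in the batch would share the current bound; sampling from several scales instead keeps low-$\epsilon$ and high-$\epsilon$ examples in every update, which is the mechanism the abstract credits with alleviating LEO.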