Resistive RAM (ReRAM) is gaining attention as a promising memory platform for accelerating deep neural networks (DNNs) in an energy-efficient way. However, energy-efficient ReRAM-based DNN accelerators suffer from serious stuck-at-fault (SAF) issues that significantly degrade inference accuracy. SAF is a device-level non-ideality, and its impact worsens in realistic ReRAM devices with low cell resolution. To address this problem, we present Fault-free, a framework for mitigating SAF on ReRAM-based accelerators. We first analyze the impact of SAF on low-resolution cells. Based on this analysis, we present an offline compilation stage that drastically reduces the impact of SAF on inference accuracy. In the first stage, we extract the indices of weights distorted by SAF. For the extracted weights, fault-aware weight decomposition and closest-value mapping are applied to minimize the weight error. In the online phase, the target DNN model is executed on the ReRAM-based accelerator along with lightweight compensation units. Online compensation is performed selectively, for only a small portion of the weights, to reduce the hardware overhead. With the proposed framework, the ReRAM-based accelerator preserves the inference accuracy of various DNN models with an average area overhead of 5% and an energy overhead of 0.8% relative to an ideal ReRAM-based accelerator.
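The closest-value mapping step can be illustrated with a minimal sketch. This is an illustrative assumption of how such a mapping might work, not the paper's actual implementation: we model a cell whose stored level has one bit stuck at a fixed value, and pick the programmable level nearest to the intended weight level. The function name, bit-level fault model, and parameters are all hypothetical.

```python
def closest_value_mapping(target, n_bits, stuck_bit, stuck_val):
    """Return the programmable level closest to `target` for an
    n_bits-resolution cell whose bit `stuck_bit` is stuck at
    `stuck_val` (0 or 1). Hypothetical fault model for illustration."""
    best, best_err = None, float("inf")
    for level in range(2 ** n_bits):
        # Only levels whose stuck bit already equals the fault value
        # can actually be programmed into the faulty cell.
        if (level >> stuck_bit) & 1 != stuck_val:
            continue
        err = abs(level - target)
        if err < best_err:
            best, best_err = level, err
    return best


# Example: a 3-bit cell with bit 0 stuck at 0 can only store even
# levels {0, 2, 4, 6}; the closest level to a target of 7 is 6.
print(closest_value_mapping(7, n_bits=3, stuck_bit=0, stuck_val=0))  # → 6
```

Under this fault model, the mapping error is bounded by the weight of the stuck bit, which is why the paper's selective online compensation only needs to correct the small residual for a fraction of the weights.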