Recently, many industrial and commercial fields have been making efforts to deploy DCNNs on mobile devices. However, as user applications become more complicated and demand higher accuracy, DCNNs grow deeper, resulting in increasing computation and parameter size. Two solutions for mitigating this issue are quantizing the parameters of DCNNs and encoding the data, which lower the complexity of the computing units and the memory footprint with little accuracy drop. In this thesis, data encoding schemes are proposed to mitigate the above problems in two types of quantized networks: extremely-quantized weight CNNs and multi-bit quantized CNNs. Customized hardware accelerators are then designed to verify the efficiency of the proposed schemes. As a result, in extremely-quantized weight CNNs, the effective bits per weight are reduced to 0.67-0.80 bit, achieving 4.52-7.70x and 1.52-2.21x improvements in performance and energy efficiency, respectively, with higher accuracy than previous binary-weight CNN work. In multi-bit quantized CNNs, about 12.7-26.0\% energy saving is achieved in an iso-accuracy comparison, providing a design option that is the most efficient among the baselines when the accuracy-energy trade-off is considered.