A 21mW low-power embedded Recurrent Neural Network (RNN) accelerator is proposed for image-captioning applications. Low-power RNN operation is achieved through three key features: 1) quantization-table-based matrix multiplication with RNN weight quantization, 2) a dynamic quantization-table allocation scheme for balanced pipelined RNN operation, and 3) zero-skipped RNN operation using the quantization table. The quantization table reduces multiplier operations by 98% by replacing multiplications with table lookups. Dynamic quantization-table allocation achieves over 90% chip-utilization efficiency through balanced pipeline operation across the three variations of the RNN operation. Zero-skipped RNN operation reduces the required external memory bandwidth and the number of quantization-table operations by 27% without any additional hardware cost. The proposed 1.84mm² RNN accelerator, fabricated in a 65nm CMOS process, consumes 21mW and demonstrates its functionality on an image-captioning RNN.
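The core idea of quantization-table-based matrix multiplication with zero-skipping can be sketched in software as follows. This is a minimal illustrative model, not the paper's hardware implementation: the function name, data layout, and quantization levels are all assumptions made for the example. With weights quantized to Q levels, each nonzero input activation needs only Q multiplies to fill a product table; every weight-activation product then becomes a table reference, and zero activations skip both the table build and the lookups.

```python
import numpy as np

def quantized_matvec(W_idx, levels, x):
    """Quantization-table-based matrix-vector product (illustrative sketch).

    W_idx  : (rows, cols) int array of weight indices into `levels`
    levels : (Q,) array of quantized weight values
    x      : (cols,) input activation vector
    """
    rows, cols = W_idx.shape
    y = np.zeros(rows)
    for j in range(cols):
        # Zero-skipping: a zero activation contributes nothing to the output,
        # so its table build, weight fetch, and lookups are skipped entirely.
        if x[j] == 0.0:
            continue
        # Build the quantization table for this activation once:
        # Q multiplies instead of `rows` multiplies per column.
        table = levels * x[j]
        # Each weight-activation product is now just a table reference.
        y += table[W_idx[:, j]]
    return y
```

With Q quantization levels and a `rows × cols` weight matrix, the multiply count drops from `rows * cols` to at most `Q * cols`, and sparse activations reduce both the lookups and the weight-index fetches, mirroring the bandwidth savings described above.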