Deep reinforcement learning (DRL) is widely used in autonomous systems such as self-driving vehicles, robots, and drones. DRL training is essential for achieving human-level control and adapting to rapidly changing environments in mobile autonomous systems. However, accelerating DRL training poses three challenges: 1) large memory access volume, 2) diverse data access patterns, and 3) complex data dependencies arising from the use of multiple DNNs. Two CMOS DRL accelerators have been proposed to support high-speed, energy-efficient DRL training in mobile autonomous systems. One accelerator handles the diverse data patterns with a transposable PE architecture and reduces the large feature-map memory access with top-3 experience compression. The other accelerator supports group-sparse training for weight compression and integrates an on-line DRL task scheduler to support multi-DNN operation.
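The idea behind top-3 experience compression can be illustrated with a small sketch: only the three largest-magnitude entries of each stored feature-map row are kept, so far less data moves between the PE array and memory. This is a hedged, software-level approximation; the accelerator's actual compression format and hardware datapath are not specified in the text above, and the function names here are illustrative.

```python
import numpy as np

def topk_compress(x, k=3):
    """Keep only the k largest-magnitude entries per row.

    A software sketch of 'top-k experience compression': instead of
    storing the full feature map, store (indices, values) of the top-k
    entries, cutting memory traffic from N to 2*k words per row.
    """
    flat = x.reshape(x.shape[0], -1)
    # Indices of the k largest-magnitude entries in each row.
    idx = np.argpartition(np.abs(flat), -k, axis=1)[:, -k:]
    vals = np.take_along_axis(flat, idx, axis=1)
    return idx, vals, flat.shape

def topk_decompress(idx, vals, shape):
    """Reconstruct a dense row with zeros in the dropped positions."""
    out = np.zeros(shape)
    np.put_along_axis(out, idx, vals, axis=1)
    return out
```

For a row like `[5, -1, 3, 0, -7, 2]` with `k=3`, only `5`, `3`, and `-7` survive; all other entries decompress to zero. The compression is lossy, which is acceptable in DRL replay because the largest activations dominate the gradient contribution.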