This paper deals with a deep reinforcement learning-based motion planning technique for exploring unknown environments with an aerial robot. Advances in drone technology have enabled increasingly demanding missions in complex environments. To promote this development, competitions such as the Autonomous Drone Racing Competition and the DARPA Subterranean Challenge have set drones the mission of exploring dynamic and atypical environments. In prior work, a sub-goal selection method was introduced for exploration in atypical environments, and based on it a path planning method using A* search and a Euclidean signed distance field (ESDF) was proposed. However, this approach incurs unnecessary processing cost because the occupancy probability grid map must be post-processed into a Boolean-valued voxel map and a distance field. In addition, when A* search is applied, a safety border is imposed to keep the optimal path from passing too close to obstacles, which can cause a narrow passage to be recognized as an obstacle when searching for a path through it. Therefore, a deep reinforcement learning-based method is proposed to solve these problems and to provide a simpler exploration technique. It is shown that the proposed deep reinforcement learning-based motion planning method can effectively explore unknown environments.
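The narrow-passage problem mentioned above can be illustrated with a minimal sketch (not the paper's implementation): A* over an ESDF in which any cell whose clearance falls below a safety margin is pruned. The toy grid, the 4-connected search, and the margin values below are illustrative assumptions; when the margin exceeds the half-width of a corridor, the corridor is rejected as if it were an obstacle.

```python
# Minimal, assumed sketch of A* over an ESDF with a clearance (safety-border) check.
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy occupancy grid: 1 = obstacle, 0 = free. A one-cell-wide corridor at row 5
# connects two open columns on the left and right edges (illustrative assumption).
grid = np.ones((11, 11), dtype=np.uint8)
grid[5, :] = 0   # narrow corridor
grid[:, 0] = 0   # open column on the left
grid[:, 10] = 0  # open column on the right

# ESDF approximation: distance (in cells) from each free cell to the nearest obstacle.
esdf = distance_transform_edt(grid == 0)

def a_star(start, goal, margin):
    """4-connected A* that rejects cells whose clearance is below `margin`."""
    open_set = [(0, start)]
    g = {start: 0}
    came_from = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue
            if grid[nxt] == 1 or esdf[nxt] < margin:
                continue  # treated as an obstacle once the safety border is applied
            cost = g[cur] + 1
            if cost < g.get(nxt, float("inf")):
                g[nxt] = cost
                came_from[nxt] = cur
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan heuristic
                heapq.heappush(open_set, (cost + h, nxt))
    return None  # no clearance-respecting path exists

# With a small margin the corridor is traversable; with a larger one it is rejected.
print(a_star((5, 0), (5, 10), margin=0.5) is not None)  # True: path through corridor
print(a_star((5, 0), (5, 10), margin=1.5))               # None: corridor pruned away
```

In this sketch the occupancy grid is first converted to a Boolean map and a distance field before search, which is the extra post-processing cost the paper argues a learned motion planner can avoid.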