I propose a deep learning algorithm that estimates the 3D pose and 3D position of a human. The proposed algorithm takes 2D body joints as input and estimates the depth of each input joint. The 2D body joints are back-projected to 3D body joints using the estimated joint depths under a perspective projection assumption. Because of the ambiguity of depth estimation, I estimate each joint depth by decomposing it into the depth from the camera to the root of the human body and the relative depth between the root and each body joint. In this paper, unlike previous works that estimate the two types of depth in parallel, I present a relational equation between the two types of depth based on a body-symmetry constraint and use it to estimate them sequentially. Furthermore, the proposed method handles the many-to-one matching between 2D poses and a 3D pose: a single 3D pose can be projected onto the image plane as various 2D poses depending on its 3D position. When estimating the joint depths, I encode the 2D body joints together with their 2D positions in the image plane and design a novel loss that makes the proposed network restore a consistent 3D pose from the various 2D poses generated from one 3D pose. The proposed algorithm can be applied to in-the-wild images because it minimizes the effect of the background by converting the input image into body-joint heatmaps. The effectiveness of the proposed algorithm is verified by high pose estimation accuracy and distance estimation accuracy on existing 3D human pose benchmarks.
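The back-projection step described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a standard pinhole camera with known intrinsics (`fx`, `fy`, `cx`, `cy`), and the absolute depth of each joint is formed as the camera-to-root depth plus the joint's relative depth, as in the decomposition stated in the abstract.

```python
def back_project(joints_2d, root_depth, rel_depths, fx, fy, cx, cy):
    """Lift 2D joints to 3D camera coordinates under perspective projection.

    joints_2d  : list of (u, v) pixel coordinates, one per joint
    root_depth : estimated depth from the camera to the root joint
    rel_depths : per-joint depth relative to the root joint
    fx, fy     : focal lengths in pixels (assumed known intrinsics)
    cx, cy     : principal point in pixels (assumed known intrinsics)
    """
    joints_3d = []
    for (u, v), dz in zip(joints_2d, rel_depths):
        # Absolute joint depth = camera-to-root depth + root-relative depth.
        z = root_depth + dz
        # Invert the perspective projection u = fx * X / Z + cx (same for v).
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        joints_3d.append((x, y, z))
    return joints_3d
```

Projecting a 3D pose to 2D and back-projecting it with the true depths recovers the original joints exactly, which is the round-trip property the depth-estimation network exploits.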