"What to imitate" is one of the most important and difficult issues in robot imitation learning. One possible engineering solution is to focus on the salient properties of actions. In this paper, we investigate the developmental change of what to imitate in robot action learning. Our robot was equipped with a recurrent neural network with parametric bias (RNNPB) and learned to imitate multiple goal-directed actions in two different environments (i.e., simulation and a real humanoid robot). Close analysis of the error measures and of the internal representation of the RNNPB revealed that the most salient properties of the actions (i.e., reaching the desired end points of the motor trajectories) were learned first, while the less salient properties (i.e., matching the shapes of the motor trajectories) were learned later. Interestingly, this result is analogous to the developmental process of action imitation in human infants. We discuss the importance of our results in terms of understanding the underlying mechanisms of human development.
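To make the architecture concrete, the following is a minimal sketch of a recurrent network with a parametric bias (PB) vector, where a fixed low-dimensional PB input modulates the recurrent dynamics so that one network can generate multiple trajectories. The layer sizes, weight initialization, and Elman-style update are illustrative assumptions, not details taken from the study.

```python
import numpy as np

class RNNPB:
    """Illustrative RNN with parametric bias: the PB vector selects
    which learned trajectory the shared weights will generate."""

    def __init__(self, n_in, n_hidden, n_pb, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_pb = rng.normal(0.0, 0.1, (n_hidden, n_pb))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def rollout(self, x0, pb, steps):
        """Generate a motor trajectory from an initial input and a PB vector.

        The PB vector enters the hidden state at every step, so different
        PB values produce different trajectories from the same weights.
        """
        h = np.zeros(self.W_rec.shape[0])
        x = x0
        traj = []
        for _ in range(steps):
            h = np.tanh(self.W_in @ x + self.W_rec @ h + self.W_pb @ pb)
            x = np.tanh(self.W_out @ h)  # prediction fed back as next input
            traj.append(x)
        return np.stack(traj)

net = RNNPB(n_in=2, n_hidden=8, n_pb=2, n_out=2)
# Two different PB vectors select two different action trajectories.
traj_a = net.rollout(np.zeros(2), pb=np.array([1.0, 0.0]), steps=10)
traj_b = net.rollout(np.zeros(2), pb=np.array([0.0, 1.0]), steps=10)
print(traj_a.shape)
```

During training in the full RNNPB scheme, the weights are shared across all actions while a separate PB vector is optimized per action, which is what allows the internal representation of each action to be inspected afterwards.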