Neural architecture search (NAS) is an automated method that searches for an optimal network architecture by optimizing combinations of edges and operations. For efficiency, recent differentiable architecture search methods adopt a one-shot network that contains all candidate operations on each edge, instead of sampling and training individual architectures. However, a recent study casts doubt on the effectiveness of differentiable methods by showing that random search can achieve comparable performance at the same search cost. There is therefore a need to further reduce the search cost of differentiable methods. For more efficient differentiable architecture search, we propose a differentiable architecture search based on coordinate descent (DARTS-CD) that searches for the optimal operation on only one sampled edge per training step. DARTS-CD builds on the coordinate descent algorithm, an efficient optimization method for large-scale problems that updates only a subset of parameters at a time. In DARTS-CD, one edge is randomly sampled and all candidate operations are performed on it, whereas only one operation is applied on each of the other edges; the weight update is likewise performed only on the sampled edge. By optimizing each edge separately, as coordinate descent optimizes each coordinate individually, DARTS-CD converges much faster than DARTS while operating on a network architecture similar to the one used for evaluation. We experimentally show that DARTS-CD performs comparably to state-of-the-art efficient architecture search algorithms at an extremely low search cost of 0.125 GPU days (1/12 the search cost of DARTS) on CIFAR-10 and CIFAR-100. Furthermore, we introduce a warm-up regularization method that improves the exploration capability and further enhances performance.
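The per-step behavior described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the candidate operations, the toy update rule, and all function names (`forward_edge`, `search_step`) are hypothetical stand-ins for the actual one-shot network and architecture-gradient step.

```python
import math
import random

# Hypothetical candidate operations on an edge (stand-ins for conv/pool/etc.).
OPS = {"identity": lambda x: x, "double": lambda x: 2 * x, "zero": lambda x: 0.0}

def softmax(v):
    m = max(v)
    e = [math.exp(a - m) for a in v]
    s = sum(e)
    return [a / s for a in e]

def forward_edge(x, alphas, mixed):
    """One edge of the one-shot network.

    The sampled edge (mixed=True) evaluates ALL candidate operations,
    weighted by a softmax over its architecture parameters (alphas), as in
    DARTS. Every other edge applies only its current best (argmax) operation,
    so the search network resembles the discretized evaluation network.
    """
    ops = list(OPS.values())
    if mixed:
        w = softmax(alphas)
        return sum(wi * op(x) for wi, op in zip(w, ops))
    best = max(range(len(alphas)), key=lambda i: alphas[i])
    return ops[best](x)

def search_step(x, edge_alphas, lr=0.1):
    """One DARTS-CD-style step: sample one edge (the 'coordinate'),
    forward through all edges, and update only the sampled edge's alphas.
    The random perturbation is a placeholder for the real gradient update."""
    k = random.randrange(len(edge_alphas))
    out = x
    for i, alphas in enumerate(edge_alphas):
        out = forward_edge(out, alphas, mixed=(i == k))
    edge_alphas[k] = [a + lr * random.uniform(-1, 1) for a in edge_alphas[k]]
    return k, out
```

Note that only the parameters of the sampled edge change per step, mirroring how coordinate descent touches one coordinate block at a time.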