Network pruning has been widely adopted to reduce computational cost and memory consumption on low-resource devices. Recent studies show that the lottery ticket hypothesis achieves high accuracy under high compression ratios (i.e., 80-90% of the parameters in the original network are removed). Nevertheless, finding well-trainable networks with sparse parameters (i.e., fewer than 10% of the parameters remaining) remains a challenging task, which is commonly attributed to a lack of model capacity. This paper revisits the training process of existing pruning methods and observes that they produce dead connections, which do not contribute to model capacity. To address this, we propose a novel pruning method, namely all-alive pruning (AAP), which produces pruned networks containing only trainable weights and no dead connections. Notably, AAP is broadly applicable to various pruning methods and model architectures. We demonstrate that AAP, combined with existing pruning methods (e.g., iterative pruning, one-shot pruning, and dynamic pruning), consistently improves the accuracy of the original methods at high compression ratios on various image- and language-based tasks.
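The notion of a dead connection can be illustrated with a toy example. The sketch below (a hypothetical illustration, not the paper's AAP algorithm; the two-layer sizes and the 90% sparsity level are arbitrary assumptions) applies one-shot magnitude pruning to a small MLP and counts surviving second-layer weights whose hidden unit has lost all of its incoming weights; such weights can never carry signal and thus add no capacity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: input(4) -> hidden(6) -> output(3)  (sizes are arbitrary)
W1 = rng.normal(size=(6, 4))   # hidden x input
W2 = rng.normal(size=(3, 6))   # output x hidden

def magnitude_mask(w, sparsity):
    """Keep the largest-magnitude weights; prune the rest to zero."""
    k = int(w.size * sparsity)                      # number of weights to remove
    thresh = np.sort(np.abs(w), axis=None)[k]       # k-th smallest magnitude
    return (np.abs(w) >= thresh).astype(float)

# Aggressive one-shot magnitude pruning at 90% sparsity
M1 = magnitude_mask(W1, 0.9)
M2 = magnitude_mask(W2, 0.9)

# A surviving weight in W2 is "dead" if its hidden unit retains no
# incoming weight in W1: no signal can ever reach it.
incoming_alive = M1.sum(axis=1) > 0                 # per hidden unit
dead = int((M2 * ~incoming_alive[None, :]).sum())
print(f"surviving W2 weights: {int(M2.sum())}, dead among them: {dead}")
```

At such high sparsity, only a few hidden units keep any incoming weight, so surviving downstream weights frequently land on disconnected units, mirroring the capacity loss the abstract describes.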