With the success of 3D deep learning models, various perception technologies have been developed for the LiDAR domain. While these models perform well in the source domain they were trained on, they struggle in unseen domains with a domain gap. This paper proposes a single-domain generalization method for LiDAR semantic segmentation (DGLSS) that aims to ensure good performance not only in the source domain but also in unseen domains, by learning from the source domain alone. To this end, the proposed method augments the domain to simulate unseen domains by randomly subsampling the LiDAR scans. With the augmented domain, two constraints are introduced for generalizable representation learning: sparsity invariant feature consistency (SIFC) and semantic correlation consistency (SCC). SIFC aligns sparse internal features of the source domain with those of the augmented domain based on feature affinity, while SCC ensures that the correlations between class prototypes are similar in both domains. In addition, a standardized training and evaluation setting for DGLSS is presented. Under this setting, the proposed method outperforms other baselines in unseen domains. Even without access to the target domain, it performs better than a domain adaptation method.
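The core ideas above can be illustrated with a minimal sketch: random subsampling to simulate a sparser, unseen-domain scan, per-class prototypes computed as mean features, and a consistency term that compares the prototype correlation (cosine-similarity) matrices of the two domains. All function names, the keep ratio, and the use of a simple uniform point dropout are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def subsample_scan(points, labels, keep_ratio=0.5, rng=None):
    """Simulate an unseen (sparser) domain by randomly dropping points.

    points: (N, 4) array of x, y, z, intensity; labels: (N,) class ids.
    A uniform per-point dropout stands in for beam-level subsampling.
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(points.shape[0]) < keep_ratio
    return points[keep], labels[keep]

def class_prototypes(features, labels, num_classes):
    """Mean feature per class present in the scan: returns (num_classes, D)."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def correlation_consistency(protos_a, protos_b):
    """Mean squared gap between cosine-similarity matrices of two
    prototype sets; zero when inter-class correlations match."""
    def cos_matrix(p):
        q = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
        return q @ q.T
    return float(np.mean((cos_matrix(protos_a) - cos_matrix(protos_b)) ** 2))
```

In training, a loss of this shape would be computed between source-domain and augmented-domain prototypes each iteration, encouraging the network to keep inter-class relationships stable under density changes.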