Speech recognition systems typically require too much memory and computation for mass deployment. To overcome these constraints and make speech recognition practical on mobile devices, we propose an efficient codebook construction method for subspace distribution clustering hidden Markov modeling (SDCHMM). The output probability of a Gaussian mixture is more sensitive to quantization error in the mean vectors than in the variance vectors. We therefore propose a new subspace definition that minimizes the quantization error of the mean vectors first. Next, we split the mixture Gaussians into mean and variance vectors and construct separate codebooks for each using a modified Bhattacharyya distance measure. In experiments on the RM database, the proposed method reduces the word error rate by 24.5% relative to conventional SDCHMM, without requiring any extra memory.
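The abstract's core idea, splitting each diagonal-covariance Gaussian into mean and variance subvectors and quantizing them with separate codebooks, can be sketched as below. This is a minimal toy illustration, not the paper's method: the paper's exact subspace definition and its modified Bhattacharyya measure are not reproduced here. Instead, the mean codebook is clustered under the mean term of the standard Bhattacharyya distance (with a shared average variance as a simplifying assumption), and the variance codebook under its variance term; all data and parameter choices (200 Gaussians, 4-dimensional stream, 8 codewords) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one subspace (stream) of an acoustic model:
# 200 diagonal Gaussians, each with a 4-dim mean and variance subvector.
means = rng.normal(size=(200, 4))
vars_ = rng.uniform(0.5, 2.0, size=(200, 4))

def mean_dist(m, codebook, v_avg):
    # Mean term of the Bhattacharyya distance between diagonal Gaussians,
    # using a shared average variance (an illustrative simplification).
    return 0.125 * np.sum((m - codebook) ** 2 / v_avg, axis=1)

def var_dist(v, codebook):
    # Variance term of the Bhattacharyya distance between diagonal Gaussians.
    avg = (v + codebook) / 2
    return 0.5 * np.sum(np.log(avg) - 0.5 * (np.log(v) + np.log(codebook)), axis=1)

def build_codebook(vectors, dist_fn, n_codes=8, n_iter=20):
    # Lloyd-style clustering: assign each subvector to its nearest codeword
    # under dist_fn, then re-estimate each codeword as the cluster mean.
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)]
    for _ in range(n_iter):
        assign = np.array([np.argmin(dist_fn(x, codebook)) for x in vectors])
        for k in range(n_codes):
            members = vectors[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, assign

# Separate codebooks for mean and variance subvectors.
v_avg = vars_.mean(axis=0)
mean_cb, mean_idx = build_codebook(means, lambda m, cb: mean_dist(m, cb, v_avg))
var_cb, var_idx = build_codebook(vars_, var_dist)
```

After quantization, each Gaussian is represented by one mean-codebook index and one variance-codebook index per stream, which is what yields the memory savings SDCHMM targets; clustering the two parameter types separately lets the mean codebook, to which the output probability is more sensitive, be tuned independently of the variance codebook.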