This study proposes a new data augmentation technique for offline reinforcement learning (RL). Rather than choosing data points at random for augmentation, our method selectively draws data from sparse subspaces of the dataset, so that regions underrepresented in the original dataset are effectively augmented. For the augmentation, the subspaces of the dataset are represented in the latent space learned by a variational autoencoder (VAE). Data are then sampled from this latent space and mapped back to the original space by the VAE's decoder, and the resulting samples are added to the original dataset. Because the VAE's latent space captures the original data distribution, the generated virtual data do not deviate severely from the original data. We evaluate the performance of our method on several offline RL datasets generated from OpenAI Gym benchmark control simulations, which mainly use state-based inputs.
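The sample-decode-append loop described above can be sketched as follows. This is a minimal, illustrative toy: the "decoder" here is a hypothetical linear map standing in for a trained VAE decoder, and the names (`decode`, `augment`, `latent_dim`) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained VAE decoder, represented here by a toy linear map.
# In practice this would be the decoder network of a VAE trained on the
# offline RL dataset's transitions.
latent_dim, data_dim = 2, 4
W_dec = rng.normal(size=(latent_dim, data_dim))  # toy "decoder" weights

def decode(z):
    """Toy decoder: maps latent codes back to the original data space."""
    return z @ W_dec

def augment(dataset, n_new):
    """Sample latent codes from the VAE prior N(0, I), decode them into
    virtual data points, and append them to the original dataset."""
    z = rng.normal(size=(n_new, latent_dim))
    virtual = decode(z)
    return np.vstack([dataset, virtual])

original = rng.normal(size=(10, data_dim))   # stand-in offline dataset
augmented = augment(original, n_new=5)
print(augmented.shape)  # (15, 4)
```

Because sampling happens in the latent space, the decoded points stay close to the manifold the VAE learned from the original data, which is the property the abstract relies on. Targeting sparse subspaces would additionally require biasing the latent sampling toward low-density latent regions, which is omitted here for brevity.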