Reinforcement learning (RL) trains a policy to maximize cumulative return using data collected through interaction with the environment. Whereas online RL carries out data acquisition and policy training simultaneously, offline RL trains the policy on a pre-collected dataset. Accordingly, the performance of online RL can be expected to improve as the quantity and quality of the acquired data increase, whereas the performance of offline RL, which relies on a static dataset, depends heavily on the characteristics of that dataset. To address this limitation, data augmentation has been actively studied as a way to improve training performance. Driven by the rapid progress of computer vision, many augmentation methods have been developed for image inputs; in contrast, augmentation of state-based inputs, which are widely used in robotics, has received comparatively little attention. In this work, two data augmentation techniques for state-based inputs are proposed. \textit{K-mixup} extends mixup augmentation, originally developed for image inputs, to state-based inputs using Koopman theory. \textit{PST-DA} uses a variational autoencoder (VAE) to selectively augment a specific subset of the dataset. Evaluation results show that both methods improve training performance on offline RL benchmark datasets.
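For background, the interpolation that \textit{K-mixup} builds on is standard mixup (Zhang et al., 2018), which draws a coefficient $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ and replaces a pair of training examples $(x_i, y_i)$ and $(x_j, y_j)$ with their convex combination; the Koopman-based lifting introduced in this work is not shown here:
\[
\tilde{x} = \lambda x_i + (1 - \lambda)\, x_j, \qquad
\tilde{y} = \lambda y_i + (1 - \lambda)\, y_j, \qquad
\lambda \sim \mathrm{Beta}(\alpha, \alpha).
\]
Presumably, the role of Koopman theory is to lift states into a space where the system dynamics act (approximately) linearly, so that convex combinations of lifted states remain consistent with the dynamics; interpolating raw states of a nonlinear system offers no such guarantee.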