Indoor human action recognition is used in many fields. For example, it can recognize exercise movements in the fitness industry, helping people maintain their health. With the development of sensors, it has become easy to acquire multiple data modalities, such as RGB, IR, depth, and skeleton, in the same scene. Because these modalities are complementary, fusing them properly benefits human action recognition. However, existing studies do not fully exploit the advantages of each modality. In this work, we therefore propose a Multi-Modal Transformer (MMT) that uses RGB and skeleton data simultaneously. With its transformer-based structure, MMT can capture correlations between non-local joints in the skeleton modality. In addition, MMT requires neither additional training phases nor multiple trained networks as the number of people in the scene changes. In experiments on public benchmark datasets, MMT achieves competitive results using only eight input frames.
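To illustrate how attention can fuse the two modalities and relate non-local joints, the following is a minimal NumPy sketch of cross-attention in which RGB frame tokens attend over skeleton joint tokens. All names, shapes, and dimensions here are illustrative assumptions, not the actual MMT architecture; the joint count of 25 merely mirrors NTU-style skeletons, and real models would use learned projections rather than raw features.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # Scaled dot-product attention: each query token attends over
    # every key/value token. Shapes: queries (Nq, d), keys_values (Nk, d).
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (Nq, Nk)
    weights = softmax(scores, axis=-1)                # each row sums to 1
    return weights @ keys_values                      # (Nq, d)

rng = np.random.default_rng(0)
d = 16                       # shared embedding dimension (assumed)
n_frames, n_joints = 8, 25   # eight input frames; 25 joints per skeleton (assumed)

rgb_tokens = rng.standard_normal((n_frames, d))   # per-frame RGB features (toy)
skel_tokens = rng.standard_normal((n_joints, d))  # per-joint skeleton features (toy)

# Each RGB token attends over ALL joint tokens in one step, so correlations
# between non-local (non-adjacent) joints are weighted directly. Because the
# attention operates on a variable-length token set, the number of joint
# tokens can grow with the number of people without changing the network.
fused = cross_attention(rgb_tokens, skel_tokens, d)
print(fused.shape)  # prints (8, 16)
```

A token-based design like this is one reason transformers suit multi-person scenes: the same attention weights apply regardless of how many tokens the skeleton stream contributes.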