The recent development of contrastive learning has led to significant advances in self-supervised representation learning, with particularly strong results in the fully-supervised setting, where label information is available. The performance of this batch contrastive approach depends directly on the number of representations used in the loss computation. In this paper, we present supervised momentum contrastive learning, which can arbitrarily increase the number of representations by applying a memory queue to supervised contrastive learning. We show that, under the same GPU memory or time constraints, increasing the number of representation pairs in this way exceeds the performance of existing supervised contrastive learning. This makes it possible to improve performance in various contrastive learning settings without high-performance computing equipment.
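The following is a minimal sketch, not the authors' implementation, of the core mechanism described above: a supervised contrastive loss whose keys come from a label-aware memory queue filled by a MoCo-style momentum encoder. All names and hyperparameters (SupMoCoQueue, queue_size, the momentum value, the temperature) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class SupMoCoQueue:
    """Fixed-size FIFO queue of (L2-normalized key feature, label) pairs (illustrative)."""

    def __init__(self, dim: int, queue_size: int = 4096, num_classes: int = 1000):
        self.feats = F.normalize(torch.randn(queue_size, dim), dim=1)
        self.labels = torch.randint(0, num_classes, (queue_size,))  # placeholder labels
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats: torch.Tensor, labels: torch.Tensor):
        # Overwrite the oldest entries with the newest key features and their labels.
        n = feats.size(0)
        idx = (self.ptr + torch.arange(n)) % self.feats.size(0)
        self.feats[idx] = feats
        self.labels[idx] = labels
        self.ptr = int((self.ptr + n) % self.feats.size(0))


def sup_contrastive_loss(q, q_labels, k, k_labels, temperature=0.07):
    """Supervised contrastive loss: each query is pulled toward all keys sharing its label."""
    logits = q @ k.t() / temperature                          # (batch, queue) similarities
    pos_mask = (q_labels[:, None] == k_labels[None, :]).float()
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over each query's positives (clamp avoids division by zero).
    return -(pos_mask * log_prob).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()


@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """Exponential moving-average update of the key encoder from the query encoder."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)
```

Because the keys stored in the queue are detached outputs of the momentum encoder, the number of representations entering the loss is decoupled from the batch size, which is what allows the contrast set to grow without a corresponding increase in GPU memory.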