Multi-agent reinforcement learning (MARL) is a machine learning framework in which a complex system composed of several sub-systems is controlled in a decentralized manner, and it has attracted much attention with recent advances in reinforcement learning and deep learning. The goal of controlling such a multi-agent system is for the agents to complete a collaborative task, so consensus among the decentralized agents trained by MARL is essential to completing the task. In this dissertation, we propose MARL methods that account for the relationships among agents in terms of model, exploration, and training in order to achieve consensus among agents. In particular, the proposed methods learn dynamic relations, changing which agents to focus on depending on the situation. We empirically demonstrate that, by learning these relations better, the proposed methods outperform existing methods, and we provide an empirical analysis of why they work.