An environment is a complex space in which numerous agents and objects interact. In such an environment, rational agents act to maximize utility, and their behavior is learned through reinforcement learning algorithms based on deep neural networks. Since most environments are non-stationary, a single agent cannot achieve maximum utility solely through its own observations and the inferences its network draws from them. When agents can communicate with each other, however, each agent gains access to richer information about the state of the environment simply by receiving messages from other agents. Nonetheless, the information inferred by other agents is produced by their neural networks, so it can be unreliable or colored by the perspective of the agent that processed it. For example, even when presented with the same observation, two agents can encode significantly different messages because their network parameters differ. It is therefore crucial to consider the subjectivity of messages in multi-agent reinforcement learning (MARL). Building on the assumption that encoded information is inherently subjective and reflects the agent that encodes it, we address the interplay of objective and subjective information in MARL. One obstacle is that information processed by a neural network is often difficult to interpret. As a first step toward this subjectivity concern, we focus on a narrower question: does subjective information surpass objective information in utility? Accordingly, we compare the usefulness of messages exchanged between agents in two forms: the raw observation and the output of a neural network.
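The parameter-dependence of encoded messages can be illustrated with a minimal sketch. The one-layer encoder, dimensions, and names below are illustrative assumptions, not the model used in this work: two agents with independently initialized parameters encode the same observation into different messages, while the raw observation itself is identical for both.

```python
import numpy as np

def make_encoder(seed, obs_dim=8, msg_dim=4):
    """Build a toy one-layer message encoder with its own random parameters."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(msg_dim, obs_dim))
    b = rng.normal(size=msg_dim)
    return lambda obs: np.tanh(W @ obs + b)  # message = tanh(W @ obs + b)

# Two agents receive the exact same observation of the environment...
obs = np.ones(8)

encode_a = make_encoder(seed=0)
encode_b = make_encoder(seed=1)

# ...yet encode different messages because their parameters differ.
msg_a = encode_a(obs)   # "subjective" message of agent A
msg_b = encode_b(obs)   # "subjective" message of agent B
raw = obs               # the "objective" alternative: send the raw observation

print(np.allclose(msg_a, msg_b))  # False: same input, different messages
```

The comparison studied here amounts to choosing which of the two, `raw` or the encoder output, is transmitted as the inter-agent message.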