As intelligent machines have become widespread across applications, operating them efficiently has become increasingly important. Monitoring human operators' trust is required for productive interactions between humans and machines, yet the neurocognitive understanding of human trust in machines remains limited. In this study, we analysed human behaviours and electroencephalograms (EEGs) obtained during non-reciprocal human-machine interactions. Human subjects supervised their partner agents by monitoring and intervening in the agents' actions, a non-reciprocal interaction that reflects practical uses of autonomous or smart systems. Furthermore, we diversified the agents with external and internal human-like factors to examine the influence of anthropomorphism in machine agents. The agents' internal human-likeness was manifested in how they performed a task and affected the subjects' trust levels. From the EEG analysis, we identified brain responses correlated with increases and decreases in trust. The effects of trust variations on brain responses were more pronounced for agents that were externally closer to humans and that elicited greater trust from the subjects. This research provides a theoretical basis for modelling human neural activities that indicate trust in partner machines and can thereby contribute to the design of machines that promote efficient interactions with humans.