In various multi-agent networked environments, the system can benefit from coordinating the actions of pairs of interacting agents, at some coordination cost. In this paper, we first formulate an optimization problem that captures the coordination gain net of the node-activation cost over a given network structure. We then propose three simulation-based distributed algorithms, each with a different update rule, all of which require only one-hop message passing and locally observed information. The key enabler of this distributedness is a stochastic approximation method that runs a Markov chain simulation incompletely over time, yet provably converges to the optimal solution. Next, we provide new interpretations of our proposed algorithms from a game-theoretic perspective: we deliberately construct a payoff function such that the game's Nash equilibrium is asymptotically equal to the socially optimal point. We show that two stochastically approximated variants of standard game-learning dynamics coincide with two of the algorithms developed from the optimization perspective, and finally demonstrate our theoretical findings through extensive simulations.
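To convey the flavor of the stochastic approximation idea mentioned above, the following is a minimal, generic Robbins–Monro sketch, not the paper's actual algorithm: the two-state chain, the decaying step size, and the target quantity are all illustrative choices of our own. Each iteration advances the Markov chain by only a single step (an "incomplete" simulation), yet the iterate still converges to a stationary quantity of the chain.

```python
import random

def sa_stationary_mean(p01=0.3, p10=0.2, n_iters=200000, seed=0):
    """Robbins-Monro stochastic approximation driven by ONE Markov-chain
    step per iteration (no full mixing before each update):
        theta_{t+1} = theta_t + a_t * (x_t - theta_t).
    For this illustrative two-state chain with flip probabilities p01, p10,
    the iterate converges to the stationary mean p01 / (p01 + p10)."""
    rng = random.Random(seed)
    x, theta = 0, 0.0
    for t in range(1, n_iters + 1):
        # Advance the chain by a single step -- the "incomplete" simulation.
        if x == 0:
            x = 1 if rng.random() < p01 else 0
        else:
            x = 0 if rng.random() < p10 else 1
        # Decaying step size: sum a_t diverges, sum a_t^2 converges.
        a_t = 1.0 / t
        theta += a_t * (x - theta)
    return theta
```

Here the stationary mean is 0.3 / (0.3 + 0.2) = 0.6, and the iterate approaches it even though the chain is never run to stationarity within any single update; this is the essence of the convergence guarantee claimed for the proposed distributed algorithms.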