Recent studies have found that human strategic decision-making is well explained by a mixture of model-based (MB) and model-free (MF) reinforcement learning [1], and that the information necessary for this combination can be decoded from EEG signals [2]. These findings raise the expectation that BCI systems can be built to accommodate high-level cognitive processes such as strategic decision-making, planning, and goal-directed learning. However, these demonstrations were confined to simple Markov decision tasks, which significantly limits their applicability. While open-source benchmarks provide a variety of realistic scenarios, most of them do not require model-based learning. To address this issue, we present a novel task paradigm that enables testing goal-directed learning and strategic decision-making in a realistic environment. Our task is implemented on top of the OpenAI Gym Atari environment. We manipulated three task variables previously known to induce goal-directed learning: goal condition, state-transition uncertainty, and task complexity. Lastly, we discuss potential applications in cognitive science, machine learning, and BCI.
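To make the state-transition-uncertainty manipulation concrete, the sketch below shows one common way such a manipulation can be injected into an otherwise deterministic environment: with probability p, the agent's chosen action is replaced by a random one. This is an illustrative sketch only, not the paper's actual implementation; the toy environment, the wrapper, and all names here (ToyGridEnv, StochasticTransitionWrapper) are hypothetical stand-ins, written without any external dependency so the pattern is self-contained.

```python
import random


class ToyGridEnv:
    """Minimal deterministic stand-in for a game environment (hypothetical).

    States are positions 0..9 on a line; action 1 moves right, any other
    action moves left. Reaching state 9 gives reward 1 and ends the episode.
    """

    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        delta = 1 if action == 1 else -1
        self.state = max(0, min(9, self.state + delta))
        done = self.state == 9
        reward = 1.0 if done else 0.0
        return self.state, reward, done


class StochasticTransitionWrapper:
    """Inject state-transition uncertainty: with probability p, the chosen
    action is swapped for a uniformly random one before being executed."""

    def __init__(self, env, p=0.2, n_actions=2, seed=None):
        self.env = env
        self.p = p
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def reset(self):
        return self.env.reset()

    def step(self, action):
        if self.rng.random() < self.p:
            # Transition noise: the executed action may differ from the
            # intended one, so the agent's world model becomes uncertain.
            action = self.rng.randrange(self.n_actions)
        return self.env.step(action)
```

Setting p = 0 recovers the deterministic task, while larger p values make the environment's transitions harder to predict, which is the kind of knob a task designer can turn to encourage model-based (as opposed to purely habitual) control.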