The development of attention modules has improved the efficiency of neural networks by selectively encoding contextual information from inputs and outputs, regardless of their size, length, or condition. Recent studies have reported that successful control of attention improves classification performance on both training and test datasets. Building on these insights, this paper proposes two novel methods of attentional control for efficient processing of time-series data: (1) a reinforcement learning (RL)-based attentional control algorithm that selects appropriate modular models according to contextual changes over time, and (2) a method for regularizing attentional control by embedding a novel alignment loss into causal sequence-to-sequence problems. Each attentional control method was tested on two such problems: EEG cognitive-load classification and speech synthesis. The results confirm that the proposed models outperform conventional methods.
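The abstract does not specify the form of the alignment loss. As a purely illustrative sketch, a guided-attention-style diagonal penalty (a known regularizer in attention-based speech synthesis) shows what such a loss can look like for causal sequence-to-sequence models; the function name `alignment_loss` and the bandwidth parameter `g` are assumptions, not the paper's formulation.

```python
import numpy as np

def alignment_loss(attn, g=0.2):
    """Illustrative guided-attention-style alignment loss (assumed form).

    Penalizes attention weights that stray far from the diagonal,
    encouraging a roughly monotonic input-output alignment.

    attn: (T_out, T_in) attention matrix; each row sums to 1.
    g: width of the diagonal band (hypothetical hyperparameter).
    """
    T_out, T_in = attn.shape
    n = np.arange(T_out)[:, None] / T_out   # normalized output positions
    t = np.arange(T_in)[None, :] / T_in     # normalized input positions
    # Penalty weight grows with distance from the diagonal n == t.
    w = 1.0 - np.exp(-((t - n) ** 2) / (2 * g ** 2))
    return float(np.mean(attn * w))

# A sharply diagonal alignment incurs less penalty than a uniform one.
diag_loss = alignment_loss(np.eye(8))
uniform_loss = alignment_loss(np.full((8, 8), 1.0 / 8))
```

In practice such a term would be added, with a weighting coefficient, to the model's primary task loss during training.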