In this paper, an approximate dynamic programming (ADP) based strategy is applied to the dual adaptive control problem. The ADP strategy provides a computationally tractable way to build a significantly improved policy by solving the dynamic program only at those points of the hyper-state space sampled during closed-loop Monte Carlo simulations performed under known suboptimal control policies. The potential of the ADP approach for generating a significantly improved policy is illustrated on an ARX process with unknown/varying parameters. (C) 2009 Elsevier Ltd. All rights reserved.
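The workflow described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the specific ARX model, the certainty-equivalence policy used as the known suboptimal policy, the hyper-state definition (output, recursive least-squares parameter estimate, and covariance trace), and the simplified successor bookkeeping in the value iteration are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hyperstates(n_runs=20, horizon=30):
    """Collect hyper-states visited under a known suboptimal policy.

    Assumed ARX process: y_t = a*y_{t-1} + b*u_{t-1} + e_t, with (a, b)
    unknown to the controller and estimated online by recursive least
    squares (RLS). The hyper-state here is (y, theta_hat, trace(P))."""
    states, costs = [], []
    for _ in range(n_runs):
        a, b = 0.8, 0.5                      # true parameters (unknown to controller)
        theta = np.zeros(2)                  # RLS estimate of (a, b)
        P = 10.0 * np.eye(2)                 # RLS covariance
        y = rng.normal()
        for _ in range(horizon):
            # suboptimal policy: certainty-equivalence control on the estimate
            b_hat = theta[1] if abs(theta[1]) > 0.1 else 0.1
            u = float(np.clip(-theta[0] * y / b_hat, -5, 5))
            y_next = a * y + b * u + 0.1 * rng.normal()
            # RLS update of the parameter estimate and covariance
            phi = np.array([y, u])
            k = P @ phi / (1.0 + phi @ P @ phi)
            theta = theta + k * (y_next - phi @ theta)
            P = P - np.outer(k, phi @ P)
            states.append(np.concatenate(([y], theta, [np.trace(P)])))
            costs.append(y**2 + 0.1 * u**2)
            y = y_next
    return np.array(states), np.array(costs)

def adp_value_iteration(states, costs, gamma=0.9, n_iter=50):
    """Approximate value iteration restricted to the sampled hyper-states:
    V(s_i) <- cost_i + gamma * V(successor of s_i).

    Simplification for the sketch: the successor of sample i is taken to
    be sample i+1 in the flat array (trajectory boundaries are ignored)."""
    V = np.zeros(len(states))
    for _ in range(n_iter):
        V_next = np.roll(V, -1)
        V_next[-1] = 0.0                     # terminal sample has zero cost-to-go
        V = costs + gamma * V_next
    return V

states, costs = simulate_hyperstates()
V = adp_value_iteration(states, costs)
```

An improved policy would then be extracted greedily from `V` (e.g. by nearest-neighbor lookup of candidate successor hyper-states), which is the step that distinguishes ADP from evaluating the suboptimal simulation policy alone.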