Current advanced process control and scheduling techniques are based on the paradigm of mathematical programming. Typically, deterministic constrained optimization problems are formulated and solved. Uncertainties are handled by re-optimizing whenever new information becomes available, either at every sample time, as in Model Predictive Control, or when infeasibilities arise, as in reactive scheduling. The computational requirements are typically heavy and often a bottleneck. In addition, for problems where uncertainties are significant and can be modeled stochastically, one would like to account for them directly by formulating and solving a stochastic optimization problem to derive an optimal policy. In this talk, I will introduce the “approximate dynamic programming” technique in the context of a general multi-stage stochastic optimization problem. I will point out some key challenges in this area, especially for process control and scheduling applications. I will also show some successes achieved with this approach in several control and scheduling problems.
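As a rough illustration of the multi-stage stochastic setting (not material from the talk itself), one common approximate dynamic programming scheme is fitted value iteration: sample states, perform Bellman backups, and fit a parametric value function backward in time. The sketch below applies this to a hypothetical inventory-control problem with random demand; the cost coefficients, demand model, and quadratic value-function approximation are all illustrative assumptions.

```python
import numpy as np

# Toy multi-stage stochastic problem (hypothetical): inventory control with
# uncertain demand. State x = stock level, decision u = order quantity.
T = 5                       # number of stages
demands = [0, 1, 2]         # equally likely demand realizations (assumed)
actions = np.arange(0, 5)   # feasible order quantities (assumed)

def stage_cost(x, u, d):
    # Ordering cost plus holding / backlog penalties (illustrative values).
    y = x + u - d
    return 1.0 * u + 0.5 * max(y, 0) + 2.0 * max(-y, 0)

# Approximate dynamic programming: approximate the cost-to-go as a
# quadratic, V_t(x) ~ w0 + w1*x + w2*x^2, fitted at sampled states.
states_sample = np.linspace(-4, 8, 25)

def features(x):
    return np.array([1.0, x, x * x])

weights = [np.zeros(3) for _ in range(T + 1)]  # terminal cost V_T = 0
for t in reversed(range(T)):
    targets = []
    for x in states_sample:
        # Bellman backup: minimize expected stage cost plus approximate
        # cost-to-go over the admissible decisions.
        q = [np.mean([stage_cost(x, u, d)
                      + features(x + u - d) @ weights[t + 1]
                      for d in demands])
             for u in actions]
        targets.append(min(q))
    # Least-squares fit of the value-function parameters at stage t.
    Phi = np.array([features(x) for x in states_sample])
    weights[t], *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)

def policy(t, x):
    """Greedy decision induced by the fitted value function at stage t."""
    q = [np.mean([stage_cost(x, u, d) + features(x + u - d) @ weights[t + 1]
                  for d in demands])
         for u in actions]
    return actions[int(np.argmin(q))]
```

The offline fit yields a policy that is cheap to evaluate online (one small minimization per stage), which is the contrast with re-optimizing a full mathematical program at every sample time.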