This article proposes approximate dynamic programming (ADP) based on the "postdecision state" as a computationally efficient tool for the systematic handling of uncertainty in stochastic process control. The need to handle uncertainty systematically, which is unmet by current advanced process control methodologies such as model predictive control (MPC), is highlighted through two classes of problems commonly encountered in process control: (i) processes that, operated near constraint boundaries for economic reasons, exhibit frequent excursions into the infeasible region under exogenous disturbances, and (ii) situations where state or parameter estimation and control interact so strongly that the commonly employed assumptions of certainty equivalence or separation fail to hold. Most previous work on ADP specialized to process control problems is better suited to deterministic settings. For stochastic problems, such as those treated in this work, the postdecision-state formulation confers immediate practical benefits: because the disturbance enters only after the decision, the control computation becomes a deterministic optimization, allowing the efficient use of the off-the-shelf optimization solvers found in all MPC technology.
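To make the practical benefit concrete, the following is a minimal illustrative sketch (a toy scalar system, not the article's case study; the dynamics, cost, and fitted coefficient `theta` are all assumptions for illustration). For a system x_next = x + u + w with stage cost x² + u², suppose the postdecision value function has already been approximated as V_post(x_p) = θ·x_p², where x_p = x + u is the state after the decision but before the disturbance w. The control step then contains no expectation over w and is an ordinary deterministic program that any standard NLP/QP solver embedded in MPC software could handle; here it is solved by brute-force grid search and checked against the closed-form minimizer.

```python
import numpy as np

# Toy postdecision-state control step (illustrative assumptions throughout):
#   dynamics:   x_next = x + u + w,  w a disturbance realized AFTER the decision
#   stage cost: x**2 + u**2
#   fitted postdecision value function (assumed given): V_post(x_p) = theta * x_p**2
# Because w enters only after the postdecision state x_p = x + u, the control
# problem  min_u  x**2 + u**2 + gamma * V_post(x + u)  is deterministic:
# no expectation appears inside the minimization.

gamma, theta = 0.9, 1.5   # discount factor and an assumed, already-fitted coefficient
x = 2.0                   # current (pre-decision) state

# Deterministic minimization by brute-force grid search (grid spacing 1e-4)
us = np.linspace(-5.0, 5.0, 100001)
obj = x**2 + us**2 + gamma * theta * (x + us)**2
u_grid = us[np.argmin(obj)]

# Closed form from d/du [u**2 + gamma*theta*(x + u)**2] = 0
u_star = -gamma * theta * x / (1.0 + gamma * theta)

# The two minimizers agree to within the grid resolution
print(u_grid, u_star)
```

By contrast, the pre-decision formulation would place an expectation over w inside the minimization, which generic deterministic solvers cannot consume directly without sample-average or scenario approximations.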