We consider a ranking and selection problem whose configuration depends on a common input model estimated from finite real-world observations. To find a solution robust to estimation error in the input model, we introduce a new concept of robust optimality: the most probable best. Taking the Bayesian view, the most probable best is defined as the solution whose posterior probability of being the best is the largest given the real-world data. Focusing on the case where the posterior on the input model has finite support, we study the large deviation rate of the probability of incorrectly selecting the most probable best and formulate an optimal computing budget allocation (OCBA) scheme for this problem. We further approximate the OCBA problem to obtain a simple and interpretable budget allocation rule and propose sequential learning algorithms. A numerical study demonstrates the good performance of the proposed algorithms.
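To make the definition concrete, the following minimal sketch computes the most probable best when the posterior on the input model has finite support. All numbers here are hypothetical illustrations: `mu[i, j]` stands for the mean performance of solution `i` under input-model support point `theta_j` (known only through simulation in practice), and `p` for the posterior weights on those support points.

```python
import numpy as np

# Hypothetical example: 4 candidate solutions, 3 posterior support points.
# mu[i, j] = assumed mean performance of solution i under input model theta_j.
mu = np.array([
    [1.0, 0.2, 0.5],
    [0.8, 0.9, 0.1],
    [0.3, 0.4, 1.2],
    [0.6, 0.7, 0.4],
])
p = np.array([0.5, 0.3, 0.2])  # posterior weights on the support points

# Conditional best solution under each support point theta_j.
best_per_model = mu.argmax(axis=0)

# Posterior probability that each solution is the best: sum the weights of
# the support points under which that solution is the conditional best.
prob_best = np.array([p[best_per_model == i].sum() for i in range(mu.shape[0])])

# The most probable best maximizes this posterior probability.
most_probable_best = int(prob_best.argmax())
```

In practice the means `mu[i, j]` are unknown and must be estimated by simulation, which is what motivates the budget allocation question the abstract describes: how to split simulation effort across solution-model pairs to drive down the probability of incorrectly selecting the most probable best.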