Interpolation is used to approximate the timing parameters of logic cells that are not specified in timing tables. Bilinear interpolation has long been taken for granted in the industry, but its error grows as the non-linearity of the timing parameters increases. In this paper, we propose machine learning (ML)-based interpolation to obtain more accurate timing parameters. A recurrent convolutional neural network (R-CNN) is employed, and various ranges of table entries form a sequence of input data, so that the recurrent network allows them to influence the interpolation. In addition, a variational auto-encoder (VAE) is used to capture the distributional features of the table. ML interpolation is parallelized on a GPU to minimize the runtime overhead from its numerous arithmetic operations. Experimental results demonstrate that ML interpolation reduces timing parameter error by 19.7% and path delay error by 3.4% compared to bilinear interpolation, at the cost of a 13% runtime overhead.
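For reference, the bilinear baseline that the proposed ML interpolation is compared against can be sketched as follows; this is a generic illustration of bilinear table lookup, not code from the paper, and the function and variable names are hypothetical:

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation over a 2-D timing table.

    xs, ys   : monotonically increasing axis values (e.g. input slew
               and output load indices of a cell's timing table).
    table    : table[i][j] holds the timing value at (xs[i], ys[j]).
    (x, y)   : query point, assumed to lie within the axis ranges.
    """
    # Locate the table cell containing (x, y); clamp to the edges.
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    # Normalized offsets within the cell.
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    # Weighted sum of the four surrounding table entries.
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

Because each lookup blends only the four nearest entries linearly, the approximation degrades where the underlying timing surface is strongly non-linear, which is the error source the paper targets.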