Numerous video coding standards have been developed for a wide range of applications. Most of them adopt the hybrid block-based coding scheme, which transforms the residual block formed as the difference between the original block and its spatially or temporally predicted counterpart. To reduce the bit rate, much research has focused on the intra-/inter-frame prediction step, and these standards accordingly provide various prediction methods. Most standards, however, transform the prediction errors with the discrete cosine transform (DCT), which is known to be near-optimal for original images. Since the statistical characteristics of prediction errors differ considerably from those of original images, a transform better suited to prediction errors needs to be devised.
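The hybrid scheme described above can be sketched minimally as follows. This is an illustration under our own assumptions (block size 8, an orthonormal DCT-II matrix built directly, and a synthetic block standing in for a real temporal prediction), not any particular codec's implementation:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: rows are the basis vectors."""
    k = np.arange(n)[:, None]   # frequency index
    m = np.arange(n)[None, :]   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)     # DC row gets the 1/sqrt(2) scale
    return C

n = 8
C = dct_matrix(n)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(n, n)).astype(float)
# Hypothetical stand-in for a temporal prediction: the true block
# plus small noise, so the residual is small but not zero.
predicted = original + rng.normal(0.0, 2.0, size=(n, n))

# The encoder transforms the residual, not the original block.
residual = original - predicted
coeffs = C @ residual @ C.T     # separable 2-D DCT of the residual

# The decoder can invert the transform exactly (before quantization).
reconstructed = C.T @ coeffs @ C
```

Quantization and entropy coding, which follow the transform in a real codec, are omitted here.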
In this dissertation, we propose a new transform coding architecture for the inter-frame prediction error (IPE) signal. To develop the system, we first derive a new statistical model that adapts to each input IPE block. The model assumes that the signal follows a first-order stationary Markov process in the time domain and approximates the pixel-wise unknown motion fluctuation with physical motion models. We then devise a new transform based on this statistical model, which requires no side information to be sent to the decoder. In addition, to reduce computation time, we present a modified version that accelerates the transform using a series of rank-one modifications. From the observation that the optimal transform of an IPE block referencing a nearly uniform block is in fact identical to the Karhunen-Loève transform (KLT) of natural images, we further reduce the computational complexity by applying the DCT to such IPE blocks. Experiments on well-known image sequences confirm that the proposed transform significantly improves transform coding performance.
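As background for the final observation above, the following sketch (our own illustration, not the dissertation's algorithm) shows the classical result that the KLT of a first-order stationary Markov (AR(1)) source, obtained by eigendecomposition of its Toeplitz covariance matrix, is closely matched by the DCT when the correlation coefficient is near one. This is why the DCT can substitute for the KLT on blocks whose statistics resemble natural images:

```python
import numpy as np

def ar1_klt(n, rho):
    """KLT basis of an AR(1) source: eigenvectors of R[i,j] = rho**|i-j|."""
    idx = np.arange(n)
    R = rho ** np.abs(idx[:, None] - idx[None, :])  # Toeplitz covariance
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]               # sort by decreasing variance
    return eigvecs[:, order].T                      # rows = KLT basis vectors

def dct_basis(n):
    """Orthonormal DCT-II basis: rows are the basis vectors."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

n, rho = 8, 0.95
K = ar1_klt(n, rho)
D = dct_basis(n)

# For strongly correlated sources the two bases nearly coincide,
# up to the sign of each basis vector.
similarity = np.abs(K @ D.T).diagonal()
```

Each entry of `similarity` is the magnitude of the inner product between corresponding KLT and DCT basis vectors; values close to 1 indicate near-identical bases.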