In this thesis, we present a novel example-based paradigm for producing motions of various characters. With a rich repertoire of captured motions, our paradigm extends previous methods for motion synthesis in two directions. First, we produce realistic human motions guided by monocular videos, which are the most common source of human motion data. Second, we apply those motions to diverse characters with different structures. After processing an input video, we select a pre-captured motion clip, called a "reference motion", from a motion library, and then compute the sequence of body configurations of a character based on a spacetime formulation. The root trajectory is estimated using kinematic constraints and the dynamic properties of the character. After synthesizing a human motion from the monocular video, the motion is cloned frame by frame to a target character based on scattered data interpolation. To do this, we exploit the correspondence between key postures of the source character and those of the target character. Through experiments, we demonstrate that our scheme can effectively produce a variety of motions for character animation.
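The frame-by-frame cloning step can be illustrated with a small scattered-data-interpolation sketch. The snippet below uses Gaussian radial basis functions centered at the source key postures to map any source posture to a target posture; the function names, the choice of kernel, and the flat posture vectors are our assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def rbf_weights(source_keys, target_keys, eps=1.0):
    """Fit interpolation weights from corresponding key postures.

    source_keys: (n, d) array of source key postures (flattened joint parameters)
    target_keys: (n, d') array of corresponding target key postures
    """
    # Pairwise distances between source key postures
    dist = np.linalg.norm(source_keys[:, None, :] - source_keys[None, :, :], axis=-1)
    # Gaussian RBF kernel matrix
    kernel = np.exp(-(eps * dist) ** 2)
    # Solve kernel @ weights = target_keys so the map is exact at the keys
    return np.linalg.solve(kernel, target_keys)

def clone_posture(posture, source_keys, weights, eps=1.0):
    """Map one source posture to the target character via the fitted RBF."""
    dist = np.linalg.norm(source_keys - posture[None, :], axis=-1)
    return np.exp(-(eps * dist) ** 2) @ weights

# Example: three corresponding key postures in a toy 2-D parameter space
source_keys = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
target_keys = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
weights = rbf_weights(source_keys, target_keys)
# Cloning a key posture reproduces its target counterpart
print(clone_posture(source_keys[1], source_keys, weights))
```

Applied once per frame, this reproduces the target key postures exactly at the correspondences and blends smoothly between them elsewhere.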