This thesis is concerned with the estimation of the parameters of a two-state Markov process. If the trajectory of the process can be observed continuously, parameter estimation is straightforward, for the likelihood can be written down explicitly. In many situations, however, it is not possible to observe the process continuously; rather, observations are taken at regular or irregular epochs. In these cases classical estimators such as the M.L.E. or A.U.E. do not always exist, and even when they do exist, their values tend to differ considerably from the true parameters. The number of cases in which no estimator exists increases as the true parameter becomes large. This thesis develops a method of modifying the usual estimators that largely overcomes these difficulties; the method exploits the limiting behavior of the process and the properties of the state transition counts. An efficient adaptive strategy to be used together with the modified estimators is also proposed. The properties of the new estimators and of the adaptive strategy are investigated by Monte Carlo simulation.
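As an illustration of the setting described above (and not of the thesis's own modified estimators), the following sketch simulates a continuous-time two-state Markov process with assumed transition rates `lam` (state 0 to 1) and `mu` (state 1 to 0), observed only at regular epochs of length `dt`, and forms the naive MLE of the rates from the state transition counts. The estimator inverts the one-step transition probabilities via the eigenvalue `1 - p01 - p10 = exp(-(lam+mu)*dt)`; when the estimated eigenvalue is not positive, the logarithm is undefined and the estimator does not exist, which is exactly the failure mode the abstract refers to. All function names here are hypothetical.

```python
import math
import random

def simulate_discrete(lam, mu, dt, n, seed=0):
    """Sample the two-state process at epochs 0, dt, 2*dt, ... using the
    exact one-step transition probabilities of the embedded chain."""
    rho = lam + mu
    e = math.exp(-rho * dt)
    p01 = (lam / rho) * (1.0 - e)   # P(0 -> 1 over one interval)
    p10 = (mu / rho) * (1.0 - e)    # P(1 -> 0 over one interval)
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n):
        u = rng.random()
        s = (1 if u < p01 else 0) if s == 0 else (0 if u < p10 else 1)
        path.append(s)
    return path

def estimate_rates(path, dt):
    """Naive MLE of (lam, mu) from state transition counts.
    Returns None when the estimator does not exist."""
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for a, b in zip(path, path[1:]):
        counts[(a, b)] += 1
    n0 = counts[(0, 0)] + counts[(0, 1)]  # epochs spent in state 0
    n1 = counts[(1, 0)] + counts[(1, 1)]  # epochs spent in state 1
    if n0 == 0 or n1 == 0:
        return None                       # one state never visited
    p01 = counts[(0, 1)] / n0
    p10 = counts[(1, 0)] / n1
    ev = 1.0 - p01 - p10                  # estimate of exp(-(lam+mu)*dt)
    if ev <= 0.0 or p01 + p10 == 0.0:
        return None                       # log undefined: no MLE exists
    rho = -math.log(ev) / dt              # estimate of lam + mu
    return rho * p01 / (p01 + p10), rho * p10 / (p01 + p10)

path = simulate_discrete(lam=1.0, mu=2.0, dt=0.1, n=20000)
est = estimate_rates(path, dt=0.1)
```

Note that as `lam + mu` grows relative to `1/dt`, the true eigenvalue `exp(-(lam+mu)*dt)` approaches zero, so sampling fluctuation pushes the estimated eigenvalue below zero more often and the non-existence cases multiply, consistent with the behavior described in the abstract.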