The backpropagation algorithm, based on the gradient descent method, is a widely used learning algorithm for neural networks. However, because of its inherent properties, there are cases in which the backpropagation algorithm cannot be adopted, or in which obtaining the gradient information is costly. This thesis classifies learning methods into two categories: those based on the gradient descent method, and those based on stochastic search methods such as Genetic Algorithms. It develops novel learning methods that alleviate the weaknesses of the backpropagation algorithm, such as local minima, slow convergence, and structure size, and it shows that Genetic Algorithms are an efficient learning algorithm for neural networks when gradient information is not available. In experimental simulations, comparative results with several learning algorithms demonstrate the effectiveness of the proposed algorithms in function approximation tasks, such as system identification and prediction of chaotic time series, and in the design of a neuro-controller for nonminimum phase systems. The simulation results show that the proposed algorithms are efficient and viable learning methods for neural networks.
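To illustrate the gradient-free category of learning methods discussed above, the following is a minimal sketch of training a tiny neural network's weights with a Genetic Algorithm instead of backpropagation. It is not the thesis's actual algorithm; the network size, selection scheme, and all hyperparameters are illustrative assumptions.

```python
import random
import math

random.seed(0)

def forward(weights, x):
    # Tiny 1-3-1 tanh network: weights = [w1(3), b1(3), w2(3), b2(1)].
    w1, b1 = weights[0:3], weights[3:6]
    w2, b2 = weights[6:9], weights[9]
    hidden = [math.tanh(w1[i] * x + b1[i]) for i in range(3)]
    return sum(w2[i] * hidden[i] for i in range(3)) + b2

def fitness(weights, data):
    # Negative mean squared error: higher is better, no gradients needed.
    return -sum((forward(weights, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=40, generations=200, mut_rate=0.2, mut_scale=0.3):
    n_weights = 10
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_weights)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [w + random.gauss(0, mut_scale)
                     if random.random() < mut_rate else w
                     for w in child]              # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, data))

# Approximate y = sin(x) on a small grid using only fitness evaluations.
data = [(x / 5, math.sin(x / 5)) for x in range(-10, 11)]
best = evolve(data)
```

Because the GA only ranks candidate weight vectors by fitness, it applies even when the error surface is non-differentiable or gradient information is unavailable, which is the setting the thesis targets.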