In both the theory and the applications of mathematical programming, the role of nondifferentiable optimization has grown considerably. Although important results have been obtained in constructing and justifying algorithms under certain differentiability and regularity assumptions, there remains a wide range of practically significant economic problems that are difficult to solve by algorithms of this kind. The general setting comprising such problems is referred to in this thesis as the nonsmooth economy.

Algorithms proposed for optimization in the nonsmooth economy can be broadly divided into three groups: subgradient methods, space dilation methods, and descent methods. Space dilation methods, however, have been observed to require a considerable amount of computation that does not appear to be repaid by better convergence properties relative to the other two groups. Accordingly, this thesis concentrates on subgradient methods and descent methods.

The earlier subgradient versions distinguished themselves by their simplicity and their efficient use of computer memory. For this very reason, these versions (in particular, Polyak's scheme) have been applied widely in practice. However, they also exhibited shortcomings: rather slow convergence, difficulty in assessing the accuracy of the approximate solution, and the requirement of a good estimate of the optimal function value. These shortcomings motivated various modifications. Some researchers proposed using information obtained at previous steps, while others sought to remove the requirement of a good estimate of the optimal function value. The former, however, failed to provide theoretical justification for their proposals or did not develop a generalized convergence condition. The latter presented algorithms of a heuristic nature or of limited value, which seem insufficient to provide a convergent prototype that d...
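
To make concrete why Polyak's scheme hinges on a good estimate of the optimal function value, the following is a minimal sketch of the classical subgradient iteration with Polyak's step size, x_{k+1} = x_k - ((f(x_k) - f*)/||g_k||^2) g_k. It assumes a convex objective f with a subgradient oracle; the names `polyak_subgradient` and `f_star_estimate` are illustrative only, with `f_star_estimate` standing in for the (generally unknown) optimal value f* that the rule requires.

```python
import numpy as np

def polyak_subgradient(f, subgradient, x0, f_star_estimate, max_iter=1000, tol=1e-8):
    """Sketch of the subgradient method with Polyak's step size.

    `f` is a convex objective, `subgradient` returns one subgradient at a point,
    and `f_star_estimate` approximates the optimal value f*.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = np.asarray(subgradient(x), dtype=float)
        g_norm_sq = float(g @ g)
        if g_norm_sq <= tol:              # (near-)zero subgradient: x is (near-)optimal
            break
        # Polyak step size: gamma_k = (f(x_k) - f*) / ||g_k||^2.
        # Its behaviour depends directly on how well f_star_estimate approximates f*.
        gamma = (f(x) - f_star_estimate) / g_norm_sq
        x = x - gamma * g
        fx = f(x)
        if fx < best_f:                   # subgradient steps need not decrease f monotonically
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Illustrative use on a nonsmooth convex function, f(x) = ||x||_1 (optimal value 0):
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum()
    sg = lambda x: np.sign(x)             # a valid subgradient of the l1-norm
    x_best, f_best = polyak_subgradient(f, sg, x0=np.array([3.0, -2.0]),
                                        f_star_estimate=0.0)
    print(x_best, f_best)
```

The best iterate is tracked separately because a subgradient direction is not necessarily a descent direction. When `f_star_estimate` misjudges f*, the step lengths are systematically too long or too short, which is exactly the sensitivity that the modifications mentioned above attempt to remove.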