A class of implementable algorithms is proposed for minimizing any convex, not necessarily differentiable, function f of several variables; the algorithms require only the evaluation of f and one subgradient of f at each generated point. Each algorithm generates a point that is closer to the solution set, and forms a more acute angle with it, than the point generated by the conventional subgradient method, and if the solution set is nonempty, the sequence of points generated by each algorithm converges to an element of the solution set. The algorithms either remain very simple or include an adjustment factor by which the computational burden per iteration can be controlled by the user. An aggregate subgradient algorithm of this class is constructed from the algorithm of Poljak in a manner similar to Kiwiel's construction based on the descent subgradient algorithm of Lemarechal. This algorithm is related to the algorithm of Camerini et al., and a clear interpretation of that algorithm is given. A variant of the aggregate subgradient algorithm is proposed as a modification of the algorithm of Camerini et al. and is shown in computational tests to outperform it. Furthermore, under a simple assumption, an extension of the aggregate subgradient algorithm terminates in finitely many iterations when f is piecewise linear. A two-direction subgradient algorithm of this class is also proposed, together with a selection scheme for one preceding direction. A simple variant of the two-direction subgradient algorithm is proposed as well and is shown to improve computational efficiency greatly compared with the conventional subgradient method.
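For background, the conventional subgradient method referred to above can be sketched as follows. This is a minimal illustration with Poljak's step size, not any of the proposed algorithms; the test function f(x) = max_i |x_i|, its subgradient oracle, and all parameters are chosen here purely for demonstration.

```python
import numpy as np

def f(x):
    # A simple piecewise-linear convex test function: f(x) = max_i |x_i|,
    # with minimum value f* = 0 attained at x = 0.
    return np.max(np.abs(x))

def subgrad(x):
    # One subgradient of max_i |x_i|: the signed unit vector of a
    # coordinate attaining the maximum absolute value.
    i = np.argmax(np.abs(x))
    g = np.zeros_like(x)
    g[i] = np.sign(x[i]) if x[i] != 0 else 1.0
    return g

def polyak_subgradient(x0, f_star=0.0, iters=200):
    # Conventional subgradient method with Poljak's step size
    # t_k = (f(x_k) - f*) / ||g_k||^2, assuming f* is known.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gap = f(x) - f_star
        if gap <= 1e-12:
            break
        g = subgrad(x)
        x = x - (gap / np.dot(g, g)) * g
    return x

x = polyak_subgradient([3.0, -1.5, 2.0])
```

Each iteration uses only one function value and one subgradient, the same per-iteration information the proposed class of algorithms requires.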