In this thesis, a new convex cut function method is proposed to handle the constraints that appear in twice-differentiable general NLP problems. For the unconstrained minimization step, the difference-of-convex underestimator proposed by Chang et al. is used. The underestimator and the convex cut function are constructed from the interval Hessian matrices of the objective function and the constraints, respectively. For univariate global optimization with non-convex constraints, the proposed convex cut function method performs well on several test problems.

To accelerate the convergence of multi-dimensional global optimization with non-convex constraints, the method is hybridized with a branch-and-bound algorithm so that both the underestimator and the convex cut function become tighter as the iterations proceed. In this hybrid, the difference-of-convex underestimator provides the lower bounds, and the convex cut function serves as the basis of the rule for discarding infeasible subregions after branching. Each region removed by a convex cut function is stored as a hyperellipsoid. Since the convergence rate decreases as the number of stored cutting regions grows, an inclusion relation between two hyperellipsoids is applied to reduce their number. We show that this two-hyperellipsoid inclusion relation is determined by maximizing a quadratic function over a sphere, which is a special case of the trust region subproblem.

The proposed method is applied to ten univariate constrained NLP test problems, twelve general NLP test problems, and five engineering design problems. It is proven to have finite $\varepsilon$-convergence to the global optimum point. The numerical experiments indicate that the proposed method is competitive with another covering method, the index branch-and-bound algorithm, which uses the Lipschitz constant for univariate constrained problems.
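The reduction of the two-hyperellipsoid inclusion test to a trust region subproblem can be sketched as follows; this is a minimal illustration assuming each cutting region is written in the standard center-matrix form, with notation $E_i$, $c_i$, $A_i$ that is not taken from the thesis itself. Write each hyperellipsoid as
\[
E_i = \{\, x \in \mathbb{R}^n : (x - c_i)^\top A_i \, (x - c_i) \le 1 \,\}, \qquad A_i \succ 0, \quad i = 1, 2 .
\]
Then
\[
E_1 \subseteq E_2
\iff
\max_{(x - c_1)^\top A_1 (x - c_1) \,\le\, 1} \; (x - c_2)^\top A_2 \, (x - c_2) \;\le\; 1 .
\]
Substituting $x = c_1 + A_1^{-1/2} u$ maps the feasible set to the unit ball $\|u\| \le 1$, while the objective remains a convex quadratic in $u$; the maximum of a convex function over the ball is attained at an extreme point, i.e., on the sphere $\|u\| = 1$. The inclusion test is therefore the maximization of a quadratic function over a sphere, the special case of the trust region subproblem referred to above.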