Modern minimax problems, such as generative adversarial networks, adversarial training, and fair training, are usually posed in a nonconvex-nonconcave setting. There are mainly two types of gradient methods for solving minimax problems: single-step methods and multi-step methods. However, existing methods either converge slowly or are not guaranteed to converge. Specifically, under the structured nonconvex-nonconcave setting considered here, the best known rate for a single-step method is only O(1/k), and existing multi-step methods are not guaranteed to converge. This dissertation therefore develops two new methods, one single-step and one multi-step, that respectively attain a faster rate and a convergence guarantee under this structured setting. First, we propose an efficient single-step method, named the fast extragradient (FEG) method, which, for the first time, achieves the optimal O(1/k^2) rate on the squared gradient norm under the negative comonotonicity condition on the saddle gradient operator. Next, we propose a multi-step method, named the semi-anchored multi-step gradient descent ascent (SA-MGDA) method. SA-MGDA attains an O(1/k) rate on the squared gradient norm under the weak Minty variational inequality condition on the saddle gradient operator, which is weaker than negative comonotonicity.
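To make the anchoring mechanism behind FEG concrete, the following minimal Python sketch runs an anchored extragradient iteration, of the kind that FEG builds on, on the toy bilinear problem f(x, y) = xy. The anchor coefficient beta_k = 1/(k+2), the step size alpha = 1/8, and the toy problem itself are illustrative assumptions, not the dissertation's exact FEG update, which further incorporates the comonotonicity parameter.

```python
import numpy as np

# Toy bilinear saddle problem f(x, y) = x * y; its saddle gradient operator
# F(x, y) = (df/dx, -df/dy) = (y, -x) is monotone and 1-Lipschitz.
def F(z):
    x, y = z
    return np.array([y, -x])

z0 = np.array([1.0, 1.0])   # anchor point (also the initial iterate)
z = z0.copy()
alpha = 0.125               # illustrative step size (at most 1/(8L) with L = 1)

for k in range(1000):
    beta = 1.0 / (k + 2)    # anchor coefficient, shrinking as O(1/k)
    # Extrapolation step, pulled toward the anchor z0:
    z_half = z + beta * (z0 - z) - alpha * F(z)
    # Update step, using the gradient at the extrapolated point:
    z = z + beta * (z0 - z) - alpha * F(z_half)

print(np.linalg.norm(F(z)) ** 2)  # squared gradient norm; decays as O(1/k^2)
```

The anchoring term beta * (z0 - z) is what distinguishes this scheme from the plain extragradient method, and it is the ingredient responsible for improving the rate on the squared gradient norm from O(1/k) to O(1/k^2).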
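For reference, the two operator conditions compared above can be written as follows, where F denotes the saddle gradient operator and z* a stationary point with F(z*) = 0. Sign and scaling conventions for the parameter rho >= 0 vary across papers, so the constants below reflect one common convention rather than the dissertation's exact statement.

```latex
% Negative comonotonicity of F (for all z, z'):
\langle F(z) - F(z'),\; z - z' \rangle \;\ge\; -\rho \,\lVert F(z) - F(z') \rVert^2
% Weak Minty variational inequality (for all z, at some z^* with F(z^*) = 0):
\langle F(z),\; z - z^* \rangle \;\ge\; -\rho \,\lVert F(z) \rVert^2
```

Setting z' = z* in the first inequality and using F(z*) = 0 recovers the second, so every negatively comonotone operator satisfies the weak Minty variational inequality; this is the sense in which the weak MVI condition assumed for SA-MGDA is the weaker of the two.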