Small perturbations of graph structure have been shown to cause catastrophic performance degradation in most Graph Neural Networks (GNNs). Existing defenses, which mainly rest on the homophily assumption, cannot address all structural attacks and fail to provide reasonable robustness on graphs in general. Our empirical analysis of structural attacks reveals that an effective attack primarily injects edges that adjust both the local heterophily level and the distance in feature space. Raising, for the first time, the problems of un-screening and mis-screening in homophily-based defenses, we propose a novel framework that resolves these issues and thereby improves GNN robustness. Experiments across a variety of attack settings, datasets, and base architectures show that incorporating our framework into a GNN adequately restores its degraded performance and outperforms existing baselines, improving robustness to structural perturbations by 3% on a homophilous graph (Cora) and by 9% on a heterophilous graph (Wisconsin).