Because low-density parity-check (LDPC) codes offer powerful error-correcting performance and can achieve high throughput, they are used in many application areas and have recently been adopted as the channel coding scheme in modern communication standards. Recent QC-LDPC decoder architectures mainly employ row-layered scheduling and the min-sum algorithm owing to their various advantages. In the row-layered min-sum decoding architecture, however, the complexity of the check node unit is very high: the complexity of the tree architecture inside the check node unit increases exponentially with the number of inputs, and row-layered min-sum decoding requires as many inputs as there are columns in the parity-check matrix. To reduce the complexity of the check node unit while maintaining error-correcting performance, this paper proposes a new scheduling method, named tiled scheduling, that reduces the number of inputs. In the proposed scheduling, the update unit is a tile rather than a row or a column, and the smaller the tile size, the better the error-correcting performance. The proposed scheduling shows better error-correcting performance than the row-layered offset min-sum algorithm while greatly reducing the complexity of the check node unit. An LDPC decoder based on tiled scheduling is realized in 65-nm CMOS technology and supports all lifting sizes defined in the 5G standard. It achieves a decoding throughput of more than 20 Gbps and occupies a smaller area than existing decoders.
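To make the complexity argument concrete, the following is a minimal software model of a generic offset min-sum check node update (a standard formulation, not the paper's tiled hardware architecture; the `offset` value of 0.5 is an illustrative assumption). The two-minimum search touches every input message, so the comparator tree that realizes it in hardware grows with the number of check node inputs, which is the cost the abstract refers to.

```python
def check_node_update(llrs, offset=0.5):
    """Offset min-sum update for one check node.

    llrs: list of incoming variable-to-check LLR messages.
    Returns the outgoing check-to-variable messages.
    """
    # Pass 1: sign product and the two smallest magnitudes over all inputs.
    # In hardware this pass is the comparator tree whose size scales
    # with the number of inputs.
    sign_prod = 1
    min1, min2 = float("inf"), float("inf")
    min1_idx = -1
    for i, llr in enumerate(llrs):
        if llr < 0:
            sign_prod = -sign_prod
        mag = abs(llr)
        if mag < min1:
            min2, min1, min1_idx = min1, mag, i
        elif mag < min2:
            min2 = mag

    # Pass 2: each output uses min1, except the position that produced
    # min1, which uses min2 (extrinsic message rule).
    out = []
    for i, llr in enumerate(llrs):
        mag = min2 if i == min1_idx else min1
        mag = max(mag - offset, 0.0)  # offset correction
        sign = sign_prod * (1 if llr >= 0 else -1)
        out.append(sign * mag)
    return out
```

For example, `check_node_update([2.0, -3.0, 1.0, 4.0])` returns `[-0.5, 0.5, -1.5, -0.5]`: every output magnitude comes from the smallest input magnitude (1.0) except the third, which uses the second-smallest (2.0), each reduced by the offset.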