Pathologists visually examine cell morphology by observing biopsy slides under a microscope at different magnification factors, a process that is time-consuming and error-prone. Computer-aided whole slide image (WSI) analysis can therefore help pathologists reduce time, effort, and human error. With recent advances in deep learning for computer vision, convolutional neural networks (ConvNets) have gained attention in the medical domain and have shown significant progress in whole slide image classification. Existing deep learning approaches feed a ConvNet with small patches extracted from WSIs. However, it is unknown how the size of the extracted patches and the magnification factor of the WSI affect the performance of the ConvNet. We therefore construct several datasets by extracting patches from stomach histopathological imagery, varying the patch size and the magnification factor of the WSIs. A Densely Connected Convolutional Network (DenseNet) is used to classify patches as dysplastic, malignant, or benign. We assess the impact of these patch extraction variables using precision and recall. This study sheds light on how these factors affect model performance through the lens of data representation and provides a guideline for histopathological image data extraction.
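The patch extraction step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the use of a plain NumPy array as a stand-in for a WSI region, and the naive strided downsample approximating a lower magnification level are all assumptions for illustration. Real pipelines typically read pyramid levels from the slide file with a library such as OpenSlide.

```python
import numpy as np

def extract_patches(region, patch_size, stride=None):
    """Tile a region array (H, W, C) into square patches.

    `region` is a plain NumPy array standing in for a WSI region
    (hypothetical stand-in; real code would read it from a slide file).
    With stride == patch_size the tiling is non-overlapping.
    """
    stride = stride or patch_size
    h, w = region.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(region[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A region read at the highest magnification, e.g. 40x (synthetic data here).
region_40x = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
# Naive 2x downsample approximating a 20x view of the same tissue.
region_20x = region_40x[::2, ::2]

# Same patch size at two magnifications: lower magnification yields
# fewer patches, each covering a wider field of tissue.
p40 = extract_patches(region_40x, 128)  # 16 patches of 128x128
p20 = extract_patches(region_20x, 128)  # 4 patches of 128x128
```

Varying `patch_size` and the magnification level in this way produces the families of datasets whose effect on classifier precision and recall the study examines.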