In recent years, few-shot classification via meta-learning has emerged as an approach to address a key limitation of conventional deep learning, namely that a model requires a large amount of data to learn a new concept. Existing studies on meta-learning use episodic training, which generates episodes from the training dataset and optimizes the model on them. However, because the model becomes strongly tied to the given training data, its performance can degrade depending on the characteristics of the test dataset. In this dissertation, we propose a new episodic training method that provides robust performance across various datasets, based on random bias sampling, which draws many sub-datasets from the training data. The proposed method exploits the structural features of a large-scale hierarchical dataset to sample intentionally biased sub-datasets according to depth information: sub-datasets derived from deeper levels of the hierarchy are more strongly biased, so training on them gives the model the effect of experience with a variety of datasets. To evaluate whether the trained model performs well on different datasets for few-shot classification tasks, we apply our method to the matching network and the prototypical network and measure their accuracies on five datasets.
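The depth-biased sampling idea described above can be illustrated with a minimal sketch. The toy hierarchy, function names, and depth parameter below are illustrative assumptions, not the dissertation's actual implementation: a node is chosen at a given depth, and its descendant leaf classes form an intentionally biased sub-dataset from which an N-way episode is drawn.

```python
import random

# Hypothetical toy class hierarchy (illustrative only): each internal node
# maps to its children; nodes absent from the dict are leaf classes.
HIERARCHY = {
    "root": ["animal", "vehicle"],
    "animal": ["mammal", "bird"],
    "mammal": ["cat", "dog", "horse"],
    "bird": ["sparrow", "eagle", "owl"],
    "vehicle": ["car", "truck", "bicycle", "boat"],
}

def leaf_classes(node):
    """Collect all leaf classes under a hierarchy node."""
    if node not in HIERARCHY:
        return [node]
    leaves = []
    for child in HIERARCHY[node]:
        leaves.extend(leaf_classes(child))
    return leaves

def sample_biased_subdataset(depth, rng=random):
    """Sample a sub-dataset biased toward one subtree at the given depth.

    Walking deeper narrows the subtree to fewer, more closely related
    classes, so larger depths yield more strongly biased sub-datasets.
    """
    node = "root"
    for _ in range(depth):
        if node not in HIERARCHY:  # reached a leaf before the target depth
            break
        node = rng.choice(HIERARCHY[node])
    return leaf_classes(node)

def sample_episode(classes, n_way, rng=random):
    """Pick the classes for an N-way episode from a biased sub-dataset."""
    return rng.sample(classes, min(n_way, len(classes)))
```

In this sketch, repeating `sample_biased_subdataset` with varying depths exposes the learner to episodes drawn from many differently biased class pools, which is the intuition behind training on diverse sub-datasets.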