Sequential targeting: A continual learning approach for data imbalance in text classification

Text classification has numerous use cases, including sentiment analysis, spam detection, document classification, and hate speech detection. In realistic settings, text classification confronts imbalanced data conditions in which the classes of interest usually make up only a small fraction of the data. Deep neural networks used for text classification, such as recurrent neural networks and transformer networks, lack efficient methods for addressing imbalanced data. Traditional data-level methods for mitigating distributional skew include oversampling and undersampling: oversampling degrades the quality of the original language representation of the sparse minority-class data, whereas undersampling fails to fully exploit the rich context of the majority classes. We address these issues with a data-driven approach that enforces continual learning on imbalanced data: we partition the training data distribution into mutually exclusive subsets and perform continual learning, treating the individual subsets as distinct tasks. We demonstrate the effectiveness of our method through experiments on the IMDB dataset and on datasets constructed from real-world data. The experimental results show that, in terms of the F1-score, the proposed method improves on the baseline by 56.38 %p on the IMDB dataset and by 16.89 %p and 34.76 %p on the constructed datasets.
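The abstract does not specify how the training distribution is partitioned, so the following is only a minimal sketch of one plausible scheme: the majority class is split into mutually exclusive chunks, each chunk is paired with the full minority set to form a task, and a single model is updated sequentially across tasks. The names `partition_tasks`, `train_sequentially`, and `fit_step` are hypothetical, not from the paper.

```python
import numpy as np

def partition_tasks(X, y, minority_label, n_tasks, seed=0):
    """Split the majority class into n_tasks mutually exclusive chunks,
    pairing each chunk with the full minority set to form one 'task'.
    (Assumed partitioning scheme for illustration only.)"""
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == minority_label)
    maj_idx = np.flatnonzero(y != minority_label)
    rng.shuffle(maj_idx)
    tasks = []
    for chunk in np.array_split(maj_idx, n_tasks):
        idx = np.concatenate([chunk, min_idx])
        rng.shuffle(idx)
        tasks.append((X[idx], y[idx]))
    return tasks

def train_sequentially(model, tasks, fit_step):
    # Continual learning over the partition: visit each subset once,
    # in sequence, updating the same model at every step.
    for X_t, y_t in tasks:
        fit_step(model, X_t, y_t)
    return model
```

Each task is far more balanced than the full dataset, so no minority example is discarded (unlike undersampling) and none is duplicated within a task (unlike oversampling).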
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
Issue Date
2021-10
Language
English
Article Type
Article
Citation

EXPERT SYSTEMS WITH APPLICATIONS, v.179

ISSN
0957-4174
DOI
10.1016/j.eswa.2021.115067
URI
http://hdl.handle.net/10203/286401
Appears in Collection
RIMS Journal Papers