Exponential Loss Minimization for Learning Weighted Naive Bayes Classifiers

Cited 8 times in Web of Science; cited 0 times in Scopus
The naive Bayesian classification method has received significant attention in the field of supervised learning. However, it relies on the unrealistic assumption that all attributes are equally important. Attribute weighting is one way to relax this assumption and consequently improve the performance of naive Bayes classification. This study, with a focus on nonlinear optimization problems, proposes four attribute weighting methods that minimize four different loss functions. The proposed loss functions belong to a family of exponential functions that makes the optimization problems more straightforward to solve, provides analytical properties of the trained classifier, and allows for simple modifications of the loss function so that the naive Bayes classifier becomes robust to noisy instances. The research begins with the standard exponential loss, which is sensitive to noise, and develops a series of modifications that make naive Bayes classifiers more robust to noisy instances. Based on numerical experiments conducted on 28 datasets from the UCI machine learning repository, we confirmed that the proposed scheme successfully determines optimal attribute weights and improves classification performance.
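The idea in the abstract can be sketched in a few lines: give each attribute's log-likelihood ratio a weight, and choose the weights by minimizing the exponential loss over the training margins. The sketch below is an illustrative assumption, not the authors' exact algorithm: the function name, the Laplace smoothing, and the plain gradient-descent loop are all choices made for this example.

```python
import numpy as np

def fit_weighted_nb(X, y, lr=0.05, n_iter=200, laplace=1.0):
    """X: (n, d) integer-coded categorical attributes; y in {-1, +1}.

    Returns the smoothed log prior odds, the learned attribute
    weights w, and the per-instance log-likelihood ratios R.
    """
    n, d = X.shape
    pos, neg = X[y == 1], X[y == -1]
    prior = np.log((len(pos) + laplace) / (len(neg) + laplace))

    # R[i, j] = log P(x_ij | y=+1) / P(x_ij | y=-1), Laplace-smoothed.
    R = np.zeros((n, d))
    for j in range(d):
        vals = np.unique(X[:, j])
        k = len(vals)
        for v in vals:
            p_pos = (np.sum(pos[:, j] == v) + laplace) / (len(pos) + k * laplace)
            p_neg = (np.sum(neg[:, j] == v) + laplace) / (len(neg) + k * laplace)
            R[X[:, j] == v, j] = np.log(p_pos / p_neg)

    # Minimize sum_i exp(-y_i (prior + R_i . w)) over the weights w
    # by plain gradient descent (an assumption for this sketch).
    w = np.ones(d)
    for _ in range(n_iter):
        margin = y * (prior + R @ w)        # classification margins
        grad = -(np.exp(-margin) * y) @ R   # gradient of the exponential loss
        w -= lr * grad / n
    return prior, w, R
```

Training-set scores are `prior + R @ w`, and an instance is labeled +1 when its score is positive. Because the exponential loss penalizes negative margins exponentially, a few noisy instances can dominate the gradient, which is exactly the sensitivity the paper's robust loss modifications address.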
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022
Language
English
Article Type
Article
Citation

IEEE ACCESS, v.10, pp.22724 - 22736

ISSN
2169-3536
DOI
10.1109/ACCESS.2022.3155231
URI
http://hdl.handle.net/10203/322762
Appears in Collection
IE-Journal Papers (Journal Papers)
