Deakin University

Learning the naive Bayes classifier with optimization models

journal contribution
posted on 2013-12-01, 00:00, authored by S Taheri, Musa Mammadov
Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real-world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with the known structure, class probabilities and conditional probabilities are calculated using training data, and the values of these probabilities are then used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier in which both class probabilities and conditional probabilities are treated as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real-world binary classification data sets, where continuous features are discretized by applying three different methods. The performance of these models is compared with the naive Bayes classifier, tree-augmented naive Bayes, the SVM, C4.5 and the nearest neighbor classifier. The obtained results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier while maintaining its simple structure.
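The standard learning process the abstract describes, estimating class priors and per-feature conditional probabilities from training counts, then scoring a new observation by their product, can be sketched as follows. This is a minimal illustration of the baseline classifier only, not the paper's optimization models; the Laplace smoothing parameter `alpha` and all function names are assumptions introduced here for clarity.

```python
from collections import Counter, defaultdict

def train_naive_bayes(X, y, alpha=1.0):
    """Estimate class priors P(c) and conditionals P(x_j | c) from counts.
    alpha applies Laplace smoothing (an illustrative choice; the paper
    instead finds these probabilities by solving optimization problems)."""
    n = len(y)
    classes = sorted(set(y))
    priors = {c: sum(1 for t in y if t == c) / n for c in classes}
    # counts[c][j][v] = rows of class c whose feature j takes value v
    counts = {c: defaultdict(Counter) for c in classes}
    values = defaultdict(set)  # distinct observed values per feature
    for row, c in zip(X, y):
        for j, v in enumerate(row):
            counts[c][j][v] += 1
            values[j].add(v)

    def conditional(c, j, v):
        total = sum(counts[c][j].values())
        return (counts[c][j][v] + alpha) / (total + alpha * len(values[j]))

    return priors, conditional

def predict(priors, conditional, row):
    """Return the class maximizing P(c) * prod_j P(x_j | c)."""
    best, best_score = None, -1.0
    for c, p in priors.items():
        score = p
        for j, v in enumerate(row):
            score *= conditional(c, j, v)
        if score > best_score:
            best, best_score = c, score
    return best
```

The paper's contribution replaces the counting step with optimization models in which the priors and conditionals are decision variables, so the same `predict` rule is kept while the probability values themselves are fitted.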

History

Journal

International journal of applied mathematics and computer science

Volume

23

Issue

4

Pagination

787 - 795

Publisher

Technical University of Zielona Gora

Location

Zielona Góra, Poland

eISSN

1641-876X

Language

eng

Publication classification

C1.1 Refereed article in a scholarly journal

Copyright notice

2013, Taheri S and Mammadov M