IJMLC 2021 Vol.11(2): 98-102 ISSN: 2010-3700
DOI: 10.18178/ijmlc.2021.11.2.1020

Boosted Supervised Intensional Learning Supported by Unsupervised Learning

A. C. M. Fong and G. Hong

Abstract—Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best performing ML and DL algorithms lack explainability: it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL model's failure modes are not well understood. This lack of explainability also increases the risk of attacks by adversarial actors, because such attacks can be obscured within a decision-making process that lacks transparency. This paper describes an intensional learning approach that uses boosting to enhance prediction performance while minimizing reliance on annotated data. The intensional information is derived from an unsupervised preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, the proposed approach achieves accuracy similar to that of extensional learning alone while using only a small fraction of the original training data set.
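The general idea described in the abstract can be sketched as follows. This is an illustrative approximation, not the authors' exact method: the dataset (scikit-learn's digits, standing in for MNIST), the cluster count, the use of cluster-distance features as the "intensional" information, and the choice of gradient boosting are all assumptions for the sake of a runnable example.

```python
# Sketch: derive intensional features from unsupervised clustering,
# then train a boosted classifier using only a small labeled fraction.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # small stand-in for MNIST

# Unsupervised preprocessing: cluster ALL samples; no labels are needed here.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)

# "Intensional" features (assumed form): each sample's distance to every
# cluster centroid, appended to the raw pixel features.
X_aug = np.hstack([X, kmeans.transform(X)])

# Supervised boosting on a small labeled fraction (5% here).
X_train, X_test, y_train, y_test = train_test_split(
    X_aug, y, train_size=0.05, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"accuracy with 5% of labels: {acc:.2f}")
```

The key point the sketch mirrors is that the clustering step consumes the full unlabeled pool, so the boosted learner sees structure from all the data while only a small annotated subset is used for supervised training.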

Index Terms—Intelligent computing, machine intelligence, machine learning, neural networks, intensional information, semi-supervised learning.

The authors are with Department of Computer Science, Western Michigan University, Kalamazoo, MI 49009 USA (e-mail: acmfong@gmail.com).


Cite: A. C. M. Fong and G. Hong, "Boosted Supervised Intensional Learning Supported by Unsupervised Learning," International Journal of Machine Learning and Computing, vol. 11, no. 2, pp. 98-102, 2021.

Copyright © 2021 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).


General Information

  • ISSN: 2010-3700 (Online)
  • Abbreviated Title: Int. J. Mach. Learn. Comput.
  • Frequency: Bimonthly
  • DOI: 10.18178/IJMLC
  • Editor-in-Chief: Dr. Lin Huang
  • Executive Editor: Ms. Cherry L. Chen
  • Abstracting/Indexing: Inspec (IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library.
  • E-mail: ijmlc@ejournal.net
