IJMLC 2019 Vol.9(6): 774-781 ISSN: 2010-3700
DOI: 10.18178/ijmlc.2019.9.6.872

Investigating GAN and VAE to train DCNN

Soundararajan Ezekiel, Larry Pearlstein, Abdullah Ali Alshehri, Adam Lutz, Jackson Zaunegger, and Waleed Farag

Abstract—The Convolutional Neural Network (CNN) is a class of deep artificial neural network that has recently gained special attention after demonstrating breakthrough accuracies in various classification tasks. CNNs have shown remarkable performance in tasks such as image classification, natural language processing, and speech recognition. There is evidence that the depth of a CNN plays an important role in its performance. Here we investigate the feasibility of improving the performance of shallow networks via fusion of the features computed by homogeneous and heterogeneous sets of pre-trained networks. We also explore a recently developed framework, the Generative Adversarial Network (GAN), in which two models, a Generator and a Discriminator, are trained simultaneously. The Generator attempts to produce data that mirrors the probability distribution of the “true” dataset, while the Discriminator is trained to distinguish between the true dataset and the counterfeit data produced by the Generator. Our work applies a GAN to the generation and fine-tuning of synthetic data used to train a deep CNN. Specifically, we investigate the use of a synthetic data generator along with a GAN to create an unlimited quantity of labeled training data, without the need for hand-labeling images. We apply this technique to the detection and localization of various vehicles, attempting to distinguish military trucks from other types of vehicles. A successful outcome could lead to rapid, cost-effective responses to security threats. We also investigate an alternative method for generating synthetic data, the Variational Auto-Encoder (VAE). VAEs are trained to encode and then decode input vectors; they support dimensionality reduction and can synthesize new training data. Finally, we evaluate our multiplicative fusion method against the fusion methods we investigated previously.
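For readers who want a concrete picture of the two generative frameworks named in the abstract, the sketch below shows a minimal adversarial training step and a minimal variational auto-encoder in PyTorch. This is an illustration of the general techniques only, not the authors' implementation; the 784-dimensional flattened input, the layer sizes, the latent dimensions, and all hyperparameters are assumptions chosen for brevity.

import torch
import torch.nn as nn

latent_dim = 100  # size of the Generator's noise input (assumed value)

# Generator: maps a noise vector to a flattened 28x28 image in [-1, 1].
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh())

# Discriminator: scores an image as real (near 1) or counterfeit (near 0).
D = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real):
    """One adversarial update; `real` is a (batch, 784) tensor in [-1, 1]."""
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update: label true data 1 and generated data 0.
    fake = G(torch.randn(n, latent_dim))
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the Discriminator call fakes real.
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The VAE counterpart encodes an input to the mean and log-variance of a Gaussian, samples a latent vector with the reparameterization trick, and decodes it; training minimizes reconstruction error plus a KL penalty (again, sizes are illustrative assumptions):

class VAE(nn.Module):
    """Minimal VAE: encode to a Gaussian, sample, decode."""
    def __init__(self, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(784, 400)
        self.mu = nn.Linear(400, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(400, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, 400), nn.ReLU(),
            nn.Linear(400, 784), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence from the unit Gaussian."""
    rec = nn.functional.binary_cross_entropy(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

Once trained, both models serve as synthetic-data generators: the GAN produces new samples from G applied to fresh noise, while the VAE decodes draws from the unit Gaussian prior.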

Index Terms—DCNN, GAN, VAE, synthetic data, data augmentation, machine learning.

S. Ezekiel is with the Computer Science Department, Indiana University of Pennsylvania, Indiana, PA USA (e-mail: sezekiel@iup.edu).
L. Pearlstein is with the Electrical & Computer Engineering Department, The College of New Jersey, Ewing, NJ USA.
A. Alshehri is with the Electrical Engineering Department, King Abdulaziz University, Jeddah, KSA.


Cite: Soundararajan Ezekiel, Larry Pearlstein, Abdullah Ali Alshehri, Adam Lutz, Jackson Zaunegger, and Waleed Farag, "Investigating GAN and VAE to train DCNN," International Journal of Machine Learning and Computing vol. 9, no. 6, pp. 774-781, 2019.

Copyright © 2019 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


General Information

  • E-ISSN: 2972-368X
  • Abbreviated Title: Int. J. Mach. Learn.
  • Frequency: Quarterly
  • DOI: 10.18178/IJML
  • Editor-in-Chief: Dr. Lin Huang
  • Executive Editor: Ms. Cherry L. Chen
  • Abstracting/Indexing: Inspec (IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library, CNKI
  • E-mail: ijml@ejournal.net

