Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Self-Improving Generative Artificial Neural Network for Pseudo-Rehearsal Incremental Class Learning

Version 1: Received: 5 July 2019 / Approved: 8 July 2019 / Online: 8 July 2019 (14:29:28 CEST)

A peer-reviewed article of this Preprint also exists.

Mellado, D.; Saavedra, C.; Chabert, S.; Vásquez, R.; Salas, R. Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning. Algorithms 2019, 12, 206.

Abstract

Deep learning models are part of the family of artificial neural networks and, as such, they suffer from catastrophic interference when they learn sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, in this article we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that is able to ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty detection model to automatically detect samples of new classes; moreover, an adversarial autoencoder is used to produce samples of previously learned classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty detection module implemented using an OpenMax activation function. Using the EMNIST dataset, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN is able to retain previous knowledge with only gradual forgetting across learning sequences. Moreover, SIGANN can detect new classes hidden in the data and, therefore, proceed with incremental class learning.
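As a rough illustration of the three-module pipeline the abstract describes, below is a minimal PyTorch sketch. It is not the authors' implementation: the layer sizes, the `grow` helper for adding output units, and the softmax-confidence threshold standing in for the OpenMax novelty detector are all assumptions made for this example. Because an adversarial autoencoder matches its latent code to a chosen prior, sampling latent vectors from that prior and decoding them yields pseudo-samples of previously learned classes for rehearsal; here a plain autoencoder stands in for the adversarial one.

```python
# Minimal sketch of a SIGANN-style pseudo-rehearsal pipeline (illustrative
# assumptions throughout; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    # Classifier module: a small convolutional network sized for
    # 28x28 grayscale EMNIST images.
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

    def grow(self, extra):
        # Hypothetical helper: widen the output layer when novel classes
        # are detected, preserving the weights of the old classes.
        old = self.head
        new = nn.Linear(old.in_features, old.out_features + extra)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        self.head = new

class Autoencoder(nn.Module):
    # Generator module: a plain autoencoder standing in for the paper's
    # adversarial autoencoder.
    def __init__(self, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def decode(self, z):
        return self.decoder(z).view(-1, 1, 28, 28)

    def forward(self, x):
        return self.decode(self.encoder(x))

def pseudo_rehearsal_batch(generator, classifier, n, threshold=0.9):
    # Decode random latent vectors into pseudo-samples of previously
    # learned classes and label them with the frozen classifier. The
    # softmax-confidence threshold is a simple stand-in for OpenMax:
    # low-confidence (possibly novel or malformed) samples are rejected.
    with torch.no_grad():
        z = torch.randn(n, generator.latent_dim)
        x = generator.decode(z)
        probs = F.softmax(classifier(x), dim=1)
        conf, y = probs.max(dim=1)
        keep = conf > threshold
    return x[keep], y[keep]
```

For a new learning sequence, one would mix such pseudo-samples with the real samples of the newly detected classes, call `classifier.grow(k)` for the k novel classes, and retrain both modules so that the generator can also rehearse the new classes in later sequences.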

Keywords

Artificial Neural Networks; Deep Learning; Generative Neural Networks; Incremental Learning; Novelty Detection; Catastrophic Interference

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
