Working Paper Article Version 2 This version is not peer-reviewed

Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, examined with FSGM

Version 1 : Received: 29 May 2021 / Approved: 31 May 2021 / Online: 31 May 2021 (13:49:55 CEST)
Version 2 : Received: 11 June 2021 / Approved: 14 June 2021 / Online: 14 June 2021 (12:14:09 CEST)

A peer-reviewed article of this Preprint also exists.

Rudd-Orthner, R.N.M.; Mihaylova, L. Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM. Sensors 2021, 21, 4772.

Abstract

This paper presents a non-random weight initialization method for the convolutional layers of neural networks, examined under the Fast Gradient Sign Method (FSGM) attack. The focus is on convolutional layers, which are the layers responsible for better-than-human performance in image categorization. The proposed method induces earlier learning through the use of striped forms and therefore requires less unlearning than the existing random, speckled initialization methods, consistent with the intuitions of Hubel and Wiesel. The proposed method achieves higher accuracy in a single epoch, with improvements of 3-5% in a well-known benchmark model; the first epoch is the most relevant, as it is the epoch immediately after initialization. The proposed method is also repeatable and deterministic, a desirable quality for safety-critical image classification applications within sensors, and it remains within the Glorot/Xavier and He initialization limits. The proposed non-random initialization was examined under adversarial perturbation attack through the FSGM approach with transferred learning, as a technique to measure the effect of controlled distortions on transferred learning, and the proposed method is found to be less compromised relative to the original validation dataset as the distortion of the datasets increases.
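The two mechanisms named in the abstract can be sketched briefly. The abstract does not give the authors' exact striped pattern, so `striped_kernels` below is a hypothetical illustration of a deterministic, stripe-based convolutional initializer bounded by the He uniform limit; `fgsm_perturb` is the standard Fast Gradient Sign Method perturbation step used for the attack:

```python
import numpy as np

def striped_kernels(n_filters, k, fan_in):
    """Hypothetical deterministic initializer: oriented stripe
    patterns (after Hubel and Wiesel's edge-selective cells),
    scaled to the He uniform bound sqrt(6 / fan_in).
    Being non-random, repeated calls are bit-identical."""
    limit = np.sqrt(6.0 / fan_in)
    kernels = np.zeros((n_filters, k, k))
    for f in range(n_filters):
        for i in range(k):
            for j in range(k):
                # Alternate stripe orientation per filter:
                # even filters horizontal stripes, odd vertical.
                idx = i if f % 2 == 0 else j
                kernels[f, i, j] = limit * (1.0 if idx % 2 == 0 else -1.0)
    return kernels

def fgsm_perturb(x, grad, epsilon):
    """FSGM step: shift each input element by epsilon in the
    sign of the loss gradient, increasing the loss."""
    return x + epsilon * np.sign(grad)

# Toy check of the attack on a linear "model" y = w.x with
# squared-error loss, whose gradient w.r.t. x is 2*(w.x)*w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
grad = 2.0 * (w @ x) * w
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
```

The deterministic initializer gives the repeatability the abstract emphasizes (identical weights on every run), while `fgsm_perturb` produces the controlled distortions used to probe the transferred models.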

Keywords

Repeatable Determinism; Weight Initialization; Convolutional Layers; Adversarial Perturbation Attack; FSGM; Transferred Learning; Machine Learning; Smart Sensors.

Subject

Computer Science and Mathematics, Algebra and Number Theory

Comments (1)

Comment 1
Received: 14 June 2021
Commenter: Richard Rudd-Orthner
Commenter's Conflict of Interests: Author
Comment: Update diagrams and remove reference


