Preprint Article Version 4 Preserved in Portico This version is not peer-reviewed

Preference Neural Network

Version 1 : Received: 7 April 2019 / Approved: 8 April 2019 / Online: 8 April 2019 (11:50:05 CEST)
Version 2 : Received: 5 June 2020 / Approved: 5 June 2020 / Online: 5 June 2020 (04:37:39 CEST)
Version 3 : Received: 5 June 2020 / Approved: 7 June 2020 / Online: 7 June 2020 (17:44:06 CEST)
Version 4 : Received: 23 December 2021 / Approved: 24 December 2021 / Online: 24 December 2021 (16:08:06 CET)

How to cite: Elgharabawy, A.; Prasad, M.; Lin, C. Preference Neural Network. Preprints 2019, 2019040091 (doi: 10.20944/preprints201904.0091.v4).


Equality and incomparability in multi-label ranking have not previously been addressed in learning. This paper proposes a new native ranker neural network for multi-label ranking, including incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifferent preference orders or subgroups that are equally ranked. PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function and employs preference orders and the Spearman rank correlation as objective functions. PNN is introduced in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture, introducing a new type of hidden neuron with multiple activation functions in the middle layer and duplicated output layers that reinforce the ranking by increasing the number of weights. PNN accepts a single data instance as input; each output neuron represents one label, and its output value represents that label's preference value. PNN is evaluated on a new preference-mining data set containing repeated label values, which has not been experimented on before. The SS and PSS functions speed up learning, and PNN outperforms five previously proposed methods for strict label ranking in terms of accuracy with high computational efficiency.
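The two building blocks named in the abstract, a smooth staircase activation and the Spearman rank correlation objective, can be sketched as follows. This is an illustrative sketch only: the staircase here is built as a sum of shifted sigmoids with a hypothetical `steepness` parameter, and the paper's exact PSS/SS parameterization may differ.

```python
import numpy as np

def smooth_staircase(x, n_steps=4, steepness=25.0):
    """Illustrative smooth staircase: a sum of shifted sigmoids.

    Produces plateaus at integer levels 0..n_steps, with a smooth
    step at each integer boundary 1..n_steps. Large `steepness`
    makes the plateaus flatter (closer to discrete rank values).
    """
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for k in range(1, n_steps + 1):
        y += 1.0 / (1.0 + np.exp(-steepness * (x - k)))
    return y

def spearman_rho(rank_true, rank_pred):
    """Spearman rank correlation between two ranking vectors
    (assumes no ties; 1 = identical ranking, -1 = reversed)."""
    d = np.asarray(rank_true, dtype=float) - np.asarray(rank_pred, dtype=float)
    n = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))
```

With this construction, inputs far below the first boundary map near rank 0 and inputs above the last boundary map near rank `n_steps`, which is how a staircase activation can emit (near-)discrete preference values while remaining differentiable for training.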


Preference learning; Multi-label ranking; Neural network; Kendall’s tau; Preference mining


MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics

Comments (1)

Comment 1
Received: 24 December 2021
Commenter: Ayman Elgharabawy
Commenter's Conflict of Interests: Author
Comment: Adding Preference Net as a deep learning approach of PNN for Image classification

