Preprint Article · Version 5 · Preserved in Portico · This version is not peer-reviewed

Preference Neural Network

Version 1 : Received: 7 April 2019 / Approved: 8 April 2019 / Online: 8 April 2019 (11:50:05 CEST)
Version 2 : Received: 5 June 2020 / Approved: 5 June 2020 / Online: 5 June 2020 (04:37:39 CEST)
Version 3 : Received: 5 June 2020 / Approved: 7 June 2020 / Online: 7 June 2020 (17:44:06 CEST)
Version 4 : Received: 23 December 2021 / Approved: 24 December 2021 / Online: 24 December 2021 (16:08:06 CET)
Version 5 : Received: 18 April 2023 / Approved: 19 April 2023 / Online: 19 April 2023 (07:43:17 CEST)

A peer-reviewed article of this Preprint also exists.

Elgharabawy, A.; Prasad, M.; Lin, C.-T. Preference Neural Network. IEEE Transactions on Emerging Topics in Computational Intelligence 2023, 1–15, doi:10.1109/TETCI.2023.3268707.

Abstract

Equality and incomparability in multi-label ranking have not previously been addressed in learning. This paper proposes a new native ranker neural network for the multi-label ranking problem, including incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifferent preference orders or subgroups that are ranked equally. PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function, and it represents preference orders using the Spearman ranking correlation as its objective function. PNN is introduced in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture that introduces a new type of hidden neuron with multiple activation functions in the middle layer and duplicated output layers, reinforcing the ranking by increasing the number of weights. PNN accepts a single data instance as input; its output neurons correspond to the labels, and each output value represents a preference value. PNN is evaluated on a new preference-mining data set containing repeated label values, which has not been experimented on before. The SS and PSS functions speed up learning, and PNN outperforms five previously proposed methods for strict label ranking in terms of accuracy with high computational efficiency.
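The two building blocks named above, a smooth staircase activation and a Spearman-correlation ranking objective, can be illustrated with a short NumPy sketch. This is an assumption-laden illustration, not the paper's exact definitions: the staircase here is built from shifted tanh steps, the correlation uses the standard no-ties Spearman formula, and the function names smooth_staircase and spearman_rho are hypothetical.

```python
import numpy as np

def smooth_staircase(x, n_steps=3, slope=25.0):
    """Smooth approximation of a staircase: a sum of shifted tanh steps,
    so the output forms near-flat plateaus at integer preference values
    0..n_steps. Illustrative only; the paper's SS/PSS may differ."""
    x = np.asarray(x, dtype=float)
    return sum(0.5 * (np.tanh(slope * (x - k)) + 1.0)
               for k in range(1, n_steps + 1))

def spearman_rho(rank_true, rank_pred):
    """Spearman's rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)),
    which is exact only when there are no tied ranks."""
    d = np.asarray(rank_true, float) - np.asarray(rank_pred, float)
    n = d.size
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Plateaus: inputs below 1 map near 0, between 1 and 2 near 1, and so on.
print(smooth_staircase(np.array([0.2, 1.5, 2.5, 3.8])))  # ~[0. 1. 2. 3.]

# Ranking objective: rho = 1.0 would mean perfectly matched orders.
print(spearman_rho([1, 2, 3, 4], [1, 3, 2, 4]))          # 0.8
```

Mapping inputs onto near-integer plateaus is what lets each output neuron emit a discrete preference value directly, while a correlation-based objective scores the whole predicted order rather than individual label errors.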

Keywords

Preference learning; Multi-label ranking; Neural network; Kendall’s tau; Preference mining

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

Comments (1)

Comment 1
Received: 19 April 2023
Commenter: Ayman Elgharabawy
Commenter's Conflict of Interests: Author
Comment: This is the accepted version published in IEEE Transactions on Emerging Topics in Computational Intelligence.

