Preprint Article, Version 2. Preserved in Portico. This version is not peer-reviewed.

Recognising Image Shapes from Image Parts, not Neural Parts

Version 1 : Received: 17 January 2022 / Approved: 18 January 2022 / Online: 18 January 2022 (13:11:52 CET)
Version 2 : Received: 10 April 2022 / Approved: 11 April 2022 / Online: 11 April 2022 (10:17:57 CEST)

How to cite: Greer, K. Recognising Image Shapes from Image Parts, not Neural Parts. Preprints 2022, 2022010259.


This paper describes an image-processing method that makes use of image parts instead of neural parts. Neural networks excel at image or pattern recognition, which they achieve by constructing complex networks of weighted values that can cover the complexity of the pattern data. These features, however, are integrated holistically into the network, which can make them difficult to use individually. A different approach is to scan individual images and use a more local method to recognise the features in them. This paper suggests such a method, where a trick during the scan process not only recognises separate image parts as features, but also produces an overlap between the parts. It is therefore able to produce image parts with real meaning and to place them in a positional context. Tests show that it can be quite accurate on some handwritten digit datasets, although not as accurate as a neural network, for example. The fact that it offers an explainable interface could nevertheless make it interesting. It also fits well with an earlier cognitive model, and with an ensemble-hierarchy structure in particular.
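The scan-with-overlap idea from the abstract can be illustrated with a minimal sliding-window sketch. This is an assumption about the general technique, not the paper's actual algorithm: the function name, window size, and stride are illustrative, and the overlap simply comes from using a stride smaller than the window.

```python
import numpy as np

def scan_overlapping_parts(image, part_size=3, stride=2):
    """Slide a part_size x part_size window over a 2-D image.

    Because stride < part_size, neighbouring parts overlap, so each
    extracted part carries a positional context (its top-left origin)
    and shares pixels with the parts next to it.
    """
    h, w = image.shape
    parts = []
    for y in range(0, h - part_size + 1, stride):
        for x in range(0, w - part_size + 1, stride):
            part = image[y:y + part_size, x:x + part_size]
            parts.append(((y, x), part))  # (position, image part)
    return parts

# Example: a 7x7 toy "image" scanned into overlapping 3x3 parts.
img = np.arange(49).reshape(7, 7)
parts = scan_overlapping_parts(img)
```

With these parameters the 7x7 image yields a 3x3 grid of parts, and each part shares a one-pixel-wide strip with its horizontal and vertical neighbours, which is what allows separate features to be related by position.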


Keywords: image classifier; image part; quick learning; feature overlap; positional context


Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning

Comments (1)

Comment 1
Received: 11 April 2022
Commenter: Kieran Greer
Commenter's Conflict of Interests: Author
Comment: A serious error was found in the testing process, which has been corrected.