Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Improving RENet by Introducing Modified Cross Attention for Few-Shot Classification

Version 1 : Received: 28 May 2022 / Approved: 6 June 2022 / Online: 6 June 2022 (09:42:13 CEST)

How to cite: Chang, C.; Yu, T. Improving RENet by Introducing Modified Cross Attention for Few-Shot Classification. Preprints 2022, 2022060079. https://doi.org/10.20944/preprints202206.0079.v1

Abstract

Few-shot classification is challenging because unlabeled query samples must be classified with only a handful of labeled support samples. Cross attention has been shown to produce more discriminative features for few-shot learning. This paper extends that idea and proposes two cross attention modules: cross scaled attention (CSA) and cross aligned attention (CAA). Specifically, CSA scales different feature maps so that they are better matched, while CAA applies principal component analysis to further align features from different images. Experiments show that both CSA and CAA achieve consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmarks (miniImageNet, tieredImageNet, CIFAR-FS, and CUB-200-2011); CSA is slightly faster, while CAA achieves higher accuracy.
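The abstract describes the two modules only at a high level. The PyTorch snippet below is a minimal, hypothetical sketch of what "scaling feature maps" (CSA) and "PCA-based alignment" (CAA) could look like inside a cross-attention step between a support and a query feature map. The function names, shapes, and design choices are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the authors' code): cross attention between a
# support and a query feature map, with CSA-style scaling and CAA-style
# PCA alignment as described at a high level in the abstract.
import torch
import torch.nn.functional as F


def cross_scaled_attention(f_support, f_query):
    """Hypothetical CSA-style step: L2-normalize (scale) both feature maps so
    their magnitudes match, then compute a query-to-support attention map.

    f_support, f_query: tensors of shape (C, H, W).
    Returns an (H*W, H*W) attention map.
    """
    c, h, w = f_support.shape
    s = F.normalize(f_support.reshape(c, h * w), dim=0)  # scale support features
    q = F.normalize(f_query.reshape(c, h * w), dim=0)    # scale query features
    return torch.softmax(q.t() @ s / c ** 0.5, dim=-1)   # scaled dot-product attention


def cross_aligned_attention(f_support, f_query, k=32):
    """Hypothetical CAA-style step: project both feature maps onto the top-k
    principal components of their joint features to align them, then attend.
    """
    c, h, w = f_support.shape
    s = f_support.reshape(c, h * w)
    q = f_query.reshape(c, h * w)
    joint = torch.cat([s, q], dim=1).t()              # (2*H*W, C) feature samples
    _, _, v = torch.pca_lowrank(joint, q=k)           # principal directions, (C, k)
    s_aligned = v.t() @ s                              # (k, H*W) aligned support
    q_aligned = v.t() @ q                              # (k, H*W) aligned query
    return torch.softmax(q_aligned.t() @ s_aligned / k ** 0.5, dim=-1)


# Example usage with toy 64-channel, 5x5 feature maps.
support, query = torch.randn(64, 5, 5), torch.randn(64, 5, 5)
attn = cross_aligned_attention(support, query, k=16)  # (25, 25) attention map
```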

Keywords

few-shot classification; attention

Subject

Computer Science and Mathematics, Computer Science
