Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Multi-Stage Meta-Learning for Few-Shot with Lie Group Network Constraint

Version 1 : Received: 2 March 2020 / Approved: 3 March 2020 / Online: 3 March 2020 (11:09:53 CET)

How to cite: Dong, F.; Li, F. Multi-Stage Meta-Learning for Few-Shot with Lie Group Network Constraint. Preprints 2020, 2020030035. https://doi.org/10.20944/preprints202003.0035.v1

Abstract

Deep learning has achieved many successes across a wide range of fields, but when trainable samples are extremely limited, deep models tend to underfit or overfit the few available examples. Meta-learning was proposed to address few-shot learning and fast adaptation. A meta-learner acquires common knowledge by training on a large number of tasks sampled from a certain data distribution, which equips it to generalize when facing unseen new tasks. Because of the limited samples, most approaches use only shallow neural networks to avoid overfitting and to ease training, which wastes much of the extra information available when adapting to unseen tasks. Gradient descent based on Euclidean space also makes the meta-learner's updates inaccurate. These issues make it hard for many meta-learning models to extract features from samples and to update network parameters. In this paper, we propose a novel method that uses a multi-stage joint training approach to overcome the bottleneck in the adaptation process. To accelerate adaptation, we also constrain the network to the Stiefel manifold, so that the meta-learner can perform more stable gradient descent within a limited number of steps. Experiments on mini-ImageNet show that our method reaches better accuracy under the 5-way 1-shot and 5-way 5-shot settings.
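The full paper is not reproduced on this page, so the following is only a minimal sketch of the kind of Stiefel-manifold-constrained gradient step the abstract alludes to, not the authors' actual architecture or training procedure. It is written in PyTorch; the matrix shapes, learning rate, and toy loss are illustrative assumptions. The idea shown is standard Riemannian optimization on the Stiefel manifold: project the Euclidean gradient onto the tangent space at the current point, take a step, and map the result back onto the manifold with a QR retraction so the weight matrix stays orthonormal.

```python
import torch

def stiefel_project_tangent(X, G):
    # Project the Euclidean gradient G onto the tangent space of the
    # Stiefel manifold at X (Euclidean metric): G - X * sym(X^T G).
    XtG = X.t() @ G
    sym = 0.5 * (XtG + XtG.t())
    return G - X @ sym

def qr_retraction(Y):
    # Map an arbitrary matrix back onto the Stiefel manifold via the
    # Q factor of a QR decomposition (sign-corrected for uniqueness).
    Q, R = torch.linalg.qr(Y)
    sign = torch.sign(torch.sign(torch.diagonal(R)) + 0.5)  # avoid zeros
    return Q * sign  # flips columns whose R diagonal is negative

def stiefel_sgd_step(X, G, lr=0.05):
    # One constrained gradient-descent step: project, move, retract.
    xi = stiefel_project_tangent(X, G)
    return qr_retraction(X - lr * xi)

# Toy usage: keep a 64x32 weight matrix orthonormal while descending a
# quadratic loss toward a random target (shapes and loss are made up).
X = qr_retraction(torch.randn(64, 32))
target = torch.randn(64, 32)
for _ in range(5):
    X.requires_grad_(True)
    loss = ((X - target) ** 2).sum()
    G, = torch.autograd.grad(loss, X)
    X = stiefel_sgd_step(X.detach(), G)
# X^T X remains the identity after every update.
print(torch.allclose(X.t() @ X, torch.eye(32), atol=1e-5))
```

The retraction is what makes updates stable in a small number of adaptation steps: every iterate satisfies the orthonormality constraint exactly, rather than drifting off the manifold as an unconstrained Euclidean step would.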

Keywords

meta-learning; Lie group; machine learning; deep learning; convolutional neural network

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
