Submitted:
05 June 2025
Posted:
06 June 2025
Abstract
Keywords:
1. Introduction
2. Materials and Methods
2.1 Data Set Acquisition
2.1.1 Pork Freshness Grading Criteria
2.1.2 Pork Freshness Dataset
2.2 Few-Shot Learning Method Based on BBSNet
2.2.1 Composition of BBSNet
2.2.2 Upgrading of ShuffleNetV2 Module
2.2.3 Accelerating Feature Fitting with Batch Channel Normalization
2.2.4 Upgrading of BiFormer Module
2.2.5 Probability Distribution Function Based on Cosine Similarity
2.3 Fine-Tuning Strategy
2.3.1 Updating Cross-Entropy Loss Function
2.3.2 Updating Entropy Regularization Function
2.4 Model Training
2.4.1 Pre-training Setting
2.4.2 Fine-tuning Setting
2.5 Model Evaluation Metrics
3. Results and Discussion
3.1 Performance Comparison with Classic Algorithms
3.1.1 Comparison with Classic Few-Shot Models
3.1.2 Comparison with Classical General-Purpose Algorithms
3.2 Impact of Batch Channel Normalization
3.3 Impact of the BiFormer Attention Mechanism
3.4 Impact of the Number of Support Set Samples
3.5 Impact of the Number of Query Set Samples
3.6. Validation of Model Generalization on Large-Scale Unknown Samples
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| BBSNet | BCN-BiFormer-ShuffleNetV2 |
| BCN | Batch Channel Normalization |
| TVB-N | Total Volatile Basic Nitrogen |
| FPGA-ARM | Field-Programmable Gate Array-Advanced RISC Machines |
| MAML | Model-Agnostic Meta-Learning |
| CNNs | Convolutional Neural Networks |
| BN | Batch Normalization |
| LN | Layer Normalization |
References
- Yang, Z.; Chen, Q.; Wei, L. (2024). Active and smart biomass film with curcumin Pickering emulsion stabilized by chitosan-adsorbed laurate-esterified starch for meat freshness monitoring. International Journal of Biological Macromolecules, 275. [CrossRef]
- Zhang, F.; Kang, T.; Sun, J.; Wang, J.; Zhao, W.; Gao, S.; et al. (2022). Improving TVB-N prediction in pork using portable spectroscopy with just-in-time learning model updating method. Meat Science, 188, 108801. [CrossRef]
- Cheng, M.; Yan, X.; Cui, Y.; Han, M.; Wang, X.; Wang, J.; et al. (2022). An eco-friendly film of pH-responsive indicators for smart packaging. Journal of Food Engineering, 321, 110943. [CrossRef]
- Lee, S.; Norman, J.M.; Gunasekaran, S.; van Laack, R.L.J.M.; Kim, B.C.; Kauffman, R.G. (2000). Use of electrical conductivity to predict water-holding capacity in post-rigor pork. Meat Science, 55(4), 385–389. [CrossRef]
- Nie, J.; Wu, K.; Li, Y.; Li, J.; Hou, B. (2024). Advances in hyperspectral remote sensing for precision fertilization decision making: a comprehensive overview. Turkish Journal of Agriculture and Forestry, 48(6), 1084–1104. [CrossRef]
- Zhou, L.; Wang, X.; Zhang, C.; Zhao, N.; Taha, M.F.; He, Y.; Qiu, Z. (2022). Powdery food identification using NIR spectroscopy and extensible deep learning model. Food and Bioprocess Technology, 15(10), 2354–2362. [CrossRef]
- Guo, T.; Huang, M.; Zhu, Q.; Guo, Y.; Qin, J. (2017). Hyperspectral image-based multi-feature integration for TVB-N measurement in pork. Journal of Food Engineering, 218, 61–68. [CrossRef]
- Zhuang, Q.; Peng, Y.; Yang, D.; Wang, Y.; Zhao, R.; Chao, K.; et al. (2022). Detection of frozen pork freshness by fluorescence hyperspectral image. Journal of Food Engineering, 316. [CrossRef]
- Musatov, V.Y.; Sysoev, V.V.; Sommer, M.; Kiselev, I. (2010). Assessment of meat freshness with metal oxide sensor microarray electronic nose: a practical approach. Sensors and Actuators B: Chemical, 144(1), 99–103. [CrossRef]
- Tian, X.Y.; Cai, Q.; Zhang, Y.M. (2012). Rapid classification of hairtail fish and pork freshness using an electronic nose based on the PCA method. Sensors, 12(12), 260–278.
- Zhang, J.; Wu, J.; Wei, W.; Wang, F.; Jiao, T.; Li, H.; Chen, Q. (2023). Olfactory imaging technology and detection platform for detecting pork meat freshness based on IoT. Computers and Electronics in Agriculture, 215, 108384. [CrossRef]
- Huang, L.; Zhao, J.; Chen, Q.; Zhang, Y. (2014). Nondestructive measurement of total volatile basic nitrogen (TVB-N) in pork meat by integrating near-infrared spectroscopy, computer vision and electronic nose techniques. Food Chemistry, 145, 228–236. [CrossRef]
- Liu, C.; Chu, Z.; Weng, S.; Zhu, G.; Han, K.; Zhang, Z.; Huang, L.; Zhu, Z.; Zheng, S. (2022). Fusion of electronic nose and hyperspectral imaging for mutton freshness detection using input-modified convolution neural network. Food Chemistry, 385, 132651. [CrossRef]
- Cheng, J.; Sun, J.; Shi, L.; Dai, C. (2024). An effective method fusing electronic nose and fluorescence hyperspectral imaging for the detection of pork freshness. Food Bioscience, 59, 103880. [CrossRef]
- Sun, X.; Young, J.; Liu, J.; Newman, D. (2018). Prediction of pork loin quality using online computer vision system and artificial intelligence model. Meat Science, 140, 72–77. [CrossRef]
- Chen, D.; Wu, P.; Wang, K.; Wang, S.; Ji, X.; Shen, Q.; Yu, Y.; Qiu, X.; Xu, X.; Liu, Y. (2022). Combining computer vision score and conventional meat quality traits to estimate the intramuscular fat content using machine learning in pigs. Meat Science, 185, 108727. [CrossRef]
- Liu, H.; Zhan, W.; Du, Z.; Xiong, M.; Han, T.; Wang, P.; Li, W.; Sun, Y. (2023). Prediction of the intramuscular fat content of pork cuts by improved U2-Net model and clustering algorithm. Food Bioscience, 53, 102848. [CrossRef]
- Barbedo, J.G.A. (2018). Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Computers and Electronics in Agriculture, 153, 46–53. [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1). [CrossRef]
- Elish, M.A.; Elish, K. (2021). A comprehensive survey of recent trends in deep learning for digital images augmentation. Artificial Intelligence Review, 55(1), 59–112.
- Li, Y.; Yang, J. (2021). Meta-learning baselines and database for few-shot classification in agriculture. Computers and Electronics in Agriculture, 182, 106055. [CrossRef]
- Nie, J.; Yuan, Y.; Li, Y.; Wang, H.; Li, J.; Wang, Y.; Song, K.; Ercisli, S. (2024). Few-shot learning in intelligent agriculture: A review of methods and applications. Journal of Agricultural Sciences, 30(2), 216–228. [CrossRef]
- Pan, J.; Xia, L.; Wu, Q.; Guo, Y.; Chen, Y.; Tian, X. (2022). Automatic strawberry leaf scorch severity estimation via faster R-CNN and few-shot learning. Ecological Informatics, 70, 101706. [CrossRef]
- Nie, J.; Jiang, J.; Li, Y.; Wang, H.; Ercisli, S.; et al. (2023). Data and domain knowledge dual-driven artificial intelligence: survey, applications, and challenges. Expert Systems. [CrossRef]
- Altmann, B.A.; Gertheiss, J.; Tomasevic, I.; Engelkes, C.; Glaesener, T.; Meyer, J.; et al. (2022). Human perception of color differences using computer vision system measurements of raw pork loin. Meat Science, 188, 108766. [CrossRef]
- Snell, J.; Swersky, K.; Zemel, R. (2017). Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems, 4077–4087. [CrossRef]
- Zhao, P.; Wang, L.; Zhao, X.; Liu, H.; Ji, X. (2024). Few-shot learning based on prototype rectification with a self-attention mechanism. Expert Systems with Applications, 249, 123586. [CrossRef]
- Huang, X.; Choi, S.H. (2023). SAPENet: Self-attention-based prototype enhancement network for few-shot learning. Pattern Recognition, 135, 109170. [CrossRef]
- Peng, C.; Chen, L.; Hao, K.; Chen, S.; Cai, X.; Wei, B. (2024). A novel dimensional variational prototypical network for industrial few-shot fault diagnosis with unseen faults. Computers in Industry, 162, 104133. [CrossRef]
- Liu, Y.; Pu, H.; Sun, D. (2021). Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices. 1932. [CrossRef]
- Li, X.; Li, Z.; Xie, J.; Yang, X.; Xue, J.; Ma, Z. (2024). Self-reconstruction network for fine-grained few-shot classification. Pattern Recognition, 153, 110485. [CrossRef]
- Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. (2018). ShuffleNet V2: Practical guidelines for efficient CNN architecture design. Springer, Cham. [CrossRef]
- Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2), 227–244. [CrossRef]
- Ioffe, S.; Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR. [CrossRef]
- Ba, J.; Kiros, J.R.; Hinton, G.E. (2016). Layer normalization. arXiv:1607.06450.
- Khaled, A.; Li, C.; Ning, J. (2023). BCN: Batch channel normalization for image classification. [CrossRef]
- Mukhoti, J.; Dokania, P.K.; Torr, P.H.S.; Gal, Y. (2020). On batch normalisation for approximate Bayesian inference. [CrossRef]
- Song, G.; Tao, Z.; Huang, X.; Cao, G.; Liu, W.; Yang, L. (2020). Hybrid attention-based prototypical network for unfamiliar restaurant food image few-shot recognition. IEEE Access, 8, 14893–14900. [CrossRef]
- Xia, Z.; Pan, X.; Song, S.; Li, L.E.; Huang, G. (2022). Vision transformer with deformable attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4794–4803. [CrossRef]
- Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R.W. (2023). BiFormer: Vision transformer with bi-level routing attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [CrossRef]
- Chen, Y.; Wang, Y.; Li, Z.; Liu, S. (2021). Meta-Baseline: Exploring simple meta-learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, October 11–17, 2021, pp. 893–902. [CrossRef]
- Gunasekaran, A.; Irani, Z.; Choy, K.L.; Filippi, L.; Papadopoulos, T. (2015). Performance measures and metrics in outsourcing decisions: a review for research and applications. International Journal of Production Economics, 161, 153–166. [CrossRef]
- Finn, C.; Abbeel, P.; Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. [CrossRef]
- Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; Wierstra, D. (2016). Matching networks for one-shot learning. [CrossRef]
- Sun, Q.; Liu, Y.; Chua, T.S.; Schiele, B. (2018). Meta-transfer learning for few-shot learning. [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105. [CrossRef]
- Simonyan, K.; Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Rabinovich, A. (2014). Going deeper with convolutions. IEEE Computer Society. [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770–778. [CrossRef]
- Wu, Y.; He, K. (2018). Group normalization. arXiv:1803.08494. [CrossRef]
- Wang, Z.; Xia, N.; Hua, S.; Liang, J.; Ji, X.; Wang, Z.; Wang, J. (2025). Hierarchical recognition for urban villages fusing multiview feature information. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 18, 3344–3355. [CrossRef]
- Hu, J.; Shen, L.; Sun, G.; Albanie, S. (2017). Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. [CrossRef]
- Park, J.; Choi, M.; Kim, K. (2018). CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19. Springer. [CrossRef]
- Rao, Y.; Zhao, W.L.; Liu, B.; Lu, J.; Zhou, J.; Hsieh, C.J. (2021). DynamicViT: Efficient vision transformers with dynamic token sparsification. https://arxiv.org/pdf/2106.02034.
- Jo, K.; Lee, S.; Jeong, S.K.C.; Lee, D.H.; Jeon, H.B.; Jung, S. (2024). Hyperspectral imaging-based assessment of fresh meat quality: Progress and applications. Microchemical Journal, 197, 109785. [CrossRef]
- Triantafillou, E.; Zemel, R.; Urtasun, R. (2020). Few-shot learning via learning the representation, provably. In Proceedings of the 8th International Conference on Learning Representations (ICLR). [CrossRef]
- Yuan, P.; Mobiny, A.; Jahanipour, J.; Li, X.; Cicalese, P.A.; Roysam, B.; Patel, V.; Dragan, M.; Van Nguyen, H. (2020). Few is enough: Task-augmented active meta-learning for brain cell classification. arXiv:2007.05009.
- Xu, H.; Zhi, S.; Sun, S.; Patel, V.M.; Liu, L. (2023). Deep learning for cross-domain few-shot visual recognition: A survey. arXiv:2303.08557.
- Fonseca, J.; Bacao, F. (2023). Improving active learning performance through the use of data augmentation. International Journal of Intelligent Systems, 38(8), 4799–4825.
- Pang, S.C.; Zhao, W.S.; Wang, S.D.; Zhang, L.; Wang, S. (2024). Permute-MAML: Exploring industrial surface defect detection algorithms for few-shot learning. Complex & Intelligent Systems, 10(3), 1473–1482.
- Triantafillou, E.; Zhu, T.L.; Dumoulin, V.; Lamblin, P.; Xu, K.; Goroshin, R.; Gelada, C.; Swersky, K.; Manzagol, P.; Larochelle, H. (2019). Meta-Dataset: A dataset of datasets for learning to learn from few examples. arXiv:1903.03096.
- Subramanyam, R.; Heimann, M.; Thathachar, J.S.; Anirudh, R.; Thiagarajan, J.J. (2022). Contrastive knowledge-augmented meta-learning for few-shot classification. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2478–2486.
- Bossard, L.; Guillaumin, M.; Van Gool, L. (2014). Food-101: Mining discriminative components with random forests. In European Conference on Computer Vision (ECCV), pp. 446–461. [CrossRef]
- Kiswanto, K.; Hadiyanto, H.; Yono, E.S. (2024). Meat texture image classification using the Haar wavelet approach and a gray-level co-occurrence matrix. Applied System Innovation, 7(3), 49. [CrossRef]
- Ropodi, A.I.; Panagou, E.Z.; Nychas, G.-J.E. (2016). Data mining derived from food analyses using non-invasive/non-destructive analytical techniques; determination of food authenticity, quality & safety in tandem with computer science disciplines. Trends in Food Science & Technology, 50, 107–123. [CrossRef]







| Freshness grade | Microbial Concentration (×10³ CFU/g) | Storage Time (h) |
|---|---|---|
| First-grade fresh pork | 4.168 | 0 |
| Second-grade fresh pork | 13.182 | 24 |
| Third-grade fresh pork | 301.995 | 48 |
| First-grade spoiled pork | 1778.279 | 72 |
| Second-grade spoiled pork | 5370.317 | 96 |
| Class | Image | Resolution |
|---|---|---|
| First-grade fresh meat | ![]() | 224×224×3 |
| Second-grade fresh meat | ![]() | 224×224×3 |
| Third-grade fresh meat | ![]() | 224×224×3 |
| First-grade spoiled meat | ![]() | 224×224×3 |
| Second-grade spoiled meat | ![]() | 224×224×3 |
| | Training Set (83%) | Validation Set (12%) | Test Set (8%) |
|---|---|---|---|
| Number of Categories | 80 | 12 | 8 |
| Number of Samples | 49800 | 7200 | 4800 |
| Model | Backbone | Comparison function | 5-way, 1-shot Accuracy (%) | 5-way, 5-shot Accuracy (%) |
|---|---|---|---|---|
| Matching Net | ResNet-18 | Cosine similarity | 48.12 | 67.20 |
| Prototypical Networks | ResNet-18 | Euclidean distance | 44.56 | 54.31 |
| Relation Net | ResNet-18 | — | 51.44 | 63.12 |
| Ours | ShuffleNetV2 + BiFormer | Cosine similarity | 59.72 | 78.84 |
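Matching Net and our model both score a query by its cosine similarity to class representatives, as in the probability distribution of Section 2.2.5. A minimal sketch of this idea, assuming mean-pooled prototypes and a temperature-scaled softmax (the temperature `tau` and the pooling are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def cosine_prototype_probs(support, support_labels, queries, n_way, tau=10.0):
    """Classify query embeddings by cosine similarity to class prototypes.

    support: (n_support, d) embeddings; support_labels: (n_support,) ints in [0, n_way)
    queries: (n_query, d); returns (n_query, n_way) softmax probabilities.
    """
    # Class prototype = mean of that class's support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in range(n_way)])
    # L2-normalize so the dot product equals cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = tau * (q @ p.T)            # temperature-scaled cosine similarities
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

The normalization step is what distinguishes this comparison function from the Euclidean distance used by Prototypical Networks: only the direction of the embedding matters, not its magnitude.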
| Method | Input image size | Batch size | Number of parameters (M) | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) |
|---|---|---|---|---|---|---|---|
| AlexNet | 224×224×3 | 2 | 60 | 52.13 | 57.96 | 44.64 | 77.57 |
| VGG16 | 224×224×3 | 2 | 138 | 52.42 | 69.74 | 75.30 | 70.31 |
| GoogLeNet | 224×224×3 | 2 | 7 | 58.24 | 69.61 | 52.56 | 70.43 |
| ResNet50 | 224×224×3 | 2 | 25.6 | 68.43 | 79.33 | 74.78 | 69.29 |
| Ours | 224×224×3 | 2 | 2.3 | 96.36 | 78.85 | 85.71 | 96.35 |
| Model | 5-way, 1-shot Accuracy (%) | 5-way, 5-shot Accuracy (%) |
|---|---|---|
| ShuffleNetV2+BN, BiFormer+LN | 48.23 | 63.25 |
| ShuffleNetV2+BCN, BiFormer+LN | 49.56 | 64.18 |
| ShuffleNetV2+BN, BiFormer+BCN | 51.49 | 68.31 |
| ShuffleNetV2+BCN, BiFormer+BCN | 52.44 | 69.52 |
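The ablation above swaps BN and LN for Batch Channel Normalization (BCN, Khaled et al. 2023), which combines batch-wise and channel-wise statistics. A simplified illustrative sketch of that idea for an NCHW tensor, assuming a scalar mixing weight `mix` (learnable per channel in the real layer); this is not the reference implementation:

```python
import numpy as np

def batch_channel_norm(x, gamma=1.0, beta=0.0, mix=0.5, eps=1e-5):
    """Illustrative Batch Channel Normalization for an NCHW tensor.

    Normalizes once along the batch axes (N, H, W), as in BN, and once along
    the channel axes (C, H, W), as in LN, then blends the two results.
    """
    # Batch-style statistics: per channel, computed over (N, H, W).
    mu_b = x.mean(axis=(0, 2, 3), keepdims=True)
    var_b = x.var(axis=(0, 2, 3), keepdims=True)
    x_b = (x - mu_b) / np.sqrt(var_b + eps)
    # Layer-style statistics: per sample, computed over (C, H, W).
    mu_c = x.mean(axis=(1, 2, 3), keepdims=True)
    var_c = x.var(axis=(1, 2, 3), keepdims=True)
    x_c = (x - mu_c) / np.sqrt(var_c + eps)
    # Blend the two normalizations, then apply the usual affine transform.
    return gamma * (mix * x_b + (1.0 - mix) * x_c) + beta
```

With `mix=1.0` the layer reduces to BN-style normalization; with `mix=0.0` it reduces to LN-style normalization, so the blend can adapt to small batch sizes where BN statistics are noisy.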
| Model | 5-way, 1-shot Accuracy (%) | 5-way, 5-shot Accuracy (%) |
|---|---|---|
| ShuffleNetV2, Backbone | 52.44 | 69.52 |
| ShuffleNetV2+BCN-BiFormer, Backbone | 55.73 | 69.62 |
| ShuffleNetV2, Backbone+BCN-BiFormer | 57.91 | 71.61 |
| ShuffleNetV2+BCN-BiFormer, Backbone+BCN-BiFormer | 59.72 | 78.84 |
| Fine-tuning | 1-Shot Accuracy (%) | 5-Shot Accuracy (%) | 10-Shot Accuracy (%) | 20-Shot Accuracy (%) | 40-Shot Accuracy (%) | 80-Shot Accuracy (%) | 100-Shot Accuracy (%) | 120-Shot Accuracy (%) |
|---|---|---|---|---|---|---|---|---|
| × | 56.64 | 71.55 | 76.29 | 83.46 | 85.81 | 87.21 | 87.19 | 87.20 |
| √ | 59.72 | 78.84 | 83.44 | 91.25 | 94.44 | 96.36 | 96.32 | 96.33 |
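The fine-tuning strategy of Section 2.3 updates a cross-entropy loss with an entropy regularization term. A minimal sketch of such a combined objective; the weighting `lam` and the exact regularizer form are illustrative assumptions, not the paper's stated formulation:

```python
import numpy as np

def finetune_loss(probs, labels, lam=0.1, eps=1e-12):
    """Cross-entropy on labeled support samples plus an entropy regularizer.

    probs: (n, n_way) predicted class probabilities; labels: (n,) int targets.
    """
    n = probs.shape[0]
    # Standard cross-entropy: negative log-probability of the true class.
    ce = -np.mean(np.log(probs[np.arange(n), labels] + eps))
    # Shannon entropy of each prediction, averaged over the batch;
    # adding it with weight lam penalizes diffuse, low-confidence outputs.
    ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return ce + lam * ent
```

Setting `lam=0.0` recovers plain cross-entropy, which makes it easy to ablate the regularizer's contribution during fine-tuning.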
| Query set sample size | Accuracy (%) | Training time (min) |
|---|---|---|
| 5 | 56.64 | 12.3 |
| 10 | 68.55 | 15.1 |
| 15 | 71.29 | 18.9 |
| 20 | 73.46 | 22.5 |
| 25 | 78.84 | 26.8 |
| 30 | 77.21 | 31.4 |
| 35 | 77.19 | 35.6 |
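The support- and query-set sizes varied in Sections 3.4 and 3.5 are parameters of episodic sampling. A minimal N-way K-shot episode sampler, assuming the dataset is a dict keyed by class name (an illustrative structure, not the paper's data loader):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=25, seed=None):
    """Sample one N-way K-shot episode from {class_name: [samples]}.

    Returns (support, query), each a list of (sample, episode_label) pairs.
    Every sampled class needs at least k_shot + q_query samples.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        # Draw disjoint support and query samples for this class.
        picks = rng.sample(dataset[cls], k_shot + q_query)
        support += [(s, label) for s in picks[:k_shot]]
        query += [(s, label) for s in picks[k_shot:]]
    return support, query
```

Increasing `q_query` lengthens each episode roughly linearly, which is consistent with the training-time column in the table above.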
| Dataset Name | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) |
|---|---|---|---|---|
| Food101 | 92.4 | 89.6 | 94.1 | 91.8 |
| Pork freshness | 96.36 | 78.85 | 85.71 | 96.35 |
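The accuracy, sensitivity, specificity, and precision values reported throughout the tables can all be derived from a confusion matrix. A minimal sketch assuming macro-averaged one-vs-rest definitions for the multi-class case (the paper does not spell out its averaging convention here):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged accuracy, sensitivity, specificity, and precision."""
    # Build the confusion matrix: rows are true classes, columns predictions.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    accuracy = np.trace(cm) / total
    sens, spec, prec = [], [], []
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c].sum() - tp        # true class c, predicted otherwise
        fp = cm[:, c].sum() - tp     # predicted c, true class otherwise
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
        prec.append(tp / (tp + fp) if tp + fp else 0.0)
    return accuracy, float(np.mean(sens)), float(np.mean(spec)), float(np.mean(prec))
```

This also illustrates why a model can combine high accuracy and precision with lower sensitivity, as in the pork-freshness row above: a few systematically missed classes depress the macro sensitivity without greatly affecting overall accuracy.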
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).




