Submitted: 24 July 2025
Posted: 24 July 2025
Abstract
Keywords:
1. Introduction
- (1) We propose ER-PASS, a replay-based learning algorithm that incorporates a performance-aware submodular sample selection strategy; it is model-agnostic and can be applied across various deep learning models.
- (2) We demonstrate that ER-PASS mitigates catastrophic forgetting more effectively than existing methods while incurring relatively low resource overhead.
- (3) Experimental results on building segmentation and LULC classification demonstrate that ER-PASS generalizes across diverse remote sensing applications.
2. Related Work
2.1. Domain-Incremental Learning in Remote Sensing
2.2. Replay-Based Continual Learning Algorithms
2.3. Sample Selection Strategies for Replay
3. Methodology
3.1. Overview
3.2. Proposed Algorithm
Algorithm 1: Learning process of ER-PASS

Input: D_t ▹ Dataset corresponding to each task t
Require: f_θ ▹ Neural network
Initialize: memory buffer M ← ∅ ▹ Define D_t as the training set used for task t
for task t = 1, 2, … do
    if t = 1 then
        Initialize the model parameters
    else
        Continue from the parameters obtained after task t − 1
    end if
    Define B as the total number of mini-batches used for task t
    for b = 1 to B do
        Update the model on mini-batch b, drawn from D_t and (for t > 1) the memory buffer M ▹ Update model
    end for
    Update the memory buffer M from D_t using Algorithm 2 ▹ Update memory buffer
end for
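Read as a whole, Algorithm 1 is an experience-replay training loop: each new task is trained on its own data together with the samples held in the memory buffer, and the buffer is refreshed after every task. The PyTorch-style sketch below illustrates this flow under stated assumptions; the names train_er_pass and select_samples, the optimizer, the loss, and the hyperparameters are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch of the ER-PASS outer loop (Algorithm 1), assuming a PyTorch-style
# model and map-style datasets; `select_samples` stands in for Algorithm 2.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def train_er_pass(model, task_datasets, select_samples, memory_budget,
                  epochs=1, batch_size=8, lr=1e-4, device="cpu"):
    memory = []                                   # replay buffer M, initially empty
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)

    for t, dataset in enumerate(task_datasets):   # tasks arrive sequentially
        # Train on the current task's data mixed with the replay buffer (for t > 0).
        train_set = ConcatDataset([dataset] + memory) if memory else dataset
        loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

        model.train()
        for _ in range(epochs):
            for images, labels in loader:         # one pass = B mini-batches
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()                  # "Update model"

        # Refresh the memory buffer with samples chosen from the current task
        # (performance-aware submodular selection, Algorithm 2).
        memory.append(select_samples(model, dataset, memory_budget))
    return model, memory
```

Here select_samples is the hook for the performance-aware submodular selection described next (Section 3.3, Algorithm 2); it is assumed to return a dataset-like subset of the current task's data.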
3.3. Performance-Aware Submodular Sample Selection
Algorithm 2: Performance-aware submodular sample selection

Input: D_t, f_θ ▹ Dataset and trained model corresponding to task t
Require: N ▹ Memory budget (number of samples to select)
Output: M ▹ Updated memory buffer
Initialize the selected set, the candidate set, and the memory buffer M
Extract features from f_θ for each sample in D_t
Compute normalized features
Compute the evaluation score between the model prediction and the ground truth for each sample; let n be the total number of samples in D_t
Compute the intra-similarity of each sample with respect to the remaining samples in D_t
for j = 1 to N do
    for i = 1 to n do
        if no sample has been selected yet then
            Score sample i using only its intra-similarity ▹ Only intra-similarity
        else
            Compute the inter-similarity between sample i and the already selected samples, and score sample i accordingly
        end if
        if sample i has already been selected then exclude it from scoring ▹ Exclude already selected samples
    end for
    Select the sample with the highest score, add it to the selected set, and update M
end for
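To make the greedy loop in Algorithm 2 concrete, the sketch below selects N samples by trading off representativeness (intra-similarity to the remaining pool), redundancy with already selected samples (inter-similarity), and a per-sample evaluation score such as IoU against the ground truth. The additive gain, the weights alpha and beta, and the cosine-similarity features are assumptions made for illustration; the paper's exact score function is the one defined in this section.

```python
# Illustrative greedy selection in the spirit of Algorithm 2 (not the paper's exact
# score function): combine intra-similarity, inter-similarity, and an evaluation
# score under an assumed additive gain.
import numpy as np

def performance_aware_selection(features, eval_scores, budget, alpha=1.0, beta=1.0):
    """features: (n, d) array; eval_scores: (n,) array in [0, 1]; budget: N."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)  # normalize
    sim = feats @ feats.T                         # cosine-similarity matrix (n, n)
    intra = sim.mean(axis=1)                      # representativeness of each sample
    selected = []
    for _ in range(budget):
        if not selected:
            gain = alpha * intra + beta * eval_scores            # first pick: intra-similarity only
        else:
            inter = sim[:, selected].max(axis=1)                 # redundancy w.r.t. chosen set
            gain = alpha * (intra - inter) + beta * eval_scores
        gain[selected] = -np.inf                  # exclude already selected samples
        selected.append(int(np.argmax(gain)))
    return selected

# Example: pick 4 of 100 samples from random features and scores.
rng = np.random.default_rng(0)
idx = performance_aware_selection(rng.normal(size=(100, 16)), rng.random(100), budget=4)
print(idx)
```

Wrapped in a torch.utils.data.Subset, the returned indices could serve as the select_samples hook used in the training-loop sketch above.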
4. Experimental Settings
4.1. Datasets
4.2. Implementation Details
4.3. Evaluation Metrics
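The result tables below report AIA4 and BWT4. One common formulation of these metrics in the continual learning literature (BWT follows GEM [41]; the paper's exact definitions in this section may differ in detail) is:

\[
\mathrm{AIA}_T = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{t}\sum_{k=1}^{t} a_{t,k},
\qquad
\mathrm{BWT}_T = \frac{1}{T-1}\sum_{k=1}^{T-1}\bigl(a_{T,k} - a_{k,k}\bigr),
\]

where \(a_{t,k}\) denotes the performance (here, IoU or mIoU) on task \(k\) measured after training on task \(t\); with four sequential tasks, \(T = 4\) yields the AIA4 and BWT4 columns reported below.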
5. Experimental Results and Discussion
5.1. Building Segmentation (Strict DIL Setting)
5.2. LULC Classification (Relaxed DIL Setting)
5.3. Ablation and Efficiency Analysis
5.3.1. Ablation Study on Proposed Sample Selection Strategy
5.3.2. Effect of Sampling Ratio
5.3.3. Computational Efficiency Analysis
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| A-GEM | Averaged GEM |
| AIA | Average Incremental Accuracy |
| BWT | Backward Transfer |
| CIL | Class Incremental Learning |
| DIL | Domain Incremental Learning |
| ER | Experience Replay |
| EWC | Elastic Weight Consolidation |
| GEM | Gradient Episodic Memory |
| IoU | Intersection-over-Union |
| LULC | Land Use/Land Cover |
| LwF | Learning without Forgetting |
| mIoU | mean Intersection-over-Union |
| MoF | Mean of Features |
| SAR | Synthetic Aperture Radar |
| TIL | Task Incremental Learning |
References
- Wellmann, T.; Lausch, A.; Andersson, E.; Knapp, S.; Cortinovis, C.; Jache, J.; Scheuer, S.; Kremer, P.; Mascarenhas, A.; Kraemer, R.; et al. Remote sensing in urban planning: Contributions towards ecologically sound policies? Landscape and urban planning 2020, 204, 103921. [Google Scholar] [CrossRef]
- Pham, H.M.; Yamaguchi, Y.; Bui, T.Q. A case study on the relation between city planning and urban growth using remote sensing and spatial metrics. Landscape and Urban Planning 2011, 100, 223–230. [Google Scholar] [CrossRef]
- Kuffer, M.; Pfeffer, K.; Persello, C. Special issue “remote-sensing-based urban planning indicators”, 2021.
- Hoalst-Pullen, N.; Patterson, M.W. Applications and trends of remote sensing in professional urban planning. Geography Compass 2011, 5, 249–261. [Google Scholar] [CrossRef]
- Song, W.; Song, W.; Gu, H.; Li, F. Progress in the remote sensing monitoring of the ecological environment in mining areas. International journal of environmental research and public health 2020, 17, 1846. [Google Scholar] [CrossRef] [PubMed]
- Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote sensing of Environment 2020, 241, 111716. [Google Scholar] [CrossRef]
- Ma, Y.; Chen, S.; Ermon, S.; Lobell, D.B. Transfer learning in environmental remote sensing. Remote Sensing of Environment 2024, 301, 113924. [Google Scholar] [CrossRef]
- Khan, S.M.; Shafi, I.; Butt, W.H.; Diez, I.d.l.T.; Flores, M.A.L.; Galán, J.C.; Ashraf, I. A systematic review of disaster management systems: approaches, challenges, and future directions. Land 2023, 12, 1514. [Google Scholar] [CrossRef]
- Ye, P. Remote sensing approaches for meteorological disaster monitoring: Recent achievements and new challenges. International Journal of Environmental Research and Public Health 2022, 19, 3701. [Google Scholar] [CrossRef]
- Lei, T.; Wang, J.; Li, X.; Wang, W.; Shao, C.; Liu, B. Flood disaster monitoring and emergency assessment based on multi-source remote sensing observations. Water 2022, 14, 2207. [Google Scholar] [CrossRef]
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS journal of photogrammetry and remote sensing 2019, 152, 166–177. [Google Scholar] [CrossRef]
- Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 162, 94–114. [Google Scholar] [CrossRef]
- Munawar, H.S.; Hammad, A.W.; Waller, S.T. Remote sensing methods for flood prediction: A review. Sensors 2022, 22, 960. [Google Scholar] [CrossRef]
- White, J.C.; Coops, N.C.; Wulder, M.A.; Vastaranta, M.; Hilker, T.; Tompalski, P. Remote sensing technologies for enhancing forest inventories: A review. Canadian Journal of Remote Sensing 2016, 42, 619–641. [Google Scholar] [CrossRef]
- Khanal, S.; Kc, K.; Fulton, J.P.; Shearer, S.; Ozkan, E. Remote sensing in agriculture—accomplishments, limitations, and opportunities. Remote sensing 2020, 12, 3783. [Google Scholar] [CrossRef]
- Luo, L.; Li, P.; Yan, X. Deep learning-based building extraction from remote sensing images: A comprehensive review. Energies 2021, 14, 7982. [Google Scholar] [CrossRef]
- Digra, M.; Dhir, R.; Sharma, N. Land use land cover classification of remote sensing images based on the deep learning approaches: a statistical analysis and review. Arabian Journal of Geosciences 2022, 15, 1003. [Google Scholar] [CrossRef]
- Zhao, S.; Tu, K.; Ye, S.; Tang, H.; Hu, Y.; Xie, C. Land use and land cover classification meets deep learning: A review. Sensors 2023, 23, 8966. [Google Scholar] [CrossRef]
- Michau, G.; Fink, O. Unsupervised transfer learning for anomaly detection: Application to complementary operating condition transfer. Knowledge-Based Systems 2021, 216, 106816. [Google Scholar] [CrossRef]
- Xu, M.; Wu, M.; Chen, K.; Zhang, C.; Guo, J. The eyes of the gods: A survey of unsupervised domain adaptation methods based on remote sensing data. Remote Sensing 2022, 14, 4380. [Google Scholar] [CrossRef]
- French, R.M. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 1999, 3, 128–135. [Google Scholar] [CrossRef]
- McClelland, J.L.; McNaughton, B.L.; O’Reilly, R.C. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review 1995, 102, 419. [Google Scholar] [CrossRef]
- Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural networks 2019, 113, 54–71. [Google Scholar] [CrossRef]
- Hadsell, R.; Rao, D.; Rusu, A.A.; Pascanu, R. Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences 2020, 24, 1028–1040. [Google Scholar] [CrossRef]
- Van de Ven, G.M.; Tolias, A.S. Three scenarios for continual learning. arXiv, 2019; arXiv:1904.07734. [Google Scholar]
- Zhou, D.W.; Wang, Q.W.; Qi, Z.H.; Ye, H.J.; Zhan, D.C.; Liu, Z. Class-incremental learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024. [Google Scholar] [CrossRef]
- De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; Tuytelaars, T. A continual learning survey: Defying forgetting in classification tasks. IEEE transactions on pattern analysis and machine intelligence 2021, 44, 3366–3385. [Google Scholar]
- Wang, L.; Zhang, X.; Su, H.; Zhu, J. A comprehensive survey of continual learning: Theory, method and application. IEEE transactions on pattern analysis and machine intelligence 2024, 46, 5362–5383. [Google Scholar] [CrossRef] [PubMed]
- Rui, X.; Li, Z.; Cao, Y.; Li, Z.; Song, W. DILRS: Domain-incremental learning for semantic segmentation in multi-source remote sensing data. Remote Sensing 2023, 15, 2541. [Google Scholar] [CrossRef]
- Wang, M.; Yu, D.; He, W.; Yue, P.; Liang, Z. Domain-incremental learning for fire detection in space-air-ground integrated observation network. International Journal of Applied Earth Observation and Geoinformation 2023, 118, 103279. [Google Scholar] [CrossRef]
- Huang, W.; Ding, M.; Deng, F. Domain Incremental Learning for Remote Sensing Semantic Segmentation with Multi-Feature Constraints in Graph Space. IEEE Transactions on Geoscience and Remote Sensing 2024. [Google Scholar] [CrossRef]
- Tasar, O.; Tarabalka, Y.; Alliez, P. Incremental learning for semantic segmentation of large-scale remote sensing data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2019, 12, 3524–3537. [Google Scholar] [CrossRef]
- Li, H.; Jiang, H.; Gu, X.; Peng, J.; Li, W.; Hong, L.; Tao, C. CLRS: Continual learning benchmark for remote sensing image scene classification. Sensors 2020, 20, 1226. [Google Scholar] [CrossRef] [PubMed]
- Bhat, S.D.; Banerjee, B.; Chaudhuri, S.; Bhattacharya, A. CILEA-NET: Curriculum-based incremental learning framework for remote sensing image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 14, 5879–5890. [Google Scholar] [CrossRef]
- Ammour, N. Continual learning using data regeneration for remote sensing scene classification. IEEE Geoscience and Remote Sensing Letters 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Feng, Y.; Sun, X.; Diao, W.; Li, J.; Gao, X.; Fu, K. Continual learning with structured inheritance for semantic segmentation in aerial imagery. IEEE Transactions on Geoscience and Remote Sensing 2021, 60, 1–17. [Google Scholar] [CrossRef]
- Marsocci, V.; Scardapane, S. Continual barlow twins: continual self-supervised learning for remote sensing semantic segmentation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2023, 16, 5049–5060. [Google Scholar] [CrossRef]
- Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences 2017, 114, 3521–3526. [Google Scholar] [CrossRef]
- Zbontar, J.; Jing, L.; Misra, I.; LeCun, Y.; Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. In Proceedings of the International conference on machine learning. PMLR; 2021; pp. 12310–12320. [Google Scholar]
- Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010: 19th International Conference on Computational Statistics, Paris, France, August 22–27, 2010; Keynote, Invited and Contributed Papers. Springer, 2010, pp. 177–186.
- Lopez-Paz, D.; Ranzato, M. Gradient episodic memory for continual learning. Advances in neural information processing systems 2017, 30. [Google Scholar]
- Chaudhry, A.; Ranzato, M.; Rohrbach, M.; Elhoseiny, M. Efficient lifelong learning with a-gem. arXiv, 2018; arXiv:1812.00420. [Google Scholar]
- Chaudhry, A.; Rohrbach, M.; Elhoseiny, M.; Ajanthan, T.; Dokania, P.K.; Torr, P.H.; Ranzato, M. On tiny episodic memories in continual learning. arXiv, 2019; arXiv:1902.10486. [Google Scholar]
- Riemer, M.; Cases, I.; Ajemian, R.; Liu, M.; Rish, I.; Tu, Y.; Tesauro, G. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv, 2018; arXiv:1810.11910. [Google Scholar]
- Buzzega, P.; Boschini, M.; Porrello, A.; Abati, D.; Calderara, S. Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems 2020, 33, 15920–15930. [Google Scholar]
- Rebuffi, S.A.; Kolesnikov, A.; Sperl, G.; Lampert, C.H. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2001–2010.
- Bhat, S.D.; Banerjee, B.; Chaudhuri, S.; Bhattacharya, A. Efficient curriculum based continual learning with informative subset selection for remote sensing scene classification. arXiv, 2023; arXiv:2309.01050. [Google Scholar]
- Sun, H.; Xu, Y.; Fu, K.; Lei, L.; Ji, K.; Kuang, G. An Evaluation of Representative Samples Replay and Knowledge Distillation Regularization for SAR ATR Continual Learning. In Proceedings of the 2024 Photonics & Electromagnetics Research Symposium (PIERS). IEEE, 2024, pp. 1–6.
- Vitter, J.S. Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS) 1985, 11, 37–57. [Google Scholar] [CrossRef]
- Sener, O.; Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv, 2017; arXiv:1708.00489. [Google Scholar]
- Yoon, J.; Madaan, D.; Yang, E.; Hwang, S.J. Online coreset selection for rehearsal-based continual learning. arXiv, 2021; arXiv:2106.01085. [Google Scholar]
- Lee, H.; Kim, S.; Lee, J.; Yoo, J.; Kwak, N. Coreset selection for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 7682–7691.
- Aljundi, R.; Lin, M.; Goujaud, B.; Bengio, Y. Gradient based sample selection for online continual learning. Advances in neural information processing systems 2019, 32. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18. Springer, 2015. pp. 234–241.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801–818.
- Rottensteiner, F.; Sohn, G.; Gerke, M.; Wegner, J.D. ISPRS semantic labeling contest. ISPRS: Leopoldshöhe, Germany 2014, 1, 4. [Google Scholar]
- Wang, J.; Zheng, Z.; Ma, A.; Lu, X.; Zhong, Y. LoveDA: A remote sensing land-cover dataset for domain adaptive semantic segmentation. arXiv, 2021; arXiv:2110.08733. [Google Scholar]
- Demir, I.; Koperski, K.; Lindenbaum, D.; Pang, G.; Huang, J.; Basu, S.; Hughes, F.; Tuia, D.; Raskar, R. DeepGlobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 172–181.
- Tong, X.Y.; Xia, G.S.; Zhu, X.X. Enabling country-scale land cover mapping with meter-resolution satellite imagery. ISPRS Journal of Photogrammetry and Remote Sensing 2023, 196, 178–196. [Google Scholar] [CrossRef] [PubMed]
- Li, Z.; Hoiem, D. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence 2017, 40, 2935–2947. [Google Scholar] [CrossRef] [PubMed]
| Dataset | Platform | Resolution (m) | Stride | # of Images | Redefined Class Labels |
|---|---|---|---|---|---|
| Potsdam [56] | Airborne | 0.05 | 384 | 8,550 | Building, Road, Trees, Grassland, Cars, Background |
| LoveDA [57] | Airborne | 0.3 | 512 | 7,332 | Building, Road, Forest, Water, Agriculture, Barren, Background |
| DeepGlobe [58] | WorldView-2 | 0.5 | 968 | 7,227 | Building, Forest, Grassland, Water, Agriculture, Barren, Background |
| GID [59] | Gaofen-2 | 4.0 | 512 | 10,737 | Building, Forest, Grassland, Water, Agriculture, Background |
| Method | Step1: Potsdam | Step2: Potsdam | Step2: LoveDA | Step3: Potsdam | Step3: LoveDA | Step3: DeepGlobe | Step4: Potsdam | Step4: LoveDA | Step4: DeepGlobe | Step4: GID | AIA4 ↑ | BWT4 ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-task | 0.8149 | 0.3672 | 0.5222 | 0.2246 | 0.2693 | 0.7391 | 0.0151 | 0.0028 | 0.0004 | 0.7276 | - | - |
| Joint learning | - | 0.8159 | 0.4761 | 0.8082 | 0.5192 | 0.7379 | 0.7859 | 0.5054 | 0.7414 | 0.6953 | 0.7078 | 0.0012 |
| Fine-tuning | - | 0.4284 | 0.5157 | 0.3032 | 0.2940 | 0.7391 | 0.0103 | 0.0102 | 0.0020 | 0.7220 | 0.4796 | |
| EWC [38] | - | 0.4300 | 0.5054 | 0.3578 | 0.2442 | 0.7386 | 0.0064 | 0.0050 | 0.0017 | 0.7175 | 0.4781 | |
| LwF [60] | - | 0.4386 | 0.5186 | 0.2664 | 0.3088 | 0.7210 | 0.0112 | 0.0168 | 0.0014 | 0.7250 | 0.4785 | |
| ER [43] | - | 0.6971 | 0.3629 | 0.5852 | 0.3854 | 0.5300 | 0.1899 | 0.2153 | 0.2827 | 0.6428 | 0.5444 | |
| Ours | - | 0.8113 | 0.4953 | 0.8157 | 0.4815 | 0.7338 | 0.8218 | 0.4703 | 0.7280 | 0.6899 | 0.7057 | |
| Method | Step1: Potsdam | Step2: Potsdam | Step2: LoveDA | Step3: Potsdam | Step3: LoveDA | Step3: DeepGlobe | Step4: Potsdam | Step4: LoveDA | Step4: DeepGlobe | Step4: GID | AIA4 ↑ | BWT4 ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-task | 0.7678 | 0.4507 | 0.4859 | 0.3360 | 0.2769 | 0.7228 | 0.0216 | 0.0072 | 0.0018 | 0.7095 | - | - |
| Joint learning | - | 0.8112 | 0.4784 | 0.7987 | 0.4667 | 0.7037 | 0.7925 | 0.4526 | 0.7351 | 0.6775 | 0.6834 | 0.0101 |
| Fine-tuning | - | 0.4671 | 0.4903 | 0.3303 | 0.2930 | 0.7328 | 0.0172 | 0.0128 | 0.0045 | 0.7138 | 0.4714 | |
| EWC [38] | - | 0.4311 | 0.5267 | 0.3876 | 0.2721 | 0.7406 | 0.0126 | 0.0027 | 0.0017 | 0.7077 | 0.4737 | |
| LwF [60] | - | 0.3893 | 0.4883 | 0.3488 | 0.2779 | 0.7463 | 0.0159 | 0.0038 | 0.0047 | 0.7044 | 0.4616 | |
| ER [43] | - | 0.7075 | 0.3916 | 0.6097 | 0.3941 | 0.5545 | 0.3501 | 0.3679 | 0.4466 | 0.6584 | 0.5731 | |
| Ours | - | 0.7941 | 0.5109 | 0.8234 | 0.4475 | 0.7380 | 0.8044 | 0.4705 | 0.7149 | 0.6993 | 0.6906 | |
| Method | Step1: Potsdam | Step2: Potsdam | Step2: LoveDA | Step3: Potsdam | Step3: LoveDA | Step3: DeepGlobe | Step4: Potsdam | Step4: LoveDA | Step4: DeepGlobe | Step4: GID | AIA4 ↑ | BWT4 ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-task | 0.6434 | 0.1087 | 0.3611 | 0.1912 | 0.1717 | 0.5857 | 0.1114 | 0.1610 | 0.0955 | 0.6339 | - | - |
| Joint learning | - | 0.6425 | 0.3253 | 0.6304 | 0.3812 | 0.5755 | 0.6336 | 0.3928 | 0.5245 | 0.6738 | 0.5534 | 0.0019 |
| Fine-tuning | - | 0.1277 | 0.3515 | 0.1563 | 0.1610 | 0.5991 | 0.0640 | 0.1857 | 0.0845 | 0.6913 | 0.3613 | |
| EWC [38] | - | 0.1342 | 0.3705 | 0.1417 | 0.1202 | 0.6117 | 0.0659 | 0.2342 | 0.0969 | 0.6851 | 0.3645 | |
| LwF [60] | - | 0.1786 | 0.3679 | 0.1109 | 0.1098 | 0.5925 | 0.0821 | 0.2317 | 0.1829 | 0.6519 | 0.3688 | |
| ER [43] | - | 0.2925 | 0.2306 | 0.1757 | 0.2667 | 0.4220 | 0.2600 | 0.2425 | 0.3159 | 0.6173 | 0.3880 | |
| Ours | - | 0.6571 | 0.3811 | 0.6555 | 0.3710 | 0.5941 | 0.6286 | 0.3606 | 0.5272 | 0.6869 | 0.5636 | |
| Method | Step1: Potsdam | Step2: Potsdam | Step2: LoveDA | Step3: Potsdam | Step3: LoveDA | Step3: DeepGlobe | Step4: Potsdam | Step4: LoveDA | Step4: DeepGlobe | Step4: GID | AIA4 ↑ | BWT4 ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-task | 0.6354 | 0.1425 | 0.3310 | 0.1244 | 0.1325 | 0.5391 | 0.0540 | 0.1962 | 0.1528 | 0.5630 | - | - |
| Joint learning | - | 0.6300 | 0.3110 | 0.6289 | 0.3602 | 0.6191 | 0.5886 | 0.3652 | 0.5638 | 0.6530 | 0.5462 | |
| Fine-tuning | - | 0.1320 | 0.3795 | 0.1466 | 0.1537 | 0.5799 | 0.0729 | 0.2002 | 0.1457 | 0.6563 | 0.3633 | |
| EWC [38] | - | 0.1769 | 0.3570 | 0.1618 | 0.1789 | 0.6125 | 0.0649 | 0.2018 | 0.1567 | 0.6679 | 0.3732 | |
| LwF [60] | - | 0.1853 | 0.3550 | 0.2107 | 0.1504 | 0.5083 | 0.0774 | 0.2568 | 0.2265 | 0.6069 | 0.3718 | |
| ER [43] | - | 0.3567 | 0.3142 | 0.3565 | 0.2866 | 0.3675 | 0.3537 | 0.3205 | 0.4240 | 0.5443 | 0.4296 | |
| Ours | - | 0.6275 | 0.3446 | 0.5908 | 0.3441 | 0.5805 | 0.5297 | 0.3521 | 0.5407 | 0.6164 | 0.5341 | |
| Method | Submodular | Eval. Score | Score Function | AIA4 ↑ | BWT4 ↑ |
|---|---|---|---|---|---|
| Random | ✗ | ✗ | Random | 0.6907 | |
| Submodular | ✓ | ✗ | | 0.6796 | |
| Eval. Score | ✗ | ✓ | S | 0.6930 | |
| Ours | ✓ | ✓ | | 0.7057 | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).