Submitted: 19 November 2024
Posted: 20 November 2024
Abstract
Keywords:
1. Introduction
- To enhance sensitivity to salient information in both the spatial and spectral dimensions, the SSJA module is proposed. It jointly models the independently extracted global spatial and channel features via an MLP to reveal their correlations, effectively capturing the latent dependencies between spatial and spectral features and allowing the attention weights to account for global information.
- To address insufficient feature extraction, multiple multi-scale strategies are introduced into the model and the AMSFE module is proposed to gather supplementary information from different levels, enabling feature reuse and the extraction of more representative features. Additionally, low-parameter or even parameter-free attention mechanisms are embedded in the front, middle, and back stages of the model, ensuring accurate and efficient object identification.
- To reduce model complexity and mitigate overfitting, 3D depthwise separable convolution layers are used in place of standard 3D convolution layers. This replacement improves computational efficiency and reduces the risk of overfitting while maintaining high classification accuracy.
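As a rough illustration of why the last replacement shrinks the model, the weight counts of the two layer types can be compared directly. This is a minimal sketch with illustrative channel and kernel sizes, not the paper's actual configuration: a standard 3D convolution needs `c_in * c_out * k**3` weights, while the depthwise separable variant splits this into a per-channel depthwise pass plus a 1×1×1 pointwise pass.

```python
def params_standard_3d(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard 3D convolution (bias terms omitted)."""
    return c_in * c_out * k ** 3

def params_separable_3d(c_in: int, c_out: int, k: int) -> int:
    """Weights in a 3D depthwise separable convolution (bias terms omitted)."""
    depthwise = c_in * k ** 3  # one k x k x k filter per input channel
    pointwise = c_in * c_out   # 1x1x1 convolution mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    # Illustrative sizes (not taken from the paper).
    c_in, c_out, k = 30, 64, 3
    std = params_standard_3d(c_in, c_out, k)   # 30 * 64 * 27 = 51840
    sep = params_separable_3d(c_in, c_out, k)  # 30 * 27 + 30 * 64 = 2730
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For these sizes the separable layer uses roughly 19× fewer weights, which is the source of the efficiency and regularization benefit the bullet describes.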
2. Methods
2.1. Spectral and Spatial Feature Extraction
2.1.1. Three-Dimensional Depthwise Separable Convolution
2.1.2. ACP
2.1.3. Spectral and Spatial Feature Extraction
2.2. SSJA

2.3. FF
2.4. AMSFE
2.4.1. NAM
2.4.2. AMSFE
3. Experimental Results and Discussion
3.1. Datasets
3.2. Evaluation Metrics
3.3. Experimental Parameters
3.3.1. Batchsize
3.3.2. Patchsize
3.3.3. Channel Numbers
3.3.4. FEG Numbers
3.4. Comparative Experiment
3.4.1. Experimental Results of IP
3.4.2. Experimental Results of PU
3.4.3. Experimental Results of YRE
3.5. Ablation Study
4. Discussion
5. Conclusion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Land-cover classes and labeled sample counts of the Indian Pines (IP) dataset:

| Category | Class Name | Color | Sample |
|---|---|---|---|
| 1 | Alfalfa | ![]() | 46 |
| 2 | Corn-notill | ![]() | 1428 |
| 3 | Corn-mintill | ![]() | 830 |
| 4 | Corn | ![]() | 237 |
| 5 | Grass-pasture | ![]() | 483 |
| 6 | Grass-trees | ![]() | 730 |
| 7 | Grass-pasture-mowed | ![]() | 28 |
| 8 | Hay-windrowed | ![]() | 478 |
| 9 | Oats | ![]() | 20 |
| 10 | Soybean-notill | ![]() | 972 |
| 11 | Soybean-mintill | ![]() | 2455 |
| 12 | Soybean-clean | ![]() | 593 |
| 13 | Wheat | ![]() | 205 |
| 14 | Woods | ![]() | 1265 |
| 15 | Buildings-Grass-Trees-Drives | ![]() | 386 |
| 16 | Stone-Steel-Towers | ![]() | 93 |
Land-cover classes and labeled sample counts of the Pavia University (PU) dataset:

| Category | Class Name | Color | Sample |
|---|---|---|---|
| 1 | Asphalt | ![]() | 6631 |
| 2 | Meadows | ![]() | 18649 |
| 3 | Gravel | ![]() | 2099 |
| 4 | Trees | ![]() | 3064 |
| 5 | Painted metal sheets | ![]() | 1345 |
| 6 | Bare soil | ![]() | 5029 |
| 7 | Bitumen | ![]() | 1330 |
| 8 | Self-blocking bricks | ![]() | 3682 |
| 9 | Shadows | ![]() | 947 |
Land-cover classes and labeled sample counts of the Yellow River Estuary (YRE) dataset:

| Category | Class Name | Color | Sample |
|---|---|---|---|
| 1 | Mudflat | ![]() | 3583 |
| 2 | Bare soil | ![]() | 5386 |
| 3 | Tamarix | ![]() | 3582 |
| 4 | Suaeda salsa | ![]() | 4243 |
| 5 | Spartina alterniflora | ![]() | 23535 |
| 6 | Turbid water | ![]() | 17018 |
| 7 | Tidal flat reed | ![]() | 5987 |
| 8 | Clear water | ![]() | 12253 |
| 9 | Bare lake beach | ![]() | 4145 |
| Datasets | Learning Rate | Epochs | Batchsize | Patchsize | Channel Number | FEG Number |
|---|---|---|---|---|---|---|
| IP | 0.0001 | 150 | 32 | 15 | 30 | 2 |
| PU | 0.0001 | 150 | 64 | 13 | 20 | 2 |
| YRE | 0.0001 | 150 | 128 | 11 | 30 | 2 |
Per-class classification accuracy (%) of each method on the IP dataset:

| Classes | 2D-CNN | DBMA | SSRN | HybridSN | MDRD-Net | SSTFF | DMAN | MSFF-CBAM | MSFF |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 100 | 100 | 100 | 100 | 100 | 100 | 97.50 | 91.67 | 97.87 |
| 2 | 91.34 | 99.14 | 99.57 | 99.64 | 96.89 | 98.06 | 97.01 | 99.13 | 100 |
| 3 | 31.53 | 95.71 | 97.63 | 98.46 | 98.91 | 98.76 | 97.15 | 97.75 | 99.64 |
| 4 | 95.11 | 100 | 99.11 | 100 | 100 | 89.38 | 94.76 | 98.33 | 99.58 |
| 5 | 97.48 | 99.57 | 99.79 | 99.79 | 99.18 | 98.57 | 99.51 | 99.79 | 99.79 |
| 6 | 97.69 | 98.91 | 99.86 | 99.59 | 98.51 | 96.58 | 99.52 | 100 | 99.45 |
| 7 | 92.86 | 100 | 100 | 100 | 100 | 67.57 | 100 | 96.55 | 100 |
| 8 | 95.88 | 100 | 99.38 | 97.55 | 100 | 100 | 100 | 99.58 | 100 |
| 9 | 91.67 | 100 | 100 | 100 | 100 | 94.74 | 93.75 | 90.91 | 94.74 |
| 10 | 92.56 | 99.67 | 97.57 | 98.88 | 99.15 | 97.82 | 98.46 | 98.26 | 98.68 |
| 11 | 92.33 | 94.93 | 98.53 | 97.71 | 98.58 | 99.68 | 98.86 | 97.92 | 98 |
| 12 | 94.44 | 99.65 | 98.66 | 97.82 | 99.15 | 97.86 | 98.02 | 99.64 | 99.66 |
| 13 | 95.28 | 100 | 100 | 99.03 | 100 | 100 | 100 | 98.55 | 99.03 |
| 14 | 98.59 | 98.21 | 99.06 | 99.84 | 99.68 | 99.73 | 99.81 | 99.76 | 99.92 |
| 15 | 94.44 | 100 | 97.18 | 99.48 | 98.21 | 99.71 | 98.80 | 99.22 | 98.97 |
| 16 | 95.83 | 96.84 | 98.92 | 96.74 | 90.72 | 88.24 | 95.12 | 92.86 | 95.79 |
| OA(%) | 94.08 | 97.85 | 98.78 | 98.79 | 98.66 | 98.35 | 98.49 | 98.71 | 99.15 |
| AA(%) | 89.71 | 92.48 | 98.56 | 95.97 | 95.35 | 97.97 | 98.14 | 98.64 | 99.06 |
| Kappa(%) | 93.24 | 97.54 | 98.60 | 98.62 | 98.47 | 98.12 | 98.28 | 98.53 | 99.03 |
Per-class classification accuracy (%) of each method on the PU dataset:

| Classes | 2D-CNN | DBMA | SSRN | HybridSN | MDRD-Net | SSTFF | DMAN | MSFF-CBAM | MSFF |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 98.69 | 99.76 | 99.73 | 99.59 | 99.68 | 99.71 | 99.65 | 99.82 | 99.91 |
| 2 | 99.58 | 99.86 | 99.64 | 99.98 | 99.89 | 99.98 | 99.83 | 99.98 | 99.98 |
| 3 | 96 | 100 | 99.76 | 99.02 | 99.71 | 99.47 | 98.92 | 98.06 | 99.85 |
| 4 | 99.33 | 99.83 | 99.97 | 99.67 | 99.33 | 99.13 | 99.85 | 99.35 | 99.48 |
| 5 | 99.78 | 100 | 100 | 100 | 100 | 98.94 | 100 | 99.85 | 100 |
| 6 | 99.30 | 99.88 | 99.74 | 99.72 | 99.84 | 99.82 | 99.85 | 100 | 99.98 |
| 7 | 98.79 | 100 | 99.70 | 99.63 | 98.81 | 100 | 99.50 | 99.70 | 99.70 |
| 8 | 93.88 | 97.71 | 98.45 | 96.13 | 99.03 | 98.59 | 96.82 | 98.62 | 97.94 |
| 9 | 95.31 | 99.89 | 100 | 99.89 | 92.81 | 97.67 | 99.41 | 98.64 | 99.47 |
| OA(%) | 98.60 | 99.67 | 99.61 | 99.46 | 99.53 | 99.63 | 99.48 | 99.59 | 99.73 |
| AA(%) | 97.90 | 99.50 | 99.44 | 99.11 | 98.75 | 99.41 | 99.14 | 99.50 | 99.56 |
| Kappa(%) | 98.15 | 99.56 | 99.48 | 99.29 | 99.38 | 99.51 | 99.31 | 99.54 | 99.64 |
Per-class classification accuracy (%) of each method on the YRE dataset:

| Classes | 2D-CNN | DBMA | SSRN | HybridSN | MDRD-Net | SSTFF | DMAN | MSFF-CBAM | MSFF |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.19 | 99.71 | 99.64 | 97.90 | 100 | 99.89 | 99.97 | 99.92 | 99.86 |
| 2 | 99.06 | 99.98 | 99.96 | 100 | 100 | 100 | 100 | 99.96 | 100 |
| 3 | 93.99 | 99.47 | 100 | 100 | 99.97 | 99.43 | 99.46 | 100 | 100 |
| 4 | 97.56 | 99.13 | 99.88 | 99.58 | 99.91 | 99.81 | 99.76 | 99.76 | 99.51 |
| 5 | 99.89 | 98.94 | 99.52 | 100 | 99.98 | 99.90 | 99.99 | 99.99 | 100 |
| 6 | 99.94 | 99.82 | 100 | 99.96 | 99.79 | 100 | 100 | 100 | 99.98 |
| 7 | 99.98 | 100 | 100 | 100 | 99.40 | 99.74 | 99.95 | 100 | 100 |
| 8 | 99.23 | 98.59 | 100 | 99.78 | 100 | 99.83 | 99.93 | 99.65 | 99.98 |
| 9 | 96.43 | 97.67 | 100 | 99.95 | 99.95 | 100 | 100 | 99.93 | 100 |
| OA(%) | 99.12 | 99.63 | 99.83 | 99.83 | 99.89 | 99.88 | 99.94 | 99.92 | 99.96 |
| AA(%) | 99.15 | 99.41 | 99.61 | 99.84 | 99.86 | 99.82 | 99.92 | 99.90 | 99.95 |
| Kappa(%) | 98.94 | 99.51 | 99.79 | 99.80 | 99.87 | 99.85 | 99.93 | 99.90 | 99.95 |
Ablation settings (√ = module included; S0 uses none of the three modules):

| Name | ACP | SSJA | FF |
|---|---|---|---|
| S0 | | | |
| S1 | √ | | |
| S2 | | √ | |
| S3 | | | √ |
| S4 | √ | √ | |
| S5 | √ | | √ |
| S6 | | √ | √ |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).