Submitted: 10 February 2026
Posted: 10 February 2026
Abstract
Keywords:
1. Introduction
2. Related Work
2.1. Explicit Feature-Based Methods
2.2. Deep Learning-Based Methods
2.3. Hybrid and Structure-Aware Approaches
2.4. Our Positioning
3. Proposed Model
3.1. Overview of the Framework
3.2. Hierarchical Representation of Structure and Style
3.2.1. Character-Level Structural Abstraction
3.2.2. Stroke-Level Feature Extraction
3.2.3. Modeling Temporal Dynamics in Handwriting
3.2.4. Modeling Stylistic Variation
- Stroke and Radical Styles
- Character Styles
- Character-Level Style Synthesis
3.2.5. Modeling Personalized Handwriting in a Continuous Style Space
3.4. Stroke Element Representation: A Hierarchical Vector Model
3.4.1. Foundational Definitions
3.4.2. A Three-Level Hierarchical Taxonomy
3.4.3. Visualizing the Representation Pipeline
3.4.4. Advantages and Implications
3.5. Parametric Style Generation via Vector Manipulation
3.5.1. Vector-Based Style Parameterization
3.5.2. The Yoke Vector Mechanism for Contour Control
3.5.3. Bézier Curve Generation from Vector Parameters
3.5.4. Hierarchical Style Generation
3.5.5. Summary of the Style Generation Framework
- Continuous style interpolation between different font weights and designs (see the sketch following this list).
- Independent control over local stroke features without affecting structural integrity.
- Efficient style transfer by applying parameter sets to structural templates.
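As a minimal illustration of the first capability above, the sketch below blends two sets of yoke-vector parameters linearly. The dictionary layout (`eta`, `alpha` per stroke element) and the linear blending rule are assumptions made for illustration; the paper's actual interpolation procedure may differ.

```python
# Minimal sketch of continuous style interpolation between two parameter sets.
# Assumption: each style is a list of per-stroke-element yoke-vector parameters
# (a magnitude "eta" and an angle "alpha"); the linear blend is illustrative only.

def interpolate_styles(style_a, style_b, t):
    """Blend two style parameter sets with factor t in [0, 1]."""
    blended = []
    for pa, pb in zip(style_a, style_b):
        blended.append({
            "eta": (1 - t) * pa["eta"] + t * pb["eta"],
            "alpha": (1 - t) * pa["alpha"] + t * pb["alpha"],
        })
    return blended

# Example: a weight halfway between a lighter and a heavier horizontal stroke
# (values loosely based on the Heng parameters listed in Section 5).
light = [{"eta": 48, "alpha": 90}]
heavy = [{"eta": 60, "alpha": 90}]
print(interpolate_styles(light, heavy, 0.5))  # [{'eta': 54.0, 'alpha': 90.0}]
```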
4. AI-Driven Optimization Framework
4.1. Overview
- Parameter Selection: Determining optimal Bézier curve parameters (yoke vector angles α and magnitudes η) for each stroke element
- Quality-Efficiency Trade-off: Balancing trajectory fidelity against computational cost (see the sketch after this list)
- Adaptability: Adjusting optimization strategies based on character complexity
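The quality-efficiency trade-off named above can be pictured with the toy scoring routine below. The weighting, the 15 ms budget (taken from the latency target reported in Section 5.3), and the candidate grid are assumptions for illustration; the reinforcement-learning and genetic-algorithm layers described next are not reproduced here.

```python
# Toy sketch of a quality-efficiency trade-off used to rank candidate
# (alpha, eta) parameter pairs. The scoring formula is an illustrative assumption.

def tradeoff_score(fidelity, time_ms, weight=0.7, time_budget_ms=15.0):
    """Combine trajectory fidelity (0..1) and latency into a single score."""
    efficiency = max(0.0, 1.0 - time_ms / time_budget_ms)
    return weight * fidelity + (1.0 - weight) * efficiency

def select_parameters(candidates, evaluate):
    """Pick the (alpha, eta) candidate with the best trade-off score."""
    return max(candidates, key=lambda params: tradeoff_score(*evaluate(params)))

# Example with a stand-in evaluator (replaces actual rendering + measurement).
candidates = [(90, 48), (90, 60), (85, 36)]
toy_evaluate = lambda p: (0.60 + 0.001 * p[1], 9.0 + 0.05 * p[1])  # (fidelity, ms)
print(select_parameters(candidates, toy_evaluate))
```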
4.2. Three-Layer Optimization Architecture
4.2.1. Layer 1: Reinforcement Learning for Parameter Selection
4.2.2. Layer 2: Genetic Algorithm for Style Exploration
4.2.3. Layer 3: Cloud-Aware Resource Allocation
4.3. Adaptive Complexity-Aware Optimization
4.3.1. Complexity Score Definition
4.3.2. Adaptive Parameter Selection
4.3.3. Optimization Algorithm (Summary)
4.4. Evaluation Metrics
- Trajectory Fidelity
- Curvature Smoothness
- Generation Efficiency
4.5. Implementation Considerations
5. Experimental Results and Analysis
5.1. Structural Reconstruction Evaluation
5.1.1. Reconstruction with Basic Stroke Elements
5.1.2. Reconstruction with Extended Stroke Elements
5.2. Stylistic Generation and Interpolation
5.2.1. Style Generation via Basic Stroke Elements
5.2.2. Style Generation via Extended/Combined Stroke Elements
5.3. Performance Evaluation of AI-Driven Optimization
5.3.1. Dataset and Experimental Setup
- Dataset Description
- Evaluation Methods
- Trajectory Fidelity (%): Structural preservation derived from the normalized Hausdorff distance between the reference and rendered trajectories (higher is better); see the sketch below.
- Normalized Curvature Variance: Consistency of curvature along the rendered stroke.
- Generation Time (ms): Computational latency per character (lower is better).
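A sketch of the first two metrics is given below, using SciPy's directed Hausdorff distance. The normalization by the reference bounding-box diagonal, the `1 - d` fidelity formula, and the discrete curvature estimate are plausible formulations assumed for illustration; the paper's exact definitions may differ.

```python
# Sketch of the trajectory-fidelity and curvature metrics listed above.
# Assumptions: fidelity = 1 - (symmetric Hausdorff distance / reference
# bounding-box diagonal); curvature is estimated from discrete derivatives.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def trajectory_fidelity(reference, rendered):
    """Structural preservation in [0, 1]; higher is better."""
    d = max(directed_hausdorff(reference, rendered)[0],
            directed_hausdorff(rendered, reference)[0])
    diag = np.linalg.norm(reference.max(axis=0) - reference.min(axis=0))
    return max(0.0, 1.0 - d / diag)

def normalized_curvature_variance(points):
    """Variance of discrete curvature along the rendered stroke."""
    d1 = np.gradient(points, axis=0)
    d2 = np.gradient(d1, axis=0)
    kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
            / (np.hypot(d1[:, 0], d1[:, 1]) ** 3 + 1e-9)
    return float(np.var(kappa))

reference = np.array([[0, 0], [1, 0.1], [2, 0.0], [3, -0.1]], float)
rendered = np.array([[0, 0], [1, 0.2], [2, 0.1], [3, 0.0]], float)
print(trajectory_fidelity(reference, rendered), normalized_curvature_variance(rendered))
```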
5.3.2. Overall Performance Comparison
1. Trajectory Fidelity Improvement
2. Performance Consistency Enhancement
3. Computational Efficiency Analysis
4. Curvature Metric Interpretation
5.3.3. Performance by Character Complexity
| Complexity Range | Character Count | Baseline Fidelity | AI Adaptive Fidelity | Improvement |
| Simple (< 20) | 47 | 62.3% | 66.8% | +7.2% |
| Medium (20-50) | 68 | 60.5% | 65.1% | +7.6% |
| Complex (≥ 50) | 35 | 59.2% | 63.4% | +7.1% |

| Complexity Score | Interpolation Factor | Smoothing σ |
| < 20 | 2× original points | 0.5 |
| 20-50 | 2× original points | 0.8 |
| ≥ 50 | 3× original points | 1.0 |
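A minimal lookup implementing the adaptive settings in the table above is sketched below; the complexity score itself (Section 4.3.1) is assumed to be computed elsewhere.

```python
# Complexity-aware parameter lookup following the adaptive table above.
# The complexity score is assumed to be precomputed for each character.

def adaptive_parameters(complexity_score):
    """Return (interpolation_factor, gaussian_sigma) for a complexity score."""
    if complexity_score < 20:
        return 2, 0.5
    if complexity_score < 50:
        return 2, 0.8
    return 3, 1.0

for score in (12, 35, 64):
    print(score, adaptive_parameters(score))  # (factor, sigma) per range
```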
5.3.4. Stroke Element Type Analysis
| Stroke Element Type | Count | Baseline | AI Adaptive | Improvement |
| 横向 (Horizontal) | 281 | 63.2% | 68.1% | +7.8% |
| 竖向 (Vertical) | 224 | 62.8% | 67.5% | +7.5% |
| 撇捺 (Diagonal) | 247 | 58.4% | 62.9% | +7.7% |
| 折转 (Turning) | 168 | 57.1% | 61.8% | +8.2% |
| 复合 (Compound) | 146 | 59.3% | 63.2% | +6.6% |
| 点 (Dot) | 57 | 65.4% | 69.8% | +6.7% |
5.3.5. Qualitative Visualization
5.3.6. System Performance and Scalability
| Metric | Result | Benchmark |
| Single Character Generation Time | 12.18 ± 5.2 ms | Target: <15ms ✓ |
| Storage per Character (template) | ~0.8 KB | vs. TTF: ~8 KB (90% reduction) |
| Memory Usage (150 chars) | 42 MB | Acceptable for cloud deployment |
| Batch Processing (150 chars) | 1.83 s | ~12ms per character |

| Concurrent Requests | Avg. Response Time | Fidelity Maintained |
| 1 | 12.18 ms | 65.2% |
| 10 | 14.5 ms | 64.8% |
| 50 | 18.2 ms | 63.5% |
| 100 | 25.3 ms | 62.1% |
5.4. Cross-Style Analysis: Impact of Font Style on Stroke Element Representation
5.4.1. Key Quantitative Findings
5.4.2. Implications and Validation for the Proposed Method
5.4.3. Contextual Interpretation of Experimental Results
6. Conclusion
6.1. Summary
6.2. Experimental Validation
| Key Performance Indicator | Result | Significance |
| Trajectory Fidelity | 65.2% (AI Adaptive) vs 60.9% (Baseline) | Statistically significant relative improvement of 7.1% |
| Performance Consistency | ±7.5% standard deviation (vs ±8.2% baseline) | More reliable across varying complexity |
| Generation Latency | <15ms per character | Suitable for real-time interactive applications |
| Storage Efficiency | ~0.8KB per character (vs 8KB TTF) | 90% reduction in storage requirements |
| Scalability | Maintains >62% fidelity at 100 concurrent requests | Cloud-deployment ready |
6.3. Contributions
6.4. Limitations and Future Work
6.5. Broader Implications
6.6. Final Remarks
Acknowledgments
Appendix A
Appendix A.1. Dataset Specification and Glyph Atlas
| Statistic Category | Value | Notes / Distribution |
| Character Set Size | 150 characters | Total number of evaluated glyphs |
| Total Stroke Elements | 1,123 | Sum of all atomic stroke components |
| Total Feature Points | 5,287 | Control points defining all stroke elements |
| Avg. Elements per Character | 7.49 | Range: 1 – 18 |
| Complexity Distribution | Simple (1–5): 40 chars; Medium (6–10): 87 chars; Complex (11+): 23 chars | |
| Stroke Type Statistics | Horizontal: 313 (27.9%); Vertical: 342 (30.5%); Compound/Turning: 362 (32.2%); Diagonal: 103 (9.2%); Dot: 3 (0.3%) | Percentage of total stroke elements |
Appendix A.2. Glyph Atlas
Appendix B
Appendix B.1. Font Style Impact on Stroke Element Representation
Appendix B.1.1. Dataset Description
| Metric | Xingkai (Cursive) | Heiti (Sans-Serif) | Difference (Heiti - Xingkai) |
| Character Set Size | 150 | 150 | — |
| Total Stroke Elements | 1,123 | 1,640 | +517 (+46.0%) |
| Total Feature Points | 5,287 | 6,371 | +1,084 (+20.5%) |
| Avg. Elements per Character | 7.49 | 10.93 | +3.44 (+45.9%) |
| Avg. Points per Character | 35.2 | 42.5 | +7.3 (+20.7%) |
| Element Count Range | 1 – 18 | 3 – 22 | — |
Appendix B.2. Quantitative Comparison of Structural Characteristics
| Complexity Category (by element count) | Xingkai (Cursive) | Heiti (Sans-Serif) |
| Simple (1–5 elements) | 40 chars (26.7%) | 6 chars (4.0%) |
| Medium (6–10 elements) | 87 chars (58.0%) | 66 chars (44.0%) |
| Complex (11+ elements) | 23 chars (15.3%) | 78 chars (52.0%) |
| Stroke Type | Xingkai (Cursive) | Heiti (Sans-Serif) | Stylistic Implication |
| Horizontal (横向) | 313 (27.9%) | 742 (45.2%) | Heiti’s geometric design favors discrete, straight strokes. |
| Vertical (竖向) | 342 (30.5%) | 594 (36.2%) | Consistent prevalence across styles. |
| Diagonal (撇/捺) | 103 (9.2%) | 294 (17.9%) | More fragmented diagonal elements in Heiti. |
| Turning / Compound (折/复合) | 362 (32.2%) | 10 (0.6%) | Key differentiator: Xingkai merges strokes into complex, cursive compounds. |
| Dot (点) | 3 (0.3%) | 0 (0.0%) | Minimal representation in this character set. |
Appendix B.3. Key Observations and Implications for the Framework
Appendix B.4. Visualization Reference
| ID | Name (Pinyin / Chinese) | Feature Points | Glyph Illustration | Vector Representation |
| 1 | Pie (撇) | — | — | — |
| 2 | Na (捺) | — | — | — |
| 3 | Heng (横) | — | — | — |
| 4 | Shu (竖) | — | — | — |
| 5 | Dian (点) | — | — | — |
(Feature points, glyph illustrations, and vector representations were figures in the original; omitted.)
| ID | Combined Element | Feature Points | Composition Formula |
| 1 | CPie | — | — |
| 2 | CNa | — | — |
| 3 | CHeng | — | — |
| 4 | CShu | — | — |
| 5 | CDian | — | — |

| ID | Extended Element | Feature Points | Expression |
| 1 | Ti (提) | — | — |
| 2 | RGou (右钩) | — | — |
| 3 | LGou (左钩) | — | — |
(Feature points, composition formulas, and expressions were figures or equations in the original; omitted.)
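One way to hold this three-level taxonomy in code is sketched below. The field names and the example composition are assumptions for illustration, since the original composition formulas and expressions were figures.

```python
# Hedged sketch of the three-level stroke-element taxonomy: basic elements
# (Pie, Na, Heng, Shu, Dian), combined elements (CPie, CNa, ...), and extended
# elements (Ti, RGou, LGou). Composition details are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StrokeElement:
    name: str                                  # e.g. "Heng" or "Ti"
    feature_points: List[Tuple[float, float]]  # ordered feature points
    components: List["StrokeElement"] = field(default_factory=list)

    @property
    def level(self) -> str:
        return "basic" if not self.components else "combined/extended"

heng = StrokeElement("Heng", [(0, 0), (10, 0)])
ti = StrokeElement("Ti", [(0, 0), (8, -3)], components=[heng])  # hypothetical composition
print(heng.level, ti.level)
```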
(Figure table omitted: vector diagrams and rendered strokes at angles of 50°, 100°, and 130°.)
(Figure table omitted: rendered styles for parameter values 50, 60, 80, and 120.)
(Figure table omitted: feature points and fitted curves for the elements CPie, CNa, CHeng, and CShu.)
| ID | Stroke-Element Vector Sequence |
| 1 | (-13,-11) (-13,8) (-13,7) |
| 2 | (-12,-11) (-9,-11) (-9,8) |
| 3 | (-12,6) (-9,6) |
| 4 | (-6,-12) (0,-12) (-2,-3) (0,1) (0,5) (-1,7) (-3,7) |
| 5 | (-5,-12) (-5,13) |
| 6 | (2,-12) (13,-12) |
| 7 | (11,-11) (11,11) (10,12) (6,12) |
| 8 | (3,-6) (3,7) |
| 9 | (3,-6) (7,-6) (7,4) |
| 10 | (-13,-11) (-13,8) (-13,7) |
(The visual-abstraction and reconstructed-glyph column was a figure; omitted.)
| ID | Stroke-Element Vector Sequence |
| 1 | (-17,-9) (-12,2) (-12,-10) (-7,-9) (-8,-3) (-11,-4) (-12,-5) (-11,-5) (1,-19) (4,-18) (1,-12) (3,-7) (1,-5) (-3,-11) (-2,8) (-1,-1) |
| 2 | (3,-12) (21,-16) (20,-16) |
| 3 | (5,-5) (12,-7) (11,0) (9,-4) (13,-11) (16,-8) (14,15) |
(The visual-abstraction and reconstructed-glyph column was a figure; omitted.)
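For a quick visual check, the sequences above can be drawn directly, as in the sketch below. It assumes each tuple is a feature-point coordinate in a shared glyph coordinate system with a top-down y-axis and simply connects the points of each element as a polyline; the Bézier fitting of Section 3.5.3 is omitted.

```python
# Rough rendering of stroke-element vector sequences as polylines.
# Assumptions: tuples are (x, y) feature points in one glyph coordinate
# system with y increasing downward; curve fitting is not applied.
import matplotlib.pyplot as plt

strokes = [
    [(-13, -11), (-13, 8), (-13, 7)],
    [(-12, -11), (-9, -11), (-9, 8)],
    [(-12, 6), (-9, 6)],
    [(-6, -12), (0, -12), (-2, -3), (0, 1), (0, 5), (-1, 7), (-3, 7)],
    [(-5, -12), (-5, 13)],
]

fig, ax = plt.subplots(figsize=(3, 3))
for sequence in strokes:
    xs, ys = zip(*sequence)
    ax.plot(xs, [-y for y in ys], linewidth=2)  # flip y for the assumed top-down axis
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("glyph_sketch.png")
```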
| ID | Element | Parameters (η1, η2; α1, α2) | Vector Diagram | Rendered Stroke |
| 1 | Heng | η1=48, η2=53; α1=90, α2=90 | — | — |
| 2 | Shu | η1=60, η2=31; α1=90, α2=90 | — | — |
| 3 | Pie | η1=0, η2=36; α1=0, α2=85 | — | — |
| 4 | Na | η1=40, η2=0; α1=124, α2=0 | — | — |
| 5 | Dian | η1=0, η2=80; α1=0, α2=90 | — | — |
(Vector diagrams and rendered strokes were figures; omitted.)
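The table above lists yoke-vector parameters per element. One plausible way to turn them into a curve, sketched below, places the inner control points of a cubic Bézier along the directions α1 and α2 with lengths proportional to η1 and η2; this construction and the `scale` factor are assumptions for illustration, not the paper's exact mapping.

```python
# Hedged sketch: cubic Bezier whose inner control points are driven by
# yoke-vector magnitudes (eta) and angles (alpha). Illustrative mapping only.
import numpy as np

def cubic_bezier_from_yoke(p0, p3, eta1, eta2, alpha1, alpha2, scale=0.01, n=50):
    """Sample n points of a cubic Bezier between endpoints p0 and p3."""
    p0, p3 = np.asarray(p0, float), np.asarray(p3, float)
    chord = np.linalg.norm(p3 - p0)
    a1, a2 = np.radians(alpha1), np.radians(alpha2)
    p1 = p0 + scale * eta1 * chord * np.array([np.cos(a1), np.sin(a1)])
    p2 = p3 - scale * eta2 * chord * np.array([np.cos(a2), np.sin(a2)])
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Example: a Heng (horizontal) stroke using row 1 of the table above.
curve = cubic_bezier_from_yoke((0, 0), (100, 0), eta1=48, eta2=53, alpha1=90, alpha2=90)
print(curve.shape, curve[0], curve[-1])
```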
(Figure table omitted: source glyph styles and corresponding reconstructed glyph styles for four example characters.)
| Metric | Value | Description / Context |
| Total Characters | 150 | The complete character set analyzed in the experiment |
| Total Stroke Elements | 1,123 | Sum of all stroke elements across all characters |
| Total Feature Points | 5,287 | Total number of feature points (including stationary points) extracted |
| Average Stroke Elements per Character | 7.49 | |
| Average Feature Points per Character | 35.25 | |
| Stroke Element Count Range | 1 – 18 | Minimum and maximum stroke elements observed in a single character |
| Unique Stroke Element Types | 6 | Distinct categories of stroke elements |
| Method | Description | Characteristics |
| Baseline (Linear Interpolation) | Proportional resampling with linear interpolation | Traditional, non-optimized rendering |
| AI Optimized (Cubic + Gaussian) | Cubic spline interpolation with Gaussian smoothing (σ = 1.0) | Standard AI-enhanced rendering |
| AI Adaptive (Complexity-Aware) | Dynamically adjusts optimization parameters based on character complexity score | Our proposed method, context-sensitive |
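The two rendering pipelines compared above can be sketched as follows, using SciPy. The resampling factor, the σ default, and the per-coordinate smoothing are assumptions based on the descriptions in the table; they are not the authors' exact implementation.

```python
# Sketch of the baseline and AI-optimized rendering paths described above:
# proportional linear resampling versus cubic-spline interpolation followed
# by Gaussian smoothing of each coordinate.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.ndimage import gaussian_filter1d

def baseline_render(points, factor=2):
    """Linear interpolation: resample the polyline at factor x the point count."""
    points = np.asarray(points, float)
    t = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, factor * len(points))
    return np.column_stack([np.interp(t_new, t, points[:, d]) for d in (0, 1)])

def ai_render(points, factor=2, sigma=1.0):
    """Cubic-spline interpolation plus Gaussian smoothing (sigma as in the table)."""
    points = np.asarray(points, float)
    t = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, factor * len(points))
    return gaussian_filter1d(CubicSpline(t, points, axis=0)(t_new), sigma, axis=0)

stroke = [(-6, -12), (0, -12), (-2, -3), (0, 1), (0, 5), (-1, 7), (-3, 7)]
print(baseline_render(stroke).shape, ai_render(stroke).shape)
```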
| Metric | Baseline | AI Optimized | AI Adaptive | Comparison Note |
| Trajectory Fidelity (%) | 60.9 ± 8.2 | 60.8 ± 8.1 | 65.2 ± 7.5 | AI Adaptive ↑7.1% |
| Normalized Curvature | 0.174 ± 0.18 | 0.339 ± 0.43 | 9.59 ± 36.9 | Context-dependent |
| Generation Time (ms) | 9.38 ± 4.2 | 12.14 ± 4.9 | 12.18 ± 5.2 | Δ~+3ms, <15ms target ✓ |
| Fidelity Improvement | - | -0.2% | +7.1% | Statistically significant |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).