1. Introduction
With the advent of information warfare, electronic warfare has become a critical component of modern military operations. Electronic warfare training, an essential tool for enhancing the ability of combat personnel to respond to electronic warfare threats, faces challenges such as high complexity and cost. Traditional training methods rely heavily on manual design and operation, which, while effective to some extent, struggle to adapt to rapidly changing battlefield environments and constantly evolving electronic warfare tactics. Improving the intelligence and automation of training systems has therefore become an inevitable trend in the development of electronic warfare training.

Large Language Models (LLMs), as a breakthrough in artificial intelligence, show tremendous potential across many fields thanks to their powerful natural language processing and generation capabilities. In electronic warfare training, LLMs can automatically generate combat scenarios, simulate electronic warfare strategies, and produce data for tactical interactions between opposing forces, significantly improving the efficiency and realism of training. LLM-based approaches can also reduce training costs and improve scalability and flexibility. This paper explores the application of LLMs in electronic warfare training, analyzing their advantages, challenges, and future development directions. By detailing the working principles of LLMs and their specific applications in electronic warfare training, it aims to provide reference and guidance for further enhancing the intelligence and practical effectiveness of electronic warfare training.
2. Basic Principles of Large Language Models
2.1. Overview of Large Language Models
A Large Language Model (LLM) is a natural language processing (NLP) model based on deep learning, capable of understanding, generating, and reasoning over complex language structures and semantic information. Compared with traditional language models, LLMs have far greater computational capacity and parameter scale, allowing them to learn language patterns from vast amounts of text and to perform tasks such as language generation, translation, summarization, and question answering efficiently.

The core technology of LLMs is the neural network, in particular the Transformer architecture. The Transformer processes text using self-attention, which captures long-range dependencies between words in parallel and thus overcomes the limitations of traditional models such as RNNs and LSTMs on long texts [1,2,3]. Because the Transformer can attend to all word positions in a sentence simultaneously, LLMs can be trained on larger datasets and generate more accurate and coherent text.

LLMs are typically trained on massive text corpora such as Wikipedia, news articles, books, and other online materials. During training, the model learns statistical language patterns and semantic information through self-supervised learning. In the GPT (Generative Pretrained Transformer) series, for example, the model is trained to predict the next word in a given text, continuously adjusting its weights to capture deeper syntactic and semantic relationships.

As computational power and data scale continue to grow, model sizes have grown from the hundreds of millions of parameters of models like BERT to the hundreds of billions of GPT-3 and beyond, enabling unprecedented performance across NLP tasks [4,5]. Such models can not only generate text but also perform text classification, sentiment analysis, and machine translation, and they excel on many natural language understanding (NLU) and natural language generation (NLG) benchmarks.

In electronic warfare training, LLMs are valuable not only for generating simulated combat scenarios and corpora but also for assisting system decision-making and strategy optimization through their language understanding and reasoning capabilities. LLMs have therefore become an important technology with significant potential in electronic warfare training [6].
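As a concrete illustration of the self-attention computation described above, the following minimal NumPy sketch implements standard scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The toy dimensions and random inputs are illustrative assumptions, not parameters of any specific model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention over a sequence.

    Q, K: arrays of shape (seq_len, d_k); V: (seq_len, d_v).
    Every position attends to every other position in parallel,
    which is how the Transformer captures long-range dependencies.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

# Toy example: a 4-token sequence with 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

In a full Transformer this computation is repeated across multiple heads and layers; the sketch shows only the single-head core that the prose describes.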
2.2. Working Mechanism and Technical Framework of Large Language Models
Large Language Models are built on the Transformer architecture in deep learning. Through the self-attention mechanism and large-scale parallel computing, LLMs can effectively process and generate natural language text. The full Transformer consists of an encoder and a decoder, which model language efficiently by capturing long-range dependencies between words. In practice, many LLMs use only the decoder or only the encoder, depending on the task: GPT models use only the decoder to generate text, while BERT models use only the encoder for text understanding. The self-attention mechanism, one of the core innovations of the Transformer, allows the model to adjust its attention to other words in the text while processing each word, capturing long-range dependencies and semantic relationships and overcoming the limitations of traditional RNNs and LSTMs on long texts [6,7,8].

The training process of LLMs comprises two phases: pretraining and fine-tuning. In the pretraining phase, the model learns language patterns and semantic information through self-supervised learning on massive unlabeled text data, either by predicting the next word (as in GPT models) or by filling in masked parts of the text (as in BERT models). After pretraining, the model is fine-tuned on labeled datasets for specific tasks such as sentiment analysis, text generation, or machine translation.

The power of LLMs lies in their massive parameter scale and computational capacity [9,10]. With the continued growth of computational resources, modern LLMs have reached billions or even trillions of parameters; GPT-3, for example, has 175 billion parameters. This scale allows LLMs to capture more complex language patterns and deeper semantic information, but training such models requires substantial computational resources, typically large-scale distributed computing on Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs), together with distributed storage architectures for efficient data handling.

In inference and generation, LLMs produce contextually relevant outputs from input text: the input is encoded, the context is modeled, and the output is generated token by token so that it is grammatically and semantically coherent. Generation typically uses greedy decoding, sampling, or beam search to balance coherence and diversity.

The working mechanism of LLMs thus integrates advanced deep learning technologies, namely the Transformer architecture, self-attention, and large-scale computing, enabling them to excel across a wide range of NLP tasks. Through pretraining and fine-tuning, these models adapt to different fields and tasks, achieving remarkable results in text generation, sentiment analysis, machine translation, and other areas. In electronic warfare training, LLMs are likewise applied to tasks such as generating simulated combat scenarios and simulating electronic warfare strategies, helping to improve training efficiency and effectiveness [11].
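To make the decoding strategies mentioned above concrete, the sketch below contrasts greedy decoding with temperature sampling over a toy next-token distribution. The vocabulary and logit values are invented for illustration; a real LLM applies the same selection step to a distribution over tens of thousands of tokens at each generation step.

```python
import numpy as np

# Toy next-token logits over a tiny vocabulary (illustrative values only).
vocab = ["jam", "radar", "signal", "the", "decoy"]
logits = np.array([2.1, 1.7, 0.4, 0.3, -0.5])

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Greedy decoding: always pick the highest-probability token (deterministic).
greedy_token = vocab[int(np.argmax(logits))]

def sample(logits, temperature=0.8, seed=0):
    """Temperature sampling: rescale logits, then draw from the distribution.
    Lower temperature -> more deterministic; higher -> more diverse output."""
    probs = softmax(logits / temperature)
    rng = np.random.default_rng(seed)
    return vocab[rng.choice(len(vocab), p=probs)]

print("greedy :", greedy_token)
print("sampled:", sample(logits))
```

Beam search, the third strategy named above, extends this idea by keeping the k highest-scoring partial sequences at each step instead of a single token choice.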
3. Basic Concepts and Requirements of Electronic Warfare Training
3.1. Overview of Electronic Warfare Training
Electronic Warfare Training (EWT) refers to training methods that simulate enemy electronic warfare tactics, environments, and threats to help military personnel and related units improve their ability to respond in electronic warfare environments. The core goal of EWT is to deepen understanding of the technical methods and tactical operations of electronic warfare and to improve personnel's ability to respond to electronic warfare threats, thereby enhancing overall combat effectiveness. With the advancement of information warfare, electronic warfare has become a key component of modern military conflicts, especially in countering enemy communication, radar, satellite navigation, and other electronic systems [12,13,14,15]. As a result, the role of electronic warfare training is increasingly important: it helps military units improve the adaptability of their technical equipment and enhances combatants' practical experience and emergency response capabilities.
Electronic warfare training is conducted through methods such as simulation, virtual environments, and real-world exercises. Training content covers Electronic Attack (EA), Electronic Protection (EP), and Electronic Support (ES). In Electronic Attack, training typically involves jamming, suppression, and deception of enemy electronic systems to weaken or disrupt their operations. In Electronic Protection, training focuses on ensuring the safety and stability of one's own electronic facilities and preventing enemy interference and attacks. Electronic Support involves electronic listening, intelligence gathering, and the acquisition of key information on enemy actions and the electronic environment to support decision-making [16].

As counteracting environments grow more complex, modern electronic warfare training must not only simulate enemy electronic warfare tactics but also cover a range of scenarios such as communication and radar jamming, navigation system deception, and information warfare. Training systems need a high degree of realism, dynamism, and real-time capability to cover operations from basic actions to complex tactics and to simulate battlefield uncertainty as closely as possible. Enhancing the intelligence and automation of electronic warfare training systems has therefore become a key issue in current research.

The demand for electronic warfare training is also becoming increasingly diverse. Traditional training methods often suffer from high costs, long durations, and a lack of flexibility, making it difficult to respond to rapidly changing battlefield environments. To overcome these challenges, a growing body of research incorporates advanced artificial intelligence technologies, particularly Large Language Models and natural language processing, into electronic warfare training. These technologies enable training systems to automatically generate combat scenarios, simulate complex electronic warfare situations, and even provide real-time strategy suggestions, greatly improving training efficiency and realism [17,18,19,20].
3.2. Challenges and Requirements of Electronic Warfare Training
As modern warfare evolves, electronic warfare training faces increasingly complex challenges. First, the electronic warfare environment is highly dynamic and uncertain. Enemy electronic attack methods, technological levels, and tactical approaches can change at any time, so training systems must be highly flexible and able to adjust training content and environments in real time to simulate the most realistic battlefield situations. Traditional training models often struggle to adapt quickly to these changes and fail to reflect updates in enemy electronic warfare methods in a timely manner.

Second, with the rapid development of information technology, electronic warfare systems have become increasingly complex. Modern electronic warfare training must cover multiple tactics spanning communication, radar, satellite navigation, and network defense and attack. These methods are interwoven, and the required coordination between systems makes training tasks even more complicated [21]. Existing training systems often struggle to deliver effective, comprehensive training in such a complex environment, so integrating multi-dimensional, multi-level electronic warfare tasks into a unified training platform has become an urgent issue.

Furthermore, the real-time and efficiency requirements of electronic warfare training pose another significant challenge. Electronic warfare often requires trainees to make decisions within a very short time and respond quickly to changing battlefield situations. Some existing training methods require long preparation and execution times, which reduces training efficiency and limits the frequency and depth of training. In actual combat, rapid response is key to victory or defeat, so training systems need to be highly automated and intelligent, able to simulate complex scenarios on short notice, provide diverse training content, and enhance training timeliness and effectiveness.

Training cost is another issue that cannot be ignored. Traditional electronic warfare training typically requires large amounts of hardware with high maintenance costs, and on-site exercises demand substantial time and resources. As training demands grow more complex and diverse, reducing costs while ensuring training quality and sustainability has become a critical industry challenge.

In the face of these challenges, the technical requirements for electronic warfare training continue to rise. Beyond traditional hardware support, intelligent technologies such as big data analytics, artificial intelligence, and cloud computing are being introduced. Leveraging these technologies, electronic warfare training systems can automatically generate training scenarios, adjust combat modes in real time, and provide immediate feedback and evaluation, improving training flexibility, real-time performance, and effectiveness [22].
4. Applications of Large Language Models in Electronic Warfare Training
4.1. Data Generation for Combat Scenarios by Language Models
In electronic warfare training, simulating realistic combat scenarios is crucial for enhancing training effectiveness. Large Language Models (LLMs), with their powerful text generation capabilities, can efficiently produce varied electronic warfare scenario data, helping training systems simulate enemy electronic warfare tactics and environmental changes more accurately. Specifically, LLMs can generate corpora related to electronic warfare, including the working principles of enemy electronic equipment, potential jamming strategies, attack patterns, and countermeasures, thereby providing diverse combat scenarios for training personnel [23].
The applications of LLMs in data generation primarily include:

- Generating enemy equipment behavior: LLMs can automatically generate the operational status, attack methods, and countermeasures of enemy electronic devices based on predefined electronic warfare objectives. For example, given the basic parameters of an enemy radar system, an LLM can generate possible attack strategies, such as jamming and deception, along with effective countermeasures against them. The generated data can be used to construct enemy electronic warfare environments in virtual training [24,25,26].
- Generating battlefield dynamics: Through natural language processing, LLMs can automatically generate battlefield dynamics data covering the deployment of electronic devices, tactical configurations, and communication content of both sides. These data can be integrated into training systems to simulate the evolution of enemy electronic warfare strategies and the adjustment of countermeasures, enhancing both the diversity of training and trainees' ability to respond to complex battlefield situations.
- Generating analysis reports: LLMs can generate analysis reports for combat scenarios based on data collected during training. By summarizing and synthesizing the electronic warfare data of both sides, the model can produce detailed battlefield assessments, strategy evaluations, and other reports that help trainees understand the factors behind success or failure in electronic warfare and provide data support for future tactical adjustments [27].
Table 1 demonstrates the different applications of LLMs in generating data for electronic warfare training.
Through the generation of such data, training systems can create more complex and dynamic electronic warfare scenarios, allowing participants to quickly respond and execute effective countermeasures in the face of various dynamic changes. The application of LLMs in data generation significantly enhances the realism and efficiency of electronic warfare training while reducing the labor costs and time consumption associated with traditional data generation processes.
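As a hedged illustration of how such scenario data might be requested from an LLM in practice, the sketch below assembles a structured prompt from radar parameters and validates a JSON response before use. The `generate` function is a hypothetical placeholder for whatever model backend a training system actually uses, and the prompt fields and parameter names are illustrative assumptions, not part of any system described in this paper.

```python
import json

SCENARIO_PROMPT = """You are generating electronic warfare training data.
Enemy radar parameters:
  band: {band}
  peak power: {power_kw} kW
  scan type: {scan}
Produce JSON with fields: "attack_strategies" (a list of jamming or
deception options) and "countermeasures" (a list of friendly responses)."""

def build_prompt(band: str, power_kw: float, scan: str) -> str:
    """Fill the template with the radar parameters for one scenario."""
    return SCENARIO_PROMPT.format(band=band, power_kw=power_kw, scan=scan)

def parse_scenario(raw: str) -> dict:
    """Validate the model's JSON output before feeding it to the trainer."""
    data = json.loads(raw)
    assert "attack_strategies" in data and "countermeasures" in data
    return data

def generate(prompt: str) -> str:
    # Hypothetical hook: wire this to the training system's LLM API.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt(band="X-band", power_kw=120.0, scan="circular")
    print(prompt)
    # scenario = parse_scenario(generate(prompt))
```

Keeping the prompt templated and the output machine-checkable is one plausible way to obtain the hourly scenario volumes listed in Table 1 without manual authoring.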
4.2. Large Language Models Assisting in Electronic Warfare Strategy Simulation
The application of Large Language Models (LLMs) in electronic warfare strategy simulation significantly enhances the realism and decision-making efficiency of electronic warfare training. By learning from large volumes of historical battle data, counter-strategy tactics, and battlefield intelligence, LLMs can automatically generate multiple electronic warfare strategies and simulate their outcomes. During the simulation, the model not only simulates tactical interactions between opposing forces but also evaluates the effectiveness and potential risks of various strategies, helping trainees quickly adjust their tactical plans.
Specific applications of LLMs in electronic warfare strategy simulation include:

- Generating attack strategies based on enemy electronic warfare tactics: LLMs analyze enemy electronic warfare methods and generate attack strategies based on the enemy's device capabilities and tactical intentions. These strategies can cover techniques such as radar jamming, satellite signal deception, and communication suppression. The model can also simulate effective countermeasures and provide targeted tactical suggestions.
- Simulating tactical interactions in complex electronic warfare environments: In electronic warfare, the actions of both sides influence each other, creating a competitive dynamic. LLMs can simulate the interaction between enemy and friendly forces under different scenarios, predicting the enemy's next move and providing strategic responses for the friendly side. Through repeated simulations, the model helps trainees learn the strengths and weaknesses of various tactical combinations and optimize their decision-making.
- Assessing the potential outcomes of different tactical choices: LLMs can simulate the possible results of implementing different strategies and conduct comparative analysis based on success probabilities, failure risks, and potential losses [28]. This enables the model to generate comprehensive tactical evaluation reports, helping decision-makers make more scientifically informed choices.
Table 2 illustrates different tactical scenarios generated by the LLM in electronic warfare strategy simulations, together with their corresponding evaluation data.
Using such simulation data, LLMs can generate detailed results based on the characteristics of each tactic and assess its feasibility. This helps training personnel understand the implementation effects of each tactic and provides real-time, scientifically grounded decision support in a dynamic battlefield environment. The application of LLMs in electronic warfare strategy simulation brings training closer to actual combat needs, enhancing the adaptability and tactical expertise of combat personnel [29].
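To show how the evaluation quantities in Table 2 could be combined into a single comparison, the following sketch ranks tactical options by a simple score. The scoring rule (success probability offset by failure risk weighted by potential loss) is an illustrative assumption, not a method prescribed by this paper; the numbers are taken from Table 2.

```python
from dataclasses import dataclass

@dataclass
class Tactic:
    name: str
    success_prob: float    # probability the countermeasure succeeds
    failure_risk: float    # probability of an adverse outcome
    potential_loss: float  # relative loss if the adverse outcome occurs

# Evaluation data from Table 2 (percentages expressed as fractions).
tactics = [
    Tactic("Radar Jamming Countermeasure", 0.85, 0.10, 0.05),
    Tactic("Satellite Signal Deception",   0.90, 0.05, 0.10),
    Tactic("Communication Suppression",    0.80, 0.15, 0.05),
    Tactic("Information Warfare",          0.75, 0.20, 0.10),
]

def score(t: Tactic) -> float:
    """Higher is better: reward success, penalize expected loss.
    This weighting is an illustrative assumption, not a doctrinal rule."""
    return t.success_prob - t.failure_risk * t.potential_loss

for t in sorted(tactics, key=score, reverse=True):
    print(f"{t.name:32s} score={score(t):.4f}")
```

In a deployed system, the LLM would supply the per-tactic estimates and a report layer like this would aggregate them; the aggregation itself need not be done by the language model.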
5. Advantages and Disadvantages of Large Language Models in Electronic Warfare Training
The application of Large Language Models (LLMs) in electronic warfare training brings significant advantages thanks to their powerful natural language generation and understanding capabilities. First, LLMs effectively enhance the automation and intelligence of training. In traditional electronic warfare training, scenario construction and the generation of countermeasure strategies often rely on manual design, which is time-consuming and poorly suited to rapidly changing battlefield needs. By learning from vast amounts of battlefield data, LLMs can automatically generate complex and diverse combat scenarios and adjust dynamically to different electronic warfare tactics and environmental changes. From varying input data, LLMs can generate highly diverse and flexible tactical scenarios, providing trainees with different combat contexts and greatly improving training efficiency and quality.

Second, LLMs have a significant advantage in data generation. Electronic warfare training requires vast amounts of combat data, including information on the electronic devices of both sides, attack methods, and tactical strategies. Using LLMs, training systems can automatically generate combat intelligence, enemy device attack patterns, and countermeasure strategies based on real electronic warfare data, greatly reducing the cost of manually collecting and processing data. LLMs can also dynamically adjust the data they generate as training progresses, keeping the training aligned with actual combat needs and improving its realism and relevance.

However, the application of LLMs in electronic warfare training also has shortcomings. Although LLMs can generate highly diverse combat data and scenarios, the accuracy and authenticity of the generated content remain limited. Generation quality depends mainly on the quality of the training data, and electronic warfare data is often highly specialized and complex. While LLMs can simulate many possible electronic warfare scenarios, in situations requiring high precision the generated content may be biased or fail to fully meet tactical requirements. Training personnel therefore still need to validate and correct the generated data to ensure training effectiveness.

Additionally, while LLMs have shown significant success in many fields, their reasoning ability may be limited in highly complex and dynamic electronic warfare scenarios. Electronic warfare training requires in-depth analysis of the situation on both sides and multiple rounds of simulation and decision-making [30]. LLMs may not fully handle the complex interactions and details of long-duration simulations, especially where cross-domain knowledge integration and complex logical reasoning are required; in such cases the model's performance may fall short of expert human judgment.

Overall, LLMs offer notable advantages in enhancing automation, generating diverse data, and providing real-time strategic responses in electronic warfare training, but they also face challenges in data authenticity and reasoning capability. As the technology advances and more data is accumulated, the application of LLMs in electronic warfare training is expected to be further optimized and improved [31].
6. Conclusion
The application of Large Language Models (LLMs) in electronic warfare training offers significant advantages, particularly in enhancing training automation, generating diverse combat data, and simulating electronic warfare strategies. By automatically generating complex combat scenarios and adjusting training content in real-time, LLMs can greatly improve the efficiency and realism of training. However, despite the ability of LLMs to simulate various electronic warfare environments, the authenticity and accuracy of the generated content still have limitations. Moreover, in the context of complex tactical simulations and rapidly changing battlefield situations, the reasoning capabilities of LLMs may be constrained. As technology continues to develop, LLMs are expected to play a larger role in electronic warfare training, further enhancing their application effectiveness and precision.
References
1. Yang, Haowei, et al. "Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models." arXiv preprint arXiv:2412.18715, 2024.
2. Tan, Chaoyi, et al. "Generating Multimodal Images with GAN: Integrating Text, Image, and Style." arXiv preprint arXiv:2501.02167, 2025.
3. Yu, Q., Wang, S., and Tao, Y. "Enhancing Anti-Money Laundering Detection with Self-Attention Graph Neural Networks." SHS Web of Conferences, EDP Sciences, 2025, 213: 01016.
4. Wang, T., Cai, X., and Xu, Q. "Energy Market Price Forecasting and Financial Technology Risk Management Based on Generative AI." Applied and Computational Engineering, 2024, 100: 29-34.
5. Xiang, A., Huang, B., Guo, X., et al. "A Neural Matrix Decomposition Recommender System Model Based on the Multimodal Large Language Model." Proceedings of the 2024 7th International Conference on Machine Learning and Machine Intelligence (MLMI), 2024: 146-150.
6. Zhao, Y., Hu, B., and Wang, S. "Prediction of Brent Crude Oil Price Based on LSTM Model under the Background of Low-Carbon Transition." arXiv preprint arXiv:2409.12376, 2024.
7. Ge, Ge, et al. "A Review of the Effect of the Ketogenic Diet on Glycemic Control in Adults with Type 2 Diabetes." Precision Nutrition, 2025, 4(1): e00100.
8. Shi, X., Tao, Y., and Lin, S. C. "Deep Neural Network-Based Prediction of B-Cell Epitopes for SARS-CoV and SARS-CoV-2: Enhancing Vaccine Design through Machine Learning." arXiv preprint arXiv:2412.00109, 2024.
9. Min, Liu, et al. "Financial Prediction Using DeepFM: Loan Repayment with Attention and Hybrid Loss." 2024 5th International Conference on Machine Learning and Computer Application (ICMLCA), IEEE, 2024.
10. Xiang, A., Qi, Z., Wang, H., et al. "A Multimodal Fusion Network for Student Emotion Recognition Based on Transformer and Tensor Product." 2024 IEEE 2nd International Conference on Sensors, Electronics and Computer Engineering (ICSECE), IEEE, 2024: 1-4.
11. Shen, Jiajiang, Wu, Weiyan, and Xu, Qianyu. "Accurate Prediction of Temperature Indicators in Eastern China Using a Multi-Scale CNN-LSTM-Attention Model." arXiv preprint arXiv:2412.07997, 2024.
12. Yang, Haowei, et al. "Research on the Design of a Short Video Recommendation System Based on Multimodal Information and Differential Privacy." arXiv preprint arXiv:2504.08751, 2025.
13. Lv, Guangxin, et al. "Dynamic Covalent Bonds in Vitrimers Enable 1.0 W/(m K) Intrinsic Thermal Conductivity." Macromolecules, 2023, 56: 1554-1561.
14. Xiang, A., Zhang, J., Yang, Q., et al. "Research on Splicing Image Detection Algorithms Based on Natural Image Statistical Characteristics." arXiv preprint arXiv:2404.16296, 2024.
15. Qi, Zhen, et al. "Detecting and Classifying Defective Products in Images Using YOLO." arXiv preprint arXiv:2412.16935, 2024.
16. Yang, H., Lu, Q., Wang, Y., Liu, S., Zheng, J., and Xiang, A. "User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data." arXiv preprint arXiv:2505.06305, 2025.
17. Gao, Dawei, et al. "Synaptic Resistor Circuits Based on Al Oxide and Ti Silicide for Concurrent Learning and Signal Processing in Artificial Intelligence Systems." Advanced Materials, 2023, 35: 2210484.
18. Li, Xiangtian, et al. "Artistic Neural Style Transfer Algorithms with Activation Smoothing." arXiv preprint arXiv:2411.08014, 2024.
19. Huang, B., Lu, Q., Huang, S., et al. "Multi-Modal Clothing Recommendation Model Based on Large Model and VAE Enhancement." arXiv preprint arXiv:2410.02219, 2024.
20. Yin, Z., Hu, B., and Chen, S. "Predicting Employee Turnover in the Financial Company: A Comparative Study of CatBoost and XGBoost Models." Applied and Computational Engineering, 2024, 100: 86-92.
21. Yang, Haowei, et al. "Analysis of Financial Risk Behavior Prediction Using Deep Learning and Big Data Algorithms." arXiv preprint arXiv:2410.19394, 2024.
22. Guo, H., Zhang, Y., Chen, L., et al. "Research on Vehicle Detection Based on Improved YOLOv8 Network." arXiv preprint arXiv:2501.00300, 2024.
23. Tan, Chaoyi, et al. "Real-Time Video Target Tracking Algorithm Utilizing Convolutional Neural Networks (CNN)." 2024 4th International Conference on Electronic Information Engineering and Computer (EIECT), IEEE, 2024.
24. Diao, Su, et al. "Ventilator Pressure Prediction Using Recurrent Neural Network." arXiv preprint arXiv:2410.06552, 2024.
25. Wang, H., Zhang, G., Zhao, Y., Lai, F., Cui, W., Xue, J., ..., and Lin, Y. "RPF-ELD: Regional Prior Fusion Using Early and Late Distillation for Breast Cancer Recognition in Ultrasound Images." 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, 2024: 2605-2612.
26. Yang, Haowei, et al. "Enhanced Recommendation Combining Collaborative Filtering and Large Language Models." arXiv preprint arXiv:2412.18713, 2024.
27. Mo, K., Chu, L., Zhang, X., et al. "DRAL: Deep Reinforcement Adaptive Learning for Multi-UAVs Navigation in Unknown Indoor Environment." arXiv preprint arXiv:2409.03930, 2024.
28. Yin, J., Wu, X., and Liu, X. "Multi-Class Classification of Breast Cancer Gene Expression Using PCA and XGBoost." Theoretical and Natural Science, 2025, 76: 6-11.
29. Shih, K., Han, Y., and Tan, L. "Recommendation System in Advertising and Streaming Media: Unsupervised Data Enhancement Sequence Suggestions." arXiv preprint arXiv:2504.08740, 2025.
30. Tang, Xirui, et al. "Research on Heterogeneous Computation Resource Allocation Based on Data-Driven Method." 2024 6th International Conference on Data-Driven Optimization of Complex Systems (DOCS), IEEE, 2024.
31. Ziang, H., Zhang, J., and Li, L. "Framework for Lung CT Image Segmentation Based on UNet++." arXiv preprint arXiv:2501.02428, 2025.
Table 1. Data Types Generated by Large Language Models in Electronic Warfare Training.

| Application Scenario | Data Type Generated | Data Volume (per hour) | Generation Frequency |
| --- | --- | --- | --- |
| Simulating enemy radar system attack patterns | Radar system operating principles, attack strategies, jamming modes | 50 scenarios | Generated every hour |
| Generating electronic warfare intelligence and battlefield dynamics | Deployment information, tactical configurations, communication content | 200 pieces of intelligence | Updated every hour |
| Automatically generating combat scenario analysis reports | Tactical evaluation, situation analysis, countermeasure strategies | 5 reports | Generated every hour |
Table 2. Tactical Strategy Evaluation in Electronic Warfare Strategy Simulation.

| Tactical Choice | Enemy Attack Strategy | Our Countermeasures | Success Probability | Failure Risk | Potential Losses |
| --- | --- | --- | --- | --- | --- |
| Radar Jamming Countermeasure | Enemy jamming radar signals to locate and attack | Use frequency hopping technology to avoid radar jamming | 85% | 10% | 5% |
| Satellite Signal Deception | Enemy misguiding our operations by deceiving satellite navigation systems | Activate backup navigation systems and implement error correction | 90% | 5% | 10% |
| Communication Suppression | Enemy suppressing communication links, causing loss of command capability | Use encrypted communication and shortwave radio to restore contact | 80% | 15% | 5% |
| Information Warfare & Countermeasures | Enemy paralyzing our electronic systems via cyber attacks | Strengthen firewalls and anti-penetration strategies | 75% | 20% | 10% |