Preprint
Review

This version is not peer-reviewed.

Advancements in Physics-Informed Neural Networks for Solving Maxwell’s Equations: A Systematic Literature Review

Submitted: 08 April 2026
Posted: 10 April 2026


Abstract
This systematic literature review (SLR) investigates the use of physics-informed neural networks (PINNs) in electromagnetics by examining peer-reviewed journal articles and conference papers. By integrating governing physical laws into the loss function of a neural network (NN), PINNs offer a promising mesh-free method in scientific computing. The reviewed records were retrieved from the databases Scopus, Web of Science, and IEEE Xplore, published between 2020 and 2025. The initial dataset comprised 500 records, of which 292 unique publications were identified. These were screened, yielding a final set of 139 publications that met predefined inclusion criteria. The analysis reveals a growth in research activity, with a pronounced increase from 2022 onward. The reviewed literature predominantly addresses electrodynamic problems, employs feedforward neural network architectures, and adopts unsupervised, physics-driven training paradigms. Two-dimensional problem formulations are considerably more prevalent than three-dimensional formulations, and advanced architectures remain limited. A contingency table analysis reveals the associations between the extracted characteristics. The choice of medium is strongly dependent on the physics regime, and architectural diversity increases with spatial dimensionality. The review’s conclusions identify potential priorities for future work: extending three-dimensional formulations to the static and quasistatic electromagnetic regimes, broader architectural experimentation particularly in lower-dimensional settings, and increased use of semi-supervised learning in static electromagnetic applications.

1. Introduction

The accurate and efficient solution of Maxwell’s equations is a central challenge in computational electromagnetics (CEM), underpinning a wide range of scientific and engineering applications, including electric motor design, sensor implementation, antenna design, electromagnetic compatibility analysis, optics, microwave engineering, and high-frequency technologies. Classical numerical methods, including the finite element method, the finite difference time domain method, and the boundary element method, have been successfully applied for decades. However, these approaches often entail high computational costs, intricate meshing procedures, and limited flexibility in scenarios involving inverse problems, multi-scale phenomena, or sparse measurement data. The application of deep learning (DL) in electromagnetics has also been explored; however, the majority of approaches need large datasets for training, which are often challenging to acquire.
In recent years, physics-informed neural networks (PINNs) have emerged as an alternative for modeling physical systems by embedding the governing equations, e.g., partial differential equations (PDEs), directly into the training process of neural networks (NNs). The pioneering work in utilizing NNs to solve ordinary and partial differential equations was conducted by Lagaris et al. [1], while Raissi et al. [2,3,4] subsequently rediscovered the concept and expanded its application to a more extensive range of problems, e.g., solving the Schrödinger equation and the Allen–Cahn equation. They also established the name PINN and, therefore, catalyzed its broader use. Rather than relying exclusively on labeled data, PINNs incorporate the residuals of the underlying differential equations, as well as boundary and initial conditions (BCs/ICs), into a composed loss function. This enables the approximation of physically consistent solutions even in settings with limited or absent data, a feature that has attracted significant attention across scientific computing domains.
Since their introduction, PINNs have been applied to a broad range of problems, in some cases closely following the definition proposed by Raissi et al. [2] and in other cases interpreting PINNs more loosely. This includes forward and inverse problems in fluid dynamics, heat transfer, solid mechanics, and materials science [5,6,7,8].
In fluid dynamics, PINNs have been applied to predict turbulent flows under various conditions. Pioch et al. [9] assessed four RANS-based turbulence models for a backward-facing step using limited labeled data. Harmening et al. [10] analyzed network architecture effects on 2D cylinder flows at high Reynolds numbers without training data. Harmening et al. [11] also developed a surrogate model for airfoil flows at variable angles of attack using limited simulation data. Additional applications of PINNs in fluid dynamics are reviewed in [12,13,14,15,16].
Cai et al. [17] demonstrated the effectiveness of PINNs in solving forward and inverse heat transfer problems, including forced and mixed convection with unknown boundary conditions and two-phase Stefan problems, by incorporating sparse experimental data. Zobeiry et al. [18] successfully applied PINNs to conductive and convective heat transfer in manufacturing and engineering and validated the predictions against finite element results for 1D and 2D cases. For inverse heat transfer problems, Oommen et al. [19] and Billah et al. [20] demonstrated the ability of PINNs to estimate unknown thermal properties and boundary conditions from sparse data. Furthermore, Karthik et al. [21] utilized PINNs to solve the nondimensional thermal equations for analyzing heat transport in partially wetted wavy fins, considering convective and radiative effects and temperature-dependent thermal conductivity, while Kumar et al. [22] have shown the use of PINNs for solving unsteady heat dissipation in a radiative-convective concave fin with periodic BCs, effectively capturing the physics of the problem and predicting temperature profiles beyond the trained region.
Regarding the application of PINNs in solid mechanics, Kapoor et al. [23] have shown that they can solve forward and inverse problems for Euler–Bernoulli and Timoshenko beam systems, including double-beam structures, by using nondimensional equations to improve computational efficiency and robustness against noisy data. An overview of the application of PINNs in experimental solid mechanics is given by Jin et al. [24]. Henkes et al. [25] applied PINNs to continuum micromechanics, solving boundary value problems for inhomogeneous materials, e.g., fiber-reinforced composites. Zhang et al. [26] developed a hybrid framework combining the finite element method and PINNs to solve elastic and elastoplastic boundary value problems, and Habib and Yildirim [27] proposed a PINN model for designing multi-stage friction pendulum bearings.
The broader application to electromagnetic problems governed by Maxwell’s equations is a comparatively recent development, yet it has gained significant traction due to the potential advantages it offers, such as mesh-free representations and the ability to integrate sparse measurement data combined with regularizing physics terms and physical constraints. Despite the growing number of publications investigating PINNs for electromagnetics, the existing literature remains fragmented. The reported results vary widely in terms of scope and methodological choices, and a comprehensive synthesis of current research trends, limitations, and open challenges is lacking.
To address these gaps, peer-reviewed journal articles and conference papers published between 2020 and 2025 were identified in a systematic literature review (SLR). The publications were screened, analyzed, and categorized according to bibliographic characteristics, physics regimes, spatial dimensionality, network architectures, medium, and learning paradigms. The extracted information is used to answer a set of research questions that collectively characterize the current state of the field.
This review contributes a quantitative overview of publication activity and research growth in employing PINNs in electromagnetics, and systematically classifies the existing research according to methodological and problem-dependent criteria, thereby facilitating structured comparison to reveal underexplored areas. Together, these contributions aim to guide future research in this evolving field.

2. Background

2.1. Physics-Informed Neural Networks

PINNs combine DL with prior physical knowledge about a system by incorporating the governing differential equations into a composed loss function [4]. As illustrated in Figure 1, in its initial form, the core of a PINN is a fully connected feedforward neural network.
As many systems can be described by PDEs, let $\Omega \subset \mathbb{R}^d$ and $T > 0$, and consider the following first-order-in-time PDE in its general form:
$$\partial_t u(t, \mathbf{x}) + \mathcal{N}[u](t, \mathbf{x}) = 0, \qquad \mathbf{x} \in \Omega, \; t \in [0, T], \tag{1}$$
where $u : [0, T] \times \Omega \to \mathbb{R}^m$ denotes the unknown field, $\mathbf{x}$ the spatial coordinate, $t$ time, and $\mathcal{N}$ a potentially nonlinear differential operator acting on $u$. The problem is supplemented with ICs and with BCs on $\partial\Omega$ (e.g., Dirichlet, Neumann, or Robin BCs), as appropriate. The PINN’s objective is to approximate the exact solution $u(t, \mathbf{x})$ with the network output $u_\theta(t, \mathbf{x})$, where $\theta$ collects the trainable weights and biases. During training, a composed loss function $\mathcal{L}$ is minimized that incorporates physics-driven and optional data-driven terms, whose influence on the overall loss can be adjusted individually by weighting factors $\lambda$:
$$\mathcal{L}(\theta) = \lambda_D \mathcal{L}_D(\theta) + \lambda_F \mathcal{L}_F(\theta) + \lambda_B \mathcal{L}_B(\theta) + \lambda_I \mathcal{L}_I(\theta). \tag{2}$$
Using the mean squared error, the individual loss terms become:
$$\mathcal{L}_D(\theta) = \frac{1}{N_D} \sum_{i=1}^{N_D} \left| u_\theta\left(t_i^D, \mathbf{x}_i^D\right) - u_i \right|^2, \tag{3}$$
$$\mathcal{L}_F(\theta) = \frac{1}{N_F} \sum_{i=1}^{N_F} \left| \partial_t u_\theta\left(t_i^F, \mathbf{x}_i^F\right) + \mathcal{N}[u_\theta]\left(t_i^F, \mathbf{x}_i^F\right) \right|^2, \tag{4}$$
$$\mathcal{L}_B(\theta) = \frac{1}{N_B} \sum_{i=1}^{N_B} \left| \mathcal{B}[u_\theta]\left(t_i^B, \mathbf{x}_i^B\right) - g\left(t_i^B, \mathbf{x}_i^B\right) \right|^2, \tag{5}$$
$$\mathcal{L}_I(\theta) = \frac{1}{N_I} \sum_{i=1}^{N_I} \left| u_\theta\left(0, \mathbf{x}_i^I\right) - u\left(0, \mathbf{x}_i^I\right) \right|^2, \tag{6}$$
where $\mathcal{L}_D$ is the data loss with $\{(t_i^D, \mathbf{x}_i^D, u_i)\}_{i=1}^{N_D} \subset [0, T] \times \Omega \times \mathbb{R}^m$ as data sample points and $u_i$ as ground-truth data; $\mathcal{L}_F$ the loss of the PDE with $\{(t_i^F, \mathbf{x}_i^F)\}_{i=1}^{N_F} \subset (0, T] \times \Omega$ as collocation points; $\mathcal{L}_B$ the BC loss with $\mathcal{B}$ as a boundary operator, $g : (0, T] \times \partial\Omega \to \mathbb{R}^m$ as the target value on the boundary, and $\{(t_i^B, \mathbf{x}_i^B)\}_{i=1}^{N_B} \subset (0, T] \times \partial\Omega$ as scattered boundary points; and $\mathcal{L}_I$ the IC loss with $\{(0, \mathbf{x}_i^I)\}_{i=1}^{N_I} \subset \{0\} \times \Omega$ as initial points.
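As an illustration of how these terms combine, the following minimal NumPy sketch assembles the composed loss from precomputed residual arrays. The residual arrays and weighting factors are placeholders for whatever a concrete PINN implementation produces; this is not taken from any reviewed implementation.

```python
import numpy as np

def mse(residuals):
    # Mean squared error over the sample/collocation points
    return float(np.mean(np.abs(np.asarray(residuals)) ** 2))

def composed_loss(res_data, res_pde, res_bc, res_ic,
                  lam_D=1.0, lam_F=1.0, lam_B=1.0, lam_I=1.0):
    # Weighted sum of the data, PDE, boundary, and initial-condition terms
    return (lam_D * mse(res_data) + lam_F * mse(res_pde)
            + lam_B * mse(res_bc) + lam_I * mse(res_ic))
```

Increasing, e.g., `lam_B` shifts the optimizer’s attention toward satisfying the boundary conditions, which is a common tuning step in practice.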
During the training of the PINN, the input data is processed by the NN in a forward pass, transforming it layer by layer according to the network’s parameters and producing the output $u_\theta$. Using automatic differentiation (AD), the derivatives appearing in the PDE, e.g., the time derivative $\partial_t u_\theta$ of the NN’s output $u_\theta$, are obtained with respect to the inputs by:
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial a^{(n)}} \cdot \frac{\partial a^{(n)}}{\partial z^{(n)}} \cdot \frac{\partial z^{(n)}}{\partial a^{(n-1)}} \cdot \frac{\partial a^{(n-1)}}{\partial z^{(n-1)}} \cdots \frac{\partial z^{(1)}}{\partial x}, \tag{7}$$
where $z^{(l)}$ denotes the pre-activation value and $a^{(l)}$ the post-activation value at layer $l$, with $a^{(0)} = x$ as the network input and $n$ as the number of layers.
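The layer-by-layer chain-rule product can be checked numerically for a tiny two-layer scalar network; the weights below are arbitrary illustrative values, and the analytic derivative is compared against a central finite difference:

```python
import numpy as np

# Tiny scalar network u(x) = w2 * tanh(w1 * x + b1) + b2 (illustrative weights)
w1, b1, w2, b2 = 0.7, -0.2, 1.3, 0.05

def u(x):
    z1 = w1 * x + b1      # pre-activation z^(1)
    a1 = np.tanh(z1)      # post-activation a^(1)
    return w2 * a1 + b2   # linear output layer

def du_dx(x):
    # Chain-rule product: du/da^(1) * da^(1)/dz^(1) * dz^(1)/dx
    z1 = w1 * x + b1
    return w2 * (1.0 - np.tanh(z1) ** 2) * w1

x0, h = 0.4, 1e-6
fd = (u(x0 + h) - u(x0 - h)) / (2.0 * h)  # central finite difference
```

Frameworks such as those used in the reviewed publications compute exactly these products automatically, without hand-coded derivatives.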
After a forward pass, the gradients required for the optimization of the composed loss function are efficiently computed using backpropagation, a type of reverse-mode AD based on the chain rule of calculus [28]. The gradients with respect to the NN’s parameters can be derived by:
$$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{l=1}^{n} \frac{\partial \mathcal{L}}{\partial z^{(l)}} \cdot \frac{\partial z^{(l)}}{\partial \theta}, \tag{8}$$
where $z^{(l)}$ again denotes the pre-activation value at layer $l$ and $\theta$ the parameter vector across all layers.
Finally, an optimizer adjusts the parameters of the NN according to the previously computed gradients of the loss function. Common optimizers include Adam, L-BFGS, or a combination of both. This process is repeated iteratively until a stopping criterion is reached, e.g., the completion of a set number of epochs or the attainment of a predefined error threshold.
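The loop of forward pass, gradient computation, and parameter update can be reduced to a deliberately trivial example: a linear model $u_\theta(x) = a + bx$ trained by plain gradient descent to satisfy the ODE $u' = 2$ with the IC $u(0) = 1$. Since $u' = b$ everywhere, the physics loss collapses to $(b - 2)^2$ and the IC loss to $(a - 1)^2$; model, target ODE, learning rate, and epoch count are illustrative choices only, not drawn from the reviewed literature.

```python
# Trainable parameters theta = (a, b) of the model u(x) = a + b * x
a, b = 0.0, 0.0
lr = 0.1                      # learning rate of the plain gradient-descent update

for epoch in range(500):      # stopping criterion: fixed number of epochs
    grad_a = 2.0 * (a - 1.0)  # gradient of the IC loss (a - 1)^2
    grad_b = 2.0 * (b - 2.0)  # gradient of the physics loss (b - 2)^2
    a -= lr * grad_a          # parameter update step
    b -= lr * grad_b
```

In practice, Adam or L-BFGS replaces the hand-coded update, and the gradients come from backpropagation rather than closed-form expressions.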

2.2. Electromagnetism and Maxwell’s equations

The branch of physics known as electromagnetism is governed by the field theory of classical electrodynamics. This theory pertains to electromagnetic phenomena, which are described by electromagnetic fields, electric charges, and currents as their fundamental sources, as well as their propagation in space and time and their interaction with matter. This is applicable to all phenomena that do not fall within the realm of quantum electrodynamics, encompassing both dynamic and static scenarios. The electromagnetic field is commonly regarded as the combination of the distinct electric and magnetic fields, a simplification that is only reasonable in static or quasistatic cases. In general, both fields should be viewed as one unified, coherent field.
The governing equations in electromagnetics are Maxwell’s equations, a set of first-order linear PDEs that were named after James Clerk Maxwell, who first combined the theories of electricity and magnetism in 1865 and thereby established the consistent field theory of electrodynamics [29]. The contemporary form was derived by the British mathematician and electrical engineer Oliver Heaviside between 1885 and 1887 and is given in its macroscopic form by Equations (9) to (12):
$$\nabla \cdot \mathbf{D} = \rho_f, \tag{9}$$
$$\nabla \cdot \mathbf{B} = 0, \tag{10}$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \tag{11}$$
$$\nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t}, \tag{12}$$
Here, $\mathbf{D}$ is the electric flux density, $\mathbf{B}$ is the magnetic flux density, $\mathbf{E}$ is the electric field strength, and $\mathbf{H}$ is the magnetic field strength. The macroscopic equations are applied in larger-scale scenarios where matter is present and modeling charges and currents on an atomic level is not suitable. They are therefore also referred to as Maxwell’s equations in matter. The macroscopic equations incorporate bound charges and bound currents into the electric flux density $\mathbf{D}$ and the magnetic field strength $\mathbf{H}$, and consequently, only free charges $\rho_f$ and free currents $\mathbf{J}_f$ appear in Equations (9) and (12).
The aforementioned implies that the macroscopic equations are connected to the microscopic equations, i.e., the electric flux density to the electric field strength, and the magnetic flux density to the magnetic field strength, by the following constitutive relations:
$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}, \qquad \mathbf{H} = \frac{1}{\mu_0} \mathbf{B} - \mathbf{M}, \tag{13}$$
where $\varepsilon_0$ is the permittivity of free space, $\mu_0$ the permeability of free space, and $\mathbf{P}$ and $\mathbf{M}$ the polarization and magnetization fields, respectively.
The matter present is often assumed to be homogeneous and linear in its behavior, and, therefore, the constitutive relations can be written as:
$$\mathbf{D} = \varepsilon \mathbf{E}, \qquad \mathbf{H} = \frac{1}{\mu} \mathbf{B}, \tag{14}$$
with $\varepsilon = \varepsilon_0 \varepsilon_r$ and $\mu = \mu_0 \mu_r$, where $\varepsilon_r$ is the relative permittivity and $\mu_r$ the relative permeability, both of which can be constants, functions of time and position, or functions of $\mathbf{E}$ and $\mathbf{B}$.
Furthermore, the relationship between the free current and the electric field has to be considered when analyzing conductive media, by:
$$\mathbf{J}_f = \sigma \mathbf{E}, \tag{15}$$
with the conductivity σ as the proportionality constant. This equation is generally known as Ohm’s law.
In electrostatics, the absence of magnetic and time-varying fields results in a simplification of Maxwell’s equations. The right-hand side of Equation (11) vanishes, so $\nabla \times \mathbf{E} = 0$ and the electric field is conservative. Together with Equation (9), this describes all electrostatic problems. As a conservative field, $\mathbf{E}$ can be expressed as the (negative) gradient of a scalar function, allowing the introduction of the electric potential $\varphi$:
$$\mathbf{E} = -\nabla \varphi. \tag{16}$$
Together with Equations (9) and (14), electrostatic problems are therefore described by:
$$\nabla \cdot (\varepsilon \nabla \varphi) = -\rho_f, \tag{17}$$
which is Poisson’s equation.
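In one dimension with constant permittivity, Poisson’s equation reduces to $\varphi'' = -\rho_f/\varepsilon$; this is exactly the residual a PINN’s physics loss would penalize at its collocation points. The following short sanity check verifies a closed-form solution numerically (the values of $\rho_f$ and $\varepsilon$ are arbitrary illustrative choices):

```python
import numpy as np

rho_f, eps = 2.0, 0.5            # illustrative charge density and permittivity

def phi(x):
    # Closed-form solution of phi'' = -rho_f / eps (up to an affine gauge term)
    return -rho_f * x ** 2 / (2.0 * eps)

x = np.linspace(-1.0, 1.0, 101)  # collocation points
h = x[1] - x[0]
# Central second difference approximates phi'' at the interior points
phi_xx = (phi(x[2:]) - 2.0 * phi(x[1:-1]) + phi(x[:-2])) / h ** 2
residual = phi_xx + rho_f / eps  # Poisson residual; vanishes for the exact solution
```

A PINN replaces the finite-difference stencil with AD derivatives of the network output, but the residual being driven to zero is the same.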
The subfield of magnetostatics is distinguished by the absence of electric fields and a time-independent magnetic field. This leaves Equations (10) and (12), which, when neglecting the displacement current ($\partial \mathbf{D}/\partial t = 0$), describe the magnetic field in the static case. If there is also no free current $\mathbf{J}_f$ present, the right-hand side of Equation (12) vanishes, indicating that $\mathbf{H}$ is irrotational, and, analogous to electrostatics, the magnetic scalar potential $\psi$ can be defined [30]:
$$\mathbf{H} = -\nabla \psi. \tag{18}$$
Substituting (18) and (14) into (10) gives an elliptic PDE:
$$\nabla \cdot (\mu \nabla \psi) = 0, \tag{19}$$
and, further assuming that $\mu$ is constant, Laplace’s equation is obtained:
$$\Delta \psi = 0. \tag{20}$$
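As a quick numerical illustration, the harmonic function $\psi(x, y) = x^2 - y^2$ satisfies Laplace’s equation, which a five-point finite-difference Laplacian confirms (grid and test function are illustrative choices):

```python
import numpy as np

def psi(x, y):
    # Harmonic: psi_xx + psi_yy = 2 - 2 = 0
    return x ** 2 - y ** 2

h = 0.05
xs = np.arange(-1.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
P = psi(X, Y)

# Five-point stencil for the Laplacian on the interior grid points
lap = (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2]
       - 4.0 * P[1:-1, 1:-1]) / h ** 2
```

A PINN solving Equation (20) would enforce the same vanishing Laplacian, evaluated via AD at scattered collocation points rather than on a grid.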
If static free currents $\mathbf{J}_f$ are present, the magnetic vector potential $\mathbf{A}$ can be introduced instead, because Equation (10) states that the divergence of the magnetic flux density is always 0 [30]:
$$\mathbf{B} = \nabla \times \mathbf{A}. \tag{21}$$
By using the vector identity:
$$\nabla \times (\nabla \times \mathbf{A}) = \nabla(\nabla \cdot \mathbf{A}) - \Delta \mathbf{A}, \tag{22}$$
the Coulomb gauge:
$$\nabla \cdot \mathbf{A} = 0, \tag{23}$$
and substituting Equations (21) and (14) into Equation (12), the curl-curl formulation of magnetostatics is obtained:
$$\nabla \times \left( \frac{1}{\mu} \nabla \times \mathbf{A} \right) = \mathbf{J}_f, \tag{24}$$
which, for constant $\mu$, reduces via Equations (22) and (23) to the vector Poisson equation $-\Delta \mathbf{A} = \mu \mathbf{J}_f$.
In the special cases of electroquasistatics (EQS) and magnetoquasistatics (MQS), the fields either exhibit such slow variation over time, or the magnitude of one of the fields is so small that a full coupling of Maxwell’s equations is not necessary.
Charges and currents might be time-dependent in EQS, but inductive effects are negligible, and capacitive as well as conductive effects dominate. Therefore, the electric field E is approximately curl-free, and Equation (11) becomes:
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \approx 0, \tag{25}$$
and the electric potential from Equation (16) can be used.
In MQS, inductive effects dominate and the displacement current in Equation (12) is negligible ($\partial \mathbf{D}/\partial t \approx 0$), leading to:
$$\nabla \times \mathbf{H} \approx \mathbf{J}_f. \tag{26}$$

3. Methodology

This study employs an SLR to explore the state of research regarding the application of PINNs in the realm of electromagnetics, i.e., solving Maxwell’s equations.

3.1. Systematic Literature Review

An SLR maps, identifies, and categorizes relevant literature on a research topic. It is an extension of a classical literature review, where the objective is reached in a systematic, well-documented, and comprehensible way. The key steps of an SLR are planning, querying the databases for records, screening the publications, extracting relevant information from the publications, and reporting the results, including answering the research questions.
In the following, records are defined as database entries representing a publication indexed in a bibliographic database, whereas publications are defined as unique documents containing information on a topic, e.g., articles, conference papers, or books. Consequently, the set of records retrieved from all databases used may contain duplicates, for example, if a publication is indexed in more than one database, while publications are treated as unique entities that occur only once in the SLR.
During planning, the search strategy, eligibility criteria, research questions, data to extract, and team members were established. Querying the databases then produced the records needed for further processing. Before screening the results, duplicates were identified and excluded, leaving the unique publications. The eligibility criteria were verified, and the results that did not align with the established criteria were excluded in the screening step. In the penultimate step, data was extracted, and the publications were categorized into predefined groups that represent the information needed to answer the research questions. Finally, the results of the SLR were visualized and reported, and the research questions were answered.
The study encompasses several key steps, detailed below.

3.2. Research Questions

To obtain meaningful and focused information from the SLR on the state of research regarding the use of PINNs to solve Maxwell’s equations, a set of research questions was developed. These guide the reviewers through the course of the review process, and establishing them is a fundamental task for deriving high-quality information from an SLR [31]. The research questions for this study are shown in Table 1.

3.3. Search Strategy

A comprehensive search strategy was developed to identify relevant studies on PINNs applied to the realm of electromagnetics. The search was conducted using the three bibliographic databases:
  • Scopus;
  • Web of Science;
  • IEEE Xplore.
For Web of Science, the option "All Databases" was used, which queries all databases to which an institution is subscribed.
The search query was designed to capture publications that solve electromagnetic problems utilizing PINNs. Conceptually related and synonymous terms were included in the search string. This led to the following concept for the search string, which was subsequently adapted to meet the syntax of each database:
("physics-informed neural networks" OR "physics-informed neural network" OR "PINNs" OR "PINN") AND ("Maxwell’s equations" OR "electromagnetics" OR "electromagnetic" OR "electrodynamics" OR "electrodynamic")
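For illustration, such a concept is then mapped onto each database’s field syntax; a hypothetical Scopus-style rendering restricted to the title, abstract, and keyword fields could look as follows (this is an assumed adaptation for illustration, not the exact string used in the review):

```text
TITLE-ABS-KEY(
  ("physics-informed neural network*" OR "PINN" OR "PINNs")
  AND
  ("Maxwell's equations" OR electromagnetic* OR electrodynamic*)
)
```

Web of Science and IEEE Xplore use different field tags for the same restriction, which is why the conceptual string is adapted per database rather than reused verbatim.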
Additional filters restricted the search to English-language publications from 2020 up to the query date, with search terms applied to the title, abstract, and keyword fields. Due to the recent emergence of PINNs and their use in electromagnetics, including earlier publication dates did not significantly increase the number of records. The databases were last queried on November 21, 2025. The dataset retrieved is openly available from Zenodo [32].

3.4. Eligibility Criteria

To ensure the collection of publications is both methodologically sound and relevant to the research questions, a set of eligibility criteria was established. These were verified during different stages of the process, whenever the necessary information could be extracted.
Publications were included in further processing if they met the inclusion criteria shown in Table 2.
The inclusion criteria I3 to I5 are intended to ensure that a publication provides sufficient details for the subsequent data extraction. If insufficient or no information is given on the architecture, application domain, learning paradigm, or the considered problem, the publication was excluded, because a reliable categorization or comparison is not feasible and the underlying study is not reproducible. While the criteria I1 and I2 can be verified using bibliographic information and by screening the title and abstract, criteria I3 through I5 generally require full-text screening.
To further ensure a rigorous selection, exclusion criteria were defined as shown in Table 3. Publications meeting at least one exclusion criterion were excluded from the review during the screening phase. Since a single publication can meet several criteria, the total number of exclusion criteria met may be higher than the number of publications excluded.

3.5. Study Selection

The first step of the study selection stage was to identify duplicates in the obtained records, to ensure that every publication occurs once in the following data extraction stage. The next steps in the process involved two independent reviewers who screened the title and abstract of the publications and examined them for the eligibility criteria. If this was not sufficient, full-text screenings were conducted. Any disagreements between the reviewers were resolved through consensus or consultation with a third reviewer.
The initial search yielded a total of 500 records. After removing duplicates, 292 unique publications remained for title and abstract screening. A flow diagram based on the PRISMA 2020 statement [33] is shown in Figure 2. It illustrates the screening process and the number of records and publications included at each stage.
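Duplicate detection of this kind is typically automated by normalizing a stable identifier such as the DOI before comparison. A minimal sketch with invented records (field names and values are hypothetical, not taken from the actual dataset):

```python
def deduplicate(records):
    # Keep the first occurrence of each DOI (compared case-insensitively);
    # records without a DOI are kept and left for manual screening.
    seen = set()
    unique = []
    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()
        if doi and doi in seen:
            continue
        if doi:
            seen.add(doi)
        unique.append(rec)
    return unique

# Hypothetical records: the second entry duplicates the first via its DOI
records = [
    {"title": "A PINN for 2D scattering", "doi": "10.1000/demo.123"},
    {"title": "A PINN for 2D Scattering", "doi": "10.1000/DEMO.123"},
    {"title": "Conference paper without DOI", "doi": ""},
]
unique_records = deduplicate(records)
```

In practice, title and author matching supplements the DOI comparison, since not every record carries a DOI.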

3.6. Data Extraction

Data extraction was performed by the same reviewers involved in the study selection, using a standardized form, capturing the relevant information from each included publication in a spreadsheet. The extracted characteristics and the corresponding categories are shown in Table 4. These can be classified into three primary groups: bibliographic, problem-specific, and model-specific characteristics. The bibliographic characteristics were directly obtained from the results received from the databases, while the problem- and model-specific characteristics were obtained by the reviewers from each publication. The problem- and model-specific characteristics are as follows:
  • Physics Regime: The subfield of electromagnetics addressed by a publication is recorded in the characteristic Physics Regime. To answer the research questions, five categories were introduced, namely Magnetostatics, Electrostatics, Magnetoquasistatics, Electroquasistatics, and Electrodynamics. Magneto- and Electrostatics deal with the purely static cases, while the quasistatic assumptions extend these as described in Section 2.2. The category Electrodynamics encompasses the dynamic regime governed by the time-dependent Maxwell’s equations, where the displacement current as well as the induction terms are retained and wave propagation is essential. Here, the term Electrodynamics does not refer to the full scope of the field theory of classical electrodynamics, as it excludes the static regime.
  • Dimensionality: The characteristic Dimensionality delineates the spatial dimensions of the addressed problem with the categories 1D, 2D, and 3D. In this context, time is not considered a separate dimension; rather, it is inherently incorporated within the categories that define the physics regime. A category for 0D, or lumped-element models, is not included, as such problems do not resolve spatial field variations, and the system is therefore represented by spatially aggregated quantities (e.g., charge, current, flux, energy) or an equivalent circuit model. Those publications were excluded from further processing because they do not deal with Maxwell’s field equations. Problems with 1D, 2D, and 3D dimensionality, in contrast, explicitly resolve the spatial dimensions and consider the underlying field equations.
  • Medium: The absence or presence of spatially varying material properties in the computational domain is documented in the characteristic Medium. The category Homogeneous applies when there is no spatial variation of the material properties, e.g., permittivity, permeability, polarization, or magnetization, in the domain, while the category Inhomogeneous applies when these material properties vary with position. Continuously and discretely varying properties were aggregated under the latter without further distinction. Boundary conditions that emulate material boundaries were not treated as a variation in material properties that would classify a problem as Inhomogeneous.
  • Network Architecture: The high-level structural design, the organization of computation, and the flow of information through an NN are given by its architecture. It specifies the types of computational units and layers used (e.g., fully connected layers, convolutional layers, attention heads, or recurrent units) and how they are connected (e.g., forward connections, recurrent connections, parallel branches, or residual connections). The architectures on which the PINNs in the selected publications are based were recorded in the characteristic Network Architecture. Where suitable, architectures were aggregated: U-Nets and ResNets were categorized as CNNs, and LSTMs as RNNs. Architectures that differ distinctly from the generally established ones were combined under Other.
  • Learning Paradigm: The characteristic Learning Paradigm describes the type and setting of learning, which governs how data is utilized in the learning process of a DL model. The categories used are supervised learning, unsupervised learning, and semi-supervised learning. In contrast to the predominant use of these terms in DL, a more specific definition suited to PINNs is applied here. Supervised learning means the utilization of labeled data as ground truth, without the incorporation of a physics term within the loss function; consequently, models categorized as employing supervised learning are, by this definition, not PINNs. This deviates from the interpretation proposed by Raissi et al. [2], who employ a more general definition in the sense of DL, in which the physics loss is conceptualized as analogous to the use of labeled data, resulting in the categorization of PINNs as a form of supervised learning. In this study, semi-supervised learning is defined as utilizing labeled data as well as physics loss terms, while purely unsupervised learning takes no labeled data as input, relying entirely on the physics loss. Publications to which only the category Supervised Learning applied were not included in further processing.
If a publication covered more than one category, it was assigned to each of them.

4. Results

4.1. Bibliometric Analysis

The initial search yielded a total of 500 records retrieved from the three scientific databases: 133 records from IEEE Xplore, 220 from Scopus, and 147 from Web of Science (WoS) (Figure 3(a)). After the consolidation of records across databases, duplicate detection and removal were performed, resulting in 292 unique publications and 208 duplicates, corresponding to 58.4 % unique records and 41.6 % duplicates within the initial dataset (Figure 3(b)). The relatively high proportion of duplicates is indicative of the substantial overlap in database coverage for the research domain under investigation.
An upward trend in the annual number of publications before applying the eligibility criteria over the period from 2020 to 2025 is shown in Figure 4(a). Only 3 publications were identified for 2020 and 6 for 2021, followed by an increase to 27 publications in 2022 and 46 publications in 2023. A pronounced growth is observed in 2024, with 96 publications, while 114 publications were recorded for 2025 up to November 21.
Figure 5(a) presents the distribution of the unique records by document type prior to final filtering. Journal articles constitute the largest share, with 178 publications, followed by conference papers, with 93 publications. Other document types, including books, conference reviews, theses, and review articles, appear only marginally, with counts ranging from 2 to 11 publications. This distribution highlights the dominance of journal articles and conference papers in the dataset.
For the final set of publications considered to answer the research questions, an additional filtering step was applied using the eligibility criteria defined in Section 3.4. After this filtering, the dataset comprised 139 publications in total (see Appendix A for the full list of reviewed publications), including 81 journal articles and 58 conference papers (Figure 5(b)).
As illustrated in Figure 4(b), the remaining dataset contains 2 publications each for 2020 and 2021, reaching a peak of 54 publications in 2024, with 48 in 2025 as of November 21.
This refined dataset forms the basis for the subsequent analysis reported in the following sections.

4.2. RQ1: How Extensively Are PINNs Applied Within Electromagnetics?

To assess the adoption and growth of PINNs in electromagnetics, and thereby answer the first research question, the publication trends over time were analyzed. The annual count of publications that utilize PINNs for CEM increased from 2 in 2020 and 2021 to 54 in 2024, indicating limited early adoption for CEM after PINNs were established by Raissi et al. [2,3,4] in 2017 and 2019, and significantly increased adoption in the following years, as can be seen in Figure 4(b). This is corroborated by the cumulative publication trend, revealing an exponential growth pattern from 2020 to 2024 (Figure 6). Starting from 2 publications in 2020, the cumulative total reached 139 by November 21, 2025, with the steepest increase observed between 2023 and 2025. As the most recent search was conducted prior to the end of 2025, the results shown for this year remain provisional. However, it can be reasonably anticipated that the number of publications at the end of 2025 will equal or exceed that of 2024.

4.3. RQ2: Which Subfields of Electromagnetics Are PINNs Applied to?

The subfield of electromagnetics addressed in the extracted publications was reviewed to provide a detailed analysis of the application of PINNs in this domain. PINNs are used for modeling various problems in electromagnetics, ranging from static through quasistatic to dynamic problems, including antenna design, scattering, optics, and general high-frequency applications.
Most of the research deploying PINNs focuses on the electrodynamic regime, with 91 publications extracted from the dataset. Magnetostatic and magnetoquasistatic problems are investigated in 22 publications. In addition, 21 publications address electrostatics, while none considers electroquasistatics (Figure 7(a)).
The growth in the annual number of publications was concentrated in electrodynamics, which increased from 2 publications per year in 2020 and 2021 to 40 publications in 2024 (Figure 7(b)). Static and quasistatic publications only appeared from 2022 onward, with numbers increasing at a considerably lower rate and no regime exceeding 10 annual publications. Electroquasistatic publications remained entirely absent from the dataset throughout the period.

4.4. RQ3: Which Network Architectures Are Used for PINNs in Solving Maxwell’s Equations?

A variety of network architectures can be leveraged to establish a PINN, ranging from simple FNNs to more complex, attention-based architectures. During the SLR, seven main architectures were initially extracted from the dataset, of which six occur in the final dataset. Overall, researchers predominantly employ FNNs as proposed by Raissi et al. [4], with 113 occurrences, followed by 22 occurrences of CNNs and 15 occurrences of architectures classified as Other, i.e., designs that diverge distinctly from the established classes (Figure 8(a)).
The results indicate a pronounced growth in the number of occurrences over time, particularly from 2022 onward, reflecting a substantial increase in research activity during the later years of the period considered (Figure 8(b)). In 2020 and 2021, the reported use of architectures is limited exclusively to FNNs, with 2 occurrences in each year. A notable expansion appears in 2022, when the total number of occurrences increases to 15. FNNs remain dominant with 10 occurrences, while CNNs (4 occurrences) and other architectures (1 occurrence) are introduced. The diversification continues in 2023, with a total of 25 occurrences. FNNs again represent the largest share (15 occurrences), followed by CNNs (4 occurrences). This year also marks the first appearance of GNNs and RNNs, each with 1 occurrence, and a larger contribution from other architectures (4 occurrences). The most pronounced growth occurs in 2024, where the total number of occurrences rises to 59. This increase is driven primarily by FNNs, with 47 occurrences. CNNs (8 occurrences), DeepONets (1 occurrence), and other architectures (3 occurrences) contribute more modestly. Despite the overall growth, several architecture classes, including GNNs, Transformers, RNNs, and Autoencoders, are absent in this year. In 2025, the total number of occurrences remains high, at 51. FNNs continue to dominate with 37 occurrences, followed by CNNs (8 occurrences). DeepONets (1 occurrence) and other architectures (7 occurrences) also appear to a minor extent.

4.5. RQ4: In What Spatial Dimensionality Are the Electromagnetic Problems Solved?

This research question investigates the spatial dimensionality of the electromagnetic problems addressed in the analyzed publications. Spatial dimensionality is an indicator of model complexity and computational requirements, ranging from 1D to 3D field simulations.
Figure 9(a) presents the distribution of dimensionalities across the publications. The results show a predominance of 2D problems, accounting for 108 occurrences. 3D problems appear considerably less frequently, with 16 occurrences, while 1D problems account for 23 occurrences.
Figure 9(b) provides a temporal breakdown of spatial dimensionalities by year of publication from 2020 to 2025.
In 2020 and 2021, only 1D and 2D problems were observed, with a single occurrence each per year, while 3D studies were not reported (Figure 9(b)). Starting in 2022, the number of 2D problems increases markedly (11 occurrences), accompanied by the first appearance of a 3D problem. This trend continues in subsequent years. In 2023, 2D problems rise to 20 occurrences, while 1D (4 occurrences) and 3D problems (1 occurrence) remain limited. The peak occurs in 2024, with 42 occurrences of 2D problems, alongside a notable increase in 1D (10 occurrences) and 3D (6 occurrences) problems. In 2025, although the total number of occurrences slightly decreases, 2D problems remain dominant (33 occurrences), and 3D problems further increase to 8 occurrences, while 1D problems decrease to 5 occurrences. The most recent search was conducted prior to the conclusion of 2025, and therefore, the results for this year are provisional.

4.6. RQ5: Are the Reviewed Domains Divided into Different Media?

The objective of this research question is to evaluate whether the reviewed publications divide the computational domain into regions with differing material properties, distinguishing between homogeneous and inhomogeneous media. The classification of media is an important factor in determining the complexity and applicability of electromagnetic models, as modeling inhomogeneous media often demands more complex approaches.
The distribution of media types among the publications is summarized in Figure 10(a). Both categories appear in comparable numbers, with inhomogeneous media appearing slightly more frequently, at 74 occurrences compared to 67 for homogeneous media. Problems involving a single medium remain common, but a substantial fraction of studies already incorporates spatially varying properties.
In 2020, only inhomogeneous media were reported (2 occurrences), and in 2021, only homogeneous media (2 occurrences) (Figure 10(b)). Beginning in 2022, both types have been documented each year. Homogeneous media increased to 9 occurrences in 2022, alongside 5 occurrences of inhomogeneous media. This trend continued in 2023, with sustained growth of inhomogeneous media (13 occurrences) and consistent application of homogeneous media (9 occurrences). Both categories peaked in 2024, with 30 homogeneous and 27 inhomogeneous occurrences. In 2025, the total number of occurrences slightly decreased, but the use of inhomogeneous media (27 occurrences) remained more frequent than that of homogeneous media (17 occurrences). As before, the data for 2025 are provisional, since the most recent search was conducted before the end of the year.

4.7. RQ6: Which Learning Paradigms Are Used for Solving Maxwell’s Equations with PINNs?

The sixth research question investigates which learning paradigms are employed in the literature to solve Maxwell’s equations using PINNs. As illustrated in Figure 11, three paradigms are considered: unsupervised, semi-supervised, and supervised learning.
The distribution of learning paradigms across the reviewed studies shows unsupervised learning being the dominant paradigm in the field, accounting for 98 occurrences (Figure 11(a)). Semi-supervised approaches appear substantially less frequently, with 46 occurrences, while supervised methods appear only 3 times, since purely supervised publications were retained only if they were also assigned to another category (see Section 3.6).
Early publications almost exclusively rely on unsupervised learning, with 1 occurrence in 2020 and 2 occurrences in 2021 (Figure 11(b)). For semi-supervised learning, only a single contribution was reported in 2020, and none for supervised learning. From 2022 onward, a noticeable increase in the total number of publications is observed, accompanied by the emergence and growth of semi-supervised approaches. In 2022, there are 7 occurrences of semi-supervised methods, compared to 8 occurrences of unsupervised methods, while supervised learning remains marginal (1 occurrence). This trend continues in subsequent years, with unsupervised learning maintaining its position as the leading method: A total of 18 occurrences of unsupervised learning were documented in 2023, 36 in 2024, and 33 as of November 21, 2025. Semi-supervised methods also exhibited an increase in absolute numbers, reaching a peak of 21 occurrences in 2024 before declining to 11 in 2025. Supervised approaches remain consistently rare, with at most 1 occurrence per year and none reported in 2025.

4.8. RQ7: Are There Associations Between the Extracted Characteristics of the Reviewed Publications?

The preceding research questions examine the distributions of individual characteristics in isolation. To assess whether these characteristics are independent of one another, contingency tables were computed for all pairwise combinations of the five extracted characteristics. For each table, Cramér’s V is reported as a descriptive measure of association strength. Of the ten pairwise combinations, four are presented below on the basis of their association strength and descriptive relevance.
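For transparency about the reported measure, the following sketch computes Cramér's V directly from a raw contingency table of counts; the test tables are illustrative and do not correspond to any cross-tabulation in this review.

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for a 2-D contingency table of counts:
    V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    obs = np.asarray(table, dtype=float)
    n = obs.sum()
    # Expected counts under independence of rows and columns.
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
    chi2 = ((obs - expected) ** 2 / expected).sum()
    r, c = obs.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

print(cramers_v([[10, 0], [0, 10]]))  # perfect association -> 1.0
print(cramers_v([[5, 5], [5, 5]]))    # exact independence -> 0.0
```

Because Cramér's V is normalized to the interval [0, 1], values around 0.2 to 0.3, as found below, indicate small to medium association strength.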
Between the characteristics Physics Regime and Medium, an association of small to medium strength was found (Cramér’s V = 0.266). While the marginal distribution shown in Section 4.6 reported an approximately balanced use of homogeneous (67 occurrences) and inhomogeneous media (74 occurrences), the contingency table reveals regime-specific differences (Figure 12). Electrostatic publications predominantly rely on homogeneous media (68.0 %), whereas magnetostatic and magnetoquasistatic studies favor inhomogeneous media (70.8 % and 72.2 %, respectively). Electrodynamic publications show an approximately even distribution between the two media (51.2 % homogeneous and 48.8 % inhomogeneous).
The contingency table of Dimensionality and Network Architecture (Figure 13) shows that the association between these characteristics is likewise small to medium (Cramér’s V = 0.265). The distribution of architectures varies with dimensionality. In 1D, FNNs account for 92.0 % of occurrences, with CNNs as the only alternative (8.0 %). In 2D, architectural diversity increases: FNNs remain dominant (73.5 %), but CNNs (12.8 %), other architectures (12.0 %), and individual occurrences of GNNs and RNNs are also present. In 3D, FNNs constitute 60.0 % of occurrences, while CNNs account for 25.0 % and DeepONets for 10.0 %. No usage of DeepONets was reported in 1D or 2D settings.
The association between Physics Regime and Dimensionality is small (Cramér’s V = 0.193), but 2D formulations are the most frequent choice across all regimes, accounting for 91.7 % of magnetostatic, 82.6 % of electrostatic, 81.0 % of magnetoquasistatic, and 65.5 % of electrodynamic publications (Figure 14). 3D implementations are concentrated in electrodynamics, which accounts for 14 of the 16 reported 3D studies (87.5 %). Magnetostatics and electrostatics each contain a single 3D publication, while no 3D publications were recorded for magnetoquasistatics.
Unsupervised learning is the most frequent paradigm across all physics regimes (Figure 15), but the association was found to be small between Physics Regime and Learning Paradigm (Cramér’s V = 0.156). However, the distribution of semi-supervised learning differs between static and time-varying regimes. While the static regimes, magnetostatics and electrostatics, each report a share of 16.7 % for semi-supervised learning, the time-varying regimes employ it at roughly double that rate (magnetoquasistatics: 35.0 %, electrodynamics: 36.8 %) (Figure 15). Supervised learning is applied only marginally, with one occurrence each in electrodynamics, magnetostatics, and electrostatics.

5. Discussion

The following section synthesizes the descriptive findings reported in Section 4, interprets their significance for the application of PINNs to Maxwell’s equations, identifies methodological and topical gaps, and proposes concrete directions for future research.

5.1. Interpretation of Findings

The bibliometric data characterizes a field that is expanding rapidly but has not yet consolidated. The annual publication counts rose sharply between 2020 and 2025 (Figure 4(b)), and the cumulative count exhibited a steep increase from 2022 onward (Figure 6), without showing signs of levelling off. In the reviewed dataset, electrodynamic studies dominate (Figure 7(a)), suggesting that researchers overall prioritize dynamic Maxwell formulations over static or quasistatic cases. Together, these trends indicate that research on PINNs in electromagnetics is in an exploratory, scaling phase. PINNs are being applied to increasingly complex, time-dependent problems as frameworks and computational resources improve, but standard practices and established benchmarks have yet to emerge. In the following, the key findings from the SLR are discussed, with the contingency table analysis giving a more nuanced insight into the field.
First, there are no publications covering the electromagnetic subfield of electroquasistatics. This is a topical gap that indicates a lack of studies utilizing PINNs for solving electroquasistatic equations. This discrepancy might be ascribed to a more limited range of applications in contrast to electro- and magnetostatics or magnetoquasistatics. It might also be partly ascribed to the definition of PINNs used in this SLR. A more expansive definition, deviating from the canonical one, might yield additional results.
Second, FNNs dominate the network architectures (Figure 8(a)), with CNNs and other architectures appearing only sporadically. Nevertheless, there has been a recent trend of experimenting with a wider range of architectures in conjunction with PINNs. This could be due to problem-specific requirements, thereby giving rise to problem-specific approaches to implement PINNs, as opposed to their universal implementation. The contingency table analysis (Section 4.8) substantiates this observation and shows that the degree of architectural diversity is associated with spatial dimensionality. In 1D, FNNs account for 92.0 % of all architectures, leaving virtually no room for alternatives. In 3D, this share decreases to 60.0 %, suggesting that the trend toward architectural experimentation is not uniform, but is concentrated in higher-dimensional problem settings where the limitations of standard FNNs might be more pronounced.
Third, unsupervised learning, driven entirely by physics residuals rather than labeled field data, is the most widely applied learning paradigm (Figure 11(a)). Although unsupervised learning is often regarded as the preferable training method, particularly when labeled data is either unavailable or challenging to obtain, the incorporation of labeled data from measurements or numerical simulations can nevertheless be advantageous to guide the training process and improve results. The limited use of semi-supervised learning in electromagnetics likely reflects two primary factors: researchers could encounter challenges in acquiring suitable labeled data, or they could already attain good results without additional data. The latter is most likely only plausible for simpler problems. The proportion of semi-supervised learning differs between static and time-varying regimes, with magnetoquasistatic and electrodynamic studies using semi-supervised approaches at approximately twice the rate observed in magnetostatic and electrostatic studies. This disparity suggests that the availability of or need for labeled data varies with the physics regime, rather than being uniform across the field. Still, the overall dominance of unsupervised approaches is consistent with the promise of PINNs to be able to operate without labeled data, but it also raises questions about data availability, robustness, calibration, and empirical validation in settings where measurements do exist.
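To make the paradigm distinction concrete, the following minimal sketch assembles an unsupervised, physics-driven loss for a 1D model problem (a Poisson equation rather than Maxwell's equations, for brevity). It is illustrative only and drawn from none of the reviewed publications: the candidate solution is a plain callable standing in for a neural network, and central finite differences replace the automatic differentiation that actual PINN implementations employ.

```python
import numpy as np

def physics_loss(u, n_col=101, h=1e-4):
    """Unsupervised PINN-style loss for the 1D model problem
    u''(x) = -pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0.
    No labeled field data enters the loss: it consists solely of the
    PDE residual at collocation points plus a boundary-condition term."""
    x = np.linspace(0.0, 1.0, n_col)
    # Second derivative via central differences (a stand-in for autodiff).
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    residual = u_xx + np.pi**2 * np.sin(np.pi * x)  # PDE residual
    boundary = u(np.array([0.0, 1.0]))              # BC residual
    return float(np.mean(residual**2) + np.mean(boundary**2))

exact = lambda x: np.sin(np.pi * x)  # analytic solution: loss ~ 0
trial = lambda x: x * (1.0 - x)      # satisfies the BCs, not the PDE
print(physics_loss(exact), physics_loss(trial))
```

In a semi-supervised setting, a data-misfit term such as `np.mean((u(x_d) - u_d)**2)` over hypothetical labeled samples `(x_d, u_d)` would be added; in the unsupervised paradigm that dominates the reviewed literature, training minimizes the physics terms alone.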
Fourth, the overall distribution of media types appears roughly balanced, with 67 occurrences of homogeneous and 74 of inhomogeneous media (Figure 10(a)). However, this is misleading, as becomes clear when the association between Physics Regime and Medium is reviewed (Figure 12). Electrostatic studies predominantly use homogeneous media, whereas magnetostatic and magnetoquasistatic studies favor inhomogeneous media. Electrodynamic studies show an approximately even split between both categories. The aggregate balance is thus the result of opposing regime-specific preferences rather than an indifference toward medium choice. These preferences could reflect the nature of the problems typically addressed in each regime. Homogeneous media might be employed for proof-of-concept and computationally lower-cost studies, while inhomogeneous media are adopted when spatially varying material properties are essential and cannot reasonably be simplified or neglected. This is often the case in practical engineering problems, which involve more complex multi-material interactions. As those problems were rarely considered in the reviewed publications, it is possible that some studies were conducted in preparation for them. Additionally, not all publications treat multiple materials through spatially varying properties. Some instead describe them through the use of BCs, as is more common in radio and antenna design. In this SLR, such cases were not considered multi-material and were therefore not counted as inhomogeneous.
Fifth, the prevalence of 2D problem formulations and the relatively small share of 3D studies indicate a practical compromise between physical fidelity and computational resources (Figure 9(a)). A gradual increase in 3D problems over time can be seen, though not a substantive shift in the balance away from 2D problems. This likely reflects computational and methodological challenges of applying PINNs to 3D problems to solve Maxwell’s equations; challenges that include resource usage, training stability, point sampling strategies, training time, convergence behavior, and solver scalability. Therefore, algorithmic advances might be needed that reduce computational costs or improve convergence in higher spatial dimensions to further extend research activities to 3D problems. The analysis further indicates that the scarcity of 3D studies is not distributed evenly across subfields (Figure 14). Electrodynamics accounts for 14 of the 16 reported 3D studies, while magnetoquasistatics contains no 3D publications at all. Moreover, magnetostatic and electrostatic problems are addressed almost exclusively in 2D, suggesting two possible explanations. One possibility is that the extension to 3D formulations is a challenge that is particularly acute in the static and quasistatic regimes; another is that those regimes are significantly easier to describe in 2D than the dynamic regime.

5.2. Perspective and Research Opportunities

The findings of the conducted SLR, discussed in the preceding sections, reveal several research opportunities that merit attention:
  • The absence of publications categorized as electroquasistatics suggests potential for further research. This is a topical gap that might be addressed through further targeted research or domain outreach.
  • The results show a limited diversity in the application of network architectures beyond FNNs. More advanced architectures, e.g., CNNs, GNNs, or DeepONets, are generally rarely deployed, although architectural diversity increases with spatial dimensionality, with 3D studies already employing a broader range of architectures. Consequently, the need for architectural experimentation to meet problem-specific requirements is most pressing in lower-dimensional settings. Furthermore, systematic comparative studies that evaluate architecture choices for representative problems involving Maxwell’s equations would be advantageous.
  • In the context of learning paradigms, the application of semi-supervised learning is sparse. While unsupervised learning is applied in most studies, the integration of labeled data from measurements or numerical simulations is limited. The use of semi-supervised learning varies by physics regime, with magnetoquasistatic and electrodynamic studies employing it most. Consequently, the integration of labeled data presents an opportunity particularly for magnetostatic and electrostatic applications, where semi-supervised approaches remain underutilized.
  • The majority of publications in the SLR concentrate on 2D domains, with considerably less attention given to the review of 3D domains. This gap is most pronounced in the static and quasistatic regimes. The extension of PINN-based approaches to 3D formulations in these regimes presents a specific opportunity for future work.
  • Associations between several of the extracted characteristics were found. This indicates that methodological choices are not made independently of the problem under investigation. Future studies might therefore benefit from reporting and analyzing these interactions explicitly, rather than treating characteristics such as Network Architecture, Learning Paradigm, and Medium as isolated decisions.

5.3. Limitations

Several methodological aspects of the SLR influence the interpretation of these findings and should be considered when generalizing conclusions. The search was constrained to English, peer-reviewed publications from three bibliographic databases, with the final query conducted on November 21, 2025 (see Section 3.3). The latter might result in recent developments not being captured within the designated review period, given the dynamic evolution of the field. The exclusion of preprints and non-English literature might undercount nascent or geographically localized work. In addition, the review’s definition of what constitutes a PINN, in conjunction with the stipulation that purely supervised learning was excluded unless combined with physics-driven models in the same publication, influenced the selection of retained publications. Different definitional choices could alter the category counts, particularly for the analysis of the learning paradigms (see Section 3.6).
Furthermore, the analysis of the association between the characteristics is subject to methodological constraints. Some combinations of characteristics contain zero or near-zero cell counts, which limits the interpretability of those combinations. Only pairwise associations between characteristics were considered; associations between more than two characteristics at a time are not described in the SLR, but might give further insights and point to more specific research opportunities. The extracted characteristics and categories were designed for a primarily descriptive analysis, but to further investigate associations between characteristics, revised or finer-grained categories might be beneficial, yet far more laborious to implement.

6. Conclusion

This SLR examined recent research on the application of PINNs to the solution of Maxwell’s equations in electromagnetics. By analyzing 139 peer-reviewed journal articles and conference papers published between 2020 and 2025, the study provides an overview of methodological trends, application domains, and prevailing modeling choices.
A sustained increase in research activity can be seen, reflecting growing interest in PINNs as an alternative or complementary approach to classical numerical methods in CEM. Overall, the majority of publications focus on electrodynamic problems, while static and quasistatic regimes receive comparatively less attention. Methodologically, the literature is dominated by canonical PINN formulations based on FNNs combined with physics-based loss functions, closely following the original framework introduced by Raissi et al. [4]. Nonetheless, there appears to be an increasing amount of experimentation with a variety of architectures in order to optimize PINNs and address problem-specific requirements. Unsupervised learning paradigms prevail, highlighting the appeal of PINNs in scenarios where labeled data is limited or unavailable.
Concurrently, limitations and imbalances in current research practice were found. Most studies address 2D problems, with relatively few large-scale 3D implementations reported. Architectural diversity beyond FNNs remains limited, and systematic comparisons between alternative network designs are scarce. Furthermore, while semi-supervised approaches are increasingly explored, their use remains secondary.
Beyond these individual trends, it was found that methodological choices reported in the literature are not independent of the problem characteristics. Associations were identified between Physics Regime and Medium, as well as between Dimensionality and Network Architecture. Specifically, the seemingly balanced use of homogeneous and inhomogeneous media obscures opposing regime-specific preferences, and the limited adoption of architectures other than FNNs is concentrated in lower-dimensional settings, while 3D studies already exhibit greater architectural diversity. This indicates that the field’s methodological landscape is shaped by problem-specific factors that are not evident in marginal distributions alone. Addressing these gaps and dependencies, particularly the extension of 3D formulations to the static and quasistatic regimes, broader architectural experimentation in lower-dimensional settings, and increased integration of labeled data in static electromagnetic applications, represents a key opportunity for future research.

Author Contributions

Conceptualization, L.S.; methodology, L.S. and F.P.; software, L.S.; formal analysis, L.S.; investigation, L.S. and F.P.; data curation, L.S.; writing—original draft preparation, L.S.; writing—review and editing, F.P.; visualization, F.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data associated with this article are openly available from Zenodo at https://doi.org/10.5281/zenodo.18709548. The repository contains the lists of records retrieved from the databases queried in the SLR, including the title, authors, corresponding DOIs, the journals in which they were published, etc. It also includes a list with the categorization of the reviewed publications.

Acknowledgments

The authors acknowledge the support provided by the Open Access Publication Fund of the Westphalian University of Applied Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. List of Reviewed Publications

Table A1 provides a summary of the publications reviewed in this SLR, classified into the characteristics Physics Regime, Dimensionality, Medium, Network Architecture, and Learning Paradigm (see Section 3.6).
Table A1. List of reviewed publications.

References

  1. Lagaris, I.; Likas, A.; Fotiadis, D. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks 1998, 9, 987–1000. [Google Scholar] [CrossRef] [PubMed]
  2. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations, 2017, [arXiv:cs.AI/1711.10561].
  3. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations, 2017, [arXiv:cs.AI/1711.10566].
  4. Raissi, M.; Perdikaris, P.; Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 2019, 378, 686–707. [Google Scholar] [CrossRef]
  5. Karniadakis, G.; Kevrekidis, Y.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nature Reviews Physics 2021, 1–19. [Google Scholar] [CrossRef]
  6. Farea, A.; Yli-Harja, O.; Emmert-Streib, F. Understanding Physics-Informed Neural Networks: Techniques, Applications, Trends, and Challenges. AI 2024, 5, 1534–1557. [Google Scholar] [CrossRef]
  7. Michaloglou, A.; Papadimitriou, I.; Gialampoukidis, I.; Vrochidis, S.; Kompatsiaris, I. Physics-Informed Neural Networks in Materials Modeling and Design: A Review. Archives of Computational Methods in Engineering 2025. [Google Scholar] [CrossRef]
  8. Fan, W.; Chen, X. Embedding Physics into Machine Learning: A Review of Physics Informed Neural Networks as Partial Differential Equation Forward Solvers. Tsinghua Science and Technology 2026, 31, 1326–1364. [Google Scholar] [CrossRef]
  9. Pioch, F.; Harmening, J.H.; Müller, A.M.; Peitzmann, F.J.; Schramm, D.; el Moctar, O. Turbulence Modeling for Physics-Informed Neural Networks: Comparison of Different RANS Models for the Backward-Facing Step Flow. Fluids 2023, 8. [Google Scholar] [CrossRef]
  10. Harmening, J.H.; Peitzmann, F.J.; el Moctar, O. Effect of network architecture on physics-informed deep learning of the Reynolds-averaged turbulent flow field around cylinders without training data. Frontiers in Physics 2024, 12. [Google Scholar] [CrossRef]
  11. Harmening, J.H.; Pioch, F.; Fuhrig, L.; Peitzmann, F.J.; Schramm, D.; el Moctar, O. Data-assisted training of a physics-informed neural network to predict the separated Reynolds-averaged turbulent flow field around an airfoil under variable angles of attack. Neural Computing and Applications 2024, 36. [Google Scholar] [CrossRef]
  12. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 2020, 367, 1026–1030. [Google Scholar] [CrossRef]
  13. Laubscher, R.; Rousseau, P. Application of a mixed variable physics-informed neural network to solve the incompressible steady-state and transient mass, momentum, and energy conservation equations for flow over in-line heated tubes. Applied Soft Computing 2022, 114, 108050. [Google Scholar] [CrossRef]
  14. Ouyang, H.; Zhu, Z.; Chen, K.; Tian, B.; Huang, B.; Hao, J. Reconstruction of hydrofoil cavitation flow based on the chain-style physics-informed neural network. Engineering Applications of Artificial Intelligence 2023, 119, 105724. [Google Scholar] [CrossRef]
  15. Wang, H.; Liu, Y.; Wang, S. Dense velocity reconstruction from particle image velocimetry/particle tracking velocimetry using a physics-informed neural network. Physics of Fluids 2022, 34, 017116. [Google Scholar] [CrossRef]
  16. Jin, X.; Cai, S.; Li, H.; Karniadakis, G.E. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. Journal of Computational Physics 2021, 426, 109951. [Google Scholar] [CrossRef]
  17. Cai, S.; Wang, Z.; Wang, S.; Perdikaris, P.; Karniadakis, G.E. Physics-Informed Neural Networks for Heat Transfer Problems. Journal of Heat Transfer 2021, 143, 060801. [Google Scholar] [CrossRef]
  18. Zobeiry, N.; Humfeld, K.D. A physics-informed machine learning approach for solving heat transfer equation in advanced manufacturing and engineering applications. Engineering Applications of Artificial Intelligence 2021, 101, 104232. [Google Scholar] [CrossRef]
  19. Oommen, V.; Srinivasan, B. Solving Inverse Heat Transfer Problems Without Surrogate Models: A Fast, Data-Sparse, Physics Informed Neural Network Approach. Journal of Computing and Information Science in Engineering 2022, 22, 041012. [Google Scholar] [CrossRef]
  20. Billah, M.M.; Khan, A.I.; Liu, J.; Dutta, P. Physics-informed deep neural network for inverse heat transfer problems in materials. Materials Today Communications 2023, 35, 106336. [Google Scholar] [CrossRef]
  21. Karthik, K.; Sowmya, G.; Sharma, N.; Kumar, C.; Ravikumar Shashikala, V.K.; Alur Shivaprakash, S.; Muhammad, T.; Gill, H.S. Predictive modeling through physics-informed neural networks for analyzing the thermal distribution in the partially wetted wavy fin. ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik 2024, 104, e202400180. [Google Scholar] [CrossRef]
  22. Kumar, C.; Srilatha, P.; Karthik, K.; Somashekar, C.; Nagaraja, K.V.; Varun Kumar, R.S.; Shah, N.A. A physics-informed machine learning prediction for thermal analysis in a convective-radiative concave fin with periodic boundary conditions. ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik 2024, 104, e202300712. [Google Scholar] [CrossRef]
  23. Kapoor, T.; Wang, H.; Núñez, A.; Dollevoet, R. Physics-Informed Neural Networks for Solving Forward and Inverse Problems in Complex Beam Systems. IEEE Transactions on Neural Networks and Learning Systems 2024, 35, 5981–5995. [Google Scholar] [CrossRef]
  24. Jin, H.; Zhang, E.; Espinosa, H.D. Recent Advances and Applications of Machine Learning in Experimental Solid Mechanics: A Review. Applied Mechanics Reviews 2023, 75, 061001. [Google Scholar] [CrossRef]
  25. Henkes, A.; Wessels, H.; Mahnken, R. Physics informed neural networks for continuum micromechanics. PAMM 2021, 21, e202100040. [Google Scholar] [CrossRef]
  26. Zhang, N.; Xu, K.; Yu Yin, Z.; Li, K.Q.; Jin, Y.F. Finite element-integrated neural network framework for elastic and elastoplastic solids. Computer Methods in Applied Mechanics and Engineering 2025, 433, 117474. [Google Scholar] [CrossRef]
  27. Habib, A.; Yildirim, U. Developing a physics-informed and physics-penalized neural network model for preliminary design of multi-stage friction pendulum bearings. Engineering Applications of Artificial Intelligence 2022, 113, 104953. [Google Scholar] [CrossRef]
  28. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  29. Maxwell, J.C. VIII. A dynamical theory of the electromagnetic field. Philosophical Transactions of the Royal Society of London 1865, 155, 459–512. [Google Scholar] [CrossRef]
  30. Griffiths, D.J. Introduction to Electrodynamics, 5th ed.; Cambridge University Press, 2023. [CrossRef]
  31. Kitchenham, B.; Charters, S. Guidelines for performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report EBSE-2007-01; Keele University, 2007. [Google Scholar]
  32. Schmeing, L.; Pioch, F. Dataset for "Advancements in Physics-Informed Neural Networks for Solving Maxwell’s Equations: A Systematic Literature Review", 2026. [CrossRef]
  33. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021, 372. [Google Scholar] [CrossRef]
  34. Liu, Z.; Xu, F. Principle and Application of Physics-Inspired Neural Networks for Electromagnetic Problems. In Proceedings of the IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2022, p. 5244–5247. [CrossRef]
  35. Fujita, K. Physics-Informed Neural Networks with Data and Equation Scaling for Time Domain Electromagnetic Fields. In Proceedings of the 2022 Asia-Pacific Microwave Conference (APMC), 2022, pp. 623–625. [CrossRef]
  36. Li, R.; Xiao, L.; Zhang, Y.; Shi, Z.; Jiao, Y.; Tang, H. Forward electromagnetic modeling and inverse scattering of cylinder with various cross-section using physics informed neural network. In Proceedings of the 2024 International Applied Computational Electromagnetics Society Symposium (ACES-China), 2024, pp. 1–3. [CrossRef]
  37. Piao, S.; Gu, H.; Wang, A.; Qin, P. A Domain-Adaptive Physics-Informed Neural Network for Inverse Problems of Maxwell’s Equations in Heterogeneous Media. IEEE Antennas and Wireless Propagation Letters 2024, 23, 2905–2909. [Google Scholar] [CrossRef]
  38. Cao, B.; Wang, Y.D.; Zhang, N.E.; Liang, Y.Z.; Yin, W.Y. A Physics-Informed Neural Networks Algorithm for Simulating Semiconductor Devices. In Proceedings of the 2023 International Applied Computational Electromagnetics Society Symposium (ACES-China), 2023, pp. 1–3. [CrossRef]
  39. Li, R.; Zhang, Y.; Tang, H.; Shi, Z.; Jiao, Y.; Xiao, L.; Wei, B.; Gong, S. Research on Electromagnetic Scattering and Inverse Scattering of Target Based on Transfer Learning Physics-Informed Neural Networks. In Proceedings of the 2024 International Applied Computational Electromagnetics Society Symposium (ACES-China), 2024, pp. 1–3. [CrossRef]
  40. Li, W.; Tang, H.; Li, R.; Zhang, M.; Deng, Q.; Zhang, Y.; Shi, Z. Electromagnetic Scattering of Infinitely Long Cylinder of Arbitrary Cross-section Based on PINNs. In Proceedings of the 2024 Photonics & Electromagnetics Research Symposium (PIERS), 2024, pp. 1–8. [CrossRef]
  41. Barmada, S.; Tucci, M.; Formisano, A.; Di Barba, P.; Mognaschi, M.E. Hybrid Boundary Element – Physics Informed Neural Network Formulation for Electromagnetics Problems. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), 2024, pp. 1–2. [CrossRef]
  42. Qi, S.; Sarris, C.D. Hybrid Physics-Informed Neural Network for the Wave Equation With Unconditionally Stable Time-Stepping. IEEE Antennas and Wireless Propagation Letters 2024, 23, 1356–1360. [Google Scholar] [CrossRef]
  43. Baldan, M.; Di Barba, P.; Lowther, D.A. Physics-Informed Neural Networks for Inverse Electromagnetic Problems. In Proceedings of the 2022 IEEE 20th Biennial Conference on Electromagnetic Field Computation (CEFC), 2022, pp. 1–2. [CrossRef]
  44. Backmeyer, M.; Kurz, S.; Möller, M.; Schöps, S. Solving Electromagnetic Scattering Problems by Isogeometric Analysis with Deep Operator Learning. In Proceedings of the 2024 Kleinheubach Conference, 2024, pp. 1–4. [CrossRef]
  45. Su, Y.; Zeng, S.; Wu, X.; Huang, Y.; Chen, J. Physics-Informed Graph Neural Network for Electromagnetic Simulations. In Proceedings of the 2023 XXXVth General Assembly and Scientific Symposium of the International Union of Radio Science (URSI GASS), 2023, pp. 1–3. [CrossRef]
  46. Mokhtari, B.E.; Chauviere, C.; Bonnet, P. On the Importance of the Mathematical Formulation to Get PINNs Working. IEEE Transactions on Electromagnetic Compatibility 2024, 66, 2142–2149. [Google Scholar] [CrossRef]
  47. Liu, J.P.; Wang, B.Z.; Chen, C.S.; Wang, R. Inverse Design Method for Horn Antennas Based on Knowledge-Embedded Physics-Informed Neural Networks. IEEE Antennas and Wireless Propagation Letters 2024, 23, 1665–1669. [Google Scholar] [CrossRef]
  48. Hu, Y.D.; Wang, X.H.; Zhou, H.; Wang, L. A Priori Knowledge-Based Physics-Informed Neural Networks for Electromagnetic Inverse Scattering. IEEE Transactions on Geoscience and Remote Sensing 2024, 62, 1–9. [Google Scholar] [CrossRef]
  49. Qi, S.; Sarris, C.D. Physics-Informed Deep Operator Network for 3-D Time-Domain Electromagnetic Modeling. IEEE Transactions on Microwave Theory and Techniques 2025, 73, 3800–3812. [Google Scholar] [CrossRef]
  50. Ping, Y.; Zhang, Y.; Jiang, L. Uncertainty Quantification in PEEC Method: A Physics-Informed Neural Networks-Based Polynomial Chaos Expansion. IEEE Transactions on Electromagnetic Compatibility 2024, 66, 2095–2101. [Google Scholar] [CrossRef]
  51. Lim, K.L.; Dutta, R.; Rotaru, M. Physics Informed Neural Network using Finite Difference Method. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2022, pp. 1828–1833. [CrossRef]
  52. Hu, Y.D.; Wang, X.H.; Zhou, H.; Wang, L.; Wang, B.Z. A More General Electromagnetic Inverse Scattering Method Based on Physics-Informed Neural Network. IEEE Transactions on Geoscience and Remote Sensing 2023, 61, 1–9. [Google Scholar] [CrossRef]
  53. Baldan, M.; Di Barba, P.; Lowther, D.A. Physics-Informed Neural Networks for Inverse Electromagnetic Problems. IEEE Transactions on Magnetics 2023, 59, 1–5. [Google Scholar] [CrossRef]
  54. Pan, Y.Q.; Wang, R.; Wang, B.Z. Physics-Informed Neural Networks With Embedded Analytical Models: Inverse Design of Multilayer Dielectric-Loaded Rectangular Waveguide Devices. IEEE Transactions on Microwave Theory and Techniques 2024, 72, 3993–4005. [Google Scholar] [CrossRef]
  55. Qi, S.; Sarris, C.D. Physics-Informed Neural Networks for Multiphysics Simulations: Application to Coupled Electromagnetic-Thermal Modeling. In Proceedings of the 2023 IEEE/MTT-S International Microwave Symposium - IMS 2023, 2023, pp. 166–169. [CrossRef]
  56. Sato, T.; Sasaki, H.; Sato, Y. A Fast Physics-informed Neural Network Based on Extreme Learning Machine for Solving Magnetostatic Problems. In Proceedings of the 2023 24th International Conference on the Computation of Electromagnetic Fields (COMPUMAG), 2023, pp. 1–4. [CrossRef]
  57. Ping, Y.; Zhang, Y.; Jiang, L. Uncertainty Quantification in PEEC Method: A Physics-Informed Neural Networks-Based Polynomial Chaos Expansion. In Proceedings of the 2024 IEEE Joint International Symposium on Electromagnetic Compatibility, Signal & Power Integrity: EMC Japan / Asia-Pacific International Symposium on Electromagnetic Compatibility (EMC Japan/APEMC Okinawa), 2024, pp. 395–398. [CrossRef]
  58. Deng, Q.; Tang, H.; Li, R.; Li, W.; Zhang, M.; Shi, Z.; Zhang, Y. Application of PINNs in PNJ Research. In Proceedings of the 2024 Photonics & Electromagnetics Research Symposium (PIERS), 2024, pp. 1–8. [CrossRef]
  59. Brendel, P.; Medvedev, V.; Rosskopf, A. Convolutional Physics-Informed Neural Networks for Fast Prediction of Core Losses in Axisymmetric Transformers. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), 2024, pp. 1–2. [CrossRef]
  60. Fujita, K. Modeling Power-Bus Structures with Physics-Informed Neural Networks. In Proceedings of the 2024 IEEE Joint International Symposium on Electromagnetic Compatibility, Signal & Power Integrity: EMC Japan / Asia-Pacific International Symposium on Electromagnetic Compatibility (EMC Japan/APEMC Okinawa), 2024, pp. 552–555. [CrossRef]
  61. Wang, J.; Wang, D.; Wang, S.; Li, W. Modeling of Permanent Magnet Eddy-Current Coupler Based on Unsupervised Physics-Informed Radial-Based Function Neural Networks. IEEE Transactions on Magnetics 2025, 61, 1–10. [Google Scholar] [CrossRef]
  62. Barmada, S.; Dodge, S.; Tucci, M.; Formisano, A.; Di Barba, P.; Evelina Mognaschi, M. A Novel Hybrid Boundary Element—Physics Informed Neural Network Method for Numerical Solutions in Electromagnetics. IEEE Access 2024, 12, 171444–171457. [Google Scholar] [CrossRef]
  63. Xia, C.; Du, B.; Huang, W.; Cui, S. Parameter Identification of Permanent Magnet Synchronous Motor Based on Physics-Informed Neural Network. In Proceedings of the 2024 5th International Conference on Power Engineering (ICPE), 2024, pp. 207–212. [CrossRef]
  64. Qi, S.; Sarris, C.D. Benchmarking Physics-Informed Neural Networks for Time-Domain Electromagnetic Simulations. In Proceedings of the 2023 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (USNC-URSI), 2023, pp. 1619–1620. [CrossRef]
  65. Lu, W.; Duan, J.; Cheng, L.; Lu, J.; Dou, D. Electromagnetic Interference Effect Assessment Under Measuring Testability Limitation Based on Physics-Informed Neural Network and Gaussian Process Regression. IEEE Transactions on Power Electronics 2024, 39, 12413–12423. [Google Scholar] [CrossRef]
  66. Gong, R.; Tang, Z. Hot Spot Driven Physics-informed Neural Network via Special Designed Quantity of Interest applied to Magneto-thermal Analysis. In Proceedings of the 2022 IEEE 20th Biennial Conference on Electromagnetic Field Computation (CEFC), 2022, pp. 1–2. [CrossRef]
  67. Han, J.H.; Park, J.H.; Song, S.M.; Hong, S.K. Electromagnetic Field Analysis Using Physics Informed Neural Network Considering Eddy Current. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), 2024, pp. 1–2. [CrossRef]
  68. Medvedev, V.; Erdmann, A.; Rosskopf, A. Modeling of Near-and Far-Field Diffraction from EUV Absorbers Using Physics-Informed Neural Networks. In Proceedings of the 2023 Photonics & Electromagnetics Research Symposium (PIERS), 2023, pp. 297–305. [CrossRef]
  69. Liu, Y.H.; Liang, J.C.; Wang, B.Z.; Wang, R. Inverse Design Method for Electromagnetic Periodic Structures Based on Physics-Informed Neural Network With Embedded Analytical Models. IEEE Transactions on Microwave Theory and Techniques 2025, 73, 844–853. [Google Scholar] [CrossRef]
  70. Brendel, P.; Medvedev, V.; Rosskopf, A. Physics-Informed Neural Networks for Magnetostatic Problems on Axisymmetric Transformer Geometries. IEEE Journal of Emerging and Selected Topics in Industrial Electronics 2024, 5, 700–709. [Google Scholar] [CrossRef]
  71. Wang, J.Y.; Pan, X.M. Universal Approximation Theorem and Deep Learning for the Solution of Frequency-Domain Electromagnetic Scattering Problems. IEEE Transactions on Antennas and Propagation 2024, 72, 9274–9285. [Google Scholar] [CrossRef]
  72. Pan, Y.Q.; Wang, R.; Wang, B.Z. Solving Two-Dimensional Waveguide Problem Based on Physics-Informed Neural Networks. In Proceedings of the 2023 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2023, pp. 1–3. [CrossRef]
  73. Khan, M.R.; Zekios, C.L.; Bhardwaj, S.; Georgakopoulos, S.V. A Physics-Informed Neural Network-Based Waveguide Eigenanalysis. IEEE Access 2024, 12, 120777–120787. [Google Scholar] [CrossRef]
  74. Shao, J.; Wang, R.; Wang, B.Z. Theoretical Analysis of Rotational Symmetry Models for Time-Domain PINN. In Proceedings of the 2024 IEEE International Conference on Computational Electromagnetics (ICCEM), 2024, pp. 1–2. [CrossRef]
  75. Guo, Z.; Sabariego, R.V. Physics-Informed Neural Network for 2D Magneto-Quasi-Static Problems in Time Domain. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), 2024, pp. 1–2. [CrossRef]
  76. Zhang, J.B.; Yu, D.M.; Pan, X.M. Physics-Informed Neural Networks For the Solution of Electromagnetic Scattering by Integral Equations. In Proceedings of the 2022 International Applied Computational Electromagnetics Society Symposium (ACES-China), 2022, pp. 1–2. [CrossRef]
  77. Hu, Y.D.; Wang, X.H.; Wei, T.; Ren, H.Y. Physics-Informed Neural Networks with Dynamic Sampling Method for Solving Rectangular Waveguide Problems. In Proceedings of the 2023 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2023, pp. 1–3. [CrossRef]
  78. Zhu, Y.; Xu, K.; Wan, B.; Lei, G.; Zhu, J. Kolmogorov–Arnold Network for Solving 2-D Magnetostatic Problems. IEEE Transactions on Magnetics 2025, 61, 1–5. [Google Scholar] [CrossRef]
  79. Zhang, P.; Hu, Y.; Jin, Y.; Deng, S.; Wu, X.; Chen, J. A Maxwell’s Equations Based Deep Learning Method for Time Domain Electromagnetic Simulations. In Proceedings of the 2020 IEEE Texas Symposium on Wireless and Microwave Circuits and Systems (WMCS), 2020, pp. 1–4. [CrossRef]
  80. Li, H.; Liu, J.G.; Huang, X.W.; Sheng, X.Q. Physics-Informed Neural Networks with Hard Constraints for Electromagnetic Scattering Analysis. In Proceedings of the 2024 14th International Symposium on Antennas, Propagation and EM Theory (ISAPE), 2024, pp. 1–3. [CrossRef]
  81. Mušeljić, E.; Reinbacher-Köstinger, A.; Kaltenbacher, M. Solving the electrostatic Laplace’s equation with a parameterizable physics informed neural network. In Proceedings of the 2022 IEEE 20th Biennial Conference on Electromagnetic Field Computation (CEFC), 2022, pp. 1–2. [CrossRef]
  82. Han, J.H.; Choi, E.J.; Hong, S.K. A Study on Electromagnetic Field Analysis Considering Geometry Variation Using Physics-Informed Neural Network. In Proceedings of the 2023 26th International Conference on Electrical Machines and Systems (ICEMS), 2023, pp. 3345–3348. [CrossRef]
  83. Zheng, Y.R.; Huang, Z.Y.; Gong, X.Z.; Zheng, X.Z. Implementation of Maxwell’s equations solving algorithm based on PINN. In Proceedings of the 2024 IEEE International Conference on Computational Electromagnetics (ICCEM), 2024, pp. 1–3. [CrossRef]
  84. Liu, Y.H.; Wang, B.Z.; Wang, R. Inverse Design of Frequency Selective Surface Using Physics-Informed Neural Networks. In Proceedings of the 2024 IEEE International Symposium on Antennas and Propagation and INC/USNC-URSI Radio Science Meeting (AP-S/INC-USNC-URSI), 2024, pp. 1027–1028. [CrossRef]
  85. Chang, H.Y.; Wang, R.; Wang, B.Z. Solving Complex Electromagnetic Scattering Problem Based on Physics-Informed Neural Network with Adaptive Sampling Method. In Proceedings of the 2023 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2023, pp. 1–3. [CrossRef]
  86. Khan, A.; Lowther, D.A. Physics Informed Neural Networks for Electromagnetic Analysis. IEEE Transactions on Magnetics 2022, 58, 1–4. [Google Scholar] [CrossRef]
  87. Jiang, X.; Zhang, M.; Song, Y.; Chen, H.; Huang, D.; Wang, D. Predicting Ultrafast Nonlinear Dynamics in Fiber Optics by Enhanced Physics-Informed Neural Network. Journal of Lightwave Technology 2024, 42, 1381–1394. [Google Scholar] [CrossRef]
  88. Gong, Z.; Chu, Y.; Yang, S. Physics-Informed Neural Networks for Solving Two-Dimensional Magneto-Static Fields. In Proceedings of the 2023 IEEE International Magnetic Conference - Short Papers (INTERMAG Short Papers), 2023, pp. 1–2. [CrossRef]
  89. Guo, Z.; Nguyen, B.; Sabariego, R.V. Physics-Informed Neural Network for Solving 1-D Nonlinear Time-Domain Magneto-Quasi-Static Problems. IEEE Transactions on Magnetics 2025, 61, 1–9. [Google Scholar] [CrossRef]
  90. Shao, J.; Liu, Y.; Wang, R.; Wang, B.Z. Finite Difference Based PINN for Electromagnetic Forward Problem Solving. In Proceedings of the 2024 IEEE International Symposium on Antennas and Propagation and INC/USNC-URSI Radio Science Meeting (AP-S/INC-USNC-URSI), 2024, pp. 1023–1024. [CrossRef]
  91. Gong, Z.; Chu, Y.; Yang, S. Physics-Informed Neural Networks for Solving 2-D Magnetostatic Fields. IEEE Transactions on Magnetics 2023, 59, 1–5. [Google Scholar] [CrossRef]
  92. Liu, C.; Li, L.; Cui, T. Physics-informed Unsupervised Deep Learning Framework for Solving Full-Wave Inverse Scattering Problems. In Proceedings of the 2022 IEEE Conference on Antenna Measurements and Applications (CAMA), 2022, pp. 1–4. [CrossRef]
  93. Jiang, X.; Wang, D.; Fan, Q.; Zhang, M.; Lu, C.; Tao Lau, A.P. Solving the Nonlinear Schrödinger Equation in Optical Fibers Using Physics-informed Neural Network. In Proceedings of the 2021 Optical Fiber Communications Conference and Exhibition (OFC), 2021, pp. 1–3.
  94. Chang, H.; Fan, J.; Wang, R.; Wang, B.Z. Solving Electromagnetic Problems with PINN based on Scattering Equivalent Source Method. In Proceedings of the 2024 IEEE International Symposium on Antennas and Propagation and INC/USNC-URSI Radio Science Meeting (AP-S/INC-USNC-URSI), 2024, pp. 1025–1026. [CrossRef]
  95. Jiang, F.; Li, T.; Lv, X.; Rui, H.; Jin, D. Physics-Informed Neural Networks for Path Loss Estimation by Solving Electromagnetic Integral Equations. IEEE Transactions on Wireless Communications 2024, 23, 15380–15393. [Google Scholar] [CrossRef]
  96. Liu, C.; Zhang, H.; Li, L.; Cui, T.J. Towards Intelligent Electromagnetic Inverse Scattering Using Deep Learning Techniques and Information Metasurfaces. IEEE Journal of Microwaves 2023, 3, 509–522. [Google Scholar] [CrossRef]
  97. Sun, B.; Wu, F.; Zhang, C.; Fan, W.; Gao, Z.; Liu, Y. Physics-Informed Contrast Source Inversion Learning Methods for Microwave Imaging. In Proceedings of the 2024 IEEE Asia-Pacific Microwave Conference (APMC), 2024, pp. 793–795. [CrossRef]
  98. Uduagbomen, J.; Lakshminarayana, S.; Liu, Z.; Leeson, M.S.; Xu, T. Physics-Informed Neural Network for Fibre Channel Modelling in Optical Communication Systems. In Proceedings of the 2023 23rd International Conference on Transparent Optical Networks (ICTON), 2023, pp. 1–4. [CrossRef]
  99. Wang, D.; Wang, S.; Kong, D.; Wang, J.; Li, W.; Pecht, M. Physics-Informed Sparse Neural Network for Permanent Magnet Eddy Current Device Modeling and Analysis. IEEE Magnetics Letters 2023, 14, 1–5. [Google Scholar] [CrossRef]
  100. Fujita, K. Physics-Informed Neural Network Method for Space Charge Effect in Particle Accelerators. IEEE Access 2021, 9, 164017–164025. [Google Scholar] [CrossRef]
  101. Liu, W.; Luo, W.; Cheng, X.; Zhou, M. Measurement-Physic-Constrained Neural Network for Multifield Reconstruction of PMC. IEEE Transactions on Instrumentation and Measurement 2025, 74, 1–9. [Google Scholar] [CrossRef]
  102. Yu, X.; Serrallés, J.E.C.; Giannakopoulos, I.I.; Liu, Z.; Daniel, L.; Lattanzi, R.; Zhang, Z. PIFON-EPT: MR-Based Electrical Property Tomography Using Physics-Informed Fourier Networks. IEEE Journal on Multiscale and Multiphysics Computational Techniques 2024, 9, 49–60. [Google Scholar] [CrossRef]
  103. Lim, K. Electrostatic Field Analysis Using Physics Informed Neural Net and Partial Differential Equation Solver Analysis. In Proceedings of the 2024 IEEE 21st Biennial Conference on Electromagnetic Field Computation (CEFC), 2024, pp. 1–2. [CrossRef]
  104. Fang, Z.; Zhan, J. Deep Physical Informed Neural Networks for Metamaterial Design. IEEE Access 2020, 8, 24506–24513. [Google Scholar] [CrossRef]
  105. Papadimitropoulos, S.; Tsogka, C.; Hasan, M. Synthetic Aperture Imaging Using Physically Informed Convolutional Neural Networks. In Proceedings of the 2024 IEEE Conference on Computational Imaging Using Synthetic Apertures (CISA), 2024, pp. 1–4. [CrossRef]
  106. Ruan, G.; Wang, Z.; Liu, C.; Xia, L.; Wang, H.; Qi, L.; Chen, W. Magnetic Resonance Electrical Properties Tomography Based on Modified Physics-Informed Neural Network and Multiconstraints. IEEE Transactions on Medical Imaging 2024, 43, 3263–3278. [Google Scholar] [CrossRef]
  107. Li, Y.; Liu, Y.; Yan, Y.; Wang, J.; Mattar, T. Deep Learning Method Based on Physics Informed Neural Networks for the Electromagnetic Stress Simulation in Transformer Windings. In Proceedings of the 19th Annual Conference of China Electrotechnical Society; Yang, Q.; Bie, Z.; Yang, X., Eds., Singapore, 2025; pp. 725–736. [CrossRef]
  108. Li, X.; Wang, P.; Yang, F.; Li, X.; Fang, Y.; Tong, J. DAL-PINNs: Physics-informed neural networks based on D’Alembert principle for generalized electromagnetic field model computation. Engineering Analysis with Boundary Elements 2024, 168, 105914. [Google Scholar] [CrossRef]
  109. Medvedev, V.; Erdmann, A.; Rosskopf, A. Physics-informed deep learning for 3D modeling of light diffraction from optical metasurfaces. Optics Express 2025, 33, 1371–1384. [Google Scholar] [CrossRef] [PubMed]
  110. Tan, B.; Yi, J.; Qin, Y.; Pu, H.; Luo, J. Design Optimization of Permanent Magnet Coupler Based on Physics-Informed Neural Networks. In Proceedings of the Advances in Mechanical Design; Tan, J.; Liu, Y.; Huang, H.Z.; Yu, J.; Wang, Z., Eds., Singapore, 2024; pp. 657–670. [CrossRef]
  111. Zhang, H.; Li, C.; Xia, R.; Chen, X.; Xiao, T.; Guo, X.W.; Liu, J. FE-PIRBN: Feature-Enhanced physics-informed radial basis neural networks for solving high-frequency electromagnetic scattering problems. Journal of Computational Physics 2025, 527, 113798. [Google Scholar] [CrossRef]
  112. Pu, H.; Tan, B.; Yi, J.; Yuan, S.; Zhao, J.; Bai, R.; Luo, J. A novel key performance analysis method for permanent magnet coupler using physics-informed neural networks. Engineering with Computers 2023, 40, 2259–2277. [Google Scholar] [CrossRef]
  113. Wang, B.; Guo, Z.; Liu, J.; Wang, Y.; Xiong, F. Geophysical Frequency Domain Electromagnetic Field Simulation Using Physics-Informed Neural Network. Mathematics 2024, 12. [Google Scholar] [CrossRef]
  114. Hou, S.; Hao, X.; Pan, D.; Wu, W. Physics-informed neural network for simulating magnetic field of coaxial magnetic gear. Engineering Applications of Artificial Intelligence 2024, 133, 108302. [Google Scholar] [CrossRef]
  115. Dimitropoulos, I.; Contopoulos, I.; Mpisketzis, V.; Chaniadakis, E. The pulsar magnetosphere with machine learning: methodology. Monthly Notices of the Royal Astronomical Society 2024, 528, 3141–3152. [Google Scholar] [CrossRef]
  116. Suhendar, H.; Pratama, M.R.; Silambi, M.S. Mesh-Free Solution of 2D Poisson Equation with High Frequency Charge Patterns Using Data-Free Physics Informed Neural Network. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing, 2024; Vol. 2866, p. 012053. [CrossRef]
  117. Qian, K.; Kheir, M. Investigating KAN-Based Physics-Informed Neural Networks for EMI/EMC Simulations. In Proceedings of the Intelligent Systems, Blockchain, and Communication Technologies; Abdelgawad, A.; Jamil, A.; Hameed, A.A., Eds., Cham, 2025; pp. 40–48. [CrossRef]
  118. Hu, Z.; Yang, A.; Xu, S.; Li, N.; Wu, Q.; Sun, Y. Prediction of soliton evolution and parameters evaluation for a high-order nonlinear Schrödinger–Maxwell–Bloch equation in the optical fiber. Physics Letters A 2025, 531, 130182. [Google Scholar] [CrossRef]
  119. Wang, Y.; Zhang, S. Multi-receptive-field physics-informed neural network for complex electromagnetic media. Optical Materials Express 2024, 14, 2740–2754. [Google Scholar] [CrossRef]
  120. Biswal, P.; Avdijaj, J.; Parente, A.; Coussement, A. Solving the Radiation Transfer Equation in Participating Media Using Physics Informed Neural Networks. In Proceedings of the 10th World Congress on Mechanical, Chemical, and Material Engineering (MCM’24), 2024, pp. HTFF 269-1–HTFF 269-10. [CrossRef]
  121. Han, J.H.; Park, J.H.; Song, S.M.; Hong, S.K. Enhancing Learning Efficiency in Physics Informed Neural Network through Data Comparison and Transfer Learning. Journal of Electrical Engineering & Technology 2025, 20, 3335–3341. [Google Scholar] [CrossRef]
  122. Gaire, P.; Bhardwaj, S. Physics embedded neural network: Novel data-free approach towards scientific computing and applications in transfer learning. Neurocomputing 2025, 617, 128936. [Google Scholar] [CrossRef]
  123. Chang, C.; Xin, Z.; Zeng, T. A conservative hybrid deep learning method for Maxwell–Ampère–Nernst–Planck equations. Journal of Computational Physics 2024, 501, 112791. [Google Scholar] [CrossRef]
  124. Saleh, E.; Ghaffari, S.; Bretl, T.; Olson, L.; West, M. Learning from Integral Losses in Physics Informed Neural Networks. In Proceedings of the 41st International Conference on Machine Learning; Salakhutdinov, R.; Kolter, Z.; Heller, K.; Weller, A.; Oliver, N.; Scarlett, J.; Berkenkamp, F., Eds. PMLR, 21–27 Jul 2024, Vol. 235, Proceedings of Machine Learning Research, pp. 43077–43111.
  125. Chen, Y.; Wang, C.; Hui, Y.; Shah, N.V.; Spivack, M. Surface Profile Recovery from Electromagnetic Fields with Physics-Informed Neural Networks. Remote Sensing 2024, 16. [Google Scholar] [CrossRef]
  126. Mahmoud, M.G.; Hares, A.S.; Hameed, M.F.O.; El-Azab, M.S.; Obayya, S.S.A. AI-driven photonics: Unleashing the power of AI to disrupt the future of photonics. APL Photonics 2024, 9, 080902. [Google Scholar] [CrossRef]
  127. Rahman, M.M.; Khan, A.; Lowther, D.; Giannacopoulos, D. Evaluating magnetic fields using deep learning. COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 2023, 42, 1115–1132. [Google Scholar] [CrossRef]
  128. Xu, S.Y.; Zhou, Q.; Liu, W. Prediction of soliton evolution and equation parameters for NLS–MB equation based on the phPINN algorithm. Nonlinear Dynamics 2023, 111, 18401–18417. [Google Scholar] [CrossRef]
  129. Kovacs, A.; Exl, L.; Kornell, A.; Fischbacher, J.; Hovorka, M.; Gusenbauer, M.; Breth, L.; Oezelt, H.; Praetorius, D.; Suess, D.; et al. Magnetostatics and micromagnetics with physics informed neural networks. Journal of Magnetism and Magnetic Materials 2022, 548, 168951. [Google Scholar] [CrossRef]
  130. Barmada, S.; Barba, P.D.; Formisano, A.; Mognaschi, M.E.; Tucci, M. Physics-informed Neural Networks for the Resolution of Analysis Problems in Electromagnetics. Applied Computational Electromagnetics Society Journal (ACES) 2023, 38, 841–848. [Google Scholar] [CrossRef]
  131. Son, S.; Lee, H.; Jeong, D.; Oh, K.Y.; Ho Sun, K. A novel physics-informed neural network for modeling electromagnetism of a permanent magnet synchronous motor. Advanced Engineering Informatics 2023, 57, 102035. [Google Scholar] [CrossRef]
  132. Zhang, R.; Su, J.; Feng, J. Solution of the Hirota equation using a physics-informed neural network method with embedded conservation laws. Nonlinear Dynamics 2023, 111, 13399–13414. [Google Scholar] [CrossRef]
  133. Fujita, K. Impedance modeling of accelerator beams with discontinuous charge density using scattered-field physics-informed neural networks. IEICE Electronics Express 2023, 20, 20220523–20220523. [Google Scholar] [CrossRef]
  134. Fujita, K. Electromagnetic field computation of multilayer vacuum chambers with physics-informed neural networks. Frontiers in Physics 2022, 10. [Google Scholar] [CrossRef]
  135. Zhelyeznyakov, M.; Fröch, J.; Wirth-Singh, A.; Noh, J.; Rho, J.; Brunton, S.; Majumdar, A. Large area optimization of meta-lens via data-free machine learning. Communications Engineering 2023, 2, 60. [Google Scholar] [CrossRef]
  136. Chen, Y.; Dal Negro, L. Physics-informed neural networks for imaging and parameter retrieval of photonic nanostructures from near-field data. APL Photonics 2022, 7, 010802. [Google Scholar] [CrossRef]
  137. Liu, Y.H.; Liu, J.P.; Wang, B.Z.; Wang, R. A PINN framework for inverse physical design of metal-loaded electromagnetic devices. AIP Advances 2024, 14, 125201. [Google Scholar] [CrossRef]
  138. Fujita, K. Comparison of physics-informed neural networks in solving electromagnetic interior scattering problems including a relativistic beam current. Journal of Advanced Simulation in Science and Engineering 2024, 11, 73–82. [Google Scholar] [CrossRef]
  139. Dodge, S.; Barmada, S.; Formisano, A. A STacked Adaptive Residual PINN (STAR-PINN) Approach to 2D Time-Domain Magnetic Diffusion in Nonlinear Materials. IEEE Access 2025, 13, 141380–141394. [Google Scholar] [CrossRef]
  140. Zheng, X.; Peng, T.J.; Hou, J.; Zhang, Y.; Chen, L.; Qin, S.L.; Mao, Y.Q.; Lu, W.B.; Zhang, J.N.; You, J.W.; et al. Hybrid Physics-Data-Driven Neural Network for Accurate Modeling of Scattering Problems. IEEE Transactions on Antennas and Propagation 2025, 73, 6826–6838. [Google Scholar] [CrossRef]
141. Medvedev, V.; Rosskopf, A.; Erdmann, A. Generative Inverse Design of Metamaterials Enhanced by Physics-Informed Neural Network. In Proceedings of the 2025 Nineteenth International Congress on Artificial Materials for Novel Wave Phenomena (Metamaterials), 2025, pp. X–215–X–217. [CrossRef]
142. Wu, S.; Ling, H.; Zhao, K.; Hong, D. Solving the Maxwell’s Equations From Magnetic Dipole Sources in 2.5-D TI Medium With PINNs. IEEE Transactions on Geoscience and Remote Sensing 2025, 63, 1–14. [CrossRef]
143. Tosun, R.A.; Kuzucu, D.; Durgun, A.C.; Baydoğan, M.G. Fine-Pitch Interconnect Modeling Using Physics-Informed Neural Networks. In Proceedings of the 2025 IEEE 29th Workshop on Signal and Power Integrity (SPI), 2025, pp. 1–4. [CrossRef]
144. Li, H.; Liu, J.G.; Wang, Y.; Xin, X.D.; Huang, X.W.; Sheng, X.Q. Physics-Informed Deep Learning for Inverse Scattering of Irregular Targets From Near-Field Data. IEEE Antennas and Wireless Propagation Letters 2025, 24, 3734–3738. [CrossRef]
145. Fujita, K. Physics-Informed neural networks with transfer learning for space charge impedances in particle accelerators. International Journal of Applied Electromagnetics and Mechanics 2025, 78, 40–44. [CrossRef]
146. Zhang, Y.; Li, R.; Tang, H.; Shi, Z.; Wei, B.; Gong, S.; Yang, L.; Yan, B. Electromagnetic Scattering from a Three-dimensional Object using Physics-informed Neural Network. Applied Computational Electromagnetics Society Journal (ACES) 2025, 40, 103–111. [CrossRef]
147. Wan, B.; Lei, G.; Guo, Y.; Zhu, J. Physics-Informed Neural Networks Based on Unsupervised Learning for Multidomain Electromagnetic Analysis. IET Electric Power Applications 2025, 19, e70083. [CrossRef]
148. Rutigliano, N.; Rossi, R.; Murari, A.; Gelfusa, M.; Craciunescu, T.; Mazon, D.; Gaudio, P. Physics-informed neural networks for the modelling of interferometer-polarimetry in tokamak multi-diagnostic equilibrium reconstructions. Plasma Physics and Controlled Fusion 2025, 67, 065029. [CrossRef]
149. Qiao, Z.; Wang, D.; Ni, Y.; Song, K.; Li, Y.; Wang, S. A partitioned modeling approach using a physics-informed neural network for PMSM. Engineering Analysis with Boundary Elements 2025, 179, 106379. [CrossRef]
150. Xu, W.; Zhong, Q.; Wang, M.; Wei, Z.; Wang, Z.; Cheng, X. High precision, full-vector optical mode solving in waveguides via fourth-order derivative physics-informed neural networks. Opt. Express 2025, 33, 38317–38328. [CrossRef]
151. Wang, J.; Wang, D.; Wang, S.; Li, W.; Jiang, Y. Dimensionless Physics-Informed Neural Network for Electromagnetic Field Modelling of Permanent Magnet Eddy Current Coupler. IET Electric Power Applications 2025, 19, e70084. [CrossRef]
152. Riganti, R.; Zhu, Y.; Cai, W.; Torquato, S.; Negro, L.D. Multiscale Physics-Informed Neural Networks for the Inverse Design of Hyperuniform Optical Materials. Advanced Optical Materials 2025, 13, 2403304. [CrossRef]
153. Qin, H.; Zhang, T.; Bao, H.; Yu, Z.; Ding, D. Physics-Informed Neural Network for Solving Three-Dimensional Maxwell’s Equations. In Proceedings of the 2025 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2025, pp. 1–3. [CrossRef]
154. Ou, M.; Sun, Y.F.; Zhu, H.; Li, X.H. Physics-Informed Neural Network for Rapid Prediction of Wide-Angle Electromagnetic Scattering from Three-Dimensional Objects. In Proceedings of the 2025 IEEE 13th Asia-Pacific Conference on Antennas and Propagation (APCAP), 2025, pp. 178–179. [CrossRef]
155. Su, W.; Shao, W.; Cheng, X.; Ding, X. A Physics-Informed Neural Network for Unconditionally Stable Time-Domain Simulations. In Proceedings of the 2025 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2025, pp. 1–3. [CrossRef]
156. Hu, Y.D.; Zhang, K.; Zhang, L.; Du, W.; Wang, X.H. An Improved Physics-Informed Neural Networks Method for Three-Dimensional Electromagnetic Inverse Scattering Problems. In Proceedings of the 2025 International Conference on Microwave and Millimeter Wave Technology (ICMMT), 2025, pp. 1–3. [CrossRef]
157. Fan, J.; Shao, J.; Liu, J.P.; Chang, H.; Liu, Y.; Wang, R.; Wang, B.Z. Electromagnetic Inverse Design Method for 2-D Parametric-Curve-Defined Metallic Structures Based on PINNs. IEEE Transactions on Microwave Theory and Techniques 2026, 74, 1385–1395. [CrossRef]
158. Barmada, S.; Dodge, S.; Formisano, A. Weak Formulation for Physics-Informed Neural Networks in the Resolution of Analysis Problems in Electromagnetics. IEEE Transactions on Magnetics 2025, 1–1. [CrossRef]
159. Zhu, Y.; Guo, Z.; Lei, G.; Guo, Y.; Zhu, J. Self-Adaptive Physics-Informed Neural Networks for Solving 2-D Magnetostatic Fields in Open Boundaries. IEEE Transactions on Magnetics 2025, 1–1. [CrossRef]
160. Shaviner, G.G.; Chandravamsi, H.; Pisnoy, S.; Chen, Z.; Frankel, S.H. PINNs for solving unsteady Maxwell’s equations: convergence issues and comparative assessment with compact schemes. Neural Computing and Applications 2025, 37, 24103–24122. [CrossRef]
161. Wang, S.; Wang, K.; Zeng, P.; Lei, Y.; Wang, Z.; Zhang, B. Gradient-aligned physics-informed neural network for performance analysis of permanent magnet eddy current device under complex operating conditions. Expert Systems with Applications 2026, 299, 129915. [CrossRef]
162. Sun, B.; Guo, X.; Wu, F.; Gao, Z.; Su, M.; Liu, Y. A Physics-Informed Contrast Source Inversion Learning Method for Solving Full-Wave 2-D Inverse Scattering Problems. IEEE Transactions on Microwave Theory and Techniques 2025, 73, 9701–9716. [CrossRef]
163. Mo, G.; Narayanan, K.K.; Castells-Rufas, D.; Carrabina, J. Physics-Informed Neural Network Surrogate Model For Capacitive Touch Sensors By Solving Maxwell’s Equations. In Proceedings of the 39th ECMS International Conference on Modelling and Simulation (ECMS 2025), Catania, Italy, June 2025; Scarpa, M.; Cavalieri, S.; Serrano, S.; Vita, F.D., Eds.; European Council for Modeling and Simulation, 2025, pp. 390–396. [CrossRef]
164. Sun, Y.; Xv, W. Application of Physical Information Neural Network Based on Fourier Features in Electromagnetic Computing. In Proceedings of the 1st Electrical Artificial Intelligence Conference, Volume 1; Qu, R.; Song, Z.; Ding, Z.; Mu, G.; Xiong, R.; Han, L., Eds.; Singapore, 2025; pp. 63–76. [CrossRef]
165. Rigoni, T.; Arcieri, G.; Haywood-Alexander, M.; Haener, D.; Chatzi, E. Modeling GPR observations on railway tracks via black box and physics informed neural networks. In Proceedings of the 9th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), 2024, pp. 1–12. [CrossRef]
166. Toghranegar, S.; Kazmi, H.; Deconinck, G.; Sabariego, R.V. Magnetostatic and Magnetodynamic Modeling With Unsupervised Physics-Informed Neural Networks. IEEE Transactions on Magnetics 2025, 61, 1–10. [CrossRef]
167. Zhu, R.; Cong, X.; Pu, S.; Lin, N.; Dinavahi, V. SAS-PINN: An Enhanced Physics-Informed Neural Network for 2-D Time-Domain Electromagnetic Field Computation of Power Transformer. IEEE Transactions on Magnetics 2025, 61, 1–8. [CrossRef]
168. Kheir, M.; Qian, K.; Nabi, M.; Ebel, T. Modular Meshless Electromagnetic Simulation Using KAN-Based Physics-Informed Neural Networks. IEEE Journal on Multiscale and Multiphysics Computational Techniques 2025, 10, 452–458. [CrossRef]
169. Rezende, R.S.; Piwonski, A.; Schuhmann, R. An Efficient Architecture Selection Approach for PINNs Applied to Electromagnetic Problems. IEEE Transactions on Magnetics 2025, 1–1. [CrossRef]
170. Lou, X.Y.; Zhang, J.B.; Yu, D.M.; Wang, D.F.; Pan, X.M. Solution of Electromagnetic Scattering and Inverse Scattering by Integral Equations Through Neural Networks. IEEE Transactions on Antennas and Propagation 2025, 73, 9654–9659. [CrossRef]
171. Brendel, P.; Medvedev, V.; Rosskopf, A. Convolutional Physics-Informed Neural Networks for Fast Prediction of Core Losses in Axisymmetric Transformers. IEEE Transactions on Magnetics 2024, 60, 1–4. [CrossRef]
Figure 1. Physics-Informed Neural Network (PINN) workflow. A neural network with activation function σ approximates the solution, while automatic differentiation with respect to the inputs (x, t) enforces the PDE constraints. The composed loss L = Σ_i λ_i L_i (weighted by the factors λ_i) aggregates the PDE, data, initial condition, and boundary condition residuals. Training iterates until L < ε, updating the network parameters via gradient descent.
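The loss composition and stopping rule in the workflow of Figure 1 can be sketched in a few lines of code. The following is an illustrative toy example, not any reviewed implementation: a quadratic ansatz u(x) = a + b·x + c·x² stands in for the neural network (so its derivatives are available in closed form instead of via automatic differentiation), and the 1D electrostatic problem u''(x) = 0, u(0) = 0, u(1) = 1 is solved by gradient descent on the weighted residual sum. All symbols and values are hypothetical.

```python
# Toy sketch of the PINN workflow for the 1D electrostatic problem
#   u''(x) = 0 on (0, 1),  u(0) = 0,  u(1) = 1   (exact solution u(x) = x).
# The quadratic ansatz u(x) = a + b*x + c*x**2 replaces the neural network,
# so derivatives are known analytically rather than via autodiff; the
# residuals, weighted loss L = sum_i lambda_i * L_i, and iteration until
# L < eps follow the caption of Figure 1.

a, b, c = 0.5, -0.3, 0.2        # "network" parameters (illustrative start)
lam_pde, lam_bc = 1.0, 1.0      # loss weights lambda_i
lr, eps = 0.1, 1e-10            # learning rate, stopping tolerance

for step in range(10_000):
    # PDE residual: u''(x) = 2c everywhere, so L_pde = (2c)^2.
    L_pde = (2.0 * c) ** 2
    # Boundary condition residuals u(0) - 0 and u(1) - 1.
    r0 = a - 0.0
    r1 = (a + b + c) - 1.0
    L_bc = r0**2 + r1**2
    # Composed loss, aggregating the weighted residual terms.
    L = lam_pde * L_pde + lam_bc * L_bc
    if L < eps:
        break
    # Analytic gradients of L with respect to (a, b, c).
    ga = lam_bc * (2.0 * r0 + 2.0 * r1)
    gb = lam_bc * (2.0 * r1)
    gc = lam_pde * (8.0 * c) + lam_bc * (2.0 * r1)
    a, b, c = a - lr * ga, b - lr * gb, c - lr * gc

print(a, b, c)   # approaches the exact coefficients (0, 1, 0)
```

In actual PINNs the ansatz is a deep network evaluated at collocation points, and the residual gradients are obtained by reverse-mode automatic differentiation (e.g., in PyTorch or JAX); the data loss term is omitted here because the toy problem is fully specified by its PDE and boundary conditions.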
Figure 2. Flow diagram illustrating the study selection process based on the PRISMA 2020 statement.
Figure 3. Retrieved records and share of duplicates. (a) Number of records retrieved from each database. (b) Proportion of unique versus duplicate records in the dataset.
Figure 4. Number of publications. (a) Annual number of publications before applying eligibility criteria. (b) Annual number of publications after applying eligibility criteria.
Figure 5. Number of publications by document type. (a) Total number of publications by document type before applying eligibility criteria. (b) Total number of publications by document type after applying eligibility criteria.
Figure 6. Cumulative number of publications over time.
Figure 7. Number of publications by physics regime. A single publication may contribute to multiple categories. (a) Total number of publications by physics regime. (b) Annual number of publications by physics regime.
Figure 8. Occurrences of architectures. A single publication may contribute to multiple categories. (a) Total occurrences of architectures. (b) Annual occurrences of architectures.
Figure 9. Occurrences of dimensionalities. A single publication may contribute to multiple categories. (a) Total occurrences of dimensionalities. (b) Annual occurrences of dimensionalities.
Figure 10. Occurrences of media. A single publication may contribute to multiple categories. (a) Total occurrences of media. (b) Annual occurrences of media.
Figure 11. Occurrences of learning paradigms. A single publication may contribute to multiple categories. (a) Total occurrences of learning paradigms. (b) Annual occurrences of learning paradigms.
Figure 12. Contingency table of the characteristics Physics Regime and Medium. Cell values indicate the number of publications with row percentages in parentheses.
Figure 13. Contingency table of the characteristics Dimensionality and Network Architecture. Cell values indicate the number of publications with row percentages in parentheses.
Figure 14. Contingency table of the characteristics Physics Regime and Dimensionality. Cell values indicate the number of publications with row percentages in parentheses.
Figure 15. Contingency table of the characteristics Physics Regime and Learning Paradigm. Cell values indicate the number of publications with row percentages in parentheses.
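Contingency tables of the kind shown in Figures 12–15 can be generated directly from the extracted characteristics. A minimal sketch, assuming a pandas DataFrame with one publication per row and invented example tags (the category names follow Table 4; the data are hypothetical). Note one simplification: each publication is assigned a single category here, whereas in the review a publication may contribute to multiple categories.

```python
import pandas as pd

# Hypothetical extracted data: one row per publication with its
# Physics Regime and Medium tags (invented for illustration).
df = pd.DataFrame({
    "regime": ["Electrodynamics", "Electrodynamics", "Magnetostatics",
               "Electrostatics", "Electrodynamics", "Magnetostatics"],
    "medium": ["Inhomogeneous", "Homogeneous", "Homogeneous",
               "Homogeneous", "Inhomogeneous", "Inhomogeneous"],
})

# Cell counts and row percentages, as displayed in Figures 12-15.
counts = pd.crosstab(df["regime"], df["medium"])
row_pct = pd.crosstab(df["regime"], df["medium"], normalize="index") * 100

# Combine into "count (pct%)" cells.
table = counts.astype(str) + " (" + row_pct.round(1).astype(str) + "%)"
print(table)
```

With `normalize="index"` each row of `row_pct` sums to 100, which is what makes the row percentages directly comparable across physics regimes of different sizes.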
Table 1. Research questions of the SLR.
Research Question
RQ1 How extensively are PINNs applied within electromagnetics?
RQ2 Which subfields of electromagnetics are PINNs applied to?
RQ3 Which network architectures are used for PINNs in solving Maxwell’s equations?
RQ4 In what spatial dimensionality are the electromagnetic problems solved?
RQ5 Are the reviewed domains divided into different media?
RQ6 Which learning paradigms are used for solving Maxwell’s equations with PINNs?
RQ7 Are there associations between the extracted characteristics of the reviewed publications?
Table 2. Inclusion criteria.
Inclusion criterion
I1 Employed PINNs to solve Maxwell’s equations or related electromagnetic problems.
I2 Is a peer-reviewed journal article or conference paper.
I3 Described the neural network architecture used.
I4 Provided sufficient details on the application domain.
I5 Provided sufficient details on the learning paradigm.
I6 Provided sufficient details on the electromagnetic problem.
Table 3. Exclusion criteria.
Exclusion Criterion
E1 Is not written in English.
E2 Is of type review, editorial, book, awarded grant, or preprint.
E3 Is not peer-reviewed.
E4 Full-text is not accessible to the reviewers.
E5 Is not utilizing PINNs.
E6 Addresses a problem outside of electromagnetics or does not solve field equations.
Table 4. Overview of extracted characteristics and categories.
Characteristic | Description | Categories
Bibliographic
Title | Title of the publication | Free text (as extracted from the database)
Type of Publication | Document type | Journal article; Conference paper
Authors | Authors of the publication | Free text (as extracted from the database)
Publication Year | Year in which the work was published | Integer
DOI | Digital Object Identifier of the publication | Free text (as extracted from the database)
Journal | Journal or conference venue | Free text (as extracted from the database)
Problem
Physics Regime | Regime addressed in electromagnetics | Magnetostatics; Electrostatics; Magnetoquasistatics; Electroquasistatics; Electrodynamics
Dimensionality | Spatial dimensionality of the problem | 0D; 1D; 2D; 3D
Medium | Properties of the reviewed medium | Homogeneous; Inhomogeneous
Model
Network Architecture | Neural network architecture used in the PINN | Feedforward Neural Network (FNN); Graph Neural Network (GNN); Transformer; Recurrent Neural Network (RNN); Autoencoder; Convolutional Neural Network (CNN); DeepONet; Other
Learning Paradigm | Learning setting used to train the model and use of data | Supervised Learning; Semi-supervised Learning; Unsupervised Learning
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.