Preprint
Review

This version is not peer-reviewed.

Artificial Intelligence and Its Immense Relevance to Composite Materials: A Snapshot

Submitted: 08 August 2025

Posted: 08 August 2025


Abstract
Composite materials have transformed a variety of industries, including aerospace, automotive engineering, and renewable energy, by offering unmatched mechanical strength-to-weight ratios, multifunctionality, and other performance advantages. Composite structures are deliberate combinations of reinforcements, such as carbon or glass fibers, with engineered matrices, yielding performance metrics unattainable by monolithic metals or ceramics. Nevertheless, the design, optimization, and lifetime management of these materials remain constrained by multiscale complexity, computational cost, and experimental resource intensity. In this article, we assess how artificial intelligence (AI), including machine learning (ML), physics-informed neural networks (PINNs), graph neural networks (GNNs), and quantum-based approaches, is changing the composite materials lifecycle. We systematically examine how AI is accelerating property prediction, enabling inverse design, optimizing manufacturing conditions, and informing intelligent structural health monitoring (SHM). For example, with sufficient training, GNNs now model interfacial adhesion with accuracy comparable to density functional theory (DFT) while running three orders of magnitude faster. Generative diffusion models trained on synthetic microstructures are being used to prototype wind turbine blades and aerospace panels in less than one month, instead of the 12-18 months typical of traditional iterative approaches. Further, transformer-based multimodal inspection systems have achieved >95% classification accuracy across 17 defect types while improving spatial resolution roughly 15-fold over conventional nondestructive testing (NDT) standards.
We present GNoME (Graph Neural composite Optimization and Modeling Environment), a unified interface for combining multiscale simulation, generative architectures, and experimental feedback. GNoME represents an important step from purely data-driven learning toward hybrid approaches that incorporate physics, uncertainty quantification, and domain-specific priors. We also highlight persistent challenges, including sparse data environments, black-box model interpretability, and regulatory hurdles. The review ends with a technology roadmap covering quantum-enhanced AI, ethical governance in industrial applications, and sustainable composite design paradigms. It synthesizes over 300 studies published between 2018 and 2022, with the aim of giving researchers and practitioners practical frameworks for applying AI to intelligent, adaptive, and sustainable next-generation composites.

Introduction

Composite materials are engineered combinations of two or more constituent materials that retain their individual identities yet, when combined, impart synergistic properties beyond those of the originals. Most composites pair a high-strength fiber (carbon, glass, aramid, basalt) with a matrix phase (polymer, ceramic, metal) [1]. The fiber carries the mechanical loads while the matrix transfers stresses between fibers, surrounds and protects them from the environment, and contributes characteristics such as toughness, thermal stability, or dielectric properties. New interface designs, nanofiller dispersion strategies, and hierarchical reinforcement create composite and armor systems that outperform classical unidirectional laminates (reliability improvements of roughly 19% have been reported), with tunable thermal conductivities, shape-memory responses, and the potential for embedded sensing.
In 2024, composites made up as much as 50% by mass of modern aerospace platforms (e.g., the Boeing 787 Dreamliner and Airbus A350), 100-meter wind turbine blades, EV battery containment, and future satellites, among others [2,3]. High stiffness-to-density ratios combined with corrosion resistance have delivered fuel savings and structural light-weighting, improving lifetime performance in extremely harsh service conditions. Table 1 summarizes key AI techniques and their applications in design, manufacturing, and inspection. Despite immense progress over recent decades, the full design and deployment of composites are still inhibited by three interrelated bottlenecks: computational expense, the cost of experimental characterization, and the lack of high-fidelity materials data.

Computational Cost Barriers

Composite material modeling is synonymous with multiscale simulation. Accurate predictions of composite behavior encompass phenomena that span at least nine orders of magnitude in spatial dimensions, from quantum-mechanical simulations of interfacial bonding to finite element analysis (FEA) of large structural components. A single mechanical model of damage propagation for a wing spar can require over 15,000 core-hours on a high-performance computing (HPC) cluster when simulating delamination, matrix cracking, and fiber–matrix debonding events.
Atomistic simulations are especially demanding. Molecular dynamics (MD) or density functional theory (DFT) calculations used simply to characterize adhesion, chemical functionalization effects, or nanofiller-matrix interactions can run for weeks on HPC systems while examining only a tiny region of an interface or the interactions of 10,000-50,000 atoms [4]. For industrial engineers, this computational burden is unacceptable, especially when hundreds or thousands of design permutations must be assessed for practical implementation.
The combinatorial nature of composite design quickly makes brute-force computation infeasible. For example, a single laminated-panel design with just 10 fiber orientations, 5 matrix types, 4 stacking sequences, and 3 curing cycles yields 600 discrete combinations. System-level optimization with realistic geometries (e.g., stiffened fuselage panels, turbine blades) would require petascale computation, beyond the reach of even the best research labs and many industrial design teams. AI-based surrogate models that replicate full-physics simulations in a fraction of the time and resources are therefore now essential.
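The size of this design space is trivial to enumerate in code; the sketch below (variable names, option values, and the per-candidate cost figure are illustrative assumptions) makes concrete why brute force fails to scale:

```python
from itertools import product

# Hypothetical discrete design variables for a laminated panel
# (names and values are illustrative, not from a real catalog).
fiber_orientations = [0, 15, 30, 45, 60, 75, 90, -15, -30, -45]   # 10 options
matrix_types = ["epoxy", "BMI", "PEEK", "polyester", "vinylester"]  # 5 options
stacking_sequences = ["sym", "antisym", "quasi-iso", "cross-ply"]   # 4 options
curing_cycles = ["cycle_A", "cycle_B", "cycle_C"]                   # 3 options

design_space = list(product(fiber_orientations, matrix_types,
                            stacking_sequences, curing_cycles))

# 10 * 5 * 4 * 3 = 600 discrete candidates for a single panel.
print(len(design_space))  # -> 600

# Brute-force cost if each candidate needed a full-physics simulation
# at, say, an assumed 1,000 core-hours apiece:
core_hours = len(design_space) * 1_000
print(core_hours)  # -> 600000
```

Adding even one more variable (e.g., 5 filler loadings) multiplies the space again, which is exactly the growth surrogate models are meant to short-circuit.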

Experimental Characterization and Variability

Like simulation, composite characterization is expensive and subject to variation. Nondestructive inspection methods such as micro-computed tomography (µCT) are frequently employed to characterize fiber misalignment, void distributions, and ply wrinkling in carbon-fiber laminates; however, achieving sub-micron resolution (a voxel size of 0.4–0.7 µm) typically costs about $500–1,000 per hour at synchrotron beamlines, and post-processing often requires more than four days per scan [5].
Complicating matters further, manual segmentation and interpretation of these images involve substantial human variability. Differences of up to 30% in estimated fiber orientation tensors have been documented when multiple operators analyzed the same scan. This complicates model validation and hinders the training of machine learning algorithms, which rely on consistent “ground truth” labels for supervised learning.
Mechanical testing suffers similar consistency problems. Inter-laboratory studies have shown that even standardized tensile and fatigue tests on nominally identical specimens exhibit 10–25% variation in ultimate strength, driven by tabbing, gripping, loading rates, and humidity control during conditioning. When these variations go undocumented in the recorded metadata, they contaminate datasets and reduce the reliability of AI models trained on them.

Lack of Comprehensive Data for Machine Learning

The biggest challenge to widespread AI adoption in composites is the availability and fragmentation of usable data. Of the thousands of papers and experimental studies in composites research, only a small percentage are available in a structured, machine-readable format. For instance, a recent meta-analysis found that only 8% of known polymer matrix materials have publicly available full thermomechanical property profiles [4].
Data are particularly scarce for contextually important properties such as fatigue behavior, creep, and environmental degradation; these are essential for service-life prediction but difficult to measure, resulting in thin datasets. Material descriptions are also often hidden behind proprietary naming conventions or lack essential specifications, which makes aggregating data across studies difficult. A compelling example is the naming of materials as “epoxy A” or “glass fiber B”, labels that can cover multiple formulations across suppliers, or even batches, rendering reproducibility impossible without precise chemical and processing metadata.
AI models trained on limited or poorly labeled datasets can overfit and fail to generalize beyond the training data, so it is essential that the training data be rich and meaningful enough to preserve the important patterns. Even models using transfer learning or domain adaptation exhibit wide confidence intervals. Consequently, their application in high-consequence scenarios, such as designing an aircraft wing or a new offshore wind blade, carries too much uncertainty to be actionable given the catastrophic consequences of error.

Opportunity Landscape for Artificial Intelligence

The application of AI nevertheless presents a viable path forward. Machine learning algorithms, especially deep learning and probabilistic graphical models, are well suited to identifying patterns and associations in noisy, high-dimensional datasets [4,6]. Domain knowledge remains essential: there is little benefit in using AI models if the contextual knowledge underlying the modeling process is ignored. Physics-informed neural networks (PINNs) provide a mechanism to embed conservation laws and governing equations directly in the learning process, incorporating domain knowledge and extending physics-based models in a sample-efficient manner. Graph neural networks (GNNs) represent the state of the art for modeling non-Euclidean geometries poorly captured by classical models, designing fiber architectures, and predicting interfacial strength, crack initiation, and damage accumulation.
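The PINN idea, penalizing violations of a governing equation alongside data misfit, can be illustrated without any deep learning machinery. The sketch below applies the same physics-plus-data loss to a polynomial ansatz for a toy ODE (the equation and boundary values are assumptions chosen for illustration, not a composite model):

```python
import numpy as np

# Physics-informed least squares: the same idea a PINN implements with a
# neural network, shown with a polynomial ansatz so it stays tiny.
# Toy governing equation: u'' + u = 0 on [0, pi/2],
# with "data" u(0) = 0 and u(pi/2) = 1 (exact solution: u = sin x).

deg = 5
xs = np.linspace(0.0, np.pi / 2, 30)          # collocation points

def basis(x, d2=False):
    """Rows of x^k for k = 0..deg, or of d2/dx2 x^k = k(k-1) x^(k-2)."""
    k = np.arange(deg + 1)
    if not d2:
        return x[:, None] ** k
    out = np.zeros((len(x), deg + 1))
    for kk in range(2, deg + 1):
        out[:, kk] = kk * (kk - 1) * x ** (kk - 2)
    return out

# Physics-residual rows: enforce u''(x) + u(x) = 0 at collocation points
A_phys = basis(xs, d2=True) + basis(xs)
b_phys = np.zeros(len(xs))

# Data rows: boundary conditions, weighted more heavily
w = 100.0
A_bc = w * basis(np.array([0.0, np.pi / 2]))
b_bc = w * np.array([0.0, 1.0])

A = np.vstack([A_phys, A_bc])
b = np.concatenate([b_phys, b_bc])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u = lambda x: (x[:, None] ** np.arange(deg + 1)) @ coef
print(float(u(np.array([np.pi / 4]))[0]))     # close to sin(pi/4) ~ 0.7071
```

A real PINN swaps the polynomial for a neural network and the linear solve for gradient descent, but the loss structure, physics residual plus data misfit, is identical.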
Generative models, including variational autoencoders (VAEs), diffusion models, and generative adversarial networks (GANs), have made a significant impact on the inverse design of composite microstructures by generating new layups, fiber patterns, and filler-stacking sequences that meet user-defined property targets, potentially reducing innovation timelines from years to weeks.
When AI is merged with real-time sensing technologies (e.g., strain gauges, acoustic emission, thermal profiles), predictive maintenance becomes possible through data-driven “digital twins.” Such frameworks offer opportunities not only to reduce unexpected downtime but also to enable adaptive control during manufacturing, improving the consistency of process outputs.

Background of Composites and Design Constraints

In recent decades, composite materials have evolved from traditional fiber–matrix laminates into a large class of multifunctional architected systems with structural, thermal, electrical, and adaptive capabilities. Composite materials, defined as two or more constituents combined to achieve properties that the individual constituents alone cannot, are critical to lightweight design, aerospace structural durability, renewable energy infrastructure, and biomedical implants. As functionality and structural complexity grow, design constraints change as well, particularly with the increasing integration of optimization, simulation, and diagnostics with artificial intelligence (AI) and computational techniques [7].

Advanced Composite Systems

Contemporary composites have moved beyond the paradigms of the past and the constraints of unidirectional or woven fibers held in matrix architectures. The advance into more innovative designs, from biomimetic architectures through programmable matter to responsive materials, is redefining performance and flexibility. These new systems are co-developed with AI tools to tune mechanical, thermal, and dynamic behavior, individually or jointly, within complicated, often mission-profiled environments.
Self-healing composites, for example, use microencapsulated healing agents or reversible chemistries to restore load-bearing capability after damage. Experimental systems have been reported to recover up to 92% of their peak strength over repeated stress cycles (up to 198 MPa), nearly eight times the tensile strength of conventional foamed structural polymers [8]. Further, generative diffusion models trained on soft-constrained Fickian transport simulations have produced designs with healing efficiencies 27% better than their manually engineered counterparts [6].
4D and responsive composites have also emerged as an entry point to smart structural systems: stimulus-sensitive matrices such as shape-memory polymers (SMPs) make it possible to program predictable performance over time.
Cellulose nanocrystal/epoxy composites are one example: moisture gradients induce anisotropic swelling that drives strains beyond 120%. Long short-term memory (LSTM) neural networks trained on time-series environmental data can predict the curvature evolution of these structures with a coefficient of determination (R²) above 0.94, opening possibilities for autonomous deployment and reconfiguration strategies beyond what initial pathway modeling achieved.
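As a concrete illustration of the modeling primitive involved, the following is a minimal NumPy sketch of a single LSTM cell step, the building block that such curvature-forecasting models stack and train. The weights are random stand-ins and the input names are assumptions; a trained model would learn W, U, and b from time-series environmental data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. x: input vector, h: hidden state, c: cell state.
    W, U, b hold the stacked input/forget/output/candidate parameters."""
    n = h.shape[0]
    z = W @ x + U @ h + b                # shape (4n,)
    i = sigmoid(z[0:n])                  # input gate
    f = sigmoid(z[n:2 * n])              # forget gate
    o = sigmoid(z[2 * n:3 * n])          # output gate
    g = np.tanh(z[3 * n:4 * n])          # candidate cell update
    c_new = f * c + i * g                # gated memory update
    h_new = o * np.tanh(c_new)           # gated output
    return h_new, c_new

n_in, n_hid = 3, 4                       # e.g., humidity, temperature, load
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):                      # unroll over a 10-step sequence
    x_t = rng.normal(size=n_in)          # stand-in environmental readings
    h, c = lstm_step(x_t, h, c, W, U, b)

print(h.shape)                           # final hidden state, shape (4,)
```

In practice the final hidden state feeds a regression head that outputs predicted curvature; frameworks like PyTorch or Keras provide trained, batched versions of exactly this cell.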
Architected metamaterials are also pushing limits, achieving unexpectedly high performance through unit-cell geometry rather than material composition. This extends beyond negative Poisson's ratios to programmable anisotropy and tensile stabilization under compressive stresses. Chiral honeycombs have recently been demonstrated with Poisson's ratios as low as -0.8, along with tensegrity lattices sustaining recoverable deformations above 300%. Determining unit-cell architectures for such performance targets is now taking a leap forward: conditional GANs (cGANs) trained on more than 50,000 finite element simulations can synthesize a geometry with a desired stiffness tensor in minutes, compared with the weeks previously required.
The promise is that a new generation of composites enabled by AI-based design and simulation frameworks can open new performance and application opportunities. Yet they compound existing challenges in microstructural characterization, defect control, and multiscale property integration [9].

Persistent Experimental Challenges

Despite theoretical and computational progress in composite design, experimental verification, defect correction, and manufacturing regulation remain obstacles to practice. High-resolution microstructural imaging and feature extraction impose some of the largest time and cost burdens on quality improvement.
Sub-micron microstructural studies are costly. At a resolution of 0.7 µm, synchrotron-based micro-computed tomography (micro-CT) of a single 1 mm³ sample can take more than 72 hours, with scanning costs exceeding $18,000 per specimen [10]. Manual segmentation is time-consuming and error-prone, with inter-annotator discrepancies reaching 18%. This process has recently been automated using attention-augmented U-Nets, achieving combined segmentation and analysis accuracy above 97% while cutting analysis time by more than 90% [11].
Simultaneously, synthetic data generation is increasingly used as a substitute for expensive imaging. Physics-informed generative adversarial networks (GANs) trained on real and simulated data can produce virtual microstructures with prescribed void distributions and fiber orientations. Mechanical properties predicted from these synthetic volumes fall within 5% of values measured on physical samples, alleviating data bottlenecks when training AI models for property prediction [12].
Another major challenge concerns manufacturing-driven defects, particularly in automated fiber placement (AFP). In AFP, variations in applied temperature and pressure can produce void contents ranging from 0.5% to 2.3%, degrading mechanical performance [13]. Reinforcement learning (RL) controllers that leverage sensor information, such as infrared thermal images, have achieved void contents below 0.8% while boosting deposition rates by 22% [14]. These systems optimize yield and, importantly, can make intelligent corrections in real time during fabrication.
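The RL control pattern can be sketched with tabular Q-learning on a deliberately toy surrogate of the AFP process. The state/action discretization and the dynamics below are invented for illustration, not a validated AFP model; they only show how a controller learns a corrective policy from reward feedback:

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 3           # void-level bins 0 (best) .. 4 (worst)
Q = np.zeros((n_states, n_actions))  # actions: 0 = power down, 1 = hold, 2 = power up
alpha, gamma, eps = 0.2, 0.9, 0.2    # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: more heater power tends to reduce void content."""
    drift = {0: +1, 1: 0, 2: -1}[action]
    nxt = int(np.clip(state + drift, 0, n_states - 1))
    return nxt, -float(nxt)          # fewer voids -> higher reward

for episode in range(500):
    s = int(rng.integers(0, n_states))
    for t in range(20):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(0, n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

# In a high-void state the learned greedy action is "power up" (2).
print(int(np.argmax(Q[4])))
```

A production controller would replace the tabular Q with a neural network over thermal-image features and act on real actuators, but the learn-from-reward loop is the same.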
Taken together, these advances underscore how tightly experimental validation and AI-powered instruments are now intertwined. Practical defect identification, high-resolution imaging, and data-efficient surrogate modeling will be essential to ensure that next-generation composites deliver the desired performance in both simulated and real service conditions.

Artificial Intelligence Techniques Overview

The increasing complexity of composite materials, ranging from multi-phase interfaces to programmable architectures, has driven a parallel advancement in the algorithms used to model, design, and optimize them. Artificial intelligence (AI), in its many forms, is central to this transformation. While machine learning and deep learning have already accelerated traditional materials workflows, frontier AI paradigms, including quantum machine learning (QML), neural operators, and physics-informed generative models, are expanding the boundaries of computational materials science. This section surveys these AI methods and their implications for composite material systems.

Quantum Machine Learning (QML)

Quantum machine learning offers a promising platform for exploring high-dimensional design and property spaces that are otherwise computationally intractable. Quantum parallelism, superposition, and entanglement in particular can enable more efficient searches of complex configuration spaces. Given the highly complex structure/property interactions, nonlinear interfacial responses, defect topologies, and processing routes at play in composite materials, QML methods show budding promise. To illustrate, quantum support vector regression (QSVR) models implemented on superconducting qubit hardware have been used to predict interfacial shear strengths in graphene-epoxy nanocomposites. Trained on 150 samples, the QSVR model achieved 94% prediction accuracy, competitive with, and in many cases superior to, classical kernel methods at comparable computation time owing to quantum acceleration [14]. This suggests QML is not merely practical but potentially advantageous in low-data regimes, a frequent situation in materials science.
Quantum combinatorial optimization, particularly fiber placement path planning on D-Wave quantum annealers, is another fruitful avenue. Formulated as an Ising model, the fiber-orientation problem was optimized to reduce peak stress-concentration areas by 40% relative to standard gradient-based algorithms [15]. This is potentially of great benefit to large composite structures (pressure vessels and wing skins being prime examples), where optimal path planning can reduce the risk of delamination or buckling.
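The kind of formulation sent to an annealer can be shown on a miniature version of the problem. The sketch below builds a QUBO for choosing one ply angle per layer, with penalty weights that are illustrative assumptions, and brute-forces the 9-variable energy landscape in place of quantum hardware:

```python
import numpy as np
from itertools import product

# Toy QUBO for fiber-orientation selection. One binary variable per
# (ply, angle) pair; penalty values A and B are illustrative assumptions.
plies, angles = 3, [0, 45, 90]
P = len(angles)
A, B = 4.0, 1.0   # one-hot constraint weight; adjacent-same-angle penalty

def energy(x):
    """x is a (plies, P) binary candidate matrix."""
    e = 0.0
    for p in range(plies):
        e += A * (x[p].sum() - 1) ** 2       # each ply picks exactly one angle
    for p in range(plies - 1):
        e += B * float(x[p] @ x[p + 1])      # discourage identical neighbors
    return e

# Brute force stands in for the annealer on this 2^9-state problem.
best, best_e = None, np.inf
for bits in product([0, 1], repeat=plies * P):
    x = np.array(bits).reshape(plies, P)
    e = energy(x)
    if e < best_e:
        best, best_e = x, e

layup = [angles[int(np.argmax(row))] for row in best]
print(best_e, layup)   # energy 0.0; adjacent plies differ in angle
```

On a real annealer the same quadratic energy is encoded as a binary quadratic model and sampled physically; the toy version makes the constraint-as-penalty structure explicit.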
Nevertheless, industrial QML adoption faces obstacles. On current-generation quantum hardware, coherence times are almost always below 100 microseconds, and error correction remains costly and immature. Hybrid quantum-classical models are therefore being studied as a transitional stage, in which quantum subroutines carry out key optimization steps within otherwise classical workflows [16].

Advanced Neural Architectures

Decades have passed since the introduction of convolutional neural networks (CNNs); recent advances, however, have produced a new class of neural architectures with significantly improved learning efficiency and modeling accuracy, notably vision transformers (ViTs), graph neural networks (GNNs), diffusion models, and neural operators.
Vision transformers represent a shift toward global context rather than local pattern recognition. By dividing 3D micro-CT volumes into patch embeddings and processing them with multi-head self-attention, ViTs can learn physically correlated but spatially distant defect relationships. In a study of barely visible impact damage (BVID) in laminated carbon fiber composites, a ViT developed by a Boeing researcher to associate thermal diffusion imaging anomalies with subsurface delamination achieved an overall mean average precision (mAP) of 98.7%, vastly exceeding the accuracy reported for conventional CNNs [5].
Graph neural networks (GNNs), by contrast, offer a paradigm in which relational structure matters more than pixel grids. In composite modeling, fibers can be treated as nodes and the interfaces around them as edges, enabling relational reasoning across the whole microstructure. In simulation studies, GNNs have predicted interfacial failure stresses to within roughly 0.8 GPa of atomistic benchmarks while running more than three orders of magnitude faster than full molecular dynamics (MD) simulations [17] (see Figure 1).
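A minimal version of the underlying message-passing step, with fibers as nodes and interfaces as edges, fits in a few lines of NumPy. The features, adjacency, and weight matrices below are random stand-ins for quantities a trained GNN would learn:

```python
import numpy as np

rng = np.random.default_rng(0)

n_fibers, d = 5, 3
X = rng.normal(size=(n_fibers, d))   # per-fiber features (e.g., diameter,
                                     # orientation, sizing chemistry)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # interfacial adjacency

# Symmetric adjacency, row-normalized for mean aggregation over neighbors
A = np.zeros((n_fibers, n_fibers))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_mean = A / A.sum(axis=1, keepdims=True)

W_self = rng.normal(size=(d, d)) * 0.5   # stand-in learned weights
W_nbr = rng.normal(size=(d, d)) * 0.5

def gnn_layer(H):
    """h_i <- relu(W_self h_i + W_nbr * mean_{j in N(i)} h_j)."""
    return np.maximum(0.0, H @ W_self.T + (A_mean @ H) @ W_nbr.T)

H = gnn_layer(gnn_layer(X))              # two message-passing rounds
interface_scores = H.mean(axis=1)        # toy per-node readout head
print(H.shape)                           # (5, 3)
```

Libraries such as PyTorch Geometric implement trained, batched versions of this layer; the point here is that each round mixes every fiber's state with its interfacial neighbors'.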
Neural operators are defined over infinite-dimensional function spaces, which distinguishes them as tools for learning the solution operators of the partial differential equations that govern composite behavior. Fourier neural operators (FNOs) trained on 15,000 finite element simulations have approximated stiffness fields for a recently created mechanical metamaterial, reducing 18-hour computations to under 70 seconds per simulation with an average relative L2 error of only 2.7% [18]. These models are now deployed for rapid stiffness-field homogenization in real-time digital twin environments.
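The core FNO operation, a spectral convolution that keeps only low Fourier modes, is compact enough to sketch directly. The 1-D field and the random spectral weights below are illustrative stand-ins for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

n, modes = 64, 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(3 * x)          # stand-in input field

# "Learned" complex spectral weights for the retained low modes
R = rng.normal(size=modes) + 1j * rng.normal(size=modes)

def fourier_layer(u):
    u_hat = np.fft.rfft(u)                   # to frequency space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = R * u_hat[:modes]      # truncate + weight low modes
    return np.fft.irfft(out_hat, n=n)        # back to physical space

v = fourier_layer(u)
print(v.shape)                               # (64,)
```

A full FNO stacks several such layers with pointwise linear maps and nonlinearities between them; because the learned operator acts in frequency space, the same weights can be evaluated on finer or coarser grids.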
Diffusion models, originally developed for generative imaging, have now been adapted to composite microstructure generation. Researchers have begun incorporating statistical priors and physical constraints (such as void fraction and fiber alignment) to generate synthetic nanotube distributions that match experimental µCT scans up to 95%. Such methods support data augmentation and virtual testing, and are particularly promising where large imaging datasets are hard to obtain, as in nanocomposite design [19].

Active Learning Frameworks

Obtaining experimental data on composite materials is costly, time-consuming, and often duplicative when not planned appropriately. Active learning frameworks address this by selecting the samples that provide the most information per query, reducing experimental burden while improving learning.
Bayesian optimization (BO) has become popular owing to its success in process parameter tuning. In one autoclave curing study, a Gaussian process surrogate model with expected improvement as the acquisition function was used to determine the best temperature and pressure cycles. Only 15 experiments were needed to reach 99.9% resin cure, roughly 70% faster than conventional grid search.
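The BO loop can be sketched end to end with a small Gaussian process and an expected-improvement rule. The cure-quality function, kernel length scale, and temperature grid below are invented for illustration, not real cure kinetics:

```python
import numpy as np
from math import erf

def f(T):
    """Made-up cure-quality curve peaking at T = 180 C (stand-in 'experiment')."""
    return np.exp(-((T - 180.0) / 25.0) ** 2)

def rbf(a, b, ell=15.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def ncdf(z):  # standard normal CDF via erf
    return 0.5 * (1.0 + np.array([erf(t / np.sqrt(2.0)) for t in z]))

def npdf(z):  # standard normal PDF
    return np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)

grid = np.linspace(120.0, 240.0, 200)     # candidate cure temperatures (C)
X = np.array([130.0, 200.0])              # two initial experiments
y = f(X)

for it in range(8):
    # GP posterior mean and variance on the grid (unit prior variance)
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    sd = np.sqrt(np.clip(var, 1e-12, None))

    # Expected improvement over the best observation so far
    best = y.max()
    z = (mu - best) / sd
    ei = (mu - best) * ncdf(z) + sd * npdf(z)

    x_next = grid[int(np.argmax(ei))]     # run the most promising experiment
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best_T = float(X[int(np.argmax(y))])
print(best_T)                             # near the true optimum of 180 C
```

Each iteration trades off exploiting the current best region against exploring uncertain temperatures, which is why BO needs far fewer experiments than a grid sweep.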
Multi-fidelity modeling is another important technique, especially for combining inexpensive low-fidelity simulations (e.g., classical MD) with sparse, high-fidelity quantum mechanical data. For example, a NASA-led project built a multi-fidelity Gaussian process correlating molecular dynamics (50 CPU-hours per iteration) with ab initio calculations (5,000 CPU-hours), reducing computational cost by 85% while maintaining interface-energy predictions accurate to 0.05 eV/cm² [20].
Lastly, quantum active learning (QAL) is an emerging, still under-studied approach to minimizing experimental fatigue testing. Using quantum kernel estimation and entropy-based query selection on IBM quantum devices, initial QAL models have reduced the number of required fatigue tests by 70% while still mapping complete S–N curves for composite laminates [21]. Such advances point to an important role for quantum-classical hybridization in active learning loops. These and other AI techniques applied to composite materials are summarized in Table 2.

Cross-Architectural Integration

In a recently published article on helicoidal laminate design, cross-architectural integration of AI models achieved 22% better impact resistance than classical optimization routines [25]. Such methods are now taking shape in the arena of bioinspired and topologically designed materials.
Physics-informed generative adversarial networks (PI-GANs), in which soft constraints derived from equilibrium or constitutive equations are imposed on the adversarial training loop, are increasingly used to generate physically valid microstructures from minimal training data. In one implementation, this reduced the number of training samples needed by 60% while matching the fidelity of the stress distribution over an entire representative composite section [26].
The approach is particularly applicable in regimes where labeled data are scarce or incomplete, such as new composite development or in-field structural health monitoring. When physical laws are embedded in the learning algorithm, the system remains accurate and generalizes well rather than overfitting to noise or sparsely sampled data.

Challenges: Scalability and Interpretability

As the complexity and capacity of AI models grow, so do their computational and practical limits. Scalability to large structures and interpretability of model decisions have become recurring concerns when applying AI to composite structures.
The scalability problem is of particular concern for graph-based models such as GNNs. As geometric structures grow, for instance to a meter-scale aerospace panel, the number of nodes and edges increases quadratically, bringing memory and convergence challenges. Although mitigation strategies such as multi-resolution coarsening, graph partitioning, and pooling have been developed, accuracy still degrades substantially beyond component-level geometries. One GNN-based model for a full-scale wingbox maintained 90% accuracy in predicting global deformation modes, but relied on ad hoc partitioning algorithms that may not be replicable or transferable to other geometries [27].
Although attention maps from ViTs and GNNs can hint at which features drive predictions, the causal links between microstructure and properties often remain beyond any worthwhile hypothesis. In a SHAP (SHapley Additive exPlanations) analysis of transformer-based strength prediction models, resin-rich zones accounted for the largest fraction of predicted failure areas in 73% of experiments, consistent with classical laminate theory, though concerns about model overfitting remained [28]. This points to the need for more rigorous frameworks that couple explainable AI (XAI) tools with physics-based validation. Post hoc interpretations are far more credible when grounded in forward simulation or sensitivity analyses that supply physically plausible motivations for model predictions. Beyond goodness-of-fit, we should seek assurance that AI models behave sensibly in the physical world; uncertainty quantification should accompany predicted intervals, and model attributions should be available and stable, especially when such techniques are deployed in regulated sectors.
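One dependency-free way to ground such attribution claims is permutation importance, a coarser, model-agnostic cousin of SHAP: shuffle one input feature and measure how much prediction error grows. The sketch below uses synthetic data; the feature names and the linear stand-in "model" are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: "strength" depends strongly on feature 0 (say, resin-rich
# zone fraction) and weakly on feature 2; feature 1 is irrelevant.
n = 500
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# "Model": ordinary least squares standing in for any trained predictor
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X_: X_ @ w
mse = lambda X_, y_: float(np.mean((predict(X_) - y_) ** 2))

base = mse(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's relationship
    importance.append(mse(Xp, y) - base)  # error increase = importance

print(int(np.argmax(importance)))         # dominant feature index (0)
```

If a model's permutation (or SHAP) attributions contradict known laminate physics, that is a red flag worth investigating before trusting the predictions in a certification context.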

AI-Driven Materials and Inverse Design

Developing advanced composites is no longer, in general, a matter of forward property simulation: one must intelligently search enormous design spaces under the constraint of meeting given performance requirements and diverse restrictions. Inverse design, the specification and synthesis of material structures or compositions to satisfy particular functional requirements, is no longer an aspirational goal but an emerging capability enabled by artificial intelligence (AI) combined with high-throughput experimentation, generative models, and quantum computation. This section explains how AI-powered models are changing the composite design flow, particularly microstructural optimization, polymer-filler selection, and process-condition mapping. The convergence of generative models, quantum capabilities, and privacy-preserving collaborative intelligence tools is redefining how next-generation materials are conceived and industrialized.

High-Throughput Virtual Screening

High-throughput virtual screening (HTVS) is of particular interest for materials discovery in systems with nearly unlimited chemical diversity, such as polymer composites, whose interfacial phenomena resist general statements. Historically, property predictions for composites required atomistic or mesoscale simulations for each candidate material, which is infeasible when screening tens of thousands of combinations. Recent advances in natural language processing (NLP), automated feature engineering, and transfer learning have increased the scalability of HTVS platforms by orders of magnitude.
One key project in this area is the Materials Genome Initiative, which developed an NLP pipeline to mine more than 200,000 polymer-science journal articles, extracting approximately 145 chemically and physically relevant descriptors (e.g., chain rigidity, hydrogen-bonding capacity, backbone flexibility, and side-group electronegativity) to feed graph convolutional networks (GCNs) trained on additional nanocomposite datasets. With this screening platform, nearly 100,000 virtual polyimide-filler composites could be screened within a day.
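The screening arithmetic above (hundreds of polymers times hundreds of fillers gives roughly 10^5 combinations) becomes cheap once a surrogate scorer exists. A minimal sketch, with the trained GCN replaced by a stand-in scoring function and invented descriptor tables:

```python
import heapq
import itertools
import random

random.seed(0)

# Hypothetical descriptor tables; values are illustrative placeholders, not
# the 145 NLP-mined descriptors from the actual pipeline.
polymers = {f"PI-{i}": {"rigidity": random.random(), "h_bond": random.random()}
            for i in range(400)}
fillers = {f"F-{j}": {"surface_energy": random.random()} for j in range(250)}

def surrogate_score(p, f):
    # Stand-in for a trained GCN: a cheap function of the descriptors.
    return 0.6 * p["rigidity"] + 0.3 * p["h_bond"] + 0.4 * f["surface_energy"]

# Enumerate all 400 x 250 = 100,000 polymer/filler combinations lazily and
# keep only the 10 highest-scoring candidates for follow-up simulation.
scored = ((surrogate_score(p, f), pn, fn)
          for (pn, p), (fn, f) in itertools.product(polymers.items(), fillers.items()))
top10 = heapq.nlargest(10, scored)
```

The generator plus `heapq.nlargest` keeps memory flat regardless of how large the virtual library grows, which is the property that makes day-scale screening of 10^5 candidates feasible.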
Transfer learning is especially promising for HTVS. Models pre-trained on metal-organic frameworks (MOFs) have been adapted to predict adhesion energies in carbon fiber/epoxy systems with a mean absolute error of about 8 percent while requiring very little additional training data [29]. This degree of cross-domain flexibility saves time and compute when building predictive models for chemically underrepresented composite systems.
Such screening systems are now being integrated with robotic laboratories to close the experimental loop. Reinforcement learning agents prioritize which candidates to synthesize and test, creating a virtuous cycle in which the models improve with real-time experimental feedback. Such adaptive pipelines are widely expected to dominate composite-innovation cycles over the coming decade.
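A minimal version of such a closed loop can be framed as a bandit problem: a UCB-style agent decides which candidate to synthesize next and updates its estimates from the (here simulated) measurement. All candidate qualities and noise levels below are synthetic placeholders, not data from any cited system.

```python
import math
import random

random.seed(1)

# 20 hypothetical candidate composites with hidden "true" quality; the lab
# measurement is the true value plus experimental noise.
true_quality = [random.random() for _ in range(20)]
est = [0.0] * 20      # running mean of measured quality per candidate
n_tested = [0] * 20
t = 0

def pick():
    # Untested candidates first, then highest upper confidence bound (UCB1).
    for i, n in enumerate(n_tested):
        if n == 0:
            return i
    return max(range(20),
               key=lambda i: est[i] + math.sqrt(2 * math.log(t) / n_tested[i]))

for _ in range(200):           # 200 simulated synthesis-and-test rounds
    t += 1
    i = pick()
    measurement = true_quality[i] + random.gauss(0, 0.05)  # noisy experiment
    n_tested[i] += 1
    est[i] += (measurement - est[i]) / n_tested[i]         # running mean

best = max(range(20), key=lambda i: est[i])
```

Real pipelines replace the scalar estimate with a full surrogate model and the UCB bonus with acquisition functions, but the explore/exploit bookkeeping is the same.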

Generative Models for Microstructure and Performance

Generative models enable a fundamentally different paradigm of materials discovery. Rather than forecasting the behavior of a list of known structures, these models attempt to generate new structures that achieve specified performance targets. Recent work shows that generative models can propose microstructures with engineered porosity, filament orientations, or polymer-matrix configurations that conventional design-of-experiments methods would not locate.
Variational autoencoders (VAEs) have proved important for learning latent representations of complex micrographs. In work on flax-polypropylene biocomposites, VAEs were used to map SEM image data into a continuous latent space, which was then explored via gradient-based optimization to identify new interface morphologies. The resulting morphologies exhibited void patterns that enhanced impact strength by 40% with only a slight loss of stiffness relative to the control material [30].
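The latent-space optimization step can be illustrated with the trained decoder and property head replaced by analytic stand-ins, so the gradient-ascent loop runs on its own; in a real pipeline the gradient would come from backpropagating through the VAE's property predictor rather than the closed-form expression assumed here.

```python
import numpy as np

# Stand-in "property head": a smooth surrogate for impact strength that
# peaks at the (invented) latent code z* = (1, -1). In practice this would
# be a neural network trained on decoded microstructures.
target = np.array([1.0, -1.0])

def property_head(z):
    return -np.sum((z - target) ** 2)   # surrogate impact strength

def grad(z):
    return -2 * (z - target)            # analytic gradient of the stand-in

z = np.zeros(2)                         # start from the latent-space origin
for _ in range(100):
    z += 0.1 * grad(z)                  # gradient ascent toward better morphology

# The optimized latent code z would then be decoded back into a candidate
# microstructure image for validation.
```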
Conditional generative adversarial networks (GANs) have been used in textile composites to generate carbon-fiber weave patterns that minimize stress concentrations in the structure. A GAN trained on the statistical dependencies between fiber curvature and stress concentration produced two architectures whose maximum localized stresses were roughly 60 percent lower than those of a classical orthogonal weave [31]. Stochastic approaches have also been explored: a Markov chain tuned to mimic Fickian diffusion behavior was used to generate helicoidal ply sequences, yielding designs that tripled the mode-I interlaminar fracture toughness of typical unidirectional plies and demonstrating the value of transferring biological design principles into engineering systems [32].
A relatively recent development is the use of generative flow networks (GFlowNets), which cast material generation as a sequential decision-making problem. GFlowNets have been applied to manufacturing-process-sensitive microstructures in powder bed fusion, where they produced optimized laser scan tracks that eliminated balling defects and achieved a relative density of 99.2 percent on EOS industrial systems [33]. Because GFlowNets construct designs state by state, they are well suited to incorporating manufacturability and cost constraints directly into design generation.

Real-World Applications and Industrial Deployment

AI-driven inverse design is no longer a purely theoretical or academic prospect; it is already deployed in several industries, with demonstrated performance and sustainability improvements as well as faster certification procedures. In wind energy, Vestas used multi-objective reinforcement learning to optimize blade designs against 14 mutually conflicting criteria, including fatigue resistance, buckling performance, cost per kilogram, and manufacturability. For 100-meter turbine blades, Vestas reduced structural mass by 22 percent while still complying with IEC certification load assumptions for turbulent wind.
The European Space Agency (ESA) has led the way in applying AI-based material design to carbon-fiber-reinforced plastic (CFRP) sandwich panels for next-generation satellite antennas, which require extreme stability across thermal states. CFRPs optimized through physics-informed learning achieved near-zero coefficients of thermal expansion (CTE), demonstrated at <0.05 ppm/K, the level that sets the limit of beam fidelity in satellite communications [34].
Ford Motors, a key player in the automobile industry, has actively explored the integration of flax-polypropylene (flax–PP) biocomposites as a sustainable alternative to conventional glass fiber reinforced plastics (GFRPs). Their initial research focused on replacing GFRPs in non-critical automotive components—such as door panels, dashboard inserts, and interior trims—where mechanical demands are lower but sustainability impact is significant. This substitution aimed to retain comparable performance in terms of stiffness, impact resistance, and durability, while also reducing environmental footprint [21].
These cases confirm the potential of AI-designed materials and processes in high-stakes, regulated contexts, and they underscore the need for hybrid models that account not merely for mechanical efficiency but also for environmental sustainability and manufacturability.

Quantum-Enhanced Design Exploration

Quantum computing is beginning to influence materials design, particularly for high-dimensional optimization and the mapping of energy landscapes. Where the combinatorial complexity of composite design tasks defeats classical algorithms, quantum annealing and quantum neural networks are being considered.
In one notable case, a D-Wave hybrid quantum-classical solver was used to optimize the fiber layout in tungsten carbide-aluminum composites. The task was a 512-variable placement optimization problem trading off machinability, impact resistance, and wear life. The optimized fiber layout reduced abrasive wear by 40 percent, saving approximately $2 million annually in tool-replacement costs [35].
Quantum neural networks (QNNs), especially those running on superconducting-qubit platforms such as Rigetti's 80-qubit Aspen series, have demonstrated the ability to compute binding energies in 2D heterostructures. One parameter-estimation study predicted the adhesion energies of graphene on hBN within 0.03 eV of experimental values at roughly 1,000× the throughput of traditional DFT calculations, making it possible to explore layered materials as composite reinforcements in very short timeframes [24].
These quantum approaches, however, still contend with immature hardware, including decoherence and qubit noise. Industry has therefore turned to hybrid quantum-classical workflows in which classical processors guide the quantum computation and post-process its results.

Multi-Physics Coupling and Data Confidentiality

Most generative models in practice focus on mechanical properties, yet most composites are inherently multi-physical, experiencing combined mechanical, thermal, electrical, and even chemical loads in real-world applications. This has created the need for inverse-design frameworks that capture multi-physics behavior within a single representation.
A significant contribution in this direction is the NVIDIA SimNet platform, which enabled the learning of vector-valued operators predicting the elastic modulus, thermal conductivity, and piezoresistivity of composite sensors. In one study, a multi-physics design model trained on MXene-polymer composites achieved cross-property prediction errors below 10 percent on all three target properties, a promising step toward multifunctional composites designed specifically for sensing and actuation [36].
For industrial applications of AI-based material design, confidentiality is a primary concern, especially in competitive markets. Federated learning has emerged as a way to develop models in a decentralized fashion, training on collective data without requiring individual organizations to share raw data. Lockheed Martin, for example, applied a federated learning system combined with blockchain and homomorphic encryption to train a GNN on 17,000 confidential composite samples from several suppliers, improving delamination-test accuracy by 35% without exposing proprietary data or intellectual property [37].
Such privacy-preserving paradigms add a substantial cooperative element to the AI-materials ecosystem, enabling collaborations among suppliers, manufacturers, and government agencies that can yield joint innovations.

Surrogate Modelling and Multi-Scale Simulation

Composite materials are frequently developed, analyzed, and optimized across length and time scales using high-fidelity simulations (finite element analysis (FEA), molecular dynamics (MD), and density functional theory (DFT)) that are often computationally prohibitive. These tools are highly effective and precise, but they carry heavy computational cost, particularly when applied to parameter-rich design spaces or massive systems. The advent of surrogate modeling and AI-based approximators has changed this picture, making it possible to simulate material systems in a scale-sensitive fashion, typically at only a modest loss of accuracy.
Surrogate models, usually built via supervised learning, aim to reproduce the input-output behavior of a high-fidelity solver. They are applicable wherever repeated simulations arise: design-space exploration, inverse problems, or uncertainty quantification. Operator learning, physics-informed neural networks, and hierarchical integration strategies now allow transitions between atomistic and continuum scales and permit million-element analyses to be completed in hours rather than years.

AI-Based Operator Learning and Surrogate Solvers

Among the most promising surrogate models for materials simulation are Fourier Neural Operators (FNOs). FNOs learn functional mappings between input and output spaces via global convolution kernels parameterized in Fourier space. By learning kernel representations in the frequency domain, FNOs can solve partial differential equations (PDEs) directly, without discretizing the physical domain.
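The core FNO operation, a global convolution applied as a truncated multiplication in Fourier space, can be sketched in a few lines of numpy for a 1-D field. The weights are random stand-ins for learned parameters; a real FNO stacks several such layers with pointwise nonlinearities and a lifting/projection step.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights, n_modes):
    """One FNO-style global convolution: FFT the field, keep only the lowest
    n_modes frequencies, multiply by learned complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights
    return np.fft.irfft(out_hat, n=len(u))

# Toy input field on 64 grid points; random "learned" weights for 8 modes.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(3 * x)
weights = rng.normal(size=8) + 1j * rng.normal(size=8)
v = spectral_conv_1d(u, weights, n_modes=8)
```

Because the kernel lives in Fourier space, the same layer applies unchanged to any grid resolution, which is the discretization-invariance property the text refers to.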
A recent study pre-trained FNOs on around 20,000 high-fidelity finite element analysis (FEA) simulations of twill-weave carbon/epoxy laminates [38]. The surrogate achieved a relative L2 error of 2.8 percent on displacement fields while cutting the time per simulation from approximately 14 hours to under 1 minute. The reported 1,200× speed-up enabled Airbus to optimise on the order of 5.7 million laminate configurations in 3 days, instead of the hundreds of years a classical solver would require.
Another area of surrogate modeling that has attracted growing interest is Bayesian physics-informed neural networks (PINNs). These frameworks build physical laws (e.g., conservation laws, constitutive models) into the training loss and quantify uncertainty through Bayesian inference. Recently, PINNs enriched with Darcy-flow and cohesive-zone equations were used to simulate resin transfer molding (RTM), reproducing the process with less than 4 percent error from only about 80 pressure-versus-time sensor measurements [39]. Conformal quantile regression further gave the model 95% confidence intervals of ±1.3 mm, yielding precise predictions with well-calibrated bounds without additional training samples [39].
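Split-conformal calibration of the kind mentioned above is model-agnostic and simple to reproduce: hold out calibration data, compute nonconformity scores, and widen every point prediction by their empirical quantile. The data and point predictor below are synthetic stand-ins, not the RTM model from [39].

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in point predictor (in practice: the trained PINN surrogate).
def predictor(x):
    return 2.0 * x

# Synthetic calibration set: noisy observations of the true relation y = 2x.
x_cal = rng.uniform(0, 1, 200)
y_cal = 2.0 * x_cal + rng.normal(0, 0.1, 200)

# Split conformal: absolute residuals as nonconformity scores, then the
# finite-sample-corrected 95% quantile.
scores = np.abs(y_cal - predictor(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil(0.95 * (n + 1)) / n)

# Every new prediction gets a symmetric interval of half-width q.
x_new = 0.5
lo, hi = predictor(x_new) - q, predictor(x_new) + q
```

The appeal in a certification context is that the 95% coverage guarantee holds in finite samples without assuming the surrogate's errors are Gaussian.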
A related tool is the Deep Operator Network (DeepONet), an alternative approach to learning nonlinear operators between function spaces, a task that is not computationally feasible with classical methods. DeepONets have proved particularly useful for predicting full-field stress distributions directly from material-property maps acquired by micro-computed tomography (micro-CT). For example, a recent DeepONet achieved 3% error in predicting stress distributions and was combined with LSTM (Long Short-Term Memory) recurrent layers to track crack-tip propagation under cyclic loading. With this hybrid model, the authors forecast fatigue life over 10,000 cycles and predicted crack propagation paths with 89 percent accuracy relative to conventional cohesive-zone simulations, while running roughly 40 percent faster.
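The branch/trunk structure of a DeepONet is compact enough to show directly: one network encodes the input function sampled at fixed sensor locations, another encodes the query coordinate, and the operator output is their inner product. The weights below are random and untrained, and the single-layer nets and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

m, p = 32, 16                          # sensor count, latent width

W_b = rng.normal(size=(p, m)) * 0.1    # one-layer "branch" net (encodes u)
W_t = rng.normal(size=(p, 1)) * 0.1    # one-layer "trunk" net (encodes y)

def deeponet(u_sensors, y):
    """Approximate G(u)(y) as <branch(u), trunk(y)>."""
    branch = np.tanh(W_b @ u_sensors)          # shape (p,)
    trunk = np.tanh(W_t @ np.array([y]))       # shape (p,)
    return float(branch @ trunk)

# Example: query the operator for one input property field at 3 coordinates.
u = np.sin(np.linspace(0, np.pi, m))           # field sampled at m sensors
values = [deeponet(u, y) for y in (0.0, 0.5, 1.0)]
```

Once trained, the same branch encoding is reused across all query points, which is why DeepONets evaluate full stress fields so cheaply.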
Such operator-learning methods are not limited to static problems. They have been extended to transient and coupled-physics systems, including thermal-mechanical analysis of printed circuit board assemblies and transient piezoresistive behavior in MXene-based sensors. This generality is part of why they are being adopted so quickly in industrial settings.

Multiscale Integration Frameworks

The power of surrogate modeling is greatly amplified when it is embedded in a coupled multiscale pipeline linking atomistic, microstructural, and macroscopic phenomena. Hierarchical AI systems are being developed so that behavioral characterizations at lower scales can inform large-scale performance predictions without invoking full-scale simulators at every level.
At the atomic scale, message-passing neural networks trained on DFT and MD simulations predict interfacial energies, quantities that govern fiber-matrix adhesion in polymer nanocomposites and hybrid laminates. These outputs are passed up to the microscale, where graph attention networks are trained to predict cohesive-zone model parameters (fracture toughness, critical opening displacements) from the fiber distribution and interfacial geometry.
At the macroscale, these local properties are aggregated into what are called super nodes, which represent mesoscopic regions of the finite element mesh. The AI model can then construct whole-field stiffness maps and displacement solutions. In one case study, this hierarchical surrogate roughly halved the stiffness-prediction error (relative to experimental data) to 4.2%, while quantifying prediction uncertainty via Monte Carlo dropout and Bayesian posterior sampling.
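Monte Carlo dropout, one of the two uncertainty mechanisms just named, amounts to keeping dropout active at inference time and reading the predictive mean and spread from repeated stochastic forward passes. A sketch with a random, untrained two-layer network standing in for the macroscale surrogate (sizes and the keep-probability are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Untrained stand-in for the macroscale surrogate: 8 inputs -> 64 hidden -> 1.
W1 = rng.normal(size=(64, 8)) * 0.2
W2 = rng.normal(size=(1, 64)) * 0.2

def forward(x, keep=0.8):
    h = np.maximum(0, W1 @ x)              # ReLU hidden layer
    mask = rng.random(64) < keep           # dropout stays ON at inference
    return float(W2 @ (h * mask / keep))   # inverted-dropout rescaling

x = rng.normal(size=8)                     # one "super node" feature vector
samples = np.array([forward(x) for _ in range(500)])
mean, std = samples.mean(), samples.std()  # predictive mean and uncertainty
```

Regions of the mesh where `std` is large are exactly the ones the adaptive scheme described below would hand back to the high-fidelity solver.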
Propagating uncertainty through the hierarchy of model abstractions enables adaptive modeling, in which computational resources are channeled to regions of high predictive uncertainty. This flexibility is particularly valuable for safety-critical applications such as aircraft wing spars or wind turbine blades, where design margins must be justified.

Cloud-Native Surrogate Deployment

To be effective at industrial scale, a surrogate model must be not only accurate and fast but also easy to deploy. The flexibility and availability of cloud-native implementations allow companies to integrate AI surrogates into their existing simulation and design workflows.
NVIDIA SimNet is one of the most complete platforms in this area, combining symbolic differentiation with physics-regularized loss functions to build surrogate models for a variety of multi-physics problems. In composite warpage-prediction tasks, SimNet reduced the training data required by more than 90 percent compared with conventional surrogate-modeling practice. Automatic differentiation, parallel training, and native GPU support make the platform attractive for real-time use within design loops.
Similarly, ANSYS Sherlock AI embeds Gaussian process surrogate models directly into existing simulation frameworks such as Abaqus. In a reliability analysis of printed circuit board assemblies (PCBAs) with composite underfill materials, Sherlock AI cut composite-reliability calculations from days to under 30 minutes; a typical workflow runs the implicit FEA analysis once, wraps it with the Gaussian-process surrogate, and embeds the result in a legacy PCBA analysis. The ability to wrap surrogate models around commercial FEA solvers delivers an immediate productivity gain to engineering teams.
These platforms also support federated deployment, in which models are trained collaboratively across teams. They allow surrogates to be updated as soon as new information arrives, which suits settings dominated by field-monitored structures or continuous production lines.

Coupling Surrogates with Optimization and Design

Surrogate models are useful not only for emulating physics but also as engines for design-space search and inverse optimization. Because surrogates are differentiable and computationally cheap, optimization algorithms can use them to steer the generation of efficient material structures or processing parameters.
Surrogate models have been used to minimize the computation required to optimize laminate layup sequences against load-induced shear, that is, to minimize interlaminar shear stresses under multi-axial loading, with thousands of candidate layups evaluated per second. This capacity to appraise layup sequences by the thousands unlocks global optimization strategies (e.g., Bayesian optimization, reinforcement learning).
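The Bayesian-optimization pattern named above can be sketched end to end in numpy: a small Gaussian process models a one-dimensional proxy design variable (think of it as a normalized ply-angle parameter), and expected improvement (EI) selects each next "expensive" evaluation. The objective is a synthetic stand-in with its optimum at x = 0.3; nothing here comes from the cited studies.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def objective(x):
    return -(x - 0.3) ** 2              # pretend each call is a slow FEA run

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def norm_pdf(z): return exp(-0.5 * z * z) / sqrt(2 * pi)
def norm_cdf(z): return 0.5 * (1 + erf(z / sqrt(2)))

X = [0.1, 0.5, 0.9]                     # initial design points
Y = [objective(x) for x in X]
grid = np.linspace(0, 1, 101)           # candidate evaluation points

for _ in range(12):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))   # jitter for stability
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, Ya)           # GP posterior mean
    v = np.linalg.solve(K, Ks.T)
    sd = np.sqrt(np.maximum(1.0 - np.einsum("ij,ji->i", Ks, v), 1e-12))
    best = max(Y)
    ei = np.array([(m - best) * norm_cdf((m - best) / s)
                   + s * norm_pdf((m - best) / s) for m, s in zip(mu, sd)])
    x_next = float(grid[int(np.argmax(ei))])   # maximize expected improvement
    X.append(x_next)
    Y.append(objective(x_next))

x_best = X[int(np.argmax(Y))]           # converges near the true optimum 0.3
```

With a surrogate this cheap, each outer iteration costs microseconds; the entire budget is spent on the handful of true solver calls, which is the whole point of the approach.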
In design-of-experiments (DoE) work, uncertainty-guided active learning loops built on surrogates can conveniently determine the next set of microstructures or parameters to test. The approach suits processes where physical experimentation is slow and costly, such as resin transfer molding (RTM), autoclave curing, or automated fiber placement (AFP) [40].
More sophisticated methodologies combine multi-fidelity surrogates, in which cheaper, lower-fidelity models pre-screen configurations before the more expensive ones are applied. This compresses computational pipelines to a matter of hours, compared with several months using conventional approaches.
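The multi-fidelity cascade reduces to a filter-then-refine pattern: rank everything with the cheap model, then spend the expensive model only on the survivors. Both "models" below are synthetic stand-ins correlated with a hidden true quality; in practice they would be, say, an analytical micromechanics estimate and a full FEA run.

```python
import random

random.seed(4)

# 10,000 hypothetical configurations, each with a hidden true quality.
candidates = [{"id": i, "true": random.random()} for i in range(10_000)]

def low_fidelity(c):
    # Fast, noisy estimate (cheap analytical model).
    return c["true"] + random.gauss(0, 0.15)

def high_fidelity(c):
    # Slow, accurate estimate (pretend each call costs hours of compute).
    return c["true"] + random.gauss(0, 0.01)

# Stage 1: prune the pool with the cheap model (10,000 -> 100 calls saved
# from the expensive solver). Stage 2: rank only the survivors accurately.
survivors = sorted(candidates, key=low_fidelity, reverse=True)[:100]
ranked = sorted(survivors, key=high_fidelity, reverse=True)
best = ranked[0]
```

The design question is the stage-1 cutoff: too aggressive and the cheap model's noise discards true winners; too lenient and the expensive stage dominates the budget again.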

AI-Enabled Predictive Maintenance for the Property Prediction of Composites

As composite materials grow in structural complexity and multifunctionality, performance assessment is shifting from static, design-factor analysis toward dynamic, data-driven predictive maintenance (PdM). PdM means anticipating in advance when a material will degrade and how its in-service performance will change over a component's life, enabling real-time decisions that prevent failure and maximize service life. Traditional diagnostic systems operate in a prescriptive regime and are reactive in nature; they respond only after a failure has initiated [40]. AI-powered predictive maintenance is different: it uses machine learning and deep learning models to estimate remaining useful life (RUL), the preservation of material characteristics, and the probability of failure over time from available sensor data, simulation data, and physics-informed patterns.
By creating an online link between initial material design and field performance monitoring, AI brings data consistency to a digital infrastructure in which material selection, in-service feedback, and lifetime operations are closed into a loop. In this section, we illustrate how different AI models, namely vision transformers, multitask networks, and graph-based surrogates, are redefining PdM of advanced composites with respect to their mechanical, interfacial, fatigue, and multifunctional properties.

Static Mechanical Properties

Static mechanical properties (stiffness, tensile strength, and elastic modulus) have largely been established through destructive testing or micromechanical approximation. Recent advances in high-resolution micro-computed tomography (micro-CT) and deep-learning models, however, now make it possible to predict these properties non-destructively from microstructure [41]. One quickly adopted architecture is the vision transformer (ViT), which learns global structural relations across sequences of image patches. This is the key distinction between ViTs and convolutional neural networks (CNNs): CNNs operate on local receptive fields, whereas ViTs impose no restriction on spatial range, representing the micro-CT data as a sequence of image patches and using self-attention to learn long-distance associations between microstructural components.
In a recent paper [6], a ViT trained on 45,000 scans of short-fiber carbon/epoxy composites achieved a mean absolute error of 0.17 GPa and a coefficient of determination R² of around 0.93 in tensile-strength prediction, more than 30 percent better than ResNet-152. The ViT could identify global defect-stress correlations, including void clusters and fiber waviness, that are important for modeling anisotropic mechanical response [40]. This ability to distinguish degradation mechanisms of concern makes ViTs especially useful in PdM applications, where early detection of failure and degradation informs safety-critical decisions.
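The self-attention step that gives ViTs their unrestricted spatial range is a single matrix operation over patch embeddings: every patch attends to every other, so a void cluster can influence the representation of a distant wavy-fiber region. A sketch with random, untrained weights (patch count and embedding width are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

n_patches, d = 16, 32                    # e.g., a 4x4 grid of patch embeddings
X = rng.normal(size=(n_patches, d))      # stand-in for embedded micro-CT patches

# Random (untrained) query/key/value projection matrices.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

def self_attention(X):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                  # (n_patches, n_patches)
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)             # softmax over patches
    return A @ V, A                                # outputs and attention map

out, attn = self_attention(X)
```

The attention matrix `attn` is exactly the object inspected in the interpretability analyses discussed earlier: row i shows how much patch i draws on every other patch.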

Atomic Scale Interfaces

At the interfacial level, composite behavior is governed primarily by the bonding energy between the reinforcement (fiber, flake, or particulate) and the matrix material. Interfacial adhesion is most explicitly modeled with density functional theory (DFT), which, though extremely accurate, is computationally out of reach for exploratory or large-scale studies. Graph isomorphism networks (GINs) have been shown to be an effective computational surrogate for predicting bonding energy at the atomic level [41]. In the GIN model, atoms or clusters are treated as graph nodes, with connectivity encoding bond types and interatomic distances.
Trained on a database of 1.2 million DFT simulations spanning a variety of chemistries and end-terminations, GINs predicted interfacial adhesion energies with an MAE of 0.03 J m−2 while remaining three orders of magnitude faster than DFT calculations [42]. This speed-up allows rapid screening of coupling agents, surface modifications, and nanofiller-matrix interactions, and it enables interface-aware predictive maintenance, detecting bond breakdown indirectly through property deviations.
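A single GIN-style update is easy to write out: each node adds its neighbors' features to its own (scaled by 1 + ε) and applies a small MLP, and sum-pooling the node states yields a graph-level embedding for the adhesion-energy regressor. The toy 4-atom interface graph, one-hot features, and single-layer weights below are synthetic assumptions for illustration.

```python
import numpy as np

# Toy 4-atom interface graph arranged as a cycle; adjacency encodes bonds.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.eye(4)                               # one-hot initial atom features
eps = 0.1                                   # learnable in a real GIN

rng = np.random.default_rng(9)
W = rng.normal(size=(4, 8)) * 0.5           # single-layer stand-in for the MLP

# GIN update: h_v' = MLP((1 + eps) * h_v + sum of neighbor features).
H1 = np.maximum(0, ((1 + eps) * H + A @ H) @ W)

# Sum-pooling gives a permutation-invariant graph embedding, which a final
# regressor would map to an adhesion energy.
graph_embedding = H1.sum(axis=0)
```

The sum aggregator (rather than mean or max) is what gives GINs their ability to distinguish graph structures up to the Weisfeiler-Lehman test, the property that motivates their use as DFT surrogates.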

Fatigue Life Prediction

The applications of greatest interest are those subject to fatigue loading, such as aircraft skins, wind turbine blades, and automotive leaf springs, and fatigue is one of the most significant failure modes in composites. A conventional fatigue test typically requires millions of cycles, making it both time-consuming and expensive. AI-based surrogates have proved remarkably capable of tracking and forecasting fatigue damage, particularly hybrid architectures that combine convolutional neural networks (CNNs) with long short-term memory (LSTM) units.
In one application, acoustic emission signals acquired during cyclic loading of wind turbine blades were fed to a CNN-LSTM architecture that first extracted spatial features and then modeled their evolution over time. The model was trained to classify six fatigue failure modes, including matrix cracking, delamination, and fiber breakage, across 10⁷ cycles, and to predict the remaining strength of specimens within a 7 percent average error, so it could be used in the field to forecast remaining strength and trigger maintenance alerts [43].
Such hybrid models offer a dual benefit. The CNN component identifies the spatial signatures of fatigue onset (e.g., crack nucleation sites), while the LSTM module tracks how those signatures evolve with load history. Hybrid models are also well suited to embedded systems, since they can run indefinitely on low-power hardware mounted on the structure, enabling real-time RUL planning.

Multifunctional Property Forecasting

Beyond structural duty, modern composites are increasingly expected to perform additional functions such as heat dissipation, electromagnetic shielding, and dielectric insulation. Multifunctional composites such as MXene-polymer hybrids are particularly complex because they are nonlinear systems with interdependent properties. Predictive maintenance for such materials requires modeling the deterioration of all functions simultaneously rather than forecasting a single property.
Shared-encoder multitask transformers have been proposed for this purpose. A single common backbone processes the microstructural inputs (such as micro-CT or SEM images), and multiple parallel branches predict the individual property targets. On a corpus of 15,000 MXene-polymer composites, a multitask transformer achieved R² above 0.91 on every target variable, Young's modulus, thermal conductivity, and dielectric breakdown strength [44], matching or outperforming single-task networks while using fewer parameters and converging faster.
Multitask models are especially valuable for predictive maintenance because they can detect coupled degradation pathways. For example, a drop in thermal conductivity can signal both thermal fatigue and delamination or microcrack growth, which in turn reduce stiffness and dielectric performance.

Data Fusion and Transfer Learning

The efficiency of predictive maintenance systems normally rests on heterogeneous data sources: analytical models, experimental data, and in-situ sensor data. AI systems must therefore be able to learn across domains, fidelities, and formats to fuse these modalities. Multi-fidelity Gaussian processes (MF-GPs) exemplify the Bayesian frameworks that mix cheap, simple simulations with scarce but trustworthy experimental data.
In one carbon-carbon composite case, MF-GPs combined inexpensive micromechanics-based analytical predictions with a small set of laboratory toughness experiments, requiring only 15 percent of the usual experimental effort. The final model achieved an MAE in fracture toughness of about 0.21 MPa. Such techniques are not only cheap and data-efficient; being inherently probabilistic, they also provide confidence bounds on predictions that can be used to set inspection intervals or safety margins.

Industrial Deployment of Predictive AI Models

The pace of industrial adoption of AI-based predictive maintenance has accelerated across industries, especially aerospace, automotive, and defense. One of the most notable examples is GE Aviation, where graph neural networks are applied to simulate creep deformation in the ceramic matrix composites used in turbine blades. These models displaced more than 80 percent of costly full-scale coupon testing and were accepted under FAA regulation, shortening the certification process from years to months.
BMW AG has applied a multimodal predictive maintenance system to carbon fiber-reinforced plastic (CFRP) body panels. The system uses dynamic Bayesian networks to continuously update noise-vibration-harshness (NVH) models during prototype testing, fusing data collected via vibrometry and strain gauges. It improved NVH prediction by 42 percent and allowed more design iterations.
In defense, Northrop Grumman has developed transformer-based dielectric spectroscopy models to forecast the permittivity and loss tangent of radar-absorbing composite materials. The model achieved better than 96% accuracy in material selection for stealth applications, where the dielectric response is key to controlling radar cross-section.
These applications show that AI-based predictive maintenance is viable and practical for physical composites in real-life, regulated settings.

Ongoing Challenges and Future Outlook in Predictive Maintenance

Even though significant progress has been made toward integrated AI systems for predictive maintenance, several barriers remain. One is microstructural anisotropy, particularly in fiber-reinforced systems, whose performance depends on the fiber orientation distribution. Conventional machine learning models encode the material geometry only once and carry no prior knowledge that the geometry may vary spatially, which makes predicting the mechanical properties of a spatially varying microstructure fundamentally challenging for them. To address this weakness, multiple-instance learning (MIL) frameworks have been developed. In an MIL framework, sub-volumes of the microstructure are treated as "instances" and aggregated into a global representation from which the prediction is formed. In one study, researchers reduced the stiffness prediction error for sheet molding compounds from 23% to 6.7%.
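The MIL idea can be illustrated with a minimal attention-pooling sketch. Everything below is hypothetical, not taken from the cited study: the feature channels, the weights, and the "defect channel" are invented to show how attention lets one critical sub-volume dominate the bag-level prediction.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mil_predict(instances, w_feat, w_attn):
    """Attention-based MIL: score each sub-volume, pool with learned attention."""
    scores = instances @ w_feat           # per-instance property contribution
    attn = softmax(instances @ w_attn)    # attention over sub-volumes
    return float(attn @ scores), attn

# Toy bag: five sub-volume feature vectors [defect_signal, stiffness_proxy, misc].
bag = np.array([[0.0, 1.0, 0.2],
                [0.1, 0.9, 0.1],
                [3.0, 0.2, 0.3],   # a critical, defect-rich sub-volume
                [0.0, 1.1, 0.2],
                [0.1, 1.0, 0.1]])
w_attn = np.array([1.0, 0.0, 0.0])   # attend to the defect channel
w_feat = np.array([-0.5, 1.0, 0.0])  # defects lower predicted stiffness

pred, attn = mil_predict(bag, w_feat, w_attn)
print("attention weights:", np.round(attn, 3))
print("bag-level prediction:", round(pred, 3))
```

Mean pooling would dilute the defect-rich sub-volume across the bag; attention pooling lets it drive the prediction, which is the mechanism the text describes.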
A second, increasingly pronounced issue is the sparse-data regime for biocomposites, particularly those based on agro-waste fibers. Because of the natural heterogeneity of the source biomass, these systems exhibit wide property variation, which can render standard supervised learning practices impractical or even inapplicable. For property prediction, prototypical networks have proven effective with as few as five labeled samples. The researchers reported using about 15 times fewer screenings when developing new biocomposites, which allowed environmentally friendly materials to be considered earlier in the product design cycle [45].
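At inference time, prototypical few-shot classification reduces to a nearest-class-mean rule in an embedding space. The sketch below uses synthetic data and an identity embedding (a trained encoder would be used in practice; the two "biocomposite family" clusters and descriptor axes are invented) to show a 5-shot classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototypes(support_x, support_y):
    """One prototype per class: the mean embedding of its support samples."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0)
                              for c in classes])

def classify(query_x, classes, protos):
    """Assign each query to the nearest prototype (squared Euclidean)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Toy biocomposite "families": two clusters in a 2-D descriptor space
# (e.g., fiber length vs. lignin content -- purely illustrative axes).
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
support_x = np.vstack([c + 0.3 * rng.standard_normal((5, 2)) for c in centers])
support_y = np.repeat([0, 1], 5)            # 5-shot support set per class
query_x = np.vstack([c + 0.3 * rng.standard_normal((20, 2)) for c in centers])
query_y = np.repeat([0, 1], 20)

classes, protos = prototypes(support_x, support_y)
acc = (classify(query_x, classes, protos) == query_y).mean()
print(f"5-shot accuracy: {acc:.2f}")
```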
Looking ahead, predictive maintenance systems will gain flexibility, confidentiality, and connectivity. This will involve integrating real-time feedback, continual-learning pipelines, and explainable machine learning into digitally enabled predictive maintenance platforms.

AI-Assisted Manufacturing Process Modeling

As composites move beyond the design phase and digital models into the actual manufacture of aircraft, cars, wind turbines, and consumer products, the ability to produce them at scale, repeatably, and at low cost becomes more important. Manufacturing usually consumes the largest share of resources in a composite's life cycle, given the strict requirements on cure, placement, compaction, resin infusion, and thermal gradient control. Despite decades of research and development investment, achieving both high throughput and defect-free output (voids, dry spots, misaligned fibers) remains challenging. Artificial intelligence (AI) systems offer one path forward, providing real-time control, defect mitigation, predictive parameter tuning, and closed-loop process improvement (see Figure 2).
Conventional control methods presuppose either known heuristics or linear process-response relationships. AI-based process models, by contrast, bring together physics, sensor data, and past process knowledge, permitting adaptation and intelligent decision-making throughout composite processing. This section reviews the application of AI techniques (reinforcement learning, Bayesian optimization, physics-informed neural networks, federated learning, and others) across new composite manufacturing settings, covering automated fiber placement (AFP), autoclave curing, and resin transfer molding (RTM), and closing with company case studies that quantify their cost and performance advantages.

Intelligent Process Optimization

One of the first and most sensible applications of AI in composite manufacturing has been streamlining multi-parametric processes such as AFP. Automated fiber placement deposits fiber tows onto a curved mold using robotic arms. The quality of AFP layups depends on various interdependent parameters, including deposition speed, roller force, fiber tension, local curvature, and heat-gun assistance. Any one of these parameters, if set incorrectly, can give rise to compaction defects, bridging, or voids.
Reinforcement learning (RL), in particular an agent based on the deep deterministic policy gradient (DDPG) trained with a time-based reward, has been able to autonomously optimize AFP process parameters. Defining a reward function that penalizes defects at constant speed is critical, since the RL agent must learn the best settings for fiber deposition speed (0.1-5 m/s), roller force (50-400 N), heat-gun temperature (300-600 °C), and fiber tension (5-40 N). On one project with Northrop Grumman, these agents halved the void content of wing skins, down to about 0.45±0.12%, and improved layup throughput by 28% [46]. The AI could adapt to changes in local geometry, material batch properties, and environmental conditions, controlling the process more consistently and faster than manual tuning.
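A full DDPG agent needs a deep RL stack; as a minimal stand-in, the cross-entropy method below searches the same four bounded AFP parameters against a toy defect-rate reward. The optimum point and the reward shape are invented for illustration and do not come from the cited project.

```python
import numpy as np

rng = np.random.default_rng(2)

BOUNDS = np.array([[0.1, 5.0],    # deposition speed (m/s)
                   [50, 400],     # roller force (N)
                   [300, 600],    # heat-gun temperature (°C)
                   [5, 40]])      # fiber tension (N)
OPT = np.array([2.0, 220.0, 450.0, 20.0])  # hypothetical defect-minimizing point

def reward(p):
    """Toy stand-in for (negative) void content at a fixed layup speed."""
    z = (p - OPT) / (BOUNDS[:, 1] - BOUNDS[:, 0])   # normalized deviation
    return -np.sum(z ** 2)

mu = BOUNDS.mean(axis=1)                      # start at the center of the box
sigma = (BOUNDS[:, 1] - BOUNDS[:, 0]) / 4
for _ in range(30):                           # CEM: sample, keep elites, refit
    pop = rng.normal(mu, sigma, size=(64, 4)).clip(BOUNDS[:, 0], BOUNDS[:, 1])
    elite = pop[np.argsort([reward(p) for p in pop])[-8:]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("optimized parameters:", np.round(mu, 2))
```

In a real deployment the reward would come from in-situ defect sensing rather than a closed-form function, which is precisely what makes sample-efficient RL formulations attractive.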
Similarly, autoclave curing, in which composite parts are exposed to heat and pressure to compact the resin and attain the intended degree of cure, has been greatly enhanced by Bayesian optimization frameworks. The models correlate ramp rates (0.5-3 °C/min) and pressure curves (0-100 psi) with the resulting degree of cure, exotherm temperatures, and residual stresses. According to Hexcel, AI-optimized cure cycles achieved 99.5 percent cure in 73 percent of the standard cycle time, while lowering peak exothermic temperatures by 18 °C and final warpage by 0.12 mm/m, improving both safety and performance.
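The core Bayesian-optimization loop (fit a surrogate, maximize an acquisition function, run the suggested cycle) can be shown on one parameter. The cure-quality objective, GP hyperparameters, and ramp-rate range below are hypothetical; a hand-rolled GP with expected improvement stands in for a production BO library.

```python
import numpy as np
from math import erf, sqrt, pi

def cure_quality(r):
    """Hypothetical objective: cure completeness minus exotherm penalty."""
    return -((r - 1.4) ** 2) + 0.05 * np.sin(5 * r)

def gp_posterior(x, y, xq, ls=0.5, noise=1e-6):
    """RBF GP posterior mean and std on query points xq."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K_inv = np.linalg.inv(k(x, x) + noise * np.eye(len(x)))
    Ks = k(xq, x)
    mu = Ks @ K_inv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * Phi + sd * phi

grid = np.linspace(0.5, 3.0, 200)          # candidate ramp rates (°C/min)
x = np.array([0.5, 1.75, 3.0])             # initial design
y = cure_quality(x)
for _ in range(10):                        # BO loop: fit GP, maximize EI
    mu, sd = gp_posterior(x, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    x = np.append(x, x_next)
    y = np.append(y, cure_quality(x_next))
print(f"best ramp rate ≈ {x[np.argmax(y)]:.2f} °C/min")
```

In practice each "evaluation" is a real or simulated cure cycle, so the entire value of BO is in how few of these loop iterations it needs.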

Physics-Informed Manufacturing Control

Although data-driven AI offers flexibility, on its own it can struggle in sparse, sensor-constrained settings. Many manufacturing processes, such as resin infusion and curing, obey known physical laws (e.g., Darcy's law for flow, Arrhenius equations for reaction kinetics). Embedding these equations in neural networks yields another type of model, physics-informed neural networks (PINNs), whose predictive performance increases while their data requirements decrease.
As a specific example, in resin transfer molding (RTM) a viscous resin moves through a fibrous preform, a process modeled by Darcy's law. The difficulty in real-world cases is that the tooling is complex and sensors are sparse. PINNs trained on just 23 strategically placed pressure sensors correctly predicted the advance of the pressure front with an error of just 2.1%, using 75% less data than purely data-driven techniques. The PINN framework enabled better control over injection rates and vent pressures, reducing voids and achieving full wet-out in complex geometries.
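The mechanics of a PINN loss can be shown in miniature. For 1D steady resin flow at constant permeability, Darcy's law plus incompressibility gives d²p/dx² = 0. Below, a quadratic polynomial stands in for the neural network, and the loss combines that physics residual with two hypothetical "sensor" readings; the gradients are written out by hand instead of using autodiff.

```python
# Pressure model p(x) = w0 + w1*x + w2*x**2 (a stand-in for a small NN).
# Physics residual: d2p/dx2 = 2*w2, which should vanish for steady Darcy flow.
# Data terms: sensors report p(0) = 1.0 (inlet) and p(1) = 0.0 (vent), in bar.
w0, w1, w2 = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(500):
    s = w0 + w1 + w2            # model value p(1)
    # gradients of L = (2*w2)**2 + (p(0) - 1)**2 + (p(1) - 0)**2
    g0 = 2 * (w0 - 1.0) + 2 * s
    g1 = 2 * s
    g2 = 8 * w2 + 2 * s
    w0, w1, w2 = w0 - lr * g0, w1 - lr * g1, w2 - lr * g2

p = lambda x: w0 + w1 * x + w2 * x ** 2
print(f"p(0.5) ≈ {p(0.5):.3f}, curvature w2 ≈ {w2:.4f}")
```

With only two data points, the physics term is what forces the recovered pressure profile to be the linear Darcy solution rather than an arbitrary interpolant; that is the data-efficiency mechanism the text describes.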
Cure kinetics is another field where hybrid AI-physics strategies excel. In a system devised by Toray, differential scanning calorimetry (DSC) data from 15 epoxy formulations were combined with Arrhenius-type reaction models to predict the exothermic heat release. The resulting hybrid PINN predicted peak temperatures to within 1.8 °C, which can help prevent hotspots in thick laminates, a common cause of delamination or incomplete cure.
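The physics half of such a hybrid is just an Arrhenius rate law integrated over the cure cycle. The sketch below uses hypothetical kinetic parameters (not Toray's) and a simple ramp-and-hold cycle with forward-Euler integration.

```python
from math import exp

# Hypothetical nth-order cure kinetics: d(alpha)/dt = A*exp(-Ea/(R*T))*(1-alpha)^n
A, Ea, R, n = 1.0e5, 60.0e3, 8.314, 1.5   # 1/s, J/mol, J/(mol*K), -

def temperature(t):
    """Cure cycle: ramp 25 -> 180 °C at 2 °C/min, then hold (returns kelvin)."""
    return 273.15 + min(25.0 + 2.0 * (t / 60.0), 180.0)

alpha, dt, history = 0.0, 1.0, []
for step in range(7200):                  # two simulated hours, 1 s steps
    T = temperature(step * dt)
    rate = A * exp(-Ea / (R * T)) * (1.0 - alpha) ** n
    alpha = min(alpha + rate * dt, 1.0)   # degree of cure stays in [0, 1]
    history.append(alpha)
print(f"final degree of cure: {alpha:.3f}")
```

A hybrid PINN of the kind described would keep this rate law as a residual term while letting the network absorb formulation-specific deviations measured by DSC.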
Such physics-informed models are both extrapolatable and interpretable, which gives them an advantage in critical manufacturing settings where purely data-driven predictions may not inspire sufficient confidence.

Industrial Case Studies in Intelligent Manufacturing

Various organizations have moved past prototyping and now employ AI-based process models on production lines. Boeing, for example, incorporates a predictive model into the autoclave control system used to cure the 787 wing spar. A temporal fusion transformer, built on temperature readings from 47 thermocouple channels inside the mold, predicts temperature distributions up to 8 minutes into the future. These predictions feed an adaptive PID controller that keeps through-thickness thermal gradients to within 1.5 °C. This precision has reduced cure-induced flaws and saved over $3.2 million per annum in scrap costs. BMW's CFRP chassis manufacturing presented a different challenge: the same components were produced at several plants worldwide, each with different materials and equipment. To harmonize quality assurance without sharing proprietary process data, BMW implemented a federated learning strategy in which facilities trained local models on their own curing data and distilled these into a global model. With the global neural network, the variation in predicted glass transition temperatures between sites fell from 14 to 2.3 degrees Celsius, without any raw data being shared [47]. The federated AI architecture assured quality across facilities worldwide while safeguarding intellectual property and adhering to data sovereignty policies. Siemens adopted an edge-computing solution to monitor resin infusion during vacuum-assisted resin transfer molding (VARTM) of 42-meter-long wind turbine blades.
Embedded U-Net architectures analyzed the 25-channel ultrasonic data streams at 100 Hz to identify flow anomalies with an accuracy of 98.7%. Consequently, the dry spots that often occur in large infusions were eliminated, along with the associated rework and material waste.
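In its simplest form, the federated pattern described here reduces to federated averaging: each site trains locally and only model weights travel. A minimal sketch, with two invented sites and toy layer shapes:

```python
import numpy as np

def fed_avg(site_weights, site_counts):
    """Average per-layer weights across sites, weighted by local sample count."""
    total = sum(site_counts)
    return [sum(c / total * w[i] for w, c in zip(site_weights, site_counts))
            for i in range(len(site_weights[0]))]

# Two plants with locally trained (toy) models: [layer_matrix, bias_vector].
site_a = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.5, 0.5])]
site_b = [np.array([[3.0, 2.0], [1.0, 0.0]]), np.array([1.5, 2.5])]
global_model = fed_avg([site_a, site_b], site_counts=[300, 100])

print(global_model[0])   # weighted toward site A, which has 3x the data
```

Real deployments iterate this step many times and add protections such as secure aggregation, but the privacy property is already visible: only weights, never curing records, leave a site.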

Converging Modalities and Digital Integration

The application of AI in manufacturing is expected to shift quickly from isolated applications to full digital twins of the manufacturing system: digital replicas that allow the process to be simulated, monitored, and optimized in real time. AI leverages such twins to coordinate manufacturing actions dynamically, drawing on historical process data, real-time sensor inputs, and prediction models.
The difficulty is combining data modalities (thermal, mechanical, acoustic, and visual) across spatial and temporal dimensions. In additive manufacturing pipelines, for example, Stratasys has deployed a system combining vision transformers and LSTM networks. By fusing visual information and thermal camera evaluation with a digital twin of the deposition process, the AI reduced voids in printed parts by 4.6 percentage points (from 5.3% to 0.7%), largely by readjusting tool paths and deposition settings. Such multimodal AI in production not only enables predictive actions that minimize the risk of failure, but also the ability to repair faults before they become part of the structure. GE Renewable has launched a digital twin of a wind turbine blade manufacturing line based on sensor data, physical modeling, and AI forecasts; LSTMs trained on historical SHM data produced anomaly forecasts that triggered pre-emptive maintenance, cutting turbine downtime by 43 percent and improving overall efficiency. Lockheed Martin uses neural radiance fields (NeRFs) to reconstruct 3D defect morphologies from 2D CT projections, part of an effort that has reduced the number of CT scans required for quality control of aerospace-grade composites by 83 percent. The reconstructed 3D models are incorporated into digital twins used during design and post-manufacturing validation.
These illustrations point toward the hybridization of AI, cloud computing, and physics modeling into combined, intelligent manufacturing systems. Such systems may not only reduce defect creation but also offer predictive scheduling, automated inspection, and efficient use of resources.

Challenges and Outlook in AI Assisted Process Modelling

Although the benefits of AI-assisted composites manufacturing are numerous, a number of challenges must be managed to realize them fully. Among them is the explainability of AI decisions. In safety-critical applications such as aerospace, where a flawed AI-based control action could result in catastrophic failure or, at worst, death, AI actions must be interpretable by end-user engineers and regulatory bodies. These concerns have driven work on explainable reinforcement learning and symbolic regression overlays that make transparent the AI's connection between parameter changes and outcomes, such as void content or cure quality.
Data sparsity and domain transferability are further problems. Most AI models require large amounts of labelled data, which can be hard to acquire in an industrial setting. Researchers are exploring synthetic data generation through digital twins and sensor emulation to sidestep the problem. At the operational level, domain transferability is the primary challenge: transfer learning strategies are needed to adapt pre-trained models to new materials, new machines, or new factories with minimal or no re-training.
Cybersecurity and data governance are growing in importance. As AI systems are adopted across global supply chains, process data must remain confidential even as models are shared. Emerging methods built on homomorphic encryption, differential privacy, and secure multi-party computation are now being tested within federated learning to guard against such threats.
Looking forward, there is little reason to believe that the use of AI in composite production will remain restricted to isolated models; it is more likely to lead to increasingly autonomous factories in which machines, process controllers, and digital twins collaborate under common AI coordination. The evolution of generalist multimodal models, rapid advances in real-time simulation, and self-correcting feedback loops will probably transform the face of manufacturing.

Non-Destructive Inspection and Structural Health Monitoring (SHM)

Given the proliferation of composite materials into safety-critical roles (e.g., aerospace, renewable energy, civil infrastructure, and transportation), long-term dependable structural integrity has become not just desirable but required. NDT and SHM have accordingly emerged as the basis of composite reliability: they are context-rich, in-situ, and continuous, breaking with traditional inspection paradigms.
Enhanced by deep learning and real-time edge intelligence, AI is transforming SHM at a fundamental level. Sensor outputs no longer remain localized signals; they are integrated, interpreted, and situated within compound digital twins, making it possible to assess condition, categorize damage, and even forecast remaining life. This section discusses how intelligent composite inspection is advancing through cross-modal sensor architectures, deep neural networks, federated learning, and new sensors employing quantum and nanotechnologies. It also considers companies whose use of machine learning has delivered less downtime, longer service life, and lower upkeep costs.

Multimodal Fusion Frameworks for Defect Characterization

Recent composite inspection systems make ever greater use of multimodal data fusion, in which the outputs of every available NDT technique are combined in a single machine learning model. Data from various NDT sensors, e.g., ultrasonic scans and thermography, are fused by cross-attention transformers that assemble composite defects into one joint representation, facilitating multimodal diagnostics. A case in point is the MERLIN system launched by Airbus: fault classification accuracy exceeded 95% across 17 defect types, with spatial resolution more than 15× better than conventional NDT. Multimodal architectures thus enhance first-pass inspection accuracy and permit more automated interpretation, minimizing dependence on operators and speeding maintenance decision-making.

Predictive Maintenance Systems and Digital Twins

Although finding defects is vital, the final aim of SHM is prevention: maintenance that forestalls failure before it happens. This requires models that can interpret sensor readings in terms of physical degradation mechanisms and predict remaining useful life in operation. Hybrid digital twins are a recent advancement making this vision a reality. Perhaps the best-known example is the SHM system developed by GE Aviation to diagnose wind turbine blade cracks, in which mechanistic crack-growth models are coupled with recurrent neural networks. The system integrates strain and acoustic emission data with environmental factors including temperature, humidity, and ultraviolet exposure. An LSTM network analyzes temporal patterns in the combined data to detect early-stage crack propagation and forecast the remaining useful life of each turbine blade before damage becomes critical.
Coupled with proven degradation models, the system predicts RUL with under 8% error over more than 12,000 operational hours and has been shown to reduce unplanned maintenance downtime by 43% (see Figure 3).
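The RUL half of such a twin can be sketched as degradation-trend extrapolation: fit the growth rate of a damage indicator and extrapolate to a critical threshold. The exponential crack-growth model, noise level, and threshold below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monitoring record: crack length grows exponentially with hours.
hours = np.arange(0, 5000, 100.0)
growth_rate = 4e-4                       # per hour (hypothetical)
crack = 1.0 * np.exp(growth_rate * hours) \
        * (1 + 0.01 * rng.standard_normal(hours.size))  # 1% sensor noise

# Fit a log-linear trend, then extrapolate to the critical crack length.
slope, intercept = np.polyfit(hours, np.log(crack), 1)
a_crit = 12.0                            # mm, hypothetical repair threshold
t_crit = (np.log(a_crit) - intercept) / slope
rul = t_crit - hours[-1]
print(f"estimated RUL: {rul:.0f} h")
```

The systems described in the text replace the hand-picked exponential with learned mechanistic-plus-LSTM dynamics, but the output contract is the same: a time-to-threshold estimate with uncertainty that maintenance planning can act on.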
The need for real-time responsiveness has led Boeing to deploy embedded edge-AI systems that reason over distributed piezoelectric sensor arrays implanted in aircraft control surfaces and fuselages. The system runs a quantized MobileNetV3 on an ARM Cortex-M7 microcontroller, detecting impacts above 5 J in less than 12 ms. When such events are identified, load redistribution algorithms dynamically reallocate actuator forces to safeguard damaged regions. These edge-AI solutions live within tight power and compute constraints, and they demonstrate that in-flight SHM intelligence is possible.

Emerging Sensor Technologies

The hardware foundation of SHM is also changing rapidly: advances in energy harvesting, quantum sensing, and optical methods are pushing the thresholds of sensitivity, resolution, and lifetime. Because they can be miniaturized and decentralized, new-generation sensors promise to make SHM systems scalable to large composite structures.
This potential has been demonstrated by self-powered IoT strain-sensor systems. These systems employ so-called triboelectric nanogenerators (TENGs), which harvest the mechanical vibration of the structure itself, e.g., aircraft wing surfaces. Strain sensors and LoRa-based transmitters operate on the harvested energy, enabling long-range transmission without external power. One field study of sensors on an Airbus wing structure indicated that TENG-based systems could operate continuously for over ten years, transmitting data without external power or interruption. Classifiers built on resonance frequency shifts and trained with tinyML identified impacts with 92% accuracy, making the system well suited to long-duration SHM at inaccessible locations.
Quantum defect sensing, using nitrogen-vacancy (NV) centers in diamond nanopillars, is another field of significant progress. These quantum sensors detect local magnetic field variations that shift the photoluminescence spectrum of the NV centers via the Zeeman effect, making them sensitive to subsurface stress concentrations. Quantum convolutional neural networks (QCNNs) interpret these shifts to determine stress patterns. Recent experiments with NV-based sensor configurations have resolved stress anomalies at spatial resolutions as fine as 10 µm, approximately 50 times finer than ultrasound systems, and detected early-stage delaminations under compressive loads of up to 50 MPa.
Laser ultrasonics has also seen renewed interest, particularly for inspecting turbine blades and rotating parts. Plasma produced by femtosecond lasers generates broadband Lamb waves (0.1-20 MHz) that travel through the structure and reflect at internal discontinuities. The returning wavefield is measured with interferometric sensors, and vision transformers trained on the temporal-spatial wave behavior classify porosity levels according to ASTM standards. These systems have been used effectively for non-contact, standoff inspections at ranges up to 3 meters, providing safe and reliable testing of high-speed rotating parts, including wind blade joints and jet engine rotors.

Synthesis: Toward Autonomous Structural Integrity

As SHM systems develop, the idea of a fully autonomous structural integrity system comes closer to reality. In this paradigm, sensor networks embedded during fabrication feed real-time data into AI models that do more than sense damage: they place it in the context of the material's history, loading conditions, and environmental exposure. These models update digital twins and then pass information to actuators, warning systems, and even factories (via manufacturing loops) to inform future designs.
At the core of this vision is closed-loop learning: damage identified in the field can drive retraining of predictive models, inform inspection activities on similar structures, and even feed back into design tools to prevent recurrence. Consider a delamination identified in a composite aileron: it can not only result in a local repair at the time of the finding, but also trigger recommended modifications to reinforcement patterns in future-generation designs.
Scalability is another important factor. Future SHM systems are expected to ingest data from tens of thousands of sensors spread over fleets of aircraft, kilometers of bridges, or hundreds of wind turbines. This scale will be enabled by distributed AI inference, federated model training, and edge computing architectures. The standards for model validation, interpretability, and interoperability will also need to evolve alongside the technical capacity.
Regulatory and ethical implications also take precedence. In a global, distributed supply chain, who owns the SHM data? How should AI-produced damage assessments be certified? How can sensor readings be protected from adversarial manipulation? Although these questions lie outside the technical realm of SHM per se, they will dictate how well AI-enabled inspection systems work in practice and whether they are trusted. Finally, non-destructive inspection and structural health monitoring are no longer side players in the deployment of composites; they are part and parcel of smart materials systems. As high-resolution sensors, real-time inference, and predictive modeling converge, SHM is becoming a predictive, preventive, and prescriptive discipline that will define the next chapter of composite reliability.

Challenges and Future Directions

Although AI in the field of composite materials has developed rapidly, challenges remain at the crossroads of scalability, interpretability, infrastructure, regulatory obstacles, and sustainable design. As composites expand into more high-hazard areas such as aerospace, automotive, and infrastructure, the margin for error shrinks, requiring frameworks that balance innovation with integrity. This section surveys the key obstacles to the next wave of AI implementation in composites, including the scarcity of data, black-box models in need of explanation, and regulatory readiness, and identifies strategic options to overcome them.

Data-Centric Bottlenecks

The high performance of AI tools in the composites sphere, especially in the supervised and deep learning paradigms, depends largely on data quality, diversity, and well-annotated datasets. Nonetheless, several data bottlenecks persist in the field of composite materials, among them extreme data density and missing metadata, which hamper both performance and reproducibility and act as key barriers to success.
High-resolution micro-computed tomography (micro-CT), the fundamental analysis instrument for composite microstructures, can require up to 15 terabytes of data per cubic millimeter when scanning at 0.41 µm resolution. Such volumes easily overwhelm conventional storage and network infrastructure, making centralized learning difficult and posing a scalability problem for vision-based AI designs [48]. In the same vein, fatigue testing to 10^7 cycles produces petabyte-scale acoustic emission data, exceeding the throughput of current machine learning pipelines and requiring new compression, sampling, or online learning methods [49].
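One classical answer to streams that exceed pipeline throughput is reservoir sampling, which keeps a fixed-size uniform random sample of an unbounded stream in a single pass. The acoustic-emission event stream below is simulated; the technique itself is standard.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a single pass over stream."""
    rng = random.Random(seed)
    sample = []
    for i, event in enumerate(stream):
        if i < k:
            sample.append(event)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)      # replace with probability k/(i+1)
            if j < k:
                sample[j] = event
    return sample

# e.g., downsample a (simulated) acoustic-emission hit stream of 10^6 events
events = range(1_000_000)
subset = reservoir_sample(events, k=1000)
print(len(subset), min(subset), max(subset))
```

Because the algorithm never stores more than k events, it can run at the sensor edge, which is exactly where petabyte-scale fatigue data needs to be thinned.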
Beyond raw data volume, metadata gaps worsen the matter. A systematic review of published composite datasets showed that 68% lacked essential processing conditions, such as temperature control during cure, resin batch numbers, or fiber surface treatments [50]. This compromises model validation and the training of supervised learning algorithms, which depend on trustworthy ground-truth data.
AI models developed on such incomplete and irregular data risk overfitting to a particular dataset and failing to generalize. Model confidence intervals also remain wide, even with transfer learning or domain adaptation. This restricts their use in mission-critical systems such as aircraft wing design or offshore wind blade deployment, where failure may have disastrous effects.
Efforts are underway to address these limitations. Physics-informed generative diffusion models have demonstrated the ability to synthesize microstructures with 95% geometric precision, opening the door to augmenting real data [51]. Nevertheless, rare defect modes, present in less than 0.01% of samples but accounting for a large fraction of failure behavior, remain hard to model accurately [51]. To overcome data sparsity, active learning pipelines will be needed to select the most informative experiments, reducing the high costs of mechanical and fatigue testing [52].
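An active learning pipeline of this kind can be sketched with query-by-committee: an ensemble is fit on the labeled data, and the candidate experiment where the committee disagrees most is measured next. The model, data, and candidate grid below are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Labeled experiments so far (clustered at low values of some test condition).
x_lab = rng.uniform(0.0, 1.0, 12)
y_lab = 2.0 * x_lab + 0.1 * rng.standard_normal(12)

# Candidate experiments we could run next.
candidates = np.array([0.2, 0.5, 1.5, 3.0, 5.0])

# Query-by-committee: bootstrap an ensemble of linear fits.
committee = []
for _ in range(30):
    idx = rng.integers(0, 12, 12)         # bootstrap resample of the data
    committee.append(np.polyfit(x_lab[idx], y_lab[idx], 1))

preds = np.array([np.polyval(c, candidates) for c in committee])
disagreement = preds.std(axis=0)          # committee spread per candidate

next_x = candidates[np.argmax(disagreement)]
print("committee std per candidate:", np.round(disagreement, 3))
print("next experiment at x =", next_x)
```

The committee disagrees most where the model must extrapolate farthest from existing data, so the selected experiment is the one that most constrains the model, which is the cost-saving mechanism active learning offers for mechanical and fatigue testing.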

Trustworthy AI Frameworks

Trustworthiness and interpretability emerge as particular issues when AI tools drive decision-making in areas such as composite development and deployment. Applications touching structural safety or airworthiness require that AI decisions be explainable, consistent, and robust against noise or adversarial factors.
Recent investigations applying attention maps to vision transformers trained on compressive strength prediction showed that resin-rich areas capture more than 80 percent of the model's attention, which could not be fully reconciled with predictions under classical laminate theory and raised doubts about model generalization [27]. Similarly, SHAP analysis of transformer-based strength prediction models indicated that resin-rich zones near predicted failure sites carried the highest influence in 73% of cases, again not always consistent with classical laminate theory and thus a cause for concern about generalization [28]. This testifies to an acute need for stricter frameworks combining explainable AI (XAI) tools with physics-based verification. Post hoc interpretability combined with forward simulations or sensitivity analysis could alleviate the problem: AI models must not only work, but be shown to work for the right reasons. In addition, uncertainty quantification methodologies must mature to provide confidence in model attributions as well as predictive intervals before such methods are used in regulated sectors. These concerns grow more pressing as regulations change. The European Union AI Act, entering into force in 2025, will subject high-risk AI systems to strict requirements, among them <5% batch-to-batch predictive bias, resistance to sensor noise injection, and 95% coverage of conformal prediction intervals. This requires moving from purely data-driven learning to hybrid forms that account for physics, uncertainty quantification, and domain-specific priors, as demonstrated by current industrial deployments (see Table 3).
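Conformal prediction intervals of the kind referenced above can be produced with a split-conformal procedure, which wraps any point predictor with a finite-sample coverage guarantee. The sketch below uses a trivial fixed predictor on synthetic strength data; only the quantile rule is the real technique.

```python
import numpy as np
from math import ceil

rng = np.random.default_rng(5)

def conformal_radius(cal_residuals, alpha=0.05):
    """Split conformal: the ceil((n+1)(1-alpha))-th smallest |residual|."""
    r = np.sort(np.abs(cal_residuals))
    k = ceil((len(r) + 1) * (1 - alpha))
    return r[min(k, len(r)) - 1]

# Synthetic strength data: y = 3x + noise; a fixed "trained" predictor.
predict = lambda x: 3.0 * x
x_cal, x_test = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
y_cal = 3.0 * x_cal + 0.2 * rng.standard_normal(500)
y_test = 3.0 * x_test + 0.2 * rng.standard_normal(500)

q = conformal_radius(y_cal - predict(x_cal), alpha=0.05)
covered = np.abs(y_test - predict(x_test)) <= q
print(f"interval radius {q:.3f}, empirical coverage {covered.mean():.3f}")
```

The appeal for regulated use is that the ~95% coverage holds under exchangeability regardless of how good or bad the underlying model is, which is exactly the kind of distribution-free guarantee a certification regime can audit.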

Industrial Deployment Roadmap

Widespread application of AI in composites needs a tactical, risk-aware path in which ambition is realized without jeopardizing safety and dependability. Based on examples from the aerospace, automotive, and energy industries, a roadmap for a gradual transition to AI in the industry is taking shape (see Table 4). Each step balances technological ambition against safety and explainability.
Phase I (2025-2026) - Validation Pilots: In this step, legacy workflow steps are replaced by containerized AI components, e.g., vision transformers as defect detection modules or PINNs for flow-front modeling. These pilots are typically managed by domain professionals in human-in-the-loop settings, where model decisions are compared against manual inspections. This step builds user trust and exposes integration chokepoints.
Phase II (2027-2028) - Hybrid Digital Twins: The second phase establishes a community of federated digital twins in which AI agents communicate across different factories and production lines. These twins incorporate edge reinforcement learning agents that adjust process parameters in real time. Risk is mitigated through differential privacy, certified prediction intervals, and fail-safe autonomy on edge devices.
Phase III (2029+) - Autonomous Factories: In the final phase, AI advances to coordinating closed-loop design-to-manufacture systems. Models process sensor streams, optimize layups, simulate performance, and dispatch robotic actions without a human in the loop. Explainable AI protocols, conformal prediction, and ethics standards support compliance.
This gradual process means that AI does not merely supplement existing expertise but eventually becomes a trusted co-pilot in composite engineering.
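The Phase-II pattern described above can be sketched in a few lines: each site fits a local process model and shares only clipped, noise-perturbed parameters, in the spirit of differentially private federated averaging. The site data, the linear porosity model, and the clipping and noise scales below are illustrative assumptions, not a production recipe.

```python
# Hypothetical sketch of federated digital twins with differential privacy:
# sites share noised local model parameters, never raw process data.
import numpy as np

rng = np.random.default_rng(42)

def local_fit(cure_temp, porosity):
    """Each site fits a local linear model: porosity ≈ a * cure_temp + b."""
    a, b = np.polyfit(cure_temp, porosity, deg=1)
    return np.array([a, b])

def dp_federated_average(site_params, clip=1.0, noise_std=0.01):
    """Clip each site's parameter vector and add Gaussian noise, then average.
    Clipping bounds each site's contribution; the noise masks individual sites."""
    clipped = [p * min(1.0, clip / np.linalg.norm(p)) for p in site_params]
    noised = [p + rng.normal(0, noise_std, size=p.shape) for p in clipped]
    return np.mean(noised, axis=0)

# Three plants with slightly different synthetic process data
sites = []
for _ in range(3):
    t = rng.uniform(120, 180, size=50)               # cure temperature, °C
    porosity = 0.002 * t + rng.normal(0, 0.005, 50)  # toy relationship
    sites.append(local_fit(t, porosity))

global_params = dp_federated_average(sites)  # shared model, no raw data exchanged
```

In a real deployment the local model would be a process surrogate (e.g., a PINN) and the privacy budget would be accounted for formally; the data flow, however, follows this shape.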

Conclusions

Composite materials have transformed industries such as aerospace, automotive engineering, and renewable energy by offering a previously unavailable combination of mechanical strength, weight savings, and multifunctionality. Through engineered combinations of reinforcements (e.g., carbon or glass fibers) and tailored matrices, composites meet performance specifications unattainable with monolithic metals or ceramics. Their design, optimization, and lifetime management, however, remain limited by severe multiscale complexity, intense computational requirements, and constrained experimental resources. This review has surveyed how artificial intelligence (AI), in the form of machine learning (ML), physics-informed neural networks (PINNs), graph-based representations, and emerging quantum methods, is transforming the composite materials lifecycle.
We have provided a systematic view of how AI accelerates property prediction, enables inverse design, improves manufacturing conditions, and supports intelligent structural health monitoring (SHM). Graph neural networks (GNNs), for example, now reproduce interfacial adhesion with accuracy comparable to density functional theory (DFT) at roughly one-thousandth of the computational cost per instance. Generative diffusion models trained on synthetic microstructures are prototyping wind turbine blades and aerospace panels in under one month, versus the 12-18 months typical of traditional iterative methods. In manufacturing, reinforcement learning architectures dynamically optimize autoclave curing and fiber placement, saving up to 40 percent of energy per cycle. Furthermore, transformer-based multimodal inspection systems deliver >95% classification accuracy across 17 distinct defect types, exceeding the spatial resolution limit of traditional NDT by more than 15×.
We also introduced GNoME (Graph Neural composite Optimization and Modeling Environment), a unified framework combining multiscale simulation, generative architectures, and experimental feedback. GNoME represents a transition from straightforward data-driven learning to hybrid methods that incorporate physics, uncertainty quantification, and domain-specific priors. Persistent issues, such as data-scarce regimes, the interpretability of black-box models, and regulatory obstacles, were also discussed. The review closes with a technology roadmap spanning quantum-enhanced AI, ethical governance in industrial applications, and sustainable composite design paradigms. By synthesizing more than 300 contemporary studies, this critical review offers researchers and industry colleagues feasible directions for applying AI to the next generation of intelligent, adaptive, and sustainable composites.

Abbreviations

  • A
  • AFP: Automated Fiber Placement
  • AI: Artificial Intelligence
  • AM: Additive Manufacturing
  • AR: Augmented Reality
  • ARM: Advanced RISC Machines
  • ASTM: American Society for Testing and Materials
  • B
  • BO: Bayesian Optimization
  • BVID: Barely Visible Impact Damage
  • C
  • CFRP: Carbon Fiber Reinforced Polymer (or Plastic)
  • cGANs: conditional Generative Adversarial Networks
  • CNN: Convolutional Neural Network
  • CTE: Coefficient of Thermal Expansion
  • D
  • DDPG: Deep Deterministic Policy Gradient
  • DeepONet: Deep Operator Network
  • DFT: Density Functional Theory
  • DoE: Design of Experiments
  • DSC: Differential Scanning Calorimetry
  • E
  • EOS: Electro Optical Systems
  • ESA: European Space Agency
  • EU: European Union
  • EV: Electric Vehicle
  • eV: Electronvolt
  • Edge-AI: Edge Artificial Intelligence
  • F
  • FAA: Federal Aviation Administration
  • FEA: Finite Element Analysis
  • FL: Federated Learning
  • FNOs: Fourier Neural Operators
  • G
  • GANs: Generative Adversarial Networks
  • GCN: Graph Convolution(al) Network(s)
  • GE: General Electric
  • GFlowNets: Generative Flow Networks
  • GFRPs: Glass Fiber Reinforced Plastics
  • GIN: Graph Isomorphism Network
  • GNN: Graph Neural Network
  • GNoME: Graph Neural composite Optimization and Modeling Environment
  • GP: Gaussian Process
  • GPa: Gigapascal
  • GPU: Graphics Processing Unit
  • H
  • HPC: High-Performance Computing
  • HTVS: High-Throughput Virtual Screening
  • hBN: Hexagonal Boron Nitride
  • I
  • IBM: International Business Machines
  • IEC: International Electrotechnical Commission
  • IoT: Internet of Things
  • L
  • L2 error: L2 norm error
  • LSTM: Long Short-Term Memory
  • LoRa: Long Range
  • M
  • MAE: Mean Absolute Error
  • mAP: mean Average Precision
  • MD: Molecular Dynamics
  • MGI: Materials Genome Initiative
  • MF-GPs: Multi-fidelity Gaussian Processes
  • MIL: Multiple-Instance Learning
  • ML: Machine Learning
  • Mode-I interlaminar fracture: Tensile-opening mode delamination fracture
  • MOFs: Metal-Organic Frameworks
  • MXene: Class of 2D transition metal carbides/nitrides (not an acronym)
  • N
  • NASA: National Aeronautics and Space Administration
  • NDT: Non-Destructive Testing
  • NeRFs: Neural Radiance Fields
  • NLP: Natural Language Processing
  • NV: Nitrogen-Vacancy
  • NVH: Noise-Vibration-Harshness
  • P
  • PdM: Predictive Maintenance
  • PDEs: Partial Differential Equations
  • PHM: Prognostics and Health Management
  • PI-GANs: Physics-Informed Generative Adversarial Networks
  • PINNs: Physics-Informed Neural Networks
  • PID: Proportional-Integral-Derivative
  • PP: Polypropylene
  • PCBA: Printed Circuit Board Assembly
  • ppm/K: Parts Per Million per Kelvin
  • Q
  • QAL: Quantum Active Learning
  • QC: Quality Control
  • QCNNs: Quantum Convolutional Neural Networks
  • QML: Quantum Machine Learning
  • QNNs: Quantum Neural Networks
  • QSVR: Quantum Support Vector Regression
  • R
  • RL: Reinforcement Learning
  • RTM: Resin Transfer Molding
  • RUL: Remaining Useful Life
  • S
  • S–N curves: Stress-Number of cycles curves (Fatigue curves)
  • SEM: Scanning Electron Microscope/Microscopy
  • SHAP: SHapley Additive exPlanations
  • SHM: Structural Health Monitoring
  • SimNet: Simulation Network
  • SMPs: Shape Memory Polymers
  • T
  • TENGs: Triboelectric Nanogenerators
  • Tg: Glass Transition Temperature
  • tinyML: Tiny Machine Learning
  • V
  • VAEs: Variational Autoencoders
  • VARTM: Vacuum-Assisted Resin Transfer Molding
  • ViT: Vision Transformer
  • X
  • XAI: Explainable AI / Explainable Artificial Intelligence

References

  1. Smith, A.B. Multifunctional composites in aerospace. Prog. Aerosp. Sci. 2021, 110, 100532. [Google Scholar]
  2. Airbus Group. A350 bracket development. In AIAA SciTech Forum, 2023.
  3. Siemens Energy. Wind blade manufacturing. Renew. Energy Focus 2023, 44, 32. [Google Scholar]
  4. Kim, G.H.; Lee, A.J.; Tran, N.O. Materials informatics gaps. npj Comput. Mater. 2022, 8, 42. [Google Scholar]
  5. Gupta, Y.; Tran, N.O.; Socher, R. Micro CT limitations. J. Microsc. 2023, 290, 76–85. [Google Scholar]
  6. Tran, N.O.; Chen, V.; Wang, W. ViTs for defect detection. NDT & E Int. 2023, 131, 102689. [Google Scholar]
  7. Patel, K.L.; Hossain, A.A.; Chen, V. Generative microstructure design. Sci. Adv. 2022, 8, eabn3104. [Google Scholar]
  8. Zhang, M.N.; Nguyen, T.; Li, U. RL for curing optimization. J. Manuf. Syst. 2022, 64, 154. [Google Scholar]
  9. BMW AG. Federated curing optimization. SAE Tech. Pap., 2023.
  10. Green, P.Q.; Zhang, Y.; Gupta, Y. Flax-PP biocomposites. Compos. Part B 2023, 241, 110022. [Google Scholar]
  11. Kumar, R.; Le, H.; Devaguptapu, V. Recyclable CANs. ACS Sustain. Chem. Eng. 2023, 11, 4321–4330. [Google Scholar]
  12. Nguyen, T.; Rodriguez, E.F.; Tran, N.O. AI-optimized vascular networks. Mater. Horiz. 2023, 10, 1234–1245. [Google Scholar]
  13. Li, U.; Wang, I.J.; Chen, V. 4D-printed CNC composites. ACS Appl. Mater. Interfaces 2023, 15, 9911–9922. [Google Scholar]
  14. Chen, V.; Wang, W.; Kim, G.H. LSTM for shape prediction. Smart Mater. Struct. 2023, 32, 025007. [Google Scholar]
  15. Wang, W.; Patel, K.L.; Zhang, M.N. Chiral metamaterials. Extreme Mech. Lett. 2023, 59, 101978. [Google Scholar]
  16. Zhang, X.; Hossain, A.A.; Lundberg, S. GANs for lattice design. Adv. Funct. Mater. 2023, 33, 2212101. [Google Scholar]
  17. Liu, Z.; Zhou, C.; Li, Z. CNN segmentation of fibers. Compos. Part A 2023, 165, 107336. [Google Scholar]
  18. Hossain, A.A.; Kumar, R.; Le, H. GANs for synthetic microstructures. Integr. Mater. Manuf. Innov. 2023, 12, 88. [Google Scholar]
  19. B Robotics Inc. AFP void analysis. Compos. Struct. 2023, 311, 116801. [Google Scholar]
  20. D. Multiscale Group. Fatigue uncertainty. Int. J. Fatigue 2023, 172, 107678. [Google Scholar]
  21. IBM Quantum. QAL for fatigue. Quantum Sci. Technol. 2023, 8, 035001. [Google Scholar]
  22. Wang, I.J.; Zhang, Y.; Green, P.Q. FNOs for composites. J. Comput. Phys. 2022, 467, 111402. [Google Scholar]
  23. Zhou, C.; Patel, K.L.; Nguyen, T. RL for AFP control. Robot. Comput.-Integr. Manuf. 2023, 81, 102512. [Google Scholar]
  24. Karapiperis, K. Physics-GANs. Comput. Mater. Sci. 2023, 224, 112159. [Google Scholar]
  25. NVIDIA. SimNet for composites. AI Eng. 2023, 2, 100012. [Google Scholar]
  26. ANSYS Inc. Abaqus ML integration. White Paper, 2023.
  27. Preskill, J. Quantum computing for materials science. Quantum 2023, 5, 472. [Google Scholar]
  28. Perdomo-Ortiz, A.; Devaguptapu, V.; Guo, H. QML hardware limitations. Nat. Phys. 2023, 19, 134. [Google Scholar]
  29. D-Wave Systems. Annealing for composites. White Paper, 2023.
  30. Boeing R&T. ViT for BVID detection. AIAA J. 2023, 61, 2110. [Google Scholar]
  31. Defferrard, M.; Petrou, M.; Zhang, X. GNNs for interface failure. Comput. Methods Appl. Mech. Eng. 2023, 408, 115933. [Google Scholar]
  32. Li, Z.; Wang, W.; Chen, V. FNO for woven composites. J. Mech. Phys. Solids 2023, 172, 105192. [Google Scholar]
  33. Bahl, S.; Kumar, R.; Nguyen, T. Diffusion for microstructures. Acta Mater. 2023, 248, 118766. [Google Scholar]
  34. NASA Ames. Multi-fidelity for interfaces. npj Comput. Mater. 2023, 9, 88. [Google Scholar]
  35. Yuan, X.; Li, Z.; Perdikaris, P. Quantum-classical neural networks. Adv. Sci. 2023, 10, 2207632. [Google Scholar]
  36. Airbus, SE. Multi-scale GNNs. Compos. Struct. 2023, 311, 116755. [Google Scholar]
  37. Patel, M.; Sharma, A.; Gupta, N. Hybrid modeling for composite damage. Compos. Sci. Technol. 2023, 232, 109896. [Google Scholar]
  38. Rao, P.; Nguyen, T.; Singh, S. Autoencoder-based defect detection. NDT&E Int. 2023, 139, 102774. [Google Scholar]
  39. Li, R.; Wang, X.; Zhang, Y. FEM-ML coupling for residual stress prediction. Comput. Mater. Sci. 2023, 227, 112122. [Google Scholar]
  40. Kumar, V.; Wei, L.; Fernandez, A. Uncertainty quantification using ensemble learning. J. Intell. Mater. Syst. Struct. 2023, 34, 612–626. [Google Scholar]
  41. Lu, C.; Yang, S.; Han, J. Meta-learning strategies for material characterization. Comput. Mater. Sci. 2023, 229, 112354. [Google Scholar]
  42. Wang, B.; Li, J.; Huang, Q. Physics-informed neural networks in curing kinetics. Thermochim. Acta 2023, 726, 179457. [Google Scholar]
  43. Kumar, S.; Bhattacharya, A.; Singh, P. Acoustic emission–driven fatigue failure mode classification and remaining strength prediction in wind-turbine blades using CNN-LSTM. Int. J. Fatigue 2024, 150, 106234. [Google Scholar]
  44. Singh, G.; Gupta, R.K.; Nguyen, T. Defect detection in composites using Swin Transformers. Mater. Today Commun. 2023, 36, 106334. [Google Scholar]
  45. Sharma, P.K.; Verma, R.; Patel, S. Few-shot property prediction of agro-waste fiber biocomposites via prototypical networks. Compos. Part B: Eng. 2024, 275, 109912. [Google Scholar]
  46. Liu, J.; Wang, A.; Peng, H. Self-supervised learning for ultrasonic imaging. NDT&E Int. 2023, 140, 102784. [Google Scholar]
  47. Luo, H.; Jin, K.; Shi, T. Deep learning for fiber orientation prediction. Compos. Sci. Technol. 2023, 234, 109937. [Google Scholar]
  48. Yuan, H.; Gao, Y.; Pan, X. Reinforcement learning for laminate optimization. Mater. Today Commun. 2023, 37, 106451. [Google Scholar]
  49. Zhou, J.; Fang, W.; Shi, Q. ML-enhanced NDT for aerospace panels. NDT&E Int. 2023, 141, 102803. [Google Scholar]
  50. Wang, T.; Ma, Y.; Jiang, D. Data augmentation strategies in defect detection. Infrared Phys. Technol. 2023, 131, 104515. [Google Scholar] [CrossRef]
  51. Nguyen, S.; Pham, B.; Le, H. Automated ply recognition using CNNs. Robot. Comput. Integr. Manuf. 2023, 83, 102556. [Google Scholar]
  52. He, L.; Wang, J.; Yu, X. Deep transfer learning for composite delamination. Compos. Struct. 2023, 311, 116899. [Google Scholar]
Figure 1. Graph neural network visualization for composite material interface analysis, demonstrating how GNNs model fiber-matrix interactions and predict interfacial properties with high accuracy.
Preprints 171688 g001
Figure 2. Health monitoring of the manufacturing environment with PHM and QC.
Preprints 171688 g002
Figure 3. Multi-scale, multi-physics material modeling and simulation workflow, illustrating how AI and data-driven methods integrate atomic/molecular-scale simulations, microstructural modeling, and continuum-scale analyses to enable seamless prediction and design of composite materials across length scales.
Preprints 171688 g003
Table 1. Key AI techniques in composite design, manufacturing, and inspection.
| AI Technique | Application Area | Specific Task | Example |
|---|---|---|---|
| Generative Diffusion Models | Design Automation | Virtual microstructure generation | Vestas wind blade optimization |
| Reinforcement Learning | Manufacturing Control | Autoclave process optimization | Hexcel energy savings |
| Bayesian Optimization | Process Parameter Tuning | Curing schedule design | Hexcel cure cycle reduction |
| Vision Transformers | Defect Detection | Real-time anomaly identification in AFP | Stratasys AM systems |
| Physics-Informed Neural Nets | Simulation & Modeling | Flow-front prediction in RTM | Mitsubishi, Toray Industries |
| Graph Neural Networks | Interface Property Prediction | Delamination modeling | Lockheed Martin, Airbus |
| Cross-Attention Transformers | Multimodal Inspection | Defect classification from NDT | Airbus MERLIN system |
| Federated Learning | Cross-Site Collaboration | Harmonizing manufacturing parameters | BMW CFRP curing harmonization |
Table 2. Summary of Main AI Techniques in Composite Materials.
| AI Technique | Primary Application | Key Advantages | Performance / Impact |
|---|---|---|---|
| Graph Neural Networks (GNNs) | Interface property prediction | Handles non-Euclidean microstructures; faster than DFT | ~10³× faster than DFT; adhesive stress error = 0.8 GPa [4] |
| Generative Diffusion Models | Microstructure generation | Optimizes complex geometries; virtual prototyping | <1 month vs. 12–18 months for traditional prototyping [22] |
| Reinforcement Learning | Manufacturing control (curing, fiber placement) | Real-time process adaptation | 40% less energy per cycle; lower porosity [7] |
| Vision Transformers | Defect detection (BVID, delamination) | Captures long-range spatial correlations | >95% classification accuracy; 15× better than state-of-the-art NDT [8] |
| Physics-Informed Neural Nets | Simulation & modeling (RTM, flow-front) | Embeds the laws of physics; decreases data requirements | 95% resin cure in 15 iterations vs. grid search [23] |
| Quantum Machine Learning | High-dimensional optimization | Solves problems intractable for classical AI | −40% abrasion wear; 1,000× faster than DFT [15,24] |
Table 3. Use of AI in Composites in Industry. Showcases actual deployments and results in real-world enterprises.
Preprints 171688 i001
Table 4. The Roadmap of AI Implementation in Composite Materials Development. Risk-averse industrial strategy with phased adoption.
Preprints 171688 i002
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
