1. Introduction
Airfoil shape optimization is an important step in aircraft design, required to ensure a stable, safe, and economical flight. It curtails fuel cost by increasing lift and reducing drag, thereby enhancing the aerodynamic efficiency of the aircraft [1]. Airfoil optimization is also employed in designing vehicle components such as spoilers and wings, especially for high-performance aircraft and racing cars, to enhance aerodynamics, stability, and downforce. Many techniques exist for optimizing airfoils to increase their performance.
Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) have been employed for this task with mixed results, their utility depending on the specific optimization problem. Jiang et al. used an improved PSO to optimize low Reynolds number airfoils and obtained a 12% increase in performance [
2]. Ümütlü and Kiral used a genetic algorithm to optimize airfoils parametrized by Bézier curves [
3]. They performed XFOIL and CFD simulations to demonstrate that the optimized airfoils had increased aerodynamic efficiency. Chen and Li proposed a hybrid PSO-GA technique to optimize the lift, drag, and surface pressure of an airfoil, and their experimental results showed the effectiveness of the hybrid approach [
4]. Similarly, Mathioudakis et al. used GA to optimize an airfoil and successfully integrated it into a Blended-Wing Unmanned Aerial Vehicle (UAV) [
5]. This research work paved the way for data-based optimization of airfoils. In 2021, Li and Zhang introduced a high-fidelity Aerodynamic Shape Optimization (ASO) framework using a CFD-GA interface [
6]. This fast optimization method yielded a relative mean error of less than 0.4% compared with CFD-based optimization methods.
The use of Machine Learning (ML) for ASO has increased because of the extensive airfoil data now available. In their review article, Li et al. discussed cutting-edge algorithms for fast optimization of airfoils, including GANs (Generative Adversarial Networks), CNNs (Convolutional Neural Networks), and ANNs (Artificial Neural Networks) [
7]. The article by Le Clainche et al. examines the latest advances in machine learning (ML) that are having an impact on the multidisciplinary fields of aerospace engineering, aerodynamics, acoustics, combustion, and structural health monitoring [
8]. An approximate model for predicting aerodynamic behavior can be obtained from an extensive database (surrogate modelling). Recent advances in ML-based surrogate modelling have paved the way for several techniques and algorithms that exploit the power of such low-fidelity models [
9]. Researchers have used ML-based surrogate modeling for predictions of aerodynamic coefficients, load distributions, and shape optimization [
10,
11].
Many researchers have used various ML-based algorithms for design optimization. The first stage is to parametrize the airfoil curve; this parametrization defines the design space for the dataset and for the optimization of the airfoil. Agarwal and Sahu discussed a unified approach to parametrize any airfoil using Bézier curves [
12]. An airfoil can be accurately parametrized using only a sixth-degree Bézier curve. Wei et al. introduced a systematic approach for parametrizing an airfoil using Bézier curves and then used the parametrization data for airfoil optimization by applying the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to results obtained from XFOIL [
13]. The optimized E387 airfoil showed an increase in the lift-to-drag ratio. CNNs have also been used for the parametrization of airfoils; Li et al. optimized an airfoil using a CNN coupled with surrogate modeling [
14]. The results show the effectiveness of the proposed method for low Reynolds number airfoils. In many instances, CNNs have been used to parametrize airfoils for the prediction of aerodynamic coefficients, which can then be optimized using GA and PSO [
15,
16].
Deep learning has also been employed for the aerodynamic shape optimization of airfoils. Deep Reinforcement Learning (DRL) was used to optimize 20 airfoils to demonstrate the generality of the method; an enhanced lift-to-drag ratio as well as a substantial stall margin was observed for the optimized airfoils [
17]. Similarly, BPNN (Back Propagation Neural Network) has also been used for the prediction of aerodynamic coefficients [
18,
19]. Yonekura et al. utilized DRL for the optimization of turbine airfoils [
20]. They observed that although training the model took a long time, the trained model could perform optimization in much less time than conventional optimization methods. Researchers have also used deep learning for various other tasks, such as dynamic stall suppression and structural strength improvement [
21,
22].
Determining the aerodynamic performance of airfoils using wind tunnel testing and CFD simulations can be resource-intensive and time-consuming. Consequently, low-fidelity sources can be used to generate datasets for deep learning applications. XFOIL is an interactive program for computing the aerodynamic properties of airfoils; it uses panel-based computational methods, which keep the computational cost low [
23]. Morgado et al. compared the XFOIL results for an airfoil with those from CFD [
24]. They observed that XFOIL gave the best prediction results overall. XFOIL has also been coupled with machine learning for the optimization of wind turbine airfoils [
25]. In their review article, Sharma et al. discussed using XFOIL and RFOIL (software based on XFOIL) for quick prediction of aerodynamic performance coefficients; these results can be coupled with PSO, GA, or ML for rapid optimization [
26]. In recent developments, Wen et al. coupled XFOIL with airfoils fitted with Bessel polynomials and optimized them with a GABP (Genetic Algorithm Back Propagation) artificial neural network [
27].
To date, there is a substantial research gap in the direct integration of deep learning and Neural Networks (NN) with genetic algorithms for airfoil shape optimization (ASO). While researchers have integrated CNNs with traditional optimization algorithms, coupling GA with deep learning remains largely uncharted territory [
28]. The approach proposed here leverages the strengths of deep learning and genetic algorithms to solve complex optimization problems in aerodynamics and aircraft design. Because the dataset for the ML model can be generated in many ways (low-fidelity or high-fidelity methods), the optimization approach is flexible with respect to the dataset, and a Multi-Objective Genetic Algorithm (MOGA) can be incorporated into the design system as well. Our proposed method introduces direct DL-GA integration, and the results have been validated using CFD simulations and wind tunnel testing.
The optimization methodology and results are outlined in the subsequent sections.
Section 2 describes the dataset and the structure of the deep-learning model. This model is incorporated into the genetic algorithm (NSGA-II) for calculation of the optimal coefficients of lift and drag. In
Section 3, details of the CFD setup and the mesh refinement study are presented; the results are then checked against XFOIL (XFLR5) and CFD (ANSYS Fluent) simulations and validated by wind tunnel testing.
Section 4 offers a discussion of the results. The present study sets the grounds for other researchers to use different datasets and investigate aerodynamic shape optimization in different contexts.
2. Methodology for Airfoil Optimization
The main objective of our work is to obtain an airfoil with enhanced aerodynamic characteristics.
Figure 1 shows a flowchart of the procedure for training and testing the model, and
Figure 2 shows the approach to ASO using the DBFC-HA DN integrated with the GA. A GA optimizes solutions based on the principles of evolution. The process starts by creating a population of potential solutions in the form of parametric vectors; the initial individuals are normally chosen randomly so that they are as dissimilar as possible. Every candidate solution is assessed by a fitness function, in the present case the trained model, to determine its quality. The fittest individuals are selected for reproduction using selection methods such as roulette wheel or tournament selection [
29]. Selected parents undergo crossover, in which their parameters are mixed to create offspring, thus encouraging new solutions. Random mutations are then applied to preserve genetic variation and prevent the algorithm from getting stuck in local optima. This cycle of evaluation, selection, crossover, and mutation is repeated over many generations to improve the population until a termination criterion is reached, as sketched below.
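For illustration only, this evolutionary loop can be sketched as a minimal, generic single-objective skeleton in Python. It is not the NSGA-II implementation used in this work (see Section 2.9); the `fitness` callable, bound arrays, and operator choices are hypothetical placeholders.

```python
import numpy as np

def genetic_algorithm(fitness, lower, upper, pop_size=100, generations=500,
                      crossover_prob=0.9, mutation_prob=1e-4, seed=0):
    """Minimal GA skeleton (illustrative): maximizes a scalar fitness function."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))  # random initial population

    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])              # evaluate every candidate
        # Tournament selection: keep the better of two random individuals
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Uniform crossover between consecutive parents, applied with probability crossover_prob
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        do_cx = rng.random((pop_size, 1)) < crossover_prob
        children = np.where(do_cx, children, parents)
        # Random-reset mutation, clipped to the design space
        mutate = rng.random(pop.shape) < mutation_prob
        children = np.where(mutate, rng.uniform(lower, upper, pop.shape), children)
        pop = np.clip(children, lower, upper)

    return max(pop, key=fitness)                                  # best individual found
```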
In the DBFC-HA DN coupled with GA, there are two output labels: the coefficient of lift and the coefficient of drag. The model predicts these from 28 input features (independent variables): the upper and lower control points of the 2D airfoil (26 control-point coordinates), the Reynolds number, and the angle of attack. The main goal of this study is to optimize the aerodynamic efficiency of the airfoil, defined as
$$\eta_{AE} = \frac{C_L}{C_D}.$$
2.1. Bezier Curve Parametrization
Parametrization of an airfoil can be performed using Bezier curves. Bezier curve parameterization for airfoils uses a set of control points to define the curve shape, providing a smooth and flexible surface contour by employing Bernstein polynomials to perform interpolation between these points. This method allows precise control over the airfoil geometry, facilitating the optimization of aerodynamic parameters and airfoil design [
30].
The leading and trailing edges of the two Bézier curves, which represent the upper and lower airfoil surfaces, must match to ensure C0 continuity and to produce a closed-loop curve. The equations of the Bézier curve parametrization include [
31]:
$$C(t) = \sum_{i=0}^{k} B_{i,k}(t)\,P_i, \qquad B_{i,k}(t) = \binom{k}{i}\, t^{i} (1-t)^{k-i}, \qquad 0 \le t \le 1.$$
Hence, the control points will be:
$$P = \{P_0, P_1, \ldots, P_k\},$$
where $B$ is the Bernstein polynomial, $k$ is its degree, $(k+1)$ defines the order of the parametrization, $t$ ranges from 0 to 1, $i$ ranges from 0 to $k$, and $P_i$ is a control point (vertex) of the curve.
Figure 3 shows the application of sixth-degree Bézier curve parametrization to represent the airfoil shape.
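As a hedged illustration of this parametrization, one airfoil surface can be evaluated from its control points via the Bernstein basis as sketched below; the control-point array `P_upper` is a hypothetical placeholder, not the coordinates used in this study.

```python
import numpy as np
from scipy.special import comb

def bernstein(i, k, t):
    """Bernstein basis polynomial B_{i,k}(t)."""
    return comb(k, i) * t**i * (1.0 - t)**(k - i)

def bezier_curve(P, num=200):
    """Evaluate a degree-k Bézier curve from its (k+1) control points P, shape (k+1, 2)."""
    k = len(P) - 1
    t = np.linspace(0.0, 1.0, num)
    basis = np.stack([bernstein(i, k, t) for i in range(k + 1)], axis=1)  # (num, k+1)
    return basis @ P                                                       # (num, 2) curve points

# Hypothetical upper-surface control points (leading edge first, trailing edge last)
P_upper = np.array([[0.00, 0.000], [0.00, 0.060], [0.25, 0.090],
                    [0.50, 0.070], [0.75, 0.040], [0.95, 0.010], [1.00, 0.000]])
xy_upper = bezier_curve(P_upper)
```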
2.2. Dataset
The dataset consists of data generated by XFLR5, a low-fidelity analysis tool, for low Reynolds number airfoils. Data for 50 airfoils were obtained by varying the Reynolds number from 1×10⁵ to 1×10⁶ and the angle of attack from −5° to 18°. A total of 38,239 instances are available, each described by 28 predictors (features): the x and y coordinates of 13 control points (six each on the upper and lower surfaces of the airfoil, plus C0 at the leading edge), the Reynolds number, and the angle of attack. The airfoil shapes were parameterized into control points using sixth-degree Bézier curves. The two response variables (target outputs) are CL and CD. The model was trained using 34,416 instances and evaluated with 3,823 instances per fold through a 10-fold cross-validation approach. A detailed list of the predictors is presented in
Table 1.
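For illustration, the assembly of the 28-feature predictor matrix and the 10-fold split could look like the sketch below; the placeholder arrays stand in for the XFLR5-generated data and are not the actual dataset.

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder data with the documented shapes; real values come from the XFLR5 dataset.
n = 38239
rng = np.random.default_rng(0)
control_points = rng.random((n, 26))        # 13 Bézier control points, x and y coordinates
reynolds = rng.uniform(1e5, 1e6, n)         # Reynolds number
aoa = rng.uniform(-5.0, 18.0, n)            # angle of attack (degrees)
cl, cd = rng.random(n), rng.random(n)       # target coefficients (dummy values here)

X = np.hstack([control_points, reynolds[:, None], aoa[:, None]])  # (n, 28) predictors
y = np.stack([cl, cd], axis=1)                                     # (n, 2) targets: CL, CD

kfold = KFold(n_splits=10, shuffle=True, random_state=42)
for train_idx, test_idx in kfold.split(X):
    X_train, X_test = X[train_idx], X[test_idx]   # roughly 34,416 / 3,823 instances per fold
    y_train, y_test = y[train_idx], y[test_idx]
    # ...fit and evaluate the model on this fold
```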
2.3. Development of DL Model
The experiments were conducted on a Lenovo P500 ThinkStation featuring an Intel® Xeon® E5-2650 v3 processor (10 physical cores, 20 logical threads @ 2.3 GHz), 64 GB of RAM, and an NVIDIA GeForce GTX 1080 Ti GPU with 11 GB of VRAM. GPU-accelerated computations were enabled using CUDA Toolkit 11.8 and compatible NVIDIA drivers. The system ran on Microsoft Windows 10 Pro (Build 19045).
Model implementation utilized the TensorFlow framework (with CUDA support) and the Keras API for designing and training DL architectures. Key components included the Adam optimizer and the EarlyStopping and ReduceLROnPlateau callbacks for enhancing training efficiency. Evaluation was performed using scikit-learn, specifically StandardScaler for normalization, train_test_split for partitioning, and metrics such as R², MSE, MAE, and RMSE for model assessment. Visualization tasks were handled with matplotlib. The input features, including the Reynolds number and angle of attack, were normalized using StandardScaler to ensure zero mean and unit variance for each feature. The model outputs two key aerodynamic coefficients: the lift coefficient (CL) and the drag coefficient (CD).
This study proposes a DBFC-HA DN, a DL strategy for multi-task learning in which different branches learn different outputs from the same input, for predicting CL and CD. As detailed in Table 2, the architecture begins with a shared base network that processes the input features through a 128-unit Dense layer with Swish activation, selected for its smooth gradient behavior, followed by batch normalization for training stability. From this shared backbone, the network splits into two specialized branches:
CL Prediction Branch: This path begins with a 128-unit Dense layer using ReLU activation, followed by a second ReLU-activated Dense layer with 64 neurons. These layers are designed to extract and refine lift-related features. The branch concludes with a single-neuron output layer using Linear activation, ideal for continuous output.
CD Prediction Branch: Recognizing the more complex nature of drag force, this branch employs a deeper and more regularized structure. It starts with a 256-unit Dense layer using GELU activation, followed by a 128-unit GELU-activated Dense layer. A Dropout layer (rate: 10%) is included to mitigate overfitting. A final 64-unit GELU-activated Dense layer further refines the features before reaching the Linear-activated output neuron, delivering the continuous CD prediction.
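As an illustration, a minimal Keras functional-API sketch of this dual-branch layout is given below, with layer sizes taken from Table 2; it is a simplified reconstruction under stated assumptions, not the authors' exact training script.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

def build_dbfc_ha_dn(n_features=28, l2=1e-5):
    """Sketch of the dual-branch, hybrid-activation network described in Table 2."""
    reg = regularizers.l2(l2)
    inputs = layers.Input(shape=(n_features,))

    # Shared backbone: Swish-activated Dense layers with batch normalization
    x = layers.Dense(256, activation="swish", kernel_regularizer=reg)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(192, activation="swish", kernel_regularizer=reg)(x)
    x = layers.BatchNormalization()(x)

    # CL branch: ReLU layers ending in a linear output
    cl = layers.Dense(128, activation="relu", kernel_regularizer=reg)(x)
    cl = layers.Dense(64, activation="relu", kernel_regularizer=reg)(cl)
    cl_out = layers.Dense(1, activation="linear", name="cl")(cl)

    # CD branch: deeper GELU path with 10% dropout for regularization
    cd = layers.Dense(256, activation="gelu", kernel_regularizer=reg)(x)
    cd = layers.Dense(128, activation="gelu", kernel_regularizer=reg)(cd)
    cd = layers.Dropout(0.1)(cd)
    cd = layers.Dense(64, activation="gelu", kernel_regularizer=reg)(cd)
    cd_out = layers.Dense(1, activation="linear", name="cd")(cd)

    return Model(inputs, [cl_out, cd_out])

model = build_dbfc_ha_dn()
```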
2.4. Loss Function
To account for the differing importance of CL and CD in aerodynamic applications, a weighted mean squared error (MSE) loss function is used. It combines MSE terms for both CL and CD but assigns asymmetric weights (0.3 for CL and 0.7 for CD) to emphasize drag prediction. This weighting drives the model to minimize CD errors aggressively while preserving CL accuracy. The network is trained using the Adam optimizer, with L2 regularization (λ = 10⁻⁵) and dropout applied to prevent overfitting. The dual-branch architecture allows task-specific feature learning, and, coupled with this loss function, the model aligns with real-world aerodynamic objectives. By amplifying the contribution of CD to the loss, the architecture achieves enhanced precision in drag prediction, which is crucial for optimizing wing and airfoil designs.
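Continuing the sketch above, this weighting can be expressed through Keras loss_weights at compile time; the initial learning rate and the R² metric registration (which yields the val_cd_r2_score quantity monitored later) are illustrative assumptions.

```python
# Weighted MSE: the CD branch contributes 70% of the total loss and CL 30%.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),   # assumed initial rate
    loss={"cl": "mse", "cd": "mse"},
    loss_weights={"cl": 0.3, "cd": 0.7},
    # R2Score is available in recent Keras releases; substitute a custom metric otherwise.
    metrics={"cl": [tf.keras.metrics.R2Score(name="r2_score")],
             "cd": [tf.keras.metrics.R2Score(name="r2_score")]},
)
```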
2.5. Activation Functions’ Hybrid
This study employs a hybrid approach to activation functions to enhance the model’s representational power. The Swish activation function [
32] is used in the shared base of the network to facilitate smooth gradient flow and capture fine-grained input patterns. The CD prediction branch utilizes GELU activations [
33], known for their probabilistic behavior, which helps in learning nuanced, context-rich features and accelerates convergence. In contrast, the CL branch employs ReLU activations [
34], which are effective for extracting sparse, discriminative features, particularly useful in learning more binary or sharply defined patterns. For the output layers of both branches, Linear activations [
35] are used to preserve the continuous nature of the predicted aerodynamic coefficients. The model was further fine-tuned through hyperparameter optimization across various predictors. This strategic use of multiple activation functions allows each branch to specialize in its respective task, ultimately improving the model’s accuracy and convergence behavior.
2.6. Model Parametrization
The parameters within the DBFC-HA DN model, as listed in Table 3, were tuned during training to best fit the data. The DBFC-HA DN was configured with a bifurcated architecture, employing shared layers followed by two separate output branches for CL and CD predictions. The model uses separate mean squared error (MSE) losses for the lift (CL) and drag (CD) coefficient predictions. A weighted combination of these losses guides optimization, with CD assigned a higher weight (0.7) to prioritize drag prediction accuracy, while CL receives a lower weight (0.3) to maintain awareness of lift dynamics. This weighting compensates for CD's smaller physical magnitude while ensuring that both aerodynamic properties influence training.
The model was compiled using the Adam optimizer, and a three-phase learning rate schedule was employed, as illustrated in Table 4: Phase 1 (<50 epochs) sets the initial learning rate, Phase 2 (50–100 epochs) applies intermediate adjustments, and Phase 3 (100–150 epochs) performs the final fine-tuning. Training was carried out for 150 epochs with a mini-batch size of 64. An early-stopping mechanism with a patience of 20 epochs, based on the validation CD R² score, was used to prevent overfitting; the best model weights, as determined by this criterion, were saved to the file best_model.keras. To improve generalization, L2 regularization was applied with a weight decay coefficient of λ = 10⁻⁵. Furthermore, a 10-fold cross-validation strategy was adopted to ensure the reliability of the model performance across the dataset.
Feature inputs were standardized using StandardScaler, while CL and CD outputs were scaled independently using separate StandardScalers to account for differing distributions. The shared backbone of the DBFC-HA DN architecture consisted of a 256-unit Swish-activated layer followed by batch normalization, then a 192-unit Swish layer with another batch normalization. The CL prediction branch followed a sequence of 128 and 64 neurons activated with ReLU, concluding with a single linear output. In parallel, the CD prediction branch used a deeper path: 256, 128, and 64 neurons with GELU activation, incorporating a dropout layer (rate = 0.1) after the second hidden layer, and ending with a single linear output node.
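Continuing the sketch, the scaling, the three-phase learning rate schedule (best configuration from Table 4), and the callbacks described above could be wired as follows; the validation split, the monitored metric name, and the restore_best_weights flag are assumptions for illustration.

```python
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import callbacks

# Independent scalers for the features and for each target.
x_scaler, cl_scaler, cd_scaler = StandardScaler(), StandardScaler(), StandardScaler()
X_train_s = x_scaler.fit_transform(X_train)
y_cl_s = cl_scaler.fit_transform(y_train[:, [0]])   # CL column
y_cd_s = cd_scaler.fit_transform(y_train[:, [1]])   # CD column

def three_phase_lr(epoch, lr):
    """Three-phase schedule: 5e-4 (epochs < 50), 1e-4 (50-99), 5e-5 (>= 100)."""
    if epoch < 50:
        return 5e-4
    if epoch < 100:
        return 1e-4
    return 5e-5

cbs = [
    callbacks.LearningRateScheduler(three_phase_lr),
    callbacks.EarlyStopping(monitor="val_cd_r2_score", mode="max",
                            patience=20, restore_best_weights=True),
    callbacks.ModelCheckpoint("best_model.keras", monitor="val_cd_r2_score",
                              mode="max", save_best_only=True),
]

history = model.fit(X_train_s, {"cl": y_cl_s, "cd": y_cd_s},
                    validation_split=0.1, epochs=150, batch_size=64,
                    callbacks=cbs, verbose=2)
```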
2.7. Sensitivity Analysis of Loss and Learning Rate with Epochs
The model was trained for 150 epochs, and the resulting learning curves are presented in
Figure 4, which illustrates the overall training process. In Figure 4(a), the abscissa denotes the number of training epochs, while the ordinate indicates the loss values. The blue and green curves represent the training losses of CL and CD, respectively, and the orange and red curves represent the corresponding validation losses. All curves exhibit a consistent downward trend over time, indicating that the model learns effectively from the training data while maintaining good generalization performance on unseen test data.
Figure 4(b) depicts the effect of the learning rate schedule on performance during training, where the abscissa again shows the number of epochs and the ordinate represents the R² score. The learning rate is initially set to a high value to enable rapid learning in the early stages and then decreases sharply to allow the model to fine-tune the CL and CD predictions. This schedule helps prevent the model from converging prematurely to a suboptimal solution and promotes better overall performance.
2.8. Ablation Study for Model Generalization
To investigate the generalization capability of the proposed DBFC-HA DN model, an ablation study was carried out by varying the learning rate schedule across three progressive training phases, as illustrated in Figure 5: Phase 1 (epochs 1–50), Phase 2 (51–100), and Phase 3 (101–150). This approach was designed to analyze how different learning rate phase combinations affect the model’s ability to predict CL and CD accurately. The performance of the DBFC-HA DN model was evaluated using four widely accepted statistical metrics: R-squared (R²), root mean squared error (RMSE), mean squared error (MSE), and mean absolute error (MAE) [
36]. These metrics provide a comprehensive understanding of both the accuracy (through R2) and the error magnitude (via RMSE, MSE, and MAE) of the model’s predictions.
Table 4 presents the results of this epoch-variation study, showing the model’s performance for both CL and CD under different phase schedule settings. Among the tested combinations, the configuration using learning rates of 0.0005 (Phase 1), 0.0001 (Phase 2), and 0.00005 (Phase 3) outperformed the others, achieving the highest R² scores of 0.9897 ± 0.0006 for CL and 0.9506 ± 0.0053 for CD. This indicates that the model was able to explain approximately 99% and 95% of the variance in CL and CD, respectively, suggesting a high degree of model reliability. Additionally, this configuration also resulted in the lowest RMSE and MAE values, which indicate smaller average prediction errors, further confirming its effectiveness.
The standard deviation values, obtained from ten-fold cross-validation, are relatively low, highlighting the consistency of the DBFC-HA DN performance across different data partitions. It is also worth noting that the largest predicted values for CL and CD across all configurations were 1.9556 and 0.23535, respectively, serving as a reference for the model’s prediction range. Overall, the ablation study demonstrates that careful tuning of phase schedules significantly enhances the DBFC-HA DN model’s generalization ability.
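For reference, the fold-level metrics reported in Table 4 can be computed with scikit-learn as in the sketch below; the per-fold prediction arrays are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

def fold_metrics(y_true, y_pred):
    """R², RMSE, MSE, and MAE for one output (e.g., CL or CD) on one fold."""
    mse = mean_squared_error(y_true, y_pred)
    return {"R2": r2_score(y_true, y_pred),
            "RMSE": np.sqrt(mse),
            "MSE": mse,
            "MAE": mean_absolute_error(y_true, y_pred)}

# Aggregate over the 10 folds as mean ± standard deviation, as reported in Table 4, e.g.:
# fold_results = [fold_metrics(y_cd_true_k, y_cd_pred_k) for k in range(10)]
# r2_vals = [m["R2"] for m in fold_results]
# r2_mean, r2_std = np.mean(r2_vals), np.std(r2_vals)
```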
2.9. Genetic Algorithm
GA was used for airfoil optimization because of its efficiency in solving highly nonlinear, multidimensional problems. Aerodynamic optimization typically involves multiple conflicting objectives, for instance, maximizing lift while minimizing drag. Many conventional approaches cannot search the solution space effectively when dealing with a large number of variables and non-convexity [
13]. In such cases, GAs, which are based on the principles of natural selection, use selection, crossover, and mutation for both exploration and exploitation and can find global optima. The trained DBFC-HA DN model was incorporated as the fitness function to estimate CL and CD rapidly, which is computationally far less intensive than running high-fidelity simulations for every candidate solution.
Table 5 shows the settings used for NSGA-II (Non-dominated Sorting Genetic Algorithm II), which is integrated with the DL model. In Table 5, eta is the distribution index and the sampling method is the method used to initialize the population. The design space (lower and upper bounds) was derived from the reference airfoil, polynomial mutation was used as the mutation operator, and the DBFC-HA DN was used to evaluate CL and CD at different points in the design space.
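A minimal pymoo-style sketch of this NSGA-II setup (assuming pymoo ≥ 0.6 import paths) is shown below, reusing the surrogate model and scalers from the earlier sketches. The 26 design variables are the Bézier control-point coordinates; the bound arrays and the fixed Reynolds number and angle of attack are illustrative assumptions rather than the exact configuration used in this study.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.sampling.rnd import FloatRandomSampling
from pymoo.operators.crossover.sbx import SBX
from pymoo.operators.mutation.pm import PolynomialMutation
from pymoo.optimize import minimize

class AirfoilProblem(ElementwiseProblem):
    """Two objectives: maximize CL (negated) and minimize CD, both predicted by the surrogate."""

    def __init__(self, lower, upper, re, aoa):
        super().__init__(n_var=26, n_obj=2, xl=lower, xu=upper)
        self.re, self.aoa = re, aoa            # held fixed here for illustration

    def _evaluate(self, x, out, *args, **kwargs):
        feats = x_scaler.transform(np.r_[x, self.re, self.aoa].reshape(1, -1))
        cl_s, cd_s = model.predict(feats, verbose=0)
        cl = cl_scaler.inverse_transform(cl_s)[0, 0]
        cd = cd_scaler.inverse_transform(cd_s)[0, 0]
        out["F"] = [-cl, cd]                   # negate CL so both objectives are minimized

# Hypothetical design-space bounds built around the reference airfoil's control points
lower_bounds = np.full(26, -0.25)
upper_bounds = np.full(26, 1.0)

algorithm = NSGA2(pop_size=100,
                  sampling=FloatRandomSampling(),
                  crossover=SBX(prob=0.9, eta=15),
                  mutation=PolynomialMutation(prob=1e-4, eta=20),
                  eliminate_duplicates=True)

res = minimize(AirfoilProblem(lower_bounds, upper_bounds, re=5e5, aoa=5.0),
               algorithm, ("n_gen", 500), seed=1, verbose=False)
pareto_designs, pareto_objectives = res.X, res.F
```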
Figure 1.
A flowchart depicting the training and testing phases of DBFC-HA DN, utilized as a fitness function within the GA.
Figure 2.
A flowchart illustrating the application of ASO through the evolutionary cycle of the genetic algorithm, with DBFC-HA DN incorporated as the fitness function.
Figure 3.
Bezier curve parametrization of airfoil (y/c and x/c are normalized coordinates) [28].
Figure 4.
Training phase exploration for the computation of CL and CD, (a) variation of loss of aerodynamic coefficients in training and validation phases, and (b) R2 variation with epochs for aerodynamics parameterization.
Figure 5.
Learning rate against the Epoch-variation schedule based on three phases trialed in three groups (Table 4).
Figure 6.
C-Mesh for Airfoil CFD.
Figure 7.
Mesh refinement study.
Figure 8.
Experimental setup for subsonic wind tunnel testing.
Figure 9.
Validation of the CFD results for lift coefficient of optimized airfoil.
Figure 10.
Validation of the CFD results for drag coefficient of optimized airfoil.
Figure 11.
Comparison of the profiles of baseline and optimized airfoils.
Figure 12.
Comparison of (a) CL vs. AoA and (b) CL/CD vs. AoA for optimized and original airfoils.
Figure 13.
Comparison of CFD results for optimized and original airfoils.
Figure 14.
CP of Original vs. Optimized Airfoil at 0° AoA.
Figure 15.
CP of Original vs. Optimized Airfoil at 5° AoA.
Figure 16.
Pressure contours for original (left) vs. optimized (right) airfoil at 0° (a,b), 5° (c,d), 10° (e,f), and 15° (g,h) angles of attack.
Table 1.
Details of predictors used in DBFC-HA DN with a cardinality of 28.

| Input Features | Cardinality |
| --- | --- |
| Lower curve control points (x coordinates) | 7 |
| Upper curve control points (x coordinates) | 6 |
| Lower curve control points (y coordinates) | 7 |
| Upper curve control points (y coordinates) | 6 |
| Angle of attack | 1 |
| Reynolds number | 1 |
Table 2.
A layer-wise overview of the model, detailing types, parameters, and activation functions, designed to optimize performance while maintaining computational efficiency.

| Layer | Neurons count | Activation function | Purpose |
| --- | --- | --- | --- |
| Input | 28 | X | Inputs are fed to the model instance-wise |
| Dense (Shared) | 256 | Swish | Feature extraction with smooth gradients |
| BatchNorm | X | X | Normalizes activations |
| Dense (Shared) | 192 | Swish | Higher-level features |
| BatchNorm | X | X | Normalizes activations |
| Branch: CL | | | |
| Dense | 128 | ReLU | Lift-specific features |
| Dense | 64 | ReLU | Lift-specific features |
| Output (CL) | 1 | Linear | Unbounded* CL prediction |
| Branch: CD | | | |
| Dense | 256 | GELU | Drag-specific features |
| Dense | 128 | GELU | Nonlinear drag relationships |
| Dropout | 0.1 | X | 10% Regularization |
| Dense | 64 | GELU | Final drag features |
| Output (CD) | 1 | Linear | Unbounded* CD prediction |
Table 3.
Model parameterization details for DBFC-HA DN.

| Parameter | Configuration |
| --- | --- |
| Model Type | Dual Branch Fully Connected Hybrid Activated Deep Network |
| Loss Function | ‘mse’ |
| Compilation Loss Weights | CL: 0.3, CD: 0.7 |
| Optimizer | Adam |
| Learning Rate Schedule | Phase 1 (<50 epochs), Phase 2 (50–100), Phase 3 (100–150); see Table 4 |
| Training Epochs | 150 |
| Batch Size | 64 |
| Early Stopping | Patience = 20 (val_cd_r2_score) |
| Model Checkpoint | Save best weights to ‘best_model.keras’ |
| Regularization | L2 (λ = 10⁻⁵) |
| K-Fold Validation | 10 partitions (folds) |
| Feature Scaling | StandardScaler |
| Target Scaling | Separate StandardScalers for CL/CD |
| Shared Layers | 256 (swish), BN, 192 (swish), BN |
| CL Branch | 128 (relu), 64 (relu), 1 (linear) |
| CD Branch | 256 (gelu), 128 (gelu), Dropout(0.1), 64 (gelu), 1 (linear) |
Table 4.
Performance measures for DBFC-HA DN Model with phase schedules.

| Phase 1 (1–50) | Phase 2 (50–100) | Phase 3 (100–150) | Measure | CL | CD |
| --- | --- | --- | --- | --- | --- |
| 0.001 | 0.005 | 0.0001 | R² | 0.9872 ± 0.0010 | 0.9410 ± 0.0053 |
| | | | RMSE | 0.0809 ± 0.0030 | 0.0809 ± 0.0030 |
| | | | MSE | 0.0065 ± 0.0005 | 0.0001 ± 0.0000 |
| | | | MAE | 0.0546 ± 0.0029 | 0.0037 ± 0.0001 |
| 0.0005 | 0.0001 | 0.00005 | R² | 0.9897 ± 0.0006 | 0.9506 ± 0.0053 |
| | | | RMSE | 0.0725 ± 0.0018 | 0.0725 ± 0.0018 |
| | | | MSE | 0.0053 ± 0.0003 | 0.0000 ± 0.0000 |
| | | | MAE | 0.0436 ± 0.0015 | 0.0034 ± 0.0001 |
| 0.002 | 0.001 | 0.0002 | R² | 0.9889 ± 0.0013 | 0.9470 ± 0.0061 |
| | | | RMSE | 0.0752 ± 0.0043 | 0.0752 ± 0.0043 |
| | | | MSE | 0.0057 ± 0.0007 | 0.0000 ± 0.0000 |
| | | | MAE | 0.0475 ± 0.0050 | 0.0039 ± 0.0001 |
Table 5.
Settings of Genetic Algorithm.

| Parameter | Value/Setting |
| --- | --- |
| Population Size | 100 |
| Sampling Method | FloatRandomSampling() |
| Crossover Operator (crossover) | SBX(prob=0.9, eta=15) |
| Mutation Operator (mutation) | PolynomialMutation(prob=10⁻⁴, eta=20) |
| Eliminate Duplicates | TRUE |
| Termination Criteria (termination) | No. of Generations = 500 |
| Maximize | CL |
| Minimize | CD |
| Fitness Test/Function | DBFC-HA DN Model |
Table 6.
Validation of CFD results with wind tunnel testing.

| AOA (Degrees) | CL Wind Tunnel | CD Wind Tunnel | CL CFD | CD CFD | % Diff. CL | % Diff. CD |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.290 | 0.081 | 0.313 | 0.079 | 8.00 | 3.29 |
| 5 | 0.754 | 0.116 | 0.719 | 0.106 | 4.62 | 8.90 |
| 8 | 0.812 | 0.139 | 0.881 | 0.127 | 8.57 | 8.42 |
| 10 | 1.287 | 0.157 | 1.333 | 0.144 | 3.60 | 8.00 |
| 12 | 1.252 | 0.174 | 1.299 | 0.163 | 3.70 | 6.47 |
| 15 | 1.229 | 0.209 | 1.264 | 0.196 | 2.83 | 6.11 |
Table 7.
Comparison of geometric properties of baseline and optimized airfoils.

| Property | Baseline (NACA 65(2)-415) | Optimized |
| --- | --- | --- |
| Percentage Thickness | 14.99 | 12.90 |
| Maximum Thickness Position (%) | 39.94 | 38.64 |
| Maximum Camber (%) | 2.20 | 2.24 |
| Maximum Camber Position (%) | 50.05 | 52.15 |
Table 8.
Comparison of lift and drag coefficient predictions for the optimized and original airfoils by XFOIL.

| AOA (°) | CL (Original) | CD (Original) | CL (Optimized) | CD (Optimized) | ηAE (Original) | ηAE (Optimized) | % Increase |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.308 | 0.012 | 0.328 | 0.010 | 25.133 | 31.830 | 26.646 |
| 5 | 0.848 | 0.011 | 0.818 | 0.010 | 73.957 | 84.110 | 13.728 |
| 5.5 (Max.) | 0.888 | 0.011 | 0.881 | 0.010 | 78.459 | 86.460 | 10.197 |
| 10 | 0.849 | 0.031 | 1.083 | 0.025 | 27.359 | 43.337 | 58.403 |
| 15 | 0.887 | 0.082 | 1.266 | 0.061 | 10.803 | 20.890 | 93.371 |
Table 9.
Airfoil performance predictions for the optimized and original airfoils using ANSYS Fluent (CFD).

| AOA (°) | CL (Original) | CD (Original) | CL (Optimized) | CD (Optimized) | ηAE (Original) | ηAE (Optimized) | % Increase |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.00 | 0.242 | 0.016 | 0.288 | 0.016 | 14.834 | 18.461 | 24.455 |
| 5.00 | 0.790 | 0.021 | 0.810 | 0.021 | 37.289 | 39.404 | 5.672 |
| 5.50 | 0.838 | 0.023 | 0.857 | 0.022 | 37.209 | 39.749 | 6.827 |
| 10.00 | 1.194 | 0.039 | 1.199 | 0.038 | 30.828 | 31.717 | 2.883 |
| 12.00 | 1.270 | 0.050 | 1.280 | 0.050 | 25.400 | 25.600 | 0.787 |
| 15.00 | 1.223 | 0.099 | 1.141 | 0.101 | 12.382 | 11.297 | −8.763 |