Verification and Validation in Computational Mechanics

Decades ago, when computational power was expensive and limited, structural design was mostly performed by hand calculations using simple mathematical models. For example, it was common practice to design a structure as complex as an aircraft wing using simple beam analysis. However, ever since the classic paper by Turner et al., a rapid increase in computational power has allowed increasingly complex mathematical models to be used to simulate the physical behavior of complex structural components. To solve problems intractable by hand calculation, numerical techniques such as Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD) and the Finite Difference Method are employed. In fact, the availability of these methods has led to the development of an entirely new area of research known as Multidisciplinary Design Optimization (MDO), in which several disciplines are considered in a single optimization problem. The most important question when using a mathematical model to represent practical industrial problems is to what extent the model represents the real-life situation. Computational models are always built upon assumptions. Simply looking at the simulation outcomes, i.e. the graphical and numerical results, it is often very difficult to ensure that the underlying assumptions hold and that the results are reliable. This has led to the development of another field of research known as Verification and Validation (V&V for short).

For complex structures like submarines and ships, the US Navy has become increasingly reliant on computational models and simulations. Many important decisions are made by the engineers designing vehicles for the Navy using computational simulation results, but they often struggle with inconsistency in results caused by the use of different computational models. As mentioned earlier, computational models are developed based on assumptions. The following assumptions are common when solving problems involving complex structures: 1) For metallic structures, the material is often assumed to be linearly elastic and isotropic. This means that the material, even in the presence of a complex microstructure with grain boundaries, dislocations, voids, micro-cracks, etc., is modeled as a homogeneous continuum. The properties of a mechanical component after manufacturing are almost never exactly the same as those assumed in the simulation.
2) In reality, structures are subjected to highly complex and often random loads, whereas in structural design and optimization the analysis is usually run under certain idealized loading conditions.
3) Boundary conditions are idealized. Examples of such boundary conditions include simply supported, pinned, fixed, etc. In reality the boundary is often too complex to be represented by a pure form of any of these idealized boundary conditions. 4) Many details are excluded at the conceptual level to save computational resources. For example, in the mathematical model of a submarine, the bulkhead is often approximated as a simple plate with T-joints, whereas in reality it is a complex structure made of several components. Moreover, in a structural assembly different components are connected using bolted joints, welded joints, rivets, etc. The detailed features of these joints are often excluded from the mathematical model.
Approximations should be made such that the mathematical model is sufficient to answer the requisite quantitative questions, and making them requires sound engineering judgement. The major goal of Verification and Validation is to systematically collect evidence that the computational model has sufficient fidelity for analyzing the engineering component at hand.

II. Verification and Validation Definitions
The definition of Validation according to ASME V&V 10-2006 [25] is "the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model." Such determination can be made using either data from physical tests (classic validation) or from simulation models with higher fidelity physics (hierarchical validation).
The definition of Verification, as quoted from ASME V&V 10-2006, is "the process of determining that a computational model accurately represents the underlying mathematical model and its solutions". It is important that the accuracy criteria are specified prior to the initiation of model development and of the experimental activities performed for validation purposes. For experimental validation, a V&V plan should be prepared that includes a detailed specification of the intended use of the model, a description of the global system, its subsystems and their interactions, and the list of experiments that need to be performed. It may also include details about the approach used to verify the model.

III. Hierarchical Model Development and Bottom-Up Approach
Complex models are usually hierarchical in structure. They consist of components linked together to form subassemblies; subassemblies in turn form assemblies, which make up the global structure. For V&V purposes, a bottom-up approach is recommended. The accuracy of each individual component should be validated first, followed by the mutual interactions of the components within the subassembly to which they belong, and so on. If validation is performed only on the system as a whole using experimental data, the following issues can arise: 1) For complex systems, if there is a significant disagreement between experiment and simulation, it often becomes difficult if not impossible to determine which components are responsible for the disagreement.
2) Even if there is good agreement between the experiment and its simulation, it might be due to error cancellation from different components.
A bottom-up approach will result in confidence in each level of the hierarchical structure.

IV. Conceptual, Mathematical and Computational Model
The modeling activity starts with a preliminary conceptual model which can be defined as

V. Metrics for Verification and Validation
The solution output as a function of the user-provided input parameters is known as the response. The key responses in a computational mechanics problem are as follows:
• Displacements: Displacement fields are easy to measure using Digital Image Correlation (DIC). Displacements can be translational or rotational.
• Energy: Strain energy is the energy stored in the system due to deformation and is computed by integrating stress over the strain domain. Depending on the geometry of the component and the loading condition, some of the terms in the energy expression can be ignored. For example, for a beam subjected to pure bending loads, the strain energy is mostly due to the bending moment and is called bending strain energy. The total energy input from the applied loads also includes the portion that is dissipated (through frictional, plastic or viscous mechanisms), so energy can be classified as stored (strain) energy and dissipated energy.
• Natural frequencies and mode shapes: Modes can also be determined easily using accelerometers and strain gauges.
Responses are used to formulate metrics for V&V comparison purposes. Depending on the problem, the following types of metrics are useful:
• Pointwise quantities: the response at a particular point.
• Average values: usually the response averaged over a certain area is used; the average of a response over the entire structure is rarely useful.
• Total energy sums: these can be computed over the entire structure or over regions of interest.
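As a concrete illustration, the sketch below (with made-up numbers) shows how such metrics might be extracted from nodal and element result arrays; the array names and values are assumptions for illustration only.

    import numpy as np

    # Minimal sketch with made-up numbers: nodal displacements (mm), per-element
    # strain energies (J), and masks marking a region of interest (ROI).
    disp = np.array([0.00, 0.12, 0.35, 0.48, 0.51])      # one value per node
    elem_energy = np.array([2.1, 3.4, 1.7, 0.9])         # one value per element
    node_roi = np.array([False, False, True, True, True])
    elem_roi = np.array([False, True, True, False])

    pointwise = disp[3]                         # response at one monitored node
    avg_disp_roi = disp[node_roi].mean()        # average response over an area
    energy_roi = elem_energy[elem_roi].sum()    # energy sum over a region
    energy_total = elem_energy.sum()            # energy sum over the structure
    print(pointwise, avg_disp_roi, energy_roi, energy_total)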

VI. Model Updating
During the modeling process, the conceptual, mathematical and computational models are usually updated using experimental data to achieve the desired accuracy. This process is commonly known as 'model updating'. Model updating involves calibrating parameters in the mathematical and computational models as well as updating the model form. However, a model calibrated to certain experimental data may not lead to accurate predictions for all of its intended uses.
Moreover, experimental results are affected by errors due to scaling, boundary effects, instrument resolution and human factors, which introduces additional challenges in model calibration.
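As an illustration of parameter calibration, the following sketch updates Young's modulus by least squares so that predictions match measurements; the test data are hypothetical and a closed-form cantilever deflection stands in for a full FE simulation.

    import numpy as np
    from scipy.optimize import least_squares

    # Minimal calibration sketch (assumed data): update Young's modulus E so that
    # predicted cantilever tip deflections delta = P*L^3 / (3*E*I) match measured
    # values. In practice the "model" would be a full FE simulation.
    L_beam, I_beam = 1.0, 1.0e-7                    # m, m^4 (assumed section)
    loads = np.array([100.0, 200.0, 300.0])         # N
    measured = np.array([1.6e-3, 3.3e-3, 4.9e-3])   # m (made-up test data)

    def residuals(theta):
        E = theta[0]
        predicted = loads * L_beam**3 / (3.0 * E * I_beam)
        return predicted - measured

    fit = least_squares(residuals, x0=[150e9])
    print("calibrated E [GPa]:", fit.x[0] / 1e9)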
The need to update the form of the conceptual model is observed during quantitative comparison, when some structural response is not consistent with the corresponding characteristics of the model output and the difference is found not to be due to inaccurate values of the model parameters. One of the most common deficiencies in model form arises when two-dimensional models are used for three-dimensional structures.

VII. Sensitivity Analysis
Sensitivity analysis is the process by which the influence of the various model input parameters on the response is determined, usually by analysis of variance (ANOVA). Sensitivity analysis must be performed after the model is validated. Sensitivity analysis can be local or global. Local sensitivity analysis determines the influence of the input parameters on the response in the vicinity of a point; for this purpose the adjoint method or finite difference methods are mostly used, as sketched below. In global sensitivity analysis, the influence of the parameters is assessed over a larger domain of the input space.
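The following minimal sketch illustrates local sensitivity analysis by central finite differences about a nominal point; a simple analytic tip-deflection response is used in place of an FE run, and all parameter values are assumed.

    import numpy as np

    # Minimal local-sensitivity sketch: central finite differences of a response
    # with respect to each input parameter about a nominal design point.
    def response(p):
        E, I, L, P = p
        return P * L**3 / (3.0 * E * I)   # cantilever tip deflection (stand-in model)

    nominal = np.array([210e9, 1.0e-7, 1.0, 100.0])
    sens = np.zeros_like(nominal)
    for i, _ in enumerate(nominal):
        h = 1e-3 * nominal[i]             # relative step for each parameter
        up, dn = nominal.copy(), nominal.copy()
        up[i] += h
        dn[i] -= h
        sens[i] = (response(up) - response(dn)) / (2.0 * h)

    # scaled (logarithmic) sensitivities make the parameters comparable
    scaled = sens * nominal / response(nominal)
    print(scaled)   # roughly [-1, -1, 3, 1] for this response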

VIII. Uncertainty Quantification
As part of validation, the uncertainties associated with both the simulation and the experiment should be identified and reported. Uncertainties are of two types: aleatory and epistemic. Aleatory uncertainties arise from inherent variability in geometry, material properties, loading conditions, etc.; they cannot be reduced, but they should be quantified by determining the mean value, standard deviation, etc. Using methods like Latin Hypercube sampling and Monte Carlo simulation, the inherent variability is propagated through the model to determine the expected variability in the simulated outcome, as sketched below.
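A minimal Monte Carlo propagation sketch follows (Latin Hypercube sampling would follow the same pattern with a different sampling scheme); the distributions and the deflection model are assumptions for illustration only.

    import numpy as np

    # Minimal sketch of propagating aleatory variability by Monte Carlo sampling:
    # scatter in Young's modulus and load is pushed through a simple deflection model.
    rng = np.random.default_rng(0)
    n = 10000
    E = rng.normal(210e9, 10e9, n)        # material variability (assumed)
    P = rng.normal(100.0, 5.0, n)         # load variability (assumed)
    L_beam, I_beam = 1.0, 1.0e-7

    delta = P * L_beam**3 / (3.0 * E * I_beam)   # response for each sample
    print("mean deflection:", delta.mean())
    print("std deviation  :", delta.std())
    print("95th percentile:", np.percentile(delta, 95))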
Epistemic uncertainties occur due to a lack of knowledge about the system. They are again of two types: statistical uncertainties, which occur due to the use of a limited number of samples, and model form uncertainties, which occur due to modeling assumptions.

IX. Verification Procedure
Typical checks performed during the verification procedure include:
• Energy balance totals (e.g., physical energy vs. artificial energies associated with the numerical solution)
• Conservative bounding (e.g., a model set-up that bounds the worst-case response range)
• Error estimates (if available)
• Stress spatial variation (e.g., contour smoothness/jumps)
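For example, for a linear static solution the external work should match the stored strain energy; the sketch below performs such an energy-balance check on a small assumed stiffness matrix.

    import numpy as np

    # Minimal sketch of an energy-balance check for a linear static solution:
    # a large relative imbalance flags a problem with the numerical solution.
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])   # assumed small stiffness matrix
    f = np.array([0.0, 0.0, 1.0])        # assumed load vector

    u = np.linalg.solve(K, f)
    external_work = 0.5 * f @ u
    strain_energy = 0.5 * u @ K @ u
    imbalance = abs(external_work - strain_energy) / strain_energy
    print("relative energy imbalance:", imbalance)   # near machine precision here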

X. Mesh Convergence Study
The rate of convergence is estimated by comparing results obtained with different discretizations, preferably against an exact solution. The difference in response between discretizations arises from factors such as mesh size and local refinement, and the type of simulation (static, quasi-static, dynamic, modal extraction, etc.). Reference solutions used for the comparison include:
• Exact analytical solutions (including manufactured solutions)
• Semi-analytic solutions (reduction to numerical integration of ordinary differential equations)
• Highly accurate numerical solutions to the PDEs
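A minimal sketch of estimating the observed order of convergence, and a Richardson-extrapolated reference value, from three systematically refined meshes is given below; the response values are made up for illustration.

    import numpy as np

    # Minimal sketch of the observed rate of convergence from three refined meshes.
    h = np.array([0.4, 0.2, 0.1])          # element sizes, refinement ratio r = 2
    phi = np.array([1.843, 1.958, 1.989])  # e.g., tip displacement from each mesh (assumed)

    r = h[0] / h[1]
    p = np.log((phi[1] - phi[0]) / (phi[2] - phi[1])) / np.log(r)
    phi_exact_est = phi[2] + (phi[2] - phi[1]) / (r**p - 1.0)
    print("observed order p:", p)
    print("extrapolated value:", phi_exact_est)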

XI. A Posteriori Error Estimation
The stresses at the Gauss points of a standard finite element model are known to be superconvergent. The standard procedure for obtaining nodal stresses is to extrapolate the Gauss-point stresses to the nodes and then average them at the nodes, which may be inaccurate. By using an a posteriori error estimator, a better estimate of the nodal stresses, i.e. smoothed stresses, can be obtained. The error between the smoothed stresses and the stresses computed by the FEM procedure, measured in the energy norm, can then be used to drive refinement, an essential procedure in FEM, via h- or p-refinement. It was shown by Zienkiewicz-Zhu [26] that if the recovered derivatives are superconvergent, then the estimator is asymptotically exact (see Zienkiewicz-Zhu [27]). The smoothed stresses are generally calculated by fitting the shape functions through these superconvergent integration points, resulting in better values at the nodes and throughout the element. It is also more intuitive to have continuous stresses rather than the discontinuous stresses obtained by direct evaluation (Oden [28]) with linear finite elements, as illustrated by the axial bar example presented later in this report.
An a posteriori error estimator intended to increase the reliability of the finite element solution procedure was developed by Zienkiewicz-Zhu [26] based on their smoothed stress solution. After the stresses are evaluated from a finite element solution, a better smoothed stress solution is obtained and compared against it to estimate the errors in the finite element solution. Ainsworth et al. [29] proved the convergence and efficiency of this method. Later, Zienkiewicz-Zhu [26] presented an analogous approach for local refinement using patch insertion and higher-order polynomials.
Kelly et al. [30] had previously described the a posteriori error approach and its application to finite elements, adaptive re-meshing techniques and techniques related to p-convergent methods.
The stresses at the integration points are known to be very accurate. In the basic approach, the extrapolation of the stresses from the integration points is defined by a linear combination of coefficients and the displacement shape functions. A different approach uses an inserted patch [26] (i.e. a higher-order polynomial over a local region) in linear combination with the coefficients. The coefficients of these shape functions or polynomials are found such that they match the nodal stresses. The smoothed stresses can be used to calculate relative errors and then used to drive refinement with fewer trials.
The work is applicable to any linear finite-element-based discretization. These improved stresses at the nodes give more accurate values than the nodal averaging that was used traditionally.

Example:
The process is demonstrated using a problem involving an axial bar with a uniformly varying load and a point load at the free end. Figure 3 shows the displacement profile of the axial bar obtained with the analytical method and with the finite element method (using 4 elements). Figure 4 shows that the stress computed from the FEM is discontinuous, and compares it with the smoothed stress. A similar approach can be applied to 2D (quad, tria, etc.) or 3D (hex, tetra, etc.) elements.
Instead of interpolating the smoothed stresses using the element shape functions, another approach uses a higher-order set of polynomials as the interpolation functions. The critical areas under consideration, where the stress values need to be improved or refined, can be defined by patches, and smoothed stresses are evaluated over each patch. Usually a patch consists of all the elements sharing the node under consideration, where the stresses are to be improved.
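A minimal 1D sketch of this recovery idea is given below, using an axial bar with a linearly varying load and an end point load similar to the example above. The numerical values are assumed, and the patch fit shown is a simplified 1D analogue of the Zienkiewicz-Zhu/SPR procedure rather than the full 2D/3D implementation.

    import numpy as np

    # Axial bar fixed at x = 0 with q(x) = q0*x/L and point load P at the free end,
    # discretized with four linear elements (all values assumed for illustration).
    E, A, L = 200e9, 1.0e-4, 1.0
    q0, P = 1000.0, 500.0
    n_el = 4
    x = np.linspace(0.0, L, n_el + 1)

    K = np.zeros((n_el + 1, n_el + 1))
    f = np.zeros(n_el + 1)
    for e in range(n_el):
        h = x[e + 1] - x[e]
        K[e:e + 2, e:e + 2] += (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        q1, q2 = q0 * x[e] / L, q0 * x[e + 1] / L          # consistent load vector
        f[e:e + 2] += h / 6.0 * np.array([2.0 * q1 + q2, q1 + 2.0 * q2])
    f[-1] += P

    u = np.zeros(n_el + 1)                                 # fixed at x = 0
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

    sigma_el = E * np.diff(u) / np.diff(x)                 # constant per linear element
    xg = 0.5 * (x[:-1] + x[1:])                            # 1-point Gauss locations

    # Recovery: least-squares linear fit through the Gauss-point stresses of the
    # patch of elements sharing each node, evaluated at the node.
    sigma_star = np.zeros(n_el + 1)
    for i in range(n_el + 1):
        patch = [e for e in (i - 1, i) if 0 <= e < n_el]
        if len(patch) < 2:                                 # end node: use nearest patch
            patch = [0, 1] if i == 0 else [n_el - 2, n_el - 1]
        coeff = np.polyfit(xg[patch], sigma_el[patch], 1)
        sigma_star[i] = np.polyval(coeff, x[i])

    sigma_exact = (P + q0 * (L**2 - x**2) / (2.0 * L)) / A   # exact stress for comparison
    print(np.column_stack([x, sigma_star, sigma_exact]))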

XII. Classical Validation
Classical validation is the comparison of simulation results to physical test data. The validation needs to be achieved by acquiring data from specially designed experiments.
The process should be standardized to build credibility among clients and customers. For computational solid mechanics problems, strain is one of the most popular metrics used for validation since it is easy to measure using strain gauges. It is common practice to attach strain gauges at regions of high stress; however, this traditional method of validation has a number of drawbacks: 1) The location of high stress is determined by guesswork or from the results of the computational model.
Thus, the approach is somewhat circular, and there is always a chance that failure can initiate somewhere else.
2) The regions of low stress are largely ignored in the validation process. It is common practice to remove material from these regions (for example, by making holes and cutouts in the ribs of an aircraft wing). Since validation is not performed for these regions, this results in an unquantified risk of component failure.
3) where u is the uncertainty of the data from an experiment.
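One common way of folding the experimental uncertainty u into the comparison (not necessarily the specific criterion intended above) is sketched below with made-up strain-gauge data.

    import numpy as np

    # Minimal sketch: the simulation is judged against the measurement within +/- u.
    strain_sim = np.array([1150e-6, 820e-6, 430e-6])    # simulated gauge strains (assumed)
    strain_exp = np.array([1200e-6, 790e-6, 470e-6])    # measured gauge strains (assumed)
    u = 40e-6                                           # experimental uncertainty (assumed)

    error = np.abs(strain_sim - strain_exp)
    within_uncertainty = error <= u
    print(error)
    print(within_uncertainty)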

XIII. Hierarchical Validation
Even though, according to the strict definition of validation, simulation results must be compared to experimental data, such data are often not available. In this work, an alternative method of validation is proposed where the comparison is made with a higher-fidelity model instead of experimental data. The term 'hierarchical validation' is used to denote the concept of a family of mathematical models of increasing physics fidelity. Such higher-fidelity physics models themselves lead to higher-fidelity computational models, which are again subject to verification and validation. Hence, ultimately all models must be subjected to classical validation, but in the absence of physical test data, hierarchical validation can be used to provide significant confidence.
• Finite Element Models: Components with thin geometry are often meshed using 2D shell elements, which can be validated against full 3D models.

XIV. Solid vs Shell Comparison
Finite element analysts face many challenges in creating the appropriate mesh for the problem being solved. Shell elements may be considered as mathematically simplified solid elements [10]. Shell elements are of two types: thin shell elements and thick shell elements. Thin shell elements do not consider stress in the direction perpendicular to their surface, whereas thick shell elements consider stress in the direction normal to the middle surface and take shear deformation into account.
Industry often places more trust in simulations performed with a solid mesh; however, there are always concerns about computational resources, as well as issues like hourglass modes and membrane and shear locking. For thin-walled geometries like aircraft fuselages, airfoil skins of aircraft wings, submarine hulls, etc., using shell elements instead of solid elements provides many advantages, including a significant reduction in computational time. Some of these advantages are as follows: 1) The stress, strain and displacement results of a solid mesh often vary significantly with the discretization, especially in problems with high-magnitude forces and moments. When the mesh is coarse this difference is more pronounced; when the mesh is refined the difference becomes smaller, but many iterations are still required to achieve mesh convergence.
2) In order to include all details of the model, the edges of an element should be smaller than the smallest detail of the geometry. A detailed solid CAD model can have very small edges, and attempts to generate the mesh with commercial FEA software often fail. To generate the solid mesh, industry spends significant resources removing features from the solid CAD model. In such situations it often becomes advantageous to define a mid-surface model of the original geometry and use shell elements for the FEM.
3) In order to generate a solid FEM for geometries with thin features, at least 3-4 elements across the thickness are needed to capture all bending and stiffness effects for an accurate solution. For complex parts like injection-molded parts with stiffeners, draft angles, fillets, etc., the necessary number of elements is often huge, resulting in large computational times.
4) The post-processing time for solid models with a large number of elements is also very long. This becomes an issue when a large number of plots is generated for visualization. 5) When shell elements are used, the wall thickness is captured as a mathematical value instead of being modeled explicitly. This results in fewer equations to solve (see the rough count sketched after this list). Since the shell approach is significantly less computationally intensive than the solid approach, it has a huge advantage in problems requiring iterative simulations. 6) When solid elements are used for extremely thin geometry, there is a much higher chance that elements become inverted, causing the so-called negative Jacobian error. This does not occur when shell elements are used.
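The rough count below (with an assumed panel size and mesh density) illustrates the reduction in equations mentioned in point 5; in practice the gap is even larger, because solid elements of a thin part also need smaller in-plane sizes to keep acceptable aspect ratios.

    # Back-of-the-envelope DOF count for a thin rectangular panel (assumed sizes).
    panel_w, panel_h, thickness = 1.0, 0.5, 0.002     # m
    elem_size = 0.01                                  # target in-plane element size

    nx, ny = round(panel_w / elem_size), round(panel_h / elem_size)

    # shell: one layer of nodes, 6 DOF per node (3 translations + 3 rotations)
    shell_nodes = (nx + 1) * (ny + 1)
    shell_dof = 6 * shell_nodes

    # solid: at least 4 linear elements through the thickness, 3 DOF per node;
    # note the in-plane size would also have to shrink to keep good aspect ratios
    n_thru = 4
    solid_nodes = (nx + 1) * (ny + 1) * (n_thru + 1)
    solid_dof = 3 * solid_nodes

    print("shell DOF:", shell_dof, " solid DOF:", solid_dof)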
Most finite element analysts develop a hybrid model consisting of beam, shell and solid elements. For geometries with thin features, the choice between shell and solid elements is usually made based on the analyst's previous knowledge. The choice is subjective and often depends on the experience of the analyst, so different finite element models, and hence different simulation outcomes, are possible.

XV. Verification and Validation in Sub-project
Verification and Validation has multiple uses in the current project, including solving representative numerical problems, test plan development and design guide documentation. The key results for a model with 100% 3D elements are compared with those of multi-fidelity models (comprising both 2D and 3D elements). Maximum von Mises stress, maximum principal stress, maximum principal strain, maximum displacement and total strain energy are chosen as metrics.
Simple thresholding of these responses using an Artificial Neural Network (ANN) leads to awareness about the right choice within the model hierarchy.
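As a simplified illustration of such a comparison (without the ANN-based step), the sketch below applies a plain relative-difference threshold to the five chosen metrics; all numbers are made up.

    import numpy as np

    # Minimal sketch: relative difference of each metric between the full-3D
    # reference model and a mixed 2D/3D model, checked against a 5% threshold.
    metrics = ["max von Mises", "max principal stress", "max principal strain",
               "max displacement", "total strain energy"]
    full_3d = np.array([152.0, 148.0, 7.4e-4, 2.31, 18.6])   # assumed reference results
    mixed = np.array([147.0, 151.0, 7.9e-4, 2.26, 18.1])     # assumed mixed-model results
    threshold = 0.05

    rel_diff = np.abs(mixed - full_3d) / np.abs(full_3d)
    for name, d in zip(metrics, rel_diff):
        print(f"{name}: {d:.1%} {'OK' if d <= threshold else 'EXCEEDS'}")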