Real-time 3D Surface Measurement in Additive Manufacturing Using Deep Learning

Layer-wise 3D surface morphology information is critical for the quality monitoring and control of additive manufacturing (AM) processes. However, most existing 3D scan technologies are either contact-based or time-consuming, and therefore incapable of acquiring 3D surface morphology data in real time during the process. The objective of this study is to achieve real-time 3D surface data acquisition in AM through a supervised deep learning-based image analysis approach. The key idea of the proposed method is to capture the correlation between a 2D image and the corresponding 3D point cloud, and to quantify this relationship using a deep learning algorithm, namely, a convolutional neural network (CNN). To validate the effectiveness and efficiency of the proposed method, both simulation and real-world case studies were performed. The results demonstrate that the method has strong potential for real-time surface morphology measurement in AM, as well as in other advanced manufacturing processes.


Introduction
Surface morphology information is a critical determinant of functional integrity in many advanced manufacturing systems [1], particularly in additive manufacturing (AM, also known as 3D printing) [2]. Due to the layer-by-layer fabrication framework of AM, surface issues such as deformation or defects occurring in each layer may significantly deteriorate the quality of the products [3,4]. Therefore, accurate layer-wise 3D surface morphology information is critical to effectively track and control the printing quality of AM processes.
Due to the rapid development of metrology tools, multiple technologies are now capable of capturing the 3D morphology of an object, such as the coordinate measuring machine (CMM) [5], 3D laser scanning [6], and structured light scanning [7]. However, most existing technologies are either contact-based or time-consuming, and are therefore not suitable for real-time layer-wise 3D surface morphology data acquisition in AM. Consequently, achieving fast and accurate layer-wise 3D scanning for AM remains challenging. On the other hand, the rapid development of online image acquisition technologies means that high-resolution image data provide a potential route to measuring 3D surfaces in real time.
Consequently, the objective of this study is to develop a data-driven approach that obtains 3D surface point cloud data directly from real-time image data. To realize this objective, a supervised deep learning-based modeling approach is proposed for analyzing image patterns and recognizing the 3D structure. The key idea of this new method is to correlate the image captured by a camera with the corresponding 3D point cloud obtained by a structured light 3D scanner through offline model training. The trained model can then be applied online to predict the depth of each pixel in the image. Recently, deep learning methods [8] have demonstrated excellent performance in image feature extraction and pattern analysis, which provides a new perspective for handling the research challenges in this 3D surface morphology data acquisition problem.
The main contribution of this study is to reformulate the challenging 3D point cloud construction problem as an image-based regression problem, which can be effectively and efficiently solved by a typical deep learning algorithm, the convolutional neural network (CNN) [9]. Meanwhile, the proposed method also provides an alternative for online 3D surface point cloud data acquisition in advanced manufacturing processes.
The rest of the paper is organized as follows. A brief review of the research background and related works is provided in Sec. 2; the proposed research methodology is presented in detail in Sec. 3; Sec. 4 and Sec. 5 present the numerical simulation and actual case studies, respectively, to demonstrate the effectiveness of the proposed method. Finally, the conclusions and future work are discussed in Sec. 6.

Review of Related Work
As mentioned in Sec. 1, the proposed method is motivated by the demand for real-time 3D surface scanning in AM processes. Therefore, this section first introduces the commonly applied 3D scan technologies (Sec. 2.1), followed by a brief review of related studies on image-based 3D structure reconstruction algorithms (Sec. 2.2). Finally, the shortcomings in the current literature are identified in Sec. 2.3.

3D scan technologies
In general, current 3D scan technologies fall into two main categories, namely, contact and non-contact methods. Contact methods such as CMMs [5,10] use a probe to touch the surface and then record the location of a point in space. They can yield high accuracy, but the measurement efficiency is very low. Therefore, they are not suitable for real-time measurement in AM applications due to their long measurement time and low spatial resolution.
Non-contact technologies include structured light scanning [11], laser 3D scanning [6], and time of flight [12], etc. Laser 3D scanning is similar in principle to the structured light method: instead of projecting a 2D fringe pattern, the laser scanner projects a laser line onto the surface. In practice, to form a complete surface point cloud, the laser scanner requires relative motion against the object. The time of flight (TOF) method measures the distance between the measuring point and the 3D scanner; the distance is measured by timing the round trip between sending out a laser (or infrared) beam and receiving its reflection from the surface. This method is only appropriate for measuring large objects such as buildings. In structured light scanning, a projector projects fringe patterns onto a surface, and sequences of surface images under fringe patterns with different phases and wavelengths are captured by a camera mounted at an angle to the projector. Afterwards, the 3D point cloud can be obtained by processing the collected data. Its scanning speed is much faster than laser 3D scanning and time of flight. However, it still may not be fast enough for online AM surface scanning, particularly for high-resolution scans, which usually require a relatively long computation time.

Image-based 3D reconstruction approaches
Apart from the 3D scan technologies introduced in Sec. 2.1, another way to obtain 3D structure data is image-based 3D reconstruction. In computer vision, 3D structure reconstruction from 2D images has been widely investigated. In terms of the image requirements, most existing approaches fall into three major categories. The first and most conventional category uses images captured from different viewpoints. Under this condition, the 3D position (i.e., the depth) of any specific point can be identified as the intersection of two projection rays. Related research works are introduced in [13,14], and the corresponding algorithms are commonly applied in image-based 3D scanners such as photogrammetry [15]. However, these methods need multiple cameras to be set up and calibrated, and their computational efficiency is not high enough for AM layer-wise online scanning. The second category, for the single-camera case, analyzes multiple images from the same camera, for example, the algorithms based on "shape from shading" [16], "structure from motion" [17], and "shape from defocus" [18]. The major limitation of these algorithms is that their performance can be very poor when estimating depths over large ranges. Moreover, in AM processes the printed layer and camera are always fixed, so it is difficult to capture multiple different images with a single camera. To further improve the capability of 3D reconstruction, recovering the 3D structure from a single image has become another popular category. A number of successful attempts are reported in the literature, such as Han et al. [19], Hoiem et al. [20,21], and Hassner et al. [22]. Unfortunately, due to their limited capability, these methods are only appropriate for fairly simple images such as photos of natural objects.

Research gaps analysis
The research works introduced in Sec. 2.1 are focused on offline, high-accuracy 3D surface structure measurement; their capability for real-time and non-contact data acquisition is very limited. Thus, the investigation of improving measurement efficiency is still insufficient. For the image-based 3D structure reconstruction methods summarized in Sec. 2.2, a key shortcoming is that they need either multiple images or an image with clear depth-related features. In addition, the resolution and accuracy of the reconstructed data do not meet the measurement requirements of AM. Therefore, this paper seeks to address these research gaps with a supervised deep learning approach that estimates the 3D surface structure from a single image.

Research Methodology
As shown in Figure 1, the overall framework of this study consists of three components: (1) an experimental data acquisition platform for collecting training and validation data, including surface images and 3D point clouds, from actual AM parts (Sec. 3.1); (2) data pre-processing and the application of deep neural networks to predict the depth information from image patterns (Sec. 3.2); and (3) a four-phase predictive modeling and online 3D surface measurement approach (Sec. 3.3).

Experimental setup
In this study, a customized structured light 3D scanner was developed to collect high-resolution point cloud data for modeling purposes. Figure 2 presents the overall layout of the designed system. It has three main components: a high-resolution camera (Figure 2a), a calibration/measurement platform (Figure 2b and 2c), and a small-area projector (Figure 2d). These components are equipped with XYZ & RZ stages that have fine-tuning capability with 10 µm accuracy. The camera resolution is up to 1924 by 1448 pixels, and the projector has a projecting dimension of 30 by 21 mm. Although the area pixel density of the projector is lower than the camera's, the resolution of the point cloud data is not affected in practice. The reason is that the projector is slightly out of focus, which gives the projected pattern a continuous brightness change. After successful calibration, the projector precisely projects a checkerboard pattern that matches the position of the calibrating pattern on the calibration target, as shown in Figure 2b. During the data acquisition process, the projector projects 64 different fringe patterns onto the surface of interest, and the camera captures each of them for 3D surface construction. The total time for data acquisition is from 5 to 20 seconds, depending on the shutter speed setting. In this study, the field of view and spatial resolution of the point cloud data can achieve 26 by 19.5 mm² and 5 µm between two adjacent points, respectively. At the same time, gray-scale image data for the same region are recorded by the same camera. Afterwards, the collected pairs of image and point cloud can be applied for further analysis and modeling.

Data reformulation and the application of deep neural networks
As discussed in Sec. 1, the objective of this study is to predict the 3D surface morphology using image data from a supervised machine learning perspective. For convenience of presentation, in this paper the captured gray-scale image is denoted by a matrix I of dimension m × n, where each entry I_ij is the intensity of a pixel. The point cloud data collected by the customized structured light 3D scanner (see Sec. 3.1) is represented by a matrix Z with the same dimension m × n, where each entry z_ij represents the depth at that point. In practice, since the format of raw point cloud data is usually a list of scattered points (x_k, y_k, z_k), it is necessary to convert this original format to a matrix (i.e., a grid format) through appropriate interpolation. Then the key of this problem is to find an effective map F between I and Z, i.e., Z = F(I).
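The scattered-to-grid conversion mentioned above can be sketched as follows. This is an illustrative Python example only; the function name `points_to_grid` and the use of SciPy's `griddata` are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def points_to_grid(points, grid_shape, method="linear"):
    """Resample a scattered point cloud of (x, y, z) triples onto a
    regular grid, producing the depth matrix Z used for modeling.

    points: (N, 3) array of scattered (x, y, z) measurements.
    grid_shape: (rows, cols) of the target depth matrix.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rows, cols = grid_shape
    # Build a regular grid spanning the measured region.
    gx, gy = np.meshgrid(np.linspace(x.min(), x.max(), cols),
                         np.linspace(y.min(), y.max(), rows))
    Z = griddata((x, y), z, (gx, gy), method=method)
    # Linear interpolation can leave NaNs at the boundary;
    # fill them with nearest-neighbor values.
    mask = np.isnan(Z)
    if mask.any():
        Z[mask] = griddata((x, y), z, (gx[mask], gy[mask]),
                           method="nearest")
    return Z
```

The same grid geometry should be used for the image so that each pixel I_ij aligns with its depth z_ij.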
Due to the complexity of the nonlinear matrix-to-matrix mapping, it is very challenging to fit an accurate regression model directly. Consequently, to simplify the modeling task, two applicable assumptions are proposed by considering the application of AM processes: (1) consistent lighting conditions and (2) homogeneous surface texture. In practice, these two assumptions can be well satisfied if the data are collected from the same AM process (i.e., the same material and fabrication mechanism) with a consistent sensor setup. However, even under these two assumptions, two major challenges remain. First, the sample size is very limited: for online application, it is impractical to scan a large number of layers before fitting a model. Second, the mapping is still highly nonlinear.

To address the low sample size challenge, a window-based data reformulation approach is developed. This approach is motivated by the intuition that the depth information can usually be estimated from local image patterns. In this study, for any pixel (i, j) in I, with an appropriate window size w, its depth z_ij and the surrounding local image patch W_ij (the w × w sub-matrix of I centered at pixel (i, j)) may exhibit strong correlation. Therefore, as demonstrated in Figure 3, samples can be extracted from I and Z through consecutive window sliding. Consequently, a large number of sample pairs (W_ij, z_ij) are available for model training after data reformulation. Meanwhile, the original regression problem can also be reformulated as

z_ij = f(W_ij),    (1)

where f is an unknown matrix-to-scalar mapping based on a predefined window size w. Afterwards, for the second challenge, i.e., how to quantify the nonlinear relationship f, the convolutional neural network (CNN) [9] is selected. As a powerful deep learning technique based on feed-forward artificial neural networks, the CNN has been widely applied in a large variety of real-world areas, particularly image processing and computer vision [23,24]. A typical CNN consists of an input and an output layer, as well as multiple hidden layers. As shown in Figure 4, the hidden layers usually comprise convolutional layers, pooling layers, and fully connected layers. Due to its excellent mathematical properties, existing research [25] indicates that CNNs can achieve superior performance on most computer vision tasks when trained with appropriate regularization. The technical details of CNNs are introduced in [26]. For the implementation of the CNN, the deep learning toolbox in Matlab® [27] is applied in this study.
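The window-based sample extraction described above can be sketched in Python. The function name and array layout are illustrative assumptions; only interior pixels whose full window fits inside the image are extracted here.

```python
import numpy as np

def sliding_window_samples(image, depth, w):
    """Extract (w x w image patch, center-pixel depth) training pairs
    by consecutive window sliding over a registered image/depth pair.

    image, depth: equally sized 2D arrays (I and Z in the text).
    w: odd window size, so each patch has a well-defined center pixel.
    """
    assert w % 2 == 1, "window size should be odd"
    r = w // 2
    rows, cols = image.shape
    patches, depths = [], []
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            patches.append(image[i - r:i + r + 1, j - r:j + r + 1])
            depths.append(depth[i, j])
    return np.stack(patches), np.asarray(depths)
```

Even a single 300 × 300 layer scan yields roughly 300² such pairs, which is how the reformulation turns one scan into a large training set.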

Figure 4: A demonstration of the framework of the convolutional neural network (CNN) regression model.
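The convolution → pooling → fully connected structure of such a CNN regression model can be illustrated with a minimal NumPy forward pass. This is a toy single-channel sketch for intuition only; the kernel, weights, and layer sizes are hypothetical, and the paper itself uses Matlab's deep learning toolbox for the actual model.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D cross-correlation (single channel), the core CNN op."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_regression_forward(patch, kernel, w_fc, b_fc):
    """Minimal CNN regression forward pass:
    convolution -> ReLU -> 2x2 average pooling -> fully connected scalar
    (the predicted depth for the patch's center pixel)."""
    feat = np.maximum(conv2d(patch, kernel), 0.0)  # conv + ReLU
    h, w = feat.shape
    pooled = (feat[:h - h % 2, :w - w % 2]
              .reshape(h // 2, 2, w // 2, 2)
              .mean(axis=(1, 3)))                  # 2x2 average pooling
    return float(pooled.ravel() @ w_fc + b_fc)     # fully connected output
```

A trained network would have many such filters per layer and learned weights; here a 9 × 9 patch with a 3 × 3 kernel yields a 7 × 7 feature map, pooled to 3 × 3, then mapped to one scalar depth.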

Proposed four-phase modeling and online 3D surface measurement approach
Based on the effective data reformulation and the application of the CNN introduced in Sec. 3.2, a complete predictive modeling and online 3D surface measurement framework can be developed. The proposed modeling approach is based on a four-phase structure (see Figure 5), listed as follows:
Sliding phase: As introduced in Sec. 3.2, the aim of this phase is to reformulate the original data into pairs of local image patches and depths, which are appropriate for building a regression model.
CNN phase: After the sliding phase, a CNN is applied to fit a regression model between the image patches and the corresponding depths. Then the depth of each pixel can be predicted from its local surrounding image pattern.
Smoothing phase: The regression model fitted in the CNN phase gives point-based predictions, which do not consider the spatial correlation between pixels. Therefore, this correlation is accounted for in this phase through an effective smoothing filter. In this study, one of the most widely applied smoothing filters, namely, the Gaussian filter, is selected.
Compensation phase: The smoothing phase may introduce a significant mean shift in the prediction results. To avoid this issue, the mean shift is removed in this compensation phase.
The tuning parameters of these phases mainly include the window size w, the smoothing filter parameters, etc. To search for an appropriate setting, a cost function defined by the overall RMSE can be applied, and an approximate solution can be found by minimizing this cost (i.e., the RMSE).
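The smoothing and compensation phases together can be sketched as follows. This is a minimal illustrative sketch; the paper does not specify the exact filter settings, so the `sigma` value here is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_compensate(z_pred, sigma):
    """Smoothing phase followed by compensation phase.

    z_pred: 2D array of point-based depth predictions from the CNN.
    sigma: standard deviation of the Gaussian smoothing filter
           (a tuning parameter, like the window size w).
    """
    # Smoothing phase: impose spatial correlation between pixels.
    z_smooth = gaussian_filter(z_pred, sigma=sigma)
    # Compensation phase: smoothing can shift the overall mean of
    # the surface; restore the mean of the unsmoothed prediction.
    return z_smooth + (z_pred.mean() - z_smooth.mean())
```

In a full pipeline, `sigma` (and the window size) would be chosen by minimizing the overall RMSE against the structured-light ground truth on the validation data.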
Based on the fitted model, online 3D surface measurement can be implemented using images. Regarding the computational cost of online application, although the CNN may require more time as images become larger, its computational efficiency can be significantly improved in practice by GPUs or other hardware acceleration approaches [28].

Numerical Simulation Studies
In this section, the effectiveness of the proposed method is validated on numerically simulated surfaces generated by a Gaussian random field [29]. As shown in Figure 6, along with each simulated 3D surface, the corresponding image is generated simultaneously under the same lighting condition. To be consistent, the dimension of both images and point clouds is 300 × 300. Two simulated surfaces are generated: one for training and the other for validation. Then, based on the trained model, a new surface drawn from the same underlying distribution and rendered under the same lighting condition is generated for testing. The prediction results are demonstrated in Figure 7. In terms of both the 2D and 3D views, the proposed method provides an accurate measurement of the actual surface.
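A surface of this kind can be simulated, for example, by spectrally filtering white noise. The sketch below is one common construction of a Gaussian random field and is not necessarily the exact simulator used in [29]; the correlation-length parameter is an assumption.

```python
import numpy as np

def gaussian_random_field(n, correlation_length, seed=0):
    """Simulate an n x n surface from a Gaussian random field by
    low-pass filtering white noise in the frequency domain.

    correlation_length controls the surface smoothness: larger values
    suppress high spatial frequencies, giving smoother surfaces.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)
    k2 = fx[:, None] ** 2 + fx[None, :] ** 2
    # Gaussian spectral envelope sets the spatial correlation length.
    envelope = np.exp(-k2 * correlation_length ** 2)
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * envelope))
    # Standardize to zero mean and unit variance for convenience.
    return (field - field.mean()) / field.std()
```

Rendering such a height field under a fixed lighting model then yields the paired gray-scale image for training.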

Case Studies
To validate the measurement accuracy in real-world AM applications, this section further applies the proposed method to actual metal AM parts built by the electron-beam melting (EBM) process [30] using Ti-6Al-4V. As shown in Figure 8, three sample parts printed with different scan strategies, namely, Raster scan ("L7" part), Dehoff scan ("D7" part), and Random scan ("R7" part) [31], are used in this study. From each sample part, the top layer surface is used for analysis. To collect data for method validation, the image (see Figure 9a) and point cloud (see Figure 9b) of the top layer surface were captured by the customized 3D scanner introduced in Sec. 3.1. After necessary pre-processing of the images, consisting of denoising, registration, and resizing, two sub-regions of each sample with dimension 300 by 300 were selected for training and testing, respectively. Subsequently, the model was trained by the procedure introduced in Sec. 3.3, and the surface measurement of the testing regions was performed. As shown in Table 1, the relative prediction error is about 7% on average. In addition, the computational time for a single image is less than 1 second using a single GPU. Therefore, the surface estimated by the proposed method is a good approximation of the structured light 3D scan. To further validate the prediction performance, the correlation between the Ra values of the actual and predicted surfaces is used for evaluation, given the interest in surface quality monitoring. Ra is a widely applied measure of surface roughness, defined as the arithmetical mean deviation of the assessed profile [32]. As demonstrated in Figure 10, the correlation is higher than 70% for all three parts, which indicates that the prediction is consistent with the actual surface.
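The Ra metric used for this evaluation is simple to compute from a height map; a minimal sketch, applying the arithmetical-mean-deviation definition to the deviations from the mean height:

```python
import numpy as np

def surface_ra(z):
    """Arithmetical mean deviation Ra of a surface height map z:
    the mean absolute deviation of heights from the mean height."""
    z = np.asarray(z, dtype=float)
    return float(np.abs(z - z.mean()).mean())
```

Computing `surface_ra` on both the predicted and the structured-light surfaces, region by region, gives the paired values whose correlation is reported in Figure 10.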

Conclusions and Future Work
This paper developed a new real-time 3D surface measurement approach for AM processes based on a supervised deep learning perspective. The most significant benefit of the developed methodology is that the 3D morphology information can be estimated in real time from only a single image, since the proposed modeling framework fully utilizes the correlation between the image pattern and the point cloud. The case studies based on both numerical simulation and actual metal AM parts also demonstrate that the proposed method provides accurate results with high computational efficiency. Consequently, this study addresses one of the most significant limitations of existing 3D surface morphology data acquisition approaches, namely the capability of real-time data collection.
In summary, the results of this study show that the proposed method is very promising for real-time layer-wise surface morphology measurement in AM. Therefore, future work along these lines would be highly valuable, mainly in two directions. First, to further verify the effectiveness of the proposed method, more real-world case studies, particularly actual online layer-wise measurement in AM, should be conducted. Second, this method also has great potential for online defect detection in AM processes.