Laser cutting is a fast, precise, and noncontact processing technique widely applied throughout industry. However, parameter-specific defects can form during cutting, negatively impacting the cut quality. While light-matter interactions are highly nonlinear and, therefore, challenging to model analytically, deep learning offers the capability of modeling these interactions directly from data. Here, we show that deep learning can be used to scale up visual predictions for parameter-specific defects produced in cutting, as well as to predict defects for parameters not measured experimentally. Furthermore, visual predictions can be used to model the relationship between laser cutting defects and laser cutting parameters.

Laser materials processing is a noncontact technique that has industrial applications in many areas including manufacturing,1–4 energy harvesting,5 and defence.6 Industrial laser processing techniques include cutting,7–9 welding,10 cladding,11 and drilling.12 Fiber lasers are compact, have high wall-plug efficiency, and are capable of high power and high brightness,13 making them well suited to materials processing. Owing to these properties, high precision laser cutting is widely applicable across many areas of manufacturing and has advantages over competing techniques such as electric discharge machining14 or abrasive water jet cutting.15

In general, laser cutting of metal occurs in several stages. First, the laser drills a small hole into the metal workpiece, forming a molten pool at the irradiation site. A gas-assisted jet then blows away the molten and vaporized material, leaving a hole. This gas can be reactive, such as oxygen,16 or inert, such as nitrogen,17 which avoids the thermodynamic effects associated with oxygen cutting. The laser then “drills” in further, with the molten and vaporized material blown away through the bottom. Cutting begins as the nozzle (and laser beam) starts scanning across the workpiece.18 After cutting, surface defects such as welts and striations can be present.19 Also present is a heat affected zone (HAZ), which has properties that differ from those of the bulk material before cutting.17,20 The origins of laser cutting-induced striations are only partially understood, but explanations include an interplay between laser power variations and gas pressure fluctuations21 as well as possible effects caused within the molten material.11 Striation-free laser cutting has been achieved with single-mode fiber lasers.22 However, since the power of single-mode lasers is limited13 in comparison to multimode lasers and not always suited to thick materials, striation-free laser cutting remains a challenge. These effects result in machined parts with a complex surface topography that can be analyzed to determine how input parameters, such as laser power and cutting speed, influence output parameters such as surface roughness or kerf profiles.

Due to the highly nonlinear nature of laser cutting, as well as the difficulty of visualizing the laser cutting mechanism,23 modelling laser cutting is extremely complex. Arai19 has studied the origins of laser cutting defects extensively using thermal models and showed that the high temperature molten material exerts pressure on the walls of the cut while cooling periodically as the laser beam shifts along the sample. As such, variations in energy density allow for a periodic heating and cooling of the workpiece in the cutting region. Bocksrocker et al.16 have investigated laser and material input parameters and their effects on output parameters such as the melt flow and dross formation. Miraoui et al.20 have analyzed heat affected zones using analytical models and found that the depth and microhardness of the heat affected zones depended on input parameters, specifically, laser power and beam diameter. However, many of these investigations involved the use of high-speed imaging and specialized cutting techniques that are not representative of real-world cutting examples.23

Deep learning can be used as a data-driven modelling technique that models complex phenomena directly from observed data24,25 and is implemented using neural networks (NNs). Neural networks, of which there are many types, can act as universal function approximators.26 This work focuses on convolutional neural networks (CNNs) for image classification27,28 and generative adversarial networks (GANs) for image-to-image generation.29 CNNs identify features by applying convolutional filters to learn the spatial relationships between data points.30–33 GANs use similar techniques to break data down into features and then use transposed convolutions to reconstruct the original shape of the experimental data34 via a network known as a generator.35 A separate discriminator network can then be used to determine whether the output from the generator network is experimental or predicted.36 Both networks have associated loss functions that act as feedback mechanisms,37 and as such, each network uses feedback from the other to improve at its assigned task. A variation of the GAN that accepts experimental conditions as an additional input is the conditional generative adversarial network (cGAN).38 An important application of cGANs is image inpainting, i.e., filling in missing information in an image,39,40 and it is this approach that enables the predictive capability presented here. As modelling of laser cut topographies is highly challenging, the hypothesis introduced here is that deep learning could be a useful tool for predicting (and subsequently optimizing) the topographies of surface defects formed by laser cutting.
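As a minimal sketch of the encoder/decoder mechanics described above (plain NumPy, single channel, stride 2, toy sizes; not the architecture used in this work), a strided convolution compresses a patch into a smaller feature map, and a transposed convolution scatters the feature map back to the original shape:

```python
import numpy as np

def conv2d(x, k, stride=2):
    """Valid 2D convolution with stride: the downsampling step of a CNN/GAN encoder."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def transposed_conv2d(x, k, stride=2):
    """Transposed convolution: scatters each input value through the kernel to upsample."""
    kh, kw = k.shape
    oh = (x.shape[0] - 1) * stride + kh
    ow = (x.shape[1] - 1) * stride + kw
    out = np.zeros((oh, ow))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * k
    return out

topo = np.random.rand(8, 8)                  # toy "topography" patch
kernel = np.ones((2, 2)) / 4.0               # illustrative fixed kernel (learned in practice)
features = conv2d(topo, kernel)              # 8x8 -> 4x4 feature map
recon = transposed_conv2d(features, kernel)  # 4x4 -> back to 8x8
```

In a real generator, many such layers with learned kernels are stacked; the 2 × 2 averaging kernel here is purely illustrative.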

Laser materials processing has already been studied using deep learning, with processes such as laser cutting,41–46 laser welding,10,47–49 fabrication,50,51 and machining52–61 investigated. Deep learning has also been widely applied to image data62–67 and topographic data.68,69 Pacher et al.41 have used a combination of visual techniques, such as edge detection, gradient-based methods, and gray relational analysis, to measure burr profiles, identify process zones, and compute striation angles. Santolini et al.43 used Gaussian mixture models, recurrent neural networks, and CNNs on multisensory laser cutting data and input parameters to classify the quality of the cut into three categories: good cut, bad cut, and missed cut. Furthermore, Pacher et al.46 have also studied the attachment of dross in real time using process emission images with CNNs. Stadter et al.47 have used ANNs in combination with optical coherence tomography to estimate the severity of defects in laser welding. Wasmer et al.48 used gradient boosting to improve the monitoring capabilities of acoustic emissions and x-ray imaging of laser processing. Anastasiou et al.50 used support vector regression to rate the quality of computer-generated holograms on a scale of 1–5. Yao et al.51 used clustering techniques to group similar additive manufacturing features and then used a support vector machine to determine the limits for these features. Mills et al.57,59 have analyzed images of femtosecond machining using CNNs to extract the laser machining parameters directly from images of machined samples. Tatzel et al.68,69 have used CNNs to calculate surface roughness values directly from images and from laser cutting parameters. Franceschetti et al.45 have used CNNs to classify defects formed in the laser cutting process, including dross formation. This work aims to improve on previous studies by using CNNs, GANs, and image inpainting to model topographic data directly from parameters, reducing the number of images needed to accurately model laser cutting defects as well as modelling the effects of changing parameters while cutting.

In this work, machine learning techniques are used for two purposes: to classify laser cut surfaces based on their topographic defects (e.g., estimating the cutting speed used) and to make visual predictions of laser cutting output topography under different parameters (i.e., synthesizing topographic image data). The two key objectives were (i) to develop a cGAN that could extend a given input topography using image inpainting and (ii) to use the cGAN to generate fully synthetic surface topographies for different cutting speeds. These artificial laser-cut surfaces were subsequently tested using a regression CNN (which was capable of predicting cutting speed from real-world laser cut topographies) to verify that the simulated topography accurately represented the intended cutting speed. Similar work has been performed by Courtier et al.70 This paper builds on that study by applying NNs to laser cutting topographies instead of images: it predicts the appearance of laser cutting topographies using image inpainting to increase the size of experimental topographies, and it predicts the laser cutting topography for cutting conditions not used experimentally, saving time on inspection costs. Section II details how each stainless steel sample was cut and how it was used and presented for the machine learning analysis. Section III contains methods and results for extending the range of predictive visualization of laser cut topographies. Section IV discusses the simulation of different cutting speeds using image inpainting and the implications for the relationship between surface defects and laser cutting speed, along with potential applications of such a simulation method. Conclusions are presented in Sec. V.

For sample cutting, a 6 kW fiber laser was used, with multimode output delivered via a Ø100 μm fiber. The processing workstation was a TRUMPF TruLaser 1030 flatbed cutting machine fitted with a Precitec ProCutter cutting head and used high pressure nitrogen as the coaxial assist gas. Ten stainless steel samples were laser cut at speeds of 15–24 m/min, with 12 bar of N2 assist gas pressure, a nozzle diameter of 2 mm, and a beam diameter of 200 μm (1/e²). The beam parameter product was 4.5 mm mrad. The cutting head had two lenses: a collimator lens with a focal length of 100 mm and a focusing lens with a focal length of 200 mm, giving a magnification ratio of 2.0. The Rayleigh range was 2.24 mm. These cutting parameters were chosen to highlight the differences in defects that occur across this range of cutting speeds. Cutting speed was selected as the varied process parameter because of its association with cut quality: faster cutting speeds save processing time, but they also limit the quality of the cut, and testing for optimal cutting speeds is performed through trial and error, which is costly in terms of time and resources. This specific range of cutting speeds was chosen because the quality of the cut changes significantly over it; samples cut at 15 m/min are significantly less rough and more vertical than samples cut at 24 m/min, making this range interesting for study. Modelling the variation of cutting quality with speed allows predictions of quality to be associated with cutting speed, saving time and resources on parameter testing; this approach could be applied to other parameters with further investigation. The cutting nozzle head had a stand-off distance of 1 mm, which was maintained using a capacitive height sensor, ensuring that the focal position was 1 mm below the metal surface.
Each sample had a length of 116.0 mm, a thickness of 2.0 mm, and a width of 9.5 mm. While thicker sheets would provide a different set of geometrical details, we were limited by the field of view of the topographer, which has a measuring field of 3.4 × 2.8 mm² when using a 5× objective, giving an array of 2456 × 2054 pixels; these were then cropped to 1536 × 2000 pixels. To ensure that both upper and lower surfaces were captured in each topographic measurement, we used 2 mm thick steel sheets. Laser cutting of sheets in this thickness range (1.5–3 mm) is relevant to a wide range of industries and has been studied in a variety of applications including drilling,71,72 welding,73 and industrial manufacturing.74 As indicated by the schematic in Fig. 1(a), the topographic profile of the edges of the cut samples was measured using a SmartWLI Compact white light interferometer (GBS/Omniscan) with a Michelson interferometric 5× objective lens (Nikon). Each measured topography is represented as a monochrome image covering 3.4 × 2.8 mm², and multiple topographies were recorded along the length of each sample. During each topographic measurement, the objective sweeps through a range of focus heights spanning 0.5 mm. The absolute position of the scanning range was adjusted so that all relevant features came into focus during the scan. Each sample was imaged at 12 locations, giving 12 images per sample (120 unprocessed images across all cutting speeds).

FIG. 1.

Flow chart showing the process of measuring topographies of laser cut stainless-steel as well as the concepts for topography prediction and inpainting. Panel (a) shows the process of laser cutting and measurement with an example of a laser cut topography section (cut at 15 m/min) plotted in 2D and 3D. The red arrow indicates the direction of the laser beam while the black arrow indicates the direction of scan. The blue box shows a section of topographic data to be used for deep learning. Panel (b) shows the concept for image inpainting for a laser cut topography section. Panel (c) shows the functioning of a CNN to predict the cutting speed. Panel (d) shows the combined use of NNs to study the prediction of defects via inpainting and how they depend on the cutting speed.


Figure 1 includes flow charts that show the various experimental procedures used in this work. The left of Fig. 1(a) shows the process of laser cutting and subsequent measurement of the topography of the cut edge. The right of Fig. 1(a) shows an example of the laser cut stainless steel topography in 2D and 3D (with the light blue box indicating the crop size used for the deep learning input). As observed in the figure, the light blue box covers the full width of the sample. Brighter regions indicate areas of higher elevation, while darker regions indicate areas of lower elevation. Figure 1(b) shows the functioning of an inpainting cGAN, which fills in missing topographic information from a masked input. A cGAN that predicts hidden topography from surrounding topography could be used to model the topography in between measured regions of a sample. Such a cGAN could, therefore, reduce the number of regions that need to be measured in order to fully model the topography of the laser cut edge of the stainless-steel samples. Figure 1(c) shows a CNN that can classify topographic inputs by estimating the cutting speed used to produce them. Figure 1(d) shows the combined use of the NNs illustrated in Figs. 1(b) and 1(c), i.e., the inpainting cGAN is used repeatedly (in a chain-like loop) to produce synthetic surface topographies that extend over longer distances. The classifier CNN is then used to examine the effectiveness of making long distance predictions of laser cutting topography and to examine the effect of the laser cutting speed on the cutting defects produced.

Figure 2 contains example topographies from samples cut at different speeds, illustrating the variation in appearance and defects that occurs. All topographic data were measured using a white light interferometer; this device contains an integrated light source that illuminates the sample via its measurement objective, so light conditions were constant for all topographic data. Where topographic data are displayed in 2D image form, a color bar has been included to indicate the relationship between color and height. Where topographic data are plotted as a 3D surface, the Z axis is scaled differently from the X and Y axes; this is done to emphasize the features of the laser cut edge topography, which might otherwise be difficult to see. Defects include striations (periodically spaced ridges) and welts, which are rounded and elongated defects, such as those seen in the bottom half of Figs. 2(c)–2(j), that appear more randomly. As seen in Fig. 2, each topographic section has a region of high elevation at the top of the cut, which gradually descends toward the middle of the cut. For lower cutting speeds, this gradual descent continues unless there are welts, which only occasionally appear at lower speeds. For higher cutting speeds, the appearance of welts is more consistent, with an increase in elevation occurring in the bottom half of the topographic sections. The vertical position in the image sections at which the welts start to appear increases with speed, which is consistent with previous studies.75

FIG. 2.

Experimental examples of topographic sections of stainless-steel samples cut by a fiber laser at (a) 15, (b) 16, (c) 17, (d) 18, (e) 19, (f) 20, (g) 21, (h) 22, (i) 23, and (j) 24 m/min.


Figure 3 shows a comparison between the average output parameters for each of the ten cutting speeds. Figure 3(a) shows the average kerf profile for each of the ten cutting speeds, while Fig. 3(b) shows the average roughness profiles for the edge of the stainless-steel samples. As the cutting speed is increased, the bottom half of the sample becomes higher. This is due to the shorter interaction time for the laser cutting head, which, in turn, influences the viscosity of the layer of molten metal. As such, the rate of material removal at the bottom of the sample is lower as the cutting speed increases, resulting in more molten and resolidified material being left at the bottom of the sample.76 The roughness in this region also increases with cutting speed, as shown in Fig. 3(b): lower speeds tend to be smoother, as shown in Figs. 2(a) and 2(b), while higher speeds are rougher, as seen in Figs. 2(i) and 2(j). The dip in the average kerf profile for all speeds coincides with the position of the focus of the laser beam. This is the point of highest energy density in the beam path and, therefore, contributes the most heat. This means that the heating gradient is positive with respect to the direction of the gas flow above the focus position, while the heating gradient is negative below the focus position. This may, in part, explain why defects formed above the focus position differ from those formed below. This is further evidenced by the difference in roughness shown in Fig. 3(b) for the top and bottom half of each sample (left and right of the sample width axis, respectively).

FIG. 3.

Comparison of output parameters for each of the ten cutting speeds. (a) Comparison of average kerf profiles. (b) Comparison of the average roughness profiles.


For neural network processing, topographic sections were randomly chosen from the full-sized measured topographies (which were represented in the form of monochrome images). This allowed data augmentation by varying the cropping coordinates. The dimensions and shape of all topographic sections were chosen to ensure that the full width of the sample was contained in each section and to allow the use of well-established neural network architectures for topographic modelling.

For the image inpainting cGAN, topographic sections with 768 × 768 pixels were chosen since this allowed a wide field of view for topographic predictions. For the inpainting network, topographic sections with the central half of each section occluded were used as inputs and the nonoccluded topographic sections were used as outputs. It is not expected that the topographic predictions of the cGAN generator will be identical to the experimental data; rather, during training they will gradually improve until they are realistic enough that they can regularly deceive the cGAN discriminator.
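As an illustration of this input/output pairing, a minimal sketch in NumPy (the orientation of the occluded band is an assumption; the text states only that the central half of each section is occluded):

```python
import numpy as np

def make_inpainting_pair(section):
    """Build a (masked input, mask, target) triple for inpainting training:
    the central half of the section is occluded (here: the middle columns,
    an assumed orientation)."""
    h, w = section.shape
    mask = np.ones_like(section)          # 1 = known pixel, 0 = hidden
    mask[:, w // 4 : 3 * w // 4] = 0      # occlude the central half
    masked = section * mask
    return masked, mask, section

section = np.random.rand(768, 768)        # one 768x768 topographic section
masked, mask, target = make_inpainting_pair(section)
```

During training, `masked` (plus `mask`) would be the generator input and `target` the ground truth that the discriminator compares against.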

For the classification CNN, topographic sections with a size of 768 × 768 pixels were chosen as this allowed a larger number of example topographies to be created for each cutting speed (i.e., it increased the amount of training data that could be created, from the same quantity of measured data). For this regression-based CNN, a topographic section of the laser cut edge was used as an input, and the output was the prediction of the laser cutting speed used to create the topography.

The purpose of this section is to demonstrate that NNs have the capability to predictively visualize laser cutting topographies on a large scale. The cGAN used for this modelling approach was a Pix2Pix architecture77 with minor modifications, namely, an increase in the input and output dimensions by a factor of 3 (to 768 × 768 pixels) and the use of partial convolutions instead of traditional convolutions. The unmodified base architecture can be found in Ref. 78.

Topographic image sections were produced from stainless-steel samples laser-cut at 15–24 m/min in 1 m/min intervals, yielding a total of 24 000 topographic sections. Of these, the sections from cutting speeds of 15 and 20 m/min were reserved for testing, while the data from the other cutting speeds were used for training. The network was, therefore, trained using 20 000 topographic sections, with 4000 sections held back for testing. Training used a batch size of 1, with ReLU as the activation function at each layer. The optimizer was ADAM with β1 = 0.9 and β2 = 0.999 for both the generator and discriminator, and learning rates of 0.0001 for the generator and 0.00001 for the discriminator. An L1 loss term was used to regularize the network, and dropout layers were used in the upscaling portion of the Pix2Pix network, both helping to prevent overfitting; data augmentation was also used for this purpose, as shown in Fig. 4. The testing speeds of 15 and 20 m/min were chosen to allow assessment of the NN’s ability to extrapolate and interpolate, respectively, as 15 m/min lies outside of and 20 m/min lies within the range of experimental values contained in the training data. As mentioned in Sec. I, image inpainting is the filling in of missing information in image data.40 The cGAN used for this approach uses partial convolutions instead of convolutional layers. A partial convolution is essentially a convolution that distinguishes between masked and nonmasked features, such that only nonmasked features are used to fill in missing information in the topography.39 This is achieved by multiplying the output of each convolutional filter, at each position in the image, by the proportion of nonmasked pixels under the filter window, so that only regions containing nonmasked data contribute fully, while masked or mostly masked regions are virtually ignored.
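The partial convolution described here can be sketched as follows (single channel, stride 1, no padding, plain NumPy). This follows the text's description of scaling by the fraction of valid pixels; note that the published formulation of Liu et al. instead renormalizes by the inverse of that fraction:

```python
import numpy as np

def partial_conv2d(x, mask, k):
    """One partial-convolution step: the filter response uses only unmasked
    pixels and is scaled by the fraction of valid pixels under the window,
    so fully masked regions contribute nothing. The mask is updated so that
    any window containing at least one valid pixel becomes valid."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * m * k) * (valid / m.size)
                new_mask[i, j] = 1.0
    return out, new_mask

# demo: occlude the central half of a toy section and apply one layer
x = np.random.rand(8, 8)
mask = np.ones((8, 8))
mask[:, 2:6] = 0                      # 0 = masked, 1 = valid
k = np.ones((3, 3)) / 9.0
features, updated_mask = partial_conv2d(x, mask, k)
```

Stacking such layers progressively shrinks the masked region, which is what allows the generator to fill in the hidden half of each section from its surroundings.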

FIG. 4.

Concept of data augmentation for topographic sections.


The hyperparameters that were tuned were the learning rates of the generator and discriminator, the lambda value for the L1 loss, and the resolution of the network. It is common practice to use a higher learning rate for the generator than for the discriminator, as it allows the generator to be penalized less harshly for attempting different solutions to optimize the output data. The lambda value of the L1 loss was taken from Isola et al.78 The resolution was changed to 768 × 768 instead of 256 × 256 (the default network resolution) as this allowed higher resolution of defects while preserving the ability to capture the full width of the sample. This means that the middle layer in the generator has a dimension of 3 × 3 × 1024 instead of the default 1 × 1 × 1024.

The topographic data were collected over the course of a week, taking around 30 h in total. Using dataset augmentation, 24 000 topographic images were generated in 4 h. The augmentation was performed by cropping 1536 × 1536 pixel topographic sections from each larger 1536 × 2000 pixel topography, repeated 200 times per image. Each topographic section was then scaled down by a factor of 2, giving sections of size 768 × 768 pixels. Twelve larger topographic sections were collected for each of the 10 cutting speeds, giving a total dataset size of 24 000 topographic sections after augmentation. This process is shown in Fig. 4.
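The augmentation procedure above can be sketched as follows (NumPy; random horizontal crop positions and 2 × 2 block averaging for the downscale are assumptions about implementation details not specified in the text):

```python
import numpy as np

def augment(topography, n_crops=200, rng=None):
    """Crop 1536x1536 sections at random horizontal offsets from a
    1536x2000 topography, then downscale each by 2x (block averaging)
    to 768x768 sections."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = topography.shape                       # expected (1536, 2000)
    sections = []
    for _ in range(n_crops):
        x0 = rng.integers(0, w - 1536 + 1)        # random horizontal offset
        crop = topography[:, x0:x0 + 1536]        # full height, 1536 wide
        small = crop.reshape(768, 2, 768, 2).mean(axis=(1, 3))  # 2x downscale
        sections.append(small)
    return np.stack(sections)

topo = np.random.rand(1536, 2000)                 # one measured topography
batch = augment(topo, n_crops=4)                  # 4 augmented sections
```

With 12 topographies per speed, 10 speeds, and `n_crops=200`, this reproduces the stated dataset size of 24 000 sections.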

Inputs were 768 × 768 pixel topographic sections with the middle half of each image masked, and the outputs were 768 × 768 pixel images of the unmasked topographic sections. The network was trained on 20 000 topographic sections of stainless steel cut at 16, 17, 18, 19, 21, 22, 23, and 24 m/min, and it was tested on 4000 topographic sections cut at 15 and 20 m/min. Figure 5 demonstrates the concept of using image inpainting to fill in missing information in a laser cut topographic section, in order to predict a realistic topography based on the surrounding data. Figure 5(a) shows the process in detail, with blue dashed outlines highlighting sections of experimentally measured data and orange outlines marking sections produced by the inpainting cGAN. Figure 5(b) shows an example of a ∼16.5 mm long section of laser-cut topography, predicted by the inpainting cGAN for a cutting speed of 20 m/min.

FIG. 5.

(a) Flow chart showing the process of producing long distance laser cutting topographies of a sample cut at 20 m/min. (b) Example of a long section (16.5 mm) of inpainted laser cut topography.

Figure 6 compares the statistical distributions of the experimentally measured data and NN inpainted data. Figures 6(a) and 6(b) are histograms of the pixel values contained within the experimental and inpainted topographic sections for 15 and 20 m/min, respectively. In both cases, the overall shape of the curves is well matched, with the peaks of the experimental and inpainted distributions lining up with each other. The pixel distribution for inpainted data at 20 m/min [Fig. 6(b)] matches the experimental pixel distribution more closely than the 15 m/min one [Fig. 6(a)]. This is likely because the cutting speed of 15 m/min falls outside of the range of speeds on which the NN was trained (and, therefore, requires the NN to extrapolate). In contrast, the NN inpainting at 20 m/min requires only interpolation, allowing a closer match to be achieved. The 15 m/min pixel distribution exhibits a single peak, while the 20 m/min case exhibits a double peak; this difference is due to the time spent on each region by the laser head at each cutting speed. At 15 m/min, the laser head spends more time at each region, removing more material and leaving a single high region at the top of each topographic section (the pixel distribution peaks at lower height values, centered at approximately 100). At 20 m/min, the laser head spends less time at each region, removing less material and creating more defects at the bottom of each topography. This leaves a high region at both the top and bottom of each topographic section, contributing a greater number of brighter pixels (a second peak in the distribution appears, centered near a height value of 140). Figures 6(c) and 6(d) show the average kerf profiles for samples cut at 15 and 20 m/min, respectively. The average kerf of inpainted topographic sections matches the experimental average kerf of samples cut at 20 m/min very well.
The NN training has likely been aided by the fact that the average kerfs for the 19 and 21 m/min cutting speeds are similar to that of the 20 m/min case. As was observed for the pixel value distributions, the 15 m/min case matches less well, for the same reasons. Figures 6(e) and 6(f) show the average roughness criterion (Rz) along the width of experimental and inpainted topographic sections for samples cut at 15 and 20 m/min, respectively. The roughness criterion Rz was calculated using79
R_z = \frac{\sum_{i}^{N} R_{zi}}{N} = \frac{\sum_{i}^{N} \left(R_P - R_T\right)_i}{N},
(1)
where R_zi is the peak-to-trough distance of a single sampling length, R_P is the peak height, and R_T is the trough height. The roughness criterion was measured over N = 5 sampling lengths (the typical number used) of 0.44 mm each, giving a total measurement length of 2.2 mm; the standard measurement procedure is given by DIN ISO 9013.80 In both cases (15 and 20 m/min), the roughness of the inpainted topographic sections was lower than that of the experimentally measured ones. This is likely because the depth of inpainted defects does not match the experimentally measured ones exactly. Furthermore, during training, the NN will tend to learn the most statistically likely surface texture for each cutting speed. However, attempting to predict the most likely surface topography may have an inherent smoothing effect, causing predicted topographies to lack the extreme height outliers found in experimental data while still producing the most commonly occurring defects.
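Equation (1) can be implemented directly; a minimal sketch (NumPy, hypothetical profile data) using five sampling lengths:

```python
import numpy as np

def roughness_rz(profile, n_lengths=5):
    """Rz: split the measured profile into n sampling lengths and average
    the peak-to-trough distance (R_P - R_T) of each, per Eq. (1)."""
    segments = np.array_split(np.asarray(profile, dtype=float), n_lengths)
    return float(np.mean([seg.max() - seg.min() for seg in segments]))

# toy profile: 500 height samples spanning a 2.2 mm evaluation length
# (five 0.44 mm sampling lengths of 100 samples each)
profile = 0.05 * np.sin(np.linspace(0.0, 10.0 * np.pi, 500))
rz = roughness_rz(profile)
```

A perfectly flat profile yields Rz = 0, while the toy sinusoid yields roughly its peak-to-trough amplitude.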
FIG. 6.

Plots comparing the statistical distribution of experimental topographies and inpainted topographies of samples cut at 15 and 20 m/min. Histograms showing the distribution of pixel values in topographies for samples cut at (a) 15 and (b) 20 m/min. Plots of the average kerf profile across the thickness of samples cut at (c) 15 and (d) 20 m/min. Plots of the roughness Rz across the thickness of samples cut at (e) 15 and (f) 20 m/min.


The average profile was generated by taking the mean of the topographic profile along the horizontal direction. The roughness profile was generated by applying Eq. (1) to the pixel height values of each horizontal line of a topographic section, along the laser cut direction; collecting this value for each vertical coordinate produced roughness values for each point across the thickness of the sample. Each of these profiles was then averaged over 500 topographic sections of the same cutting speed. These evaluation metrics were used to compare the physical dimensions of the predicted data to the experimental data. The pixel distributions were compared to highlight the resemblance of the images produced using image inpainting, while the kerf profile and the roughness profile are closely linked with the cut quality.80 As there are many possible defects that can be produced for each input, it is not appropriate to use standard machine learning metrics such as the FCN score. It is clear that inpainting within the range of cutting speeds used for training is more accurate than extrapolating to values outside that range. Inpainting outside of the range of training cutting speeds appears to more closely resemble the topography of the closest speed within the training dataset.
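A sketch of these per-row statistics (NumPy; array axes and the toy sizes are illustrative assumptions):

```python
import numpy as np

def kerf_and_roughness_profiles(sections, n_lengths=5):
    """Average kerf and roughness profiles across the sample thickness.
    `sections` has shape (n_sections, thickness, cut_direction).
    Kerf: mean height of each row along the cut direction, averaged over sections.
    Roughness: Eq. (1) applied to each row of pixels, averaged over sections."""
    kerf = sections.mean(axis=(0, 2))

    def rz(row):
        segs = np.array_split(row, n_lengths)
        return np.mean([s.max() - s.min() for s in segs])

    rough = np.mean([[rz(row) for row in sec] for sec in sections], axis=0)
    return kerf, rough

sections = np.random.rand(3, 768, 768)   # 3 toy sections (500 were averaged in the paper)
kerf, rough = kerf_and_roughness_profiles(sections)
```

Both outputs are one value per thickness coordinate, matching the profiles plotted in Fig. 6.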

It is necessary to show how the inpainting cGAN performs on each of the cutting speeds individually. Figure 7 shows the mean absolute error (MAE) of inpainted topography relative to experimental topography as a function of cutting speed. In this case, inpainted topographies were produced from nonadjacent experimental topographies for each cutting speed. For each cutting speed, the MAE was calculated for 500 examples and averaged. In each case, the inpainted topography was produced using five inpainting steps.
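The per-speed evaluation reduces to averaging the MAE over many inpainted/experimental pairs; a sketch under the assumption that the pairs are available as NumPy arrays (the random data here are placeholders):

```python
import numpy as np

def mae(inpainted, experimental):
    """Mean absolute error between an inpainted and an experimental topography."""
    return float(np.abs(inpainted - experimental).mean())

# average the MAE over example pairs for one cutting speed
rng = np.random.default_rng(1)
pairs = [(rng.normal(size=(256, 768)), rng.normal(size=(256, 768)))
         for _ in range(5)]
mean_mae = float(np.mean([mae(a, b) for a, b in pairs]))
```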

FIG. 7. Mean absolute error of inpainted topography relative to experimental topography as a function of cutting speed, evaluated over five inpainting steps for each cutting speed.

Until now, this approach has only allowed the interpolation and extrapolation capabilities to be tested on one cutting speed each. It is necessary to further analyze the quality of inpainted topography at interpolated and extrapolated cutting speeds. Figure 8(a) shows the average kerf profile for samples cut at 15 m/min calculated by cGANs trained on the upper 80% (17–24 m/min), upper 60% (19–24 m/min), and upper 40% (21–24 m/min) of the available cutting speeds. Figure 8(b) shows the average kerf profile for samples cut at 20 m/min calculated by cGANs trained on the outer 80% (15–18 and 21–24 m/min), outer 60% (15–17 and 22–24 m/min), and outer 40% (15–16 and 23–24 m/min) of the available cutting speeds. Figures 8(c) and 8(d) show the average roughness profile for samples cut at 15 and 20 m/min, respectively, using the same cGANs as in Figs. 8(a) and 8(b). The kerf and roughness profiles for 20 m/min are reproduced more accurately than those for 15 m/min, with network performance depending on the proportion of the available cutting speeds used for training. This indicates that the quality of interpolation and extrapolation depends on how representative the training data are of the target cutting speeds. In all cases, the models trained on the outer available cutting speeds performed better than those trained on the upper available cutting speeds, showing that this modeling approach is more effective for interpolation than for extrapolation.

FIG. 8. In-depth comparison between extended topographies and experimental topographies for the purposes of interpolation and extrapolation of cutting speeds. (a) Average kerf profile for samples cut at 15 m/min as calculated by cGANs trained on the upper 40%, 60%, and 80% of the available cutting speeds. (b) Average kerf profile for samples cut at 20 m/min as calculated by cGANs trained on the outer 40%, 60%, and 80% of the available cutting speeds. (c) Average roughness profile for samples cut at 15 m/min as calculated by cGANs trained on the upper 40%, 60%, and 80% of the available cutting speeds. (d) Average roughness profile for samples cut at 20 m/min as calculated by cGANs trained on the outer 40%, 60%, and 80% of the available cutting speeds.

It is noteworthy that the striation patterns produced in the extended topographies, such as those in Fig. 5(b), appear to repeat after a certain number of inpainting steps. It is, therefore, necessary to investigate the periodicity of the inpainted topography as a function of the number of inpainting steps. Here, the periodicity was investigated by taking a sample section of the inpainted topography and plotting the recurrence error of that sample relative to the total inpainted topography. If there is repetition in the inpainted topographic defects, the recurrence error will also show periodic behavior. The recurrence error was measured by taking the MAE of the sample topography relative to the total inpainted topography along the length of the total inpainted topography. The sample topography in this case was a 768 × 256 section taken from the middle of the inpainted topography. These dimensions were chosen as they correspond to a single step of inpainted topography, which is also the last section of the topography produced during the inpainting process. The periodicity of the inpainted topography was measured using Fourier transforms of the recurrence error. Spatial frequencies smaller than 0.1 cycles/mm and larger than 150 cycles/mm were not taken into account, as they are not representative of defects produced during the laser cutting process. Figure 9(a) shows the number of frequencies above the amplitude of noise for the recurrence error of inpainted topographies at 15 and 20 m/min. The number of frequencies above noise reaches a constant value as the number of inpainting steps increases. This shows that the inpainting process does produce new defects with each inpainting step, although fewer new defects appear after each successive step. The pattern appears to repeat itself after approximately 10 steps. Figure 9(b) shows a sample recurrence error plot for inpainted topographies of 15 and 20 m/min. Figures 9(c) and 9(d) show sample Fourier transforms for extended topographies of 15 and 20 m/min produced using 20 inpainting steps. Each measurement was averaged over 100 topographies for each inpainting step, to show the total number of frequencies present throughout the extended inpainting process. Since these periods vary somewhat along the length of the topography, the striation patterns produced via inpainting do not repeat themselves identically and, therefore, still allow for some variation in the prediction of striations.
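The recurrence-error analysis can be sketched as follows. This is an illustrative NumPy sketch, not the paper's code: the noise threshold is assumed here to be a multiple of the median spectral amplitude in the 0.1–150 cycles/mm band, which is one plausible choice rather than the authors' exact criterion.

```python
import numpy as np

def recurrence_error(total, sample):
    """Slide `sample` along `total` (same height) and record the MAE at each
    horizontal offset; periodic striations give a periodic error signal."""
    w = sample.shape[1]
    return np.array([np.abs(total[:, i:i + w] - sample).mean()
                     for i in range(total.shape[1] - w + 1)])

def frequencies_above_noise(err, dx_mm, lo=0.1, hi=150.0, k=3.0):
    """Count Fourier components of the recurrence error, between lo and hi
    cycles/mm, whose amplitude exceeds k times the median band amplitude."""
    amp = np.abs(np.fft.rfft(err - err.mean()))
    freq = np.fft.rfftfreq(err.size, d=dx_mm)
    band = (freq > lo) & (freq < hi)
    return int((amp[band] > k * np.median(amp[band])).sum())
```

For a perfectly repeating striation pattern, the recurrence error dips to zero at every multiple of the period, and the count of above-noise frequencies saturates once further inpainting steps add no new spatial frequencies.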

FIG. 9. Study of the periodicity of inpainted topography with respect to the number of inpainting steps. (a) The number of frequencies of the Fourier transform of the MAE compared to the number of inpainting steps for 15 and 20 m/min. (b) Average MAE for inpainted topographies of samples cut at 15 m/min and 20 m/min using 20 inpainting steps. (c) Example Fourier transform of the MAE for an extended topography of a sample cut at 15 m/min. (d) Example Fourier transform of the MAE for an extended topography of a sample cut at 20 m/min.

The purpose of this section is to study the effect of inpainting the space between topographies produced at two different cutting speeds. Given that an inpainting cGAN can fill in the space between two portions of topographic data, it might also be able to fill in the space between topographic data from two samples cut at different speeds. If this were possible, the cGAN might produce laser cutting defects representative of a cutting speed halfway between those of the given input topographies. If so, we could consider this halfway cutting speed as a target speed for the cGAN to reproduce (without access to any input topography corresponding to that cutting speed). To test this hypothesis, a classification CNN capable of accurately estimating the laser cutting speed of any given topographic section was developed. If the inpainting cGAN can successfully produce topographies that mimic those of the intermediate cutting speed, then the classification CNN should identify the topography as having been cut at the middle speed.

The classification CNN receives a topographic section of the laser cut stainless steel edge as an input and outputs an estimate of the cutting speed used to create that sample. Stainless steel samples were cut at speeds ranging from 15 to 24 m/min in intervals of 1 m/min. Topographic data were collected from each sample, with the cutting speed stored in the filename of each image. Of the 120 topographic images, such as those shown in Fig. 2, 96 were used for training and 24 were used for testing. From these sets of topographic images, topographic sections with a shape of 768 × 768 pixels were procedurally generated by randomized cropping. As such, topographic sections used for training and testing were drawn from different images to prevent overlap.
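The procedural generation of sections can be sketched as below. This is an illustrative NumPy sketch with a hypothetical `random_crop` helper; the arrays are shrunk to keep the demo lightweight, whereas the actual sections are 768 × 768 pixels cropped from full topographic images.

```python
import numpy as np

def random_crop(image, size, rng):
    """Cut a random size x size section from a larger topographic image."""
    y = rng.integers(0, image.shape[0] - size + 1)
    x = rng.integers(0, image.shape[1] - size + 1)
    return image[y:y + size, x:x + size]

rng = np.random.default_rng(42)
# 120 (image, cutting speed) pairs covering 15-24 m/min (small demo arrays)
images = [(np.zeros((128, 512), dtype=np.uint8), 15 + i % 10)
          for i in range(120)]

# split whole images 96/24 BEFORE cropping, so that training and testing
# sections can never be drawn from the same image
perm = rng.permutation(len(images))
train = [images[i] for i in perm[:96]]
test = [images[i] for i in perm[96:]]
train_sections = [(random_crop(img, 64, rng), v) for img, v in train]
```

Splitting at the image level, rather than at the section level, is what prevents near-duplicate crops from leaking between the training and testing sets.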

The CNN used the Inception architecture,81 the implementation of which can be found in Ref. 82. The CNN was trained for 10 epochs, with 9600 training topographic sections procedurally generated during each epoch. The CNN was then tested on 2400 procedurally generated examples. The predicted cutting speeds were then compared to the actual experimental cutting speeds. For the purposes of classification, predictions of the cutting speed were rounded to the nearest integer. The mean absolute error for the regression CNN was 0.10 m/min.

The results of inpainting the space between two topographies of different cutting speeds are shown in Fig. 10. Figure 10(a)(1) shows an example of topographic data inpainted between experimentally measured topographies for samples cut at 15 and 17 m/min. Figure 10(a)(2) shows an example of topographic data inpainted between two topographies that had themselves been generated by the inpainting cGAN, with theoretical speeds of 16 and 18 m/min. These two artificial input topographies were inpainted between experimental topographies cut at 15 and 17 m/min and at 17 and 19 m/min, respectively. Figure 10(b)(1) shows a confusion matrix comparing cutting speeds predicted by the regression CNN with the actual experimental cutting speeds. Darker regions indicate higher correlation. In the event of perfect accuracy, there would be a single diagonal line in the matrix from bottom left to top right. It is clear that certain cutting speeds have topographic features in common, making them difficult to distinguish: cutting speeds of 15 and 16 m/min share defects, as do cutting speeds of 20–24 m/min. Figure 10(b)(2) shows, for inpainting between experimentally measured topographic sections, the relationship between the middle speed (experimental) and the speed estimate made by the classifier NN for the inpainted topography. Figure 10(b)(3) shows the error between the middle cutting speed and the classifier NN's estimate of cutting speed (based on the topography), as a function of the difference between the cutting speeds of the two topographies used as inpainting inputs. In the event of perfect prediction accuracy, a horizontal line with a value of 0 would be seen. Figure 10(b)(4) is similar to Fig. 10(b)(2) in that it shows the relationship between the middle speed and the speed estimate made by the classifier NN, except that in this case the two input topographies are themselves generated by inpainting rather than by direct experimental measurement.

FIG. 10. (a) (1) Example of topographic data inpainted between two experimental topographies with laser cut speeds of 15 and 17 m/min. (2) Example of the topographic data inpainted between two artificial topographies with theoretical speeds of 16 and 18 m/min. (b) (1) Correlation plot comparing cutting speeds predicted by the regression CNN with actual experimental cutting speeds. (2) Plot showing the relationship between the middle speed and the regression CNN’s estimate of the laser cut speed based on the inpainted topography in the case where the inpainting input topographies are experimentally measured. (3) Plot of error between the middle cutting speed and the regression CNN’s estimate of the laser cutting speed, as a function of the difference in cutting speed of the two input topographies. (4) Plot showing the relationship between the middle speed and the regression CNN’s estimate of the laser cut speed based on the inpainted topography, in the case where the inpainting input topographies are themselves artificially generated by inpainting.

Figure 10 demonstrates that when inpainting between topographic data from two different cutting speeds, the inpainted topography contains surface defects similar to those of the intermediate cutting speed. Given that the regression plot in Fig. 10(b)(1) demonstrates good accuracy for predicting cutting speed from experimental data, if the inpainting cGAN produces accurate simulations of the middle cutting speed (between the speeds of the two input topographies), then the regression CNN should be able to accurately determine the middle cutting speed from the inpainted topographies. As the plots in Figs. 10(b)(2), 10(b)(3), and 10(b)(4) show, the mean predicted speed for each inpainted topography lies along the theoretical middle speed between the speeds used for the inputs. In particular, Fig. 10(b)(3) shows that the accuracy of the predicted speeds depends on the size of the difference between the starting and ending speeds: on average, the difference between the middle speed and the regression CNN’s estimate of the cutting speed is smaller for smaller speed differences. In both Figs. 10(b)(2) and 10(b)(4), there are no inpainted topographies for which the regression CNN estimates the cutting speed to be v = 15 m/min. This is likely because the inpainting network was not trained on the samples cut at v = 15 m/min and, as such, highlights the difference between the interpolating and extrapolating capabilities of NNs. This might also be responsible for some bias within other predictions. For all other speeds, however, the mean predicted speed falls very close to the target cutting speed. This demonstrates the potential of cGANs to model intermediate cutting speeds between those already measured, a process that could drastically reduce the number of experiments needed to simulate the results of laser cutting over a wide range of parameters. It also demonstrates that the surface defects produced during laser cutting are, on average, linearly dependent on the cutting speed. This modeling process could also be used to determine the linearity of dependence on other laser cutting parameters such as laser power, gas pressure, focus position, and stand-off distance, to model acceleration of the cutting speed in real time, and to measure the impact of varying other parameters while cutting.
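The middle-speed check reduces to simple bookkeeping, sketched below; the prediction values are hypothetical placeholders, purely to illustrate how the target speed, rounded class, and bias would be computed from regression-CNN outputs.

```python
import numpy as np

def middle_speed(v_low, v_high):
    """Target cutting speed for topography inpainted between v_low and v_high."""
    return 0.5 * (v_low + v_high)

def to_class(predicted_speed):
    """Round a continuous speed estimate to the nearest 1 m/min class."""
    return int(np.rint(predicted_speed))

# hypothetical regression-CNN estimates for sections inpainted
# between inputs cut at 15 and 17 m/min
preds = np.array([15.9, 16.2, 16.05, 15.8])
target = middle_speed(15, 17)          # 16.0 m/min
bias = float((preds - target).mean())  # systematic offset, cf. Fig. 10(b)(3)
classes = [to_class(p) for p in preds]
```

A mean of the predictions close to the target, with a small bias, is the criterion used to conclude that the inpainted topography mimics the intermediate cutting speed.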

In conclusion, an inpainting cGAN was used to model the surface topography of samples of stainless steel cut by a fiber laser at speeds of 15–24 m/min in steps of 1 m/min. A regression CNN was then used to determine the accuracy of inpainted features compared to experimental results.

Predictions of surface topography made by the inpainting cGAN were successful in matching experimentally measured parameters such as height distribution, kerf profile, and roughness criteria across the thickness of the sample. Furthermore, the inpainting cGAN was shown to be capable of interpolating the region between the two input topographies with different cutting speeds. This was demonstrated both with experimentally measured input topographies and with input topographies that had themselves been artificially generated by inpainting. The classifier CNN was first shown to be able to correctly predict the cutting speed used to produce a given (experimentally measured) surface topography. Subsequently, this classifier CNN was used to verify that artificial topographies, generated by inpainting, matched the expected appearance for their chosen target cutting speed.

It is possible to conclude, based on the mean of the predictions of the regression CNN, that the inpainting cGAN can predict laser cutting topographies that are statistically similar to topographies not experimentally measured, given that in all cases the CNN predicted the same cutting speeds as it did for experimental cutting topographies. The major benefits offered by this novel approach to modeling laser cutting topographies are as follows: First, it allows topographic defects in laser cut stainless steel to be predicted based on the surrounding topography. Second, it can be applied to cases where the two input topographies have been produced using different laser parameters, allowing, via interpolation, the creation of artificial topographies that correspond to laser parameters that may not have been experimentally measured. Third, it allows the generation of large areas of surface topography from minimal input data, meaning that far fewer experimental examples of laser cutting data would be needed for large area simulations. The roughness criteria can be accurately calculated from topographic predictions made by inpainting; this is of key importance as the roughness criterion is inextricably linked to the final quality of the laser cut edge.79 Furthermore, this approach also offers the possibility of modeling the effect of changing laser parameters during cutting.81,82

In terms of real-world applications, this work allows for theoretical optimization of the laser cutting process over a given parameter range without the need to physically test each parameter setting. Furthermore, it could be used for real-time prediction of defects produced during laser cutting, thereby enabling a self-optimizing cutting system that maximizes the final cut quality according to the needs of the user.

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. This study was funded by the Engineering and Physical Sciences Research Council (Nos. EP/N03368X/1 and EP/T026197/1).

The authors declare no conflict of interest related to this article.

Alexander F. Courtier: Conceptualization (equal); Data curation (equal); Formal analysis (lead); Software (equal); Writing – original draft (lead); Writing – review & editing (equal). Matthew Praeger: Software (equal); Writing – review & editing (equal). James A. Grant-Jacob: Writing – review & editing (equal). Christophe Codemard: Writing – review & editing (equal). Paul Harrison: Data curation (equal); Writing – review & editing (equal). Michalis Zervas: Funding acquisition (equal); Supervision (equal). Ben Mills: Conceptualization (equal); Resources (lead); Supervision (equal); Writing – review & editing (equal).

The data that support the findings of this study are openly available in Dataset in support of the journal paper ‘Predictive Visualisation of Fibre Laser Cutting Topography via Deep Learning with Image Inpainting’ at https://doi.org/10.5258/SOTON/D2489, Ref. 83.

1.
H.
Booth
, “
Laser processing in industrial solar module manufacturing
,”
J. Laser. Micro/Nanoeng.
5
,
183
191
(
2010
).
2.
J.
Francis
and
L.
Bian
, “
Deep learning for distortion prediction in laser-based additive manufacturing using Big data
,”
Manufact. Lett.
20
,
10
14
(
2019
).
3.
J.
Dutta Majumdar
and
I.
Manna
, “
Laser processing of materials
,”
Sadhana
28
,
495
562
(
2003
).
4.
M.
Sparkes
and
W.
Steen
,
“Light” Industry: An Overview of the Impact of Lasers on Manufacturing
(
Elsevier
,
Coventry
,
2018
).
5.
X.
Zang
,
C.
Jian
,
T.
Zhu
,
Z.
Fan
,
W.
Wang
,
M.
Wei
,
B.
Li
,
M.
Diaz
,
P.
Ashby
,
Z.
Lu
,
Y.
Chu
,
Z.
Wang
,
X.
Ding
,
Y.
Xie
,
J.
Chen
,
J.
Hohman
,
M.
Sanghadasa
,
J.
Grossman
, and
L.
Lin
, “
Laser-sculptured ultrathin transition metal carbide layers for energy storage and energy harvesting applications
,”
Nat. Commun.
10
,
1
(
2019
).
6.
M.
Eichhorn
, “
Pulsed 2 μm fiber lasers for direct and pumping applications in defence and security
,”
Proc. SPIE
7836
,
1
(
2010
) [cited 2 September 2020].
7.
J.
Pocorni
,
J.
Powell
,
J.
Frostevarg
, and
A. F. H.
Kaplan
, “
Investigation of the piercing process in laser cutting of stainless steel
,”
J. Laser Appl.
29
,
022201
(
2017
).
8.
J.
Powell
,
D.
Petring
,
R. V.
Kumar
,
S. O.
Al-Mashikhi
,
A. F. H.
Kaplan
, and
K. T.
Voisey
, “
Aerodynamic interactions during laser cutting
,”
Proc. SPIR
,
668
(
1986
) [online].
9.
J.
Thieme
, “
Fiber laser—New challenges for the materials processing
,”
Laser Tech.
4
,
58
60
(
2007
).
10.
Y.
Chen
,
B.
Chen
,
Y.
Yao
,
C.
Tan
, and
J.
Feng
, “
A spectroscopic method based on support vector machine and artificial neural network for fiber laser welding defects detection and classification
,”
NDT E. Int.
108
,
102176
(
2019
).
11.
L.
Shepeleva
,
B.
Medres
,
W. D.
Kaplan
,
M.
Bamberger
, and
A.
Weisheit
, “
Laser cladding of turbine blades
,”
Surf. Coat. Tech.
125
,
45
48
(
2000
).
12.
V.
Balasubramaniam
,
D.
Rajkumar
,
P.
Ranjithkumar
, and
C.
Narayanan
, “
Comparative study of mechanical technologies over laser technology for drilling carbon fiber reinforced polymer materials
,”
Ind. J. Eng. Mat. Sci
27
,
19
32
(
2020
).
13.
M. N.
Zervas
, “
High power ytterbium-doped fiber lasers—Fundamentals and applications
,”
Int. J. Mod. Phys. B
28
,
1442009
(
2014
).
14.
C. H.
Fu
,
J. F.
Liu
,
Y. B.
Guo
, and
Q. Z.
Zhao
, “
A comparative study on white layer properties by laser cutting vs. electrical discharge machining of nitinol shape memory alloy
,”
Procedia CIRP
42
,
246
251
(
2016
).
15.
J.
Powell
,
CO2 Laser Cutting
(
Springer-Verlag
,
London
,
1993
).
16.
O.
Bocksrocker
,
P.
Berger
,
B.
Regaard
,
V.
Rominger
, and
T.
Graf
, “
Characterization of the melt flow direction and cut front geometry in oxygen cutting with a solid state laser
,”
J. Laser Appl.
29
,
022202
(
2017
).
17.
M.
Boujelbene
,
B.
El Aoud
,
E.
Bayraktar
,
I.
Elbadawi
,
I.
Chaudhry
,
A.
Khaliq
,
A.
Ayyaz
, and
Z.
Elleuch
,
Mater. Today Proc.
44
,
2080
(
2021
).
18.
J.
Mesko
, “
Effect of cutting conditions on surface roughness of machined parts in CO2 laser cutting of pure titanium
,”
Mater. Today Proc.
44
,
2080
2086
(
2021
).
19.
T.
Arai
, “
Generation of striations during laser cutting of mild steel
,”
SOP Trans. Appl. Phys.
2014
,
81
95
.
20.
I.
Miraoui
,
M.
Boujelbene
, and
E.
Bayraktar
, “
Analysis of roughness and heat affected zone of steel plates obtained by laser cutting
,”
Adv. Mat. Res.
974
,
169
173
(
2016
).
21.
D.
Schuocker
, “
Dynamic phenomena in laser cutting and cut quality
,”
Appl. Phys. B
40
,
9
14
(
1986
).
22.
M.
Sobih
,
P. L.
Crouse
, and
L.
Li
, “
Striation-free fibre laser cutting of mild steel sheets
,”
Appl. Phys. A
90
,
171
174
(
2007
).
23.
J.
Pocorni
,
J.
Powell
,
E.
Deichsel
,
J.
Frostevarg
, and
A. H.
Kaplan
, “
Fibre laser cutting stainless steel: Fluid dynamics and cut front morphology
,”
Opt. Laser Technol.
87
,
87
93
(
2017
).
24.
T.
Mitchell
,
Machine Learning. 2
(
McGraw-Hill
,
New York
,
1997
).
25.
I.
Goodfellow
,
Y.
Bengio
, and
A.
Courville
,
Deep Learning
(
MIT Press
,
Cambridge
,
MA
,
2016
).
26.
K.
Hornik
,
M.
Stinchcombe
, and
H.
White
, “
Multilayer feedforward networks are universal approximators
,”
Neur. Net.
2
,
359
366
(
1989
).
27.
K.
Simonyan
, and
A.
Zisserman
, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556v6 (
2015
).
28.
K.
He
,
X.
Zhang
,
S.
Ren
, and
J.
Sun
, “Deep residual learning for image recognition,” arXiv:1512.03385v1 (
2015
).
29.
J.
Bao
,
D.
Chen
,
F.
Wen
,
H.
Li
, and
G.
Hua
, “CVAE-GAN: Fine-grained image generation through asymmetric training,” arXiv:1703.10155v1 (
2017
).
30.
Y.
LeCun
and
B.
Yoshua
,
Convolutional Networks for Images, Speech, and Time Series
(
The Handbook of Brain Theory and Neural Networks, MIT Press
,
Cambridge
,
MA
,
1997
), pp.
255
258
.
31.
C.
Szegedy
,
W.
Liu
,
Y.
Jia
,
P.
Sermanet
,
S.
Reed
,
D.
Anguelov
,
D.
Erhan
,
V.
Vanhoucke
, and
A.
Rabinovich
, “Going deeper with convolutions”, arXiv:1409.4842v1 (
2015
).
32.
A.
Krizhevsky
,
I.
Sutskever
, and
G.
Hinton
,
ImageNet Classification with Deep Convolutional Neural Networks
(
NIPS
, Lake Tahoe, NV,
2012
).
33.
M.
Zeiler
, and
R.
Fergus
, “Visualizing and understanding convolutional networks,” arXiv:1311.2901v3 (
2013
).
34.
I.
Durugkar
,
L.
Gemp
, and
S.
Mahadevan
, “Generative multi-adversarial networks,” arXiv:1611.01673 (
2017
).
35.
I.
Serban
,
R.
Lowe
,
L.
Charlin
, and
J.
Pineau
, “Generative deep neural networks for dialogue: A short review,” arXiv:1611.06216 (
2016
).
36.
I.
Goodfellow
,
M.
Mirza
,
B.
Xu
,
D.
Wade-Farley
, and
A.
Courville
, “Generative adversarial nets,” arXiv:1406.2661 (
2014
).
37.
M.
Lucic
,
K.
Kurach
,
M.
Michalski
,
S.
Gelly
, and
O.
Bousquet
, “Are GANs created equal? A large-scale study,” arXiv:1711.10337 (
2018
).
38.
M.
Mirza
and
S.
Osindero
, “Conditional generative adversarial nets,” arXiv:1411.1784v1 (
2014
).
39.
G.
Liu
,
F.
Reda
,
K.
Shih
,
T.
Wang
,
A.
Tao
, and
B.
Catanzaro
, “Image inpainting for irregular holes using partial convolutions,” arXiv:1804.07723.pdf (
2018
).
40.
J.
Xie
,
L.
Xu
, and
E.
Chen
, “
Image denoising and inpainting with deep neural networks
,”
Adv. NeurlPS
25
,
1–9 (
2012
).
41.
M.
Pacher
,
L.
Monguzzi
,
L.
Bortolotti
,
M.
Sbetti
and
B.
Previtali
, “
Quantitative identification of laser cutting quality relying on visual information
,” in
Lasers in Manufacturing Conference 2017
, Munich, Germany, June 26, 2017 (World of Photonics Congress,
2017
).
42.
N.
Levichev
,
P.
Herwig
,
A.
Wetzig
, and
J. R.
Duflou
, “
Towards robust dynamic beam shaping for laser cutting applications
,”
Procedia CIRP
111
,
746
749
(
2022
).
43.
G.
Santolini
,
P.
Rota
,
D.
Gandolfi
, and
P.
Bosetti
, “
Cut quality estimation in industrial laser cutting machines: A machine learning approach
,” in
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
,
Long Beach, CA
, June 16, 2019 (
IEEE
, Long Beach, CA,
2019
), pp.
389
397
.
44.
H.
Tercan
,
T. A.
Khawli
,
U.
Eppelt
,
C.
Büscher
,
T.
Meisen
, and
S.
Jeschke
, “
Improving the laser cutting process design by machine learning techniques
,”
Prod. Eng.
11
,
195
203
(
2017
).
45.
L
,
Franceschetti
,
M.
Pacher
,
M.
Tanelli
,
S. C.
Strada
,
B.
Previtali
and
S. M.
Savaresi
, “
Dross attachment estimation in the laser-cutting process via convolutional neural networks (CNN)
,” in
2020 28th Mediterranean Conference on Control and Automation (MED)
, Saint-Raphael, France, June 16, 2020 (MED
2020
).
46.
M.
Pacher
,
L.
Franceschetti
,
S. C.
Strada
,
M.
Tanelli
,
S. M.
Savaresi
, and
B.
Previtali
, “
Real-time continuous estimation of dross attachment in the laser cutting process based on process emission images
,”
J. Laser Appl.
32
,
042016
(
2020
).
47.
C.
Stadter
,
M.
Schmoeller
,
L.
von Rhein
, and
M. F.
Zaeh
, “
Real-time prediction of quality characteristics in laser beam welding using optical coherence tomography and machine learning
,”
J. Laser. Appl.
32
,
022046
(
2020
).
48.
K.
Wasmer
,
T.
Le-Quang
,
B.
Meylan
,
F.
Vakili-Farahani
,
M. P.
Olbinado
,
A.
Rack
, and
S. A.
Shevchik
, “
Laser processing quality monitoring by combining acoustic emission and machine learning: A high-speed X-ray imaging approach
,”
Procedia CIRP
74
,
654
658
(
2018
).
49.
C.
Knaak
,
U.
Thombansen
,
P.
Abels
, and
M.
Kröger
, “
Machine learning as a comparative tool to determine the relevance of signal features in laser welding
,”
Proc. CIRP
74
,
623
627
(
2018
).
50.
A.
Anastasiou
,
E. I.
Zacharaki
,
D.
Alexandropoulos
,
K.
Moustakas
, and
N. A.
Vainos
, “
Machine learning based technique towards smart laser fabrication of CG
,”
Microelectron. Eng.
227
,
111314
(
2020
).
51.
X.
Yao
,
S. K.
Moon
, and
G.
Bi
, “
A hybrid machine learning approach for additive manufacturing design feature recommendation
,”
Rap. Prot. J.
23
,
983
997
(
2017
).
52.
D. J.
Heath
,
J. A.
Grant-Jacob
,
Y.
Xie
,
B. S.
Mackay
,
J. A. G.
Baker
,
R. W.
Eason
, and
B.
Mills
, “
Machine learning for 3D simulated visualization of laser machining
,”
Opt. Expr.
26
,
21574
(
2018
).
53.
N.
Sanner
,
N.
Huot
,
E.
Audouard
,
C.
Larat
,
J. P.
Huignard
, and
B.
Loiseaux
, “
Programmable focal spot shaping of amplified femtosecond laser pulse
,”
Opt. Lett.
30
,
1479
(
2005
).
54.
X.
Yunhui
,
D.
Heath
,
J.
Grant-Jacob
,
B.
MacKay
,
M.
McDonnell
,
T.
David
,
M.
Praeger
,
R.
Eason
, and
B.
Mills
, “
Deep learning for the monitoring and process control of femtosecond laser machining
,”
J. Phys. Phot.
1
,
1
10
(
2019
).
55.
M. D. T.
McDonnell
,
T.
David
,
J. A.
Grant-Jacob
,
Y.
Xie
,
M.
Praeger
,
B. S.
MacKay
,
R. W.
Eason
, and
B.
Mills
, “
Modelling laser machining of nickel with spatially shaped three pulse sequences using deep learning
,”
Opt. Expr.
28
,
14627
14637
(
2020
).
56.
W.
Feng
,
J.
Guo
,
W.
Yan
,
H.
Wu
,
Y. C.
Wan
, and
X.
Wang
, “
Underwater laser micro-milling of fine-grained aluminium and the process modelling by machine learning
,”
J. Micromech. Microeng.
30
,
045011
(
2020
).
57.
Ben
Mills
,
D.
Heath
,
J.
Grant-Jacob
,
Y.
Xie
, and
R. W.
Eason
, “
Image-based monitoring of femtosecond laser machining via a neural network
,”
J. Phys. Phot.
1
,
1
10
(
2019
).
58.
M.
Zuric
,
O.
Nottrodt
, and
P.
Abels
, “
Multi-sensor system for real-time monitoring of laser micro-structuring
,”
J. Laser Micro/Nanoeng.
14
,
245
254
(
2019
).
59.
B.
Mills
,
D. J.
Heath
,
J. A.
Grant-Jacob
, and
R. W.
Eason
, “
Predictive capabilities for laser machining via a neural network
,”
Opt. Expr.
26
,
17245
(
2018
).
60.
B. S.
MacKay
,
M.
Praeger
,
J. A.
Grant-Jacob
,
J.
Kanczler
,
R. W.
Eason
,
R. O. C.
Oreffo
, and
B.
Mills
, “
Modelling adult skeletal stem cell response to laser-machined topographies through deep learning
,”
Tiss. Cel.
67
,
101442
(
2020
).
61.
B.
Mills
and
J. A.
Grant-Jacob
, “
Lasers that learn: The interface of laser machining and machine learning
,”
IET Optoelectronics
15
,
207
224
(
2021
).
62. D. Teixidor, M. Grzenda, A. Bustillo, and J. Ciurana, “Modeling pulsed laser micromachining of micro geometries using machine-learning techniques,” J. Intell. Manuf. 26, 801–814 (2015).
63. D. J. Heath, J. A. Grant-Jacob, M. Feinaeugle, B. Mills, and R. W. Eason, “Automated 3D labelling of fibroblasts and endothelial cells in SEM-imaged placenta using deep learning,” in Proceedings of the International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2020), La Valletta, Malta, 24 February 2020, Vol. 13, pp. 46–53.
64. D. Weichert, P. Link, A. Stoll, S. Rüping, S. Ihlenfeldt, and S. Wrobel, Int. J. Adv. Manuf. Technol. 104, 1889–1902 (2019).
65. J. Grant-Jacob, M. Praeger, M. Loxham, R. W. Eason, and B. Mills, “Lensless imaging of pollen grains at three-wavelength using deep learning,” Environ. Res. Commun. 2, 1–8 (2020).
66. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
67. J. Grant-Jacob, B. MacKay, Y. Xie, D. Heath, M. Loxham, R. W. Eason, and B. Mills, “A neural lens for super-resolution biological imaging,” J. Phys. Commun. 3, 1–7 (2019).
68. L. Tatzel and F. P. León, “Image-based roughness estimation of laser cut edges with a convolutional neural network,” Procedia CIRP 94, 469–473 (2020).
69. L. Tatzel, O. Al Tamimi, T. Haueise, and F. León, “Image-based modelling and visualisation of the relationship between laser-cut edge and process parameters,” Opt. Laser Technol. 141, 107028 (2021).
70. A. F. Courtier, M. McDonnell, M. Praeger, J. A. Grant-Jacob, C. Codemard, P. Harrison, B. Mills, and M. Zervas, “Modelling of fibre laser cutting via deep learning,” Opt. Express 29, 36487 (2021).
71. L. Ozler and N. Dogru, “An experimental investigation of hole geometry in friction drilling,” Mater. Manuf. Processes 28, 470–475 (2013).
72. R. Kumar and N. R. J. Hynes, “Influence of rotational speed on mechanical features of thermally drilled holes in dual-phase steel,” Proc. Inst. Mech. Eng. Part B 233, 1614–1625 (2019).
73. J. M. Fortain, S. Guiheux, and T. Opderbecke, “Thin-sheet metal welding,” Welding Int. 27, 30–36 (2013).
74. H. Soussi, N. Masmoudi, and A. Krichen, “Analysis of geometrical parameters and occurrence of defects in the hole-flanging process on thin sheet metal,” J. Mater. Process. Technol. 234, 228–242 (2016).
75. U. Karanfil and U. Yalcin, “Real-time monitoring of high-power fibre-laser cutting for different types of materials,” Ukr. J. Phys. Opt. 20, 60–72 (2019).
76. K. Hirano and R. Fabbro, “Experimental investigation of hydrodynamics of melt layer during laser cutting of steel,” J. Phys. D: Appl. Phys. 44, 105502 (2011).
77. P. Isola, J. Zhu, T. Zhou, and A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv:1611.07004 (2016) (Accessed 20 March 2021).
78. “Pix2pix: Image-to-image translation with a conditional GAN,” TensorFlow Core tutorial (no date), available at https://www.tensorflow.org/tutorials/generative/pix2pix (Accessed 2 March 2023).
79. D. Whitehouse, Surfaces and Their Measurements (Hermes Penton Ltd, London, 2002), pp. 51–60.
80. DIN ISO 9013, Thermal cutting—Classification of thermal cuts—Geometrical product specification and quality tolerances, 3rd ed. (ISO, Brussels, 2017), pp. 1–28.
81. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” arXiv:1512.00567 (2015).
82. J. F. Shaikh, “Inception network: Implementation of GoogLeNet in Keras,” Analytics Vidhya, see https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/ (2020) (Accessed 2 March 2023).
83. A. F. Courtier, M. Praeger, J. A. Grant-Jacob, C. Codemard, P. Harrison, M. Zervas, and B. Mills, “Dataset in support of the journal paper ‘Predictive Visualisation of Fibre Laser Cutting Topography via Deep Learning with Image Inpainting.’”
Published open access through an agreement with JISC Collections.