The challenge of monitoring ultrashort-pulse laser microstructuring lies in the stringent requirements for both spatial and temporal accuracy. Additionally, the monitoring system must not interfere with the processing. This challenge is addressed by employing high-speed off-axis collection of secondary optical emissions. The spatial information is derived from the current processing position, which is recorded synchronously with the emission intensities. A field programmable gate array-based system is used for real-time data collection, synchronization, analysis, and feedback generation. Defects that arise during machining are located as they appear on the workpiece surface, triggering a correction procedure, such as a laser-polishing pass. Furthermore, we compare this method with a data-driven approach using a model that analyzes heatmaps created from photodiode time series and scan positions. A neural network, trained with labels generated by the analytical algorithm and human assistance, detects defects even when the analytical method fails.

Ultrashort pulse (USP) laser processing enables micrometer-precision surface machining of a variety of materials.1 The machining of metals is almost melt-free, which opens possibilities for numerous applications in many industrial sectors.2 The diversity of applications and processable materials continuously requires process development, which is highly time-consuming. In addition, the process itself often takes a long time. Moreover, the constant demand for higher efficiency drives a further acceleration of the processing.3 The resulting need to detect and eliminate instabilities as early as possible increases the importance of monitoring surface properties during the machining.

Previously developed machine-integrated sensor technology at Fraunhofer ILT4 can provide information on the status of the machining and enable closed-loop control. The off-axis sensor arrangement has the advantage of simple machine integration, but at the same time, it increases the complexity of the data analysis. The optical process emissions depend not only on the surface composition and geometry, but also on the processing parameters, as well as on different physical phenomena and interactions. For these reasons, a robust defect detection system requires thorough data analysis.

In this study, we focus on the improvement of our previously reported analysis during the machining, described in Ref. 5. The analysis algorithm is executed as the data are being collected, so that a new result is produced every 10 μs. The improvement over the existing work is that large-area defects are now recognized as large even though the computation runs in a rolling window. This is thoroughly described in the first part of this paper, the analytical approach. Moreover, a framework is developed that enables adaptation to various new use cases. This development is then used to train a deep-learning model. The methodology and the results are presented in the second half of this study, the data-based modeling.

Research on quality assurance of USP-laser surface machining through inline monitoring is almost as old as the process itself. In general, the approaches can be divided into collection of the laser reflection and the secondary process emissions, on one hand, and direct topography measurement, on the other hand. Various methods from the first group have been evaluated and their results reported. For example, by collecting acoustic emissions at the laser pulse repetition rate, deviations from the expected processing parameters can be detected.6 For processes with higher repetition rates, microphones with frequency ranges of several hundred kilohertz are necessary. They have been employed in basic research on USP laser ablation7 and for the detection of transitions during machining of multilayer materials.8 However, recording the high-frequency components of the acoustic signals in air is possible only either a few millimeters away from the processing zone or directly at the workpiece surface.9 The resulting requirement to mount the detector on, or very near, the workpiece makes this approach unsuitable for the production environment.

In comparison to the acoustic emissions, the optical secondary emissions are much easier to collect and interpret. A photodiode integrated inside the beam path can detect instabilities as well as the amount of the ablated volume.10 However, an automatic quality monitoring based on these results has not been implemented yet. For the special case of multimaterial ablation, the transition to the next material is detectable using laser-induced plasma spectroscopy.11 Nevertheless, this method is too slow to achieve a sufficient spatial resolution if paired with scanner coordinates. Besides the visible secondary emissions, both IR emissions12 and the laser reflection13 have been evaluated for inline monitoring, however, without spatial information.

Finally, it is possible to measure the distance between the workpiece surface and a detector using the same optical path as the processing to extract the surface roughness. This is commonly done using optical coherence tomography (OCT). One example of a measurement system integrated into the beam path is based on the ultrahigh-resolution spectral domain optical coherence tomography (UHR-SD-OCT).14 However, it is unsuitable for real-time roughness detection because its maximum measurement frequency reaches only 1.4 kHz. Newer OCT systems are quicker and are suitable for measurements at lower scan speeds or for laser drilling.15 However, a strong disadvantage of OCT is that it requires a complex integration into the optical path.16 

Although various approaches for process monitoring of USP-laser micromachining have been investigated, none of them have been able to reliably and without limitations detect or predict the quality of the resulting surface yet. A trade-off is present between the direct topography measurement with low sampling rates and complex machine integration and the faster detection of phenomena from the machining zone, which requires a more complex data analysis. In this paper, a previously developed system for process monitoring of USP-laser micromachining is further extended. The monitoring is based on a lateral acquisition of optical process emissions at three wavelength ranges. The spatial information originates from the scanner coordinates sampled directly from its controller and synchronized with the photodiode time series via a field programmable gate array (FPGA)-based system.

For implementing an edge-based surface topography analysis, the existing system4 integrated into the machine and schematically depicted in Fig. 1 was used. It consists of photodiodes for three different wavelength ranges arranged laterally around the focusing lens. For that, a special holder for the sensors had been developed that ensures reproducible distance and angle of the detectors relative to the surface. An FPGA-based system samples scanner positions synchronously to the photodiode time series and, therefore, adds information on the position of the laser beam to the photodiode signals. That way, a positional resolution is added to the signals that originate from the off-axis observation of the process emissions and the laser reflection.

FIG. 1.

Schematic representation of the measurement system with examples of raw measurements.


In addition to the data collection and synchronization, data analysis is executed in the FPGA part of the multiprocessor system-on-chip (MPSoC). That way, the evaluation of the signals runs entirely in parallel with their collection and is updated every 10 μs. The feedback is sent to the machine immediately. The analysis results are forwarded to a PC via Ethernet and can be validated, e.g., by representing them on a 2D plane. The images with the positions of the defects are available a few seconds after a layer has been fully scanned. The procedure is an improvement of the analysis described in Ref. 5. There, the possibility of detecting the surface roughness through this measurement principle was investigated. It was shown that relative changes in surface roughness are detectable during the machining. Moreover, the system enables the localization of deviations within the layer, e.g., holes, changes in depth, cracks, and cone-like protrusions (CLPs). For the latter, an algorithm was developed that detects CLPs during the machining. A simpler version of the analysis was implemented for execution in the FPGA. The information is forwarded to a computer together with the raw data, which enables the generation of defect maps retrospectively. However, this version of the algorithm is unable to differentiate one large defect from many small defects stretched in the scan direction, but it can reliably compute a measure proportional to the number of defects in a layer.

Section III A thoroughly describes the improved analysis algorithm. After that, a further improvement of the algorithm is presented that employs a data-based approach and is supported by the analytical real-time analysis.

The presented analysis was developed to detect anomalies during the machining. Anomalies are defined as anything unusual, such as abrupt changes in the signal. To achieve independence of the processing parameters, as well as a general anomaly detection, relative instead of absolute changes are observed.

The following steps take place each time a new data point is collected, i.e., every 10 μs. The time series of each photodiode are analyzed separately. The procedure for each sensor is the same; only the VIS-photodiode data are low-pass filtered with a 50 μs moving-average window before the analysis, because they contain high-frequency disturbances that originate outside the process.
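The pre-filtering step can be sketched as follows. This is an illustrative reconstruction, not the FPGA implementation; the function name and the 'same'-mode alignment with the scanner positions are our assumptions:

```python
import numpy as np

def lowpass_vis(signal: np.ndarray, window_us: int = 50, sample_us: int = 10) -> np.ndarray:
    """Moving-average low-pass filter for the VIS-photodiode time series.

    With one sample every 10 us, a 50 us window corresponds to 5 samples.
    """
    n = window_us // sample_us  # 5 samples at the stated rates
    kernel = np.ones(n) / n
    # 'same' keeps the filtered series aligned with the scanner positions
    return np.convolve(signal, kernel, mode="same")

# Example: a single high-frequency spike is strongly attenuated
raw = np.array([1.0, 1.0, 5.0, 1.0, 1.0, 1.0, 1.0])
filtered = lowpass_vis(raw)
```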

Figure 2 describes the analysis procedure using the example of a single sensor.

FIG. 2.

Procedure description of the (simplified) analysis algorithm for only one sensor.


In the first step, the raw signal is filtered by computing a moving average in a 100 μs window. At the same time, a deviation from the current average value is determined for each datapoint. Based on this deviation, an upper and a lower threshold are computed, which define whether a datapoint lies in the OK or nOK range, respectively. All datapoints that fall between the two thresholds are considered undefined. New thresholds are computed only while the raw signal is inside the undefined range or as soon as it leaves the current state. That way, even though the computation is relative to the intensities in a 10-datapoint window, the detected OK and nOK ranges can be longer.
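A minimal software sketch of this rolling classification follows. The mean ± k·std form of the thresholds and the state codes are our assumptions; the exact deviation measure used on the FPGA is not specified here:

```python
import numpy as np

UNDEF, OK, NOK = 0, 1, 2  # assumed state codes for illustration

def classify_stream(signal, window=10, k=1.0):
    """Rolling OK/nOK assignment over a 10-datapoint (100 us) window.

    Thresholds are frozen while the signal stays in a defined state and
    refreshed while it is undefined or when it changes state, so detected
    OK/nOK ranges can outlast the window itself.
    """
    labels = np.zeros(len(signal), dtype=int)
    upper = lower = None
    state = UNDEF
    for i in range(len(signal)):
        win = signal[max(0, i - window + 1): i + 1]
        if upper is None or state == UNDEF:
            m, s = win.mean(), win.std()
            upper, lower = m + k * s, m - k * s
        x = signal[i]
        new_state = OK if x > upper else NOK if x < lower else UNDEF
        if new_state != state:  # leaving the current state: refresh thresholds
            m, s = win.mean(), win.std()
            upper, lower = m + k * s, m - k * s
        state = new_state
        labels[i] = state
    return labels
```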

The same procedure is executed for each of the sensors separately and at the same time. In addition, a separate process determines whether a currently detected defect exceeds the length of the window, and if so, by which multiple of the window length. If it does, an error counter is incremented by the number of window lengths the defect spans. This is illustrated in Fig. 3.

FIG. 3.

Process of the (simplified) analysis for computation of the defect presence.


A new error measure is computed for each pass over the surface. It is compared to the overall defect measure determined during the first pass. As soon as it exceeds a threshold computed relative to the first pass (in this case, a 25% increase), a defect flag is set to 1. From this point on, in addition to the information that the processing is deviating, the locations of the defects are transmitted to the user. This is done via the network interface used for streaming the raw data, which is based on lwIP (a lightweight TCP/IP stack).
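The per-pass decision can be sketched as below. The 25% increase matches the text; the latching behavior follows the later description that the flag is held high until a new processing starts:

```python
def defect_flag(error_per_pass, rel_increase=0.25):
    """Per-pass defect decision relative to the first-pass baseline.

    The first pass sets the baseline; the flag latches to 1 as soon as a
    later pass exceeds baseline * (1 + rel_increase), i.e., a 25% increase.
    """
    baseline = error_per_pass[0]
    threshold = baseline * (1.0 + rel_increase)
    flags, latched = [], 0
    for e in error_per_pass:
        if e > threshold:
            latched = 1  # held high until a new processing starts
        flags.append(latched)
    return flags
```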

For the development and validation of the algorithm, a simulation was designed and executed in the simulator integrated into the Vivado Design Suite. The VIS-emission photodiode was given an offset that corresponds to the intensity measured when no processing takes place and the machine illumination is turned on. The analysis of the simulated photodiode data is validated by observing the computed flags and signals shown in red and white in Fig. 3. The red flags denote the presence of defects and the white ones their certain absence.

Due to the relative computation of the defect measure, the first pass is always considered to be without defects. During the first pass, the defect threshold is computed in the background, but the whole surface is labeled OK. The second pass uses the procedure to assign a class to each datapoint: nOK (red signals), OK (white signals), or not assigned (where neither OK nor nOK is set to high).

In an additional simulation, we validated that most signal drops are correctly assigned either the OK or the nOK label. Moreover, the lower signal intensities at the beginning of the layer are correctly interpreted as OK, thanks to the relative computation in a sliding window. This is an additional advantage of this procedure compared to a much simpler threshold comparison. Furthermore, even long after the rising edge of the laser modulation signal, the sole presence of photodiode signal drops does not necessarily denote a defect. This is due to the computation being relative to the current standard deviation of the signal. More precisely, the thresholds that assign an OK or nOK label to a given intensity are computed in such a way that fluctuations are always detected. To differentiate between a fluctuation that indicates a defect and a normal one, without having to introduce absolute thresholds, another characteristic of the CLPs is analyzed: as soon as they appear on the surface, with every additional pass, the CLPs either disappear completely or grow around the biggest clusters. Therefore, only defects of a certain size are considered while determining the defect measure. As soon as the defect measure exceeds the threshold set during the first pass, a defect flag is set to high and held until a new processing starts.

Different examples of the edge-based analysis results are shown in Fig. 4. The butterflies were machined with an unchanged parameter combination, which in most cases leads to the appearance of CLPs and burnt edges. The 2D representation of the analyzed laser reflection time series shows the defects in red, the certain absence of defects in green, and the not assigned areas in dark blue. The 2D representations originate from the data collected during the final, 11th pass over the workpiece surface. The example at the bottom depicts a surface with CLPs, whereas the top example does not have any point-like defects on the structure of the butterfly wings. However, both the butterfly’s body and antennae and the wings’ edges are burnt, which is visible as darker areas in the microscope image. This is also correctly classified by the analysis algorithm. The bottom image additionally features the CLPs, which are also correctly detected in the analyzed laser reflection time series.

FIG. 4.

Microscope images of surfaces after processing (left) and the 2D representations of defect positions (right), detected during the machining using the laser reflection for two different structures: without CLPs, but with burnt edges (top) and with CLPs (bottom); both images show the data collected during the last pass over the surface.


Figure 5 gives a more detailed insight into how the 2D representations from Fig. 4 are generated. It depicts the raw signal of the laser reflection together with the laser modulation signal, as well as the areas assigned the “OK,” “nOK,” or “not assigned” label.

FIG. 5.

Example of the raw signal and analysis results during processing as well as its depiction on a 2D-workpiece surface.


All the depicted signals are either collected by the FPGA in the edge device or are the result of the analysis that is executed every 10 μs. When plotted on the 2D plane, as indicated in Fig. 5, the analysis is validated by comparing it to the microscope images.
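Rasterizing the synchronized samples onto the 2D plane might look like this sketch; the normalized coordinates and the "worst label wins" precedence are our assumptions for illustration:

```python
import numpy as np

def defect_map(x, y, labels, shape=(256, 256)):
    """Rasterize synchronized (x, y, label) samples into a 2D defect map.

    Assumption: scanner coordinates are normalized to [0, 1).
    Label codes: 0 = not assigned, 1 = OK, 2 = nOK; nOK wins on overlap.
    """
    img = np.zeros(shape, dtype=int)
    rows = (np.asarray(y) * shape[0]).astype(int)
    cols = (np.asarray(x) * shape[1]).astype(int)
    for r, c, lab in zip(rows, cols, labels):
        img[r, c] = max(img[r, c], lab)  # keep the 'worst' label per pixel
    return img
```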

The previous two figures both compare the data collected during the last pass over the surface with the surface state after the processing. The actual added value, however, originates from the information that this procedure delivers during the processing. As both the monitoring and the analysis are carried out on the edge device while the laser passes over the workpiece surface, the information on the defect location is available before the processing is finished, i.e., as soon as the defects appear on the workpiece surface. This is depicted in Fig. 6 in more detail.

FIG. 6.

Automatically computed error measure of an example of machining the butterfly in 11 passes with examples of defect positions based on the analyzed laser reflection.


In Fig. 6, the 2D representation of the defects detected during the processing of a few selected layers is shown. The graph depicts the error measure of each pass determined by analyzing all three photodiodes. Moreover, the automatically computed threshold, derived from the first pass, is shown; once the defect measure exceeds it, the exact defect positions are sent from the FPGA to the personal computer. The first five butterflies are therefore shown in green: during their processing, no or only insignificantly few CLPs and similar defects are present on the surface. After the fifth pass, however, the error measure continues to grow and reaches the threshold beyond which the number of defects is considered alarming.

By executing the analysis on multiple processing structures with and without CLPs and plotting the results on a 2D plane, the procedure was successfully validated. The presence of defects was always correctly detected. Depending on the structure, this happened after the 5th, 6th, or 7th pass, because defect growth is essentially arbitrary and occurs at different rates. The exact positions of the defects are in most cases correctly assigned. Larger areas in particular are reliably detected, whereas smaller defects are sometimes falsely assigned the OK class.

The described procedure was integrated into an existing machine. For immediate feedback on the presence of defects, a digital output is generated 10 μs after the threshold has been exceeded. To forward this signal to the machine, either one of the network interfaces or a digital output can be used. The possible interfaces are shown in Fig. 7.

FIG. 7.

Photo of the edge device with the interfaces.


Because the machine used in this work does not accept external inputs, a graphical user interface (GUI), shown in Fig. 8, was developed to interact with the system. It runs on a separate PC, which also receives the raw data from the FPGA-based system from Fig. 7.

FIG. 8.

Graphical user interfaces during the data acquisition and right after defects have been detected.


The GUI allows the control of both the measurement and the 2D-plotting procedure. Moreover, it shows information about the data collection status, the axes positions, and additional information from the machine, e.g., the temperatures at specific locations inside it. During the data acquisition and after defects have been detected, the GUI informs the user accordingly. This notification is reset each time a new processing is started.

Besides the GUI, the result of the analysis is simultaneously generated for each processing pass. One example is given in Fig. 9.

FIG. 9.

Example analysis results of seven passes over the surface (seven rows) showing the defect positions for the last four passes in red; one map (column) for each of the three sensors and one for all (majority voting), resulting in four columns.


Here, for seven consecutive passes or layers (corresponding to rows), the defect positions are plotted on a 2D plane that represents the machined geometry. This is done as soon as the first defective layer has been detected. One image is created for each photodiode (corresponding to columns), and the fourth image depicts the majority vote among all three. From Fig. 9, it becomes obvious that as soon as CLPs appear on the surface and no action is taken, they spread. Some of the smaller CLPs may disappear, whereas the rest organize into growing clusters. This is adequately detected by each of the photodiodes.
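The per-pixel majority vote over the three sensor maps reduces to a few lines; this sketch assumes binary masks where 1 marks a defect:

```python
import numpy as np

def majority_vote(mask_a, mask_b, mask_c):
    """Pixel-wise majority vote over three binary defect masks (1 = defect).

    A pixel is marked defective when at least two of the three sensors
    flag it.
    """
    votes = mask_a.astype(int) + mask_b.astype(int) + mask_c.astype(int)
    return (votes >= 2).astype(int)
```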

To achieve the simultaneous visualization of the results, the existing software was extended to plot the error maps as well as to receive instructions on whether the raw data heatmaps should additionally be generated.

In addition to the analytical approach, a deep learning model was trained to detect the defects from the photodiode data. For this task, semantic segmentation was chosen because it enables assigning every pixel a certain label, in a similar way as in the analytical model from Sec. III. The UNet model was selected because it is available directly in the manufacturer’s Model Zoo and is compatible with the chip employed in the edge device. The employed model is roughly based on a development originally used for medical image segmentation in Ref. 17.

In contrast to the model for medical image segmentation, we used single images, which we previously generated from the photodiode time series. Moreover, for the encoder side, a VGG16 pretrained on ImageNet with frozen weights was employed. This enables the model to extract general features such as edges and textures from the input image. As the input image passes through the encoder, its features are extracted. The decoder side expands these features back to their original input size. This upsampling is followed by a concatenation with the corresponding feature map from the encoder through skip connections. After that, the information passes through convolution layers. Immediately after upsampling, the spatial details in the feature maps may be coarse and therefore inaccurate. The convolution layers refine the coarse edges and integrate local context, which enables the model to make more accurate and detailed predictions for each pixel in the output. The described model architecture is shown in Fig. 10.
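The core decoder step, upsampling followed by skip-connection concatenation, can be illustrated with a shape-level numpy sketch; the real model's learned convolutions and weights are omitted here:

```python
import numpy as np

def decoder_step(feat, skip):
    """Minimal sketch of one UNet decoder step: nearest-neighbor upsampling
    followed by concatenation with the encoder feature map via the skip
    connection. Arrays are (H, W, C); the refining convolutions that follow
    in the real model are omitted.
    """
    up = feat.repeat(2, axis=0).repeat(2, axis=1)  # 2x nearest-neighbor upsample
    assert up.shape[:2] == skip.shape[:2], "skip connection must match spatially"
    return np.concatenate([up, skip], axis=-1)     # channel-wise concat
```

Doubling the spatial size while concatenating the encoder channels is what lets the convolutions that follow refine the coarse upsampled edges with local detail from the encoder.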

FIG. 10.

UNet model architecture used for the segmentation of the 2D representation of photodiode signals.


In total, 99 images of photodiode data were manually labeled. Out of these datapoints, 9 were used for validation and 13 as the test dataset. By using data augmentation, the 77 training datapoints were extended into 462.
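The paper does not state which augmentations were used; a six-fold expansion consistent with 77 → 462 could, for example, combine flips and rotations:

```python
import numpy as np

def augment_sixfold(img):
    """One plausible six-fold augmentation (assumed, not from the paper):
    the original image, two flips, and three rotations, turning 77 training
    images into 462."""
    return [
        img,
        np.fliplr(img),        # horizontal flip
        np.flipud(img),        # vertical flip
        np.rot90(img, 1),      # 90 degrees
        np.rot90(img, 2),      # 180 degrees
        np.rot90(img, 3),      # 270 degrees
    ]

# 77 originals -> 77 * 6 = 462 training samples
```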

Using these data, the model from Fig. 10 was trained multiple times to identify the most suitable parameter combination and configuration. Besides varying the number of epochs, the batch size, and the learning rate, a constant image size of 256 × 256 pixels was chosen. Moreover, different loss functions were tested—binary cross entropy,

L_BCE = −(1/N) Σᵢ [ŷᵢ log(yᵢ) + (1 − ŷᵢ) log(1 − yᵢ)],   (1)

dice loss,

L_Dice = 1 − (2 Σᵢ yᵢ ŷᵢ) / (Σᵢ yᵢ + Σᵢ ŷᵢ),   (2)

and the combination of both. Here, y represents the predicted value and ŷ the actual label.
As the measure for the accuracy of the prediction, pixel accuracy was used. It represents the percentage of pixels in the image that were correctly classified and can be described with the equation

pixel accuracy = (TP + TN) / (TP + TN + FP + FN),   (3)

where TP and TN stand for the true positive (defect) and true negative (no defect) pixels, respectively, whereas FP and FN stand for the two categories of falsely classified pixels. Because this measure is only an indication of the result, expert opinion on the predicted segmentation masks was also considered. The finally selected training parameters are summarized in Table I.
TABLE I.

Model configuration.

Model architecture	UNet
Encoder	Pretrained VGG16
Loss function	Combined dice loss and binary cross entropy
Batch size	
Learning rate	0.001 (with Adam optimizer)
Image size	256 × 256 px
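The tested loss terms and the pixel-accuracy metric can be sketched in numpy; this is illustrative only, not the training implementation, and the epsilon stabilizers are our additions:

```python
import numpy as np

def bce(y_pred, y_true, eps=1e-7):
    """Binary cross entropy, Eq. (1): mean over all pixels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def dice_loss(y_pred, y_true, eps=1e-7):
    """Dice loss, Eq. (2): one minus the Dice overlap coefficient."""
    inter = np.sum(y_pred * y_true)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_pred) + np.sum(y_true) + eps)

def pixel_accuracy(y_pred, y_true):
    """Pixel accuracy, Eq. (3): (TP + TN) over all pixels."""
    return np.mean((y_pred > 0.5).astype(int) == y_true)
```

The combined loss used in Table I would then simply be the sum (or a weighted sum) of `bce` and `dice_loss`.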

A model trained in this way produces segmentation masks that are in some cases better than the manually generated ones. Because the pixel accuracy is determined by comparing the two, the reported accuracies can understate the model’s actual performance, since the predictions are scored against the imperfect manually labeled masks. To mitigate this, a procedure for automatic labeling was investigated.

A depiction of the full data preprocessing pipeline is given in Fig. 11.

FIG. 11.

Data processing pipeline for UNet training.


A qualitative comparison between the manual and automatic labeling procedures is given in Fig. 12 for one example. Comparing the microscope image with both the manually and the automatically labeled masks makes clear that the automatic labeling produces a much more detailed mask than the manual approach. Especially the very small features, such as the butterfly’s antennae or the “metamorpha” letters, retain their actual shape or are visible at all only in the automatically labeled images. In addition to more detailed labeling, another property of the analytical model was exploited: before the number of defects reaches a certain threshold, the defects are considered nonexistent. Therefore, the corresponding segmentation masks are completely black, i.e., contain no defects.

FIG. 12.

Comparison of manually and automatically labeled laser reflection data with the microscope image of the surface.


To prevent the model from overfitting, previously collected data, in which two different forms were machined, were additionally employed for training. Figure 13 shows examples of the photodiode data and the corresponding labels. The segmentation masks for the data where the defect measure does not exceed the threshold were left black, indicating no presence of defects.

FIG. 13.

Examples of training data with and without defects and corresponding automatically generated masks using the analytic procedure.


Besides being more precise, automatic labeling is much quicker than manual labeling—in total, more than 3000 segmentation masks were generated. However, the automatic labeling based on the analytic procedure is not flexible, which led to some incorrect masks. Therefore, in the next step, all masks were inspected by an expert, and a total of 578 images were sorted out. For testing the model, 101 of the correctly labeled datapoints were used, leaving a total of 2403 images for training and validation. Figure 14 visually explains the division of the dataset.

FIG. 14.

Overview of the dataset.


For evaluating the model, two datasets were used: one with correct labels, to determine the pixel accuracies, and one where the analytical model was unable to correctly detect the defects, to compare the two approaches.

Figure 15 shows the confusion matrix that results from the pixel accuracy on the test dataset (101 images). It shows a high rate of correctly assigned defects (90%) and no defects (96%), as well as a significant improvement over the model trained on manually labeled data. The increase in accuracy is assumed to stem from the differently labeled masks rather than from the amount of training data. To validate this and reproduce the conditions of the training with manually labeled masks, the best 86 images were selected by an expert from the automatically labeled dataset and the model was trained again.

FIG. 15.

Normalized pixel accuracies of the UNet model trained with automatically labeled data.


Figure 16 presents a comparison between the results of the three trained models as well as the analytically labeled data on various test images. It indicates correct and incorrect results with green and red backgrounds, respectively. This is assessed by comparing the segmentation mask with the raw image on the left-hand side of Fig. 16.

FIG. 16.

Comparison of the four approaches for defect detection, from left to right: raw data, results of analytical algorithm used for labeling, and of models trained: on manually labeled data, on over 2000 automatically labeled images and on 77 automatically labeled images.


The first three rows show images that are correctly analyzed by the analytical approach. The model trained on manually labeled data fails to detect the defects in all three images, whereas both models trained on the automatically generated masks perform well. In the case of the model trained on only 77 images, the results are even better than the automatically generated masks themselves.

For the three examples in the bottom rows of Fig. 16, where the analytical algorithm performed poorly, all three data-based models show at least a slight improvement. The model trained on manually labeled masks performs poorly on images with vertical defects. However, it detects point-like and horizontal geometries more reliably than the two other deep learning models. Moreover, the two models trained on automatically labeled data behave differently from each other. The model trained on the large dataset in many cases mimics the behavior of the analytical algorithm used for labeling, which suggests overfitting. The one trained on the smaller, carefully chosen dataset detects the defects’ positions correctly in all example images, regardless of their size or shape. Furthermore, it correctly recognizes the absence of defects.

In this work, we extended our previously developed system for quality monitoring during USP-laser processing by incorporating spatially resolved collection and analysis of photodiode data. The existing edge-based analysis for defect detection was enhanced to locate large-area defects. Additionally, we refined our detection algorithm using a data-driven approach. A neural network was trained on various datasets, including one labeled by the analytical algorithm and assisted by a human to identify incorrect segmentation masks. The model trained with a small set of carefully chosen automatically labeled data achieved the highest accuracy. It was tested on data where the analytical algorithm failed to detect defects correctly, demonstrating a clear improvement over the analytical approach.

The next steps include implementing the model on the edge device. Furthermore, beyond the presented detection of CLPs, this development enables various extensions, such as the detection of material transitions and the localization of residues during multimaterial processing.

This work has received funding from Horizon Europe, the European Union’s Framework Programme for Research and Innovation, under Grant Agreement No. 101057457 (METAMORPHA).

The authors have no conflicts to disclose.

Milena Žurić: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Methodology (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal). Goomaral Sukhbold: Formal analysis (equal); Investigation (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. J. Meijer, K. Du, A. Gillner, D. Hoffmann, V. S. Kovalenko, T. Masuzawa, A. Ostendorf, R. Poprawe, and W. Schulz, "Laser machining by short and ultrashort pulses, state of the art and new opportunities in the age of the photons," CIRP Ann. 51, 531–550 (2002).
2. M. Weikert, C. Föhl, and F. Dausinger, "Surface structuring of metals with ultrashort laser pulses," Proc. SPIE 4830, 501–505 (2003).
3. B. Neuenschwander, B. Jaeggi, M. Schmid, and G. Henning, "Surface structuring with ultra-short laser pulses—Basics, limitations and needs for high throughput," Phys. Procedia 56, 1047–1058 (2014).
4. M. Zuric, O. Nottrodt, and P. Abels, "Multi-sensor system for real-time monitoring of laser micro-structuring," J. Laser Micro/Nanoeng. 14, 245–254 (2019).
5. M. Zuric and A. Brenner, "Real-time defect detection through lateral monitoring of secondary process emissions during ultrashort pulse laser microstructuring," Opt. Eng. 61, 094101 (2022).
6. P. Weber, Steigerung der Prozesswiederholbarkeit mittels Analyse akustischer Emissionen bei der Mikrolaserablation mit UV-Pikosekundenlasern (Karlsruher Institut für Technologie, 2014).
7. F. Mitsugi, I. Tomoaki, T. Nakamiya, and Y. Sonoda, "Optical wave microphone measurements of laser ablation of copper in supercritical carbon dioxide," Thin Solid Films 547, 81–85 (2013).
8. C. Lutz, R. Sommerhuber, M. Kettner, C. Esen, and R. Hellmann, "Towards process control by detecting acoustic emissions during ultrashort pulsed laser ablation of multilayer materials," 12873, 227–232 (2024).
9. E. V. Bordatchev and S. K. Nikumb, "Effect of focus position on informational properties of acoustic emission generated by laser–material interactions," Appl. Surf. Sci. 253, 1122–1129 (2006).
10. C. Gehrke, Überwachung der Struktureigenschaften beim Oberflächenstrukturieren mit ultrakurzen Laserpulsen (Utz Wissenschaft, 2013).
11. R. Kunze, G. Mallmann, and R. H. Schmitt, "Inline plasma analysis as tool for process monitoring in laser micro machining for multi-layer materials," Phys. Procedia 83, 1329–1338 (2016).
12. L. Olawsky, S. Moghtaderifard, C. Kuhn, and A. F. Lasagni, "Online process monitoring of direct laser interference patterning using an infrared camera system," Mater. Lett. 350, 134914 (2023).
13. S. Marimuthu, S. Pathak, J. Radhakrishnan, and A. M. Kamara, "In-process monitoring of laser surface modification," Coatings 11, 886 (2021).
14. M. Wiesner, J. Ihlemann, H. H. Müller, E. Lankenau, and G. Hüttmann, "Optical coherence tomography for process control of laser micromachining," Rev. Sci. Instrum. 81, 033705 (2010).
15. S. Hasegawa, M. Fujimoto, T. Atsumi, and Y. Hayasaki, "In-process monitoring in laser grooving with line-shaped femtosecond pulses using optical coherence tomography," Light: Adv. Manuf. 3, 427–436 (2022).
16. F. Zechel, R. Kunze, P. Widmann, and R. H. Schmitt, "Adaptive process monitoring for laser micro structuring of electrical steel using optical coherence tomography with non-colour corrected lenses," Procedia CIRP 94, 748–752 (2020).
17. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention (MICCAI) (Springer LNCS, 2015), Vol. 9351, pp. 234–241.