Modeling and forecasting the dynamics of complex systems, such as moderate pressure capacitively coupled plasma (CCP) systems, remains a challenge due to the interactions of physical and chemical processes across multiple scales. Historically, optimization for a given application would be accomplished via a design of experiment (DOE) study across the various external control parameters. Machine learning (ML) techniques show the potential to “forecast” process conditions not tested in a traditional DOE study and thereby allow better optimization and control of a plasma tool. In this article, we have used standard DOE as well as ML predictions to analyze I-V data in a moderate-pressure CCP system. We have demonstrated that supervised regression ML techniques can be a useful tool for extrapolating data even when a plasma system is undergoing a transition in the heating mode, in this case from the alpha to gamma mode. Classification analysis of control parameters is another possible application of ML techniques that can be deployed for system control. Here, we show that given a large set of measured data, the models can identify the gas ratio in the feed gas as well as correctly identify the operating pressure and electrode gap in almost all the cases.


Due to their wide use in the semiconductor industry, capacitively coupled plasmas (CCPs) are among the most commonly used laboratory plasmas. CCPs are commonly categorized based on their operational pressure and the resulting “heating mode.”1–5 A low-pressure discharge typically falls within the range of tens of μTorr to tens of mTorr, while the moderate pressure range extends from 1 to 100 Torr.5,6 These two pressure regimes exhibit notable distinctions regarding breakdown, heating mechanisms, and sustaining the plasma discharge.3–6 Although the general pressure dependence of CCPs is known, studies of these systems in the moderate pressure regime are limited. This is in part because “traditional” diagnostic tools, such as Langmuir probes, do not work at these moderate pressures.7,8 Nonetheless, such moderate pressure CCP systems are widely utilized in industrial applications, including carbon nanotube and diamond-like carbon deposition processes,9 the flat panel display and solar panel fabrication industries,10 as well as an active medium for CO2 lasers.5 Despite their widespread use, a lack of understanding of radio frequency (RF) CCP discharges in these pressure ranges has resulted in most designs being based on empirical studies.5

Modeling and forecasting the dynamics of complex systems, such as moderate pressure CCP systems, remains a challenge due to the interactions of physical and chemical processes across multiple scales. Observational data, including multifidelity data from sensors, can provide valuable insights, but integrating it into existing models is difficult.11 Under some conditions, it is possible to deploy a large suite of diagnostics and via those results develop effective models of the plasma system. Under other conditions, such as moderate or high pressures, traditional diagnostics (Langmuir probes, etc.) will not function correctly, and detailed models of the plasmas are harder to develop. On the other hand, ML approaches, particularly deep learning, can extract features from massive amounts of data. Machine learning proves exceptionally valuable in scenarios where we lack a comprehensive theory, yet we seek to discern meaningful trends. Essentially, machine learning automates the scientific method,12 mirroring the sequence of hypothesis generation, testing, and either rejection or refinement. This technology equips us with a neutral toolkit for streamlining the process of discovery. Consequently, it is no wonder that machine learning is presently catalyzing revolutions across numerous domains, including science, technology, business, and medicine.13 

Machine learning holds the potential to expand our understanding of laboratory plasmas operating under conditions that are largely unexplored.14–17 The influence of ML in the realm of low-temperature plasmas (LTPs) is especially noteworthy for emerging applications such as plasma treatment in microelectronics production, quantum materials processing, LTP-based advancements in the chemical industry, as well as in medicine and biotechnology.18 Machine learning and data-driven approaches show promise in enhancing plasma research by addressing challenges in modeling plasma–surface interactions, enabling real-time diagnostics, and developing predictive controllers for efficient and automated LTP treatments on complex biological surfaces.19 So far, they have been used to accurately predict plasma parameters like density and electron temperature in LTPs20,21 as well as in fusion plasma research.22,23 In plasma medicine research, ML combined with standard dosimetric techniques has been found to offer a highly tunable and beneficial dose rate of LTPs for controlled radiobiological effects.24 

LTPs, particularly those used in the semiconductor industry, are complex environments that are influenced by many external “control parameters.”25,26 These control parameters include external power (power deposition method, power level, supplied power frequency); neutral gas (species, pressure, flow rate); and chamber geometry/materials. For many years, researchers deploying plasma systems in the semiconductor industry made use of design of experiment (DOE) studies to ascertain control parameter regimes that would result in the effective processing of a wafer or other substrates. Specifically, DOE studies have been used to narrow in on the “correct parameters” needed in a large number of processes, including hydrophilicity of fabric,27 gas utilization in semiconductor manufacturing,28 wire bonding,29 and many others. A basic review of DOEs and how they are used is given in the online NIST/SEMATECH handbook.30 DOE studies do not allow one to build models of a process—just find which combination of parameters will give rise to the “proper” processing of a wafer. Early results with ML already show promise in bridging this gap.18,31–33

In this article, we will examine the use of ML to predict trends in the heating mode transition4,5,34–36 of moderate-pressure CCPs, specifically between 0.1 and 3.5 Torr for argon, nitrogen, and oxygen plasmas. Levitski34 was the first to point out this transition between two distinct but stable regimes in a capacitive RF discharge in the moderate pressure regime and named them alpha and gamma mode plasmas after the Townsend ionization coefficients.4 Here, we will make use of the data collected from an experimental study36 of the current–voltage (I–V) characteristics at 0.5–2.5 Torr. We will show results and compare the prediction accuracy of different ML models for both supervised regression and supervised classification approaches. We will start by describing the experimental system and related diagnostic tools in Sec. II. This will include a review of how the measured data are analyzed to arrive at our database. In Sec. III, we will describe the DOE and present the resultant structure of the database that will be used in our ML study. In Sec. IV A, we will deploy four common ML regression analysis models to examine our experimental data. We will explore the efficacy of various ML models on the resultant I–V data sets in predicting I–V data under varying conditions. In Sec. IV B, we will use ML to explore the classification of measured data based on their control parameters. Finally, we will provide our conclusions in Sec. V.

The experiments in this study were conducted using the modified gaseous electronics chamber (mGEC).37–41 The mGEC reactor’s design has been discussed in detail by Goeckner et al.37 Originally, the mGEC had an inductively coupled plasma (ICP) source, which was later converted into a CCP source.38 A general schematic of the CCP system is depicted in Fig. 1. The plasma-facing powered and grounded aluminum electrodes are surrounded by grounded electrode shields, maintaining a gap of about 2.5 mm between the electrode and shield. The powered electrode measures 11.4 cm in diameter, while the grounded electrode is 15 cm in diameter. The gap between them is adjustable from 2 to 12.5 cm. Although the mGEC has internal walls to control chamber diameter, they were not utilized in these studies.

FIG. 1.

Sketch of experimental setup used for this study. Details of the plasma system can be found in (Refs. 37 and 38).


As is shown in Figs. 1 and 2, an RF signal generator (Keysight 33600 waveform generator) produced a 13.56 MHz signal, which was amplified by an ENI A-300 RF Power Amplifier to create the input power. Baseline measurements of supplied and reflected power were taken with a Bird Model 43 wattmeter. The power then passed through an L-type match network, equipped with two adjustable capacitors, which were set to minimize the reflected power, before reaching the powered electrode. The load encompassed the plasma, DC bias circuit, current and voltage probes (I–V), the powered electrode, the 50 Ω transmission line, and the grounded chamber with a grounded electrode/chuck.

FIG. 2.

Circuit elements of the I–V measurement experiment. Measured parasitic impedances at Z1, Z2, Zg1, and Zg2 are given in Table I.

TABLE I.

Measured parasitic impedances Z1, Z2, Zg1, and Zg2. These values were used to determine the currents and voltages at the electrode faces from the measured values.

Frequency (MHz)  Z1 (Ω)  Z2 (Ω)  Zg1 (Ω)  Zg2 (Ω)
13.56 28.97i 4.96−113.87i 1.1+11.95i −71.79i 
27.12 0.12+7.6i 2.39−18.68i 1.1+23.91i −35.89i 
40.68 11.48+81.83i 2.52+35.96i 1.1+35.86i −23.93i 

As is also shown in Figs. 1 and 2, the DC self-bias, current, and voltage were measured on the 50 Ω transmission line between the matching network and powered electrode. The DC self-bias measurement setup featured an 84 μH choke followed by a capacitor to ground for measuring the bias. The current probe utilized a Rogowski coil (Pearson Electronics 2877), while the voltage probe was built in-lab by capacitively coupling the transmission line’s powered lead through Teflon. An identical Rogowski coil measured the current through the grounded electrode. High-speed data acquisition was performed using a Teledyne LeCroy HDO 6104B oscilloscope with 12-bit vertical resolution and 10^10 samples per second. To enhance voltage probe sensitivity to plasma sheath harmonics, it was connected to the oscilloscope’s 50 Ω input. A fast Fourier transform (FFT) analysis discerned the fundamental and higher harmonics within the waveforms, crucial for power and impedance calculations of the sheath and plasma bulk.

Calibrating both the current and voltage probes involved obtaining amplitude and phase factors as functions of frequency, including the relative phase. The oscilloscope’s 50 Ω input provided a known resistive load for probe calibration. Calculating the I–V magnitude and phase at the electrode relied on FFT data from the probes and the parasitic impedances listed in Table I. The electrical length of the transmission line between the probe and electrode, as measured by a network analyzer, was accounted for when reconstructing the I–V waveform at the powered electrode. Additionally, an equivalent circuit (see Fig. 2) incorporating the measured parasitics (Table I) was used to calculate the current and voltage at the electrodes from the raw data at the probes. The methods used for the parasitic impedance and delay measurements and for the calibration are described by Press.41 Typical measured and reconstructed I–V traces are shown in Fig. 3.
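As an illustration of the FFT step described above, the following is a minimal MATLAB sketch (not the authors' code) of how the amplitude and phase of the first three harmonics of 13.56 MHz can be pulled from a sampled trace. The synthetic waveform, sample rate, and variable names are assumptions for the example only; in practice the corrected probe waveforms would be used.

```matlab
f0   = 13.56e6;                 % drive frequency (Hz)
nppc = 738;                     % ~738 samples per RF cycle, as in the text
fs   = nppc*f0;                 % effective sample rate for this example
t    = (0:100*nppc-1)/fs;       % exactly 100 RF cycles (avoids spectral leakage)
v    = 300*cos(2*pi*f0*t) + 25*cos(2*pi*2*f0*t - 1.2);   % synthetic test trace

N   = numel(v);
V   = fft(v)/N;                 % normalized two-sided spectrum
f   = (0:N-1)*(fs/N);           % frequency axis (Hz)
amp = zeros(1,3);  phs = zeros(1,3);
for h = 1:3
    [~, k] = min(abs(f - h*f0));    % FFT bin nearest the h-th harmonic
    amp(h) = 2*abs(V(k));           % single-sided peak amplitude
    phs(h) = angle(V(k));           % phase (rad) relative to the record start
end
```

Running this recovers amplitudes of 300 and 25 and a phase of −1.2 rad for the second harmonic, confirming the bookkeeping of the harmonic extraction.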

FIG. 3.

Example (a) RF voltage and (b) RF current traces measured at the probe and reconstructed at the electrode by correcting for parasitic impedances and transmission delay for a 0.5 Torr argon discharge.

Both electrodes’ currents comprise conductive and displacement components. The conduction current originates from the faster-moving electrons, which can track the rapid potential shifts induced by the RF field because of their lighter mass compared to ions. As the electrons exhibit a roughly Maxwellian velocity distribution, the electron density at the electrode and the electron current ($I_e$) adhere to the Boltzmann relation. This leads to an exponential decay in the electron density and current as the potential in the sheath drops below the plasma potential,42

$$I_e = I_0 \exp[eV(t)/k_B T_e], \tag{1}$$

where $T_e$ is the electron temperature, $k_B$ is Boltzmann’s constant, and $I_0$ is a function of the bulk electron density and temperature.42 $V(t)$ is the voltage across the sheath ($V(t) < 0$) and is given by

$$V(t) = V_{dc} + V_1 \cos(\omega t). \tag{2}$$

In this context, $V_{dc}$ represents the DC voltage and $V_1 \cos(\omega t)$ signifies the fluctuating voltage across the sheath, part of which is the applied RF voltage. Electrons react to the dynamic field, while ions are influenced solely by the average potential. Consequently, a predominantly DC current $I_{ion}$ arises from the ions, dictated by the electrode area $A$, the ion density $n_i(x)$, and the ion velocity $u_i(x)$ at distance $x$ from the powered electrode,42

$$I_{ion} = e\,n_i(x)\,u_i(x)\,A. \tag{3}$$

Thus, the total conduction current $I_c(t)$ is given by

$$I_c(t) = I_0 \exp[eV(t)/k_B T_e] + I_{ion}, \tag{4}$$

where $I_0$ can be determined via Lieberman’s model of an RF sheath.43,44 Because the temporal change in the voltage across the sheath is determined by the applied voltage, the temporal change in the conduction current across the sheath will also follow the applied voltage. This implies that at the electrode

$$I_c(t) \propto \cos(\omega t). \tag{5}$$

Likewise, the displacement current stems from the varying sheath electric field. By exploiting the property that the derivative of an even sinusoidal function is odd, one can dissect the overall current into its conductive and displacement constituents.42 This observation becomes apparent when considering the definition of the displacement current,

$$I_d(t) = \epsilon_0 A \frac{dE(t)}{dt} \tag{6}$$
$$\approx \epsilon_0 A \frac{d}{dt}\left[\frac{V(t)}{s(t)}\right] \tag{7}$$
$$= \frac{\epsilon_0 A}{s(t)} \frac{dV(t)}{dt} - V(t)\,\frac{\epsilon_0 A}{s(t)^2} \frac{ds(t)}{dt}. \tag{8}$$

With $E$ as the electric field, $s$ as the sheath thickness, and $A$ as the electrode’s surface area, if $V(t)$ takes the form of an even sinusoidal function, $I_c$ is also even, while $I_d$ is odd, assuming that the sheath thickness is synchronized with the applied voltage.45 Thus, if $V(t) = V_0 \cos(\omega t)$ and the total current is $I(t) = I_0 \cos(\omega t - \nu)$, with $\nu$ being the phase shift, we can write the even and odd components of $I(t)$ as

$$I_c(t) = I_0 \cos(\nu) \cos(\omega t), \tag{9}$$
$$I_d(t) = I_0 \sin(\nu) \sin(\omega t). \tag{10}$$
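To make the even/odd decomposition concrete, the short MATLAB sketch below (ours, not the authors') splits a synthetic total fundamental current of amplitude $I_0$ and phase shift $\nu$ into the conduction and displacement parts of Eqs. (9) and (10); the numerical values are placeholders.

```matlab
f0 = 13.56e6;  w = 2*pi*f0;
t  = (0:737)/(738*f0);            % one RF cycle, ~738 samples as in Sec. III
I0 = 2.0;  nu = 1.1;              % example amplitude (A) and phase shift (rad)
Itot  = I0*cos(w*t - nu);         % total current at the electrode
Icond = I0*cos(nu)*cos(w*t);      % even component: conduction current, Eq. (9)
Idisp = I0*sin(nu)*sin(w*t);      % odd component: displacement current, Eq. (10)
max(abs(Itot - (Icond + Idisp)))  % ~0 (round-off): the two parts sum to Itot
```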

A study of a moderate pressure plasma can consist of an almost unlimited number of experiments involving many adjustable control parameters, such as feed gas, power, pressure, interelectrode gap, etc. To understand the effects of these control parameters, currents and voltages (I–V) were measured at combinations of four control parameters: applied power (10–70 W), electrode gap (20–28 mm), operating pressure (0.5–2.5 Torr), and ratios of feed gases (Ar/N2, Ar/O2, N2/O2). Specific combinations of these parameters were determined using a commercial software package (JMP). The order in which the runs were taken was randomized. The experiments were repeated four times, allowing for either the direct measurement or calculation of the parameters shown in Table II. In Table III, we show the p-values of the ten primary I–V quantities vs the six external control parameters as calculated using JMP. p-values are a measure of the probability that a value is due to random chance.46 In general, p < 0.05, or a 5% chance of random occurrence, is considered statistically significant. As seen in Table III, variation in the electrode gap was found to have an insignificant correlation with all the measured quantities, and thus the gap was kept fixed at 24 mm for subsequent experiments.
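For readers who want to reproduce this type of screening outside JMP, the following MATLAB sketch fits a main-effects linear model to one response and reports the coefficient p-values, which is the quantity tabulated in Table III. The run table, column names, and synthetic response used here are ours, not the authors' data.

```matlab
rng(1);  n = 60;                                       % synthetic screening set
T = table(randi([0 100],n,1), randi([0 100],n,1), 0.5 + 2*rand(n,1), ...
          20 + 8*rand(n,1), 10 + 60*rand(n,1), 'VariableNames', ...
          {'ArFlow','N2Flow','Pressure','Gap','Power'});
T.Vdc = -0.5*T.Power - 20*T.Pressure + 2*randn(n,1);   % placeholder response
mdl = fitlm(T, 'Vdc ~ ArFlow + N2Flow + Pressure + Gap + Power');
disp(mdl.Coefficients(:, 'pValue'))                    % p < 0.05 => significant
```

In this toy example, only power and pressure come out significant, mirroring the kind of pattern summarized in Table III.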

TABLE II.

The 34 directly measured and calculated parameters in this study.

No. Parameter Description
Measured parameters
1 VDC DC self-bias
2 Vrf,1 Total 1st harmonic peak to peak voltage
3 Vrf,2 Total 2nd harmonic peak to peak voltage
4 Vrf,3 Total 3rd harmonic peak to peak voltage
5 Irf,1 Total driven electrode peak to peak current (1st harmonic)
6 Irf,2 Total driven electrode peak to peak current (2nd harmonic)
7 Irf,3 Total driven electrode peak to peak current (3rd harmonic)
8 Ignd,1 Total ground electrode peak to peak current (1st harmonic)
9 Ignd,2 Total ground electrode peak to peak current (2nd harmonic)
10 Ignd,3 Total ground electrode peak to peak current (3rd harmonic) 
11 ν1,p Phase difference at the probe (1st harmonic) 
12 ν2,p Phase difference at the probe (2nd harmonic) 
13 ν3,p Phase difference at the probe (3rd harmonic) 
Calculated parameters 
14 Icond,1 Driven electrode peak to peak conduction current (1st harmonic) 
15 Icond,2 Driven electrode peak to peak conduction current (2nd harmonic) 
16 Icond,3 Driven electrode peak to peak conduction current (3rd harmonic) 
17 Idisp,1 Driven electrode peak to peak displacement current (1st harmonic) 
18 Idisp,2 Driven electrode peak to peak displacement current (2nd harmonic) 
19 Idisp,3 Driven electrode peak to peak displacement current (3rd harmonic) 
20 R1 Resistive impedance at the driven electrode (1st harmonic) 
21 R2 Resistive impedance at the driven electrode (2nd harmonic) 
22 R3 Resistive impedance at the driven electrode (3rd harmonic) 
23 X1 Reactive impedance at the driven electrode (1st harmonic) 
24 X2 Reactive impedance at the driven electrode (2nd harmonic) 
25 X3 Reactive impedance at the driven electrode (3rd harmonic) 
26 P1,e Average power at the driven electrode (1st harmonic) 
27 P2,e Average power at the driven electrode (2nd harmonic) 
28 P3,e Average power at the driven electrode (3rd harmonic) 
29 P1,p Average power at the probe (1st harmonic) 
30 P2,p Average power at the probe (2nd harmonic) 
31 P3,p Average power at the probe (3rd harmonic) 
32 ν1,e Phase difference at the driven electrode (1st harmonic) 
33 ν2,e Phase difference at the driven electrode (2nd harmonic) 
34 ν3,e Phase difference at the driven electrode (3rd harmonic) 
TABLE III.

p-values for control parameter effects on measured current and voltages. It is noted that the observed p-value for the gap is always large, indicating that the gap, over the range studied, is not important in the results. In light of this, the gap was not examined in many of the additional parameter sweeps.

Control  Ar flow (%)  N2 flow (%)  O2 flow (%)  Pressure (Torr)  Gap (mm)  Power (W)
Powered electrode 
VDC 0.0024 0.0311 0.4070 <0.0001 0.7025 <0.0001 
Vrf,1 0.0012 0.0043 0.7295 0.0252 0.7869 <0.0001 
Vrf,2 0.0001 0.0444 0.0971 0.0007 0.9940 <0.0001 
Vrf,3 0.9278 0.8050 0.8758 <0.0001 0.8299 <0.0001 
Irf,1 <0.0001 0.0083 0.0911 0.9634 0.8046 <0.0001 
Irf,2 0.0001 0.0450 0.0970 0.0007 0.9933 <0.0001 
Irf,3 0.8136 0.7093 0.8912 <0.0001 0.8446 <0.0001 
Ground electrode 
Ignd,1 0.0009 0.0225 0.3313 0.0031 0.1812 <0.0001 
Ignd,2 0.3328 0.4769 0.7984 <0.0001 0.3369 <0.0001 
Ignd,3 0.0585 0.1756 0.5999 0.0012 0.5578 <0.0001 
TABLE IV.

mGEC I–V database structure vs the control parameters. Sets of data collected over time starting with a level 2 DOE set (Dataset 1). A more comprehensive set of data was later collected based on the DOE study (Dataset 2). Finally, a third set of data was collected for interpolation studies (Dataset 3).

Dataset  Factorial  No. of runs  Mixture  Flow ratio (%)  Pressure (Torr)  Power (W)  Gap (mm)
1  Level 2  …  Ar:O2, Ar:N2, N2:O2  0:100, 33:67, 67:33, 100:0  0.5, 1.5, 2.5  10, 25, 40, 55, 70  20, 24, 28
2  Full  …  Ar:O2, Ar:N2, N2:O2  0:100, 33:67, 67:33, 100:0  0.5, 1.5, 2.5  10, 25, 40, 55, 70  24
3  Full  …  Ar:O2, Ar:N2, N2:O2  0:100, 33:67, 67:33, 100:0  1, 2  10, 25, 40, 55, 70  24

Once the initial screening was completed, comprehensive experiments were conducted across power, pressure, and gas mixtures. The DC self-bias as well as the magnitudes and phases of the electrode voltage and currents were measured at the first three harmonics as a function of these control parameters. The power, impedance, conduction current, and displacement current were calculated from the measured magnitudes and phases. Experiments were run for all combinations of pressure (0.5, 1.5, and 2.5 Torr) and nominal power (10, 25, 40, 55, and 70 W, read from the wattmeter) for pure argon, nitrogen, and oxygen as well as for different mixtures of them; see Table IV. Each experiment was run a total of four times, and 13 full RF cycles were processed for each run. Each cycle consisted of about 738 data points acquired by the oscilloscope, giving a phase resolution of about one-half degree. A 13 × 4 matrix (cycles × runs) was built for each parameter of interest. For each of the four iterations, the average parameter values as well as the error bars were calculated by taking the mean and the standard deviation of the 13 acquired cycles. To study interpolation using machine learning, additional data were later collected at 1 and 2 Torr for the same gap, gas mixtures, and power ranges. The data at 1 and 2 Torr were collected only once and postprocessed in a similar way.
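A minimal MATLAB sketch of that per-run averaging is shown below, assuming P holds one measured quantity over the 13 processed cycles (rows) and the 4 repeated runs (columns); the placeholder values are ours.

```matlab
P      = 1 + 0.02*randn(13, 4);   % 13 cycles x 4 runs (placeholder data)
P_mean = mean(P, 1);              % 1x4 run-by-run averages
P_err  = std(P, 0, 1);            % 1x4 error bars (std over the 13 cycles)
```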

The results of these experiments were used to construct three separate matrices from the available datasets (Datasets 1, 2, and 3) for machine learning and statistical analysis with JMP. The first matrix consists of the 10 measured phase-independent parameters, namely, the voltage and current magnitudes at all three harmonics and the DC bias (parameters 1–10 in Table II), from all three sets in Table IV. For each parameter of the first two datasets, the mean of the four runs was used. The second matrix has all 34 parameters from all three sets above, including all four runs for Datasets 1 and 2 (size 894 × 34). This is the comprehensive matrix, including all experimental data, phase dependent or not. Finally, the third matrix consists of all 34 parameters, with only the run delivering the maximum electrode power of the four runs for the first two sets (size 201 × 34). This matrix was prepared to represent the “best” of the four runs since the power loss in the transmission line/match network was minimal for this run.

Empirical modeling, characterized by its focus on understanding observed data patterns, is a cornerstone of scientific exploration. It delves into datasets, uncovers hidden trends, and generates insights into the underlying processes. Through statistical techniques, visualization, and exploratory data analysis, empirical modeling aids in developing hypotheses and theories. It is particularly useful in initial data exploration. On the other end of the spectrum, deterministic modeling thrives on well-defined rules and equations. By simulating real-world processes and interactions, deterministic models offer precise insights into cause-and-effect relationships. Despite the precision of deterministic modeling, its accuracy is often limited by computational resources. Unlike empirical modeling, deterministic modeling is often susceptible to limitations in the underlying approximations of the model itself, which makes it unwieldy especially where the physics is not well understood.

Machine learning has the potential to change how industrial plasmas are optimized for a given use.14 In general, ML relies on advanced statistical analysis of data sets to generate insights into underlying processes. A number of these analysis techniques have been developed and are readily available. In our ML-based examination of moderate pressure CCP discharges, we will make use of both “classification” and “regression” ML techniques available in MATLAB (R2023a). Specifically, MATLAB has a built-in set of both “supervised” (for labeled input, X, and output, Y, data) and “unsupervised” (for unlabeled data) ML techniques. Supervised learning results in a mapping from the input variables to the output values via $Y = f(X)$. Supervised techniques can be further divided into “classification” and “regression” techniques. Classification techniques are those that seek to apply a label to a given input (e.g., cat vs dog picture), while regression provides a numerical output value.

Four different ML-supervised regression models available in MATLAB were compared to each other for our studies:

  • Deep Neural Network (DNN) regression with Levenberg–Marquardt backpropagation (MATLAB function trainlm).47,48

  • Tree Ensemble (TE) regression49–51 with an ensemble of decision trees (MATLAB function treebagger).50,52

  • Naive Bayes (NB) (MATLAB function fitrgp).53–56 

  • Support Vector (SV) regression model (MATLAB function fitrsvm).57–61 

Each of these regression techniques focuses on learning relationships between input features and output variables using labeled training data. This approach enables the creation of predictive models capable of making accurate forecasts on new, unseen data. The strength of supervised regression lies in its adaptability to diverse scenarios. Through algorithms like Neural Networks, Tree Ensembles, Naive Bayes, and Support Vector Machines, it can capture intricate, nonlinear relationships present in data. By training on existing data, supervised regression extracts patterns, enabling it to generalize to new data while providing accurate predictions.

Neural Networks62 comprise layers of nodes that learn relationships by adjusting weights through iterative training. Neural Networks excel at handling complex, nonlinear data and are suitable for large datasets where nuanced relationships are key. Specifically, the Levenberg–Marquardt method is an algorithm that makes use of both the Gauss–Newton method and the steepest descent method to solve nonlinear least squares problems.47,48 On the other hand, TEs operate through ensemble learning, combining multiple decision trees to enhance predictive accuracy.63 Decision trees split data based on features, and Random Forests aggregate their outputs. This approach reduces overfitting and provides robustness, making it suitable for various data types and large datasets. Naive Bayes,64 another widely used regression method, is a probabilistic algorithm rooted in Bayesian probability. Despite its assumption of feature independence, Naive Bayes performs remarkably well in text and categorical data analysis. It is particularly efficient for high-dimensional data and quick predictions. Finally, Support Vector Machines aim to find the best hyperplane that separates data points or predicts a continuous value.58 They excel in scenarios with distinct class separation or complex feature interactions, often utilizing kernel functions to capture nonlinear relationships. Support Vector Machines prioritize generalization and perform well even in high-dimensional spaces.

These regression models were used to train and predict the 34 different parameters shown in Table II vs the plasma control parameters shown in Table IV. The “training” serves to adjust the weights of each input parameter, thereby adjusting the results reaching the output layer. The input variables/parameters were standardized before training, i.e., the data for each variable were rescaled to have a mean of 0 and a standard deviation of 1. Additionally, the model hyperparameters65 were optimized during training. Hyperparameters are essential settings or configurations that are not learned from the training data but are predefined by the model developer. These parameters govern the speed, quality, and performance of the machine-learning model during the training process. Examples of hyperparameters include learning rates in gradient descent, batch size, the depth of decision trees in TEs, and the number of hidden layers in neural networks. The choice of hyperparameters significantly impacts a model’s ability to learn and generalize to new, unseen data. Therefore, hyperparameter tuning, which involves systematically optimizing these settings, is a crucial step in building effective machine-learning models. For example, the TE model was optimized with Bayesian optimization66 using the quantile error.67 The predictor importance measure “OOBPermutedPredictorDeltaError” of the “TreeBagger” function in MATLAB was used to infer the minimum number of inputs required when predicting high-error-bar parameters. On the other hand, the neural network model was optimized by cross-validating on 15% of the data and testing on 15% of the data, while the rest was used for training. Additionally, the depth of the neural network for each parameter was determined by finding the number of hidden layers, between 10 and 50, that gave the least mean squared error (MSE) during training. An example of the MSE variation with the number of hidden layers is shown in Fig. 4.
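The snippet below is an illustrative MATLAB sketch of the two training choices just described, using synthetic stand-ins for the standardized inputs X and one target y; it is not the authors' code. The “10 to 50” sweep is interpreted here as the size of a single hidden layer of feedforwardnet, which is one possible reading of the text.

```matlab
rng(2);
X = randn(200, 6);                              % stand-in for standardized inputs
y = X*[1 -2 0.5 0 3 1]' + 0.1*randn(200, 1);    % stand-in for one target parameter

% Tree ensemble with out-of-bag permuted predictor importance:
te  = TreeBagger(200, X, y, 'Method', 'regression', ...
                 'OOBPredictorImportance', 'on');
imp = te.OOBPermutedPredictorDeltaError;        % importance ranking of the inputs

% Levenberg-Marquardt (trainlm) network with a swept hidden-layer size:
bestMSE = inf;
for h = 10:10:50                                % coarser grid than the paper's 10-50
    net = feedforwardnet(h, 'trainlm');
    net.divideParam.trainRatio = 0.70;          % 70/15/15 train/validation/test split
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;
    net.trainParam.showWindow  = false;         % suppress the training GUI
    [net, tr] = train(net, X', y');             % samples are columns for train()
    if tr.best_vperf < bestMSE, bestMSE = tr.best_vperf; bestNet = net; end
end
```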

FIG. 4.

Example of MSE vs the number of hidden layers when training displacement current (Idisp,1) data for 2.5 Torr predictions.


Both interpolation and extrapolation of the data sets were studied using our supervised regression models. For interpolation, the models were trained at 0.5, 1.5, and 2.5 Torr and tested at 1 and 2 Torr. For extrapolation, the prediction accuracy was checked at both the high and low ends of the pressure range, namely, at 2.5 and 0.5 Torr. For the extrapolation to 2.5 Torr, the models were trained with data at the lower pressures. Similarly, for the extrapolation to 0.5 Torr, the models were trained with data at the higher pressures. This was done to check how the models perform at a pressure where the physics may be different from that where they were trained. For predicting the phase-independent parameters, only the control parameters (the gas ratio, electrode gap, pressure, and nominal power) were used as inputs. Since these parameters are known to have a low error bar,36 the mean values of the measured parameters from Matrix 1, as shown in Table V, were used for training and testing the machine learning models. On the other hand, all available data from the four repeated experiments (Matrix 2) were used to train and predict the phases and phase-dependent parameters, as they may have a high error bar. Training the models with data from all experiments enables them to learn from an increased number of observations as well as from additional input parameters, the values of which may vary between repeated runs, unlike the control parameters. These additional inputs can be used provided they can be predicted accurately beforehand. This allows one to use cumulative inputs, with the help of the predictor importance ranking by the TE model, until the desired prediction accuracy is achieved.
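The pressure-based splits can be expressed compactly in MATLAB; the sketch below builds a small synthetic run table (our stand-in for Matrix 1), trains at 0.5, 1.5, and 2.5 Torr, tests at 1 and 2 Torr, and scores the fit with Pearson's r as in Table VI. The column names and the response are illustrative only.

```matlab
rng(3);  n = 300;
levels = [0.5 1 1.5 2 2.5];
P = levels(randi(5, n, 1));  P = P(:);                 % random pressure levels
T = table(100*rand(n,1), 100*rand(n,1), 100*rand(n,1), P, 24*ones(n,1), ...
          10 + 60*rand(n,1), 'VariableNames', ...
          {'ArFlow','N2Flow','O2Flow','Pressure','Gap','Power'});
T.y = 5*T.Pressure + 0.1*T.Power + 0.3*randn(n,1);     % synthetic response

ctrl     = {'ArFlow','N2Flow','O2Flow','Pressure','Gap','Power'};
trainIdx = ismember(T.Pressure, [0.5 1.5 2.5]);        % swap the two masks to study
testIdx  = ismember(T.Pressure, [1 2]);                % extrapolation instead
Xtr = T{trainIdx, ctrl};   ytr = T.y(trainIdx);
Xte = T{testIdx,  ctrl};   yte = T.y(testIdx);

mu = mean(Xtr);  sg = std(Xtr);  sg(sg == 0) = 1;      % guard the fixed-gap column
mdl   = TreeBagger(200, (Xtr - mu)./sg, ytr, 'Method', 'regression');
ypred = predict(mdl, (Xte - mu)./sg);
r     = corr(yte, ypred);                              % Pearson's r, as in Table VI
```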

TABLE V.

Matrices for ML and JMP studies.

Matrix 1: The 10 phase-independent parameters (e.g., voltage and current magnitudes at all harmonics, and DC bias) and the corresponding control parameters from all three datasets above. For the first two datasets, the mean of the four runs was used. Number of observations: 291.

Matrix 2: All 34 parameters (both phase independent and dependent) and the corresponding control parameters from all three datasets above, including all four runs for Datasets 1 and 2. Number of observations: 894.

Matrix 3: All 34 parameters (both phase independent and dependent) and the corresponding control parameters, with only the experiment corresponding to the maximum electrode power of the four runs for the first two sets. Number of observations: 201.

1. Prediction accuracy check with regression models

The phase-independent parameters were first examined by means of prediction with both interpolation and extrapolation. Matrix 1 was used to predict and check the fit for the parameters at 1 and 2 Torr. A total of 201 observations at 0.5, 1.5, and 2.5 Torr were used for training, with the control parameters (a 201 × 6 matrix) as inputs. Predictions were then checked against the remaining 90 independent observations at 1 and 2 Torr (see Table VI). Similarly, for extrapolation at 2.5 and 0.5 Torr, 216 observations were used for training, and predictions were checked against the remaining 75 independent observations (see Table VI). The voltages and currents to both the powered and grounded electrodes at the first and second harmonics were found to have a correlation coefficient of 0.95 or higher between the measured and predicted data for at least one model. The only exception was the second harmonic current to the ground electrode at 0.5 Torr, with a correlation coefficient of 0.84. On the other hand, predictions for the third harmonic values, despite their low error bars,36 were in most cases less accurate than the others (r < 0.95).

TABLE VI.

Prediction accuracy (Pearson’s r-coefficient) for phase-independent parameters (using Matrix 1) with control parameters as inputs. 201 observations were used for training the models with data at 0.5, 1.5, and 2.5 Torr, and then predictions were checked against the remaining 90 independent observations at 1 and 2 Torr. Similarly, predictions were checked against 75 independent observations for extrapolation at 2.5  Torr and 0.5 Torr with models trained from the 216 remaining observations.

Phase-independent parameters
            Interpolation (1–2 Torr)   Extrapolation (0.5 Torr)   Extrapolation (2.5 Torr)
Parameter   TE  NB  DNN  SV            TE  NB  DNN  SV            TE  NB  DNN  SV
VDC 0.98 0.97 0.95 0.97 0.89 0.94 0.32 0.92 0.96 0.88 0.86 0.86 
Vrf,1 0.99 0.99 0.99 0.99 0.98 0.99 0.99 0.96 0.99 0.99 0.98 0.97 
Vrf,2 0.99 0.99 0.96 0.98 0.93 0.94 0.95 0.97 0.99 0.99 0.98 0.94 
Vrf,3 0.92 0.92 0.93 0.94 0.94 0.94 0.96 0.97 0.91 0.88 0.63 0.82 
Irf,1 0.99 0.99 0.99 0.99 0.95 0.96 0.86 0.97 0.99 0.98 0.97 0.98 
Irf,2 0.99 0.99 0.97 0.99 0.93 0.94 0.82 0.96 0.99 0.98 0.97 0.96 
Irf,3 0.93 0.91 0.92 0.95 0.93 0.93 0.91 0.96 0.91 0.86 0.76 0.46 
Ignd,1 0.98 0.99 0.99 0.99 0.93 0.94 0.94 0.96 0.98 0.99 0.96 0.96 
Ignd,2 0.98 0.96 0.92 0.95 0.84 0.84 0.79 0.81 0.98 0.97 0.37 0.83 
Ignd,3 0.90 0.86 0.82 0.79 0.68 0.76 0.78 0.82 0.92 0.95 0.38 0.84 

The measured phases were found to have a smaller signal-to-noise ratio (SNR) than the phase-independent parameters due to the tuning variability of the matching network.36 Here, the SNR is the average measured parameter value divided by the standard deviation. The accuracy of prediction in such cases could be related to the error bar magnitude of that parameter. Hence, interpolation and extrapolation were studied separately for the phase-dependent parameters, specifically the phase difference, conduction current, and displacement current at the powered electrode, using all available data from Matrix 2. For interpolating these parameters, predictions were checked against 90 independent observations at 1 and 2 Torr (see Table VII), with models trained on the remaining 804 observations at 0.5, 1.5, and 2.5 Torr. For extrapolation at 2.5 and 0.5 Torr, 594 observations were used for training and the 300 remaining independent observations for testing (see Table VII). When using only the control parameters as inputs (Table VII), only the displacement current predictions were accurate, with a correlation coefficient >0.95 in most cases. Although the displacement current depends on phase, its low error bar36 could explain its prediction accuracy with only the control parameters as inputs. Because the value of the sine is small at the measured phases of the cycle [see Eq. (10)], the displacement current magnitude is less sensitive to changes in phase, resulting in a lower error bar. Nonetheless, the prediction accuracy increased even more with the total current as an additional input. The first harmonic current to the powered electrode was chosen as the additional input since the TE model ranked it as the most important predictor for both the interpolation and extrapolation cases. Once the total and displacement currents were predicted accurately, high-error-bar parameters such as the conduction current and phase were predicted next. These predictions were less accurate with only the control parameters as inputs. However, with the total and displacement currents as additional inputs, the machine learning models were able to predict the conduction current and phase with high accuracy (Fig. 5). This is not surprising given that the phase shift and conduction current magnitude can be calculated in a single step once the total and displacement currents are known [see Eqs. (9) and (10)].
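The staged, “cumulative input” idea can be sketched as follows in MATLAB, continuing from the hypothetical table T, the ctrl list, and the trainIdx/testIdx masks of the earlier sketch. The columns Irf1, Idisp1, and Icond1 are synthetic stand-ins for the corresponding Table II parameters, added here only so the example runs.

```matlab
% Synthetic stand-ins for the measured columns (placeholders only):
T.Irf1   = 2*T.Pressure + 0.05*T.Power + 0.05*randn(height(T), 1);
T.Idisp1 = 0.8*T.Irf1 + 0.02*randn(height(T), 1);
T.Icond1 = T.Irf1 - T.Idisp1 + 0.02*randn(height(T), 1);

% Stage 1: predict the (low error bar) total current from the control parameters.
mdlI = TreeBagger(200, T{trainIdx, ctrl}, T.Irf1(trainIdx), 'Method', 'regression');
Ihat = predict(mdlI, T{testIdx, ctrl});

% Stage 2: append the total current as an input for the displacement current.
mdlD = TreeBagger(200, [T{trainIdx, ctrl}, T.Irf1(trainIdx)], ...
                  T.Idisp1(trainIdx), 'Method', 'regression');
Dhat = predict(mdlD, [T{testIdx, ctrl}, Ihat]);

% Stage 3: append both currents as inputs for the noisier conduction current.
mdlC = TreeBagger(200, [T{trainIdx, ctrl}, T.Irf1(trainIdx), T.Idisp1(trainIdx)], ...
                  T.Icond1(trainIdx), 'Method', 'regression');
Chat = predict(mdlC, [T{testIdx, ctrl}, Ihat, Dhat]);
```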

FIG. 5.

Extrapolation of the displacement current at the fundamental frequency and 2.5  Torr with (a) only control parameters and (b) with total current as additional input. Extrapolation of conduction current at the fundamental frequency and 2.5 Torr (c) with only control parameters, and (d) with total and displacement current as additional inputs. Extrapolation of electrode phase at the fundamental frequency and 2.5 Torr with (e) only control parameters, and (f) with total and displacement current as additional inputs. The legends indicate the Pearson’s r-coefficients on 300 test observations for the four ML models. The models were trained on the remaining 594 observations.

TABLE VII.

Prediction accuracy (Pearson’s r-coefficient) for phase-dependent parameters (using Matrix 2) with control parameters as inputs. Predictions were checked against 90 independent observations at 1 and 2 Torr, with models trained on the remaining 804 observations at 0.5, 1.5, and 2.5 Torr. For extrapolation at 2.5 and 0.5 Torr 300 observations were used for testing while the remaining 594 observations were used for training.

            Interpolation (1–2 Torr)   Extrapolation (0.5 Torr)   Extrapolation (2.5 Torr)
Parameter   TE  NB  DNN  SV            TE  NB  DNN  SV            TE  NB  DNN  SV
First harmonic phase-dependent parameters 
Icond,1 0.67 0.67 0.65 0.69 0.59 0.81 0.05 0.84 0.82 0.91 0.19 0.85 
Idisp,1 0.99 0.96 0.99 0.92 0.95 0.96 0.88 0.97 0.99 0.96 0.94 0.89 
R1 0.92 0.88 0.84 0.77 0.84 0.85 0.49 0.83 0.93 0.92 0.29 0.84 
X1 0.76 0.8 0.74 0.89 0.88 0.85 0.37 0.92 0.78 0.73 0.48 0.76 
P1,e 0.73 0.72 0.71 0.84 0.82 0.96 0.42 0.95 0.81 0.92 0.65 0.93 
P1,p 0.86 0.85 0.78 0.94 0.91 0.98 0.5 0.96 0.9 0.94 0.1 0.95 
ν1,e 0.36 0.29 0.29 0.46 0.37 0.57 0.21 0.59 0.59 0.56 0.17 0.7 
ν1,p 0.69 0.64 0.61 0.7 0.35 0.32 <0 0.32 0.86 0.76 0.61 0.63 
Second harmonic phase-dependent parameters 
Icond,2 0.91 0.98 0.94 0.78 0.9 0.94 0.93 0.92 0.99 0.97 0.91 0.88 
Idisp,2 0.93 0.99 0.98 0.82 0.92 0.94 0.88 0.97 0.99 0.94 0.75 0.94 
R2 0.33 0.38 0.35 0.32 0.32 0.35 0.11 0.36 0.56 0.45 0.10 <0 
X2 <0 <0 <0 <0 0.63 0.46 <0 0.47 0.24 <0 <0 0.55 
P2,e 0.92 0.99 0.95 0.99 0.85 0.9 0.8 0.86 0.99 0.79 0.85 0.96 
P2,p 0.71 0.66 0.77 0.52 0.72 0.81 0.66 0.74 0.88 0.7 0.82 0.44 
ν2,e 0.38 0.42 0.31 0.42 0.26 0.24 0.05 0.31 0.69 0.44 <0 0.41 
ν2,p 0.28 0.31 0.05 0.33 0.34 0.16 0.18 0.28 0.21 0.48 0.14 0.07 
Third harmonic phase-dependent parameters 
Icond,3 0.75 0.80 0.76 0.76 0.83 0.91 0.53 0.90 0.73 0.62 0.17 <0 
Idisp,3 0.85 0.86 0.92 0.87 0.93 0.94 0.83 0.95 0.90 0.78 0.55 0.82 
R3 0.05 <0 0.01 0.02 0.25 0.13 0.11 0.16 0.12 0.55 <0 0.41 
X3 0.4 0.44 0.4 0.53 0.41 0.27 0.02 0.37 0.37 0.33 0.09 0.47 
P3,e 0.75 0.76 0.51 0.69 0.82 0.83 0.73 0.89 0.77 0.01 0.08 0.75 
P3,p 0.6 0.58 0.54 0.51 0.65 0.83 0.17 0.8 0.62 <0 <0 0.65 
ν3,e 0.08 0.04 0.02 ≈0 0.25 0.17 <0 0.17 0.15 0.2 <0 0.39 
ν3,p 0.17 0.15 0.13 0.30 0.25 0.29 <0 0.32 0.11 0.24 <0 0.46 

2. Extrapolation beyond mGEC database with regression models

Based on the above effort, we recognize that ML can be used to extrapolate beyond the known results. Once the prediction accuracy is checked at the high and low ends of the pressure range, namely, at 2.5 and 0.5 Torr, one can use ML to further extrapolate data at pressures greater than 2.5 Torr as well as lower than 0.5 Torr, where experimental data are unavailable. To demonstrate this, Matrix 3 was used to extrapolate the voltage and current parameters at 3.5 and 0.1 Torr (Fig. 6). Predictions were made at 3.5 Torr with the model giving the best fit at 2.5 Torr. For each respective parameter, the best model was trained with all available data for that parameter at 0.5, 1.5, and 2.5 Torr. Similarly, predictions were made at 0.1 Torr using the best-fitting model at 0.5 Torr. The ML predictions at both 3.5 and 0.1 Torr were found to be consistent with the I–V trends with increasing/decreasing pressure. This can be seen from Fig. 6, where the predictions for the fundamental voltage and total fundamental current for pure argon, nitrogen, and oxygen at 0.1 and 3.5 Torr are compared both to the predictions and to the experimental I–V curves at 0.5, 1, 2, and 2.5 Torr.
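In code, this step amounts to training on every available pressure and querying the model at an unmeasured one; the MATLAB sketch below (again using the hypothetical table and ctrl list of the earlier sketches, with our column names) illustrates the idea. The authors instead selected, for each parameter, whichever of the four models fit best at 2.5 or 0.5 Torr.

```matlab
mdl    = TreeBagger(200, T{:, ctrl}, T.Irf1, 'Method', 'regression');
query  = [100 0 0 3.5 24 70];    % pure Ar, 3.5 Torr, 24 mm gap, 70 W (ctrl order)
Ihat35 = predict(mdl, query);    % predicted fundamental current beyond the data
```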

FIG. 6.

Experimental vs ML predictions of fundamental frequency current as a function of fundamental frequency voltage for (a) argon, (b) nitrogen, and (c) oxygen discharges. For extrapolations at 3.5 and 0.1 Torr, models with the highest r-coefficients at 2.5 and 0.5 Torr were used respectively for both current and voltage.


The predictions shown here are consistent with the transition from a low to moderate pressure discharge. This transition was proposed to be a consequence of the alpha-mode sheath breakdown due to an increased emission of secondary electrons into the bulk plasma35 and is accompanied by a sharp growth in the electron density and a rise in the steepness of the I–V curve.35 Following this, in Fig. 6, we see a similar rise in steepness in argon and oxygen between 0.1 and 2 Torr, with the rate of increase being much higher in argon. These predictions are consistent with the known alpha–gamma transitions for argon68 and oxygen.3 Additionally, between 2 and 3.5 Torr for argon and oxygen, we see no significant change in the I–V curves. Thus, ML predicts that once the transition is complete (p > 2 Torr), no significant change in the plasma production/ionization mechanisms occurs.

Five different classification models were used for comparison for the supervised classification studies. A linear discriminant analysis (LDA) was used for supervised classification in addition to the four models used for regression. LDA aims to find a linear combination of features or variables that best separates or discriminates between two or more classes or groups.69,70 It does this by modeling the distribution of each class, making assumptions about their underlying probability distributions (usually Gaussian), and then calculating the likelihood of a new data point belonging to each class based on these distributions. The key idea behind LDA is to maximize the between-class variance while minimizing the within-class variance, resulting in a set of discriminant functions. These functions are then used to project new data points into a lower-dimensional space, making it easier to classify them into their respective classes. LDA is particularly useful when dealing with multiple classes and can be a powerful tool for dimensionality reduction while preserving class separability.

The ML models were used to classify the data by three of the four control parameters, namely, the operating pressure, the gap, and the gas content of the plasma. The designated MATLAB functions for Neural Network, Tree Ensemble, Naive Bayes, Support Vector, and Linear Discriminant Analysis were “patternnet,” “fitcensemble,” “fitcnb,” “fitcsvm,” and “fitcdiscr,” respectively. For the classification studies, Matrix 2 was used to check the prediction accuracy for a given control parameter using all 34 parameters as well as the remaining control parameters as inputs. The models were randomly trained on 80% of all 894 observations and tested on the remaining 20% of the observations, independent of the training set. The models were optimized during the training. The Neural Network model was optimized by cross-validating on 15% of the data and testing on 15% of the data, while the rest was used for training. The number of hidden layers was 30. On the other hand, the shallow models were optimized by tuning all of the hyperparameters of the respective models. Moreover, the predictors were ranked based on their relative importance using the tree ensemble model. For a given control parameter, a standard least-squares fitting optimization was also done with JMP using the remaining control parameters as well as all the measured and calculated parameters listed in Table II as predictors. The predictors were ranked in descending order of their p-values. The JMP ranking was then compared to the parameter relative importance ranking by the TE model; see Fig. 7. Here, the parameter relative importance ranking generated by the TE model was found to be very different from the p-value (logworth) ranking generated by the combined least-squares fitting method using JMP. The least-squares fit method of analyzing DOE data has been used successfully for many years to find the optimal parameter space for material processing.27–30 Because the ML predictor importance ranking is radically different from the DOE p-value (logworth) ranking of the parameters, it seems unlikely that one will be able to use ML parameter relative importance ranking values to optimize processes. However, it is worth pointing out that neither DOE nor ML studies can produce an accurate ranking of the optimal parameter space for processing a wafer, as neither is based on a physical model of the discharge.
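A hedged MATLAB sketch of this classification setup is given below: an 80/20 stratified split, two of the five classifiers, and the overall accuracy. The feature matrix and labels are synthetic placeholders; the real study used the 34 parameters of Matrix 2.

```matlab
rng(4);  n = 894;
lv = [0.5 1 1.5 2 2.5];
p  = lv(randi(5, n, 1));  p = p(:);                       % true pressure of each run
X34    = [p*[1 0.5 2] + 0.2*randn(n, 3), randn(n, 31)];   % 34 stand-in features
labels = categorical(p);                                  % classification target

cv  = cvpartition(labels, 'HoldOut', 0.2);                % random, stratified 80/20 split
Xtr = X34(training(cv), :);   ytr = labels(training(cv));
Xte = X34(test(cv), :);       yte = labels(test(cv));

mdlTE  = fitcensemble(Xtr, ytr);                          % tree ensemble classifier
mdlLDA = fitcdiscr(Xtr, ytr);                             % linear discriminant analysis
accTE  = mean(predict(mdlTE,  Xte) == yte);               % fraction correctly classified
accLDA = mean(predict(mdlLDA, Xte) == yte);
```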

FIG. 7.

Predictor importance for pressure with JMP vs ML. For JMP, importance is measured as the logworth, i.e., −log10(p-value), whereas for ML it is measured as a weighted average of the importance over the tree learners in the ensemble (MATLAB function fitcensemble).


1. Results for classification analysis

Classification accuracy for the ML models was checked by plotting confusion matrices71 as well as receiver operating characteristic (ROC) curves.72 A confusion matrix (Tables IX–XI) shows a summary of the model’s predictions compared to the actual outcomes. Each table shows the number of experimental or “true” control parameters compared to the number of predicted control parameters. It is observed from these tables that the ML classification analysis is able to correctly classify the control parameters in >95% of the cases. On the other hand, a ROC curve (Figs. 8–10) is a graphical representation used to evaluate the performance of a binary classification model or a diagnostic test. It plots the true positive rate (TPR) against the false positive rate (FPR) across different threshold values for classifying the positive and negative classes. The area under the curve (AUC) measures the probability that a classifier will be more confident in correctly identifying a randomly chosen positive example as positive compared to a randomly chosen negative example. Thus, the closer the AUC is to 1, the better the classification, and an AUC of 1 implies a perfect classification by the model. The specific AUC values obtained for all of the ML models against all of the examined control parameters are given in Table VIII. It is observed that the AUC is close to 1 for almost all the cases. This implies that the models were able to predict the correct operating pressure, gas ratio, and electrode gap of a given set of measurements almost all the time. This gives high confidence that one can use these techniques in monitoring plasma systems for rapid fault detection.
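The evaluation behind Tables IX–XI and Figs. 8–10 can be sketched in MATLAB as follows, continuing from the classifiers of the previous sketch: a confusion matrix plus a one-vs-rest ROC/AUC per class computed from the ensemble's class scores.

```matlab
[yhat, score] = predict(mdlTE, Xte);   % score: one column of class scores per class
C = confusionmat(yte, yhat);           % rows = true class, columns = predicted class
classes = mdlTE.ClassNames;
for k = 1:numel(classes)
    [fpr, tpr, ~, auc] = perfcurve(yte, score(:, k), classes(k));
    fprintf('%s: AUC = %.2f\n', char(string(classes(k))), auc);
end
```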

FIG. 8.

ROC plot for electrode gap (for the best performing model: MATLAB function fitcdiscr). The AUC for each class is given in the legend. The closer the AUC is to 1, the better the classification. For a perfect classification, i.e., AUC = 1, the curve should overlap with the y-axis and then run horizontally at y = 1, as seen from the curve for the 20 mm gap. The dashed line denotes the curve with an AUC of 0.5.

FIG. 9.

ROC plot for pressure (for the best performing model: MATLAB function fitcensemble).

FIG. 10.

ROC plot for the gas mixture (for the best performing model: MATLAB function patternnet). Classes 1–9 refer to the same order as in Table VIII.

TABLE VIII.

Prediction accuracy (%) and AUC for classification models using Matrix 2. The models were trained on a randomly selected 80% of all 894 observations and tested on the remaining 20%, independent of the training set.

Class name | AUC: TE, NB, DNN, SV, LDA | Total % accuracy: TE, NB, DNN, SV, LDA
Pressure classification 
500 mTorr 0.99      
1000 mTorr 0.99 0.97      
1500 mTorr 0.99 0.99 98.88 88.76 98.3 98.88 96.63 
2000 mTorr 0.99 0.96 0.96 0.98 0.91      
2500 mTorr 0.99 0.99      
Electrode gap classification 
20 mm 0.91 0.99 0.97 0.90      
24 mm 0.96 0.99 0.99 0.92 0.99 97.19 97.75 97.2 97.75 97.75 
28 mm 0.99 0.99      
Feed gas classification 
Ar (pure) 0.99      
Ar: N2 ∼ 2: 1 0.99 0.98 0.97 0.98      
Ar: N2 ∼ 1: 2 0.99 0.99      
N2 (pure) 0.99 0.99 0.99 0.99 0.95      
N2: O2 ∼ 2: 1 0.99 0.99 0.99 0.99 0.97 91.01 87.64 95.5 93.26 93.82 
N2: O2 ∼ 1: 2 0.99 0.99 0.98      
O2 (pure) 0.99 0.99 0.99 0.99 0.98      
O2: Ar ∼ 2: 1 0.98 0.98 0.91 0.91 0.82      
O2: Ar ∼ 1: 2 0.99 0.99 0.98      
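The holdout protocol described in the Table VIII caption can be sketched as follows; the variable names (observations, labels) are placeholders rather than the ones used in this work. The data are split randomly into 80% training and 20% testing sets, a classifier is fit on the training portion, and the percentage accuracy is evaluated on the held-out portion.

% Minimal sketch (assumed variable names) of the 80/20 holdout protocol
% described in the Table VIII caption.
% observations : table of measured features (894 rows)
% labels       : categorical vector of the control parameter to classify

rng('default');                                 % for a repeatable split
cv = cvpartition(labels, 'Holdout', 0.2);       % stratified 80/20 split

trainIdx = training(cv);
testIdx  = test(cv);

mdl = fitcensemble(observations(trainIdx, :), labels(trainIdx));

predicted = predict(mdl, observations(testIdx, :));
accuracy  = mean(predicted == labels(testIdx)) * 100;   % percent correct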
TABLE IX.

Confusion matrix (true/target class vs predicted/output class) for electrode gap (best performing model: MATLAB function fitcdiscr). To aid viewing, only nonzero entries are shown. Further, correct assignments are highlighted in light green, while incorrect assignments are highlighted in light orange. The bottom row and the rightmost column give the percentage of correct assignments for that particular row or column. The rightmost cell on the bottom row is the net correct percentage. In total, 97.8% of all assignments were correct.

 
 
TABLE X.

Confusion matrix (true/target class vs predicted/output class) for pressure (best performing model: MATLAB function fitcensemble).

 
 
TABLE XI.

Confusion matrix (predicted/output class vs true/target class) for species (best performing model: MATLAB function patternnet). Classes 1–9 refer to the same order as in Table VIII.

 
 

In this article, we have used statistical as well as machine learning predictions to analyze I-V data in a moderate-pressure CCP. We have demonstrated that DNNs, as well as other commonly used machine learning models, can be useful tools for extrapolating data even in a transitional regime where the measurements carry large error bars. Specifically, phase data with large error bars can be predicted with high accuracy, which could be used to automatically tune matching networks. Classification of control parameters is another possible application of these models, provided a large set of measured data is available. The models were able to identify the gas ratio in the feed gas as well as correctly identify the operating pressure and electrode gap in almost all the cases. The importance of the predictors was ranked for these classification predictions. While ranking the importance of the input parameters can give some insight into the physics, comparison with physics-informed models is necessary. Physics-informed learning can integrate domain knowledge and physical laws into ML models, improving their interpretability and accuracy, even with imperfect data.11 In future work, physics-informed learning, such as physics-informed neural networks (PINNs),11,15 will be explored on the mGEC moderate pressure database.

This work was supported by a generous donation from Applied Materials Inc.

The authors have no conflicts to disclose.

Shadhin Hussain: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Validation (equal); Writing – original draft (equal); Writing – review & editing (equal). David J. Lary: Formal analysis (equal); Methodology (equal); Supervision (equal); Validation (equal); Writing – review & editing (equal). Ken Hara: Formal analysis (equal); Methodology (equal); Supervision (equal); Validation (equal); Writing – review & editing (equal). Kallol Bera: Funding acquisition (equal); Project administration (equal); Resources (equal); Supervision (equal). Shahid Rauf: Funding acquisition (equal); Project administration (equal); Resources (equal); Supervision (equal). Matthew Goeckner: Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. O. Popov and V. Godyak, J. Appl. Phys. 57, 53 (1985).
2. P. Belenguer and J. Boeuf, Phys. Rev. A 41, 4447 (1990).
3. V. Lisovskiy and V. Yegorenkov, Vacuum 74, 19 (2004).
4. V. Lisovskiy, J.-P. Booth, K. Landry, D. Douai, V. Cassagne, and V. Yegorenkov, Phys. Plasmas 13, 103505 (2006).
5. Y. P. Raizer, M. N. Shneider, and N. A. Yatsenko, Radio-Frequency Capacitive Discharges (CRC, Boca Raton, FL, 2017).
6. N. Yatsenko, Sov. Phys. Tech. Phys. 25, 1454 (1980).
7. L. J. Overzet and M. B. Hopkins, J. Appl. Phys. 74, 4323 (1993).
8. V. Godyak and V. Demidov, J. Phys. D: Appl. Phys. 44, 233001 (2011).
9. A. Chingsungnoen and V. Amornkitbamrung, Chiang Mai J. Sci. 42, 248 (2015), https://epg.science.cmu.ac.th/ejournal/journal-detail.php?id=5521.
10. D.-Q. Wen, J. Krek, J. T. Gudmundsson, E. Kawamura, M. A. Lieberman, and J. P. Verboncoeur, Plasma Sources Sci. Technol. 30, 105009 (2021).
11. G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang, Nat. Rev. Phys. 3, 422 (2021).
12. P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake our World (Basic Books, New York, 2015).
13. D. J. Lary, A. H. Alavi, A. H. Gandomi, and A. L. Walker, Geosci. Front. 7, 3 (2016).
14. K. J. Kanarik, W. T. Osowiecki, Y. Lu, D. Talukder, N. Roschewsky, S. N. Park, M. Kamon, D. M. Fried, and R. A. Gottscho, Nature 616, 707 (2023).
15. L. Zhong, B. Wu, and Y. Wang, Phys. Fluids 34, 087116 (2022).
16. J. Trieschmann, L. Vialetto, and T. Gergs, “Machine learning for advancing low-temperature plasma modeling and simulation,” preprint arXiv:2307.00131 (2023).
17. O. Sakai, S. Kawaguchi, and T. Murakami, Jpn. J. Appl. Phys. 61, 070101 (2022).
18. A. D. Bonzanini, K. Shao, D. B. Graves, S. Hamaguchi, and A. Mesbah, Plasma Sources Sci. Technol. 32, 024003 (2023).
19. A. D. Bonzanini, K. Shao, A. Stancampiano, D. B. Graves, and A. Mesbah, IEEE Trans. Radiat. Plasma Med. Sci. 6, 16 (2021).
20. P. Ghosh, B. Chaudhury, S. Purohit, V. Joshi, A. Kothari, and D. Shetranjiwala, J. Phys. D: Appl. Phys. 57, 014001 (2024).
21. D. Nishijima, S. Kajita, and G. Tynan, Rev. Sci. Instrum. 92, 023505 (2021).
22. M. Koubiti and M. Kerebel, “Introducing machine-learning in spectroscopy for plasma diagnostics and predictions,” in Journal of Physics: Conference Series (IOP Publishing, Bristol, UK, 2023), Vol. 2439, p. 012016.
23. C. Samuell, A. Mclean, C. Johnson, F. Glass, and A. Jaervinen, Rev. Sci. Instrum. 92, 043520 (2021).
24. A. Sebastian, D. Spulber, A. Lisouskaya, and S. Ptasinska, Sci. Rep. 12, 18353 (2022).
25. B. Chapman, Glow Discharge Processes: Sputtering and Plasma Etching (Wiley-Interscience, New York, 1980).
26. Plasma Etching: An Introduction, edited by D. Manos and D. Flamm (Academic, New York, 1989).
27. M. Park, H. Y. Jeon, S. Han, D. H. Lee, and Y.-I. Lee, Mater. Trans. 64, 2206 (2023).
28. I. Namose, IEEE Trans. Semicond. Manuf. 16, 429 (2003).
29. X. Wang and C.-C. Chen, “Optimizing process conditions using design of experiments—A wire bonding, semiconductor assembly process case study,” in 2015 12th International Conference on Service Systems and Service Management (ICSSSM), Guangzhou, China, 22–24 June 2015 (IEEE, New York, 2015), pp. 1–5.
30. N. A. Heckert, J. J. Filliben, C. M. Croarkin, B. Hembree, W. F. Guthrie, P. Tobias, and J. Prinz, Handbook 151: NIST/SEMATECH e-Handbook of Statistical Methods (National Institute of Standards and Technology, Gaithersburg, MD, 2002), http://www.itl.nist.gov/div898/handbook/mpc/mpc.htm.
31. J. Vincent, H. Wang, O. Nibouche, and P. Maguire, Plasma Sources Sci. Technol. 29, 085018 (2020).
32. D. H. Kim and S. J. Hong, IEEE Trans. Semicond. Manuf. 34, 408 (2021).
33. Y. Ding, Y. Zhang, H. Y. Chung, and P. D. Christofides, Comput. Chem. Eng. 144, 107148 (2021).
34. S. Levitskii, Sov. Phys.-Tech. Phys. 2, 887 (1957).
35. V. Godyak and A. Khanneh, IEEE Trans. Plasma Sci. 14, 112 (1986).
36. S. Hussain, A. Verma, K. Bera, S. Rauf, and M. Goeckner, “Power measurement analysis of moderate pressure capacitively coupled discharges,” J. Vac. Sci. Technol. A 42, 033010 (2024).
37. M. Goeckner, J. Marquis, B. Markham, A. Jindal, E. Joseph, and B.-S. Zhou, Rev. Sci. Instrum. 75, 884 (2004).
38. J. Poulose, M. Goeckner, S. Shannon, D. Coumou, and L. Overzet, Eur. Phys. J. D 71, 1 (2017).
39. K. Hernandez, L. J. Overzet, and M. J. Goeckner, J. Vac. Sci. Technol. B 38, 034005 (2020).
40. K. Hernandez, A. Press, M. J. Goeckner, and L. J. Overzet, J. Vac. Sci. Technol. B 39, 024003 (2021).
41. A. F. Press, M. J. Goeckner, and L. J. Overzet, J. Vac. Sci. Technol. B 37, 062926 (2019).
42. M. A. Sobolewski, IEEE Trans. Plasma Sci. 23, 1006 (1995).
43. M. A. Lieberman, IEEE Trans. Plasma Sci. 16, 638 (1988).
44. M. A. Lieberman and A. J. Lichtenberg, Principles of Plasma Discharges and Materials Processing (John Wiley & Sons, New York, 2005).
45. T. Panagopoulos and D. J. Economou, J. Appl. Phys. 85, 3435 (1999).
46. R. L. Wasserstein and N. A. Lazar, Am. Statist. 70, 129 (2016).
47. K. Levenberg, Q. Appl. Math. 2, 164 (1944).
48. D. W. Marquardt, J. Soc. Ind. Appl. Math. 11, 431 (1963).
49. T. K. Ho, IEEE Trans. Pattern Anal. Mach. Intell. 20, 832 (1998).
50. L. Breiman, Classification and Regression Trees, The Wadsworth Statistics/Probability Series (Wadsworth International Group, Belmont, CA, 1984).
52. S. R. Safavian and D. Landgrebe, IEEE Trans. Syst. Man Cybern. 21, 660 (1991).
53. C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (MIT, Cambridge, 2006).
54. D. G. Krige, J. South. Afr. Inst. Min. Metall. 52, 119 (1951), https://journals.co.za/content/saimm/52/6/AJA0038223X_4792.
56. R. M. Neal, “Bayesian training of backpropagation networks by the hybrid Monte Carlo method,” Technical Report CRG-TR-92-1 (Dept. of Computer Science, University of Toronto, 1992).
57. V. N. Vapnik, Estimation of Dependences Based on Empirical Data, Springer Series in Statistics (Springer, New York, 1982).
58. V. Vapnik, The Nature of Statistical Learning Theory (Springer, New York, 1995).
59. C. Cortes and V. Vapnik, Mach. Learn. 20, 273 (1995).
60. V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed., Statistics for Engineering and Information Science (Springer, New York, 2000).
61. V. N. Vapnik, Estimation of Dependences Based on Empirical Data; Empirical Inference Science: Afterword of 2006, 2nd ed., Information Science and Statistics (Springer, New York, 2006).
62. S. Haykin, Neural Networks and Learning Machines, 3rd ed. (Pearson, London, UK, 2009).
63. T. K. Ho, “Random decision forests,” in Proceedings of 3rd International Conference on Document Analysis and Recognition, Montreal, Canada, 14–16 August 1995 (IEEE, Piscataway, NJ, 1995), Vol. 1, pp. 278–282.
64. E. Frank, L. Trigg, G. Holmes, and I. H. Witten, Mach. Learn. 41, 5 (2000).
65. L. Yang and A. Shami, Neurocomputing 415, 295 (2020).
66. J. Močkus, “On Bayesian methods for seeking the extremum,” in Optimization Techniques IFIP Technical Conference, Novosibirsk, July 1–7, 1974 (Springer, New York, 1975), pp. 400–404.
67. R. Koenker, Quantile Regression (Cambridge University, Cambridge, 2005), Vol. 38.
68. V. A. Godyak, R. B. Piejak, and B. M. Alexandrovich, IEEE Trans. Plasma Sci. 19, 660 (1991).
70. G. J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition (Wiley, New York, 2005).
71. S. V. Stehman, Remote Sens. Environ. 62, 77 (1997).
72. T. Fawcett, Pattern Recognit. Lett. 27, 861 (2006).