Plasma is found almost everywhere in the universe. Yet most people are not conscious of its presence or of the role it plays in creating the electronic devices in our everyday lives. Virtually every semiconductor chip has been touched by plasma, which is used in nearly half of all semiconductor manufacturing steps today. These chips power our mobile phones, computers, and automobiles, and they are vital for making artificial intelligence a reality. The world of plasma processing has a reputation for mystery and has largely remained hidden from the consumer's view. What does plasma do in chipmaking? Why is it so critical? What does a process engineer do? By conveying the importance of plasma to the electronics industry and showing how its energy has been harnessed to produce chips, this perspective aims to make the technology more relatable as it continues to change our everyday lives.

In 1989, The New York Times characterized plasma processes as “little-known but crucial” and deemed them “arcane” as if this mysterious world could only be known by those with secret knowledge.1 Those statements were made back when plasma was used in only a handful of processes. Plasma is now used in nearly half of the hundreds of steps needed to manufacture a semiconductor chip. Without it, we might still be living in a world without our advanced mobile phones, computers, and the Internet. I had not given much thought to plasma before joining the semiconductor industry nearly two decades ago—as a new graduate with a Ph.D. in Physical Chemistry. I was invited to participate in the “Celebrating the Women of the AVS” issue to provide a perspective for this special topic collection. I am using this opportunity to highlight the importance of plasma processing for newcomers and to share my excitement for its future.

Even those working with plasma every day may like to step back to think about its origins. The story of plasma begins 13.8 billion years ago with the Big Bang. The universe was so hot that most visible matter was plasma—ions, electrons, radicals, and photons. It was only hundreds of thousands of years later that matter cooled enough to form neutral atomic gases. The visible universe is still predominantly made of plasma in stars and interstellar space. While plasma is today referred to as the fourth state of matter, the universe made it before the other states—solids, liquids, and gases.

Remarkably, although it is the most common and fundamental state of matter, plasma was given a name less than a century ago. The term was coined by Nobel laureate Irving Langmuir in 1928, when he described the glowing gas of a fluorescent lamp using the Greek word for "formation" or "mold."2 His reasoning is unknown, and the term had already been used in biology (e.g., blood plasma). His choice is part of the mystery—the word "plasma" may be a misnomer given that, unlike an actual shapeable mold, it does not tend to conform to any external influence and seemingly has a mind of its own.3 Up close, plasma behaves as a complex, nonlinear system in which gases have been broken down into their constituent parts such as ions, radicals, photons, and electrons. From far away, however, plasma looks more like a gentle glow of light. Nature produces these beautiful radiances in forms such as lightning bolts, the aurora borealis, and solar flares.

Although the universe created plasma, humans have found inventive ways to use it. Students had plasma lamps in the 1980s, families watched plasma TVs in the 1990s, and tiny plasmas are used by dentists today. Plasma can even be made in a common kitchen. If you dare, cut a grape almost in half and place it into the microwave oven. The microwaves break down the air between the grape halves to form a plasma.4 A typical commercial plasma reactor is not much larger than a microwave oven and works similarly, except using radio waves and specialized gases under vacuum. It is the ability to control the plasma and its interactions with the surface of the wafer that makes it so valuable for advancing technology.

Ever since plasma could be tamed, scientists have wanted to harness its energy into something useful. In 1938, scientists began using plasma to reproduce the sun's reactions in hopes of obtaining an unlimited source of electricity through fusion. Fusion research has provided a fundamental understanding of how plasma works. An essential parameter in plasma science is the electron temperature, which characterizes the extent to which the gas has broken down and is typically expressed in units of electron volts (eV). Although plasmas are already energetic relative to the other states of matter, there is still a spectrum of energies across different types of plasma.

The concept of electron temperature can be used to distinguish between semiconductor and fusion plasmas. Fusion reactors are considered "hot" at roughly 10 000 eV, which almost completely breaks down the isotopes of hydrogen. For perspective, this energy is orders of magnitude higher than the binding energy of a few electron volts that typically holds atoms together in a solid. Only about 100 fusion reactors are operating in the world today, each the size of an auditorium. In comparison, commercial plasma reactors for making chips are "cold" at 1–10 eV, dissociating and ionizing only a small fraction of the gas. These reactors are four orders of magnitude smaller than those for fusion, and hundreds of thousands of them operate worldwide. They are designed to facilitate chemical reactions instead of nuclear reactions. Their purpose is to add, remove, or modify the building blocks of semiconductor chips for the electronics industry.
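The "hot" versus "cold" distinction is easy to make concrete. A quick conversion from electron volts to kelvin (using the standard value of the Boltzmann constant; the example temperatures are rough orders of magnitude, not figures from the text) shows how far apart the two regimes sit:

```python
# Quick conversion from electron temperature in eV to kelvin.
# 1 eV corresponds to roughly 11,600 K (Boltzmann constant, 8.617e-5 eV/K).
K_PER_EV = 1.0 / 8.617e-5  # about 11,605 K per eV

def ev_to_kelvin(te_ev):
    """Convert an electron temperature in eV to kelvin."""
    return te_ev * K_PER_EV

fusion_te_ev = 10_000   # "hot" fusion plasma, order of magnitude
processing_te_ev = 2    # "cold" processing plasma, within the 1-10 eV range

print(f"fusion:     {ev_to_kelvin(fusion_te_ev):.1e} K")
print(f"processing: {ev_to_kelvin(processing_te_ev):.1e} K")
```

Even the "cold" processing plasma works out to tens of thousands of kelvin for the electrons, while the background gas and wafer stay near room temperature.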

Chips are built onto a silicon wafer using the semiconductor manufacturing process. Each 30-cm diameter wafer contains several hundred 1 cm2 chips, each containing billions of transistors or memory cells packed closely together, with features as small as 10 nm. Millions would fit on the period at the end of this sentence.

The Computer History Museum in California lists the milestones of this industry from the past half-century. It gives the earliest invention date of the integrated chip manufacturing process as 1955. Yet, the foundation was laid much earlier. The way chips are made today is adapted from a much older technique called printmaking. One of the most famous printmakers in history was Rembrandt, the “Father of Etching,” who lived in the 1600s.5 In its simplest form, chipmaking can be thought of as a high-tech version of this artform.

Here is a step-by-step comparison to Rembrandt as shown in Fig. 1. (a) Rembrandt used a copper plate, instead of today's silicon wafer. (b) He then covered the plate with wax. The analog is a photosensitive polymer in the chip industry, referred to as a photoresist “mask.” A variety of other films are also deposited. (c) Rembrandt used a sharp needle to scratch into the wax, exposing some of the copper. The chip industry uses UV light to create a pattern through the mask for the circuits. (d) Rembrandt dipped his plate into a wet acid solution to remove the exposed copper, transferring the pattern to the copper. While the chip industry still uses some wet etching solutions, today most etching is done with plasma. (e) The mask is stripped off the plate or wafer. Rembrandt repeated this entire sequence a dozen times with overlaying patterns. In comparison, the sequence is repeated approximately 40 times in a memory chip and over 100 times in a logic chip. As Rembrandt printed art, the semiconductor industry prints circuits—except our industry does it 100 000 times smaller than was once done by hand.

FIG. 1.

Schematic of the printmaking approach used to build semiconductor chips. See main text for comparison to Rembrandt.


There were no plasma steps when chips first started being manufactured in high volume. It was only in the 1970s, when the sizes of transistors shrank into the micron range, that the semiconductor industry started adopting plasma at all. The first application was removing the mask [Fig. 1(e)], using an oxygen plasma to "burn" off the photoresist. The primary benefit of plasma was in creating radicals, chemical species that are extremely reactive due to unpaired electrons. Radicals make processes faster, as they are hundreds of times more reactive than the parent gas molecules. In the stripping steps, plasma enables atomic oxygen radicals to quickly remove the organic photoresist mask. Likewise, in deposition processes, the radicals enable reactions at room temperature to prevent damage to the chip. However, this was only the beginning for plasma.

A significant breakthrough in the adoption of plasma was in the etching step [Fig. 1(d)]. According to plasma lore, the plasma etching chemistry for oxide was discovered during a stripping process. Engineers noticed that silicon dioxide (SiO2) on the wafer was etching during a stripping step near a Teflon-coated tool part and realized that the Teflon was breaking down in the plasma. Following this discovery, they used a derivative of Teflon called trifluoromethane (CHF3) to etch oxides. The plasma breaks CHF3 into radicals such as CF, CF2, and CF3. Fluorine then readily binds with silicon to form volatile SiF4, while carbon binds with oxygen to form carbon dioxide. This chemistry is very similar to that still in use today.
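As a sanity check on the overall chemistry, the net reaction can be idealized as SiO2 + CF4 → SiF4 + CO2 (a simplification of the CHF3 fragment chemistry described above), and a few lines of code confirm that it balances atom for atom. The parser here is a minimal sketch that handles only simple formulas:

```python
import re
from collections import Counter

# Minimal element-balance check for an idealized overall etch reaction.
# Handles only simple formulas like "SiO2" or "CF4" (no parentheses).

def parse(formula):
    """Count atoms in a simple chemical formula."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            counts[elem] += int(num or 1)
    return counts

def balanced(reactants, products):
    """True if both sides contain the same atoms in the same amounts."""
    lhs = sum((parse(f) for f in reactants), Counter())
    rhs = sum((parse(f) for f in products), Counter())
    return lhs == rhs

# SiO2 + CF4 -> SiF4 + CO2 balances atom for atom.
print(balanced(["SiO2", "CF4"], ["SiF4", "CO2"]))  # True
```

In the real reactor, of course, many competing pathways run in parallel, which is exactly why the recipe tuning described later is so demanding.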

In etching, the plasma must meet very stringent requirements to transfer the pattern into the target material. (For more details, the reader is referred to technical overviews of plasma etching in Refs. 6 and 7.) In brief, etching often needs to be directional, meaning it removes material downward rather than sideways underneath the mask. Fortunately, an important benefit of plasma is that it enables etching to occur preferentially downward, as the charged ions in the plasma can be biased toward the wafer by an electric field. In contrast, Rembrandt's acid bath removed copper isotropically, undercutting the mask and making the etched feature wider than the mask pattern. Chip manufacturers, now etching features approaching 10 nm, cannot tolerate such widening. As a result, plasma is used today in an estimated 93% of etching steps, in addition to 43% of stripping and 41% of deposition steps.8 Overall, nearly half of the total 150–500 process steps for making a chip use plasma. Often, there is no viable alternative to these plasma steps. The modern electronics industry simply could not exist without plasma processes.

To illustrate the increasing number of plasma steps over the last few decades, consider dynamic random-access memory (DRAM), which stores data in, for example, mobile phone applications. When DRAM was first introduced in 1970, no plasma was used. By 1980, there were only four plasma etch steps in DRAM, and wafers were loaded manually. By 1990 (around the time The New York Times article was written), the count had increased to ten steps, and the process was automated. By 2017, the plasma etching steps had jumped to nearly 150 per wafer.9 In the early days of chipmaking, the increased use of plasma was primarily due to replacing wet-chemistry etches with dry plasma. More recently, though, the increasing use of plasma has as much to do with the additional etching and deposition steps needed to make more complicated structures.

3D NAND is a great example of one of these complicated structures that will keep increasing the demand for plasma processes. 3D NAND is the flash memory used for solid-state storage, such as a USB drive or a memory card in a camera. The structure is built as stacks of memory cells, like a skyscraper on top of the silicon wafer (Fig. 2). Manufacturers started building these with 32 layers. To add more memory bit density, they are now up to around 96 layers and could reach as many as 500 layers in the next decade. A 500-layer stack would have roughly three times as many floors as the world's tallest building.

FIG. 2.

Graphic of 64-layer 3D NAND, an example of a complex structure that relies on plasma etch and deposition for manufacturing. Memory storage devices such as 3D NAND are important for the artificial intelligence revolution.


In 3D NAND, one of the most challenging steps is etching the hole for the memory cells. These holes are etched after all the layers are deposited. This is like creating the elevator shaft after building the floors of a skyscraper, a nearly impossible feat. Even 0.1° of deviation from the top to the bottom of this etched hole can make the shaft about 20% larger at the top than at the bottom; the elevator simply would not work. To etch aspect ratios greater than 100:1, plasma power drives ions to travel well over 100 000 m/s (300 times faster than the speed of sound). Designing a reactor that can provide these ions without destroying itself is a challenge. Manufacturers probably would not have considered 3D NAND an option if it were not for plasma etch. It is only because so much effort has gone into developing this etch over the years that 3D NAND memory chips are available today. As we look toward the future, building upward will be even more challenging. As structures continue to scale upward—and they will—plasma will become even more critical.
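The elevator-shaft arithmetic can be checked directly. Assuming illustrative dimensions of a 100 nm diameter hole etched 6 μm deep (an aspect ratio of 60:1; these specific numbers are my assumption, not from the text), a 0.1° sidewall tilt widens the top by about 20%:

```python
import math

# Taper of a high aspect ratio hole from a small sidewall tilt.
# Dimensions are assumed for illustration: 100 nm hole, 6 um deep (60:1).
depth_nm = 6000.0
bottom_diameter_nm = 100.0
tilt_deg = 0.1  # deviation from vertical

# Each sidewall drifts outward by depth * tan(tilt), so the top opening
# is wider than the bottom by twice that amount.
widening_nm = 2 * depth_nm * math.tan(math.radians(tilt_deg))
percent_wider = 100 * widening_nm / bottom_diameter_nm
print(f"top opening ~{percent_wider:.0f}% wider than the bottom")
```

The deeper the stack grows, the tighter the angular tolerance becomes, which is why ever-taller 3D NAND keeps raising the bar for plasma etch.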

If plasma processing is so critical, then why do more people not know about it? Consider that the electronics industry is enormous, roughly $2 trillion per year.10 The semiconductor industry that drives it is worth an estimated $500 billion and purchases approximately $50 billion of equipment per year, some $20 billion of which is plasma equipment,8 equivalent to the GDP of a small country and likely the most profitable application of plasma so far. That said, while plasma enables the electronics industry, the plasma equipment market is worth only about 1% of it. Based on economics alone, most consumers may never know that plasma touched the chips not just in their mobile phones but in all the electronic devices around them.

Looking into the future, the growing need for data will lead to continued demand for more and better plasma technology. Soon over 100 billion connected devices will be generating data, much of it in real time, and those data need to be stored and analyzed. Emerging technologies for artificial intelligence demand huge amounts of memory and processing power to learn from repeated digital experiences. As a result, the artificial intelligence revolution could drive NAND bit demand up by 3–4× in the next four years alone.8 The semiconductor industry as a whole will continue to be a big beneficiary. In turn, this drives the need for better plasma processing capability.

Richard P. Feynman once said, “I am not afraid to consider the final question as to whether, ultimately, in the great future, we can arrange the atoms the way we want; the very atoms, all the way down!”11 Plasma is made up of violent and chaotic reactions, yet must gently process features of 10 nm and beyond. Today, it is commonplace to have repeatability at the atomic scale; for example, it is routine to achieve uniformity in feature sizes to three to four atoms across the entire wafer. Yet, adding or removing material at this resolution is still challenging and requires the most advanced processing techniques.

The state of the art in processing is atomic layer deposition (ALD) and atomic layer etching (ALE).12,13 These processes work by cycling through separate, self-limiting reaction steps. Both were studied in laboratories for decades before making their way into a manufacturing environment. I observed first-hand the impact that plasma has had on these processes. About a decade ago, ALD was already a mainstream technique for depositing metals and dielectrics, while ALE had not yet been made suitable for manufacturing environments. Many literature reports on "ALE" used thermal methods and specialized equipment such as ion beam systems, which made the process too slow. Plasma offered an attractive alternative by reducing the reaction times, provided the plasma could be controlled to not react too fast. Despite all the reasons that it should not have worked with plasma, the ALE process behaved surprisingly well and offered a bonus benefit—the surface ended up smoother than it started.14 The use of plasma ultimately enabled ALE to move to high volume production in 2016, on a 10 nm logic application that forms the critical contact structure near the transistor. The focus today is on further improvements to address the productivity barrier for an increasing number of applications. Both ALE and ALD remain the subject of much active research in academia and industry, as these techniques have rapidly gained popularity for probing the highest resolution of plasma processing.
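The self-limiting behavior that makes ALD and ALE special can be sketched with a toy model: each cycle's reaction saturates with dose, so the etch per cycle approaches a fixed, atomically small amount, and the total depth is set digitally by the cycle count. All numbers below are illustrative, not measured values:

```python
import math

# Toy model of a self-limiting ALE half-reaction (illustrative numbers only).
# The etch per cycle (EPC) saturates with reactant dose time, so beyond a
# saturation dose each cycle removes a fixed, atomically small amount.

EPC_SAT_NM = 0.5   # assumed saturated etch per cycle, nm
TAU_S = 1.0        # assumed characteristic saturation time, s

def etch_per_cycle(dose_time_s):
    """EPC saturates exponentially with dose time (self-limiting step)."""
    return EPC_SAT_NM * (1 - math.exp(-dose_time_s / TAU_S))

def total_etch(cycles, dose_time_s):
    """Total etch depth scales linearly with the number of cycles."""
    return cycles * etch_per_cycle(dose_time_s)

# Doubling an already-saturating dose barely changes the per-cycle result...
print(etch_per_cycle(5.0), etch_per_cycle(10.0))
# ...while depth is controlled digitally by cycle count (just under 50 nm).
print(total_etch(100, 5.0))
```

This saturation is what decouples the result from small fluctuations in the plasma, and it is why cyclic processes can hit atomic-scale repeatability.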

It is the process engineer who gets plasma to work as well as it does. They are the Rembrandts of today. Lam Research alone has thousands of them, many entering the workforce as new Ph.D. graduates. These engineers are tasked with demonstrating a qualified plasma process before the hardware is delivered. Their mission is to find a set of plasma parameters that meets the criteria desired by the manufacturer for a given step, whether it be etching or deposition. Parameters such as plasma pressure and power are called "knobs," and the combination of knobs is called a process "recipe." My first assignment as a process engineer was to etch a high aspect ratio oxide hole in DRAM. To give a sense, here is a simplified version of a plasma etch recipe:

45 mT pressure, 1900 W 2 MHz RF power, 1200 W 27 MHz RF power, 300 cm3/min Ar gas, 28 cm3/min C4F8 gas, 10 cm3/min O2 gas, 0 °C wafer temperature, and 250 s.
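For illustration, this recipe can be written as a simple data structure of knob settings (the key names are my own shorthand, not a real tool interface):

```python
# The sample recipe as a dictionary of knob settings.
# Values come from the recipe in the text; the key names are shorthand.
recipe = {
    "pressure_mT": 45,
    "rf_2MHz_W": 1900,
    "rf_27MHz_W": 1200,
    "Ar_sccm": 300,      # 300 cm3/min Ar
    "C4F8_sccm": 28,
    "O2_sccm": 10,
    "wafer_temp_C": 0,
    "time_s": 250,
}
print(f"{len(recipe)} knobs in this simplified recipe")
```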

The recipe is critical to controlling plasma parameters such as the electron energy, ion energies and fluxes, and neutral fluxes. These, in turn, govern hundreds of competing reactions happening in the plasma and at the wafer surface. Determining a reasonable recipe is challenging because the search space is massive. If an actual recipe has 15 knobs, each with 10 distinct positions, the engineer must choose from 10^15 possible recipe combinations. How does an engineer pick?

One way to do the search is to measure the result of every permutation of the knobs. This could cost somewhere in the vicinity of $100 000 000 000 000 000, considering that each recipe requires advanced metrology and the time of an engineer to plan and execute the experiments. This makes it highly unlikely that anyone has ever found a completely optimized recipe. In practice, engineers have a limited number of tries and limited time to find their recipe; therefore, they need to be smart about how they play the game. Imagine deciding to tune the input power to either 1150 W or 1200 W, a choice that could depend on a nearly infinite number of other parameter choices and cross correlations. Information theory provides a variety of optimization strategies and statistical methodologies, but these strategies work best when the search space is bounded or when a lot of data exist.
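The arithmetic behind that estimate is straightforward, assuming (for illustration) something like $100 of metrology and engineering time per experiment:

```python
# Cost of exhaustively measuring every recipe permutation.
# The $100-per-experiment figure is an assumed round number for illustration.
n_knobs, positions = 15, 10
cost_per_experiment = 100  # dollars, assumed

combinations = positions ** n_knobs
total_cost = combinations * cost_per_experiment
print(f"{combinations:.0e} recipes, about ${total_cost:,} to test them all")
```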

In practice, a typical engineer will rely on two basic and complementary approaches: (1) intuition, based on the experience gained in previous process tuning attempts and (2) physics-based models, based on knowledge of the underlying chemistry and physics of the plasma process. Once the process is close to the required specification, it moves into the fine-tuning stage. At this stage, the search space has been narrowed down, such that this portion of the tuning process is a prime candidate for computational algorithms (Fig. 3). Engineers look forward to the day that a machine can assist!
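As a toy sketch of what such computational assistance might look like in the fine-tuning stage, a simple random search over a narrowed window can polish two knobs against a made-up process response (the response surface, target values, and windows here are entirely hypothetical):

```python
import random

# Toy fine-tuning sketch: random search over a narrowed window of two knobs.
# The "process response" is a made-up quadratic surface, not real data.

def process_error(power_w, pressure_mt):
    """Hypothetical distance from the target result (lower is better)."""
    return (power_w - 1180) ** 2 / 1e4 + (pressure_mt - 47) ** 2

def fine_tune(n_trials=200, seed=0):
    """Randomly sample the narrowed window and keep the best setting."""
    rng = random.Random(seed)
    best_err, best_setting = float("inf"), None
    for _ in range(n_trials):
        power = rng.uniform(1100, 1250)   # W, window from coarse tuning
        pressure = rng.uniform(40, 55)    # mT, window from coarse tuning
        err = process_error(power, pressure)
        if err < best_err:
            best_err, best_setting = err, (power, pressure)
    return best_err, best_setting

err, (power_w, pressure_mt) = fine_tune()
print(f"best found: {power_w:.0f} W, {pressure_mt:.1f} mT (error {err:.2f})")
```

In reality each "trial" is an expensive wafer run, which is why smarter strategies than random sampling, guided by intuition and physics models, matter so much.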

FIG. 3.

Simplified schematic showing the task of a process engineer. Images show the cross section of a test wafer after etching silicon through an oxide mask. Perhaps a machine will assist the process engineer in the future.


The problem in manufacturing is even more difficult than just described—the recipe optimization discussed above is just for one spot on the wafer, on one reactor, for one feature, at one point in time. Ultimately, the recipe succeeds only when it is made stable and repeatable over billions of devices on thousands of wafers. For high yield, the process also needs to be optimized uniformly across the wafer and across multiple reactor chambers, and it must stay stable over time. Consider that each reactor may have more than 100 critical parts in it, any of which might have slight variations that could affect the process. No matter how careful we are today, no two chambers are exactly alike, and as a result, chamber matching is one of the biggest challenges in our industry.

There are potentially more than 10^100 reactor chamber states possible (a googol), which makes the number of recipes seem small in comparison. In effect, the recipe must be fine-tuned over and over on each chamber, in each fab, and over time. Inevitably, processes drift from desired conditions and therefore require continual retuning. This presents a major problem, as it leads to suboptimal performance and decreased yield. Over the next decade, we will need a different approach to chamber matching and a different approach to manufacturing. Potential research directions include figuring out how artificial intelligence can best be implemented to help us here too. The story has come full circle: when the smart processing dream is realized, it will use the very semiconductor technology that our plasma processes enable.

Plasma is essential to building our digital future, as data, storage, and processing demands increase and artificial intelligence disrupts all industries and our way of living. To continue pushing the limits of plasma processing, the industry must improve the capability of these processes. Even in the most advanced plasma processes of ALD and ALE, more work is needed to expand into more cost-effective applications. Over the next decade, designers will continue to come up with challenging new architectures, most likely building upward in 3D. Process engineers will continue to develop even more miraculous etching and deposition processes. Soon, the machines enabled by these chip developments will help engineers better tackle the next challenges of process development.

I hope you have enjoyed this peek inside the world in which I work. Plasmas are mysterious, beautiful, and inspiring. They enable today's electronics industry and will provide many of the opportunities for the developments of tomorrow. It would be hard to imagine what life would be like without them.

The author thanks her mentors R. A. Gottscho and J. Marks and her co-workers at Lam Research. The author also thanks D. Maydan for discussions about the early days.

1. A. Pollack, "Pillar of chip industry eroding," The New York Times, 1989.
3. F. Chen, Introduction to Plasma Physics and Controlled Fusion (Springer, New York, 1984).
6. V. M. Donnelly and A. Kornblit, J. Vac. Sci. Technol. A 31, 050825 (2013).
7. M. A. Lieberman and A. J. Lichtenberg, Principles of Plasma Discharges and Materials Processing, 2nd ed. (Wiley, New York, 2005).
8. Lam Research Corporation, internal data.
9. Lam Research Corporation, internal data based on an estimate of DRAM 18 nm process flow in 2017, not including lithography use of plasma to generate UV light.
10. The McClean Report—A Complete Analysis and Forecast of the Integrated Circuit Industry, January 2019.
11. Richard Feynman gave the talk "There's Plenty of Room at the Bottom" on December 29, 1959, at the annual meeting of the American Physical Society at the California Institute of Technology. It was published in Caltech's Engineering and Science, Vol. 23, pp. 22–36 (1960).
12. H. B. Profijt, S. E. Potts, M. C. M. van de Sanden, and W. M. M. Kessels, J. Vac. Sci. Technol. A 29, 050801 (2011).
13. K. J. Kanarik, T. Lill, E. A. Hudson, S. Sriraman, S. Tan, J. Marks, V. Vahedi, and R. A. Gottscho, J. Vac. Sci. Technol. A 33, 020802 (2015).
14. K. J. Kanarik, S. Tan, and R. A. Gottscho, J. Phys. Chem. Lett. 9, 4814 (2018).

Keren J. Kanarik received her Ph.D. in Physical Chemistry from the University of California at Berkeley. She joined Lam Research in 2002 as a plasma etch process engineer. She is currently the Technical Advisor to the CTO. She has contributed to numerous publications and patents, with a focus on atomic layer etching and a mechanistic understanding of plasma processes. She is active in AVS-sponsored conferences including the Steering Committee of the International ALE Workshop.