
Industrial Physics Forum 2013: The future of electronics

6 December 2013

What technologies will extend silicon's reign as the preeminent material for electronics? What materials will ultimately supplant silicon?


The third session of the 2013 AIP Industrial Physics Forum, held at the Long Beach Convention Center during the AVS annual meeting, was devoted to five fronts in the campaign to ensure that smartphones, disk drives, and other electronic devices continue to become more capable. The talks covered graphene and other novel materials, self-assembly as a means of making nanoscale devices, phase-change computer memory, and the R&D needed to establish 450 mm as the new standard diameter for silicon wafers.

Graphene materials and devices

In 2004 Andre Geim and Konstantin Novoselov discovered a cheap and simple way to make graphene, the atom-thick honeycomb form of carbon. Thanks to graphene’s unusual band structure, electrons whiz through the material ballistically. That property and others attracted the interest of the electronics industry, which was looking for—and continues to look for—materials to replace silicon as the basis for electronics.

The need to look beyond silicon has been evident for decades. As Luigi Colombo of Texas Instruments recounted in his talk, the US government, in partnership with US chip and device manufacturers, began looking two decades ago at extending Moore’s law in the face of silicon’s intrinsic limitations. Three centers—Index and CNFD in New York, SWAN in Texas—were established to take up the challenge of finding ways to shrink the size of processors while lowering their power requirements. In 2007 carbon-based electronics was identified as a candidate technology.

Scientific interest in graphene is intense. By Colombo’s count, around 33 000 graphene papers have been published so far. But 85% of those papers are by academic authors. If graphene is to supplant—or even just to supplement—silicon, a well-funded, coordinated R&D program that includes industry is in order. The European Union has already set up such a program, comprising three broad areas: production, components, and systems.

Colombo noted that graphene’s versatility makes it difficult to draw up a conventional roadmap, which typically identifies a single, ultimate goal and several intermediate milestones. Nevertheless, he outlined several specific challenges. Integrating dielectric materials and metals with graphene remains difficult, as does making electrical contacts with the material. What’s more, graphene’s material advantages over silicon only come into play when devices are far smaller than they are now. Scientists need ways to scale down graphene technology to 1 nm.

Progress is being made. In 2004 the largest graphene sheets were micron-scale. Now they are meter scale. Novel concepts, including a bilayer-based field effect transistor, have been devised.

What are we looking for?

Graphene is not the only route to electronics beyond the current paradigm, known as CMOS (complementary metal oxide semiconductor). In his talk, Wilfried Hänsch of IBM surveyed some of the alternatives, with an emphasis on apples-to-apples comparisons of performance. It’s not enough to compare material properties, Hänsch argued. You also need to compare components and devices—virtual ones, if you can’t yet build real ones.

Compounds of elements drawn from the boron and nitrogen groups of the periodic table, the so-called III–V materials, show promise because of their higher electron mobilities. But according to Hänsch, the potential speed gain of an actual III–V device compared with state-of-the-art silicon is likely to be only 40%.

Tunneling field-effect transistors that gate current by aligning energy levels are potentially faster and lower-powered than “regular” field-effect transistors. But the source of that advantage—a cliff-shaped current-versus-voltage curve—is also a potential disadvantage. Small differences in the curve's shape can lead to big differences in performance, making it hard to produce what modern processors need: billions of identical components.

Electrons move through graphene and its relative, the carbon nanotube (CNT), 10 times faster than they do through silicon. Whereas graphene has no bandgap (or, rather, one of infinitesimal width), carbon nanotubes are bona fide semiconductors, which, in Hänsch’s opinion, qualifies CNTs as a legitimate contender to take on silicon. Indeed, just last month researchers at Stanford unveiled a processor with semiconductor components made wholly from CNTs.

Like Colombo of Texas Instruments, Hänsch pointed out that feature sizes, or, equivalently, the separation between the CNTs, must be below 10 nm to successfully compete with silicon. Placing tubes individually with, say, the tip of a scanning probe microscope is manifestly unscalable. Self-assembly will be needed.

With new materials comes the possibility of new device architectures. Hänsch outlined three that, at least for now, remain in the realm of basic research. Based upon graphene bilayers, the BiSFET depends upon successfully creating a Bose–Einstein condensate in graphene, a feat that has yet to be accomplished. The so-called Klein tunneling device mimics the p–n junction of the silicon transistor by taking advantage of the electrons’ photon-like behavior in graphene. On paper it’s fast. Another new architecture, the piezoelectric FET, could also be fast—provided that it can be made small enough to reap the advantages.

Hänsch doesn’t expect CMOS to disappear soon. “It will be with us until at least 2020,” he said. Its life will be extended by cooling and other “add-ons” that enhance performance.

Directed self-assembly

Packing more components onto a chip entails not only making features smaller; those features must also be created in parallel, lest it take too long to assemble a device. One promising approach for doing just that, directed self-assembly (DSA), was the subject of the talk given by Roel Gronheid of IMEC in Belgium.

The type of DSA that Gronheid and his colleagues are working on was inspired by block copolymers. Discovered in 1970, block copolymers usually consist of two different polymer species joined in a single chain; segments of each species feel a stronger attraction to their own kind than to the other species. A 50–50 mix will spontaneously form alternating layers of the two species; a 75–25 mix causes the minority species to form cylinders within the majority species.

That same tendency to self-assemble is manifest in polymer blends, which form structures less readily than do block copolymers but are easier to process. Spun onto a surface, a 50–50 mix will arrange itself into alternating stripes, the width of which is twice that of the individual polymer chains. If the components differ in polarity, an organic solvent can be used to dissolve and remove one of them, while leaving the other in place.

The “directed” part of IMEC’s self-assembly technique comes from spreading the polymer blend within a guide pattern whose features are at the ~100 nm scale that can be produced with current optical/etching technology. The stripes or other features that form within the pattern end up being smaller in scale than the features of the pattern.
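The payoff of that guiding step is often called density multiplication: the fine polymer period subdivides each coarse lithographic feature. A minimal sketch of the arithmetic, using placeholder pitches rather than IMEC's actual process values:

```python
# Density multiplication in directed self-assembly: a guide pattern
# printed at a coarse lithographic pitch is subdivided by the
# polymer's much finer natural period.
# The pitch values below are illustrative, not IMEC process data.

guide_pitch_nm = 100   # roughly what current optical/etching tools can print
polymer_pitch_nm = 25  # assumed natural period of the self-assembled stripes

# Number of polymer stripes that fit within one guide feature
multiplication = guide_pitch_nm // polymer_pitch_nm
print(multiplication)  # 4: each printed feature yields four finer ones
```

With those assumed numbers, a single 100-nm lithographic feature templates four 25-nm stripes, which is how sub-lithographic scales become reachable without finer optics.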

Gronheid reported the results of using DSA to make some of the components of a test device with feature sizes of 100 nm. He said that IMEC is on track to reach 10-nm features. Challenges remain in verifying the structures nondestructively.

Phase-change memory

Computer memory holds quickly accessible information in the form of electric charge, stored either in capacitors (DRAM) or in floating-gate transistors (flash, of which NAND is the most common variant). Making memory chips is a $52-billion-a-year business. To cope with the millions of videos and other files uploaded every month, Google and other data warehouses run vast data centers. Half of a data center's energy bill is devoted just to cooling the memory infrastructure.

In his talk, Roberto Bez of Micron Technology outlined his company’s development of a new type of memory—phase-change memory (PCM)—that has the potential to store information with greater energy efficiency than either DRAM or NAND. Alternatives to DRAM and NAND will be needed, said Bez, because both technologies are close to hitting their intrinsic physical limits. Of several new memory paradigms, PCM is the most advanced.

The “phase change” in PCM refers to a reversible, heat-induced switch between a crystalline phase and a glassy, amorphous phase. Materials that include an element from the oxygen group (group VI) of the periodic table undergo such changes. Bez’s company uses Ge2Sb2Te5.

Each cell in the PCM memory consists of a single transistor coupled to a single resistor, a piece of Ge2Sb2Te5 material. A tiny heating element effects the phase change, which is detected as a change in resistance (the amorphous phase is more resistive than the crystalline phase).
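That one-transistor, one-resistor cell can be sketched as a toy model in which writing switches the phase and reading senses the resistance. The resistance and threshold values below are illustrative placeholders, not Ge2Sb2Te5 device data:

```python
# Toy model of a phase-change memory cell. The stored bit is encoded
# in the phase of the material and read out as a resistance contrast.
# Resistance values are illustrative placeholders, not device data.

R_CRYSTALLINE = 1e4  # ohms; low-resistance crystalline (SET) state
R_AMORPHOUS = 1e6    # ohms; high-resistance amorphous (RESET) state
R_THRESHOLD = 1e5    # ohms; sensing threshold between the two states

class PCMCell:
    def __init__(self):
        self.resistance = R_CRYSTALLINE  # assume a crystalline starting phase

    def write(self, bit):
        # A RESET pulse melt-quenches the material into the amorphous
        # phase; a SET pulse anneals it back to the crystalline phase.
        self.resistance = R_AMORPHOUS if bit else R_CRYSTALLINE

    def read(self):
        # Sense the resistance: the high-resistance amorphous phase
        # encodes a 1, the low-resistance crystalline phase a 0.
        return 1 if self.resistance > R_THRESHOLD else 0

cell = PCMCell()
cell.write(1)
print(cell.read())  # 1
cell.write(0)
print(cell.read())  # 0
```

The model also makes the retention property concrete: between writes, the cell's state is just the phase it was left in, with no refresh needed.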

Bez listed the advantages of PCM. Each individual cell or bit is addressable, unlike the case for NAND. Reading and writing are fast, thanks to the speed of the phase change. The change is robust and repeatable, which leads to endurance (1 million cycles) and retention (once switched, the phase endures). It’s also scalable: There’s no intrinsic limit to how many cells can be combined to form a memory chip.

Micron Technology began making demonstration devices in 2009. The current production memory is 1 Gb and made of 45-nm-wide cells. It operates at 1.7 V and between ambient temperatures of −40 and 85°C. Bez foresees PCM devices being adopted first for low-end mobile phones. Hybrid memories that combine PCM with NAND could provide more speed than NAND alone. PCM with DRAM would use less power than DRAM alone.

The Global 450 mm Consortium

The basic substrate of modern electronics, the wafer, is cut from a large, near-cylindrical crystal of pure silicon. In response to the rising demands of computation, the standard diameter of the largest crystals has grown. The current standard, introduced in 2003, is 300 mm. The next standard, 450 mm, is under development.

As Paul Farrar of SUNY Albany explained in his talk, realizing the 450 mm standard has required the concerted efforts of five of the world’s major chip manufacturers. Cost is the reason for the rivals’ cooperation. Whereas building a plant capable of fabricating 200-mm wafers required a capital investment of $1 billion, a 450-mm fab will require $10 billion. Intel, Samsung, or one of the other chip makers might be prepared to spend that amount, but only if all the necessary manufacturing steps, tools, and measurement devices have been worked out in advance. Solving those problems collectively is the goal of the Global 450 mm Consortium (G450C).

Founded in 2008, G450C brought together GlobalFoundries and Intel of California, IBM of New York, Samsung of South Korea, and TSMC of Taiwan. The consortium’s R&D facility is based in Albany, New York, and is managed by the College of Nanoscale Science and Engineering at SUNY Albany. The state of New York provided funding and tax breaks.

The need for new tools and processes arises in part from the wafers’ size. Besides being 50% wider than the previous 300-mm standard, each 450-mm wafer is more than twice as heavy. Even so, the G450C members have committed to using the same amount of cleaning acids and other chemicals in the manufacture of the 450-mm wafers.
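The size figures follow from simple geometry, which a quick calculation confirms. Wafer area, and with it the usable die count, scales as the square of the diameter; weight scales as area times thickness. The thickness values below are assumptions for illustration (larger wafers are made somewhat thicker so they don't sag during handling):

```python
# Sanity check of the wafer-scaling figures: 450 mm versus 300 mm.
d_old_mm, d_new_mm = 300.0, 450.0

width_gain = d_new_mm / d_old_mm - 1     # fractional increase in diameter
area_ratio = (d_new_mm / d_old_mm) ** 2  # area scales as diameter squared

# Weight scales as area times thickness. The thicknesses here are
# assumed values for illustration only.
t_old_um, t_new_um = 775.0, 925.0
weight_ratio = area_ratio * (t_new_um / t_old_um)

print(f"{width_gain:.0%} wider")      # 50% wider
print(f"{area_ratio:.2f}x the area")  # 2.25x the area
print(weight_ratio > 2)               # True: more than twice as heavy
```

The 2.25× area is the commercial point of the whole exercise: each wafer pass through the fab yields more than twice as many chips.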

Chip makers are not vertically integrated. They rely on other companies to supply tools and materials. Those companies, in turn, may or may not be interested in participating and investing in G450C. They do so, explained Farrar, because they anticipate future payoffs. Nikon, for example, recently signed up to develop the immersion lithography tools needed to create features at the project’s ultimate target scale of 10 nm.

The G450C is already producing wafers with 14-nm features. The number of defects has been reduced from 3000 per wafer in 2010 to 35 in 2013. The wafers’ flatness continues to improve. By the end of 2013, the production rate should be 11 000 a month.

G450C aims to finish—that is, deliver a set of tools and processes to its major partners—by the end of 2015. How quickly devices made from the new wafers make it to market will be a business decision. On the one hand, the capital cost of building a plant is considerable. On the other, each new, larger standard in wafer size has been accompanied by a reduction in device cost of 30%.

Charles Day is Physics Today's online editor.
