
The triumph of modern lithography: Lessons in managing high-tech innovation

Ian Smith
Friday, February 1, 2008
The fortunes of modern lithography illustrate some of the best operational practices for managing innovation in a high-risk environment. Back in the 1980s, the semiconductor industry faced a crisis: the alleged 'sub-micron barrier', which threatened to disrupt the monotonic doubling of device density every two years predicted by Intel's Gordon Moore. The fast-approaching diffraction limit implied by classical optical theory (the Rayleigh resolution criterion) led the industry to contemplate the demise of optical lithography, hitherto one of the cornerstones of its growth. The plan of record was to transition to shorter-wavelength x-ray lithography, which entailed significantly more expensive experimental synchrotron radiation sources, fragile membrane reticles, and fabs built like radiation-hardened nuclear bunkers. These technical and financial challenges threatened the economic viability of the industry.
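To see why the barrier looked real at the time, here is a minimal back-of-the-envelope sketch of the classical resolution estimate. The wavelength, numerical aperture, and k1 values are assumptions chosen to be representative of a late-1980s g-line stepper, not figures taken from the article.

```python
# Back-of-the-envelope Rayleigh-style resolution estimate.
# All numbers below are illustrative assumptions for a late-1980s
# g-line stepper, not figures from the article.

def min_feature(k1: float, wavelength_nm: float, na: float) -> float:
    """Classical resolution estimate: CD = k1 * lambda / NA (in nm)."""
    return k1 * wavelength_nm / na

# g-line mercury lamp (436nm), modest lens aperture, conservative process factor
print(min_feature(k1=0.8, wavelength_nm=436, na=0.35))  # ~997 nm, i.e. roughly 1 micron
```

With those assumed inputs, the classical formula bottoms out near one micron, which is why features much below that looked out of reach for optics.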

So far, that scenario has not materialized. What actually happened is one of the wonders of engineering in the modern era. Working in concert, technologists unleashed a wave of innovation that has extended optical lithography more than 10 times beyond the supposed limit, to today's 45nm node and beyond (albeit with ultraviolet light). X-ray lithography, now re-branded as extreme ultraviolet lithography, is only expected to take tentative steps into production in the next few years, at the 22nm node, provided nothing else changes in the meantime.

Below, we review these extraordinarily expensive and unpredictable developments and their business implications. They provide valuable lessons and point to best practices for managing high-technology innovation in a future that is likely to be just as exciting and risky as the past.

Saving optical lithography

In a highly complex global endeavor, technologists pulled on just about every conceivable innovation lever to save optical lithography. The obvious extension path, according to the classical theory, was to shorten the wavelength of the exposure light from visible through i-line to today's 193nm and 157nm laser sources, while nearly doubling the numerical aperture of the lens to increase its resolving power. Image fidelity had to be held precisely uniform over a roughly 30mm field of view, both to cover the most powerful chips and to print smaller chips economically en masse at each exposure.

To say that developing these lenses was challenging and risky severely understates the magnitude of the task. Maintaining a 2m-tall stack of 25-plus precision-ground elements in sub-micron dimensional alignment over the long term, against thermal and vibrational perturbations, was unprecedented. Mitigating the optical degradation of 50-plus lens coatings under long-term UV exposure was no small feat either.

Along the way, stepped-field exposure gave way to step-and-scanned field exposure, and more recently to double-patterning approaches. The partial coherence and polarization properties of light were exploited to enhance image contrast and thus refine resolution. Most recently, the use of high-refractive-index immersion liquids between the lens and the wafer has enabled effectively shorter-wavelength operation from the same light source. This development has raised several issues, including dynamic, pattern-dependent photo-thermal heating of the immersion liquid that causes mirage-like refractive distortion of the image, and particles and micro-bubbles that are dragged around in the meniscus and deposited on the wafer, all of which can generate device-killing pattern defects.
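As a rough illustration of how these levers compound, here is a hedged sketch using the same classical estimate as above. All wavelengths, numerical apertures, and k1 values are assumptions for illustration (193nm ArF light, a water-immersion effective NA of about 1.35, and resolution-enhancement techniques pushing k1 toward its single-exposure floor of 0.25); they are not figures drawn from the article.

```python
# Rayleigh-style estimate showing how the innovation levers compound.
# All parameter values are illustrative assumptions, not data from the article.

def min_half_pitch(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum printable half-pitch: k1 * lambda / NA (in nm)."""
    return k1 * wavelength_nm / na

scenarios = {
    "dry 193nm, early lens":   dict(k1=0.60, wavelength_nm=193, na=0.60),
    "dry 193nm, high-NA lens": dict(k1=0.35, wavelength_nm=193, na=0.93),
    "193nm water immersion":   dict(k1=0.28, wavelength_nm=193, na=1.35),
}

for name, params in scenarios.items():
    print(f"{name}: ~{min_half_pitch(**params):.0f} nm half-pitch")

# The immersion case lands around 40nm, roughly an order of magnitude
# below the ~1 micron 'barrier' feared in the late 1980s.
```

The point of the sketch is the trend, not the exact numbers: shorter wavelength, higher (and, with immersion, greater-than-unity) effective numerical aperture, and aggressive k1 reduction each attack a different term of the same formula.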

