Bikram Garg
Wednesday, October 1, 2008
Prior to sub-100-nm technology there was a clear demarcation of work between the design and manufacturing communities. Designers targeted better performance and more functionality for the same chip area, while the manufacturing community ensured that the design intent was met without manufacturing defects. The design rules acted as a policeman, ensuring that the layout handed to the manufacturer met the constraints required for sufficient product yield. The basic design rules were constraints on the minimum width of a pattern and the minimum spacing among patterns required for correct fabrication. In this way the designer was insulated from the processes the manufacturer used to fabricate the chip. But as chip manufacturing moves into deep sub-100-nm technologies such as 65 nm, 45 nm, and 32 nm, closer hand-shaking between the two communities has become imperative to achieve higher yield. Yield has decreased considerably at these scales, which could delay the adoption of smaller dimensions: feature scaling is outpacing changes in the manufacturing process. Some manufacturing processes are hitting a ceiling, and the most critical of these is lithography. This opens new opportunities for EDA vendors to come up with new tools and design methodologies that help designers adopt smaller dimensions.

The problems

Lithography: The optical resolution limit of a conventional lithography system with on-axis illumination under coherent light can be approximated as Rmin = 0.5 λ/NA, where λ is the illumination wavelength, NA is the numerical aperture, and Rmin is the minimum feature size. The resolution actually delivered by the lithography process is R = k1 λ/NA, where k1 is the Rayleigh coefficient. Despite the continuous decrease in wavelength (now 193 nm) and increase in numerical aperture, k1 has fallen to about 0.35, showing that feature sizes are shrinking faster than the lithography technology is improving. Some of the new lithography technologies aimed at decreasing the wavelength are not yet mature enough to be made part of the manufacturing process. Experiments are ongoing to increase the NA of the lithography system; immersing the lens in liquid to raise the refractive index, and hence the NA, has its own process issues. Moreover, increasing the numerical aperture to improve resolution deteriorates the depth of focus (DOF), which is inversely proportional to the square of the NA.

As k1 has dropped below 0.5, various resolution enhancement techniques (RETs) have been devised to achieve the same patterns on the wafer as on the layout, i.e. "what you see is what you get". These include optical proximity correction (OPC), which modifies the patterns on the mask to account for lithography and CMP (chemical mechanical polishing) effects so that the consequences of sub-0.5 k1 resolution are nullified; alternating phase-shift masks (AltPSM), which pass light through two paths shifted by λ/2 so that destructive interference doubles the resolution limit; and others such as double patterning and attenuated phase-shift masks (AttPSM). Adding these RETs brings in many new, complex, and sometimes conflicting design rules that are not easy for designers to understand, since they demand a steep learning curve in the underlying technology.
Thus, building strong RET into the process is a solution, but it can result in more systematic defects in the chip, affecting both the yield and the economics of fabrication.
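The Rayleigh relations above can be sketched numerically. The following Python snippet computes resolution and depth of focus for a few NA values; the specific NA choices and the DOF coefficient k2 = 0.5 are illustrative assumptions, not figures from this article.

```python
# Sketch of the Rayleigh resolution and depth-of-focus relations.
# The NA values and k2 coefficient below are assumed for illustration.

def resolution_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum printable feature size: R = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

def depth_of_focus_nm(k2: float, wavelength_nm: float, na: float) -> float:
    """DOF = k2 * lambda / NA^2 -- shrinks quadratically as NA grows."""
    return k2 * wavelength_nm / na ** 2

WAVELENGTH = 193.0  # nm, ArF excimer laser (as in the text)
for na in (0.85, 1.2, 1.35):  # dry vs. water-immersion NAs (assumed)
    r = resolution_nm(0.35, WAVELENGTH, na)
    dof = depth_of_focus_nm(0.5, WAVELENGTH, na)
    print(f"NA={na:4.2f}: R ~ {r:5.1f} nm, DOF ~ {dof:6.1f} nm")
```

The loop makes the resolution/DOF trade-off visible: pushing NA up shrinks the printable feature but shrinks the usable focus window even faster.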

Interconnect: The second major issue caused by feature scaling is the increasing interconnect delay. The function of the interconnect, or wiring system, is to distribute the clock and other signals and to provide power and ground to the various circuits and system functions on a chip. With the continued push to smaller geometries driven by Moore's law, and the growing need to improve performance and lower power dissipation in ICs, interconnect has become a challenge for feature scaling. The graph below shows how interconnect delay has been rising with scaling relative to device delay: at 100 nm the device delay was ~20 ps while the RC delay of a 1 mm interconnect was ~1 ps, whereas in the projected 35 nm technology the device delay will come down to ~1 ps but the RC delay of the same interconnect will be ~250 ps. On top of that, the share of power dissipated in interconnect will increase from ~50 percent in earlier technologies to ~80 percent. The increase in wire aspect ratio, together with the decrease in line-to-line spacing, raises the coupling capacitance, and with it the power dissipation, which is proportional to capacitance. The increase in interconnect delay is a bottleneck not only for achieving high performance and low power dissipation; at the design level the issue is even more profound. The exact layers and lengths of the interconnect are decided at the layout stage, after the designer has already closed the design both functionally and in terms of performance. The timing information used at synthesis is based on an interconnect model that is difficult to make accurate, both because of the distributed nature of interconnect and because coupling capacitance depends on the spatial locations of neighboring wires and the temporal relations between the signals on them.
This can result in iterating back and forth between synthesis and P&R to reach timing closure, and can invalidate the interconnect models currently used in high-level design.
The coupling capacitance can also cause crosstalk, which degrades signal integrity. Global interconnects such as clock, power, and ground require more thorough handling and modeling; reducing their delay by adding repeaters increases the area of the chip.
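A back-of-envelope distributed-RC model illustrates why a fixed-length wire slows down as its cross-section shrinks, while a transistor at the same node speeds up. The resistivity, capacitance-per-length, and wire geometries below are textbook-style assumptions chosen for illustration, not values from this article.

```python
# Toy Elmore-style estimate of distributed-RC wire delay.
# rho is copper resistivity; c_per_m (~0.2 fF/um) is an assumed
# typical total capacitance per unit length.

def wire_delay_s(length_m: float, width_m: float, thickness_m: float,
                 rho_ohm_m: float = 1.7e-8,
                 c_per_m: float = 2e-10) -> float:
    """Distributed RC delay ~ 0.4 * R_total * C_total (Elmore approximation).
    Note the quadratic growth with length for a fixed cross-section."""
    r_total = rho_ohm_m * length_m / (width_m * thickness_m)
    c_total = c_per_m * length_m
    return 0.4 * r_total * c_total

# A 1 mm wire at a wide vs. narrow cross-section (assumed dimensions):
for w, t in ((200e-9, 400e-9), (50e-9, 100e-9)):
    d = wire_delay_s(1e-3, w, t)
    print(f"width {w * 1e9:3.0f} nm: delay ~ {d * 1e12:6.1f} ps")
```

Shrinking the cross-section by 4x in each dimension multiplies the resistance, and hence the RC delay, by 16x for the same 1 mm run, which is the same order-of-magnitude trend as the ~1 ps to ~250 ps figures quoted above.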

EDA Methodologies and Solutions

There is a growing need for new EDA tools and design methodologies to migrate to these sub-100-nm technologies. We will walk through the complete design flow and look at the tool options that could be offered to designers to improve their productivity.

System-level design (during design exploration): The design flow's main focus has been on logic functionality and device optimization to achieve the design's goals, such as frequency, area, and power. This focus needs to be balanced evenly between device optimization and interconnect optimization. Given the growing importance of interconnect, the designer needs a tool that helps tweak the high-level architecture with an eye on the interconnect issues that will surface at the layout level. Such a tool would perform high-level partitioning of the design into blocks, floorplan those blocks, and concentrate on the interface and global interconnects that are usually the main sources of coupling capacitance. From the floorplan, the interconnect capacitances and lengths can be determined and fed back to the architectural tool to estimate power dissipation and performance at a high level. For example, state-of-the-art microprocessors are currently moving toward dual- or multi-core architectures; the parallel processing across cores allows comparable or even higher performance at lower core frequency and reduced power consumption. The work that is presently done manually, and is time-consuming, could be sped up by a tool that lets the user switch between architectures and immediately see the effects on frequency and power dissipation, based on the interconnect delays and interconnect power.
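The dual-core trade-off mentioned above can be sketched with a standard first-order model: dynamic power scales roughly as f^3 (since supply voltage must track frequency), while ideal throughput scales with cores x frequency. The cubic scaling, the 0.6x frequency point, and the assumption of perfect parallelism are illustrative, not from the article.

```python
# First-order model of the multi-core trade-off: two slower cores can beat
# one fast core on performance-per-watt. All scaling assumptions are
# textbook approximations, not measurements.

def relative_power(cores: int, freq_ratio: float) -> float:
    """Dynamic power relative to one core at nominal f (P ~ cores * f^3,
    assuming supply voltage scales roughly with frequency)."""
    return cores * freq_ratio ** 3

def relative_throughput(cores: int, freq_ratio: float) -> float:
    """Ideal, perfectly parallel throughput relative to one nominal core."""
    return cores * freq_ratio

single = (relative_throughput(1, 1.0), relative_power(1, 1.0))
dual = (relative_throughput(2, 0.6), relative_power(2, 0.6))
print(f"single core @ 1.0f: perf {single[0]:.2f}, power {single[1]:.2f}")
print(f"dual core   @ 0.6f: perf {dual[0]:.2f}, power {dual[1]:.2f}")
```

Under these assumptions the dual-core point delivers 1.2x the throughput at well under half the power, which is exactly the kind of what-if an architecture-exploration tool should let the designer evaluate quickly, with interconnect delay and power folded into the model.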

Synthesis: Synthesis tools presently pursue the functional and timing objectives of the design and, through technology mapping, select the standard cells that meet those criteria. The standard cell library needs to be extended with a 3D physical description of each cell's circuit and its electrical behavior under the process used. Each cell should also carry a manufacturing index that governs its use by the synthesis tool. This index should be provided by the technology manufacturer based on the manufacturing complexity of the cell, which depends on both internal and external factors. The internal factor is the cell's functionality: more functionality usually means more transistors and hence more manufacturing complexity. The external factor is the interface and pattern complexity of the cell relative to the other standard cells that will sit next to it after P&R. Once the manufacturing index is known, the synthesis tool should take it into account while optimizing the logic and choosing cells from the library. Since any logic function can be represented by various logical equations, the tool should minimize a cost function that considers not only the number of gates but also the manufacturing index. In other words, instead of relying on design rules alone, we take the manufacturing index as a main component in deciding whether to use a cell.
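The cell-selection idea above amounts to a weighted cost function during technology mapping. A minimal sketch, with invented cell names, invented index values, and arbitrary weights (nothing here comes from a real library):

```python
# Sketch of manufacturing-index-aware cell selection: among functionally
# equivalent candidates, pick the cell minimizing a weighted cost of area,
# delay, and a manufacturer-supplied "manufacturing index".
# All cell data and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    area: float       # relative cell area
    delay: float      # relative cell delay
    mfg_index: float  # 0 = easy to print, 1 = lithography-hostile

def cost(cell: Cell, w_area: float = 1.0, w_delay: float = 1.0,
         w_mfg: float = 2.0) -> float:
    """Weighted mapping cost; w_mfg sets how hard manufacturability counts."""
    return w_area * cell.area + w_delay * cell.delay + w_mfg * cell.mfg_index

# Two hypothetical, functionally equivalent implementations of one function:
candidates = [
    Cell("AOI22_X1", area=1.0, delay=1.2, mfg_index=0.7),
    Cell("NAND2x2", area=1.3, delay=1.0, mfg_index=0.2),
]
best = min(candidates, key=cost)
print("selected:", best.name)
```

With the weights shown, the slightly larger but litho-friendlier cell wins, which is the intended behavior: the manufacturing index tips the choice even when pure gate-count optimization would go the other way.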

P&R: Existing P&R tools need to ensure that the design rules are honored, and their task is already driven by a large number of design constraints. Every new technology node brings a new set of design rules that must also be met, making the tools inefficient and difficult to adapt to new requirements. This is why, roughly every two technology nodes, the currently successful P&R tool fades and a new one starts taking over the market. Since it is always better to take the bull by the horns, a P&R tool should work not only from the design rules but also from the underlying manufacturing process, because it is the technology process that determines the design rules the tool must maintain. By adding parameters linked to the technology process, in addition to the design rules, changes in the process can be handled more easily and the life cycle of the P&R tool can be extended. The existing tools already consider design intent during placement and routing; the critical-area analysis done at the post-layout stage should also be performed by the P&R tool itself, so that it can examine the hot spots that are potential sources of random and systematic defects.
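Critical-area analysis of the kind a P&R tool could run incrementally can be sketched as follows: for two parallel wires of length L at spacing s, a circular defect of diameter d bridges them roughly when d > s, giving a critical area of about L * (d - s), averaged over the usual 1/d^3 defect-size density. The geometry and defect-size parameters below are illustrative assumptions.

```python
# Minimal critical-area sketch for random shorting defects between two
# parallel wires. Geometry and defect-size parameters are assumed values.

def critical_area(length_um: float, spacing_um: float, d_um: float) -> float:
    """Critical area (um^2) of one wire pair for defect diameter d:
    zero unless the defect is larger than the spacing."""
    return length_um * max(0.0, d_um - spacing_um)

def avg_critical_area(length_um: float, spacing_um: float,
                      d_min: float = 0.05, d_max: float = 1.0,
                      steps: int = 1000) -> float:
    """Critical area averaged over a 1/d^3 defect-size density (midpoint rule)."""
    total, weight = 0.0, 0.0
    dd = (d_max - d_min) / steps
    for i in range(steps):
        d = d_min + (i + 0.5) * dd
        p = d ** -3  # unnormalized defect-size density
        total += p * critical_area(length_um, spacing_um, d) * dd
        weight += p * dd
    return total / weight

# Tighter spacing -> larger average critical area -> lower expected yield:
for s in (0.20, 0.10):
    print(f"spacing {s:.2f} um: avg critical area "
          f"{avg_critical_area(100.0, s):.3f} um^2")
```

Running such a model per routed net pair would let the router flag the "hot areas" mentioned above and trade a little spacing for a measurable drop in defect sensitivity before the layout ever reaches post-layout analysis.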

New tools and methodologies have become imperative for designs moving to smaller features. There are other issues not discussed here, such as the handling and manipulation of very large layout data as we move to smaller feature technologies. Yield issues are linked to the technology but have not been discussed separately.
