The Quest for More Processing Power, Part One: "Is the single core CPU doomed?"
by Johan De Gelas on February 8, 2005 4:00 PM EST - Posted in CPUs
When a CPU becomes a sieve
The real problem is leakage power, and the Intel power graph below illustrates this perfectly.
Fig 2. "Leakage power grows exponentially."
As you can see, dynamic power - the power that actually does useful work - has increased relatively slowly despite the increase in CPU complexity. Leakage power, however, increases exponentially, not linearly, and has quickly grown from a "minor nuisance" into a "circuit killing monster".
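To see why the two curves diverge so sharply, consider the first-order textbook models below; they are added here for illustration and are not formulas from the article. Dynamic power grows with the switched capacitance, the square of the supply voltage and the clock frequency, while sub-threshold leakage rises exponentially as the threshold voltage V_th is lowered to keep transistors switching fast (v_T is the thermal voltage, about 26 mV at room temperature):

```latex
% First-order CMOS power models (textbook approximations, added for illustration)
P_{dyn} = \alpha \, C \, V_{dd}^{2} \, f
\qquad\qquad
P_{leak} \approx V_{dd} \, I_{0} \, e^{-V_{th} / (n \, v_T)}
```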
Leakage is comparable to a small hole in a firefighter's water hose. The more pressure (i.e. the higher the core voltage), the bigger the hole gets, and the more water leaks to the ground. The thinner the walls of the hose (i.e. a smaller process technology), the quicker the holes grow; and the more water you lose, the harder the pumps must work to deliver the same amount of water to extinguish the fire. If the pumps overheat, you had better throttle them down, or they will stop working after a while.
Leakage happens when part of the current that is supposed to make our transistors switch instead leaks away into the substrate and finally into the ground. There are several leakage currents, but the two most important ones are the gate oxide tunnelling current and sub-threshold leakage.[3]
Fig 3. I3 is the gate oxide tunnelling current; I2 is the sub-threshold leakage current.
Gate oxide tunnelling currents (I3) become more important with smaller process technology, as the gate oxide that is supposed to insulate the transistor becomes thinner and thinner. As a result, current that should go through the transistor leaks away - the gate oxide becomes a sieve instead of the "wall of a tube".
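How sensitive the tunnelling current is to oxide thickness can be made explicit with the generic quantum mechanical barrier model below; this is a textbook approximation added for illustration, not a formula from the article:

```latex
% WKB tunnelling probability through a barrier of height \Phi_B and
% thickness t_{ox} (textbook approximation, added for illustration)
T \;\propto\; e^{-2 \kappa \, t_{ox}},
\qquad
\kappa = \frac{\sqrt{2 \, m^{*} \, \Phi_B}}{\hbar}
```

Because t_ox sits in the exponent, each reduction in oxide thickness multiplies the tunnelling current rather than merely adding to it, which is why every process shrink makes I3 disproportionately worse.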
Sub-threshold leakage (I2) is the leakage current that flows through the transistor when it is supposed to be turned off. To understand this, we have to go back to basic transistor technology.
Normally, a gate voltage of at least x volts is needed to get current across the transistor, with x volts being the threshold voltage. This way, the transistor is used as a switch with a binary function: at or above the threshold voltage = ON = 1; below the threshold voltage = OFF = 0.
The point to remember is this: ideally, as long as the threshold voltage is not reached, no current should flow through the transistor. However, as transistors and interconnects get smaller and smaller (smaller process technology), the insulation between drain and source gets worse and worse. As a result, a small leakage current gets through the transistor (I2) even though the threshold voltage is not reached (the transistor is off).
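A minimal numeric sketch makes the contrast concrete. The Python snippet below compares the ideal switch described above with a leaky real device; the 0.3 V threshold and the 80 mV/decade sub-threshold slope are assumed, typical values, not figures from the article:

```python
# Minimal sketch: ideal switch vs. leaky transistor (currents normalized
# so that the fully-on current is 1.0). Threshold voltage and slope are
# assumed, typical values, not figures from the article.

def ideal_current(v_gate: float, v_th: float) -> float:
    """Ideal switch: exactly zero current below the threshold voltage."""
    return 0.0 if v_gate < v_th else 1.0

def real_current(v_gate: float, v_th: float, slope_mv: float = 80.0) -> float:
    """Real device: below V_th the current falls off exponentially,
    one decade per `slope_mv` millivolts - but it never reaches zero."""
    if v_gate >= v_th:
        return 1.0
    return 10.0 ** ((v_gate - v_th) * 1000.0 / slope_mv)

V_TH = 0.3  # assumed threshold voltage in volts
for vg in (0.0, 0.1, 0.2, 0.3):
    print(f"Vg = {vg:.1f} V   ideal: {ideal_current(vg, V_TH):.1e}   "
          f"real: {real_current(vg, V_TH):.1e}")
```

Even at 0 V on the gate, the "off" transistor still conducts a small but non-zero current; multiply that by tens of millions of transistors, and the watts add up.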
That sub-threshold leakage has become a major problem, as Shekhar Borkar [5] (Intel Fellow, Director of Circuit Research) has made clear. He illustrated it with the logarithmic graph below.
Fig 4. Subthreshold leakage - notice the logarithmic scale!
Subthreshold leakage was only a small problem at the time of Willamette: at 180 nm, it wasted just a few watts. The graph extrapolates along Moore's law - every two years, the number of transistors doubles. As you can see, without countermeasures, devices built on 45 nm technology would not be worth using: they would simply leak too much power, up to 100 Watts!
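In the spirit of that graph, the little sketch below extrapolates leakage across process nodes. The starting point ("a few watts at 180 nm") and the per-node growth factor are assumptions chosen to land near the article's ~100 Watts at 45 nm; they are not Intel's measured data:

```python
# Illustrative extrapolation of sub-threshold leakage across process
# nodes. Starting value and growth factor are assumptions for
# illustration, not Intel's measured data.
nodes_nm = [180, 130, 90, 65, 45]
leakage_w = 2.0        # assumed leakage at 180 nm (Willamette era)
growth = 2.7           # assumed growth factor per full node shrink

for node in nodes_nm:
    print(f"{node:>4} nm: ~{leakage_w:5.0f} W leaked without countermeasures")
    leakage_w *= growth
```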
And sub-threshold leakage is only part of the leakage problem. Together with gate oxide tunnelling, CPUs built on 65 nm technology would leak more power than they need to make the transistors switch. It is comparable to a fuel tank with so many holes that it leaks more gasoline onto the ground than the fuel pump can deliver to the engine.
Let us now look at the third and last problem for high performance CPUs.
Wire delay
It is hard to imagine that the tiny wires - the metal interconnects - between transistors can be a limiting factor. About twenty years ago, transistor switching speeds were quite low, and wire delays were completely ignored. However, as process technology improved, transistors became capable of switching much faster. Right now, the fastest transistors in the labs can attain 100 GHz and more, with the record being around 300-500 GHz. So, transistor switching speed still has a lot of headroom.
The tiny, local wires between the individual transistors are still not the problem. However, functional blocks are also wired to the TLBs (Translation Lookaside Buffers) and the caches, and these global wires are a lot longer. If their RC delay is too high, the clock speed has to be reduced to get a working CPU.
The speeds at which signals travel through the global wires (from the logic blocks to the caches, for example) are quite a bit lower than the maximum that the speed of light allows. The reason is the resistance (R, in ohms) and the capacitance (C) of the wire. As the whole CPU is made with a smaller process technology, the wires shrink too. You probably remember from your physics lessons that resistance increases as the cross-section of a wire gets smaller and as its length gets longer. So, if you shrink a wire, the benefit of the shorter length is completely negated by the smaller cross-section. You could make the wires thicker, but that wouldn't be easy, and it would increase the capacitance of the wire. The result is that wire delay remains, more or less, the same (in nanoseconds).
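The argument can be put into symbols with the standard first-order scaling relations, added here for illustration: shrink every dimension of a wire by a factor s, and the resistance rises by exactly the factor by which the capacitance falls, leaving the RC delay unchanged:

```latex
% Ideal scaling of a wire by a factor s > 1 (first-order textbook model)
R = \rho \, \frac{L}{A} \;\to\; \rho \, \frac{L/s}{A/s^{2}} = s R,
\qquad
C = \varepsilon \, \frac{w L}{d} \;\to\; \varepsilon \, \frac{(w/s)(L/s)}{d/s} = \frac{C}{s},
\qquad
RC \;\to\; (sR)\left(\frac{C}{s}\right) = RC
```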
Gate switching speed, on the other hand, improves a lot with smaller transistors. So, while the RC delay improves by only a very small percentage (or not at all), gates might switch up to 100% faster (a simplified example) as process technology improves. The RC delay of the global wires thus becomes more and more of a bottleneck that makes bumping up the clock speed hard. Modern Integrated Circuits (ICs), such as CPUs, must therefore be partitioned, as a signal can only travel for slightly less than the length of one clock pulse.
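A back-of-the-envelope sketch shows what this means in practice, assuming a signal crawls along an RC-limited global wire at roughly a tenth of the speed of light (an assumption for illustration; real global wires can be slower still) and a die edge of about 15 mm:

```python
# Back-of-the-envelope sketch: how far can a signal travel in one clock
# pulse? Signal speed and die size are assumptions for illustration.
SPEED_OF_LIGHT_M_S = 3.0e8
signal_speed = 0.1 * SPEED_OF_LIGHT_M_S   # assumed: RC-limited global wire
die_edge_mm = 15.0                        # assumed die edge length

for freq_ghz in (1.0, 2.0, 3.8):
    period_s = 1.0 / (freq_ghz * 1e9)
    reach_mm = signal_speed * period_s * 1000.0
    print(f"{freq_ghz:.1f} GHz: at most ~{reach_mm:.0f} mm per clock pulse "
          f"(die edge: {die_edge_mm:.0f} mm)")
```

At 1 GHz, the signal could comfortably cross the die within a clock pulse; at nearly 4 GHz, it can't - so the design must be partitioned into regions that a signal can traverse in time.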
65 Comments
sandorski - Tuesday, February 8, 2005
While reading the article, I couldn't help but think that when Intel states something, it becomes all the buzz in the industry and is accepted as fact. OTOH, AMD has been way ahead of Intel concerning these issues, adopting the technologies needed to avoid them while Intel ran straight ahead into the wall. Given the history between the two, I'd hope that AMD's musings on the future become more relevant, as they seem more in tune with the technology and its limitations. Likely won't happen though.

Mingon - Tuesday, February 8, 2005
I thought originally it was reported that Prescott's ALUs were single-pumped vs. double-pumped for Northwood et al.

segagenesis - Tuesday, February 8, 2005
Heh heh heh, good timing with the recent news. Very well written, with good insight into low level technology.

It is starting to become obvious even to the average joe user now that computing power for PCs has plateau'ed (sp?) over the past year or so. You can have a perfectly functional and snappy desktop with just 2 GHz or less if you use the right apps.
I think the recent walls hit by processor technology should be an indication for developers to work better with what they have rather than keep demanding more power. We used to make jokes about how much processor power is needed for word processing, but considering MS Word really runs no faster than it did on a P2-266 MHz in Office 97... urrrgh.
sandorski - Tuesday, February 8, 2005
hehe, you said, "clocks peed" hehe :D (Chapter 1)
good article.
Ender17 - Tuesday, February 8, 2005
Interesting. Great read.