Intel's 65nm Processors: Overclocking Preview
by Anand Lal Shimpi on October 25, 2005 12:05 AM EST - Posted in CPUs
Power Consumption of Intel's 65nm Processors
Other than poor performance, extremely high power consumption has been a frequently voiced criticism about Intel's Prescott. Thanks to its 31+ stage pipeline and high clock speeds, the Pentium 4 and Pentium D tend to draw quite a bit of power. How does 65nm change the power consumption landscape?
First up is Cedar Mill:
At idle, Cedar Mill doesn't really draw all that much less power than Prescott. We measured only a 3W difference at 3.6GHz, and since both Cedar Mill and Prescott implement Intel's Enhanced SpeedStep (EIST) and the new C1E enhanced halt state, idle power consumption ends up essentially the same on both.
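As a quick sanity check that EIST/C1E is actually kicking in at idle, the current core clock can be read back from the OS rather than taken on faith. The snippet below is a minimal sketch only, assuming a Linux system with the standard cpufreq sysfs interface available; it is not part of the testing methodology used for this article.

```c
/* Minimal sketch (assumes Linux with the cpufreq driver loaded): read the
 * current clock of cpu0 from sysfs to confirm that EIST/C1E has dropped
 * the frequency at idle. Not part of the article's test setup. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
    if (!f) {
        perror("cpufreq interface not available");
        return 1;
    }

    unsigned long khz = 0;                 /* sysfs reports the clock in kHz */
    if (fscanf(f, "%lu", &khz) == 1)
        printf("cpu0 current clock: %.2f GHz\n", khz / 1e6);
    fclose(f);
    return 0;
}
```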
Next up, we loaded two threads of POV-ray's benchmark to fully load the CPUs and compare power consumption under load:
Under full load, the Cedar Mill system at 3.6GHz drew 176W while the Prescott system at 3.6GHz pulled 213W, roughly 21% more power than Cedar Mill. The move to 65nm definitely helps, but AMD still has the low power advantage. While we didn't have an identical AMD system on hand to compare exact numbers, Intel's 90nm Pentium 4 600 series chips have generally consumed as much as 50% more power than AMD's 90nm offerings.
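For reference, the 21% figure is simply the ratio of the two wall-socket readings. The sketch below redoes that arithmetic and shows the kind of bare-bones two-thread busy loop that could stand in for the workload if all you need is to keep both logical CPUs pegged while reading the power meter; POV-Ray is what was actually used here, so the spin loop is only an illustrative substitute.

```c
/* Sketch only: the article loaded the CPUs with two POV-Ray threads. This
 * generic stand-in just keeps two logical CPUs 100% busy so wall-socket
 * power can be read under full load, and redoes the 176W vs. 213W math. */
#include <pthread.h>
#include <stdio.h>

static void *spin(void *arg)
{
    volatile double x = 1.0;
    for (;;)                               /* busy loop; kill the process to stop */
        x = x * 1.000001 + 0.000001;
    return arg;                            /* unreachable */
}

int main(void)
{
    /* Wall-socket readings reported above. */
    const double cedar_mill_w = 176.0, prescott_w = 213.0;
    printf("Prescott draws %.0f%% more power than Cedar Mill under load\n",
           (prescott_w - cedar_mill_w) / cedar_mill_w * 100.0);   /* ~21% */

    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, spin, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);          /* never returns; workers spin forever */
    return 0;
}
```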
Next, we have Intel's dual core processors - Smithfield (90nm) and Presler (65nm):
We only had a 2.8GHz Smithfield on hand at the time of testing, but even so, Presler at 3.40GHz draws slightly less power at idle than Smithfield at 2.8GHz.
Now under full load, Presler at 3.40GHz still consumes roughly the same amount of power as Smithfield at 2.8GHz, despite its 600MHz clock speed advantage. Once again, we see a decent improvement thanks to the lower power consumption of Intel's 65nm process, but Intel will still need to look towards Conroe, Merom and Woodcrest to finally become competitive in power consumption.
43 Comments
Griswold - Tuesday, October 25, 2005 - link
I didn't see anything about (real) stability and throttling. POV-Ray is all nice and such, but does it put as much stress on the core(s) as, let's say, S&M 1.7.3?
Also, I haven't seen any temperatures. :)
Granted, it's just a preview, but still. I'm mostly interested in throttling. 4.5GHz in CPU-Z is cool, but does it deliver that when you heat up the kitchen?
tuteja1986 - Tuesday, October 25, 2005 - link
Is Intel back in the high performance segment? Can they finally defeat AMD in gaming benchmarks? What does AMD have in store for us? Do you guys think that Intel is going to pwn AMD? :( Questions... Anyways, this is a good sign of competition from Intel, and I am now interested in getting an Intel 65nm chip if it performs well and is very competitive in gaming benchmarks.
Shintai - Tuesday, October 25, 2005 - link
65nm P4s are only a temporary thing. They used the 65nm process to make a cheaper product, improve the performance/watt situation, and bring the P4 down to acceptable power and heat levels. However, Intel couldn't care less about a 4GHz P4 etc. for one reason: Conroe will bury the P4 in week 36 of 2006. And Yonah will take a huge part as well until Conroe/Merom. NetBurst is dead, all hail the Pentium M and its successor.
KristopherKubicki - Tuesday, October 25, 2005 - link
The first-gen 65nm parts are definitely a temporary thing. As we all have seen from the roadmaps, Conroe is the real chip.
Kristopher
NullSubroutine - Tuesday, October 25, 2005 - link
I found it very interesting that Intel actually put two separate cores on two separate dies in the new dual core chip. This is very interesting because it lowers cost (a defective die only scraps one core, not two), which could put this processor at a lower price point than AMD's. And if you like heavy encoding, where NetBurst has always done well, this chip could be your best bet, especially if the increase in cache boosts the benchmarks the way it notably has for the P4. IMHO
Viditor - Tuesday, October 25, 2005 - link
It does indeed increase yield (thus decreasing cost), however we have no way of knowing how good the original yield is, so they may still be more expensive to produce.
It also decreases performance by increasing the latency between cores and increasing the bandwidth requirements of the FSB...
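The cost argument above can be made a little more concrete with a textbook Poisson yield model. The defect density and die area in this sketch are made-up illustrative values, not Intel figures; the point is only that splitting a dual-core package into two dies doesn't change the raw probability math, but it does change how much silicon a single defect scraps and lets good dies be re-paired.

```c
/* Illustrative only: textbook Poisson yield model, Y = exp(-D * A), with
 * made-up numbers. Two half-size dies have the same combined probability
 * of both being good as one monolithic die of equal total area, but a
 * defect on a split design scraps only half the silicon, and good halves
 * can be re-paired into working packages. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double defects_per_cm2 = 0.5;   /* assumed defect density */
    const double total_area_cm2  = 1.6;   /* assumed total dual-core area */

    double y_mono = exp(-defects_per_cm2 * total_area_cm2);       /* one big die */
    double y_half = exp(-defects_per_cm2 * total_area_cm2 / 2.0); /* one small die */
    double y_pair = y_half * y_half;      /* both halves of one package good */

    printf("monolithic dual-core die yield: %.1f%%\n", y_mono * 100.0);
    printf("single half-size die yield:     %.1f%%\n", y_half * 100.0);
    printf("two halves both good:           %.1f%%\n", y_pair * 100.0);
    return 0;
}
```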
JarredWalton - Tuesday, October 25, 2005 - link
I have to say that I'm a little skeptical on the whole core-to-core bandwidth topic. I think there's a lot less inter-core communication than some people think. The FSB latency and bandwidth are the bigger question, so really I doubt that having split cores (Presler) is any worse than Smithfield - the extra cache probably more than compensates.
Of course, X2 still has the latency advantage (by a huge margin), but I'm only looking at the Intel side here. I mean, we can't really draw any conclusions from Presler vs. Toledo other than to (most likely) say that Toledo is faster. Why it's faster is almost certainly due more to the overall architecture and design than to the faster inter-core communication.
It will be interesting to see if Intel can get latencies down with future chips without resorting to an integrated memory controller. I believe at present Intel's best is around 100ns latency for RAM while AMD is 30 to 40ns. If Intel could even get their chips to 75ns, it would be a huge improvement.
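Latency figures of that kind usually come from a pointer-chasing microbenchmark: a buffer much larger than the caches is walked through a randomized chain of dependent loads so that every access has to go out to DRAM. The sketch below shows the general idea only; the buffer size and hop count are arbitrary assumptions, and it is not the tool behind the numbers quoted here.

```c
/* Minimal pointer-chase sketch of how RAM latency numbers like these are
 * typically measured: walk a random cyclic chain of dependent loads through
 * a buffer far larger than the caches, so prefetchers can't hide the trip
 * to DRAM, then divide total time by the number of hops. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (64 * 1024 * 1024 / sizeof(size_t))   /* ~64MB buffer (assumed > L2) */

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof(size_t));
    if (!chain) return 1;

    /* Sattolo shuffle: builds a single random cycle so the walk can't get
     * stuck in a short loop that fits in cache. */
    for (size_t i = 0; i < ELEMS; i++) chain[i] = i;
    for (size_t i = ELEMS - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 16) ^ (size_t)rand()) % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    const size_t hops = 20 * 1000 * 1000;
    size_t idx = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++)
        idx = chain[idx];                  /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average load-to-use latency: %.1f ns (idx=%zu)\n", ns / hops, idx);
    free(chain);
    return 0;
}
```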
Viditor - Tuesday, October 25, 2005 - link
Fair enough Jarred...but we can probably get a good idea of the core-to-core advantages by comparing a dual core Opteron to a 2P Opteron rig at the same speeds... As to Presler v Smithfield, maybe I'm confused, but isn't Smithfield a split core as well? Any corrections greatly appreciated!
JarredWalton - Tuesday, October 25, 2005 - link
Heh - from this article by Anand, I gathered that Smithfield wasn't split and Presler is. Needless to say, I haven't personally pried off the heat spreader on my Smithy, so I don't know for sure. :) (I'm also open to enlightenment if someone has definite evidence - I'm being too lazy to research it right now!)
2P Opteron vs. DC Opteron isn't quite the same as split core vs. single core, though. The difference between communications sent over the FSB, through the chipset, and to a separate socket is going to be quite a bit larger than simply splitting the cores. I'm sure a unified core has faster inter-core communication speeds, but the question is: how fast do they need to be?
If such signaling only occurs on rare occasions, the difference between split cores and unified cores (of dual-core packages) may be less than 1 or 2 percent in real-world testing. There's also the question of 2P vs. DC on Intel in contrast to the same on AMD. Intel tends to have more bus bandwidth and lower latency, and perhaps better RAM prefetch logic as well. Opteron DC might be 5 to 10% faster while Intel would only be 2% faster, or maybe it's the reverse of that. (Again, I'm being lazy.)
Basically, since the architectures of NetBurst and K8 are vastly different, DC/SMP/etc. can benefit - or not - from technologies to varying degrees. And yes, I realize for many that's going to be a "duh!" statement. "Hey people - bananas are very different from oranges!" Shock and awe.... Still, it bears mention since we still have people out there that don't understand that pipeline stages, architectural designs, etc. are at least as important as raw clock speeds.
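One way to put a rough number on "how fast does inter-core signaling need to be" is a cache-line ping-pong test: two threads pinned to different cores take turns flipping a shared flag, so every handoff forces the line to migrate from one core's cache to the other's. The sketch below is only a back-of-the-envelope illustration, assuming Linux/GCC, at least two logical CPUs, and x86's strong memory ordering; it is not something that was run for this article.

```c
/* Rough cache-line ping-pong sketch: two threads pinned to different cores
 * alternately flip one shared, cache-line-aligned flag. Half the average
 * round-trip time is a crude estimate of core-to-core handoff latency.
 * Assumes Linux/GCC and relies on x86's strong memory ordering. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static volatile int flag __attribute__((aligned(64))) = 0;

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg)
{
    pin_to_cpu(1);                         /* assumes at least two logical CPUs */
    for (int i = 0; i < ROUNDS; i++) {
        while (flag != 1) ;                /* wait for ping */
        flag = 0;                          /* pong */
    }
    return arg;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, ponger, NULL);
    pin_to_cpu(0);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        flag = 1;                          /* ping */
        while (flag != 0) ;                /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("approx. one-way handoff latency: %.0f ns\n", ns / ROUNDS / 2.0);
    return 0;
}
```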
AndreasM - Tuesday, October 25, 2005 - link
Google smithfield+intel: http://images.google.com/images?q=smithfield+intel