The Nehalem Preview: Intel Does It Again
by Anand Lal Shimpi on June 5, 2008 12:05 AM EST - Posted in CPUs
The Return of Hyper Threading
While Nehalem is designed to scale up to 8 cores per chip, each of those cores has the hardware necessary to execute two threads simultaneously - yep, it's the return of Hyper Threading. Thus our quad-core Nehalem sample appeared as 8 logical cores under Windows Vista:
Four cores, eight threads, all in a desktop CPU
Note that, as in previous implementations of Hyper Threading (and other SMT processors), this isn't a doubling of execution resources; it simply allows two instruction threads to make their way down the pipeline at the same time, making better use of otherwise idle execution units. Having 8 physical cores will obviously be faster, but 8 logical cores (4 physical) is a highly power-efficient way of increasing performance.
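To see what the OS actually reports, here's a minimal sketch in Python that compares logical and physical processor counts, assuming the third-party psutil package is installed (os.cpu_count alone only reports logical processors). On an SMT-enabled quad core it should print 4 physical cores and 8 logical processors:

import os
import psutil  # third-party package, assumed available (pip install psutil)

logical = os.cpu_count()                    # logical processors - what Windows shows
physical = psutil.cpu_count(logical=False)  # physical cores
print(f"{physical} physical cores exposed as {logical} logical processors")
if logical and physical and logical > physical:
    print("SMT (Hyper-Threading) appears to be enabled")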
We took Valve's Source engine map compilation benchmark and measured the time needed to execute one instance (4 threads) vs. two instances of the benchmark. The graph below shows the increase in compilation time when we double the workload:
While the 2.66GHz Core 2 Quad Q9450 (Penryn) takes another 127 seconds to execute twice the workload, the 2.66GHz Nehalem only needs another 49 seconds. And if you're curious, this quad-core Nehalem running at 2.66GHz is within 20% of the performance of an eight-core 3.2GHz Skulltrail system. Equalize clock speed and we'd bet that a quad-core Nehalem would be the same speed as an 8-core Skulltrail here. The raw performance numbers are below:
We couldn't disable Hyper Threading, so we reached the limits of what we were able to investigate here.
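If you want to approximate the shape of this test on your own hardware, here's a rough Python sketch of the methodology; a generic CPU-bound task stands in for Valve's compile tool (an assumption, not the actual benchmark), and the 4/8 worker counts assume a quad-core part. Time a fixed per-worker workload with one worker per physical core, then with twice as many workers, and compare how much extra time the doubled workload costs:

import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    # CPU-bound stand-in for one compile thread: a simple sum of squares.
    return sum(i * i for i in range(n))

def run_instances(workers, work=5_000_000):
    # Launch 'workers' copies of the task in parallel and return the wall time.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [work] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    t_one = run_instances(4)   # "one instance": one worker per physical core
    t_two = run_instances(8)   # "two instances": double the workload
    print(f"4 workers: {t_one:.1f}s  8 workers: {t_two:.1f}s  "
          f"extra time for double the work: {t_two - t_one:.1f}s")

On a quad core without SMT, doubling the workers roughly doubles the wall time; with SMT enabled, the extra workers can fill otherwise idle execution slots, so the added time should be noticeably smaller - the same effect the Penryn vs. Nehalem numbers above show.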
108 Comments
ForumMaster - Thursday, June 5, 2008 - link
If you'd bother to read the article, it states quite clearly that there are PCI-E issues which prevent any GPU testing as of now; it says the motherboard makers need another month to iron out the issues. What amazed me is how much better the performance is at this point. When Nehalem is optimized, wow.
JPForums - Thursday, June 5, 2008 - link
I wouldn't expect much further optimization of the CPU. The only code optimizations that couldn't have been made for Penryn would be further threading. Motherboard and chipset optimizations could make a difference, but only up to the point where they are mature. After that there will be little to differentiate CPU performance.
As with AMD, implementing the on-die memory controller gives Intel a free performance boost. There are no new instructions to implement, and the improvement doesn't apply only to rare or obscure scenarios. Getting data into the core faster and with less latency simply makes everything faster. It also serves to further minimize performance differences between supporting platforms.
What impresses me is that Intel got it right on the first try. It doesn't really surprise me as they have far more resources to work with, but it is nonetheless impressive.
The article mentions that Pat said you can only add a memory controller once. Is this somehow different from any other architecture improvement? You can only add SSE, hyperthreading, or a new divider architecture once. Improvements to SSE2 and beyond, or adding more thread support to hyperthreading, are no different than putting in a DDR4 (or newer) controller with 4 (or more) channel support. Note: I don't advocate trying to add further thread support to hyperthreading. In fact, one of the few architectural changes I can think of that can be used more than once is increasing the cache size. Since AMD can't keep up with Intel's cache sizes due to process inferiority, it seems like an obvious viewpoint for Intel to take.
I suspect that the real reasons Intel didn't move over sooner were:
1) They didn't want to be seen as trailing AMD.
2) More importantly, an on-die memory controller reduces the advantage of larger caches. Put another way, from Intel's perspective, a P4 processor would not have seen nearly as much benefit from an on-die memory controller because of its heavy reliance on large caches. Benchmarks showed that raw memory bandwidth was great for the P4s, but they couldn't have cared less about memory latency (the largest advantage of an on-die memory controller) because it was hidden by the large cache. Fast forward to today's Core 2 and you'll see major performance increases from lowering memory latencies on the X38/X48 chipsets. I believe this is true even to the point that the best performance isn't necessarily in line with the highest frequency overclock anymore. Even though Core 2 has an even larger cache, it doesn't rely on it as much. Consider how much closer Intel's budget-line (read: low-cache) processors are to the mainstream parts than they were in the P4 era.
Intel was not going to introduce an on-die memory controller on an architecture where it made little sense to add one. While it would have made sense with Core 2, it would've taken much longer to get the chips out with one, and Intel didn't have the luxury of time at Conroe's launch. Further, Intel would have needed to give something up to put it in, and it is debatable whether they would have seen the same performance improvements depending on what got left out or changed. In conclusion, Intel added the on-die memory controller when it made the most sense.
HyperTransport, on the other hand, is something Intel could have used a long time ago, though they probably left it out because they weren't getting rid of the front side bus and didn't want even more communication paths. QuickPath is a welcome improvement, though I'd really like to delve into the details before comparing it to HyperTransport. It'll be a real shame if they use time-division multiplexing for the switching structure. (Assuming it supports multiple paths, of course.)
The article mentions that the cache and memory latencies are superior to Phenom's. While this is true, I don't really think AMD screwed anything up. Rather, Intel is simply enjoying the benefits of its smaller process technology and newer memory standard. You need look no further than AnandTech to find articles explaining the absolute latency differences between DDR2 and DDR3. Intel's memory latencies may still be a bit lower than AMD's once AMD moves to the smaller process with a DDR3 controller, but I doubt the difference will be earth-shattering.
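(For anyone curious how those latency figures are measured, the standard trick is a dependent pointer chase: each load's address comes from the previous load, so the CPU can't overlap them and you see the raw round-trip time. Here's a rough, illustrative sketch in Python; interpreter overhead dominates the absolute numbers, so only the jump from a cache-resident to a RAM-sized working set is meaningful, and a real microbenchmark would be written in C or assembly.)

import random
import time

def ns_per_hop(n_slots, steps=1_000_000):
    # Link the slots into one ring in random order, so every hop is a
    # dependent, effectively random access into the working set.
    order = list(range(n_slots))
    random.shuffle(order)
    nxt = [0] * n_slots
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    i = 0
    start = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]  # each load depends on the previous one
    return (time.perf_counter() - start) / steps * 1e9

if __name__ == "__main__":
    for slots in (1_000, 100_000, 10_000_000):  # roughly cache-sized up to RAM-sized
        print(f"{slots:>10} slots: {ns_per_hop(slots):6.1f} ns per hop")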
The good news for AMD is that Intel has essentially told them that they are on the right path with their architecture design. The bad news is that Intel also just told them that it doesn't matter if they are right; Intel is so fast that they can take a major detour and still find their way to the destination before AMD arrives. Hopefully, AMD will pick up speed once they're done paying for the ATI merger.
Gary Key - Thursday, June 5, 2008 - link
I was able to view but not personally benchmark a recently optimized Bloomfield/X58 system this week and it was blindingly fast in several video benchmarks compared to a QX9650. These numbers were before the GPUs became a bottleneck. ;)
BansheeX - Thursday, June 5, 2008 - link
The performance conclusion might be a good example of why a monopoly is neither self-perpetuating nor inherently bad for the consumer. It IS possible for a virtual monopoly like Intel to be making the best product for the consumer. Perhaps the fear of losing that position alone is enough for such companies not to become complacent or overprice their products, as doing so would open a window for smaller capital to come in and take market share. Just keep them away from subsidies and other special privileges, and the market will always work out for the best. You listening, Europe?
Chesterh - Saturday, August 9, 2008 - link
Go back to school and take Economics 101. Monopolies and corporate consolidation are part of the reason our economy is in the crapper right now. If the US government had actually been enforcing the antitrust regulations on the books, we might have done the same thing as Europe and slapped Microsoft on the wrist as well.
Besides, Intel does not have a 'virtual monopoly', or any kind of monopoly. AMD is not out of the game; they are down in the high-end CPU segment, but they are definitely not out. The only reason Intel is still releasing aggressively competitive new products is that it doesn't want to lose its lead over AMD. If there were no AMD, we might not even have multicore procs at this point.
SiliconDoc - Monday, July 28, 2008 - link
Uhhh, are you as enamored with Intel as Anand has been for as many years? Did you say a monopoly wouldn't "overprice its products"?
Ummm... so like after they went insane locking their multipliers because they hate us overclockers (they can whine about chip remarkers - whatever), they suddenly... when they could make an INSANE markup... ooooh..... they UNLOCKED the processor they "locked up on us"... made a "cool tard" marketing name and skyrocketed the price...
Well - you know what you said... like I wish some hot 21yr. old virgin would kiss me that hard.
With friends like you monopolies can beat all enemies... ( not that there's anything wrong with that).
Grrrrr -
I know, you're just being positive, something I'm not very good at.
I lost my cool when I read "the 1366 pin socket will be a knocked down 1066 version 'for the public'" or the like....
You never get the feeling your 3rd phalanx is permanently locked to your talus, both sides?
Hmmm.... man we're in trouble.
Grantman - Sunday, July 6, 2008 - link
A monopoly is the worst thing that could happen for consumers in every regard, and the idea that smaller businesses could easily enter the processor market and gobble up market share is laughable. Firstly, once a monopoly is established they can set prices at any height they want. Do you really think smaller players can take advantage of the opportunity and enter the sophisticated CPU market without being crushed by aggressive price cutting? Not only that, but as the monopoly collects its close-to-100% market share revenue, it will reach a point where its research and development is simply far ahead of any hopeful entrant, thus perpetuating its ongoing domination.
Justin Case - Sunday, June 8, 2008 - link
The only reason why Intel came out with these CPUs at all is that there _is_ competition. Without that, Intel would still be run by its marketing department and we'd be paying $1000 for yet another 100 MHz increase on some Pentium 4 derivative.
The words "a monopoly isn't a bad thing for consumers" sound straight out of the good ol' USSR. You need to study some Economics 101; you clearly don't understand what "barriers to entry" or "antitrust" mean.
mikkel - Sunday, August 10, 2008 - link
Are you honestly suggesting that desktop performance requirements are the only thing driving processing innovation? I'm fairly sure you'd find a good number of server vendors whose customers wouldn't be satisfied with P4 derivatives today.
There are market forces that Intel couldn't hold back even if it wanted to.
adiposity - Friday, June 6, 2008 - link
AMD is not dead yet and is still undercutting Intel at every price point it can. Intel will not rest until AMD is dead or completely non-competitive. At that point we may see a return to the arrogant, bloated Intel of old.
All that said, their engineers are awesome and deserve credit for delivering again and again since Intel decided to compete seriously. They have done a great job and provided superior performance.
The only question is: will Intel corporate stop funding R&D and just rake in profits once AMD is dead and gone? Unless they get lucky in court in 2010, I think AMD's death is now a foregone conclusion.
Dan