The Nehalem Preview: Intel Does It Again
by Anand Lal Shimpi on June 5, 2008 12:05 AM EST - Posted in CPUs
The Socket
With an integrated memory controller, Intel needed a new pinout for Nehalem and the first version with three 64-bit DDR3 memory channels features a 1366-pin LGA interface:
LGA-1366 (left) vs. LGA-775 (right)
The socket is noticeably bigger than LGA-775, as is the mounting area for heatsinks. You can't reuse LGA-775 heatsinks; Nehalem coolers need mounting holes spaced farther apart. As far as we can tell, Nehalem retains the same push-pin mounting mechanism as LGA-775, which is disappointing.
With a larger socket and more pins, the CPU itself is obviously bigger. Here's a shot of our Nehalem compared to a Core 2 Duo E8500:
Nehalem (left) vs. Penryn (right)
Intel will obviously offer dual-channel versions of Nehalem in the future; unfortunately, it looks like mainstream versions of the chip will use a smaller socket (LGA-1160?). We won't have to deal with socket segmentation just yet, and it's always possible that Intel will choose to stand behind a single socket for the majority of the desktop market, reserving LGA-1366 for a Skulltrail-like high end, but the strategy is unclear at this point.
108 Comments
mkruer - Thursday, June 5, 2008 - link
Not a problem. I tend not to take most things at face value. Looking at Nehalem, its focus was to increase multithreaded performance, not single-threaded apps per se. This would put it more in line with what AMD is offering in per-core scalability. Nehalem will get Intel back into the big-iron scalability market that it lost to AMD.
My guess is that Nehalem will not give users any real advantage in games or other single-threaded apps, unless the game or app supports more than one thread.
The final question is posed back to AMD. If AMD gets its single-threaded IPC and clock speeds up, then both platforms should be nearly identical from a performance standpoint. Then it's just down to price, manufacturing, and distribution. I just hope AMD's claims of a 15-20% improvement in per-core IPC are true. That would make this holiday season much more interesting.
Anand Lal Shimpi - Thursday, June 5, 2008 - link
Nehalem most definitely had a server focus going in, but I wouldn't underestimate what the IMC will do for CPU-bound gaming performance. Don't forget what the IMC did for the K8 vs. the Athlon XP way back when... As far as AMD goes, clock speed issues should get resolved with the move to 45nm. The IPC stuff should get taken care of with Bulldozer; the question is, when can we expect Bulldozer?
JumpingJack - Saturday, June 7, 2008 - link
Don't count on 45 nm clocking much higher than 65 nm, maybe another bin or so... gate leakage and SCE (short-channel effects) are limiting, and they're the reason for the sideways move from 90 to 65 nm to begin with (the traditional gate oxide, SiO2, did not scale from 90 to 65 nm)... the next chance for a decent clock bump will come with their inclusion of HKMG, which from the rumor mill isn't until 1H09.
fitten - Friday, June 6, 2008 - link
AMD hasn't really resolved any clock speed issues in the moves from 130nm -> 90nm -> 65nm (look at the top-speed 130nm parts compared to the top-speed 65nm parts). During some of those transitions, the introductory parts were actually clocked lower than the fastest parts on the previous process and didn't catch up for some time.
bcronce - Thursday, June 5, 2008 - link
Does anyone know why Intel is claiming NUMA on these? I'm assuming you need a multi-CPU system for such uses, but how is the memory segmented such that it's NUMA?
bcronce - Thursday, June 5, 2008 - link
Seems Ars Technica (http://arstechnica.com/articles/paedia/cpu/what-yo...) has info on NUMA. Assuming more than one node is used, each node connects to the memory hub and gets assigned its own *default* memory bank. A one-node computer won't see any difference, but a 2-4 node system will get a default memory bank and reduced latencies. A node can interleave data among the 2-4 memory banks, but DDR3 is freak'n fast, and it's probably best to just stream from your own bank to reduce contention among the nodes.
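[Editor's note: the local-vs-interleaved allocation tradeoff described above can be sketched in a few lines. This is a hypothetical Python illustration of page-granularity round-robin interleaving, as used by OS-level interleave policies; the function names are ours, and none of this describes Nehalem's actual hardware implementation.]

```python
PAGE_SIZE = 4096  # typical x86 page size in bytes

def home_node_interleaved(addr: int, num_nodes: int) -> int:
    """Interleave policy: consecutive pages land on consecutive nodes
    in round-robin order, spreading bandwidth across all memory banks."""
    return (addr // PAGE_SIZE) % num_nodes

def home_node_local(requesting_node: int) -> int:
    """Node-local (first-touch) policy: memory comes from the requesting
    node's own default bank, minimizing latency and cross-node contention."""
    return requesting_node

# Two-node system: even-numbered pages sit on node 0, odd on node 1.
for page in range(4):
    print(f"page {page} -> node {home_node_interleaved(page * PAGE_SIZE, 2)}")
```

Under the interleave policy a single streaming thread pulls from every bank in turn; under the local policy it stays on its own bank, which matches the commenter's point about reducing contention between nodes.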
RobberBaron - Thursday, June 5, 2008 - link
I think there are going to be other issues revolving around this chip. For example: http://www.fudzilla.com/index.php?option=com_conte...
Nvidia's Director of PR, Derek Perez, has told Fudzilla that Intel won't let Nvidia make an Nforce chipset that works with Intel's Nehalem generation of processors.
We confirmed this from Intel's side, as well as from other sources. Intel told us that there won't be an Nvidia chipset for Nehalem. Nvidia calls this a "dispute between companies that they are trying to solve privately," but we believe it's much more than that.
AmberClad - Thursday, June 5, 2008 - link
That still leaves you with CrossFire and cards with multiple GPUs like the 9800 GX2. It's a tiny fraction of the market that actually uses SLI anyway. Eh, who knows, maybe Nvidia will finally cave and grant that SLI license, and we'll finally have decent chipsets with SLI.
chizow - Thursday, June 5, 2008 - link
Agreed, as much as I love NV GPUs, I'm tired of having SLI tied to NV's buggy chipsets. Realistically, I'd probably just get an Intel chipset with Nehalem even if there were an Nforce SLI variant, and just go with the fastest single-GPU card.
Baked - Thursday, June 5, 2008 - link
Maybe I can finally grab that E8400 when it drops to $50.