Intel's 45nm Dual-Core E8500: The Best Just Got Better
by Kris Boughton on March 5, 2008 3:00 AM EST - Posted in CPUs
What's the next best thing to an Intel 45nm quad-core processor? Why, a 45nm dual-core, of course. At least that's what Intel seems to be saying lately. While we tend to agree, there are more than a few important considerations to take into account when deciding which CPU is best suited for your intended uses. Choosing a CPU can be as personal an experience as buying a new car. While you know what you want, it really comes down to what you need, what you can afford, and more importantly, what makes sense. Although the four-core model that easily overclocks to 4GHz or higher on air alone certainly does sound sexy, the brown sub-compact in the corner of the lot may be just what you're looking for. Don't worry though; either way, Intel has an answer for you…
Intel's "tick-tock" strategy gives us a very early glimpse at the future of micro processing. If all goes well, Moore's Law should be as true in 2010 as it is today…
Amid rumors of manufacturing problems, the next step in Intel's accelerated "tick-tock" strategy - which pledges process-technology shrinks of existing designs and the introduction of entirely new core architectures in alternating years of a two-year cycle - comes in the form of a new line of 45nm dual-core processors, codenamed Wolfdale. Built on the familiar Core 2 architecture, these processors feature a few notable changes with some rather large implications for the overclocking crowd, all of which we will discuss in more detail later. For starters, advancements in process technology have allowed Intel to shrink the transistors used in these CPUs from last generation's 65nm down to 45nm, allowing for a ~50% reduction in die size for an equivalent design (a linear shrink from 65nm to 45nm scales area by roughly (45/65)² ≈ 0.48).
The changes don't end there; a few core processing modifications have been made, making Wolfdale a little faster clock-for-clock than Conroe. These changes include but are not limited to: a new Radix-16 divider that nearly doubles the speed of division operations, the introduction of 47 new Intel Streaming SIMD Extensions 4 (SSE4) instructions (many perfect for HD video production and decoding), and a unique 128-bit wide Super Shuffle Engine that significantly improves performance for all SSE-related instructions (e.g. content creation, image manipulation, and video encoding). Unfortunately, it will take some time for software developers to catch up with most of these innovations, but eventually we should see more and more programs and games that show the true power of these feature sets.
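To make the SSE4 point concrete, here is a minimal sketch (ours, not Intel's sample code) using one of the 47 new instructions: DPPS, the packed single-precision dot product, exposed in C as the _mm_dp_ps intrinsic. It assumes a compiler with SSE4.1 support and a Penryn-class CPU:

```c
/* Minimal SSE4.1 example: DPPS via the _mm_dp_ps intrinsic.
 * Build with SSE4.1 enabled, e.g. gcc -msse4.1 dot.c */
#include <smmintrin.h>  /* SSE4.1 intrinsics */
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f); /* {1,2,3,4} */
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f); /* {5,6,7,8} */

    /* Mask 0xF1: multiply all four element pairs (high nibble),
     * write the summed result to element 0 (low nibble). One
     * instruction replaces the multiply/shuffle/add sequence
     * older cores needed. */
    float dot = _mm_cvtss_f32(_mm_dp_ps(a, b, 0xF1));
    printf("dot product = %.1f\n", dot); /* 1*5 + 2*6 + 3*7 + 4*8 = 70.0 */
    return 0;
}
```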
The layout of discrete components on the bottom of any Intel CPU is an easy way to quickly determine which product series you hold in your hands. This is what a 45nm E8000-series dual-core looks like.
Finally, the L2 cache size has been substantially increased. The E8000-series processors will feature up to 6MB of shared L2 cache, up from 4MB per core pair. However, the larger L2 cache comes with a move from the previous low-latency 8-way set-associative scheme at 4MB to a more complicated 24-way set-associative cache at 6MB, adding precious nanoseconds to each data fetch. The larger cache is technically "better", but the higher latencies will in some cases negate the benefit, so this is not a clear improvement in 100% of cases. There has been no formal word from Intel as to whether the added latency is an unavoidable consequence of the larger cache or an intentional design change.
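As a back-of-the-envelope illustration of what the associativity change means, the set count of a set-associative cache is capacity / (ways × line size). A minimal sketch, assuming the 64-byte line size both generations use and the capacity and way figures quoted above:

```c
/* Back-of-the-envelope cache geometry; capacity and way counts are
 * the figures from the text above, line size assumed 64 bytes. */
#include <stdio.h>

int main(void)
{
    const unsigned line_bytes    = 64;
    const unsigned conroe_size   = 4u * 1024 * 1024, conroe_ways   = 8;
    const unsigned wolfdale_size = 6u * 1024 * 1024, wolfdale_ways = 24;

    /* sets = capacity / (associativity * line size) */
    printf("Conroe 4MB:   %u sets of %u ways\n",
           conroe_size / (conroe_ways * line_bytes), conroe_ways);
    printf("Wolfdale 6MB: %u sets of %u ways\n",
           wolfdale_size / (wolfdale_ways * line_bytes), wolfdale_ways);

    /* More ways per set means more tags to compare on every lookup,
     * which is one source of the extra access latency. */
    return 0;
}
```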
All 45nm dual-core Intel CPUs will operate at a default bus speed of 333MHz (1333MHz quad-pumped), which is needed in order to give the included store-forwarding technology and intelligent prefetch algorithms the fast memory access they require. These background processes, combined with the large L2 cache, are instrumental in Intel's recent success in hiding most of the memory access latency experienced with many older designs. Although memory access operations are still slower than desired, more cache means these processes are able to look farther ahead, fetch more data into the L2, and increase the chances that an incorrect branch assumption will not result in a costly data stall. The move to an integrated memory controller (IMC), like the one in Nehalem planned for a Q4 2008 release, will largely remove the need for such large caches.
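For reference, the arithmetic behind "333MHz (1333 quad-pumped)" works out as follows; this is a sketch assuming the standard 64-bit LGA775 front-side bus data path:

```c
/* Quad-pumped FSB arithmetic: four transfers per bus clock over
 * a 64-bit (8-byte) data path. */
#include <stdio.h>

int main(void)
{
    const double bus_clock_mhz  = 333.33; /* base FSB clock */
    const double transfers      = 4.0;    /* "quad-pumped" */
    const double bytes_per_xfer = 8.0;    /* 64-bit data path */

    double mts = bus_clock_mhz * transfers;      /* ~1333 MT/s */
    double gbs = mts * bytes_per_xfer / 1000.0;  /* ~10.7 GB/s */
    printf("%.0f MT/s -> %.1f GB/s peak bandwidth\n", mts, gbs);
    return 0;
}
```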
The Intel Core 2 Duo E8500 processor promises to be the fastest, most energy efficient dual-core CPU ever designed for the PC.
We have noted in previous articles what an amazing difference Intel's new high-k 45nm process has made in improving the switching efficiency and reducing the leakage current of these tiny transistors. Our results with the Core 2 Extreme QX9650 were nothing short of impressive. First impressions left us feeling as though Intel had either made a mistake in calculating the processor's thermal characteristics or, more likely, had decided to rate these new quad-cores conservatively relative to the older 65nm quad-core CPUs. In any case, the improvement was real and measurable.
Drawing upon that same success, the E8000-series of dual-core processors shows great promise in situations that demand unrivaled performance and/or energy-efficient operation. While there is no doubt that the E8500 will excel when subjected to even the most intense processing loads, underclocked and undervolted these chips are hard to beat as Home Theater PC (HTPC) processors. For this reason alone we predict the E8200 (2.66GHz) and E8300 (2.83GHz) processors will become some of the most popular choices ever when it comes to building your next HTPC.
What's more, Intel has decided to stay with the classic LGA775 package for at least one more round, meaning there is a good chance an upgrade to one of these new 45nm processors could be easier than you originally thought. Past upgrades have often required the purchase of an entirely new motherboard due to modifications to the Voltage Regulation Module (VRM) specifications, which dictated board-level hardware changes for new processor compatibility. Not so with Wolfdale; a simple BIOS microcode update from your motherboard vendor is often all that is necessary to add official support for these CPUs. After that, it's only a matter of removing the old processor and installing the new one, and you can begin enjoying all the benefits an E8000-series processor has to offer.
45 Comments
TheJian - Thursday, March 6, 2008
Agreed. I haven't had a CPU that hasn't been heavily overclocked since like 1992 or so. All of these chips, clear back to a 486 100MHz, ran for others for years after I sold them. My sister is still running my Athlon 1700+ in one machine. It's on all day; she doesn't even use standby...LOL (except for the monitor). It's like 6 years old, probably older than that, and still running fine. I think it runs at 1400MHz if memory serves (but it's a 1700+, you know AMD's numbering scheme). Every time I upgrade I sell the used chip at like 1/2 off to a customer/relative and they all run for years. I usually don't keep them for more than a year, but they run well past the 3-year warranty for everyone else, and I drive them hard while I have them. I'd venture to guess that most Intel/AMD chips would last 5+ years on average. The processes are just that good. After a few years in a large company with 600+ PCs I realized they just don't die. We sent them to schools (4-5 year upgrade schedule) before they died, or sold them to employees for $50-100 as full PCs! I think I saw 3 CPU deaths in 2.5 years and they were dead weeks after purchase (P4s...Presshots..LOL). Don't even get me started on the hard drive deaths though...ROFL. 40+/year. Those damn Dell SFFs get hot and kill drives quick (not to mention the stinking plastic, smells burnt all day). You're lucky to get to warranty on those. I digress...

mindless1 - Wednesday, March 5, 2008
Yes, the author is completely wrong about overclocking. Overclocking (within sane bounds, like not letting the processor get hotter than you'd want even if it weren't overclocked) INCREASES the usable lifespan, not decreases it. The author has obviously not had much experience overclocking. For example, there are still plenty of Celeron 300MHz processors that ran at 450+MHz for almost ten years and were then retired for being beyond their "usable" lifespan, just slow by then-modern standards. Same for Coppermine P3 and Celeron, Athlon XP, take your pick; there are almost never examples of a processor failing prematurely that had run stable for a couple of years, unless it was due to some external influence like the heatsink falling off or motherboard power circuit failure.
Overclocking really isn't a gamble - unless you don't use common sense. 2-3 years is a lifespan you'd get if you were doing something extreme, not making a modest voltage increase with a heatsink that keeps it cool enough.
I suggest the article page about "The Truth About Processor Degradation" should just be deleted; it's not just misleading but mostly incorrect. Here's the core of the problem:
"As soon as you concede that overclocking by definition reduces the useful lifetime of any CPU, it becomes easier to justify its more extreme application."
Absolutely backwards. Overclocking does not, by definition nor by any other nonsensical standard, reduce the useful lifetime of CPUs. It increases the useful lifetime by providing more performance, so the processor remains at the required performance levels (which escalate) for a longer period, then is eventually retired before failing in most cases. It is wrong to think that if a processor would last 18 years without overclocking and 12 with modest overclocking, this suddenly means "it becomes easier to justify its more extreme application." It means you can do something sanely and have zero problems, or use random guesses and do something "extreme" and then you will find a problem. The author is completely backwards.
"Too many people believe overclocking is "safe" as long as they don't increase their processor core voltage - not true."
There is no evidence of this. Show us even one processor that failed from an increase in clock speed within its default voltage and within its otherwise viable lifespan.
"Frequency increases drive higher load temperatures, which reduces useful life. "
Wrong. While it is true that a higher frequency will increase temps, it is not true that a higher temp (so long as it's not excessive) will cause the processor to fail within its "useful life". On the contrary, you have extended the useful life by increasing the performance. Millions upon millions of overclockers know this: a moderate overclock (or even a large one, providing the vcore isn't increased significantly) has no effect; it's always some other portion of the system that fails first from old age, generally the motherboard or PSU. It might be fair to say that overclocking, through use of more current, is more likely to reduce the viable lifespan of the motherboard or PSU, or actually both, long before the processor would fail.
Intel doesn't warranty overclocking because it is definitely possible to make a mistake through ignorance or ineptitude, and because their price model is based on speed/performance. It is not based upon evidence that experienced overclockers using good judgement will end up with a processor that fails within 8 years, let alone 3!
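To put rough numbers on the dispute above, here is a minimal sketch of the Arrhenius temperature-acceleration model commonly used in silicon reliability work. The 0.7eV activation energy and both load temperatures are illustrative assumptions, not figures from Intel or from the article:

```c
/* Illustrative-only sketch of Arrhenius temperature acceleration,
 * often applied to silicon wear-out mechanisms such as
 * electromigration. All numbers below are assumed values. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double Ea = 0.7;                /* activation energy, eV (assumed) */
    const double k  = 8.617e-5;           /* Boltzmann constant, eV/K */
    const double t_stock = 60.0 + 273.15; /* hypothetical stock load temp, K */
    const double t_oc    = 75.0 + 273.15; /* hypothetical overclocked temp, K */

    /* Acceleration factor: how much faster wear-out proceeds at t_oc. */
    double af = exp((Ea / k) * (1.0 / t_stock - 1.0 / t_oc));
    printf("wear-out proceeds ~%.1fx faster at 75C than at 60C\n", af);

    /* A ~3x acceleration still leaves enormous margin if the baseline
     * lifetime is measured in decades -- which is precisely what the
     * two sides above are arguing about. */
    return 0;
}
```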
TheJian - Thursday, March 6, 2008
AMD has recently proved this, and even Intel to some extent with P4s. AMD's recent chips have been near max, with almost no overclocking room (same for quite a few models of P4s), and they lived long lives, proving you can run at almost max at default voltages with no worries. Where does the author get his data? Just as you said: prove it. I think Intel is tired of us overclocking the crap out of their great cores. With AMD not fielding any competitor, Intel ends up with chips that can all hit 4GHz but have to be marked at 3.16GHz. What do we do? Overclock to near max, and that pisses them off. :) Make a few phone calls to some people and tell them to write a "10 minutes of overclocking and your CPU blows up" article or you won't get our next engineering samples to test. :) Maybe I'm wrong, but it sure is suspicious. Recommending anything with Intel IGP for HTPC applications is mighty suspicious also. Yes, I read The Tech Report too. Also the same is on Tom's Hardware! Check out the FIRST SENTENCE of their 3/4/08 article on the 780G chipset from AMD:
"With today's introduction of its new 780G chipset, AMD is finally enabling users to build an HTPC or multimedia computer for HDTV, HD-DVD or Blu-ray playback that doesn't require an add-in graphics card. (AMD already included HDCP support and an HDMI interface in its predecessor chipset, the 690G.) The northbridge chip of the new 780G chipset also features an integrated Radeon HD3200 graphics unit that can decode any current high-definition video codec. As a result, CPU load is decreased to such a degree that even a humble AMD Sempron 3200+ is sufficient for HD video playback. Also, while Intel's chipsets get more power-hungry with every generation, AMD's newest design was designed with the goal of reducing power consumption."
http://www.tomshardware.com/2008/03/04/amd_780g_ch...
OK, so for the first time we can build an HTPC without an add-in graphics card. Translation - IT can't be done on Intel! And even a LOWLY SEMPRON 3200+ cuts it with this chipset! Translation - no need for an Intel Core 2 2.66GHz-2.83GHz dual core! No need for a dual core at all. Before this chipset it took an A64 6400 DUAL CORE (on AMD's old 690G chipset, and that chipset smokes Intel's IGP) and it was still choppy. Now they say a 1.8GHz SINGLE CORE SEMPRON shows only 63% CPU utilization WITHOUT being choppy on the 780G! On top of that it will save you money while running; even the chipset is the BEST EVER in power use. They openly tell you how BAD Intel's chipsets are at 90nm. But AnandTech wants us to buy this crap? BLU-RAY finally hit its limit on a 1.6GHz SEMPRON at Tom's Hardware; they hit 100% CPU in a few spots. I hope AnandTech's 780G chipset review sets the record straight. They'd better say you should AVOID INTEL like the plague, or something is FISHY in the HTPC/Intel/AnandTech world.
Don't get me wrong: Intel has had the greatest chips for a year now, and I'm personally waiting on the E8400 to hand down my E4300 to my dad (which runs at 3.2GHz with ease). But call a spade a spade, man. Intel sucks in HTPC. SERIOUSLY SUCKS as of early this week!
Quiet1 - Wednesday, March 5, 2008
Kris exposes his personal preferences when he writes: "While there is no doubt that the E8500 will excel when subjected to even the most intense processing loads, underclocked and undervolted it's hard to find a better suited Home Theater PC (HTPC) processor. For this reason alone we predict the E8200 (2.66GHz) and E8300 (2.83GHz) processors will become some of the most popular choices ever when it comes to building your next HTPC."

But what are you going to plug that CPU into??? An Intel motherboard with Intel integrated graphics? Look at the full picture and you'll see that if you're building an HTPC, the CPU just has to be decent enough to get the job done... the really important thing is the IG performance of your chipset.
The Tech Report: “AMD's 780G chipset / Integrated graphics all grown up”
http://www.techreport.com/articles.x/14261/1
“The first thing to take away from these results is just how completely the 780G's integrated graphics core outclasses the G35 Express. Settings that deliver reasonably playable framerates on the 780G reduce the G35 to little more than an embarrassing slideshow.”
"Between our integrated graphics platforms, the 780G exhibits much lower CPU utilization than the G35 Express. More importantly, the AMD chipset's playback was buttery smooth throughout. The same can't be said for the G35 Express, whose playback of MPEG2 and AVC movies was choppy enough to be unwatchable."
sprockkets - Thursday, March 6, 2008
That's the problem with Intel's platform, at least without an add-in card. I thought the new nVidia chipset would change all that; then I found out they are only using a single channel of RAM. How retarded is that?

Then again, for now, having the ability to run the add-in card for games but shut it down afterwards when you don't need it is sweet. I would wait for the 8200 chipset, though, since I know it will be easier to get working in Linux, but I may still go for the 780G for Windows Vista.
HilbertSpace - Wednesday, March 5, 2008
I read that article too, and thought the same thing.

Atreus21 - Wednesday, March 5, 2008
I wish to hell Intel would quit using those penises for marketing their architecture shrinks. Every time I try and read it I'm like, "Ah!" One would think they're trying to say something.
Atreus21 - Wednesday, March 5, 2008
I mean, the least they could do is not make it flesh colored.

frombauer - Wednesday, March 5, 2008
I'll finally upgrade my X2 3800+ (@2.5GHz) very soon. The question is, for gaming mostly, will a highly clocked dual core suffice, or will a lower-clocked quad be faster as games become more multi-threaded?

7Enigma - Thursday, March 6, 2008
I think it really depends on how long you plan on keeping the new system. Since your current rig is a couple years old, you fall into the category of 90% of us. We don't throw out the mobo and CPU every time a new chip comes out; we wait out a couple generations and then rebuild. I'm running at home right now on an A64 3200+ (OC'd to 2.5GHz), so I don't even have dual-core right now.

My plan is that even though the duals offer potentially better gaming performance right now (the GPU obviously still being the caveat), since I only rebuild every 3-4 years I need something more futureproof than someone who upgrades every year. It would be great to say I'll get a fast dual-core today and next year get a quad, but 4 out of 5 times the upgrade would require a new mobo anyway, so I'd rather wait another month or two and get a 45nm quad and the 9800 when it comes out.
My biggest disappointment with my last build was NOT jumping on the "new" slot and instead getting an AGP mobo. That is what has really hampered my gaming the last year or so. Once the main manufacturers stopped producing AGP graphics cards my upgrade path stopped cold. If I could go back to Jan '05 I would have spent the extra $50-100 on a mobo supporting PCI Express, which would have allowed me to upgrade past my current 6800GT and keep on gaming. Right now I have a box of games I've never played (gifts from Christmas) because my system can't even load them.
So in short, the duals are the better buy for gaming right NOW, but I'd hedge my bets and splurge on a 45nm quad when they come out. In all honesty, unless you play RTSes heavily, or we have some crazy change of mindset by game producers (not likely), the GPU will continue to be the bottleneck at anything above 17-19" LCD resolutions. I actually just got a really nice 19" LCD this past Christmas to replace my aging 19" CRT, and I did it for a very good reason. All it takes is to see a game like Crysis to realize that we may not be ready yet for the resolutions that 20/22/24" displays demand, unless we have the cash to burn on top-of-the-line components and upgrade at a much more frequent rate.
2cents.