Architecting Fermi: More Than 2x GT200

NVIDIA keeps referring to Fermi as a brand new architecture, while calling GT200 (and RV870) bigger versions of their predecessors with a few added features. Marginalizing the effort required to build any multi-billion transistor chip is just silly; to an extent, all of these GPUs have been significantly redesigned.

At a high level, Fermi doesn't look much different than a bigger GT200. NVIDIA is committed to its scalar architecture for the foreseeable future. In fact, its one-op-per-clock-per-core philosophy comes from a basic desire to execute single-threaded programs as quickly as possible. Remember, these are compute and graphics chips. NVIDIA sees no benefit in building a 16-wide or 5-wide core as the basis of its architectures, although we may see a bit more flexibility at the core level in the future.

Despite the similarities, large parts of the architecture have evolved. The redesign reaches down as low as the core level. NVIDIA used to call these SPs (Streaming Processors); it now calls them CUDA Cores. I'm going to call them cores.

All of the processing done at the core level is now to IEEE spec. That's IEEE 754-2008 for floating-point math (same as RV870/5870) and full 32-bit for integers. In the past, 32-bit integer multiplies had to be emulated; the hardware could only do 24-bit integer muls. That silliness is now gone. Fused multiply-add is also included. The goal was to avoid any cheesy tricks to implement math. Everything should be standards compliant and give you the results that you'd expect.
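To make that concrete, here is a minimal CUDA sketch of the two operations in question; the kernel and all of the names are mine, not NVIDIA's:

```cuda
// Minimal sketch: fmaf() compiles to a fused multiply-add (one rounding
// step, per IEEE 754-2008), and the 32-bit integer multiply no longer has
// to be emulated on top of 24-bit hardware. Illustrative only.
__global__ void fma_and_imul(const float* a, const float* b, const float* c,
                             const int* x, const int* y,
                             float* f_out, int* i_out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        f_out[i] = fmaf(a[i], b[i], c[i]);  // a*b + c, rounded once
        i_out[i] = x[i] * y[i];             // full 32-bit integer multiply
    }
}
```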

Double precision floating point (FP64) performance is improved tremendously. Peak 64-bit FP execution rate is now 1/2 the 32-bit FP rate; it used to be 1/8 (AMD's is 1/5). Wow.

NVIDIA isn’t disclosing clock speeds yet, so we don’t know exactly what that rate is.

In G80 and GT200 NVIDIA grouped eight cores into what it called an SM. With Fermi, you get 32 cores per SM.

The high-end, single-GPU Fermi configuration will have 16 SMs. That’s fewer SMs than GT200, but more cores: 512, to be exact. Fermi has more than twice the core count of the GeForce GTX 285.
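Since clocks are still undisclosed, only the ratios are meaningful, but here is how the peak numbers would fall out of the core count once a shader clock is known; the 1.5GHz below is purely a placeholder of my own choosing:

```cuda
#include <cstdio>

// Back-of-the-envelope peak rates. The shader clock is hypothetical --
// NVIDIA has not disclosed Fermi clocks -- so treat the absolute numbers
// as an illustration of the math, not a spec.
int main()
{
    const int    cores        = 512;     // full Fermi configuration
    const double shader_clock = 1.5e9;   // Hz, placeholder value
    const double fp32_peak    = cores * 2.0 * shader_clock;  // FMA counted as 2 flops/clock
    const double fp64_peak    = fp32_peak / 2.0;             // half-rate FP64

    printf("FP32 peak: %.2f TFLOPS\n", fp32_peak / 1e12);    // ~1.54 at 1.5GHz
    printf("FP64 peak: %.0f GFLOPS\n", fp64_peak / 1e9);     // ~768 at 1.5GHz
    return 0;
}
```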

                  Fermi           GT200           G80
Cores             512             240             128
Memory Interface  384-bit GDDR5   512-bit GDDR3   384-bit GDDR3

In addition to the cores, each SM has a Special Function Unit (SFU) used for transcendental math and interpolation. In GT200 this SFU had two pipelines; in Fermi it has four. While NVIDIA increased general math horsepower by 4x per SM, SFU resources only doubled.

The infamous missing MUL has been pulled out of the SFU, so we shouldn’t have to quote separate single- and dual-issue arithmetic rates for NVIDIA GPUs any longer.
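For reference, the SFU is the unit that services CUDA's hardware-approximated fast-math intrinsics; a minimal sketch of the kind of work it handles (kernel and names are mine):

```cuda
// Sketch: __sinf/__expf are hardware-approximated transcendentals of the
// sort the SFU exists for, as opposed to the slower but more accurate
// software implementations (sinf, expf) that run on the regular cores.
__global__ void sfu_workload(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = __sinf(in[i]) + __expf(in[i]);  // fast-math approximations
    }
}
```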

NVIDIA organizes these SMs into TPCs, but the exact hierarchy isn’t being disclosed today. With the launch's Tesla focus we also don't know specifics on ROPs, texture filtering, or anything else related to 3D graphics. Boo.

A Real Cache Hierarchy

Each SM in GT200 had 16KB of shared memory that could be used by all of the cores. This wasn’t a cache, but rather software-managed memory; the application would have to knowingly move data in and out of it. The benefit here is predictability: you always know whether something is in shared memory because you put it there. The downside is that it doesn’t work so well if the application isn’t very predictable.
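This is what that looks like in practice; a minimal CUDA sketch (kernel and names are mine) in which each block explicitly stages data into shared memory before using it:

```cuda
// Software-managed model: the block copies a tile of global memory into
// __shared__ storage, synchronizes, then works out of the on-chip copy.
// Nothing lands in shared memory unless the code puts it there.
// Assumes a launch with 256 threads per block.
__global__ void stage_into_shared(const float* in, float* out, int n)
{
    __shared__ float tile[256];               // well within GT200's 16KB per SM
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x;

    if (gid < n) tile[lid] = in[gid];         // explicit copy into shared memory
    __syncthreads();                          // make the tile visible to the block

    if (gid < n) out[gid] = tile[lid] * 2.0f; // operate on the staged copy
}
```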

Branch-heavy applications and many of the general-purpose compute applications that NVIDIA is going after need a real cache. So with Fermi at 40nm, NVIDIA gave them a real cache.

Attached to each SM is 64KB of configurable memory. It can be partitioned as 16KB/48KB or 48KB/16KB; one partition is shared memory, the other partition is an L1 cache. The 16KB minimum partition means that applications written for GT200 that require 16KB of shared memory will still work just fine on Fermi. If your app prefers shared memory, it gets 3x the space in Fermi. If your application could really benefit from a cache, Fermi now delivers that as well. GT200 did have an L1 texture cache (one per TPC), but the cache was mostly useless when the GPU ran in compute mode.
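CUDA toolkits that target Fermi expose this split as a per-kernel preference hint; the sketch below shows roughly how an application might ask for the larger L1, assuming the cudaFuncSetCacheConfig runtime call (the kernel itself is a placeholder of mine):

```cuda
#include <cuda_runtime.h>

__global__ void my_kernel(float* data, int n) { /* ... */ }

int main()
{
    // Ask for the 48KB L1 / 16KB shared split for this kernel; a kernel
    // that leans on shared memory would request cudaFuncCachePreferShared
    // instead to get the 48KB shared / 16KB L1 arrangement.
    cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferL1);

    // ... allocate buffers, launch my_kernel<<<blocks, threads>>>(...), etc.
    return 0;
}
```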

The entire chip shares a 768KB L2 cache. The result is a reduced penalty for doing an atomic memory op; Fermi is 5-20x faster here than GT200.
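The kind of operation that benefits is a global-memory read-modify-write that many threads hit at once; a minimal sketch (mine, not NVIDIA's benchmark):

```cuda
// Global-memory atomics are the case NVIDIA is quoting: many threads
// folding results into a single location. On Fermi the shared L2 keeps
// the contended line on-chip instead of bouncing it through DRAM.
__global__ void atomic_sum(const int* values, int* total, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(total, values[i]);  // read-modify-write on global memory
    }
}
```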

Comments

  • Dobs - Thursday, October 1, 2009 - link

    I'm with the zorro - will be setting this up for my son pretty soon - he is an extreme gamer who has mentioned multiple monitors to me a few times over the last few months. Up until now I only had a vague idea on how I could accommodate his desire.... that has all changed since the introduction of Eyefinity.
  • Finally - Thursday, October 1, 2009 - link

    ..pussy-whipped by your son?
  • the zorro - Thursday, October 1, 2009 - link

    Moron, I am going to buy two more monitors and then... Eyefinity.
  • chizow - Wednesday, September 30, 2009 - link

    Nvidia didn't mention anything about multi-monitor support, but today's presentation wasn't really focused on the 3D gaming market and GeForce. They did spend a LOT of time on 3D Vision though, even integrating it into their presentation. They also made mention of the movie industry's heavy interest in 3D, so if I had to bet, I'd say they'll go in the direction of 3D support before multi-monitor gaming.

    It wouldn't be hard for them to implement it though if they wanted to or were compelled to. It's most likely just a simple driver block or code they need to port to their desktop products. They already have multi-monitor 3D on their Quadro parts and have supported it for years; it's nothing new really, just new in the desktop space with Eyefinity. It then becomes a question of whether they're willing to cannibalize their lucrative Quadro sales to compete with AMD in this relatively low-demand segment. My guess is no, but hopefully I'm wrong.
  • Dobs - Thursday, October 1, 2009 - link

    I think Nvidia are underestimating the desire for, and affordability of, multi-monitor gaming. Have you seen monitor prices lately? Have you seen the Eyefinity reviews?

    Not making any mention of it is a big mistake in my book. Sure they can do it, but it will reduce their margins even further since they obviously hadn't planned on spending the extra dollar$ this way.

    I do like the sound of the whole 3D thing in the keynote though... and everyone wearing 3D glasses...(not so much). But it will be cool once the Sony vs Panasonic vs etc.? 3D format war is finished, (although it's barely started) so us mainstream general consumers know which 3D product to buy. Just hope that James Cameron Avatar film is good :)
  • chizow - Thursday, October 1, 2009 - link

    Yeah I've seen the reviews and none seemed very compelling tbh, the 3-way portrait views seemed to be the best implementation. 6-way is a complete joke, unless you enjoy playing World of Bezelcraft? There's also quite a few problems with its implementation as you alluded to, the requirement of an active DP adapter was just a short-sighted half-assed implementation by AMD.

    As Yacoub mentioned, the market segment for people interested or willing to invest in this technology is so ridiculously small, 0.1% is probably pretty close to accurate given multi-GPU technology is estimated to only be ~1% of the GPU market. Surely those interested in multi-monitor are below that by a significant degree.

    Still, for a free feature it's definitely welcome, even in the 2D productivity sense, or perhaps for a day trader or broker....or anyone who wanted to play 20 flops simultaneously in online poker.
  • Dobs - Thursday, October 1, 2009 - link

    Lol @ 20 flops simultaneously in online poker. I struggle with 4 :)

    Agree with 6 monitor bezelcraft - Cross hair is the bezel :)
    I guess I'm lucky that my son is due for a screen upgrade anyhow so all 3 monitors will be new. Which one will be the problem - I hear Samsung are bringing out small-bezel monitors specifically for this, but I probably can't wait that long. (Samsung LED looks awesome though) I might end up opting for 3 of Dell's old (2008) 2408WFP's (my work monitor) as I know I can get a fair discount for these and I think they have DisplayPort. I'm not sure if my son will like Landscape or Portrait better but I want him to have the option... and yeah apparently the portrait drivers are limited (read crap) atm.

    Appreciate your feedback as well as your comments on the 5850 article... I actually expected the GT prices to be $600+ not the $500-$550 you mentioned. Oops... rambling now. Cheers
  • chizow - Thursday, October 1, 2009 - link

    Heheh I've heard of people playing more than 20 flops at a time....madness.

    Anyways, I'm in a similar holding pattern on the LCD. While I'm not interested in multi-monitor as of now, I'm holding out for LED 120Hz panels at 24+" and 1920. Tbh, I'd probably check out 3D Vision before Eyefinity/multi-monitor at this point, but even without 3D Vision you'd get the additional FPS from a 120Hz panel along with improved response times from LED.

    If you're looking to buy now for quality panels with native DP support, you should check out the Dell U2410. Ryan Smith's 5870 review used 3 of them I think in portrait and it looked pretty good. They're a bit pricey though, $600ish but they were on sale for $480 or so with a 20% coupon. If you called Dell Sm. Biz and said you wanted 3 you could probably get that price without coupon.

    As for GTX 380 price, was just a guess, Anand's article also hints Nvidia doesn't want to get caught again with a similar pricing situation as with GT200 but at the same time, relative performance ultimately dictates price. Anyways, enjoyed the convo, hope the multi-mon set-up works out! Sounds like it'll be great (especially if you like sims or racing games)!
  • RadnorHarkonnen - Thursday, October 1, 2009 - link

    Eyefinity is screaming for DIY.

    Bezel Craft can be easily avoided. Just tear your monitor apart. A stand for 3 monitors is easily ordered/DIY made. Usually the bezel is way thicker than it needs to be.

    Unfortunately I already have a 4850 CF that I will keep for a year or two more and let the technology mature for now.
  • wifiwolf - Wednesday, September 30, 2009 - link

    Can we have a new feature in the comments please?
    I just get tired of reading a few comments and get bugged by some SiliconDoc interference.
    Can we have a noise filter so the comments area gets back to normal?
    Every graphics-related article gets this noise.
    Just a button to switch the filter on. Thanks.
