Better Image Quality: CSAA & TMAA

NVIDIA’s next big trick for image quality is that they’ve revised Coverage Sample Anti-Aliasing. CSAA, which was originally introduced with the G80, is a lightweight method of better determining how much of a polygon actually covers a pixel. By merely testing polygon coverage and storing the results, the ROP can get more information without the expense of fetching and storing additional color and Z data as done with a regular sample under MSAA. The quality improvement isn’t as pronounced as just using more multisamples, but coverage samples are much, much cheaper.
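
To put the cost difference in concrete terms, here is a rough back-of-the-envelope sketch (our own illustration, not NVIDIA's actual ROP data layout) of what each kind of sample has to store per pixel:

```cpp
#include <cstdint>

// Rough, illustrative per-pixel storage comparison (not NVIDIA's actual ROP
// data layout): a full multisample carries color and depth, while a coverage
// sample is essentially one extra bit of polygon-coverage information.
struct MultiSample {
    uint32_t color;   // 32-bit RGBA
    float    depth;   // 32-bit Z
};                    // 8 bytes per full sample

struct Pixel8xQ {                 // 8xQ MSAA: 8 full samples
    MultiSample samples[8];       // 64 bytes of color/Z per pixel
};

struct Pixel32xCSAA {             // 32x CSAA: 8 full samples + 24 coverage bits
    MultiSample samples[8];       // the same 64 bytes of color/Z
    uint32_t    coverageBits;     // only 24 of these bits used: ~3 bytes extra
};
```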


32x CSAA sampling pattern

For the G80 and GT200, CSAA could only test polygon edges. That’s great for resolving aliasing at polygon edges, but it doesn’t solve other kinds of aliasing. In particular, GF100 will be waging a war on billboards – flat geometry that uses textures with transparency to simulate what would otherwise require complex geometry. Fences, leaves, and patches of grass in fields are three very common uses of billboards, as they are “minor” visual effects that would be very expensive to do with real geometry and would gain little in quality for the effort.

Since billboards are faking geometry, regular MSAA techniques do not remove the aliasing within the billboard. To resolve that, DX10 introduced alpha to coverage functionality, which allows MSAA to anti-alias the fake geometry by using the texture’s alpha mask as a coverage mask for the MSAA process. The end result of this process is that the GPU creates varying levels of transparency around the fake geometry, so that it blends better with its surroundings.
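
For reference, on the API side alpha to coverage is a single flag in the Direct3D 10 blend state. The following is a minimal sketch of how a DX10 title might enable it for its billboard pass; the helper function and variable names are ours, but the structures and calls are standard D3D10:

```cpp
#include <d3d10.h>

// Minimal sketch: create a blend state with alpha to coverage enabled.
// Error handling omitted; 'device' is assumed to be an existing ID3D10Device.
ID3D10BlendState* CreateAlphaToCoverageState(ID3D10Device* device)
{
    D3D10_BLEND_DESC desc = {};
    desc.AlphaToCoverageEnable    = TRUE;   // texture alpha drives the MSAA coverage mask
    desc.BlendEnable[0]           = FALSE;  // no conventional alpha blending needed
    desc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}

// Usage: bind it before drawing the billboard geometry.
//   float blendFactor[4] = { 0, 0, 0, 0 };
//   device->OMSetBlendState(state, blendFactor, 0xffffffff);
```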

It’s a great technique, but it wasn’t done all that well by the G80 and GT200. In order to determine the level of transparency to use on an alpha to coverage sampled pixel, the anti-aliasing hardware on those GPUs used MSAA samples to test the coverage. With up to 8 samples (8xQ MSAA mode), the hardware could only compute 9 levels of transparency, which isn’t nearly enough to establish a smooth gradient. The result was that while alpha to coverage testing allowed for some anti-aliasing of billboards, the result wasn’t great. The only way to achieve really good results was to use super-sampling on billboards through Transparency Super-Sample Anti-Aliasing, which was ridiculously expensive given that when billboards are used, they usually cover most of the screen.
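
A quick sketch of why 8 samples only buys 9 levels: the alpha value is effectively converted into a count of covered samples, so it can only land on one of 9 discrete steps. The function below is purely illustrative of the math, not the actual hardware logic:

```cpp
#include <cstdint>

// Purely illustrative: with 8 samples, alpha is snapped to one of 9 coverage
// counts (0..8), which is why alpha to coverage gradients on G80/GT200 look
// dithered. Real hardware also varies which bits are set from pixel to pixel.
uint8_t AlphaToCoverageMask8(float alpha)
{
    int covered = static_cast<int>(alpha * 8.0f + 0.5f);  // round to 0..8 covered samples
    if (covered < 0) covered = 0;
    if (covered > 8) covered = 8;
    return static_cast<uint8_t>((1u << covered) - 1u);    // e.g. alpha 0.30 -> 2 of 8 bits set
}
```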

For GF100, NVIDIA has made two tweaks to CSAA. First, additional CSAA modes have been unlocked – GF100 can do up to 24 coverage samples per pixel as opposed to 16. The second change is that the CSAA hardware can now participate in alpha to coverage testing, a natural extension of CSAA’s coverage testing capabilities. With this ability CSAA can test the coverage of the fake geometry in a billboard along with the MSAA samples, allowing the anti-aliasing hardware to fetch up to 32 samples per pixel. This gives the hardware the ability to compute 33 levels of transparency, which, while not perfect, allows for much smoother gradients.
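
Illustratively, the granularity works out as follows; this is simple arithmetic on the sample counts NVIDIA quotes, not anything we have measured:

```cpp
#include <cstdio>

// N samples give N+1 transparency levels, so the smallest alpha step the
// hardware can express is 1/N.
int main()
{
    const int gt200Samples = 8;        // 8xQ on G80/GT200: 8 MSAA samples usable for alpha to coverage
    const int gf100Samples = 8 + 24;   // GF100 32x mode: 8 MSAA + 24 CSAA samples

    std::printf("GT200 8xQ: %d levels, minimum step %.4f\n",
                gt200Samples + 1, 1.0 / gt200Samples);   // 9 levels, step 0.1250
    std::printf("GF100 32x: %d levels, minimum step %.4f\n",
                gf100Samples + 1, 1.0 / gf100Samples);   // 33 levels, step ~0.031
}
```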

The example NVIDIA has given us for this is a pair of screenshots taken from a field in Age of Conan, a DX10 game. The first screenshot is from a GT200-based video card running the game with NVIDIA’s 16xQ anti-aliasing mode, which is composed of 8 MSAA samples and 8 CSAA samples. Since the GT200 can’t do alpha to coverage testing using the CSAA samples, the resulting grass blades are only blended with 9 levels of transparency based on the 8 MSAA samples, giving them a dithered look.


Age of Conan grass, GT200 16x AA

The second screenshot is from GF100 running in NVIDIA’s new 32x anti-aliasing mode, which is composed of 8 MSAA samples and 24 CSAA samples. Here the CSAA and MSAA samples can be used in alpha to coverage, giving the hardware 32 samples from which to compute 33 levels of transparency. The result is that the blades of grass are still somewhat banded, but overall much smoother than what the GT200 produced. Bear in mind that since 8x MSAA is faster on GF100 than it was on GT200, and CSAA has very little overhead in comparison (NVIDIA estimates 32x has 93% of the performance of 8xQ), the entire process should be faster on GF100 even if it were running at the same speeds as GT200. Image quality improves, and performance improves right along with it.


Age of Conan grass, GF100 32x AA

The ability to use CSAA on billboards left us with a question, however: isn’t this what Transparency Anti-Aliasing was for? The answer, as it turns out, is both yes and no.

Transparency Anti-Aliasing was introduced on the G70 (GeForce 7800 GTX) and was intended to help remove aliasing on billboards, exactly what NVIDIA is doing today with MSAA. The difference is that while DX10 has alpha to coverage, DX9 does not – and DX9 was all there was when the G70 was released. Transparency Multi-Sample Anti-Aliasing (TMAA) as implemented today is effectively a shader replacement routine to make up for what DX9 lacks. With it, DX9 games can have alpha to coverage testing done on their billboards in spite of DX9 not having this feature, allowing for image quality improvements on games still using DX9. Under DX10 TMAA is superseded by alpha to coverage in the API, but TMAA is still alive and well due to the large number of older games using DX9 and the large number of games yet to come that will still use DX9.
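
For context, the path TMAA steps in for looks like this on the application side. These are the standard D3D9 alpha-test render states a billboard-heavy DX9 game typically sets (hard pass/fail on each pixel, hence the aliased cutout edges); exactly how NVIDIA’s driver substitutes alpha to coverage behavior for this is, of course, internal to the driver:

```cpp
#include <d3d9.h>

// Sketch of a typical DX9 billboard alpha-test setup. Pixels whose texture
// alpha fails the test are discarded outright, with no blending at the edges.
// 'device' is assumed to be an existing IDirect3DDevice9.
void SetupBillboardAlphaTest(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
    device->SetRenderState(D3DRS_ALPHAREF, 0x80);                 // cutoff at alpha 128
    device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL); // keep pixels with alpha >= cutoff
}
```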

Because TMAA is functionally just enabling alpha to coverage on DX9 games, all of the changes we just mentioned to the CSAA hardware filter down to TMAA. This is excellent news, as TMAA has delivered lackluster results in the past – it was better than nothing, but only Transparency Super-Sample Anti-Aliasing (TSAA) really fixed billboard aliasing, and only at a high cost. Ultimately this means that a number of cases in the past where only TSAA was suitable are suddenly opened up to using the much faster TMAA, in essence making good billboard anti-aliasing finally affordable on newer DX9 games on NVIDIA hardware.

As a consequence of this change, TMAA’s tendency to have fake geometry on billboards pop in and out of existence is also solved. Here we have a set of screenshots from Left 4 Dead 2 showcasing this in action. The GF100 with TMAA generates softer edges on the vertical bars in this picture, which is what stops the popping seen on the GT200.


Left 4 Dead 2: TMAA on GT200


Left 4 Dead 2: TMAA on GF100

Comments

  • dentatus - Monday, January 18, 2010 - link

    Absolutely. Really, the GT200/RV700 generation of DX10 cards was inarguably 'won' (i.e. most profitable) for AMD/ATI by cards like the HD4850. But the overall performance crown (i.e. highest in-generation performance) was won off the back of the GTX295 for nvidia.

    But I agree with chizow that nvidia has ultimately been "winning" (the performance crown) each generation since the G80.
  • chizow - Monday, January 18, 2010 - link

    Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market.

    Also, the fundamental problem people don't seem to understand with regard to AMD and Nvidia die size and product distribution is that they cover completely different market segments. Again, this simply serves as a referendum on the differences in their business models. You may also notice these differences are pretty similar to what AMD sees from Intel on the CPU side of things....

    Nvidia GT200 die go into all high-end and mainstream parts like GTX 295, 285, 275, 260 that sell for much higher prices. AMD RV770 die went into 4870, 4850, and 4830. The latter two parts were competing with Nvidia's much cheaper and smaller G92 and G96 parts. You can clearly see that the comparison between die/wafer sizes isn't a valid one.

    AMD has learned from this btw, and this time around it looks like they're using different die for their top tier parts (Cypress) and their lower tier parts (Redwood, Cedar) so that they don't have to sell their high-end die at mainstream prices.
  • Stas - Tuesday, January 19, 2010 - link

    [quote]Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market. [/quote]
    AMD also makes CPUs... they also lost market due to Intel's high end domination... they lost money on ATI... If it wasn't for the success of the HD4000 series, AMD would've been in deep shit. Just think before you post.
  • Calin - Tuesday, January 19, 2010 - link

    Hard to make a profit while paying interest on 5 billion in credit - but if you want to take it this way (total profits), why wouldn't we look at total income?
    AMD/ATI:
    PERIOD ENDING 26-Sep-09 27-Jun-09 28-Mar-09 27-Dec-08
    Total Revenue 1,396,000 1,184,000 1,177,000 1,227,000
    Cost of Revenue 811,000 743,000 666,000 1,112,000
    Gross Profit 585,000 441,000 511,000 115,000

    NVidia
    PERIOD ENDING 25-Oct-09 26-Jul-09 26-Apr-09 25-Jan-09
    Total Revenue 903,206 776,520 664,231 481,140
    Cost of Revenue 511,423 619,797 474,535 339,474
    Gross Profit 391,783 156,723 189,696 141,666

    Not looking so good for the "winner of the generation", though. As for the die size and product distribution, all I'm looking at is the retail video card offerings, and every price bracket I choose has both NVidia and AMD in it.
  • knutjb - Wednesday, January 20, 2010 - link

    You missed my point. I wasn't talking about AMD as a whole; I was talking about ATI as a division within AMD. If a company bleeds that much and still survives, some part of the company must be making some money, and that is the ATI division. ATI is making money. Your macro numbers mean zip.

    The model ATI is using is putting out competitive cards from a company, AMD, that is bleeding badly. Which generation of card is easier to sell: the new and improved one with more features, useful or not, or the last-generation chip?
  • beck2448 - Tuesday, January 19, 2010 - link

    Those numbers are ludicrous. AMD hasn't made a profit in years. ATI's revenue is about 30% of Nvidia's.
  • knutjb - Monday, January 18, 2010 - link

    ATI is what has been floating AMD with its profits. ATI has decided to make smaller, incremental development steps that lower production costs.

    Nvidia takes a long time to create a monolithic monster that requires massive amounts of capital to develop. They will not recoup this investment off gamers alone, because most don't have that much cash to put one of those cards in their machines. It is needed for marketing so they can push lower-level cards by implying superiority, real or not - they are a heavy marketing company. This chip is directed at their GPU server market, and that is where they hope to make their money, hoping it can do both really well.

    ATI, on the other hand, by making smaller steps at a faster cycle of product development, has focused on the performance/mainstream market. With lower development costs they can turn out new cards that pay back their development costs quicker, allowing them to put that capital back into new products. Look at the 4890 and 4870. They share a similar architecture, but the 4890 is a more refined chip. It was a product that allowed ATI to keep Nvidia reacting to ATI's products.

    Nvidia's marketing requires them to have the fastest card on the market. ATI isn't trying to keep the absolute performance crown, but to hold onto the price/performance crown. Every time they put out a slightly faster card, it forces Nvidia to respond, and Nvidia receives lower profits from having to drop card prices. I don't think this chip will be able to function on the 8800 model, because AMD/ATI is now on stronger financial footing than they have been in the past couple of years, and Nvidia being late to market is helping ATI line their pockets with cash. The 5000 series is just marginally better, but it is better than Nvidia's current offerings.

    Will Nvidia release just a single high end card or several tiers of cards to compete across the board? I don't think one card will really help the bottom line over the longer term.
  • StormyParis - Monday, January 18, 2010 - link

    I'm not sure what "winning" means, nor, really, what a generation is.

    you can win on highest performance, highest marketshare, highest profit, best engineering...

    a generation may also be a DirectX iteration, a chip release cycle (in which case each manufacturer has its own), a fiscal year...

    Anyhoo, I don't really care, as long as I'm regularly getting better, cheaper cards. I'll happily switch back to nVidia.
  • chizow - Monday, January 18, 2010 - link

    I clearly defined what I considered a generation; historically, the rest of the metrics measured over time (market share, mind share, profits, value-add features, game support) tend to follow suit.

    For someone like you who doesn't care about who's winning a generation, it should be simple enough: buy whatever best suits your price:performance requirements when you're ready to buy.

    Those who want to make an informed decision once every 12-16 months per generation, to avoid those niggling uncertainties and any potential buyer's remorse, would certainly want to consider both IHVs' offerings before making that decision.
  • Ahmed0 - Monday, January 18, 2010 - link

    How can you "win" if your product isnt intended for a meaningful number of customers. Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world as well but theres a reason why they dont.

    Really, the performance crown isn't anything special. The title goes from hand to hand all the time.
