More GDDR5 Technologies: Memory Error Detection & Temperature Compensation

As we mentioned previously, AMD's memory controllers in Cypress implement a greater portion of the GDDR5 specification. Beyond the ability to use GDDR5's power-saving features, AMD has also been working on features that allow its cards to reach higher memory clock speeds. Chief among these is support for GDDR5's error detection capabilities.

One of the biggest problems in using a high-speed memory device like GDDR5 is that it requires a bus that is both fast and fairly wide - properties that generally run counter to each other in bus design. A single GDDR5 memory chip on the 5870 connects to a bus that is 32 bits wide and runs at a base speed of 1.2GHz, which requires a bus that can meet exceedingly tight tolerances. Adding to the challenge, a card like the 5870 with a 256-bit total memory bus needs eight of these 32-bit buses, which means more noise from adjoining buses and less room to work in.

Because of the difficulty in building such a bus, the memory bus has become the weak point for video cards using GDDR5. The GPU’s memory controller can do more and the memory chips themselves can do more, but the bus can’t keep up.

To combat this, GDDR5 memory controllers can perform basic error detection on both reads and writes by implementing a CRC-8 hash function. With this feature enabled, for each 64-bit data burst an 8-bit cyclic redundancy check hash (CRC-8) is transmitted via a set of four dedicated EDC pins. This CRC is then used to check the contents of the data burst, to determine whether any errors were introduced into the data burst during transmission.

The specific CRC function used in GDDR5 detects 1-bit and 2-bit errors with 100% accuracy, with that accuracy falling as more bits are corrupted. The reason is that the CRC function can produce collisions: in rare cases the CRC of an erroneous data burst matches the CRC of the original data. But since the odds of many bits flipping at once are low, the vast majority of bus errors should be 1-bit and 2-bit errors, which are always caught.
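For the curious, that detection guarantee is easy to verify in software. GDDR5's CRC polynomial is widely reported to be the ATM-8 polynomial (x^8 + x^2 + x + 1) - treat that as an assumption on our part rather than something AMD has confirmed - and a few lines of Python can brute-force every possible 1-bit and 2-bit corruption of a 64-bit burst plus its 8-bit CRC to confirm that each one is caught:

```python
from itertools import combinations

CRC8_POLY = 0x07  # x^8 + x^2 + x + 1 (ATM-8); assumed, not confirmed by AMD

def crc8(data: bytes) -> int:
    """Plain bitwise CRC-8: init 0x00, MSB-first, no reflection, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ CRC8_POLY) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def flip_bits(word: bytes, positions) -> bytes:
    """Return a copy of word with the given bit positions inverted."""
    w = bytearray(word)
    for p in positions:
        w[p // 8] ^= 0x80 >> (p % 8)
    return bytes(w)

# A 64-bit data burst plus its 8-bit CRC forms a 72-bit codeword.
burst = bytes([0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x23, 0x45, 0x67])
codeword = burst + bytes([crc8(burst)])

# Try every single- and double-bit corruption of the 72-bit codeword;
# an error goes undetected only if the stored CRC still matches.
errors = [(p,) for p in range(72)] + list(combinations(range(72), 2))
undetected = 0
for e in errors:
    bad = flip_bits(codeword, e)
    if crc8(bad[:8]) == bad[8]:
        undetected += 1
print(undetected)  # 0: every 1-bit and 2-bit error is detected
```

With more simultaneous bit flips collisions do exist - flipping the four bits that correspond to the polynomial's own terms, for instance, produces a codeword that sails through the check - which is exactly the falloff in accuracy described above.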

Should an error be found, the GDDR5 memory controller will request a retransmission of the faulty data burst, and it will keep doing so until the data burst finally goes through correctly. A retransmission request is also used to re-train the GDDR5 link (once again taking advantage of fast link re-training) to correct any potential link problems brought about by changing environmental conditions. Note that this does not involve changing the clock speed of the GDDR5 (i.e. it does not step down in speed); it merely reinitializes the link. If the errors are due to the bus being outright unable to handle the requested clock speed, errors will continue to occur and be caught. Keep this in mind, as it will be important when we get to overclocking.
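Mechanically, the retry policy amounts to a very small loop. The sketch below is our own illustration - the function names and the retrain hook are hypothetical, not AMD's actual controller logic - but it captures the behavior: keep re-requesting and re-training until the CRC checks out, and if the errors never stop, that's the bus telling you the clock speed is unsustainable.

```python
def read_burst(fetch, is_valid, retrain, max_attempts=64):
    """Re-request a data burst until it arrives intact, re-training the
    link after every failed attempt (illustrative sketch, not AMD's logic)."""
    for _ in range(max_attempts):
        burst = fetch()          # pull one data burst (plus CRC) off the bus
        if is_valid(burst):      # CRC check passed: the transfer is good
            return burst
        retrain()                # fast link re-training; the clock is NOT lowered
    raise IOError("errors persist after re-training: bus can't sustain this clock")

# A flaky stub channel: the first two fetches come back corrupted.
state = {"attempts": 0}
retrains = []

def fetch():
    state["attempts"] += 1
    return b"good" if state["attempts"] > 2 else b"bad!"

result = read_burst(fetch, lambda b: b == b"good", lambda: retrains.append(1))
print(result, len(retrains))  # b'good' 2
```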

Finally, we should also note that this error detection scheme is only for detecting bus errors. Errors in the GDDR5 memory modules or errors in the memory controller will not be detected, so it’s still possible to end up with bad data should either of those two devices malfunction. By the same token this is solely a detection scheme, so there are no error correction abilities. The only way to correct a transmission error is to keep trying until the bus gets it right.

Now in spite of the difficulties in building and operating such a high-speed bus, error detection is not necessary for its operation. As AMD was quick to point out to us, cards still need to ship defect-free and not produce any errors. In other words, error detection is a failsafe mechanism rather than a tool specifically for attaining higher memory speeds. Memory supplier Qimonda's own whitepaper on GDDR5 pitches error detection as a necessary precaution due to the increasing amount of code stored in graphics memory, where a failure can lead to a crash rather than just a bad pixel.

In any case, for normal use the ramifications of using GDDR5's error detection should be non-existent. In practice it should lead to more stable cards, since memory bus errors are now caught, though we don't know to what degree. Any retransmission is itself something of a catch-22, after all: it means an error occurred that, on a properly functioning card, never should have.

Like the changes to VRM monitoring, the significant ramifications of this will be felt with overclocking. Overclocking attempts that previously would push the bus too hard and lead to errors will no longer do so, making higher overclocks possible. However this is a bit of an illusion, as retransmissions reduce performance. The scenario laid out to us by AMD is that overclockers who have reached the limits of their card's memory bus will now see that limit as a drop in performance due to retransmissions, rather than as crashing or graphical corruption. This means that assessing an overclock will require monitoring a card's performance, along with watching for the traditional signs of trouble, as those will still indicate problems in the memory chips and the memory controller itself.
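To see why performance, not stability, becomes the overclocking signal, consider a toy model: every errored burst has to be re-sent, so useful bandwidth scales by (1 - error rate). The numbers below - a bus that is clean up to 1200MHz and errors increasingly beyond it - are invented purely for illustration:

```python
def effective_bandwidth(clock_mhz, error_rate):
    """Goodput with mandatory retransmission: a burst takes on average
    1/(1 - p) attempts to get through, so useful bandwidth scales by (1 - p)."""
    p = error_rate(clock_mhz)
    return clock_mhz * (1.0 - p)

def toy_error_rate(clock_mhz):
    """Invented error curve: clean up to 1200MHz, ramping up quickly after."""
    return 0.0 if clock_mhz <= 1200 else min(1.0, (clock_mhz - 1200) / 100.0)

# Past 1200MHz the raw clock keeps rising but the goodput falls.
clocks = range(1100, 1301, 25)
best = max(clocks, key=lambda c: effective_bandwidth(c, toy_error_rate))
print(best)  # 1200: the highest *useful* clock, even though higher clocks "work"
```

The card no longer crashes at 1250MHz in this model; it just quietly delivers less bandwidth than it did at 1200MHz, which is why benchmarking each step becomes part of the overclocking process.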

Ideally there would be a more absolute and expedient way to check for errors than looking at overall performance, but at this time AMD doesn’t have a way to deliver error notices. Maybe in the future they will?

Wrapping things up, we have previously discussed fast link re-training as a tool to allow AMD to clock down GDDR5 during idle periods, and as part of a failsafe method to be used with error detection. However it also serves as a tool to enable higher memory speeds through its use in temperature compensation.

Once again due to the high speeds of GDDR5, it’s more sensitive to memory chip temperatures than previous memory technologies were. Under normal circumstances this sensitivity would limit memory speeds, as temperature swings would change the performance of the memory chips enough to make it difficult to maintain a stable link with the memory controller. By monitoring the temperature of the chips and re-training the link when there are significant shifts in temperature, higher memory speeds are made possible by preventing link failures.
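As a sketch of what such a scheme looks like (our own illustration; AMD hasn't detailed its actual thresholds or polling behavior), the controller only needs to re-train when the temperature has drifted far enough from the point of the last training:

```python
def retrain_points(temps_c, threshold_c=5.0):
    """Return the temperatures at which the link would be re-trained:
    whenever a reading drifts more than threshold_c degrees from the
    temperature at the last training (hypothetical policy and threshold)."""
    last_trained = temps_c[0]
    events = []
    for t in temps_c[1:]:
        if abs(t - last_trained) > threshold_c:
            events.append(t)     # fast link re-training would fire here
            last_trained = t
    return events

# A card warming up under load and then cooling back down at idle:
samples = [40, 42, 47, 55, 63, 66, 60, 52, 45]
print(retrain_points(samples))  # [47, 55, 63, 52, 45]
```

Note that the link is re-trained on the way down in temperature as well as on the way up; what matters is the drift since the last training, not heat per se.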

And while temperature compensation may not sound complex, that doesn’t mean it’s not important. As we have mentioned a few times now, the biggest bottleneck in memory performance is the bus. The memory chips can go faster; it’s the bus that can’t. So anything that can help maintain a link along these fragile buses becomes an important tool in achieving higher memory speeds.

Comments

  • SiliconDoc - Sunday, September 27, 2009 - link

    I'll be watching you for the very same conclusion when NVidia launches soft and paper.
    I'll bet ten thousand bucks you don't say it.
    I'll bet a duplicate amount you're a red rager fan, otherwise YOU'D BE HONEST, NOT HOSTILE !
  • rennya - Thursday, September 24, 2009 - link

It may be a paper launch in the US, but here in South East Asia I can already grab a Powercolor 5870 1GB if I so desire. Powercolor is quite aggressive here promoting their ATI 5xxx wares, just like Sapphire was when the 4xxx series came out.
  • SiliconDoc - Thursday, September 24, 2009 - link

    I believe you. I've also seen various flavors of cards not available here in the USA, banned by the import export deals and global market and manufacturer and vendor controls and the powers that be, and it doesn't surprise me when it goes the other way.
    Congratulations on actually having a non fake launch.
  • Spoelie - Wednesday, September 23, 2009 - link

    "The engine allows for complete hardware offload of all H.264, MPEG-2 and VC1 decoding".

    This has afaik never been true for any previous card of ATi, and I doubt it has been tested to be true this time as well.

    I have detailed this problem several times before in the comment section and never got a reply, so I'll summarize: ATi's UVD only decodes level 4 AVC (i.e. bluray) streams, if you have a stream with >4 reference frames, you're out of luck. NVIDIA does not have this limitation.
  • lopri - Wednesday, September 23, 2009 - link

    Yeah and my GTX 280 has to run full throttle (3D frequency) just to play a 720p content and temp climbs the same as if it were a 3D game. Yeah it can decode some *underground* clips from Japan, big deal. Oh and it does that for only H.264. No VC-1 love there. I am sure you'd think that is not a big deal, but the same applies to those funky clips with 13+ reference frames. Not a big deal. Especially when AMD can decode all 3 major codecs effortlessly (performance 2D frequency instead of 3D frequency)
  • rennya - Thursday, September 24, 2009 - link

    G98 GPUs (like 8400GS discrete or 9400 chipset) or GT220/G210 can also do MPEG2/VC-1/AVC video decoding.

    The GPU doesn't have to run full throttle either, as long as you stick to the 18x.xx drivers.
  • SJD - Wednesday, September 23, 2009 - link

    Ryan,

    Great article, but there is an inconsistency. You say that thanks to there only being 2 TMDS controllers, you can't use both DVI connectors at the same time as the HDMI output for three displays, but then go on to say later that you can use the DVI (x2), DP and HDMI in any combination to drive 3 displays. Which is correct?

    Also, can you play HDCP protected content (a Blu-Ray disc for example) over a panel connected to a Display Port connector?

    Otherwise, thanks for the review!
  • Ryan Smith - Wednesday, September 23, 2009 - link

    It's the former that is correct: you can only drive two TMDS devices. The article has been corrected.

    And DP supports HDCP, so yes, protected content will play over DP.
  • SJD - Friday, September 25, 2009 - link

    Thanks for clarifying that Ryan - It confirms what I thought.. :-)
  • chowmanga - Wednesday, September 23, 2009 - link

    I'd like to see a benchmark using an amd cpu. I think it was the Athlon II 620 article that pointed out how Nvidia hardware ran better on AMD cpus and AMD/ATI cards ran better on Intel cpus. It would be interesting to see if the 5870 stacks up against Nv's current gen with other setups.
