Where's The Physics: The State of Hardware Accelerated Physics
by Ryan Smith on July 25, 2007 4:00 PM EST - Posted in GPUs
When ATI and NVIDIA launched their first physics initiatives in 2006, they rallied behind Havok, the physics middleware provider whose software has powered a great number of PC games this decade. Havok in turn produced Havok FX, a separate licensable middleware package that used Shader Model 3.0 for calculating physics on supported GPUs. Havok FX was released in Q2 of 2006, and if you haven't heard about it you're not alone.
So far not a single game has shipped that uses Havok FX; plenty of games have shipped using the normal, entirely CPU-powered Havok middleware, but none with Havok FX. The only title we know of that has been announced with Havok FX support is Hellgate: London, which is due this year. However, we've noticed there has been next-to-no mention of this since NVIDIA's announcement in 2006, so make of that what you will.
Why any individual developer does or doesn't choose Havok FX will have its own answer, but there are a couple of common threads that we believe explain much of the situation. The first is pure business: Havok FX costs extra to license. We're not privy to the exact fee schedule Havok charges, but it's no secret PC gaming has been on a decline - it's a bad time to be spending more if it can be avoided. Paying for Havok FX isn't going to break the bank for the large development houses, but there are other, potentially cheaper options.
The second reason, and the one with the greater effect, is a slew of technical details that stem from using Havok FX. Paramount among these is that what the GPU camp is calling physics is not what the rest of us could call physics with a straight face. As Havok FX was designed, the results of physics simulations run on the GPU cannot be retrieved in any practical manner; as such, Havok FX is meant to generate "second-order" physics. Such physics are not related to gameplay and are inserted purely as eye-candy. A good example of this is Ghost Recon: Advanced Warfighter - setting aside for the moment that it was a PhysX-powered title rather than a Havok FX one - which used the PhysX hardware primarily for extra debris.
The problem with this, of course, is obvious, and Havok goes to a great deal of trouble in their Havok FX literature to make it clear. The extra eye-candy is nice, and it's certainly an interesting way to bypass the problem of lots of little things loading down the CPU (although Direct3D 10 has reduced the performance hit of this), but it also means that the GPU can't have any meaningful impact on gameplay. That doesn't make Havok FX entirely useless, since eye-candy does serve its purpose, but it's not what most people (ourselves included) envision when we think of hardware accelerated physics; we're looking for the next step in interactive physics, not more eye-candy.
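To make the first-order/second-order distinction concrete, here is a minimal structural sketch - our own hypothetical illustration, not Havok FX's actual API - of how a game ends up partitioning the work:

```cpp
// Hypothetical sketch of the first-order vs. second-order split.
// Names and structure are ours, not Havok FX's API.
struct GameState { /* players, projectiles, hit results, ... */ };

// First-order physics: runs where the results can be read back (the CPU),
// because collisions, ragdolls, and projectile hits feed game logic.
void stepGameplayPhysics(GameState& state, float dt)
{
    // Integrate rigid bodies, resolve collisions, report hits to game logic.
    // (Stubbed out; the point here is only the data flow.)
    (void)state; (void)dt;
}

// Second-order physics: debris, sparks, fluttering cloth. Simulated and
// rendered on the GPU; the results are never read back, so nothing here
// is allowed to influence the outcome of the game.
void stepEyeCandyPhysics(float dt)
{
    (void)dt; // fire-and-forget GPU work
}

void gameFrame(GameState& state, float dt)
{
    stepGameplayPhysics(state, dt);  // affects gameplay
    stepEyeCandyPhysics(dt);         // visual only
    // renderFrame(state); ...
}
```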
There's also a secondary issue that sees little discussion, largely because it's not immediately quantifiable, and that's performance. Because Havok FX does its work on the GPU, shader resources that would otherwise go towards rendering get reallocated to physics calculations, leaving the remaining resources to handle the original rendering load plus the additional eye-candy Havok FX generates. With the majority of new titles already GPU-limited, it's not hard to imagine this becoming a problem.
A Jetway board with 3 PCIe x16 slots. We're still waiting to put them to use.
Thankfully for the GPU camp, Havok isn't the only way to get some level of GPU physics; Shader Model 4.0 introduces some new options. Besides implementing Havok FX-style effects as custom code, with proper preparation the geometry shader can be used to do second-order physics much as Havok FX does. The Call of Juarez technology demonstration, for example, uses this technique for its water effects. That said, using the geometry shader carries the same limitation as Havok FX: the resulting data can't be retrieved for first-order physics.
The second, and by far more interesting, use of new GPU technology is exploiting GPGPU techniques to do physics calculations for games. ATI and NVIDIA provide the CTM and CUDA interfaces respectively to let developers write general-purpose computing code for their GPUs, and although the primary use of GPGPU technology is in the secondary market of high-performance research computing, it's possible to use this same technology with games. NVIDIA is marketing this under its Quantum Effects initiative, separating it from the company's earlier Havok-powered SLI Physics initiative.
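For a sense of what this looks like in practice, here is a bare-bones CUDA sketch of the sort of embarrassingly parallel work involved - one thread integrating one particle per simulation step. This is purely our own illustration, not code from NVIDIA's Quantum Effects materials or any shipping game:

```cuda
// Minimal sketch of GPGPU-style physics: one thread integrates one particle.
// Hypothetical example only; names and structure are ours.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrateParticles(Particle* particles, int count,
                                   float3 gravity, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    Particle p = particles[i];

    // Semi-implicit Euler: update velocity, then position.
    p.vel.x += gravity.x * dt;
    p.vel.y += gravity.y * dt;
    p.vel.z += gravity.z * dt;
    p.pos.x += p.vel.x * dt;
    p.pos.y += p.vel.y * dt;
    p.pos.z += p.vel.z * dt;

    // Crude ground plane at y = 0: bounce with some energy loss.
    if (p.pos.y < 0.0f) {
        p.pos.y = 0.0f;
        p.vel.y = -p.vel.y * 0.5f;
    }

    particles[i] = p;
}

int main()
{
    const int count = 65536;
    Particle* d_particles = nullptr;
    cudaMalloc(&d_particles, count * sizeof(Particle));
    cudaMemset(d_particles, 0, count * sizeof(Particle));

    const float3 gravity = make_float3(0.0f, -9.8f, 0.0f);
    const int threads = 256;
    const int blocks = (count + threads - 1) / threads;

    // One simulation step; a game would call this once per frame and hand the
    // buffer to the renderer (e.g. via graphics interop) rather than copy it back.
    integrateParticles<<<blocks, threads>>>(d_particles, count, gravity, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(d_particles);
    return 0;
}
```

Notably, nothing in this model forbids copying results back to the CPU; whether a developer reads them back for first-order use or leaves them on the GPU as eye-candy becomes a design and performance trade-off rather than a hard limitation of the interface.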
Unfortunately, the tools for all of these technologies are virtually brand new, so games using GPGPU techniques are going to take some time to arrive. Their arrival should roughly track that of games making serious use of DirectX 10, including the lag period where games still need to support older hardware and hence can't take full advantage of GPGPU techniques. The biggest question here is whether any developers using GPGPU techniques will end up using the GPU for first-order physics or solely for second-order effects.
It's due to all of the above that the GPU camp has been so quiet about physics as of late. Given that the only commercially ready GPU-accelerated physics technology is limited to second-order physics, and only one game using it is due to be released this year, there's simply not much to be excited about at the moment. If serious GPU-accelerated physics is to arrive, it's going to be at least another video card upgrade away.
32 Comments
Axion22 - Wednesday, August 1, 2007
Sorry PhysX, you're toast. Multi-core CPUs will have you beat, and by manufacturers who have much more influence in the industry. Even if it did catch on, AMD and Nvidia would just add support and bury you in that market segment.
Ageia would do better trying to get in on the console action. At least there they will have a customer base.
Zak - Wednesday, August 29, 2007
Yeah, I never liked the idea from the beginning. Count me as one of those who'd rather spend an extra $200 on a faster CPU than on a dedicated physics card. What are the chances that many games will use PhysX in a meaningful way, and how long will PhysX be around? And if PhysX is able to run in software mode on one core of a multi-core CPU, I'd rather go that way. Z.
0roo0roo - Sunday, July 29, 2007
The simple fact is gamers would rather buy a nicer CPU with more cores with that money. If those cores can still deliver only slightly less physics than the add-on, people are willing to live with it. We aren't in a desperate rush to get physics; with the rate at which CPUs keep progressing it won't matter, we'll get it regardless, so why worry. It's not like Sony or Nintendo are quaking in their boots at the insane games that PhysX can create :P The compelling need is not apparent. Digital worlds aren't detailed enough for it to matter, and people still don't expect things to work in games the same way as in reality; physics is limited to blowing stuff up and stacking boxes. And as I said, for online play it won't ever be set up to require PhysX cards. If it affects gameplay then you can't play together with non-PhysX players, so to guarantee compatibility it will be limited to effects and it becomes nothing more than eye candy again.
Bensam123 - Saturday, July 28, 2007
There are quite a few more games available that feature PhysX than just GRAW and GRAW2: http://ageia.com/physx/titles.html
Not on the list is Rise of Legends. I don't know if it's official, but it installs PhysX when installed and has quite robust physics in-game (ragdolls in an RTS, land deformation, unit movement, etc.).
What I really don't understand, and what this article didn't answer, is WHY game developers would pay for a license for an SDK when you can get a better, more user-friendly, better-supported, more robust, and, finally, free SDK. It just doesn't make sense to me.
Developers have nothing to lose from using PhysX, but have a lot to gain.
FYI for people that can't read the article: PhysX has a software mode it operates in. The software mode is natively made to run on more than one core. When it all comes down to it, even if you are an advocate for doing physics on a spare core, PhysX already does that.
commandar - Friday, July 27, 2007
Wow, this article definitely isn't up to the quality level I generally expect from Anandtech. Typos everywhere and then gems like this: "Being embarrassingly parallel in nature, physics simulations aren't just a good match for GPUs/PPUs with their sub-processors, but a logical fit for multi-core CPUs."
What you say? For one, all processors are not created equal. CPUs are awesome for general-purpose work, but a GPU will eat its lunch when it comes to vector math. GPUs are massively parallel vector processors. Physics math generally *is* vector math. While there are problems with doing physics processing which others have already pointed out, suggesting that CPUs are better suited to the job because of parallelism is baffling.
Ryan Smith - Friday, July 27, 2007
Reposted from earlier in the comments: I think you're misinterpreting what I'm saying. GPUs are well suited to embarrassingly parallel applications; however, with the core war on, you can now put these tasks on a CPU which, while not as fast at FP as a GPU/PPU, is quickly catching up thanks to having multiple cores and how easy it is to put embarrassingly parallel tasks on such a CPU. GPUs are still better suited, but CPUs are becoming well enough suited that the GPU advantage is being chipped away.
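To illustrate how little effort that takes, here is a minimal sketch (our own illustration, not code from any game or engine) of splitting a particle update across however many CPU cores are available:

```cpp
// Minimal illustration of an embarrassingly parallel physics update spread
// across CPU cores. Hypothetical sketch only.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Integrate one contiguous slice of the particle array.
void integrateRange(std::vector<Particle>& particles, std::size_t begin,
                    std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i) {
        Particle& p = particles[i];
        p.vy += -9.8f * dt;   // gravity
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}

// Split the array into one slice per hardware thread. No slice touches
// another, which is what makes the workload "embarrassingly parallel".
void integrateAll(std::vector<Particle>& particles, float dt)
{
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (particles.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
```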
taterworks - Saturday, July 28, 2007
It's still not parallel enough. A modern CPU has only four cores (the Cell/BE doesn't count since it's not a CPU for PCs), but effective physics processing requires much more parallelism. The Ageia card is better suited to physics processing than any CPU that we'll be able to buy in the next five years. In addition, Ageia identifies a few shortcomings of GPUs when applied to physics calculations. GPUs can't perform read-modify-write operations in their shader units -- they can only perform read operations. In addition, GPUs aren't optimized for applications where each shader unit must execute different code -- they're designed to execute the same code, but on different parts of the image. As a result, some shader units finish their calculations before other shader units and simply sit idle instead of processing the next batch of data. The problem here is that the parallelism advantages become hamstrung by inefficiency. In the end, physics computations are too substantially different from graphics computations for one optimized processing unit to be applied to a half-hearted form of the other.
What's to blame for Ageia's failure? I think there's a fundamental problem with the way gamers think about an immersive gaming experience. Gamers are too preoccupied with resolution, texture and model detail, lighting, and frame rates to notice that objects in games don't behave like real objects. The focus is on visual realism, not physical realism, but both are required for a true virtual reality experience. In addition, the PhysX hardware was too expensive from the start -- it had to be cheaper than GPUs in order for anyone to take a chance on it. A $99 PhysX card was desperately needed last year.
JarredWalton - Sunday, July 29, 2007
Of course, all of what you're saying assumes that we actually need that much physics processing power. I remember reading about a flight simulator a while back where they described all of the complex calculations being done to make the flight model as realistic as possible. After the lengthy description of the surface dynamics calculations and whatever else was involved in making the planes behave realistically, the developer then made the comment that all of that used less than 5% of the CPU power. Most of the remaining CPU time was used for graphics.
Granted, that was Flight Unlimited and it was a while ago, but the situation is still pretty similar to what we have today. As complex as physics might be if you model it exactingly, it's really not necessary, and the graphics still demand the majority of the CPU power. AI and physics are the other things the CPU handles. People can come up with situations (i.e. Cell Factor) where hardware physics is necessary to maintain acceptable performance. The real question is whether those situations are really necessary in order to deliver a compelling game.
Right now, games continue to be predominantly single-core - even the most physics-oriented games (Half-Life 2?) don't use multiple cores. And physics calculations aren't really consuming a majority of even one core! Now, give physics two cores that only need to do physics (on a quad-core system), and do you see any reason that any current game is going to need a PPU? Especially when the cost of the PPU card is about as much as the cost of a quad-core CPU?
I don't, and I don't expect to before AGEIA is pretty much gone. Maybe Intel or AMD will buy their intellectual property and incorporate the tech into a future CPU. Short term, I just don't think they're relevant.
0roo0roo - Friday, July 27, 2007
Wouldn't they have to create PhysX-only servers if the physics affect gameplay? They certainly aren't going to require PhysX to play online... I'm guessing it will only be limited to effects for the most part because of this.
Bladen - Sunday, July 29, 2007
I'd imagine that hardware physics would not be much more taxing than non-hardware physics on servers, except for RAM. I'd say that the server would make everyone's computer do the number crunching. The server would just say "player 1 fires a rocket from this position at this angle (and thus hits this wall like so)". Every player's individual PhysX card would do its own processing and calculate the same answer.
This is purely speculation though; I have no real knowledge of video game programming (or any kind of programming, for that matter).