Lucid's Multi-GPU Wonder: More Information on the Hydra 100
by Derek Wilson on August 22, 2008 4:00 PM EST - Posted in GPUs
Barriers to Entry and Final Words
Depending on the patents Lucid holds, neither NVIDIA nor ATI may be able to build a competing piece of hardware/software for use in their own solutions. And then there is the question: what will NVIDIA and ATI attempt to do in order to be anticompetitive (err, I mean, to continue to promote their own multi-GPU solutions and the platforms surrounding them)?
Because both NVIDIA and ATI already engage in anticompetitive practices by artificially limiting the functionality of their hardware on competing platforms, it isn't a stretch to think they'll try something here as well. But can they break it?
Maybe, maybe not. At a really crappy level, they could detect whether the Hydra hardware is in the system and refuse to do anything 3D. If they wanted to be a little nicer, they could detect whether the Hydra driver is running and refuse to render 3D while it is active. Beyond that, there doesn't seem to be much room for the sort of thing they've been doing: the Lucid software and hardware are completely transparent to the game, the graphics driver, and the GPU hardware. None of those components needs to know anything about Hydra for it to work.
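As a purely hypothetical sketch of how crude that kind of block could be, consider the check below. The module name is invented (Lucid has not published its driver components); the point is only that detecting-and-refusing is trivial, while anything subtler is hard precisely because Hydra is transparent.

```cpp
#include <windows.h>

// Hypothetical sketch only: "lucid_hydra_shim.dll" is an invented name,
// not a real Lucid component. A vendor driver could simply refuse to
// initialize 3D whenever such a module is loaded in the process.
static bool hydraLayerPresent() {
    return GetModuleHandleA("lucid_hydra_shim.dll") != nullptr;
}

static bool allowD3DInit() {
    // The crude block described above: no Hydra, no 3D.
    return !hydraLayerPresent();
}
```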
As AMD and NVIDIA have to work closely with graphics card and motherboard vendors, they could try to strong-arm Lucid out of the market by threatening (overtly or not) the supply of their silicon to certain OEMs. This could be devastating to Lucid; we've already seen what the mere fear of an implied threat can do to software companies in the Assassin's Creed situation (when faced with the choice of applying an already available fix or pulling support for DX10.1, which only AMD supports, the developer pulled it). This sort of pressure seems like the largest unknown to us.
Of course, while this seems like an all-or-nothing tactic that would serve no purpose but to destroy the experience of end users, NVIDIA and ATI have lots of resources to work on this sort of "problem," and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate the other) that Microsoft and Intel got slammed with antitrust suits for very similar practices.
Beyond this, Lucid still needs to get motherboard OEMs to place the Hydra 100 on their boards, or graphics hardware vendors to build cards with the chip on them. This increases cost, and OEMs are very sensitive to cost increases. At the same time, a platform that can run both AMD and NVIDIA solutions in multi-GPU configurations has added value, as does a single-card multi-GPU solution that delivers better performance than even the ones from AMD and NVIDIA.
The parts these guys sell will still have to compete in the retail market, so they can't price themselves out of contention. More performance is great, but they have to worry about price/performance and their own costs. We think this will be more attractive to high-end motherboard vendors than anyone else. And we really hope Intel adopts it and uses it instead of nForce 100 or nForce 200 chips to enable flexible multi-GPU. Assuming it works, of course.
Anyway, Lucid's Hydra 100 is a really cool idea, and we really hope it works like Lucid says it will. Most of the theory seems sound, and while we've seen it in action, we need to put it to the test and look hard at latency and scaling. We really, really want to get excited. So we really, really need hardware.
57 Comments
jeff4321 - Sunday, August 24, 2008 - link
If you think that NVIDIA and AMD have been stagnant, you haven't seen the graphics industry change. The basic graphics pipeline hasn't changed; it simply got smaller. A current NVIDIA or ATI GPU probably has as much computational power as an SGI workstation from the '90s. GPGPU is a natural extension of graphics hardware. Once the graphics hardware becomes powerful enough, it starts to resemble a general-purpose machine, so you build it that way. It's possible because the design space for the GPU can do more (Moore's Law).

Since it's early in the deployment of the GPU as an application-defined co-processor, I would expect there to be competing APIs. Believe it or not, in the late eighties, x87 wasn't the only floating-point processor available for x86s. Intel's 387 was slower than Weitek's floating-point unit. Weitek lost because the next generation of CPUs started integrating floating point. Who will win? The team that has better development tools, or the team that exclusively runs the next killer app.
Dynamically changing between AFR and splitting the scene is hard to do. I'm sure ATI and NVIDIA have experimented with this in-house, and either they are doing it now or they have decided that the overhead of changing modes on the fly kills performance. How Lucid can do better than the designers of the device drivers and ASICs, I don't know.
Lucid Hydra is not competition for either NVIDIA or ATI. The Lucid Hydra chip is a mechanism for the principals of the company to get rich when Intel buys them to get access to Multi-GPU software for Larrabee. It'll be a good deal for the principals, but probably a bad deal for Intel.
Licensing Crossfire and SLI is a business decision. Both technologies cost a bundle to develop. Both companies want to maximize return.
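To make the AFR/split-frame trade-off raised above concrete, here is a minimal sketch of what a per-frame mode decision might look like. The structure and thresholds are invented for illustration; neither Lucid nor the GPU vendors have published how, or whether, they do this.

```cpp
// Minimal illustration of a per-frame AFR vs. split-frame choice.
// All names and thresholds here are invented; this is not vendor code.
enum class Mode { AFR, SplitFrame };

struct FrameProfile {
    double geometryMs;     // estimated vertex/geometry cost
    double fillMs;         // estimated pixel/fill cost
    bool   readsPrevFrame; // frame samples the previous frame's output
};

Mode chooseMode(const FrameProfile& f) {
    // Inter-frame dependencies serialize the GPUs and defeat AFR.
    if (f.readsPrevFrame)
        return Mode::SplitFrame;
    // In split-frame rendering each GPU still processes most geometry,
    // so geometry-bound frames favor AFR; fill-bound frames split well.
    return (f.fillMs > 2.0 * f.geometryMs) ? Mode::SplitFrame : Mode::AFR;
}
```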
AnnonymousCoward - Saturday, August 23, 2008 - link
I'm afraid this solution will cause unacceptable lag. If the lag isn't inherent, maybe the solution will require a minimum "max frames to render ahead / prerender limit". I don't buy their "negligible" BS answer. Does SLI require a minimum? I got the impression it does from what I've read in the past. I don't have SLI, and I use RivaTuner to set mine to "1".
Aethelwolf - Saturday, August 23, 2008 - link
Let's pretend, if only for a moment, that I was a GPU company interested in giving a certain other GPU company a black eye. And let's say I have this strategy where I design for the middle range and then scale up and down. I would be seriously haggling with Lucid right now to become a partner, with them supplying me, and pretty much only me (besides Intel), with their Hydra engine.

DerekWilson - Saturday, August 23, 2008 - link
that'd be cool, but lucid will sell more parts if they work with everyone. they're interested in making lots of money ... maybe amd and intel could do that for them, but i think the long term solution is to support as much as possible.
Sublym3 - Saturday, August 23, 2008 - link
Correct me if I am wrong, but isn't this technology still dependent on making the hardware specifically for each DirectX version? So when a new DirectX or OpenGL version comes out, not only will we have to update our video cards but also our motherboards at the same time?
Not to mention this will probably jack up the price on already expensive motherboards.
Seems like a step backwards to me...
DerekWilson - Saturday, August 23, 2008 - link
you are both right and wrong -- yes, they need to update the technology for each new directx and opengl release.
BUT
they don't need to update the hardware at all. the hardware is just a smart switch with a compositor.
to support a new directx or opengl version, you would only need to update the driver / software for the hydra 100 ...
just like a regular video card.
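To make the division of labor described above concrete, here is a rough model with every type and name invented for the example: the API-aware logic lives in the updatable driver, while the chip below it only routes work and composites results, so it never has to change per API revision.

```cpp
#include <cstdint>
#include <vector>

// Rough, hypothetical model of the split described above; all types
// and names are invented. The driver understands the 3D API, so a new
// DirectX/OpenGL version means a software update, not new silicon.
struct RenderTask { std::uint64_t commands; }; // opaque API work, simplified

// Software (driver): API-version-specific load balancing lives here.
int assignGpu(const RenderTask& t, int gpuCount) {
    return static_cast<int>(t.commands % gpuCount); // trivial stand-in balancer
}

// Hardware (modeled): a smart switch plus compositor. It forwards tasks
// and later merges finished regions into one frame, without ever
// parsing the graphics API itself.
void routeAndComposite(const std::vector<RenderTask>& tasks, int gpuCount) {
    for (const RenderTask& t : tasks) {
        int gpu = assignGpu(t, gpuCount);
        (void)gpu; // forward task to 'gpu' here; composite after completion
    }
}
```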
magao - Saturday, August 23, 2008 - link
There seems to be a strong correlation between Intel's claims about Larrabee and Lucid's claims about Hydra. This is pure speculation, but I wouldn't be surprised if Hydra is the behind-the-scenes technology that makes Larrabee work.
Aethelwolf - Saturday, August 23, 2008 - link
I think this is the case. Hydra and Larrabee appear to be made for each other. I won't be surprised if they end up mating. From a programmer's view, Larrabee is very, very exciting tech. If it fails in the PC space, it might be resurrected when next-gen consoles come along, since it is fully programmable and claims linear performance (thanks to Hydra?).
DerekWilson - Saturday, August 23, 2008 - link
i'm sure intel will love hydra for allowing their platforms to support linear scaling with multi-gpu solutions. but larrabee won't have anything near the same scaling issues that nvidia and amd have in scaling to multi-gpu -- larrabee may not even need this to get near-linear scaling in multi-gpu situations.
essentially they just need to build an smp system and it will work -- shared mem and all ...
their driver would need to optimize differently, but that would be about it.
GmTrix - Saturday, August 23, 2008 - link
If larrabee doesn't need hydra to get near-linear scaling, isn't hydra just providing a way for amd and nvidia to compete with it?