Physics Engines revisited
nVidia have just announced using SLI to do physics processing, and Charlie Demerjian at the Inquirer has a few words to say about it. I understand his frustration that little seems to have happened in this area, but I'm not sure about his conclusions.
Ageia have spent an age developing their card, and it's still not here. Moreover, most of the games floating around (the big ones at any rate) have the little Havok logo on them. PhysX seems to be less visible - 3DMark uses it, I think.
With no cards or drivers it's very hard to see what difference it will make on the ground. So where do I disagree?
I reckon the first round of physics effects in games will look exactly as nVidia describes them, regardless of the tech - eye candy. Many developers have already pointed out that writing an engine in which physics can be dynamically scaled in complexity is hard. Think about it: if one player had to dodge 10x the rocks of someone with a lower-spec machine, it would be quite hard to balance the gameplay. Not being able to hit someone because your machine was adding wind to your bullets would also quickly become tiresome.
Having an Ageia or a GeForce will probably lead to eye-candy improvements in the first generation of games, regardless of their underlying strengths.
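As a rough illustration of what I mean (purely hypothetical numbers and names, not from any real engine), the kind of scaling I'd expect looks like this: the cosmetic debris count changes with your hardware, while anything that affects gameplay stays the same for everyone.

```python
# Hypothetical sketch (not from any shipping engine): scale only the cosmetic
# physics load with the player's hardware, and keep anything that affects
# gameplay identical for everyone, so balance doesn't depend on a GPU or PPU.

GAMEPLAY_DEBRIS = 20          # rocks you actually have to dodge - same for all players

COSMETIC_DEBRIS_BY_TIER = {   # purely visual rubble, dust, sparks, etc.
    "software": 50,
    "gpu": 500,
    "gpu_sli_or_ppu": 5000,
}

def debris_budget(hardware_tier):
    """Return (gameplay_objects, cosmetic_objects) for this machine."""
    return GAMEPLAY_DEBRIS, COSMETIC_DEBRIS_BY_TIER.get(hardware_tier, 50)

print(debris_budget("gpu_sli_or_ppu"))   # (20, 5000) - more eye candy, same game
```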
The second disagreement is around nVidia offering a halfway solution, and assumptions about what they can achieve. They have total control over a huge, massively scalable FPU. They know it inside and out, and expose the bits required to draw pretty pictures efficiently. They can probably program it to do many things that the current drivers don't support.
I'm not saying they won't have limitations - the design was focused on graphics, and telling it to go and do something else is bound to cause problems. However, guesses at how flexible it can be are pie-in-the-sky speculation. The software is the layer that makes it seamless, and the first drivers may well be kludgy - supporting only physics or only graphics. That doesn't mean it always has to be that way - even on current-generation cards.
Unfortunately it's also likely that nVidia and ATI will want to gouge people for a "physics accelerator", and so won't invest too much time in producing a good solution on current hardware.
Make no mistake though, GFX cards can do physics sums whilst drawing graphics - check out the GPGPU site for lots of interesting things that can be written using current-gen hardware. The SLI requirement is just a trick to drive sales...
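To give a feel for what those "physics sums" look like, here's a minimal CPU-side sketch, with NumPy standing in for the GPU's parallel FPUs: the same trivial arithmetic applied to thousands of objects at once, which is exactly the shape of work shader hardware is good at.

```python
# Rough CPU-side analogue of GPGPU-style physics: one simple arithmetic step
# applied to every object in parallel, with no per-object loop. On a GPU this
# is the sort of thing that maps straight onto the shader units.
import numpy as np

n = 10_000                                   # number of rigid bodies
pos = np.random.rand(n, 3).astype(np.float32)
vel = np.zeros((n, 3), dtype=np.float32)
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)                  # one 60 Hz frame

# One Euler integration step for all 10,000 objects at once.
vel += gravity * dt
pos += vel * dt

print(pos.shape, pos.dtype)                  # (10000, 3) float32
```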
The most interesting thing about nVidia's announcement, for me, was Havok being the choice - and that comes down to content. Half-Life 2, Far Cry, Painkiller and numerous others use Havok. Optimizing it will mean that almost all FPSs should become much smoother when you suddenly decide to unload a clip into a stack of barrels.
Sadly, I'm not sure many benchmarks really apply physics to their timedemos, so the hardware review sites might initially give it a bad review. Don't believe them. The one time you *don't* want your game slowing down is when things get complicated!
Finally, I don't agree with Charlie's memory-limits argument. 10,000 objects at 50 bytes per object - probably more than you'd need to describe bounding boxes, weights, densities etc. - is only half a meg.
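Here's the sum spelled out (the 50-byte figure is my own guess at a plausible per-object footprint):

```python
# Back-of-the-envelope check: 10,000 objects at ~50 bytes each.
objects = 10_000
bytes_per_object = 50     # e.g. 6 floats for a bounding box, plus mass, density, flags

total_bytes = objects * bytes_per_object
print(total_bytes, "bytes =", total_bytes / 1024, "KB")   # 500000 bytes, roughly half a meg
```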
So I'm back to waiting. I want ATI to announce something similar, as I have an ATI X1600 and don't really want to buy another GFX card straight away! Hopefully we'll soon see some fruits of all this talk, 'cos the PR-hype war is starting to wear thin...