Tags: comparison, physics, nvidia, physx

PhysX for massive performance via GPU?


I recently compared some of the physics engines out there for simulation and game development. Some are free, some are open source, some are commercial (one is even very commercial, $$$$): Havok, ODE, Newton (aka oxNewton), Bullet, PhysX, and the "raw" built-in physics in some 3D engines.

At some stage I came to a conclusion, or rather a question: why should I use anything but NVIDIA PhysX if I can make use of its amazing performance (when I need it) thanks to GPU processing? With future NVIDIA cards I can expect further improvements independent of the regular CPU generation steps. The SDK is free and it is available for Linux as well. Of course there is a bit of vendor lock-in, and it is not open source.

What's your view or experience? If you were starting development right now, would you agree with the above?

cheers


Solution

  • Disclaimer: I've never used PhysX, my professional experience is restricted to Bullet, Newton, and ODE. Of those three, ODE is far and away my favorite; it's the most numerically stable and the other two have maturity issues (useful joints not implemented, legal joint/motor combinations behaving in undefined ways, &c).

    You alluded to the vendor lock-in issue in your question, but it's worth repeating: if you use PhysX as your sole physics solution, people using AMD cards will not be able to run your game (yes, I know it can be made to work, but it's not official or supported by NVIDIA). One way around this is to define a failover engine, using ODE or something similar on systems with AMD cards. This works, but it doubles your workload. It's seductive to think that you'll be able to hide the differences between the two engines behind a common interface and write the bulk of your game physics code once, but most of your difficulties with game physics will be in dealing with the idiosyncrasies of your particular engine: deciding on values for things like contact friction and restitution. Those values don't have consistent meanings across physics engines and (mostly) can't be formally derived, so you're stuck finding good-looking, playable values by experiment (see the sketch at the end of this answer). With PhysX plus a failover you're doing all that scut work twice.

    At a higher level, I don't think any of the stream processing APIs are fully baked yet, and I'd be reluctant to commit to one until, at the very least, we've seen how the customer reaction to Intel's Larrabee shapes people's designs.

    So far from seeing PhysX as the obvious choice for high-end game development, I'd say it should be avoided unless either you don't think people with AMD cards make up a significant fraction of your player base (highly unlikely) or you have enough coding and QA manpower to test two physics engines (more plausible, though if your company is that wealthy I've heard good things about Havok). Or, I guess, if you've designed a physics game with performance demands so intense that only streaming physics can satisfy you - but in that case, I'd advise you to start a band and let Moore's Law do its thing for a year or two.
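
    To make the doubled-workload point concrete, here is a minimal C++ sketch of the kind of facade people usually reach for. Everything in it is hypothetical: PhysXBackend and OdeBackend are stubs standing in for real SDK wrappers (none of the names come from either SDK), and the friction/restitution numbers are placeholders for values you would have to find by experiment on each engine.

    ```cpp
    // Hypothetical sketch of a common physics facade with per-engine material
    // tuning. The backend classes are stubs; they do not use any real PhysX
    // or ODE API calls.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <unordered_map>

    // Material values that, as noted above, rarely mean the same thing twice.
    struct Material {
        float friction;
        float restitution;
    };

    class PhysicsBackend {
    public:
        virtual ~PhysicsBackend() = default;
        virtual std::string name() const = 0;
        virtual void setMaterial(const std::string& id, const Material& m) = 0;
        virtual void step(float dt) = 0;
    };

    // Stub for a GPU PhysX wrapper.
    class PhysXBackend : public PhysicsBackend {
        std::unordered_map<std::string, Material> materials_;
    public:
        std::string name() const override { return "PhysX (GPU)"; }
        void setMaterial(const std::string& id, const Material& m) override {
            materials_[id] = m;   // would forward to the real SDK here
        }
        void step(float /*dt*/) override { /* advance the simulation on the GPU */ }
    };

    // Stub for the CPU failover (e.g. ODE).
    class OdeBackend : public PhysicsBackend {
        std::unordered_map<std::string, Material> materials_;
    public:
        std::string name() const override { return "ODE (CPU failover)"; }
        void setMaterial(const std::string& id, const Material& m) override {
            materials_[id] = m;   // would forward to the real SDK here
        }
        void step(float /*dt*/) override { /* advance the simulation on the CPU */ }
    };

    // Pick a backend at startup based on the detected GPU vendor.
    std::unique_ptr<PhysicsBackend> makeBackend(bool hasNvidiaGpu) {
        if (hasNvidiaGpu)
            return std::make_unique<PhysXBackend>();
        return std::make_unique<OdeBackend>();
    }

    int main() {
        auto physics = makeBackend(/*hasNvidiaGpu=*/false);
        std::cout << "Using " << physics->name() << "\n";

        // The scut work described above: the same surface needs separately
        // tuned numbers on each engine, found by experiment (placeholders here).
        if (physics->name().rfind("PhysX", 0) == 0)
            physics->setMaterial("crate", {0.6f, 0.3f});
        else
            physics->setMaterial("crate", {0.8f, 0.1f});

        for (int i = 0; i < 60; ++i)
            physics->step(1.0f / 60.0f);
    }
    ```

    The facade hides which engine advances the simulation, but it can't hide the tuning: the same crate needs different numbers on each backend, which is exactly the work you end up doing twice.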