Floating point errors in collision routines

Dave
12-01-2004
Hi folks,

I am trying to develop a routine that will handle sphere-sphere and
sphere-triangle collisions and interactions. My aim is to develop a
quake style collision engine where a player can interact with a rich
3D environment. Seem to be 90% of the way there! My problems are
related to calculations where the result tends to zero (or another
defined limit.)

Have loads of cases where this kind of interaction occurs but this one
is as tricky (?) as any...

When a moving sphere approaches a triangulated plane I am using a
sphere-plane collision routine to see if a collision will occur. All
goes well until I start to have near parallel interactions (e.g. when
the sphere has sufficient energy to collide with, and then slide along
the plane.) During the next pass of the collision routine the velocity
of the sphere is perpendicular (in a single plane) to the face normal
of the previous colliding plane. A dot product between the velocity of
the sphere and the normal of the colliding plane should be zero. It's
close to zero but how close is close enough to warrant no further
collision? A "need to collide" case occurs where the sphere is just
above the plane with a velocity almost parallel to the plane BUT just
on a collision course. The above dot product will be very close to
zero again, but this time there should be a collision with this plane!
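To make that concrete, the test boils down to something like this (a stripped-down sketch, not my actual code):

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Signed distance from the sphere centre to the plane (unit normal assumed).
// Compared against the sphere radius to tell "just above" from "touching".
double signedDistance(const Vec3& centre, const Vec3& normal, double planeD)
{
    return dot(centre, normal) + planeD;
}

// The awkward part: "approach" should be exactly zero when the sphere is
// sliding along the plane, but in practice it is only nearly zero, so the
// choice of EPSILON decides between "slide" (no collision) and "collide".
bool approachingPlane(const Vec3& velocity, const Vec3& normal)
{
    const double EPSILON = 1e-9;              // how close is close enough?
    double approach = dot(velocity, normal);  // negative = moving towards plane
    return approach < -EPSILON;
}
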

Since I need to account for glancing off faces, approaching faces at
right angles, interacting with steps, falling off edges, etc., the vector
operations that determine the final position and velocity of the sphere
need to be as accurate as possible.

I guess my question (sorry for the waffle) is how to minimise rounding
errors in floating point operations?

So far I've tried to account for the errors (or eliminate them) by:

- Using double precision declarations and calculations
- Using the epsilon error on the variables to account for known
internal storage errors

Unable to eliminate or account for the errors I then set about trying
to use flags to ignore colliding objects from future collision during
the current recursive collision routine call. Helped in some cases so
started to try to account for all near critical iterations using
flags.
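By flags I mean something along these lines: the recursive call carries a record of faces it has already collided with, so a near-zero dot product can't re-trigger them during the same call (a simplified sketch with made-up helper names, not my real code):

#include <cstddef>
#include <set>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c, normal; };
struct Sphere { Vec3 centre, velocity; double radius; };

// Placeholder for the real sphere/triangle test and response.
bool collideAndRespond(Sphere&, const Triangle&) { return false; }

// Resolve collisions recursively, skipping faces already handled in this call.
void resolveCollisions(Sphere& sphere, const std::vector<Triangle>& faces,
                       std::set<std::size_t>& handled, int depth)
{
    if (depth > 5) return;                        // cap the recursion depth
    for (std::size_t i = 0; i < faces.size(); ++i)
    {
        if (handled.count(i)) continue;           // "flagged": ignore this face
        if (collideAndRespond(sphere, faces[i]))  // collision found and resolved
        {
            handled.insert(i);
            resolveCollisions(sphere, faces, handled, depth + 1);
            return;
        }
    }
}
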

Getting close to a general solution now but my once clear simple
routine has turned into a logical nightmare!

When working with numbers where accuracy is paramount should I be
using a different approach? I've considered the following:

- Using a quantised system where all values have to fall along a
virtual 3D grid
- Using sin and cos tables to support the above goal of quantisation
- Using integer mathematics
- Use a high precision custom storage class for calculations (speed is
of the essence so might not be an option?)

Perhaps I am worrying too much! My current "effort" seems to handle
interactions in a similar, if much more shaky, way to Unreal
Tournament 2004. Kind of clunky at times, stops occasionally and seems
to have problems jumping on steps!

I'm working on Windows 2000 using Visual Studio 6 and .net enterprise
architect 2003 (c++). It's a DirectX application if that makes any
difference?

Any advice would be really appreciated.

Many thanks,

Dave
 
Greg Schmidt
12-01-2004
On 1 Dec 2004 12:11:36 -0800, Dave wrote:

> I guess my question (sorry for the waffle) is how to minimise rounding
> errors in floating point operations?
>
> So far I've tried to account for the errors (or eliminate them) by:
>
> - Using double precision declarations and calculations


This will certainly decrease your rounding errors. You'll have to
measure whether any speed decrease is worth the mathematical
improvement.

> - Using the epsilon error on the variables to account for known
> internal storage errors


Do you mean using something like
if (std::fabs(result) < epsilon) result = 0;
That's something I've had to do on occasion. (epsilon usually around
10^-8 I think, but don't quote me on that.)
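A fixed cutoff like that only really behaves when all your values sit around the same magnitude. If they don't, a tolerance that scales with the operands is safer; something along these lines (a rough sketch, tune the constants to your own data):

#include <algorithm>
#include <cmath>

// Treat two doubles as equal if they differ by less than a tolerance that
// scales with their magnitude, with a small absolute floor for values near 0.
bool nearlyEqual(double a, double b,
                 double relTol = 1e-9, double absTol = 1e-12)
{
    double diff = std::fabs(a - b);
    double scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= std::max(absTol, relTol * scale);
}

// e.g. if (nearlyEqual(dot(velocity, normal), 0.0)) { /* treat as parallel */ }
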

> Unable to eliminate or account for the errors I then set about trying
> to use flags to ignore colliding objects from future collision during
> the current recursive collision routine call. Helped in some cases so
> started to try to account for all near critical iterations using
> flags.


This might also be a useful optimization trick, if it can often replace
a bunch of floating point calculations with a single boolean comparison.

> Getting close to a general solution now but my once clear simple
> routine has turned into a logical nightmare!


That does tend to be the way of these things. 99% of all situations are
handled very simply, and then you need a mass of special cases for the
last 1%.

> When working with numbers where accuracy is paramount should I be
> using a different approach? I've considered the following:
>
> - Using a quantised system where all values have to fall along a
> virtual 3D grid
> - Using sin and cos tables to support the above goal of quantisation
> - Using integer mathematics


These three will probably improve speed but decrease accuracy.

> - Use a high precision custom storage class for calculations (speed is
> of the essence so might not be an option?)


This could increase accuracy but decrease speed.

> I'm working on Windows 2000 using Visual Studio 6 and .net enterprise
> architect 2003 (c++).


Have you investigated the Visual Studio "improve floating point
consistency" compilation option? In a couple of projects I've worked
on, it has made a significant improvement. I didn't notice any speed
degradation, but my apps were not as computationally intense as yours
and didn't have real-time requirements.

--
Greg Schmidt
Trawna Publications http://www.trawna.com/
 
John Nagle
12-02-2004
Trying to converge a floating point calculation all the way to
zero is fundamentally futile. You get characteristic underflow
before you get zero.

In general, bear in mind that subtracting two large numbers
to get a tiny result is inherently prone to roundoff error. You
must design your algorithms to avoid that.
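A tiny illustration in single precision (the numbers are arbitrary):

#include <cstdio>

int main()
{
    // Each value is stored to roughly 7 significant decimal digits as a
    // float, so almost nothing is left to represent their difference.
    float a = 123456.78f;
    float b = 123456.70f;
    float diff = a - b;   // mathematically 0.08; the stored result is off by a couple of percent
    std::printf("%.6f\n", diff);
    return 0;
}

The fix is algebraic: rearrange the computation so the subtraction happens
between quantities that are already small.
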

Some collision detection algorithms, especially early versions
of GJK, are badly behaved when two faces are very close to
parallel. Of course, if you're doing physics right, objects
in contact come to rest in face-parallel situations, so the
near-parallel situation gets explored very thoroughly.
Newer collision engines, like (I think) Gino's SOLID, get
it right.

Developing collision algorithms that don't have floating
point precision problems yet don't "cheat" on contact analysis
is hard, but quite possible. See my US patent #6,067,096, which
covers the first "ragdoll physics" that worked. It's necessary
to be very careful about the limitations of floating point.
But once you get it right, it works very well.

As I usually tell people, either buy a physics engine
(probably Havok), figure out some way to make gameplay
work with lousy physics, or expect to spend a few years
on the problem.

John Nagle
Animats

Dave
12-03-2004
Thanks for your comments, Greg.

Greg Schmidt wrote:
> On 1 Dec 2004 12:11:36 -0800, Dave wrote:
>
> > I guess my question (sorry for the waffle) is how to minimise rounding
> > errors in floating point operations?
> >
> > So far I've tried to account for the errors (or eliminate them) by:
> >
> > - Using double precision declarations and calculations

>
> This will certainly decrease your rounding errors. You'll have to
> measure whether any speed decrease is worth the mathematical
> improvement.
>
> > - Using the epsilon error on the variables to account for known
> > internal storage errors

>
> Do you mean using something like
> if (std::fabs(result) < epsilon) result = 0;
> That's something I've had to do on occasion. (epsilon usually around
> 10^-8 I think, but don't quote me on that.)
>


That's exactly what I've been doing! Thinking on this a bit more I
guess it's not really useful since it's only accounting for the
internal errors in the floating point operations and not the actual
errors due to the limitation of the storage class?

> > Unable to eliminate or account for the errors I then set about trying
> > to use flags to ignore colliding objects from future collision during
> > the current recursive collision routine call. Helped in some cases so
> > started to try to account for all near critical iterations using
> > flags.

>
> This might also be a useful optimization trick, if it can often replace
> a bunch of floating point calculations with a single boolean comparison.
>


Certainly did! My first attempt involved testing whether the sphere
collided with everything in the environment. About a hundred refinements
later, the routine only does any involved maths for a handful of
triangles. Using flags to help control critical situations has also
been useful for performance, but not so useful for stopping my objects
slipping through the environment!

> > Getting close to a general solution now but my once clear simple
> > routine has turned into a logical nightmare!

>
> That does tend to be the way of these things. 99% of all situations are
> handled very simply, and then you need a mass of special cases for the
> last 1%.
>


Guess that's why there are only a handful of quick, robust collision
engines out there.

> > When working with numbers where accuracy is paramount should I be
> > using a different approach? I've considered the following:
> >
> > - Using a quantised system where all values have to fall along a
> > virtual 3D grid
> > - Using sin and cos tables to support the above goal of quantisation
> > - Using integer mathematics

>
> These three will probably improve speed but decrease accuracy.
>


Why so? Working with a quantised system should enable the position and
velocity of an object to be "forced" into alignment. For a simple
environment where objects are aligned with the quantised system I'm
sure such an approach would work (dull game though!). For a complex
environment, operations like calculating the point of collision between
a sphere and an off-axis triangle face would be very tricky to
quantise though!
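The sort of quantisation I mean is just snapping every coordinate back onto a fixed lattice after each update, along the lines of (hypothetical helper, spacing picked arbitrarily):

#include <cmath>

// Snap a coordinate onto a virtual grid with the given spacing,
// e.g. spacing = 1.0 / 1024.0 for a fairly fine 3D lattice.
double snapToGrid(double value, double spacing)
{
    return std::floor(value / spacing + 0.5) * spacing;
}
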

> > - Use a high precision custom storage class for calculations (speed is
> > of the essence so might not be an option?)

>
> This could increase accuracy but decrease speed.
>


And I'm not really that sure of the benefits. I think I'm stuck trying
to represent what are ultimately irrational numbers. No storage class
is going to help here!

> > I'm working on Windows 2000 using Visual Studio 6 and .net enterprise
> > architect 2003 (c++).

>
> Have you investigated the Visual Studio "improve floating point
> consistency" compilation option? In a couple of projects I've worked
> on, it has made a significant improvement. I didn't notice any speed
> degradation, but my apps were not as computationally intense as yours
> and didn't have real-time requirements.


Did have a look at this under VS6 but it didn't help too much.
Accuracy marginally improved but still below what is required. Will
have a look to see if .net is any better.
 
Dave
12-05-2004
John Nagle wrote:
> Trying to converge a floating point calculation all the way to
> zero is fundamentally futile. You get characteristic underflow
> before you get zero.
>
> In general, bear in mind that subtracting two large numbers
> to get a tiny result is inherently prone to roundoff error. You
> must design your algorithms to avoid that.
>


Have done this... up to a point! Within each function I have tried to
avoid any such calculations but my code is such a mess that I am
probably doing all sorts of dodgy calculations. Certainly lots of
vector normalisation which I really must tidy up. I can see lots of
scope for improvement here but also many cases where subtracting two
large numbers is pretty much inevitable. When you say two large
numbers do you mean two numbers with many decimal places or just large
numbers? Would it help to change the scale of the environment? Do you
know if floating point operations are generally more accurate within a
certain range of numbers? Guessing it's not important where the
decimal place sits as every digit needs to be accounted for!
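I suppose I can partly answer that last one myself by printing the gap between adjacent representable doubles at a few magnitudes (a quick test, assuming the compiler provides nextafter):

#include <cmath>
#include <cstdio>

int main()
{
    // The spacing between adjacent doubles ("1 ulp") grows with the value,
    // but the relative spacing stays roughly constant (on the order of 1e-16).
    double values[] = { 0.001, 1.0, 1000.0, 1000000.0 };
    for (int i = 0; i < 4; ++i)
    {
        double v = values[i];
        double ulp = std::nextafter(v, 2.0 * v) - v;
        std::printf("value %g  ulp %g  relative %g\n", v, ulp, ulp / v);
    }
    return 0;
}

So it's the relative precision that's fixed, which suggests rescaling the
whole environment won't buy much on its own; keeping subtracted quantities
of similar size is what matters.
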

Need to dig out those old maths notes on how errors propagate through a
calculation.

> Some collision detection algorithms, especially early versions
> of GJK, are badly behaved when two faces are very close to
> parallel. Of course, if you're doing physics right, objects
> in contact come to rest in face-parallel situations, so the
> near-parallel situation gets explored very thoroughly.
> Newer collision engines, like (I think) Gino's SOLID, get
> it right.
>


I think I'm doing the physics right! For objects that have specific
frictional resistance I'm assuming that there will be a deceleration
period where face-face contact must be maintained. This is the messy
bit! This is why I am trying to use flags to maintain stability for
these cases. Please let me know if this assumption is wrong, I studied
physics but am no expert on collision mechanics!

> Developing collision algorithms that don't have floating
> point precision problems yet don't "cheat" on contact analysis
> is hard, but quite possible. See my US patent #6,067,096, which
> covers the first "ragdoll physics" that worked. It's necessary
> to be very careful about the limitations of floating point.
> But once you get it right, it works very well.
>


Your patent is a bit too much for my head on a Sunday evening! Very
impressive though! Were you able to control the floating point errors
by clever algorithm design alone? Did you find it necessary, or
helpful, to truncate your intermediate results or use the documented
error on floating point operations during comparisons?

> As I usually tell people, either buy a physics engine
> (probably Havok), figure out some way to make gameplay
> work with lousy physics, or expect to spend a few years
> on the problem.
>


Think I'll keep bashing away for now! Have just spent 3 hours playing
Killzone (purely for research, you understand!). It seems to suffer from
the same sort of issues I am having (and some that I am not!). Perhaps
mine will be finished in time for the PS9!

Thanks again for your comments.
