Velocity Reviews > C++ > How to test a 'float' or 'double' zero numerically?

# How to test a 'float' or 'double' zero numerically?

Rune Allnor

 09-14-2008
On 14 Sep, 12:38, Rune Allnor <(E-Mail Removed)> wrote:

> Around 1953-55 Thomson and Haskell proposed a method to
> compute the propagation of seismic waves through layered
> media. The method used terms of the form
>
> x = (exp(y)+1)/(exp(z)+1)
>
> where y and z were of large magnitude and 'almost equal'.
> In a perfect formulation x would be very close to 1.

Typo correction: the problematic terms were of the form

x = exp(y) - exp(z)

where y and z are large and x is small.

Rune

Ron AF Greve

 09-14-2008
Hi,

You could indeed do an analysis that way. Something similar is done when
measuring a physical quantity and one has to know the error in the
measurement. Taking into account the accuracy of the measuring equipment
and the kind of operation performed (multiplication, addition, etc.), you
can then tell what the error range is (like "I measured 5 V +/- 0.5 V").

It is a lot of work though.

Regards, Ron AF Greve

http://www.InformationSuperHighway.eu

"Peng Yu" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> On Sep 13, 4:48 pm, "Ron AF Greve" <ron@localhost> wrote:
>> Hi,
>>
>> Consider a machine where the smallest number that can be represented is
>> 0.0001.
>>
>> Let's assume I have the following calculation (let's assume the 0.00005
>> would be the result of some calculation):
>> 0.0001 - 0.00005 - 0.00005
>> Now it is obvious that this should result in zero. However, the last two
>> terms would be zero, since the machine can only have up to four digits
>> behind the dot. So what should be zero is actually 0.0001, so a correct
>> value for a multiplier for epsilon would be 0.0002. Reasoning: 0.0001 <
>> 0.0002, therefore it is zero?
>>
>> Consider then the following.
>>
>> The same formula, only we also divide by 0.0001 afterwards:
>> ( 0.0001 - 0.00005 - 0.00005 ) / 0.0001 = 1. However, the one actually
>> should be a zero, therefore our first conclusion was incorrect. A
>> correct multiplier for epsilon should be 10001.
>>
>> Of course one could go on; epsilon's multiplier could be anything.
>>
>> Conclusion: there is no correct multiplier for epsilon. There can be
>> one per formula, but that is probably not very practical.

>
> I see. Then the problem is how to derive it for a particular formula.
>
> Probably, I need to write down the formula and take the derivatives
> with respect to all its arguments, and check how much error there
> could be in each argument. Then I would end up with a bound on the
> rounding error (epsilon is equivalent to it). Right?
>
> Thanks,
> Peng

James Kanze

 09-15-2008
On Sep 13, 5:35 pm, Peng Yu <(E-Mail Removed)> wrote:

> Suppose T is 'float' or 'double'.

> T x;

> x < 10 * std::numeric_limits<T>::epsilon();

> I can use the above comparison to test if 'x' is numerically
> zero.

If you want to test whether x is numerically zero, "x == 0.0" is
the only correct way.

> But I'm wondering what should be a good multiplicative
> constant before epsilon?

There isn't one, since the idiom is broken (in general---there
are specific cases where it might be appropriate).

--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

 09-15-2008
On Sep 13, 6:23 pm, Paavo Helde <(E-Mail Removed)> wrote:
> Peng Yu <(E-Mail Removed)> kirjutas:
> > Suppose T is 'float' or 'double'.

> > T x;

> > x < 10 * std::numeric_limits<T>::epsilon();

> > I can use the above comparison to test if 'x' is numerically zero. But

> Really? What if x is -10000? What if it is equal to
> std::numeric_limits<T>::epsilon()?

> > I'm wondering what should be a good multiplicative constant before
> > epsilon?

> Easy, just use if(x==0). However, this usually does not give
> you much if x is a result of some computation; with this
> expression you can pretty much just check whether x has been
> assigned literal zero beforehand.

It depends on the computation. There are a lot of contexts
where you get 0.0 exactly, and that's what you want to test for.
There are fewer contexts where this is true for other values (0.0
is a bit special), but they also exist.

--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

 09-15-2008
On Sep 14, 1:53 am, Peng Yu <(E-Mail Removed)> wrote:
> On Sep 13, 4:48 pm, "Ron AF Greve" <ron@localhost> wrote:
> > Consider a machine where the smallest number that can be
> > represented is 0.0001.

> > Let's assume I have the following calculation (let's assume
> > the 0.00005 would be the result of some calculation):
> > 0.0001 - 0.00005 - 0.00005
> > Now it is obvious that this should result in zero. However,
> > the last two terms would be zero, since the machine can
> > only have up to four digits behind the dot. So what should
> > be zero is actually 0.0001, so a correct value for a
> > multiplier for epsilon would be 0.0002. Reasoning: 0.0001 <
> > 0.0002, therefore it is zero?

> > Consider then the following.

> > The same formula, only we also divide by 0.0001 afterwards:
> > ( 0.0001 - 0.00005 - 0.00005 ) / 0.0001 = 1. However, the one
> > actually should be a zero, therefore our first conclusion was
> > incorrect. A correct multiplier for epsilon should be 10001.

> > Of course one could go on; epsilon's multiplier could be anything.

> > Conclusion: there is no correct multiplier for epsilon.
> > There can be one per formula, but that is probably not very
> > practical.

> I see. Then the problem is how to derive it for a particular formula.

No. The problem is how to implement the formula so that it
gives the correct results.

> Probably, I need to write down the formula and take the
> derivatives with respect to all its arguments, and check how
> much error there could be in each argument. Then I would end up
> with a bound on the rounding error (epsilon is equivalent to
> it). Right?

Not necessarily. You need to better understand how machine
floating point works, and the mathematics which underlies it.

Think of it for a minute. If I had a system in which sin(0.0)
returned anything but 0.0 (exactly), I'd consider it defective.
For other values, this is somewhat less obvious, but 0.0 (and in
some contexts, 1.0 and -1.0) are a bit special.

--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34