Velocity Reviews > C++ > How to test a 'float' or 'double' zero numerically?

# How to test a 'float' or 'double' zero numerically?

Peng Yu

 09-13-2008
Hi,

Suppose T is 'float' or 'double'.

T x;

x < 10 * std::numeric_limits<T>::epsilon();

I can use the above comparison to test if 'x' is numerically zero. But
I'm wondering what should be a good multiplicative constant before
epsilon?

Thanks,
Peng

Anders Dalvander

 09-13-2008
> I can use the above comparison to test if 'x' is numerically zero.

No, as x can also be negative.

> But I'm wondering what should be a good multiplicative constant before
> epsilon?

Epsilon is the difference between 1.0 and the next value representable by
the type, i.e. roughly the smallest value such that 1.0 + epsilon != 1.0.
You need to scale it with the numbers you compare. Comparing against zero
is always hard. You are probably best off using abs(x) < your_own_epsilon,
setting your_own_epsilon to whatever suits your problem, such as
0.00000001 perhaps.

Regards,
Anders Dalvander

Peng Yu

 09-13-2008
On Sep 13, 11:10 am, Anders Dalvander <(E-Mail Removed)> wrote:
> > I can use the above comparison to test if 'x' is numerically zero.

>
> No, as x can also be negative.

Right, I meant std::abs(x).

> > But I'm wondering what should be a good multiplicative constant before
> > epsilon?

>
> Epsilon is the difference between 1.0 and the next value representable by
> the type, i.e. roughly the smallest value such that 1.0 + epsilon != 1.0.
> You need to scale it with the numbers you compare. Comparing against zero
> is always hard. You are probably best off using abs(x) < your_own_epsilon,
> setting your_own_epsilon to whatever suits your problem, such as
> 0.00000001 perhaps.

Therefore, there is no generally accepted value for such an epsilon?

Thanks,
Peng

Juha Nieminen

 09-13-2008
Peng Yu wrote:
> x < 10 * std::numeric_limits<T>::epsilon();
>
> I can use the above comparison to test if 'x' is numerically zero.

No, you can't. A value of x distinct from zero might still test as
"zero" with that.

Ron AF Greve

 09-13-2008
Hi,

Consider a machine where the smallest representable number is 0.0001.

Let's assume I have the following calculation (and assume each 0.00005 is
the result of some earlier calculation):

0.0001 - 0.00005 - 0.00005

Mathematically this is obviously zero. However, the two subtracted values
each round to zero on this machine, since it can only keep four digits
behind the dot. So what should be zero actually comes out as 0.0001, and a
suitable threshold would be twice epsilon, i.e. 0.0002: since
0.0001 < 0.0002, we would conclude it is zero.

Now consider the same formula, only we also divide by 0.0001 afterwards:

( 0.0001 - 0.00005 - 0.00005 ) / 0.0001 = 1

The 1 should actually be zero, so our first threshold was incorrect; the
multiplier for epsilon would now have to be at least 10001 (so that
1 < 10001 * 0.0001).

Of course one could go on; the right multiplier for epsilon could be
anything.

Conclusion: there is no single correct multiplier for epsilon. There can
be one per formula, but that is probably not very practical.

Regards, Ron AF Greve

http://www.InformationSuperHighway.eu


Peng Yu

 09-13-2008
On Sep 13, 4:48 pm, "Ron AF Greve" <ron@localhost> wrote:
> [snip example]
>
> Of course one could go on; the right multiplier for epsilon could be
> anything.
>
> Conclusion: there is no single correct multiplier for epsilon. There can
> be one per formula, but that is probably not very practical.

I see. Then the problem is how to derive it for a particular formula.

Probably, I need to write down the formula, take its derivatives with
respect to all its arguments, and check how much error there could be in
each argument. Then I would end up with a bound on the rounding error
(which plays the role of epsilon). Right?

Thanks,
Peng

Rune Allnor

 09-14-2008
On 14 Sep, 01:53, Peng Yu <(E-Mail Removed)> wrote:
> On Sep 13, 4:48 pm, "Ron AF Greve" <ron@localhost> wrote:

> > Of course one could go on; the right multiplier for epsilon could be
> > anything.
> >
> > Conclusion: there is no single correct multiplier for epsilon. There
> > can be one per formula, but that is probably not very practical.
>
> I see. Then the problem is how to derive it for a particular formula.
>
> Probably, I need to write down the formula, take its derivatives with
> respect to all its arguments, and check how much error there could be in
> each argument. Then I would end up with a bound on the rounding error
> (which plays the role of epsilon). Right?

Numerical analysis is an art in itself. There are departments
in universities which deal almost exclusively with the analysis
of numerics, which essentially boils down to error analysis.

In my field of work certain analytical solutions were formulated
in the early '50s, but a stable numerical solution wasn't found
until the early/mid '90s.

You might want to check with the math department at your local
university on how to approach whatever problem you work with.

Rune

Peng Yu

 09-14-2008
> In my field of work certain analytical solutions were formulated
> in the early '50s, but a stable numerical solution wasn't found
> until the early/mid '90s.

Would you please give some example references on this?

Thanks,
Peng

Erik Wikström

 09-14-2008
On 2008-09-13 18:16, Peng Yu wrote:
> On Sep 13, 11:10 am, Anders Dalvander <(E-Mail Removed)> wrote:
>> > I can use the above comparison to test if 'x' is numerically zero.

>>
>> No, as x can also be negative.

>
> Right, I meant std::abs(x).
>
>> > But I'm wondering what should be a good multiplicative constant before
>> > epsilon?

>>
>> Epsilon is the difference between 1.0 and the next value representable
>> by the type, i.e. roughly the smallest value such that
>> 1.0 + epsilon != 1.0. You need to scale it with the numbers you compare.
>> Comparing against zero is always hard. You are probably best off using
>> abs(x) < your_own_epsilon, setting your_own_epsilon to whatever suits
>> your problem, such as 0.00000001 perhaps.

>
> Therefore, there is no generally accepted value for such an epsilon?

No; different applications require different precision. Some would
consider a variable equal to zero if it was within 0.0001 of zero, while
others might require 0.0000001. You have to analyse your problem to find
a value that suits you.

--
Erik Wikström

Rune Allnor

 09-14-2008
On 14 Sep, 04:06, Peng Yu <(E-Mail Removed)> wrote:
> > In my field of work certain analytical solutions were formulated
> > in the early '50s, but a stable numerical solution wasn't found
> > until the early/mid '90s.

>
> Would you please give some example references on this?

At the risk of becoming inaccurate, as I haven't reviewed
the material in 5 years and write off the top of my head:

Around 1953-55 Thomson and Haskell proposed a method to
compute the propagation of seismic waves through layered
media. The method used terms of the form

x = (exp(y)+1)/(exp(z)+1)

where y and z were of large magnitude and 'almost equal'.
In a perfect formulation x would be very close to 1.

Since y and z are large and one uses an imperfect numerical
representation, the computation errors in the exponents
become important. So basically the terms that should
cancel didn't, and one was left with a numerically unstable
solution.

Several attempts were made to handle this (Ng and Reid
in the '70s, Henrik Schmidt in the '80s), with various
degrees of success. And complexity. As far as I am concerned,
the problem wasn't solved until around 1993, when Sven Ivansson
came up with a numerically stable scheme.

What all these attempts had in common was that they took
the original analytical formulation and organized the terms
in various ways to avoid the complicated, large-magnitude
internal terms.

I am sure there are similar examples in other areas.

As for an example of error analysis, you could check out the
analysis of Horner's rule for evaluating polynomials, which
is treated in most intro books on numerical analysis.

Rune