Avoiding NaN and Inf on floating point division

 
 
ardi
01-04-2014
Hi,

Am I right supposing that if a floating point variable x is normal (not denormal/subnormal) it is guaranteed that for any non-NaN and non-Inf variable called y, the result y/x is guaranteed to be non-NaN and non-Inf?

If affirmative, I've two doubts about this. First, how efficient can one expect the isnormal() macro to be? I mean, should one expect it to be much slower than doing an equality comparison to zero (x==0.0)? Or should the performance be similar?

Second, how could I "emulate" isnormal() on older systems that lack it? For example, if I compile on IRIX 6.2, which AFAIK lacks isnormal(), is there some workaround which would also guarantee me that the division doesn't generate NaN nor Inf?

Also, if the isnormal() macro can be slow, is there any other approach which would also give me the guarantee I'm asking for? Maybe comparing to some standard definition which holds the smallest normal value available for each data type? Are such definitions standardized in some way such that I can expect to find them in some standard header on most OSs/compilers? Would I be safe to test it this way rather than with the isnormal() macro?

Thanks a lot!

ardi

P.S.: Yes, I realize a floating point division can result in a meaningless value even if it's non-NaN and non-Inf, because you might be dividing by a very small (yet normal) value which should be zero but isn't, because of the math operations performed on it previously. But this is another problem. I'm asking about a scenario where you don't care whether the result is meaningful or not, you just need to be sure it isn't NaN nor Inf.
 
jacob navia
01-04-2014
On 04/01/2014 12:07, ardi wrote:
> Am I right supposing that if a floating point variable x is normal (not
> denormal/subnormal) it is guaranteed that for any non-NaN and non-Inf
> variable called y, the result y/x is guaranteed to be non-NaN and non-Inf?

No.

#include <stdio.h>
int main(void)
{
    double a = 1e300, b = 1e-300, c;
    c = a / b;          /* mathematically 1e600: overflows to +Inf */
    printf("%g\n", c);  /* prints "inf" even though b is normal */
    return 0;
}

 
ardi
01-04-2014
On Saturday, January 4, 2014 12:53:40 PM UTC+1, jacob navia wrote:
> On 04/01/2014 12:07, ardi wrote:
> > Am I right supposing that if a floating point variable x is normal (not
> > denormal/subnormal) it is guaranteed that for any non-NaN and non-Inf
> > variable called y, the result y/x is guaranteed to be non-NaN and non-Inf?
>
> No.
>
> #include <stdio.h>
> int main(void)
> {
> double a=1e300,b=1e-300,c;
> c=a/b;
> printf("%g\n",c);
> }



Ooops!!! I believe this means I forgot that you can also get Inf from overflow: if the numerator is very big and the division makes the result even larger, it can overflow, and then it becomes Inf even though the denominator is a normal value.

This effectively breaks my quest for "healthy divisions". I guess I'm back to my old arbitrary epsilon checking approach (i.e.: check the denominator for fabs(x) > epsilon to decide whether the division can be performed or not, where epsilon is left as an exercise for the reader).
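
A minimal sketch of that epsilon-guard idea (DENOM_EPS and safe_div are illustrative names, not anything suggested in the thread, and the threshold value is only an example):

#include <math.h>
#include <stdio.h>

#define DENOM_EPS 1e-30   /* placeholder threshold: "left as an exercise" */

/* Divide y by x only when |x| is comfortably above the threshold;
 * note the quotient can still overflow to Inf if y is huge. */
double safe_div(double y, double x)
{
    if (fabs(x) > DENOM_EPS)
        return y / x;
    return 0.0;           /* fallback policy is the caller's choice */
}

int main(void)
{
    printf("%g\n", safe_div(1.0, 3.0));   /* 0.333333 */
    printf("%g\n", safe_div(1.0, 0.0));   /* guarded: prints 0 */
    return 0;
}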

Thanks,

ardi
 
Ben Bacarisse
01-04-2014
ardi <(E-Mail Removed)> writes:

> Am I right supposing that if a floating point variable x is normal
> (not denormal/subnormal) it is guaranteed that for any non-NaN and
> non-Inf variable called y, the result y/x is guaranteed to be non-NaN
> and non-Inf?


No. Assuming what goes by the name of IEEE floating point, you will get
NaN when y == x == 0, and Inf from all sorts of values for x and y
(DBL_MAX/0.5 for example).

An excellent starting point is to search the web for Goldberg's paper
"What Every Computer Scientist Should Know About Floating-Point
Arithmetic". It will pay off the time spent in spades.

> If affirmative, I've two doubts about this. First, how efficient can
> one expect the isnormal() macro to be? I mean, should one expect it to
> be much slower than doing an equality comparison to zero (x==0.0) ? Or
> should the performance be similar?


I'd expect it to be fast. Probably not as fast as a test for zero, but
it can be done by simple bit testing.

However, you say "if affirmative" and the answer to your question is
"no" so maybe all the rest is moot.

> Second, how could I "emulate" isnormal() on older systems that lack
> it? For example, if I compile on IRIX 6.2, which AFAIK lacks
> isnormal(), is there some workaround which would also guarantee me
> that the division doesn't generate NaN nor Inf?


There are lots of ways. For example, IEEE double precision sub-normal
numbers have an absolute value less than DBL_MIN (defined in
float.h). You can also test normality by looking at the bits. For
example, a sub-normal IEEE number has zero bits in the exponent field
and a non-zero fraction.
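
For what it's worth, a minimal sketch of that DBL_MIN comparison, usable on a pre-C99 system and assuming IEEE-style doubles (my_isnormal is just an illustrative name):

#include <float.h>
#include <math.h>

/* Rough stand-in for C99 isnormal(): a double is normal exactly when
 * DBL_MIN <= |x| <= DBL_MAX.  Zero and subnormals fail the first test,
 * Inf fails the second, and a NaN fails both (all comparisons with a
 * NaN are false), so all of them are rejected. */
static int my_isnormal(double x)
{
    double ax = fabs(x);
    return ax >= DBL_MIN && ax <= DBL_MAX;
}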

> Also, if the isnormal() macro can be slow, is there any other approach
> which would also give me the guarantee I'm asking for? Maybe comparing
> to some standard definition which holds the smallest normal value
> available for each data type?


The guarantee you want is that a division won't generate NaN or +/-Inf?
The simplest method is to do the division and test the result, but maybe
one or more of your systems generates a signal that you want to avoid?
I think you should say a bit more about what you are trying to do.

It's generally easy to test if you'll get a NaN from the division of
non-NaN numbers (you only get NaN from 0/0 and the four signed cases of
Inf/Inf), but pre-testing for Inf is harder.
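
A short sketch of the "do the division and test the result" approach, assuming a C99 <math.h> with isnan() and isinf():

#include <math.h>
#include <stdio.h>

int main(void)
{
    double y = 1e300, x = 1e-300;
    double r = y / x;                /* mathematically 1e600: overflows */
    if (isnan(r) || isinf(r))
        printf("y/x is not a finite number\n");
    else
        printf("y/x = %g\n", r);
    return 0;
}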

> Are such definitions standardized in
> some way such that I can expect to find them in some standard header
> on most OSs/compilers? Would I be safe to test it this way rather than
> with the isnormal() macro?


Your C library should have float.h and that should define FLT_MIN,
DBL_MIN and LDBL_MIN but I don't think that helps you directly.

<snip>
--
Ben.
 
Tim Prince
01-04-2014
On 1/4/2014 6:07 AM, ardi wrote:

> Second, how could I "emulate" isnormal() on older systems that lack it? For example, if I compile on IRIX 6.2, which AFAIK lacks isnormal(), is there some workaround which would also guarantee me that the division doesn't generate NaN nor Inf?
>
> Also, if the isnormal() macro can be slow, is there any other approach which would also give me the guarantee I'm asking for? Maybe comparing to some standard definition which holds the smallest normal value available for each data type? Are such definitions standardized in some way such that I can expect to find them in some standard header on most OSs/compilers? Would I be safe to test it this way rather than with the isnormal() macro?
>

Maybe you could simply edit the glibc or OpenBSD implementation into
your working copy of your headers, if you aren't willing to update your
compiler or run-time library.

http://ftp.cc.uoc.gr/mirrors/OpenBSD...gen/isnormal.c

Is your compiler so old that it doesn't implement inline functions?
That's the kind of background you need to answer your own question about
speed. Then you may need to use an old-fashioned macro (with its
concerns about double evaluation of expressions).
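
To illustrate the double-evaluation concern, here is a hypothetical old-style macro next to a function version (ISNORMALISH and isnormalish are invented names, not standard ones):

#include <float.h>
#include <math.h>

/* Old-fashioned macro: evaluates its argument twice, so
 * ISNORMALISH(next_sample()) would call next_sample() twice. */
#define ISNORMALISH(x) (fabs(x) >= DBL_MIN && fabs(x) <= DBL_MAX)

/* A static (or inline) function evaluates the argument once and is
 * just as cheap when the compiler can inline it. */
static int isnormalish(double x)
{
    double ax = fabs(x);
    return ax >= DBL_MIN && ax <= DBL_MAX;
}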


--
Tim Prince
 
James Kuyper
01-04-2014
On 01/04/2014 06:07 AM, ardi wrote:
> Hi,
>
> Am I right supposing that if a floating point variable x is normal
> (not denormal/subnormal) it is guaranteed that for any non-NaN and
> non-Inf variable called y, the result y/x is guaranteed to be non-NaN
> and non-Inf?


How could that be true? If the mathematical value of y/x were greater
than DBL_MAX, or smaller than -DBL_MAX, what do you expect the floating
point value of y/x to be? What you're really trying to do is prevent
floating point overflow, and a test for isnormal() is not sufficient.
You must also check whether fabs(x) > fabs(y)/DBL_MAX (assuming that x
and y are both doubles).
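
A sketch of that pre-check (safe_to_divide is an illustrative name; it assumes C99's isnormal(), and glen's follow-up below notes a rounding caveat at the boundary):

#include <float.h>
#include <math.h>

/* y/x should stay finite when x is normal and |x| > |y|/DBL_MAX.
 * If y is NaN or Inf, |y|/DBL_MAX is NaN or Inf and the comparison
 * is false, so those cases are rejected as well. */
static int safe_to_divide(double y, double x)
{
    return isnormal(x) && fabs(x) > fabs(y) / DBL_MAX;
}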

As far as the C standard is concerned, the accuracy of floating point
math is entirely implementation-defined, and it explicitly allows the
implementation-provided definition to be "the accuracy is unknown"
(5.2.4.2.2p6). Therefore, a fully conforming implementation of C is
allowed to implement math that is so inaccurate that DBL_MIN/DBL_MAX >
DBL_MAX. In practice, you wouldn't be able to sell such an
implementation to anyone who actually needed to perform floating point
math - but that issue is outside the scope of the standard.

However, if an implementation pre-#defines __STDC_IEC_559__, it is
required to conform to the requirements of Annex F (6.10.8.3p1), which
are based upon but not completely identical to the requirements of IEC
60559:1989, which in turn is essentially equivalent to IEEE 754:1985.
That implies fairly strict requirements on the accuracy; for the most
part, those requirements are as strict as they reasonably could be.

> If affirmative, I've two doubts about this. First, how efficient can
> one expect the isnormal() macro to be? I mean, should one expect it
> to be much slower than doing an equality comparison to zero (x==0.0)
> ? Or should the performance be similar?


The performance is inherently system-specific; for all I know there
might be floating point chips where isnormal() can be implemented by a
single floating point instruction; but at the very worst it shouldn't be
much more complicated than a few mask and shift operations on the bytes
of a copy of the argument.

> Second, how could I "emulate" isnormal() on older systems that lack
> it? For example, if I compile on IRIX 6.2, which AFAIK lacks
> isnormal(), is there some workaround which would also guarantee me
> that the division doesn't generate NaN nor Inf?


Find a precise definition of the floating point format implemented on
that machine (which might not fully conform to IEEE requirements), and
you can then implement isnormal() by performing a few mask and shift
operations on the individual bytes of the argument.

> Also, if the isnormal() macro can be slow, is there any other
> approach which would also give me the guarantee I'm asking for? ..


If you can find an alternative way of implementing the equivalent of
isnormal() that is significantly faster than calling the macro provided
by a given version of the C standard library, then you should NOT use
that alternative; what you should do is drop that version of the C
standard library and replace it with one that's better-implemented.

> ... Maybe
> comparing to some standard definition which holds the smallest normal
> value available for each data type?


Yes, that's what FLT_MIN, DBL_MIN, and LDBL_MIN are for.

> ... Are such definitions standardized
> in some way such that I can expect to find them in some standard
> header on most OSs/compilers? ...


Yes - the standard header is <float.h>.

> ... Would I be safe to test it this way
> rather than with the isnormal() macro?


It could be safe, if you handle correctly the possibility that the value
is a NaN. Keep in mind that all comparisons with a NaN fail, so
x>=DBL_MIN is not the same as !(x<DBL_MIN). If x is a NaN, the first
expression is false, while the second is true.
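
A quick demonstration of that difference (the volatile is only there to force the NaN to be produced at run time):

#include <float.h>
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;
    double x = zero / zero;          /* quiet NaN */
    printf("x >= DBL_MIN   : %d\n", x >= DBL_MIN);    /* prints 0 */
    printf("!(x < DBL_MIN) : %d\n", !(x < DBL_MIN));  /* prints 1 */
    return 0;
}
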
--
James Kuyper
 
Tim Prince
01-04-2014
On 1/4/2014 6:07 AM, ardi wrote:

> Am I right supposing that if a floating point variable x is normal (not denormal/subnormal) it is guaranteed that for any non-NaN and non-Inf variable called y, the result y/x is guaranteed to be non-NaN and non-Inf?
>

1/x is well-behaved when x is normal (only possible flag raised is
inexact). That is an important enough consideration to be part of
IEEE754 design, but not guaranteed in C without IEEE754 (the latter
being a reasonable expectation of a good quality platform, but there are
still exceptions). As others pointed out, your goal seems to be well
beyond that.
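
A small check of this for IEEE binary64 in particular: 1/DBL_MIN is about 4.49e307, which is still below DBL_MAX (about 1.80e308), so the reciprocal of the smallest normal double does not overflow; formats with a symmetric exponent range may behave differently, as glen notes below.

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("1/DBL_MIN = %g\n", 1.0 / DBL_MIN);   /* ~4.49e307, finite */
    printf("DBL_MAX   = %g\n", DBL_MAX);         /* ~1.80e308 */
    return 0;
}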


--
Tim Prince
 
Keith Thompson
01-04-2014
James Kuyper <(E-Mail Removed)> writes:
> On 01/04/2014 06:07 AM, ardi wrote:

[...]
>> Also, if the isnormal() macro can be slow, is there any other
>> approach which would also give me the guarantee I'm asking for? ..

>
> If you can find an alternative way of implementing the equivalent of
> isnormal() that is significantly faster than calling the macro provided
> by a given version of the C standard library, then you should NOT use
> that alternative; what you should do is drop that version of the C
> standard library and replace it with one that's better-implemented.


That's not always an option. What you should probably do in
that case is (a) consider carefully whether your faster version
is actually correct, and (b) contact the maintainers of your
implementation.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
glen herrmannsfeldt
01-04-2014
James Kuyper <(E-Mail Removed)> wrote:
> On 01/04/2014 06:07 AM, ardi wrote:
>> Am I right supposing that if a floating point variable x is normal
>> (not denormal/subnormal) it is guaranteed that for any non-NaN and
>> non-Inf variable called y, the result y/x is guaranteed to be non-NaN
>> and non-Inf?


> How could that be true? If the mathematical value of y/x were greater
> than DBL_MAX, or smaller than -DBL_MAX, what do you expect the floating
> point value of y/x to be? What you're really trying to do is prevent
> floating point overflow, and a test for isnormal() is not sufficient.
> You must also check whether fabs(x) > fabs(y)/DBL_MAX (assuming that x
> and y are both doubles).


> As far as the C standard is concerned, the accuracy of floating point
> math is entirely implementation-defined, and it explicitly allows the
> implementation-provided definition to be "the accuracy is unknown"
> (5.2.4.2.2p6). Therefore, a fully conforming implementation of C is
> allowed to implement math that is so inaccurate that DBL_MIN/DBL_MAX >
> DBL_MAX. In practice, you wouldn't be able to sell such an
> implementation to anyone who actually needed to perform floating point
> math - but that issue is outside the scope of the standard.


Yes, but it seems that rounding might still allow fabs(y)/(fabs(y)/DBL_MAX)
to overflow, in which case your test doesn't guarantee no overflow.

-- glen
 
glen herrmannsfeldt
01-04-2014
Tim Prince <(E-Mail Removed)> wrote:
> On 1/4/2014 6:07 AM, ardi wrote:


(snip)
> 1/x is well-behaved when x is normal (only possible flag raised is
> inexact). That is an important enough consideration to be part of
> IEEE754 design, but not guaranteed in C without IEEE754 (the latter
> being a reasonable expectation of a good quality platform,
> but there are still exceptions).


I haven't looked at IEEE754 in that much detail, but on many floating
point systems the exponent range is such that the smallest normal
floating point value will overflow on computing 1/x. If the exponent
range is symmetric, there is a factor of the base (2 or 10) to
consider.

> As others pointed out, your goal seems to be well beyond that.


-- glen
 