Velocity Reviews - Computer Hardware Reviews

Peculiar floating point numbers in GCC

 
 
n.torrey.pines@gmail.com
 
      04-06-2007
I understand that with floats, x*y == y*x, for example, might not
hold. But what if it's the exact same operation on both sides?

I tested this with GCC 3.4.4 (Cygwin), and it prints 0 (compiled with -
g flag)

#include <cmath>
#include <iostream>

int main() {
double x = 3.0;

double f = std::cos(x);
std::cout << (f == std::cos(x)) << std::endl;
}

Using -O3 instead optimizes something, and now both sides are equal.

VC++8.0 prints 1 in both debug and release.

What does the standard have to say?

 
Jim Langston
 
      04-06-2007
<(E-Mail Removed)> wrote in message
news:(E-Mail Removed) ups.com...
>I understand that with floats, x*y == y*x, for example, might not
> hold. But what if it's the exact same operation on both sides?
>
> I tested this with GCC 3.4.4 (Cygwin), and it prints 0 (compiled with -
> g flag)
>
> #include <cmath>
> #include <iostream>
>
> int main() {
> double x = 3.0;
>
> double f = std::cos(x);
> std::cout << (f == std::cos(x)) << std::endl;
> }
>
> Using -O3 instead optimizes something, and now both sides are equal.
>
> VC++8.0 prints 1 in both debug and release.
>
> What does the standard have to say?


It's not really the standard that's the issue, I don't think; it's just the
way floating-point math works.

In your particular case, the two statements (initializing f and the
comparison) are close together, so the compiler may optimize them into
comparing the same value. It all depends on how the compiler optimizes. Even
something like:

double f = std::cos(x);
double g = std::cos(x);
std::cout << ( f == g ) << std::endl;

may output 1 or 0, depending on compiler optimization.

You just can't count on floating-point equality; it may work sometimes and
not others.


 
Pete Becker
 
      04-06-2007
Jim Langston wrote:
>
> It's not really the standard that's the issue I don't think, it's just the
> way floating point math works.
>


Don't get paranoid. <g> There's a specific reason for this discrepancy.

> In your particular case, those statements are close together, initializing f
> and the comparison, so the compiler may be optimizing and comparing the same
> thing. It all depends on how the compiler optimizes. Even something like:
>
> double f = std::cos(x);
> double g = std::cos(x);
> std::cout << ( f == g ) << std::endl;
>
> may output 1 or 0, depending on compiler optimization.
>


It had better output 1, regardless of compiler optimizations.

> You just can't count on floating point equality, it may work sometimes, not
> others.
>


The not-so-subtle issue here is that the compiler is permitted to do
floating-point arithmetic at higher precision than the type requires. On
the x86 this means that floating-point math is done with 80-bit values,
while float is 32 bits and double is 64 bits. (The reason for allowing
this is speed: x86 80-bit floating-point math is much faster than 64-bit
math.) When you store into a double, the stored value has to have the
right precision, though. So storing the result of cos in a double can
force truncation of a wider-than-double value. When comparing the stored
value to the original one, the compiler widens the original out to 80
bits, and the result is different from the original value. That's what's
happening in the original example.

In your example, though, both results are stored, so when widened they
should produce the same result. Some compilers don't do this, though,
unless you tell them that they have to obey the rules (speed again).

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
 
Jim Langston
 
      04-06-2007
"Pete Becker" <(E-Mail Removed)> wrote in message
news(E-Mail Removed) ...
> Jim Langston wrote:
>>
>> It's not really the standard that's the issue I don't think, it's just
>> the way floating point math works.
>>

>
> Don't get paranoid. <g> There's a specific reason for this discrepancy.
>
>> In your particular case, those statements are close together,
>> initializing f and the comparison, so the compiler may be optimizing and
>> comparing the same thing. It all depends on how the compiler optimizes.
>> Even something like:
>>
>> double f = std::cos(x);
>> double g = std::cos(x);
>> std::cout << ( f == g ) << std::endl;
>>
>> may output 1 or 0, depending on compiler optimization.
>>

>
> It had better output 1, regardless of compiler optimizations.
>
>> You just can't count on floating point equality, it may work sometimes,
>> not others.

>
> The not-so-subtle issue here is that the compiler is permitted to do
> floating-point arithmetic at higher precision than the type requires. On
> the x86 this means that floating-point math is done with 80-bit values,
> while float is 32 bits and double is 64 bits. (The reason for allowing
> this is speed: x86 80-bit floating-point math is much faster than 64-bit
> math.) When you store into a double, the stored value has to have the right
> precision, though. So storing the result of cos in a double can force
> truncation of a wider-than-double value. When comparing the stored value
> to the original one, the compiler widens the original out to 80 bits, and
> the result is different from the original value. That's what's happening
> in the original example.
>
> In your example, though, both results are stored, so when widened they
> should produce the same result. Some compilers don't do this, though,
> unless you tell them that they have to obey the rules (speed again).


FAQ 29.18 disagrees with you.

http://www.parashift.com/c++-faq-lit...html#faq-29.18


 
 
James Kanze
 
      04-06-2007
On Apr 7, 1:15 am, Pete Becker <(E-Mail Removed)> wrote:
> Jim Langston wrote:


> > It's not really the standard that's the issue I don't think, it's just the
> > way floating point math works.


> Don't get paranoid. <g> There's a specific reason for this discrepancy.


But it is partially the standard at fault, since the standard
explicitly allows extended precision for intermediate results
(including, I think, function return values, but I'm not sure
there).

> > In your particular case, those statements are close together, initializing f
> > and the comparison, so the compiler may be optimizing and comparing the same
> > thing. It all depends on how the compiler optimizes. Even something like:


> > double f = std::cos(x);
> > double g = std::cos(x);
> > std::cout << ( f == g ) << std::endl;


> > may output 1 or 0, depending on compiler optimization.


> It had better output 1, regardless of compiler optimizations.


I don't know about this particular case, but I've had similar
cases fail with g++. By default, g++ is even looser than the
standard allows. One could put forward the usual argument about
it not mattering how fast you get the wrong results, but in at
least some cases, the results that it gets by violating the
standard are also more precise. (The problem is, of course,
that precise or not, they're not very predictable.)

> > You just cant count on floating point equality, it may work
> > sometimes, not others.


> The not-so-subtle issue here is that the compiler is permitted to do
> floating-point arithmetic at higher precision than the type requires. On
> the x86 this means that floating-point math is done with 80-bit values,
> while float is 32 bits and double is 64 bits. (The reason for allowing
> this is speed: x86 80-bit floating-point math is much faster than 64-bit
> math) When you store into a double, the stored value has to have the
> right precision, though. So storing the result of cos in a double can
> force truncation of a wider-than-double value. When comparing the stored
> value to the original one, the compiler widens the original out to 80
> bits, and the result is different from the original value. That's what's
> happening in the original example.


Question: if I have a function declared "double f()", is the
compiler also allowed to maintain extended precision, even
though I'm using the results of the expression in the return
statement to initialize a double? (G++ definitely does, at least
in some cases.)

> In your example, though, both results are stored, so when widened they
> should produce the same result. Some compilers don't do this, though,
> unless you tell them that they have to obey the rules (speed again).


OK. We're on the same wave length. I can confirm that g++ is
one of those compilers.

--
James Kanze (Gabi Software) email: http://www.velocityreviews.com/forums/(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 
 
Walter Bright
 
      04-07-2007
James Kanze wrote:
> Question: if I have a function declared "double f()", is the
> compiler also allowed to maintain extended precision, even
> though I'm using the results of the expression in the return
> statement to initialize a double? (G++ definitely does, at least
> in some cases.)


I don't quite understand your question. I would rephrase it as "if a
function is declared to return a double, must it round the return value
to double precision, or can it return a value that actually is of higher
precision?"

For the Digital Mars C/C++, the answer is "no". Even though the return
value is typed as a double, it is actually returned as a full 80 bit
value in the ST0 register. In other words, it is treated as an
intermediate value that can be of higher precision than the type.

D programming is even more liberal with floating point - the compiler is
allowed to do all compile time constant folding, even copy propagation,
at the highest precision available.

The idea is that one should structure algorithms so that they work with
the minimum precision specified by the type, but don't break if a higher
precision is used.
 
 
Walter Bright
 
      04-07-2007
Walter Bright wrote:
> For the Digital Mars C/C++, the answer is "no".


Er, I meant that it returns a value that is actually of higher precision.
 
 
Bo Persson
 
      04-07-2007
Jim Langston wrote:
:: "Pete Becker" <(E-Mail Removed)> wrote in message
:::
::: In your example, though, both results are stored, so when widened
::: they should produce the same result. Some compilers don't do this,
::: though, unless you tell them that they have to obey the rules
::: (speed again).
::
:: FAQ 29.18 disagrees with you.
::
:: http://www.parashift.com/c++-faq-lit...html#faq-29.18

The FAQ tells you what actually happens when you don't tell the compiler to
obey the language rules.

Like Pete says, most compilers have an option to select fast or strictly
correct code. The default is of course fast, but slightly incorrect.

If the source code stores into variables x and y, the machine code must do
that as well, unless compiler switches (or lack thereof) relax the
requirements.


The "as-if" rule of code transformations doesn't apply here, as we can
actually see the difference.


Bo Persson


 
 
James Kanze
 
      04-07-2007
On Apr 7, 9:08 am, "Bo Persson" <(E-Mail Removed)> wrote:
> Jim Langston wrote:


> :: "Pete Becker" <(E-Mail Removed)> wrote in message


> ::: In your example, though, both results are stored, so when widened
> ::: they should produce the same result. Some compilers don't do this,
> ::: though, unless you tell them that they have to obey the rules
> ::: (speed again).


> :: FAQ 29.18 disagrees with you.


> ::http://www.parashift.com/c++-faq-lit...html#faq-29.18


> The FAQ tells you what actually happens when you don't tell the compiler to
> obey the language rules.


> Like Pete says, most compilers have an option to select fast or strictly
> correct code. The default is of course fast, but slightly incorrect.


For one definition of "correct": by a naive definition,
1.0/3.0 will never give correct results, regardless of
conformance.

And I don't understand "slightly"---I always thought that
"correct" was binary: the results are either correct, or they
are not.

> If the source code stores into variables x and y, the machine code must do
> that as well.


Just a nit: the results must be exactly the same as if the
machine code had stored into the variables. The memory write is
not strictly necessary, but any rounding that would have
occurred is.

> Unless compiler switches (or lack thereof) relax the
> requirements.


> The "as-if" rule of code transformations doesn't apply here, as we can
> actually see the difference.


The "as-if" rule always applies, since it says that only the
results count. In this case, for example: I believe that the
Intel FPU has an instruction to force rounding to double
precision, without actually storing the value, and an
implementation could use this.

--
James Kanze (Gabi Software) email: (E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 
 
James Kanze
 
      04-07-2007
On Apr 7, 3:33 am, Walter Bright <(E-Mail Removed)>
wrote:
> James Kanze wrote:
> > Question: if I have a function declared "double f()", is the
> > compiler also allowed to maintain extended precision, even
> > though I'm using the results of the expression in the return
> > statement to initialize a double? (G++ definitely does, at least
> > in some cases.)


> I don't quite understand your question. I would rephrase it as "if a
> function is declared to return a double, must it round the return value
> to double precision, or can it return a value that actually is of higher
> precision?"


Yes. Your English is better than mine.

> For the Digital Mars C/C++, the answer is "no". Even though the return
> value is typed as a double, it is actually returned as a full 80 bit
> value in the ST0 register. In other words, it is treated as an
> intermediate value that can be of higher precision than the type.


My question concerned what the standard requires, not what
different compilers do. I know what the compilers I use do (and
that it differs according to the platform); my question perhaps
belongs more in comp.std.c++, but I wanted to know whether they
were conformant, or whether it was "yet another bug" in the
interest of getting the wrong results faster.

> D programming is even more liberal with floating point - the compiler is
> allowed to do all compile time constant folding, even copy propagation,
> at the highest precision available.


Which has both advantages and disadvantages. As I pointed out
elsewhere, the "non-conformant" behavior of g++ can sometimes
result in an answer that is more precise than conformant
behavior would have been. (Extended precision was invented for
a reason.)

> The idea is that one should structure algorithms so that they work with
> the minimum precision specified by the type, but don't break if a higher
> precision is used.


The problem is that if you want to analyse your limits in
detail, you don't really know what precision is being used.
There are two schools of thought here, and I don't know enough
numerics to have an opinion as to which is right. (The only
calculations in my present application involve monetary amounts,
so still another set of rules applies.)

--
James Kanze (Gabi Software) email: (E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 