Peculiar floating point numbers in GCC

 
 
Pete Becker
04-07-2007
Jim Langston wrote:
> "Pete Becker" <(E-Mail Removed)> wrote in message
> news(E-Mail Removed) ...
>> Jim Langston wrote:
>>> It's not really the standard that's the issue, I don't think; it's just
>>> the way floating point math works.
>>>

>> Don't get paranoid. <g> There's a specific reason for this discrepancy.
>>
>>> In your particular case, those statements are close together,
>>> initializing f and the comparison, so the compiler may be optimizing and
>>> comparing the same thing. It all depends on how the compiler optimizes.
>>> Even something like:
>>>
>>> double f = std::cos(x);
>>> double g = std::cos(x);
>>> std::cout << ( f == g ) << std::endl;
>>>
>>> may output 1 or 0, depending on compiler optimization.
>>>

>> It had better output 1, regardless of compiler optimizations.
>>
>>> You just can't count on floating point equality; it may work sometimes,
>>> not others.

>> The not-so-subtle issue here is that the compiler is permitted to do
>> floating-point arithmetic at higher precision than the type requires. On
>> the x86 this means that floating-point math is done with 80-bit values,
>> while float is 32 bits and double is 64 bits. (The reason for allowing
>> this is speed: x86 80-bit floating-point math is much faster than 64-bit
>> math.) When you store into a double, though, the stored value has to have
>> the right precision. So storing the result of cos in a double can force
>> truncation of a wider-than-double value. When the stored value is then
>> compared against the unstored one, the stored value is widened back out
>> to 80 bits, and the result differs from the value still sitting in the
>> register. That's what's happening in the original example.
>>
>> In your example, though, both results are stored, so when widened they
>> should produce the same result. Some compilers don't do this, though,
>> unless you tell them that they have to obey the rules (speed again).

>
> FAQ 29.18 disagrees with you.
>
> http://www.parashift.com/c++-faq-lit...html#faq-29.18
>
>


No, it says the same thing, as far as it goes. It doesn't mention the
requirement that the compiler respect stored values, and, as I said,
many don't do that unless you explicitly ask for it.
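
For what it's worth, here's a minimal sketch of the effect (whether it
prints 0 or 1 depends on the compiler, the optimization level, and
whether x87 or SSE math is used; nothing here is guaranteed):

    #include <cmath>
    #include <iostream>

    int main() {
        double x = 1.0;
        double f = std::cos(x);  // the store rounds the result to 64 bits
        // The right-hand call may still be sitting in an 80-bit x87
        // register, so the comparison can come out false.
        std::cout << (f == std::cos(x)) << '\n';
    }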

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
 
Walter Bright
04-07-2007
James Kanze wrote:
> The problem is that if you want to analyse your limits in
> detail, you don't really know what precision is being used.
> There are two schools of thought here, and I don't know enough
> numerics to have an opinion as to which is right. (The only
> calculations in my present application involve monetary amounts,
> so still another set of rules applies.)


My experience in the matter is that the only algorithms that failed when
used with greater precision were:

1) test suites
2) wrong for other reasons, too

Put another way, if I were implementing a square root algorithm, why
would I ever *need* less accuracy?

Ok, ok, I can think of one - I'm writing an emulator that needs to be
dumbed down to match its dumb target.
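
To make that concrete, a hypothetical sketch (my_sqrt is an invented
name, not anyone's shipping code): Newton's method for square root, the
kind of algorithm that only benefits from extra intermediate precision:

    #include <iostream>

    // Newton's method for the square root (assumes a > 0). Extra
    // intermediate precision only moves x closer to the true root; the
    // algorithm never *needs* its intermediates to be less accurate.
    double my_sqrt(double a) {
        double x = a > 1.0 ? a : 1.0;         // initial guess
        for (int i = 0; i < 100; ++i) {       // cap guarantees termination
            double next = 0.5 * (x + a / x);  // Newton step
            if (next == x) break;             // converged to a fixed point
            x = next;
        }
        return x;
    }

    int main() {
        std::cout << my_sqrt(2.0) << '\n';  // prints ~1.41421
    }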
 
James Kanze
04-07-2007
On Apr 7, 7:38 pm, Walter Bright <(E-Mail Removed)>
wrote:
> James Kanze wrote:
> > The problem is that if you want to analyse your limits in
> > detail, you don't really know what precision is being used.
> > There are two schools of thought here, and I don't know enough
> > numerics to have an opinion as to which is right. (The only
> > calculations in my present application involve monetary amounts,
> > so still another set of rules applies.)


> My experience in the matter is that the only algorithms that failed when
> used with greater precision were:


> 1) test suites
> 2) wrong for other reasons, too


> Put another way, if I were implementing a square root algorithm, why
> would I ever *need* less accuracy?


I'm sure that that is true for many algorithms. But the
question isn't more or less accuracy, it is known accuracy, and
I can imagine certain algorithms becoming unstable if the
accuracy varies.

As I said, I don't know the domain enough to have an opinion. I
do know that in my discussions with experts in the domain,
opinions varied, and many seem to consider the extended accuracy
in the Intel processors a mis-design (or maybe I've
misunderstood them). Curiously, all seem to consider the base
16 used in IBM mainframe floating point a bad thing, and the
most common reason they give is the varying accuracy.

--
James Kanze (Gabi Software) email: (E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 
n.torrey.pines@gmail.com
04-07-2007
On Apr 6, 4:37 pm, "Jim Langston" <(E-Mail Removed)> wrote:

> http://www.parashift.com/c++-faq-lit...html#faq-29.18


Thanks, that was very useful. I like the tone of the FAQ, though: "if
you don't think this is surprising, you are stupid, but if you do,
that's because you don't know very much - keep reading."
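
For readers following the link, the FAQ's remedy boils down to comparing
with a tolerance instead of operator==. A minimal sketch (almostEqual and
the default tolerance are illustrative choices, not the FAQ's exact code):

    #include <algorithm>
    #include <cmath>

    // Compare two doubles with a relative tolerance. The right tolerance
    // is application-specific; 1e-9 here is just a placeholder.
    bool almostEqual(double a, double b, double relTol = 1e-9) {
        return std::abs(a - b)
            <= relTol * std::max(std::abs(a), std::abs(b));
    }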

 
Walter Bright
04-07-2007
James Kanze wrote:
> In this case, for example: I believe that the
> Intel FPU has an instruction to force rounding to double
> precision, without actually storing the value, and an
> implementation could use this.


That rounds the mantissa, but not the exponent. Thus, it doesn't do the
job. The only way to get the x87 to create a true double value is to
store it into memory, a slow operation.

Early Java's insistence on doubles for intermediate values caused a lot
of problems because of this.
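
For the curious, setting the x87 precision-control field looks roughly
like this (a glibc-specific, x86-only sketch using <fpu_control.h>; and,
per the caveat above, it narrows only the significand, not the exponent
range):

    #include <fpu_control.h>  // glibc, x86 only; not standard C++

    // Ask the x87 to round results to a 53-bit (double) significand.
    // The 15-bit extended exponent range stays in effect, so results
    // near overflow/underflow can still differ from true doubles.
    void set_x87_double_precision() {
        fpu_control_t cw;
        _FPU_GETCW(cw);
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(cw);
    }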
 
Bo Persson
04-08-2007
James Kanze wrote:
: On Apr 7, 9:08 am, "Bo Persson" <(E-Mail Removed)> wrote:
:: Jim Langston wrote:
:
:::: "Pete Becker" <(E-Mail Removed)> wrote in message
:
::::: In your example, though, both results are stored, so when widened
::::: they should produce the same result. Some compilers don't do this,
::::: though, unless you tell them that they have to obey the rules
::::: (speed again).
:
:::: FAQ 29.18 disagrees with you.
:
:::: http://www.parashift.com/c++-faq-lit...html#faq-29.18
:
:: The FAQ tells you what actually happens when you don't tell the
:: compiler to obey the language rules.
:
:: Like Pete says, most compilers have an option to select fast or
:: strictly correct code. The default is of course fast, but slightly
:: incorrect.
:
: For one definition of "correct": for a naive definition,
: 1.0/3.0 will never give correct results, regardless of
: conformance.

It has an expected result, given the hardware. As you mentioned elsewhere,
the problem with the Intel hardware is that a temporary result (in a
register) is produced faster *and* has higher precision than a stored
result.

That makes it very tempting to use the temporary.

: And I don't understand "slightly"---I always thought that
: "correct" was binary: the results are either correct, or they
: are not.

That was supposed to be some kind of humor. Didn't work out too well, did
it?

:
:: If the source code stores into variables x and y, the machine code
:: must do that as well.
:
: Just a nit: the results must be exactly the same as if the
: machine code had stored into the variables. The memory write
: itself isn't strictly necessary, but any rounding that would
: have occurred is.
:
:: Unless compiler switches (or lack thereof) relax the
:: requirements.
:
:: The "as-if" rule of code transformations doesn't apply here, as we
:: can actually see the difference.
:
: The "as-if" rule always applies, since it says that only the
: results count. In this case, for example: I believe that the
: Intel FPU has an instruction to force rounding to double
: precision, without actually storing the value, and an
: implementation could use this.

You must do a store to get the rounding. However, the target of the store
instruction can be another FP register, so in effect, yes.

In the example from the FAQ, the machine code compares a stored value to a
temporary in a register. That doesn't follow the "as-if" rule (as we can
easily see). Possibly because the code wasn't compiled with a "strict but
slow" compiler option.


Bo Persson


 
James Kanze
04-08-2007
On Apr 7, 10:40 pm, Walter Bright <(E-Mail Removed)>
wrote:
> James Kanze wrote:
> > In this case, for example: I believe that the
> > Intel FPU has an instruction to force rounding to double
> > precision, without actually storing the value, and an
> > implementation could use this.


> That rounds the mantissa, but not the exponent. Thus, it doesn't do the
> job.


I'm not sure what you mean by not rounding the exponent. The
exponent is an integral value, so no rounding is involved.

If it doesn't truncate the exponent, that's not a problem, since
anytime it would have to truncate the exponent, you have
undefined behavior. According to the standard, anyway; from a
quality of implementation point of view, I don't know. Most
implementations (those using IEEE, anyway) *do* define behavior
in this case.

> The only way to get the x87 to create a true double value is to
> store it into memory, a slow operation.


Even rounding in a register takes measurable time. (But not as
much as storing it, of course.)

> Early Java's insistence on doubles for intermediate values caused a lot
> of problems because of this.


I know. Their goal was laudable: that code should always give
the same results, everywhere. The problem is that in order to
do so, floating point becomes really slow on some platforms,
including the Intel. The other alternative would have been
requiring extended precision in the intermediate values, but
then they couldn't use native floating point on a Sparc, which
doesn't support extended precision. And for some reason, Sun
seems to consider Sparcs an important platform. So they
compromised. (They still require IEEE, which makes Java
significantly slower on an IBM mainframe, also an important
platform in certain milieus. Obviously, a double standard
applies.)

--
James Kanze (Gabi Software) email: (E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


 
James Kanze
04-08-2007
On Apr 8, 11:34 am, "Bo Persson" <(E-Mail Removed)> wrote:
> James Kanze wrote:


> : On Apr 7, 9:08 am, "Bo Persson" <(E-Mail Removed)> wrote:
> :: Jim Langston wrote:


> :::: "Pete Becker" <(E-Mail Removed)> wrote in message


> ::::: In your example, though, both results are stored, so when widened
> ::::: they should produce the same result. Some compilers don't do this,
> ::::: though, unless you tell them that they have to obey the rules
> ::::: (speed again).


> :::: FAQ 29.18 disagrees with you.


> :::: http://www.parashift.com/c++-faq-lit...html#faq-29.18


> :: The FAQ tells you what actually happens when you don't tell the
> :: compiler to obey the language rules.


> :: Like Pete says, most compilers have an option to select fast or
> :: strictly correct code. The default is of course fast, but slightly
> :: incorrect.


> : For one definition of "correct": for a naive definition,
> : 1.0/3.0 will never give correct results, regardless of
> : conformance.


> It has an expected result, given the hardware. As you mentioned elsewhere,
> the problem with the Intel hardware is that a temporary result (in a
> register) is produced faster *and* has higher precision than a stored
> result.


That's why I raised the question of what is meant by correct.
In context, if I remember it right, Pete's statement was clear:
correct meant conform to the standards. The simple quote above
removed the context, however, and makes the statement somewhat
ambiguous. Correct can mean many things, especially where
floating point values are involved.

> That makes it very tempting to use the temporary.


Quite. Especially because in some (most? all?) cases, the
resulting code will be "correct" in the sense that it meets its
requirement specifications.

I think that this is the basis of Walter's argument. He hasn't
convinced me that it applies in all cases, but even before this
thread, I was convinced that it applied often enough that a
compiler should offer the alternative. And while I'd normally
oppose non-standard as a default, floating point is so subtle
that either way, someone who doesn't understand the issues will
get it wrong, so the argument of "naïve users getting the
correct behavior" really doesn't apply.

> : And I don't understand "slightly"---I always thought that
> : "correct" was binary: the results are either correct, or they
> : are not.


> That was supposed to be some kind of humor. Didn't work out too well, did
> it?


No humor. For any given definition of "correct", code is either
correct, or it is not correct.

> :: If the source code stores into variables x and y, the machine code
> :: must do that as well.


> : Just a nit: the results must be exactly the same as if the
> : machine code had stored into the variables. The memory write
> : itself isn't strictly necessary, but any rounding that would
> : have occurred is.


> :: Unless compiler switches (or lack thereof) relax the
> :: requirements.


> :: The "as-if" rule of code transformations doesn't apply here, as we
> :: can actually see the difference.


> : The "as-if" rule always applies, since it says that only the
> : results count. In this case, for example: I believe that the
> : Intel FPU has an instruction to force rounding to double
> : precision, without actually storing the value, and an
> : implementation could use this.


> You must do a store to get the rounding. However, the target of the store
> instruction can be another FP register, so in effect, yes.


I didn't remember it that way, but it's been years since I
looked at the specification. (It was the specification for the
original 8087, so that gives you some idea of just how long ago
it was.)

> In the example from the FAQ, the machine code compares a stored value to a
> temporary in a register. That doesn't follow the "as-if" rule (as we can
> easily see). Possibly because the code wasn't compiled with a "strict but
> slow" compiler option.


Quite possibly. I know that real programs do occasionally have
this problem. I seem to recall that g++ even has it in certain
cases where the strict option is given, but I could be wrong.
This problem is probably the strongest argument against extended
precision. The fact that the value may change depending on
whether an intermediate value was spilled to memory or
maintained in a register certainly cannot help reasoning about
program correctness.

--
James Kanze (Gabi Software) email: (E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 
Walter Bright
04-08-2007
James Kanze wrote:
> On Apr 7, 10:40 pm, Walter Bright <(E-Mail Removed)>
> wrote:
>> James Kanze wrote:
>>> In this case, for example: I believe that the
>>> Intel FPU has an instruction to force rounding to double
>>> precision, without actually storing the value, and an
>>> implementation could use this.

>
>> That rounds the mantissa, but not the exponent. Thus, it doesn't do the
>> job.

>
> I'm not sure what you mean by not rounding the exponent. The
> exponent is an integral value, so no rounding is involved.


I meant that the exponent of an 80-bit real has a greater range, and
values in that greater range are not flushed to infinity when only the
significand is rounded to double precision.
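
A sketch of the consequence (illustrative; what actually happens depends
on how the compiler uses the x87 and on optimization):

    #include <iostream>

    int main() {
        double huge = 1e300;
        // As a true 64-bit double, huge * huge overflows to infinity.
        // Kept in an 80-bit x87 register, even with a 53-bit significand,
        // the ~1e600 intermediate survives, and the result may print as
        // 1e300 instead of inf.
        std::cout << huge * huge / 1e300 << '\n';
    }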
 