Velocity Reviews > Are floating-point zeros required to stay exact?

# Are floating-point zeros required to stay exact?

Shao Miller
Guest
Posts: n/a

 03-01-2013
On 3/1/2013 13:07, Keith Thompson wrote:
> Shao Miller <(E-Mail Removed)> writes:
>> On 3/1/2013 03:05, glen herrmannsfeldt wrote:
>>> Shao Miller <(E-Mail Removed)> wrote:

> [...]
>>>> If you forfeit a single value from the set of allowable
>>>> values and call it zero, then ensure the zero-and-non-zero C semantics
>>>> with special consideration to this representation, there is already an
>>>> excuse for the arithmetic results being slightly off.
>>>
>>> I suppose you could call the smallest value 'zero', but it won't
>>> compare equal to its negative.

>>
>> What I meant was that there are certain semantics which require zero and
>> non-zero, and perhaps the implementation could insert a little extra
>> assembly... Suppose we have 'double_zero == 0'. Then the assembly
>> could actually compare the value of 'double_zero' with the forfeited
>> value that's called zero. In fact, it could do little else, if there's
>> no actual zero.
>>
>> Comparing equal to zero seems a little more trivial than performing
>> arithmetic, so arithmetic could just be performed "naturally." If I
>> recall correctly, expectations for floating-point arithmetic in C are
>> already that it's potentially imprecise (up to the implementation). I
>> don't know of a reason why the zero and non-zero semantics (such as
>> comparison) need to be imprecise.

>
> I think just about *every* floating-point operation (except
> assignment) would have to take this fake "zero" into account.
> It wouldn't just have to compare equal to 0.0, it would have to
> compare less than all positive values and greater than all negative
> values, x+0 and x-0 would have to yield x, x*0 would have to yield
> 0, 0/x would have to yield 0, and so on.
>
> Unless you take unreasonable advantage of C's vague requirements
> for floating-point semantics -- but then I'm not sure it's worth
> the effort, since that's likely to give you mathematically incorrect
> answers in a lot of cases.
>

Well, that's what I was saying. The arithmetic operators are "allowed"
to give imprecise results anyway, so 'y = 0.; x == x - y' needn't yield
1. I cannot imagine 'y == y' (or 'y == 0.') being too complicated.

> If you can make your "fake zero" a trap value (not just in the
> C sense of a trap representation, but something you can actually
> *trap*), you can do after-the-fact cleanup if you use the fake zero
> in an expression.
>

That'd be handy.

> Practically speaking, it's not worth worrying about until somebody
> comes up with an actual system whose floating-point can't represent
> 0.0 and wants to implement C on it.
>

Agreed.

--
- Shao Miller
--
"Thank you for the kind words; those are the kind of words I like to hear.

Cheerily," -- Richard Harter

glen herrmannsfeldt

 03-01-2013
Shao Miller <(E-Mail Removed)> wrote:
> On 3/1/2013 13:07, Keith Thompson wrote:

(snip)
>> I think just about *every* floating-point operation (except
>> assignment) would have to take this fake "zero" into account.
>> It wouldn't just have to compare equal to 0.0, it would have to
>> compare less than all positive values and greater than all negative
>> values, x+0 and x-0 would have to yield x, x*0 would have to yield
>> 0, 0/x would have to yield 0, and so on.

>> Unless you take unreasonable advantage of C's vague requirements
>> for floating-point semantics -- but then I'm not sure it's worth
>> the effort, since that's likely to give you mathematically incorrect
>> answers in a lot of cases.

> Well that's what I was saying. The arithmetic operators are "allowed"
> to give imprecise results anyway, so 'y = 0.; x == x - y' needn't yield
> 1.

And I believe the x87 can easily do that.

Well, the idea behind the 8087 was that it would use an infinite
(virtual) stack, storing in memory when the processor stack was full.
After the 8087 was built, it was found that it was not possible to write
the interrupt routine to spill the stack registers. Some needed state
bits weren't available.

Seems that they might have fixed that in the 80287, but, as I understand
it, they didn't. (Maybe to stay compatible with existing systems.)

> I cannot imagine 'y == y' (or 0.) being too complicated.

I suppose it shouldn't fail for y==y or y==0 when y is 0, but it can
fail for more complicated expressions, again with the x87.

Optimizing compilers will keep values in registers between statements,
even when they look like they are stored into float or double variables.
If the stack is full, values might be stored as double. I suppose they
should be stored as temporary real, but maybe that doesn't always
happen.

>> If you can make your "fake zero" a trap value (not just in the
>> C sense of a trap representation, but something you can actually
>> *trap*), you can do after-the-fact cleanup if you use the fake zero
>> in an expression.

> That'd be handy.

>> Practically speaking, it's not worth worrying about until somebody
>> comes up with an actual system whose floating-point can't represent
>> 0.0 and wants to implement C on it.

> Agreed.

The place I might expect to see this problem would be in systolic
arrays doing floating point. (Or, for that matter, in other MIMD
systems.) For a synchronous array, there is no fix-up.

The easiest way to avoid it is to not use a hidden one bit. Then zero is
just a number like any other. (Though possibly with a different sign or
exponent, and be careful with post-normalization.)

-- glen

Lowell Gilbert

 03-01-2013
Shao Miller <(E-Mail Removed)> writes:

> On 3/1/2013 13:07, Keith Thompson wrote:
>> If you can make your "fake zero" a trap value (not just in the
>> C sense of a trap representation, but something you can actually
>> *trap*), you can do after-the-fact cleanup if you use the fake zero
>> in an expression.
>>

>
> That'd be handy.

In the end, there's only so much the machine can do to hide imprecision
from you.

--
Lowell Gilbert, embedded/networking software engineer
http://be-well.ilk.org/~lowell/

glen herrmannsfeldt

 03-02-2013
Robert Wessel <(E-Mail Removed)> wrote:

(snip on inexact floating point, then someone wrote)
>>> Well that's what I was saying. The arithmetic operators are "allowed"
>>> to give imprecise results anyway, so 'y = 0.; x == x - y' needn't yield
>>> 1.

(and wrote)
>>And I believe can easily do that on the x87.

>>Well, the idea behind the 8087 was that it would use an infinite
>>(virtual) stack, storing in memory when the processor stack was full.
>>After the 8087 was built, it was found that it was not possible to write
>>the interrupt routine to spill the stack registers. Some needed state
>>bits weren't available.

>>Seems that they might have fixed that in the 80287, but, as I understand
>>it, they didn't. (Maybe to stay compatible with existing systems.)

> Palmer and Morse (the lead architects on the 8087) talked about it in
> their book on the coprocessor.

I used to have that book, and probably still do.

> Basically the original idea was to
> have the coprocessor spill to memory automatically, but that was
> dropped for die space issues, and the idea was then to allow a trap to
> handle the spills (and fills). While the facility was at least half
> there, it turned out to be impossible to reliably tell why the fault
> occurred (stack overflow and underflow are not reliably distinguished
> from other errors), and then the restart would have been very
> expensive, since the 8087 instruction could not actually have been
> restarted, the faulting instruction would have had to have been
> emulated in software.

> And then it never got fixed.

Seems to me that the 80287 has much of the same logic as the 8087.

The 8086 and 8087 require, at design clock frequency, a 33% duty cycle
clock. The 8284 divides the crystal frequency by three to do that.
(The original IBM PC has a 14.31818MHz crystal, four times the
NTSC color subcarrier frequency, to make it easier to build the CGA
video board. That divided by three is 4.77MHz, close to the 5MHz
rating of the 8088 at the time.)

The 80286 uses a 50% clock, but the 80287 runs asynchronously to
that with a 33% clock. I once built a little board that would plug
into the 80287 socket and clock the 80287 at the desired rate.

Seems, then, that the internal logic of the 80287 is similar to that
of the 8087, with a new bus interface spliced on. It then has the
extra complication of passing data between the clock domains of
two asynchronous clocks.

Also, note that to save one crystal on the original PC, all later
machines with the ISA bus have to have a 14.31818MHz crystal just
in case someone uses a CGA with them. (I did once have one in my
AT clone in 1989.)

-- glen

glen herrmannsfeldt

 03-03-2013
Robert Wessel <(E-Mail Removed)> wrote:

(snip)
> As far as I know, the math parts of the 8087, 80187 and 80287 were
> basically identical, just the bus interfaces varied. The extra
> overhead on the '286 (because the busses were not well matched) was
> such that at the same clock speed you'd get slower performance from a
> 286/287 combination than an 8086/87. The 80387 was a major revamp.

Yes.

So, why didn't they fix the virtual stack on the 80387?

Or did they, and no one noticed?

-- glen

Nick Keighley

 03-09-2013
On Feb 27, 10:19 pm, James Kuyper <(E-Mail Removed)> wrote:
> On 02/27/2013 04:48 PM, Eric Sosman wrote:
>
> > On 2/27/2013 3:29 PM, James Kuyper wrote:

> ...
> >> "The accuracy of the floating-point operations (+, -, *, /) and of the
> >> library functions in <math.h> and <complex.h> that return floating-point
> >> results is implementation defined, as is the accuracy of the conversion
> >> between floating-point internal representations and string
> >> representations performed by the library functions in <stdio.h>,
> >> <stdlib.h>, and <wchar.h>. The implementation may state that the
> >> accuracy is unknown." (5.2.4.2.2p6)

>
> > I don't think this paragraph applies. None of (+, -, *, /)
> > is performed,

>
> I've always assumed that "floating point operations" was the key phrase,
> and that "(+, -, *, /)" should be taken only as examples,

ditto.

> implying, in
> particular, that the relational and equality operators were also
> intended to be covered by that clause.

Surely the implementer would have to be extraordinarily perverse!
Ultimately == comes down to comparing bit patterns. Isn't 0.0 always
going to have the same bit pattern? (What about denormalised numbers?)

Many times I've seen it said that floating point can be used as large
integers. So stuff like this will work

a = 2.0
b = 3.0

a + b == 5.0

> On the other hand, You might be right. If so, does that mean that the
> unary +, unary -, !, ?:, ++, --, and compound assignment operators
> acting on floating point values are also not covered,

For unary + and unary - I'd expect "sensible" answers (excluding NaNs,
and numbers with very large magnitudes). Are !, ++ and -- even defined
for FP numbers? What problem do you envisage with ?: ?

> and must therefore
> always return exact values? That's trivial to achieve for the first four
> of those operators, but I don't think it's possible for the others - but
> perhaps the others are covered by their definitions in terms of the four
> explicitly-listed operations.
>
> Still,
>
>     LDBL_MIN + LDBL_EPSILON == LDBL_MAX - LDBL_EPSILON
>
> is unambiguously covered by that clause, and the same is true of
>
>     nextafterl(LDBL_MIN, 0.0) == nextafterl(LDBL_MAX, 0.0)
>
> and having either of those evaluate as true is an equally disconcerting
> possibility.

David Thompson

 03-11-2013
On Wed, 27 Feb 2013 15:55:40 -0500, Shao Miller
<(E-Mail Removed)> wrote:

> On 2/27/2013 15:29, James Kuyper wrote:
> > On 02/27/2013 02:56 PM, army1987 wrote:

<snip: comparing 0. in variable to literal constant>
> > No.
> > "The accuracy of the floating-point operations (+, -, *, /)

>
> Which do not appear in the code.
>
> > and of the
> > library functions in <math.h> and <complex.h> that return floating-point
> > results

>
> Which do not appear in the code.
>
> > is implementation defined, as is the accuracy of the conversion
> > between floating-point internal representations

>
> I do not see conversions happening in the code.
>

Nit: the fetch of the variable and the 'decay' of a string literal value
like "Okay" to /*const-ish*/ char * are formally conversions (6.3.2p2,3),
although not conversions that can lose numeric precision, which is
apparently the issue here.

> > and string
> > representations performed by the library functions in <stdio.h>,
> > <stdlib.h>, and <wchar.h>.

>
> No library functions are using floating types in the code and there are
> no string representations of a floating value in the code.
>