
# Can a double always represent an int exactly?

Fred Ma

 10-22-2004
I'm using the expression "int a = ceil( SomeDouble )".
The man page says that ceil returns the smallest
integer that is not less than SomeDouble, represented
as a double. However, my understanding is that a
double has nonuniform precision throughout its value
range. Will a double always be able to exactly
represent any value of type int? Could someone please
point me to an explanation of how this is ensured,
given that the details of a type realization varies
with the platform?

Thanks.

Fred

P.S. I am not worried about overflowing the int value
range, just about the guaranteed precise representation
of int by double.

Rouben Rostamian

 10-22-2004
In article <(E-Mail Removed)>,
Fred Ma <(E-Mail Removed)> wrote:
>I'm using the expression "int a = ceil( SomeDouble )".
>The man page says that ceil returns the smallest
>integer that is not less than SomeDouble, represented
>as a double. However, my understanding is that a
>double has nonuniform precision throughout its value
>range. Will a double always be able to exactly
>represent any value of type int? Could someone please
>point me to an explanation of how this is ensured,
>given that the details of a type realization varies
>with the platform?

I don't know whether the C Standard specifies anything to
this effect. But here is an implementation-specific
observation.

On a machine with 64-bit doubles which follow the IEEE
specification, the significand is 52 explicit bits plus one
hidden bit, i.e. 53 bits in all, therefore integers as large
as 2 to the 53rd power are exactly representable. In particular,
if the machine has 32-bit ints, they are all exactly representable
as doubles.

On my machine, which has 32-bit ints and 64-bit doubles,
the following yields the exact answer:

printf("%30.15f\n", 1.0 + pow(2.0, 52.));

However the following stretches it too far and the answer
is inexact:

printf("%30.15f\n", 1.0 + pow(2.0, 53.));

--
rr

Fred Ma

 10-22-2004
Rouben Rostamian wrote:
>
> I don't know whether the C Standard specifies anything to
> this effect. But here is an implementation-specific
> observation.
>
> On a machine with 64-bit doubles which follow the IEEE
> specification, the significand is 52 explicit bits plus one
> hidden bit, i.e. 53 bits in all, therefore integers as large
> as 2 to the 53rd power are exactly representable. In particular,
> if the machine has 32-bit ints, they are all exactly representable
> as doubles.
>
> On my machine, which has 32-bit ints and 64-bit doubles,
> the following yields the exact answer:
>
> printf("%30.15f\n", 1.0 + pow(2.0, 52.));
>
> However the following stretches it too far and the answer
> is inexact:
>
> printf("%30.15f\n", 1.0 + pow(2.0, 53.));

I realize that if a double actually uses twice as many bits as
ints, the mantissa should be big enough that imprecision should
never arise. I'm just concerned about whether this can be relied
upon. My faith in what seems normal has been shaken after finding
that long has the same number of bits as int in some environments.
What if double has the same number of bits as ints in some
environments? Some of those bits will be taken up by the
exponent, and the mantissa will actually have fewer bits than an
int. Hence, it will be less precise than ints within the value
range of ints.

Fred

Chris Torek

 10-22-2004
In article <(E-Mail Removed)>
Fred Ma <(E-Mail Removed)> writes:
>I'm using the expression "int a = ceil( SomeDouble )". The man
>page says that ceil returns the smallest integer that is not less
>than SomeDouble, represented as a double. However, my understanding
>is that a double has nonuniform precision throughout its value range.

This is correct (well, I can imagine a weird implementation that
deliberately makes "double"s have constant precision by often
wasting a lot of space; it seems quite unlikely though).

Note that ceil() returns a double, not an int.

>Will a double always be able to exactly represent any value of
>type int?

This is implementation-dependent. If "double" is not very precise
but INT_MAX is very large, it is possible that not all "int"s can
be represented. This is one reason ceil() returns a double (though
a small one at best -- the main reason is so that ceil(1.6e35) can
still be 1.6e35, for instance).

>Could someone please point me to an explanation of how this is ensured,
>given that the details of a type realization varies with the platform?

I am not sure what you mean by "this", especially with the PS:

>P.S. I am not worried about overflowing the int value
>range, just about the guaranteed precise representation
>of int by double.

... but let me suppose you are thinking of a case that actually occurs
if we substitute "float" for "double" on most of today's implementations.
Here, we get "interesting" effects near 8388608.0 and 16777216.0.
Values from 8388608.0 up to 16777216.0 step by ones: 8388608.0 is followed
immediately by 8388609.0, for instance, and 16777215.0 is followed
immediately by 16777216.0. On the other hand, below (float)(1<<23)
or above (float)(1<<24), we step by 1/2 or 2 respectively. Using
nextafterf() (if you have it) and variables set to the right values,
you might printf() some results and find:

nextafterf(8388608.0, -inf) = 8388607.5
nextafterf(16777216.0, +inf) = 16777218.0

So all ceil() has to do with values that are at least 8388608.0
(in magnitude) is return those values -- they are already integers.
It is only values *below* this area that can have fractional
parts.

Of course, when we use actual "double"s on today's real (IEEE style)
implementations, the tricky point is not 2^23 but rather
2^52. The same principle applies, though: values that meet or
exceed some magic constant (in either positive or negative direction)
are always integral, because they have multiplied away all their
fraction bits by their corresponding power of two. Since 2^23 +
2^22 + ... + 2^0 is a sum of integers, it must itself be
an integer. Only if the final terms of the sum involve negative
powers of two can it contain fractions.

The other "this" you might be wondering about is: how do you
drop off the fractional bits? *That* one depends (for efficiency
reasons) on the CPU. The two easy ways are bit-twiddling, and
doing addition followed by subtraction. In both cases, we just
want to zero out any mantissa (fraction) bits that represent
negative powers of two. The bit-twiddling method clears them with
an explicit mask, while the add-then-subtract method uses the
normalization hardware to knock them out. If normalization is slow
(e.g., done in software or with a microcode loop), the bit-twiddling
method is generally faster.
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
Reading email is like searching for food in the garbage, thanks to spammers.

Gordon Burditt

 10-22-2004
>I'm using the expression "int a = ceil( SomeDouble )".
>The man page says that ceil returns the smallest
>integer that is not less than SomeDouble, represented
>as a double. However, my understanding is that a
>double has nonuniform precision throughout its value
>range. Will a double always be able to exactly
>represent any value of type int?

No. There is nothing prohibiting an implementation from choosing
int = 64-bit signed integer, and double = 64-bit IEEE double, which
has only 53 mantissa bits. Integers outside the range +/- 2**53
may be rounded.

>point me to an explanation of how this is ensured,
>given that the details of a type realization varies
>with the platform?

It is NOT ensured.

Gordon L. Burditt

Erik Trulsson

 10-22-2004
Fred Ma <(E-Mail Removed)> wrote:
> I'm using the expression "int a = ceil( SomeDouble )".
> The man page says that ceil returns the smallest
> integer that is not less than SomeDouble, represented
> as a double. However, my understanding is that a
> double has nonuniform precision throughout its value
> range.

I am not sure what you mean here, but a double is a floating-point type
and like all such has a precision of some fixed number of significant
digits. This precision does not vary, but for large exponents the
difference between one number and the next higher one can be fairly
large.

> Will a double always be able to exactly
> represent any value of type int?

Not necessarily. If, as is common, a double is 64 bits wide with 53
bits of precision, and (as is less common) int is also 64 bits wide
then there are some values of type int which can not be exactly
represented by a double.

> point me to an explanation of how this is ensured,
> given that the details of a type realization varies
> with the platform?
>
> Thanks.
>
> Fred
>
> P.S. I am not worried about overflowing the int value
> range, just about the guaranteed precise representation
> of int by double.

--
Erik Trulsson
(E-Mail Removed)

Erik Trulsson

 10-22-2004
Fred Ma <(E-Mail Removed)> wrote:
> Rouben Rostamian wrote:
>>
>> I don't know whether the C Standard specifies anything to
>> this effect. But here is an implementation-specific
>> observation.
>>
>> On a machine with 64-bit doubles which follow the IEEE
>> specification, the significand is 52 explicit bits plus one
>> hidden bit, i.e. 53 bits in all, therefore integers as large
>> as 2 to the 53rd power are exactly representable. In particular,
>> if the machine has 32-bit ints, they are all exactly representable
>> as doubles.
>>
>> On my machine, which has 32-bit ints and 64-bit doubles,
>> the following yields the exact answer:
>>
>> printf("%30.15f\n", 1.0 + pow(2.0, 52.));
>>
>> However the following stretches it too far and the answer
>> is inexact:
>>
>> printf("%30.15f\n", 1.0 + pow(2.0, 53.));

>
> I realize that if a double actually uses twice as many bits as
> ints, the mantissa should be big enough that imprecision should
> never arise. I'm just concerned about whether this can be relied
> upon.

This can't be relied upon.

> My faith in what seems normal has been shaken after finding
> that long has the same number of bits as int in some environments.

Actually in most environments these days. (Most Unix-variants on
32-bit systems have both int and long 32 bits wide.)

> What if double has the same number of bits as ints in some
> environments? Some of those bits will be taken up by the
> exponent, and the mantissa will actually have fewer bits than an
> int. Hence, it will be less precise than ints within the value
> range of ints.

Correct, and this can indeed happen.

--
Erik Trulsson
(E-Mail Removed)

Jack Klein

 10-22-2004
On 22 Oct 2004 00:07:14 GMT, Fred Ma <(E-Mail Removed)> wrote in
comp.lang.c:

> I'm using the expression "int a = ceil( SomeDouble )".
> The man page says that ceil returns the smallest
> integer that is not less than SomeDouble, represented
> as a double. However, my understanding is that a
> double has nonuniform precision throughout its value
> range. Will a double always be able to exactly
> represent any value of type int? Could someone please
> point me to an explanation of how this is ensured,
> given that the details of a type realization varies
> with the platform?
>
> Thanks.
>
> Fred
>
> P.S. I am not worried about overflowing the int value
> range, just about the guaranteed precise representation
> of int by double.

As others have mentioned, on 64-bit platforms some integer types, and
perhaps even type int on some, have 64 bits and doubles usually have
fewer mantissa bits than this.

What I haven't seen anyone else point out, so far, is the fact that
this implementation-defined characteristic is available to your
program via the macros DECIMAL_DIG and DBL_DIG in <float.h>.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html

Fred Ma

 10-22-2004
Fred Ma wrote:
>
> I'm using the expression "int a = ceil( SomeDouble )". The man page says
> that ceil returns the smallest integer that is not less than SomeDouble,
> represented as a double. However, my understanding is that a double has
> nonuniform precision throughout its value range. Will a double always be
> able to exactly represent any value of type int? Could someone please
> point me to an explanation of how this is ensured, given that the details
> of a type realization varies with the platform?
>
> Thanks.
>
> Fred
>
> P.S. I am not worried about overflowing the int value range, just about
> the guaranteed precise representation of int by double.

Thanks, all, for your replies. They have pointed out a flaw with my own
question. Specifically, it is one thing to ask:

(1) if a double can precisely represent any int.

It is quite another to ask:

(2) if an int(ceil(SomeDouble)) can precisely represent the smallest
integer that is no smaller than SomeDouble, given that SomeDouble is
in the value range of int.

The answer to #1 is clearly no if the mantissa of the double has
"significantly" fewer bits than the int. The reason for "significantly" is
the approximate bookkeeping I've walked through below; based on Chris's description,
I tried to sanity check this. It starts with the idea that whether a
double can represent any int depends on whether a double can increase in
value by exactly 1 throughout the value range of int. That is, when the
LSB of the mantissa is toggled, does the value of the double change by no
more than 1? For a mantissa of N bits, ignoring the IEEE hidden bit, this
condition is satisfied if scaling due to the exponent (power of 2) is
less-than-or-equal-to 2^N. I'm not talking about how the exponent is
represented in terms of bits; I'm talking about multiplying the mantissa by
2^N, however it is represented in IEEE format. Basically, the scaling is
such that there are no fractional bits. An exponent value greater than N
yields a scaling that causes the double to increment by more than 1 when
the mantissa increments. Hence, the limiting condition for the double to
have a precision of unity is when the scaling is 2^N. The maximum number
under this condition is when the mantissa is all-ones (N+1 ones including
the hidden bit) i.e. the double has value 2^(N+1)-1. (I'm ignoring the
details to accommodate negative numbers, this might affect the answer by a
bit or so). If all ints fall within this limit, then a double can
represent all ints.

I think the answer to #2 follows from this picture of scaling the mantissa
so that the LSB has unit value. I had to remind myself that the condition
considered in #2 is that SomeDouble is within the value range of int, so
the hazard being tested is not one of overflow. Irrespective of this
condition, however, there are two scenarios which ceil(SomeDouble) can be
split into. One is that the exponent scaling of SomeDouble leaves some
fractional bits, and the other is that it doesn't. If there are some
fractional bits, then the resolution of SomeDouble in that value range is
obviously more precise than a unity step, so integers are precisely
representable, and ceil should return the right value. If there are no
fractional bits, then SomeDouble has an integral value, and passing it
through the ceil function should result in no change, regardless of the
resolution of SomeDouble in that value range i.e. ceil should be able to
return the correct value as a double.

The unintuitive result of this (to me) is that ceil(SomeDouble) *always*
returns precisely the right answer. Whether it fits into an int is a different
issue (issue#1). I suspect this is what Chris was illustrating.

Fred

Fred Ma

 10-22-2004
Jack Klein wrote:
>
> As others have mentioned, on 64-bit platforms some integer types, and
> perhaps even type int on some, have 64 bits and doubles usually have
> fewer mantissa bits than this.
>
> What I haven't seen anyone else point out, so far, is the fact that
> this implementation-defined characteristic is available to your
> program via the macros DECIMAL_DIG and DBL_DIG in <float.h>.

Hi, Jack,

I found these definitions at Dinkum:

DECIMAL_DIG
#define DECIMAL_DIG <#if expression >= 10> [added with C99]
The macro yields the minimum number of decimal digits needed to represent all the significant digits for type long double.

FLT_DIG
#define FLT_DIG <#if expression >= 6>
The macro yields the precision in decimal digits for type float.

I guess the point is that one can infer the bit-width of the mantissa from
them. Thanks.

Fred