On Thu, 3 Jun 2010, Keith Thompson wrote:

> Seebs <(E-Mail Removed)> writes:
>> For plain float, on the systems I've tried, the boundary seems to be
>> about 2^24; 2^24+1 cannot be represented exactly in a 32-bit float. I
>> wouldn't be surprised to find that double came out somewhere near
>> 2^48+1 as the first positive integer value that couldn't be
>> represented.
>
> It's more likely to be 2^53-1, assuming IEEE floating-point; look at the
> values of FLT_MANT_DIG and DBL_MANT_DIG.
It's my turn to sigh now. For some reason I failed both to notice and to
remember DBL_MANT_DIG, which IIRC is on the same page of the standard as
LDBL_DIG.

We could simply check whether

  2 ** (sizeof(utype) * CHAR_BIT) <= FLT_RADIX ** DBL_MANT_DIG

that is, taking base-2 logarithms,

  sizeof(utype) * CHAR_BIT <= log2(FLT_RADIX) * DBL_MANT_DIG

or perhaps even, taking logarithms base FLT_RADIX,

  logb(2) * (sizeof(utype) * CHAR_BIT) <= DBL_MANT_DIG

We could pre-check whether FLT_RADIX is 2, and if so, simply omit
log2(FLT_RADIX) or logb(2) and compare plain integers. If not, then to
keep the test conservative we should first ask the environment to round
towards zero or -Inf for the log2(FLT_RADIX) formula (the factor sits on
the larger side), or towards +Inf for the logb(2) formula (the factor
sits on the smaller side).

Sorry,

lacos