Velocity Reviews > C++ > Doing the 0.1-trick in C++

# Doing the 0.1-trick in C++

Stefan Ram

 03-16-2013
When explaining the meaning of »0.1« in Java, I use a trick
to exhibit the value of the internal representation of 0.1:

public class Main
{
    public static void main( final java.lang.String[] args )
    {
        java.lang.System.out.println( new java.math.BigDecimal( 0.1 ));
    }
}

0.1000000000000000055511151231257827021181583404541015625

Is there a similar possibility in C++, too?

Stefan Ram

 03-16-2013
Andy Champ <(E-Mail Removed)> writes:
>>0.1000000000000000055511151231257827021181583404541015625
>>Is there a similar possibility in C++, too?

>std::cout << std::setprecision(50) << 0.1 << std::endl;

Here, this gives

0.1000000000000000055511151231257827021181583404541

It misses the last 6 digits of the exact value, but is quite close.
That the first numeral ends in »625« hints at a binary fraction.

Ian Collins

 03-16-2013
Stefan Ram wrote:
> Andy Champ <(E-Mail Removed)> writes:
>>> 0.1000000000000000055511151231257827021181583404541015625
>>> Is there a similar possibility in C++, too?
>> std::cout << std::setprecision(50) << 0.1 << std::endl;
>
> Here, this gives
>
> 0.1000000000000000055511151231257827021181583404541
>
> It misses the last 6 digits, but is quite close.

std::cout << std::setprecision(60) << 0.1 << std::endl;

--
Ian Collins

SG

 03-18-2013
On 16.03.2013 23:07, Ian Collins wrote:
> [...]
> std::cout << std::setprecision(60) << 0.1 << std::endl;
Right. Though, I do wonder how to determine the lowest possible
precision value such that the result still accurately reflects the
value of the floating point number. So far, I have used
numeric_limits<double>::digits+3 (see http://ideone.com/frexSH ).

Öö Tiib

 03-18-2013
On Monday, 18 March 2013 15:03:35 UTC+2, SG wrote:
> Right. Though, I do wonder how to determine the lowest possible
> precision value such that the result still accurately reflects the
> value of the floating point number. So far, I have used
> numeric_limits<double>::digits+3 (see http://ideone.com/frexSH )

std::numeric_limits<double>::digits10.

SG

 03-18-2013
On 18.03.2013 17:36, Öö Tiib wrote:
> [...]
> std::numeric_limits<double>::digits10.

No.

digits10 is the number of decimal digits that can be stored in the
floating point type and recovered without change (15 for an IEEE-754
64-bit double).

Slightly higher: max_digits10 is the number of decimal digits one needs
to uniquely identify a floating point value's bit pattern, excluding the
bit patterns for +/-0, +/-Inf and NaN (17 for an IEEE-754 64-bit
double).

Neither of these gives you the exact decimal expansion; they only give
representations that are close enough. Consider this:

binary    decimal
-------   -------
1.1       1.5
1.01      1.25
1.001     1.125
1.0001    1.0625
1.00001   1.03125

So, to represent the _same_ value you need at least as many decimal
digits after the point as there are bits in the significand. That was
my motivation to use numeric_limits<T>::digits as a starting point (53
for an IEEE-754 64-bit double).

Victor Bazarov

 03-18-2013
On 3/18/2013 1:07 PM, SG wrote:
> [...]
> So, to represent the _same_ value you need at least as many decimal
> digits after the point as there are bits in the significand. That was
> my motivation to use numeric_limits<T>::digits as a starting point (53
> for an IEEE-754 64-bit double).

But... Does 'setprecision' (that Andy recommended, and from which all
this 'digits' conversation started) have anything to do with the binary
representation? Or does it have everything to do with the decimal
output? In other words, why care about 'digits' as far as
'setprecision' is concerned?

V
--

SG

 03-18-2013
On 18.03.2013 18:53, Victor Bazarov wrote:
> But... Does 'setprecision' (that Andy recommended, and from which all
> this 'digits' conversation started) have anything to do with the binary
> representation? Or does it have everything to do with the decimal
> output? In other words, why care about 'digits' as far as
> 'setprecision' is concerned?

I'm not sure I understand your question. Given the OP's desire to
display the exact value of a floating point number in decimal, he
obviously has to pick an "output precision" that is large enough for
the output to be exact.

For an exact decimal representation of a radix-2-based floating point
number x you need max(floor(log10(x)),0)+1 digits before the decimal
point and about max(numeric_limits<T>::digits-log2(x),0) decimal
digits, give or take, after the decimal point, I think.

Victor Bazarov

 03-18-2013
On 3/18/2013 3:37 PM, SG wrote:
> [...]
> For an exact decimal representation of a radix-2-based floating point
> number x you need max(floor(log10(x)),0)+1 digits before the decimal
> point and about max(numeric_limits<T>::digits-log2(x),0) decimal
> digits, give or take, after the decimal point, I think.

>>
>> But... Does 'setprecision' (that Andy recommended, and from which all
>> this 'digits' conversation started) have anything to do with the binary
>> representation? Or does it have everything to do with the decimal
>> output? IOW, why care about 'digits' AFA 'setprecision' is concerned?

>
> I'm not sure if I understand your question. Given the OP's desire to
> display the exact value of a floating point number in decimal, he
> obviously has to pick an "output precision" that is large enough for the
> output to be exact.
>
> For an exact decimal representation of a radix-2-based floating point
> number x you need max(floor(log10(x)),0)+1 digits before the decimal
> point and about max(numeric_limits<T>::digits-log2(x),0) give or take
> decimal digits after the decimal point, I think.

Uh... Wait. To represent 0.1f exactly, how many digits do you need?
max(floor(log10(0.1f)),0)+1 is 1, so you need just 1 digit before the
decimal separator, and numeric_limits<float>::digits is 24, so you need
max(24-log2(0.1),0) *decimal* digits _after_ the decimal separator?
log2(0.1) is about -3.3, so you need 27 (or 28) *decimal* digits? Did I
get that right?

V
--

SG

 03-19-2013
On 18.03.2013 20:47, Victor Bazarov wrote:
> Uh... Wait. To represent 0.1f exactly, how many digits do you need?
> [...] log2(0.1) is about -3.3, so you need 27 (or 28) *decimal*
> digits? Did I get that right?

First of all, let me mention that I should have said "you need up to"
that many digits; the formula is only a rough upper estimate.

Now, if you write one tenth as a binary number, you get

0.000110011001100110011001100110... (repeating with a period of 4 bits)

Picking the closest IEEE-754 single precision value basically means
limiting the number of significant bits to 24, with a leading one that
is not explicitly stored for a normalized number:

0.000110011001100110011001101 (LSB rounded up)
     ^^^^^^^^^^^^^^^^^^^^^^^^
     (24 significant bits)

This is the number a float-type variable will actually store.

Now, since 2 is a factor of 10, we are able to express this number in
decimal _exactly_:

0.100000001490116119384765625

That's a chunk of 27 consecutive decimal digits that starts and ends
with something other than zero. So, 27 is in fact the lowest possible
precision value to pass to the std::setprecision manipulator in order to
see _this_number_.

Of course, if you just want to see 0.1 being printed, then
numeric_limits<float>::digits10 is what you would want to use for the
printing precision. digits10 for float on my machine is 6:

0.10000000149...
  ^^^^^^
  (6 significant digits)

Anyhow, I don't really know why one would want to print the _exact_
value of a floating point number in decimal. Why? I guess it's just for
educational purposes ... to make students understand that floating
point numbers are typically just approximations and cannot exactly
represent a number like 0.1. It's not uncommon for students to confuse
the different kinds of rounding that happen during the conversions.
Often students think that writing 0.1 will yield a value of type double
that exactly represents one tenth. In other cases I noticed students
were convinced that the number printed on screen in decimal represents
the exact value the floating point variable stores. We know that in
both cases this is a false assumption.

Cheers!
SG