Velocity Reviews > Python math is off by .000000000000045

# Python math is off by .000000000000045

Terry Reedy
Guest
Posts: n/a

 02-26-2012
On 2/25/2012 9:49 PM, Devin Jeanpierre wrote:

> What this boils down to is to say that, basically by definition, the
> set of numbers representable in some finite number of binary digits is
> countable (just count up in binary value). But the whole of the real
> numbers are uncountable. The hard part is then accepting that some
> countable thing is 0% of an uncountable superset. I don't really know
> of any "proof" of that latter thing, it's something I've accepted
> axiomatically and then worked out backwards from there.

Informally, if the infinity of counts were some non-zero fraction f of
the reals, then there would, in some sense, be 1/f times as many reals as
counts, so the count could be expanded to count 1/f reals for each real
counted before, and the reals would be countable. But Cantor showed that
the reals are not countable.

But as you said, this is all irrelevant for computing. Since the number
of finite strings is practically finite, so is the number of algorithms.
And even a countable number of algorithms would be a fraction 0, for
instance, of the uncountable set of predicate functions on 0, 1, 2, ... .
So we do what we actually can that is of interest.

--
Terry Jan Reedy

jmfauth

 02-26-2012
On 25 Feb, 23:51, Steven D'Aprano <steve (E-Mail Removed)> wrote:
> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
> >>>> (2.0).hex()
> > '0x1.0000000000000p+1'
> >>>> (4.0).hex()
> > '0x1.0000000000000p+2'
> >>>> (1.5).hex()
> > '0x1.8000000000000p+0'
> >>>> (1.1).hex()
> > '0x1.199999999999ap+0'
> >
> > jmf
>
>
> What's your point? I'm afraid my crystal ball is out of order and I have
> no idea whether you have a question or are just demonstrating your
> mastery of copy and paste from the Python interactive interpreter.
>

It should be enough to indicate the right direction
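Unpacked, those hex strings show exactly which rational number each float
actually stores. A quick sketch (the Fraction calls are illustrative,
using the stdlib fractions module):

```python
from fractions import Fraction

# 1.5 fits exactly in binary, so its hex form terminates cleanly:
print((1.5).hex())       # 0x1.8000000000000p+0

# 1.1 does not: the 9...9a tail is 1.1's repeating binary fraction
# rounded to 53 significant bits.
print((1.1).hex())       # 0x1.199999999999ap+0

# Fraction(float) recovers the exact rational value that was stored:
print(Fraction(1.1))     # 2476979795053773/2251799813685248
print(Fraction(11, 10))  # 11/10, the value that was probably intended
```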

Guest

 02-27-2012

Curiosity prompts me to ask...

Those of you who program in other languages regularly: do you see
questions about floating-point arithmetic in your language's forum? Or
in comp.lang.perl?

Is there something about Python that exposes the uncomfortable truth
about practical computer arithmetic that these other languages
obscure? For of course, arithmetic is surely no less accurate in
Python than in any other computing language.

I always found it helpful to ask someone who is confused by this issue
to imagine what the binary representation of the number 1/3 would be.

0.011 to three binary digits of precision,
0.0101 to four,
0.01011 to five,
0.010101 to six,
0.0101011 to seven,
0.01010101 to eight.

And so on, forever. So, what if you want to do some calculator-style
math with the number 1/3, that will not require an INFINITE amount of
time? You have to round. Rounding introduces errors. The more
binary digits you use for your numbers, the smaller those errors will
be. But those errors can NEVER reach zero in finite computational
time.
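That rounding process is easy to reproduce in Python. A sketch using the
stdlib Fraction type (the helper name here is made up for illustration):

```python
from fractions import Fraction

def binary_approx(x, bits):
    """Round fraction x (0 <= x < 1) to the nearest binary fraction
    with `bits` digits after the point. Illustrative helper."""
    n = round(Fraction(x) * 2**bits)   # nearest integer multiple of 2**-bits
    return f"0.{n:0{bits}b}"

third = Fraction(1, 3)
for bits in range(3, 9):
    print(bits, binary_approx(third, bits))
# 3 0.011
# 4 0.0101
# 5 0.01011
# 6 0.010101
# 7 0.0101011
# 8 0.01010101

# Even at 53 bits (a double's precision) the error is tiny but nonzero:
print(third - Fraction(round(third * 2**53), 2**53))
```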

If ALL the numbers you are using in your computations are rational
numbers, you can use Python's fractions and/or decimal modules to get
error-free results. Learning to use them is a bit of a specialty.
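For instance (a sketch of the difference the fractions module makes):

```python
from fractions import Fraction

# Ten floats of 0.1 accumulate rounding error:
print(sum([0.1] * 10) == 1.0)             # False on IEEE-754 doubles
# The same sum done with exact rationals does not:
print(sum([Fraction(1, 10)] * 10) == 1)   # True
```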

But for those of us who end up with numbers like e, pi, or the square
root of 2 in our calculations, the compromise of rounding must be
accepted.

Terry Reedy

 02-27-2012

> I always found it helpful to ask someone who is confused by this issue
> to imagine what the binary representation of the number 1/3 would be.
>
> 0.011 to three binary digits of precision:
> 0.0101 to four:
> 0.01011 to five:
> 0.010101 to six:
> 0.0101011 to seven:
> 0.01010101 to eight:
>
> And so on, forever. So, what if you want to do some calculator-style
> math with the number 1/3, that will not require an INFINITE amount of
> time? You have to round. Rounding introduces errors. The more
> binary digits you use for your numbers, the smaller those errors will
> be. But those errors can NEVER reach zero in finite computational
> time.

Ditto for 1/3 in decimal:
....
0.33333333 to eight.

> If ALL the numbers you are using in your computations are rational
> numbers, you can use Python's fractions and/or decimal modules to get
> error-free results.

Decimal floats are about as error-prone as binary floats. One can only
exactly represent the subset of rationals of the form n / (2**j * 5**k).
For a fixed number of bits of storage, they are 'lumpier'. For any fixed
precision, the arithmetic issues are the same.
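A quick illustration of that point, assuming the decimal module's default
context of 28 significant digits:

```python
from decimal import Decimal

# 0.1 = 1 / (2**0 * 5**1), so decimal stores it exactly:
print(Decimal("0.1") * 3 == Decimal("0.3"))        # True

# 1/3 has no finite decimal expansion, so it is rounded to the context
# precision and the error shows up, just as with binary floats:
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))   # False
```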

The decimal module's decimals have three advantages (sometimes) over floats.

1. Variable precision - but there are multiple-precision floats also
available outside the stdlib.

2. They better imitate calculators - but that is irrelevant or a minus
for scientific calculation.

3. They better follow accounting rules for financial calculation,
including a multiplicity of rounding rules. Some of these are laws that
*must* be followed to avoid nasty consequences. This is the main reason
for being in the stdlib.
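For example, the half-way cases that accounting rules care about (a
sketch of two of the available rounding modes):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

half = Decimal("2.5")
# "Schoolbook"/financial rounding: halves round away from zero.
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3
# Banker's rounding: halves round to the nearest even digit.
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2
```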

> Learning to use them is a bit of a specialty.

Definitely true.

--
Terry Jan Reedy

Steven D'Aprano

 02-27-2012

> Curiosity prompts me to ask...
>
> Those of you who program in other languages regularly: do you see
> questions about floating-point arithmetic in your language's forum? Or
> in comp.lang.perl?

Yes.

http://stackoverflow.com/questions/5...ts-math-broken

And look at the "Linked" sidebar. Obviously StackOverflow users no more
search the internet for the solutions to their problems than do
comp.lang.python posters.

http://compgroups.net/comp.lang.java...roundoff-error

--
Steven

Grant Edwards

 02-27-2012
On 2012-02-27, Steven D'Aprano <(E-Mail Removed)> wrote:
>
>> Curiosity prompts me to ask...
>>
>> Those of you who program in other languages regularly: do you see
>> questions about floating-point arithmetic in your language's forum? Or
>> in comp.lang.perl?

>
> Yes.
>
> http://stackoverflow.com/questions/5...ts-math-broken
>
> And look at the "Linked" sidebar. Obviously StackOverflow users no
> more search the internet for the solutions to their problems than do
> comp.lang.python posters.
>
> http://compgroups.net/comp.lang.java...roundoff-error

One might wonder if the frequency of such questions decreases as the
programming language becomes "lower level" (e.g. C or assembly).

--
Grant Edwards     grant.b.edwards at gmail.com     Yow! World War III? No thanks!

Michael Torrie

 02-27-2012
On 02/27/2012 08:02 AM, Grant Edwards wrote:
> On 2012-02-27, Steven D'Aprano <(E-Mail Removed)> wrote:
>>
>>> Curiosity prompts me to ask...
>>>
>>> Those of you who program in other languages regularly: do you see
>>> questions about floating-point arithmetic in your language's forum? Or
>>> in comp.lang.perl?

>>
>> Yes.
>>
>> http://stackoverflow.com/questions/5...ts-math-broken
>>
>> And look at the "Linked" sidebar. Obviously StackOverflow users no
>> more search the internet for the solutions to their problems than do
>> comp.lang.python posters.
>>
>> http://compgroups.net/comp.lang.java...roundoff-error

>
> One might wonder if the frequency of such questions decreases as the
> programming language becomes "lower level" (e.g. C or assembly).

I think that most use cases of math in C or assembly are integer-based:
counting, bit-twiddling, addressing character cells or pixel
coordinates, and so on. Also, when programmers have to statically
declare a variable's type in advance, and the common use cases require
only integers, integer types get used far more, so encounters with
floats happen less often. Some of this could also be historical:
floating-point math once required a special support library, and since
a lot of machines didn't have floating-point coprocessors back then,
most code was integer-only.

Early BASIC interpreters defaulted to floating point for everything,
and implemented all the floating-point arithmetic internally with
integer arithmetic, without the help of an x87 coprocessor, though no
doubt they rounded the results when printing to the screen. They also
did not have very much precision to begin with. Anyone remember
Microsoft's proprietary floating-point binary format, and the function
calls to convert back and forth between it and the IEEE standard?

Another key thing is that most C programmers don't normally print
floating-point numbers raw; they use a format such as %.2f, which
properly rounds the number for display.
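The same trick works in Python, which is perhaps why the raw repr()
surprises people more often (a sketch):

```python
x = 0.1 + 0.2
print(repr(x))        # 0.30000000000000004 -- the full stored value
print("%.2f" % x)     # 0.30 -- rounded for display, error hidden
print(f"{x:.17g}")    # enough digits to round-trip the exact double
```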

Now, of course, every processor has a floating-point unit, and the C
compilers can generate code that uses it just as easily as integer code.

No matter what language or floating-point scheme you use, understanding
significant digits is definitely important!

Ethan Furman

 02-27-2012
jmfauth wrote:
> On 25 Feb, 23:51, Steven D'Aprano <steve (E-Mail Removed)> wrote:
>> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
>>>>>> (2.0).hex()
>>> '0x1.0000000000000p+1'
>>>>>> (4.0).hex()
>>> '0x1.0000000000000p+2'
>>>>>> (1.5).hex()
>>> '0x1.8000000000000p+0'
>>>>>> (1.1).hex()
>>> '0x1.199999999999ap+0'
>>> jmf

>> What's your point? I'm afraid my crystal ball is out of order and I have
>> no idea whether you have a question or are just demonstrating your
>> mastery of copy and paste from the Python interactive interpreter.

>
> It should be enough to indicate the right direction

I'm a casual interested reader and I have no idea what your post is
trying to say.

~Ethan~

Michael Torrie

 02-28-2012
On 02/27/2012 10:28 AM, Ethan Furman wrote:
> jmfauth wrote:
>> On 25 Feb, 23:51, Steven D'Aprano <steve (E-Mail Removed)> wrote:
>>> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote:
>>>>>>> (2.0).hex()
>>>> '0x1.0000000000000p+1'
>>>>>>> (4.0).hex()
>>>> '0x1.0000000000000p+2'
>>>>>>> (1.5).hex()
>>>> '0x1.8000000000000p+0'
>>>>>>> (1.1).hex()
>>>> '0x1.199999999999ap+0'
>>>> jmf
>>> What's your point? I'm afraid my crystal ball is out of order and I have
>>> no idea whether you have a question or are just demonstrating your
>>> mastery of copy and paste from the Python interactive interpreter.

>>
>> It should be enough to indicate the right direction

>
> I'm a casual interested reader and I have no idea what your post is
> trying to say.

He's simply showing you the hex form of the floating-point number's
binary representation. As you can clearly see in the case of 1.1,
there is no finite bit sequence that can store it exactly; you end up
with repeating digits. Just as 1/3 is a repeating sequence when
written in base-10 fractions (x1/10 + x2/100 + x3/1000, ...), base-10
numbers such as 1.1 or 0.2, and many others that are exact base-10
fractions, end up as repeating sequences in base-2 fractions. This
should help you understand why simple things like x/y*y don't quite
get you back to x.
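A concrete instance of that last point (the values here are chosen for
illustration):

```python
x, y = 1.0, 49.0
print(x / y * y)        # 0.9999999999999999 -- not quite x
print(x / y * y == x)   # False

# When y is a power of two the division is exact and the round trip works:
print(1.0 / 4.0 * 4.0 == 1.0)   # True
```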

Ethan Furman

 02-28-2012
Michael Torrie wrote:
> He's simply showing you the hex (binary) representation of the
> floating-point number's binary representation. As you can clearly see
> in the case of 1.1, there is no finite sequence that can store that.
> You end up with repeating numbers.

Thanks for the explanation.

> why simple things like x/y*y don't quite get you back to x.

I already understood that. I just didn't understand what point he was
trying to make since he gave no explanation.

~Ethan~