Velocity Reviews > Maths error

# Maths error

Rory Campbell-Lange
Guest
Posts: n/a

 01-08-2007
>>> (1.0/10.0) + (2.0/10.0) + (3.0/10.0)
0.60000000000000009
>>> 6.0/10.0
0.59999999999999998

Is using the decimal module the best way around this? (I'm expecting the first
sum to match the second). It seems anachronistic that decimal takes strings as
input, though.

Help much appreciated;
Rory
--
Rory Campbell-Lange
<(E-Mail Removed)>
<www.campbell-lange.net>

Bjoern Schliessmann
Guest
Posts: n/a

 01-08-2007
Rory Campbell-Lange wrote:

> Is using the decimal module the best way around this? (I'm
> expecting the first sum to match the second). It seems
> anachronistic that decimal takes strings as input, though.

Precision errors with floating point numbers are normal because the
available precision is technically limited.

For floats a and b, you'd seldom write "if a == b:" (because it's
often false, as in your case) but rather
"if abs(a - b) < threshold:" for a threshold value that is reasonable
for your application.
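A minimal sketch of such a tolerance-based comparison, using the poster's numbers (the threshold value here is arbitrary; math.isclose exists in Python 3.5 and later):

```python
import math

a = (1.0 / 10.0) + (2.0 / 10.0) + (3.0 / 10.0)
b = 6.0 / 10.0

# Direct equality fails because of binary rounding.
print(a == b)                  # False

# Compare within an absolute tolerance instead.
threshold = 1e-9
print(abs(a - b) < threshold)  # True

# Python 3.5+ bundles this idiom as math.isclose.
print(math.isclose(a, b))      # True
```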

Also check the recent thread "bizarre floating point output".

Regards,

Björn

--
BOFH excuse #333:

A plumber is needed, the network drain is clogged

Gabriel Genellina
Guest
Posts: n/a

 01-09-2007
At Monday 8/1/2007 19:20, Bjoern Schliessmann wrote:

>Rory Campbell-Lange wrote:
>
> > Is using the decimal module the best way around this? (I'm
> > expecting the first sum to match the second). It seems
> > anachronistic that decimal takes strings as input, though.

>[...]
>Also check the recent thread "bizarre floating point output".

And the last section on the Python Tutorial "Floating Point
Arithmetic: Issues and Limitations"

--
Gabriel Genellina
Softlab SRL


Dan Bishop
Guest
Posts: n/a

 01-09-2007
On Jan 8, 3:30 pm, Rory Campbell-Lange <(E-Mail Removed)> wrote:
> >>> (1.0/10.0) + (2.0/10.0) + (3.0/10.0)
> 0.60000000000000009
> >>> 6.0/10.0
> 0.59999999999999998
>
> Is using the decimal module the best way around this? (I'm expecting the first
> sum to match the second).

Probably not. Decimal arithmetic is NOT a cure-all for floating-point
arithmetic errors.

>>> Decimal(1) / Decimal(3) * Decimal(3)
Decimal("0.9999999999999999999999999999")
>>> Decimal(2).sqrt() ** 2
Decimal("1.999999999999999999999999999")

> It seems anachronistic that decimal takes strings as
> input, though.

How else would you distinguish Decimal('0.1') from
Decimal('0.1000000000000000055511151231257827021181583404541015625')?
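In later Python versions (2.7 and 3.2 onward), Decimal also accepts floats directly, which makes the distinction concrete:

```python
from decimal import Decimal

# The string form is taken at face value.
exact = Decimal('0.1')

# The float form captures the binary approximation that 0.1 really is.
approx = Decimal(0.1)

print(exact)            # 0.1
print(approx)           # 0.1000000000000000055511151231257827021181583404541015625
print(exact == approx)  # False
```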

Nick Maclaren
Guest
Posts: n/a

 01-09-2007

|> Rory Campbell-Lange wrote:
|>
|> > Is using the decimal module the best way around this? (I'm
|> > expecting the first sum to match the second). It seems
|> > anachronistic that decimal takes strings as input, though.

As Dan Bishop says, probably not. The introduction to the decimal
module makes exaggerated claims of accuracy, amounting to propaganda.
It is numerically no better than binary, and has some advantages
and some disadvantages.

|> Also check the recent thread "bizarre floating point output".

No, don't. That is about another matter entirely, and will merely
confuse you. I have a course on computer arithmetic, and am just
now writing one on Python numerics, and confused people may contact
me - though I don't guarantee to help.

Regards,
Nick Maclaren.

Carsten Haese
Guest
Posts: n/a

 01-09-2007
On Tue, 2007-01-09 at 11:38 +0000, Nick Maclaren wrote:
> |> Rory Campbell-Lange wrote:
> |>
> |> > Is using the decimal module the best way around this? (I'm
> |> > expecting the first sum to match the second). It seems
> |> > anachronistic that decimal takes strings as input, though.
>
> As Dan Bishop says, probably not. The introduction to the decimal
> module makes exaggerated claims of accuracy, amounting to propaganda.
> It is numerically no better than binary, and has some advantages
> and some disadvantages.

Please elaborate. Which exaggerated claims are made, and how is decimal
no better than binary?

-Carsten

Tim Peters
Guest
Posts: n/a

 01-09-2007
[Rory Campbell-Lange]
>>> Is using the decimal module the best way around this? (I'm
>>> expecting the first sum to match the second). It seems
>>> anachronistic that decimal takes strings as input, though.

[Nick Maclaren]
>> As Dan Bishop says, probably not. The introduction to the decimal
>> module makes exaggerated claims of accuracy, amounting to propaganda.
>> It is numerically no better than binary, and has some advantages
>> and some disadvantages.

[Carsten Haese]
> Please elaborate. Which exaggerated claims are made,

Well, just about any technical statement can be misleading if not qualified
to such an extent that the only people who can still understand it knew it
to begin with <0.8 wink>. The most dubious statement here to my eyes is
the intro's "exactness carries over into arithmetic". It takes a world of
additional words to explain exactly what it is about the example given (0.1
+ 0.1 + 0.1 - 0.3 = 0 exactly in decimal fp, but not in binary fp) that
does, and does not, generalize. Roughly, it does generalize to one
important real-life use-case: adding and subtracting any number of decimal
quantities delivers the exact decimal result, /provided/ that precision is
set high enough that no rounding occurs.
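That use-case can be sketched directly (a minimal example; the default 28-digit precision is ample here):

```python
from decimal import Decimal

# The decimal inputs are represented exactly, so their sums and
# differences are exact too: no rounding occurs at this precision.
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3'))  # 0.0

# The same expression in binary floating point is not exactly zero.
print(0.1 + 0.1 + 0.1 - 0.3)  # 5.551115123125783e-17
```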

> and how is decimal no better than binary?

Basically, they both lose info when rounding does occur. For example,

>>> import decimal
>>> 1 / decimal.Decimal(3)
Decimal("0.3333333333333333333333333333")
>>> _ * 3
Decimal("0.9999999999999999999999999999")

That is, (1/3)*3 != 1 in decimal. The reason why is obvious "by eyeball",
but only because you have a lifetime of experience working in base 10. A
bit ironically, the rounding in binary just happens to be such that (1/3)*3
does equal 1:

>>> 1./3
0.33333333333333331
>>> _ * 3
1.0

It's not just * and /. The real thing at work in the 0.1 + 0.1 + 0.1 - 0.3
example is representation error, not sloppy +/-: 0.1 and 0.3 can't be
/represented/ exactly as binary floats to begin with. Much the same can
happen if you instead use inputs exactly representable in base 2 but
not in base 10 (and while there are none such if precision is infinite,
precision isn't infinite):

>>> x = decimal.Decimal(1) / 2**90
>>> print x
8.077935669463160887416100508E-28
>>> print x + x + x - 3*x # not exactly 0
1E-54

The same in binary f.p. is exact, because 1./2**90 is exactly representable
in binary fp:

>>> x = 1. / 2**90
>>> print x # this displays an inexact decimal approx. to 1./2**90
8.07793566946e-028
>>> print x + x + x - 3*x # but the binary arithmetic is exact
0.0

If you boost decimal's precision high enough, then this specific example is
also exact using decimal; but with the default precision of 28, 1./2**90
can't be represented exactly in decimal to begin with; e.g.,

>>> decimal.Decimal(1) / 2**90 * 2**90
Decimal("0.9999999999999999999999999999")

All forms of fp are subject to representation and rounding errors. The
biggest practical difference here is that the `decimal` module is not
subject to representation error for "natural" decimal quantities, provided
precision is set high enough to retain all the input digits. That's worth
something to many apps, and is the whole ball of wax for some apps -- but
leaves a world of possible "surprises" nevertheless.
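For instance, raising the context precision makes the 2**90 example above exact. A sketch: 1/2**90 equals 5**90/10**90, which needs 63 significant decimal digits, so 100 digits suffice.

```python
import decimal
from decimal import Decimal

# At the default 28 digits, 1/2**90 is rounded on the way in,
# so multiplying back by 2**90 does not recover 1 exactly.
decimal.getcontext().prec = 28
print(Decimal(1) / 2**90 * 2**90 == 1)   # False

# With 100 digits, both the division and the multiplication are exact.
decimal.getcontext().prec = 100
print(Decimal(1) / 2**90 * 2**90 == 1)   # True
```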

Bjoern Schliessmann
Guest
Posts: n/a

 01-09-2007
Nick Maclaren wrote:

> No, don't. That is about another matter entirely,

It isn't.

Regards,

Björn

--
BOFH excuse #366:

ATM cell has no roaming feature turned on, notebooks can't connect

Nick Maclaren
Guest
Posts: n/a

 01-09-2007

In article <Xns98B384E126F4Ctim111one@216.196.97.136>,
Tim Peters <(E-Mail Removed)> writes:
|>
|> Well, just about any technical statement can be misleading if not qualified
|> to such an extent that the only people who can still understand it knew it
|> to begin with <0.8 wink>. The most dubious statement here to my eyes is
|> the intro's "exactness carries over into arithmetic". It takes a world of
|> additional words to explain exactly what it is about the example given (0.1
|> + 0.1 + 0.1 - 0.3 = 0 exactly in decimal fp, but not in binary fp) that
|> does, and does not, generalize. Roughly, it does generalize to one
|> important real-life use-case: adding and subtracting any number of decimal
|> quantities delivers the exact decimal result, /provided/ that precision is
|> set high enough that no rounding occurs.

Precisely. There is one other such statement, too: "Decimal numbers can
be represented exactly." What it MEANS is that numbers with a short
representation in decimal can be represented exactly in decimal, which
is tautologous, but many people READ it to say that numbers that they
are interested in can be represented exactly in decimal. Such as pi,
sqrt(2), 1/3 and so on ....

|> > and how is decimal no better than binary?
|>
|> Basically, they both lose info when rounding does occur. For example,

Yes, but there are two ways in which binary is superior. Let's skip
the superior 'smoothness', as being too arcane an issue for this group,
and deal with the other. In binary, calculating the mid-point of two
numbers (a very common operation) is guaranteed to be within the range
defined by those numbers, or to over/under-flow.

Neither (x+y)/2.0 nor (x/2.0+y/2.0) is necessarily within the range
(x,y) in decimal, even for the most respectable values of x and y.
This was a MAJOR "gotcha" in the days before binary became standard,
and will clearly return with decimal.
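A small illustration, using three-digit precision to exaggerate the effect (a sketch; the particular values are arbitrary):

```python
import decimal
from decimal import Decimal

# Three significant digits make the rounding easy to see.
decimal.getcontext().prec = 3

x = Decimal('0.516')
y = Decimal('0.518')

# x + y = 1.034 rounds to 1.03, so the computed "midpoint" is 0.515,
# which lies below both endpoints.
mid = (x + y) / 2
print(mid)            # 0.515
print(x <= mid <= y)  # False
```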

Regards,
Nick Maclaren.

Terry Reedy
Guest
Posts: n/a

 01-09-2007

"Carsten Haese" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
| On Tue, 2007-01-09 at 11:38 +0000, Nick Maclaren wrote:
| > As Dan Bishop says, probably not. The introduction to the decimal
| > module makes exaggerated claims of accuracy, amounting to propaganda.
| > It is numerically no better than binary, and has some advantages
| > and some disadvantages.
|
| Please elaborate. Which exaggerated claims are made, and how is decimal
| no better than binary?

As to the latter question: calculating with decimals instead of binaries
eliminates conversion errors introduced when one has *exact* decimal
inputs, such as in financial calculations (which were the motivating use
case for the decimal module). But it does not eliminate errors inherent in
approximating reals with (a limited set of) rationals. Nor does it
eliminate errors inherent in approximation algorithms (such as using a
finite number of terms of an infinite series).
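A sketch of that financial use-case: summing one hundred one-cent amounts.

```python
from decimal import Decimal

# 0.01 has no exact binary representation, and the error accumulates.
float_total = sum(0.01 for _ in range(100))
print(float_total == 1.0)                  # False

# The exact decimal inputs stay exact under addition.
decimal_total = sum(Decimal('0.01') for _ in range(100))
print(decimal_total == Decimal('1.00'))    # True
```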

Terry Jan Reedy