a.index(float('nan')) fails

 
 
Steven D'Aprano
10-26-2012
On Fri, 26 Oct 2012 03:54:02 -0400, Terry Reedy wrote:

> On 10/25/2012 10:44 PM, Steven D'Aprano wrote:
>> On Thu, 25 Oct 2012 22:04:52 -0400, Terry Reedy wrote:
>>
>>> It is a consequence of the following, which some people (but not all)
>>> believe is mandated by the IEEE standard.
>>>
>>> >>> nan = float('nan')
>>> >>> nan is nan
>>> True

>>
>> The IEEE 754 standard says nothing about object identity. It only
>> discusses value equality.
>>
>>> >>> nan == nan
>>> False

>>
>> IEEE 754 states that all NANs compare unequal to everything, including
>> NANs with the same bit value. It doesn't make an exception for
>> comparisons with itself.
>>
>> I'm not entirely sure why you suggest that there is an argument about
>> what IEEE 754 says about NANs.

>
> I did not do so.


I'm afraid you did. Your quote is shown above, and repeated here:

"... some people (but not all) believe is mandated by the IEEE standard"

This suggests that there is a disagreement -- an argument -- about what
the IEEE standard mandates about NANs. I don't know why you think this
disagreement exists, or who these "some people" are. The standard is not
ambiguous, and while it is not readily available at no cost, it is widely
described by many secondary sources.

Every NAN must compare unequal to every float, including itself.
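
A quick session demonstrates it (any IEEE-conforming implementation of
Python floats behaves the same way):

>>> nan = float('nan')
>>> nan == nan
False
>>> nan != nan
True
>>> nan < 0.0, nan > 0.0, nan == 0.0
(False, False, False)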


> There has been disagreement about whether the standard mandates that
> Python behave the way it does. That is a fact, but I have no interest in
> discussing the issue.


I'm not entirely sure which behaviour of Python you are referring to
here. If you choose not to reply, of course I can't force you to. It's
your right to make ambiguous statements and then refuse to clarify what
you are talking about.

If you are referring to *identity comparisons*, the IEEE 754 says nothing
about object identity, so it has no bearing on Python's `is` operator.

If you are referring to the fact that `nan != nan` in Python, that is
mandated by the IEEE 754 standard. I can't imagine who maintains that the
standard doesn't mandate that; as I said, the disagreement that I have
seen is whether or not to follow the standard, not on what the standard
says.

If you are referring to something else, I don't know what it is.



--
Steven
 
 
 
 
 
Steven D'Aprano
10-26-2012
On Fri, 26 Oct 2012 04:00:03 -0400, Terry Reedy wrote:

> On 10/25/2012 10:19 PM, MRAB wrote:


>> In summary, .index() looks for an item which is equal to its argument,
>> but it's a feature of NaN (as defined by the standard) that it doesn't
>> equal NaN, therefore .index() will never find it.

>
> Except that it *does* find the particular nan object that is in the
> collection. So nan in collection and list.index(nan) look for the nan by
> identity, not equality.


So it does. I made the same mistake as MRAB, thank you for the correction.



> This inconsistency is an intentional decision to
> not propagate the insanity of nan != nan to Python collections.


That's a value judgement about NANs which is not shared by everyone.

Quite frankly, I consider it an ignorant opinion about NANs, despite what
Bertrand Meyer thinks. Reflexivity is an important property, but it is
not the only important property and it is not even the most important
property of numbers. There are far worse problems with floats than the
non-reflexivity of NANs.

Since it is impossible to have a fixed-size numeric type that satisfies
*all* of the properties of real numbers, some properties must be broken.
I can only imagine that the reason Meyer, and presumably you, think that
the loss of reflexivity is more "insane" than the other violations of
floating point numbers is due to unfamiliarity. (And note that I said
*numbers*, not NANs.)

Anyone who has used a pocket calculator will be used to floating point
calculations being wrong, so much so that most people don't even think
about it. They just expect numeric calculations to be off by a little,
and don't give it any further thought. But NANs freak them out because
they're different.

In real life, you are *much* more likely to run into these examples of
"insanity" of floats than to be troubled by NANs:

- associativity of addition is lost
- distributivity of multiplication is lost
- commutativity of addition is lost
- not all floats have an inverse

e.g.

(0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2

1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1

1/(1/49.0) != 49.0

Such violations of the rules of real arithmetic aren't even hard to find.
They're everywhere.
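
All four are easy to confirm at the interactive prompt:

>>> (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
True
>>> 1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2
True
>>> 1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1
True
>>> 1/(1/49.0) != 49.0
True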

In practical terms, those sorts of errors are *far* more significant in
computational mathematics than the loss of reflexivity. I can't think of
the last time I've cared that x is not necessarily equal to x in a
floating point calculation, but the types of errors shown above are
*constantly* affecting computations and leading to loss of precision or
even completely wrong answers.

Once NANs were introduced, keeping reflexivity would lead to even worse
situations than x != x. It would lead to nonsense identities like
log(-1) == log(-2); take antilogs and you get -1 == -2, and so 1 == 2.


--
Steven
 
 
 
 
 
MRAB
10-26-2012
On 2012-10-26 17:23, Steven D'Aprano wrote:
> On Fri, 26 Oct 2012 04:00:03 -0400, Terry Reedy wrote:
>
>> On 10/25/2012 10:19 PM, MRAB wrote:

>
>>> In summary, .index() looks for an item which is equal to its argument,
>>> but it's a feature of NaN (as defined by the standard) that it doesn't
>>> equal NaN, therefore .index() will never find it.

>>
>> Except that it *does* find the particular nan object that is in the
>> collection. So nan in collection and list.index(nan) look for the nan by
>> identity, not equality.

>
> So it does. I made the same mistake as MRAB, thank you for the correction.
>

[snip]
Yes, I forgot that Python checks for identity before checking for
equality:

>>> [float("nan")].index(float("nan"))

Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
[float("nan")].index(float("nan"))
ValueError: nan is not in list
>>> nan = float("nan")
>>> [nan].index(nan)

0

 
 
Chris Angelico
10-26-2012
On Sat, Oct 27, 2012 at 3:23 AM, Steven D'Aprano
<(E-Mail Removed)> wrote:
> In real life, you are *much* more likely to run into these examples of
> "insanity" of floats than to be troubled by NANs:
>
> - associativity of addition is lost
> - distributivity of multiplication is lost
> - commutativity of addition is lost
> - not all floats have an inverse
>
> e.g.
>
> (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
>
> 1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2
>
> 1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1
>
> 1/(1/49.0) != 49.0
>
> Such violations of the rules of real arithmetic aren't even hard to find.
> They're everywhere.


Actually, as I see it, there's only one principle to take note of: the
"HMS Pinafore Floating Point Rule"...

** Floating point expressions should never be tested for equality **
** What, never? **
** Well, hardly ever! **

The problem isn't with the associativity, it's with the equality
comparison. Replace "x == y" with "abs(x-y)<epsilon" for some epsilon
and all your statements fulfill people's expectations. (Possibly with
the exception of "1e10 + 0.1 + -1e10" as it's going to be hard for an
automated algorithm to pick a useful epsilon. But it still works.)
Ultimately, it's the old problem of significant digits. Usually it
only comes up with measured quantities, but this is ultimately the
same issue. Doing calculations to greater precision than the answer
warrants is fine, but when you come to compare, you effectively need
to round both values off to their actual precisions.
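
In code, the idea is just this (a sketch; the right epsilon is the
application's business, and the 1e-9 default below is an arbitrary
placeholder):

def approx_equal(x, y, epsilon=1e-9):
    # Epsilon comparison in place of ==.
    return abs(x - y) < epsilon

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))              # False
print(approx_equal((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3)))  # True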

ChrisA
 
 
Steven D'Aprano
10-26-2012
On Sat, 27 Oct 2012 03:45:46 +1100, Chris Angelico wrote:

> On Sat, Oct 27, 2012 at 3:23 AM, Steven D'Aprano
> <(E-Mail Removed)> wrote:
>> In real life, you are *much* more likely to run into these examples of
>> "insanity" of floats than to be troubled by NANs:
>>
>> - associativity of addition is lost
>> - distributivity of multiplication is lost
>> - commutativity of addition is lost
>> - not all floats have an inverse
>>
>> e.g.
>>
>> (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
>>
>> 1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2
>>
>> 1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1
>>
>> 1/(1/49.0) != 49.0
>>
>> Such violations of the rules of real arithmetic aren't even hard to
>> find. They're everywhere.

>
> Actually, as I see it, there's only one principle to take note of: the
> "HMS Pinafore Floating Point Rule"...
>
> ** Floating point expressions should never be tested for equality **
> ** What, never? **
> ** Well, hardly ever! **
>
> The problem isn't with the associativity, it's with the equality
> comparison. Replace "x == y" with "abs(x-y)<epsilon" for some epsilon
> and all your statements fulfill people's expectations.


O RLY?

Would you care to tell us which epsilon they should use?

Hint: *whatever* epsilon you pick, there will be cases where that is
either stupidly too small, stupidly too large, or one that degenerates to
float equality. And you may not be able to tell if you have one of those
cases or not.

Here's a concrete example for you:

What *single* value of epsilon should you pick such that the following
two expressions evaluate correctly?

sum([1e20, 0.1, -1e20, 0.1]*1000) == 200
sum([1e20, 99.9, -1e20, 0.1]*1000) != 200
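
Both sums collapse to the same float, which is the whole problem:

>>> sum([1e20, 0.1, -1e20, 0.1]*1000)
0.1
>>> sum([1e20, 99.9, -1e20, 0.1]*1000)
0.1

The exact answers are 200 and 100000 respectively, so an epsilon large
enough to rescue the first comparison ruins the second. (A compensated
summation does get the first one right: math.fsum of the first list
returns exactly 200.0.)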


The advice "never test floats for equality" is:

(1) pointless without a good way to decide what epsilon to use;

(2) sheer superstition since there are cases where testing floats for
equality is the right thing to do (although I note you dodged that bullet
with "hardly ever" *wink*);

and most importantly

(3) missing the point, since the violations of the rules of real-valued
mathematics still occur regardless of whether you explicitly test for
equality or not.

For instance, if you write:

result = a + (b + c)

some compilers may assume associativity and calculate (a + b) + c
instead. But that is not guaranteed to give the same result! (K&R allowed
C compilers to do that; the subsequent ANSI C standard prohibited
reordering, but in practice most C compilers provide a switch to allow it.)

A real-world example: Python's math.fsum is a high-precision summation
with error compensation, from the same family as the Kahan summation
algorithm (fsum itself uses Shewchuk's variant). Here's a
pseudo-code version:

http://en.wikipedia.org/wiki/Kahan_summation_algorithm

which includes the steps:

t = sum + y;
c = (t - sum) - y;

A little bit of algebra should tell you that c must equal zero.
Unfortunately, in this case algebra is wrong, because floats are not real
numbers. c is not necessarily zero.

An optimizing compiler, or an optimizing programmer, might very well
eliminate those calculations and so inadvertently eliminate the error
compensation. And not an equals sign in sight.
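
For illustration, here is that pseudo-code transcribed directly into
Python (a sketch of Kahan summation, not CPython's actual fsum
implementation):

import math

def kahan_sum(values):
    total = 0.0
    c = 0.0                    # running compensation for lost low bits
    for x in values:
        y = x - c              # apply the compensation to the next term
        t = total + y          # big + small: low-order bits of y die here
        c = (t - total) - y    # algebraically zero, but in floats it
                               # recovers exactly what was just lost
        total = t
    return total

data = [0.1] * 10**6
print(sum(data))               # plain sum: visibly off from 100000.0
print(kahan_sum(data))         # compensated: 100000.0, or within an ulp
print(math.fsum(data))         # correctly rounded: exactly 100000.0

Rewrite c = (t - total) - y to 0.0, as the algebra invites you to, and
kahan_sum degenerates into plain sum.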



--
Steven
 
 
Terry Reedy
10-26-2012
On 10/26/2012 11:26 AM, Steven D'Aprano wrote:
> On Fri, 26 Oct 2012 03:54:02 -0400, Terry Reedy wrote:
>
>> On 10/25/2012 10:44 PM, Steven D'Aprano wrote:
>>> On Thu, 25 Oct 2012 22:04:52 -0400, Terry Reedy wrote:
>>>
>>>> It is a consequence of the following, which some people (but not all)
>>>> believe is mandated by the IEEE standard.
>>>>
>>>> >>> nan = float('nan')
>>>> >>> nan is nan
>>>> True
>>>
>>> The IEEE 754 standard says nothing about object identity. It only
>>> discusses value equality.
>>>
>>>> >>> nan == nan
>>>> False
>>>
>>> IEEE 754 states that all NANs compare unequal to everything, including
>>> NANs with the same bit value. It doesn't make an exception for
>>> comparisons with itself.
>>>
>>> I'm not entirely sure why you suggest that there is an argument about
>>> what IEEE 754 says about NANs.

>>
>> I did not do so.

>
> I'm afraid you did. Your quote is shown above, and repeated here:


The quote precedes and refers to Python code.

>
> "... some people (but not all) believe is mandated by the IEEE standard"
>
> This suggests that there is a disagreement -- an argument -- about what
> the IEEE standard mandates about NANs.


Disagreement about what Python should do has been expressed on the lists
and even on the tracker. There was one discussion on python-ideas within
the last month, another a year or so ago.

Python does not implement the full IEEE standard with signalling and
non-signalling nans and multiple bit patterns.

When a nan is put in a Python collection, it is in effect treated as if
it were equal to itself. See the discussion in
http://bugs.python.org/issue4296 including:

"I'm not sure that Python should be asked to guarantee anything more
than b == b returning False when b is a NaN. It wouldn't seem
unreasonable to consider behavior of nans in containers (sets, lists,
dicts) as undefined when it comes to equality and identity checks."



--
Terry Jan Reedy

 
 
Terry Reedy
10-26-2012
On 10/26/2012 12:23 PM, Steven D'Aprano wrote:
> On Fri, 26 Oct 2012 04:00:03 -0400, Terry Reedy wrote:


>> This inconsistency is an intentional decision to
>> not propagate the insanity of nan != nan to Python collections.

>
> That's a value judgement about NANs which is not shared by everyone.
>
> Quite frankly, I consider it an ignorant opinion about NANs, despite what
> Bertrand Meyer thinks. Reflexivity is an important property, but it is
> not the only important property and it is not even the most important
> property of numbers.


Reflexivity is one of the definitional properties of the mathematical
equality relation and of equivalence relations in general. It is
not specific to numbers. It is assumed by the concept and definition of
sets.
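
(For reference: an equivalence relation ~ on a set must be reflexive
(x ~ x for every x), symmetric (x ~ y implies y ~ x) and transitive
(x ~ y and y ~ z imply x ~ z). IEEE float equality breaks only the
first, and only for nans.)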

--
Terry Jan Reedy

 
 
Devin Jeanpierre
10-26-2012
On Fri, Oct 26, 2012 at 2:40 PM, Steven D'Aprano
<(E-Mail Removed)> wrote:
>> The problem isn't with the associativity, it's with the equality
>> comparison. Replace "x == y" with "abs(x-y)<epsilon" for some epsilon
>> and all your statements fulfill people's expectations.

>
> O RLY?
>
> Would you care to tell us which epsilon they should use?


I would assume some epsilon that bounds the error of their
computation. Which one to use would depend on the error propagation
their function incurs.

That said, I also disagree with the sentiment "all your statements
fulfill people's expectations". Comparing to within some epsilon of
each other may mean that results of mathematically unequal
expressions will be called equal because they
are very close to each other by accident. Unless perhaps completely
tight bounds on error can be achieved? I've never seen anyone do this,
but maybe it's reasonable.

> Hint: *whatever* epsilon you pick, there will be cases where that is
> either stupidly too small, stupidly too large, or one that degenerates to
> float equality. And you may not be able to tell if you have one of those
> cases or not.
>
> Here's a concrete example for you:
>
> What *single* value of epsilon should you pick such that the following
> two expressions evaluate correctly?
>
> sum([1e20, 0.1, -1e20, 0.1]*1000) == 200
> sum([1e20, 99.9, -1e20, 0.1]*1000) != 200


Some computations have unbounded error, such as computations where
catastrophic cancellation can occur. That doesn't mean all
computations do. For many computations, you can find a single epsilon
that will always return True for things that "should" be equal, but
aren't -- for example, squaring a number does no worse than tripling
the relative error, so if you square a number that was accurate to
within machine epsilon, and want to compare it to a constant, you can
compare with relative epsilon = 3*machine_epsilon.
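
A sketch of that recipe (the 3*eps figure is from the squaring example
above, not a general rule):

import sys

eps = sys.float_info.epsilon          # 2**-52 for IEEE doubles

def rel_close(x, y, rel_tol):
    # Relative comparison: scale the tolerance by the magnitudes.
    return abs(x - y) <= rel_tol * max(abs(x), abs(y))

x = 0.7 ** 2                          # squaring a value accurate to eps
print(x == 0.49)                      # False: off by about one ulp
print(rel_close(x, 0.49, 3 * eps))    # True under the 3*eps error bound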

I'm not sure how commonly this occurs in real life, because I'm not a
numerical programmer. All I know is that your example is good, but
shows a not-universally-applicable problem.

It is, however, still pretty applicable and worth noting, so I'm not
unhappy you did. For example, how large can the absolute error of the
sin function applied to a float be? Answer: as large as 2, and the
relative error can be arbitrarily large. (Reason: error scales with
the input, but the frequency of the sin function does not.)

(In case you can't tell, I've only studied this stuff as a student.)

-- Devin
 
 
Chris Angelico
10-27-2012
On Sat, Oct 27, 2012 at 5:40 AM, Steven D'Aprano
<(E-Mail Removed)> wrote:
> On Sat, 27 Oct 2012 03:45:46 +1100, Chris Angelico wrote:
>>
>> Actually, as I see it, there's only one principle to take note of: the
>> "HMS Pinafore Floating Point Rule"...
>>
>> ** Floating point expressions should never be tested for equality **
>> ** What, never? **
>> ** Well, hardly ever! **
>>
>> The problem isn't with the associativity, it's with the equality
>> comparison. Replace "x == y" with "abs(x-y)<epsilon" for some epsilon
>> and all your statements fulfill people's expectations.

>
> O RLY?
>
> Would you care to tell us which epsilon they should use?
>
> Hint: *whatever* epsilon you pick, there will be cases where that is
> either stupidly too small, stupidly too large, or one that degenerates to
> float equality. And you may not be able to tell if you have one of those
> cases or not.
>
> Here's a concrete example for you:
>
> What *single* value of epsilon should you pick such that the following
> two expressions evaluate correctly?
>
> sum([1e20, 0.1, -1e20, 0.1]*1000) == 200
> sum([1e20, 99.9, -1e20, 0.1]*1000) != 200


Your epsilon value needs to take into account the precisions of the
values involved, and each operation needs to modify the
precision/error value. That's how I was taught to do it in
mathematical calculations. Well, I was taught "significant digits",
counting decimal digits, and a computer would normally want to count
"bits of precision", but close enough.

So here's my heresy: When you add 1e20 and 0.1, the value should be
equal to the original 1e20 unless it has at least 21 significant
digits. Otherwise, you get stupidly precise answers, like in the old
anecdote about the age of a museum piece: It's 1001 years, 2 months,
and 3 days old, because I asked last year how old it was and it was a
thousand years old.
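
Binary doubles, with their roughly 16-17 significant decimal digits,
already behave exactly that way:

>>> 1e20 + 0.1 == 1e20
True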

Flame away!

ChrisA
 
 
Dennis Lee Bieber
10-27-2012
On 26 Oct 2012 16:23:51 GMT, Steven D'Aprano
<(E-Mail Removed)> declaimed the following in
gmane.comp.python.general:

> Anyone who has used a pocket calculator will be used to floating point
> calculations being wrong, so much so that most people don't even think


I don't know about the more modern calculators, but at least up
through my HP-41CX, HP calculators didn't do (binary) "floating
point". They did a form of BCD with a fixed number of significant
/decimal/ digits, and did not keep a guard digit -- whereas my first
scientific calculator maintained one or two guard digits beyond the
displayable precision, which could be seen by subtracting the
displayed value from the computed value.
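
Python's decimal module offers the same flavour of arithmetic for
anyone who wants to experiment -- decimal floating point with a
user-chosen number of significant digits:

>>> from decimal import Decimal
>>> 0.1 + 0.2 == 0.3
False
>>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
True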
--
Wulfraed Dennis Lee Bieber AF6VN
HTTP://wlfraed.home.netcom.com/

 