Velocity Reviews > strange sum behavior

# strange sum behavior

simon place

 02-22-2004
While playing about with inheriting from list (to be able to cache macro
properties) I noticed this: the rate of summing a list seems to be worse than
linear?

It's nearly 3 times faster to sum the sums of smaller lists?

from time import clock

l=range(0,1000000)

start=clock()
print sum(l),
print clock()-start

# now sum a list of the sums of 1000 slices.
start=clock()
print sum([sum(l[x:x+1000]) for x in xrange(0,len(l),1000)]),
print clock()-start

# repeat
start=clock()
print sum(l),
print clock()-start

# output from 500MHz AMD K6-2, 64MB, Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
#499999500000 1.89348044721
#499999500000 0.731985115406
#499999500000 1.90818149818
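As a sanity check, the chunked sum (with the slice written out as `l[x:x+1000]`) returns exactly the same total as the flat sum; only the timing differs. A minimal sketch, in Python 3 syntax:

```python
# Flat sum vs. sum-of-chunk-sums: same value, different timing profile.
l = list(range(1000000))

flat = sum(l)
chunked = sum(sum(l[x:x + 1000]) for x in range(0, len(l), 1000))

print(flat, chunked)  # both are 499999500000
```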

jepler80@unpythonic.net

 02-22-2004
I get similar results from
Python 2.4a0 (#1, Feb 19 2004, 17:19:10)
[GCC 3.3.2 20031022 (Red Hat Linux 3.3.2-1)] on linux2

$ timeit -s 'l=range(0,1000000)' 'sum(l)'
10 loops, best of 3: 985 msec per loop
$ timeit -s 'l=range(0,1000000)' 'sum([sum(l[x:x+1000]) for x in xrange(0,len(l),1000)])'
10 loops, best of 3: 352 msec per loop

I suspect the reason is that the top loop has many operations on Python
longs (unbounded ints), while the bottom loop has relatively few. Here's a
use of sum() that doesn't switch from ints to longs:
$ timeit -s 'l=[0]*1000000' 'sum(l)'
10 loops, best of 3: 241 msec per loop
$ timeit -s 'l=[0]*1000000' 'sum([sum(l[x:x+1000]) for x in xrange(0,len(l),1000)])'
10 loops, best of 3: 285 msec per loop

... the nested loop is slower, as it should be.

Jeff

Josiah Carlson

 02-22-2004
simon place wrote:

> while playing about with inheriting from list to be able to cache macro
> properties i noticed this, the rate of summing a list seems to be over
> linear?
>
> its nearly 3 times faster to sum the sums of smaller lists?
>
>
> from time import clock
>
> l=range(0,1000000)
>
> start=clock()
> print sum(l),
> print clock()-start
>
> # now sum a list of the sums of 1000 slices.
> start=clock()
> print sum([sum(l[x:x+1000]) for x in xrange(0,len(l),1000)]),
> print clock()-start
>
> # repeat
> start=clock()
> print sum(l),
> print clock()-start
>
> # output from 500MHz AMD K6-2, 64MB, Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
> #499999500000 1.89348044721
> #499999500000 0.731985115406
> #499999500000 1.90818149818

You are confusing the behavior of summing long integers with that of
summing integers.

When you sum from 0 to 65,535, your running total (2,147,450,880) is
just under sys.maxint (on 32-bit machines). Add 65,536 and the total
(2,147,516,416) goes over sys.maxint, so it becomes a Python long.
Adding the next number to the running total is then no longer a single
add instruction on the processor; it takes multiple instructions. If
you check the implementation, Python ends up doing manual adds and
carries on the 15-bit 'digits' (stored in unsigned C shorts) that make
up a Python long.
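The exact crossover point is easy to locate by brute force. A minimal sketch in Python 3 syntax (Python 3 no longer has sys.maxint, so the 32-bit value is hard-coded here):

```python
# sys.maxint on a 32-bit Python 2 build; Python 3 ints are unbounded,
# so we hard-code the limit to find where the running total crosses it.
MAXINT_32 = 2**31 - 1

total = 0
for k in range(1000000):
    total += k
    if total > MAXINT_32:
        break

# Adding 65536 is the first addition whose result exceeds sys.maxint:
print(k, total)  # -> 65536 2147516416
```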

When you break the list into blocks of 1000, the largest inner sum is for
999,000 to 999,999, which is 999,499,500; you will notice this is smaller
than sys.maxint (on 32-bit machines). At worst the outer sum does 1000
Python long additions, compared with 934,464 long additions in your
original method (that factor of roughly 934 times as many long additions
is crazy).
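These counts can be reproduced by tallying how many additions push the running total above the 32-bit sys.maxint. A sketch in Python 3 syntax (`count_big_adds` is an illustrative helper, not from the original posts):

```python
MAXINT_32 = 2**31 - 1  # sys.maxint on a 32-bit Python 2 build


def count_big_adds(values):
    """Count additions whose result exceeds a 32-bit sys.maxint --
    i.e. additions Python 2 would have done on longs, not ints."""
    total, big = 0, 0
    for v in values:
        total += v
        if total > MAXINT_32:
            big += 1
    return big


l = list(range(1000000))
partials = [sum(l[x:x + 1000]) for x in range(0, len(l), 1000)]

print(count_big_adds(l))         # -> 934464 long additions in the flat sum
print(count_big_adds(partials))  # -> 935 in the outer sum over 1000 partials
```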

The fact that it is /only/ 3 times slower shows us that Python's long
integer implementation for smaller long integers works pretty damn well.

- Josiah