Delete all items in the list

 
 
Steven D'Aprano
 
      02-28-2009
Chris Rebert wrote:

> Obviously that equivalence is true, but in this case I'm emphasizing
> that it's even worse than that when constant factors are taken into
> account. Big-O is nice in the abstract, but in the real-world those
> constant factors can matter.
>
> In pure big-O, it is indeed O(M*N) vs. O(N)
> Including constant factors, the performance is roughly 2*M*N*X [X =
> overhead of remove()] vs. N, which makes the badness of the algorithm
> all the more apparent.



So, what you're saying is that if you include a constant factor on one side
of the comparison, and neglect the constant factors on the other, the side
with the constant factor is worse. Well, duh.

I forget which specific O(N) algorithm you're referring to, but I think it
was probably a list comp:

L[:] = [x for x in L if x != 'a']

As a wise man once said *wink*, "Big-O is nice in the abstract, but in the
real-world those constant factors can matter". This is very true... in this
case, the list comp requires creating a new list, potentially resizing it
an arbitrary number of times, then possibly garbage collecting the previous
contents of L. These operations are not free, and for truly huge lists
requiring paging, it could get very expensive.
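For comparison, the remove()-based approach being measured against is presumably something along these lines (a minimal sketch; the exact loop isn't quoted in this post):

L = ['a', 'b', 'a', 'c', 'a']
# Presumed shape of the while...remove approach.  Each 'in' test and each
# remove() call scans the list and shifts the tail, so with M matches in
# N items this is roughly O(M*N) -- but it allocates no new list.
while 'a' in L:
    L.remove('a')
print(L)   # ['b', 'c']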

Big Oh notation is good for estimating asymptotic behaviour, which means it
is good for predicting how an algorithm will scale as the size of the input
increases. It is useless for predicting how fast that algorithm will run,
since the actual speed depends on those constant factors that Big Oh
neglects. That's not a criticism of Big Oh -- predicting execution speed is
not what it is for.

For what it's worth, for very small lists, the while...remove algorithm is
actually faster than using a list comprehension, despite being O(M*N)
versus O(N), at least according to my tests. It's *trivially* faster, but
if you're interested in (premature) micro-optimisation, you might save one
or two microseconds by using a while loop for short lists (say, around
N<=12 or so according to my tests), and swapping to a list comp only for
larger input.
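Something like the following is one way to reproduce that comparison with
timeit (a rough sketch; purge_loop and purge_comp are names I'm using here,
and the crossover point will vary by machine and Python version):

import timeit

def purge_loop(L):
    # repeatedly remove 'a'; each membership test and remove() scans the list
    while 'a' in L:
        L.remove('a')

def purge_comp(L):
    # rebuild the list without 'a' in a single pass
    L[:] = [x for x in L if x != 'a']

for n in (4, 12, 100, 1000):
    data = ['a' if i % 2 else 'b' for i in range(n)]
    setup = 'from __main__ import purge_loop, purge_comp, data'
    # copy inside the timed statement so every run starts from a fresh list;
    # the copy cost is identical for both candidates
    t_loop = timeit.timeit('purge_loop(data[:])', setup, number=1000)
    t_comp = timeit.timeit('purge_comp(data[:])', setup, number=1000)
    print("N=%d: while/remove %.4fs, list comp %.4fs" % (n, t_loop, t_comp))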

Now, this sounds silly, and in fact it is silly for the specific problem
we're discussing, but as a general technique it is very powerful. For
instance, until Python 2.3, list.sort() used a low-overhead but O(N**2)
insertion sort for small lists, only kicking off a high-overhead but O(N)
sample sort above a certain size. The current timsort does something
similar, only faster and more complicated. If I understand correctly, it
uses insertion sort to make up short runs of increasing or decreasing
values, and then efficiently merges them.
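To illustrate the general size-cutoff technique (and only the technique --
this is not how CPython's list.sort() is actually implemented), a hybrid
sort might look roughly like this, with the cutoff chosen by benchmarking:

CUTOFF = 12   # in practice this would be tuned empirically

def insertion_sort(items):
    # O(N**2) in the worst case, but very little overhead per element
    for i in range(1, len(items)):
        value = items[i]
        j = i
        while j > 0 and items[j - 1] > value:
            items[j] = items[j - 1]
            j -= 1
        items[j] = value
    return items

def hybrid_sort(items):
    # returns a sorted copy of items
    if len(items) <= CUTOFF:
        return insertion_sort(list(items))
    mid = len(items) // 2
    left = hybrid_sort(items[:mid])
    right = hybrid_sort(items[mid:])
    # merge the two sorted halves: O(N) work per level, O(N log N) overall
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(hybrid_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]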


--
Steven

 
Terry Reedy
 
      02-28-2009
Steven D'Aprano wrote:

> Big Oh notation is good for estimating asymptotic behaviour, which means it
> is good for predicting how an algorithm will scale as the size of the input
> increases. It is useless for predicting how fast that algorithm will run,
> since the actual speed depends on those constant factors that Big Oh
> neglects. That's not a criticism of Big Oh -- predicting execution speed is
> not what it is for.
>
> For what it's worth, for very small lists, the while...remove algorithm is
> actually faster than using a list comprehension, despite being O(M*N)
> versus O(N), at least according to my tests. It's *trivially* faster, but
> if you're interested in (premature) micro-optimisation, you might save one
> or two microseconds by using a while loop for short lists (say, around
> N<=12 or so according to my tests), and swapping to a list comp only for
> larger input.
>
> Now, this sounds silly, and in fact it is silly for the specific problem
> we're discussing, but as a general technique it is very powerful. For
> instance, until Python 2.3, list.sort() used a low-overhead but O(N**2)
> insertion sort for small lists, only kicking off a high-overhead but O(N)


O(NlogN)

> sample sort above a certain size. The current timsort does something
> similar, only faster and more complicated. If I understand correctly, it
> uses insertion sort to make up short runs of increasing or decreasing
> values, and then efficiently merges them.


It uses binary insertion sort to build runs of 64 items (a length
determined by empirical testing). The 'binary' part means that it uses O(logN) binary
search rather than O(n) linear search to find the insertion point for
each of N items, so that finding insertion points uses only O(NlogN)
comparisons (which are relatively expensive). Each of the N insertions
is done with a single memmove() call, which typically is relatively
fast. So although binary insertion sort is still O(N*N), the
coefficient of the N*N term in the time formula is relatively small.
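In pure Python terms the idea is roughly this (a sketch only; the real
implementation is in C and works in place on a slice of the list):

import bisect

def binary_insertion_sort(items):
    result = []
    for value in items:
        # O(log N) comparisons to find the insertion point; bisect_right
        # keeps equal items in their original order (stability)
        pos = bisect.bisect_right(result, value)
        # the insert itself shifts the tail in one low-level move, so the
        # O(N) part is cheap data movement rather than comparisons
        result.insert(pos, value)
    return result

print(binary_insertion_sort([4, 1, 3, 1, 2]))   # [1, 1, 2, 3, 4]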

The other advantages of timsort are that it exploits existing order in a
list and that it preserves the order of items that compare equal (i.e. the
sort is stable).

Terry Jan Reedy


 