Velocity Reviews > Insert item before each element of a list

# Insert item before each element of a list

Hans Mulder
10-11-2012
On 9/10/12 04:39:28, rusi wrote:
> On Oct 9, 7:34 am, rusi <(E-Mail Removed)> wrote:
>> How about a 2-paren version?
>>
>>>>> x = [1,2,3]
>>>>> reduce(operator.add, [['insert', a] for a in x])

>>
>> ['insert', 1, 'insert', 2, 'insert', 3]

>
> Or if one prefers the different parens on the other side:
>
>>>> reduce(operator.add, (['insert', a] for a in x))

> ['insert', 1, 'insert', 2, 'insert', 3]

Or, if you don't want to import the operator module:

sum((['insert', a] for a in x), [])
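For reference, a quick check of that one-liner (a sketch; `sum` with a `[]` start concatenates the two-element lists):

```python
x = [1, 2, 3]
# sum() with start=[] repeatedly adds (concatenates) the small lists.
result = sum((['insert', a] for a in x), [])
print(result)  # ['insert', 1, 'insert', 2, 'insert', 3]
```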

-- HansM

Terry Reedy
10-11-2012
On 10/11/2012 6:21 PM, Hans Mulder wrote:
> On 9/10/12 04:39:28, rusi wrote:
>> On Oct 9, 7:34 am, rusi <(E-Mail Removed)> wrote:
>>> How about a 2-paren version?
>>>
>>>>>> x = [1,2,3]
>>>>>> reduce(operator.add, [['insert', a] for a in x])
>>>
>>> ['insert', 1, 'insert', 2, 'insert', 3]

>>
>> Or if one prefers the different parens on the other side:
>>
>>>>> reduce(operator.add, (['insert', a] for a in x))

>> ['insert', 1, 'insert', 2, 'insert', 3]

>
> Or, if you don't want to import the operator module:
>
> sum((['insert', a] for a in x), [])

All of the solutions based on adding (concatenating) lists create an
unneeded temporary list for each addition except the last and run in
O(n**2) time. Starting with one list and appending or extending (which
amounts to two appends per item here) is the 'proper' approach to get an
O(n) algorithm.

This does not matter for n = 3, but for n = 10000 it would.

expanded = []
expand = expanded.append
for item in source:
    expand('insert')
    expand(item)

is hard to beat for clarity and time.

expanded = []
expand = expanded.extend
for item in source:
    expand(['insert', item])

might be faster if creating the list is faster than the second expand
call. Note that a typical lisp-like version would recursively traverse
source to nil and build expanded from tail to head by using the
equivalent of

return ['insert', item] + expanded

With Lisp's linked lists that prepend is O(1) per item; a literal Python
translation would copy the growing tail at each step, so it would not
stay linear.
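A direct Python rendering of that lisp-style recursion might look like this (a sketch, not from the thread; it is illustrative only, since it copies the tail at each step and will hit Python's recursion limit for lists of roughly a thousand items or more):

```python
def interleave_rec(source):
    # Recurse to the end of the list, then build the result back to
    # front, prepending a two-element list per step (like lisp cons).
    if not source:
        return []
    return ['insert', source[0]] + interleave_rec(source[1:])

print(interleave_rec([1, 2, 3]))  # ['insert', 1, 'insert', 2, 'insert', 3]
```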

def interleave(source):
    for item in source:
        yield 'insert'
        yield item

list(interleave(source))

might also be faster since it avoids the repeated Python-level method
calls. I prefer it anyway as modern, idiomatic Python in that it
separates interleaving from creating a list. In many situations,
creating a list from the interleaved stream will not be needed.
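Not mentioned in the thread, but the stdlib can do the same linear-time flattening without a hand-written generator; a sketch using `itertools.chain.from_iterable`:

```python
from itertools import chain

source = [1, 2, 3]
# chain.from_iterable flattens the two-element lists in one linear pass.
expanded = list(chain.from_iterable(['insert', item] for item in source))
print(expanded)  # ['insert', 1, 'insert', 2, 'insert', 3]
```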

--
Terry Jan Reedy

Steven D'Aprano
10-12-2012
On Fri, 12 Oct 2012 00:21:57 +0200, Hans Mulder wrote:

> On 9/10/12 04:39:28, rusi wrote:
>> On Oct 9, 7:34 am, rusi <(E-Mail Removed)> wrote:
>>> How about a 2-paren version?
>>>
>>>>>> x = [1,2,3]
>>>>>> reduce(operator.add, [['insert', a] for a in x])
>>>
>>> ['insert', 1, 'insert', 2, 'insert', 3]

>>
>> Or if one prefers the different parens on the other side:
>>
>>>>> reduce(operator.add, (['insert', a] for a in x))

>> ['insert', 1, 'insert', 2, 'insert', 3]

>
> Or, if you don't want to import the operator module:
>
> sum((['insert', a] for a in x), [])

Which is also O(n**2), like the reduce solution above.

That means that it will seem perfectly fine when you test it using a
hundred or so items, then some day you'll pass it a list with ten million
items and it will take 36 hours to complete.

I'm serious by the way. By my tests, increasing the number of items in
the list by a factor of ten increases the time taken by between 30 and
300 times.
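The scaling is easy to see with a rough timeit sketch (my own, illustrative; absolute numbers will vary by machine, but the quadratic version's time should grow much faster than the linear one's as n increases):

```python
from timeit import timeit

def quadratic(x):
    # sum() re-copies the accumulated list on every addition: O(n**2).
    return sum((['insert', a] for a in x), [])

def linear(x):
    # Appending to one list does O(1) amortized work per element: O(n).
    out = []
    append = out.append
    for item in x:
        append('insert')
        append(item)
    return out

for n in (1000, 10000):
    data = list(range(n))
    print(n,
          timeit(lambda: quadratic(data), number=5),
          timeit(lambda: linear(data), number=5))
```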

--
Steven