
# Efficient implementation of deeply nested lists

Kay Schluehr

 01-19-2006
I want to manipulate a deeply nested list in a generic way at a
determined place. Given a sequence of constant objects a1, a2, ..., aN
and a variable x. Now construct a list from them recursively:

L = [a1, [a2, [...[aN, [x]]...]]]

The value of x is the only one to be changed. With each value of x a
new instance of L is needed so that we actually have a function:

L = lambda x: [a1, [a2, [...[aN, [x]]...]]]

I'm looking for an efficient implementation that does not recompute the
whole list each time. Any ideas?
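For reference, the naive construction I want to avoid looks like this (the helper name is mine):

```python
# Naive construction: rebuilds every level of L for each new value of x.
def make_L(consts, x):
    if not consts:
        return [x]
    return [consts[0], make_L(consts[1:], x)]

print(make_L(['a1', 'a2', 'a3'], 42))
# -> ['a1', ['a2', ['a3', [42]]]]
```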

Kay

Fuzzyman

 01-19-2006

Kay Schluehr wrote:
> I want to manipulate a deeply nested list in a generic way at a
> determined place. Given a sequence of constant objects a1, a2, ..., aN
> and a variable x. Now construct a list from them recursively:
>
> L = [a1, [a2, [....[aN, [x]]...]]
>
> The value of x is the only one to be changed. With each value of x a
> new instance of L is needed so that we actually have a function:
>
> L = lambda x: [a1, [a2, [....[aN, [x]]...]]
>
> I'm seeking for an efficient implementation that does not recompute the
> whole list each time. Any ideas?
>

Is the sequence a1 to aN fixed?

I assume you want a new list each time?

You can create a reference list once but keep a reference to the most
deeply nested part that contains x.

Each time you need a new copy, insert the new value of x and use
copy.deepcopy to produce your new list.

***WARNING UNTESTED***

def listmaker(consts, x):
    global finalref
    out = []
    if not consts:
        finalref = [x]
        return finalref
    entry = consts.pop(0)    # take from the front to preserve the order a1..aN
    out.append(entry)
    out.append(listmaker(consts, x))
    return out

listmaker recursively builds the list structure and creates a global
reference to the list holding x.

You can build this once as a 'generic' list.

When you need a copy, do:

del finalref[0]
finalref.append(newx)

newlist = copy.deepcopy(reference_list)

I hope that helps; it may need debugging.

Because lists are mutable, you can't avoid the copy operation - but it
is *probably* quicker than recomputing the list. You should test this,
as it might not be the case.
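Spelling the whole idea out as a tested sketch (all names are mine):

```python
import copy

def build_nested(consts, x):
    """Build [a1, [a2, [... [aN, [x]] ...]]] once, returning both the
    whole list and the innermost [x] cell so it can be swapped later."""
    innermost = [x]
    current = innermost
    for entry in reversed(consts):
        current = [entry, current]
    return current, innermost

reference_list, finalref = build_nested([1, 2, 3], None)

def copy_with(newx):
    finalref[0] = newx              # mutate the shared innermost cell
    return copy.deepcopy(reference_list)

print(copy_with(7))   # -> [1, [2, [3, [7]]]]
print(copy_with(9))   # -> [1, [2, [3, [9]]]]
```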

Fuzzyman
http://www.voidspace.org.uk/python/index.shtml

> Kay

Peter Otten

 01-19-2006
Kay Schluehr wrote:

> I want to manipulate a deeply nested list in a generic way at a
> determined place. Given a sequence of constant objects a1, a2, ..., aN
> and a variable x. Now construct a list from them recursively:
>
> L = [a1, [a2, [....[aN, [x]]...]]
>
> The value of x is the only one to be changed. With each value of x a
> new instance of L is needed so that we actually have a function:
>
> L = lambda x: [a1, [a2, [....[aN, [x]]...]]
>
> I'm seeking for an efficient implementation that does not recompute the
> whole list each time. Any ideas?

I'd say you are nesting the wrong way.

>>> def make_list(items):
...     return reduce(lambda *args: args, items)
...
>>> def update_value(items, value):
...     return items[0], value
...
>>> items = make_list(range(10))
>>> items
(((((((((0, 1), 2), 3), 4), 5), 6), 7), 8), 9)
>>> update_value(items, 42)
(((((((((0, 1), 2), 3), 4), 5), 6), 7), 8), 42)

Obviously I must be missing something. What?
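To make the point explicit: the nested prefix is an immutable tuple, so each "update" builds exactly one new two-element tuple and shares the rest by identity (script form of the session, nothing is copied):

```python
from functools import reduce

def make_list(items):
    # reduce with a varargs lambda pairs up the accumulator and each item:
    # 0, 1 -> (0, 1); (0, 1), 2 -> ((0, 1), 2); and so on.
    return reduce(lambda *args: args, items)

def update_value(items, value):
    return items[0], value

items = make_list(range(10))
updated = update_value(items, 42)

# The whole nested prefix is shared by identity -- no copy happens:
assert updated[0] is items[0]
assert updated[1] == 42
```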

Peter

Kay Schluehr

 01-19-2006

Peter Otten wrote:
> Kay Schluehr wrote:
>
> > I want to manipulate a deeply nested list in a generic way at a
> > determined place. Given a sequence of constant objects a1, a2, ..., aN
> > and a variable x. Now construct a list from them recursively:
> >
> > L = [a1, [a2, [....[aN, [x]]...]]
> >
> > The value of x is the only one to be changed. With each value of x a
> > new instance of L is needed so that we actually have a function:
> >
> > L = lambda x: [a1, [a2, [....[aN, [x]]...]]
> >
> > I'm seeking for an efficient implementation that does not recompute the
> > whole list each time. Any ideas?

>
> I'd say you are nesting the wrong way.

The structure of the sequence is *not* optional.

>
> >>> def make_list(items):
> ...     return reduce(lambda *args: args, items)
> ...
> >>> def update_value(items, value):
> ...     return items[0], value
> ...
> >>> items = make_list(range(10))
> >>> items
> (((((((((0, 1), 2), 3), 4), 5), 6), 7), 8), 9)
> >>> update_value(items, 42)
> (((((((((0, 1), 2), 3), 4), 5), 6), 7), 8), 42)
>
> Obviously I must be missing something. What?

Of course you can also take the original sequence and apply:

>>> L = list([a1, [a2, [...[aN, [x]]...]]])  # shallow copy
>>> L[0] = b

but the intent is to change the innermost element. I guess there is
no way to do it without a deepcopy / full list construction.
Kay

Peter Otten

 01-19-2006
Kay Schluehr wrote:

> The structure of the sequence is not optional. Of course you can also
> take the original sequence and apply:

>>>> L = list([a1, [a2, [...[aN, [x]]...]]])  # shallow copy
>>>> L[0] = b

I don't understand the example. Wouldn't you have to do

L[1][1]...[1] = b

to change x?

> but it is intended to change the most inner element. I guess there is
> no way in doing it without a deepcopy / full list construction.

Yes, if you cannot change the nesting I don't see a way to avoid deepcopy
either.
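For what it's worth, a rough way to time the two strategies against each other (deepcopy of a prepared list vs. a full rebuild; all names here are mine, and the numbers will depend on your data):

```python
import copy
import timeit

def build(consts, x):
    """Full reconstruction of [a1, [a2, [... [aN, [x]] ...]]]."""
    result = [x]
    for entry in reversed(consts):
        result = [entry, result]
    return result

consts = list(range(100))
reference = build(consts, None)

# Walk down to the innermost [x] cell once.
inner = reference
while len(inner) == 2 and isinstance(inner[1], list):
    inner = inner[1]

def via_deepcopy(newx):
    inner[0] = newx
    return copy.deepcopy(reference)

# Both strategies must agree before timing them.
assert via_deepcopy(42) == build(consts, 42)

t_rebuild = timeit.timeit(lambda: build(consts, 42), number=1000)
t_deepcopy = timeit.timeit(lambda: via_deepcopy(42), number=1000)
print("rebuild: %.4fs  deepcopy: %.4fs" % (t_rebuild, t_deepcopy))
```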

Peter

Bryan Olson

 01-19-2006
Kay Schluehr wrote:
> I want to manipulate a deeply nested list in a generic way at a
> determined place. Given a sequence of constant objects a1, a2, ..., aN
> and a variable x. Now construct a list from them recursively:
>
> L = [a1, [a2, [....[aN, [x]]...]]
>
> The value of x is the only one to be changed. With each value of x a
> new instance of L is needed so that we actually have a function:
>
> L = lambda x: [a1, [a2, [....[aN, [x]]...]]
>
> I'm seeking for an efficient implementation that does not recompute the
> whole list each time. Any ideas?

The problem is that all your lists/sublists have different
values. "recompute" doesn't really even apply. If x != y,
then no two sublists found within L(x) plus L(y) are equal
(excluding sublists that might be within x or y).
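Concretely (a small check; the helper names are mine): every sublist contains x at its deepest level, so none of them compare equal across two versions:

```python
def make_L(consts, x):
    if not consts:
        return [x]
    return [consts[0], make_L(consts[1:], x)]

def sublists(lst):
    """Yield lst and every nested sublist down to the innermost [x]."""
    yield lst
    if len(lst) == 2 and isinstance(lst[1], list):
        for sub in sublists(lst[1]):
            yield sub

Lx = make_L([1, 2, 3], 7)
Ly = make_L([1, 2, 3], 9)

# No sublist of L(7) equals any sublist of L(9):
assert all(a != b for a in sublists(Lx) for b in sublists(Ly))
```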

--
--Bryan

Bengt Richter

 01-20-2006
On 19 Jan 2006 01:19:06 -0800, "Kay Schluehr" <(E-Mail Removed)> wrote:

>I want to manipulate a deeply nested list in a generic way at a
>determined place. Given a sequence of constant objects a1, a2, ..., aN
>and a variable x. Now construct a list from them recursively:
>
>L = [a1, [a2, [....[aN, [x]]...]]
>
>The value of x is the only one to be changed. With each value of x a
>new instance of L is needed so that we actually have a function:
>
>L = lambda x: [a1, [a2, [....[aN, [x]]...]]
>
>I'm seeking for an efficient implementation that does not recompute the
>whole list each time. Any ideas?
>

Make a custom class factory that builds a class with a1..aN baked in and can be
instantiated with x, producing an object that behaves like L.

So the question in my mind is, how are you actually going to use these "L" things?

Regards,
Bengt Richter

Kay Schluehr

 01-20-2006
Bengt Richter wrote:
> On 19 Jan 2006 01:19:06 -0800, "Kay Schluehr" <(E-Mail Removed)> wrote:
>
> >I want to manipulate a deeply nested list in a generic way at a
> >determined place. Given a sequence of constant objects a1, a2, ..., aN
> >and a variable x. Now construct a list from them recursively:
> >
> >L = [a1, [a2, [....[aN, [x]]...]]
> >
> >The value of x is the only one to be changed. With each value of x a
> >new instance of L is needed so that we actually have a function:
> >
> >L = lambda x: [a1, [a2, [....[aN, [x]]...]]
> >
> >I'm seeking for an efficient implementation that does not recompute the
> >whole list each time. Any ideas?
> >

> Make a custom class factory that makes a class with a1..aN and can be instantiated
> with x, producing an object that behaves like L

Hi Bengt,

I made such an attempt already yesterday, but I disliked the result
because the __getitem__ method of the class encapsulating a1...aN
worked as an object factory and was as expensive as list creation. My
second try is much cheaper and shifts only references:

def BinList(flat):
    proto = None

    class _BinList(object):
        def __init__(self, x=None):
            self.x = x
            if proto:
                self.first = proto.first
                self.next = proto.next

        def _nest(self, lst):
            self.first = lst.pop(0)
            self.next = None
            if lst:
                self.next = _BinList()
                self.next._nest(lst)

        def __getitem__(self, i):
            if i == 0:
                return self.first
            elif i == 1:
                if self.next:
                    self.next.x = self.x
                    return self.next
                else:
                    return self.x
            else:
                raise IndexError, i

        def __repr__(self):
            if self.next is None:
                return "[%s, %s]" % (self.first, self.x)
            else:
                return "[%s, %s]" % (self[0], self[1])

    # instantiate prototype
    bl = _BinList()
    bl._nest(flat)
    proto = bl
    return _BinList

>>> A = BinList([1, 2, 3])
>>> a0 = A(7)
>>> a1 = A(9)
>>> a0
[1, [2, [3, 7]]]
>>> a1
[1, [2, [3, 9]]]

> So the question in my mind is, how are you actually going to use these "L" things?

The "L" things are fragments of listified Python parse trees.

>
> Regards,
> Bengt Richter

Regards too,
Kay