Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > Python > Tail recursion to while iteration in 2 easy steps


Tail recursion to while iteration in 2 easy steps

 
 
Terry Reedy
 
      10-01-2013
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.

Assume that you have already done the work of turning a body recursive
('not tail recursive') form like

def fact(n): return 1 if n <= 1 else n * fact(n-1)

into a tail recursion like

def fact(n, _fac=1):
    '''Return factorial for any count n.

    Users are not expected to override private parameter _fac.
    '''
    if n <= 1:
        return _fac
    else: # optional
        return fact(n-1, n * _fac)

(This conversion nearly always requires adding an accumulator parameter,
as done here.)

Turn this into while iteration with two easy steps.

1. If not already done, make if-else a statement, rather than an
expression, with the recursive call in the if branch. If nothing else,
just use 'not condition' to invert the condition.

def fact(n, _fac=1):
    if n > 1: # not n <= 1
        return fact(n-1, n * _fac)
    else: # optional
        return _fac

While contrary to what is often published, this order makes logical
sense. The recursive call means 'go to the top of this function with new
bindings for the parameters'. So put it closest to the top. The base
case means 'we are done, time to leave'. So put it at the bottom.

2. This order also makes the following substeps work:
2a. Replace 'if' with 'while'.
2b. Replace the recursive call with multiple assignment, using the
parameters as targets and the arguments as source.
For our example:

def fact(n, _fac=1):
    while n > 1:
        n, _fac = n-1, n * _fac
    else:
        return _fac

The proof of correctness for this conversion might argue that the
recursive form is equivalent to the following pseudo-Python:

def fact(n, _fac=1):
    label top
    if n > 1:
        n, _fac = n-1, n * _fac
        goto top
    else:
        return _fac

and that this is equivalent to the real Python with while.

At this point, you can pull the initialization of the private parameter
into the body, remove the else, and even add a value check.

def fact(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    while n > 1:
        n, fac = n-1, n * fac
    return fac
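As a quick sanity check, the iterative version (reproduced here so the snippet is self-contained) agrees with the stdlib factorial:

```python
import math

# The final iterative version from above, reproduced verbatim.
def fact(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    while n > 1:
        n, fac = n-1, n * fac
    return fac

# The iterative form matches math.factorial on small counts.
for i in range(10):
    assert fact(i) == math.factorial(i)
```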

With care, the multiple-assignment statement can usually be turned into
separate single-assignment statements for greater efficiency.

fac *= n
n -= 1

But note that the reverse order would be a bug. So I would not do this,
especially in more complicated situations, without having tests first.
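A sketch of why the order matters: the right-hand sides of `n, fac = n-1, n * fac` are both evaluated with the old `n`, so the split statements must use `n` before decrementing it. (Function names here are invented for the comparison.)

```python
def fact_ok(n):
    fac = 1
    while n > 1:
        fac *= n   # uses the old n, matching n * fac
        n -= 1
    return fac

def fact_buggy(n):
    fac = 1
    while n > 1:
        n -= 1     # decrements first ...
        fac *= n   # ... so this multiplies by n-1: wrong
    return fac

# fact_ok(5) == 120, but fact_buggy(5) == 24: the factor 5 is lost.
```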

--
Terry Jan Reedy

 
 
 
 
 
rusi
 
      10-02-2013
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
> Part of the reason that Python does not do tail call optimization is
> that turning tail recursion into while iteration is almost trivial, once
> you know the secret of the two easy steps. Here it is.


What happens for mutually tail-recursive code like this?

def isodd(x): return False if x == 0 else iseven(x - 1)
def iseven(x): return True if x == 0 else isodd(x - 1)

----------------
Notes:
1. I am not advocating writing odd/even like the above -- just giving a trivial example.
2. General mutual structural (what you call 'body') recursion is more general than mutual tail recursion, which in turn is more general than single tail recursion.
3. IOW, tail-recursion optimization is intermediate between the optimization you are showing and the maximally pessimistic implementation -- stack, activation records, etc. -- of programming languages.
4. There is a whole spectrum of such optimizations --
4a. E.g. a single-call structural recursion example does not need to push a return address on the stack; it only needs to store the recursion depth:

If zero, jump to the external return address; if > 0, jump to the internal return address.

4b. An example like quicksort, in which one call is a tail call, can be optimized with your optimization and the other, inner one with 4a above.

5. Programming language implementations don't do optimizations like TR not because these analyses are in themselves hard, but because inter-procedural analysis per se is a headache. For Python-like languages it's a much bigger headache because the recursive name must be dynamically fetched for every call.

6. [Personal pov] TR optimization is not really a big beef for me:
As you can see, it does not figure in the "All things every programmer should learn from FP" list of mine
http://blog.languager.org/2012/10/fu...ost-booty.html

7. Pedagogically, understanding general recursion is far more important than exploiting tail recursion
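One standard answer to the mutual-recursion question (not given in the posts above) is a trampoline: each function returns a zero-argument thunk instead of making the tail call, and a driver loop keeps invoking thunks until it gets a non-callable result. A minimal sketch, with the `trampoline` helper invented for illustration:

```python
def isodd(x):
    # Return a thunk instead of calling iseven directly.
    return False if x == 0 else (lambda: iseven(x - 1))

def iseven(x):
    return True if x == 0 else (lambda: isodd(x - 1))

def trampoline(fn, *args):
    """Run fn, then keep calling returned thunks until a plain value appears."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result
```

Because the driver is a plain while loop, `trampoline(isodd, 100000)` runs in constant stack space, with no recursion-limit worries.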
 
 
 
 
 
Alain Ketterlin
 
      10-02-2013
Terry Reedy <(E-Mail Removed)> writes:

> Part of the reason that Python does not do tail call optimization is
> that turning tail recursion into while iteration is almost trivial,
> once you know the secret of the two easy steps. Here it is.
>
> Assume that you have already done the work of turning a body recursive
> ('not tail recursive') form like
>
> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>
> into a tail recursion like

[...]

How do you know that evaluating "<=" or "*" didn't rebind the name
"fact" to something else? I think that's the main reason why Python
cannot apply any procedural optimization (even things like inlining are
impossible, or possible only under very conservative assumptions that
make it worthless).

-- Alain.

P/S: don't take me wrong; your explanation is excellent (and very useful
to python programmers). What I say is that it relies on assumptions that
do not apply to python.
 
 
rusi
 
      10-02-2013
On Wednesday, October 2, 2013 1:53:46 PM UTC+5:30, Alain Ketterlin wrote:
> rusi writes:
>
>
>
> > On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
> >> Part of the reason that Python does not do tail call optimization is
> >> that turning tail recursion into while iteration is almost trivial, once
> >> you know the secret of the two easy steps. Here it is.

>
> >
> > What happens for mutual tail recursive code like this
> >
> > def isodd(x) : return False if x==0 else iseven(x-1)
> > def iseven(x): return True if x==0 else isodd(x-1)

>
>
> It takes one step of inlining to make Terry's technique applicable.


Well I did say my example was trivial!
For a more nontrivial case consider the delta function of an FSM.

The cleanest way of doing it in C is to make one function with a goto label for each state and a goto for each transition

The same thing can be done in a TR optimized language like scheme by making each state into a function and each transition into a TR-call
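In plain Python, the same FSM can be written with neither gotos nor tail calls: each transition becomes one step of a loop. A toy sketch (the states and language are invented for illustration) recognizing binary strings with an even number of '1's:

```python
# Toy DFA: states 'even'/'odd' track the parity of '1' characters.
def delta(state, ch):
    if ch == '1':
        return 'odd' if state == 'even' else 'even'
    return state  # '0' leaves the state unchanged

def accepts(s):
    state = 'even'                 # start state
    for ch in s:                   # each transition is one loop step,
        state = delta(state, ch)   # not a tail call
    return state == 'even'         # 'even' is the accepting state
```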

>
>
>
> (Actually, out of curiosity, I tried this with gcc 4.6.3: the compiler
> does 16 levels of inlining, plus tail call optimization. The final code
> has no call.)


Good to know.
 
 
Mark Janssen
 
      10-02-2013
>> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>>
>> into a tail recursion like

> [...]
>
> How do know that either "<=" or "*" didn't rebind the name "fact" to
> something else? I think that's the main reason why python cannot apply
> any procedural optimization (even things like inlining are impossible,
> or possible only under very conservative assumption, that make it
> worthless).


It's called "operator precedence".

--
MarkJ
Tacoma, Washington
 
 
Mark Janssen
 
      10-02-2013
On Wed, Oct 2, 2013 at 1:23 PM, Alain Ketterlin <(E-Mail Removed)> wrote:
> On 10/02/2013 08:59 PM, Mark Janssen wrote:
>>>>
>>>> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>>>
>>> How do know that either "<=" or "*" didn't rebind the name "fact" to
>>> something else? I think that's the main reason why python cannot apply
>>> any procedural optimization

>>
>> It's called "operator precedence".

>
> Operator precedence is totally irrelevant here, you misunderstand what
> "bind" means.
>
> Imagine that you call fact(x) where x is an instance of the following class:
>
> class Strange:
>     ...
>     def __le__(dummy):
>         global fact
>         fact = someotherfun  # this is "binding"
>         return False
>
> i.e., executing "n<=1" calls __le__, which rebinds the name "fact" to
> something else. Then, there is no recursive call at all. At the time
> fact(x-1) is executed, "fact" is not the same function any more.
>
> You cannot prevent this in python.



No, but you can't prevent a lot of bad moves in python. What you just
did there is a total bonehead ("strange"?) of an idea.
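Bonehead or not, the hazard is real and runnable. A self-contained sketch of Alain's point (the class name and the hijacking lambda are invented for illustration):

```python
def fact(n, _fac=1):
    if n <= 1:
        return _fac
    return fact(n - 1, n * _fac)

class Strange:
    """Comparing an instance rebinds the global name `fact`."""
    def __init__(self, value):
        self.value = value
    def __le__(self, other):
        global fact
        fact = lambda n, _fac=1: 'hijacked'  # the rebinding happens here
        return self.value <= other
    def __sub__(self, other):
        return self.value - other
    def __mul__(self, other):
        return self.value * other

# Evaluating `n <= 1` rebinds `fact`, so the "recursive" call
# hits the hijacked function instead.
print(fact(Strange(3)))  # prints 'hijacked'
```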

--
MarkJ
Tacoma, Washington
 
 
Terry Reedy
 
      10-02-2013
On 10/2/2013 4:17 AM, Alain Ketterlin wrote:
> Terry Reedy <(E-Mail Removed)> writes:
>
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial,
>> once you know the secret of the two easy steps. Here it is.
>>
>> Assume that you have already done the work of turning a body recursive
>> ('not tail recursive') form like
>>
>> def fact(n): return 1 if n <= 1 else n * fact(n-1)


As I said in response to randomxxx, even this 0th step (recursion-to-
recursion transformation) requires assumptions or careful reasoning
about the properties of the operations.

>> into a tail recursion like

> [...]
>
> How do know that either "<=" or "*" didn't rebind the name "fact" to
> something else? I think that's the main reason why python cannot apply
> any procedural optimization (even things like inlining are impossible,
> or possible only under very conservative assumption, that make it
> worthless).
>
> -- Alain.
>
> P/S: don't take me wrong; your explanation is excellent (and very useful
> to python programmers). What I say is that it relies on assumptions that
> do not apply to python.


Program transformations (usually intended to be optimizations), whether
automatic or manual, are infamous for being buggy: they are not always
correct, because of hidden assumptions that do not always hold.

CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze that.

--
Terry Jan Reedy

 
 
Steven D'Aprano
 
      10-03-2013
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

> CPython core developers have been very conservative about what
> transformations they put into the compiler. (1,2,3) can always be
> compiled as a constant, and so it is. [1,2,3] might or might not be a
> constant, depending on the context, and no attempt is made to analyze
> that.


The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by "constant" in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

py> from dis import dis
py> dis(compile("x = (1, 2, 3)", '', 'exec'))
  1           0 LOAD_CONST               4 ((1, 2, 3))
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               3 (None)
              9 RETURN_VALUE


while a literal [1, 2, 3] does not:


py> dis(compile("x = [1, 2, 3]", '', 'exec'))
  1           0 LOAD_CONST               0 (1)
              3 LOAD_CONST               1 (2)
              6 LOAD_CONST               2 (3)
              9 BUILD_LIST               3
             12 STORE_NAME               0 (x)
             15 LOAD_CONST               3 (None)
             18 RETURN_VALUE


But I don't think this is a necessary language limitation. Both (1, 2, 3)
and [1, 2, 3] are known at compile time: the first cannot be anything
other than a tuple of three ints, and the second a list of three ints. It
seems to me that an implementation might provide a single byte-code to
build list literals, perhaps even LOAD_CONST itself. The byte-codes used
by the Python VM are not part of the language definition, and are subject
to change without warning.

And in fact, if we go all the way back to Python 1.5, even tuple literals
weren't handled by a single byte-code, they were assembled at runtime
like lists still are:

[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:1)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> from dis import dis
>>> dis(compile("x = (1, 2, 3)", '', 'exec'))

          0 SET_LINENO               0

          3 SET_LINENO               1
          6 LOAD_CONST               0 (1)
          9 LOAD_CONST               1 (2)
         12 LOAD_CONST               2 (3)
         15 BUILD_TUPLE              3
         18 STORE_NAME               0 (x)
         21 LOAD_CONST               3 (None)
         24 RETURN_VALUE




--
Steven
 
 
Dave Angel
 
      10-03-2013
On 2/10/2013 21:24, Steven D'Aprano wrote:

> On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
>
>> CPython core developers have been very conservative about what
>> transformations they put into the compiler. (1,2,3) can always be
>> compiled as a constant, and so it is. [1,2,3] might or might not be a
>> constant, depending on the context, and no attempt is made to analyze
>> that.

>
> The first sentence of this is correct. The next two don't quite make
> sense to me, since I don't understand what you mean by "constant" in this
> context. I *think* you might be referring to the LOAD_CONST byte-code,
> which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
> a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
> call:
>
> py> from dis import dis
> py> dis(compile("x = (1, 2, 3)", '', 'exec'))
> 1 0 LOAD_CONST 4 ((1, 2, 3))
> 3 STORE_NAME 0 (x)
> 6 LOAD_CONST 3 (None)
> 9 RETURN_VALUE
>
>
> while a literal [1, 2, 3] does not:
>
>


The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const (much like the interning of small
integers). The list, however, would always have to be copied from the
compile-time object, so that object itself would be a phantom, used
only as the template with which the list is to be made.
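The reuse is observable from Python code. In CPython, a tuple literal of constants is stored once in the code object's constants and handed back by reference, while a list literal is rebuilt from its elements on every evaluation:

```python
def make_tuple():
    return (1, 2, 3)   # compiled as a single constant (CPython detail)

def make_list():
    return [1, 2, 3]   # rebuilt from its elements on every call

# In CPython the constant tuple is the same object on every call,
# while each call produces a fresh list.
assert make_tuple() is make_tuple()
assert make_list() is not make_list()
```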


--
DaveA


 
 
MRAB
 
      10-03-2013
On 03/10/2013 02:39, Dave Angel wrote:
> On 2/10/2013 21:24, Steven D'Aprano wrote:
>
>> On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
>>
>>> CPython core developers have been very conservative about what
>>> transformations they put into the compiler. (1,2,3) can always be
>>> compiled as a constant, and so it is. [1,2,3] might or might not be a
>>> constant, depending on the context, and no attempt is made to analyze
>>> that.

>>
>> The first sentence of this is correct. The next two don't quite make
>> sense to me, since I don't understand what you mean by "constant" in this
>> context. I *think* you might be referring to the LOAD_CONST byte-code,
>> which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
>> a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
>> call:
>>
>> py> from dis import dis
>> py> dis(compile("x = (1, 2, 3)", '', 'exec'))
>> 1 0 LOAD_CONST 4 ((1, 2, 3))
>> 3 STORE_NAME 0 (x)
>> 6 LOAD_CONST 3 (None)
>> 9 RETURN_VALUE
>>
>>
>> while a literal [1, 2, 3] does not:
>>
>>

>
> The difference is that a tuple can be reused, so it makes sense for the
> comiler to produce it as a const. (Much like the interning of small
> integers) The list, however, would always have to be copied from the
> compile-time object. So that object itself would be a phantom, used
> only as the template with which the list is to be made.
>

The key point here is that the tuple is immutable, including its items.

 