You should know what recursive-functions are.

In contrast to a recursive procedure, an *iterative procedure*
iterates explicitly, forcing state changes to be expressed using
*mutation*. The following recursive function, which adds the elements
of a list:

```
(define (sum lis)
  (if (null? lis)
      0
      (+ (car lis)
         (sum (cdr lis)))))
```

would be written as an *iterative procedure* (using a hypothetical
`while` macro):

```
(define (sum lis)
  (let ((thesum 0))
    (while (not (null? lis))
      (set! thesum (+ (car lis) thesum))
      (set! lis (cdr lis)))
    thesum))
```

Scheme tries to encourage the recursive style, which is the functional solution. There's only one problem: on the typical von Neumann machine, the architecture on which computing is based today, the iterative style appears, at first sight, to be more efficient.

To understand this, we have to look at the amount of memory used by the two versions. The first one builds up a chain of deferred additions; only when the last recursive call finishes are they all carried out:

```
(sum '(1 2 3))
=> (+ 1 (sum '(2 3)))
=> (+ 1 (+ 2 (sum '(3))))
=> (+ 1 (+ 2 (+ 3 (sum '()))))
=> (+ 1 (+ 2 (+ 3 0)))
=> (+ 1 (+ 2 3))
=> (+ 1 5)
=> 6
```

As you can see quite well, the memory use is linear in the length of
the list. In contrast, the iterative procedure modifies only a single
variable, and drops each list cell once it has been added to the sum,
so it runs in constant space. Luckily, a simple optimization for
recursive calls allows us to benefit from the efficiency of the
iterative procedure: if a call that happens as the *last* action of a
procedure does not build up a call frame, but instead returns its
value directly to the previous caller, a simple recursive call in that
position takes up no extra space. This is called
tail-call-optimization?.

The substitution-model? again helps to visualize this (it has tail-call optimization built in, so to speak):

```
(define (sum sum-so-far lis)
  (if (null? lis)
      sum-so-far
      (sum (+ (car lis) sum-so-far)
           (cdr lis))))

(sum 0 '(1 2 3))
=> (sum 1 '(2 3))
=> (sum 3 '(3))
=> (sum 6 '())
=> 6
```

As you can see, the size of the expression does not vary (except with
the length of the argument list). We gave a *recursive* procedure the
efficiency advantage of an *iterative* procedure. Actually, it can be
proven that *every* iteration can be expressed using tail calls!

But now we have to be able to differentiate between these cases.
Therefore, we call the *process* *created by* a procedure that
builds up data on the call stack a **recursive process**, and a
*process* *created by* a procedure that requires only constant space
an **iterative process**. An *iterative procedure* can create only
*iterative processes*, while a *recursive procedure* can create
both *recursive* as well as *iterative* processes.

Consequently, Scheme does not provide iterative looping constructs
(such as `while`) as primitives at all. They can be easily defined in
terms of tail calls, though. Hence, Scheme can *emulate* an
*iterative* construct using the appropriate syntax, but internally it
is translated into a *recursive* procedure. You can see examples of
such emulations on simple-iterators.
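As a sketch of such an emulation (assuming `syntax-rules` macros; this is not the actual simple-iterators code), a `while` form like the hypothetical one used above can expand into a tail-recursive named `let`:

```
(define-syntax while
  (syntax-rules ()
    ((_ test body ...)
     (let loop ()              ; named let: looping via a tail call
       (if test
           (begin
             body ...
             (loop)))))))      ; the tail call takes the place of the jump
```

The call to `loop` is in tail position, so with tail-call optimization this expansion runs in constant space, just like a built-in loop would.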

Using tail calls is essential in many cases for efficiency reasons. There are some simple techniques that help translate a procedure that creates a recursive process into one that creates an iterative process, i.e. one that uses tail calls.

We have seen one such transformation already. `sum` above was
translated into an iterative version by adding a new argument, a
so-called *accumulator*. This argument is used in subsequent calls
to store the sum collected so far, saving us the burden of computing
it when the called procedure returns. It turns out that using an
accumulator is the solution for 99% of the cases where a procedure
should be modified to create an iterative process.

The extra argument of course is annoying in the normal case, so we'd usually define:

```
(define (sum lis)
  (define (sum-iter sofar lis)
    (if (null? lis)
        sofar
        (sum-iter (+ sofar (car lis))
                  (cdr lis))))
  (sum-iter 0 lis))
```

This hides the extra argument from the user, which is a pretty common idiom as well.
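The same idiom is often written with a *named let*, which bundles the helper definition and its initial call into one form (this variant is an equivalent sketch, not from the original page):

```
(define (sum lis)
  (let loop ((sofar 0)          ; accumulator, hidden from the caller
             (lis lis))
    (if (null? lis)
        sofar
        (loop (+ sofar (car lis))
              (cdr lis)))))     ; tail call: an iterative process
```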

We already know the `map` procedure. We have defined it on
recursive-functions as:

```
(define (map-rec proc lis)
  (if (null? lis)
      '()
      (cons (proc (car lis))
            (map-rec proc (cdr lis)))))
```

We can now clearly see that this is not *tail recursive*: when the
recursive call to `map-rec` returns, its return value is passed to
`cons` as an argument before being returned. We can now try using an
accumulator to make `map` create an *iterative process*.

```
(define (map-iter sofar proc lis)
  (if (null? lis)
      sofar
      (map-iter (cons (proc (car lis))
                      sofar)
                proc
                (cdr lis))))
```

This looks like tail recursion, and it is. But trying this, we notice something:

```
> (map-iter '() (lambda (x) (+ x 1)) '(1 2 3))
(4 3 2)
```

This is backwards!

Actually, this should have been expected: we now build the result
from the *beginning*, adding elements taken from the front of the
argument list to the head of the result list. Of course, the finished
result is the original list backwards. The solution is simple: just
use `reverse` or `reverse!` on the returned list. A single
`reverse` at the end is more efficient than calling
`append` on every recursion, so this is the preferred method:

```
(define (map-iter sofar proc lis)
  (if (null? lis)
      (reverse! sofar)
      (map-iter (cons (proc (car lis))
                      sofar)
                proc
                (cdr lis))))
```

And it works as expected.
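With the `reverse!` in place, the earlier transcript now comes out in order:

```
> (map-iter '() (lambda (x) (+ x 1)) '(1 2 3))
(2 3 4)
```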

There are many factual errors in this page, and the introduction is extremely misleading by stating that Scheme encourages recursive processes over iterative processes *and* that iterative code using assignment would be faster. It furthermore does not elaborate on why anyone should care about tail call optimization as opposed to loops that use local variable assignment.

I am with Riastradh here. This page does not help the beginning programmer to understand what is going on. I think I will try to do something about it when I have some time.

When a program is running, it uses a *stack* to keep track of various things. A stack can be pictured as dishes stacked on top of each other. Every time a function call is made, the stack grows; every time a function returns, the stack shrinks. When a recursion is deep, i.e. there are many nested function calls, the stack can grow very big. There are two problems with this:

- The stack might have a maximal size. When this size is exhausted, the program aborts.

- The time spent growing and shrinking the stack could be spent on other computations, so avoiding it makes the program run faster.

Fortunately, there exists a solution, known as tail-recursive or iterative functions. This page addresses that solution.

A function call is said to be in *tail-position* if it is the last thing done in the function. Let us take an example:

```
(define (sum lis)
  (if (null? lis)
      0
      (+ (car lis)
         (sum (cdr lis)))))
```

This code simply sums the contents of a list of integers. The recursive call to *sum* is not in tail position: when it returns with the sum of
the *cdr* of the list, its result still has to be added to the *car* of the list before the function returns. The call to *+*, however, is in tail position; it is the last thing done before the function returns.
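To make the positions explicit (the annotations are mine, not part of the original code):

```
(define (sum lis)
  (if (null? lis)
      0                      ; tail position: value is returned directly
      (+ (car lis)           ; the call to + IS in tail position
         (sum (cdr lis)))))  ; the call to sum is NOT: + still awaits its result
```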

It is possible to define a version of the above code where the call to *sum* is in tail
position. By letting *sum* take two arguments, the list and the sum-so-far, we get a function with the recursive call in tail position:

```
(define (sum sum-so-far lis)
  (if (null? lis)
      sum-so-far
      (sum (+ (car lis) sum-so-far)
           (cdr lis))))
```

Note that we are now adding *before* we call sum again; sum is in tail position. A function defined like this, where the recursive call is in tail position, is said to be *tail-recursive* or *iterative*.

When a function is tail-recursive, it is possible to apply tail-call-optimization? to it. All R5RS-compliant Scheme systems perform this optimization, as it is a requirement of the specification! What happens, in effect, is that the function call in tail position is transformed into a goto (with arguments) to the beginning of the function. Thus the code is turned into a loop. The advantages of doing this are:

- There is no longer any growth and shrinkage of the stack when the function recurses. In fact, the process of executing the function only takes a constant amount of space.

- The function runs faster, since growth and shrinkage of the stack is avoided.

Feel free to correct me and add to this.

When all I do on the return stack is create a list, I think it is quite implementation-dependent whether that is more efficient than the version using the accumulator and reverse: I am building the list anyway, so the accumulator doesn't save me any space. Building the list on the return stack might even be more efficient, since it allocates only half as many pairs. Any comments on this?

Found in SRFI:1: