
a. Both procedures produce the same result for any tree, because both perform an in-order traversal: each converts the left branch first, then the entry, then the right branch.
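For reference, here are the two procedures from the text, together with the tree selectors from section 2.3.3 (a node is represented as `(entry left right)`), applied to tree A of figure 2.16:

```scheme
;; Selectors from section 2.3.3: a tree node is the list (entry left right).
(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))

(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))

(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list (left-branch tree)
                      (cons (entry tree)
                            (copy-to-list (right-branch tree)
                                          result-list)))))
  (copy-to-list tree '()))

;; Tree A from figure 2.16:
(define tree-a '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ()))))

(tree->list-1 tree-a)  ; => (1 3 5 7 9 11)
(tree->list-2 tree-a)  ; => (1 3 5 7 9 11)
```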

b. Let T(n) be the time taken by the procedure for a balanced tree with n nodes.

For tree->list-1:

**T(n) = 2*T(n/2) + O(n/2)** (since *append* takes time linear in the length of its first argument, which here holds about n/2 elements)

Solving the above equation, we get **T(n) = O(n * log n)**

For tree->list-2:

**T(n) = 2*T(n/2) + O(1)**

Solving the above equation, we get **T(n) = O(n)**

*by the Master Theorem.

More specifically (see https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)#Generic_form ), tree->list-1 corresponds to case 2 with $c_{\mathrm{crit}} = 1$, $k = 0$, while tree->list-2 falls under case 1.
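Unrolling the recurrences by hand (for a perfectly balanced tree, taking $n = 2^k$) gives the same answers without invoking the theorem:

```latex
% tree->list-1: every level of the recursion performs O(n) total append work,
% and a balanced tree has \log_2 n levels.
T(n) = 2T(n/2) + \tfrac{cn}{2}
     = 4T(n/4) + \tfrac{cn}{2} + \tfrac{cn}{2}
     = \cdots
     = n\,T(1) + \tfrac{cn}{2}\log_2 n = O(n \log n)

% tree->list-2: constant work per call, one call per node.
T(n) = 2T(n/2) + c
     = 4T(n/4) + c + 2c
     = \cdots
     = n\,T(1) + (n-1)\,c = O(n)
```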

I cannot see why the time complexities of the two implementations are different. If we ignore the specific implementations and consider them only from an abstract algorithmic perspective, both are O(N) tree traversals.

Of course one can then argue that **append** is O(N) while **cons** is O(1), which I suppose is correct. (But I am not one hundred percent sure about that; I would appreciate it if anybody could give a reference to the built-in implementation of **append**, preferably for the Racket interpreter, which I am using.) So **tree->list-1** now becomes O(N log N).

But **tree->list-2** might also be O(N log N). Consider the second formal argument of **copy-to-list**, i.e. **result-list**. Again, I know little about the call stack or the details of Racket/Scheme interpreters, but does passing it require O(N) time and space to allocate and copy the data? If it does, the merge step of the divide-and-conquer would be O(N) as well, and the overall time complexity would be O(N log N).
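For what it's worth, no copying happens when result-list is passed along: Scheme passes a pointer to the existing list, and `cons` allocates a single new pair whose cdr *shares* that list, so each step of tree->list-2 really is O(1). A quick check (standard Scheme, nothing assumed beyond `eq?`):

```scheme
(define tail (list 9 11))
(define extended (cons 7 tail))  ; allocates one pair: O(1) time and space

;; The cdr of the new list is the very same object as tail, not a copy:
(eq? (cdr extended) tail)        ; => #t
```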

I'm not 100% sure about my answer, but I spent some time tracing out the procedures by hand, and it is strikingly obvious how much pointless work is being done by tree->list-1. The difference between the two procedures basically boils down to the following:

`tree->list-1` fully expanded (the exact expansion depends on the tree structure; this is tree A, i.e. `(7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))`):

```
(append
 (append (append '() (cons 1 '()))
         (cons 3 (append '() (cons 5 '()))))
 (cons 7 (append '() (cons 9 (append '() (cons 11 '()))))))
```

`tree->list-2` fully expanded:

```
(cons 1 (cons 3 (cons 5 (cons 7 (cons 9 (cons 11 '()))))))
```

I got a little lazy with the substitution because I had already done all the trees for `tree->list-2`, but I think this is more or less correct. The point is that `tree->list-1` does pointless things like appending to the empty list, and worst of all, the very last thing it does is append the right branch of the whole tree onto its left branch. This is where the procedures differ most dramatically. They take about the same recursion depth to set up that final operation, but `tree->list-2` builds the list in linear time, whereas `tree->list-1` has to go through the left branch twice: once to append and cons it up, and then again to locate the end of the resulting list so the right branch can be attached. In the end the procedures compute the same thing, except that `tree->list-1` does substantially more work to do it.
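One way to see the extra work concretely is to count the pairs each procedure allocates, counting both explicit `cons` calls and the pairs `append` creates while re-consing its first argument. This is a sketch assuming the book's definitions and the selectors from section 2.3.3:

```scheme
(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))

(define pairs 0)                          ; running count of allocated pairs
(define (counting-cons a b)
  (set! pairs (+ pairs 1))
  (cons a b))
(define (counting-append xs ys)           ; append re-conses all of xs
  (if (null? xs)
      ys
      (counting-cons (car xs) (counting-append (cdr xs) ys))))

(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (counting-append (tree->list-1 (left-branch tree))
                       (counting-cons (entry tree)
                                      (tree->list-1 (right-branch tree))))))

(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list (left-branch tree)
                      (counting-cons (entry tree)
                                     (copy-to-list (right-branch tree)
                                                   result-list)))))
  (copy-to-list tree '()))

(define tree-a '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ()))))

(set! pairs 0) (tree->list-1 tree-a) pairs  ; => 10 pairs for 6 nodes
(set! pairs 0) (tree->list-2 tree-a) pairs  ; => 6 pairs for 6 nodes
```

For tree->list-2 the count always equals the number of nodes; for tree->list-1 the extra pairs come from every node's left-branch list being re-consed by `append`, which is exactly the O(n log n) overhead on balanced trees.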

a. The trees shown in figure 2-16 can be written as:
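Presumably along these lines, using the `(entry left right)` representation from the text (these encodings of figure 2.16 are my reconstruction):

```scheme
;; The three balanced trees of figure 2.16, all containing 1, 3, 5, 7, 9, 11:
(define tree1 '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ()))))
(define tree2 '(3 (1 () ()) (7 (5 () ()) (9 () (11 () ())))))
(define tree3 '(5 (3 (1 () ()) ()) (9 (7 () ()) (11 () ()))))
```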

Using either function, the trees all evaluate to the same list:
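Namely, the in-order (sorted) list of entries:

```
(1 3 5 7 9 11)
```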

b. The first function is of O(n log n) complexity. The second function is of O(n) complexity.