sicp-ex-2.63





skangas

a. The trees shown in Figure 2.16 can be written as:

  
   (define fig2-16-1 '(7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))) 
   (define fig2-16-2 '(3 (1 () ()) (7 (5 () ()) (9 () (11 () ()))))) 
   (define fig2-16-3 '(5 (3 (1 () ()) ()) (9 (7 () ()) (11 () ())))) 
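
For reference, these examples assume the selectors and the two conversion procedures given in the book (Section 2.3.3, Exercise 2.63):

   ;; Selectors for the tree representation (entry left-branch right-branch).
   (define (entry tree) (car tree))
   (define (left-branch tree) (cadr tree))
   (define (right-branch tree) (caddr tree))

   ;; Converts a tree to a list with an append at every node.
   (define (tree->list-1 tree)
     (if (null? tree)
         '()
         (append (tree->list-1 (left-branch tree))
                 (cons (entry tree)
                       (tree->list-1 (right-branch tree))))))

   ;; Converts a tree to a list by consing onto an accumulated result list.
   (define (tree->list-2 tree)
     (define (copy-to-list tree result-list)
       (if (null? tree)
           result-list
           (copy-to-list (left-branch tree)
                         (cons (entry tree)
                               (copy-to-list (right-branch tree)
                                             result-list)))))
     (copy-to-list tree '()))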

Using either procedure, all three trees produce the same list:

  (1 3 5 7 9 11)

b. The first procedure, tree->list-1, is O(n log n); the second, tree->list-2, is O(n).


meteorgan

a. Both procedures produce the same result because both perform an in-order traversal of the tree.

b. Let T(n) be the time taken by the procedure for a balanced tree with n nodes.

For tree->list-1:
T(n) = 2*T(n/2) + O(n/2) (the append at each node walks the left subtree's list of about n/2 elements)
Solving this recurrence gives T(n) = O(n log n).

For tree->list-2:
T(n) = 2*T(n/2) + O(1) (each node contributes only a single cons)
Solving this recurrence gives T(n) = O(n).
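
To make the per-node cost difference concrete, one can count the cons operations each procedure performs (append does one cons for every element of its first argument). A rough sketch, reusing the definitions above and a made-up balanced-tree builder:

   ;; Builds a balanced tree holding the integers lo..hi
   ;; (helper invented for this note; any balanced tree would do).
   (define (balanced-tree lo hi)
     (if (> lo hi)
         '()
         (let ((mid (quotient (+ lo hi) 2)))
           (list mid
                 (balanced-tree lo (- mid 1))
                 (balanced-tree (+ mid 1) hi)))))

   ;; Conses done by tree->list-1: one per node, plus one inside append
   ;; for every element of the left subtree's list.
   (define (conses-1 tree)
     (if (null? tree)
         0
         (+ 1
            (length (tree->list-1 (left-branch tree)))
            (conses-1 (left-branch tree))
            (conses-1 (right-branch tree)))))

   ;; Conses done by tree->list-2: exactly one per node.
   (define (conses-2 tree)
     (if (null? tree)
         0
         (+ 1
            (conses-2 (left-branch tree))
            (conses-2 (right-branch tree)))))

   (conses-1 (balanced-tree 1 127))   ; => 448 (grows like n log n)
   (conses-2 (balanced-tree 1 127))   ; => 127 (grows like n)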


wl

I cannot see why the time complexity of the two implementations is different. If we ignore the specific implementations and only consider it from an abstract algorithmic perspective, both are O(N) tree traversals.

Of course one can then argue that append is O(N) while cons is O(1), which I suppose is correct. (But I am not one hundred percent sure about that. I would appreciate it if anybody could give a reference to the built-in implementation of append, preferably for the Racket interpreter, which I am using.) So tree->list-1 now becomes O(N log N).
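
For what it's worth, the list version of append given in Section 2.2.1 of the book makes the linear cost explicit; Racket's built-in append may be coded differently, but it still has to walk the whole first list:

   ;; Walks all of list1 before attaching list2, so the cost is linear
   ;; in (length list1). This is the definition from SICP Section 2.2.1.
   (define (append list1 list2)
     (if (null? list1)
         list2
         (cons (car list1) (append (cdr list1) list2))))
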

But tree->list-2 might also be O(NlogN). Consider the third formal argument of copy-to-list, i.e. result-list. Again I know little about the call stack or the details of Racket/Scheme interpreter(s), but does this require O(N) time and space to allocate and copy the data? If it does, the merge step of Divide and Conquer would be O(n) as well. As a result, the time complexity would be O(NlogN).