<< Previous exercise (4.28) | Index | Next exercise (4.30) >>

```scheme
;; a) with memoization: 0.43s; without memoization: 9.3s

(define (fib i)
  (if (<= i 2)
      1
      (+ (fib (- i 1)) (fib (- i 2)))))

(define (test x)
  (define (iter t)
    (if (= t 0)
        0
        (+ x (iter (- t 1)))))
  (iter 10))

(test (fib 20))
(exit)
```

With memoization:    `(square (id 10))` => 100, `count` => 1
Without memoization: `(square (id 10))` => 100, `count` => 2
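For reference, these runs assume the `count` and `id` definitions from exercise 4.27 together with `square` from the statement of this exercise, all entered at the lazy evaluator's prompt:

```scheme
;; From exercise 4.27 and the exercise 4.29 statement,
;; entered into the lazy evaluator's driver loop:
(define count 0)

(define (id x)
  (set! count (+ count 1))
  x)

(define (square x) (* x x))
```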

A simple explanation by diagrams (the non-memoizing case):

```
+--------------------+
| global environment |
+--------------------+
          ^
          |
+--------------------+
| x: #Thunk:(id 10)  |
+--------------------+

(square (id 10))
(* x x)
--> (* (actual-value x) (actual-value x))
    (* (force-it #Thunk) (force-it #Thunk))
    (* (id 10) (id 10))   ;; sets count twice
```

```scheme
(define (f1 x) x)
(define f2 (f1 f1))

(define t1 (real-time-clock))

(define (loop i n)
  (if (= i n)
      '()
      (begin (f1 2)
             (f2 2)
             (loop (+ 1 i) n))))

(loop 1 100)
(p (- (real-time-clock) t1))

;; with memoization the above runs in 55ms
;; without memoization, it runs in 300ms
```

Here's a simple example that illustrates the difference between memoizing and not memoizing thunks. First, assume that the `force-it` procedure has been defined with memoization, as in the longer of the two versions of `force-it` given in the book.

We also define a non-memoized version, which is simply the shorter of the two versions of force-it given in the book:

```scheme
(define (unmemoized-force-it obj)
  (if (thunk? obj)
      (actual-value (thunk-exp obj) (thunk-env obj))
      obj))
```

Here is our simple example, which can be run in MIT Scheme:
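The original listing appears to have dropped out of this page. A sketch consistent with the discussion below (the names `identity`, `identity-with-computation`, and `z` are taken from that discussion; the `burn` busy-loop is an illustrative stand-in for the original computation) might look like this:

```scheme
;;; Entered at the lazy evaluator's driver loop.

;; An "identity" that performs a noticeable amount of work first:
(define (identity-with-computation x)
  (define (burn n) (if (= n 0) 'done (burn (- n 1))))
  (burn 1000000)   ; expensive busywork; result is discarded
  x)

(define (identity x) x)

;; The argument expression is packaged into a thunk by delay-it,
;; so no work has happened yet:
(define z (identity (identity-with-computation 0)))

;; Each evaluation of z below makes the driver loop call
;; actual-value on z, forcing the thunk.  With memoization the
;; burn loop runs only once; without it, it runs every time.
z
z
z
```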

What's happening here is that the argument given to identity, the expression '(identity-with-computation 0), is stored in a thunk via delay-it. With memoization, that thunk is effectively forced only once, but without it, that thunk is forced every time we call actual-value on z.

Note that memoization of thunks is similar to, but *not exactly* the same thing as, the memoization you may be used to from dynamic programming (the canonical example being the naive recursive Fibonacci algorithm, which is O(exp N) without dynamic-programming (DP) memoization but O(N) *with* it).

In DP memoization, a function f(x) is computed *once* for every x; further calls to f(x) with the same x simply return the stored value. In the thunk memoization discussed in this section of SICP, memoization is done per *thunk object*, regardless of whether the arguments are the same.

For example:
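The original example seems to be missing here as well. A sketch of the distinction (reusing the `id` and `count` definitions from exercise 4.27, run in the lazy evaluator with memoizing `force-it`) might be:

```scheme
;; In the lazy evaluator, with the memoizing force-it:

(define (double x) (+ x x))

(double (id 10))   ; one thunk, forced once: count is incremented
                   ; once, even though x is used twice

(double (id 10))   ; the *same argument*, but a *fresh* thunk:
                   ; count is incremented again

;; DP memoization would have returned a stored result for the
;; argument 10 without re-running (id 10); thunk memoization does
;; not, because each call builds a new thunk object.
```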

The takeaway here is that thunk memoization does *not* tabulate previously computed results by *argument*, the way dynamic programming memoization does. Note that DP memoization was actually implemented earlier in the book (chapter 3 if I remember correctly).
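For comparison, a minimal sketch of that DP-style, tabulated-by-argument memoization in MIT Scheme (using its built-in hash tables rather than the book's `make-table`; `memoize` and `memo-fib` here follow the shape of the chapter 3 example):

```scheme
;; DP-style memoization: results are tabulated *by argument*.
(define (memoize f)
  (let ((table (make-equal-hash-table)))
    (lambda (x)
      (let ((cached (hash-table-ref/default table x #f)))
        (or cached
            (let ((result (f x)))
              (hash-table-set! table x result)
              result))))))

(define memo-fib
  (memoize
   (lambda (n)
     (if (<= n 2)
         1
         (+ (memo-fib (- n 1)) (memo-fib (- n 2)))))))

(memo-fib 100)   ; O(N): each subproblem is computed only once
```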

Hence, I would personally caution against using the O(exp N) naive recursive Fibonacci algorithm for this exercise, since in that case the multiple redundant calls to (fib n) for the same n do *not* share work: they still redundantly recompute, because each call results in a separate thunk object. Even if thunk memoization *does* make a difference here, it is nowhere near the jump from O(exp N) to O(N) that true dynamic-programming memoization provides, which involves actually *tabulating computed results indexed by argument*.