<< Previous exercise (1.6) | sicp-solutions | Next exercise (1.8) >>

The absolute tolerance of 0.001 is far too large when computing the square root of a small value. For example, on the system I am using, `(sqrt 0.0001)` yields 0.03230844833048122 instead of the expected 0.01 (an error of over 200%).
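For reference, these are the book's procedures from section 1.1.7, which this whole discussion assumes (reproduced here so the examples below can be tried directly):

```
;; Square-root computation by Newton's method, from SICP section 1.1.7.
(define (average x y) (/ (+ x y) 2))
(define (square x) (* x x))

;; One Newton step: average the guess with x/guess.
(define (improve guess x)
  (average guess (/ x guess)))

;; The original absolute-tolerance test discussed in this exercise.
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt x)
  (sqrt-iter 1.0 x))
```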

On the other hand, for very large values of the radicand, machine precision cannot represent small differences between large numbers. The algorithm may never terminate, because the square of the best guess will not be within 0.001 of the radicand, and trying to improve it keeps yielding the same guess [i.e., (improve guess x) will equal guess]. Try (sqrt 1000000000000) [that's with 12 zeroes], then try (sqrt 10000000000000) [13 zeroes]. On my 64-bit Intel machine, the 12-zero case yields an answer almost immediately, whereas the 13-zero case enters an endless loop: (improve guess x) keeps yielding 4472135.954999579, but (good-enough? guess x) keeps returning #f.

If `good-enough?` uses the alternative strategy (stop when the difference between one guess and the next is less than 0.001 times the guess), `sqrt` works better for both small and large numbers.

```
;; Modified version to look at the difference between iterations
(define (good-enough? guess x)
  (< (abs (- (improve guess x) guess))
     (* guess .001)))
```

```
;; Alternate version, which adds an "oldguess" variable to the main function.
(define (sqrt-iter guess oldguess x)
  (if (good-enough? guess oldguess)
      guess
      (sqrt-iter (improve guess x) guess x)))

(define (good-enough? guess oldguess)
  (< (abs (- guess oldguess))
     (* guess 0.001)))

(define (sqrt x)
  (sqrt-iter 1.0 2.0 x))
```

[atoy]: The above solutions fail for x = 0. It hangs and never finishes evaluating. Does anybody know why?

[atoy]: Figured out why the procedure hangs on 0. It hangs because, when the guess reaches 0, the delta between guess and oldguess can never be less than (* guess 0.001), since that expression evaluates to 0. If you change the '<' operator to '<=', the procedure will properly evaluate 0.
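Concretely, atoy's fix is a one-character change to the "oldguess" variant above:

```
;; atoy's fix applied to the "oldguess" variant: once guess reaches 0,
;; the tolerance (* guess 0.001) is exactly 0, so the strict < can
;; never succeed; <= lets the test pass when the delta reaches 0 too.
(define (good-enough? guess oldguess)
  (<= (abs (- guess oldguess))
      (* guess 0.001)))
```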

[random person]: I don't see why (* guess 0.001) is used. Just '0.001' or whatever tolerance desired seems to work fine. It would be nice if someone explained above if there is a reason why the (* guess 0.001) is better.

[SchemeNewb]: Just using 0.001 is, in effect, doing the same thing as the original program. It basically says: "If the difference between this guess and the improved guess is less than 0.001 in absolute terms (as opposed to percentage terms), then stop improving." The problem with this is the same as explained up top. For really tiny numbers, it is easy for the total difference between guess and improved guess to be less than 0.001, so the program stops without actually doing anything. For large numbers, it might take forever to get to where guess and improved guess differ by less than 0.001. So the book asks us to stop the program when the improved guess is within a certain PERCENT of the guess. And THAT is what this alternative does. It checks how close guess and improved guess are as a percentage. It basically says: "figure out how far guess is from improved guess, and then see if that amount is less than 0.1% of guess. If it is, stop the program."

[robwebbjr]: I don't really know how to explain this, but the first example listed above gives the wrong result after six decimal places, e.g., using the first solution, (sqrt 0.000005) returns 0.0022365388240630493 on my intel-64 machine (running Ubuntu), whereas the second solution returns 0.002236068027062195 (as does my calculator). I guess (no pun intended) that it has something to do with calling the improve function from the good-enough? function - but I don't know enough yet to say exactly what's going on there. Here is my solution that gives accurate results beyond six decimal places (just like the second solution from above):

```
; Another take on the good-enough? function
(define (good-enough? guess x)
  (< (/ (abs (- (square guess) x)) guess)
     (* guess 0.0001)))
```

[tnvu]: One way to "watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess" is to see it as a rate of change using the classic (X1 - X0) / X0. In this case X1 = (improve guess x) and X0 = guess. This is equivalent to the first solution (multiply the numerator and denominator by guess) but is more explicit about calculating the rate of change.

```
; A guess is good enough when:
;   abs(improved-guess - original-guess) / original-guess < 0.001
(define (good-enough? guess x)
  (< (abs (/ (- (improve guess x) guess) guess))
     0.001))
```

[torinmr]: An alternative approach is to stop iteration when the error (i.e. abs(guess^2 - x)) is less than a given proportion of x. This only requires changing one line of the original algorithm:

```
(define (good-enough? guess x)
  (< (abs (- x (square guess)))
     (* 0.0001 x)))
```

[JESii]: Unfortunately the answer by GWB only works for "exact" results; if the answer is some form of repeating fractional number, then the equal comparison will never succeed, resulting in an infinite loop.

[White_Rabbit]: I disagree with JESii. I've never seen an infinite loop with GWB's solution, and I've seen it working for all "non exact" results I've tried. As explained in the intro, when (improve guess x) hits the machine's precision it will yield the same guess it got as input and GWB's solution will return TRUE.

[Sreeram]: The approach I have taken is to first narrow down to a close answer and then repeatedly funnel down to as accurate an answer as the machine precision allows:

```
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.1))

(define (diff a b)            ; helper: absolute difference
  (abs (- a b)))

;; Stop once a further improvement no longer reduces the error.
(define (small-enuf guess x)
  (<= (diff (square guess) x)
      (diff (square (improve guess x)) x)))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      (if (small-enuf guess x)
          guess
          (sqrt-iter (improve guess x) x))
      (sqrt-iter (improve guess x) x)))
```

[Chan]: I wonder whether the good-enough? test is effective for finding the square roots of large numbers. Before trying large numbers, I tried it on small ones: (sqrt 0.0001) returns the same value as reported in the first solution, so the good-enough? test is clearly not effective for small numbers. But if you look at the results for larger numbers below, you will see that the larger the number, the more precise the value. So isn't the good-enough? test effective for finding the square roots of large numbers?

```
(sqrt 0.0001)    ;value: 0.03230844833048122
(sqrt 10000)     ;value: 100.00000025490743
(sqrt 1000000)   ;value: 1000.0000000000118
(sqrt 10000000)  ;value: 3162.277660168379
(sqrt 100000000) ;value: 10000.0
```

[Thomas]: SchemeNewb wrote: "Just using 0.001 is, in effect, doing the same thing as the original program." This is not the case at all — the original programme checks that the *guess squared* is within 0.001 of the *radicand*, whereas the algorithm described by "random person" checks that the *new guess* is within 0.001 of the *former guess*. This not only works much better than the original algorithm in all cases (both being more accurate with very small numbers and not hanging the programme with very large numbers) but also works better than

```
(* 0.001 guess)
```

for very large numbers because, 0.001 being by definition smaller than the thousandth of any number larger than 1, the lower tolerance forces the algorithm to continue refining the guess. It is indeed, however, inferior for very small numbers because 0.001 is by definition a larger tolerance than the thousandth of any number smaller than 1. One could cover both bases with the following:

```
(define (good-enough? guess x)
  (< (abs (- (improve guess x) guess))
     (cond ((< x 1) (* guess 0.001))
           (else 0.001))))
```

[Owen]: I believe people here are discussing this in the wrong direction. The real problem is the procedure "improve". The original program stops calculating when the difference between y^2 and x is less than a fixed number, 0.001, and there are two main risks in it. For a small number x, 0.001 may simply be too large a tolerance threshold. For a large number x (and this is where people get confused), the real reason for hanging is that (improve guess x) at some point never actually improves the result: because of the limited number of bits, the "improved guess" becomes equal to the "old guess", so (- y^2 x) never changes and hence never gets inside the tolerance range. This applies to the small-number case as well, if the threshold is set extremely small. The second solution, comparing the difference between the new guess and the old guess, need not care about a specific precision value (or percentage) at all: at some point the difference between the new guess and the old guess is guaranteed to be 0, because the machine cannot represent "the average of a guess and (/ x guess)" to ever-finer precision with a fixed number of bits. Hence, Thomas's solution can be improved by just setting the threshold to 0.

```
(define (good-enough-thomas? guess x)
  (= (- (improve guess x) guess) 0))
```

or simply refer to GWB's solution, which I believe is the best: it is guaranteed to stop and, at the same time, gives the best accuracy.

[Thomas]: Good point, Owen. I should've read the whole discussion before posting — my mistake!

I think the hint is in the question: "in real computers, arithmetic operations are almost always performed with limited precision". Given that it's done with limited precision, at some point `improve` doesn't actually change the guess any more. This is my solution, and my results: