The absolute tolerance of 0.001 is far too large when computing the square root of a small value. For example, on the system I am using, `(sqrt 0.0001)` yields 0.03230844833048122 instead of the expected 0.01 (an error of over 200%).
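For reference, this is the square-root procedure from section 1.1.7 of the book that the discussion is about, reproduced here so the failure is easy to try:

```scheme
;; The book's version: an absolute tolerance of 0.001
;; on the difference between the squared guess and x.
(define (square n) (* n n))
(define (average a b) (/ (+ a b) 2))
(define (improve guess x) (average guess (/ x guess)))
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))
(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))
(define (sqrt x) (sqrt-iter 1.0 x))

(sqrt 0.0001)   ; => 0.03230844833048122 on a typical 64-bit system
```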

On the other hand, for very large radicands, machine precision cannot represent the small differences between large numbers. The algorithm may terminate because the difference between successive guesses is eventually evaluated as zero, but the answer need not be within 0.001 of the true root as the test suggests. That is, for very large numbers this test does not ensure that the desired accuracy has been achieved.

Try `(sqrt 1000000000000)` [that's with 12 zeroes], then try `(sqrt 10000000000000)` [13 zeroes]. On my 64-bit Intel machine, the 12-zero case yields an answer almost immediately, whereas the 13-zero case seems to enter an endless loop. I don't know enough about how the algorithm actually plays out to say why there would be such a dramatic change.
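A plausible explanation (my own sketch, not from the thread): somewhere between 10^12 and 10^13 the gap between adjacent double-precision floats grows past the 0.001 tolerance, so `(square guess)` can never land within 0.001 of the radicand and the loop never stops. The change in spacing can be checked directly:

```scheme
;; At 1e12 the spacing between adjacent doubles is about 0.00012,
;; so adding 0.0009 produces a genuinely different number:
(= 1e12 (+ 1e12 0.0009))   ; => #f

;; At 1e13 the spacing is about 0.002, so 0.0009 is simply lost
;; in rounding and the sum comes back unchanged:
(= 1e13 (+ 1e13 0.0009))   ; => #t
```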

If `good-enough?` uses the alternative strategy (stop when the change from one guess to the next is a small fraction, say 0.001, of the guess), `sqrt` works better for both small and large numbers.

```scheme
;; Modified version to look at difference between iterations
(define (good-enough? guess x)
  (< (abs (- (improve guess x) guess))
     (* guess .001)))
```

```scheme
;; Alternate version, which adds an "oldguess" variable to the main function.
(define (sqrt-iter guess oldguess x)
  (if (good-enough? guess oldguess)
      guess
      (sqrt-iter (improve guess x) guess x)))

(define (good-enough? guess oldguess)
  (< (abs (- guess oldguess))
     (* guess 0.001)))

(define (sqrt x)
  (sqrt-iter 1.0 2.0 x))
```
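As a quick check, here is that alternate version made self-contained with the helpers it assumes from the book (`square` is not needed here, but `average` and `improve` are), together with the two problem cases. Exact digits will vary by system:

```scheme
(define (average a b) (/ (+ a b) 2))
(define (improve guess x) (average guess (/ x guess)))

(define (sqrt-iter guess oldguess x)
  (if (good-enough? guess oldguess)
      guess
      (sqrt-iter (improve guess x) guess x)))

(define (good-enough? guess oldguess)
  (< (abs (- guess oldguess))
     (* guess 0.001)))

(define (sqrt x)
  (sqrt-iter 1.0 2.0 x))

(sqrt 0.0001)          ; very close to 0.01 now
(sqrt 10000000000000)  ; terminates instead of looping
```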

[random person]: I don't see why `(* guess 0.001)` is used. Just `0.001`, or whatever tolerance is desired, seems to work fine. It would be nice if someone explained above why `(* guess 0.001)` is better.

[SchemeNewb]: Just using 0.001 is, in effect, doing the same thing as the original program. It basically says: "if the difference between this guess and the improved guess is less than 0.001 in absolute terms (as opposed to percentage terms), then stop improving." The problem with this is the same as explained up top. For really tiny numbers, it is easy for the total difference between the guess and the improved guess to be less than 0.001, so the program stops without actually having done anything. For large numbers, it might take forever to get to where the guess and the improved guess differ by less than 0.001. So the book asks us to stop the program when the change in the guess is less than a certain PERCENT of the guess, and THAT is what this alternative does. It checks how close the guess and the improved guess are as a percentage. It basically says: "figure out how far the guess is from the improved guess, and see whether that amount is less than 0.1% of the guess. If it is, stop the program."

[robwebbjr]: I don't really know how to explain this, but the first solution listed above gives the wrong result after six decimal places. For example, using the first solution, `(sqrt 0.000005)` returns 0.0022365388240630493 on my 64-bit Intel machine (running Ubuntu), whereas the second solution returns 0.002236068027062195 (as does my calculator). I guess (no pun intended) that it has something to do with calling `improve` from the `good-enough?` function, but I don't know enough yet to say exactly what's going on there. Here is my solution, which gives accurate results beyond six decimal places (just like the second solution above):

```scheme
;; Another take on the good-enough? function
(define (good-enough? guess x)
  (< (/ (abs (- (square guess) x)) guess)
     (* guess 0.0001)))
```

[tnvu]: One way to "watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess" is to see it as a rate of change, using the classic (X1 - X0) / X0. In this case X1 = `(improve guess x)` and X0 = `guess`. This is equivalent to the first solution (multiply the numerator and denominator by guess) but is more explicit about calculating the rate of change.

```scheme
; A guess is good enough when:
;   abs(improved-guess - original-guess) / original-guess < 0.001
(define (good-enough? guess x)
  (< (abs (/ (- (improve guess x) guess) guess))
     0.001))
```

I think the hint is in the question: "in real computers, arithmetic operations are almost always performed with limited precision". Given that it's done with limited precision, at some point `improve` doesn't actually change the guess any more. This is my solution, and my results:
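A minimal sketch of that idea (my reconstruction, assuming the book's `average` and `improve` helpers, since the poster's own code does not appear here): stop as soon as improving the guess returns exactly the same float.

```scheme
(define (average a b) (/ (+ a b) 2))
(define (improve guess x) (average guess (/ x guess)))

;; Stop when limited precision makes improve a no-op.
(define (good-enough? guess x)
  (= (improve guess x) guess))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt x) (sqrt-iter 1.0 x))
```

Note that for some inputs the iteration can bounce between two adjacent floats without ever returning the same value twice, so the `oldguess` variant shown earlier, which compares against the previous guess with a relative tolerance, is the more robust formulation.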