The all-pairs shortest path problem can be formulated informally as follows: given a weighted graph, find for every pair of vertices in the graph the shortest path between them.
Formally, let G = (V, E) be a graph with vertices V and edges E. For each pair (u, v) of vertices in V, find a subset of E that forms a simple path between u and v and has minimal weight sum. The subset may be empty, if no such path exists.
It is important that the graph contains no negative-weight cycles. If it does, shortest paths are not well defined, since going around such a cycle lowers the weight indefinitely, and the algorithm returns wrong results.
The crux of the solution is to represent the graph as a matrix, where an entry at index (u, v) different from infinity means there is an edge between u and v with that weight. The diagonal entries are 0, since a vertex reaches itself at no cost.
We assume SRFI 1 (the list library) in the following code, since we use fold. A matrix is represented as a list of lists.
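For instance, a hypothetical three-vertex graph with an edge 1 -> 2 of weight 4 and an edge 2 -> 3 of weight 1 could look like this (the names inf and example-matrix are ours, for illustration only):

```scheme
; A sketch of the representation. inf stands in for infinity,
; meaning "no edge"; the diagonal is 0 (a vertex reaches itself for free).
(define inf 1000)

(define example-matrix
  `((0    4    ,inf)
    (,inf 0    1)
    (,inf ,inf 0)))
```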
(define matrix-zero '())

(define matrix-sum
  (lambda (element-sum)
    (lambda (rows-A rows-B)
      (cond ((null? rows-B) rows-A)
            ((null? rows-A) rows-B)
            (else (map (lambda (row-A row-B)
                         (map element-sum row-A row-B))
                       rows-A rows-B))))))
First, we define a zero for the matrix. Next, we define the meaning of A + B, where A and B are matrices. Note that we take the element-wise sum operation as a curried parameter, so we can change its meaning later. Two simple maps suffice.
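A quick usage sketch of the matrix sum with ordinary + (the definitions are repeated here so the snippet runs on its own):

```scheme
(define matrix-zero '())

(define matrix-sum
  (lambda (element-sum)
    (lambda (rows-A rows-B)
      (cond ((null? rows-B) rows-A)
            ((null? rows-A) rows-B)
            (else (map (lambda (row-A row-B)
                         (map element-sum row-A row-B))
                       rows-A rows-B))))))

((matrix-sum +) '((1 2) (3 4)) '((10 20) (30 40)))
; => ((11 22) (33 44))

; The zero really behaves as an identity:
((matrix-sum +) matrix-zero '((1 2) (3 4)))
; => ((1 2) (3 4))
```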
(define matrix-dotprod
  (lambda (element-zero element-sum element-prod)
    (lambda (pa pb)
      (fold element-sum element-zero (map element-prod pa pb)))))
This is the familiar dot product of the vectors pa and pb. Again, we curry in the meaning of the zero, sum and product operations of our arithmetic.
(define (transpose rows)
  (if (null? (car rows))
      '()
      (cons (map car rows)
            (transpose (map cdr rows)))))
Now for the transpose of a matrix. Clever use of car, cdr and map does the job easily when we are working on lists of lists (note that this version assumes a nonempty matrix).
(define matrix-prod
  (lambda (element-zero element-sum element-prod)
    (lambda (rows-A rows-B)
      (let ((cols-B (transpose rows-B))
            (dotprod (matrix-dotprod element-zero element-sum element-prod)))
        (map (lambda (row)
               (map (lambda (col) (dotprod row col)) cols-B))
             rows-A)))))
Now for the familiar matrix product. Given zero, sum and product curried in, and given two matrices A and B, we take the transpose of B and instantiate a dot-product function. Then a simple nested map calculates the matrix product.
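As a sanity check, the generic product instantiated with ordinary arithmetic (zero = 0, sum = +, product = *) behaves like the textbook matrix product. The sketch below repeats the definitions so it runs on its own, with a minimal stand-in for SRFI 1's fold:

```scheme
(define (fold f init lst)            ; minimal stand-in for SRFI 1 fold
  (if (null? lst) init (fold f (f (car lst) init) (cdr lst))))

(define (transpose rows)
  (if (null? (car rows))
      '()
      (cons (map car rows) (transpose (map cdr rows)))))

(define matrix-dotprod
  (lambda (element-zero element-sum element-prod)
    (lambda (pa pb)
      (fold element-sum element-zero (map element-prod pa pb)))))

(define matrix-prod
  (lambda (element-zero element-sum element-prod)
    (lambda (rows-A rows-B)
      (let ((cols-B (transpose rows-B))
            (dotprod (matrix-dotprod element-zero element-sum element-prod)))
        (map (lambda (row)
               (map (lambda (col) (dotprod row col)) cols-B))
             rows-A)))))

(define ordinary-matrix-prod (matrix-prod 0 + *))

(ordinary-matrix-prod '((1 2) (3 4)) '((5 6) (7 8)))
; => ((19 22) (43 50))
```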
(define zz 1000) ; Set this to some high value

(define path-zero zz)
(define path-sum min)
(define path-prod
  (lambda (m n)
    (if (or (= zz m) (= zz n))
        zz
        (+ m n))))

(define path-matrix-prod (matrix-prod path-zero path-sum path-prod))
Here is where things get fun. We define a ZSP, that is, a grouping of a zero, a sum operator and a product operator (in algebraic terms, a semiring). The zero is defined as zz, some large value standing in for infinity. The sum is the minimum function, and the product is a simple +, guarded, however, by a check against zz (since infinity + x is still infinity).
The matrix product can now be defined in terms of this ZSP. We call it path-matrix-prod.
(define (fast-paths matrix)
  (let ((n (length matrix)))
    (let f ((m 1) (mat matrix))
      (if (<= (- n 1) m)
          mat
          (f (* 2 m) (path-matrix-prod mat mat))))))
So, given the adjacency matrix A of a graph, the min-plus product A * A yields A^2, whose (u, v) entry is the weight of the shortest path from u to v using at most 2 edges. Applying A^2 * A^2 obtains A^4, and so forth. We continue until the exponent reaches at least n - 1, where n is the dimension of the matrix. At that point we must have the correct paths, since a shortest path in a graph without negative-weight cycles is simple and therefore uses at most n - 1 edges; squaring beyond that changes nothing.
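To see one squaring step concretely, consider a graph with edges 1 -> 2 (weight 4) and 2 -> 3 (weight 1): A has no entry for 1 -> 3, but A^2 does. The self-contained sketch below writes out one min-plus product directly; the helper names are ours, for illustration.

```scheme
; One min-plus squaring step, written out directly for illustration.
; zz stands in for infinity, as above.
(define zz 1000)

(define (iota-n n)                   ; (iota-n 3) => (0 1 2)
  (let loop ((i 0))
    (if (= i n) '() (cons i (loop (+ i 1))))))

(define (min-plus-square rows)
  ; Entry (u,v) of A * A is: min over all w of A[u][w] + A[w][v]
  (map (lambda (row-u)
         (map (lambda (v)
                (apply min
                       (map (lambda (a-uw row-w)
                              (let ((a-wv (list-ref row-w v)))
                                (if (or (= zz a-uw) (= zz a-wv))
                                    zz
                                    (+ a-uw a-wv))))
                            row-u rows)))
              (iota-n (length rows))))
       rows))

(min-plus-square `((0 4 ,zz) (,zz 0 1) (,zz ,zz 0)))
; => ((0 4 5) (1000 0 1) (1000 1000 0))
```

The new entry 5 at position (1, 3) is the path 1 -> 2 -> 3 of weight 4 + 1.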
A test case:
; Test matrix
(define matrix
  `((0   3   8   ,zz -4)
    (,zz 0   ,zz 1   7)
    (,zz 4   0   ,zz ,zz)
    (2   ,zz -5  0   ,zz)
    (,zz ,zz ,zz 6   0)))

; > (fast-paths matrix)
; '((0 1 -3 2 -4)
;   (3 0 -4 1 -1)
;   (7 4 0 5 3)
;   (2 -1 -5 0 -2)
;   (8 5 1 6 0))
The algorithm presented here runs in O(n^3 lg n) time. An O(n^3) algorithm exists, known as the Floyd-Warshall algorithm for all-pairs shortest paths.
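For comparison, here is a hedged sketch of what Floyd-Warshall might look like on the same list-of-lists representation; the helper names are ours, and zz again stands in for infinity:

```scheme
; A sketch of Floyd-Warshall on the list-of-lists representation.
(define zz 1000)

(define (path-sum m n)               ; guarded +, as in the ZSP above
  (if (or (= zz m) (= zz n)) zz (+ m n)))

(define (relax-through k rows)
  ; Allow vertex k as an intermediate: d(u,v) = min(d(u,v), d(u,k) + d(k,v))
  (let ((row-k (list-ref rows k)))
    (map (lambda (row-u)
           (let ((d-uk (list-ref row-u k)))
             (map (lambda (d-uv d-kv)
                    (min d-uv (path-sum d-uk d-kv)))
                  row-u row-k)))
         rows)))

(define (floyd-warshall rows)
  (let loop ((k 0) (d rows))
    (if (= k (length rows))
        d
        (loop (+ k 1) (relax-through k d)))))

(floyd-warshall `((0   3   8   ,zz -4)
                  (,zz 0   ,zz 1   7)
                  (,zz 4   0   ,zz ,zz)
                  (2   ,zz -5  0   ,zz)
                  (,zz ,zz ,zz 6   0)))
; => same table as fast-paths above
```

Note that row k and column k are unchanged while relaxing through k (since d(k,k) = 0), so reading the old row-k inside relax-through is safe.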