I'm reading The Reasoned Schemer.
I have some intuition about how conde works. However, I can't find a formal definition of what conde/conda/condu/condi do. I'm aware of https://www.cs.indiana.edu/~webyrd/ but that seems to have examples rather than definitions.
Is there a formal definition of conde, conda, condi, condu somewhere?
In Prolog's terms, condA is "soft cut" a.k.a. *->, where A *-> B ; C is like (A, B ; not(A), C), only better; whereas condU is "committed choice", a combination of once and a soft cut, so that (once(A) *-> B ; false) expresses (A, !, B) (with the cut inside):
condA:       A  *-> B ; C    % soft cut,
                             %   (A , B ; not(A) , C)
condU: once(A)  *-> B ; C    % committed choice,
                             %   (A , !, B ; not(A) , C)
(with ; meaning "or" and , meaning "and", i.e. disjunction and conjunction of goals, respectively).
In condA, if the goal A succeeds, all of its solutions are passed through to the first clause B and no alternative clauses C are tried.
In condU, once/1 allows its argument goal to succeed only once (it keeps only one solution, if any).
condE is a simple disjunction of conjunctions, and condI is a disjunction which alternates between the solutions of its constituents, interleaving their streams.
Here's an attempt at faithfully translating the book's code, without the logical variables and unification, into 18 lines of Haskell, which is mostly a lazy Lisp with syntax.(*) See if this clarifies things:
Sequential stream combination ("mplus" of the book):
(1) [] ++: ys = ys
(2) (x:xs) ++: ys = x : (xs ++: ys)
Alternating stream combination ("mplusI"):
(3) [] ++/ ys = ys
(4) (x:xs) ++/ ys = x : (ys ++/ xs)
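The difference only shows up when the first stream is long or infinite. For example (a GHCi check, assuming (1)-(4) above are loaded; the numeric streams are just illustrations, not from the book):
take 5 (repeat 1 ++: repeat 2)   -- [1,1,1,1,1] : the 2s are never reached
take 5 (repeat 1 ++/ repeat 2)   -- [1,2,1,2,1] : both streams contribute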
Sequential feed ("bind"):
(5) [] >>: g = []
(6) (x:xs) >>: g = g x ++: (xs >>: g)
Alternating feed ("bindI"):
(7) [] >>/ g = []
(8) (x:xs) >>/ g = g x ++/ (xs >>/ g)
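To illustrate the two feeds (ad-hoc goals, not from the book, evaluated in GHCi with (1)-(8) loaded):
[1,2,3] >>: (\n -> [n, n*10])   -- [1,10,2,20,3,30]
[1,2,3] >>/ (\n -> [n, n*10])   -- [1,2,10,3,20,30]
take 6 ([1,2] >>: repeat)       -- [1,1,1,1,1,1] : stuck inside the first branch
take 6 ([1,2] >>/ repeat)       -- [1,2,1,2,1,2] : fair to both branches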
"OR
" goal combination ("condE
"):
(9) (f ||: g) x = f x ++: g x
"Alternating OR" goal combination ("condI"):
(10) (f ||/ g) x = f x ++/ g x
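For instance, with two tiny ad-hoc goals (not from the book), evaluated in GHCi:
((\x -> [x+1]) ||: (\x -> [x*10])) 5                 -- [6,50]    : solutions of both alternatives
take 4 (((\_ -> repeat 1) ||/ (\_ -> repeat 2)) 0)   -- [1,2,1,2] : alternating between the two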
"AND
" goal combination ("all
"):
(11) (f &&: g) x = f x >>: g
"Alternating AND" goal combination ("allI" of the book):
(12) (f &&/ g) x = f x >>/ g
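For instance (ad-hoc example goals again, GHCi):
((\x -> [x, x+1]) &&: (\y -> [y*10])) 5      -- [50,60]   : each solution of the first goal is fed through the second
take 4 (((\x -> [x, x+1]) &&: repeat) 5)     -- [5,5,5,5] : the sequential feed gets stuck inside repeat 5
take 4 (((\x -> [x, x+1]) &&/ repeat) 5)     -- [5,6,5,6] : the alternating feed is fair to both solutions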
Special goals true and false (or "success" and "failure"):
(13) true x = [x] -- a singleton list with the same solution repackaged
(14) false x = [] -- an empty list, meaning the solution is rejected
And why are they called true and false? Because for any goal g, we have e.g.
(g &&: true) x = g x >>: true = g x >>: (\ x -> [x] ) = g x
(false &&: g) x = false x >>: g = [] >>: g = [] = false x
-- ... etc.
Goals produce streams (possibly empty) of (possibly updated) solutions, given a (possibly partial) solution to a problem.
Putting these goals one after the other (sequentially) corresponds to the AND combination, while putting them beside each other (in parallel) corresponds to the OR combination. A solution that went through both goals fulfills the first goal AND the second. A solution that went through either of the two goals fulfills one goal OR the other.
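In Haskell terms this could be captured by a type synonym (a hypothetical name, not from the book; the signatures below are specializations of the more general inferred types):
type Goal a = a -> [a]   -- a goal maps a (partial) solution to a stream of solutions
(||:), (||/), (&&:), (&&/) :: Goal a -> Goal a -> Goal a
true, false                :: Goal a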
Re-write rules for all are:
(all) = true
(all g1) = g1
(all g1 g2 g3 ...)  = (\x -> g1 x >>: (all g2 g3 ...))
                    = g1 &&: (g2 &&: (g3 &&: ... ))
(allI g1 g2 g3 ...) = (\x -> g1 x >>/ (allI g2 g3 ...))
                    = g1 &&/ (g2 &&/ (g3 &&/ ... ))
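Since Haskell has no variadic functions, list-taking versions of all and allI (hypothetical names allG and allIG, not from the book) can be written as folds, using the fact that true is an identity for &&: (and &&/), as shown above:
allG, allIG :: [a -> [a]] -> (a -> [a])
allG  = foldr (&&:) true    -- g1 &&: (g2 &&: (... &&: true))
allIG = foldr (&&/) true    -- g1 &&/ (g2 &&/ (... &&/ true))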
Re-write rules for condX are:
(condX) = false
(condX (else g1 g2 ...)) = (all g1 g2 ...) = g1 &&: (g2 &&: (...))
(condX (g1 g2 ...)) = (all g1 g2 ...) = g1 &&: (g2 &&: (...))
(condX (g1 g2 ...) (h1 h2 ...) ...) = (ifX g1 (all g2 ...)
                                              (ifX h1 (all h2 ...) (...) ))
To arrive at the final translation of condE and condI, there's no need to implement the book's ifE and ifI, since they reduce further to simple operator combinations, with all the operators considered to be right-associative:
(condE (g1 g2 ...) (h1 h2 ...) ...) =
      (g1 &&: g2 &&: ... ) ||: (h1 &&: h2 &&: ...) ||: ...
(condI (g1 g2 ...) (h1 h2 ...) ...) =
      (g1 &&: g2 &&: ... ) ||/ (h1 &&: h2 &&: ...) ||/ ...
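A small illustration of the difference, with two ad-hoc single-goal clauses whose streams are infinite (hypothetical goals, evaluated in GHCi):
evens x = [x, x+2 ..]               -- not from the book, just for illustration
odds  x = [x+1, x+3 ..]
take 6 ((evens ||: odds) 0)         -- [0,2,4,6,8,10] : condE never reaches the second clause
take 6 ((evens ||/ odds) 0)         -- [0,1,2,3,4,5]  : condI interleaves the clauses' streams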
So there's no need for any special "syntax" in Haskell; plain binary infix operators suffice. Any combination can be used anywhere, with &&/ instead of &&: as needed. On the other hand, condI could also be implemented as a function accepting a collection (list, tree, etc.) of goals to be fulfilled, which would use some smart strategy to pick the one most likely or most needed, etc., and not just the simple binary alternation of the ||/ operator (or ifI of the book).
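For instance, a list-taking variant (hypothetical name condIG, not from the book) could be fair to all of its goals at once, round-robin, rather than nesting binary interleaves:
import Data.List (transpose)
condIG :: [a -> [a]] -> (a -> [a])
condIG gs x = concat (transpose (map ($ x) gs))   -- take one solution from each goal's stream in turn
-- e.g. take 6 (condIG [evens, odds] 0) == [0,1,2,3,4,5], with the goals from the previous example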
Next, the book's condA can be modeled by two new operators, ~~> and ||~, working together. We can use them in a natural way as in e.g.
g1 ~~> g2 &&: ... ||~ h1 ~~> h2 &&: ... ||~ ... ||~ gelse
which can intuitively be read as "IF g1 THEN g2 AND ... OR-ELSE IF h1 THEN ... OR-ELSE gelse":
"IF-THEN" goal combination produces a "try" goal which must be called with a failure-continuation goal:
(15) (g ~~> h) f x = case g x of [] -> f x ; ys -> ys >>: h
"OR-ELSE" goal combination of a try goal and a simple goal simply calls its try goal with a second, on-failure goal, so it's nothing more than a convenience syntax for automatic grouping of operands:
(16) (g ||~ f) x = g f x
With the "OR-ELSE" ||~ operator given less binding power than the "IF-THEN" ~~> operator and made right-associative too, and the ~~> operator having still less binding power than &&: and the like, sensible grouping of the above example is automatically produced as
(g1 ~~> (g2 &&: ...)) ||~ ( (h1 ~~> (h2 &&: ...)) ||~ (... ||~ gelse ...) )
The last goal in an ||~ chain must thus be a simple goal. That's no real limitation, since the last clause of a condA form is equivalent anyway to a simple "AND"-combination of its goals (or simple false can be used just as well).
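As a concrete sketch of these binding powers (my choice of precedence levels, not anything from the book), together with a small condA-style example using ad-hoc test goals, evaluated in GHCi:
infixr 3 &&:, &&/
infixr 2 ||:, ||/
infixr 1 ~~>
infixr 0 ||~
positive x = if x > 0 then [x] else []    -- hypothetical test goals, not from the book
negative x = if x < 0 then [x] else []
(positive ~~> (\y -> [y*2]) ||~ negative ~~> (\y -> [y*3]) ||~ true)   5    -- [10]
(positive ~~> (\y -> [y*2]) ||~ negative ~~> (\y -> [y*3]) ||~ true) (-5)   -- [-15]
(positive ~~> (\y -> [y*2]) ||~ negative ~~> (\y -> [y*3]) ||~ true)   0    -- [0] : fell through to the final "else" goal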
That's all. We can even have more types of try goals, represented by different kinds of "IF" operators, if we want:
use alternating feed in a successful clause (to model what could've been called condAI, if there were one in the book):
(17) (g ~~>/ h) f x = case g x of [] -> f x ; ys -> ys >>/ h
use the successful solution stream only once to produce the cut effect, to model condU:
(18) (g ~~>! h) f x = case g x of [] -> f x ; (y:_) -> h y
So that, finally, the re-write rules for condA and condU of the book are simply:
(condA (g1 g2 ...) (h1 h2 ...) ...) =
      g1 ~~> g2 &&: ... ||~ h1 ~~> h2 &&: ... ||~ ...
(condU (g1 g2 ...) (h1 h2 ...) ...) =
      g1 ~~>! g2 &&: ... ||~ h1 ~~>! h2 &&: ... ||~ ...
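And a quick contrast of the two, with choices as a hypothetical test goal that has three solutions (GHCi, parenthesized explicitly so no fixity declarations are needed):
choices x = [x, x+1, x+2]
((choices ~~>  (\y -> [y*10])) ||~ false) 1   -- [10,20,30] : condA passes every solution of the test goal through
((choices ~~>! (\y -> [y*10])) ||~ false) 1   -- [10]       : condU commits to the first solution only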
(*) which is:
f a b c =~= (((f a) b) c) =~= f(a, b, c)
(\ a -> b ) is a lambda function, (lambda (a) b)
foo x = y is shorthand for foo = (\ x -> y )
a @@ b = y is shorthand for (@@) a b = y, a definition of an infix operator @@
( ) are just for grouping
[] is the empty list, and
: means cons -- both as a constructor ( lazy, as the whole language is lazy, i.e. call by need ), on the right of = in definitions; and as a destructuring pattern, on the left (or in pattern-matching case expressions).