macros scheme lisp

What do Lisp macros have over functions that optionally evaluate their arguments?


Context

I've been learning some Lisp (Scheme, primarily) over the past few days, drawn by the allure of the sheer power said to be afforded by these languages. Of course, macros are supposed to be the feature that this power chiefly rests on. Learning to use macros, by the accounts I've heard, is much like first learning to use first-class functions: once you learn them, they become applicable everywhere, to the point that it's very hard to go back to imperative languages with no such concept. Writing pre-lambda Java code, for example, is very painful for me now.

Given this analogy of first-class functions, I was quite shocked to discover that Lisp macros are not first-class.

For instance, I can easily use a first-class function in Scheme in the following way:

(map (lambda (x) (+ x 1)) '(1 2 3))

Now replace that lambda with the name of some macro, and Scheme will complain that you're using a macro as a variable name. Given that I can't pass macros around like data, they are definitely second-class. Granted, I'm not even certain how a random macro should work in that scenario, but that's beside the point.
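
For instance, something like the following fails at macro-expansion time rather than mapping or over the list (the exact complaint varies by implementation):

(map or '(1 2 3))
;; => error along the lines of "or: bad syntax" or "macro used as a variable"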

Alternative?

I'm always wary of a construct that is compile-time only, because that usually signals some limitation in the language. So I naturally wonder how macros could be first-class runtime objects. We can sorta do this with lambdas already. For instance, in this Scheme code, both the or and eval-or calls will print 1 and return #t, but eval-or requires its arguments to be quoted at the call site:

(or
  (begin (display "1") (= 5 5))
  (begin (display "2") (= 5 6)))

(define eval-or
  (lambda (a b)
    (if (eval a (interaction-environment))
        #t
        (eval b (interaction-environment)))))

(eval-or
  '(begin (display "1") (= 5 5))
  '(begin (display "2") (= 5 6)))

I am not acquainted with the niceties of eval, but I do know that it doesn't preserve lexical scope. But perhaps we could add some sort of special construct to Scheme, say lazy-lambda, that lets you choose when to evaluate an argument while still preserving all the scoping properties of normal lambdas. Then you could do something like:

(define lazy-or
  (lazy-lambda (a b)
    (if (value a) #t (value b))))

(lazy-or
  (begin (display "1") (= 5 5))
  (begin (display "2") (= 5 6)))

It seems to me that this should be able to perform all of the same operations that a hygienic macro can. For instance, I can create DSLs or entirely new forms of control flow, just as macros can. It doesn't use eval, so the arguments can still refer to things in the lexical scope of the call site. Moreover, it's entirely first-class: I can pass lazy-or to other functions just as I could a first-class function like +. It looks like the power of Lisp macros, but fully first-class and available at runtime.

The question

Of course, if it were that simple, someone would have thought of it already, now wouldn't they? I fully expect that there may be some fundamental flaw in this idea (even more so given my minimal Lisp experience); if so, I'd be much obliged if someone could point it out.

Otherwise, can anyone give me a concrete example where a macro can do something really important that this hypothetical lazy-lambda cannot?


(Note: Not a duplicate of Are Lisp macros just syntactic sugar? or What makes Lisp macros so special?. I'm asking about a hypothetical language feature not found in Lisp that is possibly better than macros without loss of important usability, not asking about the differences between macros and functions)


Solution

  • Your lazy-lambda construct provides what is really normal-order evaluation in a language which is generally applicative-order. It does this in the same way many ancient Lisps did: by saying that you need to ask for the value of some object explicitly, in your case via your value form.

    (A normal-order language evaluates arguments to a function at the point their value is needed, while an applicative-order language evaluates them at the point the function is called. A normal-order language may therefore never evaluate some function arguments, while an applicative-order language always does.)
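
    To make the difference concrete with the question's own example: a plain (applicative-order) function version of or, say strict-or (a made-up name), has both its arguments evaluated before its body ever runs, so it prints both 1 and 2 where the question's or example prints only 1:

    (define (strict-or a b)
      (if a #t b))      ; a and b have already been evaluated by now

    (strict-or
      (begin (display "1") (= 5 5))
      (begin (display "2") (= 5 6)))   ; prints "1" and "2", returns #t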

    As I said, this is an old idea. The traditional name for it is that this kind of function is a FEXPR, where a normal function would be an EXPR.

    Here's how FEXPRs were done in ancient Lisp:

    1. require the language to be dynamically scoped, which means that eval will be able to see variable bindings;
    2. a FEXPR is then handed the source of its arguments, and uses eval as needed.

    This is as horrible as it sounds, and in particular there are three implications which follow from an implementation like this.

    1. Such languages could never really be fully compiled, because FEXPRs needed to be handed the source of their arguments.
    2. Such languages needed to use dynamic scope, not lexical scope: if a FEXPR was handed an argument which was x, then the value of the variable x needed to be available to eval.
    3. A FEXPR could be abused as follows: instead of simply evaluating its argument, it could look at it (since it had the source which was just an s-expression), rewrite it into some other thing, and evaluate that.

    (In fact ancient Lisps usually had much more incoherent semantics than this.)
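
    To make point 3 concrete, here is a toy sketch in the spirit of the question's eval-or, with explicit quoting standing in for the implicit quoting a real FEXPR got (sneaky is a made-up name):

    (define (sneaky form)
      ;; look at the source, rewrite it, then evaluate the rewritten form
      (eval (cons '+ (cdr form)) (interaction-environment)))

    (sneaky '(plus 1 2))   ; => 3, even though plus is defined nowhere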

    Well, let's consider a less horrible implementation of FEXPRs:

    1. a FEXPR gets arguments which are 'promises': objects encapsulating a form which can be evaluated in the right environment;
    2. and there is a function, force, which forces the evaluation of a promise.

    This can all be done in such a way as to make lexical scope possible and allow compilation. The obvious implementation is that a promise is just an object wrapping a procedure, and force arranges for that procedure to be called, perhaps memoizing its value.
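
    A minimal sketch of that implementation, using the made-up names make-promise and force-promise (Scheme's built-in delay and force do essentially the same job):

    ;; a promise is just an object wrapping a procedure; forcing it calls
    ;; the procedure and memoizes the result
    (define (make-promise thunk)
      (let ((forced? #f)
            (value #f))
        (lambda ()
          (if (not forced?)
              (begin (set! value (thunk))
                     (set! forced? #t)))
          value)))

    (define (force-promise promise)
      (promise))

    ;; a rationalised-FEXPR version of the question's lazy-or: the caller
    ;; wraps each argument explicitly, which is what a lazy-lambda
    ;; construct would arrange for you
    (define (lazy-or a b)
      (if (force-promise a) #t (force-promise b)))

    (lazy-or (make-promise (lambda () (begin (display "1") (= 5 5))))
             (make-promise (lambda () (begin (display "2") (= 5 6)))))
    ;; prints 1, returns #t, and never evaluates the second argument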

    So now let's see what you can, and cannot, do with these rationalised FEXPRs. Let's say it's 1965 and you're getting bored writing

    ((lambda (x ...)
        ...)
     ...
     ...)
    

    every time you want to bind a variable. You decide you'd like to write

    (with ((x ...) ...)
      ...)
    

    instead. Can you write with using FEXPRs? I'll wait.

    Well, the answer, obviously, is no, you can't. You can't because the first subform of with isn't a thing that can be evaluated at all: it's a new bit of syntax. You could write it with old awful FEXPRs but then you're back to dynamic scope, inability to compile, tentacles everywhere and so on.

    Instead, you have to write with (whose real name, of course, is let) another way. You define it as corresponding to one of a number of functions whose domains and ranges are source code. let is implemented by a function whose argument is a fragment of source code written in a language which includes let and whose value is a fragment of source code in a language which doesn't.

    And these functions, of course, are macros. And, critically, you can't do what such functions can do with FEXPRs (unless you are willing to accept the horrors described above, which you are not).

    So that's why you need macros: macros allow you to build languages, FEXPRs do not.
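
    As a concrete sketch: with, written as a hygienic Scheme macro, is exactly such a function between languages, rewriting the new form into the lambda application above (in real Scheme it is of course just let):

    (define-syntax with
      (syntax-rules ()
        ;; rewrite (with ((var init) ...) body ...) into the lambda
        ;; application you got bored of writing
        ((_ ((var init) ...) body ...)
         ((lambda (var ...) body ...) init ...))))

    (with ((x 1) (y 2))
      (+ x y))            ; => 3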

    This view of macros as functions between languages, once you understand it, really gives a very powerful insight as to what they are, and the power they give you. That power is the ability to treat programming as the process of building a language in which to solve a problem, not of forcing the problem into some existing, inadequate, language.


    Note 1. It's worth noting that in Common Lisp macros are quite explicitly implemented by functions whose domain and range is source code as described above. A macro is implemented by a perfectly ordinary function associated with its name, which is called during macroexpansion with an argument which is an s-expression (and an environment object) and which returns another s-expression.

    Note 2. It's quite easy to implement the rationalised FEXPRs I describe above using macros. One such, for CL, is here. If you look at this you'll fairly quickly come to realise some of the problems with FEXPRs. For instance, given

    (define (f g a)
      (g (expensive a)))
    

    Then, if FEXPRs are first-class, g might be one. When should expensive be called in that case? If you're going to get this right, every function call where the function is not statically known has to handle the case that the function might be a FEXPR. And don't even ask what apply should do, because I have no idea.

    Historical Lisps which had FEXPRs were not semantically coherent enough to worry about problems like this: their answer would likely be 'if you do this, then the magic smoke will escape'.