Hacker News

This made me think of Kernel, a programming language of the Lisp family where the main primitive is vau rather than lambda, defining a fexpr rather than a function.

Just like lambdas, fexprs are lexically scoped closures. The difference is that a fexpr's arguments are not evaluated before it is called; instead, the argument syntax is passed to the fexpr (so it's not call-by-name either, more like a macro). Except that the fexpr also receives the dynamic environment (the one from the call site), so it can eval its arguments in that context if it wants to.

This makes fexpr a kind of all powerful form that can do both what macros and functions can do. Even primitives like lambda or quote can be written as fexprs. Quite fascinating.
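To make the evaluation order concrete, here is a minimal sketch in Python (names like `f_eval` and `operative` are mine, not Kernel's): an operative receives its operands as raw syntax plus the caller's environment, and decides for itself what, if anything, to evaluate.

```python
def f_eval(expr, env):
    """Evaluate a tiny Lisp-like expression: symbols, literals, combinations."""
    if isinstance(expr, str):               # symbol: variable lookup
        return env[expr]
    if not isinstance(expr, list):          # literal: self-evaluating
        return expr
    op = f_eval(expr[0], env)
    if getattr(op, "operative", False):     # fexpr-style: raw operands + dynamic env
        return op(expr[1:], env)
    return op(*[f_eval(a, env) for a in expr[1:]])  # ordinary call: eval args first

def operative(fn):                          # mark a Python function as an operative
    fn.operative = True
    return fn

@operative
def quote(operands, env):                   # quote needs no special support:
    return operands[0]                      # just hand back the unevaluated operand

@operative
def f_if(operands, env):                    # short-circuiting "if" as an operative
    test, then, alt = operands
    return f_eval(then if f_eval(test, env) else alt, env)

env = {"quote": quote, "if": f_if, "+": lambda a, b: a + b, "x": 10}
assert f_eval(["quote", ["+", "x", 1]], env) == ["+", "x", 1]
assert f_eval(["if", True, ["+", "x", 1], "boom"], env) == 11  # "boom" never evaluated
```

Note how quote and if fall out for free once the callee controls evaluation, which is exactly the point about primitives above.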

See https://web.cs.wpi.edu/~jshutt/kernel.html about Kernel :).



I’m not familiar with the theoretical aspects, but what you’re describing reminds me of Tcl: arguments to procedures can be passed unevaluated (quoted, in Tcl terms), and the procedure itself can alter the caller’s environment via uplevel.


It’s also extremely similar to R and Rebol.

(In fact, R isn’t just ‘extremely similar’: it does use fexprs, though without mentioning the term.)


It's what lets the tidy data tools take arguments like .x where x is a column name, not a local variable that's in scope.


“Wow this is fun and amazingly useful”

8 months later.

“I regret this decision, and a thousand curses on anyone who chooses to use this in a code base I have to work on”

I’ve been at the receiving end of this particular “feature” and it was awful. Your logic is splayed all over the place, and you’ll often get mystery errors, because someone else’s code makes some undocumented, un-typed assumption about the availability of some magically named variables. This is not a feature, this is bugs and misery in disguise.


Isn't that a feature/problem of every dynamic language?


R is even more wild because you can access the entire definition of a function from the function object itself. You can literally make a new function that's a copy of an existing function with some lines of code injected.
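For the curious, a rough analog of that trick in Python (only a sketch, and Python rather than R): where R exposes body(f) as editable data, you can get something similar by editing a function's AST and recompiling it.

```python
import ast
import textwrap

# Source of the function we want to copy and patch (kept as a string here so
# the sketch is self-contained; inspect.getsource would work for real code).
src = textwrap.dedent("""
    def add(a, b):
        return a + b
""")

tree = ast.parse(src)
fn = tree.body[0]
# Inject a new first statement into the function body: a = a * 10
fn.body.insert(0, ast.parse("a = a * 10").body[0])
ast.fix_missing_locations(tree)

ns = {}
exec(compile(tree, "<patched>", "exec"), ns)
patched_add = ns["add"]          # a new function with the injected line

assert patched_add(2, 3) == 23   # (2 * 10) + 3
```

The original function is untouched; you get a fresh copy with code injected, much like rewriting body(f) in R.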



Powershell lets you do this too.


In Kernel, environments are completely reified: they exist as first-class objects, and you can pass them around.

However, if you don't get an environment passed in, you have no access to it. In addition, environments are in many ways append-only: you can change what you see as "car", but you can't change what previous callers see as "car" unless you get a handle on their environment, and even that won't affect calls already made.
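A loose Python analog of that append-only feel (collections.ChainMap standing in for a Kernel environment frame, purely illustrative):

```python
from collections import ChainMap

# A "ground" environment and a child frame chained onto it.
ground = {"car": "primitive-car"}
child = ChainMap({}, ground)

# Writes land in the child frame only: the binding is shadowed,
# not mutated, so anyone holding the parent sees nothing change.
child["car"] = "my-car"

assert child["car"] == "my-car"
assert ground["car"] == "primitive-car"
```

To actually change what the parent sees, you'd need a handle on `ground` itself, which mirrors the "unless you get a handle to their environment" caveat.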

It's really quite tricky. Doubly so because there are some environment mutations that the implementation needs to perform during bootstrapping but that must be forbidden to user code once everything is bootstrapped.

I really wish Shutt had implemented a version in C/C++/Ada/any systems language and not just a metacircular Scheme implementation. The metacircularity obscures the management of the environments, which is key to the implementation. It also obscures just how much garbage Kernel conses up to maintain all of the housekeeping needed to pass this stuff around.

Alas, he is no longer with us to request anything of. RIP.


I'm a huge fan of FEXPRs. No idea why they fell out of favor, they're what lisp is all about. I implemented all primitives as FSUBRs in my lisp, including all control structures and quote.

I should study Kernel. Looks like it went even further than that. Simply passing the dynamic environment to the FEXPR is quite ingenious.


>The difference is that a fexpr arguments are not evaluated before it's called

What's funny is that the actual lambda calculus allows multiple reduction strategies, including lazy evaluation. So I guess the only reason to introduce this distinction is to facilitate impure code, which cares about the difference.


Kernel-style fexprs give macros plus a bunch of reflective abilities that even macros don't provide. Lazy evaluation (alone) doesn't do that.


The Kernel thesis is worth reading if you haven't already. The fexpr/vau reduction semantics are (a subset of) the lambda calculus.


I mean, there are observable differences between the two. An obvious one is to have an infinite recursive function application as a parameter to a function. Depending on whether CBV or CBN is used, you either get non-halting code, or a meaningful result.
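That difference can be sketched with explicit thunks in Python (call-by-name simulated by passing zero-argument lambdas; names here are mine):

```python
def diverge():                    # a computation that never halts
    while True:
        pass

def const_cbn(x_thunk, y_thunk):  # call-by-name "K": takes thunks, never forces y
    return x_thunk()

result = const_cbn(lambda: 42, lambda: diverge())
assert result == 42
# The call-by-value equivalent, const(42, diverge()), would loop forever,
# because diverge() is forced before const is ever entered.
```

The diverging argument is simply discarded under CBN, which is the observable difference described above.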


> makes fexpr a kind of all powerful form that can do both what macros and functions can do.

A fexpr cannot do everything a macro can do.

A macro can be expanded upfront, so that even code that is not reached during execution is expanded.

The separate macro expansion pass can take place in a different host environment, such as a developer's build machine. The expanded code then executes elsewhere: a target machine different from the developer's machine.

A macro can, for instance, grab a text file on the build machine, massage it and turn it into a literal (or even code, obviously).

Macro expansion can perform basic checks on code. For instance, in TXR Lisp, unbound variables are reported by the macro expander, before the code is executed, and even if it isn't.

This error is caught during expansion. The function is still interpreted:

  1> (defun foo () (list a (cons)))
  ** expr-1:1: warning: unbound variable a
  foo
  2> (fun foo)
  #<interpreted fun: foo nil>
Cons not being given enough arguments is caught in the compiler:

  3> (compile 'foo)
  ** expr-1:1: warning: cons: too few arguments: needs 2, given 0
   #<vm fun: 0 param>
  4> (fun foo)
  #<vm fun: 0 param>
Application-defined macros can implement their own static checks. Commonly, macros check arguments for validity and such, but more advanced checks are possible.

For instance, in the awk macro provided by the same Lisp dialect, it's possible to perpetrate a situation in which code that looks like it is scoped a certain way isn't. The macro diagnoses it:

  5> (awk ((let ((x 1) (y 2)) (rng x y)) (prn)))
  ** expr-5:1: warning: rng: form x
                             is moved out of the apparent scope
                             and thus cannot refer to variables (x)
  ** expr-5:1: warning: rng: form y
                             is moved out of the apparent scope
                             and thus cannot refer to variables (y)
  ** expr-5:1: warning: unbound variable x
  ** expr-5:1: warning: unbound variable y
The issue is that (rng ...) expressions, which test for a record being in a range indicated by two conditions, are not evaluated in the apparent scope where they are physically located. They are evaluated when a new record is read and delimited into fields, before the ladder of condition/action pairs is evaluated. The information is recorded in a hidden vector of Boolean values, and when the (rng ...) expression is encountered, it simply retrieves its corresponding pre-computed Boolean.

A fexpr could implement this check, but (1) digging into scopes at run time would be expensive, (2) the check would only run if the code is actually executed, and (3) it would need some stateful hack to avoid diagnosing the same thing repeatedly.

Another thing fexprs cannot do is to be debuggable via inspecting an expansion. To debug the form which uses the fexpr we have to debug into the fexpr. To debug a form using a macro, we can inspect the expansion. We can understand what is wrong in terms of the behavior of the expansion, which can be very simple compared to the macro. Then work backwards to understand why the bad expansion was produced. At that point we can dig into the macro, and that is a different problem: a pure code generation problem we can likely debug in isolation in a development system.

Yet another thing fexprs cannot do is simply go away entirely in a software build. We can stage macros such that their definitions are available at compile time, but are not propagated into the compiled program.



