larsbrinkhoff's comments | Hacker News

Forth was invented before Moore worked at NRAO. Granted, it was gradually expanded from a very small interpreter, so it's hard to say exactly when it became "Forth" as we mean it today.

Source: "The Invention of Forth", by Chuck Moore. https://colorforth.github.io/HOPL.html

Forth should be considered a family of languages; Anton Ertl took its picture some time ago [1].

Chuck Moore, I think, agrees with the idea [2]:

That raises the question of what is Forth? I have hoped for some time that someone would tell me what it was. I keep asking that question. What is Forth?

Forth is highly factored code. I don't know anything else to say except that Forth is definitions. If you have a lot of small definitions you are writing Forth. In order to write a lot of small definitions you have to have a stack. Stacks are not popular. It's strange to me that they are not. [...]

What is a definition? Well, classically a definition was colon something, and words, and end of definition somewhere.

    : some ~~~ ;
I always tried to explain this in the sense that this is an abbreviation: whatever string of words you use frequently, you give it a name and you can use it more conveniently. But it's not exactly an abbreviation, because it can have a parameter perhaps, or two. And that is a problem with programmers, perhaps a problem with all programmers: too many input parameters to a routine. Look at some 'C' programs and it gets ludicrous. Everything in the program is passed through the calling sequence, and that is dumb.
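
To make "a lot of small definitions" concrete, here is a minimal sketch (the word names are mine, not Moore's):

    : square  ( n -- n*n )        dup * ;
    : sum-sq  ( a b -- a*a+b*b )  square swap square + ;

Each definition is a named abbreviation for a short phrase of other words, taking its one or two parameters from the stack.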

[1] https://www.complang.tuwien.ac.at/anton/euroforth/ef04/ertl0...

[2] https://www.ultratechnology.com/1xforth.htm


I think misinterpreting the "el" as Spanish is fun. In that vein, your game could be called ElCiudad.


Somebody somewhere suggested doing a clone of Tropico called ElPresidente, which is even cooler.

Btw, Lars, you have endlessly more experience in Elisp than I do. Do you maybe have any ideas/directions on how to make the graphical mode look... a bit more decent and snappy?


Sorry, I don't know anything about Emacs graphics. Some people confuse me with larsi, but I'm not that guy.


How would it be run without Emacs?

You might point out that there are things like elisp.lisp, which purports to run Emacs Lisp in Common Lisp, but I'm not sure that's viable for anything but trivial programs. There's also something for Guile, but I remain unconvinced.


Maybe a Common Lisp core with an Emacs frontend running it in https://www.gnu.org/software/emacs/manual/html_mono/cl.html?


Why not just use the best-known Emacs Lisp core, then? Like, say, Emacs.


To allow it to run on other lisp dialects as well.

(I’m just trying to defend GP’s point – I’m not a heavy lisp user myself, tbh.)


Portability across Lisp dialects is usually not a thing. Even Emacs Lisp and Common Lisp, which are arguably pretty close, rarely if ever share code.

You could make a frontend for dialect A to run code from dialect B. Those things have been toyed with, but never really took off. E.g. cl in Emacs cannot accept real Common Lisp code.

I'm not arguing against the idea, I'm just curious how it would work because I see no realistic way to do it.


Gotcha. Too bad – I was hoping there was at least some (non-trivial) subset you can run on both :(

Any idea why it's not a thing? Is this level of interop not practical for some reason?


Lisp dialects have diverged quite a bit, and it would be a lot of work to bridge the differences to a degree approaching 100%. 90% is easy, but that only works for small, trivial programs.

I say this having written a "95%" Common Lisp for Emacs (still a toy), and having successfully run an old Maclisp compiler and assembler in Common Lisp.

https://github.com/larsbrinkhoff/emacs-cl

https://github.com/PDP-6/ITS-138/blob/master/tools/maclisp.l...



Having read that, I'm even less convinced it's more than a toy.


You could probably use the unexec tooling.


I don't see how unexec would help with "decoupling the core from Emacs" since the core is written in Emacs Lisp.


You could make a standalone executable. I was assuming that people didn't want to start Emacs to run it. If it's just because... Emacs is just morally offensive and one doesn't even want it running under the covers, I don't know how to help you.

Emacs is needed because it provides Emacs Lisp.

If you used Emacs as a stand-alone game engine, at least you could make it claim it was "Reticulating Splines..." for a few minutes while it started up.

I kid, I kid! I love Emacs. I named my cat Emacs!


I wrote a VT52 hardware simulation: https://github.com/larsbrinkhoff/terminal-simulator


In my mind, taking your toy Forth from being implemented in C, assembler, or what have you, to being metacompiled is transformative. I struggled at first, making a few abortive attempts. But when I finally did it, it was a revelation.
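
For the uninitiated: metacompiling means using a running Forth to compile a new Forth image for a target, rather than leaning on a C compiler or assembler. The seed of the idea, grossly simplified and with invented names (target-image, there, t,):

    create target-image 4096 allot    \ the new system's image, built as data
    variable there  0 there !         \ target dictionary pointer
    : t,  ( x -- )                    \ a "," that compiles into the target
      target-image there @ + !
      1 cells there +! ;

A real metacompiler layers target versions of : ; and friends on top of words like this, until the new image can stand on its own.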


I emailed this to Lee. I guess it can go here too.

---

I have been fortunate to have worked professionally with Forth recently. It was so fun! But I still struggle to point out exactly why I like Forth, and why and how it's different. Your essay is a fresh take, which is good.

To me, maybe the most important lessons are:

1. Eschew complexity (sometimes to a fault).
2. Improve the code by redefining the problem. Look at things from another angle. (I hate to say it, but think out of the box.)

Much of Forth falls out from these principles. E.g. people are quick to point out that Forth is a stack-based programming language. Which is true enough, but to me it's kind of beside the point. The point is that the language does away with local variables (redefine the problem) to lay the ground for a much simpler implementation (eschew complexity).
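
A tiny illustration of the trade (my own example, not from the essay): without locals, intermediate values live on the stack, and a word shrinks to the essential operations.

    : average          ( a b -- avg )  + 2/ ;
    : clamped-average  ( a b hi -- c ) >r average r> min ;

No named temporaries anywhere; the "variables" are just stack positions, and each word stays short enough to read at a glance.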

Yes, there's a REPL. But why? Because Forth is (or can be) a programming language, operating system, compiler, and command line rolled into one. Heaps of layers and components done away with.

File system, virtual memory, code structure, documentation? Blocks!
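
For readers who haven't met them: classic blocks are fixed 1 KiB chunks of mass storage, addressed by number, that serve as source code, data store, and documentation all at once. A sketch using the standard BLOCK word set (the block numbers here are made up):

    180 load                  \ interpret block 180 as source code
    181 list                  \ show block 181, e.g. its "shadow" documentation
    : save-score  ( n -- )    \ persist a number in block 200
      200 block !  update  save-buffers ;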

The list goes on. Once you internalize this, the veil falls from your eyes, and you see how much needless complexity stands in your way in most other languages, operating systems, tools, apps... It's everywhere.


He should have ended this essay mid-sentence, because that would


Because Zork was written on the MIT Dynamic Modeling PDP-10. MDL was an important part of the software ecosystem on that computer, but Lisp wasn't. On the other MIT PDP-10 computers, Maclisp reigned.


Was there any particular reason they did that, or was it just a random coincidence (that was the team that wrote it, the hardware they had access to was that particular machine, and that particular machine ran MDL; otherwise, it would have been MACLISP)? Was there anything about MDL that helped with writing an adventure game?


Approximately, DEC-10 = PDP-10.


What happened to

1. Sun's JavaStation
2. ARM's Jazelle
???
3. Profit!


Jazelle worked for its target market (or at least, I've never seen anyone claim otherwise).

But its target market wasn't "faster Java". Instead, Jazelle promised better performance and lower power draw than an interpreter, but without the memory footprint and complexity of a JIT. It was never meant to be faster than a JIT.

Jazelle made a lot of sense in the early 2000s, when dumb phones were running J2ME applets on devices with only 1-4MB of memory, but we quickly moved on to smartphones with 64MB+ of memory, and it just made more sense to use a proper JIT.

---------

JavaStation might as well have been vaporware. Sure, the product line existed, but the promised "Super JavaStation" with a "Java coprocessor" never arrived, so you were really just paying Sun for a standard computer with Java pre-installed.


I briefly worked in a team that implemented a JVM on a mobile OS (before the iPhone), and one of the senior devs said Jazelle was in effect very inefficient because of all the context switching between ARM mode and Jazelle mode. It turned out a carefully tuned ARM JVM was in practice the best.


The JavaStation is what led me to this. They sucked, JavaOS sucked, and the whole idea was DOA precisely because they didn't do something like this and instead decided to make a shitty SPARC machine for the 5 people that wanted a Java-branded thin client.


It's more like JITs got good.


I never understood why AOT never took off for Java. "Write once, run anywhere" quickly faded as an argument, and the number of platforms that a software package needs to support is rather small.


Because developers don't like to pay for tools.

https://en.wikipedia.org/wiki/Excelsior_JET

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/products-services/jamaicavm/

It is now getting adopted because GraalVM and OpenJ9 are available for free.

Also, while not being proper Java, Android has done AOT since version 5, and mixed JIT/AOT since version 7.

EDIT: Fixed the sentence regarding Android versions.


Developers pay for tools gladly when the pricing model isn’t based on how much money you’re making.

I’m happy to drop a fixed €200/mo on Claude, but I’d never sign paperwork that required us to track user installs and deliver $0.02 per install to someone.


Especially not if those kinds of contracts don't survive an acquisition, because then your acquisition is most likely dead in the water. The acquirer would have to renegotiate the license, and with a little luck they'd be screwed over because they have nowhere else to go.


I have seen worse, where people updated the EULA 6 months after being paid $14k/seat.

Now it is FOSS all the way... lesson learned... =3

https://www.youtube.com/watch?v=WpE_xMRiCLE


That is something I never understood: that that's even legal. You enter into an agreement (let's call it a contract, because that's how the other side treats it) and then, retroactively, they get to pull the rug right out from under you.

I made the 'FOSS all the way' decision somewhere in '96 or so, but unfortunately our bookkeeping system and our own software package only worked on Windows (this was an end-user thing), so we had to keep one Windows machine around. I was pretty happy when we finally switched it off.

The funny thing is that I wouldn't even know where to start to develop on/for Mac or Windows; Linux just feels so much more powerful in that sense. Yes, it has some sharp edges, but for the most part it is the best thing that could have happened to the world of software development.


I have done native cross-platform projects in https://wxwidgets.org/ and https://quasar.dev/ . Fine for basic interfaces, but static linking on Win64 gets dicey with LGPL libraries etc.; YMMV. For iOS targets, one must use a macOS environment with a non-free Apple developer account.

Personally, I like Apache 2.0 and standard quality-of-life *nix build tools. Everything Windows runs off a frozen VM backing image (a KVM COW file) now, as even Microsoft can no longer resist the urge to break things. =3


Depends on the use case; anyone who has seen the commercial host-scaling cost of options like MATLAB usually ported to another language. Lesson learned...

Commercial licensing is simply a variable cost, and if there is another FOSS option, most people will make the right call. Some commercial licenses are just Faustian bargains that can cost serious money to escape. =3


I think what they do is correct. We also need to get paid this way.


You could do AOT Java using gcj; it didn't need commercial tools.


If we ignore that gcj was never production-ready, and that basically the only good use case, the Red Hat-sponsored one, was compiling Eclipse, which was usually slower than using the JIT anyway.

And that around 2009 most of the team left the project, some to OpenJDK, others elsewhere, while GCC kept it around because the gcj unit tests stressed parts of GCC that weren't tested by other frontends, until the decision came to remove it completely.

As a side note, I expect a similar outcome for gccgo, abandoned since Go added generics support.


You don't have to pay for .NET AOT.


Actually you do, indirectly: via Windows licenses, Office, Azure, Visual Studio Professional and Ultimate licenses, and the C# DevKit.

Also, you are forgetting that AOT first came with NGEN and .NET Native, both commercial, and that on the Mono side, Xamarin had some price points for AOT optimizations, if I recall correctly.

However, this is a moot point: you also don't pay for GraalVM, OpenJ9, or Android.


You don’t have to pay for Java AOT either. Graal is free.


> I never understood why AOT never took off for Java.

GraalVM native images certainly are being adopted, the creation of native binaries via GraalVM is seamlessly integrated into stacks like Quarkus or Spring Boot. One small example would be kcctl, a CLI client for Kafka Connect (https://github.com/kcctl/kcctl/). I guess it boils down to the question of what constitutes "taking off" for you?

But it's also not that native images are unambiguously superior to running on the JVM. Build times definitely leave something to be desired, not all third-party libraries can easily be used, not all GCs are supported, the closed-world assumption is not always practical, and peak performance may also be better with JIT. So the way I see it, AOT-compiled apps are currently seen as a tactical tool by the Java community, utilized when their advantages (e.g. fast start-up) matter.

That said, interesting work is happening in OpenJDK's Project Leyden, which aims to move more work to AOT while being less disruptive to the development experience than GraalVM native binaries. Arguably, if you're using CDS, you are using AOT.


Well, one aspect is how dynamic the platform is.

It simply defaults to an open world where you could just load a class from any source at any time to subclass something, or straight up apply some transformation to classes as they load via instrumentation. And defaults matter, so AOT compilation is not completely trivial (though it's not too bad either with GraalVM's native image, given that the framework you use (if any) supports it).

Meanwhile, most "AOT-first" languages assume a closed world where everything "that could ever exist" is already fully known.


Except that when they support dynamic linking, they pay the indirect-call cost that JITs can remove.


I'm not sure how much HotSpot can do this, but a JIT means you can target different CPUs, taking advantage of specific extensions or CPU quirks. It can also mean better cache performance, because you don't need branches to handle different chips, so the branch is gone and the code is smaller.


Dynamic class loading is a major issue, and it's an integral feature. Realistically, there are very few cases where AOT and Java make sense.


People want to run things other than Java.

We did see a recent attempt to do hardware-based memory management again with Vypercore, but they ran out of money.

I think part of the problem with any performance-related microarchitectural innovation is that unless you are one of the big players (i.e. Qualcomm, Apple, Intel, AMD, Nvidia) then you already have a significant performance disadvantage just due to access to process nodes and design manpower. So unless you have an absolutely insane performance trick, it's still not going to make sense to buy your chip.


They have the volume as well; if you do carve out a niche, they’ll just add it and roll over you.

That’s held for decades, though I think it only really worked when computers were doubling in speed every 12-18 months. For a while they scaled horizontally (more cores) over radical IPC improvements, so we might see the rise of proper co-processors again (though nothing stops the successful ones from being put on die, which is where Strix Point is already heading).

