
Given that we are talking about A4 papers and grams, I'd bet this wasn't in the US.

In Europe, the typical flat rate is up to 100g for standard letters. And that's 20 sheets, which is not a particularly unusual letter to send.


But 20 sheets do not fit in a regular DL or C5 envelope, so you already have a hint that you should check the limits; you usually send them in a reinforced C4 envelope.


That's only true for information theoretically secure algorithms like one-time pad. It's not true for algorithms that are more practical to use like AES.


Similar - I learned HTML by tweaking the ReadMe.htm that was included with the Dweep video game from (now defunct) Dexterity Software.


Not sure I'd agree about it being esoteric. Understanding or at least knowing about ARP is still very much essential for people in networking. arping is a very useful tool for seeing if machines on the same network segment are up and just not responding to ICMP pings. Anyone looking at tcpdump/Wireshark dumps will run into it sooner or later.

It is true that software engineers may sooner run into it when debugging their home network than their application though as cloud and traditional networks are very different.


ARP is very common knowledge for people with basic Linux and networking skills. Back in the 1990s and early 2000s, it was a very common tool for LAN troubleshooting.


You can go surprisingly far with C, though LLVM is probably a better long-term option for a serious compiler, because it's a tool made for the job (unless you target exotic and/or embedded platforms that don't have LLVM support - but that's fairly unlikely).

C is very easy to get started with if you don't already know LLVM. You don't have to flatten everything to SSA + basic blocks and can keep expressions as trees. The downside is that once your compiler is reasonably complete, you may spend quite a bit of time working around quirks of C (e.g. int promotion is very annoying when you already have full type information, so your compiler either has to understand C semantics fairly well or defensively cast every subexpression).

I have a C backend in my compiler (https://github.com/alumina-lang/alumina) and it works really well, though the generated C is really ugly and assembly-like. With #line directives, you can also get source-level debugging (gdb/lldb) that just works out of the box.

There are a few goodies that LLVM gives you that you don't get with C, like coverage (https://clang.llvm.org/docs/SourceBasedCodeCoverage.html). It works when done through clang, but cannot easily be made to track the original sources.


That's interesting, I don't find VSCode slow at all, even when working on large workspaces via SSH over a high latency link.

Sure, there are native editors that are snappier, but not to the point that it affects my productivity in any way.

The one thing that VSCode does not handle well is large files (e.g. DB dumps, large JSONs, logfiles), but for coding, it really is not an issue.


As a rather vocal user of Sublime Text 4, VS Code is an order of magnitude slower on basic UI interaction.

I eventually switched to VS Code, but I miss the immediate highlighting and extremely fast start times; Sublime is very close to TextEdit launch speeds. VS Code crawls by comparison.

The only reason I'm using VS Code is that the plugin system is way better.


What plugin are you using to open workspaces via SSH?


There is a Microsoft-built plugin called Remote - SSH, which is part of a group called "Remote Development" that also includes WSL and Docker versions. It basically runs most of VS Code on the remote, and feels just as fast as local on a decent connection. Most of the heavy lifting is done on the remote, so things like full project searches or linting don't have to go through SSH to access the files.

I use it for almost all of my development. I launch an EC2 instance with my projects, and all of my code and data stays in my dev VPC. I can connect from my laptop or workstation, and I can spin up extra dev environments if I am working on multiple projects. Plus, since the projects usually involve a pretty big data set, I don't have to download it locally, and it can be replicated quickly within the VPC for each dev environment.

The SSH extension even knows how to forward ports back and you can add/remove port forwards from the vscode ui.



They are probably using "Remote - SSH" by the VSCode team. It was a big part of what convinced me to switch from Pycharm. That and being able to work on C++ code from the same tool.


That's a good point and also one of the things I kinda like about Alumina. You can do things like this, and the file will only be closed at the end of the function rather than the end of the if block.

    let stream: &dyn Writable<Self> = if output_filename.is_some() {
        let file = File::create(output_filename.unwrap())?;
        defer file.close();

        file
    } else {
        &StdioStream::stdout()
    };


I wouldn't say it was very difficult, but it did take quite a bit of time. Apart from some basic principles (no GC, no RAII, "everything is an expression"), I basically kept adding features whenever I hit some pain point trying to write actual programs in Alumina. If I were to do it again, I'd probably be more methodical, but anyway, here we are :)

Protocols were probably the trickiest feature of the language to figure out. As for the compiler itself, surprisingly, the biggest hurdle to get over was the name resolution. It's a tiny part of the compiler today, but everything else was much more straightforward.

I don't have a formal CS background, but I have been coding for a long time. I read the Dragon Book and would recommend it to anyone writing a compiler, even though it's a bit dated.

I don't know Racket or LISP myself so I cannot comment on that part.


No native compilation to WASM yet, but since the compiler outputs self-contained C, it should be fairly easy to do it with Emscripten.

The sandbox is running the code server-side in an nsjail container.

As for unwrap, I feel you! The try expression (expr?) is supported, which makes it look a bit nicer, but I'm still trying to figure out a good idiom for when you actually want to do specific things based on whether the result is ok or err.

Alumina does not have Rust-style enums (tagged unions) or the match construct, which makes it a bit tricky.


Scoped destruction is awesome in general, and I agree that it is superior to defer.

I think one case where defer might be nicer is for things that are not strictly memory, e.g. inserting some element into a container and removing it after the function finishes (or setting a flag and restoring it).

This can be done with a guard object in RAII languages, but it's a bit unintuitive. Defer makes it very clear what is going on.


> This can be done with a guard object in RAII languages, but it's a bit unintuitive

Some syntactic sugar, like Python’s “with” should help with that, shouldn’t it?


Python context managers are actually very similar to guard objects in C++ and Rust.

What I meant was something like this (could also be done with `contextlib`, but it's also verbose)

    seen_names = set()

    class EnsureUnique:
        def __init__(self, name: str):
            self.name = name
        
        def __enter__(self):
            if self.name in seen_names:
                raise ValueError(f"Duplicate name: {self.name}")
            seen_names.add(self.name)

        def __exit__(self, exc_type, exc_value, traceback):
            seen_names.remove(self.name)


    def bar():
        with EnsureUnique("foo"):
            do_something()
            ...
With defer this could be simplified to

    static seen_names: HashSet<&[u8]> = HashSet::new();

    fn bar() {
        if !seen_names.insert("foo") {
            panic!("Duplicate name: foo")
        }
        defer seen_names.remove("foo");

        do_something();
    }


Honestly, the with example seems simpler if you ignore what it takes to build a context manager (which isn’t all that hard).

Maybe it’s just that I’ve never used defer before, but I do use Python’s with whenever I get a chance. Not like that, though: I don’t really understand what the code is trying to achieve by removing the name at the end. I use it to close resources at the end of the block, and even then only when it makes sense for what I’m doing.

Using a context manager like your example is just busywork IMHO; it's easier to just write the code out linearly like the defer example.


It's not that it's hard, it's just that it is not inline, so reading it requires a mental context switch: the CM is defined outside, even when it's doing something specific to one call site.

The most common problem that defer is trying to solve is cleanup when the function returns early (usually because of an error). Writing the cleanup code inline before each early return results in code duplication.

C#/Java/JavaScript have try/finally for this, C has the "goto cleanup" idiom, and C++ and Rust have guard objects. Go and Alumina have defer.
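The try/finally variant of the pattern is easy to show in Python; a minimal sketch with hypothetical names (`write_report`, `records`):

```python
def write_report(path, records):
    # Acquire the resource that must be released on every exit path.
    f = open(path, "w")
    try:
        f.write("header\n")
        if not records:
            # Early return: without try/finally, f.close() would have
            # to be duplicated here.
            return False
        for r in records:
            f.write(f"{r}\n")
        return True
    finally:
        # Runs on the early return, the normal return, and on
        # exceptions, so the cleanup is written exactly once.
        f.close()
```

The finally block plays the same role as a defer statement placed right after the open.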


There's `contextlib.closing` for objects that do not support the context manager protocol but should be closed.

And then one can simulate defer in the spirit of the `atexit` module with a single context manager (say `finalizer`), defined only once, which could be used as:

    with finalizer() as defer:
        ...
        req = requests.get('https://w3.org/')
        defer(req.close)  # just like contextlib.closing
        ...
        a_dict['key'] = value
        defer(a_dict.__delitem__, 'key')
        ...
        defer(print, "all defers ran", file=sys.stderr)
        ...
The `__call__` of finalizer adds callables with their *args and **kwargs to a FIFO or a stack, and its `__exit__` will call them in sequence.
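A minimal sketch of such a `finalizer` (I've picked a stack, so deferred calls run in LIFO order like Go's defer; everything besides the `finalizer` name is my own assumption):

```python
class finalizer:
    """Collects deferred calls and runs them when the block exits."""

    def __init__(self):
        self._deferred = []

    def __enter__(self):
        # Returned as the `defer` name in `with finalizer() as defer:`.
        return self

    def __call__(self, func, *args, **kwargs):
        # Record the callable together with its arguments.
        self._deferred.append((func, args, kwargs))

    def __exit__(self, exc_type, exc_value, traceback):
        # Run deferred calls in LIFO order, even if the block raised.
        while self._deferred:
            func, args, kwargs = self._deferred.pop()
            func(*args, **kwargs)
        return False  # do not suppress exceptions


ran = []
with finalizer() as defer:
    defer(ran.append, "deferred first")
    defer(ran.append, "deferred second")
    ran.append("body")

print(ran)  # ['body', 'deferred second', 'deferred first']
```

The standard library's `contextlib.ExitStack` already provides this via its `callback(func, *args, **kwargs)` method, so in practice you'd probably reach for that instead.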

