auvrw's comments (Hacker News)

> We're essentially subject to the whims of someone who is letting us do something for as long as we play nice.

Isn't that the spirit of the times when working with LLMs? If one asks GipPTy for a Prisma schema or anything else meant to be fed into a traditional parser, we're at the whims of the attention blocks and layers to write out something that doesn't become a parse error. One can turn the temperature down, fine-tune, or self-host a model, but does that guarantee the syntax will be correct (much less the semantics)?


> this isn’t what Kubernetes is really for.

i'm guessing that writing dyson in Nim is a tacit acknowledgement of that: if this were something geared toward production ecosystems, it would be in golang like kubernetes itself. although there is the helm luafication, so perhaps dyson is part of a fringe of non-golang k8s auxiliaries.

another way to implement this is with a 'static CMS,' where there are still static pages, except built into a situated deploy. the 'cultish' (cultic? anyway) aspect of k8s appears to be phrasing all the things in terms of k8s constructs rather than using k8s constructs as a foundation and abstracting out.

i learned about 'rollout' from the CI portion of this post, although my initial attempts to search for a comprehensible description of it have failed.


Honestly dyson is just something I wrote for myself to see how difficult it would be to write. I don't expect anyone else to use it. The tool's name is also a pun: you'd need to terraform a dyson sphere before you(r apps) can live in it.

I just wanted something with easy templating syntax like this: https://github.com/Xe/within-terraform/blob/master/dyson/src...

The fact that cligen (https://github.com/c-blake/cligen) exists too makes it super easy for me to define subcommands of the thing: https://github.com/Xe/within-terraform/blob/master/dyson/src...


additionally, there's (c) kernel and/or embedded.

in particular, i've read that writing kernels isn't feasible in Go. it appears not to be an issue of performance: something about the runtime prevents writing kernels altogether.

like the linked article says

https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-repl...

rust has a lot of features (generics, etc.) apart from its memory management strategy.

i'm also unaware of a procedural language that has the feature stability of C or Go as well as the ability to write kernels or embedded code with the memory safety of rust.

currently wondering how easy it'd be to tack rust's memory model onto another language, alef in particular.


i agree that the general-purpose programming language space is fairly crowded ... the lisp dialect/user ratio especially so.

DSLs, otoh, are in short supply. while awk or plain sed are great for shell programming, this is the only (open source) DSL i'm aware of targeting certain types of NLP-esque "munging". this space is mostly full of statistical approaches, which, while conceptually pure, don't allow the kind of flexibility that would be useful in many applications.

i wonder if, eventually, the DSL portion of TXR could be sheared off (possibly via metacircular evaluation of the TXR lisp?) into something that's portable across lisps or at least to semi-standardized scheme implementations?


N. Westbury has been cloning it in Java:

https://github.com/westbury/txr-java


yes, files in plan 9 can be viewed as a way to "make objects look like files."

http://doc.cat-v.org/plan_9/4th_edition/papers/names

note that the filesystem interface doesn't extend to every operation: a syscall is still necessary to create processes, for instance. cp /proc/... doesn't do what one might think it does.

http://doc.cat-v.org/plan_9/4th_edition/papers/9

i appreciate that the original article similarly acknowledges that lisp is an "all the things (except for some things)" solution in the Single address space section. synchronization libraries (and perhaps other libs as well?) in C-based OSs need to drop into assembly to get at test-and-set kinds of operations. addressing these special cases one-by-one is something that'll need to be worked through in source in order to get any real insight, although (perhaps just from the inertia of familiarity with the original bell labs work) it's difficult to imagine the kernel as an "all the things, seriously everything" abstraction away from the machine.

----

overall, the original article and the discussion here were a bright spot in my day. it gives me hope that there's some actual thought and discussion about how to evolve operating systems intended for commodity hardware, not just generalized, "yea, that's something we could do" one-offs. i'm not as familiar with the xerox heritage mentioned elsewhere in the comments, and the view that, in some sense, linguistic abstraction might cover OSs is something to think about.

i've heard the smalltalkers catch some flak about not actually writing an os because there was (apparently) the equivalent of exec(), no fork()

http://bitsavers.org/pdf/xerox/alto/memos_1975/Alto_Operatin...

this criticism of the alto OS could be due to cognitive bias as much as anything else. the concept of processes is by now deeply ingrained in the way we conceptualize operating systems. so i have to ask: is leading with eschewing processes a clean-sheet rethink, informed by history's mistakes as well as its successes, or is limiting the number of OS primitives at the expense of less-rich interfaces actually a desirable tradeoff?

like, although everything i've heard about multics in particular seems well thought-out, thompson and ritchie were doing something substantially different by opting for bytestreams as much of the interface rather than making strict decisions about arities and so on. every rule a system makes is bound to be a rule a user will eventually want to break. i suppose the operating system's main job is to safely lift off of the hardware, not to impose further unnecessary artifice (of which hierarchical file systems AKA namespaces could be viewed as one) on the user. adding further icing onto this core objective is so much the better, so long as it's possible to scrape away and redecorate.


> i've heard the smalltalkers catch some flak about not actually writing an os because there was (apparently) the equivalent of exec(), no fork()

In the Smalltalk world, processes are just instances of the class `Process` and `Processor` is a singleton that manages all processes. You can create new processes several different ways, the most common being to send the message `fork` to a block closure.

Because everything is so late bound in a Smalltalk system, and because the objects are running and executing live all the time, it comes to resemble an OS in a lot of ways. I think the key lesson here is not just about which language is better or good for X or Y, but about how holistic computing environments are more important than languages alone. I go on a lot about HyperCard and how that was a real missed opportunity, as well as being an excellent environment. But its HyperTalk language was only a part of that environment and cannot really be evaluated on its own, for example.


i'm not totally sure what 'fractal geometry' is, although there is something called 'geometric measure theory'...

the canonical text here is Federer, although it's supposedly a tome. Krantz's _the geometry of domains in space_ appears more approachable.

...

also, note that these topics aren't totally separable.


ssh access at build time via buildkit works on os x

https://medium.com/@tonistiigi/build-secrets-and-ssh-forward...

the link you've supplied regards ssh access when running docker images, not when building them.

... i did notice that for lots of downloads, it is somewhat slower, or perhaps more prone to lag, than authenticating from inside the running image.
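for reference, the buildkit flow from the linked post looks roughly like this (the repo name is a placeholder; it assumes a reasonably recent docker with buildkit available):

```shell
# Dockerfile (first line enables the newer frontend):
#   # syntax=docker/dockerfile:1
#   FROM alpine
#   RUN apk add --no-cache openssh-client git
#   # the ssh agent socket is mounted only for this RUN step;
#   # no key material is baked into the image or its layers
#   RUN --mount=type=ssh git clone git@github.com:example/private-repo.git

# build with the host's ssh agent forwarded in:
DOCKER_BUILDKIT=1 docker build --ssh default .
```

the point being that this happens at build time, which is what distinguishes it from the run-time ssh approaches in the sibling link.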


these changed almost everything i think about:

misc. Forrest Mims notebooks, the Python Tutorial, Practical C Programming

first chapter of The Sound and the Fury, snippets of Naked Lunch, wide swaths of Ulysses

The Universe in a Nutshell, French's Special Relativity

The Gamma Function, a graph theory text whose name i don't remember, baby Rudin

Brain Structure and Its Origins, The Form Within


perhaps the

> Obviously, this is a project for an advanced electronic hobbyist and computer enthusiast.

is somewhat tongue-in-cheek: this could be the thing to get a kid who's already soldered together a bunch of (less expensive) analog hobby kits ... or the adult who's into that sort of thing.

it's pretty much the same thing as building model cars, airplanes, etc. although it doesn't teach you exactly how the thing works, the perceptual task of doing the soldering and checking out the traces gives some idea of how things piece together.

as far as vintage electronics go, IBM PCs (probably?) aren't that difficult to find for those who're essentially interested in abstracting away from the hardware.


i frequently compare workaday web-dev to general contracting (a favorable description from my perspective). no issues with the article title.

here's my issue with the article text: it describes

    I came to the Graide Network and immediately started breaking our legacy app into microservices.
as "overengineering," and perhaps it was ... or perhaps it was just not-great planning. after all, the premise of the article is that the microservices are now incurring costs that look more like plumbing than engineering.

here's where some of my experience partially aligns with the article's complaints

    Investors are guilty of pushing it on us
this has happened to CEOs at startups i've been at: [some] investors asked for diagrams, etc., and we made decisions to build invisible fanciness instead of product for users.

here's where the author begins upon the path to enlightenment

    we can build more with less

