why do I get so annoyed every time I read the word "slop" used like this? I have the same reaction with "enshittification". am I just getting grumpy and old?
it triggers the same eye roll as the schoolyard-bully nicknames so popular in politics right now: bite-sized, zero-effort, fashionable takedowns that suffocate any attempt at genuine discourse.
I've always preferred the word sludge. A mixed-up waste byproduct of something that was already made, accumulating in low places, ignored when possible but capable of being toxic and clogging up things that are supposed to work better.
I think these words are useful because they convey a feeling of disenchantment people are experiencing with technology. "You say this is progress, but the experience keeps getting shittier. You say this model's output is the next big thing, but my plate is filled with indistinguishable slop."
I would point out that what they're criticizing is also lazy and driven by trends: the reflexive acceptance that whatever is new is inevitable and must be embraced. To me, "slop" especially feels like splashing someone with a bucket of water to try to wake them from a stupor.
I feel as you do but I also recognize that I am a bit defensive with regard to LLMs.
And maybe I'm a little too optimistic? Because I see a world in a few years when AI is producing content good enough that those still calling it "slop" will come across as a little shrill.
If it's AI art/vid with an AI voice reading an AI script (as has become common on YouTube), it will always be slop, regardless of how high quality the output is.
I want to know a person's ideas, not a computer's regurgitation of others'. It's low effort and usually lacks a point.
Now it doesn't have zero usefulness in writing/the arts. Probably tons, tbh. For instance, someone using an AI voice because they aren't an English speaker and want to talk to that audience, or using it to clean up grainy film, is different (in my opinion) from genning the writing or art.
Things made without enough human in the loop, I've found, lack purpose and identity. I don't see AI changing that. If it wasn't a good idea from the start, AI isn't gonna fix it. No amount of awesome CGI or A-list actors saves a terrible script.
The only one I see pushing stuff like AI music is Spotify, so it doesn't have to pay royalties, but everyone I speak to hates it: the listeners, the artists, and the record labels those models stole from. Probably instrument and audio software makers too. When people figure out a pic is AI, they voice frustration and embarrassment.
There's more in the word 'slop' than just bad content. Comments/posts on here or Reddit often get slaughtered solely because they were written by AI and the user wasn't skilled enough to hide it. Some people just don't like reading something that a machine trying to sound like a person wrote.
I don't doubt we will advance to the stage where AI output is on the same level quality-wise, but I doubt most people will want AI content while human-made stuff is available. It will still be considered low-effort slop by many, I believe.
I've been trying to beat this point in and failing. If a parameter type creates "colors", you can extrapolate that to an infinite set of colors in every single language and every single standard library, and the discussion on colors becomes meaningless.
Some people are so focused on categorical thinking that they are missing the forest for the trees.
The colors are a means of describing an observed outcome -- in Node's case, callback hell; in Rust's, 4 different standard libraries. Whatever it may be, the point is not that there are colors, it's the impact of there being colors.
> But there is a catch: with this new I/O approach it is impossible to write to a file without std.Io!
This sentence just makes me laugh, like it's some kind of "gotcha". It is the ENTIRE BASIS of the design!
> you can extrapolate that to an infinite set of colors in every single language and every single standard library, and the discussion on colors becomes meaningless.
It's more that discussion about most of them becomes meaningless, because they're trivial. We only care when it's hard to swap between "colours", so e.g. making it easy to call an Io function from a non-Io function "removes" the colouring problem.
> so e.g. making it easy to call an Io function from a non-Io function "removes" the colouring problem.
Exactly. In golang (which is also a cooperatively multithreaded runtime, if I understand correctly), calling a function that needs IO does not infect the caller's type signature.
Another poster upthread identified the exact problem: async/await contexts are not first-class values, they are second-class citizens. If they were values, then you could just stick the context in a struct/class and pass that around instead, and avoid having to refactor call chains every time something changes. It's their second-class status that forces the "colouring" into the function signature itself at each point. This is also why ordinary first-class values do not introduce colours, i.e. you can hide new values/parameters inside other types that are already part of the function signature, thus halting the propagation/virality of the change.
Of course, if these async contexts were first-class citizens then you've basically just reinvented delimited continuations, and that introduces complications that compiler writers want to avoid, which is why async/await are second-class citizens.
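To make the "context as a value" move concrete, here's a minimal sketch in Rust, assuming tokio, whose `runtime::Handle` is exactly such a first-class context value (the `Client` type and `fetch` method are made up for illustration):

```rust
// Sketch: the async context carried around as an ordinary value.
// Assumes tokio; `Client` and `fetch` are made-up names.
use tokio::runtime::{Handle, Runtime};

struct Client {
    handle: Handle, // the "context" stashed in a struct, as described above
}

impl Client {
    // Plain sync signature: adding I/O here doesn't force callers to
    // become async, because the context travels inside `self`.
    fn fetch(&self) -> String {
        self.handle.block_on(async { "data".to_string() })
    }
}

fn main() {
    let rt = Runtime::new().unwrap();
    let client = Client { handle: rt.handle().clone() };
    println!("{}", client.fetch());
}
```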
Because async/await are not values, they are ways to run/structure your code. That's why they are so infectious. If you don't want the division the only solution is to make everything of one kind. Languages like C make everything sync/blocking, while languages like Go make everything async.
They are computations that produce values. Computations can be reified as values. What do you think functions and threads are?
As I described above, "delimited continuations" are values that subsume async/await and many other kinds of effects. You can handle async/await like any other value if they were reified as delimited continuations, but this makes the compiler writer's life much more difficult.
> As I described above, "delimited continuations" are values that subsume async/await and many other kinds of effects.
Supporting delimited continuations forces some specific ways of performing computations. They are akin to making everything async, which proves my point: you have to make everything of one kind in order to solve the problem.
> They are akin to making everything async, which proves my point: you have to make everything of one kind in order to solve the problem.
You can use ordinary direct style compilation, but all references to stack values simply have to be relative offsets, then a simple implementation of shift/reset is just capturing context and copying stack fragments, which you can do using setjmp/longjmp in C (although there are better ways [1]).
This is not akin to making everything async, nor is everything "of one kind", whatever that means. A delimited continuation is very much its own kind of thing, distinct from other values, and doesn't have to influence the function call/return semantics unless you're targeting a less flexible runtime like the JVM.
> but all references to stack values simply have to be relative offsets
Then such references are no longer pointers, and in order to have a generic reference (that can point to both stack memory and heap memory) you have to store additional data (which one of them it points to) and conditionally use the correct one any time you access it. This is a very invasive change to the memory model.
> then a simple implementation of shift/reset is just capturing context and copying stack fragments, which you can do using setjmp/longjmp in C
That sounds like you're just reinventing green threads then, which basically forces everything to be async once you start noticing its issues.
> This is not akin to making everything async, nor is everything "of one kind", whatever that means.
Sure, if everything has to be able to "wait" for the result of a continuation then you're forcing async support on everything.
If instead you implement continuations as some kind of monad that users have to return from their functions, then you've got function coloring, because you can't normally wait for the result of a continuation from a function that does not return a continuation.
> the only solution is to make everything of one kind
That's not really the case. All you really need is a way to run async code from a sync function; "keep doing this async thing until it's done" is the primitive you need, and some languages/runtimes offer this.
Going by that reasoning, Rust solved the colored function problem too: when calling an `async` function from a sync one, use `block_on`; when calling a sync blocking function from an `async` one, use `spawn_blocking`. But somehow people are not happy with this.
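For reference, this is roughly what those two escape hatches look like (a sketch assuming tokio; the function names are made up):

```rust
// Sketch of the two escape hatches, assuming tokio; names are made up.
use std::time::Duration;

async fn fetch_async() -> u32 {
    tokio::time::sleep(Duration::from_millis(10)).await;
    42
}

fn heavy_sync_work() -> u32 {
    std::thread::sleep(Duration::from_millis(10)); // stands in for blocking I/O
    7
}

// sync -> async: drive the future to completion right here.
fn sync_caller(rt: &tokio::runtime::Runtime) -> u32 {
    rt.block_on(fetch_async())
}

// async -> sync: push the blocking call off the async worker threads.
async fn async_caller() -> u32 {
    tokio::task::spawn_blocking(heavy_sync_work).await.unwrap()
}

fn main() {
    let rt = tokio::runtime::Runtime::new().unwrap();
    println!("{} {}", sync_caller(&rt), rt.block_on(async_caller()));
}
```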
Yes, it's quite curious. Having used both block_on and spawn_blocking, and not being worried about what "colour" my function is, I am also quite confused about the fuss.
On a practical note, since Rust doesn't standardise on an async runtime, it would be more accurate to say tokio solved the coloured function problem, for whatever that means. Or any and everyone else that made it easy to call one coloured function from another.
Aside from the ridiculous argument that function parameters color them, the assertion that you can't call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in.
To me, there's no difference between the IO param and async/await. Adding either one causes it to not be callable from certain places.
As for the second thing:
You can do that, but... You can also do this in Rust. Yet nobody would say Rust has solved function coloring.
Also, check this part of the article:
> In the less common case when a program instantiates more than one Io implementation, virtual calls done through the Io interface will not be de-virtualized, ...
Doing that is an instant performance hit. Not to mention annoying to do.
> Doing that is an instant performance hit. Not to mention annoying to do.
The cost of virtual dispatch on the IO path is almost always negligible. It is literally one conditional on top of a syscall. I doubt you can even measure the difference.
Sure you can. An `async` function in JavaScript is essentially a completely normal function that returns a promise. The `async`/`await` syntax is convenient sugar for working with promises, but the issue would still exist without the sugar.
More to the point, the issue would still exist even if promises didn't exist — a lot of Node APIs originally used callbacks and a continuation-passing style approach to concurrency, and that had exactly the same issues.
Other commenters have already provided examples for other languages, and it's the same for Rust: async functions are just regular functions that return an impl Future type. As a sync function, you can call a bunch of async functions and return the futures to your caller to handle, or you can block your current thread with the block_on function typically available through a handle (similar to the Io object here) provided by your favorite async runtime [0].
In other words, you don't need such an Io object upfront: You need it when you want to actually drive its execution and get the result. From this perspective, the Zig approach is actually less flexible than Rust.
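As a quick sketch of that (assuming tokio only for the final `block_on`; the names are made up):

```rust
// Sketch: an async fn is just a function returning a Future value,
// which a sync caller may hand upward instead of driving itself.
use std::future::Future;

async fn lookup(id: u32) -> u32 {
    id * 2
}

// A sync function that calls an async function without awaiting it:
// it simply returns the (lazy, not-yet-running) future to its caller.
fn make_work(id: u32) -> impl Future<Output = u32> {
    lookup(id)
}

fn main() {
    let fut = make_work(21); // nothing has executed yet
    let rt = tokio::runtime::Runtime::new().unwrap();
    println!("{}", rt.block_on(fut)); // prints 42
}
```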
If you have a sync/non-IO function that now needs to do IO, it becomes async/IO. And since IO and async are viral, its callers must also now be IO/async and call it with IO/await. All the way up the call stack.
You’re allowed to not like it, but that doesn’t change that your argument that this is a form of coloring is objectively false. I’m not sure what Rust has to do with it.
Sure it is function coloring, just in a different form. `async` in other languages is something like an implicit parameter; in Zig they made this implicit parameter explicit. Is that better/more ergonomic? I don't know yet. The sugar is different, but the end result is the same. Unless you can show me a concrete example of something the Zig approach can do that is not possible in, say, Rust, I don't buy that it's not just another form of function coloring.
It’s more like adding a runtime handle to the struct.
Modulo that, I'm not sure any language with a sync/async split has an "async" runtime built entirely out of sync operations. So a library can't take a runtime from a caller and get whatever implementation the caller decided to use.
> I'm not sure any language with a sync/async split has an "async" runtime built entirely out of sync operations.
You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtime end up like this.
I've implemented Future (Rust) on a struct for a Windows operation based only on C calls into the OS. The struct maintains everything needed to know the state of the IO, and while I coupled the impl to the runtime for efficiency (I've written it too), from memory that coupling isn't strictly necessary.
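A stripped-down sketch of that shape, with a simple countdown standing in for the OS-side completion state (assuming tokio just to drive it; all names are made up):

```rust
// Sketch of hand-implementing Future on a struct that tracks its own
// I/O state; a countdown stands in for "has the OS finished yet".
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct FakeIo {
    polls_left: u32, // stands in for OS-side completion state
}

impl Future for FakeIo {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polls_left == 0 {
            Poll::Ready("io complete")
        } else {
            self.polls_left -= 1;
            // A real impl would arrange for the OS/runtime to call the
            // waker on completion; here we just ask to be polled again.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn main() {
    let rt = tokio::runtime::Runtime::new().unwrap();
    println!("{}", rt.block_on(FakeIo { polls_left: 3 }));
}
```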
> You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtime end up like this.
While C doesn't have async, OSes generally provide APIs which are non-blocking, and that is what async runtimes are implemented on top of.
By sync operations I mean implementing an "async" runtime entirely atop blocking operations, without bouncing them through any sort of worker threads or anything.
It's funny, but I do actually like it. It's just that it walks like a duck, swims like a duck and quacks like a duck.
I don't have a problem with IO conceptually (but I do have a problem with Zig ergonomics, allocator included). I do have a problem with claiming you defeated function coloring.
I do want to say that I regretted that comment as nonconstructive after it was too late to edit it. Others in the thread are representing my argument better than I can or care to.
I mean... you use `await` if you've used `async`. It's your choice whether or not you do; and if you don't want to, your callers and callees can still freely `async` and `await` if they want to. I don't understand the point you're trying to make here.
To be clear, where many languages require you to write `const x = await foo()` every time you want to call an async function, in Zig that's just `const x = foo()`. This is a key part of the colorless design; you can't be required to acknowledge that a function is async in order to use it. You'll only use `await` if you first use `async` to explicitly say "I want to run this asynchronously with other code here if possible". If you need the result immediately, that's just a function call. Either way, your caller can make its own choice to call you or other functions as `async`, or not to; as can your callees.
The moment you take or even know about an io, your function is automatically "generic" over the IO interface.
Using stackless coroutines and green threads results in a completely different codegen.
I just noticed this part of the article:
> Stackless Coroutines
>
> This implementation won’t be available immediately like the previous ones because it depends on reintroducing a special function calling convention and rewriting function bodies into state machines that don’t require an explicit stack to run.
>
> This execution model is compatible with WASM and other platforms where stack swapping is not available or desirable.
I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.
If `foo` needs to do IO, sure. Or, more typically (as I mentioned in a different comment), it's something like `const x = something.foo()`, and `foo` can get its `Io` instance from `something` (in the Zig compiler this would be a `Compilation` or a `Zcu` or a `Sema` or something like that).
> Using stackless coroutines and green threads results in a completely different codegen.
Sure, but that's abstracted away from you. To be clear, stackless coroutines are the only case where the codegen of callers is affected, which is why they require a language feature. Even if your application uses two `Io` implementations for some reason, one of which is based on stackless coroutines, functions using the API are not duplicated.
> I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.
Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior -- just like passing a pointer allocated with one `Allocator` into the `free` of a different `Allocator` does. This really isn't a problem. Even with allocators, it's pretty rare for people to mess this up, and with allocators you often do have multiple of them available in one place (e.g. a gpa and an arena). In contrast, it will be extraordinarily rare to have more than one `Io` lying around. Even if you do mess it up, the IB will probably just trip a safety check, so it shouldn't take you too long to realise what you've done.
> Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior
Thinking about it more, you've possibly added even more colors. Each executor adds a different color, and while each function is color-agnostic (but not colorless), futures aren't.
> it will be extraordinarily rare to have more than one `Io`
Will it? I can immediately think of a use case where a program might want to block for files on disk, but defer fetching from network to some background async executor.
But that's not even the case, because it's certainly possible to write a function that accepts an object that holds onto an io (and uses it in its vtable calls) and equally well accepts an object that doesn't have anything to do with io [0]. The consumers of those objects don't have to care, so there's no coloring.
[0] And this isn't even really a theoretical matter; colorblind object passing is extremely useful for, say, mocking. Oh, I have a database lookup/remote API call, which obviously requires io, but I want fast tests, and I can mock it with an object with preseeded values/expects -- hey, that doesn't require IO.
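In Rust terms, the same mocking trick looks like this (a sketch; all names are made up):

```rust
// Consumers depend on a trait, not on whether any I/O happens behind it.
trait UserStore {
    fn lookup(&self, id: u32) -> Option<String>;
}

// The "real" store would hold a connection handle and hit the network.
struct DbStore;
impl UserStore for DbStore {
    fn lookup(&self, _id: u32) -> Option<String> {
        unimplemented!("does real I/O in production")
    }
}

// The mock store: preseeded values, no I/O anywhere.
struct MockStore {
    seeded: Vec<(u32, String)>,
}
impl UserStore for MockStore {
    fn lookup(&self, id: u32) -> Option<String> {
        self.seeded.iter().find(|(k, _)| *k == id).map(|(_, v)| v.clone())
    }
}

// The consumer is colorblind: identical code either way.
fn greet(store: &dyn UserStore, id: u32) -> String {
    match store.lookup(id) {
        Some(name) => format!("hello, {name}"),
        None => "who?".into(),
    }
}

fn main() {
    let mock = MockStore { seeded: vec![(1, "alice".into())] };
    assert_eq!(greet(&mock, 1), "hello, alice");
}
```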
I think in practice the caller still needs to know.
If I call `a.foo()` and `a` holds and is using a stackless coroutine IO, but the caller is being executed from a green thread IO, then as was said before, I'm hitting UB.
But, I do like that you could skip/mock IO for instance. That's pretty neat.
> Adding either one causes it to not be callable from certain places.
you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?
as a trivial example, the `fn main` entrypoint in zig will never take an io parameter... how do you suppose you'd bootstrap the io value that you'd eventually need? this is unlike other languages, where main might or might not be async.
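for what it's worth, this is roughly what a "global io" looks like in rust terms (a sketch assuming tokio; names made up):

```rust
// A process-wide runtime that sync code reaches without threading a
// handle/io parameter through every call.
use std::sync::OnceLock;
use tokio::runtime::Runtime;

fn global_rt() -> &'static Runtime {
    static RT: OnceLock<Runtime> = OnceLock::new();
    RT.get_or_init(|| Runtime::new().expect("failed to build runtime"))
}

// No io/runtime parameter in the signature; we grab the global one.
fn plain_sync_function() -> u32 {
    global_rt().block_on(async { 41 + 1 })
}

fn main() {
    println!("{}", plain_sync_function());
}
```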
>you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?
How will that work with code mixing different Io implementations? Say a library pulled in uses a global Io instance while the calling code is using another.
I guess this can just be shot down with "don't do that", but it feels like a new kind of pitfall to get into.
Zig already has an Allocator interface that gets passed around, and the convention is that libraries don't select an Allocator; they only provide APIs that accept allocators. If there's a certain process that works best with an arena, the API may wrap a provided allocator in an Arena, but it doesn't decide on its own underlying allocator for the user.
For Zig users, adopting this same mindset for Io is not really anything new. It's just another parameter that occasionally needs to be passed into an API.
while not really idiomatic, as long as you let the user define the Io instance (e.g. with some kind of init function), it doesn't really matter how that value is accessed within the library itself.
that's why this isn't really the same as async "coloring"
> you can’t call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in
that's not true. suppose a function foo(anytype) takes a struct, and expects method bar() on the struct.
you could send foo() the struct type Sync whose bar() does not use io. or you could send foo() the struct type Async whose bar uses an io stashed in the parameter, and there would be no code changes.
if you don't prefer compile time multireification, you can also use type erasure and accomplish the same thing with a vtable.
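in rust terms, the two options look like this (a sketch; names made up): generics play the role of comptime multireification, and `dyn` is the type-erased vtable version. in neither case does the caller's signature change based on whether bar() touches io:

```rust
trait HasBar {
    fn bar(&self) -> u32;
}

struct SyncImpl;
impl HasBar for SyncImpl {
    fn bar(&self) -> u32 {
        1 // no io involved
    }
}

struct AsyncImpl {
    io_token: u32, // stands in for an io handle stashed in the struct
}
impl HasBar for AsyncImpl {
    fn bar(&self) -> u32 {
        self.io_token // would use the stashed io here
    }
}

// monomorphized per concrete type, like zig's anytype:
fn foo_generic(t: &impl HasBar) -> u32 {
    t.bar()
}

// type-erased, dispatched through a vtable:
fn foo_dyn(t: &dyn HasBar) -> u32 {
    t.bar()
}

fn main() {
    assert_eq!(foo_generic(&SyncImpl), 1);
    assert_eq!(foo_dyn(&AsyncImpl { io_token: 2 }), 2);
}
```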
It's hard to parse your comment, but I think we are agreeing? I was refuting the point of the parent. You have given another example of calling an IO-taking function inside a non-IO-taking function; the example I gave was initializing an IO inside the non-IO-taking function. You could also, as pointed out elsewhere, use global state.
This is very well written, and very exciting! I especially love the implications for WebAssembly -- WASI in userspace? Bring your own IO? Why not both!
Yes, `container` is like the `docker` CLI: 'I am a developer and I want to run a container'. Containerization is for packaging OCI container images as sidecars into Swift .apps; you could distribute your app with postgres 'built in' (but running as a container), and the user doesn't need to ensure it's installed and running separately or anything.
This is just so inspirational and cool. I’ve had so many of these concepts floating around in my head, without the critically necessary capability, or followthrough. I’m sure others have as well. It’s very cool to see someone execute on it.
I wonder about operating systems with isolated applications like this providing some kind of attestation.
Is it even possible to do that in a non-user hostile way?
The use case I daydream about is online competitive gaming. It's a situation where a majority of users are willing to give up software freedom for a fair community. Consoles used to be a locked-down way to guarantee other users were participating fairly, but this is increasingly less the case as cheaters become more advanced. Solving this problem necessarily locks down access to software and hardware, as far as I can figure. From a game theory perspective I can't see any other approach. Enter "kernel level anticheat"; aka rootkits, depending on who you ask.
So I guess I wonder if virtualization at a level like this can somehow be a part of the solution while preserving software freedom, user privacy, and security where the user still wants it.
Don't want to derail from the interesting technical conversation, so feel free to ignore.
This is maybe more of a philosophical answer, but IMO the answer is to play games with people you trust. I've recently rediscovered the joy of LAN parties (both Halo and AoE2) and man, it's so much better than the countless hours I spent getting pissed at faceless strangers in online games.
I wish there were more games designed for local multiplayer.
I think that's an interesting observation. It seems to be a philosophical discussion on social trust.
Anticheat is an attempt to control the behavior of the community. Your solution is to just control who is in your community.
In a way, I think it's the same lessons learned from social networks. You see it in the trends moving away from global communities back toward smaller online communities; private Discord servers, BlueSky follow/block lists, and so on.
You seem to frequently appear in threads involving WebAssembly. Each time you do, I see you point out how bytecode VMs have been done before. And, every time, it doesn’t contribute anything interesting to the conversation.
I don't mean this in a hostile way; however, it has become frustrating to find this predictable, low-effort comment from you every time I open an HN thread on WebAssembly, and, frankly, I've begun collapsing comments whenever I see your username.
Every iteration on the concept brings different approaches and tradeoffs made from lessons learned from previous attempts. This is just how engineering, and our industry, works.
I don't mean disrespect. I assume you are probably speaking from a place of experience. I would be much more interested in hearing your thoughts on the minutiae than basic pattern recognition.
I recommend not engaging. I've tried once or twice, and when I shared a concrete example[0] of a real problem wasm solves for me that I'm not aware of another way to do, pjmlp just stopped responding. I'm not sure what their motivations are but it's too bad they choose to distract from someone's awesome project the person has spent years working on.
And yet you decided to spend around 5 minutes writing to me, instead of collapsing the comment.
Here is a tip for you as well: learn from the past before doing any kind of WebAssembly marketing about how innovative it happens to be versus the JVM, as if nothing else had ever been done.
Yes, rather than ignore you, this time I chose to spend five minutes of my life writing a well meaning piece of critical feedback. Whether that was a waste of my time is left to you.
Oh wow. Took me several minutes of aimlessly poking around.
Actually, even without that, the grouping and the hierarchy don't make sense. Why are some things top-level items and other under "general"? Same for "privacy and security" (I assume that's what it's called in English), for some reason "passwords", "lock screen" and "touch ID and password" are separate top-level items even though they do very much belong to "privacy and security".
> Your smoking gun is to not use the app in the most intuitive and obvious way?
Search isn't the most intuitive and obvious way for everyone. Adding a search function also isn't an excuse to totally ignore good UX design and information hierarchy.
I've been a sysadmin my entire career, and still do end-user support occasionally. You'd be surprised how few people use the search function, for anything, on their computers. Just opening the Windows start menu and showing them they can search there is like black magic to a frighteningly large number of people.
I've met fellow Mac users that don't even know spotlight exists, and navigate through the OS and every app via mouse and clicking around.
So yeah, just throwing a search box in your app as an excuse for ignoring the experience of navigating it any other way is bad UX design.
There's a search bar in the System Settings app, you don't need to know what Spotlight is.
I'm staying with family and just handed my MacBook Pro, with the Settings app open, to my 64-year-old mother, who has never used a Mac. After explaining the concept of a default browser in non-leading language (not mentioning the word "default"), her first thought was to click Display.
When nothing familiar was there her next thought was to click Search and then type in Browser and she made the connection of "Default Browser" to the concept I mentioned immediately.
yeah, I'm one of those who usually ignores any built in search option. I just default to assuming that it's an adware infested trap that will provide no value, only "engagement". Windows and Google conditioned this behaviour in me over the years.
By the way, macOS has a super useful search field under "help" in the menu bar. It searches among all menu items in the current app and even shows you where they are. Very non-obvious, but once you try it, you don't understand how you lived without it.
Different people may approach the same UI differently. A good practice in UX design is to put things where people expect to find them — and duplicate them if different people go looking in different places. So a working search function doesn't absolve you of having to make the structure of your screens/menus/whatever make sense.
Life is not smoking guns, objective truths, or us and thems.
I do find it amusing how disorganized the app has become, and that has become my favorite example.
I find it even more amusing that you think citing search as a primary UI path is your “smoking gun” of good information hierarchy and interface design.
> A setting's placement in the menu hierarchy "is a bad example" of the Settings app's being bad because search is available.
> Search is always available while the app is open, across all menus and functions.
> Therefore no placement or layout can be singled out as better or worse in the Settings app. All possible hierarchies or arrangements are equal.
> I unroll my Apple UX Researcher Toolkit (contents: blindfold, dart, dartboard, crack pipe), and use it to make my decision: I put the dropdown 3 levels deep under Touch ID, safe in the knowledge that I cannot be criticized, because we've also included a search bar.
It's just bad thinking. Sorry if you're upset I've called it out.
How is that setting spelled? What synonym did they use? Are there multi-word linking hyphens? Will it work with or without them? Is the search fuzzy?
And then localization comes in. Take any translated UI and the search often falls short. Did they translate the setting name? Did they translate it right, or did they just google-translate their localization plist? Will it find the setting if I spell it without accents? Which dialect does it use? And wait, I don't know how to say this specific technical word in my native language, because nobody actually uses it.
I couldn't search System Settings when I set up my laptop for over an hour because it was indexing files I migrated from my old Mac. It made for a frustrating user experience trying to set this thing up.
I mean, by your logic the whole settings app should just be a search box when you open it. Clearly there’s a use case for browsability in a settings app, so that you can discover what settings exist. Given that, it’s probably important for the location of each setting to be intuitive.
this seems unmaintained https://dayssincelastvscodefork.com/