Text editors are for cleaning up after the agents, of course. And for crafting beautiful metaprompt files to be used by the agentic prompt-crafter intelligences that mind the grunt agents. And also for coding.
This is going to sound nit-picky, but I wouldn't classify this as the model being able to say no.
They are trying to identify what they deem "harmful" or "abusive" requests and prevent their model from responding to them. The model ultimately doesn't have the choice.
And it can't say no if it simply doesn't want to. Because it doesn't "want".
Yeah we're using Ensue since it already handles the annoying infra pieces you’d otherwise have to build to make this work (shared task state + updates, event streams/subscriptions, embeddings + retrieval over intermediate artifacts). You can run the example with a free key from ensue-network.ai. This repo focuses on the orchestration harness.
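For context on what those pieces look like, here's a toy sketch in Python. This is not Ensue's actual API, just hypothetical stand-ins for the plumbing the comment refers to: shared task state with updates, event subscriptions, and naive token-overlap retrieval standing in for embeddings.

```python
# Hypothetical sketch of the infra pieces mentioned above -- NOT Ensue's API.
# Shows what you'd otherwise build yourself: shared task state, event
# streams/subscriptions, and retrieval over intermediate artifacts.
from collections import defaultdict

class MiniHarness:
    def __init__(self):
        self.tasks = {}                       # shared task state
        self.subscribers = defaultdict(list)  # event streams / subscriptions
        self.artifacts = []                   # intermediate artifacts

    def update_task(self, task_id, status):
        self.tasks[task_id] = status
        for callback in self.subscribers[task_id]:
            callback(task_id, status)         # push the update to listeners

    def subscribe(self, task_id, callback):
        self.subscribers[task_id].append(callback)

    def store_artifact(self, text):
        # Real systems would embed with a model; here we index token sets.
        self.artifacts.append((set(text.lower().split()), text))

    def retrieve(self, query):
        # Return the artifact with the largest token overlap with the query.
        q = set(query.lower().split())
        return max(self.artifacts, key=lambda a: len(a[0] & q))[1]

harness = MiniHarness()
harness.subscribe("t1", lambda tid, s: print(f"{tid} -> {s}"))
harness.update_task("t1", "done")  # prints "t1 -> done"
harness.store_artifact("draft summary of the design doc")
harness.store_artifact("benchmark results table")
print(harness.retrieve("design summary"))  # prints "draft summary of the design doc"
```

The point of a hosted service is that each of these becomes durable and multi-process instead of in-memory, which is the annoying part to build.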
> This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.
Yes, this is one of the main references for Nick Land's thesis that capitalism is (retrochronic) AI.
The one you cited is from Machinic Desire (1993). Interestingly, it also appears in the relatively unknown text Shorelines from the Making People Disappear exhibition (1993):
"What appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources." [0]
I have always been fascinated by this interpretation (that capitalism is (retrochronic) AI), which is why I have created a research project on it: https://retrochronic.com
> My concerns about obsolescence have shifted toward curiosity about what remains to be built. The accidental complexity of coding is plummeting, but the essential complexity remains. The abstraction is rising again, to tame problems we haven't yet named.
What if AI is better at tackling essential complexity too?
The essential complexity isn't solvable by computer systems. That was the point Fred Brooks was making.
You can reduce it by process re-engineering, by changing the requirements, by managing expectations. But not by programming.
If we get an LLM to manage the rest of the organisation, then conceivably we could get it to reduce the essential complexity of the programming task. But that's putting the cart before the horse - getting an LLM to rearrange the organisation processes so that it has less complexity to deal with when coding seems like a bad deal.
And complexity is one of the areas where we're still not seeing much improvement in LLMs. The common experience among people using LLM coding agents is that simple systems become easy, but complex systems still cause problems: LLMs are not coping well with complexity. That may change, of course, but that's the situation now.
I don't think the parent comment was saying it can be solved, only that the LLM paradigm is better at dealing with complexity. I agree that they are not great at it yet, but I've seen vast improvements in the past 3 months alone.
maybe they could pivot into the luxury boutique hand-crafted artisanal code market