Since coding with Cursor, I’ve spent a lot of time thinking about how much computing interfaces have changed in such a short period of time.
Just as we saw a paradigm shift from imperative to declarative interfaces, LLMs have opened the door to new kinds of interfaces.
Intent-driven interfaces are already everywhere: from AI assistants processing raw data, to Arazzo for defining how APIs map to human-scale workflows, to agents capable of handling complex tasks.
While the doomer view is “robots will take all our jobs”, my optimistic take is that we finally have a human-centric way of interacting with machines, and it’s going to supercharge our abilities.
If the faster one creates naive, hard-to-discover bugs, is it really faster? I don't think we understand the long-term consequences (or maintenance cost) of LLM-generated code. So far the anecdotal results haven't been great.
I've become increasingly frustrated with having to work with other people's AI-"assisted" code. I can tell when I depart from reading sensible human-written code and enter the land of Copilot where all bets are off. Just yesterday I discovered that some environment variables someone had configured on a service weren't actually doing anything, because the service didn't support being configured via environment variables, let alone those particular ones. It's stuff like that which really gets me about all this: you can no longer assume a coherent theory of mind behind what you're reading. You can no longer trust that because something looks specific and intentional (like environment variable names), that it actually came from reality and not a confabulation. It's breaking the social contract that makes collaboration work.
I don’t support stupidly applied AI-generated code either, but this isn’t a new thing at all; before, it was code pasted from Stack Overflow and hammered at until it sort of worked.
Stack Overflow rarely surfaced a solution to your particular problem, only solutions to tangential or otherwise very similar problems. You still had to reason about the poster's problem context and see whether it really matched your own. You still had to have some understanding.
LLMs present a solution as if it is 100% in the context of your problem. You arrive at a solution without being forced to think at all about whether it actually applies to your problem.
Without being forced to, many will not. I teach programming and I can tell you it is so so obvious when students are just blasting shit from ChatGPT into their submissions without thought.
If I understand correctly, this means mapping a chat to a choice of clear API calls. I buy it.
The only issue I can think of is the lack of reporting without asking. When I log in to a site, I already want to see the most applicable information without having to ask for it. If I exclusively have to ask a chatbot for basic info, I may miss out on a lot of what I need to know.
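The "chat mapped to a choice of clear API calls" idea can be sketched in a few lines. In practice the routing would be done by an LLM with function/tool calling; here a keyword-based stand-in keeps the example self-contained, and all names (`route_intent`, the endpoints) are hypothetical:

```python
# Minimal sketch of an intent-driven interface: free-form chat text is
# routed to one of a small set of well-defined API calls. The keyword
# matcher below is a stand-in for an LLM doing the intent classification.

API_CALLS = {
    "balance": lambda: {"endpoint": "GET /accounts/me/balance"},
    "transactions": lambda: {"endpoint": "GET /accounts/me/transactions"},
}

def route_intent(message: str) -> dict:
    """Pick the API call whose keyword appears in the chat message."""
    text = message.lower()
    for keyword, call in API_CALLS.items():
        if keyword in text:
            return call()
    # Fall back to asking for clarification instead of guessing an endpoint.
    return {"endpoint": None, "clarify": "Which account info do you need?"}

print(route_intent("what's my balance?"))
print(route_intent("show me something"))
```

The key property is that the model only chooses among predeclared, auditable calls rather than generating arbitrary behavior; the "reporting without asking" concern would then be handled outside the chat loop, e.g. by the site invoking the same calls proactively on login.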