Another problem the article doesn't mention is how much of a hassle it is to deal with permissions. Depending on the GraphQL library you are using, sure, but my general experience with GraphQL is that the effort needed to secure a GraphQL API increases a lot the more granular permissions you need.
Then again, if you find yourself needing per-field permission checks, you probably want a separate admin API or something instead.
Users are also happy with clunkiness if it gives them other things they value. Ask any Windows user. Also, video game modding.
My general experience is that clunky software is what made people tech literate, and now that everything has safety barriers and protects the user from everything tech literacy has fallen.
Cost isn't only money. In the case of linux it is time to learn to use it (which is a sunk cost on windows: already paid it). Then you need to download and install it - again windows comes by default so a sunk cost.
If somebody else admins your system. However if not there is a lot to learn. At least every distribution I've used needs manual updates from time to time. (though admittedly most people would replace the computer before I've seen anything hard happen)
Windows user here. It goes vastly further than that. I've been using Windows since version 3.0. I'm used to it to the point where it's second nature. Linux is foreign and difficult to comprehend, not least because it explicitly avoids being anything like Windows or accommodating habits people acquired from Windows. I don't like the direction Windows is going any more than anyone, and I'm avoiding Windows 11 for the time being, but as long as Linux people continue to believe that the only reason Windows users don't switch is because they don't know Linux exists, Linux will not be able to attract Windows users even as Windows goes full capitalist enshittification.
I don't remember the last time I've clicked on a Firefox icon. I've pinned it to the taskbar and I press Win+1 to use it, which is 100 times faster. I've been doing that for >10 years now.
This is acceptable. We now understand that privacy-focused solutions are not appropriate for individuals with average technological literacy, and we cannot depend on companies to self-regulate.
At present, the emphasis is on the potential of large language models (LLMs) and the related ethical considerations. However, I would prefer to address the necessity for governments or commissions to assume responsibility for their citizens concerning "social" media, as this presents a significantly greater risk than any emerging technology.
Basically everyone I know in engineering share this resentment in some way, and the AI industry has itself to blame.
People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who do not understand how LLM works nor understand how they suck, and now resent anything related to AI. But for AI companies there is a perverse incentive to push AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until they have a perfect, polished product.
I have myself had "fun" trying to discuss LLMs with non technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues. Combine that with how every "wow!" LLM example is actually just the LLM regurgitating a very common thing to write tutorials about, and people tend to over-estimate its abilities.
I use claude multiple times a week because even though LLM-generated code is trash I am open to try new tools, but my general experience is that Claude is unable to do anything well that I can't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks its a wondertool, because clearly they don't have high standards for the work they were already doing.
I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI, they do seem desperate: https://news.ycombinator.com/item?id=44050152 even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.
> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation.
There are quite a few things they can do reasonably well - but they mostly are useful for experienced programmers/architecs as a time safer. Working with a LLM for that often reminds me of when I had many young, inexperienced Indians to work with - the LLM comes up with the same nonsense, lies and excuses, but unlike the inexperienced humans I can insult it guilt free, which also sometimes gets it back on track.
> They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues.
For having a LLM operate on a complete code base there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows - after that, if you want to continue using a LLM, you'll have to make it work only on a specific subsection of the project, and manually provide the required context.
Now the "getting to 10k LOC" _can_ be sped up significantly by using a LLM. Ideally refactor stupid along the way already - which can be made a bit easier by building in sensible steps (which again requires experience). From my experiments once you've finished that initial step you'll then spend roughly 4-5 times the amount of time you just spent with the LLM to make the code base actually maintainable. For my test projects, I roughly spent one day building it up, rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I do have experience with what I'm doing.
I think there's a lot of reason to what you are saying. The 4-5 amount of time to make the codebase readable resonates.
If i really wanted to go 100% LLM as a challenge I think I'd compartmentalize a lot and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with when working on its current "compartment" (i.e the frontend or backend). Claude.md also helps a lot.
I do believe in some time saving, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM makes that thought, I often have to correct it. If i use English to explain exactly what I want it is some times ok, but then that is basically the same effort. At least that's my empirical experience.
> almost every line of code I write usually requires some deliberate though
That's probably the worst case for trying to use a LLM for coding.
A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code, and then runs a linter with a strict ruleset and warnings set to error, and allow it to run make without prompting - so the first version I get to see compiles, and doesn't cause lint to have a stroke.
Then I typically make it write tests, and include the tests in the build process - for "hey, add tests to this codebase" the LLM is performing no worse than your average cheap code monkey.
Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests, and then claim the tests were always failing, and it wasn't due to the new code it wrote.
* The most skilled workers are the most undervalued
* Make products to serve the customer
* Management is a skill, not a career path
* The only people they consider themselves to be unable to compete with are their customers, so enabling the customer to produce better content in their ecosystem is the most efficient way of producing things.
It would be naive to assume they couldn't access the data from a technical perspective. I think anyone in here would think so. The problem is regular customers who aren't technical and don't have much choice but to trust claims by the seller - these are the real victims here.
In general having a chief "order" employees sounds like a red flag to me. Isn't ordering a bit authoritarian and used in leau of being able to change things in a more civil manner? If you aren't able to get people to work more at the office through more civil manners, maybe you should reflect on why?
Surely this is just to get people to quit without needing to give them expensive severance packages, that seems pretty common nowadays?
One of the tough pills I and probably many other developers have had to swallow when maturing is that "non-programming skills" from schools are useful and very valuable, actually. Writing is one of them. Everyone loves a programmer that can explain themselves. An opinion isn't worth having if you aren't able to defend it either. Maintaining a blog therefore seems like a great way of improving your writing skills while also testing your own opinions.
Writing down opinions on things have done wonders for my ability to reason about them, especially when the opinions are built on 10 years of "hunch" and no discussion.
I upvoted but I was not taught this! I have had to slowly figure it out on my own. Writing things down is kind of like augmenting your brain. It's a memory that does not forget. When working through a problem, writing it down tends to point out the holes in your understanding. A corner case is never lost or forgotten when written down, it just stares at you until you write down a solution. The next step after realizing this is to develop the discipline to write things down and to organize your environment so it's effortless to write things down.
Same, I had to learn this the hard way. In fact, I find that many (younger me included) are arrogant about *not* wanting to deal with writing due to it feeling like waste of time. But after maintaining codebases for 5+ years, you begin to appreciate younger you explaining wtf you were thinking.
And now, being at a point in my career where I have opinions on many things and discuss them with peers, I slowly realized writing about it was actually helping me more than anything.
Using something like confluence religiously in a team is a big boon. Write docs about everything. Write to get decisions done, to plan, to celebrate, to retrospect, to architect, to help oncall. Everything! Doesnt need to be beautiful prose - just needs to be famn useful and ideally easy/quick to read.
Maybe in US, if you learn to write in a simple and straightforward way.
In France essays are all about writing in a complex way to show how smart you are. Which not only is not a useful skill to have, it's detrimental because we learn to write in obscure and hard to understand ways.
This is painfully true. I went to a US university after high school in France and had a really hard time adjusting to the American style of essays. So many paragraphs with sentences crossed off for being too long, in particular. Hitting word limits when, in a French dissertation, you'd just be getting started (an exaggeration, yes, but still).
It wasn't a "language" problem because I was already a fluent American English speaker. It was all style-related.
I've recently started reading 19th century French literature again and sometimes I have to reread sentences multiple times because they're so long I come to the wrong conclusion too early.
This reminds me a bit of my Korean professor from college. Perhaps the most memorable thing from his class was when he explained the Korean style of writing essays was to not explain up front what you are going to cover but to "beat around the bush" until the end. He accompanied the bit in quotes with a mime of him swinging a stick at various parts of a really big bush.
For some reason, that image will forever accompany that phrase in my mind.
I've tried a few time to read le Journal du Hacker, the french-speaking clone of Hacker News and each time I've found that the writing level is so low it's basically unreadable.
They still want you to write in an academic style, even if that style is fairly different.
I was once asked to write to then governor of California, Arnold Schwarzenegger, to sway him on some political issue. That's certainly a practical assignment, but I chatted with some classmates about it, and none of us thought the professor would give a good grade to something that might genuinely sway the man.
"One of the tough pills I and probably many other developers have had to swallow when maturing is that "non-programming skills" from schools are useful and very valuable, actually."
It is interesting in how challenging it can be to convince some younger developers of this. Some of the stronger and more technically proficient developers (that are young-ish... mid 30's and down, let's say) have a level of contempt for those skills that is surprising and not trivial to coach them on. They seem to suffer from "smartest person in the room" syndrome and mistakenly believe those smarts apply to everything they deal with rather than the just the technical areas that they excel in.
I agree. But I also think there is an overlap between programming and writing. If you are a good programmer, you have some abilities that can help you explain a subject or argue some point. Especially when it is about something non-trivial.
I write to the computers because sure as fuck wasn't nobody gonna give me no money for writing to the humans. Instead, I am lead to understand that during the formative years of my primary caretakers, what people got for writing was jail time. And the people who put 'em there never actually went anywhere; they just became less visible as they shed the dead weight of the state apparatus.
As a result, computer touchin' is how my entire cohort developed sentience, since computers have the useful property of always responding correctly when asked correctly. Humans, meanwhile, can quite easily become trapped in a permanent low-intensity fight-or-flight state - where they only respond correctly to incorrect statements, and vice versa.
For better or worse, ChatGPT is pretty good at explaining people to you if you describe what you did and what happened and don't leave out the details. Just be aware that you're human too and it'll syncophantically tell you you're right if you leave anything out, which really strokes the ego, but doesn't actually help. Don't fall into that trap. If you're worried about privacy, get a local one instead of ChatGPT.
I'm definitely in the "for worse" camp on that one, and doing my part.
If I was exposed to LLMs when I was a wee atom, I strongly suspect it would've had a massively detrimental effect.
Tbh, an effect not unlike that of the shitfractal of media that our salad days were wasted on. You know, all those endless cultural artifacts that purport to explain people to themselves (in a literally - but at least somewhat more visibly - irresponsible way.) Just ... vastly more so.
Basically, with my socioeconomic background, if I'd had GPT as a kid, I don't think I would have even developed a mind. Exact opposite of a Diamond Age scenario.
Generally, I consider the perspective of people aligned with AI to be a perspective from which a P-zombie is superior in all respects to a conscious being. And I've seen enough of that shit without needing to fucking self-host it, thank you very much, go get an actual therapist, they need to eat too.
Pithy: "Writing things down is a special way of processing them for yourself."
A fictional example of this that I love is (seriously) the Twilight rewrite Luminosity: https://luminous.elcenia.com/chapters/ch1.shtml. Bella writes everything down in her notebook and is unusually self-reflective.
One benefit of blogs that isn't mentioned enough is the opportunity to express unorthodox ideas, and the chance to defend them to form a good thesis.
Diversity of thought is pretty valuable. So is training yourself to think independently, come up with your own premises and learning to build sound arguments, which you also get from writing and discussing ideas.
I wish the article talked more about this app India wanted to pre-install. Forcing the pre-install of apps is worrisome in general, but there's some nuance that is missed by not explaining what is being forced on the citizens. "Cybersecurity app" can mean a lot. From the looks it's a government-sponsored "brick my phone"-kind of app for disabling stolen phones?
Then again, if you find yourself needing per-field permission checks, you probably want a separate admin API or something instead.
reply