The quoted revenue numbers seem insane, but I guess it's the result of corporate deals where every developer seat is hundreds of dollars a month?
My job has been publicly promoting who's on top of the "AI use dashboard" while our whole product falls apart. Surely this house of cards has to collapse at some point, better get public money before it does.
Yeah, it is wild seeing with my own eyes how bad these tools are in a lot of cases. We do have some vibe coders on our team, but they're basically banned from my current project because they completely ruin the design and nuke throughput. HN would have me believe I'm a Luddite who shouldn't be writing code, however. I truly do not know how to reconcile this experience, and often it is too complicated a topic to explain to someone who isn't an engineer. AI is the ultimate Dunning-Kruger machine. You cannot fix what you do not know, because you do not know that you do not know it.
As you say, I think things are just going to fall apart and we're just going to have to learn the hard way.
No, these tools are really great in a lot of cases. But they still don't have general intelligence or true understanding of anything - so if people use them wrong and rely on their output because it looks good, not because they verified it, then that's on the people using them.
I mean, that is fine, but then it seems like people at large are not using them "right". I think you'll find that since these tools are convenient and produce a lot of code, line-wise, verification goes out the window. Due diligence was hard enough before these tools existed.
Oh, I certainly do find it tempting to get lazy with these tools, but I've learned there are side projects where vibecoding is fine, and important codebases that can be improved with LLMs - but not if you just let agents loose on them.
I have started using the most token-intensive model I can find and asking for complicated tasks (rewrite this large codebase, review the resulting code, etc.)
The agent will churn in a loop for a good 15-20 minutes and make the leaderboard number go up. The result is verbose and useless but it satisfies the metrics from leadership.
> Our token usage and number of lines changed will affect our performance review this year.
The AI-era equivalent of that old Dilbert strip about rewarding developers directly for fixing bugs ("I'm gonna write me a new minivan this afternoon!"): just substitute intentional bug creation with setting up a simple agent loop that burns tokens on random, unnecessary refactoring.
> Our token usage and number of lines changed will affect our performance review this year.
I'm going nuts, because as I was "growing up" as a programmer (that was 20+ years ago) it was stuff like this [1] that made me (and people like me) proud to be called a computer programmer. Copy-pasting it in here, for future reference, and because things have turned out so bleak:
> They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week. (...)
> Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementer, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code. (...)
> He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.
Name pretty much any company. Every one of my friends has said their company is doing this - across three countries, mind you. Especially if they already use the Microsoft Office suite; those folks got sold Copilot on a deal, it seems.
I work for a megacorp, and our global overlord (who is an ex-dev) has tried Claude Code at home and figured out that generating large amounts of code comes with its own challenges. They explicitly don't want this to happen, so there's no such metric.
Weird. I would have thought most smaller companies would not need this sort of useless metric where people know each other and know what they are doing. These things are generally the domain of larger companies where they have already dehumanized their employees and deal only with numbers.
I feel like a crazy person, especially when I read HN. Half or more of the comments on this thread are saying how the game is over for even writing code. Then at my job, I see people break things at a rate I can't personally keep up with. Worse, I hear more and more colleagues talk about mandated AI tooling usage and massive regression rates. My company isn't there yet, but I feel it is around the corner.
I mean, they claim they've got 15B consumer revenue and 900M weekly active users.
If that's accurate, that means what, like 11% of the human population is using their product, and the average user pays around $17 a year?
That seems incredibly high, especially for poorer countries.
Still, I do know that if I go to a random cafe in the developed world and peep at people's screens, I'm very likely to see a ChatGPT window open, even on wildly non-technical people's screens.
There's no hope trying to sell "plant-based hamburger" with any name to toxic masculinity advocates who think soy feminizes you (even though seitan isn't soy). These guys are getting hospitalized from eating all-beef diets because chicken is "too feminine".
Controversial take but... don't buy from Amazon? If you really care about quality and physical media you can go to a bookstore or at least form a relationship with a smaller online seller.
I've been wearing the Soundcore Space Q45 for 6 months. Good noise cancelling, comfortable headband, not too heavy and they cost...$99. I can't imagine these being worth 5x as much, even with the Apple tax.
The manager of my team is like this. He LLMed a design doc and then whenever people have questions he's exasperated that people didn't read the design doc. Bro you didn't write it, why would we read it?
I think it's the opposite - the censorship has made the Israeli public believe they're safer than they really are. The US is lying about their stockpiles and frantically moving resources from East Asia to try and shore up missile defense in the Middle East.
These people believed that no Iranian missiles could possibly get through, and instead of accepting they were misled, they're shooting the messenger.
Theatres don't just show new movies. There's something very special about being locked in a dark room with a big screen to watch Alien or Barry Lyndon. Older movies especially look great in a theatre and some of the magic is lost on a smaller screen.
90% of any content is crap but you're missing out if you like movies and you haven't seen Sinners, The Bone Temple, or NOPE (to name a few recent great theatre watches).
I regularly go to a nearby cinema that also shows older movies. Watching a movie like "Ran" (Akira Kurosawa) in a restored 4k version on a big screen is quite an experience. (Tickets are 10€ btw.) Often I go with a friend, but occasionally I also go alone. There's something about spending 2 or 3 hours in a dark room entirely focussing on a piece of art.
I hear everyone say "the LLM lets me focus on the broader context and architecture", but in my experience the architecture is made of the small decisions in the individual components. If I'm writing a complex system part of getting the primitives and interfaces right is experiencing the friction of using them. If code is "free" I can write a bad system because I don't experience using it, the LLM abstracts away the rough edges.
I'm working with a team that was an early adopter of LLMs and their architecture is full of unknown-unknowns that they would have thought through if they actually wrote the code themselves. There are impedance mismatches everywhere but they can just produce more code to wrap the old code. It makes the system brittle and hard-to-maintain.
It's not a new problem, I've worked at places where people made these mistakes before. But as time goes on it seems like _most_ systems will accumulate multiple layers of slop because it's increasingly cheap to just add more mud to the ball of mud.
This matches my experience when building my first real project with Claude. The architectural decisions were entirely up to me: I researched which data sources, schema, and enrichment logic were suitable and which to use. But I had no way of verifying whether these decisions were actually good (no programming knowledge) until Claude Opus had implemented them.
The feedback loop is different when you don't write the code yourself. You describe a system to the AI, after a few lines of code the result appears, and then you find out whether your own mental model was actually sound. In my first attempts, it definitely wasn't. This friction, however, proved to be useful; it just wasn't the friction I had expected at the beginning.
Every modern streaming platform seems to be focused on the relationship between contemporary singles - who featured on what, what's trending, if you like this current pop artist you'll like this other one. Setting aside OP's interest in classical music this approach doesn't even work for popular music from the 60s to 90s when the primary format was the album. God help you if you try to use voice commands to play Help! (the album by the Beatles) and instead get Help! (the title track by the Beatles).
In the past ten years I have been frustrated by the tension between working on "interesting" or "important" stuff and working on dumb trendy shit. With the current LLM trend everything has become dumb trendy shit, which has made the decision simpler.