Hacker News | new | past | comments | ask | show | jobs | submit | rho4's comments | login

Ouch, so painful to read.

Or when you're too lazy to hunt down the sources, both for internal and external dependencies. Just Ctrl+click the method and have a quick look at the decompiled implementation, usually good enough.

And then there is the moderate position: Don't be the person refusing to use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if improvements are useful for specific tasks. And carry on with your life.

Don't be the person refusing the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if improvements are useful for specific tasks.

Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.

(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)


  Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitative evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and what her small business does.

I fed all of it into Vercel v0 and out came a professional looking website that is based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.

My relative was extremely happy with the result.

It took me about 10 minutes to do all of this.

In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.

I even showed my non-tech relative how to use v0. Since all the changes requested of v0 were in English, she had no trouble learning how to use it in one minute.


Okay, I mean if that’s the sort of thing you regularly have to do, cool, it’s useful for that, maybe, I suppose? To be clear I’m not saying LLMs are totally useless.

I don't have to do this regularly. You asked for a qualitative example. I just gave you one.

They actually asked for quantitative evidence.

Yea I read that wrong.

Quantitative should be easy. OpenAI's ARR is $20b in 2025. Up nearly 6x over last year. If it isn't useful, people wouldn't pay for it.


Isn't it mostly from corporate clients who pay based on FOMO and force everyone to use it to justify spending?

I detest LLMs, but I want to point out that Segway tech became the basis for EUCs (electric unicycles): https://youtu.be/Ze6HRKt3bCA?t=1117

These things are wicked, and unlike some new garbage javascript framework, it's revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.

https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...


While that video looks cool from a "Red Bull Video of crazy people doing crazy things" type angle, that looks extremely dangerous for day to day use. You're one pothole or bad road debris away from a year in the hospital at best, or death at worst.

There is something to be said for the protective shell of a vehicle.


lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.

But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.


I mean sure but none of these even claimed to help you do things you were already doing. If your job is writing code none of these help you do that.

That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.


Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.

In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on. 3DTVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.


A 4GL is a "fourth generation language"; they were going to reduce the need for icky programmers back in the 70s. SQL is the only real survivor, assuming you're willing to accept that it counts at all. "This will make programmers obsolete" is kind of a recurrent form of magic tech; see 4GLs, 5GLs, the likes of Microsoft Access, the early noughties craze for drag-and-drop programming, 'no-code', and so forth. Even _COBOL_ was kind of originally marketed this way.

> Online spaces where people hang out are bigger than ever.

Personally I wouldn't mind if they went back to being small again


If a calculator gives me 5 when I do 2+2, I throw it away.

If a PC crashes when I use more than 20% of its soldered memory, I throw it away.

If a mobile phone refuses to connect to a cellular tower, I get another one.

What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.


You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and utilize it to improve your workflow, is up to you. But over-exaggerating the problem to this point is kinda funny.

"You exaggerate, and the evidence is PMs are pushing it. PMs can't be wrong, can they?" Somebody really has to know what makes developers tick to write ragebait this good.

I can't even get the most expensive model on Claude to use "ls" correctly, with a fresh context window. That is a command that has been unchanged in linux for decades. You exaggerate how reliable these tools are. They are getting more useless as more customers are added because there is not enough compute.


I’m not sure what you’re talking about, because I have a completely different experience.

Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.

Honestly, LLMs are about as reliable as the rest of my tools are.

Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.

My regular software breaks constantly. All the time. It's a rare day where everything works as it should.

LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.


What I want from my tools is autonomy/control. LLMs raise the bar on being at the mercy of the vendor. Anything you can do with an LLM today can silently be removed or enshittified tomorrow, either for revenue or ideological reasons. The forums for Cursor are filled with people complaining about removed features and functional regressions.

Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.

There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.

Imagine refusing to drive a car in the 60s because they hadn't reached 1kbhp yet. Ahaha.


> Imagine refusing to drive a car in the 60s because they hadn't reached 1kbhp yet. Ahaha.

That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars), but the car was already an established mode of transportation. 60s cars are much closer to today’s cars than 2000s computers are to current ones.


It's even worse, because even with an unreliable 60s car you could at least diagnose and repair the damn thing when it breaks (or hire someone to do so). LLMs can be silently, subtly wrong and there's not much you can do to detect it let alone fix it. You're at the mercy of the vendor.

> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.

"Reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how to program properly) as any other software invocation; if you're seeing crashes, you're doing something wrong.

But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.

Where the LLMs are useful, is where there is no 100% "right or wrong" answer, think summarization, categorization, tagging and so on.
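A minimal sketch of that pattern in Python. The model call is stubbed out with a keyword heuristic so the example is self-contained; in practice `call_llm` would be whatever chat-completion API you use, and the point is the validation layer around it:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; a keyword heuristic
    # keeps the example runnable offline.
    text = prompt.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "bug"
    return "other"

ALLOWED_TAGS = {"billing", "bug", "other"}

def categorize(ticket: str) -> str:
    """Ask the model for a tag, but constrain the answer to a known set."""
    raw = call_llm(
        f"Tag this support ticket as billing, bug, or other: {ticket}"
    ).strip().lower()
    # Reliability in the engineering sense: validate the output,
    # fall back to a safe default on anything unexpected.
    return raw if raw in ALLOWED_TAGS else "other"

print(categorize("I was charged twice for my subscription"))  # billing
```

There is no single "correct" tag for every ticket, which is exactly why fuzzy output is acceptable here, as long as the program constrains it to values the rest of the system can handle.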


I’m not a native English speaker so I checked on the definition of reliability

  the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter is reliable when it catches the bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do.

So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is formal language, which means correctness is the end result. A program may be valid and incorrect at the same time.

It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter, and the only one that is relevant. No one hires people for knowing a language's grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people who can write code with a minimal amount of bugs and who are able to eliminate those bugs when they surface).


> For a tool, I expect “well” to mean that it does what it’s supposed to do

Ah, then LLMs are actually very reliable by your definition. They're supposed to output semi-random text, and whenever I use them, that's exactly what happens. Except for the times I create my own models and software, I basically never see any cases where the LLM did not output semi-random text.

They're not made for producing "correct code" obviously, because that's a judgement only a human can do, what even is "correct" in that context? Not even us humans can agree what "correct code" is in all contexts, so assuming a machine could do so seems foolish.


I'm a native English speaker. Your understanding and usage of the word "reliability" is correct, and that's the exact word I'd use in this conversation. The GP is playing a pointless semantics game.

It's not semantics, if the definition is "it does what it’s supposed to do" then probably all of the currently deployed LLMs are reliable according to that definition.

> "it does what it’s supposed to do"

That's the crux of the problem. Many proponents of LLMs over promise the capabilities, and then deny the underperformance through semantics. LLMs are "reliable" only if you're talking about the algorithms behind the scene and you ignore the marketing. Going off the marketing they are unreliable, incorrect, and do not do what they're "supposed to do".


But maybe we don't have to stoop down to the lowest level of conversation about LLMs, the "marketing", and instead do what most of us here do best, focus on the technical aspects, how things work, and how we can make them do our bidding in various ways, you know like the OG hacker.

FWIW, I agree LLMs are massively over-sold for the average person, but for someone who can dig into the tech, use it effectively and for what it works for, I feel like there is more interesting stuff we could focus on instead of just a blanket "No and I won't even think about it".


The biggest change in my career was when I got promoted to be a Linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and Python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running perl scripts, or I could learn CloudFormation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it unless absolutely forced to. I wrote a ton of terrible CloudFormation and Chef cookbooks and got promoted twice, and my salary went from $50,000 a year to $150,000 a year in 3 years, after I took a job elsewhere. AFAIK, most of the people on that team got laid off when the whole team was eliminated a few years after I left.

You're preaching to the wrong crowd I guess. Many people here think in extremes.

I was once in your camp, thinking there was some sort of middle ground to be had with the emergence of generative AI and its potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day did the same.

The problem is that, historically speaking, you have two choices;

1. Resist as long as you can, risking being labeled a Luddite or whatever.

2. Acquiesce.

Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.

My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.

Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.

To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.

At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.


Maybe the dilemma isn’t whether to “resist” or “acquiesce”, but rather whether to frame technological change as an inherently adversarial and zero sum struggle, versus looking for opportunities to leverage those technologies for greater productivity, comfort, prosperity, etc. Stop pushing against the idea of change. It’s going to happen, and keep happening, forever. Work with it.

And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.


> and this probably won’t be any different

It's just exhausting to read the 1000th post of people saying "If we replace jobs with AI, we will all be having happy times instead of doing boring work." It's like reading a Kindergartner's idea of how the world works.

People need to pay for food. If they are replaced, companies are not going to make up jobs just so they can hire people. They are under no responsibility or incentive to do that.

It's useless explaining that here because half of the shills likely have ulterior reasons to be obtuse about that. On top of that, many software developers are so outside the working class that they don't really have a concept of financial obligation, some refusing to have friends that aren't "high IQ", which is their shorthand for not poor or "losers".


I’m sure it must be exhausting, but nobody said that.

Your profile: Former staff software engineer at big tech co, now focused on my SaaS app, which is solo, bootstrapped, and profitable.

Yep. Makes sense.

> And by any metric

Can you cite one? Just curious. I enjoy when people challenge the idea that the advancement of tech doesn't always result in a better world for all because I grew up in Detroit, where a bunch of car companies decided that automation was better than paying people, moved out and left the city a hollowed out version of itself. Manufacturing has returned, more or less, but now Worker X is responsible for producing Nx10 Widgets in the same amount of time Worker Y had to produce 75 years ago, but still gets paid a barely livable wage because the unchecked force of greed has made it so whatever meager amount of money Worker X makes is siphoned right back out of their hands as soon as the check clears. So, from where I'm standing, your version of "improvement" is a scam, something sold to us with marketing woo and snake oil labels, promising improvement if we just buy in.

The thing is, I don't hate making money. I also don't hate change. Quite the opposite, as I generally encourage it, especially when it means we grow as humans...but that's generally not the focus of what you call "change," is it? Be honest with yourself.

What I hate is the argument that the only way to make it happen is by exploiting people. I have a deep love of technology and repair it in my spare time for people, to help keep things like computers or dishwashers out of landfills, saving people from having to buy new things in a world that treats technology as increasingly disposable, as though the resources used to create it are unlimited. I know quite a bit about what makes it tick as a result, and I can tell you first hand that there's no reason to have a microphone on a refrigerator, or a mobile app for an oven. But you and people like you will call that change, selling it as somehow making things more convenient while our data is collected and sorted, and we spend our days fending off spam phone calls or contemplating whether what we said today is tomorrow's thought crime. Heck, I'm old enough to remember when phone line tapping was a big deal that everyone was paranoid about, and three decades later we were convinced to buy listening devices that could track our movements. None of this was necessary for the advancement of humanity, just the engorgement of profits.

So what good came of it all? That you and I can argue on the Internet?


Metrics: life expectancy, infant mortality, maternal mortality, extreme poverty, access to clean water, access to adequate calories, medical care, literacy, education, likelihood of being murdered, disposable income, on and on. Take your pick.

The fact that you’d even ask me to share a metric of how someone from a century or two is worse off tells me all I need to know about whether you’re able to engage in good faith here. Ditto for the low effort ad hominem attack you opened with.

But by all means, carry on with your tilting at the windmills of change.


Maybe it's just me, but I often feel that the issue in these debates is not that we give up creativity, but people's unwillingness to change their creativity.

When PCs came around, people looked down on the idea of painting on a screen. Some stubbornly held on to their easels, while the often-younger generation embraced the new tech and made their careers.

LLMs are the same... We have people who are stubborn and still want to do everything on their easel, and good for them. But those that adapted will turn out more work and eventually replace most of the die-hards in the workplace. Sure, there will still be people who are needed, just like we had Fortran programmers making bank 50 years later, or painters who make bank.

But the idea that it makes us less creative is stupid. Did PCs make us less creative in painting? No, we got a ton of new media and changes. We adapted to the tools and possibilities.

Oh, PCs came on the market; well, no way somebody is going to use that to make... MUSIC... you only do that with real instruments. Cue an entire generation of techno, movie music, etc., all made digitally. You did not need to be a conductor or know how to play dozens of different instruments to make insane pieces of music. You used your creativity to use the tools at your disposal!

The fact that you can now do work in a few weeks that would have taken you a year as a programmer opens the door for more creativity. Prototyping an idea does not take months, but days. It changes the industry...

Why do I need to pay license costs for a piece of software that is "enterprise", when I can now make the same level of software in a few weeks? It actually takes power away from big corporations. Sure, you rely on the tool, and on whoever is behind the tool for now (like Anthropic, etc.), but as time moves forward, hardware will become more powerful and LLMs will move more into open source...

Remember what I said about music... hey, maybe creative people will be able to make music that is different thanks to AI. Hey, you wanted to make a Magic: The Gathering game but the art was a dilemma... LLMs suddenly open the door to making new products, and change the industry away from large corporations where the entry fee is high.

We are on the threshold of change; those that stay behind and think it kills creativity never really used the new tools.

I am writing software right now that would have taken me a year to write. It duplicates the function of some enterprise software that would have cost me insane money. BUTTTTTT, because I now control this software, I can add features that I always found lacking or missing, because that enterprise software only looked at what companies need, not what somebody like me may need.

All for the low low price of barely $40, for what is probably 70~100k in developer cost. That is still being creative: I made something new using an existing idea, with my own touches to it. But I used a tool for it... Eventually my code, if open source, may be used to train other LLMs and improve them. Which in turn makes them smarter, and maybe some of my ideas get used by others.

This is frankly how we as the human race advanced: not by stagnating and resisting change, but often by copying, improving, using our creativity. Does that mean I need to know how an LLM works? No, just like I do not need to know how a bread slicer works...

Anyway... this discussion is never going to end, because there is always resistance to change. I remember what my parents said about smartphones... guess who uses them now, even at their old age. Eventually some power always gets consolidated, but for now, with the progress I am seeing in open-source/open-weights LLMs, there will be an escape hatch for those that do not want to be tied to specific corporations, just like Linux etc. exists.


I like the AI-disclaimer :). This might become a thing for blog and news articles: (c) all words written by <editor> on <date> without AI. And then there will be a robots.txt directive that allows collection of this self-declared human material for AI training. And a google search option: "ai:no" :)
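No standard "ai:no" directive exists yet; the de-facto approach today is per-crawler opt-outs in robots.txt (GPTBot and Google-Extended are real AI-training user agents):

```
# Opt out of known AI-training crawlers by user agent.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```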


Well said. Sad how that reflex starts kicking in for HN comments as well (ps I'm not getting any signals from your comment).


This. Speed determines whether I (like to) use a piece of software.

Imagine waiting for a minute until Google spits out the first 10 results.

My prediction: All AI models of the future will give an immediate result, with more and more innovation in mechanisms and UX to drill down further on request.

Edit: After reading my reply I realize that this is also true for interactions with other people. I like interacting with people who give me a 1 sentence response to my question, and only start elaborating and going on tangents and down rabbit holes upon request.


> All AI models of the future will give an immediate result, with more and more innovation in mechanisms and UX to drill down further on request.

I doubt it. In fact I would predict the speed/detail trade-off continues to diverge.


> Imagine waiting for a minute until Google spits out the first 10 results.

what if the instantaneous responses make you waste 10 min realizing they were not what you searched for?


I understand your point, but I still prefer instantaneous responses.

Only when the immediate answers become completely useless will I want to look into slower alternatives.

But first "show me what you've got so far", and let me decide whether it's good enough or not.


I am already at that point. When I need to search for something more complex than an exact keyword match, I don't even bother googling it anymore; I just ask ChatGPT to research it for me and read its response 5 minutes later.


Yes, I feel the same recently with Google results. But I think I would still like to see the immediate 10 results, along with a big button "Try harder - not feeling very lucky".


Grok fast is fast but doing a lot of stupid stuff fast actually ends up being slower


I really hope hacker news does not also turn into a Bash-Elon-Club like electrek.co (used to love that blog). But this comment section does not bode well.


That's what happens when you promise again and again and don't deliver.


Has it occurred to you that maybe the reason Elon Musk is universally disliked is because he's unlikable?

Some people deserve to be hated. Elon is one of those people. From a business perspective and a human perspective.

From a business and politics perspective, he's a con man. He stole your money. Mine too, with DOGE.


electrek is not a Bash-Elon-Club; it's an 'I was a true believer but got scammed with fake Roadsters worth half a $mil I will never see in my life' club.


Elon Musk is a very dishonest, misogynistic, neo-nazi grifter.


Isn't SpaceX now part of the military industrial complex?


Nice pattern detection.


The system should support this, e.g. via // @formatter:off/on tags
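As an illustration in Python, Black's equivalent of those tags is a `# fmt: off` / `# fmt: on` comment pair:

```python
# Black leaves everything between these two comments exactly as written,
# preserving the hand-aligned matrix layout.
# fmt: off
IDENTITY = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
# fmt: on

print(IDENTITY[1][1])  # 1
```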


For the stored IR version that means it needs to store raw source code when those directives are used. And then you lose the benefits.

