> - don't have career growth that you can feel good about having contributed to
Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
> - don't have a genuine interest in accomplishment or team goals
Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.
> - have no past and no future. When you change companies, they won't recognize you in the hall.
Or on the picket line.
> - no ownership over results. If they make a mistake, they won't suffer.
Good deal. Less human suffering is usually worth striving for.
> Humans are on the verge of building machines that are smarter than we are.
You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".
That particular future exists now, it's just not evenly distributed. Gemini 2.5 Pro Thinking is already as good at programming as I am. Architecture, probably not, but give it time. It's far better at math than I am, and at least as good at writing.
Computers beat us at maths decades ago, yet LLMs still can't beat a calculator half of the time. The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. Your claim of much success in asking LLMs for maths makes me question whether you have actually asked an LLM about maths.
Most AI experts not heavily invested in the stocks of inflated tech companies seem to agree that current architectures cannot reach AGI. It's a sci-fi dream, but hyping it is real profitable. We can destroy ourselves plenty with the tech we already have, but it won't be a robot revolution that does it.
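To illustrate the distinction: here's a minimal sketch of what a traditional symbolic solver guarantees and a token predictor doesn't, assuming Python with sympy (the expressions are just illustrative examples):

```python
# Minimal sketch of exact symbolic computation with sympy
# (assumes sympy is installed; expressions are illustrative).
from sympy import symbols, integrate, solve, Rational

x = symbols("x")

# Exact symbolic integration: the result is an expression, not a float.
print(integrate(x**2, x))               # x**3/3

# Exact roots, including irrational ones, found by rule-based manipulation.
print(solve(x**2 - 2, x))               # [-sqrt(2), sqrt(2)]

# Exact rational arithmetic, the kind a token predictor can fumble.
print(Rational(1, 3) + Rational(1, 6))  # 1/2
```

The point is that the solver's answers are exact by construction, not sampled from a probability distribution over tokens.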
> The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. Your claim of much success in asking LLMs for maths makes me question whether you have actually asked an LLM about maths.
What I really need to ask an LLM for is a pointer to a forum that doesn't cultivate proud exhibition of ignorance, Luddism, and general stupidity at the level on display in this entire HN story, and in this subthread in particular.
> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
Have you ever spent any time around children? How about people who think they're accomplishing a great mission by releasing truly noxious ones on the world?
You just dismissed the entire notion of accountability as an unnecessary form of suffering, which is right up there with the most nihilistic ideas ever voiced by, I don't know, Dostoevsky's Underground Man or Raskolnikov.
> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.
(Shrug) There are other sources of inspiration besides dystopian sci-fi movies. There's the Biblical story of the Tower of Babel, for instance. By that logic, better not work on language translation, which after all is how the whole LLM thing got started.
Sometimes fiction went in the wrong direction. Sometimes it didn't go far enough.
In any case, The Matrix wasn't my inspiration here, but it is a pithy way to describe the concept. It's hard to imagine how humans maintain relevance if we really do manage to invent something smarter than us. It could be that my imagination is limited, though. I've been accused of that before.
We'll fix that, eventually.