

"You need to think of Larry Ellison the way you think of a lawnmower."

https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2308s

Start at 33:02 for the full rant.


https://medium.com/state-of-the-art-technology/world-models-...

> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”


But they don't only operate on language? They operate on token sequences, which can encode images, coordinates, time, language, etc.
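
As a minimal illustration (my own sketch, nothing from the linked article), here's how continuous data like coordinates can be mapped into the same kind of discrete token space a transformer consumes:

    # Illustrative only: uniform quantization turns continuous 2D
    # coordinates into discrete token ids, the same trick some
    # multimodal models use to feed non-text data to a transformer.
    def coord_to_tokens(x: float, y: float, bins: int = 256) -> list[int]:
        """Map coordinates in [0, 1) to two integer token ids."""
        assert 0.0 <= x < 1.0 and 0.0 <= y < 1.0
        return [int(x * bins), int(y * bins)]

    print(coord_to_tokens(0.25, 0.75))  # -> [64, 192]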

It’s an interesting observation, but I think you have it backwards. The examples you give are all using discrete symbols to represent something real and communicating this description to other entities. I would argue that all your examples are languages.

What's the first L stand for? That's not just vestigial; their model of the world is formed almost exclusively from language, rather than from a wide range of inputs contributing significantly, the way it is for humans.

The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough; they are too disjointed. It needs more.


How is a linear stream of symbols able to capture the relationships of the real world?

It's like the people who are so hyped up about voice-controlled computers. Like, you get that a linear stream of symbols is a huge downgrade in signal, right? I don't want computer interaction to be simplified and worsened yet further.

Compare with domain experts who do real, complicated work with computers: animators, 3D modelers, CAD users, etc. Give them a mouse with six degrees of freedom, thorough training in hotkeys to command actions and modes, and a good mental model of how everything works, and these people are dramatically more productive at manipulating data than anyone else.

Imagine trying to talk a computer through nudging a bunch of vertices through 3D space while flexibly managing modes of "drag" on connected vertices. It would be terrible. And no, you would not replace that with a sentence like "Bot, I want you to nudge out the elbow of that model", because that does NOT do the same thing at all.

An expert fluidly making their idea reality in real time is not even remotely close to the "project manager/mediocre implementer" relationship you get by prompting any sort of generative model. The models aren't even built to contain a specific "style", so they certainly won't be opinionated enough to have artistic vision, a strong understanding of what does and does not work in a given context, or the ability to navigate "my boss wants something stupid that doesn't work, so how do I convince him to drop the dumb idea while making him think that was his idea?"


>We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.

https://en.wikipedia.org/wiki/Moravec%27s_paradox

All the things we look at as "smart" seem to be the things we struggle with, not the things that are objectively difficult, if that can even be defined.


China leads the world in solar energy, by a wide margin. Yes, they have hedged their bets somewhat with coal, but you cannot claim with a straight face that China believes renewable energy is nonviable.

https://apnews.com/article/china-climate-solar-wind-carbon-e...


> Step 2: The AI bot executes arbitrary code. Claude interpreted the injected instruction as legitimate and ran npm install pointing to the attacker's fork - a typosquatted repository (glthub-actions/cline, note the missing 'i' in 'github'). The fork's package.json contained a preinstall script that fetched and executed a remote shell script.

Even leaving aside the security nightmare of giving an LLM unrestricted access to your repo, you'd think the bots would be GOOD at spotting small details like typosquatted domains.
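
For what it's worth, the check is cheap. A rough sketch (mine, not from the writeup; the TRUSTED allowlist is a hypothetical stand-in) using plain edit-distance similarity:

    # Flag names suspiciously close to, but not equal to, a trusted name.
    from difflib import SequenceMatcher

    TRUSTED = ["github-actions/cline"]

    def looks_typosquatted(name: str, threshold: float = 0.9) -> bool:
        return any(
            name != t and SequenceMatcher(None, name, t).ratio() >= threshold
            for t in TRUSTED
        )

    print(looks_typosquatted("glthub-actions/cline"))  # -> True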


According to another comment, the attack exploits GitHub's forking feature to point at a commit which appeared to be in `github-actions/cline` but which invisibly lived in the typosquatted repository.

https://news.ycombinator.com/item?id=47264574


STEM PhDs and engineers are not the elite at issue here. They're talking about the social elite, and the angry wannabes shut out of the ruling class.

It doesn't show the comparative energy waste of Bitcoin?

This source[0] says

> One Bitcoin now requires 854,400 kilowatt-hours of electricity to produce. For comparison, the average U.S. home consumes about 10,500 kWh per year, according to the U.S. Energy Information Administration, April 2025, meaning that mining a single Bitcoin in 2026 uses as much electricity as 81.37 years of residential energy use.

[0] https://www.compareforexbrokers.com/us/bitcoin-mining/
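
The arithmetic in that quote checks out (a quick sanity check, not new data):

    kwh_per_btc = 854_400       # kWh to mine one bitcoin, per the source above
    kwh_per_home_year = 10_500  # average U.S. home per year, per the EIA
    print(kwh_per_btc / kwh_per_home_year)  # -> 81.37... years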


About 1,200 kWh per transaction, currently[0]. I wrote about this back in 2022, when it was about 2,200 kWh per transaction[1].

Edit: made a chart with this data, adding in a bitcoin transaction[2]

[0]: https://digiconomist.net/bitcoin-energy-consumption

[1]: https://rollen.io/blog/crypto-climate/

[2]: https://imgur.com/a/ggAGylW
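
Putting that figure next to the household numbers from the sibling comment (again, just a sanity check on the cited data):

    kwh_per_tx = 1_200                    # current kWh per transaction, per digiconomist
    kwh_per_home_day = 10_500 / 365       # ~28.8 kWh/day for an average U.S. home
    print(kwh_per_tx / kwh_per_home_day)  # -> ~41.7 days of household electricity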


LLMs cannot, as you put it, "properly, correctly think".

So-called reasoning models are hallucinating; their self-reported "reasoning" does not reflect their inner state: https://transformer-circuits.pub/2025/attribution-graphs/bio...

(before someone comes at me, yes, humans can also lie about their inner state but we are [usually] aware of it. Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination)




But we at HN have also historically called your experience "anecdata" and taken it with a grain of salt. Don't take offense; provide more data.

I humbly suggest that a more hacker response would be, "That's really interesting that my experience doesn't agree with that study. Let's figure out what's going on."


I linked you a paper from one of the leading AI shops in the world demonstrating that the reported "chain of thought" doesn't match up with the actual activations inside the model, and you replied that you're an expert on some human psych stuff that may or may not even be real[0].

Forgive me if I don't immediately bow to your expertise.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC11293289/


They are not a major OEM, but the Hiroh phone is going to offer hardware cutoff switches and a de-googled OS: https://www.notebookcheck.net/Murena-taking-pre-orders-for-t...

I think this is great news, but I thought GrapheneOS considered unlocked bootloaders to be a terrible security risk? What's changed?

An unlocked bootloader is mandatory to install GrapheneOS, but so is the ability to re-lock it afterwards.

Not if it comes preinstalled though. Isn't that the point of the partnership?

Doesn't seem to be; the announcement only talks about GrapheneOS compatibility.

It has always been a hardware requirement to be able to unlock the device, install GrapheneOS, and lock the device again. Verified boot has been a requirement since it was introduced for Pixels and is the main benefit of locking the device. There are additional security features enabled by verified boot. The overall hardware requirements are listed at https://grapheneos.org/faq#future-devices.

You always have to temporarily unlock your bootloader to install GrapheneOS.

The key point is being able to lock it again after installation.
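
Schematically, the flow looks like this (a sketch of the generic Pixel procedure, not the official instructions; see https://grapheneos.org/install for the supported steps):

    import subprocess

    def fastboot(*args: str) -> None:
        # Thin wrapper; assumes the Android platform tools are installed.
        subprocess.run(("fastboot",) + args, check=True)

    fastboot("flashing", "unlock")  # temporarily unlock the bootloader
    # ... flash the GrapheneOS factory images here (flash-all.sh in the release) ...
    fastboot("flashing", "lock")    # re-lock so verified boot is enforced again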


Counterpoint: No one ever gets fired or goes to jail when big tech firms break the law. Companies will put out an apology, pay whatever small fine is imposed, and continue with illegal AI usage at scale.
