Yizahi's comments

This is objectively incorrect (with the possible exception of corrupt courts). No court will indict you on any charge based only on testimonies/memories/personal experience. Not only that, more scientifically literate judges know that even when real evidence is combined with testimony, the testimony part is extremely unreliable once it is more than a year old, practically useless for any factual corroboration. It's just how the human mind works, basic biology. Given a few years, any person can convince themselves of a past event which never actually happened, imagining it with so many details and interactions, so vividly, that they will truly believe it. It is normal for humans.

There is zero evidence for anything alien in the Solar system. Not a single good photo or video or item. Not a single verified experiment showing paranormal things work.

So it's not a question of belief, there is nothing to believe in. But r/ufos is a genuinely funny place, I have to admit, it's like a live study of the human psyche. :)


A perfect illustration, thank you

You see, we are observing here a clash in terminology. Hallucinations in humans are thinking, just not typical thinking. So-called "hallucinations" in LLM programs are just noise output, garbage. This is why using anthropomorphic terms for programs is bad, just like "thinking" or "reasoning".

I think the answer is somewhere in the middle, not as restrictive as the parent's, but also not as wide as AI companies want us to believe. My personal opinion is that hallucinations (random noise) are a fundamental building block of what makes human thinking and creativity possible, but we have additional modes of neural processing layered on top, which filter and modify the underlying hallucinations so they become directed at a purpose. We see the opposite when those filters fail, in some non-neurotypical individuals, due to a variety of causes. We also use tools to optimize that filter function further by externalizing it.

The flip side of this is that fundamentally, I don't see a reason why machines could not get the same filtering capabilities over time by adjusting their architecture.
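
As a toy sketch of that generator-plus-filter picture (illustrative Python only, not a claim about any real neural architecture): raw noise on its own is garbage, but noise plus a purpose-directed filter starts looking like directed output.

    import random

    WORDS = ["the", "sun", "rises", "sets", "east", "west", "moon"]

    def hallucinate(n_words=4):
        # Unfiltered noise: random word combinations, mostly garbage.
        return " ".join(random.choices(WORDS, k=n_words))

    def fitness(candidate, goal_terms):
        # The hypothetical "filter" layer: score noise against a purpose.
        return sum(term in candidate.split() for term in goal_terms)

    def directed_thought(goal_terms, tries=1000):
        # Generate many hallucinations, keep the most purposeful one.
        candidates = [hallucinate() for _ in range(tries)]
        return max(candidates, key=lambda c: fitness(c, goal_terms))

    print(directed_thought(["sun", "east"]))  # e.g. "the sun rises east"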


No they don't. When an LLM is queried about how exactly it arrived at a specific output, it will happily produce text resembling thinking, with all the required human-like terminology. The problem is that this doesn't match at all how the LLM program actually calculated the output. So the "thinking" steps are just more generated BS, to fool us further.

One point to think about: an entity being tested for intelligence/thinking/etc. only needs to fail once to prove that it is not thinking, while the reverse requires the opposite: to prove that a program is thinking, it must pass 100% of the tests, or the result is failure. And we all know many cases where LLMs are clearly not thinking, just like in my example above. So the case is rather clear for the current gen of LLMs.


This is an interesting point. While I agree with the article, don't think LLMs are more than sophisticated autocomplete, and believe there's way more to human intelligence than matrix multiplication, humans also in many cases cannot explain why they did what they did.

Of course the most famous and clear examples are the split-brain experiments, which show post hoc rationalization [0].

And then there are the Libet experiments [1] showing that your conscious experience is only realized after the triggering brain activity. While this doesn't show that you cannot explain why, it does seem to indicate your explanation is post hoc.

0: https://www.neuroscienceof.com/human-nature-blog/decision-ma...

1: https://www.informationphilosopher.com/freedom/libet_experim...


I agree, but here we are veering into more complex decision making. I was talking about much simpler cases, like going through a handful of simple steps for a simple task. Take addition: ask a person to sum two numbers and then ask them to explain what they just did step by step, and they will be able to do it. They may even make a mistake in the process, but the general algorithm will match what actually happened. Query an LLM the same way, and while its answer would be correct for a human, it won't match what the LLM actually did to calculate the result. This is what outs LLM "thinking" for me: they just generate very plausible intermediate steps too.
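
To make the contrast concrete, here is the kind of step-by-step trace a person would report for addition (a toy Python sketch, purely illustrative). An LLM asked the same question produces a similar-looking trace, but as generated text, not as a log of the matrix operations that actually produced its answer.

    def add_with_steps(a: int, b: int):
        # Grade-school column addition: the algorithm a person would describe.
        steps, carry, digits = [], 0, []
        da, db = str(a)[::-1], str(b)[::-1]
        for i in range(max(len(da), len(db))):
            x = int(da[i]) if i < len(da) else 0
            y = int(db[i]) if i < len(db) else 0
            s = x + y + carry
            steps.append(f"column {i}: {x} + {y} + carry {carry} = {s}")
            carry, digit = divmod(s, 10)
            digits.append(str(digit))
        if carry:
            digits.append(str(carry))
        return int("".join(reversed(digits))), steps

    total, steps = add_with_steps(478, 256)
    print(total)   # 734
    for s in steps:
        print(s)   # "column 0: 8 + 6 + carry 0 = 14", etc.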

Too big to fail now

If it only takes a few years for a private entity to become "too big to fail" and quasi-immune to government regulation, we have a real problem.

Ah yeah, and honestly we do seem to have a real problem. Here's hoping OpenAI doesn't get the bailout they seem to be angling for.

Yes, the same, it's been my de facto HN homepage for years now. The chronological feed is much more convenient.


Each Positron installation annihilates one Chrome clone from the PC and frees up 1-2 Gigajoules of RAM in the process.


I suspect that most of the trades are off-chain now, due to blockchain being complete and utter crap for fast transactions by design. So people entrust their tokens to a centralized entity and receive some IOUs from it, with which they trade on that centralized platform. Basically unlicensed banks, recreated with all the negatives of a bank and none of the benefits.
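
Roughly this model, as an illustrative Python sketch (all names hypothetical): "trades" are just edits to the platform's internal IOU ledger, and the chain is only touched on deposit and withdrawal.

    class CustodialExchange:
        def __init__(self):
            self.balances = {}  # user -> token -> IOU amount

        def deposit(self, user, token, amount):
            # The only slow, on-chain step; everything after is internal.
            self.balances.setdefault(user, {}).setdefault(token, 0)
            self.balances[user][token] += amount

        def trade(self, buyer, seller, token, amount, price, quote="USDT"):
            # Pure bookkeeping: no blockchain transaction happens at all.
            for user, t in ((buyer, token), (buyer, quote),
                            (seller, token), (seller, quote)):
                self.balances.setdefault(user, {}).setdefault(t, 0)
            cost = amount * price
            assert self.balances[buyer][quote] >= cost
            assert self.balances[seller][token] >= amount
            self.balances[buyer][quote] -= cost
            self.balances[buyer][token] += amount
            self.balances[seller][token] -= amount
            self.balances[seller][quote] += cost

        def withdraw(self, user, token, amount):
            # Only here does on-chain settlement occur, if the platform
            # is solvent and willing; until then users hold bare IOUs.
            assert self.balances[user].get(token, 0) >= amount
            self.balances[user][token] -= amount

    ex = CustodialExchange()
    ex.deposit("alice", "USDT", 1000)
    ex.deposit("bob", "BTC", 1)
    ex.trade(buyer="alice", seller="bob", token="BTC", amount=0.01, price=50000)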


a) Money makes up for a lot of systemic deficiencies. Add to that taxes, which are insanely high all across the EU, while salaries are several times lower here too.

b) English as a first-class language. While there are many job offers promising an English-only requirement, in reality a significant part of these unofficially presume you will speak the local language at a proficient level within the team, and the hiring process filters for that.

c) EU countries have a lot of their own bureaucratic hell regarding immigration. For example, I'm now on Blue Card status in the EU, which is a high-skilled immigration program. I need to renew my card for the next 1 to 3 year period; I started the process in Sep 2025 and in the best case will get the card around winter 26/27. Worst case, add half a year more to that. If I want to get a passport, originally I had to dance through these hoops for a minimum of 9 years. Just recently it increased to 11 years. And right now there is a law proposal in the parliament increasing this term to a minimum of 17 years (among other inane requirements). If that passes, they may increase it even more in the future, making all immigrants live on a flimsy status for decades. The USA at least makes the process faster, even if unpredictable.

tl;dr - the EU is nice, just like the USA is nice in its own way, but in both places immigrants have to put up with a lot of legal BS.

