Hacker News | anonnon's comments

And yet, has anyone ever claimed regular marijuana use improved their memory?

Ok fine, I'll chime in.

No, it has not improved my memory, though I'm not really certain anything does, at least not permanently. I will say the effects on memory are rather complex: diminished abilities in some domains, but oddly some enhancements in a select few.

I am 'neurodivergent' apparently, so my experiences might not be worth much.


Did you not see this: https://news.ycombinator.com/item?id=47282777

Look at how many updoots it has. Look at how many vacuous, enthusiastic replies it got. That post is especially egregious, but you see stuff like that on a lesser scale every day here, now. My favorite bit is when they go out of their way to shill specific plans/pricing, e.g.:

> You really NEED the $200 Claude MAX plan.


> The chip machines Taiwan uses come from Europe, for example.

Yeah, the EUV photolithography machines, but not much else. American companies like Lam Research and Applied Materials are the leaders in thin-film deposition and etch, KLA-Tencor is the leader in metrology, and Synopsys and Cadence are the leaders in EDA (though there's also Mentor Graphics, now Siemens EDA under Germany's Siemens).


This chart: https://www.longtermtrends.com/home-price-vs-inflation/

And Case-Shiller is a repeat-sales index (it tracks price changes of the same homes over time), so the argument that houses have gotten bigger is moot.
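For reference, the repeat-sales idea can be sketched as a toy calculation. This is hypothetical data and a deliberate simplification; the real index uses a weighted regression over many sale pairs:

```python
# Toy sketch of the repeat-sales idea behind Case-Shiller (hypothetical data).
# The index is built only from price changes of the *same* home sold twice,
# so size and quality are held roughly constant by construction.

# (home_id, first_sale_price, second_sale_price)
pairs = [
    ("A", 200_000, 260_000),
    ("B", 350_000, 420_000),
    ("C", 150_000, 180_000),
]

# Geometric mean of the price ratios gives the index level
# (base period normalized to 1.0).
product = 1.0
for _, p0, p1 in pairs:
    product *= p1 / p0
index = product ** (1 / len(pairs))
print(round(index, 3))
```

Because each pair compares a house to itself, "new houses are bigger now" doesn't move this index the way it would a simple median-price series.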


> The MIT license

I still can't believe that developers got memed into this being the default license. 20 years ago, you'd default to the GPL and only opt for something else if it was a complete non-starter, in which case you'd turn to the LGPL (e.g., for a C library), and failing that, some BSD variant. But developers were generally careful to prefer the GPL wherever they could, to prevent exploitation and maximize user freedom.

It's crazy that even in compiled languages like Rust, MIT is now the default, though I think that's probably due to the lack of a stable ABI complicating dynamic linking enough to make LGPL less viable.


> that's probably due to the lack of a stable ABI complicating dynamic linking enough to make LGPL less viable

They could use the MPL. It's an alternative "weak copyleft" license that isn't concerned with dynamic linking.
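For a Rust crate, that's just a matter of declaring the SPDX identifier in Cargo.toml (hypothetical crate shown):

```toml
# Hypothetical Cargo.toml declaring MPL-2.0 for a Rust crate.
# The MPL's copyleft is file-scoped, so it applies the same way
# whether the crate is linked statically or dynamically.
[package]
name = "example-crate"   # hypothetical name
version = "0.1.0"
edition = "2021"
license = "MPL-2.0"      # SPDX identifier
```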


> submitted a fix and told them how to and I received a barrage of comments about working for free for a corporation that's making money off me

After it became obvious 1) that these LLMs were trained heavily on OSS; 2) that they arguably violated, wantonly, the licenses of the OSS they were trained on (even the most permissive of which mandate attribution); 3) that LLMs could be used to rewrite code whose terms (e.g., copyleft) were deemed unsuitable for certain commercial purposes, nullifying those terms; and 4) that these LLMs would ultimately be used to reduce demand for developers and suppress developer wages (even as the cost of living keeps rising, and the cost of compute, once deflationary, now rises quickly as well, ironically thanks to LLMs), the culture of unbounded enthusiasm for open source among devs ought to have been quickly supplanted by one of peer pressure, bordering on public shaming, against open-source participation.

Yet people still go out of their way to open-source projects, or to work, uncompensated, on open source beyond the "good citizen" stuff of reporting bugs (possibly with fixes) in the things they use.

It really boggles the mind. Even if you can't starve the beast, why willingly feed it, and for free?


> I feel like anyone who used AI coding tools before 11/25 and after 1/26 (with frontier models) will say there has been a massive jump. There is a difference between whether an LLM can do a specific task or pass some arguably arbitrary checks by maintainers vs. what they are capable of.

How much of that is the model and how much of that is the tooling built around it? Also why is the tooling, specifically Claude Code, so buggy?


90% the model, if not more. Look at the Terminus agent on the Terminal-Bench benchmark; that mostly proves it.

> Well, on one hand they lack new data. Lot's of new code came out of an LLM, so it feeds back.

Supposedly data curation is a Big Deal at Big AI, and they're especially concerned about Ouroboros effects and poisoned data. Also, people are still contributing to open source and open-sourcing new projects, something that should have slowed to a trickle by 2023, once it became clear that from now on you're just providing fuel for the machines that will ultimately render you unemployable (or less employable), that those machines will completely disregard your license terms (including the most permissive ones, which seek only attribution), and that you're doing all of this for free.


> I've endured 50k runs

Did you see this? https://www.nytimes.com/2025/08/19/health/running-colon-canc...

But I agree with you, I would only want this done if I could get it without sedation.


> Among many, many other things, he invented the term "vibecoding".

Yeah, that's a great reason to hate him, but the person you're responding to asked why his Twitter braindroppings belong on the front page.

It should be stated, again, that Karpathy completely missed the boat on LLMs, leaving OpenAI before they developed ChatGPT, and that he convinced Tesla to pursue a camera-only, no-LiDAR approach to FSD that doesn't work and probably won't until after LiDAR-based systems have already solved FSD.

Karpathy is the AI-equivalent of Sam Altman, who, for whatever reason, only fails upward. I think many HNers like him because he reminds them of themselves. Look at this bullshit and tell me it doesn't read like something the average HNer would write: http://karpathy.github.io/2020/06/11/biohacking-lite/

