smallpipe's comments | Hacker News

Having a bad day eh, let’s be mean to the disabled on the internet

Cars are moving towards something like that, mostly because copper is expensive and there are miles and miles of it in even a basic car these days

If Codex’s core quality is anything to go by, it’s time to create a community fork of uv


Maybe they are being acquired to improve the quality of Codex.


That's the thing. To me that says that as soon as cash becomes tight at OpenAI, the Astral staff will no longer get to work on Python tooling like uv.


Tale as old as time in SV; why we keep trusting venture capital to be the community's stewards, I have no idea.

We need public investment in open source, in the form of grants, not more private partnerships that somehow always seem to hurt the community.


what do you mean "trusting" or "hurting the community"? i don't think uv has damaged anything yet. i'll use a tool from whoever if the risk profile is acceptable. given the level of quality in uv already, it seems very low risk to adopt no matter who the authors are, because it's open source, it's easy to stay on an old version, and if they really go off the deep end, i expect the python community as a whole will maintain a slow-moving but stable fork.

i'd love there to be infinite public free money we could spend on Making Good Software, but at least in the US there is vanishingly little public free money available, while there are huge sums of private free money even in the post-ZIRP era. If some VCs want to fund a team to write great open source software the rest of us get for free, i say "great, thanks!"


> why we keep trusting venture capital to be the community's stewards I have no idea.

They bought the trust.


> we keep trusting venture capital to be the community's stewards

OpenAI isn't a VC. It's VC-backed. But so is Astral.


At least it’s in Rust.

Unlike those react-game-engine guys over at Claude


Docker is not a security boundary. You’re one prompt injection away from handing over your Gmail cookie.


No, but Podman is. The recent escapes at the actual container level have been pretty edge case. It's been some years since a general container escape has been found. Docker's CVE-2025-9074 was totally unnecessary and due to Docker being Docker.


No, they have not been. There were at least 16 container escapes last year; at least 8 of them were at the runtime layer.

I personally spent way too much time looking at this in the past month:

https://nanovms.com/blog/last-year-in-container-security

runc: https://www.cve.org/CVERecord?id=CVE-2025-31133

nvidia: https://www.cve.org/CVERecord?id=CVE-2025-23266

runc: https://www.cve.org/CVERecord?id=CVE-2025-52565

youki: https://www.cve.org/CVERecord?id=CVE-2025-54867

Also, last time I checked podman uses runc by default.


It looks to me like what is called a "container escape" in this context isn't necessarily as bad as it seems. For example, in the advisory for CVE-2025-31133 affecting runc[1]:

> Container Escape: ...Thus, the attacker can simply trigger a coredump and gain complete root privileges over the host.

Sounds bad. But...

> this flaw effectively allows any attacker that can spawn containers (with some degree of control over what kinds of containers are being spawned) to achieve the above goals.

The attacker already needs to have the capability to spawn containers! This isn't a case of "RCE within the container" -> "RCE outside the container", which is what I would assume prima facie on reading "container escape".

I have always thought that running an untrusted image within an unprivileged container was a safe thing to do and I still believe so.

[1] https://github.com/opencontainers/runc/security/advisories/G...


The best container security in the world isn’t going to help you when the agent has credentials to third party services. Frankly, I don’t think bad actors care that much about exploiting agents to rm -rf /. It’s much more valuable to have your Google tokens or AWS credentials.


> This has completely replaced human code review for anything that isn't functional correctness

Isn’t functional correctness pretty much the only thing that matters though?


Well no, style is important too for humans when they read a codebase, so the LLMs the parent is running clearly have some value for them.

They're not claiming LLMs solved every problem, just that they made life easier by taking care of busywork that humans would otherwise be doing. Personally I think this is quite a good use for them: offering suggestions on PRs, say, as long as humans still review them as well.


But isn't style already achievable by running e.g. GNU indent?


Some examples of complex transformations linters can't catch:

* Function names must start with a verb.

* Use standard algorithms instead of for loops.

* Refactor your code to use IIFEs to make variables constexpr.

The verb one is the best example. Since we work adjacent to hardware, people like creating functions on structs representing register state called "REGISTER_XYZ_FIELD_BIT_1()" and you can't tell if this gets the value of the first field bit or sets something called field bit to 1.

If you rename it to `getRegisterXyzFieldBit1()` or `setRegisterXyzFieldBitTo1()` at least it becomes clear what they're doing.
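A minimal sketch of the verb-first rule, using a made-up `RegisterXyz` struct (the names and bit layout are assumptions for illustration, not a real hardware API):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical register-state struct; everything here is invented
// to illustrate the naming rule, not an actual register map.
struct RegisterXyz {
    std::uint32_t value = 0;

    // Ambiguous original: REGISTER_XYZ_FIELD_BIT_1() -- does it read
    // field bit 1, or set something called "field bit" to 1? Can't tell.

    // Verb-first names make the direction of data flow obvious.
    std::uint32_t getFieldBit1() const { return (value >> 1) & 1u; }
    void setFieldBitTo1() { value |= (1u << 1); }
};
```

No linter rule catches the ambiguous version, because the name is syntactically fine; only a reviewer (human or LLM) noticing the missing verb will.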


Could you run a similar analysis for pre-2020 papers? It'd be interesting to know how prevalent making up sources was before LLMs.


Also, it'd be interesting how many pre-2020 papers their "AI detector" marks as AI-generated. I distrust LLMs somewhat, but I distrust AI detectors even more.


Yeah, it’s kind of meaningless to attribute this to AI without measuring the base rate.

It’s for sure plausible that it’s increasing, but I’m certain this kind of thing happened with humans too.


At the end of the article they made a clear distinction between flawed and hallucinated citations. I feel it's hard to argue that a hallucinated citation could emerge through a mere mistake:

> Real Citation: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521:436-444, 2015.

> Flawed Citation: Y. LeCun, Y. Bengio, and Geoff Hinton. Deep leaning. nature, 521(7553):436-444, 2015.

> Hallucinated Citation: Samuel LeCun Jackson. Deep learning. Science & Nature: 23-45, 2021.


Hopefully the malware authors have the same issue of filtering through garbage AI submissions


The viewport of this website is quite infuriating. I have to scroll horizontally to see the `cloc` output, but there's 3x the empty space on either side.


Science has no problem admitting something works without knowing why. Making stuff up to sell a book is preying on vulnerable people.


Doubt it. Avoiding jailbreaks, sure, to keep selling games, but no one cares about emulators.

