That's the thing. To me that says that as soon as cash gets tight at OpenAI, the Astral staff will no longer get to work on Python tooling such as uv.
what do you mean "trusting" or "hurting the community"? i don't think uv has damaged anything yet. i'll use a tool from whoever if the risk profile is acceptable. given the level of quality in uv already, it seems very low risk to adopt no matter who the authors are: it's open source, it's easy to pin an old version, and if they really go off the deep end, i expect the python community as a whole will maintain a slow-moving but stable fork.
i'd love there to be infinite public free money we could spend on Making Good Software, but at least in the US there is vanishingly little public free money available, while there are huge sums of private free money even in the post-ZIRP era. If some VCs want to fund a team to write great open source software the rest of us get for free, i say "great, thanks!"
No, but Podman is. The recent escapes at the actual container level have been pretty edge case. It's been some years since a general container escape has been found. Docker's CVE-2025-9074 was totally unnecessary and due to Docker being Docker.
It looks to me like what is called a "container escape" in this context isn't necessarily as bad as it seems. For example, in the advisory for CVE-2025-31133 affecting runc[1]:
> Container Escape: ...Thus, the attacker can simply trigger a coredump and gain complete root privileges over the host.
Sounds bad. But...
> this flaw effectively allows any attacker that can spawn containers (with some degree of control over what kinds of containers are being spawned) to achieve the above goals.
The attacker already needs the capability to spawn containers! This isn't a case of "RCE within the container" -> "RCE outside the container", which is what I would assume, prima facie, on reading "container escape".
I have always thought that running an untrusted image within an unprivileged container was a safe thing to do and I still believe so.
The best container security in the world isn’t going to help you when the agent has credentials to third party services. Frankly, I don’t think bad actors care that much about exploiting agents to rm -rf /. It’s much more valuable to have your Google tokens or AWS credentials.
Well no, style is important too for humans when they read a codebase, so the LLMs the parent is running clearly have some value for them.
They're not claiming LLMs solved every problem, just that they made life easier by taking care of busywork that humans would otherwise be doing. Personally, I think this is quite a good use for them: offering suggestions on PRs, say, as long as humans still review them as well.
Some examples of complex transformations linters can't catch:
* Function names must start with a verb.
* Use standard algorithms instead of for loops.
* Refactor your code to use IIFEs to make variables constexpr.
The verb one is the best example. Since we work adjacent to hardware, people like creating functions on structs representing register state called "REGISTER_XYZ_FIELD_BIT_1()" and you can't tell if this gets the value of the first field bit or sets something called field bit to 1.
If you rename it to `getRegisterXyzFieldBit1()` or `setRegisterXyzFieldBitTo1()` at least it becomes clear what they're doing.
Also, it'd be interesting to see how many pre-2020 papers their "AI detector" marks as AI-generated. I distrust LLMs somewhat, but I distrust AI detectors even more.
at the end of the article they made a clear distinction between flawed and hallucinated citations. I feel it's hard to argue that a hallucinated citation could emerge from a mere mistake:
> Real Citation
> Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521:436-444, 2015.
>
> Flawed Citation
> Y. LeCun, Y. Bengio, and Geoff Hinton. Deep leaning. nature, 521(7553):436-444, 2015.
>
> Hallucinated Citation
> Samuel LeCun Jackson. Deep learning. Science & Nature: 23-45, 2021.
The viewport of this website is quite infuriating. I have to scroll horizontally to see the `cloc` output, but there's 3x the empty space on either side.