Hacker News | masklinn's comments

> I remember these Macbooks did tend to break apart at the corners of the palmrests.

Not the corners for me, but the "feet" of the topcase digging into the palmrest, which would splinter the plastic, then you'd have holes in the case and jagged plastic splinters digging into your wrist as you typed, not enjoyable.

This: https://ismh.s3.amazonaws.com/2014-02-24-macbook-topcase.jpg is exactly what mine had, on both sides.

Shame because it was the last macbook that was really easy to upgrade: the battery was removable (with a simple lock), and behind it were the RAM and 2.5" drive slots.

The next generation was not that hard but you had to unscrew the entire bottom shell, and the battery was glued.


Unscrewing the bottom on the generations after this gave you access to nearly everything. Which was vastly superior for most repairs. Getting to the logic board or AirPort card on the polycarbonate MacBook took significantly longer. For the Bluetooth motherboard you had to remove the display cable, optical drive and HDD.

And JITs often don't care for type specifications as they can generally get better info from the runtime values, need to support that anyway, and for languages like python the type specifications can be complete lies anyway. They also might support (and substitute) optimised versions of types internally (e.g. pypy has supported list specialisation for a long time).

Maybe it's changed since, but last I checked the JVM's JIT did not care at all for java's types.

Which is not to say JITs don't indirectly benefit, mind: type annotations tend to encourage monomorphic code, which JITs do like a lot. But unlike most AOT compilers, they don't mind that the annotations are polymorphic as long as the runtime behaviour is monomorphic...
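The "complete lies" point is easy to demonstrate (a minimal sketch; `double` is a made-up function): CPython never enforces annotations at runtime, so a JIT has to profile the actual values regardless of what was declared.

```python
# Annotations are documentation, not contracts: CPython does not
# check them at runtime, so a JIT cannot trust them and profiles
# the concrete runtime values instead.
def double(x: int) -> int:
    return x * 2

print(double(3))     # 6 - matches the annotation
print(double("ab"))  # abab - the annotation was a "complete lie"
```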


PyPy may not care in principle, but RPython does, being a kind of python dialect designed for static compilation that is intended for writing JIT engines like PyPy.

> It is also lagging behind in terms of Python releases.

Which it has always been, especially since Python 3, as anyone who's followed the pypy project over the last decade is well aware.


The problem is that it is lagging behind enough that it is falling out of the support window for a lot of libraries.

Imagine someone releases RustPy tomorrow, which supports Python 2.7. Is it maintained? Technically, yes - it is just lagging behind a few releases. Should tooling give a big fat warning about it being essentially unusable if you try to use it with the 2026 Python ecosystem? Also yes.


> The problem is that it is lagging behind enough that it is falling out of the support window for a lot of libraries.

Which is a concern for those libraries, I've not seen one thread criticising (or even discussing) numpy's decision.

> Should tooling give a big fat warning about it being essentially unusable if you try to use it with the 2026 Python ecosystem? Also yes.

But it's not, and either way that has nothing to do with uv, it has to do with people who use pypy and the libraries they want to use.


> It’s been a lot longer than that.

pypy 7.3.20, officially supporting python 3.11, was released in july 2025: https://pypy.org/posts/2025/07/pypy-v7320-release.html

We're in March 2026. That's 9 months, which is exactly what GP stated.

> There was a reasonable sized effort to provide binaries via conda-forge but the users never came.

How is that in any way relevant to the maintenance status of pypy?


Which is just as wrong.

I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.

There may be non-zero maintenance work happening, but a project that only maintains support for old versions and will never adopt new ones is functionally one that the ecosystem will eventually forget about. Maybe you call that "under active development" but my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."

On the other hand, if you don't support new minor versions day one, but you eventually support them, that's quite different.


More specifically, the Scientific Python community through SPEC 0[0] recommends that support for Python versions is dropped three years after their release. Python 3.12 was released in October 2023[1], so that community is going to drop support for it in October 2026.

Considering that PyPy is only just now starting to seriously work on supporting 3.12, there's a pretty high chance that it won't even be ready for use before becoming obsolete. At that point it doesn't even matter whether you want to call it "in active development", it is simply too far behind to be relevant.

[0]: https://scientific-python.org/specs/spec-0000/

[1]: https://www.python.org/downloads/release/python-3120/
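For concreteness, the SPEC 0 window works out like this (a rough sketch; the three-year rule and the 3.12.0 release date are from the links above, the function name is made up):

```python
from datetime import date

# SPEC 0: support for a Python version is dropped roughly
# three years after that version's initial release.
def spec0_drop_date(released: date) -> date:
    return released.replace(year=released.year + 3)

py312_released = date(2023, 10, 2)  # Python 3.12.0 release
print(spec0_drop_date(py312_released))  # 2026-10-02
```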


What's the point of a three year window? It seems like a weird middle-point. Either you are in a position to choose/install your own interpreter and libraries or you are not.

If you can choose your own versions and care at all about new releases, you can track latest and greatest with at the very most a few months of lag. Six months of "support" is luxurious in this scenario.

If you can't choose your own versions, you are most likely stuck on some sort of LTS Linux and will need to make do with what they provide. In that case three years is a cruel joke, because almost everything will be more than three years old when it is first deployed in your environment.


I guess the point of a three year window is to let the ecosystem, at some point, adopt new language features.

When you have some kind of ecosystem rule for that, you can make these upgrade decisions with a lot more confidence.

For example, in my project I have a dependency on zstandard. In 3.14, zstd support was added to the standard library. With this ecosystem-wide three-year support cycle I can in good confidence drop the dependency in three years and use the standard library from then on.

I feel like it just prevents the ecosystem from going stale: without such a rule, some important core library keeps supporting a really old version so as not to exclude a large user base, which in turn prevents other, smaller libraries from using new language features.
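A sketch of the kind of version gate this implies (the function name is illustrative; per PEP 784 the stdlib module lives at compression.zstd):

```python
# Pick a zstd implementation based on the ecosystem's Python floor:
# the stdlib module once the floor reaches 3.14 (PEP 784), the
# third-party 'zstandard' package before that.
def zstd_provider(python_version: tuple) -> str:
    return "compression.zstd" if python_version >= (3, 14) else "zstandard"

print(zstd_provider((3, 12)))  # zstandard
print(zstd_provider((3, 14)))  # compression.zstd
```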


They appear to be talking about CPython implementations, taking into account when those versions continue to be supported (in the sense of security updates). That's irrelevant for PyPy, which clearly supports Python versions on a different schedule.

It's not irrelevant, because if SPEC 0 says that a particular Python version is no longer supported, then libraries that follow it won't avoid language or standard library features that that version doesn't have. And then those libraries won't work in the corresponding PyPy version. If there isn't a newer PyPy version to upgrade to, then they won't work in PyPy at all.

You might make a different decision if you were targeting PyPy.

This is silly, there's no killer feature for scientific computing being added to python that would make an existing pypy codebase drop that dependency; getting a codebase validated takes a long time, and dropping something like pypy would require re-validating the entire thing.

The phenomenon you're describing is why Cobol programmers still exist, and simultaneously why it's increasingly irrelevant to most programmers.

The killer feature is the ecosystem: easily and reliably reusing other libraries and tools that work out of the box with other Python code written in the last few years. There are individually neat features motivating the efforts involved in upgrading a widely-used language & engine as well, but that kind of thinking misses the forest for the trees unfortunately.

It's a bit surprising to me, in the age of AI coding, for this to be a problem. Most features seem friendly to bootstrapping with automation (ex: f-strings that support ' not just "), and it's interesting if any don't fall in that camp. The main discussion seems to still be framed by the 2024 comments, before Claude Code etc became widespread: https://github.com/orgs/pypy/discussions/5145 .


The alternative is when you run a script that you last used a few years ago and now need it again for some reason (very common in research) and you might end up spending way too much time making it work with your now upgraded stack.

Sure, you could and arguably should have pinned dependencies, but that's a lot of overhead for a random script...


Unfortunately python does add features in a drip-drip kind of way that makes being behind an experience with a lot of niggles. This is particularly the case for the type annotation system, which was retrofitted onto a language that obviously didn't have one originally. So it's being added slowly, in a very conservative way, and there are a lot of limitations and pain points that are gradually being improved (or at least progressed on). The upcoming lazy module loading will also immediately become a sticking point.

> I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.

There is literally a Python 3.12 milestone in the bug tracker.

> my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."

It sounds a lot more like your actual response is "I don't care about pypy".

Which is fine, most people don't to start with. You don't have to pretend just to concern-troll the project.


To be fair that’s literally just a waste of resources. If you want 128 random bits just get 128 random bits from the underlying source; unless your host language is amazingly deficient it’s just as easy.

That the problem is already solved does not mean the solution is good. Or that you can’t solve it better.

A uuidv4 is 15.25 bytes of payload encoded in 36 bytes (using standard serialisation), in a format which is not conducive to gui text selection.

You can encode 16 whole bytes in 26 bytes of easily selectable content by using a random source and encoding in base32, or 22 by using base58.
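For the curious, the arithmetic checks out with stdlib Python (base58 has no stdlib encoder, so only the base32 variant is shown):

```python
import base64
import secrets
import uuid

# A uuid4 spends 36 characters on 122 random bits...
u = str(uuid.uuid4())
print(len(u))  # 36

# ...while base32 fits a full 16 random bytes (128 bits) in 26
# characters, all of them double-click-selectable in a GUI.
token = base64.b32encode(secrets.token_bytes(16)).decode().rstrip("=")
print(len(token))  # 26
```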


The global uniqueness of a uuid v4 is the global uniqueness of pulling 122 bits from a source of entropy. Structure has nothing to do with it, and pulling 128 bits from the same source is strictly (if not massively) superior at that.

I stand corrected. I was thinking of the sequential nature of UUID v7, or SQL Server's sequential IDs.

> Models in the past did not attempt to account for non-anthropogenic carbon emissions

They're literally mentioned by the first IPCC report already.


Early IPCC reports, all the way up to AR5, basically threw their hands up when it came to permafrost emissions. They admitted we didn't have the necessary data yet and for the most part didn't account for it at all in their models.

Check out the 1.5C special report. Go to section 2.2.1.2, last paragraph says

> The reduced complexity climate models employed in this assessment do not take into account permafrost or non-CO2 Earth system feedbacks, although the MAGICC model has a permafrost module that can be enabled. Taking the current climate and Earth system feedbacks understanding together, there is a possibility that these models would underestimate the longer-term future temperature response to stringent emission pathways

https://www.ipcc.ch/sr15/chapter/chapter-2/#:~:text=Geophysi...


The claim being discussed is not that they didn’t account for it, but that they didn’t attempt to account for it. Reading that text, I think they did, but chose not to include it (I guess because they didn’t need to in order to make their point and, by not including it, prevented opponents from arguing about the validity of the result based on uncertainties in those models).

I don't get the distinction you're trying to make. It seems to me they considered it, but did not even attempt to account for it.

They admitted limitations of the data/research they had available. Their model explicitly does not attempt to account for it.


Is it fair to say they account for it, but don’t try to quantify it?

It did not factor into their models at all. They simply mentioned it, mostly as an asterisk for why their models are likely an underestimate.

> Edit, didn't realise it was this bad:

It's probably not bottomed out yet, some of those trips were booked months in advance and not cancellable without taking a financial hit.

