On a widely used open source project I maintain, I've been seeing PRs in the last month that are a little off: they look okay-ish at a glance but are trivial or solve problems in weird ways. When I look at the account, it started opening PRs within the last few weeks and has already opened hundreds of PRs spread over hundreds of repositories.
My understanding is that Astral's focus for ty has been on making a good experience for common issues; they plan for very high compliance eventually, but difficult or rare edge cases aren't prioritized.
Compliance suite numbers are biased towards edge cases rather than the common path, because edge cases are where a lot of the tests need to be added.
My advice is to see how each type checker runs against your own codebase and whether the output/performance is something you are happy with.
> My understanding is that Astral's focus for ty has been on making a good experience for common issues; they plan for very high compliance eventually, but difficult or rare edge cases aren't prioritized.
I would say that's true in terms of prioritization (there's a lot to do!), but not in terms of the final user experience that we are aiming for. We're not planning on punting on anything in the conformance suite, for instance.
AI is somewhat helpful, but I'm not interested in a company finding a way to make me pay to do my volunteer OSS work. GitHub Copilot offers a permanent free subscription for OSS maintainers.
I previously ignored a free offer when Claude reached out to me as an open source maintainer, because it was a glorified free trial. I hope this one continues beyond the listed 6 months; I'm not interested in another glorified free trial, and if it requires entering credit card details I won't be signing up.
As a pip maintainer I primarily think about resolution and resolver performance. I got involved when pip introduced its current resolver around late 2020, and landed my first PR in mid 2021.
I recently became a packaging maintainer, after working on fixing edge case behavior around specifiers and prerelease versions.
When I did some recent profiling I noticed that a LOT of time was being spent in packaging, largely parsing version strings. I found a few places in pip and packaging where the number of Version objects being created could be reduced, and Henry really ran with the idea of improving performance and made big improvements. I'm excited for this to be vendored in pip 26.0, coming out at the end of January.
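A minimal sketch of the general idea (not the actual pip/packaging patch; parse_version is a hypothetical helper): memoize parsing so repeated occurrences of the same version string reuse one Version object instead of re-running the parser each time:

    from functools import lru_cache

    from packaging.version import Version

    @lru_cache(maxsize=None)
    def parse_version(version_string: str) -> Version:
        # Parsing is the expensive part; each distinct string is now
        # parsed exactly once per process.
        return Version(version_string)

    # During a resolve the same versions show up over and over across
    # candidates and specifiers, so the cache hit rate is high.
    assert parse_version("1.26.4") is parse_version("1.26.4")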
If anyone is interested, the next big improvement for pip is likely implementing a real CDCL (Conflict-Driven Clause Learning) resolver algorithm, like uv's use of pubgrub-rs. That said, I do this in my spare time, so it may be a year or more before I make any real traction on it.
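To give a flavour of what clause learning buys you, here's a toy sketch (emphatically not pip's or pubgrub's design; the index, packages, and versions are all made up, and real CDCL works on version ranges with unit propagation and derives minimal incompatibilities rather than whole assignments): a backtracking resolver that records each dead-end combination of pins so no other branch re-explores it:

    # Hypothetical index: package -> version -> list of (dep, allowed versions)
    INDEX = {
        "app":  {1: [("lib", {1, 2}), ("util", {2})]},
        "lib":  {1: [("util", {1})], 2: [("util", {2})]},
        "util": {1: [], 2: []},
    }

    learned = []  # "clauses": sets of pins known to be jointly unsatisfiable

    def violates_learned(pins):
        items = set(pins.items())
        return any(clause <= items for clause in learned)

    def solve(reqs, pins):
        if not reqs:
            return dict(pins)
        (pkg, allowed), rest = reqs[0], reqs[1:]
        if pkg in pins:  # already pinned; the new requirement must agree
            return solve(rest, pins) if pins[pkg] in allowed else None
        for version in sorted(INDEX[pkg], reverse=True):  # prefer newest
            if version not in allowed:
                continue
            pins[pkg] = version
            if not violates_learned(pins):
                result = solve(rest + INDEX[pkg][version], pins)
                if result is not None:
                    return result
            del pins[pkg]
        # Every candidate failed under the current pins: record the dead
        # end so other branches skip it without re-searching.
        learned.append(frozenset(pins.items()))
        return None

    print(solve([("app", {1})], {}))  # {'app': 1, 'lib': 2, 'util': 2}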
> At one company I worked at, we had a system where each deploy got its own folder, and we'd update a symlink to point to the active one. It worked, but it was all manual, all custom, and all fragile.
The first time I saw this I thought it was one of the most elegant solutions I'd seen in my time working in technology. Safe to deploy the files, an atomic switch-over per machine, and trivial to roll back.
It may have been manual, but I'd worked with deployment processes that involved manually copying files to dozens of boxes and following a 10-to-20-step sequence of manual commands on each one. Even when I first got to use automated deployment tooling at a company I worked at, it was fragile, opaque, and a configuration nightmare: built primarily for OS installation of new servers and forced to work with applications.
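For anyone who hasn't seen the pattern, a minimal sketch of the symlink flip (my reconstruction with hypothetical paths, not that company's actual tooling):

    import os

    def activate_release(releases_dir: str, release: str) -> None:
        # Each release lives in its own directory; "current" is a symlink.
        link_path = os.path.join(releases_dir, "current")
        tmp_path = link_path + ".tmp"
        os.symlink(release, tmp_path)    # build the new link off to the side
        os.replace(tmp_path, link_path)  # rename(2) is atomic on POSIX:
                                         # readers see old or new, never neither

    # activate_release("/srv/app", "2024-05-01T12-00-00")  # deploy
    # activate_release("/srv/app", "2024-04-28T09-30-00")  # rollback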
I am now feeling old for using Capistrano even today. I think there might be “cooler and newer” ways to deploy, but I never felt the need to learn what those ways are since Capistrano gets the job done.
I did this too, but I used rsync, and you can tell rsync to use the previous version as the basis so it doesn't even need to upload everything all over again. Super duper quick to deploy.
I put that in a little bash script, so... I don't know if you'd call anything that isn't CI "manual", but I don't think it'd be hard to work it into some pipeline either.
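Roughly like this, I'd guess (a sketch with hypothetical paths; --link-dest is rsync's standard option for basing a new tree on an old one). Unchanged files become hard links to the previous release, so only deltas cross the wire and each release directory is still complete on its own:

    import subprocess

    def upload_release(src: str, host: str, new_dir: str, prev_dir: str) -> None:
        # Files identical to those in prev_dir are hard-linked on the
        # remote side rather than transferred or stored again.
        subprocess.run(
            ["rsync", "-a", "--link-dest", prev_dir, src + "/", f"{host}:{new_dir}/"],
            check=True,
        )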
Pip has been a flag bearer for Python packaging standards for some time now, so that alternatives can implement standards rather than copy behavior. So a lock file standard first had to be agreed upon, which finally happened this year: https://peps.python.org/pep-0751/
Now it's a matter of the maintainers, who are currently all volunteers donating their spare time, fully implementing support. Progress is happening, but it's a little slow because of that.
Python has about 40 keywords; I'd say I regularly use about 30 of them and irregularly use about another 5. Hardly seems like a "junkyard".
Further, this lack of first-class support for lazy importing has spawned multiple CPython forks that implement their own lazy importing or a modified version of the previously rejected PEP 690. Reducing the real-world need for forks seems worth the price of one keyword.
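For context, the closest thing the stdlib offers today is importlib.util.LazyLoader, along the lines of the recipe in the importlib docs; the ceremony involved is part of why people want first-class syntax:

    import importlib.util
    import sys

    def lazy_import(name: str):
        # The module object is created now, but its code doesn't execute
        # until the first attribute access.
        spec = importlib.util.find_spec(name)
        spec.loader = importlib.util.LazyLoader(spec.loader)
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        spec.loader.exec_module(module)
        return module

    json = lazy_import("json")  # nothing has run yet
    json.dumps({"a": 1})        # module body executes here, on first use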
Hard keywords:

    False     await     else      import    pass
    None      break     except    in        raise
    True      class     finally   is        return
    and       continue  for       lambda    try
    as        def       from      nonlocal  while
    assert    del       global    not       with
    async     elif      if        or        yield

Soft keywords:

    match     case      _         type
I think nonlocal/global are the only hard keywords I now barely use; of the soft ones, I rarely use pattern matching, so 5 seems like a good estimate.
> I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.
Isn't that exactly a pyproject.toml via the uv add/sync/run interface? What is it missing that you need?
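For what it's worth, a minimal example of that workflow (project name and dependency are placeholders): the manifest declares the language and dependency requirements, and uv keeps a project-local environment in sync without any shell activation:

    # pyproject.toml (the dependency line is what `uv add requests` writes)
    [project]
    name = "demo"
    version = "0.1.0"
    requires-python = ">=3.12"
    dependencies = ["requests>=2.32"]

    $ uv sync              # create/refresh .venv and uv.lock to match
    $ uv run python app.py # run inside that environment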