notatallshaw's comments

On a widely used open source project I maintain I've been seeing PRs in the last month that are a little off (look okayish but are trivial or trying to solve problems in weird ways), and then when I look at their account they started opening PRs within the last few weeks, and have opened hundreds of PRs spread over hundreds of repositories.

My understanding is that Astral's focus for ty has been on making a good experience for common issues; they plan for very high compliance eventually, but difficult or rare edge cases aren't prioritized.

Compliance suite numbers are biased towards edge cases and not the common path because that's where a lot of the tests need to be added.

My advice is to see how each type checker runs against your own codebase and whether the output/performance is something you are happy with.


> My understanding is that Astral's focus for ty has been on making a good experience for common issues; they plan for very high compliance eventually, but difficult or rare edge cases aren't prioritized.

I would say that's true in terms of prioritization (there's a lot to do!), but not in terms of the final user experience that we are aiming for. We're not planning on punting on anything in the conformance suite, for instance.


AI is somewhat helpful but I'm not interested in a company finding a way for me to pay to do my volunteer OSS work. GitHub Copilot offers a permanent free subscription for OSS maintainers.

I previously ignored a free offer when Claude reached out to me as an open source maintainer, as it was a glorified free trial. I hope this one continues beyond the listed 6 months; I am not interested in another glorified free trial, and if it requires entering credit card details I won't be signing up.


I assume lock and dependency files are in the training data, so version-number tokens have high probabilities associated with them.


As a pip maintainer I primarily think about resolution and resolver performance. I got involved when pip introduced its current resolver around late 2020, and got my first PR landed in mid 2021.

I recently became a packaging maintainer, from working on fixing edge case behavior around specifiers and prerelease versions.

When I did some recent profiling I noticed that A LOT of time was being spent in packaging, largely parsing version strings. I found a few changes in pip and packaging that reduced the number of Version objects being created; Henry really ran with the idea of improving performance and made big improvements. I'm excited for this to be vendored in pip 26.0, coming out at the end of January.
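The shape of that win can be sketched with a cache in front of version parsing. This is a hypothetical illustration with a toy parser (dotted release segments only), not packaging's actual Version class:

```python
from functools import lru_cache

# Resolvers see the same version strings over and over, so caching the
# parse avoids constructing a fresh object on every encounter.
@lru_cache(maxsize=None)
def parse_version(s: str) -> tuple[int, ...]:
    # Toy stand-in for packaging.version.Version: release segment only.
    return tuple(int(part) for part in s.split("."))

parse_version("1.2.3")
parse_version("1.2.3")  # second call is a cache hit, no re-parse
```

Tuples of ints also compare correctly for this toy case, e.g. `1.10.0` sorts above `1.9.9`, which naive string comparison gets wrong.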

If anyone is interested, the next big improvement for pip is likely to be implementing a real CDCL (Conflict-Driven Clause Learning) resolver algorithm, like uv's use of pubgrub-rs. That said, I do this in my spare time, so it may be a year or more before I make any real traction on implementing it.
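As a rough illustration of the core idea (a toy sketch, nothing like pip's or pubgrub's actual data structures): a conflict-driven resolver remembers each failed assignment as a learned incompatibility so the search never revisits the same dead end.

```python
from itertools import product

# Toy model: pick one version per package, subject to constraints.
versions = {"a": ["1", "2"], "b": ["1", "2"]}
# Illustrative constraint: a==2 requires b==2.
constraints = [lambda pick: not (pick["a"] == "2" and pick["b"] == "1")]
learned = set()  # conflicts "learned" from failed assignments

def solve():
    for combo in product(*versions.values()):
        pick = dict(zip(versions, combo))
        key = frozenset(pick.items())
        if key in learned:
            continue  # pruned: this dead end was already learned
        if all(c(pick) for c in constraints):
            return pick
        learned.add(key)  # record the conflict before moving on
    return None
```

Real CDCL goes further: it learns a *generalized* clause (the minimal cause of the conflict, not the whole assignment) and backjumps, which is what makes it scale.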


Can you recommend any resources on CDCL? I've just started reading up on Dart's primer [0] but I feel like it's slightly out of my grasp

[0] https://github.com/dart-lang/pub/blob/master/doc/solver.md


I found the diagrams on the Wikipedia page helped build an intuitive understanding of each step: https://en.wikipedia.org/wiki/Conflict-driven_clause_learnin...

Also, the pubgrub-rs guide I find has a gentle ramp in introducing complexity: https://pubgrub-rs-guide.pages.dev/internals/intro


> At one company I worked at, we had a system where each deploy got its own folder, and we'd update a symlink to point to the active one. It worked, but it was all manual, all custom, and all fragile.

The first time I saw this I thought it was one of the most elegant solutions I'd ever seen working in technology. Safe to deploy the files, atomic switch over per machine, and trivial to rollback.

It may have been manual, but I'd worked with deployment processes that involved manually copying files to dozens of boxes and following a 10-to-20-step runbook of manual commands on each box. Even when I first got to use automated deployment tooling at the company I worked at, it was fragile, opaque, and a configuration nightmare, built primarily for OS installation of new servers and awkwardly forced to work with applications.


> It may have been manual

It's pretty easy to automate a system that pushes directories and changes symlinks. I've used and built automation around the basic pattern.
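A minimal sketch of that pattern (paths and names are illustrative; `os.replace` is an atomic rename on POSIX, which is what makes the switchover safe):

```python
import os
import shutil
import tempfile

# Hypothetical release-directory + symlink deploy: stage the new files
# into their own directory, then atomically repoint a "current" symlink.
# Rollback is just repointing the link at the previous release.
root = tempfile.mkdtemp()
build = os.path.join(root, "build")
os.makedirs(build)
with open(os.path.join(build, "app.txt"), "w") as f:
    f.write("v2")

release = os.path.join(root, "releases", "20240101120000")
shutil.copytree(build, release)  # deploy the files; nothing is live yet

tmp_link = os.path.join(root, "current.tmp")
os.symlink(release, tmp_link)    # build the new link out of band
os.replace(tmp_link, os.path.join(root, "current"))  # atomic switchover
```

Anything serving from `current/` sees the old tree right up until the rename, then the new one, with no in-between state.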


It’s been a while (a decade?!) but if I recall correctly Capistrano did this for rails deployments too, didn’t it?


Not just rails. Capistrano is tech-stack agnostic; it's possible to deploy a nodejs project using Capistrano.

And yes, it's truly elegant.

Rollbacks become trivial should you need them.


I am now feeling old for using Capistrano even today. I think there might be "cooler and newer" ways to deploy, but I never felt the need to learn what those are, since Capistrano gets the job done.


I remember using mina and it was much faster than Capistrano. Sadly, it seems it's now unmaintained.


I did this, but I used rsync, and you can tell rsync to use the previous ver as the basis so it wouldn't even need to upload everything all over again. Super duper quick to deploy.

I put that in a little bash script, so... I don't know if you'd call anything that isn't CI "manual", but I don't think it'd be hard to work into a pipeline either.

I kind of miss it honestly.


We're getting there https://pip.pypa.io/en/stable/cli/pip_lock/ !

Pip has been a flag bearer for Python packaging standards for some time now, so that alternatives can implement standards rather than copy behavior. So first a lock file standard had to be agreed upon which finally happened this year: https://peps.python.org/pep-0751/

Now it's a matter of the maintainers, who are currently all volunteers donating their spare time, fully implementing support. Progress is happening, but it is a little slow because of this.


Unfortunately that was posted 1 month before the Faster CPython project was disbanded by Microsoft, so I imagine things have slowed.


Python has about 40 keywords; I'd say I regularly use about 30, and irregularly use about another 5. Hardly seems like a "junkyard".

Further, this lack of first-class support for lazy importing has spawned multiple CPython forks that implement their own lazy importing or a modified version of the previously rejected PEP 690. Reducing the real-world need for forks seems worth the price of one keyword.


For those curious here are the actual keywords (from https://docs.python.org/3/reference/lexical_analysis.html?ut... )

Hard Keywords:

False None True and as assert async await break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield

Soft Keywords:

match case _ type

I think nonlocal/global are the only hard keywords I now barely use; for the soft ones, I rarely use pattern matching, so 5 seems like a good estimate.
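The lists above can be checked programmatically: the stdlib `keyword` module exposes both (the soft-keyword list was added in Python 3.9, and exact contents vary slightly by version):

```python
import keyword

# keyword.kwlist holds the hard keywords, keyword.softkwlist the soft ones.
print(len(keyword.kwlist))   # 35 on recent CPython versions
print(keyword.softkwlist)    # e.g. ['_', 'case', 'match', 'type'] on 3.12+
```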


I recall when they added "async" and it broke a whole lot of libraries. I hope they never introduce new "hard" keywords again.


Removing "print" in 3.0 helped their case significantly, as well.


> I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.

Isn't that exactly pyproject.toml via the uv add/sync/run interface? What is it missing that you need?
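For reference, a minimal sketch of such a manifest (the name, versions, and dependency here are illustrative):

```toml
[project]
name = "example-app"          # illustrative project name
version = "0.1.0"
requires-python = ">=3.11"    # pins the language version range
dependencies = [
    "requests>=2.31",         # illustrative dependency
]
```

`uv add` edits this file, `uv sync` materializes it into a project-local environment pinned by `uv.lock`, and `uv run` executes commands inside that environment without any shell activation.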


> pyproject.toml

Ah ok I was missing this and this does sound like what I was expecting. Thank you!

