It’s much better with uv, but nothing beats the ergonomics of running go mod tidy && go run script.go.
I wish Python had a native way to build self-contained executables like Go does. The binary would be larger, since the interpreter would have to be bundled in, but as long as it’s self-contained, I wouldn’t mind.
I mean... at first blush it seems like `./scriptname` (which is what the `uv` shebang provides) beats having to remember two commands and && them together.
Pinning deps is a good thing, but it won't necessarily solve the issue of transitive dependencies (i.e. the dependencies of requests itself, for example), which will not be pinned themselves, given you don't have a lock file.
To be clear, a lock file is strictly the better option—but for single file scripts it's a bit overkill.
If there's a language that does this right, I'm all ears. But I haven't seen it.
The use case described is for a small one off script for use in CI, or a single file script you send off to a colleague over Slack. Very, very common scenario for many of us. If your script depends on
a => c
b => c
You can pin versions of those direct dependencies like "a" and "b" easy enough, but 2 years later you may not get the same version of "c", unless the authors of "a" and "b" handle their dependency constraints perfectly. In practice that's really hard and never happens.
The timestamp approach described above isn't perfect, but it would result in the same dep graph, and the same results, 99% of the time.
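FWIW, uv can already express something like this for single-file scripts: an exclude-newer date in the inline metadata (setting name as I recall it from uv's docs, so treat this as a sketch) caps resolution at that point in time:

  # /// script
  # dependencies = ["requests==2.31.0"]
  # [tool.uv]
  # exclude-newer = "2024-01-01T00:00:00Z"
  # ///

Anything published after that date is ignored during resolution, so the unpinned transitive deps (the "c" above) effectively stay frozen too.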
Try Scala with an Ammonite script like https://ammonite.io/#ScalaScripts . The JVM ecosystem does dependencies right, there's no need to "pin" in the first place because dependency resolution is deterministic to start with. (Upgrading to e.g. all newer patch versions of your current dependencies is easy, but you have to make an explicit action to do so, it will never happen "magically")
> 1 file, 2 files, N files, why does it matter how many files?
One file is better for sharing than N, you can post it in a messenger program like Slack and easily copy-and-paste (while this becomes annoying with more than one file), or upload this somewhere without needing to compress, etc.
> I can't think of any other language where "I want my script to use dependencies from the Internet, pinned to precise versions" is a thing.
This is the same issue you would have in any other programming language. If it is fine for possibly having breakage in the future you don't need to do it, but I can understand the use case for it.
I think it's a general principle across all of software engineering that, given the choice, it's better for fewer disparate locations in the codebase to need correlated changes.
Documentation is hard enough, and that's often right there at exactly the same location.
One could indicate implicit time-based pinning of transitive dependencies, using the point in time at which the depended-on versions were released. Not a perfect solution, but it's a possible approach.
I think OP was saying to look at when the package was built instead of explicitly adding a timestamp. Of course, this would only work if you specified `requests@1.2.3` instead of just `requests`.
This looks like a good strategy, but I wouldn't want it by default, since it would be very weird to suddenly have a script pull dependencies from 1999 without any explanation why.
I'm not a python packaging expert or anything but an issue I run into with lock files is they can become machine dependent (for example different flavors of torch on some machines vs others).
For completeness, there's also a script.py.lock file that can be checked into version control, but then you have twice as many files to maintain, and they can fall out of sync as people forget about the lock file or don't know what to do with it.
Thanks for sharing your experience, and what you have found to work.
Sometimes I feel we (fellow HN readers) get caught into overly complex rabbit holes, so it's good to balance it out with some down-to-earth, practical perspectives.
This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.
> You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment.
True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.
And it gets even more complex if you need different tools that have different Python version requirements.
>This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.
And of course you could be working with multiple distros and versions of the same distro; production and dev might be different environments; and there are tons of other concerns. You need something that just works across all of them.
You almost need to use Docker for deploying Python because the tooling is so bad that it's otherwise very difficult to get a reproducible environment. For many other languages the tooling works well enough that there's relatively little advantage to be had from Docker (although you can of course still use it).
>> You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment.
>True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.
"Globally" means installed with sudo. These are installed into the user folder under ~/.local/ and called a user install by pip.
I wouldn't call it "extremely brittle" either. It works fine until you upgrade to a new version of python, in which case you install the package again. Happens once a year perhaps.
The good part of this is that unused cruft will get left behind and then you can delete old folders in ~/.local/lib/python3.? etc. I've been doing this over a decade without issue.
> "Globally" means installed with sudo. These are installed into the user folder under ~/.local/ and called a user install by pip.
> It works fine until you upgrade to a new version of python, in which case you install the package again.
Debian/Ubuntu doesn't want you to do either, and tells you you'll break your system if you force it (the override flag is literally named "--break-system-packages"). Hell, if you're doing it with `sudo`, they're probably right - messing with the default Python installation (such as trying to upgrade it) is the quickest way to brick your Debian/Ubuntu box.
Incredibly annoying when your large project happens to use pip to install both libraries for the Python part, and tools like CMake and Conan, meaning you can't just put it all in a venv.
Ok, getting it now. I said upgrade python, and you thought I meant upgrade the system python in conflict with the distro. But that's not really what I meant. To clarify... I almost never touch the system python, but I upgrade the distro often. Almost every Ubuntu/Mint has a new system Python version these days.
So upgrade to new distro release, it has a new Python. Then pip install --user your user tools, twine, httpie, ruff, etc. Takes a few moments, perhaps once a year.
I do the same on Fedora, which I've been using more lately.
Nah, pip is still brittle here because it uses one package resolution context to install all your global tools. So if there is a dependency clash you are out of luck.
> exceedingly unlikely to because your user-wide tools should be few.
Why "should"? I think it's the other way around - Python culture has shied away from user-wide tools because it's known that they cause problems if you have more than a handful of them, and so e.g. Python profilers remain very underdeveloped.
There are simply few, and I don't shy away from them. Other than the tools now replaced by ruff, it's httpie, twine, ptpython, yt-dlp, and my own tools; I don't need anything else. Most "user" tools are provided by the system package manager.
All the other project-specific things go in venvs where they belong.
This is all a non-issue despite constant "end of the world" folks who never learned sysadmin and are terrified of an error.
If libraries conflict, uninstall them and put them in a venv. Why do all the work up front? I haven't had to do that in so long I forget how long it's been. Early this century.
> This is all a non-issue despite constant "end of the world" folks who never learned sysadmin and are terrified of an error.
It's not a non-issue. Yes it's not a showstopper, but it's a niggling drag on productivity. As someone who's used to the JVM but currently having to work in Python, everything to do with package management is just harder and more awkward than it needs to be (and every so often you just get stuck and have to rebuild a venv or what have you) and the quality of tooling is significantly worse as a result. And uv looks like the first of the zillions of Python package management tools to actually do the obvious correct thing and not just keep shooting yourself in the foot.
It’s not a drag if you ignore it and it doesn’t happen even once a decade.
Still I’m looking forward to uv because I’ve lost faith in pypa. They break things on purpose and then say they have no resources to fix it. Well they had the resources to break it.
But this doesn’t have much to do with installing tools into ~/.local.
> pip doesn't resolve dependencies of dependencies.
This is simply incorrect. In fact the reason it gets stuck on resolution sometimes is exactly because it resolved transitive dependencies and found that they were mutually incompatible.
Here's an example which will also help illustrate the rest of my reply. I make a venv for Python 3.8, and set up a new project with a deliberately poorly-thought-out pyproject.toml:
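Roughly this, give or take build-system boilerplate (the requires-python line is assumed, to match the 3.8 venv):

  [project]
  name = "example"
  version = "0.1.0"
  requires-python = ">=3.8"
  dependencies = [
      "numpy==1.17.3",
      "pandas==2.0.3",
  ]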
I've specified the oldest version of Numpy that has a manylinux wheel for Python 3.8 and the newest version of Pandas similarly. These are both acceptable for the venv separately, but mutually incompatible on purpose.
When I try to `pip install -e .` in the venv, Pip happily explains (granted the first line is a bit strange):
  ERROR: Cannot install example and example==0.1.0 because these package versions have conflicting dependencies.

  The conflict is caused by:
      example 0.1.0 depends on numpy==1.17.3
      pandas 2.0.3 depends on numpy>=1.20.3; python_version < "3.10"

  To fix this you could try to:
  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip to attempt to solve the dependency conflict
If I change the Numpy pin to 1.20.3, that's the version that gets installed. (`python-dateutil`, `pytz`, `six` and `tzdata` are also installed.) If I remove the Numpy requirement completely and start over, Numpy 1.24.4 is installed instead - the latest version compatible with Pandas' transitive specification of the dependency. Similarly if I unpin Pandas and ask for any version - Pip will try to install the latest version it can, and it turns out that the latest Pandas version that declares compatibility with 3.8, indeed allows for fetching 3.8-compatible dependencies. (Good job not breaking it, Pandas maintainers! Although usually this is trivial, because your dependencies are also actively maintained.)
> pip will only respect version pinning for dependencies you explicitly specify. So for example, say I am using pandas and I pin it to version X. If a dependency of pandas (say, numpy) isn't pinned as well, the underlying version of numpy can still change when I reinstall dependencies.
Well, sure; Pip can't respect a version pin that doesn't exist anywhere in your project. If the specific version of Pandas you want says that it's okay with a range of Numpy versions, then of course Pip has freedom to choose one of those versions. If that matters, you explicitly specify it. Other programs like uv can't fix this. They can only choose different resolution strategies, such as "don't update the transitive dependency if the environment already contains a compatible version", versus "try to use the most recent versions of everything that meet the specified compatibility requirements".
> To get around this with pip you would need an additional tool like pip-tools, which allows you to pin all dependencies, explicit and nested, to a lock file for true reproducibility.
No, you just use Pip's options to determine what's already in the environment (`pip list`, `pip freeze` etc.) and pin everything that needs pinning (whether with a Pip requirements file or with `pyproject.toml`). Nothing prevents you from listing your transitive dependencies in e.g. the [project.dependencies] of your pyproject.toml, and if you pin them, Pip will take that constraint into consideration. Lock files are for when you need to care about alternate package sources, checking hashes etc.; or for when you want an explicit representation of your dependency graph in metadata for the sake of other tooling.
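Concretely, something like this, treating the frozen file as a de facto lock:

  pip freeze > requirements.txt     # pins everything in the env, transitive deps included
  pip install -r requirements.txt   # reproduces that exact set elsewhere

Or, if you want to keep your hand-edited list separate, pass the frozen file as a constraints file: `pip install -r requirements.in -c requirements.txt` (requirements.in being whatever you call your loose, human-maintained list).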
> This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.
I have built versions 3.5 through 3.13 inclusive from source and have them installed in /opt and the binaries symlinked in /usr/local/bin. It's not difficult at all.
> True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.
What brittleness are you talking about? There's no reason why the tool needs to run in the same environment as the code it's operating on. You can install it in its own virtual environment, too. Since tools generally are applications, I use Pipx for this (which really just wraps a bit of environment management around Pip). It works great; for example I always have the standard build-frontend `build` (as `pyproject-build`) and the uploader `twine` available. They run from a guaranteed-compatible Python.
And they would if they were installed for the system Python, too. (I just, you know, don't want to do that because the system Python is the system package manager's responsibility.) The separate environments don't matter because the tool's code and the operated-on project's code don't even need to run at the same time, let alone in the same process. In fact, it would make no sense to be running the code while actively trying to build or upload it.
> And it gets even more complex if you need different tools that have different Python version requirements.
No, you just let each tool have the virtual environment it requires. And you can update them in-place in those environments, too.
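With Pipx that's literally a couple of commands (the --python flag, if memory serves, lets a tool pin its own interpreter):

  pipx install build
  pipx install twine
  pipx install --python python3.9 some-old-tool   # hypothetical tool that needs an older Python
  pipx upgrade-all                                # in-place updates, each tool in its own venv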
> This is simply incorrect. In fact the reason it gets stuck on resolution sometimes is exactly because it resolved transitive dependencies and found that they were mutually incompatible.
The confusion might be that this used to be a problem with pip. It looks like this changed around 2020, but before then pip would happily install broken versions. Looking it up, this change to the resolver happened in a minor release.
You have it exactly, except that Pip 20.3 isn't a "minor release" - since mid-2018, Pip has used quarterly calver, so that's just "the last release made in 2020". (I think there was some attempt at resolving package versions before that, it just didn't work adequately.)
Ah thank you for the correction, that makes sense - it seemed very odd for a minor version release.
I think a lot of people probably have strong memories of all the nonsense that earlier pip versions resulted in; I know I do. I didn't realise this was more of a solved problem now - the absence of an infrequent issue is hard to notice.
> Well, sure; Pip can't respect a version pin that doesn't exist anywhere in your project. If the specific version of Pandas you want says that it's okay with a range of Numpy versions, then of course Pip has freedom to choose one of those versions. If that matters, you explicitly specify it
Nearly every other language solves this better than this. What you're suggesting breaks down on large projects.
>Nearly every other language solves this better than this.
"Nearly every other language" determines the exact version of a library to use for you, when multiple versions would work, without you providing any input with which to make the decision?
If you mean "I have had a more pleasant UX with the equivalent tasks in several other programming languages", that's justifiable and common, but not at all the same.
>What you're suggesting breaks down on large projects.
Pinned transitive dependencies are the only meaningful data in a lockfile, unless you have to explicitly protect against supply chain attacks (i.e. use a private package source and/or verify hashes).
IMHO the clear separation between lockfile and deps in other package managers was a direct consequence of people being confused about what requirements.txt should be. It can be both, and could be for ages (pip freeze), but the defaults were not conducive to clear separation. If we had started with lockfile.txt and dependencies.txt, the world may have looked different. Alas.
The thing is, the distinction is purely semantic - Pip doesn't care. If you tell it all the exact versions of everything to install, it will still try to "solve" that - i.e., it will verify that what you've specified is mutually compatible, and check whether you left any dependencies out.
If all you need to do is ensure everyone's on the same versions of the libraries - if you aren't concerned with your supply chain, and you can accept that members of your team are on different platforms and thus getting different wheels for the same version, and you don't have platform-specific dependency requirements - then pinned transitive dependencies are all the metadata you need. pyproject.toml isn't generally intended for this, unless what you're developing is purely an application that shouldn't ever be depended on by anyone else or sharing an environment with anything but its own dependencies. But it would work. The requirements.txt approach also works.
If you do have platform-specific dependency requirements, then you can't actually use the same versions of libraries, by definition. But you can e.g. specify those requirements abstractly, see what the installer produces on your platform, and produce a concrete requirement-set for others on platforms sufficiently similar to yours.
(I don't know offhand if any build backends out there will translate abstract dependencies from an sdist into concrete ones in a platform-specific wheel. Might be a nice feature for application devs.)
Of course there are people and organizations that have use cases for "real" lockfiles that list provenance and file hashes, and record metadata about the dependency graph, or whatever. But that's about more than just keeping a team in sync.
The real world is a complicated place. You want simple answers when reality is complicated and nuanced.
The fact is that there are—and have always been—people for which these things are not the same. You might want to wish it away, but that doesn't change reality.
It just doesn't matter that much in my experience. If an issued command didn't work, it's easy to tell anyway (it's hot/cold), and you can just repeat it. HomeAssistant also has bits of special handling for items that don't communicate their state back, called "assumed state".
For the rare times I want to control my AC when being away from home, I have an air monitor nearby. I can just check if the temperature/humidity has changed, and repeat the command if it didn't work. If you _really_ cared you probably could script it to do it automagically, but I didn't feel the need to bother.
Yeah there’s very few edge cases, imo, where you need the feedback.
I have home assistant controlling an air conditioner in one room. (Well, mostly Node-RED.)
Every couple minutes it checks the temperature in the room and makes a decision on whether to call for cooling and tells the AC to turn on or off.
If it’s already on and cooling and it tells it to turn on… it’s a no-op, nothing happens. If it tells it to turn on and the command doesn’t go through… the room will stay warm so it will try the same thing in a couple of minutes. Same thing the other way (turning it off).
The remote has no feedback. I have found Tasmota IR 100% reliable over 3 years.
It sends the whole state on every transmission, so the IR side doesn't need a receiver.
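The whole decision loop is basically this sketch (the helper names are made up; the real thing is a couple of Node-RED nodes):

  import time

  TARGET = 24.0     # setpoint in degrees C
  DEADBAND = 0.5    # hysteresis so it doesn't flap around the setpoint

  def run(read_temperature, send_full_state):
      # read_temperature() -> float and send_full_state(cooling: bool) are
      # hypothetical stand-ins for the sensor and the Tasmota IR blaster.
      cooling = False
      while True:
          temp = read_temperature()
          if temp > TARGET + DEADBAND:
              cooling = True
          elif temp < TARGET - DEADBAND:
              cooling = False
          # Every transmission carries the complete desired state, so repeating
          # "on" to a unit that is already on is a no-op, and a dropped command
          # just gets corrected on the next pass.
          send_full_state(cooling)
          time.sleep(120)   # "every couple minutes"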
If the message sent over IR always contains the full state, then it's only a matter of checking that the message was received.
If you are in the room, you'll know soon enough, otherwise I guess it could be possible to rely on the audio feedback (a light beep) that the AC probably emits when it successfully receives a command. (and add a temperature sensor to check that it's working properly)
Not all state is necessarily transmitted over IR. For example, my unit has a button on the remote to turn the LED on or off; over the air this is just a toggle, only the AC knows which state the LED is in. (That said, that particular issue is easy enough to handle since changing any other parameter turns the LED back on, putting it back in a known state; there's no way to keep it off.)
AC is typically something you only need when you are inside the house, so it is not like any freak situation would occur. If it happens only super occasionally, at worst you just fix the homeassistant state manually using the remote.
I guess you should hide those remotes in a drawer and remove the batteries when you start using homeassistant.
Similar problem here. I've thought of getting IR receivers to also listen for the remote's IR signal, since you have to be able to encode the IR protocol anyways. But even then sometimes the AC unit doesn't get the signal from my remote, so I'm unsure if that's a remote issue or receiver issue.
The completely overkill setup would be to get a different remote control, get my DIY receiver to accept that and convert it to my AC unit's IR code, updating HA while at it. The remote's state would be out of sync still, but it'll keep the units in sync with HA.
A lot of remote controlled air conditioning systems (like mini-splits and windows units) send the entire state of the remote via the IR blaster every time a key is pressed so there's no chance of the two getting de-synced.
The announcement post from Fastmail does a good job of listing the advantages of Passkeys over passwords: replay resistant, database-leak resistant and phishing proof.
- database-leak resistant: if i'm understanding this correctly, this means a leaked database on the Fastmail side wouldn't compromise your Fastmail account? It's hard for me to imagine a situation where a compromise is serious enough that passwords are leaked, but nothing else?
- phishing proof: don't password managers already do this?
Re replay: No, because once someone has your password they can replay it as many times as they want. If you use your passkey on a compromised computer, the intercepted credentials can’t be reused.
Re DB leak: No, the concern is reused passwords (or similar passwords) from a different site.
Re phishing: Yes, but one of the FUDs against passkeys is that they lock you in to a vendor. There is no more lock-in than if you store your passwords in a manager.
Do you manually check every site's SSL certificate before connecting? If not, how can you be sure there's not a MITM/Replay attack ongoing right now?
Very commonly it's the user database that gets accessed for some reason, resulting in user data plus salted password hashes being released.
How so? I can social engineer an employee to give me the password for a site they have in the password manager. I can't make them give me the passkey because they can't do that. It's not something you can paste in a chat.
From a security perspective, not being able to “paste into chat” is a fundamental feature of passkeys. The whole point is to prevent a static secret which can easily be copied by an attacker, memorized, phished, or re-used across sites.
They sort of solve all these problems with a simpler implementation. But the disadvantage of passkeys is that you are dependent on a tech implementation ecosystem to use them, such as your phone, cloud keychain, etc. In practice, for a lot of people, that will mean tighter dependence on the smartphone, which is rather asinine as people should have the freedom to choose life without a big tech company providing for their needs.
Password managers such as Bitwarden and KeepassXC support creating and using Passkeys for accounts.
Presumably, you are already using a password manager at this point. Memorizing dozens of account passwords is not suitable for maintaining strong passwords.
Also, passwords still exist as a fallback if you need it, such as a situation where you don’t have your device available. And not all accounts have to use passkeys.
Passkeys are effectively like ssh keys. Do ssh keys “lock you down” to specific devices? Sure they absolutely do unless you generate more keys or have some key management/sync workflow.
Password managers are phishing resistant. The browser plugin will not offer to autocomplete passwords on an identical-looking punycode domain.
A sufficiently long, randomly generated password is also database-leak resistant. Good luck brute-forcing a 128-bit random string, hashed with scrypt or whatever.
So the only significant advantage is replay resistance. Which might or might not be a big deal, but let's not overplay the advantages.
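For a sense of scale, 128 bits is what Python's secrets module hands out for 16 random bytes, which is a perfectly ordinary password-manager length:

  import secrets
  print(secrets.token_urlsafe(16))   # ~22 URL-safe characters, 128 bits of entropy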
> Password managers are phishing resistant. The browser plugin will not offer to autocomplete passwords on an identical-looking punycode domain.
True … but the reaction to this by the vast majority of users is to go "stupid password manager autofill not working again", copy their password out of the pw manager, and paste it straight into the phishing site…
Well, IME this tends to happen on "let's be super secure and disable or otherwise break the login fields" sites, so I'm not sure these people will bother implementing actually useful security measures.
The phishing resistance isn't that straightforward in practice. It requires using browser extensions, which some people avoid for understandable reasons (poor security track record compared to everything else about password managers, and some of them just aren't very good). Many services use multiple domains (my bank has a .com, a .org, and several third-party vendor domains where you might be expected to enter your credentials), so many people who don't know how to update their password manager entries are probably in the habit of manually copying info into places where it doesn't autofill. And speaking of places where it doesn't autofill, the vast majority of mobile app developers seem to be unaware of things like autofill hints for login fields and apple-app-site-association.
The “database leak” argument is wider, though. It applies to password reuse (or systematic generation) and a leak from another site - one which may store passwords in plaintext, or otherwise have a compromised login procedure that leaks passwords regardless of how they’re stored for validation.
You could say - and rightly so - that a person who reuses passwords invited whatever pwnage they get. But these people walk among us, do not use a password manager (often because not tech savvy enough), and passkeys are usable for those people.
Are password managers resistant to social engineering? You can copy & paste a password to a "support chat" from the manager. You can't do that with a passkey.
The password is only resistant if whoever is storing it follows best practices, which are NOT enforced and which you really can't check from the outside.
Well if we're talking about social engineering, I don't think it will be difficult to convince the support guy at most companies to disable passkeys on the target account altogether. :(
If you can engineer "the support guy" then you can do a lot more than disable one passkey.
I'm talking about engineering on the other side, the person who has the password and uses it to log in. You can't social-engineer Miriam from Accounting into handing over her passkey, but you can with a password.
Just use something like this as a shebang, and you can have your cake and eat it too!
#!/usr/bin/env -S uv run
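Combined with PEP 723 inline metadata for the dependencies, the whole thing stays a single executable file, e.g. (requests and its version picked arbitrarily here):

  #!/usr/bin/env -S uv run
  # /// script
  # requires-python = ">=3.11"
  # dependencies = ["requests==2.31.0"]
  # ///
  import requests

  print(requests.get("https://example.com").status_code)

chmod +x it and `./script.py` does the rest.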
previous HN thread: https://news.ycombinator.com/item?id=43097006