
That's right; Google also published a similar package named `fire`; I used it a lot.

> just `python your_file.py`

That's actually one of the points.

    j lint
is shorter than

    python scripts.py lint
and the letter `j` is what the index finger of your right hand is pointing to on the keyboard. There is usually a tactile bump on that key. It is something you can type very quickly. You can also install shell completion to improve the experience further.

Ergonomics matters.


Well, you can do the same trick with argh & a symlink in /usr/bin.

I'll agree this is some improvement in ergonomics if the executable supports tab completion.

With makefile + argh I can do this: put the script as a dependency of data file, and rebuild it if code changes.

    P=python3
    my_data_file.csv: my_script.py input_data.csv
        $P --flag1 --flag2 $^ $@
With your package, this becomes harder -- the file name and command name are now separated and must be checked for being in sync.

So, the package adds some features, but does it at some cost of other ways of interaction.


Regarding sync checks which Make has as a built-in feature, I am not yet positive how to best implement it. Maybe something like

    @pre(file_name=_file_is_in_sync)
    def build(file_name: Path):
        ...
I believe such libraries even exist, but I haven't yet researched the topic; if jeeves proves to be useful for the simplest use case, then it will make sense to expand its scope.
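A minimal sketch of how such a `@pre` decorator could work (the `pre` name and the keyword-to-validator mapping are assumptions taken from the snippet above; a validator returning False aborts the call):

```python
import functools


def pre(**validators):
    """Hypothetical decorator: run a validator per keyword argument
    before the task body executes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**kwargs):
            for name, validator in validators.items():
                if name in kwargs and not validator(kwargs[name]):
                    raise ValueError(f"precondition failed for {name!r}")
            return func(**kwargs)
        return wrapper
    return decorator


@pre(file_name=lambda p: str(p).endswith(".py"))
def build(file_name):
    return f"building {file_name}"
```

A real sync check would replace the lambda with a function comparing timestamps or hashes; this only shows where the hook would live.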


> argh & a symlink in /usr/bin.

- Is putting a symlink into /usr/bin/ that points to one of your project directories, probably residing in $HOME, good practice?

- As an alternative one can put a symlink into ~/bin; but what if you have multiple different projects?

- And they can have different jeeves commands and different plugins installed. For instance, the `j lint` implementations in the different projects I work on are completely different.

Because these are different projects.

Rather, I prefer to have jeeves automatically create the executable command in each virtual env, and its behavior in different virtual envs will be different.


I mean `sudo ln -s /usr/bin/python3 /usr/bin/p` can make a good shortcut for python. The script name can be tab-completed.


I see. The P key isn't that ergonomically placed, though; and again, one will have to do this time and again for each virtual environment, right?

Why not use native Python machinery, namely entry points, instead, and automate this away?
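For reference, an entry point declaration of this kind would look roughly like the following (a sketch assuming a setuptools backend; the module and function names are illustrative, not jeeves's actual ones):

```toml
# pyproject.toml fragment: installing the package into a virtualenv
# creates a `j` executable in that env's bin/ directory.
[project.scripts]
j = "jeeves_file:app"
```

Each virtual environment that installs the package gets its own `j` on PATH, which is what makes the behavior per-project.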


On a Dvorak keyboard it is in a very convenient place. Anyway, one can assign any letter. Not everyone uses virtualenv -- it is very frequent in Django, but I haven't seen it elsewhere since I stopped working with Django in 2016. The data science I have seen revolves around Jupyter and Docker.

Anyway, my point is that your package offers conveniences like those in existing packages, not for free but at a similar cost. (Actually, I also contemplated making my own plugin system for `argh`, but just wrote a single package with my own decorator to handle the few types I needed: pd.DataFrame and gpd.GeoDataFrame. https://github.com/culebron/erde/ )


I feel the majority still uses QWERTY keyboards; I assumed that when choosing the name and the shortcut for the tool.

Experiences differ, and de gustibus non est disputandum; in my bubble, virtualenv is a must-have for Python development.

I have multiple Python projects, they run different versions of Python itself and its libraries, and for many of them Docker would be an overkill; so it is hard for me to imagine my routine without virtual environments.

I employ pyenv to manage them but venv is a venv.


Saying 'GNU Make' specifically was out of habit. When I use Make, I do use GNU Make specifically. I do not have experience with BSD Make or other Make dialects.

> did actually do a suitable job of replacing gnu make

The title does not say _a replacement_, it says _an alternative_. I never intended to replace GNU Make; if I said that anywhere — that was a mistake on my part.

> do improve upon those who created unix

I did not intend to improve upon UNIX ideas or philosophy. The improvements I am aiming for are:

- developer experience,

- conciseness and maintainability of the code.

…and these are being addressed for a narrow use case. For that use case, in my experience, Make is very often used.

I argue that within the bounds of this use case, this is an alternative which can be of use to improve productivity and everyday experience.

> I don't really understand the full job make does, so here is something which only does a simpler job which I do understand, and I like python

I classify these assumptions about my understanding or misunderstanding as ad hominem and unprofessional.


The issue is that make(1) has lots of functionality... none of which is present here.

In other words, make(1) does a lot of things, one of which is to run shell commands. If this is the only thing you care about and the only thing your project offers, it would be better expressed as "a Pythonic way to run commands" or something, with no mention of make.


To change the wording in the title of this post would be improper in view of the multitude of comments referring to this title. I will consider rebranding this.


I appreciate and thank you for the integrity of not wanting to edit something after it's been referenced.


You can also install a jeeves plugin with pip. Say,

    pip install jeeves-yeti-pyproject
will provide you with jeeves, a bunch of commands, and a pack of dev dependencies which I personally happen to like and to use in my projects.

I don't believe Make has plugins.


That's some more indirection, besides putting a disconnected python file in the project directory.

I'm not sure whether it's suggested to install it globally, as a development dependency with poetry/similar, or with pipx https://pypa.github.io/pipx/

The "import sh" thing could have some users installing this package https://pypi.org/project/sh/

This in addition to it being known as "j". It has at least 3 names, jeeves, j, and sh.


For a Python project, I'd recommend doing this as a dev dependency.

* jeeves is the name of the project which I'm advertising, which converts a Python file into a set of commands;

* `j` is the name of the executable which the aforementioned project exposes;

* and `sh` is the library (the correct link to which you have provided) which is an optional dependency of `jeeves` and provides a more convenient interface for calling processes and executing commands from Python than `subprocess.run()`.


I looked at pyinvoke before I started jeeves. Roughly, as much as I can recall, there were a few reservations:

- no type hints for docs & validation, I wanted them

- a Makefile in its basic form is very concise and doesn't require a @task decorator; I didn't want one either

I didn't need dependency graph much.


> I didn't need dependency graph much.

So you made a Make alternative that isn't a Make alternative because you never needed Make in the first place?


There are a lot of projects, both commercial and OSS, which have a Makefile for linting, version releases, etc. — and never use its dependency graph feature; or their usage is so superficial that implementing it, say,

- via direct function calls in Python code,

- or using Typer callback functions

…is easy.

There are potential approaches worth exploring (annotations, decorators, …). I might come up with a list of options with syntax examples and ask for the community's thoughts; but at this point jeeves, as an MVP, is already useful for me and a few people I work with, and that motivated me to share it with the community.


The whole purpose of make is the dependency graph.


Please elaborate. Do you mean submitting commands via ssh to remote servers from the script?

The way I'd recommend to run shell commands from jeeves is `sh` library. It has features for ssh support: https://sh.readthedocs.io/en/latest/sections/contrib.html#ss...


I like sh for brevity of its API. I often have to use `_tty_out=False`, but this is easy to fix once and for all commands in a script:

    my_sh = sh.bake(_tty_out=False)
    my_sh.do_whatever()
The way sh captures output can apparently be altered, say, by providing a callable to the `_out` argument.


You could probably have created your own little wrapper on top of subprocess.Popen and dispatched stdout and stderr around. No need for an external library with bad tty/piping defaults just because it has a nice API (which needs to be tweaked with _tty_out=False anyway if I want to pipe the output to another command).
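A minimal sketch of such a wrapper over the standard library (the kwargs-to-flags convention is an assumption, just to show an sh-like call style is achievable without a dependency):

```python
import subprocess


def run(*args, **kwargs):
    """Run a command, translating keyword arguments into long flags
    (e.g. force_recreate=True -> --force-recreate) and returning stdout."""
    cmd = list(args)
    for key, value in kwargs.items():
        flag = "--" + key.replace("_", "-")
        cmd.append(flag if value is True else f"{flag}={value}")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```

`capture_output` avoids the tty-detection issue entirely, at the cost of not streaming output live.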

Btw does rich scrape the special formatting characters if piped?


An example from the top of my head:

    compose = sh.docker.compose.bake('-f', 'deploy/dev.yml')
    …
    compose.down()
    …
    compose.up('--force-recreate', '-d')
I feel this is a major improvement on top of Makefiles + shell commands in them. Nice API _matters_; it is ergonomics and therefore productivity.

You can specify `_out=rich.print` and it will work, but AFAIK it won't strip ANSI terminal formatting characters.

I had to fix these characters in my `sh`-based scripts but do not consider that a big deal.


Doesn't the snippet above hide the errors and warnings (since I do not see any code to print them)?

Does not seem very ergonomic to me.


No it does not, in fact; errors or warnings will pop up as an unhandled exception and print:

- command actually executed,

- snippet of stdout

- and snippet of stderr.

They can be handled using standard exception techniques.
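The standard-library analogue of this behavior, for comparison (sh raises its own exception type on a non-zero exit; with subprocess the equivalent opt-in is `check=True`):

```python
import subprocess
import sys

# check=True makes a non-zero exit code raise CalledProcessError,
# which carries the return code and any captured output.
try:
    subprocess.run([sys.executable, "-c", "raise SystemExit(3)"], check=True)
except subprocess.CalledProcessError as exc:
    print(f"command failed with exit code {exc.returncode}")
```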


Isn't that only for failed commands? So if the command succeeds with warnings, or if it fails but a retry passed, then you get no output.


The exception is raised if the command returns a non-zero exit code. If it returns zero, then the return value of, say,

    compose.up()
contains the command's stdout.

In addition, stdout can be redirected to a file, to another command, or to a callable, which will be called with chunks of stdout while the command is in operation.
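A plain-subprocess sketch of that callable pattern, for readers who want the behavior without the `sh` dependency (function name and line-based chunking are my assumptions):

```python
import subprocess


def stream(cmd, on_chunk):
    """Invoke on_chunk for each line of stdout while cmd runs —
    an analogue of passing a callable as sh's _out argument."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        on_chunk(line)
    return proc.wait()
```

For example, `stream(["make", "lint"], logfile.write)` would tee the output into a file as it is produced.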


Thanks, will check this out!


That's right. For projects which only use the subset of Make features described by (1), jeeves can be considered an alternative to Make, and I believe there are quite a few such projects.


With interpreted languages, this feature of Make is IMHO rarely called for. Task running is; a Makefile is still a standard way to do project maintenance, both locally and in CI. jeeves is an attempt to extract that feature of Make and provide it in a more accessible way.

Making the .PHONY and .ONESHELL labels history is one particular example.


Indeed, they are out of scope. But I still would argue this does have something in common with Make — just as Make, jeeves can be used as a command runner, in which role Make is oftentimes used as well.

Instead of `make` as the entry point for all project-specific commands (like `make lint`, `make build`, `make deploy`), users can rely upon `j` — both locally and in CI.

Regarding your particular points:

* dependency graphs aren't presently in scope; I haven't yet come up with a method of expressing them in Python that entirely satisfies me;

* I am not so sure I would want to implement pattern rules, due to the complexity they bring about. Maybe just writing explicit Python code would be enough for the cases where they're used. No strong opinion though.
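For readers unfamiliar with the feature, this is the kind of GNU Make pattern rule under discussion (file extensions here are illustrative, in the style of the Makefile earlier in the thread):

```make
# Build any .csv from the same-named .py script.
%.csv: %.py
	python3 $< > $@
```

One rule covers every matching target, which is precisely the generality that is awkward to express as individual Python functions.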


Having worked with Makefile and pushed it to limits, I'd say there are some deficiencies in the system you may try to tackle:

1. Databases. We ended up calling `psql -f some_script.sql && touch $my_task_name` and tracking changes with touch files. (Putting this into database on views or materialized views proved to be unsustainable.)

2. Datasets. If you just open a sqlite file, it's changed, and GNU Make thinks you must rebuild everything downstream. Datasets are mostly treated as row-order-independent, so hashing them as is does not always work.

3. Very expensive tasks that shouldn't be called always. Like my makefile had a script that parsed a million web pages, going around captchas via Tor, and touching the upstream files was to be avoided -- or if it happened, I had to manually touch the target, to avoid re-running that part.

4. Some targets can be updated and the result will always be new -- e.g. run a query to a live database, or news website. Some may produce the same. Would appreciate if any system has such a distinction.

5. Surprisingly, lots of alternative build systems don't do partial update. They only update every item in the deps graph.

If you manage to get any of these right, you'd be praised.

