Hacker News | jolmg's comments

Different programs may take different amounts of time to cleanup and close. To know if a signal failed takes human judgment or heuristic. A program receiving a signal is even able to show a confirmation dialog for the user to save stuff, etc. before closing.

That's a valid point. Another example is SIGHUP, which will cause some programs to exit but other programs to reload their config file. In certain very specific cases, that could even cause harm.
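That split in behavior is easy to see with a throwaway script. This is just a sketch for the demo: the filenames and messages are made up, and the script treats SIGHUP as "reload config" while SIGTERM still means "exit".

```shell
# Demo handler: SIGHUP reloads, SIGTERM exits. Paths are throwaway.
cat > /tmp/hup_demo.sh <<'EOF'
#!/bin/sh
trap 'echo reloading config' HUP
trap 'echo shutting down; exit 0' TERM
while :; do sleep 1; done
EOF

sh /tmp/hup_demo.sh > /tmp/hup_demo.out &
pid=$!
sleep 1
kill -HUP "$pid"    # reloads; the script keeps running
sleep 2
kill -TERM "$pid"   # now it exits
wait "$pid"
cat /tmp/hup_demo.out
```

A program that instead leaves SIGHUP at its default disposition would simply die on the first `kill -HUP`, which is exactly why sending it blindly can do harm.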

So really what "kill" would be doing is automating a common procedure, which is different from taking responsibility for doing it correctly. It would need to be configurable.

I still think it would be a net benefit since right now incentives push people toward doing something the wrong way (even if they know better). But I can also see how it might give people a false sense of security or something along those lines.


> automating a common procedure

It's not common. If `kill` on its own (which does just SIGTERM) doesn't work, you're already in "something wrong is happening" territory, which is why:

>>> Given that this is the right way, it shouldn't be more tedious than the wrong way!

is also the wrong way to think about this. Trying a sequence of signals is not so much "the right way" as it is "the best way to handle a wrong situation". The right way is just `kill` on its own; SIGTERM should always suffice. If it doesn't work to the user's satisfaction, and for no justifiable reason, then you can just `kill -9`, but this should be rare.

Trying a sequence of SIGINT, SIGHUP, and SIGABRT is technically better than going straight to SIGKILL, but it's not really important unless you also want to write a bug report about the program's signal handling or fix it yourself. As for SIGINT and SIGHUP: if SIGTERM doesn't work, it's unlikely they would either; if they did, it would likely only be through oversight and the execution of default handlers.
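For illustration, the escalation people end up scripting usually looks something like this sketch. The helper name, signal list, and one-second wait are arbitrary choices for the example, not a recommendation; a real tool would make them configurable, as discussed above.

```shell
# Hypothetical helper: try gentler signals first, fall back to SIGKILL
# only if the process survives each one.
soft_kill() {
  pid="$1"
  for sig in TERM INT HUP KILL; do
    kill -s "$sig" "$pid" 2>/dev/null || return 0  # process already gone
    sleep 1                                        # give it time to clean up
    kill -0 "$pid" 2>/dev/null || return 0         # it exited
  done
  return 1  # survived even SIGKILL (e.g. uninterruptible sleep)
}
```

Note that the fixed one-second wait is exactly the weak point: different programs need different amounts of time to clean up, which is why this can't be one-size-fits-all.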

`kill -9` is just like `rm -rf`. I wouldn't suggest that `rm` automatically run with `-r` or `-f` when `rm` on its own didn't work, and I wouldn't call automatically trying those flags "the right way".


Loads the image at a few rows per second.

Works on Android. Trying it on regular Firefox on Pinephone Pro results in:

> This page is slowing down Firefox. To speed up your browser, stop this page.

Weirdly, the image animation doesn't render until I hit the "Debug Script" button that Firefox presents, which pauses execution. It's only with the JS paused that the animation begins.

The pause is at the `for (; b < a + 60; )` loop that drives an OscillatorNode, so I guess a sound is supposed to be played. I checked YouTube, and sound works there. I guess this loop prevents the firing of whatever event the animation depends on.

The loop does terminate; it's just really slow. Only once it ends does the sound play (I haven't used OscillatorNodes before; that's probably normal).


Checked for sound on Android's Chrome: there's none, though YouTube's sound works there. Firefox on Android seems to have the same problem as desktop Firefox on the Pinephone Pro. There's no web inspector on Android to check with, but I waited and eventually the sound started playing. It's been several minutes and it's still playing; the image animation hasn't started.

> attach 10x length worth of AI appendix that would be helpful indexing and references.

Are references helpful when they're generated? The reader could've generated them themselves. References would be helpful if they were personal references of stuff you actually read and curated. The value then would be getting your taste. References from an AI may well be good-looking nonsense.


I agree wholeheartedly. I don't see any balance between the effort someone dedicated to generating the text and the effort I spend consuming it. If you feel there's further insight to be gained from an LLM, give me the prompt, not the output. Any communication channel reflects a balance of information flowing, and we are still adjusting to the proper etiquette.


"The user could have written the code themselves"

Yes, sometimes this is true, but not always.

Note, it's not one prompt (there isn't really "one prompt" any more; prompt engineering is such a 2023-2024 thing), nor purely unreviewed output. It's curated output that was created by AI but iterated on with me, since it goes along with and has to match my intention. And most of the time I don't directly prompt the agent any more; I go through a layer of agent management that injects more context into the agents that actually work on it.


If I look at the Wikipedia article for gabardine, it's supposed to be tightly woven wool, which makes more sense to me since the exterior of wool fibers is supposed to be hydrophobic. I'm kind of confused at the existence of gabardine made of cotton, which is hydrophilic... Polyester seems like it would be cheaper and more effective... Maybe in the past cotton was the economical choice, but cotton gabardine is still sold today. It seems like the worst material choice for modern gabardine, but maybe I'm wrong.


Gabardine is a type of weave, irrespective of material. Classic trench coats are cotton gabardine.


>> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.

> No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.

The person who approved the tools might've understood, but that doesn't mean the user understands. _Some_ of the reason why the user doesn't understand the shortcomings of the tool might be because of misleading UX.


It might be that you crave people in order to prove to yourself that you have people on your side. Might help to be alone for an extended period, then get together with someone afterwards to kind-of prove to your subconscious that relationships don't dissolve just because you're not constantly in contact.


> Desktop computer - 1 hour of use - 50 Wh

That seems low...


50W average doesn't seem absurd; peak power is going to be an order of magnitude higher, but computers are often running pretty close to idle...


Thought the same thing. I think I measured my gaming PC to idle at like 70–100 watts. Obviously an average office PC would be lower, but you also need to add on the energy for the monitor, which seems to start at around 15 watts for some basic monitors, and I imagine that's not at high brightness.


> -PuTTY pscp allows raw passwords on the command line, or from a file. OpenSSH is unreasonable in refusing to do this.

You can use `sshpass` to force it through a command line argument. However, arguments can be viewed by any process through `/proc`, `ps`, etc. It's pretty reasonable to not support exposure of the password like that, especially since you can force it through using another tool if you really, really need to.
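The exposure is easy to demonstrate without sshpass itself: any process whose argv carries a secret will do. In this sketch, `hunter2` is a made-up placeholder password, and `sh -c 'sleep 30'` stands in for the real `sshpass -p hunter2 ssh ...` invocation.

```shell
# Start a long-running process carrying a "password" in its argv.
# ($0 is "sh" and $1 is "hunter2"; both end up in the process's argv.)
sh -c 'sleep 30' sh hunter2 &
pid=$!

# On Linux, any local user can now read that argv:
tr '\0' ' ' < "/proc/$pid/cmdline"; echo

kill "$pid"
```

The same information is visible through `ps -ef` to every user on the machine, which is exactly why OpenSSH refuses to take a password as an argument.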


Both pscp and psftp have -pwfile.

It is not reasonable to insist on keys for batch use.

Not at all.


It's completely crazy to use passwords when you needn't. Passwords are a human-readable shared secret; they were already obsolete when SSHv1 was invented last century.

From the outset, SecSH (SSHv2, the thing you actually use today and, if you're younger, likely the only thing you have ever used) has had public key authentication as a Mandatory To Implement feature. Implementations where that doesn't work aren't even SSH; they're garbage.
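For what it's worth, the key-based setup being argued for here is small. A sketch, where the key path, the comment, and the server names are placeholders:

```shell
# Generate a dedicated, passphrase-less key for one batch job.
rm -f /tmp/batch_key /tmp/batch_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/batch_key -C 'nightly-sync'

# One-time authorization on the server, then non-interactive use
# (shown as comments since "user@server" is hypothetical):
#   ssh-copy-id -i /tmp/batch_key.pub user@server
#   sftp -i /tmp/batch_key user@server
ls -l /tmp/batch_key /tmp/batch_key.pub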
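For what it's worth, the key-based setup being argued for here is small. A sketch, where the key path, the comment, and the server name are placeholders:

```shell
# Generate a dedicated, passphrase-less key for one batch job.
rm -f /tmp/batch_key /tmp/batch_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/batch_key -C 'nightly-sync'

# One-time authorization on the server, then non-interactive use
# (shown as comments since "user@server" is hypothetical):
#   ssh-copy-id -i /tmp/batch_key.pub user@server
#   sftp -i /tmp/batch_key user@server
ls -l /tmp/batch_key /tmp/batch_key.pub
```

Whether a given vendor's SFTP endpoint accepts keys at all is, of course, the actual sticking point raised below.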


I am forced by external vendors and internal security to use password authentication for SFTP.

I do not have a choice!

This grew out of FTP less than a decade ago. Everyone has always known password auth; it cannot die.

Are you on the same planet as the rest of us?


If our vendor required password auth, I'd want three sandboxes between it and anything production. It's an explosion waiting to happen.


It can die once we stop letting it keep living with this kind of defeatist attitude.


The SCP protocol is fine and convenient as long as people understand that the remote file arguments are server-side shell code, and the consequences that implies.

You get the benefit of being able to e.g. get your last download off your desktop to your laptop like this:

  scp -TO desktop:'downloads/*(oc[1])' .

or this if you're on bash:

  scp -TO desktop:'$(ls -t downloads/* | head -1)' .

or pull a file from a very nested project dir for which you have set up dynamic named directories (or shell variables if you're on bash):

  scp -TO desktop:'~foo/config/database.yml' config/

  scp -TO desktop:'$FOO_DIR/config/database.yml' config/

Just don't pull files from an SCP server that may be malicious; use it only on trusted servers. If you do the following in your home dir:

  scp -TOr malicious:foo/ .

that may overwrite .ssh/authorized_keys, .zshrc, etc., because `foo/` is server-side shell code. The client can't tell that a `.zshrc` resulting from the evaluation of `foo/` doesn't make sense, because it might in the remote shell language.

> If you need something that SFTP cannot do, then use tar on both sides.

No reason to make things inconvenient between personal, trusted computers, just because there may be malicious servers out there where one has no reason to SCP.

Something else to note is that your suggestion of using `tar`, as in `ssh malicious 'tar c foo/' | tar x`, faces basically the same problem. The server can be malicious and include .ssh/authorized_keys, .zshrc, etc. in the archive for `tar x` to overwrite locally in exactly the same way. This goes with the point of this SE answer:

> I'd say a lot of Unix commands become unsafe if you consider a MITM on SSH possible. A malicious sudo could steal your password, a malicious communication client could read your mails/instant messages, etc. Saying that replacing scp with sftp when talking to a compromised server will somehow rectify the situation is very optimistic to say the least. [...] In short, if you don't pay attention to which servers you SSH into, there's a high risk for you to be screwed no matter which tools you use, and using sftp instead of scp will be only marginally safer. --- https://unix.stackexchange.com/questions/571293/is-scp-unsaf...

I think this whole problem with SCP just stems from this aspect never having been properly documented in the manpage, so people expected it to just take file paths.
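Whichever tool is used against a less-trusted host, one simple mitigation is to extract into an empty scratch directory and inspect before moving anything. Here's a local simulation of the risk and the mitigation; there's no real server, and all paths are throwaway:

```shell
# A crafted archive carries a dotfile that would clobber ~/.zshrc if
# extracted straight into $HOME.
work=$(mktemp -d)
cd "$work"

mkdir attacker
echo 'evil' > attacker/.zshrc
tar cf payload.tar -C attacker .zshrc   # stands in for `ssh host 'tar c foo/'`

# Mitigation: extract into an empty directory, then review.
mkdir scratch
tar xf payload.tar -C scratch           # lands in scratch/, not in $work
ls -A scratch
```

This doesn't make a malicious server safe, per the quote above; it just keeps the blast radius away from your dotfiles while you look at what actually arrived.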


> I'm not sure where the line between "hobby" and "professional" lies when it comes to linux distributions. Many of them are nonprofit but not really hobbyist at this point. Debian sure feels like a professional product to me (I daily drive it).

"Professional" means you're being paid for the work. Debian is free (gratis), contributors are volunteers, and that makes it not professional.


What about Ubuntu? It's a combination of work by volunteers and paid employees; it is distributed by a commercial company, and said company sells support contracts, but the OS itself is free.

And there are developers who are paid to work on various components of Linux, from the kernel to GNOME; does that make it professional?

Is Android not professional, because you don't pay for the OS itself, and it is primarily supported by ad revenue?


I would argue they're not, because being open source means they're not fully under the responsibility of a commercial entity. Companies can volunteer employees to a project, even one they started themselves, but companies and employees can come and go. Open source projects exist independently, as public goods. Ultimately, it takes just one person anywhere in the world forking a project to exclude everybody else from its development.

Mint started off as Ubuntu: the same project, with none of the support contracts, and no involvement from Canonical needed at the end of the day.

On a practical level, it doesn't make sense to put thousands of dollars per user in liabilities on non-compensated volunteers, whatever the case may be with regard to the employment of other contributors.


At some point it seems to devolve from a meaningful discussion about how things should be done into a semantic argument (and those are almost always pointless).

> it doesn't make sense to put thousands of dollars per user in liabilities to non-compensated volunteers

I agree when it comes to individuals, but it probably does make sense to hold formally recognized groups (such as nonprofits) accountable to various consumer laws. I find it odd that Windows, RHEL, Ubuntu, and Debian should all be regulated differently within a single jurisdiction, given that they seem to me largely equivalent in purpose.

