Dunedan's comments | Hacker News

I wonder if at least part of the reason for the speed-up isn't the multi-threading, but rather that rclone maybe doesn't compress transferred data by default. That's what rsync does when using SSH, so for already compressed data (videos, for example) disabling SSH compression when invoking rsync speeds it up significantly:

  rsync -e "ssh -o Compression=no" ...

Compression is off by default in OpenSSH; at least that's what `man 5 ssh_config` says:

> Specifies whether to use compression. The argument must be yes or no (the default).

So I'm surprised you see speedups with your invocation.


Good point. Seems like I enabled it in ~/.ssh/config ages ago and forgot about it. Nonetheless, it's good to check whether it's enabled when using rsync to transfer large, already well-compressed files.
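
An easy way to check what OpenSSH will actually use for a given host is `ssh -G`, which prints the fully resolved client configuration:

  # Shows the effective client config for that host, including the compression setting
  ssh -G user@host | grep -i compression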

IIRC rsync uses your default SSH options, so turning off compression is only needed if your config explicitly turns it on (globally or just for that host). If you're sending compressible content, using rsync's compression instead of SSH's is more effective when updating existing files, because even the parts that aren't transferred can be used to form the compression dictionary window for what does get sent (though for sending whole files, SSH's compression may be preferable: rsync is single-threaded, and using SSH's compression moves that chunk of work to the SSH process).
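
For example, something like this (a sketch; `--compress-choice` needs rsync 3.2+ on both ends, otherwise plain `-z` still works):

  # Let rsync handle compression (which plays well with its delta transfer)
  # while keeping SSH's own compression off
  rsync -az --compress-choice=zstd -e "ssh -o Compression=no" src/ user@host:dst/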

The pre-commit framework [1] abstracts all these issues away and offers a bunch of other advantages as well.

[1]: https://pre-commit.com/


the pre-commit framework does not abstract away “hooks shouldn’t be run during a rebase”, nor “hooks should be fast and reliable”, nor “hooks should never change the index”.


Not sure how you got to that conclusion, as the pre-commit framework does indeed abstract them away. Maybe you're confusing it with something else?

> hooks shouldn’t be run during a rebase

The pre-commit framework doesn't run hooks during a rebase.

> hooks should be fast and reliable

The pre-commit framework does its best to make hooks faster (by running them in parallel where possible) and more reliable (by allowing the hook author to define an independent environment the hook runs in); however, it's of course still important that the hooks themselves are properly implemented. Ultimately that's something the hook author has to solve, not the framework which runs them.

> hooks should never change the index

As I read it, the author says hooks shouldn't change the working tree, but the index instead, and that's what the pre-commit framework does if hooks modify files.

Personally I prefer configuring hooks so they just print a diff of what they would've changed and abort the commit, instead of letting them modify files during a commit.
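
As a sketch of what I mean (the exact flags depend on the hooks in use; many formatters accept something like `--check --diff` via the hook's `args` in .pre-commit-config.yaml):

  # Run all hooks against the whole repo and print the diff of anything a hook
  # changed instead of silently leaving modified files behind (handy in CI too)
  pre-commit run --all-files --show-diff-on-failure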


> Ultimately that's something the hook author has to solve, not the framework which runs them.

correct. i'm saying that hook authors almost never do this right, and i'd rather they didn't even try and moved their checks to a pre-push hook instead.


Depends on various factors and of course the amount of money in question. I had AWS approve a refund for a rather large sum a few years ago, but that took quite a bit of back and forth with them.

Crucial for the approval was that we had cost alerts already enabled before it happened and were able to show that they didn't help at all, because they triggered way too late. We also had to explain in detail what measures we had implemented to ensure that such a situation wouldn't happen again.
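
For reference, the kind of alerts I mean are AWS Budgets notifications; roughly something like this via the CLI (account ID, amount, and address are of course illustrative):

  # Monthly cost budget of $1000 with an email notification at 80% of actual spend
  aws budgets create-budget --account-id 123456789012 \
    --budget '{"BudgetName":"monthly-cost","BudgetLimit":{"Amount":"1000","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
    --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":80,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"ops@example.com"}]}]'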


Nothing says market power like being able to demand that your paying customers provide proof that they have solutions for the shortcomings of your platform.


Wait, what measures did you implement? How about AWS implements a hard cap, like everyone has been asking for forever?


What does a hard cap look like for EBS volumes? Or S3? RDS?

Do you just delete when the limit is hit?


It's a system people opt into. You could do something like: ingress/egress is blocked, and the user has to pay a service charge (like an overdraft fee) before access is opened up again. If the account is locked in the overdraft state for more than X days, then yes, delete the data.


I can see the "AWS is holding me ransom" posts on the front page of HN already.


A cap is much less important for fixed costs. Block transfers, block the ability to add any new data, but keep all existing data.


2 caps: one for things that are charged for existing (e.g. S3 storage, RDS, EBS, EC2 instances) and one for things that are charged when you use them (e.g. bandwidth, Lambda, S3 requests). Fail to create new things (e.g. S3 uploads) when the first cap is met.


Does that mean fail to create RDS backups? And that AWS needs to keep your EC2 instance and RDS instance running while you decide if you really want to pay the bill?


How about something like what RunPod does? Shut down ephemeral resources to ensure there's enough money left to keep data around for some time.


RunPod has its issues, but the way it handles payment is basically my ideal. Nothing brings peace of mind like knowing you won't be billed for more than you've already paid into your wallet. As long as you aren't obliged to fulfil some SLA, I've found that this on-demand scaling compute is really all I need in conjunction with a traditional VPS.

It's great for ML research too, as you can just SSH into a pod with VS Code and drag in your notebooks and whatnot as if it were your own computer, but with a 5090 available to speed up training.


Yes, delete things in reverse order of their creation time until the cap is satisfied (the cap should be a rate, not a total)


I would put $100 that within 6 months of that, we'll get a post on here saying someone's startup has gone under because AWS deleted their account after they didn't pay their bill, and they didn't realise their data would be deleted.

> (the cap should be a rate, not a total)

this is _way_ more complicated than there being a single cap.


> I would put $100 that within 6 months of that, we'll get a post on here saying someone's startup has gone under because AWS deleted their account after they didn't pay their bill, and they didn't realise their data would be deleted.

The cap can be opt-in.


> The cap can be opt-in.

People will opt into this cap, and then still be surprised when their site gets shut down.


The measures were related to the specific cause of the unintended charges, not to preventing unintended charges in general. I agree AWS needs to provide better tooling to enable its customers to avoid such situations.


>How about AWS implements a hard cap, like everyone has been asking for forever?

s/everyone has/a bunch of very small customers have/


I am never going to use any cloud service which doesn't have a cap on charges. I simply cannot risk waking up and finding a $10000 or whatever charge on my personal credit card.


And for Amazon that's probably fine; people paying with personal credit cards aren't bringing in much money.


I'm not sure usability is moving in the right direction with KDE. Over the past few years, more and more applications have started hiding their menus by default, sometimes adding hamburger menus instead.

There is also a "new way" (I believe QtQuick-based) for applications to create popups, which results in them no longer being separate windows. System Settings makes prominent use of them, for example, and those popups behave entirely differently from what one is used to. As far as I know it's not even possible to navigate them with the keyboard.


ASML doesn't sell chips, you're probably thinking about TSMC.


Ahh yes, so I am. Thanks for the correction.


FYI: "Signal backup servers" currently seems to mean either Google Cloud Storage or CloudFlare R2 according to https://github.com/signalapp/storage-manager/blob/e45aaf5bd1...


> There are a couple of problems with the existing backup:

>

> 1. It is non-incremental.

I wonder if that's different with the newly announced functionality. Their announcement doesn't sound like it:

> Once you’ve enabled secure backups, your device will automatically create a fresh secure backup archive every day, replacing the previous day’s archive.


@greysonp verified they're indeed incremental for media: https://news.ycombinator.com/item?id=45170515#45175402


I suspect the human worker still had a headset to listen in on the orders at the drive-through and just intervened when she heard that order.


That depends heavily on where you live and what you use electricity for. Most of Europe, for example, uses much less electricity [1], although that will probably change as heat pumps become more and more widespread.

[1]: https://en.wikipedia.org/wiki/European_countries_by_electric...


I think this is just consumption divided by population, so it's very easily skewed by e.g. a small population combined with many data centers: I doubt the average person in Iceland is spending 10k+ bucks on electricity annually.
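
For a rough sense of scale (ballpark numbers: roughly 19 TWh of annual consumption spread over about 380,000 inhabitants):

  19 TWh / 380,000 people ≈ 50 MWh per person per year

That's far more electricity than any household actually uses.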


> […] so I am surprised he has done that and expected stability at 100C regardless of what Intel claim is okay.

Intel specifies a max operating temperature of 105°C for the 285K [1]. Also, modern CPUs aren't supposed to die when run with inadequate cooling; instead they clock down to stay within their thermal envelope.

[1]: https://www.intel.com/content/www/us/en/products/sku/241060/...
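
On Linux there's a quick way to check whether thermal throttling actually happened (assuming the kernel exposes the x86 thermal_throttle counters):

  # Per-core and per-package thermal throttle event counters; non-zero means the
  # CPU has clocked down due to temperature at some point since boot
  grep . /sys/devices/system/cpu/cpu*/thermal_throttle/*_throttle_count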


I always wonder: how many sensors are registering that temp?

Because CPUs can get much hotter in specific spots, at specific pins, no? Just because you're reading 100 doesn't mean there aren't spots that are way hotter.

My understanding is that modern Intel CPUs have a temp sensor per core + one at package level, but which one is being reported?


There's no way on Earth Intel hasn't thought of this. Probably the sensors are in or near the places that get the hottest, or they are aware of the delta and have put in the proper margin, or something like that.


I didn't say they didn't think about it; I'm just asking out of sheer ignorance.


Yes, I have read the article and I agree Intel should be shamed (and even sued) for inaccurate statements. But it doesn't change the fact that it has never been a good idea to run desktop processors at their throttling temperature -- it's not good for performance, it's not good for longevity and stability, and it's also terrible for efficiency (performance per watt).

Anyway, OP's cooler should be able to keep a 250 W CPU below 100°C. He must have done something wrong for this not to happen. That's my point -- the motherboard likely overclocked the CPU, and he failed to properly cool it down or set a power limit (PL1/PL2). He could have easily avoided all this trouble.
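
For reference, on Linux the package power limits can be inspected and lowered through the RAPL powercap interface without touching the BIOS; a rough sketch, assuming the intel-rapl driver is loaded and the usual sysfs layout (the zone index and whether the firmware honours it vary by board):

  # Current long-term (PL1) and short-term (PL2) package power limits, in microwatts
  cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
  cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw
  # Example: cap PL1 at 125 W (requires root; the BIOS/motherboard may override it)
  echo 125000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw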

