Hacker News | scharman's comments

Looks like a great service - kudos! Hope it succeeds and competes with imgur!


Thank you! I'm trying my best.


This is a great response. The shady operators can ruin simple services like these :( I’m now curious how you’d ever practically deal with this. How does iCloud deal with encrypted illegal content? Surely they can’t penalize Apple in these situations?


Apple attempted to add CSAM scanning to iCloud, but ultimately backed off after a large public backlash. In the United States, some lawmakers are trying to require it, but there is still a lot of advocacy against it from privacy rights groups such as the EFF.


You’ve intrigued me! Can you provide a link to such a hosting option? I’ve got some side projects I’d like to explore! Thanks in advance



Wholesaleinternet.net is just one of many places.


I’m not a crypto geek, but I thought the block size had to be smaller than the key size?


There's no rule that you need to mix a raw key bit with every data bit. Block ciphers usually expand their key into a bunch of subkeys to use in different rounds, and you can stretch that expansion as far as you desire.

And if you squint, a stream cipher is just a block cipher with a stupidly large block.
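To make the subkey idea concrete, here's a toy sketch of key expansion in Go: a short key is stretched into as many round subkeys as you like by chained hashing. This is purely illustrative (no real cipher's key schedule works exactly this way), and `expandKey` is a made-up name for this sketch.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// expandKey derives n round subkeys from one key by chained hashing.
// A toy illustration of key expansion, not any real cipher's schedule:
// the point is that a fixed-size key can feed arbitrarily many rounds.
func expandKey(key []byte, n int) [][32]byte {
	subkeys := make([][32]byte, n)
	state := sha256.Sum256(key)
	for i := 0; i < n; i++ {
		subkeys[i] = state
		state = sha256.Sum256(state[:]) // each subkey seeds the next
	}
	return subkeys
}

func main() {
	ks := expandKey([]byte("short key"), 4)
	for i, k := range ks {
		fmt.Printf("round %d subkey: %x...\n", i, k[:4])
	}
}
```

Real ciphers use much cheaper mixing than a full hash per round, but the shape is the same: one key in, many round subkeys out.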


Exactly. The code is adjusting for responsiveness. With fewer CPUs you need a smaller minimum slice. With more CPUs you can increase the slice and still schedule the same number of processes per second.

E.g. a 1 ms slice with 1 core = 1000 process switches per second. With 2 cores you can increase the slice to 2 ms and still maintain the same number of switches per second for the system, while reducing the switches per second on each core to 500. This reduces the scheduler's overhead.

It seems that beyond about 8 times the base slice, efficiency starts to go the other way, so they’ve capped it there. Seems reasonable, but scheduler math is crazy.
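The scaling described above can be sketched in Go. This is a sketch of the idea (grow the slice roughly with log2 of the CPU count, cap the factor at 8), not the kernel's actual code; `sliceNS` and the base value are assumptions for illustration.

```go
package main

import "fmt"

// sliceNS computes a hypothetical time slice in nanoseconds, scaled by
// CPU count: roughly base * (1 + log2(ncpus)), with the factor capped
// at 8. A sketch of the idea, not the kernel's exact formula.
func sliceNS(base, ncpus int) int {
	factor := 1
	for c := ncpus; c > 1 && factor < 8; c >>= 1 {
		factor++ // one step per doubling of cores
	}
	return base * factor
}

func main() {
	for _, n := range []int{1, 2, 8, 1024} {
		// 1 ms base slice, as in the example above
		fmt.Printf("%4d CPUs -> %d ms slice\n", n, sliceNS(1_000_000, n)/1_000_000)
	}
}
```

With a 1 ms base this gives 1 ms on 1 core and 2 ms on 2 cores (matching the example above), and flattens out at 8 ms no matter how many cores you add.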

Note that this has nothing to do with the scheduler's per-core assignments, which have clearly been working or people would’ve noticed!


I believe it just emits at least one packet on each 'write' system call. As long as your 'write' invocations use larger blocks, I'd expect very little difference with TCP_NODELAY enabled or disabled. I've always assumed you want to limit system calls, so better practice is to encode into a buffer and invoke 'write' on larger blocks. So this feels like a combination of issues.

Regardless, overriding a socket parameter like this should be clearly documented by Go if that's the intended behavior.


OK, you got me: it was clearly a parody, but the whitepaper made me double-check, and it took a few seconds to click. Love your work!

