
"Some of these attacks measured approximately 30 Terabits per second, which were record-breaking attacks."

IoT.



Zenodo is great too, yes, but its metadata management is somewhat problematic; i.e., metadata can be changed on a whim, which makes indexing difficult.

I suppose what she suggests is what some OSS projects are already doing elsewhere. (And a good take on that cursed post-modernism, which is apparently hot again.)

Just a remark about the name; there is already a well-known ngrep.

https://linux.die.net/man/8/ngrep


Oops, didn't know, thanks! I just added a section on how to compile and try it without a system installation to avoid any name clash :)

"which allegedly compromised over 369 000 routers and Internet of Things devices in 163 countries"

So, as I share his thoughts, I've been wondering: why haven't we seen any real innovations in this space?

Mastodon wasn't really it and neither was Substack, although maybe it got slightly closer. TikTok and Telegram, maybe, for different reasons, but they'll face the same destiny.

I'd suppose the much despised "mainstream media" might be a winner here eventually. But beyond that, I am thinking about something like the following:

https://www.theguardian.com/technology/2026/mar/10/uk-societ...


Mastodon has been an obvious innovation and success, along with decentralized platforms and protocols in general.

> If you fail to prove them wrong and can produce the same results as them, they have done Good Science.

Not really, in my humble opinion. Sure, the Popperian vibe is kind of fundamental, but the whole truncation into binary true/false categories seldom makes sense for many (or even most?) problems, for which probabilities, effect sizes, and related things matter more.

And if you fail to replicate a study, they may still have done Good Science. With replications, it should not be about Bad Science versus Good Science but about the accumulation of evidence (or a lack thereof). That's what meta-analyses are about.

When we talk about Bad Science, it is about the industrial-scale fraud the article is talking about. No one should waste time replicating, citing, or reading that.


Debian has always been Debian, and thus there are these purist opinions, but perhaps my take, too, would be something along the lines of a "one-strike-and-you're-out" policy (i.e., you submit slop without being able to explain your submission in any way), already followed in some projects:

https://news.ycombinator.com/item?id=47109952


This is like trying to stop spam by banning the email addresses that send you spam.

They can spin up LLM-backed contributors faster than you can ban them.


If the situation gets that bad, I agree with you; otherwise, I don't see it as a problem.


Banning AI would hardly stop that; the LLM contributors would simply claim they're not AI.

Hence, banning AI contributions is meaningless: you only punish 'good' actors.


Yeah, this is what I was getting at with “reputation”: I think the world where anyone can submit a patch and get human eyes on it is a thing of the past.

IIRC Mitchell Hashimoto recently proposed some system of attestations for OSS contributors. It’s non-obvious how you’d scale this.


The idea is cool, but, well:

"No account, no server, 100% private — everything happens in your browser."


Your post makes it sound like you consider this a bad thing?


Are you implying that the lack of data harvesting is a disadvantage?


It's not a disadvantage, but it is a rare trait nowadays.


I don’t see the downside here.

If you don’t believe it, maybe disconnect from the network before dropping the file?


Sounds great.


Well, it makes sense if the JavaScript runs 100% locally.

The browser can be treated as a loader of code that executes only locally, with local-only data.

I hate JS, but it's doable.
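
For instance, a minimal sketch of what such a local-only flow could look like (the element id and handler below are hypothetical, not the project's actual code):

  // Hypothetical drop handler: the file is read entirely in the browser,
  // and no fetch/XHR call is ever made, so nothing leaves the machine.
  const dropZone = document.getElementById('drop') as HTMLElement;

  dropZone.addEventListener('dragover', (e) => e.preventDefault());

  dropZone.addEventListener('drop', async (event: DragEvent) => {
    event.preventDefault();
    const file = event.dataTransfer?.files[0];
    if (!file) return;

    // The contents stay in memory; any processing happens locally.
    const text = await file.text();
    console.log(`Read ${text.length} characters without a network request`);
  });

Disconnecting from the network, as suggested above, is then an easy way to verify that claim.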

