Hacker News | fivesixzero's comments

Before Kali, in the mid-to-late '90s, we had a 4-node dial-in BBS in my area code (813) called The Arena that ran an app called SerIPX - basically IPX over serial - to allow 4 players to play Doom. It was an incredible experience at the time, especially when paired with Dwango5.wad and other 4-player-centric maps. Worked great on a 28.8k modem, direct, with no TCP/IP.

So many memories, but I wish I remembered more from that era. Crazy to think it was a quarter of a century ago.


I spent some time last week tinkering with a SOQuartz board and ended up getting it working with a Pine-focused distro called Plebian[1].

Took a while to land on it though. Before that I tried all of the other distros on Pine64's "SOQuartz Software Releases"[2] page without any luck. The only one on that page that booted was the linked "Armbian Ubuntu Jammy with kernel 5.19.7", but it failed to boot again after an apt upgrade.

So there's at least one working OS, as of last week. But it's definitely quite finicky and would probably need some work to build a proper device tree for any carrier board that's not the RPi CM4 Carrier Board.

[1] https://github.com/Plebian-Linux/quartz64-images

[2] https://wiki.pine64.org/wiki/SOQuartz_Software_Releases


This looks like a shift away from their old "editorialized" blog-style updates to a data-sharing-centric approach. I'm guessing that this takes less time for them and it allows various commentators and communities to form their own opinions based on the data.

I liked the tone and approach of their old blog posts but this is pretty cool too. It’s just good to see them continuing to share their data since it’s arguably relevant to a wide range of audiences.


This doesn't appear to be a shift, to me?

It's just the central landing page that Backblaze has had all of their HDD stats and blogposts linked from for years now[1].

[1] - https://web.archive.org/web/20190707132216/https://www.backb...


I’ve become a big fan of MikroTik routers and 10G/SFP+ router/switch hardware in the last few years. Their web UI and SSH console are a bit quirky but the performance is pretty great for the price.

My primary use case for their gear at home was to have a router that can handle a LACP WAN bond for my fancy cable modem as well as connecting via copper or a direct-attach SFP+ cable to a CRS-305 10G switch. Their RB-4011 was a perfect fit, without any of the Ubiquiti SSO/controller stuff to worry about.

I haven’t explored their WiFi products yet (still using an old router as an AP) but their product range is pretty broad. Might look into it this year though.


My primary use case for a home router is solid set-and-forget QoS. fq_codel and CAKE were recently added to the RouterOS v7 beta, which means I'll be plugging in my hEX again after a few years of happy EdgeRouter X usage.

Also interested in what access points (besides UniFi) people pair with MikroTik routers. Any WiFi 6 recommendations?


The standalone Ubiquiti access points are still great IMHO. It's just their recent prosumer gateway/router product line that's really struggling. I've had a great experience with the older UAP-HD-PRO. Their newish $100 WiFi 6 U6-Lite AP is tempting but I haven't tried it.

If you just need one AP you can set it up in standalone mode and forget about it. If you want more monitoring and control you'll need to have a Ubiquiti controller running to manage things. (You can run one in Docker, on a Raspberry Pi, or just buy their "Cloud Key" product.)


Their VyOS-based EdgeRouter products are awesome too. I won't touch their UniFi stuff but I can't find anything that's even in the same price ballpark as the EdgeRouter 4.


> If you just need one AP you can set it up in standalone mode and forget about it

Unless you need any feature besides WiFi at all. Then you need a controller and a USG at all times.


For a while I was actually using a UniFi NanoHD for my AP. Performance and stability were great but running a Docker container for a Ubiquiti controller (for a single AP) was annoying enough for me to bail on it. My old Asus router with OpenWRT has been fine for now and doesn't require me to run a container. :)

I’m still looking for a proper WiFi 6 replacement that can hook up to my 10G core, ideally via 2.5/5/10G copper or preferably SFP+ DAC. Nothing’s jumped out at me yet though.


If you just want dumb WiFi, you can provision and remove the controller. Nowadays you can even do this with the UniFi phone app (standalone mode lets you configure and update firmware).

I've had a UAP AC LR at home for a few years and we've got about 6 UAP AC HDs at work. We used the phone app to provision and after that you can pretty much forget about it. Great for small startups that want great coverage and don't have someone who's supposed to mess around with it.


I'm curious as to what you are doing with qos in a home setup.


Late reply, and the other reply covered it really.

Up until around a year ago I was on ADSL2 with a highly asymmetrical connection. I work from home mostly, as does my partner, with constant syncing to various cloud services plus large uploads and downloads for work.

Maxing out the puny 1Mb of upload would render the entire connection completely unusable. Yes, you can manually limit various apps, but it's so much easier just to throw an EdgeRouter X in front of everything running stock smart queue or CAKE.

I'm on a faster connection now so uploads are not so much an issue, but even still it works a treat for things like gaming / VOIP.


Not have VoIP or gaming get disrupted whenever a large upload runs.

On my previous ISP latency would reach 2000+ ms when I let Dropbox sync or downloaded a huge file. Even web browsing would time out. I used Tomato to prioritize DNS, my VoIP analog telephone adapter, the first 256KB of any HTTP(S) connection, and some 27000+ ports used by games.

My current WAN connection reaches 300 ms without fq_codel enabled. With it enabled there's no jump in latency.
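The comments above describe the latency win from fair queuing under a bulk upload. As a rough illustration, here's a toy deficit-round-robin scheduler in Python (names and numbers are invented for the example; the real fq_codel also adds CoDel-style queue management and drop logic, none of which is sketched here):

```python
from collections import deque

def fair_dequeue(flows, budget):
    """Toy deficit-round-robin scheduler. 'flows' maps a flow name to a
    deque of packet sizes in bytes; 'budget' is the per-round quantum.
    Returns the transmission order. This sketches only the fairness idea
    behind fq_codel, not the actual algorithm."""
    order = []
    deficits = {name: 0 for name in flows}
    while any(flows.values()):
        for name, queue in flows.items():
            if not queue:
                continue
            deficits[name] += budget
            # Send packets while this flow has deficit credit left.
            while queue and queue[0] <= deficits[name]:
                deficits[name] -= queue.popleft()
                order.append((name, 0))
                order[-1] = (name, deficits[name])  # placeholder fix below
    return order

# A bulk upload with large packets vs. a small VoIP stream.
flows = {
    "bulk": deque([1500] * 6),
    "voip": deque([200] * 3),
}
order = fair_dequeue(flows, budget=1500)
# The VoIP packets get service interleaved with the bulk flow instead of
# waiting behind the whole backlog - which is why latency stays low.
print(order)
```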


Yes, I recommend MikroTik as well. Got two of their cAP wireless access points. All the features you would expect on enterprise level kit at 1/4 the price easily.

Because there are so many features the setup is not as easy as some alternatives I'm sure. But the value proposition is great.

Their "RouterOS" is standardised over pretty much all of their kit. So after you have worked it out once you should be set for anything else.


One of the reasons Ubiquiti is so loved by techies is that you can recommend it to family/friends or set it and forget it for them (regular users also find the phone apps impressive and easy to use - it's an Apple-like experience for network gear).

At this point there are probably 20+ home UniFi networks that I'm responsible for recommending or setting up; doing the same with MikroTik might turn me into a full-time sysadmin :)


> connecting to a 10G Ethernet switch via copper or direct-attached SFP+

> RB-4011 was a perfect fit

Huh, isn't RB4011 the one with the very weird "you can't use a DAC in the SFP+ port" limitation?

> haven’t explored their WiFi products yet

They seem extremely underwhelming, especially in terms of software support :(

https://help.mikrotik.com/docs/display/ROS/WifiWave2 — they're finally barely rolling out WPA3, MU-MIMO/beamforming, 802.11w — in an optional beta package for a beta version of the OS, currently on 4 devices, breaking 2.4ghz on one of them, and breaking CAPsMAN (centralized management).


I had to get an active DAC cable (S+AO0005) for the RB-4011 because of the quirk you mentioned. Works great with the active cable, which was about $50 I think. I was glad I read the manual beforehand. :)

Thanks for the update on the WiFi side of things. Seems likely that I’ll be looking to another vendor for APs, but that’s fine.


Do you know how ubiquiti's "edge" line compares to mikrotik?


Ubiquiti has a polished interface that's relatively simple to use for something with enterprise-ish level features. They also have some pretty good docs. For example, their article on the harms of Broadcast/Multicast packet storms [0] is useful even if you're not using their products. Same goes for the RF Antenna patterns docs [1].

That said, my next router/gateway won't be from Ubiquiti. Though I'll keep using UI access points for now.

[0] https://help.ui.com/hc/en-us/articles/115001529267-UniFi-Man...

[1] https://help.ui.com/hc/en-us/articles/115012664088-UniFi-Int...


I'm a Mikrotik user, not a Ubiquiti user, but looks like the closest match would be Mikrotik's CRS (Cloud Router Switch) line. My home network is a CRS317-1G-16S+RM at the core and three CRS305-1G-4S+IN (one in each room), all running SwitchOS/SwOS instead of the stock RouterOS (they dual-boot, your choice), and I am very happy with them.


The Mikrotik CRS will work as a "gateway" right? That is, run a DHCP server, connect to my cable modem, provide local DNS, etc? Thanks!


If you can run RouterOS (you can) you can do all that stuff - switchOS is much more like a bare-bones packet switcher; RouterOS is a full-fledged network OS.

Check https://mikrotik.com/software for some demos and stuff.


Yep, that’s how they come by default, booting into RouterOS. I prefer my switches to just be switches, though, so I run SwOS and do all that service stuff jailed on a FreeBSD router PC.


What APs do you use with a MicroTik setup?


I like Aruba Instant APs, the kind that don't require cloud management or a separate controller, though it seems they've folded the IAP line into the regular AP line or something with their new Wi-Fi 6 gear.

I'm still using Wi-Fi 5 because it's fast enough and cheaper. My central AP is an IAP-315, with an IAP-305 in the garage and another IAP-305 at the wall by the back yard. They're all PoE and linked with a wired backbone to form a single big coverage area, using a single elected IAP leader as controller for the rest.

You shouldn't have trouble buying grey-market ones as long as you are careful to stick to the same regulatory domain for all of them. Aruba gear is available as USA/FCC, Japan, Israel, and RW (Rest of World) versions. I have operated RW units in FCC territory (proooobably legally but probably not worth the risk) by setting them to "US Virgin Islands" so they match FCC-allowed frequencies and power limits, but linking more than one AP still requires the hardware to be same regulatory domain.


Having owned several products from both, Mikrotik equivalents are generally way more feature packed but I find them hard to use. EdgeMax stuff is more polished, but has fewer features. Performance is comparable for the most part.


After having worked intensely with Ubiquiti Edge devices (their routers specifically), I'd recommend them time and again. Their Debian derivative EdgeOS is great to work with, both as an enabler for advanced administration and as an approachable web UI (plausible to offload many issues to a support desk without requiring insane amounts of dedication to the Craft).

For mad scientists though, the very open software stack is a good friend to have when 11th hour Requirements® dictate you must produce a rabbit without a hat, or rewrite your own domain-specific implementation to replace the Avahi service.

No experience with MikroTik.

_On topic_: With cloud news like this, it's nice to know about the availability of Ubiquiti's Network Management System[1], which you can host and run wherever.

[1]: https://unms.com/


MT radios are inferior to UBNT for some outdoor non-WiFi applications. 802.11ac vs the proprietary AirFiber. Agree that MT is often a better option for wired scenarios.


Does it support Wireguard?

Also RouterOS does not seem open source.


Sadly RouterOS isn’t open source. They’ve received a bit of flak for their “available on request” stance on getting GPL sources too. The fact that their GPL patches aren’t readily available is pretty uncool.

WireGuard isn't supported on RouterOS 6, which is the current stable version, afaik. RouterOS 7 (currently available in beta) did add support for WG in August though, as part of 7.1beta2 [1].

[1] https://mikrotik.com/download/changelogs/development-release...


If you have any more details about the GPL issues with Mikrotik RouterOS, I recommend reporting them to the Linux developers via Software Freedom Conservancy, who have copyleft compliance projects:

https://sfconservancy.org/copyleft-compliance/#reporting


V7 supports Wireguard and UDP OVPN, it's in beta but reasonably stable, at least for home use.


finally! been waiting for any UDP VPN from mikrotik since ... 2008?


I use a MikroTik hAP ac (small SOHO-style router with an SFP port and PoE). You can easily flash it with OpenWRT and use WireGuard on that. All open source too.

It's great hardware but I'm no personal fan of RouterOS.


Huh. What's the experience like? Eg are there any driver issues, or edge cases with unimplemented/missing bits of functionality?


It's brilliant, everything works fine. I've even used the USB port with a smartphone for 4G backup tethering (just need to add relevant usb packages, the openwrt wiki details all this). Plus there's the luci web interface which runs like a charm. No complaints whatsoever.


Although it isn't OSS, it's based on Linux and therefore semantically comprehensible by someone familiar with iptables, iptraf, etc. Unlike say IOS which will explode your brain.


RouterOS is not, but MikroTik added WireGuard support to their firmware sometime in mid-late 2020. IDK if it's out of beta yet.


No, still a very shitty beta sadly. In MikroTik communities RouterOS 7 is a meme (it'll never arrive). Even though it's here, it's not.


A few months ago when ROS 7's first few public beta releases were out (and before then), I'd agree with you.

However, MikroTik seem to be making slow but steady progress with new features. Stability is still an issue to an extent, but for home use I could almost make the jump.

In fact, if I didn't use CAPsMAN to centrally control the multiple access points in my home, I would make the jump purely for fq_codel/cake AQM, Wireguard and WPA3.


Mikrotik phones home too


Current job (Java dev, mid sized SaaS, 2019-now): Local specialist recruiter via random inbound LinkedIn message. Worked out really well, despite the horror stories I had heard.

Previous job (Tier 1-3 support to Java dev, startup SaaS, 2013-2018): Asked a friend how they liked it there, took a tour, met some people, was in for an interview the next week. First non-freelance or self-employed gig in many years but it turned out to be a great match.

Before Times: Post-college-dropout drift for a decade or so. Helped run a massive LAN party (dates it a bit, heh), did some PHP/Java work for friends’ small consultancies, worked retail, repaired computers, freelance music writing and photojournalism, random open source project contributions here and there.

I love seeing other “non-traditional path” stories on here. Not everyone ends up in dev work, or their current job, the same way. :)


The main argument in the article (and other places I've seen WG discussed) is the relative ease of auditing the core code as well as auditing implementations. In that context it's less of an argument that it's "more secure" and more of an argument that it's "more cost/time-effective to assure that it (the core code or any implementation) is secure".

That argument can be strong when considering that effective security in most projects comes down to whether assurance of security can be discerned effectively within a limited time window. Often very limited.


Increasingly it seems like heavily opinionated foundational tools and frameworks are overtaking more highly configurable alternatives, at least in terms of breadth of usage or popularity.

Could this be a positive change? Does this represent a healthy response to cognitive fatigue in a world with configuration options at every possible layer?

Or does this shift to less readily configurable tools represent an overall negative? Are we losing diversity in favor of a more vulnerable monoculture crop?

Or both?

Asking for real, not sarcastically. As a developer I’m a huge proponent of simpler, more opinionated frameworks for most projects but I’m also aware my perspective is more limited than many HN commenters.


At least in some ways, it feels to me like Wireguard is more of a return to the "unix philosophy" (if there is such a thing) when compared to solutions like OpenVPN and ipsec/StrongSwan. Doug McIlroy, amongst the designers of Unix, said that tools should "Do One Thing And Do It Well." Wireguard seems like a great example: it offers very few knobs and levers in large part because the scope of its capabilities is very small. Wireguard manages the actual tunnel between endpoints, everything else (managing interfaces and routes, disseminating keys, autoconfiguring) is left for other tools. But, Wireguard provides a simple and friendly enough interface that it's easy to write other tools to do these tasks, ranging all the way from shell scripts to some big enterprise system.

This stands in clear contrast to OpenVPN, which attempts to manage all aspects of the VPN management process from endpoint config (interfaces, routes, etc) to key dissemination (strongly preferring mutual TLS auth and specifying a format for importable VPN configs). As a result, we could say that OpenVPN "Does Everything And Does It Okay," which I'd like to coin as the opposite philosophy. This has advantages if you have some kind of complicated situation and want to keep everything inside of one tool, but the result is that OpenVPN is more complicated to use and configure, and has more surface area to attack.

To some extent this kind of limited scope comes off as opinionated but I would like to view it the opposite way: Wireguard is unopinionated in that it leaves a large portion of the VPN stack for you to handle yourself, either manually or by bringing your own tool. This is a bit annoying if you're looking for a turnkey solution, but also makes Wireguard very simple and easy to understand and audit.
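As a rough illustration of how little glue it takes to build tooling on that small surface, here's a hedged Python sketch that renders a wg-quick style config from plain dicts. The section and field names ([Interface]/[Peer], PrivateKey, AllowedIPs, etc.) follow the real wg-quick format, but the key strings are placeholders - real keys would come from `wg genkey`/`wg pubkey`:

```python
def render_wg_config(interface, peers):
    """Render a wg-quick style INI config from plain dicts. Sketch only:
    no validation, and the key values used below are placeholders, not
    valid Curve25519 keys."""
    lines = ["[Interface]"]
    lines += [f"{key} = {value}" for key, value in interface.items()]
    for peer in peers:
        lines.append("")
        lines.append("[Peer]")
        lines += [f"{key} = {value}" for key, value in peer.items()]
    return "\n".join(lines) + "\n"

config = render_wg_config(
    {"PrivateKey": "<server-private-key>",
     "Address": "10.0.0.1/24",
     "ListenPort": "51820"},
    [{"PublicKey": "<laptop-public-key>",
      "AllowedIPs": "10.0.0.2/32"},
     {"PublicKey": "<phone-public-key>",
      "AllowedIPs": "10.0.0.3/32"}],
)
print(config)
```

Key dissemination, route setup, and so on would live in whatever script or system wraps this - which is exactly the "bring your own tool" point above.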


> if there is such a thing

There is indeed such a philosophy:

https://www.jwz.org/doc/worse-is-better.html


> Could this be a positive change?

It's normal and expected evolution of protocols and software.

Generation 1: New idea, new implementation. As people become comfortable with the new idea it gains in acceptance and hype. Try to keep it simple and fast, but it's an exercise in exploration and it gains technical debt faster than it gains new features.

Generation 2: Widespread acceptance and commercialization. Groups inside large corporations, and sometimes entire businesses, spring up around the new idea. They re-implement the idea to reduce technical debt and add flexibility. Features are piled on to make it marketable. Eventually becomes heavy and unwieldy.

Generation 3: Hype train dies down and people have learned what really matters and what really should be focused on. The third generation is lean, fast, and 'correct'. It becomes ubiquitous, people stop caring about it and people stop paying for it. It becomes just something that is always there and ends up little more than a building block for the next new idea.


Generation 4: Bloat the software with so many unnecessary features; the users must want to chat with each other, no?


There isn't any generation 4. The functionality of the software is ubiquitous by that point. Nobody cares anymore except when they absolutely have no other choice. By that point even if you generated a brand new implementation you would struggle to give it away unless it was just one small part of a new innovation.


Mustn't forget that social media sharing to show that we are also hip and down with the fellow kids.


This sounds like Jared Spool's Market Maturity model.[0] He breaks up your gen 3 into two separate stages:

Stage 1: Raw Iron
Stage 2: Checklist Battles
Stage 3: Productivity Wars
Stage 4: Transparency

[0] https://articles.uie.com/market_maturity/


TLS has shown how the quest for backwards compatibility has the unintended consequence of downgrade attacks. Wireguard's lack of cryptographic agility is a feature, not a bug. Sure, it means everyone has to upgrade when a new version of the protocol comes out, but the entire point of a VPN is security.
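A toy model of why negotiation plus backwards compatibility invites downgrades: an on-path attacker rewrites the client's offer and both honest ends happily settle on the weak version. This is illustrative Python, not how TLS actually frames its handshake (modern TLS 1.3 mitigates this by authenticating the transcript, including the offer):

```python
def negotiate(client_versions, server_versions, tamper=None):
    """Toy version negotiation: the client offers a list of protocol
    versions, the server picks the highest it also supports. 'tamper'
    models an on-path attacker who can rewrite the client's offer
    before the server sees it."""
    offer = tamper(client_versions) if tamper else client_versions
    common = [v for v in offer if v in server_versions]
    return max(common) if common else None

client = [1.0, 1.1, 1.2, 1.3]
server = [1.0, 1.1, 1.2, 1.3]

# Honest negotiation lands on the newest shared version.
print(negotiate(client, server))  # 1.3

# An attacker who strips the newer versions from the offer silently
# forces both sides down to an old version.
strip_new = lambda versions: [v for v in versions if v < 1.2]
print(negotiate(client, server, tamper=strip_new))  # 1.1
```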

That said, OpenBSD's OpenIKEd is just as simple and efficient, and thanks to standard compliance (IPsec, IKEv2 and MOBIKE) it works out of the box with iOS devices.


> Sure, it means everyone has to upgrade when a new version of the protocol comes out,

It will be interesting to see what happens when (or if) large enterprises and hardware vendors adopt it.


I think Opinionated can be good. I think configurable can be good too. I think the best case is nearly always "Configurable, with smart defaults" meaning defaults that work out of the box for most uses.

Definitely programming languages are on the periphery of this conversation, but I think provide some good examples of why I like opinionated tools in general.

My language of choice right now is Go, and has been for a while. One of the things I like about it is that it's a bit opinionated. For example:

Braces around `if` statements aren't optional. I prefer this to other C-Like languages that allow you to leave out braces for one-liners.

Also the document "Effective Go" exists, which lays out the canonical "best" ways of doing a lot of things. The language doesn't force you to do these things, but there is an authoritative source that makes good suggestions.

The antithesis of opinionated languages, in my opinion, is Ruby. I personally hate Ruby, but I know there are a lot of people that love it. I hate it because there are too many ways of accomplishing the same task, and to me this makes it harder to read. Go, on the other hand, is the easiest language for me to read, largely because of `gofmt`, another thing that doesn't force you to do it a certain way, but strongly encourages a standard end result.


"My language of choice right now is Go, and has been for a while. One of the things I like about it is that it's a bit opinionated."

I've frequently described Go as a very, very good 1990s language. Going through the process of maturity takes time. You can't have a "very, very good" 2020s language right now, because at the frontier we're still feeling our way through the issues.

(Remember, whatever you're about to hit reply with and try to contradict me about it being a totally smooth and polished 2020s language that's already here is also an assertion that your example basically has no room for improvement and will not improve in the next 10-20 years. Consider your options carefully before you go too "language partisan" here.)

I believe probably >75% of the hatred Go engenders is from people afraid that Go's success will erase or invalidate the 2010s/2020s languages they prefer, because otherwise, the solution to most of these people's hate/anxiety would be to just ignore Go. To which I can say to those people, you can stop worrying. It won't. And if you stay in the industry long enough, maybe someday you'll get to use the really good and polished 2010s or 2020s language. No idea what it'll be called. And you can similarly assuage the fears of the day that this new language will erase all the benefits of the 2040s languages in development at the time.

But for "opinionated" to really work, I think you intrinsically need to have years of experience to make the right calls. There's no realistic chance that we could have gone straight to the "correct" VPN choice in one shot. Too many variables, too many dimensions, too much to learn and know about the security. It's just not possible. We collectively need the decades.


[flagged]


It's because you're posting in the flamewar style, which is what we're trying to avoid on this site. Also, it's super off topic—two generic hops away from the OP.

Programming language flamewars are a special case and not in a good way. Many of us lived through seeing those take over online communities in the past and reduce them to scorched earth. That's one of our motivations for wanting to keep HN from that fate, or at least stave it off a bit longer: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Actually your comment could be a good contribution with a little editing. The first link seems good. The second is too flamey.


I agree that programming languages are interesting to view along that axis.

I've done a fair amount with Elm, which is undoubtedly hugely opinionated, doing things like locking JavaScript interop behind a message passing system and baking protection from XSS into the language.

Mostly I'd say this all encourages you to do things a better way, but it can be painful, and particularly given the early nature of the language, meeting the edges of the language can be very painful because of it.

In contrast, I adored working with Scala because it was so powerful, but it sits close to Ruby in the "you can do everything a million different ways" rankings. The more I did with it, the more I wanted a refined subset of what was there (which may be what Dotty/Scala 3 ends up being).

Things like "you must always use braces on if statements" are rules I always end up enforcing using tooling anyway because they are just bugs waiting to happen, and are the low-hanging fruit of this debate. Too many languages take the approach of "if we can parse it, it's fine", when really the aim should be to make it clear not just to a parser, but to the person reading/writing the code too. Hopefully more languages are more opinionated about that kind of thing in the future.


> I think Opinionated can be good.

It can be. In fact it's almost essential if you are handing out knives to children; you want someone who is very opinionated about the dangers posed by sharp knives.

There was a recent HN post making the same point about JWTs. JWTs allow the null cipher, and in the hands of people who might not appreciate the disaster caused by production code accepting JWTs using the null cipher, you want a _very_ opinionated implementation that prevents it.

But for me, such an implementation would be a total pain in the arse. The null cipher is there to make debugging easier - getting things working for the first time can be very difficult without it. You can stick your opinions on whether I should be using it where the sun don't shine.


Ideally, things should be opinionated but configurable. I think of that as having good, sane defaults, with a straightforward initial setup that doesn't revolve around tweaking those defaults.

With a security product, however, I can understand the allure of offering few to no options. Laypeople get security wrong at an alarming rate, even with good defaults, so I often don't mind a security product just offering one configuration that the (presumable) security experts who built it have decided is the right way to use it.

Of course, if they turn out to be wrong about something, and a mitigation would be "disable feature X", then this requires a patch and new release, when it might have otherwise just required a configuration change.


Take OpenSSH. You get sane defaults and still have a ton of settings to fiddle with when you encounter odd edge cases or complex scenarios, such as jumping through a chain of proxy hosts or talking to some legacy embedded SSH server.


It's a necessary change and a response to cognitive overload. Everything I have to think about in an existing system is mental energy I can't spend on more useful tasks, like creating something new.

In the Information Age, attention and cognitive bandwidth have become precious and limited commodities that should never be wasted on any unnecessary concern.

I have a rule of thumb in relation to product or project adoption: every installation step cuts adoption in half. If 1000 people find a project and it has a 5 step installation, approximately 32 of them will install it. Make it a 7 step installation and that number is cut down to about 8.
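That rule of thumb is just a halving funnel. A quick Python sketch (the 0.5 retention figure is the commenter's heuristic above, not measured data):

```python
def expected_installs(visitors, steps, retention=0.5):
    """Rule-of-thumb funnel: each installation step retains 'retention'
    of the remaining audience. Heuristic only, not an empirical model."""
    return visitors * retention ** steps

# 1000 visitors, 5 steps -> ~31 installs (the "about 32" above);
# 7 steps -> ~8 installs.
print(round(expected_installs(1000, 5)))
print(round(expected_installs(1000, 7)))
```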


This is not some strange paradigm shift, it's the UNIX model 101 [0]

[0] https://en.wikipedia.org/wiki/Unix_philosophy#Program_Design...


It definitely depends on context.

There are some cases where opinionated is just fundamentally better. I'd say that a good example of this is code formatting tools. These have been historically highly configurable, but that creates huge amounts of room for bikeshedding and conflict, where consistency is by far the most important thing and the actual style itself barely matters unless you start getting silly.

I think it essentially becomes a sliding scale on how much consistency and "getting it right" are more important over having something be optimal or perfectly adapted to the situation.

When it comes to security tools like VPNs, reducing the chances for users to shoot themselves in the foot is almost always more important than anything else, so it seems like another area that would be beneficial to have something more opinionated rather than more configurable, so the decisions are in the hands of people who have invested the time in understanding the problems at hand.


I think there's room for both. I understand your question, but just because there's a trend towards one way or the other, doesnt mean developers should just go with what mainstream is moving towards (not that that's what you're saying.)

The main problem I have with highly configurable utils is that a lot of them don't have sane defaults (or any defaults), which might be OK considering most users want to or enjoy spending hours writing custom config, but it's a big ask for things I want to use quickly, or just try.

So, imo it depends on the software.


>heavily opinionated foundational tools

Opinionated isn't the right phrase here. For something like webdev there are 200 "right" ways to do things. For encryption "throw out all the legacy crap and focus on 1 known strong tech" isn't opinionated...it's common sense.

...if you have the luxury of a fresh start. Which mainline kernel has granted wireguard.

Monoculture...maybe...but I reckon the tons of legacy stuff in OpenVPN is way more dangerous. Especially because it kind of forces mass adoption of weaker ciphers due to compatibility BS.


> Could this be a positive change?

For cryptographic (and related) applications, it certainly seems standard engineering advice now to reduce choices and configurations to a minimum [1] (but apparently not 2 decades ago, when OpenSSL, OpenVPN, and GPG were initially released).

[1] pretty sure Bruce Schneier et al. recommended it in their 2010 book Cryptography Engineering.


> As a developer I’m a huge proponent of simpler, more opinionated frameworks

Until you run into the limits, of course. If you control both sides, you can use what you want, but as soon as you implement just one side ...


I think opinionated is the real deal. You can make smarter software and focus on features and on the way create a better future. Being able to twist and specialize everything is not always good.


As a tech-savvy user I appreciate simpler tools. I love configurability, but if I have to fight with every tool it's really hard to stick with Linux...


Opinionated is great as long as it allows for future backwards compatibility. This sort of thing is critical for things like this that depend on cryptography. There has to be a way to support the old thing at the same time as the new thing when it looks like the old thing might eventually have to be swapped out. There has to be a way to do the transition.


This is the opposite of what cryptography engineers believe today.


Which ones? How do they suggest that cryptographic upgrades occur?


In the cryptography world backwards compatibility is basically "let the adversary switch me back to the old and busted protocol so I can be owned even after I upgraded to the latest version."


Or, in the DROWN case, ricochet the new protocol off the old protocol to use individual elements of the old protocol to break the new one.


Most required upgrades do not involve anything "busted". Weaknesses are often noticed long before any practical attacks are available. If you want to upgrade, say, Wireguard in such a case you would have to switch over the endpoints in pairs. Obviously that is going to be impossible in practice so the system will get backward compatibility grafted on in a fragile and dangerous way.

OpenPGP is an example of a case where relatively extreme backwards compatibility is required as old archived messages have to be accessible. But that isn't a problem because things are such that downgrade attacks are impossible. The list of desired methods is in the public key which is signed with itself. So downgrades are not always an issue.


You can straight up google 'pgp' and 'downgrade attack' so maybe that's not that great an example.


Do you have an actual example? Normally when people talk about a downgrade attack on OpenPGP they just assume it is somehow possible without actually checking that it is.

Note that I am only claiming that downgrade attacks are technically impossible for OpenPGP due to the way that it works. To break the protection against downgrades means that you have to break the root cryptography. That might not be true for other stuff... Makes for a great example though...


Something like this:

We introduce wireguard2, which is not wire-protocol-compatible with original wireguard. The same configuration files can be used, but you must generate new keys as part of your switch over.

We strongly advise you to stop using original wireguard if there is any possibility of a wealthy, organized, determined attacker intercepting your communications. (See CVE2021-x. and forthcoming paper "64 qubits can deduce Curve25519 points" by D.J. Bernstein et al.)


So at midnight July 23 2026 everyone upgrades to wireguard2 all at once? Perhaps I am not getting what you are proposing here...


Just like with TLS and its "ciphersuites", you expose the vulnerable components for as long as (1) you're required to by your users and (2) the risk is bearable. At some point, you stop exposing the vulnerable component at all. Ciphersuite negotiation doesn't free you from this requirement, but it does make it harder to ensure that peers who agree on non-vulnerable parameters are actually able to use them.

None of this is complicated. It's also worth looking back on the history of TLS vulnerabilities to get a sense of just how little ciphersuite negotiation helped anybody.
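To sketch the downgrade problem concretely (a toy model, not real TLS; the suite names are made up):

```python
# Toy sketch of why ciphersuite negotiation invites downgrade attacks.
# The offer travels unauthenticated, so an attacker can edit it.

def negotiate(client_offer, server_supported):
    # Server picks the first mutually supported suite from the client's list.
    for suite in client_offer:
        if suite in server_supported:
            return suite
    raise ValueError("no common ciphersuite")

client_offer = ["STRONG_AEAD", "LEGACY_EXPORT"]   # weak suite kept "for compatibility"
server = ["STRONG_AEAD", "LEGACY_EXPORT"]

# Normal case: both ends agree on the strong suite.
assert negotiate(client_offer, server) == "STRONG_AEAD"

# A man-in-the-middle deletes the strong option from the client's offer...
tampered = [s for s in client_offer if s != "STRONG_AEAD"]

# ...and both ends "agree" on the weak suite without either choosing it.
assert negotiate(tampered, server) == "LEGACY_EXPORT"
```

The only durable fix is the one described above: stop exposing the weak suite at all.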


My understanding is that Wireguard has no way to do anything other than what it does now. There is no way to use an upgrade in the protocol.


The same was true of TLS!


And that is a good thing.


Not everyone, only a single network at a time. A common use will be corporate VPNs or VPNs on DigitalOcean-like services; in that case there is little to no reason for interoperability between two distinct networks.

or at least you might want different keys anyway


Every time I see a product or project that describes itself as "opinionated", what it really means is the developer implemented the subset of functionality that they require and turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage. There's probably some really interesting psychological research that could be done here, but to be polite about it let's just say that authors of "opinionated" software tend to have rather colorful personalities.

Wireguard is not opinionated, it just has a very limited scope. It has one job, to create an encrypted tunnel between two endpoints, and leaves literally everything else up to other tools to build higher-level functionality upon. Contrast with OpenVPN which requires you to be your own TLS certificate authority and all the complication that goes along with that.
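To illustrate the limited scope: a complete WireGuard endpoint configuration is just a handful of lines (keys elided, addresses are examples):

```ini
[Interface]
PrivateKey = <this host's base64 private key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer's base64 public key>
AllowedIPs = 10.0.0.2/32
Endpoint = peer.example.com:51820
```

No cipher selection, no certificate authority, no negotiation knobs; routing, key distribution, and everything else is left to other tools.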


I mean, that's how you choose to interpret opinionated I guess.

I see it more as "convention over configuration". If you want to (or need to) tweak the configuration and settings extensively, then that tool is perhaps not for you, and that's ok. Perhaps you are a subject matter expert, and you want more control.

If you're ok with sane defaults (that were chosen by subject matter experts, and you are not one), then "opinionated" is a great thing.


> turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage.

Whether it impacts a specific use case is usually neither here nor there - it’s usually about maintainability. And while finding contributors for open source projects can be difficult, finding people who want to do the thankless work of maintaining code long-term is much harder.


I guess my point was that "opinionated" generally seems to imply maintainability at the (explicit and very intentional) expense of utility.


> what it really means is the developer implemented the subset of functionality that they require and turn away suggestions and PRs from people who need additional functionality, even if the changes would have no material impact on the author's usage.

That's one way of looking at it. Another way of looking at it is to emphasize minimalism, the UNIX philosophy, and keeping maintenance burdens low. Sometimes, neither is the case - Ruby on Rails being the classic example of an opinionated framework, one that did expand to add additional functionality over time.


I think most developers are technocrats at heart and this is the manifestation of that.


I’ve configured IPSec vpns for the better part of 15 years.

After using WireGuard for 5 minutes I knew this was going to be a big thing.

IPsec has too many fucking knobs. That is its pitfall.


I feel like a lot of design failures in new wire protocols come down to the organization responsible for the specification not having enough leverage to convince the clients/stakeholders who will eventually implement it to “meet them in the middle” by adapting their systems to suit the protocol. Instead, the clients/stakeholders hold all the leverage, and demand that the specification change to a shape where it has knobs allowing each of them to implement the standard with no change to their current system whatsoever, at the expense of every other client essentially having to reify “the way each other client/stakeholder does things” in the form of each knob.

I wonder if any specification group has ever thrown up their hands and said, “you know what? Fine. Let’s just create one named sub-protocol for the way each of you major players does things; and then have the clients of this protocol do a sub-protocol negotiation; and then have the client use a plugin specific to the sub-protocol that’s been negotiated. Then you don’t need any knobs; all the policy can be baked into the plugin.”

(Come to think of it, this is kind of how the authentication phase of SSH works, when configured to use PAM. “Pretend we’re MIT” (a.k.a. Kerberos); “pretend this is a Microsoft Active Directory domain” (a.k.a. NTLM auth); etc.)
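As a rough illustration of that plugin idea, a PAM stack for sshd can select the auth backend per module (module names are real, but paths and availability vary by distro; this is a sketch, not a recommended config):

```ini
# /etc/pam.d/sshd (illustrative)
auth  sufficient  pam_krb5.so      ; "pretend we're MIT" (Kerberos)
auth  sufficient  pam_winbind.so   ; Active Directory domain auth
auth  required    pam_unix.so      ; local /etc/shadow fallback
```

Each line is effectively a named sub-protocol, and the policy lives entirely in the plugin rather than in knobs on the wire protocol.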


Whether the issue is with the CPU hardware, the mainboard design (VRM, etc), mainboard BIOS, kernel, or the Prime95 app itself still appears to be an open question.

Based on oscilloscope analysis of the VRM output in a linked thread elsewhere in the comments it looks like the board’s VRM design, or its configuration by the board’s BIOS, may be the most likely suspect.

But there are less-researched reports of similar issues on other boards as well, which makes things a bit more murky.

Given the uncertainties there, it may put some people off from buying into the TR/sTRX40 platform in general. But offering a blanket recommendation to avoid it is a bit premature.


I’m curious if this behavior is defined by something in hardware, microcode, boot-time BIOS flags, or higher level kernel/hypervisor/application code.


On unlocked Intel CPUs you can change the AVX multiplier in the BIOS.


Oh neat (I haven’t messed with overclocking in years) - is it just AVX that you can tailor? (Beyond the old school bus multipliers)


Thanks for the recommendations! I have a DSLabs DScope (100 MHz, 2-channel FPGA scope) and while it’s handy I’d prefer to have a proper hardware scope someday. Rigol’s scopes look like they nicely fit in between the basic DSO/FPGA stuff and the “proper” 4-5 digit priced test bench gear.

Any recommendations for learning resources that could help with understanding DC power supply analysis for non-EE types? While refurbishing laptops and working with microcontrollers I’ve run into some odd things where ruling out transient power supply issues would probably be helpful.


The low end Rigols make good entry-level scopes, and have a surprising amount of capability for the price.

As for learning resources, I came across a decent article on the subject when I was starting out (1), and most of the oscilloscope manufacturers have whitepapers on SMPS diagnostics; the Tektronix one I read a while back (2) gave a good overview. A lot of the whitepapers have a manufacturer-specific focus, but they still have good information that can be applied to almost any oscilloscope.

If you want to get really into the power supply and do high-side measurements you'll need an isolated differential probe, which can cost as much as an inexpensive oscilloscope, but for DC output measurement you shouldn't need anything special. Current probes are a lot more affordable if you're interested in looking at loads or current fluctuations/harmonics, but that's more useful after you've figured out a bit more what specific properties you're trying to measure.

1: https://www.testandmeasurementtips.com/test-switching-power-...

2: https://download.tek.com/document/3GW_23612_7.pdf

Edit: I forgot to mention that the EEVblog forums are a good resource also, but they sometimes aren't as friendly as they could be towards people just starting out.

