Tbf to them, most people equate streamers with individuals who have thousands of viewers. From that perspective, their statements kinda make sense.
While I personally wouldn't be able to perform in such a setting, I'd be lying if I said the idea isn't kinda charming - it's like wanting to be a rock star: a small part of you thinks it'd be cool, even if most people don't actually want to live the life of a rock star.
Though the wealth that comes with it would be neat to have (I mean, most streamers with thousands of non-botted viewers are millionaires at this point, right?)
You left out the important and main reason: support for IE wasn't dropped - support for IE6 was dropped, at a point in time when it had already long since been deprecated by its maintainer, Microsoft.
When I got into software, that employer was pretty small (around 50 people overall, I think).
Their approach to QA was to make it an optional thing the service people could do.
It worked surprisingly well, with the caveat that they never created regression tests.
The employer eventually moved someone from that team to establish these regression tests full time, but they had no programming experience, and by the time I left no real progress had been made in around 6 months. No idea what came of that, and a few years later they fired a lot of the team.
macOS has a built-in 4x4 window tiling which works for this purpose for me. I don't find myself ever wanting more than 4 windows open on an ultrawide. Definitely not as powerful as something like xmonad, but useful for the majority of my use cases.
You can, I believe, but I often need to move between computers so I try not to mess with shortcuts too much (or go down keyboard layout rabbit holes, etc).
Honeypot, sure - I didn't think of that. But I was under the impression the FBI confirmed it? So we can rule that out.
Making the password impossible to guess - how could that not be worth doing?
Then you know you have a breach: since it's randomised gibberish, if you then get the second device asking "is this you trying to log in?", you can be certain you are compromised.
I can't see your logic here - how is that "theatre"?
If you think that is theatre, what is better then? Words and numbers are easily brute-forced. Sorry, can't agree.
Why would they willingly destroy their successful honeypot if the other party announced they have access to it?
I haven't seen what's in it either, but I would not rule it out yet, especially with the FBI involved - they love those tactics.
When you're compromised, changing the password is obviously not theatre - changing a password that was randomly generated with enough entropy is what's pointless theatre. A secure password is secure. If you're already using a password manager, the act of changing it isn't meaningfully increasing your security (unless you know your password was compromised), because how would it be compromised in the first place? A keylogger on a device you logged in on? Then the changed password will be just as compromised.
That's why KeePass is really useful: you aren't ever typing in the password. It's generated and then copied to the clipboard, and that clipboard is wiped after X seconds.
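To illustrate the kind of generated password being discussed, here's a minimal sketch in Java using SecureRandom. The charset and length are arbitrary choices for the example, not KeePass's defaults:

```java
import java.security.SecureRandom;

public class PasswordGen {
    // Example charset; a real manager lets you configure this per entry.
    static final String CHARSET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";

    static String generate(int length) {
        SecureRandom rng = new SecureRandom(); // cryptographically strong source
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(CHARSET.charAt(rng.nextInt(CHARSET.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(24));
    }
}
```

The point is that the password never exists in your head at all - it only ever lives in the manager and, briefly, on the clipboard.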
So if changing it fails to resolve the issue, then you know you have been rooted.
Reduce the number of vectors so you know what you have to change asap. In this scenario you don't want to be guessing about how they did it.
The randomised gibberish just means you can rule certain things out. I can agree with part of what you're saying, but a high-entropy password string makes it harder to brute-force.
Many services don't really implement that whole retry-limiting thing properly. So make cracking take as long as possible.
If you don't use random gibberish, your password can be cracked on any consumer device in a surprisingly short amount of time...
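To put rough numbers on that claim, here's a back-of-the-envelope entropy calculation. The guess rate of 10^10/s is an illustrative assumption for an offline attack on a fast hash, not a measured figure:

```java
public class EntropyMath {
    // Entropy in bits of a uniformly random password: length * log2(charsetSize)
    static double entropyBits(int charsetSize, int length) {
        return length * (Math.log(charsetSize) / Math.log(2));
    }

    public static void main(String[] args) {
        // 16 random chars from 95 printable ASCII: ~105 bits
        double strong = entropyBits(95, 16);
        // 8 random lowercase letters: ~37.6 bits
        double weak = entropyBits(26, 8);

        double guessesPerSecond = 1e10; // assumed offline attack rate
        double weakSeconds = Math.pow(2, weak) / guessesPerSecond;

        System.out.printf("strong: %.1f bits, weak: %.1f bits%n", strong, weak);
        System.out.printf("weak space exhausted in ~%.0f seconds%n", weakSeconds);
    }
}
```

Under those assumptions the 8-letter password's whole search space falls in well under a minute, while the 105-bit one is out of reach regardless of hardware.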
This way you can focus on the likelihood that a session token is how they got in. It's the most common vector these days...
Uh, I've worked for a few years as a frontend dev - as in, a literal frontend dev: at that job my responsibility started at consuming backend APIs and ended at feeding them, essentially.
From that experience I completely agree with your statement - however, you're not addressing the point he makes, which kinda makes your statement completely unrelated to his point:
99.99% of all performance issues in the frontend are caused by devs doing dumb shit at this point
The frameworks' performance benefits are not going to meaningfully impact this issue anymore; hence, no matter how performant yours is, that's still going to be their primary complaint across almost all complex rwcs.
And the other issue is that we've decided that complex transpiling is the way to go in the frontend (TypeScript) - without that, all build-time issues would magically go away too. But I guess that's another story.
It was a different story back when e.g. MeteorJS was the default, but nowadays they're all fast enough not to be the source of the performance issues.
I guess you linking to it was a self-fulfilling prophecy.
If you read your own reference (not the picture, but where you took it from on Wikipedia) really really carefully, you might be able to tell why it so perfectly applies to you
The person with little knowledge overestimates their capability, and the person who actually knows how complicated [the thing] is usually isn't as confident they've mastered it.
You’re talking about a confidence and ability gap. I have heard of the Dunning-Kruger effect. I accept all of that.
But the claim above was that having low confidence was correlated with higher skill, i.e. that skill and confidence are anti-correlated. The chart does not show that. The lowest data point for confidence is the point on the left of the chart, which is also the data point corresponding to the people with the least competence. Having low confidence is not evidence that you're secretly an expert. Confidence and competence are still positively correlated according to that chart.
The Dunning-Kruger effect is not so strong that there are scores of novices convinced they are experts in a field. But in your case, I admit the data may not tell the full story.
For the record, you're most likely not even interacting with that API directly if you're using any current framework, because most just provide automagically generated clients where you only define the interface with some annotations.
To me what makes this very "Java" is the arguments being passed, and all the OOP stuff that isn't providing any benefit and isn't really modeling real-world-ish objects (which IMHO is where OOP shines).

.version(Version.HTTP_1_1) and .followRedirects(Redirect.NORMAL) I can sort of accept, but they require knowing which class and value to pass, which means lookups and documentation references, and these are spread out over a bunch of classes. But we start getting really "Java" with the next ones.

.connectTimeout(Duration.ofSeconds(20)) - why can't I just pass 20 or 20_000 or something? Do we really need another class and method here?

.proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 80))) - geez, that's complex.

.authenticator(Authenticator.getDefault()) - why not just pass a bearer token or something? Now I have to look up this Authenticator class, initialize it, figure out where it's getting the credentials, how it's inserting them, how I put the credentials in the right place, etc.

The important details are hidden/obscured behind needless abstraction layers IMHO.
I think Java is a good language, but most modern Java patterns can get ludicrous with the abstractions. When I was writing lots of Java, I was constantly setting up an ncat listener to hit so I could see what it was actually writing, and then I'd have to hunt down where a certain thing was being done and figure out the right way to get it to behave correctly. Contrast with a typical TypeScript HTTP request, where you can mostly tell just from reading the snippet what the actual HTTP request is going to look like.
> but it requires knowing what class and value to pass
Unless you use a text editor without any coding capabilities, your IDE should show you which values you can pass. The alternative is to have more methods, I guess?
> why can't I just pass 20 or 20_000 or something
20 what? Milliseconds? Seconds? Minutes? While I wouldn't write out the full Duration.ofSeconds(20) (you can save the "Duration." with a static import), I don't understand how one could prefer a version that makes you guess the unit.
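As a quick illustration of why the unit-carrying type helps, here's a minimal sketch with java.time.Duration (the static import is the shortening mentioned above; timeoutMs is the hypothetical alternative, not a real API):

```java
import static java.time.Duration.ofSeconds;

import java.time.Duration;

public class TimeoutDemo {
    public static void main(String[] args) {
        Duration timeout = ofSeconds(20); // the unit is explicit at the call site
        // A hypothetical timeoutMs(20_000) forces the caller to remember the
        // unit; Duration converts for you instead:
        System.out.println(timeout.toMillis()); // 20000
    }
}
```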
Yes it is; can't add anything here. There's a tradeoff between "do the simple thing" and "make all things possible", and Java chooses the latter here.
> .authenticator(Authenticator.getDefault()), why not just pass bearer token or something?
Because this Authenticator is meant for prompting a user interactively. I concur that this is very confusing, but if you want a Bearer token, just set the header.
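For reference, setting a bearer token that way is just a one-liner on the request builder (the URL and token here are obviously placeholders):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class BearerDemo {
    public static void main(String[] args) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/data")) // example URL
                .header("Authorization", "Bearer <your-token-here>")
                .GET()
                .build();
        System.out.println(request.headers().firstValue("Authorization").orElse("none"));
    }
}
```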
> Unless you use a text editor without any coding capabilities, your IDE should show you which values you can pass. The alternative is to have more methods, I guess?
Fair enough - as much as I don't like it, in the Java world it's safe to assume everyone is using an IDE. And when your language is (essentially) dependent on an IDE, this becomes a non-issue (actually, I might argue it's even a nice feature, since it's very type-safe).
> 20 what? Milliseconds? Seconds? Minutes? While I wouldn't write the full Duration.ofSeconds(20) (you can save the "Duration."), I don't understand how one could prefer a version that makes you guess the unit.
I would assume milliseconds, and I'd probably put it in the method name, like timeoutMs(...) or something. I will say it's very readable, but if I were writing it I'd find it annoying. But optimizing for readability is a reasonable decision, especially since ~80% of coding is reading rather than writing (on average).
I didn't mention IHttpClientFactory - just HttpClient. I will concede that ASP manages to be confusing quite often. As for the latter, guidelines are not requirements any more than "RTFM" is; you can use HttpClient without reading the guidelines and be just fine.
Yeah this is all over Rust codebases too for good reason. The argument is that default params obfuscate behaviour and passing in a struct (in Rust) with defaults kneecaps your ability to validate parameters at compile time.
Your HTTP client setup is over-complicated. You certainly don't need `.proxy` if you're not using a proxy or are using the system default proxy, nor do you need `.authenticator` if you're not doing HTTP authentication. Nor do you need `.version`, since there is already a fallback to HTTP/1.1.
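Stripped of the unused options, the minimal version of that setup is quite short; the 20-second timeout is just an example value carried over from the snippet being discussed:

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class MinimalClient {
    public static void main(String[] args) {
        // Only configure what you actually need; everything else has defaults.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(20))
                .build();
        System.out.println(client.connectTimeout().orElseThrow());
    }
}
```

You could even drop the timeout and use HttpClient.newHttpClient() if the defaults are acceptable.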
I mean, don't get me wrong, I work with Java basically 8 hours per day.
I also get _why_ the API is the way it is - it essentially boils down to the massive Inversion of Control fetish the Java ecosystem has.
It does enable code that "hides" implementation very well; the quoted example's authentication API lets you authenticate in any way you can imagine - literally any way imaginable.
It's incredibly flexible. Want to only send the request out after you've touched a file, sent off a message through a message broker, and then maybe flexed by waiting for the response of that async communication and using it as a custom attribute in the payload, in addition to a dynamically negotiated header set according to the response of a DNS query? Yeah, we can do that! And the caller doesn't have to know any of that... at least as long as it works as intended.
Same with the proxy layer: the client is _entirely_ extensible. That is what Inversion of Control enables.
It just comes with the unfortunate side effect of forcing the dev to be extremely fluent in enterprisey patterns. I don't mind it anymore, myself. The other day I even implemented a custom "dependency injection"-inspired system for data in a very dynamic application at my day job. I did that so the caller won't even need to know what data it needs - it just gets automatically resolved through the abstraction. But I strongly suspect that if a junior developer who hasn't gotten used to the Java ecosystem comes across it, he'll be completely out of his depth as to how the grander system works - even though a dev who's used to it will likely understand the system within a couple of moments.
Like everything in software, it has advantages and disadvantages. Java has just historically always tried to "hide complexity", which in practice paradoxically multiplies complexity _if you're not already used to the pattern being used_.
Thanks for the thoughtful response, I appreciate it.
Yeah, I remember the first time I encountered a Spring project (well before Boot was out) and just about lost my shit with how much magic was happening.
It is productive once you know a whole lot about it though, and I already had to make that investment so might as well reap the rewards.
There are often controllers which do indeed just mimic the signals. It doesn't work with every appliance; it depends on the way it's implemented and whether the manufacturer wanted to make that approach infeasible.
But there absolutely are options to record such signals and then replicate them via Home Assistant - I used them before to control a ceiling fan and various infrared devices (same idea, but instead of a radio there's a "blaster", I think it was called).
I didn't set it up again after my last move though, as I couldn't mount the ceiling fan in this apartment, and the infrared devices were just my media center (TV, audio), which is hardly in use currently.
Personally I usually just create a devcontainer.json; the VS Code support for that is great, and I don't really mind if it fucks up the ephemeral container.
Which, for the record, hasn't actually happened since I started using it like that.
Hey thanks for this! I hadn't thought about leveraging devcontainer.json, but it's a damn good idea. I'm building yoloAI for exactly this use case so I hope you don't mind if I steal it ;-)
One thing to be aware of with the pure devcontainer approach: your workspace is typically bind-mounted from the host, so the agent can still destroy your real files. Network access is also unrestricted by default. The container gives you process isolation but not file or network safety.
I'm paranoid about rogue AIs, so I try to make everything safe-by-default: the agent works on a copy of your workdir, you review a unified diff when it's done, and you apply only what you want. So your originals are NEVER touched until you explicitly say so, and network can be isolated to just the agent's required domains.
Anyway, here's what I think will work as my next yoloAI feature: a --devcontainer flag that reads your existing devcontainer.json directly and uses it to set up the sandbox environment. Your image, ports, env vars, and setup commands come from the file you already have. yoloAI just wraps it with the copy/diff/apply safety layer. For devcontainer users it would be zero new configuration :)