I'm just scrolling through these while I prepare for a VC pitch session and I realize these UIs are actually pretty darn good?
Like the old Windows 98 UI (probably biased) to me just handles so much; why can't apps just look like that? It's boring, sure, but there's no hidden complexity or inferred action there like in modern apps.
Why do seniors struggle to use modern mobile phones but in those days they could easily work through those UIs?
Just some ranting here but it's shockingly better than what I have currently on Mac
Hard agree. There are already "voice AI" companies that use the normal models with an "interaction" engine on top of them, and they produce better results than I've seen in these demos. idk why people are impressed
What I will say is that this is probably the first model after Gemini Live to do some of these things. It feels similar to Gemini Live, which I don't think is what they were going for exactly, but IMO it is still impressive, as I don't think anyone else has matched full-duplex video/audio/tool calling.
Next Gemini releases are coming next week though; we will see how that matches up!
You get a box and someone is sitting on its IP address (in the middle), proxying to the real one, so everything is getting logged. Other comments say that this mitm stops working when you use public key authentication.
> Other comments say that this mitm stops working when you use public key authentication.
It doesn't completely stop working; a MITM can still pretend to be the server, it just can't authenticate to the real server on your behalf. You could be doing all your work in a fake server controlled by the attacker, while the real server sits there untouched.
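The defense against that impersonation is checking the host key fingerprint on first connect. As a minimal sketch of what that fingerprint actually is (the key material below is made-up filler, not a real host key), OpenSSH's SHA256 fingerprint is just a hash of the raw key blob from the public key line:

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint: hash the raw key blob, base64 without padding."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical key material for illustration only: a well-formed ed25519 blob is
# len("ssh-ed25519") + "ssh-ed25519" + len(key) + 32 key bytes (here all zeros).
blob = b"\x00\x00\x00\x0bssh-ed25519" + b"\x00\x00\x00\x20" + bytes(32)
line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " root@example"
print(ssh_fingerprint(line))
```

If a MITM presents its own server, it cannot present the real host's private key, so the fingerprint you see on first connect won't match the one you obtained out of band (e.g. via cloud-init).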
It does note that it only protects against an attacker "who learns the cloud-init user-data at any point after the script terminates".
If the attacker can get the cloud-init user-data while the script is still running (in the time between sending the cloud-config.yaml and connecting with SSH to the machine) that would still allow MitM, but would require more effort on the attacker's part to leak the cloud-init data.
The point of the script was that leaking the cloud-init data after the script has completed is harmless.
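The "harmless after completion" property presumably comes from pinning: roughly, the script generates the host key locally, ships it in the user-data, and verifies the first connection against a fingerprint it already knows. A minimal sketch of the cloud-config side (key material and names are placeholders, not taken from the actual script) uses cloud-init's `ssh_keys` module:

```yaml
#cloud-config
# Host key pre-generated locally, e.g.:
#   ssh-keygen -t ed25519 -f host_ed25519 -N ""
# (key material below is a placeholder)
ssh_keys:
  ed25519_private: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
  ed25519_public: ssh-ed25519 AAAA... root@placeholder
```

The first SSH connection can then be checked against a known_hosts file you wrote yourself (`ssh -o UserKnownHostsFile=./pinned_known_hosts ...`), so an attacker who only sees the user-data afterwards gains nothing they can still use.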
Yes, I'm just saying that if you think you've set up a server with Hetzner, but Smersh is able to intercept your first interaction with it and present you a server that you think is the one you created, then it doesn't matter how much you try to harden the compromised server. But if you get MITMed later in the process, the above is the scenario you're worried about.
The box itself is probably fine. It's the path between you and it. In shared infrastructure one compromised hop somewhere upstream is enough and now you're SSHing into the wrong thing without realizing it.
I was thinking about PHP-Nuke a while back and its terrible security rep. I figured it was just the regular PHP foot guns of the era, but I took a look at the code recently and boy howdy, that was some truly atrocious code. I'm not a security person (although perhaps security-minded) and I found a million problems after a cursory glance.
I mostly disagree with your disagreement, unless the entire project was built on top security practices and good code in the first place. The vast majority of these web panels are a security nightmare.
These PHP systems, be it cPanel, WordPress, or PHP itself, are most likely the biggest target besides Windows. It's an incredibly uncool stack, especially here, but it is running most of the "independent" small web.
They cannot be that bad if they are managing to be the duct tape of the internet.
I've done PHP development for over 20 years, including some pretty large projects. I've never had a situation where a security flaw in PHP itself forced me to scramble to patch something before it got hacked.
On the other hand, for my Linux servers, I had to do that twice in the last month with CopyFail and DirtyFrag.
That's a fair point, using 'interpreter' specifically was imprecise language on my part. My main point was php-fpm is developed by the core PHP team and is often the default in how PHP projects deploy these days, and that CVE was very similar to the recent 'fail' LPE vulnerabilities in the kernel.
> They cannot be that bad if they are managing to be the duct tape of the internet.
Oh, it very much can be that bad. Most "security" relies on the Hungry Tiger Theory of Security(tm).
My system doesn't need to be "secure". My system simply needs to be more secure than yours. As long as there is an easier and/or more valuable target somewhere, I'm "secure". I don't need to outrun the hungry tiger; I only need to outrun you outrunning the hungry tiger.
That theory, of course, doesn't hold anymore when there are enough tigers to simply eat everybody. And that's what AI did; it multiplied the tigers enough that they can just gorge on everything.
Now, people are going to have to put in "actual security" or lose real money over and over and over. And since everybody has outsourced everything, nobody knows how to fix it quickly. The lawyers are going to have a field day.
At the end, however, we'll have real security on our internet facing systems. But man, it's going to be painful for a while.
Every time I venture into the web server's error log, I see all of the skiddies' attempts at accessing the most common things, most of them .php files. Lots of /wp/admin.php and /phpadmin/ type requests. Of course, none of those are available, which is why the requests end up in the error log. I've never paid close attention, but I wonder how little time it takes for a new server to come online before it starts getting probed. Whether they're just war-dialing IPs or watching new domain announcements, I'd put it at a few hours tops.
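Tallying those probes from a log is a quick one-liner's worth of work. A minimal sketch, assuming a common access-log-style format (the log lines and IPs below are made up for illustration; real formats vary by server config):

```python
import re
from collections import Counter

# Hypothetical log lines; 203.0.113.x / 198.51.100.x are documentation IPs.
log = """\
203.0.113.5 - - [..] "GET /wp-login.php HTTP/1.1" 404 -
203.0.113.5 - - [..] "GET /phpmyadmin/index.php HTTP/1.1" 404 -
198.51.100.7 - - [..] "GET /index.html HTTP/1.1" 200 -
"""

# Count requests for .php paths -- on a server with no PHP, every one is a probe.
probes = Counter(
    m.group(1)
    for line in log.splitlines()
    if (m := re.search(r'"GET (\S+\.php\S*)', line))
)
print(probes.most_common())
```

Running something like this against a fresh server's first day of logs is an easy way to measure that time-to-first-probe directly.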
Dismissing these as script kiddie attempts is no longer correct. This is a real industry now. It’s not like the large scale actors are going to pass up a valid unpatched vector just because it’s old hat.
Imagine this: ~40% of public websites run WordPress (based on some AI-generated summary; even if it's fewer, it's still a significant percentage).
So any new instance you probe runs WordPress with ~40% probability. In mass vulnerability exploitation and detection, it makes sense to aim for the highest success rate first.
Especially when the IPv4 space is so easy to scan nowadays. And you have services like Shodan that do just that daily.
I've tested this recently (this past week). Had a DNS entry up and pointing to an nginx server for ~12 hours: zero requests. 17 seconds after the Let's Encrypt cert was issued, the floodgates opened. Over a dozen requests per second.
I don't think it's necessarily specific to LE but rather to public Certificate Transparency logs. LE being free and easy to automate means it's very widely used these days, but if you theoretically go to a paid root CA and get a cert that covers thing.com and www.thing.com, the same probing will happen on the same timescale.
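The CT logs are trivially queryable, which is how scanners get hostnames within seconds of issuance. A minimal sketch, assuming crt.sh's informal JSON endpoint (`https://crt.sh/?q=<domain>&output=json`); the sample payload below is made up to mirror that shape, no network call is made:

```python
import json

# Made-up sample mirroring the JSON shape crt.sh returns (fields trimmed).
sample = json.dumps([
    {"issuer_name": "C=US, O=Let's Encrypt, CN=R11",
     "name_value": "thing.com\nwww.thing.com",
     "not_before": "2024-01-01T00:00:00"},
])

def issued_names(payload: str) -> set:
    """Hostnames exposed in CT log entries -- exactly what scanners see."""
    names = set()
    for entry in json.loads(payload):
        # name_value packs all SANs into one newline-separated string.
        names.update(entry["name_value"].splitlines())
    return names

print(sorted(issued_names(sample)))  # → ['thing.com', 'www.thing.com']
```

Every SAN on the cert leaks this way, which is also why wildcard certs are sometimes preferred for keeping individual hostnames out of the logs.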
As someone who pretty much exclusively uses debian, freebsd and openbsd for server OS work, I was also rather surprised recently to see the default web gui that comes on a new fedora install.
I was pleasantly surprised to learn the architecture for this - a minimal backend that does a PAM auth and gives you a shell over websocket, with only your own Linux user credentials - and then everything else (from managing files to apache to VMs) is done in frontend javascript.
Keeps the server-side backend minimal and auditable.
> The concept of a GUI wrapper on top of the Linux ecosystem is what's broken
That is a nugget, it's so true.
Wrappers in general are such an issue in software. Wrappers built on top of wrappers: this desire to abstract everything away makes things look simpler, but every layer slows things down and hides what is actually happening. Every wrapper is another layer of complexity, another hoop to jump through when you're looking for the solution to a problem.
Of course it's the architecture and the creator of such a thing. But isn't the point of a tool like that to serve users who don't have the technical knowledge? I have only used those systems on shared hosting; the host providers are the ones maintaining them and should be keeping them up to date, and WHM/cPanel have plenty of customers to worry about patching holes for. If they can't, then whose fault is it: the architecture, or the provider? Hopefully not the customer's?
I would worry less about big shared hosting providers, who have a strong interest in patching their stuff quickly, than about the market of people who get one or two dedicated servers or KVM VMs, install cPanel on them, and then ignore the servers' CLI and never patch anything for the rest of the time they use them. There are a lot of small cPanel users with just a few licenses.
You misunderstood the scope and severity of the bug entirely.
Yes, if you are a single tenant, this diminishes defense in depth, so an attacker that gets access with a user like www-data can escalate to root, sure.
But more importantly, on multi-tenant systems, one tenant can get root and pwn all the other tenants.
Big shared hosting providers are the most vulnerable. "Just patching" stuff might work, sure, but there are several scenarios where it might not be enough, like lightning striking twice, as just happened. Or an attacker getting in before the patch.
I understand the concept of a local privilege escalation just fine, thanks. My point was that large hosting providers are much more likely to have people paying attention to patching these things (and possibly, worst case scenario as you describe, mitigating things if someone does compromise a shared hosting system). Individual one-off cpanel instances may have nobody paying attention to security issues for months or years at a time until something totally breaks.