Not a security expert and also curious about implications:
I always considered it the best solution to have both: VPN encryption and TLS encryption over the VPN. Different OSI Layers. Different Attack Surfaces.
Not sure if that is a recommended practice though (see initial remark ;) )
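To make the layering concrete, here's a minimal Python sketch of what I mean (the 10.8.0.x tunnel addresses and internal.example.org hostname are just hypothetical placeholders, not anything real): the TCP connection is routed over the already-encrypted VPN interface, and TLS adds its own end-to-end encryption and certificate validation on top.

```python
import socket
import ssl

# Hypothetical addresses: assume 10.8.0.2 is our local VPN tunnel address and
# 10.8.0.1 is the peer's tunnel address serving HTTPS.
VPN_LOCAL = ("10.8.0.2", 0)
REMOTE = ("10.8.0.1", 443)

# Layer 1: the VPN. Binding the source address to the tunnel IP makes the OS
# route this TCP connection through the (already encrypted) VPN interface.
raw_sock = socket.create_connection(REMOTE, timeout=10, source_address=VPN_LOCAL)

# Layer 2: TLS on top of the tunnel. Certificate validation still happens end
# to end, independent of whatever the VPN does underneath.
ctx = ssl.create_default_context()
with ctx.wrap_socket(raw_sock, server_hostname="internal.example.org") as tls:
    tls.sendall(b"GET / HTTP/1.1\r\nHost: internal.example.org\r\nConnection: close\r\n\r\n")
    print(tls.recv(4096).decode(errors="replace"))
```

If either layer is misconfigured or broken, the other still protects the traffic - that's the whole appeal of the two attack surfaces being independent.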
I checked the source of the original (like maybe many of you) to see how they actually did it, and it was... simpler than expected. I drilled myself so hard to forget tables as layout... And here it is. So simple it's a marvel.
A form of virtualization was first demonstrated with IBM's CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967–1972, and re-implemented in IBM's VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer.
VAX/VMS was originally virtualization of the PDP-11. Windows NT benefited from the loss of MICA/PRISM: it virtualized/isolated what was once messy, unreliable, single-tasking, cooperative Windows 3.1/9x into something more isolated, reliable, concurrent, and parallel, where the fundamental unit of isolated, granular execution was the process, like UNIX.
DOS-mode "VM"s that run within Windows 3.x/9x/NT aren't really isolated VMs because they can't replace the DPMI server or launch another instance of (386 enhanced mode) Windows. All they do is semi-isolate real-mode and DPMI client apps somewhat, to allow multiple instances of them. They can still do bad things™ and don't have full control of the "system" in the way a real system, an emulator, or a hardware-assisted type-1 or type-2 hypervisor does. They're "virtual" in the way DESQview was "virtual".
Consumerized enterprise virtualization happened in the PC world with VMware Workstation->GSX->Server->ESX->ESXi/vCenter in relatively quick succession around 2001-2005. Xen popped up about the time of ESX (pre-ESXi).
IBM keeps quietly churning out z-based mainframe processors like the z17. Software from the '60s and '70s still runs because they don't do the incompatibility churn that's been more and more widely adopted over the past 15 years: break everything, all the time, so that nothing "old" that relied on a long-lasting standard or compatible ABI keeps working. I'm sure compatibility is a lot of work, but churn is also work, especially when it breaks N users. Also, I don't think many folks from the PC-based enterprise server world appreciate the reliability, availability, and serviceability features mainframes have/had... although vMotion (moving VMs between physical machines linked to shared storage) was pretty cool when it came out.
> The basalt fibers typically have a filament diameter of between 10 and 20 μm which is far enough above the respiratory limit of 5 μm to make basalt fiber a suitable replacement for asbestos.
The source mentioned is a basalt fiber brand website, so not sure if that's enough for confidence.
So does fiberglass. I would dislike working with the aforementioned basalt fiber, I suspect it's like fiberglass or carbon fibers in that you'll end up itchy later, unless you do a really good job with your PPE e.g. taping gloves to your sleeves.
This is the exact way of behaving that facilitates conspiratorial thinking. You could have looked into it and found sources that cover the harmful effects of stone wool. Instead, this 'just pointing out' that it might be problematic is lazy, dumb, and potentially destructive.
You want people to be curious and investigate? Then don't snap at them when they ask a question or express confusion. Respond and show your work and they'll learn by example. Snap at them and you'll raise the temperature of the discussion and make it more polarized and reflexive, exactly the opposite of your stated preference.
And they aren't wrong: inhaling basalt fibers is dangerous, and long-term exposure could injure or kill you. It's just a different mechanism than asbestos. https://en.wikipedia.org/wiki/Silicosis
> (NB: I do not know if or claim that basalt fibers are more dangerous than alternatives.)
For what it's worth, the ex-composite-shop guys I used to work with said that basically everything you can make a composite out of is horribly nasty: carbon fiber, fiberglass, basalt fiber, probably anything period. After repeated exposure you develop contact dermatitis to that type of fiber and the shop moves you on to working with something else, until it happens again. Contact dermatitis is just the first visible sign, it gets worse from there. Eventually you're probably going to want to get out of the shop entirely.
See uses here: https://en.wikipedia.org/wiki/Basalt_fiber
I am no materials scientist, so I cannot comment on the actual reasons why it might be better in specific cases than Kevlar, Dyneema, or carbon. But from experience there's a lot I don't know, and especially in engineering there's a lot to consider when putting materials under stressful conditions, which might put this in a specific spot superior to those mentioned above.
I understand OP's sentiment fully - and the response is probably "it depends" :D
Culture and art are volatile things, and let's assume a game and its mods are a piece of culture and art. Then an update of the original that disrupts those original aspects is basically the destruction of art.
In olden times, in those 90s, when games were offline, you could mod to your heart's desire and nobody could take it away. And by now it's recognized as cultural heritage - even though those old games become less and less appealing to an audience that is used to better game UX. (This is a bold statement by me. My generation grew up with those graphics and loves them - our grandchildren will ask us why we did that, just like they will never understand why people used those loud, noisy typewriters when you can tell your phone to write the text for you.)
Still - typewriters remain usable. But copyright law, online-only games, and forced updates really do destroy that game you played 10 years ago, as you cannot (legally) access it anymore. Mods can be updated, but that requires recreating that art - if it's still possible with changed APIs.
But then game developers need to live off something, and updating and improving games should always be within their rights - see No Man's Sky and how it changed over the years into a completely different game, in a way that would not have been possible otherwise.
IMHO it would be simple to keep significant old versions available to the general public like WoW did with their Classic rollback (not sure if this is the best example) - or like System Shock, where there's the rewrite and there's the original, and everyone can use the version they prefer without preventing the original developer from publishing and improving.
I would really be interested in an actual comparison, where e.g. someone compares the full TCO of a MySQL server with backups, a hot standby in another data center, and admin costs.
On AWS an Aurora RDS is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. Not even talking about the level of compliance I get from having every layer encrypted when my hosted box is just a screwdriver away from data getting out the old school way.
When I'm small enough or big enough, self-managed makes sense and probably is cheaper. But when getting the right people with enough redundancy and knowledge becomes the expensive part...
But actually - I've never seen this in any of these arguments so far. Probably because actual time required to manage a db server is really unpredictable.
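For what it's worth, here's a back-of-envelope sketch of the kind of comparison I mean - every number is a made-up placeholder, not a real quote, and the admin-time fraction is exactly the unpredictable part:

```python
# Rough, illustrative-only TCO comparison: self-managed MySQL with a hot standby
# in a second data center vs. a managed service in the Aurora class.
# All figures are hypothetical placeholders; plug in your own quotes and salaries.

MONTHS = 12

# Self-managed: two servers (primary + standby), backup storage, and a guessed
# fraction of an admin's time -- the number nobody can pin down.
server_monthly = 2 * 150          # two dedicated boxes
backup_monthly = 40               # offsite backup storage
admin_fraction = 0.15             # guess: 15% of one admin's time
admin_salary_monthly = 8000
self_managed = MONTHS * (server_monthly + backup_monthly
                         + admin_fraction * admin_salary_monthly)

# Managed: the instance price already bundles failover, backups, and patching.
managed_monthly = 900             # placeholder instance + storage + I/O
managed = MONTHS * managed_monthly

print(f"self-managed / year: ~{self_managed:,.0f}")
print(f"managed      / year: ~{managed:,.0f}")
# The conclusion flips entirely depending on admin_fraction, which is why these
# threads never settle it.
```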
> Probably because actual time required to manage a db server is really unpredictable.
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience in hosting their own servers (or at least a homelab-person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
It's a skillset that is out of favour at the moment as well, but having someone who has done server ops and devops and can develop as well is a bit of a money saver generally, because they open up possibilities that don't exist otherwise. I think it's a skillset that no one really hired for past about 2010, when cloud was mostly taking off, and it got replaced with cloud engineers or pure devops or ops people, but there used to be people with this mixed skillset in most teams.
I've never had a server go down. Most companies don't need a hot standby because it's never going to be needed.
AWS + Azure have both gone down with major outages individually more over the last 10 years than any of the servers in companies I worked with in the 10 years before that.
And in comparable periods, not a single server failed or disk failed or whatever.
So I get that SOME companies need hot standby servers, but almost no company, no SaaS, no startup, actually does.
Because if it's that mission critical, then they would have already had to move off the cloud due to how frequently AWS/Azure/etc. have gone down over the last 10 years, often for half a day or so.
I've had a lot of servers go down. I've had data centers go down. For various reasons - normally not a failed disk, but configuration errors due to human error.
And I've had enough cases where the company relied on just that one guy who knew how things worked - and when they retired or left, you had big work ahead of you understanding the systems that guy maintained and never let anyone else touch. Yes, this might also be a leadership issue - but it's also an issue if you have no one else with that specific knowledge. So I prefer standardized, prepackaged, off-the-shelf solutions that I can hire replaceable people for.