Environment variables are often used to pass secrets around. But, despite its ubiquity, I believe that's a bad practice:
- On Linux systems, any user process can inspect any other process of that same user for its environment variables. We can argue about the threat model, but, especially on a developer's system, there are A LOT of processes running as the same user as the developer.
- IMO, this has become an even more pronounced problem with the popularity of running non-containerized LLM agents in the same user space as the developer's primary OS user. It's a secret exfiltration exploiter's dream.
- Environment variables are usually passed down to other spawned processes instead of being contained to the primary process, which is often the only one that needs them.
- Systemd exposes unit environment variables to all system clients through DBUS and warns against using environment variables for secrets[1]. I believe this means non-root users have access to environment variables set for root-only units/services. I could be wrong; I haven't tested it yet. But if this is the case, I bet that's a HUGE surprise to many system admins.
I think ephemeral file sharing between a secret managing process (e.g. 1Password's op cli tool) and the process that needs the secrets (flask, terraform, etc.) is likely the only solution that keeps secrets out of files and also out of environment variables. This is how Systemd's credentials system works. But it's far from widely supported.
Any good solutions for passing secrets around that don't involve environment variables or regular plain text files?
Edit: I think 1Password's op client has a good start in that each new "session" requires my authorization. So I can enable that tool for a CLI session where I need some secrets, but a rogue process that just tries to use the `op` binary isn't going to get to piggyback on that authorization; I'd get a new popup. But this is only step 1. Step 2 is... how to share that secret with the process that needs it, and we are now back to the discussion above.

1: https://www.freedesktop.org/software/systemd/man/latest/syst...
Since at least 2012, environment variables have been at least as secure as ordinary memory:
  commit b409e578d9a4ec95913e06d8fea2a33f1754ea69
  Author: Cong Wang <xiyou.wangcong@gmail.com>
  Date:   Thu May 31 16:26:17 2012 -0700

      proc: clean up /proc/<pid>/environ handling
You can't read another process's environment unless you can ptrace-read the process, and if you can ptrace-read the process you know all its secrets anyway.
Just to clarify, that means that by default every process of the same user can access the variables. But it doesn't really matter because by default every process of the same user can read any secret directly from the target process anyway, right?
And that the right thing to do if you want to harden your system is to disallow ptrace-read, and not bother changing software that uses environment variables?
Because I think most people that just try will be able to read the variables of any process on their computer.
Yes, you can consider all processes running under the same user as able to peek at each other's data. This is the point of running under the same uid: sharing data. One uid should be considered one security domain, with any separators inside it being guardrails, not brick walls.
If you want to prevent other processes from peeking into your process, run it under a different uid. Again, that's the point. A bunch of good software does that, running only small privileged bits in separate processes, and running some or all of the processes under a uid with severely limited privileges. See e.g. Postfix MTA, or the typical API server + reverse proxy split.
I don't think that this part of Unix is somehow problematic, or needs changing.
I think this part of Unix is exceedingly problematic. I want to be able to run programs that don't have the ability to do anything that I, personally, can do. Ideally this would be doable without nasty kludges or root's help.
Linux has some ways to accomplish this, for example:
- seccomp. It can be done quite securely, but running general purpose software in seccomp is not necessarily a good way to prevent it from acting like the running user.
- Namespaces. Unless user namespaces are in use, only root can set this up. With user namespaces, anyone can do it, but the attack surface against the kernel is huge, mount namespaces are somewhat porous (infamously, /proc/self/exe is a hole), and they can be a bit resource-intensive, especially if you want to use network namespaces for anything interesting. Also, user namespaces have kind of obnoxious interactions with ordinary filesystem uids (if I'm going to run some complex containerized stack with access to a specific directory within my home directory, who should own the files in there, and how do I achieve that without root's help?). And userns without subuid doesn't isolate very strongly, and subuid needs root's help.
- Landlock. Kind of cool, kind of limited.
- Tools like Yama. On Ubuntu, with Yama enabled (the default), programs generally can't ptrace other programs running as the same user (or read their /proc/<pid>/environ).
In any case, once you've done something to prevent a process from accessing /proc/<pid> for other pids belonging to the same user (e.g. pidns) and/or restricted ptrace (e.g. PR_SET_DUMPABLE), then environ gets protected.
It's not your (unprivileged user account's) place to decide the security posture of the entire system, that's why you're running into issues with root. Even Yama, an LSM, requires root (or de facto equivalent) for initial setup (as it should).
Namespaces, if done incorrectly, can significantly increase the attack surface of the entire system (mount namespaces especially need to be treated with care) and same with regards to anything to do with user accounts.
The real security barrier on most operating systems to date is the user account and if you want full isolation (modulo kernel bugs), run a process as a separate user if you're that concerned about leakage (or ideally, don't run it at all).
> It's not your (unprivileged user account's) place to decide the security posture of the entire system, that's why you're running into issues with root.
Tell that to literally any unprivileged user who would like to run any sort of software without exposing their entire account to any possible code execution bug or malicious code in the software they're running.
> The real security barrier on most operating systems to date is the user account and if you want full isolation (modulo kernel bugs), run a process as a separate user if you're that concerned about leakage (or ideally, don't run it at all).
That's a very 1980s or maybe 1990s perspective, I think. It's 2025. I have some computers. They are mine. I trust myself to administer them and do pretty much anything to them, but I would prefer not to trust all my software. Sure, I can, very awkwardly, try to isolate things by using multiple uids, but it's extremely awkward and I see no technical reason why uids are a good solution here.
And there is no shortage of technical reasons why uids are a horrible solution. I would like to run unprivileged software and delegate some degree of privilege to them -- imagine a web browser or an IDE or whatever. I, as the owner of the machine, can make a uid. But I would also like my IDE to isolate the various things it runs from each other and from the IDE as a whole, and it cannot do that, because it does not own the machine and should absolutely not be running as root.
I think the main point here is that... well, if you can help it, don't run untrusted software, since it's by definition not trusted. There are times when you can't really get around it (JavaScript is an increasingly big example, and in many ecosystems it's very difficult to avoid running untrusted code), and there are many general protections in OSes that will help you there.
On Linux you have some combination of Landlock, AppArmor, SELinux, calling prctl(PR_SET_NO_NEW_PRIVS), and the kitchen sink. On FreeBSD you have capsicum. Windows has integrity labeling + a bunch of stuff related to Job objects + a few things to disable win32k.sys calls.
But these are helpful and shouldn't be considered a panacea. The expectation is that you're delegating authority to a computer program to perform a certain task. Do computer programs abuse that authority sometimes? Absolutely. But nonetheless that's the fundamental model of most computer security, thanks in part to its usefulness.
I skimmed the docs. It appears that this is a mechanism whereby a developer can restrict what their app can do via some configuration in Xcode. This seems almost unbelievably weak — developers want more privileges for their app, not fewer (especially developers of semi-malicious tracker-style SDKs), and the app sandbox seems entirely incapable of, say, allowing a document editor app to open a document in a separate sandbox.
Yes, if we talk about interactive use, where the user may want to run a program with isolation on a whim, and can't be bothered to prepare a separate account for it.
Namespaces, to my mind, are a huge help, starting from trivial chroot and nsenter, all the way to bubblewrap [2] and containers (rootless, of course).
Also, with a properly prepared disk image, and the OS state frozen when everything heavyweight has started already, you can spawn a full-blown VM in well under a second, and run a questionable binary in it.
It would be fun to have "domains" within a user account, with their own views of the file system, network, process tree, etc, and an easy way to interactively run programs in different domains. Again, it's pretty doable; whoever creates an ergonomic product to do it, will win! (Well, not really, because most developers sadly run macOS, which lacks all these niceties, but has its own mechanisms, not very accessible to the user.)
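To make the interactive-isolation idea concrete, here's a minimal bubblewrap sketch (assuming a merged-/usr distro; the binds are illustrative and you'd adjust them to taste):

  # Run bash with a private /tmp, its own pid/net/etc. namespaces, and
  # only read-only access to /usr:
  bwrap --ro-bind /usr /usr \
        --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
        --proc /proc --dev /dev --tmpfs /tmp \
        --unshare-all --die-with-parent \
        bash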
Part of the problem with namespaces in general comes from setuid/setgid binaries. Now, nosuid helps, but the argument goes (and this is why dropping privileges on Linux sometimes requires root) that blocking setuid can actually increase attack surface, because many programs, as part of their initialization routines, setuid to a specific service user. Blocking setuid blocks this, which can paradoxically lead to a program running with too many privileges (or the incorrect set of them).
And in the case of allowing setuid alongside filesystem views, the issue becomes that an unprivileged user could create a view of the filesystem whose /etc/passwd and /etc/shadow are files the attacker controls in their home directory. Run some setuid program (like su or sudo) with this view and we've successfully become root, breaking out of the sandbox.
And you can't whitelist /etc/passwd or whatever either. This is why allowing anyone to play with mount points is fraught with danger.
Now is suid/sgid a fundamental part of a Unix-like system? No, but setuid was created in the world that was and even though these are arguably bugs, releasing a Linux kernel that creates a bunch of security holes in userspace is a very very bad breakage of userspace.
---
No New Privileges does make this a bit better (as you can never gain more privileges than you already have, and it's a one-way door), and the kernel docs even say that eventually unshare or chroot may be allowed when operating under no new privileges.
But this is currently why you can't chroot as an unprivileged user: without that restriction you could trivially blow through the security domain on most Linux distributions.
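For anyone who wants to experiment, util-linux's setpriv can set the flag from a shell. A quick sketch (a demonstration, not a sandbox):

  # Launch a command with no_new_privs set: execve() can no longer grant
  # privileges via setuid/setgid bits or file capabilities.
  setpriv --no-new-privs bash -c 'sudo id'
  # sudo fails here: the setuid bit on the sudo binary no longer elevates.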
I've never tried to use ptrace-read to read secrets from a running process's memory so I can't comment on that part.
I had always assumed getting secrets from a running process' memory was non-trivial and/or required root permissions but maybe that's a bad assumption.
However, reading a process' environment is as trivial as:
`cat /proc/123456/environ`
as long as it's run by the same user. No ptrace-read required.
You do need PTRACE access to pid 123456 in order to access that file. It is transparent to you, but the kernel will use the current task's PTRACE_ATTACH access when attempting to get that information.
By default, on most distributions, a user has PTRACE_ATTACH to all processes owned by it. This can be configured with ptrace_scope:
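For example (the sysctl is provided by the Yama LSM, and changing it requires root):

  # 0 = any same-uid process may attach; 1 = only ancestor processes may.
  cat /proc/sys/kernel/yama/ptrace_scope
  # Tighten it:
  echo 1 | sudo tee /proc/sys/kernel/yama/ptrace_scope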
I haven't seen much language support for it, though. Partly, maybe, because it's Linux-only.
People who write in Rust (and maybe Go, depending on how easy the FFI is) should give it a try.
For a time I wanted to get some support for it into PHP, since wrapping a C function should be easy, but the thought of having to also modify php-fpm put a dent in that. I can't, and don't want to, hack on C code.
In practice it'd be great if the process manager spawned children after opening a secret file descriptor and passed those on: not in visible memory, not in /proc/*/environ.
You should be able to build up a nice capability model to get access to those memfds from a daemon too, rather than having to spawn out of a process manager, if that model fits your use case a bit better.
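A minimal sketch of the plain fd-inheritance variant (the path and the --secret-fd flag are hypothetical):

  # The parent opens the secret; the child inherits the open descriptor,
  # so the secret never appears in the child's environ or argv.
  exec 3</etc/myapp/secret       # hypothetical path, readable by the parent only
  /opt/myapp/run --secret-fd 3   # hypothetical flag: the app reads fd 3
  exec 3<&-                      # close it in the parent afterwards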
> - On Linux systems, any user process can inspect any other process of that same user for its environment variables. We can argue about the threat model, but, especially on a developer's system, there are A LOT of processes running as the same user as the developer.
Here's the thing though, the security model of most operating systems means that running a process as a user is acting as that user. There are some caveats (FreeBSD has capsicum, Linux has landlock, SELinux, AppArmor, Windows has integrity labels), but in the general case, if you can convince someone to run something, the program has delegated authority to act on behalf of that user. (And some user accounts on many systems may have authority to impersonate others.)
While it's by no means the only security model (there exist fully capability-based operating systems out there), it's the security model that is used for most forms of computing, for better or worse. One of the consequences of this is that you can control anything within your domain. I can kill my own processes, put them to sleep, and, most importantly for this, debug them. For anything I own that has secrets, I can grab them via ptrace/process_vm_readv/ReadProcessMemory/etc.
--passphrase-fd was always the correct thing to do (gpg has had it for ages), but support for that is very meager. I mean, you can't even kubectl login using environment variables; you pretty much have to pass tokens through command-line arguments, and we've known that's terrible for around forty-five years.
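For reference, the gpg pattern looks roughly like this (file name and op:// path are illustrative); the passphrase travels over a pipe, touching neither environ nor argv:

  # fd 3 is a pipe fed by the secret manager; gpg reads the passphrase from it.
  gpg --batch --pinentry-mode loopback --passphrase-fd 3 \
      --decrypt secrets.gpg 3< <(op read 'op://vault/item/passphrase')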
You've described the classic Unix security model, with minor improvements. While decent for its time, it is showing its age, specifically in the difficulty of adapting to cheap, ubiquitous computing, which it wasn't designed for.
If you need to keep secrets from other processes—don't run them under the same user account. Or access them remotely, although that brings other tradeoffs and difficulties.
> Environment variables are often used to pass secrets around. But, despite its ubiquity, I believe that's a bad practice:
I think environment variables are recommended to pass configuration parameters, and also secrets, in containerized applications managed by container orchestration systems.
By design, other processes cannot inspect the environment variables of a process running in a container.
Also, environment variables are passed to child processes because, by design, the goal is to run child processes in the same environment (i.e., same values) as the parent process, or with minor tweaks. And the process that spawns child processes is the one responsible for setting their environment variables, which means it already has at least read access to those secrets.
All in all I think all your concerns are based on specious reasoning, but I'll gladly discuss them in finer detail.
Please note that in my OP, I never mentioned containers.
Let's say, as a developer, I need to do some API interactions with GitHub. So, in a terminal, using 1Password's cli tool `op`, I load my GH API token and pass it to my Python script that is doing the API calls.
Presumably, the reason I use that process is because I want to keep that token's exposure isolated to that script and I don't want the token exposed on the filesystem in plaintext. There is no reason for every process running as my user on my laptop to have access to that token.
But, that's not how things work. The Claude agent running in a different CLI session (as an example) also now has access to my GitHub token. Or, any extension I've ever loaded in VS Code also has access. Etc.
It's better than having it in plain text on the file system but not by much.
Additionally, unless I've misunderstood the systemd docs, if you are passing secrets through environment variables using a unit's `Environment` config, ANY USER on a server can read those values. So now we don't even have user isolation in effect.
My reasoning is pretty plain. It could be wrong, but it's hardly specious.
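To make that GitHub example concrete, the flow I'm describing is roughly this (vault path and script name are illustrative):

  # The token goes over a pipe to the script's stdin: no env var, no argv,
  # no file on disk. Only this one child process ever sees it.
  op read 'op://Private/GitHub/token' | python3 gh_calls.py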
> The Claude agent running in a different CLI session (as an example) also now has access to my GitHub token. Or, any extension I've ever loaded in VS Code also has access.
If you're giving untrusted software full access to your system, you're not in a position to complain about the system not being secure enough. Security starts by making mindful decisions according to your threat model. No system can keep you secure from yourself. I mean, it can, but it wouldn't be a system you have control over (see Apple, etc.).
There are many solutions that can mitigate the risks you mention. They might not be as convenient as what you're doing now, but they exist. You can also choose to not use software that siphons your data, or exposes you to countless vulnerabilities, but that's a separate matter.
I think in the context of containers you're right, there's a level of isolation and secrets are probably fine. But I think under other contexts that lack that isolation (e.g. bare-metal processes, local dev tooling) there are extra concerns.
(inb4: container env-vars are isolated from other containers, not from processes on the host system)
> By design, other processes cannot inspect the environment variables of a process running in a container.
That's not exactly true. If a process is running in a container, and someone is running bash outside of that container, reading that process's environment variables is as simple as `cat /proc/<pid>/environ`. If you meant that someone in one container cannot inspect the environment variables of a process running in a different container, that's closer to true. That said, containers should not be considered a security boundary in the same way a hypervisor is.
Point to a file?
E.g. `CONFIG_PATH=/etc/myapp/config.ini /opt/myapp`
That being said, I still use env vars and don't plan on stopping. I just haven't (yet?) seen any exploits or threat models relating to it that would keep me up at night.
Is that file more secure than the environment variables it's replacing? On Linux I think you can restrict it to just your service with SELinux. Not sure about Windows.
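Even without SELinux, a dedicated service account plus plain ownership and modes goes a long way (account and path names illustrative):

  # Only the service account (and root) can read the config.
  sudo chown myapp-svc: /etc/myapp/config.ini
  sudo chmod 600 /etc/myapp/config.ini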
The Linux security model is pretty broken without namespaces. systemd has some bells and whistles that help, but if you want something better than environment variables you're naturally going to reach for cgroups.
The Linux/Unix security model is that different actors are represented by different OS users. When you don't do that, it's already like inviting the burglars into your living room.
"Everybody" runs an operating system, where it is common to install programs by downloading random files from the internet and then clicking OK to any UAC prompt. Doesn't mean that this is not insecure.
Is there consensus around that? Given how containers and bubblewrap/flatpak are considered (to some degree) security boundaries, I approach statements like this with skepticism.
You answered your own question. Bubblewrap uses namespaces, but not by themselves. They're used in conjunction with other tools to provide a security boundary. Even then, it's not a very good security boundary. A serious security boundary should hold up even to privilege escalation, which is not true for bubblewrap.
Each kind of namespace provides its own kind of boundary. Yes, you need something like bubblewrap to stitch that all together. And it is kinda leaky, especially without taking some degree of care.
What're you saying about privilege escalation? I don't see how a user namespace does not prevent/limit privilege escalation.
More than that, I'm interested if there is some broader consensus I'm missing on the shortcomings of namespaces.
> I don't see how a user namespace does not prevent/limit privilege escalation.
They only prevent a single class of privilege escalation from setuid usage within the namespace. You can still obtain root using race conditions, heap overflows, side-channels, etc. or by coordinating with something outside of the namespace like a typical chroot escape.
Here's an old (patched) example of escaping the sandbox:
> More than that, I'm interested if there is some broader consensus I'm missing on the shortcomings of namespaces.
The consensus is that they're not security features on Linux. I'm not sure who sold you on the idea that they were, because that was not handed down by the kernel devs.
If I was actually serious about the security of a system, I'd explicitly use virtualization at-minimum. If all I wanted was a mall cop and was happy to take the risk of misconfiguration or attacks on the kernel I'd probably say something that adds a lot of extra sanity to namespaces is "good enough", like bubblewrap. I would never trust it or say that it's secure though.
cgroups aren't relevant here, I think. Not sure if that was just a typo, since you did mention namespaces in the first sentence. PID and user namespaces in particular are relevant here.
Until when? Secrets in applications in many cases (I would probably wager majority of the cases) are only useful if they're in plaintext at some point, for example if you're constructing a HTTP client or authenticating to some other remote system.
As far as high-level language constructs go, there were similarish things like SecureString (in .NET) or GuardedString (in Java), although as best as I can tell they're relatively unused mostly because the ergonomics around them make them pretty annoying to use.
> Any good solutions for passing secrets around that don't involve environment variables or regular plain text files?
Plain text files are fine; it's the permissions on that file that are the problem.
The best way is to be in control of the source of the program you want to run; then you can make a change to always protect against leaking secrets by starting the program as a user that can read the secrets file. After startup, the program reads the whole file and immediately drops privileges by switching to a user that cannot read the secrets file.
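A shell approximation of that pattern, as a wrapper, using util-linux's setpriv (assuming the wrapper starts as root; the user name and paths are illustrative). The redirect is opened before privileges drop, so the unprivileged process simply inherits the already-open descriptor:

  # /etc/myapp/secret is mode 600, owned by root. The (root) shell opens
  # it onto stdin, then setpriv drops to the unprivileged user before
  # exec'ing the app, which reads the secret from stdin.
  exec setpriv --reuid=myapp --regid=myapp --init-groups \
      /opt/myapp/run </etc/myapp/secret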
> On Linux systems, any user process can inspect any other process of that same user for its environment variables. We can argue about the threat model, but, especially on a developer's system, there are A LOT of processes running as the same user as the developer.
This is a very good point I'd never realised! I'm not sure how you get around it, though: if that program can find a credential and decrypt a file while running as the user, then everything else running as that user can go find that credential as well.
I added a note to my original post about how 1Password's cli tool works. Access to the binary doesn't mean automatic access to any authorized session. I think that's a good start. In my case, I have it tied into my fingerprint reader, so if something wants access to a secret using `op`, I get a prompt to authorize. If I haven't just called `op` myself, I know it's a suspect request.
> Any good solutions for passing secrets around that don't involve environment variables or regular plain text files?
Honestly, my answer is still systemd-creds. It's easy to use and avoids the problems that plain environment variables have. It's a few years old by now and should be available on popular distros, although credential support for user-level systemd services was added just a few weeks ago.
A TL;DR example of systemd-creds for anyone reading this:
  # Run the initial setup.
  systemd-creds setup

  # This dir should have permissions set to 700 (rwx------).
  credstore_dir=/etc/credstore.encrypted
  # For user-level services:
  # credstore_dir="$HOME/.config/credstore.encrypted"

  # Set the secret.
  secret=$(systemd-ask-password -n)

  # Encrypt the secret.
  # For user-level services, add `--user --uid uidhere`.
  # A TPM2 chip is used for encryption by default if available.
  echo "$secret" | systemd-creds encrypt \
      --name mypw - "$credstore_dir/mypw.cred"
  chmod 600 "$credstore_dir/mypw.cred"
The process you start in the service will then be able to read the decrypted credential from the ephemeral file `$CREDENTIALS_DIR/mypw`. The environment variable is set automatically by systemd. You can also use the command `systemd-creds cat mypw` to get the value in a shell script.
At least systemd v250 is required for this. v258 for user-level service support.
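One step worth spelling out: the consuming unit has to actually load the credential. A minimal sketch (service name and binary path are illustrative):

  # /etc/systemd/system/myapp.service (fragment)
  [Service]
  # Decrypts the credstore file into $CREDENTIALS_DIR/mypw at startup,
  # visible only to this service.
  LoadCredentialEncrypted=mypw:/etc/credstore.encrypted/mypw.cred
  ExecStart=/opt/myapp/run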