
This works around a real problem I'm surprised nobody has solved - the ability to register a bunch of keys at the same time.

The webauthn spec has public/private keys incorporated: https://www.w3.org/TR/webauthn-2/#sctn-sample-registration

There should be no risk in storing all your public keys in e.g. 1password. When signing up for e.g. facebook.com, you should be able to hit a button and have all your keys registered at the same time: send $site all your public keys up front, then sign auth challenges as you log in. Of course, the UX would be handled by webauthn, so you'd really just be tapping your yubikey or scanning your fingerprint on login.

Ideally, password managers would offer key servers that websites could hit in real-time to pull your public keys. That's probably a stretch - maybe websites could sync your 2fa pub keys in the background.
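To make the key-server idea concrete, here's the shape of the lookup I'm imagining. Purely hypothetical - no password manager exposes such an endpoint today, and the URL and response format below are made up:

    # Hypothetical: a site fetches your registered public keys at signup.
    # Neither this endpoint nor the response format exists anywhere today.
    curl -s https://keys.example-password-manager.com/v1/users/alice/webauthn-keys
    # -> imagined response: a JSON list of public keys + credential IDs, one per device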

With such a model, the following would be pretty trivial:

1. Having multiple yubikeys

2. Having multiple team members with access (same as 1, effectively)

3. Revocation of individual 2fa devices

4. Adding 2fa devices after account creation

I assume there's something basic in the webauthn protocol that I'm overlooking that prevents such a model. What is it, and why can't we have these properties?

I for one don't want all my accounts to hinge on access to a single physical device, and I certainly don't want to register 10 yubikeys with every service (some of which I may not even have physical access to on a day-to-day basis).



GCP supports remotely loading public ssh keys onto a box. They do this using the metadata endpoint - this is (in theory) a trusted API endpoint available to instances at 169.254.169.254. IAM actually uses this - when you call other services, client libs reach out to the metadata endpoint and get IAM creds to send with each request.

Anyway, they have a local process that polls the metadata endpoint and adds authorized keys on the host. So you can e.g. upload your public key in the web UI; the metadata endpoint will serve it up on your instance, and the guest agent will poll it and add your key to the authorized_keys file.
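Roughly, the agent's loop boils down to something like this (a simplified sketch - the real guest agent also handles key expiry, per-user files, and user creation; the path shown is the project-level ssh-keys attribute):

    while true; do
      # The metadata server only answers requests carrying this header
      curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/project/attributes/ssh-keys" |
      while IFS=: read -r user key; do
        # Each line is "<username>:<ssh public key>"
        home=$(getent passwd "$user" | cut -d: -f6) || continue
        mkdir -p "$home/.ssh"
        grep -qF "$key" "$home/.ssh/authorized_keys" 2>/dev/null ||
          echo "$key" >> "$home/.ssh/authorized_keys"
      done
      sleep 60
    done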

These folks spoofed a response from the metadata endpoint. They used https://github.com/kpcyrd/rshijack to inject their own hand-crafted public key, which the guest agent happily added to authorized_keys (and created the wouter user).

They then ssh'd using their key:

> ssh -i id_rsa -o StrictHostKeyChecking=no wouter@localhost

> Once we accomplished that, we had full access to the host VM (Being able to execute commands as root through sudo).

Looks like they had passwordless sudo as well.


What's a mitigation for the spoofed packet? TLS or something of the sort?


If the docker container had been running with something other than --net=host, this could have been avoided easily with standard networking concepts (route tables with reverse path filtering, or iptables rules); even if the attacker somehow managed to get CAP_NET_ADMIN in the container, the host network namespace would still refuse the packets. And even with --net=host, you could add iptables rules that match on the container's cgroup and limit the IPs/ports it's allowed to touch. It'd also be possible to filter the container's syscalls with seccomp.

I'm not entirely sure why the container had CAP_NET_ADMIN at all, which is required for tcpdump and the man-in-the-middle. Using user namespaces would also have limited the attacker's abilities even if they had root in the container. A lot of defense-in-depth techniques are possible here.
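To make a couple of those concrete (sketches only - the cgroup match assumes cgroup v2 with a recent iptables, and the scope name is a placeholder for the container's actual cgroup path):

    # Don't use --net=host, and drop the capabilities the attack needed
    # even inside the container's own netns:
    docker run --rm --cap-drop NET_ADMIN --cap-drop NET_RAW someimage

    # If --net=host is unavoidable, drop anything from the container's
    # cgroup that claims to be the metadata server:
    iptables -A OUTPUT -m cgroup --path system.slice/docker-<id>.scope \
      -s 169.254.169.254 -j DROP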

There's also: simply not leaving the gcp login backdoor open. We run our gcp instances similar to ec2: on first boot we take the ssh keys from the metadata service and lay them down, and we do not run the GCP agent; after that, standard config management + ldap handle login. This means that a hacker gaining access to your GCP credentials can't trivially gain a shell on an existing instance.


Well, for one, not giving the container access to eth0 on the host. Ideally the container would be configured with its own network namespace - the portion of the article that mentions host network mode is talking about this. Because of how it was configured, eth0 in the container wasn't limited to seeing its own traffic; it could sniff and spoof traffic directly on the host's interface.

But yeah, it seems strange to me that the metadata endpoint isn't secured via TLS. I guess they figured they had sufficiently prevented any kind of MitM attack (but obviously not in this case) so it was unnecessary?


My main FUD with alpaca.markets is the ability to transfer assets in/out. It doesn't look like they offer ACATS transfers (although there's not much info on this!).

Using them for anything beyond entertainment budget is very scary - you may incur substantial tax liability if e.g. they ever go out of business or you choose to use a different brokerage.

Any insight here? Am I missing something? Is there any way to transfer holdings out without incurring the cap gains?


As far as I know, the "actual" backend data is standardized, i.e. same instrument names, dates, etc. Adding to that, for anyone playing in the US markets, the SEC and FINRA stipulate that holdings must be able to move between institutions in a standardized, regulated way.

Long story short, they all have to implement a system to move assets out and in (well, they all want your assets coming in, so that part is obvious).


Hey!

I built this because I had 10+ bank accounts, and they were annoying to monitor. I'd log in every month and have to chase down why they didn't look as I expected them to.

I've set up Lambda functions to e.g. post bills to Splitwise, email me when my balances go above/below thresholds, and pipe all transactions into Google sheets. I've been running my own personal finances off it for a while and it's greatly improved my quality of banking life.

Would love feedback :)


I'm hacking on https://bankhooks.com

The premise is simple - define arbitrary conditions on transactions or balances, and get webhooks or emails when those conditions fire on any of your bank accounts.

I have >10 bank accounts and was having trouble monitoring them. I use this to alert me of any activity, on any of my accounts, that is unexpected. I no longer need to log into banks. I also use it to alert me if my balance gets too low or too high in any account.

I've also deployed Lambda functions to e.g. post my utilities bill to Splitwise to automatically split with my roommates, or pipe all transactions into a Google sheet so I can analyze my spending over time.
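For a flavor of what a delivery looks like - this payload is illustrative only, not the actual schema:

    # Simulating a webhook delivery to a consumer endpoint
    # (field names here are made up for illustration):
    curl -X POST https://example.com/hooks/bank-alert \
      -H "Content-Type: application/json" \
      -d '{"account": "checking-1234",
           "condition": "balance_below",
           "threshold": 500,
           "observed_balance": 412.18}'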

It started out as a simple hobby project but has grown into an immensely useful tool in my day-to-day life.


Very interesting. But is it possible to charge money without having created a company? I'm missing the /about page and other essential information that would make me trust you more.


Coming soon! I appreciate the feedback.

It uses plaid under the hood, so I get read-only access to bank data. You can validate this by inspecting network requests - the frontend just grabs a public token from plaid and sends it to my backend, so I never see the bank creds. Whether you trust plaid is up to you.
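For the curious, the server side of that flow is Plaid's standard token exchange - roughly this (sandbox URL; credentials and token are placeholders):

    # Exchange the short-lived public token for an access token,
    # server-side, via Plaid's /item/public_token/exchange endpoint:
    curl -X POST https://sandbox.plaid.com/item/public_token/exchange \
      -H "Content-Type: application/json" \
      -d '{"client_id": "CLIENT_ID",
           "secret": "SECRET",
           "public_token": "public-sandbox-..."}'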


This feels like a first-world problem


It is, but a) I live in the first world, and b) it is a real problem. Godspeed to the grandparent.


This is the exact class of problem that docker itself attempts to avoid, which is why I run docker-compose inside a docker container - I can control exactly what it has access to and isolate it. There's a guide to do so here[1]. It has the added benefit of not requiring users to install docker-compose itself - the only project requirement remains docker.

1: https://cloud.google.com/community/tutorials/docker-compose-...
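The gist of the approach is roughly this (using the docker/compose image from Docker Hub; the project dir is mounted at the same path on both sides so volume paths in the compose file resolve correctly on the host daemon):

    # Run compose itself as a container, talking to the host daemon
    # through the mounted socket:
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$PWD:$PWD" -w "$PWD" \
      docker/compose:1.29.2 up -d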


> docker-compose inside a docker container

Do you use Docker-in-Docker or do you mount the docker socket inside your docker-compose container?

Oh dear god... it's Docker all the way down.


Mount the docker socket. There are some quirks with storage volume paths, and there are security implications, but it was not super hard to get working.

I'd love to go straight to containerd or even basic linux containers but I'm not willing to run kubernetes on my personal machine and haven't found any ergonomic enough ways to run containers.


Check out https://podman.io?

Like docker (uses OCI images) but daemonless.
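The CLI is intentionally docker-compatible, so most muscle memory carries over:

    # podman aims for drop-in docker CLI compatibility,
    # and runs rootless with no daemon:
    alias docker=podman
    podman run --rm -it alpine:3 sh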


I thought it didn't even support compose-like functionality? Or did they add that now?


Docker-compose is an add-on script that only automates how containers are launched/shut down.



I'm more than willing to run K8S on my personal machine in the form of microk8s, k3s, minikube, and similar cut-down versions of k8s.


The others are probably fine, but for anyone thinking about this, minikube uses 50% CPU even on powerful machines for no reason [0]. I switched to kind and it works perfectly, super lightweight.

[0] https://github.com/kubernetes/minikube/issues/3207
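Getting a cluster up with kind is about two commands:

    # kind runs each "node" as a Docker container:
    kind create cluster --name dev
    kubectl cluster-info --context kind-dev
    # tear down:
    kind delete cluster --name dev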


I have used minikube before. Back when it was using localkube, it was ok. It isn't as lightweight now (it uses kubeadm to bring up the full suite).

microk8s might not give as much of a gain.

k3s from Rancher actually cuts out a lot of code and, from what I hear, can run fine on Raspberry Pis.

I have not heard of "kind", but neat.


Using the KVM driver reduces minikube CPU waste a lot, though it's only supported on Linux...
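e.g. (the flag spelling has changed across minikube versions; kvm2 is the Linux-only driver):

    # Older minikube versions:
    minikube start --vm-driver=kvm2
    # Newer versions:
    minikube start --driver=kvm2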


Do you have any article links about your kind setup?


I don't know about the other guy's setup, but here's the github repo: https://github.com/kubernetes-sigs/kind

KIND - Kubernetes In Docker


1) You can run your own pull-through cache[0] (sketch below)

2) You can use a different registry

3) Run something like kraken[1] so machines can share already-downloaded images with each other

4) If you need an emergency response, you can docker save[2] an image on a box that has it cached and manually distribute it/load it into other boxes (also sketched below)

0: https://docs.docker.com/registry/recipes/mirror/

1: https://github.com/uber/kraken

2: https://docs.docker.com/engine/reference/commandline/save/
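To make (1) and (4) concrete - minimal sketches based on the linked docs:

    # (1) Pull-through cache: run a registry in proxy mode...
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      --name registry-mirror registry:2
    # ...and point the daemon at it in /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }

    # (4) Emergency sneakernet: export from a box that has the image...
    docker save myimage:v1 | gzip > myimage.tar.gz
    # ...and load it on the boxes that don't:
    gunzip -c myimage.tar.gz | docker load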


Great response here.

I'd also add as an option - https://goharbor.io


We've actually been working on something similar. Our goal is to make sending a letter in the USA as dead simple as possible. We'd love any feedback people have!

https://papermail.in

Feel free to drop us a note at support@papermail.in if you have thoughts :)


If targeting US customers I'd avoid using a foreign TLD. If you're targeting Indian customers wanting to send to US addresses it should be fine.


I agree; I wouldn't even think to click that unless I wanted to send a letter in India.


You should have a text box that I can type into; I don't want to always have to prepare a PDF.


Yes! That'll probably be our next feature! Thanks for checking it out


Should probably get a privacy policy.


Can someone explain why a government couldn't block the IP of this service? Whether it's a VPN or just DNS over HTTPS, it seems the servers wouldn't have infinite dynamic IPs and could therefore be blocked.


Yes, you're right. That app is just DNS over HTTPS for older Android phones, with all its caveats.


Yep, that's how countries like Egypt block things - lovely reset packets! But activists pretend that doesn't happen. Sigh.

