
Build and design for multi tenancy all the way down to your schema.
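
For concreteness, here's a minimal sketch of what that can look like in Postgres (assuming psycopg2; table and column names are just illustrative):

    # A minimal sketch of "multi-tenancy down to the schema" in Postgres;
    # table and column names here are illustrative, not prescriptive.
    import psycopg2

    SCHEMA = """
    CREATE TABLE tenants (
        id    uuid PRIMARY KEY,
        name  text NOT NULL
    );

    -- Every domain table carries a tenant_id so a row is never ambiguous
    -- about which customer it belongs to.
    CREATE TABLE projects (
        id         uuid PRIMARY KEY,
        tenant_id  uuid NOT NULL REFERENCES tenants (id),
        name       text NOT NULL
    );

    -- An index on tenant_id keeps per-tenant queries and exports cheap.
    CREATE INDEX projects_tenant_id_idx ON projects (tenant_id);
    """

    def apply_schema(dsn: str) -> None:
        """Apply the tenant-scoped schema to a fresh database."""
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(SCHEMA)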

Keep identity and login mechanisms decoupled - plan to support multiple login mechanisms per user (email/password, SAML, OpenID Connect, Google) for a single identity and multiple authentication factors (TOTP, Duo, etc.). Be very careful about what you consider a verified user and how you verify email addresses.
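
A rough, illustrative data model for that separation (the names and fields are assumptions, not a prescription): the point is that a user (identity) owns many login methods and many second factors, rather than a user *being* an email/password pair.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional

    class LoginKind(Enum):
        PASSWORD = "password"
        SAML = "saml"
        OIDC = "oidc"   # covers Google and other OpenID Connect providers

    class FactorKind(Enum):
        TOTP = "totp"
        DUO = "duo"

    @dataclass
    class LoginMethod:
        kind: LoginKind
        subject: str                       # e.g. email, SAML NameID, OIDC sub claim
        secret_hash: Optional[str] = None  # only used for PASSWORD

    @dataclass
    class SecondFactor:
        kind: FactorKind
        enrolled: bool = False

    @dataclass
    class User:
        id: str
        email: str
        email_verified: bool = False       # track verification explicitly
        logins: list[LoginMethod] = field(default_factory=list)
        factors: list[SecondFactor] = field(default_factory=list)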

Use TLS even for your database connections. Use encryption at rest. Automate backups and plan to restore or export data for specific customers rather than the whole application.
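
A sketch of the first and last points, assuming Postgres and psycopg2 (the DSN, table, and paths are placeholders): TLS on the database connection itself, plus a per-tenant export instead of a whole-database dump.

    import csv
    import psycopg2

    # sslmode=verify-full makes the client verify the server certificate,
    # so the database connection rides over TLS. DSN values are placeholders.
    DSN = ("dbname=app user=app host=db.internal "
           "sslmode=verify-full sslrootcert=/etc/ssl/db-ca.pem")

    def export_tenant(tenant_id: str, path: str) -> None:
        """Dump one tenant's rows to CSV - per-customer export rather than
        a whole-database dump. Table and column names are illustrative."""
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute(
                "SELECT id, name FROM projects WHERE tenant_id = %s",
                (tenant_id,),
            )
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow([col.name for col in cur.description])
                writer.writerows(cur)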

Use a time series database or event logging system and create an audit trail of everything any privileged user does in your system: any account or permission changes, destructive operations, etc.
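
Even an append-only table is a workable starting point. A sketch with illustrative names, assuming Postgres/psycopg2:

    import psycopg2
    from psycopg2.extras import Json

    # One row per privileged action, permission change, or destructive
    # operation. Table and column names are illustrative.
    AUDIT_DDL = """
    CREATE TABLE IF NOT EXISTS audit_log (
        id         bigserial PRIMARY KEY,
        at         timestamptz NOT NULL DEFAULT now(),
        tenant_id  uuid,
        actor_id   uuid NOT NULL,
        action     text NOT NULL,   -- e.g. 'user.role_changed', 'project.deleted'
        detail     jsonb NOT NULL
    );
    """

    def record(conn, tenant_id, actor_id, action, detail) -> None:
        """Append one audit event; never update or delete rows here."""
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO audit_log (tenant_id, actor_id, action, detail) "
                "VALUES (%s, %s, %s, %s)",
                (tenant_id, actor_id, action, Json(detail)),
            )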



As an alternative, if you containerize everything in the stack, you can simply spin up a separate, isolated stack of containers for each customer. Then it's also trivial if they want it 'on premise' somewhere or 'in the cloud'. No need to add complexity at the schema level and make a monolith support multiple tenants.
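
For example (just a sketch; file names and project names are placeholders), the same compose file can be deployed once per customer under its own project name, so containers, networks, and volumes never collide:

    import subprocess

    def deploy_customer(customer: str) -> None:
        """Bring up one isolated stack per customer from a shared compose file."""
        subprocess.run(
            [
                "docker", "compose",
                "-p", f"app-{customer}",                     # isolated project per customer
                "--env-file", f"customers/{customer}.env",   # per-customer config/secrets
                "-f", "docker-compose.yml",
                "up", "-d",
            ],
            check=True,
        )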

Also, this list is quite literally supabase (https://supabase.com/) - I cannot recommend enough, especially if OP is solo, which it sounds like is the case.


You can do this with containers, or, if the price point is right, with VMs for even more isolation.

There are levels of multi-tenancy:

* logical multi-tenancy, where isolation is enforced in code and the database (every table has a 'tenant id' key) - see the sketch after this list

* container level multi-tenancy, where you run separate containers and possibly in different namespaces

* virtual machine multi-tenancy, where there are different VMs for each tenant and you can use network isolation as well (NACLs, security groups)

* hardware isolation, similar to virtual machine, but you use separate hardware. Hard to scale this with software, though using something like Equinix metal might work: https://www.equinix.com/products/digital-infrastructure-serv...

These each have different tradeoffs in upgradeability, operations cost and isolation.
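
For the first (logical) tier, the database itself can enforce the tenant boundary, for example with Postgres row-level security, so a forgotten WHERE clause can't leak another tenant's rows. A sketch with illustrative names:

    import psycopg2

    RLS_DDL = """
    ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

    -- Only rows matching the tenant pinned on the connection are visible.
    -- Note: connect as a non-owner role (or use FORCE ROW LEVEL SECURITY)
    -- so the policy actually applies.
    CREATE POLICY tenant_isolation ON projects
        USING (tenant_id = current_setting('app.tenant_id')::uuid);
    """

    def run_as_tenant(conn, tenant_id: str, sql: str, params=()):
        """Pin the connection to one tenant before running a query."""
        with conn.cursor() as cur:
            # is_local=true scopes the setting to the current transaction
            cur.execute("SELECT set_config('app.tenant_id', %s, true)", (tenant_id,))
            cur.execute(sql, params)
            return cur.fetchall()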


I ended up expanding this comment on my blog: https://www.mooreds.com/wordpress/archives/3578


I like this idea a lot, personally. Separately deployed customer instances as an approach to “multi-tenancy” also eliminate the need for sharding databases at scale, since most of your scaling will just be new deployments. Overall, this suggestion has an appealing set of trade-offs if you have the DevOps chops to pull it off.

Just watch out for really really big customer instances (but then they should be paying more than enough to spend time on their particular scaling issues).


I realized after commenting that if it is specifically a product where you actually need/want these customers' data to "talk" to each other, then your architecture might actually be a bit simpler - you can expose what you need between all the tenants. I can't speak to this approach much more, though, since I haven't done a project like that yet.


This is absolutely the worst idea possible. I would never base my business on a technology that’s not widely used.


While supabase could be considered a "technology", it's really more or less a set of fancy wrappers around a Postgres database. And even then, the wrappers it uses are rather well known:

- gotrue for auth, 3.2K GitHub stars

- postgREST to expose Postgres as a REST API, 19.8K GitHub stars

- kong as API gateway, 33.7K GitHub stars
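
To make the postgREST piece concrete: any table or view is already a filterable REST endpoint. A rough sketch (the URL, JWT, table, and tenant id are placeholders):

    import requests

    BASE = "https://api.example.com"    # PostgREST, behind the Kong gateway
    JWT = "<service-or-user-jwt>"

    resp = requests.get(
        f"{BASE}/projects",
        # PostgREST filter syntax: column=eq.value, plus column selection
        params={"select": "id,name", "tenant_id": "eq.<tenant-uuid>"},
        headers={"Authorization": f"Bearer {JWT}"},
    )
    resp.raise_for_status()
    print(resp.json())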


And that still adds more complexity than just installing Postgres and creating a monolithic API


There are additional requirements about exporting logs in real time (Splunk, etc), and sharing data via mechanisms like S3, if your service needs to sell to highly regulated industries. You’ll need third party audit findings as well.
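
Both of those tend to be plumbing rather than rocket science. A rough sketch, assuming a Splunk HTTP Event Collector endpoint and a customer-controlled S3 bucket (the URLs, tokens, and bucket names are placeholders):

    import boto3
    import requests

    def ship_to_splunk(event: dict) -> None:
        """Push one log/audit event to a customer's Splunk HEC endpoint."""
        requests.post(
            "https://splunk.customer.example:8088/services/collector/event",
            headers={"Authorization": "Splunk <hec-token>"},
            json={"event": event, "sourcetype": "_json"},
            timeout=5,
        ).raise_for_status()

    def share_via_s3(local_path: str, key: str) -> None:
        """Drop an export file into a bucket the customer controls."""
        s3 = boto3.client("s3")
        s3.upload_file(local_path, "customer-export-bucket", key)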



