I recently deployed Postgres on a dedicated Hetzner EX-44 server (20 cores, 64GB RAM, 2x 512GB NVMe SSDs in RAID 1) for €39/month. The price-to-performance ratio is exceptional, providing enterprise-level capacity at a fraction of typical cloud costs.
For security, I implemented Tailscale, which adds only ~5ms of latency while completely eliminating public network exposure - a worthwhile tradeoff for the significant security benefits.
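As a rough sketch of what that looks like (the tailnet address below is a placeholder, not my actual setup):

```sh
# join the tailnet (assumes the Tailscale daemon is already installed)
tailscale up

# then bind Postgres to loopback plus the node's Tailscale address only,
# in postgresql.conf (100.x.y.z is a placeholder tailnet IP):
#   listen_addresses = 'localhost, 100.x.y.z'
```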
My optimization approach includes:
- Workload-specific configuration generated via PGTune (https://pgtune.leopard.in.ua/)
- Real-time performance monitoring with PgHero for identifying bottlenecks
- Automated VACUUM ANALYZE operations scheduled via pg_cron targeting write-heavy tables, which prevents performance degradation and helps me sleep soundly
- A custom CLI utility I built for ZSTD-compressed backups that achieves impressive compression ratios while maintaining high throughput, with automatic S3 uploading: https://github.com/overflowy/pgbackup
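For anyone curious, the pg_cron scheduling is roughly this (the table name and schedule are placeholders):

```sql
-- pg_cron must be listed in shared_preload_libraries first
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- run VACUUM ANALYZE on a write-heavy table every night at 03:00
SELECT cron.schedule('nightly-vacuum', '0 3 * * *',
                     'VACUUM ANALYZE my_write_heavy_table');
```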
This setup has been remarkably stable and performant, handling our workloads with substantial headroom for growth.
If I were you, I would absolutely use another backup utility (in addition to yours, if you want): barman, pgbackrest, etc.
You are just wrapping pg_dump, which is not a full-featured backup solution. Great for a snapshot...
Use some of the existing tools and you get point-in-time recovery, easy restores to hot standbys for replication, a good failover story, backup rotations, etc.
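For context, the point-in-time recovery those tools give you is built on continuous WAL archiving, which a pg_dump snapshot never captures. Roughly these settings are involved (the archive destination is a placeholder, and the naive cp is exactly what real tools replace with something robust):

```
# postgresql.conf (sketch)
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # placeholder; pgbackrest/barman supply their own command
```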
The reason I wrote my own tool is that I couldn't find anything for Pg17 at the time, and pgbackrest seemed overkill for my needs. The CLI also handles backup rotations. Barman looks interesting though, I'll definitely have a look, thanks!
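For reference, the rotation logic is nothing fancy - conceptually it's along these lines (a sketch, not the actual tool's code; the path and retention count are placeholders):

```sh
#!/bin/sh
# Keep only the N newest *.zst backups in a directory; delete the rest.
rotate_backups() {
    dir="$1"; keep="$2"
    # ls -1t lists newest first; tail -n +(keep+1) selects the surplus files
    ls -1t "$dir"/*.zst 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}

rotate_backups /var/backups/pg 7   # placeholder path and retention count
```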
pgbackrest was always easy to use in my experience. Not very hard to set up or configure, and low overhead. Supports spool directories for WAL shipping, compression, and block incremental backups (YAY!!!). I ran my last company on it for the ~6 years I was there. Never any complaints - solid software (which is what you want for backups).
I have been using barman indirectly through CloudNativePG at my latest company, but don't have the operational experience to speak on it yet.
pgbackrest only looks scary because it’s so flexible, but the defaults work great in almost all cases. The most complex thing you’ll need to do is create a storage bucket to write to and configure the appropriate storage provider in pgbackrest's config file.
When it’s set up properly, it’s solid as a rock. I’d really recommend you check it out again; it likely solves everything you built, more elegantly, and also covers a ton of things you didn’t think of. Been there, done that :)
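To give a feel for it, a minimal S3-backed config is on the order of this (bucket, endpoint, paths, and stanza name are all placeholders):

```ini
# /etc/pgbackrest/pgbackrest.conf
[global]
repo1-type=s3
repo1-s3-bucket=my-pg-backups
repo1-s3-endpoint=s3.eu-central-1.amazonaws.com
repo1-s3-region=eu-central-1
repo1-retention-full=2
# credentials can go here too (repo1-s3-key / repo1-s3-key-secret)
# or come from the environment

[main]
pg1-path=/var/lib/postgresql/17/main
```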
You still need it. There are tools included with Postgres that you can cobble together into a backup solution, but it will be lacking in features and edge-case testing. I'd much rather just use the right tool for the job.
For example, pgBackRest solves real problems with features like:
- block-level incremental backups that drastically reduce storage and transfer times for many workloads
- automated backup retention policies
- multiple repository support for offsite backup redundancy
- encryption support
- point-in-time recovery for granular restoration
- tooling to build standby servers very quickly and efficiently
These features handle edge cases and reduce operational overhead compared to managing scripts around pg_basebackup and WAL archiving yourself. In many environments, some of those features (e.g. encryption) are outright required.
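As a sketch of the day-to-day surface area (stanza name and timestamp are placeholders):

```sh
pgbackrest --stanza=main stanza-create          # one-time initialization
pgbackrest --stanza=main --type=full backup     # full backup
pgbackrest --stanza=main --type=incr backup     # incremental backup
pgbackrest --stanza=main --delta --type=time \
  --target="2025-01-15 10:00:00+00" restore     # point-in-time restore
```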
Seconded. This corner of the programming world is a deep, dark, and scary place for people who haven't had solid industry experience. It'd be hugely helpful to have a barebones starting point to begin learning best practices.
I have one; I use it for running an Arma 3 server, and it’s been bulletproof for reliability. Hetzner have come a long way since the first time I used them, years and years ago, when it was a mess.
I’m almost at the point where I’d use them for something that mattered (i.e. where money was involved).