Thanks! Does WAL-G provide some kind of "continuous backup" where changes committed to the database are continuously streamed to the backup storage? Or does it work "step by step", for example by backing up every 5 minutes or every 10 MB?
Both back up PG's WAL files (Write-Ahead Log) and allow restoring your database state as it was at a specific time or right after a specific transaction committed. This is known as point-in-time recovery (PITR) [0].
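As a rough sketch of how the restore side of PITR is wired up (paths and the target time are placeholders; `wal-g wal-fetch` is WAL-G's WAL restore command):

```ini
# Hypothetical restore settings for point-in-time recovery.
# On PostgreSQL 12+ these go in postgresql.conf (plus a recovery.signal
# file); on older versions they go in recovery.conf.
restore_command = 'wal-g wal-fetch %f %p'
recovery_target_time = '2017-08-15 12:00:00'   # just before the mistake
```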
Users and admins make mistakes and accidentally delete or overwrite data. With PITR you can restore into a new environment to just before the mistake occurred and recover the data from there.
What I meant is that archive_command is run only when a WAL segment is completed or when archive_timeout is reached. In the meantime, nothing is backed up. On a low-traffic database this can be a problem. I'm wondering if there is a way to continuously stream the WAL to object storage like S3, without waiting for a complete segment.
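For reference, the archiving side being discussed looks roughly like this (a minimal sketch with illustrative values; `wal-g wal-push` is WAL-G's archive command):

```ini
# Minimal WAL archiving sketch for postgresql.conf.
archive_mode = on
archive_command = 'wal-g wal-push %p'  # invoked per completed segment
archive_timeout = 60                   # force a segment switch at most every 60s
```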
You can open a multi-part transfer and close it out when you're ready, which gets you very close to streaming; for this case it's perhaps close enough to try with WAL-G, if it otherwise supports it.
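To illustrate the buffering idea (not WAL-G's actual code): S3 multi-part uploads require every part except the last to be at least 5 MiB, so a streaming writer accumulates bytes and flushes a part each time that threshold is reached. `upload_part` here is a hypothetical stand-in for a real client call such as boto3's `upload_part`.

```python
# Sketch of chunking a byte stream into S3 multi-part upload parts.
MIN_PART = 5 * 1024 * 1024  # S3 minimum size for all parts except the last

def stream_to_parts(chunks, upload_part):
    """Accumulate streamed chunks; flush a part whenever MIN_PART is buffered.

    upload_part(part_number, data) is a placeholder for the real client call.
    """
    buf = bytearray()
    part_number = 1
    for chunk in chunks:
        buf.extend(chunk)
        while len(buf) >= MIN_PART:
            upload_part(part_number, bytes(buf[:MIN_PART]))
            del buf[:MIN_PART]
            part_number += 1
    if buf:  # the final part may be smaller than MIN_PART
        upload_part(part_number, bytes(buf))
```

The point is that the transfer can stay open while bytes trickle in, then be completed (or aborted) whenever you're ready.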
That's the use case for archive_timeout. I set it to 60 seconds, so at most I'll lose 60s plus the time to transfer the file to S3, which shouldn't be more than a couple of seconds.
According to PostgreSQL documentation, "archived files that are archived early due to a forced switch are still the same length as completely full files".
I'm afraid to use a lot of storage for WAL segments that are mostly empty:
16 MB per segment × 60 segments/hour × 24 hours × 7 days ≈ 161 GB/week
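The arithmetic spelled out (assuming the padding behaviour quoted from the docs, where an early-switched segment still occupies the full 16 MB):

```python
SEGMENT_MB = 16          # default WAL segment size
SEGMENTS_PER_HOUR = 60   # archive_timeout = 60s => up to one forced switch per minute

weekly_mb = SEGMENT_MB * SEGMENTS_PER_HOUR * 24 * 7
print(f"~{weekly_mb / 1000:.0f} GB/week")  # ~161 GB/week
```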
Seriously awesome work on this! I was expecting some solid improvement when I heard you were rewriting this in Go, but this is beyond what I could have expected. A 7x improvement on high-end instance types!
Also, what an impressive project to have on the resume as a college intern. I don't think many interns get to tackle something so meaningful.
Thanks for making this! To someone who's unfamiliar with Postgres tooling, what's the difference between WAL-G and Barman? What're the advantages of using one over the other?
In summary, WAL-E is a simpler program all around that focuses on cloud storage; Barman does more around inventories of backups, file-based backups, and configuring Postgres, but there are integrative downsides to its wider scope. WAL-E also happens to predate Barman.
WAL-G (and WAL-E) are expected to run next to the main database, while Barman is meant to run on a separate machine. Barman can also back up many databases. It is essentially the difference between a central backup service and local backups.
We're currently testing it at Citus, but we haven't flipped it live for our disaster recovery yet.
We're going to start rolling it out for forks/point-in-time recoveries first, which present less risk. Later we'll explore either parallel restores from WAL-E and WAL-G, or possibly just flip the switch based on the results.
On restoration there's really no risk to data. Further, we page our on-call for any issues that occur, such as WAL not progressing or servers not coming online out of a restore.
WAL-G is not yet production ready, but it has been used in a staging environment for the past few weeks without any issues. Once fdr adds parallel WAL support, he plans to take it into production.
Neat. My concern out of the gate is what the perf hit would be.
I assume I'd be switching from WAL-E to WAL-G for more perf. But WAL-E speaks GCS; if WAL-G needs an extra hop to do so, that may lose some of the point of it.
Yeah, no idea personally. Haven't used the gateway functionality in Minio at all.
That being said, the Minio team seems pretty good at writing performance-optimised code. Frank Wessels (on the Minio team) has been writing articles about the Go assembler and other Go optimisation topics recently, e.g.:
There was some mention of resumable uploads in the blog post, which sadly each provider handles differently (that is, the GCS layer that supports the S3 API does not accept resumable uploads).
Disclosure: I work on Google Cloud (so I'd love to see this tool point at GCS).
WAL-G has a number of unit tests and has been tested manually in a staging environment for a number of weeks without issues. We are looking to implement more integration tests in the future.