http://www.postgresql.org/docs/9.0/static/runtime-config-wal... See max_standby_archive_delay and max_standby_streaming_delay; setting them to -1 lets queries wait forever.
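On the standby, that looks something like this in postgresql.conf (just a sketch of the two settings mentioned above):

```
# postgresql.conf on the hot standby.
# -1 means queries are never cancelled to let WAL replay catch up;
# replay simply waits until the conflicting query finishes.
max_standby_archive_delay = -1
max_standby_streaming_delay = -1
```

The tradeoff is that a long-running query on the standby can stall replay indefinitely, so the standby falls behind the primary.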
Alternately, you can issue pg_start_backup('label'), back up the filesystem, then issue pg_stop_backup() and keep all the WAL logs from that window. That gets you a base backup equivalent to the slave's.
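The sequence is roughly this (a 9.0-era sketch; the label and the copy step are placeholders for whatever you actually use):

```sql
-- Tell the server a base backup is starting; it does a checkpoint
-- and writes a backup label file.
SELECT pg_start_backup('nightly');

-- ...now copy the data directory with tar/rsync/your filesystem
-- snapshot tool, excluding pg_xlog...

-- Mark the backup finished; WAL from between these two calls
-- (plus anything after) is what you must retain to restore.
SELECT pg_stop_backup();
```

The filesystem copy doesn't have to be atomic — any WAL written while it runs gets replayed on restore, which is why you keep the logs from that window.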
What I'm doing is this:
I've got a primary/hot-spare pair, plus a tertiary db on lesser hardware that serves as my second copy for when one of the main machines is down or I have to rebuild the secondary from the primary.
The tertiary db ships its logs to s3 after encrypting them with gpg. Every $timeframe I take a base backup and throw that up as well; I keep a couple and delete the older ones. Every few months I test a restore on ec2. There's a balance to strike between how many WAL logs you have to keep, how long a restore takes, and how often you take base backups.
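The shipping step can be wired into archive_command; this is only a sketch — the recipient key, bucket name, and s3cmd tool are stand-ins for whatever you actually use:

```
# postgresql.conf sketch: encrypt each completed WAL segment with gpg,
# then push it to S3. %p is the segment's path, %f its file name.
archive_mode = on
archive_command = 'gpg -r backups@example.com -e -o /tmp/%f.gpg %p && s3cmd put /tmp/%f.gpg s3://wal-bucket/%f.gpg && rm /tmp/%f.gpg'
```

If the command exits nonzero, postgres keeps the segment and retries, so a transient s3 outage doesn't lose WAL — but the segments pile up locally until shipping succeeds again.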
[edit - parameter names. Further edit - strategy.]