Hot backup with smart updates

You’re set up for speed. You have all your many gigabytes of data in memory. You have your log files on an SSD, or maybe you’re using replication to stream your updates to a co-located backup. Your backup is on a separate power supply, but that doesn’t much matter because you have a UPS. Now you’re set for speed and reliability. Next, you partition your data set and remove dependencies so you can put each partition on a separate machine, each with its own backup. Woohoo! Speed, reliability and scalability!

Slow down, partner. What happens when your data center goes out? Huh?! Yeah, we’re talking about natural disasters here: loss of power or connectivity for hours, days or beyond. Maybe even destruction of the physical media. Now there’s a phrase that strikes fear into the hearts of system designers and administrators alike.

Okay, well, let’s look at our replication strategy again. We’ll have an offsite replication server or servers, and we’ll ship the replication traffic to them. Nice, but it can get expensive. That speed we were bragging about means we’re generating many megabytes of log data per second.

If we can relax the synchronicity requirements, we might consider hot backup over the network. That would mean running the db_hotbackup utility against local storage, followed by a network transfer of the backed-up environment to our remote server. With the basic way of doing hot backup, we transfer the whole database followed by the changed log files. If our databases are huge (and they often are), this is no improvement: even a daily transfer of terabytes of data adds a large expense. And it can be slow.
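As a minimal sketch of that two-step approach (hypothetical paths; assumes the db_hotbackup utility from the BDB distribution is on the PATH):

```shell
# Step 1: hot backup to local storage -- a full copy of databases plus logs.
db_hotbackup -h /srv/app/env -b /srv/app/backup

# Step 2: ship the entire backed-up environment to the remote server.
rsync -az /srv/app/backup/ backuphost:/srv/app/backup/
```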

The db_hotbackup utility does have that nifty -u option to update a hot backup. But that requires that all the log files since the last backup be present, and effectively shipped as part of the backup. Not really much better, transfer-wise, than replication.
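With -u, the invocation looks like this (same hypothetical paths). Locally it only copies the new log files, but every one of them still has to cross the wire:

```shell
# Update an existing hot backup: only log files written since the last
# backup are copied locally -- but the network transfer still ships them all.
db_hotbackup -h /srv/app/env -b /srv/app/backup -u
rsync -az /srv/app/backup/ backuphost:/srv/app/backup/
```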

Here’s another thought. What if we had a network-aware hot backup utility that worked a little like a smart rsync? That is, it compares databases on the local and remote machine, and copies just the changed blocks (like rsync’s --inplace option). After that’s done, copy all the log files written since the beginning of the sync.
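Done by hand, the idea might look like this sketch (hypothetical paths; it glosses over whether the database files can safely be read at that moment):

```shell
# Delta-sync the database file: only changed blocks cross the network,
# and --inplace rewrites the remote file rather than recreating it.
rsync -az --inplace /srv/app/env/mydb.db backuphost:/srv/app/backup/

# Then copy every log file written since the sync began.
rsync -az /srv/app/env/log.* backuphost:/srv/app/backup/
```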

The advantage is evident when I have a large database that gets a lot of updates, and many of the updates are localized. Think of a single record that may be updated 100 times between two backups. Then, instead of copying 100 log records pertaining to that record, I’m only copying the record itself.
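A quick back-of-envelope on that hot-record case, using illustrative numbers (the record sizes here are assumptions, not measurements):

```python
# One hot record updated many times between two backups.
updates_between_backups = 100
log_record_bytes = 512    # assumed average size of one log record
page_bytes = 4096         # assumed database page size

# Shipping logs means replaying every update over the wire...
log_shipping = updates_between_backups * log_record_bytes

# ...while a block-level delta sync ships the final page just once.
delta_sync = page_bytes

print(log_shipping, delta_sync)  # 51200 vs 4096: over 12x less data
```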

Back in the dark ages, when there was no hot backup utility, every BDB user wrote their own. Can we write our own db_netbackup that goes back to these roots, and uses the rsync idea? Maybe. I think db_hotbackup has some extra abilities to coordinate with other administrative functions, so that, for example, needed log files are not auto-removed during a hot backup. So we’d need to consider this when we’re messing with hot backup. Possibly the best approach would be to clone db_hotbackup, and have it call an external rsync process at the appropriate point to copy files.
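Such a db_netbackup might be little more than a wrapper script. Here is an untested sketch with hypothetical paths, using the stock db_checkpoint and db_archive utilities; note it deliberately ignores the coordination problem just mentioned, so a needed log file could be auto-removed mid-copy:

```shell
#!/bin/sh
# Hypothetical db_netbackup: delta-sync the databases, then copy the logs.
ENV=/srv/app/env
DEST=backuphost:/srv/app/backup

# Checkpoint first, so the database pages reflect recent transactions.
db_checkpoint -h "$ENV" -1

# Delta-sync each database file; --inplace copies only changed blocks.
for f in $(db_archive -h "$ENV" -s); do
    rsync -az --inplace "$ENV/$f" "$DEST/"
done

# Copy all the log files; running recovery on the remote side
# brings the copied set to a consistent state.
for f in $(db_archive -h "$ENV" -l); do
    rsync -az "$ENV/$f" "$DEST/"
done
```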

To my knowledge, nobody has done this. Am I the only one who sees the need?

That’s it. Speed, reliability and scalability, now with inexpensive disaster proofing.


About ddanderson

Berkeley DB, Java, C, C++, C# consultant and jazz trumpeter
This entry was posted in Berkeley DB.
