Wednesday, February 5, 2014

Instant MySQL Restores (with just this script and a ton of disk)

I don’t think that I need to sell you on the concept of backing up your data. But backup strategies and coverage vary widely from organization to organization. This blog post details Polyvore's approach to MySQL backups.

We see 3 basic uses for database backups.

  • Recover from hardware failure
  • Recover from human or application error
  • Create a new replica or slave

Each of these use cases provides a different set of challenges, so we want a comprehensive policy that covers each use case.

Recover from hardware failure

So a mouse crawled into your server and got stuck in the main cooling fan in the middle of the night, frying the server 2 minutes later. I’ll ignore the obvious question of what you were doing running your datacenter in a barn, because we want to get your site back up and running.

Like everybody else, we use standard MySQL replication to protect ourselves from physical hardware failures. If it was a slave that died, no problem. We just take it out of rotation and provision a new one at our convenience. If it was a master that died, there are slaves waiting in the wings to take over.
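
For readers who have never had to do it, a by-hand promotion with classic MySQL replication looks roughly like the sketch below. Hostnames, credentials, and binlog coordinates are all placeholders, and the exact steps depend on your topology and version:

    # hypothetical failover sketch: promote slave1, then repoint slave2
    mysql -h slave1 -e "STOP SLAVE; RESET SLAVE ALL;"
    mysql -h slave1 -e "SHOW MASTER STATUS"   # note File and Position
    mysql -h slave2 -e "STOP SLAVE;
      CHANGE MASTER TO MASTER_HOST='slave1', MASTER_USER='repl',
        MASTER_PASSWORD='CHANGEME', MASTER_LOG_FILE='mysql-bin.000123',
        MASTER_LOG_POS=4;
      START SLAVE;"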

For the unlikely case where our entire datacenter goes away, we also maintain a set of replicating database instances on Amazon EC2. We have a powerful EC2 instance with SSDs, combined with several EBS volumes with generous provisioned IOPS. It is capable of running 3 simultaneous MySQL instances, each replicating a different database from our datacenter. So, if we have to do a fully remote restore, we always have up-to-date replicas ready to use.
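
Running several instances side by side is mostly a matter of giving each mysqld its own datadir, socket, pid file, and port. A minimal sketch, with paths, port, and config file invented for illustration:

    # start an isolated mysqld for one replicated database
    mysqld --defaults-file=/srv/mysql/dbone/my.cnf \
           --datadir=/srv/mysql/dbone/data \
           --socket=/srv/mysql/dbone/mysqld.sock \
           --pid-file=/srv/mysql/dbone/mysqld.pid \
           --port=3307 &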

So, in the case of a hardware failure on any of our database servers, standard replication ensures that we have other up-to-date copies, both within and outside of the datacenter.

Recover from human or application error

Have you ever run a delete statement and forgotten the where clause? How about a bug in your application that silently corrupted data in a way that took you a few days to notice? By the time the sinking feeling reaches all the way to the bottom of your stomach, all of your slaves have happily replicated your error...and now you have N copies of borked data. You are likely in a panic to get the data back as soon as possible, and you definitely don't want to wait hours to download the backup from S3 and prepare it for use. You are probably going to selectively retrieve the data that was lost and manually restore it back into the main database.

One approach to this would be to use pt-slave-delay to keep a slave behind by an hour. However, it might take you several hours, or several days, to notice and track down the bug, overrunning the delay buffer.
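
For reference, keeping a slave an hour behind with pt-slave-delay (from the Percona Toolkit) looks roughly like this; the DSN host, user, and password are placeholders:

    # hold the slave roughly one hour behind its master
    pt-slave-delay --delay 1h --interval 1m h=delayedslave.domain.com,u=ptuser,p=CHANGEME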

We keep 5 days' worth of backups in a ready-to-use format in our datacenter. The backup directories that we create can be treated as self-contained instances. They contain their own data dirs, socket files, temp dirs, and management scripts, and they co-exist nicely with other MySQL instances running on the same server. So when we need access to old data, we can log into the backups server, cd into the backup directory from 3 days ago, and run a start script to fire up a local instance with that snapshot of data. All within 2 minutes.

Occasionally we get requests from engineers to see what the database looked like 2 weeks ago, or 6 months ago. So we keep periodic snapshots going all the way back to when we started doing backups. Unfortunately, we do not have infinite disk space on our backups server, so we built our backup tool to automatically tar/gzip/encrypt the backed-up database and upload it to S3. We also do not have infinite bandwidth to Amazon EC2/S3, so blasting full-sized DB backups across the wire nightly is out. But as I just mentioned, we do have replicating instances on EC2. On a less frequent schedule, we run a second backup against our EC2 replicas and push the encrypted copy into S3 for long-term storage.

Create a new replica or slave

As polyvore.com keeps growing, we keep growing our database cluster, adding servers to the different slave pools and replacing aging servers with fancy new high-powered ones. When the time comes to deploy a new database instance, you have a choice: restore from a previous backup, or clone an existing server. Since our backup tool (based on innobackupex from Percona, www.percona.com) gives us a ready-to-use instance directory from a running server, there is not much difference between these approaches. We can either copy a ready-to-go directory from the backups server or run the tool against an existing slave. Restoring from an existing snapshot is faster, but could leave the new server a day behind in replication. Cloning an existing server takes longer, but the resulting server will be more up to date when it comes online.

Our Tool

Written in Perl, mysql-backup-manager coordinates the various steps in backing up a MySQL database. It operates in several phases, described below.

You can see the latest revision of the tool on GitHub at https://github.com/polyvore/mysql-backup-manager, or download it with git clone git@github.com:polyvore/mysql-backup-manager.git

Copy Phase

Use the innobackupex tool from Percona to take a snapshot of a running database. You run the tool on the destination server. It supports several transport options: local (if you don’t need to copy over the network), netcat (for fast copies over your trusted LAN), and ssh (for encryption, security, and to more easily get through firewalls).
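
Our wrapper drives this for you, but underneath it is standard innobackupex streaming. A bare-bones version of the ssh transport might look like the following, with user, host, and directories as placeholders:

    # on the source database server: stream a snapshot to the backups server
    innobackupex --user=backup --password=CHANGEME --stream=xbstream /tmp \
      | ssh backups.domain.com "xbstream -x -C /srv/db-backups/databasename/working"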

Apply Logs Phase

When you use the innobackupex tool, you must prepare the backup before you can use it. Our tool does this automatically at backup time, in order to simplify and speed up the restore process.
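
Doing the prepare up front means a restore never has to wait for it. The underlying step is a plain apply-log pass; the directory here is the illustrative backup path used later in this post:

    # replay the InnoDB transaction log so the snapshot is consistent
    innobackupex --apply-log --use-memory=1G /srv/db-backups/databasename/2014-1-18-13-10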

Deploy Phase

At Polyvore, we have a standard filesystem layout for MySQL that is slightly different from the Debian layout. Data files, log files, pid files, and sockets all reside within a defined directory structure. This structure allows us to easily run multiple mysqld instances on the same server and to have self-contained restore directories. The deploy phase moves files into the locations where we expect them to be. It also applies a template to create a local my.cnf file, plus start and stop scripts for running a mysqld instance isolated to this directory.
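
To give a flavor of what the deploy phase produces, here is a sketch of what a generated start.sh might contain; the actual template in the tool may differ, and the path is the illustrative one used later in this post:

    #!/bin/sh
    # hypothetical start.sh produced by the deploy template
    # (the real templates bake in absolute paths, which is why a relocated
    # restore needs them edited; see "Possible Improvements" below)
    BACKUPDIR=/srv/db-backups/databasename/2014-1-18-13-10/deploy
    mysqld --defaults-file=$BACKUPDIR/my.cnf \
           --datadir=$BACKUPDIR/data \
           --socket=$BACKUPDIR/mysqld.sock \
           --pid-file=$BACKUPDIR/mysqld.pid &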

Tar Phase

If you are archiving the backup somewhere, create a tarball that you can upload. You have the option to encrypt the backup using OpenSSL DES3, and to split the tarball into 1G chunks for upload.
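
The archive format is just tar piped through gzip, openssl, and split, which is why the restore shown later in this post is a single pipeline run in reverse. Roughly, with paths matching the examples elsewhere in this post:

    # sketch of the tar phase: archive, encrypt with DES3, split into 1G chunks
    mkdir -p /srv/db-tars/databasename/2014-1-18-13-10/tar
    cd /srv/db-backups/databasename/2014-1-18-13-10
    tar czf - deploy \
      | openssl des3 -e -k CHANGEME \
      | split -b 1G - /srv/db-tars/databasename/2014-1-18-13-10/tar/databasename.tgz.encrypted-split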

Upload Phase

Upload the file to S3. We use the s3cmd tool to actually do the upload. The tool relies on s3cmd being properly configured with your Amazon keys.
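
The upload itself is a plain s3cmd call, along these lines; the bucket and paths are the placeholder values from the example usage below:

    # push the split, encrypted tarball to S3 for long-term storage
    s3cmd put --recursive /srv/db-tars/databasename/2014-1-18-13-10/tar/ \
      s3://mybucket/path/databasename/2014-1-18-13-10/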

Error Detection and Reporting

Our backup tool logs to syslog at various stages of the backup process, as well as upon any error. We ingest these logs with Splunk (www.splunk.com) and use Splunk to alert if we encounter any errors or if an expected backup fails to run.
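
The syslog entries themselves are nothing exotic. From the shell, the equivalent of what the tool emits would be something like the following (the messages and tag are hypothetical):

    # examples of the kind of syslog lines the tool produces
    logger -t mysql-backup-manager "starting copy phase for databasename"
    logger -t mysql-backup-manager -p user.err "ERROR: apply-log phase failed, aborting"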

Example Usage

To see full usage details:

    ./mysql-backup-manager.pl --help

The following example creates a backup from a remote host, encrypts it, uploads it to S3, and keeps a copy for instant restores:

    ./mysql-backup-manager.pl --dbtype=databasename --dir=/srv/db-backups/databasename \
       --tardir=/srv/db-tars/databasename --timestamp --tarsplit --upload --encrypt \
       --password=CHANGEME --s3path=s3://mybucket/path --deletetar --transport=ssh \
       --host=hostname.domain.com --verbose

Once the backup is completed, suppose someone accidentally changes the password of every user on our website to "kitten" and I need to do a quick query of yesterday's user table to use in restoring the data. I can do it with the following commands (a purely hypothetical situation; I assure you this exact scenario has never happened at a company I used to work for):

    cd /srv/db-backups/databasename/2014-1-18-13-10/deploy
    ./start.sh
    ./connect.sh
    USE myschemaname
    SELECT username, encryptedpassword FROM users;

If our entire datacenter has slid into the Pacific Ocean with the rest of California after the next big earthquake, I can do a restore from S3 with the following commands (the important commands are also spelled out in the "status" file created during the backup):

    s3cmd get --recursive s3://mybucket/path/databasename/2014-1-18-13-10/
    cd 2014-1-18-13-10/tar
    cat databasename.tgz.encrypted-split* | openssl des3 -d -k CHANGEME | tar zxivf -

If we used the template dir to generate start and stop scripts, those files will need to be edited to reflect the new path. Then we can run start.sh to launch the instance, or move the directories/files into their normal locations.
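
A quick, hypothetical way to do that edit, assuming the original backup path baked into the templates is the illustrative one used earlier in this post:

    # point the templated config and scripts at the restore location
    cd 2014-1-18-13-10/deploy
    sed -i "s|/srv/db-backups/databasename/2014-1-18-13-10/deploy|$(pwd)|g" my.cnf start.sh stop.sh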

Possible Improvements

  • We should write a script that cleans up our S3 backups and implements a retention policy, such as: “Keep daily backups for the last 2 weeks in S3. Keep weekly backups for the last 2 months in Glacier. Keep monthly backups forever in Glacier.” (a rough sketch follows this list)
  • Automate the retrieval and preparation of backups from S3
  • Make paths in templated configs relative, or add a command to adjust them to a new location
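
As a starting point for the retention idea above, a crude sweep of old daily backups could be as simple as the following sketch. It leans on GNU head, assumes timestamp-named prefixes that sort chronologically (ours would need zero-padding first), and uses the placeholder bucket path from the example usage:

    # crude sketch: delete all but the newest 14 daily backups from S3
    s3cmd ls s3://mybucket/path/databasename/ \
      | awk '{print $NF}' | sort | head -n -14 \
      | while read -r prefix; do
          s3cmd del --recursive "$prefix"
        done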
