I have copied this to ensure I do not lose it! Thanks to Vito Botta for another excellent article!
Painless, ultra fast hot backups and restores of MySQL databases with Percona’s XtraBackup
January 10, 2011 by Vito Botta
A better alternative to mysqldump?
In the previous post, we’ve seen how to perform backups and restores of MySQL databases using the mysqldump tool bundled with the MySQL server distribution, together with a few tricks that help speed up both backups and restores, among other things.
mysqldump is very likely the most popular tool for backing up MySQL databases, and in most cases it just works, and pretty well too. It’s easy to use, free, comes with MySQL and produces simple text files containing standard DROP/CREATE/INSERT SQL statements that, once replayed during a restore, will simply recreate the databases, the tables and all the data on the destination MySQL server. By producing simple text files based on SQL statements, mysqldump may also be the perfect tool when migrating data between different versions of MySQL, or when you need to change something in the text file dump before restoring it.
However, mysqldump may not be up to the job if you need to back up MySQL databases that must stay available, without downtime, while they are in use for writes as well as reads; for example, when the databases are replicated and you need to copy them to another server instance to set up a new slave or resync an existing one. In these circumstances, if you resort to mysqldump anyway, you have to choose between locking all the tables while backing up -thus blocking writes in the meantime, which in some cases may mean breaking applications- and producing dumps that may contain inconsistent data. But you hardly want your users to experience downtime, and in any case you don't want your databases to change while you are exporting the data, do you?
Another issue is performance. While mysqldump works great with small databases, it is not efficient with large ones, and restoring a SQL dump of a large database can take a very, very long time. With the tricks we've seen in the previous post, it is possible to speed up both dumps and restores significantly, but this still would not work well for very large databases. So, unless you are very patient and can afford to sit for a week watching your MySQL server while it restores your large db, sooner or later you will have to look for other, more efficient options.
Among the other options, there are both commercial and free tools. One popular (free) choice is LVM snapshots. They work well when you can use them (with virtual private servers, for instance, you often can't, unless you are given the possibility to customise the disk layout), but they've also got their weaknesses; we'll look at this option in more detail in a future post. Suffice it to say here that LVM snapshots can significantly impact the disks, and therefore the performance of the whole server. Plus, an LVM snapshot only captures what MySQL has already flushed to disk at that moment, not the data it may still hold in memory, so restoring from an LVM snapshot may require a crash recovery.
A better alternative yet is MySQL Enterprise Backup (formerly InnoDB Hot Backup), which can back up a live MySQL server without downtime and at the same time ensure that our backups are consistent even if the databases are in use for writes during the process. Plus, restores are incredibly fast. Sounds great, yeah? Unfortunately, MySQL Enterprise Backup is a pricey commercial product that may be beyond reach in many cases (5K USD per server), especially for companies with tight budgets that may have chosen MySQL for its cost (none) in the first place, as well as for its good performance.
But fear not, if you are on the cheap like me: luckily for us, there also exists a free, open source solution by the good people at Percona (a MySQL consultancy company), which is very, very similar to Enterprise Backup/InnoDB Hot Backup. It's called XtraBackup, and while it's still pretty young, having been first developed less than a couple of years ago, it's a pretty solid solution that works very well despite some limitations. It also compares well to the commercial alternative: it only takes a bit longer to back up the same data, and requires a little more disk space. I haven't had a chance to test this myself yet, so I'll trust what I've read on pythian.com: Taste test: Innobackup vs. Xtrabackup.
Here are the main features available with the current release, from the home page:
- Create hot InnoDB backups without pausing your database
- Make incremental backups of MySQL
- Stream compressed MySQL backups to another server
- Move tables between MySQL servers online
- Create new MySQL replication slaves easily
- Backup MySQL without adding load to the server
While the product itself is free, Percona also provides commercial support, if needed. So, let’s see now how to use Xtrabackup for backing up and restoring MySQL databases.
For starters, you'll need to install XtraBackup. In this example, we'll install a Debian package on an Ubuntu server. Head to XtraBackup's download page and grab the latest version available (1.4 at the moment of this writing) with wget or similar, making sure you choose the right package for your OS.
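On an Ubuntu server, the installation might look something like the following (the package URL is a placeholder; copy the exact one for your distribution and architecture from Percona's download page):

```shell
# Download the XtraBackup 1.4 .deb for your OS/architecture from
# Percona's download page, then install it; file name is illustrative
wget "<xtrabackup 1.4 .deb URL from Percona's download page>"
sudo dpkg -i xtrabackup_1.4-*.deb
```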
The package installs a binary named xtrabackup that takes care of the actual backups and restores, but it also installs a Perl script named innobackupex-1.5.1 that makes XtraBackup a bit easier to use, and additionally backs up MyISAM tables. Backing up is pretty simple:
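A minimal invocation might look like this (the config path, credentials and target directory are illustrative; adjust them for your setup):

```shell
# Stream a hot backup in tar format and gzip it into a file named
# after the current timestamp; credentials and paths are examples
innobackupex-1.5.1 --defaults-file=/etc/mysql/my.cnf \
  --user=root --password=secret \
  --slave-info --databases="mydb" \
  --stream=tar ./ \
  | gzip -1 > /var/backups/mysql-$(date +%Y%m%d-%H%M%S).tar.gz
```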
If all goes well, the command prints its progress and ends with a message saying the backup completed OK.
In this first example, we have instructed XtraBackup to stream the backup data in tar format; the output is then compressed with gzip into a file having the current timestamp in its name. You may have noticed that I am using gzip's lowest compression level in the example above: higher levels only require more CPU and other resources for little improvement, especially if you pipe the compression to XtraBackup while the backup is still in progress.

Actually, I have often had problems with compressing the backup data while the backup is still running; in many cases I found that backing up first, and then compressing the finished backup, works better. In my experience with the tool, whether compressing while backing up succeeds also depends on the value of the wait_timeout system variable in MySQL: on my servers, whenever I have set this variable to a low value, I've always had problems getting backup and compression to work in one step. There may be better settings or workarounds, but I've found that setting wait_timeout to a value higher than the average time taken by the backup process usually makes for a successful compressed backup without requiring a separate compression step. Keep in mind, however, that a high value for wait_timeout can cause a whole lot of other issues depending on the usage pattern of your MySQL instances.
For this reason, you may want to keep wait_timeout at a reasonably low value, and raise it only for the XtraBackup session. You can do this by patching the innobackupex-1.5.1 script, as suggested by reader "Gu Lei" in a bug somebody filed for exactly this issue. So edit the file /usr/bin/innobackupex-1.5.1 and change it as follows:
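The exact patch depends on the version of the script you have; as a sketch (the function names in your copy of innobackupex-1.5.1 may differ, so treat this as an illustration of the idea rather than the literal patch), the point is to raise wait_timeout on the connection the script opens, right after it connects:

```diff
 # in /usr/bin/innobackupex-1.5.1, where the script opens its
 # connection to the server:
     ...
+    # keep the server from dropping the connection mid-backup
+    mysql_send "SET SESSION wait_timeout=28800;";
```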
Of course, the timeout value must be enough to cover the amount of time XtraBackup could possibly take to backup your data.
The arguments --slave-info and --databases are optional. The former saves the master status info together with the data (assuming the server you are backing up from is a master, or an existing slave, in a replicated environment), which is useful if you are in the process of setting up a new slave or resyncing an existing one. The latter, instead, lets you specify which databases you want to back up, if you don't want to back up all of them; it accepts a space-separated list of database names. Obviously, if your MySQL configuration file is in another location on your server, remember to change the path in the relevant argument accordingly.
Streaming backups to another MySQL server
The command above works pretty well if you want to make backups manually, or if you want to schedule, for example, daily full backups with cron. However, if you are backing up with the purpose of restoring to another server, you may want to skip creating a local file that then needs to be transferred to the destination server for the restore. You can instead stream XtraBackup's output directly to the destination server and create the backup archive there, using netcat.
Netcat, which we already met in the previous post on backing up with mysqldump, is a simple yet very useful utility that streams data over a TCP connection, allowing you, for example, to save that data to a file on the destination host. First, let's get netcat running on the destination host, listening on a port of our choice (say 6789) and redirecting its output to a file on disk:
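The listener side might look like this (port and file path are illustrative):

```shell
# On the destination host: listen on TCP port 6789 and write the
# incoming stream to a file (some netcat variants use "nc -l 6789"
# without the -p flag)
nc -l -p 6789 > /var/backups/mysql-backup.tar
```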
Then, on the server that hosts the data we want to migrate:
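The sending side is the same backup command as before, but piped to netcat instead of a local file (config path and credentials are illustrative):

```shell
# On the source host: stream the backup straight to the destination
# host over TCP; xxx.xxx.xxx.xxx is the destination host's IP address
innobackupex-1.5.1 --defaults-file=/etc/mysql/my.cnf \
  --user=root --password=secret \
  --slave-info --stream=tar ./ \
  | nc xxx.xxx.xxx.xxx 6789
```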
where xxx.xxx.xxx.xxx is the IP address of the destination host. XtraBackup will take its time depending on the amount of data to transfer; once the process is complete, netcat will close automatically on the destination host, with all the data copied into the target file. If all went well, XtraBackup will end the process on the source host by reporting that it completed OK.
If not, well, you are going to have trouble, of course.
If backing up with XtraBackup was easy, restoring isn't any more difficult; it just requires a few additional steps rather than a single one. First, on the destination host (we are done on the source host), back up the existing MySQL data folder if needed, then delete or rename it, so that we can create a new, empty data folder in the same location to contain the restored data.
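For example (the path below is the Debian/Ubuntu default data directory; adjust it for your setup):

```shell
# Stop MySQL, set the old data folder aside, and create a fresh,
# empty one in its place
sudo service mysql stop
sudo mv /var/lib/mysql /var/lib/mysql.old
sudo mkdir /var/lib/mysql
```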
Next, extract the tar archive created while streaming the data during the backup into the new MySQL data folder. As XtraBackup reminds us at the end of the backup process, we need to use the --ignore-zeros option (or -i) when untarring. Also remember to use the -z option as well if you are restoring from a gzipped tarball; here, for example's sake, I am assuming you have used netcat instead.
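Assuming the archive landed in the path used in the netcat example above:

```shell
# Extract into the new data folder; -i (--ignore-zeros) is required
# for tar archives produced by XtraBackup. Add -z if the archive
# is gzipped.
sudo tar -xif /var/backups/mysql-backup.tar -C /var/lib/mysql
```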
Unless something is wrong with the archive for some reason, you should now have a new MySQL data folder identical to the one on the source host at the moment the backup was taken. One thing, though: this data folder is not yet ready for use with MySQL, but needs to be "prepared" first, to apply the changes that XtraBackup captured in the transaction logs during the backup. Once this is done, fix the permissions and start MySQL again:
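The prepare step, ownership fix and restart might look like this (paths are illustrative):

```shell
# "Prepare" the restored files by applying the transaction log
# captured during the backup, then fix ownership and start MySQL
sudo innobackupex-1.5.1 --apply-log /var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start
```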
If all went well so far, MySQL should start correctly with the restored database(s).
Setting up the new/updated instance as replication slave
Assuming now you want to use this new server as a slave of the instance you backed the data from, and that you know how to set up the replication (we'll look at how to do this in another post), or that this instance already was a slave and you just need to resync it, you will have to update the information that MySQL uses to connect to the master and replicate data. If you used the --slave-info option with XtraBackup during the backup, as in my example, the new data folder should also contain some additional files holding exactly this information.
Nothing complicated here… but you need to be careful as to which file you should look at. You have two possibilities here:
Scenario 1 – You are cloning an existing slave
That is, you are adding a new slave. Let's call the host you previously backed the data from Slave-0. Slave-0 itself was already configured as a slave of another MySQL server instance, its master, that we'll call… Master. Right now you basically want to "clone" Slave-0; that is, you want to add a new slave to Master. Let's call this new slave Slave-1. So, to recap: both Slave-0 AND the new Slave-1 will in the end be replicating from the same Master, while there won't be any link between Slave-0 and Slave-1. Hope this is clear!
If this is what you are trying to do here, the file you need to look at is xtrabackup_slave_info. If you look at the content of this file,
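For example (the log file name and position in the output are illustrative; yours will differ):

```shell
# The file holds a ready-made CHANGE MASTER statement, e.g.:
#   CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678
cat /var/lib/mysql/xtrabackup_slave_info
```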
you'll see that it already contains a "CHANGE MASTER.." command ready for you to use. So open a MySQL console and just execute that command:
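For example, in a MySQL console on the new slave (the host name, credentials, log file name and position below are all illustrative; the log file and position come from your xtrabackup_slave_info file):

```sql
-- Point the new slave at Master; add host and credentials here
-- if they are not already set in my.cnf
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=45678;
START SLAVE;
```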
If everything was OK, SHOW SLAVE STATUS should report both replication threads as running, with Seconds_Behind_Master shrinking toward zero. As reader Mora pointed out: "Seconds_Behind_Master is how many seconds the slave is behind the master. It is not an estimation of how long it's going to take to catch up; if the slave has been stopped for 1 hour, it would be 3600 seconds behind, but it can take anywhere from a few seconds to a few days (or more) to catch up."
Note: you may have noticed that the file containing the information about the master only contains the name and position of the master's binary log file in use at the moment the backup was taken; there is no username or password, because credentials are not saved there, for obvious reasons. So unless you have added values for MASTER_HOST, MASTER_USER and MASTER_PASSWORD to your my.cnf file, you may need to add these settings to the "CHANGE MASTER.." command above.
Scenario 2 – You are setting up a new slave, or resyncing an existing slave having as master the MySQL instance you’ve backed data from in the previous steps
In this second scenario, instead, say that
a) you want to use the MySQL server instance you previously backed your data from, as master in a typical replication; let’s call this instance Master;
b) you want to use the new instance you are currently setting up from that backup, as Slave. Or, the master-slave replication between these two already existed but you need to resync this slave.
If this is what you are trying to do, the file you need to look at is xtrabackup_binlog_info instead. If you look at the content of this file,
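For example (the values in the output are illustrative):

```shell
# Typical content: binary log file name and position; a third column
# may list the databases excluded from replication, e.g.:
#   mysql-bin.000123      45678
cat /var/lib/mysql/xtrabackup_binlog_info
```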
you'll see something slightly different from what the other file contains. The first value is the name of the binary log file on the master to which you need to attach this slave; the second value is the position within that binary log from which MySQL has to start catching up with changes from the master. Optionally, you may also see a third value: a list of the names of the databases excluded from replication.
Now you can use this information to update the link with the master. Start a MySQL console and enter the following commands to do this, then start the slave:
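For example (the host name and credentials below are illustrative; the log file name and position come from your xtrabackup_binlog_info file):

```sql
-- Point this slave at the instance we backed the data from
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=45678;
START SLAVE;
```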
In this case too, you'll have to enter the username and password you are using for the replication process, and be careful to enter the log file name and position correctly, according to your xtrabackup_binlog_info file.
Finally, if all worked as expected, SHOW SLAVE STATUS should report both Slave_IO_Running and Slave_SQL_Running as Yes.
Cool, you’re done.
I am pretty sure you'll see the huge benefits of this alternative over mysqldump as soon as you try XtraBackup. Make sure, though, that you test backups and restores thoroughly before using this system in production!
Know of any other tips related to XtraBackup, or of a similar or even better alternative (still free)? Please share them in the comments!