MariaDB slave restore using GTID & xtrabackup bug

Restoring a MariaDB (MySQL) slave using Xtrabackup & GTID.

Recently I happened to work on a MySQL database restore, a pretty exciting task for a DBA 😛

Well, the backup server was already configured with Holland (a backup framework) & Xtrabackup (Percona’s backup tool), which made our lives easier when it came to the restore steps.

The backup command, extracted from the backup server’s holland.log:

/usr/bin/innobackupex-1.5.1 --defaults-file=/mysql_backups/xtrabackup/20141219_013002/my.cnf --stream=tar --tmpdir=/mysql_backups/xtrabackup/20141219_013002 --slave-info --no-timestamp /mysql_backups/xtrabackup/20141219_013002 > /mysql_backups/xtrabackup/20141219_013002/backup.tar.gz 2> /mysql_backups/xtrabackup/20141219_013002/xtrabackup.log


The task here is as simple as (a rough shell sketch follows the list):
– Ship & extract the backup to the destination,
– Apply logs (innobackupex --defaults-file=/path/my.cnf --apply-log /path/datadir/),
– Start the database (/etc/init.d/mysqld start),
– Set up replication (CHANGE MASTER TO).
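
For reference, here’s a minimal shell sketch of those restore steps on the destination host. The paths, the datadir location and the defaults-file name are assumptions based on our layout, not the exact commands from this run:

# Ship the streamed tar backup to the destination (paths are illustrative)
scp backupserver:/mysql_backups/xtrabackup/20141219_013002/backup.tar.gz /restore/

# Extract into the new datadir; the -i (--ignore-zeros) flag is required for
# tar streams produced by innobackupex --stream=tar
tar -xif /restore/backup.tar.gz -C /var/lib/mysql

# Prepare the backup so the datadir is consistent
# (defaults file path is a placeholder, as in the step above)
innobackupex --defaults-file=/path/my.cnf --apply-log /var/lib/mysql

# Fix ownership and start the database
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysqld start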

MySQL came up clean after following all the steps, and now it’s time to set up replication.

The --slave-info option to the innobackupex script stores the master’s binlog coordinates in a file, xtrabackup_slave_info, which can be used to set up the new server as a slave.
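
For context, the file itself is tiny: it normally holds a ready-to-run CHANGE MASTER TO statement (or, on newer GTID-aware versions, a gtid_slave_pos line). A rough illustration with placeholder values, not our actual coordinates:

$ cat xtrabackup_slave_info
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.XXXXXX', MASTER_LOG_POS=NNNNNNNN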

The coordinates can also be found in the xtrabackup log, shown as follows:

innobackupex-1.5.1: Backup created in directory '/mysql_backups/xtrabackup/20141219_013002'
innobackupex-1.5.1: MySQL binlog position: filename 'mysql-bin.002573', position 53014559, GTID of the last change '0-10110111-42499073'
innobackupex-1.5.1: MySQL slave binlog position: master host '10.1.101.11', gtid_slave_pos 0-10110111-42499073
141219 04:55:16  innobackupex-1.5.1: Connection to database server closed
innobackupex-1.5.1: You must use -i (--ignore-zeros) option for extraction of the tar stream.
141219 04:55:16  innobackupex-1.5.1: completed OK!

I noticed that the binlog coordinates in the backup’s xtrabackup_slave_info file did not match those of the master! It turned out the coordinates we had were those of the slave itself (SHOW MASTER STATUS) and not of the master (Relay_Master_Log_File & Exec_Master_Log_Pos from SHOW SLAVE STATUS). This appears to be a bug in xtrabackup.
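
A quick way to spot the mismatch is to put the file next to the source slave’s own replication status. A hedged sketch (the second command runs on the backup source, i.e. the existing slave; paths are placeholders):

# On the restored host: coordinates recorded by the backup
cat /var/lib/mysql/xtrabackup_slave_info

# On the backup source: the coordinates that actually point into the
# master's binlogs
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'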

Luckily, the xtrabackup log did record the GTID position, and that helped restore the slave as follows.

Verified that the present gtid_slave_pos matched the one in the log and started the slave. (Check the References for the MariaDB documentation page about setting up a slave from a backup.)

MariaDB [(none)]> show global variables like 'gtid_slave_pos';
+----------------+---------------------+
| Variable_name  | Value               |
+----------------+---------------------+
| gtid_slave_pos | 0-10110111-42499073 |
+----------------+---------------------+
1 row in set (0.00 sec)

MariaDB [(none)]> show global variables like 'gtid_current_pos';
+------------------+---------------------+
| Variable_name    | Value               |
+------------------+---------------------+
| gtid_current_pos | 0-10110111-42499073 |
+------------------+---------------------+

MariaDB [(none)]> CHANGE MASTER TO master_use_gtid=current_pos; START SLAVE;
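
For completeness, had gtid_slave_pos not already matched the value from the log, the MariaDB documentation’s approach is to set it explicitly before pointing the slave at the master. A sketch with placeholder credentials (the master host is the one from the xtrabackup log above):

-- Only needed if gtid_slave_pos does not already match the backup's GTID
SET GLOBAL gtid_slave_pos = '0-10110111-42499073';

-- Connection details are placeholders; master_use_gtid tells MariaDB to
-- resume replication from the GTID position above
CHANGE MASTER TO
    master_host='10.1.101.11',
    master_user='repl',
    master_password='xxxxxx',
    master_use_gtid=slave_pos;
START SLAVE;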

We’ll look forward to the bug fix so that the correct (master’s) binary log positions are reported in the xtrabackup_slave_info file.

References:

  • https://bugs.launchpad.net/bugs/1404484
  • https://mariadb.com/kb/en/mariadb/documentation/replication/standard-replication/global-transaction-id/#setting-up-from-backup
  • http://www.percona.com/doc/percona-xtrabackup/2.2/innobackupex/innobackupex_option_reference.html#cmdoption-innobackupex--slave-info