An Analysis of How mysqldump Works

Today I studied how mysqldump works. The conclusions are as follows:
1. mysqldump with no extra options:
When you dump a single table, mysqldump takes a read lock on that table and releases it with UNLOCK TABLES when the table has been dumped. When you dump a whole database, it locks all tables in that database at the same time and runs UNLOCK TABLES at the end. When you dump the whole instance (--all-databases or -A, which dumps every database except information_schema and performance_schema), it works database by database: it read-locks all tables in database A, dumps A, runs UNLOCK TABLES, then read-locks all tables in database B, dumps B, and unlocks again. So with no extra options, dumping a single table or a single database takes read locks and the dumped data is consistent; when dumping the whole instance, however, consistency between databases cannot be guaranteed.
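A minimal sketch of the three invocation forms described above (user, database and table names here are placeholders, not the test objects used later):
# single table: read-locks only that table
mysqldump -uroot -p db_a t1 > t1.sql
# single database: read-locks all tables in db_a, UNLOCK TABLES at the end
mysqldump -uroot -p db_a > db_a.sql
# whole instance: locks and unlocks database by database, so cross-database consistency is not guaranteed
mysqldump -uroot -p --all-databases > full.sql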
2. The --single-transaction option:
For the InnoDB engine, adding --single-transaction gives a consistent backup. It works by setting the session isolation level to REPEATABLE READ and issuing START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ to open a consistent-snapshot transaction, so no read lock is needed. Note that this option only helps InnoDB: for MyISAM tables the dump goes through exactly the same steps when --single-transaction is given, but since MyISAM does not support transactions, repeatable read cannot be guaranteed during the dump and the backup is not consistent.
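For reference, a minimal sketch of such an invocation and the two statements mysqldump then issues (as seen in the general log in section 2.2 below; names are placeholders):
mysqldump -uroot -p --single-transaction db_a > db_a.sql
-- issued by mysqldump at the start of the dump session:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;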
3. The --master-data option:
With --master-data=2, the dump file contains the CHANGE MASTER statement as a comment, like this:
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql2-bin.000049', MASTER_LOG_POS=587;
With the value set to 1, an executable CHANGE MASTER statement is generated instead of a comment, like this:
CHANGE MASTER TO MASTER_LOG_FILE='mysql2-bin.000049', MASTER_LOG_POS=587;
When this option is given, mysqldump runs FLUSH TABLES WITH READ LOCK so that it can record an accurate binlog position; from the start of the backup to its end, no other transaction is allowed to modify data, which also guarantees consistency.
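A minimal sketch of such an invocation and of how to find the recorded position afterwards (names are placeholders):
mysqldump -uroot -p --master-data=2 db_a > db_a.sql
# the recorded binlog coordinates sit in the header of the dump file:
grep "CHANGE MASTER" db_a.sql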
4. Using --single-transaction together with --master-data:
mysqldump still executes FLUSH TABLES WITH READ LOCK, but it releases the lock with UNLOCK TABLES before the actual dumping begins, right after SHOW MASTER STATUS has reported the master's binlog position. It also runs SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; from that point on the process is the same as with --single-transaction alone.
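The key statements mysqldump issues in this case, taken from the general log in section 2.4 below (the command itself is a sketch with placeholder names):
mysqldump -uroot -p --single-transaction --master-data=2 db_a > db_a.sql
-- issued by mysqldump:
FLUSH TABLES WITH READ LOCK;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
SHOW MASTER STATUS;
UNLOCK TABLES;  -- the global read lock is held only long enough to record the binlog position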
5. The --lock-all-tables option guarantees a consistent backup by read-locking every table in the instance.
The difference between dumping with and without --lock-all-tables is that the former runs FLUSH TABLES WITH READ LOCK, which read-locks every table in the instance, while the latter only read-locks the tables being dumped (e.g. LOCK TABLES `liu` READ). With --lock-all-tables there is no explicit UNLOCK TABLES: the database cannot be written to during the whole dump, and the read lock taken by FLUSH TABLES WITH READ LOCK disappears as soon as the session ends. Without the option, only the dumped table is locked and an explicit UNLOCK TABLES is issued; in fact that lock would also be released when the session ends, so the explicit UNLOCK is not strictly necessary.
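A minimal sketch (names are placeholders):
mysqldump -uroot -p --lock-all-tables db_a > db_a.sql
-- issued by mysqldump before the dump, and held until the session ends:
FLUSH TABLES WITH READ LOCK;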
6. START TRANSACTION and START TRANSACTION WITH CONSISTENT SNAPSHOT are not the same.
START TRANSACTION WITH CONSISTENT SNAPSHOT is equivalent to executing START TRANSACTION and then immediately issuing a SELECT against every InnoDB table. Under REPEATABLE READ, a plain START TRANSACTION does not by itself guarantee that later queries all see the same data; the snapshot is only established when the first query is issued, and only from then on do identical queries return the same result.
mysqldump with --single-transaction uses START TRANSACTION WITH CONSISTENT SNAPSHOT rather than plain START TRANSACTION because the tables are not all dumped at the same moment. With a plain START TRANSACTION, if another transaction inserted row A into the second table while the first table was being dumped, the dump of the second table would see row A, and the first and second tables would no longer be consistent with each other. So plain START TRANSACTION cannot provide a consistent view when a database contains more than one table.
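A small sketch of the difference, assuming an InnoDB table t touched by two sessions (table and values are illustrative):
-- session 1
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;                            -- no snapshot yet
-- session 2
INSERT INTO t VALUES (2); COMMIT;
-- session 1
SELECT * FROM t;                              -- first read creates the snapshot: row 2 IS visible
-- versus
-- session 1
START TRANSACTION WITH CONSISTENT SNAPSHOT;   -- snapshot is created immediately
-- session 2
INSERT INTO t VALUES (3); COMMIT;
-- session 1
SELECT * FROM t;                              -- row 3 is NOT visible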
To summarize:
Run mysqldump backups when the workload is light, and choose options according to the storage engine. If the tables are InnoDB, --single-transaction gives a consistent backup without taking a read lock. If you want consistency across the whole instance and it contains both InnoDB and MyISAM tables, it is better to use --lock-all-tables. And if the goal is point-in-time recovery, or adding a slave without stopping the master, also add --master-data, which guarantees consistency as well because it executes FLUSH TABLES WITH READ LOCK.
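For an all-InnoDB instance, a typical invocation along these lines (a sketch; user and file name are placeholders):
mysqldump -uroot -p --single-transaction --master-data=2 --all-databases > full_backup.sql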
The detailed verification process follows.
Part 1: enable the general log, so we can see exactly which statements mysqldump executes.
mysql> set global general_log=on;
The location of the general log file can be checked with:
mysql> show variables like '%general_log_file%';
Part 2: run the mysqldump export experiments.
2.1 First, with no extra options:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe liuhe blocks_infos_opensearch> /mysql/blocks_infos_opensearch.sql
Check the corresponding general log:
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 14:12:22 17 Quit
180429 14:12:55 18 Connect root@localhost on
18 Query /*!40100 SET @@SQL_MODE='' */
18 Query /*!40103 SET TIME_ZONE='+00:00' */
18 Query SHOW VARIABLES LIKE 'gtid\_mode'
18 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENG
INE ORDER BY LOGFILE_GROUP_NAME
18 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NA
ME
18 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
18 Init DB liuhe
18 Query SHOW TABLES LIKE 'blocks\_infos\_opensearch'
18 Query LOCK TABLES `blocks_infos_opensearch` READ /*!32311 LOCAL */
18 Query show table status like 'blocks\_infos\_opensearch'
18 Query SET SQL_QUOTE_SHOW_CREATE=1
18 Query SET SESSION character_set_results = 'binary'
18 Query show create table `blocks_infos_opensearch`
18 Query SET SESSION character_set_results = 'utf8'
18 Query show fields from `blocks_infos_opensearch`
18 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos_opensearch`
18 Query SET SESSION character_set_results = 'binary'
18 Query use `liuhe`
18 Query select @@collation_database
18 Query SHOW TRIGGERS LIKE 'blocks\_infos\_opensearch'
18 Query SET SESSION character_set_results = 'utf8'
18 Query UNLOCK TABLES
18 Quit
2.2 With --single-transaction:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe --single-transaction liuhe blocks_infos_opensearch> /mysql/blocks_infos_opensearch.sql

bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 14:20:41 21 Quit
180429 14:20:47 22 Connect root@localhost on
22 Query /*!40100 SET @@SQL_MODE='' */
22 Query /*!40103 SET TIME_ZONE='+00:00' */
22 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
22 Query START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ # start a consistent-snapshot (snapshot read) transaction
22 Query SHOW VARIABLES LIKE 'gtid\_mode'
22 Query UNLOCK TABLES
22 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENG
INE ORDER BY LOGFILE_GROUP_NAME
22 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NA
ME
22 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
22 Init DB liuhe
22 Query SHOW TABLES LIKE 'blocks\_infos\_opensearch'
22 Query SAVEPOINT sp
22 Query show table status like 'blocks\_infos\_opensearch'
22 Query SET SQL_QUOTE_SHOW_CREATE=1
22 Query SET SESSION character_set_results = 'binary'
22 Query show create table `blocks_infos_opensearch`
22 Query SET SESSION character_set_results = 'utf8'
22 Query show fields from `blocks_infos_opensearch`
22 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos_opensearch`
22 Query SET SESSION character_set_results = 'binary'

22 Query use `liuhe`
22 Query select @@collation_database
22 Query SHOW TRIGGERS LIKE 'blocks\_infos\_opensearch'
22 Query SET SESSION character_set_results = 'utf8'
22 Query ROLLBACK TO SAVEPOINT sp # roll the transaction back to savepoint sp
22 Query RELEASE SAVEPOINT sp # discard the savepoint; note that once the transaction rolls back or commits, previously created savepoints are gone
180429 14:20:48 22 Quit
Comparing 2.1 and 2.2 shows that with --single-transaction, mysqldump no longer needs LOCK TABLES `blocks_infos_opensearch` READ. With --single-transaction, the session isolation level is changed to REPEATABLE READ so that other transactions cannot affect the dump session: the dumping session never sees changes made by other transactions, which is what makes the dumped data consistent. And in order for the dump to correspond to one precise position, the transaction is opened with START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ rather than a plain START TRANSACTION: a plain START TRANSACTION does not create the snapshot; the snapshot is only created when the first SELECT is executed. In other words, if another transaction inserts row A after START TRANSACTION and before your first SELECT, that SELECT will still see row A, and an exact position can no longer be obtained.
2.3 With --master-data:
Details:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe --master-data=2 liuhe blocks_infos_opensearch> /mysql/blocks_infos_opensearch.sql
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 18:01:27 35 Quit
180429 18:02:15 36 Connect root@localhost on
36 Query /*!40100 SET @@SQL_MODE='' */
36 Query /*!40103 SET TIME_ZONE='+00:00' */
36 Query FLUSH /*!40101 LOCAL */ TABLES
36 Query FLUSH TABLES WITH READ LOCK
36 Query SHOW VARIABLES LIKE 'gtid\_mode'
36 Query SHOW MASTER STATUS
36 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENG
INE ORDER BY LOGFILE_GROUP_NAME
36 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NA
ME
36 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
36 Init DB liuhe
36 Query SHOW TABLES LIKE 'blocks\_infos\_opensearch'
36 Query show table status like 'blocks\_infos\_opensearch'
36 Query SET SQL_QUOTE_SHOW_CREATE=1
36 Query SET SESSION character_set_results = 'binary'
36 Query show create table `blocks_infos_opensearch`
36 Query SET SESSION character_set_results = 'utf8'
36 Query show fields from `blocks_infos_opensearch`
36 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos_opensearch`
180429 18:02:16 36 Query SET SESSION character_set_results = 'binary'
36 Query use `liuhe`
36 Query select @@collation_database
36 Query SHOW TRIGGERS LIKE 'blocks\_infos\_opensearch'
36 Query SET SESSION character_set_results = 'utf8'
36 Quit
Comparing 2.1 and 2.3 shows that SHOW MASTER STATUS reports the current binlog position and FLUSH TABLES WITH READ LOCK guarantees consistency. Note that even though only one table is being dumped, the recorded binlog position is meant to be usable directly for adding a slave without stopping the master, so every table in the instance has to be locked (FLUSH TABLES WITH READ LOCK) to guarantee that from the moment the backup starts there is no more DML, i.e. that the position no longer advances.
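For context, a sketch of how the recorded coordinates would later be used when pointing a new slave at the master (host, user and password here are placeholders; the file and position are the values recorded in the dump above):
CHANGE MASTER TO
  MASTER_HOST='master_host',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql2-bin.000049',
  MASTER_LOG_POS=587;
START SLAVE;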
2.4 With both --master-data and --single-transaction:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe --single-transaction --master-data=2 liuhe blocks_infos_opensearch> /mysql/blocks_infos_opensearch.sql
Warning: Using a password on the command line interface can be insecure.
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 18:50:06 37 Quit
180429 18:50:36 38 Connect root@localhost on
38 Query /*!40100 SET @@SQL_MODE='' */
38 Query /*!40103 SET TIME_ZONE='+00:00' */
38 Query FLUSH /*!40101 LOCAL */ TABLES
38 Query FLUSH TABLES WITH READ LOCK
38 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
38 Query START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
38 Query SHOW VARIABLES LIKE 'gtid\_mode'
38 Query SHOW MASTER STATUS
38 Query UNLOCK TABLES
38 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENG
INE ORDER BY LOGFILE_GROUP_NAME
38 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('blocks_infos_opensearch')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NA
ME
38 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
38 Init DB liuhe
38 Query SHOW TABLES LIKE 'blocks\_infos\_opensearch'
38 Query SAVEPOINT sp
38 Query show table status like 'blocks\_infos\_opensearch'
38 Query SET SQL_QUOTE_SHOW_CREATE=1
38 Query SET SESSION character_set_results = 'binary'
38 Query show create table `blocks_infos_opensearch`
38 Query SET SESSION character_set_results = 'utf8'
38 Query show fields from `blocks_infos_opensearch`
38 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos_opensearch`
38 Query SET SESSION character_set_results = 'binary'
38 Query use `liuhe`
38 Query select @@collation_database
38 Query SHOW TRIGGERS LIKE 'blocks\_infos\_opensearch'
38 Query SET SESSION character_set_results = 'utf8'
38 Query ROLLBACK TO SAVEPOINT sp
38 Query RELEASE SAVEPOINT sp
38 Quit
Comparing 2.4 with 2.2 and 2.3 shows that when --master-data and --single-transaction are given together, mysqldump still executes FLUSH TABLES WITH READ LOCK (but the dump has not started at that point, and UNLOCK TABLES is issued right after SHOW MASTER STATUS has reported the master's binlog position), and it also runs SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; after that the process is the same as with --single-transaction alone.
2.5 What happens with the MyISAM engine? (A table liu was created with the MyISAM storage engine.)
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe --single-transaction liuhe liu>/mysql/liu.sql
The general log shows exactly the same execution sequence as for an InnoDB table with --single-transaction:
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 19:54:28 3 Quit
180429 19:55:29 4 Connect root@localhost on
4 Query /*!40100 SET @@SQL_MODE='' */
4 Query /*!40103 SET TIME_ZONE='+00:00' */
4 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
4 Query START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
4 Query SHOW VARIABLES LIKE 'gtid\_mode'
4 Query UNLOCK TABLES
4 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE
_GROUP_NAME
4 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
4 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
4 Init DB liuhe
4 Query SHOW TABLES LIKE 'liu'
4 Query SAVEPOINT sp
4 Query show table status like 'liu'
4 Query SET SQL_QUOTE_SHOW_CREATE=1
4 Query SET SESSION character_set_results = 'binary'
4 Query show create table `liu`
4 Query SET SESSION character_set_results = 'utf8'
4 Query show fields from `liu`
4 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `liu`
4 Query SET SESSION character_set_results = 'binary'
4 Query use `liuhe`
4 Query select @@collation_database
4 Query SHOW TRIGGERS LIKE 'liu'
4 Query SET SESSION character_set_results = 'utf8'
4 Query ROLLBACK TO SAVEPOINT sp
4 Query RELEASE SAVEPOINT sp
4 Quit
bogon:root@/mysql/data/data>
Analysis:
Although a MyISAM table dumped with --single-transaction goes through exactly the same steps as an InnoDB table, MyISAM does not support transactions, so repeatable read cannot be guaranteed during the dump and the backup is not consistent. For InnoDB, even though other threads keep writing during the backup, the dumped data is guaranteed to be the data as of the binlog position at which the backup started.
To get a consistent backup of MyISAM tables, add --lock-all-tables. Then, after FLUSH TABLES WITH READ LOCK, the lock is only released when the whole dump finishes and the connection is closed (there is no need to issue UNLOCK TABLES explicitly); no other thread can write during the dump, which guarantees consistency.
2.6 When backing up a MyISAM table, the execution with and without --lock-all-tables differs as follows:
2.6.1 With --lock-all-tables:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe --lock-all-tables liuhe liu>/mysql/liu.sql
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 20:16:12 5 Quit
180429 20:18:18 6 Connect root@localhost on
6 Query /*!40100 SET @@SQL_MODE='' */
6 Query /*!40103 SET TIME_ZONE='+00:00' */
6 Query FLUSH TABLES
180429 20:18:19 6 Query FLUSH TABLES WITH READ LOCK
6 Query SHOW VARIABLES LIKE 'gtid\_mode'
6 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE
_GROUP_NAME
6 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
6 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
6 Init DB liuhe
6 Query SHOW TABLES LIKE 'liu'
6 Query show table status like 'liu'
6 Query SET SQL_QUOTE_SHOW_CREATE=1
6 Query SET SESSION character_set_results = 'binary'
6 Query show create table `liu`
6 Query SET SESSION character_set_results = 'utf8'
6 Query show fields from `liu`
6 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `liu`
6 Query SET SESSION character_set_results = 'binary'
6 Query use `liuhe`
6 Query select @@collation_database
6 Query SHOW TRIGGERS LIKE 'liu'
6 Query SET SESSION character_set_results = 'utf8'
6 Quit
2.6.2 Without --lock-all-tables:
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe liuhe liu>/mysql/liu.sql
bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 20:25:43 7 Quit
180429 20:25:53 8 Connect root@localhost on
8 Query /*!40100 SET @@SQL_MODE='' */
8 Query /*!40103 SET TIME_ZONE='+00:00' */
8 Query SHOW VARIABLES LIKE 'gtid\_mode'
8 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE
_GROUP_NAME
8 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA='liuhe' AND TABLE_NAME IN ('liu')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
8 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
8 Init DB liuhe
8 Query SHOW TABLES LIKE 'liu'
8 Query LOCK TABLES `liu` READ /*!32311 LOCAL */
8 Query show table status like 'liu'
8 Query SET SQL_QUOTE_SHOW_CREATE=1
8 Query SET SESSION character_set_results = 'binary'
8 Query show create table `liu`
8 Query SET SESSION character_set_results = 'utf8'
8 Query show fields from `liu`
8 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `liu`
8 Query SET SESSION character_set_results = 'binary'
8 Query use `liuhe`
8 Query select @@collation_database
8 Query SHOW TRIGGERS LIKE 'liu'
8 Query SET SESSION character_set_results = 'utf8'
8 Query UNLOCK TABLES
8 Quit
Comparing 2.6.1 and 2.6.2 for a MyISAM table: with --lock-all-tables, FLUSH TABLES WITH READ LOCK read-locks every table in the instance; without it, only the table being dumped is read-locked (LOCK TABLES `liu` READ). With --lock-all-tables there is no explicit UNLOCK TABLES, because nothing can be written to the database during the whole dump and the read lock taken by FLUSH TABLES WITH READ LOCK disappears as soon as the session ends. Without it, only one table is locked and it is unlocked explicitly, though that lock too would be released when the session ends, so the explicit UNLOCK is not strictly necessary.
2.7 Dumping a whole database with no extra options: all tables in the database are locked at the same time and only unlocked at the end.
bogon:root@/mysql/data/data>mysqldump -uroot -pliuwenhe liuhe > /mysql/blocks_infos_opensearch.sql

bogon:root@/mysql/data/data>more bogon.log
/usr/local/mysql/bin/mysqld, Version: 5.6.38-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 21:21:44 9 Quit
180429 21:22:21 10 Connect root@localhost on
10 Query /*!40100 SET @@SQL_MODE='' */
10 Query /*!40103 SET TIME_ZONE='+00:00' */
10 Query SHOW VARIABLES LIKE 'gtid\_mode'
10 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHE
MA.PARTITIONS WHERE TABLE_SCHEMA IN ('liuhe'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE_GROUP_NAME
10 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SC
HEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('liuhe')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
10 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
10 Init DB liuhe
10 Query show tables
10 Query LOCK TABLES `blocks_infos` READ /*!32311 LOCAL */,`blocks_infos1` READ /*!32311 LOCAL */,`blocks_inf
os_opensearch` READ /*!32311 LOCAL */,`liu` READ /*!32311 LOCAL */,`test` READ /*!32311 LOCAL */
10 Query show table status like 'blocks\_infos'
10 Query SET SQL_QUOTE_SHOW_CREATE=1
10 Query SET SESSION character_set_results = 'binary'
10 Query show create table `blocks_infos`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show fields from `blocks_infos`
10 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos`
180429 21:22:23 10 Query SET SESSION character_set_results = 'binary'
10 Query use `liuhe`
10 Query select @@collation_database
10 Query SHOW TRIGGERS LIKE 'blocks\_infos'
10 Query SHOW CREATE TRIGGER `tri_insert_blocks_infos1`
10 Query SHOW CREATE TRIGGER `tri_update_blocks_infos1`
10 Query SHOW CREATE TRIGGER `tri_delete_blocks_infos1`
10 Query SHOW CREATE TRIGGER `tri_delete_blocks_infos`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show table status like 'blocks\_infos1'
10 Query SET SQL_QUOTE_SHOW_CREATE=1
10 Query SET SESSION character_set_results = 'binary'
10 Query show create table `blocks_infos1`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show fields from `blocks_infos1`
10 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos1`
10 Query SET SESSION character_set_results = 'binary'
10 Query use `liuhe`
10 Query select @@collation_database
10 Query SHOW TRIGGERS LIKE 'blocks\_infos1'
10 Query SET SESSION character_set_results = 'utf8'
10 Query show table status like 'blocks\_infos\_opensearch'
10 Query SET SQL_QUOTE_SHOW_CREATE=1
10 Query SET SESSION character_set_results = 'binary'
10 Query show create table `blocks_infos_opensearch`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show fields from `blocks_infos_opensearch`
10 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `blocks_infos_opensearch`
180429 21:22:24 10 Query SET SESSION character_set_results = 'binary'
10 Query use `liuhe`
10 Query select @@collation_database
10 Query SHOW TRIGGERS LIKE 'blocks\_infos\_opensearch'
10 Query SET SESSION character_set_results = 'utf8'
10 Query show table status like 'liu'
10 Query SET SQL_QUOTE_SHOW_CREATE=1
10 Query SET SESSION character_set_results = 'binary'
10 Query show create table `liu`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show fields from `liu`
10 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `liu`
10 Query SET SESSION character_set_results = 'binary'
10 Query use `liuhe`
10 Query select @@collation_database
10 Query SHOW TRIGGERS LIKE 'liu'
10 Query SET SESSION character_set_results = 'utf8'
10 Query show table status like 'test'
10 Query SET SQL_QUOTE_SHOW_CREATE=1
10 Query SET SESSION character_set_results = 'binary'
10 Query show create table `test`
10 Query SET SESSION character_set_results = 'utf8'
10 Query show fields from `test`
10 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `test`
10 Query SET SESSION character_set_results = 'binary'
10 Query use `liuhe`
10 Query select @@collation_database
10 Query SHOW TRIGGERS LIKE 'test'
10 Query SET SESSION character_set_results = 'utf8'
10 Query UNLOCK TABLES
10 Quit
2.8 Dumping every database in the instance with --all-databases (or -A) and no other options:
When the whole instance is dumped with --all-databases (or -A) and no other options, the log below shows that mysqldump works database by database: it read-locks all tables in database A, dumps A, runs UNLOCK TABLES, then read-locks all tables in database B, dumps B, and unlocks again. So without any other options, a full-instance dump can only guarantee that the tables within one database are consistent with each other; consistency between databases is not guaranteed.
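The invocation for this test is of this general form (a sketch; the exact output path used is not shown in the log below):
mysqldump -uroot -p --all-databases > /mysql/all_databases.sql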
[root@oracle3 ~]# more /home/mysql/data/data/oracle3.log
/usr/local/mysql/bin/mysqld, Version: 5.6.39-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /tmp/mysql.sock
Time Id Command Argument
180429 21:58:24 35 Query show variables like '%general_log_file%'
180429 21:58:30 35 Quit
180429 21:58:45 36 Connect root@localhost on
36 Query /*!40100 SET @@SQL_MODE='' */
36 Query /*!40103 SET TIME_ZONE='+00:00' */
36 Query SHOW VARIABLES LIKE 'gtid\_mode'
36 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SC
HEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE_GR
OUP_NAME
36 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FR
OM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
36 Query SHOW DATABASES
36 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
36 Init DB liuhe
36 Query SHOW CREATE DATABASE IF NOT EXISTS `liuhe`
36 Query show tables
36 Query UNLOCK TABLES
36 Init DB liuwenhe
36 Query SHOW CREATE DATABASE IF NOT EXISTS `liuwenhe`
36 Query show tables
36 Query LOCK TABLES `test` READ /*!32311 LOCAL */
36 Query show table status like 'test'
36 Query SET SQL_QUOTE_SHOW_CREATE=1
36 Query SET SESSION character_set_results = 'binary'
36 Query show create table `test`
36 Query SET SESSION character_set_results = 'utf8'
36 Query show fields from `test`
36 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `test`
36 Query SET SESSION character_set_results = 'binary'
36 Query use `liuwenhe`
36 Query select @@collation_database
36 Query SHOW TRIGGERS LIKE 'test'
36 Query SET SESSION character_set_results = 'utf8'
36 Query UNLOCK TABLES
36 Init DB mysql
36 Query SHOW CREATE DATABASE IF NOT EXISTS `mysql`
36 Query show tables
36 Query LOCK TABLES `columns_priv` READ /*!32311 LOCAL */,`db` READ /*!32311 LOCAL */,`event` READ /*!32311
LOCAL */,`func` READ /*!32311 LOCAL */,`help_category` READ /*!32311 LOCAL */,`help_keyword` READ /*!32311 LOCAL */,`help_relation`
READ /*!32311 LOCAL */,`help_topic` READ /*!32311 LOCAL */,`innodb_index_stats` READ /*!32311 LOCAL */,`innodb_table_stats` READ /*!
32311 LOCAL */,`ndb_binlog_index` READ /*!32311 LOCAL */,`plugin` READ /*!32311 LOCAL */,`proc` READ /*!32311 LOCAL */,`procs_priv`
READ /*!32311 LOCAL */,`proxies_priv` READ /*!32311 LOCAL */,`servers` READ /*!32311 LOCAL */,`slave_master_info` READ /*!32311 LOCA
L */,`slave_relay_log_info` READ /*!32311 LOCAL */,`slave_worker_info` READ /*!32311 LOCAL */,`tables_priv` READ /*!32311 LOCAL */,`
time_zone` READ /*!32311 LOCAL */,`time_zone_leap_second` READ /*!32311 LOCAL */,`time_zone_name` READ /*!32311 LOCAL */,`time_zone_
transition` READ /*!32311 LOCAL */,`time_zone_transition_type` READ /*!32311 LOCAL */,`user` READ /*!32311 LOCAL */
A side note
Consider this: we know that with no options mysqldump read-locks the table being dumped, but what if --single-transaction is also used on MyISAM tables (so the resulting backup is inconsistent) and that backup is then used to build a slave or to restore to a point in time? What is the impact?
My understanding is that if only simple INSERTs ran during the dump, it does not matter much: replay will hit plenty of duplicate-key errors, which can simply be skipped or ignored. If UPDATEs ran, there can be real problems. Consider a few cases:
1) For point-in-time recovery: if the dump window contained something like update a set id=5 where id=4, the statement is effectively applied twice, which should not be a big problem.
2) When building a slave: the slave will error on such a statement because it cannot find the row; just skip it.
3) Whether restoring or building a slave, if the dump window contained something like update a set id=id+5, there is a real problem: applied twice, the data is all wrong.
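A sketch of why the last case is dangerous (table and values are made up):
-- idempotent case: replaying this twice is harmless, the second run simply matches no row
UPDATE a SET id = 5 WHERE id = 4;
-- non-idempotent case: suppose the inconsistent dump already contains the post-update value 15;
-- replaying this statement from the binlog turns it into 20 instead of the correct 15
UPDATE a SET id = id + 5;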

