Last_Errno: 1008 - unable to mitigate

Hello!

History
Over the last ~6 months I have had my master and a few slaves replicating with no issues… I enabled GTID about a month ago and haven't had any issues since… replication was running smoothly across all 3 slaves… I'm new to the Percona/GTID/replication world but have been playing with MySQL for years…

Recently I built a new beefy server to replace my existing master… I'll call the new server M1… Early this week it was time to promote the new server to master and demote the existing master to a slave… I made sure all my my.cnf files had the appropriate options enabled, and followed this for reference: [url]https://www.percona.com/blog/2014/07/03/failover-mysql-utilities-part-2-mysqlfailover/[/url]

I issued the mysqlrpladmin --demote-master switchover command and it seemed to work. But then on the slaves I saw:


Slave_IO_Running: Yes
Slave_SQL_Running: No

Last_Errno: 1008
Last_Error: Error 'Can't drop database 'test'; database doesn't exist' on query. Default database: 'test'. Query: 'DROP DATABASE `test`'


2015-11-02 19:08:45 19788 [ERROR] Slave SQL: Error 'Can't drop database 'test'; database doesn't exist' on query. Default database: 'test'. Query: 'DR
2015-11-02 19:08:45 19788 [Warning] Slave: Can't drop database 'test'; database doesn't exist Error_code: 1008
2015-11-02 19:08:45 19788 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START".
2015-11-02 19:11:03 19788 [Warning] 'proxies_priv' entry '@ root@ubuntum1' ignored in --skip-name-resolve mode.


The 'test' db does not show up in any of the data dirs… I've made a new backup from the master and pushed it to a slave… still no change…
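Side note in case it helps anyone hitting the same wall: with GTID enabled, sql_slave_skip_counter no longer works, and the standard way to skip a single failing event like this is to commit an empty transaction with the offending GTID on the slave. This is only a sketch (not what I ended up doing) — the <source-uuid>:<failing-seq> value is a placeholder you have to read out of SHOW SLAVE STATUS\G yourself:

```sql
-- On the broken slave: "execute" the failing GTID as a no-op.
STOP SLAVE;
SET GTID_NEXT = '<source-uuid>:<failing-seq>';  -- placeholder: the GTID of the failing DROP DATABASE event
BEGIN; COMMIT;                                  -- empty transaction claims that GTID
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```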

On a slave I have tried:
[LIST]
[*]Making a backup from the new master and applying it to the slave.
[LIST]
[*]Using innobackupex and then running the --apply-log cmd on that folder.
[/LIST]
[*]Copying to the slave's mysql data dir… and applying correct perms.
[*]Running RESET MASTER and then SET GLOBAL gtid_purged="1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701";
[*]Running CHANGE MASTER TO with the HOST updated to the new master server IP.
[*]After I start the slave again I see the same error/issue…
[/LIST]
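In case the steps above are unclear, the restore attempt looked roughly like this (paths, hostnames and the backup timestamp directory are made up for illustration; the replication user/password would also need to be set if not already configured):

```shell
# 1. Take a backup on the new master (M1), then prepare it:
innobackupex /backups/                      # writes /backups/<TIMESTAMP>/
innobackupex --apply-log /backups/<TIMESTAMP>/

# 2. Copy into the slave's data dir and fix ownership:
rsync -av /backups/<TIMESTAMP>/ slave2:/var/lib/mysql/
ssh slave2 "chown -R mysql:mysql /var/lib/mysql"

# 3. On the slave, reset GTID state and point it at M1:
mysql -e "RESET MASTER;"
mysql -e "SET GLOBAL gtid_purged='1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701';"
mysql -e "CHANGE MASTER TO MASTER_HOST='<M1 IP>', MASTER_AUTO_POSITION=1;"
mysql -e "START SLAVE;"
```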
M1 is the new master
Slave 2 was the old master
Slave 3 was always a slave

This is the health of the servers as of 5min ago:


# Replication Topology Health:
+---------------+-------+---------+--------+------------+------------------------------------------------------------------------+
| host | port | role | state | gtid_mode | health |
+---------------+-------+---------+--------+------------+------------------------------------------------------------------------+
| Master-M1 | 3306 | MASTER | UP | ON | OK |
| Slave 2 | 3306 | SLAVE | UP | ON | SQL thread is not running., Slave has 744 transactions behind master. |
| Slave 3 | 3306 | SLAVE | UP | ON | SQL thread is not running., Slave has 744 transactions behind master. |
+---------------+-------+---------+--------+------------+------------------------------------------------------------------------+

UUIDS for servers


# UUIDS for all servers:
+---------------+-------+---------+---------------------------------------+
| host | port | role | uuid |
+---------------+-------+---------+---------------------------------------+
| Master-M1 | 3306 | MASTER | 952cc682-7ac4-11e5-ae12-d8cb8a7b044b |
| Slave 2 | 3306 | SLAVE | 54cfe8f1-81cc-11e5-9bea-f46d04991a6f |
| Slave 3 | 3306 | SLAVE | 5f8b69e7-7d0c-11e5-bcf1-d8cb8a7b0429 |
+---------------+-------+---------+---------------------------------------+
#
# Transactions executed on the server:
+---------------+-------+---------+------------------------------------------------+
| host | port | role | gtid |
+---------------+-------+---------+------------------------------------------------+
| Master-M1 | 3306 | MASTER | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701 |
| Master-M1 | 3306 | MASTER | 952cc682-7ac4-11e5-ae12-d8cb8a7b044b:1-770 |
| Slave 2 | 3306 | SLAVE | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701 |
| Slave 2 | 3306 | SLAVE | 54cfe8f1-81cc-11e5-9bea-f46d04991a6f:1-3667 |
| Slave 2 | 3306 | SLAVE | 952cc682-7ac4-11e5-ae12-d8cb8a7b044b:1-26 |
| Slave 3 | 3306 | SLAVE | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701 |
| Slave 3 | 3306 | SLAVE | 952cc682-7ac4-11e5-ae12-d8cb8a7b044b:1-26 |
+---------------+-------+---------+------------------------------------------------+
#
# Transactions purged from the server:
+---------------+-------+---------+------------------------------------------------+
| host | port | role | gtid |
+---------------+-------+---------+------------------------------------------------+
| Master-M1 | 3306 | MASTER | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-84916 |
| Slave 2 | 3306 | SLAVE | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701 |
| Slave 3 | 3306 | SLAVE | 1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701 |
+---------------+-------+---------+------------------------------------------------+


I ran pt-table-checksum and, afaik, it did not find any issues…

Please help… I'm not sure what direction to take from here… I appreciate any feedback. :slight_smile:

Got it working:

Exported the DB from the master to the slave…
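I don't have the exact dump command handy, but it was along these lines (flags reconstructed from memory — the --set-gtid-purged=OFF part matters, because it keeps mysqldump from writing SET @@GLOBAL.gtid_purged into the dump, so gtid_purged can be set by hand afterwards):

```shell
# On the new master (M1): full dump without embedded GTID state.
mysqldump --all-databases --triggers --routines --events \
          --set-gtid-purged=OFF > full.sql

# On the slave: load it, then set GTID state manually.
mysql < full.sql
```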

This was the key:


RESET MASTER;
SET GLOBAL gtid_purged="1e3e4bfb-f860-11e4-9bce-f46d04991a6f:1-134701,952cc682-7ac4-11e5-ae12-d8cb8a7b044b:1-785";

CHANGE MASTER TO MASTER_HOST="master IP", MASTER_USER="rep", MASTER_PASSWORD="reppassd", MASTER_AUTO_POSITION = 1;



Relay_Master_Log_File: mysql-bin.000014
Slave_IO_Running: Yes
Slave_SQL_Running: Yes