PXC 8.0 - nodes exit from the cluster after an MDL conflict

Hello all,

Can someone shed some light on the problem of MDL conflicts causing MySQL to shut down a node in the cluster?

Our environment is 3 PXC nodes with load balancing via ProxySQL:
node1 read-write
node2 read-only
node3 read-only

8.0.26-16.1 Percona XtraDB Cluster (GPL), Release rel16, Revision b141904, WSREP version 26.4.3

When the problem occurs:
When we execute mass ALTER TABLE statements (adding columns/keys/foreign keys) on the read-write node1, the other nodes may crash because of MDL conflicts.

node1 my.cnf

[mysqld]
datadir                 = /var/lib/mysql
tmpdir                  = /srv/mysql_tmpdir
internal_tmp_mem_storage_engine = MEMORY
user                    = mysql
default_storage_engine  = InnoDB
binlog_format           = ROW
skip-name-resolve	= 1
sql_mode                = ""
collation_server        = utf8_unicode_ci
character_set_server    = utf8

# WSREP #
wsrep_provider          = /usr/lib/galera4/libgalera_smm.so
wsrep_provider_options  = "gcache.size=1G;gcache.recover=yes;gcs.fc_limit=160;gcs.fc_factor=0.8;pc.weight=1;cert.log_conflicts=yes;"
wsrep_cluster_address   = gcomm://10.0.1.10,10.0.2.10,10.0.3.10
wsrep_node_address      = 10.0.1.10
wsrep_node_name         = pxc-node1
wsrep_sst_donor         = pxc-node3
wsrep_cluster_name      = pxc-cluster
wsrep_sst_method        = xtrabackup-v2
wsrep_retry_autocommit  = 10
wsrep_applier_threads     = 32
wsrep_certify_nonPK     = 1
wsrep_applier_FK_checks	= 0
wsrep_certification_rules = optimized
pxc_strict_mode         = PERMISSIVE
pxc-encrypt-cluster-traffic = OFF
wsrep_ignore_apply_errors = 7
lock_wait_timeout	= 7200

# LOGGING #
skip-log-bin
general_log             = ON
general_log_file        = /var/log/mysql/general.log
wsrep_log_conflicts     = ON
log_error               = /var/log/mysql/error.log
slow_query_log          = 1
slow_query_log_file     = /var/log/mysql/slow.log
long_query_time         = 1

# CACHES AND LIMITS #
tmp_table_size          = 32M
max_heap_table_size     = 32M
max_connections         = 1000
open_files_limit        = 240010
table_definition_cache  = 262144
table_open_cache        = 4096
wait_timeout            = 14400
sort_buffer_size        = 2M
thread_cache_size       = 80
thread_pool_size        = 16
performance_schema_digests_size = 1048576

# INNODB #
innodb_flush_method             = O_DIRECT
innodb_log_files_in_group       = 2
innodb_log_file_size            = 2G
innodb_flush_log_at_trx_commit  = 2
innodb_file_per_table           = 1
innodb_buffer_pool_size         = 64G
innodb_thread_concurrency       = 0
innodb_autoinc_lock_mode        = 2
innodb_buffer_pool_instances    = 64
innodb_read_io_threads          = 16
innodb_write_io_threads         = 16
innodb_io_capacity		= 200
innodb_open_files               = 262144
innodb_print_all_deadlocks      = 1

[sst]
progress = 1
compressor='zstd -2 -T6'
decompressor='zstd -d -T6'
backup_threads=6

[xtrabackup]
parallel=6

pxc-node2 mysql-error.log

2022-03-16T17:34:26.082428Z 196328 [Note] [MY-000000] [WSREP] MDL conflict db=xxx_dbname table=xxx_client ticket=10 solved by abort
2022-03-16T17:34:27.256059Z 199044 [Note] [MY-000000] [WSREP] MDL conflict db=xxx_dbname table=xxx_client ticket=10 solved by abort
2022-03-16T17:34:29.284658Z 14 [Note] [MY-000000] [WSREP] MDL BF-BF conflict

schema:  xxx_dbname
request: (thd-tid:14    seqno:119068119         exec-mode:toi, query-state:exec, conflict-state:committed)
          cmd-code:3 3  query:ALTER TABLE `xxx_common_client` ADD `phone_hash` varchar(255) UNIQUE)

granted: (thd-tid:18    seqno:119068125         exec-mode:high priority, query-state:exec, conflict-state:committing)
          cmd-code:3 167        query:(null))

2022-03-16T17:34:29.284706Z 14 [Note] [MY-000000] [WSREP] MDL ticket: type: shared write, space: TABLE, db: xxx_dbname, name: xxx_client
2022-03-16T17:34:29.284741Z 14 [ERROR] [MY-010119] [Server] Aborting
2022-03-16T17:34:29.284755Z 14 [Note] [MY-000000] [WSREP] Initiating SST cancellation
2022-03-16T17:34:31.290194Z 14 [Warning] [MY-000000] [Server] /usr/sbin/mysqld: Forcing close of thread 210453  user: 'xxx_dbname'
2022-03-16T17:34:31.290722Z 2 [Note] [MY-000000] [WSREP] rollbacker thread exiting 2
2022-03-16T17:34:31.290801Z 14 [Note] [MY-000000] [WSREP] Server status change synced -> disconnecting
2022-03-16T17:34:31.290829Z 14 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.
2022-03-16T17:34:31.290899Z 14 [Note] [MY-000000] [Galera] Closing send monitor...
2022-03-16T17:34:31.290921Z 14 [Note] [MY-000000] [Galera] Closed send monitor.
2022-03-16T17:34:31.290984Z 14 [Note] [MY-000000] [Galera] gcomm: terminating thread
2022-03-16T17:34:31.291028Z 14 [Note] [MY-000000] [Galera] gcomm: joining thread
2022-03-16T17:34:31.291822Z 14 [Note] [MY-000000] [Galera] gcomm: closing backend
2022-03-16T17:34:31.293414Z 14 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,a8594bf1-864a,11)
memb {
        d57e7da2-86e4,0
        }
joined {
        }
left {
        }
partitioned {
        a8594bf1-864a,0
        ea59cd1a-a3b1,0
        }
)
2022-03-16T17:34:31.293492Z 14 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2022-03-16T17:34:31.293519Z 14 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view ((empty))
2022-03-16T17:34:31.293782Z 14 [Note] [MY-000000] [Galera] gcomm: closed
2022-03-16T17:34:31.293951Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2022-03-16T17:34:31.294090Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [128, 160]
2022-03-16T17:34:31.294113Z 0 [Note] [MY-000000] [Galera] Received NON-PRIMARY.
2022-03-16T17:34:31.294130Z 0 [Note] [MY-000000] [Galera] Shifting SYNCED -> OPEN (TO: 119068160)
2022-03-16T17:34:31.294163Z 0 [Note] [MY-000000] [Galera] New SELF-LEAVE.
2022-03-16T17:34:31.294204Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [0, 0]
2022-03-16T17:34:31.294226Z 0 [Note] [MY-000000] [Galera] Received SELF-LEAVE. Closing connection.
2022-03-16T17:34:31.294244Z 0 [Note] [MY-000000] [Galera] Shifting OPEN -> CLOSED (TO: 119068160)
2022-03-16T17:34:31.294265Z 0 [Note] [MY-000000] [Galera] RECV thread exiting 0: Success
2022-03-16T17:34:31.294731Z 14 [Note] [MY-000000] [Galera] recv_thread() joined.
2022-03-16T17:34:31.294804Z 14 [Note] [MY-000000] [Galera] Closing replication queue.
2022-03-16T17:34:31.294828Z 14 [Note] [MY-000000] [Galera] Closing slave action queue.

Maybe there is some magic wsrep option that will prevent nodes from dropping out of the cluster during mass ALTERs and MDL conflicts? Or are we wishing for something weird that contradicts ACID?
Or maybe we need to tune wsrep flow control?
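
(If it helps: we can at least watch flow-control pressure with the wsrep status counters; steadily growing wsrep_flow_control_paused / wsrep_flow_control_sent or a large receive queue would suggest the applier on some node cannot keep up. A minimal check:)

SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';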

Thank you all in advance for your advice and tips.

2 Likes

Are you 100% certain that all nodes have the same schema before attempting these ALTERs? The default methodology for DDLs is to use TOI (wsrep_OSU_method), which tells all nodes to execute the ALTER at the same time. If all nodes have the same schema initially, the ALTER will execute just fine on all of them; but if there are differences before the ALTER, it will probably fail.
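
A quick sanity check (just a sketch; the database/table names below are the placeholders from your log, so substitute your own) is to compare the table definition on every node right before the ALTER:

SHOW CREATE TABLE xxx_dbname.xxx_client\G
SELECT COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_KEY
  FROM information_schema.COLUMNS
 WHERE TABLE_SCHEMA = 'xxx_dbname' AND TABLE_NAME = 'xxx_client'
 ORDER BY ORDINAL_POSITION;

If the output differs between nodes, a TOI ALTER is likely to fail on the node that differs.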

Have you isolated a specific ALTER that causes the disconnect, or is it a random one? Have you tried running a single ALTER instead of “mass ALTERs”?

3 Likes

Hi @oliko, thanks for posting to the Percona Forums, welcome!!

Have you used our data consistency checker called pt-table-checksum? With it you can check the integrity of the data across nodes in PXC. Have a look at our documentation:
https://www.percona.com/doc/percona-toolkit/LATEST/pt-table-checksum.html#percona-xtradb-cluster
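
A minimal invocation would look roughly like the following (a sketch only: the host, user, and password are placeholders, --recursion-method=cluster lets the tool auto-discover the other PXC nodes, and the exact options depend on your setup, see the linked docs):

pt-table-checksum h=10.0.1.10,u=pt_user,p=pt_pass --recursion-method=cluster --no-check-binlog-format --databases=xxx_dbname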

2 Likes

Hello @Michael_Coburn and @matthewb !
Thank you for your prompt answers! :blush:
Let me explain the specifics a bit. We change table structures often and constantly, as the product is under active development. There are about 500 databases in our cluster, and we often need to change the structure of the same table in every database at once. We analyzed the problems but could not find a pattern: it happens on different tables and different databases. As a rule, a single ALTER does not stop replication; it is the mass ALTERs that break it.
To complete the picture, over the past 2 months such MDL-related crashes have occurred 15 times, and each time we have to restart MySQL on the node and wait for SST.
Thanks for the tip about pt-table-checksum, we will try it very soon. However, we constantly have to re-synchronize nodes via SST, which should rule out differences in database structure between nodes. Also, we have never experimented with the wsrep_OSU_method option to change the table structure on only one node. By the way, could wsrep_OSU_method = NBO save us?
We have also just found that some DDL (ALTERs) was executed bypassing ProxySQL, so in practice node1 was serving the applications’ read/write requests while node2 was serving read requests plus DDL. Perhaps this also contributed to our constant MDL conflicts, right?
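
(For the routing part: if the DDL does go through ProxySQL, a query rule can pin ALTER/CREATE/DROP statements to the writer hostgroup. This is only a sketch against the ProxySQL admin interface; the rule_id and the writer hostgroup number 10 are assumptions we would have to adjust to our config.)

INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (50, 1, '^(ALTER|CREATE|DROP)\s', 10, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;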

1 Like

@oliko,
PXC can only manage one ALTER at a time. If you are attempting to execute multiple DDLs concurrently, that is not supported and will not work.

You should never have schema differences between PXC nodes. This will absolutely cause crashes. PXC nodes should always have the same data and the same schema at all times; that is the very nature of PXC/Galera, and any data or schema difference violates its fundamentals.

1 Like

We want to clarify the situation. On the nodes of our cluster, MDL locks are permanently held on the “common_user” table. They appear immediately after startup and synchronization with the other nodes. On different nodes we see different locked tables in different databases. Any change to the schema of such a table (for example, dropping a foreign key that references it) leads to a node crash.

Here is an example of a permanently hanging lock:

SELECT * FROM performance_schema.metadata_locks l LEFT JOIN performance_schema.threads t ON l.OWNER_THREAD_ID = t.THREAD_ID WHERE l.LOCK_DURATION = 'EXPLICIT'\G

 OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: dsf_geely
          OBJECT_NAME: common_user
          COLUMN_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140454290305904
            LOCK_TYPE: SHARED
        LOCK_DURATION: EXPLICIT
          LOCK_STATUS: GRANTED
               SOURCE: dictionary_impl.cc:438
      OWNER_THREAD_ID: 170
       OWNER_EVENT_ID: 113
            THREAD_ID: 170
                 NAME: thread/sql/THREAD_wsrep_applier
                 TYPE: FOREGROUND
       PROCESSLIST_ID: 27
     PROCESSLIST_USER: NULL
     PROCESSLIST_HOST: NULL
       PROCESSLIST_DB: NULL
  PROCESSLIST_COMMAND: Query
     PROCESSLIST_TIME: 2
    PROCESSLIST_STATE: NULL
     PROCESSLIST_INFO: NULL
     PARENT_THREAD_ID: 1
                 ROLE: NULL
         INSTRUMENTED: YES
              HISTORY: YES
      CONNECTION_TYPE: NULL
         THREAD_OS_ID: 1568767
       RESOURCE_GROUP: SYS_default

Please tell me what could be the reason for this behavior.

P.S. And of course we only run one ALTER at a time (no parallel ALTERs).
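
(In case it is useful: on MySQL 8.0 the sys schema has a view that correlates metadata-lock waiters with the session holding the lock, so when a DDL actually gets blocked, something like this should show who is blocking whom:)

SELECT * FROM sys.schema_table_lock_waits\G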

1 Like

Nodes in PXC will typically crash on purpose if the data differs between nodes; this is a safety mechanism, and data should never differ between nodes. Metadata locks occur even on plain MySQL, but since this is also PXC, the locks can have a bigger impact.

Have you tried using RSU or NBO options?
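
Both can be set per session. RSU applies the DDL only on the local node, so you run it on each node in turn (ideally after taking that node out of ProxySQL); NBO is available in recent PXC 8.0 releases. Roughly, using the column from your earlier log as an example:

SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE xxx_common_client ADD phone_hash varchar(255);
SET SESSION wsrep_OSU_method = 'TOI';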

1 Like

We have the same problem with PXC 8.0.26, wsrep_osu_method=TOI. Galera fails when there is a bunch of DDL statements in one transaction (add column, create indexes).
error.log:

[Note] [MY-000000] [WSREP] MDL BF-BF conflict
schema:  joy
request: (thd-tid:11 #011seqno:184031274 #011exec-mode:toi, query-state:exec, conflict-state:committed)
          cmd-code:3 2 #011query:CREATE INDEX `idx_phone_address` ON `object` (`phone_id`))
granted: (thd-tid:13 #011seqno:184031275 #011exec-mode:high priority, query-state:exec, conflict-state:committing)
          cmd-code:3 167 #011query:(null))
[Note] [MY-000000] [WSREP] MDL ticket: type: shared write, space: TABLE, db: joy, name: opn_joymediastatements
[ERROR] [MY-010119] [Server] Aborting
[Note] [MY-000000] [WSREP] Initiating SST cancellation
1 Like

Are you trying to ALTER while in the middle of an SST? That will never work. You cannot make changes to the schema while an SST is happening.
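
An easy way to confirm that before running DDL is to check that the node reports itself as Synced and ready:

SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
SHOW GLOBAL STATUS LIKE 'wsrep_ready';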

2 Likes

The nodes were synchronized and no SST was running. We've upgraded PXC to 8.0.27 and got another failure: 2 of the 3 nodes became non-Primary with the following error.log:

2022-06-27T10:52:36.938296Z 0 [Note] [MY-000000] [WSREP] MDL conflict db=joy table=#sql-5942_10 ticket=10 solved by abort
2022-06-27T10:52:52.680951Z 16 [Note] [MY-000000] [WSREP] MDL BF-BF conflict
schema:  joy
request: (thd-tid:16         seqno:7127893         exec-mode:toi, query-state:exec, conflict-state:committed)
          cmd-code:3 3         query:ALTER TABLE `object` MODIFY `phone_id` bigint NULL)
granted: (thd-tid:17         seqno:7127894         exec-mode:high priority, query-state:exec, conflict-state:committing)
          cmd-code:3 167         query:(null))
2022-06-27T10:52:52.681036Z 16 [Note] [MY-000000] [WSREP] MDL ticket: type: shared write, space: TABLE, db: joy, name: opn_joymediastatements
2022-06-27T10:52:52.682284Z 16 [ERROR] [MY-010119] [Server] Aborting
2022-06-27T10:52:52.683015Z 16 [Note] [MY-000000] [WSREP] Initiating SST cancellation
2022-06-27T10:52:54.685281Z 16 [Note] [MY-000000] [WSREP] Server status change synced -> disconnecting
2022-06-27T10:52:54.685304Z 1 [Note] [MY-000000] [WSREP] rollbacker thread exiting 1
2022-06-27T10:52:54.686051Z 16 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.
2022-06-27T10:52:54.686494Z 16 [Note] [MY-000000] [Galera] Closing send monitor...
2022-06-27T10:52:54.686665Z 16 [Note] [MY-000000] [Galera] Closed send monitor.
2022-06-27T10:52:54.686838Z 16 [Note] [MY-000000] [Galera] gcomm: terminating thread
2022-06-27T10:52:54.687003Z 16 [Note] [MY-000000] [Galera] gcomm: joining thread
2022-06-27T10:52:54.687723Z 16 [Note] [MY-000000] [Galera] gcomm: closing backend
2022-06-27T10:52:54.689156Z 16 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,667a6fd0-b5e5,25)
memb {
        c63e3a1a-bfa2,0
        }
joined {
        }
left {
        }
partitioned {
        667a6fd0-b5e5,0
        d80b3217-bbde,0
        }
)
2022-06-27T10:52:54.690214Z 16 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2022-06-27T10:52:54.690463Z 16 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view ((empty))
2022-06-27T10:52:54.691109Z 16 [Note] [MY-000000] [Galera] gcomm: closed
2022-06-27T10:52:54.691359Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2022-06-27T10:52:54.691649Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [25, 50]
2022-06-27T10:52:54.691817Z 0 [Note] [MY-000000] [Galera] Received NON-PRIMARY.
2022-06-27T10:52:54.691960Z 0 [Note] [MY-000000] [Galera] Shifting SYNCED -> OPEN (TO: 7127917)
2022-06-27T10:52:54.692105Z 0 [Note] [MY-000000] [Galera] New SELF-LEAVE.
2022-06-27T10:52:54.692283Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [0, 0]
2022-06-27T10:52:54.692428Z 0 [Note] [MY-000000] [Galera] Received SELF-LEAVE. Closing connection.
2022-06-27T10:52:54.692594Z 0 [Note] [MY-000000] [Galera] Shifting OPEN -> CLOSED (TO: 7127917)
2022-06-27T10:52:54.692759Z 0 [Note] [MY-000000] [Galera] RECV thread exiting 0: Success
2022-06-27T10:52:54.693228Z 16 [Note] [MY-000000] [Galera] recv_thread() joined.
2022-06-27T10:52:54.693393Z 16 [Note] [MY-000000] [Galera] Closing replication queue.
2022-06-27T10:52:54.693547Z 16 [Note] [MY-000000] [Galera] Closing slave action queue.

Master error.log:

2022-06-27T10:52:54.694171Z 240393 [Note] [MY-000000] [WSREP] --------- CONFLICT DETECTED --------
2022-06-27T10:52:54.694204Z 240393 [Note] [MY-000000] [WSREP] cluster conflict due to high priority abort for threads:
2022-06-27T10:52:54.694221Z 240393 [Note] [MY-000000] [WSREP] Winning thread:
   THD: 240393, mode: toi, state: exec, conflict: committed, seqno: 7127893
   SQL: ALTER TABLE `object` MODIFY `phone_id` bigint NULL
2022-06-27T10:52:54.694234Z 240393 [Note] [MY-000000] [WSREP] Victim thread:
   THD: 240383, mode: local, state: exec, conflict: committing, seqno: 7127894
   SQL: UPDATE `opn_joymediastatements` SET `object_id` = 892672, `user_id` = 942755, `hold_id` = 3489 WHERE `opn_joymediastatements`.`id` = 783564
2022-06-27T10:52:54.694414Z 240393 [Note] [MY-000000] [WSREP] MDL conflict db=joy table=object ticket=3 solved by abort
1 Like

processlist of the crashed node:

+------+-----------------+--------------------+------+---------+------+---------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
| Id   | User            | Host               | db   | Command | Time | State                                       | Info                                                                                                 | Time_ms | Rows_sent | Rows_examined |
+------+-----------------+--------------------+------+---------+------+---------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
|    1 | system user     |                    | NULL | Killed  |  522 | wsrep: preparing to commit write set(37282) | NULL                                                                                                 |  522269 |         0 |             0 |
|    8 | event_scheduler | localhost          | NULL | Daemon  | 2877 | Waiting on empty queue                      | NULL                                                                                                 | 2876578 |         0 |             0 |
|   12 | system user     |                    | NULL | Killed  |  522 | wsrep: preparing to commit write set(37283) | NULL                                                                                                 |  522130 |         0 |             0 |
|   13 | system user     |                    | NULL | Killed  |  522 | wsrep: committed write set (37276)          | NULL                                                                                                 |  522455 |         0 |             0 |
|   14 | system user     |                    | NULL | Killed  |  522 | wsrep: committed write set (37279)          | NULL                                                                                                 |  522440 |         0 |             0 |
|   15 | system user     |                    | NULL | Killed  |  522 | wsrep: committed write set (37278)          | NULL                                                                                                 |  522446 |         0 |             0 |
|   16 | system user     |                    | joy  | Killed  |  522 | altering table                              | CREATE INDEX `object_phone_g3hb7o_idx` ON `object` (`phone_id`)                                      |  522367 |         0 |             0 |
|   17 | system user     |                    | NULL | Killed  |  522 | wsrep: committed write set (37277)          | NULL                                                                                                 |  522451 |         0 |             0 |
|   18 | system user     |                    | NULL | Killed  |  523 | wsrep: committed write set (37273)          | NULL                                                                                                 |  522586 |         0 |             0 |
| 1935 | joy_user        | 10.250.3.216:33636 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521450 |         0 |             0 |
| 1936 | joy_user        | 10.250.3.216:33646 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521362 |         0 |             0 |
| 1937 | joy_user        | 10.250.3.216:33648 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521371 |         0 |             0 |
| 1938 | joy_user        | 10.250.3.216:33650 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521301 |         0 |             0 |
| 1939 | joy_user        | 10.250.3.216:33652 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521287 |         0 |             0 |
| 1940 | joy_user        | 10.250.3.216:33654 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521064 |         0 |             0 |
| 1941 | joy_user        | 10.250.3.216:33656 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  521108 |         0 |             0 |
| 1942 | joy_user        | 10.250.3.216:33658 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  520944 |         0 |             0 |
| 1943 | joy_user        | 10.250.3.216:33672 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `bookmarks_locate`.`id`, `bookmarks_locate`.`is_deleted`, `bookmarks_locate`.`locate_id`, `le |  520751 |         0 |             0 |
| 1944 | joy_user        | 10.250.3.216:33674 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  520827 |         0 |             0 |
| 1945 | joy_user        | 10.250.3.216:33680 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `bookmarks_locate`.`id`, `bookmarks_locate`.`is_deleted`, `bookmarks_locate`.`locate_id`, `le |  520798 |         0 |             0 |
| 1946 | joy_user        | 10.250.3.216:33684 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `object_course`.`id`, `object_course`.`partner_campaign_id`, `object_course`.`mode`, `object_ |  520749 |         0 |             0 |
| 1947 | joy_user        | 10.250.3.216:33688 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `locate`.`id`, `locate`.`mode`, `locate`.`is_active`, `locate`.`account_id`, `locate`.`produc |  520473 |         0 |             0 |
| 1948 | joy_user        | 10.250.3.216:33692 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `bookmarks_locate`.`id`, `bookmarks_locate`.`is_deleted`, `bookmarks_locate`.`locate_id`, `le |  520763 |         0 |             0 |
| 1949 | joy_user        | 10.250.3.216:33704 | joy  | Killed  |  521 | Waiting for table metadata lock             | SELECT `bookmarks_locate`.`id`, `bookmarks_locate`.`is_deleted`, `bookmarks_locate`.`locate_id`, `le |  520690 |         0 |             0 |
| 1950 | joy_user        | 10.250.3.216:33706 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `opn_notification`.`id`, `opn_notification`.`text`, `opn_notification`.`object_id`, `opn_noti |  520447 |         0 |             0 |
| 1951 | joy_user        | 10.250.3.216:33714 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  520178 |         0 |             0 |
| 1952 | joy_user        | 10.250.3.216:33724 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  520027 |         0 |             0 |
| 1953 | joy_user        | 10.250.3.216:33728 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  520002 |         0 |             0 |
| 1954 | joy_user        | 10.250.3.216:33730 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  519954 |         0 |             0 |
| 1955 | joy_user        | 10.250.3.216:33734 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  519779 |         0 |             0 |
| 1957 | joy_user        | 10.250.3.216:33752 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  519735 |         0 |             0 |
| 1958 | joy_user        | 10.250.3.216:33754 | joy  | Killed  |  520 | Waiting for table metadata lock             | SELECT `object`.`id`, `object`.`password`, `object`.`last_login`, `object`.`is_superuser`, `object`. |  519643 |         0 |             0 |
| 2021 | monitor         | 10.250.0.204:33272 | NULL | Sleep   |    7 |                                             | NULL                                                                                                 |    6759 |         0 |             0 |
| 2024 | monitor         | 10.250.1.230:49422 | NULL | Sleep   |    3 |                                             | NULL                                                                                                 |    2770 |         0 |             0 |
| 2025 | monitor         | 10.250.3.216:34054 | NULL | Sleep   |    2 |                                             | NULL                                                                                                 |    2387 |         0 |             0 |
| 2288 | root            | localhost          | NULL | Query   |    0 | init                                        | show processlist                                                                                     |       0 |         0 |             0 |
+------+-----------------+--------------------+------+---------+------+---------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
1 Like