Xtrabackup 8.0.32 locks up the database with Waiting for table flush

When we were on xtrabackup 2.4 with MySQL 5.7.41, we ran for years with no issues.

But now that we have upgraded both MySQL and xtrabackup to 8.0.32, we have started hitting this locking issue:
| 1553944 | etmysqldba | localhost | NULL | Query | 223 | Waiting for table flush | FLUSH TABLES WITH READ LOCK

The only way to get out of this is to kill the xtrabackup process. Is there a workaround for this issue?
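For reference, the usual mitigation suggested for this stage is to have xtrabackup wait for, or kill, the queries that block FLUSH TABLES WITH READ LOCK instead of hanging on it. Below is only a sketch of such an invocation, assuming the documented --ftwrl-* and --kill-long-queries-* options behave the same in the 8.0.32 build; the target directory and every timeout value are placeholders:

# Sketch only; paths and thresholds are placeholders, not real settings.
# --ftwrl-wait-timeout=120        wait up to 120s for long-running queries to finish
#                                 before issuing FTWRL, then abort instead of hanging
# --ftwrl-wait-threshold=60       with the above, treat queries running >60s as "long"
# --kill-long-queries-timeout=20  after issuing FTWRL, kill queries still blocking it
#                                 after 20s (needs privileges to kill other users' queries)
# --kill-long-query-type=select   only kill SELECTs, never DML
xtrabackup --backup \
  --target-dir=/backups/full \
  --ftwrl-wait-timeout=120 \
  --ftwrl-wait-threshold=60 \
  --kill-long-queries-timeout=20 \
  --kill-long-query-type=select

The other route sometimes mentioned is --no-lock, but that is only safe if nothing writes to non-InnoDB tables and no DDL runs while the backup is taken.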

Can you please send us the full SHOW PROCESSLIST and xtrabackup logs up to this point?

Here is the xtrabackup log:
2023-10-09T00:26:40.241248-04:00 0 [Note] [MY-011825] [Xtrabackup] Executing FLUSH NO_WRITE_TO_BINLOG TABLES...
2023-10-09T00:26:40.618269-04:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (120125514515804)
2023-10-09T00:26:41.624403-04:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (120125515023760)

That kept repeating until I killed it:

2023-10-09T00:41:14.427462-04:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (120125778542255)
xtrabackup: Unknown error 1158
2023-10-09T00:41:14.455834-04:00 0 [ERROR] [MY-011825] [Xtrabackup] failed to execute query 'FLUSH TABLES WITH READ LOCK' : 2013 (HY000) Lost connection to MySQL server during query

Here is the SHOW PROCESSLIST output from while it was stuck:

| Id | User | Host | db | Command | Time | State | Info |
| 1553944 | xtrabackup_user | localhost | NULL | Query | 217 | Waiting for table flush | FLUSH TABLES WITH READ LOCK |
| 1555859 | user1_ro | 170.74.48.65:41740 | TestDB | Sleep | 7 | | NULL |
| 1555860 | user1_ro | 170.74.48.65:41741 | TestDB | Sleep | 7 | | NULL |
| 1556482 | user1_ro | ettest4w100m3:34486 | TestDB | Sleep | 616 | | NULL |
| 1556694 | user1 | ettest1w100m3:50928 | TestDB | Sleep | 746 | | NULL |
| 1558632 | searchuser | app110w100m3:38140 | TestDB | Query | 513 | Waiting for table flush | select tms.*, rtm.viewName, rtm.IBIRetryNo, rtm.IBIStatus, rtm.IBIVersion, (CASE rtm.RetryTo |
| 1560449 | user1 | test1w308m5:46906 | TestDB | Query | 345 | executing | SELECT corpId, taskName, taskTypeName, totalTime FROM TMSJobData WHERE startTime >= '2023-07-11 00:0 |
| 1560588 | user1_ro | ettest4w100m3:35134 | TestDB | Sleep | 8 | | NULL |
| 1560812 | user1 | test1w310m5:38742 | TestDB | Query | 217 | Waiting for global read lock | REPLACE into nodedata (Node,Channel,DataCenter,Application,EASI,Status,OS,CPU,MEM,SurgeQ,LB,LBstate, |
| 1561099 | user1 | test1w310m5:45534 | TestDB | Query | 216 | Waiting for global read lock | INSERT INTO ET_Dashboard (LogType,ResourceIdentifier,Datestamp,Environment,Application,ServerType,In |
| 1561148 | user1 | test1w310m5:46298 | TestDB | Query | 217 | Waiting for global read lock | REPLACE INTO prdWPMStats (Testname,Location,DateStamp,Type,H00,H01,H02,H03,H04,H05,H06,H07,H08,H09,H |
| 1561157 | user1 | test1w309m5:59308 | TestDB | Query | 216 | Waiting for global read lock | INSERT INTO Grafana_regexpTable (datestamp,easi,node,title,value) VALUES ('2023/10/09 00:35:00', 'pr |
| 1561164 | user1 | test1w308m5:33656 | TestDB | Query | 213 | Waiting for global read lock | REPLACE INTO WPMData (Datestamp,Type,Test,Player,Perf,Status) VALUES ('2023-10-09 00:35:26','DASH',' |
| 1561168 | user1 | test1w307m5:47644 | TestDB | Query | 212 | Waiting for global read lock | INSERT INTO Daily_Perf_Stats (Date,Environment,SvcName,AvgSvcTime,MinSvcTime,MaxSvcTime,AvgSvcVol,Mi |
| 1561170 | user1 | test1w310m5:47064 | TestDB | Query | 211 | Waiting for table flush | select count(*) from TMSJobData where 1=0 |
| 1561172 | searchuser | app110w100m3:38680 | TestDB | Query | 210 | Waiting for table flush | select tms.*, rtm.viewName, rtm.IBIRetryNo, rtm.IBIStatus, rtm.IBIVersion,
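
If I am reading the processlist right, the flush behind the FTWRL (id 1553944) cannot complete while a long-running statement still holds its tables open; the most likely blocker here is the 345-second SELECT on TMSJobData (id 1560449), and every later query then queues up behind the pending flush or the global read lock. Short of killing xtrabackup, the manual way out seems to be to find and kill that blocker. A rough sketch of the kind of check I mean (the 60-second threshold is arbitrary, and the KILL id is just taken from the listing above):

-- Long-running statements; the ones actually executing (not the ones already
-- waiting on the flush or the global read lock) are the likely blockers.
SELECT id, user, time, state, LEFT(info, 80) AS query_text
FROM information_schema.processlist
WHERE command = 'Query'
  AND time > 60
ORDER BY time DESC;

-- Then kill the offending statement by hand, e.g. the long SELECT above:
-- KILL QUERY 1560449;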

I also saw this in the xtrabackup.log:

2023-10-09T22:27:18.461287-04:00 1 [Warning] [MY-011825] [Xtrabackup] Log block checksum mismatch (block no 0 at lsn 120193034813440): expected 0, calculated checksum 3965168067 block epoch no: 0
2023-10-09T22:27:18.461321-04:00 1 [Warning] [MY-011825] [Xtrabackup] this is possible when the log block has not been fully written by the server, will retry later.