Table full while deleting fast

Hi all, the situation is the following. Percona Server 5.7.29-32 and a big table. I'm alone on that table, so no other transactions are running on the database. I'm deleting with autocommit in batches that fit into the redo and undo logs. After some time I get a "table full" error. OK, I'm deleting too fast. I stopped further delete queries and "listened" to the purge jobs, watching how the dirty pages get written to disk (roughly as sketched below). After some time there are no I/O operations left. Silence. BUT: I can't operate on the table that caused the "table full" error. Something is blocked and I can't see what it is.

So, my question: what can I do to get such a table operational again without shutting down the server instance? What is left that blocks operations on the table even after everything seems to have been written to disk? Stopping or slowing down the delete statements is no problem. Shutting down a whole instance IS a problem.
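"Listening" here means roughly the following (just a sketch; sdb stands for whatever device holds the datadir):

  # write I/O on the device that holds the datadir
  iostat -xm 5 sdb

  # redo log / checkpoint progress
  mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -iE "log sequence|flushed up to|checkpoint"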
Probably someone can give me insights and/or hints on how to unlock such a situation.

Best regards
Andreas

Is the table using the InnoDB storage engine? Is it a standalone database, i.e. no replication going on? Are you using partitioning on the table? Are you forced to stick with 5.7? (This may well not be a problem, but it's a context question really…) What's the approach? Are you using a procedure with a LIMIT clause on the query (for example)?
I'm wondering if pt-archiver could help you out here. It sounds like a design tweak is needed, but without knowing a bit more it might be hard for the team to answer. But I think pt-archiver might be worth reviewing as an alternative approach. Here's a blog post too: https://www.percona.com/blog/2013/08/12/want-to-archive-tables-use-pt-archiver/
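Just to illustrate the idea (untested against your table; host, database, table name and the WHERE condition below are placeholders), a pt-archiver purge run can be batched and throttled roughly like this:

  # placeholder WHERE condition; adjust it to the rows you actually want to remove
  pt-archiver \
    --source h=localhost,D=somedb,t=sometable \
    --purge \
    --where "id > 12345" \
    --limit 1000 --txn-size 1000 \
    --sleep 1 \
    --statistics

--limit and --txn-size keep each chunk and transaction small, and --sleep gives the purge threads a chance to keep up between batches.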

Hi Lorraine, thank you for coming back to me. To your questions:
- InnoDB storage engine.
- Standalone DB (no replication).
- No partitioning.
- Yes, currently I have to stick with 5.7.
- delete from blablab order by primarykeycolumn limit x;

In a second attempt I just throttled down the delete process, roughly like the sketch below. The thing I don't understand: why do I get completely stuck without any chance to proceed? Only a complete shutdown seems to solve the problem. And the shutdown is fast, because almost all dirty buffers have already been written. What is locking up the table?
So, why can't I proceed as soon as the logs are flushed and cleaned up?
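For completeness, the throttled variant looks roughly like this (table name, constant and sleep interval are placeholders):

  while true
  do
    echo "delete from sometable where id > <constantnum> order by id limit 1000" | mysql -vv
    sleep 5   # pause so the purge threads can catch up
  done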

Hi Andreas,
I assume you're running out of space… is it due to undo log growth? What I would suggest is to take a look at the purging information, for example in the PMM graphs (InnoDB Details dashboard). Do you have purge activity going on? Does the purge "complete" after the delete batch was removed? It is possible there is something else going on in your system, like an open transaction which prevents purge from processing.
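For example, something like this from the mysql command line would show it (just a quick sketch, PMM is not required for these; adjust credentials as needed):

  # any open / long-running transactions that could prevent purge?
  mysql -e "SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query FROM information_schema.INNODB_TRX\G"

  # purge backlog: undo records not yet purged
  mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "History list length"

If the history list length stays high (or keeps growing) after the deletes have stopped, purge is not keeping up or is being held back.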


Peter said: Hi Andreas,
I assume you're running out of space ...

Hi Peter, thank you for your answer. Sorry that I respond so late, but the notification e-mail slipped through. I'm really interested in investigating this behaviour, because currently my only chance is to shut down the instance completely. A simple scenario is the following:
- Table with about 9 million rows.
- Primary key is a bigint autoincrement.
- On the command line I run

  while true
  do
    echo "delete from sometable where id > <constantnum> order by id limit 1000" | mysql -vv
  done

- Autocommit is on. When I check whether the rows are deleted, they seem to be deleted.
- I'm the only user and the above loop is the only connection on the db. So, in my opinion, there is no open transaction after the autocommit which still needs these data.
- The system tablespace has an upper bound.
- So, I'm also sure that the system tablespace gets filled up.
- After starting the instance, with a completely idle database, I have for example the following metrics:

  Log sequence number 3103792481241
  Log flushed up to   3103792481241
  Pages flushed up to 3103792481241
  Last checkpoint at  3103792481232
  Max checkpoint age    869019772
  Checkpoint age target 841862905
  Modified age          0

- After starting the above shell script, I can see the drift caused by deleting faster than the purger can purge.
- After a while I get this "table full" error on the delete statement. NOW I cancel the shell loop. No further statements are run against the database.
- With iostat and 'show engine innodb status' I can see that the purger is working.
- The gap between the metrics 'Log sequence number', 'Log flushed up to' and 'Pages flushed up to' shrinks.
- As soon as the gap is zero, I can't see any relevant write I/O on the filesystem.
- At this point the system should accept further delete statements on the above table. But it does not. I still get 'table full' (see the checks sketched below).
- Shutting down the instance (which is fast, because only a few pages have to be written) and starting it again lets me delete rows from the above table.
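If it helps, these are additional checks that could still be run at that "stuck" point (just a sketch; the database name and the error log path are placeholders):

  # is any session still holding the table or running a statement?
  mysql -e "SHOW OPEN TABLES FROM somedb WHERE In_use > 0"
  mysql -e "SHOW FULL PROCESSLIST"

  # look for InnoDB messages around the 'table full' error
  tail -n 50 /var/log/mysql/error.log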
I don't have PMM installed. If you think it is necessary to investigate the described behaviour, I will take the effort.

Best regards
Andreas