pt-online-schema-change fails at 99% with "Out of range value for column"

I am altering some INT(11) columns to BIGINT(20): both a primary key and a regular column.

CHANGE COLUMN id id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '', CHANGE COLUMN message_id message_id BIGINT(20) UNSIGNED NOT NULL COMMENT ''
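For reference, a full pt-online-schema-change invocation for an alter like this might look as follows (the host, database, user, and table names here are placeholders, not taken from this thread):

```shell
# Hypothetical invocation -- connection details (h=, D=, t=, u=) are placeholders.
pt-online-schema-change \
  --alter "CHANGE COLUMN id id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '', CHANGE COLUMN message_id message_id BIGINT(20) UNSIGNED NOT NULL COMMENT ''" \
  --execute \
  h=localhost,D=mydb,t=history,u=dbuser --ask-pass
```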

Twice now pt-online-schema-change has failed at 99% with the error below.

Error copying rows from database.history to database._history_new:
Copying rows caused a MySQL error 1264:
Level: Warning
Code: 1264
Message: Out of range value for column 'message_id' at row 13057

I've verified that _history_new has BIGINT(20) for the column it says is out of range. So why is the insert failing? The source history table has INT(11), so it can't contain values too large for BIGINT.

This is a very large table, and the INT range is almost maxed out. It takes about 7 hours to reach 99%, then fails.

Additional question: how can I speed up pt-online-schema-change for this change? I've temporarily upped the DB instance to something very large, and it doesn't utilise much of its power.

Any advice is greatly appreciated.


Unfortunately, MySQL 5.6 is unsupported and has been for a couple of years; even 5.7 reaches EOL in October 2023. My only suggestion is to try changing one column at a time. As for performance, queries in MySQL are single-threaded, so increasing the number of cores won't have much impact. Faster disks, innodb_flush_log_at_trx_commit=2, and sync_binlog=0 may help.
Barring all of that, your last option would be to create a replica, use a traditional ALTER to make the change there, and then promote the replica to replace the current server.
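The two durability settings mentioned above can be changed at runtime without a restart (assuming you have the SUPER privilege); they trade crash safety for write throughput, so revert them once the migration finishes:

```sql
-- Relax durability during the copy; revert afterwards.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write the redo log on commit, fsync about once per second
SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to sync the binary log
```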


Thanks for the advice. I'll try one column at a time. I am definitely moving away from 5.6 very soon. The same query worked on some slightly smaller tables, so it's pretty frustrating. I wish we could see what the values were for the failing insert attempt.

When it says "at row 15038", any idea which row or table it's referring to?


You can prepend PTDEBUG=1 to your command as a bash environment variable. Be warned: that will output a LOT of stuff (literally every row in the table), but it will show you every SQL statement being executed.
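Since the debug output is enormous on a table this size, it is worth redirecting it to a file you can grep later. A sketch (the --alter statement and connection values are placeholders, not taken from this thread):

```shell
# PTDEBUG output goes to stderr; capture it to a file instead of the terminal.
PTDEBUG=1 pt-online-schema-change \
  --alter "CHANGE COLUMN message_id message_id BIGINT(20) UNSIGNED NOT NULL" \
  --execute h=localhost,D=mydb,t=history,u=dbuser --ask-pass 2> pt-osc-debug.log
```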


Thanks. I'll try one column at a time, with verbose output. I appreciate your help.


Just an update: we found the issue. There were negative IDs in message_id, so of course they're out of range for an UNSIGNED column. Dunce hat applied. We'll get things patched up and running smoothly. Letting people know in case they hit something similar.

To find them on the large table, the verbose output gave us the chunk-sized range of IDs to check. We ran the failing query manually against a temporary table, saw the last ID it inserted, then went to the real table to see which row would have been next. That was the offender.
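For anyone hitting the same error, a more direct way to find such rows is to query for negatives before converting to UNSIGNED (the table and column names here match this thread; adjust for your schema):

```sql
-- A negative value cannot be stored in an UNSIGNED column and triggers error 1264.
SELECT id, message_id
FROM history
WHERE message_id < 0;
```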

Thank you @matthewb for your help and guidance.
