I am currently doing a case study regarding the usage of xtrabackup.
While the backup is running, reads from the tables still seem to work, but when the system tries to execute writes to the tables they appear to be blocked, apparently because a lock is being held.
Can anyone enlighten me about this, and how can I work around it?
What is happening is that whenever the backup is running and I try to access the application, I get a timeout error in the browser.
That is why I assume the application cannot write to the tables:
pages that only read from the tables load fine, but pages that write data to the tables are the ones that time out.
Locking is only done for MyISAM and other non-InnoDB tables, so your application database probably includes some MyISAM tables as well. Check how innobackupex works for further details.
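If you want to double-check, something like the following query against information_schema should list any non-InnoDB tables (the excluded system schemas are the usual ones; adjust as needed):

    -- list every table whose storage engine is not InnoDB,
    -- skipping the MySQL system schemas
    SELECT table_schema, table_name, engine
    FROM information_schema.tables
    WHERE engine <> 'InnoDB'
      AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');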
But upon checking, all my tables are InnoDB and none of them are MyISAM.
Is it possible that the entire database is on MyISAM?
We have tried using mysqldump with the --single-transaction parameter; it works well without locking any tables, and we can still execute writes to them. How can we achieve the same thing with Percona XtraBackup?
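For reference, the dump command we ran was roughly this (credentials, host and output path are placeholders):

    mysqldump --single-transaction --user=app_user --password \
        --host=localhost app_db > /backups/app_db.sql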
No, the storage engine is per table, not per database.
By default, innobackupex first copies all InnoDB data and then acquires a global read lock with FLUSH TABLES WITH READ LOCK before it starts to copy all the non-transactional files: MyISAM tables and indexes, .frm table definition files, etc.
If all your tables are InnoDB, this stage when the global lock is active should be very short; however, if you have many tables, copying the .frm files can take considerable time.
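While that global read lock is held, writes queue up behind it, so you can usually confirm this is what your application is hitting by checking the processlist from another session during the backup:

    -- blocked writers typically show the state 'Waiting for global read lock'
    SHOW FULL PROCESSLIST;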
You can use the --no-lock option if you want to skip the global read lock, but that is safe only if all tables are InnoDB and no tables are changed during the backup (no ALTER, DROP, etc.).
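A minimal invocation with that option could look like this (user, password and target directory are just placeholders for your environment):

    innobackupex --user=backup --password=secret --no-lock /data/backups/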
See The innobackupex Option Reference for more details: [URL]