Migrating to Percona XtraDB Cluster with suboptimal queries

We have a PHP app that runs many dynamic queries. I'm not saying they are the best, but for now they are what we live with. We are looking at moving to Percona XtraDB Cluster, and we have three scenarios that update more than 1,000 rows:

  1. INSERT/UPDATE ... WHERE id IN (comma-delimited list of 1000+ IDs)
  2. INSERT INTO ... SELECT ... (the SELECT returns more than 1000 rows)
  3. DELETE FROM LOG WHERE Date < '11-22-2013'

Is there a preferred method of handling the different scenarios?

Further, how firm is the 1000-row limit? These make up a small share of our transactions (about 200 a day), and most of them affect 2,000-3,000 rows.

In general, large transactions are something to avoid in Galera replication, so if possible you should look for a way to split the updated row sets into smaller chunks.
One of the problems is that large transactions can cause significant memory overhead. This was addressed in the Galera 3.x versions, so it should be somewhat less of an issue, but you still should not let transactions grow too big. Of course, what counts as a big transaction depends a lot on the particular hardware, RAM size, network capacity, and so on.
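For example, scenario 3 can be split by deleting in fixed-size batches and repeating until nothing matches. This is only a sketch under assumptions: the table is LOG with a date column shown as Date, the literal is rewritten in MySQL's YYYY-MM-DD format, and the 1000-row batch size is an arbitrary starting point to tune for your hardware. The procedure name is made up for illustration.

```sql
-- Sketch of scenario 3 split into small batches, so each DELETE commits
-- (and replicates) as its own modest transaction instead of one huge write set.
-- Assumptions: table LOG, column `Date`, 1000 rows per batch.

DELIMITER //
CREATE PROCEDURE purge_log_in_batches()
BEGIN
  REPEAT
    DELETE FROM LOG
    WHERE `Date` < '2013-11-22'
    LIMIT 1000;                        -- small batch per iteration
  UNTIL ROW_COUNT() = 0 END REPEAT;    -- stop when nothing is left to delete
END //
DELIMITER ;

CALL purge_log_in_batches();
```

The same loop can just as easily be driven from the PHP application. Deleting with LIMIT and no ORDER BY replicates safely here because Galera uses row-based replication, and the same chunking idea applies to scenarios 1 and 2: split the comma-delimited ID list into smaller batches, or walk the INSERT ... SELECT through primary-key ranges.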