I know memory management is a complex issue, and perhaps this use-case is uninteresting or my configs are off. I’ve mostly been messing around with small instances, with <2G of memory.
Using consecutive runs of [queries from these two MariaDB issues] (on Percona XtraDB Cluster), or generic large-table dumps like the one in this ClusterControl example, I can fairly consistently get (PXC) mysqld killed by the OOM killer, regardless of the suggested* memory** config variables (for example, even using the exact config from the reference*** architecture article for Amazon EC2 micro instances).
For example, running enough of the following (from (466) above) will force any of my small Ubuntu 12.04 or CentOS 6.4 instances (with PXC from the Percona apt/yum repos) to kill mysqld:
CREATE TABLE t1 ( a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, b INT NOT NULL) ;
INSERT INTO t1 (b) VALUES (1),(2),(3),(4),(5),(6),(7),(8);
INSERT INTO t1 (b) SELECT t1a.b FROM t1 t1a, t1 t1b, t1 t1c, t1 t1d, t1 t1e, t1 t1f LIMIT 131064;
SELECT COUNT(*) FROM t1;
DROP TABLE t1;
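For reproduction, I just loop the statements above against a node from a script like the following (the file path, database name, and credentials are placeholders, not anything from my actual setup):

```shell
#!/bin/sh
# Write the repro statements above to a file.
cat > /tmp/repro.sql <<'EOF'
CREATE TABLE t1 ( a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, b INT NOT NULL) ;
INSERT INTO t1 (b) VALUES (1),(2),(3),(4),(5),(6),(7),(8);
INSERT INTO t1 (b) SELECT t1a.b FROM t1 t1a, t1 t1b, t1 t1c, t1 t1d, t1 t1e, t1 t1f LIMIT 131064;
SELECT COUNT(*) FROM t1;
DROP TABLE t1;
EOF

# Run it repeatedly against the node until the OOM killer fires;
# uncomment and adjust user/password/database for your setup.
# for i in $(seq 1 100); do
#     mysql -u root -p"$MYSQL_PWD" test < /tmp/repro.sql
# done
```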
Note that, as mentioned on the MariaDB issues, this only applies to Galera-enabled configs; disabling the Galera provider generally results in a much more stable memory footprint.
I understand that these queries might not be particularly interesting or representative, but in cases where you don’t have complete control over the types of queries that might be run, I think it’s useful to be (relatively) confident that mysqld won’t go OOM because of some creatively complex user query.
Throwing an error and telling the user that the query is invalid would be absolutely fine. I also noticed from the above that ‘wsrep_max_ws_rows’ and ‘wsrep_max_ws_size’ are suggested as ways to limit this type of event; however, I’ve never noticed related errors in the console or from mysqldump despite adjusting these values to seemingly low numbers (admittedly, I’m not sure what type of error would be thrown in the case where they’re working).
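For reference, the relevant lines in my my.cnf look roughly like this; the values are just examples of the low limits I experimented with, not recommendations:

```ini
[mysqld]
# Writeset limits I tried lowering; example values only.
wsrep_max_ws_rows = 131072
wsrep_max_ws_size = 134217728

# Galera provider options are passed through wsrep_provider_options;
# gcs.recv_q_hard_limit caps the replication receive queue (bytes).
wsrep_provider_options = "gcs.recv_q_hard_limit=268435456"
```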
So my question is:
Firstly, am I interpreting something wrong? Maybe the queries mentioned above, or large dumps, simply should not be able to run on ~micro/small-memory instances without OOM? But then why can they usually run without Galera? Might there be some important Galera params*** whose defaults are unsuitable for small-memory instances that I’m missing or using incorrectly (gcs.recv_q_hard_limit? do wsrep_max_ws_rows/size only apply in certain contexts?)
[*] Is it possible to use some internal mechanism to error/abort queries (including via the command line or mysqldump), such that queries that would go OOM instead error or abort beforehand?