Possible memory leak in 5.0.67 b7

I run a MySQL cluster with a single master and multiple slaves, all on MySQL Enterprise 5.0.56. I decided to experiment with the latest Percona release on one of the slaves to see if there was a significant difference in performance. What I'm seeing is that the mysqld process slowly grows in memory until the system eventually starts swapping. I thought the Percona release might simply use slightly more memory overall, so I reduced the InnoDB buffer pool from 24G to 20G, but I am still seeing the same behavior. It looks like there is either a memory leak or a setting I need that I don't have. Here are my relevant config options:

skip-locking
key_buffer_size = 256M
max_allowed_packet = 16M
table_cache = 256
sort_buffer_size = 8M
myisam_sort_buffer_size = 8M

innodb_buffer_pool_size = 20G
innodb_log_file_size = 256M
innodb_file_per_table = 1
innodb_thread_concurrency = 0
innodb_file_io_threads = 10
innodb_max_dirty_pages_pct = 70
innodb_flush_method = O_DIRECT
innodb_support_xa = 0

query_cache_type = 1
query_cache_size = 64M
long_query_time = 1
max_connections = 2048
thread_cache_size = 32

With a 24G InnoDB buffer pool on the 5.0.56 Enterprise binary, the mysqld process is stable at 27G resident. The 5.0.67 Percona binary grows steadily until it overwhelms the system. Is there a setting I'm missing? Any help is appreciated.
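For comparison, a back-of-the-envelope estimate of what the settings above can consume may be useful. This is only a rough sketch, not an exact model: InnoDB adds bookkeeping overhead beyond the buffer pool itself, and per-thread buffers are allocated on demand rather than up front. It does show, though, that 2048 connections each entitled to 8M buffers can account for a lot of growth even without a leak:

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

# Global buffers from the my.cnf above (partial list).
global_buffers = (
    20 * GiB      # innodb_buffer_pool_size
    + 256 * MiB   # key_buffer_size
    + 64 * MiB    # query_cache_size
)

# Buffers that each connection may allocate (also a partial list;
# myisam_sort_buffer_size is only used for MyISAM repair/index builds).
per_thread = 8 * MiB + 8 * MiB   # sort_buffer_size + myisam_sort_buffer_size

max_connections = 2048

worst_case = global_buffers + per_thread * max_connections
print(f"global buffers: {global_buffers / GiB:.1f} GiB")
print(f"worst case with all connections busy: {worst_case / GiB:.1f} GiB")
```

The stable 27G resident size on 5.0.56 is consistent with the global buffers plus a realistic fraction of the per-thread allocations; steady unbounded growth past that is what points at a leak rather than configuration.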

-Aaron

Hello,

Sorry to bump an old post, but I think I am suffering from the same problem and wanted to know whether it is a known issue or not.

My setup is Debian Lenny with MySQL 5.0.67 b7 with the Yasufumi patch, and I am seeing the same situation: memory grows until the system starts swapping and eventually saturates or crashes.

An unmodified MySQL 5.0.32 does not give me this trouble (just some other issues, because I run on dual quad-core Xeons).
So this is my report, in case someone can tell us more about this situation. It would be great to know if I/we missed something in our settings.
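In case it helps the diagnosis, here is a minimal sketch of how the growth can be logged over time so the reports are comparable. The process name and output format are assumptions; adjust for your build. Run it from cron or a loop (e.g. every 60 seconds) and append to a file:

```shell
#!/bin/sh
# One-shot sampler: print a timestamp plus the total resident set size
# (RSS, in KiB) of all processes matching the given name.
sample_rss() {
    rss_kib=$(ps -o rss= -C "$1" | awk '{s += $1} END {print s + 0}')
    printf '%s %s %s KiB\n' "$(date '+%F %T')" "$1" "$rss_kib"
}

sample_rss mysqld
```

A slow, linear climb in this log across restarts of the workload is the pattern that suggests a leak rather than normal buffer warm-up.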

Regards,
Frederic