Conflicting wait_timeout requirements

We’ve got two applications running off a shared database, both of which make use of persistent connections. One of these uses a large number of processes/connections, and recommends a very short wait_timeout value. The other uses only one or two persistent connections, but expects it to ALWAYS be there. If that connection times out (or the network connection is temporarily lost) the process continues to run, but in a “broken” fashion.

Obviously this is an issue with the second application that should be fixed, but in the meantime, can any of you think of a way to deal with this situation? Ideally I’d like to just set the timeout for the second application’s connection to a much higher value, but short of digging into the source to modify the parameters used to make the connection, I’m not sure how that could be accomplished. As far as I know it’s not possible to set a session system variable for a connection other than the current one, is it?
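
To make the conflict concrete (the numbers here are made up for illustration; 28800 is just MySQL’s stock default):

    -- what the first application's docs want: a very short server-wide timeout
    SET GLOBAL wait_timeout = 60;

    -- what the second application's lone connection needs -- but SET SESSION
    -- only applies to *my* current session, not to its already-open one
    SET SESSION wait_timeout = 28800;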

Any other ideas?

What about making your wait_timeout longer, but having the first application explicitly disconnect when it’s done (mysql_close() in PHP, or whatever the equivalent call is in its client API)?

That way the second app will still have its connection available, instead of a globally short wait_timeout affecting all apps…
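
Roughly, on the server side (number illustrative; note that SET GLOBAL only affects connections opened after the change):

    -- raise the server-wide timeout back to something generous
    SET GLOBAL wait_timeout = 28800;

with the first app then closing each connection explicitly instead of letting them idle until they time out.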

You can set wait_timeout per connection.

Just execute “SET wait_timeout = 42” after the connection is established. This should work with any reasonably current MySQL version.
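
For example (28800 is just the default; pick whatever suits the second app):

    SET SESSION wait_timeout = 28800;

    -- confirm it took effect, for this session only
    SHOW SESSION VARIABLES LIKE 'wait_timeout';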

Hope this helps

While both of those would work in theory, neither of these applications is in-house, which means that if we modify one we’ll need to maintain a patch, which I’d rather avoid. If it were possible, for example, to provide different default timeout values depending on the connecting user, that would do it; but short of some capability along those lines that I’m not aware of, it looks like patching things is the only way to go.

Thanks
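
For the record, there is one capability along those lines: the init_connect server variable holds a string of SQL statements that the server runs for every new connection from a user without the SUPER privilege. Assuming your two applications log in as different MySQL accounts (the account names and numbers below are made up), something like this might do it, untested:

    -- executed for each new non-SUPER connection; choose the session
    -- timeout based on which account just connected
    SET GLOBAL init_connect =
      "SET SESSION wait_timeout =
         IF(CURRENT_USER() LIKE 'app_two@%', 28800, 60)";

The main caveats are that init_connect is skipped for SUPER users, and that a connection fails if the statement errors out, so try it with a throwaway account first.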