[B]gjuan wrote on Wed, 22 September 2010 16:20[/B]
…
I have found some scripts in which they make a calculation using the MySQL system variables, but the result is always that memory is overallocated.
What is the best way to get a real value?
That's about it: if those scripts tell you that you are overallocated, you probably are.
But all those calculations usually assume the worst-case scenario.
The problem is that some memory allocations are (1) per server instance, some are (2) per connection, and some are (3) per connection but only when they are needed to resolve the specific query being run.
So the calculations are usually based on the worst-case scenario: the maximum number of connections allowed, with every one of those connections running complicated queries that allocate the additional per-query memory up to the maximum values in the config.
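For the sake of illustration, the arithmetic those scripts do looks roughly like the query below. The exact set of variables differs from script to script, so treat this particular selection as an assumption rather than the formula from any specific tool (and note that query_cache_size no longer exists in MySQL 8.0):

[code]
-- Rough worst-case arithmetic, of the kind the memory-calculator scripts do.
-- The variable list is illustrative only; real scripts pick different sets.
SELECT ROUND(
    ( ( @@key_buffer_size
      + @@innodb_buffer_pool_size
      + @@innodb_log_buffer_size
      + @@query_cache_size )        -- (1) allocated once per server instance
    + @@max_connections
      * ( @@read_buffer_size        -- (2)/(3) per connection, assuming every
        + @@read_rnd_buffer_size    --         connection uses every buffer at
        + @@sort_buffer_size        --         its configured maximum
        + @@join_buffer_size
        + @@thread_stack
        + @@tmp_table_size )
    ) / 1024 / 1024
) AS worst_case_mb;
[/code]

The per-server part is real, but the per-connection part is the worst case, which is exactly why the result comes out overallocated.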
But on a normal server you usually don't hit this scenario, because not all clients run the most complicated query all the time.
So you overallocate: to speed up execution of a single query you allow each connection to grab more memory, and then you hope that not all clients allocate all of that memory at the same time, because if they did your server would be swamped and could die.
So there is no easy answer. The best approach is to watch a production system with the OS tools while it is running, and adjust the settings so that you have enough headroom to handle peak load.
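If you ever run a server newer than the ones this thread is about (MySQL 5.7 and later, with the performance_schema memory instruments enabled), you can also watch allocations from inside the server instead of relying only on OS tools. A minimal sketch, using the sys schema views:

[code]
-- Total memory currently accounted for by the instrumented allocations:
SELECT * FROM sys.memory_global_total;

-- The biggest current consumers, to see where the memory actually goes:
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;
[/code]

On the versions current when this thread was written, the OS tools (top, ps, vmstat) during peak load are the way to go.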
That equation is incorrect and has never been correct, even in very early versions of the server. The truth is that there’s no equation that can give you the real or even worst-case memory consumption, and the previous response is right: run it in production and watch what happens.
Most of the configuration variables that people obsess over (sort buffer size, join buffer size, etc.) are fine at their default values, and should not be adjusted without careful analysis of actual real-world usage on a per-query basis. A great deal of harm can be caused by tinkering with them.