Help/advice for my.cnf on dedicated DBServer

Hello,

I need some help/advice for my dedicated DBServer.

Here are the server specs:

2 x Dual Core Xeon 5110
6 GB DDR
2 x 73GB RAID
Linux version 2.6.18-3-686-bigmem (Debian 2.6.18-8~bpo.1) (nobse@backports.org) (gcc version 3.3.5 (Debian 1:3.3.5-13))

Only MySQL 4.1.22-standard is running on the server.
The DBServer is connected to the WebServer over a gigabit link.

Persistent connections are used in the PHP scripts.

I would like to know whether my.cnf is well optimized for such a server. Here it is:

[mysqld]
port = 3306
bind-address = 192.168.1.2
set-variable = max_connections=1000
safe-show-database
skip-name-resolve
key_buffer = 128M
myisam_sort_buffer_size = 128M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 8M
table_cache = 1800
thread_cache_size = 284
tmp_table_size = 64M
max_allowed_packet = 64M
thread_concurrency = 2
query_cache_limit = 2M
query_cache_size = 400M
query_cache_type = 1
query_prealloc_size = 16384
query_alloc_block_size = 16384

and here is the extended status:

| Variable_name | Value |
| Aborted_clients | 7559423 |
| Aborted_connects | 709324 |
| Binlog_cache_disk_use | 0 |
| Binlog_cache_use | 0 |
| Bytes_received | 3227180633 |
| Bytes_sent | 2698085351 |
| Com_admin_commands | 692807724 |
| Com_alter_db | 0 |
| Com_alter_table | 11 |
| Com_analyze | 0 |
| Com_backup_table | 0 |
| Com_begin | 0 |
| Com_change_db | 698742429 |
| Com_change_master | 0 |
| Com_check | 0 |
| Com_checksum | 0 |
| Com_commit | 0 |
| Com_create_db | 0 |
| Com_create_function | 0 |
| Com_create_index | 0 |
| Com_create_table | 0 |
| Com_dealloc_sql | 0 |
| Com_delete | 16725128 |
| Com_delete_multi | 0 |
| Com_do | 0 |
| Com_drop_db | 0 |
| Com_drop_function | 0 |
| Com_drop_index | 0 |
| Com_drop_table | 0 |
| Com_drop_user | 0 |
| Com_execute_sql | 0 |
| Com_flush | 1 |
| Com_grant | 0 |
| Com_ha_close | 0 |
| Com_ha_open | 0 |
| Com_ha_read | 0 |
| Com_help | 0 |
| Com_insert | 54970627 |
| Com_insert_select | 0 |
| Com_kill | 0 |
| Com_load | 0 |
| Com_load_master_data | 0 |
| Com_load_master_table | 0 |
| Com_lock_tables | 21 |
| Com_optimize | 22302 |
| Com_preload_keys | 0 |
| Com_prepare_sql | 0 |
| Com_purge | 0 |
| Com_purge_before_date | 0 |
| Com_rename_table | 0 |
| Com_repair | 27300 |
| Com_replace | 0 |
| Com_replace_select | 0 |
| Com_reset | 0 |
| Com_restore_table | 0 |
| Com_revoke | 0 |
| Com_revoke_all | 0 |
| Com_rollback | 0 |
| Com_savepoint | 0 |
| Com_select | 1964781708 |
| Com_set_option | 3050 |
| Com_show_binlog_events | 0 |
| Com_show_binlogs | 97 |
| Com_show_charsets | 626 |
| Com_show_collations | 626 |
| Com_show_column_types | 0 |
| Com_show_create_db | 0 |
| Com_show_create_table | 647 |
| Com_show_databases | 96 |
| Com_show_errors | 0 |
| Com_show_fields | 839 |
| Com_show_grants | 277 |
| Com_show_innodb_status | 0 |
| Com_show_keys | 71 |
| Com_show_logs | 0 |
| Com_show_master_status | 0 |
| Com_show_ndb_status | 0 |
| Com_show_new_master | 0 |
| Com_show_open_tables | 1 |
| Com_show_privileges | 0 |
| Com_show_processlist | 0 |
| Com_show_slave_hosts | 0 |
| Com_show_slave_status | 0 |
| Com_show_status | 40 |
| Com_show_storage_engines | 3 |
| Com_show_tables | 3693 |
| Com_show_variables | 1470 |
| Com_show_warnings | 0 |
| Com_slave_start | 0 |
| Com_slave_stop | 0 |
| Com_stmt_close | 0 |
| Com_stmt_execute | 0 |
| Com_stmt_prepare | 0 |
| Com_stmt_reset | 0 |
| Com_stmt_send_long_data | 0 |
| Com_truncate | 0 |
| Com_unlock_tables | 21 |
| Com_update | 520363594 |
| Com_update_multi | 0 |
| Connections | 6647522 |
| Created_tmp_disk_tables | 0 |
| Created_tmp_files | 12855 |
| Created_tmp_tables | 139543245 |
| Delayed_errors | 0 |
| Delayed_insert_threads | 0 |
| Delayed_writes | 0 |
| Flush_commands | 1 |
| Handler_commit | 0 |
| Handler_delete | 24485830 |
| Handler_discover | 0 |
| Handler_read_first | 3570653 |
| Handler_read_key | 2616539365 |
| Handler_read_next | 4223065123 |
| Handler_read_prev | 1225739368 |
| Handler_read_rnd | 303268371 |
| Handler_read_rnd_next | 940210884 |
| Handler_rollback | 0 |
| Handler_update | 4211170258 |
| Handler_write | 358121899 |
| Key_blocks_not_flushed | 0 |
| Key_blocks_unused | 109055 |
| Key_blocks_used | 30399 |
| Key_read_requests | 20911721687 |
| Key_reads | 42038656 |
| Key_write_requests | 42950661 |
| Key_writes | 26590746 |
| Max_used_connections | 1001 |
| Not_flushed_delayed_rows | 0 |
| Open_files | 1311 |
| Open_streams | 0 |
| Open_tables | 1664 |
| Opened_tables | 2355973 |
| Qcache_free_blocks | 5692 |
| Qcache_free_memory | 395729176 |
| Qcache_hits | 792984928 |
| Qcache_inserts | 1689011587 |
| Qcache_lowmem_prunes | 104384 |
| Qcache_not_cached | 276446640 |
| Qcache_queries_in_cache | 19485 |
| Qcache_total_blocks | 44685 |
| Questions | 4051645296 |
| Rpl_status | NULL |
| Select_full_join | 0 |
| Select_full_range_join | 0 |
| Select_range | 160490 |
| Select_range_check | 0 |
| Select_scan | 49379051 |
| Slave_open_temp_tables | 0 |
| Slave_retried_transactions | 0 |
| Slave_running | OFF |
| Slow_launch_threads | 281 |
| Slow_queries | 38068 |
| Sort_merge_passes | 6193 |
| Sort_range | 163303568 |
| Sort_rows | 1167894234 |
| Sort_scan | 141092778 |
| Table_locks_immediate | 1313186832 |
| Table_locks_waited | 1287658550 |
| Threads_cached | 282 |
| Threads_connected | 425 |
| Threads_created | 254787 |
| Threads_running | 8 |
| Uptime | 2063456 |
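(For reference, the key cache miss rate implied by these counters, computed directly from the figures above:

Key cache miss rate = Key_reads / Key_read_requests
                    = 42,038,656 / 20,911,721,687
                    ≈ 0.2 %

A miss rate well under 1 % is generally considered healthy, so the 128M key_buffer appears to be serving reads well.)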

My big problem is the load average during peak hours.

If you need more information, please tell me.

Thanks for your help.

Best Regards,
javvy

How big, in MB, are your tables and your indexes, respectively?
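(If you're not sure how to check, SHOW TABLE STATUS reports this per table; `your_db` below is a placeholder for your database name:)

SHOW TABLE STATUS FROM your_db;
-- Data_length  = size of the table data in bytes
-- Index_length = size of the indexes in bytes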

Does your application perform a lot of select/updates against the same table?

Your Table_locks_waited/Table_locks_immediate ratio is very high (1,287,658,550 waits against 1,313,186,832 immediate locks, roughly 98%), which indicates either:

1. That a lot of your queries aren't optimized and the selects take a long time.

2. That you are performing a lot of updates/selects against the same table, in which case you quickly run into the big disadvantage of MyISAM's table-level locking.

My suggestion is to check whether you have any queries that take a long time, or to switch to InnoDB tables, since they use row-level locking instead.
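A sketch of both options, using MySQL 4.1-era syntax (the log path, the 2-second threshold, and `your_table` are examples, not values from this thread):

# in my.cnf, [mysqld] section: log queries slower than 2 seconds
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2

-- per-table conversion to InnoDB (MyISAM tables only; HEAP tables
-- would lose their in-memory behaviour):
ALTER TABLE your_table ENGINE=InnoDB;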

Hello sterin,

First of all, thanks for your reply.

How big, in MB, are your tables and your indexes, respectively?

The database is 142.7 MB. I don't know how to see the size of the indexes.

Does your application perform a lot of select/updates against the same table?

Yes, it's a HEAP table. Here is its structure:

CREATE TABLE p (
t int(5) unsigned NOT NULL default '0',
p_id varchar(20) character set latin1 collate latin1_bin NOT NULL default '',
ip varchar(15) NOT NULL default '',
pt smallint(5) unsigned NOT NULL default '0',
u bigint(20) unsigned NOT NULL default '0',
d bigint(20) unsigned NOT NULL default '0',
t_g bigint(20) unsigned NOT NULL default '0',
s enum('yes','no') NOT NULL default 'no',
st datetime NOT NULL default '0000-00-00 00:00:00',
l_a datetime NOT NULL default '0000-00-00 00:00:00',
conn enum('yes','no') NOT NULL default 'yes',
UNIQUE KEY t_p_id (t,p_id),
KEY t_s (t,s),
KEY l_a (l_a),
KEY t (t)
) ENGINE=HEAP DEFAULT CHARSET=latin1;

For example, this query doesn't use the index: SELECT t FROM p GROUP BY t. How can I correct that?
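(A likely explanation: HEAP tables use hash indexes by default, and a hash index cannot be scanned in order, so it cannot satisfy a GROUP BY. Since MySQL 4.1, HEAP tables can also use BTREE indexes, which can. A sketch of the changed index declaration only, the rest of the table as above:)

KEY t USING BTREE (t)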

That a lot of your queries aren’t optimized and selects take a lot of time.

Yes, I'm going to log the slow queries and report back here. Perhaps you could help me optimize them :)

Best Regards,
javvy

You haven't done anything wrong.

That's about as good as it can get.

The only other addition you can make is to put la_acc last in the ip_en index.

That way you will also get rid of the filesort on the last query.

Beyond that you can't speed it up any more, unless the same queries are issued over and over, which is when the query cache can work wonders for you.

It's OK (only 'Using where' shows up now).

On the other hand, something strange is happening with my HEAP table p. It now seems to be limited to 99,270 records, but I'm sure the limit should be higher. I really don't understand where this limitation comes from. Do you have any idea? It has been like this since I modified the indexes with USING BTREE.

Thanks for your help… you're helping me a lot.

[B]Edit

Sorry, I found the solution: max_heap_table_size. Google is my friend too :smiley:
[/B]
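(For anyone hitting the same limit, a sketch; 64M is an example value, not the thread's actual setting. Note that in MySQL 4.1 the new size only applies to HEAP tables created, or recreated with ALTER TABLE, after the variable is changed:)

# my.cnf, [mysqld] section:
set-variable = max_heap_table_size=64M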

Good for you! → Google for you! :wink: