Newbie: don't know where to start, require your assistance

Hi,

I’m new to MySQL; I’m a network engineer by trade. At the moment, I’ve set up MySQL Community Server 5.0.33 on RHEL 4 (64-bit, Update 3 kernel) with Hyper-Threading disabled. The physical server is a Dell PowerEdge 6840 with RAID 10, 32 GB RAM, and four dual-core Xeon processors.

I have to put this box into production, and I’d very much like to know what basic variables should be in place compared to my existing my.cnf (listed below). Currently the box handles about 21,000 transactions from 1,200 user sessions within 3 hours. I’m using the InnoDB storage engine.

At peak hours, when my clients are viewing their transaction lists, the problem starts. I notice all CPUs at about 70%, memory utilization around 3.2 GB, and mysqld spawning about 770 threads; SHOW PROCESSLIST shows about 700 rows. In about 75% of those rows a query is either executing or queued, and many of them show a “Copying to tmp table” state on a SELECT statement. As a result, many other functions stop responding: data input, changing prices, application log-in, etc.

When I check the slow query log, it shows that the transaction-viewing query is causing the slowness, as the statement involves a copy to a temporary table and several LEFT JOINs. CPU utilization caps at about 75% across all 8 cores.

Basically, I’m not very knowledgeable in programming, so at the moment I’m clueless: I have a big box, but I don’t know where to start troubleshooting. How should I carry out benchmark testing, with proper guidelines or variable settings that suit my server? Also, how do I calculate the database size?

I would be very pleased if someone could help me here.
Thank You

my.cnf Configuration
[client]
#password = [your_password]
port = 30303
socket = /usr/local/mysql/mysql.sock

[mysqld]
port = 30303
socket = /usr/local/mysql/mysql.sock

back_log = 50
max_connections = 3000
max_connect_errors = 20
table_cache = 4096
max_allowed_packet = 3000M
binlog_cache_size = 2M
max_heap_table_size =128M
sort_buffer_size = 16M
join_buffer_size = 16M
thread_cache_size = 16
thread_concurrency = 16
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 8
thread_stack = 384K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 4096M
log-bin=mysql-bin
log_slow_queries
long_query_time = 2
log_long_format

#*** MyISAM Specific options
key_buffer_size = 512M
read_buffer_size = 32M
read_rnd_buffer_size = 64M
bulk_insert_buffer_size = 128M
myisam_sort_buffer_size = 256M
myisam_max_sort_file_size = 20G
myisam_max_extra_sort_file_size = 20G
myisam_repair_threads = 1
myisam_recover

skip-bdb

#*** INNODB Specific options ***

innodb_buffer_pool_size = 8G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 8
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_log_file_size = 1024M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 80

[mysqldump]
quick
max_allowed_packet = 2048M

[mysql]
no-auto-rehash

# Only allow UPDATEs and DELETEs that use keys.

#safe-updates

[isamchk]
key_buffer = 1024M
sort_buffer_size = 1024M
read_buffer = 16M
write_buffer = 16M

[myisamchk]
key_buffer = 1024M
sort_buffer_size = 1024M
read_buffer = 16M
write_buffer = 16M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
open-files-limit = 65535

Here is the slow query log entry:

Query_time: 10 Lock_time: 0 Rows_sent: 40 Rows_examined: 7102923

SELECT H.IN-DAT,H.IN-REF,G.CATCOD,G.SUBCOD,H.PRZTKN,H.IN-AMT,MAX(H.PAMOUNT), G.CLRCOD,H.DIGITS,G.BGSM,G.ODDEVE,G.HRSGRP,G.ZDCGRP,H.WLDET, B.IN-GROUP,C.CTLGRP, H.CLOGID, H.ALOGIN, H.MALGIN, H.SHLGIN, S.DESCP, H.SHRHLD, H.MASAGT, H.AGENTS,H.WSTATUS,H.CCLASS,H.ID,H.IN-OIP,H.MEDISC, H.TBLCOD,MAX(H.RESULTAMT), R.TBLCOD,BZ.TYPE,H.SHDISC, H.MSLGIN,H.MASSHR FROM IN-HDR H LEFT JOIN SUBPCK G ON H.SUBPCCD = G.SUBPCCD LEFT JOIN IN-GROUP B on G.SUBPCCD = B.SUBPCCD LEFT JOIN SUBGRP C on G.SUBPCCD = C.SUBPCCD LEFT JOIN RESULT R on R.TBLCOD = H.TBLCOD AND R.TBLCOUNT>0 LEFT JOIN SYMTST S on H.CCLASS = S.DECCOD AND S.GRPCOD='CLASS' LEFT JOIN MEMMAS BZ ON BZ.USRLID = CLOGID WHERE H.TBLCOD = '2007043' AND H.STATUS='A' GROUP BY IN-REF,H.TBLCOD ORDER BY H.IN-DAT DESC,H.IN-REF LIMIT 0,40;

We are also facing the high-CPU issue and would appreciate it if someone could post their comments.

Mutt Leg

max_heap_table_size =128M
tmp_table_size = 4096M

You might want to set these to the same value: MySQL uses the smaller of the two as the limit for in-memory temporary tables, so a 4 GB tmp_table_size next to a 128 MB max_heap_table_size buys you nothing. 4 GB seems too high anyway, IMO.
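As a sketch, that part of my.cnf could look like this (the 128M value is illustrative, not a tuned recommendation for your workload):

```ini
# In-memory temp tables are capped at the SMALLER of these two values,
# so keeping them equal avoids a misleadingly large tmp_table_size.
max_heap_table_size = 128M
tmp_table_size      = 128M
```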

This seems extremely high:
max_allowed_packet = 3000M

Why would you want a packet that large? If you’re dealing with data on that scale, other methods of insertion (e.g. LOAD DATA INFILE) should be used. Note also that MySQL caps max_allowed_packet at 1G, so 3000M is out of range anyway.
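Incidentally, on the database-size question from the original post: one way to measure it (available since MySQL 5.0) is to sum the data and index lengths from information_schema. For InnoDB tables these figures are estimates, not exact byte counts:

```sql
-- Approximate size per schema, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM   information_schema.TABLES
GROUP  BY table_schema;
```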

On to the query:

SELECT H.IN-DAT,
H.IN-REF,
G.CATCOD,
G.SUBCOD,
H.PRZTKN,
H.IN-AMT,
MAX(H.PAMOUNT),
G.CLRCOD,
H.DIGITS,
G.BGSM,
G.ODDEVE,
G.HRSGRP,
G.ZDCGRP,
H.WLDET,
B.IN-GROUP,
C.CTLGRP,
H.CLOGID,
H.ALOGIN,
H.MALGIN,
H.SHLGIN,
S.DESCP,
H.SHRHLD,
H.MASAGT,
H.AGENTS,
H.WSTATUS,
H.CCLASS,
H.ID,
H.IN-OIP,
H.MEDISC,
H.TBLCOD,
MAX(H.RESULTAMT),
R.TBLCOD,
BZ.TYPE,
H.SHDISC,
H.MSLGIN,
H.MASSHR
FROM IN-HDR H
LEFT JOIN SUBPCK G
ON H.SUBPCCD = G.SUBPCCD
LEFT JOIN IN-GROUP B
ON G.SUBPCCD = B.SUBPCCD
LEFT JOIN SUBGRP C
ON G.SUBPCCD = C.SUBPCCD
LEFT JOIN RESULT R
ON R.TBLCOD = H.TBLCOD
AND R.TBLCOUNT>0
LEFT JOIN SYMTST S
ON H.CCLASS = S.DECCOD
AND S.GRPCOD = 'CLASS'
LEFT JOIN MEMMAS BZ
ON BZ.USRLID = CLOGID
WHERE H.TBLCOD = '2007043'
AND H.STATUS = 'A'
GROUP BY IN-REF,
H.TBLCOD
ORDER BY H.IN-DAT DESC,
H.IN-REF LIMIT 0,40;

Optimizations on WHERE: INDEX(H.TBLCOD, H.STATUS, H.IN-DAT, H.IN-REF)

INDEXES for the joins:
E.g. INDEX(G.SUBPCCD)

Some can use combined indexes:
INDEX(R.TBLCOD, R.TBLCOUNT)

And:
INDEX(S.DECCOD, S.GRPCOD)

This is just a rough guess, it could be very wrong.
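As a concrete sketch, the WHERE-clause index above could be added like this. The table and column names are taken from your query, so verify them against the real schema first; note that identifiers containing a hyphen, such as IN-HDR, must be backtick-quoted in MySQL:

```sql
-- Hypothetical index; check the actual names and existing indexes first
ALTER TABLE `IN-HDR`
  ADD INDEX idx_tbl_status (TBLCOD, STATUS, `IN-DAT`, `IN-REF`);
```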

Run an EXPLAIN on the query:

So that we can grasp the query/index utilization better.
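That is, put EXPLAIN in front of the exact SELECT you posted and paste the rows it returns; the key and rows columns show which index each table uses and how many rows each step examines:

```sql
EXPLAIN
SELECT H.`IN-DAT`, H.`IN-REF` /* ...rest of your column list... */
FROM `IN-HDR` H
LEFT JOIN SUBPCK G ON H.SUBPCCD = G.SUBPCCD
/* ...remaining joins as in your query... */
WHERE H.TBLCOD = '2007043' AND H.STATUS = 'A';
```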

Hi,

I’ve gone through the messages with my programmer. According to him, the statement has been tuned as far as it can be, as there are already a few indexes in place. Moreover, it’s the transaction table, so more than 20 data fields are in use. In the worst case, users view between 300 and 9,000 transactions at a time.

After my latest tuning on the Java web tier, it seems all the congestion has been diverted from the web server to the MySQL server. At the very peak, we see a significant jump in spawned threads, from 70-100 up to 700-1400. The process list grows to about 1,300 rows, with about 1,200 queued threads. Most of them are the same query listed earlier.

CPU utilisation is only about 40%-70%; maybe we are under-utilising thread concurrency at the OS level. At the memory level, usage is about 3 GB of RAM, and I/O is still mostly idle.

We monitored SQL performance; however, the query cache is not being used at all… I’m still struggling with the tuning.

I’ll try to get my programmer to generate the output for your perusal.