Hello All,
I have been using Percona Server 5.1 for a while now, and I also have a Percona replication server set up for backup, just in case.
Now my user base has grown to 1000 users (yes, one thousand), and at times my current server cannot cope; my web server returns a “time out” error to the users, which is causing a lot of issues.
My company wants me to set up a cluster (which I have never done), so they have brought in a bigger, more powerful server. Since I have never done this before, would anyone here care to guide me on how I can go about this?
I have read that the minimum number of nodes required is 3, but that it is also doable with 2 nodes.
What I have now is:
One NEW server
One Web Server
Current Percona MySQL Server
Current Percona Replication Server
Please suggest!
Thank you all for your time and patience.
I would suggest having an expert look at your current database servers. Some smart improvements may considerably reduce the current workload.
As gmouse mentioned, getting a consultant in to take a look is usually your best bet in the long run. You can easily spend a lot of money on unnecessary / unusable systems when the requirements are driven by people who know little about databases / MySQL.
That aside, I would start by reviewing your work load. If your workload is mostly reads (often the case), then a cluster or simple replication slaves may do the trick to offload reads from the master to the slaves. If you are write heavy, that presents more of a challenge, as clusters and read slaves may not help with that, and you may have to look into sharding of some sort (difficult, complex, and can be costly). If it turns out you are read heavy, you can likely do a lot of query optimization to speed things along as well.
So the easiest way to review this is to run "SHOW GLOBAL STATUS LIKE 'Com%'" and compare the counts of reads (Com_select) versus writes (Com_insert, Com_update, Com_delete).
Certainly paying someone like myself to set this up for you (and help teach you how) is one option.
A self-starter option would be to go through my PXC tutorial available for free here: [url]https://github.com/percona/xtradb-cluster-tutorial[/url] and/or attend a conference where I give this tutorial.
Scott,
I’ll follow your instructions first and try to figure out which is higher (reads or writes).
Can you let me know how long I have to keep running this command, “SHOW GLOBAL STATUS LIKE 'Com%'”,
to figure out my requirements?
Jayj, I had a look at the link you provided and shall try to follow it; if I have problems, I know where to find you now.
I highly appreciate both of you for your valuable inputs.
Thank you,
Awaiting your reply, Scott!
Hello Scott / Jayj
I ran the command “SHOW GLOBAL STATUS LIKE ‘Com%’” on my server during business hours, and the results are attached with this response.
Can you please look into this and let me know if it is read-heavy or write-heavy?
Awaiting your update.
Thank you once again.
global-status.txt (7.03 KB)
Whether it’s read-heavy or write-heavy, simple query optimizations may help you more than a new server.
Those status variables are cumulative while the server is up, so as long as your server has been running for a while, running the command once will give you a general idea of the overall load. I’d then run it again maybe 24 hours later to see how much it changed over that period.
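Since the counters are cumulative, comparing two samples is just a per-counter subtraction. A rough shell sketch of that bookkeeping, using made-up snapshot numbers in place of real `SHOW GLOBAL STATUS` output:

```shell
# Two hypothetical snapshots of "counter value" pairs taken ~24h apart
# (in practice you would capture these from SHOW GLOBAL STATUS LIKE 'Com%').
cat > snap1.txt <<'EOF'
Com_insert 50
Com_select 1000
EOF
cat > snap2.txt <<'EOF'
Com_insert 80
Com_select 1900
EOF
# join pairs the files on counter name (input must be sorted by name);
# awk then prints how much each counter grew over the sampling window.
join snap1.txt snap2.txt | awk '{ print $1, $3 - $2 }'
```

With these made-up numbers the deltas would show 900 selects versus 30 inserts over the window, i.e. a read-heavy workload.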
And yes, as gmouse mentioned query optimization is always a good thing, and often solves the majority of “load” issues that people have due to poorly written queries. You’d be surprised at how much a single MySQL server can handle when the server is properly tuned and the queries properly optimized. =)
As for the results you posted, we can do a little analysis, but without a thorough review of your setup and the queries being run, we cannot say anything definitive.
Based on what you posted, it looks like about 98% of the statements are selects (rough estimate), which means you are pretty read-heavy. This does not mean that writes are not still an issue, as you could be doing huge writes in a single statement, have bad disk I/O, bad database schema design, etc. that make writes painful. But you can likely make huge improvements by optimizing your select queries first. Use the pt-query-digest tool to review your slow query log, optimize the worst queries, and go from there.
Statement Type       Count        Percent
Com_delete             1058127      0.51%
Com_insert             1287660      0.62%
Com_select           202367658     97.86%
Com_update             2061945      1.00%
Com_update_multi          7409      0.00%
Total                206782799    100.00%
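The percentages in the table above are simply each Com_* counter divided by the grand total. A quick awk check using the posted numbers:

```shell
# Reads: Com_select as a share of all statements
awk 'BEGIN { printf "reads:  %.2f%%\n", 202367658 / 206782799 * 100 }'
# Writes: delete + insert + update + update_multi as a share of the total
awk 'BEGIN { printf "writes: %.2f%%\n", (1058127 + 1287660 + 2061945 + 7409) / 206782799 * 100 }'
```

This prints roughly 97.86% reads versus 2.14% writes, matching the table.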
I really appreciate both of you for contributing to my query !!!
“gmouse”, you are always very concise, but sadly that leaves a lot of room for doubts and queries (no offence); I know you have a lot of other posts to attend to!
Scott, once again thank you for being so elaborate in your response and educating me and others too via this post.
With reference to your first pointer, I have run the query “SHOW GLOBAL STATUS LIKE ‘Com%’;” once again on the server, and the results are again attached with this post. I would like one last view from you on it; please enlighten me with your comments and suggestions.
global-status-1.txt (6.58 KB)
Looks like you either restarted your server or ran “FLUSH STATUS”, as the variables have been reset. But this still shows what has happened since then, and it is still about 94% SELECTs.
So as mentioned earlier, I’d start by processing the slow query log with pt-query-digest and addressing some of the worst queries to see if that helps with the load issue. Keep in mind that short-duration queries that are run very often can be just as bad (or worse) than a very long-running query that is only run a few times. pt-query-digest helps you with this by ranking your slow queries, so starting with that ranked list would be good. =)
Hello Scott,
Yes, indeed the server had been restarted by the client; that is the reason the figures reset.
With regards to the slow query log, I have not enabled it in my.cnf. Do you suggest I enable it for a day or two and then run pt-query-digest on the logs?
Highly appreciate your assistance into this matter !!
OK, I found something here:
[url]pt-query-digest — Percona Toolkit Documentation[/url]
So do you suggest I run the command as documented, or anything specific?
Thank you
If you enable the slow query log and leave the long query time at the default of 10 seconds, that should be adequate to start out. For a completely untuned system, 24 hours is probably enough to give you plenty of queries to look at. Just try to capture a normal business period, i.e. don’t do it during the least busy time, as that won’t be representative of a normal workload. Having the slow query log enabled does add some extra I/O to record the queries, so if you think the system is I/O bound, watch it for a bit to make sure things do not get backed up with the slow query log enabled. But unless the system is really maxed out, it should be fine.
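For reference, the my.cnf side of this looks roughly as follows. The file path is just an example, and on older versions such as 5.1 the option names can differ slightly (historically log-slow-queries), so check the docs for your exact version:

```ini
[mysqld]
# Enable the slow query log and choose where it is written
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Log statements that take longer than this many seconds (10 is the default)
long_query_time     = 10
```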
As for processing the slow query log with pt-query-digest, the default settings are normally good for basic query analysis and optimization. That will give you a ranked list of queries to look at, plus statistics about each, like total run time, total count, etc. Keep in mind that processing the slow query log can be I/O intensive as well, so I would not do it on the production server. And if the slow query log is fairly large, it may take a while for the tool to process, as it does a lot of summarizing.
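A minimal sketch of that workflow, assuming the log is first copied off the production box (the host name and paths here are just examples):

```shell
# Copy the slow log to a non-production machine, since digesting is I/O intensive
scp db1:/var/log/mysql/mysql-slow.log .
# Run pt-query-digest with default settings; the report starts with a
# profile of queries ranked by total time consumed
pt-query-digest mysql-slow.log > slow-report.txt
```

Start at the top of the ranked profile in slow-report.txt and work down.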
Thank you so much, Scott.
I have enabled the slow query log on the server, but I have not changed “long_query_time”; it is still commented out. So that means my queries will be logged every 10 seconds!!!
The rest of the points have been noted down. I shall get back to you tomorrow with an update on the results; it is business time here now. :)…
Thank you so very much once again.
Scott,
Since the slow queries are currently being logged, can you let me know what I need to pay attention to in the file, so that I can grep for it and attach it to this post for your reference?
Thank you for your understanding.
The long query time means the server will log queries that take over that amount of time, i.e. queries that take over 10 seconds to run by default. Not sure if that’s what you meant, but it is slightly different than what you said so wanted to clarify. =)
You do not really need to monitor the slow query log itself; just use pt-query-digest to process it after you have the sample you want (i.e. 24 hours’ worth of slow queries). Then just start at the top of the list that pt-query-digest gives you and see if you can optimize the queries by adding indexes or rewriting them where possible.
Thank you once again, Scott, for assisting me. I shall revert with any further queries, and also with the output of the slow logs for suggestions.
Thank you so much