MySQL Percona Server slowdown

Hi there, people.
I'm having a really difficult problem with a MySQL server that keeps slowing down when it gets spikes of heavy queries.

Our project is set up like this:
-frontend on IIS 7.5 with .NET Framework 4.5
-API on IIS 7.5 with WCF, net.pipe (to the frontend) and MySQL Connector/NET 6.3 (to the DB)
-database on MySQL Percona Server (GPL), Release 31.0 (5.5.32-rel31.0)

When I get spikes on the API, or sometimes when I'm deploying the API, everything stops responding. When I go and look at what is happening, I see this:
-frontend: low CPU (large request queue)
-API: low CPU (large request queue)
-MySQL: slowed down (queries that normally run in 0-100 milliseconds start taking 1-5 seconds)
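
For reference, these are the standard MySQL commands I can run the next time it happens to see what the queries are waiting on (nothing here is Percona-specific):

SHOW FULL PROCESSLIST;                                -- what every connection is doing and for how long
SHOW ENGINE INNODB STATUS\G                           -- pending I/O, lock waits, buffer pool activity
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';   -- reads that missed the buffer pool and hit disk
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock_%';          -- row lock waits and total wait time
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';    -- implicit temp tables spilling to disk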

When this happened the last time, I tried the following:
-I took the frontend offline so it would not receive any requests
-I took the API offline so it would not receive any requests
-I killed the frontend w3wp process to clean it up
-I killed the API w3wp process to clean it up
-I killed all the running processes (connections) on the MySQL server

Then I tried to start everything again, but MySQL didn't recover; it kept slowing down. The only way I have to stabilize the project is to restart the MySQL server.

Data / requests per second:
-20 on average
-each request makes 10-20 stored procedure calls, so roughly 200-400 calls per second
-the MySQL server has about 3.5 GB of RAM occupied, and swap is at 4 GB

Configuration of the MySQL Percona server:
Virtualized server
4 CPU cores (6 GHz)
4 GB RAM
40 GB virtual disk
One database with 4 GB of data
200 tables
3000 stored procedures

[mysql]
[mysqld]

# GENERAL

group_concat_max_len = 65535
user = mysql
default_storage_engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
skip-external-locking
wait_timeout = 30
interactive_timeout = 30
max_connections = 50
thread_concurrency = 8

# MyISAM

key_buffer_size = 256M
myisam_recover = FORCE,BACKUP

# SAFETY

max_allowed_packet = 16M
max_connect_errors = 1000000

# DATA STORAGE

datadir = /mnt/data/mysql
tmpdir = /tmp

# BINARY LOGGING

log_bin = /mnt/data/mysql/mysql-bin
binlog_format = ROW
expire_logs_days = 1
sync_binlog = 1

# CACHES AND LIMITS

tmp_table_size = 128M
max_heap_table_size = 128M
query_cache_type = 0
query_cache_size = 0
thread_cache_size = 50
open_files_limit = 65535
table_definition_cache = 4096
table_open_cache = 10240
myisam_sort_buffer_size = 64M

# INNODB

innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 1
innodb_buffer_pool_size = 2G

# LOGGING

general-log-file = /var/log/mysql/mysql.log
log_error = /var/log/mysql/error.log
slow_query_log_file = /var/log/mysql/slow.log
log_queries_not_using_indexes = 0
slow_query_log = 1
long_query_time = 2

# TEXT SEARCH

ft_min_word_len = 2

# OTHERS

key_buffer = 32M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
max_allowed_packet = 16M
[mysqldump]
quick
max_allowed_packet = 32M
[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
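
Doing the math on these settings against the 4 GB of RAM, as a rough worst case (this assumes every one of the 50 connections allocates its full read buffers at the same time, which is pessimistic but shows the ceiling):

innodb_buffer_pool_size                       2048 MB
key_buffer_size (256M) + key_buffer (32M)      288 MB
per connection: read_buffer_size (2M)
              + read_rnd_buffer_size (16M)    ~18 MB
50 connections x ~18 MB                       ~900 MB
-----------------------------------------------------
rough worst case                              ~3.2 GB, before the OS and any
                                              in-memory temp tables (up to 128 MB each)

That would not leave much headroom on a 4 GB box, which fits with the swap usage I mentioned above.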

Any thoughts on where I should look to try to catch what is causing this on my database server?

So the database server lives on a dedicated VM, right? What did you observe there? Was it CPU bound or I/O bound? What about memory, was the server swapping during the slowdown? Our tools like pt-stalk, run at the critical moment, could give you many details about the situation.
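
For example, something like this (the paths and credentials are placeholders, and the threshold should be tuned to your normal load) will collect diagnostics automatically whenever Threads_running goes above 25:

pt-stalk --dest /var/log/pt-stalk --variable Threads_running --threshold 25 --user=root --password=...

It gathers process lists, InnoDB status, vmstat/iostat samples and so on into the destination directory, so you do not have to be logged in when the spike hits.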
Another thing is that some of your per-session buffers are really high. What was the reason you set:
read_buffer_size = 2M
read_rnd_buffer_size = 16M
?
Check this article: http://ronaldbradford.com/blog/dont-assume-per-session-buffers-2010-03-08/
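
Both of those variables have session scope, so one way to test the effect before touching my.cnf is to drop them back toward the 5.5 defaults on a single connection and compare timings:

SET SESSION read_buffer_size = 131072;      -- 128K, the 5.5 default
SET SESSION read_rnd_buffer_size = 262144;  -- 256K, the 5.5 default

If your queries are no slower with the defaults, the global values are only costing you memory.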

The problem was in the connector. I made some tweaks to it to improve it, and the problem disappeared.
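
For anyone hitting the same wall: I will not claim these were the exact changes, but with Connector/NET the usual place to look is the pooling options in the connection string. The option names below are real Connector/NET settings; the values are only an illustration, not a recipe:

Server=db01;Database=app;Uid=apiuser;Pwd=***;Pooling=true;Min Pool Size=5;Max Pool Size=50;Connection Lifetime=300;Connection Reset=true;

Keeping Max Pool Size at or below the server's max_connections (50 here) stops the pool from trying to open more connections than MySQL will accept.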