Perfect sysctl.conf

Hi.

I’ve been googling like crazy for sysctl tweaks suitable for a heavily loaded database system, but I haven’t found any, which is why I’m turning to the forum.

I’ve put together a fairly nice sysctl.conf that I use with my lighttpd web servers, but I want something tuned for databases.

I guess the VM settings should be tweaked with regard to overcommit etc., as well as the network buffers, but I’m not sure where to start.

Anyone?

TCP memory

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
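The three rmem/wmem values are min, default and max buffer sizes in bytes, and the max only pays off when it covers the bandwidth-delay product (BDP) of the path. A rough sizing sketch — the link speed and RTT below are assumptions, plug in your own:

```shell
# Size the TCP buffer cap from the bandwidth-delay product (BDP).
# Assumed figures: 1 Gbit/s link, 100 ms RTT -- adjust for your network.
bandwidth_bits=1000000000   # link speed in bits per second
rtt_ms=100                  # round-trip time in milliseconds
bdp_bytes=$((bandwidth_bits / 8 * rtt_ms / 1000))
echo "BDP: $bdp_bytes bytes"   # 12500000, so a 16 MB cap leaves headroom
```

For a database serving clients on a low-latency LAN the BDP is far smaller, so maxima this large mostly cost memory per socket rather than buy throughput.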

Increase the number of incoming connections that can queue up before dropping

net.core.somaxconn = 262144
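Note that somaxconn is only a cap: whatever backlog a server passes to listen() is silently clamped down to it, so this setting matters only if the database also asks for a large backlog. A quick way to check the running value (the commented sysctl -w line is the assumed way you would apply it as root):

```shell
# Read the current accept-queue cap; listen() backlogs above it are clamped.
cat /proc/sys/net/core/somaxconn
# To raise it at runtime (as root), uncomment:
# sysctl -w net.core.somaxconn=262144
```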

Big queue for the network device

net.core.netdev_max_backlog=30000

Apache Scaling suggests 1000?

net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

Lots of local ports for connections

net.ipv4.tcp_max_tw_buckets = 1000000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_keepalive_time = 300

These ensure that TIME_WAIT ports either get reused or closed fast.

net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_tw_recycle = 1
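Before tuning these it is worth measuring whether TIME_WAIT sockets actually pile up; note also that tcp_tw_recycle is known to break clients behind NAT, so enable it with care. A counting sketch using /proc/net/tcp, where hex state code 06 is TIME_WAIT:

```shell
# Count sockets currently in TIME_WAIT (hex state 06 in /proc/net/tcp).
awk 'NR > 1 && $4 == "06"' /proc/net/tcp | wc -l
```

If the count stays low under load, the aggressive settings above buy nothing.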

Security

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_rfc1337 = 1

Disables IP source routing

net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.eth1.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

Tuning the FS

fs.file-max = 5049800
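fs.file-max caps the system-wide number of open file handles; the kernel reports current usage in /proc/sys/fs/file-nr as three fields (allocated, allocated-but-unused, maximum), which makes it easy to see how close the box actually gets to the limit:

```shell
# /proc/sys/fs/file-nr: allocated handles, unused-but-allocated, maximum.
read allocated free max < /proc/sys/fs/file-nr
echo "file handles in use: $((allocated - free)) of $max"
```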

Tuning the VM - According to http://kb.pert.geant2.net/PERTKB/ApacheScaling

vm.min_free_kbytes = 204800
vm.page-cluster = 20

Apache tuning suggests 200…

vm.swappiness = 10

[B]Quote:[/B]

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216

Well… I’m not an experienced sysctl optimizer, but aren’t these numbers a little bit big?

16777216 = 16 MB… that’s enough for a 1280 Mbit/s connection with an RTT of 100 ms. Just as a reminder, the default is 128K!
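The arithmetic here checks out: maximum TCP throughput is bounded by window size divided by RTT, so with a ~16 MB window (decimal, as rounded above) and 100 ms RTT:

```shell
# Throughput ceiling = window / RTT.
window_bytes=16000000   # ~16 MB, decimal, as in the post above
rtt_ms=100
mbit=$((window_bytes * 8 * 1000 / rtt_ms / 1000000))
echo "$mbit Mbit/s"   # 1280
```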

[B]Quote:[/B]

Increase the number of incoming connections that can queue up before dropping

net.core.somaxconn = 262144

When do you plan to answer the connections, if you queue so many up?

The same goes for the other numbers…
The defaults might not be ideal, but yours are so big that I don’t know if they really help performance.