
slow performance

Ronny Görner (Supporter)
Hi, I experimented with nearly identical settings on a 5.7 and an 8.0 XtraDB Cluster, and insert performance on 8.0 is very poor. Can anyone confirm this?
We run the first GA release, with CEPH over a 2 x 10 GBit network as the underlying storage layer. Compared with the 5.7 cluster, write speed is now only about 25% of what it was before. We did change the collation, but that alone cannot explain a 75% drop, can it?
Best,
Ronny

Answers

  • Ronny Görner (Supporter)
    sysbench oltp_read_only --mysql-ssl=off --report-interval=1 --time=300 --threads=4 --tables=10 --table-size=10000000 --mysql-user=xxx --mysql-password=xxx run
    result: thds: 4 tps: 3454.67 qps: 55309.68
    sysbench oltp_write_only --mysql-ssl=off --report-interval=1 --time=300 --threads=4 --tables=10 --table-size=10000000 --mysql-user=xxx --mysql-password=xxx run
    result: thds: 4 tps: 48.98 qps: 285.86
    I ran both tests to completion: reads reach roughly 55,000 qps, while writes stay around 200-300 qps, which is very poor. I am out of ideas; I have tuned many variables (slave, wsrep, and various innodb parameters) without any luck.
    The underlying network is a 2 x 10 Gigabit infrastructure on Juniper Networks equipment, and the same setup runs fine on 5.7.
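    With reads this fast and writes this slow, one thing worth checking is Galera flow control: if any node applies slowly, the cluster pauses writers cluster-wide, and no network or CPU tuning on the other nodes will help. A minimal sketch for interpreting the relevant status counter (the mysql invocation and credentials are placeholders; the 0.1 threshold is a rough rule of thumb, not an official limit):

```shell
# Classify a wsrep_flow_control_paused value: the fraction of time the
# node spent paused by flow control since the last FLUSH STATUS.
# Values near 1.0 mean the cluster is throttling writes almost constantly.
interpret_flow_control() {
  awk -v p="$1" 'BEGIN { print (p > 0.1 ? "throttled" : "ok") }'
}

# Typical usage against a running node (placeholder credentials):
#   paused=$(mysql -uxxx -pxxx -N -e \
#     "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused'" | awk '{print $2}')
#   interpret_flow_control "$paused"
interpret_flow_control 0.85   # prints "throttled"
```

    If the value is high, the slowest node is the one to investigate, not the one you are benchmarking against.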

  • Marco.Tusa (Percona Staff)
    @Ronny Görner can you please share the node layout you are using and the my.cnf file?
  • Ronny Görner (Supporter)
    edited May 7
    I have 3 nodes, all connected over a redundant Juniper network switching infrastructure.
    The config is the same on all of them; of course IPs, node names, and so on differ on each of the 3 :smile:

    # Template my.cnf for PXC
    # Edit to your requirements.
    [client]
    socket=/var/run/mysqld/mysqld.sock
    default-character-set=utf8mb4

    [mysqld]
    server-id=3
    datadir=/var/lib/mysql
    socket=/var/run/mysqld/mysqld.sock
    log-error=/var/log/mysql/error.log
    pid-file=/var/run/mysqld/mysqld.pid

    # parameter

    pxc-encrypt-cluster-traffic=OFF

    sql_mode="NO_ENGINE_SUBSTITUTION"

    open_files_limit=80000
    table_open_cache=30000
    group_concat_max_len=4096
    max_connections=800
    max_sp_recursion_depth=255
    join_buffer_size=8M
    read_buffer_size=8M
    sort_buffer_size=8M
    myisam_sort_buffer_size=256M
    thread_cache_size=64
    thread_stack=32M
    tmp_table_size=10G
    max_heap_table_size=10G
    max_allowed_packet=256M
    read_rnd_buffer_size=8M
    preload_buffer_size=16M
    net_buffer_length=16M
    query_prealloc_size=16M
    table_definition_cache=30000

    innodb_read_io_threads = 64
    innodb_write_io_threads = 64
    innodb_io_capacity = 5000
    innodb_io_capacity_max = 10000
    innodb_thread_concurrency=0
    innodb_buffer_pool_size=64G
    innodb_log_buffer_size=512M
    innodb_log_file_size=1G
    innodb_concurrency_tickets=8192
    innodb_open_files=80000

    # SET NAMES also sets collation_connection, so a single init_connect suffices
    init_connect='SET NAMES utf8mb4 COLLATE utf8mb4_0900_ai_ci'
    collation_server=utf8mb4_0900_ai_ci
    character-set-server=utf8mb4
    skip-character-set-client-handshake

    # Binary log expiration period is 604800 seconds, which equals 7 days
    binlog_expire_logs_seconds=604800

    # Path to Galera library
    wsrep_provider=/usr/lib/galera4/libgalera_smm.so

    # Cluster connection URL contains IPs of nodes
    #If no IP is found, this implies that a new cluster needs to be created,
    #in order to do that you need to bootstrap this node
    wsrep_cluster_address=gcomm://1.2.3.4,1.2.3.5,1.2.3.6

    # In order for Galera to work correctly binlog format should be ROW
    binlog_format=ROW

    # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
    innodb_autoinc_lock_mode=2

    # Node IP address
    wsrep_node_address=1.2.3.4

    # Cluster name
    wsrep_cluster_name=my-cluster

    #If wsrep_node_name is not specified,  then system hostname will be used
    wsrep_node_name=my-cluster-node-n

    #pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
    pxc_strict_mode=ENFORCING

    # SST method
    wsrep_sst_method=xtrabackup-v2

    # limit to 1/2 of cores
    wsrep_slave_threads=16
    wsrep_log_conflicts

    # size of the cache for the replication - default is very low
    wsrep_provider_options="gcache.size=4G; gcache.page_size=2G;"
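    A side note on the session buffers above: join_buffer_size, read_buffer_size, sort_buffer_size, and read_rnd_buffer_size are allocated per session (some per operation), so with max_connections=800 the worst case adds up quickly on top of the 64G buffer pool. A rough back-of-the-envelope check (pessimistic by assumption, since not every connection allocates every buffer at once):

```shell
# Worst-case session-buffer exposure: 4 buffer types x size x connections.
# Pessimistic sketch; real usage is lower, but this shows the ceiling.
session_buffer_gb() {  # session_buffer_gb BUF_MB CONNECTIONS
  awk -v b="$1" -v c="$2" 'BEGIN { printf "%d", 4 * b * c / 1024 }'
}

session_buffer_gb 8 800   # prints 25 (GB, on top of the 64 GB buffer pool)
```

    tmp_table_size and max_heap_table_size at 10G are likewise per in-memory temporary table, which can compound the exposure under concurrency.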
  • Ronny Görner (Supporter)
    After switching the filesystem from btrfs to ext4, we now have:
    thds: 4 tps: 253.59 qps: 1541.48
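    A 5x jump from a filesystem change points at fsync latency on the datadir: every committed transaction needs a durable write, so slow fsync caps write tps directly, regardless of network speed. A rough probe (a sketch: dd's oflag=dsync is GNU-specific, and the target directory is a placeholder you should point at the filesystem holding /var/lib/mysql):

```shell
# Write 100 4 KiB blocks, each synced to disk, and report dd's summary
# line (bytes, elapsed time, throughput). Low MB/s here means slow
# synchronous writes, which directly limits commit rate.
probe_fsync() {  # probe_fsync DIR
  dd if=/dev/zero of="$1/fsync_probe" bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
  rm -f "$1/fsync_probe"
}

probe_fsync /tmp
```

    On a CEPH-backed volume this probe measures the full network round trip per sync, which is often the real bottleneck.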

  • Ronny Görner (Supporter)
    With wsrep_slave_threads=16 and 32 sysbench write threads, we get around 4,000 qps. Going beyond 32 client threads does not seem to help, presumably because of the 16-applier limit. The docs are confusing: on the one hand they say half of the available CPU cores, on the other hand close to the maximum number of connections. So which is it?
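    On sizing wsrep_slave_threads: both figures in the docs are upper bounds, not targets. The useful signal is the status variable wsrep_cert_deps_distance, which reflects how many transactions could on average be applied in parallel; configuring more appliers than that buys nothing. A small sketch of that reasoning (the exact heuristic here is an assumption, not an official formula):

```shell
# suggest_applier_threads CORES CERT_DEPS_DISTANCE
# Start from half the cores, then cap at the observed parallelism
# (wsrep_cert_deps_distance), whichever is lower.
suggest_applier_threads() {
  awk -v c="$1" -v d="$2" 'BEGIN {
    half = int(c / 2); if (half < 1) half = 1
    print (d < half ? int(d) : half)
  }'
}

suggest_applier_threads 32 40   # prints 16: parallelism exceeds half the cores
suggest_applier_threads 32 5    # prints 5: low parallelism, fewer appliers help
```

    In practice: measure wsrep_cert_deps_distance under your real write load, then raise the thread count only while throughput still improves.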