ProxySQL cluster and query logging

Hello folks!
I’m looking for a solution to store the queries that go through my ProxySQL cluster.

Background:
I have a cluster of three ProxySQL instances. Right now traffic goes through only one of them at a time, but if that node fails, another one takes over. So, three ProxySQL nodes, and I want all of them to store query logs regardless of which one the traffic goes through. Of course I could configure the nodes to write to three different log files, but since all of them share the same configuration file, maintaining per-node differences is hard, and I’d rather not do it.

The problem is that only one instance logs queries; the others don’t.

My question is: how did you guys solve this? Or is there an option to store the queries in a database?
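(For context, query logging in ProxySQL is enabled per node through the admin interface, roughly as below. This is a hedged sketch: the file path and the catch-all rule are examples, and the format variable assumes a reasonably recent ProxySQL 2.x.)

```sql
-- Enable the event log (run on the admin interface, typically port 6032).
SET mysql-eventslog_filename = '/var/lib/proxysql/queries.log';
SET mysql-eventslog_default_log_format = 2;  -- 2 = JSON, 1 = binary
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

-- Logging also requires a query rule with log=1; this example rule
-- matches every digest and lets rule processing continue (apply=0).
INSERT INTO mysql_query_rules (rule_id, active, match_digest, log, apply)
VALUES (100, 1, '.', 1, 0);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

Since all nodes share the same configuration, each node ends up logging to the same local path, which is what makes the shared-mount approaches below workable.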

Hello @Slawek,
I have two ideas:

  1. Use a shared mount point for the logs. On your NFS server, create three paths: /opt/logs/pxy1, /opt/logs/pxy2, and /opt/logs/pxy3. Then on each ProxySQL node, mount the corresponding path to the same location, e.g. mount nfs:/opt/logs/pxy1 /mnt/proxysql-logs. This way the configuration is identical on all three ProxySQL nodes.
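     Concretely, the per-node mount could look like this (the server name "nfs-server" and the paths are placeholders from the suggestion above; adjust to your environment):

     ```shell
     # On node pxy1 (one-off mount):
     mount -t nfs nfs-server:/opt/logs/pxy1 /mnt/proxysql-logs

     # Or persist it in /etc/fstab so it survives reboots:
     # nfs-server:/opt/logs/pxy1  /mnt/proxysql-logs  nfs  defaults,_netdev  0  0
     ```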

  2. Use a centralized logger. If you’re already using PMM, you can add Loki + Promtail to scrape the logs on each ProxySQL node and ship them to PMM.
    Turbocharging Percona Monitoring and Management With Loki’s Log-shipping Functionality
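     For the Promtail side, a minimal scrape config might look like the sketch below (the Loki URL, label values, and log path are placeholders, not taken from the thread):

     ```yaml
     server:
       http_listen_port: 9080
     clients:
       - url: http://loki:3100/loki/api/v1/push
     scrape_configs:
       - job_name: proxysql
         static_configs:
           - targets: [localhost]
             labels:
               job: proxysql
               host: pxy1
               __path__: /mnt/proxysql-logs/*.log
     ```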

Great ideas, Matthew! Thanks.

Actually I came up with the same idea as your first one. I didn’t mention that my ProxySQL instances run in Docker, so mounting a different path into each container is not a problem. But!
Then those logs would need to be aggregated into one continuous log file. I expect only one of my three instances to be in use at a time, so there shouldn’t be any overlap between them, but I still don’t want gaps in one log file and then have to check another file to see whether anything is there.

So, a simple question: how do I easily aggregate log files on the fly? I have some ideas, but I’m curious about yours.

Thanks,

You would need some external tool/program to aggregate the separate logs into one. The only things I can think of off-hand are what I mentioned above: Loki, Splunk, or Elasticsearch.
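If the logs are plain text and each line starts with a sortable timestamp, a one-shot merge is also possible with just coreutils; here is a minimal sketch with made-up file names and contents (for a continuously updated stream you could `tail -qF` all three files into one instead):

```shell
# Hypothetical per-node logs whose lines start with an ISO-8601 timestamp.
mkdir -p /tmp/pxylogs
printf '2024-01-01T10:00:00 SELECT 1\n' >  /tmp/pxylogs/pxy1.log
printf '2024-01-01T10:02:00 SELECT 3\n' >> /tmp/pxylogs/pxy1.log
printf '2024-01-01T10:01:00 SELECT 2\n' >  /tmp/pxylogs/pxy2.log
: > /tmp/pxylogs/pxy3.log  # idle node, empty log

# sort -m merge-sorts the already-sorted inputs without re-sorting them,
# producing one continuous, timestamp-ordered file.
sort -m /tmp/pxylogs/pxy*.log > /tmp/pxylogs/combined.log
cat /tmp/pxylogs/combined.log
```

This relies on each node's log already being in timestamp order, which holds if each node appends its own lines as it handles traffic.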