Is it possible to turn off QAN, including ClickHouse?

I’m using PMM version 2.25.0. At the moment I don’t use the QAN feature, so I would like to turn it off to reduce memory usage, as ClickHouse consumes too much memory. :frowning:

Is it possible? If so, how do I do that?

Thanks. :slight_smile:


Hi @chadr thanks for posting to the Percona forums!
There isn’t a supported way of turning off ClickHouse in PMM. Possibly the simplest workaround would be to remove the executable bit from the binary /usr/bin/clickhouse, for example:

chmod -x /usr/bin/clickhouse

Thanks for your reply. It would be great if PMM could support such a feature in the future. :slight_smile:

@chadr

What kind of resource usage are you seeing from ClickHouse? An idle server should use no more than 100 MB.

Another approach could be to define a query source that never produces data. For example, with MySQL you could pass --query-source=slowlog while leaving slow_query_log=OFF on the server.
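A rough sketch of that approach (the service name, credentials, and address below are illustrative placeholders, not from the thread):

```shell
# Sketch (untested): point QAN at the slow log while MySQL never writes one.

# Ensure the slow log is disabled on the MySQL side:
mysql -e "SET GLOBAL slow_query_log = OFF;"

# Register the instance with the slow log as the (effectively empty) query source:
pmm-admin add mysql --query-source=slowlog \
  --username=pmm --password=secret ps80 127.0.0.1:3306
```

QAN then has nothing to collect for that service, while metrics collection continues as normal.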

What kind of resource usage are you seeing from ClickHouse? An idle server should use no more than 100 MB.

I’m monitoring about 77 MongoDB instances at the same time.

ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND

root 126 2.2 3.8 83259844 15080300 ? SLl Feb06 43:01 /usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml

Currently, it consumes around 15GB of memory.

@chadr you could remove the QAN agent that is used. Here is an example of removing just the MySQL performance_schema QAN agent for an instance.

❯ pmm-admin list --json | jq '.agent[] | select(.agent_type == "QAN_MYSQL_PERFSCHEMA_AGENT").agent_id' -r
/agent_id/ef5bc4e2-26f5-47d8-a4a9-bcdfdde5d503

❯ pmm-admin inventory remove agent /agent_id/ef5bc4e2-26f5-47d8-a4a9-bcdfdde5d503
Agent removed.

❯ pmm-admin list
Service type        Service name        Address and port        Service ID
MySQL               ps80                127.0.0.1:33060         /service_id/2bc07e95-598b-46ad-bf1a-ff99f208e0e4

Agent type             Status           Metrics Mode        Agent ID                                              Service ID                                              Port
pmm_agent              Connected                            /agent_id/aa50b08b-f825-4a55-9fb3-3fc0b755d400                                                                0 
node_exporter          Running          push                /agent_id/b636aeb2-f0be-4166-952e-d3e6b8f41f19                                                                42000 
mysqld_exporter        Running          push                /agent_id/6a0e7827-4187-4978-9769-82b0e7290fd4        /service_id/2bc07e95-598b-46ad-bf1a-ff99f208e0e4        42003 
vmagent                Running          push                /agent_id/f15751b8-69dd-4a62-abe6-64f9659c0f1f                                                                42001

It should be QAN_MONGODB_PROFILER_AGENT instead of QAN_MYSQL_PERFSCHEMA_AGENT in your case. You could easily add this to any automation for new nodes, or even use it to remove agents that are already active. You can then leave ClickHouse alone and simply ignore QAN in the UI.
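Combining the two commands from the example above, a MongoDB variant could be sketched as a single pipeline (assumes jq is installed; removes every matching profiler agent):

```shell
# Sketch: remove all QAN MongoDB profiler agents in one pass (assumes jq).
pmm-admin list --json \
  | jq -r '.agent[] | select(.agent_type == "QAN_MONGODB_PROFILER_AGENT").agent_id' \
  | while read -r agent_id; do
      pmm-admin inventory remove agent "$agent_id"
    done
```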

Thank you for your reply! Yes, that would be an issue for me. I have a question, then: I’m adding a MongoDB instance using the command below. Is it guaranteed that the QAN_MONGODB_PROFILER_AGENT will be visible right after the command finishes, as shown below?

# Register a MongoDB instance
pmm-admin add mongodb --cluster=${PMM_ADMIN_MONGODB_CLUSTER} --skip-connection-check 

# Run the command to retrieve an Agent ID of `QAN_MONGODB_PROFILER_AGENT`
pmm-admin list --json

Is it guaranteed that the QAN_MONGODB_PROFILER_AGENT will be visible right after the command finishes?

That will depend upon a few things. What you need to do is wait for the agent to appear and then remove it, failing after a number of retries if it never shows up.

Here is an Ansible snippet that shows you the idea, checking to make sure that the agent is present:

    - name: Check agent status
      ansible.builtin.command: pmm-admin status --json
      register: pmm_client_status_json
      until: ( pmm_client_status_json.stdout | from_json | json_query('pmm_agent_status.agents[?agent_type == `QAN_MONGODB_PROFILER_AGENT`].agent_id') | length > 0)
      retries: 3
      delay: 10
      no_log: true

Such a task would:

  • Complete immediately when the agent is present in the output
  • Perform a loop of checks (3 retries, 10s delay) when the agent is absent
  • Fail after being unable to find an agent ID

In your case, since you want the agent to be absent, you should allow the task to “fail”, so long as it wasn’t a complete failure (e.g. nothing is registered and pmm-admin cannot talk to the server). You could change the until condition and add failed_when, e.g.:

      until: ( (pmm_client_status_json.rc is defined and pmm_client_status_json.rc == 0) and ( pmm_client_status_json.stdout | from_json | json_query('pmm_agent_status.agents[?agent_type == `QAN_MONGODB_PROFILER_AGENT`].agent_id') | length > 0)  )
      failed_when: (pmm_client_status_json.rc is undefined or pmm_client_status_json.rc > 0)
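For anyone not using Ansible, the same wait-then-remove logic could be sketched in plain shell (assumes jq; the retry count and delay mirror the Ansible task above):

```shell
# Sketch: wait for the profiler agent to appear, then remove it (assumes jq).
agent_id=""
for attempt in 1 2 3; do
  agent_id=$(pmm-admin list --json \
    | jq -r '.agent[] | select(.agent_type == "QAN_MONGODB_PROFILER_AGENT").agent_id')
  [ -n "$agent_id" ] && break
  sleep 10
done

if [ -n "$agent_id" ]; then
  pmm-admin inventory remove agent "$agent_id"
else
  echo "profiler agent never appeared; giving up" >&2
  exit 1
fi
```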

I tried to unregister the QAN_MONGODB_PROFILER_AGENT, but it had no effect. :frowning:
The clickhouse-server still consumes that much memory.

Could it be related to the message below?

2023.02.03 07:03:13.936153 [ 165 ] {} <Information> Application: Setting max_server_memory_usage was set to 453.21 GiB (503.57 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)

In fact, I’m running the PMM Server in a Docker container.


Could it be related to the message below?

Yes, that explains the high usage that you are looking to reduce.
You will find this in the ClickHouse config:

    <!-- Maximum memory usage (resident set size) for server process.
         Zero value or unset means default. Default is "max_server_memory_usage_to_ram_ratio" of available physical RAM.
         If the value is larger than "max_server_memory_usage_to_ram_ratio" of available physical RAM, it will be cut down.

         The constraint is checked on query execution time.
         If a query tries to allocate memory and the current memory usage plus allocation is greater
          than specified threshold, exception will be thrown.

         It is not practical to set this constraint to small values like just a few gigabytes,
          because memory allocator will keep this amount of memory in caches and the server will deny service of queries.
      -->
    <max_server_memory_usage>0</max_server_memory_usage>

Now, changing that is possible, but it would be reverted if you replaced the container image, although you could of course rebuild using PMM as the base image and adjust that value there. I am just checking whether it can be done without modifying the image, because I think that most options do not change what ClickHouse will see.
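One way to try it without rebuilding the image (a sketch, not official guidance; the container name and the 2 GiB cap are illustrative, and the change is lost whenever the container is recreated) is to edit the setting in place and restart ClickHouse:

```shell
# Sketch: cap ClickHouse at ~2 GiB inside a running PMM container named "pmm-server".
docker exec pmm-server sed -i \
  's|<max_server_memory_usage>0</max_server_memory_usage>|<max_server_memory_usage>2147483648</max_server_memory_usage>|' \
  /etc/clickhouse-server/config.xml

# PMM Server manages its services with supervisord; restart ClickHouse to apply:
docker exec pmm-server supervisorctl restart clickhouse
```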


Whilst I can control memory usage as a whole, I couldn’t make ClickHouse see anything different when it auto-configures its memory usage, so I have created PMM-11593


It seems that ClickHouse fundamentally requires a lot of memory. Please see this: Usage Recommendations | ClickHouse Docs

They recommend running ClickHouse with at least 32 GB of memory. :frowning:

If your system has less than 16 GB of RAM, you may experience various memory exceptions because default settings do not match this amount of memory. The recommended amount of RAM is 32 GB or more. You can use ClickHouse in a system with a small amount of RAM, even with 2 GB of RAM, but it requires additional tuning and can ingest at a low rate.

Yes, the ClickHouse server eats RAM like you wouldn’t believe. I just made a fresh install on Ubuntu and it takes about 12 GB of RAM while idle!

Please add a switch to disable this in one of your next releases, as it completely cripples the system.

@makromedia welcome to the forum!
My earlier comment shows the ticket that tracks changes relating to this issue :slight_smile:

@Ceri_Williams Thank you for your reply and the hint with the link to the ticket. It would be great if the memory usage of ClickHouse could be set in a simple way via the GUI.

I’m completely new to Percona. Am I right that ClickHouse is just responsible for tracking all the queries on the Percona server? So if it is disabled, the QAN feature within PMM will no longer be available?

You can find out more information about the architecture of PMM Server in the docs.