PMM Settings - Metrics Resolution - Help, I'm Down

I have about 2k units being monitored and some id10T user set the Metrics Resolution to Frequent

I've run out of ports and processes on the server, and while I can get the page to load, I can't seem to get it to move back to Rare or Standard. How can I change this manually?

The reason I kind of need to do it manually is that the API keeps returning 401 due to nginx and port exhaustion (netstat returns over 800k connections), and I've already adjusted the TCP settings trying to get it to take…
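For context, the TCP tuning I tried was roughly along these lines (the exact values here are just illustrative, not a recommendation):

    # sysctl knobs commonly used against ephemeral-port exhaustion
    # (illustrative values only)
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # widen the ephemeral port range
    sysctl -w net.ipv4.tcp_tw_reuse=1                     # reuse TIME_WAIT sockets for new outgoing connections
    sysctl -w net.ipv4.tcp_fin_timeout=15                 # reclaim closing sockets faster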

**Note:** this is a standalone system installed from the RPM packages, not a Docker instance.

Thank you!!!

I got a good laugh out of that one…we all work with that one special someone!

Have a look here: Change settings
using

  "metrics_resolutions": {
    "hr": "5s",
    "mr": "10s",
    "lr": "60s"
  }

should get you back to standard resolution.
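If you can sneak even one request through, a call along these lines should do it (a sketch assuming PMM 2's /v1/Settings/Change endpoint and local admin credentials; substitute your own user/password or API key and server address):

    curl -k -u admin:admin -X POST https://127.0.0.1/v1/Settings/Change \
      -H 'Content-Type: application/json' \
      -d '{"metrics_resolutions": {"hr": "5s", "mr": "10s", "lr": "60s"}}'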

D’oh! I just saw that you can’t hit the API…

Let me check something quickly… and get back to you.

I asked internally to see if anyone has any better ideas… I came up with two:

One would be a temporary nullroute for enough client IPs (as long as that didn't also kill your own connection to the machine) to free up ports and change the setting via the API (rough sketch below).

The other would be to edit the settings DB directly, but I know the team would hunt me down if I publicly posted how to do that on a forum!
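For the first option, the idea is roughly this (a sketch only; 10.0.0.0/24 stands in for whatever subnet your agents connect from, and make sure it doesn't cover your own SSH session):

    # temporarily blackhole the agents' subnet so their connections can't land
    ip route add blackhole 10.0.0.0/24

    # change the resolution via the API once ports free up, then remove the nullroute
    ip route del blackhole 10.0.0.0/24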

Yeah, I have already cranked up the postgres and nginx settings to try to account for it, and the API still reports my keys as invalid. How about you DM me instead of posting publicly so we can address the issue somehow? I'm not sure where this setting is stored (currently over 7k postgres processes):

    ps -ef | grep postg | wc -l
    7742
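Something like this would probably say more than the raw process count, assuming the bundled Postgres is reachable locally (I'm not sure of the exact user/port the RPM install uses, so treat this as a sketch):

    # group the backends by state to see whether they're active, idle, or waiting
    sudo -u postgres psql -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"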

I want to say thanks for some of the hints… I do think there should be some sort of manual bypass for when someone does this. It took quite some time to recover, even after doing a DNS change so all the agents dialed home to a /dev/null address. With about 1200-1400 agents dialing in, the slowness appears to have been a bottleneck on the postgres DB from what I can tell, and I didn't know postgres (or how the pieces interact) well enough to take care of that one. I DO think that if I had configured Grafana to use a Redis cache, things may have continued to work.
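For the record, the Grafana-side change I had in mind is its remote_cache setting, roughly like this in grafana.ini (the Redis address is a placeholder, and I haven't verified this helps with this particular failure mode):

    [remote_cache]
    type = redis
    connstr = addr=127.0.0.1:6379,pool_size=100,db=0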