There are quite a few threads about Docker bypassing UFW (if that’s what you’re using … I am) and directly modifying iptables. Ouch. Do a search for “docker bypassing ufw” and you’ll have an afternoon of reading.
Considering this is a real security issue (I happen to think it is), the following is how I have ‘solved’ the issue until Docker has a better solution.
By the way, there are numerous proposed solutions if you read through the search results … I just found this one below super, duper easy to do as ‘one’ layer in the security chain … it’s not comprehensive; it’s a simple point solution.
# On my server I execute the following to get into the pmm container
user@pmm:[~]: sudo docker exec -it pmm-server bash
# In the container, use ‘vi’ to edit the pmm.conf file (it’s the nginx configuration file)
[root@a82573218cc2 opt]# vi /etc/nginx/conf.d/pmm.conf
# Add the ‘allow <your-ip>;’ and ‘deny all;’ statements to the server block, as reflected below
server {
    listen 80;
    listen 443 ssl http2;
    server_name _;
    server_tokens off;
    allow 192.168.0.1;
    deny all;
    # ... (the rest of the existing server block stays as-is)
# With respect to the allow and deny statements, they can be rather creative, as in the following example
# (nginx checks these rules in order and applies the first match, so keep ‘deny all;’ last)
server {
    listen 897;
    deny 192.168.0.52;
    allow 192.168.10.0/24;
    allow 2002:0xy9::/32;
    deny all;
    # ... (rest of the server block)
# Exit from the container
[root@a82573218cc2 opt]# exit
# Restart docker with the following command (if you have numerous docker containers on the server, restart only the pmm-server container, or whatever you called it)
user@pmm:[~]: sudo systemctl restart docker
or
user@pmm:[~]: sudo docker restart pmm-server (or whatever you named it)
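A lighter-touch option that should also work (nginx supports reloading its config without a full restart) is to reload nginx inside the container instead of bouncing anything:
# Reload nginx in place so the new allow/deny rules take effect
user@pmm:[~]: sudo docker exec pmm-server nginx -s reload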
You can validate that the configuration works as desired by leveraging the Epic Browser (one option) - https://www.epicbrowser.com/ - and using its VPN function to give that browser session a new IP, which lets you verify the desired ‘block’. You should receive a “403 Forbidden” message.
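If you’d rather not install a separate browser, a quick curl from any machine outside the allowed range works too (hostname below is just a placeholder):
# Expect a 403 Forbidden response from a host that is not in the allow list
curl -k -I https://pmm.example.com/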
Very cool! Just one thing to note: the /etc directory isn’t preserved across container upgrades (if you docker pull the latest image), and I’m fairly certain the files will be overwritten by the in-place upgrade method as well (the upgrade widget on the main dashboard). So keep a copy of this file either on your local system or in the /srv directory, where it should be safer, or you’ll lose this great config!
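For example, something along these lines (paths are just where I’d put it) copies the file out to the host before an upgrade and puts it back afterwards:
# Before upgrading: save the edited config to the host
user@pmm:[~]: sudo docker cp pmm-server:/etc/nginx/conf.d/pmm.conf ~/pmm.conf.bak
# After upgrading: restore it and reload nginx
user@pmm:[~]: sudo docker cp ~/pmm.conf.bak pmm-server:/etc/nginx/conf.d/pmm.conf
user@pmm:[~]: sudo docker exec pmm-server nginx -s reload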
Well, strike that config … I had added the IPs of the servers to monitor as well, but it appears that they are now not connected. Back to the drawing board. Any thoughts on hardening this configuration? I’ll test further.
What is your ultimate goal in hardening? Allowing only specific IPs/ranges to connect to the PMM server, limiting what services are exposed, or something else? I use a combination of iptables (to allow foreign machines/networks to connect to the PMM server), docker network port mapping (to only expose port 443 from the container), and nginx configs (to enable/disable services like victoriametrics or alertmanager). Now, most of what I’m doing is in a lab environment, so I don’t have quite as much to worry about compared to a large enterprise network, but I know we have several users on the boards here that run this against production, and some even have elements of PMM exposed to the outside world. If you’re more specific about the security you’re trying to achieve, I’m sure others will share their approach.
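For the port-mapping piece, all that really means is publishing only 443 when you create the container; roughly like this (image tag and data-volume name may differ in your setup):
# Publish only 443 so nothing else from the container is reachable from outside
docker run -d --restart always -p 443:443 --volumes-from pmm-data --name pmm-server percona/pmm-server:2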
Great feedback/questions, Steve. Ultimately I’m looking to publish PMM externally to defined IPs with the present configuration. In the future it’s a non-issue, since there will be no external publishing and access will be allowed only within the trusted network. Normally I use UFW, fail2ban, etc. to harden at the server level. I guess I’ll take a look at iptables to facilitate the blocking.
As a follow-up regarding iptables … what rules do I need so I don’t block the servers from talking to PMM? I’m assuming the port is simply 3306, but if you can clarify, that would be appreciated.
It’ll depend on what version you’re using … if you’re just getting started with PMM2, hopefully you’re on version 2.14.0 or later, at which point all clients (your DB servers) connect to the PMM server on port 443 and push all metrics up over that channel.
There’s also a scenario where we do “remote” monitoring, where no client is installed on your DB server; in that case, yes, the PMM server would need access to the relevant DB server (3306 for MySQL/MariaDB, 5432 for PostgreSQL, etc.).
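If the DB servers themselves use UFW, something like this on the DB host would scope that access down to just the PMM server (the IP below is a placeholder):
# On the monitored MySQL host: only let the PMM server reach 3306
sudo ufw allow from 192.168.10.50 to any port 3306 proto tcp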
The less fun route is if you started on an older version of PMM2 (pre-2.14.0), which means your clients send QAN data to the server on TCP port 443, but node- and service-specific metrics are pulled by the PMM server from the client on ports 42000-4200X (depending on how many exporters are running on a given client). In that case you could upgrade your client/server packages and follow this guide to convert your existing clients to the preferable push model.
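Roughly speaking (check that guide for the exact steps on your version), switching an already-registered client comes down to re-running the config step with push mode; the server address and credentials below are placeholders:
# On the DB server's pmm-client (2.12.0+), switch metrics collection to push
# (a --force flag may be needed if the node is already registered)
sudo pmm-admin config --metrics-mode=push --server-insecure-tls --server-url=https://admin:admin@192.168.10.50:443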
Hopefully then your firewall rules stay simple: allow your DB network access to everything, and you can start locking the outside world down to /graph/ only in nginx.
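And since UFW rules get bypassed, the usual place to express “DB network only” on the Docker host is the DOCKER-USER iptables chain, which Docker evaluates before its own forwarding rules; a minimal sketch (the interface and subnet are placeholders for illustration):
# Drop traffic to the published PMM port unless it comes from the DB/client subnet
# (a simple --dport match works here because the host and container ports are both 443)
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 443 ! -s 192.168.10.0/24 -j DROP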