Hello,
For our company's needs we are currently testing the Audit Log Filter component provided by Percona Server for MySQL.
We created a VM with Ubuntu 24.04 LTS and Percona Server 8.4.3-3. Once the default installation was done, we added a new user for remote connections, loaded some fake data, and then followed the documentation to install the audit component.
As a first test we decided to log everything that happens, so we created the following rule:
select audit_log_filter_set_filter('log_actions', '{ "filter": {"log": true} }');
select audit_log_filter_set_user('%', 'log_actions');
During our tests we noticed different results:
- if the database connection is made from a local client, queries are written to the log file
- if the database connection is made from a remote client using the private subnet, queries are written to the log file
- if the database connection is made from a remote client using the DB server's public IP address, queries are not always written to the log file
In this last case the only thing that differs is the client location, and therefore the client IP address, but we have no idea why it works in some cases and not in others (note that client connections always succeed; only the audit log writes sometimes fail).
Do you think this is a network problem? How can we confirm this?
Any help would be appreciated.
Thank you.
From what I see in the description, you are setting the audit log filter for the '%' user:
select audit_log_filter_set_user('%', 'log_actions');
This basically means that if there is no other (user, filter) pair already registered for the user who is trying to connect to the server, this special '%' rule will be used.
In my opinion, you can see different behavior when connecting internally / externally only if there is another (user, filter) association registered that matches that user better.
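For example, a more specific association would take precedence over the '%' fallback. A hypothetical illustration (the account name and the second filter are made up, not from your setup):

```sql
-- Hypothetical: a filter bound to a specific account shadows the '%'
-- fallback for that account only.
select audit_log_filter_set_filter('no_logging', '{ "filter": { "log": false } }');
select audit_log_filter_set_user('app_user@%', 'no_logging');

-- Check which (user, filter) pairs are currently registered:
select * from mysql.audit_log_user;
```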
Thank you @Yura_Sorokin for your answer.
Our case doesn't fit that idea, because we have only one rule defined, and only the special '%' user is enabled for it:
mysql> select * from mysql.component;
+--------------+--------------------+-----------------------------------+
| component_id | component_group_id | component_urn                     |
+--------------+--------------------+-----------------------------------+
|            1 |                  1 | file://component_audit_log_filter |
+--------------+--------------------+-----------------------------------+
1 row in set (0.00 sec)
mysql> select * from mysql.audit_log_filter;
+-----------+-------------+---------------------------+
| filter_id | name        | filter                    |
+-----------+-------------+---------------------------+
|         1 | log_actions | {"filter": {"log": true}} |
+-----------+-------------+---------------------------+
1 row in set (0.00 sec)
mysql> select * from mysql.audit_log_user;
+----------+----------+-------------+
| username | userhost | filtername  |
+----------+----------+-------------+
| %        | %        | log_actions |
+----------+----------+-------------+
1 row in set (0.00 sec)
Hello @Yura_Sorokin,
Do you have any other ideas about this strange problem we're facing?
I do not see why connecting externally through a regular router with port forwarding configured would be any different from connecting from any other host on your private subnet (say, 192.168.0.x). From the MySQL server's point of view, in this case you will be connecting from your router's internal IP address (say, 192.168.0.1).
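One standard way to confirm which client address the server actually sees for a given session (nothing here is specific to the audit component):

```sql
-- The account the client authenticated as, and the effective account:
SELECT USER(), CURRENT_USER();

-- The client host (and port) the server associates with this session:
SELECT processlist_host
  FROM performance_schema.threads
 WHERE processlist_id = CONNECTION_ID();
```

If external connections show up with the router's internal address here, the server cannot tell them apart from other private-subnet clients.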
The only potential scenario I can think of where this could be an issue is when your router is "MySQL protocol"-aware.
For instance, if you are using ProxySQL that listens on an external IP address and redirects connections to different local MySQL servers.
Another scenario could be some hidden connection pool, where once-established connections can be re-used. In this scenario audit_log_filter may behave exactly as you describe: you will see connection events only the very first time a connection is established, but not when it is re-used. This connection pool, by the way, can be part of a standard MySQL connector library (say, JDBC for Java).
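The re-use effect can be sketched with a toy pool (plain Python, no MySQL involved; all names are made up for illustration): the "connect" event fires only when a physical connection is created, not when a pooled one is handed out again.

```python
# Toy illustration of why a connection pool hides "connect" events:
# the audit-visible connect happens only when a physical connection
# is created, not when an idle one is re-used.

class AuditLog:
    def __init__(self):
        self.events = []

class Connection:
    def __init__(self, audit_log):
        audit_log.events.append("connect")  # audited once, at creation

    def query(self, sql, audit_log):
        audit_log.events.append(f"query: {sql}")

class Pool:
    def __init__(self, audit_log):
        self.audit_log = audit_log
        self.idle = []

    def checkout(self):
        # Re-use an idle connection if one exists; no new "connect" event.
        return self.idle.pop() if self.idle else Connection(self.audit_log)

    def checkin(self, conn):
        self.idle.append(conn)

log = AuditLog()
pool = Pool(log)

# Three logical "sessions" from the application's point of view:
for i in range(3):
    conn = pool.checkout()
    conn.query(f"SELECT {i}", log)
    pool.checkin(conn)

print(log.events.count("connect"))                      # 1
print(sum(e.startswith("query") for e in log.events))   # 3
```

Only one "connect" event is recorded despite three logical sessions, which matches the "first connection is audited, re-used ones are not" pattern described above.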