Strange behaviour with Audit Log Filter component

Hello,

For our company's needs, we are currently testing the Audit Log Filter component provided by Percona Server for MySQL.
We created a VM with Ubuntu 24.04 LTS and Percona Server 8.4.3-3. Once the default installation was done, we added a new user for remote connections, loaded some fake data, and then followed the documentation to install the audit component.
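
For reference, the core installation step is loading the component (this sketch assumes the component's tables have already been created as described in the Percona documentation; the component URN matches the one visible in mysql.component below):

INSTALL COMPONENT 'file://component_audit_log_filter';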
As a first test we decided to log everything that happens, so we created the following rules:

select audit_log_filter_set_filter('log_actions', '{ "filter": {"log": true} }');
select audit_log_filter_set_user('%', 'log_actions');

During our tests we noticed different results:

  • if the database connection is made from a local client, queries are written to the log file
  • if the database connection is made from a remote client on the private subnet, queries are written to the log file
  • if the database connection is made from a remote client using the DB server's public IP address, queries are not always written to the log file
    In this last case, the only thing that differs is the client location, and therefore the client IP address, but we have no idea why it works in some cases and not in others (note that client connections always succeed; only the audit log writes sometimes fail).

Do you think this is a network problem? How can we confirm this?

Any help would be appreciated.
Thank you.

From what I see in the description, you are setting the audit log filter for the % user:

select audit_log_filter_set_user('%', 'log_actions');

This basically means that if there is no other (user, filter) pair already registered for the user who is trying to connect to the server, this special % rule will be used.
In my opinion, you can see different behavior when connecting internally / externally only if there is another (user, filter) association registered that matches that user more specifically.
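
For example, a more specific association registered for a concrete account would take precedence over the catch-all '%' rule for that account. A quick sketch (the 'app_user@10.0.0.5' account and the 'no_logging' filter name here are hypothetical):

select audit_log_filter_set_filter('no_logging', '{ "filter": { "log": false } }');
select audit_log_filter_set_user('app_user@10.0.0.5', 'no_logging');

You can list all currently registered (user, filter) associations by querying the mysql.audit_log_user table.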

Thank you @Yura_Sorokin for your answer.

Our case doesn't fit that explanation, because we have only one rule defined and only the special user % assigned to it:

mysql> select * from mysql.component;
+--------------+--------------------+------------------------------------+
| component_id | component_group_id | component_urn                      |
+--------------+--------------------+------------------------------------+
|            1 |                  1 | file://component_audit_log_filter  |
+--------------+--------------------+------------------------------------+
1 row in set (0.00 sec)

mysql> select * from mysql.audit_log_filter;
+-----------+-------------+---------------------------+
| filter_id | name        | filter                    |
+-----------+-------------+---------------------------+
|         1 | log_actions | {"filter": {"log": true}} |
+-----------+-------------+---------------------------+
1 row in set (0.00 sec)

mysql> select * from mysql.audit_log_user;
+----------+----------+-------------+
| username | userhost | filtername  |
+----------+----------+-------------+
| %        | %        | log_actions |
+----------+----------+-------------+
1 row in set (0.00 sec)

Hello @Yura_Sorokin,

Do you have any other ideas about the strange problem we're facing?

I do not see why connecting externally through a regular router with port forwarding configured would be any different from connecting from any other host on your private subnet (say, 192.168.0.x). From the MySQL server's point of view, in this case you will be connecting on behalf of your router's internal IP address (say, 192.168.0.1).
The only potential scenario I can think of where this could be an issue is when your router is "MySQL protocol"-aware.
For instance, if you are using ProxySQL that listens on an external IP address and redirects connections to different local MySQL servers.
Another scenario could be a hidden connection pool, where established connections are re-used. In this scenario audit_log_filter may behave exactly as you describe: you will see connection events only the very first time a connection is established, but not when it is re-used. Such a connection pool, by the way, can be part of a standard MySQL connector library (say, JDBC for Java).
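
A quick way to check for such hidden connection re-use is to watch the standard server status counters while your clients "reconnect" (a sketch, not specific to the audit component):

show global status like 'Connections';        -- total connection attempts since server startup
show global status like 'Threads_connected';  -- connections currently open

If the Connections counter does not grow while your clients believe they are opening new connections, something in between is re-using established connections.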

There is a known bug, PS-9033, for this issue.

@Yura_Sorokin Interesting idea, but in our case we don't use a proxy or a connection pool (each client connection is a new one).
The above-mentioned bug accurately describes the behavior we are observing (thanks for the link, @Wayne_Leutwyler).

@dba_S4dscjz please go to the bug, like it, and watch it. The more people who notice it, the faster it should get resolved.

@Yura_Sorokin, there is a proposed fix for this issue in the JIRA bug report. I tested the proposed fix and confirmed that audit_log_filter is working as expected. See the output below:

  {
    "timestamp": "2025-02-18 13:48:37",
    "id": 27,
    "class": "command",
    "event": "command_start",
    "connection_id": 10,
    "command_data": {
      "name": "command_start",
      "status": 0,
      "command": "Query"}
  },  
  {   
    "timestamp": "2025-02-18 13:48:37",
    "id": 28,
    "class": "parse",
    "event": "query_rewritten",
    "connection_id": 10,
    "parse_data": {
      "flags": 0,
      "query": "drop database wayne",
      "rewritten_query": ""}
  },  
  {   
    "timestamp": "2025-02-18 13:48:37",
    "id": 29,
    "class": "parse",
    "event": "prepared_statement",
    "connection_id": 10,
    "parse_data": {
      "flags": 0,
      "query": "drop database wayne",
      "rewritten_query": ""}
  },  
  {  
    "timestamp": "2025-02-18 13:48:37",
    "id": 30,
    "class": "general",
    "event": "log",
    "connection_id": 10,
    "account": { "user": "wayne[wayne] @ localhost []", "host": "localhost" },
    "login": { "user": "wayne[wayne] @ localhost []", "ip": "", "proxy": "" },
    "general_data": { "status": 0 }
  },
  { 
    "timestamp": "2025-02-18 13:52:57",
    "id": 52,
    "class": "command",
    "event": "command_start",
    "connection_id": 11,
    "command_data": {
      "name": "command_start",
      "status": 0,
      "command": "Query"}
  },  
  {   
    "timestamp": "2025-02-18 13:52:57",
    "id": 53,
    "class": "parse",
    "event": "query_rewritten",
    "connection_id": 11,
    "parse_data": {
      "flags": 0,
      "query": "create database wayne",
      "rewritten_query": ""}
  },  
  {   
    "timestamp": "2025-02-18 13:52:57",
    "id": 54,
    "class": "parse",
    "event": "prepared_statement",
    "connection_id": 11,
    "parse_data": {
      "flags": 0,
      "query": "create database wayne",
      "rewritten_query": ""}
  },  
  {  
    "timestamp": "2025-02-18 13:52:57",
    "id": 55,
    "class": "general",
    "event": "log",
    "connection_id": 11,
    "account": { "user": "wayne[wayne] @  [192.168.1.110]", "host": "" },
    "login": { "user": "wayne[wayne] @  [192.168.1.110]", "ip": "192.168.1.110", "proxy": "" },
    "general_data": { "status": 0 }
  },

As you can see, the audit log is now capturing data from remote connections. Can we expect to see action taken on the bug soon?

@Wayne_Leutwyler We found a problem in the host name identification logic inside the event filtering mechanism. Basically, the current code uses the actual connection host name (the 'host' part of the USER() MySQL function).
Moreover, when the connection is established via an IP address (like '192.168.0.42') rather than via a host name (like 'foo.bar.com'), this host name identification logic could fail, returning an empty string and causing the event to be completely ignored / filtered out.
We are now working on a proper fix, which basically changes this host identification logic: we will use the 'host' part of the authenticated MySQL user (the 'host' part of the CURRENT_USER() MySQL function).
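
To illustrate the difference (a sketch; the values match the 'wayne' / 192.168.1.110 example from the log above, assuming the account was created as 'wayne'@'%'):

select user(), current_user();
-- USER():         wayne@192.168.1.110   (actual connection host / IP)
-- CURRENT_USER(): wayne@%               (host part of the authenticated account)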
For now, as a workaround, I can only suggest avoiding IP addresses when connecting.
Also, you may try removing the '--skip-name-resolve' option from your config file if it is there.
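
To check whether name resolution is currently disabled on your server:

show global variables like 'skip_name_resolve';
-- ON:  client host names are not resolved; the server only sees IP addresses
-- OFF: the server resolves client IP addresses to host names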

@Yura_Sorokin

Thanks for the info. Unfortunately, we can't remove skip-name-resolve, so we won't be able to move to 8.4 until this bug is fixed. Do you have an ETA for when this will be completed and released?

Thank you for all the good, detailed information.

@Yura_Sorokin

I second what @Wayne_Leutwyler said: this issue should definitely be prioritised. We spent days on it, and it's holding us back from pushing the MySQL 8.4 upgrade.

@dba_S4dscjz @Wayne_Leutwyler
I am happy to announce that we fixed PS-9033 and also implemented PS-9024.
Here are the PRs for the 8.0 and 8.4 series:
PS-9033 and PS-9024 fixes: audit_log_filter_set_user does not allow wildcards in hostname (8.0) by percona-ysorokin · Pull Request #5562 · percona/percona-server · GitHub
PS-9033 and PS-9024 fixes: audit_log_filter_set_user does not allow wildcards in hostname (8.4) by percona-ysorokin · Pull Request #5565 · percona/percona-server · GitHub

Unfortunately, it is too late for this fix to be included in 8.0.41 (released yesterday), but it will be included in the upcoming 8.4.4 and 9.2.0 releases (both Q1 2025).
As for the 8.0 series, the fix will be included in 8.0.42 (Q2 2025).

Hopefully, now this won’t be a blocker for you to upgrade to 8.4.

Great news, thanks @Yura_Sorokin !

@Yura_Sorokin

Thank you!!
