Getting null when running audit_log_read

I have the audit log filter running (logging connections and DROP/CREATE/ALTER USER) in JSON format.

I have tested that it is logging correctly in JSON format.

root@localhost (none)> SHOW VARIABLES LIKE 'audit_log_filter_format';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| audit_log_filter.format | JSON  |
+-------------------------+-------+
1 row in set (0.01 sec)

root@localhost (none)> SHOW VARIABLES LIKE 'audit_log_filter_file';
+-----------------------+------------------+
| Variable_name         | Value            |
+-----------------------+------------------+
| audit_log_filter.file | audit_filter.log |
+-----------------------+------------------+
1 row in set (0.01 sec)

I am trying to run audit_log_read() but I am not getting anything back. I can see audit_filter.log getting populated every few seconds in JSON format. My database version is 8.4.5.

root@localhost (none)> SELECT audit_log_read(audit_log_read_bookmark());
+--------------------------------------------+
| audit_log_read(audit_log_read_bookmark()) |
+--------------------------------------------+
| [
null
]
 |
+--------------------------------------------+
1 row in set (0.61 sec)

root@localhost (none)> SELECT audit_log_read();
ERROR 3200 (HY000): audit_log_read UDF failed; Wrong argument format

I tried the documentation in https://dev.mysql.com/blog-archive/audit-logs-json-format-logging/

Still no luck:

root@localhost (none)> SELECT audit_log_read_bookmark();
+----------------------------------------------------+
| audit_log_read_bookmark()                          |
+----------------------------------------------------+
| {"timestamp": "2026-02-03 17:24:11", "id": 362018} |
+----------------------------------------------------+
1 row in set (0.00 sec)

root@localhost (none)> SELECT audit_log_read('{ "timestamp": "2026-02-03 17:24:11", "id": 362018}');
+------------------------------------------------------------------------+
| audit_log_read('{ "timestamp": "2026-02-03 17:24:11", "id": 362018}') |
+------------------------------------------------------------------------+
| [
null
]
 |
+------------------------------------------------------------------------+
1 row in set (0.00 sec)

root@localhost (none)> SELECT audit_log_read();
ERROR 3200 (HY000): audit_log_read UDF failed; Wrong argument format

Can you try to manually read the audit log file content in a pretty format like below?

cat audit_filter.log | jq '.' | less

It gives a parse error because the audit filter log doesn't have a closing bracket while it is being written to:

"parse error: Unfinished JSON term at EOF at line 1709585, column 3
(END)"

That's why I am trying to run audit_log_read(), because according to the Percona documentation that is the only way to easily get alerts.

is there a workaround for this that does not involve closing the audit filter log and creating a new one?

I see this ticket and I'm not sure if it has been fixed yet.

Hi @ipcmlr,

The jq parse error you're seeing is expected. The active log file (audit_filter.log) is an open JSON array while the server writes to it: it starts with [ but has no closing ]. As @Abhinav_Gupta suggested, reading the file directly is the right approach; you just need to append the missing bracket:

# Active file - append closing ] for valid JSON
sed '$ a ]' /var/lib/mysql/audit_filter.log | jq '.'

# Rotated files are already complete JSON
jq '.' /var/lib/mysql/audit_filter.*.log

No need to rotate or create a new file: sed streams the content to stdout and appends ] without modifying the original file. We verified this is safe under concurrent writes.
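Building on that, you can filter for just the events you alert on. The `class`/`event` field names below are assumptions based on the MySQL-style JSON audit schema; check one of your own records before relying on them:

```shell
# Sample open-array log mimicking the active file (no closing bracket).
# Field names are assumed; verify them against your own log entries.
cat > /tmp/audit_filter.log <<'EOF'
[
{ "timestamp": "2026-02-03 17:24:10", "id": 1, "class": "connection", "event": "connect" },
{ "timestamp": "2026-02-03 17:24:11", "id": 2, "class": "general", "event": "status" }
EOF

# Close the array on the fly and keep only connection events
sed '$ a ]' /tmp/audit_filter.log | jq -c '.[] | select(.class == "connection")'
```

The same `jq` filter works unchanged on rotated files, since those are already valid JSON.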

As for audit_log_read() itself, this is a confirmed bug affecting all 8.4 releases through 8.4.7-7 (including your 8.4.5). There are three issues in component_audit_log_filter:

  1. Active file excluded from reader. The reader builds its file index by DOM-parsing each log file. The active file fails the parse (incomplete JSON array), so it’s silently skipped. Your bookmark from audit_log_read_bookmark() points to the most recent event, which is in the active file the reader can’t see.

  2. Exact timestamp matching. The reader’s start-position check uses == instead of >= for timestamp comparison. So "2026-02-03 17:24:11" only matches an event with that exact timestamp, not “from this point forward.”

  3. Malformed JSON output. Even when audit_log_read() does return data (using an exact first-event timestamp from a rotated file), the output has missing commas between sibling objects, making it invalid JSON.
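For completeness, if you have already captured such malformed output, a rough repair pass can often make it parseable. The `} {` pattern below is an assumption about where the commas are dropped; verify it against your actual output before using it:

```shell
# Hypothetical malformed output: sibling objects with the comma missing
printf '%s' '[{"a":1} {"b":2}]' > /tmp/bad.json

# Re-insert commas between adjacent objects, then validate with jq
sed 's/} *{/},{/g' /tmp/bad.json | jq -c '.'
# → [{"a":1},{"b":2}]
```

Note this is a last-resort fix for already-captured output, not a substitute for the file-based workaround above.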

PR #5803 (expected in 8.4.8) fixes the malformed JSON (PS-10347) and pagination (PS-10387); bugs 1 and 2 are known issues not yet resolved, so the sed + jq approach above remains the recommended workaround.

For your security reports, the sed + jq approach above is the most reliable path. If you want to automate it (daily job, alerting), a cron script that rotates and parses closed files works well. No events are lost during rotation:

#!/bin/bash
# Rotate the active file, then process all properly closed files
mysql -e "SELECT audit_log_rotate();"
for f in /var/lib/mysql/audit_filter.2*.log; do
    # Skip XML transition files (left over from a format change)
    head -1 "$f" | grep -q '<?xml' && continue
    # Skip files missing the closing ] (rare edge case)
    tail -c 2 "$f" | grep -q ']' || continue
    jq '.' "$f"    # pipe to your reporting tool or SIEM
done
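To schedule it, a crontab entry along these lines would work; the script path and output location here are placeholders, not anything Percona ships:

```shell
# m h dom mon dow  command -- run the rotate-and-parse script daily at 02:00
0 2 * * * /usr/local/bin/audit_rotate_parse.sh >> /var/log/audit_events.jsonl 2>&1
```

Appending to a .jsonl file keeps each day's events in one place for your SIEM to ingest.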

References: