I have an audit filter running (logging connections and DROP/CREATE/ALTER USER) in JSON format.
I have tested that it is logging correctly in JSON format.
root@localhost (none)> SHOW VARIABLES LIKE 'audit_log_filter_format';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| audit_log_filter.format | JSON  |
+-------------------------+-------+
1 row in set (0.01 sec)
root@localhost (none)> SHOW VARIABLES LIKE 'audit_log_filter_file';
+-----------------------+------------------+
| Variable_name         | Value            |
+-----------------------+------------------+
| audit_log_filter.file | audit_filter.log |
+-----------------------+------------------+
1 row in set (0.01 sec)
I am trying to run audit_log_read() but I am not getting anything back, even though I can see the audit_filter log being populated every few seconds in JSON format. My database version is 8.4.5.
The jq parse error you're seeing is expected. The active log file (audit_filter.log) is an open JSON array while the server writes to it: it starts with [ but has no closing ]. As @Abhinav_Gupta suggested, reading the file directly is the right approach; you just need to append the missing bracket:
# Active file - append closing ] for valid JSON
sed '$ a ]' /var/lib/mysql/audit_filter.log | jq '.'
# Rotated files are already complete JSON
jq '.' /var/lib/mysql/audit_filter.*.log
No need to rotate or create a new file. sed streams the content and appends ] without modifying the original. We verified this is safe under concurrent writes.
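If you only need certain event classes for the report, the same pipeline can feed a jq filter. A sketch, assuming records carry a top-level "class" field as in typical audit_log_filter JSON output (verify the field names against your own log):

```shell
# Close the still-open array, then keep only connection-class events.
# The "class" field name is an assumption - check it against your log.
sed '$ a ]' /var/lib/mysql/audit_filter.log \
  | jq '[ .[] | select(.class == "connection") ]'
```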
As for audit_log_read() itself, this is a confirmed bug affecting all 8.4 releases through 8.4.7-7 (including your 8.4.5). There are three issues in component_audit_log_filter:
1. Active file excluded from the reader. The reader builds its file index by DOM-parsing each log file. The active file fails the parse (incomplete JSON array), so it's silently skipped. Your bookmark from audit_log_read_bookmark() points to the most recent event, which is in the active file the reader can't see.
2. Exact timestamp matching. The reader's start-position check uses == instead of >= for timestamp comparison. So "2026-02-03 17:24:11" only matches an event with that exact timestamp, not "from this point forward."
3. Malformed JSON output. Even when audit_log_read() does return data (using an exact first-event timestamp from a rotated file), the output has missing commas between sibling objects, making it invalid JSON.
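If you have already captured audit_log_read() output and need to salvage it, a rough repair is possible under the assumption that the only defect is the missing commas between sibling objects. This is a blunt text substitution: it only handles objects on the same line and would corrupt string values that happen to contain "}{", so treat it as a last resort. read_output.json is a hypothetical file holding the captured output:

```shell
# Insert the missing commas between adjacent objects, then validate with jq.
# Assumes the only defect is missing separators between sibling objects.
sed 's/}[[:space:]]*{/},{/g' read_output.json | jq '.'
```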
PR #5803 (expected in 8.4.8) fixes the malformed JSON (PS-10347) and pagination (PS-10387); bugs 1 and 2 are known issues not yet resolved, so the sed + jq approach above remains the recommended workaround.
For your security reports, the sed + jq approach above is the most reliable path. If you want to automate it (daily job, alerting), a cron script that rotates and parses closed files works well. No events are lost during rotation:
#!/bin/bash
# Rotate, then process all properly closed files
mysql -e "SELECT audit_log_rotate();"
for f in /var/lib/mysql/audit_filter.2*.log; do
    # Skip XML transition files (from a format change)
    head -1 "$f" | grep -q '<?xml' && continue
    # Skip files missing the closing ] (rare edge case)
    tail -c 2 "$f" | grep -q ']' || continue
    jq '.' "$f"   # pipe to your reporting tool or SIEM
done
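To schedule it, a crontab entry along these lines works; the script path, schedule, and log path here are placeholders, not part of any standard setup:

```
# Run the rotate-and-parse script daily at 02:00 (paths are examples)
0 2 * * * /usr/local/bin/audit_report.sh >> /var/log/audit_report.log 2>&1
```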