We are running a 3-node Percona XtraDB Cluster Server version 5.6.39-83.1-56-log on Red Hat Enterprise Linux Server release 7.5 (Maipo).
The cluster’s first node crashed, with the corresponding error messages appearing in the mysqld.err log file:

Retrying 4th time
2018-09-27 14:04:00 4608 [ERROR] Slave SQL: Could not execute Delete_rows event on table cgi_d8_prod.sessions; Can't find record in 'sessions', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log FIRST, end_log_pos 1300, Error_code: 1032
2018-09-27 14:04:00 4608 [Warning] WSREP: RBR event 3 Delete_rows apply warning: 120, 67045668
2018-09-27 14:04:00 4608 [ERROR] WSREP: Failed to apply trx: source: e8ae4b07-b7ae-11e8-b74c-8a9f3a425bd0 version: 3 local: 0 state: APPLYING flags: 1 conn_id: 184715 trx_id: 3777747218 seqnos (l: 1170734, g: 67045668, s: 67045657, d: 67045533, ts: 1191591817699643)
2018-09-27 14:04:00 4608 [ERROR] WSREP: Failed to apply trx 67045668 4 times
2018-09-27 14:04:00 4608 [ERROR] WSREP: Node consistency compromised, aborting...

In addition, I was able to extract the following information from the associated GRA_ binary log error file:

ERROR: Error in Log_event::read_log_event(): 'Found invalid event in binary log', data_len: 1157, event_type: 32
WARNING: The range of printed events ends with a row event or a table map event that does not have the STMT_END_F flag set. This might be because the last statement was not fully written to the log, or because you are using a --stop-position or --stop-datetime that refers to an event in the middle of a statement. The event(s) from the partial statement have not been written to output.

Has anyone on this forum encountered this issue, and if so, do you have a solution that you can share with us?

You should investigate why only this one node aborted. The error suggests a data inconsistency: the row to be deleted did not exist on this node.
The GRA_ file is not a complete binary log, but rather a single RBR event without a proper header. You can decode it by first taking a valid header from any binary log, for example:

head -c 120 /var/lib/mysql/log-bin.000001 > GRA_HEADER
cat GRA_HEADER GRA_X_Y.log > gra_X_Y-bin.log
mysqlbinlog -vvv gra_X_Y-bin.log
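To see what the splice above actually produces, here is a minimal, self-contained sketch on dummy files (the file names and byte contents are stand-ins, not real binlog data): the reconstructed file is simply the first 120 bytes of a real binlog followed by the raw GRA event bytes.

```shell
# Stand-ins for /var/lib/mysql/log-bin.000001 and the GRA_* file:
printf '%0120d' 0 > fake-binlog          # 120 dummy "header" bytes
printf 'EVENTBODY' > fake-GRA_1_2.log    # 9 dummy "event" bytes
head -c 120 fake-binlog > GRA_HEADER     # take the header portion
cat GRA_HEADER fake-GRA_1_2.log > gra-reconstructed.log
wc -c < gra-reconstructed.log            # 129 bytes: 120 header + 9 event
```

With real files, mysqlbinlog can then parse the reconstructed log, because the Format_description event from the donor binlog tells it how to interpret the appended row event.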

The usual sources of data inconsistency are: sql_log_bin=0 in use, wsrep_on=0, the RSU method applied to incompatible DDL, non-InnoDB tables, etc. If you have binlogs enabled, I suggest digging through them for the Xid transaction saved with your GRA file and checking at what point the related rows were changed on the cluster nodes.
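A hedged sketch of that search is below. The decoded-text layout, file names, and the Xid value (borrowed from the trx_id in your error log purely as a placeholder) are all assumptions here; in practice you would first decode each real binlog with mysqlbinlog -vvv into a text file and grep those.

```shell
# Placeholder decoded binlog (stand-in for real `mysqlbinlog -vvv` output):
printf 'BEGIN\n### DELETE FROM `cgi_d8_prod`.`sessions`\nXid = 3777747218\nCOMMIT\n' > decoded-000001.txt

XID=3777747218   # placeholder: use the Xid printed for your decoded GRA event
# Which decoded binlog commits that transaction, and at what line:
grep -Hn "Xid = $XID" decoded-*.txt
# Row changes touching the affected table, to compare across nodes:
grep -Hn 'cgi_d8_prod.*sessions' decoded-*.txt
```

Running the same search on each node's binlogs shows where the DELETE for the missing row was (or was not) applied, which usually points at when the nodes diverged.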