As long as the cluster node you are going to hang the slave off of has log_slave_updates enabled, you can hook it up via standard asynchronous replication. This is the standard solution for what you are looking to do. However, it is not necessarily robust/scalable/maintainable, since it uses standard replication and carries many of the downsides that probably drove you to a cluster in the first place.
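As a rough sketch (hostnames, credentials, and binlog coordinates below are placeholders, and exact option names vary a bit by MySQL version), the cluster node needs binary logging on top of log_slave_updates, and the non-cluster slave then points at it like any ordinary master:

```
# my.cnf on the cluster node acting as async master (placeholder values)
[mysqld]
server-id         = 10         # must be unique across all nodes and slaves
log-bin           = mysql-bin  # log_slave_updates has no effect without log-bin
log_slave_updates = ON         # write changes arriving via the cluster to the binlog
```

```sql
-- On the non-cluster slave: point at the chosen cluster node.
-- The file/position come from SHOW MASTER STATUS on that node after
-- taking a consistent backup; the values here are placeholders.
CHANGE MASTER TO
    MASTER_HOST     = 'cluster-node-1',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000042',
    MASTER_LOG_POS  = 107;
START SLAVE;
```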
The main issue is that if the one cluster node the slave is hooked up to fails, replication to the non-cluster slave breaks, and that slave will likely have to be rebuilt. So if your application is hitting the non-cluster slave for reads, you'll want to make sure it includes some logic to detect whether the slave is out of date, via a heartbeat-style check of some sort (e.g. a table with a timestamp that gets replicated, so you can check how recent that timestamp is compared to the master).
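A minimal version of that heartbeat check might look like the following (table and column names are hypothetical, and it assumes master and slave clocks are kept in sync, e.g. via NTP):

```sql
-- On the cluster (master side): a heartbeat row updated periodically,
-- e.g. once a second from cron or a MySQL event.
CREATE TABLE heartbeat (
    id INT PRIMARY KEY,
    ts TIMESTAMP NOT NULL
);
REPLACE INTO heartbeat (id, ts) VALUES (1, NOW());

-- On the non-cluster slave: the application checks how stale the row is
-- before trusting reads; past some threshold, fall back to the cluster.
SELECT TIMESTAMPDIFF(SECOND, ts, NOW()) AS lag_seconds
FROM heartbeat
WHERE id = 1;
```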
Given that, you then have a single point of failure. This may be fine if you are using the non-cluster slave for non-critical items. Alternatively, you could set up several of these non-cluster slaves, each attached to a different cluster node, and use some sort of load balancer / proxy to manage reads across them, as sketched below. That way, if one of the non-cluster slaves goes down, you have another attached to a different cluster node that is probably still working. However, this gets you into higher complexity that may or may not defeat the purpose of the initial simple solution. =)
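For the multi-slave variant, the proxy piece can be as simple as a TCP load balancer with a MySQL health check. An illustrative HAProxy sketch (addresses and the check user are made up, not from your setup):

```
# haproxy.cfg (illustrative; IPs and names are placeholders)
listen mysql-read-slaves
    bind *:3307
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check   # requires a passwordless check user on each slave
    server slave1 192.168.0.11:3306 check   # hangs off cluster node 1
    server slave2 192.168.0.12:3306 check   # hangs off cluster node 2
```

Note this only detects slaves that are down, not slaves that are merely lagging, so you would still want the heartbeat check above in the application.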