Partial lack of replication across the cluster

Hello,

Servers: Ubuntu 14.04 LTS, cluster 5.6 (newest in the repo).
3 nodes; the first one is replicating from the master (a classic master->slave setup), and the replica itself is standard:
"
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql-main
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.002553
Read_Master_Log_Pos: 51407411
Relay_Log_File: mysqld-relay-bin.000053
Relay_Log_Pos: 1793759
Relay_Master_Log_File: mysql-bin.002553
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 51407411
Relay_Log_Space: 1793933
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID:
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
"

Config from node1:

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 10.240.0.7
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 500
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
server-id = 8
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
innodb_log_file_size = 256M
innodb_buffer_pool_size = 8G

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://cluster-db-1a,cluster-db-1b,cluster-db-1c
wsrep_slave_threads=8
wsrep_sst_method=xtrabackup-v2
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_cluster_name=cluster-db-1
wsrep_sst_auth="sstuser:resutss"
wsrep_node_name=cluster-db-1a

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/

The other nodes have similar configurations; "server-id" differs across the cluster (8, 9, 10).
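
For example, on the second node the lines that differ would look roughly like this (bind-address being that node's own IP; the wsrep_node_name values follow the names listed in wsrep_cluster_address):

server-id = 9
bind-address = <that node's IP>
wsrep_node_name = cluster-db-1b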
Unfortunately, most of the changes replicated from "mysql-main" end up only on the first node (the one replicating directly, cluster-db-1a); the other cluster nodes are still in the state they were in during initialisation… Some tables are MyISAM, but not the problematic ones (I haven't tried to test MyISAM, because I know it isn't supported).
Creating new databases on "mysql-main" does replicate across the cluster, but the changes in tables do not…
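
To make the symptom concrete, this is roughly what happens (repltest / t are just placeholder names):

"
-- on mysql-main (the asynchronous master):
CREATE DATABASE repltest;                                     -- this new database shows up on all three cluster nodes
CREATE TABLE repltest.t (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO repltest.t VALUES (1);

-- on cluster-db-1a:
SELECT * FROM repltest.t;                                     -- the row is there
-- on cluster-db-1b and cluster-db-1c the table changes never arrive
"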