pt-table-sync over bad connections

Dear All,
I’m working for an NGO in Nepal (INF), and we are using pt-table-sync to synchronize a MySQL database across our centers in different parts of the country. We do not need live syncing; once a day is enough.
So I wrote a wrapper around pt-table-sync that runs on the main server and syncs (bidirectionally), table by table, with the servers in the small offices around the country. We use a VPN to connect to every office.
It works fine as long as the Internet connection is OK. But here in Nepal we often suffer from bad Internet connections, so 10–25% packet loss is nothing unusual. We then run into problems like “DBD::mysql::db commit failed: Lost connection to MySQL server during query at /usr/bin/pt-table-sync line 10925”.
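One way to make such a wrapper survive a flaky link is to retry each table’s sync a few times before giving up, so a dropped connection only forces that one table to be redone. A minimal POSIX-shell sketch of the idea; `flaky_sync` and the host/database/table names in the comments are made-up stand-ins for the real pt-table-sync invocation:

```shell
#!/bin/sh
# Hypothetical per-table retry wrapper; only the retry logic is the point.
MAX_TRIES=5
DELAY=1   # seconds between attempts; something like 30-60 is more realistic

# retry CMD ARGS... : run the command until it succeeds or MAX_TRIES is hit.
retry() {
    tries=0
    until "$@"; do
        tries=$((tries + 1))
        if [ "$tries" -ge "$MAX_TRIES" ]; then
            echo "giving up on: $*" >&2
            return 1
        fi
        sleep "$DELAY"
    done
    return 0
}

# Simulate a sync that dies twice (as on a lossy link) and then succeeds.
# In the real wrapper this would be something like:
#   retry pt-table-sync --execute h=mainserver,D=mydb,t=$table h=office1
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky_sync() {
    n=$(($(cat "$attempts_file") + 1))
    echo "$n" > "$attempts_file"
    [ "$n" -ge 3 ]
}

retry flaky_sync
status=$?
echo "sync finished with status $status after $(cat "$attempts_file") attempts"
rm -f "$attempts_file"
```

Looping this per table keeps one bad transfer from aborting the whole nightly run.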

Could somebody give me advice on how to tune pt-table-sync and the MySQL server so that the connection becomes more robust and can cope with more packet loss?

You may be interested in this article:
In short, you may need to increase net_read_timeout and net_write_timeout.
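These are server-side session/global variables; a sketch of raising them at runtime (requires an administrative account) and persisting them across restarts. The 600-second values are only examples to tune for your link:

```shell
# Raise the server's network timeouts so a stalled, lossy connection is
# not dropped as quickly.
mysql -u root -p <<'SQL'
SET GLOBAL net_read_timeout  = 600;  -- default is 30 seconds
SET GLOBAL net_write_timeout = 600;  -- default is 60 seconds
SQL

# To survive a server restart, put the same settings in my.cnf:
#   [mysqld]
#   net_read_timeout  = 600
#   net_write_timeout = 600
```

Note that SET GLOBAL only affects connections opened after the change, so reconnect your sync job afterwards.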

Btw, why do you need to run pt-table-sync so regularly? Isn’t MySQL replication doing what it is supposed to do?