What does the --compress-zstd-level parameter actually do? The options reference only has this to say:
“This option specifies ZSTD compression level. The default value is 1. Allowed range of values is from 1 to 19. The option was implemented in Percona XtraBackup 8.0.30-22.”
This doesn’t actually explain anything. What does the level control? When should one use the default? What makes a different level better? Thanks.
Agreed, I will file a JIRA issue to improve the documentation.
Here is a brief description of compression levels:
“Compression levels provide granular trade-offs between compression speed and the amount of compression achieved. Lower compression levels provide faster speed but larger file sizes. For example, you can use level 1 if speed is most important and level 19 if size is most important”
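The trade-off is easy to see with any leveled compressor. A quick sketch using gzip as a stand-in (gzip exposes the same "higher level = smaller but slower" model over levels 1 to 9; zstd's range is 1 to 19, and the sample file path is just an illustration):

```shell
# Generate a compressible sample (repetitive CSV-like rows).
yes "user_id,order_id,status,created_at" | head -n 200000 > /tmp/sample.csv

# Compare output size at a fast, a middle, and a slow level.
for level in 1 6 9; do
  size=$(gzip -"$level" -c /tmp/sample.csv | wc -c | tr -d ' ')
  echo "level $level: $size bytes"
done
```

Timing each loop iteration (e.g. with `time`) on your own data gives the other half of the trade-off: the higher levels cost noticeably more CPU for the extra reduction.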
I would say 3 is the best choice. In fact, I will ask the developers to check whether we can make 3 the default.
Thank you for the explanation. This is a production server, so trading slightly larger backups for fewer resources used makes sense. I’ll use level 3 for now and experiment with a test server to see what impact the compression process has on our hardware.
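For reference, a level-3 backup invocation could look like the sketch below. The user, target directory, and `--parallel` value are placeholders for your environment; `--compress=zstd` and `--compress-zstd-level` require Percona XtraBackup 8.0.30-22 or later.

```shell
# Sketch: take a zstd-compressed backup at level 3.
# Adjust user, parallelism, and paths for your own setup.
xtrabackup --backup \
  --user=backup_user \
  --compress=zstd \
  --compress-zstd-level=3 \
  --parallel=4 \
  --target-dir=/backups/$(date +%F)
```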
Hi @AlexHall. Regarding the default level:
This is highly dependent on the data being compressed. For example, with sysbench-generated data, compression levels above 1 actually give a worse compression ratio, while with the imdb sample database, higher compression levels give a better compression ratio.
With that in mind, we have decided to keep 1 as the default; users should test based on their own data.
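That data-dependence is reproducible with any compressor. A small sketch, again using gzip as an analogy for the leveled-compression behavior (the `/tmp` files are just illustrative): low-entropy data compresses dramatically at any level, while already-random data barely compresses no matter how high the level.

```shell
# Compression ratio depends heavily on the input, not just the level.
yes "abcabcabc" | head -n 100000 > /tmp/repetitive.txt   # low entropy
head -c 900000 /dev/urandom > /tmp/random.bin            # high entropy

for f in /tmp/repetitive.txt /tmp/random.bin; do
  for level in 1 9; do
    size=$(gzip -"$level" -c "$f" | wc -c | tr -d ' ')
    echo "$f level $level: $size bytes"
  done
done
```

This is why testing against your own tables, rather than a synthetic benchmark, is the only reliable way to pick a level.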
More info at [PXB-2991] compress-zstd-level=X makes backup size larger that default level 1 - Percona JIRA