Currently, I have pbm in an environment where the bandwidth to the storage (an S3-compatible service) is limited essentially only by the network bandwidth of my host. Under typical circumstances this would be great.
The problem is that, right now, this results in an extreme network spike when the full backup kicks off - a multi-gigabit burst for a short period, which is being felt by other apps/servers.
It would be really nice if there were at least limited support for throttling S3 requests.
At its very simplest, I think this could be as trivial as a config option ‘storage.s3.putDelay: milliseconds’ that adds a sleep after every put.
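Roughly what I have in mind (just a sketch, not PBM internals - `put`, the chunk loop, and the `putDelayMs` option are all hypothetical stand-ins for whatever the uploader actually does):

```go
package throttle

import "time"

// uploadWithPutDelay pauses for a fixed putDelay after every chunk upload.
// Crude, but it caps the burst rate without touching the upload path itself.
func uploadWithPutDelay(chunks [][]byte, put func([]byte) error, putDelayMs int) error {
	delay := time.Duration(putDelayMs) * time.Millisecond
	for _, c := range chunks {
		if err := put(c); err != nil {
			return err
		}
		time.Sleep(delay)
	}
	return nil
}
```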
You could certainly make this more complex if you wanted - the next level up could be an actual rate limit applied on a per-chunk basis. Example - if you set a limit of 10 MB/sec and the upload of a particular 10 MB chunk took 500ms, delay 500ms before the next chunk. That would be choppier, but it would at least eliminate the ‘fire hose on, ok, done’ behavior.
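Something like this, where `rateBytesPerSec` is the hypothetical new setting - each chunk gets a time budget from the configured rate, and if the upload finished early we sleep off the remainder:

```go
package throttle

import "time"

// putRateLimited uploads one chunk, then sleeps for whatever is left of the
// chunk's time budget at the configured rate. E.g. a 10 MB chunk at 10 MB/s
// has a 1 s budget; if the upload took 500 ms, we wait the remaining 500 ms.
func putRateLimited(chunk []byte, put func([]byte) error, rateBytesPerSec float64) error {
	start := time.Now()
	if err := put(chunk); err != nil {
		return err
	}
	budget := time.Duration(float64(len(chunk)) / rateBytesPerSec * float64(time.Second))
	if elapsed := time.Since(start); elapsed < budget {
		time.Sleep(budget - elapsed)
	}
	return nil
}
```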
Another related bit would be allowing an override of the concurrency setting - right now you hard-code it at NumCPU/2 - I might want to throttle that back to a single worker to make things intentionally slower.
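In sketch form (again hypothetical - the config field doesn’t exist today, and the fallback just mirrors the current NumCPU/2 behavior as I understand it):

```go
package throttle

import "runtime"

// uploadWorkers returns the number of concurrent upload workers: an explicit
// override if one is configured (e.g. 1 to force strictly serial uploads),
// otherwise the current NumCPU/2 default, clamped to at least 1.
func uploadWorkers(configured int) int {
	if configured > 0 {
		return configured
	}
	if n := runtime.NumCPU() / 2; n > 1 {
		return n
	}
	return 1
}
```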
In the meantime, I’m going to look at trying out ‘trickle’ and changing how pbm-agent is invoked.