My favorite was a DB project where the DB would accept DELETEs faster than the SSD could write them to disk. The eventual solution was a proportional controller: it watched the "still to be persisted to disk" backlog and kept it at ~66% of the value at which the DB would stop accepting queries, by selectively inserting sleeps into the DELETE-submitting process. We could have fixed the rate at some known-good but low value, but implementing a controller let us speed up during the night (when there was little user traffic) while still not stressing the DB too much when many users were online.
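The idea can be sketched roughly like this (a minimal proportional throttle; all the names and numbers here are illustrative assumptions, not the actual system):

```python
# Hypothetical sketch of the proportional throttle described above.
# BACKLOG_LIMIT, TARGET, etc. are made-up illustrative values.

BACKLOG_LIMIT = 1_000_000          # backlog at which the DB stops accepting queries
TARGET = 0.66 * BACKLOG_LIMIT      # setpoint: ~66% of the hard limit
MAX_SLEEP = 1.0                    # seconds; cap on the inserted delay
GAIN = MAX_SLEEP / (BACKLOG_LIMIT - TARGET)  # proportional gain

def throttle_delay(backlog: int) -> float:
    """Sleep to insert before submitting the next DELETE batch.

    Zero while the backlog is below the setpoint, then grows linearly
    with the error, saturating at MAX_SLEEP at the hard limit.
    """
    error = backlog - TARGET
    if error <= 0:
        return 0.0
    return min(GAIN * error, MAX_SLEEP)
```

The DELETE-submitting loop would then do something like `time.sleep(throttle_delay(get_backlog()))` before each batch; when traffic is low the backlog drains, the error goes negative, and the loop runs at full speed for free.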
Yeah, I especially like to use control theory to reduce the number of configuration parameters, or to replace meaningless ones (requests per second, batch sizes) with more meaningful ones (acceptable latency, acceptable failure rate).
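As a toy illustration of that swap, assuming a batch-processing loop where we measure per-batch latency (the constants and names here are hypothetical): the operator sets an acceptable latency, and a simple proportional rule picks the batch size for them.

```python
# Hypothetical sketch: expose "acceptable latency" instead of "batch size",
# and let a proportional rule derive the batch size each iteration.

TARGET_LATENCY = 0.050       # seconds; the operator-facing parameter
MIN_BATCH, MAX_BATCH = 1, 10_000

def next_batch_size(current: int, observed_latency: float) -> int:
    """Scale the batch size by how far observed latency is from target."""
    if observed_latency <= 0:
        return current  # no signal yet; keep the current size
    scaled = int(current * TARGET_LATENCY / observed_latency)
    return max(MIN_BATCH, min(scaled, MAX_BATCH))
```

If batches come back twice as slow as acceptable, the batch halves; if they come back twice as fast, it doubles, clamped to sane bounds. The one knob left is the one users actually care about.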
It is sometimes super useful to think about large systems with a high volume of requests/messages as something that can be controlled.