> You might go down a huge engineering effort to build this
This is an overlooked issue: billing caps are hard to implement, and any cloud provider that offers them will likely take losses on them.
Take an object storage service as an example. Imagine Company X has a hard cap of US$1000, but some bug makes their software upload millions of files to a bucket and rack up their bill. Since object storage is billed per GB-month, they won't reach the cap until some time later in the month. Then, when they do, what does termination of service mean? Does the cloud provider destroy every last resource associated with the account the second the hard cap is reached? If they don't, and they still have to store those files somewhere in their infra, then they start taking a loss while Company X just says "oops, sorry".
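To make the timing concrete, here's a minimal sketch of how a per-GB-month charge accrues against a cap. The price, cap, and upload size are illustrative assumptions, not any provider's real numbers:

    # Sketch only: storage billed per GB-month accrues gradually, so a runaway
    # upload can sit under the cap for weeks. All numbers below are assumptions.
    PRICE_PER_GB_MONTH = 0.023   # assumed S3-Standard-like rate, USD
    DAYS_IN_MONTH = 30
    HARD_CAP = 1000.0            # Company X's hypothetical hard cap, USD

    def day_cap_is_hit(gb_stored: float):
        """Day of the month the accrued storage charge first crosses the cap."""
        for day in range(1, DAYS_IN_MONTH + 1):
            accrued = gb_stored * PRICE_PER_GB_MONTH * (day / DAYS_IN_MONTH)
            if accrued >= HARD_CAP:
                return day
        return None

    # A bug parks 100 TB in a bucket on day 1; the cap only trips around day 14,
    # long after the faulty upload itself has finished.
    print(day_cap_is_hit(100_000))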
That's what tptacek is talking about: you want to NOT destroy the customers' resources, because they can quickly figure out that something went wrong and adjust while still keeping their service running. But the longer you keep those resources around, the more you're paying out of pocket as a cloud provider. If you can't bill the overage to the customer, which is what a hard cap implies, then you're at a loss. And reclaiming every resource associated with an account the moment the cap is reached is an extreme measure no one wants.
A hard cap then becomes only a "soft" cap, a mere suggestion, and cloud providers would say "you hit the cap, but we had to keep your resources on the books for 12 hours, so here are the supplemental overage charges". Which would probably lead to just as many billing disputes as we have today.
$1000/mo in S3 Glacier Deep Archive buys you a petabyte (a million gigabytes) of storage. It’s hard to imagine such a small customer uploading a petabyte without noticing, and part of what happens when you hit the cap could be moving things from normal object storage to Glacier.
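The arithmetic behind that figure, assuming Glacier Deep Archive's published rate of roughly $0.00099 per GB-month (prices vary by region and change over time):

    # Back-of-the-envelope check of the petabyte claim; price is an assumption.
    deep_archive_price = 0.00099        # USD per GB-month (approximate)
    budget = 1000.0                     # USD per month
    print(budget / deep_archive_price)  # ~1,010,101 GB, i.e. about a petabyte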
If you turn off servers and shut off bandwidth you get rid of the vast majority of expenses.
Storage fees carry a lot less risk, but if you want to cap those, cap the number of gigabytes directly. That prevents the overage issues you describe.
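A quantity cap is also much simpler to enforce than a dollar cap, since the write can be rejected up front instead of letting charges accrue. A minimal sketch, with hypothetical names rather than any real provider's API:

    # Hypothetical byte-denominated quota: refuse the write, nothing to bill later.
    class Bucket:
        def __init__(self, quota_gb: float):
            self.quota_bytes = int(quota_gb * 1024 ** 3)
            self.used_bytes = 0

        def put_object(self, size_bytes: int) -> bool:
            """Accept the object only if it fits under the quota."""
            if self.used_bytes + size_bytes > self.quota_bytes:
                return False   # over quota: nothing stored, nothing to dispute
            self.used_bytes += size_bytes
            return True

    bucket = Bucket(quota_gb=500)
    print(bucket.put_object(200 * 1024 ** 3))   # True
    print(bucket.put_object(400 * 1024 ** 3))   # False: would exceed the 500 GB quota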