It makes the availability issues largely someone else's problem.
If YOUR server(s) go down for whatever reason, you're getting up at 3am and driving to the data centre to put the fire out, assuming you know how to troubleshoot the issue (let's face it, most devs probably don't since it's not their specialty) and have the means of fixing it (failed RAID card? Hope you've got a spare). Then there's the headache of when the data centre is on the other side of the world - get ready to cough up hundreds of dollars to wake up remote hands, and try to get them to do the work for you (probably pretty poorly).
If AWS goes down at 3am, you simply roll over, go back to sleep and have another look when you wake up.
Of course, this all depends on the software and cloud infra being built sensibly, but the same applies to "on-prem" solutions as well.
There are plenty of hybrid approaches. You can have a Cloudflare load balancer and enough redundancy to roll over and go back to sleep. I suspect you'd have fewer outages with dedicated hardware than with the AWS control plane. For some compute- or IO-intensive workloads it makes sense to self-host.
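The "roll over and go back to sleep" part of a hybrid setup hinges on automated failover between redundant origins. A minimal sketch of that decision logic, with hypothetical names throughout (`ORIGINS`, `choose_origin` are illustrative, not a real Cloudflare API):

```python
# Hypothetical failover sketch: pick a healthy origin so traffic keeps
# flowing without anyone waking up. Not a real load-balancer API.

ORIGINS = ["primary.example.com", "standby.example.com"]

def choose_origin(health):
    """Return the first origin reported healthy; fall back to the
    last one so traffic always has somewhere to go."""
    for origin in ORIGINS:
        if health.get(origin, False):
            return origin
    return ORIGINS[-1]

# In a real setup, `health` would come from periodic HTTP health
# checks (e.g. a load balancer's health monitors), not a dict.
print(choose_origin({"primary.example.com": False,
                     "standby.example.com": True}))
```

The point is that the failover decision is trivial; the real work is keeping the health checks honest and the standby actually able to take the load.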