Here is a link to what Amazon posted about the outage a few days ago. I originally wrote this post with only the later examples, not realizing that Amazon had posted another great message about the latest outage; when I went looking to find out how long that outage lasted, I was greatly pleased to discover yet another AWS post-mortem to read.
This is what Amazon posted during the really large outage last year (the one that, even then, only affected multiple availability zones for an hour or so at most):
Amazon's explanations are, I find, much more detailed (although this App Engine one was pretty good): when something serious goes wrong at AWS, we often get not only an apology (and a service credit) but also a lesson in how distributed systems work.
The only time we don't see explanations from Amazon is when a subset of the servers within a single availability zone (not even an entire zone) is inaccessible for less than an hour (which occasionally happens); otherwise, they honestly "kick ass" at post-mortems, as the examples above show.
It is my understanding, though, that Google has also had all kinds of random issues that affected only some customers and were dealt with in private, so they are no different there. The outage this morning, however, was "all of App Engine doesn't work anymore", something that has never happened even to AWS.
(Now, during an incident, Amazon really, really sucks, to the point where I'd often rather they say nothing than keep having their front-line support reassure people; that said, in the middle of a crisis, most systems and people suck.)