
The stuff that's exclusively hosted in us-east-1 is, to my knowledge, mostly things that maintain global uniqueness: CloudFront distributions, Route 53, S3 bucket names, IAM roles, and the like, i.e. singular control planes. Other than that, regions are about as isolated as it gets, except for specific features built on top.

Availability zones are supposed to be another fault boundary, and things are generally pretty solid, but every so often problems spill over when they shouldn't.

The general impression I get is that us-east-1's issues tend to stem from it being singularly huge.

(Source: Work at AWS.)




If I recall correctly, there was a point in time when the control plane for all regions was in us-east-1. I seem to recall an outage where the other regions were up, but you couldn't change any resources because the management API was down in us-east-1.


This was our exact experience with this outage.

Literally all our AWS resources are in EU/UK regions, and they all continued functioning just fine, but we couldn't sign in to the AWS console to manage said resources.

Thankfully the outage didn't impact our production systems at all, but our inability to access the console was quite alarming, to say the least.


The default region for global services, including https://console.aws.amazon.com, is us-east-1, but there are usually regional alternatives. For example: https://us-west-2.console.aws.amazon.com

It would probably be clearer that they exist if the console redirected to the regional URL when you switched regions.

STS, S3, etc. also have regional endpoints that have continued to work when us-east-1 has been broken in the past, and the various AWS clients can be configured to use them, which they sadly don't tend to do by default.
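
For example, a rough sketch with boto3 (assuming the Python SDK; the endpoint URL follows the documented sts.<region>.amazonaws.com pattern):

    import boto3

    # Point STS at a regional endpoint instead of the global one
    # (sts.amazonaws.com), which is served out of us-east-1.
    sts = boto3.client(
        "sts",
        region_name="us-west-2",
        endpoint_url="https://sts.us-west-2.amazonaws.com",
    )

    # This call now stays within us-west-2 rather than touching us-east-1.
    print(sts.get_caller_identity()["Arn"])

If I remember the setting names right, newer SDK versions also honor AWS_STS_REGIONAL_ENDPOINTS=regional (or sts_regional_endpoints = regional in ~/.aws/config), which flips the default without code changes.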



