There's another forum where a Jacksonville ARTCC controller echoed that they had 15 people call out (not due to some anti-vax movement) and had weather looming, on top of the military taking over half their airspace and not giving it back promptly (or the FAA getting the word but not passing it along). Constant understaffing, plus the fact that JAX can't stop departures from places like Atlanta (though other centers can), doesn't help either.
Inter-region traffic always goes over the backbone (this includes EIP to EIP). That also includes going from EC2 to any service like S3 in another region.
Except China. China to the rest of the world is not via the backbone.
I doubt that user-generated traffic between regions (for example, between EC2 instances in Dublin and Sydney) is automatically routed through their backbone unless you're using VPC peering, Transit Gateway, or PrivateLink. Can you point to the re:Invent presentation? Genuinely curious.
Thanks. To confirm: You're pinging between the EC2s using their public DNS, right?
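Something like this from one of the boxes, I assume (hostname is made up; note that from inside the same VPC the public name resolves to the private IP, so check what address you actually get back):

    import socket
    import subprocess

    # Placeholder public DNS name for the far-end instance -- not a real host.
    SYDNEY_HOST = "ec2-203-0-113-20.ap-southeast-2.compute.amazonaws.com"

    # From the Dublin instance: resolve the Sydney instance's public name,
    # then ping whatever address comes back to see which IP (and hence path)
    # is actually in use.
    addr = socket.gethostbyname(SYDNEY_HOST)
    print(f"pinging {SYDNEY_HOST} -> {addr}")
    subprocess.run(["ping", "-c", "10", addr], check=True)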
If the AWS backbone is used automagically, I wonder why anyone would pay for Transit Gateways or VPC Peering rather than do mTLS between their cross-region instances or tunnel via WireGuard-esque transports like Tailscale or defined.net, for example. Also, since when has this been the case, if you happen to know?
I'm curious what the bandwidth charges are for EC2-to-EC2 cross-region traffic when using their public IPs / DNS. Same as VPC Peering?
VPC Peering bandwidth rates are $0.01/GB. EC2 (public Internet?) bandwidth rates are $0.09/GB. For transfers from EC2 to EC2 over the AWS backbone, I assume I'd still be charged the public Internet bandwidth rates, right?
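For a sense of scale, treating those two figures as flat per-GB rates (the real pricing is tiered, varies by region pair, and may have changed since, so this is only a sketch):

    # Rough cross-region transfer cost comparison, using the per-GB rates
    # quoted above as flat rates (assumption: real AWS pricing is tiered
    # and region-dependent, so treat these numbers as illustrative only).
    VPC_PEERING_RATE = 0.01  # $/GB
    PUBLIC_RATE = 0.09       # $/GB

    def cost(gb: float, rate_per_gb: float) -> float:
        """Transfer cost in USD at a flat per-GB rate."""
        return gb * rate_per_gb

    volume_gb = 10_000  # say, 10 TB/month between two regions
    print(f"VPC Peering: ${cost(volume_gb, VPC_PEERING_RATE):,.2f}")  # $100.00
    print(f"Public rate: ${cost(volume_gb, PUBLIC_RATE):,.2f}")       # $900.00

At 10 TB/month that's a 9x difference, which is why which rate applies actually matters.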
It would revert to not being signed, which routes just fine; you just don't get the additional security benefits. It wouldn't become invalid, if I'm following what you're saying.
There are legitimate reasons why a network might have a small number of prefixes that are unsigned or even invalid: canaries and beacons.
For example, running tests against a signed, an unsigned, and an invalid prefix can provide insight into how other networks are routing to them.
One example is a beacon used to probe whether a network has enabled origin validation. A failure to connect, or a change in the routing path, can provide insight into which networks on the internet have enabled it.
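As a rough illustration of the kind of check you'd point at a canary prefix, here's a sketch against RIPEstat's rpki-validation endpoint (the ASN and prefixes below are placeholder documentation values, and the response field names are from memory, so double-check against the RIPEstat docs):

    import json
    import urllib.parse
    import urllib.request

    def rpki_status(origin_asn: str, prefix: str) -> str:
        """Ask RIPEstat how an origin ASN / prefix pair validates against
        published ROAs (roughly: valid / invalid / unknown)."""
        query = urllib.parse.urlencode({"resource": origin_asn, "prefix": prefix})
        url = f"https://stat.ripe.net/data/rpki-validation/data.json?{query}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            payload = json.load(resp)
        return payload["data"]["status"]

    # Placeholder documentation values -- substitute your own beacon prefixes.
    for pfx in ("192.0.2.0/24", "198.51.100.0/24"):
        print(pfx, rpki_status("AS64496", pfx))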
Lack of redundancy due to cost probably has something to do with it.
Same issue with internet peering in quite a few cities (no viable second location, and it costs a lot of money to double up the gear plus the additional optical transport).
Not to mention that in far too many places the second location is on the other side of the river from the first.
I was in finance when I looked up the physical address of our backup data center, which was in another city ... 5km upstream from our main data center. Both were within the 30-year flood plain of the river.
AT&T has been pushing folks really hard in the last year to take early retirement packages that give you a fraction of what you would have gotten if you'd stayed on until you were eligible. A friend of mine recently took the package, found a job elsewhere, and commented that morale is pretty low if you're in that camp.
Yep, it's the main spot in Nashville for voice stuff. It has local access switches (DMS-100), a tandem, and LD goes out of there. I think AT&T also has their long-haul optical transport and some core IP backbone stuff there.
I was the first person in my town to get residential ISDN back in ~1996. I was a fledgling admin on the network team and work paid for it. It took me almost three months to help the local telco get it up and running, but we could never get 64kbps on the data channels because of some weird issue with the switch. (Still beat the hell out of dial-up.)
I think your comment is the first time I’ve read or thought about the DMS 100 since then.
BellSouth took a few days to get my ISDN up and running, and it worked great for about a year [0] - then it went down and they couldn't get it working again. Bonded ISDN channels at 128kbps were amazing - fast, and no waiting 15+ seconds for an analog modem to handshake.
Tragically there was nothing simple about ISDN. A customer would have to know their switch type at the CO and there were a number of other things that could break it. At least with a T1/DS1 you only had to worry about SF/ESF, B8ZS/AMI and number of channels.
I’ve been working with some old telco equipment as part of a handful of projects and lord, has this hit me. Some old PRI terminals I have, in particular, are throwing me around left and right.
I could have this wrong, but we had offices in Amsterdam and Berlin, and ISDN in Europe wasn't nearly as polluted with old standards as it was in the US. I don't even think a 56kbps B channel was an option in Europe. I stood up dial plants on both continents and I distinctly recall it being plug and play over the pond.
We also used the ISDN to back up our circuits, and Cisco had a pretty cool dial-on-demand setup that would only bring up what was needed to service the load.
The most brutal thing I saw was when someone compromised a customer's ISDN router (one of the small Ascend boxes with the curses UI), changed the creds it used to log in to their ISP, disconnected it, and forced it to redial repeatedly. The local telco charged for every ISDN call if it was a business line, and since ISDN call initiation/setup is instant, they ended up with a several-thousand-dollar phone bill. I recall seeing the RADIUS server getting slammed with auth failures for days when that happened.
Yep!!! We had to tweak some settings because the router would constantly flap channels during DR tests and we were getting billed for the call setups (international ISDN calls were not cheap lol).
ISDN was pretty much plug and play when I had it back in the day. The gateways were available off the shelf (IIRC mine came from CompUSA) and you could get them with either one or two B channels, as I recall.
Yes indeed. I had the pleasure of working for Ascend Communications for 4 years. The bulk of our business was the Ascend MAX TNT, which could terminate dozens of PRI lines into hundreds of digital modems. They were the bread and butter for early ISPs. For BRI lines, there was the Pipeline 50, a remarkable little box with a BRI input and Ethernet output. Good times.
Back when I worked in a CO the engineering goal was to be able to run the building load on gen for at least 24 hours straight before getting fuel trucks. We had weekly gen tests to validate things were working as expected.
It could be that they had to de-energize some equipment to perform inspection and work within the facility. I know our facility had a number of procedures for how to de-energize parts of the building and inhibit the generators from feeding that area (a lesson learned from the Hinsdale CO fire years ago).
During Hurricane Katrina a number of us had to invent a procedure for de-energizing non-critical equipment to reduce the power load and keep critical services running for an extended period, since it was clear we weren't going to get utility power or resupplies for a while.
Dark Docs is kind of watered-down history. The previously mentioned book is great. Back in the 90s Discovery actually had documentaries that covered this with the people actually involved (including the person who proposed the operation) - now we have SHARK WEEK.
Interconnection facilities are the best places to be (or rather, what you want to be connected to in order to establish as many adjacencies as possible). 111 8th in NYC. 165 Halsey in Newark. 350 East Cermak in Chicago. Palo Alto for the Bay Area (but also Equinix in San Jose). Infomart in DFW. Etc.
Cable landing stations generally serve one cable, and the actual networks that use it are generally some distance away.