I've followed this project for over a decade and the amount of data they are moving around is fairly routine, given their budget size and access to computing and networking resources. The total storage (~40-50PB) is pretty large, but moving 10TB around the world isn't special engineering at this point.
It's not just the size of the data in bytes; it's also the number of changes that need to be detected and alerts that need to be sent out (estimated at millions a night). Keep in mind the downstream consumers of this data are mostly small scientific outfits with extremely limited software engineering budgets.
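For a rough sense of scale (my numbers, not theirs): a few million to ten million alerts spread over an 8-ish hour observing night is a sustained rate of hundreds per second, and each alert carries cutouts and history rather than just a row in a table. Back-of-envelope, with the alert count, night length, and payload size all assumed:

```python
# Back-of-envelope only; alert count, night length, and payload size are my
# assumptions, not official Rubin/LSST numbers.
alerts_per_night = 10_000_000      # "millions a night"
night_hours = 8                    # usable dark time
payload_kb = 80                    # alert packet with cutouts + history (assumed)

rate = alerts_per_night / (night_hours * 3600)
bandwidth_mbps = rate * payload_kb * 8 / 1000

print(f"~{rate:,.0f} alerts/sec sustained, ~{bandwidth_mbps:,.0f} Mbit/s of alert traffic")
# => ~347 alerts/sec sustained, ~222 Mbit/s of alert traffic
```

That's not a huge stream by industry standards, but it's a lot to filter and react to in near real time if you're a three-person science group.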
I've worked on quite a few large-scale scientific collaborations like this (and have worked on/talked to the lead scientists of LSST), and typically the end groups that do science aren't the ones handling the massive infrastructure. That work goes to well-funded sites with strong infrastructure, which then provide straightforward ways for the smaller science groups to operate on the bits of data they care about.
Personally, I have pointed the grid folks (I used to work on grid) towards cloud, and many projects like this have a Tier 1 in the cloud. The data lives in S3, the metadata in a database, and change notifications go through the cloud provider's notification system. The scientists work in adjacent AWS accounts that have access to those systems and can move data pretty quickly.
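As a sketch of what that looks like from the small group's side (queue and bucket names are made up, and the exact message shape depends on how the Tier 1 wires S3 events to SNS/SQS):

```python
# Minimal sketch of a downstream consumer in an adjacent AWS account.
# Assumes the data site publishes S3 "object created" events to an SNS topic
# and this account has an SQS queue subscribed to it; all names are hypothetical.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/survey-alerts"  # hypothetical

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])          # SNS envelope
        event = json.loads(envelope["Message"])     # S3 event notification
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Pull only the objects this group cares about
            s3.download_file(bucket, key, key.split("/")[-1])
        sqs.delete_message(QueueUrL=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]) if False else \
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

No cluster to run, no data center to keep up; the group just needs credentials and a small amount of glue code.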
The difference with this project is that the data from Rubin itself isn’t where most of the scientific value comes from; it’s from follow-up observations. Coordinating multiple observatories, all with varying degrees of programmatic access, in order to get timely observations is a challenge. But hey, if you insist on being an “everything is easy” Andy, I won’t bother anymore.
I've set up and built my own machines and clusters, as well as grids and industrial-scale infrastructure. I've seen many closet clusters, and clusters administered by grad students. Since then, I've gone nearly 100% cloud (with a strong preference for AWS).
In my experience there are many tradeoffs to using the cloud, but I think when you consider the entire context (people, cost, time, productivity), AWS ends up being a very powerful way to implement scientific infrastructure. However, in consortia like this it's usually architected so that people with local infrastructure (campus clusters, colo) can contribute, although they tend to be "leaf" nodes in processing pipelines rather than central players; something like the sketch below.
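To make the "leaf node" point concrete, the pattern I usually see is: the central site owns the bucket and assigns work, and a campus cluster pulls the chunks it signed up for, processes them locally, and pushes the derived products back. The bucket name, prefixes, and processing command below are invented for illustration:

```python
# Sketch of a "leaf" processing node at a campus cluster: fetch an assigned
# chunk from the central S3 bucket, process it locally, push the derived
# product back under the site's own prefix. All names/prefixes are invented.
import subprocess
import boto3

s3 = boto3.client("s3")
BUCKET = "survey-tier1-data"        # hypothetical central bucket
SITE = "campus-leaf-07"             # hypothetical site id

def process_chunk(chunk_key: str) -> None:
    local_in = "/scratch/" + chunk_key.split("/")[-1]
    local_out = local_in + ".catalog"

    s3.download_file(BUCKET, chunk_key, local_in)

    # The local science code runs here (often submitted to the campus
    # scheduler); a placeholder command stands in for the real pipeline step.
    subprocess.run(["my_photometry_pipeline", local_in, "-o", local_out], check=True)

    # Results flow back to the central store; the leaf node never has to
    # host the full dataset or serve anyone else.
    s3.upload_file(local_out, BUCKET, f"derived/{SITE}/{chunk_key.split('/')[-1]}.catalog")

process_chunk("raw/2025-11-03/patch-0421.fits")  # chunk assigned by the central pipeline
```

The central site keeps the hard problems (storage durability, catalogs, notification fan-out), and the local hardware contributes compute without being a single point of failure.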