I expected our offices would hit them if our devs weren't working from home. That'd be all the devs at a given site pulling images from the same public IP address.
Though in case that's what you're asking: we haven't actually hit the limits yet; they're being enforced progressively. We're just taking steps to avoid being impacted once they do start affecting us.
>Especially in CI pipelines that like to rebuild images from scratch.
If people are doing that at scale on free accounts then I can see why dockerhub feels the need to impose limits on their free offering. Also...this is why we can't have nice things.
We hit them with our CI processes. I was actually a bit surprised it happened, because we only do 10-15 builds a day, which shouldn't have triggered the throttle. Maybe CircleCI runs some background checks we don't know about, or something.
Most CI systems use GET requests to fetch image manifests, in order to check what the registry's most recent image is. These requests count towards the limits under Docker's new rules.
Systems built on top of the GGCR library[0] are switching to HEAD requests instead[1]. A HEAD request doesn't fetch the manifest body; the registry returns the manifest's digest in a response header, which is enough to detect that the image has changed.
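The digest-check idea can be sketched in a few lines of Python. This is not GGCR's actual API, just an illustration against a local stub registry (the handler, helper names, and fake digest are all mine); a real registry exposes the same `Docker-Content-Digest` header on `/v2/<name>/manifests/<tag>`:

```python
import http.server
import threading
import urllib.request

DIGEST = "sha256:deadbeef"  # fake digest for the stub

class StubRegistry(http.server.BaseHTTPRequestHandler):
    """Minimal stand-in for a registry's manifest endpoint."""
    def do_HEAD(self):
        # A real registry returns the manifest's digest as a header,
        # so clients can detect changes without downloading the body.
        self.send_response(200)
        self.send_header("Docker-Content-Digest", DIGEST)
        self.end_headers()
    def log_message(self, *args):
        pass  # keep output quiet

def head_digest(url: str) -> str:
    """Fetch only the manifest's digest via a HEAD request."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Docker-Content-Digest"]

server = http.server.HTTPServer(("127.0.0.1", 0), StubRegistry)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/v2/library/alpine/manifests/latest"
cached = "sha256:deadbeef"          # digest from the last successful check
remote = head_digest(url)
print("changed" if remote != cached else "unchanged")  # → unchanged
server.shutdown()
```

Note that against the real Docker Hub you'd also need a (possibly anonymous) auth token, which this sketch skips.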