
Still trying to wrap my head around who is actually hitting those Docker Hub rate limits



I expected our offices would hit them if our devs weren't working from home. That'd be all the devs at a given site pulling images from the same public IP address.

Though in case that's what you're asking: we haven't actually hit the limits yet, since they're being enforced progressively. We're just taking steps to avoid being impacted once they do start affecting us.


My CI/CD workers that run a few hundred jobs per hour?


And they're configured to re-download the images every single time? No local caching whatsoever?


All the people who don't have a private registry mirror set up.

Especially in CI pipelines that like to rebuild images from scratch.
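
For what it's worth, the client side of a mirror is just a daemon.json entry (typically /etc/docker/daemon.json) pointing the Docker daemon at a pull-through cache; a minimal sketch, assuming you run your own cache at the placeholder hostname below (e.g. the stock registry image in proxy mode):

    {
      "registry-mirrors": ["https://mirror.example.internal"]
    }

Note this only applies to pulls from Docker Hub (the default registry); pulls that explicitly name another registry still go direct.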


>Especially in CI pipelines that like to rebuild images from scratch.

If people are doing that at scale on free accounts, then I can see why Docker Hub feels the need to impose limits on its free offering. Also... this is why we can't have nice things.


We hit them with our CI processes. I was actually a bit surprised that it happened, because we only do 10-15 builds a day, which shouldn't have triggered the throttle. Maybe there are some background checks happening in CircleCI that we don't know about, or something.


Most CI systems use GET requests to fetch image manifests in order to see what the registry's most recent image is, and under Docker's new rules those requests count towards the limits.

Systems built on top of the GGCR library[0] are switching to HEAD requests instead[1]. Those don't fetch the entire manifest; they rely on just the response headers to detect that a change has occurred (rough sketch below, after the links).

[0] https://github.com/google/go-containerregistry

[1] https://github.com/concourse/concourse/releases/tag/v6.7.0
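
For anyone curious what that change looks like in practice, here's a rough sketch using GGCR (the image reference is just an example); remote.Head resolves a tag to a digest with a single HEAD request instead of pulling the manifest body:

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func main() {
        ref, err := name.ParseReference("alpine:3.12")
        if err != nil {
            log.Fatal(err)
        }

        // HEAD the manifest: the registry answers with the digest, media type
        // and size in response headers, without serving the manifest itself.
        desc, err := remote.Head(ref)
        if err != nil {
            log.Fatal(err)
        }

        // A checker would compare this digest against the one it saw last
        // time and only do a full fetch/build when it changes.
        fmt.Println(desc.Digest, desc.MediaType, desc.Size)
    }

(Compare remote.Get, which fetches the manifest body and, under the new rules, counts as a pull.)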



