Overcome the Docker Hub pull rate limit while also improving security and governance.
As someone working with Docker container images, you may have come across the new Docker Hub pull rate limit, or perhaps you are here because you already encountered the error message "You have reached your pull rate limit" and are now looking for a way to resolve it:
docker pull wordpress:latest
Error response from daemon: pull access denied for wordpress, repository does not exist or may require 'docker login': denied: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Look no further, as we show different ways to overcome the Docker Hub pull rate limit.
In order to better evaluate the possible solutions, let's first try to understand how the rate limit works:
- 100 pulls per 6 hours for anonymous users, enforced via IP address.
- 200 pulls per 6 hours for authenticated users on the free tier.
- No pull limit for paying customers.
At first glance, the numbers of 100/200 pulls per 6 hours don't look too restrictive, until you find out how Docker Hub measures the rate limit: pulls are counted as GET requests on registry manifest URLs (/v2/*/manifests/*).
In plain English this means: if you, for example, execute docker pull alpine:3.12 twice, your rate limit is deducted by two, even though on the second command execution no image data was transferred. Every docker pull counts against your quota.
Hitting the request limit is a piece of cake if you deploy your application stack to a cluster that's behind a NAT, or if you use Always as your pull policy.
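To illustrate, here is a minimal Kubernetes Deployment fragment (names and image are placeholders, not from the original article) showing how the pull policy affects your quota:

```yaml
# Hypothetical Deployment fragment -- "web" and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: wordpress:latest
          # "Always" re-checks the manifest on every pod start, which
          # counts against the Docker Hub quota each time.
          # "IfNotPresent" only pulls when the image is missing on the node.
          imagePullPolicy: IfNotPresent
```

With three replicas and an Always policy, every rolling restart alone would cost you several pulls from your 6-hour budget.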
If you want to dive into the details, for example to see what your current quota is, the Download Rate Limit section in the Docker Hub documentation is a good starting point.
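As a quick check from the command line, the Docker documentation describes fetching an anonymous token and inspecting the rate-limit headers returned for a manifest request; a sketch, assuming curl and jq are installed (the HEAD request itself does not count against your quota):

```shell
# Fetch an anonymous pull token for the special ratelimitpreview/test repo.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# Inspect the rate-limit headers on a HEAD manifest request.
curl -s --head -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i ratelimit
# e.g. ratelimit-limit: 100;w=21600
#      ratelimit-remaining: 98;w=21600
```

The `w=21600` window is the 6-hour period (in seconds) over which pulls are counted.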
You probably didn't come here for this recommendation, but reassess it once more: Docker Hub is the de facto go-to place for all open-source software binaries and is free for public images. The pricing for unlimited private repositories is fair as well!
If you are an individual, or a small team of 2-10 people who just need a space to store images, then paying $5 to $7/month per user is the simplest solution.
Mirroring or copying images from Docker Hub to your own registry might seem like overkill at first glance, but it has two major benefits regarding security and governance. Especially if you are using containers in an enterprise context, it is considered best practice.
Apart from mirroring images manually with the docker CLI via pull, tag, and push, two solutions exist that can help automate the replication process.
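The manual variant looks like this; registry.example.com/mirror is a placeholder for your own private registry and project:

```shell
# Pull the image from Docker Hub (counts against your quota one last time).
docker pull wordpress:5.5

# Re-tag it for your private registry -- registry.example.com is a placeholder.
docker tag wordpress:5.5 registry.example.com/mirror/wordpress:5.5

# Authenticate against your registry and push the copy.
docker login registry.example.com
docker push registry.example.com/mirror/wordpress:5.5
```

From then on, all clusters and developers pull registry.example.com/mirror/wordpress:5.5 instead of hitting Docker Hub.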
The third option is similar to option #2 and serves as an immediate step to overcome the Docker Hub rate limit. The advantage of this option is that no replication rules are needed, yet you get the same security and governance benefits as with option #2.
The Proxy Cache is part of Harbor 2.1 and our Dedicated Container Registry Service. It is easy to set up: when creating a new project, select Proxy Cache and choose the registry endpoint you want to cache.
After the Proxy Cache is set up, you can test it by pulling an image via the cache, prefixing the image name with your registry and project. For example, change this:

docker pull postgres:13

to this:

docker pull c8n.io/proxy/postgres:13
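For scripted environments, the rewrite is a simple prefix operation; a minimal sketch, assuming c8n.io/proxy is your proxy-cache registry and project as in the example above:

```shell
# Rewrite a Docker Hub image reference so it is pulled via the proxy cache.
# The default prefix "c8n.io/proxy" is the registry/project from the example.
proxy_ref() {
  local ref="$1"
  local prefix="${2:-c8n.io/proxy}"
  echo "$prefix/$ref"
}

proxy_ref postgres:13        # -> c8n.io/proxy/postgres:13
proxy_ref wordpress:latest   # -> c8n.io/proxy/wordpress:latest
```

Such a helper can be used in CI pipelines or templating scripts so that no manifest ever references Docker Hub directly.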
For the policy to work as desired, it is necessary to enforce that images are fetched from the private registry and never from Docker Hub. For Kubernetes, two solutions exist to ease this workflow.
Portieris is a Kubernetes admission controller to enforce image security policies. You can create image security policies for each Kubernetes Namespace, or at the cluster level, and enforce different rules for different images.
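A sketch of such a policy, based on the Portieris ImagePolicy resource format (the policy name and repository pattern are assumptions for illustration):

```yaml
# Hypothetical Portieris policy: only images from the proxy-cache project
# are admitted in this namespace; anything not matching is denied.
apiVersion: portieris.cloud.ibm.com/v1
kind: ImagePolicy
metadata:
  name: allow-only-proxy-cache
  namespace: default
spec:
  repositories:
    - name: "c8n.io/proxy/*"
      policy: {}
```

Because Portieris denies images that match no repository entry, a pod referencing docker.io/postgres:13 directly would be rejected at admission time.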
Kubernetes Dynamic Admission Control allows you to define callbacks that rewrite image specs to point to your internal registry. There is no ready-made solution yet.
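The wiring for such a callback is a standard MutatingWebhookConfiguration; a sketch under the assumption that you run your own webhook server (service name, namespace, and path are placeholders; the mutation logic itself is up to you):

```yaml
# Hypothetical registration of a mutating webhook that rewrites pod image
# references to the internal registry. The "image-rewrite" service and its
# /mutate endpoint must be implemented and deployed by you.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: image-rewrite-webhook
webhooks:
  - name: image-rewrite.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        name: image-rewrite
        namespace: kube-system
        path: /mutate
      caBundle: ""   # base64-encoded CA bundle for your webhook's TLS cert
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```

The webhook server receives each pod on creation and can patch spec.containers[].image before the pod is scheduled.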
Having unauthenticated access to the container registry is convenient: no passwords, no users, it just works. If authentication is required, a registry credential operator tries to auto-inject secrets into the right places.
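Without such an operator, the manual equivalent is an image pull secret referenced from the pod spec; a sketch, with the secret name "regcred" and the image as placeholders:

```yaml
# Pod fragment referencing an image pull secret named "regcred", created
# beforehand with: kubectl create secret docker-registry regcred ...
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app
      image: c8n.io/proxy/postgres:13
```

A credential operator automates exactly this step: creating the secret in every namespace and attaching it to the relevant service accounts.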
As you saw, with the right tools it is possible to overcome the Docker Hub pull limit. While it is still uncommon to use a proxy in front of Docker Hub, this method will become the new norm, and others will follow with the same feature set. Container users have been putting all their eggs (or containers) into the same basket for too long by fetching all their base images from only one place. Now that we see the fragility of that system, it will become the new norm for everyone to regain control of all the images they use and store them in their own registry. The Docker Hub proxy approach is a neat solution to regain that control.