Tags: docker, google-cloud-platform, docker-registry, google-container-registry, google-artifact-registry

Best Practices for Migrating Docker Images from Google Container Registry (GCR) to Artifact Registry with High Volume of Tags


Hi community!

I am migrating our Docker images from the deprecated Google Container Registry (GCR) to Google Cloud Artifact Registry. I am seeking advice on best practices and strategies to handle this migration efficiently, especially considering the large scale of our Docker images.

Our current setup is a GCR repository in the EU region, eu.gcr.io/my-company/, containing over 500 Docker images, each with more than 500 tags.
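Before choosing a tool, it helped us to size the job. Here is a minimal inventory sketch, assuming you are authenticated with gcloud and have read access to the registry (the repository path is ours; substitute your own):

```bash
#!/usr/bin/env bash
# Enumerate every image in the GCR repository and count its tagged
# digests, to get a concrete inventory before migrating.
set -euo pipefail

for repo in $(gcloud container images list \
    --repository=eu.gcr.io/my-company --format='value(name)'); do
  # --filter='tags:*' keeps only digests that carry at least one tag
  tag_count=$(gcloud container images list-tags "$repo" \
      --filter='tags:*' --limit=unlimited --format='value(tags)' | wc -l)
  echo "${repo}: ${tag_count} tagged digests"
done
```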

We initially considered using the "mirroring" feature to transition from GCR to Artifact Registry, by creating a gcr.io-hosted Artifact Registry repository that keeps the same eu.gcr.io/my-company/ domain. However, several challenges have arisen with this approach:

  1. Loss of new features: By keeping the mirrored eu.gcr.io domain, we cannot take advantage of the newer features offered by Artifact Registry.
  2. Inaccessibility of old images: After setting up Artifact Registry and redirecting traffic to it, the older images in GCR become inaccessible. Is there a way to maintain access to these older images post-migration, or a strategy to re-host them within Artifact Registry without losing their references?
  3. Tooling limitations: We looked into using gcrane for the migration, but it only supports migrating from *.gcr.io to pkg.dev directly (see the sketch after this list). Are there alternative tools or scripts that can handle this many images and tags efficiently?
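For reference, this is the kind of direct *.gcr.io → pkg.dev copy that gcrane does support; the destination path below is a placeholder, and note that this lands everything on the pkg.dev domain rather than on the mirrored eu.gcr.io one:

```bash
# Recursively copy every repository and tag under the source path.
# Authenticate against both registries first, e.g.:
#   gcloud auth configure-docker eu.gcr.io europe-docker.pkg.dev
gcrane cp -r eu.gcr.io/my-company \
  europe-docker.pkg.dev/my-company/eu.gcr.io
```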

Given these challenges, I'm looking for advice on:

  1. The best approach to migrate from GCR to Artifact Registry considering the scale of our repositories.
  2. Strategies to ensure a smooth transition without losing access to our existing images and tags.
  3. Any tools or services that facilitate this migration, especially for repositories with a significant number of images and tags.

Thank you in advance for your insights and recommendations.


Solution

  • Sharing this as a community wiki for the benefit of others

    As mentioned by @BMitch:

    For copying the data itself, there are various tools for that, including skopeo sync (from RedHat), crane copy --all-tags (from Google) and regclient/regsync (my own project). These operate at the OCI level, so you'd still need something to transition any administration of those repos, including authentication. For handling 500 repositories, that will depend on whether _catalog is supported (probably not on GCR) or how much scripting you are comfortable with.
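    To make those suggestions concrete, here is a hedged sketch of the per-repository copy step with two of the tools named above. Repository and target paths are placeholders, and both registries must be authenticated first:

    ```bash
    # Option 1: skopeo sync copies every tag of a source repository;
    # the image name is appended to the destination path automatically.
    skopeo sync --src docker --dest docker \
      eu.gcr.io/my-company/my-image \
      europe-docker.pkg.dev/my-company/containers

    # Option 2: crane copies all tags from one repository to another.
    crane copy --all-tags \
      eu.gcr.io/my-company/my-image \
      europe-docker.pkg.dev/my-company/containers/my-image

    # Since GCR likely does not expose the /v2/_catalog endpoint, drive
    # the loop from gcloud's repository listing instead:
    for repo in $(gcloud container images list \
        --repository=eu.gcr.io/my-company --format='value(name)'); do
      crane copy --all-tags "$repo" \
        "europe-docker.pkg.dev/my-company/containers/${repo#eu.gcr.io/my-company/}"
    done
    ```

    Whichever tool you pick, copying at the OCI level preserves digests and tags, so existing references keep resolving after you repoint clients at the new registry; IAM bindings and any automation around the repos still have to be recreated separately.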