In our [last deep dive](https://nanotechbytes.com/post/docker-optimization-multi-stage-distroless-guide?type=article), we minimized our image size by 86% using Multi-Stage builds and Distroless images. But optimizing the image is only half the battle. The other half is optimizing the build process itself. Welcome to V4: Enterprise Build Execution.
Docker Buildx is a CLI plugin that extends the Docker command with the full power of Moby BuildKit. It transforms building from a simple linear process into a high-performance, concurrent graph execution.
Docker Bake is a high-level command built on top of Buildx. Think of it as "Make for Docker". It allows you to define all your build targets, arguments, and platforms in a declarative configuration file (docker-bake.hcl), replacing complex shell scripts and docker build flags.
Understanding what Bake does under the hood is critical before implementing it in your CI/CD environment. Here are the core mechanics:
Parallel Execution: Automatically runs independent build stages concurrently (e.g., building frontend and backend at the same time).
Advanced Caching: Supports cache mounts (--mount=type=cache) to persist package manager caches between builds.
Multi-Platform Support: Builds for multiple architectures (AMD64, ARM64) in a single command.
Declarative Configuration: Defines build logic in HCL, JSON, or Compose files instead of flagged CLI commands.
Secret Management: Securely mounts secrets during build time without leaking them into the image layers.
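As a minimal sketch of the secret-mount mechanic (the secret id `npm_token` and the source file path are illustrative, not from the original project):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The secret is mounted as a file at /run/secrets/<id> for this RUN step
# only; it never becomes part of any image layer or the build cache.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) npm install
```

The secret is supplied at build time, e.g. `docker buildx build --secret id=npm_token,src=./npm_token.txt .`, so rotating it never requires rebuilding layers.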
These features translate into massive operational benefits. By reusing npm and apt caches, you stop re-downloading the internet on every build, slashing build times by 50% or more. Consistency across environments is guaranteed because "Works on my machine" becomes "Works everywhere" when the exact build definition is code, not a local shell history. Ultimately, your CI pipeline becomes a single command (docker buildx bake), regardless of how complex your build matrix is.
For teams running monorepo architectures with dozens of services, the parallel execution model is the single biggest win. A sequential build of ten services that each take 3 minutes means a 30-minute pipeline. With Bake, those same ten services build concurrently, capped only by the number of available CPU cores and network bandwidth. In practice, teams routinely cut 25-30 minute pipelines down to under 8 minutes after migrating to Bake.
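A minimal bake group for that scenario might look like the following sketch (the service names and paths are illustrative):

```hcl
# Running `docker buildx bake` builds every target in the default
# group concurrently, rather than one after another.
group "default" {
  targets = ["frontend", "backend"]
}

target "frontend" {
  context = "./frontend"
}

target "backend" {
  context = "./backend"
}
```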
Here is, step by step, how we transformed our optimised V3 project into an enterprise-ready V4 pipeline. In V3, every build re-downloaded node_modules. In V4, we use BuildKit cache mounts to persist the npm cache.
```diff
# V3: Standard Install
- RUN npm install && npm cache clean --force

# V4: Cached Install
+ RUN --mount=type=cache,target=/root/.npm npm install
```

Note: We removed `npm cache clean` because the cache mount lives outside the image layers and is managed by BuildKit, so cleaning it gains nothing, while keeping the cache speeds up the next build.
Even if you change package.json, BuildKit mounts the ~/.npm cache. You only download the diff (the new packages), not the whole world. For enterprise CI/CD, you should use distributed cache backends like GitHub Actions Cache (type=gha), AWS ECR (type=registry), or S3 (type=s3).
The cache mount pattern works equally well for other package managers. For Python projects, mount /root/.cache/pip. For Go, mount /go/pkg/mod. For Maven-based Java builds, mount /root/.m2. The principle is identical: give BuildKit a directory to persist between builds, and it will store whatever lands there during the RUN step, making every subsequent build faster than the last.
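As hedged sketches of the same pattern for each ecosystem (the commands assume typical project layouts and stock base images):

```dockerfile
# Python: persist the pip download and wheel cache
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt

# Go: persist the module cache
RUN --mount=type=cache,target=/go/pkg/mod go build ./...

# Maven: persist the local artifact repository
RUN --mount=type=cache,target=/root/.m2 mvn -B package
```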
One critical nuance: cache mounts are scoped per builder instance. If your CI provider spins up a fresh runner for every job (which most do by default), you must configure an external cache backend to actually benefit from this optimisation across pipeline runs. Local caching is still valuable for repeated local development cycles, but the real enterprise gain requires exporting your cache to a shared, persistent store.
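In a bakefile, wiring up a registry-backed cache is a sketch like this (the registry reference is illustrative):

```hcl
target "app" {
  context = "."
  # Export the full build cache to the registry alongside the image so
  # fresh CI runners can reuse it; mode=max caches intermediate layers too.
  cache-to   = ["type=registry,ref=myregistry.example.com/app:buildcache,mode=max"]
  cache-from = ["type=registry,ref=myregistry.example.com/app:buildcache"]
}
```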
Instead of running a long command filled with flags for platforms and tags, we created a docker-bake.hcl file:

```hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["app"]
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  platforms  = ["linux/amd64", "linux/arm64"]
  tags       = ["getting-started-todo-app:${TAG}"]
}
```

Now, building for production is just:

```shell
docker buildx bake app
```

The HCL format is the recommended choice for complex bakefiles because it supports variables, functions, and inheritance between targets. The group block is particularly powerful: it lets you define logical sets of services that should build together. A group "ci" might include only the services that changed, while group "release" builds every image tagged for the registry push. You can also use the inherits field inside a target to share common configuration like cache settings and build arguments across multiple targets without duplication.
With the platforms list in our Bakefile, we can build for both Intel (AMD64) and Apple Silicon (ARM64) simultaneously.
The Workflow:
Create a Builder: Standard Docker does not support multi-platform builds well. We create a new container-based builder:

```shell
docker buildx create --use --name mybuilder --driver docker-container --bootstrap
```

Build and Push:

```shell
docker buildx bake --push
```

Does using Bake make the final image smaller? Our image size remains a tiny 189MB (same as V3). Bake optimises the process, not the product. It makes the build 50% faster and 100% more reproducible, but it does not magically shrink the bytes. The size win came from Distroless (V3); the speed win comes from Bake (V4).
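After the push, you can verify that the manifest list in the registry actually contains both architectures:

```shell
# Prints the manifest list, including one entry per platform
docker buildx imagetools inspect getting-started-todo-app:latest
```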
BuildKit acts as a graph solver. It analyzes your Dockerfile and constructs a Directed Acyclic Graph (DAG) of dependencies.
Parallel Pulls: Instead of pulling layers sequentially, it identifies independent layers and downloads them simultaneously.
Independent Stages: Because we split client-build and backend-build into separate stages, BuildKit executes them in parallel threads.
Lazy Loading: When using remote drivers or advanced exporters, BuildKit can even lazy-load layers, prioritising what is needed for the immediate step.
Understanding this DAG model changes how you write Dockerfiles. If you have a stage that only depends on the base image and not on any earlier COPY instructions, you can declare it as an independent stage and BuildKit will execute it in parallel with everything else. This is especially useful for test stages. Run your lint check, your unit tests, and your integration tests as three separate stages with no interdependencies, and BuildKit runs all three at once.
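A sketch of that structure (the npm script names are illustrative; the final `COPY --from` lines exist only to make the image depend on the test stages, forcing them to run):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS base
WORKDIR /app
COPY . .
RUN npm ci

# These three stages depend only on `base`, so BuildKit runs them in parallel
FROM base AS lint
RUN npm run lint

FROM base AS unit-test
RUN npm run test:unit

FROM base AS integration-test
RUN npm run test:integration

# The final image depends on all three stages, so a failure in any of them
# fails the whole build
FROM base AS final
COPY --from=lint /app/package.json /tmp/lint-ok
COPY --from=unit-test /app/package.json /tmp/unit-ok
COPY --from=integration-test /app/package.json /tmp/int-ok
```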
Here is a summary of the commands we used to set up this new pipeline:
| Command | Purpose |
|---|---|
| docker buildx bake --print | Dry Run. Prints the resolved JSON configuration without building anything. Useful for debugging specific targets or variable overrides. |
| docker buildx bake app | Build. Builds the app target defined in docker-bake.hcl. |
| docker buildx bake --push | Push to Registry. Builds and directly pushes the image/manifest list to the registry. Required for multi-platform builds. |
| docker buildx bake app --set "*.platform=linux/arm64" | Override. Overrides the platform defined in the HCL file on the fly. |
The --print flag deserves extra attention. Before you commit a bakefile change and trigger a 10-minute CI run, use --print to validate that your variable substitutions resolved correctly and that every target has the platforms, tags, and build arguments you expect. It outputs pure JSON, which you can pipe to jq for inspection or diff against a known-good baseline in a pre-commit hook.
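For example, assuming the `app` target from our bakefile:

```shell
# Dry run: resolve variables and print the final build plan as JSON
docker buildx bake --print app

# Pipe to jq to inspect a single field, e.g. the resolved platforms
docker buildx bake --print app | jq '.target.app.platforms'
```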
We created a new builder using docker buildx create --use --name mybuilder --driver docker-container. The default docker driver is limited. It cannot load multi-platform images conveniently or perform advanced garbage collection of the build cache.
Driver Options:
docker (Default): Uses the pre-installed Docker daemon. Good for simple, single-platform builds.
docker-container (Recommended): Launches a dedicated BuildKit container. Required for advanced caching and multi-platform builds.
kubernetes: Spins up BuildKit pods in your K8s cluster. Perfect for scaling CI/CD runners dynamically.
remote: Connects to a remote BuildKit daemon (e.g., over mTLS).
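Creating a builder for each driver is a one-liner; as a sketch (the builder names, replica count, and endpoint are illustrative):

```shell
# Dedicated BuildKit container: the recommended default for CI
docker buildx create --use --name ci-builder --driver docker-container

# BuildKit pods inside a Kubernetes cluster, scaled to two replicas
docker buildx create --use --name k8s-builder \
  --driver kubernetes --driver-opt replicas=2

# Attach to an already-running remote BuildKit daemon
docker buildx create --use --name remote-builder \
  --driver remote tcp://buildkitd.example.com:1234
```

Switch between builders with `docker buildx use <name>`; `docker buildx ls` shows which one is active.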
We went from a bloated 1.4GB image (V0) to a secure 189MB Distroless image (V3), and finally to a high-performance build pipeline (V4).
V0: docker build . (Slow, Bloated)
V3: Multi-Stage + Distroless (Small, Secure)
V4: Buildx + Bake (Fast, Scalable, Automatable).
You can check the complete code here: https://github.com/mjgit007/dock-stack-todo
The progression is deliberate. Each version solves a different dimension of the production readiness problem. Image size affects runtime: smaller images pull faster, have a smaller attack surface, and cost less to store. Build speed affects developer experience and deployment frequency: faster builds mean tighter feedback loops and more confident shipping. Bake is the final piece that ties the whole system together, giving your team a single, auditable source of truth for how every image in your infrastructure gets built.
