Dockerizing a Monorepo: Best Practices

Explore Your Brain Editorial Team
Science Communication
Monorepos are fantastic for engineering velocity. Sharing validation schemas between a Next.js frontend and a Node.js API, utilizing a unified design system, and maintaining a single source of truth for dependencies solves dozens of integration headaches. However, when it comes time for deployment, monorepos present a unique challenge: How do you build a Docker image for just one application without accidentally packaging the entire repository?
If you naively execute COPY . . in a standard Dockerfile at the root of a monorepo, you will copy your API, your frontend, your documentation site, and all of their dependencies into the container. Your build times will skyrocket, your CI pipeline will crawl to a halt, and your final image size can easily exceed 2 GB. We need a strategy.
1. Understanding the Target Problem
Imagine the following repository structure:
my-monorepo/
├── apps/
│ ├── web-frontend/ (Next.js app)
│ └── backend-api/ (NestJS app)
├── packages/
│ ├── database/ (Prisma ORM models)
│ ├── ui-components/ (React components)
│ └── core-utils/ (Shared typescript functions)
├── package.json
└── turbo.json
Our goal is to build an isolated, minimal Docker image for the backend-api. The challenge is that backend-api depends on database and core-utils, but we absolutely do not want to download the heavy React dependencies required by ui-components or web-frontend.
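To make those dependency edges concrete, here is a hedged sketch of what apps/backend-api/package.json might declare. The package names come from the layout above; the NestJS dependency and all version numbers are placeholders, not taken from a real repo:

```shell
# Hypothetical apps/backend-api/package.json showing the internal workspace
# dependencies (names from the repo layout; versions are placeholders).
mkdir -p apps/backend-api
cat > apps/backend-api/package.json <<'EOF'
{
  "name": "backend-api",
  "version": "1.0.0",
  "dependencies": {
    "database": "*",
    "core-utils": "*",
    "@nestjs/core": "^10.0.0"
  }
}
EOF
# Yarn classic workspaces resolve "*" to the local packages/ copies.
# These declared edges are exactly what `turbo prune` walks.
grep -c '"\*"' apps/backend-api/package.json
```

Because ui-components is never listed here, a dependency-graph-aware tool can prove that React never needs to enter the API image.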
2. The Turborepo Prune Command
Turborepo solves this exact problem natively. The prune command takes a target application name and generates a sparse pseudo-repository containing ONLY the target app and the internal packages it depends on, transitive dependencies included.
npx turbo prune --scope=backend-api --docker
This command generates a new folder called out/. Inside this folder, you will find two critical things: out/json/ (a skeleton of only the necessary package.json files) and out/full/ (the actual source code of the API and its internal dependencies).
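The json/full split is what makes the caching in the next section work, so it helps to visualize it. The sketch below mocks the out/ layout described above with empty files (a stand-in for illustration, not real prune output) and counts the manifest skeleton:

```shell
# Mock of the out/ layout that `turbo prune --scope=backend-api --docker`
# produces; paths reconstructed from the description above, contents faked.
mkdir -p out/json/apps/backend-api \
         out/json/packages/database \
         out/json/packages/core-utils \
         out/full/apps/backend-api/src
# json/ holds ONLY the package manifests (small, rarely changing)...
touch out/json/package.json \
      out/json/apps/backend-api/package.json \
      out/json/packages/database/package.json \
      out/json/packages/core-utils/package.json \
      out/yarn.lock
# ...while full/ holds the actual source files (large, frequently changing).
touch out/full/apps/backend-api/src/main.ts
find out/json -name package.json | wc -l
```

Note what is absent: no ui-components, no web-frontend. The prune step already trimmed the graph.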
3. Crafting the Perfect Multi-Stage Dockerfile
We will use a multi-stage Docker build: two temporary, throw-away stages do the heavy lifting, and only the artifacts we care about are copied into a lean, final image.
Stage 1: The Pruner
First, we bring the whole repo into a temporary container just to run the prune command.
FROM node:18-alpine AS pruner
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install turbo globally
RUN yarn global add turbo
COPY . .
# Extract ONLY the dependencies required by 'backend-api'
RUN turbo prune --scope=backend-api --docker
Stage 2: The Installer
Now we leverage Docker's layer caching mechanism. We copy ONLY the package.json skeletons extracted by the pruner. If we haven't changed our package dependencies, Docker completely skips the expensive yarn install step on subsequent builds!
FROM node:18-alpine AS installer
RUN apk add --no-cache libc6-compat
WORKDIR /app
# First copy the json skeleton and lockfile
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/yarn.lock ./yarn.lock
# Run install. This layer is fully cached unless packages change.
RUN yarn install --frozen-lockfile
# Now copy the actual source code
COPY --from=pruner /app/out/full/ .
# Execute the build script defined in the backend-api package.json
RUN yarn turbo run build --filter=backend-api...
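Why does copying the json skeleton first pay off? Docker keys each COPY layer's cache on a checksum of the copied files, and a cache miss invalidates every layer below it. The toy model below simulates that decision in plain shell; it is a hypothetical sketch of the mechanism, not how Docker literally computes cache keys:

```shell
# Toy model of Docker's layer cache: the "install" layer re-runs only when
# the checksum of the copied manifest changes (hypothetical helper, not Docker).
mkdir -p demo && cd demo
echo '{"dependencies":{"core-utils":"*"}}' > package.json

cache_key() { sha256sum package.json | cut -d' ' -f1; }

last_key=""
maybe_install() {
  key=$(cache_key)
  if [ "$key" = "$last_key" ]; then
    echo "CACHED: skipping yarn install"
  else
    echo "MISS: running yarn install"
    last_key=$key
  fi
}

maybe_install                      # first build: cache miss, install runs
maybe_install                      # nothing changed: cache hit
echo 'source edit' > app.js        # touch a source file, not the manifest
maybe_install                      # manifest checksum unchanged: still cached
```

This is exactly why the Dockerfile copies out/json/ before out/full/: source edits land below the install layer and never disturb its cache.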
Stage 3: The Lean Runner
Finally, we create our production image. We explicitly ignore the source code and heavy build tools from the previous stages, copying over only the compiled assets and the production node_modules.
FROM node:18-alpine AS runner
WORKDIR /app
# Best practice: Do not run node as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nestjs
USER nestjs
# Copy the app manifest, the production node_modules, and the compiled output
COPY --from=installer --chown=nestjs:nodejs /app/apps/backend-api/package.json .
COPY --from=installer --chown=nestjs:nodejs /app/node_modules ./node_modules
COPY --from=installer --chown=nestjs:nodejs /app/apps/backend-api/dist ./dist
CMD ["node", "dist/main.js"]
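With all three stages in place, driving the build from the repo root is short. In the sketch below, the image tag backend-api:latest and port 3000 (the NestJS default) are assumptions, not values from the Dockerfile above; the guard makes the script degrade to a no-op message on machines without Docker:

```shell
# Hedged build-and-run driver; tag and port are assumptions, adjust to taste.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t backend-api:latest .
  docker run --rm -p 3000:3000 backend-api:latest
else
  echo "docker or Dockerfile unavailable; skipping image build"
fi
```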
4. Benchmarking the Improvements
By implementing this multi-stage pruned architecture, you can expect massive improvements:
- Image Size: Drops from typical sizes of 2.5GB down to ~150MB.
- Security: The final image does not contain any source code, raw environment files, or build toolchains like TypeScript.
- Build Speed: Since yarn install is heavily cached, rebuild times can drop from roughly 5 minutes to under 30 seconds when only source files change.
Conclusion
Monorepos require a deliberate deployment strategy. By embracing multi-stage builds and leveraging tools that understand module dependency graphs, establishing an enterprise-grade CI/CD pipeline becomes incredibly clean and highly performant.

Frequently Asked Questions
Why should I use Turborepo over standard npm workspaces?
While standard npm/yarn workspaces handle symbolic linking perfectly well, Turborepo acts as an intelligent build system on top of them. It caches your build outputs locally and remotely. If you haven't touched a specific package, Turborepo restores the cached output artifacts and replays the saved build logs instead of executing the build again, saving massive amounts of compute time.
Is Alpine Linux always better for Docker images?
Not necessarily. While Alpine results in ultra-small images, it uses musl libc instead of the standard glibc. This can cause severe performance issues or missing dependency errors when building Python extensions, Node modules with C++ bindings (like the 'sharp' image processing library), or Rust binaries unless explicitly cross-compiled.
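When a native dependency refuses to build on musl, the pragmatic fix is usually to swap the base image rather than fight Alpine. A minimal sketch of the change, assuming the standard Debian-based node:18-slim tag from Docker Hub; the apt packages listed are the typical node-gyp prerequisites:

```dockerfile
# Sketch: Debian slim base for stages that compile native addons (sharp, etc.)
FROM node:18-slim AS installer
# python3 + build-essential cover most node-gyp compiles; clean apt lists after
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 build-essential \
 && rm -rf /var/lib/apt/lists/*
```

The trade-off is a larger base layer in exchange for glibc compatibility; the multi-stage structure is unaffected.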
Why does my Docker build re-install npm packages when I only changed a CSS file?
Docker resolves layers sequentially. If you run 'COPY . .' before 'RUN npm install', any change to any file in your project invalidates the Docker cache for the install layer. Always copy your package.json, run the install, and then copy the rest of your source code.
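That answer in Dockerfile form, as a minimal single-app sketch (npm and its package-lock.json are used here for illustration; substitute yarn equivalents as needed):

```dockerfile
FROM node:18-alpine
WORKDIR /app
# 1. Copy only the manifests: this layer's cache key is their checksum.
COPY package.json package-lock.json ./
# 2. Cached on every rebuild unless the two files above changed.
RUN npm ci
# 3. Source edits invalidate the cache only from this point down.
COPY . .
RUN npm run build
```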