Docker Bundling

Several generators (such as ts#strands-agent and py#strands-agent) produce a Docker image that is pushed to Amazon ECR and consumed by AWS infrastructure. This guide describes the pattern they follow so that you can apply it to other use cases — for example, running a py#fast-api project on Amazon ECS, or deploying a containerised Express server.

The recommended pattern has three pieces:

  1. A bundle target on your project which produces a self-contained directory of runtime artifacts. For TypeScript this is a tree-shaken, single-file JavaScript bundle produced by Rolldown; for Python this is a requirements.txt and installed dependencies produced by uv.
  2. A minimal Dockerfile which simply COPYs the bundle output into a base image. Because bundling already handled tree-shaking and dependency installation, the Dockerfile does not need to run npm install or uv sync.
  3. A docker target which copies the Dockerfile alongside the bundle output (so that the Docker build context only contains files needed at runtime), then runs docker build.

The Docker build context is written to your project’s dist folder. Your infrastructure as code (CDK or Terraform) then points at that directory to push the image to ECR.

Configure a bundle target that invokes Rolldown. If you are starting from a ts#project, add the following to your project.json:

{
  "targets": {
    "bundle": {
      "cache": true,
      "executor": "nx:run-commands",
      "outputs": ["{workspaceRoot}/dist/{projectRoot}/bundle"],
      "options": {
        "command": "rolldown -c rolldown.config.ts",
        "cwd": "{projectRoot}"
      },
      "dependsOn": ["compile"]
    }
  }
}

And a rolldown.config.ts at the root of your project:

rolldown.config.ts
import { defineConfig } from 'rolldown';

export default defineConfig([
  {
    tsconfig: 'tsconfig.lib.json',
    input: 'src/index.ts',
    output: {
      file: '../../dist/packages/my-project/bundle/index.js',
      format: 'cjs',
      inlineDynamicImports: true,
    },
    platform: 'node',
  },
]);

Run the bundle target to produce dist/packages/my-project/bundle/index.js:

pnpm nx bundle my-project

Create a Dockerfile in your project source directory. It does little more than COPY the bundle into a Node base image and npm install any external packages that could not be bundled. Place the RUN npm install step before the COPY so that Docker can cache the installed node_modules layer and only re-run it when the dependency list actually changes:

FROM public.ecr.aws/docker/library/node:lts
WORKDIR /app
# Install packages that cannot be bundled (declared as "external" in rolldown.config.ts).
# Kept above the COPY so this layer is cached and only invalidated when the install list changes.
RUN npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation@0.10.0
# Copy bundled application
COPY index.js /app
EXPOSE 8080
CMD ["node", "index.js"]
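For the image-time npm install to make sense, the same package must be excluded from the bundle. A minimal sketch, assuming the rolldown.config.ts shown earlier, with the package listed as external so Rolldown leaves its require calls intact:

```typescript
// rolldown.config.ts — packages listed in `external` are not bundled;
// their imports remain in the output and are resolved from node_modules
// at runtime, which is why the Dockerfile installs them explicitly.
import { defineConfig } from 'rolldown';

export default defineConfig([
  {
    tsconfig: 'tsconfig.lib.json',
    input: 'src/index.ts',
    external: ['@aws/aws-distro-opentelemetry-node-autoinstrumentation'],
    output: {
      file: '../../dist/packages/my-project/bundle/index.js',
      format: 'cjs',
      inlineDynamicImports: true,
    },
    platform: 'node',
  },
]);
```

Keep the `external` list and the Dockerfile's RUN npm install line in sync: every external package must be installed in the image at the version your project expects.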

Add a docker target which:

  1. Copies the Dockerfile into the bundle output directory (so the build context contains only the bundle + Dockerfile), and
  2. Runs docker build (optional for CDK — see below).
{
  "targets": {
    "docker": {
      "cache": true,
      "executor": "nx:run-commands",
      "options": {
        "commands": [
          "ncp packages/my-project/src/Dockerfile dist/packages/my-project/bundle/Dockerfile",
          "docker build --platform linux/arm64 -t my-scope-my-project:latest dist/packages/my-project/bundle"
        ],
        "parallel": false
      },
      "dependsOn": ["bundle"]
    }
  }
}

Running this target produces a local image tagged my-scope-my-project:latest, built from the minimal context at dist/packages/my-project/bundle/:

pnpm nx docker my-project

Configure a bundle target that uses uv to export and install dependencies for your target platform. The py#project generator and the py#lambda-function generator both configure this for you. The target configuration looks like:

{
  "targets": {
    "bundle-arm": {
      "cache": true,
      "executor": "nx:run-commands",
      "outputs": ["{workspaceRoot}/dist/{projectRoot}/bundle-arm"],
      "options": {
        "commands": [
          "uv export --frozen --no-dev --no-editable --project {projectRoot} --package my_project -o dist/{projectRoot}/bundle-arm/requirements.txt",
          "uv pip install -n --no-deps --no-installer-metadata --no-compile-bytecode --python-platform aarch64-manylinux_2_28 --target dist/{projectRoot}/bundle-arm -r dist/{projectRoot}/bundle-arm/requirements.txt"
        ],
        "parallel": false
      },
      "dependsOn": ["compile"]
    }
  }
}

Running nx bundle-arm my-project produces dist/packages/my-project/bundle-arm/ containing your project’s source, its dependencies, and a requirements.txt — everything the image needs at runtime.

The Dockerfile simply copies the bundle into a Python base image. Because uv already installed all dependencies into the bundle directory, you do not need to run pip install inside the image:

FROM public.ecr.aws/docker/library/python:3.12-slim
WORKDIR /app
# Copy bundled package (source + installed dependencies)
COPY . /app
EXPOSE 8080
ENV PYTHONPATH=/app
ENV PATH="/app/bin:${PATH}"
CMD ["python", "-m", "my_project.main"]

Add a docker target which copies the Dockerfile into the bundle output directory, then runs docker build:

{
  "targets": {
    "docker": {
      "cache": true,
      "executor": "nx:run-commands",
      "options": {
        "commands": [
          "rimraf dist/packages/my-project/docker",
          "make-dir dist/packages/my-project/docker",
          "ncp dist/packages/my-project/bundle-arm dist/packages/my-project/docker",
          "ncp packages/my-project/src/Dockerfile dist/packages/my-project/docker/Dockerfile",
          "docker build --platform linux/arm64 -t my-scope-my-project:latest dist/packages/my-project/docker"
        ],
        "parallel": false
      },
      "dependsOn": ["bundle-arm"]
    }
  }
}

This clears the output directory, then copies both the bundle contents and the Dockerfile into dist/.../docker, which becomes the Docker build context.

pnpm nx docker my-project

Wiring the resulting build-context directory to infrastructure as code is the same for both TypeScript and Python — only the path to the build-context directory differs (dist/packages/my-project/bundle for TypeScript, dist/packages/my-project/docker for Python).

Use CDK’s DockerImageAsset pointed at the build-context directory. CDK will build the image and publish it to the CDK asset ECR repository at deploy time:

import { DockerImageAsset, Platform } from 'aws-cdk-lib/aws-ecr-assets';
import { findWorkspaceRoot } from ':my-scope/common-constructs';
import * as path from 'path';
import * as url from 'url';

const image = new DockerImageAsset(this, 'MyImage', {
  directory: path.join(
    // Resolve from the compiled construct location to the workspace root
    findWorkspaceRoot(url.fileURLToPath(new URL(import.meta.url))),
    'dist/packages/my-project/bundle',
  ),
  platform: Platform.LINUX_ARM64,
});

The findWorkspaceRoot helper is generated by the ts#infra generator and exported from :my-scope/common-constructs. If you are not using shared constructs, you can hardcode the path to the dist directory relative to where cdk is invoked from — typically the workspace root — and omit the findWorkspaceRoot call entirely.

Use the DockerImageAsset with any AWS construct that accepts a container image, for example aws_ecs.ContainerImage.fromDockerImageAsset(image).
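For instance, a sketch of wiring the asset into an ECS Fargate task definition — the construct IDs, sizing, and port here are illustrative placeholders, not values produced by a generator:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Illustrative only: consume the DockerImageAsset (`image`, from the
// snippet above) in a Fargate task definition. The CPU architecture must
// match the --platform passed to docker build (linux/arm64).
const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
  cpu: 256,
  memoryLimitMiB: 512,
  runtimePlatform: {
    cpuArchitecture: ecs.CpuArchitecture.ARM64,
  },
});
taskDefinition.addContainer('app', {
  image: ecs.ContainerImage.fromDockerImageAsset(image),
  portMappings: [{ containerPort: 8080 }],
});
```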