# Docker Bundling
Several generators (such as ts#strands-agent and py#strands-agent) produce a Docker image that is pushed to Amazon ECR and consumed by AWS infrastructure. This guide describes the pattern they follow so that you can apply it to other use cases — for example, running a py#fast-api project on Amazon ECS, or deploying a containerised Express server.
## The Pattern

The recommended pattern has three pieces:

- A `bundle` target on your project which produces a self-contained directory of runtime artifacts. For TypeScript this is a tree-shaken, single-file JavaScript bundle produced by Rolldown; for Python this is a `requirements.txt` and installed dependencies produced by uv.
- A minimal `Dockerfile` which simply `COPY`s the bundle output into a base image. Because bundling already handled tree-shaking and dependency installation, the `Dockerfile` does not need to run `npm install` or `uv sync`.
- A `docker` target which copies the `Dockerfile` alongside the bundle output (so that the Docker build context only contains files needed at runtime), then runs `docker build`.
The Docker build context is written to your project's `dist` folder. Your infrastructure as code (CDK or Terraform) then points at that directory to build and push the image to ECR.
## TypeScript

### Bundle Target

Configure a `bundle` target that invokes Rolldown. If you are starting from a `ts#project`, add the following to your `project.json`:
```json
{
  "targets": {
    "bundle": {
      "cache": true,
      "executor": "nx:run-commands",
      "outputs": ["{workspaceRoot}/dist/{projectRoot}/bundle"],
      "options": {
        "command": "rolldown -c rolldown.config.ts",
        "cwd": "{projectRoot}"
      },
      "dependsOn": ["compile"]
    }
  }
}
```

And a `rolldown.config.ts` at the root of your project:
```ts
import { defineConfig } from 'rolldown';

export default defineConfig([
  {
    tsconfig: 'tsconfig.lib.json',
    input: 'src/index.ts',
    output: {
      file: '../../dist/packages/my-project/bundle/index.js',
      format: 'cjs',
      inlineDynamicImports: true,
    },
    platform: 'node',
  },
]);
```

Run the `bundle` target to produce `dist/packages/my-project/bundle/index.js`:
```sh
pnpm nx bundle my-project
```
### Dockerfile

Create a `Dockerfile` in your project source directory. The file does nothing more than `COPY` the bundle into a Node base image, plus `npm install` any external packages that could not be bundled. Place the `RUN npm install` step before the `COPY`, so Docker can cache the installed `node_modules` layer and only re-run it when the dependency list actually changes:
```docker
FROM public.ecr.aws/docker/library/node:lts

WORKDIR /app

# Install packages that cannot be bundled (declared as "external" in rolldown.config.ts).
# Kept above the COPY so this layer is cached and only invalidated when the install list changes.
RUN npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation@0.10.0

# Copy bundled application
COPY index.js /app

EXPOSE 8080

CMD ["node", "index.js"]
```
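For the bundle to actually leave that package out, it must be declared as external in `rolldown.config.ts`. A sketch of what that declaration might look like, using rolldown's Rollup-compatible `external` option (the rest of the config mirrors the earlier example; adjust to your project):

```typescript
import { defineConfig } from 'rolldown';

export default defineConfig([
  {
    tsconfig: 'tsconfig.lib.json',
    input: 'src/index.ts',
    // Excluded from the bundle; the Dockerfile installs it with npm instead
    external: ['@aws/aws-distro-opentelemetry-node-autoinstrumentation'],
    output: {
      file: '../../dist/packages/my-project/bundle/index.js',
      format: 'cjs',
      inlineDynamicImports: true,
    },
    platform: 'node',
  },
]);
```

Any package listed in `external` is resolved at runtime via `require`, so it must be present in the image's `node_modules` — which is exactly what the `RUN npm install` layer provides.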
### Docker Target

Add a `docker` target which:

- Copies the `Dockerfile` into the bundle output directory (so the build context contains only the bundle + `Dockerfile`), and
- Runs `docker build` (optional for CDK — see below).
```json
{
  "targets": {
    "docker": {
      "cache": true,
      "executor": "nx:run-commands",
      "options": {
        "commands": [
          "ncp packages/my-project/src/Dockerfile dist/packages/my-project/bundle/Dockerfile",
          "docker build --platform linux/arm64 -t my-scope-my-project:latest dist/packages/my-project/bundle"
        ],
        "parallel": false
      },
      "dependsOn": ["bundle"]
    }
  }
}
```

Running this target produces a local image tagged `my-scope-my-project:latest`, built from the minimal context at `dist/packages/my-project/bundle/`:
```sh
pnpm nx docker my-project
```

## Python

### Bundle Target

Configure a `bundle` target that uses uv to export and install dependencies for your target platform. The `py#project` generator and the `py#lambda-function` generator both configure this for you. The target configuration looks like:
```json
{
  "targets": {
    "bundle-arm": {
      "cache": true,
      "executor": "nx:run-commands",
      "outputs": ["{workspaceRoot}/dist/{projectRoot}/bundle-arm"],
      "options": {
        "commands": [
          "uv export --frozen --no-dev --no-editable --project {projectRoot} --package my_project -o dist/{projectRoot}/bundle-arm/requirements.txt",
          "uv pip install -n --no-deps --no-installer-metadata --no-compile-bytecode --python-platform aarch64-manylinux_2_28 --target dist/{projectRoot}/bundle-arm -r dist/{projectRoot}/bundle-arm/requirements.txt"
        ],
        "parallel": false
      },
      "dependsOn": ["compile"]
    }
  }
}
```

Running `nx bundle-arm my-project` produces `dist/packages/my-project/bundle-arm/` containing your project's source, its dependencies, and a `requirements.txt` — everything the image needs at runtime.
### Dockerfile

The `Dockerfile` simply copies the bundle into a Python base image. Because uv already installed all dependencies into the bundle directory, you do not need to run `pip install` inside the image:
```docker
FROM public.ecr.aws/docker/library/python:3.12-slim

WORKDIR /app

# Copy bundled package (source + installed dependencies)
COPY . /app

EXPOSE 8080

ENV PYTHONPATH=/app
ENV PATH="/app/bin:${PATH}"

CMD ["python", "-m", "my_project.main"]
```
### Docker Target

Add a `docker` target which copies the `Dockerfile` into the bundle output directory, then runs `docker build`:
```json
{
  "targets": {
    "docker": {
      "cache": true,
      "executor": "nx:run-commands",
      "options": {
        "commands": [
          "rimraf dist/packages/my-project/docker",
          "make-dir dist/packages/my-project/docker",
          "ncp dist/packages/my-project/bundle-arm dist/packages/my-project/docker",
          "ncp packages/my-project/src/Dockerfile dist/packages/my-project/docker/Dockerfile",
          "docker build --platform linux/arm64 -t my-scope-my-project:latest dist/packages/my-project/docker"
        ],
        "parallel": false
      },
      "dependsOn": ["bundle-arm"]
    }
  }
}
```

This clears the output directory, then copies both the bundle contents and the `Dockerfile` into `dist/.../docker`, which becomes the Docker build context.
```sh
pnpm nx docker my-project
```

## Infrastructure

Wiring the resulting build-context directory to infrastructure as code is the same for both TypeScript and Python — only the path to the build-context directory differs (`dist/packages/my-project/bundle` for TypeScript, `dist/packages/my-project/docker` for Python).
### CDK

Use CDK's `DockerImageAsset` pointed at the build-context directory. CDK will build the image and publish it to the CDK asset ECR repository at deploy time:
```ts
import { DockerImageAsset, Platform } from 'aws-cdk-lib/aws-ecr-assets';
import { findWorkspaceRoot } from ':my-scope/common-constructs';
import * as path from 'path';
import * as url from 'url';

const image = new DockerImageAsset(this, 'MyImage', {
  directory: path.join(
    // Resolve from the compiled construct location to the workspace root
    findWorkspaceRoot(url.fileURLToPath(new URL(import.meta.url))),
    'dist/packages/my-project/bundle',
  ),
  platform: Platform.LINUX_ARM64,
});
```

The `findWorkspaceRoot` helper is generated by the `ts#infra` generator and exported from `:my-scope/common-constructs`. If you are not using shared constructs, you can hardcode the path to the `dist` directory relative to where `cdk` is invoked from — typically the workspace root — and omit the `findWorkspaceRoot` call entirely.
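If you want a standalone equivalent, a helper like this typically just walks up the directory tree until it finds a workspace marker file. A minimal sketch, assuming `nx.json` marks the workspace root (the generated helper may use a different marker or strategy):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Walk upward from startDir until a directory containing nx.json is found.
// Hypothetical sketch of a findWorkspaceRoot helper; not the generated implementation.
export function findWorkspaceRoot(startDir: string): string {
  let dir = path.resolve(startDir);
  for (;;) {
    if (fs.existsSync(path.join(dir, 'nx.json'))) {
      return dir;
    }
    const parent = path.dirname(dir);
    if (parent === dir) {
      // Reached the filesystem root without finding a marker
      throw new Error(`No nx.json found above ${startDir}`);
    }
    dir = parent;
  }
}
```

Anchoring the lookup to a marker file (rather than a relative `../..` path) keeps the construct working regardless of where the compiled output lands in `dist`.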
Use the `DockerImageAsset` with any AWS construct that accepts a container image, for example `aws_ecs.ContainerImage.fromDockerImageAsset(image)`.
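For example, the asset could feed a Fargate task definition like the one sketched below. This is illustrative wiring only — the task sizing, names, and log configuration are hypothetical, not part of the generated output:

```typescript
import { Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { DockerImageAsset } from 'aws-cdk-lib/aws-ecr-assets';

// Assumed to exist in the surrounding construct; names are hypothetical
declare const stack: Stack;
declare const image: DockerImageAsset;

const taskDefinition = new ecs.FargateTaskDefinition(stack, 'TaskDef', {
  cpu: 256,
  memoryLimitMiB: 512,
  runtimePlatform: {
    // Matches the --platform linux/arm64 flag used in the docker target
    cpuArchitecture: ecs.CpuArchitecture.ARM64,
  },
});

taskDefinition.addContainer('app', {
  image: ecs.ContainerImage.fromDockerImageAsset(image),
  // The port EXPOSEd in the Dockerfile
  portMappings: [{ containerPort: 8080 }],
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'my-project' }),
});
```

Note that the CPU architecture of the task must match the `--platform` used when building the image, otherwise the container will fail to start at runtime.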
### Terraform

Terraform's AWS provider does not have a first-class "build and push a Docker image" resource. The pattern used by the generators is:

- An `aws_ecr_repository` to hold the image.
- A `null_resource` with a `local-exec` provisioner that authenticates to ECR, re-tags the locally-built image, and pushes it.
- The downstream resource (e.g. `aws_ecs_task_definition`) references `"${aws_ecr_repository.repo.repository_url}:latest"`.
```hcl
# Provides the region used in the ECR login command below
data "aws_region" "current" {}

resource "aws_ecr_repository" "repo" {
  name                 = "my-project-repository"
  image_tag_mutability = "MUTABLE"
  force_delete         = true
}

# Invalidate the push whenever the locally-built image digest changes
data "external" "docker_digest" {
  program = ["sh", "-c", "echo '{\"digest\":\"'$(docker inspect my-scope-my-project:latest --format '{{.Id}}')'\"}'"]
}

resource "null_resource" "docker_publish" {
  triggers = {
    docker_digest  = data.external.docker_digest.result.digest
    repository_url = aws_ecr_repository.repo.repository_url
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ecr get-login-password --region ${data.aws_region.current.id} \
        | docker login --username AWS --password-stdin ${self.triggers.repository_url}
      docker tag my-scope-my-project:latest ${self.triggers.repository_url}:latest
      docker push ${self.triggers.repository_url}:latest
    EOT
  }
}
```

The `data.external.docker_digest` block ensures the `null_resource` re-runs whenever the local image hash changes, triggering a new push on every meaningful code change.
## Further Reading

- `ts#strands-agent` generator — a complete example of this pattern for a TypeScript agent deployed to Bedrock AgentCore Runtime.
- `py#strands-agent` generator — the equivalent for Python.
- Rolldown documentation — configuration reference for the TypeScript bundler.
- uv documentation — reference for Python dependency export and install.