Relational Database

This generator creates a new relational database project backed by Amazon Aurora (PostgreSQL or MySQL) and Prisma ORM. It generates the application code and infrastructure needed to provision and manage a database using AWS CDK or Terraform, with declarative schema definition, automatic migration deployment, and a type-safe ORM client.

You can generate a new relational database project in two ways:

Using the Nx Console VSCode plugin:

  1. Install the Nx Console VSCode Plugin if you haven't already
  2. Open the Nx Console in VSCode
  3. Click Generate (UI) in the "Common Nx Commands" section
  4. Search for @aws/nx-plugin - ts#rdb
  5. Fill in the required parameters
  6. Click Generate

Using the CLI:

    Terminal window
    pnpm nx g @aws/nx-plugin:ts#rdb
    The generator accepts the following parameters:

    • name (string, required) - Name of the database project to generate
    • directory (string, default: packages) - The directory to store the application in
    • subDirectory (string) - The sub-directory the project is placed in. Defaults to the project name.
    • service (string, required, default: Aurora) - Relational database service to provision
    • engine (string, required, default: PostgreSQL) - Database engine to use with the selected service
    • databaseUser (string, default: dbadmin) - Database admin username
    • databaseName (string) - Initial database name. Defaults to the project name.
    • ormFramework (string, required, default: Prisma) - ORM framework to use for the generated project
    • iacProvider (string, default: Inherit) - The preferred IaC provider. By default this is inherited from your initial selection.

    The generator will create the following project structure in the <directory>/<name> directory:

    • prisma/
      • models/
        • example.prisma - Example model definition
      • schema.prisma - Main Prisma schema (references models)
    • scripts/
      • docker-pull.ts - Pulls the database Docker image for local development
      • docker-start.ts - Starts a local database container
      • wait-for-db.ts - Waits for the local database to be ready
    • src/
      • index.ts - Project entry point
      • constants.ts - Local development connection details and runtime config key
      • prisma.ts - Prisma runtime client wrapper
      • utils.ts - Runtime config and secret helpers
      • create-db-user-handler.ts - Lambda handler used to create the application database user during deployment
      • migration-handler.ts - Lambda handler used to run database migrations during deployment
    • .gitignore - Git ignore entries including generated Prisma client output
    • Dockerfile - Container image definition for the migration handler
    • project.json - Project configuration and build targets
    • prisma.config.ts - Configuration for Prisma CLI

    Since this generator vends infrastructure as code based on your chosen iacProvider, it will create a project in packages/common which includes the relevant CDK constructs or Terraform modules.

    The common infrastructure as code project is structured as follows:

    • packages/common/constructs/
      • src/
        • app/ - Constructs for infrastructure specific to a project/generator
        • core/ - Generic constructs which are reused by constructs in app
        • index.ts - Entry point exporting constructs from app
      • project.json - Project build targets and configuration

    Within packages/common/constructs/src, the database-specific files are:

    • app/
      • dbs/
        • <name>.ts - Infrastructure specific to your database
    • core/
      • rdb/
        • aurora.ts - Generic Aurora database construct

    The generated project uses Prisma ORM to define your database schema and generate a type-safe client. The workflow is model-first: add or update Prisma model files under your database project’s prisma/models/ directory, then generate a migration from those model changes.

    Example User model:

    packages/postgres/prisma/models/user.prisma
    model User {
      id        Int    @id @default(autoincrement())
      firstName String
      lastName  String
    }

    For more details, see the official Prisma data modelling guide.
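    Relations between models follow the same pattern. As an illustration (this Post model is hypothetical, not generated by the plugin), a one-to-many relation could be declared as:

```prisma
// prisma/models/post.prisma (hypothetical example)
model Post {
  id       Int    @id @default(autoincrement())
  title    String
  // authorId is the foreign-key scalar field backing the relation
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
```

    For the schema to validate, the User model would also need the back-relation field posts Post[].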

    The generator automatically configures the generate target to create a type-safe TypeScript Prisma client whenever you build the project. The client is written to generated/prisma (added to .gitignore).

    You can also manually generate the client at any time:

    Terminal window
    pnpm nx run <your-db-project-name>:generate

    Use the prisma target to run Prisma CLI commands from the workspace root:

    Terminal window
    pnpm nx run <project>:prisma generate

    The runtime wrapper in src/prisma.ts exports:

    • DB_PACKAGE_NAME - the key used under the database runtime config namespace in AWS AppConfig
    • getPrisma() - loads database connection settings from AWS AppConfig and creates a Prisma client using IAM authentication

    The client automatically:

    • Retrieves database configuration from AWS AppConfig using RUNTIME_CONFIG_APP_ID environment variable
    • Generates temporary authentication tokens via AWS RDS Signer for IAM authentication
    • Manages SSL/TLS connections with certificate validation
    • Handles connection pooling through persistent database connection pools
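    The pooling behaviour can be pictured with a simplified sketch (illustrative only, not the generated wrapper's actual implementation): a fixed-size pool hands out connections and queues callers when all connections are in use, so repeated queries do not pay the cost of opening new connections.

```typescript
// Simplified illustration of a fixed-size connection pool.
// `Conn` stands in for a real database connection.
type Conn = { id: number };

class SimplePool {
  private idle: Conn[] = [];
  private waiters: ((c: Conn) => void)[] = [];
  private created = 0;

  constructor(private readonly max: number) {}

  // Hand out an idle connection, create one if under the limit,
  // otherwise queue the caller until a connection is released.
  async acquire(): Promise<Conn> {
    const conn = this.idle.pop();
    if (conn) return conn;
    if (this.created < this.max) {
      this.created += 1;
      return { id: this.created };
    }
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  // Return a connection: wake the oldest waiter, or park it as idle.
  release(conn: Conn): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);
    else this.idle.push(conn);
  }
}
```

    A production pool additionally validates, times out, and evicts connections; the sketch only shows the sharing behaviour.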

    After adding or updating models under prisma/models/, use migrate dev to generate migration files and apply them to your local database at the same time.

    The generated prisma target automatically starts a local database via Docker before running:

    Terminal window
    pnpm nx run <project>:prisma migrate dev

    If you only want to generate the migration files without applying them to the local database, add --create-only:

    Terminal window
    pnpm nx run <project>:prisma migrate dev --create-only

    This generates a new migration folder in prisma/migrations each time your schema changes:

    • Directoryprisma
      • Directorymigrations
        • Directory20260405013911_initial_migrations
          • migration.sql
        • migration_lock.toml
      • schema.prisma

    When you deploy the AWS stack, the generated infrastructure automatically applies the generated migrations to the deployed database.

    When you pull migration files created by other developers, use migrate deploy to apply those existing migrations to your local database.

    Terminal window
    pnpm nx run <project>:prisma migrate deploy

    In this local development flow, migrate deploy applies the migration files to your local database; it does not deploy the database to AWS.

    The generated prisma target exposes the Prisma CLI, so you can use it to run any command supported by Prisma against the local database. See the Prisma CLI reference for available commands.

    Terminal window
    pnpm nx run <project>:prisma <prisma-command>

    Prisma Studio is a visual editor for your local database. Use it to browse tables, inspect and edit records, filter data, follow relations, and run raw SQL via the built-in SQL console. It is useful for verifying migrations and seeding test data during development. Launch it with:

    Terminal window
    pnpm nx run <project>:prisma studio

    While this section describes how to connect to your database from a tRPC API, it also serves as a reference for use in any other TypeScript project.

    Import getPrisma from your database package and call it inside your handler to get a type-safe Prisma client:

    packages/api/src/procedures/list-users.ts
    import { getPrisma } from ':my-scope/db';
    import { publicProcedure } from '../init.js';
    import { ListUsersOutputSchema } from '../schema/index.js';

    export const listUsers = publicProcedure
      .output(ListUsersOutputSchema)
      .query(async () => {
        const prisma = await getPrisma();
        return prisma.user.findMany({ orderBy: { id: 'asc' } });
      });

    getPrisma() returns a lazily-initialised, cached client. Subsequent calls within the same Lambda execution context reuse the existing connection pool rather than opening a new one.
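    This caching follows the usual lazy async-singleton pattern. A minimal sketch of the idea (not the generated code itself; createClient stands in for reading AppConfig and constructing the Prisma client):

```typescript
// Minimal sketch of a lazily-initialised, cached async client.
let cached: Promise<{ query: (sql: string) => string }> | undefined;
let initialisations = 0;

async function createClient() {
  initialisations += 1; // the expensive setup happens here, once
  return { query: (sql: string) => `result of ${sql}` };
}

// Concurrent and repeated callers all share the same promise, so the
// expensive initialisation runs at most once per execution context.
function getClient() {
  cached ??= createClient();
  return cached;
}
```

    Because the promise itself is cached (not the resolved client), even callers that race before the first initialisation completes share one connection pool.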

    The Prisma client exposes fully typed models derived from your prisma/models/ schema, giving you end-to-end type safety from the database all the way through to your API response.

    Injecting the Prisma Client via Middleware


    Rather than calling getPrisma() in every procedure, you can also resolve it once in a middleware and attach it to the tRPC context so all downstream procedures can access it directly.

    First, define the plugin in src/middleware/db.ts following the same pattern as the generated middleware:

    packages/api/src/middleware/db.ts
    import { getPrisma } from ':my-scope/db';
    import { initTRPC } from '@trpc/server';

    export interface IDbContext {
      db: Awaited<ReturnType<typeof getPrisma>>;
    }

    export const createDbPlugin = () => {
      const t = initTRPC.context<IDbContext>().create();
      return t.procedure.use(async (opts) => {
        const db = await getPrisma();
        return opts.next({
          ctx: {
            ...opts.ctx,
            db,
          },
        });
      });
    };

    Then concat it onto a base procedure in your tRPC initialisation:

    packages/api/src/init.ts
    import { createDbPlugin } from './middleware/db.js';
    export const dbProcedure = publicProcedure.concat(createDbPlugin());

    Procedures built on dbProcedure receive db through context without needing to import or call getPrisma() themselves:

    packages/api/src/procedures/list-users.ts
    import { dbProcedure } from '../init.js';
    import { ListUsersOutputSchema } from '../schema/index.js';

    export const listUsers = dbProcedure
      .output(ListUsersOutputSchema)
      .query(async ({ ctx: { db } }) => {
        return db.user.findMany({ orderBy: { id: 'asc' } });
      });

    The relational database generator creates CDK or Terraform infrastructure based on your selected iacProvider.

    The CDK construct is created in common/constructs. Example usage:

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    export class ApplicationStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);
        ...
        const db = new MyDatabase(this, 'Db', {
          vpc,
          vpcSubnets: {
            subnetType: SubnetType.PRIVATE_ISOLATED,
          },
        });
      }
    }

    This provisions an Aurora cluster with RDS Proxy, admin credentials, application database user, runtime config registration, and migration handler.

    The generated infrastructure creates two database users:

    • Admin user - Created during cluster provisioning with credentials stored in AWS Secrets Manager
    • Application user - Created via a Lambda custom resource with IAM authentication enabled and full privileges on the application database

    The application user is automatically created with a random name and IAM authentication. getPrisma() is already configured to authenticate as this user using short-lived RDS tokens, so your application code never handles database passwords.

    Your VPC should include public subnets, private subnets with egress, and private isolated subnets. The database can run in private isolated subnets, while API Lambda functions should run in private subnets with egress so they can reach AWS services such as AppConfig.

    packages/infra/src/stacks/application-stack.ts
    const vpc = new Vpc(this, 'Vpc', {
      subnetConfiguration: [
        {
          name: 'public',
          subnetType: SubnetType.PUBLIC,
        },
        {
          name: 'private_with_egress',
          subnetType: SubnetType.PRIVATE_WITH_EGRESS,
        },
        {
          name: 'private_isolated',
          subnetType: SubnetType.PRIVATE_ISOLATED,
        },
      ],
    });

    In your application stack, deploy the API into the same VPC as the database, then call allowDefaultPortFrom and grantConnect to open the network path and grant IAM rds-db:connect permission to each Lambda handler:

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', { vpc, ... });

    const api = new Api(this, 'Api', {
      integrations: Api.defaultIntegrations(this)
        .withDefaultOptions({
          vpc,
          vpcSubnets: { subnetType: SubnetType.PRIVATE_WITH_EGRESS },
        })
        .build(),
    });

    Object.entries(api.integrations).forEach(([operation, integration]) => {
      db.allowDefaultPortFrom(integration.handler, `Allow ${operation} to connect to the database`);
      db.grantConnect(integration.handler);
    });

    Deploy the API Lambda functions into a private subnet with egress (recommended) or a public subnet, not a private isolated subnet. At runtime, getPrisma() retrieves database connection details from AWS AppConfig, which is a public AWS service endpoint. Lambda functions in a private isolated subnet have no outbound internet access and so cannot reach AppConfig. Private subnets with egress route outbound traffic through a NAT Gateway, which sits in a public subnet.

    The generated infrastructure includes an RDS Proxy by default, which sits between your application and the Aurora cluster. RDS Proxy provides several benefits:

    • Connection pooling - Maintains a pool of database connections that can be shared across application instances, reducing the overhead of establishing new connections
    • Connection resilience - Automatically handles failovers and reconnects during Aurora instance replacements or maintenance
    • IAM authentication - Supports IAM-based database authentication, eliminating the need to manage database credentials in your application code
    • Improved security - Enforces TLS encryption for all connections

    You can disable the RDS proxy as follows:

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      enableRdsProxy: false,
    });

    When RDS Proxy is disabled, your application connects directly to the Aurora cluster endpoint.
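    To make the moving parts visible, here is a sketch of assembling a PostgreSQL connection URL that uses an IAM token as the password. This is illustrative only (the host, user, and sslmode values are hypothetical); getPrisma() does the equivalent for you.

```typescript
// Illustrative sketch: build a PostgreSQL connection URL from connection
// settings and a short-lived IAM token. All values are hypothetical.
function buildConnectionUrl(opts: {
  host: string;
  port: number;
  user: string;
  database: string;
  iamToken: string; // short-lived token, e.g. from the AWS RDS Signer
}): string {
  // IAM tokens contain URL-special characters and must be percent-encoded
  const password = encodeURIComponent(opts.iamToken);
  const user = encodeURIComponent(opts.user);
  return `postgresql://${user}:${password}@${opts.host}:${opts.port}/${opts.database}?sslmode=verify-full`;
}

// Usage with hypothetical values:
const url = buildConnectionUrl({
  host: 'my-cluster.cluster-abc.us-east-1.rds.amazonaws.com',
  port: 5432,
  user: 'app_user',
  database: 'appdb',
  iamToken: 'a-short-lived-token',
});
```

    Note that the token is a credential, not a static password: a new one must be generated whenever a fresh connection is established after the previous token expires.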

    For direct Aurora cluster connections from Node.js 20 or later Lambda runtimes, configure the Lambda function to load the Amazon RDS CA bundle:

    packages/infra/src/stacks/application-stack.ts
    const api = new Api(this, 'Api', {
      integrations: Api.defaultIntegrations(this)
        .withDefaultOptions({
          environment: {
            NODE_EXTRA_CA_CERTS: '/var/runtime/ca-cert.pem',
          },
        })
        .build(),
    });

    For more details, see the AWS Lambda SSL/TLS requirements for Amazon RDS connections and the Amazon RDS Proxy TLS documentation. When using RDS Proxy, you do not need to configure the RDS CA bundle in your Lambda function.

    The generated infrastructure can be customised to match your workload requirements. The following examples demonstrate a few common customisation options available.

    Configure the writer and reader instances for your Aurora cluster.

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      writer: ClusterInstance.serverlessV2('writer'),
      readers: [ClusterInstance.serverlessV2('reader')],
    });

    Control Aurora Serverless v2 scaling limits to match your workload.

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      serverlessV2MinCapacity: 0.5,
      serverlessV2MaxCapacity: 8,
    });

    Pin a specific Aurora engine version.

    By default, the generated local Docker database image matches the default Aurora engine version. If you change the Aurora engine version, it’s recommended to also use a matching local Docker database version for maximum compatibility. See the AWS release notes for Aurora PostgreSQL versions and Aurora MySQL versions to identify the corresponding community database version.

    The local database image is configured in the generated database project’s serve-local target in project.json. Update the image argument passed to scripts/docker-start.ts when you change engine versions.

    engine = PostgreSQL

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      engineVersion: AuroraPostgresEngineVersion.VER_17_7,
    });

    engine = MySQL

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      engineVersion: AuroraMysqlEngineVersion.VER_3_12_0,
    });

    Deletion protection is enabled by default (deletionProtection: true in CDK, deletion_protection = true in Terraform) to protect the Aurora cluster from accidental deletion.

    You can disable deletion protection for environments where database deletion is expected, such as short-lived development or preview stacks.

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      deletionProtection: false,
    });

    The CDK construct retains the Aurora cluster by default (removalPolicy: RemovalPolicy.RETAIN). Change this when you want CDK stack deletion to snapshot or destroy the cluster instead.

    When using RemovalPolicy.DESTROY, deletion protection must also be disabled before the cluster can be deleted.

    packages/infra/src/stacks/application-stack.ts
    import { RemovalPolicy } from 'aws-cdk-lib';
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      removalPolicy: RemovalPolicy.SNAPSHOT,
    });

    For an ephemeral environment where the database should be deleted with the stack:

    packages/infra/src/stacks/application-stack.ts
    import { RemovalPolicy } from 'aws-cdk-lib';
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      deletionProtection: false,
      removalPolicy: RemovalPolicy.DESTROY,
    });

    The KMS key used to encrypt the Aurora cluster and its credentials secret has automatic key rotation enabled by default. Disable it if your security policy manages rotation externally.

    packages/infra/src/stacks/application-stack.ts
    import { MyDatabase } from ':my-scope/common-constructs';

    const db = new MyDatabase(this, 'Db', {
      ...
      enableKeyRotation: false,
    });

    engine = MySQL

    When using Aurora MySQL with API Gateway streaming responses (e.g. with tRPC’s httpBatchStreamLink), the Prisma MySQL client holds onto the Node.js event loop after a query completes, preventing the Lambda from flushing the stream and ending the request.

    To work around this, explicitly disconnect the client in a finally block after each query so the event loop is free to exit and the streaming response can complete.

    Option 1: per-procedure

    import { getPrisma } from ':my-scope/db';
    import { z } from 'zod';
    import { publicProcedure } from '../init.js';
    import { ExampleTableSchema } from '../schema/index.js';

    export const listExampleTable = publicProcedure
      .output(z.array(ExampleTableSchema))
      .query(async () => {
        const prisma = await getPrisma();
        try {
          return await prisma.exampleTable.findMany();
        } finally {
          await prisma.$disconnect();
        }
      });

    Option 2: tRPC middleware

    If you are using the middleware pattern, add the $disconnect() call to the middleware so all procedures built on it are covered automatically:

    packages/api/src/middleware/db.ts
    import { getPrisma } from ':my-scope/db';
    import { initTRPC } from '@trpc/server';

    export interface IDbContext {
      db: Awaited<ReturnType<typeof getPrisma>>;
    }

    export const createDbPlugin = () => {
      const t = initTRPC.context<IDbContext>().create();
      return t.procedure.use(async (opts) => {
        const db = await getPrisma();
        try {
          return await opts.next({
            ctx: {
              ...opts.ctx,
              db,
            },
          });
        } finally {
          await db.$disconnect();
        }
      });
    };

    RDS IAM authentication tokens expire after 15 minutes. The MySQL Prisma client captures the IAM token as a static value at the time getPrisma() is called. An existing open connection is not affected, but if a new connection needs to be established after the token has expired, authentication will fail. The PostgreSQL adapter avoids this by refreshing the token dynamically each time the pool opens a new connection, but the MySQL adapter has no equivalent mechanism.

    For long-running tasks such as batch jobs or data migrations, call getPrisma() at the start of each unit of work rather than once for the entire operation. Because getPrisma() always creates a fresh client and fetches a new IAM token for MySQL, this ensures each connection authenticates with a valid token.
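    For example, a batch job might re-create the client for each chunk of work rather than holding one client for the whole run. A sketch of the pattern (makeClient stands in for getPrisma(), and the injectable clock exists only to make the token timing visible; both are hypothetical):

```typescript
// Sketch of the per-unit-of-work pattern for long-running MySQL jobs.
// `makeClient` stands in for getPrisma(); each call is assumed to fetch
// a fresh IAM token, so a client created per chunk never outlives the
// 15-minute token validity window.
type Client = { tokenIssuedAt: number; $disconnect: () => Promise<void> };

async function makeClient(now: () => number): Promise<Client> {
  return { tokenIssuedAt: now(), $disconnect: async () => {} };
}

async function processAllChunks<T>(
  chunks: T[][],
  handle: (client: Client, chunk: T[]) => Promise<void>,
  now: () => number = Date.now,
): Promise<number[]> {
  const issuedTimes: number[] = [];
  for (const chunk of chunks) {
    // Fresh client (and therefore a fresh IAM token) per unit of work
    const client = await makeClient(now);
    issuedTimes.push(client.tokenIssuedAt);
    try {
      await handle(client, chunk);
    } finally {
      // Release the connection so the next chunk starts clean
      await client.$disconnect();
    }
  }
  return issuedTimes;
}
```

    The key point is that no single client straddles a token-expiry boundary: each chunk authenticates with a token issued at the start of that chunk.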