SQS Connector
The SQS storage connector makes it possible to capture the results of one or multiple middlewares in a pipeline and store them in a user-defined SQS queue. This connector allows you to neatly decouple the processing of your documents from the third-party applications that consume processed documents from the queue.
💁 This connector only forwards the CloudEvents emitted by middlewares to the SQS queue, and not the documents themselves.
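Since only CloudEvents are forwarded, a downstream consumer typically reads the event from the queue and resolves the processed document from the URL carried by the event. Below is a minimal, hypothetical consumer sketch using the AWS SDK for JavaScript v3; the queue URL and the exact shape of the message body (for example, whether it is wrapped in an SNS envelope) depend on your deployment and are assumptions here.

```typescript
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';

// Hypothetical queue URL — replace with the URL of your destination queue.
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/processed-documents';

const sqs = new SQSClient({});

const poll = async () => {
  // Long-poll the queue for forwarded CloudEvents.
  const { Messages = [] } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20
  }));

  for (const message of Messages) {
    // The body is assumed here to be the CloudEvent as JSON; it may be
    // wrapped in an SNS envelope depending on the subscription settings.
    const event = JSON.parse(message.Body!);
    console.log('Received CloudEvent', event);

    // Delete the message once it has been processed.
    await sqs.send(new DeleteMessageCommand({
      QueueUrl: QUEUE_URL,
      ReceiptHandle: message.ReceiptHandle!
    }));
  }
};

poll();
```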
🕒 Enqueue Documents
To use the SQS storage connector, you import it into your CDK stack and connect it to a data source that provides documents.
```typescript
import * as cdk from 'aws-cdk-lib';

import { SqsStorageConnector } from '@project-lakechain/sqs-storage-connector';
import { CacheStorage } from '@project-lakechain/core';

class Stack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string) {
    // The cache storage.
    const cache = new CacheStorage(this, 'Cache');

    // The destination queue.
    const queue = // ...

    // Create the SQS storage connector.
    const connector = new SqsStorageConnector.Builder()
      .withScope(this)
      .withIdentifier('SqsStorageConnector')
      .withCacheStorage(cache)
      .withSource(source) // 👈 Specify a data source
      .withDestinationQueue(queue)
      .build();
  }
}
```
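The destination queue in the snippet above can be any SQS queue. Below is a minimal sketch, assuming you create it in the same stack with the standard CDK SQS construct; the construct identifier and visibility timeout are illustrative.

```typescript
import * as sqs from 'aws-cdk-lib/aws-sqs';

// Hypothetical destination queue — any existing SQS queue works as well.
const queue = new sqs.Queue(this, 'Queue', {
  visibilityTimeout: cdk.Duration.minutes(5)
});
```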
🏗️ Architecture
This middleware makes use of the native integration between the SNS output topics of source middlewares and SQS to forward messages to the destination queue, without relying on any additional compute resources.
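Conceptually, the connector's wiring is equivalent to subscribing the destination queue to a source middleware's SNS output topic. The sketch below illustrates that pattern with plain CDK constructs; the topic and queue identifiers are hypothetical, and the connector performs this integration for you.

```typescript
import * as sns from 'aws-cdk-lib/aws-sns';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as subscriptions from 'aws-cdk-lib/aws-sns-subscriptions';

// A hypothetical output topic, standing in for a source middleware's SNS topic.
const topic = new sns.Topic(this, 'OutputTopic');

// The destination queue receiving the forwarded messages.
const queue = new sqs.Queue(this, 'DestinationQueue');

// Subscribe the queue to the topic — no compute resources involved.
topic.addSubscription(new subscriptions.SqsSubscription(queue));
```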
🏷️ Properties
Supported Inputs
| Mime Type | Description |
| --------- | ----------- |
| `*/*` | This middleware supports any type of document. |
Supported Outputs
This middleware does not emit any output.
Supported Compute Types
| Type | Description |
| ---- | ----------- |
| CPU | This middleware only supports CPU compute. |
📖 Examples
- Storage Connector Pipeline - Builds a pipeline connected to other AWS services.