IGlobalConfig / ILoggingConfig / ICloudWatchLogsConfig

Description

Accelerator global CloudWatch Logs logging configuration

Remarks

You can choose between an AWS KMS CMK and server-side encryption for the log data at rest. When the encryption property is undefined, the solution will deploy an AWS KMS CMK to encrypt AWS CloudWatch Logs data at rest. You can use deploymentTargets to control the target accounts and regions for the given useCMK configuration. Please refer to ServiceEncryptionConfig for more information.

For more information, please review the CloudWatch Logs documentation on managed data identifiers for sensitive data types.

Example

cloudwatchLogs:
  dynamicPartitioning: path/to/filter.json
  # default is true, if undefined this is set to true
  # if set to false, no replication is performed which is useful in test or temporary environments
  enable: true
  encryption:
    useCMK: true
    deploymentTargets:
      organizationalUnits:
        - Root
  replaceLogDestinationArn: arn:aws:logs:us-east-1:111111111111:destination:ReplaceDestination
  exclusions:
    # in these OUs do not do log replication
    - organizationalUnits:
        - Research
        - ProofOfConcept
      excludeAll: true
    # in these accounts exclude pattern testApp
    - accounts:
        - WorkloadAccount1
        - WorkloadAccount2
      logGroupNames:
        - testApp*
    # in these accounts exclude logs in specific regions
    - accounts:
        - WorkloadAccount1
        - WorkloadAccount2
      regions:
        - us-west-2
        - eu-west-1
      logGroupNames:
        - pattern1*
  dataProtection:
    managedDataIdentifiers:
      categories:
        - Credentials
    deploymentTargets:
      organizationalUnits:
        - Root

Hierarchy

  • CloudWatchLogsConfig

Properties

dataProtection: undefined | CloudWatchDataProtectionConfig = undefined

CloudWatch Logs data protection configuration
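As a minimal sketch, a dataProtection block placed under cloudwatchLogs could look like the following; the Credentials category and the Root organizational unit mirror the example above, and the deployment targets should be adjusted to your own environment.

cloudwatchLogs:
  dataProtection:
    managedDataIdentifiers:
      categories:
        - Credentials
    deploymentTargets:
      organizationalUnits:
        - Root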

dynamicPartitioning: undefined | string = undefined

Declaration of dynamic partitioning for Kinesis Firehose.

Remarks

Kinesis Firehose dynamic partitioning allows streamed CloudWatch Logs data to be assigned to a specific prefix. The value provided here is the path to a JSON file containing an array of log filters. More details are available at: https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/centralized-logging.html Each item in the array has the following format:

{ "logGroupPattern": "LogGroupName", "s3Prefix": "s3-prefix" }

The logs end up in the central logs bucket under the prefix CloudWatchLogs. In the above example, the log group named LogGroupName will stream to s3://<central-logs-bucket>/CloudWatchLogs/s3-prefix/

It is possible to use * to group multiple log groups under the same prefix. So, in the example below:

[{ "logGroupPattern": "Application*", "s3Prefix": "app" }]

The above will send log groups named ApplicationA, ApplicationB, and ApplicationC to the S3 prefix app. Please make sure that logGroupPattern values do not conflict with each other, as the logs are streamed to one destination and not replicated. For example, extending the above example to the following:

[{ "logGroupPattern": "Application*", "s3Prefix": "app" }, { "logGroupPattern": "App*", "s3Prefix": "apple" }]

In the above case, logs from ApplicationA can end up in either app or apple. They will not be replicated to both prefixes.

For more information on Kinesis Firehose dynamic partitioning limits, please refer to: https://docs.aws.amazon.com/firehose/latest/dev/limits.html
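Putting these pieces together, the file referenced by dynamicPartitioning is a single JSON array of such items. The sketch below is illustrative only; the patterns and prefixes are placeholders, not values required by the solution.

[
  { "logGroupPattern": "Application*", "s3Prefix": "app" },
  { "logGroupPattern": "/aws/lambda/*", "s3Prefix": "lambda" }
]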

enable: undefined | boolean = undefined

Enable or disable CloudWatch replication

encryption: undefined | ServiceEncryptionConfig = undefined

Encryption setting for AWS CloudWatch log group data.

Remarks

For more information, please refer to ServiceEncryptionConfig.
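As a minimal sketch based on the remarks above: useCMK: true encrypts log group data in the targeted accounts and regions with an AWS KMS CMK, and deploymentTargets scopes where that applies. The organizational unit name below is a placeholder.

encryption:
  useCMK: true
  deploymentTargets:
    organizationalUnits:
      - Infrastructure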

exclusions: undefined | CloudWatchLogsExclusionConfig[] = undefined

Exclude Log Groups during replication

replaceLogDestinationArn: undefined | string = undefined

Customer-defined log subscription filter destination ARN that is associated with the existing log group. The accelerator solution needs to disassociate this destination before configuring the solution-defined subscription filter destination.

Default

undefined

Remarks

When no value is provided, the accelerator solution will not attempt to remove an existing customer-defined log subscription filter destination. When existing log group(s) already have two subscription filter destinations defined, and neither of them is the solution-configured subscription filter destination, the solution will fail to configure log replication for those log groups and, as a result, the pipeline will fail.
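For illustration, the property is a single ARN string under cloudwatchLogs, as in the example at the top of this page; the account ID and destination name below are placeholders.

cloudwatchLogs:
  replaceLogDestinationArn: arn:aws:logs:us-east-1:111111111111:destination:ReplaceDestination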
