Metrics

Core utility


Metrics creates custom metrics asynchronously by logging metrics to standard output following the Amazon CloudWatch Embedded Metric Format (EMF).

Key features

  • Aggregate up to 100 metrics using a single CloudWatch EMF object (large JSON blob)
  • Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
  • Metrics are created asynchronously by the CloudWatch service; no custom stacks needed
  • Context manager to create a one-off metric with a different dimension

Initialization

Set the POWERTOOLS_SERVICE_NAME and POWERTOOLS_METRICS_NAMESPACE env vars as a start - here is an example using the AWS Serverless Application Model (SAM):

template.yaml
Resources:
    HelloWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
            ...
            Runtime: python3.8
            Environment:
                Variables:
                    POWERTOOLS_SERVICE_NAME: payment
                    POWERTOOLS_METRICS_NAMESPACE: ServerlessAirline

We recommend you use your application or main service as the metric namespace. You can explicitly set a namespace via the namespace param or the POWERTOOLS_METRICS_NAMESPACE env var; this sets the namespace key that will be used for all metrics. You can also pass a service name via the service param or the POWERTOOLS_SERVICE_NAME env var; this will create a dimension with the service name.

app.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

# POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME defined
metrics = Metrics()
# Explicit definition
Metrics(namespace="ServerlessAirline", service="orders")  # creates a default dimension {"service": "orders"} under the namespace "ServerlessAirline"

You can initialize Metrics anywhere in your code, as many times as you need - it'll keep track of your aggregate metrics in memory.
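
For instance, here is a minimal sketch of that behaviour - it assumes POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME are set, and the file name is illustrative:

shared_metrics.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

# Both instances aggregate into the same in-memory metric set
metrics = Metrics()
more_metrics = Metrics()

metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
more_metrics.add_metric(name="SuccessfulPayment", unit=MetricUnit.Count, value=1)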

Creating metrics

You can create metrics using add_metric, and manually create dimensions for all your aggregate metrics using add_dimension.

app.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_dimension(name="environment", value="prod")
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

The MetricUnit enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the unit as a string if you already know it, e.g. "Count".
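
To illustrate, both calls below record the same metric (a minimal sketch; the file name is illustrative):

metric_units.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")

# Equivalent: the enum guards against typos, while the plain string
# works when you already know the exact unit name
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_metric(name="SuccessfulBooking", unit="Count", value=1)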

CloudWatch EMF supports a maximum of 100 metrics per EMF object. Metrics will automatically flush all metrics when adding the 100th metric; subsequent metrics will be aggregated into a new EMF object, as sketched below.
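
Here is a minimal sketch of that behaviour; the loop and the per-iteration metric names are illustrative only:

batch_metrics.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    # Adding the 100th metric triggers an automatic flush to standard
    # output; the remaining 50 start a new EMF object, flushed by
    # log_metrics at the end of the invocation
    for i in range(150):
        metrics.add_metric(name=f"OperationType{i}", unit=MetricUnit.Count, value=1)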

Creating a metric with a different dimension

CloudWatch EMF uses the same dimensions across all your metrics. Use single_metric if you have a metric that should have different dimensions.

Generally, this would be an edge case, since you pay for each unique metric. Keep the following formula in mind:

unique metric = (metric_name + dimension_name + dimension_value)
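
For example, emitting SuccessfulBooking under environment=prod and again under environment=dev counts as two unique metrics, and you pay for both.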

single_metric.py
from aws_lambda_powertools import single_metric
from aws_lambda_powertools.metrics import MetricUnit

with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric:
    metric.add_dimension(name="function_context", value="$LATEST")
    ...

Adding metadata

You can use add_metadata for advanced use cases, where you want to add metadata as part of the serialized metrics object.

This will not be available during metrics visualization - use dimensions for that purpose.

app.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_metadata(key="booking_id", value="booking_uuid")

This will be available in CloudWatch Logs to ease operations on high-cardinality data - for example, searching logs for a specific booking_id to correlate a metric datapoint with the request that produced it.

Excerpt of the output in CloudWatch Logs
cloudwatch_logs.json
{
    "SuccessfulBooking": 1.0,
    "_aws": {
        "Timestamp": 1592234975665,
        "CloudWatchMetrics": [
            {
                "Namespace": "ExampleApplication",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "SuccessfulBooking",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "booking",
    "booking_id": "booking_uuid"}

Flushing metrics

As you finish adding all your metrics, you need to serialize and flush them to standard output. You can do that right before you return your response to the caller via log_metrics.

lambda_handler.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(service="ExampleService")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

@metrics.log_metrics
def lambda_handler(evt, ctx):
    metrics.add_dimension(name="service", value="booking")
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...

The log_metrics decorator validates, serializes, and flushes all your metrics. During metrics validation, if no metrics are provided, a warning will be logged but no exception will be raised.

If metrics are provided and any of the following criteria are not met, a SchemaValidationError exception will be raised (see the sketch after this list):

  • Minimum of 1 dimension
  • Maximum of 9 dimensions
  • Namespace is set, and no more than one
  • Metric units must be supported by CloudWatch
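
For instance, this minimal sketch trips the namespace rule on purpose (it assumes POWERTOOLS_METRICS_NAMESPACE is not set):

schema_validation.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit, SchemaValidationError

# No namespace param, and POWERTOOLS_METRICS_NAMESPACE assumed unset
metrics = Metrics(service="booking")
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

try:
    metrics.serialize_metric_set()
except SchemaValidationError as exc:
    print(f"Metric validation failed: {exc}")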

If you want to ensure that at least one metric is emitted, you can pass raise_on_empty_metrics to the log_metrics decorator:

lambda_handler.py
from aws_lambda_powertools import Metrics

metrics = Metrics(namespace="ExampleApplication", service="booking")

@metrics.log_metrics(raise_on_empty_metrics=True)
def lambda_handler(evt, ctx):
    ...

If you expect your function to execute without publishing metrics every time, you can suppress the warning with warnings.filterwarnings("ignore", "No metrics to publish*"), as shown below.
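
A sketch of where that filter could live - at module level, before any invocation runs (file name illustrative):

suppress_warning.py
import warnings

from aws_lambda_powertools import Metrics

metrics = Metrics(namespace="ExampleApplication", service="booking")

# Suppress the "No metrics to publish" warning raised at flush time
# when an invocation finishes without adding any metrics
warnings.filterwarnings("ignore", "No metrics to publish*")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    ...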

When nesting multiple middlewares, you should use log_metrics as your last decorator wrapping all subsequent ones.

lambda_handler_nested_middlewares.py
from aws_lambda_powertools import Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit

tracer = Tracer()
metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

@metrics.log_metrics
@tracer.capture_lambda_handler
def lambda_handler(evt, ctx):
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...

Flushing metrics manually

If you prefer not to use log_metrics because you might want to encapsulate additional logic when doing so, you can manually flush and clear metrics as follows:

Metric, dimension, and namespace validation still applies.

manual_metric_serialization.py
import json
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

your_metrics_object = metrics.serialize_metric_set()
metrics.clear_metrics()
print(json.dumps(your_metrics_object))

Capturing cold start metric

You can capture cold start metrics automatically with log_metrics via capture_cold_start_metric param.

lambda_handler.py
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(service="ExampleService")

@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(evt, ctx):
    ...

If it's a cold start, this feature will add a metric named ColdStart and a dimension named function_name.

Testing your code

Use POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.

pytest_metric_namespace.sh

POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest

You can ignore this if you are explicitly setting the namespace and default dimension by passing the namespace and service parameters when initializing Metrics: metrics = Metrics(namespace="ApplicationName", service="ServiceName").
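
Alternatively, you can set those env vars inside the test itself. A minimal pytest sketch - the test module name, test name, and assertion are illustrative:

test_metrics.py
import json

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit


def test_successful_booking_metric(monkeypatch):
    # Env vars must be set before Metrics is initialized
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "Application")
    monkeypatch.setenv("POWERTOOLS_SERVICE_NAME", "Example")

    metrics = Metrics()
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

    # serialize_metric_set() returns the EMF object as a dict
    emf_blob = metrics.serialize_metric_set()
    assert "SuccessfulBooking" in json.dumps(emf_blob)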
