Bedrock

bedrock

LLMeter targets for testing the Amazon Bedrock Converse and ConverseStream APIs

BedrockBase

BedrockBase(model_id, endpoint_name=None, region=None, inference_config=None, bedrock_boto3_client=None, max_attempts=3)

Bases: Endpoint[TBedrockConverseResponseBase], Generic[TBedrockConverseResponseBase]

Base class for interacting with Amazon Bedrock endpoints.

This class provides core functionality for making requests to Amazon Bedrock endpoints, handling configuration and client initialization.

Parameters:

Name Type Description Default
model_id str

The identifier for the model to use

required
endpoint_name str | None

Name of the endpoint. Defaults to None.

None
region str | None

AWS region to use. Defaults to None.

None
inference_config dict | None

Configuration for inference. Defaults to None.

None
bedrock_boto3_client client

Pre-configured boto3 client. Defaults to None.

None
max_attempts int

Maximum number of retry attempts. Defaults to 3.

3
Source code in llmeter/endpoints/bedrock.py
def __init__(
    self,
    model_id: str,
    endpoint_name: str | None = None,
    region: str | None = None,
    inference_config: dict | None = None,
    bedrock_boto3_client=None,
    max_attempts: int = 3,
):
    super().__init__(
        model_id=model_id,
        endpoint_name=endpoint_name or "amazon bedrock",
        provider="bedrock",
    )

    self.region = region or boto3.session.Session().region_name
    logger.info(f"Using AWS region: {self.region}")

    self._bedrock_client = bedrock_boto3_client
    if self._bedrock_client is None:
        config = Config(retries={"max_attempts": max_attempts, "mode": "standard"})
        self._bedrock_client = boto3.client(
            "bedrock-runtime", region_name=self.region, config=config
        )
    self._inference_config = inference_config

create_payload staticmethod

create_payload(user_message, max_tokens=None, **kwargs)

Create a payload for the Bedrock Converse API request with optional multi-modal content.

⚠️ SECURITY WARNING: Format detection is for testing/development convenience ONLY. This method does NOT validate file safety, integrity, or protect against malicious content. DO NOT use with untrusted files (user uploads, external sources) without proper validation, sanitization, and security measures.

Parameters:

Name Type Description Default
user_message str | list[ContentItem]

A single text string, or an ordered list mixing strings and MediaContent objects from llmeter.prompt_utils (ImageContent, AudioContent, VideoContent, DocumentContent). The order of items in the list controls the order of content blocks in the API request.

required
max_tokens int | None

Maximum number of tokens to generate. Defaults to 256.

None
**kwargs Any

Additional keyword arguments to include in the payload.

{}

Returns:

Name Type Description
dict dict

The formatted payload for the Bedrock API request.

Raises:

Type Description
TypeError

If parameters have invalid types.

ValueError

If parameters have invalid values.

Examples:

Text only::

create_payload("Hello")

Image with text (order preserved)::

create_payload([
    ImageContent.from_path("photo.jpg"),
    "What's in this image?",
])

Mixed content::

create_payload([
    "Compare the chart with the report:",
    ImageContent.from_path("chart.png"),
    DocumentContent.from_path("report.pdf"),
])
Source code in llmeter/endpoints/bedrock.py
@staticmethod
def create_payload(
    user_message: str | list[ContentItem],
    max_tokens: int | None = None,
    **kwargs: Any,
) -> dict:
    """
    Create a payload for the Bedrock Converse API request with optional multi-modal content.

    ⚠️ SECURITY WARNING: Format detection is for testing/development convenience ONLY.
    This method does NOT validate file safety, integrity, or protect against malicious
    content. DO NOT use with untrusted files (user uploads, external sources) without
    proper validation, sanitization, and security measures.

    Args:
        user_message: A single text string, or an ordered list mixing strings
            and :class:`~llmeter.prompt_utils.MediaContent` objects
            (``ImageContent``, ``AudioContent``, ``VideoContent``,
            ``DocumentContent``).  The order of items in the list controls
            the order of content blocks in the API request.
        max_tokens: Maximum number of tokens to generate. Defaults to 256.
        **kwargs: Additional keyword arguments to include in the payload.

    Returns:
        dict: The formatted payload for the Bedrock API request.

    Raises:
        TypeError: If parameters have invalid types.
        ValueError: If parameters have invalid values.

    Examples:
        Text only::

            create_payload("Hello")

        Image with text (order preserved)::

            create_payload([
                ImageContent.from_path("photo.jpg"),
                "What's in this image?",
            ])

        Mixed content::

            create_payload([
                "Compare the chart with the report:",
                ImageContent.from_path("chart.png"),
                DocumentContent.from_path("report.pdf"),
            ])
    """
    if max_tokens is None:
        max_tokens = 256
    if not isinstance(max_tokens, int) or max_tokens <= 0:
        raise ValueError("max_tokens must be a positive integer")

    # Normalise to a list of ContentItem
    if isinstance(user_message, str):
        items: list[ContentItem] = [user_message]
    elif isinstance(user_message, list):
        items = user_message
    else:
        raise TypeError(
            "user_message must be a str or list of str/MediaContent, "
            f"got {type(user_message).__name__}"
        )

    if not items:
        raise ValueError("user_message must not be empty")

    content_blocks = _build_content_blocks(items)

    payload: dict = {
        "messages": [{"role": "user", "content": content_blocks}],
    }
    payload.update(kwargs)
    if payload.get("inferenceConfig") is None:
        payload["inferenceConfig"] = {}
    payload["inferenceConfig"] = {
        **payload["inferenceConfig"],
        "maxTokens": max_tokens,
    }
    return payload
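The merge behaviour at the end of `create_payload` is worth seeing in isolation: `**kwargs` are applied first, and `maxTokens` is then folded into any caller-supplied `inferenceConfig` rather than replacing it. The standalone sketch below reproduces only the text-only path, assuming `_build_content_blocks` wraps plain strings as `{"text": ...}` blocks as the Converse API expects; `sketch_create_payload` is a hypothetical name, not part of llmeter.

```python
# Minimal sketch of create_payload's text-only path (illustration only).
def sketch_create_payload(user_message: str, max_tokens: int = 256, **kwargs) -> dict:
    if not isinstance(max_tokens, int) or max_tokens <= 0:
        raise ValueError("max_tokens must be a positive integer")
    payload: dict = {
        # A plain string becomes a single {"text": ...} content block.
        "messages": [{"role": "user", "content": [{"text": user_message}]}],
    }
    payload.update(kwargs)  # caller kwargs first...
    payload["inferenceConfig"] = {
        **(payload.get("inferenceConfig") or {}),  # ...then merge, keeping caller keys
        "maxTokens": max_tokens,
    }
    return payload

p = sketch_create_payload("Hello", max_tokens=128, inferenceConfig={"temperature": 0.2})
# p["inferenceConfig"] keeps the caller's temperature and adds maxTokens.
```

Note that a `maxTokens` passed inside `inferenceConfig` via kwargs would be overwritten by the explicit `max_tokens` argument, matching the real method's precedence.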

prepare_payload

prepare_payload(payload)

Enforce required properties on the input to Bedrock Converse* APIs

In particular, ensure modelId is set in line with this Endpoint's configured model ID and apply self._inference_config if no config was explicitly set on the payload.

Source code in llmeter/endpoints/bedrock.py
def prepare_payload(self, payload):
    """Enforce required properties on the input to Bedrock Converse* APIs

    In particular, ensure `modelId` is set in line with this Endpoint's configured model ID
    and apply `self._inference_config` if no config was explicitly set on the payload.
    """
    payload = {**payload}
    if payload.get("inferenceConfig") is None:
        payload["inferenceConfig"] = self._inference_config or {}
    payload["modelId"] = self.model_id
    return payload

BedrockConverse

BedrockConverse(model_id, endpoint_name=None, region=None, inference_config=None, bedrock_boto3_client=None, max_attempts=3)

Bases: BedrockBase[ConverseResponseTypeDef]

Source code in llmeter/endpoints/bedrock.py
def __init__(
    self,
    model_id: str,
    endpoint_name: str | None = None,
    region: str | None = None,
    inference_config: dict | None = None,
    bedrock_boto3_client=None,
    max_attempts: int = 3,
):
    super().__init__(
        model_id=model_id,
        endpoint_name=endpoint_name or "amazon bedrock",
        provider="bedrock",
    )

    self.region = region or boto3.session.Session().region_name
    logger.info(f"Using AWS region: {self.region}")

    self._bedrock_client = bedrock_boto3_client
    if self._bedrock_client is None:
        config = Config(retries={"max_attempts": max_attempts, "mode": "standard"})
        self._bedrock_client = boto3.client(
            "bedrock-runtime", region_name=self.region, config=config
        )
    self._inference_config = inference_config

invoke

invoke(payload)

Invoke the Bedrock converse API with the given payload.

Source code in llmeter/endpoints/bedrock.py
@BedrockBase.llmeter_invoke
def invoke(self, payload: dict) -> ConverseResponseTypeDef:
    """Invoke the Bedrock converse API with the given payload."""
    client_response = self._bedrock_client.converse(**payload)  # type: ignore
    return client_response

process_raw_response

process_raw_response(raw_response, start_t, response)

Parse the response from a Bedrock converse API call.

Parameters:

Name Type Description Default
raw_response

Raw response from the Bedrock API.

required
start_t float

The timestamp when the request was initiated.

required
response InvocationResponse

The output response object to be updated in-place.

required

Returns:

Type Description
None

Nothing is returned; the response object is updated in-place.

Source code in llmeter/endpoints/bedrock.py
def process_raw_response(
    self, raw_response, start_t: float, response: InvocationResponse
) -> None:
    """Parse the response from a Bedrock converse API call.

    Args:
        response: Raw response from the Bedrock API.
        start_t: The timestamp when the request was initiated.

    Returns:
        InvocationResponse with the generated text and metadata.
    """
    resp_meta = raw_response.get("ResponseMetadata", {})
    response.id = resp_meta.get("RequestId")
    response.retries = resp_meta.get("RetryAttempts")

    text_parts = [
        part["text"]
        for part in raw_response["output"]["message"]["content"]
        if "text" in part
    ]
    response.response_text = "".join(text_parts)

    usage = raw_response.get("usage", {})
    response.num_tokens_input = usage.get("inputTokens")
    response.num_tokens_input_cached = usage.get("cacheReadInputTokens")
    response.num_tokens_output = usage.get("outputTokens")

BedrockConverseStream

BedrockConverseStream(model_id, endpoint_name=None, region=None, inference_config=None, bedrock_boto3_client=None, max_attempts=3, ttft_visible_tokens_only=True)

Bases: BedrockBase[ConverseStreamResponseTypeDef]

Streaming endpoint for the Bedrock Converse API.

When extended thinking is enabled, the stream contains reasoningContent deltas before the visible text deltas. The ttft_visible_tokens_only parameter controls which delta sets time_to_first_token:

  • True (default) - TTFT is set on the first text delta. Reasoning deltas are ignored for timing.
  • False - TTFT is set on the first delta of any kind, including reasoning content.
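The two modes can be illustrated with a toy delta timeline (simulated timestamps, not the real stream objects): a reasoning delta at 1.0s followed by the first visible text delta at 3.0s.

```python
# Simulated (arrival_time, delta_kind) sequence from an extended-thinking stream.
deltas = [(1.0, "reasoningContent"), (3.0, "text")]

def ttft(visible_only: bool) -> float:
    """Return the time of the first delta that counts toward TTFT."""
    for t, kind in deltas:
        if kind == "text" or not visible_only:
            return t
    raise RuntimeError("no qualifying delta in stream")

# ttft(True)  -> 3.0  (first visible text delta)
# ttft(False) -> 1.0  (first delta of any kind, including reasoning)
```

With reasoning enabled, the two settings can therefore report very different TTFT values for the same request.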

Parameters:

Name Type Description Default
model_id str

Bedrock model identifier.

required
endpoint_name str | None

Display name. Defaults to None.

None
region str | None

AWS region. Defaults to None.

None
inference_config dict | None

Default inference configuration.

None
bedrock_boto3_client

Pre-configured boto3 client.

None
max_attempts int

Maximum retry attempts. Defaults to 3.

3
ttft_visible_tokens_only bool

When True (default), TTFT measures time to first visible text token. When False, TTFT includes reasoning content deltas.

True
Source code in llmeter/endpoints/bedrock.py
def __init__(
    self,
    model_id: str,
    endpoint_name: str | None = None,
    region: str | None = None,
    inference_config: dict | None = None,
    bedrock_boto3_client=None,
    max_attempts: int = 3,
    ttft_visible_tokens_only: bool = True,
):
    super().__init__(
        model_id=model_id,
        endpoint_name=endpoint_name,
        region=region,
        inference_config=inference_config,
        bedrock_boto3_client=bedrock_boto3_client,
        max_attempts=max_attempts,
    )
    self.ttft_visible_tokens_only = ttft_visible_tokens_only

process_raw_response

process_raw_response(raw_response, start_t, response)

Parse the streaming response from a Bedrock ConverseStream API call.

Only text deltas contribute to response_text. reasoningContent deltas are used solely for TTFT measurement when ttft_visible_tokens_only is False.

Parameters:

Name Type Description Default
raw_response

The raw response from the Bedrock API.

required
start_t float

The timestamp when the request was initiated.

required
response InvocationResponse

The output response object to be updated in-place.

required
Source code in llmeter/endpoints/bedrock.py
def process_raw_response(
    self, raw_response, start_t: float, response: InvocationResponse
) -> None:
    """Parse the streaming response from a Bedrock ConverseStream API call.

    Only `text` deltas contribute to `response_text`. `reasoningContent` deltas are used solely
    for TTFT measurement when `ttft_visible_tokens_only` is `False`.

    Args:
        raw_response: The raw response from the Bedrock API.
        start_t: The timestamp when the request was initiated.
        response: The output response object to be updated in-place.
    """
    response.id = raw_response["ResponseMetadata"].get("RequestId")
    response.retries = raw_response["ResponseMetadata"]["RetryAttempts"]

    for chunk in raw_response["stream"]:
        now = time.perf_counter()

        if "contentBlockDelta" in chunk:
            delta = chunk["contentBlockDelta"]["delta"]

            if "reasoningContent" in delta:
                # Reasoning delta -- only counts for TTFT when
                # ttft_visible_tokens_only is False.
                if (
                    not self.ttft_visible_tokens_only
                    and response.time_to_first_token is None
                ):
                    response.time_to_first_token = now - start_t

            elif "text" in delta:
                delta_text = delta["text"]
                if not isinstance(delta_text, str):
                    raise TypeError("Expected string for delta text")
                if delta_text:
                    if response.time_to_first_token is None:
                        response.time_to_first_token = now - start_t
                    if response.response_text is None:
                        response.response_text = delta_text
                    else:
                        response.response_text += delta_text
                    response.time_to_last_token = now - start_t

        if "contentBlockStop" in chunk:
            response.time_to_last_token = now - start_t

        if "metadata" in chunk:
            usage = chunk["metadata"].get("usage", {})
            input_tokens = usage.get("inputTokens")
            if input_tokens is not None:
                if response.num_tokens_input is None:
                    response.num_tokens_input = input_tokens
                else:
                    response.num_tokens_input += input_tokens
            output_tokens = usage.get("outputTokens")
            if output_tokens is not None:
                if response.num_tokens_output is None:
                    response.num_tokens_output = output_tokens
                else:
                    response.num_tokens_output += output_tokens
            cache_read_input_tokens = usage.get("cacheReadInputTokens")
            if cache_read_input_tokens is not None:
                if response.num_tokens_input_cached is None:
                    response.num_tokens_input_cached = cache_read_input_tokens
                else:
                    response.num_tokens_input_cached += cache_read_input_tokens

        # Detect Bedrock stream error events
        for error_type in BEDROCK_STREAM_ERROR_TYPES:
            if error_type in chunk:
                response.time_to_last_token = now - start_t
                raise RuntimeError(
                    f"Bedrock {error_type}: {chunk[error_type]['message']}"
                )