LiteLLM
LiteLLM(litellm_model, model_id=None)
Bases: LiteLLMBase[ModelResponse]
Endpoint for LiteLLM SDK-based models (non-streaming mode)
Source code in llmeter/endpoints/litellm.py
LiteLLMBase
LiteLLMBase(litellm_model, model_id=None)
Bases: Endpoint[TLiteLLMResponseBase], Generic[TLiteLLMResponseBase]
Base class for (streaming or non-streaming) LiteLLM-based Endpoints
Source code in llmeter/endpoints/litellm.py
create_payload
staticmethod
create_payload(user_message, max_tokens=256, system_message=None, **kwargs)
Create a payload for the LiteLLM completion() request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `user_message` | `str \| Sequence[str]` | The user's message or a sequence of messages. | *required* |
| `max_tokens` | `int` | The maximum number of tokens to generate. Defaults to 256. | `256` |
| `system_message` | `str \| None` | An optional system message to include in the payload. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments to include in the payload. | `{}` |
Returns:

| Name | Type | Description |
|---|---|---|
| `dict` | `dict` | The formatted payload for the LiteLLM `completion()` request. |
Source code in llmeter/endpoints/litellm.py