Method
evaluatePrompt POST

Generate a response to the prompt using the specified model.

Arguments:

REQUIRED KEY TYPE DESCRIPTION
No prompt str Prompt to use for generation.
No systemMessage str System prompt for models that support it.
No llmName LLMName Name of the underlying LLM to be used for generation. Default is auto selection.
No maxTokens int Maximum number of tokens to generate. If set, the model will stop generating once this token limit is reached.
No temperature float Temperature to use for generation. Higher temperatures produce more varied, non-deterministic responses; a value of zero produces mostly deterministic responses. Default is 0.0. A range of 0.0 - 2.0 is allowed.
No messages list A list of messages to use as conversation history. For completion models like OPENAI_GPT3_5_TEXT and PALM_TEXT this should not be set. A message is a dict with attributes: is_user (bool): Whether the message is from the user. text (str): The message's text. attachments (list): The files attached to the message, represented as a list of dictionaries [{"doc_id": }, {"doc_id": }].
No responseType str Specifies the type of response to request from the LLM. One of 'text' or 'json'. If set to 'json', the LLM will respond with a JSON-formatted string whose schema can be specified via `json_response_schema`. Defaults to 'text'.
No jsonResponseSchema dict A dictionary specifying the keys/schema/parameters which the LLM should adhere to in its response when `response_type` is 'json'. Each parameter is mapped to a dict with the following info: type (str) (required): Data type of the parameter. description (str) (required): Description of the parameter. is_required (bool) (optional): Whether the parameter is required or not. Example: json_response_schema = {'title': {'type': 'string', 'description': 'Article title', 'is_required': true}, 'body': {'type': 'string', 'description': 'Article body'}}
No stopSequences list[str] Specifies the strings on which the LLM will stop generation.
No topP float The nucleus sampling value used for this run. If set, the model will sample from the smallest set of tokens whose cumulative probability exceeds the probability `top_p`. Default is 1.0. A range of 0.0 - 1.0 is allowed. It is generally recommended to use either temperature sampling or nucleus sampling, but not both.
Note: The arguments for the API methods follow camelCase, but the Python SDK uses underscore_case.
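
As an illustration, here is a minimal sketch of calling this method through the Python SDK, assuming the method is exposed as `ApiClient.evaluate_prompt` with the snake_case argument names implied by the note above; the API key, model selection, and document ID are placeholders:

```python
from abacusai import ApiClient  # assumes the standard Abacus.AI Python SDK package

client = ApiClient(api_key="YOUR_API_KEY")  # placeholder API key

# Simple single-prompt call: LLM auto-selected, deterministic output (temperature defaults to 0.0).
summary = client.evaluate_prompt(
    prompt="Summarize the key findings of the attached report in three bullet points.",
    system_message="You are a concise technical writer.",
    max_tokens=256,
)

# Chat-style call: prior turns go in `messages` instead of `prompt`. Not for completion
# models such as OPENAI_GPT3_5_TEXT or PALM_TEXT. "example-doc-id" is a placeholder.
answer = client.evaluate_prompt(
    messages=[
        {"is_user": True, "text": "What does this report conclude?",
         "attachments": [{"doc_id": "example-doc-id"}]},
        {"is_user": False, "text": "It concludes that churn dropped after the pricing change."},
        {"is_user": True, "text": "Which risk factors does it mention?"},
    ],
    temperature=0.2,
)

# Structured output: request JSON that follows a declared schema.
article = client.evaluate_prompt(
    prompt="Write a short article about unit testing.",
    response_type="json",
    json_response_schema={
        "title": {"type": "string", "description": "Article title", "is_required": True},
        "body": {"type": "string", "description": "Article body"},
    },
)
```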

Response:

KEY TYPE DESCRIPTION
success Boolean true if the call succeeded, false if there was an error
result LlmResponse
KEY TYPE DESCRIPTION
content str Full response from LLM.
tokens int The number of tokens in the response.
stopReason str The reason why response generation stopped.
llmName str The name of the LLM model used to generate the response.
inputTokens int The number of input tokens used in the LLM call.
outputTokens int The number of output tokens generated in the LLM response.
totalTokens int The total number of tokens (input + output) used in the LLM interaction.
codeBlocks list[LlmCodeBlock] A list of parsed code blocks from the raw LLM response.
KEY TYPE DESCRIPTION
language str The language of the code block, e.g. python, sql, etc.
code str The source code string.
start int Index of the starting character of the code block in the original response.
end int Index of the last character of the code block in the original response.
valid bool Flag denoting whether the source code string is syntactically valid.
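
A short sketch of reading the returned LlmResponse, assuming the Python SDK exposes the fields above as snake_case attributes (an assumption, not verified here), and continuing from the `answer` object in the earlier example:

```python
# `answer` is the LlmResponse from the earlier evaluate_prompt call.
print(answer.content)                        # full response text
print(answer.llm_name, answer.stop_reason)   # which model ran and why generation stopped
print(answer.input_tokens, answer.output_tokens, answer.total_tokens)

# Walk any fenced code blocks the model produced.
for block in answer.code_blocks or []:
    if block.valid and block.language == "python":
        # `start`/`end` are character indices into the raw response (end is inclusive).
        raw_slice = answer.content[block.start:block.end + 1]
        print(block.language, len(block.code), "characters of extracted source")
```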

Exceptions:

TYPE WHEN
InvalidEnumParameterError An invalid value is passed for llmName.
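
A hedged sketch of guarding against this case; the exact exception class and import path in the SDK are not shown in this reference, so a broad except is used here:

```python
try:
    result = client.evaluate_prompt(
        prompt="Hello",
        llm_name="NOT_A_REAL_MODEL",  # invalid LLMName value triggers InvalidEnumParameterError
    )
except Exception as err:  # substitute the SDK's InvalidEnumParameterError type if available
    print(f"evaluate_prompt rejected the request: {err}")
```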
