Return an asynchronous generator that continues the conversation based on the input messages and search results.
REQUIRED | KEY | TYPE | DESCRIPTION |
---|---|---|---|
Yes | deploymentId | str | The unique identifier of a deployment created under the project. |
Yes | message | str | A message from the user. |
Yes | deploymentToken | str | The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed it inside an application or website. |
No | deploymentConversationId | str | The unique identifier of a deployment conversation to continue. If not specified, a new one will be created. |
No | externalSessionId | str | A user-supplied unique identifier of a deployment conversation to continue. If specified, it will be used instead of an internal deployment conversation id. |
No | llmName | str | Name of the specific LLM backend to use to power the chat experience. |
No | numCompletionTokens | int | Default maximum number of tokens for chat answers. |
No | systemMessage | str | The generative LLM system message. |
No | temperature | float | The generative LLM temperature. |
No | filterKeyValues | dict | A dictionary mapping column names to a list of values to restrict the retrieved search results. |
No | searchScoreCutoff | float | Cutoff for the document retriever score. Matching search results below this score will be ignored. |
No | chatConfig | dict | A dictionary specifying the query chat config override. |
No | ignoreDocuments | bool | If True, will ignore any documents and search results, and only use the messages to generate a response. |
No | includeSearchResults | bool | If True, will also return search results, if relevant. |
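The call above returns an asynchronous generator, so a client consumes it with `async for`. The following is a minimal sketch of that pattern; `get_chat_response` here is a hypothetical stand-in for the real client method, and the chunk shape, ids, and token are assumptions for illustration:

```python
import asyncio
from typing import AsyncGenerator

# Hypothetical stand-in for the deployment chat endpoint. The real client
# streams partial answer segments and, when includeSearchResults is set,
# may also yield relevant search results.
async def get_chat_response(
    deployment_id: str,
    message: str,
    deployment_token: str,
    include_search_results: bool = False,
) -> AsyncGenerator[dict, None]:
    for segment in ("Hello", ", ", "world"):
        yield {"type": "text", "segment": segment}
    if include_search_results:
        yield {"type": "search_results", "results": []}

async def main() -> str:
    parts = []
    async for chunk in get_chat_response(
        deployment_id="dep-123",        # placeholder deployment id
        message="What is our refund policy?",
        deployment_token="tok-abc",     # placeholder deployment token
        include_search_results=True,
    ):
        if chunk["type"] == "text":
            parts.append(chunk["segment"])
    return "".join(parts)

print(asyncio.run(main()))
```

Accumulating the text segments as they arrive, rather than waiting for the full response, is what makes the streaming interface useful for interactive chat UIs.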
KEY | TYPE | DESCRIPTION |
---|---|---|
success | Boolean | true if the call succeeded, false if there was an error |
TYPE | WHEN |
---|---|
DataNotFoundError | |
DataNotFoundError | |
DataNotFoundError | |