Predict API

Once your model is trained, you must deploy it on the Abacus.AI platform to generate predictions. You can use the prediction dashboard to generate predictions from the trained model. This section describes the underlying prediction API and the additional prediction API methods available for this use case:

Method
getCompletion (POST)

Returns the finetuned LLM generated completion of the prompt.

Arguments:

REQUIRED | KEY | TYPE | DESCRIPTION
Yes | deploymentToken | str | The deployment token used to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed it inside an application or website.
Yes | deploymentId | str | The unique identifier of a deployment created under the project.
Yes | prompt | str | The prompt given to the finetuned LLM to generate the completion.
Note: API method arguments use camelCase, while the Python SDK uses underscore_case.
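
The call can also be made directly over HTTP. The sketch below is a minimal example using Python's requests library; the base URL and the use of a JSON body are assumptions not confirmed by this page, and the token, deployment ID, and prompt values are placeholders. Only the method name (getCompletion) and the three arguments come from the table above.

import requests

# NOTE: the base URL below is an assumption; confirm it against your account's
# API reference before using.
API_URL = "https://api.abacus.ai/api/v0/getCompletion"

payload = {
    "deploymentToken": "YOUR_DEPLOYMENT_TOKEN",  # placeholder
    "deploymentId": "YOUR_DEPLOYMENT_ID",        # placeholder
    "prompt": "Write a one-line summary of our refund policy.",
}

# This sketch assumes the arguments are sent as a JSON body.
response = requests.post(API_URL, json=payload)
response.raise_for_status()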

Response:

KEY | TYPE | DESCRIPTION
success | Boolean | true if the call succeeded, false if there was an error
CompletionResult

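A response-handling sketch follows, continuing from the request example above. The success flag is documented in the table; the key that carries the CompletionResult payload is not shown on this page, so the "result" key used below is an assumption.

# Continues from the request sketch in the Arguments section above.
body = response.json()

if body.get("success"):
    # "result" is an assumed key name; the table above only documents that a
    # CompletionResult is returned alongside the success flag.
    completion = body.get("result")
    print(completion)
else:
    # On failure the API reports success=false; inspect the body for details.
    print("Prediction call failed:", body)
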
Exceptions:

TYPE | WHEN
DataNotFoundError | deploymentId is not found.
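
When using the Python SDK instead of raw HTTP, the same call and error case might look like the sketch below. The method name get_completion follows the underscore_case note above, and the import paths for ApiClient and DataNotFoundError are assumptions; check the SDK reference for the exact names.

from abacusai import ApiClient
from abacusai.client import DataNotFoundError  # assumed import path

client = ApiClient(api_key="YOUR_API_KEY")  # placeholder API key

try:
    # underscore_case variant of getCompletion, per the note in the Arguments section
    result = client.get_completion(
        deployment_token="YOUR_DEPLOYMENT_TOKEN",  # placeholder
        deployment_id="YOUR_DEPLOYMENT_ID",        # placeholder
        prompt="Summarize the latest support ticket in one sentence.",
    )
    print(result)
except DataNotFoundError:
    # Raised when the supplied deploymentId does not exist (see table above).
    print("Deployment not found; check the deployment ID.")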
