Invoking LLMs

The basic method for invoking large language models is client.evaluate_prompt. It is a versatile method that lets you:

  • Pass messages to the model and receive generated responses
  • Return structured responses using json_response_schema
  • Pass images
  • Customize the prompt to suit your specific use case
  • Invoke the LLM of your choice

Here is the basic call:

Basic Invocation

r = client.evaluate_prompt(
    prompt="What is the capital of Greece?",
    system_message="You should answer all questions with a single word.",
    llm_name="OPENAI_GPT4O",
)

# Response:
print(r.content)
  • system_message: The instructions the model follows when generating its response
  • prompt: The user message the model responds to
  • llm_name: The LLM used to generate the response
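For repeated calls, a thin wrapper can cut down on boilerplate. The helper below is hypothetical (it is not part of the client API); it only assembles the keyword arguments for evaluate_prompt described above.

```python
# Hypothetical convenience wrapper; `client` is any object exposing
# evaluate_prompt(**kwargs) as shown in the example above.
def ask(client, prompt, system_message=None, llm_name="OPENAI_GPT4O"):
    kwargs = {"prompt": prompt, "llm_name": llm_name}
    # Only pass system_message when the caller supplied one.
    if system_message is not None:
        kwargs["system_message"] = system_message
    return client.evaluate_prompt(**kwargs)
```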

JSON Response Example

import json

r = client.evaluate_prompt(
    prompt="In this course, you will learn about car batteries, car doors, and car suspension system",
    # system_message="OPTIONAL, but good to have",
    llm_name="OPENAI_GPT4O",
    response_type="json",
    json_response_schema={
        "learning_objectives": {
            "type": "list",
            "description": "A list of learning objectives",
            "is_required": True,
        }
    },
)

learning_objectives = json.loads(r.content)
print(learning_objectives)
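Because r.content arrives as a JSON string, it is worth confirming that the required fields from the schema actually came back before using them. A minimal sketch, using an illustrative response string in place of r.content:

```python
import json

schema = {
    "learning_objectives": {
        "type": "list",
        "description": "A list of learning objectives",
        "is_required": True,
    }
}

# Illustrative response content; in practice this would be r.content.
raw = '{"learning_objectives": ["car batteries", "car doors", "car suspension"]}'
parsed = json.loads(raw)

# Defensive check: every field marked is_required must be present.
missing = [key for key, spec in schema.items()
           if spec.get("is_required") and key not in parsed]
assert not missing, f"missing required fields: {missing}"
print(parsed["learning_objectives"])
```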

Sending Images

import base64

# Read the image and base64-encode it for the data URL below.
with open('test.png', 'rb') as fo:
    encoded_data = base64.b64encode(fo.read()).decode('utf-8')

response = client.evaluate_prompt(
    prompt='What can you see in the image?',
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{encoded_data}",
                    },
                },
            ],
        },
    ],
)
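The encode-and-embed steps can be wrapped in a small helper. This is a sketch, not part of the client library; the function name and the mime parameter are assumptions:

```python
import base64

def image_to_data_url(path, mime="image/png"):
    """Read an image file and return a base64 data URL suitable for
    the image_url content entry shown above."""
    with open(path, "rb") as fo:
        encoded = base64.b64encode(fo.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"
```

The returned string can be dropped directly into the "url" field of an image_url content block.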