AI Agents

AI agents on the Abacus.AI platform are intelligent, autonomous assistants designed to perform a wide range of tasks by integrating user code, data transformations, machine learning models, and large language model (LLM) prompts. Each agent is customized through an agent_function, which has access to both LLMs and the rich APIs offered by Abacus.AI, enabling intuitive problem-solving driven by natural language inputs.

Types of AI Agents

The Abacus.AI platform supports several types of agents, distinguished by the interface they expose.

Examples of AI Agents

Creating AI Agents

To create an AI agent on the Abacus.AI platform, follow these steps (a programmatic sketch follows the list):

  1. Define Workflow Nodes and Functions: Establish the individual tasks and their functions.
  2. Define the Agent Workflow: Use nodes and edges to create the workflow graph.
  3. Create and Deploy the Agent: Deploy the agent on the platform.
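
For reference, the same steps can be scripted with the Abacus.AI Python client. The sketch below is illustrative only: it creates a minimal single-node agent from source code, assumes a project already exists, and assumes that create_agent accepts the parameters shown; the exact parameter names are assumptions, so verify them against the API reference before use.

from abacusai import ApiClient

client = ApiClient()  # assumes an API key is configured in the environment

# Step 1: source code for the agent's entry-point function.
agent_source = '''
def agent_function(nlp_query):
    from abacusai import ApiClient
    return ApiClient().evaluate_prompt(prompt=nlp_query).content
'''

# Register the agent in an existing project (deployment is covered later in this page).
# NOTE: parameter names below are assumptions; check the API reference.
agent = client.create_agent(
    project_id='<project_id>',
    function_source_code=agent_source,
    agent_function_name='agent_function',
    name='My First Agent',
    description='Answers natural language questions.',
)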

Constructing Agent Functions

Agent functions are created on the create agent page, where you provide a name, description, and source code. The description guides interaction with the agent in the chat dialog.

Example: Default Agent Function

def agent_function(nlp_query):
    """
    Args:
        nlp_query (Any): Data row to predict on/with or to pass to the agent for execution
    Returns:
        The result which can be any JSON serializable Python type
    """
    from abacusai import ApiClient

    # Let agent respond like your favorite character.
    character = 'Sherlock Holmes'
    return ApiClient().evaluate_prompt(prompt=nlp_query, system_message=f'respond like {character}').content
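
Because agent functions are plain Python, you can try them locally before creating the agent, assuming the abacusai package is installed and an API key is configured:

# Quick local check of the default agent function above (requires a configured API key).
print(agent_function('What do you deduce from muddy boots and a missing umbrella?'))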

Example: Data Query Agent Function

def agent_function(nlp_query):
    """
    Args:
        nlp_query (Any): Data row to predict on/with or to pass to the agent for execution
    Returns:
        The result which can be any JSON serializable Python type
    """
    from abacusai import ApiClient

    client = ApiClient()

    # Look up the feature group and render its schema so the LLM can see the table definition.
    fg = client.describe_feature_group_by_table_name('Concrete Strength')
    rendered_fgs = client.render_feature_groups_for_llm([fg.feature_group_id])

    # Build a system prompt that restricts the LLM to answering with SQL.
    system_prompt = 'Reply SQL code only based on the following table definition:'
    for i, block in enumerate(rendered_fgs):
        system_prompt += f'{i + 1}:- {block.content}\n'

    # Ask the LLM for SQL, execute it against the feature group, and return the result as markdown.
    llm_response = client.evaluate_prompt(prompt=nlp_query, system_message=system_prompt)
    return client.execute_feature_group_sql(llm_response.content).to_markdown()
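
To illustrate the flow, a question like the one below causes the LLM to return a SQL statement over the rendered table definition; the SQL is then executed against the feature group and the result is returned as a markdown table. The SQL shown in the comment is only an example of what the model might produce:

# Illustrative only: the exact SQL depends on the LLM's response.
result = agent_function('Show me the average, min and max cement value when the water column is larger than 200')
# llm_response.content might look like:
#   SELECT AVG(cement), MIN(cement), MAX(cement) FROM <table> WHERE water > 200
print(result)  # markdown table produced by to_markdown()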

Deploying Agents

Deploying an agent is straightforward and similar to deploying an ML model. Click Create New Deployment to open the agent deployment window, then click Deploy Agent. A suspended deployment can be restarted later.

Chat with Agents

Once deployed, an agent can be used through the chat window. Each user message is sent to the agent function as input, and the function's response is displayed in the chat. Markdown-formatted responses, such as SQL execution results, are recognized and rendered accordingly.

Execute Agent Function

A deployed agent can also be accessed via the API. Use the execute_agent method from the Abacus.AI Python library to interact with it. Here's an example:

from abacusai import ApiClient

client = ApiClient()  # pass an API key here if one is not configured in the environment
client.execute_agent(
    deployment_token='<deployment_token>',
    deployment_id='<deployment_id>',
    keyword_arguments={
        'query': 'Show me the average, min and max cement value when the water column is larger than 200'
    }
)

If the agent accepts a file input, use execute_agent_with_binary_data to interact with the agent. Here's an example of an agent that accepts a query and a document input:

file = open('<file_path>', 'rb')
client.execute_agent_with_binary_data(
    deployment_token='<deployment_token>',
    deployment_id='<deployment_id>',
    keyword_arguments={
        'query': 'Show me the average, min and max cement value when the water column is larger than 200',
        'document': 'sample.pdf'
    },
    blobs={'sample.pdf': file}
)
file.close()

Note that if the agent accepts a list of file inputs as its second argument, the above code can be modified as follows:

file_1 = open('<doc_1_file_path>', 'rb')
file_2 = open('<doc_2_file_path>', 'rb')
client.execute_agent_with_binary_data(
    deployment_token='<deployment_token>',
    deployment_id='<deployment_id>',
    keyword_arguments={
        'query': 'Show me the average, min and max cement value when the water column is larger than 200',
        'documents': ['doc_1.pdf', 'doc_2.pdf']
    },
    blobs={
        'doc_1.pdf': file_1,
        'doc_2.pdf': file_2
    }
)
file_1.close()
file_2.close()
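
As a stylistic alternative, the file handles can be managed with a with statement so they are closed automatically, even if the call raises an exception:

from abacusai import ApiClient

client = ApiClient()
with open('<doc_1_file_path>', 'rb') as file_1, open('<doc_2_file_path>', 'rb') as file_2:
    client.execute_agent_with_binary_data(
        deployment_token='<deployment_token>',
        deployment_id='<deployment_id>',
        keyword_arguments={
            'query': 'Show me the average, min and max cement value when the water column is larger than 200',
            'documents': ['doc_1.pdf', 'doc_2.pdf']
        },
        blobs={'doc_1.pdf': file_1, 'doc_2.pdf': file_2}
    )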

Monitoring and Versioning

The platform supports versioning of agents, allowing you to track changes and improvements over time.
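
Versions can also be inspected programmatically. The method and attribute names in this sketch are assumptions and may differ in the current SDK; consult the API reference:

from abacusai import ApiClient

client = ApiClient()
# Assumed method and attribute names; verify against the API reference.
agent = client.describe_agent('<agent_id>')
for version in client.list_agent_versions('<agent_id>'):
    print(version.agent_version, version.status)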

Customization

Agents can be highly customized using Python code, allowing for complex logic, data processing, and integration with external services.
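
As an illustration of this flexibility, the hypothetical agent function below pulls records from an external HTTP endpoint and asks the LLM to answer using them as context; the endpoint URL is a placeholder, and the only Abacus.AI call used is evaluate_prompt from the examples above:

def agent_function(nlp_query):
    import requests
    from abacusai import ApiClient

    # Fetch context from a hypothetical external service (placeholder URL).
    records = requests.get('https://example.com/api/records', timeout=10).json()

    # Ask the LLM to answer the query using only the fetched data as context.
    system_prompt = f'Answer using only this data: {records}'
    return ApiClient().evaluate_prompt(prompt=nlp_query, system_message=system_prompt).content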

Additional Resources

For more detailed tutorials, visit the AI Agents Tutorial Page.