Getting Started
This guide will help you get started with the Abacus.AI Python SDK. For a full list of commands, refer to the Official Python SDK Documentation.
Prerequisites
- Sign up for Abacus.AI
- Navigate to the API Keys Dashboard and generate an API key
- Have Python 3.6 or newer installed
Setting up your development environment
The following steps will allow you to set up your development environment and start using the Abacus.AI platform:
- Install the Abacus.AI library:

```python
python3 -m pip install abacusai
```
- Initialize a string variable with the API key generated from the API Keys Dashboard:

```python
api_key = 'API_KEY'  # replace API_KEY with the generated key
```
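Hardcoding keys in source files is easy to leak; a common alternative is to read the key from an environment variable. This is a sketch of that pattern, not an SDK requirement, and the variable name ABACUS_API_KEY is an assumption:

```python
import os

# Read the API key from an environment variable instead of hardcoding it.
# ABACUS_API_KEY is an assumed name, not mandated by the SDK; fall back to
# the placeholder so the snippet still runs when the variable is unset.
api_key = os.environ.get("ABACUS_API_KEY", "API_KEY")
```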
- Import and initialize the Abacus.AI client:

```python
from abacusai import ApiClient
client = ApiClient(api_key)
```
If you are working within the platform's Notebooks, you can initialize the Abacus.AI client without passing an API key:

```python
from abacusai import ApiClient
client = ApiClient()
```
Using the API
There are a couple of ways to find available APIs easily:
- Try the auto-completion features in IDEs; method names follow expressive language.
- Use the suggest_abacus_apis method.
- Browse the Official Python SDK documentation page.
Example of suggest_abacus_apis:
```python
apis = client.suggest_abacus_apis("list feature groups in a project", verbosity=2, limit=3)
for api in apis:
    print(f"Method: {api.method}")
    print(f"Docstring: {api.docstring}")
    print("---")
```
When creating custom objects in Abacus.AI, you'll have access to templates that cover most common functionalities. This is why our Python SDK documentation focuses on core concepts rather than extensive code samples.
For practical implementation guidance, refer to the "Machine Learning Samples" section, which contains end-to-end examples demonstrating basic usage patterns.
Additionally, the platform's AI Engineer can provide custom code examples and assist with any specific API implementation needs you may have.
Below is a cheatsheet of the most commonly used methods of client:
| Method | Explanation |
|---|---|
| suggest_abacus_apis | Describe what you need, and we will return the methods that will help you achieve it. |
| describe_project | Describes a project |
| create_dataset_from_upload | Creates a dataset object from local data |
| describe_feature_group_by_table_name | Describes the feature group using the table name |
| describe_feature_group_version | Describes the feature group using the feature group version |
| list_models | Lists the models of a project |
| extract_data_using_llm | Extracts data from a document. Allows you to produce JSON output and extract specific information from a document |
| execute_data_query_using_llm | Runs SQL on top of feature groups based on natural language input. Can return both SQL and the result of SQL execution. |
| get_chat_response | Uses a ChatLLM deployment. Can be used to add filters, change the LLM, and handle advanced use cases using an agent on top of a ChatLLM deployment. |
| get_chat_response_with_binary_data | Same as above, but you can also send binary data |
| get_conversation_response | Uses a ChatLLM deployment with conversation history. Useful when you need conversation context via the API: you create a conversation ID and pass it, or use the one created by Abacus. |
| get_conversation_response_with_binary_data | Same as above, but you can also send binary data |
| evaluate_prompt | LLM call for a user query. Can get JSON output using additional arguments |
| get_matching_documents | Gets the search results for a user query using a document retriever directly. Can be used along with evaluate_prompt to create a customized ChatLLM-like agent |
| get_relevant_snippets | Creates a doc retriever on the fly for retrieving search results |
| extract_document_data | Extracts data from a PDF, Word document, etc. using OCR or the digital text. |
| get_docstore_document | Downloads a document from the doc store using its doc_id. |
| get_docstore_document_data | Gets extracted or embedded text from a document using its doc_id. |
| stream_message | Streams a message to the UI for agents |
| update_feature_group_sql_definition | Updates the SQL definition of a feature group |
| query_database_connector | Executes a SQL query on top of a database connector. Will only work for connectors that support it. |
| export_feature_group_version_to_file_connector | Exports a feature group to a file connector |
| export_feature_group_version_to_database_connector | Exports a feature group to a database connector |
| create_dataset_version_from_file_connector | Refreshes data from the file connector attached to the dataset. |
| create_dataset_version_from_database_connector | Refreshes data from the database connector attached to the dataset. |
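The table notes that get_matching_documents can be combined with evaluate_prompt to build a customized ChatLLM-like agent. Below is a minimal sketch of that retrieval-augmented pattern. The helper itself, the argument names, and the `.snippet` attribute on results are assumptions, not confirmed SDK signatures; check the Official Python SDK Documentation before relying on them:

```python
# Sketch only: combines two cheatsheet methods into a RAG-style helper.
# Argument names and the `.snippet` result attribute are assumptions;
# verify them against the official SDK reference before use.
def answer_with_docs(client, deployment_token, deployment_id, query):
    # 1. Retrieve the most relevant snippets from the document retriever.
    results = client.get_matching_documents(deployment_token, deployment_id, query)
    context = "\n\n".join(result.snippet for result in results)
    # 2. Ask the LLM to answer the query using only that context.
    return client.evaluate_prompt(
        prompt=query,
        system_message="Answer using only this context:\n" + context,
    )
```

You would call this with an initialized client plus the deployment token and ID of a deployed document retriever.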