Combined (Conversational + Form)
Workflow Type: Hybrid Conversational Workflow with Conditional Forms
Hybrid workflows combine conversational interactions with conditional form-based inputs. This enables intelligent context-aware bots that can dynamically expose forms only when needed, providing a seamless user experience.
Use Case: Smart Document Creation Assistant
Users can have a natural conversation with the assistant, and when they express intent to create a document, the workflow automatically exposes a form for structured input. This combines the flexibility of conversation with the precision of form-based data collection.
Key Concepts
Decision Nodes
Decision nodes enable conditional branching based on data values. They evaluate expressions and control which downstream nodes execute.
| Feature | Description |
|---|---|
| Expression-based | Use Python expressions like `action == "create_document"` |
| Boolean output | Returns `True` or `False` to control flow |
| No function needed | Pure logic evaluation without custom code |
| Multiple branches | Can split workflow into different paths |
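For orientation before the full build in the steps below, a decision node needs only a name, a Python expression, and input mappings that bind the names used in that expression. The following is a minimal sketch with hypothetical node names ("Status Check" and "Status Gate" are illustrative, not part of this workflow; output schemas are omitted for brevity):
from abacusai import DecisionNode, WorkflowGraphNode

def check_status():
    # Hypothetical upstream step whose output feeds the decision
    from abacusai import AgentResponse
    return AgentResponse(status="success")

status_node = WorkflowGraphNode("Status Check", check_status, output_mappings=["status"])

# The expression is plain Python evaluated against the mapped inputs;
# downstream nodes run only when it evaluates to True.
status_gate = DecisionNode(
    "Status Gate",
    'status == "success"',
    input_mappings={"status": status_node.outputs.status},
)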
Conditional Form Exposure
Unlike static form workflows, hybrid workflows show forms conditionally:
- User sends message → Chat handler analyzes intent
- Decision node evaluates intent → Routes accordingly
- If `create_document` → Form appears
- If `general_chat` → Conversational response returned
Data Flow:
User Message → Chat Handler → Decision Node → (Form Node OR Direct Response)
Step 1: Create the Chat Handler Function and Node
The chat handler analyzes user intent and determines whether to trigger form-based document creation or provide a conversational response.
from abacusai import (
    AgentInterface,
    WorkflowGraph,
    WorkflowGraphEdge,
    WorkflowGraphNode,
    DecisionNode,
    WorkflowNodeInputMapping,
    WorkflowNodeInputSchema,
    WorkflowNodeInputType,
    WorkflowNodeOutputMapping,
    WorkflowNodeOutputSchema,
    WorkflowNodeOutputType,
)
def chat_handler(user_input):
    """
    Handles general chat questions and determines if the user wants to create a document.

    Args:
        user_input (str): User's message from the chat interface

    Returns:
        AgentResponse: Contains action type and response text
    """
    from abacusai import AgentResponse, ApiClient

    client = ApiClient()

    # Number of recent message exchanges to include as context (adjust as needed)
    HISTORY_LIMIT = 10

    ################################################
    # Get Conversational Context
    ################################################
    try:
        # Get last N messages (multiplied by 2 for USER + ASSISTANT pairs)
        chat_history = client.get_agent_context_chat_history()[-HISTORY_LIMIT * 2:]
    except Exception:
        chat_history = []

    # Format messages for LLM consumption
    formatted_messages = []
    if chat_history:
        for msg in chat_history:
            if hasattr(msg, "role") and (hasattr(msg, "text") or hasattr(msg, "streamed_data")):
                # Use text if available, otherwise streamed_data
                content = getattr(msg, "text", None) or getattr(msg, "streamed_data", None)
                formatted_messages.append({
                    "role": msg.role,
                    "content": content,
                })

    # Combine all messages into a single context string
    all_messages_text = "\n".join(
        [f"{msg.get('role')}: {msg.get('content')}" for msg in formatted_messages]
    )

    ################################################
    # Intent Detection: Document Creation vs General Chat
    ################################################
    # Use an LLM to classify user intent
    intent_response = client.evaluate_prompt(
        prompt=f"Conversation History: {all_messages_text}\nCurrent User Message: {user_input}",
        system_message="Analyze if the user is requesting to create a document, write a document, generate a document, or make a document. Respond with 'create_document' if they want to create a document, otherwise respond with 'general_chat'.",
        llm_name="GEMINI_2_FLASH",
    )
    intent = intent_response.content.strip().lower()

    ################################################
    # Route Based on Intent
    ################################################
    if "create_document" in intent:
        # Trigger form exposure via the decision node
        return AgentResponse(
            action="create_document",
            response="I'll help you create a document. Please fill out the form that will appear.",
        )
    else:
        # Handle general chat with full context
        context = (
            f"Conversation History: {all_messages_text}\nCurrent User Message: {user_input}"
        )
        chat_response = client.evaluate_prompt(
            prompt=context,
            system_message="You are a helpful AI assistant. Answer the user's question based on the conversation history and current message. Be conversational and helpful.",
            llm_name="OPENAI_GPT4_1",
        )
        return AgentResponse(action="general_chat", response=chat_response.content)
# Convert function to workflow node
chat_handler_node = WorkflowGraphNode(
    "Chat Handler",
    chat_handler,
    output_mappings=["action", "response"],  # Extract both fields from AgentResponse
    output_schema=WorkflowNodeOutputSchema(
        json_schema={
            "type": "object",
            "title": "Response",
            "properties": {
                "action": {"type": "string", "title": "Action"},
                "response": {"type": "string", "title": "Response"},
            },
        }
    ),
)
Key Features:
- `get_agent_context_chat_history()`: Retrieves conversation context
- Intent classification: Uses an LLM to detect document creation requests
- Dual outputs: Returns both `action` (for routing) and `response` (for user display)
- Non-streaming: Uses `evaluate_prompt()` instead of `streaming_evaluate_prompt()`
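If you want the general-chat answer to stream into the chat as it is generated, the final branch of `chat_handler` could use the streaming variant instead. A minimal sketch, assuming `streaming_evaluate_prompt()` accepts the same arguments and returns the same response object as `evaluate_prompt()` (verify against the SDK reference):
# Drop-in replacement for the general_chat branch of chat_handler above:
# stream the answer to the chat UI while still returning the full text.
chat_response = client.streaming_evaluate_prompt(
    prompt=context,
    system_message="You are a helpful AI assistant. Answer the user's question based on the conversation history and current message. Be conversational and helpful.",
    llm_name="OPENAI_GPT4_1",
)
return AgentResponse(action="general_chat", response=chat_response.content)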
Step 2: Create the Decision Node
Decision nodes evaluate expressions and control workflow branching without requiring custom functions.
document_decision_node = DecisionNode(
    "Document Decision Node",
    'action == "create_document"',  # Python expression evaluated against inputs
    input_mappings={
        "action": chat_handler_node.outputs.action,  # Connect to chat handler's action output
    },
)
Decision Node Behavior:
- Expression: `'action == "create_document"'` evaluates to `True` or `False`
- Input source: Reads the `action` value from `chat_handler_node`
- Output: Passes the `action` value through to downstream nodes
- Routing: Nodes connected to this decision node execute only when the expression is `True`
How It Works:
- User says "create a document"
- Chat handler returns `action="create_document"`
- Decision node evaluates `"create_document" == "create_document"` → `True`
- Form node executes (connected to the decision node)
- User sees form in chat interface
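Conceptually, the routing step behaves like the plain-Python evaluation below (an illustration only, not the SDK's actual implementation):
# The mapped inputs form the namespace in which the expression is evaluated.
inputs = {"action": "create_document"}  # value produced by chat_handler_node
branch_taken = eval('action == "create_document"', {}, inputs)
print(branch_taken)  # True, so document_creator_node executes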
Step 3: Create the Form Submit Function and Node
When the decision node evaluates to True, this node exposes a form and generates a Word document based on user input.
def document_creator(action, title, description):
    """
    Creates a document based on user input from the form.

    Args:
        action (str): Action type from the decision node (ensures routing)
        title (str): Document title from the form
        description (str): Document description from the form

    Returns:
        AgentResponse: Contains the generated Word document as a Blob
    """
    from abacusai import AgentResponse, ApiClient, Blob
    from docx import Document
    import io

    client = ApiClient()

    # Stream progress updates to the user
    client.stream_message("Creating your document...")

    ################################################
    # Create Word Document Structure
    ################################################
    doc = Document()

    # Add title heading
    doc.add_heading(title, level=1)

    ################################################
    # Generate Content Using LLM
    ################################################
    content_response = client.evaluate_prompt(
        prompt=f"Create detailed content for a document with the title '{title}' and description '{description}'. Write comprehensive, well-structured content that fulfills the description.",
        system_message="You are a professional document writer. Create well-structured, detailed content based on the given title and description. Use proper formatting and make it comprehensive.",
        llm_name="OPENAI_GPT4_1",
    )

    # Add generated content to the document
    doc.add_paragraph(content_response.content)

    ################################################
    # Save and Return as Blob
    ################################################
    # Save document to a bytes buffer
    doc_buffer = io.BytesIO()
    doc.save(doc_buffer)
    doc_bytes = doc_buffer.getvalue()

    # Return as a downloadable Blob
    return AgentResponse(
        document=Blob(
            doc_bytes,
            "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
            filename=f"{title.replace(' ', '_')}.docx",
        )
    )
# Create node with input/output schemas
document_creator_node = WorkflowGraphNode(
    "Document Creator",
    document_creator,
    # Connect to decision node output
    input_mappings={
        "action": document_decision_node.outputs.action,
    },
    # Define form interface
    input_schema=WorkflowNodeInputSchema(
        json_schema={
            "type": "object",
            "title": "Document Creation Form",
            "required": ["title", "description"],  # Both fields mandatory
            "properties": {
                "title": {
                    "type": "string",
                    "title": "Document Title",
                    "description": "Enter the title for your document",
                },
                "description": {
                    "type": "string",
                    "title": "Document Description",
                    "description": "Describe what content you want in the document",
                },
            },
        },
        ui_schema={
            "description": {"ui:widget": "textarea"}  # Multi-line input for description
        },
    ),
    # Define output as attachment
    output_mappings=[
        WorkflowNodeOutputMapping(
            name="document",
            variable_type=WorkflowNodeOutputType.ATTACHMENT,  # File download
        ),
    ],
    output_schema=WorkflowNodeOutputSchema(
        json_schema={
            "type": "object",
            "title": "Generated Document",
            "properties": {
                "document": {
                    "type": "string",
                    "title": "Document",
                    "format": "data-url",  # Enables file download
                }
            },
        }
    ),
)
Form Schema Breakdown:
| Property | Type | Widget | Required | Purpose |
|---|---|---|---|---|
| `title` | string | text input | ✓ | Document title |
| `description` | string | textarea | ✓ | Content description for the LLM |
Output Type:
- `WorkflowNodeOutputType.ATTACHMENT`: Marks the output as a downloadable file
- `format: "data-url"`: Enables browser download of the generated document
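The attachment mechanism is not specific to Word files: any bytes payload returned as a `Blob` with a MIME type and filename, and mapped through `WorkflowNodeOutputType.ATTACHMENT` with `format: "data-url"`, becomes downloadable in the chat. A minimal sketch of a hypothetical node function that returns a plain-text file instead (the `Blob` argument order mirrors the usage above; verify against the SDK):
def text_exporter(title, description):
    # Hypothetical variant: package the same form inputs as a .txt attachment
    from abacusai import AgentResponse, Blob

    body = f"{title}\n\n{description}".encode("utf-8")
    return AgentResponse(
        document=Blob(
            body,
            "text/plain",
            filename=f"{title.replace(' ', '_')}.txt",
        )
    )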
Step 4: Build the Workflow Graph
Assemble all nodes into a complete workflow with the appropriate interface.
# Define as conversational/chat interface
agent_interface = AgentInterface.CHAT
# Build the workflow graph
workflow_graph = WorkflowGraph(
    nodes=[
        chat_handler_node,       # Analyzes intent
        document_decision_node,  # Routes based on intent
        document_creator_node,   # Conditionally exposes form
    ],
    specification_type="data_flow",  # Nodes execute based on data dependencies
)
# Dependencies for document generation
package_requirements = ['python-docx']
included_modules = []
org_level_connectors = []
user_level_connectors = {}
initialize_function = None
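With `specification_type="data_flow"`, the edges between nodes are inferred from the input mappings defined above. If you prefer to declare the execution order explicitly, the imported `WorkflowGraphEdge` can be used instead. A sketch under the assumption that `WorkflowGraphEdge` accepts source and target node names (check the SDK reference for the exact parameters; this workflow uses the data-flow form above):
# Hypothetical equivalent graph with explicit edges between the named nodes.
explicit_graph = WorkflowGraph(
    nodes=[chat_handler_node, document_decision_node, document_creator_node],
    edges=[
        WorkflowGraphEdge(source="Chat Handler", target="Document Decision Node"),
        WorkflowGraphEdge(source="Document Decision Node", target="Document Creator"),
    ],
)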
Workflow Execution Flow:
- User sends a message via the chat interface
- `chat_handler_node` analyzes intent → outputs `action` and `response`
- `document_decision_node` evaluates `action == "create_document"`
  - If `True`: `document_creator_node` exposes the form
  - If `False`: The conversational response is returned to the user
- User fills the form → Document is generated and downloaded
Step 5: Register the Agent
Deploy the hybrid workflow to your Abacus AI project.
from abacusai import ApiClient
client = ApiClient()
agent = client.create_agent(
    project_id='your_project_id',
    workflow_graph=workflow_graph,
    agent_interface=agent_interface,
    description="AI Assistant that can answer questions with conversation history and create documents on request",
    package_requirements=['python-docx'],
    org_level_connectors=[],
    user_level_connectors={},
    included_modules=[],
)
agent.wait_for_publish()
print(f"Hybrid conversational agent created: {agent.agent_id}")
Configuration Details:
| Parameter | Value | Purpose |
|---|---|---|
| `agent_interface` | `AgentInterface.CHAT` | Enables the chat interface |
| `memory` | `True` | Maintains conversation context |
| `package_requirements` | `['python-docx']` | Installs the Word document library |
| `specification_type` | `"data_flow"` | Nodes execute based on data dependencies |
Testing Your Agent:
- Start conversation: "Hello, how are you?" → General chat response
- Request document: "I need to create a document" → Form appears
- Fill form with title and description → Document generated and downloaded