Deep Agent MCPs: Superpowers for Your God-Tier Agent

Deep Agent now supports the Model Context Protocol (MCP), enabling seamless integration with external tools and data sources. MCP standardizes how AI agents interact with external resources, making it easier to extend Deep Agent's capabilities with real-time data, APIs, and services. This guide walks you through setting up and using MCP servers in Deep Agent to create powerful, context-aware AI workflows.

What Is MCP?

The Model Context Protocol (MCP) is an open standard that connects AI models, like those in Deep Agent, to external systems via a client-server architecture. Think of MCP as a universal connector, allowing Deep Agent to access tools (e.g., GitHub, databases) and resources (e.g., files, APIs) without custom integrations. With MCP, you can enable Deep Agent to perform tasks like querying databases, fetching web content, or automating workflows.
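
Under the hood, the client and server exchange JSON-RPC messages; the MCP specification defines methods such as tools/list (to discover what a server offers) and tools/call (to invoke a tool). The sketch below is illustrative only, with a hypothetical tool name and arguments, since the actual tools depend on the server you connect:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "fetch_page",
    "arguments": { "url": "https://example.com" }
  }
}

Deep Agent handles this exchange for you; all you provide is the server configuration described in the steps below.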

Prerequisites

Note: You can find lists of community-built MCP servers at Pipedream, GitHub, and mcp.so. These MCP directories explain how to obtain the tokens or API keys needed to configure your servers. Platforms like Pipedream support OAuth-based remote servers, which are easier to set up, so you can pick your favorite MCPs from there and get started quickly.

Step-by-Step Guide to Using MCPs in Deep Agent

Step 1: Go to MCP Settings Config Page

  1. Log in to your Abacus.AI account and navigate to the Deep Agent Homepage.
  2. Open the DeepAgent Profile Page from the top-right corner of the page.
  3. Click MCP Server Config Page.

Step 2: Install and Configure an MCP Server

  1. Choose an MCP server that suits your needs (e.g., creating a repository in GitHub, fetching web content). For this example, we will use the GitHub MCP server.
  2. Copy its config JSON and paste it into the JSON Config settings page.
  3. Add a new MCP server config JSON according to its transport type:
    1. Stdio: For local servers, provide the required parameters such as command, args, and env.
      Example: "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"]
    2. SSE: For remote servers, provide the server URL and any optional parameters. Example: http://example.com:8000/sse
  4. Configure the server with the necessary credentials or environment variables (e.g., API keys); for remote servers, make sure you are authenticated on the hosting platform.
  5. Deep Agent will query the server to list available tools and resources.

Example Config:

The example below shows the JSON configuration for two MCP servers: GitHub and Google Tasks.

{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
    }
  },
  "google_tasks": {
    "url": "<REMOTE_SERVER_URL>"
  }
}
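
To add a remote SSE server instead (for example, a hosted Playwright server), the entry only needs the server URL; the server name and URL below are placeholders:

{
  "playwright": {
    "url": "http://example.com:8000/sse"
  }
}

Stdio entries launch the server locally via the given command, while URL entries connect to an already running remote server.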

Step 3: Use MCP Tools in Deep Agent

  1. In your Deep Agent chat interface, instruct the agent to use the MCP tool.
    For example: 'Access this website and tell me its structure using Playwright's tools.'
  2. Deep Agent will display the tool call along with the requested parameters (see the sketch after this list).
  3. View the tool's output in the chat, which Deep Agent can use for further tasks (e.g., summarizing content).
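
To make this concrete, a prompt like the Playwright example above would typically surface a tool call in the chat resembling the sketch below; the tool name browser_navigate and its arguments are illustrative and depend on the MCP server you configured:

{
  "tool": "browser_navigate",
  "arguments": {
    "url": "https://example.com"
  }
}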

FAQs and Troubleshooting

Why isn't my MCP server connecting?

Double-check that your config JSON is valid, that credentials and server URLs are correct, and that your server entries sit inside the expected top-level structure:

{
  "mcpServers": {
    <your server config JSON>
  }
}

Can I use multiple MCP servers?

Yes. Deep Agent supports up to 5 servers and up to 50 tools across them. Add each server in the MCP Server Config settings, but limit the number of active tools to avoid overwhelming the LLM.
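
For instance, a single config JSON with three servers stays well under the limit; the names and values below are placeholders:

{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>" }
  },
  "google_tasks": { "url": "<REMOTE_SERVER_URL>" },
  "playwright": { "url": "<REMOTE_SSE_SERVER_URL>" }
}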

How do I secure my MCP connections?

Use MCP servers from trusted sources only, and prefer OAuth-based authentication for remote servers where available.
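
In practice, that means keeping secrets in the env block of a local server entry rather than pasting them into chat, and letting OAuth-based remote servers carry no credentials in the config at all; the entry below is a placeholder sketch:

{
  "gmail": {
    "url": "<REMOTE_OAUTH_SERVER_URL>"
  }
}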