Integrate MCP (Model Context Protocol) servers to supercharge CodeLLM with real-world context and automation. Whether you're pulling live data, triggering workflows, or connecting to your favorite tools—MCPs make your AI coding assistant infinitely more capable.
Model Context Protocol (MCP) is an open standard that lets CodeLLM connect with external tools, APIs, databases, and services — all through a structured interface.
It enables the AI assistant to see, act, and reason beyond static context, performing tasks such as pulling live data, triggering workflows, and connecting to the tools you already use.
Think of MCPs as real-time memory extensions for your AI.
Explore trusted MCP server directories such as Zapier MCP and GitHub repos.
These directories guide you through getting the required API keys or tokens to activate each tool.
You can also search for MCPs in the top bar.
Paste the server's config in mcp.servers within settings.json.
Configure the MCP server JSON based on its transport type:
For local servers, provide the launch details (command, args, env, etc.):

"command": "npx",
"args": [
  "-y",
  "@modelcontextprotocol/server-github"
]

For remote servers, provide the server URL instead, for example: http://example.com:8000/sse
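For instance, a complete remote entry under mcp.servers might look like the sketch below; the server name "my_remote_server" and the URL are illustrative placeholders, not values from a real directory:

"my_remote_server": {
  "url": "http://example.com:8000/sse"
}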
The example below shows the JSON configuration for two MCP servers: GitHub and Google Tasks.
"github"
and "google_tasks"
)."mcp.servers":{
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
}
},
"google_tasks": {
"url": "<REMOTE_SERVER_URL>"
}
}
Check the status badge next to the server to confirm it is active.
Once your server is active, you can ask the agent to perform tasks that use the connected tools (for example, fetching issues from GitHub or adding items to Google Tasks).
CodeLLM will pick the relevant MCP tools and ask for your confirmation before invoking them. You stay in control; the AI never acts without your go-ahead.
Ensure that the command or URL in the config JSON is correct, that the JSON follows the format shown in the example config, and that the server is actually running.
A common mistake is copying a sample config like the one below without replacing placeholder values, such as the authentication token:
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
}
}
If required authentication parameters (e.g., access tokens) are missing or not correctly filled in, the MCP server will fail to connect. Always make sure to generate and insert valid credentials as per the instructions provided on the respective MCP directory or website.
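For comparison, a correctly filled-in entry would look like the sketch below; the token value is a dummy string in the ghp_ format of a GitHub personal access token and must be replaced with a token you generate on GitHub:

"github": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-github"],
  "env": {
    "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  }
}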
Some platforms may wrap the JSON config within the structure shown below. When copying the config, make sure to include only the server configuration JSON (i.e., the <your server config json> part in the example below), not the wrapper. You may refer to the example config JSON above.
{
  "mcpServers": {
    <your server config json>
  }
}
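For example, assuming a directory provides the GitHub server in this wrapped form:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}

you would paste only the inner entry into mcp.servers in settings.json:

"github": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-github"]
}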
Yes, CodeLLM supports adding multiple MCP servers, and there is no hard limit on the number of tools you can connect across them. However, the effective limit may depend on the language model (LLM) being used by the agent.
As a best practice, we recommend limiting the number of tools to ensure better performance and avoid overwhelming the LLM with too much context.
Use MCP servers from trusted sources only, and prefer OAuth authentication for remote servers where available.
For further assistance, please contact our support team: support@abacus.ai.