MCP Server
The Kova MCP server lets AI assistants such as Claude, Cursor, and other MCP-compatible clients manage your deployments, browse providers, and interact with AI endpoints directly.
What is MCP?
The Model Context Protocol is an open standard that allows AI assistants to call tools exposed by external servers. Kova's MCP server exposes 17 tools covering deployments, providers, AI inference, and account management -- all over a simple JSON-RPC stdio transport.
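As a concrete illustration, a client invokes one of these tools by writing a JSON-RPC `tools/call` request to the server's stdin. The message below is a sketch of the wire format using the `get_network_stats` tool; the exact result payload depends on the network state:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_network_stats",
    "arguments": {}
  }
}
```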
Setup
Get an API Key
Create an API key from the dashboard at Settings → API Keys, or use the API directly:
```shell
curl -X POST https://app.kovanetwork.com/api/v1/api-keys \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "mcp-server"}'
```
Install the Package
Install the Kova MCP server globally:
```shell
npm install -g @kova/mcp
```
Or use it directly with npx (no install needed):
```shell
npx @kova/mcp
```
Configure Your AI Client
Add the server to your MCP client config. For Claude Code, add to ~/.claude.json:
```json
{
  "mcpServers": {
    "kova": {
      "command": "npx",
      "args": ["@kova/mcp"],
      "env": {
        "KOVA_API_TOKEN": "kova_xxxx.xxxxxxxx"
      }
    }
  }
}
```
For Cursor, add to .cursor/mcp.json in your project:
```json
{
  "mcpServers": {
    "kova": {
      "command": "npx",
      "args": ["@kova/mcp"],
      "env": {
        "KOVA_API_TOKEN": "kova_xxxx.xxxxxxxx"
      }
    }
  }
}
```
Verify Connection
Once configured, your AI assistant should list the Kova tools. Try asking it to "list my deployments" or "show network providers".
Environment Variables
| Variable | Description | Default |
|---|---|---|
| KOVA_API_TOKEN | Your API key for authentication | (required) |
| KOVA_API_URL | Base URL for the Kova API | https://app.kovanetwork.com |
Token Required
The server starts without a token, but every authenticated tool call will fail. Always set KOVA_API_TOKEN before connecting.
Available Tools
Deployments
| Tool | Description |
|---|---|
| list_deployments | List all your deployments with optional state filter |
| get_deployment | Get detailed info about a deployment (SDL, leases, resources) |
| create_deployment | Create a new deployment from an SDL manifest |
| close_deployment | Terminate a deployment and refund remaining escrow |
| get_deployment_logs | Fetch recent logs from a deployment service |
| estimate_cost | Estimate hourly/monthly cost for an SDL manifest |
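Under the hood, asking your assistant to list deployments translates into a `tools/call` request like the following. This is a sketch; the `state` argument name and the `"active"` value are assumptions based on the "optional state filter" description above:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_deployments",
    "arguments": { "state": "active" }
  }
}
```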
AI Inference
| Tool | Description |
|---|---|
| list_ai_models | Browse available AI models (LLMs, image gen, audio, embeddings) |
| deploy_ai_model | Deploy a model as an inference endpoint |
| list_ai_endpoints | List your active inference endpoints |
| delete_ai_endpoint | Delete an inference endpoint |
AI Chat
| Tool | Description |
|---|---|
| create_chat_session | Start a chat session with a model |
| send_chat_message | Send a message and get a response |
| list_chat_sessions | List your chat sessions |
Network & Account
| Tool | Description |
|---|---|
| list_providers | Browse compute providers with optional GPU filter |
| list_templates | Browse deployment templates by category |
| get_network_stats | Get network stats (nodes, deployments, capacity) |
| get_account_balance | Get your balance and escrow totals |
Example Usage
Once connected, you can interact naturally with your AI assistant:
- "Deploy a Node.js app with 1 CPU and 512MB RAM"
- "Show me providers with RTX 4090 GPUs"
- "What are the logs for my deployment abc123?"
- "How much would it cost to run a 2 CPU, 4GB RAM deployment?"
- "Deploy llama-3-8b as an inference endpoint"
- "Close deployment xyz789"
Running Standalone
You can also run the server directly for testing:
```shell
KOVA_API_TOKEN=kova_xxxx.xxxxxxxx npx @kova/mcp
```
Or if installed globally:
```shell
KOVA_API_TOKEN=kova_xxxx.xxxxxxxx kova-mcp
```
The server reads JSON-RPC messages from stdin and writes responses to stdout. Diagnostic messages go to stderr.
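For a quick smoke test you can hand-craft a request yourself. The snippet below only builds and prints a `ping` message; the commented line shows how you might pipe it into the server (a sketch, assuming the package is installed and a valid token is set):

```shell
# Build a minimal JSON-RPC ping request (ping is the MCP health check)
REQ='{"jsonrpc":"2.0","id":1,"method":"ping"}'
printf '%s\n' "$REQ"

# To send it to a running server (requires a real token):
#   printf '%s\n' "$REQ" | KOVA_API_TOKEN=kova_xxxx.xxxxxxxx npx @kova/mcp
```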
Protocol Details
- Transport: stdio (stdin/stdout)
- Protocol: JSON-RPC 2.0
- MCP Version: 2024-11-05
- Server ID: kova-network v1.0.0
The server supports the standard MCP lifecycle: initialize → notifications/initialized → tools/list → tools/call, plus ping for health checks.
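On the wire, that handshake looks roughly like the following sequence of client messages, one per line (the clientInfo values here are placeholders):

```json
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.0.1"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list"}
```

The server replies to `initialize` and `tools/list` with matching `id` values; `notifications/initialized` is a notification and gets no response.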