What the Splunk MCP Server Does
The Splunk MCP server exposes your Splunk instance as a set of tools that AI agents can call through the Model Context Protocol. The agent sends a tool call (e.g., "run this SPL query"), the MCP server authenticates against Splunk, executes the query, and returns structured results.
Splunk shipped its official MCP server app (v1.0.3) on Splunkbase in March 2026. It supports Enterprise and Cloud Platform versions 9.2 through 10.2. The community has also produced implementations in Python, TypeScript, and Go, each with different transport options.
The protocol works bidirectionally. Your agent can ask Splunk questions ("what errors hit the checkout service in the last hour?"), and the MCP server translates that intent into SPL, runs the search, and returns results the agent can reason over. The agent never touches Splunk directly. All access flows through the MCP server's authentication and authorization layer.
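Under the hood, each tool call travels as a JSON-RPC 2.0 message with the method `tools/call`, as defined by the Model Context Protocol. A minimal sketch of what the agent's request looks like on the wire; the argument key (`query`) is illustrative, so check your server's tool listing for the exact schema:

```python
import json

# Shape of an MCP tool call: JSON-RPC 2.0 with method "tools/call".
# The tool name comes from the server's tool listing; the argument
# keys below are illustrative, not the official schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_splunk_query",
        "arguments": {
            "query": "search index=main level=ERROR earliest=-1h | stats count by host",
        },
    },
}

print(json.dumps(request, indent=2))
```

The server replies with a `result` payload containing the structured search results, which the agent then reasons over in the same conversation.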
Available Tools
The official Splunk MCP server organizes tools into namespaced categories. Core platform tools use the splunk_ prefix. Splunk AI Assistant tools use saia_.
| Tool | Category | What It Does |
|---|---|---|
| run_splunk_query | Search | Execute SPL queries against your Splunk instance. Rejects destructive commands. 1-minute timeout. |
| get_indexes | Discovery | List available indexes with metadata (size, event count, earliest/latest time). |
| get_splunk_info | System | Retrieve Splunk instance information: version, OS, license type, cluster status. |
| get_knowledge_objects | Knowledge | Access saved searches, alerts, lookups, macros, and field extractions. |
| generate_spl | AI Assistant | Convert natural language queries into SPL. Requires Splunk AI Assistant. |
| explain_spl | AI Assistant | Break down existing SPL queries into plain English explanations. |
| optimize_spl | AI Assistant | Suggest performance improvements for SPL queries. |
Community Server Extras
Community implementations add tools the official server doesn't include:
- validate_spl: Check SPL syntax before execution
- search_export: Export large result sets as CSV or JSON
- run_saved_search: Trigger existing saved searches by name
- kv_store operations: Read and write KV store collections
Installation
Official Splunk MCP Server
Install the official app from Splunkbase (app ID 7931). This is the Cisco-maintained implementation with streamable HTTP transport and native RBAC integration.
Install from Splunkbase
```shell
# Download from Splunkbase and install via CLI
splunk install app splunk-mcp-server.tgz -auth admin:password

# Or install from the Splunk Web UI:
# Apps > Manage Apps > Install app from file

# Restart Splunk after installation
splunk restart
```

Community Python Server (livehybrid/splunk-mcp)
The most popular community implementation. Supports stdio, SSE, and API modes. Requires Python 3.10+ and UV.
Install Community Server
```shell
# Clone the repository
git clone https://github.com/livehybrid/splunk-mcp.git
cd splunk-mcp

# Install dependencies with UV
uv sync

# Set environment variables
export SPLUNK_HOST="your-splunk-instance.com"
export SPLUNK_PORT="8089"
export SPLUNK_USERNAME="your-username"
export SPLUNK_PASSWORD="your-password"
export SPLUNK_SCHEME="https"

# Run in stdio mode (for Claude Code / Cursor)
uv run python server.py --mode stdio

# Run in SSE mode (for remote connections)
uv run python server.py --mode sse --port 8000
```

Splunk's Python/TypeScript Server (splunk/splunk-mcp-server2)
An alternative from the Splunk GitHub org. Built on FastMCP with async/await, input SPL validation, and output sanitization. Runs via stdio, SSE, or Docker.
Docker Deployment
```shell
# Run via Docker
docker run -e SPLUNK_URL=https://your-splunk:8089 \
  -e SPLUNK_TOKEN=your-bearer-token \
  -e SPLUNK_VERIFY_SSL=true \
  -p 8000:8000 \
  splunk-mcp-server2
```

Configuration for Claude Code and Cursor
Claude Code
Add the Splunk MCP server to your project's .mcp.json or global ~/.claude/mcp.json:
.mcp.json (Claude Code)
```json
{
  "mcpServers": {
    "splunk": {
      "command": "uv",
      "args": [
        "run",
        "--directory", "/path/to/splunk-mcp",
        "python", "server.py",
        "--mode", "stdio"
      ],
      "env": {
        "SPLUNK_HOST": "your-splunk-instance.com",
        "SPLUNK_PORT": "8089",
        "SPLUNK_USERNAME": "your-username",
        "SPLUNK_PASSWORD": "your-password",
        "SPLUNK_SCHEME": "https"
      }
    }
  }
}
```

Cursor
Add the server in Cursor Settings > Tools & Integrations > New MCP Server, or edit ~/.cursor/mcp.json directly:
~/.cursor/mcp.json (Cursor)
```json
{
  "mcpServers": {
    "splunk": {
      "command": "uv",
      "args": [
        "run",
        "--directory", "/path/to/splunk-mcp",
        "python", "server.py",
        "--mode", "stdio"
      ],
      "env": {
        "SPLUNK_HOST": "your-splunk-instance.com",
        "SPLUNK_PORT": "8089",
        "SPLUNK_TOKEN": "your-splunk-bearer-token",
        "SPLUNK_SCHEME": "https"
      }
    }
  }
}
```

Authentication Options
The server supports two authentication methods. Username/password is simpler for development. Bearer tokens (Splunk authentication tokens) are better for production since they can be scoped and rotated without changing passwords.
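Both methods ultimately become an `Authorization` header on requests to Splunk's management port (8089): `Bearer <token>` for authentication tokens, or HTTP Basic auth for username/password. A minimal sketch of how a client might choose between them; the helper function itself is illustrative, not part of any server's API:

```python
import base64

def splunk_auth_header(token=None, username=None, password=None):
    """Prefer a bearer token; fall back to HTTP Basic auth."""
    if token:
        # Splunk authentication tokens use the standard Bearer scheme
        return {"Authorization": f"Bearer {token}"}
    if username and password:
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    raise ValueError("provide a token or a username/password pair")

print(splunk_auth_header(token="your-bearer-token"))
```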
Bearer Token Authentication
```shell
# Generate a Splunk authentication token
# Splunk Web > Settings > Tokens > New Token

# Use the token instead of username/password
export SPLUNK_TOKEN="your-bearer-token"

# Or create a token via the Splunk REST API
curl -k https://your-splunk:8089/services/authorization/tokens \
  -u admin:password \
  -d name=mcp-agent \
  -d audience=mcp
```

SPL Queries Through Your Agent
Once connected, your coding agent can run SPL queries in natural language. The MCP server handles translation, execution, and result formatting.
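Execution happens through Splunk's REST search endpoint (`POST /services/search/jobs`), which requires the search string to begin with `search` or a generating command such as `| tstats`. A sketch of the normalization a server performs before dispatch; the helper name and defaults are ours, not any implementation's:

```python
# Prefixes that mark a string as already dispatchable SPL.
GENERATING_PREFIXES = ("search ", "|")

def build_search_payload(spl: str, earliest: str = "-1h") -> dict:
    """Normalize SPL into the form Splunk's REST search API accepts."""
    spl = spl.strip()
    if not spl.lower().startswith(GENERATING_PREFIXES):
        spl = f"search {spl}"
    # earliest_time and output_mode are standard search job parameters
    return {"search": spl, "earliest_time": earliest, "output_mode": "json"}

print(build_search_payload("index=main level=ERROR | stats count by host"))
```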
Debugging a Production Error
Agent Prompt → SPL
```
# You tell Claude Code:
"What errors hit the payment-service in the last 2 hours?"

# The agent calls run_splunk_query with:
index=main sourcetype=app:logs service=payment-service
  level=ERROR earliest=-2h
| stats count by message, host
| sort -count
```

Investigating Latency Spikes
Latency Analysis
```
# You tell the agent:
"Show me p95 response times for the /api/checkout endpoint today"

# Generated SPL:
index=web_logs uri_path="/api/checkout" earliest=-1d
| bin _time span=1h
| stats perc95(response_time) as p95_ms by _time, host
| sort _time
```

Correlating Deploy with Errors
Deploy Correlation
```
# You tell the agent:
"Did error rates increase after the last deploy?"

# Generated SPL:
index=main sourcetype=app:logs level=ERROR earliest=-24h
| bin _time span=15m
| stats count as errors by _time
| join type=left _time [
    search index=deploy_logs earliest=-24h
    | bin _time span=15m
    | stats count as deploys by _time
  ]
| fillnull value=0 deploys
```

Query Safety
The official MCP server enforces a 1-minute timeout on all searches. Queries with destructive commands (delete, collect with unsafe args) are blocked. Long-running analytical searches should use saved searches instead of ad-hoc queries through the MCP server.
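A simplified illustration of this kind of pre-execution guard: inspect each piped command and reject the query if any command is on a blocklist. The command list and function are ours for illustration; the official server's actual validation rules differ:

```python
# Commands that modify data or exfiltrate results (illustrative list).
BLOCKED_COMMANDS = {"delete", "collect", "outputlookup", "sendemail"}

def is_safe_spl(query: str) -> bool:
    """Reject SPL whose piped commands include a blocked command."""
    for segment in query.split("|")[1:]:
        # The first word after each pipe is the SPL command name
        command = segment.strip().split(" ", 1)[0].lower()
        if command in BLOCKED_COMMANDS:
            return False
    return True

print(is_safe_spl("index=main | stats count"))   # allowed
print(is_safe_spl("index=main error | delete"))  # rejected
```

RBAC provides defense in depth: even a query that slips past validation fails if the role lacks the capability (e.g., `delete_by_keyword`).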
Use Cases
Debugging in Context
Read the stack trace in your editor, ask the agent to pull matching Splunk logs, and get the fix. No context switch. The agent sees both the code and the log output in the same conversation.
Incident Response
During an outage, your agent can query error rates across services, identify the affected hosts, pull recent deploy logs, and correlate timestamps. All from the terminal where you're already writing the fix.
Log-Driven Test Writing
Ask the agent to find the most common error patterns in production, then generate test cases that cover those exact failure modes. Real production data shapes better tests than guessing.
Alert Triage
The agent retrieves Splunk saved searches and alert configurations. When an alert fires, it can pull the underlying search results, explain the trigger condition, and suggest whether the alert needs tuning or the issue needs a code fix.
Performance Profiling
Query p50/p95/p99 response times by endpoint and time window. The agent identifies which code paths are slow and can cross-reference with the source code to suggest where to optimize.
Security Audit
Search for failed authentication attempts, unusual access patterns, or privilege escalations. The agent can surface anomalies in Splunk data and help write detection rules or code-level mitigations.
Security Considerations
Giving an AI agent access to production logs requires thought. Splunk logs often contain PII, API keys, session tokens, and internal hostnames. The MCP server doesn't sanitize output by default. What Splunk returns, the agent sees.
RBAC is Your Primary Control
The MCP server inherits the authenticated user's Splunk role. Create a dedicated role for MCP access with minimal permissions:
Recommended Splunk Role Configuration
```
# Create a restricted role for MCP agent access
# Splunk Web > Settings > Roles > New Role

Role name: mcp_agent_reader
Inheritance: user (base role)

Index access:
  - Included: main, web_logs, app_logs
  - Excluded: security_audit, pii_data, secrets

Capabilities:
  ✓ search
  ✓ list_inputs_data
  ✗ edit_user (deny)
  ✗ delete_by_keyword (deny)
  ✗ admin_all_objects (deny)

Search restrictions:
  - Default app: search
  - Search filter: NOT (sourcetype=access_combined password)
  - Max concurrent searches: 3
  - Max search time: 60s
```

Security Checklist
Before Connecting to Production
- Use bearer tokens, not passwords. Tokens can be scoped, rotated, and revoked without changing the user's credentials. Set an expiration.
- Create a dedicated MCP user and role. Don't reuse your admin account. The agent should only see indexes relevant to the codebase you're working on.
- Exclude sensitive indexes. Remove security audit logs, PII stores, and secrets vaults from the MCP role's index access list.
- Enable audit logging. Splunk's internal audit index tracks every search the MCP user runs. Review periodically.
- Set search restrictions. Use search filters to prevent queries that could return credentials or PII. Limit concurrent searches and execution time.
- Disable unnecessary tools. If you only need search, disable KV store operations, user management, and other tools at the server level.
- Use SSL/TLS. Always connect to Splunk over HTTPS (port 8089). Set `SPLUNK_VERIFY_SSL=true` in production.
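For deployments managed through config files, the role restrictions above map onto a stanza in `authorize.conf`. A sketch mirroring the `mcp_agent_reader` role; verify setting names and value formats against the `authorize.conf` spec for your Splunk version:

```ini
[role_mcp_agent_reader]
importRoles = user
srchIndexesAllowed = main;web_logs;app_logs
srchIndexesDefault = main
srchFilter = NOT (sourcetype=access_combined password)
srchJobsQuota = 3
srchMaxTime = 60s
```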
Observability MCP Server Comparison
Splunk isn't the only observability platform with an MCP server. Datadog launched its MCP server (GA, March 2026) and Grafana maintains mcp-grafana (v0.11.2). All three follow the same protocol, so the choice depends on where your data lives.
| Capability | Splunk MCP | Datadog MCP | Grafana MCP |
|---|---|---|---|
| Maintainer | Cisco/Splunk (official) | Datadog (official) | Grafana Labs (official) |
| Transport | Streamable HTTP | Remote HTTP | stdio / SSE |
| Log Search | SPL queries | Log Explorer queries | LogQL (Loki) |
| Metrics | Via SPL on metrics indexes | Native metrics queries | PromQL (Prometheus) |
| Traces | Via SPL on trace data | APM trace analysis | TraceQL (Tempo) |
| Dashboards | Saved searches, dashboards | Dashboard access | Dashboard queries, panel queries |
| Alerts/Monitors | Saved searches, alerts | Monitor management | Alert rule access |
| Auth Model | Splunk RBAC + tokens | API/App keys + scopes | Service account tokens |
| Community Options | 6+ open source servers | 2+ open source servers | 3+ Grafana ecosystem servers |
| Unique Feature | SPL generation/optimization | Incidents + error tracking | Multi-datasource panel queries |
If your team uses Splunk, use the Splunk MCP server. If you're on Datadog, use theirs. The protocol is standardized, so switching later means changing the MCP config, not rewriting integrations. You can also run multiple observability MCP servers simultaneously if your stack spans platforms.
Code Search + Log Search: The Full Debugging Loop
The Splunk MCP server gives your agent production log access. WarpGrep gives it semantic code search across your entire codebase. Together, they close the debugging loop that developers manually perform dozens of times per day.
A concrete example: an alert fires for increased 500 errors on /api/orders. With both MCP servers connected, the agent can:
- Query Splunk for the error details, stack traces, and affected request patterns
- Search the codebase with WarpGrep to find the route handler, middleware, and database queries involved
- Cross-reference the error message with the code path to identify the root cause
- Propose a fix with full context from both production behavior and source code
Without log search, the agent guesses at what's failing in production. Without code search, it can't find the relevant source fast enough. The combination is where the real time savings compound.
WarpGrep: Code Context
Semantic search across your codebase. Finds functions, types, and patterns by meaning, not just string matching. 8 parallel tool calls per search turn. The agent understands your code before suggesting fixes.
Splunk MCP: Production Context
Live log data from your running systems. Error rates, latency percentiles, deploy correlation, alert history. The agent sees what's actually happening, not what the code says should happen.
Frequently Asked Questions
Does the Splunk MCP server work with Splunk Cloud?
Yes. The official app supports both Splunk Enterprise and Splunk Cloud Platform versions 9.2 through 10.2. The Cloud version uses streamable HTTP transport with the same RBAC controls as Enterprise. Splunk also published the MCP server on Azure Marketplace for teams running Splunk in Azure.
Can AI agents run destructive Splunk commands through the MCP server?
The official server's run_splunk_query tool is designed for safe, non-destructive searches. Queries with destructive commands are rejected before execution. RBAC adds a second layer: even if a destructive command somehow passed validation, it would fail if the authenticated user's role lacks the required capability.
Which AI coding tools support the Splunk MCP server?
Any tool that implements the Model Context Protocol: Claude Code, Cursor, Claude Desktop, VS Code (with Copilot MCP support), Windsurf, and OpenAI Codex. The official server uses streamable HTTP. Community implementations also support stdio and SSE for broader compatibility.
What is the difference between the official and community implementations?
The official Splunkbase app is maintained by Cisco/Splunk, uses streamable HTTP, includes public key encryption for tokens, and integrates with Splunk's native RBAC. Community implementations like livehybrid/splunk-mcp and splunk/splunk-mcp-server2 offer stdio/SSE transport, SPL validation guardrails, and Docker deployment. The community servers are often easier to configure for local development since stdio mode works directly with Claude Code and Cursor.
How do I limit what the agent can see in Splunk?
Create a dedicated Splunk role for MCP access. Restrict which indexes the role can search, add search filters to block queries returning credentials or PII, limit concurrent searches and execution time, and exclude capabilities like delete_by_keyword. The MCP server respects all Splunk RBAC constraints.
Related Guides
Semantic Code Search for Your Agent
WarpGrep indexes your codebase for AI coding agents. Pair it with the Splunk MCP server: search code by meaning, search logs by time. Full context debugging from one conversation.