This MCP server allows LLMs to interact with a running Prometheus instance via its API to generate and execute PromQL queries, list and analyze metrics, and more.
From the registry: An API-complete MCP server to manage Prometheus-compatible backends.

Please install the `prometheus-mcp-server-tjhop` MCP server into my current AI client (that's you).
Required prerequisites (do these first if not already done):
- **Prometheus server** — a running Prometheus instance
- **Download binary** — download the release binary for your platform from the GitHub Releases page (https://github.com/tjhop/prometheus-mcp-server/releases)
Canonical MCP server config (stdio transport):
- command: `prometheus-mcp-server`
- args: `["--prometheus.url", "http://127.0.0.1:9090"]`
- required environment variables:
  - `PROMETHEUS_MCP_SERVER_PROMETHEUS_URL`: URL of the Prometheus instance (example: `http://your-prometheus:9090`)
- optional environment variables:
  - `PROMETHEUS_MCP_SERVER_MCP_TRANSPORT`: transport type, `stdio` or `http` (example: `stdio`)
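The canonical fields above can be assembled into a client config entry. This is a sketch in the common `mcpServers` JSON shape used by several MCP clients; the server key name `prometheus` and the bare binary name on `command` are assumptions — adapt both to your client's schema and the binary's actual location:

```json
{
  "mcpServers": {
    "prometheus": {
      "command": "prometheus-mcp-server",
      "args": ["--prometheus.url", "http://127.0.0.1:9090"],
      "env": {
        "PROMETHEUS_MCP_SERVER_PROMETHEUS_URL": "http://127.0.0.1:9090",
        "PROMETHEUS_MCP_SERVER_MCP_TRANSPORT": "stdio"
      }
    }
  }
}
```

Setting both the `--prometheus.url` flag and the corresponding environment variable is redundant; either should suffice, but the example shows both forms since the instructions list both.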
Note: this is a Go binary, with a Helm chart available for Kubernetes deployments. It supports Thanos as a backend and is also available via Docker. It provides rich documentation tools and TOON encoding for token efficiency. Replace the binary path in the config with its actual location on your system.
Add this MCP server to my current client's config in the correct format for you. If you need secrets or credentials I haven't provided, ASK me — do not invent values or leave raw placeholders. After adding it, tell me how to verify the server is connected.
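As a quick sanity check before restarting the client, the new config entry can be validated as JSON from the shell. This is a minimal sketch: the `/tmp/mcp-sample.json` path and the sample entry are illustrative — point `json.tool` at your client's real config file instead:

```shell
# Write a minimal sample entry to a temp file (path is illustrative)
cat > /tmp/mcp-sample.json <<'EOF'
{
  "mcpServers": {
    "prometheus": {
      "command": "prometheus-mcp-server",
      "args": ["--prometheus.url", "http://127.0.0.1:9090"]
    }
  }
}
EOF
# json.tool exits non-zero on malformed JSON, so a typo is caught here
python3 -m json.tool /tmp/mcp-sample.json > /dev/null && echo "config JSON is valid"
```

If the file parses, restart the client and confirm the server's tools (e.g. PromQL query execution, metric listing) appear in its tool list.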