HAL (HTTP API Layer) is a Model Context Protocol (MCP) server that provides HTTP API capabilities to Large Language Models, enabling them to make HTTP requests and interact with web APIs through a secure interface.
Please install the `hal` MCP server into my current AI client (that's you).
Required prerequisites (do these first if not already done):
- **Node.js 18+** — Node.js runtime (https://nodejs.org)
Canonical MCP server config (stdio transport):
- command: `npx`
- args: ["hal-mcp"]
- optional environment variables:
  - `HAL_SWAGGER_FILE`: Path or URL to an OpenAPI/Swagger specification file (example: `/path/to/openapi.json`)
  - `HAL_API_BASE_URL`: Base URL for API requests, overriding the servers listed in the spec (example: `<your-api-base-url>`)
  - `HAL_WHITELIST_URLS`: Comma-separated URL patterns to allow (example: `https://api.github.com/*`)
  - `HAL_BLACKLIST_URLS`: Comma-separated URL patterns to block (example: `http://localhost:*`)
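As a sketch, the stdio config above could look like the following in a client config file (the `mcpServers` top-level key is the convention used by clients such as Claude Desktop; the exact key and file location vary by client, and the URL patterns here are illustrative):

```json
{
  "mcpServers": {
    "hal": {
      "command": "npx",
      "args": ["hal-mcp"],
      "env": {
        "HAL_WHITELIST_URLS": "https://api.github.com/*",
        "HAL_BLACKLIST_URLS": "http://localhost:*"
      }
    }
  }
}
```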
Note: HAL also supports `HAL_SECRET_*` environment variables for secure secret substitution via `{secrets.key}` templates, and `HAL_ALLOW_*` variables for namespaced URL restrictions.
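A minimal sketch of the secret-substitution pattern described in the note above (the variable name `HAL_SECRET_GITHUB_TOKEN` and the header usage are illustrative assumptions, not values from this document — do not commit a real token into config):

```json
{
  "env": {
    "HAL_SECRET_GITHUB_TOKEN": "<github-token>"
  }
}
```

With a variable like this set, the model can reference the secret in a request (e.g. an `Authorization: Bearer {secrets.github_token}` header) using the `{secrets.key}` template form, so the raw value never appears in the conversation.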
Add this MCP server to my current client's config in the correct format for you. If you need secrets or credentials I haven't provided, ASK me — do not invent values or leave raw placeholders. After adding it, tell me how to verify the server is connected.