Claude, Cursor, Zed, Windsurf, and Continue can now generate palettes, check WCAG contrast, and convert colors as native tool calls. One line of config.
```json
{
  "mcpServers": {
    "colorui": {
      "command": "npx",
      "args": ["-y", "@colorui/mcp"]
    }
  }
}
```

Implements MCP 2024-11-05. The AI picks which tool to call based on the conversation - generate, harmonise, contrast, or convert.
colorui_generate_palette returns 5 OKLCH-balanced colors plus per-color rationale and a shareable colorui.io URL.
colorui_check_contrast lets the model decide if a button passes AA before suggesting it - no more "looks fine" hallucinations.
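For reference, the contrast math the tool reports is defined by WCAG 2.1. A minimal sketch of that formula (the spec's relative-luminance and ratio definitions, not ColorUI's actual implementation):

```python
def _linearize(c8: int) -> float:
    # Linearize an 8-bit sRGB channel per WCAG 2.1
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    # Relative luminance of a #rrggbb color
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05); AA normal text needs >= 4.5
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#ffffff", "#000000"), 1))  # 21.0
```

Black on white is the maximum possible ratio, 21:1; a ratio below 4.5:1 fails AA for normal-size text.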
Add a single MCP server entry to Claude Desktop, Cursor, Zed, Windsurf, or Continue. npx pulls the latest version on demand.
Zero analytics, zero tokens. The server only contacts colorui.io and only when the AI invokes a tool.
Pure JS, ~250 LOC, Node 18+. No third-party SDK on the wire. Audit it in five minutes.
colorui_generate_palette - Free-text prompt → 5-color OKLCH-balanced palette + per-color rationale + share URL.
colorui_palette_from_harmony - Base color + harmony rule → palette. Use when a brand color exists.
colorui_check_contrast - WCAG 2.1 ratio + AA/AAA pass-fail flags + grade.
colorui_convert_color - Convert any CSS color into HEX / RGB / HSL / OKLCH / LAB.
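To illustrate what a harmony rule means, here is the classic color-wheel arithmetic behind two common rules - a sketch of standard color theory, not necessarily the server's exact algorithm:

```python
def rotate_hue(hue: float, degrees: float) -> float:
    # Rotate a hue around the 360-degree color wheel
    return (hue + degrees) % 360

def complementary(hue: float) -> list:
    # Base color plus its opposite on the wheel
    return [hue, rotate_hue(hue, 180)]

def triadic(hue: float) -> list:
    # Three hues evenly spaced 120 degrees apart
    return [hue, rotate_hue(hue, 120), rotate_hue(hue, 240)]

print(triadic(10))  # [10, 130, 250]
```

colorui_palette_from_harmony applies a rule like this to your base color, then balances lightness and chroma across the result.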
Model Context Protocol is an open spec from Anthropic that lets AI clients (Claude Desktop, Cursor, Zed, Windsurf, Continue) call external tools over a JSON-RPC stdio bridge. ColorUI exposes four tools: generate palette, harmony palette, check contrast, convert color.
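On the wire, a tool invocation is a standard MCP `tools/call` JSON-RPC request sent over stdio. Something like the following - the method name comes from the MCP spec, but the argument names here are illustrative, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "colorui_check_contrast",
    "arguments": { "foreground": "#ffffff", "background": "#1a1a2e" }
  }
}
```

The client builds this request itself when its model decides a tool is needed; you never write JSON-RPC by hand.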
Add { "command": "npx", "args": ["-y", "@colorui/mcp"] } to your client's MCP server list. The full snippet for each client is on this page. Then restart the client.
ChatGPT does not yet speak MCP natively. Use our OpenAPI 3.1 spec at /api/v1/openapi.json to build a Custom GPT that calls the same endpoints.
No. The MCP server only receives the arguments your AI client decides to send (a prompt, a hex value). It cannot read files, your editor state, or your chat history. What the client itself sends to its model provider is governed by your client's privacy policy.
Yes - the underlying API is rate-limited to 60 requests per minute per IP. For high-volume teams, point COLORUI_API at a self-hosted deployment.
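In clients that support per-server environment variables in their MCP config (Claude Desktop does), that might look like this - the URL is a placeholder for your own deployment:

```json
{
  "mcpServers": {
    "colorui": {
      "command": "npx",
      "args": ["-y", "@colorui/mcp"],
      "env": { "COLORUI_API": "https://colorui.internal.example.com" }
    }
  }
}
```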
The server lives in the open at mcp/ in the colorui.io repo. Pure JS - one file, ~250 lines. Audit it before adding to your AI client.