Real-time probes against every endpoint, from the same Vercel Edge runtime that serves you. No vanity uptime number - just what is up, right now, with real latency.
5 of 5 checks passing · avg 74ms · slowest 129ms
Probed at Tue, 28 Apr 2026 21:02:42 GMT from local
Probed endpoints:
- /api/v1
- /api/v1/convert?color=%237c5cff&format=oklch
- /api/v1/contrast?fg=%23000000&bg=%23ffffff
- /api/v1/palette?count=3
- /api/v1/openapi.json

No third-party monitoring service. Probes run on every page render.
Numbers are measured from the same edge that serves you - not a synthetic check from one US region.
Poll /api/v1/status for JSON. It returns 503 on a hard outage, so you can alert on HTTP status alone.
The page is regenerated server-side at most once every 30 seconds. The underlying /api/v1/status endpoint is never cached and probes every public endpoint live, in parallel, on each request.
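The parallel fan-out could be sketched roughly like this. This is not the actual implementation (which runs on the Vercel Edge runtime); the endpoint list, `probe_one` helper, and injectable `fetch` parameter are all illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Illustrative subset; the real status handler probes every public endpoint.
ENDPOINTS = [
    "https://colorui.io/api/v1",
    "https://colorui.io/api/v1/openapi.json",
]

def probe_one(url, fetch=urlopen, timeout=5.0):
    """Time one round trip; any exception or non-2xx counts as a failure."""
    start = time.monotonic()
    try:
        with fetch(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return {"url": url, "ok": ok, "ms": round((time.monotonic() - start) * 1000)}

def probe_all(urls, fetch=urlopen):
    # All probes run in parallel on each request; nothing is cached.
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return list(pool.map(lambda u: probe_one(u, fetch), urls))
```

Because `fetch` is injectable, the same logic can be exercised without touching the network.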
The probes run inside the same Vercel Edge runtime that serves the public API, against absolute URLs on colorui.io. The latency you see is the round trip from the same edge that serves a real client.
At least one - but not all - probes failed (timeout, non-2xx, or unexpected body). Other endpoints are still serving traffic; only the failing ones should be considered offline.
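Under these semantics the overall state reduces to a three-way classification. A minimal sketch; the state names are illustrative, and the assumption that a partial failure still returns 200 (only a hard outage returns 503, per the alerting contract) is mine:

```python
def overall_state(results):
    """Map per-endpoint probe results to a page state and HTTP status.

    all pass  -> ("ok", 200)
    some fail -> ("degraded", 200)  # assumed: other endpoints still serve traffic
    all fail  -> ("outage", 503)    # hard outage: alertable on HTTP status alone
    """
    failures = sum(1 for r in results if not r["ok"])
    if failures == 0:
        return ("ok", 200)
    if failures < len(results):
        return ("degraded", 200)
    return ("outage", 503)
```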
Subscribe to /releases.xml for shipped changes. For machine consumption, poll /api/v1/status (CORS-open, JSON, no auth) at 1/min - it returns 503 on a hard outage so you can alert on HTTP status alone.
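A minimal consumer of that contract might look like the following sketch. The alert plumbing and the injectable `fetch` parameter are illustrative; only the URL and the 503-means-hard-outage rule come from the docs:

```python
import json
from urllib.error import HTTPError
from urllib.request import urlopen

STATUS_URL = "https://colorui.io/api/v1/status"

def check_once(url=STATUS_URL, fetch=urlopen):
    """One poll: returns (alert, payload).

    The endpoint is CORS-open, JSON, and unauthenticated, so a plain GET
    suffices. A 503 is the documented hard-outage signal, so we alert on
    the HTTP status alone without parsing the body.
    """
    try:
        with fetch(url) as resp:
            return (False, json.loads(resp.read()))
    except HTTPError as err:
        return (err.code == 503, None)

# Run at 1/min, e.g. from cron:
#   * * * * * /usr/bin/python3 poll_status.py
```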