Real-time outage tracking for OpenAI, Anthropic, Gemini, and Grok. Auto-refreshes every 60 seconds.
- GPT-4, GPT-4o, GPT-3.5-Turbo, Embeddings · View status page →
- Claude 3 Opus, Sonnet, Haiku · View status page →
- Gemini Pro, Flash, Ultra / Vertex AI · View status page →
- xAI Grok-1, Grok-1.5, Grok-2 · View status page →
- DeepSeek-V3, DeepSeek-R1 reasoning models · View status page →
- LPU-accelerated Llama, Mixtral, Gemma inference · View status page →
- Mistral-Large, Codestral, Mistral-Small · View status page →
- Command-R, Command-R+, Embed models · View status page →
- Open-source Llama, Mixtral, FLUX inference · View status page →
- Fast open-source model inference and fine-tuning · View status page →
- Stable Diffusion, FLUX, Whisper, Llama predictions · View status page →
- Unified gateway for 200+ AI models · View status page →

AI Badgr acts as a transparent proxy for your existing API keys: a one-line code change, automatic failover when any provider goes down, and signed receipts for every request.
Get Started Free →

We poll each provider's official Statuspage JSON endpoint (OpenAI, Anthropic) and the Google Cloud incidents feed (Gemini) every 60 seconds to detect status-indicator changes.
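As a minimal sketch of that detection step, assuming the standard Atlassian Statuspage payload shape (`{"status": {"indicator": ...}}`); the endpoint URL in the comment is illustrative, not a confirmed part of AI Badgr's poller:

```python
import json

# Statuspage-style JSON, as served at e.g. https://status.openai.com/api/v2/status.json
# (endpoint path is an assumption based on Atlassian Statuspage's public API shape).
SAMPLE = '{"status": {"indicator": "major", "description": "Partial API outage"}}'

def parse_indicator(payload: str) -> str:
    """Extract the status indicator: one of none | minor | major | critical."""
    return json.loads(payload)["status"]["indicator"]

def indicator_changed(previous: str, current: str) -> bool:
    """A change between polls is what triggers further checks."""
    return previous != current

print(parse_indicator(SAMPLE))             # major
print(indicator_changed("none", "major"))  # True
```

In a real poller the previous indicator per provider would be persisted between 60-second polls so a change can be detected across runs.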
We track 429 and 5xx error rates across real API requests routed through AI Badgr. A spike above baseline immediately opens an incident window, often before the provider updates its status page.
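A sliding-window error-rate check along these lines could look like the following sketch; the window size, baseline, and spike margin are illustrative placeholders, not AI Badgr's actual thresholds:

```python
from collections import deque

ERROR_CODES = {429, 500, 502, 503, 504}

class ErrorRateMonitor:
    """Track the 429/5xx rate over a sliding window of recent requests and
    flag a spike when it exceeds the baseline by a margin.
    Window/baseline/margin values here are illustrative assumptions."""

    def __init__(self, window: int = 100, baseline: float = 0.02, margin: float = 0.10):
        self.statuses = deque(maxlen=window)  # True = error, False = success
        self.baseline = baseline
        self.margin = margin

    def record(self, status_code: int) -> None:
        self.statuses.append(status_code in ERROR_CODES)

    def error_rate(self) -> float:
        return sum(self.statuses) / len(self.statuses) if self.statuses else 0.0

    def spiking(self) -> bool:
        return self.error_rate() > self.baseline + self.margin

mon = ErrorRateMonitor()
for code in [200] * 80 + [503] * 20:  # 20% errors in the window
    mon.record(code)
print(mon.spiking())  # True
```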
An incident is only flagged when both the official status indicator and our own telemetry agree, reducing false positives and giving you reliable signal.
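The agreement rule is a simple AND of the two signals. A sketch, assuming Statuspage's `minor`/`major`/`critical` indicator values all count as degraded:

```python
def incident_flagged(statuspage_indicator: str, telemetry_spiking: bool) -> bool:
    """Flag an incident only when both signals agree: the official status
    indicator is degraded AND our own error-rate telemetry is spiking."""
    official_degraded = statuspage_indicator in {"minor", "major", "critical"}
    return official_degraded and telemetry_spiking

print(incident_flagged("major", True))   # True  -> incident opened
print(incident_flagged("major", False))  # False -> status page alone is not enough
print(incident_flagged("none", True))    # False -> telemetry alone is not enough
```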
When a provider degrades, AI Badgr sends an admin email alert with a 5-minute per-provider cooldown so you're informed quickly without inbox flooding.