What engineers usually see
- Server-sent events connection opens successfully
- Initial events arrive normally
- Stream stops sending events without closing
- Client keeps connection open indefinitely
Why this is hard to debug
SSE has no built-in heartbeat or failure signal. When a stream goes quiet, you can't tell whether the server hung, the network dropped, or the stream actually finished. Without per-request logs, debugging this after the fact is guesswork.
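Because the protocol never flags a dead stream, the only client-side signal is the gap between events. A minimal sketch of that heuristic (the `find_stalls` helper and its 15-second threshold are illustrative, not part of any SDK):

```python
def find_stalls(event_times, max_gap=15):
    """Return (index, gap) pairs where the gap between consecutive
    SSE events exceeds max_gap seconds -- a heuristic stall signal,
    since SSE itself never tells you the stream is dead."""
    stalls = []
    for i in range(1, len(event_times)):
        gap = event_times[i] - event_times[i - 1]
        if gap > max_gap:
            stalls.append((i, gap))
    return stalls

# Arrival times (seconds) for a stream that goes quiet mid-response:
times = [0, 1, 2, 32, 33]
print(find_stalls(times))  # [(3, 30)] -- a 30-second gap before event 3
```

The threshold has to be tuned per model and workload: slow models legitimately pause between tokens, so too low a `max_gap` produces false alarms.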
Minimal repro
curl -N https://aibadgr.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_OPENAI_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user", "content": "test"}],
"stream": true
}'

This request routes through AI Badgr and returns a stable request ID that links to an execution record.
Note: AI Badgr is OpenAI-compatible and works as a drop-in proxy. No SDK changes required — only the base_url changes.
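The same request in Python, sketched without any SDK to make the "only the base URL changes" point concrete. The `build_stream_request` helper is illustrative; the endpoint path, headers, and body fields follow the standard OpenAI chat-completions shape shown in the curl example:

```python
import json

BASE_URL = "https://aibadgr.com/v1"  # was https://api.openai.com/v1

def build_stream_request(api_key, prompt):
    """Assemble the HTTP pieces for a streaming chat completion.
    Any HTTP client (httpx, requests, curl) can send this as-is."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        }),
    }

req = build_stream_request("YOUR_OPENAI_KEY", "test")
print(req["url"])  # https://aibadgr.com/v1/chat/completions
```

With the official OpenAI SDK the equivalent change is passing `base_url="https://aibadgr.com/v1"` when constructing the client; everything else stays the same.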
What a per-request execution record makes visible
- SSE connection timeline
- Event count and timing
- Stall detection (time gap between events)
- Whether stream properly closed
- Complete request lifecycle
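The "properly closed" check above can be approximated client-side too. A sketch, assuming the OpenAI streaming convention where a healthy stream ends with a `data: [DONE]` sentinel (`stream_summary` is illustrative):

```python
def stream_summary(raw_sse):
    """Summarize a raw SSE body: event count and whether the stream
    terminated with OpenAI's 'data: [DONE]' sentinel. A stream that
    stalls mid-response simply never sends the sentinel."""
    events = [line[len("data: "):] for line in raw_sse.splitlines()
              if line.startswith("data: ")]
    closed = bool(events) and events[-1] == "[DONE]"
    return {"events": len(events), "closed": closed}

healthy = 'data: {"delta":"Hi"}\n\ndata: [DONE]\n\n'
stalled = 'data: {"delta":"Hi"}\n\n'  # connection went quiet here
print(stream_summary(healthy))  # {'events': 2, 'closed': True}
print(stream_summary(stalled))  # {'events': 1, 'closed': False}
```

The limitation is the same one the section describes: this only works after you've captured the raw stream, which is exactly what a per-request execution record gives you for free.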
Run 1 request → get receipt
Change your base URL to https://aibadgr.com/v1 and run your request.
The response includes an X-Badgr-Request-Id header that links to a receipt showing latency, retries, tokens, cost, and failure stage for that specific execution.
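Capturing that header is one line in most clients. A sketch of pulling it out of a response's headers, done case-insensitively as HTTP requires; the `receipt_url` helper and the `/receipts/` URL pattern are assumptions for illustration, only the `X-Badgr-Request-Id` header name comes from the text above:

```python
def receipt_url(headers):
    """Find the per-request ID in response headers (case-insensitive)
    and build a link to its execution record.
    NOTE: the /receipts/ path is a hypothetical pattern."""
    for name, value in headers.items():
        if name.lower() == "x-badgr-request-id":
            return f"https://aibadgr.com/receipts/{value}"
    return None

print(receipt_url({"X-Badgr-Request-Id": "req_123"}))
```

Logging this ID alongside your own request ID is what lets you jump from a user report straight to the matching execution record.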
Not the engineer?
Share this page with your dev and ask them to run one request through AI Badgr. That's all that's needed to get the receipt.
Stalled-stream bugs are only tractable when you can see what happened to a single request from start to finish, instead of trying to piece it together from scattered logs.