Streaming timeout with OpenAI

What engineers usually see

  • OpenAI streaming request exceeds client timeout
  • Request is killed locally but may still be processing upstream
  • No way to check if response completed on provider side
  • Unclear if tokens were consumed

Why this is hard to debug

Client timeouts only tell you what happened on your side. By the time your client gives up, the request might have completed, failed, or still be running on the provider. Without a receipt, you can't reconcile the client-side timeout with provider-side execution.

Minimal repro

import signal

import openai

openai.api_key = "YOUR_OPENAI_KEY"
openai.base_url = "https://aibadgr.com/v1"

# SIGALRM's default action kills the process outright, so install a
# handler that raises TimeoutError instead (Unix only)
def raise_timeout(signum, frame):
    raise TimeoutError

signal.signal(signal.SIGALRM, raise_timeout)
signal.alarm(10)  # timeout after 10 seconds

try:
    stream = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "test"}],
        stream=True,
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")
except TimeoutError:
    print("\nStream timed out")
finally:
    signal.alarm(0)  # cancel any pending alarm

This request routes through AI Badgr and returns a stable request ID that links to an execution record.

Note: AI Badgr is OpenAI-compatible and works as a drop-in proxy. No SDK changes required — only the base_url changes.
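The repro above uses signal.alarm, which is Unix-only and affects the whole process. The openai SDK's built-in timeout parameter is a portable per-client alternative; a minimal sketch, reusing the base_url and model from the repro above:

```python
def stream_with_timeout(prompt: str, timeout_s: float = 10.0) -> str:
    """Stream a completion using the SDK's own timeout instead of signal.alarm."""
    # Lazy import so the sketch stays importable without the openai package
    from openai import OpenAI, APITimeoutError

    client = OpenAI(
        api_key="YOUR_OPENAI_KEY",
        base_url="https://aibadgr.com/v1",
        timeout=timeout_s,  # applies to connect and read
    )
    out = []
    try:
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )
        for chunk in stream:
            out.append(chunk.choices[0].delta.content or "")
    except APITimeoutError:
        print("\nStream timed out (client side)")
    return "".join(out)
```

Calling stream_with_timeout("test") runs the same request as the repro; either way, the client-side timeout tells you nothing about what happened upstream.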

What a per-request execution record makes visible

  • Actual provider processing time
  • Whether request completed despite client timeout
  • Tokens consumed (even if client timed out)
  • Cost incurred regardless of client timeout
  • Provider-side completion status
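With those fields, reconciling a client timeout against the provider-side record becomes a simple classification. A minimal sketch, assuming a hypothetical receipt payload — the field names here are illustrative, not AI Badgr's actual schema:

```python
def reconcile(client_timed_out: bool, receipt: dict) -> str:
    """Classify a request by comparing the client outcome with its
    execution record. Field names ("status", "tokens_out") are
    illustrative placeholders, not a documented schema."""
    completed = receipt.get("status") == "completed"
    tokens_out = receipt.get("tokens_out", 0)

    if client_timed_out and completed:
        # Provider finished the response; you paid for tokens you never saw
        return "completed-upstream"
    if client_timed_out and tokens_out > 0:
        # Some tokens were generated before the request died
        return "partial"
    if client_timed_out:
        return "never-ran"
    return "ok"

print(reconcile(True, {"status": "completed", "tokens_out": 412}))
# -> "completed-upstream"
```

The "completed-upstream" case is the one that matters for billing: the client saw a timeout, but tokens were consumed and cost was incurred anyway.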

Run 1 request → get receipt

Change your base URL to https://aibadgr.com/v1 and run your request.

The response includes an X-Badgr-Request-Id header that links to a receipt showing latency, retries, tokens, cost, and failure stage for that specific execution.

Not the engineer?
Share this page with your dev and ask them to run one request through AI Badgr. That's all that's needed to get the receipt.

Debugging a timeout like this only works when you can see what happened to a single request from start to finish, instead of piecing it together from scattered logs.