Connection reset by peer (LLM)

What engineers usually see

  • Provider forcibly closes connection
  • Error: "Connection reset by peer"
  • Request may or may not have been processed
  • No response or error details from provider

Why this is hard to debug

Connection resets are abrupt and carry no error details: you can't tell whether the provider rejected the request, hit an internal error, or rate-limited you. Receipts capture the provider-side reason.
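Because a reset doesn't tell you whether the request was processed, blind retries can duplicate work; a bounded retry with exponential backoff is still the common client-side mitigation for requests that are safe to repeat. A minimal sketch in plain Python, where `send` is a hypothetical stand-in for your API call:

```python
import random
import time

def call_with_backoff(send, max_attempts=4, base_delay=1.0):
    """Retry `send` (a zero-argument callable, hypothetical here) when the
    connection is reset. Only ConnectionResetError is retried; other errors
    usually carry a provider response worth inspecting instead."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except ConnectionResetError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter: base, 2*base, 4*base, ...
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random() * base_delay)

# Example: a fake sender that is reset twice, then succeeds.
attempts = {"n": 0}
def flaky_send():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionResetError("Connection reset by peer")
    return "ok"

print(call_with_backoff(flaky_send, base_delay=0.01))  # prints "ok"
```

Capping attempts matters: if the provider is rejecting the request outright, retrying forever just burns quota.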

Minimal repro

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_OPENAI_KEY",
    base_url="https://aibadgr.com/v1"
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "test"}]
    )
    print(response)
except Exception as e:
    # With the OpenAI SDK, a reset typically surfaces as APIConnectionError
    print(f"Connection error: {e}")

This request routes through AI Badgr and returns a stable request ID that links to an execution record.

Note: AI Badgr is OpenAI-compatible and works as a drop-in proxy. No SDK changes required — only the base_url changes.

What a per-request execution record makes visible

  • Provider error code (if any)
  • Request stage where reset occurred
  • Whether request reached provider
  • Retry behavior
  • Cost impact
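As a hypothetical illustration of what such a record can carry (field names here are invented for this sketch, not AI Badgr's actual schema):

```python
# Hypothetical receipt payload; field names are illustrative only,
# not AI Badgr's documented schema.
receipt = {
    "request_id": "req_abc123",        # matches the X-Badgr-Request-Id header
    "provider_error_code": None,       # provider error code, if any
    "failure_stage": "provider_read",  # request stage where the reset occurred
    "reached_provider": True,          # whether the request reached the provider
    "retries": 1,                      # retry behavior
    "cost_usd": 0.00012,               # cost impact
}

# Triage: a reset after the request reached the provider means it may
# have been processed, so blind client-side retries risk duplicate work.
may_have_processed = (
    receipt["reached_provider"] and receipt["failure_stage"] == "provider_read"
)
print(may_have_processed)  # prints "True"
```

The useful split is `reached_provider`: a reset before the provider saw the request is safe to retry; a reset after is not necessarily so.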

Run 1 request → get receipt

Change your base URL to https://aibadgr.com/v1 and run your request.

The response includes an X-Badgr-Request-Id header that links to a receipt showing latency, retries, tokens, cost, and failure stage for that specific execution.
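To read the header programmatically, look it up case-insensitively, since HTTP field names are not case-sensitive. Recent versions of the OpenAI Python SDK expose response headers via `client.chat.completions.with_raw_response.create(...)`; the sketch below uses a simulated header dict instead of a live call:

```python
def extract_request_id(headers):
    """Case-insensitive lookup of the X-Badgr-Request-Id header.

    `headers` is any mapping of header names to values, e.g. the
    `.headers` of a raw SDK response or a plain dict in tests.
    """
    for name, value in headers.items():
        if name.lower() == "x-badgr-request-id":
            return value
    return None

# Simulated response headers, standing in for a live request.
headers = {"content-type": "application/json", "x-badgr-request-id": "req_abc123"}
print(extract_request_id(headers))  # prints "req_abc123"
```

Logging this ID alongside your own request logs is what lets you jump from an application-side error straight to the matching receipt.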

Not the engineer?
Share this page with your dev and ask them to run one request through AI Badgr. That's all that's needed to get the receipt.

Debugging a reset only becomes tractable when you can see what happened to a single request from start to finish, instead of trying to piece it together from scattered logs.