EOF error in LLM request

What engineers usually see

  • Connection closes with unexpected EOF (end of file)
  • Stream ends prematurely without proper closure
  • No error code or message from provider
  • Partial data may have been received

Why this is hard to debug

EOF errors indicate abrupt connection closure. They don't explain why the connection closed or what data was lost. A per-request receipt tracks the bytes transferred and records the closure reason, so you don't have to infer either one.
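From the client side, all an abrupt EOF gives you is a rejected read and a byte count. A minimal sketch of what that looks like when reading a streamed response body (the stream below is simulated locally to stand in for a provider that sends a partial chunk and then drops the connection; the byte counts are illustrative):

```javascript
// Read a response body stream, counting bytes until completion or failure.
async function readWithByteCount(stream) {
  const reader = stream.getReader();
  let bytesReceived = 0;
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) return { bytesReceived, complete: true };
      bytesReceived += value.length;
    }
  } catch (err) {
    // An abrupt close rejects read(). Locally, the byte count is all you know;
    // the closure reason and expected size live on the other side of the wire.
    return { bytesReceived, complete: false, error: String(err) };
  }
}

// Simulated provider: one partial chunk, then the connection drops.
let pulls = 0;
const truncated = new ReadableStream({
  pull(controller) {
    if (pulls++ === 0) {
      controller.enqueue(new Uint8Array(512)); // partial data arrives...
    } else {
      controller.error(new Error('unexpected EOF')); // ...then the drop
    }
  },
});

readWithByteCount(truncated).then((r) => console.log(r));
// → { bytesReceived: 512, complete: false, error: 'Error: unexpected EOF' }
```

This is exactly the blind spot the receipt fills in: the client can count bytes received, but not report why the stream ended or how many bytes were expected.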

Minimal repro

const response = await fetch('https://aibadgr.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_OPENAI_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{role: 'user', content: 'test'}]
  })
});

// If the connection drops mid-body, the EOF surfaces here as a read/parse error.
const data = await response.json();
console.log(data);

This request routes through AI Badgr and returns a stable request ID that links to an execution record.

Note: AI Badgr is OpenAI-compatible and works as a drop-in proxy. No SDK changes required — only the base_url changes.
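If you use the official `openai` npm package rather than raw fetch, the swap is the same one-line configuration change. A sketch, assuming that SDK and an `OPENAI_API_KEY` environment variable:

```javascript
import OpenAI from 'openai';

// Configuration change only; every subsequent call stays the same.
const client = new OpenAI({
  baseURL: 'https://aibadgr.com/v1', // was: 'https://api.openai.com/v1'
  apiKey: process.env.OPENAI_API_KEY,
});
```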

What a per-request execution record makes visible

  • Bytes received before EOF
  • Expected vs actual response size
  • Provider connection status
  • Closure reason
  • Whether response was complete

Run 1 request → get receipt

Change your base URL to https://aibadgr.com/v1 and run your request.

The response includes an X-Badgr-Request-Id header that links to a receipt showing latency, retries, tokens, cost, and failure stage for that specific execution.
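Pulling the receipt ID out of a response takes one header lookup. A sketch using a locally constructed Response to stand in for a real proxied call (the `req_123` value is an illustrative placeholder, not a real ID format):

```javascript
// Fetch Response headers are case-insensitive, so either spelling works.
function receiptIdFrom(response) {
  return response.headers.get('X-Badgr-Request-Id');
}

// Stand-in for a real proxied response, with a hypothetical receipt ID.
const simulated = new Response('{}', {
  headers: { 'X-Badgr-Request-Id': 'req_123' },
});

console.log(receiptIdFrom(simulated)); // → req_123
```

Logging this ID alongside your own request logs is usually enough to jump from a failed call straight to its receipt.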

Not the engineer?
Share this page with your dev and ask them to run one request through AI Badgr. That's all that's needed to get the receipt.

Debugging an EOF error only makes sense when you can see what happened to a single request from start to finish, instead of trying to piece it together from scattered logs.