alt.hn

4/13/2026 at 4:10:15 PM

Claude Code may be burning your limits with invisible tokens

https://efficienist.com/claude-code-may-be-burning-your-limits-with-invisible-tokens-you-cant-see-or-audit/

by jenic_

4/14/2026 at 12:55:38 PM

This is methodologically flawed, as bytes only weakly correlate with tokens.

Unless you're sending identical requests, you can't expect the same token counts for any given number of bytes, nor that a slightly longer (but different) message will use more tokens than a slightly shorter one, or vice versa.
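The point is easy to demonstrate with a toy greedy tokenizer (the vocabulary here is hypothetical and purely illustrative; real models use BPE vocabularies with ~100k entries): a longer string can produce far fewer tokens than a shorter one, so byte counts are a poor proxy.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization against a toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first, shrinking toward one char.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # no vocab hit: fall back to a single char
            i += 1
    return tokens

vocab = {"hello", " world", "hel", "lo", "wor", "ld", " "}

long_msg = "hello world"   # 11 bytes
short_msg = "qzqzqzq"      # 7 bytes, nothing matches the vocab

print(len(long_msg), len(tokenize(long_msg, vocab)))    # 11 bytes -> 2 tokens
print(len(short_msg), len(tokenize(short_msg, vocab)))  # 7 bytes -> 7 tokens
```

The 11-byte message tokenizes to 2 tokens while the 7-byte one needs 7, which is exactly why byte-level traffic measurements can't be trusted as a token audit.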

by marginalia_nu

4/13/2026 at 10:58:33 PM

I had the same suspicion, so I built this to examine where my tokens went.

Claude Code caches a big chunk of context (all messages of the current session). While a lot of data goes over the network, in ccaudit itself 98% of the context is served from cache.

Granted, to view the actual system prompt used by Claude, one can only inspect the network requests. Otherwise the best guess is the token use in the first exchange with Claude.

https://github.com/kmcheung12/ccaudit
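The cached-vs-fresh split can be read straight off the Anthropic Messages API `usage` object, which reports `input_tokens`, `cache_creation_input_tokens`, and `cache_read_input_tokens` separately. A minimal sketch (the example numbers are made up to mirror the ~98% figure above; they are not from a real session):

```python
def cached_fraction(usage):
    """Fraction of input context served from the prompt cache.

    `usage` is the "usage" dict from an Anthropic Messages API response.
    """
    read = usage.get("cache_read_input_tokens", 0)
    created = usage.get("cache_creation_input_tokens", 0)
    fresh = usage.get("input_tokens", 0)
    total_in = read + created + fresh
    return read / total_in if total_in else 0.0

# Illustrative numbers only: a long session where almost all context is cached.
usage = {
    "input_tokens": 300,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 14500,
}
print(f"{cached_fraction(usage):.0%}")  # -> 98%
```

Note that cache reads are still billed (at a reduced rate), so "from cache" does not mean the tokens are free, only cheaper.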

by a_c

4/13/2026 at 11:50:57 PM

I got kinda obsessed with observability a month ago and wired together a full stack for personal use.

https://github.com/simple10/agent-super-spy - llm proxy + http MiTM proxy + LLMetry + other goodies

https://github.com/simple10/agents-observe - fancier claude hooks dashboard

It started from a need to keep an eye on OpenClaw, but it's incredibly useful for really understanding any agent harness at the raw LLM request level.

by simple10

4/13/2026 at 10:59:11 PM

What is the system prompt for $1000 Alex (RIP)?

by F7F7F7

4/14/2026 at 6:44:22 AM

I don’t buy it. The same problem was reported in Claude.ai at the same time, which points to the same underlying root cause.

by simianwords