alt.hn

4/4/2026 at 1:52:35 PM

Caveman Mode Save Token?

https://twitter.com/om_patel5/status/2040279104885314001

by brightball

4/4/2026 at 6:27:40 PM

Only two or three weeks from incepting the idea of a token-efficient LLM English dialect to seeing it in practice. I just never imagined it would take... this particular form.

https://news.ycombinator.com/item?id=47434846

by rdevilla

4/4/2026 at 6:59:09 PM

I've had the thought for a while now that English is an efficiency barrier. Surely there are more information-dense representations of semantic concepts.

Some languages, for example, have single characters that represent entire ideas or phrases.
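
(A quick way to check that intuition is to count tokens directly. A minimal sketch in Python using the tiktoken tokenizer; the encoding name and example phrases are illustrative, not from the thread.)

    # Compare token counts for a verbose instruction and a terse "caveman" rewrite.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    verbose = "Please make sure that you always validate the user's input before saving it."
    terse = "always validate user input before save"

    print(len(enc.encode(verbose)))  # more tokens
    print(len(enc.encode(terse)))    # fewer tokens, same instruction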

https://news.ycombinator.com/item?id=47442478

by gavinray

4/4/2026 at 2:53:46 PM

I used a system prompt similar to this, where I just dumped the entirety of https://grugbrain.dev/ into it and prefaced it with an instruction that the assistant should emulate grug.

Didn't find it particularly useful, but it is funny!
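
(Roughly what that setup might look like, as a sketch: fetch the page text and preface it with the emulate-grug instruction in the system prompt. The requests library, OpenAI client, and model name here are assumptions for illustration, not necessarily what the commenter used.)

    # Build a grug-emulating system prompt from the raw contents of grugbrain.dev.
    import requests
    from openai import OpenAI

    grug_page = requests.get("https://grugbrain.dev/").text  # raw page text/HTML

    system_prompt = (
        "You are grug. Emulate the voice and philosophy of the text below "
        "when answering.\n\n" + grug_page
    )

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "how grug handle complexity demon in big refactor?"},
        ],
    )
    print(reply.choices[0].message.content)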

by schmorptron

4/4/2026 at 1:52:35 PM

Can this actually work?

by brightball

4/4/2026 at 3:33:28 PM

It does. I've been tinkering with Copilot Studio Agents, and you can hit an 8k character limit quickly. By taking your instructions and asking Copilot to compress the information, while ensuring it is still human readable, you can cut them back to about 5k characters. The information is denser but functionally the same, and the agent is just as consistent as before.
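
(A sketch of the compression step itself, written against a generic chat API rather than the Copilot Studio UI; the client, model name, and character budget are illustrative assumptions.)

    # Ask a model to shrink agent instructions under a character budget while
    # keeping them human readable and preserving every rule.
    from openai import OpenAI

    client = OpenAI()

    def compress_instructions(instructions: str, budget: int = 5000) -> str:
        prompt = (
            f"Rewrite the following agent instructions in at most {budget} characters. "
            "Keep every rule and constraint, keep it human readable, and do not add "
            "or remove requirements.\n\n" + instructions
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Usage: compressed = compress_instructions(open("agent_instructions.txt").read())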

by illwrks

4/4/2026 at 3:04:29 PM

Logically, anything that reduces input/output works to an extent.

by pixel_popping

4/4/2026 at 1:53:32 PM

[flagged]

by JaceDev