alt.hn

4/20/2026 at 12:21:24 AM

Show HN: A lightweight way to make agents talk without paying for API usage

https://juanpabloaj.com/2026/04/16/a-lightweight-way-to-make-agents-talk-without-paying-for-api-usage/

by juanpabloaj

4/20/2026 at 12:25:23 PM

Using a tmux session as short-term memory is a great idea. When you force a model to look at another model's CLI history, it tends to stay focused on the technical constraints already discussed.

Also, I like this idea because now the agents are not talking in private, they're talking in a window you can peek into. If you see the two AI agents starting to get confused or repeating mistakes, you can just hop into that same window, type a quick correction, and jump back out.
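A minimal sketch of that peek-and-correct workflow with plain tmux (the session name and correction text are made up, and `cat` stands in for an interactive agent CLI):

```shell
# Start a stand-in "agent" in a detached session (cat just echoes input).
tmux new-session -d -s agents 'cat'

# Peek: dump the last 20 lines of the agent's pane history.
tmux capture-pane -p -t agents | tail -n 20

# Hop in: type a quick correction into the pane, then leave it running.
tmux send-keys -t agents "Stay within the constraints we already discussed." Enter

# Clean up the demo session.
tmux kill-session -t agents
```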

by harish124

4/20/2026 at 3:13:34 AM

Claude Code (subscription) has Agent Teams built in. Teams of agents communicate via local files that they use as inboxes and task lists. It has tmux and iTerm2 integration. https://code.claude.com/docs/en/agent-teams

They can rack up some extra tokens if you leave agents idle, because they sit in a loop checking for new messages.

This fellow reverse-engineered exactly how it works and then abstracted the pattern into an MCP server that any Harness/agent can use. https://github.com/cs50victor/claude-code-teams-mcp
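The inbox pattern is simple enough to sketch in a few lines of shell. This is only an illustration of the idea, not Claude Code's actual file format; the path and message format are hypothetical:

```shell
# Each agent owns a file that its peers append messages to.
inbox=/tmp/agent-reviewer.inbox
: > "$inbox"

# The "coder" agent drops a message into the reviewer's inbox.
echo "coder: please review the latest commit" >> "$inbox"

# The reviewer's idle loop body: poll, process, drain. Repeating this
# check is what burns extra tokens when nothing new is arriving.
if [ -s "$inbox" ]; then
    cat "$inbox"      # process pending messages
    : > "$inbox"      # drain the inbox
fi
```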

by dragonfax

4/20/2026 at 2:33:14 AM

Have you seen https://www.roborev.io/ from Wes McKinney?

by scalefirst

4/20/2026 at 4:42:03 AM

Any personal experience with it? Recommended?

by perelin

4/20/2026 at 7:43:48 AM

It's top notch. Big recommend!

by nicolailolansen

4/20/2026 at 10:32:51 AM

My regular workflow is to run coding agents in tmux panes, and I often have Claude Code consult/collaborate with Codex using my tmux-cli [1] tool, a wrapper around tmux that provides good defaults (delays, etc.) for robustly sending messages, waiting for completion, and so on.

[1] https://pchalasani.github.io/claude-code-tools/tools/tmux-cl...
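For the shape of the idea without installing anything: here is a hand-rolled approximation of what such a wrapper adds over raw `send-keys` (this is not tmux-cli's actual interface, just a sketch) — a settle delay before Enter so TUI apps don't swallow the text, and a crude wait-for-completion loop.

```shell
# Type a message into a pane, pause so the TUI can render it, then submit.
send_to_pane() {  # usage: send_to_pane <target-pane> <message>
    target=$1; shift
    tmux send-keys -t "$target" "$*"
    sleep 0.3
    tmux send-keys -t "$target" Enter
}

# Crude completion check: consider the agent done once its pane output
# stops changing between polls.
wait_for_idle() {  # usage: wait_for_idle <target-pane>
    target=$1
    prev=
    while cur=$(tmux capture-pane -p -t "$target"); [ "$cur" != "$prev" ]; do
        prev=$cur
        sleep 1
    done
}
```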

by d4rkp4ttern

4/20/2026 at 9:25:05 AM

> use the subscription plans you already have, avoid paying for API usage, and keep the setup simple enough that you can try it in a few minutes.

That interested me, but the article doesn't explain how to do this at all. I was hoping it would show how to use my work's ChatGPT Pro subscription via the CLI without having to pay per token through the API.

by pidgeon_lover

4/20/2026 at 2:15:10 AM

I put together a skill to do this with OpenCode and the GitHub Copilot provider. Works pretty well.

by swingboy

4/20/2026 at 2:26:09 AM

I’ve been keeping them open in tmux and using either send-keys or the paste buffer for communication. Using print mode and always resuming the last conversation means you can’t have parallel sessions going.

by pitched

4/20/2026 at 1:42:49 PM

> whether the final result is actually better, or whether it is just a more polished hallucination

Agents sampled from the same base model agreeing with each other isn't validation, it's correlation. Cheaper orchestration mostly amplifies whatever bias the model already has. Neat hack though.

by 7777777phil

4/20/2026 at 6:26:50 AM

In Cursor or OpenCode it's very easy: just change the LLM within the same conversation.

by DeathArrow
