alt.hn

4/20/2026 at 4:35:05 PM

Show HN: Ctx – a /resume that works across Claude Code and Codex

https://github.com/dchu917/ctx

by dchu17

4/21/2026 at 11:52:53 PM

Claude Code used to have a warning that toggling thinking within a conversation would decrease performance:

> Changing thinking mode mid-conversation will increase latency and may reduce quality. For best results, set this at the start of a session.

Neither OpenAI nor Anthropic exposes raw thinking tokens anymore.

Claude Code redacts thinking by default (you can opt in to get Haiku-produced summaries at best), and OpenAI returns encrypted reasoning items.

Either way, first-party CLIs hold opaque thinking blobs that can't be manipulated or ported between providers without dropping them. So cross-agent resume carries an inherent performance penalty: you keep the (visible) transcript but lose the reasoning.
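
For a concrete sense of that on the OpenAI side, here is a minimal sketch of the Responses API's stateless pattern (the model id is a placeholder and the flow is from memory, so treat it as approximate):

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "user", "content": "Refactor the session loader."}]

    # store=False keeps the exchange stateless; "include" asks the API to
    # return reasoning as opaque, encrypted items instead of dropping it.
    resp = client.responses.create(
        model="o3",  # placeholder; any reasoning-capable model
        input=history,
        include=["reasoning.encrypted_content"],
        store=False,
    )

    # To preserve reasoning on the next turn, the encrypted items are echoed
    # back verbatim. Only OpenAI can decrypt them, which is exactly why they
    # can't be ported into another provider's harness.
    history += resp.output
    history.append({"role": "user", "content": "Now add tests."})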

by realdimas

4/21/2026 at 8:44:30 PM

I don't think I've ever /resumed a Claude Code session even once. What do people use that for? The way I use it is to make a change, maybe document the change, and then I'm done. New session.

by LeoPanthera

4/21/2026 at 10:05:02 PM

I have like 15 concurrent sessions I leave up for weeks, 50% Codex and 50% Claude Code, even though I know they work better with fresh context. Then again, I also always have at least 200 browser tabs up. I probably just have a mental illness.

by meowface

4/22/2026 at 1:47:54 AM

lol after reading your first sentence I literally thought to myself "this sounds like the type of person who never closes their browser tabs"

by theowaway213456

4/21/2026 at 9:52:37 PM

I'd use it if I hit the 5-hour quota mid-change and then came back later in the day in a new terminal (depending on the input/output ratio of my now un-cached context, of course).
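
To put rough numbers on that, assuming the subscription quota tracks the same accounting as Anthropic's published API multipliers (cache reads around 0.1x the base input rate, cache writes around 1.25x; the price and token count below are illustrative):

    # Back-of-envelope for resuming a large context with a cold cache.
    base_input = 3.00 / 1_000_000   # $ per input token (example rate)
    context_tokens = 120_000        # transcript re-sent on resume

    warm = context_tokens * base_input * 0.10   # prefix still cached
    cold = context_tokens * base_input * 1.25   # cache expired, re-written

    print(f"warm: ${warm:.2f}  cold: ${cold:.2f}")  # warm: $0.04  cold: $0.45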

by daemonologist

4/21/2026 at 11:51:39 PM

I spin up a lot of agents and don't always get back to them the same day, so it helps a lot when my laptop automatically restarts to install updates.

by dgunay

4/21/2026 at 6:36:52 PM

Tooling like this is why I really want to build my own harness to replace Claude Code. I have been building a few different custom tools that would be nice as part of one single harness, so I don't have to tweak configurations across all my different environments, projects, and even OSes; it gets tiresome. Claude even has separate "memories" on different devices, making the experience even more inconsistent.

by giancarlostoro

4/21/2026 at 7:21:03 PM

I've actually had the same itch and decided to give it a go ... So far I'm one year into the project, have learned a ton, and highly recommend it to anyone who'd listen: try writing your own harness. It can be fun, it can be intoxicating, and it can also be boring and mundane. However, you'll learn so much along the way, even if you thought you were already well versed.

by StanAngeloff

4/21/2026 at 6:39:25 PM

Pi is very extensible, and could possibly serve as a good foundation to build on.

by arcanemachiner

4/21/2026 at 7:37:58 PM

The problem with this is that you won't get to enjoy the heavy subsidies of Claude subscriptions.

But yeah, after the price hikes, it's inevitable that people will run open-source harnesses.

by nextaccountic

4/21/2026 at 7:59:53 PM

Interesting. What kind of context usage does it have when switching between the two providers? Like, is it smart about the number of tokens used when you go from Claude -> Codex, or vice versa, for a conversation?

How does ctx "normalize" things across providers in the context window (e.g. tool/MCP calls, sub-agent results)?

by ghm2180

4/21/2026 at 4:57:12 PM

Since prompt caching won't work across different models, how is this approach better than dropping a PR for the other harnesses to review?

by buremba

4/21/2026 at 5:11:19 PM

Sorry, I may be misunderstanding the question.

The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).
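
Roughly, the linkage looks like this (a sketch only; the table and column names are illustrative, not the actual schema):

    import sqlite3

    # Illustrative schema; ctx's real tables and columns may differ.
    db = sqlite3.connect("ctx.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS workstreams (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS sessions (
        id            INTEGER PRIMARY KEY,
        workstream_id INTEGER REFERENCES workstreams(id),
        provider      TEXT CHECK (provider IN ('claude-code', 'codex')),
        raw_log_path  TEXT NOT NULL  -- the provider's own .jsonl session log
    );
    """)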

What do you mean by prompt caching?

by dchu17

4/21/2026 at 5:20:33 PM

Prompt caching is done on the provider side. If you send two requests to a provider in short succession and the beginning of your second request is the same as your first (for example, because your second request is the continuation of an ongoing chat), the repeated tokens are much less expensive the second time.
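
For anyone unfamiliar, opting into this on the Anthropic side looks roughly like the following (a minimal sketch; the model id and file name are placeholders):

    import anthropic

    client = anthropic.Anthropic()
    transcript = open("resumed_transcript.md").read()  # placeholder prefix

    # cache_control marks the long, stable prefix as cacheable; a follow-up
    # request that shares the exact same prefix bills it at the cheaper
    # cache-read rate instead of the full input rate.
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=[{
            "type": "text",
            "text": transcript,
            "cache_control": {"type": "ephemeral"},
        }],
        messages=[{"role": "user", "content": "Pick up where we left off."}],
    )
    print(resp.content[0].text)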

Obviously, your tool does not provide this. But I think GP is undervaluing the UX advantages of having your conversation history.

by Wowfunhappy

4/21/2026 at 6:27:53 PM

Yes, that's it. I actually just ask Codex/Claude Code to look up the session id when I want to resume sessions cross-harness; it's just JSONL files locally, so it can access the full conversation history when needed.
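
In the same spirit, a sketch of reading those logs directly (paths and field names are from inspecting my own machine, so treat them as assumptions that may change between CLI versions):

    import json
    from pathlib import Path

    # Assumed locations: Claude Code under ~/.claude/projects/<proj>/<id>.jsonl,
    # Codex under ~/.codex/sessions/. Neither path is a documented interface.
    logs = sorted(
        Path.home().glob(".claude/projects/*/*.jsonl"),
        key=lambda p: p.stat().st_mtime,
    )

    # Each line is one event; user/assistant events carry a "message" payload.
    for line in logs[-1].open():
        event = json.loads(line)
        if event.get("type") in ("user", "assistant"):
            print(event.get("message", {}).get("content"))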

by buremba

4/21/2026 at 9:16:58 PM

Great callout about the prompt caching; this switch is going to burn subscription limits on Claude real, real fast.

Unless the goal is to move from one provider to another and preserve all context 1:1. And I can’t seem to find a decent reason why you would want everything and not the TLDR + resulting work.

by ycombinatornews

4/21/2026 at 4:59:07 PM

Have you considered making it possible to share a stream/context, as an export/import function?

by t0mas88

4/21/2026 at 6:12:22 PM

I wrote a tool for myself to copy (and archive) the Claude/Codex conversations: github.com/rkuska/carn

by rkuska

4/21/2026 at 6:50:20 PM

Thanks

by t0mas88

4/21/2026 at 5:02:54 PM

That's interesting, I hadn't considered it up to this point, but it sounds potentially useful.

by dchu17

4/22/2026 at 6:22:51 AM

Can we also get a /last? 9/10 times I want to resume my last session. I know it's only one extra tap, but still.

by ramon156

4/21/2026 at 7:47:39 PM

really interesting idea! will check it out. and thanks for making it local-first!

by phoenixranger
