12/30/2025 at 12:14:20 AM
I struggle with these abstractions over context windows, especially when Anthropic is actively focused on improving things like compaction, and knowing the eventual goal is for the models to have real memory layers baked in. Until then we have to optimize for how agents work best, and ephemeral context is part of that (they weren't RL'd/trained with memory abstractions, so we shouldn't use them at inference either). Constant rediscovery that is task-specific has worked well for me and doesn't suffer from context decay, though it does eat more tokens.

Otherwise, the ability to search back through history is valuable: a simple git log/diff or (rip)grep/jq combo over the session directory. Simple example of mine: https://github.com/backnotprop/rg_history
by ramoz
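(A minimal sketch of that kind of history search, for illustration only; it is not the rg_history tool itself, and the ~/.claude/projects/ path and per-message "content" field are assumptions that may not match the real session layout.)

    #!/usr/bin/env python3
    """Search past agent sessions for a keyword (illustrative sketch only)."""
    import json
    import sys
    from pathlib import Path

    # Assumed location of session logs: one JSONL file per session,
    # one JSON object per message with a "content" field.
    SESSION_DIR = Path.home() / ".claude" / "projects"

    def search_history(term: str) -> None:
        """Print every stored message whose content mentions `term`."""
        for jsonl_file in sorted(SESSION_DIR.rglob("*.jsonl")):
            for line in jsonl_file.read_text(errors="ignore").splitlines():
                try:
                    msg = json.loads(line)
                except json.JSONDecodeError:
                    continue
                if not isinstance(msg, dict):
                    continue
                content = str(msg.get("content", ""))
                if term.lower() in content.lower():
                    print(f"{jsonl_file.name}: {content[:120]}")

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: search_history.py <search-term>")
        search_history(sys.argv[1])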
12/30/2025 at 12:19:44 AM
There is certainly a level where, at any time, you could be building some abstraction that is no longer required in a month, or three.

I feel that way too. I have a lot of these things.
But the reality is, it doesn't happen that often in my actual experience. Everyone as a whole is very slow to understand what these things mean, so for now you get quite a bit of mileage out of an improved, customized system of your own.
by AndyNemmity
12/30/2025 at 12:38:16 AM
My somewhat naive heuristic would be that memory abstractions are a complete misstep in terms of optimization. There is no "super claude mem" or "continual claude" until there actually is.

https://backnotprop.com/blog/50-first-dates-with-mr-meeseeks...
by ramoz
12/30/2025 at 12:41:01 AM
I tend to agree with you; however, compacting has gotten much worse.

So... it's tough. I think memory abstractions are generally a mistake and generally not needed, but compacting has gotten so bad recently that they are also required until Claude Code releases a version with improved compacting.
But I don't do memory abstraction like this at all. I use skills to manage plans, and the plans are the memory abstraction.
But that is more than memory. That is also about having a detailed set of things that must occur.
by AndyNemmity
12/30/2025 at 12:44:19 AM
I'm interested to see your setup.

I think planning is a critical part of the process. I just built https://github.com/backnotprop/plannotator for a simple UX enhancement.
Before planning mode I used to write plans to a folder with descriptive file names. A simple ls was a nice memory refresher for the agent.
by ramoz
12/30/2025 at 1:17:25 AM
I understand the use case for plannotator. I understand why you did it that way.

I am working alone, so I instead have plans update automatically. Same conception, but without a human in the mix.
But I am utilizing skills heavily here. I also have a Python script that manages how the LLM calls the plans, so it's all deterministic. It happens the same way every time.
That's my big push right now. Every single thing I do, I try to make as much of it as deterministic as possible.
by AndyNemmity
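(A hedged sketch of what a deterministic plan loader along these lines could look like; the plans/ directory, zero-padded file names, and highest-number-wins rule are assumptions for illustration, not the actual script described above. The point is only that the selection rule is fixed, so the agent never decides which plan to load.)

    #!/usr/bin/env python3
    """Illustrative sketch: pick the active plan the same way on every run."""
    from pathlib import Path

    # Assumed layout: plans/001-some-task.md, plans/002-next-task.md, ...
    PLANS_DIR = Path("plans")

    def active_plan_text() -> str:
        """Return the highest-numbered plan so every invocation selects the same file."""
        plans = sorted(PLANS_DIR.glob("[0-9]*.md"))
        if not plans:
            raise FileNotFoundError(f"no plan files found in {PLANS_DIR}/")
        return plans[-1].read_text()

    if __name__ == "__main__":
        # Print the plan so it can be injected into the agent's context verbatim.
        print(active_plan_text())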
12/30/2025 at 4:49:29 PM
Would you share an overview of how it works? Sounds interesting.
by edmundsauto
12/30/2025 at 5:15:37 PM
Perhaps I can release it as a standalone GitHub skill, and then do a blog post on it or something.

I'm also working on real projects, so a lot of my priority is focused on building new skills, not on managing the current ones I have as GitHub repos.
by AndyNemmity
1/1/2026 at 9:13:07 PM
That would probably be a lot of work for little gain. Would you be open to asking Claude to summarize your approach and just putting it into a paste? I'm less interested in specific implementations and more in approaches: what the tradeoffs are and where each best applies.
by edmundsauto