alt.hn

4/6/2026 at 9:49:34 PM

Show HN: Hippo, biologically inspired memory for AI agents

https://github.com/kitfunso/hippo-memory

by kitfunso

4/7/2026 at 10:52:05 AM

We're exploring related ideas in embodied AI rather than LLM agents. MH-FLOCKE uses Izhikevich spiking neurons with R-STDP to control quadruped locomotion — the memory is in the synaptic weights, not in a vector store.

The brain persists across sessions: stop the robot, restart it, synaptic weights reload and it continues from where it left off. Decay happens naturally through R-STDP — synapses that don't contribute to reward weaken over time. No explicit forgetting mechanism needed.

Currently running on a Unitree Go2 (MuJoCo) and a 100€ Freenove robot dog (Raspberry Pi 4, real hardware). Same architecture, different bodies.

github.com/MarcHesse/mhflocke
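
Roughly, the R-STDP loop looks like this (a minimal sketch; the parameter names and constants are mine for illustration, not taken from the MH-FLOCKE repo):

```python
import math

# Minimal sketch of a reward-modulated STDP synapse. Parameter names and
# constants are invented for illustration, not taken from the MH-FLOCKE repo.
class RSTDPSynapse:
    def __init__(self, w=0.5, tau_e=0.2, lr=0.1, decay=0.01):
        self.w = w        # synaptic weight: the part that persists across sessions
        self.e = 0.0      # eligibility trace left by recent spike pairings
        self.tau_e = tau_e
        self.lr = lr
        self.decay = decay

    def on_spike_pair(self, dt):
        # Classic STDP window: pre-before-post (dt > 0) potentiates the trace,
        # post-before-pre (dt < 0) depresses it.
        self.e += math.copysign(math.exp(-abs(dt) / self.tau_e), dt)

    def step(self, reward, dt_s):
        # Reward converts the eligibility trace into a lasting weight change;
        # without reward the weight slowly relaxes toward zero, which is the
        # "natural forgetting" with no explicit mechanism.
        self.w += self.lr * reward * self.e
        self.e *= math.exp(-dt_s / self.tau_e)
        self.w -= self.decay * dt_s * self.w
        self.w = min(max(self.w, 0.0), 1.0)
```

Persisting the brain is then just serializing the `w` values and reloading them on restart.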

by ide0666

4/7/2026 at 1:47:07 AM

The biggest issue I have with these systems is that I don't want a blanket memory. I want everything embedded in skills and progressively discovered when it's required.

I've been playing around with doing that with a cron job for a "dream" sequence.

I really want to get them out of the main context ASAP and into skills, where they belong.

https://github.com/notque/claude-code-toolkit

by AndyNemmity

4/7/2026 at 3:37:06 AM

Isn't this the idea behind holographic memory? Chopping the image in half gets you the same image at half the resolution? Or so I've heard...

What you want is a context mipmap.

Then there was the Claude article describing using filesystem hierarchy to organize markdown knowledge, which apparently beats RAG.

by orbisvicis

4/7/2026 at 10:16:27 AM

Oh hey, something I know something about!

I've long held the belief that if you want to simulate human behaviour, you need human-like memory storage, because so much of our behaviour is influenced by how our memories work. Even something as stupid as walking between rooms and forgetting why you went there is a behaviour that would otherwise have to be simulated directly, but can be simulated indirectly by giving the memory of why an agent is moving from room to room a chance of disappearing.

Now, as for how useful this will be for something that isn't trying to directly simulate a human and is trying to be "superintelligent", I'm not entirely sure, but I am excited that someone is exploring it.

https://ieeexplore.ieee.org/abstract/document/5952114 https://ieeexplore.ieee.org/abstract/document/5548405 https://ieeexplore.ieee.org/abstract/document/5953964

I never did get many citations for these, maybe I just wasn't very good at "marketing" my papers.

by davman

4/8/2026 at 6:55:32 PM

Thank you again for all the feedback. I have made a lot of significant changes to the repo. Make sure you update to the latest version; there are now lots of options for how you can best utilise it.

by kitfunso

4/6/2026 at 10:37:06 PM

Cool project. I like the neuroscience analogy with decay and consolidation.

I've been working on a related problem from the other direction: Claude Code and Codex already persist full session transcripts, but there's no good way to search across them. So I built ccrider (https://github.com/neilberkman/ccrider). It indexes existing sessions into SQLite FTS5 and exposes an MCP server so agents can query their own conversation history without a separate memory layer. Basically treating it as a retrieval problem rather than a storage problem.
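
The core pattern is tiny; something like this (illustrative schema, not ccrider's actual one — it requires an SQLite build with FTS5, which standard CPython includes):

```python
import sqlite3

# Toy version of the idea: index session transcripts into an FTS5 virtual
# table, then full-text search them. Schema and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE sessions USING fts5(session_id, role, content)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [
        ("s1", "user", "how do I configure the retry backoff?"),
        ("s1", "assistant", "set backoff_factor in the client config"),
        ("s2", "user", "the deploy script fails on staging"),
    ],
)
# FTS5 MATCH gives ranked full-text retrieval over past conversations
rows = conn.execute(
    "SELECT session_id, content FROM sessions WHERE sessions MATCH ? ORDER BY rank",
    ("backoff",),
).fetchall()
```

An MCP server then just wraps queries like that last one as a tool the agent can call.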

by nberkman

4/8/2026 at 5:29:40 AM

Nice work! I have been thinking along similar lines and designed a simulation of human memory using a tiered database design with hot/warm/cold storage, temporal data, and graph relationship nodes. Hot memory is processed by an LLM during "sleep cycles" or downtime as I have seen others mention in the comments.

You have some novel approaches here which I have learned a lot from! Your hypersphere physics approach is fascinating - it's a different approach than the one I took, but it accomplishes some tasks without an LLM. Your importance-based eviction system can significantly reduce the size of the ephemeral session state before it gets processed into persistent memory by the LLM, and your half-life knowledge decay mechanism is more elegant than the temporal approach I took.

If I am finally allowed to post a Show HN, I'll post a few details, but our projects mostly solve different things and are complementary. I can certainly use some things in Hippo to improve my system, and maybe there is something that would interest you in mine -- Memforge (https://github.com/salishforge/memforge).
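
For a flavor of what I mean by importance-based eviction with half-life decay (a toy sketch, not Hippo's or Memforge's actual code):

```python
# Toy importance-based eviction with half-life decay. A memory's effective
# importance halves every `half_life_s` seconds of age; when the hot tier
# overflows, the weakest memory is demoted to the warm tier. All names invented.
def effective_importance(base, age_s, half_life_s):
    return base * 0.5 ** (age_s / half_life_s)

def evict_to_warm(hot, capacity, now, half_life_s):
    """hot: list of (base_importance, stored_at, text) tuples."""
    demoted = []
    while len(hot) > capacity:
        # Re-rank by decayed importance, then demote the weakest memory first
        hot.sort(key=lambda m: effective_importance(m[0], now - m[1], half_life_s))
        demoted.append(hot.pop(0))
    return hot, demoted
```

The demoted list is what an LLM would then consolidate into persistent memory during a "sleep cycle".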

by artificium

4/7/2026 at 5:24:20 AM

I think explicit post-training is going to be needed to make this kind of approach effective.

As the repo notes, "The secret to good memory isn't remembering more. It's knowing what to forget." But knowing what is likely to be important in the future implies a working model of the future and your place in it. It's an AGI-complete problem: "Given my current state and goals, what am I going to find important, conditioned on the likelihood of any particular future..." Anyone working with these agents knows they are hopelessly bad at modeling their own capabilities, much less projecting that forward.

by extr

4/6/2026 at 11:24:17 PM

hmm, the repo doesn't mention this at all, but the name and problem domain bring up HippoRAG (https://arxiv.org/abs/2405.14831) - any relation? It seems odd not to mention this near-identically named paper with related techniques.

by swyx

4/6/2026 at 10:40:49 PM

Don't tools like Claude already store context per project in the filesystem? Also, any reason to use "capture" instead of "export" (the obvious opposite of import)?

by the_arun

4/6/2026 at 11:12:43 PM

> Don't tools like Claude already store context per project in the filesystem?

They do; the missing piece is a tool to access them. See my comment about a tool that addresses this: https://news.ycombinator.com/item?id=47668270

by nberkman

4/7/2026 at 4:10:58 PM

Are there any natural ways of swapping from clock time to agent "active time"? For some agents that run intermittently, I might want to keep those memories longer (in clock time).

by zambelli

4/7/2026 at 7:32:41 AM

wow, i checked the repo and we have similar ideas)

we're building swarm-like agent memory: agents share memories across rooms and nodes. Reading Steiner + Time Leap Capsules (yeah, Steins;Gate easter eggs lol).

your consolidation and decay mechanics are close to what we want. might integrate similar approach.

by CyborgUndefined

4/7/2026 at 10:49:11 AM

Thank you so much for all the feedback! I really appreciate it and have implemented the majority of them. Please check out v0.10.0!

by kitfunso

4/7/2026 at 9:39:50 AM

A working group of ~300 senior engineers is experimenting with different skills for stuff like this: https://swg.fyi/mom

by asah

4/6/2026 at 10:13:16 PM

no open code plugin? This seems like something that should just run in the background. It's well documented that it should just be a skill agents can use when they get into various fruitless states.

The "biological" memory strength shouldn't just be a time thing, and even then, the agent's time should be conformed to the AI's lifetime, not the actual clock. Look up the monotonic clock: https://stackoverflow.com/questions/3523442/difference-betwe... If you want decay, it shouldn't be tied to the wall clock, but to the agent's work time.
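
A sketch of what I mean: decay by accumulated work time measured on a monotonic clock (the names and half-life are made up):

```python
import time

# Decay memory strength by the agent's accumulated *work* time, not wall-clock
# time. time.monotonic() is immune to wall-clock jumps (NTP, DST, manual resets).
class ActiveTimeDecay:
    def __init__(self, half_life_active_s=3600.0):
        self.half_life = half_life_active_s
        self.active_s = 0.0      # total time the agent has actually been running
        self._started = None

    def start(self):
        self._started = time.monotonic()

    def stop(self):
        self.active_s += time.monotonic() - self._started
        self._started = None

    def strength(self, strength_at_store, stored_active_s):
        # Exponential half-life over elapsed *active* time only: an agent that
        # sits idle for a month loses nothing.
        elapsed = self.active_s - stored_active_s
        return strength_at_store * 0.5 ** (elapsed / self.half_life)
```

Each memory just records the agent's `active_s` at storage time instead of a timestamp.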

But memory is more about triggers than anything else, so you should absolutely have memory triggers based on location - something like a path hash. Wherever an agent is working and remembering things, those memories should be tightly scoped to that location; only when a "compaction" happens should they become more and more generalized across locations.

The types of memory that are most prominent work like this: whether it's sports or GUIs, physical location triggers recall far more than conscious memory does. Focus on how to trigger recall based on project paths, filenames in the path, directory names, etc.
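
Something like this (a toy sketch with invented names):

```python
import hashlib
from collections import defaultdict

# Toy location-triggered recall: memories are keyed to a hashed path prefix,
# and recall walks from the most specific location outward. "Compaction" here
# means promoting a location's memories one directory up. All names invented.
class PathMemory:
    def __init__(self):
        self.store = defaultdict(list)   # path-prefix hash -> list of notes

    @staticmethod
    def _key(path):
        return hashlib.sha256(path.encode()).hexdigest()[:16]

    def remember(self, path, note):
        self.store[self._key(path)].append(note)

    def recall(self, path):
        # Most specific prefix first: src/api/handlers.py, then src/api, then src
        parts = path.split("/")
        hits = []
        for i in range(len(parts), 0, -1):
            hits.extend(self.store.get(self._key("/".join(parts[:i])), []))
        return hits

    def compact(self, path):
        # Generalize: move this location's memories one directory up
        parts = path.split("/")
        if len(parts) > 1:
            parent = "/".join(parts[:-1])
            self.store[self._key(parent)].extend(self.store.pop(self._key(path), []))
```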

by cyanydeez

4/6/2026 at 11:24:36 PM

Memory links to location, but that's largely because humans are localised. Isn't that also a weakness? We should be trying to exploit the benefits of non-locality [of ML models and training data] too.

I feel like much of my life is virtual, non-localised. Writing missives to the four corners of the wind here and elsewhere; gaming online; research/chats with LLMs or on the web, email with people.

My physical location is often not important - a continuing context from non-physical aspects of my existence matters more.

That said, one of the things that's hard for me about digital life is the lack of waymarks - I used to be quite "geographical" in my thinking. Like "oh, the part I found interesting was on the left page after the RGB diagram" - I'd find that and also find my train of thought and extend it. Now information can be in any of a myriad of freeform places across at least three devices: in emails, notebooks, bookmarks, chat histories, and of course my brain. When some ready syncretism of those things happens, it feels like we'll make better advances. Personal agents can be a part of that.

by pbhjpbhj

4/6/2026 at 10:19:55 PM

coming right up, adding it as we speak

by kitfunso

4/6/2026 at 10:37:10 PM

yep came here to say this. great to hear it's in process.

by russellthehippo

4/6/2026 at 11:07:09 PM

yegge has a cool solution for this in gastown: the current agent is able to hold a seance with the previous one

by gfody

4/6/2026 at 11:27:03 PM

How does it select what to forget? Say I land a PR that introduces a sharp change, migrating from one thing to another - an exponential decay won't catch this. Biological learning makes sense when we observe similar things repeatedly in order to learn patterns. I am skeptical that it applies to learning the commits of one code base.

by esafak

4/7/2026 at 9:03:44 AM

I think this is a very important question, and it makes clear that memory systems are less about fact retrieval and more about knowledge classification. Memory systems are not document stores -- which, to be fair, Hippo does recognize and motivates with exponential decay, recall strengthening, and "sleep" consolidation.

I personally don't think a memory system should try to "select what to forget", but rather store everything and live with the contradictions inherent in history. Having said that, we need to ascribe a certain confidence to each memory at storage time, where something uncertain is described as such, and when contradicting information gets stored, it reduces the confidence even further -- this on top of time decay and retrieval bumps in confidence. E. T. Jaynes argued that this could be achieved in machines through Bayesian updating: say a beta distribution is stored for each memory, and upon storing knowledge that confirms the memory, the beta distribution is updated to have more confidence (the original serving as the prior).

If every memory has a Bayesian prior denoting confidence, and this is surfaced when recalling, then the LLM itself can decide how to synthesize the different memories. Together with a "remembered on" field, the LLM can grok that the database schema was changed, or a certain design pattern was discarded (for example).

(Full disclosure, I have developed a memory system myself which I will post here in a couple days, with a slightly different target audience than hippo).
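
Concretely, the beta-distribution idea is just this (illustrative field names, not my actual system):

```python
from dataclasses import dataclass

# Each memory carries a Beta(alpha, beta) confidence prior. Confirmations and
# contradictions are conjugate Bayesian updates; recall surfaces the posterior
# mean so the LLM can weigh contradicting memories itself.
@dataclass
class Memory:
    text: str
    remembered_on: str
    alpha: float = 1.0   # prior + pseudo-count of confirmations
    beta: float = 1.0    # prior + pseudo-count of contradictions

    def confirm(self):
        self.alpha += 1.0

    def contradict(self):
        self.beta += 1.0

    @property
    def confidence(self):
        # Posterior mean of Beta(alpha, beta); Beta(1, 1) starts at 0.5
        return self.alpha / (self.alpha + self.beta)
```

Time decay and retrieval bumps would then scale this value rather than replace it.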

by tompdavis

4/7/2026 at 12:32:33 AM

cool project mate, gj

by matt765
