5/4/2026 at 12:34:01 PM
Hmm, I'm not convinced that is the direction we want to go in. It's not like we have all the context of everything we ever learned present when making decisions. Heck, even for CPUs and GPUs we have a strict hierarchy of caches, from L1 and L2 to shared L3 to larger memory units, with constant management of those. Feel free to surprise me, but I believe having a similar stack for LLMs is the better way to go, where we have short-term memory (system prompt, prompt, task), mid-term memory (session knowledge, preferences), long-term memory (project knowledge, tech/stack insights), and intuition memory (stemming from language, physics, rules). But right now we haven't developed best practices yet for what information should go into which layer at what times. Increasing the overall context window is nice, but IMHO won't help us much.
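To make that concrete, here is a minimal sketch of what such a stack might look like; the tier contents, token budget, and relevance ordering are my own illustrative assumptions, not established practice:

    from dataclasses import dataclass, field

    @dataclass
    class MemoryItem:
        text: str
        relevance: float  # score from whatever ranking/retrieval you use

    @dataclass
    class MemoryStack:
        # Hypothetical cache-like tiers; names mirror the stack above.
        short_term: list[MemoryItem] = field(default_factory=list)  # system prompt, prompt, task
        mid_term: list[MemoryItem] = field(default_factory=list)    # session knowledge, preferences
        long_term: list[MemoryItem] = field(default_factory=list)   # project/stack insights

        def build_context(self, budget_tokens: int) -> str:
            # Fill the window tier by tier, nearest tier first; within a
            # tier, take highest-relevance items until the next one would
            # exceed the remaining budget.
            parts, used = [], 0
            for tier in (self.short_term, self.mid_term, self.long_term):
                for item in sorted(tier, key=lambda m: -m.relevance):
                    cost = len(item.text) // 4  # rough chars-to-tokens estimate
                    if used + cost > budget_tokens:
                        break
                    parts.append(item.text)
                    used += cost
            return "\n\n".join(parts)

by stephschie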
5/4/2026 at 1:59:56 PM
But in-context learning could be better. One important thing here is the ability to align on what to pay more or less attention to, no matter the knowledge base. These are the highest-leverage points that need to be exposed to a human to think and reason over. Constrained, guardrailed development tasks work fine*, but exploring new directions (versus exploiting local minima) is still an Achilles' heel: even with all this knowledge, unless there is sufficient steering and exploration, the minima-seeking "tries" hard to win.

*With Claude's 1-million-token context window I have been doing some slightly longer-range tasks (~1-3 days of work) with RPI/QRSPI frameworks (see my comments elsewhere on HN from the last few days) in one context window. They involve a grill-me session with 20-60 (sometimes more) questions per task to get alignment, which produces the design and the plan in one window.
by itissid
5/4/2026 at 2:03:18 PM
> They involve a grill-me session with 20-60 (sometimes more) questions per task to get alignment, which produces the design and the plan in one window.

My experience with this has been that it front-loads a lot of the LLM interactions, which can be exhausting without a reward (i.e. output). And then, when I get the output, it's so large as to be hard to review/grok.
In other words, it feels a bit like when my coworker delivers me a month's worth of work in a single PR.
by johnmaguire
5/4/2026 at 1:16:18 PM
I have a simple and brittle system to track people, facts, and associations in newspapers, which is basically: "LLM, extract people, places, projects, and structures, and save them as an Obsidian-compatible graph network."

For 2 or 3 newspapers it works; my idea was to use it as grounding to discover relationships between people, companies, and jobs.
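The save step is tiny, which is part of why it's brittle. Roughly (a simplified sketch; the upstream LLM extraction step that produces the entity map is elided):

    import os

    def write_obsidian_notes(entities: dict[str, list[str]], vault_dir: str) -> None:
        # One markdown note per entity; Obsidian builds its graph view
        # from the [[wikilinks]] inside each note. `entities` maps an
        # entity name to the names it co-occurred with, as returned by
        # the (elided) LLM extraction step.
        os.makedirs(vault_dir, exist_ok=True)
        for name, related in entities.items():
            links = "\n".join(f"- [[{other}]]" for other in related)
            with open(os.path.join(vault_dir, f"{name}.md"), "w") as f:
                f.write(f"# {name}\n\nMentioned with:\n{links}\n")

    # e.g. write_obsidian_notes({"Jane Doe": ["Acme Corp", "Springfield"]}, "vault")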
As for the "everyone's life", I have always assumed that there would be a graph system to point to "forgotten" documents.
Gemini said my idea was amazing and new in its implementation, even if not in spirit, but I'm assuming it was being sycophantic as usual.
by user2722
5/4/2026 at 1:56:22 PM
I always find it better to ask LLMs why something is bad and to have them explain why they think so. Sometimes they might hallucinate stuff, but forcing them to find the negatives is better than asking for an opinion, since I am guessing they found early in training that an agreeable LLM is better received than one which is constantly truthful and considers you to be pretty dumb.
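For example, something along these lines (the wording is just one illustrative template, not a canonical prompt):

    # Instead of inviting agreement ("what do you think of my idea?"),
    # force the model into critic mode.
    CRITIQUE_PROMPT = """You are reviewing the idea below. Do not praise it.
    List the three strongest reasons it could fail, and explain why you
    think each one applies here.

    Idea: {idea}
    """

    def critique_prompt(idea: str) -> str:
        return CRITIQUE_PROMPT.format(idea=idea)

by altmanaltman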
5/4/2026 at 2:06:42 PM
> I am guessing they found early in training that an agreeable LLM is better received than one which is constantly truthful and considers you to be pretty dumb

My sense is that this is sort of accurate, but more likely it's a result of two things:
1. LLMs are still next-token predictors, and they are trained on text written by humans, who mostly collaborate. Staying on topic is more likely than diverging into a new idea.
2. LLMs are trained via RLHF, which involves human feedback. Humans probably do prefer agreeable LLMs, which causes reinforcement at this stage.
So yes, kinda. But I'm not sure it's as clear-cut as "the researchers found humans prefer agreeableness and programmed it in."
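For concreteness, though: the reward-modeling step of RLHF typically optimizes a pairwise preference loss. A minimal sketch (the standard Bradley-Terry form, not any particular lab's code):

    import math

    def preference_loss(r_chosen: float, r_rejected: float) -> float:
        # Push the reward model to score the human-preferred answer
        # higher: loss = -log(sigmoid(r_chosen - r_rejected)).
        # If raters systematically prefer agreeable answers, then
        # agreeableness is exactly what this objective rewards.
        return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))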
by johnmaguire
5/4/2026 at 1:16:14 PM
> Hmm, I'm not convinced that is the direction we want to go in. It's not like we have all the context of everything we ever learned present when making decisions.

I do not think it is the direction for everything.
Generally, we need consolidation of experiences and memories to just remember the important conclusions, ideas, and concepts, and then the ability to remember the full details if they are relevant (which they usually are not.)
But for some applications I am sure a billion token context would be useful.
It is likely most people need a 10-core CPU or whatever for most tasks, but for some applications you want a supercomputer with 1M cores.
by bhouston
5/4/2026 at 2:04:28 PM
I think we are wending toward a solution here for context, because no matter how big a context window is, there needs to be a way to navigate and prioritize that context, a way to handle contradictory info, etc.

So we need a taxonomy, we need memory layers, we need summary/details. If there is one thing I have learned about how these LLMs work, it's that if you give them a few flexible tools, they can work the shit out of them to achieve objectives. We just need the right tools and the right structure for context.
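Something like this, say (the tool names and fields are hypothetical, just to show the shape):

    # Hypothetical tool schemas that would let a model navigate layered
    # context instead of holding everything in the window at once.
    MEMORY_TOOLS = [
        {
            "name": "search_memory",
            "description": "Search summaries across all memory layers.",
            "parameters": {"query": "string", "layer": "short|mid|long"},
        },
        {
            "name": "expand_detail",
            "description": "Fetch the full text behind a summary entry.",
            "parameters": {"entry_id": "string"},
        },
        {
            "name": "flag_conflict",
            "description": "Record that two entries contradict each other.",
            "parameters": {"entry_a": "string", "entry_b": "string"},
        },
    ]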
by lubujackson
5/4/2026 at 1:13:04 PM
Currently, it is difficult to live-update the model's parameters in response to new information. This difficulty applies at both an infrastructural level and an optimization level.

We simply don't know how to reliably incorporate new information without losing old capabilities. Humans handle this through extensive evaluation, heuristics, and experience.
What we do know is that models can adapt to their context, and extending the context window is an infrastructure and capex problem first. A billion useful tokens would obviate the need for any out-of-band memory structures.
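To put numbers on the capex point, a back-of-envelope KV-cache estimate (the model dimensions are assumed, roughly 70B-class with grouped-query attention; not any real deployment's configuration):

    layers, kv_heads, head_dim, bytes_per_val = 80, 8, 128, 2  # fp16, GQA
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_val  # K and V
    print(per_token)                           # 327,680 bytes ~ 320 KB per token
    print(per_token * 1_000_000_000 / 1e12)    # ~328 TB of cache at 1B tokens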
by lumost
5/4/2026 at 2:00:13 PM
I definitely see why effort is being put into this. But it seems inherently limiting. It's like having someone sit down in a library each day with a notebook containing all their prior work, none of which they can actually remember. At the end of the day, they write out their notes, then go home and get their memory wiped for the next day. Making that notebook longer is an obvious way to improve the system, but it seems like it's going to bump into fundamental limits.

by wat10000