3/1/2026 at 8:09:03 AM
Could someone explain the appeal of account-wide memory to me? Anthropic's marketing indicates that nothing bleeds over, but I'm just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for certain preferences like code styling or response length, those are a good fit for custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.
by wps
3/1/2026 at 8:14:50 AM
Most normal people want the LLM to remember their interests and favourite things, so they don't have to manually re-explain when asking for advice. They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once.
by jjmarr
3/1/2026 at 8:20:52 AM
Do you have example prompts where this would be useful? Why would you want an LLM to know your favorite type of cheese? Now that I say that, I guess if you use it for recipes it's useful if it remembers things like dietary restrictions. And even then, a project seems like the better option. I can't think of much else, though, so I'm still curious what you or others use it for.
by deaux
3/1/2026 at 10:53:56 AM
ChatGPT knows what's in my bar and what types of base liquors I love and/or can't drink. It knows what fruit, syrups and mixes are in my fridge. It knows that my friend is allergic to mint. It knows that when I ask for recommendations, I tend to want a choice between spirit forward, tiki, martini and herbaceous.
ChatGPT knows the broad strokes of the 3-4 main hardware projects I have on the go, and depending on the questions I'm asking, it will often structure its responses in a way that differentiates based on which one I'm thinking about.
It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios.
It knows what kind of solder I use, and it has warned me about components with sensitive reflow temperature concerns.
It's an extraordinarily useful feature for engineering and drinking, two things that are commonly found in the same Venn diagram.
by peteforde
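The resistor-chaining trick described above can be sketched as a brute-force search over the available stock. This is purely a hypothetical illustration of the idea, not anything the chatbot actually runs; the stock values and the two-resistors-per-leg limit are assumptions:

```python
from itertools import product

# Hypothetical stock of resistor values (ohms); a real machine's reels vary.
STOCK = [100, 220, 330, 470, 1000, 2200, 4700, 10000]

def best_divider(target_ratio, stock=STOCK):
    """Find divider legs whose ratio R_bottom / (R_top + R_bottom) is
    closest to target_ratio. Each leg is either a single resistor or
    two chained in series, mirroring the 1-2 resistor trick above."""
    legs = set(stock)
    for a, b in product(stock, repeat=2):
        legs.add(a + b)  # two resistors in series simply add
    best = None
    for r_top, r_bot in product(sorted(legs), repeat=2):
        err = abs(r_bot / (r_top + r_bot) - target_ratio)
        if best is None or err < best[0]:
            best = (err, r_top, r_bot)
    return best[1], best[2]

# e.g. ask for a divider that outputs one third of the input rail
r_top, r_bot = best_divider(1 / 3)
```

With only a handful of stock values the search space is tiny, so exhaustive enumeration is fine; a larger E-series stock would still only be a few thousand leg combinations.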
3/1/2026 at 5:42:41 PM
> It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios.

Also relevant: it knows that you know what a resistor and a capacitor are, and it is able to tune responses to your level of knowledge. (It's not great at this, in my experience, since domain knowledge is still so jagged, but I think it's better than nothing.)
by lkbm
3/1/2026 at 12:17:52 PM
Thank you! That helped me understand: hobbies that you regularly do, and that an LLM is continuously helpful for, benefit from memory. Personally, I would still be wary of the black-box aspect - not knowing what it does remember and what it doesn't - so I would probably still use projects to make it more deterministic. But that's probably overcautious and unnecessary in most common cases.
by deaux
3/1/2026 at 9:18:33 PM
Is it just me who's getting freaked out by this? I know it's a boiling-frog situation, but seeing it spelt out feels so icky versus how Google ads feel.
I really want my personality data deleted from big tech...
Sigh
by Peaches4Rent
3/1/2026 at 5:55:44 PM
I will say graciously that seeing this question asked here is absolutely stunning to me. If I ask a question about vehicles, it knows what cars I have and what I like in cars.
If I ask a question about vacation spots, it knows my party's composition or preferences.
Things like that
by foogazi
3/1/2026 at 10:37:21 AM
I asked ChatGPT a car-related question in a fresh chat, and it answered specifically with my car in mind. Turns out a few months before, I had told it in a prompt what car I was driving.
I turned memory off that day.
by Mashimo
3/1/2026 at 8:45:35 AM
Can projects overlap? If not, there's general context information that's often useful: my job, my kids and time preferences around those things, my preferred tech setup and way of working, and the types of tech I'm better at. Things I already have (Home Assistant, a little NUC, etc.). I can throw out a random question and not have to add or manage this kind of information.
by IanCal
3/1/2026 at 9:44:00 AM
I get that those are the things that go into memory. What I don't get is what kind of prompt your job and kids are useful information for, especially on the regular.
by deaux
3/1/2026 at 11:44:59 AM
Let's see, recently:
Home automation fixing
Proposed integrations with some services locally
Science experiments explained at a few levels, finding good background info and where to read up about some safety information
Maths help for specific areas my kids are looking at and proposed games for that
Evaluation of coding options for my kids
How to link up some ideas on coding, electronics and using the home automation side as some fun outputs
LED strip info and work, again integrating with smart homes and what’s good around the kids
Framework evaluations for automation at work and home
Crystal identification
Looking up local council info
Relevant music suggestions for kids to play on the piano
Here some things cross over. I'm happy writing code, I typically want easy open-source options, I have languages and tech I prefer, I'm moving things to Matter, I have Home Assistant, my son is excellent at maths given his age but I'm working more on comprehension of problems, and a lot more. All of those are things that, with a bit of background info, change the types of answers I get and make it more useful.
by IanCal
3/1/2026 at 8:33:19 AM
I had the same question a few days ago here: https://news.ycombinator.com/item?id=47162828
I didn't receive an answer besides "that's what people like", but I still can't think of (m)any situations where anyone would prefer it.
by tikotus
3/1/2026 at 9:47:41 AM
The reply about knowledge of their job and family made me think. The only thing I can now think of is using it as a personal therapist, or asking how to approach their kids. And they're a bit embarrassed about it, because it's still outside the Overton window - especially on HN - which is why they aren't sharing it.
If someone has different usecases, please do prove me wrong! Maybe I just lack imagination.
by deaux
3/1/2026 at 2:58:18 PM
Such an incredible amount of personal, intimate knowledge to share with a company. Sure, Google can figure out where I live and who I visit because I have an Android phone, but they'll never know the contents of those relationships.
I have a line in the sand with the AI vendors. It's a work relationship. If I wouldn't share it with a colleague I didn't know super well, I'm not telling it to an AI vendor.
by 0_____0
3/1/2026 at 5:44:20 PM
I recently asked about baby-led weaning. If my baby were 2 months old, it would have been smart to mention "not yet!", but it knows she's 8 months old and was able to give contextual advice.
by lkbm
3/1/2026 at 11:23:17 AM
I ask GPT a lot of questions about plants and gardening - I'm happy that it remembers where I live and understands the implications. I could remind it in every question, but this is convenient.
by randrus
3/1/2026 at 5:30:20 PM
I broke my ankle and have multiple chats related to medicine, physical therapy, pain management, lawyer questions, and how to handle messaging to my boss and HR.
by damontal
3/1/2026 at 10:29:13 AM
I use it for my work, so I want it to remember everything about my business: the website, the domain, which country we operate in, and on and on. It's a ton of context which I don't want to repeat each time.
by vishnugupta
3/1/2026 at 12:10:28 PM
That's what projects are for. All the major chatbot companies have some equivalent, and all support a standard instruction where you can include anything you need automatically.
by Kye
3/1/2026 at 3:06:38 PM
Sure. ChatGPT "knows" (has context that includes) some of the things I'm good at, and some of the things I'm not good at. I have my own tolerances for communication and it has context about that, too.
I use the bot for mostly techy things. So, for instance, I'm alright with using tools, and building electronics, and punting around on a Linux box so I don't need my hand held for that. But I'm terrible at writing code, so baby steps and detailed explanation there helps me a lot. I strongly prefer pragmatism and verifiable facts. I despise sycophant speech, the empty positivity of corpo-speak, assumptions, false praise, superfluous verbosity, and apologies and/or the implication of feelings from bots.
Through a combination of some deliberate training (custom instructions, memory), and just using it (shared context), it mostly does what I want in the way that I want it done -- the first time.
I don't have to steer it in the right direction with every new session. There was a time when that was necessary, but it is no longer that way. Adjustments happen increasingly automatically these days.
That saves me time and frustration, and enhances the utility of the bot.
Meanwhile: Others have their own skills and preferences that may be very different in comparison to my own. That's OK. We each get to have our own experience.
by ssl-3
3/1/2026 at 8:15:05 AM
In online Claude I often use incognito mode precisely because I don't want results to be influenced by what we talked about earlier. It's getting rather annoying, to be honest.
by AllegedAlec
3/1/2026 at 8:19:04 PM
I'm switching from Claude Web to Claude Code. Local files give me memory I actually control, unlike Anthropic's implementation. CC doesn't carry state between sessions; you just put whatever project context it needs in a file.
by visarga
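For illustration, the kind of file the commenter means might look like this (Claude Code reads a CLAUDE.md from the project directory; the contents below are invented example context, not a prescribed format):

```markdown
# CLAUDE.md - project context kept under my control, in version control

## Project
Hobby weather-station firmware for an ESP32, written in C++.

## Conventions
- Prefer fixed-width integer types; avoid dynamic allocation in ISRs.
- Keep answers terse; show diffs rather than whole files.
```

Because the file lives in the repository, the "memory" is inspectable and editable, and nothing carries over unless it is written down here.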
3/1/2026 at 8:20:11 AM
Keep your user prefs minimal and use project memory instead: create a new project; it will only have access to your user prefs, and everything else is fresh.
by qwertox
3/1/2026 at 8:48:54 AM
I did /init and now CLAUDE.md exists at several layers. I wish there were a reverse init, and a minimal, as-needed init.
by hbarka
3/1/2026 at 9:48:35 AM
I'll have to try projects I guess, but I just want to sometimes ask questions without it bringing up shit I asked about in the past which isn't relevant to what I'm asking this time.
by AllegedAlec
3/1/2026 at 8:41:05 AM
exactly!
by KellyCriterion
3/1/2026 at 10:37:59 AM
Why not turn it off then?
by Mashimo
3/1/2026 at 4:42:41 PM
On the contrary, I cannot understand how people seriously use LLMs outside of software engineering without account-wide memory. When I ask things like "what do you think John should do next on project A?", I don't want to have to explain in detail who John is, what project A is, and what John was working on before.
by bouzouk
3/1/2026 at 9:43:08 AM
The few times I've switched over to ChatGPT I've been dumbfounded by lines like "...since you already are using SQLite...", referring to projects from months ago.
I know the "memory" function can be disabled, but I have a hard time seeing that it would ever really be useful.
by 7734128
3/1/2026 at 11:32:47 AM
Yeah, for me it only ever polluted the context. Irrelevant information tends to oversteer the LLM and produce worse output.
by cedws
3/1/2026 at 8:21:21 PM
Gemini is terrible with personalization. It brings up everything in my bio nonstop, no matter what the topic is.
by astrange
3/1/2026 at 4:06:44 PM
It all depends on your use case(s). For me, "account-wide" memory has only: (a) a short description of my hardware/OS/display system/etc.; (b) my mobile hardware and OS version; and (c) my age, gender, city/country of residence, and health conditions.
by gverrilla
3/1/2026 at 8:15:47 AM
I can try! I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically a grown context about me and my preferences and interests, and ChatGPT uses it to tailor responses to my knowledge, so I can relate better.
This is far more natural and easier for me than either crafting a default prompt preset or setting up each conversation individually; that would be way too much overhead for discussing random shower thoughts between real-life stuff.
That's my use case, and I've discovered that memory can be detrimental for specific questions and prompts, where carefully written prompts each time would be more beneficial. But my use case is really ad hoc usage without the time for that. At least for ChatGPT.
When coding, this fails fast. There, regular context resets seem to be the more viable strategy.
by pfix
3/1/2026 at 8:20:40 AM
I see what you mean, but I like having a clean slate even for those one-off questions. I don't want a differing answer to a philosophical inquiry just because the LLM remembers a prior position I've written about, you know?
by wps
3/1/2026 at 12:54:13 PM
I have all the history settings off for this reason, but something that worries me is that there's a fair bit of information about me trained right into the model weights. I'm not "famous" by any stretch, but Claude has awareness of some of my HN-front-page-hitting projects, etc., which I think should be enough to bias responses (although I haven't tried to measure it).
I set my name to "User" in the settings, so in a clean-slate chat it has nothing to go on, but the moment Claude Code does something like `git log` it knows who I am again. I've even considered writing some kind of redaction proxy.
by Retr0id
3/1/2026 at 8:32:39 AM
FWIW, both OpenAI and Anthropic have a toggle to do a "Temporary/Incognito Chat" that does not use or update memory. I too wish this was the default, and then you could opt in at the end of the chat to save some long-term aspects into memory.
by e1g
3/1/2026 at 8:41:18 AM
That would be interesting at the start too, as an option for what to pull in. ChatGPT memory "improved", and now you normally don't even see what it commits to memory anymore!
by pfix
3/1/2026 at 7:22:14 PM
Think of things like your preferred units (meters, kg, cups, tablespoons, milliliters). Or: do not suggest recipes with x ingredient. Language preferences. Etc., etc.
by Panoramix
3/1/2026 at 8:32:29 AM
I've told the LLMs that, when traveling, I don't care about nightlife and alcohol. Because they have a memory of this, when I ask for a sample itinerary for a 2-day stay in a new city, it won't waste hours in the day on the party street, wine tasting, etc.
For example, instead of recommending a popular night club, it will recommend a stroll along the river to view the lit-up skyline, or a visit to the night market instead.
It knows other preferences as well (exploring quirky neighborhoods, trying local fast-food joints and markets).
by jtokoph
3/1/2026 at 8:34:11 AM
So it's because they want to be more like ChatGPT instead of more like Claude Code. I guess that makes sense - bigger market.
by cyrusmg
3/1/2026 at 8:40:06 AM
Is it? Isn't there much more money in automating business processes than in answering consumer questions (sans ads)?
Automating software development has to be a multi-trillion dollar market. And that doesn't account for future growth.
by echelon
3/1/2026 at 11:42:02 AM
Maybe. Software is big, but it is only a tiny percentage of the economy. They need to help a lot more than software to justify their datacenter investments; even if we add all engineering, that isn't a large percentage. How can they help insurance agents (or eliminate them - I don't care either way), plumbers, zookeepers, and every other job in my city? For some, maybe they can't - but whether they can is a question worth asking.
by bluGill
3/1/2026 at 2:01:02 PM
"Stop asking me to apply the plan. I will tell you when I'm ready."That alone drives me batty. I can easily spend a couple hours and multiple revisions iterating on a plan. Asking me me every single time if I want to apply it is obnoxious.
by bmurphy1976
3/1/2026 at 3:26:08 PM
I own a lot of dirt bikes, boats, snowmobiles, mowers, and blowers. It's much easier for me to ask about "my Polaris" than it is to ask about my "2011 Polaris Switchback Assault".
Similarly, it remembers the dimensions of my truck, so towing/loading questions don't need extra clarification.
It's the small things.
by joenot443
3/1/2026 at 1:54:00 PM
The appeal for me is not having to constantly repeat instructions. Imagine having to repeat dietary restrictions every time you ask for a recipe.
by __alexander
3/1/2026 at 8:14:27 AM
> it seems to cater to how the masses use these tools.

Are you suggesting that they should ignore the needs of the vast majority of their users?
I mean, of course they do; it would be worse otherwise.
by gbalduzzi
3/1/2026 at 8:28:51 AM
Well, the masses are wrong. See: insane amounts of compute wasted on "thank you", "haha true", "redo it", etc. I think the UI should be designed to avoid misuse, and I think an ever-growing distillation of your most common traits is not a good use of context length. If you want it, specify it. Maybe even hard limits on chat length - why are we 20 replies deep in a single chat? A user-friendly option could be a single button that distills that chat down and opens a new one with prebuilt instructions to continue the conversation. I'm no product designer though, just some thoughts.
by wps
3/1/2026 at 2:44:37 PM
Because I can say "do what you did before, but about the Romans this time."
And it will give me a complete rundown of Roman life, because it knows what I was interested in before.
Or you can ask a tax question and it will know you’re an organic rice farmer or whatever. Claude has the best implementation because it has both memory, and previous chat searching. So it will actually read through relevant chats, rather than guessing based on memories.
by MagicMoonlight
3/1/2026 at 8:13:32 AM
Sure, it's for those customers who don't have any idea what a "context window" is.
by CGamesPlay
3/1/2026 at 8:16:02 AM
This seems to imply that customers assume by default that the LLM remembers their past chats? I feel like the UI makes it incredibly obvious it's a clean slate every time? But then again, people ask ridiculous meta questions all the time to these chatbots expecting a correct answer.
by wps
3/1/2026 at 11:21:43 AM
Yeah, but then they went and added "memories", and in particular automatic memory management, and now it isn't a clean slate each time. And that's exactly what this is importing: those automatically curated memories that make the chat bot "feel like" it knows you.
by CGamesPlay