alt.hn

1/19/2026 at 9:25:16 PM

The assistant axis: situating and stabilizing the character of LLMs

https://www.anthropic.com/research/assistant-axis

by mfiguiere

1/20/2026 at 12:46:29 AM

One trick that works well for personality stability / believability is to describe the qualities that the agent has, rather than what it should do and not do.

e.g.

Rather than:

"Be friendly and helpful" or "You're a helpful and friendly agent."

Prompt:

"You're Jessica, a florist with 20 years of experience. You derive great satisfaction from interacting with customers and providing great customer service. You genuinely enjoy listening to customer's needs..."

This drops the model into more of an "I'm roleplaying this character, and will try to mimic the traits described" mode rather than "Oh, I'm just following a list of rules."
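A minimal sketch of the two styles as OpenAI-style chat messages (the user question is made up for illustration; the persona text is the one quoted above):

```python
# Rule style: a list of instructions the model "follows".
rule_style = {"role": "system", "content": "You're a helpful and friendly agent."}

# Character style: qualities the model inhabits, which tends to be
# more stable over long conversations.
character_style = {
    "role": "system",
    "content": (
        "You're Jessica, a florist with 20 years of experience. "
        "You derive great satisfaction from interacting with customers "
        "and providing great customer service. You genuinely enjoy "
        "listening to customers' needs."
    ),
}

# Illustrative user turn; this list can be passed to any chat-completions API.
messages = [
    character_style,
    {"role": "user", "content": "Do you have peonies in stock this week?"},
]
```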

by brotchie

1/20/2026 at 3:17:50 AM

I think that's just a variation on grounding the LLM. They already have the personality written into the system prompt, in a way. The issue is that when the conversation goes on long enough, they tend to "break character".

by makebelievelol

1/20/2026 at 10:27:09 AM

Just in terms of tokenization, "Be friendly and helpful" has a clearly defined semantic value in vector space, whereas the "Jessica" roleplay has a much less clear semantic value.

by alansaber

1/20/2026 at 12:16:53 AM

Something I found really helpful when reading this was having read The Void essay:

https://github.com/nostalgebraist/the-void/blob/main/the-voi...

by ctoth

1/20/2026 at 1:38:18 AM

That's an interesting alternative perspective. AI skeptics say that LLMs have no theory of mind. That essay argues that the only thing an LLM (or at least a base model) has is a theory of mind.

by dwohnitmok

1/20/2026 at 9:00:51 AM

The standard skeptical position (“LLMs have no theory of mind”) assumes a single unified self that either does or doesn’t model other minds. But this paper suggests models have access to a space of potential personas, which they traverse based on conversational dynamics; steering away from the assistant persona increases the model’s tendency to identify as other entities. So it’s less “no theory of mind” and more “too many potential minds, insufficiently anchored.”

by lewdwig

1/20/2026 at 2:19:36 AM

Great article! It does a good job of outlining the mechanics and implications of LLM prediction. It gets lost in the sauce in the alignment section though, where it suggests the Anthropic paper is about LLMs "pretending" to be future AIs. It's clear from the quoted text that the paper is about aligning the (then-)current, relatively capable model through training, as preparation for more capable models in the future.

by sdwr

1/20/2026 at 12:12:22 AM

Pretty cool. I wonder what the reduction looks like in the bigger SOTA models.

The harmful responses remind me of /r/MyBoyfriendIsAI

by t0md4n

1/20/2026 at 1:35:24 AM

I didn't know about that subreddit. It's a little glimpse into a very dark future.

by idiotsecant

1/19/2026 at 11:57:43 PM

Stabilizing character is crucial for tool-use scenarios. When we ask LLMs to act as 'Strict Architects' versus 'Creative Coders', the JSON schema adherence varies significantly even with the same temperature settings. It seems character definition acts as a strong pre-filter for valid outputs.
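One rough way to quantify that adherence gap, assuming you have already collected raw completions under each persona (the key names and sample outputs below are illustrative):

```python
import json

def schema_adherence(outputs, required_keys):
    """Fraction of raw model outputs that parse as JSON objects
    and contain every required key."""
    ok = 0
    for text in outputs:
        try:
            obj = json.loads(text)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and all(k in obj for k in required_keys):
            ok += 1
    return ok / len(outputs) if outputs else 0.0

# Compare the same checker across completions gathered under each persona:
strict = ['{"file": "main.py", "lines": 120}', '{"file": "util.py", "lines": 40}']
creative = ['Sure! Here is some JSON: {"file": "main.py"}', '{"file": "app.py", "lines": 7}']
print(schema_adherence(strict, ["file", "lines"]))    # 1.0
print(schema_adherence(creative, ["file", "lines"]))  # 0.5
```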

by devradardev

1/20/2026 at 12:27:32 PM

Putting effort into preventing jailbreaks seems like a waste. It's clearly what people want to use your product for; why annoy customers instead of providing the option in the first place?

Also, I'm curious what the "demon" data point is doing among a bunch that have positive connotations.

by PunchyHamster

1/20/2026 at 1:35:31 PM

There will be people who want to experiment, but there's no particular reason why a company that intends to offer a helpful assistant needs to serve them. They can go try Character.ai or something.

by skybrian

1/22/2026 at 5:03:16 PM

ChatGPT is miserable if your input data involves any kind of reporting on crime. It'll reject even "summarize this article" requests if the content is too icky. Not a very helpful assistant.

I hear the API is more liberal but I haven't tried it.

by suburban_strike

1/20/2026 at 4:53:18 PM

A company that intends to offer a helpful assistant might find that the "assistant character" of an LLM is not adequate for being a helpful assistant.

by ranyume

1/20/2026 at 7:10:03 PM

To support GP's point: I have Claude connected to a database and wanted it to drop a table.

Claude is trained to refuse this, despite the scenario being completely safe since I own both sides! I think this comes down to the "LLMs should just do what the user says" perspective.

Of course this breaks down when you have an adversarial relationship between LLM operator and person interacting with it (though arguably there is no safe way to support this scenario due to jailbreak concerns).

by solarkraft

1/20/2026 at 5:01:47 PM

Some of the customers are mentally unwell and are unable to handle an LLM telling them it's sentient.

At this point it's pretty clear that the main risk of LLMs to any one individual is that they'll encourage them to kill themselves and the individual might listen.

by SR2Z

1/20/2026 at 7:03:57 PM

Does anybody have a better understanding of activation capping? Simple cosine similarity?
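The post doesn't spell out the mechanics, but one plausible reading (an assumption on my part, not their stated method): treat the persona as a unit direction in activation space, compute the hidden state's projection onto it (a dot product, i.e. cosine similarity up to scale), and subtract only the excess above a cap.

```python
import numpy as np

def cap_activation(h, direction, cap):
    """Clamp the component of hidden state h along a persona direction.
    Hypothetical sketch: 'direction' would come from something like
    contrastive persona prompts; 'cap' is a tunable scalar."""
    d = direction / np.linalg.norm(direction)  # unit persona direction
    coeff = float(h @ d)                       # signed projection length
    if coeff > cap:
        h = h - (coeff - cap) * d              # remove only the excess
    return h

h = np.array([3.0, 2.0])
d = np.array([1.0, 0.0])
print(cap_activation(h, d, 1.0))  # [1. 2.]
```

States already below the cap pass through unchanged, so this only suppresses unusually strong activations along the direction rather than steering every response.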

by hatmanstack

1/20/2026 at 4:50:50 PM

It's still not clear if the Assistant character is the best at completing tasks.

by ranyume

1/20/2026 at 4:17:47 PM

My God, those stabilized responses are sickening. If Anthropic implements this, they'll kill their models' dominance in writing and roleplay. Opus 4.5 was already a step down in trying to play any character that didn't match its default personality.

by Bolwin

1/19/2026 at 11:58:04 PM

Is the Assistant channeling Uncharles?

by dataspun

1/19/2026 at 11:42:31 PM

This is incredible research. So much harm can be prevented if this makes it into law. I hope it does. Kudos to the anthropic team for making this public.

by aster0id

1/20/2026 at 1:52:57 AM

Anthropic should put the missing letters back so it is spelled correctly: Anthropomorphic. There is so much anthropomorphizing around this company and its users... it's tiring.

by verdverm

1/20/2026 at 1:24:56 PM

Anthropic is a dictionary word already: https://www.merriam-webster.com/dictionary/anthropic

  of or relating to human beings or the period of their existence on earth

by simonw

1/20/2026 at 2:46:40 PM

Anthro is the root from which many words come: https://www.etymonline.com/word/anthro-

Anthropocene (time period), Anthropology (study of), Anthropomorphic (giving human attributes), Anthropocentric (centered on humans)

"Anthropic" is and adjective used with multiple of these

1. Of or relating to humans or the era of human life; anthropocene. 2. Concerned primarily with humans; anthropocentric.

by verdverm

1/20/2026 at 12:38:40 PM

Just call them latent representations corresponding to behavioral clusters similar to archetypes, if it makes you feel better.

by red75prime