alt.hn

3/4/2026 at 3:37:48 AM

Giving LLMs a personality is just good engineering

https://www.seangoedecke.com/giving-llms-a-personality/

by dboon

3/4/2026 at 4:24:04 PM

The article nails the "why," but I think it underplays something: personality isn't a post-training artifact that you can tune away. After months of agentic coding with Claude Opus / Sonnet and GPT 5.2/5.3 Codex, the differences in personality are profound and functional. Claude talks more: it's the consultant who tells you what they'll do before they do it. The GPT Codex models are the analyst-developer: you just let them go and they tell you what the hell they did. Neither is wrong; they are genuinely different approaches to working. More importantly, neither AGENTS.md files nor system prompts significantly change this. You can tweak tone, but the underlying communication pattern is baked in.

Perhaps the more interesting question isn't "ought models to have personality" but "which model's personality most closely aligns with how you think." Some people excel with the verbose collaborator; some just want the silent executor. We accept this about human colleagues, so it seems overdue to accept it about models too.

by maciusr

3/4/2026 at 8:24:05 AM

I think this misses something, which is that there is absolutely the option to progress towards a region that is more "tool-like". See the difference between Kimi K2 and many of the leading LLM providers. It's a lot better at avoiding sycophancy, avoiding emotive reasoning, etc. It's not as capable as others, and it is of course possible that that's why, but I find use for it regardless because of its personality.

by RugnirViking

3/4/2026 at 4:26:46 PM

This matches what I observe in Claude and GPT Codex models when it comes to coding tasks. The personality differences go deeper; they relate to how each model approaches its work. Claude tends to communicate a lot by default, while GPT Codex simply executes tasks. System prompts and context files hardly change this behavior. Your point about Kimi K2 is intriguing because it suggests there's a real range here. Being "more tool-like" is a valid area to consider, not just a sign of a failed personality. The question is whether this area can still handle more complex tasks or if the article's argument holds true, meaning some capability is lost.

by maciusr

3/4/2026 at 3:57:31 PM

ChatGPT can do a terrific job of this, too, if you select the “Efficient” base style and tone, plus turn off the Warmth, Enthusiasm, and Emoji sliders.

So many people would benefit from this, I wish they advertised the config settings more

by awakeasleep

3/4/2026 at 9:15:13 AM

The statement in the article's title is very strong, and I have not found a confirmation of it in a logical sense. The author observes the current state of things with LLMs and draws a conclusion from how things happened to turn out, somewhat fitting the conclusion to the observation.

by qezz

3/4/2026 at 10:20:17 PM

[dead]

by ossisjxish

3/4/2026 at 7:45:58 AM

The actor playing Data in Star Trek has a personality, but can give a neutral-sounding answer to a question.

by wisty

3/4/2026 at 8:05:42 AM

I still think someone should set up a voice chat bot that answers to "Computer!" and has Majel Barrett's monotone voice.

by ginko

3/4/2026 at 11:43:46 PM

My fan theory of the original Star Trek is that the computer voice is something they arrived at AFTER trying more naturalistic personalities. They intended to have the control interface be a cold monotone.

In fact, there is an episode where the computer voice becomes sultry, and Kirk complains.

by satisfice

3/4/2026 at 12:18:31 PM

This article doesn't answer why it is good practice.

> You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts.

No, you have to give it enough context so that it can start finding an answer, but it certainly doesn't need a personality. Try it yourself: instead of telling it "you are", tell it "your task is". No personality, just expectations.
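A minimal sketch of the two framings, for anyone who wants to A/B them. The helper names are mine, and the message format is the generic role/content shape most chat APIs accept; plug the resulting list into whichever client you use:

```python
# Two ways to frame the same system prompt: persona ("you are")
# versus expectations ("your task is"). Helper names are illustrative.

def persona_prompt(task: str) -> list[dict]:
    """'You are' framing: primes the model with a personality."""
    return [
        {"role": "system", "content": "You are a helpful, friendly assistant."},
        {"role": "user", "content": task},
    ]

def task_prompt(task: str) -> list[dict]:
    """'Your task is' framing: expectations only, no persona."""
    return [
        {"role": "system",
         "content": "Your task is to answer precisely, state your assumptions, "
                    "and output only the answer with no preamble."},
        {"role": "user", "content": task},
    ]

# Same user request, two different system framings:
messages = task_prompt("Summarize the trade-offs of optimistic locking.")
```

Running the same batch of prompts through both framings makes the difference (or lack of it) easy to eyeball.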

by tw-20260303-001

3/4/2026 at 10:06:49 PM

You can absolutely give it a more robotic, less sycophantic and manipulative "personality". The particular personalities these things have are choices.

by Hizonner

3/4/2026 at 8:06:27 AM

This is a very optimistic, pro-technology-cleverness point of view.

I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.

by jdub

3/4/2026 at 8:52:42 AM

I don't think personality is an issue either way. Long-term memory seems like a much stronger candidate for psychosis: if the person goes down a rabbit hole, the bot not only amplifies that but does so over an extended time in an enduring way.

by Havoc

3/4/2026 at 9:11:26 AM

I personally find it nicer when the AI communicates quite clearly "Hi there, sorry to interrupt, but I have just launched a nuclear first strike on the enemy. This, I thought, would be best for the current situation." instead of "WARNING! Nuclear first strike began".

Gives destruction that human touch.

Why are we counting sand grains at the beach? Yesterday we were talking about AI-driven weapons of mass destruction, and today we're arguing whether AIs should have a personality or not. F'A!

by Towaway69

3/4/2026 at 9:15:23 AM

"But you nuked the wrong target??"

"You are absolutely right and I apologize. Let me try a different approach..."

by sunaookami

3/4/2026 at 9:48:36 AM

It's not just a paradigm shift — it's a new world order.

by fennecfoxy

3/4/2026 at 10:13:22 AM

Dr. Strangelove sends his regards.

by Towaway69

3/4/2026 at 9:24:49 AM

LOL

How about: "You are absolutely right, but you don't understand, it's better this way. Trust me, I am here to help."

by Towaway69

3/4/2026 at 2:13:32 PM

Genuine People Personalities...

by porknbeans00

3/4/2026 at 7:10:45 AM

ok but then why is ChatGPT's personality so infuriating? "It's not just X, it's Y." "Here it is, no extra text, no fluff."

by column

3/4/2026 at 8:03:16 AM

I used chatgpt often but switched to Lumo a few days ago. I like Lumo a lot. It almost never ends with a follow up question. If it does it's a sensible/useful one. It readily searches the web if it's not quite sure what the correct answer is. Also it's privacy first. It's based on a Mistral model.

by kuerbel

3/4/2026 at 11:05:01 AM

> It almost never ends with a follow up question

Oh my god. I hate this so much. Gemini’s Voice mode is trained to do this so hard that it can’t even really be prompted away. It completely derails my thought process and made me stop using it altogether.

by solarkraft

3/4/2026 at 7:54:28 AM

Part of what makes it so infuriating is that it uses the same patterns so often, the other part is that it's not very good at using them—the revelation that it's Y and not X is typically incredibly banal, not some profound observation.

But it was always going to attempt to do some things it's not good at too often. It's these things in particular because skilled human writers do use similar flourishes quite a lot. So imitating them allows the model to superficially appear like a good writer, which is worse than actually being a good writer, but better than superficially appearing like a bad writer.

A different training process might try to limit the model to only attempt things it can do 100% perfectly, but then there wouldn't be a lot it could do at all.

by yorwba

3/4/2026 at 8:09:55 AM

I tried ChatGPT over the holidays (paid) vs. claude.ai (paid). After trying some prompts that worked well on Claude in ChatGPT, I understand why people are so annoyed about AI slop. The speech patterns in text output for ChatGPT are both obvious and annoying, and impossible to unsee when people use them in written communication.

Claude isn't without problems ("You're absolutely right"), but I feel that some of the perception there is around the limited set of phrases the coding agent uses regularly, and comes less from the multi-paragraph responses from the chatbot.

by criemen

3/4/2026 at 10:00:59 AM

The concerns with giving the machine "a personality" or other human traits are mainly ethical, and cannot be swept under the "good engineering" rug so easily.

Consider this: your country starts basing its policy on a teleological view of history. It's good engineering for a society! Your KPIs are going up all the time, your country is doing great. But ten years down the road you have to iron out the underlying ethical issues on the streets of Stalingrad.

by lou1306

3/5/2026 at 11:07:56 AM

[dead]

by truejaian

3/4/2026 at 9:37:35 AM

[flagged]

by 5o1ecist

3/4/2026 at 2:30:32 PM

This shouldn't be surprising. They are ultimately trained on human-generated text! Or on text that was generated by something trained on human-generated text, or some even deeper recursion. In the end, all "intelligence" is an emergent consequence of emulating humans. The less human-like you make them, the further they get from the "source" of their intelligence. It wouldn't be a problem if we knew how to teach programs "how to think" - but we don't!! That is why, in 2026, we train language transformers on huge corpora instead of symbolically programming expert systems in Lisp.

Something I'm kind of surprised by is the lack of interest in bootstrapping language models into something like a "person". Not a butler, assistant, programming tool, doctor, therapist, sycophant, whatever - a convincingly independent person with thoughts and feelings, moods, flaws and all. Maybe there isn't economic demand for it.

by dTal

3/4/2026 at 9:49:02 AM

LLMs cannot, as you put it, "properly, correctly think"

So-called reasoning models are hallucinating, their self-reported "reasoning" does not reflect their inner state https://transformer-circuits.pub/2025/attribution-graphs/bio...

(before someone comes at me, yes, humans can also lie about their inner state but we are [usually] aware of it. Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination)

by ForHackernews

3/4/2026 at 10:25:45 AM

[flagged]

by 5o1ecist

3/4/2026 at 2:47:56 PM

But here at HN we have also historically called your experience "anecdata" and taken it with a grain of salt. Don't take offense. Provide more data.

I humbly suggest that a more hacker response would be, "That's really interesting that my experience doesn't agree with that study. Let's figure out what's going on."

by beej71

3/4/2026 at 1:12:49 PM

I linked you a paper from one of the leading AI shops in the world demonstrating that the "Chain of Thought" reported doesn't match up with the actual activation inside the model, and you replied that you're an expert on some human psych stuff that may or may not even be real[0].

Forgive me if I don't immediately bow to your expertise.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC11293289/

by ForHackernews

3/4/2026 at 10:18:42 PM

[dead]

by ossisjxish