3/4/2026 at 4:24:04 PM
The article nails the "why," but I think it underplays something: personality isn't a post-training artifact you can tune away. After months of agentic coding with Claude Opus / Sonnet and GPT 5.2/5.3 Codex, the differences in personality are profound and functional. Claude talks more — it's the consultant who tells you what it'll do before doing it. The GPT Codex models are the analyst-developer: you just let them go and they tell you what the hell they did. Neither is wrong; they're genuinely different approaches to working. More importantly, neither AGENTS.md files nor system prompts significantly change this. You can tweak tone, but the underlying communication pattern is baked in. Perhaps the more interesting question isn't "should models have personality" but "which model's personality most closely aligns with how you think." Some people excel with the verbose collaborator; some just want the silent executor. We accept this about human colleagues — seems overdue to accept it about models too.

by maciusr