4/6/2026 at 12:58:42 PM
Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions.
by ronanfarrow
4/6/2026 at 1:27:15 PM
Thank you for coming on HN and offering to answer questions.[a]

This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.
OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]
Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]
Is your understanding of OpenAI's current competitive position similar?
---
[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...
[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...
[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
by cs702
4/6/2026 at 5:17:08 PM
Thank you for this, very much appreciate the thoughtful response.

The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows, but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.
by ronanfarrow
4/6/2026 at 7:41:43 PM
Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.
If you have an opinion about that, everyone here would love to hear about it.
by cs702
4/9/2026 at 8:47:23 PM
UPDATE: Well-regarded people on HN are saying OpenAI's most recent GPT-5x codex model is better than Claude 5x for certain coding tasks:
by cs702
4/7/2026 at 8:08:19 AM
At this point even Google's AI search results are better than GPT - obviously this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.
by globalnode
4/7/2026 at 11:05:59 AM
Wild how different experiences people can have. Both Google's models and Anthropic's hallucinate a lot for me, even when I try the expensive plans and with web searches, for some reason, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me still is SOTA and has been since it was made available. But people keep having opposite experiences apparently; I just can't make sense of it.
by embedding-shape
4/7/2026 at 2:07:49 PM
Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model. I.e., what I used to use Google for, and when I don't want an AI to overly summarize / editorialize result data.
by ethbr1
4/7/2026 at 5:21:15 PM
Oh, that's probably because I'm a cheapskate and just use the free garbo models. I'm sure the pro version is quite good.
by globalnode
4/7/2026 at 12:25:09 AM
My guess is that the answer to your question, fantastic question, is that nobody knows. I remember having the same thoughts when Covid was first “arriving” if you will: we wanted people in the know to throw us a nugget of information, and they just didn't know.

As it turns out, and what I'm kind of going with for this LLM shit, is that it'll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.
by irishcoffee
4/7/2026 at 8:38:32 AM
How would fraud help here? Don't they just need scale of lots of customers paying a little bit? How do you fraud your way into that?
by philipallstar
4/7/2026 at 1:40:45 PM
They don't need customers when the customers are each other's companies, for example the deals OpenAI, Nvidia, and Oracle made.
by kelvinjps10
4/7/2026 at 3:56:00 PM
That's not fraud, and it's not sustainable. They aren't going to just keep doing that. It only makes sense if an AI company wants to pay for GPUs with stock, and - more importantly - the GPU company agrees to sell in exchange for stock.
by philipallstar
4/7/2026 at 11:10:40 PM
s/fraud/corrupt, illegal $something.

If you're picking on my vocabulary, that's fair. Fraud wasn't the point, I think you're smart enough to realize that.
by irishcoffee
4/8/2026 at 6:15:55 PM
I appreciate the implication that either you're right or I'm stupid, but maybe you should write the comment you meant to write.

Trading shares for GPUs is not corrupt either.
by philipallstar
4/7/2026 at 12:37:10 AM
Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
by Ericson2314
4/7/2026 at 1:11:30 AM
I didn't ask him to evaluate them. I asked him how customers and partners perceive them.

He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.
I'm curious.
by cs702
4/7/2026 at 4:15:02 AM
Much of the article and general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't really command attention the same way as someone "controlling your future".
by bloppe
4/7/2026 at 9:07:00 AM
If you were in charge of deciding what should be done with Sam Altman, what would you choose?
by keepamovin
4/7/2026 at 4:44:32 PM
I mean, it's a fair question, though it does make some wonder how extreme the answers could be, so I could see why you're being downvoted.

The problem is that sometimes, on paper, everything people like Sam Altman do is legal, despite it harming so many. We've literally had a major RAM producer pull out of the consumer RAM market. I feel like Sam Altman should be investigated and heavily scrutinized. He kind of is the biggest bubble in the AI bubble, and we're letting him fester too far into it; these circular deals have seemingly somewhat stopped for now, but it might only get worse.
by giancarlostoro
4/8/2026 at 3:20:47 AM
Totally. Lying about others can be so harmful. But lying to hostiles in order to protect? Acceptable.

I guess my question was more: if the article author was the judge of fate or morality, what should happen?
As to AI and Sam, I think it’s too early to tell what the effects will be. So we should adopt non-judgement, build good ourselves, and see what unfolds.
by keepamovin
4/6/2026 at 9:26:54 PM
Many of us prefer OpenAI's Codex, because we think it's a better product.

No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It's better at quality code.
by unsupp0rted
4/6/2026 at 9:35:10 PM
Who is “us”? It does seem that some scientists prefer Codex for its math capabilities, but when it comes to general frontend and backend construction, Claude Code is just as good and possibly made better with its extensive Skills library.

Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.
by mliker
4/7/2026 at 12:56:29 AM
As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm: Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks; I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2, Codex seems further and further ahead. The token limits are also far more generous (and it matters; I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality: the probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.

For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point: just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.
by keldaris
4/7/2026 at 1:32:20 PM
Have you tried the latest (3.1 pro) Gemini? In my experience, it's notably better for similar types of problems than Opus 4.6. However, I don't really use OpenAI products to compare.
by Scene_Cast2
4/7/2026 at 9:25:33 PM
I actually haven't. I tried Gemini 3.0 Pro in Antigravity and was disappointed enough that I didn't pay much attention to the 3.1 release; it was notably worse than Opus and GPT at the time, and much more prone to "think" in circles or veer off into irrelevant tangents, even with fairly precise instructions. I'll give 3.1 a try tomorrow and see what happens.
by keldaris
4/7/2026 at 7:58:23 AM
I've tried both against similar problems and haven't found such a clear-cut difference. I still find neither is able to fully implement, correctly and with the same inputs, a complex algorithm I worked on in the past. I'm not sharing the exact benchmark I'm using, but think about something for improving the performance of N^2 operations that are common in physics and you can probably guess the train of thought.
by physicsguy
4/7/2026 at 9:22:25 PM
I've had reasonable success using GPT for both neighbor-list and Barnes-Hut implementations (also quad/octrees more generally), both of which fit your description; I haven't tried Ewald summation or PME / P3M. However, when I say "reasonable success", I don't mean "single-shot this algo with a minimal prompt", only that the model can produce working and decently optimized implementations, with fairly precise guidance from an experienced user (or sometimes a reference paper), much faster than I would write them by hand. I expect a good PME implementation from scratch would make for a pretty decent benchmark.
by keldaris
4/8/2026 at 8:04:13 AM
Think another level of complexity of algorithm, different expansion bases plus a mix of input sources. Also not trying to one-shot it.by physicsguy
4/7/2026 at 8:20:57 PM
I can roughly guess the train of thought, and I am a bit surprised that Claude is failing you.

That said, I am puzzled by which algorithms Claude & GPT "get" and which ones they do not.
(former physicist here. would love to know the kind of things you're working on. email on my profile)
by tirutiru
4/7/2026 at 2:48:47 AM
> As a scientist (computational physicist,

Is there one that you prefer for, I dunno, physics?
by ricksunny
4/6/2026 at 10:13:28 PM
I'm in that camp - I have the max-tier subscription to pretty much all the services, and for now Codex seems to win, primarily because 1) long-horizon development tasks are much more reliable with Codex, and 2) OpenAI is far more generous with the token limits.

Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi k2.5). Cursor is still pretty good, and Copilot just really, really sucks.
by zeroxfe
4/7/2026 at 1:41:35 AM
Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
by the__alchemist
4/7/2026 at 3:34:57 AM
Cludge has been left behind by Clanker, that’s the new hotness. 45B valuation!
by outside1234
4/7/2026 at 1:07:59 PM
I've heard that poob has it for you!
by p-t
4/6/2026 at 9:47:34 PM
Us = me and, say, /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results, more maintainable code, and does a better job of debugging and refactoring.
by unsupp0rted
4/6/2026 at 9:54:28 PM
That's interesting; I actively use both and usually find it to be a toss-up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
by sampullman
4/6/2026 at 11:45:29 PM
If you want to find an advocate for Codex who can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
by SOLAR_FIELDS
4/7/2026 at 8:25:53 AM
Quite irrelevant what factions think. This or that model may be superior for these and those use cases today, and things will flip next week.

Also: RLHF means that models produce output according to certain human preferences, so it depends on which set of humans provided the feedback and what mood they were in.
by hirako2000
4/7/2026 at 2:31:00 PM
On the contrary, I very much care about what the other factions think, because I want to know if things have already flipped, and the easiest way to do so is just ask someone who's been using the tool. Of course the correct thing to do is to set up some simple evals, but there is a subjective aspect to these tools that I think hearing boots-on-the-ground anecdata helps with.
by SOLAR_FIELDS
4/8/2026 at 4:41:04 AM
Haven't done it in a while, but I've done some tasks with both Codex and Claude to compare. In all cases I asked both to put their analysis and plans for implementation into a .md file. Then I asked the other agent to analyze said file for comparison.

In general, Claude was impressed by what Codex produced and noted the parts where it (i.e. Claude) had missed something vs. Codex "thinking of it".
From a "daily driver" perspective I still use Claude all the time as it has plan mode, which means I can guarantee that it won't break out and just do stuff without me wanting it to. With Codex I have to always specify "Don't implement/change, just tell me" and even then it sometimes "breaks out" and just does stuff. Not usually when I start out and just ask it to plan. But after we've started implementation and I review, a simple question of "Why did you do X?" will turn into a huge refactoring instead of just answering my question.
To be fair, that's what most devs do too (at least at first), when you ask them "Why did you do X" questions. They just assume that you are trying to formulate a "Do Y instead of X" as a question, when really you just don't understand their reasoning but there really might be a good reason for doing X. But I guess LLMs aren't sure of themselves, so any questioning of their reasoning obliterates their ego and just turns them into submissive code monkeys (or rather: exposes them as such) vs. being software engineers that do things for actual reasons (whether you agree with them or not).
by tharkun__
4/8/2026 at 2:04:48 PM
Codex has plan mode too - /plan
by cher88
4/6/2026 at 10:31:26 PM
Any difference in performance on mobile development?
by aswanson
4/7/2026 at 12:07:03 AM
For that I'm not so sure. I tried both in early 2025 and was disappointed in their ability to deal with a TCA-based app (iOS) and Jetpack Compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
by sampullman
4/6/2026 at 11:44:51 PM
Yeah, I'm not in this "us" you speak of.
by rocketpastsix
4/7/2026 at 6:52:52 AM
Of course you're not one of "us" if you're one of "them".
by Finbel
4/6/2026 at 11:18:12 PM
I've found Claude startlingly good at debugging race conditions and other multithreading issues, though.
by zem
4/6/2026 at 11:47:31 PM
My rule of thumb is that it's good for anything "broad" and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work, like implementing a complex, novel algorithm.

LLMs aren't able to achieve 100% correctness on every line of code. But luckily, 100% correctness is not required for debugging. So it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am - I get bogged down in details and I exhaust quickly.
An example of broad work is something like: "Compile this C# code to WebAssembly, then run it from this Go program. Write a set of benchmarks of the result, and compare it to the C# code running natively, and to this Python implementation. Make a chart of the data and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# WebAssembly compilation and Go wasm libraries. I'd need to find a good charting library. And so on.
I think it's decent at debugging because debugging requires reading a lot of code, and there are lots of weird tools and approaches you can use to debug something. And it's not mission-critical that every approach works. Debugging plays to the strengths of LLMs.
by josephg
4/7/2026 at 7:26:44 AM
Many paying customers say that Anthropic degraded the capability of Opus and Claude Code in the last months and the outcomes are worse. There are even discussions on HN about this.

The last one is from yesterday: https://news.ycombinator.com/item?id=47660925
by DeathArrow
4/7/2026 at 6:10:38 AM
As some other people mentioned, using both/multiple is the way to go if it's within your means.

I've been working on a wide range of projects and I find that the latest GPT-5.2+ models seem to be generally better coders than Opus 4.6; however, the latter tends to be better at big-picture thinking, structuring, and communicating, so I tend to iterate through Opus 4.6 max -> GPT-5.2 xhigh -> GPT-5.3-Codex xhigh -> GPT-5.4 xhigh. I've found GPT-5.3-Codex is the most detail-oriented, but not necessarily the best coder. One interesting thing: for my high-stakes project, I have one coder lane but use all the models to do independent review, and they tend to catch different subsets of implementation bugs. I also notice huge behavioral changes based on changing AGENTS.md.
In terms of the apps, while Claude Code was ahead for a long while, I'd say Codex has largely caught up in terms of ergonomics, and in some things, like the way it lets you inline or append steering, I like it better now (or where it's far, far ahead: the compaction is night-and-day better in Codex).
(These observations are based on about 10-20B/mo combined cached tokens, human-in-the-loop, so heavy usage and most code I no longer eyeball, but not dark factory/slop cannon levels. I haven't found (or built) a multi-agent control plane I really like yet.)
by lhl
4/7/2026 at 11:37:35 AM
Codex won me over with one simple thing: reliability. It crashed less, had less load shedding, and its configuration is well designed.

I do regular evaluations of both Codex and Claude (though not to statistical significance) and I'm of the opinion there is more in-group variance in outcome performance than between them.
by kasey_junk
4/7/2026 at 8:35:49 AM
This is the way. E.g., IME Gemini is really damn good at SQL.
by baq
4/7/2026 at 2:39:29 PM
I have been using Codex AND Claude side by side for the same project*, with the same prompts.

Codex has been consistently better on almost every level.
* (an open source framework for 2D games in Godot 4.6 GDScript, mostly using AI to review existing code)
by Razengan
4/6/2026 at 11:00:21 PM
Not a scientist, and I use Codex for anything complex.

I enjoy using CC more and use it primarily for non-coding tasks, but for anything complex (honestly, most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.
by 7thpower
4/7/2026 at 8:33:12 AM
I’m one of those ‘us’: Claude’s outputs require significant review and iteration effort (to put it bluntly, they get destroyed by GPT and Gemini). I’m basically using Sonnet to do code search and write-ups, since it is a better (more human-like) writer than GPT, and faster and more reliable than Gemini, but that’s about it.
by baq
4/7/2026 at 12:26:01 AM
I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and I have yet to hit a limit. The weekly reset is much better as well.
by bko
4/7/2026 at 7:38:36 AM
I prefer GLM 5.1 and MiniMax 2.7. With a better harness like Forge Code, I have better results for way less money than by using GPT and Opus.
by DeathArrow
4/7/2026 at 11:55:44 AM
Usage limits are more generous and GPT 5.4 is a good model, but yes, UI/UX lags behind Claude Code. Currently I'm especially missing /rewind with code restoration and proper support for plugin marketplaces.
by jbergqvist
4/7/2026 at 8:14:49 AM
GPT/Claude/Gemini are pretty interchangeable at this point.
by KaiserPro
4/7/2026 at 8:37:04 AM
Absolutely not the case. They're complementary.
by baq
4/7/2026 at 5:13:02 PM
[dead]
by ath3nd
4/7/2026 at 7:12:44 AM
Does this work for people? To me, having a "better product" would be completely irrelevant if the use cases are evil.
by shevy-java
4/7/2026 at 8:56:10 AM
I find myself being more productive with Codex/Copilot on coding tasks, but Claude does seem to be better at planning.
by thaoanh404
4/7/2026 at 4:45:01 AM
Shill talk
by aaa_aaa
4/6/2026 at 11:09:58 PM
[flagged]
by enraged_camel
4/6/2026 at 1:46:51 PM
He’s replying on this Twitter thread - perhaps someone with an account can ask there and link his comment here?

https://xcancel.com/RonanFarrow/status/2041127882429206532#m
by brightbeige
4/6/2026 at 6:19:55 PM
Here is the actual link, not a link to some weird third-party site that can't be trusted.
by jamiequint
4/6/2026 at 11:33:14 PM
FYI xcancel is just a mirror that allows reading replies without needing an account.
by rounce
4/6/2026 at 8:09:30 PM
Whereas X can be trusted?
by SwellJoe
4/7/2026 at 12:09:17 AM
Yes? It's the data source, not a third party. How is this even a question?
by jamiequint
4/7/2026 at 2:52:20 AM
There's pedantic, and then there's needlessly pedantic.

xcancel is a valid workaround for X links on Hacker News and is sufficient for original attribution.
by minimaxir
4/7/2026 at 2:51:13 AM
X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
by SwellJoe
4/7/2026 at 10:03:27 PM
What is an "obvious reason" one might not want to log into X? I can't think of any rational reason.
by jamiequint
4/10/2026 at 6:54:30 AM
I’m assuming this is all sarcasm.
by lasky
4/7/2026 at 2:13:55 PM
Personally, I prefer Claude for coding, but I still prefer ChatGPT for hashing out ideas for my projects (which tend to be game designs). So I use both.
by cableshaft
4/6/2026 at 10:46:24 PM
It's worth noting Codex has 2x more stories than Claude: https://hn.algolia.com/?query=codex
by ed
4/7/2026 at 3:49:59 AM
But by page 5, those stories have around 50-60 karma, while Claude's page five is still 500+. (I found your comment surprising based on my daily HN reading recollection - I mostly read the top N daily and feel I only occasionally see Codex stories.)
by cloverich
4/7/2026 at 2:17:35 PM
[dead]
by qotgalaxy
4/7/2026 at 1:48:30 AM
Yeah, we moved to Claude a few months ago, mostly because the devs kept using it anyway. The Altman stuff is interesting, but at the end of the day you just go with whatever tool works.
by ATMLOTTOBEER
4/6/2026 at 4:30:47 PM
> You may want to provide proof online that you are who you say you are

Unfortunately it probably doesn't even matter here on HN, considering how predictably this story is getting brigaded down.
But yeah, it was a fantastic piece.
by georgemcbay
4/6/2026 at 9:29:14 PM
It wasn't getting "brigaded down" - it set off a software penalty called the flamewar detector. I turned that off as soon as I saw it.
by dang
4/7/2026 at 11:56:07 AM
Thank you for keeping HN sane :-)
by cs702
4/6/2026 at 5:19:21 PM
Fair request, here you go: https://x.com/RonanFarrow/status/2041203911697068112
by ronanfarrow
4/6/2026 at 4:16:43 PM
The statements around the sexual abuse allegations seemed the most puzzling to me - his sister's allegations, and claims of underage partners because he has a tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter - I guess, would you be able to talk about how you investigated?

Did you do any extra investigation into Annie's allegations? It feels to me like the unstated conclusion is that recovered memory can't be trusted, which is a popular understanding but a very wrong one, put out by the now-defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.
Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.
by taurath
4/6/2026 at 5:14:44 PM
All fair points on trauma and memory.

As noted in the piece, we spent months talking to Altman's partners, and what we found and didn't find is as described.
by ronanfarrow
4/6/2026 at 7:05:50 PM
Thanks for the response! Cheers, just fully reread the piece and appreciate your reporting.
by taurath
4/6/2026 at 9:50:44 PM
It's super neat to see you here on HN taking questions, kudos :)
by girvo
4/7/2026 at 12:54:05 AM
That's not a fair assessment. "False memory syndrome" and "repressed/recovered memory" are both outside the scientific mainstream consensus.
by gowld
4/7/2026 at 6:58:15 AM
Correct, because there truly isn't a great way to answer with certainty - there was evidence in the 80s of suggestive techniques being used by poorly trained psychologists, and there are many people who remember and then find corroboration.

There are a lot more who remember and may not have corroboration beyond themselves and their close friends or healthcare provider. Part of CSA is that usually there is very little a kid can do about evidence, as the power discrepancy is far too great. Often with rich abusers, the exact same process occurs. Perps pick victims who are vulnerable or controllable, and constantly seek power and domination. Nothing to do with the boardrooms or batch of CEO billionaires running the economy right now, certainly.
by taurath
4/7/2026 at 2:11:18 PM
[dead]by ethbr1
4/7/2026 at 2:37:07 AM
I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, finding the right balance between disclosing that they were unable to corroborate the allegations and not dismissing them.

That said, "recovering" memories as a therapy does not pass any sort of sniff test, and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers, and that makes them very vulnerable.
Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?
I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all types of health, from ear seeds to treatments for "chronic Lyme".
Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.
Correct me where I'm wrong, I'd like to learn your perspective, maybe there's a missing piece.
by fontain
4/7/2026 at 2:42:22 PM
What's the evidence that Annie is rich?
by whamlastxmas
4/7/2026 at 7:39:50 AM
> "recovering" memories as a therapy

Recovered memory therapy was a discredited hypnotherapy that leaned heavily on suggestion, or was often associated with fairly coercive interrogations, during the 80s CSA panic - https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria
> Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them.
Agree, though I think the mechanism can be a bit more towards the idea of a “recovery” of traumatic memory, even if the term as understood carries false connotations.
The concept you’re missing is dissociation, and dissociative disorders. In the 40s it was called just “hysteria”, and for many cases up to the late 90s an extreme form was called multiple personality disorder, now DID (dissociative identity disorder). https://en.wikipedia.org/wiki/Dissociative_disorder
Not everyone who goes through traumatic events will respond via dissociation of identity, and indeed not all people are equally capable of developing a dissociative disorder: two people may go through very similar events (say, survive a war as siblings or even twins) and one might dissociate the traumatic experience while the other might not. Dissociation doesn't work quite like you might imagine from a term like "multiple personalities" - that happens in some extreme cases, but think of identity dissociation as an adaptive response to events or situations that are paradoxical (especially to a child's mind), extreme, or traumatic, and that can't be escaped or handled by other mechanisms.
Dissociation is on a sort of spectrum, where at one side you have common experiences like zoning out when on a common commute, and on another you have separated self-parts/alter egos to handle wildly different situations.
It’s a mechanism I frankly wasn’t aware of, and I’m not sure I would have been able to fully believe in or empathize with it, but for me, getting a diagnosis of a dissociative disorder changed my life and made a thousand things about me that I could never figure out make sense. The “model”, as I put it at the time, responded to experiment, and recognizing that I was dealing with pretty constant, heavy dissociation and different self-states with memory deficiencies helped me figure out how to work through a ton of problems that had been really intractable for me. I’m finally, after decades of ineffective therapy, able to really understand how I work.
Idk how to talk about it without sounding like I’m trying to sell the idea. But yeah, it was a mind-blowing thing to me. Over the last 20 years especially, a ton of truly respectable research has been done, and the increase in efficacy of treatments for dissociation, and trauma generally, is one of the unsung advancements for humanity in the last decade. I think the number is that around 3-6% of people meet the clinical criteria for a dissociative disorder - OSDD, DID, DPDR, or dissociative amnesia. 5x more people than have schizophrenia, 5x more than have red hair.
My favorite public clinical resource to point people to is the CTAD Clinic YouTube channel - https://youtube.com/@thectadclinic?si=5AyR5H8K8Cf2sn3C
Pretty easy-to-understand explainers from a clinician in the UK.
For a more clinical, study-based approach, this is currently the best put-together research IMO: https://www.taylorfrancis.com/books/edit/10.4324/97810030573...
The TLDR is that dissociation is an important mechanism that most people don’t know about, but it has seen a wave of research and study and is much more common than one might expect. The sad part is how often dissociative disorders correlate with abuse.
by taurath
4/7/2026 at 8:15:38 AM
Thank you very much for the details.
I’m reading more now and I think the missing piece for me is the distinction between “repressed” memories and “recovered” memories.
I understood repressed memories to be an accepted idea, distinct from “recovered” memories. I am reading that the people mentioned in your original comment rejected the idea of repressed memory altogether, and believed that everything traumatic must be remembered.
So, to me, reading that someone “recovered” memory reads like they went through a specific type of therapy intended to “find” these repressed memories. Whereas to you, “recovered” memories could be repressed memories that came back to the surface organically — whether at random, triggered, or through therapy intended to deal with dissociation. Is that right?
by fontain
4/7/2026 at 3:11:50 PM
I'm confused by what you're saying. Can you help me reconcile your first post
> It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation.
with
> Recovered memory therapy was a discredited hypnotherapy
I read your first post as standing up for recovered memory therapy and I can't find how the discussion of dissociation makes a difference. Does Fontain have it right that by "recovered memory" you mean "things people happened to remember on their own"?
by Noumenon72
4/7/2026 at 2:33:03 PM
False memories are much, much more common than actual recovered memories, unfortunately. OCD is a really common cause of it. People think of OCD as a physical thing, but for many people it presents as emotional rumination and can lead to false memories.
by cm2012
4/6/2026 at 10:16:47 PM
[flagged]
by hello_humans
4/6/2026 at 10:59:16 PM
Hi Ronan, thanks for the article and for answering questions.
My question is, how do you know when an enormous project like this, conducted over an 18-month time span, is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?
by jzymbaluk
4/8/2026 at 4:20:57 AM
The answer is that there really is no easy answer. It's an evolving assessment based on a complex matrix of considerations.
You try to reach a critical mass of detailed, rounded understanding of a central question, integrating the most meaningful perspectives, interrogating the weak points and blind spots, and backing up the assertions with documentary evidence or strong sourcing. Eventually, you reach a point where enough sources and materials are reliably triangulating toward the same truths.
As you guessed, there are external pressures that figure into this analysis—whether competitors are closing in on the same leads; what's happening in the broader news cycle that might make a story feel more or less relevant. As you also guessed, I am more fortunate than most writers in the degree to which I get to hold off until something feels fully baked. Mostly, writers simply have to hit a deadline, and resources run out before ambition does. I have deadlines and constraints too, but I get a lot of say in how I organize all of the above.
Then there's the actual process of creating the story. Writing a densely evidence-based investigative piece is labor-intensive—in this case, weeks of initial drafting, and then much iteration. The fact-checking process at the New Yorker is exhaustive, and can span weeks. Every sentence, assertion, and piece of underlying sourcing gets scrubbed by multiple independent pairs of eyes. This story had four fact-checkers working on it for the better part of a two-week period, pulling very long hours. This is all brought together in a closing meeting where each sentence is revised and polished in a group.
This is all done as additional information comes in—in fact, with these large-scale bodies of reporting, there is very often a snowball effect, where a lot comes in at the end.
by ronanfarrow
4/7/2026 at 12:50:47 AM
I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
by cm2012
4/7/2026 at 1:09:25 AM
Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
by ronanfarrow
4/9/2026 at 10:31:06 PM
In this industry, truly anything can be rationalized.
by littlexsparkee
4/6/2026 at 10:16:08 PM
Hi Ronan, appreciate you being here. What would help you and others continue to do journalism like this (including commenting on HN)?
by fblp
4/7/2026 at 1:15:37 AM
This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.
I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.
by ronanfarrow
4/7/2026 at 12:54:12 PM
Only anti-trust action against big tech to break their ad monopoly (to make journalism profitable again) and breaking up media conglomerates (to reduce concentration of power in the journalism industry) can save journalism from becoming just a mouthpiece for the powerful. These things can only happen through politics. We need a political solution to save journalism.
by wilkommen
4/7/2026 at 1:47:14 AM
Got it! Any recommendations on who to subscribe to? Any personal links for you?
In developer communities you can often support individual developers or groups through a monthly subscription/donation on their GitHub page or similar.
by fblp
4/7/2026 at 2:22:54 AM
Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
by mplanchard
4/7/2026 at 7:43:51 AM
The New Yorker also comes with Apple News+ subscriptions (part of an Apple One plan that many people get for extra iCloud storage), which further includes a number of top-tier and local news orgs such as the Wall Street Journal, LA Times, SF Chronicle, Times of London, etc.
The Sam Altman piece can be read here: https://apple.news/APTX4OkywRWeJXIL7b8a7zQ
by ilamont
4/7/2026 at 7:52:00 AM
Drop Site News, 404 Media, Boston Review, The Intercept, and Atavist are all very worth supporting.
by t0lo
4/8/2026 at 2:59:24 AM
Yes, I support 404 Media.
by trinsic2
4/7/2026 at 3:47:23 AM
Treating quality investigative reporting like the scarce resource that it is: as one of its most well-known practitioners, can you shed any light on why Reuters would devote resources to commissioning investigative reporters to unmask Banksy (in a world where all things Epstein represent an unending source of investigative opportunities in the public interest)?
by ricksunny
4/7/2026 at 5:58:16 PM
Because "the public interest" is more widely defined than you think.
by HeyLaughingBoy
4/7/2026 at 9:40:05 PM
I'm all ears:
1. Feel free to share why unmasking Banksy was in the public interest.
2. Whether you feel all other public-interest priorities had been served by investigative reporting prior to commissioning his unmasking.
by ricksunny
4/7/2026 at 10:11:38 PM
I have no idea, nor care, whether or not unmasking Banksy, specifically, was in the public interest. My only point is that it's not limited to topics that you consider important.
As for your #2, that seems reminiscent of "why are we going to space when there are so many problems here on Earth."
by HeyLaughingBoy
4/6/2026 at 11:49:58 PM
Ronan Farrow on Hacker News. Now I’ve seen everything.
by sebmellen
4/7/2026 at 1:12:07 AM
I’ve really appreciated how substantive and polite the discourse here is, overall!
by ronanfarrow
4/7/2026 at 2:22:11 AM
I'm a mod here and wanted to let you know 2 things:
(1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*
(2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector"), but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people often assume the opposite.
(* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)
by dang
4/7/2026 at 10:46:25 AM
> beta feature that displays a colored line to the left of new comments (since you last viewed the page)
Can't wait until this is released!
by ta8903
4/7/2026 at 2:38:30 PM
If you don't want to wait, the Refined HN extension has had this feature for a long time, but it is device-specific.
by nickthegreek
4/7/2026 at 8:59:31 PM
Thank you so much. I’ll be answering more of them throughout the day as I have time.
by ronanfarrow
4/7/2026 at 3:42:50 AM
Not a question, but just wanted to make sure you saw this:
https://theonion.com/anyone-else-have-those-weird-dreams-whe...
by tootie
4/7/2026 at 1:58:54 PM
Also this exclusive interview with the man himself:
https://theonion.com/the-onions-exclusive-interview-with-sam...
Includes gems such as:
Q: What informs your personal sense of morality?
A: Previous things I’ve gotten away with.
Q: Why did you decide to devote your life to AI?
A: I just saw so much suffering in the world that needed to be automated.
by arduanika
4/7/2026 at 10:11:35 AM
It’s good to have you! We try to keep it civil :)
by sebmellen
4/7/2026 at 12:57:37 AM
We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.
All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
by philip1209
4/7/2026 at 1:11:18 AM
> whether the company that branded itself as the ethical AI lab actually is one
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine which is practically unicorn rarity in corporate America.
We have worked for FAANG so I know where they're coming from; this got me to drop my cynicism for once and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
by solenoid0937
4/7/2026 at 2:03:51 AM
Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.
by root_axis
4/7/2026 at 6:33:47 AM
My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
by fwipsy
4/7/2026 at 6:45:50 PM
Eventually something like what happened with the DOW might happen again (I hope not) and the IPO will leave them beholden to shareholders.
If the leadership doesn’t bend, it might get replaced. It’s annoying. I think Claude is atm the best AI assistant, by far.
by kranke155
4/7/2026 at 10:04:01 PM
Anthropic is a public benefit corporation. This protects them from legal pressure from shareholders. Doesn't really help with market pressure/value drift though.
by fwipsy
4/9/2026 at 2:23:40 PM
Yeah, and that's why they got rid of their commitment to safety, so they can stay cutting edge?
by freejazz
4/8/2026 at 3:10:22 AM
Power doesn't corrupt, it reveals. (I'm pretty sure this is a Stoic axiom.)
by trinsic2
4/7/2026 at 3:20:58 AM
> every engineer in the bay area has a way of framing the business they work for as a benign force for good
This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.
by JumpCrisscross
4/7/2026 at 11:22:15 AM
It should perhaps be generalized as "employees usually match the general consensus of their peer group". Before everyone considered Meta to be ersatz drug dealers, they'd report that they feel what everyone feels.
Google was "do no evil" until they had to choose between that and making the money. The culture has to be not only professed but tested.
by bombcar
4/7/2026 at 1:20:39 PM
Depending on what part of Google you work for, you can absolutely feel good about what you do. The vast majority of employees don't work on ads or adjacent areas. I've never seen another company actually care for non-profit-related externalities so much. People talk about it like it's the same as Halliburton or Oracle and that's not true.
by surajrmal
4/7/2026 at 3:00:03 PM
The snide response is "of COURSE you can care about non-profit related externalities when your giant evil ad business is bringing in absolute dump loads of cash".
And there's something true there; few companies are Snidely Whiplash evil (maybe the lawnmower but even that is just what it is) - and having large amounts of cash affords you options in many areas.
by bombcar
4/7/2026 at 3:33:56 AM
TBH I have worked at multiple FAANGs and I don't know anyone other than maybe new grads who actually drank the koolaid.
Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.
So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, that share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.
by solenoid0937
4/7/2026 at 9:22:13 AM
Indeed. The bad behavior is emergent, where most individual intentions are good. Good story, bad outcome.
by rapnie
4/7/2026 at 4:33:28 AM
I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter at all, except two very, very thin lines: namely, mass surveillance of American citizens and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.
by Bolwin
4/7/2026 at 6:36:08 AM
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
by fwipsy
4/9/2026 at 2:24:40 PM
Those are?
by freejazz
4/7/2026 at 1:40:18 PM
The idea that it's not okay to arm the military is a position of privilege. The ethical issues are around how the military chooses to use its abilities, not around giving them the tools to do their jobs. We're talking about folks who are willing to give their lives up for others. If you're not going to serve yourself, you should at least be willing to help them live. This has nothing to do with whether or not you support the political uses of the military. If World War 3 breaks out and you are forced to serve, you may find yourself feeling differently.
by surajrmal
4/7/2026 at 2:18:15 PM
Yes, and... that's a position of privilege that anyone in that position should ethically take.
It's unfair to sweep provision of methods to the military under a "respect the service" catch-all justification.
Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.
There's a reason the US stopped offensive chemical, biological warfare, and tactical nuclear device research and production -- effective capabilities will be used if they exist.
by ethbr1
4/7/2026 at 10:11:15 PM
With respect to the weapons programs, I'm not a historian, but I was not under the impression that the US stopped development of these weapons unilaterally or out of good will. My understanding is that it was due to a mixture of not perceiving a need or use for the capabilities, along with formal or informal international cooperation eliminating the need for deterrence.
Just a couple of thoughts, since it seems like the next issues in this space are rapidly arriving or already here.
by ahepp
4/8/2026 at 1:39:07 PM
As far as I've read the literature from the 60s and 70s, tactical nukes were eventually eliminated in order to assuage western Europe's concerns that large portions of their countries would be turned into irradiated wastelands for decades / centuries if war erupted between the US and USSR.It was also the product of perceived overmatch on both sides -- the Soviets believed they had superior mass of armored formations (and they did), while the US and allies believed they had technological supremacy (and they did). Ergo, neither needed tactical nukes.
It didn't hurt that it helped both in the eyes of the then vehemently anti-nuclear European movements.
Offensive bio and chemical weapon limitation is a more nuanced decision.
In both cases, their primary use was either local mass lethality or terrain denial, neither of which were important in the then-gelling American doctrines of maneuver.
The sole use case they seemed viable for was industry denial (e.g. contaminate a high capital cost industrial center), a task at which strategic sized nuclear weapons were equally adept (and more easily stored). So, if you had to have strategic nuclear weapons for deterrence, and they were capable of the same task, why have fiddly bio and chemical weapons?
But in both cases there was also a constant radiant pressure of scientists and the public campaigning against them, and being unwilling to work on or tolerate them.
Absent that, who knows how history would have turned out? Normalization is a powerful opinion shifter.
by ethbr1
4/8/2026 at 3:26:41 AM
I'd feel much better about supporting the military actions of the people who become part of that system if they exercised some fucking free will and didn't follow criminals in our government into wars that do not support our people or our country. We have a serious problem in our government, and its being connected in any way with what is happening in that institution gives me great pause in believing in the people of this country. People are stupid not to fight this government tooth and nail.
by trinsic2
4/7/2026 at 8:42:24 AM
If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
by __alexs
4/7/2026 at 7:04:12 AM
I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.
So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.
by DirkH
4/8/2026 at 3:30:42 AM
You can still be kind and guard against idiot strongmen. Doing the right thing, for the right reasons, always brings peace. There will always be people who take advantage of that, but if you stay in the right frame, eventually the people who matter will see it. And at the same time, that's not always the case, and then I just chalk it up to divine intervention or some larger purpose I am unable to see with my limited mind.
by trinsic2
4/7/2026 at 2:41:58 PM
Maybe people inside the company think Anthropic behaves ethically, which says something scary about either their ethical standards or their general awareness, considering how much documented unethical behavior we've seen from Anthropic leadership.[1]
[1] "Unless Its Governance Changes, Anthropic Is Untrustworthy" https://anthropic.ml/
by MichaelDickens
by MichaelDickens
4/7/2026 at 4:08:42 PM
> the company actually is ethical and safety conscious everywhere
Anthropic is emphatically not safe. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like safe -- because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing, because there have been hundreds of quite destructive cults and political parties whose members believed that theirs was the most ethical and benign organization ever.
The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor, or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.
All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.
The current crop of services provided by the leading AI labs are IMHO positive on net in their effect on people and society, but the leading AI labs are spending a large fraction of the hundreds of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models that are much more powerful than the ones they have now, which is when most of the danger would manifest.
The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.
by hollerith
4/7/2026 at 1:56:17 AM
I think cynicism is deserved just from observing Dario's remarks.
by foolswisdom
4/7/2026 at 2:21:34 PM
I'm curious — how do the ethics and safety consciousness manifest themselves there? Is it more cultural or process-driven? Do you have any examples?
by victorevector
4/7/2026 at 10:01:43 AM
> the company actually is ethical and safety conscious everywhere
I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt as to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread; it's a recurring pattern on Reddit.
by jarek-foksa
4/7/2026 at 11:48:09 AM
Are your friends also credited in Silicon Valley (2014)?
by keybored
4/7/2026 at 2:48:50 AM
[flagged]by hypersoar
4/7/2026 at 3:24:39 AM
It might stick, tbh. Their PBC+LTBT structure severely limits the power of shareholders. https://www.anthropic.com/news/the-long-term-benefit-trust
by xvector
4/7/2026 at 1:05:54 AM
For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and an erosion of even some of Anthropic’s commitments.
by ronanfarrow
4/7/2026 at 2:11:51 PM
I think you might be surprised that more and more software engineers are souring on Anthropic (the company) and the decisions the company has made recently. Not the whole drama with the US government, but them locking down the usage of plans to their own tooling.
That really rubbed a lot of people the wrong way, as ultimately one might have a favorite tool, then suddenly be forced to use another tool.
by intothemild
4/7/2026 at 1:09:23 AM
There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before on some pretty significant topics that will be impacting you, me, and pretty much everyone we know not only today but well into the future.You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.
But just because that's true doesn't mean this article isn't very much relevant and needed.
Because it is.
by giwook
4/7/2026 at 1:35:04 AM
The New Yorker has given Anthropic plenty of coverage in issues earlier this year.
by freely0085
4/7/2026 at 1:47:07 AM
After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
by k1m
4/7/2026 at 2:52:07 AM
"how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - Robert Harris
Not making any value judgements, but I can see how one might value their interpretability research more highly than what the CEO says, in a time when the corrupt, criminal executive branch is muscling into everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.
by mptest
by mptest
4/7/2026 at 7:13:23 AM
They are a private company. They have zero obligation to sell anything to any part of the government or military. The only reason they are involved in "public affairs" is because they want to profit from the government. Moreover, long before this DoW controversy, they had plenty of nationalist and anti-China rhetoric in their press releases, more so than the other AI firms.
by morpheuskafka
4/7/2026 at 2:34:59 PM
The other explanation besides profit is that they're true believers that democratic militaries should be stronger than the militaries of dictators around the world, including in AI capabilities.
by cm2012
4/8/2026 at 3:35:27 AM
I appreciate your point about this. At the same time, AI doesn't discriminate. It's going to help the democratic and the fascist alike.
by trinsic2
4/7/2026 at 3:30:40 AM
Seriously, blame anyone other than the fucking abuser. These people
by whattheheckheck
4/7/2026 at 12:18:16 PM
Not sure that quote has aged well, coming from a close personal friend and spirited defender of Peter Mandelson.
by nswango
4/7/2026 at 1:52:16 PM
“I was only following orders”—not a legitimate defense for some footsoldier.
“I had the burden of impacting public affairs through my wildly successful corporation”—poor them.
by keybored
4/9/2026 at 2:26:56 PM
It's not sneering. Anthropic constantly puts itself out as some sort of moral arbiter when they are no different from any other business, as your quote suggests.
by freejazz
4/7/2026 at 1:15:20 AM
OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
by basisword
4/7/2026 at 1:27:28 AM
We should stop talking about potential problems or perpetrators when we have talked about them “enough”?
That would be irrational.
We should give air time to other problems?
I think everyone agrees with that.
You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.
by Nevermark
4/8/2026 at 3:38:28 AM
Fuck that. The whataboutism term is used too much to sideline and marginalize different viewpoints. I really hate this use of it. We are too engaged in being right about everything.
by trinsic2
4/7/2026 at 1:35:44 AM
[flagged]
by _HMCB_
4/7/2026 at 1:39:55 AM
[flagged]
by easterncalculus
4/7/2026 at 1:02:01 AM
Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.
Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.
So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.
by xvector
4/7/2026 at 1:42:11 AM
Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.
For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.
Do you see a path through this?
by tbagman
4/7/2026 at 12:50:52 AM
I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation," but the attribution is tucked inconspicuously into the middle of the sentence (rather than, say, leading upfront: "According to someone with knowledge of the conversation, Altman..."), and Altman's non-recollection appears only parenthetically.
by aragonite
4/7/2026 at 1:02:32 AM
[flagged]
by replytofarrow
4/7/2026 at 2:43:16 AM
I know why the cantilevered pool statement is there and why you mentioned it.
I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.
by tsunamifury
4/7/2026 at 7:22:46 PM
Now I want to know more...
by worldwidewes
4/7/2026 at 3:13:25 AM
> in 2014, [Graham] had recruited Altman to be his successor as president.

> [Graham's] judgment was based not on Altman's track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.
One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.
by f154hfds
4/7/2026 at 2:36:53 PM
Paul answers that here: https://x.com/paulg/status/2041363640499200353?s=20
by cm2012
4/8/2026 at 5:09:18 AM
This is PG's response about Sam leaving YC. It says nothing about the original motivation for offering him the position at YC.
by dmux
4/7/2026 at 4:08:21 AM
Perhaps your question answers itself.
by sonofhans
4/7/2026 at 12:37:06 AM
Just wanted to say what an incredible person you are! Catch and Kill and the related reporting were awesome too!
by egonschiele
4/7/2026 at 1:11:34 AM
This is so appreciated, thank you! These stories can honestly take a lot out of me, so thoughtful reactions mean a lot.
by ronanfarrow
4/6/2026 at 1:10:53 PM
Great reporting.

Altman describes his shifting views as a genuine, good-faith evolution of thinking. Do you believe he has a clear North Star behind all this that's not centered on himself?
by cmiles8
4/6/2026 at 5:28:41 PM
The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.

My own impression, after many hours of conversation, is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.
However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.
by ronanfarrow
4/8/2026 at 10:20:40 PM
"I'm a sophomore and I'm coming!"
by scyzoryk_xyz
4/6/2026 at 1:18:21 PM
(Other people's) money.
by i7l
4/6/2026 at 11:43:36 PM
This is brilliant work, guys. Did you get any pressure to soften or spike the story?
by felixgallo
4/7/2026 at 1:19:12 AM
I won't get into behind-the-scenes specifics here, but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I'm used to getting a lot of blowback, and it's never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
by ronanfarrow
4/7/2026 at 4:09:04 AM
Hey, just want to say thanks for the piece and for all the hard work and effort you put in to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the workload (for me, at least). So thanks for going through all the effort and pain to get it out; really appreciate all the work you do for me and the rest of Joe Public.
by Balgair
4/7/2026 at 6:30:19 PM
I am appreciative of your work on this piece. I'd love to see one that goes deeper into Dario Amodei, perhaps even a series of profiles on the central figures of this AI era.

Is this something you've thought about?
by AquinasCoder
4/7/2026 at 11:13:38 AM
If you want another story to run, I'd really love to see an investigation into how these companies are convincing governments that the only path to global dominance is achieving 'AGI' first, and how much that contributes to the reckless acceleration of AI software and infrastructure development.

A good exposé on accelerationists and e/accs, and which of the elites fall into this group, is direly needed as well.
by trunch
4/7/2026 at 8:44:14 AM
Hi Ronan. TCatK is a phenomenal book, not only in exposing the wrongdoing of powerful people, but also in presenting the meta-issue of how hard it was to get the word out, and you handled it all with nuance. You're about as close as I have to a personal hero.

Long-time HN lurker; made an account just to say that :)
by antirealist
4/7/2026 at 2:23:08 AM
Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
by euio757
4/7/2026 at 6:00:08 AM
It was mentioned, but not by name.
by shinryuu
4/7/2026 at 5:20:13 AM
As someone on a budget, how can I pay for good journalism when it's so spread out across various (expensive) outlets?
by gib444
4/7/2026 at 7:13:11 AM
Paying for one is doing more than paying for zero.

It's not your responsibility to fund every single outlet; just find the one you like the most and subscribe to that one.
by input_sh
4/7/2026 at 4:03:00 AM
The last couple of sentences tie things up really nicely.
by Uhhrrr
4/8/2026 at 4:23:33 AM
Appreciate this. I thought deeply about that and took a lot of time, when we had little of it to go around, iterating and shaping it. (Including talking to computer scientists to make sure I sounded as not-dopey as possible on the technical side!)
by ronanfarrow
4/7/2026 at 2:32:43 AM
Hi Ronan, absolutely wild to see you here in the belly of the beast.

I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.
I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.
Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?
Can’t wait to read this one, and hope the HN crowd treats you well.
by mplanchard
4/7/2026 at 2:32:35 AM
Have you considered doing a piece on Aaron Swartz? Timnit Gebru? Michael O. Church?
by bck102
4/7/2026 at 3:25:35 AM
It could be titled "Hypergraphia"
by doctorpangloss
4/7/2026 at 8:29:57 PM
Ask him about the "hermetic order" (whether emergent, convergent, or by design?) one discovers in interiority studies of GPT.
by spacebacon
4/7/2026 at 2:10:29 AM
What model was used to create the visual at the top of the article?
by jharohit
4/6/2026 at 11:05:04 PM
Any plans to tackle any of the other folks who might be mentioned in the same sentence as Altman, like Dario Amodei?
by giwook
4/6/2026 at 11:08:59 PM
[flagged]
by mathisfun123
4/6/2026 at 11:21:08 PM
I think the comment was out of legitimate interest rather than weighing one against the other.
by yakkomajuri
4/7/2026 at 2:35:04 PM
^
by giwook
4/7/2026 at 1:04:33 AM
Huh? It's a genuine question. The article is great and the writer did a fantastic job.

Please try to give people the benefit of the doubt, though I know it's hard in today's society.
by giwook
4/6/2026 at 11:50:42 PM
Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI, have fundamentally altered the public perception of OAI? Are they the baddies now in general public opinion?
by _alternator_
4/7/2026 at 9:18:01 AM
Great article. Thank you for fielding questions. And please don't stop; your work is great.
by Akuehne
4/7/2026 at 11:47:23 AM
Please ask The New Yorker to extend some of their very generous subscription sale prices to Canada. I would subscribe to print if even a single sale applied to us, but the sales are always USA-only.
by VladVladikoff
4/6/2026 at 3:25:45 PM
In-depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago, OpenAI was ascendant; now it's, at best, stalling and, more likely, trending toward irrelevance.
by xnx
4/6/2026 at 10:49:19 PM
Love the visual. Fantastic.
by Stevvo
4/7/2026 at 10:20:38 AM
Do we have a choice?
by Bombthecat
4/7/2026 at 3:18:10 AM
Hey, I loved that Ricky Gervais joke about you at the Globes.
by artursapek
4/7/2026 at 7:33:28 AM
For those who don't know or remember:

"Tonight isn't just about the people in front of the camera. In this room are some of the most important TV and film executives in the world. People from every background. But they all have one thing in common: They're all terrified of Ronan Farrow."
by e40
4/7/2026 at 4:21:47 PM
How do you feel about the title of your article? I assume an editor chose it.

Clearly he's straight-up evil; between tanking the global economy, constantly lying, and raping his three-year-old sister, it feels really disingenuous to me to frame this as an open question.
by popalchemist
4/7/2026 at 5:55:56 PM
Seems a bit conspiracy-theorist to me.
by noodlebreak
4/7/2026 at 10:00:26 AM
The article is paywalled; where can we read it?
by saberience
4/7/2026 at 2:25:52 PM
As bad as Altman might be, he's just another sociopathic tech bro.

I'm far more concerned with the 25-million-dollar personal bribe OpenAI president Greg Brockman gave Donald Trump for his reelection. The fact that a tech company can directly influence the outcome of an election is evil, far more evil than Altman's shenanigans.
by AIorNot
4/8/2026 at 4:13:59 AM
Exactly. In fact, I'd say that everything said about Altman is a misdirection from that one point.

People want to focus on scapegoats rather than systemic problems.
by trinsic2
4/7/2026 at 2:57:43 AM
From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based on what it says rather than whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.

My prima facie view on Altman has been that he presents as sincere. In interviews, I have never seen him make a statement that I considered a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of them. About the only truly agreed-upon aspect is how persuasive he is.
I can definitely see the possibility of people feeling they have been lied to if they experienced a degree of persuasion they are unaccustomed to. If you agree to something you feel you otherwise wouldn't have, I can see you concluding that you were lied to rather than accepting that you had been intellectually beaten.
In all such cases where an issue is contentious, you should ask yourself what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.
I think you will agree that there is no smoking gun in this article; it is just a laying out of the allegations. Evaluating allegations becomes tricky because it becomes a judgement of the character of those making the claims.
I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.
I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.
While I do not have sources to hand (so I will not assert this as true, just as my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, that the board was supposed to protect against that, and that since the 'blip' it was evident another mechanism is required. I also recall an interview in which Helen Toner suggested that they effectively ambushed Altman because, if he had had time to respond to the allegations, he could have provided a reasonable explanation. It did not reflect well on her.
I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange." Why use a term such as 'conveyed,' which might imply there was no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel the information had been conveyed, while at the same time Altman has no experience of it to recall. Nevertheless, it results in an impression of Altman being evasive. If the text contained "Altman told Mira Murati," then no such ambiguity would exist.
"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board." Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim that he took advice from people he trusted. I assume the board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes, so any claim of a 'shadow board' with power is nonsense; and if it is a condemnable offence, is the same not true of the aligned bloc of board members who removed him?
Josh Kushner apparently made a veiled threat to Murati. The claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but absent any other indication in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not either of those two people.
The claim of sexual abuse says, via Karen Hao, "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that, without some discussion of the scientific opinion on previously unremembered events being recalled during flashbacks, seems journalistically irresponsible.
by Lerc
4/7/2026 at 9:49:46 AM
Paul Graham's latest public statement on the issue:
by nickpp
4/7/2026 at 8:06:37 AM
I have experience in dealing with Sam Altman-like behavior. I hope to explain how these tactics unfold.

> I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.
There are two angles to this: from an individual perspective and from a collective one.
One's interaction with such a manipulator isn't a single shot. There is no single event at which they are "beaten." First, one gets persuaded; you might argue that there's nothing wrong with skillful persuasion. At some point, they realize that reality is not in line with their expectations. They bring the point up with the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time, the manipulated party realizes that things are still not going in the direction they desire. This time they ask for even more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. By that point, however, the manipulator has run out of "politically correct" "persuasion tactics," and tells blatant lies to make the other party behave.
From a collective perspective, even those "politically correct" "persuasion tactics" are discovered to be lies, because what the manipulator told different parties is in direct opposition: the statements cannot all be true.
> Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.
I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right one. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of others first, no doubt Altman would have manipulated them successfully.
It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would better avoid.
by laserlight
4/8/2026 at 4:04:24 AM
I want to add something about the idea of persuasion. Not that I think you are failing to do the word justice, or that you are for or against using the tactic.

Here is the etymological definition of the word:
persuasion(n.) late 14c., persuasioun, "action of inducing (someone) to believe (something) by appeals to reason (not by authority, force, or fear); an argument to persuade, inducement," from Old French persuasion (14c.) and directly from Latin persuasionem (nominative persuasio) "a convincing, persuading," noun of action from past-participle stem of persuadere "persuade, convince," from per "thoroughly, strongly" (see per) + suadere "to urge, persuade," from PIE root *swād- "sweet, pleasant" (see sweet (adj.)).
Meaning "state of being convinced" is from 1530s; that of "religious belief, creed" is from 1620s. Colloquial or humorous sense of "kind, sort, nationality" is by 1864.
IMHO, if you aim to convince people of something, you are on the side of trying to control people's freedom to choose. That in itself is a form of being unethical toward the idea of truth.
If you can't let people come to their own conclusions, you got problems and you shouldn't be in a position of power.
In my experience, the people who spend the most time convincing are people with narcissistic personality disorders. I stay far away from those people because I know they don't really value truth and justice like I do.
by trinsic2
4/7/2026 at 12:56:19 PM
[flagged]by whynotminot
4/7/2026 at 1:51:17 PM
I'm sorry that it wasn't clear. I didn't mean to imply that I was going to connect it to Sam Altman. I specifically wanted to address why it wasn't the case that people were "intellectually beaten" by Sam Altman.

> except the one you imagine is true

I'm not sure what you mean. I described an example of manipulation that I witnessed. I later learned that these are common tactics employed by con artists, scammers, etc.
I'm not sure what you mean. I told about an example of manipulation that I witnessed. I later learned that these were common tactics employed by con-artists, scammers, etc.
> Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.
I don't need first-hand experience with someone to understand that they are a manipulator. I am comfortable forming my opinion based on reports.
by laserlight
4/7/2026 at 6:44:14 PM
I think sometimes you have to look at the patterns rather than at a single claim. If a large number of people who are completely unrelated describe very similar experiences with Altman, you can take that as a good indicator of his general character.

And if this tendency to misunderstand or be misunderstood always results in Altman gaining more power, even if we give him the benefit of the doubt and say he doesn't do it on purpose, it's still a big problem, given the responsibility he has.
The article also mentions many moments where Altman apparently straight-out lied, as opposed to being "very persuasive." If you believe those sources, then I don't think it's possible to also think he's sincere. I cannot open the article again to get the exact quotes, but the few I remember were:

- one time he claimed he didn't send a message, while people were literally showing him the message he sent, with the confirmation of another OpenAI employee

- another time he accused people of organising a coup, saying that someone from the board had informed him, and after the person from the board was called into the meeting, Altman claimed he never said those words and never accused anyone

These cases can't be put down to persuasion, to Altman changing their view, or to someone misremembering; they either happened or they didn't.
by sambuccid
4/8/2026 at 5:50:18 AM
> I think sometimes you have to look at the patterns rather than at a single claim. If a large number of people who are completely unrelated describe very similar experiences with Altman, you can take that as a good indicator of his general character.

Yes, but that doesn't work if you look for patterns selectively. There are large numbers of people who will tell you vastly different experiences that they had with Altman. If you pick the right grouping, within it you can find universal praise or condemnation. The article itself acknowledges that.
> The article also mentions many moments where Altman apparently straight-out lied.

Does it? It has people saying he lied, and a few things he disputes having said. If the lies were clearly apparent, I think his position would not be tenable. Where in the article do they show statements that he clearly said, that were false, and that he knew were false when he said them?
The points you list are not clearly apparent lies; at most they are allegations of lies. They might just be different interpretations of the same events. I have seen instances in my own life where someone says "You said X," the other person says "No, I didn't," the first pulls up the minutes and says "See, you said X," and the other responds, "That's not what that says." You see rage-bait posts about terms and conditions that take this form all the time: someone misreads a legal term as meaning something different from what it means in a legal sense, and then refuses to acknowledge the commonly accepted definition.
Please respond to this, because I really am interested in the answer, but I did read the article and I didn't see what you appear to have seen.
I have made no claim to the merits of Sam Altman; I just don't like the idea of condemning someone on hearsay and insinuation. There are videos on YouTube claiming he's had people killed. At some point you have to point at something that everyone can agree is an actual thing that happened and that actually matters. At most, what I have seen is people being able to provide one of those two points on any particular allegation.
I don't feel this should be that contentious. If it were clear there would be demands from all around saying "You did this bad thing, you must resign". Do you think that everyone dealing with OpenAI acknowledges some dark truth and is complicit?
by Lerc
4/8/2026 at 11:34:39 AM
Sorry, I didn't mean that the article has proof he lied, just that some of the situations presented cannot be simple misunderstandings.

The pieces in the article I was referring to are:
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”)
> Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.)
I agree it's very easy for two different people to understand or remember something differently, and that meeting minutes are not always a reliable source, but for me, in the two scenarios above it is almost impossible for two people in good faith to disagree:
In the first case, if you say something, and a big deal is made of it, and 5 minutes later the other person claims that you said some specific words and you deny it, then someone is lying, either you or the other person.
In the second case, if there is something written in a contract, and someone presents that contract to you, reads it out loud, and asks a colleague to confirm, then either that person made up the provision or you are lying; there is little room for misunderstanding.
Given there is no proof, I can't say he's 100% guilty, and I appreciate your rigor on this, because we don't want to end up judging everyone by a sort of "trial by public opinion."
However, outside of trials, the judgment can be more nuanced than a boolean "guilty/innocent," and to me the reasons below (*) are enough to distrust Altman and to prefer that he weren't the person at the head of a revolutionary technology that could have huge negative consequences for society, or for humankind as a whole.
(*) the reasons being:
- the number of people interviewed and their very similar experiences
- the author and the type of journalism he does
- the professionalism he has shown in calling out, in his article, the unsubstantiated allegations rivals made (for example, of murder and sexual assault)
- the power dynamic usually in place between someone with enormous power and wealth and a journalist who could be intimidated by being sued multiple times
Of course the amount/type of reasons needed to distrust someone is very personal, so we might need to "agree to disagree" on this
by sambuccid
4/8/2026 at 3:20:00 PM
That's interesting; I'm sure when I read the article it didn't specifically attribute those claims to Amodei.

To be frank, while I tend to think that Dario has good intentions, I'm not so sure about his judgement. He's made a lot of claims that haven't panned out. I haven't felt that it was due to dishonesty, but more to hyperbole.
The phrasing "Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied." is very close to the pattern I described above where someone interprets a claim as something different from what was actually said and refuses to back down. Furthermore this was prefaced with "As one person briefed on the exchange recalled" so it isn't even a first hand account. We don't know who the person doing the briefing was, but if it was one of the participants of the exchange, they would have been afforded the opportunity to reframe it to put themselves in a better light.
The second claim is potentially even more of a match for the example I gave regarding people misreading legal documentation. Was this a denial of the existence of the words in a document, or a denial that the words represented the provision that was claimed? I have seen people do this: they take the existence of the words as proof of their interpretation, and take dismissal of the interpretation as a claim that the words do not exist. I don't rightly know why people do this, but I have seen it happen. I suspect you could find an abundant supply of cases like this in the records of the world's town council meetings.
It is difficult to assess the reliability of claims made by the current administration (understating it somewhat), but one of the things said about the government's negotiations with Anthropic was that Amodei wanted to gate some AI abilities in national-security circumstances by requiring a personal phone call to him to clear them. No sane government on earth would agree to something like that; it would be an invitation to hand a corporate interest a massive point of leverage in a time of crisis.
But again I am in a similar position with Amodei. I don't have any direct knowledge of the person so I will reserve judgement. I generally like the approach Anthropic is taking but the exposure I have had to the statements made by Amodei himself has given me pause. I would not condemn him either, but I also wouldn't place a lot of stock in what he says unless I see more to create a more complete view of his character.
You note the number of people interviewed and their very similar experiences, but it's the nature of how those claims are similar that concerns me. So many of the claims seem to fall into the pattern that requires the person reporting the claim to be the sole judge of the meaning of what was said. How many direct quotes have been confirmed to be untrue? I'm open to the evidence, and perhaps this article will draw some out, but right now I see people convincing themselves of a pattern and then interpreting their own experiences in terms of that pattern.
The thing is, if you were to ask, I think Altman would agree that he shouldn't be in charge of the world's AI. I don't think any one person should, and I would treat anyone who claimed to be the right person for that job with massive suspicion. To say that's where he sits is to buy into the premise that whoever heads OpenAI controls our future. OpenAI is but one of many enterprises working on this; there are a lot of people claiming it has already lost too much ground, but then there have been many predicting its imminent collapse, like a doomsday cult rolling the calendar forward whenever doomsday doesn't arrive.
by Lerc
4/9/2026 at 5:40:15 PM
> That's interesting, I'm sure when I read the article it didn't specifically attribute those claims to Amodei.

Apologies, I didn't mean to highlight Amodei in those quotes; I just selected the sentences to have enough context without being too long, and it was a coincidence that they both started with Amodei. I'm not sure if those claims came from Amodei or not, nor do I have any specific feeling about him.
> Furthermore this was prefaced with "As one person briefed on the exchange recalled" so it isn't even a first hand account
I'll admit I somehow missed that part, but we don't know how much of this event was in "Amodei's notes" and how much was from the "person briefed on the exchange"
> The phrasing "..." is very close to the pattern I described above where someone interprets a claim as something different
> The second claim is potentially even more of a match for the example I gave regarding people misreading legal documentation
I think our difference in point of view here lies in how much trust we put in the author. With what I've seen so far, I have enough trust in the author to think he investigated these claims properly and made sure they weren't just misunderstandings, and that many of the checks he did weren't included in the article for technical or legal reasons. All the more so after reading some of his comments:
> As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
> You try to reach a critical mass of detailed, rounded understanding of a central question, integrating the most meaningful perspectives, interrogating the weak points and blind spots, and backing up the assertions with documentary evidence or strong sourcing. Eventually, you reach a point where enough sources and materials are reliably triangulating toward the same truths.
> The fact-checking process at the New Yorker is exhaustive, and can span weeks. Every sentence, assertion, and piece of underlying sourcing get scrubbed by multiple independent pairs of eyes. This story had four fact-checkers working on it for the better part of a two week period, pulling very long hours.
As I said I'm happy to agree to disagree on this point.
> So many of the claims seem to fall into the pattern that requires the person reporting the claim to judge the sole meaning of what was said
I guess that's the nature of communication between humans. Even the examples of written discussions seem contentious. The only type of claims I can think of that could fall outside this category are the ones about written contracts, but it's understandable that we don't have access to the actual contracts, and even if we did, we couldn't really prove what was verbally agreed to be put in them.
> To say that's where he sits is to buy into the premise that whoever is the head of OpenAI controls our future. OpenAI is but one of many enterprises working on this
This might start a whole new discussion, but I think being the CEO of one of the companies that produce state-of-the-art models is enough to warrant serious concern. My worry is that he (or any other CEO) won't say "stop" if a new AI is found to be more powerful but to have considerable negative impacts on society. As an analogy, it doesn't matter who has the "strongest" atomic bomb; any country that has one is a potential threat to humanity and should have rigid controls in place.
I commented specifically on Altman because the article seems to suggest he's more power-hungry, persuasive, possibly deceptive, and better connected and leveraged than the average person, or even the average CEO.
(edits: formatting)
by sambuccid
4/7/2026 at 2:38:36 PM
Paul made a statement today: https://x.com/paulg/status/2041363640499200353?s=20
It clarifies he did not fire Sam.
I overall agree with your takeaway, but this is not a criticism of the article itself.
by cm2012
4/7/2026 at 2:39:12 PM
> My prima facie view on Altman has been that he presents as sincere.
That is how pathological liars present.
by giwook
4/8/2026 at 12:38:39 AM
In what situation would I choose to use the word "presents" in that context without being aware of that fact? I am also aware that sincere people present that way.
I don't believe there is any rational way to consider the appearance of innocence as evidence of guilt.
by Lerc
4/8/2026 at 1:36:47 PM
There is a ton of evidence out there that points to guilt. No one implied the appearance of innocence was evidence of guilt (as much as I admire the creativity in your interpretation, Mr. Self-Described Altman Apologist).
by giwook
4/8/2026 at 9:13:57 PM
Quoting me selectively the way you did, together with the response you provided, made my interpretation reasonable. What other point could you have been making? You made no reference to any other evidence.
>as much as I admire the creativity in your interpretation, Mr. Self-Described Altman Apologist).
I am unsure if this is deliberate irony, or poor comprehension.
by Lerc
4/9/2026 at 2:46:04 PM
Use the multitude of search tools within your grasp. It is difficult to avoid the evidence. It may be more of a mental block than anything else.
by giwook
by giwook
4/7/2026 at 10:21:44 AM
> what information would significantly change your views
Quite simple: show me any single action taken by Sam Altman which cannot be construed as an attempt to get him more power/money/influence. You can't find it.
The difference between what he claims to believe and what he actually does is a textbook example of sociopathy.
by rglullis
4/7/2026 at 5:28:22 PM
When people are described as sociopathic, it's not about any particular lie but about the person's relationship with the truth: they will lie when it suits them and tell the truth when it suits them, and they don't seem to distinguish morally between the two. More than that, they treat people the same way, using them while it suits them and disposing of them when they become inconvenient.
by empath75
4/8/2026 at 2:58:11 AM
Perhaps that is some of the problem. That is not what a sociopath is. There are specific criteria, and while it may come as a shock to some, there are ethical sociopaths out there. They do the right thing not because they feel it to be the right thing to do, but because they think it is a rational way to live their lives.
What you are thinking of are people who do bad things. Most of those people are not sociopaths. Often they are hurting in some way; sometimes they are just living the only life they know. Most of them feel they are doing the best they can. It is extremely comforting to pathologize these people, because then it's not something we could have prevented by providing a better society. It rules out the hard options of empathizing with them, or reasoning with them to find common ground.
The term othering has come into use in recent years. The concept has existed through the ages, but that's the latest label for it.
This is what it looks like.
by Lerc
4/7/2026 at 3:13:15 PM
I cannot find a single action by anyone that cannot be construed as an attempt to get them power/money/influence. I can believe that a person's intentions are good, but I can't make everyone in the world do the same, and that is what you are asking.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him."
To play your game, he got married, had a child, and joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
You could still construe those actions as evil if you choose to see them as evil.
I'm not going to claim that Sam Altman is not a sociopath; I lack the information and the knowledge of psychology to make that determination. On the other hand, I have not detected that information or knowledge in anyone who has claimed he is a sociopath.
It seems odd that people take offense at the notion that an arbitrary person might not reach a conclusion that requires specialised expert knowledge and a decent amount of irrefutable evidence.
by Lerc
4/7/2026 at 3:45:13 PM
> I cannot find a single action of anyone that cannot be construed as an attempt to get them power/money/influence
Try the other way around, via negativa. We can definitely find plenty of examples of people stepping out of positions of power, deciding not to do something because of moral conflict, etc. Is there any case of such an action from Sam?
Fuck, anyone with any semblance of moral fortitude would refuse to take money from the Saudis. But he had no problem doing it.
> joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
No, this is selection bias. What he did was put himself in a position where he could have his fingers in any and every possible pie, and then, when one of these things turned out to be something people with money believed to be valuable, he maneuvered himself into the driver's seat.
by rglullis
4/8/2026 at 4:09:57 AM
> No, this is selection bias. What he did was to put himself in a position where he could have his fingers on any and every possible pie, and then when one of these things turned out to be something believed to be valuable by people with money, he maneuvered himself into the driver's seat.
Thank you, I was just going to point this out.
by trinsic2
4/7/2026 at 3:37:34 AM
[flagged]
by clapthewind
4/6/2026 at 10:39:19 PM
I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives, in the same way that Michael Corleone becomes the hero of The Godfather.
I'm not pleased with the headline or the general framing that AI works. The plagiarism and IP-theft aspects are entirely omitted. The widespread disillusionment with AI is omitted.
On the positive side, the Kushner and Abu Dhabi involvements (and the threats from Kushner) deserve a wider audience.
My personal opinion is that "who should control AI" is the wrong question. In its current state, it is an IP-laundering device, and I wonder why publications fall silent on this. For example, the NYT has abandoned its key witness Suchir Balaji, who literally perished for his convictions (murder or not).
by rhlannx
4/7/2026 at 1:16:54 AM
For what it’s worth, I don’t think the piece at all avoids key areas of disillusionment with the technology. Quite the contrary.
by ronanfarrow
4/6/2026 at 9:30:40 PM
Hi Ronan,
I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying for a subscription. If I could press a button and pay a reasonable one-time fee such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.
However, I'm not going to pay for yet another subscription to access one article I'm interested in.
I'm sure you can't do anything about this, but I just wanted you to know.
You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.
by FloorEgg
4/6/2026 at 10:12:32 PM
You could buy a physical copy (and this isn't meant to sound sarcastic).
by cloud_line
4/6/2026 at 11:00:36 PM
You can walk down to a bookstore or anywhere that sells magazines and buy a physical copy.
by jzymbaluk
4/6/2026 at 9:34:05 PM
I’ve often thought about a model like this and would love to see a few news outlets run it as a pilot and see how it stacks up.
by IrishTechie
4/6/2026 at 11:26:52 PM
Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift-article feature.
by mikeyouse
4/6/2026 at 11:47:16 PM
I really doubt the implementation difficulty is the actual reason. It's not hard to have an extra table of specific article permissions.
by Dylan16807
4/7/2026 at 12:29:00 PM
Probably true; it's more likely a variation on "only a small percentage of people are willing to pay any amount of money for an article, so if we offer one-time options, a large enough share of people who would otherwise have subscribed with recurring revenue instead pay one time, so their lifetime value is lower."
by mikeyouse
4/6/2026 at 9:32:36 PM
You could hit up a public library...
by caycep
4/6/2026 at 10:25:30 PM
Looking online, it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad-subsidized, if anyone is still buying print ads?), which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here. The nearest newsstand that shows up on Google Maps has reviews that say "It's just snacks and scratch tickets" and "three newspapers and no magazines." I may have to stop by just to see what three newspapers they have :-)
by eichin
4/6/2026 at 9:34:22 PM
Or just switch your browser to Reader Mode and it's free.
by mattbee
4/7/2026 at 12:03:37 AM
The public library [digital edition] is absolutely the correct answer. I maintain a library card at three different local municipal library systems. My city's library offers access to several digital library apps, including OverDrive, Hoopla, and Libby. It took me a couple of searches in Libby to locate The New Yorker, and it offered up the current issue right away. The article is on page 32. It is ridiculous that anyone considers accessing this from "the public Internet" or the newyorker dot com website rather than simply turning to the public library, which has been the go-to resource for basically everyone for hundreds of years.
You're already paying for your library with your tax dollars. If you don't use it, you may lose it, but you will certainly lose out by subsidizing bums, vagrants, and other families who use the library to their heart's content.
The public library also features lots of streaming and CD music, videos, and video games that you can check out without any cost. In fact, my local library staff told me that they've abolished overdue fees. Libby and the digital apps will automatically renew or return materials. My physical books even got auto-renewed three times before I needed to do it manually or bring them back to the building.
by CookieTonsure
4/6/2026 at 11:18:51 PM
[dead]
by sieabahlpark
4/7/2026 at 1:04:50 AM
There's a very minor typo in the article:
> “Investors are, like, I need to know you’re gonna stick with this when times get hard,”
Should be:
> “Investors are like, I need to know you’re gonna stick with this when times get hard,”
by stavros
4/7/2026 at 3:22:34 AM
I'm not seeing a typo. Just a stylistic difference.
by JumpCrisscross
4/7/2026 at 10:09:33 AM
In "that's, like, your opinion", "like" is an interjection; you can take it out and not change the meaning: "That's your opinion".
In "investors were like, you need to grow", you're semi-quoting someone and can't take it out: "investors were you need to grow".
by stavros
by stavros
4/7/2026 at 4:08:28 AM
Pretty sure the correction is wrong, not merely a stylistic choice.
by SwellJoe
4/7/2026 at 1:06:44 AM
[flagged]
by mannyv
4/6/2026 at 1:02:15 PM
[flagged]
by loloquwowndueo
4/6/2026 at 1:02:46 PM
Many browsers let you disable autoplay globally.
by LoganDark
4/6/2026 at 1:06:18 PM
Sure, there are a couple of buttons I can press to stop the video. Why should I have to? Find me one person who likes autoplaying videos. The page was made with a deliberately annoying choice that I have to go out of my way to override.
by loloquwowndueo
4/6/2026 at 2:28:27 PM
Why do you think the author of this piece, to whom you originally replied, has any control over this?
by binarymax
4/6/2026 at 1:39:38 PM
I'm not talking about pausing the video after it starts playing. I'm talking about a global setting to prevent videos from playing before you manually unpause them. Safari has such a setting, for instance.
by LoganDark
4/6/2026 at 3:45:24 PM
Exactly what “I have to go out of my way to override” covers, from my comment.
by loloquwowndueo
4/7/2026 at 1:27:04 PM
If you don't configure your software when you first start using it, I don't know what to tell you. This shouldn't be "out of your way": you should have set this when you first started using your browser. If you didn't, for whatever reason, don't be surprised that autoplaying videos exist.
by LoganDark
4/7/2026 at 2:56:16 PM
Dunno man, NOBODY likes autoplaying videos, yet creators keep using them for reasons unknown. This is starting to sound a lot like victim blaming. Autoplaying videos should not exist, period. End of story.
by loloquwowndueo
4/7/2026 at 4:40:06 PM
I don't like them either...
by LoganDark
4/7/2026 at 2:22:35 AM
[flagged]
by wileydragonfly
4/7/2026 at 9:47:36 AM
[flagged]
by logicallee
4/7/2026 at 2:51:53 AM
Hard-hitting journalism here. Is the person who lied for years to promote himself trustworthy? More news at 11!
by tstrimple
4/7/2026 at 12:25:07 AM
Dang, can you substantiate that this is actually Mr. Farrow, as he claims?
Or, Mr. Farrow, can you post some evidence somewhere we can see?
by wyldfire
4/7/2026 at 4:47:50 PM
https://news.ycombinator.com/item?id=47663895
by r721
4/7/2026 at 12:06:29 AM
Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
by Uptrenda