4/6/2026 at 12:47:25 AM
I don’t know how OpenAI screwed this up. They had the best tech, the largest installed base, the best brand recognition. And somehow instead of prosecuting the lead in all areas, they got all hubristic and sloppy and just failed to iterate on the core product, while also failing to respond quickly when Anthropic showed that coding agents are the flywheel that makes the whole company faster.
It’s like they thought they had an unassailable monopoly and speedran to the lazy incumbent position, all in a matter of months.
by brookst
4/6/2026 at 1:42:25 AM
Anecdotally, I would actually argue the opposite - Anthropic is overrated, ass-kissed way too much here for mediocre coding abilities (especially for Elixir). ChatGPT most of the time one-shots complex solutions in comparison. The only reason why people shit on OpenAI so much is because of the defence deal, but it's not like Anthropic is a saint either: https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-t...
by neya
4/6/2026 at 9:32:25 AM
> it's not like Anthropic is a saint either:
I’m so confused. Your link shows they are pushing for guardrails - what is bad about that? It is consistent with Anthropic’s safety-first principles, and with what Dario has written and talked about for the past decade or so. Could you be more direct with your criticism? Otherwise it’s hard to engage with.
by dgellow
4/6/2026 at 3:01:51 AM
Claude Code is IMO the benchmark today. For all of the various contexts I’ve used it in, it has mostly one-shot the tasks I’ve given it, and it is very user friendly for someone who is not a professional software engineer. To the extent it fails, I can usually figure out quickly why and correct it at a high level.
by ls612
4/6/2026 at 3:47:07 AM
I think Codex is a better fit for professional software engineers. It's able to one-shot larger, more complex tasks than Claude and also does better context management, which is really important in a large codebase. On the other hand, I think Claude is more friendly/readable and also still better at producing nice-looking frontend out of the box.
by csharpminor
4/6/2026 at 4:47:05 AM
> not a professional software engineer
I think this is where we might have differing opinions. I'm a CTO by profession and I know what bad code is, so it is quite easy for me, based on my professional experience, to point out when Claude generates bad code. And when you point it out, or ask it why it didn't take the correct/simpler approach - the response is always along the lines of "Oops, sorry!" or "You're absolutely right to question that..."
by neya
4/7/2026 at 1:22:47 AM
Yeah, my CTO says similar things. I usually just tell him to add it to the backlog and move on. At the end of the day these tools save us 3-4 eng hires, and that’s what the board cares about.
by ATMLOTTOBEER
4/6/2026 at 2:49:11 AM
Why pick Elixir specifically here? I’m using Opus/Sonnet via Claude Code for a moderately complex personal project built on Phoenix and have had a good experience.
by phinnaeus
4/6/2026 at 3:54:26 AM
Yeah, I've been building a fairly complex app with Claude and it has been great. The backend stack is a Go service, with a TS front end and a solver running OR-Tools in Python. I do think I do a good job of being very structured at breaking down my requirements and acceptance criteria (thanks to dual lives as a devops/SRE guy and then PM): extensive unit testing, discipline in use of sessions and memories, and asking it to think of questions it should be asking me before even formulating a plan.
by FireBeyond
4/6/2026 at 4:06:20 AM
Claude is good, I'm definitely not saying it's bad. But if you work with LiveView, it will tend to choose more complexity over simplicity. Weirdly enough, I have a feeling it's trained more on Python/Ruby-style (object-oriented) code than functional code, so it tries to get things done not so functionally.
by neya
4/6/2026 at 7:58:48 AM
Anecdotally, Claude has worked far better for our Elixir team than the others we’ve tried.
by heeton
4/7/2026 at 12:29:05 PM
Strange take. I wasn’t lionizing Anthropic at all, and certainly wasn’t being preachy or moralistic. I shit on OpenAI because they had a completely clear path to being dominant in consumer and enterprise, in chatbot and coding agent, and in text, image, and video. And somehow they screwed it up by being unfocused, unnecessarily slimy, drama-ridden, and slow to react to the market.
OpenAI has a lot of good tech and some good product. They may yet succeed. But they embody the lazy hare partying with celebs while the tortoises pull ahead in the race.
by brookst
4/6/2026 at 2:11:57 PM
can’t take anyone seriously who thinks
> ChatGPT most of the time one-shots complex solutions in comparison
is an intelligible sentence.
by anthonypasq
4/6/2026 at 3:00:03 PM
Hmm. I can definitely understand what the author was saying. Paraphrased: ChatGPT often completes complex solutions in one try, whereas Claude does not (or performs less well).
I guess you can’t take me seriously?
by LoFiSamurai
4/6/2026 at 7:28:49 PM
no i can't. chatgpt is a mobile app/website, not a model or agentic harness. if you are confusing these things then sadly you have no idea what's going on.
by anthonypasq
4/6/2026 at 12:55:23 AM
Sam lost the plot for me. He took too many interviews, which led me to not trust him. The last straw came with him standing by Anthropic one day, then throwing them under the bus the next. He showed little awareness of why that is problematic.
by morcutt
4/6/2026 at 1:06:01 AM
That's why I changed as well. I got really irritated by how Altman tried to get social credit for having principles, only to change them the moment it was convenient.
by tombert
4/6/2026 at 3:10:42 AM
I have appreciated Amodei’s brutal honesty about their intentions. On podcasts his attitude is basically “oh yeah, all of you are basically fucked, our products will take everyone’s jobs in a couple years.”
Altman is a lot more coy and comes across as saying what’s politically expedient at any given point in time.
by jimbokun
4/6/2026 at 3:41:21 AM
> On podcasts his attitude is basically “oh yeah all of you are basically fucked our products will take everyone’s jobs in a couple years.”
I also appreciate his honesty, and don't really understand why the others don't emulate it, because there's no cost to them to be honest. At every level of society we've decided to stick our heads in the sand and pretend like this very large tsunami isn't racing toward the coast, so as someone producing this technology you can be honest (and mostly ignored by people in denial), or be cagey and mistrusted (like Sam Altman).
by bayarearefugee
4/6/2026 at 3:49:05 AM
Because it isn’t honest; it is investor hype that these frontier labs need people to believe, despite obviously hitting the sublinear part of the improvement curve.
“It’s so dangerous, we’ve reached AGI, we just have to release models that are obviously incapable of abstraction for your safety”
by nostrebored
4/6/2026 at 4:00:16 AM
We still think Amodei is honest and his hype recycling is not ultimately incredibly self-serving?
by taurath
4/6/2026 at 6:53:12 AM
Upton Sinclair[1] has something relevant for Amodei. Funnily enough, the same thing can be used against those who criticize his view on AI. I wonder if there is a word for this.
by sifar
4/6/2026 at 4:02:07 AM
> Sam lost the plot for me. He took too many interviews which led me to not trust him. Last straw came with him standing by Anthropic one day then throwing them under the bus the next. He showed little awareness on why that is problematic.
It should have become clear to all that he was an untrustworthy person when he was fired from OpenAI by its then-board. My understanding is their complaint was that he was lying, untrustworthy, and manipulative; and enough stories came out at the time to confirm that.
by palmotea
4/6/2026 at 8:15:07 PM
Yeah, it likely should have. I've however been on the receiving end of some very odd board behavior, and most founders I know have a story or two as well. I don't always take board decisions at face value. Reality is often askew, personality disorders run rampant in places of power, and there can be unknown incentives at play.
by morcutt
4/7/2026 at 12:16:26 AM
For me, it was when I found out about Greg Brockman's MAGA donations. From Wikipedia (https://en.wikipedia.org/wiki/Greg_Brockman#Personal_life):
Brockman and his wife were the biggest donors to Donald Trump's Super PAC, MAGA Inc., in 2025, with each of them donating US$12.5 million. Brockman and his wife also donated $50 million to Leading the Future, a super PAC dedicated to AI deregulation that he helped found with Andreessen Horowitz co-founders Marc Andreessen and Ben Horowitz.
by saeranv
4/6/2026 at 1:03:26 AM
It's clearly because they didn't hire me after I applied :)
In all seriousness, I use Codex for work and Claude at home, and I feel like nowadays they're actually pretty competitive with each other. I don't know that it's that far behind.
I agree that they clearly erroneously assumed that no one would be able to catch up with them, though. OpenAI had such a head start that it should have been a moat by itself.
by tombert
4/6/2026 at 2:51:32 AM
Yea. In my opinion the value provided for $20 is better. I wonder how much of Anthropic's value is the hype around Claude Code, coming from every snake oil salesman promoting Claude Code as the best thing to use with openclaw to summarize your emails.
by hsuduebc2
4/6/2026 at 3:07:22 AM
Claude Code became the default brand for an AI coding harness, much like ChatGPT was synonymous with AI chatbot. Even now when I hear Codex I have to stop and think, “oh yeah, that’s OpenAI’s competitor to Claude Code.”
by jimbokun
4/6/2026 at 3:23:04 AM
Yep, exactly. It became the industry standard.
by hsuduebc2
4/6/2026 at 1:31:41 AM
Does it matter that codex is now as good as claude code? Check dev spaces like twitter and discord and all anyone talks about is claude-code, openclaw, opus 4.6, etc.
The mindshare went to anthropic.
by phist_mcgee
4/6/2026 at 2:34:20 AM
Just like OpenAI's original moat, I don't think that's particularly durable. I've already seen plenty of people swing back to preferring Codex, and it'll probably swap again with the next model drop. Openclaw is potentially better integrated with ChatGPT at this point because of the explicit subscription support.
by efromvt
4/6/2026 at 1:52:28 PM
Nothing about AI mindshare is durable. As Anthropic tightens rate limits and/or raises prices, everyone will move to something else. This week I’ve been hitting CC rate limits, and I switched to Codex with virtually no disruption to my workflow. It’s not good for Anthropic that I was able to do that!
by jjfoooo4
4/6/2026 at 7:47:42 PM
One of the things I’ve been wanting to see is an estimate of their minimum revenue to meet investor expectations. I’ve heard people talking about aggressively using even cheaper local models to save on tokens, and it increasingly makes me wonder if they’re caught in a vise where prices need to go up, but if they raise them they’ll just shed usage to the competition, especially since at least Google has a much longer runway.
by acdha
4/7/2026 at 12:44:15 AM
That is 100% what I think is going to happen. I wrote about it here: https://tombedor.dev/open-source-models/
by jjfoooo4
4/6/2026 at 2:12:50 AM
that's why openai bought openclaw
by nickstinemates
4/6/2026 at 2:33:21 AM
I mean, they hired the guy who created it. It's not exactly like openclaw is a real product.
by phist_mcgee
4/6/2026 at 4:33:41 AM
Sure it is: you can download and use it right now. It helps you understand why people are interested in it, at the very least.
by nickstinemates
4/6/2026 at 3:38:32 AM
Also not like it's a particularly good piece of tech. It was the first to show a new category. But jeebus, the design and security are a nightmare. Any of the numerous other claws are better choices for anything serious.
by oofbey
4/6/2026 at 3:55:20 AM
Aside from the fabricated drama and the trend chasing, OpenAI still has the best overall model and API service. Anthropic is really good, no doubt. But gpt-5.4 is a better model than even Opus, even if it's a marginal advantage. I use both.
by Art9681
4/6/2026 at 4:10:07 AM
The advantage is marginal and sporadic, and it's slower. Why would I use it over Anthropic?
by ramraj07
4/6/2026 at 4:23:25 AM
I dunno, my experience mirrors the parent poster's: we use Opus for all our coding, but gpt-5.4 for all of our enterprise agentic work via API (a much bigger amount of tokens). It just seems to be more optimized for this.
by bhadass
4/6/2026 at 8:18:13 AM
[flagged]
by tatrions
4/6/2026 at 2:34:15 PM
It’s much cheaper, for starters.
by senordevnyc
4/6/2026 at 4:07:43 AM
Do you feel the companies' positionings are only marginally different in the same way the product is only marginally different?
by patcon
4/6/2026 at 6:18:11 AM
These days, yes.
by satvikpendem
4/6/2026 at 12:57:01 AM
Coding assistants won't win this game. They sure will win the hearts of developers, but to scale you need mass adoption and products for which users want to pay substantially. OpenAI is falling behind on the small features in their chat and app offering and has failed to innovate in their expensive offerings. Codex, btw, is getting very competitive. It is fast and no longer far behind.
by oezi
4/6/2026 at 1:02:48 AM
The reality is, given how much OAI has raised, they have to get to a place where they are doing insane revenues… We’re talking on the level of Meta, Google, and probably more if they keep raising money.
They really went all in with hubris and they’re gonna get punished eventually.
by igtt
4/6/2026 at 2:11:12 PM
Just like Uber did?
by fragmede
4/6/2026 at 3:30:12 PM
Uber doesn't have to convert hundreds of millions of users to a paid plan - it was paid to begin with. Much easier to raise prices from a low base than to raise them from $0. When push came to shove, Uber also cut payments to drivers and replaced them with a tip screen. Is Nvidia going to accept tip annuities on ChatGPT responses instead of full payment on its chips? Most importantly, Uber did not have fast-depreciating cars on its balance sheet - OAI meanwhile is planning to spend the GDP of a mid-size nation on owning fast-depreciating datacenters, which they deem necessary to cope with the demand for their services.
by rchaud
4/6/2026 at 1:13:01 AM
The strategic playbook of the web era said: get a huge userbase of normies, then figure out how to monetise them (usually via advertising). OpenAI stumbled into the userbase via ChatGPT, but it's unclear if the strategy or the economics apply to AI. Anthropic tried to compete in the consumer market, but couldn't, so focussed on coding and enterprise, and it looks like that's actually turning into a smart choice, at least right now, because it turns out people will pay subscription costs for agents that do their job for them.
by nayroclade
4/6/2026 at 1:35:59 AM
There are three possible paths that sort of substantiate current valuations:
1) Business: LLMs become essential to every company, and you become rich by selling the best enterprise tools to everyone.
2) Consumer: LLMs cannibalize search and a good chunk of the internet, so people end up interacting with your AI assistant instead of opening any websites. You start serving ads and take Google's lunch.
3) Superhuman AGI: you beat everyone else to the punch to build a life form superior to humans, this doesn't end up in a disaster, and you then steal underpants, ???, profit.
Anthropic is clearly betting on #1. Google decided to beat everyone else to #2, and they can probably do it better and more cheaply than others because of their existing infra and the way they're plugged into people's digital lives. And OpenAI... I guess banked on #3 and this is perhaps looking less certain now?
by chromacity
4/6/2026 at 1:32:50 AM
But will they pay the unsubsidized cost when Anthropic needs to turn a profit?
by phist_mcgee
4/6/2026 at 1:40:46 AM
And they actually can’t increase the price much. Token generation is the metric Jensen Huang keeps pushing to temper analysts, which also affects Nvidia’s future expected cash flows, of course.
If increasing the price causes that metric to drop, the whole narrative falls apart and fear will spread in the stock market.
They’re all racing very close to the edge. Some closer than others.
by igtt
4/6/2026 at 1:15:43 AM
Agents increase the velocity of OpenAI and Anthropic; whoever has the best in-house agent moves the quickest.
by 7e
4/6/2026 at 1:43:35 AM
Any publicly available evidence to back that up? There have been post-exit blog posts from OpenAI employees on HN before, and it did sound like the only black magic they use there is that many employees work 16 hrs a day during the launch of new features. I know that some current Claude Code devs are doing interviews where they claim that they use Claude Code extensively, but they clearly have a conflict of interest while they are still employed at Anthropic, so it would be like asking a barber if you need a haircut.
by maxnevermind
4/6/2026 at 12:55:55 PM
Look at the number of features (PRs) being pushed by these companies.
by sumedh
4/6/2026 at 3:36:01 AM
Classic SV hubris. Talk to OpenAI people and they’re so convinced they’re untouchable, they don’t bother worrying about things like revenue or product strategy. All they cared about was being the first to AGI. Well, it looks like that isn’t happening soon enough. And now they have zero moat except brand recognition, which is quickly getting eroded.
by oofbey
4/6/2026 at 7:13:23 PM
Congrats to OpenAI for having this fake "beef" with itself through Anthropic, but Google is still going to win the most users. Users aren't so stupid that they will fall for this kind of obvious psychological manipulation in perpetuity.
by casey2
4/6/2026 at 2:26:48 AM
Despite what the folks here like to believe about themselves, I think the reality is we are as attuned to what is in fashion and on trend as everyone else, just about different stuff. Last year it was ChatGPT; this year Claude is the new hotness. Things move so fast we barely have time to form our own opinion, so we fall back on what we read or hear from others. In 12 months, who knows what it will be... Gemini? ¯\_(ツ)_/¯
Long term, my feeling is Anthropic's focus on enterprise is the most obviously lucrative but also least defensible application of LLMs. If (more likely when) open source models reach the point of being "good enough", then it's a race to the bottom on pricing. Maybe it will be like AWS vs GCP et al, but I kinda doubt it.
by tqi
4/6/2026 at 2:17:18 AM
Same thing happened to Blackberry. The tech head start wasn't that big and the product wasn't that sticky.
by xnx
4/6/2026 at 3:16:17 AM
iOS had an API and a platform. Is there any equivalent for turning an LLM into a sticky platform? Right now it seems like it’s pretty easy to use harnesses and tooling across models, including open-weight and locally running ones.
by jimbokun
4/6/2026 at 3:28:10 PM
> Is there any equivalent for turning an AI LLM into a sticky platform?
Not with AI specifically. Having a large, private data store (e.g. Gmail, Google Drive) is the biggest advantage.
by xnx
4/6/2026 at 1:56:02 AM
Investors do not care about the product, the users, etc. They care about cash. There are lots of ways to make cash that don't involve having a good product. But if you commit to spending a trillion dollars on hardware, then borrow hundreds of billions in the short term, and it turns out there's no way to recoup the cost, the investors go looking for better returns. This would've worked back in the old days of a bull market, angels looking for the next whale (with "modest" $5BN investments), and startups with no rivals. But in a bear market with multiple competitors trading on a commodity? Lol. Finally the bubble bursts.
by 0xbadcafebee
4/6/2026 at 1:02:18 AM
What kind of OAI slop is this?
5.4 Extra high >> Opus 4.6
by hmartin
4/6/2026 at 1:15:13 AM
Depends on your workflow. I find that for human-in-the-loop work, Gemini beats both.
by noosphr
4/6/2026 at 1:45:09 AM
Been my experience as well, but generally the anti-Google sentiment here is pretty loud, so you'll pretty much never see anyone praising Gemini here.
by neya
4/6/2026 at 3:10:19 AM
Some of that, sure. But realistically, a lot of people just don't want to pay for every frontier model provider out there as they're released. Not just money, but also time trying them out. (Recommend people at least try out their multimodal model.) It doesn't help that Google offers a bunch of confusing plans in multiple places. I ended up just pasting all their AI plan URLs, at least the ones I could find, into Claude so I didn't have to figure it out.
by TheCowboy
4/6/2026 at 4:58:13 AM
I pay for Google Workspace, so the Gemini Pro included with that pretty much suffices for my use case. I can't say it will work for everyone, but I do use it for random tasks - all the way from woodworking to building complex software projects - and so far I've rarely hit the limit.
by neya
4/6/2026 at 12:25:04 PM
What Gemini Pro comes with Workspace? Is there an API allowance? Is it different than for free users?
by dizhn
4/6/2026 at 3:28:27 PM
I guess if you choose Business Standard, you get "Pro access to features & models with enterprise-grade security & privacy": https://knowledge.workspace.google.com/admin/getting-started...
by neya
4/6/2026 at 9:17:49 PM
It says "Gemini App" but I am not sure what it's referring to.by dizhn
4/7/2026 at 6:37:00 AM
I guess it's access to Gemini, because that's the plan I'm on, and you can access Gemini using gemini.google.com or their phone app (Android/iOS). I couldn't find information on the API - maybe it's on that page, but I couldn't find it from my phone browser. Anyway, I pretty much use OpenRouter, so I have no idea if the API is part of my plan.
by neya
4/7/2026 at 9:49:11 AM
Thank you.
by dizhn
4/6/2026 at 2:24:41 AM
I think Antigravity w/ Gemini is a great product; it's been super useful on a bunch of my hobby projects. It's especially wonderful when writing firmware and needing to add support for a new chip. I can point it at a PDF datasheet and it'll do a much better job of reading it and parsing out all of the register fields than anything else. Saves me enormous amounts of time.
by zbrozek
4/6/2026 at 4:56:06 AM
Thanks, been meaning to try it. I heard the limits on that are an issue and people are supposedly blowing through the limits way too easily? How has your experience been so far in this regard?
by neya
4/6/2026 at 2:30:53 AM
Is anti-Google sentiment still pretty loud? People seem excited about Gemini catching up + Gemma 4.
by gabriel-uribe
4/6/2026 at 4:59:34 AM
Yeah, in most threads you will see anyone recommending Gemini get downvoted. Ironically, Gemini (with a Workspace subscription) is the only model that explicitly states it doesn't use your inputs to train their models, right under the chat box. AFAIK no other provider does that explicitly - usually there is a hidden toggle in settings you will need to turn off.
by neya
4/7/2026 at 3:35:17 PM
The corporate subscription for ChatGPT says the same thing. And I would be shocked if it wasn't the same for a corporate agreement for Claude.
by ziml77
4/6/2026 at 5:27:38 AM
TIL. That's cool. I mostly use Gemini 3 Flash for some background jobs because of the price/perf, but I'm rooting for their models to improve. Competition is good.
by gabriel-uribe
4/6/2026 at 8:32:12 AM
The catch: if you don't pay for any subscriptions to Google, they will use your data for training their models. Agreed on competition being good.
by neya
4/6/2026 at 12:57:16 PM
Google does not know how to sell. They have multiple offerings, and they will probably kill some of them very soon. There is no reason to waste your time and money on Google.
by sumedh
4/6/2026 at 4:33:35 AM
I pay for both ChatGPT and Gemini. I've finished (as in: it's done, it works, and I may never need to change it again) entire projects with ChatGPT and Codex. Sometimes it takes a lot of hand-holding to get there, but it does get there, and (with the exception of 4o) it's been improving since the beginning.
In contrast: I can't even get Gemini Pro to give me answers to the most primitive questions that aren't caked in prima facie lies without at least 4 interactions, in any context, ever. The output is consistently and ridiculously garish with its incessant self-contradictions. It seems to be impossible to actually get anywhere with it.
What am I doing wrong here?
by ssl-3