4/15/2026 at 2:46:37 PM
Seems to be a very regular occurrence starting around this time of day (14:30 UTC)...Claude Code returning: API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":"---"}
Over and over again!
by edf13
4/15/2026 at 3:04:54 PM
US Pacific comes online while London is still working and they can't handle it. $380bn valuation btw.
by walthamstow
4/15/2026 at 3:15:29 PM
No amount of valuation can fix the global supply issues for GPUs for inference, unfortunately. I suspect they're highly oversubscribed, thus the reason we're seeing them do other things to cut down on inference cost (i.e. changing their default thinking length).
by jjcm
4/15/2026 at 3:17:35 PM
Remember when OpenAI wasn’t allowing new subscriptions to their ChatGPT pro plans because they were oversubscribed? Pepperidge Farm remembers.
by natpalmer1776
4/15/2026 at 3:25:45 PM
Wouldn't that be good? I remember back in the day you could only get Gmail through an invite, and it was an awesome strategy. "Currently closed for applications" creates FOMO. They'd just need to actually get the GPUs in relatively short supply. They could do it in bursts though, right? "Now accepting applications for a short time."
I'm not an internet marketer, but that sounds like a win-win to me. People feel special, they get extra hype, and the service isn't broken.
by andai
4/15/2026 at 3:41:14 PM
In the case of Gmail that was fake scarcity. In the case of Anthropic it's fake availability.
Sam Altman explained that the idea is to scale the thing up and see what happens.
He never claimed to offer a solution to the supply problem that would unfold.
by hirako2000
4/15/2026 at 6:08:16 PM
Are you sure it was fake scarcity for Gmail? IIRC they did it because they were worried about systems falling over if it grew too fast, and discovered the marketing benefits as a side effect.
by bruckie
4/15/2026 at 5:48:05 PM
Are you mixing up Anthropic and OpenAI here?
by iainmerrick
4/16/2026 at 4:15:26 PM
I didn't. Anthropic and others followed the concept of scaling up models and worrying about efficiency and availability later. Sam likely didn't invent the idea, but he talked about it.
by hirako2000
4/15/2026 at 3:32:48 PM
Yes, "Pepperidge Farm remembers" is usually about how something used to be good.
by the_gipsy
4/15/2026 at 5:33:16 PM
Yeah, but there was a spoof on that (in Family Guy?). It was a tie-in to the movie "I Know What You Did Last Summer", IIRC.
by CoastalCoder
4/15/2026 at 5:23:06 PM
Google Wave demonstrated that this doesn't always work.
by joquarky
4/15/2026 at 3:33:07 PM
maybe, but the response to GPU shortages being increased error rates is the concern imo. they could implement queuing or delayed response times. it's been long enough that they've had plenty of time to implement things like this, at least on their web UI where they have full control. instead it still just errors with no further information.
by scratchyone
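Until the service degrades more gracefully, API callers can at least smooth over transient 500s themselves. A minimal retry-with-exponential-backoff sketch in Python; `ServerError` and the zero-argument `call` are hypothetical stand-ins, not Anthropic's actual client API:

```python
import random
import time

class ServerError(Exception):
    """Stand-in for an HTTP 5xx failure from the API."""

def retry_with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a zero-argument `call` on 5xx failures with exponential backoff.

    Delays grow 1s, 2s, 4s, ... capped at `max_delay`, with proportional
    jitter so many clients don't all retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except ServerError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay * (1 + random.random()))
```

This doesn't fix the server, but it turns an intermittent 500 into a short pause for the user instead of a hard failure.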
4/15/2026 at 3:49:24 PM
I've been experiencing a good amount of delays (it says it's taking extra time to really think, etc.), and I'm using it during off-peak time.
by skeledrew
4/15/2026 at 3:51:34 PM
i notice that as well. most of the time when i see those it has a retry counter also, and i can see it trying and failing multiple requests haha. it almost never succeeds in producing a response when i see those though, eventually it just errors out completely.
by scratchyone
4/15/2026 at 3:38:54 PM
Coding is a solved problem. Claude writes the code. I edit it. I code around it.
Engineer roles dead in 6 months.
by hirako2000
4/15/2026 at 4:25:00 PM
> I edit it. I code around it.
You're never gonna guess what software engineers do.
by post-it
4/15/2026 at 6:09:03 PM
Because of the context I would think this is sarcasm, but I am not sure.
by bulbar
4/16/2026 at 4:17:56 PM
It is.
by hirako2000
4/15/2026 at 3:57:48 PM
Sure, but we don't need GPUs to log in.
by zachncst
4/15/2026 at 3:30:46 PM
Their issues seem to extend well beyond inference into services like auth.
by sobellian
4/15/2026 at 3:45:13 PM
Yes. Whenever these outages happen, it always seems that it's their login system that is broken.
by ryandrake
4/15/2026 at 5:37:51 PM
That implies that either the auth is too heavy (possible, ish) or their systems don't degrade gracefully enough and many different types of failures propagate up and out all the way to their outermost layer, i.e. auth (more plausible).
Disclosure: I have scars from a distributed system where errors propagated outwards and took down auth...
by bostik
4/15/2026 at 6:13:04 PM
> thus the reason why we're seeing them do other things to cut down on inference cost (i.e. changing their default thinking length).
The dynamic thinking and response length is, funnily enough, the best upgrade I've experienced with the service in more than a year. I really appreciate that when I say or ask something simple, the answer now just comes back as a single sentence without my having to manually toggle "concise" mode on and off again.
by AlecSchueler
4/15/2026 at 5:54:33 PM
A. These aren’t rate limit errors from the API.
B. Everything is down, even auth.
by paulddraper
4/15/2026 at 3:10:53 PM
This precisely justifies a higher market cap for Anthropic.
by ai-x
4/15/2026 at 4:07:13 PM
Demand at an unsustainably low price does not imply demand at a sustainable price.
by dsr_
4/15/2026 at 5:57:20 PM
I'm pretty sure ai-x writes sarcasm and skips the /s for pure fun. Personally, I'm amused and I like what he's doing. Others have done it before him though; it's not a new trick.
by bigbadfeline
4/15/2026 at 5:14:08 PM
Assuming a perfectly efficient business.
by tucnak
4/15/2026 at 2:52:59 PM
I literally just came to HN to ask if I was alone with the accurséd "API Error: 500 {"type":"error","error":{"type":"api_error","message":"Internal server error"},"request_id":"…"}" greeting me and telling me to get back to using my brain!
by azalemeth
4/15/2026 at 3:02:49 PM
500-series errors are server-side; 400-series are client-side. A 500 error is almost never "just you".
( 404 is a client error, because it's the client requesting a file that does not exist, a problem with the client, not the server, who is _obviously_ blameless in the file not existing. )
by xnorswap
4/15/2026 at 3:06:31 PM
> A 500 error is almost never "just you".
I know you added the defensive "almost", but if I had a dollar for each time I saw a 500 caused by session cookies sent by the client making the backend explode, for whatever root cause, well, I would have a fatter wallet.
by darkwater
4/15/2026 at 5:50:20 PM
Depending on what you mean by "made the backend explode", that is a server error, so 500 is correct!
Bad input should be a 4xx, but if the server can't cope with it, that's still a 5xx.
by iainmerrick
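The split being described here is purely a matter of which hundred the status code falls in. A tiny illustrative helper (the function name is mine, not from any library):

```python
def status_class(code: int) -> str:
    """Classify an HTTP status code by which side is nominally at fault."""
    if 400 <= code <= 499:
        return "client error"  # e.g. 404: the client asked for a missing resource
    if 500 <= code <= 599:
        return "server error"  # e.g. 500: the server failed, even if bad input triggered it
    return "not an error"      # 1xx/2xx/3xx: informational, success, redirection
```

So a request that trips over bad client input but crashes the backend still belongs in the 5xx bucket: the fault classification follows what the server did, not what provoked it.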
4/15/2026 at 3:17:31 PM
Indeed, and there's also a special circle of hell reserved for anyone who dares change the interface on a public API and forgets about client caching, leading to invalid requests for only one or two confused users in particular.
Bonus points if, due to the way invalid requests are rejected, they are filtered out as invalid traffic and don't even show up as a spike in the application error logs.
by xnorswap
4/15/2026 at 3:13:03 PM
I know that in principle this is true. However, I have seen Claude shadow-throttle my IPv4 address (I am behind CGNAT), in line with their "VPN" policy, so I do not trust it, frankly.
by azalemeth
4/15/2026 at 5:10:40 PM
> in line with their "VPN" policy
This is how I learn that they have a "VPN" policy. Thinking about it, maybe it makes sense, that is, if it's what I think it is, but it seems scummy nonetheless.
by paganel
4/15/2026 at 3:02:30 PM
> Seems to be a very regular occurrence starting around this time of day (14:30 UTC)...
That's 7:30am on the US west coast.
by andyjohnson0
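The timezone arithmetic is easy to verify with the standard library (mid-April falls inside US daylight saving time, so Pacific is UTC-7):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 14:30 UTC on the day in question, converted to US Pacific time
utc_time = datetime(2026, 4, 15, 14, 30, tzinfo=ZoneInfo("UTC"))
pacific = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))
print(pacific.strftime("%H:%M %Z"))  # 07:30 PDT
```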
4/16/2026 at 10:44:19 AM
Probably when they're permitted to start live experiments.
by imdoxxingme
4/15/2026 at 3:24:16 PM
Yep, daily haha. Well, at least this time they aren't just silently reducing thinking on the server side, which ended up making a mess in my codebase when they did that last time. I'd rather a 500 than a silent rug-pull.
by freedomben
4/15/2026 at 5:45:05 PM
I tend to notice it around 4pm EST.
by JamesSwift