alt.hn

4/20/2026 at 2:37:48 PM

ChatGPT and Codex Down

https://status.openai.com/history

by bakigul

4/20/2026 at 3:05:40 PM

Both are down for me. :-/ I'm currently in Eastern Europe.

by kilroy123

4/20/2026 at 3:07:20 PM

Both currently working in US.

by AustinDev

4/20/2026 at 3:39:31 PM

Burn baby burn.

Meanwhile, you can always buy hardware like a Strix Halo and have local LLMs that no third party can take away from you.

by lrvick

4/20/2026 at 4:17:15 PM

I really wish local models could compete with Codex, but for now they are miles apart. I'm not sure that gap will ever close, unless local models at some point in the future catch up to the current state of 5.4 High.

Even then, the frontier models would likely have improved by an equivalent degree, so you'd again be faced with the same choice of deciding between a dramatically less effective local tool and a far more capable, closed remote model.

I guess there's going to be some point of "good enough" for most people.

I feel like the closed frontier models really got there around 8 months ago, and then even more so ~4-6 months ago with the release of the Codex series and then Opus 4.6. It finally feels like you can get reliably good implementations of features that follow repo patterns and best practices, and, at least with 5.4 High/Xhigh Codex, code reviews that don't mostly surface hallucinated or superficial bullshit.

While I'm rambling, I feel like when/if local models ever do catch up to this point, the frontier models are going to be so damn good that software devs are truly fucked.

by virgildotcodes

4/21/2026 at 4:50:28 AM

I do Linux kernel, compiler, and operating system dev with Qwen3.5 122b running locally on a Strix Halo 128G at 35 t/s. Pretty much the most complex software problems one can work on.

I think a lot of people just want to put in a credit card and press an easy button.

by lrvick

4/21/2026 at 10:34:19 AM

Yeah, the "easy button" is of course the point, if it translates to a more capable model that requires less hand-holding and manual correction and consistently produces better-quality code. You wouldn't want to go from Qwen3.5 122b back to GPT 3.5 for coding assistance.

People can definitely be productive with less powerful models. Supermaven or Cursor's tab autocomplete models from a year ago were already a huge boost over the pre-AI days. They just don't have the same capabilities as the leading models.

Curious if you've tried GPT 5.4 High through Codex to compare for your use case?

by virgildotcodes

4/20/2026 at 4:34:00 PM

Sure, but unless you're training them yourself they can still be compromised with poisoning or bias. They're still black boxes even if you're running them locally.

by andyfilms1

4/21/2026 at 4:53:17 AM

Obviously, and that is no different from remote models. You do not and should not ever trust an LLM, but with proper handling they can still be super useful.

You give LLMs a dedicated OS to work in, let them do research or debugging and commit to branches, review and clean up those branches as you like from a trusted OS, then sign the commits and mark a PR as ready for review.
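The branch-and-review flow described above can be sketched with plain git. This is a minimal illustration, not the commenter's actual setup: branch names, identities, and the squash strategy are made up, and real signing would add `-S` with a configured key.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Trusted Reviewer"
echo "stable" > module.c
git add module.c && git commit -qm "baseline"

# 1. The LLM works on a dedicated, untrusted branch in its own environment.
git checkout -qb llm/untrusted
echo "// agent-proposed fix" >> module.c
git commit -qam "agent: proposed fix (unreviewed)"

# 2. From the trusted OS: inspect the diff, then take the change under
#    your own identity (with a configured key, add -S here to sign).
git checkout -q main
git diff main llm/untrusted          # human review step
git merge --squash -q llm/untrusted
git commit -qm "reviewed: take agent fix"
```

The squash-merge means only commits authored (and optionally signed) on the trusted side ever reach the branch you mark ready for review; the agent's own commits never carry your identity.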

by lrvick

4/20/2026 at 5:37:04 PM

What's the alternative to frontier models? Disk-streamed GLM 5.1? By the time you get a single response back, the API will be back up.

by Archit3ch

4/21/2026 at 4:54:28 AM

35 t/s on Qwen3.5 122b on a Strix Halo. The local stuff works great now. Stop giving the corpo monopolists money.

by lrvick

4/20/2026 at 4:18:29 PM

I would have expected Claude to take time off first. Instead, both ChatGPT and Codex decided to take a vacation day today.

by rvz