alt.hn

4/10/2026 at 1:47:17 PM

US summons bank bosses over cyber risks from Anthropic's latest AI model

https://www.theguardian.com/technology/2026/apr/10/us-summoned-bank-bosses-to-discuss-cyber-risks-posed-by-anthropic-latest-ai-model

by ascold

4/10/2026 at 3:36:59 PM

Maybe it's marketing, but I think it's regrettable that Anthropic paired project Glasswing with Mythos. It really makes it seem like Mythos is the threat, rather than the fact that tons of vulnerabilities have always been ignored throughout the software world.

If Glasswing had been started years ago with the goal of applying fixes to AI-found gaps, this would just be another model to add to that effort. But launching it in the ominous shadow of some new super model boosts panic, IMO.

by causal

4/10/2026 at 3:47:35 PM

Cybersecurity is taken too lightly, and it mostly boils down to the recklessness of developers: they are just "praying" that no one acts on the issues they already know about. It's something we must start talking about.

Common examples of this recklessness include devs running binaries on their work machines, not using basic isolation (why?), sticky IP addresses that straight-up identify them, or, even worse, using the same browser to access admin panels and random memes; obviously, there are a hundred more like those that are ALREADY solved and KNOWN by the developers themselves. You literally have developers who still use cleartext DNS (apparently they are OK with their history being accessible to random outsourced employees).

by pixel_popping

4/10/2026 at 5:12:34 PM

> it mostly boils down to recklessness of developers

I disagree. I think in big tech and the corporate world, it boils down to the organization fundamentally not valuing security and punishing developers if they "move slow", which is often the outcome when you maintain a highly security-oriented process while developing software and infrastructure.

When big leaks happen, the worst that occurs is that some trivial financial penalty is applied to the company, so the incentive to ignore security problems until you're forced to acknowledge them is high.

by snovymgodym

4/10/2026 at 5:39:29 PM

The last gig I had that took QA/Test seriously was in the late '90s. I have no hope the situation will improve, for quality or security, until something fundamental changes.

by specialist

4/10/2026 at 4:04:27 PM

Totally agree, though I'd argue that it's still a software failure if preventing exploits requires every user to memorize and follow an onerous list of best practices.

by causal

4/10/2026 at 4:07:58 PM

This is where security is actually heavily intertwined with privacy: by following good privacy principles, you automatically cover a lot of security issues.

by pixel_popping

4/10/2026 at 5:08:38 PM

Highly disagree.

Most of the time it's a question of management not caring about security, or disliking the inconvenience that security can bring.

by LunaSea

4/10/2026 at 5:27:04 PM

I agree as well. However, for FOSS projects, for example, it's exactly as you say: security is an inconvenience, and we come back to "I pray that no one exploits X".

by pixel_popping

4/10/2026 at 5:48:55 PM

FOSS projects are a different beast, since contributors are working for free and no contributor might have the time to fix a security bug or review a PR that fixes one.

I might add however that most companies use FOSS projects without paying for or contributing to them.

The onus is still on the final user to make sure that the code they use is safe.

by LunaSea

4/10/2026 at 5:42:58 PM

"Cybersecurity is taken too lightly, and it mostly boils down to the recklessness of developers: they are just "praying" that no one acts on the issues they already know about. It's something we must start talking about."

I agree that cybersecurity is taken too lightly. However, I think that many developers don't actually know about vulnerabilities. In many companies those reports get filtered through other teams and prioritized by PMs. The devs tend to do their best to meet the aggressive schedules the penny-pinching business people set.

by giantg2

4/10/2026 at 6:57:20 PM

Business managers sometimes make bad decisions (at least in retrospect) around budgets and priorities. But the reality is that there are a limited number of pennies, and if someone doesn't pinch them then there are no pennies left to pay developers.

by nradov

4/10/2026 at 6:11:11 PM

I frankly believe that many know what they are doing. Take the average freelancer: developing for multiple clients in the same workspace (suicidal, and ethically wrong on top of it), without even disk encryption enabled, or straight-up syncing everything in cleartext to Dropbox.

by pixel_popping

4/10/2026 at 6:25:23 PM

Or they're a freelancer because they aren't good enough for a big-salary job.

by giantg2

4/10/2026 at 5:55:55 PM

> recklessness of developers

Nah. It's the corporations that could not care less and therefore do not reward careful work. They care about nothing but time to market. Start stacking legal and financial liability and I guarantee they are suddenly going to start caring a lot.

by matheusmoreira

4/10/2026 at 5:26:29 PM

Recklessness is based on effort, likelihood, and consequence. If you live in a small town, you might not lock your front door. No matter where you live, you probably don't lock your second floor windows.

by sdwr

4/10/2026 at 5:39:02 PM

Are we making enough of an effort, though? The AI era invites us to get our shit together as well. We are all guilty of it, but we must also understand that if you live in an area with a high crime rate, you adapt and lock your door. The same must apply online now that we will have 24/7 rogue agents whose sole purpose is running ransoms and attacks of all kinds.

by pixel_popping

4/10/2026 at 6:13:30 PM

I read your list and all of that is normal computer use. How can it be reckless to use a computer normally?

by MrDarcy

4/11/2026 at 6:18:15 AM

Didn't you know running binaries on your own machine is dAnGeRoUs? Better sign up for our subscription based cloud service!

by kilpikaarna

4/10/2026 at 6:17:15 PM

Normal doesn't mean "right". We have piled up a ton of bad decisions, and users who are aware should know better than to rely on default settings.

by pixel_popping

4/10/2026 at 5:20:16 PM

You missed the management factor. Even if managers don't explicitly ask you to build insecure stuff, they will turn up the pressure to the point that you either comply or leave the company for someone who will do just that. So the end result is the same. Rarely will an individual push back with some force, and those who do will eventually be let go because they're 'troublemakers'.

by jacquesm

4/10/2026 at 5:06:14 PM

You're making a hubris-laden assumption that coders know the gaps they're baking into their software — that any human has a decent enough grip on the multitudes of spinning logic duct-taped together to make the internet run. Most vulnerabilities aren't "ignored"; they're in a never-ending backlog, or unknown.

If you closed all of the AI-discovered security vulnerabilities tomorrow - by the next day there'd be a host of new ones. That's software, baby.

by spandrew

4/10/2026 at 6:04:22 PM

This. I've been hearing panic about Mythos from the non-security community, because "zomg z3r0 d4y5!!", since the announcement. But these are the same people running production servers 10 updates and 2 critical security fixes behind for years.

I don't need cutting-edge AI to take you down. I need Metasploit and a CVE list that's been updated in the last 6 months.

by ofjcihen

4/10/2026 at 10:24:13 PM

I once had a freelance gig upgrading an environment that hadn't been touched in years. One server had 1,500 days of uptime, and I could find no evidence of any in-place upgrades. They made me watch a bunch of IT security / process videos before starting the project, though. This was a decent-sized organization with hundreds of employees and hundreds of millions in revenue.

My job at a "near unicorn" "we're still a startup 10 years later" was no better. Distros that were no longer updated. Obsolete python versions. Servers that hadn't been rebooted in years. All environments in a single AWS account. I could go on...

by icedchai

4/10/2026 at 6:02:56 PM

The strongest model we've benchmarked on our comprehensive, little-known, and difficult-to-game benchmark is still Claude Opus 4.5 for agentic workflows. That's not a typo.

Interpret that how you will, but if Anthropic had to take cost/resource savings measures after the last major release, less than 6 months ago, it's unlikely they have the economics to offer what Mythos is promised to be, at any sort of product scale. But I agree, it would be great to get stronger models and start securing all the junk on the web. Of course, that requires maintainers to know how to use these tools.

Benchmarks at https://gertlabs.com/?agentic=all

by gertlabs

4/10/2026 at 7:03:18 PM

I'd be particularly interested if someone with relevant expertise could comment on the types of bugs Mythos found, e.g. the 27-year-old OpenBSD bug.

I ask because the media around Mythos is leaning into the "Mythos is a superintelligence that can find bugs no human can" story. But in my mind it's pretty obvious that any sufficiently complex software will have a lot of lurking zero days, and better tools will asymptotically find more of them. So it seems to me something like Mythos would just be able to do more analysis/searching for bugs at a much faster rate than previously possible. But I'm skeptical that the bugs that were found required an insane amount of analytical ability to locate, so I would really appreciate it if someone could comment on that (e.g. was it "yeah, with enough time we would have found it eventually" vs. "wow, this was an insanely difficult bug to find in the first place").

I do agree that medium/long term, tools like Mythos will be a huge boon for cybersecurity, because they will inherently make it easier to write bug-free code in the first place. But yeah, we're now at a point where all these "pre-AI bugs" need to be fixed and patched before folks in the wild find all these zero days.

by hn_throwaway_99

4/10/2026 at 7:23:50 PM

The OpenBSD bug was harder for LLMs because it is an integer overflow bug, whereas out-of-bounds accesses are more common bugs that most models can find.
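For readers unfamiliar with the distinction, here is a minimal illustrative sketch (in Python, simulating C's 32-bit arithmetic; this is not the actual OpenBSD bug) of why integer-overflow bugs are subtler than plain out-of-bounds accesses:

```python
# Illustrative sketch, NOT the actual OpenBSD bug. In C, a size
# computation like `count * sizeof(elem)` can silently wrap around
# before the allocation happens, so the buffer ends up smaller than
# the rest of the code believes.

MASK32 = 0xFFFFFFFF  # simulate 32-bit unsigned C arithmetic

def alloc_size(count, elem_size):
    """Size computation the way a 32-bit C program would perform it."""
    return (count * elem_size) & MASK32

# Plain out-of-bounds: the bad index is visible right at the access site.
buf = [0] * 8
bad_index = 8  # buf[8] is one past the end; easy to flag locally

# Integer overflow: every line looks fine in isolation.
count = 0x40000001            # attacker-controlled element count
size = alloc_size(count, 4)   # wraps around to 4, not ~4 GiB
assert size == 4              # the allocation is far too small
# later, writing `count` elements into that buffer corrupts memory
```

Spotting the overflow requires connecting the size computation to the distant write that trusts it, which is plausibly why such bugs are harder for models than a single out-of-range access.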

The OpenBSD bug was also found by GPT-OSS and by Kimi-K2:

https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...

The first condition for finding a bug is to actually audit the code where bugs exist.

When a human does that, this is a lot of work, which is often avoided. LLMs can simplify this, but you must use them for this purpose.

As the link above shows, using multiple older open weights models was enough to find all the bugs found by Mythos.

The improvement demonstrated by Mythos is that it could be used alone to find all those bugs, while with older models you had to run more of them to find everything, because each model would find only a part of the bugs.

Even so, I prefer using all those open weights models together, at a negligible additional cost, while Mythos is unavailable to non-privileged users; and even when it becomes available to more people, it will be much more expensive than the alternatives.

by adrian_b

4/10/2026 at 10:19:21 PM

Thank you so much. Your comment and the linked blog post is exactly the deep analysis/explanation I was looking for.

by hn_throwaway_99

4/10/2026 at 3:45:08 PM

A year ago, LLMs weren't good enough to find these security issues. They could have done other stuff, but then again, the big tech companies were already doing other stuff: bug bounties, fuzzing, rewriting key libraries, and so on.

This initiative probably could have started a few months sooner with Opus and similar models, though.

by skybrian

4/10/2026 at 4:47:23 PM

Using multiple older open weights models can find all the security issues that have been found by Mythos.

However, no single model of those could find everything that was found by Mythos.

https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...

Nevertheless, the distance between free models and Mythos is not as great as Anthropic's marketing claims, which of course is not surprising.

In general, this is expected to also be true for other applications. Because no single model, even a SOTA one, is equally good at everything, trying multiple models may be necessary for obtaining the best results; with open weights models, trying many of them can add negligible cost, especially if they are hosted locally.

by adrian_b

4/10/2026 at 3:49:25 PM

That's not quite true: even a year ago, LLMs were finding vulnerabilities, especially when paired with an agent harness and lots of compute. And even before that, security researchers had been shouting about systemic fragility.

Mythos certainly represents a big increase in exploitation capability, and we should have anticipated this coming.

by causal

4/10/2026 at 3:51:46 PM

A lot of those bugs were found by seasoned developers and security professionals, though. Anthropic claims that Mythos is finding vulns for people who have no security background, who just typed "hey, go find a vulnerability in X", went home for the night, and came back the next morning to a ready PoC. They could definitely be exaggerating, but if it's true, that's a very different threat category, which is worth paying attention to.

by Analemma_

4/10/2026 at 4:05:13 PM

Previous models have done this just fine. For the last year, whenever a new model has come out I just point it at some of my repos and say something like "scan this entire codebase, look for bugs, overengineering, security flaws etc" and they always find a few useful things. Obviously each new model does this better than the last, though.

by qingcharles

4/10/2026 at 3:55:59 PM

Yes, previous models found vulnerabilities but Mythos is uniquely capable of actually exploiting them: https://red.anthropic.com/2026/mythos-preview/

by causal

4/10/2026 at 4:20:48 PM

IMO that's a big deal, primarily because the issue with automatically discovered vulnerabilities has long been a high volume of reports and a very bad signal-to-noise ratio. When an LLM is capable of developing PoC exploits, you finally have a tool that enables meaningful triage of reports like this.

by pxc

4/10/2026 at 4:05:04 PM

If you run Opus 4.6 and GPT 5.4 in a loop right now (maybe 100 times) against top XXXX repos, I guarantee you that you'll find at the very least, medium vulnerabilities.
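That "models in a loop" idea can be sketched roughly as follows; `query_model` is a hypothetical placeholder, not a real Anthropic or OpenAI SDK call, and the "every 10th pass finds something" behavior is purely illustrative:

```python
# Hedged sketch of running multiple models repeatedly over repos and
# taking the union of findings. `query_model` is a stand-in for
# whatever client or agent harness you actually use.

def query_model(model, repo, run):
    """Placeholder scan pass: pretend every 10th run surfaces a finding."""
    if run % 10 == 0:
        return [f"{repo} [{model}]: candidate issue from pass {run}"]
    return []

def scan(repos, models=("model-a", "model-b"), passes=100):
    """Repeat the scan many times per model. Different passes (and
    different models) tend to surface different candidate bugs, so we
    accumulate the union of everything found."""
    findings = set()
    for repo in repos:
        for model in models:
            for run in range(passes):
                findings.update(query_model(model, repo, run))
    return sorted(findings)

print(len(scan(["example/repo"])))  # 2 models x 10 hits each -> 20
```

The point is structural: repeated sampling plus model diversity raises coverage, which matches the earlier observation that a union of older open weights models found everything Mythos did.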

by pixel_popping

4/10/2026 at 4:37:47 PM

> A year ago the LLM's weren't good enough to find these security issues

I know of two F100s that already started using foundation models for SCA in tandem with other products back in 2024. It's noisy, but a false positive is less harmful than an undetected true positive depending on the environment.

by alephnerd

4/10/2026 at 3:48:56 PM

>This initiative probably could have started a few months sooner with Opus and similar models, though.

Evidently they tried, and even the most recent Opus 4.6 models couldn't find much. There's been a step change in capabilities here.

by vonneumannstan

4/10/2026 at 3:54:21 PM

No, Opus has found a lot and 112 vulnerabilities were reported to Firefox alone by Opus [0]. But Mythos is uniquely capable of exploiting vulnerabilities, not just finding them.

[0] https://red.anthropic.com/2026/mythos-preview/

by causal

4/10/2026 at 4:26:36 PM

I guess I'm not sure why you frame this as a "rather than". What Anthropic is saying is that the norm of having tons of vulnerabilities lying around historically worked OK, but Mythos shows it will soon become catastrophically not OK, and everyone who's responsible for software security needs to know this so they can take action.

by SpicyLemonZest

4/10/2026 at 4:32:49 PM

I wonder whether this kind of model release could become the spark that ignites a new digital "cold war" between the US, Europe, India and China, in which they try to outwit their rivals and compromise their critical infrastructure using artificial intelligence.

Also I’d like to believe that this really is such a huge step forward compared to Opus, but lately I’ve found it hard to believe when I look at the statements made by the CEOs of AI companies and their associates, who are fuelling the hype surrounding this topic even further. Of course, it is good that large companies and industries that are crucial to the country are the first to have access to this, but until the launch takes place, I will approach this with a degree of scepticism.

by __natty__

4/10/2026 at 5:20:31 PM

Connecting so much stuff to the network was always crazy. Ditto computerizing so much, some yes, but as much as we have? Horribly risky.

I doubt we'll see a shift away from "everything's on the network!" because it's so incredibly beneficial to the surveillance state, but one can hope.

by lamasery

4/10/2026 at 6:55:19 PM

I used to play some games with this theme when I was a kid: the Mega Man Battle Network series. The very first stage in the very first game, some dude social engineers his way into your house, hacks your inexplicably internet connected oven and nearly burns your entire family down. By the next game, terrorist netmafias are gassing children, nuking dams and hacking airplanes fully intending to crash them with no survivors.

I love computers so much but sometimes I do think they were a mistake.

by matheusmoreira

4/10/2026 at 5:56:12 PM

Admiral Adama has entered the chat.

by cheschire

4/10/2026 at 4:51:54 PM

This invisible cyberwar is already happening; it's just that the brains powering it are getting smarter.

by mieubrisse

4/10/2026 at 5:05:06 PM

> ignites a new digital "cold war"

Already been going on for over a decade: export controls on dual-use technology like Xeon processors began being enforced back in the Obama admin.

> until the launch takes place

It's already launched. Some companies had access to Mythos for months.

> fuelling the hype

This is true. Commercially available models from a year ago are already good enough from an offensive security perspective. Their big issue was noise, but that could be managed.

by alephnerd

4/10/2026 at 6:09:22 PM

I was in the industry when SSL key lengths differed between US domestic products and US products for export. That's one reason so much open-source cryptography software expertise built up in Europe so quickly.

by cestith

4/10/2026 at 6:16:36 PM

Much of that muscle had already been built in Western Europe well before the SSL stuff, because of KU Leuven, COSIC, and IMEC.

The issue is that by the late 2000s to 2010s, most European organizations didn't take advantage of that base, despite being comparable to the US in the 1970s-90s.

by alephnerd

4/10/2026 at 6:54:47 PM

I'm wondering whether the NSA will be granted access. It's already the largest collection of mathematicians on the planet and now they'd be given tools that could automate a lot of discovery. Or they're panicking that their "old faithful" back door will be patched soon.

by pieisgood

4/10/2026 at 3:24:11 PM

Promoting the model as potentially dangerous might backfire with the government banning it from being released by executive order.

by sroussey

4/10/2026 at 4:49:21 PM

> the government banning it from being released by executive order.

There's no legal mechanism for the president or the government at all to do that.

by petcat

4/10/2026 at 6:31:44 PM

There's no legal mechanism for the vast majority of what the president has done.

Often it happens anyway, along with some protests, some resignations and maybe an eventual court case reversal months or years later.

by marcuskane2

4/10/2026 at 5:02:32 PM

I'm sure they will find something when it really starts to bother them personally.

by rf15

4/10/2026 at 5:15:48 PM

There are ways for the government to do that sort of thing on an emergency basis, and it would take quite some time to make its way through the courts. There are precedents from nuclear weapons technology and cryptography. I don't think it'll hold up or be particularly effective, because the horse has already left the barn, but they could probably slow things down if they really wanted to.

by empath75

4/10/2026 at 6:07:54 PM

Of course there is. Fully automatic weapons are banned. Certain chemicals and biologics are banned. Certain hacking tools are banned (DMCA):

> The “tools” prohibitions, set out in sections 1201(a)(2) and 1201(b), outlaw the manufacturing, sale, distribution, or trafficking of tools and technologies that make circumvention possible. These provisions ban both technologies that defeat access controls, and also technologies that defeat use restrictions imposed by copyright owners, such as copy controls. These provisions prohibit the distribution of software that was designed to defeat CD copy-protection technologies, for example.

https://www.eff.org/pages/unintended-consequences-fifteen-ye...

by dist-epoch

4/10/2026 at 7:12:44 PM

Those things were made illegal by Congress, not by a president's executive order, which is what this thread is about.

by petcat

4/10/2026 at 6:09:34 PM

That's probably a contributing factor as to why they already partnered with all the biggest tech companies.

by scottyah

4/10/2026 at 3:48:04 PM

I think that would be a good precedent given the current lack of rules around AI Safety. These models don't seem to be plateauing yet and could be much more dangerous than Mythos in 1-2 years.

by vonneumannstan

4/10/2026 at 5:53:28 PM

Sure. It's a meeting with major bank execs AND the Fed chair to discuss Anthropic's new hype model.

It's definitely NOT, in any way, a meeting to discuss potential systemic risk due to insolvency/bankruptcy at some key AI-related company.

by wankerrific

4/10/2026 at 4:39:45 PM

Tangentially related, but how does one protect oneself against one's bank account/brokerage being hacked? Can you print out a proof of funds/securities owned to take to court to be made whole?

by yks

4/10/2026 at 7:13:39 PM

Aside from FDIC’s insurance, nothing.

And if banks get hacked and money gets wired out - maybe we’ll come up with ways to roll back the damage.

Who knows - this is new territory.

by jannyfer

4/10/2026 at 6:09:57 PM

Your screenshot/PDF is kind of worthless since it's trivially editable. Still a good idea to have it.

Banks are required by law to be able to produce account balances within a few days. In some countries they are required to submit them regularly to an account-protection institution, so that if a bank fails, it can quickly reimburse people to prevent panic from spreading to other banks.

You can probably request some sort of notarized proof of accounts, but it will probably cost you $100.

by dist-epoch

4/10/2026 at 5:12:30 PM

I hope these banks are complaining about how Anthropic is preventing them from accessing its latest model while giving preferential treatment to other businesses.

by charcircuit

4/10/2026 at 6:11:06 PM

At least one of those banks is already in the partnership, and we don't know how many they reached out to.

by scottyah

4/10/2026 at 3:55:04 PM

> A recent leak of Claude’s code prompted the startup to publish a blogpost at the beginning of the month saying that AI models had surpassed “all but the most skilled humans at finding and exploiting software vulnerabilities” [...]

I've seen a bunch of people conflate the Claude Code source-map leak with the Mythos story, though not quite as blatantly as here. I'm confident that they are totally unrelated.

by simonw

4/10/2026 at 7:18:09 PM

I have a pet theory that the uptick in routine cybersecurity PRs you mention as a trend on your blog was done with Claude Code's stealth mode and Mythos.

by jannyfer

4/10/2026 at 4:09:59 PM

[flagged]

by delis-thumbs-7e

4/10/2026 at 3:59:25 PM

[flagged]

by taytus

4/10/2026 at 3:05:46 PM

The more I live, the more I believe people at the top operate in some sort of cult mentality. The level of gullibility and temporary lack of critical thinking is only matched by their sociopathy and Machiavellianism.

I'm sure it's a great big model, but the level of hype and dishonesty is something out of Sam Altman's book.

Of course it's because of the upcoming IPO, but that's the end game. For now it's critical to get those private equity guys and banking institutions to believe the gospel and hold the bag; only then will the suckers from the secondary markets be allowed to be suckers too.

by PedroBatista

4/10/2026 at 4:24:05 PM

A good percentage of cybersecurity has always been theater. If their model helps to separate the wheat from the chaff, maybe it'll be an improvement.

by icedchai

4/10/2026 at 4:33:44 PM

> A good percentage of cybersecurity has always been theater

It is great to be in a "best-effort" business where there are no consequences for bad things happening. Cybersecurity is one of those businesses. Web search, feeds, and ads are others.

Imagine you are selling locks to secure homes. A thief breaks the lock. The lock-maker is not held liable. In fact, they now start selling stronger locks, and lock sales actually improve with more thefts.

by bwfan123

4/10/2026 at 4:59:15 PM

It sounds like it’ll just kill the wheat and the chaff.

Still probably a benefit depending on your philosophy.

by guzfip

4/10/2026 at 4:41:54 PM

I'm definitely optimistic that the long-term trajectory is positive. All important software can undergo extensive penetration testing with cutting-edge vulnerability research techniques before launch? Sounds great. The problem is what goes wrong on the pathway to there.

by SpicyLemonZest

4/10/2026 at 4:20:11 PM

There's a serious problem with being very popular/prominent/powerful: you become surrounded by sycophants through a sort of survival of the fittest, and then develop a progressively more distorted view of reality as a result. When everything can be made to appear to work for the person at the center, they start making progressively worse decisions, which are consequence-free because of the sway they already have. (This is a big reason why "disruptor" startups work.)

by colechristensen

4/10/2026 at 5:59:13 PM

Will you eat your words when major vuln disclosures come out 3-4 months from now?

by xvector

4/10/2026 at 6:08:30 PM

Will you eat your words when you find out major vuln disclosures have been happening for decades?

by ofjcihen

4/10/2026 at 6:12:15 PM

They obviously meant on an unprecedented scale.

by scottyah

4/10/2026 at 6:16:00 PM

Sure, and healthy skepticism before proof is a sign of wisdom.

Which makes taking claims from companies at face value…?

by ofjcihen

4/10/2026 at 4:58:51 PM

Need to dump the bag on retail investors and pensions before they implode

by downrightmike

4/10/2026 at 4:17:41 PM

Or, you're wrong. And the smartest AI Research Scientists and the top banking officials are both correctly worried about the ramifications. That's what you'd expect if there really was an issue here. Are you aware of the deep seated bugs in critical software that were already uncovered with Mythos? Are you able to steelman the issue here at all?

by reducesuffering

4/10/2026 at 4:25:54 PM

> Are you aware of the deep seated bugs in critical software that were already uncovered with Mythos

This. 100% this.

A large portion of the industry is under NDA right now, but most of the F500 have already deployed or started deploying foundation models for AppSec use cases, going all the way back to 2023.

Sev1 vulns have already been detected using "older" foundation models like Opus 4.x.

Of course the noise is significant, but that's something you already faced with DAST, SAST, and other products, and is why most security teams are also pairing models with experienced security professionals to adjudicate and treat foundation model results as another threat intel feed.

by alephnerd

4/10/2026 at 4:21:36 PM

Two things can be true.

Historically bad security that people just got by with, matched with powerful tools that aren't any better than the best people but can now be deployed by mediocre people.

by colechristensen

4/10/2026 at 4:37:40 PM

Which is exactly what Anthropic understands the situation to be. They state at the beginning of the Glasswing blogpost that Mythos is not better than the best vulnerability researchers. But it doesn't have to be to become a tremendously big deal.

by SpicyLemonZest

4/10/2026 at 6:20:49 PM

It's not just a lower barrier to entry. The best use of a tool will still be made by the most knowledgeable users. So we're looking at lowering the bar some, but another big deal is the scale at which the top experts can work; that might actually be the longer lever. Imagine a top expert burning tokens across the whole repo histories of a few dozen projects looking for likely but unconfirmed flaws, then having the model flag and rank those suspects for their own review in triaged order.
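That ranked-review workflow could be sketched like this; the `Finding` fields, confidence scores, and example data are all illustrative, not the output of any real tool:

```python
# Sketch of expert-scale triage: hypothetical model findings carry a
# confidence score, and the human reviews them in ranked order.
from dataclasses import dataclass

@dataclass
class Finding:
    repo: str
    location: str
    summary: str
    confidence: float  # model's self-reported likelihood, 0..1

def triage(findings, threshold=0.5):
    """Drop low-confidence noise, rank the rest for human review."""
    suspects = [f for f in findings if f.confidence >= threshold]
    return sorted(suspects, key=lambda f: f.confidence, reverse=True)

# Illustrative data only.
findings = [
    Finding("projA", "net/parse.c:212", "possible integer overflow", 0.83),
    Finding("projB", "auth/token.py:57", "timing side channel", 0.41),
    Finding("projA", "io/buf.c:90", "off-by-one on copy length", 0.66),
]
for f in triage(findings):
    print(f"{f.confidence:.2f}  {f.repo}  {f.location}  {f.summary}")
```

The design point is that the model does the wide, cheap pass and the scarce expert attention is spent only on the top of the ranked list.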

by cestith

4/10/2026 at 6:55:18 PM

They (Anthropic) really did scare the shit out of them, didn't they.

by rvz

4/10/2026 at 3:30:45 PM

Looks like the marketing worked at least somewhat, lol. Such an obvious playbook by now that I'm surprised some people here fell for it.

by nothinkjustai

4/10/2026 at 3:34:45 PM

Your cynicism doesn't prove that it's fake, though.

by skybrian

4/10/2026 at 7:33:29 PM

You've got to admit that crying wolf about how dangerous their new model is, for the hundredth time, right when the biggest story about the company was a leak that made them and their internal vibe-coding look totally incompetent, is a bit suspect.

by davebren

4/10/2026 at 7:58:21 PM

Your cynicism doesn't prove that it's fake, though.

by skybrian

4/10/2026 at 8:43:20 PM

You've got the causality backwards: the cynicism exists because it's most likely fake.

by davebren

4/10/2026 at 10:43:28 PM

A lesson of the parable about "crying wolf" is that cynicism based on previous events doesn't prove that the next event is fake. The people who ignored the warning may have thought it "most likely," but they were wrong.

by skybrian

4/11/2026 at 1:10:40 AM

Never use previous actions to predict future actions, genius advice.

by nothinkjustai

4/10/2026 at 4:18:57 PM

Just like their marketing campaign doesn’t mean those claims are real?

by nothinkjustai

4/10/2026 at 8:41:32 PM

I mean sure, they could be lying. It seems like a rather elaborate lie, though, considering that they got several other major companies to go along with it.

by skybrian

4/10/2026 at 4:56:48 PM

If it's all a marketing gimmick, then all the companies that have collaborated to patch their bugs are collectively lying. If that's the case, and they can get both OSS maintainers and the ones on the payrolls of Microsoft et al. to lie… hats off to them, honestly; they deserve all the marketing exposure.

by tokioyoyo