alt.hn

2/25/2026 at 12:20:50 AM

US Military leaders meet with Anthropic to argue against Claude safeguards

https://www.theguardian.com/us-news/2026/feb/24/anthropic-claude-military-ai

by KnuthIsGod

2/25/2026 at 4:27:09 AM

Something is deeply troubling when a company proclaims "We want to protect people" and the government's response is "then we can't work with you".

The fact that they would sacrifice countless use cases for real government efficiency that could help people, just because Anthropic refused to build killer robots, is baffling.

by cyrusradfar

2/25/2026 at 6:20:49 AM

Note that the threat in the Axios reporting the OP is based on is no longer "we can't work with you" but now "invoke the Defense Production Act to force the company to tailor its model to the military's needs".

On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...

by Maxious

2/25/2026 at 1:58:29 PM

These AI companies and billionaires won't learn any lessons, but I hope the likes of Marc Andreessen stop getting treated as intelligent, well-reasoned actors in the media and on podcasts after they bitched and moaned about the Biden administration overstepping. If someone thinks the pragmatic approach to resisting reporting requirements and export controls is to cozy up with the devil, who will force you into worse capitulations or just seize your whole company, then that someone (looking at you, Marc) is a fucking moron.

by nullocator

2/25/2026 at 4:16:25 PM

The military is about killing people tho

by davidguetta

2/25/2026 at 5:30:26 PM

Isn't there a saying about the US military being a logistics firm that sometimes carries guns? There are a lot of military activities that don't involve violence.

by vharuck

2/25/2026 at 5:55:03 PM

But all those activities serve the threat of violence, even if almost none of it ever materializes. It's logistics to build things so you might do violence.

by array_key_first

2/25/2026 at 5:24:33 AM

In a way it's a testament that the safeguards are working for someone, because the internet at large seems to be full of bypasses.

by Spivak

2/25/2026 at 10:19:46 AM

[dead]

by johnbarron

2/25/2026 at 2:49:13 AM

More government intervention in private enterprise? This pattern seems to be gathering steam; does that mean they're now subscribing to this model?

Or is this just par for the course and has always been going on, and it's only the reporting that's different, or the current context that makes it a more sensitive topic?

by BLKNSLVR

2/25/2026 at 3:11:38 AM

No, this is very unusual. The US government taking a 10% stake in Intel is very unusual.

There have been a few cases where national security prompted the government to nationalize private institutions: the railroads in WWI, steel mills in the Korean War, and CINB, a bank deemed a security risk for being too large.

This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

Wars are good for remaining in power. Dictatorship is good for remaining in power.

This is all very, very, very unusual in US history (except maybe when businesses tried to overthrow the government in the 30s but we don't talk about that).

by tototrains

2/25/2026 at 9:51:01 AM

> This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

I find this to be an unrealistic worry. Just like with MeToo, the perpetrators will eventually be protected. Just like with any previous abuses, including war crimes and so on, high-level people will be protected first, celebrated second, and then we will collectively move on to pretending they were being treated unjustly the whole time.

The number of people who think that the real victims of abuse are the perpetrators, and the real wrongdoers are the victims who talk about it, is just too high. It is rarely openly framed or phrased this way, the words used are always nicer, but this is the overall theme of things.

by watwut

2/25/2026 at 6:30:18 PM

Former Trump adviser Steve Bannon said:

"If we lose the midterms and we lose 2028, some in this room are going to prison, myself included."

Seems like it's not so unrealistic.

In the UK and elsewhere, arrests have already been made, and in the US the FBI director is drinking beers.

by superze

2/25/2026 at 3:44:58 AM

> (except maybe when businesses tried to overthrow the government in the 30s but we don't talk about that)

That doesn't feel familiar at all! This clearly is just yet another wrong, completely bonkers conspiracy-theory! Just like all the others! No cheese pizza eating billionaires would ever even think of this!

by 5o1ecist

2/25/2026 at 3:46:16 AM

Did you drop your /s? (or, alternatively: I just don't know what's /s and what's not anymore?)

by BLKNSLVR

2/25/2026 at 9:53:08 AM

Poe's law strikes again!

by fragmede

2/25/2026 at 6:27:10 AM

Four exclamation points in a row is at least as clear as /s.

by sfink

2/25/2026 at 10:04:49 AM

Indeed. I thought it was obvious, exactly because of the excessive use of exclamation marks.

by 5o1ecist

2/25/2026 at 11:40:41 PM

I didn't even twig to the possibility that it might have been sarcasm until "cheese pizza eating billionaires".

I've seen too many serious posts containing such fervor and grammar (although mostly not on HN).

by BLKNSLVR

2/25/2026 at 11:34:23 AM

>This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

This is a bold claim that requires some evidence to accompany it.

So far there's been very little in the Epstein files to implicate anyone of consequence in any criminal activities.

When the rare documents that actually did offer evidence of potentially criminal behavior surfaced, Andrew and Mandelson were swiftly arrested. We can see that the evidence is being acted upon, it's just not very exciting.

by JasonADrury

2/25/2026 at 1:36:31 PM

> So far there's been very little in the Epstein files…

Numbers get thrown around, some suggesting only 2% of the files have been released.

I'm confident that even if 99% of the files were eventually released that the last 1% held back are far and away the most damning.

by JKCalhoun

2/25/2026 at 2:01:54 PM

Possible, but we don't really have any genuinely convincing reason to believe that there are any particularly damning (in terms of criminal conduct) files there.

by JasonADrury

2/25/2026 at 4:39:35 PM

There are many genuinely convincing reasons to believe that.

The simplest is Trump administration aggressively asserting the importance and impact of Epstein's network, followed by excuses to downplay the impact and prevent release of the files (going as far as calling it a hoax, claiming Trump was an FBI informant, only investigating Democrats). These contradictory deflections are genuinely convincing reasons to believe that there is more damning evidence which they're trying to cover up.

More convincing reasons that there is further evidence of crimes in the unreleased files:

- Witness/survivor testimony. Many victims have come forward, several naming Trump officials directly.

- Epstein ran a sex-trafficking network which is thoroughly documented in the released files. Over half the files have not yet been released.

- Many questionable/excessive redactions which US lawmakers have called "inappropriate". US lawmakers have also said that removal of certain files is illegal under the Epstein Act.

- Epstein's extensive political, financial, and legal networks include lots of high-profile figures which have already caused confessions, firings, resignations, and arrests.

There are a lot more reasons, this was just off the top of my head.

by text0404

2/25/2026 at 3:33:50 AM

Yes, the government pays (lots of money) for Claude Gov, which they use on their networks.

In my experience they very much do not want to be told what they can and cannot do with the things they purchase. I'm surprised the deal got done at all with these restrictions in place.

by dillona

2/25/2026 at 3:45:42 AM

Purchasing a service is different from purchasing the company, though.

As such I agree with the surprise at the deal getting done at all.

by BLKNSLVR

2/25/2026 at 3:01:46 AM

It's been all of 3 days since Claude decided to delete a large chunk of my codebase as part of implementing a feature (it couldn't get it to work, so it deleted everything that was triggering errors). I think Anthropic is right to hold the line on not letting the current generation delete people.

by hansvm

2/25/2026 at 5:54:05 AM

You didn't use git with a remote repo? Or did it somehow delete the repo? Or perhaps you didn't commit and check out a feature branch before it ran?

by notepad0x90
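[Editor's note: the protective workflow the comment above alludes to, committing a checkpoint and branching before an agent run, can be sketched as below. Paths, branch names, and the demo identity are illustrative, not any particular tool's convention.]

```shell
# Illustrative pre-flight checkpoint before letting a coding agent run.
# Worst case, the agent clobbers a scratch branch you can throw away.
set -eu
cd "$(mktemp -d)"                       # stand-in for your project directory
git init -q
git config user.email "dev@example.com" # placeholder identity for the demo
git config user.name "dev"
git checkout -qb main
echo 'print("hello")' > app.py
git add -A
git commit -qm "checkpoint before agent run"

git checkout -qb agent-scratch          # the agent works here; main stays safe
echo 'broken' > app.py                  # simulate the agent clobbering a file
git checkout -q main -- app.py          # one command restores the checkpoint
git checkout -q main                    # back on the safe branch
cat app.py
```

A real setup would also push to a remote, which is what protects against the failure mode another commenter reports, where the agent deletes the whole directory including .git.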

2/25/2026 at 12:08:48 PM

I used git and appropriate sandboxing. It was fine. I just reset the sandbox and went on with my day.

It still made that decision though, and we don't have git for people.

by hansvm

2/25/2026 at 2:36:31 PM

You're 100% right, no respawns IRL.

by notepad0x90

2/25/2026 at 1:44:35 PM

(You might be missing the analogy being made.)

by JKCalhoun

2/25/2026 at 2:36:55 PM

I probably am, sorry :)

by notepad0x90

2/25/2026 at 11:20:38 AM

>You didn't use git with a remote repo?

He said it tried, not that it succeeded.

by cucumber3732842

2/25/2026 at 10:12:11 AM

I've had codex delete the entire project directory including .git, and the only thing that saved me was a remote copy.

by fragmede

2/25/2026 at 4:13:00 AM

I'm not blaming you, but it's scary how many people are running these agents as if they were trusted entities.

by AlexCoventry

2/25/2026 at 5:57:24 AM

They're tools; you don't ascribe trust to them. You trust or distrust the user of the tool. It's like saying you trust your terminal emulator. And in my experience, they will ask for permission over a directory before running. I would love to know how people are having this happen to them. If you tell it it can make changes to a directory, you've given it every right to destroy anything in that directory. I haven't heard of people claiming it exceeded those boundaries and started messing with things it wasn't permitted to mess with to begin with.

by notepad0x90

2/25/2026 at 10:15:40 AM

That would be --dangerously-skip-permissions for Claude Code, and --dangerously-bypass-approvals-and-sandbox for codex.

Aka yolo mode. And yes, people (me) are stupid enough to actually use that.

by fragmede

2/25/2026 at 2:39:05 PM

It's a people problem then. Not blaming here, I'm just saying it isn't the tool being untrustworthy. I too get burned badly when I play with fire.

by notepad0x90

2/25/2026 at 1:49:33 PM

OK, but we learned decades ago about putting safety guards on dangerous machinery, as part of the machinery. Sure, you can run LLMs in a sandbox, but that's a separate step, rather than part of the machinery.

What we need is for the LLM to do the sandboxing... if we could trust it to always do it.

by AnimalMuppet
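[Editor's note: a minimal sketch of the kind of built-in guard the comment above asks for — a harness-side path check that refuses any file operation escaping a designated sandbox directory. The `confine` function and paths are hypothetical, not any real agent's API, and `realpath -m` assumes GNU coreutils.]

```shell
# Toy harness-side guard: canonicalize a model-proposed path and refuse
# anything that resolves outside the sandbox root.
sandbox_root="$(mktemp -d)"

confine() {
  # realpath -m canonicalizes without requiring the file to exist
  target="$(realpath -m "$sandbox_root/$1")"
  case "$target" in
    "$sandbox_root"/*) printf '%s\n' "$target" ;;     # inside sandbox: allow
    *) echo "refused: '$1' escapes the sandbox" >&2; return 1 ;;
  esac
}

confine "src/main.py"                       # allowed
confine "../../etc/passwd" || echo blocked  # traversal attempt: refused
```

The point is that the check lives in the machinery that executes the operation, not in the model's judgment, so it holds even when the model "auto-completes" something destructive.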

2/25/2026 at 2:42:58 PM

Again, the trust is for the human/self. It's auto-complete; it hallucinates and commits errors. That's the nature of the tool. It's for the tool's users to put appropriate safeguards around it. Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and start burning your cloth when you expose your arm to it. You're expecting a dumb tool to be smart and know better. I suspect that is because of the "AI" marketing term and the whole supposition that it is some sort of pseudo-intelligence. It's just auto-complete. When you have it run code in an environment, it could auto-complete 'rm -rf /'.

by notepad0x90

2/25/2026 at 4:39:40 PM

> Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and start burning your cloth when you expose your arm to it.

True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.

But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).

We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)

by AnimalMuppet

2/25/2026 at 10:21:54 PM

I think I'm in agreement with you. But regardless of expectations, the tool works a certain way. It's just a map of its training data, which is deeply flawed but immensely useful at the same time.

Also, in that analogy the LLM is the fire, not the furnace. If you use codex, for example, that would be the furnace, and it does have good guardrails; no one seems to be complaining about those.

by notepad0x90

2/25/2026 at 3:48:06 AM

Unfortunately I think the 'death by algorithm' rubicon has already been crossed, even by the US.

by BLKNSLVR

2/25/2026 at 12:02:28 PM

Hopefully they won't allow it to launch the nukes without input from the individuals in charge.

by petre

2/25/2026 at 3:09:54 AM

I love watching the plot lines of The Terminator play out in real life.

by SoftTalker

2/25/2026 at 3:49:58 AM

Isn't it neat... I mean stupid.

I saw a quote today from Vonnegut: "We’ll go down in history as the first society that wouldn’t save itself because it wasn’t cost-effective."

by mbxy

2/25/2026 at 6:34:29 AM

Saving yourself is always cost effective…

In fact no matter the costs, the cost of not saving yourself is infinite.

by juleiie

2/25/2026 at 11:58:05 PM

That model breaks when you don't have perfect knowledge of whether or not you will perish. Therefore in every practical situation we are forced to assign a finite cost to risk. And generally people tend to prefer tiny increases to societal risk over compromising their personal comfort.

by andriamanitra

2/25/2026 at 12:51:04 PM

Explaining this to fossil fuel advocates is surprisingly challenging.

by dtj1123

2/25/2026 at 12:09:29 PM

Rationally, yes. Humans are too often irrational.

by reverius42

2/25/2026 at 12:19:49 PM

No, I believe that financially successful humans, as single units, are extremely rational and coldly calculating.

The problem is that this rationality is often centered on a single beneficiary (You) because why would you care about any other beneficiary?

However, time and time again it turns out that no company is as evil as government. Hence I am an anarcho-capitalist.

On the whole, even with every company thinking only about itself, it is a distributed system, self-sustaining and self-correcting. No single unit has unlimited power.

Historically it’s always the governments that are vastly more evil and chaotic than any private enterprise ever conceived.

And now we can see another example of this from the US government. No company could ever get as corrupt and evil as the current American elected officials.

by juleiie

2/25/2026 at 1:40:29 PM

You seem to tacitly acknowledge corporate America can also be evil, just not as evil as government can be? Why put corporate America on a pedestal at all then? Why content yourself with what you consider the lesser of two evils?

Demand accountability from your elected officials. It can be done by not electing them. You have no such agency over corporate America (short of boycotting, I suppose).

To my eye the U.S.'s highest elected official is in fact also a company.

by JKCalhoun

2/25/2026 at 6:16:28 AM

Anthropic winning big points with me for this one to be honest. Reminiscent of the Apple vs FBI days almost a decade ago

by sam0x17

2/25/2026 at 1:46:32 PM

Winning? It's not over yet. And I still feel out in the dark as to what is really going on in the backroom. But that seems, more than it ever has, to now be a part of the society we have found ourselves in.

by JKCalhoun

2/25/2026 at 10:04:34 PM

Oh, I mean the Defense Production Act, if applied here, would completely tie their hands, so it's most likely symbolic resistance at the end of the day, unless the government doesn't have the will to apply it here.

by sam0x17

2/25/2026 at 5:52:28 AM

If only a time-traveling robot and his human companions were to pay a visit to the decision makers at Claude (aka Cyberdyne? :)).

What are they using it for, though? Target selection for precise strikes? I'm guessing their argument will be that fewer lives will be lost if Claude assists with making sure the attacks are surgically precise?

by notepad0x90

2/25/2026 at 1:52:35 PM

"Review our targeting algorithm and suggest improvements."

Or the mass surveillance bit, "Parse this dataset and come up with a list of people whose emotions are most rapidly shifting towards violence."

by halls-940

2/25/2026 at 12:08:30 PM

What they've always been used for:

> Cold War computers were primarily driven by military necessity, focusing on nuclear weapon simulation, ballistic missile trajectory calculation, and cryptography to support Mutually Assured Destruction (MAD). Key uses included modeling hydrogen bomb design using Monte Carlo methods (e.g., on MANIAC), air defense systems like the Navy’s NTDS, and early AI for strategic planning.

by petre

2/25/2026 at 2:10:46 AM

"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."

I hadn't realized. This does make me consider using alternatives more.

by jmward01

2/25/2026 at 3:12:10 AM

This is most likely because getting SaaS software to conform to federal regulations and to promise the security needed by the US military is difficult and expensive. FedRAMP is onerous.

And LLM products are new-ish. It suggests that Anthropic made federal government contracts a priority while OpenAI, Alphabet, and AWS didn't.

by thephyber

2/25/2026 at 2:34:32 AM

They always focused on safety (their own safety). They only backed off from the US military once they got bad press. As usual, they are not an ethical company. I can't say it's especially bad, as all corporations are the same. Just don't look at the illusion they create.

If you look at my post history you can see I’m always calling them out about how sketchy they are.

by skeptic_ai

2/25/2026 at 6:29:29 AM

Yesterday I was trying to figure out whether my expired nacho dip would be safe to eat, and wanted to know how much botulism toxin would be dangerous if I ate it, so I asked Claude. It refused to answer the question, so I can see how the current safeguards can be limiting.

by chrischen

2/25/2026 at 4:13:39 AM

Kind of wild given the outcome appears to be https://time.com/7380854/exclusive-anthropic-drops-flagship-...

by chid

2/25/2026 at 4:25:19 AM

Utterly unrelated: the RSP had nothing to do with their usage terms and was entirely about the research and release of high-capability models.

by Sebguer

2/25/2026 at 4:17:26 AM

Feels like they'll use it for purposes Anthropic didn't approve of, and then turn around and blame them when it turns out asking ChatGPT to determine which ships are hostile was a bad idea.

by nitwit005

2/25/2026 at 6:18:34 AM

That's exactly what's going to happen because it's already happening: companies and people leaped on "it wasn't us it was the AI" immediately.

by XorNot

2/25/2026 at 8:55:49 AM

There’s a conflict here that’s nothing to do with the ethical dimension: Claude is regarded as a high-quality model at least in part because it’s critical about what it’s doing. The military, on the other hand, doesn’t really encourage introspection. Even without ethical considerations, there’s always going to be a tension between quality and obedience.

by moomin

2/25/2026 at 11:27:43 AM

The military has its own mechanisms for assessing the quality of its own output. They might be imperfect, but they're there. They don't need that from Claude.

What they need is for it to not say "it seems you're trying to build a weapons system, can you please not do that" when someone asks it to sanity-check something that's on the edge of their technical expertise. Like making sure their proposed antenna dome is aerodynamically sane at transonic speeds so the aero guys don't have to waste time rejecting it outright. Or they need it to not paternalistically screech about safety when someone tells it to read the commercial user manual for some piece of equipment and then append into the usage sections all the non-OSHA stuff the military does when things don't work quite right.

by cucumber3732842

2/25/2026 at 9:06:06 AM

No, the military probably wants prompts like "how to make a missile" to be answered.

by varispeed

2/25/2026 at 4:52:38 AM

Read: as usual, the USA doesn't like it when a company doesn't give them what they want.

Awwwnnn, poor thing :)

It is like US big tech being mad because the Chinese AI companies are stealing their data just like, wait for it, US big tech stole the data from artists worldwide to train their models.

The sweet payback in the name of every single artist/company that has been affected by US greed.

Karma is a btch!

by h4kunamata

2/25/2026 at 5:58:46 AM

That's every country in the world...

"America bad" is no longer trendy or edgy, if you haven't heard. There is no pretense otherwise by anyone anymore.

by notepad0x90

2/25/2026 at 2:47:56 AM

All of this is kind of weird.

https://www.bbc.com/news/articles/cjrq1vwe73po

> the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.

> The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk.

*Supply chain risk*?

The BBC article seems to imply that the government wants to audit Anthropic.

This, coming at the same time those "distillation" claims were published, is all incredibly suspicious.

by gaigalas

2/25/2026 at 3:19:54 AM

All of the coverage of this is about the negotiation points of Anthropic vs Pentagon.

Anthropic doesn’t want their software used for certain purposes, so they maintain approval/denial of projects and actions. I suspect the Pentagon doesn’t want limitations AND they dislike paying for software/service which can be withheld from them if they are found to be skirting the contractual terms.

And THAT is why the Pentagon is using maximum leverage (threatening Anthropic as a supply chain risk label).

by thephyber

2/25/2026 at 5:58:30 AM

> Anthropic doesn’t want their software used for certain purposes

How do you know the government asked for a specific use case?

As far as I know, the meeting was private and we don't know what they talked about. I haven't found a single official press release or verified statement that supports this.

The verified statements I found are just about the government wanting unrestricted access. That alone is not enough to imply "no guardrails". As I mentioned before, it could be just for auditing (especially in the light of current events involving distilling of the models).

I think it's an extraordinary coincidence that this happened soon after the distillation thing. And I don't know what it means if it's not a coincidence.

by gaigalas

2/25/2026 at 3:05:33 AM

Supply chain risk is a very specific designation, meaning not only would Anthropic lose Pentagon contracts, but no other company with Pentagon contracts would be allowed to use them either. It would have the effect of being a near industry-wide blackballing of Anthropic given all the major companies that have contracts with the DoD.

by hn_throwaway_99

2/25/2026 at 3:08:12 AM

Yes. Incredible, isn't it? I'm curious what would make the government do that.

by gaigalas

2/25/2026 at 3:14:04 AM

_The Art of the Deal_.

The US federal government is no longer a good faith actor acting on behalf of American citizens and following US law, but now an autonomous corporation aiming to “get the best deal” via maximum leverage.

by thephyber

2/25/2026 at 12:02:37 PM

Dario Amodei used almost every public interview he gave to press the "Protect America, it's a matter of national security, ban Chinese exports" line.

He was clearly dancing to the DoD's tune, like he REALLY wanted a DoD contract, which he eventually got.

But that's not the point. I'm talking about how coincidental all of this is with the recent Anthropic blunder with the distillation thing. That is my main point, which you dismissed.

by gaigalas

2/25/2026 at 10:46:58 AM

They thought it would work, and it did?

by pjc50

2/25/2026 at 3:15:01 PM

I can think of at least one possibility - confidentiality failure. If the customer data was not contained - especially if it was DoD data - that would be reason to do such a thing.

by AnimalMuppet

2/25/2026 at 6:06:27 AM

Person of Interest... who is gonna build the 'Machine'?

by yanhangyhy

2/25/2026 at 3:42:12 AM

Claude is now the official LLM for Sauron and his killers.

by KnuthIsGod

2/25/2026 at 7:15:44 AM

> Both xAI and OpenAI have agreed to the government’s terms on the uses of their AI,

Uh... so why doesn't the US government simply work with OpenAI and xAI? Why do they have to use Claude?

by raincole

2/25/2026 at 7:16:48 AM

It seems odd to me that the military doesn’t already have far superior models.

by teh_infallible

2/25/2026 at 3:56:44 AM

As long as The Boring Company can drill a private Cheyenne Mountain-style bunker in some granite mountain for the billionaires, and a new bunker is constructed under the Silicon Valley-financed White House ballroom for the politicians, everything is just fine.

Hegseth and Rubio already live on a military base because they are afraid.

by trlakh

2/25/2026 at 6:20:45 AM

I guess that's fortunate then because the Boring Company notably really sucks at drilling anything.

by XorNot

2/25/2026 at 1:52:37 AM

It's inexcusable that the AI companies have not formed a united front against this. I've been skeptical of the idea that OpenAI leadership is outright MAGA, but even pure self-interest does not explain staying silent while the Pentagon demands autonomous killbots.

by SpicyLemonZest

2/25/2026 at 1:59:44 AM

Brockman donated $25,000,000 to the MAGA super PAC; how much more 'outright' would you like him to be, haha.

by Sebguer

2/25/2026 at 3:06:06 AM

This is not only a big donation. It is actually the BIGGEST donation by any single individual.

by LarsDu88

2/25/2026 at 2:34:27 AM

Shareholder value and MAGA value are a Venn diagram of optical illusion.

by cyanydeez

2/25/2026 at 2:11:38 AM

He claimed, and until today I was willing to give him the benefit of the doubt, that he was trying to curry favor with a notoriously bribe-able President. Not exactly a paragon of moral virtue, but I wouldn't be able to do business with nearly any company in the US if I made that a dealbreaker. This clears the bar where I'm willing to cut ties and demand that everyone else do the same.

by SpicyLemonZest

2/25/2026 at 2:57:06 AM

We must join with him, we must join with Sauron.

by _aavaa_

2/25/2026 at 3:18:25 AM

Sauron might win, don't want to risk being on the wrong side of the post-apocalypse

by tototrains

2/25/2026 at 3:41:05 AM

Just because you're on Sauron's side when it wins, doesn't mean you'll be on Sauron's side at any other point in the future.

One of the things I find interesting about classifying literally any kind of trait within bounds of 'normality', and the culling/suppressing/discouragement of anything outside of that definition, is that there will always be new 'edges', and in short order these edges will be 'other', suddenly outside the definition, because times are bad and it has to be someone's fault.

And so on ad infinitum, until the single supreme ruler is the one entity representative of normal, atop a mountain of dead abominations.

by BLKNSLVR

2/25/2026 at 3:56:36 AM

> One of the things I find interesting about classifying literally any kind of trait within bounds of 'normality', and the culling / suppressing / discouragement of anything outside of that definition, is that there will just be new 'edges'

There is a general rule I discovered many years ago, through playing EVE ONLINE and learning to understand how society works. Not the modern EVE ONLINE, the old EVE ONLINE. It was really good for that.

Every new generation grows up with a new norm. Whenever hardships or challenges are removed, the new generation, having never needed to learn how to deal with them, will have a lower tolerance for them in general.

Your "new edges" generally aren't actually new. They've always been there. It was just that nobody really cared, because they weren't the end of the world: People knew worse.

It's a self-destructive downwards spiral.

by 5o1ecist