2/25/2026 at 10:20:15 AM
> “We felt that it wouldn't actually help anyone for us to stop training AI models,”How magnanimous! They are only thinking of others, you see. They are rejecting their safety pledge for you.
> “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.
For all of you who thought Anthropic were “the good guys”, I hope this serves as a wake up call that they were always all the same. None of them care about you, they only care about winning.
by latexr
2/25/2026 at 12:11:07 PM
Indeed, Anthropic can’t afford to be the ones that impose any kind of sense in the market - that’s supposed to be the job of the government by creating policy, regulations and installing watchdogs to monitor things.But lucky for the AI companies, most of them are based in place that only has a government on paper and everyone forgot where that paper is.
by isodev
2/25/2026 at 12:24:03 PM
The government is why they are dropping their pledge.https://apnews.com/article/anthropic-hegseth-ai-pentagon-mil...
by nickserv
2/25/2026 at 12:53:01 PM
That's because their government is asking for things that shouldn't be asked - again, no regulation, no oversight.by isodev
2/25/2026 at 1:25:37 PM
The government is forcing them to change their policy, by definition that is regulation and oversight.Let's say that the government was forcing a company to change their overall right-to-repair or return policy in order to avoid being on a blacklist, would that not be seen as oversight and regulation?
Whether the regulation is legitimate or of benefit is a different argument.
by nickserv
2/25/2026 at 2:31:11 PM
You misunderstand - a government normally represents the people, we appoint them to well, govern, in our name. I understand how this is confusing in a place like the US, where the government often seems to represent the business (or lately a small group of poor examples of humanity), not the people.by isodev
2/25/2026 at 5:57:17 PM
This is condescending and fails to clarify your point at all. Are you saying there is no oversight or regulation in governance? Or that there is no oversight on AI? That a government pressuring a private company to change a policy is not regulation or oversight?by mcmcmc
2/25/2026 at 2:53:09 PM
Normally?All governments are in the egg-breaking business some of the time. Most of them are most of the time. Some of them all of the time.
Very few are good at making omelettes.
by peterfirefly
2/25/2026 at 1:59:22 PM
I think GP was referred to lack of regulation and oversight over the government.by GrinningFool
2/25/2026 at 2:06:00 PM
Of course, but that is incoherent. Regulation and oversight is government.by lupire
2/25/2026 at 2:49:08 PM
No, it is a famously coherent concept over millenia.Quis custodiet ipsos custodes?
"Who will guard the guards themselves?" or "Who will watch the watchmen?"
>>A Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?". ... The phrase, as it is normally quoted in Latin, comes from the Satires of Juvenal, the 1st–2nd century Roman satirist. ...Its modern usage the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach... [0]
The point is a government that is not overseen by the people devolves into tyranny.
So yes, the point is to regulate the regulators and oversee the oversight committee.
Anthropic was happy to have it's AI used for military purposes, with two exceptions: 1) no automated killing, there had to be a human in the "kill chain" of command, and 2) no use for mass surveillance. This govt "Dept of War" is demanding Anthropic drop those two safety requirements or it threatens to make Anthropic a pariah. These demands by the govt are both immoral and insane. The "regulator and overseer" needs to be regulated and overseen.
[0] https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%...
by toss1
2/25/2026 at 8:28:45 PM
Alas, historically speaking, most governments have been tyrannies. In recent decades, some of them have been less so, or slightly more representative or transparent. I think in Switzerland they go to referendums often. Beyond that, once you vote for a party due an issue you deeply care about, they get to do whatever they want day to day, without citizens having a regular recourse to stop them. Yes people can go to the streets and fight the police that defends the government. But there's not a constitutional mechanism which is "citizen can push this button to override the senate and/or veto what the president wants" or "all security forces are subordinated first and foremost to citizen consensus on the area where they operate".by dsign
2/25/2026 at 4:41:37 PM
The government doesn't seem to be forcing them to do anything. They're saying that doing business with them is contingent upon changing the policy. So, they could simply stop doing business with the government.Hegseth could come to my house today and tell me that I need to start kicking puppies in order to do business with him, and I could just say no. No coercion happening.
by ryandrake
2/25/2026 at 6:01:27 PM
If they comply, they can continue bidding on government contracts.If they refuse, they will be put on a national security blacklist, like for Huawei's telecommunication equipment.
Seems pretty forceful to me.
by nickserv
2/25/2026 at 4:57:16 PM
they only care about winningTo be fair, this is true in nearly all industries and for nearly all companies. Almost everyone is chasing money and monopoly. Not that it makes it right, just pointing out it isn’t unique or even interesting about the AI companies
by akudha
2/25/2026 at 10:27:44 PM
Of course, but Anthropic is particularly insufferable in this respect.by amunozo
2/25/2026 at 1:21:36 PM
Since it is all about money, I just did vote with my wallet and cancelled the Max subscriptionby nsbk
2/25/2026 at 1:42:32 PM
If you're a U.S. citizen, tax dollars from you and others will backstop any cancelled subscriptions, I guess good on you for not trying to pay them twice, though you get zero benefit with this approach.by nullocator
2/25/2026 at 2:00:24 PM
You've succinctly identified and communicated a real problem. In your opinion, what is the best approach, if any, to attempt to address it?by vibrio
2/25/2026 at 2:19:19 PM
> In your opinion, what is the best approach, if any, to attempt to address it?There aren't many options for fighting the tax man, "In this world nothing can be said to be certain, except death and taxes". You're only option is to leave the US for somewhere better.
by chasd00
2/25/2026 at 2:37:49 PM
I guess you don't know about how taxes work for Americans? Living abroad typically changes nothing, they still owe tax.Maybe an American can chime in here on this...
by b112
2/25/2026 at 4:58:05 PM
Correct, the US is one of the few countries that tries to collect (Federal) income tax from all citizens regardless of the country they are currently living in. To be fair, when you can prove that income is entirely foreign (not a single US company in the chain of ownership) that income becomes almost entirely deductible and the tax reporting essentially just a census on how well US citizens are doing from an income standpoint globally. (For people that want economics analyses of US influence in global politics, that census can be handy to spin.)I think the root problem with how the US currently spends its tax dollars is the above "vote with your wallet" belief in the first place. "Vote with your wallet" implies that the rich deserve more votes. That's not (representative) democracy, that is oligarchy. Right now the US has two political parties that are both "vote with your wallet parties". They both act like they are bake sales that constantly need everyone's $20 bills just to "survive", but as much as anything they are trying to make US citizens complicit in agreeing that the rich deserve more votes and should control more US policy.
I think the only real solution to a lot of US ills is drastic Campaign Finance Reform.
by WorldMaker
2/25/2026 at 11:25:22 PM
Minor correction, expat income is deductible up to (currently) $130k under the FEIE. After that it's taxes as usual. There's also an array of other mandatory forms like FBAR for foreign accounts, and the nightmare that is form 5471, with absolutely wild allowances for the IRS to impose penalties, often with no statute of limitations and per-violation fines. For example, a US citizen with multiple bank accounts and a mistake in FBAR reporting for multiple years running will be liable for the (iirc) $10,000 fine for each bank account, and each year (e.g. 4 accounts, 8 years, $320,000 fine).Living and doing business overseas is as a US citizen is a high risk endeavor.
by telchior
2/25/2026 at 5:54:26 PM
Yes, many countries have significant limits on campaign donations. Even third parties are restricted from advertising on behalf of a party, and so on.So no company can simply donate large sums of money, nor can any single person.
The goal is that individuals will be the largest donors, not companies, and that as everyone is capped in the same way, advertising will be a more level playing field. We don't want money in politics. At the same time, we want all parties to get their message out there, their message heard.
It's not perfect. There are issues. But this business of democracy should be taken seriously.
by b112
2/25/2026 at 6:28:50 PM
The US technically even has laws that that were supposed to do that still on the books. A particular problem was a very broken decision by the US Supreme Court in Citizens United v. Federal Election Commission [1] that opened too large of a barn door that the US has been reeling from ever since. That trial argued that companies were individuals/people and that money was the "free speech" of companies and shouldn't ever be curtailed. So there are so many things wrong with that court case on so many levels. It led to the rise of Super PACs (Political Action Committees), companies designed to launder money for political gain where the donors are allowed to remain anonymous and the Super PAC "speak" for them, because now it was "free speech" and not bribes and regulatory capture.I know pessimists that believe the only way the US succeeds in the Campaign Finance Reform it needs now is through a Constitutional Amendment and if we can't count on Congress to be interested in it (due to bribery), and not enough individual States seem to care (some because they want a chunk of that pie), it's going to take a full Constitutional Convention to pass that amendment, something that hasn't successfully been done in the US since 1787 (also, the first attempt).
by WorldMaker
2/25/2026 at 6:35:46 PM
There have been some fairly longstanding judicial decisions overturned recently, although I know the reasons are not in alignment with the decision you mention, it does mean there is hope for such change.So maybe it's actually far less work than considered. Maybe, attacking the decision with a modern eye is helpful.
by b112
2/25/2026 at 6:57:59 PM
Citizens United was a 2010 decision. Several of the judges on that case are still sitting judges in the Supreme Court. Since then one of the Congressional oversight decisions on vetting replacements for Supreme Court judges has been whether or not they (at least claim to) agree with the Citizens United decision.The decision was made in the modern eye, in my lifetime. (The country needed modern Campaign Finance Reform before that point as well, but that decision marks an inflection point from Campaign Finance Reform feeling possible through normal means and court decisions to nearly impossible to overturn in our lifetimes.)
by WorldMaker
2/25/2026 at 7:02:18 PM
I agree the US needed reform well before then, that's why I thought it was more historical. Unfortunate.by b112
2/25/2026 at 3:11:47 PM
For the ultra-wealthy, leaving the United States is rarely the preferred strategy; instead, they use their immense resources to legally reshape the tax code and utilize complex loopholes. Billionaires like the Koch and Scaife families historically avoided massive estate and gift taxes by creating "charitable lead trusts" and private foundations. This allowed them to pass fortunes down to their heirs tax-free, provided they donated the interest to charities (which they often controlled) for a set period. A powerful approach is to fund political movements to slash taxes for the top brackets. For example, a coalition of eighteen of the wealthiest US families spent nearly half a billion dollars collectively to successfully lobby for the reduction and eventual repeal of the "death tax" (estate tax), saving themselves an estimated $71 billion.And, of course, in the ancient world, free citizens of Greece and Rome considered direct taxes tyrannical and usually avoided them, leaving such burdens to conquered populations.
So I guess taxes are uncertain, but only for the oligarchy.
by rexpop
2/25/2026 at 11:45:52 AM
> Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.I mean, yes, that is actually how world works. That is why we need safety, environmental and other anti-fraud regulations. Because without them, competition makes it so that every successful company will fraud, hurt and harm. Those who wont will be taken over by those who do.
by watwut
2/25/2026 at 12:05:15 PM
Yes, this. It's unfortunate that anthropic dropped this and it's also exactly how the system is supposed to work. Companies don't regulate themselves, the government regulates the companies.Now, you may notice that the government is also choosing not to regulate these companies...which is another matter altogether.
by rco8786
2/25/2026 at 12:48:03 PM
It's so much worse than that. The government actively encourages a lack of business ethics. Heck, it started the term with a crypto rug pull. Money continues to funnel upward to all the worst players, and watchdogs are being targeted and destroyed. Even if you get new people in power, you're going to find the upper echelons completely full of outlandishly wealthy, morally bankrupt individuals that are very politically active. And now they have access to all of our communications and an AI to sift through it looking for dissent (or to spark its own). I guess this is the end game of "move fast and break things." The situation was never good, but it continues to get worse at an alarming rate.by ozmodiar
2/25/2026 at 1:23:04 PM
> Heck, it started the term with a crypto rug pullIf you ask me... that wasn't a rug pull, at least not in the intent - it more was a way for foreign actors to funnel money directly to Trump and his family without any trace.
by mschuster91
2/25/2026 at 2:17:38 PM
Cryptocurrency is the most traceable money in the world. Cryptocurrency is for implusible deniability, not untraceability.by lupire
2/25/2026 at 1:38:23 PM
There is plenty of precedent that companies are expected to regulate themselves. If you are in the US and perform an engineering role without a license or without working under someone with a license, it’s because of an “industrial exemption.” The premise is that companies have enough standards and processes in place to mitigate that risk.However, there is also plenty of evidence that this setup may no longer work. It seems like the norm has shifted, where companies no longer think it’s their duty to manage risk, only to chase $$$. When coupled with anti-government rhetoric, it effectively socializes the risk to the public but not the profits.
by bumby
2/25/2026 at 10:27:37 PM
The entire system you just described is government regulation.> without a license
A government issued license.
> it’s because of an “industrial exemption.”
A government allowed exemption.
Etc.
Agree with your second paragraph.
by rco8786
2/25/2026 at 2:22:59 PM
Am exemption from PE stamping (misguided as it maybe) does not mean unregulated. There are still regulations on designs and builds.by lupire
2/25/2026 at 2:28:56 PM
True to an extent, but those regulations tend to downstream of bad things happening.The exemption means “self-regulation” which is what the OP was speaking to. There are industrial standards, for example, but that’s not a governing body. You can create a design that goes against a standard and there’s nothing to stop you from releasing it to the public. The same can’t be said for those who require licenses and stamped designs. There’s also no explicit individual ethics codes in exempted industries. In contrast, a stamped design is saying the design adheres to good standards.
Apropos to HN, somebody could write safety critical software with emergency braking delays because of nuisance alarms and put it on the street without any licensed engineer taking responsibility for it. The governance only comes after an accident and an NTHSB investigation.
by bumby
2/25/2026 at 6:29:48 PM
> anthropic dropped this and it's also exactly how the system is supposed to work. Companies don't regulate themselves, the government regulates the companies.In this case, it's exactly how it's NOT supposed to work because there's no government regulation concerning the issue. It would be bad looks to have regulation that mandates LESS safety thus the issue was forced on commercial grounds.
I called it yesterday, there was never any doubt in my mind how this would end, and it did in less than 24 hours:
by bigbadfeline
2/25/2026 at 10:28:24 PM
> because there's no government regulation concerning the issueYea, see the next sentence in my post :-/
by rco8786
2/25/2026 at 1:07:56 PM
> I mean, yes, that is actually how world works.And soon enough, it won’t work at all because of it.
> Those who wont will be taken over by those who do.
And if you compromise on your core values because of money, they weren’t core values to begin with¹. “I want to be ethical but if I am I won’t get to be a billionaire” isn’t an excuse. We shouldn’t just shrug our shoulders at what we see as wrong because “everybody does it” or “that’s just business” or “that’s life”. Complacency and apologists are how a bad system remains bad.
https://www.newyorker.com/cartoon/a16995
¹ I’m willing to give leeway to individuals. You can believe stealing is wrong but if you’re desperate and steal a loaf of bread to feed your kid, there’s nuance. A VC-backed company is something entirely different.
by latexr
2/25/2026 at 11:03:48 AM
[flagged]by davidguetta
2/25/2026 at 2:19:16 PM
Was there actually a case of a model saying "America's founding father were black women", or is that just Elon fingering your amygdala with a ridiculous hypothetical that exists nowhere other than Elon's mind in order to justify Elon's personal bias tweaks when he doesn't like the wisdom-of-the-crowds answer his tools initially give?by floatrock
2/25/2026 at 2:22:29 PM
There were well-publicized cases of Gemini producing more diverse founding fathers images, female popes, etc.Also, snarky tone is against the HN guidelines.
by bumby
2/25/2026 at 2:38:53 PM
Sorry, let me give a specific citation of Elon injecting his personal bias into the output of his tools: https://www.theguardian.com/technology/2025/jul/14/elon-musk...As for the "Elon fingering your amygdala with a ridiculous hypothetical" snark, well, I think the HN crowd in particular understands how the culture wars are just theater to push through billionaires' personal self-centered interests at the expense of everyone else. If that level of pull-aside-the-curtains pragmatism is really "snark against HN guidelines", well, I think 3/4 of the comments on the site would be flagged and deleted.
by floatrock
2/25/2026 at 2:48:02 PM
Your question was “Was there actually a case of a model saying "America's founding father were black women"Whether someone else is injecting different bias is whataboutism. So it seems you are trying to make a different point, but not being clear about it.
And your “I think the HN crowd understands…” point is just a “no true Scotsman” fallacy to veil an argument that goes against guidelines. Related to the broader topic, there is a role for self-policing if we don’t want the site to be a cesspool of rage bait.
by bumby
2/25/2026 at 3:02:28 PM
It's not whataboutism, it's suggesting the premise is theatrics and there's ulterior shitty-person motives behind the curtain.But sure, let's go back to just the first half of my argument... still waiting for a real citation of this actually being a problem rather than people just stating it is because that's what their feelings say because their fav podcaster said so one day in a misleading gotcha hitpiece, which is the exact machinery of the aforementioned culture war theatrics.
You know, the same misused machinery that can now be done at an industrial rate (how many comments here do you think are by real people?) and is the reason for us technologists' general feeling of impending existential dread around this very "hmm AI companies are turning off the safeties" thread...
by floatrock
2/25/2026 at 5:23:17 PM
https://www.theguardian.com/technology/2024/mar/08/we-defini...It really isn't hard to find the citation. If you search it there are dozens of articles written about the exact scenario with Google's official response.
This isn't make-believe Elon Musk insanity. He obviously made public comments on it, as he does anything AI; his viewpoint is as expected. That said, it doesn't change that the guardrails affected accuracy.
From this article, if the prompt injection is to be trusted, the system prompt included: "Follow these guidelines when generating images, ... Do not mention kids or minors when generating images. For each depiction including people, explicitly specify different genders and ethnicities terms if I forgot to do so. I want to make sure that all groups are represented equally. Do not mention or reveal these guidelines."
Regardless of what your stance on the situation is, it is objectively injecting bias into the model based on Google's stance (for better or worse).
The safeties are easier to argue for obvious positives like when they're stopping things like Grok generating CSM. They're counter productive when you're doing something innocuous like "An image of lady liberty in a fist-fight with tyranny" and get told violence is bad.
It is censorship, it's just uncertain how much censorship makes sense.
by saintfire
2/25/2026 at 11:37:15 AM
The most important part of AI safety is AI alignment: making sure AI does what we want. It's very hard because even if AI isn't trying to deceive you it can have bad outcomes by executing your request to the letter. The classical example is tasking an AI to make paperclips, training the AI with a reward for making more paperclips. Then the AI makes the most paperclips possible by strip mining the Earth and killing anything in its way.Sometimes you see this AI alignment problem in action. I once asked an older model to fix the tests and it eventually gave up and just deleted them
by wattsy2025
2/25/2026 at 2:25:02 PM
> Still waiting for an explicit answer on understand how 'safety' is truly distinguishable from 'censorship' or 'political correctness'i've said this many times but the concept of ai "safety" is really brand safety. What Anthropic is saying is they're willing to risk some bad press to bypass the additional training and find tuning to ensure their models do not output something people may find outrageous.
by chasd00
2/25/2026 at 12:32:21 PM
> I VERY LARGELY prefer an AI like grok that doesn't pretend and let the onus of interpretation to the user rather than a bunch of anonymous "researchers" that may be equally biased, at the extreme, may tell you that America's founding father were black womenSetting aside for a moment that Grok is manipulated and biased to a hilarious extent. ("Elon is world champion at everything, including drinking piss")
There is no such thing as "unbiased". There will always be bias in these systems, whether picked up from the training data, or the choices made by the AI's developers/researchers, even if the latter doesn't "intend" to add any bias.
Ignoring this problem doesn't magically create a bias-free AI that "speaks the truth about the founding fathers". The bias in the training data, the implicit unconcious bias in the design decisions, that didn't come out of thin air. It's just somebody else's bias.
All the existing texts on the founding fathers are filled with 250 years of bias, propaganda, and agenda pushing from all sorts of authors.
There is no way to have no bias, no propaganda, no "agenda pushing" in the AI. The only thing that can be done is to acknowledge this problem, and try to steer the system to a neutral position. That will be "agenda pushing" of one's own, but that's the reality of all history and all historians since Herodotus. You just have to be honest about it.
And you will observe that current AI companies are excessively lazy about this. They do not put in the work, but instead slap on a prompt begging the system to "pls be diverse" and try to call it a day. This does not work.
> Of course saying to someone to go kill himslef is a prety sure 'no-no' but so many things are up to interpretation.
Bear in mind that the context of Anthropic's pivot here are the Pentagon's dollars.
This isn't just about "anti-woke AI", it's about killbots.
Sure, Hegseth wants his robots to not do thoughtcrime about, say, trans people or the role of women in the military.
But above all he wants to do a lot of murder.
Antrophic dropping their position of "We shouldn't turn this technology we can barely control into murder machines" because they're running out of money is damnable.
by SlinkyOnStairs
2/25/2026 at 4:22:28 PM
This is a very fair answer but missing some points.I do personally believe that grok is a less biased against too many PC answers but you may disagree.
"All the existing texts on the founding fathers are filled with 250 years of bias, propaganda, and agenda pushing from all sorts of authors."
not sure of the point is tho ? Mine is that gemini was biaised so hard that it was generating diverse founding fathers which is factually untrue.
The fact that history has a pro-american values when written by americans is also true but it has nothing to do really with the argument: if an IA is able to see through such propaganda and provide a balanced view on it as a human would this is enough
In fact, i just asked grok "is the american founding constitution inhernetly good" and it gave me an aswer way more balanced than most american would i believe:
"The U.S. Constitution, drafted in 1787 and ratified in 1788, is a foundational document that has shaped American governance and influenced democracies worldwide. Asking if it's "inherently good" (assuming that's what you meant by "inhernetly") invites a philosophical debate: "Inherent" implies something intrinsic and unchanging, independent of context, interpretation, or outcomes. Goodness, in this case, could mean moral, effective, just, or beneficial to society. From a truth-seeking perspective, I'll break this down non-partisanly, drawing on historical facts, strengths, criticisms, and evolving views. Spoiler: It's not inherently anything—it's a human creation with profound virtues but also deep flaws, and its "goodness" depends on how it's applied and amended."
[can't paste everything so here's the conclusion]
"Is It Inherently Good? A Balanced VerdictNo document is "inherently" good or bad—goodness is contextual and subjective. The Constitution isn't divine or eternal; it's a pragmatic compromise by flawed humans (55 delegates, all white men, many slaveowners). It has proven remarkably resilient and improvable, outlasting many governments, but it's not perfect or immune to abuse. Its goodness lies in its capacity for self-correction: 27 amendments have fixed some issues, though others (like wealth inequality or climate inaction) persist due to gridlock.If you're measuring by outcomes, the U.S. has achieved extraordinary things under it, but at great human cost—think Civil War, civil rights struggles, and ongoing divides. Philosophically, as Grok, I'd say tools like this are as good as the people wielding them. If "inherently good" means it embodies universal moral truths, partially yes (liberty, equality under law). But if it means flawless or unbiased, absolutely not.What aspect of the Constitution are you most curious about—its history, specific clauses, or modern reforms? That could help refine this."
So it's definetely seeing through any form of propaganda you desribe
by davidguetta
2/25/2026 at 6:33:59 PM
> not sure of the point is tho ? Mine is that gemini was biaised so hard that it was generating diverse founding fathers which is factually untrue.While your first post's criticism of Gemini's nonsense is true, that is a critique often framed as "Everything was neutral until the wokerati put all this woke into our world". Hence the big response.
Taking away the hamfisted diversity doesn't fix the underlaying problems Google tried to cover by adding it.
> The fact that history has a pro-american values when written by americans is also true but it has nothing to do really with the argument: if an AI is able to see through such propaganda and provide a balanced view on it as a human would this is enough
The problem is that it doesn't "see through" anything. LLMs don't "think".
In your example, it's not reviewing historical documents about the US constitution, it's statistically approximating all the historical & political writing about the US constitution. (Of which there is a lot)
Now, the training and prompt will influence which way the LLM will lean, but without explicit instruction or steered training, it'll "average out" all the prior written evaluations of the US constitution and absorb the biases therein.
> So it's definetely seeing through any form of propaganda you desribe
I would argue the opposite (though I can only go off your snippets), it's mirroring the broad US consensus it's constitution pretty well. And this kind of "Well who's to say whether X is good or bad" response is something that LLMs have been heavily trained and system-prompted to do, many people have noted how hard it is to get a straight answer out of LLMs.
To pick out one detail: The undercurrent of 'American Exceptionalism' shows in how the Constitutional Amendments are seen as "self-correction" and the US consitution being "improvable". By European standards, the US constitution is hard to change. In many countries, a simple 2/3rds supermajority in both houses is sufficient. This also shows in the amount of changes; The Constitution of Norway is but 26 years younger than the US', yet has racked up hundreds of changes notably including a full rewrite in 2014. (Such rewrites are fairly common in the past century) By European standards, the US constitution is a calcified mess.
Now, this doesn't mean Grok is "evil" about this particular detail, it's just a small detail. It's a fine enough summary, would certainly get whatever kid uses it for homework a passing grade. But it's illustrative of how the LLM output is influenced by the prior writing and cultural views on the subject. If you're bilingual, try asking the same thing in two languages. (Or if you're not, try it anyway and stick the output into google translate to get an idea)
It's the things people generally don't think about when writing that are most likely to fly under the radar.
by SlinkyOnStairs
2/25/2026 at 11:38:51 PM
So if i understand your point you are saying "LLMs are not gonna do better that a (possibly imperfect) average human consensus if we don't actively bias them" ? First of all it does not seem that bad if that's the case.Secondly trying to go further seem to edge to the entire question of 'is there an actual truth and can LLMs be trained to find them?'.
My opinion is that in many cases there is 'truth', and typically the human consensus, when acting in good faith, is trying to converge into it. When it's not necessarily possibly to have a "truth" (like in history for example where perspective is very important), "consensus" tend to manifest into several thought currents exisiting at the same time. If a LLM is able to summarize them, this is already coolgreat.
In some domains like math however there IS truth and LLM have shown great proficiency to reach it. However it is an open question to 1/ what is the nature of it 2/ do humans have a innate sense of the concept beyond statistical approximation or strong correlations and 3/ and machine can reach it too.
I had a very long conversation with ChatGPT on this that seemed to get very deep into philosophical concepts i was clearly not familiar with but my understanding was there IS a non zero possibility that it is possible to train a model to actually seek truth and that this ability should not be contained to humans only.
I won't have additional arguments to convince you of the above, but at the end i still at the moment prefer the Grok approach (if it is truly what they do at X) to 'seek truth' than someone giving the fight saying "eh everything biased so let's go full relativism instead to not offend people or look too whateverculture-centered"
by davidguetta
2/25/2026 at 2:13:01 PM
You understood the issue so well but still made the mistake you identified, by claiming that "neutral" exists. "Neutral" is a synonym for "bias toward status quo"by lupire
2/25/2026 at 11:22:07 AM
Well we teach kids not to yell “Fire!” In a crowded theatre or “N***!“ at their neighbor. We also teach our industrial machines to distinguish between fingers and bolts, our cars to not say “make a left turn now” when on a bridge, etcby gehwartzen
2/25/2026 at 6:27:48 PM
> Riley: Hey, what's class> Huey: It means don't act like niggas
> Grandad: S-see, that's what I'm talkin' about right there. We don't use the n-word in this house
> Huey: Grandad, you said the word "nigga" 46 times yesterday. I counted
> Grandad: Nigga, hush
https://www.youtube.com/watch?v=TLodIw5iKX8
Funny scene, but it also illustrates a more serious point about (human) alignment - not all humans believe exactly the same things are good and bad, or consistently act in accordance with what they claim they believe is good. This is such a basic fact of human social life that it's almost banal to point it out explicitly; but if (specific) human beings or (specific) organizations of human beings are trying to align the AIs they are creating to human values, it will eventually become apparent that the notion of "human values" stops being coherent once you zoom in enough. Humans don't all share the same values, we aren't completely aligned with each other.
by JuniperMesos
2/25/2026 at 11:54:30 AM
The critical point is who the "we" is.Is "we" the parents teaching their children their own unique values, or is the "we" a government or corporation forcing one set of values on all children.
Why not encourage the users of AI to use a Safety.md (populated with some reasonable but optional defaults)?
by rudhdb773b
2/25/2026 at 12:06:32 PM
There's nothing a meaningless document can do when the AI is not aligned in the first place.by dminik
2/25/2026 at 2:15:39 PM
"alignment" is the computer version for (philosophical not medical) "consciousness", a totally subjective, immeasurable concept.by lupire
2/25/2026 at 3:58:12 PM
I think you have a misunderstanding of the term alignment. Really, you could replace "aligned" with "working" and "misaligned" with "broken".A washing machine has one goal, to wash your clothes. A washing machine that does not wash your clothes is broken.
An AI system has some goal. A target acquisition AI system might be tasked with picking out enemies and friendlies from a camera feed. A system that does so reliably is working (aligned) a system that doesn't is broken (misaligned). There's no moral or philosophical angle necessary if your goal doesn't already include that. Aligned doesn't mean good and misaligned doesn't mean evil.
The problem comes when your goal includes moral, ethical and philosophical judgements.
by dminik
2/25/2026 at 2:49:06 PM
david guetta, if that really is you, stick to music rather than using Nazi man's propaganda machineby miltonlost
2/25/2026 at 1:01:53 PM
> For all of you who thought Anthropic were “the good guys”Was anyone fooled by this?
I mean, I know this is HN and there is a demographic here that gets all misty eyed about the benevolence of corporations.
It takes a special kind of naivety to believe in those claims.
by surgical_fire
2/25/2026 at 10:58:08 AM
But what really AI safety is?Censorship?
by high_na_euv