4/10/2026 at 3:02:51 PM
It's simple. It's because AI is the scariest technology ever made.

Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
by ACCount37
4/10/2026 at 7:03:38 PM
AI, the way you are describing it, has not been invented yet. It is a fiction.

What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary", but only because the humans who might decide to use them in fits of insanity are scary.
I'm not in the slightest bit uneasy about "AI" itself right now because, as I said, the AI of sci-fi has not yet been invented…and seems unlikely to be in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)
by jaredcwhite
4/10/2026 at 8:02:22 PM
"It's just marketing" is just the "denial" stage wearing a flimsy disguise.

Even LLMs of today routinely do the kind of tasks that would have "required human intelligence" a few years prior. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.
What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just 4 years have passed since the first ChatGPT. There are a lot of incremental upgrades left in "any of our lifetimes".
by ACCount37
4/10/2026 at 8:44:27 PM
You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.

Also, computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?
by jaredcwhite
4/10/2026 at 8:55:49 PM
And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".

If your only reference points are LeCun or, worse, some living fossils from the "symbolic AI" era, then you'll be showered in "LLMs can't progress". Often backed by "insights" that are straight up wrong, and were proven wrong empirically some time in 2023.
If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.
by ACCount37
4/11/2026 at 1:11:29 AM
That aside. Let's look at the facts.

Are LLMs displacing labour? In the aggregate, not from what one can see. The aggregate statistics tell a different story, e.g. the hiring of software engineers is still growing year over year.

The limits of LLMs will be imposed by financial constraints. People like you seem to think there's an infinite stream of money to fund this stuff. There isn't. It's the same reason Anthropic and OAI are now shifting focus to generating revenues and cash flows: they will not receive external funding forever.
by dsa3a
4/11/2026 at 5:01:36 AM
I can’t speak for the States, but in AU I clearly see a massive displacement of undergrad and junior roles (only in AI-exposed domains).

I say this both as someone who works with many execs, hearing their musings, and as someone who can no longer justify hiring for junior roles myself.

Irrespective of that: if we take the strategy of only acting once the problem is visible to the layman, the scope of actions available to us will be invariably and significantly diminished.

Even if you are not convinced it is guaranteed, and do not believe what I and others see, I would ask you: is your probability of it happening really that close to 0? If not, would it not be prudent to take the risk seriously?
by robkop
4/11/2026 at 8:34:52 AM
> If not then would it not be prudent to take the risk seriously?

What does taking the risk seriously look like?
by jurgenburgen
4/11/2026 at 9:13:33 AM
I'm not interested in reading the same arguments over and over again. AI is not scary anymore, it's fucking boring. Exits thread

by Xmd5a
4/10/2026 at 3:19:04 PM
Modern discourse happens on social media, where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all, because that's what people click on.

Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
by andrewmutz
4/10/2026 at 5:11:59 PM
> Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

It's not an either/or thing though. Compare it to something like combustion: sure, it definitely improved productivity, but it also led to countless violent deaths.
by AlecSchueler
4/10/2026 at 3:09:45 PM
I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction", and everyone recognized that it was so dangerous to use them that they've only ever been used once.

I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
by afavour
4/10/2026 at 3:52:46 PM
I remind you of why nuclear weapons exist.

They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is the ability to run something not unlike the Manhattan Project with synthetic intelligence, in a single datacenter.
by ACCount37
4/11/2026 at 8:35:27 AM
Yeah, the problem with AI is that they can become too good at performing general tasks, ranging from, like, designing cancer treatments to designing bioweapons, and everything in between.

by nextaccountic
4/10/2026 at 4:44:07 PM
By any quantifiable measure, yes, and not by small numbers either.

Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst a discursive distraction. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.
People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
by lopsotronic
4/10/2026 at 3:32:12 PM
Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time, and even into the 50s, it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regard to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there are scary new capabilities?

by SpicyLemonZest
4/10/2026 at 4:27:41 PM
> Everyone recognized that it was so dangerous to use them after the first two mass casualty events

I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.
by afavour
4/11/2026 at 9:53:51 AM
Truman changed after learning the real civilian death tolls they caused. The military leaders absolutely knew the impact beforehand, and kept advocating for their use in later wars.

by Hikikomori
4/10/2026 at 3:18:33 PM
Nuclear weapons have rarely been used kinetically. Their real force multiplier is fear.

A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.

The expression "drinking the Kool-Aid" is used to explain the Jonestown mass suicide. It was an information hazard - a cult - that created the end result: 900 people drinking poisoned Flavor Aid. That's just one example of a human-caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?
by webdoodle
4/10/2026 at 3:53:42 PM
There was a lot of FUD in the mainframe era about computers being called "electronic brains" and fears of them taking people's jobs, because the ignorant public mistook their lightning-fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and robotics on assembly lines became cost effective, but at no time did the "electronic brain" become intelligent.

There's a lot of FUD today about LLMs being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
by ThrowawayR2
4/10/2026 at 4:10:31 PM
Is it me making the mistake, or is it you making that very mistake in the other direction?

Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.
We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.
by ACCount37
4/10/2026 at 3:11:38 PM
Most humans can do more than plagiarize text. But let's hype up the clankers before the IPOs.

by saHqtr
4/10/2026 at 4:01:01 PM
"It's all just PR" is a lame excuse not to think about the implications. Of things like: AI capabilities only ever going up over the course of the past 4 years.

by ACCount37
4/10/2026 at 3:43:40 PM
> AI is the scariest technology ever made

Well, it's a good thing that all we managed so far is a large language model instead.
by otabdeveloper4
4/10/2026 at 3:21:59 PM
Machines still need planet-spanning production pipelines, with human operators everywhere, to achieve reproduction at scale. Even taking the paperclip-optimizer overlord as a serious scenario, it’s still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, not even talking about destroying vast swaths of the biosphere supporting humanity's very possibility of existence.

That is, alien invasion and a giant meteor are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded by currently known, scientifically realistic what-ifs".
by psychoslave
4/10/2026 at 3:55:57 PM
Humans are dangerous and hilariously exploitable.

If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.
That's the kind of threat a rogue AI can pose.
Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?
by ACCount37
4/10/2026 at 3:13:23 PM
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?

And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
by fontain
4/10/2026 at 3:59:25 PM
IQ is among the best predictors of life success, for humans. Being up by an extra SD in the g dimension covers a multitude of sins.

I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this". It's pretty obvious that the human world is a product of intelligence applied at scale - and machines can beat humans at both intelligence and scale.
by ACCount37
4/10/2026 at 4:30:53 PM
>> I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this".

One only has to look at the current tech and political leaders.
by sifar
4/10/2026 at 3:07:26 PM
"Why be afraid of nukes? It's not like they WANT to blow up."

by causal
4/10/2026 at 3:09:27 PM
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.

Many have built their careers on that kind of work in the past, and yes, they are threatened, but that kind of work is inherently not collaborative and more vocational.
by sublinear
4/10/2026 at 3:12:06 PM
The vast majority of people on this planet work repetitive, uncreative jobs.

by bauerd
4/10/2026 at 7:05:47 PM
Oh? And what extensive knowledge and experience makes YOU qualified to determine what "the vast majority of people on this planet" are doing for work, and if those tasks are creative or uncreative?

by jaredcwhite
4/10/2026 at 9:19:48 PM
Not sure what you're insinuating. What do you think the statistically average job on this planet is? It's still going to be cultivating a smallholder farm in developing countries, or working in logistics, manufacturing, or the broader service sector in developed countries.

All of these average jobs are structurally repetitive. Yes, humans do constantly inject creativity, but it's a means to an end, to getting the job done.
You apparently mistook my descriptive comment for a value judgment, but it isn't.
by bauerd
4/10/2026 at 3:26:14 PM
There is no job done by humans today that is 100% uncreative, but people will continue to insist there is.

The devaluing may come from AI pressure, but the harm comes from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.
by sublinear