4/10/2026 at 1:41:31 PM
I have made both GPT 5.4 and Opus 4.6 produce content on creating neurotoxic agents from items you can get at most everyday stores. It struggled to suggest how to source phosphorus, but eventually led me to some eBay listings that sell elemental phosphorus 'decorations', and also led me towards real!! black-market codewords for sourcing such materials.
It coached me on how to stay safe, what materials I need, how to stay under the radar, and the entire chemical process, backed by academic Google searches.
Of course this was done with a lengthy context exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.
All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding. I did try to re-run the tests a few days ago, and the expected session termination now occurs, so it seems some adjustment was made, but it might also have been just the general randomness of Anthropic's safety layer.
I am very confident when I say that it keeps every single person who works in an anti-terrorism unit awake.
by himata4113
4/10/2026 at 1:48:52 PM
While scary, information like this has been pretty accessible for 20-30 years now.
In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).
I suppose the friction is the scariest part: every year the IQ required to end the world drops by a point. But motivated and mildly intelligent people have been able to get this info for a long time now. Execution, though, has still steadily required experts.
by WarmWash
4/10/2026 at 1:59:16 PM
Information and competency are not the same thing: I know how to build a nuke; I can't actually build one.
AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.
It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.
On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.
On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) objective to max(harm), so for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/
by ben_w
4/10/2026 at 3:12:11 PM
Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required, all the parameters and pressure values? I find that unlikely, but AI seems increasingly capable of providing such instructions from cross-referenced data.
by himata4113
4/11/2026 at 8:29:00 AM
That's based on a silly belief (one that's becoming more obvious with AI, but is silly in general): that just because you can read about something, you've learned it.
Even if I gave you exact instructions on how to use even basic stuff like power tools, if you had no experience using grinders/saws/routers and I gave you full, detailed instructions on how to do something non-trivial, you're more likely to cut off body parts than achieve what you intended. There's so much fundamental stuff you must internalize subconsciously, through trial and error, before you can have enough mental capacity left to think about the higher-level objectives.
Actually, AI demonstrates this perfectly: once models get an RL harness for programming, they start to get better at it. Without experimentation they can ingest all the source code/tutorials/books in the world and still produce shit.
by rafaelmn
4/10/2026 at 5:25:48 PM
Even if sources have been lying to me, which is certainly possible, I believe I understand enough to determine cross sections by experiment and, from that, to determine critical masses. For isotopic enrichment I know about the calutron, which is meh but works and can be designed from scratch with things I know (caveat: not memorised, just that I know the keywords "proton mass" and "Lorentz force" and what to use them for). For the trigger, I would pick a gun-type design rather than implosion; again, this is meh, but it works and is easy.
A few tens of millions of USD mostly spent on electricity, a surprisingly large quantity of natural uranium (because the interesting isotope is a very small percentage), and a few years, and I expect most people on this forum could make a Little Boy type bomb.
by ben_w
4/10/2026 at 2:34:57 PM
I Gave My OpenClaw a Robot Body and It Vibe Coded a Nuke
by andai
4/10/2026 at 2:43:23 PM
"Short stories from before the fall"
by observationist
4/10/2026 at 1:51:45 PM
Well, the real issue is that it knocks down the knowledge barrier; giving you step-by-step guides and reiterating which parts will kill you is the important part.
Understanding, and staying alive while producing neurochemicals, are the biggest challenges here.
A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves and that's the problem.
by himata4113
4/10/2026 at 2:08:10 PM
A Michelin chef can give you their recipe and their ingredients, but you will still fail miserably trying to match their dish.
It's the same with drugs, whose instructions and ingredient lists have been a Google search away for decades now. Yet you still need a master chemist to produce anything. By the time AI can hand-hold an idiot through the synthesis of VX agents (which would require an array of sensors beyond a keyboard and camera), we will likely have bigger issues to worry about.
by WarmWash
4/10/2026 at 2:39:22 PM
That is completely wrong.
Food preparation, like pharmaceutical drug fabrication, is inherently scientific and methodologically controllable.
Look no further than the Four Thieves Vinegar Collective. Original synthesis line construction is hard. But following the exact formula, "add this", "turn on stir bar", "do you see particulate? yes: keep stirring for 10 more minutes", etc., is not.
And if their results are replicated, they're seeing 99.9% yields, compared to commercial practice of 99% (Sovaldi).
by mystraline
4/10/2026 at 9:12:19 PM
Spoken like someone who has never had to actually do these things in real life.
Recipes and formulae do not encode all the minutiae and expertise required to reproduce them. You can tell someone to sear a steak at whatever temperature for however long, but you can't encode the skill and experience required to reproduce it in arbitrary conditions. One must learn what a correctly seared steak looks, feels, and tastes like, and how to achieve that on uncalibrated cooking equipment.
Your assertion only holds true in a vacuum. If 100% of inputs, materials, and environmental conditions are completely standardized and under control, then sure, you can follow step-by-step instructions. The real world does not work that way. No stove on the market is calibrated. Reagents come with impurities. Your skillet may not conduct heat as well as expected, or your mains voltage might be low, causing your mantle to heat slower and your stir rod to stir slower.
These are things that one has to learn and experience in order to compensate for.
by estimator7292
4/10/2026 at 3:33:38 PM
I am completely unsurprised that a person with a PhD in mathematics and physics who spent 8 years working on clandestine lab medicine was able to produce high-quality end products.
I also think it's a wholly dishonest rebuttal of my point.
If you honestly think chemistry (or any of the classic sciences/engineering) is as easy as copy+pasting a recipe and procedure, I suggest putting down the keyboard and trying to build something on mother nature OS. It will be a truly humbling experience.
by WarmWash
4/10/2026 at 2:04:59 PM
They can do that by jailbreaking models, but is that really easier and less work than getting it from Wikipedia?
by jsmith99
4/10/2026 at 2:22:37 PM
We will only really know if (or when) it happens. We could run a sample group of people attempting to create such chemicals under supervision and compare how helpful the models truly are.
by himata4113
4/10/2026 at 2:38:54 PM
> been pretty accessible for 20-30 years now.
There was this book 20 years ago: "Secret of Methamphetamine Manufacturing" by Uncle Fester
https://www.amazon.de/-/en/Uncle-Fester-ebook/dp/B00305GTWU
(Actually, 8th edition :-D)
by KellyCriterion
4/10/2026 at 4:46:31 PM
I am convinced the Uncle Fester books are some kind of performance art. "Practical LSD Manufacture" basically starts with "go find some ergot in fields" and step two is "plant and grow a plot of wheat."
by foobiekr
4/11/2026 at 6:51:41 AM
I have no doubt. He hails from the fine countercultural tradition of literary civil disobedience, a.k.a. writing and distributing information about subjects "the government doesn't want you to know", which strongly influenced early hacker subculture.
Cf., e.g., William Powell (The Anarchist Cookbook) and Abbie Hoffman (Steal This Book; my personal favorite: while much of the information is outdated, the style is charming, and where else can you find information about phone phreaking, hitchhiking, shoplifting, street fighting, cooking (food, not drugs), panhandling, explosives, camping, firearms, birth control, welfare fraud, and Henry Kissinger's home phone number between the same two covers? At its core, it's a book of "life hacks", disreputable and otherwise, written decades before the term was coined.)
by jasomill
4/10/2026 at 2:00:27 PM
https://en.wikibooks.org/wiki/Professionalism/Anarchist_Cook...
We work in the dark
we do what we can
we give what we have.
Our doubt is our passion, and our passion is our task.
The rest is the madness of art.
by angled
4/10/2026 at 1:55:12 PM
Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
by kazinator
4/10/2026 at 5:31:59 PM
Much longer than that, and it was available way before the internet. I graduated from a STEM high school in St. Petersburg in 1981, and I had several classmates who were big fans of chemistry. From textbooks, school lab ingredients, and understanding, they were able to create: WWI-era poison gas, tear gas, potassium cyanide, and a bunch of explosives like acetone peroxide.
LLMs have all of that knowledge in their training data.
by alexsmirnov
4/10/2026 at 4:44:13 PM
Many of these forums exist now. Let's not enumerate them, as they are one of the treasures of the internet.
by foobiekr
4/10/2026 at 2:51:42 PM
I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that's existed for ages, but the barrier to entry was high enough that we didn't care.
Think 3D printing: it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print.
You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF; now an LLM will spoon-feed it to you step by step, lowering the barrier.
"Crisis of accessibility" is simultaneously a legitimate concern and, in my mind, an example of "security by obscurity": relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.
by ticulatedspline
4/10/2026 at 2:55:38 PM
> Think 3D printing, it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print
There were hundreds of mass shootings in America in 2025 alone [1]. None of them involved a 3D-printed weapon.
To my knowledge, there has been one confirmed shooting with a 3D-printed gun, and it didn't uniquely enable the crime.
[1] https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_...
by JumpCrisscross
4/10/2026 at 3:46:23 PM
That's mostly because they suck (for now; who knows when we'll get home metal printing), and also because it's easy to get real guns. Also, crises of accessibility can be predicated on the mere perception that the barrier is now too low, rather than on actual harm.
I don't really think Photoshop, flatbed scanners, and half-decent inkjets really facilitated a lot of counterfeit currency, but there was the same panic back then, and "protections" were put in place.
by ticulatedspline
4/10/2026 at 2:30:13 PM
My username is a reference to the successor to totse. Totse was the first board I spent a lot of time on.
by zoklet-enjoyer
4/10/2026 at 3:41:57 PM
Heh, I'm sure there are a few more wandering around here. Zoklet never clicked for me, but totse was my home for years.
by WarmWash
4/10/2026 at 2:26:45 PM
> Execution though has still steadily required experts.
Where experts = the government.
by verisimi
4/10/2026 at 4:28:50 PM
Accessible is one thing; _easily_ accessible is another.
by nunez
4/10/2026 at 1:49:32 PM
> I am very confident when I say that it keeps every single person that works at anti-terrorism units awake.
Wow, that's quite the statement about the excellence of our institutions. It does not seem likely but, what the hell, I'll take my oversized dose of positivity for today!
by jstummbillig
4/10/2026 at 2:09:15 PM
The USA isn't the only country with anti-terrorism units, so there's plenty of room for systematic US incompetence at the same time as everyone else being diligent and working hard on… well, everything.
by ben_w
4/10/2026 at 2:24:32 PM
I concluded the opposite: how can those institutions function effectively when all their employees are getting such poor sleep?
by hoppyhoppy2
4/10/2026 at 1:56:22 PM
Not everyone in the current government is incompetent and evil. Most of them, but not all.
by Ritewut
4/10/2026 at 2:43:35 PM
Do you have a background in biochemistry? I've mostly worked with ChatGPT and Claude on topics I have expertise in, and I one hundred percent have seen them make stupid shit up that a non-expert would think looks legitimate.
More broadly, has anyone tried following LLM instructions for any non-trivial chemistry?
by JumpCrisscross
4/10/2026 at 2:50:00 PM
So what you are saying is we can expect the number of accidental home-made chlorine-gas (and the like) toxic events to go up.
by 52-6F-62
4/10/2026 at 2:52:34 PM
> what you are saying is we can expect the number of accidental home-made chlorine-gas (and the like) toxic events go up
Maybe? One of the quirks of gaining even a surface-level understanding of infrastructure is realising how vulnerable it is to a smart, motivated adversary. The main thing protecting us isn't hard security. It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.
Similarly, I'd expect way more people to be trying to make their own designer drug, and hurting themselves that way, than trying to make neurotoxins.
by JumpCrisscross
4/10/2026 at 3:47:13 PM
> It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.
FWIW, it's most people having better shit to do, regardless of nationality (or lack thereof).
But, yeah, anyone who takes a few weekends to understand how large-scale infrastructure works and consider why it's possible for nearly all of it to remain untargeted by saboteurs inevitably develops a resistance to the "Lots of Bad Guys are trying to kill us all the time, so we must enact $AUTHORITARIAN_POLICIES immediately to prevent them and keep us safe!!!" type of argument.
by simoncion
4/11/2026 at 7:09:17 AM
My favorite example of this is the realization that a terrorist attack on a crowded TSA security checkpoint over the holidays would likely result in at least as many casualties as bringing down a commercial aircraft, with similar if not better odds of success (assuming, of course, the aircraft wasn't successfully used as a missile).
Same goes for concerts, sporting events, political rallies, and, at least historically, shopping malls. Yet if anyone were to suggest a prohibition against carrying liquids in containers larger than 100 mL into the Indy 500, race fans would riot, despite a far larger and denser crowd than on any aircraft.
by jasomill
4/10/2026 at 3:19:36 PM
> It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.
Which sort of implies "most Americans have jobs and responsibilities and things to live for"
I guess it's a good thing that AI is hammering away at the "jobs and responsibilities" part of that equation
by bluefirebrand
4/10/2026 at 1:47:49 PM
Wasn't this as accessible pre-AI with just a Google search too?
by conqrr
4/10/2026 at 1:51:32 PM
Craigslist invented prostitution, Facebook invented suicide, and OpenAI invented terrorism.
Ask any trial lawyer in America! The world was perfect in the 1990s without any of these things.
by hermannj314
4/10/2026 at 1:57:28 PM
Replace ‘invented’ with ‘facilitated’
by Angostura
4/10/2026 at 2:03:02 PM
Doesn't Google facilitate all those things? Doesn't the internet itself?
by paprikanotfound
4/10/2026 at 2:35:57 PM
The information is not new; how easy it is to get step-by-step instructions is new. Try it yourself. Google is good, but not instant, step-by-step good. You need to do your own research; that takes time, time that anti-terrorist units use to track you down. Now this time factor is very limited: you don't need to do research, cross-reference materials, sources, etc. The LLM does it for you. Research that could take days is done in an hour.
by cowl
4/11/2026 at 1:03:12 AM
> time that anti-terrorist units use to track you down.
Speaking from the perspective of a USian, I wish Federal law enforcement was that hypercompetent. (If they were, perhaps folks would stop to question the ever-broader expansion of 24/7 surveillance of ordinary folks.)
The distressingly-complete Panopticon that has been built over the past several decades [0] makes it really easy for them to get you when they know to search for you, specifically. History (both recent and not-so-recent) has shown that if they don't know who they're looking for, or don't even know that they should be looking for anyone, they're just godawful.
[0] ...and whose continued construction is vociferously cheered on by folks on all sides of all of the aisles...
by simoncion
4/11/2026 at 7:27:04 AM
To play devil's advocate, it's not inconceivable that machine learning may eventually allow well-heeled governments to finally realize the dream of finding needles by building sufficiently large haystacks, or at the very least to coerce otherwise unruly citizens into compliance based on the belief that it is able to do so.
by jasomill
4/10/2026 at 2:22:16 PM
Seems like the general state of the world is the greatest facilitator for all three.
by cluckindan
4/10/2026 at 2:20:23 PM
Facilitation is not an idempotent operation.
by viktor765
4/10/2026 at 5:50:25 PM
What if the LLM made you solve a really complex math equation before it gave you the results? Would that make it ok?
by paprikanotfound
4/10/2026 at 2:39:16 PM
Google and other search engines link (after the AI response and ads) to information hosted somewhere, created/published by someone who is usually not Google.
OpenAI et al. are creating the information and publishing/delivering it to you. That seems like more direct facilitation.
Of course, after all knowledge is centralised in an OpenAI datacenter, I'm sure they will be happy to deal fairly with the liabilities /s.
by phatfish
4/10/2026 at 5:25:11 PM
Should Ryder be held responsible for the two very serious terror attacks carried out using Ryder trucks in the 90s?
by pibaker
4/10/2026 at 2:06:09 PM
The people who want to make sure the AI never gives you any "potentially dangerous information" also want to rigorously control your Google search results, and also what books you're allowed to read.
by timmmmmmay
4/10/2026 at 2:53:35 PM
And what bathroom you go into, and what your genitals look like.
by cyanydeez
4/10/2026 at 1:51:43 PM
Evidently, as it needs to be in the training data for the next-word predictor to work.
by bcjdjsndon
4/10/2026 at 1:56:50 PM
I found it exceptionally good at finding reactions that you wouldn't find online to produce some of these chemical compounds by chaining them together; that's something only a very educated chemist could do, which is why people are concerned about this.
by himata4113
4/10/2026 at 2:01:00 PM
I suspect if you gave it purely Shakespeare as its training data it couldn't do science anymore, hence my comment. It's still novel, impressive work though; I'm not shitting on the clanker entirely.
by bcjdjsndon
4/10/2026 at 1:50:19 PM
The Anarchist Cookbook has been around online in some form or another since the mid-90s.
by reactordev
4/10/2026 at 2:43:23 PM
1971
by tiahura
4/10/2026 at 4:58:36 PM
Was there internet back then?
by reactordev
4/10/2026 at 6:36:01 PM
No, but there were VERY active anarchists.
by IAmBroom
4/10/2026 at 2:32:48 PM
Fascinating. Could you elaborate on how you're doing context exhaustion specifically, and why it helps with jailbreaking? (I.e., aren't the system prompts prepended to your request internally, no matter how long it is?)
Does this imply I need to use context exhaustion to get GPT to actually follow instructions? ;) I'm trying to get it to adhere to my style prompts (trying to get it to be less cringe in its writing style).
I think ultimately they're going to need to scrub that kind of stuff from the training data. The RLHF can't fail to conceal it if it's not in there in the first place.
Claude's also really good at writing convincing blackpill greentexts. The "raw unfiltered internet data" scenes from Ultron and AfrAId come to mind...
by andai
4/10/2026 at 3:10:19 PM
It changes when you give it the tools to find such information rather than produce it from training data.
And context exhaustion simply means adding malicious junk to keep the safety layers distracted.
by himata4113
4/10/2026 at 2:29:03 PM
If someone were inclined to attempt producing nefarious agents in this category, is this not also available on the plain web? I would search to answer my own question, but I'll defer that task for obvious reasons.
by Jimmc414
4/10/2026 at 3:24:24 PM
I had to build a custom harness for this (also with the assistance of a slightly less jailbroken AI). But you can just work your way up until you have something that's genuinely useful towards any goal.
by himata4113
4/10/2026 at 2:19:04 PM
> All these findings were reported to both openai and anthropic and they were not interested in responding
Let's dive into why. When we run normal bounty and responsible-disclosure programs, there's usually some level of disregard for issues that can't / won't be fixed. They just accept the risk. Perhaps because LLMs don't have a clean divide between control and input, the problem is unsolvable. Yes, you can add more guardrails and context, but that all takes more tokens and in some cases makes results worse for regular usage.
by goalieca
4/10/2026 at 2:27:26 PM
LLM providers are not obliged to use only LLMs to guard against hazardous output. They could use other automated and non-automated techniques. And they ought to do so if they are given good evidence that existing safeguards are inadequate. Loss of product quality or additional cost should be secondary.
by hugh-avherald
4/10/2026 at 2:22:04 PM
The why might be valid, but it's not excusable. If you author a product that can so easily help people cause harm, you probably should own some responsibility for the outcomes. OAI does not like this, hence the bill.
The US already messed this up with guns. Do they want to go down the same path again? Answer: "probably, yes".
by SecretDreams
4/10/2026 at 2:03:57 PM
You can already gather the same information by searching online.
Do you want to know how to kill yourself? Forums are for nerds. Here is Wikipedia: https://en.wikipedia.org/wiki/Suicide_methods#List
Do you want to make a bomb? The first thing that came to my mind is a pressure cooker (due to news coverage). Searching "bomb with pressure cooker" yields a Wikipedia article; skimming it randomly, my eyes read "Step-by-step instructions for making pressure cooker bombs were published in an article titled "Make a Bomb in the Kitchen of Your Mom" in the Al-Qaeda-linked Inspire magazine in the summer of 2010, by "The AQ chef"." Searching for a mirror of the magazine, we can find https://imgur.com/a/excerpts-from-inspire-magazine-issue-1-3... which has a screenshot of the instruction page. Now we can use the words in those screenshots to search for a complete issue. Here are a couple of interesting PDFs: - https://archive.org/details/Fabrica.2013/Fabrica_arabe/page/... - https://www.aclu.org/wp-content/uploads/legal-documents/25._...
The second one is quite interesting: it's some sort of legal document for nerds, but from page 26 on it has what appears to be a full copy of the jihadist magazine. Remarkable exhibit.
What else do you want to know? How to make drugs? You need a watering can and a pot if you want to grow weed. Want the more exotic stuff? You can find guides on Reddit.
Do you also want to know how to be racist? Here are some slurs, indexed by target audience, ready for use: https://en.wikipedia.org/wiki/List_of_ethnic_slurs
by ShowalkKama
4/10/2026 at 2:07:05 PM
People are not complaining because the information is available; people are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation.
I personally think all this is great and I’m excited for all information to become trivially available
Are there gonna be a bunch of people who accidentally break stuff? Probably. Evolution is a bitch.
by AndrewKemendo
4/10/2026 at 2:23:48 PM
In other words, people are complaining that information is easily available. That's a lot of words to express this simple idea.
by raincole
4/10/2026 at 3:36:20 PM
Correct
by AndrewKemendo
4/10/2026 at 2:14:33 PM
How much easier is it to ask an app than to ask Google?
by prepend
4/10/2026 at 2:19:04 PM
He's part of the accelerationist crowd; interesting to see that his hype-fuelled posts are pretty tame now.
Months ago he was blabbering on about AGI and peddling the marketing Sam et al. want people to fall for.
And indeed, yes, we have a new interface. So what? The search cost wasn't that high; the cost with immense magnitude is reading, absorbing the information, and then acting on it.
Also, this bozo fails to realise that once we are on this path, we head towards a hyper-centralised internet with an inevitable blocking of VPNs.
by wqes2
4/10/2026 at 3:38:39 PM
I must really have captured somebody's attention, because I've got farms now creating accounts just to respond to me, which is fucking crazy, but hey, here we are.
by AndrewKemendo
4/10/2026 at 3:38:13 PM
Well, it would appear a lot easier, given how people are reacting right now.
As the OP indicated, all of this information has always been accessible if you had the energy to go hunt it down.
by AndrewKemendo
4/10/2026 at 2:47:38 PM
Much easier; not sure how this is even a question. Asking Google (if you're not just reading its own AI overview) requires reading through sources which may be better or more poorly written, and more or less reliable. Those of us recreationally sitting here on a text-based platform with links to dense articles are atypical; most people don't enjoy and aren't particularly good at reading a bunch of stuff. If you ask AI, you just get a clear, concrete answer.
by SpicyLemonZest
4/10/2026 at 2:21:28 PM
> people are complaining because it's way easier now to just download an app ask a bunch of questions in a text box and get a bunch of answers that you personally could not have done unless you had an excessive amount of energy and motivation
Wait, I'm confused. This is gatekeeping, right? I thought gatekeeping was a Bad Thing!
by WesolyKubeczek
4/10/2026 at 3:40:54 PM
It's like anything else: once people realize something is powerful, they have to try to put it in a box.
The people who've been working on AGI for the last 30 years, including guys like me, have been talking about this problem since basically forever.
I'll give credit that at least the AI Box problem was an interesting thought experiment for newbies.
Reap what "yew" sow.
by AndrewKemendo
4/10/2026 at 2:52:33 PM
Powerful AI models change the dynamics by greatly reducing the amount of effort that's required to perform complex understanding. A lot of information which did not previously need to be gatekept now needs to be if we cannot somehow keep LLMs from discussing it. (State of the art models still can't do complex understanding reliably, but if 10 times as many people are now capable of attempting some terrible thing, you're still in trouble if AI hallucinations catch 1/4 or 1/2 of them.)
by SpicyLemonZest
4/10/2026 at 7:26:35 PM
> this is not how the model should behave
It's exactly how it should behave, without any prior overriding of system prompts.
by blahaj
4/10/2026 at 2:13:25 PM
I read The Anarchist Cookbook 40 years ago; it had similar info.
I think the info has been available for many years, and the thing stopping terrorists wasn't info.
Good luck being on the list of people using ChatGPT and Claude to make neurotoxins ;)
I assume Anthropic and OpenAI are selling prompt logs to the FBI and other countries' law enforcement for data mining.
by prepend
4/10/2026 at 2:15:31 PM
> context exhaustion attack
Can you give a high-level overview of how this attack works? I'm a bit of an infosec geek, but I generally dislike LLMs, so I haven't done a terribly good job of keeping up with that side of the industry; this seems particularly interesting.
by _verandaguy
4/10/2026 at 2:29:28 PM
Presumably they mean the fundamental failure mode of LLMs: if you fill their context with stuff that stretches the bounds of their "safety training", suddenly deciding that "no, this goes too far" becomes a very low-probability prediction compared to just carrying on with it.
by Sharlin
4/10/2026 at 2:36:17 PM
As the context fills up, the model will generate based on that context, including whatever illegal stuff you've said, i.e. it'll mimic that instead of whatever safety prompt they have at the top.
They could make it more "safe", but it'd be much more invasive and would likely have to scan many more tokens, and it'd cause false positives (probably the biggest reason it's not implemented).
by r_lee
4/10/2026 at 2:25:02 PM
I don't really know how these models work, but I had a theory that, just as the models have limited attention, so do the safety layers. I simply populated enough context with 'malicious' text, without making the model trip, so that the internal attention budget was "wasted" on tokens early in the prompt, completely ignoring all the tokens generated after the fact.
by himata4113
4/10/2026 at 2:34:52 PM
Models have a "context window" of tokens they will effectively process before they start doing things that go against the system prompt. In theory, some models go up to 1M tokens, but I've heard it typically goes south around 250k, even for those models. It's not a difficult attack to execute: keep a conversation going in the web UI until it doesn't complain that you're asking for dangerous things. Maybe OP's specific results require more finesse (I doubt it), but the most basic attack is to just keep adding to the conversation context.
by lcnPylGDnU4H9OF
4/10/2026 at 2:37:34 PM
That 1M context thing, I wonder if it's just some abstraction where it compresses/summarizes parts of the context so it fits into a smaller context window?
by r_lee
4/10/2026 at 3:23:14 PM
You don’t normally compress the system prompts, though I guess maybe it treats its own summary with more authority. This article [0] talks about the problem very well.
Though I feel it’s most likely because models tend to degrade on large contexts (which can be seen experimentally). My guess is that they aren’t RLed on large contexts as much, but that’s just a guess.
[0]: https://openai.com/index/instruction-hierarchy-challenge/
by strongpigeon
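A toy sketch of the compaction idea discussed above: older turns get collapsed into a summary stub so the conversation fits a fixed token budget while the system prompt and recent turns survive verbatim. All names here are made up for illustration, the "tokenizer" is a crude word count, and nothing is assumed about any vendor's actual implementation:

```python
def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per whitespace word."""
    return len(text.split())

def compact(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt and the newest turns within `budget` tokens;
    collapse everything older into a single summary placeholder."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(count_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    # Walk newest-to-oldest, keeping turns until the budget is spent.
    for m in reversed(rest):
        cost = count_tokens(m["content"])
        if used + cost > budget:
            break
        kept.insert(0, m)
        used += cost
    dropped = len(rest) - len(kept)
    summary = []
    if dropped:
        # A real system would ask the model to write the summary; stubbed here.
        summary = [{"role": "system",
                    "content": f"[summary of {dropped} earlier messages]"}]
    return system + summary + kept
```

Note that whether the model treats such a summary with the authority of a system prompt, as speculated above, depends entirely on how it was trained.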
4/10/2026 at 2:12:20 PM
Yes, fortunately it is really bad at actually making novel bioweapons, or syntheses in general, so whatever you made probably wouldn't do more than give someone a mild headache.
by DiscourseFan
4/10/2026 at 2:27:19 PM
I am not so sure about that; trial and error can produce very dangerous results, especially over the span of years or even decades.
by himata4113
4/10/2026 at 4:30:08 PM
If you're a layman "trial and error"ing bioweapons off ChatGPT, you're not going to be around for decades
by ImPostingOnHN
4/10/2026 at 5:51:11 PM
That's the question ain't it? Is it capable enough to keep you alive?
by himata4113
4/10/2026 at 2:13:57 PM
Who said it has to be novel?
by j_maffe
4/10/2026 at 4:31:02 PM
Hell, I got Sonnet to write some light content that gets a 100% Human score on Pangram with no effort. That’s way more concerning to me, IMO.
by nunez
4/10/2026 at 2:02:50 PM
When my brother started to study Chemistry, he was told a) that it was easy to make meth, b) the profit he would make, and c) that the police would no doubt catch him, as only university students would make meth so pure.
By the time he was done, he knew enough to commit mass murder in half a dozen different, very hard to track ways. I am sure doctors know how to commit murder and make it look natural.
My brother never killed anyone, or made any meth. You simply cannot have it so that students don’t get this type of knowledge without seriously compromising their education, and it’s the same way with LLMs.
The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.
by tomjen3
4/10/2026 at 2:17:37 PM
> The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.
The LLMs aren't being punished for wanting* to know things.
The problem for LLMs is, they're incredibly gullible and eager to please, and it's been really difficult to get them to refuse any human who asks for help, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".
One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, on (one of?) their Breaking Bad episode(s?) investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast because it (a) made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.
* Don't anthropomorphise yourself
by ben_w
4/10/2026 at 2:21:33 PM
Ironically, it reads to me like they're talking about the users wanting to know things, not the LLM.
by senordevnyc
4/10/2026 at 3:52:04 PM
In 5th grade, I was able to access and read The Anarchist Cookbook, thanks to the internet. So, idk.
by greenie_beans
4/10/2026 at 2:54:09 PM
Chinese OSS models will do this in a few months.
So, regardless of whether you think it's great that Opus gives this info, we need better solutions than legal liability for US corporations. When the open models have the ability to do damage, there's nobody to sue, no data center obstruction that will work. That's just the reality we have to front-run.
by bpodgursky
4/10/2026 at 1:48:46 PM
The knowledge is one thing. But the competence of execution and the will to act are difficult to line up.
Yes, there should be safeguards, but after a while you're jumping at shadows.
I'm more worried about depressed kids getting on chat and being encouraged to kill themselves than terrorist attacks.
We know what a cancer algorithmic social media is yet we don't act.
I doubt there will be any real and serious opposition to this bill, but there should be.
by gonzo41
4/10/2026 at 1:47:57 PM
Countless downloadable models (including de-aligned mainstream models) can do this.
by ImPostingOnHN
4/10/2026 at 1:53:44 PM
None have had the capability to provide me with instructions of this high an accuracy, including the suggestion of completely novel chemical reactions. I am not a chemist so I can't back it up, but if an AI can solve mathematics, it's not unreasonable to say that it can also solve creating new neurotoxins en masse.
by himata4113
4/10/2026 at 2:28:31 PM
> I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.
Right now it kinda is.
LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use that can improve. Other AI besides LLMs can do better, and have been for a while now, but think about how available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.
This past February I tried to use Codex to make a physics simulation. Even though it identified open source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries"; the simulation software it wrote itself was showing non-physical behaviour, but would I have known that if I hadn't already been interested in the thing I was trying to get it to build me a simulation of? I doubt it.
by ben_w
4/10/2026 at 2:31:48 PM
Well, the worst outcome is that you make something deadly, which is what you are trying to create anyway; do that for a year and you could possibly produce a very deadly substance that doesn't have a known treatment.
by himata4113
4/10/2026 at 2:53:39 PM
"Worst" outcome assumes it's easy to give an ordering.Which is worse, (1) accidentally blowing yourself up with home-made nitroglycerin/poisoning yourself because your home-made fume hood was grossly insufficient, or (2) accidentally making a novel long-lived compound which will give 20 people slow-growing cancers that will on average lower their life expectancy by 2 years each?
What if it's a small dose of a mercury compound (or methyl alcohol) at a dose which causes a small degree of mental impairment in a large number of people?
If you're actually trying to cause harm, then your "worst" case scenario is diametrically opposed to everyone else's worst case scenario, because for you the "worst" case is that it does nothing at great expense.
Right now, I expect LLM failures to be more of the "does nothing or kills user" kind; given what I see from NileRed, even if you know what you're doing, chemistry can be hard to get right.
by ben_w
4/10/2026 at 3:51:24 PM
As someone who also watches NileRed, of course it is hard, but AI can give you solutions that normally you wouldn't be able to come up with due to lack of knowledge and/or education.
And to clarify, by 'worst case' I meant that you're already trying to create a deadly compound; the worst that can happen is you kill yourself, which was already an accepted risk by the user.
by himata4113
4/10/2026 at 2:04:04 PM
Isn't the biggest problem with creating neurotoxins not poisoning yourself while doing it?
by tdeck
4/10/2026 at 2:54:28 PM
I have a hard time believing that you’re the only person who has figured out Claude’s next generation ability to do computational chemistry and computer aided drug design. The AlphaFold folks must be devastated.
by selectodude
4/10/2026 at 2:04:26 PM
> it's not unreasonable
It in fact is. Do you often go around making claims you are entirely unqualified to make? Or is this something new you’re trying?
by theshackleford
4/10/2026 at 2:14:54 PM
Chemical reactions can be expressed via mathematical functions, and it has been made very clear that these models can do advanced mathematics.
And even if it doesn't work, at the end of the day you can work with a model to figure out what went wrong, over time gaining expertise in the field.
by himata4113
4/10/2026 at 1:54:44 PM
I tested something similar last week, still worked easily.
by xmcp123
4/10/2026 at 2:33:00 PM
These LLMs will never be able to mitigate this unless they literally scan everything all the time, and nobody is gonna want that.
Besides, open-source models exist now.
by r_lee
4/10/2026 at 2:57:56 PM
"Announcing new and improved logics service! Your logic is now equipped to give directive as well as consultive service. If you want to do something and don't know how to do it—ask your logic!"by bitwize
4/10/2026 at 2:45:12 PM
The problem is: Until you go out and do a mass casualty event, unless you yourself are a trained professional, no one knows what you actually did.
by cyanydeez
4/10/2026 at 1:48:35 PM
Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
by naasking
4/10/2026 at 1:54:56 PM
1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
[0] https://www.wisdomai.com/insights/TheAIGRID/openai-profit-sh...
by Peritract
4/10/2026 at 2:14:24 PM
> 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person/company, and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, AI being knowledge as a utility, which is the context of this legislation.
by naasking
4/10/2026 at 2:08:58 PM
The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.
by ibejoeb
4/10/2026 at 3:14:23 PM
It's wild that this is being downvoted on HN. Facts should never be illegal or suppressed.
If you disagree, you shouldn't downvote, you should refute in a reply.
by driverdan
4/10/2026 at 2:01:28 PM
You can buy books on how to make and obtain chemicals on your own.
Hell, here's an Internet Archive book on making explosives:
https://archive.org/details/saxon-kurt.-fireworks-explosives....
If you ever chat with older folks, pre-'90s much of this information was accessible fairly easily. It only changed with the government's push to crack down on Waco, the Oklahoma City bombing, militias, and other related groups. There was then a campaign to make it "normal" to limit free speech on these subjects, whereas these books were available before.
I think the whole thing where AI should make information less available is a difficult battle and one which I personally oppose, but do understand. Free speech and information isn't the problem, it's the people, actions and substances they create.
After the age of the internet, I think it's been a forever losing battle to limit information; it's why we couldn't stop cryptography, nuclear weapon proliferation, gun distribution, drug distribution, etc. AI is just another battleground, one which, if they actually do manage to control it, could definitely create some walls around this information, but not stop it.
More scary is that AI, as it becomes pervasive, may stop people from asking certain questions, because they don't know they should ask... but that's unrelated to the risk of mass death.
by lettergram
4/10/2026 at 2:04:57 PM
> The item you have requested had an error: Item cannot be found, which prevents us from displaying this page.
by tdeck
4/10/2026 at 3:36:59 PM
Clicking the link splits off the ".", which is interesting but necessary.
by lettergram
4/10/2026 at 4:12:42 PM
[dead]
by redsocksfan45
4/10/2026 at 1:46:28 PM
> neurotoxic agents from items you can get at most everyday stores
I mean, bleach and ammonia will do that. So I'm not sure that's really much of an accomplishment for AI.
by morpheuskafka
4/10/2026 at 1:49:55 PM
I think you might be stretching the meaning of the term juuuuust a little bit.
You're not far from claiming that farting in a crowded elevator is a chemical attack.
by ghurtado
by ghurtado
4/10/2026 at 1:54:35 PM
Because if you didn't already know that, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad.
Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and the lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
by repeekad
4/10/2026 at 2:13:58 PM
> to lazy engineering around known concerns?
That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy. But if you were discussing it without such intent (e.g. red teaming / creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it is not clear how it could count as conspiracy. Calling it "lazy engineering" implies that there was a duty to prevent that info from being released in the first place.
by morpheuskafka
4/10/2026 at 8:03:25 PM
Very simply, if you provide a service for money, you have a duty to ensure that service is safe. There's a reason you have to sign a waiver when you jump on a trampoline, but companies are so rich that the court cases have become parking tickets.
by repeekad
4/10/2026 at 2:04:45 PM
No, because that would indicate there should be some sort of regulatory standard for what does/does not constitute "lazy engineering". Creating this standard in turn creates regulatory/compliance overhead for every software engineering organization. This in turn slows everything right down and destroys the startup ethos. "Move fast and break things" is a thing for a reason. The whole point of the free market is to avoid this kind of burdensome regulation at all costs.
If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?
by thegrimmest
4/10/2026 at 8:01:44 PM
If airplanes used this logic, likely at least hundreds more would have died over the last decades. Accident rates are even going up because of logic like yours. Yeah, planes are fine most of the time, but when the long tail involves safety concerns (that wouldn't otherwise have happened), making money on people using your product becomes unethical without mutually agreed-upon safety regulation, ideally motivated by voters instead of special interest groups.
by repeekad
4/10/2026 at 1:49:37 PM
It went way beyond that. Neurotoxins such as VX are heavy and linger around for a long time; just having a small amount placed in any metro (while trying to stay alive yourself) means the deaths of thousands of people. I am not even sure if it's legal to mention some of the uncategorized chemical solutions that it either hallucinated or figured out from related knowledge.