2/23/2026 at 11:52:17 PM
Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI. Why would you ever give a non-deterministic program god-level access to everything? What could possibly go wrong?
by darth_avocado
2/24/2026 at 1:28:26 AM
The security team at my company announced recently that OpenClaw was banned on any company device and could not be used with any company login. Later, in an unrelated meeting, a non-technical executive said they were excited about the new Mac Mini they had just bought for OpenClaw. When they were told it was banned, they sort of laughed and said that obviously doesn't apply to them. No one said anything back. Why would they? This is an executive team that literally instructed the security team to weaken policies so it could be more accommodating of "this new world we live in."
by frenchtoast8
2/24/2026 at 1:45:38 AM
Similar thing at my company. Someone /very/ high up in the org chart recently said to the entire company that OpenClaw is the future of computing, and specifically called out Moltbook as something amazing and groundbreaking. There is literally no way security would ever let OpenClaw in the same room as company systems, never mind actually be installed anywhere with access to our data. It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a lifetime, a few hundred bucks a month to test something doesn't even register.
by ropetin
2/24/2026 at 3:23:27 AM
MoltBook is vibe coded. It passed its own API key via client-side JS, and in doing so exposed full read/write access to its Supabase DB, complete with over a million API keys.
That is groundbreaking for a product held in such high esteem, just not in a good way. I lack the words to explain my frustration at this timeline.
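For anyone unfamiliar with the failure mode described above: Supabase ships a publishable "anon" key (safe for browsers, gated by Row Level Security) and a "service_role" key that bypasses RLS entirely; both are legacy-format JWTs carrying a "role" claim. A minimal, hypothetical build-time check like the one below (all names invented for illustration) can catch the anti-pattern of shipping the admin key in client-side code:

```javascript
// Decode the middle (payload) segment of a JWT-shaped API key and
// read its "role" claim. No signature verification needed: we only
// want to know which kind of key is about to reach the browser.
function keyRole(jwt) {
  const parts = jwt.split(".");
  if (parts.length !== 3) return null; // not a JWT-shaped key
  const payload = JSON.parse(
    Buffer.from(parts[1], "base64url").toString("utf8")
  );
  return payload.role ?? null;
}

// Fail the build if a service_role key is found in a client bundle.
function assertSafeForClient(jwt) {
  const role = keyRole(jwt);
  if (role === "service_role") {
    throw new Error("service_role key must never reach the browser");
  }
  return role; // "anon" keys are designed to be public, gated by RLS
}

// Fabricated demo keys (the signature segment is a dummy).
const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");
const anonKey = `${b64({ alg: "HS256" })}.${b64({ role: "anon" })}.sig`;
const adminKey = `${b64({ alg: "HS256" })}.${b64({ role: "service_role" })}.sig`;

console.log(assertSafeForClient(anonKey)); // "anon"
try {
  assertSafeForClient(adminKey);
} catch (e) {
  console.log(e.message);
}
```

Even with the right key, the exposure above suggests RLS was off or misconfigured; a public anon key is only safe when row-level policies actually restrict what it can read and write.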
by xmcp123
2/24/2026 at 10:55:09 AM
I miss the old days of 5.5 years ago, when people were still sceptical of Yudkowsky's AI Box experiment:
by ben_w
2/24/2026 at 8:52:10 PM
Am I missing something, or are both of the "we convinced someone to let the AI out" claims missing any logs of what was actually said? Why wouldn't that be shared? You can't claim something is true because you have proof, and then not share the proof.
by thegrim33
2/25/2026 at 8:52:37 AM
You're not missing anything; I can't remember what his reasoning was, just that he gave one, therefore his say-so was only worth as much as your trust that he was honest. Today though, with headlines like this one, in response to events such as it quotes, from people in positions such as they are?
That is why I miss the old days, when not believing Yudkowsky's statements about the AI Box experiment only meant your views were compatible with the norms of corporate IT security rules.
by ben_w
2/24/2026 at 3:49:34 AM
> exposed full read/write access to it’s supabase db, complete with over a million API keys.
When was this lol; I knew it didn’t drop out of the news that fast by inertia alone.
by DANmode
2/24/2026 at 4:18:07 AM
It was revealed by this post by Wiz from the beginning of this month: https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...
by sheept
2/24/2026 at 7:03:16 AM
> 35,000 emails. 1.5M API keys. And 17,000 humans behind the not-so-autonomous AI network
Wow, this sure is a brave new world. I'd just recently heard about the project and they've already been pwned so massively. We're accelerating into a future beyond our control.
by lioeters
2/24/2026 at 3:44:38 PM
> vibe coded
s/vibe/slop/;
by NexRebular
2/24/2026 at 11:57:22 PM
Honestly “vibe coded” is already so derogatory in my eyes that I didn’t even consider another term.
by xmcp123
2/24/2026 at 7:19:59 AM
Sounds like you work at a music streaming company, but then again, this behavior is probably very widespread.
by techpression
2/24/2026 at 4:07:31 AM
In 3 decades of IT I have never seen such executive excitement combined with such recklessness, and it is appalling. Testing new and cutting-edge tech has always been a good idea, but this rampant application of it is the ultimate Running-With-Scissors meme. Risks are not being evaluated, and everything is bleeding edge.
My disgust probably comes from the instinct that the excitement is based on the allure of doing more with less, and layoffs are the only idea so many businesses have left.
The other camp is excited about selling more stuff because AI has been slapped onto it.
by kermatt
2/24/2026 at 4:36:19 AM
They think they can taste a great divide about to be torn in human society, and they expect to be on the top half.
by lokar
2/24/2026 at 5:29:23 AM
These execs are the people who previously cared about literally nothing except not looking bad to their bosses. Now they're getting all fired up about something and taking a stand and... it's this? Lol. Lmao. Etc.
by jcgrillo
2/24/2026 at 10:26:48 PM
Their excitement is that they have hope they can finally get rid of all those stupid humans doing the actual work. American MBA culture has spent decades hammering home an ideology of the worker as a necessary evil to make money, and of those workers as utter scum that deserve no empathy or thought, because greed is "right" and specifically because a hyper-greedy system will of course produce the right outcomes naturally. They take it as a given that they end up on top in such a system, because they've always believed themselves the most important.
They desperately want to encourage this small chance of a future finally free of the gross masses and their horrific desires like "Vacation time" and "Sick time" and "salaries". How dare those lowly trash deign to deserve any of My rightful profit.
The American system has spent about 50 years now self-selecting sociopaths at every level, rewarding people who sacrifice themselves for a company to make tiny bits more profit, ensuring that every manager at a high level eats, sleeps, and dreams the dumb "We are a family" line whether they actually believe it or not. It should not be surprising that the thing they get hyped about is so damn stupid. They don't want what you and I want.
This is the dream of the people who responded to the establishment of basic Labor rights and Social Security with McCarthyism. These people believe, very very genuinely, that you and I are wasting Their resources.
by mrguyorama
2/25/2026 at 12:03:11 AM
Very well said.
by jcgrillo
2/24/2026 at 3:18:30 AM
The mac mini they bought with their own money to run their own stuff? Company policy doesn't apply to their personal computing.
by danielmarkbruce
2/24/2026 at 3:23:22 AM
I'm sure company policy would technically prohibit them from accessing company resources from their personal computer; or if it does allow access to company resources from their personal computer, then their corporate tech policy very likely does apply to their personal computing. If the executive bought the Mac Mini for personal use only, with no interaction with company resources, then the person probably wouldn't have told the story.
by ncallaway
2/24/2026 at 3:26:04 PM
You might be right. But this (and a few other) weird comments in this thread suggest some folks aren't thinking very clearly on this topic.
by danielmarkbruce
2/24/2026 at 6:30:58 PM
> Company policy doesn't apply to their personal computing.
Sure, it'll come off as "oh, I'm just running an experiment" after your infra/security teams notice. Seen this at a public company before the current AI hype.
by derivagral
2/24/2026 at 11:54:11 PM
Great time to be a pen tester! Or a black hat hacker, for that matter. The branches are drooping further every day.
by huey77
2/24/2026 at 1:52:33 PM
I hope the security team talked to the legal team about that. There is potential for OpenClaw to commit crimes on behalf of the company.
by trehalose
2/24/2026 at 3:57:18 AM
"Move fast and break things" (c) Zuck
by zx8080
2/24/2026 at 2:41:30 PM
I mean, innovation going faster than the security department is not a new thing. You have to understand that the security department operates with a fundamentally different mindset and reality than a business executive. One is responsible for compliance and avoiding adverse events, and the other for ensuring the ongoing survival and relevance of the organisation.
Specific waivers for high-level members are fully expected. They also have waivers for procurements. It makes sense because they can take personal responsibility for this level of decisions. They don't need the security department to act as their shield.
It's clear that something like OpenClaw has the potential to be deeply disruptive, so seeing leaders explore it makes sense.
by StopDisinfo910
2/24/2026 at 12:35:37 AM
Those people aren't the same. Those are two ideas that you heard from the internet, and you're imagining it's the same person talking.
by ekjhgkejhgk
2/24/2026 at 12:45:22 AM
There's a name for this: https://en.wiktionary.org/wiki/Goomba_fallacy
by HeliumHydride
2/24/2026 at 1:26:09 AM
I'm glad that a term for this exists. It's always seemed so silly to me that someone would think that a group of people would all conform to the same opinion.
by chrysoprace
2/24/2026 at 3:35:51 AM
But isn't that a requirement for joining any social media platform?
by eviks
2/24/2026 at 12:23:13 PM
no.
by krapp
2/24/2026 at 2:14:19 AM
Thank you!!! I've been looking for a term for this concept for years!
by CoastalCoder
2/24/2026 at 12:59:20 AM
Some of them are the same. It's a Venn diagram: there are two camps, and there is no doubt some overlap because of the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that are 100% overlapping.
by jacquesm
2/24/2026 at 2:31:21 AM
So they’re assuming the existence of somebody to be mad at without direct evidence?
by dullcrisp
2/24/2026 at 2:48:51 AM
No, they're applying statistics.
by jacquesm
2/24/2026 at 4:46:50 AM
Some people are literally the worst. I don’t know which ones specifically, but statistically speaking some must be.
by dullcrisp
2/24/2026 at 7:23:39 AM
Statistically only one person can literally be the worst, unless you can tie for the position.
by techpression
2/24/2026 at 3:02:12 AM
The set of sane developers and the set of developers who are completely ignoring security considerations are disjoint. You only get an overlap if you ignore words in the original comment.
by cwillu
2/24/2026 at 3:26:58 AM
I mean... that could be a little "no true Scotsman" at that point, though. I think the most useful interpretation of the previous post is: Set A is "the set of developers who appeared sane before the arrival of AI agents" and Set B is "the set of developers who are completely ignoring security considerations".
by ncallaway
2/24/2026 at 3:40:25 AM
Hmm? I have 100% met people that fall into this.
by Capricorn2481
2/24/2026 at 12:16:45 AM
Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"? Can you point to a dozen blogs/Twitter profiles, or are you just inventing a fictitious "other" to attack?
by throw10920
2/24/2026 at 1:51:52 AM
The person being quoted, for one, who is apparently focused on safety and alignment at Meta. Safety being handing over your email credentials to the shiny new thing, apparently.
by Macha
2/24/2026 at 2:01:35 AM
Are they even a developer? “Safety and alignment” as AI buzzwords are quite different from “security and privacy”. In any case, I wouldn’t take a random person with a sinecure job as exemplary of anything.
by LudwigNagasena
2/24/2026 at 12:20:57 PM
"The AI ate my email" is the new, plausible-deniability version of "my dog ate my homework".
by fmajid
2/24/2026 at 3:02:53 AM
So, not sane.
by cwillu
2/24/2026 at 12:16:30 PM
> Who are these developers that have both been "advocating for best practices" and also "seem to be completely abandoning all of them simply because it’s AI"?
All of them. Apparently uploading all your codez to some cloud provider that doesn't even have a figleaf of a EULA is okay now, because "AI".
by otabdeveloper4
2/24/2026 at 2:09:19 PM
> All of them.
An insane claim with zero evidence provided. You're just making it up. Found the tribalistic propagandist unconcerned with reality or truth.
by throw10920
by throw10920
2/24/2026 at 4:26:07 PM
"All of them who now all of a sudden use cloud AI IDEs." Happy now?
by otabdeveloper4
2/24/2026 at 12:14:17 AM
They aren't. They're the ones who are resisting going all in on AI. What you're seeing is overreactive trend followers.
by monksy
2/24/2026 at 2:04:27 AM
Same as the “MongoDB is webscale” crowd.
by bubblewand
2/24/2026 at 4:28:06 AM
For anyone that wasn't around at the time this gem came out and doesn't get the reference:
by latentsea
2/24/2026 at 6:01:57 AM
I used to love that video so much. We need an LLM version.
by esseph
2/24/2026 at 12:33:06 AM
And likely massive amounts of marketing spending pushing for people to bend over and accept AI anything anywhere.
by autoexec
2/24/2026 at 2:45:32 AM
OpenClaw is the Napster to iTunes. People who have been around long enough know that we're currently in the wild west of networked agentic systems. It's an exciting time to build and explore (just like Napster and early digital music). Eventually some big company will come along and pave the cow paths and make everything safe and secure. But the people who will actually deliver that are likely playing with OpenClaw (and OpenClaw-like systems) now.
by hugs
2/24/2026 at 7:30:47 AM
The same "sane developers advocating for best practices" preached to the moon:
- Alexa (and other voice assistants): spy microphones in their homes;
- Internet-connected:
  - locks;
  - door, bedroom, living room cameras;
  - lights, appliances and whatnot.
Giving full and unfettered control of their personal computer, with all its accounts, apps, etc., does not surprise me at all. I wonder what anthropologists will write about us 100 years in the future: what is super creepy and super illegal for a physical individual to do is given a blank check from society when done by tech corporations at unimaginable scale.
EDIT: also, corporations (from my social bubble) are giving LLMs (almost) unfiltered access to their data (and probably soon, through the "Claw" trend, control of that data), which would be an instantly fireable offence for any employee.
Imagine giving enterprise access to some Joe-Claw from the street and allowing him to press any buttons he wants.
by trymas
2/24/2026 at 5:02:45 AM
> Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI
The deep irony is that the email-deletion victim is an "AI alignment specialist" at Meta, and she didn't consider this failure mode.
by overfeed
2/24/2026 at 1:16:29 AM
I agree with a lot of the siblings that it's probably not the same people. But for the overlap that probably does exist, I don't think "because it's AI" is their reasoning. If I were to guess, I'd say it's something closer to "exploring the potential of this new thing is worth the risk to me".
by resonious
2/24/2026 at 7:15:25 AM
A lot of us are being forced to deploy AI, and have concluded that the built-in security issues are essentially unsolved. So we’re stuck.
by simooooo
2/24/2026 at 7:21:02 AM
You're not being forced to deploy OpenClaw, are you? That would be quite concerning!
by resonious
2/24/2026 at 2:18:23 AM
> why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them
I'm a sane developer. I do not trust AI at all. I built my own personal OpenClaw clone (long before it was even a thing) and ran controlled experiments inside a sandbox. My stack is Elixir, so this is pretty easy: if an agent doesn't actually respect your requirements, it's as easy as running an iex command to kill that particular task.
In my experience, AI, whatever the model, consistently disobeys direct commands. And worse, it consistently tries to cover up its tracks. For example, I will ask it to create a task within my backend. It will tell me it did, for no reason at all, even giving me a task ID that never existed. And when asked why it lied, it would actually spin the task up and accuse me of not trusting it.
It doesn't matter which vendor, which model. This behaviour is repeatable across models and vendors. Now, why would I give something like this access to my entire personal and professional life?
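A minimal sketch (hypothetical names, plain JavaScript) of the mitigation this experience points to: treat the model's claim of a side effect as untrusted input and verify it against the system of record before reporting success, rather than taking the agent's word for it.

```javascript
// Stand-in for the backend task store (in reality: your DB or API).
const taskStore = new Set();

function createTask(id) {
  taskStore.add(id);
  return id;
}

// An agent reports a claim like { action: "create_task", id: "T-123" }.
// Verification passes only if the claimed id actually exists in the
// system of record; a fabricated id fails regardless of what the
// model says.
function verifyAgentClaim(claim) {
  if (claim.action !== "create_task") return false;
  return taskStore.has(claim.id);
}

createTask("T-1");

console.log(verifyAgentClaim({ action: "create_task", id: "T-1" }));   // true
console.log(verifyAgentClaim({ action: "create_task", id: "T-999" })); // false: fabricated id
```

The same pattern generalizes: every tool call an agent reports should be confirmed from logs or state you control, never from the transcript alone.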
To group me and others like me with the clowns doing this is an insult to me and others who have accumulated decades of experience and security best practices and who had nothing to do with OpenClaw.
by neya
2/24/2026 at 2:58:42 AM
Lots of developers have been flippant for a long time when it comes to the security of the systems they use and violate best practices on a regular basis, often for convenience. Developer ≠ sensible with personal security.
by cosmic_cheese
2/24/2026 at 1:42:43 AM
I'm enthusiastic about AI (it's gone from the 2nd most important thing to happen in my career to tied for first, with the Internet) and I am baffled by OpenClaw.
by tptacek
2/24/2026 at 2:40:43 AM
I thought Ben Goertzel had a good take on it: "someone made hands for a brain that doesn't exist yet"
by eucyclos
2/24/2026 at 9:46:29 AM
There are still sane people out there. I’m one of them, watching this gigantic trash heap ready to go up in flames. It’s not just OpenClaw either, it’s everything. Nobody is paying any attention, and when it goes wrong it’s going to be an absolute catastrophe.
by cedws
2/24/2026 at 8:02:39 AM
> developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI
Risk and reward. That balance, currently, seems tipped to favour risk-taking. (Which in turn encompasses both boldness and recklessness.)
by JumpCrisscross
2/24/2026 at 1:18:47 AM
I was building a claw clone the other day when, for debugging, I added a bash shell. So I type arbitrary text into a Telegram bot and then it runs it as bash commands on my laptop. Naturally I was horrified by what I had created.
But suddenly I realized, wait a minute... strictly this is less bad than what I had before, which is the same thing except piped through a LLM!
Funny how that works, subjectively...
(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)
As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...
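A minimal sketch of a gate for the Telegram-to-bash setup described above (hypothetical names, not any real bot framework's API): if free-form chat text can reach a shell at all, at minimum reject shell metacharacters and allowlist a few read-only commands instead of piping the text to bash verbatim. This complements, not replaces, running the agent as an unprivileged user.

```javascript
// Read-only commands the bot is allowed to run. Everything else is refused.
const ALLOWED = new Set(["ls", "pwd", "whoami", "date", "uptime"]);

// Returns an argv array if the text passes the gate, or null if refused.
function gateCommand(text) {
  const trimmed = text.trim();
  // No chaining, piping, redirection, or substitution allowed.
  if (/[;&|$`><\\]/.test(trimmed)) return null;
  const argv = trimmed.split(/\s+/);
  return ALLOWED.has(argv[0]) ? argv : null;
}

console.log(gateCommand("ls -la"));        // [ 'ls', '-la' ]
console.log(gateCommand("rm -rf /"));      // null: not allowlisted
console.log(gateCommand("ls; curl evil")); // null: metacharacter
```

The returned argv should then be executed without a shell (e.g. an exec-style call that takes an argument array), so nothing left in the string can be reinterpreted by bash.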
by andai
2/24/2026 at 1:01:07 AM
You must not say his name. If you say it, you will summon him.
by xantronix
2/25/2026 at 5:40:38 PM
OpenClaw has now weeded out the shills, leaving only the real pros :D.
by lofaszvanitt
2/24/2026 at 12:41:47 AM
Developers with and without devops experience.
by j45
2/24/2026 at 1:08:25 AM
This isn't any different than pre-Claude. We've always had people that wrote code but had no clue about systems. Not everyone is a CS major. I've seen people do the strangest things that you would think a sane person would never do, yet the strangeness is coming from someone you would otherwise consider sane/smart. Not everyone is a sysadmin banging out perl to automate things.
by dylan604
2/24/2026 at 5:08:12 AM
I would agree that it doesn't have anything to do with Claude. I didn't mean to imply CS majors knew this either.
Understanding the impact of letting software run free of permission and operational constraints, with direct access to other software, is a pretty basic thing.
Neither deterministic nor non-deterministic software performs as expected without getting it right.
We are new to non-deterministic software, let alone how it operates between different layers.
DevOps, hosting, security, etc, is all in a way software, and software configuration.
The more it's understood, the more it can inform software development, and in the case of openclaw, integrating systems.
by j45
2/24/2026 at 12:43:02 PM
Are you sure these are the same people and not new people that got hooked on hype?
by mhitza
2/24/2026 at 7:10:21 PM
Obviously the assumption of sanity was premature.
by tempodox
2/24/2026 at 6:01:56 AM
This is the difference between a technical and a nontechnical audience. Relevant xkcd: https://xkcd.com/2030/
by rk06
2/24/2026 at 12:39:55 AM
It's greed.
by cromka
2/24/2026 at 3:12:33 AM
The bar for working security at Meta doesn't seem that high.
by petterroea
2/24/2026 at 2:49:48 AM
Honestly it’s been a breath of fresh air to have most of the gatekeeping in software be removed. Seems that it was by and large just people wanting to feel important, and holding onto their positions.
Apps need great security, but security can also get out of control. Apps need good abstractions and code hygiene but that too can get out of control.
I’ve fallen in love with programming all over again now that I’m not so tied down by perceived perfection.
by mountainriver
2/24/2026 at 10:37:50 AM
Everything is easy if you don't care about getting pwned, and you don't consider yourself responsible if this has negative effects on other people.
by pjc50
2/24/2026 at 3:25:50 AM
Is this satire?
by xmcp123
2/24/2026 at 12:35:36 AM
To the extent that anyone can be replaced they will be replaced, and nothing they do now will save them. The good news is that so far I haven't seen companies having much success outright replacing workers with AI chatbots.
by autoexec
2/24/2026 at 12:42:59 AM
It's not successfully replacing them with AI that is the problem; it's firing them to then replace them with AI, which, when it doesn't work, is either too late or at best incredibly disruptive for the people impacted.
by skeeter2020
2/24/2026 at 1:48:26 AM
That's certainly true. Lots of letting workers go only to hire new ones at much lower pay.
by autoexec
2/24/2026 at 1:01:11 AM
They don't have the successes, but they do replace them. I've seen a couple of examples of that in the last couple of months; there is just no way to avoid these abominations any more.
by jacquesm
2/24/2026 at 12:08:33 AM
They're getting replaced by AI anyway; these bleeding-edge agents are just surfboards for the wave. Learn fast or die trying, lol.
by observationist
2/24/2026 at 2:41:51 AM
"ever" is the key word. Like driving, we as humans will cede control, at some point, to AI.
by almosthere
2/24/2026 at 4:54:11 AM
Because security isn't the be-all and end-all; it has to serve the goals of the business and its customers. Customers say that they want security with their mouths, but they say that they want features with their wallets. The best improvement to computer security you can make is turning the computer off, but this is clearly not what your (non-HN) customers want you to do.
AI has serious security risks (e.g. prompt injection), but it lets you deliver customer value a lot faster. Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
by miki123211
2/24/2026 at 5:13:13 AM
> Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
This is true right up until the moment their entire database is available as a torrent.
by antisol
2/24/2026 at 9:14:30 AM
Which companies collapsed or had to face important consequences because their database leaked?
by freehorse
2/24/2026 at 12:49:02 PM
The first one which immediately springs to mind: https://www.abc.net.au/news/2025-08-08/optus-sued-by-privacy...
I'm sure a search engine could help you find other examples.
by antisol