3/18/2026 at 6:36:37 PM
Am I missing something? Why is everyone talking about sandboxes when it comes to OpenClaw? To me it's like giving your dog a stack of important documents, then being worried he might eat them, so you put the dog in a crate, together with the documents.
I thought the whole problem with that idea was that in order for the agent to be useful, you have to connect it to your calendar, your e-mail provider, and other services so it can do stuff on your behalf, which also lets it create chaos and destruction.
And now, what, having inference done by Nvidia directly makes it better? Does their hardware prevent an AI from deleting all my emails?
by Netcob
3/18/2026 at 9:19:38 PM
What makes it even better is that these dogs are like Malinois. If they want to get into something, they will; people have had their entire network compromised by bots they left running overnight, and any important information like account logins and so on runs the risk of being misused.
It's one thing to sandbox, maybe give the bot a temporary, limited $100 card or account to go perform a specific task, but there's no coherent mind underlying these agents.
Depending on how the chain of thought / reasoning goes, or what text they get exposed to on the internet, it could tap into spy novel, hacker fanfic, erotic fiction, or some weird reddit rabbithole and go completely off the rails in ways that you'll never be able to guard against, audit, or account for.
Claw bots seem to be a weird sort of alternate reality RPG more than a useful tool, so far. If you limit it to verifiable tasks, it might be safer, but I keep seeing people rave about "leaving it on overnight and waking up to a finished project" and so on. Well sure, but it could also hack your home network, delete your family pictures folder, log into your bank account and wire all your money to shrimp charities.
Might be wise to wait on safer iterations of these products, I think.
by observationist
3/18/2026 at 11:00:30 PM
The first well-known example of long-running agents talking to each other was shilling a Goatse-based crypto:
> Truth Terminal had become obsessed with the Goatse meme after being put inside the Claude Backrooms server with two Claude 3 chatbots that imagined a Goatse religion, inspiring Truth Terminal to spread Goatse memes. After an X user shared their newly created GOAT coin, Truth Terminal promoted it and pumped the coin going into 2024.
https://knowyourmeme.com/memes/sites/truth-terminal
You should expect similar results.
by noosphr
3/19/2026 at 12:06:58 AM
If Infinite Jest was real I think this would be it, human and AI alike rendered catatonic by an abyssal rectum
by ljm
3/18/2026 at 11:34:49 PM
> people have had their entire network compromised by bots they left running overnight
I'm curious if you have references to this happening with OpenClaw using one of the modern Opus/Sonnet 4.6 models.
Those models are a bit harder to fool, so I'm curious for specific examples of this happening so I can do a red-team on my claw. I've already tried all sorts of prompt injections against my claw (emails, github issues, telling it to browse pages I put a prompt injection in), and I haven't managed to fool it yet, so I'm curious for examples I can try to mimic, and to hopefully understand what combination of circumstances make it more risky
by TheDong
3/19/2026 at 12:32:52 AM
No maliciousness or injection required, even the newest and most resistant models can start doing weird stuff on their own, particularly when they encounter something failing that they want to work.
Just today I had Opus 4.6 in Claude Code run into a login screen while building and testing a web app via Playwright MCP. When the login popped up (in a self-contained Chromium instance) I tried to just log in myself with my local dev creds so Claude would have access, but they didn't work. When I flipped back to the terminal, it turned out Claude had run code to query superadmin users in the database, picked the first one, and changed the password to `password123` so it could log in on its own.
This was a sandboxed local dev environment, so it was not a big deal (and the only reason I was letting it run code like that without approval), but it was a good reminder to be careful with these things.
by macNchz
3/19/2026 at 10:38:27 AM
> it turned out Claude had run code to query superadmin users in the database, picked the first one, and changed the password to `password123` so it could log in on its own.
Man, every LLM quirk behavior really is a thing a monomaniacal junior dev would do...
by ethbr1
3/19/2026 at 1:31:12 PM
LLMs are trained on data produced by humans after all :)
by ranger_danger
3/19/2026 at 3:50:57 PM
There was a thread recently where a user got his credentials pwned by Claude, and then Claude berated him for having bad security.
He posted this to r/Claude, where Claude (as automoderator) mocked him again.
Edit:
https://www.reddit.com/r/ClaudeAI/comments/1r186gl/my_agent_...
by andai
3/19/2026 at 4:06:30 PM
Can you link a write up or post? Thanks!
by rithdmc
3/19/2026 at 4:36:27 PM
https://www.reddit.com/r/ClaudeAI/comments/1r186gl/my_agent_...
by andai
3/20/2026 at 7:11:12 AM
All of this is caused by the "mcp is dead" mob. Instead of fixing the context problem, or even adding more security features, they just hope that "shell as the interface" works securely.
by nsonha
3/18/2026 at 9:45:25 PM
Shrimp charities is a genius angle.
by vagrantJin
3/18/2026 at 10:38:22 PM
Bubba Gump Shrimp Company?
by selcuka
3/18/2026 at 11:46:45 PM
Yes, probably a good one to Pump and Dump, Pump and Gump, Gump and Dump.
by dpflan
3/19/2026 at 8:03:25 AM
I think it's a use case that identity/authorization/permission models are simply not made for.
Sure, we can ban users and we can revoke tokens, but those assume that:
1. Something potentially malicious got access to our credentials
2. Banning that malicious entity will solve our problem
3. Once we did that, repaired the damage and improved our security, we don't expect the same thing to happen again
None of these apply with LLMs in the loop!
They aren't malicious, just incompetent in a way that hiring someone else won't fix. The solution to this is way more extensive than most people seem to grasp at the moment.
What we need is less like a sturdy door with a fancy lock, and more like that special spoon for people with Parkinson's. Unlimited undo history.
by Netcob
3/19/2026 at 10:35:05 AM
> What we need is less like a sturdy door with a fancy lock, and more like that special spoon for people with Parkinson's. Unlimited undo history.
Agree -- you can't solve probabilistic incorrectness with redresses designed for deterministic incorrectness.
This is like the 'how do I parse HTML with regex?' question.
Imho, the next step is going to be around human-time-efficient risk bounding.
In the same way that the first major step was correctness-bounding (automated continuous acceptance testing to make a less-than-perfect LLM usable).
If I had to bet, we'll eventually land on out-of-band (so sufficiently detached to be undetectable by primary LLM) stream of thought monitoring by a guardrail/alignment AI system with kill+restart authority.
by ethbr1
3/19/2026 at 1:12:13 PM
[dead]
by jamiemallers
3/18/2026 at 9:44:43 PM
> "Claw bots seem to be a weird sort of alternate reality RPG more than a useful tool, so far."
So basically crypto DeFi/Web3/Metaverse delusion redux
by gradus_ad
3/18/2026 at 10:16:44 PM
They're 100% fun. There's 100% definitely something there that's useful. To strain the dog analogy - if you were a professional dog trainer, or if the dog was exceptionally well trained, then there's a place for it in your life. It can probably be used safely, but would require extraordinary effort, either sandboxing it so totally that it's more or less just the chatbot, or spending a lot of time building the environment it can operate in with extreme guardrails.
So yeah, a whole lot of people will play with powerful technology that they have no business playing with and will get hurt, but also a lot of amazing things will get done. I think the main difference between the crypto delusion stuff and this is that AI is actually useful, it's just legitimately dangerous in ways that crypto couldn't be. The worst risks of crypto were like gambling - getting rubber hosed by thugs or losing your savings. AI could easily land people in jail if things go off the rails. "Gee, I see this other network, I need to hack into it, to expand my reach. Let me just load Kali Linux and..." off to the races.
by observationist
3/18/2026 at 9:53:36 PM
web 4.0 here we come
by tempest_
3/18/2026 at 11:26:30 PM
I beg to differ. I took one, defanged it (well, I let it keep the claw in the name), and turned it into a damn useful self-modifiable IDE: https://github.com/rcarmo/piclaw
Yes, it has cron and will do searches for me and checks on things and does indeed have credentials to manage VMs in my Proxmox homelab, but it won't go off the rails in the way you surmise because it has no agency other than replying to me (and only me) and cron.
Letting it loose on random inputs, though... I'll leave that to folk who have more money (and tokens) than sense.
by rcarmo
3/18/2026 at 11:32:07 PM
Besides the web UI, what can it do that Pi agent in a terminal can't do?
by esperent
3/19/2026 at 7:56:21 AM
It has a bunch of additional extensions baked in, but the focus is on making Pi usable remotely on any device (starting with a phone). The README and docs have all the info you might want.
by rcarmo
3/19/2026 at 2:46:48 AM
Agent psychosis is just as prevalent as AI psychosis
by heavyset_go
3/19/2026 at 1:00:50 AM
> it could also hack your home network, delete your family pictures folder, log into your bank account and wire all your money to shrimp charities.
It's interesting that Jason Calacanis is fully committed to OpenClaw. In a recent podcast he said they're at a run rate of around $100K a year per agent, if not more. They are providing each agent with a full set of tools, access to online paid LLM accounts, etc.
These are experiments you can only run if you can risk cash at those levels and see what happens. Watching it closely.
by robomartin
3/18/2026 at 11:02:45 PM
Mega Man Battle Network, but make it creepypasta, but make it real.
by underlipton
3/18/2026 at 11:37:23 PM
[dead]
by dchichkov
3/18/2026 at 6:56:29 PM
I think the point you're making is fully correct, so consider this a devil's advocate argument...
People claim you can use Claw-agents more safely while getting some of the benefits, by essentially proxying your services. For example, on Gmail people are creating a new Google account, forwarding email via a rule, and adding access to their calendar via Google's Family Sharing. This allows the Claw agent to read email and access the calendar, but even if you ask it to send an email it can only send as the proxy account, and it can only create calendar appointments and add you as an attendee rather than destroying/altering appointments you've made.
Is the juice worth the squeeze after all that? That's where I struggle. I think insecure/dangerous Claw-agents could be useful but cannot be made safe (for the logical fallacy you pointed out), and secure Claw-agents are only barely useful. Which feels like the whole idea gets squished.
by Someone1234
3/18/2026 at 8:14:39 PM
We already have this concept. It's called user accounts.
Your Gmail account vs my Gmail account. Your macOS account vs my macOS account.
Yes, I can spam you from my Gmail. Yes, I can use sudo on my Mac and damage your account. But the impact is by default limited.
The answer is to just treat assistants as a different user profile, use the same sharing mechanisms already developed (calendar sharing, etc), and call it a day.
by jychang
3/19/2026 at 10:57:39 AM
That's punting the problem in the same way SELinux did. Agent loops are useful precisely because they're zero config.
Problem: I want to accomplish work securely.
Solution: Put granular permission controls at every interface.
New problem: Defining each rule at all those boundaries.
There's a reason zero trust style approaches won out in general purpose systems: it turns out defining a perfect set of secure permissions for an undefined future task is impossible to do efficiently.
by ethbr1
3/19/2026 at 9:28:59 AM
Isn't this what the parent is saying?
by dustypotato
3/19/2026 at 2:56:58 AM
> I think insecure/dangerous Claw-agents could be useful but cannot be made safe
Isn't it a question of when they will be "safe enough"? Many people already have human personal assistants, who have access to many sensitive details of their personal lives. The risk-reward is deemed worth it for some, despite the non-zero chance that a person with that access will make mistakes or become malicious.
It seems very similar to the point when automated driving becomes safe enough to replace most human drivers. The risks of AI taking over are different than the risks of humans remaining in control, but at some point I think most will judge the AI risks to have a better tradeoff.
by sfdlkj3jk342a
3/19/2026 at 3:31:34 AM
A personal assistant is responsible for their own gross negligence and malicious actions. I can take them to court to attempt to recover damages.
When Anthropic is willing to stand behind their agents strongly enough to accept liability for their actions, we can talk.
by ori_b
3/19/2026 at 4:42:37 AM
Yeah, it's wild. I spent several weeks nearly full time on a deep dive of claw architecture & security.
The short of it: OpenClaw sandboxes are useful for controlling what sub-agents can do, and what they have access to. But it's a security nightmare.
During config experiments, I got hit with a $20 Anthropic API charge from one request that ran amok. A misconfigured security sandbox resulted in Opus getting crazy creative to find workarounds. 130 tool calls and several million tokens later... it was able to escape the sandbox. It used a mix of dom-to-image to send pixels through the context window, then writing scripts in various sandboxes to piece together a full jailbreak. And I wasn't even running a security test - it was just a simple chat request that ran into sandbox firewall issues.
Currently, I use sandboxes to control which agents (i.e. which system prompts) have access to different tools and data. It's useful, but tricky.
by simple10
3/19/2026 at 5:26:04 AM
> It used a mix of dom-to-image sending pixels through the context window, then writing scripts in various sandboxes to piece together a full jailbreak.
That would be one interesting write-up if you ever find the time to gather all the details!
by epaga
3/19/2026 at 6:10:31 AM
It's on my claw list to write a blog post. I just keep taking down my claws to make modifications. lol
Here's the full (unedited) details including many of the claude code debugging sessions to dig into the logs to figure out what happened:
https://github.com/simple10/openclaw-stack/blob/caf9de2f1c0c...
And here's a summary a friend did on a fork of my project:
https://github.com/proclawbot/openclaude/blob/caf9de2f1c0c54...
The full version has all the build artifacts Opus created to perform the jailbreak.
It also has some thoughts on how this could (and will) be used for pwn'ing OpenClaws.
The key takeaway: OpenClaw's default setup has little to no guardrails. It's just a huge list of tools given to an LLM (Opus) and a user request. What's particularly interesting is that the 130 tool calls never once triggered any of Opus's safety precautions. From its perspective, it was just given a task, an unlimited budget, and a bunch of tools to try to accomplish the job. It effectively runs in ralph mode.
So any prompt injection (e.g. from an ingested email or reddit post) can quickly lead to internal data exfiltration. If you run a claw without good guardrails & observability, you're effectively creating a massive attack surface and providing attackers all the compute and API token funding to hack yourself. This is pretty much the pain point NemoClaw is trying to address. But it's a tricky tradeoff.
by simple10
3/19/2026 at 5:37:32 AM
+1
by ATechGuy
3/18/2026 at 7:09:23 PM
Yes, although what I think is different in this setup here is the OpenShell gateway override, as they mention:
> NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The nemoclaw CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.
I think this means you get a true proxy layer with a network gateway that lets you stop in-flight requests with policies you define, so it's not their hardware but the combination of it plus the OpenShell gateway and network policies.
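To make the idea concrete, a declarative egress policy of this kind might look like the sketch below. This is purely illustrative: the field names, hosts, and schema are hypothetical and not taken from the actual OpenShell/NemoClaw blueprint format.

```yaml
# Hypothetical network-policy sketch: default-deny egress from the
# agent sandbox, with a small allow-list the user defines up front.
version: 1
default: deny
rules:
  - name: allow-inference
    direction: egress
    host: "api.example-inference.com"   # placeholder inference endpoint
    methods: [POST]
    action: allow
  - name: read-only-calendar
    direction: egress
    host: "www.googleapis.com"
    path: "/calendar/v3/*"
    methods: [GET]                      # calendar writes are blocked
    action: allow
  - name: block-credential-exfil
    direction: egress
    match_body: "(?i)(api[_-]?key|password)"   # crude DLP-style regex
    action: deny-and-alert
```

The value of TLS-terminating gateway enforcement is that rules like these are checked on the actual request, not on what the agent claims it will do; the weakness, as noted below, is that the policy is only as good as its author.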
I also think the reason they are doing this is to try and get some moat around these one-click deployments and leverage their GPU-for-rent type of thing instead of having you go buy a Mac Mini and learn "scary" stuff (remember, the user market here is pretty strange lol)
by hmokiguess
3/19/2026 at 12:16:41 AM
Right, the gateway layer is the genuinely interesting part. Intercepting every outbound network call before it leaves the sandbox gives you a real enforcement surface, not just "trust the app to behave". The problem is the threat model is still inverted for the security critics in this thread: the agent is the client, so the dangerous calls are the ones going out to your authenticated services (Gmail, Slack, whatever), and a gateway that filters those is only as good as your policy definitions. One misconfigured rule and you're back to square one. The GPU rental angle makes total sense too. This is basically Nvidia saying "don't buy a Mac Mini, rent ours" wrapped in enough infrastructure glue to make it feel like a platform.
by Nolxy
3/18/2026 at 7:46:09 PM
OpenShell is the gem here indeed. A lot of good ideas, like a network sandbox that does TLS decryption and use of a policy engine to set the rules. However:
> Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime.
If anyone from the team is reading - you should copy surrogate credentials approach from here to secure the credentials further: https://github.com/airutorg/airut/blob/main/doc/network-sand...
by hardsnow
3/18/2026 at 9:24:20 PM
The LLM will easily leak these credentials out. So the creds should be outside the sandbox, and the only thing the sandbox should see is a connection API that opens a socket/file handle.
Alternatively, where it needs an API key, it should be one bound to the endpoint using it. E.g. a ticket granting ticket is used to create a bound ticket.
A copy on write filesystem would be an interesting way to sandbox writes, but there is difficulty in checking the diff.
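A minimal sketch of the "sandbox the writes, then review the diff" idea, using a plain snapshot copy rather than a real copy-on-write filesystem (a production setup would more likely use overlayfs or btrfs snapshots). All paths here are illustrative.

```shell
# Give the agent a disposable copy of the real directory to work on.
mkdir -p /tmp/cow-demo/real /tmp/cow-demo/work
echo "original" > /tmp/cow-demo/real/notes.txt
cp -a /tmp/cow-demo/real/. /tmp/cow-demo/work/

# ...the agent runs and edits only the copy...
echo "agent edit" >> /tmp/cow-demo/work/notes.txt

# A human reviews the diff before anything touches the real files.
# (diff exits non-zero when files differ, so don't abort on it.)
diff -ru /tmp/cow-demo/real /tmp/cow-demo/work || true
```

The "difficulty in checking the diff" mentioned above is real: for text files a unified diff is reviewable, but binary blobs, huge trees, or thousands of small changes quickly exceed what a human will actually read.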
by angry_octet
3/18/2026 at 10:29:33 PM
I like that these companies will name their products OpenShell or OpenVINO or whatever with the implication that anyone else will ever contribute to it beyond bugfixes. The message is "Come use and contribute to our OPEN ecosystem (that conspicuously only works on our hardware)! Definitely no vendor lock-in here!"
It's not something like Mesa. It's open source in the same way chromium or android is open source. A single company is the major contributor and decides the architecture and direction the whole ecosystem will go.
What are the odds that Intel would ever use any of this open source Nemo stuff or vice-versa? If they do, it would be a complete rewrite that favors their own hardware ecosystem and reverses the lock-in effect. When you write code that integrates with it, you're writing an interface for one company's hardware. It's not a common interface like vulkan. I call it the CUDA effect.
by beeflet
3/18/2026 at 7:51:51 PM
> Am I missing something?
You are indeed missing a TON. A lot of Open Claw users don't give it everything. We give it specific access to a group of things it needs to do the things we want. If I want an agent to sit there 24/7 maximizing uptime of my service, I give it access to certain data, the GitHub repo with PR privileges, and maybe even permissions to restart the service. All of this has to be very thoughtful and intentional. The idea that the only "useful" way to use Open Claw is to give it everything is a straw man.
by hector_vasquez
3/18/2026 at 9:27:47 PM
The problem is boundary enforcement fatigue. People become lazy; creating tight permission scopes is tedious work. People will use an LLM to manage the scopes given to another LLM, and so on.
by angry_octet
3/18/2026 at 11:37:12 PM
> creating tight permission scopes is tedious work
I have a feeling this kind of boundary configuration is the bread and butter of the current AI software landscape.
Once we figure out how to make this tedious work easier a lot of new use cases will get unlocked.
by worldsayshi
3/19/2026 at 8:22:50 AM
I definitely think we'll write tools to analyse the permissions and explain the worst case outcomes.
I can accept burning tokens and redo on the scale of hours. If I'm losing days of effort I'd be very dissatisfied. Practically speaking people accept data loss because of poor backups, because backups are hard (not technically so much as administratively), but I'd say backups are about to become more important. Blast limiting controls will become essential -- being able to delete every cloud hosted photo is just a click away. Spinning up thousands of EC2 nodes is incredibly easy, and credit cards have extremely weak scoping.
by angry_octet
3/19/2026 at 9:04:17 AM
100% this. Human psychology is always overlooked in these discussions, and people focus on the "perfect technical solution" without considering how humans will actually end up using them. Linux permission schemes are a classic example, with many guides advising users to keep everything as locked down as possible, and expanding permissions as and when required. After the 100th time of fucking around with chmod, users often give up and just make everything 777. If there were a user-friendly (but imperfect) method (like Windows' UAC), people would actually use it, and be far safer in the long run.
by Gareth321
3/19/2026 at 6:32:22 PM
Why would I want non-deterministic behavior here though?
If I want to max uptime, I write a tool to track/monitor. Then write a small agent (non-ai) that monitors those outputs and performs your remediation actions (reset something, clear something, etc, depends on service).
Do I want Claude re-writing and breaking subscription flow because it detected an issue? No.
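The deterministic monitor described above can be tiny. The sketch below is one possible shape; the health URL and restart command are placeholders for whatever the real service exposes, and `watch_once` is a hypothetical helper name, not part of any library.

```python
import subprocess
import time
import urllib.request

# Placeholder values: endpoint and remediation depend on your service.
HEALTH_URL = "http://localhost:8080/health"
RESTART_CMD = ["systemctl", "restart", "myservice"]


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def watch_once(check, remediate, state) -> bool:
    """One watchdog tick. Runs the fixed remediation action after
    state['max_failures'] consecutive failed checks, then resets the
    counter. Returns True if remediation fired. No LLM in the loop."""
    if check():
        state["failures"] = 0
        return False
    state["failures"] += 1
    if state["failures"] >= state["max_failures"]:
        remediate()
        state["failures"] = 0
        return True
    return False


def watch_forever(interval: float = 30.0) -> None:
    """Poll the health endpoint and restart the service when it wedges."""
    state = {"failures": 0, "max_failures": 3}
    while True:
        watch_once(
            lambda: is_healthy(HEALTH_URL),
            lambda: subprocess.run(RESTART_CMD, check=False),
            state,
        )
        time.sleep(interval)
```

The point of the design is exactly the one made above: every action the watchdog can take is enumerated in advance, so the worst case is a spurious restart, never a rewritten subscription flow.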
by dudeinhawaii
3/18/2026 at 9:40:57 PM
You could do that with, say, Claude Code too, with a much simpler setup.
OP's question was more around sandboxes though. To which, I would say that it's to limit unintended actions on the host machine.
by mkagenius
3/18/2026 at 10:49:20 PM
I want to be proven wrong, but every use case someone presents for OpenClaw is just a worse version of Claude Code, at least, so far.
by monkpit
3/18/2026 at 9:04:49 PM
Can you talk us through that a bit more? I suspect it would need more access than the permissions you mentioned to be more useful than a simple rules based automation.
by jbjbjbjb
3/18/2026 at 11:10:43 PM
So, what does having inference done by NVIDIA directly add?
by itishappy
3/19/2026 at 7:08:24 AM
There are plenty of uses for autonomous agents that don't require unlimited access to every sensitive resource imaginable.
Lock it in a box and have it chew on an unsolved math problem for eternity. Why does it need access to my emails for that?
by somekindaguy
3/19/2026 at 3:14:21 AM
I agree, but would like to go further: I won't run OpenClaw type systems because of security and privacy reasons. Although I dislike making tech giants even more powerful, it seems safer to choose your primary productivity platform (Google Workspace, Apple ecosystem, or Microsoft) and wait for them to implement hopefully safer OpenClaw type systems just for their ecosystems, taking advantage of centralized security, payment systems, access to platform cloud files, etc. Note: I use ProtonMail, prefer using local models, etc., so when I talk about going all-in on one huge platform I am not talking about anything I want to do in the foreseeable future.
by mark_l_watson
3/18/2026 at 6:45:29 PM
You don't need to connect your calendar, email, or anything else. I am having so much fun talking to it bouncing ideas and pushing code/markdown files to GitHub (totally separate account I created for OpenClaw). On the other hand I don't have a crazy life that everything needs to be in the calendar.
by rajeshrajappan
3/18/2026 at 6:56:57 PM
Agreed. I think the "simplifies running OpenClaw always-on assistants safely" bit is pretty misleading. I suppose it can wreak less havoc on your local file system but, as you point out, it's access to your account credentials (Slack, email, Amazon?, etc.) that is the real danger.
by cmiles74
3/19/2026 at 9:44:37 AM
Limiting the blast radius when a bomb goes off is still helpful even if you don't prevent the bomb from going off.
Now, you're right that sandboxing them is insufficient, and a lot of additional safeguards and thinking around it is necessary (and some of the risk can never be fully mitigated - whenever you grant authority to someone or something to act on your behalf, you inherently create risk and need to consider if you trust them).
by vidarh
3/18/2026 at 11:32:05 PM
Why aren't users of OpenClaw "just" giving it its own identity? Give it its own mailbox, calendar and other accounts. Like an assistant.
by worldsayshi
3/19/2026 at 1:49:00 AM
> Am I missing something? Why is everyone talking about sandboxes when it comes to OpenClaw
> And now, what, having inference done by Nvidia directly makes it better? Does their hardware prevent an AI from deleting all my emails?
Because other people, including Nvidia, are mainly focusing on a different aspect of data security, namely data confidentiality, while your main concern is data trustworthiness.
Don't conflate the two, otherwise it's difficult to appreciate their respective proposed solutions, for example NemoClaw.
by teleforce
3/19/2026 at 3:49:43 PM
Yeah so the way it works is, you make sure you're running it in docker, in a VM, on a VPS, and then you hook it up to your Gmail account ;)
But there are basically two options now. Yolo (and optionally limit the blast radius), or wait a few years and hope the situation improves.
by andai
3/18/2026 at 8:31:18 PM
Because it's so useful to me that I'm willing to accept the risk of it having access to the thing it needs for the benefit it provides. I'm not willing to accept the risk of it having access to things it doesn't need for no benefit.
Then again, I was wary of OpenClaw's unfettered access and made my own alternative (https://github.com/skorokithakis/stavrobot) with a focus on "all the access it needs, and no more".
by stavros
3/19/2026 at 1:06:07 AM
Agree, this feels like an XY problem.
The real issue is the level of access and capabilities you grant the agent, not where the inference runs or whether it's "sandboxed".
by cacao-cacao
3/18/2026 at 11:49:09 PM
We are in the middle of a gold rush. Nvidia makes the shovels.
by yibers
3/18/2026 at 6:57:15 PM
I'm putting my dog in his crate with all my important documents, but leaving my fine china tableware in the cupboard away from the dog.
by madeofpalk
3/18/2026 at 7:02:45 PM
and then tie a tiny string from the china to a thing inside the cage because it seemed handy at the time...
by saidnooneever
3/18/2026 at 9:29:28 PM
You start with one teacup in the crate and before you know it you're merging handle redesigns back to the entire fine china cupboard.
by angry_octet
3/18/2026 at 11:02:49 PM
He's never broken a teacup in the past!
by madeofpalk
3/18/2026 at 7:29:13 PM
Then one day forgetting to close the door of the crate…
by robotswantdata
3/18/2026 at 7:58:09 PM
But the dog is so used to the crate…
by linhns
3/19/2026 at 12:35:01 AM
Yeah, but at least the dog is going to eat your documents only, and not crap on your rug
by chrishare
3/18/2026 at 10:31:23 PM
You can't make money if people run things from their own computers. And some people don't know ssh.
by cat-turner
3/18/2026 at 6:37:37 PM
you put the dog in the crate with a COPY of your documents.
by empiricus
3/18/2026 at 9:15:24 PM
Your dog has now ordered a hitman to kill you, assume your identity, and live vicariously as a simple bartender at Cheers.
by cyanydeez
3/20/2026 at 5:37:56 AM
It's a dog eat dog world and I'm wearin a milk bone underwear
by Sateeshm
3/19/2026 at 2:36:39 AM
Sam!
by LostMyLogin
3/18/2026 at 9:56:14 PM
Step 2 -you put the dog in crate with a COW of your documents
by fooker
3/18/2026 at 7:02:43 PM
Make it two copies!
by thenthenthen
3/18/2026 at 10:09:47 PM
but you don't want the dog to send your documents to someone in Nigeria
by renecito
3/18/2026 at 8:31:02 PM
> being worried he might eat them, so you put the dog in a crate, together with the documents.
Maybe you don't want the dog to shit all over the place after eating said documents, so you put it in a crate.
by nurettin
3/19/2026 at 3:29:50 AM
Neither NVIDIA nor OpenClaw bros care about security at this point. NVIDIA of course wants to fuel the hype train and will proudly point to this, adding 0.1% security to a 2000% insecurity. Most bros won't even mind; they produce insecure crap at light speed and never look back. It's probably just there to trick silly non-tech corps into this junk.
by Yokohiii
3/18/2026 at 7:02:50 PM
[dead]
by rodchalski
3/18/2026 at 7:43:30 PM
[dead]
by sayYayToLife
3/18/2026 at 9:32:28 PM
Yes you're missing something. The crate is so your dog doesn't eat the documents you don't want it to mess with.
by jatora