4/3/2026 at 12:37:27 PM
I like the approach of running everything locally. I'm strongly of the opinion that the privacy angle for local models is going to keep getting stronger and more relevant. The amount of articles that come out about accidents happening because of people handing too much context to cloud models the more self reinforcing this will become.by convexly
4/3/2026 at 2:28:03 PM
It's only half of the solution though. If the models are trained in a closed way, they can prioritize values encoded during training even if that's not what you want (example: ask the open Chinese models about Tiananmen). It's not beyond imagining that these models would e.g. try to send your data to authorities or advertisers when their training says so, even if you run them locally.So the full solution would be models trained in an open verifiable way and running locally.
by cousin_it
4/5/2026 at 5:47:31 AM
All it would take is for one person to catch the model doing this and the reputation of the model and the company would be destroyed irrevocably.by KetoManx64
4/6/2026 at 12:10:15 PM
Many Chinese models are being caught doing this (it's also required by law in China) but there was not much hassle.Having said that Id easily trade some censorship about Chinese affairs I don't care about for the prudishness of American models. Though I generally get the abliterated versions of both.
by wolvoleo
4/3/2026 at 4:27:11 PM
The model is only generating tokens without touching the network at all, right? How would it send data away?by wrxd
4/3/2026 at 4:31:36 PM
Theoretically, by taking the opportunity to inject an exfiltration mechanism if you ask it to write code for youby procaryote
4/3/2026 at 5:04:16 PM
Lots of people I know run models in "yolo" mode or the equivalent as well, which means it could just invoke curl or telnet to exfiltrate data.by kg
4/3/2026 at 3:32:23 PM
Another angle is when you're passing untrusted content to the AI service, e.g. anything from using it to crawl websites to spam-detection on new forum user posts.You can trigger the the service's ToS violation or worse, get tipped off to law enforcement for something you didn't even write.
by hombre_fatal
4/3/2026 at 1:04:27 PM
local is best for privacy, but i personally think you don't need to go local.anthropic, google, openai etc, decided that their consumer ai plans would not be private. partly to collect training data, the other half to employ moderators to review user activity for safety.
we trust that human moderators will not review and flag our icloud docs, onedrive or gmail, or aggregate such documents into training data for llms. it became the norm that an llm is somehow not private. it became a norm that you can't opt out of training, even on paid plans (see meta and google); or if you can opt out of training, you can't opt out of moderation.
cloud models with a zero retention privacy policy are private enough for almost everyone, the subscriptions, google search, ai search engines are either 'buying' your digital life or covering themselves for legal reasons.
you can and should have private cloud services, and if legal agreement is not enough, cryptographic attestation is already used in compute, with AWS nitro enclaves and other providers.
by lukewarm707
4/3/2026 at 1:36:32 PM
> i personally think you don't need to go local.I personally think everyone should default to using local resources. Cloud resources should only be used for expansion and be relatively bursty rather than the default.
by inetknght
4/3/2026 at 1:46:45 PM
For about two years I experimented with writing local apps using local LLMs, but I often had to blend in a commercial web search API to make my little experiments useful.by mark_l_watson
4/3/2026 at 4:38:04 PM
[dead]by Whyachi
4/3/2026 at 1:45:18 PM
I pay $13/month for Proton’s Lumo+ private chat LLM that contains an excellent built-in web search tool. I use it for everything non-technical, even just simple searching for local businesses, etc.As an enthusiastic reader of books like Privacy is Power and Surveillance Capitalism, it feels good to have a private tool that is ready at hand.
by mark_l_watson
4/3/2026 at 2:33:40 PM
do you have any provider recommendations? I've experimented with this on runpod serverless, but I've been meaning to dig deeper before I feel comfortable with personal data.I saw a service named Phala, which claims to be actually no-knowledge to server side (I think). It was significantly more expensive, but interesting to see it's out there. My thought was escaping the data-collection-hungry consumer models was a big win.
by djl0
4/3/2026 at 8:03:25 PM
> anthropic, google, openai etc, decided that their consumer ai plans would not be private. partly to collect training data, the other half to employ moderators to review user activity for safety.That's two halves of "why", sure.
Another interesting half would be that those companies have US military officers on their boards, and LLMs are the ultimate voluntary data collection platform, even better trojan horses than smartphones.
Yet another "half" could be how much enterprise value might be found by datamining for a minute or two... may I suggest reading a couple of Martha Wells books.
by sebastiennight
4/3/2026 at 12:56:19 PM
That's the way things have to go. Business risk is too high having everything ran over exposed networks.by aswanson
4/3/2026 at 1:10:54 PM
what i say about this, is that an llm is just a big file, there is nothing 'not private' about it.if you are happy with off-prem then the llm is ok too, if you need on-prem this is when you will need local.
by lukewarm707
4/3/2026 at 1:38:28 PM
> an llm is just a big file, there is nothing 'not private' about it.The private thing is the prompt.
But also, a local LLM opens up the possibility of agentic workflows that don't have to touch the Internet.
by zahlman
4/3/2026 at 2:12:59 PM
The other thing, is encrypted inferencing a thing/service currently? I want to run my own models locally just because if I'm going to be chatting to it about my day to day life why send it to a server in plaintext.by ge96
4/3/2026 at 2:27:59 PM
encrypted inferencing, meaning homomorphic encryption: no, it's not solved.cryptographic confirmation of zero knowledge: yes.
the latter, based on trust in the hardware manufacturer and their root ca. so, encrypted if you trust intel/nvidia to sign it.
there are a few services, phala, tinfoil, near ai, redpill is an aggregator of those
by lukewarm707
4/3/2026 at 5:53:04 PM
> I like the approach of running everything locally. I'm strongly of the opinion that the privacy angle for local models is going to keep getting stronger and more relevant.In HN circles perhaps. Average Joes don’t care.
by Xenoamorphous
4/4/2026 at 2:41:17 AM
I bet if you clearly explained the benefits and tradeoffs, and then gave them the choice, Average Joes would care.by nozzlegear
4/4/2026 at 8:39:40 PM
They generally do care, but not enough to change what they do or to do without something they use, like social media.So many people I know say “I only use Signal to talk to you”, it’s like I’m the awkward one for not using Facebook.
by crimsontech