3/23/2026 at 8:25:24 PM
Is there any chance you might support a local-first version of this in the future? I've been interested in apps like this, and Littlebird in particular seems very attractive. But I'm loath to essentially send screenshots/summaries/etc. of all my activity to a cloud solution, regardless of any claims you make about encryption. Any mistake you make could be catastrophic for me, which thoroughly dominates any upside to using your product. It's a non-starter.
by divmain
3/23/2026 at 8:53:54 PM
We will for sure, but the issue is that without local LLMs, there's no way to offer a truly fully local version. And the local LLMs are dumb. So basically, you would still need to trust the LLM providers. Totally understand that this is a deal breaker for some people, but for many users, the theoretical risk is worth it. We do regular security audits, encrypt in transit and at rest, pen tests, etc.
by grena1re
3/23/2026 at 10:14:26 PM
Um, dismissing the tech as "the local LLMs are dumb" seems shortsighted. I can run some pretty impressive models on my local Mac, but it has >64GB of RAM and an M3 Max. Given the privacy benefit, I wouldn't dismiss them so fast. I'd suggest picking one or two models that your prompts will work well with and treating it as "we let you run with local models too, if you have a computer capable of that." This will (a) quiet the people who complain about everything and (b) get more people to try the cloud model knowing they could move to a local model for real usage.
by throwaway-blaze
3/23/2026 at 11:42:04 PM
I'm not dismissing them. I'm saying they're not there yet. As a startup, we have to prioritize. We can't do everything simultaneously, and a dual architecture would be a substantial engineering effort as well as a potential source of more security holes. And the number of people who want to run local LLMs is very small. I use local LLMs when I'm on flights, and that's the basis of my assessment: they're all benchmark-maxed and incapable of reliable tool calling or consistency over meaningfully long conversations.
by grena1re
3/23/2026 at 9:21:57 PM
Hey, I'm Dmitriy from Littlebird. For your use case, would you want the underlying LLM to be local as well, so that your data doesn't get sent to the "big dog" LLM providers? It's an important consideration because it wouldn't be nearly as smart then - though I totally understand if that's the only way it could work for you. We are SOC 2, GDPR, and CCPA compliant, btw.
by reasonmethis
3/23/2026 at 10:23:34 PM
I'm not the OP, but I came here to voice the same concern. I would love to use something like this. I also signed up for rewind.ai and Limitless and pre-ordered the pendant, but ultimately I cancelled it out of privacy concerns. I wonder if it could use local storage and let you provide your own OpenRouter endpoint? That way it could be a local model or your own deployment of GPT/Claude in Azure/Bedrock/Vertex, etc., where you can control retention policies.
Basically, I want to know that you guys don't have access to view my stuff. I get that that limits your ability to improve the product and support issues, but when I'm sending everything it really starts to matter. Just thought I'd share what held me back from immediately signing up despite really wanting to use a product like this!
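Very roughly, I'm imagining something like the sketch below - assuming the app just talked to any OpenAI-compatible endpoint the user controls. The LITTLEBIRD_* variable names and the Ollama default are placeholders I made up, not anything Littlebird actually offers:

    # Rough sketch, not Littlebird's actual API: point the app at any
    # OpenAI-compatible endpoint the user controls (OpenRouter, an Azure or
    # Bedrock deployment behind a proxy, or a local server like Ollama).
    import os
    from openai import OpenAI

    # The user supplies the endpoint and key; nothing is hard-wired to one vendor.
    client = OpenAI(
        base_url=os.environ.get("LITTLEBIRD_BASE_URL", "http://localhost:11434/v1"),  # Ollama's local default
        api_key=os.environ.get("LITTLEBIRD_API_KEY", "not-needed-for-local"),
    )

    resp = client.chat.completions.create(
        model=os.environ.get("LITTLEBIRD_MODEL", "llama3.1"),
        messages=[{"role": "user", "content": "Summarize today's activity."}],
    )
    print(resp.choices[0].message.content)

Any OpenAI-compatible endpoint would slot in the same way, whether it's OpenRouter, an Azure OpenAI deployment, or a fully local server.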
by timwis
3/23/2026 at 11:25:15 PM
Same here. While this looks like it would be beneficial, as soon as I saw that all data gets sent to the cloud, it was a blocker. I have another app that tracks all my usage (time) at an application/window level, and I purchased it because it saves locally.
by ismail