3/2/2026 at 5:11:41 PM
“888 KiB Assistant” — but the assistant itself is a multi-terabyte, rental-only model stored in some mysterious data center.
by tehsauce
3/2/2026 at 11:53:11 PM
The whole point is that this fits on an ESP32, which has Wi-Fi. We're not quite at the point where it makes sense to run the whole thing locally - if you do try it, it will need a fan, be loud, etc. For my part, I installed Nanoclaw on my Arch-derived OS (I love Arch!), and it worked fine until, the next day, some update decided to revert the power-management settings, and now my glorious assistant is dead.
There's something to be said for a barebones OS. No bullshit, no updates.
Also, playing with hardware watchdog timers and GPIOs and DACs can be so much fun.
by seertaak
3/2/2026 at 5:22:18 PM
I'm getting "serverless" flashbacks.
by amelius
3/3/2026 at 12:39:33 PM
modelless
by pgt
3/3/2026 at 12:52:42 PM
This thread reminds me of how Java's GUI toolkit, written in Java itself, was called "lightweight", when in fact it did not feel lightweight at all on the hardware of the time.
by stuaxo
3/2/2026 at 6:42:56 PM
My model is at home... just 16 GB. Still a lot, but just FYI.
by kristianpaul
3/2/2026 at 6:28:40 PM
It seems to support connecting to your own LLM on the same LAN
by Rebelgecko
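The thread doesn't show what that configuration looks like, but agents that support a LAN-local model typically just take the base URL of an OpenAI-compatible endpoint, which ollama exposes on port 11434 under `/v1`. A sketch of what such a config might look like; every field name here is an illustrative assumption, not documented Nanoclaw/OpenClaw syntax:

```json
{
  "llm": {
    "base_url": "http://192.168.1.42:11434/v1",
    "model": "qwen3.5:35b",
    "api_key": "unused-for-local"
  }
}
```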
3/2/2026 at 8:14:33 PM
The point is the agent is still the LLM. No LLM, no agent.
by croes
3/3/2026 at 3:58:06 AM
LLMs are not agents. LLMs are language models that simply respond to a text prompt with a textual response. Agents are middleware that take input from the user and then use LLMs to drive tools.
by otterley
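The split can be sketched in a few lines of Python. The LLM call is a stub and the tool is invented for illustration; the point is that the loop, the tool dispatch, and the stopping condition all live in the agent, while the model only ever maps text to text:

```python
# Minimal agent-loop sketch. fake_llm stands in for a real model API call;
# the tool here is made up for illustration.
import json

def fake_llm(prompt: str) -> str:
    # A real agent would call an LLM endpoint here. This stub asks for
    # one tool call, then finishes once it sees the tool's result.
    if "TOOL_RESULT" in prompt:
        return json.dumps({"action": "finish", "answer": "done"})
    return json.dumps({"action": "tool", "name": "read_file", "arg": "notes.txt"})

TOOLS = {
    "read_file": lambda arg: f"(contents of {arg})",
}

def run_agent(task: str) -> str:
    prompt = task
    for _ in range(10):  # cap the loop so a confused model can't spin forever
        decision = json.loads(fake_llm(prompt))
        if decision["action"] == "finish":
            return decision["answer"]
        # The agent, not the LLM, actually executes the tool...
        result = TOOLS[decision["name"]](decision["arg"])
        # ...and feeds the result back into the next prompt.
        prompt = f"{task}\nTOOL_RESULT: {result}"
    return "gave up"

print(run_agent("summarize notes.txt"))
```

Swap `fake_llm` for a real API call and `TOOLS` for real file/shell/network helpers and this is the basic shape of the middleware being described: the model proposes, the agent executes.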
3/3/2026 at 6:13:47 AM
They are just a to-do list. The real work is done by the LLMs.
by croes
3/3/2026 at 6:26:40 AM
An LLM has no motive power, like a script without a cast, or a program without a computer to execute it.
by otterley
3/2/2026 at 7:58:05 PM
I tried connecting OpenClaw to ollama with a V100 running qwen3.5:35b but it was really, really, really slow (despite ollama itself feeling fairly fast). These "claw" agents really multiply the tokens used by an obscenely huge factor for the same request.
by dheera
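A back-of-envelope model of why the multiplier gets so huge (all numbers here are made-up assumptions, not measurements of any real agent): each agent step resends the system prompt, the tool schemas, and the entire conversation so far, so prompt tokens grow roughly quadratically with step count:

```python
# Rough token model for an agent loop. The constants are illustrative
# assumptions, not measurements of OpenClaw or any other agent.
SYSTEM_AND_TOOLS = 4000   # system prompt + tool schemas, resent every step
PER_STEP_OUTPUT = 300     # tokens emitted per step, re-read on later steps

def total_prompt_tokens(steps: int) -> int:
    total = 0
    history = 0
    for _ in range(steps):
        total += SYSTEM_AND_TOOLS + history  # everything so far is resent
        history += PER_STEP_OUTPUT           # this step's output joins history
    return total

print(total_prompt_tokens(1))   # a single chat-style request
print(total_prompt_tokens(20))  # a 20-step agent run
```

Under these toy numbers, a 20-step agent run costs over 30x the prompt tokens of a single request - and a local card that feels fast on one prompt will feel glacial on an agent.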
3/2/2026 at 10:49:23 PM
i recently decided to get into this ocean boiling game too, the 32GB V100 seems like a pretty good VRAM/$. if i may ask, do you make any special accommodations for cooling? i've never dealt with a passively cooled card before and i'm curious whether my workstation fans (HP Z840) will be sufficient. i'm going to try 2 cards at first but i think i might be able to squeeze a third in there.
by jcgrillo
3/2/2026 at 11:21:56 PM
No. I have a Titan V CEO edition, which is basically a 32GB V100 but with full active fan cooling, which I'm finding works just fine.
by dheera
3/2/2026 at 11:42:37 PM
Oh very cool. Some folks are printing shrouds for dual 40mm fans, so I'll probably try that if the stock case fans don't do it.
by jcgrillo