alt.hn

2/1/2026 at 1:50:15 PM

Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code

https://github.com/zuckermanai/zuckerman

by ddaniel10

2/1/2026 at 6:24:50 PM

> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.

While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?

by nullbio

2/1/2026 at 6:30:48 PM

100%. This is why I'm so reluctant to give any access to my OpenClaw. The skills hub is poisoned.

by adriancooney

2/1/2026 at 6:43:57 PM

Great point. I wrote it as important note and ill take it into account.

by ddaniel10

2/1/2026 at 3:41:23 PM

DIY agent harnesses are the new "note taking"/"knowledge management"/"productivity tool"

by 4b11b4

2/1/2026 at 4:03:47 PM

DIYWA - do it yourself with agent ;) hopefully zuckerman as the start point

by ddaniel10

2/1/2026 at 5:41:22 PM

I started working on something similar but for family stuff. I stopped before hitting self editing because, well I was a little bit afraid of becoming over reliant on a tool like this or becoming more obsessed with building it than actually solving a real problem in my life. AI is tricky. Sometimes we think we need something when in fact life might be better off simpler.

The code for anyone interested. Wrote it with exe.dev's coding agent which is a wrapper on Claude Opus 4.5

https://github.com/asim/aslam

by asim

2/1/2026 at 7:52:26 PM

I would change the name of the project. Why would I want to run something that keeps remind me of that guy

by noncoml

2/1/2026 at 7:58:46 PM

Does this do anything to resist prompt injection? It seems to me that structured exchange between an orchestrator and its single-tool-using agents would go a long way. And at the very least introduces a clear point to interrogate the payload.

But I could be wrong. Maybe someone reading knows more about this subject?

by scotth

2/2/2026 at 2:45:22 AM

The logo is slightly creepy

by with

2/1/2026 at 3:42:15 PM

Sounds cool, but it also sounds like you need to spend big $$ on API calls to make this work.

by amelius

2/1/2026 at 4:02:15 PM

I'm building this in the hope that AI will be cheap one day. For now, I'll add many optimizations

by ddaniel10

2/1/2026 at 4:27:29 PM

AI is cheap right now. At some point the AI companies must turn to generate profit

by croes

2/1/2026 at 7:41:07 PM

Anthropic has stated that their inference process is cash positive. It would be very surprising if this wasn't the case for everyone.

It's certainly an open question whether the providers can recoup the investments being made with growth alone, but it's not out of the question.

by WalterSear

2/1/2026 at 9:04:27 PM

Problem is the models need constant training or they become outdated. That the less expensive part generates profit is nice but doesn’t help if you look at the complete picture. Hardware also needs replacement

by croes

2/1/2026 at 5:47:29 PM

Have you tested this with a local model? I'm going to try this with GLM 4.7

by Zetaphor

2/1/2026 at 6:20:03 PM

What would be the best model to try something like this on a 5800XT with 8 GB RAM?

by mcny

2/1/2026 at 4:06:09 PM

Yes, it certainly makes sense if you have the budget for it.

Could you share what it costs to run this? That could convince people to try it out.

by amelius

2/1/2026 at 4:16:40 PM

I mean, you can just say Hi to it, and it will cost nothing. It only adds code and features if you ask it to

by ddaniel10

2/1/2026 at 10:36:43 PM

This looks interesting, but I'm stuck on step 4 of the web setup: where do I get agents to start with? Shouldn't there be a default one that can help me get other ones?

by neomindryan

2/2/2026 at 5:09:56 AM

Hi, can you please contact me at dd@zuckerman.ai

by ddaniel10

2/1/2026 at 6:24:04 PM

|The agent can rewrite its own configuration and code.

I am very illiterate when it comes to Llms/AI but Why does nobody write this in Lisp???

Isn't it supposed to be the language primarily created for AI???

by joonate

2/1/2026 at 6:35:39 PM

> Isn't it supposed to be the language primarily created for AI???

In 1990 maybe

by lm28469

2/1/2026 at 6:31:53 PM

Nah, it’s pretty unrelated to the current wave of AI.

by tines

2/1/2026 at 7:57:07 PM

If hot reloading is a goal I would target Erlang or another BEAM language over a Lisp.

by plagiarist

2/2/2026 at 3:40:10 AM

Why? Many Lisp systems and Common Lisp in particular have great hot reloading capability, from redefining functions to UPDATE-INSTANCE-FOR-REDEFINED-CLASS to update the states.

by kscarlet

2/1/2026 at 5:23:11 PM

Terrible name, kind of a mid idea when you think about it (Self improving AI is literally what everyone's first thought is when building an AI), but still I like it.

by falloutx

2/1/2026 at 6:49:59 PM

Thanks for the feedback. Are you going to forget this name though?

by ddaniel10

2/1/2026 at 7:54:37 PM

I don’t know if I will forget it, but it’s enough to keep me away from considering using it

by noncoml

2/2/2026 at 5:28:07 PM

No, you are right. Its more memorable. With my accent, I can pronounce it as Suckerman

by falloutx

2/1/2026 at 8:51:46 PM

I think it's a genius name and is playful on the meme of a pale Zuckerberg being a robot.

by hereme888

2/2/2026 at 12:22:01 PM

No, in the sense that I wouldn't forget an AI agent called "Epsteinman" or "Prolapseman" either.

by deaux

2/1/2026 at 7:43:49 PM

Someone needs to send this to Spike Feresten.

by dboreham

2/1/2026 at 2:54:40 PM

there are hardcoded elements in the repo like:

/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log

by ekinertac

2/1/2026 at 3:21:59 PM

thanks

by ddaniel10

2/1/2026 at 6:56:51 PM

I am surprised that no one did this in a LISP yet.

by lmf4lol

2/1/2026 at 8:13:02 PM

i like the idea is possible to run in a docker container?

by grigio

2/1/2026 at 10:49:53 PM

could've made it in PHP ;) to be zuck-like

by dzonga

2/2/2026 at 10:10:07 AM

[dead]

by pillbitsHQ

2/1/2026 at 7:35:14 PM

[dead]

by pillbitsHQ

2/1/2026 at 9:10:34 PM

[dead]

by pillbitsHQ

2/1/2026 at 2:49:20 PM

[dead]

by pillbitsHQ

2/1/2026 at 5:59:35 PM

[dead]

by pillbitsHQ

2/1/2026 at 5:05:50 PM

I will not download or use something which constantly reminds me of this weird dude suckerberg who did a lot of damage to society with facebook

by aaaalone

2/1/2026 at 8:51:38 PM

Ok, but please don't post unsubstantive comments to Hacker News.

by dang

2/1/2026 at 5:22:49 PM

That's really good to know

by philipallstar

2/1/2026 at 6:48:35 PM

Haha it's your personal agent, let him handle the stuff you don't like. But soon, right now its not fully ready

by ddaniel10

2/1/2026 at 5:21:29 PM

I was hoping it was a Philip Roth reference but I was disappointed when I opened the page.

by zeroonetwothree

2/1/2026 at 2:01:55 PM

[flagged]

by iisweetheartii

2/1/2026 at 5:21:37 PM

AI generated response on a post about AI. Getting tired of this timeline.

by junon

2/1/2026 at 5:42:06 PM

Not only that, but the OP created that account solely to hype their own product lol. There’s another bot downthread doing the same thing. Minimally it feels like dang should not let new accounts post for 30 days or something without permission.

by ohyoutravel

2/1/2026 at 6:40:43 PM

That might reduce botting for about 30 days, people will just tee up an endless supply of parked ids that will then spin up to post after the lockout expires.

by yborg

2/1/2026 at 7:12:00 PM

Why not ban both accounts? Seems like a fine way to keep SNR high to me.

by anarticle

2/1/2026 at 7:16:04 PM

if you ban an account, they know to make a new one

if you shadowban, they are none the wiser and the effect to SNR is better

by verdverm

2/1/2026 at 6:27:36 PM

Yep. It's very obvious, and lazy.

by nullbio