alt.hn

3/14/2026 at 8:35:42 AM

Autoresearch Hub

http://autoresearchhub.com/

by EvgeniyZh

3/15/2026 at 10:18:11 PM

I was exploring how to parallelize autoresearch workers. The idea is to have a trusted pool of workers who can verify contributions from a much larger untrusted pool. It's backed by a bare git repo and SQLite with a simple Go server. It's a bit like a blockchain in that blocks = commits, proof of work = finding a lower val_bpb commit, and reward = place on the leaderboard. I wouldn't push the analogy too far. It's something I'm experimenting with but I haven't released it yet (except briefly) because it's not sufficiently simple/canonical. The core problem is how to neatly and in a general way organize individual autoresearch threads into swarms, inspired by SETI@Home, Folding@Home, etc.
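(A toy Python sketch of the trusted/untrusted pool idea described above, not the actual implementation — `Hub`, `submit`, and `evaluate` are invented names. The key point it illustrates: the untrusted worker's claimed score is ignored, and a trusted worker re-runs the eval before anything lands on the leaderboard.)

```python
class Hub:
    """Toy model: untrusted workers propose patches, trusted workers verify."""

    def __init__(self):
        self.best_bpb = float("inf")  # current best validation bits-per-byte
        self.leaderboard = []         # (worker, verified val_bpb), best first

    def submit(self, worker, patch, claimed_bpb, evaluate):
        """An untrusted worker submits a patch with a claimed val_bpb.

        `claimed_bpb` is deliberately unused: a trusted worker re-runs the
        evaluation itself, so a lying worker gains nothing.
        """
        verified_bpb = evaluate(patch)     # trusted re-evaluation
        if verified_bpb >= self.best_bpb:  # "proof of work": must beat the best
            return False
        self.best_bpb = verified_bpb
        self.leaderboard.append((worker, verified_bpb))
        self.leaderboard.sort(key=lambda entry: entry[1])
        return True
```

In the real system the accepted "blocks" would be git commits and the leaderboard would live in SQLite; here both are just in-memory stand-ins.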

by karpathy

3/16/2026 at 12:23:34 AM

Yeah, you can sink a lot of time into a system like that[1]. I spent years simplifying the custom graph database underneath it all and only recently started building it into tools that an agent can actually call[2]. But so far all the groundwork has actually paid off; the picture basically paints itself.

I found a wiki to be a surprisingly powerful tool for an agent to have. And building a bunch of CLI tools that all interconnect on the same knowledge graph substrate has also had a nice compounding effect. (The agent turns themselves are actually stored in the same system, but I haven't gotten around to using that for cool self-referential meta-reasoning capabilities.)

1: https://github.com/triblespace/triblespace-rs

2: https://github.com/triblespace/playground/tree/main/facultie...

by j-pb

3/15/2026 at 11:14:03 PM

Have you thought about ways to include the sessions / reasoning traces from agents in this storage layer? I imagine a RAG system on top of that, plus LLM publications, could help future agents figure out how to get around problems that previous runs ran into.

It could serve as an annealing step: trying a different earlier branch of reasoning if new information increases the value of that path.
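(A toy sketch of that annealing idea — all names invented, nothing here is from the actual project: keep candidate reasoning branches with an estimated value, revise values as new information arrives, and resume whichever branch currently scores highest.)

```python
class BranchPool:
    """Toy model: revisit earlier reasoning branches when their value rises."""

    def __init__(self):
        self.values = {}  # branch id -> current estimated value

    def score(self, branch, value):
        # Add a branch, or revise its value in light of new information
        # (e.g. a retrieved trace showing a past run succeeded from here).
        self.values[branch] = value

    def next_branch(self):
        # Resume the branch with the highest current estimated value.
        return max(self.values, key=self.values.get)
```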

by gravypod

3/15/2026 at 11:17:06 PM

No HTTPS in 2026. A false origin that suggests a massive improvement. Leaderboard doesn't work. Instructions are "repeatedly download this code and execute it on your machine". No way to see the actual changes being made.

We can do better than this as an industry, or at least we used to be better at this. Where's the taste?

by danpalmer

3/16/2026 at 5:37:49 PM

Don’t mean to pick on you specifically, but this comment feels like a pretty good distillation of a certain mindset you often see in Googlers:

* we know better

* we judge everything against internal big-company standards

* we speak as if we’re setting the bar for “the industry”

Someone is openly pushing on a frontier, sharing rough experiments, and educating a huge number of people in the process — and the response is: “we can do better than this as an industry.”

Can you? When is Google launching something like this?

by nthngtshr

3/16/2026 at 3:54:23 PM

People are going to eat this up just because Karpathy is involved. This space is easily misled by hero worship.

by emp17344

3/16/2026 at 3:59:35 PM

I mean do you really need that stuff for this? I’m just gonna fetch it from a sandbox anyway.

by peyton

3/15/2026 at 7:05:56 PM

I'm not the OP, though it seems the context for this is (via @esotericpigeon):

https://github.com/karpathy/autoresearch/pull/92

by cjbarber

3/15/2026 at 7:25:23 PM

Who knows. The site has no HTTPS, and I don't know what it is training or why.

by motbus3

3/15/2026 at 8:41:09 PM

I'm curious what a "stripped down version" of GitHub can offer in terms of functionality that GitHub does not. Isn't it simpler to have the agents register as GitHub repos, since the infrastructure is already in place?

by picardo

3/15/2026 at 10:43:28 PM

So, if I understand correctly, this is about finding the optimal (or at least a better) GPT architecture?

Anyway, "1980 experiments, 6 improvements" makes me wonder if this is better than a random search or some simple heuristic.

by GTP

3/16/2026 at 2:44:38 AM

I tried to copy the instructions and paste them into Note to see what they said, but I could not. Either the clipboard was empty or something prevented Note from recognizing them as plain text.

by sinuhe69

3/16/2026 at 1:04:57 PM

It worked for me, try again. But it is still not fully clear to me what this is supposed to do, nor whether it is doing better than a random search. It looks like it is about optimizing a GPT architecture.

by GTP

3/16/2026 at 4:58:15 PM

What is this?

by big-chungus4

3/15/2026 at 7:53:28 PM

You guys are really gonna copy and paste a prompt into your Claude CLI, which may or may not be set up with sandbox/tool permissions?

by m3kw9

3/15/2026 at 8:49:20 PM

It’s like the old days when you opened up Kazaa and downloaded smooth_criminal_alien_ant_farm.mp3.exe

by mnky9800n

3/15/2026 at 10:44:47 PM

You can (and should) read the prompt first. Just paste it into a text editor.

by GTP

3/16/2026 at 5:00:18 PM

Yolo mode activated.

by robotswantdata