alt.hn

2/23/2026 at 6:24:34 AM

Code is not being read anymore, right?

https://github.com/makefunstuff/el-stupido

by makefunstuff

2/23/2026 at 6:24:34 AM

If we accept the claim that code produced in industry is no longer meant for reading but is merely the product of an LLM, then why do we need human-readable code at all?

I have decided to play around with clankers to produce the most stupid PoC and find out how far I can go with "vibing" out some "context-compact", unreadable programming language. So my motivation for this post is to discuss: why do we need to produce human-readable code at all? If the semantics of modern programming languages are optimised for humans, and society (at least in all these hype posts) is being pushed toward generated code no one reads, why waste compute on something made for humans to read? What is the point of that waste?

by makefunstuff

2/23/2026 at 6:41:37 AM

You've got a good point.

Why not have the LLM write ISA assembly directly? We still grade based on results / theory proofs, and, for example, certifications for cryptographic government use are based on the binaries, not the sources.

Edited:

Why not go further and print the chips and PCBs directly via 3D printing from LLM instructions?

Edited (joke):

Why not go the furthest and turn the entire earth into a computer and grey goo?

by iFire

2/23/2026 at 6:56:38 AM

> Why not have the LLM write ISA assembly directly?

The honest, earnest answer to that is it's a bad idea because it's not portable. Unfortunately for Intel, they don't have the dominance they once did, so you have to pick between ARM, x86, or something more exotic, and then be tied to that specific ISA. It's an interesting thought, though.

by fragmede

2/23/2026 at 6:59:30 AM

We have different IR backends to make it portable, btw. There are LLVM, QBE, etc. In the repo I linked, I am goofing around with LLVM exactly to figure that out.
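To make the IR point concrete, here is roughly what a target-independent LLVM IR function looks like (a minimal illustrative sketch, not taken from the repo):

```llvm
; add two 32-bit integers; no ISA-specific details appear anywhere
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b
  ret i32 %sum
}
```

The same text can then be lowered to any supported target by the backend, e.g. `llc -mtriple=x86_64-linux-gnu add.ll` or `llc -mtriple=armv7-linux-gnueabihf add.ll`, which is exactly the portability that raw ISA assembly gives up.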

by makefunstuff

2/23/2026 at 6:52:57 AM

I think you got the point. I am under the impression that we are going nowhere with this approach of treating token generation as the solution for everything. For some reason it feels like brute force to me, but the stakes right now are too high, so stepping back is not an option for the bigger players.

by makefunstuff

2/23/2026 at 8:16:21 AM

That might be good. Not seeing something as the answer to everything often means we are starting to pass the honeymoon stage.

by ksaj

2/23/2026 at 6:44:17 AM

Maaaybe, just mayyybe, there are not enough examples of valid assembly in the training data? And spitting back what was fed in by scraping OSS repos is an easy way for glorified autocomplete machinery to impress?

by makefunstuff

2/23/2026 at 6:54:26 AM

Huh?

I thought the latest advance in computing (spring 2025, last year) was self-play / reinforcement learning. Like, we ran out of training data a few years ago.

https://github.com/OpenPipe/ART

Reinforcement learning has the large language model devise puzzles that it then solves, scored via LLM-as-judge.

The definition of LLM-as-judge: your LLM generates 8-12 trajectories and a different LLM judges the results. For the problem of ISA assembly generation, I'd use an oracle instead, like actual execution on a Windows or Linux operating system.

The winning entries are used to train the large language model.
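The loop described above (sample several trajectories, score them with a judge, keep the winners for training) can be sketched roughly like this. All function names here are hypothetical stubs; a real system would call a policy model and a judge model (or an execution oracle) instead:

```python
import random

def generate_trajectories(prompt: str, n: int = 8) -> list[str]:
    # Stub for the policy LLM: produce n candidate solutions.
    # A real system would sample n completions from the model.
    return [f"{prompt}-candidate-{i}" for i in range(n)]

def judge(trajectory: str) -> float:
    # Stub for the judge LLM or oracle. For ISA assembly this could be:
    # assemble the output, run it on a real OS, and score the result.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Score every trajectory and keep the winner; in an RL setup the
    # winning entries become training data for the next round.
    trajectories = generate_trajectories(prompt, n)
    scored = sorted(trajectories, key=judge, reverse=True)
    return scored[0]
```

The key design point is that the judge is decoupled from the generator, so it can be swapped for a hard oracle (actual execution) whenever one exists.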

by iFire