alt.hn

3/15/2026 at 3:40:13 PM

Show HN: GDSL – 800 line kernel: Lisp subset in 500, C subset in 1300

https://firthemouse.github.io/

by FirTheMouse

3/15/2026 at 7:32:23 PM

GDSL is written in C++ making use of the STL, templates, and lambdas, so it's 2600 lines of such C++ source code. There is no self-hosting: neither the Lisp compiler nor the C compiler can compile itself. No operating system is implemented; the word "kernel" in the title means something else.

FYI, here is a 700-line subset-of-C compiler which can compile itself: https://github.com/valdanylchuk/xcc700 . The linker and the libc are not included.

by ptspts

3/15/2026 at 7:41:53 PM

Kernel as used in the title means shared processing core, not necessarily an OS kernel, which probably deserves clarification. As for Lisp and C, those are meant to be subset demonstrations, not yet full compilers, hence the phrasing. The compiler you linked is also a subset of C, with only what's needed for a minimal compiler.

I’ve been working towards making it self-hosting eventually, which is why there’s the start of code for generics in GDSL-C, though that’s a significant project which will take significant time. On top of that, a lot of people would prefer to write handlers in C++ rather than learn a new language, so it may stay a C++ project for a while.

That being said, I was also hesitant to post this until I had it self-hosting, but I decided that being able to implement even the demonstrated subsets was interesting enough to be worth sharing. This is a project in progress, and I hope to expand it massively in the months to come.

by FirTheMouse

3/15/2026 at 4:52:29 PM

Looks interesting. The next step may be to show some little fun examples built with it.

Here's my similar project from a few years ago, in case you want to compare notes:

https://github.com/akkartik/mu

https://akkartik.name/akkartik-convivial-20200607.pdf

by akkartik

3/15/2026 at 5:47:55 PM

I would love to compare notes; reading through Mu has given me a lot to think about, and I may be writing about it soon.

I see in Mu a mirror of my own aspirations; GDSL is something I intend to take down to the metal once I have access to a computer where I can reach deeper than my Mac allows. The path from here to an OS, though, is by no means a straight one.

Mu is what I would call a MIX layer, the real substance of a process which turns one line of code into the next. Arguably, a MIX is the core of what makes a program a program, and the work of Mu, like Lisp and others, is to elevate it high enough that it becomes the whole interface.

For a deeply curious mind, the MIX is the only thread worth pulling, because comprehending things at their most fundamental level is fuel enough. Unfortunately, the majority of people are not nearly so curious, thus the dominance of languages like Python.

So what makes Python so much more appealing than direct control? Pure TAST. Sugar for the hungry, and grammar that lets you say everything while knowing nothing. Somewhere between the sugar and the substance lives the real heart of what makes a tool, and that’s what I’ve been picking at from both angles.

I would be curious to see how these could be unified: a Python TAST, a Rust or Haskell DRE (for type systems and borrow checking), and a Mu MIX underneath. Let the user be lured in by the promise of ease, look under the hood to see the entire compiler fitting in just under ten thousand lines, and burrow their way down to the MIX and fundamental understanding.

by FirTheMouse

3/15/2026 at 9:39:46 PM

Good to see you kicking around, and in a thread about things small enough to think about! I've not seen any blog posts from you in a while, Kartik, but I come back to your Lua musings from time to time.

by getpokedagain

3/15/2026 at 9:52:36 PM

You made my day with your kind words! I don't know if we've spoken before, but feel free to hit me up offline if you'd like to chat more about stuff like this.

by akkartik

3/16/2026 at 11:00:08 AM

If a working compiler can be written in ~1000 lines, why are production compilers like GCC or LLVM millions of lines?

by swaminarayan

3/16/2026 at 11:30:51 AM

They have dozens of passes for semantic analysis and optimization, often configurable via flags, and they target dozens of different architectures.

by sparkie

3/15/2026 at 8:14:43 PM

Where does it generate assembly / LLVM IR / machine code? I poked around and didn’t immediately spot this, or even which it targets.

by QuadmasterXLII

3/15/2026 at 8:20:57 PM

That's the MIX. As mentioned in the README, I haven't set up a system for these yet, just the scaffolding, since a lot of the work in this space is actually in the MIX layer. I'll probably post about more backend systems once I get started on making them; this project has only been in the works for five weeks so far. I would invite anyone interested to contribute a MIX module, as that's where I have the least expertise.

by FirTheMouse

3/15/2026 at 8:39:55 PM

Right, but like which one does it do in the demo?

by QuadmasterXLII

3/15/2026 at 8:41:32 PM

The demos run as interpreted trees through the x stage: no code emission yet, just direct execution of the AST. That's why they're demos rather than compilers. The scaffolding for native emission is there, but empty.

by FirTheMouse

3/16/2026 at 1:54:05 AM

[flagged]

by stainlu

3/16/2026 at 2:01:23 AM

My thoughts exactly, and it's what I intend to test in the coming weeks. I say 'subset' because the point isn't "here's a full C compiler"; it's "here's two wildly different frontends handled by one system". I tried to get started on the backend this Friday but got frustrated with the restrictions of running on a Mac; I wanted to go down lower and work from the metal for this, so I'm waiting to get access to a device I can start programming an OS on. If I find success, I'll definitely post about it. In the meantime, I am more than open to anyone interested in helping on the backend; it's the biggest gap in my abilities and a new domain for me.

by FirTheMouse

3/15/2026 at 6:17:13 PM

Really interesting writeup. What stood out to me most was the shift from the earlier node execution path to the streamed path. The benchmark gap between execute_r_nodes and execute_stream is huge, and the latter getting relatively close to the handwritten C++ baseline is the part I keep thinking about.

After building this, where do you think most compiler complexity actually comes from? My impression from your post is that a lot of the “millions of lines” are not from the core syntax-to-execution path itself, but from language surface area, tooling, diagnostics, optimization passes, and long-tail ecosystem baggage.

by kaihong_deng

3/16/2026 at 2:54:37 AM

The streaming (essentially a JIT) was actually from the early architecture three weeks ago, though I'm glad you read even the first post. Performance hasn't been a target on the current architecture yet, but the core hasn't changed: I could build a MIX that streams, and it would reach the same benchmark.

And I love the question. A lot of the complexity comes from the management of seams, places where we have to go from one representation of information to another. The tooling, diagnostics, and optimization passes are as large as they are precisely because of these seams. Consider a liveness pass in LLVM, which spends a lot of time reconstructing information thrown away by the compiler so it could emit SSA. In GDSL, a liveness pass is simply handlers in the e_stage: in the example I snuck into GDSL-C, print statements stamp liveness tokens onto their children via qualifiers, and at assignment nodes, those without such tokens are killed. I can do the logic in a straightforward manner because we have all the information to work with: no seams, no SSA to derive scopes from, which is why a subset of it fits in 80 lines instead of 80,000.

by FirTheMouse