alt.hn

3/30/2026 at 12:31:52 PM

Intuiting Pratt Parsing

https://louis.co.nz/2026/03/26/pratt-parsing.html

by signa11

4/1/2026 at 10:58:26 AM

Love Pratt parsing! Not a compiler guy, but I've spent way too many hours reflecting on parsing. I remember trying to get through the Dragon book so many times, reading all about formal grammars etc., until I landed on recursive descent parsing + Pratt for expressions. It's a super simple technique, and for me it's sufficient. I'm sure it doesn't cover all cases, but for toy languages it feels like we can usually do everything with 2-token lookahead.

Not to step on anyone's toes, I just don't feel that formal grammar theory is that important in practice. :^)

by logdahl

4/1/2026 at 12:29:31 PM

The Dragon book is not very good, to be honest.

It was probably decent when all you had was something like Pascal and you wanted to write a C compiler.

Parsing and compiling and interpreting etc are all much more at home in functional languages. Much easier to understand there. And once you do, then you can translate back into imperative.

For parsing: by default you should be using parser combinators.

by eru

4/1/2026 at 2:19:31 PM

Is there a production compiler out there that doesn't use recursive descent, preferably constructed from combinators? Table-driven parsers seem now to be a "tell" of an old compiler or a hobby project.

by pklausler

4/1/2026 at 2:50:26 PM

Oh, I was talking much more about how you can first learn how to write a compiler. I wasn't talking about how you write a production industry-strength compiler.

Btw, I mentioned parser combinators: those are basically just a front-end, similar to regular expressions. The implementation can be all kinds of things, e.g. recursive descent, a table, or backtracking. (Even finite automata, if your combinators are suitably restricted.)

by eru

4/1/2026 at 2:58:15 PM

I used a small custom parser combinator library to parse Fortran from raw characters (since tokenization is so context-dependent), and it's worked well.

by pklausler

4/1/2026 at 3:26:50 PM

The thing about LR parsers is that since they parse bottom-up, you have no idea what larger syntactic structure is being built, so error recovery is ugly, and giving the user a sensible error message is a fool’s errand.

In the end, all the hard work in a compiler is in the back-end optimization phases. Put your mental energy there.

by dbcurtis

4/1/2026 at 2:47:09 PM

Some people appreciate that an LR/LALR parser generator can prove non-ambiguity and linear time parse-ability of a grammar. A couple of examples are the creator of the Oil shell, and one of the guys responsible for Rust.

It does make me wonder, though, why grammars have to be so complicated that such high-powered tools are needed. Isn't the gist of LR/LALR that the states of an automaton that can parse CFGs can be serialised to strings, and the set of those strings forms a regular language? Once you have that, many desirable "infinitary" properties of a parsing automaton can be automatically checked in finite time. LR and LALR fall out of this, in some way.

by ogogmad

4/1/2026 at 2:55:01 PM

Production compilers must have robust error recovery and great error messages, and those are pretty straightforward in recursive descent, even if ad hoc.

by pklausler

4/1/2026 at 3:20:57 PM

I was just going into the second quarter of compiler design when the dragon book came out. My copy was still literally “hot off the press” — still warm from the ink-baking ovens. It was worlds better than anything else available at the time.

by dbcurtis

4/1/2026 at 11:57:18 PM

Oh, I don't doubt that in the bad old days the Dragon book was a step forward. It's just pretty bad compared to what you can get today.

by eru

4/1/2026 at 6:33:43 PM

The Dragon book wasn't good for me either, but I'd disagree about using parser combinators. The problem I see with the Dragon book is that it starts using concepts (phases of compilation) before it introduces and motivates them in the abstract. I can see how people who already know these concepts can look at the Dragon book and say "oh, that's a good treatment of this", so perhaps it's a good reference, but it's problematic for a class and terrible to pick up and try to read as a standalone text (which I did back at Berkeley in the 80s).

As far as I can tell, parser combinators are just one of several approaches that promise to let you "write a compiler without understanding abstract languages", but all these methods wind up being libraries that are far more complicated than gp's "recursive descent + Pratt parsing", which is easy once you understand the idea of an abstract language.

by joe_the_user

4/2/2026 at 12:06:46 AM

I wrote a bunch of parser combinator libraries myself, and used plenty more.

They are basically the same idea as regular expressions, but more flexible: you have a bunch of combining operations for your regular languages to build bigger languages. But that doesn't tell you how it's implemented in the backend. It could be recursive descent, could be automata, could be backtracking, could be anything.

If you want to write your first compiler, I'd even go so far as to suggest using something with a Lisp syntax as your source language, explicitly so you can minimise the parsing aspect.

Parsing is a lot of fun by itself, but it doesn't actually have much to do with the core of what makes compilers interesting and challenging. It's almost an independent pursuit, and very useful outside of writing compilers, too.

> As far as I can tell, parser combinators are just one of several approaches that promise to let you "write a compiler without understanding abstract languages", but all these methods wind up being libraries that are far more complicated than gp's "recursive descent + Pratt parsing", which is easy once you understand the idea of an abstract language.

Where do you suspect the complexity here? You can write a really simple toy parser combinator library, if your implementation language is at least as capable as Rust or even Python (with Haskell and OCaml probably being the easiest). If you are using an off-the-shelf, industrial-strength, production-grade parser combinator library: of course that's complicated under the hood. That's the price the authors willingly pay for great error handling, performance, etc.

Most people writing their first compiler would be better off starting the project from the point where they already have some tree or DAG representation. There are plenty of (more!) interesting challenges left. Going from a stream of bits to the parsed syntax tree is something they can learn about later (or not at all), without missing much.

by eru

4/1/2026 at 11:23:11 AM

Until you need to do more than all-or-nothing parsing :) See tree-sitter, for example, or any other efficient incremental-parsing LSP implementation.

by gignico

4/1/2026 at 2:53:24 PM

It is easily possible to parse at > 1MM lines per second with a well designed grammar and handwritten parser. If I'm editing a file with 100k+ lines, I likely have much bigger problems than the need for incremental parsing.

by norir

4/1/2026 at 3:52:22 PM

It's not just speed - incremental parsing allows for better error recovery. In practice, this means that your editor can highlight the code as-you-type, even though what you're typing has broken the parse tree (especially the code after your edit point).

by fwip

4/1/2026 at 4:56:10 PM

I am a compiler guy, and I completely agree. Parsing is not that hard and not that important. Recursive descent + Pratt expressions is almost always the practical choice.

by marssaxman

4/1/2026 at 11:42:40 AM

It's not just for toy languages. Most big compilers use recursive descent parsing.

by randomNumber7

4/1/2026 at 4:57:21 PM

Language design benefits from parser generators that can point out ambiguities and verify a language is easy to parse.

by ebiederm

4/1/2026 at 6:34:50 PM

It does not follow that a generated parser would make sense in production code.

by marssaxman

4/1/2026 at 11:57:39 AM

> Not to step on anyone's toes, I just don't feel that formal grammar theory is that important in practice. :^)

Exactly this! A thousand times this!

by signa11

4/1/2026 at 12:04:23 PM

I think even the theory of Regular Languages is somewhat overdone: You can get the essence of what NFAs are without really needing NFAs. You can get O(n) string matching without formally implementing NFAs, or using any other formal model like regex-derivatives. In fact, thinking in terms of NFAs makes it harder to see how to implement negation (or "complement" if you prefer to call it that) efficiently. It's still only linear time!

NFA/DFA/derivative models are mostly unnecessary because, ultimately, REG is just DSPACE(O(1)). That's it. Thinking any other way is confusing the map with the territory. Furthermore, REG is extremely robust, because we also have REG = DSPACE(o(log log n)) = NSPACE(o(log log n)) = 1-DSPACE(o(log n)). For help with the notation, see here: https://en.wikipedia.org/wiki/DSPACE

by ogogmad

4/2/2026 at 3:29:15 AM

Professional compiler writer here. All you really need to use is a recursive descent parser. Very easy to understand. Very easy to implement. While also being very powerful.

by deterministic

4/1/2026 at 6:49:48 PM

> Not to step on anyone's toes, I just don't feel that formal grammar theory is that important in practice. :^)

Well, it depends how formal you're talking about. I have to say that the approach you mention, "recursive descent parsing + Pratt for expressions", actually requires you to understand what a formal language is: that it's a "thing" that can't (or shouldn't) be an object or a data structure, but exists abstractly before any objects created by the program.

Moreover, the standard way of producing a recursive descent parser is to begin with your language in Chomsky normal form or some human-understandable format, then convert it to Greibach normal form; that specification converts readily into your series of recursive functions. So all the language transforms are useful to know (though you can skip steps if you have a good intuition for your language).

by joe_the_user

4/1/2026 at 11:38:41 AM

Quick other one: To parse infix expressions, every time you see "x·y | (z | w)", find the operator of least binding power: In my example, I've given "|" less binding power than "·". Anyway, this visually breaks the expression into two halves: "x·y" and "(z | w)". Recursively parse those two subexpressions. Essentially, that's it.

The symbols "·" and "|" don't mean anything - I've chosen them to be visually intuitive: The "|" is supposed to look like a physical divider. Also, bracketed expressions "(...)" or "{...}" should be parsed first.

Wikipedia mentions that a variant of this got used in FORTRAN I. You could also speed up my naive O(n^2) approach by using Cartesian trees, which you can build using something suspiciously resembling precedence climbing.
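The "split at the weakest operator" idea above can be sketched in a few lines. This is a toy Python sketch; the token list, the operator symbols, and the precedence values are illustrative assumptions, not something from the comment:

```python
# Naive O(n^2) parse: split the token list at the top-level operator
# with the least binding power, then recurse on the two halves.

PREC = {'|': 1, '.': 2}  # '|' binds less tightly than '.'

def parse(tokens):
    # Base case: a single atom.
    if len(tokens) == 1:
        return tokens[0]
    # Find the top-level operator with the least binding power,
    # skipping anything inside parentheses. Taking the rightmost
    # such operator makes the operators left-associative.
    depth, best, best_i = 0, None, None
    for i, t in enumerate(tokens):
        if t == '(':
            depth += 1
        elif t == ')':
            depth -= 1
        elif depth == 0 and t in PREC:
            if best is None or PREC[t] <= PREC[best]:
                best, best_i = t, i
    if best is None:
        # Fully parenthesised: strip the outer brackets and retry.
        return parse(tokens[1:-1])
    left = parse(tokens[:best_i])
    right = parse(tokens[best_i + 1:])
    return (best, left, right)

# parse(['x', '.', 'y', '|', '(', 'z', '|', 'w', ')'])
# -> ('|', ('.', 'x', 'y'), ('|', 'z', 'w'))
```

The example "x·y | (z | w)" splits at the top-level "|", exactly as described above.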

by ogogmad

4/1/2026 at 5:20:55 PM

An even easier approach is to give all infix operators the same precedence and force the programmer to group subexpressions.

by duped

4/1/2026 at 6:35:27 PM

You can always write Lisp, but most people read code better when it doesn't have that many (((()))))))

by randomNumber7

4/1/2026 at 11:43:35 AM

I can recommend that anyone read Pratt's original paper. It's written in a very cool and badass style.

https://dl.acm.org/doi/epdf/10.1145/512927.512931

by randomNumber7

4/1/2026 at 1:57:29 PM

> Its written in a very cool and badass style.

Out of curiosity, what do you mean by this? Do you mean you like the prose, or the typesetting, or...?

by DonaldPShimoda

4/1/2026 at 6:32:20 PM

He's a bit provocative towards the traditional academia that favors BNF and parser generators. It's been a while since I read it, but I remember e.g. a rhetorical question (not cited exactly, but to this effect): "Has anyone ever learned a programming language by reading the BNF?"

The style is very good and fun to read for someone who also reads other more boring papers.

by randomNumber7

4/1/2026 at 4:14:59 PM

I cannot say what this person means, and I have never read this paper before, but just the fourth paragraph of the paper has piqued my interest and I will read it all.

by fmbb

4/1/2026 at 1:52:14 PM

For some reason I struggled to get my head around Pratt parsing. Then I read an offhand comment on Reddit that said to start with a recursive descent parser and add table parsing to that. Once I did that it all clicked.
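For anyone at the same stage, the "recursive descent plus a table" core can be boiled down to a toy sketch. This is illustrative Python of my own (atoms only, no parentheses, made-up binding powers), not anything from the comment:

```python
# Minimal Pratt-style loop: a recursive function that keeps extending
# its left operand while the next operator binds more tightly than
# the caller's minimum binding power.

BP = {'+': 10, '*': 20}  # binding power per infix operator

def parse_expr(tokens, min_bp=0):
    left = tokens.pop(0)  # assume the first token is an atom
    while tokens and tokens[0] in BP and BP[tokens[0]] > min_bp:
        op = tokens.pop(0)
        # Recurse with the operator's binding power: anything that
        # binds tighter ends up inside the right-hand subtree.
        right = parse_expr(tokens, BP[op])
        left = (op, left, right)
    return left

# parse_expr(['1', '+', '2', '*', '3']) -> ('+', '1', ('*', '2', '3'))
```

The strict `>` in the loop condition gives left associativity; using `>=` for an operator would make it right-associative instead.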

by tonyedgecombe

4/1/2026 at 4:51:22 PM

Regarding "I’ve read many articles on the same topic but never found it presented this way": it reminds me a lot of a description I saw in a video of Jonathan Blow talking about precedence and parsing with Casey Muratori.

The video is 3 hours long though, and I'm not sure the text he shows is available.

At this point he's talking about left-leaning vs right-leaning trees, after having already talked about one of them: https://youtu.be/fIPO4G42wYE?t=2256&si=aanthLGe-q8ntZez

by caspianm

4/1/2026 at 7:59:59 PM

> But of course, people (for the most part) don’t write programs as trees.

Such a beautiful reference to Lisp.

by Abbit

4/1/2026 at 12:17:52 PM

> I’ve read many articles on the same topic but never found it presented this way - hopefully N + 1 is of help to someone.

Can confirm; yes it was helpful! I've never thought seriously about parsing and I've read occasionally (casually) about Pratt parsing, but this is the first time it seemed like an intuitive idea I'll remember.

(Then I confused myself by following some references and remembering the term "precedence climbing" and reading e.g. https://www.engr.mun.ca/~theo/Misc/pratt_parsing.htm by the person who coined that term, but nevermind — the original post here has still given me an idea I think I'll remember.)

by svat

4/1/2026 at 11:24:38 AM

An even simpler way, imo, is to use explicit functions instead of a precedence table; then the code has pretty much the same structure as the EBNF.

Need to parse * before +? Start at parse_add, have it call parse_mul for its left and right sides, and so on.

  parse_mul() {
    left = parse_literal()
    while (is_mul_token()) { // left associative
      right = parse_literal()
      left = make_mul_node(left, right)
    }
    return left
  }

  parse_add() {
    left = parse_mul()
    while (is_add_token()) { // left associative
      right = parse_mul()
      left = make_add_node(left, right)
    }
    return left
  }
Then just add more functions as you climb up the precedence levels.
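For reference, the same cascade can be written as runnable code. This is a Python sketch of my own; the closure-based setup and the token handling are illustrative choices, not from the comment:

```python
# Precedence climbing via explicit per-level functions: parse_add calls
# parse_mul for its operands, parse_mul calls parse_literal, and so on.

def make_parser(tokens):
    toks = list(tokens)

    def peek():
        return toks[0] if toks else None

    def parse_literal():
        return toks.pop(0)

    def parse_mul():
        left = parse_literal()
        while peek() == '*':     # left associative
            toks.pop(0)          # consume '*'
            left = ('*', left, parse_literal())
        return left

    def parse_add():
        left = parse_mul()
        while peek() == '+':     # left associative
            toks.pop(0)          # consume '+'
            left = ('+', left, parse_mul())
        return left

    return parse_add

# make_parser(['1', '+', '2', '*', '3'])() -> ('+', '1', ('*', '2', '3'))
```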

by priceishere

4/1/2026 at 10:33:31 PM

SystemVerilog has an operator precedence table with 16 levels.

https://www.academia.edu/figures/3550818/table-2-operator-pr...

Writing a recursive descent parser for this would require 16 functions, and you'd spend most of your time cycling through them to reach the one that applies in a given situation.

I've written straightforward expression parsers as you suggest, but when I had to do it for SystemVerilog, I used a classic shunting-yard parser. You see an operator, compare its precedence against the stack, and you know immediately what to do, versus possibly drilling down 16 levels of function calls to figure it out.

Another advantage of table-driven expression parsers is that you can bail out in error cases without needing to unwind countless levels of stack.
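The "compare against the stack" discipline can be sketched as follows. This is an illustrative Python variant of my own that builds an AST with two stacks rather than emitting postfix, with made-up precedences and no parentheses for brevity:

```python
# Shunting-yard core: operands on one stack, operators on another.
# Each incoming operator first reduces everything on the stack that
# binds at least as tightly, then pushes itself.

PREC = {'+': 1, '*': 2}

def shunting_yard(tokens):
    operands, operators = [], []

    def reduce_top():
        op = operators.pop()
        right = operands.pop()
        left = operands.pop()
        operands.append((op, left, right))

    for t in tokens:
        if t in PREC:
            # '>=' rather than '>' gives left associativity.
            while operators and PREC[operators[-1]] >= PREC[t]:
                reduce_top()
            operators.append(t)
        else:
            operands.append(t)
    while operators:
        reduce_top()
    return operands[0]

# shunting_yard(['1', '+', '2', '*', '3']) -> ('+', '1', ('*', '2', '3'))
```

Because each token is pushed and popped at most once, the loop is a single flat pass, with no deep chain of function calls per precedence level.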

by tasty_freeze

4/1/2026 at 11:37:12 AM

You lose versatility, though: then you can't add user-defined operators, which are pretty easy with a Pratt parser.

by kryptiskt

4/1/2026 at 3:39:43 PM

You can have user-defined operators with plain old recursive descent.

Consider if you had functions called parse_user_ops_precedence_1, parse_user_ops_precedence_2, etc. These would simply take a table of user-defined operators as an argument (or reference some shared/global state), and participate in the same recursive callstack as all your other parsing functions.
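The shared-table idea can be sketched compactly. This Python sketch is my own illustration (the table contents and the `+ 1` trick for left associativity are assumptions), collapsing the per-precedence functions into one climbing function that consults a user-extensible table:

```python
# A precedence table that user code can extend at runtime; the parser
# consults it on every step, so newly added operators just work.

user_ops = {'+': 1, '*': 2}

def parse(tokens, min_prec=1):
    left = tokens.pop(0)  # assume the first token is an atom
    while tokens and user_ops.get(tokens[0], 0) >= min_prec:
        op = tokens.pop(0)
        # '+ 1' makes same-precedence operators left-associative.
        right = parse(tokens, user_ops[op] + 1)
        left = (op, left, right)
    return left

user_ops['@'] = 3  # a user-defined operator, registered later
# parse(['a', '+', 'b', '@', 'c']) -> ('+', 'a', ('@', 'b', 'c'))
```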

by wavemode

4/1/2026 at 4:03:17 PM

With a couple of function pointers you can climb precedence with just functions:

  parse_left_to_right(next_level, is_token) {
    left = next_level()
    while (is_token()) {
      operator = consume_token()
      right = next_level()
      left = operate(left, right, operator)
    }
    ret left;
  }

  p0() { ret lex digit or ident; };
  p1() { ret parse_left_to_right(p0, is_mul); };
  p2() { ret parse_left_to_right(p1, is_add); };
... and so on for all operators

by glouwbug

4/1/2026 at 2:01:22 PM

The latest implementation of Picol has a Tcl-alike [expr] implemented in 40 lines of code that uses Pratt-style parsing: https://github.com/antirez/picol/blob/main/picol.c#L490

by antirez

4/1/2026 at 5:38:42 PM

Love Picol, and love this! When I first revisited Tcl, I was a bit miffed about needing [expr] but now really appreciate both it and the normal Tcl syntax.

by incanus77

4/1/2026 at 12:05:30 PM

You can either use the stack in an intuitive way, or you can build the tree directly in a somewhat less intuitive way without recursion. Essentially either depth-first or breadth-first. I don’t see how it matters much anymore with stacks that grow automatically, but it’s good to understand.

by hyperhello

4/1/2026 at 1:21:43 PM

Also if you're looking into this area you'll find there is another algorithm called "Precedence climbing", which is really the same thing with some insignificant differences in how precedence is encoded.

There's also the "shunting yard" algorithm, which is basically the iterative version of these algorithms (instead of recursive). It is usually presented with insufficient error checking, so it allows invalid input, but there's actually no reason you have to do it like that.

by IshKebab

4/1/2026 at 4:18:51 PM

I will never forget the amusing attention I got from the professor when this topic was covered during my undergrad. It's only happened once, sadly, but this is only seconded by the time I was assisting a junior engineer with a related problem and was able to say "Oh, that's just a Pratt Parser. Let me show you."

by dpratt