alt.hn

12/28/2025 at 3:02:07 PM

Designing Predictable LLM-Verifier Systems for Formal Method Guarantee

https://arxiv.org/abs/2512.02080

by PaulHoule

12/28/2025 at 8:12:17 PM

Maybe I'm wrong, but it looks like the authors did not actually have any LLMs write or verify any code for their experiments. Instead, they simulated their simplified Markov chain model directly and checked whether the theorem's predictions matched the empirical statistics. That amounts to a test not of their model, but of basic Markov chain theory.
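To make that concrete, here's a toy sketch (my own, not the authors' code) of what such an experiment reduces to: simulate an absorbing chain where each verification attempt succeeds with probability p, and check the empirical statistics against what elementary Markov chain theory already predicts (a geometric number of attempts with mean 1/p):

```python
import random

# Toy sketch (not the authors' code): collapse the retry loop of a
# generate -> verify -> (retry | verified) chain into a single
# per-attempt success probability p. Standard theory says the number
# of attempts before absorption is geometric with mean 1/p.
def attempts_until_verified(p, rng):
    attempts = 0
    while True:
        attempts += 1
        if rng.random() < p:  # verifier accepts this attempt
            return attempts

rng = random.Random(0)
p = 0.3
n = 100_000
mean = sum(attempts_until_verified(p, rng) for _ in range(n)) / n
print(mean)   # empirical mean, close to the theoretical 1/p
print(1 / p)  # theoretical mean ≈ 3.33
```

Matching those two numbers confirms the geometric-distribution formula, not anything about LLMs.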

Did I misread or miss something?

by brantmv

12/29/2025 at 12:03:17 AM

Also, the mathematical content here is pretty thin. Their main theorem has nothing to do with LLMs directly. It's a theorem about a five-state Markov chain, and the proof follows from standard Markov chain theory.

For those reasons, the grandiose name "LLM-Verifier Convergence Theorem" does not sit well with me.

by brantmv

12/29/2025 at 9:36:29 PM

You can check out this recent paper on scalable formal verification of LLMs, "BEAVER: An Efficient Deterministic LLM Verifier": https://arxiv.org/abs/2512.05439

by werf456

12/28/2025 at 8:17:50 PM

This line made me pause:

"We prove that for any non-zero stage success probability, the system reaches the verified state almost surely"

What's the point if it's still stochastic?

by mapontosevenths

12/28/2025 at 8:25:50 PM

"almost surely" means "happens with a probability 1", which in infinite set contexts doesn't mean that there aren't other outcomes, but that they have probability 0.

So like, imagine you had some finite set of integers and you were picking a random number from 0 to infinity: because the domain is infinite, any finite set gets probability 0, but that doesn't mean those outcomes don't exist.

https://en.wikipedia.org/wiki/Almost_surely
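A quick numerical illustration (my own sketch, not from the paper): for any per-attempt success probability p > 0, the probability of never having succeeded after n attempts is (1 - p)^n, which is positive for every finite n but tends to 0 as n grows. That vanishing limit is the "almost surely":

```python
p = 0.01  # even a tiny per-attempt success probability
for n in (10, 100, 1_000, 10_000):
    never = (1 - p) ** n  # probability of zero successes in n attempts
    print(n, never)
# the failure probability is positive for every finite n,
# but its limit as n -> infinity is 0: success is "almost sure"
```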

by jaggederest

12/28/2025 at 9:50:59 PM

Thank you. That makes this a pretty big deal, doesn't it?

The ability to deterministically identify that code eventually reaches a halting state implies that we can use these stochastic tools to generate deterministic outcomes reliably in the future, doesn't it?

by mapontosevenths

12/28/2025 at 10:16:27 PM

Well, reliably but still with a chance of failure, in the same way that a provably correct program can still run into real-world issues like being killed. But yes, I'd say "almost surely" is a pretty large jump from "more likely than not" (50% + 1), which is where I'd say LLM output generally lives these days.

by jaggederest

12/28/2025 at 10:41:26 PM

> a chance of failure

Well, technically, no chance of failure. The chance of failure is absolute zero. Not close to zero, absolute zero. There will be no failure if the assumptions of the model are correct.

The real catch here is in the assumptions.

How long do you have before you need to have a solution? An hour, a year, a century? Too bad, almost sure convergence only provides a guarantee if you wait an infinite amount of time.
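To put numbers on that: an "almost surely" guarantee only becomes a finite-horizon one once you pick a tolerance. A little sketch (my own, not from the paper) of how many attempts are needed before the residual failure probability drops below some delta:

```python
import math

# Sketch: "almost surely" gives no deadline. For a finite-horizon
# guarantee, require (1 - p)**n <= delta, i.e.
# n >= log(delta) / log(1 - p), where p is the per-attempt success
# probability and delta the residual failure probability you accept.
def attempts_for_confidence(p, delta):
    return math.ceil(math.log(delta) / math.log(1 - p))

print(attempts_for_confidence(0.5, 1e-6))   # strong per-attempt odds
print(attempts_for_confidence(0.01, 1e-6))  # weak per-attempt odds
```

The bound blows up as p shrinks, which is exactly the "how long can you wait" problem: the theorem promises convergence, not a budget.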

And then there's the question of the probability space you assume. (The sigma algebra.) Which things do you assume to have probability zero from the start and is that realistic?

by MiniMax42

12/28/2025 at 11:23:52 PM

> How long do you have before you need to have a solution? An hour, a year, a century? Too bad, almost sure convergence only provides a guarantee if you wait an infinite amount of time.

Thanks for this. I was actually just thinking "this can't actually work, it would mean P vs NP is solved." Of course, this explains why it doesn't mean that.

by mapontosevenths

12/28/2025 at 8:22:41 PM

Hash collisions are possible but can be provably so rare that they’re not a relevant concern.
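For example (my own back-of-the-envelope, not from the paper), the standard birthday approximation for an ideal 256-bit hash shows how "possible" and "relevant" come apart:

```python
import math

# Birthday approximation for an ideal hash with d = 2**bits outputs:
# P(collision among n items) ≈ 1 - exp(-n*(n-1) / (2*d)).
def collision_probability(n_items, bits):
    d = 2 ** bits
    return 1 - math.exp(-n_items * (n_items - 1) / (2 * d))

# Even a trillion hashed items give a collision probability so small
# it underflows double precision to 0.0.
print(collision_probability(10**12, 256))
```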

by IanCal

12/28/2025 at 5:16:12 PM

[flagged]

by edwardtay

12/28/2025 at 5:57:59 PM

>2. How do you handle the semantic gap? LLMs operate in natural language/fuzzy logic space, while formal methods require precise specifications. What's the translation layer like?

From what I understood, this validates output correctness, not that the output aligns with the user's goals, so there's still room for the LLM to get the user's goals wrong. It only validates the mathematical consistency between the output code and the formal specification (in this paper, within the ESBMC framework for C++ code).

So it's kind of tightly scoped in this case, but I think it points in the right direction for coding assistants, which usually get some language primitives wrong.

by XzAeRosho