2/28/2026 at 7:11:30 PM
My bona fides: I've written my own Mathematica clone at least twice, maybe three times. Each time I get it parsing expressions and doing basic math, up to basic calculus. Then I look up at the sheer cliff face in front of me and think better of the whole thing.

There is an architectural flaw in Woxi that will sink it hard. Looking through the codebase, things like polynomials are implemented in the Rust code, not in woxilang. This will kill you long term.
The right approach is to have a tiny core interpreter, maybe going to a JIT at some point if you can figure that out. Then implement all the functionality in woxilang itself. That means addition, subtraction, calculus, etc. are term-rewriting rules written in woxilang, not Rust code.
This frees you up in the interpreter. Any improvements you make there will immediately show up across the entire language. Woxilang is also a better language to implement symbolic math in than Rust.
It also means contributors only need to know one language: woxilang. No need to split between Rust and woxilang.
by Grosvenor
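To make the "tiny core, everything else as rules" suggestion concrete, here is a minimal sketch of such a core. It is in Python for brevity, and every name in it (`Var`, `match`, `rewrite`) is hypothetical, not taken from Woxi: rules live in a data table the way they would live in woxilang source, and the core only knows how to match patterns and substitute, with a single host-level escape hatch for machine arithmetic.

```python
# Toy "tiny core": the interpreter only matches and rewrites; the math
# itself is data in the rule table, as it would be in woxilang source.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str  # a pattern variable, like x_ in Mathematica

def match(pattern, term, bindings):
    """Return extended bindings if pattern matches term, else None."""
    if isinstance(pattern, Var):
        bound = bindings.get(pattern.name)
        if bound is None:
            return {**bindings, pattern.name: term}
        return bindings if bound == term else None
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(template, bindings):
    if isinstance(template, Var):
        return bindings[template.name]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite(term, rules):
    """Rewrite children first, then apply rules until a fixed point."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    changed = True
    while changed:
        changed = False
        for pattern, action in rules:
            b = match(pattern, term, {})
            if b is not None:
                new = action(b) if callable(action) else substitute(action, b)
                if new != term:
                    term, changed = new, True
                    break
    return term

x, y = Var("x"), Var("y")
rules = [
    # Plus[x, 0] -> x, expressed as data, like a woxilang rule would be
    (("Plus", x, 0), x),
    # numeric addition is the only host-level escape hatch
    (("Plus", x, y), lambda b: b["x"] + b["y"]
        if isinstance(b["x"], int) and isinstance(b["y"], int)
        else ("Plus", b["x"], b["y"])),
]

print(rewrite(("Plus", ("Plus", 1, 2), 0), rules))  # prints 3
```

Anything not covered by a numeric rule stays symbolic: `("Plus", "a", 0)` rewrites to `"a"`, and `("Plus", "a", "b")` is left as a term for later rules.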
2/28/2026 at 9:08:52 PM
I noticed the same thing, having also written an interpreter for the Wolfram Language that focused on the core rule/rewriting/pattern language. At its heart it's more or less a Lisp-like language where the core can be quite small, with a lot of the functionality built via pattern matching and rewriting atop that. Aside from the sheer scale of WL, I ended up setting aside my experiments replicating it when I did performance comparisons and realized how challenging it would be to match WL not just in functionality but in performance.

Woxi reminds me of some experiments I did to see how far vibe coding could get me on similar math and symbolic reasoning tools. It seems like unless you explicitly and very actively force a design with a small core, the models tend toward building out a lot of complex, hard-coded logic that ultimately is hard to tune, maintain, or reason about in terms of correctness.
Interesting exercise with woxi in terms of what vibe coding can produce. Not sure about the WL implementation though.
(For context, I write compiler/interpreter tools for a living - have been for a couple decades)
by porcoda
3/1/2026 at 12:15:44 AM
I've personally had luck correcting the complex one-off logic the agents produce with the right prompting. And when I say prompting, I just mean code review feedback. All of this is engineering management: I review code, I point out architectural flaws if they matter, and I use judgment to determine whether they matter. Code debt is a choice, and you can afford it in some situations but not others. We don't nit over style because we have a linter. Better documentation results in better contribution quality. Etc.
Agent coordination? Gastown? All I hear is organizational design and cybernetics
by conradev
3/1/2026 at 8:27:18 AM
Sorry, perhaps a dumb question: isn't Mathematica, and most of the Wolfram innovation, about a smart way of applying rule-based inference? I think of it as parametrized Prolog rules with a large library. So term rewriting all the way to the end; correct me if I'm wrong.
Where does the mini-core+JIT come into this?
Thanks for taking time to answer.
by larodi
3/1/2026 at 12:11:16 PM
The interpreter / JIT is the one actually applying the rules.
by Hendrikto
3/1/2026 at 5:09:48 PM
So it is the tokenizer and rule expansion that get JIT'd, right? I mean, there's no secondary process running on top of the rule expansion?
by larodi
2/28/2026 at 7:32:04 PM
Hm, I thought about this a little and actually came to exactly the opposite conclusion: implement as much as possible in Rust to get the fastest code possible. Do you have any more insight into why this would be impossible / unsustainable?
by adius
2/28/2026 at 7:41:14 PM
You have two distinct products: 1) an interpreter, and 2) a math language. Don't write your math in some funny imperative computer language.

Keep the interpreter's surface area as small as possible. Do some work to make sure you can accelerate numerics, and JIT/compile functions down to something as close to native as you can.
Wolfram and Taliesin Beynon have both said Wolfram was working internally to get a JIT working in the interpreter loop. Keep the core small, and do that now while it's easy.
Also, it's just easier to write in Mathematica. It's probably 10x smaller than the Rust code:

f[x_Integer] := 13*x;
f::usage = "Multiplies x by 13, in case you needed an easy function for that.";
EDIT: Another important thing to note is that the people who really deeply know specific subjects in math won't be the best, or even good, Rust programmers. So letting them program in woxilang gives them an opportunity to contribute that they wouldn't have had otherwise.
by Grosvenor
3/1/2026 at 8:51:18 PM
I'm not a PL expert, but isn't building a decent JIT a massive undertaking? I guess you're saying that the JIT itself would be what makes a project like this worth using in the first place?
by theowaway213456
3/1/2026 at 9:55:04 PM
It's like most things in software: if you constrain the problem enough, focus on the problems you actually have, and make some smart choices early on, it can be a very modest lift, on the order of a week or two for a 90% solution. On the other end of the spectrum, it's a lifetime of work for a team of hundreds...
by Arelius
2/28/2026 at 10:25:16 PM
Symbolic manipulation?
by layer8
2/28/2026 at 8:42:28 PM
Implementing addition in woxilang itself?? This has gotta be terribly slow. Am I missing something?
by nextaccountic
2/28/2026 at 11:04:44 PM
Mathematica has symbolic and infinite-precision addition, so you can't automatically take advantage of obvious compiled code.
by evanb
3/1/2026 at 12:50:49 AM
What? Arbitrary-precision arithmetic implemented in a compiled language will be faster than the alternative. This is no great mystery. The same is true of essentially all low-level symbolic or numerical math algorithms. You need to get to a fairly high level before this stops being true.
by sfpotter
3/1/2026 at 5:06:43 AM
Of course. The point is that whether you interpret a call to arbitrary_precision_add or compile the call doesn't matter much.
by creato
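The point above can be illustrated with a small sketch (Python, hypothetical names, not Woxi code): the interpreter merely dispatches, through a table, to a "compiled" arbitrary-precision add. Python's `int` is itself bignum code written in C, standing in here for a Rust-level primitive; on large operands the addition dominates and the dispatch is noise.

```python
# Interpreted dispatch to compiled bignum addition: the work is in the
# add itself, not in how the call was reached.
BUILTINS = {
    "Plus": lambda a, b: a + b,  # the "compiled kernel" the interpreter calls
}

def eval_expr(expr):
    """Evaluate nested ("Plus", a, b) trees by table lookup."""
    if isinstance(expr, tuple):
        head, *args = expr
        return BUILTINS[head](*(eval_expr(a) for a in args))
    return expr

# ~33,000-bit operands: the bignum add dominates, the dict lookup is noise
big = 10 ** 10000
assert eval_expr(("Plus", big, big)) == 2 * big
```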
2/28/2026 at 10:37:39 PM
You are missing the term "JIT", which would enable a host of runtime optimizations, including generating calls to some static piece of native code that performs addition.
by tadfisher
3/1/2026 at 3:02:41 AM
But surely you can have a "fast path" that is implemented in the host language, right?
by nextaccountic
3/1/2026 at 8:48:01 PM
I am confused for the same reason you are. Isn't the Rust code essentially just "pre-jitted", i.e. hand-optimized, code? You are going to want to hand-optimize some functions in cases where the JIT cannot do a good job in its current form. You probably also want a benchmarking system that compares the jitted code to the hand-optimized code, to prove to yourself that the hand-optimized code is still worth keeping after any automatic JIT improvements you make. And if you don't want the runtime overhead of the JIT, you can pre-jit certain functions and distribute them as part of the binary's executable code.
by theowaway213456
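The benchmarking gate described above can be sketched in a few lines (Python, all names hypothetical): time each candidate implementation and keep whichever wins, so a hand-optimized builtin survives only while it actually beats the generic path. Here both candidates are trivially the same operation, standing in for real jitted and hand-tuned code.

```python
# Toy benchmark gate: pick the fastest of several interchangeable
# implementations; stand-ins only, not real jitted code.
import timeit

def generic_add(a, b):         # stands in for the jitted/interpreted path
    return a + b

def hand_optimized_add(a, b):  # stands in for a hand-tuned builtin
    return a + b

def pick_implementation(candidates, args, repeat=3, number=10_000):
    """Return the (name, fn) pair with the lowest measured runtime."""
    timings = {
        name: min(timeit.repeat(lambda: fn(*args), repeat=repeat, number=number))
        for name, fn in candidates.items()
    }
    best = min(timings, key=timings.get)
    return best, candidates[best]

name, fn = pick_implementation(
    {"generic": generic_add, "hand_optimized": hand_optimized_add},
    args=(10 ** 100, 10 ** 100),
)
assert fn(2, 3) == 5  # whichever wins, it computes the same result
```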
2/28/2026 at 7:32:13 PM
Switching out to an interpreted language has got to be anathema to a rewrite-it-in-Rust project.
by 0x3f