3/28/2025 at 7:06:09 PM
We tried this a few years back with a heavily loaded Go server using SQLite (actual C SQLite, not modernc.org/sqlite) and something made performance tank... likely some difference between glibc and musl. We never had time to hunt it down. We talked to the Zig folks about sponsoring them to work on it, to make performance comparable to glibc-built SQLite, but they were busy at the time.
by bradfitz
3/28/2025 at 8:38:50 PM
It was the intrinsics. Some time ago Zig "imported" compiler-rt and wrote pure-Zig variants that lack machine-level optimizations, a lot of vectorization, and so on. SQLite performance is highly dependent on them.
by raggi
3/29/2025 at 12:36:04 AM
I'm curious: do you have any specific examples that stress this? I'd like to see how my Wasm build fares. I assume worse, but it'd be interesting to know by how much.
I've also heard that it's allocator performance under fragmentation, lock contention on the allocator, the mutex implementation itself, etc. Without something to measure, it's hard to know.
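Even a benchmark skeleton like the one below would be enough to start measuring. The schema and query are made-up placeholders; it goes through database/sql with my driver:

    package bench

    import (
        "database/sql"
        "testing"

        _ "github.com/ncruces/go-sqlite3/driver" // registers the "sqlite3" driver
        _ "github.com/ncruces/go-sqlite3/embed"  // embeds the Wasm SQLite build
    )

    func BenchmarkQuery(b *testing.B) {
        db, err := sql.Open("sqlite3", "file:bench.db?_pragma=journal_mode(wal)")
        if err != nil {
            b.Fatal(err)
        }
        defer db.Close()

        // Placeholder schema: replace with whatever stresses the slow path.
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
            b.Fatal(err)
        }

        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            var n int
            // Placeholder query: substitute the one that's actually slow.
            if err := db.QueryRow(`SELECT count(*) FROM t WHERE body LIKE '%x%'`).Scan(&n); err != nil {
                b.Fatal(err)
            }
        }
    }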
by ncruces
3/29/2025 at 12:54:08 AM
I can probably help, but what are you looking for? There are three areas of subject matter here (Go, Zig, SQLite).
Our affected deployment was an unusually large vertical-scale SQLite deployment. The part causing primary concern was hitting >48 fully active cores while struggling to maintain 16k IOPS: a large read workload you can think of as slowly slurping a whole set of tables, plus a concurrent write workload updating mostly existing rows. Lots of JSON extension usage.
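Not our actual code, but the shape of it is roughly this (schema and names made up, driver swapped for an off-the-shelf one):

    package main

    import (
        "database/sql"
        "log"
        "sync"

        _ "github.com/ncruces/go-sqlite3/driver" // stand-in; any SQLite driver works
        _ "github.com/ncruces/go-sqlite3/embed"
    )

    func main() {
        db, err := sql.Open("sqlite3", "file:load.db?_pragma=journal_mode(wal)")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
            log.Fatal(err)
        }

        var wg sync.WaitGroup
        // Many concurrent readers slurping whole tables through the JSON functions...
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                rows, err := db.Query(`SELECT id, json_extract(body, '$.name') FROM t`)
                if err != nil {
                    log.Print(err)
                    return
                }
                defer rows.Close()
                for rows.Next() {
                    var id int
                    var name sql.NullString
                    if err := rows.Scan(&id, &name); err != nil {
                        log.Print(err)
                        return
                    }
                }
            }()
        }
        // ...plus a concurrent writer updating mostly existing rows.
        wg.Add(1)
        go func() {
            defer wg.Done()
            if _, err := db.Exec(`UPDATE t SET body = json_set(body, '$.seen', 1) WHERE id = ?`, 42); err != nil {
                log.Print(err)
            }
        }()
        wg.Wait()
    }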
Something important to add here: Go code is largely, possibly entirely, unaffected by Zig's intrinsics, as it has its own symbols and implementations, but I didn't check whether the linker setup ended up doing anything else fancy/screwy to take over additional stuff.
by raggi
3/30/2025 at 9:53:00 AM
Sorry for not replying sooner! Tbh, idk. I just had a user come to me with a particular query that was much slower on my driver than on others, and it unexpectedly turned out to be the way I implemented context cancellation for long-running queries.
Fixing the issue led to a twofold improvement, which I wouldn't have figured out without something to measure.
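The lesson, if anyone wants to apply it to their own driver: run the same query with and without a cancellable context, and whatever per-query or per-step cancellation machinery the driver sets up shows up as the difference. A sketch, with a placeholder query:

    package bench

    import (
        "context"
        "database/sql"
        "testing"

        _ "github.com/ncruces/go-sqlite3/driver"
        _ "github.com/ncruces/go-sqlite3/embed"
    )

    func run(b *testing.B, ctx context.Context) {
        db, err := sql.Open("sqlite3", "file:cancel.db")
        if err != nil {
            b.Fatal(err)
        }
        defer db.Close()

        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            var n int
            // Placeholder query; substitute the slow one.
            if err := db.QueryRowContext(ctx, `SELECT count(*) FROM sqlite_schema`).Scan(&n); err != nil {
                b.Fatal(err)
            }
        }
    }

    // Background context: the cancellation machinery can be skipped entirely.
    func BenchmarkNoCancel(b *testing.B) { run(b, context.Background()) }

    // Cancellable context: any per-query/per-step cancellation cost shows up here.
    func BenchmarkCancelable(b *testing.B) {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        run(b, ctx)
    }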
Now obviously, I can't ask you for the source code of your huge production app. But if you could give hints that would help me build a benchmark, that's the best way to improve.
I just wonder which compiler-rt builtins make such a huge difference (and where), and how they end up when going through Wasm and the wazero code generator.
Maybe I could start by compiling speedtest1 with zig cc and see if anything pops up.
by ncruces
3/31/2025 at 2:29:41 AM
So we aren't using mmap (but we are using WAL), so for one thing there's a lot of buffer and cache copy work going on - I'd expect it's the old classics from string.h. Wasm is a high-level VM though, so I'd expect (provided you pass the relevant flags) it should be using e.g. the VM's memory.copy instruction, which will bottom out in e.g. V8's memcpy.
We also don't use a common driver; ours is here: https://github.com/tailscale/sqlite
SQLite in general should be mostly optimized for the targets it's often built with, so for example it'll expect everything in string.h to be insanely well optimized, but it should also expect malloc to be atrocious, and so it'll manage its own pools.
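Concretely, the configuration shape is roughly this (hypothetical connection string and driver; the point is WAL, no mmap, and a big SQLite-managed page cache):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/ncruces/go-sqlite3/driver" // stand-in; any SQLite driver works
        _ "github.com/ncruces/go-sqlite3/embed"
    )

    func main() {
        db, err := sql.Open("sqlite3", "file:app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        for _, pragma := range []string{
            "PRAGMA journal_mode=WAL",  // WAL, as above
            "PRAGMA mmap_size=0",       // no mmap: reads copy through SQLite's own page cache
            "PRAGMA cache_size=-65536", // ~64 MiB of cache, pooled by SQLite rather than malloc
        } {
            if _, err := db.Exec(pragma); err != nil {
                log.Fatal(pragma, ": ", err)
            }
        }
    }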
by raggi
4/1/2025 at 12:46:40 PM
Oh, Tailscale! Hi! And thanks for this! Yeah, my Wasm build uses bulk memory instructions, which wazero bottoms out to Go's runtime.memmove: memory.init, memory.copy, etc. are all just runtime.memmove; memory.fill fills a small segment, then runtime.memmoves it to exponentially larger segments; etc.
memcmp is probably slow, though; the musl implementation is really naive. I shall try a simple optimization.
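The usual simple trick is word-at-a-time comparison. Purely as an illustration of the idea (in Go, though the real change would be in the C sources that get compiled to Wasm):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // compare behaves like memcmp on equal-length slices: it returns
    // -1, 0, or +1. A naive implementation loops byte by byte; loading
    // 8 bytes at a time is the classic cheap win.
    func compare(a, b []byte) int {
        for len(a) >= 8 {
            // Big-endian loads make uint64 ordering match byte ordering.
            x := binary.BigEndian.Uint64(a)
            y := binary.BigEndian.Uint64(b)
            if x != y {
                if x < y {
                    return -1
                }
                return +1
            }
            a, b = a[8:], b[8:]
        }
        // Tail of fewer than 8 bytes: byte-at-a-time is fine here.
        for i := range a {
            if a[i] != b[i] {
                if a[i] < b[i] {
                    return -1
                }
                return +1
            }
        }
        return 0
    }

    func main() {
        fmt.Println(compare([]byte("abcdefgh1"), []byte("abcdefgh2"))) // -1
    }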
by ncruces
3/29/2025 at 9:23:20 AM
Musl's allocator has a reputation of being slow. Switching to an alternative allocator like jemalloc or mimalloc should avoid that. (Not sure if that was your problem, but it's a common complaint about Rust + musl.)
by CodesInChaos