alt.hn

4/1/2026 at 3:27:38 AM

Analyzing Geekbench 6 under Intel's BOT

https://www.geekbench.com/blog/2026/03/analyzing-geekbench-6-under-intels-bot/

by hajile

4/1/2026 at 4:43:27 AM

This suggests the checksum is used to identify whether the binary is known to BOT, and thus whether BOT can optimize the binary.

I do wonder what this "optimize" step actually entails; does it just replace the binary with one that Intel themselves carefully decompiled and then hand-optimised? If it's a general "decompile-analyse-optimise-recompile" (perhaps something similar to what the https://en.wikipedia.org/wiki/Transmeta_Crusoe does), why restrict it?

by userbinator

4/1/2026 at 12:24:38 PM

FYI, Geekbench 6 already optimizes for AVX-512. Intel just optimizes it even further for their own CPUs.

I'll take the side of Geekbench here. There is no reason for Intel to optimize a benchmark tool except to cheat. The goal of GB is to test how typical applications run, not the maximum performance possible under ideal scenarios.

by aurareturn

4/1/2026 at 4:40:55 AM

Post-link optimization (PLO) tools have been around for quite a while. In particular, Meta’s BOLT (fully upstream in LLVM) and Google’s Propeller (somewhat upstream in LLVM, but fully open source) have been around for 5+ years at this point.

It doesn’t seem like Intel’s BOT delivers larger performance gains than those tools, and it is closed source.

by boomanaiden154

4/1/2026 at 4:44:21 AM

Intel BOT seems to be patches for specific binaries (hence why they didn't see a difference for Geekbench 6.7), unlike BOLT/Propeller which are for arbitrary programs. The second image from their help page [1] showcases this.

[1] https://www.intel.com/content/www/us/en/support/articles/000...

by tyushk

4/1/2026 at 4:59:47 AM

Applying targeted binary patches shouldn't take 40 seconds... unless that's also a fake "so it looks like it's working really hard" delay.

by userbinator

4/1/2026 at 4:43:59 AM

Question: do those vectorize code, as in the example here? I was under the impression that they performed a more limited set of optimizations.

by trynumber9

4/1/2026 at 5:05:25 AM

Propeller can’t really do many instruction-level modifications due to how it works (it constructs a layout file that then gets passed to the linker).

BOLT could do this, but does not as far as I’m aware.

Most vectorization like this is also probably better done in the compiler middle end. At least in LLVM, the loop vectorizer and especially the SLP vectorizer do a decent job of picking up most of the gains.

You might be able to pick up some gains by doing it post-link at the MC level, but writing even an IR-level SLP vectorizer is already quite difficult.

by boomanaiden154

4/1/2026 at 4:19:55 PM

To me, the whole thing sounds like cheating in benchmarks.

Intel built a tool that activates only for a specific benchmark - but not for real-world software that accomplishes similar things - and then replaces generic machine code with a (most likely) hand-crafted variant optimized for running this specific benchmark on this specific CPU. That means BOT only boosts the benchmark score, but doesn't help at all with the end-user workflows the benchmark is trying to emulate. Intel's BOT thereby makes the benchmark score misleading, which is why Geekbench is flagging them.

by fxtentacle

4/1/2026 at 4:51:56 AM

Quack3.exe again, in a way. If it's been done for years on GPU shaders, then why not CPU code?

by tyushk

4/1/2026 at 6:38:12 AM

While highly specific optimisations might give you a tiny bit of an advantage, the main boost here is vector code, which would work on any processor supporting the instructions. They could have looked at the vendor and feature bits and used those to enable the optimization on any CPU, but they didn't, and instead limited it to a small subset of programs and CPUs. It tingles the "PR above all else, must have the highest score" sense.

by consp

4/1/2026 at 5:37:16 AM

Can we end users also tune our CPUs for the specific tasks we do?

by whatever1

4/1/2026 at 4:21:55 AM

> BOT optimizations are poorly documented, aggressive in scope, and damage comparability with other CPUs. For example, BOT allows Intel processors to run vector instructions while other processors continue to run scalar instructions. This provides an unfair advantage to Intel

Wait until they hear about branch predictors.

by refulgentis

4/1/2026 at 8:17:02 AM

The thing is, BOT only applies to a handful of applications. So Geekbench scores with BOT applied aren't as representative.

by 1una