Continuous batching is a nonsense term. Systems that behave this way are usually called streaming (as opposed to batch). A batch job is started, runs to completion, and produces one result (or one set of results); computation happens from when the job starts until it completes.
A streaming system is on continuously; computation happens whenever new data arrives, and it produces a stream of results.
This is a streaming system. And most of the analysis doesn't really understand how high-throughput systems (as in >16Mb/s/core) are architected. It doesn't even use the correct language for what is being done to the data (relational algebra), so the proper optimization techniques can't be applied.
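To make the "this is really streaming" point concrete, here is a minimal toy sketch of one scheduler tick under so-called continuous batching: requests join the running batch the moment they arrive, every active sequence advances one decode step, and finished sequences leave immediately, so results trickle out as a stream rather than all at once when a job ends. All names here (`step_fn`, `remaining`, the request dict shape) are made up for illustration, not any real serving API.

```python
def continuous_batch_step(active, arrivals, step_fn):
    # One scheduler tick: admit newly arrived requests immediately
    # (streaming behavior), advance every active request by one
    # decode step, and retire whatever finished this tick.
    active = active + arrivals
    for req in active:
        req["output"].append(step_fn(req))  # one decode step
        req["remaining"] -= 1
    still_running = [r for r in active if r["remaining"] > 0]
    finished = [r for r in active if r["remaining"] <= 0]
    return still_running, finished

def make_req(rid, n_tokens):
    # Toy request: just an id and how many tokens it still needs.
    return {"id": rid, "remaining": n_tokens, "output": []}

def step_fn(req):
    # Stand-in for a model forward pass producing one token.
    return f"tok{len(req['output'])}"

# Simulate three ticks; request B arrives while A is mid-flight
# and joins the batch without waiting for A to finish.
active, done = [], []
for arrivals in [[make_req("A", 2)], [make_req("B", 1)], []]:
    active, finished = continuous_batch_step(active, arrivals, step_fn)
    done.extend(finished)
```

Note there is no "job start" or "job end" for the batch as a whole: the loop runs continuously and work happens when data shows up, which is exactly the streaming definition above.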
It is, however, a very nice explanation of the preprocessing used in LLM systems.