alt.hn

12/11/2025 at 7:48:29 PM

Ask HN: Relatively SoTA LLM Agents from Scratch?

by solsane

12/12/2025 at 10:39:10 AM

Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827

The training algorithms themselves are relatively simple (base training, fine-tuning, RL); what is critical is the scale, i.e. the engineering infrastructure. The authors recommend a cluster of at least 128 GPUs and many petabytes of training data.
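
To make the three stages concrete, here is a toy numpy sketch: a bigram "model" trained with next-token prediction (base training), then on a small curated corpus (fine-tuning), then with a REINFORCE-style update against a reward (RL). Everything in it, including the corpora and the reward function, is made up for illustration; the paper's point is that the hard part is running this same loop at cluster scale.

  import numpy as np

  VOCAB = list("abcdefghijklmnopqrstuvwxyz ")
  V = len(VOCAB)
  IDX = {c: i for i, c in enumerate(VOCAB)}
  rng = np.random.default_rng(0)
  W = rng.normal(0, 0.1, (V, V))   # bigram logits: W[prev, next]

  def softmax(z):
      z = z - z.max()
      e = np.exp(z)
      return e / e.sum()

  def nll_grad(text):
      # Loss and gradient for next-token prediction on one string.
      g, loss, pairs = np.zeros_like(W), 0.0, list(zip(text, text[1:]))
      for a, b in pairs:
          p = softmax(W[IDX[a]])
          loss -= np.log(p[IDX[b]])
          p[IDX[b]] -= 1.0                  # d(loss)/d(logits)
          g[IDX[a]] += p
      return loss / len(pairs), g / len(pairs)

  def train(corpus, lr, steps):
      global W
      for _ in range(steps):
          for text in corpus:
              _, g = nll_grad(text)
              W -= lr * g

  # 1) Base training: next-token prediction on a large unlabeled corpus.
  train(["the quick brown fox jumps over the lazy dog"], lr=1.0, steps=200)

  # 2) Fine-tuning: same objective on a smaller curated corpus.
  train(["answer politely", "be concise"], lr=0.3, steps=100)

  # 3) RL (REINFORCE): sample, score with a reward, push the log-prob up
  #    in proportion to the reward. A real system would use a learned
  #    reward model and a PPO-style update instead.
  def sample(prev, n=10):
      out = [prev]
      for _ in range(n):
          out.append(VOCAB[rng.choice(V, p=softmax(W[IDX[out[-1]]]))])
      return "".join(out)

  def reward(s):
      return s.count("e")                   # toy stand-in for a reward model

  for _ in range(200):
      s = sample("t")
      _, g = nll_grad(s)                    # gradient of -log p(s)
      W -= 0.05 * reward(s) * g             # ascend reward-weighted log-prob

Swap the bigram table for a transformer, the toy strings for web-scale corpora and preference data, and the single-process loop for a distributed one, and you have the pipeline the paper describes; the infrastructure to do that swap is where the 128-GPU-minimum recommendation comes from.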

by bjourne