alt.hn

3/3/2026 at 11:11:11 PM

Talos: Hardware accelerator for deep convolutional neural networks

https://talos.wtf/

by llamatheollama

3/4/2026 at 12:48:57 AM

> Talos is a custom FPGA-based hardware accelerator built from the ground up to execute Convolutional Neural Networks with extreme efficiency

Makes it sound like it's new hardware. This is just (I'm inferring) software to program an off-the-shelf FPGA to do convolutions. Very minimal ones, by the look of it (MNIST etc.).

by zmmmmm

3/4/2026 at 12:40:58 AM

If the author and/or anyone else hasn't seen Sidero's Talos Linux distro, it's my current favorite way to spin up a bare metal Kubernetes cluster:

https://www.talos.dev/

by roughly

3/4/2026 at 1:19:26 AM

Agreed.

Also, in my experience, a great way to run K8s in IaaS while minimizing vendor lock-in.

by neoCrimeLabs

3/4/2026 at 12:17:44 AM

My advice: write your own English prose, and try not to let "LLM-speak" leak into your documentation when using them to edit. Ironically, LLMs just plain suck at writing English, like they're incredibly overfit on marketing copy and press releases. I hope someone is working on this, or at least cares about the problem, because that would make this brave new world palatable to read.

by tadfisher

3/4/2026 at 12:24:57 AM

They really don't, if you actually bother prompting them. Give them a voice sample, and tell them to match the tone, and you already get something 10x better. Have them revise with a list of common writing problems - not just common LLM patterns, but guidelines for writing better - and you get rid of more.

Properly prompted, an LLM writes far better than most people.

by vidarh

3/4/2026 at 12:39:11 AM

Without weighing in on whether this is true, I'll point out that LLMs could both be better writers than most people and also be bad writers.

Writing is a difficult skill that many (most?) educational systems do not effectively teach. Most people are terrible writers.

by jdcasale

3/4/2026 at 10:20:18 AM

Yes, but for most uses that is irrelevant. Most of the complaints are not about them not being top-level writers, but that they stand out negatively from human writing by relying on a bunch of bad tropes and stereotypical language use.

Maybe we shouldn't use it to write novels if we can't push it well beyond average, but it doesn't need to produce anything more than roughly average, or a little better, to be good enough in competition with average humans.

by vidarh

3/4/2026 at 12:42:47 AM

That is precisely the problem. When writing technical documentation, such as the landing page for an FPGA inference engine, a model should not need to be prompted to use proper voice and to avoid marketing language. There should be enough context in the text of the prompt itself.

by tadfisher

3/4/2026 at 1:10:16 AM

I don't think any of this indicates a fundamental property of the tech itself. AI companies post-train their models to sound like what people prefer to read. There's a reason engagement farmers have converged on the tone these LLMs imitate: it's something people prefer. Maybe not you, but it's the same force that gives us YouTube face on thumbnails, etc.

It takes some prompting to nudge the model out of that default voice because post-training reinforced it. They will likely shift it once these AI-isms are widely known and recognized. I'd assume the next-gen models under training now will get negative feedback from the human evaluators for sounding too AI-like, and then there will be new AI smells to calibrate to.

by bonoboTP

3/4/2026 at 2:31:32 AM

I'm not sure this invalidates anything I'm saying. The tools currently produce terrible-quality output unless actively prompted to stop producing terrible-quality output. To me, that's a bug, and I don't think post-training and popular preference excuses the tool's behavior. There's no value in normalizing slop if it's so easy to fix.

by tadfisher

3/4/2026 at 7:20:22 AM

Should YouTube "fix" the proliferation of exaggerated faces in thumbnails?

People prefer the slop, at least until they collectively notice the AI smell, at which point post-training will likely train it out of models, and slop will have new characteristics that take a while for the mainstream to detect.

by bonoboTP

3/4/2026 at 10:05:13 AM

This is like saying people shouldn't need to be trained for a job.

There's no reason to expect a general purpose model to know what you want when you've not given it any training in what to do for your specific case.

And in this case, the models do far better than humans: most humans can't switch to copying an arbitrary tone just from a page's worth of sample text. We don't even need to train/fine-tune these models further - we just need to actually fully specify the task we give them to get them to write well.

by vidarh

3/4/2026 at 12:17:28 AM

> It isn't just a reimplementation of existing software logic in hardware; it is a rethinking of how deep learning inference should work at the circuit level. [...] By implementing the entire inference pipeline in SystemVerilog, we achieve deterministic, cycle-accurate control over every calculation. [...] But don’t let the two-week timeline fool you. Those were two weeks full of 18-hour days, fueled by caffeine and sheer stubbornness.

I'm having a hard time figuring out if this is satire or not.

by noosphr

3/4/2026 at 12:55:19 AM

From personal experience, caffeine is not enough for two weeks of 18-hour days... you need some Pervitin-type shit.

by jcgrillo

3/4/2026 at 4:02:34 AM

Stupid question: are you a direct competitor to https://taalas.com/ ?

by vivzkestrel

3/4/2026 at 12:16:53 AM

Not to take away from this cool project, but its design decisions are incredibly impractical.

by arjvik

3/4/2026 at 12:19:18 AM

I honestly can’t tell if it’s a cool project or just an .md file that someone with zero experience had an LLM output.

by refulgentis

3/4/2026 at 7:20:43 AM

Considering the use of 32-bit fixed-point numbers, you can bet that person has no clue.

Even the crappiest FPGA has at least 18x18 multipliers, meaning you could add a small exponent on top to get a floating-point type with slightly worse precision than single-precision floats.

32-bit fixed point doesn't map to any DSP slice I'm aware of.
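The mismatch can be sketched in C (the function name and the 16-bit operand split are illustrative, and a real FPGA toolchain does this mapping in hardware): a 32-bit operand doesn't fit an 18-bit DSP input, so the synthesizer must decompose each 32x32 multiply into several partial products, each consuming its own DSP slice.

```c
#include <stdint.h>

// Hypothetical sketch: a 32x32-bit multiply decomposed into the partial
// products that 18x18 DSP multipliers can handle. Splitting each operand
// at 16 bits (so each half fits the 18-bit input width) turns one 32-bit
// fixed-point multiply into four DSP-sized multiplies plus adder logic,
// whereas an 18-bit (or smaller) datapath would use a single slice.
uint64_t mul32_via_18x18(uint32_t a, uint32_t b) {
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;  // 16-bit halves
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;
    uint64_t p0 = (uint64_t)a_lo * b_lo;         // DSP slice 1
    uint64_t p1 = (uint64_t)a_lo * b_hi;         // DSP slice 2
    uint64_t p2 = (uint64_t)a_hi * b_lo;         // DSP slice 3
    uint64_t p3 = (uint64_t)a_hi * b_hi;         // DSP slice 4
    return p0 + ((p1 + p2) << 16) + (p3 << 32);  // recombine partial products
}
```

So the 4x slice cost is paid on every MAC in the design, which is why accelerator datapaths are usually sized to the DSP primitive rather than to a round power of two.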

by imtringued

3/4/2026 at 1:13:31 AM

Love those animations/diagrams. How were they made?

by MarcelOlsz

3/4/2026 at 12:18:36 AM

This is horrible LLM slop, my god.

Winced my way through “Convolutions are in CNNs (it’s literally in the name, Convolutional Neural Network)”, then had to stop.

It’s honestly offensive to me. It doesn’t even make sense on its own terms. For some reason we fly from LLM inferencing to toy MNIST to convolutions with __0__ transition or sense of structure.

by refulgentis

3/4/2026 at 12:44:37 AM

Aside from the verbose AI slop, it's an interesting hobby project for exploring FPGAs. But it doesn't do anything you can't do on a CPU with a model small enough to fit in cache. For practical use you'd be better off implementing a minuscule model using vector intrinsics in your favorite systems language.
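A minimal sketch of that suggestion, assuming an x86 target with SSE (the function name and 3-tap kernel are illustrative): a tiny 1-D convolution written with intrinsics, producing four outputs per iteration. A model this small lives entirely in L1 cache, so a plain CPU handles it without any accelerator.

```c
#include <immintrin.h>  // SSE intrinsics (x86)

// Illustrative 3-tap 1-D convolution: out[i] = in[i]*k0 + in[i+1]*k1 + in[i+2]*k2,
// for i in [0, n-3]. The vector loop computes 4 outputs per iteration;
// a scalar loop handles the remainder.
void conv1d_3tap(const float *in, float *out, int n,
                 float k0, float k1, float k2) {
    __m128 w0 = _mm_set1_ps(k0);
    __m128 w1 = _mm_set1_ps(k1);
    __m128 w2 = _mm_set1_ps(k2);
    int i = 0;
    for (; i + 4 <= n - 2; i += 4) {              // 4 outputs per iteration
        __m128 x0 = _mm_loadu_ps(in + i);         // in[i..i+3]
        __m128 x1 = _mm_loadu_ps(in + i + 1);     // in[i+1..i+4]
        __m128 x2 = _mm_loadu_ps(in + i + 2);     // in[i+2..i+5]
        __m128 acc = _mm_mul_ps(x0, w0);
        acc = _mm_add_ps(acc, _mm_mul_ps(x1, w1));
        acc = _mm_add_ps(acc, _mm_mul_ps(x2, w2));
        _mm_storeu_ps(out + i, acc);
    }
    for (; i < n - 2; i++)                        // scalar tail
        out[i] = in[i] * k0 + in[i + 1] * k1 + in[i + 2] * k2;
}
```

The same shape works with NEON on ARM or with portable SIMD in Rust/C++; the point is that a kernel this size is memory-trivial and the CPU's vector units are already sitting there.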

by fc417fc802

3/4/2026 at 3:10:28 AM

AI slop.

by analognoise