5/12/2026 at 12:41:12 AM
The post is AI-written, so I did not read it. But based on the title and abstract I'll have to disagree. The native content LLMs understand is text. They were literally trained on it. They much prefer it to any arbitrary structure you could come up with.
We're used to thinking that computers prefer content that is structured, binary, etc., but with LLMs that changed.
by D2OQZG8l5BI1S06
5/12/2026 at 2:17:52 AM
Their native content is semantic vectors. They had to be trained for a long time to convert between text and semantic vectors, and the conversion is very lossy. The seahorse emoji demonstrates this nicely: the LLM internally holds a semantic vector for seahorse+emoji, but the output translation layer can't match it.
by tardedmeme
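A toy sketch of that lossiness, using made-up random embeddings (the vocabulary, dimensions, and names here are all illustrative, not any real model's internals): the model's hidden state can be any dense vector, but the output layer can only score and emit tokens that exist in the vocabulary, so a concept with no matching token gets snapped to the nearest one that does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: note there is no single "seahorse emoji" token.
vocab = ["seahorse", "emoji", "horse", "fish", "🐠", "🐴"]
dim = 8
E = rng.normal(size=(len(vocab), dim))  # made-up token embedding matrix

# Pretend the hidden state encodes the concept "seahorse emoji":
# a vector between existing tokens, matching none of them exactly.
hidden = E[vocab.index("seahorse")] + E[vocab.index("emoji")]

# The output layer projects the hidden state onto the vocabulary and
# picks the best-scoring token -- it can only ever emit tokens that exist.
logits = E @ hidden
print("decoded as:", vocab[int(np.argmax(logits))])
```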
5/12/2026 at 6:31:30 AM
> The seahorse emoji demonstrates this nicely: the LLM internally holds a semantic vector for seahorse+emoji, but the output translation layer can't match it.

I am curious about this: how can the LLM hold the embedding for seahorse+emoji if it doesn't exist? How did it end up like this? Perhaps the dataset had discussions from people about potential new emojis?
by Alifatisk
5/12/2026 at 6:32:39 AM
Because it's just the embedding for a seahorse plus the embedding for an emoji symbol output.
by tardedmeme
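A rough caricature of that additive picture, again with made-up vectors (real models don't compose meanings this simply, but it shows the mismatch): the sum of two token embeddings generally lands in a region of the space where no single token lives, so decoding can only approximate it.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["seahorse", "emoji", "horse", "fish"]
E = rng.normal(size=(len(vocab), 16))
E /= np.linalg.norm(E, axis=1, keepdims=True)  # unit vectors, so dot = cosine

# Compose the two meanings by simple addition, then renormalize.
concept = E[vocab.index("seahorse")] + E[vocab.index("emoji")]
concept /= np.linalg.norm(concept)

# Cosine similarity of the composed concept to every real token:
for tok, sim in zip(vocab, E @ concept):
    print(f"{tok}: {sim:.2f}")
# The best matches hover around 0.7, not 1.0 -- no token embedding
# coincides with the composed vector.
```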
5/13/2026 at 12:35:54 AM
The crazy thing is that you can contribute literally nothing because you chose to be totally ignorant and only act on your perceived hobby horse. And you're the same person who would tell me that AI is bad because, what? It might do the same thing you're proud you just did? Hallucinate some BS?
by halJordan
5/12/2026 at 8:10:05 PM
If only we had spent any time as an industry coming up with structured formats for text...
by saghm