alt.hn

1/23/2026 at 11:35:56 AM

Ask HN: What AI feature looked great in demos and failed in real usage? Why?

by kajolshah_bt

1/23/2026 at 11:42:40 AM

Real-time voice translation looked amazing in demos, but in practice it struggled with accents, technical jargon, and context. The demos were clearly done in controlled environments with clear speakers and simple topics.

The reason? Training data bias and the "last mile" problem - demos use ideal conditions while real usage involves messy audio, overlapping speech, and domain-specific vocabulary the models never saw during training.

by rtbruhan00

1/23/2026 at 12:08:22 PM

Totally agree — the “demo vs real world” gap is always the messy edge cases: accents, crosstalk, domain terms, and people talking like… people.

Did you end up adding any guardrails (confidence thresholds, “please repeat,” glossary/term injection, or human fallback)? Also curious: were failures mostly ASR or translation/context?
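For concreteness, a minimal sketch of the kind of guardrail I mean: gate low-confidence ASR output behind a "please repeat" prompt and protect domain terms with a glossary before translation. All names and the threshold value are made up for illustration, not from any particular product.

```python
# Hypothetical guardrail sketch (illustrative names, not a real API):
# route low-confidence transcripts to a "please repeat" prompt, and
# inject a domain glossary before handing text to the translator.

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune per deployment

# Domain glossary: map jargon to a protected form so the MT model
# doesn't mangle it (here, identity mappings as a stand-in).
GLOSSARY = {"kubectl": "kubectl", "rollback": "rollback"}

def apply_glossary(text: str, glossary: dict) -> str:
    """Replace known domain terms with their protected forms."""
    for term, protected in glossary.items():
        text = text.replace(term, protected)
    return text

def handle_utterance(transcript: str, confidence: float):
    """Return (action, payload): ask to repeat, or pass to translation."""
    if confidence < CONFIDENCE_THRESHOLD:
        return ("repeat", "Sorry, could you say that again?")
    return ("translate", apply_glossary(transcript, GLOSSARY))

if __name__ == "__main__":
    print(handle_utterance("restart the pod with kubectl", 0.92))
    print(handle_utterance("mumbled audio", 0.40))
```

The interesting design question is where the threshold lives: a single global cutoff is easy to demo but per-speaker or per-domain calibration is usually what real deployments end up needing.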

by kajolshah_bt

1/23/2026 at 3:29:48 PM

Some Meta demos failed both in the demos themselves and in real usage.

by pawelduda

1/30/2026 at 10:02:01 AM

Yes, exactly. A lot of demos don't just fail in the real world; they were never designed for real usage in the first place. They work once, in a clean flow, and fall apart as soon as people behave… like people.

by kajolshah_bt

1/24/2026 at 11:13:19 AM

[dead]

by baby6343