alt.hn

3/31/2026 at 4:27:02 PM

Cohere Transcribe: Speech Recognition

https://cohere.com/blog/transcribe

by gmays

3/31/2026 at 4:56:14 PM

My worry is that ASR will end up like OCR. If a large multimodal AI system is good enough (latency-wise), its advantage in domain understanding eats the specialized technologies alive.

In OCR, even when the characters are poorly scanned, the deep domain understanding these large multimodal AIs have lets them work out what the document actually meant: this field must be the order ID, because in the million invoices I've seen before, the order ID normally sits below the order date, etc. My worry is that the same thing is going to happen in ASR.

by dinakernel

3/31/2026 at 5:31:46 PM

This is both good and bad. Good ASR can often understand low-quality / garbled speech that I could not figure out, but it also "over-corrects" sometimes, replacing correct but low-prior words with incorrect but much more common ones.

With OCR the risk is you get another Xerox[1] incident where all your data looks plausible but is incorrect. Hope you kept the originals!

(This is why for my personal doc scans, I use OCR only for full text search, but retain the original raw scans forever)

[1] https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...

by progbits

3/31/2026 at 6:49:52 PM

This is exactly the case today. Multimodal LLMs like gpt-4o-transcribe are way better than traditional ASR, not only because of deeper understanding but because of the ability to actually prompt it with your company's specific terminology, org chart, etc.

For example, if the prompt includes that Caitlin is an accountant and Kaitlyn is an engineer, if you transcribe "Tell Kaitlyn to review my PR" it will know who you're referring to. That's something WER doesn't really capture.
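A hypothetical sketch of that prompt-biasing idea. The names, the prompt wording, and the `build_bias_prompt` helper are my own illustration; the commented-out API call follows the OpenAI transcription endpoint's `prompt` parameter, but treat the model name and exact usage as assumptions:

```python
# Illustrative only: turn company context into a biasing prompt for a
# multimodal transcription model.

def build_bias_prompt(people: dict[str, str]) -> str:
    """Turn a name -> role mapping into a short context prompt."""
    lines = [f"{name} is {role}." for name, role in people.items()]
    return "Company context: " + " ".join(lines)

prompt = build_bias_prompt({"Caitlin": "an accountant", "Kaitlyn": "an engineer"})

# Hypothetical usage with the OpenAI SDK (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# with open("memo.wav", "rb") as f:
#     result = client.audio.transcriptions.create(
#         model="gpt-4o-transcribe", file=f, prompt=prompt)
#     print(result.text)
```

With that context in the prompt, "Tell Kaitlyn to review my PR" should resolve to the engineer's spelling rather than the accountant's.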

BTW, I built an open-source Mac tool for using gpt-4o-transcribe with an OpenAI API key and custom prompts: https://github.com/corlinp/voibe

by corlinp

3/31/2026 at 9:24:09 PM

Many ASR models already support prompts/adding your own terminology. This one doesn't, but full LLMs, especially such expensive ones, aren't needed for that.

by Bolwin

3/31/2026 at 5:50:19 PM

Why are you 'worried' about it? Shouldn't we strive for better technology even if it means some will 'lose'?

by nkzd

3/31/2026 at 6:09:16 PM

"Better" isn't just about increasing benchmark numbers. Often, it's more important that a system fails safely than how often it fails. Automatic speech recognition that guesses when the input is unclear will occasionally be right and therefore have a lower word error rate, but if it's important that the output be correct, it might be better to insert "[unintelligible]" and have a human double-check.

by yorwba

3/31/2026 at 8:08:41 PM

Ideally, you'd be able to specify exactly what you want: do you want filled pauses ("aaah", "umm") written out? Do you want a transcription of the disfluencies (restarts, etc.) or just a cleaned-up version?

by ks2048

3/31/2026 at 7:13:19 PM

It's better in terms of WER. It's not better in terms of not making shit up that sounds plausible.

Probably the answer is simply to tweak the metric so it's a bit smarter than WER: allow "unclear" output, which is penalised less than an actually incorrect answer. I'd be surprised if nobody has done that.
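A minimal sketch of that tweaked metric (the 0.5 weight and the "[unclear]" token are assumptions, not an established standard): ordinary edit-distance WER, except that substituting a reference word with an explicit "[unclear]" token costs less than a confident wrong guess.

```python
# WER variant where honest uncertainty is cheaper than a wrong guess.

def weighted_wer(ref, hyp, unclear_cost=0.5):
    """Edit-distance WER where substituting a reference word with
    the token "[unclear]" costs `unclear_cost` instead of 1."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = min cost to align first i ref words with first j hyp words
    dp = [[0.0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        dp[i][0] = i  # deletions
    for j in range(1, len(h) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            if r[i - 1] == h[j - 1]:
                sub = 0.0
            elif h[j - 1] == "[unclear]":
                sub = unclear_cost  # flagged uncertainty: reduced penalty
            else:
                sub = 1.0
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(r)][len(h)] / len(r)

ref = "tell caitlin to review my pr"
print(weighted_wer(ref, "tell kaitlyn to review my pr"))    # one wrong word: 1/6
print(weighted_wer(ref, "tell [unclear] to review my pr"))  # flagged word: 0.5/6
```

Under plain WER both hypotheses score identically; here the model that admits it couldn't hear the name scores better than the one that guessed wrong.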

by IshKebab

3/31/2026 at 5:32:15 PM

> Limitations

> Timestamps/Speaker diarization. The model does not feature either of these.

What a shame. Is whisperx still the best choice if you want timestamps/diarization?

by gruez

3/31/2026 at 5:45:00 PM

Even in the commercial space, there’s a lack of production grade ASR APIs that support diarization and word level timestamps.

My experiences with Google’s Chirp have been horrendous, with it sometimes skipping sections of speech entirely, hallucinating speech where the audio contains noise, and producing unreliable word-level timestamps. And all this is even when using their new audio prefiltering feature.

AWS works slightly better, but also has trouble with keeping word level timestamps in sync.

Whisper is nice but hallucinates regularly.

OpenAI’s new transcription models are delivering accurate output but do not support word level timestamps…

A lot of this could be worked around by sending the resulting transcripts through a few layers of post processing, but… I just want to pay for an API that is reliable and saves me from doing all that work.

by bartman

3/31/2026 at 6:38:05 PM

Isn't Elevenlabs the best in this?

by stavros

3/31/2026 at 9:13:15 PM

I've not tested their speech-to-text yet, but based on the docs it looks promising. Thanks for the suggestion!

by bartman

3/31/2026 at 9:14:06 PM

It's fantastic, and their diarization is spot on as well.

by stavros

3/31/2026 at 5:38:10 PM

WhisperX is not a model but a software package built around Whisper and some other models, including diarization and alignment ones. Something similar will be built around the Cohere Transcribe model, maybe even just an integration to WhisperX itself.

by akreal

3/31/2026 at 5:43:01 PM

There is also: https://github.com/linto-ai/whisper-timestamped

It doesn't use an extra model (so it supports every language that works with Whisper out of the box and uses less memory); instead it works by applying Dynamic Time Warping to the cross-attention weights.
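For illustration, a minimal Dynamic Time Warping sketch of that alignment idea — not whisper-timestamped's actual code, and with toy cost values; in the real thing the costs come from the negated cross-attention weights:

```python
# Align N output tokens to T audio frames by finding the cheapest
# monotonic path through a token-by-frame cost matrix.

def dtw_path(cost):
    """Return the minimal-cost monotonic path through `cost`
    (rows = tokens, cols = frames) from (0, 0) to (N-1, T-1)."""
    n, t = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * t for _ in range(n)]  # accumulated cost
    acc[0][0] = cost[0][0]
    for i in range(n):
        for j in range(t):
            if i == 0 and j == 0:
                continue
            best = INF
            for pi, pj in ((i - 1, j), (i, j - 1), (i - 1, j - 1)):
                if pi >= 0 and pj >= 0:
                    best = min(best, acc[pi][pj])
            acc[i][j] = cost[i][j] + best
    # Backtrack from the end to recover an optimal alignment.
    path, i, j = [], n - 1, t - 1
    while True:
        path.append((i, j))
        if i == 0 and j == 0:
            break
        cands = [(pi, pj) for pi, pj in ((i - 1, j), (i, j - 1), (i - 1, j - 1))
                 if pi >= 0 and pj >= 0]
        i, j = min(cands, key=lambda p: acc[p[0]][p[1]])
    return path[::-1]

# 3 tokens over 5 frames: low cost where attention was high.
cost = [
    [0.1, 0.2, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.1, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.2, 0.1],
]
path = dtw_path(cost)
# Each token's first/last frame on the path gives its start/end time.
```

The monotonic-path constraint is what makes this work without a separate alignment model: tokens and frames both move forward in time, so each token lands on the contiguous span of frames it attended to most.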

by GaggiX

3/31/2026 at 7:35:25 PM

Just a warning that plain WhisperX is more accurate and Whisper-timestamped has many weird quirks.

by oezi

3/31/2026 at 8:43:11 PM

Ran it over our internal dataset of ~250 recordings of people saying British postcodes (all kinds of accents, etc.) - it's competitive for sure!

Soniox (stt-async-v4): 176/248 (71.0%)

ElevenLabs (scribe_v2): 170/248 (68.5%)

AssemblyAI (universal-3-pro): 166/248 (66.9%)

Deepgram (nova-3): 158/248 (63.7%)

AssemblyAI (universal-2): 148/248 (59.7%)

Cohere (transcribe-03-2026): 148/248 (59.7%)

Speechmatics (enhanced): 134/248 (54.0%)

P.s. how do I get this to render correctly on here?

by mnbbrown

3/31/2026 at 9:25:17 PM

Try two newlines between each one

by Bolwin

3/31/2026 at 7:05:39 PM

Unfortunately, this model does not seem to support a custom vocabulary, word boosting or an additional prompt.

by _medihack_

3/31/2026 at 4:43:51 PM

I can't say enough nice things about Cohere's services. I migrated over to their embedding model a few months ago for CLIP-style embeddings and it's been fantastic.

It has the most crisp, steady P50 of any external service I've used in a long time.

by geooff_

3/31/2026 at 5:19:56 PM

Can you comment on overall quality? Their models tend to be a bit smaller and less performant overall.

by bluegatty

3/31/2026 at 7:03:53 PM

My baseline was Jina, a Chinese model provider. I had major issues with their reliability. I have no comparison to offer in terms of offline metrics, as I had to do an emergency migration because their inference service had extended downtimes.

My experience with Cohere and with their sales engineers has been boring, and I say that in the most flattering way possible. Embeddings are a core service at this point, like VMs and DBs. They just need to work, and work well, and that's what they're selling.

by geooff_

3/31/2026 at 7:28:18 PM

The problem with many STT models is that they seem to mostly be trained on perfectly-accented speech and struggle a lot with foreign accents, so I'm curious to try this one as a Frenchman with a rather French English accent.

So far, the best I have found while testing models for my language learning app (Copycat Cafe) is Soniox. All others performed badly for non native accents. The worst were whisper-based models because they hallucinate when they misunderstand and tend to come up with random phrases that have nothing to do with the topic.

by kieloo

3/31/2026 at 5:29:39 PM

Dumb question, but if this is "open source" is there source code somewhere? Or does that term mean something different in the world of models that must be trained to be useful?

by teach

3/31/2026 at 6:55:47 PM

The most common definition is just available weights.

This kind of makes sense because "compiling" (training) the model costs prohibitively much, and we can still benefit from the artifacts.

by gunalx

3/31/2026 at 5:56:01 PM

I presume it means the model itself.

by stronglikedan

3/31/2026 at 6:39:50 PM

To clarify, this is SOTA in its size category, right? It's not better than Parakeet, for example?

by stavros

3/31/2026 at 7:09:04 PM

Looking at the ASR leaderboard (https://huggingface.co/spaces/hf-audio/open_asr_leaderboard), Parakeet (.6B) is still near the top on speed, but about 10th on WER.

by jwineinger

3/31/2026 at 7:12:24 PM

Thanks, I don't know how much to trust benchmarks so I figured I'd ask.

by stavros

3/31/2026 at 6:53:52 PM

Well, to clarify, it is both larger than Parakeet in parameter count (Parakeet is available in 0.6B and 1.1B; this is 2B params) and performs better on the benchmarks Hugging Face publishes on the Open ASR leaderboard.

by caminanteblanco

3/31/2026 at 6:55:35 PM

Ahh, thanks, I confused my parameter counts. I guess Parakeet is 0.6B; I was somehow thinking 6B.

by stavros

3/31/2026 at 8:05:19 PM

Awesome. Going to see if I can port https://scrivvy.ai to this. Based in Canada.

by BreezyBadger

3/31/2026 at 6:29:34 PM

I had to set up Fireflies for our company recently. Cool tool, but I'm sending dozens of internal meetings to an American company. Our ISO inspector wouldn't be pleased to know.

This is a good option. Will check it out.

by ramon156

3/31/2026 at 6:43:47 PM

There are many open-source STT models that can run locally on a Mac with good performance, such as Whisper and Parakeet.

by Oras

3/31/2026 at 5:20:07 PM

How hard would it be to train it on other European languages?

by topazas

3/31/2026 at 5:36:19 PM

If you have to ask, you don't really need the answer.

Finding or creating training code doesn't seem too difficult. Beyond that you'd need a pretty decent amount of high-quality training data (many hours of audio), a few hours of high-end data-center GPU compute, and many iterations to get it right.

by gunalx

3/31/2026 at 5:40:15 PM

It includes several European languages.

by harvey9

3/31/2026 at 5:56:25 PM

hence "other" lol

by stronglikedan

3/31/2026 at 4:50:49 PM

It's great that this is Apache 2.0 licensed - several of Cohere's other models are licensed free for non-commercial use only.

by simonw

3/31/2026 at 8:13:17 PM

Notable omission of Deepgram models in the comparisons?

by bkitano19

3/31/2026 at 6:54:58 PM

Multimodal models are way better.

by kalmuraee

3/31/2026 at 7:13:54 PM

Can you clarify? I tested a few and they are rubbish and don't have the same features.

by Fidelix

3/31/2026 at 5:31:28 PM

[dead]

by aplomb1026

3/31/2026 at 7:52:54 PM

[dead]

by theaicloser