4/6/2026 at 11:22:50 PM
This is great, and I'm not knocking it, but every time I see these apps it reminds me of my phone. My 2021 Google Pixel 6, when offline, can transcribe speech to text and also corrects things contextually. It can make a mistake, and as I continue to speak, it will go back and correct something earlier in the sentence. What tech does Google have shoved in there that predates Whisper and Qwen by five years? And why do we now need 1 GB of transformers to do it on a more powerful platform?
by arkensaw
4/7/2026 at 7:24:52 AM
It's the same model used for the Web Speech API, which can operate entirely offline. Google mostly funded the training of this model around 10 years ago, and it's quite good.
There are many websites that are simple frontends for this model, which is built into WebKit- and Blink-based browsers. However, to my knowledge the model is a binary blob packed into the apps and is not open source, hence the lack of Firefox support.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_...
by pushedx
4/6/2026 at 11:38:54 PM
Microsoft OneNote had this back in 2007 or so, granted the speech-to-text model wasn't nearly as advanced as they are now. I was actually on the OneNote team when they were transitioning to an online-only transcription model, because there was no one left to maintain the legacy on-device system.
It wasn't any sort of planned technical direction, just a lack of anyone wanting to maintain the old system.
by com2kid
4/7/2026 at 5:37:32 AM
I remember trying out some voice-to-text around 2002 that I believe was included with Windows XP... or maybe Office? You had to go through some training exercises to tune it to your voice, but then it worked fairly well for transcription or even for interacting with applications.
by rudhdb773b
4/7/2026 at 6:18:57 AM
OS/2 had it built in in 1996.
by silon42
4/6/2026 at 11:43:59 PM
The accuracy is much lower, though. I've switched away from Gboard to Futo on Android and exclusively use MacWhisper on macOS instead of the default Apple transcription model.
by adamsmark
4/7/2026 at 3:23:50 AM
Any particular reason why you switched? I've been using Gboard for years, especially the speech-to-text in four languages. In the past few weeks there was an update where the dictation feature is now in a separate "panel" of the keyboard, and it hardly works at all. In English and Hebrew it stops after half a dozen words, and those words must be spoken slowly and mechanically for it to work at all. Russian and Arabic are right out - I can't coax any coherent sentence out of it.
I've gone through all permutations of the relevant settings, such as "Faster Voice Dictation" (translated from Hebrew; I don't know what the original English option is called). I think there used to be an option for online or offline transcription, but that option is gone now.
This is ridiculous - I tried to copy the version information and there is no way to copy it in-app. Let's try the S24 OCR feature...
17.0.10.880768217 release-arm64-v8a 175712590
Primary (en_GB) 2025090100 = current version
Primary on-device: No packs
Fallback on-device: Packs: ru-RU: 200
I'll try to install the English, Hebrew, and Arabic packs, though I'm certain that I've installed them already.
by dotancohen
4/6/2026 at 11:45:37 PM
Interesting. My Pixel 7 transcription is barely usable for me. It makes way too many mistakes and defeats the purpose of not having to type, but maybe that's just my experience. The latest open-source local STT models people are running on devices are significantly more robust (e.g. Whisper models, Parakeet models, etc.), so background noise, mumbling, and/or just not having a perfect audio environment doesn't trip up the SoTA models as much (all of them still do get tripped up).
I work in voice AI and use these models (both proprietary and local open source) every day. Night-and-day difference for me.
by cootsnuck
4/7/2026 at 8:04:28 AM
I've built my own STT apps testing Whisper, and while it's good, it does hallucinate quite a bit if there's noise - or just sometimes when the audio is perfectly clear. It often gives the illusion of being very good, but I could record a half hour of me speaking and discover some very random stuff in the middle that I did not say.
by taffydavid
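A common mitigation for that hallucination-on-noise failure mode is to drop near-silent audio before it ever reaches the model. Here is a minimal, stdlib-only Python sketch of an RMS energy gate; the frame length (100 ms at 16 kHz) and the threshold are arbitrary illustrative picks, not values from any real Whisper pipeline:

```python
import math
from array import array

def voiced_frames(samples, frame_len=1600, threshold=500.0):
    """Split 16-bit PCM samples into fixed-size frames and keep only
    those whose RMS energy exceeds the threshold, so low-energy
    (silent/noisy-floor) stretches never reach the STT model."""
    kept = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        if rms >= threshold:
            kept.extend(frame)
    return kept

# Toy check: one loud square-wave "speech" frame and one near-silent frame.
speech = array("h", [8000, -8000] * 800)   # 1600 samples, high energy
silence = array("h", [3, -3] * 800)        # 1600 samples, near silence
trimmed = voiced_frames(list(speech) + list(silence))
print(len(trimmed))  # 1600: only the loud frame survives the gate
```

Real pipelines typically use a proper VAD rather than a raw energy threshold, but even this crude gate removes the long silent stretches on which Whisper-family models tend to invent text.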
4/7/2026 at 4:29:41 PM
Yup, you're absolutely right. The open source models do have their rough edges. I use NVIDIA's Parakeet v3 model a lot locally, and it will occasionally do this thing where it just repeats a word like a dozen times.
by cootsnuck
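That repeat-the-same-word failure mode is cheap to guard against in post-processing. A minimal Python sketch; the cutoff of three repeats is an arbitrary assumption, not anything Parakeet-specific:

```python
import re

def collapse_repeats(text: str, max_repeats: int = 3) -> str:
    """Collapse runs of an identical word longer than max_repeats down
    to a single occurrence. Short legitimate doublings (e.g. 'very
    very good') are left alone."""
    pattern = re.compile(
        r"\b(\w+)(?:\s+\1\b){%d,}" % max_repeats, re.IGNORECASE
    )
    return pattern.sub(r"\1", text)

# Twelve copies of "hello" collapse to one; a natural doubling survives.
print(collapse_repeats("I said " + "hello " * 12 + "and hung up"))
print(collapse_repeats("very very good"))
```

The backreference `\1` does the work: a word followed by at least `max_repeats` copies of itself is rewritten as the word alone.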
4/7/2026 at 6:20:57 AM
macOS and iOS can do that too with the baked-in dictation. Globe key + D on Mac.
by artdigital
4/7/2026 at 7:13:06 AM
When you activate it, you agree that your voice input is sent to Apple. As far as I understand, this project runs fully locally. Up to you to decide whatever suits your needs best.
by dust42
4/7/2026 at 9:27:47 AM
Where did you get the idea that the voice input is sent to Apple / the cloud? As far as I understand, Apple's voice model runs locally for most languages.
Siri commands can be used for training, but they are also executed locally and sent to Apple separately (and this can be disabled).
by stingraycharles
4/7/2026 at 1:55:21 PM
I couldn't believe it either, but when you enable it in the macOS settings you get this popup:
> When you dictate text, information like your voice input and contact names are sent to Apple to help your Mac recognize what you're saying.
by angristan
4/7/2026 at 2:46:42 PM
Elsewhere it says:"When you use Dictation, your device will indicate in Keyboard Settings if your audio and transcripts are processed on your device and not sent to Apple servers. Otherwise, the things you dictate are sent to and processed on the server, but will not be stored unless you opt in to Improve Siri and Dictation."
And:
"Dictation processes many voice inputs on your Mac. Information will be sent to Apple in some cases."
In conclusion... I think they're trying to cover all their bases, but it sounds like things are processed locally as long as the hardware can handle it.
by wat10000
4/7/2026 at 11:38:28 AM
No, that is not correct. It is running one hundred percent locally. You can try it by turning off the internet on your phone and running it then. However, the built-in model isn't as good, so this is probably better.
by victorbjorklund
4/7/2026 at 12:01:08 PM
Nothing comes close to LLM transcription though. I just tried this. I said "globe key dictation, does this work?" Here's the transcription, verbatim:
"Fucking dictation, does this work"
by nidnogg
4/7/2026 at 6:37:22 AM
Yup, this is how I 'type'.
by dwayne_dibley
4/7/2026 at 6:49:24 AM
IMO one of the best. It was surprisingly good. Yet they can't even replicate it on their own systems.
by vharish