3/25/2026 at 10:00:38 PM
Meanwhile they are pushing AI transcription and note-taking solutions hard. Patients are guilted into allowing doctors to use them. I have gotten pushback when asking to have it turned off.
The messaging is that it all stays local. In reality it doesn't: when I last looked, it was running on Azure OpenAI in Australia.
I spoke to a practice nurse a few days ago to discuss this.
She said she didn't think patients would care if they knew the data would be shipped off-site. She said people's problems are not that confidential and their health data is probably online anyway, so who cares.
by samglass09
3/25/2026 at 11:04:55 PM
It's honestly such a big problem. One of my colleagues uses an AI scribe. I can't rely on any of his chart notes because the AI sometimes hallucinates (I've already informed him). It also tends to include a ridiculous amount of totally unnecessary detail while leaving out important things (med rec, consults, etc.) that I still need to comb through patient charts for. In the end it creates more work for me. And if my colleague ever gets a college complaint, I have no clue how he's gonna navigate any AI-generated errors. I'm all for AI, and it's great for things like copywriting, brainstorming, and code generation. But from what I'm seeing, it's creating a lot more headaches in the clinical setting.

If you're wondering why this guy doesn't just check the AI scribe notes: well, probably because with the amount of detail it writes, he'd be better off writing a quick SOAP note himself.
by taikon
3/26/2026 at 12:15:35 AM
> I'm all for AI and it's great for things like copywriting, brainstorming and code generation

It's funny how the assumption is always that LLMs are very useful in an industry other than your own.
by batshit_beaver
3/26/2026 at 2:39:57 AM
I mean, they are not wrong. For all the whinging about bugs and errors around here, the software industry in general (some niche sub-fields excepted) long ago decided 80% is good enough to ship and we will figure the rest out later. This entire site is based on startup culture, which largely prided itself on MVP moonshots.
Plus plenty of places are perfectly fine with tech debt, and the AI fire hose is effectively tech debt on steroids: it creates it at scale, but it can also help in understanding it.
It is its own panacea, in a way.
I think it is gonna be a while before the industry figures out how to handle this better, so in software you might as well just ride the wave and not worry too much about it.
Still, software is not medicine, even if software is required in basically every industry now. Medicine should be more conservative and wait till things settle down before jumping in.
by tempest_
3/26/2026 at 12:37:33 AM
My (extensive) experience with LLM code generation is that it has the same issues you describe in your field: hallucinations, over-engineering, missed requirements and patterns.

But engineers have these same problems. The key is that the content creator (the engineer for codegen, the doctor for medicine) is still responsible for the output of the AI, as if they wrote it themselves. If they make a mistake with an AI (e.g., include false data from a hallucination), they should be held accountable the same way they would be if they made the mistake without it.
by _AzMoo
3/26/2026 at 1:43:41 AM
Okay, but since we know how humans actually behave, they will fully trust the nondeterministic machine and give away their thinking. Sadly there is a large swath of humans who will act like this, maybe 20-30%.

Are you willing to put your life in the hands of these people fully using the machines to do everything?

Acting like smart people aren't getting one-shotted by these machines is very dangerous. Even worse is how quickly your skills actually degrade. If I knew my doctor was using anything LLM-related, I would switch doctors.
by shimman
3/25/2026 at 11:20:50 PM
It feels very much like AI is creating AI lock-in (if not AI _vendor_ lock-in) by generating so much detailed information that it's futile to consume it without AI tools.

I was updating some GitLab pipelines and some simple testing scripts, and it created 3 separate 300+ line README-type metadata files (I think even the QUICKSTART.md was 300 lines).
by rconti
3/26/2026 at 12:29:00 AM
> I'm all for AI and it's great for things like copywriting, brainstorming and code generation

That's funny. I would have said the same thing about your field prior to reading your comment.
by acuozzo
3/26/2026 at 2:39:22 AM
sounds like they need a better instructions.md

by dmtroyer
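For illustration, a hypothetical instructions.md along these lines could rein in the documentation sprawl described upthread (the file name convention is real for some coding agents, but these particular rules are invented for the sketch):

    # instructions.md

    ## Documentation rules
    - Do not create new README, QUICKSTART, or other metadata files
      unless explicitly asked to.
    - When documentation is requested, keep it under ~30 lines and add
      it to the existing README instead of a new file.
    - Prefer editing existing docs over adding new ones.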
3/25/2026 at 10:34:29 PM
Is there nothing like HIPAA there, or what?

by SpaceNoodled
3/25/2026 at 10:44:31 PM
Very few protections. The entire medical records of a significant percentage of the NZ population were stolen recently and put up for sale online. Zero consequences for the medical practices that adopted the hacked software.

by samglass09
3/26/2026 at 3:19:42 AM
Interesting; a person was telling me recently that NZ privacy laws were quite strong. Perhaps a different category.

by mixmastamyk
3/26/2026 at 7:31:29 AM
The laws are; the policing is not. At least not in medical data.

by peterashford
3/25/2026 at 10:46:43 PM
Many AI companies, including Azure with their OpenAI hosting, are more than willing to sign privacy agreements that allow processing sensitive medical data with their models.

by lights0123
3/25/2026 at 10:58:19 PM
The devil is in the details. For example, OAI does not have regional processing for AU [0], and their ZDR (zero data retention) does not cover files [1]. Anthropic's ZDR [2] also does not cover files. So as a patient/consumer you really need to be careful to ensure that your health data, or other sensitive data, being processed by SaaS frontier models is not contained in files. That is asking a lot of the medical provider: they would have to know how their systems work, and they won't, which is why I will never opt in.

[0] https://developers.openai.com/api/docs/guides/your-data#whic...
[1] https://developers.openai.com/api/docs/guides/your-data#stor...
[2] https://platform.claude.com/docs/en/build-with-claude/zero-d...
by Ucalegon
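For illustration, a minimal sketch of what "keep sensitive data out of files" looks like in practice, assuming the OpenAI Python SDK; the function, model name, and prompt are hypothetical, and the retention caveat it works around is the one cited above:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_consult(transcript: str) -> str:
        # Keep the sensitive text inline in the messages. Deliberately
        # avoid client.files.create() and file attachments, since (per
        # [1]/[2] above) zero-data-retention terms do not cover stored
        # files.
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": "Summarize this consultation."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content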
3/26/2026 at 1:45:21 AM
Azure OpenAI is not the same as paying OpenAI directly. While you may not be able to pay OpenAI to run models in Australia, you can pay Azure: https://azure.microsoft.com/en-au/pricing/details/azure-open...

The models are licensed to Microsoft, and you pay them for the inference.
by lights0123
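As a rough sketch of that distinction, here is how a client would target an Azure-hosted deployment rather than OpenAI's own API, assuming the openai Python SDK's AzureOpenAI client; the resource, deployment, and API-version strings are hypothetical:

    from openai import AzureOpenAI

    client = AzureOpenAI(
        # The processing region is fixed by where the Azure resource
        # was created, e.g. an Australia East resource:
        azure_endpoint="https://my-au-resource.openai.azure.com",
        api_key="<azure-key>",     # billed through Azure, not OpenAI
        api_version="2024-06-01",  # illustrative API version
    )

    response = client.chat.completions.create(
        model="my-gpt4o-deployment",  # Azure deployment name, not a model name
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.choices[0].message.content)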
3/26/2026 at 2:07:34 AM
There is no way to upload files as part of the context with Azure deployments; you have to use the OAI API [0]. And without an architecture diagram of the solution, I am not going to trust it, given the known native limitations of Azure's OAI implementation.

by Ucalegon
3/25/2026 at 11:54:31 PM
I spec'd up an implementation of this that uses a hardware button with colors, within reach of either party. The customer went with a different vendor based on price/"complexity"/training.

by fhub
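A minimal sketch of what such a consent button might look like, assuming a Raspberry Pi with gpiozero; the pins, colors, and start/stop hooks are hypothetical, since the original spec is not described in this detail:

    from gpiozero import Button, RGBLED
    from signal import pause

    button = Button(17)                     # physical button within reach of either party
    led = RGBLED(red=9, green=10, blue=11)  # status light both parties can see

    recording = False

    def start_scribe():
        print("scribe on")   # placeholder: hook into the transcription service

    def stop_scribe():
        print("scribe off")  # placeholder: hook into the transcription service

    def toggle_consent():
        global recording
        recording = not recording
        if recording:
            led.color = (0, 1, 0)  # green = consent given, scribe may record
            start_scribe()
        else:
            led.color = (1, 0, 0)  # red = recording disabled
            stop_scribe()

    led.color = (1, 0, 0)  # default to "not recording" until consent is given
    button.when_pressed = toggle_consent
    pause()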
3/26/2026 at 4:25:57 AM
The wilful ignorance and total apathy are appalling.

I've had similar experiences in Australia. I emailed one of my docs' practices asking whether they use Heidi AI (or anything similar) and stating that I do not consent. They were using it without my consent.
In the consultation, he tried to give me the spiel, including the 'it stays local' thing. The Heidi AI website has the scripts for clinicians; he ran through them all.
Oh, their documents for clinicians also mention every two sentences that patient/client consent is not required at all. I wonder why they keep saying that? Hmm.
This doctor knows I am a developer. When I asked him to explain what he meant by 'local data', he said the servers were in Australia. I almost flipped the desk. Aside from the fact that keeping the data in Australia is mandatory (it's the law! they do not have a choice!), it's... kind of meaningless where the servers are, especially when he (on behalf of Heidi AI) was trying to sell it as a security or privacy feature. When I pointed that out, he just couldn't wrap his head around it. Of course he can't; he doesn't understand.
AHPRA's "Meeting your professional obligations when using Artificial Intelligence in healthcare" guideline[0] (not any kind of enforceable requirement, unfortunately) has great stuff in it. It encourages using it with the informed consent of patients. Even if my doctor read it and agreed with it, and cared about getting consent, how the hell can he inform patients sufficiently when he has absolutely no idea about, well, anything?
He keeps pushing it and asking me about whether I've changed my mind about allowing him to use it. No! He keeps asking me questions that only confirm he hasn't even done a perfunctory web search about why some people hate LLMs, especially in the context of PII and PHI.
I really do feel for clinicians, but these products are not the answer.
[0] https://www.ahpra.gov.au/Resources/Artificial-Intelligence-i...
by magnetowasright
3/25/2026 at 10:57:05 PM
The New Zealand Chief Digital Officer allowed Australian cloud providers to be used as there weren't suitable NZ data centers, and this was many years ago.

by jimjimjim
3/25/2026 at 11:09:24 PM
Health NZ adopted Snowflake. It was about costs/fancy tech. We have always had data centres. Nobody *needs* Snowflake; they could have used Apache Spark.

by samglass09
3/26/2026 at 2:26:06 AM
What are you talking about? NZ has had suitable DCs for decades now.

by bfivyvysj
3/26/2026 at 12:00:24 AM
Didn't Health NZ just suffer a major data breach and have patient records ransomed?

by tjpnz
3/26/2026 at 12:29:09 AM
There were two serious breaches recently, but they were at private companies, not HNZ.

by antod
3/26/2026 at 5:07:10 AM
I genuinely think people “care” about it (in quotes). It’s one of those things where nobody cares unless something bad happens, and when the bad thing happens, they shrug it off and forget about it a week later.

I’d go as far as saying she’s right. And we’re in a tiny minority for even thinking about it.
by tokioyoyo