2/18/2026 at 3:20:20 PM
There are two issues I see here (besides the obvious “Why do we even let this happen in the first place?”):
1. What happened to all the data Copilot trained on that was confidential? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?
2. This issue was found; unfortunately, without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.
There’s a serious, fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere.” Instead of taking a step back and rethinking that approach, we’re going to get pieced-together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken, whether it’s labeled “secure” or not.
by gortok
2/18/2026 at 4:53:00 PM
> "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured."I'd add (3) - a DLP policy is apparently ineffective at its purpose: monitoring data sharing between machines. (https://learn.microsoft.com/en-us/purview/dlp-learn-about-dl...).
Directly from the DLP feature page:
> DLP, with collection policies, monitors and protects against oversharing to Unmanaged cloud apps by targeting data transmitted on your network and in Microsoft Edge for Business. Create policies that target Inline web traffic (preview) and Network activity (preview) to cover locations like:
> OpenAI ChatGPT—for Edge for Business and Network options > Google Gemini—for Edge for Business and Network options > DeepSeek—for Edge for Business and Network options > Microsoft Copilot—for Edge for Business and Network options > Over 34,000 cloud apps in the Microsoft Defender for Cloud Apps cloud app catalog—Network option only
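For what it's worth, here's a minimal sketch of the pre-flight check you'd expect between a mailbox and a Copilot-style summarizer, where anything carrying a blocking sensitivity label never reaches the model. Every type and function name below is hypothetical and invented for illustration, not a real Purview, Graph, or Copilot API; the report suggests that, for the "work tab" chat, a boundary like this either isn't present or isn't honoring the labels.

```python
# Hypothetical sketch only: Email, BLOCKED_LABELS, and summarize_inbox are
# invented names for illustration, not real Purview/Graph/Copilot APIs.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: Optional[str]  # e.g. "Confidential", or None if unlabeled

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def eligible_for_summarization(msg: Email) -> bool:
    """True only if the message carries no blocking sensitivity label."""
    return msg.sensitivity_label not in BLOCKED_LABELS

def summarize_inbox(messages: list[Email], summarize: Callable[[str], str]) -> list[str]:
    # Filter first, then call the model -- the ordering is the whole point:
    # labeled content should never be sent to the summarizer at all.
    return [summarize(m.body) for m in messages if eligible_for_summarization(m)]

if __name__ == "__main__":
    inbox = [
        Email("lunch", "tacos at noon?", None),
        Email("M&A update", "target list attached", "Confidential"),
    ]
    # Stand-in "model" that just truncates; the labeled message never reaches it.
    print(summarize_inbox(inbox, lambda body: body[:40]))
```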
by carefulfungi
2/18/2026 at 6:12:34 PM
> a DLP policy is apparently ineffective at its purpose

/Offtopic
Yes, MSFT's DLP/software malfunctioned, but getting users to MANUALLY classify things as confidential is already an uphill battle. Those labels are for the rare subset of people who are aware of and compliant with NDAs/confidentiality agreements!
by caminante
2/18/2026 at 6:37:05 PM
Who can blame them, when in the end, it gets ignored anyways?
by ImPostingOnHN
2/19/2026 at 3:23:55 AM
I'm an AI researcher; here are my beliefs (it'll be clear in a second why I say beliefs rather than claiming objective facts).

1) You can't be sure it's gone. It's even questionable whether data can be removed at all (that's a longer discussion). These are compression machines, so the very act of training is compressing that information. The question really becomes how well that information is compressed or embedded into the model. On one hand, the models (typically) aren't invertible, so the information is less likely to have been compressed losslessly. On the other hand, because they aren't invertible, reversing them is probabilistic and they are harder to analyze in this sense.
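As a toy sketch of that "compression" point (my own illustration; the "confidential" line and the character-level n-gram model below are made up for the example, and real LLMs are vastly more complicated), even a trivial model fit on text containing a secret will hand the secret back when prompted with a fragment of it:

```python
# Toy illustration of training-as-compression: a character-level n-gram model
# fit on a corpus containing a confidential line will regenerate that line
# verbatim from a short prompt. All strings here are invented.
from collections import defaultdict

corpus = (
    "quarterly numbers look fine. "
    "CONFIDENTIAL: acquisition target is Contoso, offer 4.2B. "
    "see you at standup. "
) * 50  # heavy repetition stands in for over-exposure during training

N = 8  # context length in characters
model = defaultdict(dict)
for i in range(len(corpus) - N):
    ctx, nxt = corpus[i:i + N], corpus[i + N]
    model[ctx][nxt] = model[ctx].get(nxt, 0) + 1

def complete(prompt, length=60):
    out = prompt
    for _ in range(length):
        ctx = out[-N:]
        if ctx not in model:
            break
        # greedy: pick the most frequent continuation seen during training
        out += max(model[ctx], key=model[ctx].get)
    return out

# A short prompt overlapping the "secret" is enough to pull the rest back out.
print(complete("CONFIDENTIAL: acq"))
```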
2) As you may gather from 1), there are almost certainly more issues like this. There are many unknown unknowns waiting to be discovered. Personally, this is why I'm very upset that the field is so product-focused and that a large portion of it regards theory as pointless. Theory does two things for us, because it builds a deeper and more nuanced understanding. First, advancing theory lets us develop faster, since we can iterate on paper rather than purely through experimentation; that lets us better search the solution space and even understand our own understanding. Second, it leads to safer models, because you have to understand a model to understand where it fails and how to prevent those failures. Experimentation alone is incredibly naïve. It is like trying to prove the correctness of your programs through testing (see the issues with TDD). Tests are great, but they are bounds, not proofs. They can suggest safety and give you some level of confidence in it, but they cannot guarantee it. We all know that the deeper your understanding of the code, the better the tests you can write, and the same thing applies here: theory reduces your unknown unknowns, and even before strong proofs exist it gives us wider coverage in our testing.
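A toy sketch of that "bounds, not proofs" point (my own illustration; the function and its planted bug are made up, nothing here is from the incident): a few hand-picked unit tests pass while the implementation still violates its specification, and only sweeping the whole input domain exposes the counterexample.

```python
# Hypothetical example: a "clamp to [0, 255]" helper with a deliberately
# planted, easy-to-miss bug. The spot checks below all pass anyway.
def clamp_byte(x: int) -> int:
    """Intended behavior: clamp x into the range [0, 255]."""
    if x > 255:
        return 255
    if x < 0:
        return 0
    return x if x != 128 else 127  # the planted bug

# The kind of hand-picked cases a test suite typically contains -- all green.
assert clamp_byte(-5) == 0
assert clamp_byte(0) == 0
assert clamp_byte(42) == 42
assert clamp_byte(300) == 255

# Checking the specification over the whole (finite, toy-sized) domain is the
# closest analogue to a proof here, and it finds what the spot checks missed.
for x in range(-1000, 1000):
    expected = max(0, min(255, x))
    if clamp_byte(x) != expected:
        print(f"counterexample: clamp_byte({x}) == {clamp_byte(x)}, expected {expected}")
        break
```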
I think we're so excited right now that we're blinding ourselves. If we're cutting off or reducing fundamental research, then we are killing the pipeline of development. Theory is the foundation that engineering sits on top of. What worries me is that there are so many unknown unknowns, and everyone is eagerly saying "we just need 'good enough'" or "what's the minimum viable product?". These are useful tools and questions, but they have limits, and it gets dangerous when you're putting out the bare minimum at scale.
by godelski
2/18/2026 at 7:41:43 PM
Copilot is not a model, to my knowledge. When you’re asking about the data that it was trained on, you are most likely referring to an OpenAI or, in some circumstances, an Anthropic model. Customer data is not used for training the models that run Copilot.
by samch
2/18/2026 at 3:33:34 PM
All the vendors paraphrase user data, then use the paraphrased data for training. This is what their terms of service say.

They have significant experience in this. Microsoft software since 2014, for the most part, is also paraphrased from other people's code they find laying around online.
by doctorpangloss
2/18/2026 at 4:21:09 PM
> All the vendors paraphrase user data, then use the paraphrased data for training. This is what their terms of service say.

It depends. E.g., OpenAI says: "By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API." [0]
[0] https://openai.com/policies/how-your-data-is-used-to-improve...
by benterix
2/18/2026 at 7:46:33 PM
"By default" is a fantastic escape catch in the language used there. So... What are the exceptions?by shakna
2/18/2026 at 7:44:30 PM
Why would they want to train on random garbage proprietary emails?

If their models ever spit out obviously confidential information belonging to their paying customers, they'll lose those paying customers to their competitors - and probably face significant legal costs as well.
Your random confidential corporate email really isn't that valuable for training. I'd argue it's more like toxic waste that should be avoided at all costs.
by simonw
2/19/2026 at 2:31:50 PM
Your opinion seems a little unimaginative. To me, since email is the primary work output of millions of Americans, including the country's leaders, there is a lot of opportunity there.
by doctorpangloss
2/19/2026 at 3:10:58 PM
Valuable enough to take on that moral, legal and financial risk?
by simonw
2/18/2026 at 6:17:38 PM
> Microsoft software since 2014, for the most part, is also paraphrased from other people's code they find laying around online.

That was pretty funny and explains a lot.
I wish I could do more :(
Instead I always break things when I paraphrase code without the GeniusParaphrasingTool
by moritzwarhier
2/18/2026 at 7:12:28 PM
This is exactly why I moved to self-hosted code in 2017.

While I couldn’t have predicted the future, even classic data mining posed a risk.
It is just reality that if you give a third party access to your data, you should expect them to use it.
It is just too tempting a value stream, and legislation just isn’t there to prevent the EULA trap.
I was targeting a market where fractions of a percentage point of advantage mattered, which did drive what was, at the time, labeled paranoia on my part.
by nyrikki