alt.hn

4/23/2026 at 12:14:55 AM

OpenAI model for masking personally identifiable information (PII) in text

https://openai.com/index/introducing-openai-privacy-filter/

by tanelpoder

4/24/2026 at 7:19:30 AM

SuperagentLM made on-edge PII redaction models available a few years ago already, in 20B, 3B, and 200M sizes. They still seem to be available via their legacy API - well worth checking out to compare against this one. https://docs.superagent.sh/legacy/llms/superagent-lm-redact-...

by mentalgear

4/23/2026 at 2:39:37 AM

I'm surprised nobody else has commented on this. This is a very straightforward and useful thing for a small locally runnable model to do.

by hiAndrewQuinn

4/23/2026 at 3:28:31 AM

And also something that it’s dangerous to try to do stochastically.

by apothegm

4/23/2026 at 3:51:52 AM

It's going to be stochastic in some sense whether you want it to be or not; human error never reaches zero percent. I would bet you a penny you'd get better results from one two-second automated pass plus your usual PII redaction than from your PII redaction alone.

by hiAndrewQuinn

4/23/2026 at 10:21:19 AM

I think the problem is most secrets aren't stochastic; they're deterministic. When the user types in the wrong password, it should be blocked. Using a probabilistic model suggests an attacker now only needs to be really close, not exactly correct.

Sure, there's some math that says the gap between really close and exact isn't a big deal; but then you're also saying your secrets don't need to be exact when they're decoded, and at the moment they absolutely do.

It sure looks like a weird privacy veil that sort of works for some things, like frosted glass. But picture a toilet stall made entirely of frosted glass: are you still comfortable using it?

by cyanydeez

4/23/2026 at 3:02:40 PM

The alternative being?

by moralestapia

4/23/2026 at 3:08:33 AM

Same here - this is an incredibly useful thing to have in the toolkit.

by ashwindharne

4/23/2026 at 3:12:48 PM

Exciting! I took a look through the code and found what appear to be the entity types for future releases - this release (V2 config) supports 8 entity types, but the V4 and V7 taxonomies have >20, mostly more personal ID types. Given this is a preview release, I imagine they'll release these.

Details in my review article here: https://piieraser.ai/blog/openai-privacy-filter. Disclaimer: I also build PII detection systems.

by aubinkure

4/23/2026 at 9:30:56 AM

There's some interesting technical details in this release:

> Privacy Filter is a bidirectional token-classification model with span decoding. It begins from an autoregressive pretrained checkpoint and is then adapted into a token classifier over a fixed taxonomy of privacy labels. Instead of generating text token by token, it labels an input sequence in one pass and then decodes coherent spans with a constrained Viterbi procedure.

> The released model has 1.5B total parameters with 50M active parameters.

> [To build it] we converted a pretrained language model into a bidirectional token classifier by replacing the language modeling head with a token-classification head and post-training it with a supervised classification objective.
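
For anyone curious what "decodes coherent spans with a constrained Viterbi procedure" looks like in practice, here is a minimal toy sketch. The BIO label set, scores, and transition rule below are my own illustrative assumptions, not details from the release:

```python
# Toy constrained Viterbi decoder over per-token label scores.
# Assumes a BIO labeling scheme; the hard constraint is that an I-X
# label may only follow B-X or I-X of the same entity type.
# The label taxonomy here is made up for illustration.

NEG_INF = float("-inf")
LABELS = ["O", "B-EMAIL", "I-EMAIL", "B-PHONE", "I-PHONE"]

def transition_ok(prev, cur):
    """Disallow an I- label unless it continues the same entity."""
    if cur.startswith("I-"):
        return prev in ("B-" + cur[2:], "I-" + cur[2:])
    return True

def decode(scores):
    """scores: one {label: log_score} dict per token -> best valid label path."""
    # A sequence can't open with an I- label.
    best = [{l: ((s, None) if not l.startswith("I-") else (NEG_INF, None))
             for l, s in scores[0].items()}]
    for t in range(1, len(scores)):
        col = {}
        for cur, s in scores[t].items():
            cands = [(best[t - 1][p][0] + s, p) for p in LABELS
                     if transition_ok(p, cur) and best[t - 1][p][0] > NEG_INF]
            col[cur] = max(cands) if cands else (NEG_INF, None)
        best.append(col)
    # Backtrace from the best final label.
    label = max(LABELS, key=lambda l: best[-1][l][0])
    path = [label]
    for t in range(len(scores) - 1, 0, -1):
        label = best[t][label][1]
        path.append(label)
    return path[::-1]

def spans(tokens, path):
    """Turn a BIO label path into (entity_type, text) spans."""
    out, cur = [], None
    for tok, lab in zip(tokens, path):
        if lab.startswith("B-"):
            cur = [lab[2:], [tok]]
            out.append(cur)
        elif lab.startswith("I-") and cur is not None:
            cur[1].append(tok)
        else:
            cur = None
    return [(etype, " ".join(words)) for etype, words in out]
```

The constraint is the point: even if a stray I- label has the highest raw classifier score on some token, the joint decode refuses to emit it without a matching B-, which is what you gain over a naive per-token argmax.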

by stratos123

4/23/2026 at 10:50:29 AM

It would be nice if their examples weren’t mostly things that are easy to catch with regex, but it’s cool to see it released as an open, local model.
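
To illustrate the point: the kinds of examples in question (well-formed emails and phone numbers) fall to a few lines of regex; a sketch, with patterns of my own that are deliberately simplistic:

```python
# Naive regex-based redaction for well-formed PII - the easy cases.
# A model only earns its keep on fuzzier PII (names, addresses,
# obfuscated contact info) that patterns like these miss.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
US_PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    text = EMAIL.sub("[EMAIL]", text)
    return US_PHONE.sub("[PHONE]", text)

print(redact("Reach me at jane@example.com or 555-123-4567."))
# Reach me at [EMAIL] or [PHONE].
```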

by mplanchard

4/23/2026 at 9:38:31 AM

50M active parameters is impressively light. Is there a similarly light model on the prompt injection side? Most of the mainstream ones seem heavier.

by Havoc

4/23/2026 at 3:37:55 AM

crying_laughing_emoji.gif

yes, please, feed daddy AI all your PII

by y0eswddl

4/23/2026 at 4:04:23 AM

Was my first thought as well, but this is an open weights model. You can run it on your own hardware.

by klauserc

4/23/2026 at 6:14:05 AM

> The model is available today under the Apache 2.0 license on Hugging Face and GitHub.

Bringing the Open back to OpenAI.

by 7777777phil

4/23/2026 at 10:34:40 AM

Where's the GGUF from Unsloth and co.?

by ndom91