5/19/2025 at 4:34:10 AM
> A relation should be identified by a natural key that reflects the entity’s essential, domain-defined identity — not by arbitrary or surrogate values.

I fairly strongly disagree with this. Database identifiers have to serve a lot of purposes, and a natural key is almost certainly not ideal for all of them. Off the top of my head, IDs can be used for:
- Joins, lookups, indexes. Here data type can matter regarding performance and resource use.
- Idempotency. Allowing a client to generate IDs can be a big help here (i.e., UUIDs)
- Sharing. You may want to share a URL to something that requires the key, but not expose domain data (a URL to a user’s profile image shouldn’t expose their national ID).
There is not one solution that handles all of these well. But using natural keys is one of the least good options.
Also, we all know that stakeholders will absolutely swear that there will never be two people with the same national ID. Oh, except unless someone died, then we may reuse their ID. Oh, and sometimes this remote territory has duplicate IDs with the mainland. Oh, and for people born during that revolution 50 years ago, we just kinda had to make stuff up for them.
So ideally I’d put a unique index on the national ID column. But realistically, it would be no unique constraint and instead form validation + a warning on anytime someone opened a screen for a user with a non-unique ID.
Then maybe a BIGINT for the database ID, and a UUIDv4/7 for exposing to the world.
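The layout described above can be sketched with Python's sqlite3 (table and column names here are hypothetical, and SQLite's INTEGER PRIMARY KEY stands in for a BIGINT): an internal surrogate key, a UUID for external exposure, and a deliberately non-unique index plus an application-level warning on the national ID.

```python
import sqlite3, uuid

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id          INTEGER PRIMARY KEY,   -- internal surrogate key (BIGINT role)
    public_id   TEXT NOT NULL UNIQUE,  -- UUID exposed to the outside world
    national_id TEXT                   -- natural attribute, deliberately NOT unique
);
CREATE INDEX idx_users_national_id ON users (national_id);  -- non-unique on purpose
""")

def create_user(national_id):
    public_id = str(uuid.uuid4())
    cur = conn.execute(
        "INSERT INTO users (public_id, national_id) VALUES (?, ?)",
        (public_id, national_id))
    return cur.lastrowid, public_id

def national_id_warnings(national_id):
    # Form-validation style check: warn about duplicates instead of
    # enforcing uniqueness with a constraint.
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM users WHERE national_id = ?",
        (national_id,)).fetchone()
    return count > 1

uid1, _ = create_user("AB123456")
uid2, _ = create_user("AB123456")  # duplicate accepted, only flagged
print(national_id_warnings("AB123456"))  # True: duplicate detected, not rejected
```

The point of the non-unique index is that lookups by national ID stay fast while the messy real-world cases (reused, duplicated, made-up IDs) can still be recorded.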
EDIT: Actually, the article is proposing a new principle. And so perhaps this could indeed be a viable one. And my comment above would describe situations where it is valid to break the principle. But I also suspect that this is so rarely a good idea that it shouldn’t be the default choice.
by adamcharnock
5/19/2025 at 5:46:57 AM
> will absolutely swear that there will never be two people with the same national ID...

I suddenly got flashbacks.
There are duplicate ISBN numbers for books, despite the system being carefully designed to avoid this.
There are ISBNs with invalid checksums that are nonetheless circulating as valid ISBNs, invalid number printed in the barcode and everything. Either the checksum was calculated incorrectly, or it was simply a misprint.
The same book can have hundreds of ISBNs.
There is no sane way to determine if two such ISBNs are truly the same (page numbers and everything), or a reprint that has renumbered pages or even subtly different content with corrected typos, missing or added illustrations, etc...
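For reference, the check-digit rule the misprinted books fail can be sketched in a few lines (the example ISBN below is the standard textbook one, not one of the misprints being described):

```python
def isbn13_check_digit(first12: str) -> int:
    # ISBN-13: weight digits alternately by 1 and 3; the check digit
    # brings the total to a multiple of 10.
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn: str) -> bool:
    digits = isbn.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    return isbn13_check_digit(digits[:12]) == int(digits[12])

print(is_valid_isbn13("9780306406157"))  # True: checksum holds
print(is_valid_isbn13("9780306406158"))  # False: one digit off
```

A misprinted book is the second case: a structurally plausible 13-digit string whose checksum simply does not hold, yet it is the identifier physically attached to the product.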
Our federal government publishes a master database of "job id" numbers for each profession one could have. This is critical for legislation related to skilled migrants, collective workplace agreements, etc...
The states decided to add one digit to these numbers to further subdivide them. They did it differently, of course, and some didn't subdivide at all. Some of them have typos with "O" in place of "0" in a few places. Some states dropped the leading zeroes, and then added a suffix digit, which is fun.
On and on and on...
The real world is messy.
Any ID you don't generate yourself is fraught with risk. Even then there are issues such as what happens if the database is rolled back to a backup and then IDs are generated again for the missed data!
by jiggawatts
5/19/2025 at 4:15:23 PM
> The states decided to add one digit to these numbers to further subdivide them. They did it differently, of course, and some didn't subdivide at all. Some of them have typos with "O" in place of "0" in a few places. Some states dropped the leading zeroes, and then added a suffix digit, which is fun.
Any identifier that is composed of digits but is not a number will accumulate a hilariously large number of mistakes and alterations like the ones you describe.

In my own work I see this all the time with FIPS codes and parcel identifiers -- mostly because someone has round-tripped their data through Excel, which autocasts the identifiers to numeric types.
Federal GEOIDs are particularly tough because the number of digits defines the GEOID type and there are valid types for 10, 11 and 12-digit numbers, so dropping a leading zero wreaks havoc on any automated processing.
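A sketch of why the dropped zero is unrecoverable in general (the GEOID lengths here are the 10/11/12-digit types mentioned above; the example value is hypothetical): once Excel has cast the string to a number, the intended type has to come from somewhere out of band.

```python
VALID_GEOID_LENGTHS = {10, 11, 12}  # the valid types mentioned above

def restore_geoid(raw: str, expected_length: int) -> str:
    # Excel-style round-tripping turns "01073000100" into 1073000100.
    # The leading zero can only be restored if the intended type
    # (and therefore length) is known from outside the data itself.
    if expected_length not in VALID_GEOID_LENGTHS:
        raise ValueError(f"unknown GEOID type of length {expected_length}")
    padded = str(raw).zfill(expected_length)
    if len(padded) != expected_length:
        raise ValueError(f"{raw!r} cannot be a {expected_length}-digit GEOID")
    return padded

damaged = str(int("01073000100"))   # leading zero lost: '1073000100'
print(len(damaged))                # 10 digits -- now indistinguishable from a valid 10-digit type
print(restore_geoid(damaged, 11))  # correct only if we already know it was 11-digit
```

The trap is the second print: without the expected length, the damaged value is a perfectly plausible GEOID of a different type, so automated processing silently does the wrong thing.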
There's a lot of ways to create the garbage in GIGO.
by lscharen
5/19/2025 at 6:51:18 AM
This isn’t a new principle; it was part of database design courses in the early 2000s at least. However, from a couple of decades of bitter experience I say external keys should never be your primary keys. They’ll always change for some reason.

Yes, you can create your tables with ON UPDATE CASCADE foreign keys, but are those really the only places the ID is used?
Sometimes your own data does present a natural key though, so if it’s fully within your control then it’s a decent idea to use.
by sitharus
5/19/2025 at 2:58:58 PM
Even internal keys.

For example, suppose you have an information management system where the user can define their own logical fields or attributes. Naturally those names should uniquely identify a field, which makes them an easy candidate for a natural key. But I would still use a surrogate key.
I've worked in two systems that had this feature. In one the field name was a primary key, and in the other they used a surrogate key and a separate uniqueness constraint. In both a requirement to let users rename fields was added later. In the one that used a surrogate key, this was an easy and straightforward change.
In the one that used field name as a natural key, the team decided that implementing the feature wasn't feasible. The name change operation, with its cascade of values needing to be updated for all the foreign key relationships, would have been way too big a transaction to be doing on the fly, in a live database, during high usage times of day. So instead they added a separate "display name" field and updated all the queries to coalesce that with the "official" name. Which is just ugly, and also horribly confusing. Especially in oddball cases where users did something like swapping the names of two fields.
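A minimal sketch (in Python's sqlite3, with hypothetical table names) of why the surrogate-key version made the rename easy: the name lives in exactly one row, while every value row references the unchanging surrogate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fields (
    field_id INTEGER PRIMARY KEY,   -- surrogate key
    name     TEXT NOT NULL UNIQUE   -- uniqueness still enforced, just not as the PK
);
CREATE TABLE field_values (
    record_id INTEGER NOT NULL,
    field_id  INTEGER NOT NULL REFERENCES fields(field_id),
    value     TEXT
);
""")
conn.execute("INSERT INTO fields (name) VALUES ('colour')")
conn.executemany(
    "INSERT INTO field_values VALUES (?, 1, ?)",
    [(i, "red") for i in range(1000)])

# Renaming the field touches exactly one row; the thousand value rows
# referencing field_id 1 are never rewritten.
conn.execute("UPDATE fields SET name = 'color' WHERE field_id = 1")
(n,) = conn.execute(
    "SELECT COUNT(*) FROM field_values WHERE field_id = 1").fetchone()
print(n)  # 1000 references, all intact
```

With the field name as the primary key, that same rename becomes an update cascading through every row of `field_values`, which is the transaction the other team judged infeasible on a live database.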
by bunderbunder
5/19/2025 at 8:43:45 AM
The advice used to be that "natural keys can only be primary if they don't change", but there was always an exception to the rule after a few months.

On the other hand, I remember a CEO wanting to be #1 in the database, so even non-natural primary keys can change... haha.
by whstl
5/20/2025 at 12:14:27 AM
> but are those really the only places the ID is used?

I'm curious, where else would they be used?
by b-man
5/21/2025 at 9:18:02 AM
Logs, metrics, data analytics, identifiers sent to third parties. Unless your system is small enough that you, or a few people who can be trusted, can know every component, your IDs will leak out somewhere.

by sitharus
5/22/2025 at 2:52:05 AM
What you need is to add temporality to your tables. Then your logs/dependencies will work just fine.

by b-man
5/19/2025 at 2:42:07 PM
Use both. For each table, I use an internal ID that is auto-generated, and an external UUID.

by prmph
5/19/2025 at 6:08:11 AM
> Allowing a client to generate IDs can be a big help here (ie UUIDs)

Trusting the client to generate a high-quality ID has a long history of being a bad idea in practice. It requires the client to not be misconfigured, not be hacked, not be malicious, not have hardware bugs, etc. A single server can generate hundreds of millions of IDs per second and provides a single point of monitoring and control.
by jandrewrogers
5/19/2025 at 6:33:04 AM
In context I read that as the database client, i.e. the application server (which is a client to the database) providing the service to the user. Having that be able to generate IDs could be useful when needing to refer to the same entity, even if some data has to exist in a separate database for some reason.

by treyd
5/19/2025 at 6:43:34 AM
That is indeed what I had in mind, although I left it intentionally vague as everyone can assess what’s best for their own situation.

by adamcharnock
5/19/2025 at 6:51:31 AM
I once attempted to implement a solution where IDs are generated by UUIDv5 from a certain owner and the relationship of the new item to that owner; that way, users cannot generate arbitrary IDs but can still predict their new IDs ahead of time to ease optimistic behaviour.

by qazxcvbnm
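The UUIDv5 scheme described in the comment above can be sketched with the stdlib `uuid` module (the owner URL and relationship strings here are hypothetical): the same owner namespace and relationship always derive the same ID, so a client can compute it before the server does, but cannot choose an arbitrary value.

```python
import uuid

# Hypothetical owner namespace: each owner gets a namespace UUID,
# and child IDs are derived from it deterministically.
owner_ns = uuid.uuid5(uuid.NAMESPACE_URL, "https://example.com/owners/42")

def child_id(relationship: str) -> uuid.UUID:
    # Same owner + same relationship always yields the same UUID, so a
    # client can compute it ahead of time for optimistic UI updates,
    # while the server can independently verify it.
    return uuid.uuid5(owner_ns, relationship)

a = child_id("attachment/1")
b = child_id("attachment/1")
c = child_id("attachment/2")
print(a == b, a == c)  # True False: deterministic, but distinct per relationship
```

As a bonus, collisions become meaningful: if a client submits an item whose derived ID already exists, it is the same logical item, which is exactly the idempotency property discussed upthread.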
5/19/2025 at 9:31:06 AM
I think if you have (tenant, uuid) or (user, uuid) or whatever as your primary key, that's fine. If that tenant or user or whatever generates rubbish keys, that's their problem.

by adrianmsmith
5/19/2025 at 9:35:35 AM
As someone who currently has to deal with a database that has NO surrogate keys, I 100% agree with always having surrogate keys. Compound natural keys are just plain awful to deal with.

Although it doesn't help that we have to do string manipulation to extract parts of those natural keys from other fields that contain a bunch of data as one long string.
by Akronymus
5/19/2025 at 2:56:04 PM
The writing style was already obtuse and off-putting. Adding this 'new' principle (claimed as an innovation) leads me to believe it's not worth my time to read. Additionally, the principles are great starting points, but real-world uses may require deviation, and any thought that absolutes should prevail indicates a lack of experience.

by karmakaze
5/19/2025 at 5:56:27 AM
I'm with you. I've used natural keys in the past, and they've always become a problem eventually.

On the other hand, I've used surrogate keys for 20 years and never encountered an issue that wasn't simple to resolve.
I get there are different camps here, and yes your context matters, but "I'm not really interested in why natural keys worked for you." They don't work for me. So arguments for natural keys are kinda meh.
I guess they work for some folk (shrug).
by bruce511
5/19/2025 at 9:44:26 AM
> Joins, lookups, indexes.

You want to control these values within your database engine, at least so that they are actually unique within the domain, and there is no real reason for them to be user-controlled anyway, as they are used referentially.
> Idempotency.
User-supplied idempotency tokens are mostly useful in the broader context of the application sitting on top of the database; otherwise they become subject to the same requirements as internally generated ones, but without the control, which is a recipe for disaster.
> Sharing
Those are the same idempotency tokens from the previous point, with you as the supplier. In some cases you want to share them across prod/stage/dev environments, in some cases you may want to explicitly avoid duplicates, and in some cases you don't care.
All these use cases are solved with mapping tables / classifiers.
Example: in an asset management system you need to expose an identifier for a user-registered computer that is built from components procured from different suppliers with their own identification schemas, e.g. exposing a keyboard/mouse combo as one component with two subcomponents or as two (possibly linked) components.
This requires you to use all of the listed identifier types in different parts of the system. You can bake those into the database/schema design, or employ some normalization and use a "native" identifier plus mapping tables.
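The mapping-table approach described above can be sketched in Python's sqlite3 (table names and supplier schemes are hypothetical): one "native" internal identifier per component, with supplier and user-facing identifier schemes living in a separate table rather than in the schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE components (
    component_id INTEGER PRIMARY KEY          -- "native" internal identifier
);
CREATE TABLE external_ids (
    component_id INTEGER NOT NULL REFERENCES components(component_id),
    scheme       TEXT NOT NULL,               -- e.g. 'supplier_a_sku', 'asset_tag'
    value        TEXT NOT NULL,
    UNIQUE (scheme, value)                    -- unique within each scheme only
);
""")
conn.execute("INSERT INTO components DEFAULT VALUES")
conn.executemany(
    "INSERT INTO external_ids VALUES (1, ?, ?)",
    [("supplier_a_sku", "KB-0042"), ("asset_tag", "IT-9913")])

# Resolve the internal ID from any external scheme without baking
# supplier identifier formats into the schema design.
(cid,) = conn.execute(
    "SELECT component_id FROM external_ids WHERE scheme = ? AND value = ?",
    ("asset_tag", "IT-9913")).fetchone()
print(cid)  # the native component_id
```

Adding a new supplier with yet another identifier format is then a new `scheme` value, not a schema migration.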
by friendzis
5/19/2025 at 2:45:00 PM
I agree - in many cases a surrogate key is better than a natural key.

However, there is a problem when people start creating surrogate keys by default and stop thinking about the natural candidate key at all, never mind not putting on the appropriate constraints.
Especially in the case where the natural key values are within your control. Also, not using them for the key (and FK) can create the need for additional joins just to go from the surrogate to the natural key value.
by DrScientist
5/19/2025 at 3:54:17 PM
We can debate the first usage (a small type for storage/CPU optimization), but the other two are actually good examples of natural keys!

Note the quotes:
> Every software project needs to represent the reality of the business he is embedded in...
> Such database needs to be designed to properly reflect reality. This can’t be automated, since the semantics of the situation need to be encoded in a way that can be processed by a computer.
Having the business need to share an ID, not give away internal information, make some data opaque, use an 'arbitrary' ID to share as a 'textual pointer' to the actual data, etc. are valid natural keys.
What is wrong is adding a surrogate key just because, when in fact a natural key (to the domain, the business, the requirements) is the actual answer.
I discovered this fact when implementing sync. The moment I got rid of UUIDs and the like and instead kept the data as its natural self, things became MORE clear:
    ulid: 01H6GZ8YQJ code: A1 name: A
    ulid: 01H6GZ8YQK code: A1 name: A
    ulid: 01H6GZ8YQL code: A1 name: A
    // All the same, obscured by the wrong key

vs

    code: A1 name: A
(i.e., the actual data and what I need to model a sync are NOT the same. It is similar to how in 'git' your code is one thing, clearly distinct from the internal git structures that encode the repo.)
by mamcx
5/21/2025 at 2:34:32 PM
I'd expect both code and name to be subject to deliberate changes (e.g. name "A" is extended to "Alpha" after the name column is widened from 1 to 5 characters) or "technical" edits (e.g. due to out-of-order data entry, code "A1" should become "A2" in order to insert the proper A1 record). Anything the user sees is a commitment.

by HelloNurse
5/21/2025 at 6:55:44 PM
Sure. Data is not static, but it is foolish to pretend it is. Adding surrogate keys doesn't change that; it only adds another column that COULD change.

by mamcx
by mamcx
5/19/2025 at 2:57:13 PM
- The author makes an ontological statement
- Somehow someone feels it's appropriate to talk about mechanisms

by marcnavarre
5/19/2025 at 5:28:38 AM
Why have both a database ID and a UUIDv7, versus just a UUIDv7?

by Jarwain
5/19/2025 at 6:16:41 AM
Actually, it should be a database ID and an encrypted database ID, which doesn’t require storing a second ID. Even better, you can make the encryption key unique per session so that users can’t share keys. For security reasons, it is a bad idea to leak private state, which UUIDv7 does.

A single AES encryption block is the same size as a UUID and cheap to compute.
by jandrewrogers
5/19/2025 at 7:20:33 AM
So you've got a database ID, either serial or UUID, and you encrypt it when you send it to the user, maybe encrypted against their JWT or something to maintain stateless sessions? And I guess if the user references specific resources by ID you'd have to translate back? Assuming the session has been maintained correctly, which I guess is a plus for stateful sessions. And it doesn't really matter if you get a collision on the encrypted output.
I spent enough time in the nibling comment talking about my doubts about the advice not to publicly share the identifying key. But I'll add one more point: it feels like a bunch of added complexity for marginal benefit.
by Jarwain
5/22/2025 at 5:48:30 PM
Which mode of AES do you mean? AES-ECB seems to fit your example, but wouldn't it still expose the incremental nature of UUIDv7 with enough samples (see https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation, the Linux Tux image example)? Other AES modes produce much longer results, even when the ID is encoded with Base58 or the like.

by sysysy
5/19/2025 at 7:27:11 AM
Can you explain this a bit more or link to something? I don’t really understand. What’s encrypted? A GUID? A monotonic integer ID? Is the encrypted ID only used for user-facing stuff? How is it decrypted? What do you gain by this?

by lukevp
5/19/2025 at 4:06:32 PM
I don’t have a link; I’ve never seen a good writeup, but the practice is really old.

It is literally encrypting whatever type you are using as a handle for records that you send the user, typically an integer, a UUID, or some slightly more complex composite key. The only restriction is that the key should not be longer than 128 bits so that you can encrypt it to a UUID-like value; in practice you can almost always guarantee this [0]. The encryption only happens at the trust boundary, i.e. when sending it to a random user. The encrypted key is not stored or used at all internally; it just changes the value of the key the user sees.
Most obvious ways of generating handles in a database leak private state. This is routinely exploited almost to the point of being a meme. Encrypting the handles you show the user prevents that.
An advantage of this is that if you are systematically encrypting exported keys then you can attach sensitive metadata to those keys at runtime if you wish, which can be very convenient. You have 128 bits to work with, and a unique serial only needs 64 at most. If you are paranoid, there are schemes that enable some degree of key authentication. And while well beyond the scope here, similar constructions work nicely for compact keys in federated data models.
At scale, the cost (storage, compute, etc) of all of this matters. Encryption of keys, if done intelligently, is atypically efficient on all accounts while providing explicit inspectable security guarantees.
[0] There are reasons you might want to expand the exported key to a 256-bit value, particularly in high-assurance type environments, but the advantage of 128-bits is that it is literally drop-in compatible with UUIDs almost everywhere.
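As a runnable illustration of the encrypted-handle idea described in this comment: the stdlib has no AES, so the sketch below substitutes a toy 4-round Feistel network over blake2b as the keyed, invertible 128-bit permutation. This is an assumption-laden stand-in for teaching purposes only; a real deployment would encrypt the 16-byte value with a single AES block from a proper crypto library.

```python
import hashlib

SECRET = b"server-side-secret"  # assumption: held only by the application

def _f(half: bytes, i: int) -> bytes:
    # Keyed round function: 8-byte digest of half-block + secret + round index.
    return hashlib.blake2b(half + SECRET + bytes([i]), digest_size=8).digest()

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def permute_128(block: bytes, decrypt: bool = False) -> bytes:
    # Toy 4-round Feistel network: a keyed, invertible permutation of a
    # 128-bit value, standing in for one AES block.
    left, right = block[:8], block[8:]
    if not decrypt:
        for i in range(4):
            left, right = right, _xor(left, _f(right, i))
    else:
        for i in reversed(range(4)):
            left, right = _xor(right, _f(left, i)), left
    return left + right

def export_key(serial: int) -> str:
    # Internal 64-bit serial -> opaque 128-bit handle shown to the user.
    return permute_128(serial.to_bytes(16, "big")).hex()

def import_key(token: str) -> int:
    # At the trust boundary, map the opaque handle back to the serial.
    return int.from_bytes(permute_128(bytes.fromhex(token), decrypt=True), "big")

token = export_key(12345)
print(token)              # 32 hex digits, nothing like '12345'
print(import_key(token))  # round-trips to the internal serial
```

Nothing encrypted is stored: the database keeps only the serial, and the permutation is applied on the way out and inverted on the way in, which is why consecutive serials export to unrelated-looking handles.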
by jandrewrogers
5/19/2025 at 6:50:18 AM
> A single AES encryption block is the same size as a UUID and cheap to compute.

I didn’t realise this! The UUID spec mandates some values for specific digits, so I assume these would not be strictly valid UUIDs?
by adamcharnock
5/19/2025 at 7:12:20 AM
They would not be valid UUIDs; it is an opaque 128-bit value.

To be honest, many companies are not using strictly standard-conforming UUIDs everywhere, and UUID has become a synonym for an opaque 128-bit identifier. Consumers of the value just need it to be unique. All standard UUID versions are prohibited in some environments, except maybe UUIDv8, so this semantic ambiguity is helpful.
Technically, you could make it present as a standard UUIDv4 or UUIDv8 by setting a few bits, as long as you remember to add them back if you ever need to decrypt it. The entropy might be a bit off for a UUIDv4 if someone actually checks but you can guarantee the uniqueness.
Using AES to produce a UUID-like key is an old but useful trick. Both ARM and x86 do the encryption in hardware; it is cheaper to generate than the standardized UUID versions in most cases.
by jandrewrogers
5/19/2025 at 6:04:29 AM
There is a security principle to not expose real identifiers to the outside world. It makes a crack in your system easier to open.

by sroussey
5/19/2025 at 7:05:50 AM
Idk, that reeks of security through obscurity to me. Your authorization/permission scheme has got to be fubar'd if you're relying on obscuring IDs to prevent someone from accessing a resource they shouldn't.

I'm sure I'm missing something obvious, but I'm not sure what other threat vectors there are, assuming it's in conjunction with other layers of security like a solid authorization/access control scheme.
I guess I'm not the biggest fan for a few reasons. I'd rather try and design a system such that it's secure even in the case of database leak/compromise or some other form of high transparency. I don't want to encourage a culture of getting used to obscurity and potentially depending on it instead of making sure that transparency can't be abused.
Also, it just feels wasteful. If you have two distinct IDs for a given resource, what are you building your foreign keys against? If you build against the hidden one and want to query the foreign table based on user input, you've got to either do a join or do a query to get the hidden key just to do another query.
EDIT: apparently RFC 4122 even says not to assume UUIDs are hard to guess and that they shouldn't be used as security capabilities. So if they shouldn't be depended on for security, why add all this complexity to keep them secret?
by Jarwain
5/19/2025 at 7:22:26 AM
The point you may be missing is that the key itself contains information about records in the database that you don’t have access to. There are many famous examples in the literature (e.g. the German Tank Problem [0]) of useful attacks on known unique serials that infer hidden information without access. In a database context, the keys for the records you are authorized to see can tell you much about the parts of the database to which you have no access.

Strong randomization of record serials mitigates these attacks.
by jandrewrogers
5/19/2025 at 7:30:21 AM
I thought we were talking about UUIDv7, which is random enough to make this not a problem, right?

by Jarwain
5/19/2025 at 5:29:31 PM
The idea being to expose a UUID instead of the natural index.

It’s been downgraded as people use UUIDs more.
That said, security through obscurity is an effective layer, particularly for slowing an attack.
Slowing lateral movement is valuable.
by sroussey
5/19/2025 at 6:04:53 PM
Sorry, I'm a bit confused.

I'm in agreement that a natural key shouldn't be used as the primary key for a record.
I was responding to a comment about having a hidden "database ID" (which I interpreted as a serial key) and a public "UUID", questioning the utility of that hidden database ID versus having a public UUIDv7 as the sole primary key, followed by questioning whether the utility of obscuring that primary UUIDv7 is worth the complexity of managing multiple artificial keys.
I agree that security through obscurity is a valuable layer in a multi-layered security position.
I guess I just don't think obscuring a UUID primary key is worth the added complexity in most systems.
I see it like adding a second front door to your house with a separate set of keys. Sure it'd be more secure, but it's an added pain and doesn't help if you don't have a sturdy doorframe, or smash-resistant windows.
by Jarwain
5/20/2025 at 11:38:47 AM
A UUID is a PUBLIC ID; use it to look up the bigint numerical ID when necessary.

One should not divulge scale, placement in numerical sequence, etc. with respect to the integer ID, hence the public UUID, which is basically an unguessable token.
by ringeryless
5/20/2025 at 4:02:33 PM
Why have a bigint numerical ID at all?

by Jarwain
5/21/2025 at 2:41:22 PM
In this case obscurity is security: leaked information from the database is directly useful and a secret in itself, not only a stepping stone that might or might not facilitate further exploitation.

by HelloNurse
5/19/2025 at 2:21:13 PM
The only thing UUIDv7 exposes is its creation time, which isn't tremendously useful or secret information.

by sgarland
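To make the claim above concrete: the only structured field in a UUIDv7 is its 48-bit Unix-millisecond timestamp; everything after the version and variant bits is random. A minimal sketch per RFC 9562, built by hand since stdlib `uuid.uuid7` support is recent:

```python
import os, time, uuid

def make_uuid7() -> uuid.UUID:
    # Layout per RFC 9562: 48-bit Unix-ms timestamp, 4 version bits,
    # 12 random bits, 2 variant bits, 62 random bits.
    ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0xFFF
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

def creation_time_ms(u: uuid.UUID) -> int:
    # The top 48 bits are the only information a v7 value carries.
    return u.int >> 80

u = make_uuid7()
print(u.version)            # 7
print(creation_time_ms(u))  # milliseconds since the Unix epoch
```

So the German-Tank-style inference discussed upthread only works on the timestamp portion (when a record was created), not on counts or ordering of other records, since the remaining 74 bits are random.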
5/19/2025 at 1:11:41 PM
> - Joins, lookups, indexes. Here data type can matter regarding performance and resource use.

I struggle to see a practical example.
> - Idempotency. Allowing a client to generate IDs can be a big help here (ie UUIDs)
Natural keys solve this.
> - Sharing. You may want to share a URL to something that requires the key, but not expose domain data (a URL to a user’s profile image shouldn’t expose their national ID).
Then you have another piece of data, which you relate to the natural key. Something like `exposed-name`.
> There is not one solution that handles all of these well
Natural keys solve these issues.
> Also, we all know that stakeholders will absolutely swear that there will never be two people with the same national ID. Oh, except unless someone died, then we may reuse their ID. Oh, and sometimes this remote territory has duplicate IDs with the mainland. Oh, and for people born during that revolution 50 years ago, we just kinda had to make stuff up for them.
If this happens, the designer made an error in his design, and should extend the design to accommodate the facts that escaped him at design time.
> Actually, the article is proposing a new principle
I'm putting it in words, but such knowledge has been common in the database community for ages, afaict.
by b-man
5/19/2025 at 3:40:21 PM
> If this happens, the designer had a error in his design, and should extend the design to accommodate the facts that escaped him at design time.

Errors in the initial design should be assumed as the default. Wise software engineering should make change easy.
Constraints on natural keys are business logic, not laws of mathematical truth. They are subject to change and often violated in the real world. The database as record-keeping engine should only enforce constraints on artificial keys that maintain its integrity for tracking record identity, history, and references.
Your database may not be primarily a record-keeping engine, and your tradeoffs may be different.
by fernmyth
5/20/2025 at 12:21:03 AM
> Errors in the initial design should be assumed as the default. Wise software engineering should make change easy.

I don't think I said that errors would not happen.
by b-man
5/19/2025 at 2:32:57 PM
> I struggle to see a practical example.

Memory, and CPU, and even storage eventually; those would be the main practical examples of where having a key that's composed of something very small saves you space and thus time.
Say we want to use a bigint key vs a VARCHAR(30)? Depending on your key you might be talking about terabytes of additional data just to store it (1T rows @ bigint = 8 TB; 1T rows at 30 chars? 30 TB...). The data is also going to constantly shuffle (random inserts).
If you define the PK as the natural key with no separate column, then you get to do comparisons on all the natural key columns themselves; so instead of one 4- or 8-byte column comparison you get to do what, five string comparisons?
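The arithmetic above, spelled out as a back-of-envelope sketch (ignoring per-row overhead, padding, and index structure):

```python
ROWS = 1_000_000_000_000  # 1 trillion rows, as in the example above

bigint_key = ROWS * 8      # fixed 8-byte integer key
varchar30_key = ROWS * 30  # 30-char natural key at full width

print(bigint_key // 10**12, "TB")     # 8 TB
print(varchar30_key // 10**12, "TB")  # 30 TB
# In many engines every secondary index also repeats the primary key,
# so the 22 TB gap is multiplied once more per index.
```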
Having worked extensively in ETL: when a developer tells me "there's no duplication in this real-world process", what they mean is "there's no duplication in my mental model of this real-world process".
by hobs
5/20/2025 at 12:26:08 AM
> Memory, and CPU, and even storage eventually, those would be the main practical examples of where having a key that's composed of something very small saves you space and thus, time.

> Say we want to use a bigint key vs a VARCHAR(30)? depending on your big key you might be talking about terabytes of additional data, just to store a key (1t rows @ bigint = 8TB, 1T rows at 30 chars? 30TB...). The data also is going to constantly shuffle (random inserts).
>> Joins, lookups, indexes
I don't see how what you brought up has anything to do with these.
But the main point is being missed here because of a physical vs logical conflation anyhow.
by b-man
5/19/2025 at 2:25:23 PM
> Natural keys solves this

It would be helpful if the article used a natural key. Instead, it uses a primary key that is neither natural nor guaranteed, and is mutable. It makes assumptions that are not true, and that is one of the big dangers of using natural keys.
> If this happens, the designer had a error in his design, and should extend the design to accommodate the facts that escaped him at design time.
This again is a dangerous assumption. The danger here is the assumption that facts don't change. Facts do change. And facts that are true at design time are not necessarily true 1 day later, 1 year later, or 1 decade later, or more.
Again, when the example can't even use a natural key to present its idea of using natural keys, we have a problem.
by jasonlotito
5/19/2025 at 10:55:21 PM
The problem with natural keys is that nobody ever says, "My bad, I should have spelled my name the same way on this form as I did when I registered for the service, I promise not to do it again." Instead they say, "No matter how I spell it, you should have still known it was my name!"by jacinabox