Interesting comment, since v4 is the only version that provides the maximum number of random bits, and it is recommended as a primary key for non-correlated rows in several distributed databases to counter hot-spotting and privacy issues.
Edit: Context links for reference, these recommend UUIDv4:
https://www.cockroachlabs.com/docs/stable/uuid
https://docs.cloud.google.com/spanner/docs/schema-design#uui...
In practice, most orgs with sufficiently large and complex data models use the term "UUID" to mean a pure 128-bit value that makes no reference to the UUID standard. It is not difficult to find yourself with a set of application requirements that cannot be satisfied with a standardized UUID.
The sophistication of our use case scenarios for UUIDs exceeds their original design assumptions. They don't readily support every operation you might want to do on a UUID.
Using MD5 or 122 bits of a SHA-1 hash seems questionable now that both algorithms have known collisions. Using 122 bits of SHA-2/SHA-3 seems pretty limited too. Maybe if you've got trusted inputs?
Let's say i have some entity like an "organization" that has data that spans several different tables. I want to use that organization as a "parent" in such a way where i can clone them to create new "child" organizations structured the same way they are. I also want to periodically be able to pull changes from the parent organization down into the child organization.
If the primary keys for all tables involved are UUIDs, I can accomplish this very easily by mapping all IDs in the relevant tables `id => uuid5(id, childOrgId)`. This can be done to all join tables, foreign keys, etc. The end result is a perfect "child" clone of the organization with all data relations still in place. This data can be refreshed from the parent organization any time simply by repeating the process.
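A minimal sketch of that mapping in Go using github.com/google/uuid (the childID helper and the example IDs are illustrative, not from the original setup):

    package main

    import (
        "fmt"

        "github.com/google/uuid"
    )

    // childID implements the uuid5(id, childOrgId) mapping above: the parent
    // row's ID serves as the namespace and the child org's ID as the name.
    // NewSHA1 produces a name-based version 5 UUID, so the same inputs always
    // yield the same output, which is what makes the refresh idempotent.
    func childID(parentID, childOrgID uuid.UUID) uuid.UUID {
        return uuid.NewSHA1(parentID, childOrgID[:])
    }

    func main() {
        parentRow := uuid.MustParse("9a176cbe-7655-4850-9e7f-b98c4b3b4704")
        childOrg := uuid.MustParse("3a01d58f-59d3-4b0c-87dc-4152c816f442")
        fmt.Println(childID(parentRow, childOrg)) // stable across runs
    }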
v7's rough ordering also helps as a PK in certain sharded DBs, while others want fully random keys, and non-sharded ones usually do fine with a serial int.
For those ~~curious~~ worried, no, this was not a security sensitive context.
It's the same reason we use UTF-8. It's well supported. UUIDs are well supported by most languages and storage systems. You don't have to worry about endianness or serialization. It's not a thing you have to think about. It's already been solved and optimized.
Now generate your random ID. Did you use a CSPRNG, or were your devs lazy and just used a PRNG? Are you doing that every time you're generating one of these IDs in any system that might need to communicate with your API? Or maybe they just generated one random number, and now they're adding 1 every time.
Now transfer it over a wire. Are you sure the way you're serializing it is how the remote system will deserialize it? Maybe you should use a string representation, since character transmission is a solved problem with UTF-8. OK, so who decides what that canonical representation is? How do we make it recognizable as an ID without looking like something that people should do arithmetic with?
It's not like random IDs were a new idea in 2002.
But, using UUIDv4 shouldn't be rocket science, either. UUID support should be built into a language intended for web applications, database applications, or business applications. That's why you're using Go or C# instead of C. And Go is somewhat focused on micro-service architectures. It's going to need to serialize and deserialize objects regularly.
> Are you sure the way you're serializing it is how the remote system will deserialize it?
It's 16 bytes. There's no serialization.
Hex encoding with hyphens in the right spot isn't serialization?
A downvote tells me nothing. Please tell me what I'm missing, maybe I could learn something
Ah, here we are. If it's just bytes, why store it as a string? Sixteen bytes is just a 128-bit integer, don't waste the space. So now the DB needs to know how to convert your string back to an integer. And back to a string when you ask for it.
"Well why not just keep it as an integer?"
Sure, in which base? With leading zeroes as padding?
But now you also need to handle this in JavaScript, where you have to know to deserialize it to a BigInt or Buffer (or Uint8Array).
UUIDs just mean you don't need to do any of this crap yourself. It's already there and it already works. Everything everywhere speaks the same UUIDs.
(Downvote wasn't me)
Also mentioned on HN https://news.ycombinator.com/item?id=45323008
1. Users - your users table may not benefit from being ordered by a created_at (or uuidv7) index, because whether you need to query that data is tied to the user's activity rather than when they first on-boarded.
2. Orders - the majority of your queries are recent-order or historical-reporting-type queries, which should benefit from a created_at (or uuidv7) index.
Obviously the argument is then that you're leaking data in the key, but my personal take is this is overstated. You might not want to tell people how old a User is, but you're pretty much always going to tell them how old an Order is.
There's also a hot spot problem with databases. That's the performance problem with autoincrement integers. If you are always writing to the same page on disk, then every write has to lock the same page.
UUIDv7 is a trade-off between a messy b-tree (page splits) and a write-page hot spot (latch contention). It's always on the right side of the b-tree, but it's spread out more to avoid hot spots.
That still doesn't mean you should always use v7. It does reversibly encode a timestamp, and it could be used to determine the rate that ids are generated (analogous to the German tank problem). If the uuidv7 is monotonic, then it's worse for this issue.
> To be clear, UUIDv8 is not a replacement for UUIDv4 (Section 5.4) where all 122 extra bits are filled with random data.
> UUIDv8's uniqueness will be implementation specific and MUST NOT be assumed.
Here's a spec compliant UUIDv8 implementation I made that doesn't produce unique IDs: https://github.com/robalexdev/uuidv8-xkcd-221
So, given a spec-compliant UUIDv4 you can assume it is unique, but you'd need out-of-band information to make the same assumption about a UUIDv8.
I wrote much more in a blog post: https://alexsci.com/blog/uuid-oops/
It is heartwarming to see such a mundane small tech bit making the front page of HN when elsewhere it is debated whether programming as a profession is dead, or more broadly whether AI will be enslaving humanity in the next decade. :)
If you’re tired of talking about AI, why did you post this?
In the past few weeks I've started opening neovim again and just writing code. It's still 50/50 with a Claude code instance, but fuck I don't feel a big productivity difference.
a = 1
assert a == 1
# many lines here where a is never used
assert a == 1
Yes AI test cases are awesome until you read what it's doing.
Especially when folks are trying to push %-based test metrics and have types (and thus the tests assert types where the types can't really be wrong).
I use AI to write tests. Many of them the e2e fell into the pointless niche, but I was able to scope my API tests well enough to get very high hit rate.
The value of said API tests isn't unlimited. If I had to hand-roll them, I'm not sure I would have written as many, but they test a multitude of 400, 401, 402, 403, and 404s, and the tests themselves have absolutely caught issues such as a validator not mounting correctly, or the wrong error status code due to check ordering.
This is the same thing as picking a new smart programming language or package, or insisting that Dvorak layout is the only real way forward.
Personally I try to put as much distance as possible between myself and the modality discussion, and get intimate with the substance.
But we're still required to go to the office, and talking to a computer in the open space is highly unwelcome.
AI delivers the feeling of productivity and the ability to make endless PoCs. For some tasks it's actually good, of course, but writing high quality software by itself isn't one.
I can say with certainty that: 1. LLM-assisted development has gotten significantly, materially better in the past 12 months.
2. I would be incredibly skeptical of any study that's been designed, executed, analysed, written about, published, and talked about here within that period of time.
This is the equivalent of a news headline starting with “science says…”.
You are displaying the exact same thing that you were complaining about.
Basically, who runs golang?
The perfectionists are correct, UUIDs are awful and if there's a pile of standards that all have small problems the best thing you can do is make a totally new standard to add to the already too long list.
The in-the-trenches system software devs want this BAD. Check out https://en.wikipedia.org/wiki/Universally_unique_identifier#... They want a library that flawlessly interops with everything on that list, ideally. Something you can trust and will not deprecate a function you need for live code and it just works. I admit a certain affinity to this perspective.
The cryptobros want to wait; there is some temporary current turmoil in UUID land. Not "drama" exactly, but things are in flux, and it would be horrible for golang to be stuck permanently supporting some interim thing that officially gets dropped (or worse, that under scrutiny turns out to have a security hole, yet that older/present golang would need to keep around permanent-ish for backward compatibility). Can't we just wait until 2027 or so? This is not the ideal time to set UUID policy in concrete. Just wait a couple more months or a year or two? https://datatracker.ietf.org/doc/html/rfc9562
I think I covered the three groups that are fighting pretty accurately and at least semi-fairly. I did make fun of the perfectionists a little, but cut me a break, everyone makes fun of those guys.
So, yeah, a "small technical bit," but it's actually a super huge architectural / leadership / management decision.
I hope they get it correct. I love golang and have a side thing with tinygo. If you're doing something with microcontrollers that doesn't use networking and you're not locked into a framework/RTOS, just use tinygo, it's SO cool. It's just fun. I wish tinygo had any, or decent, networking. Why would I need zephyr if I have goroutines? Hmm.
I've been around the block a few times with UUID-alike situations, and the worst thing they could decide is to swing to an extreme. They'll probably be OK; this is not golang's first time around the block either.
It'll probably be OK. I hope.
Watch as they stand at the watering hole, bored and listless. A sad look on their faces, knowing that now that Go has generics, all their joy has left their life. Like the dog that caught his tail, they are confused.
One looks at his friends as if to say, "Now what?"
Suddenly there is a noise.
All heads turn as they see the HN post about UUIDs.
One of the members pounces on it. "Why debate this when the entire industry is collapsing?"
No reply. Silence.
His peers give a half-hearted smile, as if to say, "Thanks for trying" but the truth is apparent. The joy of hating on programming languages is nil when AI is the only thing looking at code any more.
The Go hater returns to the waterhole. Defeated.
The compatibility guarantee is a massive win; so exciting to have a boring language to build on that doesn't change much but just gradually gets better.
Sure, most of that is not the compiler or standard library, but dependencies. And I'm not talking about random open-source libraries (I can't blame the core for those), but things like protobuf breaking EVERY TIME. Or x/net, x/crypto, or whatever.
But also yes, from random dependencies. It seems that language-culturally, Go authors are fine with breaking changes. Whereas I don't see that with people making Rust crates. And multiple times I've dug out C++ projects that I have not touched in 25 years, and they just work.
The x/ packages are more unstable yes, that's why they're outside stdlib, though I haven't personally noticed any breakage and have never been bitten by this. What breakage did you see?
I think protobuf is notorious for breaking (but more from user changes). I don't use it I'm afraid so have no opinion on that, though it has gone through some major revisions so perhaps that's what you mean?
I don't tend to use much third party code apart from the standard library and some x libraries (most libraries are internal to the org), I'm sure if you do have a lot of external dependencies you might have a different experience.
Sure, the Go standard library is in some sense bigger, so it's nice of them to not break that. But short of a Python2->3 or Perl5->6 migration, isn't that just table stakes for a language?
The only good thing about Go is that its standard library has enough coverage to do a reasonable number of things. The only good thing. But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.
> though [protobuf] has gone through some major revisions so perhaps that's what you mean?
No, it seems it's broken way more often than that, requiring manual changes.
This is not my experience with my own or third party code. I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade, and perhaps one caused by changes to a third party library (sendgrid, who changed their API with breaking changes, not really a Go problem).
A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?
This is an example of an unmaintained UUID library in a similar situation that is currently causing incompatibilities because they implemented the draft spec. and didn’t update when the RFC changed:
https://github.com/stevesimmons/uuid7/issues/1
Any Python developer using the uuid7 library is getting something that is incompatible with the UUIDv7 specification and other UUIDv7 implementations as a result. Developers who use the stdlib uuid package in Python 3.14+ and uuid7 as a fallback in older versions are getting different, incompatible behaviour depending upon which version of Python they are running.
This can manifest itself as a developer using UUIDv7 for its time-ordered property, deploying with Python <=3.13, upgrading to Python 3.14+ and discovering that all their data created with Python 3.13 sorts incorrectly when mixed with data created with Python 3.14+.
A UUID library that is not receiving updates is quite possibly badly broken and definitely warrants suspicion and closer inspection.
https://datatracker.ietf.org/doc/rfc9562/
The problem is not that it is a draft RFC, the problem is that the library is unmaintained with an unresponsive developer who is squatting the uuid7 package name. It’s the top hit for Python developers who want to use UUIDv7 for Python 3.13 and below.
The open issue in Google's repo about the package being malicious is not a good look. The community concluded it's a false positive. If the repo was maintained they'd confirm this and close the issue.
Maintenance is much more than RFC compliance, although the project hasn't met that bar either.
If the library just existed as a correct implementation of the RFC without bugs or significant missing features, that would be one thing. But leaving features and bug fixes already committed to the repository unreleased for years because the maintainer hasn't cut a new release since 2024 is a bad sign.
I see them crop up everywhere. IMO, they are decidedly human-unfriendly - particularly to programmers and database admins trying to debug issues. Too many digits to deal with, and they suck up too much column width in query results, spreadsheets, reports, etc.
I'm not saying they don't have a place (e.g. when you have a genuine need to generate unique identifiers across completely disconnected locations, and the id's will generally never need to be dealt with by a human). But in practice they've been abused to do everything under the sun (filenames, URL links, user id's, transaction numbers, database primary keys, etc). I almost want to start a website with a gallery of all the examples where they've been unsuitably shoehorned in when just a little more consideration would have produced something more humane.
For most common purposes, a conventional, centralized dispenser is better. Akin to the Take-A-Number reels you see at the deli. Deterministic randomization is a thing if you don't want the numbers to count sequentially. Prefixes, or sharding the ID space, is also a thing, if you need uniqueness across different latency boundaries (like disparate datacenters or siloed servers).
I've lost count of how many times I've seen a UUID generated when what the designer really should have done is just grab the primary key (or when that's awkward, the result of a GetNextId stored procedure) from their database.
Some chucklehead decided to replace the system with UUIDs. Now they are no longer human-memorable/readable/writable-on-paper/anything useful. Even better, they must have used some home-grown implementation, because the codes were weirdly sequential. If you ever had to look at a dump of codes, the ids are almost identical minus a digit somewhere in the middle.
Destructive change that ruined a perfectly functional system.
It's funny how fast it is to just implement a counter and how much people rely on UUIDs to avoid it. If you already use postgres somewhere, just create a "counter" table for your namespace. You can easily count 10K-100k values per second or faster, with room to grow if you outscale that.
What do you get? The most efficient, compressible little integers you could ever want. You unlock data structures like roaring bitmaps/treemaps. You cut memory to 25% depending on your cardinality (i.e. you can sometimes use u16 or u32 in memory). You get insane compression benefits, where rows of these integers take a few bits each after compression. You get faster hashmap lookups. It's just insane how this compounds into crazy downstream wins.
It is absolutely insane how little cost it is to do this and how many optimizations you unlock. But people somehow think that id generation will be their bottleneck, or maybe it's just easier to avoid a DB sometimes, or whatever, and so we see UUIDs everywhere. Although, agreed that most of the time you can just generate the unique id for data yourself.
In fairness, UUID is easier, but damn it wrecks performance.
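A minimal sketch of that counter table in Go with database/sql and Postgres (the counters table, the "orders" namespace row, and the connection string are all assumptions for illustration):

    // Setup, once:
    //   CREATE TABLE counters (name text PRIMARY KEY, value bigint NOT NULL);
    //   INSERT INTO counters VALUES ('orders', 0);
    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // Postgres driver
    )

    // nextID atomically increments and returns the counter for a namespace.
    // Postgres row locking serializes concurrent callers for us.
    func nextID(db *sql.DB, namespace string) (int64, error) {
        var id int64
        err := db.QueryRow(
            `UPDATE counters SET value = value + 1 WHERE name = $1 RETURNING value`,
            namespace,
        ).Scan(&id)
        return id, err
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        id, err := nextID(db, "orders")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("allocated id:", id)
    }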
BASKETBALL-9a176cbe-7655-4850-9e7f-b98c4b3b4704-FISH
CAKE-3a01d58f-59d3-4b0c-87dc-4152c816f442-POTATO
“Which row was it, ‘basketball fish’ or ‘cake potato’?”
Of course, the words would need to be a checksum. As soon as you introduce them, nobody is looking at the hex again. Which is an improvement, since nobody is looking at all the hex now “it’s the one ending in ‘4ab’”.
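A minimal sketch of deriving checksum words in Go, assuming a tiny hypothetical word list (a real list would be much larger and curated to avoid unfortunate combinations):

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // A toy word list; the indices below wrap around its length.
    var words = []string{"BASKETBALL", "FISH", "CAKE", "POTATO", "APPLE", "TIGER", "RIVER", "CLOUD"}

    // checksumWords derives two deterministic words from a hash of the ID,
    // so a transcription error in the hex will likely change the words too.
    func checksumWords(id string) (string, string) {
        sum := sha256.Sum256([]byte(id))
        a := binary.BigEndian.Uint32(sum[0:4]) % uint32(len(words))
        b := binary.BigEndian.Uint32(sum[4:8]) % uint32(len(words))
        return words[a], words[b]
    }

    func main() {
        id := "9a176cbe-7655-4850-9e7f-b98c4b3b4704"
        w1, w2 := checksumWords(id)
        fmt.Printf("%s-%s-%s\n", w1, id, w2)
    }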
But for exposed values (document ids, customer ids, that kind of thing), it can be awkward if a patient's id is suddenly "CRANKY-...-FART".
Note: FIPS181 was intended for passwords and I was using them as handy short human-readable record IDs as per your post. You probably shouldn't use FIPS181 for passwords in 2026 LOL.
Describing FIPS181 as pronounceable is optimistic. However, it's better than random text wrt human conversations. They start looking like mysterious assembly-language mnemonics after a while.
What are your favorite ways to approach this?
I think a maximal period linear feedback shift register might fit well.
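For example, here's a minimal sketch of a 16-bit maximal-period Fibonacci LFSR in Go, using the textbook tap polynomial x^16 + x^14 + x^13 + x^11 + 1; it visits all 65,535 nonzero states in a scrambled-looking order before repeating:

    package main

    import "fmt"

    // step advances the register by one state. The feedback bit is the XOR
    // of taps 16, 14, 13, and 11 (shifts 0, 2, 3, 5 from the output bit).
    func step(state uint16) uint16 {
        bit := (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        return (state >> 1) | (bit << 15)
    }

    func main() {
        state := uint16(0xACE1) // any nonzero seed works
        for i := 0; i < 5; i++ {
            state = step(state)
            fmt.Printf("%04x\n", state)
        }
    }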
> Would like to point out how Go is rather the exception than the norm with regards to including UUID support in its standard library.
> C#: https://learn.microsoft.com/en-us/dotnet/api/system.guid.new...
> Java: https://docs.oracle.com/javase/8/docs/api/java/util/UUID.htm...
> JavaScript: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/rand...
> Python: https://docs.python.org/3/library/uuid.html
> Ruby: https://ruby-doc.org/stdlib-1.9.3/libdoc/securerandom/rdoc/S...
Is C# the language that gives the Go stdlib a run for its money? I haven't used it much. JS, Python, and Ruby, I have, quite a bit, and I have the sprawling requirements.txts and Gemfiles to prove it.
I asked the question I did upthread because, while there are a lot of colorable arguments about what Go did wrong, a complete and practical standard library where the standard library's functionality is the idiomatic answer to the problems it addresses is one of the things Go happens to do distinctively well. Which makes dunking on it for this UUID thing kind of odd.
For a short script, the standard "urllib.request" module [0] works pretty well, and is usually my first choice since it's always installed. For a larger program, I'll usually use a third-party module with more features/async support though, but I'll only do this if I'm using other third-party dependencies anyways.
> JS, Python, and Ruby, I have, quite a bit, and I have the sprawling requirements.txts and Gemfiles to prove it.
I checked the top 10 Go repositories on GitHub [1], and all but 1 of them have 30+ direct dependencies listed in their "go.mod" files (and many more indirect ones). Also, both C and JavaScript are well-known for their terrible standard libraries, yet out of all languages, JavaScript programs tend to use the most dependencies, while C programs tend to use the least. So I don't think that the number of dependencies that an average program in a given language uses says anything about the quality of that language's standard library.
That's not what happens in Golang.
But lots of programs (and most of the programs that I write) don't use any cryptography, and only have trivial networking requirements, and outside those areas, I'd argue that the Python standard library [0] has broader coverage, supports more features, and is better documented than the Go standard library [1].
The Go standard library is still pretty great though, and is well ahead of most other languages; I just personally think that it's a little worse than Python's. But if you mostly write networking/crypto code, I can easily see how you'd have the opposite opinion.
Go's package management is actually one of its strongest points, so I think that it's unsurprising/good that some projects have lots of dependencies. But I still stand by the point that you shouldn't judge a language based on how many dependencies most programs written in it use.
(Except for JavaScript, where I have no problem judging it by the npm craziness :) )
If you’re arguing as the grandparent did that Go regularly omits important packages from its standard library, then it’s not unreasonable to ask you for your idea of an exemplary stdlib.
And the more stuff you pack into the standard library the more expertise you need on the maintenance team for all these new libraries. And you don't want a standard library that is bad, because then people won't use it. And then you're stuck with the maintenance burden of code that no one uses. It's a big commitment to add something to a standard library.
So it's not that things just suddenly break.
For example they've removed asyncore, their original loop-based module before the async/await syntax existed. All the software from that era needs a total rewrite. Luckily in debian for now the module is provided as a .deb package so I didn't have to do the total rewrite.
edit: as usual on ycombinator, downvotes for saying something factually true that can be easily verified :D
And then you answered about downstream code breakage totally outside the std lib.
I will be forever mad that they did not use that as a breaking opportunity to namespace the standard library. Something like: `import std.io` so that other libraries can never conflict.
The fact that we're discussing this at all is a reasonable argument for using a library function.
Not that it matters. I don't even think there's a single piece of software in the world which would actually care about these bits rather than treating the whole byte array as an opaque thing.
I was disappointed by Go's poor support for human-focused logging. The log module is so basic that one might as well just use Printf. The slog module technically offers a line-based handler, but getting a traditional format out of it is painful at best, it lacks features that are common elsewhere, and it's somehow significantly slower than the json handler. I can only guess that it was added as an afterthought, by someone who doesn't normally do that kind of logging.
To be fair, I suppose this might make sense if Go is intended only for enterprisey environments. I often do projects outside of those environments, though, so I ended up spending a lot of time on a side quest to build what I expected to be built-in.
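To make the complaint concrete, here's a minimal sketch of what the built-in text handler emits (output shape shown in the comment; a traditional one-line format means writing your own slog.Handler):

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        logger := slog.New(slog.NewTextHandler(os.Stderr, nil))
        logger.Info("server started", "port", 8080)
        // Emits key=value pairs, roughly:
        //   time=2024-01-02T15:04:05.000-07:00 level=INFO msg="server started" port=8080
        // rather than a traditional "2024-01-02 15:04:05 INFO server started".
    }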
I haven't explored enough of the stdlib yet to know what else that I might expect is not there. If you have a wish list, would you care to share it?
For example Go has production ready HTTP server and client implementations in the standard library. But with Python, you have to use FastAPI or Flask, and requests or httpx. For SQL there's SQLAlchemy I guess and probably some other alternatives (my Python knowledge is not that great), whereas again with Go the abstraction is just in the standard library and you only include the driver for the specific database.
We use Renovate to manage dependency upgrades. It runs once a week. Every Python project has a handful or more dependency upgrades waiting every week, primarily due to the huge amount of dependencies and transitive dependencies in each project. The Go projects sometimes have one or two, but most of the time they're silent because there is nothing to upgrade (partly due to just having so few dependencies to begin with).
Generally means it'll be going in unless something new comes up which alters people's thinking.
So, it makes sense for Go to introduce support for this as well.
Also, swiss tables were a great addition to Go's native maps, but then again there are faster libraries that can give you 3x performance (in the case of numeric keys).
For regular connection scenarios, nbio's performance is inferior to the standard library due to goroutine affinity, lower buffer reuse rate for individual connections, and variable escape issues.
From gnet's README: gnet and net don't share the same philosophy in network programming. Thus, building network applications with gnet can be significantly different from building them with net, and the philosophies can't be reconciled.
[...]
gnet is not designed to displace the Go net, but to create an alternative in the Go ecosystem for building performance-critical network services.
Frankly, I think it's unfair to argue that the net package isn't performant, especially given its goals and API surface. However, the net/http package is a different story. It indeed isn't very performant, though one should be careful to understand that that assessment is in relative terms; net/http still runs circles around some other languages' standard approaches to HTTP servers.
A big part of why net/http is relatively slow is also down to its goals and API surface. It's designed to be easy to use, not especially fast. By comparison, there's fasthttp [1], which lives up to its name, but is much harder to work with properly. The goal of chasing performance at all costs also leads to questionable design decisions, like fiber [2], based on fasthttp, which achieves some of its performance by violating Go's runtime guarantee that strings are immutable. That is a wild choice that the standard library authors would/could never make.
Something like http/v2 and net/v2. I know gnet had (has?) issues with implementing TLS because of how the entire stdlib is designed to work. At the time, it was a great piece of software, but by now it is slow and outdated. A lot of progress has been made since in networking, parsing, serialization, atomics, and so on.
UUIDs rarely get new versions. I don’t think it’d be too much to expect Go to stay relatively current on that.
Literal interview: concurrently hit these endpoints that return json and sum the total of the values returned. Handle any 400- or 500-level http errors.
Literal former Googlers flubbing the interview. They would spend too much time setting up an IDE and project, not be sure how to handle errors, and were unable to parse the json. We eventually added a skeleton java project and removed json from the api, allowing text-only responses. I learned java people don't set up projects or deal with json. It is the only explanation.
I don't think it's a strong hiring signal if they weren't already familiar with APIs for (de)serialization in between, because if they're worth anything then they'll just pick that up from documentation and be done with it.
If added, keep the scope small: implement RFC 4122 v4 generation using crypto/rand.Read with correct version and variant bit handling, provide Parse and String, MarshalText and UnmarshalText, JSON Marshal and Unmarshal hooks, and database/sql Scanner and Valuer, and skip v1 MAC and time based generation by default because of privacy and cross-platform headaches.
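A minimal sketch of just the v4 core under that scope (the package layout and NewV4 name are illustrative, not a proposed stdlib API):

    package uuid

    import (
        "crypto/rand"
        "fmt"
    )

    type UUID [16]byte

    // NewV4 fills all 16 bytes from crypto/rand, then overwrites the
    // version and variant bits as required by RFC 4122 / RFC 9562.
    func NewV4() (UUID, error) {
        var u UUID
        if _, err := rand.Read(u[:]); err != nil {
            return UUID{}, err
        }
        u[6] = (u[6] & 0x0f) | 0x40 // version 4 in the high nibble of byte 6
        u[8] = (u[8] & 0x3f) | 0x80 // variant 10xx in the top bits of byte 8
        return u, nil
    }

    // String renders the canonical 8-4-4-4-12 hex form.
    func (u UUID) String() string {
        return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
    }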
Go’s core design philosophy is stability. This means backwards compatibility forever. But really, even more than that. The community is largely against “v2” libraries. After the first version is introduced, Go devs trend towards stability, live with its flaws, and are super hesitant to fix things with a “v2”.
There have been exceptions. After 20 years of the frankly horrible json library, a v2 one is in the works.
Most of the uuid concerns stem from exactly this. After the api is added to the standard library, it will be the canonical api forever.
There are surely pros and cons to this design philosophy. I just don’t understand why people who disagree with Go’s core goals don’t just use a different language? Sorry to take a jab here, but are we really short on programming languages that introduce the wrong v1 api, so then the language ends up with codebases that depend on v1, v2, and v3? (Looking at you Java, Python, and C#)
But regardless of API ergonomics, I would love to have UUID v4 and v7 in the stdlib.
Having any structure whatsoever in them is pointless and stupid. UUIDs should be 128 bits of crypto/rand and nothing else.
Argh.
If just using random bytes, you still need to make decisions about how to serialize it, put it in a URL, log it, etc., so you're basically just inventing your own format anyway for a problem that's already solved.
A uuidv4 is 15.25 bytes of payload encoded in 36 bytes (using the standard serialisation), in a format which is not conducive to GUI text selection.
You can encode 16 whole bytes in 26 bytes of easily selectable content by using a random source and encoding in base32, or 22 by using base58.
You can use different encodings based on context, just like with a random blob of bytes.
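A minimal sketch of the base32 variant in Go, 26 characters with no hyphens:

    package main

    import (
        "crypto/rand"
        "encoding/base32"
        "fmt"
        "log"
        "strings"
    )

    func main() {
        var b [16]byte // 128 random bits, no version/variant structure
        if _, err := rand.Read(b[:]); err != nil {
            log.Fatal(err)
        }
        // Unpadded base32: ceil(128/5) = 26 characters, double-click selectable.
        enc := base32.StdEncoding.WithPadding(base32.NoPadding)
        fmt.Println(strings.ToLower(enc.EncodeToString(b[:])))
    }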
One example where UUIDs are useful is usage as primary keys in databases. The constraints provide benefits, such as global uniqueness across distributed systems.
I understand the defensiveness about implementing new features, and I understand the rationale to keep the core as small as possible. But come on, it's not like UUID is a new thing. As the opener already pointed out, UUID is essential in pretty much all languages for interoperability so it makes sense to have that in the standard language.
Anyways, I'm just happy we'll get generic methods after 10 years of debates, I suppose. Maybe we'll get an export keyword before another 10, too. Then CGo will finally be usable outside a single package without those overlapping autogenerated symbols...
However I would still advocate for it over C in scenarios easily covered by TinyGo and TamaGo.
If you want to see the Go-unique high-school debate club, look at the Go team's attitude to fixing logging: the community proposed multiple ways of solving it, the Go team rejected all of them, and then made a massive navel-gazing post that could be summed up as "well, there are multiple proposals, THAT MEANS PEOPLE ARE UNSURE ON THE ISSUE, so we won't do shit"
...then removed every question related to Go logging (which were common in previous ones) from their yearly survey
The maintainers did the right thing by just saying "no."
UUIDs won because they're "good enough" - collision-resistant without coordination. But v7's timestamp ordering breaks that independence by leaking information. Now you need to reason about clock sync, monotonicity, privacy.
For distributed systems, I increasingly see folks moving to: use v7 internally (btree efficiency matters), expose v4 externally (don't leak creation order to clients). Add a mapping layer at the API boundary.
The real lesson: IDs are part of your API contract. If clients can infer system behavior from ID structure (request rate, shard assignment, rollout timing), that's signal you may not want to transmit. Standards help, but context still matters.
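A minimal sketch of that pattern using github.com/google/uuid (the Order struct and field names are illustrative):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/uuid"
    )

    type Order struct {
        ID       uuid.UUID // v7: time-ordered, b-tree-friendly internal primary key
        PublicID uuid.UUID // v4: fully random, the only ID clients ever see
    }

    func newOrder() (Order, error) {
        internal, err := uuid.NewV7()
        if err != nil {
            return Order{}, err
        }
        return Order{ID: internal, PublicID: uuid.New()}, nil
    }

    func main() {
        o, err := newOrder()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("internal:", o.ID)       // never leaves the system
        fmt.Println("public:  ", o.PublicID) // API lookups go through an index on public_id
    }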