From what I skimmed, the package should just call the JS runtime's crypto.randomUUID(), which should always be properly seeded.
I think it is extremely unlikely that the runtime has a bug here, but who knows? Which JS runtime do you use?
It's a super simple mechanism: check the common worldwide UUID database; if it's not in there, you can use it. Perhaps if we use a START TRANSACTION, we can ensure it's not taken as we insert. But that's all easy, I'll ask Claude to wire it up, no problem.
Things to check, in descending order of how likely they actually are:
1. Data import / migration / backup restore, perhaps? Did anyone load a CSV, run a seed script, restore a snapshot, or copy rows between environments at any point in the last year? This is what "duplicate UUID" is in 99% of cases. Check git on migrations, ops history on the DB, and ask anyone who might have been moving data around.
2. Application retry / rollback bug maybe? Code path that generates a UUID, attempts insert, fails on constraint violation, retries with the same UUID variable still in scope. Check whether UUID generation lives inside or outside the retry boundary.
3. Older versions of the uuid package in certain bundler environments would fall back to Math.random() instead of crypto.getRandomValues(). What version are you on? Anything <4.x is suspect; modern v8+/v9+ uses crypto everywhere correctly.
4. Could also be a process fork bug. If a UUID generator runs in a child process spawned from a parent that already used the PRNG, the entropy state can get copied. Rare in Node specifically, more historical in old Python/Ruby setups.
If you've ruled all of those out and the row really was generated independently a year apart via crypto.getRandomValues, go buy a lottery ticket. But it's almost certainly cause #1.
As someone who enjoys the interminable complaints about RNG in the video game scene, I would never trust any human's rationalization of random outcomes.
No, it means extremely unlikely. Collisions can occur, as OP just found out, but the chances are so abysmally small that most people don't care.
In every application I've worked on, I've had a pre-save check to see whether the UUID was already present and generate a new one if it was. I don't think it ever triggered unless a bug was introduced somewhere, but it's good practice anyway.
In my opinion, these kinds of intuitions have to grow over time. And every time it’s pointed out, you learn. So please, keep pointing it out :).
I still don't see idiomatic markers of AI so that's scary if your claim is correct.
The only guess I have is that we originally generated UUIDv4s on a user's phone before sending them to the database, and the UUID generated this morning that collided was created on an Ubuntu server.
I don't fully know how UUIDv4s are generated and whether anything about the machine they're generated on is part of the algorithm, but that's really the only change I can think of: it used to be generated on-device by users, and for many months now it has been generated on the server.
To be honest, the chance that you are doing something weird is probably higher than you experiencing a real UUID conflict.
How did your database 'flag' that conflict?
The database flagged it simply by having a UNIQUE key on the invoice_id column. First entry was from 2025, second entry from today.
1 in 47.3 octillion.
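The number that matters for a table isn't the odds of one specific pair matching but the birthday bound across all rows. A quick way to estimate it (the billion-row figure is just an example):

```javascript
// Birthday-bound estimate for v4 UUIDs: with 122 random bits, the chance of
// at least one collision among n IDs is roughly 1 - exp(-n^2 / 2^123).
function collisionProbability(n) {
  const space = 2 ** 122; // number of possible v4 UUIDs
  return -Math.expm1(-(n * n) / (2 * space));
}

// Even a billion rows is vanishingly safe:
// collisionProbability(1e9) ≈ 9.4e-20
```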
I'd suspect a race condition or some other naive mistake; otherwise I'd be stocking up on lottery tickets.
(lol at the other user posting at the same time about the lottery ticket.. great minds and all that.)
Thoughts?
And use uuid v5 to hash it :)
If everything is done properly, then this is very likely the one and only time anyone involved in the telling or reading of this account will ever experience this.