I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party (in order to count them), then two services can tell that two requests came from the same party (by both pretending to be the same service) and therefore correlate them.
But once you get the response you can unblind the blinded signature and obtain the token (which is just the unblinded signature). The token can then be used exactly once, because it's blacklisted after use (and it expires before, say, the next day starts).
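The round trip can be sketched with Chaum-style RSA blind signatures (toy key sizes for illustration only; real issuers use 2048+ bit keys):

```python
import hashlib
import secrets
from math import gcd

# --- Issuer's toy RSA key ---
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent

# --- Client: hash a secret nonce into a message and blind it ---
m = int.from_bytes(hashlib.sha256(b"my-secret-token-nonce").digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2    # random blinding factor
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n        # the issuer only ever sees this

# --- Issuer: signs the blinded value without learning m ---
blind_sig = pow(blinded, d, n)

# --- Client: unblind to recover an ordinary RSA signature on m ---
sig = (blind_sig * pow(r, -1, n)) % n

# Any verifier can check the token, but the issuer cannot link
# (blinded, blind_sig) to (m, sig), because r was uniformly random.
assert pow(sig, e, n) == m
```

The verifier's check is just ordinary RSA signature verification, so the issuer learns nothing at spend time beyond "this token was validly issued".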
The desired property of blind signatures is that, given a token, it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.
Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically and storing them in advance. Or something else I haven't thought of, probably. There is also a networking aspect to this: you will need a decentralized relay network that masks the origin of requests.
The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.
The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp but they're both Meta so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third party service and they require you to submit your unblinded signature to Meta which allows them to correlate you everywhere.
You would never do this as it defeats the entire purpose of using blind signatures to begin with.
It's not the user who wants any of this to begin with. "You would never do that" except that it's now the only way to be let into the service.
Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll pay for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support etc).
You can make variations on this for a wide spectrum of rate limiting behaviors.
But I also agree with xinayder's comment -- the anticompetitive, anti-privacy, invasive surveillance is unacceptable. There is a real risk with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity.
I think ZKP systems are intellectually interesting, and their lack of use helps make it clear that surveillance is really the point of these schemes, not security -- because most of the security (or more of it) could be achieved without most of the surveillance.
But allowing the Apple/Google duopoly to control who can read online is wrong, even if they did it in a way that better preserved privacy.
And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html
But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use?
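As an illustration of the concern, suppose (my assumption; the upthread scheme isn't shown here) the per-service value is derived as an HMAC of a user secret and service_domain_name. Two colluding services that both claim the same name then receive identical values and can join their logs:

```python
import hashlib
import hmac

def service_token(user_secret: bytes, service_domain_name: str) -> str:
    # deterministic per-service value: same claimed domain in -> same token out
    return hmac.new(user_secret, service_domain_name.encode(), hashlib.sha256).hexdigest()

secret = b"user-device-secret"
facebook = service_token(secret, "meta.com")
whatsapp = service_token(secret, "meta.com")   # colluding service claims the same name
assert facebook == whatsapp                    # the two accounts are now linked
assert service_token(secret, "example.org") != facebook
```

Unless something external (e.g. the browser checking the value against the actual origin) enforces that the claimed name matches the real service, collusion defeats the per-service separation.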
Not sending the same value twice would prevent them from being correlated, but now what are you supposed to do when you run out? Running you out could even be the goal: You burn a token to get a cookie and now you can't clear your cookies or you'll be denied a new one since you're out of tokens.
Of course, I think the effective purpose of Google's attestation feature is to invade everyone's privacy, which we should assume is part of why they don't use privacy-preserving techniques. Privacy-preserving techniques could still be abused, however.
Maybe they're even worse for humanity because they make bad schemes more palatable. I think right now I lean towards no: the public in general will currently tolerate the most invasive forms of these systems, so our problem isn't that these systems are being successfully resisted today, such that the resistance might be diminished by a scheme which is still bad but less bad.
Saying something like "the problem is not hardware attestation, but that they don't use ZKP".
You are normalizing the new behavior. You shouldn't. It doesn't matter if they use ZKP or the latest, secure technology for hardware attestation. The issue is hardware attestation. It's the same with age ID. The issue is not that Age ID is prone to data leaks, the problem itself is called Age ID.
I remember the WEI apologists trying to do the same thing to derail the argument. The problem is the goal, not the details. Just say no: DO NOT WANT!
Honestly, if the only way to secure your banking system is by locking down users' devices, there is something really bad going on at your end, security-wise. Your system should be secure even without locking down user hardware.
Once you have the attestation in place you have no guarantee who is going to get access to data like what apps are present on your device, and there will be nothing you can do to stop it.
Meanwhile, we could educate people against common scams.
How is this not just trading one smaller bad for a bigger bad? Why is this touted as an improvement?
This is a nonsensical remark because it's impossible to "prove" a counterfactual. I find stuff like this incredibly fucking annoying - please don't say this.
When online banking was first created it was an absolute chaos zone. Everyone was accessing it from desktop machines riddled with viruses and malware. There are endless stories of people discovering their life savings had been wired to Belarus by some malware running on their machine that had grabbed their banking credentials when they logged in.
https://www.google.com/search?q=site%3Akrebsonsecurity.com+b...
https://krebsonsecurity.com/2017/07/how-a-citadel-trojan-dev...
> U.S. prosecutors say Citadel infected more than 11 million computers worldwide, causing financial losses of at least a half billion dollars.
Half a billion dollars, by a single guy with a single virus!
Different parts of the world came up with different solutions for this. The US made all ACH payments reversible and international wires difficult, but that just meant the receiver paid for fraud instead of the person whose machine was full of viruses. This was an obviously bad set of incentives and hacky panic-based fix. Banks elsewhere in the world settled on providing users with authenticator devices that looked like small calculators into which you could type transaction details after plugging in a smart card. Malware could still steal all your financial data but it couldn't initiate transactions.
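The principle behind those calculator-style devices can be sketched as a MAC over the transaction details, computed with a key that never leaves the smart card (a simplified illustration of the idea, not any bank's actual chipTAN scheme):

```python
import hashlib
import hmac

def generate_tan(card_key: bytes, dest_account: str, amount_cents: int, counter: int) -> str:
    # MAC over the transaction details the user typed into the device
    msg = f"{dest_account}|{amount_cents}|{counter}".encode()
    mac = hmac.new(card_key, msg, hashlib.sha256).digest()
    # truncate to a 6-digit code the user copies back into the banking site
    return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

# The bank recomputes the MAC with its copy of the card key, so malware
# that silently swaps the destination account produces a TAN the bank rejects.
tan = generate_tan(b"card-secret-key", "DE44 1234 5678", 10_000, 1)
```

Stealing the user's credentials or reading the screen gains the malware nothing: without the card key it cannot produce a valid code for a transaction of its own choosing.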
Obviously, all this was a hack. What was needed was computers that were secure. Apple and the Android ecosystem eventually delivered this, and the calculator devices were retired in favour of smartphones with remote attestation. This was better in literally every way, for 100% of users. Firstly, it protects financial privacy and not just transaction initiation. Secondly, it's a lot more convenient to use a device that's always with you than a dedicated standalone single-use computer. Thirdly, adding remote attestation made no difference because that's what the calculator devices were doing anyway. Fourthly, even in the case of customers of small American banks that weren't capable enough to manage dedicated hardware rollouts, getting rid of fraud instead of pushing liability around allows for lower prices and fewer headaches.
So remote attestation is a non-negotiable requirement for digital banking of any form. When Microsoft didn't deliver, most banks preferred to literally manufacture and sell their customers single-use smartcard devices that remotely attested by having you manually copy numbers back and forth between screens. Or they hid the cost of rampant fraud in the price of other services until such time as Apple/Google saved them.
The price the owner pays for this is that they're locked out of their own expensive general-purpose computing device while still having to bear all the inconveniences (babysit OS updates, configure stuff, keep it charged, have the battery fail, buy a new device every five years, etc.)
In the meantime, the standalone chip-and-TAN device costs 30 bucks, is powered by three AAA batteries that hold their charge for five years, lives for 20 years, and never needs a single software update.
I'd choose the small single-purpose device over the enshittified, locked-down smartphone every single time.
> Smartphone HW attestation is better in every way
They're still prone to side-channel attacks like Spectre. Crypto wallets are practically immune because they're air-gapped.
[edit] I just realised that's Mike Hearn of early BTC fame. I suppose he would know what a crypto wallet is.
What I'm claiming is that banks have the freedom of offering their customers 2FA other than smartphone apps.
> Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical?
All the phones I own, including my daily driver, run some flavor of Debian. None of them support hardware attestation.
I'm in Europe, bound by PSD2, and own a couple of cheap, certified chip-and-TAN devices so I can do banking.
I think you're naively presuming the issue is simple and easy to address with a letter.
Regardless of your bank, payment systems such as Visa and Mastercard have blocked transactions involving mainstream online stores such as Steam because they unilaterally deemed some games to be problematic. You cannot fix this problem with an email.
We have had the world wide web for over 30 years, and for those three decades this was never a problem. Suddenly, we "need" to create new technologies that look like security features but are essentially just being used for evil, and are thus inherently bad.
It's not like these technologies were created for the greater good and then misappropriated by bad actors. They were proposed by bad actors in the first place; they cannot be inherently good.
captchas/spambots have been a problem since USENET
I don't think remote attestation (or even more so its umbrella technology, trusted computing) is nearly as specifically targeted as DRM.
> We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technology that seem to be security features, but are essentially just being used for evil, thus being inherently bad.
I agree that requiring remote attestation for generic web use is evil. It's way too heavy-handed an approach, better reserved for narrow, high-stakes uses.
I still don't think this somehow outright disqualifies the technology itself.
Are you seriously trying to suggest copyright infringement has not been an issue over the last 30 years? Both of them are solutions to problems that we've had over the last 30 years and were created for the greater good to solve problems that developers were facing.
DMCA is abused every. single. time.
Like literally hundreds of thousands, every day.
The policy is "I will not let you access this system unless your system software implements this technological protection."
A camera is technology. A security camera is policy, because it's a camera hooked up to policies about what to watch, record, and respond to, and it becomes a political effort when connected with laws about face masks, prohibitions on spray-painting the cameras, and allowances for privacy intrusions.
People have woken up to the truth as the pieces come together.
This article from 2022 is fun to look at and see how prescient it was: https://news.ycombinator.com/item?id=29859106
A TPM with measured boot (often deployed alongside Secure Boot) does exactly this; remote attestation is how Alice proves to Bob that her machine is in a trusted configuration and wasn't tampered with.
A TPM where the device owner can't take ownership of the root key is worse than no TPM at all.
(One argues that since you own both of them, you should simply set up the two servers yourself with a key of your own choosing, asymmetric or otherwise, and then restrict physical access to them.)
I can perhaps agree that the idea of SB can be good, but it was designed (and is used) in a bad way. Just look at how many distros do not support SB.
If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, being the primary user of identity issuance and verification on behalf of their citizens.
If your answer is “they should do that differently”, then you have a discussion about (for example) ZKP or biosigs or etc., such as the thread you’re replying to.
Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter.
Hardware attestation often also has problems of centralization, but that's a separate issue again.
By just labeling it an abstract bad thing without seeing nuance, I'm afraid you won't convince those in power to pass or block these laws, or convince your fellow voters which efforts to support.
The surveillance of the future will be powered by the things we produce today. If the accepted algorithms leave cookies, those cookies will be used, tracked, and monetized. The bad part is the forced verification required to do things on the internet. Making that start at the hardware is a lock-in that's not okay. Business will always own the services, and making standards that trade our practical liberty for the sake of security is a very compromised position, in my opinion.
And it does start with age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit.
Oh hell you do! Google's profit comes from ADS! It's in their interest to surveil, track, and deanonymize TO SELL ADS.
I don't know about you but I feel humiliated being forced to look at ads all day.
Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.
Hardware attestation is a surveillance mechanism. If China was enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in a hypothetical alternate reality. Do you want to live in a dystopia?
Who is?
> But when the US does it [...]
I don't live in the US, and while US is often setting global trends, in this case I don't think that's actually that likely, unless it somehow goes significantly better (i.e., the benefits actually vastly exceed the collateral damage to anonymity and resiliency via heterogeneity) than expected.
If all the internet were static content, that wouldn't be much of a problem. But we live in a world where packets coming to your service result in significant state changes to your database (such as user-generated content).
I suspect that we are currently in the valley of do-something-about-it on the graph which is why you see all this angst from the big players. Would Google really care if automated programs were so good that they were approximating real humans to such an extent that absolutely no one can tell? I suspect they would not only be happy with such a state of affairs, they would join in.
I don't agree that it's not a problem.
Like, imagine that someone managed to extract the key from a specific device and distributed it in a software implementation to fake attestation. Now Google needs to revoke that particular key to disallow its use. This is an obvious requirement.
Also, I recall a discussion on GrapheneOS's forums that the DRM ID is not only retained there, but stays the same across profiles.
I was referring to the static private key that is stored in the silicon. At any time an application can initiate a license request process using DRM APIs, which will elicit an unchangeable HWID from your device. The only protection is that it will be encrypted for an authorized license server's private key, so collusion may be required (intelligence agencies almost certainly sourced 'authorized' private keys for themselves). Google or Apple also have the option to authorize keys for themselves. In 'theory' all such keys should be stored in "trusted execution environments" on license servers and not divulge client identities, for whatever that's worth: <https://tee.fail>.
> Content Decryption Module (CDM) in your browser or Mobile SDK generates the license challenge
<https://go.buydrm.com/thedrmblog/the-anatomy-of-a-multi-drm-...>
The "license challenge" (it might be a mistake; I think it's supposed to be a license request) is just a packet (that can be saved and later sent anywhere), and it contains the encrypted certificate which doubles as your HWID. An adversary needs to control the private key of the license "server" the challenge is for (this is a privacy measure introduced to prevent the CDM from offering the HWID to anyone who wants it). Now if you want the HWID you need to work for it (one time) by stealing a private key, bribing/blackmailing employees, or issuing secret edicts ("here is a new license server we need a certificate for"). Working for Hollywood is also an option, I suppose.
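A toy model of that privacy measure, with throwaway numbers (not the real Widevine protocol): the HWID travels encrypted to a specific license server's key, so only the holder of that private key can read it -- and a saved challenge can be decrypted offline by anyone who later obtains the key:

```python
# Toy RSA key standing in for an "authorized license server" keypair.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # the secret worth stealing

hwid = 123456  # stand-in for the device's unchangeable identifier

# CDM side: the challenge embeds the HWID encrypted to the server's public key.
# The blob itself reveals nothing, and can be saved and replayed anywhere.
challenge_blob = pow(hwid, e, n)

# Anyone holding the server's private key (legitimately or not) recovers it.
recovered = pow(challenge_blob, d, n)
assert recovered == hwid
```

This is why the comment stresses that the work of obtaining a server private key only has to be done once: every challenge ever captured for that server then becomes linkable.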
Pirates sacrifice devices when they publish ripped content due to the certificate being revoked after Hollywood downloads the torrent and by doing things like this:
> For large-scale per-viewer, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user.
<https://docs.aws.amazon.com/wellarchitected/latest/streaming...>
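The quoted approach boils down to encoding a user/session id into which variant of each media segment gets served, then reading the id back out of a leaked copy (a bare-bones sketch; real systems add error correction and make the A/B variants visually indistinguishable):

```python
def segment_pattern(user_id: int, num_segments: int) -> list[str]:
    # serve variant "A" or "B" of each segment according to the bits of the user id
    bits = [(user_id >> i) & 1 for i in range(num_segments)]
    return ["B" if b else "A" for b in bits]

def recover_user_id(pattern: list[str]) -> int:
    # a ripped copy reveals which variants were stitched together, hence the id
    return sum(1 << i for i, v in enumerate(pattern) if v == "B")

pattern = segment_pattern(42, 16)
assert recover_user_id(pattern) == 42
```

With 16 segments this distinguishes 65,536 viewers; the torrent itself becomes the evidence pointing back at the account that produced it.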