upvote
> The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting

I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them then two services can tell that two requests came from the same party (by both pretending to be the same service) and therefore correlate them.

reply
The way it works with blind signatures is that the server knows which device comes to it to request a blinded signature, and it can rate limit how often that device asks.

But once you get the response you can unblind the signature and obtain the token (which is just the unblinded signature). The token can then be used exactly once, because it's blacklisted after use (and it expires before, say, the next day starts).

The desired property of blind signatures is that given a token it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.

Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically and storing them in advance. Or by something else I haven't thought of, probably. There is also a networking aspect to this: you need a decentralized relay network that masks the origin of requests.
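To prime intuition for the flow above, here's a toy sketch using textbook RSA blind signatures (tiny primes and an unpadded hash, purely for illustration -- not production crypto, where you'd use large keys and an RSA-FDH-style scheme):

```python
# Toy RSA blind-signature flow: issuer rate-limits signing per device,
# but a redeemed token cannot be linked back to any blinded request.
import hashlib
import secrets
from math import gcd

p, q = 1000003, 1000033          # issuer's toy RSA key pair
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(message: bytes):
    """Client: hash the message and blind it with a random factor r."""
    m = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r, m

def sign_blinded(blinded: int) -> int:
    """Issuer: signs without seeing m; per-device rate limiting happens here."""
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """Client: strip the blinding factor, leaving an ordinary RSA signature."""
    return (blinded_sig * pow(r, -1, n)) % n

def verify(token: int, m: int) -> bool:
    """Any service: check the token without learning which request produced it."""
    return pow(token, e, n) == m

blinded, r, m = blind(b"one-time access token")
token = unblind(sign_blinded(blinded), r)
assert verify(token, m)
```

Since every blinding factor `r` is equally likely, the issuer sees only uniformly random blinded values, which is where the information-theoretic unlinkability comes from.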

reply
> But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature).

The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.

The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp but they're both Meta so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third party service and they require you to submit your unblinded signature to Meta which allows them to correlate you everywhere.

reply
> you present the same unblinded signature to both services

You would never do this as it defeats the entire purpose of using blind signatures to begin with.

reply
That's the point. You go to example.com and get the "sign in with Google" box as the only login option, but now you can't have separate uncorrelated Google accounts. Or if browsers do it automatically then every site does a background load or redirect through adtracker.nsa so you're presenting the same token on every service.

It's not the user who wants any of this to begin with. "You would never do that" except that it's now the only way to be let into the service.

reply
deleted
reply
I'm as biased against cryptocurrency as everyone, but couldn't we have the requestor do a bit of mining work to mint that initial id? I mean, if the service is actually making a bit of money from each request, the need for rate limiting just vanishes, right?
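If by "mining" you mean client puzzles rather than an actual blockchain, a hashcash-style sketch looks like this (the difficulty and challenge string are arbitrary choices for illustration):

```python
# Hashcash-style proof of work: minting an id costs the requester many
# hash evaluations; checking it costs the service exactly one.
import hashlib

def mint(challenge: bytes, difficulty_bits: int = 16) -> int:
    """Grind nonces until sha256(challenge || nonce) has enough leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check(challenge: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """One hash to verify, regardless of how expensive minting was."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mint(b"example.com/2026-02-03")   # ~2^16 hashes on average
assert check(b"example.com/2026-02-03", nonce)
```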
reply
If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living third world countries. They will even outbid some Americans.

Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll pay for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support etc).

reply
Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other, or to a specific person, because it doesn't have anyone else's private keys.

You can make variations on this for a wide spectrum of rate limiting behaviors.
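A minimal sketch of that scheme, with plain SHA-256 standing in for the provable reveal (the key and domain are made up; a real design would reveal the token inside a zero-knowledge proof rather than computing it in the clear):

```python
# H(private_key|domain|date|counter) usage tokens with a 4-bit counter.
import hashlib
from datetime import date

DAILY_LIMIT = 16  # 4-bit counter => at most 16 tokens per domain per day

def usage_token(private_key: bytes, domain: str, day: str, counter: int) -> str:
    assert 0 <= counter < DAILY_LIMIT
    material = b"|".join(
        [private_key, domain.encode(), day.encode(), bytes([counter])])
    return hashlib.sha256(material).hexdigest()

seen = set()  # service side: remember redeemed tokens to block reuse

def redeem(token: str) -> bool:
    if token in seen:
        return False
    seen.add(token)
    return True

key, today = b"user-secret-key", date(2026, 2, 3).isoformat()
tokens = [usage_token(key, "example.com", today, i) for i in range(DAILY_LIMIT)]
assert all(redeem(t) for t in tokens)   # the 16 daily uses all succeed
assert not redeem(tokens[0])            # a 17th use (reuse) is rejected
assert len(set(tokens)) == DAILY_LIMIT  # all 16 tokens are distinct
```

The service only ever sees opaque hashes; without the private key it cannot recompute them, so it cannot link tokens across days, domains, or users.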

But also I agree with xinayder's comment -- the anti-competitive, anti-privacy, invasive surveillance is unacceptable. There are a lot of risks with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity.

I think ZKP systems are intellectually interesting, and their lack of use helps make it clear that surveillance is really the point of these schemes, not security, because most of the security (or more) could be achieved without most of the surveillance.

But allowing the Apple/Google duopoly to control who can read online is wrong, even if they did it in a way that better preserved privacy.

And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html

reply
> define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter)

But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use?

reply
Because-- in this hypothetical-- your user agent restricts the usage to the name displayed on the screen, and also because your agent won't send the same value twice (it'll increment the counter or tell you that it's run out of tokens).
reply
Requiring the name to be displayed isn't going to do much for ordinary people. They mostly wouldn't look at it and even if they did, "continue as-is or no service for you" means they continue as-is.

Not sending the same value twice would prevent them from being correlated, but now what are you supposed to do when you run out? Running you out could even be the goal: You burn a token to get a cookie and now you can't clear your cookies or you'll be denied a new one since you're out of tokens.

reply
I'll be the first to admit that the technology can be abused-- that it's even ripe for abuse. That sort of problem can be avoided by allowing 'enough'-- and if the goal is just to prevent a site from being flooded out, 'enough' could be pretty high.

Of course, I think the effective purpose of google's attest feature is to invade everyone's privacy which we should assume is part of why they don't use privacy preserving techniques. Privacy preserving techniques could still be abused, however.

Maybe they're even worse for humanity because they make bad schemes more palatable. Right now I lean towards no: the public will currently tolerate the most invasive forms of these systems, so our problem isn't that they're being successfully resisted and that the resistance might be diminished by a scheme which is still bad, just less bad.

reply
Can we stop normalizing being surveilled online and on our devices?

Saying something like "the problem is not hardware attestation, but that they don't use ZKP".

You are normalizing the new behavior. You shouldn't. It doesn't matter if they use ZKP or the latest, most secure technology for hardware attestation. The issue is hardware attestation. It's the same with Age ID. The issue is not that Age ID is prone to data leaks; the problem itself is called Age ID.

reply
Hell yes. I was going to post the same comment. I don't give a flying fuck how it's implemented. Remote attestation is inherently evil.

I remember the WEI apologists trying to do the same thing to derail the argument. The problem is the goal, not the details. Just say no: DO NOT WANT!

reply
The biggest problem is banking system. "Don't want - no bank for you". That's the problem.
reply
Let them know. Write a letter to the CEO. And vote with your wallet and switch banks if you can. There's always a bank willing to offer you a non-app 2FA scheme.
reply
Banks don’t do this because of profit. They do it because of decades of laws pushing in this direction: anti-money laundering, know your customer, digitalised currency, abandoning cash, preventing tax evasion, etc. It’s been getting more extensive over time.
reply
None of the things you mentioned inherently require the user to own (and babysit) an expensive general-purpose computing device produced by tracking-obsessed adtech giants and with software obsolescence built into the product.
reply
Do you think banks are using attestation gratuitously? It helps prevent a lot of fraud. You are opposing something that saves people’s savings every day just because you think it takes “freedom” away from a few hobbyists. Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical?
reply
Can you show me examples where locking down an OS has prevented fraud in banking?

Honestly, if the only way to secure your banking system is by locking down users' devices, there is something really bad going on at your end, security-wise. Your system should be secure even without locking down user hardware.

reply
One of the threat models is that a fraudster tricks a non-technical user into installing malware, which then manipulates the user interface so that the next time the user tries to send money to Bob, it actually goes to Mallory. That's a legitimate concern, and one of the reasons PSD2 mandates that all 2FA devices have a display showing the user where they're about to send the money and how much.
reply
And one of the threat models that police use in the US is tracking women suspected of going for abortions through the use of road cameras, and other surveillance methods.

Once you have the attestation in place you have no guarantee who is going to get access to data like what apps are present on your device, and there will be nothing you can do to stop it.

Meanwhile, we could educate people against common scams.

How is this not just trading one smaller bad for a bigger bad? Why is this touted as an improvement?

reply
> Can you show me examples where locking down an OS has prevented fraud in banking?

This is a nonsensical remark because it's impossible to "prove" a counterfactual. I find stuff like this incredibly fucking annoying - please don't say this.

reply
Look at the last 30 years of computing history?

When online banking was first created it was an absolute chaos zone. Everyone was accessing it from desktop machines riddled with viruses and malware. There are endless stories of people discovering that their life savings had been wired to Belarus by some malware that had grabbed their banking credentials when they logged in.

https://www.google.com/search?q=site%3Akrebsonsecurity.com+b...

https://krebsonsecurity.com/2017/07/how-a-citadel-trojan-dev...

> U.S. prosecutors say Citadel infected more than 11 million computers worldwide, causing financial losses of at least a half billion dollars.

Half a billion dollars, by a single guy with a single virus!

Different parts of the world came up with different solutions for this. The US made all ACH payments reversible and international wires difficult, but that just meant the receiver paid for fraud instead of the person whose machine was full of viruses. This was an obviously bad set of incentives and a hacky, panic-based fix. Banks elsewhere in the world settled on providing users with authenticator devices that looked like small calculators, into which you could type transaction details after plugging in a smart card. Malware could still steal all your financial data, but it couldn't initiate transactions.
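For intuition, the core of what those calculator devices do can be sketched like this (heavily simplified -- real chipTAN-class devices run EMV smartcard protocols, not a raw HMAC, and all keys and values here are invented):

```python
# The smartcard holds a secret and MACs the transaction details the user
# types in. Malware on the PC can steal credentials, but it cannot forge
# a valid TAN for a transaction the user never keyed into the device.
import hashlib
import hmac

def generate_tan(card_secret: bytes, dest_account: str, amount_cents: int) -> str:
    msg = f"{dest_account}:{amount_cents}".encode()
    mac = hmac.new(card_secret, msg, hashlib.sha256).digest()
    # Truncate to a short decimal code the user can copy into the web form.
    return str(int.from_bytes(mac[:4], "big") % 1_000_000).zfill(6)

secret = b"stored-inside-the-smartcard"   # invented value
tan = generate_tan(secret, "DE89370400440532013000", 15000)
# The bank recomputes the MAC over the transaction it actually received;
# a tampered destination or amount would, with overwhelming probability,
# produce a different 6-digit code, so the transfer would be rejected.
assert generate_tan(secret, "DE89370400440532013000", 15000) == tan
```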

Obviously, all this was a hack. What was needed was computers that were secure. Apple and the Android ecosystem eventually delivered this, and the calculator devices were retired in favour of smartphones with remote attestation. This was better in literally every way, for 100% of users. Firstly, it protects financial privacy and not just transaction initiation. Secondly, it's a lot more convenient to use a device that's always with you than a dedicated standalone single-use computer. Thirdly, adding remote attestation made no difference because that's what the calculator devices were doing anyway. Fourthly, even in the case of customers of small American banks that weren't capable enough to manage dedicated hardware rollouts, getting rid of fraud instead of pushing liability around allows for lower prices and fewer headaches.

So remote attestation is a non-negotiable requirement for digital banking of any form. When Microsoft didn't deliver most banks preferred to literally manufacture and sell their customers single-use smartcards that remotely attested by you manually copying numbers back and forth between screens. Or they hid the cost of rampant fraud in the price of other services until such a time that Apple/Google saved them.

reply
> Secondly, it's a lot more convenient to use a device that's always with you than a dedicated standalone single-use computer.

The price the owner pays for this is that they're locked out of their own expensive general-purpose computing device while still having to bear all the inconveniences (babysit OS updates, configure stuff, keep it charged, have the battery fail, buy a new device every five years, etc.)

In the meantime, the standalone chip-and-TAN device costs 30 bucks, is powered by three AAA batteries that hold their charge for five years, lives for 20 years, and never needs a single software update.

I'd choose the small single-purpose device over the enshittified, locked-down smartphone every single time.

reply
This reminds me of crypto wallets. I also dispute mike_hearn's:

> Smartphone HW attestation is better in every way

They're still prone to side-channel attacks like Spectre. Crypto wallets are practically immune because they're air-gapped.

[edit] I just realised that's Mike Hearn of early BTC fame. I suppose he would know what a crypto wallet is.

reply
> Do you think banks are using attestation gratuitously?

What I'm claiming is that banks have the freedom of offering their customers 2FA other than smartphone apps.

> Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical?

All the phones I own, including my daily driver, run some flavor of Debian. None of them support hardware attestation.

I'm in Europe, bound by PSD2, and own a couple of cheap, certified chip-and-TAN devices so I can do banking.

reply
> Let them know. Write a letter to the CEO.

I think you're naively presuming the issue is simple and easy to address with a letter.

Regardless of your bank, payment systems such as Visa and Mastercard have blocked transactions involving mainstream online stores such as Steam because they unilaterally deemed some games to be problematic. You cannot fix this problem with an email.

reply
Remote attestation is a technology, not a policy or a political effort, so it can't be inherently evil. You can disagree with all its known or proposed uses, but then I think it makes more sense to name these.
reply
DRM is a technology and is inherently evil. Web attestation is DRM for the web, and is inherently evil. Age ID is a technology and is inherently evil.

We have over 30 years of the world wide web, and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technologies that seem to be security features but are essentially just being used for evil, thus being inherently bad.

It's not like these technologies were created for the greater good and misappropriated by bad actors. They were proposed by bad actors in the first place; they cannot be inherently good.

reply
>We have over 30 years of the world wide web and for these more than 3 decades this was never a problem.

Captchas and spambots have been a problem since USENET.

reply
DRM is arguably a specific use of various generic technology ranging from whitebox cryptography to trusted computing.

I don't think remote attestation (or even more so its umbrella technology, trusted computing) is nearly as specifically targeted as DRM.

> We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technology that seem to be security features, but are essentially just being used for evil, thus being inherently bad.

I agree that requiring remote attestation for generic web use is evil. It's way too heavy-handed an approach, better reserved for narrower, higher-stakes uses.

I still don't think this somehow outright disqualifies the technology itself.

reply
>We have over 30 years of the world wide web and for these more than 3 decades this was never a problem.

Are you seriously trying to suggest copyright infringement has not been an issue over the last 30 years? Both of them are solutions to problems we've had over those 30 years, and they were created for the greater good, to solve problems that developers were actually facing.

reply
Movies, games, and music are multi-billion-dollar industries; in what way have they struggled in a world where endless piracy is possible?
reply
Tell me when the DMCA has worked in favor of small companies/developers.

DMCA is abused every. single. time.

reply
Individual self employed photographers successfully use the DMCA to get significant payouts from large publishers and news organisations every single day.

Like literally hundreds of thousands, every day.

reply
I think people are too quick to dismiss the possibility that some technologies are just bad and harmful, and that we can't shrug off responsibility by saying "I'm just making a neutral technology; the people using it are the ones causing harm."
reply
Remote attestation is a policy, not a technology.

The policy is "I will not let you access this system unless your system software implements this technological protection."

A camera is technology. A security camera is policy, because it's a camera hooked up to policies about what to watch, record, and respond to; and it becomes a political effort when connected with laws about face masks, prohibiting spray-painting of the cameras, and allowing privacy intrusions.

reply
Then explain why RA was invented? It is inherently against user freedom, just like "secure" boot and the rest of the corporate-authoritarian crap.

People have woken up to the truth as the pieces come together.

This article from 2022 is fun to look at and see how prescient it was: https://news.ycombinator.com/item?id=29859106

reply
I have 2 servers, Alice and Bob. Bob has a secret, and I want Bob to be able to share that secret with Alice. However, I want Alice to be able to prove to Bob that it is actually Alice, that it is running the correct AliceOS, and that AliceOS was loaded on bare-metal Alice without nefarious pre-boot or virtualization hooks.

A TPM with measured boot (Secure Boot) does exactly this: remote attestation is how Alice proves to Bob that it is in a trusted configuration and wasn't tampered with.
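A toy model of that Alice/Bob flow, with plain hashing standing in for PCR extension and an HMAC key standing in for the TPM's attestation key (all component names here are invented):

```python
# Measured boot + quote, modeled with plain hashes. A real TPM extends
# PCRs in hardware and signs quotes with a key that never leaves the chip.
import hashlib
import hmac

def extend(pcr: bytes, component: bytes) -> bytes:
    """PCR extend: new = H(old || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measured_boot(components):
    pcr = b"\x00" * 32
    for c in components:
        pcr = extend(pcr, c)
    return pcr

ATTESTATION_KEY = b"provisioned-into-alice-tpm"  # invented stand-in

def quote(pcr: bytes, nonce: bytes) -> bytes:
    """Alice signs (PCR, nonce); a fresh nonce prevents replaying old quotes."""
    return hmac.new(ATTESTATION_KEY, pcr + nonce, hashlib.sha256).digest()

# Bob knows what an untampered AliceOS boot chain should measure to.
good_chain = [b"firmware-v7", b"bootloader-v3", b"AliceOS-kernel-6.1"]
expected_pcr = measured_boot(good_chain)

nonce = b"fresh-random-nonce"
alice_quote = quote(measured_boot(good_chain), nonce)
assert hmac.compare_digest(alice_quote, quote(expected_pcr, nonce))

# A chain with a pre-boot hook measures differently, so its quote fails.
evil_chain = [b"firmware-v7", b"evil-hypervisor", b"AliceOS-kernel-6.1"]
assert measured_boot(evil_chain) != expected_pcr
```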

reply
As someone who wanted to improve users' security, that's exactly why I find this thread's fanatical opposition to attestation baffling. Nearly everyone uses a device that supports hardware attestation. It's the best available tool to protect users from malware.

We do implement a fallback that lowers security but lets the few users whose devices can't attest properly continue. That really does lower security, though, since we can't even know whether the device's cryptography is itself compromised, and hence can't really trust anything it sends.

If you have a different solution, do share it! I would love to use something you guys don't find abhorrent! But until then I don't really see the reason for all this negativity.
reply
Sadly, the problem isn't the TPM or Remote Attestation. It's Google et al choosing to only talk to devices and software they like without concern for what the user wants or trusts. Compounded by everyone else just going along with it.

A TPM where the device owner can't take ownership of the root key is worse than no TPM at all.

reply
If the price to pay for security is freedom, then let users' devices be insecure. With time, they will learn good security hygiene. And if they don't, maybe they don't deserve it.
reply
I would be the safest citizen, free from crime and violence, if I were imprisoned in my house for life.
reply
That's the academic viewpoint, but in practice it's used for far more hostile purposes.

(One argues that since you own both of them, you should simply set up the two servers yourself with a key of your own choosing, asymmetric or otherwise, and then restrict physical access to them.)

reply
And exactly how many Linux distros support Secure Boot out of the box? Just a few.

I can perhaps agree that the idea of SB can be good, but it was designed (and is used) in a bad way. Just look at how many distros do not support SB.

reply
"It’s a poor atom blaster that won’t point both ways."
reply
How should a government act to prohibit people from misrepresenting their characteristics online when accessing services for which that government has formally written characteristic-based regulations into law?

If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, being the primary user of identity issuance and verification on behalf of their citizens.

If your answer is “they should do that differently”, then you have a discussion about (for example) ZKP or biosigs or etc., such as the thread you’re replying to.

Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter.

reply
You're not necessarily being surveilled just because you're forced to authenticate yourself. It often is the case in practice, but it's not inherent, and mixing the two up makes the discussion too imprecise for a technical forum.

Hardware attestation often also has problems of centralization, but that's something else as well.

By just labeling it an abstract bad thing without seeing nuance, I'm afraid you won't convince those in power to pass or block these laws, or convince your fellow voters which efforts to support.

reply
I think labeling this an abstract problem, when all the existing implementations have concrete (but different) problems, is a bit of a motte-and-bailey fallacy.

The surveillance of the future will be powered by the things we produce today. If the accepted algorithms leave cookies, those cookies will be tracked and monetized. The bad part is the forced verification to do things on the internet. Making that start at the hardware is a lock-in that's not okay. Businesses will always own the services, and making standards that trade our practical liberty for the sake of security is a very compromised position, in my opinion.

And it does start with age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit.

reply
> You're not necessarily being surveiled just because you're forced to authenticate yourself.

Oh hell you are! Google's profit comes from ADS! It's in their interest to surveil, track, and deanonymize TO SELL ADS.

reply
A counterexample is not a valid refutation of the general point. It can be both true that Google will deanonymize you, given the chance, and that anonymous attestation is possible.
reply
Having thought about ads: what is the ideal feedback loop from manufacturers to consumers? How best to distribute information about who can manufacture what, at what cost/price, what it does, and when it is appropriate for consumers to receive or pull info from where? And if it ends up being a monopoly of one centralized system, how do you allow a competitor to break through without ads?
reply
Ads don't need to collect user information and form profiles. I don't understand why we must capitulate to more and more invasive advertising.

I don't know about you but I feel humiliated being forced to look at ads all day.

reply
> It often is the case practically, but it's not inherent

Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.

Hardware attestation is a surveillance mechanism. If China were enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in a hypothetical alternate reality. Do you want to live in a dystopia?

reply
> Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.

Who is?

> But when the US does it [...]

I don't live in the US, and while US is often setting global trends, in this case I don't think that's actually that likely, unless it somehow goes significantly better (i.e., the benefits actually vastly exceed the collateral damage to anonymity and resiliency via heterogeneity) than expected.

reply
Those in power who need convincing are the same ones pushing for mass surveillance online.
reply
There is a problem where it's becoming increasingly hard to determine which internet packets coming to your service are at the behest of a human going about normal activities, and which are from an automated program.

If the internet were nothing but static content, that wouldn't be much of a problem. But we live in a world where packets coming to your service result in significant state changes to your database (such as user-generated content).

I suspect that we are currently in the valley of do-something-about-it on the graph which is why you see all this angst from the big players. Would Google really care if automated programs were so good that they were approximating real humans to such an extent that absolutely no one can tell? I suspect they would not only be happy with such a state of affairs, they would join in.

reply
That's not a problem at all. It's an artificially created distraction, created to manufacture consent, by those pushing for this shit.
reply
> Requiring authorized silicon (and software) isn't even the biggest problem here.

It is indeed the biggest issue. It prevents me from owning and using the hardware I pay for, own, or make myself. It switches the personal computer as we know it from being open to being proprietary and owned by 2 large US corporations.

I don't agree that it's not a problem.

reply
Did you just read “not even the biggest problem” as “not a problem”?
reply
I mean it's THE biggest one.
reply
Can you revoke certificate for a specific device using privacy schemes?

Like imagine that someone managed to extract the key from a specific device and distributed that key in a software implementation to fake attestation. Now Google needs to revoke that particular key to disallow its usage. This is an obvious requirement.

reply
Especially if the device in question is linked to an enemy of the state and the people.
reply
Would like to read a writeup on this, I was certain it was going to be something like this from the app's announcement.

Also I recall a discussion on Graphene's forums that DRM ID is not only retained there, but stays the same across profiles.

reply
I simplified the process in my description. The DRM ID Android has is not what I was referring to.

I was referring to the static private key stored in the silicon. At any time an application can initiate a license request using the DRM APIs, which will elicit an unchangeable HWID from your device. The only protection is that it will be encrypted for an authorized license server's private key, so collusion may be required (intelligence agencies almost certainly sourced 'authorized' private keys for themselves). Google or Apple also have the option to authorize keys for themselves. In 'theory' all such keys should be stored in "trusted execution environments" on license servers and not divulge client identities, for whatever that's worth: <https://tee.fail>.

reply
Citation?
reply

    Content Decryption Module (CDM) in your browser or Mobile SDK generates the license challenge
<https://go.buydrm.com/thedrmblog/the-anatomy-of-a-multi-drm-...>

The "license challenge" (it might be a mistake I think it's supposed to be a license request) is just a packet (that can be saved and later sent to anywhere) and it contains the encrypted certificate which doubles as your HWID. An adversary needs to control the private key of the license "server" the challenge is for (this is a privacy measure introduced to prevent the CDM from offering the HWID to anyone who wants it). Now if you want the HWID you need to work for it (one time) by stealing a private key, bribing/blackmailing employees or issuing secret edicts ("here is a new license server we need a certificate for"). Working for Hollywood is also an option I suppose.

Pirates sacrifice devices when they publish ripped content, because the certificate is revoked after Hollywood downloads the torrent, and because of things like this:

    For large-scale per-viewer, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user.
<https://docs.aws.amazon.com/wellarchitected/latest/streaming...>
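The per-user segment pattern that quote describes can be sketched like so (the filenames and derivation are invented for illustration, not any vendor's actual scheme):

```python
# Encode each user id as a pattern of A/B segment variants; a leaked
# copy of the stream then reveals which session produced it.
import hashlib

NUM_SEGMENTS = 32

def user_pattern(user_id: str, key: bytes):
    """Derive a stable 32-bit variant pattern (0 = variant A, 1 = variant B)."""
    digest = hashlib.sha256(key + user_id.encode()).digest()
    bits = int.from_bytes(digest[:4], "big")
    return [(bits >> i) & 1 for i in range(NUM_SEGMENTS)]

def serve_playlist(user_id: str, key: bytes):
    """Origin: serve each viewer a uniquely identifiable mix of segments."""
    return [f"segment_{i:03d}_{'AB'[b]}.ts"
            for i, b in enumerate(user_pattern(user_id, key))]

def trace_leak(leaked_playlist, suspects, key: bytes):
    """Read the variant bits back out of a leaked copy and match a user."""
    leaked_bits = [0 if name.endswith("_A.ts") else 1 for name in leaked_playlist]
    for user in suspects:
        if user_pattern(user, key) == leaked_bits:
            return user
    return None

key = b"origin-watermark-key"
leak = serve_playlist("subscriber-1234", key)
assert trace_leak(leak, ["subscriber-0001", "subscriber-1234"], key) == "subscriber-1234"
```

In a real deployment the variant mark is embedded in the media itself (so re-encoding doesn't strip it), but the tracing logic is the same idea.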
reply
Are these the kinds of issues Privacy Pass intends to fix? If so, what carrot and/or stick will get it adopted?
reply