Weird rant. TPMs are great. The modern computing landscape needs a safe place to put secrets. It's what made the iPhone (Secure Enclave is effectively a TPM) years ahead of Android in terms of security.

The problem isn't the TPM, but attestation. As soon as a TPM you don't control is required to get access to something, bad things happen.

Hell, in actuality, the problem isn't even attestation, it's policy. The EU Parliament (the one the people vote for; the Commission are cronies) might eventually force corporations into something more citizen-friendly. None of Apple, Google, or Microsoft is going to drop a market that big.

reply
Requiring "tokens" stored in "trusted modules" and 7-factor-auth for everything is not progress, it's theater. The biggest achievement of the security orthodoxy was locking me out of my email, by requiring me to read a code sent to my email to log into my email.

I -- literally -- do not care about a single "account" in any "service" I use aside from my email and bank account. Most people would add a few social media accounts to that list.

You don't need a "place to put secrets". Your iPhone app does not do anything important enough to require a "trusted chain" of cryptographic bullshit, just use a password and Google/Apple login.

reply
What about Apple Wallet?

The reality is that there is software dependent on the user being unable to modify it. This safeguards the server against fraudulent users.

reply
Never trust user input. The users already can't modify the server.

And what actual applications did you have in mind that warrant throwing everybody under the bus? (By that I mean: some applications (allegedly) need it, so it gets forced on everyone.)

reply
Passkeys are better passwords. They need a TPM.
reply
> Passkeys are better passwords. They need a TPM.

Passkeys absolutely do not need a TPM.

You can get passkey support in any browser with a simple 1Password plugin, without any TPM hardware.

The same way you could get a TOTP app on your phone without any TPM.
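
For instance, the TOTP algorithm behind most authenticator apps (RFC 4226 / RFC 6238) is a few lines of pure software; here's a minimal sketch using only the Python standard library:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed to the current 30-second time window."""
    t = time.time() if t is None else t
    return hotp(secret, int(t // step), digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
```

No TPM anywhere: the shared secret sits in ordinary process memory, which is exactly the point. Hardware only adds protection against the key being extracted.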

TPMs are just an extra security layer for most uses.

They are mainly a necessity for shady business like DRM.

reply
> Passkeys absolutely do not need a TPM.

They do not, but how does the service you’re using know your passkey is secure? For all they know, you’re just some gullible user who clicks through every phishing email you get. You’re dumb, weak, helpless; they gotta protect you from this scary world out there, and maybe from yourself as well.

They can’t do that if they allow your passkey to be stored anywhere you control. KeepassXC? The second you type in your master password the keylogger will snatch it, and your entire database with it!

Okay, maybe you’re some hot shot cryptographer, you’re using a TKey (think Yubikey, except you have full control), and there’s no way your secret key leaves it even if your main computer is fully compromised. Well, the service doesn’t know that. All they see is your public key and a matching signature.

So, sorry Mr. Security Researcher, we’re gonna have to play it safe and require you to use approved hardware only. Too many (wo)man-children out there must be protected, and we have no way to tell you’re not one of them, so it’s remote attestation or you’re out. What’s online shopping worth anyway, when you can just cross the ocean?

---

Just so we’re clear, I agree with you here. But don’t forget there are two kinds of passkeys out there: with or without the evil remote attestation. And many companies will push for the remotely attested kind, using the exact argument I used above, except with a straight face.

Or they will just present a false dichotomy: remotely attested passkeys on the one hand, short, easy-to-guess, reused-everywhere passwords on the other.

reply
> For all they know, you’re just some gullible user who clicks through every phishing email you get.

Passkeys are non-phishable. That's part of their schtick. I'm not a huge passkey fan myself, but this is a real benefit.

reply
Yes, but that’s not the threat model I was alluding to. The threat model was, you get tricked into executing malware, that will steal your passkey (and your entire password database in fact), and log your master password as soon as you use it.

When the passkey is protected behind an HSM (TPM, Yubikey, Tkey…), even a compromise of your main computer can’t steal it. Attackers can still temporarily log in on your behalf, but they can’t do anything with your passkey as long as your computer is turned off. Which means you can un-pwn yourself out of this situation by reinstalling everything (but do keep your HSM!).
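
To make that distinction concrete, here's a toy challenge-response sketch. (The class name and API are made up for illustration, and HMAC stands in for the asymmetric signature a real TPM/Yubikey would produce, just to keep this stdlib-only.)

```python
import hmac
import hashlib
import secrets

class ToyHSM:
    """Toy stand-in for a TPM/Yubikey/TKey: the key is generated inside
    and the object only ever hands out signatures, never the key itself.
    (A real HSM would use an asymmetric scheme such as ECDSA or Ed25519;
    HMAC here is an illustrative substitute.)"""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # lives and dies inside the HSM

    def sign(self, challenge: bytes) -> bytes:
        # The host hands in a challenge and gets back a signature.
        # Nothing in the API ever exposes self._key.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

# Malware on the host can still *ask* the device to sign login challenges
# (temporary impersonation), but it cannot exfiltrate the key, so a clean
# reinstall plus the same HSM un-pwns the account.
```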

Overall, we have several levels of security here:

- Weak password (potentially reused everywhere). Phished once, pwned everywhere. Not to mention password database leaks.

- Very strong unique password from your password vault (KeepassXC). Note that with automatic login, password managers may provide good phishing resistance. Manual copy pasta is still vulnerable, but at least you only compromise that one account.

- Passkey stored in your password database. Phishing proof as you say, but falls to a keylogger.

- Passkey stored in a hardware security module. Can’t ever be stolen, save for a vulnerability in the HSM itself, or, if you haven’t set up a password for your HSM, theft of the device.

Clearly that last option is the most secure. Clearly it would be nice if everyone could do that, though we do need a way to recover from the loss or destruction of the HSM (which in the case of the TPM may mean something as mundane as changing your graphics card). Yet often, other ways are more convenient.

Still, I strongly believe companies should not force people into one method or another. Okay, I could maybe tolerate passkeys being forced on me, but not the remote attestation part. Let me manage my own security, with my own tools (preferably open source), thank you very much.

There is one use case for which I may approve of remote attestation: work accounts. Because at that point it’s not about the safety of the customer, it’s about the safety of the company itself. It makes sense then that the company (or government agency) impose whatever stringent restrictions it sees fit on how its network is accessed. They do have to provide any required tools (company laptop, company palmtop, company dongle…), the same way many companies are required to provide personal safety equipment to any of their employees working in hazardous environments.

reply
Run vaultwarden locally. Install Bitwarden. Now you have a software-only implementation of passkeys. Dig into the vaultwarden SQLite database and you'll find the passkey data there. Extract it and save it on disk and you have an exportable passkey. See, it's all security theater without remote attestation.

I had an idea to create a blatantly insecure passkey browser extension. Maybe I should do that.

reply
Attestation isn't even the problem. I'd love to be able to verify that my server's kernel hasn't been tampered with.

The problem lies in companies like Apple/Google/Microsoft rejecting attestation that they do not control.

People confusing big tech's policy choices with tech features have made "I want my laptop's auth token to only be usable on my laptop" a controversial opinion.

reply
> TPMs are great.

TPMs are a fucking mess. TPM 2 at least; I’ve worked with it for a few months. I love me some hardware security module, but I want to control it. And if it must be a standard, please, please do something like the TKey, so it can be both much simpler than the current ad-hoc standards and future-proof.

https://loup-vaillant.fr/articles/hsm-done-right

reply
>The modern computing landscape needs a safe place to put secrets.

Does it? Why waste time developing exploits when you can just call up grandma and get her to give you the money of her "own" volition, using her secure device, by pretending to be the bank/IRS/her granddaughter with an AI voice/etc.

reply
Agreed. Trying to limit progress because it may be misused is attacking the wrong part of the problem and will not work.
reply
TPMs add security against a narrow class of evil maid attacks. They might be useful for corporate computing (more for cargo-cult compliance purposes than actual security), but they trojan-horse more "not owning the device you bought" onto people who don't and shouldn't care about evil maid attacks at all.
reply
Adding brute-force resistance to consumer hardware is pretty useful. Now your password can be John1985 without fear of it being brute-forced within seconds.
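
A rough model of that anti-hammering behavior (the class and parameter names here are illustrative, not the real TPM 2.0 API; the spec's actual knobs are things like maxTries and recoveryTime):

```python
import time

class LockoutGuard:
    """Illustrative TPM-style dictionary-attack lockout: after max_tries
    bad PINs, further attempts are refused until lockout_seconds elapse."""

    def __init__(self, pin: str, max_tries: int = 5, lockout_seconds: int = 600):
        self._pin = pin
        self._max_tries = max_tries
        self._lockout_seconds = lockout_seconds
        self._failures = 0
        self._locked_until = 0.0

    def try_pin(self, guess: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if now < self._locked_until:
            raise RuntimeError("locked out, try again later")
        if guess == self._pin:
            self._failures = 0  # success resets the counter
            return True
        self._failures += 1
        if self._failures >= self._max_tries:
            # Too many misses: refuse everything for a while.
            self._locked_until = now + self._lockout_seconds
        return False
```

With this throttle in front of it, even a weak PIN gives an attacker only a handful of guesses per lockout window, instead of millions per second.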

"I don't use a TPM in my computer so it shouldn't exist" has always sounded like a weird argument against the tech in my opinion.

Many Android phones have their secret storage implemented as a virtual machine rather than a TPM. The lack of a TPM doesn't suddenly give me any more freedom, although it does come with security downsides.

reply
TPMs can also be based on free software and our own keys. It works well with Heads and Librem Key.
reply
Totally with you until you brought in AI, a completely centralized and proprietary tool.
reply
Local models exist, but there's also irony in using the tools to spread the message of the opposition.
reply
The local models are still centralized and proprietary. They are basically closed source software.
reply
Closed or open source doesn't matter; it's the ability to control them that's important. People have been cracking and patching for decades without source, but they have that control.

Contrast this with remote attestation, where they might show you the source code for everything but you're still powerless to do anything.

reply
> Closed or open source doesn't matter; it's the ability to control them that's important. People have been cracking and patching for decades without source, but they have that control.

You have no idea what has been baked into the weights during the training process. In theory you could find biases and attempt to "patch" them out, but it's a vastly different process from patching machine code.

Consider what would happen if Google's open-weight models were better at writing code targeting Google's services than their competitors'. Is this something that could be patched? What if there were more subtle differences that you only notice much later, after some statistical analysis?

reply
People are already patching these models using abliteration to prevent them from refusing any request, so it is possible for end users to change them in meaningful ways. You can download abliterated models right now from Hugging Face that will respond to all kinds of requests that frontier models refuse.
reply
Yup, there are a ton of people on HN sleeping on this new tech because they refuse to look at anything AI. We now have jailbroken models, but the average person on here doesn't even know how to download and try a model.
reply
It doesn't help that the guides I've seen have been pretty handwavy, or not specific enough to the individual situation ("I have Z hardware, here's how it's done"). It also doesn't help that every post I see on HN is like "oh wow, I did X on a Mac mini with 128 GB of RAM." That spec is beyond many. Running on generally available resources (such as hardware one might have lying around the house) does not seem fit for purpose, so it's back to building a new machine (good luck when RAM is worth twice its weight in gold), or buying a $1000+ Mac mini or other device. Any low-end system can't turn out tokens fast enough, or doesn't have the resources for context or processing.

Local ai is not ready, and if you think it is, prove me wrong with a detailed guide running commodity hardware with complete setup steps that can use a decently sized model.

I spent 2 weeks trying to get anything running: 8 GB RX550XT, 12 GB RAM, 8-core CPU. I even tried turboquant to lower memory utilization and still couldn't even get a 3B or 4B model loaded, and anything smaller won't suit my needs (3B/4B are even pushing it).

reply
TBH I never understood people trying to run LLMs locally. Just rent a powerful machine in the cloud for a few hours. It's cheap enough, because you don't need to own the hardware. It doesn't introduce a dependency, because there are hundreds of hosters. It doesn't compromise your data, because nobody is going to extract data from your VM, not until you're under investigation anyway; and even in that case, just use a different jurisdiction.

Spending a humongous amount of money to get a machine that'll feel obsolete in 2 years? I don't know.

reply
"Local AI is not ready" > proceeds to run a 7 year old budget GPU

You're like the kid showing up to a test without a pencil.

It's ridiculous to suggest that an advanced AI model needs to run on your budget 7-year-old graphics card, which is already out of date even for today's gaming. My parents spent $2500 on a computer in 1995, and that was a 166 MHz Pentium 1. If they spent that money today it would be $5261. Think of what you can get for that amount of money. Yet you're over here saying a budget graphics card should somehow compete with the bleeding edge of computing innovation.

You do, in fact, need to spend money on appropriate gear if you expect to participate.

reply
If you want AI image generation and are willing to wait a little longer, you don't even need a GPU: https://news.ycombinator.com/item?id=32642255
reply
I've played with SD plenty. CPU even becomes manageable at low resolutions. But, uh, the CPU/GPU line is starting to blur now with these new AMD inference CPUs with built-in GPUs, and ARM-based machines like Macs. I wish more people on HN were using this stuff so we could have fun conversations about it instead of arguing over whether we should even be using these tools.
reply
When Stallman was getting started writing Emacs in the early '80s, Unix machines were vastly out of reach price-wise for the common home user, but he did his open source work anyway, and eventually the 386 came along.
reply
RMS found it acceptable to use SunOS initially to create GNU.

Open-weight models can be a big boost to building Open AI (cough). Progress comes from incremental improvements, and open-weight models are a big advance in privacy, security, and autonomy over relying on hosted closed systems.

Source vs. not is only one (important!) dimension. Moreover, in FSF land they define source as the preferred form for modification, and at least for some kinds of modifications the weights are the preferred form.

reply
> the weights are the preferred form

This can never be the case.

Both the licensing and source aspects of the Free Software movement aspire to create a high level of equality of access to a [software] work between the original author and far-downstream recipients. Obviously full and universal equality is impossible, because part of the work is only in the author's mind and not everyone can obtain and use computers, but approaching it as closely as possible is important, and it is important to think about how to achieve a high level of equality for each work in each context. What counts as "source" in any given context is a choice the author makes about what level of access they want to pass on to others.

In the case of AI, weights can never be the preferred form for modification because of the equality of access issue. The people who trained the AI (and hide its training data/code but published the weights) will always have more access than the people who only have the weights. Just like a binary can almost never be the preferred form, because the authors have access to the source but we don't.

There are also many ways to bias the model and insert backdoors or other suboptimal behaviours into it during training data selection etc.

reply
>RMS found it acceptable to use SunOS initially to create GNU.

Any source on that?

reply
I know it from personal experience using GNU tools on Sun early on (really Solaris in my case, I wasn't quite that early a user), and I think from a talk or essay by RMS but for a moment I worried it might have been personal correspondence. Finding a citation seemed like a fun challenge:

https://www.gnu.org/gnu/thegnuproject.html

> [...] the easiest way to develop components of GNU was to do it on a Unix system, and replace the components of that system one by one. But they raised an ethical issue: whether it was right for us to have a copy of Unix at all.

> Unix was (and is) proprietary software, and the GNU Project's philosophy said that we should not use proprietary software. But, applying the same reasoning that leads to the conclusion that violence in self defense is justified, I concluded that it was legitimate to use a proprietary package when that was crucial for developing a free replacement that would help others stop using the proprietary package.

> But, even if this was a justifiable evil, it was still an evil. Today we no longer have any copies of Unix, because we have replaced them with free operating systems. If we could not replace a machine's operating system with a free one, we replaced the machine instead.

That still leaves open the question of RMS personally using SunOS (as opposed to some other proprietary Unix). I think at this point I'd just go dig up very old GNU sources for evidence of that, but I suspect your question was primarily about RMS's ethical reasoning, which is well answered above.

reply
Thanks for the quote, I couldn't find anything online.

Although it seems to me that the comparison is somewhat fragile: GNU could not have been developed without a proprietary Unix, whereas we could build local models entirely from scratch nowadays, unless I'm mistaken.

reply
Small models were originally built by distilling, using synthetic training materials, and filtering training material with much larger models. There is a bit of a bootstrapping problem: to build a good LLM you need a working LLM, and if you don't have one the costs are absolutely eye-watering.

One observation is that an LLM is a next-token predictor, but if you train it on the internet/textbooks/etc. you get a predictor of that, and that isn't the behavior we actually want. None of these sources tend to contain "Solve this problem for me. OK, here is the solution:".

It wasn't physically impossible to start GNU the other way around, by bashing machine code into a system until you had a working operating system. But doing so would have been a lot less reasonable: much more expensive, making progress much more slowly, etc.

reply
Especially considering AI bots are the whole reason Google is pushing this new reCAPTCHA.
reply
"AI bots" are as stupid an argument as "think of the children". It's just a convenient distraction to restrict freedom and push their narrative.
reply
> (If it hasn't been done already, an AI-generated short film of it would be a great idea...)

Once you have the script, that’s a couple of actors in a classroom, a couple of e-ink readers for props, and a film crew… It can be shot with fewer than 10 people in a day, then one person for a couple of days of cutting and post-production. And that’s on the very high end for this scene.

Considering the reach this video is meant to have, avoiding AI would not be that expensive.

reply
> In 1999, Intel received an absolutely massive amount of opposition when they decided to include a software-readable serial number in their CPUs, so much that they reversed the decision.

> It turns out a significant (but hopefully decreasing) number of the population is easily coerced into anything when "security" is given as a justification.

The people who opposed Intel are now telling each other how hopeless and powerless they are. You can see it on HN, in this thread: no drive, no outrage, no self-organizing response to these issues, only despair ('nobody cares', 'there's nothing we can do', etc.). Quitting is a sure way to lose.

reply
> The people who opposed Intel are now telling each other how hopeless and powerless they are.

I don't think those are the same people. I, for one, will continue this fight by telling everyone I know about the fact that Google is going for absolute control of the Internet, and by extension, everyone's lives. They have already become an unelected global government.

reply
I'm not talking about individuals - where is the overwhelming pushback that Intel faced?
reply
There can't be pushback without awareness. At this point it's still something most people don't know about yet, so do your part and spread the word. Get well-known YouTubers (Louis Rossmann is the first one to come to mind) to do so too.
reply