It actually reeks of Google, since it's a technical solution to a people problem. Google doesn't seem to understand people.
This might be acceptable if it prevented or limited nefarious use cases, but it does no such thing. It doesn't help at all on that front, and this is not a problem that technology alone can solve.
I view SynthID as more of a method of control. It's a way for Google to mark work an individual produces with their tools as Google's own.
I much prefer open models that let me be creative, write code, etc. without trying to control/track/mark me.
I am legitimately curious: can you name some?
> Actually no it just makes me use a different model
Yes, this is a very good thing when "a different model" means "a worse model."
> People who want to deceive or manipulate are not using Google models anyways. They are going to use a model without safety rails
That's totally invalid logic. There are plenty of deception and manipulation use cases that don't run afoul of model safety rails at all. Trivially: creating fake dating profiles to scam people. Fake product images. Fake insurance claims. Fake blackmail material (e.g. a fabricated image of someone with another man or woman at a bar).
In fact, the only thing allowing differentiation now is how compute-heavy current architectures are. It's very possible this will turn out not to be necessary.
Also, my logic was not "nefarious uses require no safety rails"; that was logic you injected into the conversation. I was merely saying that nefarious users are more likely to use models with safety rails off.