With just those primitives, CI is a service that emits "ci/tested." Review emits "review/approved." A merge controller watches for sufficient attestations and requests a ref update. The forge kernel only evaluates whether claims satisfy policy.
Vouch shifts this even further left: attestations about people, not just code. "This person is trusted" is structurally the same kind of signed claim as "this commit passed CI." It gates participation itself, not just mergeability.
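The claims-as-attestations model described above can be sketched in a few lines. Everything here (the `Claim` shape, the claim kinds, the policy format) is a hypothetical illustration, not Vouch's or any forge's actual schema:

```python
from dataclasses import dataclass

# Hypothetical claim/policy shapes -- illustrative only.
@dataclass(frozen=True)
class Claim:
    subject: str   # e.g. a commit SHA
    kind: str      # e.g. "ci/tested", "review/approved", "vouch/trusted"
    issuer: str    # key ID of whoever signed the claim

def satisfies(claims: set[Claim], commit: str, policy: dict[str, int]) -> bool:
    """Policy maps each claim kind to the number of distinct issuers required."""
    for kind, needed in policy.items():
        issuers = {c.issuer for c in claims
                   if c.subject == commit and c.kind == kind}
        if len(issuers) < needed:
            return False
    return True

claims = {
    Claim("abc123", "ci/tested", "ci-bot"),
    Claim("abc123", "review/approved", "alice"),
}
policy = {"ci/tested": 1, "review/approved": 1}
print(satisfies(claims, "abc123", policy))  # True
```

A real kernel would also verify signatures and issuer key validity before counting a claim; this sketch only shows the policy-evaluation step.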
All this should ideally be part of a repo, not locked inside a closed platform like GitHub. I like it and am curious to see where this stands in five years.
I get that the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent, this seems like a net loss. It establishes an exploitable path for supply-chain attacks: an attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in the target project (possibly through multiple intermediary projects). If this sort of cross-project trust ever becomes automated, then any account that was ever trusted anywhere suddenly becomes an attractive target for account-takeover attacks. I think a pure distrust list would be a much safer place to start.
It's just a layer to minimize noise.
Thing is, this system isn't supposed to be perfect. It is supposed to be better, and worth the hassle.
I doubt I'll get vouched anywhere (though IMO it depends on context), but I firmly believe humanity (including me) will benefit from this system. And if you aren't a bad actor with bad intentions, I believe you will, too.
The only side effect is that genuine contributors who aren't popular / in the know need to put in a little more effort. But again, that is part of what makes it worth the hassle, and I'll accept it.
Think of this like a spam filter, not a "I met this person live and we signed each other's PGP keys" -level of trust.
It's not there to prevent long-con supply chain attacks by state level actors, it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose useless pull requests on projects.
Perhaps that is the plan?
If PR is good, maintainer refunds you ;)
I noticed the same thing in communication. Communication is now so frictionless that almost all the communication I receive is low quality. If it cost more to communicate, the quality would increase.
But the value of low-quality communication is not just zero: it is negative, because it eats your time.
In that world there's a process called "staking" where you lock some tokens with a default lock expiry action and a method to unlock based on the signature from both participants.
It would work like this: the repo has a public key. The submitter uses a smart contract to sign the commit and stake some crypto along with the submission. If the repo merges the PR, the smart contract returns the tokens to the submitter; otherwise they go to the repo.
It's technically quite elegant, and the infrastructure is all there (with some UX issues).
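As a toy model (not a real smart contract, and all names invented here), the staking flow described above reduces to a small state machine:

```python
# Toy model of the staking escrow described above -- hypothetical, not a real
# smart contract. Tokens are locked with a submission; a merge releases them
# back to the submitter, anything else (rejection, or the lock expiring with
# no decision) sends them to the repo.
class Stake:
    def __init__(self, submitter: str, repo: str, amount: int, expiry: int):
        self.submitter, self.repo = submitter, repo
        self.amount, self.expiry = amount, expiry
        self.settled = False

    def settle(self, merged: bool, now: int) -> str:
        """Return who receives the locked tokens."""
        if self.settled:
            raise ValueError("stake already settled")
        self.settled = True
        if merged and now < self.expiry:
            return self.submitter   # PR merged in time: refund
        return self.repo            # rejected or expired: repo keeps the stake

print(Stake("alice", "repo-treasury", amount=5, expiry=100).settle(True, 50))  # alice
```

The on-chain version would enforce the same transitions with signatures from both participants instead of method calls.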
But don't do this!!!!
I did some work in crypto. It's made me realize that the love of money corrupts, and because crypto brings money so close to engineering it corrupts good product design.
Moreover, I'm not interested in having my money get handed over to folks who aren't incentivized to refund my money. In fact, they're paying processing costs on the charge, so they are disincentivized to refund me! There could be an escrow service that handles this, but now there's another party involved: I just want to fix a damn bug, not deal with this shit.
You can also integrate it in clients by adding payment/reward claim headers.
But a non-zero cost of communication can obviously also have negative effects. It's interesting to think about where the sweet spot would be. But it's probably very context specific. I'm okay with close people engaging in "low quality" communication with me. I'd love, on the other hand, if politicians would stop communicating via Twitter.
A poorly thought out hypothetical, just to illustrate: Make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.
I have no idea if that's a good idea or not, but I think that's an ok representation of the idea.
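To make the hypothetical concrete, here is that pricing schedule as a function (the cent amounts are the commenter's illustrative numbers, nothing more):

```python
def message_cost_cents(n_prior_messages: int, existing_relationship: bool) -> int:
    """Toy pricing from the hypothetical above: existing contacts are free,
    a new contact costs 10c for the first message, 1c for each of the next
    five, then nothing."""
    if existing_relationship:
        return 0
    if n_prior_messages == 0:
        return 10
    if n_prior_messages <= 5:
        return 1
    return 0

# Spamming 1,000 new contacts with one message each costs $100,
# while one real conversation costs at most 15 cents total.
print(sum(message_cost_cents(0, False) for _ in range(1000)))  # 10000 (cents)
```

The asymmetry is the whole point: cost scales with the number of cold contacts, not with the depth of any one conversation.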
I was specifically thinking about general communication. Comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to messages we send each other nowadays.
We've seen it everywhere, in communication, in globalised manufacturing, now in code generation.
It takes nothing to throw something out there now; we're at a scale that there's no longer even a cost to personal reputation - everyone does it.
Let's say you're a one-of-a-kind kid who is already making useful contributions, but $1 is a lot of money for you. Does your work suddenly become useless?
It feels weird to pay for providing work anyway. Even if it's LLM gunk, you're paying to work (let alone paying for your LLM).
ie, if you want to contribute code, you must also contribute financially.
That would make not-refunding culturally crass unless it was warranted.
With manual options for:
0. (Default, refund)
1. (Default refund) + Auto-send discouragement response. (But allow it.)
2. (Default refund) + Block.
3. Do not refund
4. Do not refund + Auto-send discouragement response.
5. Do not refund + Block.
6. Do not refund + Block + Report SPAM (Boom!)
And typically use a $1 fee to discourage spam.
And a $10 fee for important, open, but high-frequency addresses, as that covers the cost of reviewing high-throughput email, so useful email still gets identified and reviewed. (With the low-quality communication subsidizing the high-quality communication.)
The latter would be very useful in enabling in-demand contact doors to remain completely open without being overwhelmed. Think of a CEO or other well-known person who ideally does want an open channel of feedback from anyone, but is going to have to have someone vet feedback for the most impactful comments and summarize any important trend in the rest. $10 strongly disincentivizes low-quality communication and covers the cost of getting value out of communication (for everyone).
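The option list above maps naturally onto a small enum plus a dispatcher. This is just a sketch of how a client might encode it; every name here is invented:

```python
from enum import Enum

class CloseAction(Enum):
    """The manual options listed above, as a hypothetical config value."""
    REFUND = 0
    REFUND_DISCOURAGE = 1
    REFUND_BLOCK = 2
    KEEP = 3
    KEEP_DISCOURAGE = 4
    KEEP_BLOCK = 5
    KEEP_BLOCK_REPORT = 6

def handle(action: CloseAction, fee_cents: int) -> dict:
    """Resolve an action into its concrete effects on the sender."""
    refund = action in {CloseAction.REFUND, CloseAction.REFUND_DISCOURAGE,
                        CloseAction.REFUND_BLOCK}
    return {
        "refund_cents": fee_cents if refund else 0,
        "discourage": action in {CloseAction.REFUND_DISCOURAGE,
                                 CloseAction.KEEP_DISCOURAGE},
        "block": action in {CloseAction.REFUND_BLOCK, CloseAction.KEEP_BLOCK,
                            CloseAction.KEEP_BLOCK_REPORT},
        "report_spam": action is CloseAction.KEEP_BLOCK_REPORT,
    }

print(handle(CloseAction.KEEP_BLOCK_REPORT, 100))
```

Making "refund" the zero-value default matches the cultural norm proposed earlier: keeping the fee is the action that requires deliberate choice.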
I get that AI is creating a ton of toil to maintainers but this is not the solution.
FOSS has turned into an exercise in scammer hunting.
Think denying access to production. But allowing changes to staging. Prove yourself in the lower environments (other repos, unlocked code paths) in order to get access to higher envs.
Hell, we already do this in the ops world.
Alternatively they might keep some things open (issues, discussions) while requiring a vouch for PRs. Then, if folks want to get vouched, they can ask for that in discussions. Or maybe you need to ask via email. Or contact maintainers via Discord. It could be anything. Linux isn't developed on GitHub, so how do you submit changes there? Well you do so by following the norms and channels which the project makes visible. Same with Vouch.
I even see people hopping on chat servers begging to 'contribute' just to get github clout. It's really annoying.
Not sure about the trust part. Ideally, you can evaluate the change on its own.
In my experience, I immediately know whether I want to close or merge a PR within a few seconds, and the hard part is writing the response to close it such that they don't come back again with the same stuff.
(I review a lot of PRs for openpilot - https://github.com/commaai/openpilot)
Even if I trust you, I still need to review your work before merging it.
Good people still make mistakes.
If you had left it at knowing you want to reject a PR within a few seconds, that'd be fine.
Although with safety critical systems I'd probably want each contributor to have some experience in the field too.
1. What’s the goal of this PR and how does it further our project’s goals?
2. Is this vaguely the correct implementation?
Evaluating those two takes a few seconds. Beyond that, yes it takes a while to review and merge even a few line diff.
You look at the PR and you know just by looking at it for a few seconds if it looks off or not.
Looks off -> "Want to close"
Write a polite response and close the issue.
Doesn't look off -> "Want to merge"
If we want to merge it, then of course you look at it more closely. Or label it and move on with the triage.
This is similar to real life: if you vouch for someone (in business, for example) and they scam people, your own reputation suffers. So vouching carries risk. Similarly, if you go around saying someone is unreliable, but people find out they actually aren't, your reputation also suffers. If vouching or denouncing becomes free, it will become too easy to weaponize.
Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.
Good reason to be careful. Maybe there's a bit of an upside too: if you vouch for someone who does good work, then you get a little boost as well. It's how personal relationships work anyway.
----------
I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…
So the really funny thing here is the first bitcoin exchange had a Web of Trust system, and while it had its flaws, IT WORKED PRETTY WELL. It used GPG and later on bitcoin signatures. Nobody talks about it unless they were there, but the system is still online. Keep in mind, this was used before centralized exchanges and regulation. It did not use a blockchain to store ratings.
As a new trader, you basically could not do trades in their OTC channel without going through traders who specialized in new people coming in. Sock accounts could rate each other, but when you checked to see whether one of those scammers was trustworthy, they would have no level-2 trust, since none of the regular traders had positive ratings of them.
Here's a link to the system: https://bitcoin-otc.com/trust.php (on IRC, you would use a bot called gribble to authenticate)
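The level-2 check described here is easy to model: a stranger only counts as trusted if someone you have personally rated positively has rated them. This is a simplified sketch; bitcoin-otc's actual schema and scoring differ:

```python
# Toy level-2 trust lookup: who has rated whom, and how.
# (Hypothetical data and structure, not bitcoin-otc's real schema.)
ratings = {
    "me":       {"trader_a": +5, "trader_b": +3},
    "trader_a": {"newcomer": +2},
    "scammer1": {"scammer2": +10},   # sock accounts rating each other
    "scammer2": {"scammer1": +10},
}

def level2_trust(viewer: str, target: str) -> int:
    """Sum the ratings of `target` given by people `viewer` rated positively."""
    direct = ratings.get(viewer, {})
    return sum(r for peer, rating in direct.items() if rating > 0
               for t, r in ratings.get(peer, {}).items() if t == target)

print(level2_trust("me", "newcomer"))   # 2: vouched via trader_a
print(level2_trust("me", "scammer2"))   # 0: no path from my direct ratings
```

The sock-puppet ring scores itself highly, but from any established trader's viewpoint its level-2 trust is zero, which is exactly the property the commenter describes.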
Not easily, but I could imagine a project deciding to trust (to some degree) people vouched for by another project whose judgement they trust. Or, conversely, denouncing those endorsed by a project whose judgement they don't trust.
In general, it seems like a web of trust could cross projects in various ways.
- a problem already solved in TFA (you vouching for someone eventually denounced doesn't prevent you from being denounced, you can totally do it)
- a per-repo, or worse, global, blockchain to solve incrementing and decrementing integers (vouch vs. denounce)
- a lack of understanding that automated global scoring systems are an abuse vector and something people will avoid. (c.f. Black Mirror and social credit scores in China)
The same as when you vouch for your company to hire someone - because you will benefit from their help.
I think your suggestion is a good one.
Maybe your own vouch score goes up when someone you vouched for contributes to a project?
Then you have introverts that can be good but have no connections and won’t be able to get in.
So you’re kind of selecting for connected and good people.
Even with that risk I think a reputation based WoT is preferable to most alternatives. Put another way: in the current Wild West, there’s no way to identify, or track, or impose opportunity costs on transacting with (committing or using commits by) “Epstein but in code”.
This is a graph search. If the person you’re evaluating vouches for people those you vouch for denounce, then even if they aren’t denounced per se, you have gained information about how trustworthy you would find that person. (Same in reverse. If they vouch for people who your vouchers vouch for, that indirectly suggests trust even if they aren’t directly vouched for.)
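One crude way to score that graph signal (my framing, not anything Vouch implements): compare the candidate's vouch set against what your own network vouches for and denounces:

```python
# Sketch of the indirect-trust check described above: even without a direct
# vouch or denouncement, overlap between a candidate's vouches and your
# network's vouches/denouncements carries information.
def indirect_signal(candidate_vouches: set[str],
                    my_network_vouches: set[str],
                    my_network_denounces: set[str]) -> int:
    # +1 per person the candidate vouches for whom my network also vouches for,
    # -1 per person the candidate vouches for whom my network denounces.
    return (len(candidate_vouches & my_network_vouches)
            - len(candidate_vouches & my_network_denounces))

print(indirect_signal({"a", "b", "c"}, {"a", "b"}, {"c"}))  # 1
```

A real implementation would weight edges by path length and voucher reliability rather than counting overlaps, but the direction of the signal is the same.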
One of my (admittedly half-baked) ideas was a vouching system with real-world or physical incentives. Basically, signing up requires someone vouching for you, similar to this one, but with actual physical interaction between the two. But I want to take it even further -- when you sign up, your real-life details are "escrowed" in the system (somehow), and if you do something bad enough for a permaban+, you get doxxed.
...or spam "RBL" lists which were often shared. https://en.wikipedia.org/wiki/Domain_Name_System_blocklist
Why not use AI to help with the AI problem? Why prefer this extra coordination effort and implementation?
I certainly have dropped off when projects had burdensome rules, even before the AI slop fest.
The real problem are reputation-farmers. They open hundreds of low-effort PRs on GitHub in the hope that some of them get merged. This will increase the reputation of their accounts, which they hope will help them stand out when applying for a job. So the solution would be for GitHub to implement a system to punish bad PRs. Here is my idea:
- The owner of a repo can close a PR either neutrally (e.g. an earnest but misguided effort was made), positively (a valuable contribution was made) or negatively (worthless slop)
- Depending on how the PR was closed the reputation rises or drops
- Reputation can only be raised or lowered when interacting with another repo
The last point should prevent brigading, I have to make contact with someone before he can judge me, and he can only judge me once per interaction. People could still farm reputation by making lots of quality PRs, but that's actually a good thing. The only bad way I can see this being gamed is if a bunch of buddies get together and merge each other's garbage PRs, but people can already do that sort of thing. Maybe the reputation should not be a total sum, but per project? Anyway, the idea is for there to be some negative consequences for people opening junk PRs.
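A toy model of the proposed rules, with invented weights, just to show the one-judgment-per-interaction constraint:

```python
# Hypothetical reputation deltas for the three close verdicts described above.
DELTAS = {"positive": +2, "neutral": 0, "negative": -3}

class Reputation:
    def __init__(self):
        self.score = 0
        self.judged_prs: set[str] = set()

    def close_pr(self, pr_id: str, verdict: str) -> None:
        """Apply one verdict per PR; repeat judgments are rejected."""
        if pr_id in self.judged_prs:
            raise ValueError("each interaction can be judged only once")
        self.judged_prs.add(pr_id)
        self.score += DELTAS[verdict]

rep = Reputation()
rep.close_pr("repo1#42", "negative")
rep.close_pr("repo2#7", "positive")
print(rep.score)  # -1
```

Tracking judged PR IDs is what enforces "he can only judge me once per interaction"; brigading would require actually receiving (and closing) PRs from the target.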
GitHub customers really are willing to do anything besides coming to terms with the reality confronting them: that it might be GitHub (and the GitHub community/userbase) that's the problem.
To the point that they'll wax openly about the whole reason to stay with GitHub over modern alternatives is because of the community, and then turn around and implement and/or ally themselves with stuff like Vouch: A Contributor Management System explicitly designed to keep the unwashed masses away.
Just set up a Bugzilla instance and a cgit frontend to a push-over-ssh server already, geez.
The community might be a problem, but that doesn't mean it's a big enough problem to move off completely. Whitelisting a few people might be a good enough solution.
I can't check out unless I pay. How is that feedback?
- When I buy an item I still have to click a "check out" link to enter my address and actually pay for the item. I could take days after buying the item to click that link.
- Some sellers might not accept PayPal; instead, after I check out I get the seller's bank information and have to manually wire the money. I could take days after checking out to actually perform the money transfer.
Also, upvotes and merge decisions may well come from different people, who happen to disagree. This is in fact healthy sometimes.
Ya, I'm just wondering how this system avoids a 51% attack. Simply put, there are a fixed number of human contributors, but effectively an infinite number of bot contributors.
If someone fresh wants to contribute, now they will have to network before they can write code.
Honestly, I don't see myself networking just so that I can push my code.
I think there are valid ways to improve the outcome, like open source projects codifying their focus areas each month, verifying PRs, or making PRs show proof of working, etc. There are many ways to deter folks who don't want to meaningfully contribute and simply AI-generate code, pushing the effort down to the real contributors.
[1]: https://blog.discourse.org/2018/06/understanding-discourse-t...
Spam filters exist. Why do we need to bring politics into it? Reminds me of the whole CoC mess a few years back.
Every time somebody talks about a new AI thing the lament here goes:
> BUT THINK OF THE JUNIORS!
How do you expect this system to treat juniors? How do your juniors ever gain experience committing to open source? Who vouches for them?
This is a permanent social structure for a transient technical problem.
Surely you mean this the other way around?
Mitchell is trying to address a social problem with a technical solution.
After that ships we'll continue doing a lot of rapid exploration given there's still a lot of ways to improve here. We also just shipped some issues related features here like comment pinning and +1 comment steering [1] to help cut through some noise.
Interested though to see what else emerges like this in the community, I expect we'll see continued experimentation and that's good for OSS.
[1] https://github.blog/changelog/2026-02-05-pinned-comments-on-...
Problem 2 - getting banned by any single random project for any reason, like a CoC disagreement, a heated Rust discussion, any world-politics views, etc., would lead to a system-wide ban in all involved projects. Kinda like getting banned for a bad YT comment and then your email and files are blocked forever too.
The idea is nice, like many other social improvement ideas. The reality will 99% depend on the actual implementation and actual usage.
Your solution advocates a
( ) technical (X) social ( ) policy-based ( ) forge-based
approach to solving AI-generated pull requests to open source projects. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws.)
( ) PR spammers can easily use AI to adapt to detection methods
( ) Legitimate non-native English speakers' contributions would be affected
( ) Legitimate users of AI coding assistants would be affected
( ) It is defenseless against determined bad actors
( ) It will stop AI slop for two weeks and then we'll be stuck with it
(X) Project maintainers don't have time to implement it
(X) Requires immediate total cooperation from maintainers at once
(X) False positives would drive away genuine new contributors
Specifically, your plan fails to account for
(X) Ease of creating new GitHub accounts
(X) Script kiddies and reputation farmers
( ) Armies of LLM-assisted coding tools in legitimate use
(X) Eternal arms race involved in all detection approaches
( ) Extreme pressure on developers to use AI tools
(X) Maintainer burnout that is unaffected by automated filtering
( ) Graduate students trying to pad their CVs
( ) The fact that AI will only get better at mimicking humans
and the following philosophical objections may also apply:
(X) Ideas similar to yours are easy to come up with, yet none have ever
been shown practical
(X) Allowlists exclude new contributors
(X) Blocklists are circumvented in minutes
( ) We should be able to use AI tools without being censored
(X) Countermeasures must work if phased in gradually across projects
( ) Contributing to open source should be free and open
(X) Feel-good measures do nothing to solve the problem
(X) This will just make maintainer burnout worse
Furthermore, this is what I think about you:
(X) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out what project you maintain and
send you 50 AI-generated PRs!?
The problem is at the social level. People will not want to maintain their own vouch/denounce lists because they're lazy. Which means if this takes off, there will be centrally maintained vouchlists. Which, if you've been on the internet for any amount of time, you can instantly imagine will lead to the formation of cliques and vouchlist drama.
I don't think that's true? The goal of vouch isn't to say "@linus_torvalds is Linus Torvalds"; it's to say "@linus_torvalds is a legitimate contributor and not an AI slopper/spammer". It's not vouching for their real-world identity, or that they're a good person, or that they'll never add malware to their repositories. It's just vouching for the most basic level of "when this person puts out a PR, it's not AI slop".
Point is: when @lt100, @lt101, … , @lt999 all vouch for something, it’s worthless.
This is from the twitter post referenced above, and he says the same thing in the ghostty issue. Can anyone link to discussion on that or elaborate?
(I briefly looked at the pi repo, and have looked around in the past but don't see any references to this vouching system.)
It spreads the effort of maintaining the list of trusted people, which is helpful. However, I still see a potential firehose of randoms requesting to be vouched for. There are various ways one might manage that, perhaps even some modest preceding step that would demonstrate understanding of the project / willingness to help, such as A/B triaging of several pairs of issues, kind of like a directed, project-relevant CAPTCHA?
If you get denounced on a popular repo and everyone "inherits" that repo as a source of trust, you're out of luck (think email providers: Google decides you are bad, good luck).
Coupled with the fact that new contributors usually take some time to find their feet.
I've only been at this game (SWE) for ~10 years, so not a long time. But I can tell you my first few contributions were clumsy and perhaps would have earned me a denouncement.
I'm not sure I would have gotten to contribute to the AWS SDK, Sendgrid, NUnit, or New Relic (easily my best experience), and my attempted contribution to Npgsql (easily my worst experience) would definitely have earned me a denouncement.
Concept is good, but I would omit the concept of denouncement entirely.
Over-denouncing ought to be tracked too, as part of a user's trustworthiness profile.
I'd hesitate to create the denounce function without speaking to an attorney; when someone's reputation and career are torpedoed by the chain reaction you created - with the intent of torpedoing reputations - they may name you in the lawsuit for damages and/or to compel you to undo the 'denounce'.
Not vouching for someone seems safe. No reason to get negative.
GitHub and LLMs have reduced the friction to the point where it's overwhelming human reviewers. Removing that friction would be nice if it didn't cause problems of its own. It turns out that friction had some useful benefits, and that's why you're seeing the pendulum swing the other way.
For a single organisation, a list of vouched users sounds great. GitHub permissions already support this.
My concern is with the "web" part. Once you have orgs trusting the vouch lists of other orgs, you end up with the classic problems of decentralised trust:
1. The level of trust is only as high as the lax-est person in your network
2. Nobody is particularly interested in vetting new users
3. Updating trust rarely happens
There _is_ a problem with AI slop overrunning public repositories. But WoT has failed before; we don't need to try it again.
It didn't work for links as a reputation signal for search once "SEO" people started creating link farms. It's worse now: with LLMs, you can create fake identities with plausible backstories.
This idea won't work with anonymity. It's been tried.
There's likely no perfect solution, only layers and data points. Even if one of the layers only provides a level of trust as high as the most lax person in the network, it's still a signal of something. The internet will continue to evolve and fracture into segments with different requirements IMHO.
You might think this is science fiction, but the companies that brought you LLMs had the goal to pursue AGI and all its consequences. They failed today, but that has always been the end game.
(EDIT: Thanks sparky_z for the correction of my spelling!)
“After we left Samble I began trying to obtain access to certain reticules,” Sammann explained. “Normally these would have been closed to me, but I thought I might be able to get in if I explained what I was doing. It took a little while for my request to be considered. The people who control these were probably searching the Reticulum to obtain corroboration for my story.”
“How would that work?” I asked.
Sammann was not happy that I’d inquired. Maybe he was tired of explaining such things to me; or maybe he still wished to preserve a little bit of respect for the Discipline that we had so flagrantly been violating. “Let’s suppose there’s a speelycaptor at the mess hall in that hellhole town where we bought snow tires.”
“Norslof,” I said.
“Whatever. This speelycaptor is there as a security measure. It sees us walking to the till to pay for our terrible food. That information goes on some reticule or other. Someone who studies the images can see that I was there on such-and-such a date with three other people. Then they can use other such techniques to figure out who those people are. One turns out to be Fraa Erasmas from Saunt Edhar. Thus the story I’m telling is corroborated.”
“Okay, but how—”
“Never mind.” Then, as if he’d grown weary of using that phrase, he caught himself short, closed his eyes for a moment, and tried again. “If you must know, they probably ran an asamocra on me.”
“Asamocra?”
“Asynchronous, symmetrically anonymized, moderated open-cry repute auction. Don’t even bother trying to parse that. The acronym is pre-Reconstitution. There hasn’t been a true asamocra for 3600 years. Instead we do other things that serve the same purpose and we call them by the old name. In most cases, it takes a few days for a provably irreversible phase transition to occur in the reputon glass—never mind—and another day after that to make sure you aren’t just being spoofed by ephemeral stochastic nucleation. The point being, I was not granted the access I wanted until recently.” He smiled and a hunk of ice fell off his whiskers and landed on the control panel of his jeejah. “I was going to say ‘until today’ but this damned day never ends.”
“Fine. I don’t really understand anything you said but maybe we can save that for later.”
“That would be good. The point is that I was trying to get information about that rocket launch you glimpsed on the speely.”*
xkcd 483 is directly referencing Anathem, so that should be unsurprising, but I think in both His Dark Materials (e.g. anbaric power) and in Anathem it is explained in-universe. The isomorphism between that world and our world is explicitly relevant to the plot. It's the obvious foreshadowing for what's about to happen.
The worlds are similar with different names because they’re parallel universes about to collide.
Someone who reads A Clockwork Orange will unavoidably pick up a few words of vaguely-Russian extraction by the end of it, so maybe it's possible to take advantage of that. The main problem I can see is that the new language's sentence grammar will also have to be blended in, and that won't go as smoothly.
Another thing that is amusing is that Sam Altman invented this whole human validation device (Worldcoin) but it can't actually serve a useful purpose here because it's not enough to say you are who you are. You need someone to say you're a worthwhile person to listen to.
But using this to vouch for others as a way to indicate trust is going to be dangerous. Accounts can be compromised, people make mistakes, and different people have different levels of trust.
I'd like to see more attention placed on verifying released content. That verification should be a combination of code scans for vulnerabilities, detection of changes in capabilities, and reproducible builds of the generated artifacts. That would not only detect bad contributions, but also bad maintainers.
However, it's not hard to envision a future where the exact opposite will occur: a few key AI tools/models will become specialized and better at coding/testing on various platforms than humans, and they will ignore or de-prioritize our input.
But I like the idea and principle. OSS needs this, and it's treated far too lightly.
Feels like making a messaging app but "how messages are delivered and to whom is left to the user to implement".
I think "who and how someone is vouched" is like 99.99% of the problem and they haven't tried to solve it so it's hard to see how much value there is here. (And tbh I doubt you really can solve this problem in a way that doesn't suck.)
Honestly, my view is that this is a technical solution for a cultural problem. Particularly in the last ~10 years, open source has really been pushed into a "corporate dress rehearsal" culture. All communication is expected to be highly professional. Talk to everyone who opens an issue or PR with the respect you would give a coworker. Say nothing that might offend anyone anywhere; keep it PG-13. Even Linus had to pull back on his famously vitriolic responses to shitty code in PRs.
Being open and inclusive is great, but bad actors have really exploited this. The proper response to an obviously AI-generated slop PR should be "fuck off", closing the PR, and banning them from the repo. But maintainers are uncomfortable with doing this directly since it violates the corporate dress rehearsal kayfabe, so vouch is a roundabout way of accomplishing this.
If that worked, then there would be an epidemic of phone scammers or email phishers having epiphanies and changing careers when their victims reply with (well deserved) angry screeds.
This is the level of response these PRs deserve. What people shouldn't be doing is treating these as good-faith requests and trying to provide feedback or asking them to refactor, like they're mentoring a junior dev. It'll just fall on deaf ears.
This is maturation, open source being professional is a good sign for the future
edit; and just to be totally clear this isn't an anti-AI statement. You can still make valid, even good PRs with AI. Mitchell just posted about using AI himself recently[1]. This is about AI making it easy for people to spam low-quality slop in what is essentially a DoS attack on maintainers' attention.
That means you, like John Henry, are competing against a machine at the thing that machine was designed to do.
I've seen my share of zero-effort drive-by "contributions" so people can pad their GH profile, long before AI, on tiny obscure projects I have published there: larger and more prominent projects have always been spammed.
If anything, the AI-enabled flood will force the reckoning that was long time coming.
Yes, there's room for deception, but this is mostly about superhuman skills, newcomer ignorance, and a new eternal September that we'll surely figure out.
Only if you allow people like this to normalize it.
Support Microsoft or be socially shunned?
> The implementation is generic and can be used by any project on any code forge, but we provide GitHub integration out of the box via GitHub actions and the CLI.
And then see the trust format which allows for a platform tag. There isn't even a default-GitHub approach, just the GitHub actions default to GitHub via `--default-platform` flag (which makes sense cause they're being invoked ON GITHUB).
So I can choose from github, gitlab or maybe codeberg? What about self-hosters, with project-specific forges? What about the fact that I have an account on multiple forges, that are all me?
This seems overly biased toward centralized services, which means it's just serving to further reinforce Microsoft's dominance.
The enshittification of GitHub continues.
It also addresses the issue of tolerating unchecked or seemingly plausible slop PRs from outside contributors, which would otherwise get merged easily. By default, they are all untrusted.
Now this social issue has been made worse by vibe-coded PRs, and untrusted outside contributors should instead earn their access by being 'vouched' for by the core maintainers, rather than being allowed a wild west of slop PRs.
A great deal.
There are obvious cases in Europe (well, were if you mean the EU) where there need not be criminal behaviour to maintain a list of people that no landlord in a town will allow into their pubs, for example.
It is not a cookie banner law. The american seems to keep forgetting that it's about personal data, consent, and the ability to take it down. The sharing of said data is particularly restricted.
And of course, this applies to black list, including for fraud.
Regulators have enforced this in practice. For example in the Netherlands, the tax authority was fined for operating a “fraud blacklist” without a statutory basis, i.e., illegal processing under GDPR: https://www.autoriteitpersoonsgegevens.nl/en/current/tax-adm...
The fact is many such lists exist without being punished. Your landlord list for example. That doesn't make it legal, just no shutdown yet.
Because there is no legal basis for it, unless people have committed, again, an illegal act (such as destroying the pub property). Also it's quite difficult to have people accept to be on a black list. And once they are, they can ask for their data to be taken down, which you cannot refuse.
I am European, nice try though.
It is very unclear that this example falls foul of GDPR. On this basis, Git _itself_ fails at that, and no reasonable court will find it to be the case.
If I'm not mistaken, X11 is what Mitchell is running right now: https://github.com/mitchellh/nixos-config/blob/0c42252d8951a
Would people recommend it? I feel like I have such huge inertia for changing shells at this point that I've rarely seriously considered it.