Are VCs just that lazy about making investment decisions? Is this yet another side effect of ZIRP[2] and too much money chasing a return? Is nobody looking too hard, in the hope of catching the next rocket to the moon?
From the outside, investing based on GitHub stars seems insane. Like, this can't be a serious way of investing money. If you told me you were going to invest my money based on GitHub stars, I'd laugh, and then we'd have an awkward silence while I realize there isn't a punchline coming.
[0] I'm from Cleveland. I get to pick on them.
[1] https://en.wikipedia.org/wiki/List_of_Cleveland_Browns_seaso... I think their record speaks for itself.
The answer is right there in front of your face. Say it with me: VCs are morons. VCs are morons. VCs are morons. Just because someone is rich, you think that means they have any clue what they're doing?
Using things like github stars is clearly stupid, but not in the way you're suggesting. They're using the GH stars as a proxy metric for "someone else will come along and give money bags to this person later, so I should get in early so I can take that money."
They're operating on a metric of success that is about influence and charisma and connectedness, not revenue or technical excellence.
Again, VCs don't care if you'll make a profitable business some day. They're just interested in if someone else will come along and pay out giant bags of cash for it later in a liquidity event. If they get even one of those successes, all the stupid GH star watching pays off.
Here's another way of framing it: any harms from the false positives around "He has a lot of GH stars" or "He went to Stanford" or "I know his father at the country club" are more than mitigated by the one exit in 1000 that makes a bunch of people filthy rich.
We shouldn't expect VCs to be something they're not. But we are missing something in between VCs on one end and self-financing/bootstrapping on the other.
And if that's true, they should be slapped, hard. They're no longer performing a socially useful function, and have degraded toward pure financialization: some middleman between fools and their money.
As much as I don't like Altman, VCs should be pumping money into startups like Helios--companies pursuing cutting-edge technology that could totally fail (yes, that's an organic em-dash).
I do think that ZIRP distorted things extremely badly. There's an entire generation in this software industry that grew up around the business-culture expectations set during that time, which as far as I could see basically amounted to "I'll build Uber but for X" (where X is some new business domain).
Perhaps after a bit of a painful interregnum things will be different now that rates are higher and risk is priced accordingly.
Also anybody can throw a SaaS together in a few days now. Separating the wheat from the chaff in the next few years will be... interesting.
That's an extremely strong statement, and may only be true in libertarian land, where pure capitalism is a god to be worshiped and "good" has been redefined to "whatever the free market does."
I didn't say I agree with it.
The entire game of startup investing is to identify breakout companies early. Social proof (when valid, not faked) of interest is one of the strongest signals of product market fit.
If a product has a lot of attention (users, headlines, stars, downloads, DAU) that’s a signal that it could also have a lot of customers some day. This is also why all of those metrics are targets for manipulation.
> This would be like an NFL team drafting a quarterback based on how many instagram followers they have
Major sports teams are about engaging fans. If a promising recruit had a huge social media presence, that could be a contributing factor toward trying to recruit that player.
This is actually easier to understand if you look at the inverse: sometimes there are players with amazing stats but a cloud of controversy following them. Teams will skip over these problematic players despite their performance, because having popular and engaging players is important for teams, while having anti-popular players drives away fans.
Nevertheless, VCs are in fact pretty dumb sometimes, and it'd be stupid to invest solely based on stars.
Over here the fans would be singing "You're getting sacked in the morning" halfway through that first season.
I guess not having relegation makes things slightly less ruthless for you.
The owner of the Cleveland Browns uses the team to generate more revenue. For NFL teams, performance has little to do with their value or ability to generate additional revenue.
There is no strong financial incentive to win in the NFL, aside from the owner's ego. The Browns' owner's ego is driven by money, and the result shows on the field.
I do find the European football (soccer) model of promotion and relegation much more interesting, both for culling perennially hopeless teams from top-tier competition, and for giving players who aren't absolute superstars a place to play.
I am so glad the proposed "European Super League" was killed off so hard, so that we don't get a franchise model; it produces so many adverse incentives.
That would put a fire under some asses!
Is there a tech equivalent? Like doing a crappy job with your Series A on purpose, which helps you get a better Series B? Although there is the notion of a big round of layoffs to secure further investment.
plus, what is an NFL fan going to do, stop watching football? hahahahahaha
The Haslams? Yeah, they should really sell the team, but I figure in about 10-15 years, they'll move it out of Cleveland.
Not quite the same, but the New York Jets (one of the few NFL teams that can match the dysfunction of the Browns — they have the longest active playoff drought in big 4 North American sports) passed on a few successful players because the owner, Woody Johnson, reportedly didn't like their Madden (video game) ratings [0]:
> A few weeks later, Douglas and his Broncos counterpart, George Paton, were deep in negotiations for a trade that would have sent Jeudy to the Jets and given future Hall of Fame quarterback Aaron Rodgers another potential playmaker. The Broncos felt a deal was near. Then, abruptly, it all fell apart. In Denver’s executive offices, they couldn’t believe the reason why.
> Douglas told the Broncos that Johnson didn’t want to make the trade because the owner felt Jeudy’s player rating in “Madden NFL,” the popular video game, wasn’t high enough, according to multiple league sources. The Broncos ultimately traded the receiver to the Cleveland Browns. Last Sunday, Jeudy crossed the 1,000-yard receiving mark for the first time in his career.
...
> Johnson’s reference to Jeudy’s “Madden” rating was, to some in the Jets’ organization, a sign of Brick and Jack’s influence. Another example came when Johnson pushed back on signing free-agent guard John Simpson due to a lackluster “awareness” rating in Madden. The Jets signed Simpson anyway, and he has had a solid season: Pro Football Focus currently has him graded as the eighth-best guard in the NFL.
[0] https://www.nytimes.com/athletic/6005172/2024/12/19/woody-jo...
And once it gets out that it's a selection criterion, it gets gamed to hell and back.
Sounds like how the UFC does it.
GitHub stars used to really mean something. Having 1k+ meant a stable, mature library being used in prod by thousands of people. At 10k+ you were a top-level open source project. Now they've been gamed by the dead internet just like everything else, and it's depressing as hell.
I believe that is how they made the final decision on Watson over Mayfield. Oh, wait, I don't think anything can explain that decision.
Also from Cleveland.
Go Guardians! Go Cavs!
Yes actually
Needless to say, they didn't like it when I said this was a worthless metric and we needed to be using something like "working policies" or "time saved training".
There were no complementary workflows or infrastructure or anything.
It was explicitly a move to try to counter Epic's positioning, and internally it was very obviously a JR versus Tim pissing contest (and JR was the only one in the contest, because Tim didn't give a fuck about Unity)
I just wanted to build a good product, but unfortunately good products are not what's relevant.
I have personally seen several company CEOs (that were billionaires!) do this in different ways. Sometimes hiring people because of it.
Here are the things I look at, in order (rough API sketch at the end of this comment):
* last commit date. Newer is better
* age. old is best if still updating. New is not great but tolerable if commits aren't rapid
* issues. Not the count, mind you, just looking at them. How are they handled, what kind of issues are lingering open.
* some of the code. No one is evaluating all of the code of libraries they use. You can certainly check some!
What do stars tell me? They are an indirect variable caused by the things above (which drive real engagement and third-party interest), or otherwise fraud. The only way to tell which is to look at the things I listed anyway.
I always treated stars like a bookmark: "I'll come back to this project." I never thought of them as a quality metric. Years ago, when this problem first surfaced, I was surprised (but should not have been, in retrospect) that they had become a substitute for quality.
I hope the FTC comes down hard on this.
Edit:
* commit history: just browse the history to see what's there. What kind of changes are made and at what cadence.
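If you want those signals without clicking around, here's a minimal sketch against the GitHub REST API (unauthenticated, so rate-limited; "octocat/hello-world" is just a placeholder repo for illustration):

    import requests
    from datetime import datetime, timezone

    OWNER, REPO = "octocat", "hello-world"  # placeholder repo for illustration

    # One call to the repos endpoint covers most of the list above:
    # creation date (age), last push (recency), and open issue count.
    r = requests.get(f"https://api.github.com/repos/{OWNER}/{REPO}", timeout=10)
    r.raise_for_status()
    repo = r.json()

    created = datetime.fromisoformat(repo["created_at"].replace("Z", "+00:00"))
    pushed = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
    now = datetime.now(timezone.utc)

    print(f"age:         {(now - created).days} days (old is best if still updating)")
    print(f"last push:   {(now - pushed).days} days ago (newer is better)")
    print(f"open issues: {repo['open_issues_count']} (now go read them, don't just count)")
    # Stars are in repo["stargazers_count"], but as argued above they're
    # derivative of these signals at best, and fraud at worst.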
I do it all the time, whenever there are competing libraries to choose among.
It's a heuristic that saves me time.
If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.
I also look at download count and release frequency. Basically I don't want to use some obscure dependency for something critical.
There are clearly inflection points where stars become useful, with "nobody has ever used this package" and "Meta/Alphabet pays to develop/maintain this package" on the two extremes.
I'm less sure what the signal says in-between those extremes. We have 2 packages, one has 5,000 stars, the other has 10,000 stars - what does this actually tell me, apart from how many times each has gone viral on HN?
Will you continue to do this after reading TFA?
A bad one.
I listed many other useful heuristics. Do you not find value in them? Do you find stars more valuable than them?
Take a moment to consider that stars may only be useful as a metric for packages created prior to ~2015, before they became such a strong vanity metric, and which are already very well established. This preconditions you to think "stars can still sometimes be useful, because I took a look at Facebook's React GH and it has a quarter million stars."
Sure, it's useful for that. But you aren't going to evaluate if the "React" package is safe. You already trivially know it is.
You'll be evaluating packages like "left-pad". Or any number of packages involved in the latest round of supply chain attacks.
For that matter, VCs are the ones fake stars are being targeted at, along with potential employers (something this article doesn't cover, but some potential hires do hope to leverage stars on their resume).
If you are a VC, or an employer, it is a negative metric. If you are a dev evaluating packages, it is a vacuous metric that either tells you what you already know, or would be better answered looking at literally anything else within that repo.
The article also called out how download count can be faked trivially. I admit I have relied upon this in the past by mistake. Release frequency I do use as one metric.
When I care about making decisions for a system that will ingest 50k-250k TPS or need to respond in sub-second timings (systems I have worked on multiple times), you can bet "looking at stars" is a useless metric.
For personal projects, it is equally useless.
I care about how many tutorials are online. And today, I care more about whether there were enough textual artifacts for the LLMs to usefully build it into their memory and to search on. I care if their docs are good, so I spend fewer tokens burning through their codebase for APIs. I care if they resolve issues in a timely manner. I care if they have meaningful releases and not just garbage nothings every week.
I didn't mean for this to sound like a rant. But seriously, I just can't imagine any scenario where "I look at stars" is a useful metric. You want to add it to the list? Sure. That is fine. But it should not be a deciding factor. I have chosen libraries with fewer stars because they had better metrics on the things I cared about, and it was the correct choice (I ended up needing to evaluate them both anyhow, but I had my preference from the start).
Choosing the wrong package will waste you so much more time. Spend the 5 minutes evaluating for stuff that is important to your project.
* Most recent commit
* Total number of commits
This might have to die in the era of AI, but it's served me well for a long time. Rather than how many people are paying attention, it tries to measure the effort put in.
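Both numbers fall out of one paginated request, no clone needed. A sketch using the well-known Link-header trick ("octocat/hello-world" is a placeholder repo; the regex assumes GitHub's usual parameter order):

    import re
    import requests

    OWNER, REPO = "octocat", "hello-world"  # placeholder repo

    # Ask for one commit per page: the body holds the most recent commit,
    # and the Link header's rel="last" page number equals the total count.
    r = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/commits",
        params={"per_page": 1},
        timeout=10,
    )
    r.raise_for_status()

    latest = r.json()[0]["commit"]["committer"]["date"]
    m = re.search(r'[?&]page=(\d+)>; rel="last"', r.headers.get("Link", ""))
    total = int(m.group(1)) if m else 1  # no Link header means a single commit

    print(f"most recent commit: {latest}")
    print(f"total commits:      {total}")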
For example, let’s say I want to run some piece of software that I’ve heard about, and let’s say I trust that the software isn’t malware because of its reputation.
Most of the time, I’d be installing the software from somewhere that’s not GitHub. A lot of package managers will let anyone upload malware with a name that’s very similar to the software I’m looking for, designed to fool people like me. I need to defend against that. If I can find a GitHub repo that has a ton of stars, I can generally assume that it’s the software I’m looking for, and not a fake imitator, and I can therefore trust the installation instructions in its readme.
Except this is also not 100% safe, because as mentioned in TFA, stars can be bought.
There are many other far more useful metrics to look at first, and to focus on first, and to think about. Every time you think about stars, you'll forget the other stuff, or discount it in favor of stars.
Forget stars. They now no longer mean anything. Even if they did before, they don't anymore.
e.g. "TFA covers this already."
You might not have, but the makers of dependencies that you use might have, so it's still problematic.
I have limited time on this Earth and at my employer. My job is not critical to life. I am comfortable with this level of pragmatism.
It's only meaningless because other people can game it and fabricate it. Everything you just said means that, if it were only people like you, it would be a very meaningful number.
It doesn't even matter why you bookmarked it, and it doesn't matter that, whatever the reason was, it doesn't prove the project as a whole is good or useful. Maybe you bookmarked it because you hate it and want to keep track of it for reference in your TED talk about examples of all the worst stuff you hate. But really, adding up everyone's bookmarks, the more likely story is that you found something interesting. It doesn't even matter what was interesting or why. The entire project could be worthless and the thing you're bookmarking nothing more than some markdown trick in the readme. That's fine. That counts. Or it's all terrible, not a single thing of value, and the only reason to bookmark it is that it's the only thing that turned up in a search. Even that counts, because it still shows they tried to work on something no one else even tried to work on.
It's like, it doesn't matter how little a given star means, it still does mean something, and the aggregation does actually mean something, except for the fact of fakes.
Yes...which is why I said it is an indirect variable, as caused by the other things I pointed out above. Age, quality, code, utility, whether issues are addressed, interest, etc. Or fraud. Pretty cut and dry.
FWIW, I almost never star repos. Even ones I use or like. I don't see the utility for myself.
Aim for a more concise post and don't couch your statements in doubt next time if you want a productive conversation, because I don't know what you are trying to say.
Instead I look at (in addition to the above):
1. Who is the author? Is it just some person chasing Internet clout by making tons of 'cool' libraries across different domains? Or are they someone senior working in an industry sector whose expertise the project might actually benefit from?
2. Is the author working alone? Are there regular contributors? Is there an established governance structure? Is the project going to survive one person getting bored / burning out / signing an NDA / dying? (A rough bus-factor sketch follows this list.)
3. Is the project style over substance? Did it introduce logos, discord channels, mascots too early? Is it trying too hard to become The New Hot Thing?
4. What are the project's dependencies? Is its dependency set conservative or is it going to cause supply chain problems down the line?
5. What's the project's development cadence? Is it shipping features and breaking APIs too fast? Has it ever done a patch release or backported fixes, or does it always live at the bleeding edge?
6. NEW ARRIVAL 2026! Is the project actually carefully crafted and well designed, or is it just LLM slop? Am I about to discover that even though it's a bunch of code it doesn't actually work?
7. If the project is security critical (handles auth, public facing protocol parsing, etc.): do a deeper dive into the code.
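For point 2, here's a crude bus-factor sketch against the GitHub REST API (unauthenticated; "octocat/hello-world" is a placeholder repo):

    import requests

    OWNER, REPO = "octocat", "hello-world"  # placeholder repo

    # The contributors endpoint returns per-user commit counts: enough for
    # a crude estimate of how concentrated the work is in one person.
    r = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/contributors",
        params={"per_page": 100},
        timeout=10,
    )
    r.raise_for_status()
    contributors = r.json()

    total = sum(c["contributions"] for c in contributors)
    top = max(c["contributions"] for c in contributors)

    print(f"contributors (first page): {len(contributors)}")
    print(f"top contributor's share:   {top / total:.0%}")
    # A share near 100% means one person getting bored, burning out,
    # signing an NDA, or dying takes the project with them.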
Build a SaaS and you'll have "journalists" asking if they can include you in their new "Top [your category] Apps in [current year]"; you just have to pay $5k for first place, $3k for second, and so on (with a promotional discount on first place, since it's your first interaction).
You'll get "promoters" offering to grow your social media following, which is one reason companies may not even realize that some of their own top accounts and GitHub stars are mostly bots.
You'll get "talent scouts" claiming they can find you experts exactly in your niche, but in practice they just scrape and spam profiles with matching keywords on platforms like LinkedIn once you show interest, while simultaneously telling candidates that they work with companies that want them.
And in hiring, you'll see candidates sitting in interview farms, quite clearly in East Asia but connecting through Washington D.C. IPs, presenting themselves with generic European names and synthetic camera backgrounds, who somehow ace every question and whose CVs already list experience with every technology you mention in the job post (not hyperbole, I've seen exactly this happen).
If a metric or signal matters, there is already an ecosystem built to fake it, and faking it becomes operationalized, just another part of doing business.
Have an upvote. The first one is free.
Short term, you pay the cost of fake signaling, which is simply deadweight loss. People spend resources to inflate signals instead of improving the actual thing.
Medium term, I suppose you could see how it increases consumption: users will try something with 100k stars instead of 2, GitHub gets to seem more used than it really is, and the repo owner benefits too.
Long term, the correspondence between how important a (distorted) system is perceived to be (GitHub, OSS, IT in general) and how important it really is collapses quite abruptly and unnecessarily, and you end up with a lemon market [0] where signals stop being reliable at all.
I'm increasingly convinced the issue isn't feedback itself, but centralized, global, aggregated feedback that becomes game-able without stronger identity signals.
Right now the incentives are tied (correctly or not) to these global metrics, so you get a market for faking them, with money flowing to whoever is best at juicing that signal.
If instead the signal was based on actual usage and attribution by actual developers, the incentives shift. With localized insight (think "Yeah, I like Golang") it becomes both harder to fake and harder to game at the metric-rollup level.
Useful reputation on the web is actually much more localized and personal. I gladly receive updates on, and would support, the repos I've starred. If I could choose where to put my dollars (not as an investor), it would likely follow the list of repos I've personally curated.
This suggests a different direction: instead of asking "how many stars does this have?", ask "who is actually depending on this, and in what context?" Or better: retroactively compare your top-n repos to mine, and we get a metric seen through our own lenses. If you include everyone in that aggregation, you end up where we are now; but if instead you choose the list, well, the stars could align as a good metric once more.
The interesting part is that the web already contains most of that information, we just don't treat identity as a part of the signal (yet? universally?).
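As a toy version of that idea, here's a sketch that scores the overlap between two users' starred sets instead of trusting the global count ("alice" and "bob" are hypothetical logins; pagination is capped for brevity, paginate fully in practice):

    import requests

    def starred(user: str) -> set[str]:
        """Collect (the first few pages of) a user's starred repos."""
        repos: set[str] = set()
        for page in range(1, 4):  # capped for the sketch
            r = requests.get(
                f"https://api.github.com/users/{user}/starred",
                params={"per_page": 100, "page": page},
                timeout=10,
            )
            r.raise_for_status()
            batch = r.json()
            repos.update(repo["full_name"] for repo in batch)
            if len(batch) < 100:
                break
        return repos

    # Hypothetical users: how aligned are our personally curated lists?
    mine, yours = starred("alice"), starred("bob")
    union = mine | yours
    overlap = len(mine & yours) / len(union) if union else 0.0
    print(f"shared repos:    {sorted(mine & yours)[:10]}")
    print(f"Jaccard overlap: {overlap:.2f}")

A star from someone whose list overlaps heavily with yours is worth far more than a thousand from strangers, and it's much harder to farm.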
What's more, it became obvious to me two or so years ago that GitHub is slowly but surely going the way of LinkedIn. Lots of professionals are on there just because it's expected of them; some interact occasionally with the "social media" aspect of it, and fewer still really thrive on that part. Time will tell how this pans out, but just look how many Developer and Linux influencers became huge on YouTube and other places this last year. Most of them barely had more than 10k subscribers 3 years ago, and now people look to them for their next tech stack and hot framework/tool/library/distro and so on.
We've recently decided to complicate life for AI bots in our repo (https://archestra.ai/blog/only-responsible-ai), hoping they will just choose the AI startups that are easier to engage with.
Specifically someone submitted a library that was only several days old, clearly entirely AI generated, and not particularly well built.
I noted my concerns with listing said library in my reply declining to do so, among them that it had "zero stars". The author was very aggressive, and in his rant of a reply asked how many stars he needed. I declined to answer; that's not how this works. Stars are a consideration, not the be-all and end-all.
You need real world users and more importantly real notability. Not stars. The stars are irrelevant.
This conversation happened on GitHub, and since then I have had other developers wander into that conversation and demand I set a star-count definition for my "vague notability requirement". I'm not going to; it's intentionally vague. When a metric becomes a target, it ceases to be a good metric, as they say.
I don't want the page to get overly long, and if I just listed everything with X star count I'd certainly list some sort of malware.
I am under no obligation to list your library. Stop being rude.
I've been thinking about this a lot. These metrics are all just marketing signals to draw people's attention, trying to make some kind of deals. So the fix should be: make the cost of the signal match what it claims to represent. I'm obsessed with something called DUKI /djuːki/ (Decentralized Universal Kindness Income, a form of UBI) — the idea is that instead of stars or reviews, trust comes from deals pledging real money to the world for all as the deal happens. You can't fake that cheaply.
So the metric becomes the money itself — if you fake X amount, it costs you X, and the world will thank you by paying attention...
Imagine if GitHub let you back a star with real money — the more you put in, the more credible the star. And that money goes out as UBI for everyone. For attention makers, star anything you want, as much as you want. For attention takers, just follow the money to filter through all the noise that's so easy to manipulate...
I think as a proxy it fails completely: astroturfing aside, stars don't guarantee popularity (and I bet the correlation is very weak; a lot of very fundamental system libraries have a small number of stars). Stars don't guarantee quality either.
And given that you can read the code, stars seem to be a completely pointless proxy. I'm teaching myself to skip the stars and skim through the code and evaluate the quality of both architecture and implementation. And I found that quite a few times I prefer a less-"starry" alternative after looking directly at the repo content.
Imagine you're choosing between 3 different alternatives, and each is 100,000 LOC. Is 'reading the code' really an option? You need a proxy.
Stars aren't a good one because they're an untrusted source. Something like a referral would be much better, but in a space where your network doesn't have much knowledge, a proxy like stars is the only option.
100k LOC is small, but you're right, it can be millions. I usually skim through the code though, and it's not that hard. I don't need to fully read and understand it.
What I look at is: high-level architecture (is there any, is it modular or one big lump of code, how modular it is, what kind of modules and components it has and how they interact), code quality (structuring, naming, aesthetics), bus factor (how many people contribute and understand the code base).
Looking at the commit history, closed vs open issues and pull requests provides a much more useful signal if you can't decide from the code.
(Sometimes it still is, but the agent-generated garbage does not help.)
- link: https://github.com/pathwaycom/pathway
- watch: 115, forks: 1.6k, stars: 63.5k
- issues: 32, PRs: 3
And compare to another ETL tool, Apache Airflow, used by me and many machine learning folks:
- link: https://github.com/apache/airflow
- watch: 777, forks: 16.9k!!!!!, stars: (only!) 45.1k
- issues: 1200 (!!!), PRs: 501 (!)
If the number of stars is in the thousands, tens of thousands, or hundreds of thousands, that might correlate with a serious project. But that should be visible through real, costly activity such as issues, PRs, and discussion.
It is the meaning of having dozens or hundreds of stars that is undermined by the practice described in the linked post.
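That sanity check is scriptable. A throwaway sketch over the two repos above (unauthenticated GitHub REST API):

    import requests

    # The two repos compared above.
    for full_name in ("pathwaycom/pathway", "apache/airflow"):
        r = requests.get(f"https://api.github.com/repos/{full_name}", timeout=10)
        r.raise_for_status()
        repo = r.json()
        stars, forks = repo["stargazers_count"], repo["forks_count"]
        watchers = repo["subscribers_count"]  # the "watch" count shown in the UI
        print(f"{full_name}: stars={stars} forks={forks} watchers={watchers} "
              f"fork/star ratio={forks / stars:.3f}")
    # Costly activity (forks, watchers, issues, PRs) wildly out of line
    # with the star count is exactly the smell described above.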
That said, I believe the core problem is that GitHub belongs to Microsoft, and so it will keep drifting toward operating like a social network - i.e. engagement matters. It will take real goodwill to get rid of Social Network Disease at scale.
There are much better ways of finding those who have good taste.
Two projects could look exactly the same by visible metrics, and one is a complete shell while the other is a great project.
But they choose not to publish it.
And those same private signals more effectively spot the signal-rich stargazers than PageRank.
The fake accounts often star my old repos to look like real users. They are usually very sketchy if you look for a minute, for example starring 5,000 projects in a month with no other GitHub activity. One time I found a GitHub Sponsors ring, which must be a money laundering / stolen credit cards thing?
Even 10 years ago, most VCs we spoke to had wised up and discarded GitHub stars as a vanity metric.
GitHub should also introduce a way to bookmark a repo, in addition to the existing options of sponsoring/watching/forking/starring it.
One VC told me you'll get more funding and upvotes if you don't put "india" in your username.
Founders need the ability to get traction, so if a VC gets a pitch and the project's repo has 0 stars, that's a strong signal that this specific team is just not able to put themselves out there, or that what they're making doesn't resonate with anyone.
When I mentioned that a small feature I posted on Reddit got 3k views, investors' ears perked right up. And I bet you're thinking "I wonder what that is, I'd like to see that!" People like to see things that are popular.
By the way, congrats on 200 stars on your project, I think that is definitely a solid indicator of interest and quality, and I doubt investors would ignore it.
I think VCs just know that there are no reliable systems, so they go with whatever's used.
Why am I not surprised that big capital corrupts everything? Also, Goodhart's law applies again: "When a measure becomes a target, it ceases to be a good measure".
HN folks: what reliable, diverse signals do you use to quickly evaluate a repo's quality? For me it's: maintenance status, age, elegance of the API, and maybe commit history.
PS: From the article:
> instead tracks unique monthly contributor activity - anyone who created an issue, comment, PR, or commit. Fewer than 5% of top 10,000 projects ever exceeded 250 monthly contributors; only 2% sustained it across six months.
> [...] recommends five metrics that correlate with real adoption: package downloads, issue quality (production edge cases from real users), contributor retention (time to second PR), community discussion depth, and usage telemetry.
Finding any curse words in hidden comments in the commit history is for me a good indication of a human working on a passion project, though ymmv.
And there are always exceptions to the exception of the exceptions.
"We ran our own analysis sampling 150 profiles per repo across 20 projects and found repos where 36-76% of stargazers have zero followers and fork-to-star ratios 10x below organic baselines"
This does not look like an appropriate signal to use on GitHub; I doubt that this is an organic baseline. If this is what was used as the metric, the study might be flawed.
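Roughly what that analysis looks like, if you want to reproduce it on a repo you're suspicious of (a sketch; "octocat/hello-world" is a placeholder, and the sample is kept small so an unauthenticated client stays under the rate limit):

    import requests

    OWNER, REPO = "octocat", "hello-world"  # placeholder repo
    SAMPLE = 50  # the quoted analysis sampled 150 profiles per repo

    # Pull one page of stargazers, then fetch each profile's follower count.
    r = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/stargazers",
        params={"per_page": SAMPLE},
        timeout=10,
    )
    r.raise_for_status()
    stargazers = r.json()

    zero_followers = 0
    for user in stargazers:
        profile = requests.get(user["url"], timeout=10).json()
        if profile.get("followers") == 0:
            zero_followers += 1

    print(f"{zero_followers}/{len(stargazers)} sampled stargazers have zero followers")
    # The quoted analysis flags repos where 36-76% of sampled stargazers look like this.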
It’s more expensive to compute, but the resulting scores would be more trustworthy unless I’m missing something.
In my opinion, nothing could be more wrong. GitHub's own ratings are easily manipulated and measure not necessarily the quality of the project itself, but rather its popularity. The problem is that popularity is rarely directly proportional to quality.
I'm building a product, and I'm seeing that what matters is distribution and communication rather than the development itself.
Unfortunately, a project's popularity is often directly proportional to the communication "built" around it and inversely proportional to its actual quality. This isn't always the case, but it often is.
Moreover, adopting effective and objective project evaluation tools is quite expensive for VCs.
I'm not supporting this view but it is what it is unfortunately.
VCs that invest based on stars do know something I guess or they are just bad investors.
IMO, choosing projects based on star count is terrible engineering practice.
Surely a project's popularity is often related to its utility. A useful and popular project seems like exactly the kind of thing a VC might be interested in.
Hype helps raise funds, of course, and sells, of course.
But it doesn't necessarily lead to long-term sustainability of investments.
* https://arxiv.org/abs/2412.13459 (2024/2025) - Six Million (Suspected) Fake Stars in GitHub: A Growing Spiral of Popularity Contests, Spams, and Malware
You'd want to discard a lot of the noise in the bottom 20% of linking power. You want to focus more on the 'trust' factor.
In general, I’ve been dissatisfied with GitHub’s code search. It would be nice to see innovation here.
Unfortunately I still look at them, too, out of habit: The project or repo's star count _was_ a first filter in the past, and we must keep in mind it no longer is.
> Good reminder that everything gets gamed given the incentives.
Also known as Goodhart's law [1]: "When a measure becomes a target, it ceases to be a good measure".
Essentially, VCs screwed this one up for the rest of us, I think?
I agree that it has been a first filter, but should it ever have been? A star only says that someone had a passing interest in a project. Not significantly different from a 'like' on a social media post.
I'd suggest the first question to ask is "is this an AI project or not?" If it is, don't pay attention to the stars; if it's not, use the stars as a first filter. That's the way I analyse projects on GitHub now.
As a side note, it's kind of disheartening that every time there is a metric related to popularity, there will be some among us who try to game it for profit, basically to manipulate our natural biases.
As a side note, it's always a bit sad how the parasocial nature of the modern web makes us interface like machines via simple widgets, becoming mechanical robots ourselves, rationalizing I/O via simple metrics and forgetting that the map is never the territory.
They make it easier to sort through options, help with search and discovery, and at least give you a baseline signal for trust that can get better over time.
So to me, some signal better than no signal at all.
https://github.com/karakeep-app/karakeep
Sounds useful.
I’ll star it and check it out later ;)
Now that money is flowing to GitHub stars, no wonder people are buying fake "stars". Seems capitalism is working as expected...
It does feel like everything is a scam nowadays though. All the numbers seem fake; whether it's number of users, number of likes, number of stars, amount of money, number of re-tweets, number of shares issued, market cap... Maybe it's time we focus on qualitative metrics instead?
GitHub stars are akin to "link popularity" or PageRank, which is ripe for abuse.
One way around it is to trust well known authors/users more. But it's hard to verify who is who. And accounts get bought/closed/hacked.
Another way is to hand over the algo in a way where individuals and groups can shape it, so there's no universal answer to everyone.
Stars only matter when there are very few, like if it has almost none, that’s a red flag. Otherwise it’s just noise.
Specifically if those avatars are cute anime girls.
I know you are half joking/not joking, but this is definitely a golden signal.
We should do a hall of shame!
I guess it's like fake followers on other social media platforms.
To me, it just reflects a behaviour that is typical of humans: in many situations, we make decisions in fields we don't understand, so we evaluate things poorly.
I'd give a lot of credit to Microsoft and the GitHub team if they went on a major ban/star-removal wave across affected repos, akin to how Valve occasionally does a major sweep across Counter-Strike, banning verified cheaters.
For Microsoft this is another kind of sunk cost, so idk how much incentive they have to fix this situation.
I am not successful at all with my current projects (admittedly I am not trying to be nowadays), so feel free to dismiss this advice from a time before LLM-driven development, but in the past I had decent success in forums, interacting with people who had the specific problem my project addressed. Less in stars, more in actual exchanges of helpful contributions.
My first Open Source project easily got off the ground just by being listed in SourceForge.
On GitHub stars, I'd argue they are the most suitable comparison, as all the funny business around stars should be detectable by GitHub directly, if at all. Ideally, bans would have the biggest deterrent effect if they happened in larger waves, letting the community see who engaged in fraudulent behaviour.
I paid GitHub for years to keep my repos private...
But then I don't participate in the stars "economy" anyway. I don't star and I don't count stars, so I'm probably irrelevant for this study.
> When nobody is forking a 157,000-star repository, nobody is using it
That is completely untrue. I don't fork a repo when I use it, only when I want to contribute to it (and I usually clean up my forks).
It’s supposed to get people to actually try your product. If they like it, they star it. Simple.
At that point, forcing the action just inflates numbers and strips them of any meaning.
Gaming stars to set it as a positive signal for the product to showcase is just SHIT.
We figured out a workaround: limit activity to prior contributors only, and add a CI job that pushes a coauthored commit after passing a captcha on our website. It cut the AI slop by 90%. Full write-up: https://archestra.ai/blog/only-responsible-ai
> Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft's VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages based partly on platform traction.
This all smells like BS. If you are going to do an analysis, you need to do some sound maths on the amount of investment a project gets in relation to GitHub stars.
All this says is that stars are considered in some ways, which is very far from saying that you buy fake stars and then you get investment.
This smells like bait for hating on people who get investment.
> As one commenter put it: "You can fake a star count, but you can't fake a bug fix that saves someone's weekend."
I'm curious what the research says here: can you actually structurally undermine the gamification of social-influence scores? And I'm pretty sure fake bug fixes are almost trivial for LLMs to generate.
“gstack is not a hypothetical. It’s a product with real users:
75,000+ GitHub stars in 5 weeks
14,965 unique installations (opt-in telemetry, so real number is at least 2x higher)
305,309 skill invocations recorded since January 2026
~7,000 weekly active users at peak”
GitHub stars are a meaningless metric but I don’t think a high star count necessarily indicates bought stars. I don’t think Garry is buying stars for his project.
People star things because they want to be seen as part of the in-crowd, who knows about this magical futuristic technology, not because they care to use it.
Some companies are buying stars, sure, but the methodology for identifying it in this article is bad.