Discussion
Inside GitHub's Fake Star Economy
dafi70: Honest question: how can VCs consider the star system reliable? Users who star a project often stop following it, so poorly maintained projects can have many stars while being effectively abandoned. A better system, though certainly not the best, would be to look at how much "life" the issues have: opening rates, closing rates (not automatic), and response times. My project has 200 stars, and I struggle like crazy to update it regularly beyond simple version bumps.
talsania: Seen this firsthand, repos with hundreds of stars and zero meaningful commits or issues. In hardware/RTL projects it's less prominent.
ozgrakkurt: > Jordan Segall, Partner at Redpoint Ventures, published an analysis of 80 developer tool companies showing that the median GitHub star count at seed financing was 2,850 and at Series A was 4,980. He confirmed: "Many VCs write internal scraping programs to identify fast growing github projects for sourcing, and the most common metric they look toward is stars."

> Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft's VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages based partly on platform traction.

This all smells like BS. If you are going to do an analysis, you need to do some sound maths on the amount of investment a project gets in relation to GitHub stars. All this says is that stars are considered in some ways, which is very far from saying that you buy fake stars and then you get investment. This smells like bait for hating on people who get investment.
Lapel2742: I do not look at the stars. I look at the list of contributors, their activities and the bug reports / issues.
est: > I look at the list of contributors

Specifically, whether those avatars are cute anime girls.
tomaytotomato: > Specifically if those avatars are cute anime girls.

I know you are half joking/not joking, but this is definitely a golden signal.
GaryBluto: Positive or negative to you? Whenever I see more than one anime-adjacent profile picture I duck out.
bjourne: > The CMU researchers recommended GitHub adopt a weighted popularity metric based on network centrality rather than raw star counts. A change that would structurally undermine the fake star economy. GitHub has not implemented it.

> As one commenter put it: "You can fake a star count, but you can't fake a bug fix that saves someone's weekend."

I'm curious what the research says here: can you actually structurally undermine the gamification of social influence scores? And I'm pretty sure fake bugfixes are almost trivial to generate with LLMs.
socketcluster: My project https://github.com/socketCluster/socketcluster has been accumulating stars slowly but steadily for about 13 years. It now has over 6k stars, but that doesn't seem to mean much as a metric nowadays. It sucks having put in the effort and seeing the project get lost in a sea of scams, with people doubting its own authenticity.

It does feel like everything is a scam nowadays, though. All the numbers seem fake: number of users, number of likes, number of stars, amount of money, number of re-tweets, number of shares issued, market cap... Maybe it's time we focused on qualitative metrics instead?
ernst_klim: I think people expect the star system to be a cheap proxy for "this is a reliable piece of software with good quality and a lot of eyes on it".

As a proxy, I think it fails completely: astroturfing aside, stars don't guarantee popularity (and I bet the correlation is very weak; a lot of very fundamental system libraries have a small number of stars). Stars don't guarantee quality either.

And given that you can read the code, stars seem like a completely pointless proxy. I'm teaching myself to skip the stars, skim the code, and evaluate the quality of both architecture and implementation. Quite a few times I've found that I prefer a less-"starry" alternative after looking directly at the repo content.
ethegwo: Many VCs are really only doing one thing: looking for some magical quantitative metric to assess whether a project is reliable without having the domain know-how themselves. Numbers are always better than no numbers.
dukeyukey: Honestly I don't know if that's true. Picking up on vibes might be better than something like GitHub stats.
ethegwo: When a partner decides to recommend a startup to the investment committee, he needs some explicit reasons to convince the committee, not some kind of implicit vibe.
HighlandSpring: I wonder if there's a more graph-oriented score that could work well here, something PageRank-ish, so that a repo scores better if its issues are reported by users who themselves have a good score. That way it's at least a little resilient to crude manipulation attempts.
3form: It would be more resilient indeed, I think. It definitely needs a way to figure out which users should have a good score, though; otherwise it's just shifting the problem somewhat. Perhaps it could be done with a reputation-type approach, where the initial reputation is seeded by a pool of "trusted" open source contributors from some major projects.

That said, I believe the core problem is that GitHub belongs to Microsoft, and so it will keep drifting toward operating like a social network, i.e. engagement matters. It will take real good will to get rid of Social Network Disease at scale.
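The trusted-pool idea above maps naturally onto personalized PageRank: restrict the random walk's teleport step to the seed set, so reputation can only flow outward from trusted accounts. A minimal sketch under that assumption (user names, repo names, and the seed choice are all hypothetical):

```python
# Toy trust-seeded PageRank: the random walk teleports only to a seed
# set of trusted contributors, so reputation can only flow outward
# from them. A sock-puppet ring with no path to any trusted user ends
# up with a score of exactly zero.

def trust_rank(edges, trusted, damping=0.85, iters=50):
    """edges: (user, repo) pairs meaning 'user interacted with repo'."""
    nodes = {n for edge in edges for n in edge} | set(trusted)
    neighbors = {n: [] for n in nodes}
    for user, repo in edges:          # treat edges as undirected:
        neighbors[user].append(repo)  # trust flows user -> repo
        neighbors[repo].append(user)  # and repo -> user
    # Teleport distribution is concentrated on the trusted seeds.
    seed = {n: (1 / len(trusted) if n in trusted else 0.0) for n in nodes}
    score = dict(seed)
    for _ in range(iters):
        new = {n: (1 - damping) * seed[n] for n in nodes}
        for n in nodes:
            if neighbors[n]:
                share = damping * score[n] / len(neighbors[n])
                for m in neighbors[n]:
                    new[m] += share
            else:  # isolated node: return its mass to the seeds
                for s in trusted:
                    new[s] += damping * score[n] / len(trusted)
        score = new
    return score

edges = [
    ("alice", "repo_a"), ("bob", "repo_a"), ("alice", "repo_b"),
    ("sock1", "repo_spam"), ("sock2", "repo_spam"),  # fake-star ring
]
scores = trust_rank(edges, trusted=["alice"])
# repo_a and repo_b inherit score from alice; repo_spam scores 0.0
# because no trusted user ever interacted with it.
```

Note that plain PageRank without the seeded teleport is much weaker here: a large enough sock-account ring accumulates degree-based score on its own, whereas with teleport restricted to trusted seeds, any component unreachable from them gets no score at all.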
az226: Reputation doesn't equal good taste in judging other projects.

There are much better ways of finding those who have good taste.
az226: GitHub has all kinds of private internal metrics that could be used to show a much higher-signal quality score: a score that is impervious to manipulation and extremely well correlated with actual quality, popularity, and value, not noise.

Two projects could look exactly the same from the visible metrics, while one is a complete shell and the other a great project.

But they choose not to publish it.

And those same private signals would spot the signal-rich stargazers more effectively than PageRank.
fontain: https://x.com/garrytan/status/2045404377226285538

"gstack is not a hypothetical. It's a product with real users:
75,000+ GitHub stars in 5 weeks
14,965 unique installations (opt-in telemetry, so real number is at least 2x higher)
305,309 skill invocations recorded since January 2026
~7,000 weekly active users at peak"

GitHub stars are a meaningless metric, but I don't think a high star count necessarily indicates bought stars. I don't think Garry is buying stars for his project.

People star things because they want to be seen as part of the in-crowd that knows about this magical futuristic technology, not because they care to use it.

Some companies are buying stars, sure, but the methodology for identifying it in this article is bad.
apples_oranges: I look at the stars when choosing dependencies; it's a first filter for sure. Good reminder that everything gets gamed given the incentives.
msdz: > I look at the stars when choosing dependencies, it's a first filter for sure.

Unfortunately I still look at them too, out of habit: the project or repo's star count _was_ a first filter in the past, and we must keep in mind it no longer is.

> Good reminder that everything gets gamed given the incentives.

Also known as Goodhart's law [1]: "When a measure becomes a target, it ceases to be a good measure".

Essentially, VCs screwed this one up for the rest of us, I think?

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law
yuppiepuppie: > The project or repo's star count _was_ a first filter in the past, and we must keep in mind it no longer is.

I'd suggest the first question to ask is "is this an AI project or not?" If it is, don't pay attention to the stars; if it's not, use the stars as a first filter. That's the way I analyse projects on GitHub now.
aledevv: > VCs explicitly use stars as sourcing signals

In my opinion, nothing could be more wrong. GitHub stars are easily manipulated, and they don't necessarily measure the quality of the project itself, but rather its popularity. The problem is that popularity is rarely directly proportional to quality.

I'm building a product, and I'm seeing how important distribution and communication are compared to the development itself.

Unfortunately, a project's popularity is often directly proportional to the communication "built" around it and inversely proportional to its actual quality. This isn't always the case, but it often is.

Moreover, adopting effective and objective project-evaluation tools is quite expensive for VCs.
ozgrakkurt: The vast majority of mid-level experienced people take stars very seriously and won't use anything under 100 stars. I'm not endorsing this view, but it is what it is, unfortunately.

VCs that invest based on stars either know something, I guess, or they are just bad investors.

IMO choosing projects based on star count is terrible engineering practice.
aledevv: Also, and above all, because it can be easily manipulated, as the research described in the article actually demonstrates.
williamdclt: Well, pretty sure that VCs are more interested in popularity than in quality so maybe it's not such a bad metric for them.
aledevv: Yes, you're right, but popularity becomes fleeting without real quality behind the project. Hype helps raise funds, of course, and it sells, of course. But it doesn't necessarily lead to long-term sustainability of the investment.