Discussion
Cal.com is going closed source. Here's why.
doytch: I get the mentality but it feels very much like security through obscurity. When did we decide that that was the correct model?
Peer_Rich: hey, cofounder here. since it takes my 16-year-old neighbor's son 15 mins and $100 of Claude Code credits to hack your open source project
doytch: Right, but those capabilities are available to you as well. Granted, the remediation effort will take longer, but... you're going to do that for any existing issues _anyway_, right?

I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner, where you take a breather and fix any existing issues that snuck through. I don't yet understand why, when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.

And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes, with the horses already out of the barn.
ButlerianJihad: This seems kind of crazy. If LLMs are so stunningly good at finding vulnerabilities in code, then shouldn't the solution be to run an LLM against your code after you commit, and before you release it? Then you basically have pentesting harnesses all to yourself before going public. If an LLM can't find any flaws, then you are good to release that code.

A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?

https://en.wikipedia.org/wiki/Linus%27s_law
creatonez: This is some truly exceptionally clownish attention seeking nonsense. The rationale here is complete nonsense, they just wanted to put "because AI" after announcing their completely self-serving decision. If AI cyber offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
toast0: I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.

If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues, and someone is going to find them.
bakugo: *This comment sponsored by Anthropic
pdntspa: whooptie fuggin doo, then spend $200 on finding and fixing the issues before you push your commits to the cloud
rvz: You know what? Great move.

Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug reports left hanging for days.

Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.

Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests, which is another good idea.
wild_egg: Haven't the SQLite tests always been closed? Getting access to them is a major reason for financially supporting them
tokai: Security through obscurity has been known to be a faulty approach for nearly 200 years. Yet here we are.
_pdp_: The real threat is not security but bad actors copying your code and calling it theirs.

IMHO, open source will continue to exist and it will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable, and in fact it is simply helping competitors. So why do it then?

The only open source that will remain will be the real open source projects that are true to the ethos.
fcarraldo: > The real threat is not security but bad actors copying your code and calling it theirs.

How has this changed?
HyprMusic: Bad actors can rewrite it with AI and claim ownership of the result.
simonw: Drew Breunig published a very relevant piece yesterday that came to the opposite conclusion: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-o...

Since security exploits can now be found by spending tokens, open source is MORE valuable, because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private.

> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
DrammBA: I have a feeling the real reason is them trying to avoid someone using AI to copyright-wash their product, they're just using security as the excuse.
tudorg: It's funny that this news showed up just as we (Xata) have gone the other direction, also citing changes due to AI: https://xata.io/blog/open-source-postgres-branching-copy-on-...

We did consider arguments in both directions (e.g. it's easier to recreate the code, agents can understand better how it works), but I honestly think the security argument favors open source: OSS projects will get more scrutiny faster, which means bugs won't linger around.

Time will tell. I am in the open source camp, though.
samename: That’s a non-trivial cost for open source projects, which are commonly severely underfunded
yawndex: Cal.com is not a severely underfunded project, it raised around $32M of VC money.
1970-01-01: This is not security via obscurity; it is reducing your attack surface as much as possible.
vlapec: LLMs really are stunningly good at finding vulnerabilities in code, which is why, with closed-source code, you can and probably will use them to make your code as secure as possible.

But you won't keep the doors open for others to use them against it.

So it is, unfortunately, understandable in a way...
skybrian: This seems similar to the lesson learned for cryptographic libraries where open source libraries vetted by experts become the most trusted.Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
ErroneousBosh: > since it takes my 16-year-old neighbor's son 15 mins and $100 of Claude Code credits to hack your open source project

To what end? You can just look at the code. It's right there. You don't need to "hack" anything.

If you want to "hack on it", you're welcome to do so.

Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
hmokiguess: Risk tolerance and emotional capacity differ from one individual to another; while I may disagree with the decision, I can respect it.

That said, I think it’s important to try to recognize things from multiple angles rather than bucket them from your filter bubble alone. Fear sells, and we need to stop buying into it.
adamtaylor_13: Could you not simply point AI at your open source codebase and use it to red-team your own codebase?This post's argument seems circular to me.
dspillett: Reducing your attack surface as much as possible via obscurity.
dec0dedab0de: This seems dishonest, like someone is forcing the decision for other reasons, and they're using security and AI as a distraction.
poisonborz: AI sure is useful as a scapegoat for any negative PR inducing moves.
1970-01-01: Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online: obscurity just means taking additional steps to recover the information. Your passwords are not obscure strings of characters; they are secrets.
popalchemist: Seems like it's just being used as a convenient pretense to back out of open-source.
ezekg: I mean, they were a COSS startup using the AGPLv3, so checks out. :)