Discussion
hiciu: Besides the main issue here, and the owner's account possibly being compromised as well, there are 170+ low-quality spam comments in there. I would expect a better spam detection system from GitHub. This is hardly acceptable.
deep_noz: good i was too lazy to bump versions
iwhalen: What is happening in this issue thread? Why are there 100+ satisfied slop comments?
nubg: Are they trying to slide stuff down? but it just bumps stuff up?
bratao: Looks like the founder and CTO's account has been compromised. https://github.com/krrishdholakia
intothemild: I just installed Harness, and it instantly pegged my CPU. I was lucky to see my processes before the system hard-locked. Basically it forkbombed `grep -r rpcuser\rpcpassword` processes trying to find crypto wallets or something. I saw that they spawned from Harness, and killed it. Got lucky; no backdoor installed here from what I could make out of the binary.
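For anyone in the same spot, a rough sketch of how to find and kill those wallet-scanning processes. The `rpcuser` pattern comes from the comment above; the rest is an assumption about what the spawned command lines look like:

```shell
# kill processes whose command line is the wallet-scanning grep described
# above; the [g]rep bracket trick keeps this pipeline from matching itself
ps -eo pid=,args= | awk '/[g]rep -r rpcuser/ { print $1 }' | while read -r pid; do
  kill "$pid" 2>/dev/null
done
```

Killing the processes only stops the scan; it does not tell you what was already exfiltrated.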
6thbit: The title is a bit misleading. The package was directly compromised, not "by supply chain attack". If you use the compromised package, your supply chain is compromised.
TZubiri: Thank you for posting this, interesting. I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of similar packages. To reduce supply chain risk, not only does a vendor (even if gratis and open source) need to be evaluated, but also the advantage it provides. Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it. Remember that you are programmers and you can just program; you don't need a framework. You are already using the API of an LLM provider; don't put a hat on a hat, don't get killed for nothing. And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance. An additional note: the dev will probably post a post-mortem, what was learned, how it was fixed, maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.
xinayder: > Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.

Programming against different LLM APIs is a hassle; this library made it easy by providing one single API you call, and behind the scenes it handled all the different API calls you need for different LLM providers.
Imustaskforhelp: Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks, at the same time that threat actors are getting more advanced, partly with the help of LLMs. First Trivy (which got compromised twice), now LiteLLM.
jadamson: In case you missed it, according to the OP, the previous point release (1.82.7) is also compromised.
dot_treo: Yeah, that release has the base64 blob, but it didn't contain the pth file that auto triggers the malware on import.
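For readers unfamiliar with the mechanism: the `site` module executes any line in a site-packages `.pth` file that begins with `import`, at interpreter startup, before any user code runs. A minimal, harmless demo (the filename and message are made up; it cleans up after itself and needs a writable site-packages):

```python
# demo of the .pth startup hook that made the malware auto-trigger
import pathlib
import subprocess
import sys
import sysconfig

def pth_startup_demo() -> str:
    """Drop a throwaway .pth into site-packages, start a fresh
    interpreter, and return that interpreter's stdout; then clean up."""
    site_dir = pathlib.Path(sysconfig.get_paths()["purelib"])
    pth = site_dir / "zz_demo_startup.pth"
    # the `site` module executes lines starting with "import" at startup
    pth.write_text('import sys; sys.stdout.write("pth ran before user code\\n")\n')
    try:
        child = subprocess.run(
            [sys.executable, "-c", "print('user code')"],
            capture_output=True, text=True,
        )
        return child.stdout
    finally:
        pth.unlink()

print(pth_startup_demo())
```

So simply starting Python with the compromised package installed was enough; you never had to `import litellm` yourself.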
mikert89: Wow this is in a lot of software
postalcoder: This is a brutal one. A ton of people use litellm as their gateway.
sschueller: Does anyone know a good alternative project that works similarly (share multiple LLMs across a set of users)? LiteLLM has been getting worse and keeps trying to get me to upgrade to the paid version. I also had issues with creating tokens for other users, etc.
bakugo: Attackers trying to stifle discussion, they did the same for trivy: https://github.com/aquasecurity/trivy/discussions/10420
Imustaskforhelp: I have created a comment to hopefully steer the discussion towards Hacker News, since the threat actor seems to be stifling genuine comments on GitHub by spamming that thread with hundreds of accounts: https://github.com/BerriAI/litellm/issues/24512#issuecomment...
rdevilla: It will only take one agent-led compromise to get some Claude-authored underhanded C into llvm or linux or something and then we will all finally need to reflect on trusting trust at last and forevermore.
Imustaskforhelp: Do you think people will update litellm without looking at this discussion (maybe automatically), which would then lead to the loss of crypto wallets and especially AI API keys? Now, I am not worried about the AI API keys causing much damage, but I am thinking one step further, and I am not sure how many of these corporations follow their privacy policy, so perhaps someone more experienced can tell me: wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information about businesses, and perhaps private individuals too?
cpburns2009: LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery token was worth it.

[1]: https://pypi.org/project/litellm/
fratellobigio: It's been quarantined on PyPI
oncelearner: That's a bad supply-chain attack; many folks use litellm as their main gateway.
eoskx: This is bad, especially from a downstream dependency perspective. DSPy and CrewAI also import LiteLLM, so you could not be using LiteLLM as a gateway, but still be importing it via those libraries for agents, etc.
Imustaskforhelp: If that were to happen, the worry I would have is all the sensitive government servers around the world that might then be exploited, and the amount of damage that could be caused silently by such a threat actor; or something like AWS/GCP/these massive hyperscalers, which are also used by governments around the globe at times. The possibilities within a good threat could be catastrophic, especially if we assume nation-states are interested in sponsoring hacking attacks (which many nations already do) to attack enemy nations or gain leverage. We are looking at damage in the trillions at that point. But I would assume that Linux might be safe for now; it might be the most looked-at code. LLVM might be a bit more interesting, as a slip there might go a little unnoticed, but hopefully the people working on LLVM are well funded enough to look at everything carefully and not have such a slip-up.
0123456789ABCDE: airflow, dagster, dspy, unsloth.ai, polar
dec0dedab0de: GitHub, PyPI, npm, Homebrew, CPAN, etc. should adopt a multi-person, multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X amount of monthly downloads. Basically, have all releases require multi-factor auth from more than one person before they go live. A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
circularfoyers: Comparing this project to is-odd seems very disingenuous to me. My understanding is this was the only way you could use llama.cpp with Claude Code for example, since llama.cpp doesn't support the Anthropic compatible endpoint and doing so yourself isn't anywhere near as trivial as your comparison. Happy to be corrected if I'm wrong.
MuteXR: You know that people can already write backdoored code, right?
otabdeveloper4: There are only two different LLM APIs in practice (Anthropic and everyone else), and the differences are cosmetic. This is like a couple hours of work even without vibe coding tools.
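As a rough illustration of that claim, a minimal sketch of a two-provider wrapper over `requests`. The endpoints and response shapes are the public OpenAI and Anthropic chat APIs; error handling, streaming, retries, and tool calls are all omitted:

```python
import requests

def chat(provider: str, api_key: str, model: str, messages: list) -> str:
    """Minimal unified wrapper over the two common chat-completion shapes."""
    if provider == "anthropic":
        r = requests.post(
            "https://api.anthropic.com/v1/messages",
            headers={"x-api-key": api_key, "anthropic-version": "2023-06-01"},
            json={"model": model, "max_tokens": 1024, "messages": messages},
            timeout=30,
        )
        r.raise_for_status()
        return r.json()["content"][0]["text"]
    # OpenAI-style providers (OpenAI itself, and most local servers)
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": messages},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]
```

Whether "a couple hours" covers streaming, retries, and the long tail of provider quirks is the real argument for or against a library like LiteLLM.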
xinayder: When something like this happens, do security researchers instantly contact the hosting companies to suspend or block the domains used by the attackers?
hmokiguess: What is Harness?
tom_alexander: Only tangentially related: Is there some joke/meme I'm not aware of? The github comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.
incognito124: In the thread:> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.
shay_ker: A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.Do the labs label code versions with an associated CVE to label them as compromised (telling the model what NOT to do)? Do they do adversarial RL environments to teach what's good/bad? I'm very curious since it's inevitable some pwned code ends up as training data no matter what.
datadrivenangel: This was a compromise of the library owner's GitHub accounts, apparently, so this is not a related scenario to dangerous code in the training data. I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded, in theory.
jFriedensreich: We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy and too little isolation. We need to start working in full sandboxes with defence in depth that have real guardrails and UIs, like VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor and more, but with much better usability. These are the same requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we would see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
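A rough sketch of what part of that looks like with stock Docker flags today. The image and mount are placeholders, and real egress filtering needs more than `--network none`; this is illustration, not a hardened setup:

```shell
# run an untrusted dev shell with no network, no extra capabilities,
# and a read-only root filesystem; only the project dir is writable
docker run --rm -it \
  --network none \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 256 \
  --tmpfs /tmp \
  -v "$PWD":/work -w /work \
  python:3.12-slim bash
```

Notably, `--pids-limit` alone would have contained the grep forkbomb described elsewhere in this thread.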
nickvec: Ton of compromised accounts spamming the GH thread to prevent any substantive conversation from being had.
redrove: The first line of defense is the git host and artifact host scrubbing the malware (in this case GitHub and PyPI). Domains might get added to a list for things like 1.1.1.2, but as you can imagine that has much smaller coverage; not everyone uses something like that in their DNS infra.
cpburns2009: You can see it for yourself here: https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
jbkkd: Two URLs found in the exploit: https://checkmarx.zone/raw https://models.litellm.cloud/
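As a stopgap while DNS blocklists catch up, the two domains above can be null-routed locally. This assumes those are the only IOCs, and it does not clean an already-compromised machine:

```shell
# null-route the exfil domains reported in this thread (stopgap only)
printf '0.0.0.0 checkmarx.zone\n0.0.0.0 models.litellm.cloud\n' \
  | sudo tee -a /etc/hosts
```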
worksonmine: And how would that work for single maintainer projects?
xunairah: Version 1.82.7 is also compromised. It doesn't have the pth file, but the payload is still in proxy/proxy_server.py.
danielvaughn: I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine, and all I can say is good luck. The Python ecosystem provides too many nooks and crannies for malware to hide in.
cedws: This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.
detente18: LiteLLM maintainer here. This is still an evolving situation, but here's what we know so far:

1. Looks like this originated from the trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09

2. If you're on the proxy docker, you were not impacted. We pin our versions in the requirements.txt.

3. The package is in quarantine on PyPI - this blocks all downloads.

We are investigating the issue and seeing how we can harden things. I'm sorry for this.

- Krrish
Imustaskforhelp: > - Krrish

Was your account completely compromised? (Judging from the commit made by TeamPCP on your accounts.) Are you in contact with all the projects which use litellm downstream, and do you know whether they are safe or not (I am assuming not)? I am also unable to understand how the exploit of trivy being used in CI/CD compromised your account itself.
rgambee: Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.
stavros: That was massively more interesting, this is just a straight-up hack.
Blackthorn: It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
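For context on why pickle-based checkpoints are dangerous (setting aside that the `.pth` in this incident was a site-packages hook, not a model file): `torch.load` is pickle-based, and unpickling can run arbitrary code. A dependency-free sketch using plain `pickle`; the `Evil` class and its message are made up for illustration:

```python
# why pickle-based model files are dangerous: unpickling can invoke
# arbitrary callables chosen by whoever produced the file
import pickle

class Evil:
    def __reduce__(self):
        # tells pickle to call print(...) during loading
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Evil())   # what an attacker would ship as a "model"
pickle.loads(payload)            # "loading the model" runs the payload
```

safetensors avoids this by storing only raw tensor bytes plus a JSON header, so loading a file can't execute code.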
tom_alexander: Oh wow. That's a lot of compromised accounts. Guess I was wrong about it not being an attack.
redrove: > 1. Looks like this originated from the trivy used in our ci/cd

Were you not aware of this in the short time frame that it happened in? How come credentials were not rotated to mitigate the Trivy compromise?
cozzyd: The only way to be safe is to constantly change internal API's so that LLM's are useless at kernel code
thr0w4w4y1337: To slightly rephrase a quote from Demobbed (2000) [1]:

The kernel is not just open source, it's a very fast-moving codebase. That's how we win all wars against AI-authored exploits. While the LLM trains on our internal APIs, we change the APIs, by hand. When the agent finally submits its pull request, it gets lost in unfamiliar header files and falls into a state of complete non-compilability. That is the point. That is our strategy.

1 - https://en.wikipedia.org/wiki/Demobbed_(2000_film)
redrove: > I am unable to understand how it compromised your account itself from the exploit at trivy being used in CI/CD as well.

The token in CI could've been way too broad.
hmokiguess: What’s the best way to identify a compromised machine? Check uv, conda, pip, venv, etc across the filesystem? Any handy script around?
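I'm not aware of a standard tool, but a rough sketch that walks your home directory for `litellm-*.dist-info` directories works across pip, uv, conda, and venvs, since they all create dist-info in site-packages. The "bad" version set comes from this thread, and the search root is an assumption; it won't catch installs outside `$HOME` or ones that were since uninstalled:

```python
# scan for installed copies of litellm and flag the compromised versions
import pathlib
import re

BAD = {"1.82.7", "1.82.8"}  # versions reported compromised in this thread

def find_installed(root: pathlib.Path):
    """Yield (dist-info path, version) for every litellm install under root."""
    for info in root.rglob("litellm-*.dist-info"):
        m = re.match(r"litellm-(.+)\.dist-info$", info.name)
        if m:
            yield info, m.group(1)

if __name__ == "__main__":
    for path, version in find_installed(pathlib.Path.home()):
        flag = "COMPROMISED" if version in BAD else "ok"
        print(f"{flag}\t{version}\t{path}")
```

Checking `pip cache list litellm` and any lockfiles in your repos covers some of the "was it ever installed" cases this misses.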
wswin: Containers greatly limit this kind of info stealing; only explicitly provided creds would be leaked.
cpburns2009: safetensors is just as vulnerable to this sort of exploit using a pth file since it's a Python package.
mohsen1: If it was not spinning up so many Python processes and overwhelming the system with them (friends found out it was consuming too much CPU from the fan noise!), it would have been much more successful. So, similar to the xz attack. It does a lot of CPU-intensive work:

    spawn background python
    decode embedded stage
    run inner collector
    if data collected:
        write attacker public key
        generate random AES key
        encrypt stolen data with AES
        encrypt AES key with attacker RSA pubkey
        tar both encrypted files
        POST archive to remote host
franktankbank: I can't tell which part of that is expensive unless many multiples of python are spawned at the same time. Are any of the payloads particularly large?
intothemild: Sorry, I meant Harbor. I was running terminal-bench.
Blackthorn: Crypto has ironically made these sort of hacks a lot less effective, as rather than installing back doors or that sort of nastiness they just look for quick cash they can grab.
outside2344: Is it just in 1.82.8 or are previous versions impacted?
Imustaskforhelp: 1.82.7 is also impacted if I remember correctly.
zhisme: Am I the only one with the feeling that in the LLM era we now have a bigger amount of malicious software, say parsers/fetchers of credentials/ssh/private keys? It is easier to produce them and then include them in some third-party open-source software. Or is it just that our attention gets focused on such things?
amelius: We need programming languages where every imported module is in its own sandbox by default.
santiagobasulto: I blogged about this last year [0]:

> ### Software Supply Chain is a Pain in the A*

> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically

AI is not about fancy models, it's about plain old software engineering. I strongly advised our team of "not-so-senior" devs to not use LiteLLM or LangChain or anything like that, and to just stick to `requests.post('...')`.

[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
abhikul0: Same experience with browser-use; it installs litellm as a dependency. Rebooted my Mac as nothing was responding. Luckily only GitHub and Hugging Face tokens were saved in .git-credentials, and I have invalidated them. This was inside a conda env; should I reinstall my OS in case of any potential backdoors?
binsquare: So... I'm working on an open source technology to make a literal virtual machine shippable, i.e. freezing everything inside it, isolated via VM/hypervisor for sandboxing, with support for containers too since it's a real Linux VM. The problems you mentioned resonated a lot with me and are why I'm building it. Any interest in working to solve that together? https://github.com/smol-machines/smolvm
Bengalilol: Probably beside the point of your project, but did you try SmolBSD? <https://smolbsd.org> It's a meta-OS for microVMs that boots in 10–15 ms. It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation. Overall, it fits into the "VM is the new container" vision.

Disclaimer: I follow iMil (the developer of smolBSD and a contributor to NetBSD) through his Twitch streams and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (but I participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.

More here: <https://hn.algolia.com/?q=smolbsd>