Discussion
Package Managers Need to Cool Down
mpalmer: Surprised not to see nix mentioned in connection with this topic! If you use nix (especially nix flakes), this consideration falls out naturally from the nixpkgs repository reference (SHA, branch, etc.) you choose to track. Nixpkgs has various branches for various appetites for the "cutting edge".
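To make this concrete, here is a minimal sketch of a `flake.nix` that tracks a stable nixpkgs branch (the branch name is illustrative); the accompanying `flake.lock` then pins the input to an exact commit SHA until you deliberately run `nix flake update`:

```nix
{
  # Track a stable branch rather than nixpkgs-unstable; the lock file
  # records an exact commit until you update it on purpose.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # ...packages, devShells, etc. built from the pinned nixpkgs...
  };
}
```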
jauntywundrkind: If everyone only starts using a package after 7 days, it feels like it just means we don't find out about problematic packages until 7 days later. The reason a 7-day delay works is the "don't go first" effect (I assert with no evidence). But if everyone applies the same delay, there's no longer a benefit to you delaying. This feels like a prisoner's dilemma of no one upgrading.

I do think there is some sense in having some cooldown. Giving automated review systems some time to sound alarms would be good.

I'm not sure what the reporting mechanisms look like for various ecosystems. Being able to declare that there should be a hold is Serious Business, and going through with a hold or removal is a very costly human decision for repo maintainers to make.

Ideally I'd like to see something like the AT Protocol, where individuals can create records on their PDSs that declare dangers. This could form a reputation system that disincentivizes bad actors and lets anyone on the net quickly see incoming dangers, in a distributed fashion, in real time. (Hire me, I'll build it.)
jvanderbot: Does the timer start at upload, or does the timer start at mass exploitation? Not a bad idea, but we'd need evidence of the former case to make it mandatory and widespread. IIRC, many times it's compromised maintainer credentials, which are caught very quickly, so that's evidence for the former case.
kenperkins: I think the premise is that modern scanners are really good at finding malicious code (and are run by dozens of companies in the industry), but when a package gets pushed and installed inside of that 7-day window, the spread is uncontrolled. A cooldown basically gives the machinery in the package ecosystem the opportunity to do its job.
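As one concrete way to apply such a window today, the Renovate dependency-update bot supports a `minimumReleaseAge` option that delays update PRs until a release has aged. A sketch of a `renovate.json` (exact option names and support may vary by Renovate version):

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["npm"],
      "minimumReleaseAge": "7 days"
    }
  ]
}
```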
idle_zealot: Has any notable package manager tried the staggered rollout model that approximately every one of us uses when deploying changes at our day jobs? The complexity is, of course, version compatibility/mismatch, but it seems like a solvable problem. Then you could have an automatic canary system. The closest I've seen to this are opt-in early release channels.
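A staggered rollout for a registry could, hypothetically, be as simple as deterministically bucketing consumers per package version and ramping the exposed percentage with the release's age. A sketch (all names are hypothetical, not any real registry's API):

```typescript
import { createHash } from "crypto";

// Deterministically bucket a client into [0, 100) for a given package version,
// so the same client always gets the same rollout decision.
function rolloutBucket(pkg: string, version: string, clientId: string): number {
  const digest = createHash("sha256")
    .update(`${pkg}@${version}:${clientId}`)
    .digest();
  // Interpret the first 4 bytes as an unsigned integer, then map to [0, 100).
  return digest.readUInt32BE(0) % 100;
}

// Ramp exposure linearly from 1% at release time to 100% after `rampDays`.
function rolloutPercent(releasedAt: Date, now: Date, rampDays = 7): number {
  const ageDays = (now.getTime() - releasedAt.getTime()) / 86_400_000;
  if (ageDays <= 0) return 1;
  return Math.min(100, 1 + (99 * ageDays) / rampDays);
}

// A resolver would fall back to the previous version when this returns false.
function shouldInstall(
  pkg: string,
  version: string,
  clientId: string,
  releasedAt: Date,
  now: Date = new Date()
): boolean {
  return rolloutBucket(pkg, version, clientId) < rolloutPercent(releasedAt, now);
}
```

Because bucketing is keyed on the client, an early-canary cohort sees the new version first and an automated system can halt the ramp if their scanners or crash reports flag it.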
nikeee: Bun added `trustedDependencies` [1] to package.json and only executes postinstall scripts coming from those dependencies. I think this is something that should be supported across all JS package managers, even more so than version cooldowns.

[1]: https://bun.com/docs/guides/install/trusted
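For reference, it's a plain field in `package.json`; a sketch (package names illustrative):

```json
{
  "name": "my-app",
  "dependencies": {
    "esbuild": "^0.21.0",
    "left-pad": "^1.3.0"
  },
  "trustedDependencies": ["esbuild"]
}
```

With this, Bun skips lifecycle scripts for anything not listed (aside from its built-in allowlist of popular packages), so a compromised transitive dependency can't run arbitrary code at install time.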
INTPenis: It's a fine balancing act between getting the latest updates and avoiding supply chain attacks. I completely understand the author here, because I'm actually also leaning more towards avoiding supply chain attacks than jumping on the latest CVEs.

It's just a gut feeling, rooted in 25 years of experience as a sysadmin, but I feel like a supply chain attack can do a lot more damage in general than most unpatched known vulnerabilities. Just based on my own personal experiences, no real data.

I'll try to put words to it: a supply chain attack is more focused, with a higher chance of infiltration, while a CVE is very rarely exploited en masse, and exploitation often comes with many caveats. That, combined with the current state of the world, where supply chains seem to be a very high-profile target for state actors.
ameliaquining: Can you explain how this works? Is it different from what's described in the "Language vs. system package managers" section of the post?
ameliaquining: IIUC the recent high-profile npm backdoors were mostly detected by supply-chain-security firms that ingest all package updates from the registry and look for suspicious code using automated or semi-automated analysis. Dependency cooldowns work great with this kind of thing. I agree that, if malicious packages were mostly detected via user reports, dependency cooldowns would create a prisoners' dilemma.I don't understand what you're saying about reporting mechanisms; is there something wrong with how this is currently done?
acheong08: I've been working on a project lately as my bachelor's dissertation, which I plan to keep working on long term around this issue. The basic premise is a secure package registry as an alternative to NPM/PyPI/etc., where we use a bunch of different methods to try to minimize risk: reproducible builds, tracing execution and finding behavioral differences between release and source, historical behavioral anomalies, behavioral differences against a known-safe baseline package, etc. And rather than having to install any client-side software, you just do `npm config set registry https://reg.example.com/api/packages/secure/npm/`.

eBPF traces of high-level behavior like network requests and file accesses should catch the most basic mass supply chain attacks like Shai-Hulud. The more difficult case is xz-utils-style attacks, where it's a subtle backdoor. That requires tests we can run reproducibly across versions while tracing exact behavior.

Hopefully, by automating as much as possible, we can make this generally accessible rather than expensive and enterprise-only like most security products (which really annoys me). We definitely still need a layer of human review for anything it flags, though, since a false positive might as well be defamation.

We won't know if this is the right direction until things are done and we can benchmark against actual case studies, but at least one startup accelerator is interested in funding it.
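At its core, the "behavioral differences" check reduces to set differences over observed behaviors. A toy sketch (the trace shape is hypothetical; real eBPF data would be much richer):

```typescript
// Hypothetical trace record: high-level behaviors observed (e.g. via eBPF)
// while installing and importing a package version in a sandbox.
interface Trace {
  networkHosts: Set<string>;
  filePaths: Set<string>;
}

// Report behaviors present in the candidate release but absent from the
// baseline — new outbound hosts or file accesses warrant a human look.
function behavioralDiff(baseline: Trace, candidate: Trace) {
  const newIn = (base: Set<string>, cand: Set<string>): string[] =>
    [...cand].filter((x) => !base.has(x));
  return {
    newHosts: newIn(baseline.networkHosts, candidate.networkHosts),
    newPaths: newIn(baseline.filePaths, candidate.filePaths),
  };
}
```

A Shai-Hulud-style worm that suddenly phones home or reads credential files would light up both lists, while a routine patch release should diff clean.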
bob1029: I'm a little bit amused by the inclusion of .NET in this article.The last time I pulled in more than Dapper I was using .NET Framework 4.8. Batteries are very included now. Perhaps a cooldown on dapper and maybe two other things would protect me to some degree, but when you have 3rd party dependencies you can literally count on one hand, it's hard to lose track of this stuff over time. I'd notice any dependency upgrade like a flashing neon sign because it happens so rarely. It's a high signal event. I've got a lot of time to audit them when they occur.
jazzypants: Can you help me understand why one would ever need a post-install script in the first place, please?
josephcsible: This seems like it would cause the https://xkcd.com/989/ effect.
jonhohle: I’ve mentioned this before, but at my previous employer we set up a staged Artifactory so that production couldn’t pull from anything that hadn’t been through the test stage, and test couldn’t pull from anything that hadn’t been through CI.

Because releases were relatively slow (weekly) compared to other places I worked (continuous), we had a reasonable lead time to have third-party packages scanned for vulns before they made it to production.

The setup was very minimal, really just a script to link one stage’s artifacts into the next stage’s repo. But the end effect was that production never pulled from the internet and never pulled packages that hadn’t been deployed to the previous stage.
giancarlostoro: Reminds me of my favorite setup for a React project. We had rotating deployed builds (like three or more of them) for pull requests, the React UI pointed to a specific system so you always had consistent data, devs would immediately be able to test the UI, QA could follow immediately after.
mpalmer: Yeah, definitely. Think of it as what the post calls a "system" package manager. The difference between nix and the SPMs mentioned by the post is that in the former case, control over dependencies lies with you, not with the package manager.In other words, with nix you decide the spec of the software you want installed on your machine, not the maintainer of your chosen package manager. Depending on your use case and knowledge/experience level, either choice may be preferable.
c0balt: IME the most reasonable case is optional compilation of native components when prebuilt ones are not compatible.
skeeter2020: and those rare zero-days can be treated as the exception, and dealt with quickly. It seems backwards to optimize for dependency change reaction time these days with the supply chain such an attractive target.
skeeter2020: I think there are a lot more than you might initially realize. A few off the top of my head (beyond your ORM): AutoMapper, 3rd-party JSON or Polly, logging, server-side validation, many more. Another vector: unlike in a lot of other languages, I've found way more .NET libraries for 3rd-party connectors or systems are community-based.

.NET definitely includes more these days, including lots of the things I've mentioned above, but they're often not as good, and you likely have legacy dependencies.