Scoundreller
not2b
It's a no-win situation. Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.
But what I don't get in this case is why it was not possible to reset the device to its original state. It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.
kbenson
You could put a base-level firmware on ROM, with a hardware trigger, and all it does on boot is listen for and receive a signed firmware to write to the system. It needs a way to be triggered through hardware examining traffic, and that also needs to require that the observed command be signed. That recovery boot system needs to be as simple and minimal as possible so you can have good assurance that there aren't problems with it, and should be written in the safest language you can get away with. Guard that signing key with your life, and lock it away for a rainy day, only to be used if much of your fleet of devices is hosed entirely. It should not be the same as the firmware signing key, which needs to be pulled out and used sometimes.
I think that could work, to a degree. There's always the risk that your recovery mechanism itself is exploited, so you need to make it as small and hardened a target as possible and reduce its complexity to the bare minimum. That doesn't solve the problem, which might be inherently unsolvable, but it may reduce the likelihood to levels where it's not a problem until long past the lifecycle of the devices.
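A minimal sketch of the verify-before-write step described above. This is illustrative only: it uses a stdlib HMAC as a stand-in for a real asymmetric signature (a real design would keep only a public key in ROM, e.g. Ed25519, so the device never holds the signing secret), and all names are made up.

```python
import hmac
import hashlib

# Stand-in for the locked-away recovery key; purely illustrative.
RECOVERY_KEY = b"locked-away-rainy-day-key"

def verify_and_stage(blob: bytes, signature: bytes) -> bool:
    """Accept a firmware image only if its signature checks out."""
    expected = hmac.new(RECOVERY_KEY, blob, hashlib.sha256).digest()
    # Constant-time comparison so the check itself leaks nothing.
    return hmac.compare_digest(expected, signature)

good_sig = hmac.new(RECOVERY_KEY, b"fw-image", hashlib.sha256).digest()
print(verify_and_stage(b"fw-image", good_sig))    # True
print(verify_and_stage(b"evil-image", good_sig))  # False
```

The point is that the recovery path does exactly one thing (verify, then write), which is what keeps it small enough to audit.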
ajross
> You could put a base-level firmware on ROM, with a hardware trigger, and all it does on boot is listen for and receive a signed firmware to write to the system.
Almost all devices have something like that already in the form of a bootloader or SOC bootstrapping mode. But the idea breaks down if you want to do it OTA. The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.
The best you get is a read-only backup partition (shipped in some form on pretty much all laptops today), but that's no less exploitable really.
bippihippi1
The bootloader installs the firmware. If you corrupt the bootloader, it can't install anything anymore; you'd need physical access to the chip to use an external flashing device. Some devices have non-writable bootloaders: an internal fuse blows after the first write, so the chip's bootloader is locked. That means you can always flash a new firmware, but you can't fix any bugs in the bootloader.
Scoundreller
Or a JTAG interface that the chip has in silicon, so recovery is always possible from bare metal. Dunno if that's technically in the MCU's bootloader or if the bootloader comes after.
Still requires a truck roll but at least you don’t need a hot air workstation.
dataflow
> The bootloader installs the firmware. If you corrupt the bootloader, it can't install anything anymore.
That seems like awful design? Can't you have an alternate immutable bootloader that can only be enabled with a physical switch? Or via some alternate port or something? That way they can update the live one while still keeping a fallback/downgrade path in case it has issues.
incangold
25 years in tech and I’m still waiting for that free lunch
sounds
It's an interesting challenge because the device is nominally "under ISP control" but any device located in a customer's home is under the physical control of the customer. The mistrust between the ISP and the customer leads to "trusted" devices where the firmware, including the backup, can be overwritten by the ISP, but then cannot recover if it gets corrupted. And believe me, the corrupt firmware scenario happens a lot due to incompetence.
This is getting attention because it wasn't incompetence this time.
But how does blank, unprovisioned equipment discover a path to its provisioning server? Especially in light of the new "trusted" push, this is an arms race in a market segment such as routers where there isn't any money for high end solutions - only the cheapest option is even considered.
tl;dr: a social and economic problem, likely can't be fixed with a purely technical solution
sidewndr46
This was years ago, but I remember getting cable service activated somewhere in Florida with Bright House. I handed the cable guy some ancient Motorola cable modem I had found at a discount store. The guy took one look at it and said "look dude, if you hacked this thing to get around bandwidth caps it is your problem if you get caught". Apparently that particular modem was pretty easy to modify.
cuu508
Technical solution: customer treats ISP's modem/router as untrusted, and daisy chains their own router after it. Neither malware nor ISP's shenanigans can access the inner network.
utensil4778
Generally the way this works is you have two partitions in your flash chip. One contains the current firmware and the second is a place to drop new firmware. Then the bootloader twiddles a bit somewhere and boots to one partition or the other. There's really nothing stopping you from wiping the previous partition once you're done.
I think some routers still have a single flash partition and the update process here is a lot more hairy and will obviously not retain the previous version after an update.
Apart from attacks like this, there's absolutely no reason to have a protected read-only copy of the factory firmware. 99.9999% of the time, all you would ever need to do to recover from a bad flash is fail back to the previous image.
A proper read only factory image would require an extra ROM chip to store it, as well as extra bootloader complexity required to load from ROM or copy to flash on failure. It's just barely expensive enough at scale to not be worth it for an extremely rare event.
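A toy model of the two-slot scheme described above: the bootloader boots the active slot, and if that slot fails to come up healthy after a few tries, it twiddles the bit and falls back to the other slot. Slot names and the retry limit are invented for illustration; real bootloaders keep this state in a small flash or RTC register.

```python
MAX_BOOT_ATTEMPTS = 3  # assumed retry limit, purely illustrative

def pick_slot(state: dict) -> str:
    """Return which firmware slot ('A' or 'B') to boot next."""
    active = state["active"]
    if state["attempts"][active] >= MAX_BOOT_ATTEMPTS:
        # Active slot keeps failing: fall back to the other one.
        active = "B" if active == "A" else "A"
        state["active"] = active
    state["attempts"][active] += 1
    return active

# Slot A has exhausted its attempts, so the bootloader falls back to B.
state = {"active": "A", "attempts": {"A": 3, "B": 0}}
print(pick_slot(state))  # "B"
```

A successfully booted firmware would then reset its slot's attempt counter, which is the piece malware can subvert if it controls the running system.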
ars
> Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.
Put a switch on the router: flip the switch and the router reboots to a known safe OS that downloads, verifies, and updates the firmware. Then it waits for you to flip the switch back before it will behave as a router again.
Unless attackers manage to steal key-signing codes, and also intercept and redirect traffic to their webserver to send a fake firmware, this seems secure to me. Only downside I'm seeing is that it would be impossible to put in a custom firmware. Maybe add a USB-key firmware option?
yjftsjthsd-h
> It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.
Humor me; how would that work? If anything, I'd expect it to be easier to overwrite the inactive slot (assuming an A/B setup, ideally with read-only root). If you really wanted, you could have a separate chip that was read-only enforced by hardware, and I've seen that done for really low level firmware (ex. Chromebook boot firmware) but it's usually really limited precisely because the inability to update it means you get stuck with any bugs so it's usually only used to boot to the real (rw) storage.
stacktrust
> My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.
There was an open hardware project for SD card emulation, where the emulator could reject writes, https://github.com/racklet/meeting-notes/blob/main/community...
OSS emulation for SPI flash, https://trmm.net/Spispy/
Some USB drives (Kanguru) and SSD enclosures (ElecGear M.2 2230 NVME) have firmware and physical switch to block writes, useful to boot custom "live ISOs" that run from RAM.
Scoundreller
Eventually in the satellite world, card emulators took over and only the receiver was a vector of attack, but then the receivers started getting simulated too.
The nice thing about emulators is that you could intercept calls that you wanted and send your own response while still taking any and all updates. Hard to break when you have more control than they do.
ThePowerOfFuet
> Some USB drives (Kanguru) and SSD enclosures (ElecGear M.2 2230 NVME) have firmware and physical switch to block writes, useful to boot custom "live ISOs" that run from RAM.
For the rest of us, there's Ventoy. https://www.ventoy.net/
aidenn0
I suppose from the point of view of someone with a black-market HU card, DirecTV was an example of an Advanced Persistent Threat. Never thought of it that way before.
Scoundreller
Funny thing about directv is that because they allowed for many manufacturers to build receivers, directv had little control over the receiver firmware, so these counter-counter measures weren’t necessary at the receiver level.
Other providers that rolled out their own receivers had high control over the receiver firmware and once users figured out how to protect their cards, the receivers became an effective attack vector for the lazy.
But that’s where a lot of the public knowledge about JTAGs really started coming to light. Awfully nice of them to put in a cutout at the bottom of the receiver.
luma
I'm not too familiar with customer DSL solutions but for cable modems, that firmware and configuration is managed by the CMTS because technology and configuration changes on the head end may require customer-side changes to ensure continued operation. The config is a pretty dynamic thing as frequency plans, signal rate, etc change over time as the cable plant and head end equipment is upgraded and maintained.
I'd expect that any attempt to lock write enable to the EEPROM would eventually result in your modem failing to provision.
Scoundreller
When your provider cuts you off, that’s when you know that your provider has a legit upgrade you need to take. Take the update and then lock stuff up again.
Of course, I don’t think you’re supposed to make mods to your vendor provided equipment…
In the satellite world, this would happen too: old firmware would be cut off. That’s when you go legit for a while with your sub’d card, take the update, and watch your sub’d channels until the new update could be reverse engineered. And probably have some heroes learn the hard way of taking the update and having some negative impacts that are harder to reverse.
luma
I'm not sure what such an approach would accomplish. If the goal is to prevent the kind of problem seen in the OP (which, let's be real - is a rare occurrence) in order to avoid an unplanned outage, you've instead created a situation where it'll fail to connect far more regularly as you're kicked off the network for not correctly handling the provisioning process. You're trading a rare unplanned outage for a common unplanned outage.
schmidtleonard
> maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”
"First party malware"
ck2
ISPs can send any firmware to a docsis cablemodem, without the user knowing or accepting.
Imagine the damage that could be done by a malicious actor via the ISPs computers.
Or imagine someone being able to hack the system that does that update even without the ISP.
600K users would be a toy, they could do it to 6 Million.
Doesn't even have to be clever, just brick millions of cablemodems.
North Korea or some other government level entity could manage the resources to figure that out.
stacktrust
Do ISPs and modem vendors roll their own OTA infrastructure and signing key management, or contract it out?
russdill
Secure boot schemes can already "fix" this. If a boot image is programmed that isn't signed, the system boots to a write-protected backup image. The system can also, to some degree, block the programming of images that aren't signed, but presumably malware has gained root access.
napierzaza
[dead]
nisa
Article is light on the interesting details. How did they get in? Do these routers have open ports and services by default, answering to the Internet in a meaningful way?
Couldn't someone grab different firmware versions and compare them?
Looks like they are doing what everyone else is doing and using OpenWrt with a vendor SDK: https://forum.openwrt.org/t/openwrt-support-for-actiontec-t3...
What's interesting here is that it's speculated the vendor sent a malicious/broken update: https://www.reddit.com/r/Windstream/comments/17g9qdu/solid_r...
So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?
I'm not familiar with how this is handled in the USA but this looks really strange.
Maybe these machines were bot infested and the vendor pushed an update that broke everything?
Maybe it's like in the article and it was a coordinated attack maybe involving ransom and everyone got told it's a faulty firmware update, keep calm?
Which is also kind of bad; as a customer I'd like to know if there are security incidents.
Has anyone links to firmware images for these devices? Or any more details?
chrisjj
> So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?
We should assume a decision to make no statement was based on the outcome of an investigation.
I wonder how much of the replacement cost is insured. I am guessing none. Leaving the ISP at severe risk of, er, business discontinuity. Another good reason for no statement.
londons_explore
> Lumen identified over 330,000 unique IP addresses that communicated with one of 75 observed C2 nodes
How does Black Lotus Labs global telemetry know which IP communicated with which other IP if they have control of neither endpoint? Who/what is keeping traffic logs?
If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...
rpcope1
I have a friend who works at Black Lotus (and who may have written this blog post, who knows). Black Lotus is part of Lumen, which is Level3 and CenturyLink, one of the biggest (if not the biggest) backbone traffic providers in the world, with a huge percentage of the world's traffic transiting their network, so I think they get direct insight into the traffic, including metrics on it.
vieinfernale
I'm quite disenchanted here. So this means that it is practically impossible to avoid IP fingerprints in any way? Even with Tor, VMs, etc.? You'll always be at the mercy of whoever runs the show unless you own the physical servers.
semiquaver
Of course a backbone provider can directly inspect the source and destination IP addresses of any traffic transiting its network. How could it be otherwise? That's not fingerprinting, it's just pulling fields out of a struct.
Tor does defeat this though. Rather than seeing the true destination of your traffic they see that of a Tor exit node.
londons_explore
But... That tor exit node then sends the traffic onwards... Again via the internet, and the backbone provider can inspect it again.
Seeing a packet heading to a tor exit node and then a similarly sized packet heading onwards a fraction of a millisecond later is a pretty surefire way to spy on individual tor users.
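The correlation attack described above can be sketched very crudely: pair each packet seen entering an exit node with an onward packet of similar size leaving shortly after. The timestamps (in seconds), sizes, and thresholds below are invented for illustration; real traffic-confirmation attacks are statistical and far more robust than this.

```python
def correlate(inbound, outbound, max_delay=0.005, size_slack=64):
    """Naively match (time, size) packet pairs within a short window."""
    matches = []
    for t_in, size_in in inbound:
        for t_out, size_out in outbound:
            # Onward packet must leave just after, at a similar size.
            if 0 < t_out - t_in <= max_delay and abs(size_out - size_in) <= size_slack:
                matches.append(((t_in, size_in), (t_out, size_out)))
    return matches

inbound = [(1.000, 1400), (2.500, 300)]
outbound = [(1.0004, 1380), (7.000, 900)]
print(correlate(inbound, outbound))  # only the 1400-byte packet pairs up
```

This is exactly why padding and batching (which Tor largely does not do for latency reasons) matter against an adversary who sees both sides.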
miohtama
The physical servers do not matter. Someone owns the physical cable.
hedora
This is reasonably standard functionality for backbone routers. They have to parse the TCP headers in hardware anyway, and can track common endpoints with O(1) state.
Of course, on the other end of the spectrum, the NSA has tapped into core internet links, is recording everything it possibly can, and is keeping it forever.
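One way "common endpoints with O(1) state" can work is a bounded-memory heavy-hitters algorithm such as Misra-Gries, which keeps at most k counters no matter how many distinct IPs stream past. This is a sketch of the general technique, not a claim about what any particular router runs.

```python
def misra_gries(stream, k):
    """Track up to k-1 candidate heavy hitters in bounded memory."""
    counters = {}
    for ip in stream:
        if ip in counters:
            counters[ip] += 1
        elif len(counters) < k - 1:
            counters[ip] = 1
        else:
            # No room: decrement everyone, dropping counters at zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["1.2.3.4"] * 6 + ["5.6.7.8"] * 3 + ["9.9.9.9"]
print(misra_gries(stream, k=3))  # the heaviest IPs survive with undercounts
```

The counts are underestimates, but any IP making up more than 1/k of the traffic is guaranteed to survive, which is all a telemetry pipeline needs before escalating to full flow logging.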
oasisbob
> They have to parse the TCP headers in hardware anyway
Backbone routers have no need to implement stateful TCP inspection or deal with the transport layer for TCP, dealing with IP is enough.
sebzim4500
Is that actually feasible with their budget?
If we are generous and assume there's a zettabyte of data a year that they want to store:
At consumer prices, you would have to pay $10B per year just buying hard drives, let alone the operational costs/redundancy.
The budget for all of the US intelligence services is ~$65B. I think if they wanted to actually do what you are describing it would be the single biggest intelligence expense they have and I don't see how you hide that.
choilive
It's not. They don't store the raw IP packet data; instead they store the metadata (this was revealed in a leak a long time ago), like the type in this article (data source and destination, timestamps, size of the data, etc.). The metadata is orders of magnitude less data than the raw packets and likely easily compressible, so I wouldn't be surprised if they keep it all for a decent chunk of time.
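Back-of-envelope arithmetic for the claim above, with rough assumed figures (record size, average flow size) rather than anything leaked or official:

```python
RECORD_BYTES = 50        # assumed: src/dst IP+port, timestamps, byte count
AVG_FLOW_BYTES = 50_000  # assumed: an average flow carries ~50 KB

raw_per_year = 1e21                      # one zettabyte of raw traffic
n_flows = raw_per_year / AVG_FLOW_BYTES  # ~2e16 flows
metadata_per_year = n_flows * RECORD_BYTES

print(f"metadata: {metadata_per_year / 1e15:.0f} PB")        # 1000 PB (1 EB)
print(f"reduction: {raw_per_year / metadata_per_year:.0f}x")  # 1000x
```

Under these assumptions, a zettabyte of traffic shrinks to roughly an exabyte of flow records before compression, which moves the storage bill from implausible to merely expensive.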
Hikikomori
Yup. Pretty much all ISPs collect sFlow/NetFlow from their devices to be able to debug problems or detect DDoS.
luma
Presumably, Windstream is logging customer traffic as a matter of course. It might just be metadata (NetFlow/sFlow/IPFIX/etc), but one way or the other the only way they have this information is if they are recording and retaining it.
Hopefully this is made clear in Windstream's contract terms.
londons_explore
These aren't likely 'top flows', since the C&C data will probably only be a few kilobytes.
So to capture this, you at a minimum need to be logging every TCP connection's SRC IP and DST IP.
And they seem pretty confident in their worldwide map and fairly exact counts, so I would guess they must have probes covering most of the world doing this monitoring, and it likely isn't just 1-in-a-million sampling either...
luma
For whatever it's worth, what you describe above is specifically what IPFIX/NetFlow etc does. Not full-take, just metadata for each flow such as the time, src/dst ip/port, tcp sequence #, octets sent, etc.
This is common in datacenters for traffic and flow analysis for troubleshooting, capacity planning, and the occasional incident response.
More details: https://en.wikipedia.org/wiki/IP_Flow_Information_Export
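A minimal model of what such a collector stores: one record per flow, keyed by the 5-tuple, accumulating counts and octets but never the payload. Field names follow the comment above; this is a sketch, not the IPFIX wire format.

```python
from collections import defaultdict

# One record per flow, keyed by the 5-tuple; payload is never kept.
flows = defaultdict(lambda: {"packets": 0, "octets": 0})

def observe(src, sport, dst, dport, proto, octets):
    """Fold one observed packet into its flow record."""
    key = (src, sport, dst, dport, proto)
    flows[key]["packets"] += 1
    flows[key]["octets"] += octets

observe("10.0.0.1", 51000, "203.0.113.5", 443, "tcp", 1400)
observe("10.0.0.1", 51000, "203.0.113.5", 443, "tcp", 900)
key = ("10.0.0.1", 51000, "203.0.113.5", 443, "tcp")
print(flows[key])  # {'packets': 2, 'octets': 2300}
```

Even this toy version is enough to answer "which IPs talked to a given C2 node", which is the kind of question the article's telemetry answers.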
kbenson
> If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...
You're supposed to be protected by the fact that you're going through multiple nodes before exiting TOR, and traffic should be mixed. Can you find some streams if you have most/all the nodes within your network and can analyze the traffic? Probably some, but the more traffic a node handles the harder it would be.
There is a simpler approach though, which is to just run exit nodes.[1]
1: https://en.wikipedia.org/wiki/Tor_(network)#Exit_node_eavesd...
Thorrez
What do you mean by just run exit nodes? The linked section says that just running exit nodes allows the exit node to steal data sent over plain HTTP. Is that actually a problem? Who's using plain HTTP? The linked section says just running exit nodes doesn't allow deanonymization.
kbenson
I didn't mean to imply it was exactly analogous, just a lot simpler, and there is a lot to be gleaned from that data. In fact, I would assume the more data you pass over it (e.g. if you proxy all your traffic across TOR) the easier it is to make assumptions about the source, unless it actively splits it across different exit nodes, which seems like it could be problematic in a lot of cases. If you have a DigitalOcean personal server you run some services on and you access it through TOR... well you might have just made the job of anyone trying to deanonymize you much easier.
shrubble
Lumen (merger of Level3 and CenturyLink) sells services to a large part of the Internet and may provide a lot of the backhaul for Windstream. In which case they would be in the path for monitoring.
codexon
Lumen is a tier 1 network so a lot of traffic passes through them. They can man-in-the-middle the traffic and see the TCP packets going through their network.
perlgeek
"They can man-in-the-middle the traffic" could be interpreted as them having to actively do something to become the man in the middle, when they already are.
It's likely they just do sampling (think netflow) to get some statistics over the data that's already transiting their network.
ronnier
For a few years now I only buy a small x86 box with dual nics and run OpenWRT. I love it. It's open source, lots of support, good community. It supports wireguard. Latest version allows you to even run docker containers.
jeffbee
These are DSL modems, though. At some point there has to be some interface between the WAN side, be it DSL or coax or fiber, and your network. Even DSL adapters for PCIe slots are just systems on a stick, coming with all the features and bugs of a "router" but without the enclosure.
bauruine
The interface to DSL or coax only has to be a layer 2 bridge. You can put many modems into bridge mode so they don't do any layer 3 (IP) at all. For fiber, if you don't use PON at least, even a standard SFP(+) will do.
ronnier
You can tell I didn't read the article :)
fckgw
Since it were modems that were affected, OpenWRT would do nothing to protect you.
nisa
OpenWrt works pretty well on some modems. It's not straightforward, as the VDSL firmware often can't be redistributed, but people use it on AVM Fritzbox devices. LTE devices are also supported. Not sure about cable modems, probably not. It's involved enough that for most users, even technical ones, it's not an alternative.
bauruine
The article says that the "modems" affected are the Sagemcom F5380 and ActionTec T3200, which from a quick search look like full-fledged CPEs, aka routers with a web interface and NAT, WiFi and all the stuff. They also write about Censys and banners, so it looks like they had their web interface exposed to the Internet.
When I hear people say they use OpenWRT I assume they have their modem in bridge mode so that it doesn't even have an IP. OpenWRT would save you in that case.
ronnier
Ah, that's what I get for not reading the article.
hedora
I’ve got an old PC Engines board with openbsd on it. It’s been remarkably trouble free for something like 8 years.
rpcope1
It's a huge shame Pascal basically stopped building those boards since AMD and Intel wouldn't play ball. I'd really like to have something like an APU with 10G connectivity and an x86 processor that was not built and designed in China, running open firmware. With PC Engines gone now, I think you're basically out of luck.
MisterTea
My APU2 died a few years back and I haven't found a decent replacement. Instead I use a Lenovo ThinkStation M720q off eBay with the PCIe x8 riser and an Intel 2-port 10GbE card. You could also fit a 4-port 1Gb card. The thing idles at 18-19W, which is less than the big white rectangular Verizon "trash can" router, which idled at 22W and has horrible WiFi (I use a Unifi AP-LR.)
ziml77
The PC Engines boards are great for that. I've got mine running OPNsense (FreeBSD) and it hasn't required any fuss.
glitchcrab
Another very happy PC Engines user here, I've been running pfsense and more lately opnsense on one for over 10 years now. It has never missed a beat.
chrisjj
> Latest version allows you to even run docker containers.
What could possibly go wrong...
Kiboneu
Well if you backdoor 600k routers and introduce a firmware bug with one of your patches, this is what happens.
Can't they just stage their updates? Surely, malware authors and users must be too cool for adopting standard prod practices.
perlgeek
> Surely, malware authors and users must be too cool for adopting standard prod practices.
Their economic pressures are just different: it's not their own hardware that they're bricking, nor are they likely to be held liable for it.
bostonpete
What is the significance of the article/post title...?
thamer
This attack happened a few days before Halloween 2023 (pumpkins), with a large drop in the number of devices connected to the Internet – like how an eclipse suddenly brings a period of darkness, maybe?
This is just my interpretation, I also found it cryptic.
ajb
Yeah, didn't this have a more comprehensible title a few hours ago?
pragma_x
For anyone else that was confused by the headline, this is about the destruction of 600,000 individual (small) routers. Not routers that are worth $600,000 (each or combined).
thimkerbell
@dang, if there are karma points at HN, you could add some for submitters who improve upon the oft-execrable original clickbait headlines/titles. (Here, I see present verb tense being used for an incident from October of last year.)
thimkerbell
You could also subtract points for submissions whose titles appear to advocate causing harm.
sgtaylor5
related article from Ars Technica: https://arstechnica.com/security/2024/05/mystery-malware-des...
skilled
That doesn't count as related, as it is a rewrite of the original source. Just saying, it adds no details of its own.
thecosas
It does include information which the original article specifically excluded from mentioning: the ISP involved.
"Windstream" is mentioned in the first paragraph of the Ars article, while the Lumen post makes references to "a rural ISP" throughout the post.
skilled
So say that instead of "related" and don't make people waste their time reading the same information.
steelframe
For my home network I've purchased a networking appliance form-factor computer, which is basically a regular old i3 with VT-x support in a fanless case and four 2.5GbE NICs. I've installed my favorite stable Linux distro that gets regular automated security updates in both the host and a VM, and I've device-mapped 3 of the NICs into that VM. The remaining NIC stays unattached to anything unless I want to SSH in to the host. I'm running UFW and Shorewall in the VM to perform firewall and routing tasks. If I want to tweak anything I just SSH in to that VM. I have a snapshot of the VM disk in case I mess something up, so I can trivially roll back to something that I know works.
I've purchased a couple of cheaper commercial WiFi access points, and I've placed them in my house with channels set up to minimize interference.
Prior to this I've gone through several iterations of network products from the likes of Apple, Google, and ASUS, and they all had issues with performance and reliability. For example infuriating random periods of 3-5 seconds of dropped packets in the middle of Zoom conferences and what not.
Since I've rolled my own I've had zero issues, and I have a higher degree of confidence that it's configured securely and is getting relevant security updates. In short, my home network doesn't have a problem unless some significant chunk of the world that's running the same well-known stable Linux distro also has a problem.
fckgw
This attack affected modems so even with all that fancy hardware you would still be dead in the water.
tmoertel
Out of curiosity, which networking appliance form-factor computer did you purchase?
steelframe
It's a HUNSN RJ36. It came preloaded with pfSense, as many of them do, but I immediately made a full disk backup and then wiped and installed with a Linux distro because, well, "This is Linux. I know this." You're going to find a lot of people who strongly prefer one over the other, and you may find you prefer pfSense over a "do-everything-yourself" Linux distro if you give it a shot. There are also Linux distros that are targeted for network appliances, and setting them up (correctly) can be easier if the distro is built for the task.
There are quite a few machines in this category, and what's in stock at any given time tends to rotate relatively quickly. I think the one I bought might still be available, but you will want to check to see if there is something with specs that will work better for your use case.
xacky
Reminds me of the CIH virus. It's only a matter of time for ransomware authors to start using firmware blanking as a new technique.
jslakro
Useful recommendations from the canadian government
https://www.cyber.gc.ca/en/guidance/routers-cyber-security-b...
> These reports led us to believe the problem was likely a firmware issue, as most other issues could be resolved through a factory reset.
My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.
That’s what we used to do on, ahem, satellite receivers, 20 years ago and maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”.
Or at least monitor them for updates and light up an indicator when an update happens; if it were my own equipment, I'd know whether it should go off or not.