geerlingguy
I ran all the tests at P 2 and P 4 to verify cpu cores weren't hindering the speed, but got the same result (within 2%).
Modern A/M cores and Zen 5 cores individually have enough grunt to handle at least 10 Gbps through USB without a hitch.
On my Pi's and N100 mini PCs, I do have to use threads to hit more than about 5-6 Gbps. And testing a 25 Gbps adapter I'm testing separately, I had to use multiple threads to get my Ampere CPU to measure speeds greater than 10 Gbps.
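An editor's sketch (not from the thread) of how such a single-stream vs. multi-stream sweep might be scripted, assuming iperf3 is installed and a server (`iperf3 -s`) is already running; the address below is a placeholder:

```python
# Hedged sketch: sweep iperf3 parallel stream counts (-P) and report the
# aggregate throughput, mirroring the P 2 / P 4 tests described above.
import json
import subprocess

def parse_gbps(iperf_json: str) -> float:
    """Aggregate receive throughput in Gbps from `iperf3 --json` output."""
    result = json.loads(iperf_json)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

def run_iperf(server: str, streams: int, seconds: int = 10) -> float:
    """Run one test with `streams` parallel TCP streams and return Gbps."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "--json"],
        capture_output=True, text=True, check=True,
    )
    return parse_gbps(out.stdout)

def main() -> None:
    # "192.0.2.10" is a placeholder; point it at your own iperf3 server.
    for p in (1, 2, 4):
        print(f"-P {p}: {run_iperf('192.0.2.10', p):.2f} Gbps")

# main()  # uncomment with a live iperf3 server reachable
```

If single-stream and multi-stream numbers match within a couple of percent, as reported above, the CPU core is not the bottleneck.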
dgacmu
Most modern ethernet chips, including those used on USB ethernet devices, have adaptive interrupt coalescing (or moderation) for network I/O, which renders this likely not as big a deal as it once was. There will still be limits on packets/sec/core but it's not because of interrupts.
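A rough way to see coalescing in action is to sample the NIC's rows in /proc/interrupts on Linux before and after a transfer; with adaptive moderation the interrupt delta stays far below the packet delta. Editor's sketch; the interface name "eth0" is an assumption:

```python
# Hedged sketch: tally a NIC's interrupt counts from /proc/interrupts.
def irq_total(interrupts_text: str, device_substr: str) -> int:
    """Sum the per-CPU interrupt counts on lines mentioning the device."""
    total = 0
    for line in interrupts_text.splitlines():
        if device_substr in line:
            fields = line.split()
            # fields[0] is the IRQ number ("24:"); per-CPU counts follow
            total += sum(int(f) for f in fields[1:] if f.isdigit())
    return total

def sample(device: str = "eth0") -> int:
    with open("/proc/interrupts") as f:
        return irq_total(f.read(), device)

# Sample before and after a transfer, e.g.:
#   before = sample("eth0"); ...run iperf...; after = sample("eth0")
#   print(after - before, "interrupts during the run")
```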
dd_xplore
Or better, use the original iperf (also known as iperf2); it has multithreading support
fulafel
A single threaded benchmark better represents real performance, I'd argue. 10 Gbps is only 1.2 GB/s after all and few applications use parallel streams.
stonegray
I think the intention is to measure the adapter itself independent from the CPU/overall system.
Besides, I can’t think of a typical single threaded application that would use those data rates, can you?
iknowstuff
Steam downloads
mort96
All these USB version names. I used to know what they all meant, but then the USB IF went ahead and renamed them all and made a bunch of versions have the same name and renamed some versions to have the same name as the old name of other versions.
I have absolutely no idea what anyone means when they say USB 3.2 gen 2x2. I used to know what USB 3.2 meant but it's certainly not that.
adrian_b
Unfortunately "USB 3.2" is just a version of the standard, which does not give any information about the performance of a USB port or device.
USB 5 Gb/s = USB 3.2 gen 1, available on Type A or Type C connectors (or on devices on a special extended micro B connector)
USB 10 Gb/s = USB 3.2 gen 2, available on Type A or Type C connectors
USB 20 Gb/s = USB 3.2 gen 2x2, available only on Type C connectors
Moreover, "5 Gb/s" is a marketing lie. So-called 5 Gb/s USB has a usable speed of 4 Gb/s (the same as PCIe 2.0). On the other hand, the 10 Gb/s and 20 Gb/s modes do deliver their claimed speeds, so 10 Gb/s USB is 2.5 times faster than 5 Gb/s USB, not 2 times faster.
10 Gb/s USB and Ethernet have truly the same speed, but the USB overhead is somewhat higher, leading to a somewhat lower effective speed. However, the speed shown in TFA, not much higher than 7 Gb/s, seems too low; it may be caused by the Windows drivers. It is possible that on other operating systems, e.g. Linux, one can get a higher transfer speed.
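Editor's sketch of the line-coding arithmetic behind those figures: the 5 Gb/s mode uses 8b/10b coding (10 symbol bits carry 8 data bits), while the 10 and 20 Gb/s modes use the much lighter 128b/132b coding (treating its ~3% overhead as negligible gives the 2.5x figure):

```python
# Hedged sketch: effective data rate after line-coding overhead.
# Protocol (packet/framing) overhead is not included.
def effective_gbps(line_rate: float, data_bits: int, symbol_bits: int) -> float:
    """Line rate in Gb/s scaled by the line code's data/symbol ratio."""
    return line_rate * data_bits / symbol_bits

usb5 = effective_gbps(5, 8, 10)       # 8b/10b:    4.0 Gb/s usable
usb10 = effective_gbps(10, 128, 132)  # 128b/132b: ~9.7 Gb/s usable
print(f"5 Gb/s mode carries {usb5} Gb/s; 10 Gb/s mode is {usb10 / usb5:.2f}x faster")
```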
mbreese
The fact that you had to list all of the versions and speeds at the top of your post is illustrative of what the parent was trying to say. We can all look up what speed is associated with what version, but it’s not exactly a consumer friendly experience.
adrian_b
A few computer manufacturers do the right thing and they mark the speed on the USB ports, removing ambiguities, for example ASUS does this on my NUCs and motherboards.
Unfortunately, there are too many who do not do this, even among the biggest computer vendors.
hypercube33
That's just port speed; charging and other features are all a crapshoot on USB, making Thunderbolt the sane member of the "USB-C" family, since it requires a defined set of capabilities (speed, charging wattage)
repeekad
This is not what's anti-consumer. Technical specifications can be confusing, but it's the cable companies at Best Buy selling "gold plated", "HD ready", "braided fiber" and other BS that are anti-consumer. If you're thinking about USB versions, you're far from the normal consumer
mort96
USB 3.2 used to be what we now call USB 3.2 gen 2x2, didn't it? So it used to be that the version dictated the max speed: 3.0 was 4Gb/s, 3.1 was 10, and 3.2 was 20, right?
But then they decided to memory hole that and now USB 3.0 and USB 3.1 are also USB 3.2 and USB 3.2 is called "generation 2x2", whatever that is supposed to mean
It makes no sense anymore. It used to be quite simple.
Aerofoli
No, they just renamed things when new standards were released (3.1 and 3.2). 20Gbps wasn't possible before 3.2, and it was called Gen 2x2 at the time of release.
5 and 10 Gbps were renamed, though.
5 Gbps first was USB 3.0, then 3.1 Gen 1, then 3.2 Gen 1.
10 Gbps first was 3.1 Gen 2, then 3.2 Gen 2x1.
3.2 Gen 1x2 is also 10 Gbps, but physically different (two 5 Gbps lanes rather than one 10 Gbps lane).
eqvinox
> Moreover, "5 Gb/s" is a marketing lie.
It's not a lie, the b just stands for baud not bit ;-)
adrian_b
That is technically correct, but "b" has never been an accepted abbreviation for baud (which is Bd), and the naming of the first versions of the PCIe, USB 3 and SATA speeds, which was done by Intel, was obviously in contradiction with industry standards and intended to confuse customers.
Previously to these standards promoted by Intel, the 1 Gb/s Ethernet used the same encoding and it was rightly called by everybody "1 Gb/s", not "1.25 Gb/s", because the gross bit rate has absolutely no importance for the users of a communication standard.
Only Intel invented this marketing trick, calling PCIe 1.0 and 2.0 as 2.5 and 5 Gb/s, instead of 2 and 4 Gb/s, and similarly for USB and SATA, where e.g. SATA 3 is called 6 Gb/s, but its speed is 4.8 Gb/s.
To be fair, what Intel did was not unusual, because in the computing industry there has been a long tradition of using fake numbers in marketing for various things, like scanner or video camera resolution ("digital" zoom, "interpolated" resolution), magnetic tape capacity ("compressed" capacity), and many others.
ssl-3
Oh, it's fine.
The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
It probably looks the same no matter what, and the cable selected to use probably also won't be very forthcoming with its capabilities either.
(Be sure to drink your Ovaltine.)
wongarsu
The USB A connector stayed the same between USB 1, 2 and 3. Yet most manufacturers voluntarily distinguished them by giving USB 1 and 1.1 a white insert in plug and port, USB 2 a black insert, and USB 3 a blue one.
This was neither standardized nor enforced, yet it worked remarkably well in the real world
Then we decided to just have no markings at all on USB C cables. On the ports at least we occasionally get little thunderbolt or power symbols
mbreese
The exterior of the USB A connector stayed the same. The number of pins increased when we went from USB 2 to 3. So, even in this case, it’s slightly more complicated. The colors helped because the capabilities were very different between the ports. But when the USB IF increased the number of options (and reduced the size of the connector), different colors became impossible to do.
The problem is that there are too many uses for one connector. But this is what we wanted - a reduced number of standardized connector/power options.
tomchuk
… and an M1 MacBook will source 5V/3A all day long to a non-PD-negotiated sink. Somewhere between the M1 and M3, Apple decided to buy into USB-IF compliance and limit it to 500mA.
Has led to some very embarrassing “works on my computer” situations on prototype boards shared with my EE colleagues (I’m a software guy who dabbles in hardware when I need to)
eigen
I think the Rd pulldown options are for 0.9/1.5/3A without PD negotiation.
ssl-3
It doesn't take PD negotiation to get 5v, 3A from a compliant source. A 5.1k resistor or two (quantity depends on placement in the overall circuit) is sufficient.
This may be a matter of semantics, but I can't bring myself to call a resistor a negotiator. They only do one thing and they're very resistant to other options. :)
With nothing connected to the CC line(s) at all, then there should be no output voltage on Vcc. It shouldn't be 5v @ 3a, or 500mA, or anything else -- it should be ~exactly 0v, and therefore also 0a.
A resistor or two tells the power source what we want. Without it (or some, you know, actual PD negotiations), we get nothing.
---
A careful reader will note the repeated quantity distinction. Let me explain that.
Every USB C socket has both CC1 and CC2 pins. They're on opposite sides of the connector and get used for sorting out PD, and for detecting the cable's connector orientation (if/when that matters).
But a cromulent USB C to USB C cable can have just 1 CC wire, and that's OK. It works; it isn't even wrong. To get such a cable to coax 5v from a 5v/3a source and get power for a prototype widget on Gilligan's Island, with the cable already cut in half to get at the wires inside: Wire up power and ground to your prototype. And put a 5.1k resistor between that single CC wire and ground. Voila: We've requested 5v at up to 3a.
Or: If we're being a bit more proper and snooty and want to do it The Right Way, and we actually have a USB C jack to prototype with, then that more-ideally takes two 5.1k resistors; one to pull CC1 to ground, and another to pull CC2 to ground. This does the same thing, but it does it on the connector side of things instead of the daunting no-mans-land of wires. Only one of these resistors will ever be used at one time.
Or: If we have a USB C jack and can only scrounge up one 5.1k resistor (maybe we only have a single #2 pencil to whittle down to 5.1k of resistance), or we're being particularly lazy, then that's OK too. Pick CC1 or CC2 and put 5.1k between there and ground. It will work with the cable plugged in one way, and it won't work with the cable flipped 180 degrees. That can be enough to get a thing done for the moment or whatever. (There's no solution that is as permanent as a temporary one.)
---
These are some of the things I learned when I was in the field and needed a 5v, >2.5a power supply to replace one that had died. I said to myself, "Self, just go over to Wal-Mart and get a 3a USB C power brick that comes with a cable, cut and splice that cable to fit the widget that needs power, and call it done. If it dies in the future, replacing it will be intuitive and fast."
So dumb ol' me went to Wal-Mart and bought exactly that, and I quite confidently set forth with the splicing.
This did not work. At all.
And that was a harsh rabbit hole to dive into, but it was ultimately fine. After I got back that evening I soldered a 5.1k resistor (of 1206 SMD form) mid-span between the CC wire and ground, and finished the adapter-cable quite neatly with some adhesive-lined shrink tubing.
Doing it this way got the customer's gear working faster than ordering the "right" parts and waiting for them show up would have, and it still works. That's all been a few years ago now; I consider it to be as permanent as anything ever really is.
theandrewbailey
This quagmire (along with the version names) is why I call it the Unintuitive Serial Bus.
reaperducer
The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
It gets even worse.
I now have two cheap Chinese gadgets (a Cheki photo printer and a tire inflator) that have USB-C ports for charging, but will only charge with the cable that came with the gadget, the other end of which is an old-style USB-A plug.
It seems that USB-C sockets are cheap enough parts to use them for everything, even if the manufacturer isn't going to put any actual USB circuitry behind them.
Edit: Three. I forgot about my wife's illuminated makeup mirror.
anamexis
I keep a few of these around to deal with this: https://www.adafruit.com/product/6323
Very annoying though! The devices are just missing a couple resistors which is probably less than a cent on the BOM.
the__alchemist
Note: if it just needs 5V power (like many microcontroller-focused devices), USB C is convenient, because chargers and cables are ubiquitous. And they all (with exceptions like the one you mentioned) support 5V DC power.
Bonus: you can enable USB 2.0 data transfer as well, for firmware updates, computer interfaces, etc.
So: cheap/ubiquitous part, everyone has cables + AC adapters for their local plug: I think it's a great default power connector.
ziml77
Ah that's a fun misuse of USB ports. The companies will often even dodge issues with the USB-IF by labeling the ports as Type C and letting the customer's mind fill in the word USB.
I wish these devices would just use barrel jacks, labeled with the voltage and polarity. But these manufacturers know that the USB-C port weighs into buying decisions (and they know that most people have zero clue about the difference between a physical port and the electrical/protocol specs).
nfriedly
I repaired a device like that a while back - it only took two half-cent resistors and a half-assed soldering job to make it compatible with standard USB-C cables and chargers: https://www.nfriedly.com/techblog/2021-10-10-v90-usb-c/
ssl-3
Yeah, they got cheap. They either got cheap with the BOM, or they got cheap with the QC and never tested it with USB C power sources, or they got cheap with the spec and it's working as-designed.
It just takes a couple of insignificant resistors and a USB C socket that brings out CC1 and CC2 to pads on the board to do it right. I wrote about how that works in a sister comment if you want to read more.
But those devices will charge/work just fine with any bog-standard USB A to USB C cable, alongside any decent power brick with USB A outputs. It doesn't have to be the exact cables they came with.
It's annoying in the "you cheap bastards" sort of way, but regular A to C cables will work.
(If it's really important to you, then it can be possible to hack in a couple of 5.1k resistors inside the cheap-bastard devices and make them work with regular USB C power bricks and regular USB C to C cables. The resistors will tell the source to provide 5v at up to 3A. All compliant USB C cables are required to safely pass 3A.
The mod can range from very easy, to somewhat problematic, to "fuck this, I quit". In reality, there might already be pads on the board to connect CC1 and CC2 to ground; just solder in the resistors. Or, the pins are probably brought out at the connector itself, so it can be bodged with some extra wire.
But reality is a cruel mistress and not all available PCB-mounted USB C connectors expose CC1 and CC2 at all, although in a sane and pure world absolutely all of them should.)
[tl;dr, just keep an A to C cable with the devices, always have USB A where they get used, and forget about it. The next round of cheap stuff will be better, worse, or the same, and that's a future problem.]
PaulKeeble
USB is just a complete mess. I don't mind so much ports having different capabilities if they are well documented in the specification sheets of the hardware because then at least I can find out what they are capable of, but alas it never seems to be the case. Its very hard to work out whether a port can do Displayport and to what extent/performance or its true power capability or just its real data transfer speed. More often than I like I have just hoped that something works. Anything above 5W charging and 5gbps transfer is optional.
jasomill
I have an Intel NUC where 10 Gbps devices can run faster when plugged into the 3.1 Gen 2 ports than the Thunderbolt 3 ports under NVMe load, due to the former having dedicated PCIe lanes and the latter sharing the PCH lanes with the M.2 slots, which could be highly relevant if I were doing heavy disk I/O over a 10 Gbit Ethernet adapter.
This is more than a mild annoyance in the case of faster Thunderbolt devices like eGPUs, especially since, in addition to the 2 PCIe lanes dedicated to the USB ports and a third dedicated to an SD card slot, an additional five lanes are unused.
IIRC there was a reason at one point that Intel insisted on connecting Thunderbolt controllers through the PCH, but I don't understand why they didn't at least use four lanes for one of the M.2 slots. Sure, they may have had to move the SD card slot due to configuration limitations, but in what world is SD card performance more important than NVMe performance?
reaperducer
USB is just a complete mess.
You have to go out of your way to make Apple's Lightning connector look sensible, but somehow the USB consortium has managed to do it.
ben-schaaf
To be fair, lightning only looks sensible because it never did anything other than USB2 and power delivery.
drcongo
I miss lightning. Cleanable with a toothpick and some compressed air. The USB-C port on my current iPhone is now compacted with pocket lint and I can't seem to clean it out.
TomatoCo
Going by Fabien Sanglard's cheat sheet (who I trust uncritically) https://fabiensanglard.net/usbcheat/index.html it looks like 3.2 actually is a broader term than expected. Maybe there was some awful attempt at backwards compatibility? Or forwards?
Someone1234
Great site, thanks for the link. But holy heck, that "Also Known As" column is complete chaos. What the heck is wrong with the USB Consortium, do they have brain damage?
Also, according to that table, "USB4 Gen 2×2" is a downgrade on "USB 3.2 Gen 2x2", since the cable length is 0.8m instead of 1m for the same speeds. Which is uhh unexpected.
numpad0
It allows manufacturers to clear old stocks of cables by rebranding them as latest products.
USB 1+2/3/4 are basically unrelated standards under the same USB umbrella. USB4 especially is just Thunderbolt/PCIe x4 with features. If Betamax was branded as "VHS 2.0" instead of being a separate standard it would have been felt similar to the USB4 situation.
wpm
Yeah, what I would give to have been a fly on the wall in the room where they decided to roll with such an obviously terrible and stupid naming scheme. Did anyone protest? Did anyone boldly dissent? Or did they all really just sit around and pat themselves on the back?
lpcvoid
I really, really wish somebody would explain to me what the USB consortium was smoking, yeah. I cannot explain it.
BearOso
The cable length is only for the spec. You can get longer cables that achieve the higher bandwidth, they're just not certified for that.
Latty
To be fair they seem to have taken this often-stated criticism on board. USB 4's naming is more sensible, and they've pushed the simple data speed & power labelling that makes it easier to work out what you need.
usagisushi
Yeah, now it's USB4 Version 2.0 / USB 80Gbps / USB4 Gen4.
ac29
According to wikipedia the current marketing names for USB are just their speed: USB 5/10/20/40/80 Gbps. No version numbers or anything else.
robotnikman
In my experience, its just best to stick with Thunderbolt when you want to make sure you are getting the best speed for external devices that require it (external SSD's, Graphics Cards, Network adapters)
Much easier and reliable than navigating the confusing sea of USB standards
jasomill
While I generally agree, there are still corner cases:
As I mentioned above, a Thunderbolt port can end up with less dedicated bandwidth than a 10 Gbps USB port due to PCIe lane configuration.
Thunderbolt 3 only provides 22 Gbps PCIe bandwidth even if only a single device is connected.
Apple's TB2-to-TB3 adapter will connect any TB2 device to any TB3 host, and any TB3 (not USB) device to any TB2 host unless it's bus powered, in which case you need to daisy-chain a second TB3 device with two ports to supply power.
While Thunderbolt 4 and USB 4 PCIe are largely interchangeable, and while Thunderbolt 4 devices are backwards-compatible with Thunderbolt 3 hosts, USB 4 PCIe devices are not required to support Thunderbolt 3 hosts.
post-it
I will say, casual users don't really care. Pretty much any combination of a wall plug and a cable will charge a phone at acceptable speeds, and that's all 99% of people need.
compounding_it
In all this, people now just go to the Apple Store and buy a cable for their Apple device. This confusion benefits such vendors, and now they sell $1 cables at an absurd profit.
aggregator-ios
For those that read the article and are still confused (as I was) about what Apple hardware would give you the full 10GbE speeds:
- 10GbE Thunderbolt adapter is still the best. Full symmetrical 10GbE on laptops as far back as the 2018 MacBook Pro 13" (Intel) and every laptop since. Including the Airs starting with the M1 chip (Not sure about Neo).
- No Apple hardware supports the 3.2 v2x2 standard (20Gbps) and your connection will be downgraded to 10Gbps on these RTL8159 chips. Because of processing overhead, you will only get 5-7Gbps of total Ethernet throughput.
- Upgraded Mac Mini or Apple Studio base models have builtin 10GbE ports
For now, thunderbolt adapters are still the most reliable 10GbE for Apple laptops.
wolvoleo
> 10GbE Thunderbolt adapter is still the best. Full symmetrical 10GbE on laptops as far back as the 2018 MacBook Pro 13" (Intel) and every laptop since. Including the Airs starting with the M1 chip (Not sure about Neo).
The neo doesn't have thunderbolt at all so no, that won't fly.
bdavbdav
Luckily I suspect the intersect on the Venn diagram isn’t huge for Neo buyers, and those wanting / needing 10gigE
aggregator-ios
Yup.
aggregator-ios
Thank you, I was suspecting the same but was not sure.
GeertJohan
A Framework expansion card was also announced this week. https://frame.work/nl/en/products/wisdpi-10g-ethernet-expans...
topspin
That link notes:
"Card supports 10Gbit/s and 10/100/1000/2500/5000/10000Mbit/s Ethernet"
Nice to see; some NICs are shedding 10/100 support. Apparently, it's not necessary to do this, even in a low cost device.
userbinator
Low-cost devices are exactly where 10/100 is still widely used. On PCs, it's a common power-saving mode.
lostlogin
TVs too.
hsbauauvhabzb
For those of us who don’t know, how does it save power vs a 1gbe running at low throughput?
Tade0
100 mode saved me once when I really really really needed to have a connection in that moment, but the ethernet cable glued to the wall that I was using had only three out of eight wires even functioning.
winter_blue
Don’t we need at least four for 100 Mbps?
jcalvinowens
I also appreciate the 10/100 support. I recently needed it for some old voip equipment, and it was shockingly difficult to find an SFP+ module that worked in my 10G switch and supported 100mbps.
lucb1e
Low cost? The link mentions no price, only a "notify me" button as far as I can see. Does it show a(n estimated) price point for you somewhere?
topspin
Low cost, as in not data center/server grade hardware.
zamadatix
$99 when I look at the entry in https://frame.work/marketplace/expansion-cards
junon
100 is needed for embedded stuff; dropping it would render a lot of devices unusable (Wiznet chips are popular and are 100-only). That'd suck.
Gigachad
IKEA smart home hub is also 100mbit.
moffkalast
Lots of industrial sensors and devices only do 4 wire 100BASE-TX so if there's no fallback to that it would be a paperweight in those situations.
rleigh
There are plenty of embedded chips which only provide RMII. No RGMII or alternatives.
the_mitsuhiko
That hasn't been true on switched networks in probably 20 years or so.
hnlmorg
Isn't that only relevant for network topologies that rely heavily on broadcasting to multiple nodes, e.g. token ring, WiFi, and powerline adapters?
For regular Ethernet, the switch keeps a table of which MAC addresses are behind which port, and can send frames to each port at whatever speed that port negotiated, without degrading service for the other ports.
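Editor's sketch (purely illustrative) of the forwarding behaviour in question: a switch learns which source MAC address sits behind which port as frames arrive, and floods only when the destination is still unknown:

```python
# Hedged sketch of a learning switch's forwarding table.
from typing import Dict, List

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: Dict[str, int] = {}  # MAC address -> port number

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> List[int]:
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port  # learn (or refresh) the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known: single port, no flooding
        # Unknown destination: flood to every port except the ingress one.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst, flooded: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # aa:aa learned on port 0: [0]
print(sw.receive(0, "aa:aa", "bb:bb"))  # bb:bb now known: [1]
```

Because each port link is negotiated independently, a 100 Mbps device only slows traffic addressed to it, not the rest of the switch.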
vardump
We have switches now, hubs just don't exist anymore. Switches are not affected by some devices having a lower speed.
oliwarner
Is that really true? If so, is there a saner way to handle this than upgrading all the things to 10GbE? Like a PoE ethernet condom that interfaces with both network and devices at native max speeds, without the core network having to degrade?
HHad3
That is complete nonsense and not how switched networks work.
retired
The author only got 7Gbps with a Framework 13 and a 10G adapter from the same brand (WisdPi).
If this is the same adapter in a different housing, will it also be limited to 7Gbps?
geerlingguy
I'm guessing different mainboards could offer better USB port support for Gen 2 2x2, but right now the Ryzen AI 13" chips at least top out at USB4 / 3.2 Gen 2x1
realxrobau
Are there any that actually have an SFP+ port? That's all I want. No one wants to use 10g ethernet when DACs are cheaper than cat7, and you can just change it up to a $7 multimode when you need longer runs.
Aurornis
> No one wants to use 10g ethernet when DACs are cheaper than cat7,
You don't need Cat7 for 10G.
Cat6 is spec compliant up to 55m. Cat6a to 100m, which is the same as Cat7.
If you're doing short runs like to a nearby switch, good Cat5e works fine in practice. I've run 10G over Cat5e through the walls for medium runs without errors because it's all I had. It works in many cases, but you're out of spec.
I use DAC where I can, but most people just want something they can plug into that RJ45 port in their wall that goes to the room down the hall where they put their switch.
There are several SFP+ to Thunderbolt/USB4 adapters on the market. Not cheap, though.
sixdonuts
Yep, 10gb over copper is not power efficient so any savings you get from getting a cheap 10gb switch will just go to your power bill. Most cost effective and flexible is a used 25gb switch. Most 25gb switches can do 1/10/25gb. 10gb networking has been dead for over 10 years.
rkagerer
Interesting observation about power use. How close do you think we are to it being practical to wire your whole home with fiber instead of CAT6 or whatever? If you're providing all your own equipment, are willing to purchase a high-end splicer for maintenance, etc.
For laptops I assume you need USB/Thunderbolt adapters. (Still no SFP+ or SFP28 module for Framework?)
For desktops you'd use an SFP28 card (taking up a PCIe slot).
For devices like Raspberry Pi's, etc. you'd use... local RJ45 switches with optical uplink ports?
harrall
You can just do a mix.
Most of my devices only need 1G or even 100Mbps. No reason to switch to fiber. 1G/2.5G copper ports don’t use that much power.
For 10G+ things, it’s fiber or DAC first if possible then RJ45 if it’s the only option.
Then my backhaul between rooms is just single mode fiber, good up to 800G. Plug in a small switch at the end and you go back to RJ45 and PoE.
I only have 10G though (to transfer large files/RAWs between my computer and my storage). Something faster would be nice because NVMe SSDs can go 50G+ but that equipment is pricey and power hungry.
rsync
Wiring ports for humans to use in a flexible and future proof manner (as in a single family home, for instance) gains a lot of utility with PoE.
The convenience and flexibility of PoE would always push me towards copper wiring.
throwawaypath
>10gb networking has been dead for over 10 years.
Not even close to being true, unless you specifically mean 10Gbps over twisted pair (Cat6/7) cable. SFP+ is the default on a ton of network gear still.
jburgess777
I think the point he is making is that the industry first went with a 10g single link, and then 40g over 4 links. Then they figured out how to do 25g over a single link, and 100g over 4 links. Those 25g/100g are common for enterprise switches. It might be fairer to say 40g is dead, 10g still has use cases.
Edit to add: If you want an example, these are the NVidia ConnectX nics available from FS.com, the lowest end one is 25g, then 100g, 200g etc.
fmajid
The SFP+ ones are all Thunderbolt or USB4 thus far, i.e. not backward compatible with USB 3.x, like this QNAP one: https://www.qnap.com/en/product/qna-uc10g1sf
Galanwe
10G DACs are no cheaper than cat6, which is perfectly fine for 10G at most practical distances. Considering the target audience of these cards it seems pretty obvious to me that letting users "just buy a cat 6 cable" is miles more reasonable than having them buy a transceiver or DAC.
As for allowing to switch to fiber, that just seems orthogonal again to what these USB NICs are for, not to mention the SFP+ itself is probably more expensive than the NIC shown here...
Fnoord
DACs are very cheap (second hand and AliExpress) and they never use much W. If both machines are near each other though (which a DAC cable implies) and both run Linux and both support Thunderbolt, you might be better off with a direct ethernet over TB connection. Whether macOS supports such, I don't know.
The other side will then also need a low power NIC (of which fiber and DAC over SFP+ are less power hungry). What this article doesn't mention, is that there are also a lot of PCIe NICs on the market which aren't power hungry (RTL8127), as well as RTL8261C for switches/routers.
I've seen low power RTL NICs with SFP+ on it, too (example: [1]). With SFP+, you'll have a lot more versatility. DAC and SFP+ fiber are very cheap, btw. Especially second hand they go for virtually nothing. I have 10 SFP+ fiber lying around here doing nothing which I got for a few EUR each.
For me as a European with high energy prices, and with solar net metering getting the boot next year (in NL), this is all very interesting.
There's a couple of good reasons why to opt for fiber in the home. You keep the energy between the different groups separated which can help. I also find fiber very easy to get through walls, allowing me to have multiple fiber connections through walls (currently I use 1x fiber + 1x ethernet for PoE possibilities from fusebox).
With all above being said, AQC100S is low power and does not get very hot. You can get these with SFP+ and PCIe/TB. They've been available for a while.
[1] https://nl.aliexpress.com/item/1005011733192115.html (no vouching for, just first hit on search)
ZekeSulastin
I just wish someone would come out with a PCIE 4x1 capable card with SFP - my main desktop’s non-GPU expansion slots are all 4x1 electrically and even the one you linked is a 3x2. As far as I can tell the only 4x1 cards available are RTL8127 or AQC113 RJ45 ones :(
I suppose an NVME riser is also an option, albeit janky.
wpm
I can also buy a roll of CAT6 and a few dozen dollars in tools and RJ-45 connectors and make my own custom length cables.
gsich
Also SFPs are always a gamble. Might work, might not, you have multiple options, meanwhile with copper RJ-45 you are guaranteed that a link will be established.
toast0
> No one wants to use 10g ethernet when DACs are cheaper than cat7,
Ethernet is media independent. Yes, yes, it was first specified for thicknet, but ethernet over twisted pair (RJ45, typically) is still ethernet despite the lack of vampire taps. You can run ethernet over thick or thin coax, twisted pair, DAC, fiber, or even over the ether, so to speak.
That said, 10g over rj45 is pretty handy when you have existing wire in walls. In my experience, it runs fine on the cat5 (not even cat5e) that's already there. Maybe it won't work on all my runs, especially if I tried all at once, but so far, I'm two for two.
The spec is for ~ 100m in dense conduit; real world runs in homes are typically shorter and with less dense cabling... and cabling often exceeds the spec it's marked for, so there's wiggle room.
lukevp
I have a fairly large house (2 story 3k sqft) with all cat5e. I iperf’d every run and they could all do 10gb negotiation and TCP, and most of the runs could sustain very high UDP rates with low packet loss. There’s just one run (the one to the internet) which had a slightly higher UDP packet loss rate. So basically every run can do 10gb fine. Been running the whole network like this for a year. It’s been great! I just need a 10 gig capable NAS. My current one can only do 3.5 or so because it’s on a USB 5 Gb/s link, which isn’t really 5 Gb/s.
undersuit
The big bulky black box this little adapter replaces in Jeff's uses is actually just a PCIe/OCP card in an enclosure and you can replace that with a 10g card with SFP.
buserror
Modern transceivers can do 10G on absolutely garbage twisted pair. My house was wired with absolutely dire cat5 cabling. Zero shielding and barely any copper in the pairs. I thought I'd barely be able to do 1G on them, but modern transceivers (Amazon) easily do 10G over ~30 m of that sort of cable.
In fact I had more trouble getting quality fiber working for that sort of distance than El Cheapo cat5. They do heat up a bit, but they work wonders.
OneOffAsk
Zero shielding may actually help. Shielding acts as an antenna when not properly grounded and continuous, which is more common than not.
radicality
I’ve been using the QNAP SFP+ Thunderbolt one (I think it’s a Marvell/Aquantia chip) for a few years now, every day with my MacBook, and it’s been solid
nasretdinov
10 GbE sits in a really weird spot for me, maybe I'm just not understanding something though. It's at most 1.25 GB/sec of bandwidth, yet it's relatively quite expensive. It's not sufficient bandwidth for getting good performance out of most SSDs, yet it's really excessive for any hard drives (except for RAID10 setups I guess). For SSDs you want thunderbolt (or 40+ GbE) connection for best latency and performance, and for hard drives 2.5Gbit/sec is more than enough. As I said, I might be misunderstanding something, but 10 GbE sits between the two sensible options for me.
razighter777
10gbe is a sweet spot at least for my homelab stuff. It's easy to find old enterprise gear for, cheap, and fast enough for everything I want to do.
bombcar
Exactly. Enough supports 10gbe that you might as well grab it; a few Mikrotik switches, some old enterprise gear, and an adapter gets you some good speeds.
Sure some of it might have been fine at 2.5 or 5 but those are relatively new and less commonly available.
kotaKat
I'm actually surprised at the amount of 2.5/5 gear I've been coming across lately, especially in the 2.5 space as more ISPs are pushing for gigabit+ to the house.
Verizon's been issuing a wireless router with 10G WAN and several 2.5G ports and MoCA support that includes a 2.5G adapter and they use that across all their current connection types. I was delighted to see that when I got the router a couple years ago.
walrus01
10GbE can be extremely cheap now if you're doing things like buying Intel NICs off eBay to put into your own test/dev headless servers.
There is also a glut of 40 Gbps stuff on the market because it's a dead end technology and most ISPs went straight to 100 for things like aggregation switch to router links. Not that I would encourage anyone to go whole hog on 40 Gbps just because, but if you can get a transceiver for $15, NICs for $30, and maybe you get a switch for free from electronics recycling or for 80 bucks, and can tolerate its noise and heat output...
I have seen plenty of people throw decommissioned 40 Gbps stuff straight into electronics recycling bins.
Mellanox ConnectX-3 40 Gbps QSFP NICs are literally 20 bucks on ebay.
MisterTea
10 Gb is cheap! Mikrotik has a 4x10Gb + 1x1Gb port switch for $150 USD and an 8x10Gb version for about $275. I have the 8 port version.
SFP+'s and fiber are cheap, like maybe 50 bucks for the SFP+ set and fiber. 10Gb PCIe cards are maybe ~$50 new on Amazon with Intel chips and cheaper on eBay - I bought used 10 Gb Mellanox cards for $25 each - "they just work" under FreeBSD and Linux.
Copper 10 Gb used to consume waaaaay more power (like 5+W per port!) and cost more both in terms of the SFP and cable. In reality fiber is more environmentally friendly as there is no copper, less energy used, and less plastic per meter. So my setup mostly consists of SR and BR optics and DAC's. The "DAC" direct attach cables are handy for switch-switch or short switch<->NIC runs. And I will continue to run fiber for the foreseeable future and actively avoid copper.
reaperducer
10 Gb is cheap! … $150 … $275.
San Francisco checking in.
MisterTea
It's not that much considering people pay $100+ for cable/internet and/or >=$(15 * n) streaming services PER month. Some people might want faster transfer speeds or low latency. For the price of two or three months of internet and streaming/cable you get a very fast LAN if you so desire. If you don't need it then don't spend the money.
mlyle
10 years ago, you spent $40 for a few port unmanaged gigabit switch and $80-100 for the bottom tier web-managed crap.
That corresponds to $50 and $105-130 in today's money.
Now you can get it 10 times faster with an OK management layer for $150. This is after a -long- time of 10gbps prices stagnating.
10gbps is unexpectedly cheap.
kiddico
Considering what you get and the historic prices of 10GbE those are absolute steals.
How much would they need to cost before you'd consider it cheap? If you want CHEAP then 10GbE is not for you in 2026.
sbierwagen
Keep in mind that $275 today is the same as $140 in January 2000. Tech gadgets used to be far more expensive, both in real terms and as a percentage of average income.
Analemma_
A single eero or Ubiquiti AP will be $150-300 depending on the exact capabilities, so if you're pricing out how to network your house I'd say the switch looks pretty good
donatj
I redid the backbone of my home in 10Gb fiber, and "cheap" is not the term I would use. Especially when you can get perfectly cromulent 1GbE switches for like $10 these days.
The Mikrotik switches [1] work technically speaking but they are quite difficult to configure. You have to pull them from your network, connect physically to a specific port, force your machine onto a specific IP, connect to a specific IP. I could not get this to work in macOS nor Ubuntu despite hours of futzing with it. They both kept infuriatingly overriding my changes to the IP. I was only able to get this to work on an old Windows 10 laptop.
Once you do get their web UI up, you pray the password on the sticker on the bottom works. Neither of mine did and I had to firmware reset both and find the default password online. The web UI itself holds no hands. It's straight out of 1995, largely unstyled HTML. While using both of my devices the backend the UI talked to would crash and log me out about every five minutes. Not every five minutes after log in. Every 5 minutes wall time!
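For what it's worth, the "OS keeps overriding my IP" symptom usually means NetworkManager is re-applying its own config to the interface. A sketch of one way around it on Linux (interface name is an example; 192.168.88.1 is the usual SwOS default address):

```shell
# Tell NetworkManager to leave the interface alone so it stops
# reverting the manual address
nmcli device set enp3s0 managed no

# Put yourself on the switch's default subnet
ip addr flush dev enp3s0
ip addr add 192.168.88.2/24 dev enp3s0
ip link set enp3s0 up

# Then browse to the SwOS web UI, typically at http://192.168.88.1
```

This is a hypothetical recovery recipe, not Mikrotik's documented procedure; consult the SwOS manual for the reset-button sequence if the password sticker doesn't work.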
The Mikrotik switches are also fanless, and 10GbE SFP+ adapters throw off a lot of heat. If you use more than one they overheat. You can just about get away with two if you put them on opposite sides but I would not recommend it.
I've also had very mixed luck with SFP+ module compatibility with this thing. I had a number of modules that refused to run at higher than 1 Gb, hence my fighting to get into the UI. Despite a ton of futzing between logouts I was not able to get them to work at 10Gb and returned them.
I'll be honest, my Mikrotik switches have been infuriating. I replaced one of them with a Ubiquiti Pro XG 8 8-Port 10G and holy crap the difference is night and day. It just works. Everything worked straight from the box day one, I can configure it from my phone or the web, I highly recommend this thing.
The Ubiquiti switches are multiple times more expensive but if you value your time they're well worth the price. I still have two of the Mikrotik switches on my network but am completely intent on replacing them. The Ubiquiti is worth it for online configuration alone. No need to pull the thing from your network, test your changes immediately!
MisterTea
I don't configure anything on the mikrotik. Out of the box it's a dumb switch and that is all I want.
> The Mikrotik switches are also fanless, and 10GbE SFP+ adapters throw off a lot of heat.
If you are talking about copper SFP's, then that's the problem: copper. It takes a lot of energy to drive a wire at GHz speeds, not so much with an optical link (though it's getting much better.) I have only ever felt lukewarm optical and DAC SFP's. Copper 10 Gb SFP's are burning hot. I avoid using copper and run fiber.
bobbob1921
I use mikrotik equipment extensively (as in hundreds/thousands of them over the years), and while I disagree with a lot of this, the post is absolutely correct about the ridiculous password-on-a-sticker requirements they introduced a few years ago. The pw text is incredibly small and the way it’s printed (dpi and font) makes it very difficult to differentiate certain characters. Also the way you initially connect to them when they’re new out of the box to then enter this obnoxious password has several issues/challenges. It used to be so easy and convenient to configure brand new mikrotik devices in the past, and now it’s become a task I dread and has even caused us to buy non-mikrotik gear in several cases.
cyberax
Hah. I used a dremel tool, some radiators, and a bit of thermal glue to make my Mikrotik switch work reliably: https://pics.ealex.net/share/UxeSf_AWHLIuc-qzK5zl7JIgQvQDAZh...
It's been like this for the last 3 years. And amazingly, I still can't find a 10G switch that is just as compact.
chaz6
I got an 8 port SFP+ managed switch from AliExpress for $100!
amelius
Does microtik have any competition?
bobbob1921
In the lower end space kind of, however in many respects no they don’t. Ubiquiti would be their main competition, but Ubiquiti equipment is cloud-first whereas a strong point of mikrotik has been that you do not need a centralized cloud controller (i.e. local-first). Also in terms of the vast capabilities of mikrotik equipment at its price point, no, there is absolutely no real competitor. (Maybe pfSense is the closest competitor with a strong feature set)
emb-dev
Ubiquiti, Juniper, Firewalla or Alta Labs?
CTDOCodebases
I have a zfs x 3 disk hard drive mirror and 10GbE.
For writes, yes, 10GbE is overkill, but for reads it's faster than 2.5GbE would be.
Sure there is 5GbE but most switches that support 5GbE support 10GbE.
randusername
I chose 10GbE to fit 20 HDDs in RAID 10.
~ 1 GB/sec seems about right for a long time. I can't imagine the basic files I work with everyday getting much more storage-dense than they are in 2026.
flemhans
I remember my friend Peter, in 1999, on campus networking with 100 Mbit internet saying: I think this will be enough for many years to come. And he was kinda right — 100 Mbit is still "almost good enough" 27 years later for internet access.
cyberax
AI model files can be rather large...
whatevaa
Are you gonna run thunderbolt more than a few meters? If you think 10 is expensive, check prices above 10. You may even need fiber for that.
adrian_b
Making a long distance complex network may be expensive, but to connect directly a few computers one can use 25 Gb/s Ethernet at a reasonable price.
Last time when I checked, dual-port 25 Gb/s NICs were not much more expensive than dual-port 10 Gb/s NICs.
If you have a few computers with no more than a few meters distance between them, you can put a dual-port 25 Gb/s card in each and connect them directly with direct attach copper cables, in a daisy chain or in a ring, without an expensive switch.
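A minimal sketch of the switchless direct-connect setup described above, assuming two Linux hosts joined by a DAC (interface names and addresses are examples; a /30 gives each link its own tiny subnet):

```shell
# Host A, on the port facing host B
ip addr add 10.0.0.1/30 dev enp1s0f0
ip link set enp1s0f0 up

# Host B, on the facing port
ip addr add 10.0.0.2/30 dev enp1s0f0
ip link set enp1s0f0 up

# Verify the link from host B
ping -c 3 10.0.0.1
```

For a daisy chain or ring of three or more machines, each link gets its own /30, and the middle hosts need IP forwarding enabled (`sysctl net.ipv4.ip_forward=1`) plus routes so non-adjacent machines can reach each other.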
nasretdinov
No, of course I'm not going to if I choose thunderbolt :). But in many cases it's fine because SSDs aren't nearly as noisy as HDDs, so the NAS can just sit under your desk.
For 40+ GbE or fibre I agree they are expensive, but at least you get full performance out of your system. SSDs aren't cheap these days either...
butvacuum
fiber vs DAC isn't really a cost concern at a home level. a 2m LC patch cable is $5 and used BiDi Cisco optics are $5-10 each. not much more for new optics either.
AdrianB1
10 GbE has a good performance/$ ratio, better than 25 GbE, and it is 10 times faster than the basic (for today) 1 Gbps. If you need more go for 25, but the availability of cheap cards, switches and cabling (DACs, AOCs, transceivers) is lower than for 10 GbE. For me, 10 GbE is the baseline for the year 2025 at home.
deepsun
Is it also possible to power a laptop through those adapters? PoE++ can deliver up to 100W of power, more than enough for most laptops.
eqvinox
Theoretically yes, practically that hasn't been built yet. I've only seen it for 2.5Gbase-T, and only for 802.3bt Type 3 (51W).
If anyone's aware of something better, I'd be interested too :)
(Then again I wouldn't voluntarily use 5Gb-T or 10Gb-T anyway, and ≈50W is enough for most use cases.)
[ed.: https://www.aliexpress.us/item/3256807960919319.html ("2.5GPD2CBT-20V" variant) - actually 2.5G not 1G as I wrote initially]
Iulioh
Eh.
A lot of laptops won't accept less than 60w
My work laptop won't accept less than 90w (A modern HP, i7 155h with a random low end GPU)
At first everyone at the office just assumed that the USB C wasn't able to charge the pc
javawizard
I gotta say, I love my macbooks. Every Apple laptop I've owned that has USB-C ports will happily charge itself from a 5V/1.5A wall charger (albeit extremely slowly).
tjoff
They probably require higher voltages but I haven't seen one myself. I usually just charge my laptop with my phone charger, what is it, 18 watts? Don't care, it charges my laptop and the phone that is plugged into it overnight. Why charge at faster speeds when there is no need to?
Laptop charges fine on regular 5V as well.
folmar
My Thinkpad T490 will happily take any power provided voltage is high enough (15V+).
spockz
Great. So we got EU laws to mandate USB-C chargers and then get manufacturers that flaunt the spirit of the law by rejecting lower wattages.
lostlogin
A Mac mini at home used 4.64w averaged over the last 30 days. Even under load it just sips power.
_blk
The issue might not be the wattage but rather the minimum voltage. (Some?) Macs seem to charge at 15V already; most laptops need 20V.
izacus
Most laptops will take 45W. There might be some workstations that don't, but even gaming stuff with 5080s will charge on 45W.
lostlogin
The idea of a POE Mac mini makes me happy. It would be a nice way of power cycling it from the switch, tidier than the smart plug I have.
https://hackaday.com/2023/08/14/adding-power-over-ethernet-s...
yonatan8070
It's undoubtedly a cool solution, but why do you need to remotely do a hard power cycle? Won't just SSHing in and rebooting be enough?
wallst07
And when ssh is down because you OOMd or something else?
da768
Somewhat, there are a few expensive "PoE to Data + Power" adapters out there
https://www.procetpoe.com/poe-usb-converter/ (some of these are power-only)
oever
Doing home automation of lamps, sensors, and speakers via PoE would be great too. It should be faster and more stable than Zigbee/WiFi, and with no need to change the batteries often.
JonChesterfield
I found a 5gbe one that claimed 60W, will power a phone but not the low power laptop I've got here. It probably isn't far off.
mjlee
I can’t find what you want, but you can buy PoE splitters. PoE in, ethernet and power out.
Surely a matter of time until someone does this…
gertrunde
I think class 4 tops out at about 71W delivered to the powered device, albeit 90W at the switch port.
Might be a struggle I suspect!
knolan
We used PoE hats for a bunch of Raspberry Pis once. It’s definitely a great idea.
papaver-somnamb
10GbE adoption feels different from the successful string of standard speeds that came before it, since from circa 100Base-TX onward the market congealed around one standards family per Ethernet speed. We've heard stories as horrific as RJ45 assemblies heating up to the point that the thermoplastic would flow.
Was some physics threshold crossed that makes 10Gbit over CAT6-whatever cabling so hard? Or was 10Gbit brought to market before the tech supporting copper connections was mature enough?
nbf_1995
I have never heard of ethernet cables getting hot to that degree except when PoE is involved.
ridiculous_fish
I bought one of these as soon as I heard about it ($74 from eBay) and tested it against my USB-4 AQC113 mainstays ($87, IO CREST brand on Amazon), from my MBP.
The new RTL-based adapter is physically smaller, runs way cooler, but only gets ~6 Gbps from my Mac to my Linux box, with a lot of jitter (iperf3).
The AQC adapter is all metal, gets uncomfortably hot, and sustains 9.3 Gbps, no problem. It's about the same size as the middle adapter in the photo.
The USB-4 AQC adapters are only ~$13 more, and yet are significantly faster with lower jitter. I'm staying with those.
Hope that helps someone!
l8rlump
I also discovered the other day that you can get high-speed networking between two computers with just a thunderbolt cable. It showed up as a 20G connection anyway.
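On macOS this shows up automatically as a "Thunderbolt Bridge" interface; on Linux the equivalent is the kernel's IP-over-Thunderbolt driver. A sketch, assuming two Linux machines and example addresses:

```shell
# Load the Thunderbolt networking driver (often auto-loaded when
# the other host is detected over the cable)
modprobe thunderbolt-net

# A thunderbolt0 interface should appear; address it like any
# point-to-point link (do the mirror image on the other machine)
ip addr add 10.42.0.1/30 dev thunderbolt0
ip link set thunderbolt0 up
```

The negotiated rate depends on the cable and controllers at each end, which is presumably where the 20G figure above comes from.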
ranon
Just got an RTL8127 PCIe card to replace my AQC113. Runs cool, doesn't have as much contention on the chipset. Price was right. Good purchase, and that $10 chip will allow cheaper, more power-efficient home 10Gb equipment within the coming years.
aggregator-ios
Was the card $10 or are you saying that the chip is a $10 part?
bri3d
I'm disappointed that both the article and comments don't go into the actual differences between how these adapters work and the overhead incurred by USB.
At a high level, I'm pretty sure Thunderbolt will be significantly better in all situations:
Thunderbolt is PCIe; depending on the way the network card driver works, the PCIe controller will usually end up doing DMA straight into the buffers the SKB points to, and with io_uring or AF_XDP, these buffers can even be sent down into user space without ever being copied. Also, usually these drivers can take advantage of multiple txqueues and rxqueues (for example, per core or per stream) since they can allocate whatever memory they want for the NIC to write into.
USB is USB; the controller can DMA USB packet data into URBs but they need to be set up for each transaction, and once the data arrives, it's encapsulated in NCM or some other USB format and the kernel usually has to copy or move the frames to get SKBs. The whole thing is sort of fundamentally pull based rather than push based.
But, this is just scratching the surface; I'm sure there are neat tricks that some USB 3.2 NIC drivers can do to reduce overhead and I'd love to read an article where I learned more about that, or even saw some benchmarks that analyzed especially memory controller utilization, kernel CPU time, and performance counters (like cache utilization). Especially at 10G and beyond, a lot of processing becomes memory bandwidth limited and the difference can be extremely significant.
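A rough way to see the multi-queue difference described above from user space is to compare what the drivers expose via ethtool (interface names are examples; exact stat names vary by driver):

```shell
# How many hardware rx/tx channels does the driver offer?
# A PCIe/Thunderbolt NIC often exposes one channel per core;
# USB NICs typically report a single combined channel.
ethtool -l enp1s0
ethtool -l eth0

# Which driver is in use (e.g. r8152 or cdc_ncm for USB adapters
# vs a native PCIe driver like atlantic)
ethtool -i eth0

# Per-queue counters, where the driver supports them
ethtool -S enp1s0 | grep -i queue
```

This won't show memory-controller utilization or cache behavior, but it does make the single-queue bottleneck of most USB NICs visible directly.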
eqvinox
ACK. From some cursory experimentation, my laptop can roughly saturate 1G via USB, but on 2.5G things get wonky above roughly 1.9G unidirectional or 2.9G bidirectional.
> Thunderbolt is PCIe
Nit: Thunderbolt isn't PCIe, it tunnels PCIe. Depending on chips used, there's bandwidth limits; I vaguely remember 22.5G on older 40G TB Intel chips.
Aurornis
> Thunderbolt is PCIe
Thunderbolt allows PCIe tunneling, but it has some overhead over raw PCIe. That's why Thunderbolt eGPU setups don't perform as well as plugging the GPU directly into a PCIe slot.
> USB is USB
Until you get to USB4, when USB 4 supports Thunderbolt 4.
Latty
> That's why Thunderbolt eGPU setups don't perform as well as plugging the GPU directly into a PCIe slot.
The bigger factor is probably that PCI-e tunnelling provides at most a ×4 link, while when you plug a GPU in you are generally doing so into a ×16 or at least ×8 slot, and very few GPUs target ×4.
bri3d
Fair; I should have said "from the standpoint of the driver."
> USB 4 supports Thunderbolt 4
It's the opposite! I hate to get into it as I saw the USB naming argument pretty thoroughly enumerated in the comments here already, but the pedantic interpretation is "Thunderbolt 4 is a superset of USB4 which requires implementation of the USB4 PCIe tunneling protocol which is an evolution of the Thunderbolt 3 PCIe tunneling protocol."
From the standpoint of USB-IF a "USB4" host doesn't need to support PCIe tunneling, but Microsoft also (wisely, IMO) put a wrench into this classic USB confusion nightmare by requiring "USB4" ports to support PCIe tunneling for Windows Logo.
toast0
> At a high level, I'm pretty sure Thunderbolt will be significantly better in all situations:
None of my devices support thunderbolt; so not all situations.
Jeff: I see a possible problem with your tests that bit me before! iperf3 is not multithreaded by default. The more capable computers probably have an interrupt rate sufficient to handle 10gig over USB (which likely multiplies the interrupt rate needed), but it's completely possible you're pushing the interrupt rate limits on the Macbook Neo and other lower powered hardware.
Any chance you could re-run with `-P 4` where 4 is the core count?
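For reference, a sketch of what that re-run might look like (server address is an example; note that iperf3 only gained true per-stream threading in version 3.16, so older builds run all `-P` streams on one thread):

```shell
# Four parallel streams for 30 seconds
iperf3 -c 10.0.0.2 -P 4 -t 30

# Optionally spread the client threads across specific cores
taskset -c 0-3 iperf3 -c 10.0.0.2 -P 4 -t 30
```

If the single-stream and four-stream numbers match, the adapter (not a per-core interrupt/CPU limit) is the bottleneck, which is what the `-P 2` / `-P 4` comparison at the top of the thread was checking.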