
Add 10 GbE to your system with an M.2 2280 module

jiggawatts

Some people are saying that they don't feel the need for > 1 Gbps in a home network.

As a counter-point, I'm regularly limited by 1 Gbps PHY limits. During the 2 years of the pandemic I've been working from home with gigabit Internet. Why gigabit? Because that's the "speed limit" of the Ethernet PHY in the fibre broadband box, and also my laptop's ethernet port.

My laptop can write to disk at multiple GB per second, or tens of gigabits.

I transfer large volumes of data to and from a "jump box" in Azure that can easily do 20 Gbps out to the Internet.

I regularly update multiple Docker images from scratch that are gigabytes in size, each. I then upload similarly large Docker images back to Azure.

Even when physically present at work, I'll regularly pull down 50-200 GB to my laptop for some reason or another. One time I replicated an entire System Center Configuration Manager deployment (packages and all!) to Hyper-V on my laptop because the NVMe drive let me develop build task sequences 20x faster than some crappy "enterprise" VM on mechanical drives.

I have colleagues that regularly do big-data or ML queries on their mobile workstations, sometimes reading in a decent chunk of a terabyte from a petabyte sized data set.

All of these are still limited by Ethernet PHY bandwidths.

Note that servers are 100 Gbps as standard now, and 200 Gbps is around the corner.

The new Azure v5 servers all have 100 Gbps NICs, for example.

kortilla

A lot of that sounds incredibly wasteful, and I don't think it's something to strive for more of.

Docker images are layered for a reason. If you’re having to download multi gigabyte docker layers multiple times, something has gone very wrong.

Azure charges (like the other major cloud providers) by egress data. I feel bad for whoever is paying the bill for 20gbps going out.

In general, people in ML shouldn’t be pulling raw data in the hundreds of gigs down to mobile workstations. That’s a security and scaling disaster.

Trying not to be too much of a curmudgeon here, but this is the problem with 1 gbps connections. They are so good they prevent you from doing the right architecture until it’s far too late.

A good remote work setup should leave all of the heavy lifting close to the source and anything that does need to be downloaded should be incremental/shallow copies. You should be able to work from a stable coffee shop WiFi connection.

ClumsyPilot

"You should be able to work from a stable coffee shop WiFi connection."

I think there are different philosophies; just like 'no comments allowed in code', this is a matter of judgement.

I saw 5 different corporations set up centralised workflows, and usually you end up with a massive cloud bill, incorrectly configured admin rights, under-resourced VDI, and a ticketing system that takes a week to resolve the smallest issue.

If we assume for the moment that data in question is not highly sensitive (for example public satellite imagery), there is nothing wrong with getting every dev a beefy workstation with 50 TB of storage and a 20-core CPU, and it's often even cheaper.

rglullis

Then again, if it is public (or not-sensitive) data, why make it part of your infra?

If you are working with ML and have public datasets, just use the original source. Make a public mirror if you want to collaborate. Make it accessible via BitTorrent. You can bet that the amount of savings you will get from AWS will be enough to give each dev a NAS with a few tens of TB, which they can then even use to host/mirror the public data themselves.

ckdarby

5? Are you a consultant who goes in to fix these situations?

junon

> If you’re having to download multi gigabyte docker layers multiple times, something has gone very wrong

When I worked at Vercel during the days we supported Docker hosting (back when it was still ZEIT), we saw that even among many tenants on the same machine, the "layers" thing wasn't really that beneficial - we still had to fetch a ton of stuff over and over again.

So this is great in theory, but it doesn't really pan out in practice.

kkielhofner

I believe Docker has even throttled/blocked corporate users doing excessive layer pulls from Hub (and suggested they sign up for X plan).

Reasonable, IMHO.

m463

I also recall the saying "you can't save yourself rich"

There are some times when your thinking/workflow is constrained by your environment, sometimes significantly.

kortilla

But the person is already super rich and has developed such an obscene yacht sinking habit that they are still money constrained.

_flux

> Docker images are layered for a reason. If you’re having to download multi gigabyte docker layers multiple times, something has gone very wrong.

Well the images need to be in some order, and if you are fine tuning the very initial parts of the image, this can happen.

But it seems a better transfer algorithm could be used for those individual layers. As it stands, if layers don't match, everything gets transferred, while there are much better options available. If we consider that they are structured like file systems, that opens up even more options.

---

So I took a peek at my local Docker installation. It seems the layer files on the drive are gzipped JSON.

That's right, .json.gz, _that includes the files BASE64-encoded_.

Yes, I believe there is room for improvement, if this is also the format they are transferred in.

ericpauley

base64-encoding data only poses a 1% overhead[1] when the data is subsequently gzipped, so this is hardly an issue in practice.

[1] head -c 100000 /dev/urandom | base64 | gzip | wc -c

ashtuchkin

json is used only for manifests; actual layers are .tar.gz.
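
For anyone who wants to check this locally, one rough way is to export an image and list what is actually inside it (a minimal sketch; `ubuntu:latest` is just a placeholder image):

```
# Export an image and list its contents: the layers are stored as tar blobs,
# while JSON is only used for the manifest and image config.
# "ubuntu:latest" is a placeholder; any locally pulled image works.
docker save ubuntu:latest | tar -tvf -
```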


PragmaticPulp

As a counterpoint: I had 10G networking and went back to 1G last time I reorganized. Haven’t really missed the 10G for now.

I ditched the 10G because the 10G switch was hot and noisy and had a fan, whereas my 1G equipment was silent and low power. This could be overcome by putting the switch in another room, obviously, but I haven’t missed it enough to go through the trouble yet.

> During the 2 years of the pandemic I've been working from home with gigabit Internet.

Doesn’t this make every other point in your post moot? I’m similarly limited by Gigabit internet, so the only time I benefit from 10G is transfers to/from my local NAS. I realized that I almost never lose time sitting and waiting for NAS transfers to finish, so even that was only a rare benefit.

If I was buying new equipment and building out something new, I’d pick 2.5G right now. It’s significantly faster than 1G but doesn’t come with the heat and noise that a lot (though not all) of 10G switches come with.

I’m sure I’ll go back to 10G some day, but without >1G internet and no local transfer use case, I found the extra hassle of the equipment (laptop adapters, expensive/hot/noisy switches) wasn’t really worth it for me yet.

zenonu

All of my 10g equipment is silent or near silent. Most recent purchase is the TP-Link TL-SX105. Take another look the next time you redo your home network. At least consider multi-gigabit.

kkielhofner

This is exactly where I ended up recently. I went from noisy and power hungry Cisco/Juniper 10G switches to a silent Ubiquiti GigE switch with PoE and haven’t been happier.

10G was cool but I can’t recall a single instance where I really wish I had it back.

PragmaticPulp

> 10G was cool but I can’t recall a single instance where I really wish I had it back.

Same. It only really made a small difference to my most transfer-heavy workflows. Even a fast NAS is still slow relative to local storage (it’s not just about sequential transfer speeds).

I’m sure I’d feel differently if I was doing something like video editing where I had to move large files back and forth frequently and wait for them to be ready.

Teknoman117

I ended up buying one of these for my 10G home network. It's completely fanless. The 1G Ubiquiti EdgeRouter X I have runs way hotter.

https://mikrotik.com/product/crs309_1g_8s_in

qwertyuiop_

I am on FiOS 1G home fiber; how does one get 10G? Do they have to run a new line from the node or street?

secure

If your provider offers 10 Gbit/s, you only need to change the fiber optics modules at either end of the connection. The same lines can be used, no need to run anything extra.

ninkendo

I don’t see a use case for 10gb Ethernet in your home network in your post, unless I’m missing something? It sounds like your ISP is limited to 1gbps and all your use cases seem to be bottlenecked by it.

Do you have home servers you forgot to mention that you’re uploading docker images to, and those could benefit from 10gb Ethernet?

(For me, I use gigabit Ethernet everywhere at home but sometimes need to transfer large disk images from my desktop to my laptop, and using a thunderbolt cable as a network cable helps here, I can get closer to 2 gigabits of transfer speed before disk writes seem to be the bottleneck.)

emteycz

More devices used at once? If I am downloading at 1 Gb/s from the internet while one of my housemates wants to watch a movie from our NAS and another wants to back up 100 GB of photos to the same NAS, then a 1 Gb/s home network is not enough.

toast0

Your 1G switch should be able to do 1G from your computer to your internet router, 1G from your roommate to the NAS, and whatever from your NAS to the person watching movies. Even cheap gigabit switches can process (large packets) at line rate on all ports. If your NAS is also your internet router, maybe you can't make it work with a 1G switch, unless it can do link aggregation.

formerly_proven

I'm not arguing against faster networks, but scenarios like "one fast download makes video streams buffer" can be solved by using better queue management algorithms (CAKE, for example) instead of making the pipe so wide that it'll never be close to full. One of these is a configuration change you can flip today that costs nothing; the other means upgrading infrastructure.
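
For reference, enabling CAKE on a Linux router is roughly a one-liner (a sketch, assuming the sch_cake qdisc is available; the interface name and bandwidth below are placeholders to adjust to your uplink):

```
# Shape the WAN-facing interface slightly below the uplink rate so queuing
# happens where CAKE can manage it. eth0 and 950mbit are placeholders.
tc qdisc replace dev eth0 root cake bandwidth 950mbit diffserv4

# Check that it took effect and watch the stats:
tc -s qdisc show dev eth0
```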

kevin_thibedeau

4K video doesn't require high bandwidth.

pdimitar

In my case I'd like to transfer from/to my NAS where I centralize a number of data pieces.

I work inside a workstation but occasionally on a laptop as well.

Having 10GbE will help me when I'm on my workstation. It's not fatal, definitely, but it adds up to lost productivity.

YXNjaGVyZWdlbgo

Is your NAS NVMe based? If you are using spinning metal, you won't reach speeds that saturate 10GbE, especially with small file sizes.

Jnr

For a moment I thought about setting up 10GbE at home, but then I realized there is no point yet because my external connection to internet is limited to 1 gigabit.

The only use internally would be a central storage server. But all the new NVMe SSDs are faster than 10 gigs, so my internal network would be the bottleneck.

The next logical step would be upgrading to 100 gig network, but that is too expensive right now. I should buy more fast NVMe storage for that money.

If I could get 10 gigabit connection to internet, it would probably make more sense, but I don't see it coming that soon. And I'm paying 15€ for 1 gigabit right now, I would not be willing to pay much more for 10 gigabit connection.

On top of that, my Zabbix monitoring data shows that I rarely saturate the 1 gigabit connection to internet. Most of the time I don't need more than 10 megabits.

Jnr

Oh, and I would never actually use copper cables with RJ45 for anything above 1 gigabit. Those Ethernet cards heat up a lot and use too much energy. It would be a bad investment. Get proper patch or optical cables and use SFP+ instead.

virtuallynathan

I’ve never had this as a problem, and I use 50-100ft of Cat5e + 10GBaseT SFP+’s… it’s well within the thermal limits of the SFP, which probably uses 2-5W.

Damogran6

What I found was that the bottleneck was often my home firewall. Plenty of bandwidth inside, plenty to the cable modem…but a Cisco or Palo Alto SOHO firewall of that capability was more than I wanted to spend…sure, I could have set up a VM environment and run a virtual firewall…but I didn't really want to. I'd done that in the past, and talking my wife through safely cycling a VM environment when I was away from home and something went wrong was…harder than telling her to power cycle a Netgear product.

theandrewbailey

A few months ago, I bought 2 10GbE NICs off Ebay to connect my main desktop and basement server directly. Since then, I've never seen a transfer between them that was slower than 120 MB/s, usually about 160-200 MB/s. (no RAID or NVMe to NVMe transfers)

Some advice: make sure you know the form factor of your NICs. I accidentally bought FlexibleLOM cards. They look suspiciously like PCIe x8, but won't quite fit. FlexibleLOM to PCIe x8 adapters are cheap though.

ckdarby

How do you regularly pull 50-200 GB? This sounds like terrible practices or implementation at work.

I've never seen docker images of that size that can't be shrunk with multistage, or docker shrink. I can see how someone could get to that size with just baking in all their models to the image itself but yuck.

l30n4da5

Windows Docker images are incredibly bloated. I don't think I've seen one that size, but the base image for a Windows container is something like 17GB by itself, if I remember correctly. Basically just pulling down an entire Hyper-V image.

Dave3of5

> Windows Docker images are incredibly bloated

They are nowhere near that size...

jcelerier

> As a counter-point, I'm regularly limited by 1 Gbps PHY limits. During the 2 years of the pandemic I've been working from home with gigabit Internet. Why gigabit? Because that's the "speed limit" of the Ethernet PHY in the fibre broadband box, and also my laptop's ethernet port.

Same, I live in a rural village in France where I get 2Gb fiber. Yet none of my computers are able to take advantage of that... quite frustrating to see speeds limited to 110-120 megabytes/s when my drives can easily handle gigabytes per second.

bluedino

> Note that servers are 100 Gbps as standard now

In what world?

thfuran

In the world of unlimited budgets, where employees all spend their days pulling gigabits per second of egress from the cloud, apparently.

virtuallynathan

100GbE NICs are <$500, and 100GbE switch ports are $100-150.

protomyth

Yeah, I had a hell of a time trying to get our internal network to 100Gbps and was told by multiple experts I was being too ambitious. I ended up with 100Gbps between switches and 40Gbps to the servers.

loosescrews

I have had a lot of trouble with the Marvell AQtion controllers under Linux. They are supposed to work and are plug and play on modern kernels, but I was never able to resolve a bug where the controller stopped working after a few hours. The only resolution I found was rebooting, which made the controller not very useful.

The Intel 10GbE controllers are more expensive, but much better in my experience.

For anyone looking for an alternative, I use an M.2 to PCI-e slot riser and a regular HHHL PCI-e card. Example product: https://www.amazon.com/ADT-Link-Extension-Support-Channel-Mi...

jagrsw

If someone wants to buy an additional 10G card and has ~300 USD to spare, I suggest the Intel E810 series. The cheapest E810 version is, I believe, the E810-XXVDA2, which has 2x SFP28 ports (25Gb, so good for the future) and uses PCIe 4, which makes it work at 10Gbps even in x1 slots (though the card is physically x8 in size, so you need an open-ended x1 slot), and sometimes you have just a lonely x1 on your MB if you use the rest for 3 gfx cards for god-knows-what purpose :)
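
Rough back-of-envelope for why a Gen4 x1 link is enough (a sketch using the nominal per-lane rates and 128b/130b encoding):

```
# Usable per-lane PCIe bandwidth after 128b/130b encoding overhead.
awk 'BEGIN {
  gen3 = 8  * 128 / 130;   # PCIe 3.0: 8 GT/s  -> ~7.9 Gb/s per lane
  gen4 = 16 * 128 / 130;   # PCIe 4.0: 16 GT/s -> ~15.8 Gb/s per lane
  printf "Gen3 x1 ~ %.1f Gb/s, Gen4 x1 ~ %.1f Gb/s\n", gen3, gen4
}'
```

So a Gen4 x1 link comfortably clears 10 Gb/s, while a Gen3 x1 link falls just short.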

xroche

Same experience with a bunch of Aquantia AQC-107s (ASUS XG-C100C). Had to remove them from a Linux server; they just wouldn't work and botched IPv6 traffic (especially router advertisements?!). Got Intel X550-T2 cards and all the issues miraculously disappeared.

devttyeu

I have this controller, and I was able to mostly work around the dying issue by having a cron script ping a network device every minute, and when that fails it restarts the link - `ip l set enp70s0 down; sleep 6; ip l set enp70s0 up`.

But that's acceptable only because that machine has a workload which can tolerate not having network access for a few minutes per day or so.
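
Roughly, such a watchdog might look like the following (a sketch of the approach described above; the interface name, ping target, and timings are placeholders):

```
#!/bin/sh
# Run from cron every minute: ping a nearby device and bounce the link
# if it stops responding. enp70s0 and 192.168.1.1 are placeholders.
IFACE=enp70s0
TARGET=192.168.1.1

if ! ping -c 3 -W 2 "$TARGET" > /dev/null 2>&1; then
    ip link set "$IFACE" down
    sleep 6
    ip link set "$IFACE" up
fi
```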

NavinF

Damn that’s nasty. I wish there was a way to flag known issues on every product that contains garbage chips like this.

walterbell

Thanks for the pointer, those are difficult to find via search.

martijnvds

ADT-Link[0] makes a lot of them, with the cable coming out left, right or "front" of the M.2 card, and with the PCIe slots in all kinds of orientations.

[0] http://www.adt.link/

seany

I'm using the linked adapter to put Mellanox ConnectX-2s into NUCs. Haven't found a good case solution yet, but it all works fine.

omgtehlion

If you do not need a cable, there are M.2 to x4 PCIe adapters (open-ended slot, so you can insert x8 and x16 cards) for $5.

loosescrews

I wasn't able to find any of those that support PCIe 3.0. Both the Marvell and Intel controllers discussed in this thread are PCIe 3.0.

omgtehlion

https://www.aliexpress.com/item/1005002996748461.html

↑ this one does support gen3. Anyhow, with 4x@gen2 you’ll be fine at 10G speeds (I checked). Gen3 is strictly needed only if you want 2 (or 4) ports, or 25G...

martinald

It's interesting to see just how slowly faster Ethernet standards have caught on. It feels like I've had gigabit Ethernet at a pretty low cost for nearly 20 years now, but anything faster has been pretty esoteric outside of the datacentre.

I guess there really isn't much demand for faster than gigabit speeds even now (outside of servers?)

buro9

Lots of reasons:

1. Consumer broadband speeds rarely exceed 1Gbps

2. 1Gbps Local network transfers are seldom slowed by the network (as large file work typically involves HDD still)

3. Where local network transfers are impeded by the network speed the transferring itself isn't a frequent enough and blocking thing that people feel they need to fix it... they just go make a cup of tea once or twice a day

4. There are a lot of old network devices and cables out there, some built into the fabric of buildings (wall network sockets and cable runs)

5. WiFi is very very convenient, so much so that 250Mbps is good enough for almost anything and most people would rather be wireless at that speed than wired at a much higher speed (gamers and video professionals being an exception)

And ultimately, the cost and effort of investing in it doesn't produce an overwhelming benefit to people.

Even in quite large offices, it's hard to argue that this is worth it when the vast majority of people are just using web applications and very lossy audio and video meetings across the internet.

erik

Another factor: data centres moved to fiber. And fiber is less physically robust and not great for desktop connections or plugging in to a laptop.

10GBASE-T exists, but it turns out pushing 10gbit/s over 100m of twisted pair requires chips that are hot and power hungry. Again, not great for desktops or laptops. And because it’s not used in data centres, there are no economies of scale or trickle down effects.

Gigabit Ethernet and wifi being “good enough” combined with 10gig over twisted pair being expensive and power hungry means that the consumer space has been stuck for a long time.

DannyBee

This is, IMHO, completely right, though I think at this point the physical robustness issue is moot.

It's easy enough to get G.657.B3/G.657.A3 cables, and you can wrap them repeatedly around a pencil and they are fine.

Also, most consumers would not notice the bend attenuation anymore because they aren't trying to get 10km out of them :)

salamandersauce

2.5Gb Ethernet seems to be starting to trickle out, at least. It's becoming a more common thing on desktop motherboards. Doesn't seem like there are a lot of 2.5Gb routers yet, though.

NavinF

> fiber is less physically robust

Maybe this used to be the case long ago, but I don’t think it’s true today. Personally I’m pretty rough with fiber (having slammed cabinet doors on fiber, looped fiber around posts with a low bend radius, left the ends exposed to dust, etc) and had no issues within a data center. Can’t say the same for copper. Even the cheapest 10km optics have more margin when your link is only 50m.

Oh and bend insensitive fiber is dirt cheap and works just fine when it’s tied into knots.

sigstoat

> And fiber is less physically robust and not great for desktop connections

i buy armored multimode patch cables for, i dunno, 10 or 20% more per unit length, and they seem indestructible in my residential short-distance use cases.

> Gigabit Ethernet and wifi being “good enough”

i think this explains it all. when the average user prefers wireless to wired gigabit, we know how much bandwidth they actually need, and it isn't >=gigabit.

ksec

> requires chips that are hot and power hungry.

That is finally improving. Technology improvements mean we now get under 5W per port at 10Gbps. Cost will need to come down, though.

AceJohnny2

All excellent points but I'd remove gamers from there:

> (gamers and video professionals being an exception)

I play Stadia, 4K HDR 60FPS, just fine on a gigabit ethernet connection and 150MBit internet connection. Any games not streaming the entire video are fine with just kilobits/s of data, as long as the latency is good.

So the case for home 10GE is even weaker ;)

buro9

Gamers aren't a strong answer :D but they're the audiophiles of home computing hardware and networks and the most likely to overspend in the belief that it's better :D

ericd

Seems like they're saying that gamers would prefer wired to wifi, which I think is reasonable - it's not so much a bandwidth issue as a latency issue - wifi has higher latency, and higher variance than wired ethernet, especially if you get a crappy client joining and filling up the airtime with retry attempts. But maybe that's dominated by ISP variance for most.

m_eiman

> as long as the latency is good.

Which is what rules out wifi :)

aequitas

You probably don't even need gigabit. A Stadia stream ranges from 10Mbit to 50Mbit depending on the quality settings. Latency and other network users are far more influential on gameplay.

xondono

I understood GPs point to be “everyone is pretty much on wifi except professionals and gamers anyway”.

jcelerier

Weird, Steam Link on gigabit Ethernet is barely usable here.

thfuran

I think 1gbps is more limiting now in consumer space than 100 Mbps was back when gigabit started becoming widespread.

walrus01

10GbE and 100GbE on fiber is quite cheap and easy now - but 99% of consumers and people doing ordinary stuff have no capability or interest in doing fiber. You can terminate cat5e or cat6 with $25 in hand tools...

I think what's new is the prevalence now of 2.5 and 5GBaseT ethernet chips that are cheap enough companies are starting to build them into any $125+ ATX motherboard. At short lengths even old crappy cat5e has a good chance of working at 2.5 or 5.0 speeds.

gjulianm

Even 100GbE is rarely seen in company datacenters. Yes, it's cheaper than before, but still more expensive than 10G, and that's extra cost multiplied by all the devices that need the improved hardware to take advantage of it. Plus, most servers won't saturate a 10G link without tweaks to the setup. For 100G it's even worse; I think it will take a long time to see it in datacenters outside of core links or at companies with heavy bandwidth use (storage, video).

bradfa

I think the common knowledge that most servers can't saturate a 10Gb Ethernet link is no longer true. In my experience even saturating 25Gb links is rather easy to do when using 9000 byte MTU on mid-tier server hardware.

100Gb links do take some thought and work to saturate, but that's improving at a good rate lately so I expect it'll become more common rather soon.

The main downside to 25Gb and 100Gb links still seems to be hardware pricing. At these speeds, PCIe network adapters and switches get rather expensive rather quick and will make you really evaluate if your situation really demands those speeds. 10Gb SFP+ and copper network adapters and switches are quite inexpensive now in 2022.

toast0

> Plus, most servers won't saturate a 10G link without tweaks on the setup.

That doesn't seem right. When I got my first 10G server, it was running dual Xeon E5-2690 (either v1 or v2), and I don't recall needing to tweak much of anything. That was mostly a single large file downloaded over HTTP, so not super hard to tune anyway, but server chips are a lot better now than Sandy/Ivy Bridge. It could only get 9Gbps out with HTTPS, but the 2690v4 could do 2x10G with HTTPS because of AES acceleration.

walrus01

I can saturate a 10G link on a $600 desktop PC with a consumer grade NVME SSD... serious servers are capable of far more than that.

thedougd

I'm seeing 2.5 and 5 popping up all over the place. My WiFi AP has 2.5G with PoE. The aggregate bandwidth of the AP exceeds 1G. Spectrum cable modems have 2.5G ports now, and AT&T Fiber is shipping their garbage gateway with a 5G port.

Unfortunately, I'm finding switches with 2.5 to still be overpriced.

wolrah

As I understand it 2.5 and 5G modes were originally primarily aimed at WiFi APs as real-world capacities started to scale past gigabit speeds but replacing existing wiring for an entire building worth of APs to support 10G or completely redesigning the power infrastructure to support fiber would have been impractical.

Instead we run 10G signaling at half or quarter clock rates and get something that works on the majority of existing wiring.

AFAIK the IEEE was initially resisting supporting this, but enough vendors were just doing it anyways that it was better to standardize.

zrail

I wanted to upgrade portions of my home network to multi-gig because Comcast is giving us 1.4Gbps and I wanted to use it. At least for me, 2.5G switches were way too expensive, so I ended up with used Intel 10G cards connected with DACs to a cheap 5-port Mikrotik 10G switch. One 10GBase-T RJ45 SFP+ module hooks into the modem.

yrro

2.5GBASE-T and 5GBASE-T are designed to work across 100 meters of Cat 5e. Nothing crappy about it! :)

yrro

That said...

https://www.cablinginstall.com/sponsored/berk-tek/article/16...

... points out that 5GBASE-T needs 200 MHz channel bandwidth which is past what Cat 5e is specified for. So perhaps for runs approaching 100 meters or in noisy environments, Cat 5e won't be reliable for 5GBASE-T after all.

icelancer

2.5 and 5GBaseT is a great compromise; I just wish Ubiquiti would support it in their cheaper line of switches.

vardump

Getting stuck at 1 Gbps is somewhat crazy, as even the slowest laptop and PC M.2 SSDs can easily do 10-20 Gbps. The fastest ones do 50 Gbps+.

But I guess most people don't transfer files in their local networks anymore and use their network purely for internet access.

vladvasiliu

> But I guess most people don't transfer files in their local networks anymore and use their network purely for internet access.

I think this is clearly the case. Most new laptops don't even bother with a wired network port. I got a new "pro" HP laptop the other day, and it only comes with some cheap Wi-Fi card. It's not an "entry-level" laptop, and it's thick enough for an RJ-45 plug to physically fit.

I also see more and more desktop motherboards come with integrated Wi-Fi. The desktops at work (HP) have also had integrated Wi-Fi for a while, and it's not something we look for (they all use wired Ethernet).

Gigachad

It’s all USB-C now. My iPad Pro has 10Gbit Ethernet support over the USB-C port.

rocqua

I am looking at wifi integrated motherboards, not for the wifi, but for the bluetooth support.

ThePadawan

I rent an apartment in a building that was erected around 2015. They laid an ethernet connection... with a 100Mbps bandwidth limit.

Some people just don't care.

sjagoe

Are the ports in pairs at each location? Then it sounds like they did a Very Bad Thing and ran one cable per pair of ports; 100Mbps uses two pairs, so why use two cables when one cable has four pairs, right? :( I've seen that a lot in much older installations, but I'd expect better from 2015 construction.

I'm busy retrofitting Ethernet in my house by pulling cat6 through the walls and pulling out the old cat3 phone cabling. It's much harder work doing two cables (not least because none of the phone cables were in conduit, so it just starts off harder already) for each pair of ports, but it's very much worth the effort.

tblt

They may have done this to run a single cable to supply both data and telephony/door intercom etc on the other pairs. I agree it's not ideal.

xondono

> But I guess most people don't transfer files in their local networks anymore and use their network purely for internet access.

Most people don’t have anywhere in their local network to transfer things to. I still laugh when people see my home server and assume that’s “your work thing”. I do use it for work, but 99.99% of what’s contained in there are family pictures and photos.

ipdashc

I don't transfer files super often in my local network, but even when I do, gigabit is... honestly fast enough. Like it's never really bugged me.

fulafel

People have been using faster Ethernet in workstations for a long time, in data-intensive jobs. But indeed the commoditization has been going much slower than in previous generations. My pet theory is that it goes back to the stall in internet connectivity speeds, which in turn is caused by people jumping en masse to slow and flaky Wi-Fi and cellular connectivity. This then causes popular apps to adapt aggressively to low bandwidth and keeps apps requiring high bandwidth out of the mainstream.

AnthonyMouse

It's kind of chicken and egg. It's not worth buying a 10G switch when all or nearly all of your devices are gigabit and it's not worth buying 10G cards for any device when you have a gigabit switch.

What you need for the transition is for premium brands to start pushing 10G ports as a feature, e.g. Apple needs to add it to Macbooks and the Mini and start using it to bludgeon competitors who don't have it. Then once their customers have several 10G devices around, they buy a 10G switch and start demanding that every new device have it. At which point the volume gets high enough for the price to come down.

dtech

I've noticed 2.5 becoming a bit more common on enthusiast hardware, so it'll be a while yet before 10G becomes mainstream, but 2.5 and 5 might be the standard for new hardware a decade from now.

Jaruzel

The nice thing about 2.5gb/s is that you can still use existing CAT5e/6 cable runs (albeit at shorter distances).

I really want to see 2.5Gb/s become standard on desktop motherboards ASAP.

tomohawk

2.5 and 5 can use existing cat5e or better cabling. There is no solution for 10GigE that uses that cabling.

eatbitseveryday

> for premium brands to start pushing 10G ports as a feature, e.g. Apple needs to add it to Macbooks and the Mini

https://www.apple.com/mac-mini/specs/

> Gigabit Ethernet port (configurable to 10Gb Ethernet)

ericd

I'd say it's worth it as soon as you have a home NAS; it lets you treat it almost as a local drive for any computer that also has 10G.

Hamuko

I have a home NAS, but I think I need at least two, maybe three 10G switches in order to get everything hooked up properly. And then I need gear to actually get 10G on my computers. Sounds a bit expensive especially since the NAS is unfortunately limited to 2x1GbE.

soneil

These are naive takes, accounting only for linespeed and nothing more, but give a useful rule of thumb:

- An 8x cdrom narrowly beats 10meg ethernet.

- 1x dvdrom narrowly beats 100meg ethernet.

- ATA133 narrowly beats 1gbit ethernet.

sigstoat

> - 1x dvdrom narrowly beats 100meg ethernet.

that doesn't fit with my recollection of reading/transferring DVDs, or with https://en.wikipedia.org/wiki/Optical_storage_media_writing_...

> - ATA133 narrowly beats 1gbit ethernet.

the electrical interface/protocol, sure. i don't think any ATA133 drive made could actually saturate its interface, or a gigabit link.

cosmotic

From my perspective the problem is availability and cost of 2.5, 5, and 10gbe switches.

comboy

Mikrotik

Sebb767

It's still around $100 and up (I know you can get it a bit cheaper, but not everyone searches). Gigabit switches, on the other hand, are basically free.

panda88888

I have gigabit internet (downstream only... hoping for fiber one day for symmetric up/down), and I run wired gigabit Ethernet for desktops and fixed devices to free up the Wi-Fi. For 99% of everyday use it works just fine. Externally I hit 80+ MB/s on internet downloads (aggregate; very few sites saturate my downstream bandwidth), and internally I hit 90+ MB/s to and from my NAS. The only time I wish it were faster is when transferring really large files to/from the NAS, but only for 10GB+, so the extra cost of either an SFP+ or copper 2.5/5/10G upgrade to the network is not really warranted. One day I might install an NVMe cache on my NAS and run a straight 10G fiber from my workstation to the NAS, but that's more of a luxury (look, I can transfer a 10GB file in 10 sec instead of 100 sec!) than a need.
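
The rough math behind those transfer times (a quick sketch, ignoring protocol overhead and assuming the disks on both ends keep up):

```
# Time to move a 10 GB (~80 Gb) file at line rate.
awk 'BEGIN {
  gigabits = 10 * 8;
  printf "1 GbE:  ~%.0f s\n", gigabits / 1;    # ~80 s
  printf "10 GbE: ~%.0f s\n", gigabits / 10;   # ~8 s
}'
```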

social_quotient

I’ve got Google Fiber, 1 gig up and down. It’s plenty fast for external content, but internally I was annoyed at the network performance to my NAS, so I got a secondary network card just to go from desktop to NAS at 10GbE speeds. It’s been great, and it skips the need to overhaul the entire network stack.

rocqua

Did that take any complicated setup to make the routing work?

shrx

> Applications that may make use of the module include machine vision in industrial applications, high throughput network data transmission, high-resolution imaging for surveillance, and casino gaming machines.

Does anyone know why is this useful in casino gaming machines?

Bad_CRC

I was researching low-latency applications, and one seems to be WebRTC video for casinos, where the croupier's hands and cards are streamed to the internet for remote players. Could be something like that.

shrx

Right, I was thinking about that but I would not call it a "casino gaming machine" in this case, more like a casino security system, so I thought it could be something else.

walrus01

Well, an M.2 2280 slot (presumably one that's NVMe capable, not SATA-only for storage) is just a PCIe slot in a weird small shape.

hsbauauvhabzb

Yes, but embedded devices and laptops generally don’t have PCIe slots.

martijnvds

Lots of consumer motherboards don't have a lot of PCIe slots wider than x1 (which is not enough for 10GigE), but they often have multiple NVMe-capable M.2 slots that this card would work with.

In my home server for instance, which is built on a consumer μATX board, the wider PCIe slots are filled with a GPU and HBA, which leaves no way to add 10GigE without spending a lot of money on a new motherboard and CPU - or finding a way to use the M.2 slot.

londons_explore

Port expanders are a thing... Unless you plan to be needing full bandwidth to your GPU at the same time as full network bandwidth, you shouldn't have issues.

I never really understood why motherboards didn't spend a few extra dollars and make all ports 16x ports (switched), so that you can use full CPU bandwidth to any one device and not have to mess with different types of port for different devices.

AnthonyMouse

> Lots of consumer motherboards don't have a lot of PCIe slots wider than x1 (which is not enough for 10GigE)

Isn't it enough given PCIe 4.0?

t0mas88

For a home server you probably don't need PCIe 16x for the GPU? If you even need a GPU at all?

omgtehlion

There are a lot of m.2 to full-PCIe adapters on AliExpress. And a lot of cheap 10G cards on eBay.

I’ve been using 10G in my home LAN for ~5 years already, and just a month ago I contemplated upgrading my notebooks to 10G.

I ordered a cheap Thunderbolt→NVMe adapter (for SSDs) + an M.2→PCIe adapter on AliExpress, and they all work like a charm! Total cost was about $55 (TB3 adapter) + $5 (M.2) + $25 (network card) + $8 (SFP+) = $93. A lot cheaper than other options like QNAP or Sonnet Solo (which are in the $200+ range).

manuel_w

A bit unrelated but still:

Can someone recommend a quality USB(-C) Ethernet adapter brand? I'm building an embedded system in a professional context and need to connect some of our own custom in-house built embedded devices (USB-C only, no Ethernet) to a LAN. Right now, when a device goes offline, I don't know whether it's our product or the cheapish USB Ethernet adapter that's at fault. I would like something 100% reliable.

Shall I buy Lenovo or Dell?

3np

Have tried both Lenovo and Linksys (USB-A, RTL) - in both cases they would have issues and disappear hours or days after startup. I cannot tell you for certain that it's purely an issue with the USB Ethernet adapters and not something else in the stack (Armbian Bullseye).

Since it's built in-house, why are you relying on retrofitted USB dongles if reliable Ethernet connectivity is important? Unfeasible to make a revision with a port?

Anyway, if I were you I would probably just go ahead and buy one each of the top handful of contenders and try them out myself - they're not expensive and it makes sure that it really works for you, and if not, where the problem lies. If you have the same issue on several adapters with different chipsets, well...

kkielhofner

I’d be more concerned with the chip the adapter uses. While I don’t have experience with their USB-C variants, I’ve had good experiences otherwise with ASIX.

As far as I can tell they seem to be the “go-to” in the USB-Ethernet game and are well supported on Linux and anything else I’ve used them on.
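
If it helps with the debugging, on Linux you can usually identify which chipset and driver a USB Ethernet adapter is using (a sketch; the interface name below is a placeholder):

```
# Identify the adapter chipset and the kernel driver bound to it.
lsusb                                # USB vendor/product IDs (e.g. an ASIX or Realtek chip)
ethtool -i eth1                      # kernel driver name (e.g. ax88179_178a, r8152); eth1 is a placeholder
dmesg | grep -iE 'asix|ax88179|r8152|cdc_ether'   # probe/reset messages from the driver
```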

lbriner

All very well but aren't these devices in most cases the cause of software that is too damned slow? What happens when devs with their 128GB RAM and 64GB graphics cards write things like MS Teams, Slack, Visual Studio, most other IDEs etc.? You get simple apps that take 10 seconds to start up on ordinary desktop machines.

As others have said, there are advantages to pushing people to use normal-spec machines, to remind them that most of the world doesn't have 10GbE or even 20Mb broadband but would still like apps to start quickly.

goda90

End users don't have to compile the software each time they want to run it. A dedicated performance testing setup with realistic hardware is better than wasting a developer's time waiting for their machine.

lucioperca

top_sigrid

I did not know about this; this is pure gold.

guywhocodes

This would be so much nicer if it were SFP+; 10GBASE-T transceivers are still too expensive IMO.

jagrsw

They're $50 or so. The main problem seems to be that they use ~3-4W of power each, which is more than the power budget of a single SFP+ slot (2W or so), and they get very hot (~70°C), which can lead to overheating of switches.

ericd

Yeah, the MikroTiks recommend populating only every other slot if you're using RJ45 transceivers, to avoid overheating.

aix1

Is the power consumption (and heat dissipation) a function of bit rate? E.g. would the same 10GBASE-T transceiver consume less power when running at, say, 2.5Gb/s than at 10Gb/s?

Would love to understand this a bit better.

(edit: corrected the units.)

sschueller

? They are like $20 for single mode fiber which you can run for miles.

https://www.fs.com/de-en/products/11555.html

ZiiS

Exactly. If it was SPF+ you could use that. As it is copper only you need to buy a more expensive (and for most uses less good) copper one for whatever you plug it into.

TheSpiceIsLife

I think you mean SFP+, small form-factor pluggable, rather than Sun Protection Factor ;)

sschueller

Copper SPF+ get very hot and waste a ton of electricity. I think doing anything over 2.5G on copper is not ideal.

nirav72

I thought I'd never need 10GbE until I tried to copy a couple of large VMs from one server to a new one. Once I upgraded, I realized that my ISP-provided bandwidth was actually 1.2 Gb/s instead of 1 Gb/s; the default 1Gbps port had been the restricting factor.

n00p

I'd love to see a 10Gb SFP+ to USB-C adapter for my laptop. I don't know why this is not a thing yet.

zamadatix

It's a small market that cares about 10G performance but doesn't have a device with a Thunderbolt Type-C port, which performs much better. I'm sure they'll land eventually, though.

omgtehlion

If your USB-C port is capable of Thunderbolt, there are options (prebuilt and DIY); see my other comment in this thread.