mcovalt
heavyset_go
> running a homelab is a nonsensical
Depends, I get a lot of utility from mine, as it manages my media collection for streaming at home and on the go. I've tried using SaaS alternatives and managed hosting of the apps I run, but those experiences were both lackluster and relatively expensive. And since the apps I run and my media collection aren't locked behind proprietary systems or limited APIs that might disappear, the amount of integration and automation makes for a very pleasant experience.
> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
I just add my old devices to my cluster and call it a day. Even ancient hardware is suitable for it, especially if you're using old laptops that are optimized for power savings. Even old Core 2 processors in laptops can idle at a low wattage, and TDP can be less than a light bulb's when maxed out.
KennyBlanken
A media server is not a "homelab."
One of the most irritating things about "homelabs" is that most people seem to think a "homelab" means "a rack of very expensive, way-overspec'd ubiquiti gear, an OTS NAS unit, and a docker container server running media server/torrent shit."
I have a laptop running a dozen different containers - bookstack, torrent client, rss reader, and so on. I don't think of it as a "homelab."
ClumsyPilot
Given that a homelab is basically a garage for fucking around, arguing for what qualifies for fucking around is rather pointless
Karrot_Kream
You're conflating the consumerist urges a lot of people indulge in their hobbies with the idea of a homelab itself. A homelab can be overspec'd Ubiquiti gear or it could be a RasPi running a bunch of services. It's just one or more servers that sit on your home network that you can fuck around with. Yeah, I guess something whose stability you care about doesn't necessarily fit the "lab" distinction, but a lot of the time these stable things come out of experimenting in a homelab.
My "media server" (browser and downloaded media played via smplayer on a stock Ubuntu install) emerged from an experimental server running a lightweight distro that I used to do anything and everything from. Once I found which parts of the media use case fit into my partner's and my lifestyle, it graduated to a stable, decently specced Ubuntu machine that is rarely touched other than for updates and downloading new content.
noitpmeder
Why are you trying to gatekeep what is or is not a homelab. Just because it isn't sufficiently complex doesn't mean it doesn't fit the definition.
Psychotherapist
I think of a homelab as one or more servers (or a computer, laptop etc.) located in a home to play around with software, virtualization, hosting stuff both for testing and actual functional (home) use. Basically everything that's experimenting (like in a real lab) with technology. Of course the definition will be different for everyone :)
zrail
I have a production hypervisor (HP EliteDesk 800 G3 Mini). This is where things run that my spouse cares about, in particular, Home Assistant. I don't generally mess with this machine.
I also have a lab hypervisor (Dell T30). This is where I feel free to experiment with VMs and accidentally on purpose have to rebuild it every once in a while, take it down to swap out hardware, etc.
Handytinge
That depends on your thinking.
My "media server" consists of a web application, backend application, multiple *arr services, transcoding automations, fibre termination, user account management shared across multiple machines and services, multiple VLANs and LUNs, etc.
All these are spread across 16RU or so, but really only serve as a "media server".
heavyset_go
Who said anything about the cluster being just a media server?
scarface74
That’s not a “HomeLab”. You can create a Plex Media server with a $249 Nvidia Shield and some low cost USB hard drives.
_ofdw
Yeah the best part of the home lab hobby is gatekeeping because you spent more than someone else and you need to belittle them to justify how much you spent on your Juniper or whatever.
nomel
> Consumer hardware is insane.
With less-technical management, I've had repeated, bewildering conversations trying to get them to understand that the one "computer" sitting on my desk is many, many times faster than the "server" our IT team provides. "But it's a server!"
bigiain
I like to point out to people who haven't worked it out for themselves that the load-balanced HA pair of EC2 instances with the multi-AZ RDS that runs almost $200 a month at on-demand rates has somewhat less computing power and storage than the phone in my pocket.
KennyBlanken
Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.
Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)
Many times faster doesn't mean shit if it doesn't have redundant PSUs.
Many times faster doesn't mean shit if failed drives can't be hotswapped.
Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
Etc.
If you want better hardware from IT, tell management to give them more money. IT is almost always viewed as a cost center and given a shoestring budget, yet asked to do, and be responsible for, the world.
You know how you're experienced from all your years as a programmer? Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
ReactiveJelly
> Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
The server is a big pie. If you're buying a single slice, then yes, it's very very easy for a cheap old desktop to be way faster than a cheap VPS.
> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
It's the managers that are idiots. Not everything needs to run in a datacenter. Some things really are kittens and not cattle.
arinlen
> Many times faster doesn't mean shit if it takes up 4-6x more space than it needs to in what is likely the most expensive commercial real estate the company owns/leases.
Unless you're hoping to monetize that spot on your desk, the real estate market means nothing in terms of cost.
> Many times faster doesn't mean shit if it can't be remotely lights-out managed and its hardware monitored using standardized tools (or at all.)
What stops you from "using standardized tools" on a box you own?
> Many times faster doesn't mean shit if it doesn't have redundant PSUs.
What leads you to believe that all those 9s are relevant, or that the alternatives can't match them? In fact, I'm not sure the latest rounds of outages at AWS allow it to claim more than three 9s over the past year.
> Also, the computer sitting on your desk is not "many many times faster" than a current, or even few years old, server.
Actually, it is.
Aeolun
So you are basically saying that 99% of the time it will work fine. Got it.
But seriously, they were comparing to the server they got, not the one you have or can provide.
It’s entirely reasonable for the IT team to provide a VPS that doesn’t have nearly the amount of power for an application that’s barely used. Doesn’t mean it’s easy to explain to management.
nomel
> Imagine IT people are the same, instead of assuming they're all idiots who are too stupid to go out and buy desktop computers instead of servers like your genius self.
Nearly all of your assumptions here are incorrect or flawed, except the redundant PSU (we only had one). But I do think they're just like me: working in a non-ideal environment with constraints outside of our direct control. The non-ideal constraint they had, in that instance, at that time, was that they could only give us a VPS with 4 threads. It wasn't possible to do what we needed with their server. Or, to put it in your language: five nines doesn't mean shit if, in practice, it makes a reliable space heater.
alsetmusic
> running a homelab is a nonsensical
I think a lot of people build a homelab to learn about technologies and get real-world experience deploying them. That's what drove mine for a couple of years. Once you’ve mastered servers and networking and so on, it just becomes a fun hobby; I agree there. But someone who wants to get into networking and the like (and who lacks experience) is definitely going to need to practice with real or simulated networks to get good at it.
scarface74
Deploying servers and networking is a lot more fun when you can just sit down to Visual Studio Code and write some HCL or Yaml…
And yes I was around before the “cloud” was a thing. I first networked a Mac LC II and PowerMac 6100/60 to play with a Gopher server.
genewitch
looks better than my quick and dirty wireguard setup to get NAT Type: A behind CGNAT on game consoles - Basically put whatever is connected to a device in a "public DMZ", separate from your network: https://raw.githubusercontent.com/genewitch/opensource/maste...
WireGuard is both very frustrating and very cool. I'm currently using it similarly, to give a VM a public IP, and I'm testing the details of getting multiple SSL/HTTPS hosts behind that single IP, which is something you couldn't easily do a decade ago without the host with the single IP holding all of the certificates and "MITM"-ing the entire session.
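For what it's worth, the multiple-hosts-behind-one-IP part can be sketched with nginx's stream module and `ssl_preread`: the front host routes on the SNI name in the TLS ClientHello without terminating the connection, so each backend keeps its own certificate. Hostnames and backend addresses below are made-up placeholders:

```nginx
# Requires nginx built with --with-stream_ssl_preread_module.
# This block lives at the top level of nginx.conf, alongside (not inside) http {}.
stream {
    # Pick a backend based on the server name sent in the TLS ClientHello.
    map $ssl_preread_server_name $backend {
        app1.example.com 10.0.0.11:443;
        app2.example.com 10.0.0.12:443;
        default          10.0.0.11:443;
    }

    server {
        listen 443;
        ssl_preread on;        # peek at SNI without terminating TLS
        proxy_pass $backend;   # pass the raw TLS stream through
    }
}
```

Since TLS isn't terminated at the proxy, this avoids the "one host holds all the certificates" problem the decade-ago approach had.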
Speaking of "CPU horsepower": I just replaced a 1.5kW HP[0] server with a 0.2kW Ryzen 5950X "server" that is about 5% faster overall. Don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.
[0] the iLO (lights out) actually reported the server's power usage in BTU. It drew the same power as a portable space heater.
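That constant-rate cost adds up quickly. A back-of-envelope sketch, assuming a hypothetical tariff of $0.15/kWh (your rate will differ):

```python
# Yearly electricity cost for a machine with a roughly constant power draw.
# The $0.15/kWh tariff is an assumed example value, not from the thread.
def yearly_cost(watts: float, usd_per_kwh: float = 0.15) -> float:
    kwh_per_year = watts / 1000 * 24 * 365  # kW times hours in a year
    return kwh_per_year * usd_per_kwh

print(f"1.5 kW HP server: ${yearly_cost(1500):.0f}/yr")  # $1971/yr
print(f"0.2 kW Ryzen box: ${yearly_cost(200):.0f}/yr")   # $263/yr
```

At that tariff the swap saves on the order of $1700 a year, which pays for a lot of consumer hardware.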
7speter
> Speaking of "CPU horsepower" i just replaced a 1.5kW HP[0] server with a .2kW Ryzen 5950x "server" that is about 5% faster overall - don't forget that old stuff, while capable, adds to the electric bill, usually at a constant rate.
What I’ve observed is that people on subs like r/homelab and r/sysadmin ridicule people who appreciate the horsepower available in modern consumer tech because of “no ECC memory” or the like. I wonder if people looking to build labs using the latest Ryzen or i7/i9 (really, I’m thinking of getting started by converting an old ThinkCentre with a 4th-gen i5, possibly undervolting the CPU, and 24GB of DDR3 into a pfSense router and some sort of server) will really be missing out on some necessary enterprise feature.
smackeyacky
Enterprise servers make no sense at home, but they are more fun to play with than old laptops. It's a hobby. After a while you appreciate buying good tools rather than making do, like any other hobby.
Old HPE servers have jumped in price though. Last year you could buy insanely powerful stuff for under $200 but it's all $400 and up for the same gear at the moment.
BackBlast
> “no ecc memory” or the like
ECC is about long term stability and data integrity. For a router, meh, the network protocols will deal with any flips. For a file server or database it's better if those random bit flips don't happen to critical data.
AMD-based systems can sometimes be forced to use ECC mode even if the BIOS doesn't support it.
ECC is more important in systems with very large RAM footprints, because there's a much larger memory area for cosmic rays to corrupt. If you've got one or two sticks of RAM and you're not running vital business data, meh, it's not required.
I really like ECC. But I'm not really willing to pay a significant premium for it.
I run companion systems to production out of my house, mostly development lanes comparable to production deployments. If they're down, it's really not mission critical. I also run my home security/surveillance systems. The other significant systems are those related to my children's computer lab.
genewitch
I was running Wok + Kimchi + gingerbase on the HP; I'm now using Proxmox instead. Short of having lights-out (out-of-band) management built in, I haven't noticed much difference between the platforms being "a server" and "a desktop". Make no mistake, the 5950X is a monster chip, but it's still a desktop with too few PCIe lanes for me to consider it "a server". Luckily I only require enough PCIe to have an old GPU and extra SATA ports. If I were building out stuff to do more research, I'd want more PCIe lanes than the Ryzen desktop platform supports.
accountofme
On that note, Ryzen does support ECC.
It is not validated, so YMMV, but it works and is seen by Linux, for me at least.
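One way to check whether Linux actually sees working ECC is the kernel's EDAC subsystem in sysfs; a memory-controller directory only appears when ECC reporting is active (a sketch, assuming a kernel with EDAC support):

```shell
# If ECC is active, the kernel registers a memory controller (mc0, mc1, ...)
# under the EDAC sysfs tree; each exposes corrected-error counters.
if ls /sys/devices/system/edac/mc/mc* >/dev/null 2>&1; then
    echo "ECC active; corrected-error counts:"
    cat /sys/devices/system/edac/mc/mc*/ce_count
else
    echo "no EDAC memory controller found (ECC inactive or unsupported)"
fi
```

`dmesg | grep -i edac` right after boot is another quick way to see whether the EDAC driver bound to your memory controller.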
drewzero1
I recently bought an old HP server thinking it would be fun to play with. It turned out that it was loud and power-hungry, and for most things my needs could be served just as well by an old laptop. I ended up giving the server to a friend (who has their electric bill included in their rent).
marginalia_nu
> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
Yeah. My search engine runs on basically a beefy gaming PC with no graphics card and 128 GB of RAM. Not only have I taken supposed HN death hugs multiple times, I've had my search engine see multiple queries per second without so much as flinching. It took Elon Musk tweeting a link to one of my blog entries before I started getting more traffic than my computer could handle and it started dropping a few connections.
Modern consumer PCs are ridiculously powerful if you make good use of the hardware.
mcovalt
Very cool! Running the not-well-maintained https://hndex.org search engine (and other memory hungry linear algebra based services) was also my original motivation for tunneling to my home as opposed to hosting on a VPS.
Are you hosted via a residential ISP? It's my hunch that peering agreements favor routes of consumer -> data center -> consumer as opposed to consumer -> consumer. That's mainly why I tunnel. Has that been your experience?
marginalia_nu
Yeah, it's on residential broadband. Haven't really had much trouble, to be honest. Though I'm based in Sweden and we have fairly robust networking infrastructure all around; I guess that may be a factor.
I was hit by a botnet after my first big HN hug, so right now at least the search engine goes visitor->cloudflare->server, but if anything that's just added a bunch of ping.
I'm also doing crawling and so on on the same network and it's really not bad for that either. Granted my crawls are fairly slow and very far from saturating any connections.
agencies
Nice! Have you considered publishing your crawl data?
ReactiveJelly
> In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second.
To be fair they're not spinning every single piece of the CPU.
My desktop can play games from 2009-ish, but at idle it clocks down to like 30 watts.
It could play games from 2011 if I put the GPU back in, but the GPU's idle power draw is ridiculous...
brianzelip
Thanks for the guide. Congrats on the wedding!
2OEH8eoCRo0
Mine's an old laptop and a new NAS.
bagrow
Anyone else annoyed at how narrow the term Homelab really is relative to what it could be? Any scientific or maker hobbies could take place in a "home lab," from breeding seedlings, to soldering and electronics work, to 3D printing. But it really means just networking and servers?
Seems too narrow to me.
hnaccount141
I don't think anyone would object to wider usage, it's just that the sysadmin community is the only group to have adopted the term so far.
bmitc
It's an interesting use of the word lab when rack, server, stack, etc. would have been more accurate.
morganherlocker
My personal homelab (a stack of old laptops connected to a network switch) is mostly built around various experimental antenna arrays used for RTL-SDR hobbies (aircraft and maritime telemetry collection, mostly) plus home automation over a Z-Wave mesh network. The home automation ecosystem also has a lot of overlap with automated gardening/growing, since automated sensing and irrigation are a great use case for tools like Home Assistant.
noizejoy
I often wish that this kind of thing were called something more like "homedatacenter".
rnd0
/r/homeserver ?
DoreenMichele
It's likely easier, safer and more socially acceptable to set up this kind of home lab than the kind of maker spaces you are talking about. A lot of people go to (shared/public) maker spaces precisely because their home is not suitable for that kind of physical experimenting. You probably need space, money and expertise to do home experiments of that sort and then it's probably generally wise to keep it on the down low in most cases so you don't freak out the neighbors or otherwise draw problems to yourself.
qqqwerty
Agreed. I always click the link expecting some kind of electronics or bio lab, and am instead greeted by a server and some networking equipment.
I am not really interested in reading about this kind of stuff[1]. I have a few Raspberry Pis that serve as my "home lab" and that is all I really need right now. But I suspect the term took off because it gets a bunch of people like me to click.
[1] Not that I don't think it should exist; it's just not high on my list of interests right now.
rnd0
>Anyone else annoyed at how narrow the term Homelab really is relative to what it could be?
I wasn't before I read this thread!
I mean, my "home lab" is old computers from 2008 to 2016, and half of what I "do" involves simply testing what weird installations (e.g. NetBSD) I can get up and running. It's for tinkering, playing; the "lab" part coming from experimentation: "What happens when I do this...?"
I agree that the definition is too narrow. Then again, I mostly engage with it on /r/homelab, and the HN attitude seems to be much more restrictive; too restrictive for my tastes, personally.
myself248
Agreed. My home lab does include some services and networking to support them, but also some nutso wifi and other radio data hardware, electronics, 3d printing, large format 2d printing, mechanical fabrication, precision metrology...
itomato
I agree. “Home Data Center Lab” is not always implied.
ketzo
A little off-topic, but I love the style of this writer.
The humor is a little goofy in the best way; the structure is detailed, but still conversational. It just feels very… personable.
Like — computers are so goddamn neat. We should all remember to have fun with this stuff sometimes.
loudthing
I've never understood why fiddling with IT equipment in your home is considered a "home lab", since you aren't necessarily creating anything (like a laboratory would); you are just integrating other people's industrial components to serve non-business purposes. The most egregious aspect of this hobby is that you are often buying equipment at enterprise prices with your own money, whether or not it is used equipment.
teh_klev
> I've never understood why fiddling with IT equipment in your home is considering a "home lab" since you aren't necessarily creating anything
To paraphrase the first item in the HN guidelines:
"anything that gratifies one's intellectual curiosity."[0]
You can get to explore configurations and scenarios you might never be able to do at work. But it's not about work, it's about scratching an itch.
> The most egregious factor of this hobby is that you are often buying equipment that costs enterprise prices
If you know where to look you can pick up enterprisey kit cheap. It may not be the latest and greatest, but it's probably good enough to play with.
Just because you don't understand the attraction of this pastime (or any pastime) doesn't mean you get to judge or complain about folks who enjoy tinkering.
Once upon a time I built out a home lab to run a piece of software called Zebra to learn about BGP4 and CIDR. It was about scratching an itch. If energy prices in the UK weren't so utterly bonkers at the moment then I'd love to build out a new lab to mess about with some stuff. Again, to scratch a curiosity itch.
GekkePrutser
I view those as separate things.
I have a home ESXi server which I use to host all my own stuff. I don't consider that a lab as such. It's just my home production infrastructure.
And then I have another ESXi which hosts the stuff I'm testing for work, and only work.
In my work home lab I have full access, which I don't have even in the testing environment at work, due to fragmentation of responsibilities. Or, if I do, I step on people's toes when trying out stuff that belongs to their realm. Most things are interconnected, so testing something at work usually gets slowed down by the need to involve other people who don't always have time or interest. That's where my own lab comes in.
It's also a lot faster, due to not needing a VPN and not having to deal with servers halfway across the world. And not having to deal with the evil SSL-MITM Internet proxy at work saves me SO much time. Of course that work needs to be done to put things in production, but a lot of stuff never makes it to that stage, and then figuring out how to smooth things over with the proxy is just wasted effort.
atty
A lab is a place to experiment. The person conducting an experiment does not need to be pushing the envelope of human knowledge to enjoy and benefit from the process.
andrewf
When I was in college at the turn of the century, the rooms full of computers at university were called "labs". During the same timeframe, folks paying for a networking course+certification (eg Cisco's CCNA) would get access to a "lab" full of networking gear to play with.
These days, the nomenclature survives, but the facilities are often virtualized or remote eg https://learningnetworkstore.cisco.com/cisco-learning-labs
ghostly_s
It's a place to experiment with the skills you use in your profession. I don't see what you're struggling with.
loudthing
I guess I'm struggling with why I would buy the same commercial grade equipment to play with at home when my profession should be fulfilling the needs of that skill set. Sounds like if you feel like you need to do so, you're more than likely unfulfilled in your current career.
noitpmeder
People do this on their own time because they enjoy it! Why is this so hard to understand.
rhinoceraptor
It's the same reason why people like woodworking. Why buy a piece of furniture when you could spend 3x as much and do it yourself?
thatsunlikely
That's why you buy the last generation of common enterprise components used when industries mass upgrade, which happens every few years.
I built mine in 2018 and I paid $65 each for two E5 2670 v3 ($1600 MSRP, 2014 CPU), and $300 for a dual socket motherboard with 128GB of ram. Yeah it's not the latest and greatest but it's been going strong since then.
waynesonfire
Who the hell do you think you are judging people's interests and hobbies?
I could say the exact same thing about however you decide to spend your weekends.
ClumsyPilot
"you're more than likely unfulfilled in your current career"
That's quite a lot of people. Personally, I would rather be planting trees, if I could pay the bills with that.
mulmen
Meh. My motorcycle is ostensibly for use in endurance racing. I use it to go out for coffee and sandwiches.
tomc1985
Used, this shit is crazy cheap. I picked up a 16-core Xeon monster with hyperthreading and 128GB of RAM for under a grand, and the power bill for it is maybe $20 a month.
pkulak
My "homelab" is only used for things that actually provide value to me. I host a Matrix server for chat, Home Assistant, a Minecraft server, plus a few hundred gigs of backups and photos/videos.
I don't really know if I qualify, though. All that's on a 2011 Dell, a 2-bay Synology NAS, and a UPS I found on Amazon. I have zero urge to go out and buy a rack; it works great as is.
abriosi
I'm very pleased using PVE https://en.m.wikipedia.org/wiki/Proxmox_Virtual_Environment for my homelab
CameronNemo
FYI LXD can do both LXC containers and KVM VMs so it can replace Proxmox in a lot of cases. LXD is packaged in Arch, Gentoo, Alpine, Void, NixOS, and probably some other distros.
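For illustration, the container/VM duality is just a flag on the same command (image alias and instance names below are arbitrary examples; this is a sketch, assuming LXD 4.0+ with `lxd init` already run):

```shell
# Launch a system container and a KVM virtual machine from the same image server.
lxc launch ubuntu:22.04 web-ct          # LXC system container
lxc launch ubuntu:22.04 vm-test --vm    # KVM virtual machine instead
lxc list                                # both show up in the same instance list
```

Both instance types share the same profiles, storage pools, and network bridges, which is most of what Proxmox is doing for people anyway.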
runjake
I manage a network of a few hundred multi-layer switches, several routers, and a couple thousand wireless access points across a couple dozen campuses.
My home network is an Eero. The network engineer’s kids have no homelab, as it were.
Occasionally, I get the urge to build out a proper homelab but quickly realize I don’t want to come home and deal with the same stuff and put the Eero back in.
It’s not that I don’t like learning and experimenting, it’s just that I get to do that all day at work and I need some work/life balance.
But, I still love seeing these posts and commenters chiming in with their own configurations.
eminence32
I like your observation. I think the flip side applies in some cases too: the people who don't get to run fancy network gear as part of $dayjob get some enjoyment running it for fun at home.
runjake
And indeed that’s how I got started, temporarily borrowing hardware from my workplace and cobbling it together in my cramped apartment: beige Cisco routers, switches and hubs, sun3 and sun4 hardware, a microVAX II, etc etc.
I luckily had a mentor that let me bend the rules and bring stuff home. If you are a mentor you should also bend the rules for noobs.
binkHN
I'm in a similar boat as you, but it's across numerous various clients. HOWEVER, I REFUSE to use consumer-class equipment at home when possible. I spend A LOT more, but within some sense of reason, to buy similar business-class equipment at home. Why? Because consumer-class equipment often has bugs I can't easily resolve. On the other hand, business-class equipment has the debugging and logging capabilities that I need to quickly pinpoint and resolve an issue.
In the end, my network at home works better, has higher availability and, for the times that it doesn't, I can quickly resolve the issue and get back to the enjoyment of my home versus being frustrated as to why some consumer-class thing is not working yet again.
nullwarp
I never spent much time with networking, so a lot of it was a bit of a mystery to me. I spent the COVID time building out a really awesome but overbuilt network.
I learned a ton and can now actually understand what's going on under the hood!
Since then though I definitely have simplified it a lot because it was a lot to manage but it was an excellent learning experience.
This is my long winded way of saying, network engineers don't get enough praise - that shit is actually really damn difficult to get right at scale!
Karrot_Kream
Best part is debugging networking stuff is always really hard. Opening up a packet dump can help in some cases, but when you're trying to figure out why your nftables rule isn't registering a connection in the kernel connection table, you have to do some fun stuff to figure it out.
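As a sketch of that "fun stuff" (assumes root, conntrack-tools installed, and an `inet filter` table with a `prerouting` chain, which are assumptions about your ruleset, not universal):

```shell
# Watch connection-tracking events live for a specific flow, instead of
# guessing from a packet dump whether the kernel ever created an entry.
conntrack -E -p tcp --dport 443

# Mark packets in a chain for tracing, then follow them rule by rule.
nft add rule inet filter prerouting meta nftrace set 1
nft monitor trace
```

`nft monitor trace` prints the verdict at each chain a traced packet traverses, which is usually enough to see exactly which rule is eating the connection before conntrack ever registers it.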
philjohn
That's fair.
In the end though, I got fed up with my consumer gear (Orbi mesh system) never being very reliable, so had the house wired for Cat 6 and shoved a UDM Pro, USW-Pro-48-Poe and 3 U6 Pro access points in and it's been far more stable and problem free than the Orbi, to the point where the kids have stopped telling me "Dad, the wifi sucks!" and needing to reboot the whole thing, which I had to do at least once a month previously.
Bluecobra
I feel as I get older I have less and less desire to geek out at home, especially now with Covid/WFH. When I log off for the day I really don’t want to go back upstairs to my home office to do anything.
I used to have a FreeBSD server running pf and jails to partition out services. Now I've simplified everything with a little tiny Ubiquiti cube that I control with an app on my phone.
cagataygurturk
I have 3-node vSphere cluster with 36 cores, 300 GB of RAM, bunch of different disk options (magnetic, SSD, NVMe)
It is all powered by Ubiquiti networking gear.
VMware VMUG is providing free and legal VMware licenses. I get to run vSAN, and on top of it I deploy Kubernetes, so I can decide whether my pod wants to use poor-performance vSAN disks (like EBS on AWS) or faster local NVMe drives.
These are just a couple of the things I can do at home. Everything started as an experiment, and it began to look like a replica of what we can have in a datacenter, but without an SLA.
First of all, all this stuff is extremely fun for me. Secondly, I won't go into details, but as a cloud consultant, this thing I started as a hobby became one of the big boosters of my career. I never expected such an effect, but I am extremely impressed by it.
whazor
My comments:
- Network switches: if you want enterprise features, low price, and higher stability, and you are willing to deal with painful configuration, check out MikroTik.
- The servers mentioned consume quite a bit of electricity; for inspiration on building efficient servers, check out this Dutch server thread: https://gathering.tweakers.net/forum/list_messages/2096876 (use Google Translate). Note that the Framework mainboards also look quite nice as servers.
- Also check out the k8s-at-home template: https://github.com/k8s-at-home/template-cluster-k3s — it uses GitOps to set up services, which has proven very valuable to me when messing with configurations. I also built a search for Helm chart releases: https://whazor.github.io/k8s-at-home-search/
- If going cluster mode: having a NAS for storage is more stable than using Kubernetes storage solutions.
All in all, I learned a lot about servers and networking by doing this as a hobby.
itomato
These are always a hoot to read.
Attitudes toward dust ingress in particular delight me. Look inside any old Dolch data acquisition unit or tire alignment computer from the 1980s and find minimal reason to be horrified.
To think a home environment would necessitate anything special is excessive.
Pizza restaurants, OTOH.
voxadam
If you think restaurants are bad, and they are, you should try machine shops. I used to maintain CNC milling machines; one was in a shop that milled EDM sinkers[1] out of solid blocks of graphite. Graphite dust is a fracking nightmare. I quickly lost count of how many servo amplifiers and PLCs I had to replace.
[1] https://en.wikipedia.org/wiki/Electrical_discharge_machining
cosmiccatnap
Am I the only one who just slaps a few VMs together on an unmanaged switch? I don't need to tag anything, and the two VLANs I have are for the wifi and the house. One VM runs OpenWrt, another OpenMediaVault, and then I have Arch for my dev projects. PSU hanging out, sure, but all of that is just on a fold-out table under the stairs, and it's been rock solid for 3 years.
Don't get me wrong, it's nice to do this stuff and I'm not knocking people for spending so much time on it, but when you spend your day at work logging into systems and fixing broken junk, the last thing you want to do is come home to a dead network or VM issue to solve.
sponaugle
Great to see a write-up about someone's experience building a homelab. I love homelabs, and I recently built a new house and made some dedicated space for mine.
https://www.reddit.com/r/HomeDataCenter/comments/ktz6yo/my_s...
I added 20kW of solar as well to offset the power usage.
It is interesting to see what other people have built, why they made the choices they did, and what they are using it for.
Daegalus
See, I have no interest in running large server hardware at home. I did it differently.
At first I had a stack of 2-3 old laptops running stuff. Now I just use 4 Raspberry Pi 4s with 8GB. It lives in a cubby in my office desk. Low energy, low maintenance, and it works really well. Got rid of the laptops.
My network is the TP-Link Omada stuff, sitting on some shelves in our family room. Everything is wifi, and I run a 2nd AP on the opposite side of the house, using MoCA 2.5 to connect them over the coax in the 2 rooms, to improve performance.
tikkabhuna
I’m still looking for Raspberry Pis. I have some on back order. Due for delivery in December and I get an email once a month pushing that back.
Daegalus
Ya, I am currently on the hunt for CM4s for a weird restoration project. But haven't found any yet. I was lucky for the 4 that I have.
I use https://rpilocator.com to monitor stock.
This is fun. I'm more of a minimalist with my homelab setup. It's a laptop and an old NAS. I love it either way: running a homelab is a nonsensical and fun hobby in any case.
I feel like we live in a world in which it's either racks or cheap VPSs. In reality, at home, we have some serious CPU horsepower just spinning billions of empty cycles per second. Consumer hardware is insane.
I've handled tens of thousands of unique visitors per minute, and more than a couple of front-page Reddit + Hacker News herds, on this little laptop through a residential ISP.
Here's my setup (pic down at the bottom): https://kiwiziti.com/~matt/wireguard/