cientifico
On a cloud server (hidden from the public):
www.keycloak.org - auth, mostly for Outline
www.getoutline.com - my personal "notion"
nginxproxymanager.com - to proxy things
Wireguard - remote access and interconnection between zones
cockpit-project.org - to manage VMS
github.com/coder/code-server - To remote develop
2x of docs.paperless-ngx.com/ (one for me and one for my partner) - I scan and destroy most of the letters I get.
snibox.github.io/ - my terminal companion
Pi-hole (together with WireGuard, so I don't have ads on my devices)
uptime.kuma.pet - to be sure that things are online
mailcow.email/ - for non priority domains
docs.postalserver.io/ - mail server for apps and services
At home (a small 6W NUC) with: HomeAssistant - To control home lights
CUPS - to share printers
Wireguard - (connected to the cloud)
apeters
Even though I sold mailcow and stopped working on it in the past months, it warms my heart SO MUCH to see people using it. :) Really, thank you.
I have some ideas for a proper successor which would be much more scaleable, modern, flexible and would be more focused to use existing transports for piggybacking mail. A fully and safely encrypted mail storage as well as a cool interface to invite for encrypted sessions etc. Smtpd and imapd using modern Python modules.
—
Providers found out it was easier to block private senders instead of developing good filtering mechanisms. I kind of understand this decision, but spammers simply abuse services like Sendgrid or create spam accounts on MS.
So… nothing really changed with regard to spam. We just lose actually *wanted* mails or have to look into the Junk folders more often, all while newsletters get on the priority lane.
That makes me unsure about developing anything in this regard. :(
André
ralgozino
Hi André! I just wanted to thank you for mailcow! We used it a few years ago at my previous job to get a new domain up & running, and we had a professional, best-practices-configured email server in no time. Great experience also for day 2 operations! Nothing like the other mail solutions available at that time.
Wishing you the best in your next endeavors!
apeters
Thank you! :)
pmontra
To the new owners: I googled mailcow, got to the web site and found nothing to explain what it is, not even in the docs. Eventually I noticed the "People also ask" section in the Google results page:
> What is Mailcow?
> fully managed by Elestio. Mailcow is a Docker-based email server, based on Dovecot, Postfix and other open-source software, that provides a modern web UI for administration.
Hopefully this is correct but who knows.
apeters
Hi, it is now owned by tinc.gmbh
I sold it in 04/2021
cientifico
Hi André.
First of all, thanks. I used to host my own email 20 years ago, and when I saw how easy it was with mailcow, I decided to try again.
Looking forward to the mailcow successor. Is there anything up already?
Cheers.
apeters
Hi :) Not yet. I’m in a very burned out mode right now. Need to settle a bit.
apeters
Oh, and thank you for your kind words!!
tsujamin
I absolutely _loved_ mailcow. Worked out of the box way better than my hand rolled solution, and ended up using it for some small orgs too.
Thanks for all your work on it!
apeters
Thank you very much. :)
3r8riacz
Wait, what? Was mailcow sold? What does it mean for future development and features? Should we start to look for an alternative?
bmitc
> 2x of docs.paperless-ngx.com/ (one for me and one for my partner) - I scan and destroy most of the letters I get.
Ah, interesting. I have been considering finding a solution like that. How do you like it? Are there other alternatives you considered?
cientifico
There are alternatives. I chose paperless because it was the most mature open-source solution.
Be aware that there is paperless, paperless-ng and paperless-ngx.
When I did the setup last year, paperless-ngx looked like the most maintained.
Semaphor
> When I did the setup last year, paperless-ngx looked like the most maintained.
Just FYI, neither of those was a hostile fork; in both cases the community took over after the former maintainers abandoned the project. NGX is indeed the active version.
derkades
I used paperless for a while but ended up just saving pdfs to directories instead. I can find what I need without fancy OCR features by organizing and naming files sensibly. And I am fairly certain this will still work in 40 years, while paperless, or even Linux, might no longer exist.
bmitc
That was sort of the approach I had been planning on going for (manual scanning and putting it in directories that get cloud backed up). I have a pretty good scanner app on my phone, and so just scanning after opening the mail for some important document or invoice, uploading to the document storage, and then shredding seemed to be pretty low fanfare.
fdw
In the end, paperless also stores the original PDFs in a directory. You get some more goodies from it, like automatic OCR and a web interface, but your baseline is also included.
lannisterstark
Why not do both? Use rsync to do a one-way sync (with a filter list) into the paperless consume directory, so whenever you drop new documents into your original folder structure they get copied to consume and paperless ingests them.
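Something like this would do it (a minimal sketch; the source, consume and filter-list paths are just placeholders):

rsync -av --ignore-existing --exclude-from=/home/me/paperless-filter.txt /home/me/Documents/scans/ /opt/paperless/consume/

Run it from cron or a systemd timer. Note that paperless normally removes files from consume/ after ingesting them, so a plain copy can re-send old files on later runs; paperless-ngx's duplicate detection should skip them, but filtering by modification time keeps things tidier.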
mrccc
I can recommend paperless. I am running it locally in a Docker container and sync database/files with iCloud (so they are backed up). I've been thinking about putting them on the actual web to access them anywhere, but so far having paperless "just" on my computer was enough.
undefined
undefined
gureddio
Thanks for sharing this list! I wasn't aware of nginxproxymanager. This is something I've been doing semi manually for years. It looks like it'll slot right in and save me some work
tunnuz
How do you keep the server hidden from the public?
cientifico
I just don't assign public IP addresses to the private VMs.
Public (with different public IPs) I have:
* Mailcow
* Postal
* Code server (just a dev machine with public ip)
* Wireguard
* NginxProxyManager
Private (without public IP) I have:
* PiHole
* containers (where I run most of the stuff)
* Monitoring
cpach
One option is to use Tailscale or plain Wireguard.
pjc50
You can run pihole in the cloud? That is very useful information.
yanokwa
You could also use https://nextdns.io. It’s basically pi-hole in the cloud.
xref
I pay $20/yr for their service, it's so good, and I can turn it on/off quickly per-device when I need normal DNS to work, instead of having to SSH in and tweak pi-hole or whatever across the whole network.
sgjohnson
Why wouldn’t you be able to? Pihole is just dnsmasq with a frontend.
jon-wood
The biggest curse of the Raspberry Pi is a whole heap of stuff that has gone from “needs a UNIX server” to “but surely that requires an underpowered UNIX server with a very specific distribution of Linux”.
crtasm
You should not open its port to the internet at large (look up: open resolvers) but yes you can run it anywhere and access over VPN, etc.
lxgr
I've been using a Raspberry Pi as a home server, and it's been holding up amazingly well, given everything I've thrown at it:
- The excellent Home Assistant, for unifying across Homekit and Google Home and tracking historical temperatures and a couple of automations. The RPi has Bluetooth built in, so I can capture the data from a few Bluetooth thermometer/hygrometers running custom firmware (https://github.com/pvvx/ATC_MiThermometer) without a 802.15.4 bridge or similar.
- An AirPlay to Google Cast bridge, mainly for listening to Overcast or the occasional YouTube video on Google speakers (without subscribing to Youtube Premium/Music)
- A SMB server, for file storage and potential Time Machine backups (but I don't currently have enough storage, and locally attached SSDs are just hard to beat in terms of performance)
- A DLNA server, for watching photos and videos on my TV
- Tailscale, for the occasional use of my home connection as a VPN when traveling (really glad to be having symmetric fiber for this!)
- Caddy, as a frontend for everything web facing, to benefit from its excellent Let's Encrypt integration for automatic certificate requests and renewals
Most of this is running in Docker containers and configured via Ansible, so that if the microSD card burns out (or I botch an OS update), I can just flash a new one with an empty image and recover from there.
LeoPanthera
All of my Raspberry Pis netboot, which means I never have to worry about a card burning out, and I can change what they boot into by just renaming a symlink on the server.
BanjoBass
Seems like a smart way to do it, but that relies on having another always-on system standing in as the server. OP's solution only requires the one device (the RPi) unless something needs to be changed.
cassianoleal
If you run an open source router distro like OpenWRT or OPNSense, you can use it as the PXE Boot host. That's a device that needs to be running anyway.
I've seen a lot of people running their routers as a VM on something like Proxmox, and that gives you even more flexibility, but it does require a beefier server - one that could potentially replace all the RasPis, making the PXE boot redundant. :D
LeoPanthera
That's true. In theory you could use an extra Raspberry Pi itself as the netboot server. But since my home network already has a fileserver it was an easy choice.
sorenjan
Can you netboot from a desktop computer that's not always on? If the desktop is running when the Pi is booting, does the Pi then need the "server" (desktop PC) again before it next needs to reboot?
LeoPanthera
No, because the root filesystem is mounted by NFS.
mattvot
Which AirPlay to Google Cast bridge are you using? Thanks
lxgr
This one: https://github.com/philippe44/AirConnect
And here's a containerized version that works with the Pi: https://github.com/1activegeek/docker-airconnect
joejoesvk
I'm trying to buy an RPi but can't decide which configuration. What are the minimum requirements to run the AirPlay bridge and Home Assistant?
greggyb
If you want a Pi (or alike) for specific reasons, go for it!
But, if you are looking for cheap and relatively low-power compute, I strongly recommend looking at used ultra small form factor PCs. You can get much more computational power and expansion, often for cheaper than a Pi. And eBay is riddled with these things, unlike recent Pi availability.
https://www.ebay.com/sch/171957/i.html?_from=R40&_nkw=%28len...
The pricing gets even better if you want to buy them in a lot.
lxgr
I'm using a Model 4 with 4GB of RAM, but 2GB would probably also be ok. (I haven't measured peak memory usage, but I imagine that having some spare memory for file caching reduces read contention for the SD.)
Home Assistant can take a minute or two to start up, keeping all four cores quite busy, so I wouldn't recommend trying this on an older model myself.
If you want to use the Pi's internal Bluetooth module, obviously you'll also need one that has one and supports Bluetooth LE. Again, I can only speak for the Model 4 here, which works great for that.
3np
In terms of minimum requirements, you could even run that on a 0 or older.
In practice, you probably want at least 3b+ for reasonable performance. If you're buying new anyway and can get them for MSRP I don't see why you wouldn't get a 4.
capableweb
Rather than listing everything I'm hosting at my home server, I'll just share what saved me the most time, repeatedly:
Mirrors of various package repositories I use.
I'm currently mirroring npm, crates, Arch packages, clojars, maven and some other things, then all my machines (desktop, servers and laptops) point to the mirror rather than directly to upstream. Some of them are mirroring dynamically (basically a cache at that point, do this for npm for example) while others I fetch the entire repository and keep on disk, cleaning out old packages when needed (I do this for Arch packages for example).
Best benefit is that downloading stuff and updating my machines takes seconds now, even if there are multi-GB updates to do, and a secondary effect is that I'm not impacted by any downtime from npm et al. Saved my bacon more than once.
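The client side is mostly one-liners; a sketch, assuming a mirror host called mirror.lan (a placeholder) running Verdaccio for npm on its default port and a plain HTTP Arch mirror:

# npm: point at the local registry/cache (Verdaccio's default port)
npm config set registry http://mirror.lan:4873/
# Arch: prepend the local mirror to the mirrorlist
sudo sed -i '1i Server = http://mirror.lan/archlinux/$repo/os/$arch' /etc/pacman.d/mirrorlist

With that in place, pacman and npm hit the LAN first and only the mirror machine ever talks to upstream.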
Bedon292
I would be interested in doing some sort of hybrid mirror / cache of a few repos, if I could do them just in time style. I don't need all of pypi (nor do I want 13.5TB of packages). I probably only ever use a few hundred packages at most.
I would like to point all my systems at my server. And if I `pip install pandas` and it's not on my server, the server grabs it, passes it through, and syncs that package locally from then on. Same with yum, npm, docker, or whatever.
And I just realized I could use Artifactory as a caching proxy and at least save some time there. However, that doesn't mirror the package, it just caches that specific version. I would be very interested in something where the system sees that I use `pandas` now and will mirror it. Or give it a requirements.txt and it flags all those packages and dependencies for mirroring.
pbronez
Maybe DevPi would work?
https://devpi.net/docs/devpi/devpi/stable/%2Bd/index.html https://blog.jcharistech.com/2022/08/13/how-to-run-a-pypi-mi...
another option is https://pypi.org/project/python-pypi-mirror/
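If you run devpi, its root/pypi index already behaves as a pass-through cache (it fetches from PyPI on first request and keeps the files), so clients only need to be pointed at it; a sketch, with the host and port as placeholders:

# point pip at the caching index
pip config set global.index-url http://mirror.lan:3141/root/pypi/+simple/
pip config set global.trusted-host mirror.lan

It's still a cache rather than a full mirror, though: only versions that some client has actually requested end up stored locally.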
Bedon292
It looks like DevPi may just work as a caching proxy as well. I also found Bandersnatch too. https://github.com/pypa/bandersnatch which is a configurable mirror with allow / block lists.
Essentially want something like DevPi but add the package to the bandersnatch allow list and mirror it from then on. With some extra large packages in the deny list.
Probably possible to wire that all up reasonably well, but probably just using a caching proxy is 90%+ of the improvement anyways. So may just stick with that.
capableweb
I don't know of any particular solution for the python ecosystem, I don't use it often enough to justify dealing with that can of worms.
But for npm, there is Verdaccio which does exactly what you want, and is what I'm using.
nikisweeting
Can you share a bit about how you set this up?
Also curious if anyone has taken this a step further and MITM Squid proxied their whole home to cache all responses >100mb or something like that.
capableweb
Basically just a bunch of shell scripts for the most part, run with systemd timers on a PC-like server (consumer components), served via Caddy.
Npm registry is using verdaccio, arch packages is using https://gitlab.archlinux.org/archlinux/infrastructure/-/blob... + what is outlined at https://wiki.archlinux.org/title/DeveloperWiki:NewMirrors, clojars is just reading the list of packages and downloading them one by one each day.
Not a unified or nicely done setup by any means, just thrown together to solve the problem at hand.
Nzen
I set up Apache Archiva [0] to cache Maven binaries at work. It was okay.
jamroom
Gitea works really well for this - just choose "new migration" and set the remote up as a mirror - super easy.
grepfru_it
I used to do that for my servers. It works quite well but packages update so frequently you don’t see a major benefit
cableshaft
How much space does it take to mirror npm currently? Tried finding that online but not sure where to look.
capableweb
The last time I did it was a year or so ago, in order to dig out some statistics; it was around 1.8TB then, counting just the latest available version of each package (not every version of every package).
But as another commenter said, I'm only caching packages that have already been fetched now, as I have no need for everything in the registry (most of it is junk, to be honest).
hbn
It sounded like they're essentially caching npm, not mirroring the whole thing
> Some of them are mirroring dynamically (basically a cache at that point, do this for npm for example)
teekert
NextCloud, Home Assistant, Paperless NGX, Minecraft, Caddy for some Hugo sites/blogs, Unify Controller, Vaultwarden, Traefik, Sabnzbd. Used to do WireGuard but now I use Tailscale, AdGuard home and FoundryVTT.
It's all docker-compose. I'm thinking of taking some services off the internet using Tailscale; some already are just on the Tailnet (Home Assistant and Paperless NGX), and all my SSH ports are now only open to the tailnet as well. I love Tailscale, except for the battery drain on my iPhone (which wasn't an issue with plain WireGuard)...
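One way to do the "SSH only on the tailnet" part, as a sketch assuming ufw and Tailscale's default tailscale0 interface (adjust if you use a different firewall):

# allow SSH only via the Tailscale interface, drop it elsewhere
sudo ufw allow in on tailscale0 to any port 22 proto tcp
sudo ufw deny 22/tcp
sudo ufw reload

The same idea works with nftables or by binding sshd to the Tailscale IP; the ufw version is just the shortest to show.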
Btw, this is my hardware: https://blog.hmrt.nl/posts/personal-cloud-infrastructure/#ha... (beware, the rest of the post is somewhat dated).
Oh, I moved Paperless and Home Assistant to a NUC, mainly because I'm working on the house a lot and I really need those online, those were also set up with Tailscale in mind, and the advantage is that whether that NUC is plugged in, on WiFi or at my parent's place, all services (inc ssh) are still available at the same IP address (of course sensors drop when I move the NUC out of my network, there is no Tailscale for the Shelly plugs etc :)). I'm thinking of a similar setup for my NextCloud now. Always innovating, it's a nice hobby.
sithadmin
>I love Tailscale, except for the battery drain on my iPhone (which wasn't an issue with plain WireGuard).
Netmaker is a good self-hosted alternative that utilizes Wireguard.
cassianoleal
> I love Tailscale, except for the battery drain on my iPhone
Same, I hope they can fix this soon. :(
avnigo
I've run into this too, and there's been an open issue on this for some time now:
edsimpson
Yah it really does destroy my battery on iOS.
jnovek
“It's all docker-compose.”
Are you using a utility to manage docker-compose or just setting it up on the CLI (e.g. started with a systemd service)?
I run docker-compose via CLI on my server but it’s not always convenient to ssh in to check on something.
teekert
Since it's all a YAML file, I just use VS Code's Remote-SSH to check: the file opens in VS Code and one click opens a shell at the bottom (e.g. for `docker-compose ps`). A nice thing is that Remote-SSH also allows dragging and dropping files to the server without needing anything else, though I rarely need that. If you add the `restart: unless-stopped` line, all containers just come up again after crashes/reboots.
On Arch I just `sudo systemctl enable docker` (after `sudo pacman -S docker docker-compose`); I wrote a bit about it here: [0]. This starts docker automatically, and docker automatically starts any containers with the above-mentioned line in their service entry.
grepfru_it
Look into portainer
jckahn
Very cool! Since this is all exposed to the internet, how do you keep it secure? I’ve got a spare laptop and a static IP, but I’m concerned about exposing my home server to attacks. Right now I’ve just got it all running in Tailscale, but I’d like to safely host public-facing apps too.
teekert
I religiously keep everything up to date, for anything exposed to the public I use HTTPS (Traefik does Let's Encrypt, Caddy as well) and set up 2FA for Nextcloud, for example; there is also a brute-force protection app for NC.
Some services, like HA, my Minecraft servers, Paperless, in fact most of them, I would indeed feel less comfortable exposing, and I don't. But NextCloud I also use to share large files with friends, so it needs to be internet facing, as do the blogs; Hugo generates static sites, though, so that is quite secure (like the earlier-mentioned blog).
jckahn
Thank you for the explanation! It sounds like a pretty solid system.
DrPhish
One extra layer I put on my externally facing sites is a simple auth prompt (after redirect to HTTPS!) as an unlikely-to-be-compromised gate before any logon for a self-hosted service. You can make it a fairly easy-to-remember username/password for anyone you want to share your self-hosted apps with, since it's a mostly irrelevant extra step just to guard against exploits in more complicated software stacks.
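A minimal sketch of that gate, assuming an nginx front end and the htpasswd tool from apache2-utils (the file path and username are placeholders):

# create the shared credentials file with a bcrypt hash
sudo htpasswd -cB /etc/nginx/.htpasswd family
# then reference it from the reverse-proxy vhost:
#   auth_basic "private";
#   auth_basic_user_file /etc/nginx/.htpasswd;

Caddy and Traefik have equivalent basic-auth directives/middleware if you front things with those instead.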
grepfru_it
I started using Traefik as my load balancer, which supports authentication middleware. I rigged up Keycloak and forward-auth to handle external services that either do not support authentication or have a weak security profile. A poor man's zero trust setup.
Here is the blog I used to get things started: https://geek-cookbook.funkypenguin.co.nz/docker-swarm/traefi...
GTP
In addition to the other comment, look into fail2ban: it's brute-force protection that isn't application-specific; it can be configured to protect from brute force any service that logs login attempts somewhere.
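Getting started is mostly dropping a jail.local next to the packaged defaults; a minimal sketch that only enables the sshd jail (the ban and retry numbers are arbitrary):

sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
EOF
sudo systemctl restart fail2ban

Other services (nginx, Postfix, Dovecot, etc.) get their own [sections] the same way, as long as fail2ban has a filter for their log format.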
ornornor
> docker-compose
How do you deal with backups? That’s my main struggle with docker.
LilBytes
I use Borg (https://borgbackup.readthedocs.io/en/stable/) with some Python and Bash scripts.
All my containers write to the same volume mount (e.g., /mnt/docker_share/$service_name); the scripts shut down the LVM volume, run the Borg backups, sync the files to rsync.net, and turn the LVM volume back on when the backups finish.
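Stripped of the LVM handling, the core of that loop is small; a sketch with made-up paths and repo names, assuming a local Borg repo that already exists and SSH access to rsync.net:

#!/bin/bash
# remember what was running, then stop it so the data files are quiescent
running=$(docker ps -q)
docker stop $running
# archive the shared volume mount into the local repo, then prune old archives
borg create --stats /backup/borg-repo::docker-{now} /mnt/docker_share
borg prune --keep-daily 7 --keep-weekly 4 /backup/borg-repo
# push the repo off-site and bring the services back up
rsync -az /backup/borg-repo/ user@rsync.net:borg-repo/
docker start $running

rsync.net also speaks Borg directly over SSH, so the repo could live remotely and skip the rsync step entirely.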
keraf
Not my home but my parent's (as I'm a "nomad"). They recently built a new house and put me in charge of the tech with a handsome budget, so I built a rack just like it would be my place. Ended up costing 1/4th of what other "smart home" companies quoted. We got way better bang for the buck and software capabilities.
On the server/NAS, a QNAP TS-464eU (4x 4TB HDD RAID 5 + 2x 1TB SSD RAID 1), I'm running Container Station with 4 docker containers:
- Home Assistant => For all the home automation, displayed in the kitchen on a tablet.
- Adguard => To remove internet trash and protect my parents when browsing the web.
- NextCloud => For contact, calendar and file sharing in the family as well as backups.
- Caddy => Reverse proxy to make NextCloud available from the outside.
Computers (including mine) are backed up daily via NextCloud. The NAS is also backed up off-site with a cloud provider. I did a more comprehensive list of the setup[0] if you are interested.
jstummbillig
As someone who gets caught in the occasional "could you just fix x – what do you mean y is not working? It always worked before you got here"-trap, essentially signing up to run your parents' entire smart home setup, indefinitely, feels like some special kind of hell.
keraf
My choice with their new house was pretty simple. They wanted a smart home, and being the tech son I would inevitably have to help them at some point. So either I debug a system that is open and familiar, with remote access and reliable software/hardware, or I deal with some closed proprietary trash which will end up with me on the phone with some incompetent cash-grab company. As mentioned in my post, the latter would have cost 3 to 4 times the price of all the hardware we bought (see the link) and we would just get some SBC or cheap tower instead.
With the current setup, I can easily connect via VPN to configure network devices. Ubiquiti also makes it very simple to apply network changes or upgrade things remotely (as long as it's from the same brand). I set up extensive monitoring and alerting to proactively resolve issues. I also gave my parents some training and docs on how to plug things into the Ethernet sockets (if they want to set up a new printer, for example) or what to do to bypass the router (rewiring) in case the main router fails.
As a side note to helping my parents. They are both 70, not the most tech literate people but they manage to do all basic things without much trouble. I put them both on Ubuntu 8 years ago, never had any issues whatsoever. I get maybe one message or call in 6 months for a question, but it's usually web related. Once in a while when I get home I update the distro, but that's pretty much all the maintenance I do. I have Teamviewer on all their devices, just in case but never really had to use it so far.
jstummbillig
> They are both 70, not the most tech literate people but they manage to do all basic things without much trouble. I put them both on Ubuntu 8 years ago, never had any issues whatsoever. I get maybe one message or call in 6 months for a question
Okay, so can I hire your parents?
mdip
I had a few questions if you don't mind answering as I'm doing something similar for a vacation home (and my own, but this is more relevant to the remote one).
How many IoT devices are you working with?
I don't have the best router up north and some of my lousier routers can't handle more than about 25 devices before they start crapping the bed. Do they disconnect often from your router (and do they successfully reconnect)?
Similar router problem, though I've found some of the finickier devices I own can have problems in my home, where I have a number of options for connections. If you haven't had these problems, what router are you using?
What about "the internet is down"?
This was less of an issue when everything was Z-Wave/ZigBee, but all of the cheap stuff is WiFi. This mostly only concerns "the lights" and "the plugs". I don't want a switch to stop being able to control devices it's not directly attached to if Home Assistant or the target device can't reach the internet. All of the ones that I own broadcast state and can accept commands via UDP over the local network, so I was thinking of writing something that HA could call, locally, which would issue those commands and receive the statuses (so they'd always be local-only).
nixgeek
There’s another aspect though - when you come to sell the home it’s easy to market Crestron, Control4 and others specifically because they are standard (albeit expensive) solutions and have a whole ecosystem of consultants who can be brought in to diagnose and fix issues, upgrade stuff.
With DIY you are usually left with at best ripping it all out and selling it without any “Smart Home” promises, you can I guess still market that it has structured cabling to enable smarts though?
philippejara
In my experience signing up to do it or not is irrelevant because I'll end up doing it anyway since they're my parents, so i'd rather have full easy remote teamviewer/ssh access to the stuff I know works and how it works because I set it up instead of going there to debug some crap they downloaded or bought in the app store/mall that claimed to do x and now it doesn't work and now nothing is working etc etc.
Still can't avoid having at times to go and fix the god damn printer however. Printer driver programmers/designers really make me question if politicians should be the most hated profession in the world.
pooper
> Printer driver programmers/designers really make me question if politicians should be the most hated profession in the world.
Trying not to be snarky but really printers are very mechanical I/O devices. I don't think the driver programmers have much fun with their jobs, having to deal with seemingly greedy product requirements and lots of variation in real world use (humidity, paper type, ink quality, yada yada). When I start thinking about this and couple it with my own stupidity, I am amazed anything works at all. How do you even do automated integration tests on a printer?
I'd like to think I am about an average programmer and I am reminded by my own actions everyday that I know nothing. I am constantly learning (and forgetting) new ideas every week.
luckman212
My colleagues and I call this the "Ever since you..." and it's one of our longest-running chuckles.
"Ever since you installed that ad-blocker, the Wi-Fi signal in the den is really weak"— uh, that's not how that works.
It's why I make every effort to never touch or even give advice on technical matters to friends and family anymore. Almost zero upside and unlimited downside—getting angry texts and phone calls at all hours of the day and night, and offering free support for life (and almost always accompanied by zero thank you's).
JamesSwift
I thought the same. Having run HA at my own home for a couple of years now, there's no way I would take ownership of someone else's install. I love the setup, it's just not a "non-tech-savvy friendly" ecosystem. That's what I assume you pay for with the COTS ones.
linsomniac
My inlaws built a new house 3-4 years ago, a fairly nice, modern house. But, the whole tech side felt like it was using technology that was dated in the '90s.
TV going into a Receiver in the next room, and DVD+Apple TV connected to that receiver. 5 channel audio from the receiver. Several zones of in-ceiling speakers also run by the receiver. Some "knock off" Logitech-like smart remote control, because it's easy for the installer to program, but fairly hard to use and not something we can customize. CCTV cameras connected by coax to a central controller, I think the resolution is 640x480. One simple WiFi AP to try to cover the whole 4600sqft house.
For my own home, I went with a Google TV with soundbar and HDMI+CEC and get full control with a single remote. For full house audio I use a combination of Google Home speakers and portable bluetooth speakers. Much simpler and flexible.
dublinben
The setup your in-laws have probably doesn't rely on the cloud or report everything they watch to advertising companies, unlike your setup.
linsomniac
They have cable for their primary viewing, which means that, sure, Google isn't seeing it directly, but their cable provider has all the details of what they are watching then. Honestly, I'd trust Google with that information over Xfinity or whoever they have for cable. Verizon?
I realize that is a concern for some, that is not a concern for me.
AndroidKitKat
I've got a slew of different computers doing different things. All of them are networked together via Tailscale.
Ubuntu 22.04 Server for the host, everything else runs in LXC containers. This is all set up on ZFS.
- https://znc.in/ IRC bouncer
- https://caddyserver.com/ Caddy Webserver for a few personal websites
- https://github.com/AndroidKitKat/waifupaste.moe/ My personal pastebin
- https://transmissionbt.com/ Torrent client that I actually use for Linux ISOs. Primarily seed different versions of Ubuntu and the latest Arch. I am looking to seed other, lesser-seeded distros, too.
- It also runs Samba
A second, dedicated computer also running Ubuntu Server 22.04. It only runs https://pleroma.social for me and a few of my friends.
A third computer, this time an M1 Mac Mini that is my Plex box. It's running the latest version of macOS Ventura and runs all the *arrs and qBittorrent. It also runs Plex itself, because it's one of the only computers that I found that was low power enough but still supported hardware transcoding in Plex. I've been meaning to find a replacement for it running Linux + an AMD GPU (I have an rx470 sitting around somewhere), but no real good deals have turned up.
ohthehugemanate
Your hypothetical new plex box does not need a discrete GPU. Plex makes really good use of Intel Quick Sync, and transcode quality has been indistinguishable or better than NVIDIA since about 5th gen. A Celeron G4900 (8th-gen dual core, 3300 passmark) has been benched as capable of 21 simultaneous 1080p transcodes.[0]
TL;DR: pick up a NUC knockoff with an >8th gen celeron or i3; it'll handle anything a casual household can throw at it.
[0] https://forums.serverbuilds.net/t/guide-hardware-transcoding... this is far and away the most comprehensive hardware guide for plex servers.
dehugger
The second you want multiple streams of higher quality than 1080p at 10 Mbps, you will want a dedicated card. I got a <$200 old Quadro card (can't remember which) and it can handle pretty much everything I've thrown at it now.
4k TVs are common, planning for 1080p use is a mistake at this point imo.
ohthehugemanate
1) if your device plays 4k, then what are you transcoding from? That's going to be the determining factor, not the 4k output stream. Certainly in a home situation, that's likely to be direct stream. If your source files are 8K or higher, I wouldn't consider you a normal home user.
2) Quick Sync on a 10th-gen or newer CPU is benchmarked at 4-6 simultaneous 4K streams, but a lot depends on the details: container format in particular matters a lot, and so do your transcode options. Color matching, for example, is only ever done on the CPU. And IIRC AV1 and VP9 are not supported by Quick Sync (or older discrete GPUs).
That said, it's completely correct that you should measure all advice against your actual output device capabilities and source quality.
karlshea
This is very true, QuickSync falls over very easily when trying to transcode even a single 4k stream.
grogenaut
The place you point to for the specification is a thread posted in 2019 talking about a Dell prebuilt. Do you mind posting the link for the NUCs? Or is it in that thread somewhere? Or is that Dell actually a NUC?
ohthehugemanate
The thread has the benchmarks, links to ebay searches, and lists of prebuilt machines (like the dells) that meet the requirements, since they're often the cheapest way to meet the spec (according to TFA). you may have to scroll up.
I just searched ebay for used celeron NUCs, and got lots of options like this one, with a 9th gen Celeron, for a hundred bucks. NB that the generation matters much more than the CPU, but some operations are still done by the CPU so if you can get an i3 or i5 it will make a marginal difference.
https://www.ebay.com/itm/115640978606?hash=item1aecbd4cae%3A...
c0wb0yc0d3r
Do you automate your Linux ISO seeding? Like getting updated torrents when a new version is released. I have been thinking about this from time to time, and haven't come up with a solution other than scraping.
natebc
I just do this. Update the ubuntu line every 2 years or so. This runs as a cronjob on my synology with the working directory set to a directory that the Synology "Download Station" is watching. It picks up the torrents and does its thing. I come in every now and then and clean out the dot releases. It's not the best but it's not the worst either.
#!/bin/bash
wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/
wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/
wget -nH --cut-dirs=4 -r -l1 --no-parent -R "*.tmp" -A "*.torrent" https://releases.ubuntu.com/22.04/
rm *-DVD-{2,3}.iso.torrent debian-mac-*.torrent *.tmp *.loaded
c0wb0yc0d3r
Wow, thanks!
Much simpler than my glue using changedetection.io and huginn.
I also learned some new tricks with wget.
For others interested: https://explainshell.com/explain?cmd=wget+-nH+--cut-dirs%3D4...
AndroidKitKat
I pretty much update things manually when I think of it (usually every ~2 months or so) or when I am downloading a new version for whatever reason (like to flash yet another computer I picked up). I've been looking for something to do in my downtime, so I might see if I can whip something up to automate updates, could be a fun project.
c0wb0yc0d3r
Right now I have changedetection.io call a huginn webhook to alert me when a new release is up.
ogre_battle
> Torrent client that I actually use for Linux ISOs
It's okay boss, plenty of us pirate stuff too. You can just admit it.
yjftsjthsd-h
Maybe you do, but some of us are actually, unironically, torrenting Linux distros.
isoprophlex
I've run a Raspberry Pi (model 1, then 2, 3, 4) since forever, which has been fun. Switched to an Intel NUC recently as I had a spare and needed the compute power. Being able to run on 32 GB RAM with an NVMe disk feels good, but the Pi has served my needs pretty well...
- plex for streaming media
- external hdd that a friend uses as offsite backup (he has mine)
- home assistant, mostly fed by data from the...
- mqtt broker, that ties the sensors around my house together
- postgres, for long term reporting and predictions, mostly with data from...
- some cron jobs that scrape weather data and energy prices (they change hourly, sometimes going negative)
- security camera (a shell script saving an RTSP stream)
- a docker container that I can ssh into from anywhere, that allows backing up the iphone photo roll using the "photosync" app into my photo backup folder
Soon (I tell myself) I will analyze the security camera stream with YOLO or something to detect the cats that piss against my bikes... hehehe
bmitc
Any particular security camera that you use?
I have been considering building a bespoke home system with Elixir and am slowly building some APIs and ideas.
isoprophlex
A very shitty Tapo camera made by tplink. It was cheap and exposes its RTSP stream on the network with some URL.
newuser46547
Do the cheap NUCs let you install more than 8GB RAM?
viraptor
Define cheap. I'm running an Intel NUC d54250wyk (launched 2013) which takes up to 16GB DDR3L. You can't definitely find that class of hardware got cheap on eBay these days. Update SSD and memory and it's still great.
viraptor
Stupid auto-correct. Should be: "You can definitely find that class of hardware for cheap on eBay these days."
Semaphor
Not sure which those are, but all the J/N 4000 series CPUs are only rated for 8 GB, yet I never heard of anyone having issues with 16, though not always at the maximum supported frequencies.
holsta
I have a 6th generation with 32GB:
hw.model=Intel(R) Core(TM) i3-6100U CPU @ 2.30GHz
hw.physmem=34227793920
isoprophlex
Yeah it's a i10 NUC (not exactly super cheap), which takes 2x16 gb without problems. I'm running debian...
boloust
Running k3s on a small cluster of mini pcs and RPis.
Use Tailscale for MagicDNS and access from any network.
Have a custom wildcard domain pointing to my tailscale k3s node ips, and a traefik ingress controller. This means exposing a service from my cluster on a subdomain just requires creating an ingress object in k3s, and it's only accessible via tailscale. cert-manager and let's encrypt handle TLS.
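Concretely, a new subdomain is one small Ingress; a sketch with placeholder names (app, namespace, domain and the cert-manager issuer are all hypothetical), assuming the default traefik ingress class:

# expose a service on a new subdomain with a TLS cert from cert-manager
kubectl create ingress whoami --namespace default --class traefik \
  --annotation cert-manager.io/cluster-issuer=letsencrypt \
  --rule "whoami.home.example.com/*=whoami:80,tls=whoami-tls"

In practice the equivalent YAML goes into git and ArgoCD applies it, rather than running kubectl by hand.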
All services are deployed via gitops using ArgoCD, so changes are auditable and can be easily rolled back. Replacing hardware is just a matter of installing k3s and joining the cluster, then everything automatically comes up.
Restic for backups to s3.
For home automation I use a USB zigbee controller, mosquitto, zigbee2mqtt, room assistant, and home assistant, all deployed on k3s. These control my lights, HVAC, and various garage doors and gates. Also have mains-powered zigbee switches bound directly to devices so everything still works even if network or home assistant goes down.
The RPis are used for Room Assistant, which can automatically control lights/HVAC based on presence detection via a smartwatch. More intrusive actions (e.g. making lights brighter when already turned on, opening blinds) are pushed to the smartwatch for confirmation.
Grafana/prometheus to monitor sensors.
For media, jellyfin and sonarr/ radarr. The native Jellyfin app works very well on modern LG TVs.
Pihole to block ads on any device connected to Tailscale. Works globally.
Right now it's zero maintenance, and changes are automatically synced after a git push, so I almost never SSH into the servers directly.
moritonal
Always love seeing someone else create a similar solution to your own (albeit likely better!).
I have the same setup with k3s running on a couple of Pis. You have a nice CI but I decided to use cdk8s[1], which lets you compile TypeScript into K8s manifests. For access I did almost exactly the same but with Cloudflare Tunnels (might look into Tailscale). Stealing the zigbee2mqtt and room assistant ideas.
Where do you store volumes? I eventually just bought a NAS and mount persistent NFS volumes off it.
boloust
cdk8s works really nicely with gitops.
> Where do you store volumes?
Back when it was a single node cluster, I just used hostFolder mounts with restic backups. I added Longhorn once the cluster grew, but there's still some local hostFolder mounts left around. For example, zigbee2mqtt needs to be on the node that has the zigbee controller plugged into it, so the node is tagged and zigbee2mqtt has a nodeSelector. This means the hostPath still works and I haven't needed to migrate it to Longhorn.
Longhorn initially scared me off with its relatively high listed resource requirements, but after configuring it to not run on the RPis it turned out to work quite well, most of the time just using a few percent CPU.
ohthehugemanate
Thanks for writing almost exactly the post I was going to write. differences:
I don't use tailscale; I just port forward from my router to the k3s ingress IP, since that's fixed anyway. Accordingly k3s handles letsencrypt certificates. My router has a built in openvpn server.
I haven't moved to jellyfin... yet. Plex is super slick and runs nicely in the cluster. I've learned to keep it version locked though, to avoid regressions and unwanted new "features", which means jellyfin is only a matter of time.
I also run Nextcloud, and photoprism for my photo library.
Storage is on a built-from-scraps 16TB NAS which backs up to azure blob with duplicity, and longhorn for block-based storage (since lots of services nowadays prefer sqlite, which breaks on NFS). Yes I do need that space; I run an entertainment company and we store a LOT of video and audio. Not to mention media for plex!
I have many times considered moving most of this to a cloud system, but the cost is prohibitive. If anyone can find 13+TB of storage and transcode- and ML-capable hardware (for plex and photoprism face recognition) for less than $45/mo (my cost of electricity plus annual amortized hw cost), I'm interested.
inssein
This was basically my holiday project - Photo: https://i.imgur.com/2AnP4pu.jpeg.
I'm running Longhorn for storage but haven't figured out backup yet, and haven't got to grafana / prometheus yet.
I put up my work on GitHub as well: https://github.com/inssein/mainframe. I wanted to create a separate ingress controller for internal dashboards, but for now I just set up a separate nginx-ingress for internal and am using traefik for external; feels wrong.
cassianoleal
> The native Jellyfin app works very well on modern LG TVs.
My experience on a CX OLED has been hit-and-miss. Freezes, crashes, and sometimes it just hangs when skipping and I have to force-close it.
I really want to like Jellyfin. I run it beside Plex and use both. I find Plex user-hostile but it still gives me a better video playing experience more consistently.
boloust
Interesting, I have a CX and a G1 and it works flawlessly. I didn't do anything special so might have just gotten lucky.
Rhubarrbb
> changes are automatically synced after a git push, so I almost never SSH into the servers directly.
Can you elaborate how you're doing this?
techn00
I'll answer in his place: he said he's using ArgoCD and running everything on k3s. ArgoCD watches the files in a repo (Kubernetes YAML manifests, for example) and applies them to the cluster, so that the state of the running cluster (applications) stays synchronized with the git repo.
asim
Reading this thread, someone needs to put the home server in a tiny box, preinstalled with all the apps and put a slick user interface on it. I think https://umbrel.com is a thing but it's not packaged with hardware. Something that packages the hardware and software together would be pretty killer. Plug and play. Instant email. All the apps. Easy migration from existing services.
voltaireodactyl
Synology NAS products (using their quickconnect service) get pretty close to this. You have to install their apps from an App Store but that feels reasonable for even the average consumer.
JamesSwift
Seconded. Just get a Synology NAS. I'm incredibly happy with mine, they really seem to make a great product on both the hardware and software side.
guntherhermann
Part of the fun for me is the tinkering, and I'm not sure if non-techies "get" the benefit of running a home server. Maybe the tide is turning for the younger generation? I can only speak for most people my age (30-40), they are capable of interacting with technology but they have absolutely 0 understanding of how anything actually works.
"Why when I can just use Spotify/Netflix/iCloud/etc"? Is the answer I got when telling them about my own home server shiz
asim
I think it's like most home devices, it comes down to ownership. We've gotten into the habit of streaming everything from someone else, e.g. we're in a cloud rental economy, but there's an opportunity to restore ownership. Maybe there's a generation that just wants to rent, but it wasn't really by choice; it's because ownership cost too much: ownership of homes, servers, storage, etc. We can reduce these costs and we can make ownership of services a thing once more. Sure, you might need someone else to fix things when they break, but it's not like I'm doing my own plumbing or electrical work in my house either.
So it's really this idea of pulling all your services back into your home server. Something you own, something that then becomes private by proxy of that. Something primarily for you and your family, no one else.
cm2187
On the other hand one of the benefit of renting is that you don't have to deal with all the maintenance. It's fine to do a little maintenance but if you do that for everything it just absorbs all your time.
Dalewyn
I know of an avid ham, and one of his favorite jokes is "Why use the radio when I can just call them on my cellphone?".
At some point, stuff like home servers really are more about the tinkering rather than any practical desire or need.
If someone's only interested in getting from Point A to Point B, he's just not going to care how fancy or interesting the mode of transport is.
cm2187
I love tinkering but not with my data. For my data storage I want a stable and tested solution. I have done enough "shit I destroyed my RAID array" for my taste.
guntherhermann
For important documents (insurance docs, mortgage, etc) I use SyncThing, for media I use my RAID1 array. I'd certainly hope that RAID1 works as I expect it to if one of my HDDs shit the bed! It's a good point though, backups aren't really backups unless they're tested... brb...
courgette
YunoHost? Then put a sticker on a random box and you're done.
Seriously, it does most of what you describe: good UI, excellent user management between apps, a great catalogue (500+ apps, all tested, with different levels of integration).
bluGill
That is the easy part. What I really want is someone to administer the upgrades for me. Make sure everything just works, and all security holes are closed. Too often I get things almost working perfectly just as some software goes out of support, and the next thing I know everything is broken again after I make one upgrade.
chrisdhal
unRaid is basically this.
Happy user of unRaid for years, just chugs away, easy updates, lots of apps available.
tetris11
dietpi.com will get you halfway there
rft
I would love for something like this to become mainstream, mostly from a privacy PoV. But, I think there are just too many different use cases, pitfalls, user problems and limitations. Something like a plug and play box could work for the very small intersection of "technically inclined and understands how to port forward and how to fix minor problems" and "not interested in tinkering with self hosting". As someone else already mentioned, a big part for people self hosting seems to be the tinkering aspect. For me tinkering and privacy are the main motivators.
On the potential problem side you have far too much friction compared to cloud services. It starts with getting a public domain and IP for sharing data with other people. Then you need to setup port forwarding, need a somewhat stable network and internet connection, run into problems with upload bandwidth being usually far less than download. Once you have the connection side taken care of, what about user onboarding, password recovery ("just mount the partition and overwrite the password" will not work), backups? How do you make clear the distinction between my self hosted Spotify and someone else's? Why can they not see my music? Why is there a difference in the first place?
I know all/most of this can be solved in some way, but it is hard. There is also never going to be a big (think billion dollar) company involved with creating such a solution as there is no monetization model. You can not go for a subscription (imo), because then you just pay someone to host something at your home using your power and bandwidth. You could say you only pay for support and maybe a relay to workaround NAT, but how would the support look like? Full access via an admin account? Privacy just got downgraded. What happens if the company hosting the relay goes under? Enjoy your brick (local would still work, but the user experience would be far worse).
Apart from all these software and user experience problems, we still need the hardware. It needs to be reasonably cheap (US and EU market maybe 200-300USD), but it needs to be reliable (it will live in the worst possible place, like a hot, dusty cupboard) and support some more advanced use cases. Of course >90% will only use it for some light file storage, music streaming and voice control, but 10% will make heavy use of transcoding, invite others to their services, store TBs of media, run their entire home automation on it. How do these 10% know they are in the 10% and need to buy different hardware? Why can the box not do automatic quality adjustment like Netflix does? Do you have a hardware migration path?
And what about cross compatibility? In this thread alone you got multiple different systems. This just multiplies user support issues or will lead to vendor lock in, just a different kind.
Lastly, you need marketing to bring this to the mainstream. And you need a big selling point to make people migrate from "free" cloud services to something they need to pay upfront.
I hate to be this negative about this topic, but I just do not see this being viable outside a relatively small, interested community. Please, correct me here, I would love to see this become real and mainstream!
joshka
I wrote up a larger reply to this, but simplified it a bunch:
1. Hardware needs to be good enough. Raspberry Pi proved that. Ubiquity of common parts solves a bunch of issues. The main limiting factors here are likely transcode speed, ethernet speed, encryption speed.
2. Software needs a good modularization story. Linux in general suffers from the idea that everything is configurable and to configure everything you have to learn about how it interacts with everything. There's a lot of focus on how to do things rather than matching common use cases to profiles that just work. As a concrete example rather than selecting the "I'm at a coffee shop or airport" profile, I need to select that I'm on an insecure network that I don't trust with a specific password and a configuration for my VPN that forces traffic ... Addressing componentization via use case design seems lacking (and a good place to introduce standardization). A lot of the "tinkering" level software seems to start with "How" rather than "Why" as the impetus.
3. Security. We have security principles that are evolving to solve these sorts of problems (OAuth2 RAR, Passkeys). Invest in them.
4. Support. Common solutions for common problems breed easier conversations. There are consultants that do NAS support for small businesses because those NAS companies have gained enough market share. Secondly we need to spend more time to start with the why rather than the what. Software developers (myself included) obsess over the latter and build systems at that level that can answer (how fast is my connection) rather than systems that inherently answer the real need (why is my connection slow or intermittent).
5. Perhaps the answer is just build a better NAS company where the focus is on home server software rather than purely around adding server software to a bunch of disks in a box. Perhaps the answer is really embracing the idea of U in NUC?
Perhaps this has been done elsewhere? Perhaps it's been done too many times to really work?
asim
Thanks for taking the time to provide such a detailed response. All your points make sense and it's those things that have to be resolved one by one to make anything like this not just a viable product but an actual sustainable business. I do keep coming back to this idea mostly because unlike a lot of people here, I don't want to tinker anymore, I don't want to hand build something from scratch, those days are long gone. I'm happy and willing to pay for something that works and maybe even a subscription for a period of time (think mobile phone contract).
Ultimately as you say privacy is the clear selling point. So much of what's in the cloud is now being exploited or hacked. Obviously the cloud is here to stay and we'll continue to rely on it for a ton of high compute, high bandwidth and high storage needs, but there's so much of what we do on a day to day basis that just doesn't require it e.g let's just say I need to talk to my family, leave notes between us, share sensitive documents, etc. All of that can very much be local in my house. All the things that we do in our physical houses we deem private, the digital should be the same. But we've shifted from the personal computer to public cloud services. There was big benefits but equally there will be huge benefits to going private once more.
I can't say this problem will be solved anytime soon. Someone has to really want to solve the problem for themselves first. I'm hacking on a little toy that might manage DNS, email, web serving, file storage, etc as one binary but I'm not sure it will go anywhere, it's just an experiment.
csixty4
There's a NUC in my basement connected to my router over gigabit ethernet:
- Caddy acting as a reverse proxy in front of the other apps
- Wallabag to capture articles I want to read later on my e-reader
- Calibre Web to manage my ebooks & PDFs
- Two Minecraft Bedrock Edition servers for my kid & their friends
- Yopass for secure password & secret sharing
Prior to this, I had a Raspberry Pi in the closet for hosting and it was frustrating. Not only did I have a hard time finding Docker containers for some apps that were actively maintained for ARM, but one time my SD card died and took everything with it. Since then, I've started mounting directories on my Synology NAS and using that as RAID-enabled storage that gets backed up to the cloud every night.
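The NAS-backed storage part is just a network mount; a sketch with placeholder host and share names, assuming NFS is enabled on the Synology side:

# mount a Synology NFS export and make it survive reboots
sudo mkdir -p /srv/appdata
sudo mount -t nfs synology.lan:/volume1/appdata /srv/appdata
echo 'synology.lan:/volume1/appdata /srv/appdata nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab

Containers then bind-mount directories under /srv/appdata, so their data sits on the RAID volume and rides along with the NAS's nightly cloud backup.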
pinecamp
Be careful with calibre-web; it's had a ton of vulnerabilities. To the author's credit, they are typically addressed quickly, but there have been enough related to auth to make me wary.
bloudermilk
Maybe hosting this service behind a firewall / on a private network would make the most sense. I notice many people here are running Tailscale at home — perhaps this is why?
derkades
- Router in VM
- Main workstation in VM with GPU passthrough (saves power by not having an additional machine running)
- Nextcloud for files, calendar, contact sync
- Joplin server for notes sync
- Samba NAS for me and everyone else who lives in the house
- Occasionally one or more Minecraft servers for friends
- Jenkins CI for my open source projects
- Mail server (using the ISP's mail proxy for outgoing)
- qbittorrent for 24/7 seeding of Linux ISOs
- Storj storage nodes for some passive income using spare disk space
- Borg backup target for friends
- Home Assistant (very basic user, only use it to control some MQTT tasmota flashed relays with my phone)
- Matrix server
- InfluxDB+Grafana for collecting various metrics (server usage, temperature sensor, hooked up to serial port of smart electricity meter for power and gas usage graphs)
- WireGuard for remote access, obviously
- Lots of other random stuff, and my own projects
jum4
Do you use your main workstation with directly connected display or via some remote technology? My home server is in the basement and I have several low end devices around but haven’t found a way to use GUI applications remotely. Currently I use only TUI ones.
derkades
About 15 meters of one USB 2.0, one displayport and one HDMI cable. Of course a 5V power supply for USB at the client end, soldered to the cheapest USB hub I could find.
I never need more than the speed USB 2.0 provides anyway, and those USB 3.0 optical extensions are way too expensive.
xref
Joplin server, after my own heart. It's great; it lets me set up accounts for friends/family. I store all my recipes as markdown and anyone on the server can get my shared notebook full of them.
Msurrow
Thanks for the Joplin mention. I’ve been looking for a secure, gdpr compliant and preferably selfhosted note app for some time! Bingo, I guess
raybb
Joplin is old but gold in terms of open-source note taking. However, I always found the experience to be a bit too clunky even after writing 100s of notes.
Obsidian is much nicer but you gotta set up your own sync or pay. Same basic premise of markdown files on your local file system. I've been enjoying it a lot and written more since switching.
Msurrow
Thanks, will look into it
hobo_mark
You do not even need to run a dedicated server if for example you already have an owncloud/nextcloud setup, since all clients (desktop and mobile) can sync from a WebDAV folder (among other backends).
courgette
How is storj going ?
derkades
Have been getting $20-50/mo for sharing 15TB. Probably won't be worth it for the average developer with insane salary here, but I'm not complaining!
courgette
Nothing to sneeze at, thanks for the feedback. That project and Helium are the two crypto things I don't find completely idiotic. (Or that aren't finance; finance crypto "works", too, but that's idiotic too.)
rpigab
I've bought a used Dell Poweredge T320 with Xeon E5-2428L (low consumption), 24GB RAM and an SSD, so it's quiet and cheap to run, and I've got Debian with dockerized services.
jwilder/nginx-proxy to act as a reverse proxy that dynamically routes traffic to containers without manually editing the config, using subdomains, with an SSL wildcard cert (a quick sketch of how that routing works follows at the end of this list)
DokuWiki that I don't use much anymore
Nginx to serve static files used by other servers
My web resume
Piwigo for pictures
IoT: Custom Python Flask server that is used to control Philips Hue lights from ESP8266 wifi modules (cheaper than buying 20€ Philips switches)
Vaultwarden (Bitwarden) my password manager, shared with family
OpenVPN server
Wekan (self-hosted open source Trello-like)
Gitlab and Gitlab CI (created when Github didn't have free private repos, might delete at some point because it uses some CPU even when idle, but I have over 50 personal repos, also share with close family)
Nextcloud, but I don't use it for important/sensitive stuff yet; I'd have to set up a robust backup procedure
Other experiments, like openvscode-server, web interface with password to trigger wake-on-lan for my PC, etc.
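The nginx-proxy routing mentioned at the top of the list works by watching the Docker socket and generating vhosts from each container's VIRTUAL_HOST variable; a rough sketch, with the app image and domain as placeholders:

# the proxy itself, watching the Docker socket; certs (incl. a wildcard) live in /opt/nginx-proxy/certs
docker run -d -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /opt/nginx-proxy/certs:/etc/nginx/certs:ro \
  jwilder/nginx-proxy
# any container started with VIRTUAL_HOST is picked up and routed automatically
docker run -d -e VIRTUAL_HOST=wiki.example.com my-app-image

A wildcard certificate dropped into the certs directory (named after the domain) covers the subdomains, which matches the "no manual config edits" setup described above.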
Email seems like a pain because small servers are always seen as spam by big services and you need to manage reputation; too complicated. So I use my domain provider's (Gandi) SMTP relay to send email, and I could set up a free inbox too, but I don't need it.
duffyjp
I run a Poweredge T20 with a Xeon E3-1275Lv3. I snagged a 32GB kit of ECC ram from somebody's trashcan Mac Pro on eBay for a song and have a couple SSDs for OS and VMs. I use USB WD MyBook drives for storage and backups.
Right now the server, switch, drives etc is pulling 31 watts from the wall and it's so quiet you can't tell it's on. I'm sure it would keep running for hours off the cheap UPS I have it all on.
I had a second one as a desktop but the motherboard died last year. I'm not sure what I'll replace it with when the time comes. Probably one of those one liter PCs since I don't need internal 3.5" bays.
rpigab
If I replace it one day, it might be for a "miniITX" (whatever MB size corresponds to ~1 liter PCs) but I fear the cost of the case + specific MB + low-power processor with many threads + good NVME SSD will be through the roof, compared to this cheap used T320, and they're hard to find in used form, at least for now.
It's been years (over a decade?) since I've had a server at home but I'm setting one up for media and I got to thinking: what else should I do with this box? So I was wondering what cool/nerdy/weird stuff you all are using home servers for. DNS and file sharing seem like obvious applications I could set up. I already run email and web on a VPS so that's taken care of. What are you doing with your home server?