rented_mule
dmvdoug
I have nothing to say about the technical stuff, just that I’m sorry for your loss, and that from my perspective you were a true friend for taking that task on after they were gone.
zeagle
I've given this some thought too and am doing some documenting for friends. Hard to know the answer.
I have paperless, photos, seafile, and a few other things copying to a USB drive nightly that my spouse may remember to grab, unencrypted. I'm tempted to throw a 2 TB SSD in her laptop to just mirror it too. But accessing my NAS (let alone setting it up somewhere else after a move or with new network equipment), email hosting for our domain, and domain registration are all going to be voodoo to my spouse without some guidance. I'm tempted to switch to Bitwarden proper instead of self-hosted too.
justsomehnguy
> are all going to be voodoo to my spouse
That's why you really need to rethink your 'if you are hearing this I musta croaked' procedure.
Thing is, 99% of the files on your NAS and whatnot will never be accessed after your death. And anything of importance should be accessible even if you are alive but incapacitated, or your NAS is dead.
So the best thing to do is to make a list of Very Important Documents and keep it in printed form in two locations, e.g. your house for immediate access and the home of someone's parents who are close enough. Update it every year, with a reminder in both of your calendars. You can throw a flash drive[0] in there too, with the files which can't be printed but have sentimental value.
[0] Personally I don't believe SSDs are suited to long-term storage, but I've seen flash drives survive at least 5 years
zeagle
For sure. Good advice. It's a fool's dream to think almost anything has value for more than pennies on the dollar, or will be accessed after one passes, as many of us have learned cleaning out elderly parents' homes and estates.
For what it's worth, my solution is an external USB drive plugged into the NAS that gets nightly rsync'd copies of photos, phone backups, paperless' PDF archive, and seafile's contents in a regular folder. A few people know to grab it. The second part is that our laptops keep a copy of seafile's contents (all our documents, plus another flat-file paperless backup). A few of my friends, and a txt file on that drive, have a list of stuff that will break in the mid-term, namely: email hosting and domain renewal.
A few things on my todo list are: probably stop self hosting calendar/contacts one day, put a large SSD in her laptop so it syncs the photo share from the NAS, switch to paid bitwarden instead of self hosted.
Other things are gravy. My accountant and lawyer can figure out business stuff, corporate liquidation, and life insurance. Funny you say that about SSDs, just in the last day my <1 year old 990 is having issues.
transpute
Data recovery instructions can be documented on paper in the same physical location used for financial accounts, e.g. fireproof safe, trusted off-site records, estate attorney. These recovery instructions are also required for data hosted by third parties.
transpute
Continuity and Recovery are required by all infrastructure plans, since the number of 3rd-party suppliers is never zero, even with "self" hosted infrastructure.
ProllyInfamous
This is tangentially related, but I feel it is very wrong that so many smaller governments (e.g. smaller US cities) host "public information" on private servers (e.g. links to PDFs from a Google Drive)... or even worse inside some walled garden (e.g. Facebook).
My own personal DNS does not resolve to any Google/Facebook products, reducing profiling; but by denying their ad-revenue, I also deny myself access to information which IMHO should be truly available to the public (without using a private company's infrastructure).
I absolutely understand that many people will just say "don't block them, then." My argument is that governments should not host public items on private servers.
throwaway8481
Tangentially, I really dislike walking into the DMV and seeing ads from private companies. I heavily dislike that ID checking and document verification is done by ID.me and others for what is a public service I pay for through my taxes.
Maybe for a while I can avoid submitting my documents and information to partners-of-the-DMV, but just like airport security it's a convenience tax. They do not value your time, and they will demonstrate it by putting you through extra hoops to coerce you into giving them everything.
anonexpat
Unfortunately, you can’t avoid having your information sold by the DMV in California. Likely others too.
How this is legal is beyond my comprehension.
ProllyInfamous
Honest-to-god, I do not have a permanent email address (I use burners when necessary). It has been years since I received any spam (because there's nowhere to receive it).
Just over a year ago, I had a civil court action where the court REQUIRED I list my email address; when I wrote "none," the clerk was upset; eventually a judge required me to sign an attestation that I do not use email.
Just seemed ridiculous that Tennessee's court systems even ask for this, let alone assume everybody has/uses email. Plus, it's all public information...
throwaway8481
Yep. California DMV. 1 in 10 Americans live in California and these companies have our data.
kevin_thibedeau
I had body camera video sent to me over a "private" YouTube link. I would have welcomed GDrive over that. On the plus side, I took advantage of the automatic transcript generation to review the obnoxious things the officer said without having to watch it all.
ProllyInfamous
>automatic transcript generation
There is a neat application called "Whisper" which will translate/transcribe just about any media format, locally on your computer.
But I guess that's only necessary if bodycam footage isn't uploaded to YouTube (wow!).
>obnoxious things the officer said
My hope is for your swift and proper resolution, whatever the charges/incident. They're allowed to lie/bait/entrap; glad you got the footage.
transpute
> governments should not host public items on private servers
Some works of the US federal government are not subject to copyright and can be mirrored freely.
What licenses do city governments use to release public information?
stackskipton
Depends on the city/state laws, but the vast majority of the time it's all public domain under FOIA laws. The city/state won't say that because it's assumed.
transpute
Local newspapers could mirror public domain citygov content, providing a public service and growing their online traffic.
mr_toad
Getting a PDF published on a government website is a six month long process involving approval from a dozen managers, editors, and assorted hangers on. It’s little wonder people use things like Google drive and Dropbox.
treyd
So that should be an indicator that the government offices should just change their internal policies if people are going to go around them anyways.
bingo-bongo
I agree, but the keyword here is “just” - if you think 6 months is a long time for publishing a PDF, imagine trying to make actual changes to the same system :/
wannacboatmovie
> My own personal DNS does not resolve to any Google/Facebook products, reducing profiling
This is incredibly silly.
If you smashed your computer with a sledgehammer you would also be unable to access those documents.
Do you stop there? What if they host their site on GCP? Amazon? Azure? They're all in the ad business. It's a slippery slope to a whitelist-only Internet.
apitman
I think we'll see some stratification in the self hosting community over the next few years. The current community, centered around /r/selfhosted and /r/homelab, is all about articles like this. The complexity and learning are sources of fun and an end in themselves. That's awesome.
But I think there's a large untapped market for people who would love the benefits of self hosting, without needing to learn much if any of it.
I think of it as similar to kit car builders vs someone who just wants to buy a car to use. Right now, self hosting is dominated by kit cars.
If self hosting is ever going to be as turnkey as driving a car, I think we're going to need a new term. I've been leaning towards "indie hosting" personally.
burningChrome
I've wanted to do this for years, but trying to secure a server is the stuff of nightmares for me.
Are there resources out there about what I need to know about making sure my stuff is secure enough and I'm not just leaving my stuff wide open for people to hack it? I've always been interested in hosting my own email server, but the security parts have kept me from doing it.
Any resources you can point me to would be much appreciated.
don-code
I do self-host my mail, and I've done so since about 2005! It gets harder and harder to with every passing year. Some notable things that have changed since then:
Many ISPs now block inbound port 25, which is required to receive mail via SMTP. It's quite hard to get an ISP to unblock this. My university wouldn't at all, and I left a laptop under my parents' couch for four years to do it instead. Some time later, Comcast began blocking it as well, and the only way to get it unblocked was to call support, work your way up the phone tree to someone who realized you were talking about inbound rather than outbound (no, this is _not_ a misconfiguration in Outlook), and get them to push a special config to your cable modem, which would be reset whenever another config was auto-pushed or your modem lost power. You may notice that implies extended downtime whenever Comcast, my electric service, or my physical operations (read: I unplugged it by accident) suffer a failure.
Many mail servers (e.g. Gmail) require you to have reverse DNS that matches your forward DNS. Getting your ISP to understand what they're asking you to do is... difficult. The last time I changed ISPs, it took about a week to get this done. Comcast batches these updates weekly, and support wanted to double, triple, and quadruple-check that what I was asking for was, indeed, what I was asking for.
There are a bunch of anti-spam measures in effect that use DNS: SPF and DMARC are table stakes for most mail servers (again, e.g. Gmail) to speak to you. I've so far managed to get by without setting up DKIM, but I suspect that's next.
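For reference, SPF and DMARC are just TXT records in the sending domain's zone. A minimal sketch (domain, host, and policy values are examples, not a recommendation):

```
example.com.         IN TXT "v=spf1 a:mail.example.com -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The SPF record names which hosts may send mail for the domain; the DMARC record tells receivers what to do when checks fail and where to send aggregate reports.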
The worst part, by far, is spam blacklists. Many blacklists will already have your IP address listed by policy - you're a _residential IP_, not to be trusted. The Spamhaus PBL, for instance, automatically blocks all Comcast residential IPs. There is nothing you can do about this, and many mail servers will refuse to speak to you if you're on a blacklist.
These days I am paying Comcast an arm and a leg for business-class service, which both gives me unbridled inbound port 25, and also a (luckily!) clean IP on block lists.
apitman
Thanks for the writeup. Very interesting.
> Many mail servers (e.g. Gmail) require you to have reverse DNS that matches your forward DNS
What does this look like on a technical level, i.e. records and whatnot? I'm not super familiar with reverse DNS.
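A sketch with example values: the forward A record you control and the PTR record your ISP controls must agree with each other:

```
; forward zone (you control this)
mail.example.com.            IN A   203.0.113.25

; reverse zone (delegated to your ISP)
25.113.0.203.in-addr.arpa.   IN PTR mail.example.com.
```

`dig -x 203.0.113.25` should return mail.example.com, and `dig mail.example.com` should return 203.0.113.25; strict receivers like Gmail check that the two round-trip.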
citizenpaul
> I left a laptop under my parents' couch for four year
My coworker's house burnt down because of doing exactly this. Though I don't think it was hosting anything, just put out of the way when not in use.
BobbyTables2
Why pay for business service?
$6/month gets you a cloud VM that can be used to proxy incoming connections to your home…
layer8
A Linux server (e.g. stock Debian) on a well-reputed VPS is pretty secure by default, in my experience. Use software packages from the Linux distribution whenever possible (certainly for email software) and configure unattended security updates.
Note that you generally can’t host email from a residential IP, so you’ll probably want to use a VPS. Making services on your home network publicly accessible (i.e. not just via VPN) obviously comes with more risks; personally I wouldn’t do that.
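As a sketch of that last point: on Debian/Ubuntu, after `apt install unattended-upgrades`, automatic security updates come down to a small config file (these are the stock keys the package's own setup writes):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The first line refreshes package lists daily; the second applies pending security updates daily.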
transpute
> Making services on your home network publicly accessible
Tailscale's private-overlay-on-public-internet has made it feasible to provide services to a few trusted clients, even behind NAT.
Tailscale app on Apple TV can be an exit node, e.g. travelers can access geo-restricted content via their residential broadband connection.
apitman
I would echo others here and just use a cheap VPS to experiment with. Then you have much less to worry about.
How technical are you?
burningChrome
I'm pretty adept technically. I've been a front-end developer for about ten years, so using WordPress and Drupal and setting up sites either manually or via an ISP is pretty familiar. In that regard, using a VPS is also pretty familiar, so I will most likely start there.
austin-cheney
Absolutely. I got my wife hooked on self hosting too.
I am currently writing a new web server to solve for this space that is ridiculously simple to configure for dummies like me, has proxy and TLS built in, serves http over web sockets, and can scale to any number of servers, each supporting any number of domains, provided port availability. The goal is maximum socket concurrency.
I am doing this just for the love of self hosting demands in my own household. Apache felt archaic to configure and my home grown solution is already doing things Apache struggles with. I tried nginx but the proxy configuration wasn’t simple enough for me. I just want to specify ports and expect magic to happen. The best self hosted solutions ship as docker compose files that anybody can install within 2 minutes.
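As an illustration of that last point, the kind of two-minute compose file in question might look like this (the image, ports, and volume path are placeholders for whatever app is being hosted):

```yaml
services:
  app:
    image: nginx:alpine            # stand-in for any self-hosted app image
    ports:
      - "8080:80"                  # host port -> container port
    volumes:
      - ./data:/usr/share/nginx/html:ro
    restart: unless-stopped        # come back up after reboots and crashes
```

`docker compose up -d` and you're running; the config doubles as documentation of what the service needs.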
fragmede
Fascinating! What didn't you like about caddy?
austin-cheney
I have not tried caddy. I will look that up.
telgareith
The term is "managed vps" and/or some variation of "marketplace image", I think it's linode that has a particularly... Vibrant (not in an all positive way) selection. AWS' is pretty good, but not as diverse. I assume due to the increased technical aptitude of the average customer and the learning curve.
apitman
One thing I strongly agree with you on here is being open to the cloud. Self hosting strongly favors running on your own hardware, but indie hosting focuses more on the tangible benefits, i.e. data ownership, mobility (which breeds competition), etc.
That said, I think the VPS marketplace is still too complicated. What about updates, backups, TLS certs, domains, etc?
layer8
> What about updates, backups, TLS certs, domains, etc?
You are right that one has to take care of those individually. For domains, however, I would say that it’s important that you manage them separately from the VPS provider, because this lets you switch VPSs easily at any time. For TLS certs you use something like certbot, or a web server like Caddy that has it built-in. It’s generally straightforward. VPS providers usually also offer backup solutions. If you use software from a Linux distribution like Debian or Ubuntu, automated security updates are easy.
transpute
> Self hosting strongly favors running on your own hardware
In comparison, tenant (storage, colocation, cloud, VPS) hosting contracts often encompass Terms of Service, metered quotas/billing, acceptable use definitions, and regulatory compliance.
> data ownership, mobility which breeds competition
Historically, the buyers of commodity "web hosting" and IaaS have benefited from many competing vendors. Turnkey vertical SaaS often have price premiums and vendor lock-in. If "indie hosting" gains traction with easy to deploy and manage software, there may be upward pressure on pricing and downward pressure on mobility.
indigodaddy
As someone mentioned, regarding TLS, Caddy makes that REAL easy, as in pretty much touchless and the most dead simple config file you’ve ever seen
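For anyone curious, a complete Caddyfile for a TLS-terminated reverse proxy really is this short (domain and upstream port are examples); Caddy obtains and renews the certificate automatically:

```
example.com {
    reverse_proxy localhost:8080
}
```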
PLG88
I love this idea. Personally, I would love to self-host, but don't due to not being technical enough to use a command line.
I am from a non-technical background but have learnt loads of technical stuff over the years, to the extent that I can describe many complex topics, present, or write stuff for technical people. But my non-computer-science background means I am not familiar with the command line. I have used it and understand it, but have not 'learnt its language'.
Does any 'turnkey' self-hosting solution exist which provides an abstraction, so that I can just deal with GUIs and not the command line to start (and learn along the way)?
In fact, that would be a great way to learn.
rakoo
Yup, https://yunohost.org/ was built for that. You can manage your server entirely from the admin panel, installing pleroma or matrix or freshrss takes 3 clicks, and it can even manage your DNS zone for you. Mail and XMPP are included.
When you're feeling confident, you can then fiddle with the command line and explore what's happening behind the scenes.
PLG88
Wow, that looks super useful, thanks!
nakkaya
Yes, there are. I would suggest going through the subreddit mentioned, /r/selfhosted. There are GUI tools and NAS products that will let you host docker images. As for the CLI, ask an LLM; for the simple, common commands you'll be dealing with, they're pretty good.
immibis
Try setting up a server on Windows, then. Windows's native language is the GUI like Linux's is the command line. I remember there used to be lots of GUI Windows web servers - don't know if there still are.
Of course you have Windows versions of Linux servers like Apache, which still use their configuration files and so on, but you can use the GUI for the rest of everything.
gsck
Definitely worth just sitting down and learning how the command line works. It's not as scary as it looks.
No need to have any fancy comp-sci background, hell I have an arts degree!
77owen
As far as turnkey solutions go coolify.io is the one I’ve seen floating around recently.
pxc
> and learn on the way
In case you're interested in resources for this: I think _Learn Enough Developer Tools to Be Dangerous_ is a great start. I've been guiding my roommate through it as he studies, a chapter a week. If you do pick it up, just skim the chapters on editors: their examples are overly specific to a particular editor with few outstanding strengths, and if you prefer a different editor you may struggle to find equivalents for some of the hokey examples the book uses.
https://www.oreilly.com/library/view/learn-enough-developer/...
Special editions/compilations of Linux magazines can also be a very good source of high-quality tutorials, including for CLI stuff. These are nice because while they include general introductions, they're mostly comprised of bite-sized tutorials that you can pick and choose according to your interest. I also like them because they're available in print, colorful, and shiny, and thoughtfully laid out, plus there are no ads— very pleasant compared to the web in many ways. Linux Pro Magazine did one on shell topics this year, back in February: https://www.linuxpromagazine.com/Resources/Special-Editions/...
Such magazines also include step-by-step guides for setting up services that I was able to follow (with some trepidation!) when I was just a kid who was new to Linux and still honestly a bit scared of the command line. Linux Format is really good for this because it's targeted at desktop Linux and computer hobbyists broadly, rather than programmers or IT professionals. Their guides assume little to no familiarity with the command line, so they often include reminders of little bits of command line basics rather than just assuming you share that context with the authors: https://linuxformat.com/
Besides web-based management interfaces for servers, like Proxmox, you might consider getting started by just running some services on a spare desktop computer. openSUSE has a long history of emphasizing GUI administration tools, so many relatively 'advanced' tasks for it do not require the command line, which is somewhat different from other distros. (If you give it a try, its GUI configurator, YaST2, will strike you at first glance as having a dated look. This is intentional— continuity is a priority for YaST, so GUI-based tutorials from many years ago will still be accurate.) It's also a distro with good guts and nice CLI tools, so you won't necessarily outgrow it after you get your feet wet with the command line.
fyi626367
Freedom Box is (was?) a pretty good system for making self-hosting things accessible and easy. A couple of clicks was usually all it took.
transpute
If Apple ships a Home Intelligence competitor to $15K Tinybox, it could be called "lux hosting".
apitman
Umbrel[0] maybe? I posted a list of related services here[1] as well, though most of them are cloud based.
[0]: https://umbrel.com/
mr_toad
> But I think there's a large untapped market for people who would love the benefits of self hosting, without needing to learn much if any of it.
Isn’t that what devices like My Cloud are aimed at?
johnklos
I agree with Christian about pretty much everything here. We self-host for multiple reasons, and we don't need others to necessarily understand our rationale, although that'd be nice.
For me, one thing that stands out as something driving the desire to self-host everything is that large corporations, given enough time, invariably let us down. Christian's experience with Contabo illustrates the one game that I will do any amount of work to avoid: people who pretend to know what they're talking about but who really only waste our time in hopes of putting off dealing with an issue until someone else actually fixes it.
The one place where I can't avoid this truly stupid game is with getting and maintaining Internet for my clients. You're not paying for "enterprise", with "enterprise" pricing of $750 a month for 200 Mbps? Then tough cookies - you'll get the same junk we force on our residential customers, and you'll never, ever be able to talk to a human who has any clue what you're talking about, but you'll be able to talk to plenty who'll pretend to know and will waste hours of your time.
The more time they waste of mine, the more energy I'll expend looking for ways to subvert or replace them, until I eventually rely on corporations for the absolute minimum possible.
transpute
> you'll get the same junk we force on our residential customers
In locations with few competing providers for wired broadband, 5G "broadband internet" has brought some competition to entrenched telcos. While mobile data latency is not competitive with cable/fiber, it can serve as a backup for wired connections lacking an SLA.
vel0city
The 5G T-Mobile stuff in my area has often had latency at least as competitive as most cable providers in the area. I've had friends do cloud gaming on it no problem.
xarope
"Contabo down for 4 days"... looks nervously at my contabo instance with 16GB of RAM and 600GB SSD storage.
Just putting it out there, is contabo really that bad? I've had mine for just over a year, had a roughly 6 hour outage recently which did make me a bit nervous.
rr808
Self hosting makes you realize how insanely expensive cloud providers are. AWS charging for IP4 addresses was the last straw for me.
apitman
Depends how much you value your time. I think most self hosters do it for reasons other than money.
pxc
It's really not that hard to run a server with a handful of applications on it, especially for personal/home use. It seems to me that when developers talk about this kind of thing they often massively overstate how much work is involved in running a server and keeping it up to date while massively understating the inherent complexities of working with cloud services.
I haven't really witnessed the cloud serving as a big time saver in my career. Cloud-centric ops teams seem to be consistently larger than those running applications on regular, shmegular servers (whether rented or their own), if anything.
For self-hosting purposes, I'd expect serverless deployments of open-source apps to take more time for most people to figure out and get right than just running the same apps on a VPS, unless you had spent years deploying to the cloud at work and also never learned Linux usage basics. And if you deploy to only a single cloud VM, you're just using an extremely overpriced VPS at that point.
rr808
I was brought up on private data centers and it's much, much simpler than using AWS or GCP.
Gigachad
I’ve found the opposite. Google drive costs me less than a VPS, and far less than a physical server to do the same job.
shprd
> I’ve found the opposite. Google drive costs me less than a VPS, and far less than a physical server to do the same job.
Can you share the web applications and databases you serve from your Google Drive, since you can "do the same job" and it costs less? You must be onto something here.
Ignoring any privacy concerns for a moment. Neither the comment you're replying to, nor the submission is talking about just storing documents in a storage. They're hosting applications, databases, services which you need a "server" for and can't do in Google drive.
You might not have the same use case; you might not need to host websites, databases, virtual machines, DNS, and other services, but then you're not doing "the same job". You just have a different use case. Just like you don't need an IDE if you just want to take notes.
bovem
Just today I had to sign up for a service and went to the Bitwarden app on my phone to generate a password (linked to a self-hosted vaultwarden server), but the new password entry couldn’t be saved into the app because the server was unreachable.
Then I had to go restart my VM and reconnect my VPN. I am now thinking about switching to bitwarden premium and opt-out of self hosting for password managers.
otter-in-a-suit
Author here. Bitwarden (as much as I appreciate them!) isn't something I self host, since it’s too critical an application for me (similar to email). I pay for 1Password.
greenavocado
KeepassXC on Syncthing is so easy to use even my girlfriend uses it without problems
hypeatei
Exporting your vault every so often to offline storage (like an encrypted hard drive) is a good happy medium IMO.
sethammons
This is one reason why I moved from self hosted to their paid offering. The other is that I trust their security better than mine
transpute
Virtualization platform tooling can monitor VM operational status and restart when needed to maintain availability.
m463
I self host too.
a couple points
- proxmox hits an SSD pretty hard, continuously. I think with zfs, it probably hits even harder. A lot of it is every second keeping state for a cluster, even if you have only one machine.
- I bought mikrotik routers for openwrt. I tried out routeros, but it seemed to phone home. So I got openwrt going and didn't look back. I am switching to zyxel since you can have an openwrt switch with up to 48-ports.
- I used to run small things on a pi, but after getting proficient at proxmox, they've been moved to a vm or container.
- the most wonderful milestone in self-hosting was when I got vlans set up. Having vlans that stayed 100% in the house was huge.
- next good milestone was setting up privoxy. Basically a proxy with a whitelist. All the internal vlan machines could update, but no nonsense.
- it is also nice to browse the web with a browser pointing at privoxy. You'd be surprised at the connections your browser will make. Firefox internally phones home all. the. time.
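A sketch of that whitelist-only setup, assuming Privoxy's user.action file (the allowed domains here are just examples):

```
# Block everything by default...
{ +block{default deny} }
/

# ...then unblock only the hosts that need to reach out.
{ -block }
.debian.org
.mozilla.org
```

The `/` pattern matches every URL, so the first stanza denies all traffic; the second stanza then carves out the allowlist.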
dmateos
I noticed that with proxmox, my SSD wear out was going up about 1-3% a month.
There are things you can do to minimize this, even with ZFS, in terms of ARC cache and setting syslog to log to memory instead of disk, etc.
Now i get about 1-2% every 6 months.
timc3
Do you have any more details such as a blog post on what you did?
dmateos
It was a while ago, but I think basically this:
https://forum.proxmox.com/threads/minimizing-ssd-wearout-que...
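For reference, the syslog-to-memory piece mentioned above can be a one-line journald change (assuming systemd-journald; this trades log persistence across reboots for less SSD wear):

```
# /etc/systemd/journald.conf
[Journal]
Storage=volatile
```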
zxexz
I've been curious about running something proxmox-like for a while, but I really want something a little more "hackable" that doesn't require learning an entirely new configuration-management system, yet still has an intuitive interface that people who don't understand all layers of the stack can use without feeling reliant on others. I'm curious if you or others have any thoughts on that. It's probably too specific and complicated to exist.
linsomniac
Has anyone tried those Lithium Ion UPSes? ~5 years ago we removed the UPS from our dev/stg stack because in the previous 5 years we had more outages caused by the UPS than issues with the utility power. A better battery technology sounds compelling.
For production, of course, it's all dual feed, generator, UPS with 10 year batteries, N+1.
zackmorris
No but I did a quick search for capacity vs cycles for sealed lead acid (SLA) vs lithium ion phosphate (LiFePO4/LFP arguably the main type right now) batteries:
https://lithiumbalance.com/lead-acid-and-lithium-ion-battery...
https://www.power-sonic.com/blog/lithium-vs-lead-acid-batter...
https://ecosoch.com/lithium-ion/
Lead acid last about 3 years max before nearing 0% capacity, while modern lithium ion last indefinitely if run below 25 C (77 F) and still have 80% capacity after 3000 cycles. The lithium manganese oxide batteries in my Nissan Leaf are 11 years old and still at 10 of 12 bars or about 83% capacity.
Lithium ion chemistry is chosen to optimize certain aspects like energy density, cycle lifetime, or environmental impact, but today doesn't make a huge difference, typically 10% as far as density goes, with cobalt still having the highest density but at a human cost due to difficulty mining it:
https://dragonflyenergy.com/types-of-lithium-batteries-guide...
I think that sodium ion batteries will eventually be the mainstream battery backup. Also sodium sulphur batteries running above 300 C (572 F) for industry. I'm having trouble finding a sodium UPS:
https://www.xpcc.com/xtreme-power-sodium-ups/
But sodium ion perform similarly to lithium ion so could be used as drop-in replacements. This one claims 6000 cycles:
https://www.ebay.com/itm/166895369465
Unfortunately there seems to be industry pressure to slow the adoption of sodium ion. Probably to protect entrenched interests in lead and lithium. So sodium is mostly only available on Alibaba/AliExpress still.
If I had money to invest, I would put it into businesses manufacturing sodium ion batteries and/or doing swaps of sodium or lithium into devices when their lead batteries wear out.
Looks like the lithium ion battery market is $55 billion annually:
https://www.grandviewresearch.com/industry-analysis/lithium-...
https://www.precedenceresearch.com/sodium-ion-batteries-mark...
So sodium ion will eventually probably be a bigger bar on those graphs than lithium iron phosphate, the current most environmentally friendly battery. And those projections are almost certainly way too conservative. They are thinking $5 billion by 2030, but I would expect more like $20 billion due to the abundance of sodium granting easier entry by a large player embracing it. Probably CATL, BYD or LG will be the most profitable outside the US:
https://www.statista.com/statistics/235323/lithium-batteries...
US battery makers aren't well-known outside of Tesla:
https://www.industryselect.com/blog/top-10-battery-manufactu...
So sodium is one of the biggest opportunities I can think of, and returns will probably depend on how this next election goes.
Looks like they are straightforward to manufacture:
https://futurebatterylab.com/the-big-beginners-guide-to-sodi...
Unfortunately Google patented a method of manufacturing sodium ion batteries, so it will be interesting to see how their portfolio will be affected when they are broken up:
https://patents.google.com/patent/WO2017067994A1/en
Like with all things tech these days, it's not about making things, but who has the most access to capital and lawyers.
Apologies this got so long, I'm an organic LLM with OCD tendencies.
akira2501
Home labs are great. They are a good learning tool to understand systems in _isolation_.
They're terrible for understanding emergent properties of production systems and how to defend yourself against active and passive attacks. Critically, you also need to know how to unwind an attack after you have been bitten by one. These are the most important parts of "self hosting."
Otherwise, you might be getting in the habit of building big Rube Goldberg machines that are never going to be possible to deploy in any real production scenario.
Make it real once in a while.
sgarland
> Otherwise, you might be getting in the habit of building big Rube Goldberg machines that are never going to be possible to deploy in any real production scenario.
"Not with that attitude"
– People I have worked with.
zxexz
If the feedback loop is tight enough, anything is possible! Every crash becomes a dopamine rush!
justsomehnguy
One^W two things that make self-hosting a bit more attractive:
a) besides some bootstrapping nuances, you are not forced to have a working phone number to use a resource. It's usually not a problem until... well, until it becomes a problem. Just like for me yesterday, when no matter what I tried, I couldn't register a new Google account. There is just no option other than SMS confirmation.
b) there are far fewer things changing 'for your own convenience', like the quiet removal of any option to pre-pay for Fastmail.
PS: oh, and Dynadot (which I was happily using for more than 10 years) decided (for my convenience, of course) to change the security mechanism they had used for years. Of course I don't remember the answer to the security question, and now I'm forced to never, ever migrate away from them, because I literally can't.
floating-io
> quiet removal of any option to pre-pay for Fastmail
Eh?
I just purchased a new 12 month Fastmail plan for my business with no issue a few weeks ago.
Up to 36 months is still listed on their pricing page...
justsomehnguy
Refer to https://news.ycombinator.com/item?id=41242945
For years I just uploaded a lump sum and it was spent, year by year, as I used the service. This way I didn't need to worry about being left without email when the paid period was over, and I didn't overpay much in case I needed to cancel early.
Now I need to be sure I'll be around with a working credit card when the purchased period ends... and what if I won't be? Do I need to jump through cancel-and-reorder hoops every couple of years? More importantly, am I sure that a couple of years from now those hoops will still work, or that some very unlucky coincidence won't wipe my decade+ of email? Sure, the Fastmail folks would be so sorry, but... that wouldn't help me.
floating-io
Okay, I misunderstood what you meant.
That said, I feel like you're asking for something that is fairly non-standard in the industry in general, though I could be wrong; I've never tried to do things that way myself.
If I were fastmail, those tax issues they mention would definitely take precedence over other issues in any event. Writing your own sales tax/VAT/whatever software strikes me as a special kind of hell given all the tax laws that have to be supported (and kept updated!) for every different jurisdiction out there.
For me, being able to pay three years in advance and only have to renew once every three years is fine, and they did offer a workaround for you, but to each their own I suppose.
kkfx
A small suggestion about resources: try using NixOS/Guix System instead of containers to deploy home services, you'll discover that in a fraction of resources you get much more, stability, documentation and easy replication included.
Containers today, like full-stack virtualization on x86 before them, are advertisement-driven tech pushed because proprietary software vendors and cloud providers need them; others do not need them at all. Developers who work for themselves, and general users, should learn this: if you sell VPSes and the like, you obviously need them, but if you build your own infra from bare metal, adding them just wastes resources and adds dependencies instead of simplifying life.
otter-in-a-suit
We use nix at work. I'm not a huge fan - I find it too opinionated. Appreciate it for what it is, though, and understand its fans.
Since at $work, we run K8s + containers in some shape or form (as well as in... basically all previous jobs), using the tech that I use in the "real world" at home is somewhat in line with my reasoning about why the time investment for self hosting is worth it as a learning exercise.
snowpalmer
I agree that removing the container would be better on resources.
However, most self-hosted software is already "pre-packaged" in Docker containers. It's much easier to grab that "off-the-shelf" than have to build out something custom.
pxc
> However, most self-hosted software is already "pre-packaged" in Docker containers. It's much easier to grab that "off-the-shelf" than have to build out something custom.
Imo, the quality and documentation level of an application's build system is a valuable signal in determining its overall health and competency. It usually (though not always) ends up being that well-maintained software written for modern runtimes is very easy to build from source and run. Even if I do end up using the developer's container image, I generally want to check out their manual deployment documentation.
transpute
NixOS improves the reproducibility of both self-hosted software and configuration state.
kkfx
In NixOS/Guix System there is no need for such packaging; the configuration language/package manager takes care of everything, configuration included.
Let's say you want Jellyfin?

    jellyfin = {
      enable = true;
      user = "whatyouwant";
    }; # jellyfin

under services and you get it. You want a more complex thing, let's say Paperless?

    paperless = {
      enable = true;
      address = "0.0.0.0";
      port = 58080;
      mediaDir = "/var/lib/paperless/media";
      dataDir = "/var/lib/paperless/data";
      consumptionDir = "/var/lib/paperless/importdir";
      consumptionDirIsPublic = true;
      settings = {
        PAPERLESS_AUTO_LOGIN_USERNAME = "admin";
        PAPERLESS_OCR_LANGUAGE = "ita+eng+fra";
        PAPERLESS_OCR_SKIP_ARCHIVE_FILE = "with_text";
        PAPERLESS_OCR_USER_ARGS = {
          optimize = 1;
          pdfa_image_compression = "auto";
          continue_on_soft_render_error = true;
          invalidate_digital_signatures = true;
        }; # PAPERLESS_OCR_USER_ARGS
      }; # settings
    }; # services.paperless

Chromium with extensions etc.?

    chromium = {
      enable = true;
      # see Chrome Web Store ext. URL
      extensions = [
        "cjpalhdlnbpafiamejdnhcphjbkeiagm" # uBlock Origin
        "pkehgijcmpdhfbdbbnkijodmdjhbjlgp" # Privacy Badger
        "edibdbjcniadpccecjdfdjjppcpchdlm" # I still don't care about cookies
        "ekhagklcjbdpajgpjgmbionohlpdbjgc" # Zotero Connector
        # ...
      ]; # extensions
      # see https://chromeenterprise.google/policies/
      extraOpts = {
        "BrowserSignin" = 0;
        "SyncDisabled" = true;
        "AllowSystemNotifications" = true;
        "ExtensionManifestV2Availability" = 3; # until 06/25
        "AutoplayAllowed" = false;
        "BackgroundModeEnabled" = false;
        "HideWebStorePromo" = false;
        "ClickToCallEnabled" = false;
        "BookmarkBarEnabled" = true;
        "SafeSitesFilterBehavior" = 0;
        "SpellcheckEnabled" = true;
        "SpellcheckLanguage" = [
          "it"
          "fr"
          "en-US"
        ];
      }; # extraOpts
    }; # chromium

Etc., etc. You configure the entire deployment and it gets generated. A custom live image, with auto-partitioning and auto-install? Same. A set of similar hosts in a network (NixOps/Disnix), and so on. The configuration language does it all: fetching sources and building if a pre-built binary isn't there, setting up a DB, setting up NGINX + Let's Encrypt SSL certs. There are per-derivation (package) options you can set, some you MUST set, defaults, etc. It's MUCH easier than anything else; the only issue is how many ready-made derivations exist, and in packaging terms Guix is very well placed, while NixOS carries more than Arch. Something will always be missing or incomplete until devs learn the system on their own and start using Nix/Guix to develop, too, so deps are really tested in dedicated environments and so on, and users always get a clean system, can switch to and boot a previous version, and so on.
stackskipton
Sigh Nix users.
I need to run Uptime Kuma. Here is the Docker Compose: https://github.com/louislam/uptime-kuma/blob/1.23.X/docker/d...
What is the equivalent in Nix?
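For what it's worth, a sketch of what this could look like in configuration.nix, assuming the services.uptime-kuma module that recent nixpkgs releases ship (option names and defaults may differ by release; check the NixOS options search before relying on it):

```nix
# Sketch: Uptime Kuma via the NixOS module instead of Docker Compose.
services.uptime-kuma = {
  enable = true;
  settings = {
    HOST = "127.0.0.1";  # listen address (env var passed to the app)
    PORT = "3001";       # Uptime Kuma's usual default port
  };
};
```

The module wires up the systemd unit and state directory; a reverse proxy (e.g. services.nginx) would still be configured separately.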
apitman
I've heard good things about Nix's dependency management. Does it seamlessly handle the case where you have 3 different apps that all require 3 different versions of Python with different Python dependencies?
Also, does it offer any isolation between apps and between apps and the OS, especially the filesystem?
kkfx
Yes, by design, meaning you do not have an FHS structure where "installing" means copy this into /usr/share, this into /usr/lib... All packages just see a small "virtual" root tree, meaning a network of symlinks.
You can choose for package A to see packages B and C, but not D. Different versions simply mean different packages, because alongside the plain package (say, the current stable version) there is also package_major_minor for the various supported major or minor versions. You can likewise choose to fetch sources from a specific commit of a public repo, build them, and link them into the system.
Packages are not "isolated" in the container sense, like small userlands on a common kernel, but in the sense of "having different views of the common system". So you avoid wasting a gazillion units of storage, RAM, and CPU, plus everything is still from upstream or from yourself; there are no outdated, forgotten deps left around.
Plus, with everything configured in the config (well, not mandatory, but that's the typical way), you have a fully reproducible system. Maybe not binary-reproducible, in the sense that "package A is now version x.y" when you rebuild, but everything can be rebuilt at the current state of nixpkgs/Guix from a few KB of text, typically versioned in a repo. An update never overwrites anything; it puts the new versions in a special tree (/nix/store or /gnu/store) and updates symlinks accordingly. If something doesn't work, you restore the previous state (before garbage collection).
Practically, it does the same as illumos with IPS (the package manager) integrated with ZFS (and the bootloader); here, instead of ZFS clones and snapshots, you have a poor man's version with symlinks.
There is no isolation by default, meaning it's a single system, but you can "partition" the system in various ways. Of course, on Linux there are no illumos zones or FreeBSD jails; the state of OS-level virtualization is far behind those Unices.
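As a concrete sketch of the "three apps, three Pythons" case (the app names and package choices are made up for illustration; python310/python311/python312 are the attribute names current nixpkgs uses):

```nix
# shells.nix -- one isolated environment per app, each pinning its own
# Python interpreter and its own Python packages; nothing leaks between them.
{ pkgs ? import <nixpkgs> { } }:
{
  appA = pkgs.mkShell {
    packages = [ (pkgs.python310.withPackages (ps: [ ps.requests ])) ];
  };
  appB = pkgs.mkShell {
    packages = [ (pkgs.python311.withPackages (ps: [ ps.flask ])) ];
  };
  appC = pkgs.mkShell {
    packages = [ (pkgs.python312.withPackages (ps: [ ps.numpy ])) ];
  };
}
```

Entering one with `nix-shell shells.nix -A appA` gives that app its own interpreter and site-packages, with no global state shared between the three.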
Arelius
> Does it seamlessly handle the case where you have 3 different apps that all require 3 different version of python with different python dependencies?
Ohh yes! That is exactly what it does, in a repeatable and documented way.
But it also does a lot of work to achieve this, and it doesn't hide the machinery much. That means you're likely to have to care about how it does all that work, and if you start doing something weird, you might have to do a lot of work yourself.
granra
The first, yes.
The second you can get by setting certain options on the systemd service for your apps, but that is true on any systemd distro.
VTimofeenko
Containers allow running software that does not have a nix package available and one could not be bothered to write. My lab is fully on NixOS, but a couple of services are happily chugging along as containers in podman.
kkfx
So you prefer manually handling such containers, which you probably don't know well enough, perhaps even for serious usage, instead of writing a derivation? No reproducibility, etc.
For quickly testing something, nothing to object to; but for real, albeit personal, homelab usage...
VTimofeenko
> manually handling such containers
Well not by hand of course. virtualisation.oci-containers is pretty darn good. Podman+systemd provides some local sandboxing. Network-wide firewall prevents odd traffic from happening.
> No reproducibility etc
Images can be pinned to specific versions providing some reproducibility guarantees. Same goes for configs mounted as volumes.
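A sketch of what that pinning looks like (the container, image tag, and paths are illustrative; virtualisation.oci-containers is the module mentioned above):

```nix
# One declaratively managed podman container, pinned to a specific tag,
# with its config mounted read-only from the host.
virtualisation.oci-containers = {
  backend = "podman";
  containers.frigate = {
    image = "ghcr.io/blakeblackshear/frigate:0.14.1";  # pinned version
    volumes = [ "/etc/frigate/config.yml:/config/config.yml:ro" ];
    ports = [ "127.0.0.1:5000:5000" ];
  };
};
```

NixOS generates a systemd unit per container, so upgrades are just a tag bump in the config followed by a rebuild.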
There is also some software (libedgetpu using bazel as a prime example IME) with a complicated build process. Packaging it is a major PITA and the nixpkgs issue[1] is a graveyard of attempts to do so. I just build it using a container and push the binary version to the node with coral tpu.
pxc
An escape hatch like that can be really nice for trying software out on a temporary basis, before you know if you care about it enough to write a package and a service module.
Given time limitations, I can imagine living with some applications like that for quite some time, their packaging sitting at the bottom of a long todo list. :)
XCSme
> My VPS, hosted by Contabo, randomly went down for almost 4 days.
Don't use Contabo! I have had this issue with different servers almost monthly: servers going down for 1 or 2 days without any announcement or communication. They never say whether anything is wrong, never apologize, have regular "unplanned maintenances", and contacting support is almost impossible (or takes 3-4 days for a reply). Like OP, I am also migrating from Contabo to Hetzner.
I self-host a lot of things myself. There is one scary downside I've learned in a painful way.
A friend and I figured all this out together since we met in college in the 1980s. He hosted his stuff and I hosted mine. For example, starting in 1994, we had our own domain names and hosted our own email. Sometimes we used each other for backup (e.g., when we used to host our own DNS for our domains at home as well as for SMTP relays). We also hosted for family and some friends at the same time.
Four years ago he was diagnosed with cancer and a year later we lost him. It was hard enough to lose one of the closest friends I ever had. In his last weeks, he asked if I could figure out how to support his family and friends in migrating off the servers in his home rack and onto providers that made more sense for his family's level of technical understanding. This was not simple because I had moved 150 miles away, but of course I said yes.
Years later, that migration is close to complete, but it has been far more difficult than any of us imagined. Not because of anything technical, but because every step of it is a reminder of the loss of a dear friend. And that takes me out of the rational mindset I need to be in to migrate things smoothly and safely.
But, he did have me as a succession plan. With him gone, I don't have someone who thinks enough like me to be the same for my extended family. I'm used to thinking about things like succession plans at work, but it's an entirely new level to do it at home.
So, I still host a lot, but the requirements are much more thoroughly thought through. For example, we use Paperless-ngx to manage our documents. Now there's a cron job that rsync's the collection of PDFs to my wife's laptop every hour so that she will have our important papers if something happens to me.
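That kind of job is essentially a one-liner; a sketch, with the paths and hostname made up for illustration:

```shell
# crontab entry: every hour, mirror the Paperless document export to the laptop.
# --delete keeps the mirror exact; drop it if you'd rather never prune old files.
0 * * * * rsync -az --delete /var/lib/paperless/media/ wife-laptop:Documents/paperless-mirror/
```

The important part is that the destination is a plain folder of PDFs, readable without any of the self-hosted stack running.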
Thinking carefully enough to come up with reliable backups like this makes things noticeably harder because not all solutions are as obvious and simple. And it's not something that ever occurred to us in our 20s and 30s, but our families were one tragedy away from not knowing how to access things that are important soon after we were gone (as soon as the server had trouble). There is more responsibility to this than we previously realized.