Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

hardwaresofton

Deploying k8s has gotten a lot easier these days -- some alternatives in this space:

- https://docs.k0sproject.io (https://github.com/k0sproject/k0s)

- https://k3s.io (https://github.com/k3s-io/k3s/)

k0s is my personal favorite and what I run -- the decisions they have made align very well with how I want to run my clusters, versus k3s, which is similar but slightly different. Of course you also can't go wrong with kubeadm[0][1] -- it was good enough to use minimally (as in, you could imagine sprinkling a tiny bit of Ansible and maintaining a cluster easily) years ago, and has only gotten better.

[0]: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

[1]: https://github.com/kubernetes/kubeadm

sandGorgon

k3s is brilliant. We run production clusters on it.

The problem with k3s is that the architecture-level libraries are a bit outdated. Early on, that was for a particular reason - ARM64 (Raspberry Pi) support. But today practically everyone is on ARM - even AWS.

For example, the network library is Flannel. Almost everyone switches to Calico for any real workloads on k3s, yet it is not even packaged as an alternative - you have to do it yourself.

The biggest reason for this is a core tenet of k3s - small size. k0s has taken the opposite approach here. 50MB vs 150MB is not really significant. But it opens up alternative paths which k3s is not willing to take.

In the long run, while I love k3s to bits... I feel that k0s, with its size-is-not-the-only-thing approach, is far more pragmatic and open for adoption.
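
For context, k0s makes the CNI choice a first-class setting in its cluster config rather than a do-it-yourself swap. A rough sketch of selecting Calico (field names may vary across k0s versions):

```yaml
# k0s.yaml -- illustrative sketch, not a complete config
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: calico   # instead of the default CNI
    calico:
      mode: bird       # BGP routing; "vxlan" is the overlay alternative
```

This is the kind of "packaged alternative" the comment is contrasting with k3s, where switching off Flannel is left to the operator.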

hardwaresofton

Agreed on 100% of your points -- you've hit on some of the reasons I chose (and still choose) k0s -- Flannel is awesome but it's a little too basic (my very first cluster was the venerable Flannel setup, I've also done some Canal). I found that k0s's choice of Calico is the best -- I used to use kube-router (it was and still is amazing, great all-in-one tool) heavily but some really awesome benchmarking work[0] caused me to go with Calico.

Most of the other choices that k0s makes are right up my alley as well. I personally like that they're not trying to ride this IoT/Edge wave. Nothing wrong with those use cases, but I want to run k8s on powerful servers in the cloud, and I just want something that does its best to get out of my way (and of course, k0s goes above and beyond on that front).

> The biggest reason for this is a core tenet of k3s - small size. k0s has taken the opposite approach here. 50mb vs 150mb is not really significant. But it opens up alternative paths which k3s is not willing to take.

Yup! 150MB is nothing to me -- I waste more space in wasted docker container layers, and since they don't particularly aim for IoT or edge, it's perfect for me.

k3s is great (alexellis is awesome), k0s is great (the team at mirantis is awesome) -- we're spoiled for choice these days.

Almost criminal how easy it is to get started with k8s (and with a relatively decent standards-compliant setup at that!), almost makes me feel like all the time I spent standing up, blowing up, and recreating clusters was wasted! Though I do wonder if newcomers these days get enough exposure to things going wrong at the lower layers as I did.

[0]: https://itnext.io/benchmark-results-of-kubernetes-network-pl...

sandGorgon

Actually, k3s has a cloud deployment startup - Civo Cloud - also using it. I would say that the production usage of k3s is outstripping the Raspberry Pi usage... however, the philosophical underpinnings remain very RPi-centric.

Things like proxy protocol support (which is pretty critical behind cloud load balancers), network plugin choice, etc. are going to be very critical.

yjftsjthsd-h

> For example the network library is Flannel. Almost everyone switches to Calico for any real work stuff on k3s.

What's the tradeoff? Why not flannel for Real Work™?

hardwaresofton

You could certainly use Flannel in production (Canal = Flannel + Calico) but I like the features that Calico provides, in particular:

- network policy enforcement

- intra-node traffic encryption with wireguard

- Calico does not use VXLAN (it sends routes via BGP and does some gateway trickery[0]), so it has slightly less overhead

[0]: https://stardomsolutions.blogspot.com/2019/06/flannel-vs-cal...
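
The network policy enforcement mentioned above is exercised through the standard Kubernetes NetworkPolicy API, which Calico implements (Flannel alone does not). A minimal sketch, with made-up names for illustration:

```yaml
# Only pods labelled app=frontend may reach the api pods on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

With a CNI that doesn't implement the API, a policy like this is silently ignored, which is one of the practical reasons people switch.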

arcticfox

Is developing locally with one of these k8s implementations a good option? My current workflow is to develop locally with a combination of bare (non-containerized) servers and Docker containers, but all of my deployment is to a hosted k8s cluster.

If developing locally with k8s would likely be a better workflow, are any of these options better than the others for that?

r-bar

The best solution I have found for developing locally on k8s is k3d [0]. It quickly deploys k3s clusters inside docker. It comes with a few extras like adding a local docker registry and configuring the cluster(s) to use it. It makes it super easy to setup and tear down clusters.

I usually only reach for it when I am building out a Helm chart for a project and want to test it. Otherwise docker-compose is usually enough and is less boilerplate to get an app and a few supporting resources up and running.

One thing I have been wanting to experiment with more is using something like Tilt [1] for local development. I just have not had an app that required it yet.

[0] https://k3d.io/ [1] https://tilt.dev/
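
The registry-plus-cluster setup described above can also be captured in a k3d config file so it's reproducible. A hedged sketch (the schema version and field names depend on your k3d release -- check `k3d config` docs for your version):

```yaml
# k3d cluster config sketch: 1 server, 2 agents, and a local registry
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev
servers: 1
agents: 2
registries:
  create:
    name: dev-registry
    host: "0.0.0.0"
    hostPort: "5000"
```

You would then bring it up with something like `k3d cluster create --config <file>`, and tear it down just as quickly.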

ra7

The simplest way to bring up a local k8s cluster on your machine for development is to use Kind (https://kind.sigs.k8s.io/).

phumberdroz

The best time I've had so far was with dockertest[0] in Go. It allows you to spin up containers as part of your test suite, which you can then test against. So we have one Go pkg that has all the containers we need regularly.

The biggest benefit: there is no need to have a docker-compose file or other resources running locally; you can just run the test cases as long as you have Docker installed.

[0] https://github.com/ory/dockertest

sethammons

We deploy with k8s but few of us develop with it. Nearly our whole department uses docker-compose to get our dependencies running and to manage our acceptance tests locally. Some people will leverage our staging k8s cluster via kubectl, and others just leverage our build pipeline (Buildkite + Argo CD), which takes you to staging and then on into production.

EdwardDiego

I use Minikube. I run `eval $(minikube docker-env)` and push my images straight into it - after patching imagePullPolicy to "IfNotPresent" for any resources using snapshot images - as K8s defaults to IfNotPresent, unless the image tag ends with "snapshot", in which case it defaults to "Always"...
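
A minimal manifest sketch of that patch (the image and names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      # Image built directly into minikube's docker daemon via docker-env
      image: myapp:snapshot
      # Use the locally built image; don't attempt a remote registry pull
      imagePullPolicy: IfNotPresent
```

Without the explicit `imagePullPolicy`, the kubelet may try to pull the tag from a registry where it doesn't exist and the pod will sit in ImagePullBackOff.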

ofrzeta

I had a good time with Kubespray. Essentially you just need to edit the Ansible inventory and assign the appropriate roles.

cyberpunk

Sure, if it works. Upgrades are somewhat fraught, though (I mean, upgrading a 20-node cluster is an hour-long Ansible run - or it was when we were using it).

We switched to rke, it’s much better.

ofrzeta

An hour to upgrade a 20-node cluster doesn't seem unreasonable to me - when you are doing a graceful upgrade that includes moving workloads between nodes. I don't know anything about rke. Might be interesting, but it seems different enough from upstream Kubernetes that you have to learn new things. Seems to me a bit similar to OpenShift 4, where the cluster manages the underlying machines.

rsmitty

(Full disclaimer - I'm an engineer at Talos, but I believe it's pretty relevant here)

If folks are interested in this kind of K8s deployment, they might also be interested in what we're doing at Talos (https://talos.dev). We have full support for all of these same environments (we have a great community of k8s-at-home folks running with Raspberry Pis) and a bunch of tooling to make bare metal easier with Cluster API. You can also do the minikube type of thing by running Talos directly in Docker or QEMU with `talosctl`.

Talos works with an API instead of SSH/Bash, so there are some interesting ease-of-use things baked in for operating K8s, like built-in etcd backup/restore, k8s upgrades, etc.

We're also right in the middle of building out our next release that will have native Wireguard functionality and enable truly hybrid K8s clusters. This should be a big deal for edge deployments and we're super excited about it.

c7DJTLrn

A few questions if I may. I administer a few thousand machines which are provisioned using Puppet, and I've had my eye on a more immutable style of administration integrating K8S.

  - How does Talos handle first getting on to the network? For example, some environments might require a static IP/gateway for example to first reach the Internet. Others might require DHCP.
  - How does Talos handle upgrades? Can it self upgrade once deployed?
  - What hardware can Talos run on? Does it work well with virtualisation?
  - To what degree can Talos dynamically configure itself? What I mean by this is if that a new disk is attached, can it partition it and start storing things on it?
  - How resilient is Talos to things like filesystem corruption? 
  - What are the minimum hardware requirements?
Please forgive my laziness but maybe other HNers will have the same questions.

rsmitty

Hey, thanks for the questions. I'll try to answer them in-line:

- How does Talos handle first getting on to the network? For example, some environments might require a static IP/gateway for example to first reach the Internet. Others might require DHCP.

For networking in particular, you can configure interfaces directly at boot by using kernel args.

But that being said, Talos is entirely driven by a machine config file and there are several different ways of getting Talos off the ground, be it with ISO or any of our cloud images. Generally you can bring your own pre-defined machine configs to get everything configured from the start or you can boot the ISO and configure it via our interactive installer once the machine is online.

We also have folks that make heavy use of Cluster API and thus the config generation is all handled automatically based on the providers being used.

- How does Talos handle upgrades? Can it self upgrade once deployed?

Upgrades can be kicked off manually with `talosctl` or can be done automatically with our upgrade operator. We're currently in the process of revamping the upgrade operator to be smarter, however, so it's in flux a bit. As with everything in Talos, upgrades are controllable via the API.

Kubernetes upgrades can also be performed across the cluster directly with `talosctl`. We’ve tried to bake in a lot of these common operations tasks directly into the system to make it easier for everyone.

- What hardware can Talos run on? Does it work well with virtualisation?

Pretty much anything ARM64 or AMD64 will work. We have folks that run in cloud, bare metal servers, Raspberry Pis, you name it. We publish images for all of these with each release.

Talos works very well with virtualization, whether that's in the cloud or with QEMU or VMWare. We've got folks running it everywhere.

- To what degree can Talos dynamically configure itself? What I mean by this is if a new disk is attached, can it partition it and start storing things on it?

Presently, the machine configuration allows you to specify additional disks to be used for non-Talos functions, including formatting and mounting them. However, this is currently an install-time function. We will be extending this in the future to allow for dynamic provisioning utilizing the new Common Operating System Interface (COSI) spec. This is a general specification which we are actively developing both internally and in collaboration with interested parties across the Kubernetes community. You can check that out here if you have interest: https://github.com/cosi-project/community

- How resilient is Talos to things like filesystem corruption?

Like any OS, filesystem corruption can indeed occur. We use standard Linux filesystems which have internal consistency checks, but ultimately, things can go wrong. An important design goal of Talos, however, is that it is designed for distributed systems and, as such, is designed to be thrown away and replaced easily when something goes awry. We also try to make it very easy to backup the things that matter from a Kubernetes perspective like etcd.

- What are the minimum hardware requirements?

Tiny. We run completely in RAM and Talos is less than 100MB. But keep in mind that you still have to run Kubernetes, so there's some overhead there as well. You’ll have container images which need to be downloaded, both for the internal Kubernetes components and for your own applications. We're roughly the same as whatever is required for something like K3s, but probably even a bit less since we don’t require a full Linux distro to get going.

fy20

I tried a few incarnations of self-hosted k8s a few years ago, and the biggest problem I had was persistent storage. If you are using a cloud service, they will integrate k8s with whatever persistent storage they offer, but if you are self-hosting you are left on your own. It seems most people end up using something like NFS or hostPath - but that ends up being a single point of failure. Have there been any developments on this recently, aimed at people wanting to run k8s on a few Raspberry Pi nodes?

aloknnikhil

Have you tried using a CSI driver to help you do this? https://kubernetes-csi.github.io/docs/drivers.html

A brief description of what CSI is - https://kubernetes.io/blog/2019/01/15/container-storage-inte...

lvncelot

I've had good experiences using the Rook operator for creating a CephFS cluster. I know that you can run it on k3s, but I don't know whether RaspberryPi nodes are sufficient. Maybe the high RAM Raspi 4 ones.

sethammons

We do this at Twilio SendGrid

nullify88

I've had good experiences with Rook on k3s in production. Not on raspis though.

hardwaresofton

I'm a bit biased but Rook[0] or OpenEBS[1] are the best solutions that scale from hobbyist to enterprise IMO.

A few reasons:

- Rook is "just" managed Ceph[2], and Ceph is good enough for CERN[3]. But it does need raw disks (nothing saying these can't be loopback drives but there is a performance cost)

- OpenEBS has a lot of choices (Jiva is the simplest and is Longhorn[4] underneath, cStor is based on uZFS, Mayastor is their new thing with lots of interesting features like NVMe-oF, there's localpv-zfs which might be nice for your projects that want ZFS, and regular host provisioning as well).

Another option, which I rate slightly less, is LINSTOR (via piraeus-operator[5] or kube-linstor[6]). In my production environment I run Ceph -- it's almost certainly the best off-the-shelf option due to the features, support, and ecosystem around Ceph.

I've done some experiments with a reproducible repo (Hetzner dedicated hardware) attached as well[7]. I think the results might be somewhat scuffed but worth a look maybe anyways. I also have some older experiments comparing OpenEBS Jiva (AKA Longhorn) and HostPath [8].

[0]: https://github.com/rook/rook

[1]: https://openebs.io/

[2]: https://docs.ceph.com/

[3]: https://www.youtube.com/watch?v=OopRMUYiY5E

[4]: https://longhorn.io/docs

[5]: https://github.com/piraeusdatastore/piraeus-operator

[6]: https://github.com/kvaps/kube-linstor

[7]: https://vadosware.io/post/k8s-storage-provider-benchmarks-ro...

[8]: https://vadosware.io/post/comparing-openebs-and-hostpath/
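
Once Rook is installed, consuming Ceph from a workload is just a normal PVC against the StorageClass that the Rook example manifests create. A sketch (the `rook-ceph-block` name comes from Rook's stock examples and may differ in your setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  # StorageClass provided by the Rook/Ceph example manifests
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 10Gi
```

That's the appeal of the CSI-based options above: the application side stays plain Kubernetes regardless of which storage backend you pick.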

tyingq

Distributed minio[1] maybe? Assuming you can get by with S3-like object storage.

[1] https://docs.min.io/docs/distributed-minio-quickstart-guide....

regularfry

I'm using longhorn, but it's been cpu-heavy.

nullwarp

I really liked longhorn but the CPU usage was ultimately too high for our use case.

jayrwren

seaweedfs seems pretty great for cloud storage: http://seaweedfs.github.io

chrislusf

Thanks! I am working on SeaweedFS. https://github.com/chrislusf/seaweedfs

There are also SeaweedFS CSI Driver: https://github.com/seaweedfs/seaweedfs-csi-driver

rvdmei

I guess the easiest would be Longhorn on top of k3s.

nullify88

I've found Ceph is more tolerant of failures and better at staying available. Longhorn was certainly easier to set up and has lower operating requirements, but we encountered outages.

Kostic

Of all the low-ops K8s distributions, k3s[0] is the best from the perspective of initial setup, maintenance, and usage on less powerful hardware.

There are now even higher-level tools, such as k3os and k3sup, to further reduce the initial deployment pains.

MicroK8s prides itself on 'No APIs added or removed'. That's not that positive in my book. K3s, on the other hand, actively removes the alpha APIs to reduce binary size and memory usage. Works great if you only use stable Kubernetes primitives.

[0] https://k3s.io/

alexellisuk

Thanks for mentioning K3sup [0]

I used Microk8s on a client project late last year and it was really painful, but I am sure it serves a particular set of users who are very much into the Snap/Canonical ecosystem.

In contrast, K3s is very light-weight and can be run in a container via the K3d project.

If folks want to work with K8s upstream or develop patches against Kubernetes, they will probably find that KinD[1] is much quicker and easier.

Minikube has also got a lot of love recently, and can run without having a dependency on Virtual Box too.

[0] https://k3sup.dev/ [1] https://kind.sigs.k8s.io/docs/user/quick-start/

stavros

This looks interesting, what control plane do the nodes usually connect to? I'm trying to see the use case for me, where I have a main NAS in my house and a few disparate Raspberry Pis, but I'm not sure if I would run the control plane on the NAS or if I would use a hosted one somewhere else.

rcarmo

I've had a number of issues with k3s on very low spec hardware (typically ARM), where it would take up to 25-50% of CPU just sitting idle with no pods. Stopped using it for those scenarios a year ago, wonder if that's fixed.

Fiahil

I had the same issue; it wasn't fixed as of my last upgrade. I just let it eat some CPU: my Pi is somewhat busy anyway.

gnfargbl

Plain k8s has a fearsome reputation as being complex to deploy, which I don't think is quite deserved. It isn't totally straightforward, but the documentation does tend to make it sound a bit worse than it actually is.

I run a couple of small clusters and my Ansible script for installing them is pretty much:

  * Set up the base system. Set up firewall. Add k8s repo. Keep back kubelet & kubeadm.
  * Install and configure docker.
  * On one node, run kubeadm init. Capture the output.
  * Install flannel networking.
  * On the other nodes, run the join command that is printed out by kubeadm init.
Running in a multi-master setup requires an extra argument to kubeadm init. There are a couple of other bits of faffing about to get metrics working, but the documentation covers that pretty clearly.
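
The steps above could be sketched as a minimal Ansible play. This is illustrative only -- the group names are invented, the base-system setup is elided, and 10.244.0.0/16 is assumed because it's Flannel's default pod CIDR:

```yaml
# Illustrative sketch; real playbooks would also pin package versions,
# configure the container runtime, and apply the flannel manifest.
- hosts: first_master
  become: true
  tasks:
    - name: Initialise the control plane (idempotent via 'creates')
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Capture a join command for the other nodes
      command: kubeadm token create --print-join-command
      register: join_cmd

- hosts: workers
  become: true
  tasks:
    - name: Join the cluster using the captured command
      command: "{{ hostvars[groups['first_master'][0]].join_cmd.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
```

The `creates:` guards are what make re-running the play safe, which is most of the "tiny bit of Ansible" maintenance story.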

I'm definitely not knocking k3s/microk8s, they're a great and quick way to experiment with Kubernetes (and so is GKE).

professor_v

I remember about 5 years ago I tried to deploy it on CoreOS using the available documentation and literally couldn't get it working.

I haven't done a manual deployment since. I hope it got significantly better and I may be an idiot but the reputation isn't fully undeserved.

The problem back then was also that this was usually the first thing you had to do to try it out. Doing a complicated deployment without knowing much about it doesn't make it any easier.

Uberphallus

Same here. I just wanted to play with it for my toy projects and personal services, so I didn't really push a whole lot, but it just felt like there were too many moving parts to figure out. I didn't need autoscaling or most of the advanced features of k8s, so I just went back to my libvirt-based set of scripts.

ClumsyPilot

I run Kubernetes on a home server, but it took me a couple of weeks of testing and trial and error to arrive at a setup I was happy with, and I already had experience with K8s in the cloud. At the time I was stuck without a work laptop, so I had time to self-educate, but normally I wouldn't have that kind of time to sink in.

mrweasel

Deploying a Kubernetes cluster isn't really too complex, and it doesn't even take that long. It's the long-term maintenance that concerns me.

gnfargbl

This concerns me too. What should I be worrying about? The main maintenance problem that I have experienced so far is that automatic major version updates can break the cluster (which is why I now keep back those packages). Are there other gotchas that I'm likely to experience?

mrweasel

Version updates don't normally break the cluster, in my experience, but it might break things like Helm charts.

The thing that concerns me the most is managing the internal certificates and debugging networking issues.

tasubotadas

TBH, I am not that big a fan of microk8s. I have it deployed on a VPS and it's far from stable.

The server itself is probably overprovisioned, but I still struggle with responsiveness, logging, and ingress/service management. What's also funny is that using Ubuntu's ufw service is not that seamless together with microk8s.

I am thinking of moving to k3s now. The only thing holding me back is that k3s doesn't use nginx for ingress, so I'll need to change some configs.

Also, the local storage options are not that clear.

pas

You can easily deploy the Nginx ingress controller on k3s.

ojhughes

I moved from k3s to microk8s for local development. I gave up on k3s because I needed calico CNI and it was a pain to set up, on microk8s it's just `microk8s enable calico`. I also found k3s a bit too opinionated with the default Traefik ingress and service-lb.

hardwaresofton

Traefik is probably the best Ingress out there capability-wise for now, I think. I've written a bit on it[0] before, but IMO that choice is a good one. I've even used it to do some fun external-in SSH connections[1]. I also use it to run a multi-tenant email setup (haraka at the edge + maddy). It's not like NGINX can't run SMTP or expose other ports, but Traefik is easier to manage -- CRDs instead of a ConfigMap update.

[0]: https://vadosware.io/post/ingress-controller-considerations-...

[1]: https://vadosware.io/post/stuffing-both-ssh-and-https-on-por...
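
The CRD style mentioned above looks like this for a plain HTTPS route -- a sketch with invented names; `letsencrypt` assumes an ACME certificate resolver already configured in Traefik's static config:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure          # Traefik's conventional HTTPS entrypoint
  routes:
    - match: Host(`example.com`) && PathPrefix(`/`)
      kind: Rule
      services:
        - name: whoami   # a ClusterIP Service in the same namespace
          port: 80
  tls:
    certResolver: letsencrypt   # assumed pre-configured ACME resolver
```

Because each route is its own CRD object, adding or removing one is a normal `kubectl apply`/`delete` rather than an edit to a shared ConfigMap.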

ojhughes

Maybe, but I prefer to decide myself, and in my case I need to test out different ingress controllers with our product. I had trouble getting Nginx ingress to work on k3s. FWIW, Project Contour is also quite nice and has dynamic config reload: https://projectcontour.io/

SOLAR_FIELDS

Is Traefik difficult to initially configure? I use the default ingress-nginx controller on my home lab setup (separate from the “official”, non default & non-free nginx-ingress, absolutely terrible name choices) and it seems to be ok for smaller use cases. It’s not without its idiosyncrasies though.

At previous employment with large scale clusters (thousands of nodes) Traefik seemed to be heavily preferred by the SRE’s in my org.

hardwaresofton

I personally think it's not that bad -- the documentation is a bit overwhelming because there are so many ways to configure it (some people are running with only docker, others using the CRDs, some people are using the built-in Ingress support, some using the new Gateway stuff). As with most things, if you read and mostly digest the documentation, you won't feel lost when it comes to setting it up -- I haven't run into any corners that were too hard to figure out or inconsistent (which is even worse).

You could get Traefik working inside your existing cluster actually by just letting NGINX route to it and seeing how easy it is to use that way -- though that may be more difficult than just spinning up a cluster on a brand new machine (or locally) and feeling your way around.

I can say that Traefik's dashboard makes it much easier to debug while it's running as it gives you a fantastic amount of feedback, Prometheus built in, etc.

zupzupper

I think they've moved off the Traefik ingress in favor of others out of the box FWIW.

PYTHONDJANGO

Is K8S still eating so many CPU cycles while idle?

Last year I checked, and every physical machine in a K8S cluster was burning 20-30% CPU - with zero payload, just to keep itself up!

Don't you feel that this is totally unacceptable in a world with well-understood climate challenges?

jscheel

It still sucks unfortunately

puppet-master

Looking forward to this being decoupled from snapd eventually. Until then, there's not the faintest hope I'd touch this when any alternative exists where snapd can be avoided.

alias_neo

I feel like snapd must be hindering the uptake of microK8s, surely? Canonical is pushing it hard, but snapd is a hard no.

Every time I see it mentioned, I check to see if they've ditched snapd yet; alas, today is not that day.

polskibus

I thought Kubernetes is not great for environments with poor network connectivity, which is quite common when dealing with Edge and IoT scenarios. Has that changed?

alexellisuk

Yes it's changed massively, but it's recent - only since 2019.

I am the creator of the k3sup (k3s installer) tool that was mentioned and have a fair amount of experience with K3s on Raspberry Pi too.

You might also like my video "Exploring K3s with K3sup" - https://www.youtube.com/watch?v=_1kEF-Jd9pw

https://k3sup.dev/

ngrilly

What has been changed to improve k8s for applications with poor connectivity between workers on the edge and the control plane in the cloud?

smarterclayton

Other than bugs around bad connections causing hangs (the kubelet is less vulnerable to pathological failures in networking causing it to stall), nothing significant.

Kube is designed for nodes to have continuous connectivity to the control plane. If connectivity is disrupted and the machine restarts, none of the workloads will be restarted until connectivity is restored.

I.e. if you can have up to 10m of network disruption then at worst a restart / reboot will take 10m to restore the apps on that node.

Many other components (networking, storage, per node workers) will likely also have issues if they aren’t tested in those scenarios (i’ve seen some networking plugins hang or otherwise fail).

That said, there are lots of people successfully running clusters like this as long as worst case network disruption is bounded, and it’s a solvable problem for many of them.

I doubt we’ll see a significant investment in local resilience in Kubelet from the core project (because it’s a lot of work), but I do think eventually it might get addressed in the community (lots of OpenShift customers have asked for that behavior). The easier way today is run edge single node clusters, but then you have to invent new distribution and rollout models on top (ie gitops / custom controllers) instead of being able to reuse daemonsets.

We are experimenting in various ecosystem projects with patterns that would let you map a daemonset on one cluster to smaller/distributed daemonsets on other clusters (which gets you local resilience).

detaro

Do you mean connectivity to the outside, or inside the cluster? The examples of Kubernetes and similar things in such scenarios I've seen usually had stable connectivity between nodes. E.g. an edge scenario would be one tiny well-connected cluster per location, remote-controlled over the (bad) external link through the API.

polskibus

I meant intra-cluster communication between nodes, when some nodes are on the Edge, some are inside the datacenter. The Edge may have pretty good overall connection to DC, but have to work with intermittent connectivity problems like dropping packets for several minutes, etc., without going crazy.

outworlder

> I meant intra-cluster communication between nodes, when some nodes are on the Edge, some are inside the datacenter.

Don't do this. Have two K8s clusters. Even if the network were reliable you might still have issues spanning the overlay network geographically.

If you _really_ need to manage them as a unit for whatever reason, federate them (keeping in mind that federation is still not GA). But keep each control plane local.

Then setup the data flows as if K8s wasn't in the picture at all.

detaro

Yeah, spanning a cluster from DC to Edge is probably not a good idea, but also generally not what I've seen suggested.

chrisweekly

Edge<->DC "dropping packets for several minutes"?

Where have you been suffering from this?

outworlder

> I thought Kubernetes is not great for environments with poor network connectivity,

No, it's ok. What you don't want to have is:

* Poor connectivity between K8s masters and etcd. The backing store needs to be reliable or things don't work right. If it's an IoT scenario, it's possible you won't have multiple k8s master nodes anyway. If you can place etcd and the k8s master on the same machine, you are fine.

You need a not-horrible connection between masters and workers. If connectivity gets disrupted for a long time and nodes start going NotReady, then, depending on how your cluster and workloads are configured, K8s may start shuffling things around to work around the (perceived) node failure (which is normally a very good thing). If this happens too often and for too long, it can be disruptive to your workloads. If it's sporadic, it can be a good thing to have K8s route around the failure.

So, if that is your scenario, then you will need to adjust. But keep in mind that no matter what you do, if the network is really bad, you would have to mitigate the effects regardless, Kubernetes or not. I can only really see it working if a) the network is terrible and b) your workloads are mostly compute-bound and don't rely on the network (or they communicate in bursts). Otherwise, a network failure means you can't reach your applications anyway...

utf_8x

MicroK8s is great if you're lazy. I've recently built a small-ish 3-node cluster hosting internal apps for a few hundred users, and pretty much the only setup I needed to do was: install it with Snap*, enable a few plugins (storage, traefik, coredns), run the join command on each node, and set up a basic load balancer using HAProxy** and keepalived***.

* I don't like Snap. Like, a lot. But unfortunately there aren't any other options at the moment.

** I have HAProxy load-balancing both the k8s API and the ingresses. Both on L4 so I can terminate TLS on the ingress controller and automatically provision Let's Encrypt certs using cert-manager[1].

*** Keepalived[2] juggles a single floating IP between all the nodes so you can just run HAProxy on the microk8s nodes instead of having dedicated external loadbalancers.

[1] https://cert-manager.io/docs/

[2] https://github.com/acassen/keepalived
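
The L4 passthrough described above is a short haproxy.cfg. A sketch with invented node IPs -- TLS is left untouched so cert-manager on the ingress can terminate it:

```
# L4 (TCP) passthrough for the k8s API
frontend k8s_api
    bind *:6443
    mode tcp
    default_backend k8s_api_nodes

backend k8s_api_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:6443 check
    server node2 10.0.0.12:6443 check
    server node3 10.0.0.13:6443 check

# L4 passthrough for HTTPS; the ingress controller terminates TLS
frontend https_ingress
    bind *:443
    mode tcp
    default_backend ingress_nodes

backend ingress_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check
    server node3 10.0.0.13:443 check
```

Keepalived then floats a single virtual IP across the nodes running this config, so whichever node holds the VIP fronts both the API and the ingress traffic.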

rubyist5eva

Is Snap the only way to install this on Linux? That's a non-starter for me.

SemiNormal

mkesper

You probably meant "it looks like it" or "it doesn't look like there's another (supported) way". Really sad, and it's a "classic" snap, so you don't get any of the isolation benefits - just performance penalties and lots of mounted snap filesystems.

achalkias

The MicroK8s team is actively working on a "strict" snap https://github.com/ubuntu/microk8s/issues/2053. There you will get all the isolation benefits and security enhancements. https://snapcraft.io/docs/snap-confinement

MicroK8s – Low-ops, minimal Kubernetes, for cloud, clusters, Edge and IoT - Hacker News