asabil
pcthrowaway
I agree, and I think the author was unfortunately using CoreOS because it's uncommon for cloud providers to have CoreOS images nowadays, and therefore a good opportunity for him to slip in a referral code for Vultr.
Is CoreOS even maintained anymore? I wouldn't expect it to be very secure if the most recent VM images were built in ~2020.
Would love another writeup just using Ubuntu or some other bog-standard Linux distro.
nine_k
CoreOS was acquired by Red Hat, and now "Fedora CoreOS" offers similar concepts.
Conveniently, RH also invented both Podman and systemd.
yencabulator
Note: similar, but definitely not the same. The real continuation of CoreOS was Flatcar Linux (https://www.flatcar.org/), but then the company behind that was bought by Microsoft (https://kinvolk.io/blog/2021/04/microsoft-acquires-kinvolk/) and I really don't expect much of them anymore...
nisa
> With quadlets, the only thing required is to drop a `.container` file in the right place and you end up with a container properly supervised by `systemd`.
Is it? He defines a .network file in that butane config; without it, this won't work. Not really obvious. I'm sure this has a use case and it's nice to have, but personally I'm not convinced. You can switch on user namespaces in the docker daemon or even run docker itself rootless - I guess if you are in Red Hat land and use podman anyway it's an alternative, but for instance, where does this thing log to? journalctl --user? Can I use a log shipper like Loki with this? Is there something like docker compose config that shows the fully rendered configuration? I personally don't see the point, and it feels overly complicated.
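For what it's worth, the single-file case really can be that small when no private network is needed. A minimal sketch of a rootless quadlet (the file path, image, and port are illustrative, not from the article); the container's output goes to the journal, readable with `journalctl --user -u whoami.service`:

```ini
# ~/.config/containers/systemd/whoami.container (hypothetical example)
[Unit]
Description=Demo container supervised by systemd

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, systemd generates `whoami.service` from this file and can start it like any other unit.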
twic
It will log to wherever you configure. By default, the journal. And [0]:
> Currently, Promtail can tail logs from two sources: local log files and the systemd journal (on AMD64 machines only).
Whether it supports user services, I don't know.
[0] https://grafana.com/docs/loki/latest/send-data/promtail/
hobo_mark
butwhat?
> Butane (formerly the Fedora CoreOS Config Transpiler, FCCT) translates human readable Butane Configs into machine readable Ignition Configs.
igwhat? Why, WHY?!
9dev
Right?? They wrung everything possible out of that metaphor, and then some more, and then another bit more.
INTPenis
.network is only required if you need a network, just like you define networks in docker compose for some containers to have one shared private network.
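Concretely, the split looks something like this; a sketch with made-up names (the subnet and file names are assumptions, not from the thread):

```ini
# ~/.config/containers/systemd/app.network (hypothetical)
[Network]
Subnet=10.89.0.0/24
Gateway=10.89.0.1
```

A `.container` file then joins that network by referencing the quadlet file name:

```ini
[Container]
Image=docker.io/library/nginx:latest
Network=app.network
```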
nisa
yeah, spent some time on the docs for this and it's pretty straightforward - the article and the repo kind of omit this, but it's also for a different use case. I was just irritated when I wrote that comment. It's really an OCI-container-to-systemd shim system that uses podman.
snapplebobapple
Is there an Alpine equivalent with systemd? I have grown to adore that OS for virtual machines running docker with compose.
INTPenis
I started using quadlets for new system designs a month ago and I feel like I'm neck deep in it now.
My conclusion is that there is absolutely no reason to stop using docker-compose if your developers are comfortable running one command, on one file, in one git root.
Quadlets are basically docker compose, in systemd. They've finally done it, systemd has it all and now it even has docker compose. ;)
That's really all it is in practice. I'm going to continue using it because I'm a RHEL kinda guy, but don't make it up to be magic.
cyberax
The next step: systemkubed.
runiq
Nah, we got that already. Quadlet can handle k8s manifests.
https://man.archlinux.org/man/quadlet.5.en#Kube_units_%5BKub...
INTPenis
Yes but when is someone going to add logic on top of this to make it a full blown distributed container orchestrator?
Could it be done with systemd and dbus? Can dbus be distributed among several systems like mmc on Windows? I have no idea, just some questions that popped into my head lately.
sleepybrett
This might be nice. I've seen a couple of teams that, instead of using a local `kind` or other local cluster to test their containers, make a docker compose file and then do a bunch of work turning that into kube manifests, then maintain them separately.
all2
I've got a home server that runs docker compose as a service in systemd at startup.
I'm naive; what's the difference between doing that and using these "quadlets"?
aaravchen
I'm in a similar situation and I've been looking at quadlets a lot.
The approach is quite different from docker-compose and not really a substitute. It makes your individual containers into systemd services in an easier way than creating a unit file that calls `docker run`. But you still have to manually define networks in .network files, and configure all your dependencies in unit file syntax.
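To illustrate the unit-file-syntax dependencies mentioned above (the file and image names are hypothetical): quadlet generates `db.service` from a `db.container` file, so the dependent container declares an ordinary systemd dependency on that generated unit:

```ini
# app.container (sketch) -- waits for the db container's unit
[Unit]
Wants=db.service
After=db.service

[Container]
Image=docker.io/library/httpd:latest
```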
If you're very familiar with writing systemd unit files, or really, really want to use systemd to manage all container-related objects individually instead of having your container daemon do most of the work with a single compose file per group of related objects, you should consider switching. But in my experience there's little to be gained, a LOT to be lost, and a lot of work to do the switch.
Dx5IQ
Why do you even need systemd if you're using docker-compose? As long as your docker daemon starts up, docker compose can do the rest.
otabdeveloper4
> finally
Systemd has had dependencies between services and containers since forever.
The only difference here seems to be podman instead of systemd-nspawn.
TekMol
To me, the big problem with Docker is that it makes a ton of changes to my system, even when I don't use it.
It runs a daemon, it uses a bunch of IPs, it mounts a ton of stuff ...
Is there a reason for all that noise and complexity?
There can't be a reason until I run a container, right? And even then, it seems way too much.
Is it different when using Quadlets?
jeppester
One of podman's selling points is that it doesn't need a daemon, and it runs without root privileges.
I'm not familiar enough with the lower level details to know how it works, but it certainly feels less like you are making "a ton of changes to the system" compared to docker
viraptor
This. Run podman without a daemon and with host networking and the only "weird" configuration you'll see is the overlay fs. Everything else is quite often an unnecessary overhead.
v3ss0n
What's worse is, it screws up the firewall rules. Podman avoids that, so quadlets should be fine? Podman is supposed to be a drop-in replacement for docker, but my last try (4 months ago) of using podman to run our development docker containers failed to build, so I think podman is still far from a docker replacement.
prmoustache
> Podman supposed to be drop-in replacement for docker but - last try (4 months ago) of podman to run our development docker containers fails to build so i think Podman is still far away from docker replacement.
I'd be curious what failed to build under podman. I have been using podman as a replacement for docker for the last 3 years and haven't found any blockers. Sometimes you can't reuse a docker-compose file shared by a third-party project straight away without adaptation, but if you know the differences between docker and podman you can build and run anything that also runs on docker.
magicalhippo
> Sometimes you can't reuse a docker-compose file shared by a third party project straight away without adaptation
So not a drop-in replacement then...
yrro
Hmm, podman also creates rules in the nat table if that's what you're talking about--_if_ you tell it to publish ports.
Of course, if you run rootless then there's no possibility to do so. :)
aorth
> What worse is , it screws up the firewall rules.
Yes! And it has a hard dependency on iptables, which I have removed from all my servers long ago in favor of nftables. Grrrrrr.
5e92cb50239222b
In recent versions of Ubuntu, Debian, and Arch /usr/bin/iptables is an iptables-compatible interface to nftables. That's what docker is using on those systems, and it works fine. You can manage those rules with /usr/bin/nft.
dbrgn
That's not the case (anymore). I run a NixOS based router with nftables (no iptables installed at all), and podman works just fine. It simply adds its NAT rules to nftables (unless you tell it not to).
As far as I know, this was introduced with the new networking stack (netavark).
Grimburger
Docker's firewall modifications are incredibly obnoxious and at the top of the list of gripes for me.
jve
Yeah, I also tried ~months ago. To be fair, I've only tested out dev containers with docker, so I'm too weak to debug what went wrong.
Any article out there on how to have windows + wsl2 + podman + vscode devcontainers working?
worksonmine
I suggest you start by just getting plain podman running the docker.io/hello-world container to reduce the complexity and simplify debugging if anything goes wrong. It's been about a decade since I last touched Windows, but if wsl2 is 1:1 with Linux, the official podman guide should be straightforward.
It's always easier to start with the bare minimum and build from there, and you will get a better understanding of the tools you're using.
prmoustache
I don't know what `vscode devcontainers` is but to run podman on wsl2 I simply installed a fedora wsl2 image (by importing the fedora container image if my memory is correct).
sofixa
> Is there a reason for all that noise and complexity?
To make it simple at the point of use. If developers had to configure firewalls and bind mounts for Docker to work, it never would have taken off as much as it did.
rglover
A safe heuristic is that whenever you introduce an abstraction to any tech stack, you can assume that it makes shortcuts that you wouldn't have to if you implemented the underlying parts yourself (w/ zero guarantee that it makes those shortcuts well). The latter meaning: short-term harder, long-term easier. Invert for any abstraction.
Related to Docker, I finally bit down and tried to do a simple deployment stack myself using systemd and OS-level dependencies instead of containers. I'll never go back. The simplicity of the implementation (and maintenance of it) made Docker irrelevant—and something I look at as a liability—for me. There's something remarkably zen about being able to SSH into a box, patch any dependency issues, and whistle on down the road.
otabdeveloper4
> Is there a reason for all that noise and complexity?
No. It's just Docker being shit.
soupdiver
This is about Podman, not Docker
arisudesu
The syntax and examples in the article assume the use of systemd as service manager. Does it work on distros without systemd too? Docker-compose does.
I also do not understand the separation of services into different files. Is it supposed to be more convenient? With docker-compose, the whole application stack is described in one file, before your eyes, within a single yaml hierarchy. With quadlets, it's not.
Lastly, I do not understand the author's emphasis on the AutoUpdate option. Is software supposed to update without administrator supervision? I guess not. What are the rules for new version matching: update to semver minor, patch version, does it skip release candidates, etc.?
dathinab
> Is software supposed to update without administrator supervision
yes, proper CI is a thing, and containers not being updated is actually quite a bit of an issue in the current software industry
especially combined with custom registries, auto-update is quite a neat thing
oh also, it's a systemd feature to let systemd manage your containers, so why are you asking if it works without systemd?
patrec
Updating your custom registry with new upstream dep versions after testing in CI with all the services you care about is fine. But the OP seems to just blindly pull the newest wordpress images from upstream, or am I missing something? How is this meant to work reliably?
I guess given wordpress's security record, having your site break from time to time is preferable to having it broken into from time to time.
dbrgn
I think you're mixing up some things. If you run the image "docker.io/wordpress:6.3.1", then the container will be updated when the image with that tag (6.3.1) is being re-built (which is a best practice, because that's the only way how you get security updates for the libraries in the base image). The tag is just a pointer to the latest image hash.
Many Docker images also provide "semantic version tags". Wordpress does too, so if you run the image "docker.io/wordpress:6.3", you will get the latest 6.3.x version.
It's up to you (and the image publisher) to decide when to auto-update, and when manual intervention is necessary.
Of course this requires trusting the publisher of that image. But even if you build your own images, you still trust the base image. It's turtles all the way down.
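In quadlet terms, opting in to this looks roughly like the following (a sketch; the WordPress tag comes from the comment above, the rest is assumed):

```ini
[Container]
# Track the 6.3.x line; the tag is a moving pointer that gets re-pulled on update
Image=docker.io/wordpress:6.3
# Shorthand for Label=io.containers.autoupdate=registry
AutoUpdate=registry
```

`podman auto-update --dry-run` then reports which containers have a newer image available in the registry without touching anything.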
athrowaway3z
What are you talking about?
> Running containerized workloads in systemd is a simple yet powerful means for reliable and rock-solid deployments.
They say its reliable and rock-solid. Isn't that enough for you? /s
---
Honestly, the number of companies who don't understand the problems, forgot the history, and think we're innovating into new territory because of hyped-up branding is utterly baffling in the whole container space.
I'm not saying they're all bad, and better common tools are a good thing. But I see so many companies operating at [required complexity level] + 1 in the hope of no longer being bothered by simpler problems.
arisudesu
If you run only the software you wrote, then yes, it is a useful feature. Otherwise I won't trust automated pulls of whatever other devs put into their public images, nor will I trust them to follow image versioning properly and not introduce some addition in a minor version that would automatically expose my files to the internet if not configured explicitly. There is too much trust I don't want to put in auto-updates.
As for the systemd dependency, in this case the quadlets cannot even be compared to docker-compose, nor be a replacement for it. Docker-compose was always independent of the init system, whereas quadlets are strictly tied to systemd-based distros. E.g. users of Alpine or Gentoo won't be able to replace their compose stacks with quadlets.
0xC0ncord
Quadlet is specific to systemd. It's actually just a systemd generator that looks for related files in /etc/containers/systemd or the user's ~/.config/containers/systemd to generate units from that can then be started as services. When you create or edit a file here, you then do 'systemctl daemon-reload' which re-invokes all systemd generators, quadlet included.
fariszr
I don't see how this is anything like compose. With quadlets you have to create a file for each container and deal with creating volumes and so on.
Whereas with Docker it's one file, one command and you're done, you don't have to deal with anything else.
Spivak
They're both declarative manifests describing how to run one or more images on your system.
You use .container for a single container, .kube for all-in-one pods, .network for networks, and .volume for volumes. It has all the stuff it's just broken down in a more (imho) sysadmin friendly way where all the pieces are independent and can be independently deployed.
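A `.volume` unit is the same pattern again; a hedged sketch with invented names:

```ini
# data.volume (hypothetical)
[Volume]
Label=app=demo
```

A `.container` file then mounts it by referencing the quadlet file name:

```ini
[Container]
Image=docker.io/library/postgres:16
Volume=data.volume:/var/lib/postgresql/data
```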
fariszr
How's that better? How many files do you need to replace a complex compose project, 10? And you still have to deal with systemd?
pkulak
Yeah, I see Docker Compose as a development environment setup aid. This seems like a way to set up a system.
hinkley
But if it's a stepping stone to emulating Docker Swarm, that'd be something.
whalesalad
For anything other than a hello world type project a compose file will fall over kinda quick. I would much prefer to (ahem) compose smaller things together and systemd is great for that.
rastapasta42
Lolol have been running compose in dev & prod for 7 years - still the best tool around.
GabeIsko
That might work for whatever you are doing, but the truth is that root-ful containers are not appropriate for a lot of applications, and docker as a layer to your container runtime is rough sometimes. I don't think docker wants to continue to develop this anyway - they have had enough problems trying to be profitable so instead it is time to focus on docker desktop and charging for docker hub image hosting.
I feel like we are kind of in this weird limbo where I know I need to move to a podman-focused stack, but the tooling just isn't there yet. I guess that is what makes quadlets interesting, but idk if a single tool will really emerge. There is also podman-compose floating around. I still feel like I should hold off on some stuff until a winner emerges. So at home I'm still on docker with compose files. Although I will be moving to kubernetes for... reasons.
whalesalad
It will depend on the complexity of the stack, the number of environments you are deploying into, the number of devs on the team etc. It can work, and if it is working for you in this manner don't change what isn't broken, but in my experience for sophisticated systems with many services, envs, and developers it becomes unmanageable pretty quickly.
Riverheart
In what way? Docker-compose files are composable. I can specify several compose files that layer functionality or have different behavior and tie it together with make. You can also set defaults and override with environment variables using bash syntax.
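For example, a second compose file can layer settings over the base one (file and service names are made up):

```yaml
# docker-compose.prod.yml -- hypothetical override, merged over docker-compose.yml
services:
  web:
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}   # bash-style default, as mentioned above
    ports:
      - "443:8443"
```

`docker compose -f docker-compose.yml -f docker-compose.prod.yml config` prints the fully rendered merge, which also answers the earlier question about seeing the final configuration.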
PUSH_AX
> "hello world"
This is at best hyperbole and at worst nonsense. Come on, at least put some effort into drawing the line of complexity a little clearer.
seizethegdgap
For my home server, I have a flat 2507-line docker-compose file that automatically configures and boots all of my 85 containers. I still have some complexity: .env files in /opt/<container>/, a systemd unit that automatically runs
docker-compose -f /<dir>/docker-compose.yaml up -d
on boot, and it's only a little irritating to have to use absolute paths for everything instead of relative. But, after having to update all of my services manually for 3 years, I will never be able to go back.
whalesalad
> 2507 line docker-compose file
mother_of_god.gif
cdelsolar
What do you need 85 containers for in a home lab?
all2
I've been kinda partial to helm charts (on a k8s cluster). Standing up services is not awful. Have you used helm or similar? What do you think of these kind of tools?
GabeIsko
I don't manage one large compose file, but helm charts are where my home stuff is headed. Mostly for ingress controller functionality - it's the reverse proxy configuration that will get you.
kubanczyk
Curious: What's the use case for helm if you have kustomize now built into kubectl?
sunshine-o
Quadlets are very much a welcome integration, but last time I tried to create a user quadlet in .config/containers/systemd/ with a linuxserver.io image, I ended up with all sorts of files owned by strange UIDs & GIDs.
So I had to add --userns keep-id to my container unit, which caused all sorts of problems, apparently because of podman.
So you always end up with the kind of investigation & fiddling that shouldn't be necessary after 10 years of docker & containers.
broknbottle
I believe this is due to the linuxserver.io images actually being customized specifically for usage with docker.
For images intended for rootless deployments e.g. podman, take a look at the onedr0p container images, https://github.com/onedr0p/containers
sunshine-o
Thanks, it looks great! And yes, I believe it is the policy of linuxserver.io not to test or officially support podman.
I have been trusting the plan, but I notice that after 10 years of container industry standards etc., we have to search for podman-friendly images to enjoy integration with the common Linux service manager...
Now if container-based Linux distributions are the future, I'm starting to wonder if we're not gonna soon see Red Hat & co. packaging docker images in RPMs to guarantee things work together & people don't badly mess up the security...
stryan
Fun fact, OpenSUSE actually already does that for some common server software (LDAP, dovecot, etc); they're quadlet/systemd unit files packaged up as RPMs though, I don't think they actually include the container image.
jauntywundrkind
Overall this looks really good, but it also obscures how & where this really integrates with systemd.
Maybe everything is this easy & good. Maybe this is an /etc/systemd/system/WordPress.quadlet file, part & parcel with everything else in the systemd-verse. But it doesn't say clearly whether it is or isn't. It's an acontextual example.
I think it's powerful tech either way, but so much of the explanation is missing here. It focuses on the strengths, on what is consistent, but isn't discussing the overall picture of how things slot together.
In many ways I think this is the most interesting frontier for systemd. It's supposedly not a monolith, supposedly modular, but so far that has largely meant that components are modular, optional. You don't need to run the pretty fine systemd-resolved, for example. But what k8s has done is make handling resources modular, and that feels like the broad idea here. Yet it seems dubious that systemd really has that extensibility built in; it seems likely that podman quadlet is a secondary, entirely unrelated controller, aping what systemd does without integrating at all. And it seems likely that's not a podman-quadlet fault: it's likely a broad systemd inflexibility.
Could be wrong here. But the article seems to offer no support that there is any integration, no support that this systemd-alike integrates or extends at all. Quadlets seem to be a parallel and similar-looking tech, with deep parallels, but those parallels, from what I read here, are handcrafted. It's not quadlet that fails here to be great, but systemd not offering actual deep integration options.
goku12
Quadlet uses the systemd generators feature [1]. Generators are a way to convert non-systemd-native configuration extensions (like containers, volumes, and networks in the case of quadlets) into regular systemd-native configuration like service unit files. The quadlet generator converts the podman extension for systemd into a regular service file that you can examine yourself. Non-root podman container services are in /run/user/<uid>/systemd/generator. Here is a blog post that describes the design in detail (slightly dated): [2]
[1] https://www.freedesktop.org/software/systemd/man/systemd.gen...
[2] https://blogs.gnome.org/alexl/2021/10/12/quadlet-an-easier-w...
yrro
The reason it works at all is because of systemd's extension mechanisms.
Maybe this helps. Picking a random example container unit...
[root@xoanon ~]# cat /etc/containers/systemd/oxidized.container
[Unit]
Description=Oxidized
[Service]
ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid
[Container]
Exec=oxidized
Image=docker.io/oxidized/oxidized
User=972
Group=971
NoNewPrivileges=yes
ReadOnly=yes
RunInit=yes
VolatileTmp=yes
Volume=/var/local/oxidized:/var/local/oxidized:rw,Z
PodmanArgs=--cpus=1
PodmanArgs=--memory=256m
Label=io.containers.autoupdate=registry
Environment=OXIDIZED_HOME=/var/local/oxidized
[Service]
Restart=always
[Install]
WantedBy=multi-user.target
After a `systemctl daemon-reload`, an `oxidized` service springs into being.
[root@xoanon ~]# systemctl status oxidized
● oxidized.service - Oxidized
Loaded: loaded (/etc/containers/systemd/oxidized.container; generated)
Active: active (running) since Sat 2023-09-23 09:53:11 UTC; 2 days ago
Process: 221712 ExecStopPost=/usr/bin/rm -f /run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221711 ExecStopPost=/usr/bin/podman rm -f -i --cidfile=/run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221713 ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid (code=exited, status=0/SUCCESS)
Main PID: 221799 (conmon)
Tasks: 8 (limit: 98641)
Memory: 169.0M
CGroup: /system.slice/oxidized.service
├─libpod-payload-b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
│ ├─221801 /run/podman-init -- oxidized
│ └─221803 puma 3.11.4 (tcp://127.0.0.1:8888) [/]
└─runtime
└─221799 /usr/bin/conmon --api-version 1 -c b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -u b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata -p /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/pidfile -n systemd-oxidized --exit-dir /run/libpod/exits --full-attach -l passthrough --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/oci-log --conmon-pidfile /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
Sep 26 00:03:05 xoanon oxidized[221803]: I, [2023-09-26T00:03:05.807594 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 01:03:15 xoanon oxidized[221803]: I, [2023-09-26T01:03:15.083603 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 02:03:24 xoanon oxidized[221803]: I, [2023-09-26T02:03:24.414821 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 03:03:33 xoanon oxidized[221803]: I, [2023-09-26T03:03:33.677828 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 04:03:42 xoanon oxidized[221803]: I, [2023-09-26T04:03:42.983589 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 05:03:52 xoanon oxidized[221803]: I, [2023-09-26T05:03:52.297830 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 06:04:01 xoanon oxidized[221803]: I, [2023-09-26T06:04:01.637348 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 07:04:10 xoanon oxidized[221803]: I, [2023-09-26T07:04:10.935352 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 08:04:20 xoanon oxidized[221803]: I, [2023-09-26T08:04:20.199651 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 09:04:29 xoanon oxidized[221803]: I, [2023-09-26T09:04:29.553178 #2] INFO -- : Configuration updated for /192.168.89.5
During the daemon-reload, systemd invoked /usr/lib/systemd/system-generators/podman-system-generator, which read the files in /etc/containers/systemd and synthesized a systemd service for each of them, which it dropped into /run/systemd/generator, one of the directories from which systemd loads unit files.

Far from being a parallel service control mechanism (à la Docker), this is proper separation of concerns: the service is a first-class systemd service like any other; the payload of the service is the podman command that runs the container. We can introspect this a bit to examine the systemd unit that was generated:
[root@xoanon ~]# systemctl cat oxidized
# /run/systemd/generator/oxidized.service
# Automatically generated by /usr/lib/systemd/system-generators/podman-system-generator
#
[Unit]
Description=Oxidized
SourcePath=/etc/containers/systemd/oxidized.container
RequiresMountsFor=%t/containers
RequiresMountsFor=/var/local/oxidized
[Service]
ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-rm -f %t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --log-driver passthrough --runtime /usr/bin/crun --cgroups=split --init --sdnotify=conmon -d --security-opt=no-new-privileges --read-only --user 972:971 -v /var/local/oxidized:/var/local/oxidized:rw,Z --env OXIDIZED_HOME=/var/local/oxidized --label io.containers.autoupdate=registry --cpus=1 --memory=256m docker.io/oxidized/oxidized oxidized
[X-Container]
Exec=oxidized
Image=docker.io/oxidized/oxidized
User=972
Group=971
NoNewPrivileges=yes
ReadOnly=yes
RunInit=yes
VolatileTmp=yes
Volume=/var/local/oxidized:/var/local/oxidized:rw,Z
PodmanArgs=--cpus=1
PodmanArgs=--memory=256m
Label=io.containers.autoupdate=registry
Environment=OXIDIZED_HOME=/var/local/oxidized
[Install]
WantedBy=multi-user.target
No deep magic, just the pleasant feeling you get when you see layered systems interacting together without cross-cutting.

You can learn more about the systemd generator extension mechanism at: https://www.freedesktop.org/software/systemd/man/systemd.gen...
contravariant
> you’ll see a WantedBy line. This is a great place to set up container dependencies. In this example, the container that runs caddy (a web server) can’t start until Wordpress is up and running.
Either this must be some systemd weirdness that I thankfully haven't had to deal with until now, or I'm misunderstanding something.
Did I understand correctly you don't specify which services you need but rather which ones depend on your service? So if your service doesn't start you'll need to check the configuration files of all other services to figure out which dependency is preventing it from starting?
yrro
[Install]
WantedBy=multi-user.target
This is the mechanism by which one unit can ask to be added to the Wants= of another when it is installed. I.e., when you run 'systemctl enable whatever.service', it will be symlinked into '/etc/systemd/system/multi-user.target.wants', and 'systemctl show multi-user.target' will show 'whatever.service' in its Wants property.
https://www.freedesktop.org/software/systemd/man/systemd.uni...
During bootup of a headless system, the 'default' target is usually multi-user.target, so what we've done here is ensure that whatever.service will be started before the machine finishes booting.
https://www.freedesktop.org/software/systemd/man/bootup.html
toyg
I think it's a bit of a podman quirk. From what I understand, podman used (and is probably still able) to generate systemd .service files. These files do have Requires and After directives to state which other services they expect. However, podman has since moved to using the .container file for systemd "units", which was meant to represent a transient, disposable instance but in practice reproduces a lot of what .service specifies. Probably because people didn't want to do the work twice, they tacked an [Install] section onto .container to make it behave like a service, which currently accepts only the keywords Alias, RequiredBy and WantedBy (I've not seen any documentation on how these differ) according to https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
alexlarsson
These are standard systemd service-file syntax and standard systemd directives. Quadlet forwards everything but the [Container] section directly to the generated service file.
vimax
systemd scans all of the unit files initially and topologically sorts them to find the best start ordering for all services. Unit files are rescanned only when you run `systemctl daemon-reload`.
One of its main design goals is a fast system startup; to do that it needs to know the dependency ordering of all services.
rcxdude
It's not how you would normally specify it, but it is an option: the normal usage for it in systemd is enabling and disabling which services start on boot: an enabled service usually gets set up as a dependency of the multi-user target which is what systemd starts on boot. (And you can get a list of dependencies from systemd if you want to debug anything: the WantedBy stuff just turns into some symbolic links in the filesystem if you want to inspect things manually)
I don't know why it's being used in that way for these containers. It'd be easier to just add a Wants line on Caddy.
fireflash38
You can do it the other way if you want, using After= or Requires=.
toyg
I don't think that's the case for .container files, according to https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
alexlarsson
That section is only for the enablement of units, and is equivalent to the install section in a service file: https://www.freedesktop.org/software/systemd/man/systemd.uni...
You can still use all sorts of dependencies in the [Unit] section: https://www.freedesktop.org/software/systemd/man/systemd.uni...
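For example, a quadlet .container file can declare ordering the usual way in its [Unit] section, entirely with plain systemd directives (the unit names and image here are hypothetical):

```ini
# ~/.config/containers/systemd/app.container (hypothetical example)
[Unit]
# Start only after Caddy is up; ordinary systemd dependencies work here.
Wants=caddy.service
After=caddy.service network-online.target

[Container]
Image=docker.io/library/nginx:latest

[Install]
WantedBy=default.target
```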
mangecoeur
You can define Before=, After=, or WantedBy= to define the dependency order. Systemd then makes sure to start services in the right order; services start by default, but you can also configure them not to start unless something depends on them.
malmz
I have been having a blast using quadlets on my tiny home server. Feels like I'm learning how to use systemd which is a nice bonus. My workflow consists of just connecting vscode over ssh and editing files as needed, which works well since everything is owned by my user.
madspindel
Actually, I believe "podman generate kube" is even better: https://www.redhat.com/sysadmin/kubernetes-workloads-podman-...
You can run your docker-compose file, then run "podman generate kube" and you will get a Kubernetes yaml file.
Then you can run:
$ escaped=$(systemd-escape ~/guestbook.yaml)
$ systemctl --user start podman-kube@$escaped.service
And you can enable it to start on boot. It will read the yaml file and create the pod.
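On recent Podman versions, quadlet also accepts a .kube unit that points at such a YAML file, which avoids the template-unit escaping dance (file name and path here are illustrative):

```ini
# ~/.config/containers/systemd/guestbook.kube (illustrative example)
[Kube]
# %h is the systemd specifier for the user's home directory.
Yaml=%h/guestbook.yaml

[Install]
WantedBy=default.target
```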
moondev
Or you just run the kubelet (with systemd) and drop your manifests into /etc/kubernetes/manifests
Nothing can kube better than the kubelet! The new sidecar support for init containers can be used today. Every other abstraction is playing catch up
yrro
FYI you'll have to enable lingering for your user on that system. Otherwise your user services won't start on _boot_ but only when you log in.
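Concretely, that's (assuming a system managed by systemd-logind):

```
$ loginctl enable-linger $USER
$ loginctl show-user $USER --property=Linger
Linger=yes
```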
zb3
I'm happy with docker-compose, but it seems RedHat really doesn't want me to use such a simple and easy-to-use thing that is not as tightly coupled with other RH-specific parts as the replacement they promote.
But this only makes me like docker even more :)
darkwater
> My container deployments are often done at instance boot time and I don’t make too many changes afterwards. I found myself using docker-compose for the initial deployment and then I didn’t really use it again.
I used a very similar approach in the (now EOL'ed, gonna be replaced by full K8s) infra at $DAYJOB. My main reason to stick with docker-compose is that developers are familiar with it and can easily patch/modify the thing themselves. Replacing it with something systemd-based adds a dependency on people who know systemd (who are not usually application developers in your average HTTP API shop).
GTP
I would add that this is the use case of a server running some services. But during development you may want to restart and/or rebuild a container multiple times, maybe this is still better done with docker-compose.
It's quite unfortunate that this article mixes up what's necessary for podman quadlets with coreOS concepts.
With quadlets, the only thing required is to drop a `.container` file in the right place and you end up with a container properly supervised by `systemd`. And this of course also supports per-user rootless containers as described in [1].
[1]: https://www.redhat.com/sysadmin/quadlet-podman
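For the record, a minimal rootless quadlet would be something like the following (image, port, and file name are illustrative):

```ini
# ~/.config/containers/systemd/web.container (illustrative example)
[Container]
Image=docker.io/library/caddy:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, quadlet generates web.service, which you can start, stop, and inspect like any other unit.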