jameshart
tremon
clickops is no way to run a server
server state should be reproducible from scratch
Why? I'm not necessarily disagreeing, but too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths. But they're not -- there are engineering trade-offs behind any choice, and it's no different here. So, in order to guide this discussion away from dogmatic platitudes: why should server state be reproducible from scratch? What does "from scratch" mean? Why is clickops no way to run a server?
Install an OS, add software, apply configuration
Do you think this captures "server state" completely? Software patch levels are not part of server state? What about application data? User data?
So here's my counterstatement: for any working machine, I can reproduce the server state exactly by performing a restore from backup. Backup/restore is perfectly compatible with clickops, and it's faster and more reliable than reinstalling an OS, adding software and applying configuration -- even when the software and configuration are scripted. And if your server stores non-volatile data, as is often the case in clickops environments, you will need to have a backup system anyway to restore the user data after deploying a new server.
mbreese
> too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths
It's because different people think at different levels of abstraction. One admin might be thinking about a handful of servers and another an entire fleet of VMs. The way you manage each is very different. Clickops can work well for a small number of servers, and a full orchestration setup can be over-engineering.
But your real issue is that blanket statements never work in such scenarios. However, I think it's pretty well established that reproducible server state is a best-practice. How you get there is up to you.
But as an argument against backup/restore -- you can't use backup/restore to generate new servers from an existing template without some kind of extra scripting (if for no other reason than to avoid address/naming conflicts). And if you're already scripting that...
fnordpiglet
There are a lot of reasons we arrived here over the decades of struggling to keep servers in good working order in a sea of change. One is that backup and restore is inherently fragile, and we have many instances where restorability degrades for many reasons over a long life. Backup/restore verification is not a regular part of hygiene because it's intrusive, tedious, and slow; if it's ever done, it's usually done once. Reproducible builds allow for automated verification and testing offline.
Changes are only captured at snapshot intervals and are not coherent and atomic, so you can easily miss changes that are crucial but capture destructive changes in between deltas. Worse are flaws that are introduced but not observed for a long time and are now hopelessly intermixed with other changes. Reproducible build systems let you use a revision control system to manage change and cherry-pick changesets to resolve intermixed flaws; even if they're deeply intermixed, you can resolve them in an offline server until it's healthy enough to rebuild your online server.
The issue with reproducible build systems isn't that they aren't superior to backup and restore in every way. It's that the interfaces we provide today are overly complex compared to the simple interface of "backup and restore" -- which, despite its promise, always works in the backup part but often fails in the restore. These ideas of hermetic server builds are relatively new and the tooling hasn't matured.
I would say click ops is actually an ideal way to solve that issue. Click ops that serializes resiliently to a revision-controlled metadata store that drives the build solves that usability problem. If the metadata store is text configs that can be modified directly without breaking the user interface, that handles the tedium of making complex changes in a UI, while still providing a nice rendering of state for simple exploratory changes. Backup and restore would only be necessary for stateful data, but since the stateful changes aren't at the OS layer, you won't end up with a bricked server.
belthesar
This assumes that you're running in an environment where your servers are cattle and not pets, and in all fairness, not everyone is running large scale web platforms on some orchestration platform. I don't disagree that, even in a pets world one should know how to restore/rebuild a system, because without that, you don't have a sound BDR strategy.
marginalia_nu
Arguably, about 80% of those running their app on a cattle farm should really have gone with a pet cafe instead. Resumes would certainly be a lot less impressive, but they'd also have a lot fewer fires to put out and a significantly smaller infra bill.
But regarding the topic at hand, I don't think being able to manage these things with a graphical interface is necessarily a bad thing. It's basically user-space iDRAC/IPMI.
xupybd
I maintain 3 servers. It's not worth automating the deployment.
I'll spend less time just setting them up by hand.
The company will survive a few hours of downtime.
berkes
Are there any tools that allow you to manage a server like a pet, yet ensure it can be restored/rebuilt?
And, while with the analogy of pets, when you are on holiday, allow your neighbors to look after your pets?
JeremyNT
There's no reason you can't use puppet/chef/ansible/whatever on pets!
The reason that (some) people don't do this is the cost/benefit analysis looks kind of weird. You'll spend a lot of time mucking around in puppet/chef/ansible/whatever for a single snowflake server, and it would be a lot faster to just go edit that config file directly.
In reality, proper backups and shell history can get you pretty far if you ever find you need to replicate a snowflake.
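To make the cost/benefit concrete: even a single snowflake can have its hand-edited bits captured in a short playbook. A minimal sketch, assuming an apt-based host -- the hostname, file paths, and the nginx example are made up for illustration:

```yaml
# snowflake.yml -- hypothetical playbook for one "pet" server.
# Captures the package list and the config file you'd otherwise
# edit directly over ssh, so the state lives in version control.
- hosts: snowflake
  become: true
  tasks:
    - name: Install the packages this box needs
      ansible.builtin.apt:
        name: [nginx, certbot]
        state: present

    - name: Deploy the config instead of hand-editing it on the host
      ansible.builtin.copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Run with `ansible-playbook -i snowflake, snowflake.yml`; for a one-off server the playbook mostly serves as executable documentation of what was clicked or typed.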
belthesar
In my homelab, I use Portainer to manage my hosts. All of my workloads are installed as collections of Docker containers, and I'm slowly but surely migrating even single container installs to Compose stacks. With some real bare bones GitOps, those stack files can be in Git, and deploy to the host in Portainer, thus at least giving me the recipes to rebuild my environment should it ever be lost.
HankB99
> For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.
I'm curious if you have a specific tool or tools in mind. I've been using Ansible in my home lab, particularly for configuring Raspberry Pis. The OS install part (only?) works because it involves a bitwise copy of the image to the boot media (and some optional configuration.)
jameshart
Ansible is a good choice.
When I say ‘working server’ though, I typically mean one that is doing a job - providing a critical business service.
A ‘home lab’ of raspberry pis is a different beast.
jefurii
I'd like to see a tool, maybe something Cockpit-like or a wrapper around SSH, that would build Ansible playbooks for you as you clicked around or typed commands.
tiffanyh
> “For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.”
I presume you only run NixOS then?
2OEH8eoCRo0
Both are good for different reasons. I prefer working in a terminal but I didn't think it was controversial that a GUI is better for visualization.
barosl
The cool thing about this project is that as it uses systemd's socket activation, it requires no server processes at all. There is no waste of resources when Cockpit is not being used. Accessing a page is literally the same as invoking a command-line tool (and quitting it). No more, no less. What a beautiful design.
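The mechanism is ordinary systemd socket activation: a `.socket` unit owns the listening port, and the matching service is only started when the first connection arrives. A minimal sketch of the pattern (hypothetical unit names and binary path; Cockpit's real units ship with its package, and 9090 is the port it conventionally listens on):

```ini
# /etc/systemd/system/mytool.socket -- systemd owns the listening
# port; no service process exists until a client connects.
[Socket]
ListenStream=9090

[Install]
WantedBy=sockets.target

# /etc/systemd/system/mytool.service -- started on the first
# connection; systemd hands it the already-open listening socket.
[Service]
ExecStart=/usr/libexec/mytool-ws
```

After `systemctl enable --now mytool.socket`, the port is open but the process table shows nothing until someone actually hits it.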
arghwhat
To be fair, we've had this since BSD4.3 (1986) through inetd - which worked slightly differently, but same overall idea. Once popular, it fell out of fashion because... Well, there isn't really any reason for it.
A good server process is idle when nothing is happening, and should be using minuscule real memory that is easy to swap out. If the server in question uses significant memory for your use-case, you also don't want it starting on demand and triggering sporadic memory pressure.
It does make it easier to avoid blocking on service start in early boot though, which is a common cause of poor boot performance.
dale_glass
There's good reasons for it though!
One is boot performance. Another is zero cost for a rarely used tool, which may be particularly important on a VPS or a small computer like a Raspberry Pi where you don't want to add costs for something that may only rarely be needed.
I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version. You don't need the tool to have its own "re-exec myself" code that's rarely used, and that could fail at an inconvenient time.
The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up. Plus there was the inetd/xinetd disagreement.
Tying in init, inetd and monit into a single system that can do all those things IMO made things much nicer.
arghwhat
> Another is zero cost for a rarely used tool.
Zero cost is only true for unused services. For rarely used services, it's a rarely occurring full cost that might come by surprise at a bad time.
> I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version.
This is only a benefit if the systemd socket unit is configured to operate in inetd-mode (Accept=yes), where systemd spawns a new process for every accepted connection, which is quite inefficient resource-wise.
"Normal" systemd socket activation just starts the service and hands over the socket. The service runs indefinitely afterwards as if it was a normal service, and needs to be manually restarted or hot-reloaded after upgrade or configuration change.
> The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up.
Being separated has a lot of benefits - easy nesting, easy reuse in minimal containers, etc. The integrated model works best for monolithic servers.
bityard
Around the time I was first learning Linux, I recall reading that there were two ways to run a service:
1. Start the daemon on boot and have it running all the time, like some undereducated neanderthal.
2. Configure your system to run a daemon which monitors a port/socket and starts up only when there is traffic, like a civilized person.
I believe which one of these to use is highly dependent on your resources, usage, and deployment model. For services that are fast and cheap to start but rarely used, #2 makes more sense. If you have a server or VM which only does one thing (very much the norm these days), then just keeping that service running all the time is easier and better for performance.
whartung
Actually I think what killed inetd is, partially, http. At the time, http was connectionless. Open socket, send packet, read response, close. Out of the box inetd would support that, for sure, but it would be constantly forking new http processes to do it.
FTP, SMTP were all stateful, so living under inetd worked OK. One process per overall session rather than individual messages within a session.
Obviously, inetd could have been hammered on to basically consume the pre-forking model then dominant in something like Apache, caching server processes, etc.
But it wasn't. Then databases became the other dominant server process, and they didn't run behind inetd either.
Apache + CGI was the "inetd" of the web age.
tanelpoder
I ended up reading more about this and looks like SSHD in Ubuntu 22.10 and later also uses systemd socket activation. So there should be no sshd process(es) started until someone SSHs in!
https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-ac...
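Per that discourse post, reverting to the classic always-running daemon is a matter of swapping which unit is enabled -- roughly the following, though verify against the linked page before running it on a real server:

```shell
# Revert Ubuntu's sshd from socket activation to the traditional
# always-running service (sketch based on the linked discourse post).
sudo systemctl disable --now ssh.socket
sudo systemctl enable --now ssh.service
```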
talent_deprived
This is messed up, totally messed up:
"On upgrades from Ubuntu 22.04 LTS, users who had configured Port settings or a ListenAddress setting in /etc/ssh/sshd_config will find these settings migrated to /etc/systemd/system/ssh.socket.d/addresses.conf."
It's like Canonical is doing 1960's quality acid.
At least the garbage can be disabled:
"it is still possible to revert to the previous non-socket-activated behavior"
Between having to remove snapd and mark it not to be reinstalled, and now having to switch ssh back to the current behavior in the next Ubuntu, it might be easier to migrate my servers back to Debian, or look for a solid non-systemd OS.
9dev
What exactly is "garbage" about this? It's so tiring how systemd opponents insist on name-calling instead of substantiated criticism.
There is no reason every single application should manage network socket acquisition on its own - I'm not very fond of the times everyone and their mother wrote whacky shell scripts to start and stop their services, either. But somehow those seem to be the "good old times" you guys miss.
smetj
Certainly for SSH I find this a bad idea. If you need to ssh into a troubled machine then it might very well be it cannot be started.
belthesar
I don't necessarily think it's an outright bad idea, but it's certainly a departure from how sshd is traditionally run. Without awareness of the change, this kind of "magic" runtime behavior could leave you surprised when sshd turns out to be unavailable in such a scenario, and increase time to resolution during an incident.
If your systems are more pets than cattle, then I think I too would prefer an always-running ssh daemon. If your workflow is only to ssh into machines during bootstrap, however, then having sshd run only during initial bootstrap and then shut itself off does seem like a nice way to free up a small amount of resources without stopping or disabling the daemon post-bootstrap.
dale_glass
If it's so troubled that a process won't start, it's probably time to reach for the IPMI console. Even if ssh is still running, if the system is that broken, is bash going to start, or whatever tools you might need?
JAlexoid
TBH - for any non-server class machine on my network, I'm fine with that.
SSH should probably be running 24/7 on any server(to keep those resources allocated for maintenance access), but if it's my workstation with a monitor - then it's a non-issue.
mrweasel
I should really spend more time learning systemd. The more I look into it, the more cool and useful features I discover.
bityard
If you have anything at all to do with OS administration, management, or software packaging, it's worth it.
If I could offer a little advice: The systemd man pages are useful as a reference, but are terrible to learn from. Part of this is because there are parts of systemd that everyone uses, and there are parts that almost nobody uses and it's hard to guess which these are at first. Also, the man pages are dry and long and quite often fail to describe things in a way that would make any sense whatsoever to someone who isn't already intimately familiar with systemd.
Most of my systemd learning came from random blog articles and of course the excellent Arch wiki.
ramses0
Also, it's 99% "not different than doing it via command line", and also comes with a little js terminal gui, uses native users + passwords, has some lightweight monitoring history, lets you browse a bunch of configuration that you usually would have to remember byzantine systemd command lines for... it's awesome for what it is!
I'm happy to run it (aka: have it installed) on all my little raspberry pi's, because sometimes I'm not at a terminal when I want to scope them out, and/or if I'm at "just a web browser", being able to "natively ssh into them" via a web server (and then run `curl ...etc...` from a "real" command prompt) is super helpful!
winter_blue
Just want to clarify: there's still a server process running to serve the Cockpit web app's static HTML/JS assets, right?
Do you essentially mean that systemd socket activation is used basically only if/when the Cockpit web app end-user/client sends a REST/GQL/etc. request (for logs, for example)?
sleepybrett
I thought the cool thing was all the rookies who install this thing in a way that it's publicly accessible. How many stories have I heard about people who accidentally configure phpMyAdmin to be publicly accessible... Now you might not JUST leak your whole customer DB!
severino
Interesting, I always thought socket activation meant defer launching a process until somebody tries to access it through the network, but... does it also finish the web server process (or whatever is used here) as well after the request is serviced?
diggan
No, it doesn't automatically close the process. Two options I can think of: Application exit when it's done with its thing or RuntimeMaxSec to make it close after a while.
systemd passes the socket on to the application, so I don't think it keeps any reference to it anymore; it wouldn't be able to know when the socket closes.
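The second option looks like this in practice -- `RuntimeMaxSec=` is a real systemd directive, but the unit name and binary path here are made up:

```ini
# mytool.service -- socket-activated, but never lives past 10 minutes.
# RuntimeMaxSec= hard-caps the service's total runtime; since the
# matching .socket unit keeps listening, the next connection after
# the service is stopped simply starts a fresh instance.
[Service]
ExecStart=/usr/libexec/mytool-ws
RuntimeMaxSec=600
```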
notpushkin
systemd-cgi :^)
wongarsu
Everything old is new again.
The next big thing will be a web server where you don't need to use the command line to deploy your project, just sync your workspace folder and it will automatically execute the file matching the URL.
darkwater
It was/is inetd[1] actually
leetrout
There is value in "porcelain"[0]
I have watched startups fold for not pushing product development further into UI/UX with off the shelf backends. At one company I worked at I showed how our backend (completely custom container orchestrator) could be replaced in a weekend with AWS Lambda and ECS. But our UI/UX and workflow tools would take much, much longer. Yet we continued to waste money and time on "building a new raft based cluster". In the mean time I was handed "add batch processing" and we already used Go so I just used Nomad under the hood and moved on.
I like working on teams that ship features not JUST tech for tech's sake.
https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Po...
aitchnyu
Hope all tools in this space have a giant banner saying your disk space is running out. This is somehow not common knowledge for those debugging servers.
snoman
What’s up with that btw? Noticed the same myself.
Semaphor
2022, 81 comments: https://news.ycombinator.com/item?id=31439811
2021, 128 comments: https://news.ycombinator.com/item?id=26197510
2018, 149 comments: https://news.ycombinator.com/item?id=16445612
fs0c13ty00
I can't imagine myself using this. One more port open, one more attack vector for those restless bots to scan for vulnerabilities, one more service I need to keep up-to-date. But I understand it would help Linux servers become more approachable, especially people that are switching away from PHP-based shared hosting to a full-featured VPS, don't have much knowledge about servers, and want something similar to cPanel or DirectAdmin.
TwoNineFive
I'm an actual RHCE. This thread has to be some big Red Hatter click farm or something. The artificial positivity is striking. Is Red Hat threatening to pull funding for this project or something? Just weird.
Cockpit is okay but it's basically Red Hat's equivalent to the Windows Server Manager tool, and I have no doubt it was directly inspired by Server Manager. Its development and improvement over the years have been painfully slow.
Nobody who is comfortable with an ssh session uses Cockpit, except maybe to create new VMs, and even then all of these comments comparing it to Proxmox are just whack because it doesn't have a quarter of the features the Proxmox UI offers. The utility for managing VMs is a recent development and even then I still prefer the Virtual Machine Manager tool because I don't want to deal with the latency increase and other limitations of working through a browser.
But anyway, there's a ton of things you can't do with Cockpit, and never will be able to do. It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.
Like happyweasel said, it's basically webmin for Red Hat.
It's kinda cool, but it's so old now and development has been so slow and it's been so over-hyped that I don't pay attention to it at all and I've never used it except what was required to get certified.
oli-g
> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim
"Instagram filters are for people who don't know how to work with Photoshop layers, don't understand basic color blending operations, and who just want to swipe."
I mean, yes.
distcs
I have no problem viewing pictures shared by those who don't understand basic color blending and just want to swipe.
But I may have a problem with people who can't do a bash for/while loop or understand pipe chaining commands being responsible for administering my company's servers.
I don't see how the comparison between administering servers and sharing pictures on social media is a useful one.
addicted
I don't understand why you think that someone choosing to point and click to do something -- a task which, depending on the UI, can be completely mindless and yet significantly safer, eliminating any chance of making a mistake (by throwing in a typo, for instance) -- means they don't know what a bash for/while loop is.
I suspect most people here used the HN web interface to post their comments, even though constructing an HTTP request to send to HN using curl wouldn't be significantly harder. The fact that they used the point-and-click HN web UI doesn't mean they're incapable of constructing such a request where it actually is needed.
shortrounddev2
What if they CAN do a bash for/while loop, but they prefer not to because bash is the ugliest effing language they've ever seen
rafaelmn
Your comparison implies that the web UI is faster than SSH once you know these tools ?
You could have godlike Photoshop skills and it will take orders of magnitude more effort to get results. With SSH and shell scripts you'll likely be faster than the web UI once you're skilled enough. And it's easy to automate.
Kiro
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Karrot_Kream
> Nobody who is comfortable with an ssh session uses Cockpit
> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.
Lol! Are you ready to deploy an ssh-capable terminal emulator at all times? What's wrong with making simple tasks simple?
I run multiple Raspberry Pi cameras (with nicer camera modules) to watch the pets if the family travels. The RTSP camera streams run in a systemd unit on their boxes. I have some healthchecks to make sure packets are being streamed as other systemd units. Each camera gets its own private IP on a ZeroTier network I manage. Since Cockpit is only run on demand, it's a no-brainer to have around for administration.
Sometimes one of the cameras just starts streaming out blank frames. I'd much rather manage this through the Cockpit web interface on my phone when I'm on vacation than find a keyboard to use SSH with and restart the camera stream unit. I mean sure, I could write a healthcheck which checks whether blank frames are being emitted, but it's just so much easier to restart it via Cockpit than it is to write that healthcheck, and it only ever happens a few times a year. Shrug.
arghwhat
> Lol! Are you ready to deploy an ssh-capable terminal emulator at all times?
What terminal emulator isn't ssh-capable? Where would you not be able to open a terminal emulator? I am so confused.
> What's wrong with making simple tasks simple?
The limited tasks exposed by cockpit are also simple (or depending on the individual, simpler) in a terminal, but if you want a point-and-click UI for just a few things, go ahead.
That cockpit is very limited and seemingly has no future does not mean you can't like what it does now. Just might be worth considering if there are better-supported alternatives.
Karrot_Kream
> Where would you not be able to open a terminal emulator? I am so confused.
A mobile device. I don't use terminal emulators without access to some non-touch keyboard so I want a simple interface on my mobile device.
> That cockpit is very limited and seemingly has no future does not mean you can't like what it does now. Just might be worth considering if there are better-supported alternatives.
Sure then compare cockpit with other webmin-esque tools, not a terminal emulator. These are different interfaces, much like I don't compare a voice interface with a mouse-oriented one.
minimaul
It's a very useful tool to manage libvirt + KVM remotely without trawling through poorly documented XML, it's accessible from any platform - even an iPad, and it requires next to no setup (basically install the package and add a cert and you're done).
I consider these big pluses, I use Cockpit on Debian on my servers that run VMs rather than something like Proxmox, because 1. it's much less invasive, 2. the machines tend to run other things too, like docker containers.
Have been using it for this since ~2019.
The stats views are useful too, but I wouldn't install it for that on its own.
edit: and honestly, there's not another good (maintained!) option that fits the niche of 'let me create libvirt VMs from a web browser on a single machine without taking over my whole system'.
dig1
I'd say it is half-baked webmin. You can only use it with NetworkManager, and if you have an even remotely complex network setup for VMs, NetworkManager usually must be turned off, which makes Cockpit practically unusable. virt-manager [1] is way more powerful for those who like managing VMs with GUI.
talent_deprived
Agree, all of it, like the term Red Hatter. This cockpit-project thing came up on Reddit yesterday as well. It feels like the podman astroturfing that was so strong last year. It also feels like Red Hat hired some of Jetbrains' hyper PR astroturfers who troll the Java and webdev forums on various sites, extolling the extreme virtues of all Jetbrains' products.
cuddlyogre
Your post implies there is something obviously better.
Genuine question.
What would that be? I'm always on the lookout for better tools.
notabee
I tend to not ever interact with /r/linux for this reason. It always seems overrun with corporate mouthpieces. I would really love to see a platform take this problem seriously, but I think for most of them (even this one) that would threaten the money supply either directly or indirectly. I'm tired of the "just don't talk about it" decorum when it's such a huge problem.
KronisLV
> Like happyweasel said, it's basically webmin for Red Hat.
Seems pretty cool to me, "meet your users where they are" and all that.
I actually wonder what other options for this sort of web based management panel there are out there, maybe more DEB oriented ones.
brancz
I was the architect leading all things Observability at Red Hat until 3 years ago. There was an absurd amount of support for this project internally, and I never understood it either. There were huge numbers of customer support folks, sales people, and engineers who adored this thing; I genuinely don't understand the appeal when we had next-level cluster-wide Observability supported on and off OpenShift.
Even being in a leadership position and basically competing within Red Hat against this, I found no answer to your question.
JAlexoid
It's "meh" level of quality, though. Useful for a very small subset of tasks, and I would avoid it if you're running a home server. (Cockpit's file server interface plugin is old and bad.)
I don't really know what you'd use it for? Maybe to do minor monitoring, but it's not great to admin.
mekster
Exactly. No idea why RH is endorsing the project. There's no practical use. Listing a bunch of systemd services isn't going to be any more helpful than CLI output listing everything.
Scene_Cast2
For self hosting a NAS, I find Cockpit to be leagues better than OMV.
kosikond
IMHO that depends on a couple of factors and the use case, and I am happily running either on two different NASes.
OMV:
- has Docker plugin with Compose support (no need for a separate Docker GUI like Portainer)
- SMB shares are (somehow) more reliable on Win clients
- has a more beginner-friendly GUI and attitude, easier to share with other users
- batteries-included features like fail2ban + Wireguard
Cockpit:
- first-class citizen on EL / Fedora distros
- Podman yes, Docker no - no Compose/Quadlet support
- killer features like VM management and Terminal
- bugs with Samba
sherry-sherry
Can I ask why?
I currently use OMV for serving files over my local network (just for myself) and running a handful of Docker containers. It works fine but I don't use 90% of its features.
Scene_Cast2
Two reasons. One is the paradigm, the other is jank (or lack thereof).
OMV takes over your system - lots of "Auto-generated and maintained by OMV, do not touch" in system configs. By comparison, with Cockpit I could tweak and set up my own stuff. With OMV, when I needed to change my network settings, I had to fight bugs in the OMV GUI, and couldn't edit the configs directly. Same thing when I was trying to set up my disks in a particular way. This is a big issue because when something breaks, none of the general (non-OMV specific) answers on the forums help because you can't actually edit the configs...
The other is jank. I ran into many, many issues with OMV. Even for installing, I had to resort to 'curl .. | sudo bash' as the officially recommended option, with no proper uninstall method.
t0bia_s
How about Proxmox?
fulafel
For others curious, https://github.com/cockpit-project/cockpit shows that it's written in several languages, with C at the #1 place, with JS and Python following. "src/cockpit" (main backend logic?) is Python.
jelly1
Cockpit developer here. The webserver is written in C, as is the old bridge (the "API" which communicates with JavaScript through the webserver and talks to system APIs such as systemd, podman, dbus, etc.).
The new bridge is written in Python, and when the time comes we want to rewrite our webserver in something modern.
badcppdev
Piggybacking off your comment I wonder how many other people really care about the tech stack used to create any product they are running on a server. What dependencies does it have? Do I need to be conscious of vulnerabilities in some logging library or Curl?
And I also find it really interesting to see whether a product is programmed using one clear stack or a mixture.
bityard
I usually care a great deal, because it gives some strong hints up-front about what to expect from the project while trying it out and subsequently deploying it for production use.
I know some languages and ecosystems much better than others, so I have an idea how well I could support it up-front if needed. Others have different deployment styles, ranging all the way from "just copy this one binary somewhere" to "first install this language interpreter with a fricken curlpipe, then this language-specific package manager, then these hundreds of dependencies, then our app if you're still awake. But don't forget you'll still need an application server..."
The widespread use of Docker has made the last even less common, but I still run into Docker containers that just don't work, where I don't know the tech stack, and learning a whole tech stack just to troubleshoot someone else's broken code in order to try it out is not my most favoritest use of time.
claudex
I care if it's an obscure tech stack and there aren't a lot of contributors, because that indicates the project is more likely to lose development momentum. Or if I want to contribute, in which case I want a tech stack I know or am interested in.
sixothree
We have products that get run on servers only. Pretty much every single client asks about the tech stack.
swingingFlyFish
You should. Your server's uptime is a necessity, so the less bloat on it the better. If this were in Java I'd probably balk. No, you should absolutely care what's on your server, and even what language it's written in.
itomato
Why use this when Webmin has done the job for decades?
worksonmine
God I hate these comments.
"X exists so why would anyone ever build Y". Why not? Competition is good, think about it for a minute and I'm sure you'll figure out how.
kristopolous
Because open source is about collaboration, not wasting time with redundant effort
worksonmine
I disagree, options need to exist. Some full featured, others with a more minimal approach. Different languages, and different flavors. True democracy rarely works and catering to all users more often than not ends up in unmaintainable spaghetti.
People are allowed to create their own X even if thousands of options already exist.
itomato
It's a profitable category?
pch00
A possible reason: if you're already a RHEL shop, you're happy with your RHEL support contract and you train up all your folks to be RHCE, then Cockpit is the "supported" web admin tool.
Second: webmin also has a patchy security history. I don't know enough about cockpit to say it's any better but it would certainly be enough of an issue to review all of the options.
dinkleberg
Just wanna point out that cockpit started nearly a decade ago, so it is not a new entry in this arena either.
joshmanders
Why use Webmin when cPanel has done the job a full year longer?
Tijdreiziger
Is cPanel really a similar technology? It’s more geared towards offering shared hosting than maintaining your own server, right?
0pteron
Does webmin use no resources when not in use? Couldn't find anything about it on the site
rcarmo
I used this for a while, but noticed that a bunch of interesting plugins were not maintained/updated and stopped using it in 22.04.
People who decry graphical admin interfaces in favor of command line are missing the wood for the trees.
Sure, clickops is no way to run a server - but neither, if we’re honest, is ssh.
For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone. If you’re going in with ssh or cockpit you’re just going to screw something up.
So the only reason you should be working on a server directly is because you’re doing something exploratory. And in that case gui vs command line isn’t as clearcut as people want to make it. GUIs emphasize discoverability and visibility which can be helpful in that experimental phase when you’re trying to figure out how to get something set up right.