Brian Lovin
/
Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

austinjp

Genuine question: since these are core utils, and probably used billions of times every day, is anyone actually going to switch to this version? I see that the intention is for this to be a drop-in replacement, but some options and behaviours are still different.

To clarify, I'm not intending this as a negative comment. It's an impressive project, and aiming for cross-platform sharing of scripts seems a worthy goal. The graph of progress against the GNU test suite is neat and encouraging. However, I can't imagine anyone in Unixy lands moving to this - although I may be wrong there - so it feels like a "compatibility" project for MacOS and Windows. I'm not familiar with the situation on MacOS but how bad is the compatibility issue? Similarly, on Windows what's the situation with WSL or VMs or even Cygwin? Is performance the issue?

Do people actually want this, is there a market for it? It's got 17.8k stars so seemingly so? Again, I'm not intending to cast aspersions, just trying to understand the audience. This is clearly different from a "for the fun of it" side project.

cdogl

I mostly agree with your comment, but the source code of many GNU coreutils is quite gnarly. It's ancient code (from my reference point as a 35-year-old), developed at a time when coding style was different and a much smaller community maintained it.

I think it's important for free software that people coming into the community are enthusiastic to maintain it. It took the wind out of my sails a little when I realised the GNU code base, while it produces critical tools I use every day, is written in a way that I found extremely (unnecessarily) terse and "clever". Tracing how different combinations of flags are handled is not much fun. Documentation is helpful for the user, less so for the tinkerer who is trying to understand the stack.

If this project manages to hit parity with GNU coreutils, and my distro(s) provide support, I'll switch to it purely on that basis.

dmd

I've interacted quite a bit with some of the authors, and when I've asked "why on earth did you do it this way" the answer is generally some form of "well, it saves nearly 6 bytes on disk in the source code! disk isn't free, son".

It's a different mindset and one that is no longer useful.

amluto

> It's a different mindset and one that is no longer useful.

I disagree. Sure, 6 bytes is approximately free in most contexts, but 100MB is not free, and 5GB is even less free.

But more importantly, writing code under constraints can force good behavior. For example, BIOS is a legacy mess but it’s a small, self-contained legacy mess that fits in a few kB. Compare to UEFI, which is unbelievably complicated and bug-ridden. A mess like UEFI could not have fit within the constraints of BIOS.

This is not to say that writing obscure code to save a couple bytes of source file size is at all worthwhile any more, but the idea that one should constrain bloat (design bloat, code bloat, executable bloat, network bloat, etc) is very much still valuable.

bayindirh

When I asked a graybeard why the variable names were so short, he said: "longer names affected compile duration very severely in the past, so that is why we used the shortest names possible".

While user-facing programs are not noticeably faster for it, computers did get faster in some respects, after all.

dig1

The thing is that coreutils is used everywhere - servers, desktops, and embedded devices, including machines that are 30-40 years old. You want to update coreutils on an old SPARC or some even older mainframe? Every byte counts.

So, saving as many bytes as possible is still very relevant.

stevehawk

You say that, but I recently ran out of disk space on my iPhone... and the more I looked at the apps, the more I realized "I bet these are all framework-based apps, not native apps, and no one is tree-shaking their code or doing /any/ of the things you're supposed to do to minimize disk usage."... *shakes fist at clouds*

9659

except when it is.

viraptor

This is true. I've tried to read some GNU code in the past (tar) and patch another tool (df) and... ran away instead. It's doable, but so messy I'd rather write my own very specific command than try to improve one of the old-style GNU ones.

Throw839

This! I am horrified to touch anything old from GNU!

dspillett

> so it feels like a "compatibility" project for MacOS and Windows

I get a similar feeling. This is a more attractive solution than ports like cygwin, because if you have this in every place you can be more sure your environments match. With ports there will be more delay getting changes/fixes into different environments, at least in theory, than with a solution that has being cross-platform as a core goal. The other main option, running Linux in a VM (directly or via WSL2), adds an extra layer of friction.

A cross-platform solution is more likely to match bug-for-bug in different environments, which can be as important as matching feature-for-feature: if something is going to fail out there it will fail the same way in your dev/test environments.

Also some will want to use it from a licensing PoV. Many¹ find GPL related licences bothersome and use the GNU coreutils because there isn't (until this matches their requirements) an alternative. Similarly, the language might be an attraction to some, either ideologically or because they might want to dig into the source, though this is more of a factor for projects developing new features rather than trying to be drop-in replacements.

I expect the attraction to be relatively niche though; people won't start using it as a drop-in for the GNU tools en masse until, for instance, popular distributions use it (or a niche distribution using it becomes more popular for some other reason).

----

[1] I am not one of them, but they are many, so this is a notable consideration.

noirscape

Well for one, uutils is in Rust. That alone can be pretty desirable for some people because it implies memory-safety guarantees, especially compared to the coreutils, which are in C.

The license stuff is what people focus on, but to me that is much more interesting. C is a pretty decrepit language and while I don't care much for the "rewrite it in rust" cult, the coreutils are exactly the type of program rust excels at - a non-iterative target that doesn't change often and can be made with "best practices" in mind.

Not having to deal with the utterly inane amount of dead architectures that GNU projects inherit probably helps them too on that end.

lohnjemon

What kinds of memory safety bugs do you really care about in coreutils? Genuinely curious.

Given how mature and well-defined the GNU coreutils are, how small their scope is, and how they are used, I really don't see the supposed security upside here.

There simply has to be a better reason than "Rust good".

wongarsu

Some CVEs have happened in the past [1]. None of them memory issues, but a couple that seem unlikely in idiomatic rust or are much easier to prevent in rust.

Specifically, integer overflow is much easier to handle correctly in Rust, making bugs like CVE-2015-4042 less likely, and correct handling of multibyte strings is essentially enforced by the standard library, making issues like CVE-2015-4041 very unlikely in a Rust implementation.

1: https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...

VBprogrammer

Wasn't it just a couple of years ago that we had Shellshock? According to Wikipedia, that bug was introduced in 1989. Of course, that wasn't a memory safety issue (and it's probably important to reiterate that memory safety doesn't mean free of bugs and/or exploits). But it does demonstrate that issues can exist for a long time in mature code without anyone noticing.

Corrado

I think the upside is that these utils are not set in stone, never to be updated. Future maintainers will probably find it easier to work on a more modern codebase with more footgun protections. So yes, while the current utils are great, at some point the code will have to be modified, and I would much rather modify clean Rust code than 30-, 40-, 50-year-old super-optimized C that no one truly understands anymore.

YoshiRulz

I use NixOS so it's feasible for me to use these as true drop-in replacements when they're done. And the reason I'd want to do that is for hardening, as the sibling commenter suggests.

The fact it's released under MIT instead of Apache (or GPL) does worry me though.

Snow_Falls

Yeah, I'm not a fan of the rust ecosystem trying to move everything from FSF-style free/libre licenses to permissive licenses.

m4rtink

Yeah - memory safety, why not (though it's not a silver bullet), but why change the license to one that can be quite dangerous over time for something this important?

silon42

Someone could release it with a relicense to GPL (and maybe LGPL if dynamically linked).

dartos

You can’t just release someone else’s BSD code under GPL.

duped

The canonical GNU packages are pretty arcane and difficult to build/understand (*), so alternatives with modern tooling/languages should be welcome.

But also, things like BusyBox and ToyBox are very popular alternatives which contain non-standard ports of only a subset of coreutils. A complete and mostly compatible set of coreutils would be more popular than either, but they also have some different constraints that Rust builds make difficult (eg: binary size).

* nb4 "it's just configure/make/install what do you mean" consider if you want coreutils without glibc, cross compiled, or want to bootstrap your environment without any of them. The GNU ecosystem is "easy" to build/use within the GNU ecosystem, which is not unopinionated about what that looks like and how it works.

pletnes

Where I’m working, WSL doesn’t work (well), VMs are banned, and personal experience with cygwin is awful (that was on win7, but still). Everything that helps me develop software on windows which runs on linux (of course!) is warmly welcome.

I’m sure there are a thousand similar but different enterprise environments like this. At home / solo dev it makes no sense, you’d just install ubuntu / get a macbook and get working without any fuss.

Docker works but it makes test runs 100x slower (yes, I measured, it’s not made up).

wongarsu

Makes me wonder how Docker on Windows handles file access. I know under the hood it uses WSL2, which in turn uses a Hyper-V VM. In WSL2 it's a big deal where you put your files. While Windows can access the files in the Linux VM and the other way around, that requires communication across the hypervisor and is orders of magnitude slower than accessing the files managed by your own kernel (like C: on Windows and /home in the WSL VM).

VBprogrammer

If your test runs are slower by two orders of magnitude then something is going wrong.

For example, I've had problems on Docker for Mac where accessing lots of (Python) files in a volume mount was slow. I had to use some beta setting at the time which was far better (though I had to do an OS upgrade to use it, which also entailed some IT nonsense).

The overheads of docker are there but if it was that bad people wouldn't use it.

pletnes

File IO seems to be the problem. I’m guessing antivirus + virtualization overhead. Don’t see what I can change about that (use linux, sure).

Docker is still useful, it’s great for sharing a complicated setup across machines.

ParetoOptimal

It is that bad. I use a Linux VM with docker from OSX and coworkers use docker with OSX.

Their compiles take 2m, mine take 30s.

thesnide

I use mingw/msys2 with great success.

I even do some directx dev with it.

All with very portable code that can even be built in the standard github action ubuntu.

pletnes

Yes, and git bash is quite useful and built on those. In many ways the most useful alternative - it works and everyone already has it. But not a logical choice if you want to «deploy» something I guess.

cesarb

> [...] VMs are banned [...] Docker works but [...]

I might be missing something, but isn't Docker on Windows or MacOS actually a VM running Linux (or, on Windows, perhaps it uses WSL2, which is also a VM running Linux)? How could you use Docker if VMs are banned?

master-lincoln

Docker on Windows can run without a VM if the container guest is Windows with a matching kernel and process isolation is enabled in the Docker parameters.

Similar situation for MacOS docker containers on a MacOS host (see recently discussed https://news.ycombinator.com/item?id=37655477)

But for running Linux containers on MacOS or Windows I think you are right

tahnyall

> Where I’m working, WSL doesn’t work (well), VMs are banned, and personal experience with cygwin is awful

I'm no Windows fan, but WSL has worked extremely well on several different machines I've worked on since 2015. Cygwin was no WSL, but I could get a lot done with it back in the day as well.

> Docker works but it makes test runs 100x slower (yes, I measured, it’s not made up).

Yeah, I don't know what you're doing that it could be 100 times slower, something else is wrong.

pletnes

WSL conflicts with many antivirus/security packages due to e.g. DNS filtering or other network restrictions. WSL itself on a non-managed box works not too badly, sure.

Cygwin has been very useful to me, but also caused its share of issues with its permissions model, creating large directory trees that couldn’t be deleted.

smt88

> Differences with GNU are treated as bugs.

That tells me that "some options and behaviours are still different" is also being treated as a bug.

Even if this is only used by huge companies like Meta, or for new projects, it will still justify its existence.

apatheticonion

I use them on my Windows machine from PowerShell to make using the Windows command line more tolerable.

You can find the GNU coreutils for Windows on sourceforge but it always feels like spyware when I download something from that site, plus I don't know how old they are or if they are actively maintained.

With this rewrite, I like that I can download them from the repo's releases page. There is an issues page for problems and I can see the project is actively worked on.

Installing them as binaries is simple and, as with all Rust/Go projects, I actually know how to compile them (sorry, I know C/C++ is great, but between gcc||msvc||clang, missing deps, and ./configure, make, make install, I rarely have a pleasant experience compiling something in C/C++).

Dinux

We switched to Rust about 4 years back for most of our robotics and embedded control systems. It has been a blessing to move away from C/C++ after 10 years. Sure, Rust has its problems and issues, especially when it comes to async and concurrency. Yes, it has a steep learning curve; yes, the compiler gets in the way often; but the number of _actual_ bugs (not design flaws) is probably fewer than 10 over 4 years. Every time I work on a C/C++ project I'm painfully reminded how easy it is to shoot yourself in the foot. I hope coreutils and Rust in the kernel will eventually become the default.

PartiallyTyped

I genuinely can't get enough of rust tbh. It "just" works. Don't get me wrong, a few things could be better, e.g. compile times, but it's so much easier to work with.

klabb3

Please include the domain you’re working in and what you’re comparing against when making value statements like this. It can be helpful for others and the debate at large.

PartiallyTyped

I do software analysis, with some query engines, and backend. My comparisons are against java, python, and C, though I did try to get into C++.

I also contribute to rust-lang/clippy and other rust projects.

thesnide

I wonder how much of that is due to Rust being too young to have myriads of dubious code to copy from.

Perl is even more memory safe than Rust, but the amount of crappy code is overwhelming...

steveklabnik

That's the thing about a compiler enforcing rules: you can't even get some kinds of dubious code to compile, so therefore, it will never meaningfully be copied.

Of course, that doesn't mean that all bugs are prevented, or that Rust code has no bugs, or that you can't write bad Rust code. But in the context of robotics and embedded control systems, Rust solves a lot of those "bad code" issues at compile time. And you're not using Perl in that context regardless.

thesnide

oh, now I'm wondering if Ada might be interesting for robotics.

As it is another language that is said to be 'compiler-driven'.

coldtea

The issue with all these efforts is whether they'll be sustained and maintained long term, or merely until the 1-2 maintainers lose interest.

GNU coreutils on the other hand have been going for decades.

cmrx64

uutils just passed its first decade and is going strong.

sph

coreutils is 33 years old and going strong.

https://en.wikipedia.org/wiki/GNU_Core_Utilities#History

This one might have been around for 10 years, but it's disingenuous to claim it is as extensive, feature-full and tested as the real thing.

dcsommer

Who claimed that?

eviks

Indeed, there are no examples of decades-old projects dying, so that risk doesn't exist

OrderlyTiamat

There are more (many more) examples of young projects dying than of old projects. This is called the Lindy effect: that which has survived tends to survive. Taleb popularized the term Lindy effect, but the phenomenon had been noted before.

Taleb suggested that if something non-perishable has survived for a long time, its expected remaining survival is just as long; for example, if a book has been in print for 40 years, it is expected to remain in print for another 40 years.

The point is that projects, books, and other non-perishables don't have a life expectancy like biological organisms; they're actually more likely to live on if they've endured a long time.

coldtea

It's almost as if existing longevity is a sign of project community/support structures/resilience [1], and what matters for such an assessment is not knee-jerk pointing to the existence of counter-examples to show that non-zero risk exists (as if anybody said anything about the risk being zero), but the relative probability of a fresh && much less used project dying vs a widely used, mature project that has already proven it can survive for a long period of time...

Who would have thought!

[1] https://en.wikipedia.org/wiki/Lindy_effect

eviks

It's almost as if you didn't read the first sentence of your own link, which states that this "is a theorized phenomenon", not some established law of practical software development. So instead of knee-jerk posting a "proof" in response to a criticism of knee-jerk risk-pointing, or responding to some straw man about zero risk, you might actually realize that the main thing that matters for such an assessment is an actual assessment.

mijoharas

To be fair, the Lindy effect does imply that it's less likely for a decades old project to die soon than a newer one.

sph

> To be fair, the Lindy effect does imply that it's less likely for a decades old project to die soon than a newer one.

True, but doesn't that imply its opposite as well? A decade old project will probably die sooner than later, because 11+ year projects are rarer than 10 year ones.

The Lindy effect can only be observed in comparison to something else, not used to deduce how long a single project in and of itself will last. Which means coreutils will probably last longer than this one, because it's been around 33 years vs 10.

eviks

It's not fair, since the original comment said nothing of the sort, and this effect is just a theory.

ary

People seem very focused on “should they or shouldn’t they, and why”, which somewhat perplexes me given that there is at least one really good reason for doing this: to create Rosetta Stones for very common pieces of software. To really and truly know if Rust should become the new C we have to start seriously exercising it in the places where C reigns. That’s not a bake-off entirely defined by technological superiority, but also of practical applicability, maintainability, and ability to ship pervasively deployed solutions. This project is a great test of Rust.

RIIR isn’t merely a “sprinkle Rust magic because I believe” thing. The tech has clear potential and I personally think a version of it will take hold where C was once assumed. To know for sure we have to ship more software with it, and we need good comparisons. I’m quite thrilled to have projects like this pressing on.

znpy

> uutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.

Now this is something I can get into.

I'm okay with having stuff rewritten in Rust, but I'm not into learning some snowflake ls/grep and having to go edit all the scripts written over the years.

w10-1

Great progress, but the last 10% often takes 50% of the time.

In this case, there may be little value to the last bits, but people are going to wait to use these until that last bit is done, for fear of what might happen.

Which triggers ranting...

Q: Why exactly are we stuck targeting cross-platform compatibility for every possible option in every possible tool?

A: Because there is no good way to check the impact of removing options or tools, or for migrating clients.

By this mechanism, "OS" and "core" features grow forever, and the convention-bound C ABI is, well, as close to forever as we get.

Rewriting in Rust does not help that one bit (nor is it supposed to).

Has no one written literate ABI interfaces that support static validation and backwards compatibility via shims and automatic migrations? Must scripting be such a drag on progress?

We do automatic migrations in databases all the time; we should be able to do it in code.

einpoklum

Has there really been no effort to modernize some/most/all of the GNU coreutils code - while keeping the language and the license?

Switching language to Rust and changing the license does not make for a drop-in replacement, I would say.

awestroke

GNU coreutils will never modernize.

Nobody is switching language and license of GNU coreutils. This is a greenfield project.

pie_flavor

> Switching language to Rust and changing the license does not make for a drop-in replacement, I would say.

Why?

steveklabnik

Not your parent, but, to "replace" something means that it fills the same need you have as something else. The trick with talking about this in a generic context is that some people "need" some things and some do not. Your parent is saying "I do not like Rust and I prefer the GPL, and so this is not equivalent in my eyes." This can be true, while for a different person, simultaneously, "I like Rust and dislike the GPL, and so this is an equivalent in my eyes" can be true as well.

sntran

Coincidentally, I was just reading about how VS Code supports WASM in its web edition[1], and this project was mentioned in their effort to implement the terminal.

[1] https://code.visualstudio.com/blogs/2023/06/05/vscode-wasm-w...

ingen0s

Very cool! How long did this take?!

KolmogorovComp

First commit was 11 years ago.

tetris11

License.md:

    Copyright (c) uutils developers  
      
    Permission is hereby granted, free of charge, to any person
    obtaining a copy of this software and associated documentation
    files (the "Software"), to deal in the Software without
    restriction, including without limitation the rights to  
    use, copy, modify, merge, publish, distribute, sublicense, and/or
    sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following
    conditions:

Yep. This project is definitely going to be embraced by the community in the long run, and definitely supplant GNU coreutils. The MIT and GPL ideologies are completely aligned.

lifthrasiir

That happened already, if you haven't noticed yet. Android has used Toybox (0BSD) as its coreutils replacement for a decade. I don't see any reason to particularly criticize uutils for this exact reason at this point.

lucideer

> embraced by the community

> Android

Yes Android is definitely representative of the community...

lifthrasiir

I've interpreted that as the user community, because your statement would be a tautology if it were the free software community instead.

berkes

I guess it very much depends on any definition of "the community" then.

Affric

Yep.

GPL is the greatest thing that has ever happened to software and this stuff seeks to destroy it.

arghwhat

As we all know, evil corporations have waited all this time for the opportunity to ship modified versions of specifically non-BSD coreutils without sources to end-users.

toyg

It's mostly that evil corporations have waited to get free work from the community on everything, and they're now getting it.

mhh__

Clang on Mac is closed source. We're already swinging back away from being able to know what went into our binaries

globular-toast

Every time someone has said something to the effect of "it'll never happen" it happens, and then some.

You need to understand how evil corporations work. They may be composed of perfectly reasonable people but, taken as a whole, they are literally psychopathic. It's not that they are "waiting" for something to be possible, it's just that at every step they will take anything and everything they can and give back as little as they possibly can.

Have you ever been in such a corporation and tried to say you should release source code when you don't have to? You'd be laughed out of the room.

Don't be fooled by companies like Microsoft taking part in "open source". They have simply calculated that right now it's advantageous for them to appear that way. But they are always taking the maximum and giving back the minimum, no matter what. We won't change this, but we can raise what that minimum is. That's why we have the GPL.

krylon

The BSD projects' userlands have been used by other projects. If I want a Unix-like userland that is permissively licensed, I already have several to choose from.

Is there any pressing need to be bug-for-bug compatible to the GNU counterpart?

(Just to be clear, I am not opposed to this project, but I'm not sure how many people will rejoice and adopt this just because of the license. But I'll admit I am going on vibes here.)

globular-toast

I'm so relieved to find the top comment here is about the licence. The trend towards MIT-style licences and away from the GPL is a worrying one.

I don't understand the problem people have with the GPL. The GPL is there to ensure free software stays free forever. In a world with copyright this is the only way to do it. Permissive licences and public domain do not work.

Companies like Microsoft hate the GPL. This alone should tell you that you, an individual enthusiast, should probably love it.

tcmart14

It depends, like all things. I am fine with the GPL, but there are ways it hinders open source. A primary example: Linux can benefit from BSD- and MIT-licensed code, but it doesn't go the other way; FreeBSD cannot benefit from GPL code. At least on the BSD side of things, that is why the GPL is disliked. As an example, FreeBSD developers can write a filesystem under the BSD license and Linux can sort of yoink that code out of the tree and use it (supposing nothing special needs to be done with interfaces and such). But the reverse isn't true: if Linux developers write a new FS in the kernel and put the GPL on it, FreeBSD can't use that code.

goodpoint

No, FreeBSD can very much benefit from GPL code. They choose not to use it.

david_draco

Is it even legal to take GPL-licensed code, translate it to Rust while ensuring it is exactly compatible, and then release it under a non-GPL license? I thought you would need to extract a clean-room specification from it first, ideally with separate people extracting the specification and writing the new code.

krylon

You can use the man pages as your reference. Another comment also mentioned a test suite. I think using that to check how equivalent your attempt is does not violate the GPL.

Y_Y

I had the same thought. This very much feels like a derived work. Since they (aim to) replicate the behaviour of the GNU utils, and not just some generic POSIX utils, I think it would be hard to argue the GPL doesn't apply in a legal sense. And in a moral sense it very much is a violation, since the GNU authors presumably intended for their work to be built on only by further copyleft works.

jillesvangurp

Drop in replacement with less license issues, reactionary types endlessly arguing the notion of freedom, code that just works, and lots of development activity. What is there not to like for the likes of Red Hat, Ubuntu, Google, Amazon and others providing Linux based products.

In all seriousness, if you removed all non-GPL-licensed software from your Linux distribution for not being pure enough, you'd be left with something a bit less comprehensive than your average operating system. It would be missing a UI, for example, because both X and Wayland are MIT-licensed. Also a lot of the popular server software that made Linux successful, and generally a lot of the stuff the OSS community uses and depends on every day, like OpenSSH, which is BSD-licensed, or Apache httpd (for which the Apache license was invented), etc.

The GPL vision where all that stuff was going to be GPL licensed just never played out that way. Developers and companies decided otherwise. And this is fine.

It is not a problem that a lot of people believe needs fixing (as evidenced by a lot of non-GPL OSS without good GPL alternatives). But of course, if you feel otherwise, best of luck fixing it. The world runs on software. A lot of that is free and open source. And most of that is not licensed under the GPL.

rcxdude

I think at this point the vast majority of linux users are indifferent to the difference between copyleft and non-copyleft licenses (and of those that do care, they are likely to lean against copyleft). Whether this project actually has enough advantages (and small enough disadvantages: compatibility still seems to be far from complete enough) to be adopted by users or distributions is another matter, though.

roenxi

The vast majority of Linux users are indifferent to lots of important things, operating systems are complex and there are too many parts to keep abreast of.

However, the foundations are largely GPL-licensed because the GPL is superior in the long term. If people want to donate their time to companies then good luck to them, but it is a stupid strategy to donate free time to companies. If a company is paying good money for someone to write BSD-licensed software then sure, but otherwise the developer is playing a mug's game. Not a universal rule, I can see why some crypto or standard reference libraries might want to be BSD licenced. But by and large it is an invitation for parasites to attach.

Snow_Falls

Hell, even library type software could be licensed under "weak" copyleft license such as LGPL.

rejschaap

I would expect more permissive licenses in the future. The GPL was instrumental in changing the world to an open-source mindset; it fixed the problems with software that became obvious in the 90s. The world has evolved, and the restrictions just feel outdated now.

cmrdporcupine

Agree, though it's worth pointing out that GPL-licensed software is still subject to parasitical behaviour from SaaS companies. The AGPL can plug that, but it is very unpopular.

denotational

> and of those that do care, they are likely to lean against copyleft

What is this based on? Asking in good faith.

rcxdude

My impression is that the vast majority of Linux users who think about code licensing at all are using it commercially, and so copyleft represents an extra burden on them.

(also, IMO, copyleft vs non-copyleft doesn't seem to make a huge difference to the outcomes copyleft advocates claim to care about, especially to the average user - most MIT licensed projects still generally receive contributions from people using them commercially, and some GPL projects (linux itself suffering from this greatly) still suffer from being fragmented by proprietary patches and forks, even if the license is being complied with.)

pas

Sure, users shouldn't care. The ecosystems ought to serve the users after all, and it should be sustainable, competitive with other ecosystems, etc. But all these instrumental goals are on the developers, and if carefully choosing licenses for each project can help with this, then it makes sense to pay attention to which project has which license.

Coreutils? Probably here the importance is to provide the interface the users want, and provide it everywhere, thus allowing the ecosystem to grow and even move to other platforms. This might require a BSD-like license. (These tools evolve slowly and usually don't represent some huge know-how that other ecosystems want to lift; here the well-known, efficient CLI experience is the value, which might be fair-use copyable, but as we see, Apple for example doesn't care, and thus users are basically left with worse defaults.)

Filesystems? Stability, cross-platform compatibility, but also usually decades of extremely valuable battle-testing culminates in a specific codebase+ecosystem context. It's hard to replicate with simple copying, but of course much easier. In practice with ZFS we see that the licensing incompatibility serves to provide fewer options for users.

Drivers? Databases? Kernel? Where does the TiVo problem rear its ugly head? And what does the relatively lackluster GPL enforcement track record tell us here? What about the more important problem of non-upstreamed but-shitty-as-fuck Android drivers? Are they good or bad for users? Which ecosystem are they a part of?

So it seems that, even if one cares about licenses, the big takeaway is that they matter a lot less than the other things required for a successful source-sharing, community-driven project. OSI-approved or not.

(For example I have no idea what license Terraform has .. okay, I checked, it's MariaDB BSL, and OpenTF is now MPL2. So I don't think MIT/Apache2 or AGPL3 would have made a difference in Hashicorp's hostile stewardship.)

ghusbands

> Yep. This project is definitely going to be embraced by the community in the long run, and definitely supplant GNU coreutils. The ideologies are completely aligned.

By tone, I assume this is sarcasm. Could you perhaps clearly state the issue?

serf

here's an OK synopsis of why it's different, even if I don't necessarily agree with the 'style of language'

[0]: https://news.ycombinator.com/item?id=37383680

AlienRobot

Wouldn't GPL be perfectly fine with this if it was a library instead? I don't get it.

andrewstuart

"With an MIT License" would be good to have in the title/headline.

How certain is it that such utilities would be compatible including any weird edge cases?

cmrx64

that’s what the automated test suite is for! and the graph in there is looking pretty good.

Someone

And they (somewhat/largely) avoid the “a test can’t check for issues we don’t know about” problem by running the GNU CoreUtils test suite, not a suite they wrote themselves.

(Browsing https://github.com/coreutils/coreutils/tree/master/tests that test suite is written in shell and Perl)


moby_click

I am surprised that these are not called oreutils.
