lordofgibbons
The more your compiler does for you at build time, the longer it will take to build; it's that simple.
Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.
When you have things like macros, advanced type systems, and want robustness guarantees at build time, then you have to pay for that.
duped
I think this is mostly a myth. If you look at Rust compiler benchmarks, while typechecking isn't _free_ it's also not the bottleneck.
A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers, and they generate exactly one object file, so the linker has no work to do.
Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.
Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so while the compiler is done with most of its typechecking work, the backend and assembler have a lot more to churn through.
benreesman
The meme that static linking is slow, or produces anything other than the best executables, is demonstrably false and the result of surprisingly sinister agendas. Get out readelf, nm, and ps sometime and do the arithmetic: most programs don't link much of glibc (and its static link is broken by design; musl is better at just about everything). Matt Godbolt has a great talk about how dynamic linking actually works that should give anyone pause.
DLLs got their start when early windowing systems didn't quite fit on the workstations of the era in the late 80s / early 90s.
In about 4 minutes both Microsoft and GNU were like, "let me get this straight, it will never work on another system and I can silently change it whenever I want?" Debian went along because it gives distro maintainers degrees of freedom they like and don't bear the costs of.
Fast forward 30 years and Docker is too profitable a problem to fix by the simple expedient of calling a stable kernel ABI on anything, and don't even get me started on how penetrated everything but libressl and libsodium are. Protip: TLS is popular with the establishment because even Wireshark requires special settings and privileges for a user to see their own traffic, security patches my ass. eBPF is easier.
Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries like the cloud compute and Docker industrial complex, and should die in a fire.
Don't take my word for it, swing by cat-v.org sometimes and see what the authors of Unix have to say about it.
I'll save the rant about how rustc somehow manages to be slower than clang++ and clang-tidy combined for another day.
treyd
Not only does it generate more code, the initially generated code before optimizations is also often worse. For example, heavy use of iterators means a ton of generics being instantiated and a ton of call code for setting up and tearing down call frames. This gets heavily inlined and flattened out, so in the end it's extremely well-optimized, but it's a lot of work for the compiler. Writing it all out classically with for loops and ifs is possible, but it's harder to read.
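To illustrate the point, here is a minimal Rust sketch (mine, not from the comment) of the two styles: the iterator version instantiates generic adapter types such as `Map` and `Filter` that the compiler must monomorphize, inline, and flatten, while the classic version needs none of that machinery.

```rust
// Iterator style: builds a Filter<Map<Iter<...>>> chain of generic
// adapters that the compiler instantiates, inlines, and flattens.
fn sum_even_squares_iter(xs: &[i32]) -> i32 {
    xs.iter().map(|x| x * x).filter(|x| x % 2 == 0).sum()
}

// Classic style: no generic machinery, but more verbose to read.
fn sum_even_squares_loop(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        let sq = x * x;
        if sq % 2 == 0 {
            total += sq;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    // squares are 1, 4, 9, 16; the even ones sum to 20
    assert_eq!(sum_even_squares_iter(&xs), 20);
    assert_eq!(sum_even_squares_loop(&xs), 20);
}
```

Both versions usually optimize down to comparable machine code; the difference is how much work the compiler does to get there.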
fingerlocks
The Swift compiler is definitely bottlenecked by type checking. For example, as a language requirement, generic types are left more or less intact after compilation. They are type checked independently of how they are used. This is unlike C++ templates, which are effectively copy-pasted with the resolved types at every use site.
This has tradeoffs: increased ABI stability at the cost of longer compile times.
the-lazy-guy
> Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.
Could you expand on that, please? Every time you run a dynamically linked program, it is linked at runtime (unless it explicitly avoids linking unnecessary stuff by dlopening things lazily, which pretty much never happens). If it is fine to link on every program launch, linking at build time should not be a problem at all.
If you want to have link time optimization, that's another story. But you absolutely don't have to do that if you care about build speed.
windward
>Codegen is also a problem. Rust tends to generate a lot more code than C or C++
Wouldn't you say a lot of that comes from the macros and (by way of monomorphisation) the type system?
blizdiddy
Go is static by default and still fast as hell
ChadNauseam
That the type system is responsible for rust's slow builds is a common and enduring myth. `cargo check` (which just does typechecking) is actually usually pretty fast. Most of the build time is spent in the code generation phase. Some macros do cause problems as you mention, since the code that contains the macro must be compiled before the code that uses it, so they reduce parallelism.
rstuart4133
> Most of the build time is spent in the code generation phase.
I can believe that, but even so it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembler compiled to suit your application. Thus, there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.
Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.
FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
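A small sketch of the tradeoff described above (my example, not from the comment): each element type used with `sort` gets its own fully specialized copy of the sorting code, unlike libc's single pre-compiled `qsort`, which works for every type through a comparator function pointer.

```rust
// slice::sort is generic, so the compiler monomorphizes it: one
// specialized copy of the machine code per element type used.
fn main() {
    let mut ints = vec![3, 1, 2];
    let mut words = vec!["b", "a", "c"];

    ints.sort();   // instantiates sorting code for i32
    words.sort();  // a second, independent instantiation for &str

    assert_eq!(ints, [1, 2, 3]);
    assert_eq!(words, ["a", "b", "c"]);
}
```

The specialized copies can inline the comparison and run faster than qsort's indirect calls, but they are exactly the extra codegen work being discussed.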
tedunangst
I just ran cargo check on nushell, and it took a minute and a half. I didn't time how long it took to compile, maybe five minutes earlier today? So I would call it faster, but still not fast.
I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.
cogman10
Yes but I'd also add that Go specifically does not optimize well.
The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough. Especially because its use case is often applications where "good enough" is good enough (i.e., IO-heavy applications).
You can see that with "gccgo". Slower to compile, faster to run.
cherryteastain
Is gccgo really faster? Last time I checked, it looked abandoned (stuck at Go 1.18, no generics support) and was not really faster than the "actual" compiler.
pclmulqdq
Go defaults to an unoptimized build. If you want it to run heavy optimization passes, you can turn those on with flags. Rust defaults to doing most of those optimizations on every build and allows you to turn them off.
Zardoz84
Dlang compilers do more than any C++ compiler (metaprogramming, a better template system, and compile-time execution) and are hugely faster. Language syntax design has a role here.
Mawr
Not really. The root reason behind Go's fast compilation is that it was specifically designed to compile fast. The implementation details are just a natural consequence of that design decision.
Since fast compilation was a goal, every part of the design was looked at through a rough "can this be a horrible bottleneck?" lens, and discarded if so. For example, the import (package) system was designed to avoid the horrible, inefficient mess of C++. It's obvious that you never want to compile the same package more than once and that you need to support parallel package compilation. These may be blindingly obvious, but if you don't think about compilation speed at design time, you'll get this wrong and will never be able to fix it.
As far as optimizations vs compile speed goes, it's just a simple case of diminishing returns. Since Rust has maximum possible performance as a goal, it's forced to go well into the diminishing returns territory, sacrificing a ton of compile speed for minor performance improvements. Go has far more modest performance goals, so it can get 80% of the possible performance for only 20% of the compile cost. Rust can't afford to relax its stance because it's competing with languages like C++, and to some extent C, that are willing to go to any length to squeeze out an extra 1% of performance.
pjmlp
C++23 modules compile really fast now, more so when helped with incremental compilation, incremental linking, and a culture that has no qualms about depending on binary libraries.
teleforce
>When you have things like macros, advanced type systems, and want robustness guarantees at build time, then you have to pay for that.
Go and Dlang compilers were designed by people who are really good at compiler design, and that's why they're freaking fast. They designed the language around the compiler constraints and at the same time managed to make the language intuitive to use. For example, Dlang has no macros and no unnecessary symbol lookup for the ambiguous >>.
Because of these design decisions, both Go and Dlang are anomalies for fast compilation. Dlang in particular is notably more powerful and expressive compared to C++ and Rust, even with its unique hybrid GC and non-GC compilation.
In automotive industry it's considered a breakthrough and game changing achievement if you have a fast transmission for seamless auto and manual transmission such as found in the latest Koenigsegg hypercar [1]. In programming industry however, nobody seems to care. Walter Bright the designer of Dlang has background in mechanical engineering and it shows.
[1] Engage Shift System: Koenigsegg new hybrid manual and automatic gearbox in CC850:
https://www.topgear.com/car-news/supercars/heres-how-koenigs...
pjmlp
It isn't an anomaly, it was pretty standard during the 1990's, until C and C++ took over all other compiled languages, followed by a whole generation educated in scripting languages.
phplovesong
That's not really true. As a counterexample, OCaml has a very advanced type system, full type inference, generics, and all that jazz. Still, it's on par with, or even faster to compile than, Go.
jstanley
> Go has sub-second build times even on massive code-bases.
Unless you use sqlite, in which case your build takes a million years.
Groxx
Yeah, I deal with multiple Go projects that take a couple minutes to link the final binary, much less build all the intermediates.
Compilation speed depends on what you do with a language. "Fast" is not an absolute, and for most people it depends heavily on community habits. Rust habits tend to favor extreme optimizability and/or extreme compile-time guarantees, and that's obviously going to be slower than simpler code.
infogulch
Try https://github.com/ncruces/go-sqlite3 it runs sqlite in WASM with wazero, a pure Go WASM runtime, so it builds without any CGo required. Most of the benchmarks are within a few % of the performance of mattn/go-sqlite3.
dhosek
Because Rust and Swift are doing much more work than a C compiler would? The analysis necessary for the borrow checker is not free, likewise a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking of things beyond basic syntax, so you can call foo(char) with an int and other unholy things.
Thiez
This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.
vbezhenar
Is it possible to generate less junk? Sounds like the compiler developers took shortcuts, which could be improved over time.
steveklabnik
The borrow checker is usually a blip on the overall graph of compilation time.
The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.
kimixa
While the borrow checker is one big difference, it's certainly not the only thing the Rust compiler offers on top of C that takes more work.
Stuff like inserting bounds checking puts more work on the optimization passes and codegen backend, as it simply has to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high on the profiled times either.
Generally it provides stricter checks that are normally punted to a linter tool in the c/c++ world - and nobody has accused clang-tidy of being fast :P
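A concrete instance of the bounds-checking point (my sketch, not from the comment): every indexing operation below carries an implicit check that the optimizer must either prove away or emit as extra instructions, work that plain C pointer arithmetic never generates in the first place.

```rust
// Each xs[i] has an implicit bounds check that panics on
// out-of-range access. LLVM can usually prove i < xs.len() here
// and remove it, but that proof itself costs compile time.
fn sum_indexed(xs: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i]; // bounds check (usually optimized away)
    }
    total
}

fn main() {
    assert_eq!(sum_indexed(&[1, 2, 3]), 6);
}
```

Iterator-based loops sidestep the check entirely, which is one reason they are idiomatic despite the extra generic machinery.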
taylorallred
These languages do more at compile time, yes. However, I learned from Ryan's Discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner, generics, etc.) but I have my doubts that those things cause the most slowdown.
drivebyhooting
That’s not a good example. foo(int) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn't the compiler cutting corners.
jvanderbot
If you'd like the rust compiler to operate quickly:
* Make no nested types - these slow compile times a lot
* Include no crates, or only ones that emphasize compile speed
C is still v. fast though. That's why I love it (and Rust).
windward
>Make no nested types
I wouldn't like it that much
tptacek
I don't think it's interesting to observe that C code can be compiled quickly (so can Go, a language designed specifically for fast compilation). It's not a problem intrinsic to compilation; the interesting hard problem is to make Rust's semantics compile quickly. This is a FAQ on the Rust website.
vbezhenar
I encountered one C++ project in the 2000s with a few dozen KLoC. It compiled in a fraction of a second on an old computer. My hello world code with Boost took a few seconds to compile. So it's not just about language; it's about structuring your code and using features with heavy compilation cost. I'm pretty sure that you could write Doom with C macros and it won't be fast. I'm also pretty sure that you can write Rust code in a way that compiles very fast.
taylorallred
I'd be very interested to see a list of features/patterns and the cost that they incur on the compiler. Ideally, people should be able to use the whole language without having to wait so long for the result.
vbezhenar
So there are a few distinctive patterns I observed in that project. Please note that many of those patterns are considered anti-patterns by many people, so I don't necessarily suggest using them.
1. Use pointers and do not include header file for class, if you need pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.
2. Do not use STL or IOstreams. That project used only libc and POSIX API. I know that author really hated STL and considered it a huge mistake to be included to the standard language.
3. Avoid generic templates unless absolutely necessary. Templates force you to write your code in the header file, so it'll be parsed multiple times for every include, compiled into multiple copies, etc. And even when you use templates, try to split the class into generic and non-generic parts, so some code can be moved from header to source. Generally prefer run-time polymorphism to generic compile-time polymorphism.
LtdJorge
There is an experimental Cranelift backend[0] for rustc to improve compilation performance in debug builds.
kccqzy
Templates as one single feature can be hugely variable. Its effect on compilation time can be unmeasurable. Or you can easily write a few dozen lines that will take hours to compile.
herewulf
My anecdata would be that the average C++ developer puts includes inside every header file, which include more headers, to the point where everything is including everything else, a single .cpp file draws in huge swaths of unnecessary code, and the project takes eons to compile on a fast computer.
That's my 2000s development experience. Fortunately I've spent a good chunk of the 2010s and most of the 2020s using other languages.
The classic XKCD compilation comic exists for a reason.
pjmlp
The newbie C++ developer, unaware of pre-compiled headers, binary libraries, extern templates.
Most likely treating C and C++ as scripting languages with header only libraries, only to complain about build times afterwards.
ceronman
I bet that if you take those 278k lines of code and rewrite them in simple Rust, without using generics, or macros, and using a single crate, without dependencies, you could achieve very similar compile times. The Rust compiler can be very fast if the code is simple. It's when you have dependencies and heavy abstractions (macros, generics, traits, deep dependency trees) that things become slow.
taylorallred
I'm curious about that point you made about dependencies. This Rust project (https://github.com/microsoft/edit) is made with essentially no dependencies, is 17,426 lines of code, and on an M4 Max it compiles in 1.83s debug and 5.40s release. The code seems pretty simple as well. Edit: Note also that this is 10k more lines than the OP's project. This certainly makes those deps suspicious.
MindSpunk
The 'essentially no dependencies' isn't entirely true. It depends on the 'windows' crate, which is Microsoft's auto-generated Win32 bindings. The 'windows' crate is huge, and would be leading to hundreds of thousands of LoC being pulled in.
There's some other dependencies in there that are only used when building for test/benchmarking like serde, zstd, and criterion. You would need to be certain you're building only the library and not the test harness to be sure those aren't being built too.
90s_dev
I can't help but think the borrow checker alone would slow this down by at least 1 or 2 orders of magnitude.
tomjakubowski
The borrow checker is really not that expensive. On a random example, a release build of the regex crate, I see <1% of time spent in borrowck. >80% is spent in codegen and LLVM.
FridgeSeal
Again, as has been often repeated, and backed up with data, the borrow checker is a tiny fraction of a Rust app's build time; the biggest chunk of time is spent in LLVM.
steveklabnik
Your intuition would be wrong: the borrow checker does not take much time at all.
weinzierl
This is sometimes called amalgamation and you can do it in Rust as well, either manually or with tools. The point is that, apart from very specific niches, it is just not a practical approach.
It's not that it can't be done but that it usually is not worth the hassle and our goal should be for compilation to be fast despite not everything being in one file.
Turbo Pascal is a prime example for a compiler that won the market not least because of its - for the time - outstanding compilation speed.
In the same vein, a language can be designed for fast compilation. Pascal in general was designed for single-pass compilation, which made it naturally fast. All the necessary forward declarations were a pain though, and the victory of languages that are not designed for single-pass compilation proves that, while doable, it was not worth it in the end.
john-h-k
My C compiler, which is pretty naive and around ~90,000 lines, can compile _itself_ in around 1 second. Clang can do it in like 0.4.
The simple truth is a C compiler doesn’t need to do very much!
TZubiri
I guess you can do that, but if for some reason you needed to compile separately (suppose you sell the system to a third-party client, and they need to modify module 1, module 2, and the main loop), it would be pretty trivial to remove some #include "module3.c" lines and add some -o module3 options to the compiler. Right?
I'm not sure what Rust or docker have to do with this basic issue, it just feels like young blood attempting 2020 solutions before exploring 1970 solutions.
rednafi
I’m glad that Go went the other way around: compilation speed over optimization.
For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.
I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.
But of course there are places where GC can’t be tolerated or correctness matters more than development speed. But I don’t work in that arena and am quite happy with the tradeoffs that Go made.
paldepind2
> I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness.
I'm a fan of Go, but I don't think it's the product of some awesome collective Google wisdom and experience. Had it been, I think they'd have come to the conclusion that statically eliminating null pointer exceptions was a worthwhile endeavor, just to mention one thing. Instead, I think it's just the product of some people at Google making a language the way they wanted to.
IshKebab
And indeed they did come to that conclusion - for Dart 2.
Go is the product of like 3 Googlers' tastes. It isn't some perfect answer born out of the experience of thousands of geniuses.
I think they got a lot right - fantastic tooling, avoiding glibc, auto-formatting, tabs, even the "no functional programming so you have to write simple code" thing is definitely a valid position. But I don't think anyone can seriously argue that Go's handling of null is anything but a huge mistake.
melodyogonna
But those people at Google were veteran researchers who wanted to make a language that could scale for Google's use cases; these things are well documented.
For example, Ken Thompson has said his job at Google was just to find things he could make better.
nine_k
They also built a language that can be learned in a weekend (well, now two) and is small enough for a fresh grad hire to learn on the job.
Go has a very low barrier to entry, but also a relatively low ceiling. The proliferation of codegen tools for Go is a testament to its limited expressive power.
It doesn't mean that Go didn't hit a sweet spot. For certain tasks, it very much did.
mike_hearn
> fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps
Well, that point in the design space was already occupied by Java, which also has extremely fast builds. Go exists primarily because the designers wanted to make a new programming language, as far as I can tell. It has some nice implementation aspects, but it picked up its users mostly from the Python/Ruby/JS world rather than C/C++/Java, which was the original target market they had in mind (i.e. Google servers). Scripting language users were in the market for a language that had a type system, but not one that was too advanced, and which kept the scripting "feel" of very fast turnaround times. But not Java, because that was old and unhip, and all the interesting intellectual space like writing libs and giving conf talks was camped on already.
loudmax
As a system administrator, I vastly prefer to deploy Go programs over Java programs. Go programs are typically distributed as a single executable file with no reliance on external libraries. I can usually run `./program -h` and it tells me about all the flags.
Java programs rely on the JVM, of which there are many variants. Run time options are often split into multiple XML files -- one file for logging, another to control the number of threads and so on. Checking for the running process using `ps | grep` yields some long line that wraps the terminal window, or doesn't fit neatly into columns shown in `htop` or `btop`.
These complaints are mostly about conventions and idioms, not the languages themselves. I appreciate that the Java ecosystem is extremely powerful and flexible. It is possible to compile Java programs into standalone binaries, though I rarely see these in practice. Containers can mitigate the messiness, and that helps, up until the point when you need to debug some weird runtime issue.
I wouldn't argue that people should stop programming in Java, as there are places where it really is the best choice. For example deep legacy codebases, or where you need the power of the JVM for dynamic runtime performance optimizations.
There are a lot of places where Go is the best choice (eg. simple network services, CLI utilities), and in those cases, please, please deploy simple Go programs. Most of the time, developers will reach for whatever language they're most comfortable with.
What I like most about Go is how convenient it is, by default. This makes a big difference.
rsanheim
Java still had slow startup and warmup time circa 2005-2007, on the order of 1-3 seconds for hello world and quite a bit more for real apps. That is horrendous for anything CLI based.
And you left out classloader/classpath/JAR dependency hell, which was horrid circa late 90s/early 2000s...and I'm guessing was still a struggle when Go really started development. Especially at Google's scale.
Don't get me wrong, Java has come a long way and is a fine language and the JVM is fantastic. But the java of 2025 is not the same as mid-to-late 2000s.
mike_hearn
Maybe so, although I don't recall it being that bad.
But Go wasn't designed for CLI apps. It was designed for writing highly multi-threaded servers at Google, according to the designers, hence the focus on features like goroutines. And in that context startup time just doesn't matter. Startup time of servers at Google was (in that era) dominated by cluster scheduling, connecting to backends, loading reference data and so on. Nothing that a change in programming language would have fixed.
Google didn't use classloader based frameworks so that also wasn't relevant.
pjmlp
Not when using commercial JDKs, but naturally most HNers never worked at companies that paid for those.
For example, initial JIT caching experiments on OpenJDK came from JRockit.
rednafi
Java absolutely does not fill in the niche that Go targeted. Even without OO theology, JVM applications are heavy and memory intensive. Plus the startup time of the VM alone is a show stopper for the type of work I do. Also yes, Java isn’t hip and you couldn’t pay me to write it anymore.
pjmlp
The old OOP testament was written by Smalltalk and C++ apostles, then the school of Objective-C believers was founded, and some of the scholars helped with the new Java religion.
frollogaston
Golang having solid n:m green threading from day 1 was its big deal. Java has had no good way to do IO-heavy multitasking, leading to all those async/promise frameworks that jack up your code. I cannot even read the Java code we have at work. Java recently got virtual threads, but even if that fixes the problem, it'll be a while before things change over. Python had the same problem before asyncio. This isn't even a niche thing; your typical web backend needs cooperative multitasking.
I'm also not fond of any of the Golang syntax, especially not having exceptions. Or if you want explicit errors, fine, at least provide nice unwrap syntax like Rust does.
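For readers who haven't seen it, the "nice unwrap syntax" mentioned above is Rust's `?` operator; a minimal sketch:

```rust
use std::num::ParseIntError;

// Errors stay explicit in the signature, but propagating them
// costs one `?` per fallible call instead of an if-err-return
// block at every step.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.trim().parse()?; // returns early on parse failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("nope").is_err());
}
```

Go's equivalent is the explicit `if err != nil { return err }` block, which is what the comment is contrasting against.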
aaronblohowiak
By FAR my biggest complaint about Golang was null instead of Option. It could have been special-cased like their slices and maps, and would have been so so so much better than nil checks IMHO. Really, a big miss.
cogman10
Java 21 has n:m green threads, but with caveats. Java 24 removed a major source of the caveats.
k__
"it picked up its users mostly from the Python/Ruby/JS world rather than C/C++/Java"
And with the increasing performance of Bun, it seems that Go is about to get a whooping from JS.
(Which isn't really true, as most of the Bun perf comes from Zig, but they are targeting JS Devs.)
rednafi
Runtimes like Bun and Deno, or type systems like TypeScript, don't change the fact that it's still JS underneath — a crappily designed language that should've never left the world of throwaway frontend code.
None of these runtimes make JS anywhere even close to single-threaded Go perf, let alone multithreaded (goroutine) perf.
liampulles
As the story goes, a couple of Google developers designed Go while waiting for one of their C++ projects to compile.
zenlot
If we wanted fast compile times only, we'd be using Pascal. No need for Go. In fact, if there were ever a case for me to use Go, I'd rather go for Pascal or Delphi. But there isn't; it just doesn't fit anywhere.
rednafi
I understand the sentiment as I feel the same about Rust. I’d rather raw dog C++ than touch Rust. Doesn’t make sense and I could come up with some BS like you did and make my case anyway.
pjmlp
Yes, because of the way Google uses C++, and the designers had well-known positions against C++ anyway, coming from Plan 9/Inferno and Oberon backgrounds.
mark38848
What are obnoxious types? Types either represent the data correctly or not. I think you can force types to shut up the compiler in any language including Haskell, Idris, PureScript...
Mawr
I'd say you already get like 70% of the benefit of a type system with just the basic "you can't pass an int where string is expected". Being able to define your own types based on the basic ones, like "type Email string", so it's no longer possible to pass a "string" where "Email" is expected gets you to 80%. Add Result and Optional types (or arguably just sum types if you prefer) and you're at 95%. Anything more and you're pushing into diminishing returns.
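The `type Email string` example above is Go; the same idea in Rust is the newtype pattern (a sketch of mine, not from the comment):

```rust
// An Email wraps a String, but the two are distinct types, so a
// bare String can no longer be passed where an Email is expected.
struct Email(String);

fn send_welcome(to: &Email) -> String {
    format!("welcome mail queued for {}", to.0)
}

fn main() {
    let addr = Email("a@example.com".to_string());
    // send_welcome("a@example.com") would be a compile error:
    // a &str is not an &Email.
    assert_eq!(
        send_welcome(&addr),
        "welcome mail queued for a@example.com"
    );
}
```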
hgomersall
Well, it depends what you're doing. 95% is, like, just your opinion, man. The Rust type system allows, in many cases, APIs that you cannot use wrongly, or that are highly resistant to incorrect usage, but doing that requires careful thought. To be clear, such APIs are just as relevant internally to a project as externally if you want to design a system that is long-term maintainable and robust, and I would argue that's the point where the type system starts to get really useful (rather than hitting diminishing returns).
ratorx
This might work for the types you create, but what about all the code written in the language that expects the “proper” structure?
> Types either represent the data or not
This is definitely required, but is only really the first step. Where types get really useful is when you need to change them later on. The key aspects here are how easily you can change them, and how much the language tooling can help.
IshKebab
That's not true. There's a spectrum of typing complexity all the way from TCL everything-is-a-string to formal languages like Lean where the types can prove all sorts of things.
As you go further towards formal verification you get:
* More accurate / tighter types. For example instead of `u8` you might have `range(0, 7)` or even a type representing all odd numbers.
* Better compile time detection of bugs. (Ultimately you are formally verifying the program.)
* Worse type errors. Eventually you're getting errors that are pretty much "couldn't prove the types are correct; try again".
* More difficulty satisfying the type checker. There's a reason formal verification of software isn't very popular.
So it's definitely true that "more obnoxious types" exist, but Go is very far from the obnoxious region. Even something like Rust is basically fine. I think you can even go a little into dependent types before they really start getting obnoxious.
TL;DR, he's just lazy and doesn't really care about bugs.
throwawaymaths
> Types either represent the data correctly or not.
No. Two types can represent the same payload, but one might be a simple structure while the other could be three or twenty nested type-template abstractions deep, created by a proc macro, so you can't easily chase down how it was made.
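A contrived Rust sketch of that point (all names are invented): both types hold a single u64, but the second drags the compiler through extra generic layers:

```rust
use std::marker::PhantomData;

// Both types carry the same payload: one u64.
struct Plain {
    value: u64,
}

// Same data, buried under nested generic wrappers, the kind of shape a
// proc macro might emit. The compiler must instantiate and check every layer.
struct Tagged<T, Marker> {
    value: T,
    _marker: PhantomData<Marker>,
}
struct MetersTag;
type Wrapped = Tagged<Tagged<u64, MetersTag>, MetersTag>;

fn main() {
    let a = Plain { value: 42 };
    let b: Wrapped = Tagged {
        value: Tagged { value: 42, _marker: PhantomData },
        _marker: PhantomData,
    };
    // Identical payloads, very different amounts of type machinery.
    assert_eq!(a.value, b.value.value);
}
```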
galangalalgol
That is exactly what Go was meant for, and there is nothing better than picking the right tool for the job. The only footgun I have seen people run into is that parallelism with mutable shared state through channels can be subtly and exploitably wrong. I don't feel like most people use channels like that though? I use Rust because that isn't the job I have. I usually have to cram slow algorithms into slower hardware, and the problems are usually almost but not quite embarrassingly parallel.
bjackman
I think a lot of the materials that the Go folks put out in the early days encourage a very channel-heavy style of programming that leads to extremely bad places.
Nowadays the culture seems to have evolved a bit. I now go into high alert mode if I see a channel cross a function boundary or a goroutine that wasn't created via errgroup or similar.
People also seem to have chilled out about the "share by communicating" thing. It's usually better to just use a mutex and I think people recognise that now.
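As a Rust illustration of the same point (a sketch, not from the thread): a mutex around shared state is often the simpler, easier-to-audit tool:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each bump a shared counter `per_thread`
// times; the mutex makes the shared mutation safe without any channels.
fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // 4000
}
```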
rednafi
This is true. I have been writing Go for years and still think channels are a bit too low-level. They probably would've benefited from another layer of abstraction.
silverwind
You can have the best of both worlds: a fast but sloppy compiler, and slow but thorough checkers/linters. I think it's ideal that way, but Rust seems to have needlessly combined both actions into one.
danielscrubs
One day I would like to just change Pascal's syntax a bit to be Pythonic and just blow the socks off junior and Go developers.
the_sleaze_
That's what they did to Erlang with Elixir and now there are a lot of people saying it's the Greatest Of All Time.
I'd be interested in this project if you do decide to pursue it.
rednafi
Sounds like the guy who wanted to write curl in a weekend. /s
danielscrubs
You’d be surprised. Here is the very short EBNF of Oberon, Pascal's successor: https://oberon07.com/o7EBNF.xhtml
Something to look into in retirement.
frollogaston
Same but with Python and NodeJS cause I'm doing less performance-critical stuff. Dealing with type safety and slow builds would cost way more than it's worth.
rednafi
Python and NodeJS bring a whole lot of other problems. But yeah at a smaller scale these languages are faster to work with.
At the same time, I have worked at places where people had to rewrite major parts of their backend in other languages because Python/Node was no longer sufficient.
frollogaston
I'd have to see it, cause rewrites happen all the time. We had a system written in C++, then rewritten in Golang because C++ was supposedly not good enough, then rewritten back into C++.
dminik
One aspect that I find interesting is that Rust projects often seem deceivingly small.
First, dependencies don't translate easily to the perceived size. In C++ dependencies on large projects are often vendored (or even not used at all). And so it is easy to look at your ~400000 line codebase and go "it's slow, but there's a lot of code here after all".
Second (and a much worse problem) are macros. They actually hit the same issue. A macro that expands to 10s or 100s of lines can very quickly take your 10000 line project and turn it into a million line behemoth.
Third are generics. They also suffer the same problem. Every generic instantiation is eating your CPU.
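A minimal sketch of that instantiation cost (the example is mine, not from the thread): one generic definition, compiled once per concrete type:

```rust
// Monomorphization: each concrete T gets its own compiled copy of `largest`.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // Three instantiations: largest::<i32>, largest::<u64>, largest::<f64>.
    // The backend compiles three separate functions from one source definition.
    println!("{}", largest(&[1i32, 5, 3]));
    println!("{}", largest(&[10u64, 2]));
    println!("{}", largest(&[1.5f64, 0.5]));
}
```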
But I do want to offer a bit of an excuse for rust. Because these are great features. They turn what would have taken 100000 lines of C or 25000 lines of C++ to a few 1000s lines of Rust.
However, there is definitely overuse here, making the ecosystem seem slow. For instance, at work we use async-graphql. The library itself is great, but it's an absolute proc-macro hog. There have been issues open in the repository for years about the performance. You can literally feel the compiler getting slower for each type you add.
jvanderbot
> You can literally feel the compiler getting slower for each type you add.
YSK: The compiler performance is IIRC exponential in the "depth" of types. And oh boy does GraphQL love their nested types.
tukantje
Unironically why typescript is a perfect fit for graphql
subarctic
Actually exponential or quadratic?
jvanderbot
The Rust devs said exponential for one step. Whether or not that step is required on each compilation, I don't recall. It was a GitHub issue on rustc from a while back, IIRC. Unfortunately I'm not at my computer to do a search for you.
undefined
epage
> Second (and a much worse problem) are macros. They actually hit the same issue. A macro that expands to 10s or 100s of lines can very quickly take your 10000 line project and turn it into a million line behemoth.
Recently, support was added to help people analyze this.
See https://nnethercote.github.io/2025/06/26/how-much-code-does-...
1vuio0pswjnm7
"They turn what would have taken 100000 lines of C or 25000 lines of C++ to a few 1000s lines of Rust."
How do they do with smaller programs that would have taken far less than 100,000 lines of C?
For whatever reasons, many authors choose to rewrite small C utilities, or create similar ones, using Rust.^1 Perhaps there are countless examples of 100,000-line C programs rewritten in Rust, but the ones I see continually submitted to HN, GitHub and elsewhere are much smaller.
How does Rust compilation time compare with C for smaller programs?
NB. I am not asking about program size. I am asking about compilation speed.
(Also curious about resource usage, e.g. storage, memory. For example, last time I measured, Rust compiler toolchain is about 2x as large as the GCC toolchain I am using.)
1. Some of these programs due to their size seem unlikely to have undetected memory-safety issues in any language. Their size makes them relatively easy to audit. Unlike a 100,000 line C program.
lor_louis
I write a lot of C and Rust, and my personal experience is that for smaller C programs, Rust tends to have a slightly higher line count, but it's mostly due to forcing the user to handle every error possible.
A truly robust C program will generally be much larger than the equivalent Rust program.
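A hedged sketch of that forced error handling (the path and function name are invented): each failure mode must be handled or explicitly propagated, where the C equivalent could silently ignore a return code:

```rust
use std::fs;
use std::num::ParseIntError;

// Both failure paths (I/O and parse) must be dealt with; the `?` operator
// propagates them, but there is no way to silently drop an error.
fn read_port(path: &str) -> Result<u16, String> {
    let text = fs::read_to_string(path).map_err(|e| e.to_string())?;
    let port: u16 = text.trim().parse().map_err(|e: ParseIntError| e.to_string())?;
    Ok(port)
}

fn main() {
    match read_port("/nonexistent/port.txt") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("error: {}", e),
    }
}
```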
dminik
Well, unfortunately I don't have an exact answer for you, but I do have the next best thing: speculation.
I had a quick look and found this article that compares a partial port of a C++ codebase (at around 17kloc). https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
The resulting Rust code apparently ended up slightly larger. This isn't entirely surprising to me despite the 25:1 figure from above. Certain things are much more macro-able than others (like de/serialization). Note that C++ is actually well positioned to level the field here with C++26 reflection.
Despite the slightly larger size, the compile times seem roughly equal, with Rust scaling worse as the size increases. As a side note, I did find this part relevant to some of my thoughts from above:
> For example, proc macros would let me replace three different code generators
Now, I know that C isn't C++. But, I think that when restricting yourself to a mostly C-like subset (no proc-macros, mostly no generics outside Option/Result) the result would likely mirror the one above. Depending on the domain and work needed, either language could be much shorter or longer. For example, anything involving text would likely be much shorter in rust as the stdlib has UTF-8 handling built-in. On the other hand, writing custom data structures would likely favor C.
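A small sketch of the UTF-8 point, using only the stdlib (no external crate):

```rust
// UTF-8 handling ships in the stdlib: &str is guaranteed valid UTF-8,
// and iteration over chars vs bytes is built in.
fn main() {
    let s = "héllo wörld";
    println!("{} chars, {} bytes", s.chars().count(), s.len()); // 11 chars, 13 bytes
    // Case conversion is Unicode-aware out of the box:
    println!("{}", s.to_uppercase());
}
```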
I am interested to see if TRACTOR could help here. Being able to port C code to Rust and then observe the compile times would be pretty interesting. https://www.darpa.mil/research/programs/translating-all-c-to...
ahartmetz
That person seems to be confused. Installing a single, statically linked binary is clearly simpler than managing a container?!
jerf
Also strikes me as not fully understanding what exactly docker is doing. In reference to building everything in a docker image:
"Unfortunately, this will rebuild everything from scratch whenever there's any change."
In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)
It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.
scuff3d
Thank you! I got a couple minutes in and was confused as hell. There is no reason to do the builds in the container.
Even at work, I have a few projects where we had to build a Java uber-jar (all the dependencies bundled into one big jar), and when we need it containerized we just copy the jar in.
I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.
mike_hearn
It's pretty clear that this whole project was god-tier level procrastination so I wouldn't worry too much about the details. The original stated problem could have been solved with a 5-line shell script.
linkage
Half the point of containerization is to have reproducible builds. You want a build environment that you can trust will be identical 100% of the time. Your host machine is not that. If you run `pacman -Syu`, you no longer have the same build environment as you did earlier.
If you now copy your binary to the container and it implicitly expects there to be a shared library in /usr/lib or wherever, it could blow up at runtime because of a library version mismatch.
missingdays
Nobody is suggesting to copy the binary to the Docker container.
When developing locally, use `cargo test` in your cli. When deploying to the server, build the Docker image on CI. If it takes 5 minutes to build it, so be it.
vorgol
Exactly. I immediately thought of the grug brain dev when I read that.
hu3
From the article, the goal was not to simplify, but rather to modernize:
> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.
Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.
adastra22
So put the binary in the container. Why does it have to be compiled within the container?
hu3
That is what they are doing. It's a 2 stage Dockerfile.
First stage compiles the code. This is good for isolation and reproducibility.
Second stage is a lightweight container to run the compiled binary.
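A minimal sketch of such a two-stage Dockerfile (the binary name `mysite` and the base image tags are assumptions, not from the article):

```dockerfile
# Stage 1: compile in an isolated, reproducible build environment.
FROM rust:1 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: lightweight runtime image containing only the binary.
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/mysite /usr/local/bin/mysite
CMD ["mysite"]
```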
Why is the author being attacked (by multiple comments) for not making things simpler, when that was never claimed as the goal? They are modernizing it.
Containers are good practice for CI/CD anyway.
dwattttt
Mightily resisting the urge to be flippant, but all of those benefits were achieved before Docker.
Docker is a (the, in some areas) modern way to do it, but far from the only way.
a3w
Increased security compared to bare hardware, but lower than VMs. Also lower than Jails and rkt (Rocket), which seems to be dead.
eeZah7Ux
> process isolation, increased security
no, that's sandboxing.
kenoath69
Where is Cranelift mentioned
My 2c on this: I nearly ditched Rust for game development due to the compile times; digging in, it turned out that LLVM is very slow regardless of opt level. Indeed, it's what the Jai devs have been saying.
So Cranelift might be relevant for OP, I will shill it endlessly, took my game from 16 seconds to 4 seconds. Incredible work Cranelift team.
ptspts
Isn't `gcc -O0` (for both C and C++) even slower than `clang -O0`?
BreakfastB0b
I participated in the most recent Bevy game jam, and the community has a new tool that came out of Dioxus called subsecond which, as the name suggests, provides sub-second hot reloading of systems. It made prototyping very pleasant, especially when iterating on UI.
https://github.com/TheBevyFlock/bevy_simple_subsecond_system
jiehong
I think that’s what zig team is also doing to allow very fast build times: remove LLVM.
norman784
Yes, Zig author commented[0] that a while ago
norman784
Nice, when I checked a while ago there was no support for macOS aarch64, but it seems that now it is supported.
lll-o-lll
Wait. You were going to ditch rust because of 16 second build times?
kenoath69
Pulling out Instagram 100 times in every workday, yes, it's a total disaster
johnisgood
It may also contribute to smoking. :D Or (over-)eating... or whatever your vice is.
Mawr
"Wait. You were going to ditch subversion for git because of 16 second branch merge times?"
Performance matters.
metaltyphoon
Over time that adds up when your coding consists of REPL like workflow.
sarchertech
16 seconds is infuriating for something that needs to be manually tested like does this jump feel too floaty.
But it’s also probable that 16 seconds was fairly early in development and it would get much worse from there.
MangoToupe
I don't really consider it to be slow at all. It seems about as performant as any other language of this complexity, and it's far faster than the 15-minute C++ and Scala build times I'd place in the same category.
mountainriver
I also don’t understand this, the rust compiler hardly bothers me at all when I’m working. I feel like this is due to how bad it was early on and people just sticking to that narrative
BanterTrouble
The memory usage is quite large compared to C/C++ when compiling. I use Virtual Machines for Demos on my YouTube Channel and compiling something large in Rust requires 8GB+.
In C/C++ I don't even have to worry about it.
windward
I can't say the same. Telling people to use `-j$(nproc)` in lieu of `-j` to avoid the wrath of the OOM-killer is a rite of passage
gpm
I can't agree, I've had C/C++ builds of well known open source projects try to use >100GB of memory...
BanterTrouble
Maybe something else is going on then. I've done builds of some large open source projects, and most of the time they were maxing the cores (I was building with -j32), but memory usage was fine.
Out of interest what were they?
randomNumber7
Since C++ templates are Turing-complete, it is pointless to complain about the compile times without considering the actual code :)
adastra22
As a former C++ developer, claims that rust compilation is slow leave me scratching my head.
eikenberry
Which is one of the reasons why Rust is considered to be targeting C++'s developers. C++ devs already have the Stockholm syndrome needed to tolerate the tooling.
MyOutfitIsVague
Rust's compilation is slow, but the tooling is just about the best that any programming language has.
GuB-42
How good is the debugger? "edit and continue"? Hot reload? Full IDE?
I don't know enough Rust, but I find these aspects are seriously lacking in C++ on Linux, and it is one of the few things I think Windows has it better for developers. Is Rust better?
adastra22
Slow compared to what? I’m still scratching my head at this. My cargo builds are insanely fast, never taking more than a minute or two even on large projects. The only ahead-of-time compiled language I’ve used with faster compilation speed is Go, and that is a language specifically designed around (and arguably crippled by) the requirement for fast compilation. Rust is comparable to C compilation, and definitely faster than C++, Haskell, Java, Fortran, Algol, and Common Lisp.
galangalalgol
Also, modern C++ with value semantics is more functional than many other languages people might come to Rust from, which keeps the borrow checker from being as annoying. If people are used to making webs of stateful classes with references to each other, the borrow checker is horrific, but that is because that design pattern is horrific if you multithread it.
zozbot234
> Stockholm syndrome
A.k.a. "Remember the Vasa!" https://news.ycombinator.com/item?id=17172057
MobiusHorizons
Things can still be slow in absolute terms without being as slow as C++. The issues with compiling C++ are incredibly well understood and documented. It is one of the worst languages on earth for compile times. Rust doesn’t share those language level issues, so the expectations are understandably higher.
int_19h
But it does share some of those issues. Specifically, while Rust generics aren't as unstructured as C++ templates, the main burden in C++ is actually compiling all those tiny instantiations, and Rust monomorphization has exactly the same problem, responsible for the bulk of its compile times.
const_cast
Rust shares pretty much every language-level issue C++ has with compile times, no? Monomorphization explosion, turing-complete compile time macros, complex type system.
steveklabnik
There's a lot of overlap, but not that simple. Unless you also discount C issues that C++ inherits. Even then, there's subtleties and differences between the two that matter.
pjmlp
And easily solved with incremental compilers, incremental linkers, external templates and nowadays modules.
shadowgovt
I thorougly enjoy all the work on encapsulation and reducing the steps of compilation to compile, then link that C does... Only to have C++ come along and undo almost all of it through the simple expedient of requiring templates for everything.
Oops, changed one template in one header. And that impacts.... 98% of my code.
pjmlp
Not sure about you, but traditionally my C++ builds have always been faster than the out-of-the-box experience with Rust.
Thanks to pre-compiled headers, incremental compiler, incremental linker, and using binary libraries for all dependencies. Nowadays modules as well.
Rust compiles from scratch, some crates might get compiled multiple times due to conflicting configurations in the dependency graph, and binary libraries require additional setup for something like sccache.
oreally
Classic case of:
New features: yes
Talking to users and fixing actual problems: lolno, I CBF
namibj
Incremental compilation good. If you want, freeze the initial incremental cache after a single fresh build to use for building/deploying updates, to mitigate the risk of intermediate states gradually corrupting the cache.
Works great with Docker: upon a new compiler version or major website update, rebuild the layer with the incremental cache; otherwise just run from the snapshot, build the newest website update version/state, and upload/deploy the resulting static binary. Just set it up so that mere code changes won't force rebuilding the layer that caches/materializes the fresh clean build's incremental compilation cache.
maccard
The intermediates for my project are 150GB+ alone. Last time I worked with docker images that large we had massive massive problems.
undefined
AndyKelley
My homepage takes 73ms to rebuild: 17ms to recompile the static site generator, then 56ms to run it.
andy@bark ~/d/andrewkelley.me (master)> zig build --watch -fincremental
Build Summary: 3/3 steps succeeded
install success
└─ run exe compile success 57ms MaxRSS:3M
└─ compile exe compile Debug native success 331ms
Build Summary: 3/3 steps succeeded
install success
└─ run exe compile success 56ms MaxRSS:3M
└─ compile exe compile Debug native success 17ms
watching 75 directories, 1 processes
whoisyc
Just like every submission about C/C++ gets a comment about how great Rust is, every submission about Rust gets a comment about how great Zig is. Like a clockwork.
Edit: apparently I am replying to the main Zig author? Language evangelism is by far the worst part of Rust and has likely stirred up more anti Rust sentiment than “converting” people to Rust. If you truly care for your language you should use whatever leverage you have to steer your community away from evangelism, not embrace it.
frollogaston
It's unlikely that anyone was going to use Rust anyway but decided not to because they got too annoyed hearing about it.
AlienRobot
If you can't be proud about a programming language you made what is even the point?
qualeed
Neat, I guess?
This comment would be a lot better if it engaged with the posted article, or really had any sort of insight beyond a single compile time metric. What do you want me to take away from your comment? Zig good and Rust bad?
kristoff_it
I think the most relevant thing is that building a simple website can (and should) take milliseconds, not minutes, and that -- quoting from the post:
> A brief note: 50 seconds is fine, actually!
50 seconds should actually not be considered fine.
qualeed
As you've just demonstrated, that point can be made without even mentioning Zig, let alone copy/pasting some compile time stuff with no other comment or context. Which is why I thought (well, hoped) there might be something more to it than just a dunk attempt.
Now we get all of this off-topic discussion about Zig. Which I guess is good for you Zig folk... But it's pretty off-putting for me.
whoisyc's comment is extremely on point. As the VP of community, I would really encourage thinking about what they said.
nicoburns
My non-static Rust website (includes an actual webserver as well as a react-like framework for templating) takes 1.25s to do an incremental recompile with "cargo watch" (which is an external watcher that just kills the process and reruns "cargo run").
And it can be considerably faster if you use something like subsecond[0] (which does incremental linking and hotpatches the running binary). It's not quite as fast as Zig, but it's close.
However, if that 331ms build above is a clean (uncached) build then that's a lot faster than a clean build of my website which takes ~12s.
AndyKelley
The 331ms time is mostly uncached. In this case the build script was already cached (must be re-done if the build script is edited), and compiler_rt was already cached (must be done exactly once per target; almost never rebuilt).
nicoburns
Impressive!
taylorallred
@AndyKelley I'm super curious what you think the main factors are that make languages like Zig super fast at compiling, while languages like Rust and Swift are quite slow. What's the key difference?
steveklabnik
I'm not Andrew, but Rust has made several language design decisions that make compiler performance difficult. Some aspects of compiler speed come down to that.
One major difference is the way each project considers compiler performance:
The Rust team has always cared to some degree about this. But, from my recollection of many RFCs, "how does this impact compiler performance" wasn't a first-class concern. And that also doesn't really speak to a lot of the features that were basically implemented before the RFC system existed. So while it's important, it's secondary to other things. And so while a bunch of hard-working people have put in a ton of work to improve performance, they also run up against these more fundamental limitations at the limit.
Andrew has pretty clearly made compiler performance a first-class concern, and that's affected language design decisions. Naturally this leads to a very performant compiler.
rtpg
> Rust has made several language design decisions that make compiler performance difficult
Do you have a list off the top of your head/do you know of a decent list? I've now read many "compiler slow" thoughtpieces by many people and I have yet to see someone point at a specific feature and say "this is just intrinsically harder".
I believe that it likely exists, but would be good to know what feature to get mad at! Half joking of course
AndyKelley
Basically, not depending on LLVM or LLD. The above is only possible because we invested years into making our own x86_64 backend and our own linker. You can see all the people ridiculing this decision 2 years ago https://news.ycombinator.com/item?id=36529456
unclad5968
LLVM isn't a good scapegoat. A C application equivalent in size to a Rust or C++ application will compile an order of magnitude quicker, and they all use LLVM. I'm not a compiler expert, but it doesn't seem right to me that the only possible path to quick compilation for Zig was a custom backend.
zozbot234
The Rust folks have cranelift and wild BTW. There are alternatives to LLVM and LLD, even though they might not be as obvious to most users.
VeejayRampay
what is even the point of quoting reactions from two years ago?
this is a terrible look for your whole community
coolsunglasses
I'm also curious because I've (recently) compiled more or less identical programs in Zig and Rust and they took the same amount of time to compile. I'm guessing people are just making Zig programs with less code and fewer dependencies and not really comparing apples to apples.
kristoff_it
Zig is starting to migrate to custom backends for debug builds (instead of using LLVM) plus incremental compilation.
All Zig code is built in a single compilation unit and everything is compiled from scratch every time you change something, including all dependencies and all the parts of the stdlib that you use in your project.
So you've been comparing Zig rebuilds that do all the work every time with Rust rebuilds that cache all dependencies.
Once incremental is fully released you will see instant rebuilds.
AlienRobot
One difference that Zig has is that it doesn't have multiline comments or multiline strings, meaning that the parser can parse any line correctly without context. I assume this makes parallelization trivial.
There is no operator overloading, like C, so A + B can only mean one thing.
You can't redeclare a variable, so foo can only map to one thing.
The list goes on.
Basically it was designed to compile faster, and that means many issues on Github have been getting rejected in order to keep it that way. It's full of compromises.
ww520
Nice. Didn't realize zig build has --watch and -fincremental added. I was mostly using "watchexec -e zig zig build" for recompile on file changes.
Graziano_M
New to 0.14.0!
vlovich123
Zig isn’t memory safe though right?
kristoff_it
How confident are you that memory safety (or lack thereof) is a significant variable in how fast a compiler is?
pixelpoet
It isn't a lot of things, but I would argue that its exceptionally (heh) good exception handling model / philosophy (making it good, required, and performant) is more important than memory safety, especially when a lot of performance-oriented / bit-banging Rust code just gets shoved into Unsafe blocks anyway. Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge
What I'm more interested to know is what the runtime performance tradeoff is like now; one really has to assume that it's slower than LLVM-generated code, otherwise that monumental achievement seems to have somehow been eclipsed in very short time, with much shorter compile times to boot.
jorvi
> especially when a lot of performance-oriented / bit-banging Rust code just gets shoved into Unsafe blocks anyway. Even C/C++ can be made memory safe, cf.
Your first claim is unverifiable and the second one is just so, so wrong. Even big projects with very talented, well-paid C or C++ devs eventually end up with CVEs, ~80% of them memory-related. Humans are just not capable of 0% error rate in their code.
If Zig somehow got more popular than C/C++, we would still be stuck in the same CVE bog because of memory unsafety. No thank you.
vlovich123
> Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge
> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)
With significant performance and memory overhead. That just isn't the same ballpark that Rust is playing in although hugely important if you want to bring forward performance insensitive C code into a more secure execution environment.
ummonk
Zig is less memory safe than Rust, but more than C/C++. Neither Zig nor Rust is fundamentally memory safe.
Ar-Curunir
What? Zig is definitively not memory-safe, while safe Rust is, by definition, memory-safe. Unsafe Rust is not memory-safe, but you generally don't need to have a lot of it around.
undefined
echelon
Zig is a small and simple language. It doesn't need a complicated compiler.
Rust is a large and robust language meant for serious systems programming. The scope of problems Rust addresses is large, and Rust seeks to be deployed to very large scale software problems.
These two are not the same and do not merit an apples to apples comparison.
edit: I made some changes to my phrasing. I described Zig as a "toy" language, which wasn't the right wording.
These languages are at different stages of maturity, have different levels of complexity, and have different customers. They shouldn't be measured against each other so superficially.
steveklabnik
Come on now. This isn't acceptable behavior.
(EDIT: The parent has since edited this comment to contain more than just "zig bad rust good", but I still think the combative-ness and insulting tone at the time I made this comment isn't cool.)
echelon
> but I still think the combative-ness and insulting tone at the time I made this comment isn't cool
Respectfully, the parent only offers up a Zig compile time metric. That's it. That's the entire comment.
This HN post about Rust is now being dominated by a cheap shot Zig one liner humblebrag from the lead author of Zig.
I think this thread needs a little more nuance.
ummonk
This is an amusing argument to make in favor of Rust, since it's exactly the kind of dismissive statement that Ada proponents make about other languages including Rust.
Scuds
I have a mac m4 pro and it's 2 minutes to compile all of Deno, which is my go-to for bigass rust projects.
```
> cargo clean && time cargo build
cargo build 713.80s user 91.57s system 706% cpu 1:53.97 total
> cargo clean && time cargo build --release
cargo build --release 1619.53s user 142.65s system 354% cpu 8:17.05 total
```
this is without incremental compilation. And it's not like you have to babysit a release build if you have a CI/CD system
hu3
Interesting, M1 Max took 6 minutes and M1 Air took 11 minutes according to this article:
https://corrode.dev/blog/tips-for-faster-rust-compile-times/...
Scuds
Oh yes, Apple hardware continues to improve, and the M4 Pro is still the single-threaded champion of anything under 300 W.
FWIW, the last stage, where the binary is produced, takes the longest and is single-threaded; that's the largest difference between release and debug.
hu3
Sounds suspicious to me.
At a quick glance, most benchmarks show only 50% to 60% single threaded improvement between M1 and M4.
That wouldn't explain compilation times going from 6 minutes to 2 minutes.
edude03
First time someone I know in real life has made it to the HN front page (hey sharnoff, congrats) anyway -
I think this post (accidentally?) conflates two different sources of slowness:
1) Building in Docker
2) The compiler being "slow"
They mention they could use bind mounts, yet want a clean build environment - personally, I think that's misguided. Rust with incremental builds is actually pretty fast, and the time you lose fighting Docker's caching would likely be made up in build times - since you'd generally build and deploy way more often than you'd fight the cache (and in that case you'd delete the cache and build from scratch anyway).
So - for developers who build rust containers, I highly recommend either using cache mounts or building outside the container and adding just the binary to the image.
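A minimal sketch of the cache-mount suggestion (BuildKit cache mounts plus a slim runtime image); the crate name `myapp`, paths, and base images here are illustrative, not from the article:

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.79 AS builder
WORKDIR /app
COPY . .
# Persist the cargo registry and target dir across builds via cache mounts.
# Because cache mounts don't end up in image layers, copy the binary out
# within the same RUN step.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo build --release && cp target/release/myapp /myapp

FROM debian:bookworm-slim
COPY --from=builder /myapp /usr/local/bin/myapp
CMD ["myapp"]
```

This keeps the "clean environment" property (the image itself contains only the binary) while letting incremental state survive between builds.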
2) The compiler being slow - having used OCaml, Go, and Scala for comparison, the Rust compiler is slower than Go's and OCaml's, sure, but for non-interactive (i.e., non-REPL-like) workflows this tends not to matter in my experience - realistically, an incremental build in dev mode takes seconds, and once the code is working you push to CI, at which point you can often accept the (worst-case?) scenario of 20 minutes to build your container, since you're free to go do other things.
So while I appreciate the deep research and great explanations, I don't think the Rust compiler is actually slow, just slower than what people might be used to coming from TypeScript or Go, for example.
ozgrakkurt
The Rust compiler is very, very fast, but the language has too many features.
The slowness comes from everyone writing code with generics and macros in Java Enterprise style, in order to show they are smart with Rust.
This is really sad to see, but most libraries abuse codegen features really hard.
You have to write a lot of things manually if you want fast compilation in Rust.
Compilation speed of code just doesn’t seem to be a priority in general with the community.
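One concrete instance of "writing things manually": trading generics for trait objects cuts monomorphization, since a generic function is re-codegenned for every concrete type it's used with, while a `dyn` version compiles once. A minimal sketch (function names are made up):

```rust
use std::fmt::Display;

// Generic version: monomorphized once per concrete element type,
// so every new T used at a call site generates fresh machine code.
fn join_generic<T: Display>(items: &[T]) -> String {
    items
        .iter()
        .map(|i| i.to_string())
        .collect::<Vec<_>>()
        .join(",")
}

// Trait-object version: compiled exactly once; callers pay a
// vtable indirection at runtime instead of codegen at build time.
fn join_dyn(items: &[&dyn Display]) -> String {
    items
        .iter()
        .map(|i| i.to_string())
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    assert_eq!(join_generic(&[1, 2, 3]), "1,2,3");
    assert_eq!(join_dyn(&[&1u32, &"two", &3.5]), "1,two,3.5");
}
```

Libraries that expose everything as generics push this codegen cost onto every downstream crate; a `dyn`-based API keeps it paid once.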
aquariusDue
Yeah, for application code, in my experience, the more I stick to the dumb way of doing things, the less I fight the borrow checker, and I run into fewer trait issues.
Refactoring seems to take about the same time too, so no loss on that front. After all is said and done, I'm just left with various logic bugs to fix, which is par for the course (at least for me), and a sense of wondering if I actually did everything properly.
I suppose maybe two years from now we'll have people suggesting avoiding generics and tempering macro usage. These days most people have heard the advice about not stressing over cloning and unwrapping (though expect is much better imo) on the first pass, more or less.
Something something shiny tool syndrome?
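The "don't stress over cloning, and prefer expect" advice above, as a minimal sketch (the data and names are made up):

```rust
fn main() {
    let names = vec!["ada".to_string(), "grace".to_string()];

    // Cloning up front sidesteps a borrow of `names` that would
    // otherwise conflict with consuming it below; optimize later
    // only if profiling says it matters.
    let first = names
        .first()
        .cloned()
        .expect("names is non-empty"); // expect > unwrap: the panic says why

    let mut upper: Vec<String> = names.into_iter().map(|n| n.to_uppercase()).collect();
    upper.push(first);
    assert_eq!(upper, ["ADA", "GRACE", "ada"]);
}
```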
skeezyboy
>Compilation speed of code just doesn’t seem to be a priority in general with the community.
They have only one priority: memory safety (from a certain class of memory bugs).
So there's this guy you may have heard of called Ryan Fleury, who makes the RAD debugger for Epic. The whole thing is 278k lines of C built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent Windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case study that compilation can be incredibly fast, and it makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.