The borrow checker is usually a blip on the overall graph of compilation time.

The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.

reply
While the borrow checker is one big difference, it's certainly not the only thing the rust compiler offers on top of C that takes more work.

Stuff like inserted bounds checking puts more work on the optimization passes and codegen backend, as they simply have to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high on the profiled times either.
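To make the bounds-check point concrete, here's a minimal sketch (hypothetical function names) of the two shapes of loop involved. The indexed version makes rustc emit a compare-and-branch to a panic path for every access, which LLVM then has to prove redundant and delete; the iterator version never emits the check in the first place.

```rust
// Indexing with `[]` makes rustc emit a bounds check for each access;
// LLVM later has to prove the index is in range to remove it again.
fn sum_indexed(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // bounds check emitted here, later optimized out
    }
    total
}

// The iterator form produces no bounds check at all, so there is
// simply less IR for the backend to chew through.
fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3, 4];
    println!("{} {}", sum_indexed(&v), sum_iter(&v)); // prints "10 10"
}
```

Both compile to the same machine code in a release build; the difference is purely in how much work the optimizer does to get there.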

Generally it provides stricter checks that are normally punted to a linter tool in the c/c++ world - and nobody has accused clang-tidy of being fast :P

reply
It truly is not about bounds checks. Index lookups are rare in practical Rust code, and the amount of code generated from them is minuscule.

But it _is_ about the sheer volume of stuff passed to LLVM, as you say, which comes from a couple of places, mostly related to monomorphization (generics), but also many calls to tiny inlined functions. Incidentally, this is also what makes many "modern" C++ projects slow to compile.
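As a rough illustration of the monomorphization point (a made-up example, not from the article): every concrete type a generic function is used with gets its own copy of the function body in the IR handed to LLVM.

```rust
// One source-level function, but each instantiation below produces a
// separate monomorphized body for LLVM to optimize and codegen.
fn describe<T: std::fmt::Debug>(x: T) -> String {
    format!("{:?}", x)
}

fn main() {
    // Three instantiations: describe::<i32>, describe::<&str>, and
    // describe::<Vec<i32>> -- three bodies in the emitted IR.
    println!("{}", describe(1_i32));
    println!("{}", describe("hi"));
    println!("{}", describe(vec![1, 2]));
}
```

Multiply that by every generic call chain in a large dependency tree and the volume of IR adds up quickly, independent of any safety checks.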

In my experience, similarly sized Rust and C++ projects seem to see similar compilation times. Sometimes C++ wins due to better parallelization (translation units in Rust are crates, not source files).

reply
These languages do more at compile time, yes. However, I learned from Ryan's discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner inference, generics, etc.) but I have my doubts that those things cause the most slowdown.
reply
That’s not a good example. Foo(int) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn’t letting the compiler cut corners.
reply
If you'd like the rust compiler to operate quickly:

* Make no nested types - these slow compile times a lot

* Include no crates, or ones that emphasize compiler speed

C is still v. fast though. That's why I love it (and Rust).

reply
>Make no nested types

I wouldn't like it that much

reply
This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly, there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.
reply
Is it possible to generate less junk? It sounds like the compiler developers took some shortcuts that could be improved over time.
reply
Well, zero-cost abstractions are still abstractions. It’s not junk per se, but things that will be optimized out if the IR has enough information to safely do so - so basically lots of extra metadata to actually prove to LLVM that these things are safe.
reply
You can address the junk problem manually by having generic functions delegate as much of their work as possible to non-generic or "less" generic functions, where a "less" generic function is one that depends only on a known subset of type properties, such as size or alignment. Delegating this way can help the compiler generate fewer redundant copies of your code, even if it can't avoid monomorphization altogether.
reply
Isn't something like this blocked on the lack of specialisation?
reply
I believe the specific advice they're referring to has been stable for a while. You take your generic function & split it into a thin generic wrapper, and a non-generic worker.

As an example, say your function takes anything that can be turned into a String. You'd write a generic wrapper that does the ToString step, then change the existing function to just take a String. That way when your function is called, only the thin outer function is monomorphised, and the bulk of the work is a single implementation.

It's not _that_ commonly known, as it only becomes a problem for a library that becomes popular.

reply
To illustrate:

  fn foo<S: Into<String>>(s: S) {
      fn inner(s: String) {
          // bulk of the work lives here, compiled only once
      }
      inner(s.into())
  }
reply
Probably, but it's the kind of thing that needs a lot of fairly significant overhauls in the compiler architecture to really move the needle on, as far as I understand.
reply