Stuff like inserted bounds checking puts more work on the optimization passes and the codegen backend, which simply have to deal with more instructions. That in turn means more symbols and larger sections in the input to the linker, slowing that down too. Even if the frontend "proves" a check is unnecessary, that proof isn't free. Many of those features come from the "safety" goals of the language. I doubt the syntax itself makes much of a difference, since the parser isn't normally high in the profiled times either.
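To make that concrete, here's a rough sketch (my own toy example, not from the posts above) of where those extra instructions come from in Rust:

```rust
// Hypothetical example: indexed access emits a bounds check (and a panic path)
// for every element; it's the optimizer's job to prove the check redundant and
// strip it, and that analysis costs compile time even when it succeeds.
fn sum_indexed(values: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..values.len() {
        total += values[i]; // bounds check generated here, hopefully optimized away
    }
    total
}

// The iterator form hands LLVM less to clean up, since no per-element check is
// emitted in the first place.
fn sum_iter(values: &[u64]) -> u64 {
    values.iter().sum()
}

fn main() {
    let v = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&v), sum_iter(&v));
}
```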
Generally it provides stricter checks that are normally punted to a linter tool in the C/C++ world - and nobody has accused clang-tidy of being fast :P
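As one illustration (my example, assuming the checks meant here are things like lifetime/dangling-reference analysis): rustc rejects this outright, whereas the C++ equivalent is usually left to a warning or a clang-tidy / static-analyzer check.

```rust
// Rejected by the compiler itself, no separate lint pass needed:
//
//     fn dangling() -> &'static i32 {
//         let x = 42;
//         &x // error: cannot return reference to local variable `x`
//     }
//
// The accepted version just returns the value instead of a reference to it.
fn not_dangling() -> i32 {
    let x = 42;
    x
}

fn main() {
    println!("{}", not_dangling());
}
```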
But it _is_ about the sheer volume of stuff passed to LLVM, as you say, and that volume comes from a couple of places: mostly monomorphization (generics), but also the many calls to tiny functions that need to be inlined. Incidentally, this is also what makes many "modern" C++ projects slow to compile.
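For anyone who hasn't looked at what monomorphization actually expands to, a rough sketch (again my own toy example): each concrete type below gets its own copy of the function in the IR handed to LLVM, much like heavy template use in C++.

```rust
// One generic function in the source...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    best
}

fn main() {
    // ...but three separate instantiations (i32, f64, u8) in the generated code,
    // each a distinct symbol and a distinct pile of instructions to optimize.
    println!("{}", largest(&[1_i32, 5, 3]));
    println!("{}", largest(&[1.5_f64, 0.2, 9.9]));
    println!("{}", largest(&[b'a', b'z', b'm']));
}
```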
In my experience, similarly sized Rust and C++ projects tend to have similar compile times. Sometimes C++ wins thanks to better parallelization (a translation unit in Rust is a whole crate, not a single source file).