upvote
To be even fairer, it wasn't actually memory unsafety, it was "just" unsoundness: there was a type where, IF you gave it a weird io reader implementation, that implementation could see uninitialized data or expose uninitialized data elsewhere. But the only readers actually used were well-behaved readers.
reply
> well behaved readers.

Around and around we go.

reply
Vec::set_len is by no means deprecated. The lint you linked only covers a very specific unsound pattern using set_len.
reply
Indeed, and it doesn't need to be deprecated, because it's an API explicitly designed to give you low-level control where you need it, and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe. The documentation also suggests several other (safe) functions that should be used instead when possible, and provides correct usage examples: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.set... .
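For illustration, here's a minimal sketch of the pattern those docs describe (the `filled_vec` function is my own toy example, not from the docs): initialize elements through the raw pointer first, then call `set_len` with a SAFETY comment justifying both invariants.

```rust
// Toy example of correct Vec::set_len usage: the new length must not
// exceed the capacity, and all elements up to the new length must be
// initialized before the call.
fn filled_vec() -> Vec<u8> {
    let mut v: Vec<u8> = Vec::with_capacity(4);
    let p = v.as_mut_ptr();
    for i in 0..4 {
        // SAFETY: i < capacity (4), so the write stays inside the allocation.
        unsafe { p.add(i).write(i as u8) };
    }
    // SAFETY: the new length (4) does not exceed the capacity, and
    // elements 0..4 were all initialized by the loop above.
    unsafe { v.set_len(4) };
    v
}

fn main() {
    assert_eq!(filled_vec(), vec![0, 1, 2, 3]);
}
```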
reply
> and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe.

Didn't we learn from C (the entire raison d'être for Rust) that coders cannot be trusted to follow rules like this?

If coders could "(document) safety invariants that must be manually upheld in order for usage to be memory-safe," there'd be no need for Rust.

This is the tautology underlying Rust, as I see it.

reply
No, this is mistaken. Rust provides `unsafe` functions for operations where memory-safety invariants must be manually upheld, then forces callers to use `unsafe` blocks in order to call those functions, and then provides tooling for auditing those blocks. Want to keep unsafe code out of your codebase? Add `#![forbid(unsafe_code)]` to your crate root, and all unsafe code becomes a compiler error. Or add a check in your CI that prevents anyone from merging code that touches an unsafe block without sign-off from a senior maintainer. And/or add unit tests for any code that uses unsafe blocks and run those tests under Miri, which will loudly complain if you perform any memory-unsafe operations. And you can enable the `undocumented_unsafe_blocks` lint in Clippy so that you'll never forget to document an unsafe block. Rust's culture is that unsafe blocks should be reserved for leaf nodes in the call graph, wrapped in safe APIs whose usage does not impose manual invariant management on downstream callers. Internally, those APIs represent a relatively minuscule portion of the codebase upon which all your verification can be focused. So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.
reply
> Want to keep unsafe code out of your codebase?

And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe rust.

My original point still stands... Coders historically cannot be trusted to manually manage memory, unless they're rust coders apparently

> So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.

By definition, it isn't possible for a tool to reason about unsafe code, otherwise the rust compiler would do it

reply
> And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe rust.

No, this is completely incorrect, and one of the most interesting and surprising results of Rust as an experiment in language design. An enormous proportion of Rust codebases need not have any unsafe code of their own whatsoever, and even those that do tend to have unsafe blocks in an extreme minority of files. Rust's hypothesis that unsafe code can be successfully encapsulated behind safe APIs suitable for the vast majority of uses has been experimentally proven in practice. Ironically, the average unsafe block in practice is a result of needing to call a function written in C, which is a symptom of not yet having enough alternatives written in Rust. I have worked on both freestanding OSes and embedded applications written in Rust--both domains where you would expect copious usage of unsafe--where I estimate less than 5% of the files actually contained unsafe blocks, meaning a 20x reduction in the effort needed to verify them (in Fred Brooks units, that's two silver bullets worth).

> Coders historically cannot be trusted to manually manage memory, unless they're rust coders apparently

Most Rust coders are not manually managing memory on the regular, or doing anything else that requires unsafe code. I'm not exaggerating when I say that it's entirely possible to have spent your entire career writing Rust code without ever having been forced to write an `unsafe` block, in the same way that Java programmers can go their entire career without using JNI.

> By definition, it isn't possible for a tool to reason about unsafe code, otherwise the rust compiler would do it

Of course it is. The Rust compiler reasons about unsafe code all the time. What it can't do is definitively prove many properties of unsafe code, which is why the compiler conservatively requires the annotation. But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempt to flag issues early. In addition, Miri provides an interpreter for running unsafe code, offering dynamic rather than static analysis.

reply
> No, this is completely incorrect,

Show me system-level Rust code that only uses safe then... You can't, because it's impossible. It doesn't matter that it's a minority of files (!), the simple fact is you can't program systems without using unsafe. Rewrite the C dependencies in Rust and the amount of unsafe code increases massively

> Most Rust coders are not manually managing memory on the regular

Another sidestep. If coders in general cannot be trusted to manage memory, why can a rust coder be trusted all of a sudden?

> . But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempts to flag issues early.

We already had that, it wasn't enough, hence..... rust, remember?

reply
You are missing the forest for the trees here. The goal of Rust's `unsafe` isn't to prevent you from writing unsafe code. It's to prevent you from writing unsafe code by accident. That was always the goal. If you reread the comments through that lens, I'm sure they'll make more sense.
reply
I think you’re deliberately being obtuse here, and if you don’t see why, you should probably reflect on your reasoning.

I’ve been using Rust for about 12 years now, and the only times I’ve had to reach for `unsafe` was to do FFI stuff. That’s it. Maybe others might have more unsafe code and for good reasons, but from my perspective, I don’t know wtf you’re talking about.

reply
> Maybe others might have more unsafe code and for good reasons, but from my perspective, I don’t know wtf you’re talking about

"well I don't need to use unsafe that much so I don't know what your point is" sounds like you don't really have an answer.

reply
Rust has never been about outright eliminating unsafe code; it's about encapsulating that unsafe code within a safe, externally usable API.

When creating a dynamically sized array type, it's much simpler to reason about its invariants when you assume only its public methods have access to its length and capacity fields, rather than trusting the user to remember to update those fields themselves.

The above is an analogy, and that particular problem is obviously fixable with opaque accessor functions, but Rust takes it further by encapsulating raw pointer usage itself.

The whole ethos of unsafe Rust is that you encapsulate usages of things like raw pointers and mutable static variables in smaller, more easily verifiable modules rather than having everyone deal with them directly.
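As a concrete (toy, entirely my own) illustration of that ethos: all the raw-pointer work lives inside one small module, its invariants are maintained by the bounds checks in the public methods, and code outside the module can't bypass them.

```rust
// Toy example: the unsafe raw-pointer reads/writes are confined to
// this module; callers can only go through the safe push/pop API,
// whose checks uphold the invariant that elements 0..len are initialized.
mod stack {
    pub struct Stack {
        buf: Box<[i32; 8]>,
        len: usize,
    }

    impl Stack {
        pub fn new() -> Self {
            Stack { buf: Box::new([0; 8]), len: 0 }
        }

        pub fn push(&mut self, v: i32) -> bool {
            if self.len == 8 {
                return false; // full
            }
            // SAFETY: len < 8 was checked above, so the write is in bounds.
            unsafe { self.buf.as_mut_ptr().add(self.len).write(v) };
            self.len += 1;
            true
        }

        pub fn pop(&mut self) -> Option<i32> {
            if self.len == 0 {
                return None;
            }
            self.len -= 1;
            // SAFETY: every index below the old len was initialized by push.
            Some(unsafe { self.buf.as_ptr().add(self.len).read() })
        }
    }
}

fn main() {
    let mut s = stack::Stack::new();
    assert!(s.push(1));
    assert!(s.push(2));
    assert_eq!(s.pop(), Some(2));
    assert_eq!(s.pop(), Some(1));
    assert_eq!(s.pop(), None);
}
```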

reply
The issue with C is that every single use of a pointer needs to come with safety invariants (at its most basic: when you pass a pointer to my function, do I take ownership of your pointer or not?). You cannot legitimately expect people to be that alert 100% of the time.

Inversely, you can write whole applications in rust without ever touching `unsafe` directly, so that keyword by itself signals the need for attention (both to the programmer and the reviewer or auditor). An unsafe block without a safety comment next to it is a very easy red flag to catch.
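To make the contrast concrete (names here are mine, purely illustrative): in C the ownership question lives only in the docs, while in Rust the signature itself answers it, and the compiler rejects use-after-move.

```rust
struct Resource(String);

// Takes ownership: after calling this, the caller can no longer use the value.
fn consume(r: Resource) -> usize {
    r.0.len()
}

// Borrows: the caller keeps ownership and can keep using the value.
fn inspect(r: &Resource) -> usize {
    r.0.len()
}

fn main() {
    let r = Resource("hello".to_string());
    assert_eq!(inspect(&r), 5); // r is still usable here
    assert_eq!(consume(r), 5); // r is moved; touching it again would not compile
}
```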

reply
>when you a pass a pointer to my function, do I take ownership of your pointer or not?

It's honestly frustrating how prevalent this is in C, and the docs often don't even tell you. If you guess that the function takes ownership and make a copy for it when it doesn't, you've leaked memory; if you guess the other way, you now risk a double free, a use-after-free, or having the data mutated behind your back.

reply
The specific use case the GNU maintainer listed followed this exact pattern.
reply