spin-rs no longer maintained (dependency) #921
Comments
Thanks. I don't think the spinlock design in spin-rs was optimal for what we were using it for anyway. In particular, we really just need a pure spin lock that doesn't ever yield.
In fact, we don't even need a spinlock, really. We just use
Actually, I just checked my notes, and we intentionally don't use a pure spinlock so that we gracefully handle very edgy edge cases, e.g. the process that grabbed the lock being suspended during (or just before) the handful of instructions that cache the CPUID results. IIRC, spin-rs was way too eager to yield, but never yielding also isn't right.
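To make the tradeoff concrete, here is a minimal sketch (not ring's code) of the two wait-loop policies being discussed; `ready` stands in for whatever flag the initializing thread sets:

```rust
use std::sync::atomic::{AtomicBool, Ordering::Acquire};

// (a) Pure spin: never yields to the OS. Cheap while the initializer is
// actively running on another CPU, but it burns a full time slice (or
// hangs a single-core system) if the thread doing the initialization
// has been suspended.
fn wait_pure_spin(ready: &AtomicBool) {
    while !ready.load(Acquire) {
        std::hint::spin_loop();
    }
}

// (b) Spin a bounded number of times, then yield to the scheduler. This
// tolerates a suspended initializer at the cost of syscalls; yielding
// immediately instead is the optimistic variant discussed below.
fn wait_spin_then_yield(ready: &AtomicBool) {
    let mut spins = 0u32;
    while !ready.load(Acquire) {
        if spins < 100 {
            spins += 1;
            std::hint::spin_loop();
        } else {
            std::thread::yield_now();
        }
    }
}
```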
I wrote:
If we really want to be optimistic that the initialization has already been done, then spin-rs's behavior of yielding right away makes a lot of sense, actually. So, I think we should just find the simplest thing that will work.
If we were happy with the Once in spin-rs, perhaps we should just fork that and maybe add it to the lock-api project (referenced in the CVE).
Perhaps. I see lock_api depends only on scopeguard (its other dependencies are optional, it seems). In general, the problem I'm running into now is finding an alternative that doesn't have heavy dependencies. (It looks like rust-lang/rust#56410 and similar long-stalled work have had a chilling effect on the development of projects solving these problems.)
I filed rust-lang-nursery/lazy-static.rs#163 against lazy-static.
I put up PR #924 to see if switching to

There are two parts to this: cpu.rs and rand.rs. In cpu.rs, we need to support

In rand.rs, we could use

It would be interesting to know what @oliver-giersch is planning to do with conquer-once if/when the aforementioned RFC is implemented. I would hope that conquer-once would shrink in scope. But maybe conquer-once does more than what the RFC proposed, and I'm overlooking it.
I primarily intended for

What exactly do you find too extensive about the crate's scope?
Thanks for commenting.
As things are now, nothing! But if/when the lazy_static functionality is in libstd, would that change what conquer-once does or how it works? For example, if libstd were available, it might be nice for conquer-once to defer to the libstd implementation.
I might reassess whether the additional non-blocking capabilities are worth maintaining further. But as it is now, the part of the crate that would overlap with the proposed RFC (the OS-reliant blocking API) is only compiled if the
Question 1: is blocking (via spinning) the most theoretically appropriate solution here? It seems like in both cases:

- the initialization is relatively cheap, and
- the initialization is idempotent.
So, a non-blocking safe implementation is possible, roughly like this:

```rust
use core::sync::atomic::{AtomicUsize, Ordering::Relaxed};

fn get_state() -> usize {
    static CACHE: AtomicUsize = AtomicUsize::new(0);
    let mut res = CACHE.load(Relaxed);
    if res == 0 {
        // `init()` computes the value to cache; 0 is reserved to mean
        // "not initialized yet".
        res = init();
        CACHE.store(res, Relaxed);
    }
    res
}
```

The downside is that the
Question 2: is a worse edge case possible with interrupts? Say we are in a single-core, bare-metal context. A hardware thread runs an initialization routine and is preempted by an interrupt in the middle of the critical section. What if the interrupt handler tries to run this initialization again? It will observe that initialization is in progress, will enter a spin wait, and will cause a deadlock. The proposal from this comment doesn't suffer from this problem: the interrupt handler will just run the initialization routine itself.

Question 3: does using

Context: I don't do much no_std development myself, and I'd love to understand the tradeoffs related to spin locks better, for the standard lazy types RFC.
BTW, I've managed to get a reliable reproduction of the edge case for the
This isn't true. We actually rely on the "It is also guaranteed that any memory writes performed by the executed closure can be reliably observed by other threads at this point (there is a happens-before relation between the closure and code executing after the return)." semantics provided by

And, perhaps this is another bug with using spin-rs: spin-rs's documentation doesn't make the same guarantee! See https://mvdnes.github.io/rust-docs/spin-rs/spin/struct.Once.html#method.call_once. We should check whether this is an omission in spin-rs's documentation or whether spin-rs's
We don't currently assume idempotence, but something slightly weaker: the winning thread will cache a state that is usable from that point forward.
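A minimal illustration (not ring's actual code) of why that documented guarantee matters: the winning thread caches the state inside the closure, and every caller that returns from `call_once` is guaranteed to observe it, even though the cache itself only uses `Relaxed` operations:

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};
use std::sync::Once;

static INIT: Once = Once::new();
static FEATURES: AtomicU32 = AtomicU32::new(0);

// Stand-in for the real, expensive detection (e.g. running CPUID).
fn detect_features() -> u32 {
    0b1
}

fn features() -> u32 {
    INIT.call_once(|| {
        // The winning thread caches the state here...
        FEATURES.store(detect_features(), Relaxed);
    });
    // ...and the happens-before relation established by `call_once`
    // guarantees this load observes that store, on every thread.
    FEATURES.load(Relaxed)
}
```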
The CPUID instruction has severely negative performance consequences when executed. It isn't just slow, but it is also a serializing instruction.
I think that you are overlooking that these stores are executed and require protection on x86 and x86-64 by
It looks like
That bug is now tracked as ring issue #931.
It's instructive to see how they handle this problem in

Good news: they manage to do it without spinlocks and using only

Luckily, the bug is easy to fix (rust-lang/stdarch#837), and it yields an interesting insight for ring's case: although the state we want to synchronize is larger than
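Not std_detect's actual code, but a rough sketch of the shape I understand it to have, with assumed names: the detected features are packed into a single atomic word and initialized racily with only relaxed operations, so no spinning or OS blocking is needed:

```rust
use core::sync::atomic::{AtomicUsize, Ordering::Relaxed};

// Bit 0 is an "initialized" marker, so an all-zero feature set is still
// distinguishable from the uninitialized state.
static CACHE: AtomicUsize = AtomicUsize::new(0);

// Stand-in for the real detection (e.g. CPUID).
fn detect() -> usize {
    0b10
}

fn features() -> usize {
    let cached = CACHE.load(Relaxed);
    if cached != 0 {
        return cached;
    }
    // Racy but benign: several threads may detect concurrently, but they
    // all compute and store the same value, and no other memory is
    // published through this flag, so Relaxed suffices.
    let detected = detect() | 1;
    CACHE.store(detected, Relaxed);
    detected
}
```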
Yes, we can refactor that code as needed.
Yes, that is good. Be aware that the readers of the
I think this is because you are hoping to change the definition of For For WDYT? |
Aha, I hadn't realized this, and it indeed makes the problem more complicated, as I don't fully understand the interactions between the language-level memory model and the hardware-level memory model.
Right. Specifically, I want to change those to be an array of atomic integers. The only property of atomics we need is that loads and stores at a single location are atomic; we don't need any additional synchronization. So, it does seem that the trick from stdarch will work on x86 (although I don't think this is officially, formally specified to work; atomics + asm are a grey area, IIRC).
I think just using

As a super-proper and cross-platform solution, I think we can move all loads of those global variables to Rust, and pass the loaded values to asm in registers. That is, I imagine that
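A sketch of roughly that shape, with hypothetical names (ring's real capability vector lives in C/assembly): the shared state becomes an array of atomics accessed only with relaxed, per-location operations, and the values are loaded in Rust before being handed to asm:

```rust
use core::sync::atomic::{AtomicU32, Ordering::Relaxed};

// Hypothetical replacement for a `static mut [u32; 4]` capability
// vector: per-element atomicity is all we rely on; no cross-location
// ordering is assumed.
static CPU_CAPS: [AtomicU32; 4] = [
    AtomicU32::new(0),
    AtomicU32::new(0),
    AtomicU32::new(0),
    AtomicU32::new(0),
];

fn cache_cpu_caps(values: [u32; 4]) {
    for (slot, &value) in CPU_CAPS.iter().zip(values.iter()) {
        slot.store(value, Relaxed);
    }
}

fn load_cpu_caps() -> [u32; 4] {
    // The loads happen in Rust; the loaded values could then be passed
    // to assembly in registers rather than having asm read the static.
    [
        CPU_CAPS[0].load(Relaxed),
        CPU_CAPS[1].load(Relaxed),
        CPU_CAPS[2].load(Relaxed),
        CPU_CAPS[3].load(Relaxed),
    ]
}
```

Passing the loaded values in registers also sidesteps the mixed atomic/asm access to the same memory, which is the grey area mentioned above.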
Right now, x86/x86-64 seems like the only supported platform where we effectively support

I'm thinking of releasing a new major version of ring that goes back to using

One thing I've considered: require
Hi @briansmith, is there any news on this?
I would love to replace spin with something else that does the same thing, but I do not know of anything that is acceptable and backward-compatible with no-std environments. rust-secure-code/safety-dance#18 and my own personal review of the code indicate that spin-rs's

When we stop maintaining 0.16.x (soon) and start 0.17.x, we'll reconsider doing something different.

In (private) projects that use ring, I'm currently doing this as a workaround:

Obviously that's not ideal. That said, it's a "warning" for a reason: the warning about unmaintained crates really doesn't make much sense when you consider all the edge cases.
Would https://github.com/rust-osdev/spinning_top be a sufficient replacement for spin?
As someone that contributed lots to
@Ericson2314 Thanks. Do you think
Just to clarify, spinning_top is just lock_api with a RawMutex implementation that spins (which I think is close to a straight port of spin-rs) + some type definitions.
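For reference, a minimal sketch of what that looks like (close to what spinning_top does, as far as I can tell): implement lock_api's `RawMutex` with an `AtomicBool` that spins, and let lock_api provide the `Mutex`/`MutexGuard` types:

```rust
use core::sync::atomic::{AtomicBool, Ordering};
use lock_api::{GuardSend, RawMutex};

pub struct RawSpinlock {
    locked: AtomicBool,
}

unsafe impl RawMutex for RawSpinlock {
    const INIT: RawSpinlock = RawSpinlock {
        locked: AtomicBool::new(false),
    };

    // A guard of this lock may be sent to another thread.
    type GuardMarker = GuardSend;

    fn lock(&self) {
        while !self.try_lock() {
            core::hint::spin_loop();
        }
    }

    fn try_lock(&self) -> bool {
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    unsafe fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

// lock_api supplies the safe wrapper types on top of the raw lock.
pub type Spinlock<T> = lock_api::Mutex<RawSpinlock, T>;
pub type SpinlockGuard<'a, T> = lock_api::MutexGuard<'a, RawSpinlock, T>;
```

The policy question from earlier in the thread (pure spin vs. bounded spin plus yield) lives entirely inside `lock`, which is what makes forking or swapping the raw lock cheap.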
@briansmith Well, I like that

[To be clear, this principle applies to dependencies in general: we could turn every usage of a type into a parameter with a trait, ending up with something like ML's "fully functorized" idiom. But this would be incredibly verbose.]
The real killer thing would be if we could somehow connect optional dependencies with type parameter defaults, so just about every function in many libraries would have some type parameters (e.g. for the allocator and the lock implementation), but in the common
Hello!
I filed a PR some time ago updating
Thanks everybody. It seems like this is all resolved now. I added a
https://rustsec.org/advisories/RUSTSEC-2019-0031
The spin dependency is no longer maintained, it appears. This causes `cargo audit` to fail on downstream projects. I haven't had a chance to look into potential fixes.