···
New async primitives that disallow intra-task concurrency; a clone of `futures` and `futures-concurrency` for the new primitives.

-## TODO:
-- [x] ScopedFuture
-- [ ] static combinators (Join Race etc), see futures-concurrency
-- [ ] `#[bsync]` or some compiler ScopedFuture generation
-- [ ] growable combinators (eg. `FutureGroup`, `FuturesUnordered`) (require alloc?)
-- [ ] unsound (needs `Forget`) multithreading
-- [ ] "rethinking async rust"
-- [ ] all of the above for streams
-- [ ] rfc?

channels: need lifetimed receivers, probably needs `Forget` (arc-like channels would be unsafe)
···
- need tons of interior mutability, since immutable/can't-move means `poll` cannot take `&mut self`, cells everywhere
  - nvm, lots of unsafe code, but nothing really unsound
- potentially bad error messages? stuff like `join!` will have to output code that manually sets up the waker self-ref
+
+## TODO:
+- [x] ScopedFuture
+- [x] static combinators (Join Race etc), see futures-concurrency
+- [x] `#[async_scoped]` or some compiler ScopedFuture generation
+- [ ] doubly linked list waker registration
+- [ ] repeating static time reactors - eg. make an event poll every N seconds
+- [ ] io_uring reactors
+- [ ] growable combinators (eg. `FutureGroup`, `FuturesUnordered`) (require alloc?)
+- [ ] unsound (needs `Forget`) multithreading
+- [ ] "rethinking async rust"
+- [ ] all of the above for streams
+- [ ] rfc?
+
+# Chapter 3
+
+man this really sucks
+
+i need better things to do
+
+issues from ch 2
+- works great[*]
+- *: incompatible with an event loop - something still has to poll
+  - we're back to a doubly linked list of waker registrations in the event loop
+  - this requires Forget
+- ScopedFuture - Future interop sucks
+
+
+structured concurrency with regular combinators:
+- scope holds tasks
+- scope cancels tasks when dropped
+- tasks are run by a central executor
+
+pure structured concurrency with the borrow checker:
+- high level block_on(), any task wake wakes every level up
+- tasks have events
+
+how do the tasks register to an event loop? they don't, fuck
+
+
+```rust
+struct Task<F: Future> {
+    inner: F,
+    prev: *const Self,
+    next: *const Self,
+    waker: Waker,
+}
+```
+
+&waker is passed into all sub-tasks; calling wake, clone, etc. panics!!!
+this is pretty jank
+
+also the waker doesn't have a lifetime, so safe code could easily register it with an external source that outlives the task
+this is unsound
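
To make this concrete, here is a minimal sketch using today's `std` API (the `'static` registry here is an assumption standing in for an event loop): safe code can clone the waker and move it somewhere that outlives the future. With `Arc`-backed wakers this is fine; with a hypothetical stack-backed waker the stored pointer would dangle.

```rust
use std::future::poll_fn;
use std::sync::Mutex;
use std::task::{Poll, Waker};

// Stand-in for an event loop that outlives every task.
static REGISTRY: Mutex<Option<Waker>> = Mutex::new(None);

async fn leaky() {
    poll_fn(|cx| {
        // Entirely safe: `Waker: Clone + 'static`, so nothing stops us from
        // stashing it in a place that outlives this future.
        *REGISTRY.lock().unwrap() = Some(cx.waker().clone());
        Poll::Ready(())
    })
    .await;
}
```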
+
+we need
+```rust
+struct WakerRegister {
+    prev: *const WakerRegister,
+    next: *const WakerRegister,
+}
+```
+
+# Borrow Checked Structured Concurrency
+
+An async system needs to have the following components (a minimal executor sketch follows the list):
+
+- event loop: polls events and schedules tasks when they are ready
+- tasks: state machines that progress and can await events from the event loop
+- task combinators: tasks that compose other tasks into useful logic structures
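
As a point of reference, a minimal single-task executor for the standard `core::future::Future` (not the bespoke trait developed later) shows how these pieces fit together; this is the well-known thread-parking pattern, not code from this repo:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking unparks the thread blocked inside `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// The "event loop": poll the task, park until something calls `wake`.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}
```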
+
+Tasks register themselves with events on the event loop, which must outlive the tasks and store pointers back to them so that they can be woken and polled again.
+This is incompatible with the borrow checker, because the task pointers (wakers) are stored inside an event loop whose lifetime may exceed the tasks'.
+
+`Waker` is effectively a `*const dyn Wake`. It is implemented using a custom `RawWakerVTable` rather than a `dyn` pointer to allow for `Wake::wake(self)`, which is not object safe.
+This method is necessary for runtime implementations whose wakers are effectively `Arc<Task>`, since `wake(self)` consumes `self` and decrements the reference count.
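
std models exactly this with the `std::task::Wake` trait, whose `wake` takes `self: Arc<Self>`. A minimal sketch of the reference-counted scheme (the `Task` contents are hypothetical):

```rust
use std::sync::Arc;
use std::task::{Wake, Waker};

struct Task {
    // future + handle to the executor's run queue would live here
}

impl Wake for Task {
    // Consumes one strong count: the task stays alive exactly as long as
    // outstanding wakers (and the executor) still reference it.
    fn wake(self: Arc<Self>) {
        // re-enqueue the task with its executor
    }
}

fn make_waker() -> Waker {
    // `Waker::from` hides the `RawWakerVTable` plumbing.
    Waker::from(Arc::new(Task {}))
}
```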
+
+There are two types of sound async runtime that currently exist in Rust:
+
+[tokio](https://github.com/tokio-rs) and [smol](https://github.com/smol-rs) work using the aforementioned reference-counting scheme to ensure wakers aren't dangling pointers to tasks that no longer exist.
+
+[embassy](https://github.com/embassy-rs/embassy) and [rtic](https://github.com/rtic-rs/rtic) work by ensuring tasks are stored in `static` task pools of `N` tasks. Scheduled tasks are kept in an intrusive linked list to avoid allocation, and wakers can't be dangling pointers because completed tasks will refuse to add themselves back to the list, or will be replaced by a new task. This is useful in environments where heap allocation is undesirable, but it requires the user to annotate the maximum number of each specific task that can exist at one time, and spawning fails when that limit is exceeded. A toy sketch of the pool idea follows.
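
A hypothetical sketch of the static-pool idea (not embassy's actual API): `N` preallocated slots, and spawning fails once every slot is occupied instead of falling back to the heap:

```rust
struct TaskPool<F, const N: usize> {
    slots: [Option<F>; N],
}

impl<F, const N: usize> TaskPool<F, N> {
    fn new() -> Self {
        Self { slots: std::array::from_fn(|_| None) }
    }

    // Hands the task back to the caller when the pool is exhausted,
    // mirroring the "fails to spawn past the annotated limit" behavior.
    fn spawn(&mut self, task: F) -> Result<(), F> {
        for slot in &mut self.slots {
            if slot.is_none() {
                *slot = Some(task);
                return Ok(());
            }
        }
        Err(task)
    }
}
```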
+
+An async runtime whose futures are allocated on the stack cannot be sound under this model, because `Future::poll` allows any safe `Future` implementation to store or clone wakers wherever it wants, and those wakers become dangling pointers once the stack-allocated future goes out of scope. To prevent this, we would need a circular reference, where the task (`Task<'scope>: Wake`) contains a `&'scope Waker` and the `Waker` contains a `*const dyn Wake`. For that to be safe, the `Waker` must never be moved, which is impossible because something needs to register the waker:
+
+```rust
+fn poll(self: Pin<&mut Self>, cx: &mut Context<'_> /* '_ is the duration of the poll fn */) -> Poll<Self::Output> {
+    // the waker is moved!
+    // we can't register an `&Waker` because we run into the same problem,
+    // and the waker only lives for '_
+    store.register(cx.waker())
+}
+```
+
+Effectively, by saying our waker can't move, we are saying it must be stored by the task itself, which makes it useless as a waker. Instead, what we could do is have a waker-register (verb, not noun) that facilitates binding an immovable waker to an immovable task, where the waker is guaranteed to outlive the task:
+
+```rust
+pub trait Wake<'task> {
+    fn wake(&self);
+    fn register_waker(&mut self, waker: &'task Waker);
+}
+
+pub struct Waker {
+    task: *const dyn Wake,
+    valid: bool, // the task is in charge of invalidating this when it goes out of scope
+}
+
+pub struct WakerRegistration<'poll> {
+    task: &'poll mut dyn Wake,
+}
+
+impl<'poll> WakerRegistration<'poll> {
+    pub fn register<'task>(self, slot: &'task Waker)
+    where
+        'task: 'poll,
+        Self: 'task,
+    {
+        // (sketch: writing through `&Waker` would need interior mutability)
+        *slot = Waker::new(self.task as *const dyn Wake);
+        self.task.register_waker(slot);
+    }
+}
+```
+
+This system works better because the `WakerRegistration` only lives for the duration of `poll`, so it can never be stored past the task it points to.
+
+
+Experienced rust programmers might be reading this and thinking I am stupid (true), because without `Forget`, safe code can still leak the waker and leave its pointer dangling.
+
+An astute (soon to be disappointed) reader might be thinking, as I did when I first learned about this system, "what if we ensured that there was only one `Waker` per `Task`, and gave the task a pointer to the waker, so that it could disable the waker when dropped?"
+
+Unfortunately, there are a multitude of issues with this system:
+
+- In order to hold a pointer to the `Waker` from the task, the `Waker` itself must never move
+- Preventing a `Waker` from moving means panicking on the operations that move it, like `clone()`
+
+Even if it were guaranteed that wakers could not be moved or cloned (by panicking on `clone`), and registration occurred via `&Waker`, the task would still be unable to invalidate a waker that safe code has leaked, since `Drop` is not guaranteed to run.
+
+https://conradludgate.com/posts/async-stack
+
+## Structured Concurrency
+
+https://trio.discourse.group/t/discussion-notes-on-structured-concurrency-or-go-statement-considered-harmful/25
+https://kotlinlang.org/docs/coroutines-basics.html
+
+Notably, the structured concurrency pattern fits very nicely with our hypothetical unsound stack-based async runtime.
+
+## WeakCell Pattern & Forget trait
+
+https://github.com/rust-lang/rfcs/pull/3782
+
+
+There are two solutions:
+
+- `Waker`s panic on `clone()`
+
+## Waker allocation problem & intra-task concurrency
+
+we can't do intra-task concurrency because WakerRegistrations
+
···
-use futures_core::{ScopedFuture, Wake};
-use futures_util::{MaybeDone, MaybeDoneState, maybe_done};
-use std::{cell::Cell, task::Poll};
+use crate::wake::WakeArray;
+use futures_compat::LocalWaker;
+use futures_core::FusedFuture;
+use futures_util::maybe_done::MaybeDone;
+use futures_util::maybe_done::maybe_done;
+use std::pin::Pin;
+use std::task::Poll;

/// from [futures-concurrency](https://github.com/yoshuawuyts/futures-concurrency/tree/main)
/// Wait for all futures to complete.
///
/// Awaits multiple futures simultaneously, returning the output of the futures
/// in the same container type they were created in, once all complete.
-pub trait Join<'scope> {
+pub trait Join {
    /// The resulting output type.
    type Output;

    /// The [`Future`] implementation returned by this method.
-    type Future: ScopedFuture<'scope, Output = Self::Output>;
+    type Future: futures_core::Future<LocalWaker, Output = Self::Output>;

    /// Waits for multiple futures to complete.
    ///
···
    fn join(self) -> Self::Future;
}

-struct WakeStore<'scope> {
-    parent: Cell<Option<&'scope dyn Wake<'scope>>>,
-    ready: Cell<bool>,
-}
-
-impl<'scope> WakeStore<'scope> {
-    fn new() -> Self {
-        Self {
-            parent: Option::None.into(),
-            ready: true.into(),
-        }
-    }
-    fn take_ready(&self) -> bool {
-        self.ready.replace(false)
+pub trait JoinExt {
+    fn along_with<Fut>(self, other: Fut) -> Join2<Self, Fut>
+    where
+        Self: Sized + futures_core::Future<LocalWaker>,
+        Fut: futures_core::Future<LocalWaker>,
+    {
+        (self, other).join()
    }
}

-impl<'scope> Wake<'scope> for WakeStore<'scope> {
-    fn wake(&self) {
-        self.ready.replace(true);
-        if let Some(parent) = &self.parent.get() {
-            parent.wake();
-        }
-    }
-}
+impl<T> JoinExt for T where T: futures_core::Future<LocalWaker> {}

macro_rules! impl_join_tuple {
-    ($namespace: ident $StructName:ident $($F:ident)+) => {
+    ($namespace:ident $StructName:ident $($F:ident)+) => {
        mod $namespace {
-            use super::*;
-
-            #[allow(non_snake_case)]
-            pub struct Wakers<'scope> {
-                $(pub $F: WakeStore<'scope>,)*
-            }
-
-            // this is so stupid
-            #[allow(non_snake_case)]
-            pub struct WakerRefs<'scope> {
-                $(pub $F: Cell<Option<&'scope dyn Wake<'scope>>>,)*
-            }
+            #[repr(u8)]
+            pub(super) enum Indexes { $($F,)+ }
+            pub(super) const LEN: usize = [$(Indexes::$F,)+].len();
        }

        #[allow(non_snake_case)]
        #[must_use = "futures do nothing unless you `.await` or poll them"]
-        pub struct $StructName<'scope, $($F: ScopedFuture<'scope>),+> {
-            $($F: MaybeDone<'scope, $F>,)*
-            wakers: $namespace::Wakers<'scope>,
-            refs: $namespace::WakerRefs<'scope>,
+        pub struct $StructName<$($F: futures_core::Future<LocalWaker>),+> {
+            $($F: MaybeDone<$F>,)*
+            wake_array: WakeArray<{$namespace::LEN}>,
        }

-        impl<'scope, $($F: ScopedFuture<'scope> + 'scope),+> ScopedFuture<'scope>
-            for $StructName<'scope, $($F),+>
+        impl<$($F: futures_core::Future<LocalWaker>),+> futures_core::Future<LocalWaker>
+            for $StructName<$($F),+>
        {
            type Output = ($($F::Output),+);

-            fn poll(&'scope self, wake: &'scope dyn Wake<'scope>) -> Poll<Self::Output> {
+            #[allow(non_snake_case)]
+            fn poll(self: Pin<&mut Self>, waker: Pin<&LocalWaker>) -> Poll<Self::Output> {
+                let this = unsafe { self.get_unchecked_mut() };
+
+                let wake_array = unsafe { Pin::new_unchecked(&this.wake_array) };
+                $(
+                    // TODO: debug_assert_matches is nightly https://github.com/rust-lang/rust/issues/82775
+                    debug_assert!(!matches!(this.$F, MaybeDone::Gone), "do not poll futures after they return Poll::Ready");
+                    let mut $F = unsafe { Pin::new_unchecked(&mut this.$F) };
+                )+
+
+                wake_array.register_parent(waker);
+
                let mut ready = true;

                $(
-                    self.wakers.$F.parent.replace(Some(wake));
-                    self.refs.$F.replace(Some(&self.wakers.$F));
+                    let index = $namespace::Indexes::$F as usize;
+                    let waker = unsafe { wake_array.child_guard_ptr(index).unwrap_unchecked() };

-                    // # SAFETY
-                    // `fut` must not live past this block: other MaybeDone
-                    // methods mutate `self`, and `fut` holds an immutable
-                    // reference (aliasing invariant)
-                    if let MaybeDoneState::Future(fut) = unsafe { self.$F.get_state() } {
-                        ready &= if self.wakers.$F.take_ready() {
-                            // by polling the future, we create our self-referential struct for lifetime 'scope
-                            // # SAFETY
-                            // unwrap_unchecked is safe because we just put a Some value into refs.$F,
-                            // so it is guaranteed to be Some
-                            fut.poll(unsafe { (&self.refs.$F.get()).unwrap_unchecked() }).is_ready()
-                        } else {
-                            false
-                        };
-                    }
+                    // ready if MaybeDone is Done or just completed (converted to Done);
+                    // it is unsafe / against the Future api contract to poll after Gone/Future is finished
+                    ready &= if unsafe { wake_array.take_woken(index).unwrap_unchecked() } {
+                        $F.as_mut().poll(waker).is_ready()
+                    } else {
+                        $F.is_terminated()
+                    };
                )+

                if ready {
                    Poll::Ready((
                        $(
-                            // # SAFETY
-                            // `ready == true` when all futures are already
-                            // complete or just complete. Once not `MaybeDoneState::Future`, futures
-                            // transition to `MaybeDoneState::Done`. We don't poll them after, or take
-                            // their outputs, so we know the result of `take_output` must be `Some`
+                            // SAFETY:
+                            // `ready == true` when all futures are complete.
+                            // Once a future is not `MaybeDoneState::Future`, it transitions to `Done`,
+                            // so we know the result of `take_output` must be `Some`.
                            unsafe {
-                                self.$F
-                                    .take_output()
-                                    .unwrap_unchecked()
+                                $F.take_output().unwrap_unchecked()
                            },
                        )*
                    ))
···
                }
            }

-        impl<'scope, $($F: ScopedFuture<'scope> + 'scope),+> Join<'scope> for ($($F),+) {
+        impl<$($F: futures_core::Future<LocalWaker>),+> Join for ($($F),+) {
            type Output = ($($F::Output),*);
-            type Future = $StructName<'scope, $($F),+>;
+            type Future = $StructName<$($F),+>;

            #[allow(non_snake_case)]
            fn join(self) -> Self::Future {
                let ($($F),+) = self;

                $StructName {
                    $($F: maybe_done($F),)*
-                    wakers: $namespace::Wakers { $($F: WakeStore::new(),)* },
-                    refs: $namespace::WakerRefs { $($F: Option::None.into(),)* }
+                    wake_array: WakeArray::new(),
                }
            }
        }
···

#[cfg(test)]
mod tests {
-    use futures_util::poll_fn;
+    use futures_core::Future;
+    use futures_util::{dummy_guard, poll_fn};
+
+    use crate::wake::local_wake;

    use super::*;
+
+    use std::pin;

    #[test]
-    fn basic() {
+    fn counters() {
+        let mut x1 = 0;
+        let mut x2 = 0;
+        let f1 = poll_fn(|waker| {
+            local_wake(waker);
+            x1 += 1;
+            if x1 == 4 {
+                Poll::Ready(x1)
+            } else {
+                Poll::Pending
+            }
+        });
+        let f2 = poll_fn(|waker| {
+            local_wake(waker);
+            x2 += 1;
+            if x2 == 5 {
+                Poll::Ready(x2)
+            } else {
+                Poll::Pending
+            }
+        });
+        let guard = pin::pin!(dummy_guard());
+        let mut join = pin::pin!((f1, f2).join());
+        for _ in 0..4 {
+            assert_eq!(join.as_mut().poll(guard.as_ref()), Poll::Pending);
+        }
+        assert_eq!(join.poll(guard.as_ref()), Poll::Ready((4, 5)));
+    }
+
+    #[test]
+    fn never_wake() {
+        let f1 = poll_fn(|_| Poll::<i32>::Ready(0));
+        let f2 = poll_fn(|_| Poll::<i32>::Pending);
+        let guard = pin::pin!(dummy_guard());
+        let mut join = pin::pin!((f1, f2).join());
+        for _ in 0..10 {
+            assert_eq!(join.as_mut().poll(guard.as_ref()), Poll::Pending);
+        }
+    }
+
+    #[test]
+    fn immediate() {
        let f1 = poll_fn(|_| Poll::Ready(1));
        let f2 = poll_fn(|_| Poll::Ready(2));
-        let dummy_waker = WakeStore::new();
-        assert_eq!((f1, f2).join().poll(&dummy_waker), Poll::Ready((1, 2)));
+        let join = pin::pin!(f1.along_with(f2));
+        let guard = pin::pin!(dummy_guard());
+        assert_eq!(join.poll(guard.as_ref()), Poll::Ready((1, 2)));
    }
}
futures-combinators/src/lib.rs (+6 -1)

···
-mod join;
+pub mod join;
+pub mod race;
+mod wake;
+
+use join::*;
+use race::*;
futures-combinators/src/race.rs (+200)

···
+use futures_util::LocalWaker;
+
+use crate::wake::WakeArray;
+use std::pin::Pin;
+use std::task::Poll;
+
+/// from [futures-concurrency](https://github.com/yoshuawuyts/futures-concurrency/tree/main)
+/// Wait for the first future to complete.
+///
+/// Awaits multiple futures at once, returning as soon as one completes. The
+/// other futures are cancelled.
+pub trait Race {
+    /// The resulting output type.
+    type Output;
+
+    /// The [`Future`] implementation returned by this method.
+    type Future: futures_core::Future<LocalWaker, Output = Self::Output>;
+
+    /// Wait for the first future to complete.
+    ///
+    /// Awaits multiple futures at once, returning as soon as one completes. The
+    /// other futures are cancelled.
+    ///
+    /// This function returns a new future which polls all futures concurrently.
+    fn race(self) -> Self::Future;
+}
+
+pub trait RaceExt {
+    fn race_with<Fut>(self, other: Fut) -> Race2<Self, Fut>
+    where
+        Self: Sized + futures_core::Future<LocalWaker>,
+        Fut: futures_core::Future<LocalWaker>,
+    {
+        (self, other).race()
+    }
+}
+
+impl<T> RaceExt for T where T: futures_core::Future<LocalWaker> {}
+
+macro_rules! impl_race_tuple {
+    ($namespace:ident $StructName:ident $OutputsName:ident $($F:ident)+) => {
+        mod $namespace {
+            #[repr(u8)]
+            pub(super) enum Indexes { $($F,)+ }
+            pub(super) const LEN: usize = [$(Indexes::$F,)+].len();
+        }
+
+        pub enum $OutputsName<$($F,)+> {
+            $($F($F),)+
+        }
+
+        impl<$($F: std::fmt::Debug,)+> std::fmt::Debug for $OutputsName<$($F,)+> {
+            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                match self {$(
+                    Self::$F(x) =>
+                        f.debug_tuple(std::stringify!($F))
+                            .field(x)
+                            .finish(),
+                )+}
+            }
+        }
+
+        impl<$($F: PartialEq,)+> PartialEq for $OutputsName<$($F,)+> {
+            fn eq(&self, other: &Self) -> bool {
+                match (self, other) {
+                    $((Self::$F(a), Self::$F(b)) => a == b,)+
+                    _ => false,
+                }
+            }
+        }
+
+        #[allow(non_snake_case)]
+        #[must_use = "futures do nothing unless you `.await` or poll them"]
+        pub struct $StructName<$($F: futures_core::Future<LocalWaker>),+> {
+            $($F: $F,)*
+            wake_array: WakeArray<{$namespace::LEN}>,
+        }
+
+        impl<$($F: futures_core::Future<LocalWaker>),+> futures_core::Future<LocalWaker>
+            for $StructName<$($F),+>
+        {
+            type Output = $OutputsName<$($F::Output,)+>;
+
+            #[allow(non_snake_case)]
+            fn poll(self: Pin<&mut Self>, waker: Pin<&LocalWaker>) -> Poll<Self::Output> {
+                let this = unsafe { self.get_unchecked_mut() };
+
+                let wake_array = unsafe { Pin::new_unchecked(&this.wake_array) };
+                $(
+                    let mut $F = unsafe { Pin::new_unchecked(&mut this.$F) };
+                )+
+
+                wake_array.register_parent(waker);
+
+                $(
+                    let index = $namespace::Indexes::$F as usize;
+                    let waker = unsafe { wake_array.child_guard_ptr(index).unwrap_unchecked() };
+
+                    // this is safe because we know index < LEN
+                    if unsafe { wake_array.take_woken(index).unwrap_unchecked() } {
+                        if let Poll::Ready(res) = $F.as_mut().poll(waker) {
+                            return Poll::Ready($OutputsName::$F(res));
+                        }
+                    }
+                )+
+
+                Poll::Pending
+            }
+        }
+
+        impl<$($F: futures_core::Future<LocalWaker>),+> Race for ($($F),+) {
+            type Output = $OutputsName<$($F::Output),*>;
+            type Future = $StructName<$($F),+>;
+
+            #[allow(non_snake_case)]
+            fn race(self) -> Self::Future {
+                let ($($F),+) = self;
+
+                $StructName {
+                    $($F: $F,)*
+                    wake_array: WakeArray::new(),
+                }
+            }
+        }
+    };
+}
+
+impl_race_tuple!(race2 Race2 RaceOutputs2 A B);
+impl_race_tuple!(race3 Race3 RaceOutputs3 A B C);
+impl_race_tuple!(race4 Race4 RaceOutputs4 A B C D);
+impl_race_tuple!(race5 Race5 RaceOutputs5 A B C D E);
+impl_race_tuple!(race6 Race6 RaceOutputs6 A B C D E F);
+impl_race_tuple!(race7 Race7 RaceOutputs7 A B C D E F G);
+impl_race_tuple!(race8 Race8 RaceOutputs8 A B C D E F G H);
+impl_race_tuple!(race9 Race9 RaceOutputs9 A B C D E F G H I);
+impl_race_tuple!(race10 Race10 RaceOutputs10 A B C D E F G H I J);
+impl_race_tuple!(race11 Race11 RaceOutputs11 A B C D E F G H I J K);
+impl_race_tuple!(race12 Race12 RaceOutputs12 A B C D E F G H I J K L);
+
+#[cfg(test)]
+mod tests {
+    use std::pin;
+
+    use futures_core::Future;
+    use futures_util::{dummy_guard, poll_fn};
+
+    use crate::wake::local_wake;
+
+    use super::*;
+
+    #[test]
+    fn counters() {
+        let mut x1 = 0;
+        let mut x2 = 0;
+        let f1 = poll_fn(|waker| {
+            local_wake(waker);
+            x1 += 1;
+            if x1 == 4 {
+                Poll::Ready(x1)
+            } else {
+                Poll::Pending
+            }
+        });
+        let f2 = poll_fn(|waker| {
+            local_wake(waker);
+            x2 += 1;
+            if x2 == 2 {
+                Poll::Ready(x2)
+            } else {
+                Poll::Pending
+            }
+        });
+        let guard = pin::pin!(dummy_guard());
+        let mut race = pin::pin!((f1, f2).race());
+        assert_eq!(race.as_mut().poll(guard.as_ref()), Poll::Pending);
+        assert_eq!(race.poll(guard.as_ref()), Poll::Ready(RaceOutputs2::B(2)));
+    }
+
+    #[test]
+    fn never_wake() {
+        let f1 = poll_fn(|_| Poll::<i32>::Pending);
+        let f2 = poll_fn(|_| Poll::<i32>::Pending);
+        let mut race = pin::pin!((f1, f2).race());
+        let guard = pin::pin!(dummy_guard());
+        for _ in 0..10 {
+            assert_eq!(race.as_mut().poll(guard.as_ref()), Poll::Pending);
+        }
+    }
+
+    #[test]
+    fn basic() {
+        let f1 = poll_fn(|_| Poll::Ready(1));
+        let f2 = poll_fn(|_| Poll::Ready(2));
+        let race = pin::pin!(f1.race_with(f2));
+        let guard = pin::pin!(dummy_guard());
+        assert_eq!(race.poll(guard.as_ref()), Poll::Ready(RaceOutputs2::A(1)));
+    }
+}
···
+//! Any interaction between an executor/reactor intended for `task::Future`
+//! and an executor/reactor intended for `bcsc::Future` is strictly unsound.
+
+use std::{
+    hint::unreachable_unchecked,
+    mem::ManuallyDrop,
+    pin::Pin,
+    ptr::NonNull,
+    task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
+};
+
+use futures_core::Wake;
+use lifetime_guard::{atomic_guard::AtomicValueGuard, guard::ValueGuard};
+
+pub type WakePtr = Option<NonNull<dyn Wake>>;
+pub type LocalWaker = ValueGuard<WakePtr>;
+pub type AtomicWaker = AtomicValueGuard<WakePtr>;
+
+static EVIL_VTABLE: RawWakerVTable = unsafe {
+    RawWakerVTable::new(
+        |_| unreachable_unchecked(),
+        |_| unreachable_unchecked(),
+        |_| unreachable_unchecked(),
+        |_| unreachable_unchecked(),
+    )
+};
+
+/// Coerces a pinned `ValueGuard` reference to a `Waker` for use in
+/// `core::future::Future`.
+///
+/// Any usage or storage of the resulting `Waker` is undefined behavior.
+pub unsafe fn guard_to_waker(guard: Pin<&LocalWaker>) -> ManuallyDrop<Waker> {
+    ManuallyDrop::new(unsafe {
+        Waker::from_raw(RawWaker::new(
+            guard.get_ref() as *const ValueGuard<WakePtr> as *const (),
+            &EVIL_VTABLE,
+        ))
+    })
+}
+
+pub unsafe fn atomic_guard_to_waker(
+    guard: Pin<&AtomicWaker>,
+) -> ManuallyDrop<Waker> {
+    ManuallyDrop::new(unsafe {
+        Waker::from_raw(RawWaker::new(
+            guard.get_ref() as *const AtomicValueGuard<WakePtr> as *const (),
+            &EVIL_VTABLE,
+        ))
+    })
+}
+
+/// Coerces a `Waker` back into a pinned `ValueGuard` reference.
+///
+/// This should only be used to undo the work of `guard_to_waker`.
+pub unsafe fn waker_to_guard(waker: &Waker) -> Pin<&LocalWaker> {
+    unsafe {
+        Pin::new_unchecked(&*(waker.data() as *const ValueGuard<WakePtr>))
+    }
+}
+
+pub unsafe fn waker_to_atomic_guard(waker: &Waker) -> Pin<&AtomicWaker> {
+    unsafe {
+        Pin::new_unchecked(&*(waker.data() as *const AtomicValueGuard<WakePtr>))
+    }
+}
+
+pub unsafe fn std_future_to_bespoke<F: core::future::Future>(
+    future: F,
+) -> impl futures_core::Future<LocalWaker, Output = F::Output> {
+    NormalFutureWrapper(future)
+}
+
+pub unsafe fn bespoke_future_to_std<F: futures_core::Future<LocalWaker>>(
+    future: F,
+) -> impl core::future::Future<Output = F::Output> {
+    BespokeFutureWrapper(future)
+}
+
+/// wraps `core::future::Future` in an impl of `bcsc::Future`
+#[repr(transparent)]
+pub struct NormalFutureWrapper<F: core::future::Future>(F);
+
+impl<F: core::future::Future> futures_core::Future<LocalWaker>
+    for NormalFutureWrapper<F>
+{
+    type Output = F::Output;
+
+    fn poll(
+        self: Pin<&mut Self>,
+        waker: Pin<&LocalWaker>,
+    ) -> Poll<Self::Output> {
+        unsafe {
+            self.map_unchecked_mut(|this| &mut this.0)
+                .poll(&mut Context::from_waker(&guard_to_waker(waker)))
+        }
+    }
+}
+
+/// wraps the custom `bcsc::Future` in an impl of `core::future::Future`
+#[repr(transparent)]
+pub struct BespokeFutureWrapper<F>(F)
+where
+    F: futures_core::Future<LocalWaker>;
+
+impl<F> core::future::Future for BespokeFutureWrapper<F>
+where
+    F: futures_core::Future<LocalWaker>,
+{
+    type Output = F::Output;
+
+    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
+        unsafe {
+            self.map_unchecked_mut(|this| &mut this.0)
+                .poll(waker_to_guard(cx.waker()))
+        }
+    }
+}
+
+#[cfg(test)]
+mod test {
+    use std::pin;
+
+    use super::*;
+    use futures_core::Wake;
+
+    #[derive(Debug)]
+    struct DummyWake;
+    impl Wake for DummyWake {
+        fn wake(&self) {}
+    }
+
+    #[test]
+    fn waker_conversion() {
+        let dummy = DummyWake;
+        let guard = pin::pin!(ValueGuard::new(NonNull::new(
+            &dummy as *const dyn Wake as *mut dyn Wake
+        )));
+        let waker = unsafe { guard_to_waker(guard.as_ref()) };
+        let guard = unsafe { waker_to_guard(&waker) };
+        assert_eq!(
+            guard.get().unwrap().as_ptr() as *const () as usize,
+            &dummy as *const _ as *const () as usize
+        );
+    }
+}
futures-core/src/lib.rs (+149 -39)

···
-use std::task::Poll;
+//! Redefinitions of `task::Future`, deliberately incompatible with it

-/// A task that can be woken.
-///
-/// This acts as a handle for a reactor to indicate when a `ScopedFuture` is
-/// once again ready to be polled.
-pub trait Wake<'scope> {
-    fn wake(&self);
-}
+use std::{
+    ops::{self, DerefMut},
+    pin::Pin,
+    task::Poll,
+};

-/// ScopedFuture represents a unit of asynchronous computation that must be
-/// polled by an external actor.
+/// A future represents an asynchronous computation obtained by use of `async`.
///
-/// Implementations access a context (`cx: &'scope mut dyn Wake`) to signal
-/// they are ready to resume execution.
+/// This future assumes a nonstandard Context, which is incompatible with
+/// executors or reactors made for `core::future::Future`. In the interest of
+/// safety, it has a dedicated type.
///
-/// A notable difference between `bcsc::ScopedFuture` and `core::task::Future`
-/// is that the latter cannot safely be run as a task by an executor without
-/// having a 'static lifetime. This is because there is no way for the compiler
-/// to guarantee the task doesn't outlive any data, as the executor is free to
-/// cancel it (or refuse to) whenever it wants.
+/// A future is a value that might not have finished computing yet. This kind of
+/// "asynchronous value" makes it possible for a thread to continue doing useful
+/// work while it waits for the value to become available.
///
-/// Additionally, because raw/unsafe implementations of `core::task::Waker`
-/// effectively do lifetime-erasure, stack-allocated futures cannot prevent
-/// unsound behavior from wakers outliving them (even `Forget` would not
-/// entirely fix this due to the api).
+/// # The `poll` method
///
-/// In order to avoid unsound behavior, executors must either use `Weak<Wake>`
-/// to safely lose access to tasks, or enforce tasks being stored in `static`
-/// pools of memory.
-///
-/// `ScopedFuture` instead leverages the borrow checker to allow for (less
-/// powerful) stack-based async execution.
+/// The core method of future, `poll`, *attempts* to resolve the future into a
+/// final value. This method does not block if the value is not ready. Instead,
+/// the current task is scheduled to be woken up when it's possible to make
+/// further progress by `poll`ing again. The `context` passed to the `poll`
+/// method can provide a [`Waker`], which is a handle for waking up the current
+/// task.
///
-/// some more:
-/// what occurs in `core::task::Future::poll()` is that the ref to cx.waker
-/// is cloned and stored by a reactor via some method.
+/// When using a future, you generally won't call `poll` directly, but instead
+/// `.await` the value.
///
-/// The waker is no longer tied to the actual future's lifetime, making it
-/// unsound not to have either static tasks or reference counting.
-/// To avoid this, we want to use a &'scope waker instead, with 1 waker / task.
-pub trait ScopedFuture<'scope> {
+/// [`Waker`]: crate::task::Waker
+#[must_use = "futures do nothing unless you `.await` or poll them"]
+#[diagnostic::on_unimplemented(
+    label = "`{Self}` is not a `bcsc::Future`",
+    message = "`{Self}` is not a `bcsc::Future`",
+    note = "If you are trying to await a `core::future::Future` from within a `bcsc::Future`, note that the systems are incompatible."
+)]
+pub trait Future<Waker> {
+    /// The type of value produced on completion.
    type Output;

-    /// as soon as poll is called, the struct becomes self-referential,
-    /// effectively pinned until dropped (or forgotten.... D;)
+    /// Attempts to resolve the future to a final value, registering
+    /// the current task for wakeup if the value is not yet available.
+    ///
+    /// # Return value
+    ///
+    /// This function returns:
+    ///
+    /// - [`Poll::Pending`] if the future is not ready yet
+    /// - [`Poll::Ready(val)`] with the result `val` of this future if it
+    ///   finished successfully.
+    ///
+    /// Once a future has finished, clients should not `poll` it again.
+    ///
+    /// When a future is not ready yet, `poll` returns `Poll::Pending` and
+    /// stores a clone of the [`Waker`] copied from the current [`Context`].
+    /// This [`Waker`] is then woken once the future can make progress.
+    /// For example, a future waiting for a socket to become
+    /// readable would call `.clone()` on the [`Waker`] and store it.
+    /// When a signal arrives elsewhere indicating that the socket is readable,
+    /// [`Waker::wake`] is called and the socket future's task is awoken.
+    /// Once a task has been woken up, it should attempt to `poll` the future
+    /// again, which may or may not produce a final value.
+    ///
+    /// Note that on multiple calls to `poll`, only the [`Waker`] from the
+    /// [`Context`] passed to the most recent call should be scheduled to
+    /// receive a wakeup.
+    ///
+    /// # Runtime characteristics
+    ///
+    /// Futures alone are *inert*; they must be *actively* `poll`ed to make
+    /// progress, meaning that each time the current task is woken up, it should
+    /// actively re-`poll` pending futures that it still has an interest in.
+    ///
+    /// The `poll` function is not called repeatedly in a tight loop -- instead,
+    /// it should only be called when the future indicates that it is ready to
+    /// make progress (by calling `wake()`). If you're familiar with the
+    /// `poll(2)` or `select(2)` syscalls on Unix it's worth noting that futures
+    /// typically do *not* suffer the same problems of "all wakeups must poll
+    /// all events"; they are more like `epoll(4)`.
+    ///
+    /// An implementation of `poll` should strive to return quickly, and should
+    /// not block. Returning quickly prevents unnecessarily clogging up
+    /// threads or event loops. If it is known ahead of time that a call to
+    /// `poll` may end up taking a while, the work should be offloaded to a
+    /// thread pool (or something similar) to ensure that `poll` can return
+    /// quickly.
+    ///
+    /// # Panics
+    ///
+    /// Once a future has completed (returned `Ready` from `poll`), calling its
+    /// `poll` method again may panic, block forever, or cause other kinds of
+    /// problems; the `Future` trait places no requirements on the effects of
+    /// such a call. However, as the `poll` method is not marked `unsafe`,
+    /// Rust's usual rules apply: calls must never cause undefined behavior
+    /// (memory corruption, incorrect use of `unsafe` functions, or the like),
+    /// regardless of the future's state.
+    ///
+    /// [`Poll::Ready(val)`]: Poll::Ready
+    /// [`Waker`]: crate::task::Waker
+    /// [`Waker::wake`]: crate::task::Waker::wake
+    fn poll(self: Pin<&mut Self>, waker: Pin<&Waker>) -> Poll<Self::Output>;
+}
+
+impl<Waker, F: ?Sized + Future<Waker> + Unpin> Future<Waker> for &mut F {
+    type Output = F::Output;
+
    fn poll(
-        self: &'scope Self,
-        wake: &'scope dyn Wake<'scope>,
-    ) -> Poll<Self::Output>;
+        mut self: Pin<&mut Self>,
+        waker: Pin<&Waker>,
+    ) -> Poll<Self::Output> {
+        F::poll(Pin::new(&mut **self), waker)
+    }
+}
+
+impl<Waker, P> Future<Waker> for Pin<P>
+where
+    P: ops::DerefMut<Target: Future<Waker>>,
+{
+    type Output = <<P as ops::Deref>::Target as Future<Waker>>::Output;
+
+    fn poll(self: Pin<&mut Self>, waker: Pin<&Waker>) -> Poll<Self::Output> {
+        <P::Target as Future<Waker>>::poll(self.as_deref_mut(), waker)
+    }
+}
+
+/// A future which tracks whether or not the underlying future
+/// should no longer be polled.
+///
+/// `is_terminated` will return `true` if a future should no longer be polled.
+/// Usually, this state occurs after `poll` (or `try_poll`) returned
+/// `Poll::Ready`. However, `is_terminated` may also return `true` if a future
+/// has become inactive and can no longer make progress and should be ignored
+/// or dropped rather than being `poll`ed again.
+pub trait FusedFuture<Waker>: Future<Waker> {
+    /// Returns `true` if the underlying future should no longer be polled.
+    fn is_terminated(&self) -> bool;
+}
+
+impl<Waker, F: FusedFuture<Waker> + ?Sized + Unpin> FusedFuture<Waker>
+    for &mut F
+{
+    fn is_terminated(&self) -> bool {
+        <F as FusedFuture<Waker>>::is_terminated(&**self)
+    }
+}
+
+impl<Waker, P> FusedFuture<Waker> for Pin<P>
+where
+    P: DerefMut + Unpin,
+    P::Target: FusedFuture<Waker>,
+{
+    fn is_terminated(&self) -> bool {
+        <P::Target as FusedFuture<Waker>>::is_terminated(&**self)
+    }
+}
+
+/// temporary trait until `Fn::call` is stabilized
+pub trait Wake {
+    fn wake(&self);
}
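
To make the bespoke trait concrete, here is a hedged sketch (not part of the crate) of the simplest possible implementor, a future that is immediately ready, written against the `Future<Waker>` signature defined above:

```rust
use std::pin::Pin;
use std::task::Poll;

/// Immediately ready with a value; a bespoke analogue of `future::ready`.
struct Ready<T>(Option<T>);

impl<Waker, T> futures_core::Future<Waker> for Ready<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, _waker: Pin<&Waker>) -> Poll<T> {
        // `Ready` holds no self-references, so this projection is sound.
        let this = unsafe { self.get_unchecked_mut() };
        Poll::Ready(this.0.take().expect("`Ready` polled after completion"))
    }
}
```

Since it completes on the first poll, it never touches the waker; combinators like `Join` would see it as ready immediately.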
···
+# Lifetime Guard
+
+`lifetime-guard` provides `ValueGuard` and `RefGuard` structs to allow for
+weak references to interior mutable values, similar to a singular pair of
+`Rc` and `Weak`, but without heap allocation.
+
+For parallelism, it provides `AtomicValueGuard` and `AtomicRefGuard`, which
+implement `Send`.
+
+## Example Usage
+
+```rust
+use std::pin;
+use lifetime_guard::guard::*;
+
+let weak = pin::pin!(RefGuard::new());
+{
+    let strong = pin::pin!(ValueGuard::new(0));
+    weak.as_ref().register(strong.as_ref());
+
+    assert_eq!(strong.get(), 0);
+    assert_eq!(weak.get(), Some(0));
+
+    strong.as_ref().set(1);
+    assert_eq!(strong.get(), 1);
+    assert_eq!(weak.get(), Some(1));
+}
+assert_eq!(weak.get(), None);
+```
+
+## Safety
+
+You *may not* leak any instance of either `ValueGuard` or `RefGuard` on the
+stack using `mem::forget()` or any other mechanism that causes their
+contents to be overwritten without `Drop::drop()` running.
+Doing so creates unsoundness that will likely lead to dereferencing a null
+pointer. See the
+[Forget marker trait](https://github.com/rust-lang/rfcs/pull/3782) rfc for
+progress on making interfaces that rely on not being leaked sound.
+
+Note that it is sound to leak `ValueGuard` and `RefGuard` to the heap using
+methods such as `Box::leak()`, because heap allocated data will never be
+overwritten if it is never freed.
+
+The test cases for this library have been verified to not exhibit undefined
+behavior using [miri](https://github.com/rust-lang/miri).
lifetime-guard/src/atomic_guard.rs (+255)

···
+use core::{cell::Cell, marker::PhantomPinned, pin::Pin, ptr::NonNull};
+
+use critical_section::Mutex;
+
+struct RawValueGuard<T> {
+    /// Contains the value being immutably accessed by the `AtomicRefGuard`
+    /// and mutably accessed by `Self`.
+    ///
+    /// This needs to be a cell so that the original immutable alias
+    /// to `Self` (given to the `AtomicRefGuard`) can continue to be referenced
+    /// after being invalidated by the creation of a mutable alias for
+    /// `Self::set`.
+    data: Cell<T>,
+    /// A pointer to an `AtomicRefGuard` with read access to `data`, used to
+    /// invalidate that guard when `Self` is dropped.
+    ref_guard: Cell<Option<NonNull<AtomicRefGuard<T>>>>,
+}
+
+/// Strong guard for granting read access to a single interior mutable value to
+/// an [`AtomicRefGuard`].
+///
+/// An `AtomicValueGuard`:`AtomicRefGuard` relationship is exclusive, and
+/// behaves similarly to a single `Rc` and `Weak` pair, but notably does not
+/// require heap allocation.
+///
+/// # Safety
+///
+/// This struct *must* not be leaked on the stack using `mem::forget` or any
+/// other mechanism that causes the contents of `Self` to be overwritten
+/// without `Drop::drop()` running.
+/// Doing so creates unsoundness that will likely lead to dereferencing a null
+/// pointer.
+///
+/// Note that it is sound to leak `Self` to the heap using methods including
+/// `Box::leak()`, because heap allocated data will never be overwritten if it
+/// is never freed.
+pub struct AtomicValueGuard<T> {
+    /// A mutex is unfortunately necessary because the replace operation
+    /// requires confirming whether the ptr is valid, meaning it's a two
+    /// instruction process and can't be done with atomic compare-and-swap
+    /// instructions.
+    mutex: Mutex<RawValueGuard<T>>,
+    _marker: PhantomPinned,
+}
+
+impl<T> AtomicValueGuard<T> {
+    /// Creates a new `AtomicValueGuard` containing `data`.
+    #[inline]
+    pub fn new(data: T) -> Self {
+        Self {
+            mutex: Mutex::new(RawValueGuard {
+                data: Cell::new(data),
+                ref_guard: Cell::new(None),
+            }),
+            _marker: PhantomPinned,
+        }
+    }
+
+    /// Sets the internal value stored by `Self`.
+    #[inline]
+    pub fn set(&self, value: T) {
+        critical_section::with(|cs| self.mutex.borrow(cs).data.set(value));
+    }
+
+    #[inline]
+    fn invalidate_ref_guard(&self) {
+        critical_section::with(|cs| self.mutex.borrow(cs).ref_guard.set(None));
+    }
+
+    #[inline]
+    fn replace_ref_guard(&self, ref_guard: Option<NonNull<AtomicRefGuard<T>>>) {
+        critical_section::with(|cs| {
+            if let Some(guard) =
+                self.mutex.borrow(cs).ref_guard.replace(ref_guard)
+            {
+                unsafe { (*guard.as_ptr()).invalidate_value_guard() }
+            }
+        });
+    }
+}
+
+impl<T: Copy> AtomicValueGuard<T> {
+    /// Gets a copy of the value stored inside this `AtomicValueGuard`.
+    #[inline]
+    pub fn get(&self) -> T {
+        critical_section::with(|cs| self.mutex.borrow(cs).data.get())
+    }
+}
+
+impl<T> Drop for AtomicValueGuard<T> {
+    #[inline]
+    fn drop(&mut self) {
+        self.replace_ref_guard(None);
+    }
+}
+
+/// Weak guard for acquiring read-only access to an `AtomicValueGuard`'s value.
+///
+/// Provides [`AtomicRefGuard::register()`](Self::register) to register an
+/// `AtomicValueGuard` to `Self` and vice versa.
+///
+/// # Safety
+///
+/// This struct *must* not be leaked on the stack using `mem::forget` or any
+/// other mechanism that causes the contents of `Self` to be overwritten
+/// without `Drop::drop()` running.
+/// Doing so creates unsoundness that will likely lead to dereferencing a null
+/// pointer.
+///
+/// Note that it is sound to leak `Self` to the heap using methods including
+/// `Box::leak()`, because heap allocated data will never be overwritten if it
+/// is never freed.
+pub struct AtomicRefGuard<T> {
+    value_guard: Cell<Option<NonNull<AtomicValueGuard<T>>>>,
+    _marker: PhantomPinned,
+}
+
+impl<T> AtomicRefGuard<T> {
+    /// Creates a new `AtomicRefGuard` with no reference to an
+    /// `AtomicValueGuard`.
+    #[inline]
+    pub fn new() -> Self {
+        Self {
+            value_guard: Cell::new(None),
+            _marker: PhantomPinned,
+        }
+    }
+
+    #[inline]
+    fn invalidate_value_guard(&self) {
+        self.value_guard.set(None);
+    }
+
+    #[inline]
+    fn replace_value_guard(
+        &self,
+        value_guard: Option<NonNull<AtomicValueGuard<T>>>,
+    ) {
+        if let Some(guard) = self.value_guard.replace(value_guard) {
+            unsafe { (*guard.as_ptr()).invalidate_ref_guard() }
+        }
+    }
+
+    /// Binds a pinned `value_guard` to `self`.
+    ///
+    /// This means they will reference each other, and will invalidate their
+    /// references to each other when dropped.
+    ///
+    /// This method also invalidates the existing references held by the
+    /// now-replaced referents of `self` and `value_guard`, to avoid
+    /// dangling pointers.
+    #[inline]
+    pub fn register<'a>(
+        self: Pin<&'a AtomicRefGuard<T>>,
+        value_guard: Pin<&'a AtomicValueGuard<T>>,
+    ) {
+        value_guard.replace_ref_guard(Some(self.get_ref().into()));
+        self.replace_value_guard(Some(value_guard.get_ref().into()));
+    }
+}
+
+impl<T: Copy> AtomicRefGuard<T> {
+    /// Gets a copy of the value stored inside the `AtomicValueGuard` this
+    /// `AtomicRefGuard` references.
+    #[inline]
+    pub fn get(&self) -> Option<T> {
+        self.value_guard
+            .get()
+            .map(|guard| unsafe { (*guard.as_ptr()).get() })
+    }
+}
+
+impl<T> Drop for AtomicRefGuard<T> {
+    #[inline]
+    fn drop(&mut self) {
+        self.replace_value_guard(None);
+    }
+}
+
+impl<T> Default for AtomicRefGuard<T> {
+    #[inline]
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+#[cfg(test)]
+mod test {
+    use core::{mem, pin};
+
+    extern crate alloc;
+
+    use super::*;
+
+    #[test]
+    fn basic() {
+        let weak = pin::pin!(AtomicRefGuard::new());
+        {
+            let strong = pin::pin!(AtomicValueGuard::new(2));
+            weak.as_ref().register(strong.as_ref());
+
+            assert_eq!(strong.get(), 2);
+            assert_eq!(weak.get(), Some(2));
+
+            strong.as_ref().set(3);
+            assert_eq!(strong.get(), 3);
+            assert_eq!(weak.get(), Some(3));
+        }
+
+        assert_eq!(weak.get(), None);
+    }
+
+    #[test]
+    fn multiple_registrations() {
+        let weak1 = pin::pin!(AtomicRefGuard::new());
+        let weak2 = pin::pin!(AtomicRefGuard::new());
+        {
+            let strong = pin::pin!(AtomicValueGuard::new(2));
+            weak1.as_ref().register(strong.as_ref());
+
+            assert_eq!(strong.get(), 2);
+            assert_eq!(weak1.get(), Some(2));
+
+            strong.as_ref().set(3);
+            assert_eq!(strong.get(), 3);
+            assert_eq!(weak1.get(), Some(3));
+
+            // register the next ptr; this should invalidate the previous weak ref (weak1)
+            weak2.as_ref().register(strong.as_ref());
+            assert_eq!(weak1.get(), None);
+            assert_eq!(weak1.value_guard.get(), None);
+
+            assert_eq!(strong.get(), 3);
+            assert_eq!(weak2.get(), Some(3));
+
+            strong.as_ref().set(4);
+            assert_eq!(strong.get(), 4);
+            assert_eq!(weak2.get(), Some(4));
+        }
+
+        assert_eq!(weak1.get(), None);
+        assert_eq!(weak2.get(), None);
+    }
+
+    #[test]
+    #[cfg_attr(miri, ignore)]
+    fn safe_leak() {
+        let strong = alloc::boxed::Box::pin(AtomicValueGuard::new(10));
+        let weak = pin::pin!(AtomicRefGuard::new());
+        weak.as_ref().register(strong.as_ref());
+
+        // strong is now an AtomicValueGuard on the heap that will never be freed;
+        // this is sound because it will never be overwritten
+        mem::forget(strong);
+
+        assert_eq!(weak.get(), Some(10));
+    }
+}
lifetime-guard/src/guard.rs (+237)

···
+use core::{cell::Cell, marker::PhantomPinned, pin::Pin, ptr::NonNull};
+
+/// Strong guard for granting read access to a single interior mutable value to
+/// a [`RefGuard`].
+///
+/// A `ValueGuard`:`RefGuard` relationship is exclusive, and behaves similarly
+/// to a single `Rc` and `Weak` pair, but notably does not require heap
+/// allocation.
+///
+/// # Safety
+///
+/// This struct *must* not be leaked on the stack using `mem::forget` or any
+/// other mechanism that causes the contents of `Self` to be overwritten
+/// without `Drop::drop()` running.
+/// Doing so creates unsoundness that will likely lead to dereferencing a null
+/// pointer.
+///
+/// Note that it is sound to leak `Self` to the heap using methods including
+/// `Box::leak()`, because heap allocated data will never be overwritten if it
+/// is never freed.
+pub struct ValueGuard<T> {
+    /// Contains the value being immutably accessed by the `RefGuard` and
+    /// mutably accessed by `Self`.
+    ///
+    /// This needs to be a cell so that the original immutable alias
+    /// to `Self` (given to the `RefGuard`) can continue to be referenced after
+    /// being invalidated by the creation of a mutable alias for `Self::set`.
+    data: Cell<T>,
+    /// A pointer to a `RefGuard` with read access to `data`, used to
+    /// invalidate that `RefGuard` when `Self` is dropped.
+    ref_guard: Cell<Option<NonNull<RefGuard<T>>>>,
+    _marker: PhantomPinned,
+}
+
+impl<T> ValueGuard<T> {
+    /// Creates a new `ValueGuard` containing `data`.
+    #[inline]
+    pub fn new(data: T) -> Self {
+        Self {
+            data: Cell::new(data),
+            ref_guard: Cell::new(None),
+            _marker: PhantomPinned,
+        }
+    }
+
+    /// Sets the internal value stored by `Self`.
+    #[inline]
+    pub fn set(&self, value: T) {
+        self.data.set(value);
+    }
+
+    #[inline]
+    fn invalidate_ref_guard(&self) {
+        self.ref_guard.set(None);
+    }
+
+    #[inline]
+    fn replace_ref_guard(&self, ref_guard: Option<NonNull<RefGuard<T>>>) {
+        if let Some(guard) = self.ref_guard.replace(ref_guard) {
+            unsafe { (*guard.as_ptr()).invalidate_value_guard() };
+        }
+    }
+}
+
+impl<T: Copy> ValueGuard<T> {
+    /// Gets a copy of the value stored inside this `ValueGuard`.
+    #[inline]
+    pub fn get(&self) -> T {
+        self.data.get()
+    }
+}
+
+impl<T> Drop for ValueGuard<T> {
+    #[inline]
+    fn drop(&mut self) {
+        self.replace_ref_guard(None);
+    }
+}
+
+/// Weak guard for acquiring read-only access to a `ValueGuard`'s value.
+///
+/// Provides [`RefGuard::register()`](Self::register) to register a `ValueGuard`
+/// to `Self` and vice versa.
+///
+/// # Safety
+///
+/// This struct *must* not be leaked on the stack using `mem::forget` or any
+/// other mechanism that causes the contents of `Self` to be overwritten
+/// without `Drop::drop()` running.
+/// Doing so creates unsoundness that will likely lead to dereferencing a null
+/// pointer.
+///
+/// Note that it is sound to leak `Self` to the heap using methods including
+/// `Box::leak()`, because heap allocated data will never be overwritten if it
+/// is never freed.
+pub struct RefGuard<T> {
+    value_guard: Cell<Option<NonNull<ValueGuard<T>>>>,
+    _marker: PhantomPinned,
+}
+
+impl<T> RefGuard<T> {
+    /// Creates a new `RefGuard` with no reference to a `ValueGuard`.
+    #[inline]
+    pub fn new() -> Self {
+        Self {
+            value_guard: Cell::new(None),
+            _marker: PhantomPinned,
+        }
+    }
+
+    #[inline]
+    fn invalidate_value_guard(&self) {
+        self.value_guard.set(None);
+    }
+
+    #[inline]
+    fn replace_value_guard(&self, value_guard: Option<NonNull<ValueGuard<T>>>) {
+        if let Some(guard) = self.value_guard.replace(value_guard) {
+            unsafe { (*guard.as_ptr()).invalidate_ref_guard() }
+        }
+    }
+
+    /// Binds a pinned `value_guard` to `self`.
+    ///
+    /// This means they will reference each other, and will invalidate their
+    /// references to each other when dropped.
+    ///
+    /// This method also invalidates the existing references held by the
+    /// now-replaced referents of `self` and `value_guard`, to avoid
+    /// dangling pointers.
+    #[inline]
+    pub fn register<'a>(
+        self: Pin<&'a RefGuard<T>>,
+        value_guard: Pin<&'a ValueGuard<T>>,
+    ) {
+        value_guard.replace_ref_guard(Some(self.get_ref().into()));
+        self.replace_value_guard(Some(value_guard.get_ref().into()));
+    }
+}
+
+impl<T: Copy> RefGuard<T> {
+    /// Gets a copy of the value stored inside the `ValueGuard` this `RefGuard`
+    /// references.
+    #[inline]
+    pub fn get(&self) -> Option<T> {
+        self.value_guard
+            .get()
+            .map(|guard| unsafe { (*guard.as_ptr()).get() })
+    }
+}
+
+impl<T> Drop for RefGuard<T> {
+    #[inline]
+    fn drop(&mut self) {
+        self.replace_value_guard(None);
+    }
+}
+
+impl<T: Copy> Default for RefGuard<T> {
+    #[inline]
+    fn default() -> Self {
+        Self::new()
+    }
+}
167167+mod test {
168168+ use core::{mem, pin};
169169+170170+ extern crate alloc;
171171+172172+ use super::*;
173173+174174+ #[test]
175175+ fn basic() {
176176+ let weak = pin::pin!(RefGuard::new());
177177+ {
178178+ let strong = pin::pin!(ValueGuard::new(2));
179179+ weak.as_ref().register(strong.as_ref());
180180+181181+ assert_eq!(strong.get(), 2);
182182+ assert_eq!(weak.get(), Some(2));
183183+184184+ strong.as_ref().set(3);
185185+ assert_eq!(strong.get(), 3);
186186+ assert_eq!(weak.get(), Some(3));
187187+ }
188188+189189+ assert_eq!(weak.get(), None);
190190+ }
191191+192192+ #[test]
193193+ fn multiple_registrations() {
194194+ let weak1 = pin::pin!(RefGuard::new());
195195+ let weak2 = pin::pin!(RefGuard::new());
196196+ {
197197+ let strong = pin::pin!(ValueGuard::new(2));
198198+ weak1.as_ref().register(strong.as_ref());
199199+200200+ assert_eq!(strong.get(), 2);
201201+ assert_eq!(weak1.get(), Some(2));
202202+203203+ strong.as_ref().set(3);
204204+ assert_eq!(strong.get(), 3);
205205+ assert_eq!(weak1.get(), Some(3));
206206+207207+ // register next ptr, should invalidate previous weak ref (weak1)
208208+ weak2.as_ref().register(strong.as_ref());
209209+ assert_eq!(weak1.get(), None);
210210+ assert_eq!(weak1.value_guard.get(), None);
211211+212212+ assert_eq!(strong.get(), 3);
213213+ assert_eq!(weak2.get(), Some(3));
214214+215215+ strong.as_ref().set(4);
216216+ assert_eq!(strong.get(), 4);
217217+ assert_eq!(weak2.get(), Some(4));
218218+ }
219219+220220+ assert_eq!(weak1.get(), None);
221221+ assert_eq!(weak2.get(), None);
222222+ }
223223+224224+ #[test]
225225+ #[cfg_attr(miri, ignore)]
226226+ fn safe_leak() {
227227+ let strong = alloc::boxed::Box::pin(ValueGuard::new(10));
228228+ let weak = pin::pin!(RefGuard::new());
229229+ weak.as_ref().register(strong.as_ref());
230230+231231+ // strong is now a ValueGuard on the heap that will never be freed
232232+ // this is sound because it will never be overwritten
233233+ mem::forget(strong);
234234+235235+ assert_eq!(weak.get(), Some(10));
236236+ }
237237+}
lifetime-guard/src/lib.rs (+8)

···
+#![doc = include_str!("../README.md")]
+#![no_std]
+
+// pub mod base;
+pub mod guard;
+
+#[cfg(feature = "atomics")]
+pub mod atomic_guard;