 //! having the `k_work` embedded in their structure, and Zephyr schedules the work when the given
 //! reason happens.
 //!
-//! Zephyr's work queues can be used in different ways:
-//!
-//! - Work can be scheduled as needed. For example, an IRQ handler can queue a work item to process
-//!   data it has received from a device.
-//! - Work can be scheduled periodically.
-//!
-//! As most C use of Zephyr statically allocates things like work, these are typically rescheduled
-//! when the work is complete. The work queue scheduling functions are designed, and intended, for
-//! a given work item to be able to reschedule itself, and such usage is common.
-//!
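The rescheduling pattern can be illustrated without any Zephyr API at all. The following toy model is purely hypothetical (the `ToyQueue` type and its methods are invented for this sketch and are not part of the crate or of Zephyr): each handler reports whether it wants to run again, and the queue resubmits it if so, mirroring how C work handlers commonly resubmit themselves.

```rust
use std::collections::VecDeque;

/// A toy, host-side model of a work queue; not Zephyr's API.
struct ToyQueue {
    /// Each handler returns `true` to ask to be rescheduled.
    items: VecDeque<Box<dyn FnMut() -> bool>>,
}

impl ToyQueue {
    fn new() -> Self {
        ToyQueue { items: VecDeque::new() }
    }

    /// Queue a work item "as needed", e.g. after an interrupt has fired.
    fn submit(&mut self, item: Box<dyn FnMut() -> bool>) {
        self.items.push_back(item);
    }

    /// Drain the queue, resubmitting any handler that asks to run again.
    fn run(&mut self) {
        while let Some(mut item) = self.items.pop_front() {
            if item() {
                self.items.push_back(item);
            }
        }
    }
}

fn main() {
    let mut queue = ToyQueue::new();
    let mut remaining = 3;
    // This item reschedules itself until it has run three times.
    queue.submit(Box::new(move || {
        remaining -= 1;
        remaining > 0
    }));
    queue.run();
}
```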
-//! ## Waitable events
-//!
-//! Triggerable work items can be triggered to wake on any set of the following:
-//!
-//! - A signal. `k_poll_signal` is a type used just for waking work items. This works similarly to
-//!   a binary semaphore, but is lighter weight for use just by this mechanism.
-//! - A semaphore. Work can be scheduled to run when a `k_sem` is available. Since
-//!   [`sys::sync::Semaphore`] is built on top of `k_sem`, the "take" operation for these semaphores
-//!   can be a trigger source.
-//! - A queue/FIFO/LIFO. The queue is used to implement [`sync::channel`] and thus any blocking
-//!   operation on queues can be a trigger source.
-//! - Message Queues and Pipes. Although not yet provided in Rust, these can also be a source of
-//!   triggering.
-//!
-//! It is important to note that the trigger source may not necessarily still be available by the
-//! time the work item is actually run. This depends on the design of the system. If there is only
-//! a single waiter, then it will still be available (unlike a `CondVar`, the mechanism does not
-//! have spurious triggers).
-//!
-//! Also note, specifically, that Zephyr Mutexes cannot be used as a trigger source. That means
-//! that locking a [`sync::Mutex`] shouldn't be used within work items. There is another Mutex,
-//! [`kio::sync::Mutex`], a simplified Mutex implemented with a Semaphore, which can be used from
-//! work-queue based code.
-//!
-//! # Rust `Future`
-//!
-//! The Rust language also has built-in support for something rather similar to Zephyr work queues.
-//! The main user-visible type behind this is [`Future`]. The Rust compiler allows functions, as
-//! well as code blocks, to be declared as `async`. Such code, instead of directly returning the
-//! given data, returns a `Future` that has that data as its output type. What this does is
-//! essentially capture what would be stored on the stack to maintain the state of that code into
-//! the data of the `Future` itself. For Rust code running on a typical OS, a crate such as
-//! [Tokio](https://tokio.rs/) provides what is known as an executor, which implements the
-//! scheduling of these `Future`s, and provides equivalent primitives for Mutexes, Semaphores, and
-//! channels for this code to use for synchronization.
-//!
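To make the `async` transformation concrete, here is a small, self-contained sketch in plain Rust (no Zephyr types; the function names are invented for the example). Calling an `async fn` does not run its body; it only produces a `Future` whose output type is the declared return type.

```rust
use core::future::Future;

// Calling this does not execute the body; it returns a Future whose Output is
// `u32`. The state the body needs is captured inside that future value.
async fn read_value() -> u32 {
    40 + 2
}

// The same thing with the return type spelled out explicitly. Nothing runs
// until an executor polls the returned future.
fn make_future() -> impl Future<Output = u32> {
    read_value()
}
```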
-//! It is notable that the Zephyr implementation of `Future` operates under a fairly simple
-//! assumption of how this scheduling will work. Each future is invoked with a Context, which
-//! contains a dynamic `Waker` that can be invoked to schedule this Future to run again. This means
-//! that the primitives are typically implemented on top of OS primitives, where each manages its
-//! own wake queue to determine which work needs to be woken.
-//!
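A minimal sketch of that `Waker` pattern, using only types from `core` (the `WaitForFlag` type is invented for illustration): the future stores the `Waker` from its `Context`, and whatever produces the event later calls `wake()` so the executor polls the future again.

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, Waker};

/// Illustrative only: a future that completes once an event source sets a flag.
struct WaitForFlag {
    ready: bool,
    waker: Option<Waker>,
}

impl Future for WaitForFlag {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.ready {
            Poll::Ready(())
        } else {
            // Remember who to wake; the event source calls `wake()` later,
            // which asks the executor to poll this future again.
            self.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

impl WaitForFlag {
    /// Called by the event source when the awaited condition becomes true.
    fn trigger(&mut self) {
        self.ready = true;
        if let Some(waker) = self.waker.take() {
            waker.wake();
        }
    }
}
```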
-//! # Bringing it together
-//!
-//! There are a couple of issues that need to be addressed to bring work-queue support to Rust.
-//! First is the question of how they will be used. On the one hand, there are users that will
-//! definitely want to make use of `async` in Rust, and it is important to implement an executor,
-//! similar to Tokio, that will schedule this `async` code. On the other hand, it will likely be
-//! common for others to want to make more direct use of the work queues themselves. As such, these
-//! users will want more direct access to scheduling and triggering of work.
-//!
-//! ## Future erasure
-//!
-//! One challenge with using `Future` for work is that the `Future` type intentionally erases the
-//! details of scheduling work, reducing it down to a single `Waker`, which, similar to a trait,
-//! has a `wake` method to cause the executor to schedule this work. Unfortunately, this simple
-//! mechanism makes it challenging to take advantage of Zephyr's existing mechanisms for
-//! automatically triggering work based on primitives.
-//!
-//! As such, what we do is have a structure `Work` that contains both a `k_work_poll` and a
-//! `Context` from Rust. Our handler can use a mechanism similar to C's `CONTAINER_OF` macro to
-//! recover this outer structure.
-//!
-//! There is some extra complexity to this process, as the `Future` we store with the work is
-//! `?Sized`, since each particular Future will have a different size. As such, it is not possible
-//! to recover the full work type. To work around this, we have a `Sized` struct at the beginning
-//! of this structure, which, along with judicious use of `#[repr(C)]`, allows us to recover this
-//! fixed data. This structure contains the information needed to reschedule the work, based on
-//! what is needed.
-//!
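The layout trick can be sketched in a self-contained way (the names below are invented for illustration and are not the crate's real types): the fixed-size header is the first field of a `#[repr(C)]` struct, so the raw-work pointer handed to the handler can be turned back into a header pointer with `CONTAINER_OF`-style arithmetic, even though the concrete future type, and therefore the full struct's size, is unknown at that point.

```rust
use core::mem::offset_of;

/// Stand-in for the embedded `k_work_poll` that Zephyr manipulates.
#[repr(C)]
struct RawPollWork {
    _reserved: [usize; 4],
}

/// The fixed-size, `Sized` header: the raw work plus rescheduling information.
#[repr(C)]
struct FixedHeader {
    raw: RawPollWork,
    resubmit: bool,
}

/// The full work item. Because of `#[repr(C)]`, `header` sits at offset 0, so
/// a `*mut FixedHeader` also points at the start of the whole allocation,
/// whatever the size of `future` turns out to be.
#[repr(C)]
struct WorkItem<F: ?Sized> {
    header: FixedHeader,
    future: F,
}

/// Recover the header from the raw-work pointer the handler receives,
/// mirroring C's CONTAINER_OF.
///
/// Safety: `raw` must point at the `raw` field of a live `FixedHeader`.
unsafe fn header_from_raw(raw: *mut RawPollWork) -> *mut FixedHeader {
    unsafe {
        raw.cast::<u8>()
            .sub(offset_of!(FixedHeader, raw))
            .cast::<FixedHeader>()
    }
}
```

Note that `offset_of!` has been stable since Rust 1.77; older code would hand-roll the same computation.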
-//! ## Ownership
-//!
-//! The remaining challenge with implementing `k_work` for Rust is that of ownership. The model
-//! taken here is that the work items are held in a `Box` that is effectively owned by the work
-//! itself. When the work item is scheduled to Zephyr, ownership of that box is effectively handed
-//! off to C, and then when the work item is called, the Box is reconstructed. This repeats until
-//! the work is no longer needed (e.g. when a [`Future::poll`] returns `Ready`), at which point the
-//! work will be dropped.
-//!
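That hand-off can be sketched with plain `Box::into_raw` and `Box::from_raw`. This is a simplified illustration of the pattern, not the crate's actual submit path; `MyWork` and the function names are invented.

```rust
struct MyWork {
    counter: u32,
}

/// Give ownership of the boxed work to the C side: the Box becomes a raw
/// pointer that Zephyr stores until the handler runs.
fn hand_off_to_c(work: Box<MyWork>) -> *mut MyWork {
    Box::into_raw(work)
}

/// The work handler: reconstruct the Box so Rust owns the item again while it
/// runs.
///
/// Safety: `ptr` must have come from `hand_off_to_c` and not been used since.
unsafe fn handler(ptr: *mut MyWork) {
    let mut work = unsafe { Box::from_raw(ptr) };
    work.counter += 1;
    if work.counter < 10 {
        // Not finished (think: the future returned `Pending`), so hand
        // ownership back to C by resubmitting.
        let _raw = hand_off_to_c(work);
    }
    // Otherwise the Box is dropped here, which mirrors what happens once the
    // future's poll returns `Ready`.
}
```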
-//! There are two common ways the lifecycle of work can be managed in an embedded system:
-//!
-//! - A set of `Future`s is allocated once at the start, and these never return a value. Futures
-//!   inside of this work (which correspond to `.await` in async code) can have limited lifetimes
-//!   and return values, but the main loops will not return values or be dropped. Embedded Futures
-//!   will typically not be boxed.
-//! - Work will be dynamically created based on system need, with threads using [`kio::spawn`] to
-//!   create additional work (or creating the `Work` items directly). These can use [`join`] or
-//!   [`join_async`] to wait for the results.
-//!
-//! One consequence of the ownership being passed through to C code is that if the work cancellation
-//! mechanism is used on a work queue, the work items themselves will be leaked.
-//!
-//! The Future mechanism in Rust relies on the use of [`Pin`] to ensure that work items are not
-//! moved. We have the same requirements here, although currently the pin is only applied while
-//! the future is run, and we do not expose the `Box` that we use, thus preventing moves of the
-//! work items.
-//!
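What "the pin is only applied while the future is run" can look like is sketched below; this is illustrative only, and the crate's internals may differ. The future lives in a `Box` that is never handed out, so it is sound to pin it just for the duration of each poll.

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

/// Poll a boxed future, pinning it only for the duration of the call.
///
/// The `Pin::new_unchecked` is sound only because the surrounding code never
/// moves the future out of its Box, which is the property described above
/// (the Box itself is never exposed).
fn poll_boxed<F: Future>(fut: &mut Box<F>, cx: &mut Context<'_>) -> Poll<F::Output> {
    let pinned = unsafe { Pin::new_unchecked(&mut **fut) };
    pinned.poll(cx)
}
```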
-//! ## The work queues themselves
+//! At this point, this code supports the simple work queues, with [`Work`] items.
 //!
 //! Work Queues should be declared with the `define_work_queue!` macro; this macro requires the name
 //! of the symbol for the work queue, the stack size, and then zero or more optional arguments,
@@ -260,8 +153,10 @@ impl<const SIZE: usize> WorkQueueDecl<SIZE> {
 /// A running work queue thread.
 ///
 /// This must be declared statically, and initialized once. Please see the macro
-/// [`define_work_queue`] which declares this with a [`StaticWorkQueue`] to help with the
+/// [`define_work_queue`] which declares this with a [`WorkQueue`] to help with the
 /// association with a stack, and making sure the queue is only started once.
+///
+/// [`define_work_queue`]: crate::define_work_queue
 pub struct WorkQueue {
     #[allow(dead_code)]
     item: UnsafeCell<k_work_q>,