diff --git a/src/ch06-03-if-let.md b/src/ch06-03-if-let.md
index de9602ea8e..9d76c4af32 100644
--- a/src/ch06-03-if-let.md
+++ b/src/ch06-03-if-let.md
@@ -107,7 +107,7 @@ To make this common pattern nicer to express, Rust has `let`-`else`. The
 `let`-`else` syntax takes a pattern on the left side and an expression on the
 right, very similar to `if let`, but it does not have an `if` branch, only an
 `else` branch. If the pattern matches, it will bind the value from the pattern
-in the outer scope. If the pattern does *not* match, the program will flow into
+in the outer scope. If the pattern does _not_ match, the program will flow into
 the `else` arm, which must return from the function.
 
 In Listing 6-9, you can see how Listing 6-8 looks when using `let`-`else` in
diff --git a/src/ch17-00-async-await.md b/src/ch17-00-async-await.md
index 1a65613950..2598b10e43 100644
--- a/src/ch17-00-async-await.md
+++ b/src/ch17-00-async-await.md
@@ -5,8 +5,8 @@ be nice if we could do something else while we are waiting for those
 long-running processes to complete. Modern computers offer two techniques for
 working on more than one operation at a time: parallelism and concurrency. Once
 we start writing programs that involve parallel or concurrent operations,
-though, we quickly encounter new challenges inherent to *asynchronous
-programming*, where operations may not finish sequentially in the order they
+though, we quickly encounter new challenges inherent to _asynchronous
+programming_, where operations may not finish sequentially in the order they
 were started. This chapter builds on Chapter 16’s use of threads for parallelism
 and concurrency by introducing an alternative approach to asynchronous
 programming: Rust’s Futures, Streams, the `async` and `await` syntax that
diff --git a/src/ch17-01-futures-and-syntax.md b/src/ch17-01-futures-and-syntax.md
index 35b22ac17d..065b6d8f50 100644
--- a/src/ch17-01-futures-and-syntax.md
+++ b/src/ch17-01-futures-and-syntax.md
@@ -354,7 +354,7 @@ futures passed to it finishes first.
 
 > Note: Under the hood, `race` is built on a more general function, `select`,
 > which you will encounter more often in real-world Rust code. A `select`
-> function can do a lot of things that the `trpl::race` function can’t, but it
+> function can do a lot of things that the `trpl::race` function can’t, but it
 > also has some additional complexity that we can skip over for now.
 
 Either future can legitimately “win,” so it doesn’t make sense to return a
diff --git a/src/ch17-02-concurrency-with-async.md b/src/ch17-02-concurrency-with-async.md
index ead3dd7254..7da015cbb2 100644
--- a/src/ch17-02-concurrency-with-async.md
+++ b/src/ch17-02-concurrency-with-async.md
@@ -1,6 +1,7 @@
 ## Applying Concurrency with Async
 
+
 In this section, we’ll apply async to some of the same concurrency challenges
@@ -15,6 +16,7 @@ often have different behavior—and they nearly always have different performanc
 characteristics.
 
+
 ### Creating a New Task with `spawn_task`
@@ -178,6 +180,7 @@ For an extra challenge, see if you can figure out what the output will be in
 each case _before_ running the code!
 
+
 ### Counting Up on Two Tasks Using Message Passing
diff --git a/src/ch17-03-more-futures.md b/src/ch17-03-more-futures.md
index 572e3f0076..74791c1310 100644
--- a/src/ch17-03-more-futures.md
+++ b/src/ch17-03-more-futures.md
@@ -378,6 +378,7 @@ each other. But _how_ would you hand control back to the runtime in those
 cases?
+
 
 ### Yielding Control to the Runtime
diff --git a/src/ch17-04-streams.md b/src/ch17-04-streams.md
index 2eae1d9db5..042f2639d2 100644
--- a/src/ch17-04-streams.md
+++ b/src/ch17-04-streams.md
@@ -1,8 +1,8 @@
 ## Streams: Futures in Sequence
 
-
+
 
 So far in this chapter, we’ve mostly stuck to individual futures. The one big
 exception was the async channel we used. Recall how we used the receiver for our
@@ -122,7 +122,7 @@ we can do that _is_ unique to streams.
 ### Composing Streams
 
 Many concepts are naturally represented as streams: items becoming available in
-a queue, chunks of data being pulled incrementally from the filesystem when the
+a queue, chunks of data being pulled incrementally from the filesystem when the
 full data set is too large for the computer’s memory, or data arriving over the
 network over time. Because streams are futures, we can use them with any other
 kind of future and combine them in interesting ways. For example, we can batch
@@ -174,8 +174,6 @@ Again, we could do this with the regular `Receiver` API or even the regular
 timeout that applies to every item in the stream, and a delay on the items we
 emit, as shown in Listing 17-34.
 
-
-
 ```rust
diff --git a/src/ch17-05-traits-for-async.md b/src/ch17-05-traits-for-async.md
index a0030ac149..3dca4edc2b 100644
--- a/src/ch17-05-traits-for-async.md
+++ b/src/ch17-05-traits-for-async.md
@@ -1,6 +1,7 @@
 ## A Closer Look at the Traits for Async
 
+
 Throughout the chapter, we’ve used the `Future`, `Pin`, `Unpin`, `Stream`, and
@@ -12,6 +13,7 @@ details. In this section, we’ll dig in just enough to help in those scenarios,
 still leaving the _really_ deep dive for other documentation.
 
+
 ### The `Future` Trait
@@ -118,6 +120,7 @@ future it is responsible for, putting the future back to sleep when it is not
 yet ready.
 
+
 ### The `Pin` and `Unpin` Traits
@@ -210,7 +213,7 @@ enforce constraints on pointer usage.
 
 Recalling that `await` is implemented in terms of calls to `poll` starts to
 explain the error message we saw earlier, but that was in terms of `Unpin`, not
-`Pin`. So how exactly does `Pin` relate to `Unpin`, and why does `Future` need
+`Pin`. So how exactly does `Pin` relate to `Unpin`, and why does `Future` need
 `self` to be in a `Pin` type to call `poll`?
 
 Remember from earlier in this chapter a series of await points in a future get
diff --git a/src/ch21-03-graceful-shutdown-and-cleanup.md b/src/ch21-03-graceful-shutdown-and-cleanup.md
index a58f4ddc69..3158fba0cd 100644
--- a/src/ch21-03-graceful-shutdown-and-cleanup.md
+++ b/src/ch21-03-graceful-shutdown-and-cleanup.md
@@ -67,7 +67,7 @@ alternative approaches. They can make your code cleaner and less error-prone.
 
 In this case, there is a better alternative: the `Vec::drain` method. It accepts
 a range parameter to specify which items to remove from the `Vec`, and returns
-an iterator of those items. Passing the `..` range syntax will remove *every*
+an iterator of those items. Passing the `..` range syntax will remove _every_
 value from the `Vec`. So we need to update the `ThreadPool` `drop`
 implementation like this:
 
@@ -99,7 +99,7 @@ implementation and then a change in the `Worker` loop. First, we’ll change the
 `ThreadPool` `drop` implementation to explicitly drop the `sender` before
 waiting for the threads to finish. Listing 21-23 shows the changes to
 `ThreadPool` to explicitly drop `sender`.
 Unlike with the `workers`,
-here we *do* need to use an `Option` to be able to move `sender` out of
+here we _do_ need to use an `Option` to be able to move `sender` out of
 `ThreadPool` with `Option::take`.
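
As a reference point for the ch21-03-graceful-shutdown-and-cleanup.md hunks above, here is a minimal, self-contained sketch of how `Vec::drain(..)` and `Option::take` work together when a `ThreadPool` shuts down. It is not the book's Listing 21-23; the single worker and the plain `String` job type are simplified stand-ins for the chapter's `Arc<Mutex<mpsc::Receiver>>`-based pool.

```rust
use std::sync::mpsc;
use std::thread;

struct Worker {
    id: usize,
    thread: thread::JoinHandle<()>,
}

struct ThreadPool {
    workers: Vec<Worker>,
    // Wrapped in `Option` so that `drop` can move the sender out with `Option::take`.
    sender: Option<mpsc::Sender<String>>,
}

impl ThreadPool {
    fn new() -> ThreadPool {
        let (sender, receiver) = mpsc::channel();
        // One worker with a simplified loop; the loop ends once the channel closes.
        let thread = thread::spawn(move || {
            for job in receiver {
                println!("worker 0 got job: {job}");
            }
            println!("worker 0 disconnected; shutting down");
        });
        ThreadPool {
            workers: vec![Worker { id: 0, thread }],
            sender: Some(sender),
        }
    }

    fn execute(&self, job: String) {
        self.sender.as_ref().unwrap().send(job).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        // `take` moves the sender out of the `Option` and drops it, closing the
        // channel so every worker's receive loop can finish.
        drop(self.sender.take());

        // `drain(..)` removes and yields every worker by value, so each thread
        // can be joined even though `drop` only has `&mut self`.
        for worker in self.workers.drain(..) {
            println!("Shutting down worker {}", worker.id);
            worker.thread.join().unwrap();
        }
    }
}

fn main() {
    let pool = ThreadPool::new();
    pool.execute(String::from("hello"));
    // `pool` is dropped here: the channel closes first, then the worker is joined.
}
```

Dropping the taken sender closes the channel, so each worker's loop exits and `join` returns instead of blocking forever, which is the ordering the Listing 21-23 change above is about.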
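
Similarly, the ch06-03-if-let.md hunk describes the `let`-`else` flow in prose. Here is a small sketch of that flow under assumed types; the `Coin` enum is a hypothetical stand-in rather than the book's Listing 6-8/6-9 code.

```rust
enum Coin {
    Penny,
    // The year stands in for the state data the book's example carries.
    Quarter(u16),
}

fn describe_quarter(coin: Coin) -> Option<String> {
    // If the pattern matches, `year` is bound in the outer (function) scope.
    // If it does not match, the `else` arm runs and must diverge; here it
    // returns from the function.
    let Coin::Quarter(year) = coin else {
        return None;
    };
    Some(format!("a quarter from {year}"))
}

fn main() {
    println!("{:?}", describe_quarter(Coin::Quarter(1999)));
    println!("{:?}", describe_quarter(Coin::Penny));
}
```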
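
Finally, for the ch17-05-traits-for-async.md hunks about `Future`, `Pin`, and `Unpin`, here is a minimal hand-written future polled once by hand. The `Ready` type is illustrative rather than anything from the book, and a real runtime would supply its own waker; the no-op waker (`Waker::noop`) assumes a recent stable Rust toolchain.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, Waker};

// A future that is immediately ready with a value. The `Option` guards
// against the future being polled again after it has completed.
struct Ready(Option<u32>);

impl Future for Ready {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // `Ready` only holds an `Option<u32>`, so it is `Unpin` and we can
        // safely turn the `Pin<&mut Self>` back into a plain `&mut Self`.
        Poll::Ready(self.get_mut().0.take().expect("polled after completion"))
    }
}

fn main() {
    let mut fut = Ready(Some(7));
    // A runtime would pass a real waker here; a no-op waker is enough for a
    // single manual poll (`Waker::noop` is available on recent stable Rust).
    let mut cx = Context::from_waker(Waker::noop());
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(n) => println!("ready: {n}"),
        Poll::Pending => println!("pending"),
    }
}
```

Because `Ready` is `Unpin`, both `Pin::new` and `Pin::get_mut` are available without `unsafe`; a self-referential future would not get that shortcut, which is the distinction the chapter's `Pin` and `Unpin` discussion is drawing.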