4 changes: 2 additions & 2 deletions posts/chat-server.md
@@ -1104,7 +1104,7 @@ And to quote the `RwLock` docs from the `parking_lot` crate:

I guess we just have to be really careful 🤷

- **3\)** Aquire locks in the same order everywhere
+ **3\)** Acquire locks in the same order everywhere

If we need to acquire multiple locks to safely perform some operation, we need to always acquire those locks in the same order, otherwise deadlocks can trivially occur, like here:
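(The snippet the post shows at this point is collapsed in this diff view; below is a minimal sketch of the same hazard, not the post's actual code. It uses `parking_lot::RwLock` since the post discusses that crate, and the names `users` and `rooms` are purely illustrative.)

```rust
// Requires the `parking_lot` crate.
use std::{sync::Arc, thread, time::Duration};
use parking_lot::RwLock;

fn main() {
    let users = Arc::new(RwLock::new(Vec::<String>::new()));
    let rooms = Arc::new(RwLock::new(Vec::<String>::new()));

    // Thread A: locks `users` first, then `rooms`.
    let (u, r) = (Arc::clone(&users), Arc::clone(&rooms));
    let a = thread::spawn(move || {
        let _users = u.write();
        thread::sleep(Duration::from_millis(50)); // widen the race window
        let _rooms = r.write(); // blocks: thread B already holds `rooms`
    });

    // Thread B: locks `rooms` first, then `users` -- the opposite order.
    let (u, r) = (Arc::clone(&users), Arc::clone(&rooms));
    let b = thread::spawn(move || {
        let _rooms = r.write();
        thread::sleep(Duration::from_millis(50));
        let _users = u.write(); // blocks: thread A already holds `users` -> deadlock
    });

    a.join().unwrap(); // once both threads are stuck, these joins never return
    b.join().unwrap();
}
```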

@@ -1728,7 +1728,7 @@ We have a lot of short strings. We can have thousands of users and hundreds of r

As you may remember, when we `send` something to a broadcast channel every call to `recv` clones that data. That means if a user sends a `String` message that is five paragraphs long in a room with 1000 other users we're going to have to clone that message 1000 times which means doing 1000 heap allocations. We know that after a message is sent it's immutable, so we don't need to send a `String`, we can convert the `String` to an `Arc<str>` instead and send that, because cloning an `Arc<str>` is very cheap since all it does is increment an atomic counter.
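Here is a minimal sketch of that change, assuming a `tokio::sync::broadcast` channel (the channel capacity and message text are invented for illustration, not lifted from the post's code):

```rust
use std::sync::Arc;
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // The message text is heap-allocated exactly once; each subscriber's
    // clone of the Arc<str> just bumps an atomic refcount instead of
    // copying the whole string.
    let (tx, mut rx1) = broadcast::channel::<Arc<str>>(32);
    let mut rx2 = tx.subscribe();

    let msg: Arc<str> = Arc::from(String::from("five paragraphs of text..."));
    tx.send(msg).unwrap();

    assert_eq!(&*rx1.recv().await.unwrap(), "five paragraphs of text...");
    assert_eq!(&*rx2.recv().await.unwrap(), "five paragraphs of text...");
}
```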

- #### Miscellanous micro optimizations
+ #### Miscellaneous micro optimizations

After combing through the code I found a couple places where we were carelessly allocating unnecessary `Vec`s and `String`s, mostly for the `/rooms` and `/users` commands, and made them both only allocate a single `String` when generating a response.
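As a hedged illustration of that kind of cleanup (the function name and response format below are invented, not taken from the post), a command response can be built into a single `String` instead of collecting per-item `String`s into a `Vec` and joining them:

```rust
use std::fmt::Write;

// Hypothetical `/rooms` handler: build the whole response in one String,
// instead of collecting per-room Strings into a Vec and joining them.
fn rooms_response(rooms: &[(&str, usize)]) -> String {
    let mut out = String::with_capacity(64 + rooms.len() * 32);
    out.push_str("Rooms:");
    for (name, users) in rooms {
        // write! formats straight into the existing buffer,
        // so no temporary String is allocated per room.
        let _ = write!(out, "\n  {name} ({users} users)");
    }
    out
}

fn main() {
    println!("{}", rooms_response(&[("rust", 42), ("general", 7)]));
}
```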

2 changes: 1 addition & 1 deletion posts/too-many-brainfuck-compilers.md
@@ -2202,7 +2202,7 @@ System calls are expensive, even without including the cost of switching from us

I learned a lot and still far less than I thought I would. Remember that laundry list of fancy terms I mentioned back in the intro of this article? Yeah well, I still don't know what half of that stuff is.

- While doing the research and programming for this project I finally learned what _auto-vectorization_, _inlining_, _endianess_, _system calls_, _LLVM_, _SIMD_, and _ABI_ are. I kinda get what _linking_ is on a very basic level but I still get lost whenever I read anything about _linking_ because it seems like the linker does a whole bunch of really crazy complicated code manipulations other than just playing connect-the-dots with some global symbols, so I don't feel like I "fully get" what a linker actually does. I get what _custom allocators_ are in concept but I don't get why, for example, Allocator X is more performant than Allocator Y for certain workloads. This project never forced me to figure out how heap allocations work in assembly so it makes sense that allocators are still a mystery to me. I know that _TLS_ stands for Thread Local Storage and people love talking about it but I don't know why. I know _padding_ is a thing that exists purely to serve _alignment_ but I have no clue why _alignment_ is so important. Apparently if the data in your program is _aligned_ everything is faster and if it's _unaligned_ it's either slow or completely unusable. But why? What is it with all this magical _alignment_ stuff?
+ While doing the research and programming for this project I finally learned what _auto-vectorization_, _inlining_, _endianness_, _system calls_, _LLVM_, _SIMD_, and _ABI_ are. I kinda get what _linking_ is on a very basic level but I still get lost whenever I read anything about _linking_ because it seems like the linker does a whole bunch of really crazy complicated code manipulations other than just playing connect-the-dots with some global symbols, so I don't feel like I "fully get" what a linker actually does. I get what _custom allocators_ are in concept but I don't get why, for example, Allocator X is more performant than Allocator Y for certain workloads. This project never forced me to figure out how heap allocations work in assembly so it makes sense that allocators are still a mystery to me. I know that _TLS_ stands for Thread Local Storage and people love talking about it but I don't know why. I know _padding_ is a thing that exists purely to serve _alignment_ but I have no clue why _alignment_ is so important. Apparently if the data in your program is _aligned_ everything is faster and if it's _unaligned_ it's either slow or completely unusable. But why? What is it with all this magical _alignment_ stuff?
Contributor Author: fixed spelling of endianess -> endianness


I was very surprised by how much easier it was to write the x86_64 and aarch64 compilers compared to the WebAssembly and LLVM IR compilers. I think this mostly has to do with the fact that brainfuck is a super simple language that maps very cleanly to low-level assembly instructions. If I was writing compilers for a higher-level language it'd be easier to map higher-level constructs to WebAssembly and LLVM IR than to x86_64 and aarch64, but I've never tried to do this so I can't say 100% for sure.

2 changes: 1 addition & 1 deletion posts/tour-of-rusts-standard-library-traits.md
@@ -2773,7 +2773,7 @@ fn main() {

`Fn` refines `FnMut` in the sense that `FnMut` requires mutable references and can be called multiple times, but `Fn` only requires immutable references and can be called multiple times. `Fn` can be used anywhere `FnMut` can be used, which includes anywhere `FnOnce` can be used.

- If a closure doesn't capture anything from its environment it's technically not a closure, but just an anonymously declared inline function, and can be casted to, used, and passed around as a regular function pointer, i.e. `fn`. Function pointers can be used anywhere `Fn` can be used, which includes anwhere `FnMut` and `FnOnce` can be used.
+ If a closure doesn't capture anything from its environment it's technically not a closure, but just an anonymously declared inline function, and can be casted to, used, and passed around as a regular function pointer, i.e. `fn`. Function pointers can be used anywhere `Fn` can be used, which includes anywhere `FnMut` and `FnOnce` can be used.

```rust
fn add_one(x: i32) -> i32 {
    x + 1
}
```
@@ -3257,7 +3257,7 @@ fn main() {

`Fn` refines `FnMut`: although both can be called multiple times, `FnMut` requires mutable references to its arguments, while `Fn` only requires immutable references to its arguments. `Fn` can be used anywhere `FnMut` and `FnOnce` can be used.

- > If a closure doesn't capture anything from its environment it's technically not a closure, but just an anonymously declared inline function, and can be casted to, used, and passed around as a regular function pointer, i.e. `fn`. Function pointers can be used anywhere `Fn` can be used, which includes anwhere `FnMut` and `FnOnce` can be used.
+ > If a closure doesn't capture anything from its environment it's technically not a closure, but just an anonymously declared inline function, and can be casted to, used, and passed around as a regular function pointer, i.e. `fn`. Function pointers can be used anywhere `Fn` can be used, which includes anywhere `FnMut` and `FnOnce` can be used.

If a closure doesn't capture any values from its environment, then technically it isn't a closure but just an inline anonymous function, and it can be converted to, used as, and passed around as a regular function pointer, i.e. `fn`. Function pointers can be used anywhere `Fn`, `FnMut`, or `FnOnce` can be used.
