85 changes: 60 additions & 25 deletions src/fiber/execution_context.cr
Original file line number Diff line number Diff line change
Expand Up @@ -8,51 +8,65 @@ require "./execution_context/*"
{% raise "ERROR: execution contexts require the `preview_mt` compilation flag" unless flag?(:preview_mt) || flag?(:docs) %}
{% raise "ERROR: execution contexts require the `execution_context` compilation flag" unless flag?(:execution_context) || flag?(:docs) %}

# An execution context creates and manages a dedicated pool of 1 or more
# An execution context creates and manages a dedicated pool of one or more
# schedulers in which fibers run. Each context manages the rules to
# run, suspend and swap fibers internally.
#
# EXPERIMENTAL: Execution contexts are an experimental feature, implementing
# [RFC 2](https://github.com/crystal-lang/rfcs/pull/2). It's opt-in and requires
# the compiler flags `-Dpreview_mt -Dexecution_context`.
#
# Applications can create any number of execution contexts in parallel. These
# contexts are isolated but they can communicate with the usual synchronization
# primitives such as `Channel` or `Mutex`.
#
# An execution context groups fibers together. Instead of associating a fiber to
# a specific system thread, we associate a fiber to an execution context,
# abstracting which system thread(s) the fibers will run on.
#
# Applications can create any number of execution contexts in parallel. Fibers
# running in any context can communicate and synchronize with any other fiber
# running in any context through the usual synchronization primitives such as
# `Channel`, `WaitGroup` or `Sync`.
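#
# For example, a fiber in one context can communicate with the default context
# through a `Channel` (a minimal sketch; the `Parallel.new` arguments follow
# the experimental API and may differ):
#
# ```
# results = Channel(Int32).new
#
# ctx = Fiber::ExecutionContext::Parallel.new("workers", 4)
# ctx.spawn { results.send(21 * 2) }
#
# p results.receive # => 42
# ```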
#
# When spawning a fiber with `::spawn`, it spawns into the execution context of
# the current fiber, so child fibers execute in the same context as their parent
# (unless told otherwise).
# the current fiber, so child fibers execute in the same context as their
# parent, unless told otherwise (see `ExecutionContext#spawn`).
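#
# A sketch of both behaviors (assuming the experimental API; `Concurrent.new`
# arguments may differ):
#
# ```
# ctx = Fiber::ExecutionContext::Concurrent.new("secondary")
#
# ctx.spawn do
#   # this fiber runs in *ctx*
#   spawn do
#     # `::spawn` inherits the parent fiber's context: also runs in *ctx*
#   end
# end
# ```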
#
# Once spawned, a fiber cannot _move_ to another execution context. It always
# resumes in the same execution context.
# Fibers are scoped to the execution context they are spawned into. Once
# spawned, a fiber cannot _move_ to another execution context, and is always
# resumed in the same execution context.
#
# ## Context types
#
# The standard library provides a number of execution context implementations
# for common use cases.
#
# * `ExecutionContext::Concurrent`: Fully concurrent with limited parallelism.
# Fibers run concurrently to each other, never in parallel (only one fiber at a
# time). They can use simpler and faster synchronization primitives internally
# (no atomics, limited thread safety). Communication with fibers in other
# contexts requires thread-safe primitives. A blocking fiber blocks the entire
# thread and all other fibers in the context.
# * `ExecutionContext::Parallel`: Fully concurrent, fully parallel. Fibers
# running in this context can be resumed by multiple system threads in this
# context. They run concurrently and in parallel to each other (multiple fibers
# at a time), in addition to running in parallel to any fibers in other
# contexts. Schedulers steal work from each other. The parallelism can grow and
# shrink dynamically.
#
# Fibers run concurrently to each other, never in parallel (only one fiber at
# a time). They can use simpler and faster synchronization primitives
# internally (no atomics, limited thread safety); however, communication with
# fibers in other contexts must be safe (e.g. `Channel`, `Sync`, ...). A
# blocking fiber blocks the entire thread and all other fibers in the context.
#
# * `ExecutionContext::Parallel`: Fully concurrent, fully parallel.
#
# Fibers running in this context can be resumed by multiple system threads in
# this context. They run concurrently and in parallel to each other (multiple
# fibers at a time), in addition to running in parallel to any fibers in other
# contexts. Schedulers steal work from each other. The parallelism can grow
# and shrink dynamically.
#
# * `ExecutionContext::Isolated`: Single fiber in a single system thread without
# concurrency. This is useful for tasks that can block thread execution for a
# long time (e.g. a GUI main loop, a game loop, or CPU heavy computation). The
# event-loop works normally (when the fiber sleeps, it pauses the thread).
# Communication with fibers in other contexts requires thread-safe primitives.
# concurrency.
#
# This is useful for tasks that can block thread execution for a long time
# (e.g. CPU heavy computation) or must be reactive (e.g. a GUI or game loop).
# The event-loop works normally and so does communication and synchronization
# with fibers in other contexts (`Channel`, `WaitGroup`, `Sync`, ...). When
# the fiber needs to wait, it pauses the thread.
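#
# For example, running a GUI main loop on its own dedicated thread (a sketch;
# `Gtk` stands in for a hypothetical library binding):
#
# ```
# gtk = Fiber::ExecutionContext::Isolated.new("Gtk") do
#   Gtk.main # owns its thread for the fiber's whole lifetime
# end
# ```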
#
# Again, any number of execution contexts can be created (as many as the
# computer can physically handle). An advantage of multiple execution contexts
# is that they create execution boundaries: the OS thread scheduler can, for
# example, preempt a system thread, allowing fibers in other system threads to
# run.
#
# ## The default execution context
#
Expand All @@ -67,6 +81,27 @@ require "./execution_context/*"
# count = Fiber::ExecutionContext.default_workers_count
# Fiber::ExecutionContext.default.resize(count)
# ```
#
# ## Relationship with system threads
#
# Execution contexts control when and how fibers run, and on which system
# thread they execute. The term *parallelism* refers to the maximum number of
# fibers that can run in parallel (the maximum number of schedulers), but in
# practice there can be fewer or more system threads running, for example when
# a fiber is blocked on a syscall.
#
# There are no guarantees on how a fiber will run on system threads. A fiber can
# start in thread A, then be resumed and terminated on thread A, B or C. This is
# true for both the `Parallel` and `Concurrent` contexts.
#
# Notable exception: `Isolated` guarantees that its fiber will always run on
# the same system thread. The fiber owns that thread, but only for the
# duration of the fiber's lifetime.
#
# Threads are kept in a thread pool: threads can be started, attached to and
# detached from any context at any time. A thread detached from a context can
# be reattached to the same execution context or to another one (`Concurrent`,
# `Parallel` or `Isolated`).
Comment on lines +102 to +104
Member


thought: I'm wondering if Isolated should perhaps work without thread pool. So we would guarantee that an isolated context always runs in a fresh system thread and that thread won't be reused afterwards. This might be a useful property for libraries that integrate deeply with the system thread. It would implicitly allow changing the threads properties without the risk of affecting other code that might reuse the thread afterwards.

Collaborator Author

@ysbaddaden ysbaddaden Mar 12, 2026


This might be a useful property for libraries that integrate deeply with the system thread

But detrimental to libraries that regularly spawn isolated fibers 🤷

I can see an application configuring the scheduler, cpu affinity or sigaltstack, but is it likely for a library to do that without resetting it afterwards? Shouldn't it just keep running there (like a GUI main loop)?

I'm searching what Go does (threads always return to the pool) and I can't find anything. I guess it expects any customization to be reset by the app/lib or to never return it.

Now, we shall eventually have means to configure a context (cpu affinity, ...): attaching a scheduler shall set them, and returning the thread to the pool shall reset them back to their defaults.

Member


It was just a thought about a theoretical issue. I don't have any particular use case in mind. Thanks for clarifying. 👍

Collaborator Author

@ysbaddaden ysbaddaden Mar 12, 2026


It's a valid concern and needs discussing.

I think it applies to the thread pool in general: every thread will return to the pool, that will happen at any time, and there's no user control over it. We should start designing standard mechanisms to set cpu affinity and other attributes (for the whole process, or for a specific context), so we can set/reset them as needed. Maybe customizable hooks on checkin/checkout, too?

@[Experimental]
module Fiber::ExecutionContext
@@thread_pool : ThreadPool?
Expand Down Expand Up @@ -172,7 +207,7 @@ module Fiber::ExecutionContext
end
end

# Creates a new fiber then calls enqueues it to the execution context.
# Creates a new fiber then enqueues it to the execution context.
#
# May be called from any `ExecutionContext` (i.e. must be thread-safe).
def spawn(*, name : String? = nil, &block : ->) : Fiber
Expand Down
10 changes: 7 additions & 3 deletions src/fiber/execution_context/concurrent.cr
Expand Up @@ -9,7 +9,7 @@ module Fiber::ExecutionContext
#
# Fibers in this context can use simpler and faster synchronization primitives
# between themselves (for example no atomics or thread safety required), but
# data shared with other contexts needs to be protected (e.g. `Mutex`), and
# data shared with other contexts needs to be protected (see `Sync`), and
# communication with fibers in other contexts requires safe primitives, for
# example `Channel`.
#
Expand Down Expand Up @@ -51,8 +51,12 @@ module Fiber::ExecutionContext
# ```
#
# In practice, we still recommend always protecting shared access to a
# variable, for example using `Atomic#add` to increment *result* or a `Mutex`
# for more complex operations.
# variable, for example using `Atomic#add` to increment *result* or a `Sync`
# primitive for more complex operations.
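#
# For instance, a sketch using `Atomic` to increment a shared counter safely:
#
# ```
# result = Atomic(Int32).new(0)
#
# 10.times do
#   spawn { result.add(1) }
# end
# ```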
#
# NOTE: The `Concurrent` execution context isn't tied to a system thread, and
# may switch to another system thread, for example when a fiber is blocked on
# a syscall.
class Concurrent < Parallel
# :nodoc:
def self.default : self
Expand Down
2 changes: 1 addition & 1 deletion src/fiber/execution_context/isolated.cr
Expand Up @@ -18,7 +18,7 @@ module Fiber::ExecutionContext
# which defaults to `Fiber::ExecutionContext.default`.
#
# Isolated fibers can normally communicate with other fibers running in other
# execution contexts using `Channel`, `WaitGroup` or `Mutex` for example. They
# execution contexts using `Channel`, `WaitGroup` or `Sync` for example. They
# can also execute `IO` operations or `sleep` just like any other fiber.
#
# Calls that result in waiting (e.g. sleep, or socket read/write) will block
Expand Down
27 changes: 17 additions & 10 deletions src/fiber/execution_context/parallel.cr
Expand Up @@ -9,15 +9,14 @@ module Fiber::ExecutionContext
# contexts.
#
# The context internally keeps a number of fiber schedulers, each scheduler
# being able to start running on a system thread, so multiple schedulers can
# run in parallel. The fibers are resumable by any scheduler in the context,
# they can thus move from one system thread to another at any time.
# runs on a system thread, so multiple schedulers can run in parallel. The
# fibers are resumable by any scheduler in the context, and can thus move from
# one system thread to another at any time.
#
# The actual parallelism is controlled by the execution context. As the need
# for parallelism increases, for example more fibers running longer, the more
# schedulers will start (and thus system threads), as the need decreases, for
# example not enough fibers, the schedulers will pause themselves and
# parallelism will decrease.
# The actual parallelism is dynamic. As the need for parallelism increases,
# for example more fibers running for longer, more schedulers (and thus system
# threads) will start; as the need decreases, for example not enough fibers,
# schedulers will pause themselves and parallelism will decrease.
#
# The parallelism can be as low as 1, in which case the context becomes a
# concurrent context (no parallelism) until resized.
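#
# A sketch of resizing (argument names follow the experimental API and may
# differ):
#
# ```
# ctx = Fiber::ExecutionContext::Parallel.new("pool", 8)
# ctx.resize(1) # temporarily behaves like a concurrent context
# ctx.resize(8) # restore full parallelism
# ```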
Expand Down Expand Up @@ -55,6 +54,10 @@ module Fiber::ExecutionContext
#
# p result.get # => 523776
# ```
#
# NOTE: The `Parallel` execution context isn't tied to a fixed set of system
# threads, and execution can switch to other system threads, for example when
# a fiber is blocked on a syscall.
class Parallel
include ExecutionContext

Expand Down Expand Up @@ -120,12 +123,16 @@ module Fiber::ExecutionContext
ExecutionContext.execution_contexts.push(self)
end

# The number of threads that have been started.
# :nodoc:
#
# TODO: must report how many schedulers are running (count spinning
# schedulers but don't count waiting/parked ones).
def size : Int32
@started
end
ysbaddaden marked this conversation as resolved.

# The maximum number of threads that can be started.
# The maximum number of schedulers that can be started, i.e. how many fibers
# can run in parallel (the maximum parallelism of the context).
def capacity : Int32
@schedulers.size
end
Expand Down