Commit f5441b0

Documentation Updates (#942)

* fix readme
* rename StashError
* rename ActorSystem files to ClusterSystem files; document init
* introduction
* document lifecycle watch
* cluster lifecycle image
* fix rename refactoring not having worked...
* [System] implement system.terminated
* wip on more cluster docs
* no more warnings except in NIO

1 parent d3296f6

File tree: 72 files changed, +518 -376 lines
Two binary files changed (not shown): -11.2 KB and -46.3 KB.

README.md

Lines changed: 48 additions & 222 deletions
Large diffs are not rendered by default.

Samples/Sources/SampleDiningPhilosophers/boot.swift

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ typealias DefaultDistributedActorSystem = ClusterSystem
 let time = TimeAmount.seconds(20)

 switch CommandLine.arguments.dropFirst().first {
-case "dist":
+case "dist", "distributed":
     try! await DistributedDiningPhilosophers().run(for: time)
 default:
     try! await DiningPhilosophers().run(for: time)
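
With this change the sample accepts either spelling of the distributed mode. Assuming the sample is run through SwiftPM under its target name (an assumption, not shown in this diff), that would look like:

    swift run SampleDiningPhilosophers distributed

Any other (or missing) argument falls through to the local, non-distributed variant.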

Sources/ActorSingletonPlugin/ActorSingletonProxy.swift

Lines changed: 2 additions & 2 deletions
@@ -68,7 +68,7 @@ internal class ActorSingletonProxy<Message: ActorMessage> {
     var behavior: _Behavior<Message> {
         .setup { context in
             if context.system.settings.enabled {
-                // Subscribe to `Cluster.Event` in order to update `targetNode`
+                // Subscribe to ``Cluster/Event`` in order to update `targetNode`
                 context.system.cluster.events.subscribe(
                     context.subReceive(_SubReceiveId(id: "clusterEvent-\(context.name)"), Cluster.Event.self) { event in
                         try self.receiveClusterEvent(context, event)
@@ -196,7 +196,7 @@ internal class ActorSingletonProxy<Message: ActorMessage> {
                 context.log.trace("Stashed message: \(message)", metadata: self.metadata(context))
             } catch {
                 switch error {
-                case StashError.full:
+                case _StashError.full:
                     // TODO: log this warning only "once in while" after buffer becomes full
                     context.log.warning("Buffer is full. Messages might start getting disposed.", metadata: self.metadata(context))
                     // Move the oldest message to dead letters to make room

Sources/DistributedActors/ActorAddress.swift

Lines changed: 3 additions & 1 deletion
@@ -17,6 +17,9 @@ import Distributed
 // ==== ----------------------------------------------------------------------------------------------------------------
 // MARK: ActorAddress

+/// The type of `ID` assigned to all distributed actors managed by the ``ClusterSystem``.
+public typealias ActorID = ActorAddress
+
 /// Uniquely identifies a DistributedActor within the cluster.
 ///
 /// It is assigned by the `ClusterSystem` at initialization time of a distributed actor,
@@ -50,7 +53,6 @@ import Distributed
 ///
 /// For example: `sact://[email protected]:7337/user/wallet/id-121242`.
 /// Note that the `ActorIncarnation` is not printed by default in the String representation of a path, yet may be inspected on demand.
-@available(macOS 10.15, *)
 public struct ActorAddress: @unchecked Sendable {
     /// Knowledge about a node being `local` is purely an optimization, and should not be relied on by actual code anywhere.
     /// It is on purpose not exposed to end-user code as well, and must remain so to not break the location transparency promises made by the runtime.
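
The new `ActorID` typealias is the `ID` type distributed actors expose via their `id` property. A minimal sketch of how it surfaces in user code (the `Greeter` actor and `system` value are hypothetical; `DefaultDistributedActorSystem` is assumed to be set to `ClusterSystem`, as in the sample above):

    import Distributed
    import DistributedActors

    distributed actor Greeter {
        typealias ActorSystem = ClusterSystem

        distributed func greet(_ name: String) -> String {
            "Hello, \(name)!"
        }
    }

    // `greeter.id` is an ActorID, i.e. an ActorAddress uniquely identifying
    // this actor within the cluster:
    // let greeter = Greeter(actorSystem: system)
    // print(greeter.id)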

Sources/DistributedActors/Cluster/Cluster+Membership.swift

Lines changed: 2 additions & 2 deletions
@@ -114,7 +114,7 @@ extension Cluster {
         self.members(withStatus: [status], reachability: reachability)
     }

-    /// Returns all members that are part of this membership, and have the any ``Cluster.MemberStatus`` that is part
+    /// Returns all members that are part of this membership, and have the any ``Cluster/MemberStatus`` that is part
     /// of the `statuses` passed in and `reachability` status.
     ///
     /// - Parameters:
@@ -574,7 +574,7 @@ extension Cluster.Membership {
 // MARK: Applying Cluster.Event to Membership

 extension Cluster.Membership {
-    /// Applies any kind of `Cluster.Event` to the `Membership`, modifying it appropriately.
+    /// Applies any kind of ``Cluster/Event`` to the `Membership`, modifying it appropriately.
     /// This apply does not yield detailed information back about the type of change performed,
     /// and is useful as a catch-all to keep a `Membership` copy up-to-date, but without reacting on any specific transition.
     ///
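
As that doc comment suggests, the apply can be used as a catch-all to keep a local `Membership` copy current. A hedged sketch (the exact name and signature of the apply method are assumed here, not confirmed by this diff):

    // Keep a local Membership copy up to date by applying every incoming event.
    var membership: Cluster.Membership = .empty

    func onClusterEvent(_ event: Cluster.Event) {
        do {
            try membership.apply(event: event) // assumed spelling of the "apply" described above
        } catch {
            // Failing to apply usually means the local copy is out of sync;
            // re-seeding it from a fresh snapshot would be the recovery path.
        }
    }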

Sources/DistributedActors/Cluster/ClusterControl.swift

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ public struct ClusterControl {
     ///
     /// Consider subscribing to `cluster.events` in order to react to membership changes dynamically, and never miss a change.
     ///
-    /// It is guaranteed that a `membershipSnapshot` is always at-least as up-to-date as an emitted `Cluster.Event`.
+    /// It is guaranteed that a `membershipSnapshot` is always at-least as up-to-date as an emitted ``Cluster/Event``.
     /// It may be "ahead" however, for example if a series of 3 events are published closely one after another,
     /// if one were to observe the `cluster.membershipSnapshot` when receiving the first event, it may already contain
     /// information related to the next two incoming events. For that reason is recommended to stick to one of the ways
@@ -115,7 +115,7 @@ public struct ClusterControl {
         self.ref.tell(.command(.downCommand(self.uniqueNode.node)))
     }

-    /// Mark *any* currently known member as `Cluster.MemberStatus.down`.
+    /// Mark *any* currently known member as ``Cluster/MemberStatus/down``.
     ///
     /// Beware that this API is not very precise and, if possible, the `down(Cluster.Member)` is preferred, as it indicates
     /// the downing intent of a *specific* actor system instance, rather than any system running on the given host-port pair.
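
The guarantee above suggests sticking to a single source of truth. A hedged sketch (assuming a `system: ClusterSystem`; the `members(withStatus:reachability:)` call is borrowed from the Cluster+Membership diff above, and its exact parameters are an assumption):

    // Either: read the snapshot when a point-in-time view is enough...
    let upMembers = system.cluster.membershipSnapshot.members(withStatus: [.up], reachability: nil)

    // ...or: derive membership purely from the event stream (see the
    // Cluster.Membership sketch above), but avoid mixing the two: the snapshot
    // may already be "ahead" of the event currently being processed.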

Sources/DistributedActors/Cluster/ClusterEventStream.swift

Lines changed: 2 additions & 2 deletions
@@ -15,7 +15,7 @@
 import Logging

 /// Specialized event stream behavior which takes into account emitting a snapshot event on first subscription,
-/// followed by a stream of `Cluster.Event`s.
+/// followed by a stream of ``Cluster/Event``s.
 ///
 /// This ensures that every subscriber to cluster events never misses any of the membership events, meaning
 /// it is possible for anyone to maintain a local up-to-date copy of `Membership` by applying all these events to that copy.
@@ -26,7 +26,7 @@ internal enum ClusterEventStream {

     // We maintain a snapshot i.e. the "latest version of the membership",
     // in order to eagerly publish it to anyone who subscribes immediately,
-    // followed by joining them to the subsequent `Cluster.Event` publishes.
+    // followed by joining them to the subsequent ``Cluster/Event`` publishes.
     //
     // Thanks to this, any subscriber immediately gets a pretty recent view of the membership,
     // followed by the usual updates via events. Since all events are published through this
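
Because the first event a new subscriber receives is a snapshot, handlers typically treat it as a reset of their local state. A hedged sketch (assuming `Cluster.Event` carries a `.snapshot` case with the full `Membership`, and reusing the assumed `apply(event:)` from the sketch above):

    var membership: Cluster.Membership = .empty

    func handle(_ event: Cluster.Event) {
        switch event {
        case .snapshot(let snapshot):
            membership = snapshot // replace local state with the full view
        default:
            try? membership.apply(event: event) // incremental update
        }
    }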

Sources/DistributedActors/Cluster/ClusterShell.swift

Lines changed: 1 addition & 1 deletion
@@ -313,7 +313,7 @@ internal class ClusterShell {
         case inbound(InboundMessage)
         /// Used to request making a change to the membership owned by the ClusterShell;
         /// Issued by downing or leader election and similar facilities. Thanks to centralizing the application of changes,
-        /// we can ensure that a `Cluster.Event` is signalled only once, and only when it is really needed.
+        /// we can ensure that a ``Cluster/Event`` is signalled only once, and only when it is really needed.
         /// E.g. signalling a down twice for whatever reason, needs not be notified two times to all subscribers of cluster events.
        ///
        /// If the passed in event applied to the current membership is an effective change, the change will be published using the `system.cluster.events`.
