Structured concurrency for server applications #447

151 changes: 151 additions & 0 deletions server/guides/concurrency.md
---
layout: page
title: Using Structured Concurrency in server applications
---

# Using Structured Concurrency in server applications

Swift Concurrency enables writing safe, concurrent, asynchronous, and
data-race-free code using native language features. Server systems are often
highly concurrent in order to handle many connections at the same time. This
makes Swift Concurrency a perfect fit for server systems: it reduces the
cognitive burden of writing correct concurrent code while spreading the work
across all available cores.

Structured Concurrency allows the developer to organize their code into
high-level tasks and their child component tasks. These tasks are the primary
unit of concurrency and enable the flow of information up and down the task
hierarchy.

This guide covers best practices for how Swift Concurrency should be used in
server-side applications and libraries. Importantly, this guide assumes a
_perfect_ world where all libraries and applications are fully bought into
Structured Concurrency. In reality, there are a lot of places where one has to
bridge currently unstructured systems. Depending on how those unstructured
systems are shaped, there are various ways to bridge them; common bridging
patterns may warrant a section of their own. The goal of this guide is to define
a target for the ecosystem which can be referred to when talking about the
architecture of libraries and applications.

## Structuring your application

One can think of Structured Concurrency as a tree of tasks where the initial
task is rooted at the `main()` entry point of the application. From this entry
point onwards, more and more child tasks are added to the tree to form the
logical flow of data in the application. Organizing the whole program into a
single task tree unlocks the full potential of Structured Concurrency, such as:

- Automatic task cancellation propagation
- Propagation of task locals down the task tree
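
Both properties can be seen in a small, self-contained sketch using only the standard library (`RequestContext` and the value are illustrative):

```swift
enum RequestContext {
    // A task-local value: set once in a parent task, visible in all children.
    @TaskLocal static var requestID: String?
}

func run() async {
    await RequestContext.$requestID.withValue("req-42") {
        await withTaskGroup(of: String.self) { group in
            group.addTask {
                // The child task inherits the parent's task-local value.
                RequestContext.requestID ?? "none"
            }
            for await value in group {
                print(value) // prints "req-42"
            }
        }
    }
}
```

Because the child task is part of the same tree, cancelling the parent task would also cancel it automatically; no manual bookkeeping is needed.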

A typical application is often comprised of multiple smaller components such as
an HTTP server handling incoming traffic, observability backends sending events
to external systems, and clients to databases. All of those components probably
require one or more tasks to run their work, and those tasks should be child
tasks of the application's main task tree for the above-mentioned reasons.
Broadly speaking, libraries expose two kinds of APIs: short-lived, almost
request-response-like APIs, e.g. `HTTPClient.get("http://example.com")`, and
long-lived APIs such as an HTTP server that accepts inbound connections. In
reality, libraries often expose both since
they need to have some long-lived connections and then request-response-like
behavior to interact with those connections, e.g. an `HTTPClient` will have a
pool of connections and then dispatch requests onto them.
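
A hypothetical client exposing both API styles might look like this. All names are illustrative: `get(_:)` runs in the caller's task, while `run()` is the root for background work such as maintaining the connection pool:

```swift
final class HTTPClient: Sendable {
    // Short-lived, request-response style API: runs in the caller's task.
    func get(_ url: String) async throws -> String {
        // Dispatch the request onto a pooled connection...
        return "response for \(url)"
    }

    // Long-lived API: runs until the parent task is cancelled.
    func run() async throws {
        while !Task.isCancelled {
            // Keep connections alive, evict idle ones, etc.
            try await Task.sleep(for: .seconds(1))
        }
    }
}
```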

The recommended pattern for those components and libraries is to expose a `func
run() async throws` method on their types, such as an `HTTPClient`. Inside this
`run()` method, libraries can schedule their long-running work and spawn as many
child tasks as they need. It is expected that those `run()` methods often do not
return until their parent task is cancelled or they receive a shutdown signal
through some other means. The other way libraries could handle their
long-running work is by spawning unstructured tasks using `Task.init(_:)` or
`Task.detached(_:)`; however, this comes with significant downsides such as
having to implement cancellation of those tasks manually, or incorrect
propagation of task locals. Moreover, unstructured tasks are often used in
conjunction with `deinit`-based cleanup, i.e. the unstructured tasks are
retained by the type and then cancelled in the `deinit`. Since a reference to
such a type may be shared across several tasks, the `deinit` only runs when the
last reference is released, which makes it hard to guarantee when the resources
created by those unstructured tasks are released. Since the `run()` method
pattern has come up a
lot while migrating more libraries to take advantage of Structured Concurrency,
the SSWG updated the [swift-service-lifecycle
package](https://github.com/swift-server/swift-service-lifecycle) to embrace
this pattern. In general, the SSWG recommends adopting `ServiceLifecycle` and
conforming library types to the `Service` protocol, which makes it easier for
application developers to orchestrate the various components that form the
application's business logic. `ServiceLifecycle` provides the `ServiceGroup`,
which allows developers to orchestrate a number of `Service`s. Additionally, the
`ServiceGroup` has built-in support for signal handling to implement a graceful
shutdown of applications. Gracefully shutting down applications is often
required in modern cloud environments during roll-outs of new application
versions or infrastructure changes.
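
Wiring a service into a `ServiceGroup` might look like the following sketch. It assumes the `Service` protocol and `ServiceGroup` API of swift-service-lifecycle 2.x; check the package's documentation for current signatures:

```swift
import ServiceLifecycle
import Logging

// A minimal service: all long-running work lives inside `run()`.
struct EchoService: Service {
    func run() async throws {
        // Runs until cancelled or gracefully shut down by the group.
        while !Task.isCancelled {
            try await Task.sleep(for: .seconds(1))
        }
    }
}

@main
struct Application {
    static func main() async throws {
        let group = ServiceGroup(
            services: [EchoService()],
            gracefulShutdownSignals: [.sigterm],
            logger: Logger(label: "app")
        )
        // Returns once all services have finished or shut down.
        try await group.run()
    }
}
```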

The goal of structuring libraries and applications like this is to enable
seamless integration between them and a coherent interface across the ecosystem,
which makes adding new components to applications as easy as possible.
Furthermore, since everything is inside the same task tree and task locals
propagate down the tree, new APIs are unlocked in libraries such as `swift-log`
or `swift-tracing`.

After adopting this structure, a common question that comes up is how to model
communication between the various components. This can be achieved in a couple
of ways: either by using dependency injection to pass one component to the
other, or by inverting control between components using `AsyncSequence`s.
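
As a sketch of the inversion-of-control approach, the producer can hand out an `AsyncSequence` that the consumer iterates, typically inside its own `run()` method (`Event` and the component shapes are illustrative):

```swift
struct Event: Sendable {
    let name: String
}

// Standard library factory for a stream and its write side.
let (events, continuation) = AsyncStream.makeStream(of: Event.self)

// Producer side: yield events as they occur.
continuation.yield(Event(name: "connected"))
continuation.finish()

// Consumer side: iterate the sequence until it finishes.
for await event in events {
    print(event.name) // prints "connected"
}
```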

## Resource management

Applications often have to manage some kind of resource. Those could be file
descriptors, sockets, or something like virtual machines. Most resources need
some kind of cleanup, such as making a syscall to close a file descriptor or
deleting a virtual machine. Resource management ties in closely with Structured
Concurrency since it allows expressing the lifetime of those resources in
concurrent and asynchronous applications. The recommended pattern for providing
access to resources is to use `with`-style methods, such as the `func
withTaskGroup(of:returning:body:)` method from the standard library.
`with`-style methods provide scoped access to a resource while making sure the
resource is correctly cleaned up at the end of the scope. For example, a method
providing scoped access to a file descriptor might look like this:

```swift
func withFileDescriptor<R>(_ body: (FileDescriptor) async throws -> R) async throws -> R { ... }
```

Importantly, the file descriptor is only valid inside the `body` closure; hence,
escaping the file descriptor in an unstructured task or into a background thread
is not allowed.
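
A sketch of how such a method could be implemented: acquire the resource, run the body, and clean up on both the success and failure paths. It assumes the `System` module's `FileDescriptor` API, and the `path` parameter is added here for illustration:

```swift
import System

func withFileDescriptor<R>(
    path: FilePath,
    _ body: (FileDescriptor) async throws -> R
) async throws -> R {
    // Acquire the resource before entering the scope.
    let fd = try FileDescriptor.open(path, .readOnly)
    do {
        let result = try await body(fd)
        try fd.close()
        return result
    } catch {
        // Clean up on the error path as well before rethrowing.
        try? fd.close()
        throw error
    }
}
```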

> Note: With future language features such as `~Escapable` types it might be
possible to encode this constraint in the language itself.
## Task executors in Swift on Server applications


Most of the Swift on Server ecosystem is built on top of
[swift-nio](https://github.com/apple/swift-nio) - a high-performance
event-driven networking library. `NIO` has its own concurrency model that
predates Swift
Concurrency; however, `NIO` offers the `NIOAsyncChannel` abstraction to bridge a
`Channel` into a construct that can be interacted with from Swift Concurrency.
One of the goals of the `NIOAsyncChannel` is to enable developers to implement
their business logic using Swift Concurrency instead of `NIO`'s lower-level
`Channel` and `ChannelHandler` types, hence making `NIO` an implementation
detail. You can read more about the `NIOAsyncChannel` in [NIO's Concurrency
documentation](https://swiftpackageindex.com/apple/swift-nio/2.61.1/documentation/niocore/swift-concurrency).
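
Structured concurrency on the server typically takes the shape of a loop that accepts connections and handles each one in a child task, which keeps all per-connection work inside one task tree and allows local reasoning about its lifetime. In the sketch below, `server.newConnections` and `handleConnection` are illustrative placeholders; a `NIOAsyncChannel` plays exactly the role of `connection` here:

```swift
try await withThrowingDiscardingTaskGroup { group in
    for try await connection in server.newConnections {
        group.addTask {
            // All state for this connection lives in this child task;
            // cancelling the group cancels every open connection.
            try await handleConnection(connection)
        }
    }
}
```

Inside `handleConnection`, the connection's inbound messages would in turn be consumed as an `AsyncSequence`, keeping all the code for one connection in one place.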

Highly performant server applications often rely on handling incoming
connections and requests almost synchronously, without incurring unnecessary
allocations or context switches. By default, Swift Concurrency executes any
non-isolated
method on the global concurrent executor. On the other hand, `NIO` has its own
thread pool in the shape of an `EventLoopGroup`. `NIO` picks an `EventLoop` out
of the `EventLoopGroup` for each `Channel` and executes all of that `Channel`'s
I/O on that `EventLoop`. When bridging from `NIO` into Swift Concurrency, by
default the execution has to context-switch between the `Channel`'s `EventLoop`
and one of the threads of the global concurrent executor. To avoid this context
switch, Swift Concurrency introduced the concept of preferred task executors in
[SE-XXX](). When interacting with the `NIOAsyncChannel`, the preferred task
executor can be set to the `Channel`'s `EventLoop`. Whether this is beneficial
or disadvantageous for the performance of the application depends on a couple of
factors:


- How computationally intensive is the logic executed in Swift Concurrency?
- Does the logic make any asynchronous outbound calls?
- How many cores does the application have available?

In the end, each application needs to measure its performance to understand
whether setting a preferred task executor is beneficial.
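
As a sketch of the API shape, scoping work to a preferred executor looks like the following, assuming the `withTaskExecutorPreference(_:operation:)` function available in recent Swift toolchains. The `executor` would typically be backed by the channel's `EventLoop`; its conformance to `TaskExecutor` is assumed here:

```swift
func handle(on executor: any TaskExecutor) async throws {
    try await withTaskExecutorPreference(executor) {
        // Non-isolated async work in this scope prefers to run on
        // `executor`, avoiding hops to the global concurrent executor.
        try await Task.sleep(for: .milliseconds(1))
    }
}
```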