This document tries to explain how the PipeWire graph is scheduled.
Graphs are constructed from linked nodes together with their ports. This results in a dependency graph between nodes. Special care is taken for loopback links so that the graph remains a directed graph.
The server (and clients) have two processing threads:

- a main thread, where the configuration of the graph and the nodes happens, and
- one (or more) data processing threads, where the actual processing happens.

The data processing threads are given realtime priority and are designed to run with as little overhead as possible. All of the node resources, such as buffers, I/O areas and metadata, are set up in shared memory before the node is scheduled to run.

This document describes the processing that happens in the data processing thread after the main thread has configured it.
Nodes are objects with 0 or more input and output ports.
Each node also has:

- an eventfd, used to signal (trigger) the node when it should process data, and
- an activation record, a small structure in shared memory used for scheduling.

The activation record has, among other things, the following information:

- the current status of the node,
- a required counter: the number of dependencies of the node, and
- a pending counter: the number of dependencies that still have to complete in the current cycle before the node can be scheduled.
When two nodes are linked together, the output node becomes a dependency for the input node. This means the input node can only start processing when the output node is finished.
This dependency is reflected in the required counter in the activation record. In the illustration below, B's required field is incremented by 1. The pending field is set to the required field when the graph is started. Node A keeps a list of all targets (B) for which it is a dependency.
This dependency update is only performed when the link is ready (negotiated) and the nodes are ready to schedule (runnable).
Multiple links between A and B will only result in 1 target link between A and B.
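To make the bookkeeping concrete, here is a minimal sketch in C. The structure and function names are invented for illustration and do not correspond to the real PipeWire data structures; the sketch only models the activation counters, the per-node eventfd and the list of targets described above.

    #include <sys/eventfd.h>

    #define MAX_TARGETS 32

    /* Stand-in for the activation record that lives in shared memory. */
    struct activation {
        int status;        /* current state of the node in the cycle */
        int required;      /* number of dependencies of the node */
        int pending;       /* dependencies not yet completed this cycle */
    };

    struct node {
        const char *name;
        int eventfd;                          /* written to trigger the node */
        struct activation act;
        struct node *targets[MAX_TARGETS];    /* nodes that depend on this node */
        int n_targets;
    };

    static void node_init(struct node *n, const char *name)
    {
        n->name = name;
        n->eventfd = eventfd(0, EFD_CLOEXEC); /* signaled when the node may run */
        n->act.status = 0;
        n->act.required = 0;
        n->act.pending = 0;
        n->n_targets = 0;
    }

    /* Register 'tgt' as a target of 'dep': tgt depends on dep and can only be
     * scheduled after dep has finished.  Called at most once per node pair,
     * no matter how many port links exist between the two nodes. */
    static void add_target(struct node *dep, struct node *tgt)
    {
        tgt->act.required++;                  /* one more dependency for tgt */
        dep->targets[dep->n_targets++] = tgt; /* dep decrements tgt when done */
    }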
The graph can only run if there is a driver node that is in some way linked to an active node.
The driver is special because it will have to initiate the processing in the graph. It will use a timer or some sort of interrupt from hardware to start the cycle.
Any node can also be a candidate for a driver (when the node.driver property is true). PipeWire will select the node with the highest priority.driver property as the driver.
Each node is assigned to the driver node it will be scheduled with. The node holds a reference to the driver and increments the required field of the driver. When the node is ready to be scheduled, the driver adds the node to its list of targets and increments the node's required field.
As seen in the illustration above, the driver holds a link to each node it needs to schedule and each node holds a link to the driver. Some nodes hold a link to other nodes.
It is possible that the driver is the same as a node in the graph (for example node A) but conceptually, the links above are still valid.
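Continuing the invented sketch above, assigning nodes to a driver can be expressed with the same add_target() helper, which creates the mutual links just described. After build_example() runs, node A requires 1 dependency (the driver), node B requires 2 (A and the driver) and the driver requires 2 (A and B).

    /* Assign a follower to its driver (sketch, using the structures above).
     * The follower will decrement the driver when it completes, and the
     * driver will decrement (and possibly trigger) the follower when a new
     * cycle starts. */
    static void assign_to_driver(struct node *driver, struct node *follower)
    {
        add_target(follower, driver);  /* follower holds a link to the driver */
        add_target(driver, follower);  /* driver schedules the follower */
    }

    /* The example graph from the text: A -> B plus a separate driver node. */
    static void build_example(struct node *driver, struct node *a, struct node *b)
    {
        add_target(a, b);              /* output node A feeds input node B */
        assign_to_driver(driver, a);
        assign_to_driver(driver, b);
    }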
The driver will then start processing the graph by emitting the ready signal. PipeWire will then:

- set the pending field of each node in the graph to its required field,
- go through the list of targets of the driver and decrement each target's pending field, and
- trigger (write to the eventfd of) every target whose pending field reaches 0.
In our example above, node A and node B have their pending counters decremented. Node A's pending count reaches 0 and it is triggered first (node B starts with 2 pending dependencies and is not triggered yet). The driver itself also has 2 dependencies left and is not triggered (complete) yet.
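In code, starting a cycle could look like the sketch below, again with invented names on top of the structures above rather than the actual PipeWire functions: pending counters are reset to required, the driver's targets are decremented, and every target that reaches 0 is woken by writing to its eventfd.

    #include <stdint.h>
    #include <unistd.h>

    /* Wake a node by writing to its eventfd (the node is now "triggered"). */
    static void trigger(struct node *n)
    {
        uint64_t one = 1;
        write(n->eventfd, &one, sizeof(one));
    }

    /* Decrement the pending count of every target of 'n'.  A target whose
     * pending count reaches 0 has no unfinished dependencies left and is
     * triggered. */
    static void trigger_targets(struct node *n)
    {
        for (int i = 0; i < n->n_targets; i++) {
            struct node *t = n->targets[i];
            if (--t->act.pending == 0)
                trigger(t);
        }
    }

    /* The driver starts a new cycle: reset the pending counters, then
     * decrement its targets.  In the example, node A (pending 1 -> 0) is
     * triggered, while node B (2 -> 1) and the driver still have to wait. */
    static void start_cycle(struct node *driver)
    {
        driver->act.pending = driver->act.required;
        for (int i = 0; i < driver->n_targets; i++) {
            struct node *t = driver->targets[i];
            t->act.pending = t->act.required;
        }
        trigger_targets(driver);
    }

In the real graph these counters live in the shared activation records and have to be updated atomically, since peers in different processes decrement them concurrently; the sketch ignores that.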
When the eventfd is signaled on a node, we say the node is triggered and it will be able to process data. It consumes the input on the input ports and produces more data on the output ports.
After processing, node A goes through its list of targets and decrements each target's pending field (node A has node B and the driver as targets).
In the example above, the driver is decremented (from 2 to 1) but is not yet triggered. Node B is decremented (from 1 to 0) and is triggered by writing to its eventfd.
Node B is scheduled and processes the input from node A. It then goes through the list of targets and decrements the pending fields. It decrements the pending field of the driver (from 1 to 0) and triggers the driver.
The graph always completes after the driver is triggered and scheduled. The pending fields of all the nodes in the target list of the driver are now 0.
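The processing side of a node can be sketched the same way, reusing trigger_targets() from the sketch above. process_node() below is only a placeholder for the real port processing; the point is the order: consume the eventfd, process, then decrement (and possibly trigger) the targets, which is what eventually brings the driver's pending count to 0 and completes the cycle.

    #include <stdint.h>
    #include <unistd.h>

    /* Placeholder for the real work: consume input ports, produce output. */
    static void process_node(struct node *n)
    {
        (void)n;
    }

    /* What a node does in the data thread when its eventfd fires (sketch). */
    static void node_run(struct node *n)
    {
        uint64_t count;

        read(n->eventfd, &count, sizeof(count));  /* consume the trigger */
        process_node(n);                          /* consume input, produce output */
        trigger_targets(n);                       /* wake targets that become ready */
    }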
The driver calculates some stats about cpu time etc.
For remote nodes, the eventfd and the activation are transferred from the server to the client.
This means that writing to the remote client eventfd will wake the client directly without going to the server first.
All remote clients also get the activation and eventfd of the peer and driver they are linked to and can directly trigger peers and drivers without going to the server first.
Remote drivers start the graph cycle directly without going to the server first.
After they complete (and only when the profiler is active), they will trigger an extra eventfd to signal the server that the graph completed. This is used by the server to generate the profiler info.
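The details of the native protocol are out of scope here, but the mechanism that makes these direct wake-ups possible is ordinary file-descriptor passing over the UNIX socket between server and client. The sketch below is not PipeWire code; it only shows how an fd such as a node's eventfd can be handed to another process with SCM_RIGHTS, after which writing to the received fd wakes the owner directly.

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Send one file descriptor (for example the eventfd of a node) over a
     * connected UNIX socket using SCM_RIGHTS ancillary data. */
    static int send_fd(int sock, int fd)
    {
        uint8_t byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = u.buf,
            .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

    /* The receiving process ends up with its own fd referring to the same
     * eventfd, so a peer can wake it without a server round trip:
     *
     *     uint64_t one = 1;
     *     write(received_fd, &one, sizeof(one));
     */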
Normally, a driver wakes up the graph and all the followers then need to process the data in sync. There are cases where this is not ideal, for example when:

- the producer only has new data sporadically (a screen-share source that only produces a frame when something changes), or
- the consumer wants to control the rate at which the graph runs (for example, to match its refresh rate).
In these cases, the driver and follower roles need to be reversed, and a mechanism is needed so that the follower can let the driver know when it is worth processing the graph.
To signal that the graph should be processed, (non-driver) nodes can send a RequestProcess event, which arrives as a RequestProcess command in the driver. The driver can then decide to run the graph or not.
When the graph is started, or partially controlled, by RequestProcess events and commands, we say we have lazy scheduling. The driver then no longer schedules purely according to its own rhythm but also depends on the followers.
We can't just enable lazy scheduling when no follower will emit RequestProcess events or when no driver will listen for RequestProcess commands. Two new node properties are defined:

node.supports-lazy
    0 means lazy scheduling as a driver is not supported
    >1 means lazy scheduling as a driver is supported, with larger values expressing a stronger preference

node.supports-request
    0 means request events as a follower are not supported
    >1 means request events as a follower are supported, with larger values expressing a stronger preference
We can only enable lazy scheduling when the driver has the node.supports-lazy property and at least one follower has the node.supports-request property.
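As an illustration, these are ordinary node properties that a client can set when it creates its node (for example as the properties of a pw_stream or pw_filter). The sketch below only builds the property dictionaries; the keys are the ones named above and the values are example preferences.

    #include <pipewire/pipewire.h>

    /* Properties for a node that offers to be a lazy driver, for example a
     * screen-sharing source that only wants to produce frames on request. */
    static struct pw_properties *lazy_driver_props(void)
    {
        return pw_properties_new(
                "node.driver", "true",        /* can drive the graph */
                "node.supports-lazy", "1",    /* will honor RequestProcess commands */
                NULL);
    }

    /* Properties for a follower that wants to request graph cycles. */
    static struct pw_properties *requesting_follower_props(void)
    {
        return pw_properties_new(
                "node.supports-request", "1",
                NULL);
    }

The resulting dictionary would then be passed when the node is created, for example as the props argument of pw_stream_new_simple().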
A node can end up as a driver (is_driving()) and lazy scheduling can be enabled for it (is_lazy()), which results in the following cases:
driver producer -> node.driver = true -> is_driving() && !is_lazy() -> calls trigger_process() to start the graph
lazy producer -> node.driver = true -> node.supports-lazy = 1 -> is_driving() && is_lazy() -> listens for RequestProcess and calls trigger_process() to start the graph
requesting producer -> node.supports-request = 1 -> !is_driving() && is_lazy() -> emits RequestProcess to suggest starting the graph
follower producer -> !is_driving() && !is_lazy()
driver consumer -> node.driver = true -> is_driving() && !is_lazy() -> calls trigger_process() to start the graph
lazy consumer -> node.driver = true -> node.supports-lazy = 1 -> is_driving() && is_lazy() -> listens for RequestProcess and calls trigger_process() to start the graph
requesting consumer -> node.supports-request = 1 -> !is_driving() && is_lazy() -> emits RequestProcess to suggest starting the graph
follower consumer -> !is_driving() && !is_lazy()
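The four cases can be summarized in a small helper. Here is_driving and is_lazy are taken as plain flags: whether the node was selected as the driver of the graph, and whether lazy scheduling is active for it; the function and its wording are invented for illustration.

    #include <stdbool.h>

    /* Invented summary of the role a node plays, following the list above. */
    static const char *node_role(bool is_driving, bool is_lazy)
    {
        if (is_driving && !is_lazy)
            return "driver: calls trigger_process() at its own rhythm";
        if (is_driving && is_lazy)
            return "lazy driver: waits for RequestProcess, then calls trigger_process()";
        if (!is_driving && is_lazy)
            return "requesting follower: emits RequestProcess to suggest a cycle";
        return "plain follower: only processes when triggered";
    }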
Some use cases:
Screensharing - driver producer, follower consumer
producer
    node.driver = true
consumer
    (no scheduling-related properties)
-> producer selected as driver, consumer is simple follower. lazy scheduling inactive (no lazy driver or no request follower)
The consumer requests new frames from the producer according to its refresh rate when there are RequestProcess commands. -> this throttles the framerate to the consumer but idles when there is no activity on the producer.
producer
consumer
-> consumer is selected as driver (lazy > request), lazy scheduling active (1 lazy driver and at least 1 request follower)
The producer produces the next frame on demand. -> throttles the speed to the consumer without idle.
producer
consumer
-> producer is selected as driver (lazy <= request), lazy scheduling active (1 lazy driver and at least 1 request follower)