
As a first concrete example, the pseudocode below gives the DualPI2
algorithm. DualPI2 follows the structure of the DualQ Coupled AQM
framework in . A simple ramp
function (configured in units of queuing time) with unsmoothed ECN
marking is used for the Native L4S AQM. The ramp can also be configured
as a step function. The PI2 algorithm is used
for the Classic AQM. PI2 is an improved variant of the PIE
AQM .
The pseudocode will be introduced in two passes. The first pass
explains the core concepts, deferring handling of overload to the second
pass. To aid comparison, line numbers are kept in step between the two
passes by using letter suffixes where the longer code needs extra
lines.
All variables are assumed to be floating point in their basic units
(size in bytes, time in seconds, rates in bytes/second, alpha and beta
in Hz, and probabilities from 0 to 1). Constants expressed in k (kilo),
M (mega), G (giga), u (micro), m (milli), %, ... are assumed to be
converted to their appropriate multiple or fraction to represent the
basic units. A real implementation that wants to use integer values
needs to handle appropriate scaling factors and allow accordingly
appropriate resolution of its integer types (including temporary
internal values during calculations).
A full open source implementation for Linux is available at:
https://github.com/L4STeam/sch_dualpi2_upstream and explained in . The specification of the DualQ Coupled AQM for
DOCSIS cable modems and CMTSs is available in
and explained in .
The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq) and the Classic queue (cq). The
pseudocode consists of the following six functions:
The initialization function dualpi2_params_init(...) () that sets parameter
defaults (the API for setting non-default values is omitted for
brevity)
The enqueue function dualpi2_enqueue(lq, cq, pkt) ()
The dequeue function dualpi2_dequeue(lq, cq, pkt) ()
The recurrence function recur(q, likelihood) for de-randomized
ECN marking (shown at the end of ).
The L4S AQM function laqm(qdelay) () used to calculate the
ECN-marking probability for the L4S queue
The base AQM function that implements the PI algorithm
dualpi2_update(lq, cq) ()
used to regularly update the base probability (p'), which is
squared for the Classic AQM as well as being coupled across to the
L4S queue.

It also uses the following functions that are not shown in
full here:
scheduler(), which selects between the head packets of the two
queues; the choice of scheduler technology is discussed later;
cq.len() or lq.len() returns the current length
(aka. backlog) of the relevant queue in bytes;
cq.time() or lq.time() returns the current queuing delay
(aka. sojourn time or service time) of the relevant queue in
units of time (see Note a);
mark(pkt) and drop(pkt) for ECN-marking and dropping a
packet;

In experiments so far (building on experiments with PIE) on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, DualPI2 achieves good results with the default
parameters in . The
parameters are categorised by whether they relate to the Base PI2 AQM,
the L4S AQM or the framework coupling them together. Constants and
variables derived from these parameters are also included at the end
of each category. Each parameter is explained as it is encountered in
the walk-through of the pseudocode below, and the rationale for the
chosen defaults is given so that sensible values can be used in
scenarios other than the regular public Internet.
The overall goal of the code is to maintain the base probability
(p', p-prime as in ), which is
an internal variable from which the marking and dropping probabilities
for L4S and Classic traffic (p_L and p_C) are derived, with p_L in
turn being derived from p_CL. The probabilities p_CL and p_C are
derived in lines 4 and 5 of the dualpi2_update() function () then used in the dualpi2_dequeue()
function where p_L is also derived from p_CL at line 6 (). The code walk-through below
builds up to explaining that part of the code eventually, but it
starts from packet arrival.
When packets arrive, first a common queue limit is checked as shown
in line 2 of the enqueuing pseudocode in . This assumes a shared buffer
for the two queues (Note b discusses the merits of separate buffers).
In order to avoid any bias against larger packets, 1 MTU of space is
always allowed and the limit is deliberately tested before
enqueue.
If the limit is not exceeded, the packet is timestamped in line 4.
This assumes that queue delay is measured using the sojourn time
technique (see Note a for alternatives).
At lines 5-9, the packet is classified and enqueued to the Classic
or L4S queue dependent on the least significant bit of the ECN field
in the IP header (line 6). Packets with a codepoint having an LSB of 0
(Not-ECT and ECT(0)) will be enqueued in the Classic queue. Otherwise,
ECT(1) and CE packets will be enqueued in the L4S queue. Optional
additional packet classification flexibility is omitted for brevity
(see ).
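The enqueue steps above can be sketched in Python. This is an illustrative rendering of the pseudocode, not the reference Linux code; the ByteQueue helper and the limit argument are hypothetical stand-ins for the real queue structures:

```python
import time
from collections import deque

class ByteQueue:
    """Minimal FIFO tracking its backlog in bytes (hypothetical helper,
    standing in for the lq/cq objects of the pseudocode)."""
    def __init__(self):
        self.pkts = deque()
    def byte_len(self):
        return sum(p['size'] for p in self.pkts)
    def append(self, pkt):
        self.pkts.append(pkt)

def dualpi2_enqueue(lq, cq, pkt, limit, mtu=1500):
    # Line 2: test the shared buffer limit before enqueue, always allowing
    # 1 MTU of space so that larger packets are not biased against.
    if lq.byte_len() + cq.byte_len() + mtu > limit:
        return False                      # drop(pkt): buffer full
    # Line 4: timestamp the packet for sojourn-time measurement.
    pkt['tstamp'] = time.monotonic()
    # Lines 5-9: classify on the LSB of the ECN field. ECT(1)=0b01 and
    # CE=0b11 go to the L4S queue; Not-ECT=0b00 and ECT(0)=0b10 to Classic.
    (lq if pkt['ecn'] & 0b01 else cq).append(pkt)
    return True
```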
The dequeue pseudocode () is repeatedly called whenever
the lower layer is ready to forward a packet. It schedules one packet
for dequeuing (or zero if the queue is empty) then returns control to
the caller, so that it does not block while that packet is being
forwarded. While making this dequeue decision, it also makes the
necessary AQM decisions on dropping or marking. The alternative of
applying the AQMs at enqueue would shift some processing from the
critical time when each packet is dequeued. However, it would also add
a whole queue of delay to the control signals, making the control loop
sloppier (for a typical RTT it would double the Classic queue's
feedback delay).
All the dequeue code is contained within a large while loop so that
if it decides to drop a packet, it will continue until it selects a
packet to schedule. Line 3 of the dequeue pseudocode is where the
scheduler chooses between the L4S queue (lq) and the Classic queue
(cq). Detailed implementation of the scheduler is not shown (see
discussion later).
If an L4S packet is scheduled, in lines 7 and 8 the packet is
ECN-marked with likelihood p_L. The recur() function at the end of
is used, which is
preferred over random marking because it avoids delay due to
randomization when interpreting congestion signals, but it still
desynchronizes the saw-teeth of the flows. Line 6 calculates p_L
as the maximum of the coupled L4S probability p_CL and the
probability from the native L4S AQM p'_L. This implements the
max() function shown in to
couple the outputs of the two AQMs together. Of the two
probabilities input to p_L in line 6:
p'_L is calculated per packet in line 5 by the laqm()
function (see ),
Whereas p_CL is maintained by the dualpi2_update() function
which runs every Tupdate (Tupdate is set in line 13 of ).

If a Classic packet is scheduled, lines 10 to 17 drop or mark
the packet with probability p_C.
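The dequeue decisions described above can be sketched as follows. This is an illustrative sketch of the pseudocode, not the reference implementation; the queue dicts and the scheduler and laqm callables are hypothetical stand-ins:

```python
import random

def recur(q, likelihood):
    """De-randomized ECN marking: accumulate the marking likelihood and
    emit a mark each time the accumulator exceeds 1."""
    q['count'] += likelihood
    if q['count'] > 1:
        q['count'] -= 1
        return True
    return False

def dualpi2_dequeue(lq, cq, scheduler, laqm, p_CL, p_C, now):
    while lq['pkts'] or cq['pkts']:
        if scheduler(lq, cq) is lq:                # line 3: pick a queue
            pkt = lq['pkts'].pop(0)
            p_prime_L = laqm(now - pkt['tstamp'])  # line 5: native L4S AQM
            p_L = max(p_prime_L, p_CL)             # line 6: couple via max()
            if recur(lq, p_L):                     # lines 7-8: mark, never drop
                pkt['ce'] = True
            return pkt
        pkt = cq['pkts'].pop(0)                    # Classic queue scheduled
        if p_C > random.random():                  # lines 10-17: drop or mark
            if pkt.get('ect'):
                pkt['ce'] = True                   # ECN-capable: mark instead
                return pkt
            continue                               # drop(pkt), pick another
        return pkt
    return None                                    # both queues empty
```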

The Native L4S AQM algorithm () is a ramp function, similar to
the RED algorithm, but simplified as follows:
The extent of the ramp is defined in units of queuing delay,
not bytes, so that configuration remains invariant as the queue
departure rate varies.
It uses instantaneous queueing delay, which avoids the
complexity of smoothing, but also avoids embedding a worst-case
RTT of smoothing delay in the network (see ).
The ramp rises linearly directly from 0 to 1, not to an
intermediate value of p'_L as RED would, because there is no need
to keep ECN marking probability low.
Marking does not have to be randomized. Determinism is used
instead of randomness, to reduce the delay necessary to smooth out
the noise of randomness from the signal.

The ramp function requires two configuration parameters, the
minimum threshold (minTh) and the width of the ramp (range), both in
units of queuing time, as shown in lines 18 & 19 of the
initialization function in . The ramp function can be
configured as a step (see Note c).
Although the DCTCP paper
recommends an ECN marking threshold of 0.17*RTT_typ, it also shows
that the threshold can be much shallower with hardly any worse
under-utilization of the link (because the amplitude of DCTCP's
sawteeth is so small). Based on extensive experiments, for the public
Internet the default minimum ECN marking threshold in is considered a
good compromise, even though it is a significantly smaller fraction of
RTT_typ.
A minimum marking threshold parameter (Th_len) in transmission
units (default 2 MTU) is also necessary to ensure that the ramp does
not trigger excessive marking on slow links. The code in lines 24-27
of the initialization function () converts 2 MTU into time
units and shifts the ramp so that the min threshold is no shallower
than this floor.
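The ramp and the MTU-based floor described above can be sketched together as follows. The parameter values in the checks below are made-up illustrations, not the defaults:

```python
def floor_min_threshold(minTh, th_len_bytes, link_rate_bytes_per_s):
    """Lines 24-27 of the initialization: convert Th_len (default 2 MTU)
    into time units and shift the ramp so minTh is no shallower."""
    floor = th_len_bytes / link_rate_bytes_per_s
    return max(minTh, floor)

def laqm(qdelay, minTh, range_):
    """Native L4S AQM: an unsmoothed linear ramp from 0 to 1 over
    'range_' seconds of queuing delay above minTh. With range_ = 0 the
    ramp degenerates into a DCTCP-style step; the ordering of the tests
    below means no divide-by-zero can occur in that case (Note c)."""
    if qdelay >= minTh + range_:
        return 1.0
    if qdelay > minTh:
        return (qdelay - minTh) / range_
    return 0.0
```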
The coupled marking probability, p_CL, depends on the base
probability (p'), which is kept up to date by the core PI algorithm
in , executed every Tupdate.
Note that p' solely depends on the queuing time in the Classic
queue. In line 2, the current queuing delay (curq) is evaluated from
how long the head packet was in the Classic queue (cq). The function
cq.time() (not shown) subtracts the time stamped at enqueue from the
current time (see Note a) and implicitly takes the current queuing
delay as 0 if the queue is empty.
The algorithm centres on line 3, which is a classical
Proportional-Integral (PI) controller that alters p' dependent on: a)
the error between the current queuing delay (curq) and the target
queuing delay, 'target'; and b) the change in queuing delay since the
last sample. The name 'PI' represents the fact that the second factor
(how fast the queue is growing) is Proportional to load while the
first is the Integral of the load (so it removes any standing queue in
excess of the target).
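One Tupdate step of this controller, together with the probabilities derived from p' (p_CL = k*p' and p_C = p'^2, as described in the walk-through), can be sketched as follows; this is an illustrative rendering, not the reference code:

```python
def dualpi2_update(curq, prev_q, p_prime, target, alpha, beta, k=2):
    """One Tupdate step: the PI controller adjusts the base probability
    p' from the Classic queue delay, then the coupled and Classic
    probabilities are derived from it."""
    # The error from target is the Integral term; the growth since the
    # last sample is the Proportional term.
    p_prime += alpha * (curq - target) + beta * (curq - prev_q)
    p_prime = min(max(p_prime, 0.0), 1.0)   # bound p' to [0,1] in corner cases
    p_CL = min(k * p_prime, 1.0)            # coupled L4S probability, k*p'
    p_C = p_prime ** 2                      # Classic probability, p'^2
    return p_prime, p_CL, p_C
```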
The target parameter can be set based on local knowledge, but the
aim is for the default to be a good compromise for anywhere in the
intended deployment environment---the public Internet. The target
queuing delay is related to the typical base RTT, RTT_typ, by two
factors, shown in the comment on line 9 of as target = RTT_typ * 0.22 *
2. These factors ensure that, in a large proportion of cases (say
90%), the sawtooth variations in RTT will fit within the buffer
without underutilizing the link. Frankly, these factors are educated
guesses, but with the emphasis closer to 'educated' than to 'guess'
(see for background investigations):
RTT_typ is taken as 34 ms. This is based on an average CDN
latency measured in each country weighted by the number of
Internet users in that country to produce an overall weighted
average for the Internet .
The factor 0.22 is a geometry factor that characterizes the
shape of the sawteeth of prevalent Classic congestion controllers.
The geometry factor is the difference between the minimum and the
average queue delays of the sawteeth, relative to the base RTT.
For instance, the geometry factor of standard Reno is 0.5.
According to the census of congestion controllers conducted by
Mishra et al in Jul–Oct 2019
, most Classic TCP traffic uses Cubic.
And, according to the analysis in , if
running over a PI2 AQM, a large proportion of this Cubic traffic
would be in its Reno-Friendly mode, which has a geometry factor of
0.21 (Linux implementation). The rest of the Cubic traffic would
be in true Cubic mode, which has a geometry factor of 0.32.
Without modelling the sawtooth profiles from all the other less
prevalent congestion controllers, we estimate a 9:1 weighted
average of these two, resulting in an average geometry factor of
0.22.
The factor 2 is a safety factor that increases the target
queue to allow for the distribution of RTT_typ around its mean.
Otherwise the target queue would only avoid underutilization for
those users below the mean. It also provides a safety margin for
the proportion of paths in use that span beyond the distance
between a user and their local CDN. Currently no data is available
on the variance of queue delay around the mean in each region, so
there is plenty of room for this guess to become more
educated.
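The arithmetic behind the default target then works out as follows (a worked check of the factors above):

```python
RTT_typ = 0.034    # 34 ms: CDN latency weighted by Internet users per country
# Geometry factor: 9:1 weighted average of Cubic's Reno-friendly mode (0.21)
# and true Cubic mode (0.32).
geometry = (9 * 0.21 + 1 * 0.32) / 10
safety = 2         # margin for the spread of RTT_typ around its mean

target = RTT_typ * round(geometry, 2) * safety
print(round(target * 1000, 2))   # about 15 ms: the default target queuing delay
```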

The two 'gain factors' in line 3 of , alpha and beta, respectively
weight how strongly each of the two elements (Integral and
Proportional) alters p'. They are in units of 'per second of delay' or
Hz, because they transform differences in queueing delay into changes
in probability (assuming probability has a value from 0 to 1).
alpha and beta determine how much p' ought to change after each
update interval (Tupdate). For smaller Tupdate, p' should change by
the same amount per second, but in finer more frequent steps. So alpha
depends on Tupdate (see line 13 of the initialization function in
). It is best to update
p' as frequently as possible, but Tupdate will probably be constrained
by hardware performance. As shown in line 13, the update interval
should be frequent enough to update at least once in the time taken
for the target queue to drain ('target') as long as it updates at
least three times per maximum RTT. Tupdate defaults to 16 ms in the
reference Linux implementation because it has to be rounded to a
multiple of 4 ms. For link rates from 4 to 200 Mb/s and a maximum RTT
of 100 ms, it has been verified through extensive testing that
Tupdate = 16 ms (as also recommended in ) is
sufficient.
The choice of alpha and beta also determines the AQM's stable
operating range. The AQM ought to change p' as fast as possible in
response to changes in load without over-compensating and therefore
causing oscillations in the queue. Therefore, the values of alpha and
beta also depend on the RTT of the expected worst-case flow
(RTT_max).
The maximum RTT of a PI controller (RTT_max in line 10 of ) is not an absolute maximum,
but more instability (more queue variability) sets in for long-running
flows with an RTT above this value. The propagation delay halfway
round the planet and back in glass fibre is 200 ms. However, hardly
any traffic traverses such extreme paths and, since the significant
consolidation of Internet traffic between 2007 and 2009 , a high and growing proportion of all Internet
traffic (roughly two-thirds at the time of writing) has been served
from content distribution networks (CDNs) or 'cloud' services
distributed close to end-users. The Internet might change again, but
for now, designing for a maximum RTT of 100ms is a good compromise
between faster queue control at low RTT and some instability on the
occasions when a longer path is necessary.
Recommended derivations of the gain constants alpha and beta can be
approximated for Reno over a PI2 AQM as: alpha = 0.1 * Tupdate /
RTT_max^2; beta = 0.3 / RTT_max, as shown in lines 14 & 15 of
. These are derived
from the stability analysis in . For the default
values of Tupdate=16 ms and RTT_max = 100 ms, they result in alpha =
0.16; beta = 3.2 (discrepancies are due to rounding). These defaults
have been verified with a wide range of link rates, target delays and
a range of traffic models with mixed and similar RTTs, short and long
flows, etc.
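Plugging the defaults into these approximations gives the following (note that the simple formula for beta yields 3.0 rather than the stated 3.2; as the text says, the discrepancy is due to rounding in the underlying derivation):

```python
Tupdate = 0.016   # 16 ms (rounded to a multiple of 4 ms in Linux)
RTT_max = 0.100   # 100 ms design maximum
k = 2             # default coupling factor

alpha = 0.1 * Tupdate / RTT_max ** 2   # integral gain, in Hz
beta = 0.3 / RTT_max                   # proportional gain, in Hz
alpha_L4S = k * alpha                  # effective L4S gain is factored up by k
```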
In corner cases, p' can overflow the range [0,1] so the resulting
value of p' has to be bounded (omitted from the pseudocode). Then, as
already explained, the coupled and Classic probabilities are derived
from the new p' in lines 4 and 5 of as p_CL = k*p' and p_C = p'^2.
Because the coupled L4S marking probability (p_CL) is factored up
by k, the dynamic gain parameters alpha and beta are also inherently
factored up by k for the L4S queue. So, the effective gain factor for
the L4S queue is k*alpha (with defaults alpha = 0.16 Hz and k=2,
effective L4S alpha = 0.32 Hz).
Unlike in PIE , alpha and beta do not
need to be tuned every Tupdate dependent on p'. Instead, in PI2, alpha
and beta are independent of p' because the squaring applied to Classic
traffic tunes them inherently. This is explained in , which also explains why this more principled approach
removes the need for most of the heuristics that had to be added to
PIE.
Nonetheless, an implementer might wish to add selected heuristics
to either AQM. For instance the Linux reference DualPI2 implementation
includes the following:
Prior to enqueuing an L4S packet, if the L queue contains <2
packets, the packet is flagged to suppress any native L4S AQM
marking at dequeue (which depends on sojourn time);
Classic and coupled marking or dropping (i.e. based on p_C
and p_CL from the PI controller) is only applied to a packet if
the respective queue length in bytes is > 2 MTU (prior to
enqueuing the packet or after dequeuing it, depending on whether
the AQM is configured to be applied at enqueue or dequeue);
In the WRR scheduler, the 'credit' indicating which queue
should transmit is only changed if there are packets in both
queues (i.e. if there is actual resource contention). This
means that a properly paced L flow might never be delayed by the
WRR. The WRR credit is reset in favour of the L queue when the
link is idle.

An implementer might also wish to add other heuristics,
e.g. burst protection or enhanced
burst protection .
Notes:
The drain rate of the queue can vary
if it is scheduled relative to other queues, or to cater for
fluctuations in a wireless medium. To auto-adjust to changes in
drain rate, the queue needs to be measured in time, not bytes or
packets , .
Queuing delay could be measured directly by storing a per-packet
time-stamp as each packet is enqueued, and subtracting this from
the system time when the packet is dequeued. If time-stamping is
not easy to introduce with certain hardware, queuing delay could
be predicted indirectly by dividing the size of the queue by the
predicted departure rate, which might be known precisely for some
link technologies (see for example ).
Line 2 of the dualpi2_enqueue() function () assumes an implementation
where lq and cq share common buffer memory. An alternative
implementation could use separate buffers for each queue, in which
case the arriving packet would have to be classified first to
determine which buffer to check for available space. The choice is
a trade off; a shared buffer can use less memory whereas separate
buffers isolate the L4S queue from tail-drop due to large bursts
of Classic traffic (e.g. a Classic Reno TCP during slow-start
over a long RTT).
There has been some concern that using the step function of
DCTCP for the Native L4S AQM requires end-systems to smooth the
signal for an unnecessarily large number of round trips to ensure
sufficient fidelity. A ramp is no worse than a step in initial
experiments with existing DCTCP. Therefore, it is recommended that
a ramp is configured in place of a step, which will allow
congestion control algorithms to investigate faster smoothing
algorithms. A ramp is more general than a
step, because an operator can effectively turn the ramp into a
step function, as used by DCTCP, by setting the range to zero.
There will not be a divide-by-zero problem at line 5 of because, if
minTh is equal to maxTh, the condition for this ramp calculation
cannot arise.

repeats the
dequeue function of , but
with overload details added. Similarly repeats the core PI algorithm
of with overload details
added. The initialization, enqueue, L4S AQM and recur functions are
unchanged.
In line 10 of the initialization function (), the maximum Classic drop
probability p_Cmax = min(1/k^2, 1) or 1/4 for the default coupling
factor k=2. p_Cmax is the point at which it is deemed that the Classic
queue has become persistently overloaded, so it switches to using
drop, even for ECN-capable packets. ECT packets that are not dropped
can still be ECN-marked.
In practice, 25% has been found to be a good threshold to preserve
fairness between ECN-capable and non-ECN-capable traffic. This
protects the queues against both temporary overload from responsive
flows and more persistent overload from any unresponsive traffic that
falsely claims to be responsive to ECN.
When the Classic ECN marking probability reaches the p_Cmax
threshold (1/k^2), the marking probability coupled to the L4S queue,
p_CL will always be 100% for any k (by equation (1) in ). So, for readability, the constant p_Lmax is
defined as 1 in line 22 of the initialization function (). This is intended to ensure
that the L4S queue starts to introduce dropping once ECN-marking
saturates at 100% and can rise no further. The 'Prague L4S'
requirements state
that, when an L4S congestion control detects a drop, it falls back to
a response that coexists with 'Classic' Reno congestion control. So it
is correct that, when the L4S queue drops packets, it drops them
proportional to p'^2, as if they are Classic packets.
Both these switch-overs are triggered by the tests for overload
introduced in lines 4b and 12b of the dequeue function (). Lines 8c to 8g drop L4S
packets with probability p'^2. Lines 8h to 8i mark the remaining
packets with probability p_CL. Given p_Lmax = 1, all remaining packets
will be marked because, to have reached the else block at line 8b,
p_CL >= 1.
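The L4S-side switch-over described above can be sketched as follows; the function and constant names are illustrative stand-ins for the pseudocode, not the reference implementation:

```python
import random

K = 2
P_CMAX = min(1 / K ** 2, 1)   # 1/4 for the default coupling factor k = 2
P_LMAX = 1.0                  # L4S drop starts once coupled marking saturates

def l4s_overload_dequeue(pkt, p_prime, p_CL):
    """Sketch of lines 8c-8i: once p_CL saturates at p_Lmax, drop L4S
    packets with probability p'^2 (as if they were Classic) and
    ECN-mark the rest (with p_Lmax = 1, every remaining packet is
    marked)."""
    if p_CL < P_LMAX:
        return 'normal'                  # normal L4S marking path applies
    if p_prime ** 2 > random.random():   # lines 8c-8g: Classic-like drop
        return 'drop'
    pkt['ce'] = True                     # lines 8h-8i: mark the remainder
    return 'mark'
```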
Lines 2c to 2d in the core PI algorithm () deal with overload of the L4S
queue when there is no Classic traffic. This is necessary, because the
core PI algorithm maintains the appropriate drop probability to
regulate overload, but it depends on the length of the Classic queue.
If there is no Classic queue the naive PI update function in would drop nothing, even if the L4S
queue were overloaded - so tail drop would have to take over (lines 2
and 3 of ).
Instead, the test at line 2a of the full PI update function in
keeps delay on target using drop. If line 2a finds that the Classic
queue is empty, line 2d measures the current queue delay using the
L4S queue
instead. While the L4S queue is not overloaded, its delay will always
be tiny compared to the target Classic queue delay. So p_CL will be
driven to zero, and the L4S queue will naturally be governed solely by
p'_L from the native L4S AQM (lines 5 and 6 of the dequeue algorithm
in ). But, if
unresponsive L4S source(s) cause overload, the DualQ transitions
smoothly to L4S marking based on the PI algorithm. If overload
increases further, it naturally transitions from marking to dropping
by the switch-over mechanism already described.
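The fallback in lines 2a to 2d can be sketched as follows (the function and argument names are illustrative):

```python
def pi_queue_delay(cq_time, lq_time):
    """Lines 2a-2d of the full PI update: normally measure the Classic
    queue delay, but fall back to the L4S queue delay when the Classic
    queue is empty, so the PI controller still regulates unresponsive
    L4S overload (cq_time is None here when the Classic queue is
    empty)."""
    if cq_time is None:
        return lq_time   # line 2d: measure the L4S queue instead
    return cq_time       # line 2: the normal case
```

While the L4S queue is not overloaded, the delay returned by the fallback is tiny compared to target, so p_CL is driven to zero, exactly as the text describes.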
The choice of scheduler technology is critical to overload
protection (see ).
A well-understood weighted scheduler such as weighted round
robin (WRR) is recommended. As long as the scheduler weight for
Classic is small (e.g. 1/16), its exact value is unimportant
because it does not normally determine capacity shares. The weight
is only important to prevent unresponsive L4S traffic starving
Classic traffic. This is because capacity sharing between the
queues is normally determined by the coupled congestion signal,
which overrides the scheduler, by making L4S sources leave roughly
equal per-flow capacity available for Classic flows.
Alternatively, a time-shifted FIFO (TS-FIFO) could be used. It
works by selecting the head packet that has waited the longest,
biased against the Classic traffic by a time-shift of tshift. To
implement time-shifted FIFO, the scheduler() function in line 3 of
the dequeue code would simply be implemented as the scheduler()
function at the bottom of in
. For the public Internet a good
value for tshift is 50ms. For private networks with smaller
diameter, about 4*target would be reasonable. TS-FIFO is a very
simple scheduler, but complexity might need to be added to address
some deficiencies (which is why it is not recommended over
WRR):
TS-FIFO does not fully isolate latency in the L4S queue
from uncontrolled bursts in the Classic queue;
TS-FIFO is only appropriate if time-stamping of packets is
feasible;
Even if time-stamping is supported, the sojourn time of the
head packet is always stale. For instance, if a burst arrives
at an empty queue, the sojourn time will only measure the
delay of the burst once the burst is over, even though the
queue knew about it from the start. At the cost of more
operations and more storage, a 'scaled sojourn time' metric of
queue delay can be used, which is the sojourn time of a packet
scaled by the ratio of the queue sizes when the packet
departed and arrived .

A strict priority scheduler would be inappropriate, because it
would starve Classic if L4S was overloaded.
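The TS-FIFO selection rule described above can be sketched as follows; the head-wait arguments are illustrative stand-ins for sojourn times computed from the enqueue timestamps:

```python
def ts_fifo_scheduler(lq_head_wait, cq_head_wait, tshift=0.050):
    """Time-shifted FIFO sketch: serve the head packet that has waited
    longest, after biasing against Classic by tshift (50 ms is the
    suggested public-Internet value). A wait of None means that queue
    is empty."""
    if cq_head_wait is None:
        return 'lq'
    if lq_head_wait is None:
        return 'cq'
    # Classic is served only if its head has waited more than tshift
    # longer than the L4S head.
    return 'cq' if cq_head_wait - tshift > lq_head_wait else 'lq'
```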

As another example of a DualQ Coupled AQM algorithm, the pseudocode
below gives the Curvy RED based algorithm. Although the AQM was designed
to be efficient in integer arithmetic, to aid understanding it is first
given using floating point arithmetic (). Then, one possible optimization for
integer arithmetic is given, also in pseudocode (). To aid comparison, the line numbers are
kept in step between the two by using letter suffixes where the longer
code needs extra lines.
The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq) and the Classic queue (cq) and
consists of the following five functions:
The initialization function cred_params_init(...) () that sets parameter
defaults (the API for setting non-default values is omitted for
brevity);
The dequeue function cred_dequeue(lq, cq, pkt) ();
The scheduling function scheduler(), which selects between the
head packets of the two queues.

It also uses the following functions that are either shown
elsewhere, or not shown in full here:
The enqueue function, which is identical to that used for
DualPI2, dualpi2_enqueue(lq, cq, pkt) in ;
mark(pkt) and drop(pkt) for ECN-marking and dropping a
packet;
cq.len() or lq.len() returns the current length
(aka. backlog) of the relevant queue in bytes;
cq.time() or lq.time() returns the current queuing delay
(aka. sojourn time or service time) of the relevant queue in
units of time (see Note a in ).

Because Curvy RED was evaluated before DualPI2, certain
improvements introduced for DualPI2 were not evaluated for Curvy RED.
In the pseudocode below, the straightforward improvements have been
added on the assumption they will provide similar benefits, but that
has not been proven experimentally. They are: i) a conditional
priority scheduler instead of strict priority; ii) a time-based
threshold for the native L4S AQM; iii) ECN support for the Classic
AQM. A recent evaluation has proved that a minimum ECN-marking
threshold (minTh) greatly improves performance, so this is also
included in the pseudocode.
Overload protection has not been added to the Curvy RED pseudocode
below so as not to detract from the main features. It would be added
in exactly the same way as in for
the DualPI2 pseudocode. The native L4S AQM uses a step threshold, but
a ramp like that described for DualPI2 could be used instead. The
scheduler uses the simple TS-FIFO algorithm, but it could be replaced
with WRR.
The Curvy RED algorithm has not been maintained or evaluated to the
same degree as the DualPI2 algorithm. In initial experiments on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, Curvy RED achieved good results with the default
parameters in .
The parameters are categorised by whether they relate to the
Classic AQM, the L4S AQM or the framework coupling them together.
Constants and variables derived from these parameters are also
included at the end of each category. These are the raw input
parameters for the algorithm. A configuration front-end could accept
more meaningful parameters (e.g. RTT_max and RTT_typ) and convert
them into these raw parameters, as has been done for DualPI2 in . Where necessary, parameters are
explained further in the walk-through of the pseudocode below.
The dequeue pseudocode () is
repeatedly called whenever the lower layer is ready to forward a
packet. It schedules one packet for dequeuing (or zero if the queue is
empty) then returns control to the caller, so that it does not block
while that packet is being forwarded. While making this dequeue
decision, it also makes the necessary AQM decisions on dropping or
marking. The alternative of applying the AQMs at enqueue would shift
some processing from the critical time when each packet is dequeued.
However, it would also add a whole queue of delay to the control
signals, making the control loop very sloppy.
The code is written assuming the AQMs are applied on dequeue (Note
). All the dequeue
code is contained within a large while loop so that if it decides to
drop a packet, it will continue until it selects a packet to schedule.
If both queues are empty, the routine returns NULL at line 20. Line 3
of the dequeue pseudocode is where the conditional priority scheduler
chooses between the L4S queue (lq) and the Classic queue (cq). The
time-shifted FIFO scheduler is shown at lines 28-33, which would be
suitable if simplicity is paramount (see Note ).
Within each queue, the decision whether to forward, drop or mark is
taken as follows (to simplify the explanation, it is assumed that
U=1):
If the test at line 3 determines there is an
L4S packet to dequeue, the tests at lines 5b and 5c determine
whether to mark it. The first is a simple test of whether the L4S
queue delay (lq.time()) is greater than a step threshold T (Note
). The second
test is similar to the random ECN marking in RED, but with the
following differences: i) marking depends on queuing time, not
bytes, in order to scale for any link rate without being
reconfigured; ii) marking of the L4S queue depends on a logical OR
of two tests; one against its own queuing time and one against the
queuing time of the other (Classic)
queue; iii) the tests are against the instantaneous queuing time
of the L4S queue, but a smoothed average of the other (Classic)
queue; iv) the queue is compared with the maximum of U random
numbers (but if U=1, this is the same as the single random number
used in RED). Specifically, in line 5a the
coupled marking probability p_CL is set to the amount by which the
averaged Classic queueing delay Q_C exceeds the minimum queuing
delay threshold (minTh) all divided by the L4S scaling parameter
range_L. range_L represents the queuing delay (in seconds) added
to minTh at which marking probability would hit 100%. Then in line
5c (if U=1) the result is compared with a uniformly distributed
random number between 0 and 1, which ensures that, over range_L,
marking probability will linearly increase with queueing time.
If the scheduler at line 3 chooses to
dequeue a Classic packet and jumps to line 7, the test at line 10b
determines whether to drop or mark it. But before that, line 9a
updates Q_C, which is an exponentially weighted moving average
(Note ) of
the queuing time of the Classic queue, where cq.time() is the
current instantaneous queueing time of the packet at the head of
the Classic queue (zero if empty) and gamma is the EWMA constant
(default 1/32, see line 12 of the initialization function).
Lines 10a and 10b implement the Classic
AQM. In line 10a the averaged queuing time Q_C is divided by the
Classic scaling parameter range_C, in the same way that queuing
time was scaled for L4S marking. This scaled queuing time will be
squared to compute Classic drop probability so, before it is
squared, it is effectively the square root of the drop
probability, hence it is given the variable name sqrt_p_C. The
squaring is done by comparing it with the maximum out of two
random numbers (assuming U=1). Comparing it with the maximum out
of two is the same as the logical 'AND' of two tests, which
ensures drop probability rises with the square of queuing
time.
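The per-queue decisions above (lines 5a-5c and 9a-10b, with U=1) can be sketched as follows; this is an illustrative rendering of the Curvy RED pseudocode, with made-up parameter values in the checks:

```python
import random

def maxrand(u):
    """Maximum of u uniform random numbers, as used by Curvy RED."""
    return max(random.random() for _ in range(u))

def l4s_mark(lq_time, Q_C, T, minTh, range_L, U=1):
    """Lines 5a-5c: mark on a step over the L4S queue's own
    instantaneous delay, OR if the smoothed Classic delay Q_C, scaled
    over range_L above minTh, beats the max of U random numbers."""
    p_CL = (Q_C - minTh) / range_L
    return lq_time > T or p_CL > maxrand(U)

def classic_drop(Q_C, cq_time, range_C, gamma=1/32, U=1):
    """Lines 9a-10b: update the EWMA of Classic queuing time, then
    compare the scaled delay (effectively sqrt(p_C)) with the max of
    2U random numbers, which squares the drop probability."""
    Q_C += gamma * (cq_time - Q_C)   # EWMA with constant gamma
    sqrt_p_C = Q_C / range_C
    return Q_C, sqrt_p_C > maxrand(2 * U)
```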

The AQM functions in each queue (lines 5c & 10b) are two cases
of a new generalization of RED called Curvy RED, motivated as follows.
When the performance of this AQM was compared with FQ-CoDel and PIE,
their goal of holding queuing delay to a fixed target seemed
misguided. As the number of flows
increases, if the AQM does not allow host congestion controllers to
increase queuing delay, it has to introduce abnormally high levels of
loss. Then loss rather than queuing becomes the dominant cause of
delay for short flows, due to timeouts and tail losses.
Curvy RED constrains delay with a softened target that allows some
increase in delay as load increases. This is achieved by increasing
drop probability on a convex curve relative to queue growth (the
square curve in the Classic queue, if U=1). Like RED, the curve hugs
the zero axis while the queue is shallow. Then, as load increases, it
introduces a growing barrier to higher delay. But, unlike RED, it
requires only two parameters, not three. The disadvantage of Curvy RED
(compared to a PI controller for example) is that it is not adapted to
a wide range of RTTs. Curvy RED can be used as is when the RTT range
to be supported is limited, otherwise an adaptation mechanism is
required.
From our limited experiments with Curvy RED so far, recommended
values of these parameters are: S_C = -1; g_C = 5; T = 5 * MTU at the
link rate (about 1ms at 60Mb/s) for the range of base RTTs typical on
the public Internet. explains why these
parameters are applicable whatever rate link this AQM implementation
is deployed on and how the parameters would need to be adjusted for a
scenario with a different range of RTTs (e.g. a data centre). The
setting of k depends on policy (see
and respectively for its recommended
setting and guidance on alternatives).
There is also a cUrviness parameter, U, which is a small positive
integer. It is likely to take the same hard-coded value for all
implementations, once experiments have determined a good value. Only
U=1 has been used in experiments so far, but results might be even
better with U=2 or higher.
Notes:
The alternative of applying the
AQMs at enqueue would shift some processing from the critical time
when each packet is dequeued. However, it would also add a whole
queue of delay to the control signals, making the control loop
sloppier (for a typical RTT it would double the Classic queue's
feedback delay). On a platform where packet timestamping is
feasible, e.g. Linux, it is also easiest to apply the AQMs at
dequeue because that is where queuing time is also measured.
WRR better isolates
the L4S queue from large delay bursts in the Classic queue, but it
is slightly less simple than TS-FIFO. If WRR were used, a low
default Classic weight (e.g. 1/16) would need to be
configured in place of the time shift in line 5 of the
initialization function ().
A step function is shown for
simplicity. A ramp function (see and the discussion around it
in ) is recommended, because
it is more general than a step and has the potential to enable L4S
congestion controls to converge more rapidly.
An EWMA is only one possible way
to filter bursts; other more adaptive smoothing methods could be
valid and it might be appropriate to decrease the EWMA faster than
it increases, e.g. by using the minimum of the smoothed and
instantaneous queue delays, min(Q_C, cq.time()).
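The difference between the plain EWMA and the min() variant suggested in this note can be seen in a toy example (illustrative values, not from the pseudocode):

```python
def ewma(Q_C, sample, gamma=1/32):
    # Plain EWMA step, as in line 9a.
    return Q_C + gamma * (sample - Q_C)

# Suppose a 20 ms standing queue suddenly drains to empty.
Q_C = 0.020
smoothed = ewma(Q_C, 0.0)      # the EWMA alone decays only slowly
filtered = min(smoothed, 0.0)  # min with the instantaneous delay
                               # tracks the decrease immediately
```

The EWMA still reports nearly 20 ms one sample after the queue empties, whereas the min-filtered value falls to zero at once, i.e. it decreases faster than it increases.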

Although code optimization depends on the platform, the following
notes explain where the design of Curvy RED was particularly motivated
by efficient implementation.
The Classic AQM at line 10b calls maxrand(2*U), which gives twice
as much curviness as the call to maxrand(U) in the marking function at
line 5c. This is the trick that implements the square rule in equation
(1) (). This is based on the fact that,
given a number X from 1 to 6, the probability that two dice throws
will both be less than X is the square of the probability that one
throw will be less than X. So, when U=1, the L4S marking function is
linear and the Classic dropping function is squared. If U=2, L4S would
be a square function and Classic would be quartic. And so on.
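The dice argument can be checked exhaustively. The same reasoning gives P(maxrand(u) < p) = p^u, so doubling the argument of maxrand() squares the response curve:

```python
from fractions import Fraction
from itertools import product

def p_all_below(x, throws):
    # Exact probability that every one of `throws` dice comes up
    # below x, by enumerating all possible outcomes.
    outcomes = list(product(range(1, 7), repeat=throws))
    hits = sum(all(d < x for d in dice) for dice in outcomes)
    return Fraction(hits, len(outcomes))
```

For every X from 1 to 6, the probability that two throws both land below X is exactly the square of the single-throw probability, and four throws give the fourth power, mirroring U=1 (linear vs. squared) and U=2 (squared vs. quartic).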
The maxrand(u) function in lines 16-21 simply generates u random
numbers and returns the maximum. Typically, maxrand(u) could be run in
parallel out of band. For instance, if U=1, the Classic queue would
require the maximum of two random numbers. So, instead of calling
maxrand(2*U) in-band, the maximum of every pair of values from a
pseudorandom number generator could be generated out-of-band, and held
in a buffer ready for the Classic queue to consume.
The two ranges, range_L and range_C are expressed as powers of 2 so
that division can be implemented as a right bit-shift (>>) in
lines 5 and 10 of the integer variant of the pseudocode ().
For the integer variant of the pseudocode, an integer version of
the rand() function used at line 25 of the maxrand() function would
be arranged to return an integer in the range 0 <= maxrand() < 2^32
(not shown). This would scale up all the floating point
probabilities in the range [0,1] by 2^32.
Queuing delays are also scaled up by 2^32, but in two stages: i) In
line 9 queuing time cq.ns() is returned in integer nanoseconds, making
the value about 2^30 times larger than when the units were seconds,
ii) then in lines 5 and 10 an adjustment of -2 to the right bit-shift
multiplies the result by 2^2, to complete the scaling by 2^32.
In line 8 of the initialization function, the EWMA constant gamma
is represented as an integer power of 2, g_C, so that in line 9 of the
integer code the division needed to weight the moving average can be
implemented by a right bit-shift (>> g_C).
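The three integer tricks above can be put together in a short sketch (all values hypothetical; note that 2^30 only approximates 10^9, so a small discrepancy against the floating point result is expected and is absorbed into parameter tuning):

```python
# Floating point reference: sqrt_p_C = Q_C / range_C, with range_C
# expressed as a power of two: range_C = 2^S_C seconds.
S_C = -1                     # recommended value from the text
Q_C_s = 0.020                # 20 ms averaged Classic queuing delay
sqrt_p_C = Q_C_s / 2**S_C    # true value in [0, 1]

# Integer version: keep the delay in nanoseconds (about 2^30 times
# larger than in seconds), then shift by (S_C - 2); the adjustment
# of -2 contributes the remaining factor of 2^2, so the result is
# scaled by roughly 2^32 overall.
Q_C_ns = int(Q_C_s * 1e9)
shift = S_C - 2              # negative here, so shift left
scaled = Q_C_ns << -shift if shift < 0 else Q_C_ns >> shift

# EWMA weighting by right bit-shift (line 9 of the integer code),
# with gamma = 2^-g_C = 1/32:
g_C = 5
Q_C_ns += (25_000_000 - Q_C_ns) >> g_C
```

The integer result agrees with sqrt_p_C * 2^32 to within the 10^9-versus-2^30 approximation (about 7% low).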
Where Classic flows compete for the same capacity, their relative
flow rates depend not only on the congestion probability, but also on
their end-to-end RTT (= base RTT + queue delay). The rates of
competing Reno flows are roughly
inversely proportional to their RTTs. Cubic exhibits similar
RTT-dependence when in Reno-compatibility mode, but is less
RTT-dependent otherwise.
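The inverse proportionality follows from the standard steady-state Reno approximation, rate ~= (MSS/RTT) * sqrt(3/(2*p)): two flows seeing the same drop probability p at a shared bottleneck get rates in inverse proportion to their RTTs. A sketch, with illustrative values:

```python
from math import sqrt

def reno_rate(mss, rtt, p):
    # Standard steady-state Reno throughput approximation:
    # rate ~= (MSS / RTT) * sqrt(3 / (2 * p))
    return (mss / rtt) * sqrt(3 / (2 * p))

# Two Reno flows seeing the same drop probability at one bottleneck:
r_10ms = reno_rate(1500, 0.010, 0.01)
r_40ms = reno_rate(1500, 0.040, 0.01)
```

The flow with a 4x shorter RTT gets 4x the throughput, whatever the common loss level.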
Until the early experiments with the DualQ Coupled AQM, the
importance of the reasonably large Classic queue in mitigating
RTT-dependence had not been appreciated. Appendix A.1.6 of uses numerical examples to
explain why bloated buffers had concealed the RTT-dependence of
Classic congestion controls before that time. Then it explains why,
the more that queuing delays have reduced, the more that
RTT-dependence has surfaced as a potential starvation problem for long
RTT flows.
Given that congestion control on end-systems is voluntary, there is
no reason why it has to be voluntarily RTT-dependent. Therefore requires L4S congestion controls
to be significantly less RTT-dependent than the standard Reno
congestion control. Following this
approach means there is no need for network devices to address
RTT-dependence, although there would be no harm if they did, which
per-flow queuing inherently does.
At the time of writing, the range of approaches to RTT-dependence
in L4S congestion controls has not settled. Therefore, the guidance on
the choice of the coupling factor in
is given against DCTCP, which has
well-understood RTT-dependence. The guidance is given for various RTT
ratios, so that it can be adapted to future circumstances.
   RTT_C / RTT_L    Reno    Cubic
   -------------    ----    -----
         1          k'=1    k'=0
         2          k'=2    k'=1
         3          k'=2    k'=2
         4          k'=3    k'=2
         5          k'=3    k'=3
In the above appendices that give example DualQ Coupled algorithms,
to aid efficient implementation, a coupling factor that is an integer
power of 2 is always used. k' is always used to denote the power. k'
is related to the coupling factor k in Equation (1) () by k=2^k'.
To determine the appropriate coupling factor policy, the operator
first has to judge whether it wants DCTCP flows to have roughly equal
throughput with Reno or with Cubic (because, even in its
Reno-compatibility mode, Cubic is about 1.4 times more aggressive than
Reno). Then the operator needs to decide at what ratio of RTTs it
wants DCTCP and Classic flows to have roughly equal throughput. For
example choosing k'=0 (equivalent to k=1) will make DCTCP throughput
roughly the same as Cubic, if their RTTs are the same.
However, even if the base RTTs are the same, the actual RTTs are
unlikely to be the same, because Classic (Cubic or Reno) traffic needs
roughly a typical base round trip of queue to avoid under-utilization
and excess drop. Whereas L4S (DCTCP) does not. The operator might
still choose this policy if it judges that DCTCP throughput should be
rewarded for keeping its own queue short.
On the other hand, the operator will choose one of the higher
values for k', if it wants to slow DCTCP down to roughly the same
throughput as Classic flows, to compensate for Classic flows slowing
themselves down by causing themselves extra queuing delay.
The values for k' in the table are derived from the formulae below,
which were developed in :
For localized traffic from a particular ISP's data centre, using
the measured RTTs, it was calculated that a value of k'=3 (equivalent
to k=8) would achieve throughput equivalence, and experiments verified
the formula very closely.
For a typical mix of RTTs from local data centres and across the
general Internet, a value of k'=1 (equivalent to k=2) is recommended
as a good workable compromise.
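The k' values in the table above can be reproduced by a short computation. This is a hypothetical reconstruction consistent with the table, not necessarily the exact published formulae: it assumes the form k' = round(log2(f * RTT_C / RTT_L)), with an aggressiveness factor f of about 1.64 against Reno and about 1.19 against Cubic (roughly 1.64/1.4, consistent with Cubic being about 1.4 times more aggressive than Reno).

```python
from math import log2

RENO_FACTOR = 1.64   # assumed DCTCP-vs-Reno rate factor
CUBIC_FACTOR = 1.19  # assumed DCTCP-vs-Cubic factor (~1.64 / 1.4)

def k_prime(rtt_ratio, factor):
    # Hypothetical form: k' = round(log2(factor * RTT_C / RTT_L));
    # the coupling factor itself is then k = 2^k'.
    return round(log2(factor * rtt_ratio))

table = [(r, k_prime(r, RENO_FACTOR), k_prime(r, CUBIC_FACTOR))
         for r in range(1, 6)]
```

Under these assumed factors, an RTT ratio of 5 against Reno gives k'=3 (k=8), matching the data-centre result quoted above, and a ratio of 2 against Cubic gives the recommended general-Internet compromise k'=1 (k=2).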
