| Internet-Draft | qp-based SRv6 LB | February 2026 |
| Liu, et al. | Expires 30 August 2026 | [Page] |
This document describes the use of Segment Routing over IPv6 (SRv6) path selection based on Queue Pair (QP) in Intelligent Computing Wide Area Network (WAN) for Data Center Interconnection (DCI), optimizing load balancing for predictable workloads.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 30 August 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The proliferation of RDMA technology in Intelligent Computing Data Center (DC) fabrics has revolutionized high-performance computing, distributed storage, and machine learning workloads.¶
These workloads generate large, predictable flows that demand ultra-low latency, high bandwidth, and precise congestion control to ensure optimal performance. Traditional networking methods, like hash-based Equal-Cost Multi-Path (ECMP) load balancing, struggle with insufficient entropy due to the low diversity of RDMA (specifically RDMA over Converged Ethernet v2, abbreviated as RoCEv2) [IBTA-SPEC] flow identifiers. This often results in fabric hotspots, network congestion, and performance degradation.¶
RoCEv2 messages traverse an intelligent computing Wide Area Network (WAN) used for Data Center Interconnection (DCI) in the same way as they traverse the DC fabric, and they likewise generate elephant flows, leading to fabric hotspots, network congestion, and performance degradation.¶
Segment Routing over IPv6 (SRv6) [RFC8986] provides flexible traffic engineering by supporting policy-based programmability and explicit path steering. SRv6 policy enables deterministic path steering and fine-grained traffic control for RoCEv2 flows, ensuring predictable performance.¶
This document details SRv6 path selection based on Queue Pair (QP) to optimize load balancing for predictable RoCEv2 flows in intelligent computing WAN by ensuring all packets within a QP follow the same path.¶
The following terms are used in this document:¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
Traditional ECMP load balancing faces several challenges with RoCEv2 flows:¶
By encoding an ordered list of segments in the packet header, SRv6 (Policy) allows the ingress device to directly steer RoCEv2 workload traffic through the fabric.¶
FlowSpec, as a traffic scheduling tool, can guide RoCEv2 flows to different SRv6 policies based on their characteristics (such as Dest QP), and forward them along different paths. QP-based FlowSpec protocol extensions are beyond the scope of this document.¶
QP-to-SRv6 Policy Mapping:¶
Enhanced Hash-Based Segment List (SL) Scheduling:¶
A typical WAN topology for DCI is shown in the figure below.¶
+-------------+
| DC1 |
| |
| QP1 ~ QP4 |
+------+------+
+---------------------|---------------------+
| +----+----+ WAN |
| | PE1 | |
| +----+----+ |
| | |
| +---------+-----+-----+---------+ |
| | | | | |
| +--+--+ +--+--+ +--+--+ +--+--+ |
| | P1 | | P2 | | P3 | | P4 | |
| +--+--+ +--+--+ +--+--+ +--+--+ |
| | | | | |
| +---------+-----+-----+---------+ |
| | |
| +----+----+ |
| | PE2 | |
| +----+----+ |
+---------------------|---------------------+
+------+------+
| DC2 |
| |
| QP1 ~ QP4 |
+-------------+
The topology consists of two Provider Edge (PE) devices, and each of the PEs is connected to four Provider (P) devices and one DC.¶
In this example, there are two DCs, between which four QPs can be established to transmit RoCEv2 workloads.¶
In the above topology, there are four paths that pass through the WAN from DC1 to DC2.¶
All paths are listed below:¶
During the Day-0 cluster fabric bring-up, the topology is provisioned with SRv6 SIDs on the PE and P devices. These SIDs are statically configured, making them independent of any dynamic routing protocol state.¶
PE1 can create two SRv6 Policies with PE2 as the endpoint. Each SRv6 Policy contains two SLs. The following is provisioned:¶
The fabric is now orchestrating four AI workloads. During this orchestration, the collective communication among DCs necessitates periodic data transmission from DC1 to DC2.¶
Between DC1 and DC2, each AI workload is assigned to a separate QP: QP1, QP2, QP3, and QP4.¶
During AI job computation, RoCEv2 packets are first redirected to different SRv6 policies based on QP, achieving coarse-grained traffic classification and isolation; then, within a single policy, the QP is used as a hash key for SL selection, distributing multiple QP flows evenly across the candidate paths (SLs) of that policy to achieve fine-grained load balancing.¶
Assume that the AI training task traffic carried by each QP has different requirements for link quality. The traffic of QP1 and QP2 requires a low-latency path (Policy 1), while the traffic of QP3 and QP4 requires a high-bandwidth path (Policy 2). On PE1, the QP-to-SRv6 Policy Mapping Table is created as shown below:¶
| QP Range | SRv6 Policy Name |
|---|---|
| QP1, QP2 | Policy 1 |
| QP3, QP4 | Policy 2 |
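As an illustration, the mapping table above can be modeled as a range lookup on PE1. The data structure and names below (QP_POLICY_MAP, select_policy) are hypothetical and shown only to make the classification step concrete; they are not part of any protocol.

```python
# Hypothetical QP-to-SRv6-Policy mapping table on PE1 (illustration only).
# Keys are inclusive (low, high) QP ranges; values are policy names.
QP_POLICY_MAP = {
    (1, 2): "Policy 1",  # low-latency path for QP1 and QP2
    (3, 4): "Policy 2",  # high-bandwidth path for QP3 and QP4
}

def select_policy(dest_qp: int) -> str:
    """Return the SRv6 Policy for a Destination QP, matching by range."""
    for (lo, hi), policy in QP_POLICY_MAP.items():
        if lo <= dest_qp <= hi:
            return policy
    # QPs outside any configured range fall back to a default policy.
    return "default"
```

A range-based table keeps the configuration compact when many consecutive QPs share the same path requirements.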
Within the selected SRv6 Policy, a QP-based hash algorithm is used to select one specific SL for forwarding each RoCEv2 packet, as shown below:¶
PE1 will encapsulate each RoCEv2 packet with an outer IPv6 header and SRH using the selected SL, and then forward it to the appropriate link.¶
The PE1->P1 link carries the traffic of QP1, the PE1->P2 link carries the traffic of QP2, the PE1->P3 link carries the traffic of QP3, and the PE1->P4 link carries the traffic of QP4.¶
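The per-QP SL selection described above can be sketched as follows. The CRC32 hash is an illustrative choice, not an algorithm mandated by this document; any hash that deterministically maps a given QP to the same SL would preserve per-QP path pinning.

```python
import zlib

def select_sl(dest_qp: int, segment_lists: list) -> str:
    """Pick one SL for a QP; every packet of the same QP gets the same SL,
    so packet order within a QP is preserved across the fabric."""
    key = dest_qp.to_bytes(3, "big")  # DestQP is a 24-bit field
    return segment_lists[zlib.crc32(key) % len(segment_lists)]
```

Because the hash input is only the Destination QP, the mapping is stable across packets and across restarts, which is what pins QP1..QP4 to their respective PE1->P links in the example above.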
On the ingress device, the control plane must support mapping QP ranges to SRv6 Policies via protocol extension or local configuration. For non-RoCEv2 traffic, the system MUST revert to the standard five-tuple hash for SL selection.¶
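The dispatch between the QP-based hash and the five-tuple fallback might look like the following sketch. The packet representation and field names (udp_dport, dest_qp, and so on) are illustrative assumptions, not a defined data model.

```python
import zlib

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCEv2

def hash_key(pkt: dict) -> bytes:
    """Build the hash input: Destination QP for RoCEv2 traffic,
    the standard five-tuple for everything else."""
    if pkt.get("udp_dport") == ROCEV2_UDP_PORT:
        return pkt["dest_qp"].to_bytes(3, "big")
    five_tuple = (pkt["src"], pkt["dst"], pkt["proto"],
                  pkt["sport"], pkt["dport"])
    return "|".join(str(f) for f in five_tuple).encode()

def select_sl_index(pkt: dict, n_sls: int) -> int:
    """Map a packet to an SL index in [0, n_sls)."""
    return zlib.crc32(hash_key(pkt)) % n_sls
```

The single entry point keeps forwarding behavior uniform: only the hash input changes, so non-RoCEv2 flows keep the entropy of the five-tuple while RoCEv2 flows stay pinned per QP.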
The ingress devices require deep packet inspection capability to parse BTH headers, programmable hash engines with configurable input fields, sufficient TCAM/SRAM for QP classification mapping tables, and support for multiple active SRv6 policies with multiple SLs.¶
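For reference, the Destination QP occupies bytes 5..7 of the 12-byte BTH, which immediately follows the UDP header (destination port 4791) in a RoCEv2 packet [IBTA-SPEC]. A minimal parsing sketch:

```python
def parse_dest_qp(udp_payload: bytes) -> int:
    """Extract the 24-bit Destination QP from a RoCEv2 Base Transport Header.

    BTH layout (12 bytes): opcode(1) | flags(1) | PKey(2) |
    reserved(1) | DestQP(3) | AckReq/reserved(1) | PSN(3).
    """
    if len(udp_payload) < 12:
        raise ValueError("payload too short for a BTH")
    return int.from_bytes(udp_payload[5:8], "big")

# Example BTH whose DestQP field encodes QP number 2.
bth = bytes([0x04, 0x40, 0xFF, 0xFF,   # opcode, flags, PKey
             0x00, 0x00, 0x00, 0x02,   # reserved, DestQP = 2
             0x80, 0x00, 0x00, 0x01])  # AckReq/reserved, PSN
```

In hardware, this is a fixed-offset field extraction once the parser has matched UDP destination port 4791, which is why programmable parse graphs can support it without general deep packet inspection.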
When network congestion or failure occurs, operators can flexibly configure QP range to SRv6 Policy mapping strategies on the ingress device to guide RoCEv2 flows to the appropriate path.¶
Malicious actors could spoof QP values to bypass mapping policies, cause hash collisions, or exhaust specific network paths. Mitigations may include cryptographic validation of RoCEv2 packets and QP whitelisting/blacklisting.¶
QP values may reveal application-level information, so QP values SHOULD be anonymized or encrypted.¶
The additional packet processing (such as parsing BTH headers) could be exploited for Denial of Service (DoS) attacks; therefore, implementations MUST support graceful degradation mechanisms (such as rate limiting) under attack.¶
This document has no IANA actions.¶