<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.30 (Ruby 3.4.8) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-calabria-bmwg-ai-fabric-terminology-00" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="AI Fabric Benchmarking Terminology">Benchmarking Terminology for AI Network Fabrics</title>
    <seriesInfo name="Internet-Draft" value="draft-calabria-bmwg-ai-fabric-terminology-00"/>
    <author initials="F." surname="Calabria" fullname="Fernando Calabria">
      <organization>Cisco</organization>
      <address>
        <email>fcalabri@cisco.com</email>
      </address>
    </author>
    <author initials="C." surname="Pignataro" fullname="Carlos Pignataro">
      <organization>Blue Fern Consulting</organization>
      <address>
        <email>carlos@bluefern.consulting</email>
      </address>
    </author>
    <author initials="Q." surname="Wu" fullname="Qin Wu">
      <organization>Huawei</organization>
      <address>
        <email>bill.wu@huawei.com</email>
      </address>
    </author>
    <author initials="G." surname="Fioccola" fullname="Giuseppe Fioccola">
      <organization>Huawei</organization>
      <address>
        <email>giuseppe.fioccola@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="February" day="26"/>
    <area>Operations and Management</area>
    <workgroup>Benchmarking Methodology Working Group</workgroup>
    <keyword>benchmarking</keyword>
    <keyword>terminology</keyword>
    <keyword>AI training</keyword>
    <keyword>AI inference</keyword>
    <keyword>network fabric</keyword>
    <keyword>RDMA</keyword>
    <keyword>RoCEv2</keyword>
    <keyword>UET</keyword>
    <keyword>collective communication</keyword>
    <keyword>AllReduce</keyword>
    <keyword>JCT</keyword>
    <keyword>TTFT</keyword>
    <keyword>KV cache</keyword>
    <abstract>
      <?line 99?>

<t>This document defines benchmarking terminology for evaluating
Ethernet-based network fabrics used in distributed Artificial
Intelligence (AI) training and inference workloads. It provides a
unified vocabulary consolidating and extending terms from RFC 1242,
RFC 8238, and the companion AI fabric methodology documents,
establishing precise, vendor-neutral definitions for collective
communication primitives, RDMA transport mechanisms (RoCEv2 and Ultra
Ethernet Transport), congestion control behaviors, AI-specific Key
Performance Indicators (KPIs), and fabric topology concepts.</t>
      <t>This document is a companion to draft-bmwg-ai-fabric-training-bench-00
and draft-bmwg-ai-fabric-inference-bench-00. Those documents should not
be applied without first consulting the terminology defined herein.
Where definitions herein overlap with RFC 1242 or RFC 8238, the
AI-fabric-context definition in this document takes precedence.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://fcalabri.github.io/bmwg-ai-fabric-terminology/draft-calabria-bmwg-ai-fabric-terminology-00.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-calabria-bmwg-ai-fabric-terminology/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/fcalabri/bmwg-ai-fabric-terminology"/>.</t>
    </note>
  </front>
  <middle>
    <?line 118?>

<section anchor="introduction">
      <name>Introduction</name>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
      <section anchor="scope-and-purpose">
        <name>Scope and Purpose</name>
        <t>This document defines terminology specifically for benchmarking
Ethernet-based AI network fabrics in controlled laboratory
environments. The defined terms cover:
distributed AI training collective communication patterns, LLM
inference serving architectures, RDMA transport semantics (RoCEv2
and UET), congestion control mechanisms, fabric topology
characteristics, and performance metric definitions.</t>
        <t>This document does not define acceptance criteria, performance
requirements, or configuration recommendations. It does not address
benchmarking of live operational networks, intra-node (NVLink/PCIe)
interconnects, or storage networking.</t>
      </section>
      <section anchor="relationship-to-existing-bmwg-work">
        <name>Relationship to Existing BMWG Work</name>
        <t>This document extends the foundational BMWG terminology established
in <xref target="RFC1242"/> (network interconnect benchmarking terminology) and
<xref target="RFC8238"/> (data center benchmarking terminology). Where terms are
defined in those RFCs, this document provides AI fabric context
extensions; the core definitions remain as established. This document
also extends the test methodology framework of <xref target="RFC2544"/> and
<xref target="RFC8239"/> as applied in the companion AI fabric methodology
documents.</t>
      </section>
      <section anchor="relationship-to-companion-documents">
        <name>Relationship to Companion Documents</name>
        <t>This document is one of three companion Internet-Drafts addressing AI
fabric benchmarking:</t>
        <ul spacing="normal">
          <li>
            <t>draft-calabria-bmwg-ai-fabric-terminology-00 (this document):
Terminology definitions.</t>
          </li>
          <li>
            <t><xref target="TRAINING-BENCH"/>: Benchmarking methodology for AI training
workloads.</t>
          </li>
          <li>
            <t><xref target="INFERENCE-BENCH"/>: Benchmarking methodology for AI inference
serving workloads.</t>
          </li>
        </ul>
        <t>Implementers and evaluators <bcp14>SHOULD</bcp14> read this terminology document
before applying the companion methodology documents. Terms defined
here are used normatively in those documents and are not redefined
there unless the specific workload context introduces a substantive
difference, which is noted explicitly.</t>
      </section>
    </section>
    <section anchor="general-benchmarking-terms">
      <name>General Benchmarking Terms</name>
      <t>The following terms establish the general measurement framework
applicable to all AI fabric benchmarking activities.</t>
      <table anchor="tab-gen-bench">
        <name>General Benchmarking Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>AI Fabric</strong></td>
            <td align="left">The dedicated Ethernet backend network interconnecting accelerators (GPUs/XPUs) for distributed AI training and inference workloads. Typically implemented as a non-blocking Clos (fat-tree) topology running RoCEv2 or UET transport. Distinct from the front-end (management/storage) network.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DUT</strong></td>
            <td align="left">Device Under Test. The network element(s) whose performance characteristics are being measured. In AI fabric benchmarking the DUT is one or more fabric elements: leaf switches, spine switches, NICs, or the complete fabric assembly.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SUT</strong></td>
            <td align="left">System Under Test. The complete AI compute system including accelerators, NICs, the fabric DUT, and serving/training software, when end-to-end metrics are the measurement objective.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RT</strong></td>
            <td align="left">Router Tester / Traffic Generator. Test equipment capable of generating and receiving network traffic at specified rates with nanosecond-resolution timestamping sufficient for the measurements defined in the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>JFI</strong></td>
            <td align="left">Jain's Fairness Index. A scalar measure of flow-level throughput fairness across n flows: <tt>JFI = (Σxᵢ)² / (n · Σxᵢ²)</tt> where xᵢ is the throughput of flow i. A value of 1.0 indicates perfect fairness; lower values indicate disparity. <strong><bcp14>SHOULD</bcp14></strong> be computed per <xref target="RFC1242"/> reporting conventions.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Offered Load</strong></td>
            <td align="left">The total traffic rate presented to the DUT from test equipment, expressed as a fraction of line rate (0–100%) or as absolute bit/s. Offered load is controlled independently of DUT absorption, enabling characterization of saturation behavior.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Trial Duration</strong></td>
            <td align="left">The time interval over which a single measurement is conducted. For AI fabric tests, the <strong><bcp14>RECOMMENDED</bcp14></strong> minimum is 60 seconds for throughput tests and 300 seconds for soak/stability tests, per the methodology in <xref target="RFC2544"/> as extended herein.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Warmup Period</strong></td>
            <td align="left">A mandatory pre-measurement interval during which traffic is sent but results are not recorded. Ensures adaptive routing tables, PFC watermarks, and DCQCN/UET congestion controllers reach steady state before measurement begins. <strong><bcp14>RECOMMENDED</bcp14></strong> minimum: 10 seconds.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Binary Search</strong></td>
            <td align="left">An iterative test procedure for determining the maximum offered load at which a DUT meets a specified acceptance criterion (e.g., zero packet loss). The search halves the candidate load range at each iteration, converging to a resolution of 0.1% offered load within 10 iterations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Percentile Latency</strong></td>
            <td align="left">A latency statistic expressing that the specified fraction of all measured latency samples fall at or below the reported value. Denoted Pxx (e.g., P50, P95, P99, P99.9). Tail latency (P99 and above) is especially relevant for AI fabric benchmarking because SLO violations are determined by worst-case, not median, performance.</td>
          </tr>
        </tbody>
      </table>
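      <t>As a non-normative illustration, the JFI formula and the Binary
Search procedure defined above can be sketched as follows (Python;
function names and the example acceptance criterion are illustrative
and not part of this specification):</t>
      <sourcecode type="python"><![CDATA[
# Illustrative sketch, not normative: JFI and the binary-search load
# procedure as defined in the table above.

def jfi(throughputs):
    """JFI = (sum(x_i))^2 / (n * sum(x_i^2)); 1.0 = perfect fairness."""
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (s * s) / (n * sq)

def binary_search_load(passes, lo=0.0, hi=100.0, resolution=0.1):
    """Highest offered load (% of line rate) meeting the acceptance
    criterion `passes(load) -> bool`. Halving a 0-100% range reaches
    0.1% resolution within 10 iterations (100 / 2**10 < 0.1)."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        if passes(mid):
            best, lo = mid, mid   # criterion met: search higher
        else:
            hi = mid              # criterion failed: search lower
    return best
]]></sourcecode>
      <t>For example, four flows at identical throughput yield JFI = 1.0,
and a DUT that meets the criterion up to 73.3% load is located to
within 0.1% of that value.</t>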
    </section>
    <section anchor="collective-communication-terms">
      <name>Collective Communication Terms</name>
      <t>The following terms define the collective communication operations that
are the primary traffic sources in distributed AI workloads.</t>
      <table anchor="tab-collect-comm">
        <name>Collective Communication Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Collective Operation</strong></td>
            <td align="left">A coordinated communication pattern executed simultaneously across all accelerators in a training or inference group. Core collectives: AllReduce (gradient aggregation), AllGather (parameter distribution), ReduceScatter (partial reduction + scatter), and AllToAll (expert dispatch in MoE models).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllReduce</strong></td>
            <td align="left">A collective in which each participant contributes a tensor and all participants receive the element-wise sum (or other reduction) of all contributions. The dominant communication primitive in data-parallel and tensor-parallel training. BusBW is the primary KPI.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllGather</strong></td>
            <td align="left">A collective in which each participant contributes a shard of a tensor and all participants receive the concatenation of all shards. Used in tensor-parallel (Megatron-style) layers to reconstruct distributed activations or parameters.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ReduceScatter</strong></td>
            <td align="left">A collective combining an element-wise reduction with a scatter, so each participant receives a distinct slice of the reduced result. Used in ZeRO-stage optimizer strategies and as the first half of a ring-AllReduce.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>AllToAll</strong></td>
            <td align="left">A collective in which each participant sends a distinct payload to every other participant and receives a distinct payload from every other participant. The critical collective for Mixture-of-Experts token dispatch. Generates N(N−1) independent point-to-point flows for N participants.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Ring Algorithm</strong></td>
            <td align="left">An AllReduce (or AllGather/ReduceScatter) algorithm structured as a logical ring of participants. Each participant sends to its right neighbor and receives from its left neighbor in 2(N−1) steps. The BusBW algorithm factor is 2(N−1)/N; the ring is bandwidth-optimal, with per-phase link efficiency (N−1)/N approaching 100% for large N. Standard baseline for BusBW calculation.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>BusBW</strong></td>
            <td align="left">Bus Bandwidth. Effective per-accelerator throughput during a collective, normalizing for algorithm overhead: <tt>BusBW = (data_size × algo_factor) / elapsed_time</tt>. For ring AllReduce, <tt>algo_factor = 2(N−1)/N</tt>. Enables comparison across cluster sizes and collective algorithms.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>CCL</strong></td>
            <td align="left">Collective Communication Library. A software library providing optimized implementations of collective operations (AllReduce, AllGather, etc.) over a specific transport. The CCL implementation <strong><bcp14>MUST</bcp14></strong> be documented in the test report.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SPMD</strong></td>
            <td align="left">Single Program Multiple Data. The execution model underlying bulk-synchronous distributed training, in which all accelerators execute identical computation on distinct data partitions, synchronizing at collective barriers between steps.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Bulk Synchronous Parallel (BSP)</strong></td>
            <td align="left">A distributed computation model structured as alternating compute and communicate phases with a global synchronization barrier between phases. Standard training workloads follow BSP: forward pass → backward pass → AllReduce gradient sync → optimizer step.</td>
          </tr>
        </tbody>
      </table>
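      <t>As a non-normative illustration, the BusBW formula for a ring
AllReduce defined above can be computed as follows (Python; the
function name and example values are illustrative):</t>
      <sourcecode type="python"><![CDATA[
# Illustrative sketch, not normative: BusBW for a ring AllReduce,
# per BusBW = (data_size * algo_factor) / elapsed_time with
# algo_factor = 2(N-1)/N.

def ring_allreduce_busbw(data_size_bytes, elapsed_s, n_ranks):
    """Effective per-accelerator bus bandwidth in bytes/s."""
    algo_factor = 2.0 * (n_ranks - 1) / n_ranks
    return data_size_bytes * algo_factor / elapsed_s

# Example: a 1 GiB tensor reduced across 8 ranks in 50 ms
# (algo_factor = 2*7/8 = 1.75).
bw = ring_allreduce_busbw(2**30, 0.050, 8)
]]></sourcecode>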
    </section>
    <section anchor="distributed-parallelism-strategy-terms">
      <name>Distributed Parallelism Strategy Terms</name>
      <t>The following terms define the parallelism strategies used in
distributed AI model training and inference, which determine traffic
patterns and fabric requirements.</t>
      <table anchor="tab-distri-parallel">
        <name>Distributed Parallelism Strategy Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Data Parallelism (DP)</strong></td>
            <td align="left">A distributed training strategy replicating the full model on each accelerator, partitioning the training dataset across replicas. Gradient synchronization after each backward pass requires an AllReduce across all DP ranks. Memory-efficient for small models; communication overhead scales with parameter count.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Tensor Parallelism (TP)</strong></td>
            <td align="left">A distributed training and inference strategy partitioning individual weight matrices across multiple accelerators. Each rank computes a partial result; AllGather or ReduceScatter collectives are required within each layer to aggregate results. Dominant parallelism within a node (intra-node).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Pipeline Parallelism (PP)</strong></td>
            <td align="left">A distributed strategy assigning contiguous groups of transformer layers to distinct stages (accelerators or nodes). Each stage processes one microbatch and forwards activations to the next stage. Generates point-to-point inter-stage traffic across the fabric (activations and gradients).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Expert Parallelism (EP)</strong></td>
            <td align="left">A parallelism strategy for Mixture-of-Experts models distributing expert sub-networks across accelerators. Each token is routed to its designated experts (typically top-K of E total experts), requiring AllToAll communication for dispatch. Wide EP (e.g., 96-way) generates dense inter-node AllToAll at every MoE layer.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>MoE</strong></td>
            <td align="left">Mixture of Experts. A transformer architecture replacing dense feed-forward layers with a set of E expert sub-networks, of which only top-K experts (typically K=2 or K=4) are activated per token via a learned router. MoE enables large model capacity with sub-linear compute, but introduces AllToAll communication requirements proportional to E and sequence length.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DP Attention</strong></td>
            <td align="left">Data Parallelism applied to the attention computation, where the KV cache is partitioned across data-parallel ranks. Each rank holds 1/DP_SIZE of the KV cache; AllToAll communication exchanges attention outputs. Used in inference to reduce per-accelerator memory footprint for long contexts.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ZeRO</strong></td>
            <td align="left">Zero Redundancy Optimizer. A memory optimization strategy for data-parallel training that shards model states (parameters, gradients, optimizer states) across DP ranks instead of replicating them. Stage 1 shards optimizer states; Stage 2 adds gradient sharding; Stage 3 adds parameter sharding. Each stage increases AllGather/ReduceScatter communication.</td>
          </tr>
        </tbody>
      </table>
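      <t>As a non-normative illustration, the per-step gradient
synchronization traffic generated by Data Parallelism can be estimated
as follows, assuming a ring AllReduce and its 2(N−1)/N algorithm
factor (Python; the function name and example values are
illustrative):</t>
      <sourcecode type="python"><![CDATA[
# Illustrative sketch, not normative: per-rank bytes moved by the
# gradient AllReduce at each training step under Data Parallelism,
# assuming a ring algorithm.

def dp_sync_bytes_per_rank(param_count, bytes_per_grad, n_ranks):
    """Each DP rank moves ~2(N-1)/N times the gradient size per step."""
    grad_bytes = param_count * bytes_per_grad
    return grad_bytes * 2.0 * (n_ranks - 1) / n_ranks

# Example: 7e9 parameters, fp16 gradients (2 bytes), 64 DP ranks
# -> roughly 27.6 GB per rank per step.
per_step = dp_sync_bytes_per_rank(7e9, 2, 64)
]]></sourcecode>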
    </section>
    <section anchor="network-transport-terms">
      <name>Network Transport Terms</name>
      <section anchor="rocev2-and-rdma-terms">
        <name>RoCEv2 and RDMA Terms</name>
        <t>The following terms define RDMA and RoCEv2 transport semantics as
used in AI fabric benchmarking.</t>
        <table anchor="tab-rocev2">
          <name>RoCEv2 and RDMA Terms</name>
          <thead>
            <tr>
              <th align="left">Term</th>
              <th align="left">Definition</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>RDMA</strong></td>
              <td align="left">Remote Direct Memory Access. A transport mechanism enabling direct memory-to-memory data transfer between hosts without involving the destination CPU, providing zero-copy semantics and kernel bypass. Implementations include InfiniBand Verbs (native IB), iWARP (RDMA over TCP), and RoCEv2 (RDMA over Converged Ethernet v2).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RoCEv2</strong></td>
              <td align="left">RDMA over Converged Ethernet version 2. An RDMA transport encapsulating the InfiniBand transport headers (BTH) over UDP/IP, enabling RDMA semantics on standard Ethernet infrastructure. Requires lossless fabric operation (PFC or equivalent) for correctness. Specified in the InfiniBand Architecture Specification, Annex A17; transported over UDP destination port 4791.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>QP</strong></td>
              <td align="left">Queue Pair. The fundamental RDMA communication endpoint comprising a Send Queue (SQ) and Receive Queue (RQ). QPs are connection-oriented in Reliable Connected (RC) mode. Multiple QPs per source-destination pair are used to increase ECMP entropy in fabric load balancing.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Reliable Connected (RC)</strong></td>
              <td align="left">An RDMA QP transport service type providing reliable, in-order delivery between exactly two endpoints. The primary QP type for AI collective operations via RoCEv2. Requires connection setup before data transfer and maintains per-QP state for retransmission.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RDMA Verb</strong></td>
              <td align="left">An operation primitive of the RDMA programming model. Key verbs: SEND/RECV (two-sided, receiver must post a buffer), WRITE (one-sided, target memory written directly), READ (one-sided, remote memory read), and Atomic (compare-and-swap, fetch-and-add). AI collectives predominantly use WRITE and SEND.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UET</strong></td>
              <td align="left">Ultra Ethernet Transport. A transport protocol defined by the Ultra Ethernet Consortium (UEC) Specification 1.0 as a next-generation AI/HPC fabric transport. UET is connectionless, supports native packet spraying (RUD), and integrates multipath load balancing and congestion control. Transported over UDP destination port 4793 (pending IANA verification).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>PDC</strong></td>
              <td align="left">Packet Delivery Context. The ephemeral, lightweight transport endpoint in UET, analogous to but distinct from an RDMA Queue Pair. PDCs are connectionless (no setup handshake), enabling low-latency initiation and reduced per-flow state in the NIC and switch.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>ROD</strong></td>
              <td align="left">Reliable Ordered Delivery. A UET transport service providing reliable, in-order packet delivery, semantically equivalent to RoCEv2 RC mode. Suitable for legacy RDMA applications requiring strict ordering guarantees.</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="ultra-ethernet-transport-uet-terms">
        <name>Ultra Ethernet Transport (UET) Terms</name>
        <t>The following terms define UET-specific concepts introduced by the
Ultra Ethernet Consortium (UEC) Specification 1.0
<xref target="UEC-SPEC-1.0"/>.</t>
        <table anchor="tab-uet">
          <name>Ultra Ethernet Transport (UET) Terms</name>
          <thead>
            <tr>
              <th align="left">Term</th>
              <th align="left">Definition</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>RUD</strong></td>
              <td align="left">Reliable Unordered Delivery. A UET transport service providing reliable delivery without maintaining packet order across paths. Enables native packet spraying across ECMP paths without reorder-buffer overhead at the receiver NIC. The preferred UET service class for AI training collectives.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>RUDI</strong></td>
              <td align="left">Reliable Unordered Delivery for Idempotent operations. A UET transport service optimized for operations safe to execute more than once (e.g., RDMA Writes to non-accumulating targets), allowing simplified retransmission logic with reduced state overhead.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UUD</strong></td>
              <td align="left">Unreliable Unordered Delivery. A UET transport service providing best-effort, unordered packet delivery with minimal overhead. Suitable for telemetry, speculative operations, or workloads with application-layer loss tolerance.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>UEC Profile</strong></td>
              <td align="left">A defined subset of UET features targeting a specific use case: AI Base (core AI training/inference, mandatory feature set), AI Full (AI Base plus deferred send, exact-match tagging, extended atomics), or HPC (latency-optimized for traditional HPC workloads with fine-grained synchronization).</td>
            </tr>
            <tr>
              <td align="left">
                <strong>LLR</strong></td>
              <td align="left">Link Layer Retry. An optional UEC link-layer enhancement providing fast per-hop error recovery at the Ethernet link layer. LLR detects symbol errors at the FEC level and retransmits the affected frame before it is dropped, reducing the frequency of transport-layer retransmission and improving tail latency.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Trimming</strong></td>
              <td align="left">An optional UEC link-layer behavior in which a congested switch, rather than dropping the full packet, transmits only the packet header (trimmed packet) to the receiver. Trimming enables the receiver to detect loss and initiate selective retransmission more rapidly, reducing bandwidth waste versus silent drop.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>CBFC</strong></td>
              <td align="left">Credit-Based Flow Control. An optional UEC link-layer buffer management mechanism using explicit credit grants from downstream to upstream devices. CBFC provides backpressure without transmitting PFC PAUSE frames, eliminating the head-of-line blocking and storm propagation risks associated with PFC.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Entropy Value</strong></td>
              <td align="left">A per-packet field in the UET header used to distribute packets of a single message across available ECMP paths, providing explicit spray entropy independent of the IP 5-tuple. Enables hardware-assisted packet spraying without requiring transport-layer state in the switch.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>GIN</strong></td>
              <td align="left">GPU-Initiated Networking. A communication paradigm in which GPU threads directly initiate network RDMA operations (sends, one-sided writes/reads) to the NIC hardware without CPU involvement, eliminating the CPU-GPU synchronization round-trip. Reduces effective latency by several microseconds for fine-grained operations.</td>
            </tr>
            <tr>
              <td align="left">
                <strong>KVCXL</strong></td>
              <td align="left">KV Cache Transfer Library. A software library providing standardized point-to-point data transfer primitives (register, transfer, notify) for inference engines, abstracting underlying transport mechanisms (intra-node interconnect, RDMA, PCIe, storage interfaces). Enables transport-agnostic KV cache migration in disaggregated serving architectures.</td>
            </tr>
          </tbody>
        </table>
        <section anchor="uet-transport-services-comparison">
          <name>UET Transport Services Comparison</name>
          <table anchor="tab-uet-compare">
            <name>UET Transport Services Comparison</name>
            <thead>
              <tr>
                <th align="left">Service</th>
                <th align="left">Ordered</th>
                <th align="left">Reliable</th>
                <th align="left">Retransmission Complexity</th>
                <th align="left">Primary Use Case</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">
                  <strong>ROD</strong></td>
                <td align="left">Yes</td>
                <td align="left">Yes</td>
                <td align="left">Full per-QP state</td>
                <td align="left">Legacy RDMA / ordered AI ops</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>RUD</strong></td>
                <td align="left">No</td>
                <td align="left">Yes</td>
                <td align="left">Reduced (unordered)</td>
                <td align="left">AI training collectives with spray</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>RUDI</strong></td>
                <td align="left">No</td>
                <td align="left">Yes</td>
                <td align="left">Minimal (idempotent)</td>
                <td align="left">RDMA Writes; simple retransmit</td>
              </tr>
              <tr>
                <td align="left">
                  <strong>UUD</strong></td>
                <td align="left">No</td>
                <td align="left">No</td>
                <td align="left">None</td>
                <td align="left">Telemetry, speculative ops</td>
              </tr>
            </tbody>
          </table>
        </section>
      </section>
    </section>
    <section anchor="congestion-control-and-fabric-behavior-terms">
      <name>Congestion Control and Fabric Behavior Terms</name>
      <t>The following terms define congestion management mechanisms and
associated fabric behaviors critical to AI workload performance.</t>
      <table anchor="tab-congest-control">
        <name>Congestion Control and Fabric Behavior Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>PFC</strong></td>
            <td align="left">Priority Flow Control (IEEE 802.1Qbb). A lossless Ethernet mechanism in which a receiver transmits a PAUSE frame to its upstream neighbor on a specific priority class when its ingress buffer approaches a configured threshold, temporarily halting transmission of that priority. Required for lossless RoCEv2 operation. PFC operates hop-by-hop and can propagate congestion upstream (PFC storm risk).</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PFC Storm</strong></td>
            <td align="left">A pathological condition in which PFC PAUSE frames propagate across multiple hops, causing widespread throughput degradation or deadlock unrelated to the original congestion source. Detection and mitigation <strong><bcp14>SHOULD</bcp14></strong> be part of soak test evaluation per the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PFC Deadlock</strong></td>
            <td align="left">A circular PFC dependency in which sets of flows mutually pause each other indefinitely, resulting in zero progress for affected traffic classes. Deadlock risk is elevated in non-tree topologies and <strong><bcp14>MUST</bcp14></strong> be evaluated in fabric-level soak tests.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECN</strong></td>
            <td align="left">Explicit Congestion Notification ([RFC3168]). An IP-layer mechanism in which a congested router marks packets with the Congestion Experienced (CE) codepoint in the IP ECN field instead of dropping them. The receiver echoes congestion feedback to the sender via the transport protocol, triggering rate reduction. Used with RoCEv2 as part of DCQCN.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DCQCN</strong></td>
            <td align="left">Data Center Quantized Congestion Notification. An end-to-end congestion control algorithm for RoCEv2 flows, combining ECN marking at congested switches with rate-based sender reduction using an AIMD scheme. Note: PFC serves as a separate, orthogonal backstop to prevent packet loss during DCQCN convergence; PFC is <strong>not</strong> a component of the DCQCN algorithm itself.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECN Marking Ratio</strong></td>
            <td align="left">The fraction of packets (expressed as a percentage) that are marked with the CE codepoint in the IP ECN field over a measurement interval. A high ECN Marking Ratio indicates persistent congestion and is a primary Fabric Health Indicator.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Incast</strong></td>
            <td align="left">A traffic pattern in which multiple sources simultaneously send to a single destination, potentially overwhelming the destination's NIC receive buffer and the switch's egress port buffer. Incast is a dominant congestion mechanism in AllReduce and collective operations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Incast Ratio</strong></td>
            <td align="left">The ratio of concurrent senders to receivers in an incast communication pattern (N:1). The incast ratio determines the oversubscription factor at the destination port and is a primary test parameter for congestion characterization.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Packet Spray</strong></td>
            <td align="left">A load balancing strategy distributing individual packets of a single RDMA message across all available ECMP paths, maximizing link utilization at the cost of potential out-of-order delivery at the receiver. Native in UET (RUD mode); requires NIC reorder buffering for RoCEv2 RC mode.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>DLB / Flowlet</strong></td>
            <td align="left">Dynamic Load Balancing using flowlet detection. A per-flow rerouting mechanism that reassigns a flow to a new ECMP path when the flow has been idle longer than the flowlet gap threshold (typically 500 ns–2 µs), reducing out-of-order packet risk compared to packet spray while improving utilization over static per-flow ECMP.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ECMP</strong></td>
            <td align="left">Equal-Cost Multi-Path routing. A forwarding mechanism distributing traffic across multiple equal-cost paths, typically via hash of the IP 5-tuple (or entropy field in UET). ECMP imbalance (MMR &gt; 1.0) is a primary fabric efficiency metric for AI traffic.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>MMR</strong></td>
            <td align="left">Max-Mean Ratio. The ratio of the flow count (or traffic load) on the most heavily utilized link to the average flow count per link across all fabric links. MMR = 1.0 indicates perfect ECMP balance; MMR &gt; 1.0 quantifies imbalance that degrades effective fabric bandwidth.</td>
          </tr>
        </tbody>
      </table>
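      <t>The ECN Marking Ratio and MMR definitions above reduce to simple counter arithmetic. The following sketch is illustrative only; the counter values are hypothetical, and a real implementation would obtain them from switch telemetry:</t>
      <sourcecode type="python"><![CDATA[
# Sketch: ECN Marking Ratio and Max-Mean Ratio (MMR) from
# hypothetical per-link counters. Counter names and values are
# illustrative, not from any specific switch API.

def ecn_marking_ratio(ce_marked_pkts, total_pkts):
    """Fraction of packets CE-marked over the interval, as a %."""
    return 100.0 * ce_marked_pkts / total_pkts

def max_mean_ratio(per_link_flow_counts):
    """MMR = max link load / mean link load; 1.0 = perfect balance."""
    mean = sum(per_link_flow_counts) / len(per_link_flow_counts)
    return max(per_link_flow_counts) / mean

# Example: 4 fabric links carrying 12, 8, 8, and 4 flows.
links = [12, 8, 8, 4]
print(max_mean_ratio(links))          # 12 / 8.0 = 1.5 (imbalanced)
print(ecn_marking_ratio(250, 10000))  # 2.5% of packets CE-marked
]]></sourcecode>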
      <section anchor="load-balancing-strategy-comparison">
        <name>Load Balancing Strategy Comparison</name>
        <table anchor="tab-load-balance">
          <name>Load Balancing Strategy Comparison</name>
          <thead>
            <tr>
              <th align="left">Strategy</th>
              <th align="left">Granularity</th>
              <th align="left">Reorder Risk</th>
              <th align="left">Utilization</th>
              <th align="left">Complexity</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>ECMP (5-tuple hash)</strong></td>
              <td align="left">Per-flow</td>
              <td align="left">None</td>
              <td align="left">Low (elephant flow bias)</td>
              <td align="left">Low</td>
            </tr>
            <tr>
              <td align="left">
                <strong>DLB / Flowlet</strong></td>
              <td align="left">Per-flowlet</td>
              <td align="left">Low</td>
              <td align="left">Medium</td>
              <td align="left">Medium</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Spray (RoCEv2)</strong></td>
              <td align="left">Per-packet</td>
              <td align="left">High</td>
              <td align="left">High</td>
              <td align="left">High (NIC reorder buffer)</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Packet Spray (UET RUD)</strong></td>
              <td align="left">Per-packet</td>
              <td align="left">None (transport tolerates OOO)</td>
              <td align="left">High</td>
              <td align="left">Low</td>
            </tr>
          </tbody>
        </table>
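        <t>The flowlet behavior compared in the table can be sketched in a few lines: a flow may be reassigned to a new path only when the gap since its previous packet exceeds the flowlet gap threshold. The sketch below is illustrative; the threshold value, the round-robin path selection, and the flow identifiers are assumptions, not part of any switch implementation:</t>
        <sourcecode type="python"><![CDATA[
# Sketch of flowlet-based dynamic load balancing: a flow moves to a
# new path only when its inter-packet gap exceeds the flowlet gap
# threshold, bounding out-of-order risk. Illustrative only.
FLOWLET_GAP_NS = 1000  # example value in the 500 ns - 2 us range
N_PATHS = 4

last_seen = {}   # flow_id -> timestamp of previous packet (ns)
path_of = {}     # flow_id -> currently assigned path
next_path = 0    # trivial round-robin path picker for the sketch

def select_path(flow_id, now_ns):
    global next_path
    gap = now_ns - last_seen.get(flow_id, -FLOWLET_GAP_NS - 1)
    if flow_id not in path_of or gap > FLOWLET_GAP_NS:
        # New flowlet: safe to pick a (possibly different) path.
        path_of[flow_id] = next_path
        next_path = (next_path + 1) % N_PATHS
    last_seen[flow_id] = now_ns
    return path_of[flow_id]

# Packets 100 ns apart stay on one path; a 5 us gap may reroute.
p1 = select_path("flowA", 0)
p2 = select_path("flowA", 100)   # gap 100 ns <= threshold: same path
p3 = select_path("flowA", 5100)  # gap 5000 ns > threshold: may move
]]></sourcecode>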
      </section>
    </section>
    <section anchor="fabric-topology-and-infrastructure-terms">
      <name>Fabric Topology and Infrastructure Terms</name>
      <t>The following terms define fabric topology architectures and
infrastructure components referenced in the companion methodology
documents.</t>
      <table anchor="tab-fabric-topo">
        <name>Fabric Topology and Infrastructure Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Clos / Fat-Tree Topology</strong></td>
            <td align="left">A multi-stage switch topology providing non-blocking or oversubscribed connectivity between all leaf-to-leaf pairs. In AI fabric deployments, a two-tier (leaf-spine) or three-tier (leaf-spine-superspine) Clos is standard. Full bisection bandwidth (1:1) is the target for training fabrics; 2:1 or 4:1 oversubscription may be acceptable for inference fabrics.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Rail-Optimized Topology</strong></td>
            <td align="left">A topology in which the NIC ports of each server are distributed across multiple ToR switches (one NIC port per switch), such that collective traffic between adjacent servers traverses different physical paths. Minimizes switch-to-switch traffic during ring AllReduce, maximizing effective BusBW. Requires ECMP-aware collective placement.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Bisection Bandwidth</strong></td>
            <td align="left">The aggregate bandwidth across the minimum cut that divides the fabric into two equal halves. Non-blocking fabrics provide bisection bandwidth equal to half the total edge (server-facing) bandwidth. Limits worst-case all-to-all communication throughput.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Oversubscription Ratio</strong></td>
            <td align="left">The ratio of total edge (server-facing) bandwidth to total bisection bandwidth in a Clos fabric. A 1:1 ratio is non-blocking; higher ratios (e.g., 2:1, 4:1) reduce fabric cost but may bottleneck all-to-all and AllReduce patterns when all server ports are active simultaneously.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ToR Switch</strong></td>
            <td align="left">Top-of-Rack switch. The first-hop aggregation switch connecting accelerator servers in a rack to the spine layer of the fabric. In rail-optimized topologies, multiple ToR switches serve a single rack, with each server's NICs distributed across ToRs.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Spine / Superspine</strong></td>
            <td align="left">Intermediate and top-layer switches in a multi-tier Clos fabric, providing inter-rack and inter-pod connectivity respectively. Spine switches aggregate multiple ToR switches; superspine switches aggregate multiple spine pods.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>NIC</strong></td>
            <td align="left">Network Interface Controller. The hardware device providing network connectivity for an accelerator host. AI fabric NICs support RDMA (RoCEv2 or UET), hardware offload for collective operations, and, optionally, GPU-Initiated Networking (GIN). NIC model and firmware version <strong><bcp14>MUST</bcp14></strong> be documented in all benchmark reports.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Buffer Occupancy</strong></td>
            <td align="left">The instantaneous or time-averaged fill level of a switch port's packet buffer, expressed in bytes or as a fraction of total buffer capacity. Elevated sustained buffer occupancy indicates congestion. P99 buffer occupancy is a Fabric Health Indicator in the companion methodology documents.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Zero-Impact Failover</strong></td>
            <td align="left">Sub-microsecond automatic path convergence upon a link or switch failure resulting in no measurable increase to JCT or TTFT. Requires pre-programmed alternate paths and hardware-level fast reroute (FRR) with sub-microsecond detection, not relying on routing protocol convergence.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Link Utilization</strong></td>
            <td align="left">The fraction of the nominal link capacity actually used for data transmission over a measurement interval, expressed as a percentage. Reported as mean, P95, and P99 per link. High asymmetric link utilization (low average but high peak) is characteristic of bursty AI inference traffic.</td>
          </tr>
        </tbody>
      </table>
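      <t>As a worked example of the Bisection Bandwidth and Oversubscription Ratio definitions above (all port counts and speeds are illustrative, not a reference design):</t>
      <sourcecode type="python"><![CDATA[
# Worked example: bisection bandwidth and oversubscription ratio in
# a two-tier leaf-spine Clos. Figures are illustrative only.
leaves = 16
servers_per_leaf = 32
server_port_gbps = 400     # server-facing (edge) ports
uplinks_per_leaf = 16
uplink_gbps = 800          # leaf-to-spine ports

# Total edge (server-facing) and uplink (fabric-facing) bandwidth.
edge_bw = leaves * servers_per_leaf * server_port_gbps
uplink_bw = leaves * uplinks_per_leaf * uplink_gbps

# Oversubscription ratio: downlink vs. uplink bandwidth per tier.
# 1.0 corresponds to a non-blocking 1:1 fabric.
oversub = edge_bw / uplink_bw

# Bisection bandwidth: half of whichever side constrains the cut.
bisection_bw = min(edge_bw, uplink_bw) / 2

print(oversub, bisection_bw)   # Gb/s
]]></sourcecode>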
    </section>
    <section anchor="training-specific-terms">
      <name>Training-Specific Terms</name>
      <t>The following terms are specific to AI training workload benchmarking
and are used normatively in <xref target="TRAINING-BENCH"/>.</t>
      <table anchor="tab-training-specific">
        <name>Training-Specific Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>JCT</strong></td>
            <td align="left">Job Completion Time. The wall-clock elapsed time from the start of a training job (or benchmark iteration) until all participating accelerators complete their work, inclusive of all forward pass, backward pass, and collective communication phases. JCT is the primary end-to-end training efficiency KPI.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Roofline JCT</strong></td>
            <td align="left">The theoretical minimum JCT assuming perfect (zero-contention, zero-queuing) network behavior: <tt>Roofline JCT = computation_time + serialization_delay</tt>, where <tt>serialization_delay = message_size / link_rate</tt>. Provides a reference for evaluating fabric overhead.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>JCT Ratio</strong></td>
            <td align="left">The ratio of measured JCT to Roofline JCT. A value of 1.0 indicates no network-induced overhead. Values &gt; 1.0 quantify fabric inefficiency: <tt>JCT Ratio = JCT_measured / JCT_roofline</tt>. The JCT Ratio is the primary comparative metric for AI training fabric benchmarking.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Gradient Synchronization</strong></td>
            <td align="left">The AllReduce collective operation performed after the backward pass of each training step to sum the locally computed gradients across all data-parallel replicas. The dominant communication event in data-parallel training, occurring once per training step per layer.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Step Time</strong></td>
            <td align="left">The wall-clock duration of a single training iteration (forward pass + backward pass + gradient synchronization + optimizer step). Step time = computation time + communication time, where the communication time is dominated by the AllReduce collective.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Soak Test</strong></td>
            <td align="left">A sustained-load test run for an extended period (minimum 24 hours for stability evaluation) at a defined offered load fraction (e.g., 70% or 90% of maximum throughput). Soak tests detect buffer leaks, ECMP imbalance drift, PFC storm initiation, and long-tail error accumulation not visible in short-duration tests.</td>
          </tr>
        </tbody>
      </table>
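      <t>The Roofline JCT and JCT Ratio formulas above reduce to simple arithmetic once computation time, message size, and link rate are known. The sketch below uses illustrative input figures and assumes the message size is given in bytes against a link rate in bits per second:</t>
      <sourcecode type="python"><![CDATA[
# Sketch: Roofline JCT and JCT Ratio per the definitions above.
# Input figures are illustrative only.

def roofline_jct(computation_time_s, message_size_bytes,
                 link_rate_bps):
    # serialization_delay = message_size / link_rate
    serialization_delay = message_size_bytes * 8 / link_rate_bps
    return computation_time_s + serialization_delay

def jct_ratio(jct_measured_s, jct_roofline_s):
    # 1.0 = no network overhead; > 1.0 quantifies inefficiency.
    return jct_measured_s / jct_roofline_s

# Example: 100 ms compute, 1 GiB gradient exchange, 400 Gb/s link.
roof = roofline_jct(0.100, 2**30, 400e9)   # ~0.1215 s
ratio = jct_ratio(0.150, roof)             # measured JCT = 150 ms
]]></sourcecode>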
    </section>
    <section anchor="inference-specific-terms">
      <name>Inference-Specific Terms</name>
      <t>The following terms are specific to AI inference serving workload
benchmarking and are used normatively in
<xref target="INFERENCE-BENCH"/>.</t>
      <table anchor="tab-infer-specific">
        <name>Inference-Specific Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>TTFT</strong></td>
            <td align="left">Time to First Token. The elapsed time from receipt of an inference request by the serving system to emission of the first output token. Encompasses prompt processing (prefill), KV cache generation, optional KV cache transfer (in disaggregated architectures), and the initial decode step. Interactive serving target: TTFT &lt; 500 ms at P99.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>ITL</strong></td>
            <td align="left">Inter-Token Latency. The elapsed time between successive output tokens during the autoregressive decode phase. Measured at P50, P95, P99, and P99.9 to characterize tail latency behavior. Interactive serving target: ITL &lt; 50 ms at P99.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>TPS</strong></td>
            <td align="left">Tokens Per Second. Aggregate throughput of the inference serving system, measured as the total number of output tokens generated per second across all concurrent requests. Reported separately for input-side (prefill) TPS and output-side (decode) TPS.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>KV Cache</strong></td>
            <td align="left">Key-Value Cache. The intermediate attention state (key and value projection matrices from multi-head attention layers) computed during the prefill phase and reused during each decode step to avoid redundant recomputation. KV cache size scales with: <tt>layers × attention_heads × head_dim × sequence_length × precision</tt>. The attention head configuration <strong><bcp14>MUST</bcp14></strong> be reported in all benchmark results.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Prefill Phase</strong></td>
            <td align="left">The compute-bound phase of LLM inference in which the entire input prompt is processed in parallel to generate the KV cache and the first output token. Characterized by high arithmetic intensity (200–400 ops/byte), high accelerator utilization (90–95%), and large activation tensors. Prefill latency dominates TTFT for long prompts.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Decode Phase</strong></td>
            <td align="left">The memory-bandwidth-bound phase of LLM inference in which output tokens are generated autoregressively, one token per forward pass, by reading the KV cache. Characterized by low arithmetic intensity (60–80 ops/byte), lower accelerator utilization (20–40%), and memory-bandwidth-limited KV cache reads. Decode throughput limits TPS.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Disaggregated Serving</strong></td>
            <td align="left">An inference serving architecture in which the prefill phase and decode phase are executed on physically separate groups of accelerators (workers), connected by a network fabric. Allows independent scaling of prefill and decode resources (xPyD) but introduces KV cache transfer as a fabric-critical data movement.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>xPyD Ratio</strong></td>
            <td align="left">The allocation ratio of x prefill workers to y decode workers in a disaggregated serving cluster. Example: 3P9D denotes 3 prefill nodes and 9 decode nodes. The optimal xPyD ratio depends on model size, prompt/output length distributions, and TTFT/ITL SLO targets.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Continuous Batching</strong></td>
            <td align="left">A dynamic inference scheduling technique that inserts new requests into an active decode batch as slots become available (without waiting for the current batch to complete), improving accelerator utilization compared to static batching. Generates variable batch sizes that affect fabric traffic burstiness.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>PagedAttention</strong></td>
            <td align="left">A KV cache memory management technique storing attention keys and values in fixed-size, non-contiguous virtual pages (typically 16–64 KB), inspired by OS virtual memory management. Reduces memory fragmentation and enables efficient KV cache sharing across requests with common prefixes.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Prefix Caching</strong></td>
            <td align="left">Reuse of previously computed KV cache segments for inference requests sharing a common prompt prefix (e.g., a fixed system prompt), eliminating redundant prefill computation. Prefix cache hit rate is a secondary KPI for inference serving efficiency.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Normal Dispatch</strong></td>
            <td align="left">An AllToAll MoE dispatch communication mode optimized for the prefill phase. Payload sizes are variable (depending on token-to-expert routing), generating dynamic tensor shapes incompatible with static graph capture. Maximizes throughput for large batches at the cost of higher per-dispatch latency.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Low-Latency Dispatch</strong></td>
            <td align="left">An AllToAll MoE dispatch communication mode optimized for the decode phase. Payload sizes are padded to fixed maximum dimensions (compatible with static graph capture), enabling lower kernel-launch overhead at the cost of slight bandwidth inefficiency. Target: &lt; 200 µs per dispatch round trip.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>SLO</strong></td>
            <td align="left">Service Level Objective. A quantitative target for an inference serving KPI. AI inference SLOs typically specify maximum TTFT (e.g., &lt; 500 ms P99) and maximum ITL (e.g., &lt; 50 ms P99) under a specified request arrival rate.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Speculative Decoding</strong></td>
            <td align="left">An inference acceleration technique using a small draft model to generate candidate token sequences verified in parallel by the target model. Reduces effective ITL but generates bursty, variable-length KV cache traffic; noted as a future benchmarking area not fully specified in the current companion documents.</td>
          </tr>
        </tbody>
      </table>
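      <t>The KV cache sizing formula above can be made concrete. The sketch below uses illustrative model dimensions (not any specific model) and includes a leading factor of 2 to account for storing both the key and the value tensors:</t>
      <sourcecode type="python"><![CDATA[
# Sketch: per-request KV cache size from the formula
#   layers x heads x head_dim x sequence_length x precision,
# times 2 to store both key and value tensors.
# Model dimensions below are illustrative, not a specific model.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len,
                   bytes_per_elem):
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Example: 32 layers, 8 KV heads, head_dim 128, 4096 tokens, FP16.
size = kv_cache_bytes(32, 8, 128, 4096, 2)
print(size / 2**30, "GiB")   # 0.5 GiB
]]></sourcecode>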
      <section anchor="inference-phase-characteristics">
        <name>Inference Phase Characteristics</name>
        <table anchor="tab-infer-character">
          <name>Inference Phase Characteristics</name>
          <thead>
            <tr>
              <th align="left">Phase</th>
              <th align="left">Compute Bound</th>
              <th align="left">Arithmetic Intensity</th>
              <th align="left">Accelerator Util.</th>
              <th align="left">Primary KPI</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>Prefill</strong></td>
              <td align="left">Yes</td>
              <td align="left">200–400 ops/byte</td>
              <td align="left">90–95%</td>
              <td align="left">TTFT</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Decode</strong></td>
              <td align="left">No (memory BW bound)</td>
              <td align="left">60–80 ops/byte</td>
              <td align="left">20–40%</td>
              <td align="left">ITL, TPS</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="kpi-classification-terms">
      <name>KPI Classification Terms</name>
      <t>The following terms define the three-tier KPI taxonomy used across both
companion methodology documents.</t>
      <table anchor="tab-kpi-class">
        <name>KPI Classification Terms</name>
        <thead>
          <tr>
            <th align="left">Term</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>Primary KPI</strong></td>
            <td align="left">A top-level performance indicator directly representing end-user experience or training efficiency. In training: JCT Ratio and BusBW. In inference: TTFT and ITL. Primary KPIs are the principal reporting metric and the basis for comparative benchmarking across DUT implementations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Secondary KPI</strong></td>
            <td align="left">A fabric-level performance indicator providing mechanistic explanation for primary KPI values. Examples: collective operation throughput (BusBW), KV cache transfer goodput, AllToAll dispatch latency, ECMP imbalance (MMR), and link utilization. Secondary KPIs enable root-cause analysis of Primary KPI deviations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Fabric Health Indicator (FHI)</strong></td>
            <td align="left">An operational metric characterizing fabric stability and anomaly conditions rather than peak performance. FHIs include: PFC event rate, PFC storm occurrence, ECN marking ratio, packet loss rate, buffer occupancy (P99), and retransmission rate. FHIs <strong><bcp14>SHOULD</bcp14></strong> be continuously monitored and reported throughout all test categories.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Goodput</strong></td>
            <td align="left">The application-useful data delivered per unit time, excluding retransmitted packets, protocol overhead, and padding. Goodput may differ significantly from raw throughput during congestion events; both <strong><bcp14>SHOULD</bcp14></strong> be reported in benchmarking results.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>Zero Packet Loss</strong></td>
            <td align="left">A test acceptance criterion requiring that no packets are dropped by the DUT during the measurement interval. For RoCEv2 and UET transports, zero packet loss is the target operating condition. The binary search procedure in the companion methodology documents determines the maximum offered load satisfying this criterion.</td>
          </tr>
        </tbody>
      </table>
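      <t>Goodput and raw throughput, per the definitions above, differ by retransmitted data and protocol overhead. A minimal sketch with hypothetical counter values:</t>
      <sourcecode type="python"><![CDATA[
# Sketch: goodput vs. raw throughput over a measurement interval.
# Counter values are hypothetical.

def throughput_gbps(total_bytes, interval_s):
    return total_bytes * 8 / interval_s / 1e9

def goodput_gbps(total_bytes, retx_bytes, overhead_bytes,
                 interval_s):
    # Exclude retransmitted data, protocol overhead, and padding.
    useful = total_bytes - retx_bytes - overhead_bytes
    return useful * 8 / interval_s / 1e9

# Example: 500 GB on the wire in 10 s; 25 GB retransmitted,
# 10 GB protocol overhead.
raw = throughput_gbps(500e9, 10.0)            # 400.0 Gb/s
good = goodput_gbps(500e9, 25e9, 10e9, 10.0)  # 372.0 Gb/s
]]></sourcecode>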
      <section anchor="kpi-tier-summary">
        <name>KPI Tier Summary</name>
        <table anchor="tab-kpi-tier">
          <name>KPI Tier Summary</name>
          <thead>
            <tr>
              <th align="left">Tier</th>
              <th align="left">Training Examples</th>
              <th align="left">Inference Examples</th>
              <th align="left">Purpose</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">
                <strong>Primary KPI</strong></td>
              <td align="left">JCT Ratio, BusBW</td>
              <td align="left">TTFT, ITL, TPS</td>
              <td align="left">Direct end-user experience / business impact</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Secondary KPI</strong></td>
              <td align="left">AllReduce BusBW, MMR, Link Utilization</td>
              <td align="left">AllToAll dispatch latency, KV transfer goodput</td>
              <td align="left">Root cause analysis of Primary KPI deviations</td>
            </tr>
            <tr>
              <td align="left">
                <strong>Fabric Health Indicator (FHI)</strong></td>
              <td align="left">PFC events, ECN ratio, packet loss, buffer P99, retx rate</td>
              <td align="left">PFC events, ECN ratio, packet loss, buffer P99</td>
              <td align="left">Ongoing fabric stability and anomaly detection</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="referenced-standards-abbreviations">
      <name>Referenced Standards Abbreviations</name>
      <t>The following abbreviations refer to normative and informative IETF
documents referenced throughout this document and the companion
methodology documents.</t>
      <table anchor="reference-standard">
        <name>Referenced Standards Abbreviations</name>
        <thead>
          <tr>
            <th align="left">Reference</th>
            <th align="left">Definition</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">
              <strong>RFC 1242</strong></td>
            <td align="left">"Benchmarking Terminology for Network Interconnect Devices" (Bradner, 1991). Defines foundational benchmarking terms (throughput, latency, frame loss rate, back-to-back frames). The baseline terminology reference for BMWG work. Where terms in this document overlap with RFC 1242 definitions, the AI fabric context definitions herein take precedence.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 2544</strong></td>
            <td align="left">"Benchmarking Methodology for Network Interconnect Devices" (Bradner &amp; McQuaid, 1999). Defines test methodologies for throughput, latency, frame loss rate, and back-to-back measurements. The AI fabric methodology documents extend RFC 2544 procedures for AI-specific traffic patterns and test durations.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 8238</strong></td>
            <td align="left">"Data Center Benchmarking Terminology" (Bitar et al., 2017). Extends RFC 1242 with data center-relevant terms including forwarding table scaling, congestion, and VM/SDN. Incast, ECN, and buffer occupancy concepts in this document align with RFC 8238 definitions.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 8239</strong></td>
            <td align="left">"Data Center Benchmarking Methodology" (Bitar et al., 2017). Defines test methodologies for data center network functions including incast, ECN marking, and lossless behavior. The AI fabric companion methodology documents extend RFC 8239 for distributed AI collective traffic patterns.</td>
          </tr>
          <tr>
            <td align="left">
              <strong>RFC 2119 / RFC 8174</strong></td>
            <td align="left">"Key words for use in RFCs to Indicate Requirement Levels" (Bradner, 1997; Leiba, 2017). Define the normative requirement language: <bcp14>MUST</bcp14>, <bcp14>MUST NOT</bcp14>, <bcp14>REQUIRED</bcp14>, <bcp14>SHALL</bcp14>, <bcp14>SHALL NOT</bcp14>, <bcp14>SHOULD</bcp14>, <bcp14>SHOULD NOT</bcp14>, <bcp14>RECOMMENDED</bcp14>, <bcp14>MAY</bcp14>, and <bcp14>OPTIONAL</bcp14>. RFC 8174 clarifies that these terms are normative only when in uppercase; lowercase uses are not normative.</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>TODO acknowledge.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="IBTA-ROCE" target="https://www.infinibandta.org">
          <front>
            <title>InfiniBand Architecture Specification, Annex 16: RoCE</title>
            <author>
              <organization>InfiniBand Trade Association</organization>
            </author>
            <date year="2010"/>
          </front>
        </reference>
        <reference anchor="UEC-SPEC-1.0" target="https://ultraethernet.org">
          <front>
            <title>Ultra Ethernet Specification 1.0</title>
            <author>
              <organization>Ultra Ethernet Consortium</organization>
            </author>
            <date year="2024"/>
          </front>
        </reference>
        <reference anchor="TRAINING-BENCH">
          <front>
            <title>Benchmarking Methodology for AI Training Network Fabrics</title>
            <author initials="F." surname="Calabria" fullname="Fernando Calabria">
              <organization/>
            </author>
            <author initials="C." surname="Pignataro" fullname="Carlos Pignataro">
              <organization/>
            </author>
            <date year="2026" month="February"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-bmwg-ai-fabric-training-bench-00"/>
        </reference>
        <reference anchor="INFERENCE-BENCH">
          <front>
            <title>Benchmarking Methodology for AI Inference Serving Network Fabrics</title>
            <author initials="F." surname="Calabria" fullname="Fernando Calabria">
              <organization/>
            </author>
            <author initials="C." surname="Pignataro" fullname="Carlos Pignataro">
              <organization/>
            </author>
            <date year="2026" month="February"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-bmwg-ai-fabric-inference-bench-00"/>
        </reference>
        <reference anchor="RFC1242">
          <front>
            <title>Benchmarking Terminology for Network Interconnection Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="July" year="1991"/>
            <abstract>
              <t>This memo discusses and defines a number of terms that are used in describing performance benchmarking tests and the results of such tests. This memo provides information for the Internet community. It does not specify an Internet standard.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1242"/>
          <seriesInfo name="DOI" value="10.17487/RFC1242"/>
        </reference>
        <reference anchor="RFC8238">
          <front>
            <title>Data Center Benchmarking Terminology</title>
            <author fullname="L. Avramov" initials="L." surname="Avramov"/>
            <author fullname="J. Rapp" initials="J." surname="Rapp"/>
            <date month="August" year="2017"/>
            <abstract>
              <t>The purposes of this informational document are to establish definitions and describe measurement techniques for data center benchmarking, as well as to introduce new terminology applicable to performance evaluations of data center network equipment. This document establishes the important concepts for benchmarking network switches and routers in the data center and is a prerequisite for the test methodology document (RFC 8239). Many of these terms and methods may be applicable to network equipment beyond the scope of this document as the technologies originally applied in the data center are deployed elsewhere.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8238"/>
          <seriesInfo name="DOI" value="10.17487/RFC8238"/>
        </reference>
        <reference anchor="RFC8239">
          <front>
            <title>Data Center Benchmarking Methodology</title>
            <author fullname="L. Avramov" initials="L." surname="Avramov"/>
            <author fullname="J. Rapp" initials="J." surname="Rapp"/>
            <date month="August" year="2017"/>
            <abstract>
              <t>The purpose of this informational document is to establish test and evaluation methodology and measurement techniques for physical network equipment in the data center. RFC 8238 is a prerequisite for this document, as it contains terminology that is considered normative. Many of these terms and methods may be applicable beyond the scope of this document as the technologies originally applied in the data center are deployed elsewhere.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8239"/>
          <seriesInfo name="DOI" value="10.17487/RFC8239"/>
        </reference>
      </references>
    </references>

    <section numbered="false" anchor="appendix-a-term-cross-reference-to-companion-documents">
      <name>Appendix A: Term Cross-Reference to Companion Documents</name>
      <t>The following table identifies which terms from this document are used
in each companion methodology document.</t>
      <table anchor="tab-cross-ref">
        <name>Term Cross-Reference to Companion Documents</name>
        <thead>
          <tr>
            <th align="left">Term Category</th>
            <th align="left">Used in Training Bench</th>
            <th align="left">Used in Inference Bench</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">General Benchmarking Terms (§2)</td>
            <td align="left">All terms</td>
            <td align="left">All terms</td>
          </tr>
          <tr>
            <td align="left">Collective Communication (§3)</td>
            <td align="left">AllReduce, AllGather, ReduceScatter, AllToAll, BusBW, CCL, Ring Algorithm, BSP, SPMD</td>
            <td align="left">AllToAll, BusBW</td>
          </tr>
          <tr>
            <td align="left">Parallelism Strategies (§4)</td>
            <td align="left">DP, TP, PP, EP, MoE, ZeRO</td>
            <td align="left">EP, MoE, DP Attention</td>
          </tr>
          <tr>
            <td align="left">RDMA / RoCEv2 (§5.1)</td>
            <td align="left">RDMA, RoCEv2, QP, RC mode, RDMA Verb</td>
            <td align="left">RDMA, RoCEv2, QP, RC mode, GIN, KVCXL</td>
          </tr>
          <tr>
            <td align="left">UET Terms (§5.2)</td>
            <td align="left">UET, PDC, ROD, RUD, RUDI, UUD, LLR, Packet Trimming, CBFC, UEC Profile, Entropy Value</td>
            <td align="left">UET, RUD, GIN</td>
          </tr>
          <tr>
            <td align="left">Congestion Control (§6)</td>
            <td align="left">PFC, PFC Storm, PFC Deadlock, ECN, DCQCN, ECN Marking Ratio, Incast, Incast Ratio, Packet Spray, DLB/Flowlet, ECMP, MMR</td>
            <td align="left">PFC, ECN, DCQCN, Incast, Packet Spray, ECMP</td>
          </tr>
          <tr>
            <td align="left">Fabric Topology (§7)</td>
            <td align="left">Clos, Rail-Optimized, Bisection BW, Oversubscription, ToR, Spine, NIC, Buffer Occupancy, Zero-Impact Failover, Link Utilization</td>
            <td align="left">Clos, Bisection BW, ToR, NIC, Buffer Occupancy, Link Utilization</td>
          </tr>
          <tr>
            <td align="left">Training-Specific (§8)</td>
            <td align="left">JCT, Roofline JCT, JCT Ratio, Gradient Sync, Step Time, Soak Test</td>
            <td align="left">Soak Test</td>
          </tr>
          <tr>
            <td align="left">Inference-Specific (§9)</td>
            <td align="left">—</td>
            <td align="left">TTFT, ITL, TPS, KV Cache, Prefill, Decode, Disaggregated Serving, xPyD, Continuous Batching, PagedAttention, Prefix Caching, Normal/Low-Latency Dispatch, SLO</td>
          </tr>
          <tr>
            <td align="left">KPI Classification (§10)</td>
            <td align="left">Primary KPI (JCT Ratio, BusBW), Secondary KPI, FHI, Goodput, Zero Packet Loss</td>
            <td align="left">Primary KPI (TTFT, ITL), Secondary KPI, FHI, Goodput, Zero Packet Loss</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section numbered="false" anchor="appendix-b-term-taxonomy-summary">
      <name>Appendix B: Term Taxonomy Summary</name>
      <t>The following table provides a concise summary of all defined terms
organized by category, with the section reference for the full
definition.</t>
      <table anchor="tab-taxo">
        <name>Complete Term Taxonomy</name>
        <thead>
          <tr>
            <th align="left">Section</th>
            <th align="left">Term(s)</th>
            <th align="left">Category</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">2</td>
            <td align="left">DUT, SUT, RT, JFI, Offered Load, Trial Duration, Warmup Period, Binary Search, Percentile Latency, AI Fabric</td>
            <td align="left">General Benchmarking</td>
          </tr>
          <tr>
            <td align="left">3</td>
            <td align="left">Collective Operation, AllReduce, AllGather, ReduceScatter, AllToAll, Ring Algorithm, BusBW, CCL, SPMD, BSP</td>
            <td align="left">Collective Communication</td>
          </tr>
          <tr>
            <td align="left">4</td>
            <td align="left">Data Parallelism, Tensor Parallelism, Pipeline Parallelism, Expert Parallelism, MoE, DP Attention, ZeRO</td>
            <td align="left">Parallelism Strategies</td>
          </tr>
          <tr>
            <td align="left">5.1</td>
            <td align="left">RDMA, RoCEv2, QP, Reliable Connected (RC), RDMA Verb, UET, PDC, ROD</td>
            <td align="left">Transport — RDMA / RoCEv2</td>
          </tr>
          <tr>
            <td align="left">5.2</td>
            <td align="left">RUD, RUDI, UUD, UEC Profile, LLR, Packet Trimming, CBFC, Entropy Value, GIN, KVCXL</td>
            <td align="left">Transport — UET</td>
          </tr>
          <tr>
            <td align="left">6</td>
            <td align="left">PFC, PFC Storm, PFC Deadlock, ECN, DCQCN, ECN Marking Ratio, Incast, Incast Ratio, Packet Spray, DLB/Flowlet, ECMP, MMR</td>
            <td align="left">Congestion Control</td>
          </tr>
          <tr>
            <td align="left">7</td>
            <td align="left">Clos/Fat-Tree, Rail-Optimized, Bisection Bandwidth, Oversubscription Ratio, ToR Switch, Spine/Superspine, NIC, Buffer Occupancy, Zero-Impact Failover, Link Utilization</td>
            <td align="left">Fabric Topology</td>
          </tr>
          <tr>
            <td align="left">8</td>
            <td align="left">JCT, Roofline JCT, JCT Ratio, Gradient Synchronization, Step Time, Soak Test</td>
            <td align="left">Training-Specific</td>
          </tr>
          <tr>
            <td align="left">9</td>
            <td align="left">TTFT, ITL, TPS, KV Cache, Prefill Phase, Decode Phase, Disaggregated Serving, xPyD Ratio, Continuous Batching, PagedAttention, Prefix Caching, Normal Dispatch, Low-Latency Dispatch, SLO, Speculative Decoding</td>
            <td align="left">Inference-Specific</td>
          </tr>
          <tr>
            <td align="left">10</td>
            <td align="left">Primary KPI, Secondary KPI, Fabric Health Indicator, Goodput, Zero Packet Loss</td>
            <td align="left">KPI Classification</td>
          </tr>
          <tr>
            <td align="left">11</td>
            <td align="left">RFC 1242, RFC 2544, RFC 8238, RFC 8239, RFC 2119/8174</td>
            <td align="left">Referenced Standards</td>
          </tr>
        </tbody>
      </table>
    </section>
  </back>

</rfc>
