<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.35 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-lg-bmwg-benchmarking-methodology-for-rov-01" category="info" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.33.0 -->
  <front>
    <title abbrev="ROVBench">Benchmarking Methodology for Route Origin Validation (ROV)</title>
    <seriesInfo name="Internet-Draft" value="draft-lg-bmwg-benchmarking-methodology-for-rov-01"/>
    <author initials="L." surname="Liu" fullname="Libin Liu">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>liulb@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="N." surname="Geng" fullname="Nan Geng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>gengnan@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="April" day="09"/>
  <area>Operations and Management</area>
    <workgroup>BMWG</workgroup>
    <abstract>

<t>This document defines a benchmarking methodology for routers that implement ROV. The methodology focuses on device-level behavior, including processing of Validated ROA Payload (VRP) updates, the interaction between ROV and BGP, control-plane resource utilization, and the scalability of ROV under varying operational conditions. The procedures described here follow the principles and constraints of the Benchmarking Methodology Working Group (BMWG) and are intended to produce repeatable and comparable results across implementations.</t>
    </abstract>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Route Origin Validation (ROV), as specified in <xref target="RFC6811"/>, allows routers to use validated Route Origin Authorization (ROA) information, which is distributed via the RPKI-to-Router (RTR) protocol defined in <xref target="RFC8210"/>, to classify BGP routes as Valid, Invalid, or NotFound. Deployments of ROV continue to increase across networks, and router vendors have implemented ROV processing as part of their control-plane functions.</t>
      <t>While operational experience is growing, there is currently no standardized methodology for measuring the performance impact and behavioral characteristics of ROV on routing devices. As with other protocol features evaluated by the Benchmarking Methodology Working Group (BMWG), a consistent and repeatable test framework is essential for:</t>
      <ul spacing="normal">
        <li>
          <t>Comparing router implementations,</t>
        </li>
        <li>
          <t>Evaluating scalability under controlled conditions,</t>
        </li>
        <li>
          <t>Characterizing the control-plane costs of ROV processing, and</t>
        </li>
        <li>
          <t>Understanding how ROV influences BGP convergence and routing stability.</t>
        </li>
      </ul>
      <t>This document defines a benchmarking methodology for routers that implement ROV, which builds upon the foundational benchmarking principles defined in <xref target="RFC1242"/>, <xref target="RFC2285"/>, <xref target="RFC2544"/>, <xref target="RFC2889"/>, and <xref target="RFC3918"/>. The methodology focuses on the Device Under Test (DUT) and uses controlled, reproducible inputs to isolate the effects of ROV from external dependencies. In particular, the benchmarking framework assumes the presence of an RPKI-to-Router (RTR) update source, which may be an RPKI Cache Server or an RTR traffic generator capable of delivering synthetic Validated ROA Payloads (VRPs).</t>
      <t>The objective of this document is to define a set of metrics and procedures to quantify:</t>
      <ul spacing="normal">
        <li>
          <t>The latency of ROV state updates within the DUT,</t>
        </li>
        <li>
          <t>The impact of ROV on BGP control-plane performance,</t>
        </li>
        <li>
          <t>The scalability of ROV processing under varying VRP and BGP table sizes, and</t>
        </li>
        <li>
          <t>The control-plane resource utilization associated with enabling ROV.</t>
        </li>
      </ul>
      <t>By providing a consistent framework, this document enables vendors, operators, and researchers to evaluate ROV functionality under controlled and repeatable conditions, improving understanding of implementation performance and supporting informed deployment decisions.</t>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
    </section>
    <section anchor="scope-and-goals">
      <name>Scope and Goals</name>
      <t>This document specifies a laboratory-based benchmarking methodology for evaluating the performance of router implementations of ROV as defined in <xref target="RFC6811"/>. The scope of this benchmarking methodology includes:</t>
      <ul spacing="normal">
        <li>
          <t><strong>ROV processing performance</strong>: Measurement of the time and resources required for a router to process VRP updates received via the RTR protocol.</t>
        </li>
        <li>
          <t><strong>Impact on BGP control-plane performance</strong>: Quantification of how enabling ROV affects BGP convergence times and routing table stability.</t>
        </li>
        <li>
          <t><strong>Scalability under controlled conditions</strong>: Evaluation of the router's ability to handle large VRP sets, rapid VRP churn, and BGP updates influenced by ROV.</t>
        </li>
        <li>
          <t><strong>Resource utilization</strong>: Measurement of system CPU utilization, system memory consumption, and relevant control-plane process load associated with ROV processing.</t>
        </li>
      </ul>
      <t>The goals of this document are:</t>
      <ul spacing="normal">
        <li>
          <t>To define a repeatable, controlled methodology for benchmarking ROV-enabled routers.</t>
        </li>
        <li>
          <t>To provide standardized metrics that allow for comparison across implementations.</t>
        </li>
      </ul>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>The terminology used in this document follows the conventions of <xref target="RFC1242"/>, <xref target="RFC2285"/>, and subsequent BMWG publications. The following terms are used with specific meanings in the context of ROV benchmarking.</t>
      <t>Route Origin Validation (ROV): A procedure defined in <xref target="RFC6811"/> that compares the origin AS of a BGP announcement with the set of authorized origins derived from validated ROA objects. ROV results in one of three states: Valid, Invalid, or NotFound.</t>
      <t>Validated ROA Payload (VRP): The processed output from a relying party containing prefix-origin pairs that routers use for ROV decisions. VRPs are transported via the RPKI-to-Router (RTR) protocol.</t>
      <t>RPKI-to-Router (RTR) Session: A protocol session between a router and an RPKI Cache Server. In benchmarking, RTR sessions may be emulated or generated using traffic/test tools to deliver synthetic VRP updates.</t>
      <t>ROV Update Processing Latency: The time from when a router receives new VRP data (via RTR) until the updated ROV state is reflected in the router's local Routing Information Base (RIB) or applied to routing decisions.</t>
      <t>VRP-Triggered Revalidation Latency:
The time interval between completion of VRP installation and the moment all affected prefixes have updated validation states.</t>
      <t>BGP-Triggered ROV Validation Latency:
The time interval between receipt of a BGP UPDATE message and completion of the ROV validation procedure for that route.</t>
      <t>BGP Convergence Time: The time required for the router's control plane to process BGP updates and reach a stable routing state, while ROV validation is active.</t>
      <t>Resource Utilization: System CPU utilization, system memory consumption, and, when observable, per-process utilization of the ROV process and the BGP or routing process while the router performs ROV-related tasks, including processing of VRP updates and applying ROV policy.</t>
      <t>ROV Churn: A burst of VRP changes (e.g., many ROA additions or withdrawals) that may trigger significant re-validation and BGP recalculation; such bursts are used in stress tests.</t>
      <t>ROV Scalability Limit: The maximum number of VRPs, RTR sessions, or ROV-triggered BGP changes that the router can process while maintaining normal operational performance.</t>
    </section>
    <section anchor="test-setup-and-laboratory-environment">
      <name>Test Setup and Laboratory Environment</name>
      <t>This section describes the required test topology, equipment, DUT configuration, RPKI data emulation, and traffic generation conditions. The goal of the test environment is to isolate the DUT and subject it to clearly defined RPKI-RTR and BGP tests, while providing accurate timing and state measurements.</t>
      <section anchor="test-topology">
        <name>Test Topology</name>
        <figure anchor="test-topo">
          <name>The test topology for ROV benchmarking.</name>
          <artwork><![CDATA[
+-------------------+    RTR    +----------------------+
|    RTR Emulator   |---------->|          DUT         |
|(RTR Update Source)|           |     (ROV Enabled)    |
+-------------------+           +----------------------+
                                   /\            /\
                               BGP |             | Data-plane Traffic
                                   |             |
+---------------------+            |   +-----------------+
|BGP Traffic Generator|------------+   |      Tester     |
+---------------------+                |(Data-plane Load)|
                                       +-----------------+
]]></artwork>
        </figure>
        <t>The test topology consists of four primary components: the DUT, an RPKI-RTR update source, a BGP traffic generator, and a tester for generating data-plane traffic load. The DUT is a router equipped with ROV capabilities, supporting the RPKI-RTR protocol and applying ROV policies to received BGP routes. The RPKI-RTR update source may be either a real RPKI cache implementation running in isolated mode or a dedicated emulator capable of producing arbitrary VRP sets and update patterns. This RTR source connects directly to the DUT using the RTR protocol and provides controlled VRP updates, including serial increments, cache resets, and bursty or delayed update sequences.</t>
        <t>The BGP traffic generator establishes one or more BGP peering sessions with the DUT and is responsible for delivering a full routing table together with controlled withdrawal or re-announcement events. Because IPv4 and IPv6 tables differ in scale and may exercise different implementation paths, the test setup must state the number of IPv4 routes and the number of IPv6 routes separately. A test <bcp14>MAY</bcp14> use a mixed baseline table (for example, 1,000,000 IPv4 routes and 250,000 IPv6 routes) or <bcp14>MAY</bcp14> benchmark each address family identifier (AFI) separately. In either case, the chosen route counts and AFIs under test must remain fixed across repeated runs for the same condition. The generator should be capable of presenting both stable baseline routing conditions and timed ROV-affected prefixes whose validation status will change in response to VRP updates.</t>
        <t>A tester is connected to the DUT to introduce controlled data-plane load during benchmarking. When present, the tester <bcp14>SHOULD</bcp14> generate stable and deterministic traffic loads so that the impact of forwarding load on ROV processing can be evaluated. When data-plane load is applied, its rate, frame size, traffic pattern, and address-selection rules must be documented in the test report.</t>
      </section>
      <section anchor="dut-configuration-requirements">
        <name>DUT Configuration Requirements</name>
        <t>The DUT must be configured with ROV enabled on all BGP sessions receiving test routes. The router must establish a stable and fully functional RPKI-RTR session with the RTR emulator. To ensure that performance results are attributable solely to ROV behavior, all non-essential features on the DUT, such as additional routing protocols, unnecessary telemetry mechanisms, and unused services, should be disabled. Logging related to ROV may remain enabled for debugging purposes but must be rate-limited to avoid skewing CPU measurements or affecting test repeatability. All system parameters relevant to routing performance, such as multipath behavior or maximum-prefix limits, must be documented prior to testing.</t>
      </section>
      <section anchor="rtr-data-source-emulation">
        <name>RTR Data Source Emulation</name>
        <t>The RTR emulator must be capable of generating synthetic VRP data sets with user-defined characteristics. This includes the ability to create arbitrary combinations of prefixes and ASNs, overlapping VRPs, conflicting VRPs, and other edge cases relevant to validation logic. The VRP datasets should mimic realistic global distributions where appropriate, but must also support scaling tests where VRP volumes are substantially higher than today's norm. The data source must further support generating controlled bursts of VRP updates, ranging from 100 to 10,000 VRP changes per second, and must allow for both additive updates and withdrawals.</t>
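        <t>As a non-normative illustration, the following Python sketch generates a synthetic IPv4 VRP data set of the kind described above, including a configurable fraction of overlapping or conflicting entries. The function name and parameters are illustrative only and are not part of this methodology.</t>
        <sourcecode type="python"><![CDATA[
import ipaddress
import random

def generate_vrps(count, seed=0, overlap_fraction=0.1):
    """Generate `count` synthetic IPv4 VRPs as (prefix, max_length, asn).

    A fraction of the entries reuses an earlier prefix with another
    origin ASN, producing the overlapping/conflicting cases that the
    emulator is required to support.
    """
    rng = random.Random(seed)  # fixed seed keeps data sets reproducible
    vrps = []
    for _ in range(count):
        if vrps and rng.random() < overlap_fraction:
            prefix, max_length, _ = rng.choice(vrps)
            vrps.append((prefix, max_length, rng.randrange(1, 65536)))
        else:
            host = ipaddress.IPv4Network((rng.getrandbits(32), 32))
            net = host.supernet(new_prefix=rng.randrange(8, 25))
            max_length = min(net.prefixlen + rng.randrange(0, 9), 32)
            vrps.append((str(net), max_length, rng.randrange(1, 65536)))
    return vrps
]]></sourcecode>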
      </section>
      <section anchor="bgp-traffic-generation-requirements">
        <name>BGP Traffic Generation Requirements</name>
        <t>The BGP traffic generator must present the DUT with a stable baseline routing table prior to initiating any benchmark. This ensures that the DUT begins each test run in a known, converged state with predictable CPU and memory utilization. The generator must also provide a set of ROV-affected prefixes whose origin AS can be manipulated in concert with VRP updates from the RTR emulator. These prefixes should span a range of prefix lengths and originate from diverse ASes to reflect realistic routing conditions. The traffic generator must support deterministic convergence triggers, such as the precise injection of BGP updates following a VRP change or the simultaneous application of both BGP and VRP events.</t>
      </section>
      <section anchor="traffic-profile-parameters">
        <name>Traffic Profile Parameters</name>
        <t>When data-plane traffic is used, the following parameters <bcp14>SHOULD</bcp14> be specified:</t>
        <ul spacing="normal">
          <li>
            <t>Fixed frame size used for the measurement. For convergence measurements, a small fixed packet size <bcp14>SHOULD</bcp14> be used to improve time resolution. A 128-byte packet at Layer 3 is one practical choice. Other fixed sizes <bcp14>MAY</bcp14> be used when required by the traffic generator or encapsulation overhead, but the selected size <bcp14>MUST</bcp14> be documented.</t>
          </li>
          <li>
            <t>Traffic rate in packets per second (PPS). For convergence measurements, the traffic <bcp14>SHOULD</bcp14> use a constant rate so that packet arrival times map directly to time resolution. Burst traffic <bcp14>MAY</bcp14> be used in stress scenarios, but such tests <bcp14>MUST</bcp14> be reported separately from constant-rate convergence tests.</t>
          </li>
          <li>
            <t>Traffic pattern. For convergence measurements, constant-rate traffic is <bcp14>RECOMMENDED</bcp14>.</t>
          </li>
          <li>
            <t>Source and destination IP address selection rules. When the setup sends one stream per tested prefix, the destination address for that stream <bcp14>SHOULD</bcp14> be selected from within the tested prefix. The first usable address is one valid example. The selection rule <bcp14>MUST</bcp14> be documented and applied consistently across runs.</t>
          </li>
          <li>
            <t>Whether traffic matches ROV-affected prefixes.</t>
          </li>
        </ul>
        <t>Each frame size and PPS combination <bcp14>SHOULD</bcp14> be reported separately.</t>
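        <t>The relationship between the constant traffic rate and achievable time resolution can be made concrete with a short non-normative Python sketch. The function names are illustrative only; the loss-derived calculation assumes that packet loss occurs only during the event under test.</t>
        <sourcecode type="python"><![CDATA[
def time_resolution_s(pps):
    # A constant-rate stream of `pps` packets per second has an
    # inter-packet gap of 1/pps seconds, which bounds the time
    # resolution of any loss-based observation on that stream.
    return 1.0 / pps

def loss_derived_interval_s(lost_packets, pps):
    # Traffic-impact interval inferred from packets lost on a
    # constant-rate stream, assuming no loss outside the event.
    return lost_packets / pps
]]></sourcecode>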
      </section>
    </section>
    <section anchor="benchmarking-methodology">
      <name>Benchmarking Methodology</name>
      <t>This section describes the general methodology for benchmarking ROV behavior on a DUT. The goal is to ensure that all tests are repeatable, comparable across different environments, and representative of realistic deployment conditions. The methodology defines how to establish a controlled and stable test environment, how to specify and vary input conditions, and how to measure key performance metrics associated with ROV processing.</t>
      <section anchor="general-considerations">
        <name>General Considerations</name>
        <t>Before any measurements are taken, the DUT must reach a well-defined steady state in which the RPKI-RTR session is fully established, the VRP set has been completely synchronized, and the BGP control plane has converged. A warm-up period is recommended to eliminate any cold-start effects that could bias measurement results.</t>
        <t>All sources of measurement noise should be avoided. Features such as logging, real-time telemetry export, or periodic background tasks can interfere with timing-sensitive measurements; therefore, such features should be disabled or rate-limited during benchmarking. CPU clock scaling, thermal throttling, or other variable-performance modes should be minimized if the test setup allows it.</t>
      </section>
      <section anchor="test-control-and-input-conditions">
        <name>Test Control and Input Conditions</name>
        <t>Accurate benchmarking depends on precise control of the input conditions applied to the DUT. All tests should begin from a consistent baseline consisting of:</t>
        <ul spacing="normal">
          <li>
            <t>A predefined VRP set size (e.g., tens of thousands to millions of entries).</t>
          </li>
          <li>
            <t>A stable and realistic baseline BGP RIB-in with IPv4 and IPv6 route counts documented separately (for example, 1,000,000 IPv4 routes and 250,000 IPv6 routes in a mixed table, or equivalent AFI-specific baselines).</t>
          </li>
        </ul>
        <t>From this baseline, input variables may be modified to stress different aspects of ROV behavior. These variables include the VRP churn rate, ranging from steady incremental updates to high-intensity bursts, and the type of RPKI-RTR updates provided to the DUT, such as incremental updates versus full-table refreshes. Each of these conditions may trigger different processing strategies within the DUT, and therefore must be explicitly controlled and documented.</t>
      </section>
      <section anchor="metrics-and-measurements">
        <name>Metrics and Measurements</name>
        <t>Benchmarking ROV behavior requires collecting quantitative performance metrics that reflect how the DUT processes validation information and incorporates it into the BGP decision process. Therefore, this document proposes key performance metrics including ROV update processing latency, ROV validation latency, BGP convergence time, VRP storage size, system CPU and memory utilization, per-process utilization where available, and ROV state rebuild time.</t>
        <t><strong>ROV update processing latency</strong> measures the time from receipt of an RTR update (incremental or full) until the DUT has fully updated its internal validation state. This metric captures the efficiency of ROV data structures and algorithms.</t>
        <t><strong>ROV validation latency</strong> measures the time interval between a router's receipt of a BGP UPDATE message that contains a new or changed route, and the completion of the ROV procedure for that route, producing a validation state of Valid, Invalid, or NotFound. This metric isolates the internal validation step, excluding the larger BGP convergence process, and provides insight into the responsiveness of the DUT's validation engine.</t>
        <t><strong>BGP convergence time</strong> with ROV enabled measures how long the DUT takes to converge on BGP prefixes whose validation states change due to VRP updates. This reflects the real operational behavior of ROV as it interacts with the control plane.</t>
        <t>The <strong>VRP storage size</strong> inside the DUT should also be recorded to evaluate the scalability of the implementation when operating with large VRP datasets. Alongside this, <strong>resource utilization</strong> should be monitored to identify performance limits or resource-intensive operations triggered by ROV. At minimum, the benchmark <bcp14>SHOULD</bcp14> collect system-level CPU utilization and system-level memory consumption. When the DUT exposes process-level counters, the benchmark <bcp14>SHOULD</bcp14> also collect utilization for the ROV process and for the BGP process or main routing process.</t>
        <t>A recovery-related measurement, <strong>ROV state rebuild time</strong> after RTR session reset, quantifies the time needed for the DUT to re-establish a complete and correct ROV validation state after an RTR session reset or cache outage. This metric reflects robustness and recovery behavior under fault or restart scenarios.</t>
        <t>Finally, the DUT should be evaluated under high-pressure scenarios by measuring its behavior when processing VRP bursts, such as surges of 100-10,000 VRPs per second. This measurement reveals whether the implementation can sustain abrupt workload increases without dropping updates, stalling, or entering unstable states.</t>
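        <t>One way to drive such burst scenarios is to pace VRP changes at a fixed rate, as in the following non-normative Python sketch; send_update is a hypothetical emulator hook and is not part of this methodology.</t>
        <sourcecode type="python"><![CDATA[
import time

def send_vrp_burst(send_update, updates, rate):
    """Send `updates` through send_update() at `rate` changes per second.

    Pacing against a monotonic clock keeps the offered VRP churn rate
    deterministic across repeated runs.
    """
    interval = 1.0 / rate
    start = time.monotonic()
    for i, update in enumerate(updates):
        send_update(update)
        # sleep until the scheduled send time of the next update
        delay = start + (i + 1) * interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
]]></sourcecode>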
      </section>
    </section>
    <section anchor="benchmark-tests">
      <name>Benchmark Tests</name>
      <t>This section defines the individual benchmark tests used to evaluate the performance and behavior of a DUT implementing ROV. Each test focuses on a specific aspect of the ROV processing pipeline, including VRP ingestion, validation, interaction with BGP, scalability limits, and robustness under stress and failure conditions. All tests assume the laboratory setup and input conditions described previously.</t>
      <section anchor="test-rov-update">
        <name>ROV Update Processing Latency</name>
        <t><strong>Objective</strong>: Measure the latency from the arrival of an RTR PDU until the new VRP information is installed in the DUT's internal ROV tables.</t>
        <t>The <strong>test procedures</strong> for ROV update processing latency are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Prepare baseline state  </t>
            <ul spacing="normal">
              <li>
                <t>Establish RTR session between DUT and the RTR emulator.</t>
              </li>
              <li>
                <t>Preload DUT with a selected baseline VRP size (e.g., 100k VRPs).</t>
              </li>
              <li>
                <t>Ensure BGP is fully converged.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Inject controlled RTR update  </t>
            <ul spacing="normal">
              <li>
                <t>From the emulator, send a single incremental update modifying a known VRP.</t>
              </li>
              <li>
                <t>Alternatively, for full-refresh tests, send a full VRP set replacement PDU sequence.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Timestamp PDU transmission  </t>
            <ul spacing="normal">
              <li>
                <t>Record the exact moment the first update PDU is sent.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor DUT internal state  </t>
            <ul spacing="normal">
              <li>
                <t>Use device instrumentation (API, CLI, or telemetry) to detect the exact moment the VRP table reflects the update.</t>
              </li>
              <li>
                <t>Confirm the VRP entry has been added, removed, or modified as expected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Calculate latency  </t>
            <ul spacing="normal">
              <li>
                <t>The latency is the time difference between the moment the RTR PDU is sent and the moment the VRP is applied on the DUT.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat for multiple VRP table sizes  </t>
            <ul spacing="normal">
              <li>
                <t>E.g., 50k, 100k, 500k, and 1M VRPs.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat at least 10 times per condition  </t>
            <ul spacing="normal">
              <li>
                <t>Compute mean and standard deviation.</t>
              </li>
            </ul>
          </li>
        </ol>
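        <t>The procedure above can be automated as in the following non-normative Python sketch, where send_pdu and vrp_applied are hypothetical hooks onto the RTR emulator and the DUT instrumentation, respectively.</t>
        <sourcecode type="python"><![CDATA[
import statistics
import time

def measure_update_latency(send_pdu, vrp_applied,
                           poll_interval=0.001, timeout=10.0):
    """One latency sample: timestamp the PDU send (step 3), poll the
    DUT until the VRP table reflects the update (step 4), and return
    the difference (step 5)."""
    t0 = time.monotonic()
    send_pdu()
    while not vrp_applied():
        if time.monotonic() - t0 > timeout:
            raise TimeoutError("VRP not applied within timeout")
        time.sleep(poll_interval)
    return time.monotonic() - t0

def summarize(samples):
    # Mean and standard deviation over repeated runs (step 7).
    return statistics.mean(samples), statistics.stdev(samples)
]]></sourcecode>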
      </section>
      <section anchor="test-rov-validation">
        <name>ROV Validation Latency</name>
        <t><strong>Objective</strong>: Measure how long the DUT takes to apply updated VRPs to the validation states of affected BGP prefixes.</t>
        <t>The <strong>test procedures</strong> for ROV validation latency are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Establish baseline  </t>
            <ul spacing="normal">
              <li>
                <t>Load the baseline BGP table with IPv4 and IPv6 route counts stated explicitly (for example, 1,000,000 IPv4 routes and 250,000 IPv6 routes, or an AFI-specific baseline).</t>
              </li>
              <li>
                <t>Ensure all prefixes have a known baseline validation state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Select a controlled prefix set  </t>
            <ul spacing="normal">
              <li>
                <t>Pick a set of prefixes (e.g., 1,000) whose origin AS is tied to specific VRPs.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Trigger validation update  </t>
            <ul spacing="normal">
              <li>
                <t>Modify VRPs so that these prefixes change validation state (Valid-&gt;Invalid or Invalid-&gt;Valid).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Timestamp VRP installation completion  </t>
            <ul spacing="normal">
              <li>
                <t>As measured in <xref target="test-rov-update"/>.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor DUT validation table  </t>
            <ul spacing="normal">
              <li>
                <t>Continuously query validation state for selected prefixes.</t>
              </li>
              <li>
                <t>Note the timestamp when all prefixes reflect the updated state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Compute latency  </t>
            <ul spacing="normal">
              <li>
                <t>The validation latency is the time difference between the moment the VRP installation is completed and the moment all affected validation states have been updated.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat with varying set sizes  </t>
            <ul spacing="normal">
              <li>
                <t>E.g., 10 prefixes, 100 prefixes, 1,000 prefixes.</t>
              </li>
            </ul>
          </li>
        </ol>
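        <t>Steps 5 and 6 above can be sketched as the following non-normative polling loop; get_state is a hypothetical DUT query hook, and t_install is the VRP installation timestamp obtained in step 4.</t>
        <sourcecode type="python"><![CDATA[
import time

def measure_validation_latency(prefixes, expected_state, get_state,
                               t_install, poll_interval=0.001,
                               timeout=30.0):
    """Time from VRP installation completion until every affected
    prefix reports the expected validation state."""
    pending = set(prefixes)
    while pending:
        if time.monotonic() - t_install > timeout:
            raise TimeoutError("%d prefixes never updated" % len(pending))
        # drop prefixes whose validation state has already changed
        pending = {p for p in pending if get_state(p) != expected_state}
        if pending:
            time.sleep(poll_interval)
    return time.monotonic() - t_install
]]></sourcecode>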
      </section>
      <section anchor="bgp-convergence-with-rov-enabled">
        <name>BGP Convergence with ROV Enabled</name>
        <t><strong>Objective</strong>: Measure BGP convergence time for routes impacted by ROV state changes, and compare to BGP-only convergence without ROV.</t>
        <t>The <strong>test procedures</strong> for BGP convergence with ROV enabled are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Prepare baseline  </t>
            <ul spacing="normal">
              <li>
                <t>Establish full-table BGP adjacency.</t>
              </li>
              <li>
                <t>Enable ROV on DUT.</t>
              </li>
              <li>
                <t>Ensure stable initial convergence.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Select test prefixes  </t>
            <ul spacing="normal">
              <li>
                <t>Choose prefixes that will transition from Valid to Invalid once VRP updates are applied.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Trigger VRP state change  </t>
            <ul spacing="normal">
              <li>
                <t>Send VRP modifications via RTR.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor BGP behavior  </t>
            <ul spacing="normal">
              <li>
                <t>Observe best-path selection changes.</t>
              </li>
              <li>
                <t>Timestamp withdrawal or replacement of Invalid prefixes.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure convergence  </t>
            <ul spacing="normal">
              <li>
                <t>Convergence Timer Starts: The convergence timer <bcp14>SHOULD</bcp14> start at the timestamp when the DUT completes installation of the relevant VRP update.</t>
              </li>
              <li>
                <t>Convergence Timer Ends: The convergence timer <bcp14>SHOULD</bcp14> end when both the BGP RIB and FIB reach stable state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat test with ROV disabled  </t>
            <ul spacing="normal">
              <li>
                <t>Use identical routing changes for baseline comparison.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record:  </t>
            <ul spacing="normal">
              <li>
                <t>Time to withdraw Invalid prefixes.</t>
              </li>
              <li>
                <t>Time until new best paths stabilize.</t>
              </li>
              <li>
                <t>Differences relative to ROV-disabled baseline.</t>
              </li>
            </ul>
          </li>
        </ol>
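        <t>Steps 5 through 7 reduce to simple timestamp arithmetic, sketched below in non-normative Python. All timestamps are assumed to come from a single monotonic clock; the function names are illustrative only.</t>
        <sourcecode type="python"><![CDATA[
def convergence_time_s(t_vrp_installed, t_rib_stable, t_fib_stable):
    # Timer starts at VRP installation completion and ends when both
    # the BGP RIB and the FIB are stable, i.e., at the later of the
    # two stabilization timestamps.
    return max(t_rib_stable, t_fib_stable) - t_vrp_installed

def rov_overhead(t_with_rov, t_without_rov):
    # Absolute and relative difference against the ROV-disabled
    # baseline run (steps 6 and 7).
    delta = t_with_rov - t_without_rov
    return delta, delta / t_without_rov
]]></sourcecode>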
      </section>
      <section anchor="vrp-scalability-tests">
        <name>VRP Scalability Tests</name>
        <t><strong>Objective</strong>: Evaluate DUT performance with varying VRP table sizes.</t>
        <t>The <strong>test procedures</strong> for VRP scalability tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Generate VRP datasets at multiple sizes (e.g., 50k, 100k, 500k, 1M).</t>
          </li>
          <li>
            <t>Load each dataset into the RTR emulator.</t>
          </li>
          <li>
            <t>For each dataset, measure:  </t>
            <ul spacing="normal">
              <li>
                <t>Full-table synchronization time.</t>
              </li>
              <li>
                <t>VRP update processing latency (from <xref target="test-rov-update"/>).</t>
              </li>
              <li>
                <t>ROV validation latency (from <xref target="test-rov-validation"/>).</t>
              </li>
              <li>
                <t>System memory consumption.</t>
              </li>
              <li>
                <t>System CPU utilization during sync and steady state.</t>
              </li>
              <li>
                <t>Per-process utilization for the ROV process and the BGP or routing process, when available.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record failures  </t>
            <ul spacing="normal">
              <li>
                <t>Session drops.</t>
              </li>
              <li>
                <t>Timeouts.</t>
              </li>
              <li>
                <t>Missing VRPs.</t>
              </li>
              <li>
                <t>ROV process crashes.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat 10 times per size for statistical stability.</t>
          </li>
        </ol>
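        <t>One way to produce the graded datasets is to synthesize VRPs deterministically, so that each of the 10 repetitions per size consumes an identical input. A sketch under the assumption that synthetic IPv4 VRPs drawn from private address and AS space are acceptable (function name and layout are illustrative):</t>

```python
import ipaddress
import random


def generate_vrps(count, seed=0):
    """Return `count` synthetic VRPs as (prefix, max_length, origin_asn).
    Deterministic via a fixed seed so repeated runs are comparable."""
    rng = random.Random(seed)
    vrps = []
    for _ in range(count):
        plen = rng.randint(16, 24)
        # stay inside the 10.0.0.0/8 test space
        addr = int(ipaddress.IPv4Address("10.0.0.0")) + rng.getrandbits(24)
        net = ipaddress.ip_network((addr, plen), strict=False)
        max_len = rng.randint(plen, 24)       # max_length >= prefix length
        origin = rng.randint(64512, 65534)    # private-use ASNs
        vrps.append((str(net), max_len, origin))
    return vrps
```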
      </section>
      <section anchor="vrp-churn-and-stress-tests">
        <name>VRP Churn and Stress Tests</name>
        <t><strong>Objective</strong>: Stress-test the DUT under rapid VRP changes to measure stability, performance, and correctness.</t>
        <t>The <strong>test procedures</strong> for VRP churn and stress tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Baseline setup  </t>
            <ul spacing="normal">
              <li>
                <t>Load a stable VRP table (e.g., 500k).</t>
              </li>
              <li>
                <t>Establish the baseline BGP table with IPv4 and IPv6 route counts stated explicitly.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Generate controlled churn patterns  </t>
            <ul spacing="normal">
              <li>
                <t>Rapid add or remove spikes: 100-10,000 VRPs per second.</t>
              </li>
              <li>
                <t>Sustained churn: continuous modifications for 5-10 minutes.</t>
              </li>
              <li>
                <t>Mixed churn: adds, removes, and changes simultaneously.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure DUT behavior  </t>
            <ul spacing="normal">
              <li>
                <t>VRP update backlog or queueing.</t>
              </li>
              <li>
                <t>ROV validation delays.</t>
              </li>
              <li>
                <t>System CPU spikes and, when available, spikes in the ROV process and the BGP or routing process.</t>
              </li>
              <li>
                <t>BGP convergence degradation.</t>
              </li>
              <li>
                <t>Missed or dropped VRP updates.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Check correctness  </t>
            <ul spacing="normal">
              <li>
                <t>Verify that no stale or inconsistent ROV states remain.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record crash, stall, or throttling events.</t>
          </li>
        </ol>
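        <t>The churn patterns above can be driven from a precomputed schedule, so the offered load is exact and reproducible across runs. A minimal sketch (names are illustrative; integration with the RTR emulator is out of scope here):</t>

```python
def churn_schedule(rate_per_s, duration_s, mix=("add", "remove", "change")):
    """Timestamped list of VRP operations: `rate_per_s` operations per
    second for `duration_s` seconds, cycling through the operation mix.
    For a pure add or remove spike, pass mix=("add",) or mix=("remove",)."""
    n = int(rate_per_s * duration_s)
    interval = 1.0 / rate_per_s
    return [(round(k * interval, 6), mix[k % len(mix)]) for k in range(n)]
```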
      </section>
      <section anchor="resource-utilization">
        <name>Resource Utilization</name>
        <t><strong>Objective</strong>: Measure resource consumption under various ROV workloads.</t>
        <t>The <strong>test procedures</strong> for resource utilization tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Establish monitoring tools  </t>
            <ul spacing="normal">
              <li>
                <t>System CPU sampling (100-500 ms interval).</t>
              </li>
              <li>
                <t>System memory usage tracking.</t>
              </li>
              <li>
                <t>Per-process sampling for the ROV process and the BGP or routing process, when available.</t>
              </li>
              <li>
                <t>Hardware counters if available.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure under conditions  </t>
            <ul spacing="normal">
              <li>
                <t>Idle ROV.</t>
              </li>
              <li>
                <t>Full VRP sync.</t>
              </li>
              <li>
                <t>VRP churn.</t>
              </li>
              <li>
                <t>BGP convergence triggered by ROV events.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record  </t>
            <ul spacing="normal">
              <li>
                <t>System CPU load curves.</t>
              </li>
              <li>
                <t>Peak and steady-state system memory consumption.</t>
              </li>
              <li>
                <t>Process-level load curves for the ROV process and the BGP or routing process, when available.</t>
              </li>
              <li>
                <t>Any evidence of saturation (e.g., 100% CPU, memory exhaustion).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Identify thresholds  </t>
            <ul spacing="normal">
              <li>
                <t>Points where performance degrades or ROV processing becomes unstable.</t>
              </li>
            </ul>
          </li>
        </ol>
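        <t>The sampling step can be as simple as polling a utilization source on a fixed interval and retaining the load curve together with its peak and steady-state values. A sketch, where <tt>read_value</tt> is a caller-supplied callable (for example, wrapping an SNMP query or a CLI scrape; both are hypothetical here):</t>

```python
import time


def sample_curve(read_value, interval_s=0.1, samples=10):
    """Poll `read_value` every `interval_s` seconds (100-500 ms per the
    procedure above) and return the load curve with its peak and
    steady-state (final-sample) values."""
    curve = []
    for _ in range(samples):
        curve.append(read_value())
        time.sleep(interval_s)
    return {"curve": curve, "peak": max(curve), "steady": curve[-1]}
```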
      </section>
      <section anchor="rtr-session-behavior-tests">
        <name>RTR Session Behavior Tests</name>
        <t><strong>Objective</strong>: Evaluate robustness and recovery of DUT under RTR failure and failover scenarios.</t>
        <t>The <strong>test procedures</strong> for RTR session behavior tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Session reset test  </t>
            <ul spacing="normal">
              <li>
                <t>Establish normal RTR session.</t>
              </li>
              <li>
                <t>Trigger forced session reset from emulator.</t>
              </li>
              <li>
                <t>Measure the time to reestablish the RTR session, the ROV state rebuild time, and the time until the validation state becomes consistent again.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Cache failover test  </t>
            <ul spacing="normal">
              <li>
                <t>Configure the DUT with two RTR servers (primary and secondary).</t>
              </li>
              <li>
                <t>Terminate primary RTR connection.</t>
              </li>
              <li>
                <t>Measure failover time and data consistency after switch.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Full resynchronization timing  </t>
            <ul spacing="normal">
              <li>
                <t>From emulator, force a full Reset Query sequence.</t>
              </li>
              <li>
                <t>Measure full VRP reload time.</t>
              </li>
              <li>
                <t>Compare across different VRP scales.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Incremental update performance  </t>
            <ul spacing="normal">
              <li>
                <t>Send controlled incremental PDUs.</t>
              </li>
              <li>
                <t>Measure processing latency and correctness.</t>
              </li>
              <li>
                <t>Introduce occasional malformed or unexpected PDUs to test robustness.</t>
              </li>
            </ul>
          </li>
        </ol>
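        <t>Comparing full resynchronization across VRP scales reduces to a per-scale summary of the measured reload times. A sketch; the mapping from table size to measurements is supplied by the tester:</t>

```python
def resync_summary(measurements):
    """Summarize full-VRP-reload times per table size.
    `measurements` maps VRP table size -> list of reload times (seconds)."""
    summary = {}
    for size in sorted(measurements):
        times = measurements[size]
        summary[size] = {
            "min": min(times),
            "avg": sum(times) / len(times),
            "max": max(times),
        }
    return summary
```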
      </section>
    </section>
    <section anchor="reporting-requirements">
      <name>Reporting Requirements</name>
      <t>An ROV benchmarking report <bcp14>MUST</bcp14> provide enough detail to allow reproducibility and meaningful comparison across different DUTs. Each report <bcp14>MUST</bcp14> include the following elements:</t>
      <ul spacing="normal">
        <li>
          <t><strong>Test environment description</strong>: The report <bcp14>MUST</bcp14> specify the DUT hardware and software versions, the testbed topology, and all ROV-related configuration parameters required to replicate the setup.</t>
        </li>
        <li>
          <t><strong>Input conditions</strong>: The report <bcp14>MUST</bcp14> document the VRP set size, the RIB-in size with IPv4 and IPv6 counts stated separately, the presence and rate of VRP churn, and whether RTR updates were incremental or full.</t>
        </li>
        <li>
          <t><strong>Metrics and results</strong>: Each measured metric <bcp14>MUST</bcp14> include its definition, a brief description of the measurement procedure, and results presented in tabular numerical form (including minimum, average, maximum, and at least P95 values). Graphs <bcp14>MAY</bcp14> be included for clarification.</t>
        </li>
        <li>
          <t><strong>Deviations and anomalies</strong>: Any deviation from the expected behavior <bcp14>MUST</bcp14> be described, including the conditions under which it occurred and whether the test was repeated.</t>
        </li>
        <li>
          <t><strong>Summary of observations</strong>: The report <bcp14>MUST</bcp14> include a concise summary of overall DUT performance, scalability limits observed, and any significant effects of enabling ROV on BGP behavior.</t>
        </li>
      </ul>
      <t>In addition, the report <bcp14>MUST</bcp14> include, at minimum, the following parameters:</t>
      <ul spacing="normal">
        <li>
          <t>DUT hardware model, CPU architecture, memory size, and software version.</t>
        </li>
        <li>
          <t>Complete DUT configuration relevant to ROV and BGP.</t>
        </li>
        <li>
          <t>Testbed topology description.</t>
        </li>
        <li>
          <t>VRP table size.</t>
        </li>
        <li>
          <t>VRP churn rate.</t>
        </li>
        <li>
          <t>RIB-in size, with IPv4 and IPv6 route counts stated separately.</t>
        </li>
        <li>
          <t>AFIs under test (IPv4, IPv6, or mixed).</t>
        </li>
        <li>
          <t>Number of RTR sessions.</t>
        </li>
        <li>
          <t>RTR timer configuration.</t>
        </li>
        <li>
          <t>Presence and parameters of data-plane traffic (if used), including fixed frame size, PPS rate, traffic pattern, and address-selection rule.</t>
        </li>
        <li>
          <t>ROV policy mode (e.g., reject Invalid).</t>
        </li>
        <li>
          <t>System CPU sampling interval and any process-level sampling interval.</t>
        </li>
        <li>
          <t>Measurement repetition count.</t>
        </li>
      </ul>
      <t>For each metric, the report <bcp14>MUST</bcp14> provide:</t>
      <ul spacing="normal">
        <li>
          <t>Metric definition.</t>
        </li>
        <li>
          <t>Measurement method.</t>
        </li>
        <li>
          <t>Minimum, average, maximum, and at least P95 values.</t>
        </li>
        <li>
          <t>Number of samples collected.</t>
        </li>
      </ul>
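      <t>The per-metric statistics required above (minimum, average, maximum, at least P95, and the sample count) can be computed as in the following sketch. It uses the nearest-rank method for P95; other percentile interpolations are acceptable provided the report states which method was used:</t>

```python
import math


def metric_summary(samples):
    """Minimum, average, maximum, P95 (nearest-rank), and sample count."""
    if not samples:
        raise ValueError("no samples collected")
    ordered = sorted(samples)
    n = len(ordered)
    rank = max(1, math.ceil(0.95 * n))  # nearest-rank P95, 1-based
    return {
        "min": ordered[0],
        "avg": sum(ordered) / n,
        "max": ordered[-1],
        "p95": ordered[rank - 1],
        "samples": n,
    }
```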
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>This document defines a benchmarking methodology for evaluating ROV on routing devices. As such, it does not introduce new protocols, modify existing security mechanisms, or create new vulnerabilities within the RPKI system or BGP itself. All benchmarking activities are intended to take place in isolated laboratory environments. Nevertheless, a number of security considerations apply to the execution and interpretation of the tests described in this document.</t>
      <t>Benchmarking ROV necessarily involves the generation, manipulation, and replay of RPKI objects. These test artifacts <bcp14>MUST NOT</bcp14> be injected into production RPKI repositories, production RPKI caches, or live BGP routing systems. Test-generated RPKI data sets <bcp14>SHOULD</bcp14> be clearly separated from real-world trust anchors, and laboratory RPKI caches <bcp14>SHOULD</bcp14> use isolated test trust anchors to prevent accidental propagation.</t>
      <t>Similarly, BGP routing information used in the tests, including simulated full tables, Invalid prefixes, and artificially crafted origin-AS combinations, <bcp14>MUST NOT</bcp14> leak into production routing domains. All BGP sessions used for testing <bcp14>MUST</bcp14> be confined to a closed environment without external connectivity.</t>
      <t>Tests involving stress conditions, such as high churn rates or large-scale VRP updates, may cause elevated CPU or memory consumption on the DUT. Operators performing such tests <bcp14>SHOULD</bcp14> ensure that the DUT is not simultaneously connected to any production network to avoid unintended service degradation.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no actions for IANA.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-normative-references">
      <name>Normative References</name>
      <reference anchor="RFC2119">
        <front>
          <title>Key words for use in RFCs to Indicate Requirement Levels</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <date month="March" year="1997"/>
          <abstract>
            <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="2119"/>
        <seriesInfo name="DOI" value="10.17487/RFC2119"/>
      </reference>
      <reference anchor="RFC8174">
        <front>
          <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
          <author fullname="B. Leiba" initials="B." surname="Leiba"/>
          <date month="May" year="2017"/>
          <abstract>
            <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="8174"/>
        <seriesInfo name="DOI" value="10.17487/RFC8174"/>
      </reference>
      <reference anchor="RFC6811">
        <front>
          <title>BGP Prefix Origin Validation</title>
          <author fullname="P. Mohapatra" initials="P." surname="Mohapatra"/>
          <author fullname="J. Scudder" initials="J." surname="Scudder"/>
          <author fullname="D. Ward" initials="D." surname="Ward"/>
          <author fullname="R. Bush" initials="R." surname="Bush"/>
          <author fullname="R. Austein" initials="R." surname="Austein"/>
          <date month="January" year="2013"/>
          <abstract>
            <t>To help reduce well-known threats against BGP including prefix mis-announcing and monkey-in-the-middle attacks, one of the security requirements is the ability to validate the origination Autonomous System (AS) of BGP routes. More specifically, one needs to validate that the AS number claiming to originate an address prefix (as derived from the AS_PATH attribute of the BGP route) is in fact authorized by the prefix holder to do so. This document describes a simple validation mechanism to partially satisfy this requirement. [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6811"/>
        <seriesInfo name="DOI" value="10.17487/RFC6811"/>
      </reference>
      <reference anchor="RFC8210">
        <front>
          <title>The Resource Public Key Infrastructure (RPKI) to Router Protocol, Version 1</title>
          <author fullname="R. Bush" initials="R." surname="Bush"/>
          <author fullname="R. Austein" initials="R." surname="Austein"/>
          <date month="September" year="2017"/>
          <abstract>
            <t>In order to verifiably validate the origin Autonomous Systems and Autonomous System Paths of BGP announcements, routers need a simple but reliable mechanism to receive Resource Public Key Infrastructure (RFC 6480) prefix origin data and router keys from a trusted cache. This document describes a protocol to deliver them.</t>
            <t>This document describes version 1 of the RPKI-Router protocol. RFC 6810 describes version 0. This document updates RFC 6810.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8210"/>
        <seriesInfo name="DOI" value="10.17487/RFC8210"/>
      </reference>
      <reference anchor="RFC1242">
        <front>
          <title>Benchmarking Terminology for Network Interconnection Devices</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <date month="July" year="1991"/>
          <abstract>
            <t>This memo discusses and defines a number of terms that are used in describing performance benchmarking tests and the results of such tests. This memo provides information for the Internet community. It does not specify an Internet standard.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="1242"/>
        <seriesInfo name="DOI" value="10.17487/RFC1242"/>
      </reference>
      <reference anchor="RFC2285">
        <front>
          <title>Benchmarking Terminology for LAN Switching Devices</title>
          <author fullname="R. Mandeville" initials="R." surname="Mandeville"/>
          <date month="February" year="1998"/>
          <abstract>
            <t>This document is intended to provide terminology for the benchmarking of local area network (LAN) switching devices. It extends the terminology already defined for benchmarking network interconnect devices in RFCs 1242 and 1944 to switching devices. This memo provides information for the Internet community. It does not specify an Internet standard of any kind.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2285"/>
        <seriesInfo name="DOI" value="10.17487/RFC2285"/>
      </reference>
      <reference anchor="RFC2544">
        <front>
          <title>Benchmarking Methodology for Network Interconnect Devices</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
          <date month="March" year="1999"/>
          <abstract>
            <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2544"/>
        <seriesInfo name="DOI" value="10.17487/RFC2544"/>
      </reference>
      <reference anchor="RFC2889">
        <front>
          <title>Benchmarking Methodology for LAN Switching Devices</title>
          <author fullname="R. Mandeville" initials="R." surname="Mandeville"/>
          <author fullname="J. Perser" initials="J." surname="Perser"/>
          <date month="August" year="2000"/>
          <abstract>
            <t>This document is intended to provide methodology for the benchmarking of local area network (LAN) switching devices. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2889"/>
        <seriesInfo name="DOI" value="10.17487/RFC2889"/>
      </reference>
      <reference anchor="RFC3918">
        <front>
          <title>Methodology for IP Multicast Benchmarking</title>
          <author fullname="D. Stopp" initials="D." surname="Stopp"/>
          <author fullname="B. Hickman" initials="B." surname="Hickman"/>
          <date month="October" year="2004"/>
          <abstract>
            <t>The purpose of this document is to describe methodology specific to the benchmarking of multicast IP forwarding devices. It builds upon the tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking Methodology Working Group (BMWG) efforts. This document seeks to extend these efforts to the multicast paradigm.</t>
            <t>The BMWG produces two major classes of documents: Benchmarking Terminology documents and Benchmarking Methodology documents. The Terminology documents present the benchmarks and other related terms. The Methodology documents define the procedures required to collect the benchmarks cited in the corresponding Terminology documents. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="3918"/>
        <seriesInfo name="DOI" value="10.17487/RFC3918"/>
      </reference>
    </references>
    <?line 566?>

<section numbered="false" anchor="ack">
      <name>Acknowledgements</name>
      <t>Many thanks to Giuseppe Fioccola and Christian Giese for their review, comments, and suggestions.</t>
    </section>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA61deXPbSHb/n58C8VRqJS+pWJ6ZzaySTCLLx6riQyvJs7U5
KgUCTRJjEGBwSOaMnc+Sz5JPlnd2vwZAeSZZ1x4UCDS6X7/j967mYrGYtV1a
5f+RlnXlzpKu6d2s2DX0qe2ePnny+ydPZ1nanSVFtaqTr5KLjcs+zNp+uS3a
tqirbr+D5y5f3L6cpY1Lz5J3O9ekHXzTJjBw8iat0rXbuqpL/vX6xdXr84sX
/z67X58lz9786dVsltdZlW5hhLxJV92iXC+W23v4H1dlm23afCiq9WLruk2d
12W93i9WdbNo6rvFk9PZrCu6Ep58Zu5N3oR7E7g3ua77ziXvmmJdVMkPaVnk
NLfk6PrdD8ezdLls3N1ZAn/QKLMyrWBqrprN0h4Gas5mC1h4e5bMkoTn+bpY
wkivix6u1A3c/C+bulqv+7TKerieLmtYfd3s4eus6PY4veJHmBn+XfdV18Cl
i01RpXDBbdOiPEvKoi+X//TTOivT5YnL+5MM3j9879u0Sl45Gode+4c+vXdF
cuuyTYXLLVz7K165hpGqtPqnDY1yktXb2ayqmy0Q586dwX3XLy+enp7+Xj5+
d/q338jH3313eqpXn54+kY+nT795qo89/e5b/fjtN/rY0+++08G+/v3pd/Bx
hgxl3oj/WSwWSbpsuybNutnsdlO0CTBIT9yTu1VROWCqxDJHsh1seIMb3rRJ
t0m7pNjuSuY92OGT5HbjBvdnfQtDAj/k7q7I3KJ0d66EF2zSu6Ju5rAHWdnn
+J5dU2cOOB4+1qvkjjnJ5TDwebJL92Wd5snRD9dXx0m/w2/aOUzBwQAwG1gN
8tzSdffOVTgXEo1nr67msEOwQXW52AHruaRxbd03mUv6riiLn4hX53QzDtZm
KbAIfNHtcRI4Tl/lroHpNHuamcpeWuLAeUFyyAunBeQ9vAEW22ZNsYTZb1zj
gAxlWd/TG3YNrLgAorHwwhi4GbCIFl+IdxyUtj/VfO0V7MAuOULxPqZBQC0Q
GWCmsIwaJ5L3Ga5159IuXZZO3rXdpQ39CXPsS3hlmjV124ZdZLVyMiM+2RZ5
XrrZ7KvkEikIQ+K3s9mDEg+0bJN257JiVcBs4I6ffxam/vwZvkRCtIGH6gT4
w262HfucNIRsEg5/fpx4nsZtu98U2SZBHi6AisWyxyHuipToeH31z5eLrl7Q
kA08fnt9jKTp6qwuhdnDBFHUcIIwI9ASwIarPbIPz7TFRdFK50CLO/4AovC2
7l6C/OcnyXO3K+s9krBVxkG+K6re4ZCw56C6YaVC8AoYFXazZc5jaiR3sIE1
EAVEw4UtIQn4wUoHzAX2sRN+KZoBh6/6KtNt/NOmgN22TOs+wh8FsJhDuq2b
+h6GJElq6ErWNw28tdwnVZ2Q5UqbvPgJZjHUA1tYUN/ghIivXUMbQwMDn2Ud
LU0FHcVlk6Kcwtvbrsg8mWBncf04DqsIkKbzNrkvuk1S47TCnq2Am0m8HGxB
T/yy3P96mQGik9zBPFBx0Q4ESYHd7pJVAyYBdwhJAmSH+wpYwgrt1exxckGS
hOPK1g0EaI43veBJ4l1WrbA+kS0rXW7UCD124cn0k9I23t+sbgOTBbYgVsIB
3uMLaOfw+Q3oHbwR5KbscdtbYmsY8s41a+IDZUGaaSfzPPmLWwcV12VflHkL
Shx2Hle3QglS9ozGNrpyKK5oDlFc6Q80iOEPMInhDzCKpHZgiXQBTePnzw8a
KpzTc+JEJiUgAOCIo+fvb1nb0n1h/+bIPKQdC2Sfotr1HSm2oq1L4FEaz61W
Lgu7tmrqLUgiEArXnAPzwYtgqcj7lxVJd5H1ZdqwiYuIElgTtBRsTStmxbW0
mfAGwDGTuo/NZsL2T3djm+5hfH0muUgzGO3GNcAdqOHw+u01YNV0tSoyRDWO
0FeSpTsSF3hf7kpAGCQO7b6C2cDk1S6I/b5i+92SAW+Pibng2eWPQBV4llWZ
5baCSMi7DizXOlJ3sGMNqg7cBmNt4c7/BHjYgc4m8cSxkfRV5q048DUsXYAD
KZdCdvr97VyfEb0VFJNIihE+o+b8YxOgwWjrGD/A8hWZJKxvWlCurRfe25G4
TwEW3Po6K4i8pChdBWPh+IjCZrNne5zBXUEaIFJ2nnvmA4LTCEAaMUJzMRr0
kTVk69IGmIOttmpgZmexOOm0ghsoWKPvkOI4TyWT11pAxlilRvYFB2z73a5u
SGcxJIAX5d4Iw8esaMUIfvVVcu3+sy8ax/b5NbggPbhMzIQf3D4BggBvPnrz
/ub20Zz/P3n7jj5fv/jj+8vrF8/x880fzl+/9h9mcsfNH969f/08fApPXrx7
8+bF2+f8MFxNokuzR2/O//yIqfvo3dXt5bu3568fJcSXdmcQ3QHFlwJ1QdJx
29N2FkAmPPPs4up//vv0G9ByfyWexefP8gf6FvDH/cYJ0q0rMO/8J4jAfpbu
YHMaHAXgGUp20aVly0gO7EdFIBYI+fhfkTL/fpb8/TLbnX7zvVzABUcXlWbR
RaLZ+MroYSbixKWJ13hqRtcHlI7ne/7n6G+lu7n49/9YotJZnH73j9/PEP3e
ZCALRLhXNdBlaBgV7KJpLL13ulgC3MsftpUuIIQhhAIBmIYWqmHSsU1kjH0i
OgnnrGr14CzY+3Itqc3Hjweqy0zo8eMzwFUI+Niei6/SFVun6oG0FEB7FrWc
VpjqKtgrwYFJBaoiblzmQP8b0A7GRvHeCfirNK1L0ctfUMg4xz+yHSgy1how
TQRAVj0mqdjiIQzCtbQRGBIF7SGRzufml8E5nI+CQJ4LrpAJ8ht4k4wApNnA
W0u0WTAVog/YO5C/BiQxp7+zTd+I8OK0lXwe1BEQJg9cpng9YTUm9rDdg1nY
JhdX72N/WK5v3RZYmexHv90FV7lx4MYDoYd7ITtMrvrQRMW8RRNFRl2jSI3t
Pyi9M17LrYEBwYrMLcmHchWxO7x3wcZNHS20CTQu20g3cnMIYxCAJXeVxmTn
GTBdddhpBl1x65ptQeGiPa+vCxcQOeZj/c7BgVaR/h36GiLoh6Eum8BlC8KG
Y6Bfk+x6YPIsNREJHppYGabRkimhSdCOiOLK0JOr4KY2EUyEpAV0qqrGkvPk
C+7/WXIekNkBDcWkZYIKfK3F4b8h/Eo8nlYVOAYZsyrNlwI0DARTiQzA2Pwo
asOGNAlh6zh6xDgTaIKr0dAHvK6uREU2zjFCbM8edPNns0lYy2GpsxAEapHG
QCZwBXg+yLolAUDE9iRRXVpU7OQAkT4uhAK7tFDnST0pDJBQoBUmH2ANqgXe
UMDmVYtY6JfGPnAPp76/cRRvli1kl7vlaz6y5hU6xZ0mnAZyYCzHzEmnyzit
+htu25dERliYOBUOXSviVXY2/oY88a6uS/EFyMuwLkYwJLgkIM979nCuggl7
zW4A7w1ZK9oPBD9hLWKFMCxzT6PCKGlyhMRkxwkksiS68uty41IUaMRWJfCX
yrZR8WUNloJiWjiXyxC6Sp5hMOjo+vLZMVLABnsKGw8JIBamtbgFHlk7tK3X
7i4Inq5x5tdIQPGO3GneNhS20qkZwiWCyADIK8WdkPDntmblCyiQraTLhT+d
hKWUAOb1LDjodbyKpggk+uFXTZK2YdcFFfD+6vn57QtQT20LcN3HMMNCiNXh
PWY2QfmgzARB4vklF8bg3xYY9vfziYBLtI1iahK2cgbKWFvMdhEkAd1VRg4m
qNKxw12O5ltgEBZ9YGRhNdrvgy0+S27+TzZ6zjxeg4kAIpPFBKi00JlbZ9IQ
Ur9WhsAFSlDHhOhlJYFGisJasrag6ohHurTFCOehCL/FgaRNdjtWkTSRGkzZ
XsT6AtEPqqVlDy6iPpsBalrDs0fuZH0yB8VS7Ukpp7ngL5w5Go68Se8BZBwz
N6AC6phLwfdeVwQWgekbtzDbokgLeDItMRQzCDirKW+7BgmCqkqVkEWHr4tt
0TGPbdOPxbbfJlW/XWJshRbRxtqRrA2SsPNiRChVVkrzN1SHeQ+2ZIuZBDEs
lG0qo+CvAcsKV4CeN67rd7TikFxLXlR3RVNXqA/E4WkdJ1nU7WTL7aVGtPWO
oM48wcs7fHqOIRZkz1Wx7huhI5kN0rJsCEISJg404fuGaRZEjN7/wJe6MFWJ
G9nQG75d0BKigKToOMYPHi94wYpQyCDiVvjgDO6oyqwJpmQZLoJUBv2NI5Mh
2AZkLUEHIu6tUGQ2+y//b/bbxfjfbxP4hzOAf1Pf4y2zT3rTCyIbcEuSfAo3
fE/f8z9ct/77NPuEJl7t4w1pmWNzc8KfEcLBzhNYPuYHD05V/h2cavKlf3/z
b/bzl+8/+C9ex3PgKvFHbpmZvjA07vanqYXQMicJEJPgk58C7rhr+OLsEw4s
U8DEMofTPsWDfDoy830NWPJ4muTxG5PJ6VoO+/ks+QpZeIESmVAe/x8e3arE
qJh6XBlB/Eef1XWxt0oQkfySFTAQRufhiT2ZZEDSwPZnPqLqY9DIdYO4M9v2
UUSZ5T+ltwINVwEYEhQKVNInEXmzRkBWRzuqepF0z846nhStRpVcYKzVxA49
WLaBhwPmqOBQs49ZhPQgT2N6wR7xFpTMQk8AMSEqwIxw8yDQ2fRVxVFNVWPg
k9a5o2g86KscXTy45lQDmEi85CFQLzXLAggF26PhBM5d8Mx2aYe5B5o3EI5M
EE8Wdrmi6EgOSj3DNCCsWNWowPNBmEaj8ehK28SINfAWBQAgwVwapURJW86F
EBhi7iTcTJZ+j2sG3J/unZ86e7yUJWQunWSmxBEEK9oN5XSIeACT+O6dk2SF
+iTeu1RbQaAefCrgeKTsiqehSY40WfWAkOMoUVevHW0wDWaoEBAIISm3iBxb
d0f2InkGOAMdvcuru29oCvDhdzw07gWAcYrPYqqBgTBylfvoGvAQnNxA9m8Q
NU+7jdRJkDS3ZOq3PX7s1EIGREJv14y3QMDo29/pt63DUoIOnNoTwGU09pvz
P5OvmiZb8BdgC8HFoUgqE+iIIp4fU5zgPDmdP3nyBP87eufTb/11fRv5SDi8
V1MJA+08J/S1SrcFcCrwH8X+0KE9f3l5HE0SvFKRwAzmxSTJNnXr2N1yXMjD
U4CHWwns0cqIXg0W9lTJitYmISCOR2FgqQc2Ur+hTbcmyyGgxXNmu6n7EgPD
seA6yjEDOy1rDM0wzTwJldcCFuL9KbbsaS3G7to9Lm7op/XI7GUpiJJ8TeZz
cmsih/pcNXHRqlbg6hIVFCpr6KTUxDC8UdUUHMm5SCAyMcmf0DmRVQf+hLdJ
rF9jAkoJXG3uOJhG9QORGQB+rAM6Dmk82JF7jOvB62kqdTXMziGERuWsBQUy
s+Ea0LqAPSgw3VsAlzTk0FEyjRJ4cz8dUa1izZhBF7CLAp6bHgWaGApeqzHA
EDogfgO2AvPEIBIpfWHBc5TLYhWI9+iQCrSt9dPwZ80JHtSBXvexNeMQYdtF
9kysKY3s9WnwbnGBqAj3Jv8XbKAGjrxuxYtqs04w+uoqxMu8bTb14WuT4Esg
JpX1cCC+Lh0bJIYsWkKGa6rqamHqNLRQpA5JXrD6qDJa7x+mpfVryZSBquyR
0zHgAKYTNIfDcPAewD1KTNFuxTz1FTmA6FtjxcrcSHVetETtE4Bz6zXViKg/
zBNHzS3KRDeGDcyy5/t3fbOrscgAFu73FTluUaI7ySOld3UBE/jgKLyLwQHr
gBBYIJUQdlZi55LKOAeiSQQBleTWUazRB/YR5yhxTMrbExE2sivQuvh9IBPL
Pu6ClVBC0wXiTLA7gMeaskI4N44rY5oWeATxsHgo4uVQ0dntgIUCwwctavBi
HCUkT5NAELEj7F2zUM9vUJYkkEgTY8Q+JlODZVyglQK8AvS7LKqQm/P6lwzJ
zVv06QE3lKA+JP/fUu5iBYCyC1coLUvmyeVrRzYq3g2jyLEWNWMR1cXR2oQH
t0D1jFAmK8p1WS+xykRL5BjyUL0XTKqpYStIm3luS0vQpoKRCXEoD+lj+Na7
uqTiE5RSTER0KYkeyOemWOM6QK5B+Oo83f+mpVgEz5i3QpAxvm3VN7RufaHZ
RGNUCA+2g7ARZsiqNRfF1NvkFFADUOqU8YMNEe1wfIfGkykty9TcDplcVgt3
LopJmfARc+iETzetk6dhKb1YzJ43pMST6UGbz5e9wID9A0J3HHzYB7MqjMta
1cSK8A1LRykSAk2sDPqK0v3Jh6q+r+Y+D6rBDJoSzBOcDX49KhiiHEcbTfhw
iG8CC2l6zZfvPARUQgZITDIonGInSYKCokCZayQTZEOHtPUTBgaQvwsvEdlo
dynF/Qn9eGlNSletASizFNJEkAg0co6oH0Y6v1Hvj4L9Rr7G2IxJcmD3lc9j
NBMlojn41wZtKxVehPWL6kfBEjB/G4AOyb7UMH+ioLRAlQ2Apu4Fy4QUOfE/
59zYaxOvhKNYsoyrpl5hLOzKWwssL42xki5ZAqRzqe/TeRlDIzAP9tnXCp9h
Sf5LgtcBWHGgVZG1MXEnyUvKygayWfuHUYZ2i8CA8ToAwg/AgzRieDWNjSJF
VUg+DQA4o2fOPk9On363WO7JZaYRQKhegzvaJF/jImtKemP8PqMC1xrAwEny
jvQZv5iqu8R3kdTrhlIdEjWV+tUxr6CzVIFta8UCkhXZuDRnRc25UEk70bKo
EicysphjX/j9IzRNGUZciNWJydHV1c3xlwhqpykkZG+PStgpgs5BD0F0Qq+m
KTDHw8UV23QXBxaGBH9GAX59jSVbiLW3GcAm0IYtU4JkhK2TkoDxM+EzdQBZ
mHWqC5pqJHMUwI8IJkj+S3SJxzQSYEqQTnBcwTTsySDo4X29vPJ+7MBNEF9E
0t7gt4PVyJnpkBTplvaQ/CZVp7xNdnjvI2s2TB41AqhsxJnRUBoZjSwFBQXu
T9+yAyBDiyAQQlEPX6qQogVNsKiPtBVcOSNlirBf6mH3lWwL0ILkSim8TbsM
ozuTRoXKS16guTOqBN8FnG4hmyHDBNOgAjxYWv5gVoTluPxiZYpBz2iXwFab
HAfnMqybhAqNWR1hV1wN43s7hHIhJmTyI76YUzBIqtW3wZyZIsqhRbOr0XJw
LK/CWRr/cFD62Zq6ejOVuT7KBmBP92KZLJdQR2Wi+JXcLcJHZZvWa/SlwQ+X
HZFJeyW7c4EMl2sz3Wz2zK0wRoiwKnKmqMoi/SD1ksHd1nTvvStL708AA6f5
XqsDKkkZRnFm9Y1hg9mBDtFKsZkStE02KRbvhQQ+ajJwbbINEBGrX0LnkqmM
kzw1PutxHdqy+7TZLkCNYANILUFOGHfr+4YcOmwEfZAE4A/nC5gYYBUtX5fC
HfJ0C/QCTTWZuO1AYXItpRqQirbDTVWNECY4y+TF4uxeqseukKdk53lOnLkg
QxHccfcRZZWypbwY4NslGJx1g2U6nHgmHEk1BigGEoegpN0COL9lrG+3+e+4
CQZZQJCXDyOMvXuK5VqXfDLKhag5K+vsg7pR3GiDadkOtrDr+BqKP+k2YP8C
h19EnF3n0RQQM26p9KkwSVC2ENJeVXQmA3khXEFRZRKtCy9asFma0IxUE7ck
UPxEcacyl2Reh0LqtXiIEHKQgfWVnz6CfCmJMmXp3umRa1wiQFWp5+SHiHCp
XJA+l8Q/DCAFhABtU5w1qomiLNUlhxc0haOuAxzOhK+C1vMTQDm6vny2KCR0
FQfko1CxMWMGaPw/4tzsknH0XPR6zdkssK1IpvOXlwtftadTpoW9ZC8Ia33l
8ly2SHnKl14BP3FvHupehlTBVqQ4fGhUUfOk/lQYTAIkXltRharERSOvXPSh
T/cA76vHgpWvxXqzoN7FFiMs7OYHrYa9zzSXOLXWqm9p2S24S1PvQi+uZ327
kOoct4LFbxAsEFJgxm5tg0JULBKIZOLH2LzZOewNHjaV6BpYo/hwFaguTCUi
xBnYSQPcSXjfmGYXU7ZLduoQihC/AhV/WUr0j1tjxNRPmUwukhK/diN9qmjk
tJaxjQqWTBUb5cqqrG52WDaCXNGhyq29RdIKNh2K+Eh1bFwGi5Eoinoesuwh
g0htuZLJDFshHT/zYYWVvz5V8D1nlQIeF1aYcSjfFEVPhzwOl1JJWO0uLUqW
XxwgFAs2jprf6M2ojR4/uJLHj9U+taHYnmTKVspxg5YMcmRZH/gB2d3WL+Ku
Ii5g2KHFfAWVw0o32rCyT+JKvAsYbe38hBwC8cK2WXF0r2v6jO8ieF+u6wZk
Y9v6JY93Z3KtowrBNNTkfalYUKAKVUNhaQCWdqIfR0ERqQQPema6rPBQLeHc
JtpHBKMg5UO9wpaekuJvQyv7aA/cbg5aQ3m/20ifQDPiZ+GgeZyQh+WDjjVy
qZntO0DBre86B874TSTnDnU4c+mU4MCGjfJLfgdRiZS1zJZyhYCeSd/rMNrM
8XC6EhUZR7HyfpShZDqK3tJitEHBW3CvfNcMqyg6MsBk/iPoLO2Jjx8PdQMs
uiCXwS9McA3FO8mJBGWoaFpb5Lpxj2A3rvvgcs2dxr5pZqEhRIP8iKmAsDKH
Ajb78eOpDkGYqUGM4CzAKiTYxVnyWMdypoYLFHgwNcp3pn28TUJdovabnHcM
SPvtoFdVXWuxRKJV5fyHQTEr+4n2hnFVq4mGIOHRBWgZCCDTy2OEzCh0OjkX
2iadkH2/BhiHNbB6nVmVr1Oey1Rpq2kDLE37D/y99zWwxsGYS4PV2BTAZqUr
TLVa55CqYeba2FpYvVg5l5uoqCTjG7eIvXD2GKVsusGo29A08lT43WJIoteT
xqTaHFgrCEGsu7zoNfUS0E2lRFMiBOnjWopVCg6i8Bi5lT6ShwAWHE+wSfOh
ZNnkvIxDmBEDGBQI8IMgS4YzCZCd/fvvudrAG1iUKIWaChrhuTU7rKdPnixC
EslGSv36rdt757CD6V7DU2PJRkcUsCdaoiRdNj1YLezA5cICORiCVRFQOckB
B1Gq0Ce5qFBfHUUEiA03zLa+P43r702sijy/dhSi4pANm5q8APvQ25Z7cdU0
Kh7pr2H7rVWsKRfg6aK1DZlRNR+nELrr09B2xL7GRPk5SVWx826M2j7uW1hj
dBNBWGDkeXQKDKlOOvrFal1NRnODn2dYZinxg0jkAbshX9ngV/BjuederLAv
lm59BfXILQ5NusCxQLG+5bjiV8mDDSvJz1y+iScxMSN8Rkv8TjvmTS+fTIYf
89kwDbwHiHj1/L1BgtrqYuE85b2J20JBCuMCj0xw0lyV5o0k7XDoxgdlplWl
B5EtRdTQ86YO2bK+B0//9ATIgG60SYGyfkpmVMG7SF549WY1lYJDLd4b5wL9
APACkjqbddXot38pGX0TXwB18CHhYwvMVDgqi3bBR/FMsC3xNcezp1h7RpXn
xt8zkN0P+VK3Tic+p4A/zhFIR0dLDN1a9uX3jEMplYsT1dfLuOcl7R1yDerX
lbgFC/F/tdJd3kWljRpmgf0oU6lURPbR4ksixNcn1EEDe7Ld0bfUiCaHloVl
XRMk4oV9xLosaTLqQjpBhACGIG2FyT14/JuT5A0DF9YvyoIDpniP9Y98XAcy
b9MHxXt0fnU5Ty5eX5Lm9OHDY24n63BPJqeFy/cxgoAueZ4D6lJtVrP1z2Gw
aR/itmme8xEhWzCJ7Aj4+Avcg6fxZJKqm317klxIs0kQaL/OWyPlhQEDGpbI
nJeELrRyqTAY4g67vXTmocbNlE3RxH53AruIuQY+94dKf0pLJs5yBuEgufn2
yQcWHvyI/4vvPX1DokTD/q0fFv5TgjLr4HbJFO64qZq1aBgZD97pudmi0uQC
te4SC0g9QuLV67j/zOrVYD8O69bDngzVh3sXmoCCOFhjJwaVsKaorNfju6Af
0qJjZ3msPnEcUKBBQXpt5kmHzQWMim2ok/fvS5FOWkZuY1f/jyjnXM6VmQxm
Hg/ES/QsJr7iRkTVd34147gFLB2U7w0p+DgtJWUfqOKCbSiyD6FUxb9MjQCu
4nhUq4KCqKFUXYrnb1SQEjs0kxuq/TekwpmBTBWrLV4RH3gE3I+IvxffS5gB
6SofF9/TV8eqSIOiHvV+mtCHn9O5x7jSuz3EIp9VYVkNbebHbGXElo5CI/AD
Hg36BqPFIEN5Y2wFxHDD21rgaOfXw528lj00khlUdm44AnSZKpFJFTshbb9O
244ITPXT7IvlD3bajvUGcTrZEVnIiYAKoztJdvV4Ic2NjJQxKFYlECll+xcJ
aKSSpNDNNsv6YI+0hx3UmFPBonAyWCvV2T6CILsvlXpzc1AhBXywp5iOq8kG
c0FXiQ87ekh/DiczClkd0qQjKDoBQk0ugYqn8h8BKwHDRDCRvpbjpMSejtWb
eHJc2lfaGQ+0mKxSOD3I16aurcIgJUKF/oTJ2IiSa0BqAQnrdQaSJWrF5bpQ
xAFDLcbhsLBdYQI3TirHGNzIIRSJdNAP4RxSyzuQQAoZ5B31KiPDg7Kh2uJQ
PyL8MSBfUGvDLpuAW7FxRdYa8TgqL2FZyyFWZ0Wd4k1yg0GL9kzP6Io43Dcu
cGRDai8HakoxhKqDNlYUekCMVv2GXRljzsHUXlT5lyaGO0SzoGo/DWpdXz4j
kXsJ/8/lDDaqcGKxH/GeFyCfCI+AOAcXM1NaryW4VP0Ssrx6mEoAguglnBld
TBn/2m/s5CYOWEHcW3RtlyQo2P2k5/f8NKTic6/KWy7Rx1gn1+gv/Op0yuyy
447Y9m6KsiQjVfhCIyeURDORk0hXD+DzF7EgSZ95eSgEmlJgr7SBJioRT8U8
nA3twwisn77x5Hp6wuCR2EOGCumEkav9NVfM2bvnCifOvLMbdGeoZxHsgMmx
eKuCJEwFE45Is02AlCGYPICmR88bx2A0xs2hUxem7xsGuqVMBNcs/kuoFhqM
cHUgx3goVq0iPT6vQc6C8GnJE9LG4phLvKu1ypxdeIxEtmMpq/F8y+jqm8KH
VdsxxXWOWZNitp00r6iUyN2jmAuhQHTfW1Yj8eFbIoJ0HgSt+YbjdhLtHIgh
f7ngNmZRvRzuswdrydEKobLMv3Iet76YOHrF8f4vSWvm52mPiTgoss982IvC
ibHn5vsDgtoQ1wTldeQ2eZDyl/L4rDbwusUee0Zr1cZiE/4hUqc5V0xRGARc
peID6qCHAu2BGzlyrq8403OVsY49RhtI9m9hQExIcRdbcLKopkYGgMm0GpJR
xClsYKvkec2ozhQncCuFxy1j5YQ1aGW9xqWCl9M7OejssAqi5uahKTOqgyll
znMxpQXyncRpf7k2GLxtCJFzt27SPJ1QaSjmXPlGOYq4w9vfC5qFfsHAyooh
lmvQ2yWAygdMl9SigFUkvhrMuwWtNMr5wb/1aou0ieRGOLznC+p8U7UEgiZO
1TnovfhcqtHs4QhXDN/T7DR/8wsiOFGkWnjnYT0QhFcSt5T3p3OwglQYHsEg
DN5yhPIE2iDZtr504mHz1XOpRANsq2fyjW2PH/8vYHjCG/6QNvl92jifssVq
xvjGp0Hy/CmLmlbx41zm7F+ZoV/6EDZY2ZNYUEkFfEEChkluy09fewac2gpK
LmQ9ODFtRMz0gzH2C3ahDh7fFOUrbHLbjP4X2gtDhPNqD+tE/M6nj7ZY+iph
dJ8H+Wtc5Vzn7D5u0p5ycT7OdKnFBXimXrup8aDvsJqafuaAa6QsLGaV41o5
/chivCVWKLvWZztDn6iilGcqVNMgwGPxQ0lqWGsABjiwJgA1GVjTqXMmVf2Q
uE+f1nxA0HUJnGunASfCDHKOk1EkQ9dH3HN4f0a1qHZUPmrc4HOr0DWBKL5W
40IBgXnd/EAF25wfZKdrFM/TnbMn7a9Fl6No86GBnsDx6rXj3TRIdve1TAqP
GWyTIz1/5reCGeDzUNvxaZzsNvDNOIKcZjBh4IQeYVJ6vC3VtfmVYASeaiZa
mFm28aOg50Mng7gJnwa5Oc71hTwf7Zwm365p3/5IUVKbcJuaqGo6yWtOuE4X
EksbdYWoOymqCoV3nF80UhqFegzws1nJq+fvh/BCpzqVBB4A6vjBS3+2RJ1l
acv1XCAIctY3VZVo+oxerN3kRtIZAqC3Icf9ROeAz2bn1ej0I+kD4l4l7WB1
Vd2vN5gzTDF9XkvvcDj4nz1yrhals1RhZyaOjA3UB7bWumP7QltVHfomHRdW
6CnNt8NDz7jGYMd1XxwGsoNqg00oABXTSzapXnX0BwoVd9toT8GSEht6oBuX
cpbRCX/RqW7xOQJ6KlxNwTg6syh0soWTnQflElPT9xXCtjFGDt1AC8jF+uQ9
Tng1sT8T6vTn2lTbht/A0OLN+MBlreyx5ef39Gsp41pbvzBbvC2dMWSNcMN9
akXqqKKdx7olKtQp5FC8ZNkUbmX3WAOFtgzJG6K5faW2mks5R7rE35TAE30A
hWf8eyZbqhmWAhtfyZcCNwA0nOuJDrL9mqa9+v23qPB77D1IXoE3vfE9rrIO
rk7L4HXeQTthyjzXRK1UB1c1CHXhiDwIQnwiN9SyeDH3ANp3Emp1jS0TknpO
hYps2eXkxg61Cf7ETR7trW+juU/DqT7hvO9+S8YD6C7naR5kVt1GshbUN9Oa
h5GqIEKDsOBUnZK8SDu8sB/LHlZpfk8kOttcqmpD28Zsdln5E0/mEl8eTXeO
WxuVcU41bVODdqRAsDUJvC+qlG+yTYElFcSDghBZSqfUDPd0XmiN4uiIyOjs
C/NzXtKiO9BOVjr4jji4Gq6FNhW+ZrTH/JcGReyxUjTI8LyoIxxjTgNwxQdG
H475hW/9cVr24E+ZDP7SCsXsI1pQ9/CV1VRG0+Lvr4wb8I/AmcI6vmMrF6tB
b/2cumG5ZedXHF/EOcjZwpzTygfUiZ/QOKp2kni9LHvKX/Xl/crhcS3v6D4e
6U1UfLlzXSFp7F7qhnzwmdXrmOnFqhM7s542Gnf8Eu56leu/WkEOd51WFRp0
RMt8BbAK1BLK/7Ap9f/0K0zmlyUe+I0tLH3F86xgeDz8uu7MgV6YRDHHInGh
GWhiac5rdbr2ZCTU+Hw+Dj5915cYJtRjF21/FJ1+KA6w5ANB57lyxaWW0dLo
dGQeYfg7d1iLk1CmLzot0RRm2tbnk+Qt8BUeNlNyl4Q52s4vJ4uoL0U+kulw
H+Eu0/ckP8YSJe84shP9MkvU5nQy0b2lB07RGXbVXV3eRV3krLn9cShF+BEG
WPpeu+PCIffcqUeKCH9HakWNDvo7LWyif9Tjyv2PBfIpNjgOSkpLUScMjg6/
pWpw3mo8D9GfgclJDdxQnAEmUsKp7uGwX8pBhZZ7PYRXFWqu3U1pubivG3Qz
GzpNBgjmf4jI7K6ZkT2SwjMC0eCWhjjnIfjwborn4FG+lK7Ec5GbegfOqaCU
G7DAJU5sHi3PVsqGX3LQLTfnWxZ6tj25Z1wti3o4zl9yGVRDP1bCBydl+AOt
Tk+gWZxHJxXA/X4PSwwoDTfPC3iNEVMpWo6OegtHqfCZWx5FkbWp5FwxbFjG
G62DoQUX/ifL1IW+k1+KExIg70pXJGV8TAu/ltlj8b6xwhTzoTaXBZ9tGR3v
hO2XfC4moQEkDloQ+vHBYdgsKlp8pz9epTiLZhVOCvE58XC2gnpHBWvCOBMQ
n4AotkopLz/mGE5l6yuvpeSEuCiqTr+oef72/AuqHmtIqzrhonaO+OFT8uOc
mGrAgc4zrIMr8dQw9mp//gq++YwHALN6c/k/PFqlZevwWN83OHU8mesDicKr
Ami727nkZQGwGKSGJOxiQ2eh4c/hFq7V5jf8oUmsYHf3dNqEOU2i7ddSk9+e
zP4Xz32kA/F4AAA=

-->

</rfc>
