<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.35 (Ruby 3.4.9) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-illyes-aipref-cbcp-04" category="info" submissionType="independent" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.32.0 -->
  <front>
    <title abbrev="cbcp">Crawler best practices</title>
    <seriesInfo name="Internet-Draft" value="draft-illyes-aipref-cbcp-04"/>
    <author initials="G." surname="Illyes" fullname="Gary Illyes">
      <organization>Independent</organization>
      <address>
        <email>synack@garyillyes.com</email>
      </address>
    </author>
    <author initials="M." surname="Kuehlewind" fullname="Mirja Kühlewind">
      <organization>Ericsson</organization>
      <address>
        <email>mirja.kuehlewind@ericsson.com</email>
      </address>
    </author>
    <author initials="A." surname="Kohn" fullname="AJ Kohn">
      <organization>Blind Five Year Old</organization>
      <address>
        <email>aj@blindfiveyearold.com</email>
      </address>
    </author>
    <date year="2026" month="April" day="09"/>
    <abstract>
<t>This document describes best practices for web crawlers.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/garyillyes/cbcp"/>.</t>
    </note>
  </front>
  <middle>
<section anchor="introduction">
      <name>Introduction</name>
      <t>Automatic clients, such as crawlers and bots, are used to access web resources,
including indexing for search engines or, more recently, training data for new
artificial intelligence (AI) applications. As crawling activity increases,
automatic clients must behave appropriately and respect the constraints of the
resources they access. This includes clearly documenting how they can be
identified and how their behavior can be influenced. Therefore, crawler
operators are asked to follow the best practices for crawling outlined in this
document.</t>
      <t>For the purposes of this document, a crawler is an automated
HTTP <xref target="HTTP-SEMANTICS"/> client that retrieves resources across one or more web
sites without direct human initiation of individual requests. A crawler
discovers URIs during retrieval and schedules them for later processing. It
relies on algorithmic prioritization and protocol-level instructions such as the
Robots Exclusion Protocol <xref target="REP"/> to govern its behavior.</t>
<t>To further assist website owners, the creation of a central registry in which
website owners can look up well-behaved crawlers should also be considered. Note
that while self-declared research crawlers, including privacy and malware
discovery crawlers, as well as contractual crawlers, are welcome to adopt these
practices, they may, due to the nature of their relationship with sites, exempt
themselves from any of the Crawler Best Practices, provided they give a rationale.</t>
    </section>
    <section anchor="recommended-best-practices">
      <name>Recommended Best Practices</name>
<t>The following best practices, which are already applied by the vast majority of
large-scale crawlers on the Internet, should be followed:</t>
      <ol spacing="normal" type="1"><li>
          <t>Crawlers must support and respect the Robots Exclusion Protocol.</t>
        </li>
        <li>
          <t>Crawlers must be easily identifiable through their user agent string.</t>
        </li>
        <li>
          <t>Crawlers must not interfere with the regular operation of a site.</t>
        </li>
        <li>
          <t>Crawlers must support caching directives.</t>
        </li>
        <li>
          <t>Crawlers must expose the ranges they are crawling from in a standardized format.</t>
        </li>
        <li>
          <t>Crawlers must expose a page that explains how the crawling can be blocked, whether
the page is rendered, and how the crawled data is used.</t>
        </li>
      </ol>
      <section anchor="crawlers-must-respect-the-robots-exclusion-protocol">
        <name>Crawlers must respect the Robots Exclusion Protocol</name>
<t>All well-behaved crawlers must support the REP <xref target="REP"/> to allow
site owners to opt out of crawling.</t>
<t>In particular, if a website chooses not to use a robots.txt file as defined
by the REP, crawlers also need to respect the <tt>X-Robots-Tag</tt> HTTP response header.</t>
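As a minimal, hypothetical sketch of the rule-matching part of REP support (using Python's standard urllib.robotparser; "ExampleBot" is an invented product token, not a real crawler):

```python
from urllib import robotparser

# Hypothetical rules a site might serve at /robots.txt.
robots_lines = """\
User-agent: ExampleBot
Disallow: /private/

User-agent: *
Disallow:
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(robots_lines)

def may_fetch(user_agent, url):
    """Return True only if the parsed REP rules permit fetching url."""
    return parser.can_fetch(user_agent, url)

print(may_fetch("ExampleBot", "https://example.com/private/data"))  # False
print(may_fetch("ExampleBot", "https://example.com/index.html"))    # True
```

Note that RFC 9309 additionally specifies caching of robots.txt and handling of unavailable or unreachable files, which a production crawler must also implement; the sketch above covers only rule matching.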
      </section>
      <section anchor="crawlers-must-be-easily-identifiable-through-their-user-agent-string">
        <name>Crawlers must be easily identifiable through their user agent string</name>
        <t>As outlined in <xref section="2.2.1" sectionFormat="of" target="REP"/> (Robots Exclusion Protocol; REP),
the HTTP request header 'User-Agent' should clearly identify the crawler,
usually by including a URL that hosts the crawler's description. For example:</t>
        <t><tt>User-Agent: Mozilla/5.0 (compatible; ExampleBot/0.1; +https://www.example.com/bot.html)</tt>.</t>
<t>This is already a widely accepted practice among crawler operators. To remain
compliant, crawler operators must include a unique identifier for their crawler
in the User-Agent string, to be matched case-insensitively, such as
"contains 'googlebot' and 'https://url/...'". Additionally, the name should clearly
identify both the crawler owner and its purpose as much as reasonably possible.</t>
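The convention above can be sketched as follows; the helper names and the "ExampleBot" token are illustrative, not prescribed:

```python
def crawler_user_agent(name, version, info_url):
    """Compose a User-Agent value that names the crawler and links
    to a page describing it (the widely used "+URL" convention)."""
    return "Mozilla/5.0 (compatible; {}/{}; +{})".format(name, version, info_url)

def mentions_crawler(user_agent, token):
    """Site-side check: match the crawler's unique identifier
    case-insensitively, as the practice above requires."""
    return token.lower() in user_agent.lower()

ua = crawler_user_agent("ExampleBot", "0.1", "https://www.example.com/bot.html")
print(ua)
```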
      </section>
      <section anchor="crawlers-must-not-interfere-with-the-normal-operation-of-a-site">
        <name>Crawlers must not interfere with the normal operation of a site</name>
<t>Depending on a site's setup (computing resources and software efficiency) and its
size, crawling may slow the site down or even take it offline altogether. Crawler
operators must ensure that their crawlers are equipped with back-off logic that
relies on at least the standard signals defined in <xref section="15.6" sectionFormat="of" target="HTTP-SEMANTICS"/>,
and preferably also on additional heuristics such as a change in the server's
relative response time.</t>
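A minimal sketch of such back-off logic, assuming Python and treating 429 (defined in RFC 6585 rather than the HTTP core semantics) and 503 as the overload signals; the function name and the ten-minute cap are illustrative choices:

```python
def retry_delay_seconds(status, retry_after, attempt):
    """Return how long to wait before retrying a URL, or None if no
    back-off is needed. A numeric Retry-After field value takes
    precedence; otherwise use capped exponential back-off."""
    if status not in (429, 503):
        return None  # no overload signal, proceed normally
    if retry_after is not None and retry_after.isdigit():
        return float(retry_after)  # honor the server-specified delay
    return min(600.0, 2.0 ** attempt)  # cap back-off at ten minutes
```

For example, a 503 response carrying "Retry-After: 120" yields a 120-second pause regardless of the attempt count.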
<t>Therefore, crawlers should log the URLs they have already visited, the number of
requests sent to each resource, and the HTTP status codes of the responses,
especially when errors occur, to avoid repeatedly crawling the same resource.
Using the same data, crawlers should, on a best-effort basis, crawl a site at
times of day when it is estimated to have fewer human visitors.</t>
<t>Generally, crawlers should avoid sending multiple simultaneous requests to the same
resource and should limit their crawling rate to prevent server overload, following,
where possible, the limits outlined in the REP. Additionally, resources
should not be re-crawled too often. Ideally, crawlers should restrict the depth of
crawling and the number of requests per resource to prevent loops.</t>
        <t>Crawlers should not attempt to bypass authentication or other access restrictions,
such as when login is required, CAPTCHAs are in use, or content is behind a paywall,
unless explicitly agreed upon with the website owner.</t>
<t>Crawlers should primarily access resources using HTTP GET requests, resorting to
other methods (e.g., POST, PUT) only if there is a prior agreement with the publisher
or if the publisher's content management system automatically issues those calls when
JavaScript runs. Generally, the load caused by executing JavaScript should be
carefully considered or avoided altogether whenever possible.</t>
      </section>
      <section anchor="crawlers-must-support-caching-directives">
        <name>Crawlers must support caching directives</name>
<t>Crawlers must support HTTP caching as defined in <xref target="HTTP-CACHING"/>,
which removes the need for crawlers to repeatedly access the same URL.</t>
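In practice, this means at least issuing conditional requests for revalidation. A hypothetical sketch follows; the helper name and the cache-entry layout are assumptions for illustration, not taken from the caching specification:

```python
def conditional_headers(cached):
    """Build request header fields that let the origin answer
    304 (Not Modified) instead of resending the full resource."""
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

print(conditional_headers({"etag": '"abc123"'}))
```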
      </section>
      <section anchor="crawlers-must-expose-the-ip-ranges-they-use-for-crawling">
        <name>Crawlers must expose the IP ranges they use for crawling</name>
        <t>To complement the REP, crawler operators should publish the IP ranges they have
allocated for crawling in <xref target="JAFAR"/> format, and keep this information reasonably
up-to-date, according to the specification.</t>
<t>The resource containing the IP addresses must be linked from the page describing
the crawler using the <tt>client-ranges</tt> relation. To facilitate efficient machine
discovery, this relation should be provided via an HTTP <tt>Link</tt> header field or as a
<tt>&lt;link&gt;</tt> element in the page's HTML metadata section. For example:</t>
<t><tt>&lt;link rel="client-ranges" href="https://example.com/crawlerips.json"&gt;</tt></t>
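To illustrate the verification side, a site owner could check a visiting IP address against the published ranges roughly as follows. The JSON field names used here ("ranges", "cidr") are assumptions for illustration only; the actual schema is defined by the JAFAR specification:

```python
import ipaddress
import json

# Hypothetical published document with illustrative field names.
published = json.loads(
    '{"ranges": [{"cidr": "192.0.2.0/24"}, {"cidr": "2001:db8::/32"}]}'
)

networks = [ipaddress.ip_network(entry["cidr"]) for entry in published["ranges"]]

def is_declared_crawler_ip(addr):
    """True if addr falls inside any range the operator published."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in networks)

print(is_declared_crawler_ip("192.0.2.10"))    # True
print(is_declared_crawler_ip("198.51.100.1"))  # False
```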
      </section>
      <section anchor="crawlers-must-explain-how-the-crawled-data-is-used-and-how-the-crawler-can-be-blocked">
        <name>Crawlers must explain how the crawled data is used and how the crawler can be blocked</name>
        <t>Crawlers must be easily identifiable through their <tt>user-agent</tt> string, and they
should explain how the data they collect will be used. In practice, this is usually
done via the documentation page linked in the crawler's user agent. Additionally,
the documentation page should include a contact address for the crawler owner.</t>
        <t>The webpage should also provide an example REP file to block the crawler and a method
for verifying REP files.</t>
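For example, the documentation page could suggest the following REP file, with "ExampleBot" standing in for the crawler's actual product token, to block the crawler site-wide:

```
User-agent: ExampleBot
Disallow: /
```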
<t>If the crawler has exempted itself from any of these best practices, the
documentation page should describe the reason for that exemption.</t>
        <t>All endpoints hosting identification, documentation, and IP range data must be
publicly and highly available, and served with minimal latency for programmatic
access.</t>
      </section>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
</section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TODO Security</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-normative-references">
      <name>Normative References</name>
      <reference anchor="JAFAR">
        <front>
          <title>A JSON-Based Format for Publishing IP Ranges of Automated HTTP Clients</title>
          <author fullname="Gary Illyes" initials="G." surname="Illyes">
            <organization>Independent</organization>
          </author>
          <date day="30" month="September" year="2025"/>
          <abstract>
            <t>   This document defines a standardized JSON format for automated HTTP
   client (e.g., web crawlers, AI bots) operators to disclose their IP
   address ranges publicly.  A consistent, machine-readable format for
   IP range publication simplifies the task of identifying and verifying
   legitimate automated traffic, thereby decreasing maintenance load on
   website operators while reducing the risk of inadvertently blocking
   beneficial clients.  This specification codifies and extends common
   existing practices to provide a simple yet extensible format that
   accommodates a variety of use cases.

            </t>
          </abstract>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-illyes-aipref-jafar-00"/>
      </reference>
      <reference anchor="REP">
        <front>
          <title>Robots Exclusion Protocol</title>
          <author fullname="M. Koster" initials="M." surname="Koster"/>
          <author fullname="G. Illyes" initials="G." surname="Illyes"/>
          <author fullname="H. Zeller" initials="H." surname="Zeller"/>
          <author fullname="L. Sassman" initials="L." surname="Sassman"/>
          <date month="September" year="2022"/>
          <abstract>
            <t>This document specifies and extends the "Robots Exclusion Protocol" method originally defined by Martijn Koster in 1994 for service owners to control how content served by their services may be accessed, if at all, by automatic clients known as crawlers. Specifically, it adds definition language for the protocol, instructions for handling errors, and instructions for caching.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9309"/>
        <seriesInfo name="DOI" value="10.17487/RFC9309"/>
      </reference>
      <reference anchor="HTTP-SEMANTICS">
        <front>
          <title>HTTP Semantics</title>
          <author fullname="R. Fielding" initials="R." role="editor" surname="Fielding"/>
          <author fullname="M. Nottingham" initials="M." role="editor" surname="Nottingham"/>
          <author fullname="J. Reschke" initials="J." role="editor" surname="Reschke"/>
          <date month="June" year="2022"/>
          <abstract>
            <t>The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems. This document describes the overall architecture of HTTP, establishes common terminology, and defines aspects of the protocol that are shared by all versions. In this definition are core protocol elements, extensibility mechanisms, and the "http" and "https" Uniform Resource Identifier (URI) schemes.</t>
            <t>This document updates RFC 3864 and obsoletes RFCs 2818, 7231, 7232, 7233, 7235, 7538, 7615, 7694, and portions of 7230.</t>
          </abstract>
        </front>
        <seriesInfo name="STD" value="97"/>
        <seriesInfo name="RFC" value="9110"/>
        <seriesInfo name="DOI" value="10.17487/RFC9110"/>
      </reference>
      <reference anchor="HTTP-CACHING">
        <front>
          <title>HTTP Caching</title>
          <author fullname="R. Fielding" initials="R." role="editor" surname="Fielding"/>
          <author fullname="M. Nottingham" initials="M." role="editor" surname="Nottingham"/>
          <author fullname="J. Reschke" initials="J." role="editor" surname="Reschke"/>
          <date month="June" year="2022"/>
          <abstract>
            <t>The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems. This document defines HTTP caches and the associated header fields that control cache behavior or indicate cacheable response messages.</t>
            <t>This document obsoletes RFC 7234.</t>
          </abstract>
        </front>
        <seriesInfo name="STD" value="98"/>
        <seriesInfo name="RFC" value="9111"/>
        <seriesInfo name="DOI" value="10.17487/RFC9111"/>
      </reference>
      <reference anchor="RFC2119">
        <front>
          <title>Key words for use in RFCs to Indicate Requirement Levels</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <date month="March" year="1997"/>
          <abstract>
            <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="2119"/>
        <seriesInfo name="DOI" value="10.17487/RFC2119"/>
      </reference>
      <reference anchor="RFC8174">
        <front>
          <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
          <author fullname="B. Leiba" initials="B." surname="Leiba"/>
          <date month="May" year="2017"/>
          <abstract>
            <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="8174"/>
        <seriesInfo name="DOI" value="10.17487/RFC8174"/>
      </reference>
    </references>
<section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>TODO acknowledge.</t>
    </section>
  </back>

</rfc>
