<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.34 (Ruby 2.6.10) -->
<?rfc docmapping="yes"?>
<?rfc comments="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-haynes-nfsv4-flexfiles-v2-03" category="std" consensus="true" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="Flex File Layout v2">Parallel NFS (pNFS) Flexible File Layout Version 2</title>
    <seriesInfo name="Internet-Draft" value="draft-haynes-nfsv4-flexfiles-v2-03"/>
    <author initials="T." surname="Haynes" fullname="Thomas Haynes">
      <organization>Hammerspace</organization>
      <address>
        <email>loghyr@gmail.com</email>
      </address>
    </author>
    <date/>
    <area>General</area>
    <workgroup>Network File System Version 4</workgroup>
    <keyword>Internet-Draft</keyword>
    <abstract>
      <?line 74?>

<t>Parallel NFS (pNFS) allows a separation between the metadata (onto a
metadata server) and data (onto a storage device) for a file.  This
document defines the Flexible File Layout Type Version 2 as an
extension to pNFS that allows the use of storage devices that
require only a limited degree of interaction with the metadata
server and use already-existing protocols.  Data protection is also
added to provide integrity.  Both client-side mirroring and
erasure coding are used for data protection.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 85?>

<t>Discussion of this draft takes place
on the NFSv4 working group mailing list (nfsv4@ietf.org),
which is archived at
<eref target="https://mailarchive.ietf.org/arch/search/?email_list=nfsv4"/>. Source
code and issues list for this draft can be found at
<eref target="https://github.com/ietf-wg-nfsv4/flexfiles-v2"/>.</t>
      <t>Working Group information can be found at <eref target="https://github.com/ietf-wg-nfsv4"/>.</t>
      <t>This draft is currently a work in progress.  It needs to be
determined if we want to copy v1 text to v2 or if we want just a diff
of the new content.  For right now, we are copying the v1 text and
adding the new v2 text.  Also, expect sections to move as we push the
emphasis from flex files to protection types.</t>
      <t><em>As a WIP, the XDR extraction may not yet work.</em></t>
    </note>
  </front>
  <middle>
    <?line 104?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>In Parallel NFS (pNFS) (see Section 12 of <xref target="RFC8881"/>), the metadata
server returns layout type structures that describe where file data is
located.  There are different layout types for different storage systems
and methods of arranging data on storage devices.  <xref target="RFC8435"/> defined
the Flexible File Version 1 Layout Type used with file-based data
servers that are accessed using the NFS protocols: NFSv3 <xref target="RFC1813"/>,
NFSv4.0 <xref target="RFC7530"/>, NFSv4.1 <xref target="RFC8881"/>, and NFSv4.2 <xref target="RFC7862"/>.</t>
      <t>To provide a global state model equivalent to that of the files
layout type, a back-end control protocol might be implemented between
the metadata server and NFSv4.1+ storage devices.  An implementation
can either define its own proprietary mechanism or it could define a
control protocol in a Standards Track document.  The requirements for
a control protocol are specified in <xref target="RFC8881"/> and clarified in
<xref target="RFC8434"/>.</t>
      <t>The control protocol described in this document is based on NFS.  It
does not provide for knowledge of stateids to be passed between the
metadata server and the storage devices.  Instead, the storage
devices are configured such that the metadata server has full access
rights to the data file system and then the metadata server uses
synthetic ids to control client access to individual data files.</t>
      <t>In traditional mirroring of data, the server is responsible for
replicating, validating, and repairing copies of the data file.  With
client-side mirroring, the metadata server provides a layout that
presents the available mirrors to the client.  The client then picks
a mirror to read from and ensures that all writes go to all mirrors.
The client only considers the write transaction to have succeeded if
all mirrors are successfully updated.  In case of error, the client
can use the LAYOUTERROR operation to inform the metadata server,
which is then responsible for the repairing of the mirrored copies of
the file.</t>
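      <t>The client's mirrored I/O behavior described above can be
sketched as follows (illustrative pseudocode only; operation arguments
are elided):</t>
      <figure anchor="fig-mirror-sketch">
        <name>Client-Side Mirroring Sketch</name>
        <sourcecode><![CDATA[
read(block):
    m = pick_mirror(layout)          ; any available mirror
    return READ(m, block)

write(block):
    for m in mirrors(layout):
        status[m] = WRITE(m, block)  ; every mirror must be updated
    if all(status == OK):
        return OK                    ; the write has succeeded
    LAYOUTERROR(...)                 ; report the failure to the MDS
    return error                     ; the MDS repairs the mirrors
]]></sourcecode>
      </figure>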
      <t>This client-side mirroring provides for replication of data but does
not provide for integrity of data.  In the event of an error, a user
would be able to repair the file by resilvering the mirror contents,
i.e., picking one of the mirror instances and replicating it to the
other instance locations.</t>
      <t>However, lacking integrity checks, silent corruption cannot be
detected, and the choice of which copy is the good one is difficult.
This document updates the Flexible File Layout Type to version 2 by
providing error-detection integrity (CRC32) for erasure coding: data
blocks are transformed into a header and a chunk.  It also introduces
new operations that allow the client to roll back writes to the data file.</t>
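      <t>For illustration, a chunk header carrying such a checksum might take
the following form.  This is a sketch only; the structure and field names
here are hypothetical, not the normative XDR of this document:</t>
      <figure anchor="fig-chunk-header-sketch">
        <name>Hypothetical Chunk Header</name>
        <sourcecode type="xdr"><![CDATA[
/* Hypothetical sketch: a data block is transformed into a
 * header plus a chunk; the header carries a CRC32 over the
 * chunk payload so that silent corruption can be detected. */
struct ffv2_chunk_header_sketch {
        uint32_t        fchs_crc;     /* CRC32 of the chunk payload */
        offset4         fchs_offset;  /* offset of the source block */
        length4         fchs_length;  /* length of the chunk payload */
};
]]></sourcecode>
      </figure>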
      <t>Using the process detailed in <xref target="RFC8178"/>, the revisions in this
document become an extension of NFSv4.2 <xref target="RFC7862"/>.  They are built on
top of the external data representation (XDR) <xref target="RFC4506"/> generated
from <xref target="RFC7863"/>.</t>
      <t>This document defines <tt>LAYOUT4_FLEX_FILES_V2</tt>, a new and independent
layout type that coexists with the Flexible File Layout Type version 1
(<tt>LAYOUT4_FLEX_FILES</tt>, <xref target="RFC8435"/>).  The two layout types are NOT
backward compatible: an FFv2 layout cannot be parsed as an FFv1 layout
and vice versa.  A server <bcp14>MAY</bcp14> support both layout types simultaneously;
a client selects the desired layout type in its LAYOUTGET request.</t>
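      <t>In XDR terms, the new layout type is an additional layouttype4 value
alongside those assigned by the existing layout type specifications.  The
numeric value shown below is a placeholder; the actual value is assigned
by IANA:</t>
      <figure anchor="fig-layouttype-sketch">
        <name>Layout Type Values (Sketch)</name>
        <sourcecode type="xdr"><![CDATA[
enum layouttype4 {
        LAYOUT4_NFSV4_1_FILES   = 1,  /* RFC 8881 */
        LAYOUT4_OSD2_OBJECTS    = 2,  /* RFC 5664 */
        LAYOUT4_BLOCK_VOLUME    = 3,  /* RFC 5663 */
        LAYOUT4_FLEX_FILES      = 4,  /* RFC 8435 */
        LAYOUT4_SCSI            = 5,  /* RFC 8154 */
        LAYOUT4_FLEX_FILES_V2   = 6   /* placeholder value */
};
]]></sourcecode>
      </figure>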
      <section anchor="definitions">
        <name>Definitions</name>
        <dl>
          <dt>chunk:</dt>
          <dd>
            <t>one of the units of data exchanged with a data server after a
transformation has been applied to a data block.  A chunk may be a
different size than the data block.</t>
          </dd>
          <dt>control communication requirements:</dt>
          <dd>
            <t>the specification for information on layouts, stateids, file metadata,
and file data that must be communicated between the metadata server and
the storage devices.  There is a separate set of requirements for each
layout type.</t>
          </dd>
          <dt>control protocol:</dt>
          <dd>
            <t>the particular mechanism that an implementation of a layout type would
use to meet the control communication requirement for that layout type.
This need not be a protocol as normally understood.  In some cases,
the same protocol may be used as a control protocol and storage protocol.</t>
          </dd>
          <dt>client-side mirroring:</dt>
          <dd>
            <t>a feature in which the client, not the server, is responsible for
updating all of the mirrored copies of a layout segment.</t>
          </dd>
          <dt>data block:</dt>
          <dd>
            <t>A block of data in the client's cache for a file.</t>
          </dd>
          <dt>data file:</dt>
          <dd>
            <t>The data portion of the file, stored on the data server.</t>
          </dd>
          <dt>replication of data:</dt>
          <dd>
            <t>Data replication is making and storing multiple copies of data in
different locations.</t>
          </dd>
          <dt>Erasure Coding:</dt>
          <dd>
            <t>a data protection scheme in which a block of data is divided into
fragments and additional redundant fragments are computed to provide
parity.  The resulting chunks are stored in different locations.</t>
          </dd>
          <dt>Client Side Erasure Coding:</dt>
          <dd>
            <t>a data protection scheme in which the client, not the server, is
responsible for encoding data blocks into chunks and writing those
chunks to the storage devices in parallel.</t>
          </dd>
          <dt>(file) data:</dt>
          <dd>
            <t>that part of the file system object that contains the data to be read
or written.  It is the contents of the object rather than the attributes
of the object.</t>
          </dd>
          <dt>data server (DS):</dt>
          <dd>
            <t>a pNFS server that provides the file's data when the file system
object is accessed over a file-based protocol.</t>
          </dd>
          <dt>fencing:</dt>
          <dd>
            <t>the process by which the metadata server prevents the storage devices
from processing I/O from a specific client to a specific file.</t>
          </dd>
          <dt>file layout type:</dt>
          <dd>
            <t>a layout type in which the storage devices are accessed via the NFS
protocol (see Section 5.12.4 of <xref target="RFC8881"/>).</t>
          </dd>
          <dt>gid:</dt>
          <dd>
            <t>the group id, a numeric value that identifies to which group a file
belongs.</t>
          </dd>
          <dt>layout:</dt>
          <dd>
            <t>the information a client uses to access file data on a storage device.
This information includes specification of the protocol (layout type)
and the identity of the storage devices to be used.</t>
          </dd>
          <dt>layout iomode:</dt>
          <dd>
            <t>a grant of either read-only or read/write I/O to the client.</t>
          </dd>
          <dt>layout segment:</dt>
          <dd>
            <t>a sub-division of a layout.  That sub-division might be by the layout
iomode (see Sections 3.3.20 and 12.2.9 of <xref target="RFC8881"/>), a striping pattern
(see Section 13.3 of <xref target="RFC8881"/>), or requested byte range.</t>
          </dd>
          <dt>layout stateid:</dt>
          <dd>
            <t>a 128-bit quantity returned by a server that uniquely defines the
layout state provided by the server for a specific layout that describes
a layout type and file (see Section 12.5.2 of <xref target="RFC8881"/>).  Further,
Section 12.5.3 of <xref target="RFC8881"/> describes differences in handling between
layout stateids and other stateid types.</t>
          </dd>
          <dt>layout type:</dt>
          <dd>
            <t>a specification of both the storage protocol used to access the data
and the aggregation scheme used to lay out the file data on the underlying
storage devices.</t>
          </dd>
          <dt>loose coupling:</dt>
          <dd>
            <t>when the control protocol is a storage protocol.</t>
          </dd>
          <dt>(file) metadata:</dt>
          <dd>
            <t>the part of the file system object that contains various descriptive
data relevant to the file object, as opposed to the file data itself.
This could include the time of last modification, access time, EOF
position, etc.</t>
          </dd>
          <dt>metadata server (MDS):</dt>
          <dd>
            <t>the pNFS server that provides metadata information for a file system
object.  It is also responsible for generating, recalling, and revoking
layouts for file system objects, for performing directory operations,
and for performing I/O operations to regular files when the clients
direct these to the metadata server itself.</t>
          </dd>
          <dt>mirror:</dt>
          <dd>
            <t>a copy of a layout segment.  Note that if one copy of the mirror is
updated, then all copies must be updated.</t>
          </dd>
          <dt>non-systematic encoding:</dt>
          <dd>
            <t>An erasure coding scheme in which the encoded shards do not contain
verbatim copies of the original data.  Every read requires decoding,
even when no shards are lost.  The Mojette non-systematic transform is
an example.  Non-systematic encodings are typically used for archival
workloads where reads are infrequent.</t>
          </dd>
          <dt>recalling a layout:</dt>
          <dd>
            <t>a graceful recall, via a callback, of a specific layout by the metadata
server to the client.  Graceful here means that the client would have
the opportunity to flush any WRITEs, etc., before returning the layout
to the metadata server.</t>
          </dd>
          <dt>revoking a layout:</dt>
          <dd>
            <t>an invalidation of a specific layout by the metadata server.
Once revocation occurs, the metadata server will not accept as valid any
reference to the revoked layout, and a storage device will not accept
any client access based on the layout.</t>
          </dd>
          <dt>resilvering:</dt>
          <dd>
            <t>the act of rebuilding a mirrored copy of a layout segment from a
known good copy of the layout segment.  Note that this can also be done
to create a new mirrored copy of the layout segment.</t>
          </dd>
          <dt>rsize:</dt>
          <dd>
            <t>the data transfer buffer size used for READs.</t>
          </dd>
          <dt>stateid:</dt>
          <dd>
            <t>a 128-bit quantity returned by a server that uniquely defines the set
of locking-related state provided by the server.  Stateids may designate
state related to open files, byte-range locks, delegations, or layouts.</t>
          </dd>
          <dt>storage device:</dt>
          <dd>
            <t>the target to which clients may direct I/O requests when they hold
an appropriate layout.  See Section 2.1 of <xref target="RFC8434"/> for further
discussion of the difference between a data server and a storage device.</t>
          </dd>
          <dt>storage protocol:</dt>
          <dd>
            <t>the protocol used by clients to do I/O operations to the storage
device.  Each layout type specifies the set of storage protocols.</t>
          </dd>
          <dt>systematic encoding:</dt>
          <dd>
            <t>an erasure coding scheme in which the first k of the k+m encoded
shards are identical to the original k data blocks.  A healthy read
(no failures) requires no decoding; the data shards are read directly.
Decoding is triggered only when data shards are missing.  Reed-Solomon
codes with a Vandermonde matrix and the systematic Mojette transform
are examples.</t>
          </dd>
          <dt>tight coupling:</dt>
          <dd>
            <t>an arrangement in which the control protocol is one designed
specifically for control communication.  It may be either a proprietary
protocol adapted specifically to a particular metadata server or a
protocol based on a Standards Track document.</t>
          </dd>
          <dt>uid:</dt>
          <dd>
            <t>the user id, a numeric value that identifies which user owns a file.</t>
          </dd>
          <dt>write hole:</dt>
          <dd>
            <t>a data corruption scenario in which either two clients write to
the same chunk concurrently or one client overwrites an existing chunk
of data.</t>
          </dd>
          <dt>wsize:</dt>
          <dd>
            <t>the data transfer buffer size used for WRITEs.</t>
          </dd>
        </dl>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
    <section anchor="coupling-of-storage-devices">
      <name>Coupling of Storage Devices</name>
      <t>A server implementation may choose either a loosely coupled model or a
tightly coupled model between the metadata server and the storage devices.
<xref target="RFC8434"/> describes the general problems facing pNFS implementations.
This document details how the new flexible file layout type addresses
these issues.  To implement the tightly coupled model, a control protocol
has to be defined.  As the flexible file layout imposes no special
requirements on the client, the control protocol will need to provide:</t>
      <ol spacing="normal" type="1"><li>
          <t>management of both security and LAYOUTCOMMITs and</t>
        </li>
        <li>
          <t>a global stateid model and management of these stateids.</t>
        </li>
      </ol>
      <t>When implementing the loosely coupled model, the only control protocol
will be a version of NFS, with no ability to provide a global stateid
model or to prevent clients from using layouts inappropriately.  To enable
client use in that environment, this document will specify how security,
state, and locking are to be managed.</t>
      <t>The loosely and tightly coupled locking models defined in Section 2.3
of <xref target="RFC8435"/> apply equally to this layout type, including the use of
anonymous stateids with loosely coupled storage devices, the handling
of lock and delegation stateids, and the mandatory byte-range lock
requirements for the tightly coupled model.</t>
      <section anchor="layoutcommit">
        <name>LAYOUTCOMMIT</name>
        <t>Regardless of the coupling model, the metadata server has the
responsibility, upon receiving a LAYOUTCOMMIT (see Section 18.42 of
<xref target="RFC8881"/>) to ensure that the semantics of pNFS are respected (see
Section 3.1 of <xref target="RFC8434"/>).  These do include a requirement that data
written to a data storage device be stable before the occurrence of
the LAYOUTCOMMIT.</t>
        <t>It is the responsibility of the client to make sure the data file is
stable before the metadata server begins to query the storage devices
about the changes to the file.  If any WRITE to a storage device did not
result with stable_how equal to FILE_SYNC, a LAYOUTCOMMIT to the metadata
server <bcp14>MUST</bcp14> be preceded by a COMMIT to the storage devices written to.
Note that if the client has not done a COMMIT to the storage device, then
the LAYOUTCOMMIT might not be synchronized to the last WRITE operation
to the storage device.</t>
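        <t>The required ordering can be sketched as follows (illustrative
pseudocode; operation arguments are elided):</t>
        <figure anchor="fig-layoutcommit-sketch">
          <name>COMMIT before LAYOUTCOMMIT (Sketch)</name>
          <sourcecode><![CDATA[
; Client-side sketch of the stability requirement
for sd in storage_devices_written:
    if any WRITE to sd returned stable_how != FILE_SYNC:
        COMMIT(sd)                ; make the data stable first
LAYOUTCOMMIT(metadata_server)     ; only after all data is stable
]]></sourcecode>
        </figure>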
      </section>
      <section anchor="sec-Fencing-Clients">
        <name>Fencing Clients from the Storage Device</name>
        <t>With loosely coupled storage devices, the metadata server uses synthetic
uids (user ids) and gids (group ids) for the data file, where the uid
owner of the data file is allowed read/write access and the gid owner
is allowed read-only access.  As part of the layout (see ffv2ds_user
and ffv2ds_group in <xref target="sec-ffv2_layout"/>), the client is provided
with the user and group to be used in the Remote Procedure Call
(RPC) <xref target="RFC5531"/> credentials needed to access the data file.
Fencing off of clients is achieved by the metadata server changing
the synthetic uid and/or gid owners of the data file on the storage
device to implicitly revoke the outstanding RPC credentials.  A
client presenting the wrong credential for the desired access will
get an NFS4ERR_ACCESS error.</t>
        <t>With this loosely coupled model, the metadata server is not able to fence
off a single client; it is forced to fence off all clients.  However,
as the other clients react to the fencing, returning their layouts and
trying to get new ones, the metadata server can hand out a new uid and
gid to allow access.</t>
        <t>It is <bcp14>RECOMMENDED</bcp14> to implement common access control methods at the
storage device file system to allow only the metadata server root
(super user) access to the storage device and to set the owner of all
directories holding data files to the root user.  This approach provides
a practical model to enforce access control and fence off cooperative
clients, but it cannot protect against malicious clients; hence, it
provides a level of security equivalent to AUTH_SYS.  It is <bcp14>RECOMMENDED</bcp14>
that the communication between the metadata server and storage device
be secure from eavesdroppers and man-in-the-middle protocol tampering.
The security measure could be physical security (e.g., the servers
are co-located in a physically secure area), encrypted communications,
or some other technique.</t>
        <t>With tightly coupled storage devices, the metadata server sets the
user and group owners, mode bits, and Access Control List (ACL) of
the data file to be the same as the metadata file.  The client must also
authenticate with the storage device and go through the same authorization
process it would go through via the metadata server.  In the case of
tight coupling, fencing is the responsibility of the control protocol and
is not described in detail in this document.  However, implementations
of the tightly coupled locking model (see <xref target="sec-state-locking"/>) will
need a way to prevent access by certain clients to specific files by
invalidating the corresponding stateids on the storage device.  In such
a scenario, the client will be given an error of NFS4ERR_BAD_STATEID.</t>
        <t>The client need not know the model used between the metadata server and
the storage device.  It need only react consistently to any errors in
interacting with the storage device.  It <bcp14>SHOULD</bcp14> both return the layout
and error to the metadata server and ask for a new layout.  At that point,
the metadata server can either hand out a new layout, hand out no layout
(forcing the I/O through it), or deny the client further access to
the file.</t>
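        <t>The client reaction described above can be sketched as
(illustrative pseudocode; operation arguments are elided):</t>
        <figure anchor="fig-fence-reaction-sketch">
          <name>Client Reaction to Fencing Errors (Sketch)</name>
          <sourcecode><![CDATA[
on I/O error from storage device:      ; e.g., NFS4ERR_ACCESS or
                                       ; NFS4ERR_BAD_STATEID
    LAYOUTERROR(...)                   ; report the error to the MDS
    LAYOUTRETURN(...)                  ; give back the failing layout
    LAYOUTGET(...)                     ; ask for a new layout; the MDS
                                       ; may grant one, force I/O
                                       ; through itself, or deny access
]]></sourcecode>
        </figure>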
        <section anchor="implementation-notes-for-synthetic-uidsgids">
          <name>Implementation Notes for Synthetic uids/gids</name>
          <t>The selection method for the synthetic uids and gids to be used for
fencing in loosely coupled storage devices is strictly an implementation
issue.  That is, an administrator might restrict a range of such ids
available to the Lightweight Directory Access Protocol (LDAP) 'uid' field
<xref target="RFC4519"/>.  The administrator might also be able to choose an id that
would never be used to grant access.  Then, when the metadata server had
a request to access a file, a SETATTR would be sent to the storage device
to set the owner and group of the data file.  The user and group might
be selected in a round-robin fashion from the range of available ids.</t>
          <t>Those ids would be sent back as ffv2ds_user and ffv2ds_group to the
client, who would present them as the RPC credentials to the storage
device.  When the client is done accessing the file and the metadata
server knows that no other client is accessing the file, it can
reset the owner and group to restrict access to the data file.</t>
          <t>When the metadata server wants to fence off a client, it changes the
synthetic uid and/or gid to the restricted ids.  Note that using a
restricted id ensures that there is a change of owner and at least one
id available that never gets allowed access.</t>
          <t>Under an AUTH_SYS security model, synthetic uids and gids of 0 <bcp14>SHOULD</bcp14> be
avoided.  These typically either grant superuser access to files on a
storage device or are mapped to an anonymous id.  In the first case, even
if the data file is fenced, the client might still be able to access the
file.  In the second case, multiple ids might be mapped to the same
anonymous id.</t>
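          <t>One possible metadata server implementation of this scheme is
sketched below.  The helper names and the reserved-range bookkeeping are
hypothetical:</t>
          <figure anchor="fig-synthetic-id-sketch">
            <name>Synthetic id Management (Sketch)</name>
            <sourcecode><![CDATA[
; Grant access: pick fresh ids and stamp them onto the data file
grant(file):
    uid = next_id(reserved_range)    ; round-robin selection
    gid = next_id(reserved_range)
    SETATTR(file, owner=uid, owner_group=gid)  ; on storage device
    return layout with ffv2ds_user=uid, ffv2ds_group=gid

; Fence: switch to ids that are never handed out to clients
fence(file):
    SETATTR(file, owner=restricted_uid, owner_group=restricted_gid)
]]></sourcecode>
          </figure>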
        </section>
        <section anchor="example-of-using-synthetic-uidsgids">
          <name>Example of using Synthetic uids/gids</name>
          <t>The user loghyr creates a file "ompha.c" on the metadata server, which
then creates a corresponding data file on the storage device.</t>
          <t>The metadata server entry may look like:</t>
          <figure anchor="fig-meta-ompha">
            <name>Metadata's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r--r--    1 loghyr  staff    1697 Dec  4 11:31 ompha.c
]]></sourcecode>
          </figure>
          <t>On the storage device, the file may be assigned some unpredictable
synthetic uid/gid to deny access:</t>
          <figure anchor="fig-data-ompha">
            <name>Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19452   28418    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>When the file is opened on a client and accessed, the user will try to
get a layout for the data file.  Since the layout knows nothing about
the user (and does not care), it does not matter whether the user loghyr
or garbo opens the file.  The client has to present a uid of 19452
to get write permission.  If it presents any other value for the uid,
then it must present a gid of 28418 to get read access.</t>
          <t>Further, if the metadata server decides to fence the file, it <bcp14>SHOULD</bcp14>
change the uid and/or gid such that these values neither match earlier
values for that file nor match a predictable change based on an earlier
fencing.</t>
          <figure anchor="fig-fenced-ompha">
            <name>Fenced Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19453   28419    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>The set of synthetic gids on the storage device <bcp14>SHOULD</bcp14> be selected such
that there is no mapping in any of the name services used by the storage
device, i.e., each group <bcp14>SHOULD</bcp14> have no members.</t>
          <t>If the layout segment has an iomode of LAYOUTIOMODE4_READ, then the
metadata server should return a synthetic uid that is not set on the
storage device.  Only the synthetic gid would be valid.</t>
          <t>The client is thus solely responsible for enforcing file permissions
in a loosely coupled model.  To allow loghyr write access, it will send
an RPC to the storage device with a credential of 19452:28418.  To allow
garbo read access, it will send an RPC to the storage device with a
credential such as 1067:28418.  The value of the uid does not matter as long
as it is not the synthetic uid granted when getting the layout.</t>
          <t>While pushing the enforcement of permission checking onto the client
may seem to weaken security, the client may already be responsible
for enforcing permissions before modifications are sent to a server.
With cached writes, the client is always responsible for tracking who is
modifying a file and making sure to not coalesce requests from multiple
users into one request.</t>
        </section>
      </section>
      <section anchor="sec-state-locking">
        <name>State and Locking Models</name>
        <t>An implementation can always be deployed as a loosely coupled model.
There is, however, no way for a storage device to indicate over an NFS
protocol that it can definitively participate in a tightly coupled model:</t>
        <ul spacing="normal">
          <li>
            <t>Storage devices implementing the NFSv3 and NFSv4.0 protocols are
always treated as loosely coupled.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do not return the
EXCHGID4_FLAG_USE_PNFS_DS flag in the response to EXCHANGE_ID are
indicating that they are to be treated as loosely coupled.  From the
locking viewpoint, they are treated in the same way as NFSv4.0 storage
devices.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do identify themselves with the
EXCHGID4_FLAG_USE_PNFS_DS flag in the response to EXCHANGE_ID can
potentially be tightly coupled.  They would use a back-end control
protocol to implement the global stateid model as described in <xref target="RFC8881"/>.</t>
          </li>
        </ul>
        <t>A storage device would have to be either discovered or advertised over
the control protocol to enable a tightly coupled model.</t>
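        <t>The determination of the coupling model can be sketched as
(illustrative pseudocode):</t>
        <figure anchor="fig-coupling-sketch">
          <name>Determining the Coupling Model (Sketch)</name>
          <sourcecode><![CDATA[
if storage device speaks NFSv3 or NFSv4.0:
    coupling = loose
else:                                  ; NFSv4.1+
    flags = reply to EXCHANGE_ID
    if flags & EXCHGID4_FLAG_USE_PNFS_DS == 0:
        coupling = loose               ; treated as NFSv4.0 for locking
    else if device known over a control protocol:
        coupling = tight               ; global stateid model possible
    else:
        coupling = loose
]]></sourcecode>
        </figure>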
        <section anchor="loosely-coupled-locking-model">
          <name>Loosely Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
When an NFSv4 version is used as the data access protocol, the metadata
server may make stateid-related requests of the storage devices.  However,
it is not required to do so, and the resulting stateids are known only
to the metadata server and the storage device.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with by the metadata server.  Stateids are
selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server may need to interact with the storage device to
locate the file to be opened, but no locking-related functionality
need be used on the storage device.</t>
            </li>
            <li>
              <t>OPEN_DOWNGRADE and CLOSE only require local execution on the
metadata server.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and only
used on the metadata server.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server that initiates
recalls when conflicting OPENs are processed.  No storage device
involvement is required.</t>
            </li>
            <li>
              <t>TEST_STATEID and FREE_STATEID are processed locally on the
metadata server, without storage device involvement.</t>
            </li>
          </ul>
          <t>All I/O operations to the storage device are done using the anonymous
stateid.  Thus, the storage device has no information about the openowner
and lockowner responsible for issuing a particular I/O operation.
As a result:</t>
          <ul spacing="normal">
            <li>
              <t>Mandatory byte-range locking cannot be supported because the
storage device has no way of distinguishing I/O done on behalf of
the lock owner from those done by others.</t>
            </li>
            <li>
              <t>Enforcement of share reservations is the responsibility of the
client.  Even though I/O is done using the anonymous stateid, the
client must ensure that it has a valid stateid associated with the
openowner.</t>
            </li>
          </ul>
          <t>In the event that a stateid is revoked, the metadata server is responsible
for preventing client access, since it has no way of being sure that
the client is aware that the stateid in question has been revoked.</t>
          <t>As the client never receives a stateid generated by a storage device,
there is no client lease on the storage device and no prospect of lease
expiration, even when access is via NFSv4 protocols.  Clients will
have leases on the metadata server.  In dealing with lease expiration,
the metadata server may need to use fencing to prevent revoked stateids
from being relied upon by a client unaware of the fact that they have
been revoked.</t>
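          <t>The resulting loosely coupled I/O path can be sketched as
(illustrative pseudocode; operation arguments are elided):</t>
          <figure anchor="fig-loose-io-sketch">
            <name>Loosely Coupled I/O Path (Sketch)</name>
            <sourcecode><![CDATA[
open_stateid = OPEN(...)          ; at the MDS; state stays there
layout       = LAYOUTGET(...)     ; at the MDS
READ(data_file, anon_stateid)     ; at the DS, anonymous stateid
WRITE(data_file, anon_stateid)    ; at the DS, anonymous stateid
CLOSE(open_stateid)               ; at the MDS
]]></sourcecode>
          </figure>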
        </section>
        <section anchor="tightly-coupled-locking-model">
          <name>Tightly Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
These stateids must be made known to the storage device using control
protocol facilities, the details of which are not discussed in this
document.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with primarily on the metadata server.  Stateids
are selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server needs to interact with the storage device to
locate the file to be opened and to make the storage device aware of
the association between the metadata-server-chosen stateid and the
client and openowner that it represents.  OPEN_DOWNGRADE and CLOSE
are executed initially on the metadata server, but the state change
made must be propagated to the storage device.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and are
available for use on the metadata server.  Because I/O operations
are allowed to present lock stateids, the metadata server needs the
ability to make the storage device aware of the association between
the metadata-server-chosen stateid and the corresponding open stateid
it is associated with.</t>
            </li>
            <li>
              <t>Mandatory byte-range locks can be supported when both the metadata
server and the storage devices have the appropriate support.  As in
the case of advisory byte-range locks, these are assigned by the
metadata server and are available for use on the metadata server.
To enable mandatory lock enforcement on the storage device, the
metadata server needs the ability to make the storage device aware
of the association between the metadata-server-chosen stateid and
the client, openowner, and lock (i.e., lockowner, byte-range, and
lock-type) that it represents.  Because I/O operations are allowed
to present lock stateids, this information needs to be propagated to
all storage devices to which I/O might be directed rather than only
to storage devices that contain the locked region.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server that initiates
recalls when conflicting OPENs are processed.  Because I/O operations
are allowed to present delegation stateids, the metadata server
requires the ability:  </t>
              <ol spacing="normal" type="1"><li>
                  <t>to make the storage device aware of the association between
the metadata-server-chosen stateid and the filehandle and
delegation type it represents</t>
                </li>
                <li>
                  <t>to break such an association.</t>
                </li>
              </ol>
            </li>
            <li>
              <t>TEST_STATEID is processed locally on the metadata server, without
storage device involvement.</t>
            </li>
            <li>
              <t>FREE_STATEID is processed on the metadata server, but the metadata
server requires the ability to propagate the request to the
corresponding storage devices.</t>
            </li>
          </ul>
          <t>Because the client will possess and use stateids valid on the storage
device, there will be a client lease on the storage device, and the
possibility of lease expiration does exist.  The best approach for the
storage device is to retain these locks as a courtesy.  However, if it
does not do so, control protocol facilities need to provide the means
to synchronize lock state between the metadata server and storage device.</t>
          <t>Clients will also have leases on the metadata server that are subject
to expiration.  In dealing with lease expiration, the metadata server
would be expected to use control protocol facilities enabling it to
invalidate revoked stateids on the storage device.  In the event the
client is not responsive, the metadata server may need to use fencing
to prevent revoked stateids from being acted upon by the storage device.</t>
        </section>
      </section>
    </section>
    <section anchor="xdr-description-of-the-flexible-file-layout-type">
      <name>XDR Description of the Flexible File Layout Type</name>
      <t>This document contains the External Data Representation (XDR)
<xref target="RFC4506"/> description of the flexible file layout type.  The XDR
description is embedded in this document in a way that makes it simple
for the reader to extract into a ready-to-compile form.  The reader can
feed this document into the shell script in <xref target="fig-extract"/> to produce
the machine-readable XDR description of the flexible file layout type.</t>
      <figure anchor="fig-extract">
        <name>extract.sh</name>
        <sourcecode type="shell"><![CDATA[
#!/bin/sh
grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'
]]></sourcecode>
      </figure>
      <t>That is, if the above script is stored in a file called "extract.sh"
and this document is in a file called "spec.txt", then the reader can
run the script as in <xref target="fig-extract-example"/>.</t>
      <figure anchor="fig-extract-example">
        <name>Example use of extract.sh</name>
        <sourcecode type="shell"><![CDATA[
sh extract.sh < spec.txt > flex_files2_prot.x
]]></sourcecode>
      </figure>
      <t>The effect of the script is to remove leading blank space from each
line, plus a sentinel sequence of "///".</t>
      <t>XDR descriptions with the sentinel sequence are embedded throughout
the document.</t>
      <t>Note that the XDR code contained in this document depends on types
from the NFSv4.1 nfs4_prot.x file <xref target="RFC5662"/>.  This includes both nfs
types that end with a 4, such as offset4, length4, etc., as well as
more generic types such as uint32_t and uint64_t.</t>
      <t>While the XDR can be appended to that from <xref target="RFC7863"/>, the various
code snippets belong in their respective areas of that XDR.</t>
    </section>
    <section anchor="device-addressing-and-discovery">
      <name>Device Addressing and Discovery</name>
      <t>Data operations to a storage device require the client to know the
network address of the storage device.  The NFSv4.1+ GETDEVICEINFO
operation (Section 18.40 of <xref target="RFC8881"/>) is used by the client to
retrieve that information.</t>
      <section anchor="sec-ff_device_addr4">
        <name>ff_device_addr4</name>
        <t>The ff_device_addr4 data structure (see <xref target="fig-ff_device_addr4"/>)
is returned by the server as the layout-type-specific opaque field
da_addr_body in the device_addr4 structure by a successful GETDEVICEINFO
operation.</t>
        <figure anchor="fig-ff_device_versions4">
          <name>ff_device_versions4</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_versions4 {
           uint32_t        ffdv_version;
           uint32_t        ffdv_minorversion;
           uint32_t        ffdv_rsize;
           uint32_t        ffdv_wsize;
           bool            ffdv_tightly_coupled;
   };
]]></sourcecode>
        </figure>
        <figure anchor="fig-ff_device_addr4">
          <name>ff_device_addr4</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_addr4 {
           multipath_list4     ffda_netaddrs;
           ff_device_versions4 ffda_versions<>;
   };
]]></sourcecode>
        </figure>
        <t>The ffda_netaddrs field is used to locate the storage device.  It
<bcp14>MUST</bcp14> be set by the server to a list holding one or more of the device
network addresses.</t>
        <t>The ffda_versions array allows the metadata server to present choices
as to NFS version, minor version, and coupling strength to the
client.  The ffdv_version and ffdv_minorversion represent the NFS
protocol to be used to access the storage device.  This layout
specification defines the semantics for ffdv_version values 3 and 4.  If
ffdv_version equals 3, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to
0 and ffdv_tightly_coupled to false.  The client <bcp14>MUST</bcp14> then access the
storage device using the NFSv3 protocol <xref target="RFC1813"/>.  If ffdv_version
equals 4, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to one of the
NFSv4 minor version numbers, and the client <bcp14>MUST</bcp14> access the storage
device using NFSv4 with the specified minor version.</t>
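        <t>The version-selection rules above can be sketched as a validity
check on a single ff_device_versions4 entry.  This is an illustrative,
non-normative sketch; the function name and the set of NFSv4 minor
versions are assumptions, not protocol elements.</t>
        <figure>
          <name>Sketch of ff_device_versions4 validation</name>
          <sourcecode type="python"><![CDATA[
def validate_ffdv(version, minorversion, tightly_coupled):
    """Check one ff_device_versions4 entry against the rules above."""
    if version == 3:
        # NFSv3: the minor version MUST be 0 and coupling MUST be loose.
        return minorversion == 0 and not tightly_coupled
    if version == 4:
        # NFSv4: any defined minor version is permitted, with either
        # coupling strength.
        return minorversion in (0, 1, 2)
    # This specification defines semantics only for versions 3 and 4.
    return False
]]></sourcecode>
        </figure>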
        <t>Note that while the client might determine that it cannot use any of
the configured combinations of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled when it gets the device list from the metadata
server, there is no way for it to indicate to the metadata server
which device is version incompatible.  However, if the client
waits until it retrieves the layout from the metadata server, it can
at that time clearly identify the storage device in question (see
<xref target="sec-version-errors"/>).</t>
        <t>The ffdv_rsize and ffdv_wsize are used to communicate the maximum
rsize and wsize supported by the storage device.  As the storage
device can have a different rsize or wsize than the metadata server,
the ffdv_rsize and ffdv_wsize allow the metadata server to
communicate that information on behalf of the storage device.</t>
        <t>ffdv_tightly_coupled informs the client as to whether or not the
metadata server is tightly coupled with the storage devices.  Note
that even if the data protocol is at least NFSv4.1, it may still be
the case that there is loose coupling in effect.  If
ffdv_tightly_coupled is not set, then the client <bcp14>MUST</bcp14> commit writes
to the storage devices for the file before sending a LAYOUTCOMMIT to
the metadata server.  That is, the writes <bcp14>MUST</bcp14> be committed by the
client to stable storage via issuing WRITEs with stable_how ==
FILE_SYNC or by issuing a COMMIT after WRITEs with stable_how !=
FILE_SYNC (see Section 3.3.7 of <xref target="RFC1813"/>).</t>
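        <t>As a non-normative sketch of this rule, the following decides
whether a COMMIT must precede LAYOUTCOMMIT, given the stable_how
values the client used on its WRITEs.  The function name is an
assumption; FILE_SYNC is the stable_how value from the NFS protocol.</t>
        <figure>
          <name>Sketch of the loose-coupling commit rule</name>
          <sourcecode type="python"><![CDATA[
FILE_SYNC = 2  # stable_how value from the NFS protocol

def flush_before_layoutcommit(writes, tightly_coupled):
    """Return the operations to issue before the layout is committed.

    'writes' is the list of stable_how values used on WRITEs to the
    storage device for this file.
    """
    ops = []
    if not tightly_coupled:
        # Loose coupling: any WRITE not done FILE_SYNC must be made
        # stable with a COMMIT before LAYOUTCOMMIT may be sent.
        if any(stable != FILE_SYNC for stable in writes):
            ops.append("COMMIT")
    ops.append("LAYOUTCOMMIT")
    return ops
]]></sourcecode>
        </figure>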
      </section>
      <section anchor="storage-device-multipathing">
        <name>Storage Device Multipathing</name>
        <t>The flexible file layout type supports multipathing to multiple
storage device addresses.  Storage-device-level multipathing is used
for bandwidth scaling via trunking and for higher availability of use
in the event of a storage device failure.  Multipathing allows the
client to switch to another storage device address that may be that
of another storage device that is exporting the same data stripe
unit, without having to contact the metadata server for a new layout.</t>
        <t>To support storage device multipathing, ffda_netaddrs contains an
array of one or more storage device network addresses.  This array
(data type multipath_list4) represents a list of storage devices
(each identified by a network address), with the possibility that
some storage devices will appear in the list multiple times.</t>
        <t>The client is free to use any of the network addresses as a
destination to send storage device requests.  If some network
addresses are less desirable paths to the data than others, then the
metadata server <bcp14>SHOULD NOT</bcp14> include those network addresses in
ffda_netaddrs.  If less desirable network addresses exist to provide
failover, the <bcp14>RECOMMENDED</bcp14> method to offer the addresses is to provide
them in a replacement device-ID-to-device-address mapping or a
replacement device ID.  When a client finds no response from the
storage device using all addresses available in ffda_netaddrs, it
<bcp14>SHOULD</bcp14> send a GETDEVICEINFO to attempt to replace the existing
device-ID-to-device-address mappings.  If the metadata server detects
that all network paths represented by ffda_netaddrs are unavailable,
the metadata server <bcp14>SHOULD</bcp14> send a CB_NOTIFY_DEVICEID (if the client
has indicated it wants device ID notifications for changed device
IDs) to change the device-ID-to-device-address mappings to the
available addresses.  If the device ID itself will be replaced, the
metadata server <bcp14>SHOULD</bcp14> recall all layouts with the device ID and thus
force the client to get new layouts and device ID mappings via
LAYOUTGET and GETDEVICEINFO.</t>
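        <t>The client behavior recommended above can be sketched as follows.
This is illustrative only: 'send' and 'get_device_info' stand in for
the client's transport and for a GETDEVICEINFO round trip; they are
not protocol elements.</t>
        <figure>
          <name>Sketch of multipath failover</name>
          <sourcecode type="python"><![CDATA[
def send_with_failover(addrs, send, get_device_info):
    """Try each address in ffda_netaddrs; on total failure, refresh
    the device-ID-to-device-address mapping once and retry."""
    for attempt in range(2):
        for addr in addrs:
            try:
                return send(addr)
            except ConnectionError:
                continue  # try the next address in the list
        if attempt == 0:
            # No address responded: ask for a replacement mapping.
            addrs = get_device_info()
    raise ConnectionError("storage device unreachable on all addresses")
]]></sourcecode>
        </figure>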
        <t>Generally, if two network addresses appear in ffda_netaddrs, they
will designate the same storage device.  When the storage device is
accessed over NFSv4.1 or a higher minor version, the two storage
device addresses will support the implementation of client ID or
session trunking (the latter is <bcp14>RECOMMENDED</bcp14>) as defined in <xref target="RFC8881"/>.
The two storage device addresses will share the same server owner or
major ID of the server owner.  It is not always necessary for the two
storage device addresses to designate the same storage device with
trunking being used.  For example, the data could be read-only, and
the data consist of exact replicas.</t>
      </section>
    </section>
    <section anchor="flexible-file-version-2-layout-type">
      <name>Flexible File Version 2 Layout Type</name>
      <t>The original layouttype4 introduced in <xref target="RFC5662"/> is extended as shown in
<xref target="fig-orig-layout"/>.</t>
      <figure anchor="fig-orig-layout">
        <name>The original layout type</name>
        <sourcecode type="xdr"><![CDATA[
       enum layouttype4 {
           LAYOUT4_NFSV4_1_FILES   = 1,
           LAYOUT4_OSD2_OBJECTS    = 2,
           LAYOUT4_BLOCK_VOLUME    = 3,
           LAYOUT4_FLEX_FILES      = 4,
           LAYOUT4_FLEX_FILES_V2   = 5
       };

       struct layout_content4 {
           layouttype4             loc_type;
           opaque                  loc_body<>;
       };

       struct layout4 {
           offset4                 lo_offset;
           length4                 lo_length;
           layoutiomode4           lo_iomode;
           layout_content4         lo_content;
       };
]]></sourcecode>
      </figure>
      <t>This document defines structures associated with the layouttype4
value LAYOUT4_FLEX_FILES_V2.  <xref target="RFC8881"/> specifies the loc_body structure
as an XDR type "opaque".  The opaque layout is uninterpreted by the
generic pNFS client layers but is interpreted by the flexible file
layout type implementation.  This section defines the structure of
this otherwise opaque value, ffv2_layout4.</t>
      <section anchor="ffv2codingtype4">
        <name>ffv2_coding_type4</name>
        <figure anchor="fig-ffv2_coding_type4">
          <name>The coding type</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_coding_type4 {
   ///     FFV2_CODING_MIRRORED                  = 1,
   ///     FFV2_ENCODING_MOJETTE_SYSTEMATIC      = 2,
   ///     FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC  = 3,
   ///     FFV2_ENCODING_RS_VANDERMONDE          = 4
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type4 (see <xref target="fig-ffv2_coding_type4"/>) is backed by
a new IANA registry, the 'Flexible Files Version 2 Erasure Coding
Type Registry'.  I.e., instead of defining a new layout type for
each erasure coding scheme, this document defines a new erasure
coding type.  Except for FFV2_CODING_MIRRORED, each of the types is
expected to employ the new operations in this document.</t>
        <t>FFV2_CODING_MIRRORED offers replication of data rather than
integrity protection of data.  As such, it does not need operations
like CHUNK_WRITE (see <xref target="sec-CHUNK_WRITE"/>).</t>
        <section anchor="encoding-type-interoperability">
          <name>Encoding Type Interoperability</name>
          <t>The data servers do not interpret erasure-coded data — they store and
return opaque chunks.  The NFS wire protocol likewise does not depend
on the encoding mathematics.  However, a client that writes data using
one encoding type <bcp14>MUST</bcp14> be able to read it back, and a different
client implementation <bcp14>MUST</bcp14> be able to read data written by the first
client if both claim to support the same encoding type.</t>
          <t>This interoperability requirement means that each registered
encoding type <bcp14>MUST</bcp14> fully specify the encoding and decoding
mathematics such that two independent implementations produce
byte-identical encoded output for the same input.  The specification
of a new encoding type <bcp14>MUST</bcp14> include one of the following:</t>
          <ol spacing="normal" type="1"><li>
              <t>A complete mathematical specification of the encoding and decoding
algorithms, including all parameters (e.g., field polynomial,
matrix construction, element size) sufficient for an independent
implementation to produce interoperable results.</t>
            </li>
            <li>
              <t>A reference to a published patent or pending patent application
that contains the algorithm specification.  Implementors can then
evaluate the licensing terms and decide whether to support the
encoding type.</t>
            </li>
            <li>
              <t>A declaration that the encoding type is a proprietary
implementation.  In this case, the encoding type name <bcp14>SHOULD</bcp14>
include an organizational prefix (e.g.,
FFV2_ENCODING_ACME_FOOBAR) to signal that interoperability is
limited to implementations licensed by that organization.</t>
            </li>
          </ol>
          <t>Option 1 is <bcp14>RECOMMENDED</bcp14> for encoding types intended for broad
interoperability.  Options 2 and 3 allow vendors to register encoding
types for use within their own ecosystems while preserving the
encoding type namespace.</t>
          <t>The rationale for this requirement is that erasure coding moves
computation from the server to the client.  If the client cannot
determine how data was encoded, it cannot decode it.  Unlike layout
types (where the server controls the storage format), encoding types
require client-side agreement on the mathematics.</t>
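          <t>To illustrate the byte-identical requirement with a deliberately
trivial scheme, consider single-parity XOR coding (k data shards plus
one parity shard).  Because the mathematics is fully specified, any
two implementations produce the same parity bytes and can decode each
other's output; registered coding types need the same determinism,
just with more sophisticated mathematics.  This sketch is not itself
a registered coding type.</t>
          <figure>
            <name>Byte-identical encoding: single-parity XOR sketch</name>
            <sourcecode type="python"><![CDATA[
def xor_encode(shards):
    """Bytewise XOR of equal-sized shards.

    Used as encode (k data shards -> 1 parity shard) and, because XOR
    is its own inverse, as recovery of any single missing shard from
    the k surviving shards (data plus parity).
    """
    size = len(shards[0])
    assert all(len(s) == size for s in shards)
    out = bytearray(size)
    for shard in shards:
        for i, b in enumerate(shard):
            out[i] ^= b
    return bytes(out)
]]></sourcecode>
          </figure>
          <t>For example, with data shards d0, d1, d2 and parity
p = d0 XOR d1 XOR d2, the loss of d1 is repaired by computing
d0 XOR d2 XOR p.</t>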
        </section>
      </section>
      <section anchor="sec-ffv2_layout">
        <name>ffv2_layout4</name>
        <section anchor="sec-ffv2_flags4">
          <name>ffv2_flags4</name>
          <figure anchor="fig-ffv2_flags4">
            <name>The ffv2_flags4</name>
            <sourcecode type="xdr"><![CDATA[
   /// const FFV2_FLAGS_NO_LAYOUTCOMMIT   = FF_FLAGS_NO_LAYOUTCOMMIT;
   /// const FFV2_FLAGS_NO_IO_THRU_MDS    = FF_FLAGS_NO_IO_THRU_MDS;
   /// const FFV2_FLAGS_NO_READ_IO        = FF_FLAGS_NO_READ_IO;
   /// const FFV2_FLAGS_WRITE_ONE_MIRROR  = FF_FLAGS_WRITE_ONE_MIRROR;
   /// const FFV2_FLAGS_ONLY_ONE_WRITER   = 0x00000010;
   ///
   /// typedef uint32_t            ffv2_flags4;
]]></sourcecode>
          </figure>
          <t>The ffv2_flags4 in <xref target="fig-ffv2_flags4"/> is a bitmap that allows the
metadata server to inform the client of particular conditions that
may result from more or less tight coupling of the storage devices.</t>
          <dl>
            <dt>FFV2_FLAGS_NO_LAYOUTCOMMIT:</dt>
            <dd>
              <t>can be set to indicate that the client is not required to send
LAYOUTCOMMIT to the metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_IO_THRU_MDS:</dt>
            <dd>
              <t>can be set to indicate that the client should not send I/O
operations to the metadata server.  That is, even if the client
could determine that there was a network disconnect to a storage
device, the client should not try to proxy the I/O through the
metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_READ_IO:</dt>
            <dd>
              <t>can be set to indicate that the client should not send READ
requests with the layouts of iomode LAYOUTIOMODE4_RW.  Instead, it
should request a layout of iomode LAYOUTIOMODE4_READ from the
metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_WRITE_ONE_MIRROR:</dt>
            <dd>
              <t>can be set to indicate that the client only needs to update one
of the mirrors (see <xref target="sec-CSM"/>).</t>
            </dd>
            <dt>FFV2_FLAGS_ONLY_ONE_WRITER:</dt>
            <dd>
              <t>can be set to indicate that the client only needs to use a
CHUNK_WRITE to update the chunks in the data file, i.e., to retain
the ability to roll back in case of a write hole caused by
overwriting.  If this flag is not set, then the client <bcp14>MUST</bcp14> write
chunks with CHUNK_WRITE with the cwa_guard set in order to prevent
collisions across the data servers.</t>
            </dd>
          </dl>
        </section>
      </section>
      <section anchor="ffv2fileinfo4">
        <name>ffv2_file_info4</name>
        <figure anchor="fig-ffv2_file_info4">
          <name>The ffv2_file_info4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_file_info4 {
   ///     stateid4                fffi_stateid;
   ///     nfs_fh4                 fffi_fh_vers;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_file_info4 is a new structure to help with the stateid
issue discussed in Section 5.1 of <xref target="RFC8435"/>.  In version 1
of the flexible file layout type, the singleton ffds_stateid was
combined with the ffds_fh_vers array; i.e., all of the NFSv4
filehandles shared a single stateid.  In <xref target="fig-ffv2_file_info4"/>, each NFSv4
filehandle has a one-to-one correspondence with a stateid.</t>
      </section>
      <section anchor="ffv2dsflags4">
        <name>ffv2_ds_flags4</name>
        <figure anchor="fig-ffv2_ds_flags4">
          <name>The ffv2_ds_flags4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const FFV2_DS_FLAGS_ACTIVE        = 0x00000001;
   /// const FFV2_DS_FLAGS_SPARE         = 0x00000002;
   /// const FFV2_DS_FLAGS_PARITY        = 0x00000004;
   /// const FFV2_DS_FLAGS_REPAIR        = 0x00000008;
   /// typedef uint32_t            ffv2_ds_flags4;
]]></sourcecode>
        </figure>
        <t>The ffv2_ds_flags4 (in <xref target="fig-ffv2_ds_flags4"/>) flags detail the
state of the data servers.  Erasure coding algorithms come in both
systematic and non-systematic variants.  In the systematic approach,
the integrity bits are placed within the resulting transformed
chunks.  Such an implementation would typically see FFV2_DS_FLAGS_ACTIVE
and FFV2_DS_FLAGS_SPARE data servers.  The FFV2_DS_FLAGS_SPARE ones
allow the client to repair a payload without engaging the metadata
server.  I.e., if one of the FFV2_DS_FLAGS_ACTIVE data servers did
not respond to a CHUNK_WRITE, the client could fail the chunk over
to a FFV2_DS_FLAGS_SPARE data server.</t>
        <t>With the Non-Systematic approach, the data and integrity live on
different data servers.  Such an implementation would typically see
FFV2_DS_FLAGS_ACTIVE and FFV2_DS_FLAGS_PARITY data servers.  If the
implementation wanted to allow for local repair, it would also use
FFV2_DS_FLAGS_SPARE.</t>
        <t>The FFV2_DS_FLAGS_REPAIR flag can be used by the metadata server
to inform the client that the indicated data server is a replacement
data server as far as existing data is concerned.  <cref source="Tom">Fill in</cref></t>
        <t>See <xref target="Plank97"/> for further reference to storage layouts for coding.</t>
      </section>
      <section anchor="ffv2dataserver4">
        <name>ffv2_data_server4</name>
        <figure anchor="fig-ffv2_data_server4">
          <name>The ffv2_data_server4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_server4 {
   ///     deviceid4               ffv2ds_deviceid;
   ///     uint32_t                ffv2ds_efficiency;
   ///     ffv2_file_info4         ffv2ds_file_info<>;
   ///     fattr4_owner            ffv2ds_user;
   ///     fattr4_owner_group      ffv2ds_group;
   ///     ffv2_ds_flags4          ffv2ds_flags;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_server4 (in <xref target="fig-ffv2_data_server4"/>) describes a data
file and how to access it via the different NFS protocols.</t>
      </section>
      <section anchor="ffv2codingtypedata4">
        <name>ffv2_coding_type_data4</name>
        <figure anchor="fig-ffv2_coding_type_data4">
          <name>The ffv2_coding_type_data4</name>
          <sourcecode type="xdr"><![CDATA[
   /// union ffv2_coding_type_data4 switch
   ///         (ffv2_coding_type4 fctd_coding) {
   ///     case FFV2_CODING_MIRRORED:
   ///         ffv2_data_protection4   fctd_protection;
   ///     default:
   ///         ffv2_data_protection4   fctd_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type_data4 (in <xref target="fig-ffv2_coding_type_data4"/>) describes
the data protection geometry for the layout.  All coding types carry an
ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>) specifying the
number of data and parity shards.  The coding type enum determines how
the shards are encoded; the protection structure determines how many
shards there are.</t>
        <t>Although the FFV2_CODING_MIRRORED case and the default case currently
carry the same type, the union form is intentional.  Future revisions
of this specification may assign distinct arm types to specific coding
types; using a union now avoids an incompatible change to the XDR at
that time.</t>
        <t>For FFV2_CODING_MIRRORED, fdp_data is 1 and fdp_parity is the number
of additional copies (e.g., fdp_parity=2 for 3-way mirroring).
Erasure coding types registered in companion documents (e.g.,
Reed-Solomon Vandermonde, Mojette systematic) use fdp_data &gt;= 2 and
fdp_parity &gt;= 1.</t>
        <figure anchor="fig-ffv2_stripes4">
          <name>The stripes v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_striping {
   ///     FFV2_STRIPING_NONE = 0,
   ///     FFV2_STRIPING_SPARSE = 1,
   ///     FFV2_STRIPING_DENSE = 2
   /// };
   ///
   /// struct ffv2_stripes4 {
   ///         ffv2_data_server4       ffs_data_servers<>;
   /// };
]]></sourcecode>
        </figure>
        <t>Each stripe contains a set of data servers in ffs_data_servers.
If the stripe is part of an ffv2_coding_type_data4 of
FFV2_CODING_MIRRORED, then the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.</t>
      </section>
      <section anchor="ffv2key4">
        <name>ffv2_key4</name>
        <figure anchor="fig-ffv2_key4">
          <name>The ffv2_key4</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t ffv2_key4;
]]></sourcecode>
        </figure>
        <t>The ffv2_key4 is an opaque 64-bit identifier used to associate a
mirror instance with its backing storage key.  The value is assigned
by the metadata server and is opaque to the client.</t>
      </section>
      <section anchor="sec-ffv2-mirror4">
        <name>ffv2_mirror4</name>
        <figure anchor="fig-ffv2_mirror4">
          <name>The ffv2_mirror4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_mirror4 {
   ///         ffv2_coding_type_data4  ffm_coding_type_data;
   ///         ffv2_key4               ffm_key;
   ///         ffv2_striping           ffm_striping;
   ///         uint32_t                ffm_striping_unit_size; // The minimum stripe unit size is 64 bytes.
   ///         uint32_t                ffm_client_id;
   ///         ffv2_stripes4           ffm_stripes<>; // Length of this array is the stripe count
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_mirror4 (in <xref target="fig-ffv2_mirror4"/>) describes the Flexible
File Layout Version 2 specific fields.  The ffm_client_id tells the
client which ID to use when interacting with the data servers.</t>
        <t>The ffm_coding_type_data indicates which encoding type is used
by the mirror.</t>
        <t>The ffm_striping indicates which striping method is used by the
mirror.</t>
        <t>The ffm_striping_unit_size is the stripe unit size used
by the mirror.  If the value of ffm_striping is FFV2_STRIPING_NONE,
then the value of ffm_striping_unit_size <bcp14>MUST</bcp14> be 1.</t>
        <t>The ffm_stripes field is the array of stripes for the mirror.  If
there is no striping or the ffm_coding_type_data is FFV2_CODING_MIRRORED,
then the length of ffm_stripes <bcp14>MUST</bcp14> be 1.</t>
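        <t>As a non-normative illustration, conventional round-robin striping
would map a file offset to an entry of ffm_stripes as follows.  The
round-robin placement is an assumption for illustration only; the
dense and sparse variants differ in how the offset within the data
file is then computed, which is not shown.</t>
        <figure>
          <name>Sketch of offset-to-stripe mapping</name>
          <sourcecode type="python"><![CDATA[
def stripe_unit_for_offset(offset, unit_size, stripe_count):
    """Map a file offset to (index into ffm_stripes, stripe unit
    number), assuming round-robin striping with unit size
    ffm_striping_unit_size."""
    unit = offset // unit_size    # which stripe unit overall
    stripe = unit % stripe_count  # which entry of ffm_stripes
    return stripe, unit
]]></sourcecode>
        </figure>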
        <t><cref source="Tom">Nuke ffm_client_id?</cref></t>
      </section>
      <section anchor="ffv2layout4">
        <name>ffv2_layout4</name>
        <figure anchor="fig-ffv2_layout4">
          <name>The ffv2_layout4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layout4 {
   ///     ffv2_mirror4            ffl_mirrors<>;
   ///     ffv2_flags4             ffl_flags;
   ///     uint32_t                ffl_stats_collect_hint;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layout4 (in <xref target="fig-ffv2_layout4"/>) describes the Flexible
File Layout Version 2.</t>
        <t>The ffl_mirrors field is the array of mirrored storage devices that
provide the storage for the current stripe; see <xref target="fig-parallel-fileystem"/>.</t>
        <t>The ffl_stats_collect_hint field provides a hint to the client on
how often the server wants it to report LAYOUTSTATS for a file.
The time is in seconds.</t>
        <figure anchor="fig-parallel-fileystem">
          <name>The Relationship between MDS and DSes</name>
          <artwork><![CDATA[
                +-----------+
                |           |
                |           |
                |   File    |
                |           |
                |           |
                +-----+-----+
                      |
     +-------------+-----+----------------+
     |                   |                |
+----+-----+       +-----+----+       +---+----------+
| Mirror 1 |       | Mirror 2 |       | Mirror 3     |
| MIRRORED |       | MIRRORED |       | REED_SOLOMON |
+----+-----+       +-----+----+       +---+----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
|+-----------+     | Stripe 1  |      |+-----------+
+| Stripe N  |     +-----------+      +| Stripe N  |
 +-----------+           |             +-----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
| Storage   |      | Storage   |      |+-----------+
| Device    |      | Device    |      ||+-----------+
+-----------+      +-----------+      +||  Storage  |
                                       +|  Devices  |
                                        +-----------+
]]></artwork>
        </figure>
        <t>As shown in <xref target="fig-parallel-fileystem"/>, if the ffm_coding_type_data
is FFV2_CODING_MIRRORED, then each of the stripes <bcp14>MUST</bcp14>
have exactly one storage device, i.e., the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.  The other encoding types can have any number of
storage devices.</t>
        <t>The abstraction here is that for FFV2_CODING_MIRRORED, each
stripe describes exactly one data server, while for all other
encoding types, each of the stripes describes a set of
data servers to which the chunks are distributed.  Further,
the payload length can differ per stripe.</t>
      </section>
      <section anchor="ffv2dataprotection4">
        <name>ffv2_data_protection4</name>
        <figure anchor="fig-ffv2_data_protection4">
          <name>The ffv2_data_protection4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_protection4 {
   ///     uint32_t fdp_data;    /* data shards (k) */
   ///     uint32_t fdp_parity;  /* parity/redundancy shards (m) */
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_protection4 (in <xref target="fig-ffv2_data_protection4"/>) describes
the data protection geometry as a pair of counts: the number of data
shards (fdp_data, also known as k) and the number of parity or
redundancy shards (fdp_parity, also known as m).  This structure is
used in both layout hints and layout responses, and applies
uniformly to all coding types:</t>
        <table anchor="fig-protection-examples">
          <name>Example data protection configurations</name>
          <thead>
            <tr>
              <th align="left">Protection Mode</th>
              <th align="left">fdp_data</th>
              <th align="left">fdp_parity</th>
              <th align="left">Total DSes</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Mirroring (3-way)</td>
              <td align="left">1</td>
              <td align="left">2</td>
              <td align="left">3</td>
              <td align="left">3 copies, no encoding</td>
            </tr>
            <tr>
              <td align="left">Striping (6-way)</td>
              <td align="left">6</td>
              <td align="left">0</td>
              <td align="left">6</td>
              <td align="left">Parallel I/O, no redundancy</td>
            </tr>
            <tr>
              <td align="left">RS Vandermonde 4+2</td>
              <td align="left">4</td>
              <td align="left">2</td>
              <td align="left">6</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
            <tr>
              <td align="left">Mojette-sys 8+2</td>
              <td align="left">8</td>
              <td align="left">2</td>
              <td align="left">10</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
          </tbody>
        </table>
        <t>By expressing all protection modes as (fdp_data, fdp_parity) pairs,
a single structure serves mirroring, striping, and all erasure
coding types.  The coding type (<xref target="fig-ffv2_coding_type4"/>) determines
<em>how</em> the shards are encoded; the protection structure determines
<em>how many</em> shards there are.</t>
        <t>The total number of data servers required is fdp_data + fdp_parity.
The storage overhead is fdp_parity / fdp_data (e.g., 50% for 4+2,
25% for 8+2).</t>
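        <t>The geometry arithmetic above can be illustrated with a short,
non-normative Python sketch; the function names are illustrative,
while the field semantics mirror ffv2_data_protection4:</t>
        <figure>
          <name>Sketch: Protection Geometry Arithmetic</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch of the arithmetic above
# (fdp_data = k data shards, fdp_parity = m parity shards).

def total_data_servers(fdp_data, fdp_parity):
    return fdp_data + fdp_parity

def storage_overhead(fdp_data, fdp_parity):
    return fdp_parity / fdp_data

# RS Vandermonde 4+2: 6 data servers, 50% overhead.
assert total_data_servers(4, 2) == 6
assert storage_overhead(4, 2) == 0.5
# Mojette-sys 8+2: 10 data servers, 25% overhead.
assert total_data_servers(8, 2) == 10
assert storage_overhead(8, 2) == 0.25
```
]]></sourcecode>
        </figure>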
      </section>
      <section anchor="sec-ffv2-layouthint">
        <name>ffv2_layouthint4</name>
        <figure anchor="fig-ffv2_layouthint4">
          <name>The ffv2_layouthint4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layouthint4 {
   ///     ffv2_coding_type4       fflh_supported_types<>;
   ///     ffv2_data_protection4   fflh_preferred_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layouthint4 (in <xref target="fig-ffv2_layouthint4"/>) describes the
layout_hint (see Section 5.12.4 of <xref target="RFC8881"/>) that the client can
provide to the metadata server.</t>
        <t>The client provides two hints:</t>
        <dl>
          <dt>fflh_supported_types</dt>
          <dd>
            <t>An ordered list of coding types the client supports,
with the most preferred type first.  The server <bcp14>SHOULD</bcp14> select a type
from this list but <bcp14>MAY</bcp14> choose any type it supports.  If the server
does not support any of the listed types, it returns
NFS4ERR_CODING_NOT_SUPPORTED, and the client can retry
with a different list to discover the overlapping set.</t>
          </dd>
          <dt>fflh_preferred_protection</dt>
          <dd>
            <t>The client's preferred data protection geometry as a
(fdp_data, fdp_parity) pair.  The server <bcp14>SHOULD</bcp14> honor this hint but
<bcp14>MAY</bcp14> override it based on server-side policy.  A server that manages
data protection via administrative policy (e.g., per-directory or
per-export objectives) will typically ignore this hint and return the
geometry dictated by policy.</t>
          </dd>
        </dl>
        <t>For example, a client that prefers Mojette systematic with 8+2
protection would send:</t>
        <artwork><![CDATA[
fflh_supported_types = { FFV2_CODING_MIRRORED,
                         FFV2_ENCODING_MOJETTE_SYSTEMATIC,
                         FFV2_ENCODING_RS_VANDERMONDE }
fflh_preferred_protection = { fdp_data = 8, fdp_parity = 2 }
]]></artwork>
        <t>A server with a policy of RS 4+2 for this directory would ignore
both hints and return a layout with FFV2_ENCODING_RS_VANDERMONDE
and (fdp_data=4, fdp_parity=2).  A server without erasure coding
might return FFV2_CODING_MIRRORED with (fdp_data=1, fdp_parity=2)
for 3-way mirroring.</t>
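        <t>One possible server-side selection of the coding type can be
sketched as follows.  This is a non-normative Python illustration of
the negotiation described above; the string identifiers stand in for
the XDR enum values, and the first-match policy is one way to honor
the client's ordered preference:</t>
        <figure>
          <name>Sketch: Coding Type Negotiation</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: the server SHOULD pick a type from the
# client's ordered fflh_supported_types (most preferred first) and
# returns NFS4ERR_CODING_NOT_SUPPORTED when the sets do not overlap.

def select_coding_type(fflh_supported_types, server_supported):
    for coding_type in fflh_supported_types:
        if coding_type in server_supported:
            return coding_type
    raise LookupError("NFS4ERR_CODING_NOT_SUPPORTED")

client_hint = ["FFV2_CODING_MIRRORED",
               "FFV2_ENCODING_MOJETTE_SYSTEMATIC"]
server_types = {"FFV2_ENCODING_MOJETTE_SYSTEMATIC",
                "FFV2_ENCODING_RS_VANDERMONDE"}
assert select_coding_type(client_hint, server_types) == \
    "FFV2_ENCODING_MOJETTE_SYSTEMATIC"
```
]]></sourcecode>
        </figure>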
        <t>Note: In <xref target="fig-ffv2_layout4"/> ffv2_coding_type_data4 is an enumerated
union with the payload of each arm being defined by the protection
type. ffm_client_id tells the client which id to use when interacting
with the data servers.</t>
        <t>The ffv2_layout4 structure (see <xref target="fig-ffv2_layout4"/>) specifies a layout
in that portion of the data file described in the current layout
segment.  It is either a single instance or a set of mirrored copies
of that portion of the data file.  When mirroring is in effect, it
protects against loss of data in layout segments.</t>
        <t>While not explicitly shown in <xref target="fig-ffv2_layout4"/>, each layout4
element returned in the logr_layout array of LAYOUTGET4res (see
Section 18.43.2 of <xref target="RFC8881"/>) describes a layout segment.  Hence,
each ffv2_layout4 also describes a layout segment.  It is possible
that the file is concatenated from more than one layout segment.
Each layout segment <bcp14>MAY</bcp14> represent different striping parameters.</t>
        <t>The ffm_striping_unit_size field (inside each ffv2_mirror4) is the
stripe unit size in use for that mirror.  The number of stripes is
given by the number of elements in ffs_data_servers within each
ffv2_stripes4.  If the number of stripes is one, then the value for
ffm_striping_unit_size <bcp14>MUST</bcp14> default to zero.  The only supported
mapping scheme is sparse and is detailed in <xref target="sec-striping"/>.  Note
that there is an assumption here that both the stripe unit size and
the number of stripes are the same across all mirrors.</t>
        <t>The ffl_mirrors field represents an array of state information for
each mirrored copy of the current layout segment.  Each element is
described by a ffv2_mirror4 type.</t>
        <t>ffv2ds_deviceid provides the deviceid of the storage device holding
the data file.</t>
        <t>ffv2ds_file_info is an array of ffv2_file_info4 structures, each
pairing a filehandle (fffi_fh_vers) with a stateid (fffi_stateid).
There <bcp14>MUST</bcp14> be exactly as many elements in ffv2ds_file_info as there
are in ffda_versions.  Each element of the array corresponds to a
particular combination of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled provided for the device.  The array allows for
server implementations that have different filehandles and stateids
for different combinations of version, minor version, and coupling
strength.  See <xref target="sec-version-errors"/> for how to handle versioning
issues between the client and storage devices.</t>
        <t>For tight coupling, fffi_stateid provides the stateid to be used
by the client to access the file.  For loose coupling and an NFSv4
storage device, the client will have to use an anonymous stateid
to perform I/O on the storage device.  With no control protocol,
the metadata server stateid cannot be used to provide a global
stateid model.  Thus, the server <bcp14>MUST</bcp14> set the fffi_stateid to be
the anonymous stateid.</t>
        <t>This specification of the fffi_stateid constrains both coupling
models for NFSv4.x storage protocols:</t>
        <dl>
          <dt>loosely coupled</dt>
          <dd>
            <t>the stateid has to be an anonymous stateid</t>
          </dd>
          <dt>tightly coupled</dt>
          <dd>
            <t>the stateid has to be a global stateid</t>
          </dd>
        </dl>
        <t>By pairing each fffi_fh_vers with its own fffi_stateid inside
ffv2_file_info4, the v2 layout addresses the v1 limitation where a
singleton stateid was shared across all filehandles.  Each open file
on the storage device can now have its own stateid, eliminating the
ambiguity present in the v1 structure.</t>
        <t>For loosely coupled storage devices, ffv2ds_user and ffv2ds_group
provide the synthetic user and group to be used in the RPC credentials
that the client presents to the storage device to access the data
files.  For tightly coupled storage devices, the user and group on
the storage device will be the same as on the metadata server; that
is, if ffdv_tightly_coupled (see <xref target="sec-ff_device_addr4"/>) is set,
then the client <bcp14>MUST</bcp14> ignore both ffv2ds_user and ffv2ds_group.</t>
        <t>The allowed values for both ffv2ds_user and ffv2ds_group are specified
as owner and owner_group, respectively, in Section 5.9 of <xref target="RFC8881"/>.
For NFSv3 compatibility, user and group strings that consist of
decimal numeric values with no leading zeros can be given a special
interpretation by clients and servers that choose to provide such
support.  The receiver may treat such a user or group string as
representing the same user as would be represented by an NFSv3 uid
or gid having the corresponding numeric value.  Note that if using
Kerberos for security, the expectation is that these values will
be a name@domain string.</t>
        <t>ffv2ds_efficiency describes the metadata server's evaluation as to
the effectiveness of each mirror.  Note that this is per layout and
not per device as the metric may change due to perceived load,
availability to the metadata server, etc.  Higher values denote
higher perceived utility.  The way the client can select the best
mirror to access is discussed in <xref target="sec-select-mirror"/>.</t>
        <section anchor="error-codes-from-layoutget">
          <name>Error Codes from LAYOUTGET</name>
          <t><xref target="RFC8881"/> provides little guidance as to how the client is to
proceed with a LAYOUTGET that returns an error of
NFS4ERR_LAYOUTTRYLATER, NFS4ERR_LAYOUTUNAVAILABLE, or NFS4ERR_DELAY.
Within the context of this document:</t>
          <dl>
            <dt>NFS4ERR_LAYOUTUNAVAILABLE</dt>
            <dd>
              <t>there is no layout available and the I/O is to go to the metadata
server.  Note that it is possible to have had a layout before a
recall and not after.</t>
            </dd>
            <dt>NFS4ERR_LAYOUTTRYLATER</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should continue
with I/O to the storage devices.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should not
continue with I/O to the storage devices.</t>
            </dd>
          </dl>
        </section>
        <section anchor="client-interactions-with-ffflagsnoiothrumds">
          <name>Client Interactions with FF_FLAGS_NO_IO_THRU_MDS</name>
          <t>Even if the metadata server provides the FF_FLAGS_NO_IO_THRU_MDS
flag, the client can still perform I/O to the metadata server.  The
flag functions as a hint: it indicates to the client that
the metadata server prefers to separate the metadata I/O from the
data I/O, most likely for performance reasons.</t>
        </section>
      </section>
      <section anchor="layoutcommit-1">
        <name>LAYOUTCOMMIT</name>
        <t>The flexible file layout does not use lou_body inside the
loca_layoutupdate argument to LAYOUTCOMMIT.  If lou_type is
LAYOUT4_FLEX_FILES, the lou_body field <bcp14>MUST</bcp14> have a zero length (see
Section 18.42.1 of <xref target="RFC8881"/>).</t>
      </section>
      <section anchor="interactions-between-devices-and-layouts">
        <name>Interactions between Devices and Layouts</name>
        <t>The file layout type is defined such that the relationship between
multipathing and filehandles can result in either 0, 1, or N
filehandles (see Section 13.3 of <xref target="RFC8881"/>).  Rationales for
this include clustered servers that share the same filehandle and
multiple read-only copies of the file on the same storage device.
In the flexible file layout type, while there is an array of
filehandles, they are independent of the multipathing being used.
If the metadata server wants to provide multiple read-only copies
of the same file on the same storage device, then it should provide
multiple mirrored instances, each with a different ff_device_addr4.
The client can then determine that, since each of the fffi_fh_vers
values within ffv2ds_file_info are different, there are multiple
copies of the file for the current layout segment available.</t>
      </section>
      <section anchor="sec-version-errors">
        <name>Handling Version Errors</name>
        <t>When the metadata server provides the ffda_versions array in the
ff_device_addr4 (see <xref target="sec-ff_device_addr4"/>), the client is able
to determine whether or not it can access a storage device with any
of the supplied combinations of ffdv_version, ffdv_minorversion,
and ffdv_tightly_coupled.  However, due to the limitations of
reporting errors in GETDEVICEINFO (see Section 18.40 in <xref target="RFC8881"/>),
the client is not able to specify which specific device it cannot
communicate with over one of the provided ffdv_version and
ffdv_minorversion combinations.  Using ff_ioerr4 (<xref target="sec-ff_ioerr4"/>)
inside either the LAYOUTRETURN (see Section 18.44 of <xref target="RFC8881"/>)
or the LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/> and <xref target="sec-LAYOUTERROR"/>
of this document), the client can isolate the problematic storage
device.</t>
        <t>The error code to return for LAYOUTRETURN and/or LAYOUTERROR is
NFS4ERR_MINOR_VERS_MISMATCH.  It does not matter whether the mismatch
is a major version (e.g., the client can use NFSv3 but not NFSv4) or a
minor version (e.g., the client can use NFSv4.1 but not NFSv4.2); the
error indicates that, for all the supplied combinations of ffdv_version
and ffdv_minorversion, the client cannot communicate with the storage
device.  The client can retry the GETDEVICEINFO to see if the
metadata server can provide a different combination, or it can fall
back to doing the I/O through the metadata server.</t>
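        <t>The client-side check described above can be sketched as follows.
This is a non-normative Python illustration in which tuples stand in
for the XDR (ffdv_version, ffdv_minorversion) pairs:</t>
        <figure>
          <name>Sketch: Version Compatibility Check</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: scan ffda_versions for a combination the
# client can speak; when none match, the client reports
# NFS4ERR_MINOR_VERS_MISMATCH via LAYOUTRETURN or LAYOUTERROR.

def usable_version(ffda_versions, client_speaks):
    """ffda_versions: iterable of (ffdv_version, ffdv_minorversion)."""
    for combo in ffda_versions:
        if combo in client_speaks:
            return combo
    return None  # caller reports NFS4ERR_MINOR_VERS_MISMATCH

device = [(4, 2), (4, 1)]
assert usable_version(device, {(4, 1), (3, 0)}) == (4, 1)
assert usable_version(device, {(3, 0)}) is None
```
]]></sourcecode>
        </figure>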
      </section>
    </section>
    <section anchor="sec-striping">
      <name>Striping via Sparse Mapping</name>
      <t>While other layout types support both dense and sparse mapping of
logical offsets to physical offsets within a file (see, for example,
Section 13.4 of <xref target="RFC8881"/>), the flexible file layout type only
supports a sparse mapping.</t>
      <t>With sparse mappings, the logical offset within a file (L) is also
the physical offset on the storage device.  As detailed in Section
13.4.4 of <xref target="RFC8881"/>, this results in holes across each storage
device that does not contain the current stripe index.</t>
      <figure anchor="fig-striping">
        <name>Stripe Mapping Math</name>
        <artwork><![CDATA[
L: logical offset within the file

W: stripe width
    W = number of elements in ffs_data_servers

S: number of bytes in a stripe
    S = W * ffm_striping_unit_size

N: stripe number
    N = L / S
]]></artwork>
      </figure>
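      <t>The mapping math in the figure above can be worked through with a
short, non-normative Python sketch.  With a sparse mapping, the
physical offset on the storage device equals the logical offset L;
the remaining lines locate the stripe and the element of
ffs_data_servers that holds it:</t>
      <figure>
        <name>Sketch: Sparse Stripe Mapping</name>
        <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch of the stripe mapping math above.

def stripe_geometry(L, stripe_unit, W):
    S = W * stripe_unit           # S: bytes in a stripe
    N = L // S                    # N: stripe number
    idx = (L % S) // stripe_unit  # index into ffs_data_servers
    return N, idx, L              # physical offset == L (sparse)

# W = 4 data servers, 64 KiB stripe unit: byte 300000 lands in
# stripe 1, on the data server at index 0, at physical offset 300000.
N, idx, phys = stripe_geometry(300000, 65536, 4)
assert (N, idx, phys) == (1, 0, 300000)
```
]]></sourcecode>
      </figure>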
    </section>
    <section anchor="recovering-from-client-io-errors">
      <name>Recovering from Client I/O Errors</name>
      <t>The pNFS client may encounter errors when directly accessing the
storage devices.  However, it is the responsibility of the metadata
server to recover from the I/O errors.  When the LAYOUT4_FLEX_FILES
layout type is used, the client <bcp14>MUST</bcp14> report the I/O errors to the
server at LAYOUTRETURN time using the ff_ioerr4 structure (see
<xref target="sec-ff_ioerr4"/>).</t>
      <t>The metadata server analyzes the error and determines the required
recovery operations, such as recovering media failures or reconstructing
missing data files.</t>
      <t>The metadata server <bcp14>MUST</bcp14> recall any outstanding layouts to allow
it exclusive write access to the stripes being recovered and to
prevent other clients from hitting the same error condition.  In
these cases, the server <bcp14>MUST</bcp14> complete recovery before handing out
any new layouts to the affected byte ranges.</t>
      <t>Although the client implementation has the option to propagate a
corresponding error to the application that initiated the I/O
operation and drop any unwritten data, the client should attempt
to retry the original I/O operation by either requesting a new
layout or sending the I/O via regular NFSv4.1+ READ or WRITE
operations to the metadata server.  The client <bcp14>SHOULD</bcp14> attempt to
retrieve a new layout and retry the I/O operation using the storage
device first and only retry the I/O operation via the metadata
server if the error persists.</t>
    </section>
    <section anchor="client-side-protection-modes">
      <name>Client-Side Protection Modes</name>
      <section anchor="sec-CSM">
        <name>Client-Side Mirroring</name>
        <t>The flexible file layout type has a simple model in place for the
mirroring of the file data constrained by a layout segment.  There
is no assumption that each copy of the mirror is stored identically
on the storage devices.  For example, one device might employ
compression or deduplication on the data.  However, the over-the-wire
transfer of the file contents <bcp14>MUST</bcp14> appear identical.  Note that this
is a constraint of the selected XDR representation in which each
mirrored copy of the layout segment has the same striping pattern
(see <xref target="fig-parallel-fileystem"/>).</t>
        <t>The metadata server is responsible for determining the number of
mirrored copies and the location of each mirror.  While the client
may provide a hint to how many copies it wants (see <xref target="sec-ffv2-layouthint"/>),
the metadata server can ignore that hint; in any event, the client
has no means to dictate either the storage device (which also means
the coupling and/or protocol levels to access the layout segments)
or the location of said storage device.</t>
        <t>The updating of mirrored layout segments is done via client-side
mirroring.  With this approach, the client is responsible for making
sure modifications are made on all copies of the layout segments
it is informed of via the layout.  If a layout segment is being
resilvered to a storage device, that mirrored copy will not be in
the layout.  Thus, the metadata server <bcp14>MUST</bcp14> update that copy until
it is presented to the client in a layout.  If FF_FLAGS_WRITE_ONE_MIRROR
is set in ffl_flags, the client need only update one of the mirrors
(see <xref target="sec-write-mirrors"/>).  If the client is writing to the layout
segments via the metadata server, then the metadata server <bcp14>MUST</bcp14>
update all copies of the mirror.  As seen in <xref target="sec-mds-resilvering"/>,
during resilvering, the layout is recalled, and the client has
to make modifications via the metadata server.</t>
        <section anchor="sec-select-mirror">
          <name>Selecting a Mirror</name>
          <t>When the metadata server grants a layout to a client, it <bcp14>MAY</bcp14> let
the client know how fast it expects each mirror to be once the
request arrives at the storage devices via the ffv2ds_efficiency
member.  While the algorithms to calculate that value are left to
the metadata server implementations, factors that could contribute
to that calculation include speed of the storage device, physical
memory available to the device, operating system version, current
load, etc.</t>
          <t>However, what should not be involved in that calculation is a
perceived network distance between the client and the storage device.
The client is better situated for making that determination based
on past interaction with the storage device over the different
available network interfaces between the two; that is, the metadata
server might not know about a transient outage between the client
and storage device because it has no presence on the given subnet.</t>
          <t>As such, it is the client that decides which mirror to access for
reading the file.  The requirements for writing to mirrored layout
segments are presented below.</t>
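          <t>One possible client policy following from the above can be
sketched as follows.  This is a non-normative Python illustration:
the scoring rule (rank by ffv2ds_efficiency, break ties with the
client's own measured round-trip time) is an assumption of the
sketch, not mandated by this document:</t>
          <figure>
            <name>Sketch: Client Mirror Selection</name>
            <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: prefer the mirror the metadata server rates
# highest (ffv2ds_efficiency); on a tie, prefer the mirror with the
# lowest RTT as measured by the client, since network distance is the
# client's determination.

def select_mirror(mirrors):
    """mirrors: list of (name, ffv2ds_efficiency, client_rtt_ms)."""
    return max(mirrors, key=lambda m: (m[1], -m[2]))[0]

mirrors = [("ds-a", 80, 12.0), ("ds-b", 95, 30.0), ("ds-c", 95, 3.0)]
# Tie on efficiency between ds-b and ds-c; lower RTT wins.
assert select_mirror(mirrors) == "ds-c"
```
]]></sourcecode>
          </figure>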
        </section>
        <section anchor="sec-write-mirrors">
          <name>Writing to Mirrors</name>
          <section anchor="single-storage-device-updates-mirrors">
            <name>Single Storage Device Updates Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is set, the
client only needs to update one of the copies of the layout segment.
For this case, the storage device <bcp14>MUST</bcp14> ensure that all copies of
the mirror are updated when any one of the mirrors is updated.  If
the storage device gets an error when updating one of the mirrors,
then it <bcp14>MUST</bcp14> inform the client that the original WRITE had an error.
The client then <bcp14>MUST</bcp14> inform the metadata server (see <xref target="sec-write-errors"/>).
The client's responsibility with respect to COMMIT is explained in
<xref target="sec-write-commits"/>.  The client may choose any one of the mirrors
and may use ffv2ds_efficiency as described in <xref target="sec-select-mirror"/>
when making this choice.</t>
          </section>
          <section anchor="client-updates-all-mirrors">
            <name>Client Updates All Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is not set, the
client is responsible for updating all mirrored copies of the layout
segments that it is given in the layout.  A single failed update
is sufficient to fail the entire operation.  If all but one copy
is updated successfully and the last one provides an error, then
the client needs to inform the metadata server about the error.
The client can use either LAYOUTRETURN or LAYOUTERROR to inform the
metadata server that the update failed to that storage device.  If
the client is updating the mirrors serially, then it <bcp14>SHOULD</bcp14> stop
at the first error encountered and report that to the metadata
server.  If the client is updating the mirrors in parallel, then
it <bcp14>SHOULD</bcp14> wait until all storage devices respond so that it can
report all errors encountered during the update.</t>
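            <t>The parallel-update rule above can be sketched as follows.  This
is a non-normative Python illustration in which write_mirror is a
hypothetical per-mirror WRITE and any exception models an I/O error:</t>
            <figure>
              <name>Sketch: Parallel Mirror Update with Error Collection</name>
              <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch: issue the WRITE to every mirror, wait for all
# responses, and gather every failure so the client can report the
# full set to the metadata server.
from concurrent.futures import ThreadPoolExecutor

def update_all_mirrors(mirrors, write_mirror):
    errors = {}
    with ThreadPoolExecutor() as pool:
        futures = {m: pool.submit(write_mirror, m) for m in mirrors}
        for mirror, fut in futures.items():
            try:
                fut.result()          # wait for every storage device
            except Exception as exc:  # collect, do not stop early
                errors[mirror] = exc
    return errors                     # non-empty => report to the MDS

def flaky(mirror):                    # hypothetical WRITE for the sketch
    if mirror == "ds-2":
        raise IOError("NFS4ERR_NXIO")

assert set(update_all_mirrors(["ds-1", "ds-2", "ds-3"], flaky)) == {"ds-2"}
```
]]></sourcecode>
            </figure>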
          </section>
          <section anchor="sec-write-errors">
            <name>Handling Write Errors</name>
            <t>When the client reports a write error to the metadata server, the
metadata server is responsible for determining if it wants to remove
the errant mirror from the layout, if the mirror has recovered from
some transient error, etc.  When the client tries to get a new
layout, the metadata server informs it of the decision by the
contents of the layout.  The client <bcp14>MUST NOT</bcp14> assume that the contents
of the previous layout will match those of the new one.  If it has
updates that were not committed to all mirrors, then it <bcp14>MUST</bcp14> resend
those updates to all mirrors.</t>
            <t>There is no provision in the protocol for the metadata server to
directly determine that the client has or has not recovered from
an error.  For example, if a storage device was network partitioned
from the client and the client reported the error to the metadata
server, then the network partition would be repaired, and all of
the copies would be successfully updated.  There is no mechanism
for the client to report that fact, and the metadata server is
forced to repair the file across the mirror.</t>
            <t>If the client supports NFSv4.2, it can use LAYOUTERROR and LAYOUTRETURN
to provide hints to the metadata server about the recovery efforts.
A LAYOUTERROR on a file is for a non-fatal error.  A subsequent
LAYOUTRETURN without a ff_ioerr4 indicates that the client successfully
replayed the I/O to all mirrors.  Any LAYOUTRETURN with a ff_ioerr4
is an error that the metadata server needs to repair.  The client
<bcp14>MUST</bcp14> be prepared for the LAYOUTERROR to trigger a CB_LAYOUTRECALL
if the metadata server determines it needs to start repairing the
file.</t>
          </section>
          <section anchor="sec-write-commits">
            <name>Handling Write COMMITs</name>
            <t>When stable writes are done to the metadata server or to a single
replica (if allowed by the use of FF_FLAGS_WRITE_ONE_MIRROR), it
is the responsibility of the receiving node to propagate the written
data stably, before replying to the client.</t>
            <t>In the corresponding cases in which unstable writes are done, the
receiving node does not have any such obligation, although it may
choose to asynchronously propagate the updates.  However, once a
COMMIT is replied to, all replicas must reflect the writes that
have been done, and this data must have been committed to stable
storage on all replicas.</t>
            <t>In order to avoid situations in which stale data is read from
replicas to which writes have not been propagated:</t>
            <ul spacing="normal">
              <li>
                <t>A client that has outstanding unstable writes made to a single
node (metadata server or storage device) <bcp14>MUST</bcp14> do all reads from
that same node.</t>
              </li>
              <li>
                <t>When writes are flushed to the server (for example, to implement
close-to-open semantics), a COMMIT must be done by the client
to ensure that up-to-date written data will be available
irrespective of the particular replica read.</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="sec-mds-resilvering">
          <name>Metadata Server Resilvering of the File</name>
          <t>The metadata server may elect to create a new mirror of the layout
segments at any time.  This might be to resilver a copy on a storage
device that was down for servicing, to provide a copy of the layout
segments on storage with different storage performance characteristics,
etc.  As the client will not be aware of the new mirror and the
metadata server will not be aware of updates that the client is
making to the layout segments, the metadata server <bcp14>MUST</bcp14> recall the
writable layout segment(s) that it is resilvering.  If the client
issues a LAYOUTGET for a writable layout segment that is in the
process of being resilvered, then the metadata server can deny that
request with an NFS4ERR_LAYOUTUNAVAILABLE.  The client would then
have to perform the I/O through the metadata server.</t>
        </section>
      </section>
      <section anchor="erasure-coding">
        <name>Erasure Coding</name>
        <t>Erasure Coding takes a data block and transforms it to a payload
to send to the data servers (see <xref target="fig-encoding-data-block"/>).  It
generates a metadata header and transformed block per data server.
The header is metadata information for the transformed block.  From
now on, the metadata is simply referred to as the header and the
transformed block as the chunk.  The payload of a data block is the
set of generated headers and chunks for that data block.</t>
        <t>The guard is a unique identifier generated by the client to describe
the current write transaction (see <xref target="sec-chunk_guard4"/>).  The
intent is to have a unique and non-opaque value for comparison.
The payload_id describes the position within the payload.  Finally,
the crc32 is the 32-bit CRC calculation over the header (with the
crc32 field set to 0) and the chunk.  By combining the two parts of
the payload, integrity is ensured for both parts.</t>
        <t>While the data block might have a length of 4kB, that does not
necessarily mean that the length of the chunk is 4kB.  That length
is determined by the erasure coding type algorithm.  For example,
Reed-Solomon might have 4kB chunks, with data integrity being
provided by parity chunks.  Another example would be the Mojette
Transform, which might have 1kB chunk lengths.</t>
        <t>The payload contains redundancy which will allow the erasure coding
type algorithm to repair chunks in the payload as it is transformed
back to a data block (see <xref target="fig-decoding-db"/>).  A payload is
consistent when all of the contained headers share the same guard.
It has integrity when it is consistent and the combinations of
headers and chunks all pass the crc32 checks.</t>
        <t>The erasure coding algorithm itself might not be sufficient to detect
errors in the chunks.  The crc32 checks will allow the data server
to detect chunks with issues, and then the erasure decoding algorithm
can reconstruct the missing chunk.</t>
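        <t>The integrity check described above can be sketched as follows.
This is a non-normative Python illustration: the crc32 covers the
header (with its crc32 field treated as zero) concatenated with the
chunk.  The fixed-width byte layout used here is an assumption of
the sketch; the normative layout is given by the XDR definitions:</t>
        <figure>
          <name>Sketch: CRC32 over Header and Chunk</name>
          <sourcecode type="python"><![CDATA[
```python
# Non-normative sketch of the crc32 integrity check.  The packed
# header layout is illustrative only.
import struct
import zlib

def header_bytes(gen_id, client_id, payload_id, crc32=0):
    return struct.pack(">IIII", gen_id, client_id, payload_id, crc32)

def compute_crc32(gen_id, client_id, payload_id, chunk):
    # crc32 field is zero while the CRC itself is computed
    return zlib.crc32(header_bytes(gen_id, client_id, payload_id, 0) + chunk)

def verify(gen_id, client_id, payload_id, crc32, chunk):
    return compute_crc32(gen_id, client_id, payload_id, chunk) == crc32

chunk = b"\x00" * 1024
crc = compute_crc32(3, 6, 0, chunk)
assert verify(3, 6, 0, crc, chunk)
# Any single-byte corruption of the chunk is detected.
assert not verify(3, 6, 0, crc, chunk[:-1] + b"\x01")
```
]]></sourcecode>
        </figure>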
        <section anchor="encoding-a-data-block">
          <name>Encoding a Data Block</name>
          <figure anchor="fig-encoding-data-block">
            <name>Encoding a Data Block</name>
            <artwork><![CDATA[
                 +-------------+
                 | data block  |
                 +-------+-----+
                         |
                         |
   +---------------------+-------------------------------+
   |            Erasure Encoding (Transform Forward)     |
   +---+----------------------+---------------------+----+
       |                      |                     |
       |                      |                     |
   +---+------------+     +---+------------+     +--+-------------+
   | HEADER         | ... | HEADER         | ... | HEADER         |
   +----------------+     +----------------+     +----------------+
   | guard:         | ... | guard:         | ... | guard:         |
   |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
   |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
   | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
   | crc32   :      | ... | crc32   :      | ... | crc32   :      |
   +----------------+     +----------------+     +----------------+
   | CHUNK          | ... | CHUNK          | ... | CHUNK          |
   +----------------+     +----------------+     +----------------+
   | data: ....     | ... | data: ....     | ... | data: ....     |
   +----------------+     +----------------+     +----------------+
     Data Server 1          Data Server N          Data Server 6
]]></artwork>
          </figure>
          <t>Each data block of the file resident in the client's cache will
be encoded into N different payloads to be sent to the
data servers as shown in <xref target="fig-encoding-data-block"/>.  As CHUNK_WRITE
(see <xref target="sec-CHUNK_WRITE"/>) can encode multiple write_chunk4 into a
single transaction, a more accurate description of a CHUNK_WRITE
is in <xref target="fig-example-chunk-write-args"/>.</t>
          <figure anchor="fig-example-chunk-write-args">
            <name>Example of CHUNK_WRITE_args</name>
            <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 3     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x32ef89             |
  |         [1]:  0x56fa89             |
  |         [2]:  0x7693af             |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
          </figure>
          <t>This describes a 3-block write of data from an offset of 1 block
into the file.  As each block shares the cwa_owner, it is only presented
once.  That is, the data server is able to construct the header
for the i'th chunk in cwa_chunks from the cwa_payload_id, the
cwa_owner, and the i'th crc32 from cwa_crc32s.  The cwa_chunks
are sent together as a byte stream to increase performance.</t>
          <t>Assuming that there were no issues, <xref target="fig-example-chunk-write-res"/>
illustrates the results.  The payload sequence id is implicit in
the CHUNK_WRITEargs.</t>
          <figure anchor="fig-example-chunk-write-res">
            <name>Example of CHUNK_WRITE_res</name>
            <artwork><![CDATA[
  +-------------------------------+
  | CHUNK_WRITEresok              |
  +-------------------------------+
  | cwr_count: 3                  |
  | cwr_committed: FILE_SYNC4     |
  | cwr_writeverf: 0xf1234abc     |
  | cwr_owners[0]:                |
  |        co_chunk_id: 1         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[1]:                |
  |        co_chunk_id: 2         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[2]:                |
  |        co_chunk_id: 3         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  +-------------------------------+
]]></artwork>
          </figure>
          <section anchor="calculating-the-crc32">
            <name>Calculating the CRC32</name>
            <figure anchor="fig-calc-before">
              <name>CRC32 Before Calculation</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id : 0     |
  | crc32   : 0        |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
        Data Server 1
]]></artwork>
            </figure>
            <t>Assuming the header and payload as in <xref target="fig-calc-before"/>, the crc32
needs to be calculated in order to fill in the corresponding cwa_crc32s
entry.  In this case, the crc32 is calculated over the four header fields,
with the crc32 field itself set to zero, and the chunk data.  In this
example, it is calculated to
be 0x21de8.  The resulting CHUNK_WRITE is shown in <xref target="fig-calc-crc-after"/>.</t>
            <figure anchor="fig-calc-crc-after">
              <name>CRC32 After Calculation</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x21de8              |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
            </figure>
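            <t>As an illustration only, the calculation can be sketched as
follows.  The on-the-wire XDR layout of the header is normative and
defined elsewhere in this document; the fixed-width serialization
below is an assumption made purely for the sketch:</t>
            <sourcecode type="python"><![CDATA[
import struct
import zlib

def chunk_crc32(gen_id, client_id, payload_id, data):
    # Serialize the header with the crc32 field itself set to zero,
    # then append the chunk data.  Four big-endian 32-bit words are
    # an illustrative stand-in for the real XDR encoding.
    header = struct.pack(">IIII", gen_id, client_id, payload_id, 0)
    return zlib.crc32(header + data) & 0xFFFFFFFF

# The writer stores the result in cwa_crc32s[i]; a reader later
# recomputes the same function over the returned chunk and compares
# it against cr_crc.
crc = chunk_crc32(gen_id=7, client_id=6, payload_id=0, data=b"....")
]]></sourcecode>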
          </section>
        </section>
        <section anchor="decoding-a-data-block">
          <name>Decoding a Data Block</name>
          <figure anchor="fig-decoding-db">
            <name>Decoding a Data Block</name>
            <artwork><![CDATA[
    Data Server 1          Data Server N          Data Server 6
  +----------------+     +----------------+     +----------------+
  | HEADER         | ... | HEADER         | ... | HEADER         |
  +----------------+     +----------------+     +----------------+
  | guard:         | ... | guard:         | ... | guard:         |
  |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
  |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
  | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
  | crc32   :      | ... | crc32   :      | ... | crc32   :      |
  +----------------+     +----------------+     +----------------+
  | CHUNK          | ... | CHUNK          | ... | CHUNK          |
  +----------------+     +----------------+     +----------------+
  | data: ....     | ... | data: ....     | ... | data: ....     |
  +---+------------+     +--+-------------+     +-+--------------+
      |                     |                     |
      |                     |                     |
  +---+---------------------+---------------------+-----+
  |            Erasure Decoding (Transform Reverse)     |
  +---------------------+-------------------------------+
                        |
                        |
                +-------+-----+
                | data block  |
                +-------------+
]]></artwork>
          </figure>
          <t>When reading chunks via a CHUNK_READ operation, the client will
decode them into data blocks as shown in <xref target="fig-decoding-db"/>.</t>
          <t>At this time, the client could detect issues in the integrity of
the data.  The handling and repair are out of the scope of this
document and <bcp14>MUST</bcp14> be addressed in the document describing each
erasure coding type.</t>
          <section anchor="checking-the-crc32">
            <name>Checking the CRC32</name>
            <figure anchor="fig-example-chunk-read-crc">
              <name>CRC32 on the Wire</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_READresok                    |
  +------------------------------------+
  | crr_eof: false                     |
  | crr_chunks[0]:                     |
  |        cr_crc: 0x21de8             |
  |        cr_owner:                   |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  |        cr_chunk  :  ......         |
  +------------------------------------+
]]></artwork>
            </figure>
            <t>Assuming the CHUNK_READ results as in <xref target="fig-example-chunk-read-crc"/>,
the crc32 needs to be checked in order to ensure data integrity.
Conceptually, a header and payload can be built as shown in
<xref target="fig-example-crc-checked"/>.  The crc32 is calculated over the four
header fields, with the crc32 field itself set to zero, and the cr_chunk.
In this example, it is calculated to be 0x21de8, which matches cr_crc.
Thus the chunk from this data server has data integrity.</t>
            <figure anchor="fig-example-crc-checked">
              <name>CRC32 Being Checked</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id  : 0    |
  | crc32    : 0       |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
       Data Server 1
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="write-modes">
          <name>Write Modes</name>
          <t>There are two basic writing modes for erasure coding, and they depend
on the metadata server using FFV2_FLAGS_ONLY_ONE_WRITER in the
ffl_flags in the ffv2_layout4 (see <xref target="fig-ffv2_layout4"/>) to inform
the client whether or not it is the only writer to the file.  If
it is the only writer, then CHUNK_WRITE with the cwa_guard not set
can be used to write chunks.  In this scenario, there is no write
contention, but write holes can occur as the client overwrites old
data.  Thus the client does not need guarded writes, but it does
need the ability to roll back writes.  If it is not the only writer,
then CHUNK_WRITE with the cwa_guard set <bcp14>MUST</bcp14> be used to write chunks.
In this scenario, write holes can also be caused by multiple
clients writing to the same chunk.  Thus the client needs guarded
writes to prevent overwrites, and it also needs the ability to
roll back writes.</t>
          <t>In both modes, clients <bcp14>MUST NOT</bcp14> overwrite payloads that are already
inconsistent.  This directly follows from <xref target="sec-reading-chunks"/>
and <bcp14>MUST</bcp14> be handled as discussed there.  Once consistency in the
payload has been detected, the client can use those chunks as a
basis for read/modify/update.</t>
          <t>CHUNK_WRITE is a two-pass operation in cooperation with CHUNK_FINALIZE
(<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).
It writes to the data file, and the data server is responsible for
retaining a copy of the old header and chunk. A subsequent CHUNK_READ
would return the new chunk. However, until either a CHUNK_FINALIZE
or a CHUNK_ROLLBACK is presented, a subsequent CHUNK_WRITE <bcp14>MUST</bcp14> result
in the locking of the chunk, as if a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>)
had been performed on the chunk. As such, further CHUNK_WRITEs by
any client <bcp14>MUST</bcp14> be denied until the chunk is unlocked by CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
          <t>If the CHUNK_WRITE results in a consistent data block, then the
client will send a CHUNK_FINALIZE in a subsequent compound to inform
the data server that the chunk is consistent and can be overwritten
by another CHUNK_WRITE.</t>
          <t>If the CHUNK_WRITE results in an inconsistent data block or if the
data server returned NFS4ERR_CHUNK_LOCKED, then the client sends a
LAYOUTERROR to the metadata server with a code of
NFS4ERR_PAYLOAD_NOT_CONSISTENT. The metadata server then selects a
client (or data server) to repair the data block.</t>
          <t><cref source="Tom">Since we don't have all potential chunks available,
it can either choose the winner or pick a random client/data server.
If the client is the winner, then the process is to use CHUNK_WRITE_REPAIR
to overwrite the chunks which are not consistent. If it is a random
client, then the client should just CHUNK_ROLLBACK and CHUNK_UNLOCK
until it gets back to the original chunk.</cref></t>
          <t>The client which is repairing the chunk can decide to roll back to
the previous chunk via CHUNK_ROLLBACK. Note that CHUNK_ROLLBACK
does not unlock the chunk; that has to be done explicitly via
CHUNK_UNLOCK.</t>
          <section anchor="single-writer-mode">
            <name>Single Writer Mode</name>
            <t>In single writer mode, the metadata server sets FFV2_FLAGS_ONLY_ONE_WRITER
in ffl_flags, indicating that no other client holds a write layout for
the file.  The client sends CHUNK_WRITE with cwa_guard.cwg_check set to
FALSE, omitting the guard value.  Because only one writer is active,
there is no risk of two clients overwriting the same chunk concurrently.</t>
            <t>The single writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = FALSE) for each
shard.  The data server places the written block in the PENDING state
and retains a copy of the previous block for rollback.</t>
              </li>
              <li>
                <t>The client issues CHUNK_FINALIZE to advance the blocks from PENDING
to FINALIZED, validating the per-block CRC32.</t>
              </li>
              <li>
                <t>The client issues CHUNK_COMMIT to advance the blocks from FINALIZED
to COMMITTED, persisting the block metadata to stable storage.</t>
              </li>
            </ol>
            <t>If the client detects an error after CHUNK_WRITE but before CHUNK_FINALIZE
(e.g., a CRC mismatch on a subsequent CHUNK_READ), it issues CHUNK_ROLLBACK
to restore the previous block content.  CHUNK_ROLLBACK does not lock the
chunk; the next CHUNK_WRITE is permitted immediately.</t>
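            <t>The PENDING / FINALIZED / COMMITTED progression above can be
modeled as a small state machine.  The following Python class is a
toy model of a data server's per-chunk state in single writer mode,
not an implementation; all names are illustrative:</t>
            <sourcecode type="python"><![CDATA[
class Chunk:
    """Toy model of per-chunk state on a data server (single writer)."""

    def __init__(self, data=b""):
        self.state = "COMMITTED"
        self.data = data
        self.saved = None            # previous chunk kept for rollback

    def write(self, data):           # CHUNK_WRITE
        assert self.state == "COMMITTED", "prior write not yet resolved"
        self.saved = self.data       # retain old chunk for CHUNK_ROLLBACK
        self.data = data             # a CHUNK_READ now sees the new chunk
        self.state = "PENDING"

    def finalize(self):              # CHUNK_FINALIZE
        assert self.state == "PENDING"
        self.state = "FINALIZED"

    def commit(self):                # CHUNK_COMMIT
        assert self.state == "FINALIZED"
        self.saved = None
        self.state = "COMMITTED"

    def rollback(self):              # CHUNK_ROLLBACK
        assert self.state == "PENDING"
        self.data = self.saved       # restore the previous block content
        self.saved = None
        self.state = "COMMITTED"
]]></sourcecode>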
          </section>
          <section anchor="repairing-single-writer-payloads">
            <name>Repairing Single Writer Payloads</name>
            <t>In single writer mode, inconsistent blocks arise from a client or data
server failure during a CHUNK_WRITE / CHUNK_FINALIZE sequence.  Because
no other writer is active, the repair client is always the original writer
(or a substitute designated by the metadata server after lease expiry).</t>
            <t>The repair sequence for a single writer payload is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_READ to identify which blocks are in an
inconsistent state (PENDING with a CRC mismatch, or in the errored
state set by a prior CHUNK_ERROR).</t>
              </li>
              <li>
                <t>For each errored block, the repair client reconstructs the correct
data using the erasure coding algorithm (RS matrix inversion or Mojette
back-projection) from the surviving consistent blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
to write the reconstructed data.  CHUNK_WRITE_REPAIR bypasses the guard
check and applies different data server policies (e.g., allowing writes
to blocks in the errored state).</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_FINALIZE and CHUNK_COMMIT to persist the
repaired blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the errored state and make the blocks available for normal reads.</t>
              </li>
            </ol>
          </section>
          <section anchor="sec-multi-writer">
            <name>Multiple Writer Mode</name>
            <t>In multiple writer mode, the metadata server does not set
FFV2_FLAGS_ONLY_ONE_WRITER, indicating that concurrent writers may hold
write layouts for the file.  The client sends CHUNK_WRITE with
cwa_guard.cwg_check set to TRUE, supplying a chunk_guard4 in cwa_guard.cwg_guard
that uniquely identifies this write transaction across all data servers.</t>
            <t>The multiple writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client selects a unique chunk_guard4 for this transaction.  The
cg_client_id identifies the client (derived from the client's
clientid4); the cg_gen_id is a per-client generation counter
incremented for each new transaction.</t>
              </li>
              <li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = TRUE) for each
shard.  The data server checks that no other client's block is in the
PENDING state for this chunk.  If another client's block is already
pending, the data server returns NFS4ERR_CHUNK_LOCKED with the
clr_owner field identifying the lock holder.</t>
              </li>
              <li>
                <t>On NFS4ERR_CHUNK_LOCKED, the client <bcp14>MUST</bcp14> back off.  It issues
CHUNK_ROLLBACK for any shards it has already written in this
transaction, then retries after a delay.</t>
              </li>
              <li>
                <t>If all CHUNK_WRITEs succeed, the client issues CHUNK_FINALIZE and
CHUNK_COMMIT as in single writer mode.</t>
              </li>
            </ol>
            <t>The guard ensures that the complete set of shards forming a consistent
erasure-coded block all carry the same chunk_guard4.  A reader that
encounters shards with different guard values knows the payload is not
yet consistent and <bcp14>MUST</bcp14> either retry or report NFS4ERR_PAYLOAD_NOT_CONSISTENT.</t>
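            <t>The guard check performed by the data server in step 2 can be
sketched as follows.  The function names and return conventions are
illustrative; only the error and field names come from this
document:</t>
            <sourcecode type="python"><![CDATA[
# chunk index -> (cg_client_id, cg_gen_id) of the pending writer
pending = {}

def chunk_write(chunk, client_id, gen_id):
    holder = pending.get(chunk)
    if holder is not None and holder[0] != client_id:
        # Another client's block is PENDING for this chunk: refuse
        # the write and identify the lock holder (clr_owner) so the
        # caller can back off, roll back, and retry after a delay.
        return ("NFS4ERR_CHUNK_LOCKED", holder)
    pending[chunk] = (client_id, gen_id)
    return ("NFS4_OK", None)

def chunk_finalize(chunk, client_id):
    assert pending.get(chunk, (None, None))[0] == client_id
    del pending[chunk]               # chunk is consistent again
]]></sourcecode>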
          </section>
          <section anchor="repairing-multiple-writer-payloads">
            <name>Repairing Multiple Writer Payloads</name>
            <t>In multiple writer mode, inconsistent blocks can arise from two sources:
a client failure leaving some shards in PENDING state, or two clients
writing different data to the same chunk before one has committed.</t>
            <t>The metadata server coordinates repair by designating a repair client
(identified in the layout via FFV2_DS_FLAGS_REPAIR on the target data
server).  The repair sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
affected block range of each data server.  If any lock attempt returns
NFS4ERR_CHUNK_LOCKED, the repair client records the existing lock
holder's chunk_owner4 and proceeds; the lock holder's data is a
candidate for the winning payload.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_READ on all data servers to retrieve
the current payload.  It examines the chunk_owner4 of each shard to
identify which transaction (if any) produced a consistent set across
all k data shards.</t>
              </li>
              <li>
                <t>If a consistent set is found (all k data shards carry the same
chunk_guard4), that payload is the winner.  The repair client issues
CHUNK_WRITE_REPAIR to copy the winner's data to any data servers whose
shard is inconsistent, followed by CHUNK_FINALIZE and CHUNK_COMMIT.</t>
              </li>
              <li>
                <t>If no consistent set exists (all available payloads are partial), the
repair client selects one transaction's payload as authoritative
(typically the one with the most complete set of shards, or the most
recent cg_gen_id) and proceeds as above.</t>
              </li>
              <li>
                <t>After all data servers carry consistent, finalized, committed data, the
repair client issues CHUNK_REPAIRED to clear the errored state and
CHUNK_UNLOCK to release the locks acquired in step 1.</t>
              </li>
              <li>
                <t>The repair client reports success to the metadata server via
LAYOUTRETURN.</t>
              </li>
            </ol>
          </section>
        </section>
        <section anchor="sec-reading-chunks">
          <name>Reading Chunks</name>
          <t>The client reads chunks from the data file via CHUNK_READ.  The
number of chunks in the payload that need to be consistent depends
on both the Erasure Coding Type and the level of protection selected.
If the client has enough consistent chunks in the payload, then it
can proceed to use them to build a data block.  If it does not have
enough consistent chunks in the payload, then it can either
return a LAYOUTERROR of NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server or retry the CHUNK_READ until there are
enough consistent chunks in the payload.</t>
          <t>As another client might be writing to the chunks as they are being
read, it is entirely possible to read the chunks while they are not
consistent.  Indeed, it might be the non-consistent chunks
that contain the new data, in which case a better action than building
the data block from the old chunks is to retry the CHUNK_READ and see
whether the remaining chunks have since been overwritten.</t>
        </section>
        <section anchor="whole-file-repair">
          <name>Whole File Repair</name>
          <t><cref source="Tom"> Describe how a repair client can be assigned
with missing FFV2_DS_FLAGS_ACTIVE data servers and a number of
FFV2_DS_FLAGS_REPAIR data servers.  Then the client will either
move chunks from FFV2_DS_FLAGS_SPARE data servers to the
FFV2_DS_FLAGS_REPAIR data servers or reconstruct the chunks for the
FFV2_DS_FLAGS_REPAIR based on the decoded data blocks, The client
indicates success by returning the layout.  </cref></t>
          <t><cref source="Tom"> For a slam dunk, introduce the concept of a
proxy repair client.  I.e., the client appears as a single
FFV2_CODING_MIRRORED file to other clients.  As it receives WRITEs,
it encodes them to the real set of data servers.  As it receives
READs, it decodes them from the real set of data servers.  Once the
proxy repair is finished, the metadata server will start pushing
out layouts for the real set of data servers.  </cref></t>
        </section>
      </section>
      <section anchor="mixing-of-coding-types">
        <name>Mixing of Coding Types</name>
        <t>Multiple coding types can be present in a Flexible File Version 2
Layout Type layout.  The ffv2_layout4 has an array of ffv2_mirror4,
each of which has an ffv2_coding_type4.  The main reason to allow
for this is to provide for either the assimilation of a non-erasure-coded
file to an erasure-coded file or the exporting of an erasure-coded
file to a non-erasure-coded file.</t>
        <t>Assume there is an additional ffv2_coding_type4 of FFV2_CODING_REED_SOLOMON
and it needs 8 active chunks.  The user wants to actively assimilate
a regular file.  As such, a layout might be as represented in
<xref target="fig-example_mixing"/>.  As this is an assimilation, most of the
data reads will be satisfied by READ (see Section 18.22 of <xref target="RFC8881"/>)
calls to index 0.  However, as this is also an active file, there
could also be CHUNK_READ (see <xref target="sec-CHUNK_READ"/>) calls to the other
indexes.</t>
        <figure anchor="fig-example_mixing">
          <name>Example of Mixed Coding Types in a Layout</name>
          <artwork><![CDATA[
 +-----------------------------------------------------+
 | ffv2_layout4:                                       |
 +-----------------------------------------------------+
 |     ffl_mirrors[0]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: 0                     |
 |         ffm_coding: FFV2_CODING_MIRRORED            |
 +-----------------------------------------------------+
 |     ffl_mirrors[1]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[1]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[2]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[3]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[4]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[5]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[6]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |             ffv2_data_server4[7]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |     ffm_coding: FFV2_CODING_REED_SOLOMON            |
 +-----------------------------------------------------+
]]></artwork>
        </figure>
        <t>When performing I/O via the FFV2_CODING_MIRRORED coding type, the
non-transformed data is used, whereas with other coding types a
metadata header and transformed block are sent.  Further, when
reading data from the instance files, the client <bcp14>MUST</bcp14> be
prepared for one coding type to supply data and the other
type not to supply data.  I.e., the CHUNK_READ call to the data
servers in mirror 1 might return rlr_eof set to true (see
<xref target="fig-read_chunk4"/>), which indicates that there is no data, while
the READ call to the data server in mirror 0 might return eof set to
false, which indicates that there is data.  The client <bcp14>MUST</bcp14>
determine whether there is in fact data.  An example use case is the
active assimilation of a file to ensure integrity.  As the client
is helping to translate the file to the new coding scheme, it is
actively modifying the file.  As such, it might be sequentially
reading the file in order to translate it.  The READ calls to mirror
0 would return data, and the CHUNK_READ calls to mirror 1 would
not.  As the client overwrites the file, both the WRITE
call and the CHUNK_WRITE call would send data to all of the
data servers.  Finally, if the client reads back a section which
had been modified earlier, both the READ and CHUNK_READ calls would
return data.</t>
      </section>
      <section anchor="sec-rs-encoding">
        <name>Reed-Solomon Vandermonde Encoding (FFV2_ENCODING_RS_VANDERMONDE)</name>
        <section anchor="overview">
          <name>Overview</name>
          <t>Reed-Solomon (RS) codes are Maximum Distance Separable (MDS) codes:
for a (k+m, k) code, any k of the k+m encoded shards suffice to
recover the original data.  The code tolerates the simultaneous loss
of up to m shards.</t>
        </section>
        <section anchor="galois-field-arithmetic">
          <name>Galois Field Arithmetic</name>
          <t>All RS operations are performed over GF(2^8), the Galois field with
256 elements.  Each element is represented as a byte.</t>
          <dl>
            <dt>Irreducible Polynomial</dt>
            <dd>
              <t>The field is constructed using the irreducible polynomial
x^8 + x^4 + x^3 + x^2 + 1 (0x11d in hexadecimal).  The primitive
element (generator) is g = 2, which has multiplicative order 255.</t>
            </dd>
            <dt>Addition</dt>
            <dd>
              <t>Addition in GF(2^8) is bitwise XOR.</t>
            </dd>
            <dt>Multiplication</dt>
            <dd>
              <t>Multiplication uses log/antilog tables.  For non-zero elements
a and b: a * b = exp(log(a) + log(b)), where the exp table is
doubled to 512 entries to avoid modular reduction on the index sum.</t>
            </dd>
          </dl>
          <t>These are the classical constructions from Berlekamp (1968) and
Peterson &amp; Weldon (1972).  The log/antilog table approach for GF(2^8)
multiplication predates all known patents on SIMD-accelerated GF
arithmetic.  Implementors considering SIMD acceleration of GF(2^8)
operations should be aware of US Patent 8,683,296 (StreamScale),
which covers certain SIMD-based GF multiplication techniques.</t>
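          <t>The table construction follows directly from the parameters
above.  The following Python fragment is illustrative, not part of
the protocol:</t>
          <sourcecode type="python"><![CDATA[
GF_EXP = [0] * 512                # doubled antilog table
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1                        # multiply by the generator g = 2
    if x & 0x100:
        x ^= 0x11d                 # reduce by x^8 + x^4 + x^3 + x^2 + 1
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]    # avoids (log a + log b) mod 255

def gf_add(a, b):
    return a ^ b                   # addition is bitwise XOR

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]
]]></sourcecode>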
        </section>
        <section anchor="encoding-matrix">
          <name>Encoding Matrix</name>
          <t>The encoding process uses a (k+m) x k Vandermonde matrix, normalized
so that its top k rows form the identity matrix:</t>
          <ol spacing="normal" type="1"><li>
              <t>Construct a (k+m) x k Vandermonde matrix V where V[i][j] = j^i
in GF(2^8).</t>
            </li>
            <li>
              <t>Extract the top k x k sub-matrix T from V.</t>
            </li>
            <li>
              <t>Compute T_inv = T^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply: E = V * T_inv.  The result has an identity block on top
(rows 0 through k-1) and the parity generation matrix P on the
bottom (rows k through k+m-1).</t>
            </li>
          </ol>
          <t>The identity block makes the code systematic: data shards pass through
unchanged, and only the parity sub-matrix P is needed during encoding.</t>
        </section>
        <section anchor="encoding">
          <name>Encoding</name>
          <t>Given k data shards, each of shard_len bytes, encoding produces m
parity shards, each also shard_len bytes:</t>
          <artwork><![CDATA[
For each byte position j in [0, shard_len):
  For each parity shard i in [0, m):
    parity[i][j] = sum over s in [0, k) of P[i][s] * data[s][j]
]]></artwork>
          <t>where the sum and product are in GF(2^8).  All shards (data and
parity) are the same size.</t>
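          <t>The matrix construction of the previous section and the
encoding loop above can be sketched together as follows.  This is
illustrative only; a production encoder would precompute tables and
vectorize the inner loop:</t>
          <sourcecode type="python"><![CDATA[
def gf_tables():
    exp, log = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d                  # polynomial fixed above
    exp[255:510] = exp[0:255]           # doubled antilog table
    return exp, log

EXP, LOG = gf_tables()

def mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(j, i):                       # j^i in GF(2^8), with 0^0 = 1
    if i == 0:
        return 1
    return 0 if j == 0 else EXP[(LOG[j] * i) % 255]

def gf_inv_matrix(M):                   # Gaussian elimination (step 3)
    n = len(M)
    A = [row[:] + [int(r == c) for c in range(n)]
         for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = EXP[255 - LOG[A[c][c]]]   # multiplicative inverse
        A[c] = [mul(v, inv) for v in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [v ^ mul(f, w) for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

def matmul(A, B):
    out = []
    for row in A:
        out.append([0] * len(B[0]))
        for j in range(len(B[0])):
            v = 0
            for s in range(len(B)):
                v ^= mul(row[s], B[s][j])   # XOR is the GF(2^8) sum
            out[-1][j] = v
    return out

def encoding_matrix(k, m):
    V = [[gf_pow(j, i) for j in range(k)]     # step 1: V[i][j] = j^i
         for i in range(k + m)]
    return matmul(V, gf_inv_matrix(V[:k]))    # steps 2-4: E = V * T_inv

k, m = 4, 2
E = encoding_matrix(k, m)
P = E[k:]                                     # parity generation matrix
data = [[0x11, 0x22], [0x33, 0x44],           # k data shards,
        [0x55, 0x66], [0x77, 0x88]]           # shard_len = 2 bytes
parity = matmul(P, data)                      # the encoding loop above
]]></sourcecode>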
        </section>
        <section anchor="decoding">
          <name>Decoding</name>
          <t>When one or more shards are lost (up to m), reconstruction proceeds
by matrix inversion:</t>
          <ol spacing="normal" type="1"><li>
              <t>Select k available shards (from the k+m total).</t>
            </li>
            <li>
              <t>Form a k x k sub-matrix S of the encoding matrix E by selecting the
rows corresponding to the available shards.</t>
            </li>
            <li>
              <t>Compute S_inv = S^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply S_inv by the vector of available shard data at each byte
position to recover the original k data shards.</t>
            </li>
            <li>
              <t>If any parity shards are also missing, regenerate them by
re-encoding from the recovered data shards.</t>
            </li>
          </ol>
          <t>The one-time cost of reconstruction is the matrix inversion,
which takes O(k^3) GF(2^8) operations; the per-byte cost is the
matrix-vector multiplication, which takes O(k^2) GF(2^8)
multiplications.</t>
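          <t>The reconstruction steps can be sketched as a round trip
(illustrative only; the GF(2^8) helpers repeat the parameters fixed
in this document).  Here one data shard and one parity shard are
erased and then recovered:</t>
          <sourcecode type="python"><![CDATA[
def gf_tables():
    exp, log = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d
    exp[255:510] = exp[0:255]
    return exp, log

EXP, LOG = gf_tables()

def mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(j, i):
    if i == 0:
        return 1
    return 0 if j == 0 else EXP[(LOG[j] * i) % 255]

def gf_inv_matrix(M):
    n = len(M)
    A = [row[:] + [int(r == c) for c in range(n)]
         for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = EXP[255 - LOG[A[c][c]]]
        A[c] = [mul(v, inv) for v in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [v ^ mul(f, w) for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

def matmul(A, B):
    out = []
    for row in A:
        out.append([0] * len(B[0]))
        for j in range(len(B[0])):
            v = 0
            for s in range(len(B)):
                v ^= mul(row[s], B[s][j])
            out[-1][j] = v
    return out

def encoding_matrix(k, m):
    V = [[gf_pow(j, i) for j in range(k)] for i in range(k + m)]
    return matmul(V, gf_inv_matrix(V[:k]))

k, m = 4, 2
E = encoding_matrix(k, m)
data = [[10, 20, 30], [40, 50, 60],
        [70, 80, 90], [100, 110, 120]]
shards = matmul(E, data)                 # k data + m parity shards

# Erase data shard 2 and parity shard 5, then select k surviving
# rows (step 1).
avail = [0, 1, 3, 4]
S = [E[i] for i in avail]                # step 2: k x k sub-matrix
S_inv = gf_inv_matrix(S)                 # step 3: invert it
recovered = matmul(S_inv, [shards[i] for i in avail])   # step 4
reparity = matmul(E[k:], recovered)      # step 5: regenerate parity
]]></sourcecode>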
        </section>
        <section anchor="rs-interoperability-requirements">
          <name>RS Interoperability Requirements</name>
          <t>For two implementations of FFV2_ENCODING_RS_VANDERMONDE to
interoperate, they <bcp14>MUST</bcp14> agree on all of the following parameters.
Any deviation produces a different encoding matrix and renders
data unrecoverable by a different implementation.</t>
          <ul spacing="normal">
            <li>
              <t>Irreducible polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x11d)</t>
            </li>
            <li>
              <t>Primitive element: g = 2</t>
            </li>
            <li>
              <t>Vandermonde evaluation points: V[i][j] = j^i in GF(2^8)</t>
            </li>
            <li>
              <t>Matrix normalization: E = V * (V[0..k-1])^(-1)</t>
            </li>
          </ul>
          <t>These four parameters fully determine the encoding matrix for any
(k, m) configuration.</t>
        </section>
        <section anchor="rs-shard-sizes">
          <name>RS Shard Sizes</name>
          <t>All RS shards (data and parity) are exactly shard_len bytes.  This
simplifies the CHUNK operation protocol: chunk_size is exactly the
shard size for all mirrors.</t>
          <table anchor="tbl-rs-shards">
            <name>RS shard sizes for common configurations</name>
            <thead>
              <tr>
                <th align="left">Configuration</th>
                <th align="left">File Size</th>
                <th align="left">Shard Size</th>
                <th align="left">Total Storage</th>
                <th align="left">Overhead</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">4+2</td>
                <td align="left">4 KB</td>
                <td align="left">1 KB</td>
                <td align="left">6 KB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">4+2</td>
                <td align="left">1 MB</td>
                <td align="left">256 KB</td>
                <td align="left">1.5 MB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">4 KB</td>
                <td align="left">512 B</td>
                <td align="left">5 KB</td>
                <td align="left">25%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">1 MB</td>
                <td align="left">128 KB</td>
                <td align="left">1.25 MB</td>
                <td align="left">25%</td>
              </tr>
            </tbody>
          </table>
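          <t>Each row of the table follows from simple arithmetic, assuming
the file size divides evenly into k shards (shorter files would need
padding, which is outside this sketch):</t>
          <sourcecode type="python"><![CDATA[
def rs_storage(file_size, k, m):
    shard = file_size // k           # every shard is the same size
    total = shard * (k + m)          # k data shards + m parity shards
    overhead = m / k                 # extra storage relative to the data
    return shard, total, overhead

# 4+2 over a 4 KB file: 1 KB shards, 6 KB total, 50% overhead.
# 8+2 over a 1 MB file: 128 KB shards, 1.25 MB total, 25% overhead.
]]></sourcecode>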
        </section>
      </section>
      <section anchor="sec-mojette-encoding">
        <name>Mojette Transform Encoding (FFV2_ENCODING_MOJETTE_SYSTEMATIC, FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC)</name>
        <section anchor="overview-1">
          <name>Overview</name>
          <t>The Mojette Transform is an erasure coding technique based on discrete
geometry rather than algebraic field operations.  It computes 1D
projections of a 2D grid along selected directions.  Given enough
projections, the original grid can be reconstructed exactly.</t>
          <t>The transform operates on unsigned integer elements using modular
addition.  The element size is an implementation choice: 128-bit
elements leverage SSE SIMD instructions; 64-bit elements are
compatible with NEON and AVX2 vector widths.  No Galois field
operations are required.</t>
        </section>
        <section anchor="grid-structure">
          <name>Grid Structure</name>
          <t>Data is arranged as a P x Q grid of unsigned integer elements,
where P is the number of columns and Q is the number of rows.
For k data shards of S bytes each with W-byte elements:</t>
          <artwork><![CDATA[
P = S / W       (columns per row)
Q = k           (rows = data shards)
]]></artwork>
        </section>
        <section anchor="directions">
          <name>Directions</name>
          <t>A direction is a pair of coprime integers (p_i, q_i).  Implementations
<bcp14>SHOULD</bcp14> use q_i = 1 for all directions <xref target="PARREIN"/>.  For n = k + m total
shards, n directions are generated with non-zero p values symmetric
around zero:</t>
          <ul spacing="normal">
            <li>
              <t>For n = 4: p = {-2, -1, 1, 2}</t>
            </li>
            <li>
              <t>For n = 6: p = {-3, -2, -1, 1, 2, 3}</t>
            </li>
          </ul>
        </section>
        <section anchor="forward-transform-encoding">
          <name>Forward Transform (Encoding)</name>
          <t>For each direction (p_i, q_i), the forward transform computes a 1D
projection.  Each bin sums the grid elements along a discrete line:</t>
          <artwork><![CDATA[
Projection(b, p, q) = SUM over all (row, col) where
                       row * p - col * q + offset = b
                       of Grid[row][col]
]]></artwork>
          <t>The number of bins B in a projection is:</t>
          <artwork><![CDATA[
B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1
]]></artwork>
          <t>For q = 1, this simplifies to:</t>
          <artwork><![CDATA[
B = abs(p) * (Q - 1) + P
]]></artwork>
          <t>The byte size of the projection is B * W.</t>
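          <t>As a non-normative sketch, the forward transform for q_i = 1
directions can be written as follows.  64-bit elements with modular
addition are assumed, and the bin index is offset so that the
projection length matches B(p, q, P, Q):</t>
          <sourcecode type="python"><![CDATA[
```python
MASK64 = (1 << 64) - 1  # modular addition on 64-bit unsigned elements


def num_bins(p, q, P, Q):
    # B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1
    return abs(p) * (Q - 1) + abs(q) * (P - 1) + 1


def forward_projection(grid, p):
    # Project a grid of Q rows by P columns along direction (p, 1).
    Q, P = len(grid), len(grid[0])
    # Shift the smallest raw bin index to zero.
    offset = -min(p * row - col for row in range(Q) for col in range(P))
    bins = [0] * num_bins(p, 1, P, Q)
    for row in range(Q):
        for col in range(P):
            b = p * row - col + offset
            bins[b] = (bins[b] + grid[row][col]) & MASK64
    return bins
```
]]></sourcecode>
          <t>For a 4+2 geometry with 4 KB shards and 8-byte elements
(P = 512, Q = 4), num_bins(3, 1, 512, 4) yields 521 bins, or 4168
bytes.</t>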
        </section>
        <section anchor="katz-reconstruction-criterion">
          <name>Katz Reconstruction Criterion</name>
          <t>Reconstruction is possible if and only if the Katz criterion
<xref target="KATZ"/> holds:</t>
          <artwork><![CDATA[
SUM(i=1..n) |q_i| >= Q    OR    SUM(i=1..n) |p_i| >= P
]]></artwork>
          <t>When all q_i = 1, the q-sum simplifies to n &gt;= Q.</t>
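          <t>A minimal, non-normative check of the criterion:</t>
          <sourcecode type="python"><![CDATA[
```python
def katz_reconstructible(directions, P, Q):
    # Katz criterion: SUM |q_i| >= Q  OR  SUM |p_i| >= P
    return (sum(abs(q) for _, q in directions) >= Q or
            sum(abs(p) for p, _ in directions) >= P)
```
]]></sourcecode>
          <t>With q_i = 1 directions and Q data rows, any Q surviving
projections satisfy the criterion.</t>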
        </section>
        <section anchor="inverse-transform-decoding">
          <name>Inverse Transform (Decoding)</name>
          <t>The inverse uses the corner-peeling algorithm:</t>
          <ol spacing="normal" type="1"><li>
              <t>Count how many unknown elements contribute to each bin.</t>
            </li>
            <li>
              <t>Find any bin with exactly one contributor (singleton).</t>
            </li>
            <li>
              <t>Recover the element, subtract from all projections.</t>
            </li>
            <li>
              <t>Repeat until all elements are recovered.</t>
            </li>
          </ol>
          <t>The algorithm is O(n * P * Q).</t>
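          <t>The corner-peeling loop above can be sketched non-normatively
as follows.  A small forward projector is included only to make the
example self-contained; element arithmetic again uses 64-bit modular
addition:</t>
          <sourcecode type="python"><![CDATA[
```python
MASK64 = (1 << 64) - 1  # 64-bit modular element arithmetic


def project(grid, p):
    # Forward projection along (p, 1), used to build example inputs.
    Q, P = len(grid), len(grid[0])
    offset = -min(p * r - c for r in range(Q) for c in range(P))
    bins = [0] * (abs(p) * (Q - 1) + P)
    for r in range(Q):
        for c in range(P):
            bins[p * r - c + offset] = (bins[p * r - c + offset]
                                        + grid[r][c]) & MASK64
    return bins


def inverse_mojette(projections, P, Q):
    # projections: {p: bins} for directions (p, 1).  Corner-peeling:
    # repeatedly find a bin with exactly one unknown contributor,
    # recover that element, and subtract it from every projection.
    projs = {p: list(bins) for p, bins in projections.items()}
    offsets = {p: -min(p * r - c for r in range(Q) for c in range(P))
               for p in projs}
    unknown = {(r, c) for r in range(Q) for c in range(P)}
    contrib = {p: {} for p in projs}  # bin index -> set of unknown cells
    for p in projs:
        for (r, c) in unknown:
            contrib[p].setdefault(p * r - c + offsets[p], set()).add((r, c))
    grid = [[0] * P for _ in range(Q)]
    while unknown:
        found = next(((p, b) for p in projs
                      for b, cells in contrib[p].items()
                      if len(cells) == 1), None)
        if found is None:
            raise ValueError("not reconstructible from these projections")
        p, b = found
        (r, c) = next(iter(contrib[p][b]))
        val = projs[p][b]  # the only unknown contributor left in this bin
        grid[r][c] = val
        unknown.discard((r, c))
        for p2 in projs:
            b2 = p2 * r - c + offsets[p2]
            projs[p2][b2] = (projs[p2][b2] - val) & MASK64
            contrib[p2][b2].discard((r, c))
    return grid
```
]]></sourcecode>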
        </section>
        <section anchor="systematic-mojette">
          <name>Systematic Mojette</name>
          <t>In the systematic form (FFV2_ENCODING_MOJETTE_SYSTEMATIC), the first
k shards are the original data rows and the remaining m shards are
projections.  Healthy reads require no decoding.</t>
          <t>Reconstruction of missing data rows:</t>
          <ol spacing="normal" type="1"><li>
              <t>Load available parity projections.</t>
            </li>
            <li>
              <t>Subtract contributions of present data rows (residual).</t>
            </li>
            <li>
              <t>Corner-peel the residual to recover missing rows.</t>
            </li>
          </ol>
          <t>Reconstruction cost is O(m * k), a fundamental advantage over RS
at wide geometries (k &gt;= 8).</t>
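          <t>The residual step above can be sketched non-normatively as:
subtract the contribution of each surviving data row from a parity
projection, leaving a residual that covers only the missing rows (the
helper name and arguments here are illustrative):</t>
          <sourcecode type="python"><![CDATA[
```python
MASK64 = (1 << 64) - 1  # 64-bit modular element arithmetic


def parity_residual(parity, p, present_rows, offset):
    # parity: bins of one projection along direction (p, 1).
    # present_rows: {row_index: list of elements} for surviving rows.
    # Returns the residual bins covering only the missing rows.
    residual = list(parity)
    for row, elements in present_rows.items():
        for col, v in enumerate(elements):
            b = p * row - col + offset
            residual[b] = (residual[b] - v) & MASK64
    return residual
```
]]></sourcecode>
          <t>Corner-peeling the residuals then recovers the missing rows;
only the m parity projections and the k data rows are touched, which
is the source of the O(m * k) cost.</t>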
        </section>
        <section anchor="non-systematic-mojette">
          <name>Non-Systematic Mojette</name>
          <t>In the non-systematic form (FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC),
all k + m shards are projections.  Every read requires the full
inverse transform.  This provides constant performance regardless of
failure count, but at higher baseline read cost than systematic.</t>
        </section>
        <section anchor="mojette-shard-sizes">
          <name>Mojette Shard Sizes</name>
          <t>Unlike RS, Mojette parity shard sizes vary by direction:</t>
          <table anchor="tbl-mojette-proj-sizes">
            <name>Mojette projection sizes for 4+2, 4KB shards, 64-bit elements</name>
            <thead>
              <tr>
                <th align="left">Direction (p, q)</th>
                <th align="left">Bins (B) for P=512, Q=4</th>
                <th align="left">Size (bytes, 64-bit elements)</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">(-3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
              <tr>
                <td align="left">(-2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(-1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
            </tbody>
          </table>
          <t>When using CHUNK operations, the chunk_size is a nominal stride; the
last chunk in a parity shard <bcp14>MAY</bcp14> be shorter than the stride.</t>
        </section>
      </section>
      <section anchor="comparison-of-encoding-types">
        <name>Comparison of Encoding Types</name>
        <table anchor="tbl-encoding-comparison">
          <name>Comparison of erasure encoding types</name>
          <thead>
            <tr>
              <th align="left">Property</th>
              <th align="left">Reed-Solomon</th>
              <th align="left">Mojette Systematic</th>
              <th align="left">Mojette Non-Systematic</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">MDS guarantee</td>
              <td align="left">Yes</td>
              <td align="left">Yes (Katz)</td>
              <td align="left">Yes (Katz)</td>
            </tr>
            <tr>
              <td align="left">Shard sizes</td>
              <td align="left">Uniform</td>
              <td align="left">Variable</td>
              <td align="left">Variable</td>
            </tr>
            <tr>
              <td align="left">Reconstruction cost</td>
              <td align="left">O(k^2)</td>
              <td align="left">O(m * k)</td>
              <td align="left">O(m * k)</td>
            </tr>
            <tr>
              <td align="left">Healthy read cost</td>
              <td align="left">Zero</td>
              <td align="left">Zero</td>
              <td align="left">Full decode</td>
            </tr>
            <tr>
              <td align="left">GF operations</td>
              <td align="left">Yes (GF(2^8))</td>
              <td align="left">No</td>
              <td align="left">No</td>
            </tr>
            <tr>
              <td align="left">Recommended k</td>
              <td align="left">k &lt;= 6</td>
              <td align="left">k &gt;= 4</td>
              <td align="left">Archive only</td>
            </tr>
          </tbody>
        </table>
        <t>At small k (k &lt;= 6), RS is the conservative choice with uniform shard
sizes.  At wider geometries (k &gt;= 8), systematic Mojette offers lower
reconstruction cost.  Non-systematic Mojette is suitable only for
archive workloads where reads are infrequent.</t>
      </section>
      <section anchor="handling-write-holes">
        <name>Handling write holes</name>
        <t>A write hole occurs when a client begins writing a stripe but does not
successfully write all k+m shards before a failure.  Some data servers
will hold new data while others still hold old data, producing an
inconsistent payload.</t>
        <t>The CHUNK_WRITE / CHUNK_ROLLBACK mechanism addresses this.  When a client
issues CHUNK_WRITE, the data server retains a copy of the previous shard
and places the new data in the PENDING state.  If any shard write fails,
the client issues CHUNK_ROLLBACK to each data server that received a
CHUNK_WRITE, restoring the previous content.  The payload remains
consistent from the reader's perspective throughout, because PENDING
blocks carry the new chunk_guard4 value and CHUNK_READ returns the last
COMMITTED or FINALIZED block when a PENDING block exists.</t>
        <t>In the multiple writer model, a write hole can also arise when two clients
are racing.  The chunk_guard4 value on each shard identifies which
transaction wrote it.  A reader that finds shards with different guard
values detects the inconsistency and either retries (if a concurrent write
is still in progress) or reports NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server to trigger repair.</t>
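        <t>The reader-side check described above can be sketched as
follows; the function and parameter names are illustrative only and do
not appear in the protocol XDR:</t>
        <sourcecode type="python"><![CDATA[
```python
def stripe_consistent(shard_guards):
    # All shards of a stripe must carry the same chunk_guard4 value.
    return len(set(shard_guards)) == 1


def reader_action(shard_guards, write_in_progress):
    # Mixed guard values mean a torn stripe: retry while a concurrent
    # write may still complete; otherwise report the inconsistency to
    # the metadata server so it can arrange repair.
    if stripe_consistent(shard_guards):
        return "OK"
    return "RETRY" if write_in_progress else "NFS4ERR_PAYLOAD_NOT_CONSISTENT"
```
]]></sourcecode>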
      </section>
    </section>
    <section anchor="nfsv42-operations-allowed-to-data-files">
      <name>NFSv4.2 Operations Allowed to Data Files</name>
      <t><cref source="Tom"> In Flexible File Version 1 Layout Type, the
emphasis was on NFSv3 DSes.  We limited the operations that clients
could send to data files to be COMMIT, READ, and WRITE.  We further
limited the MDS to GETATTR, SETATTR, CREATE, and REMOVE.  (Funny
enough, this is not mandated here.)  We need to call this out in
this draft and also we need to limit the NFSv4.2 operations.  Besides
the ones created here, consider: READ, WRITE, and COMMIT for mirroring
types and ALLOCATE, CLONE, COPY, DEALLOCATE, GETFH, PUTFH, READ_PLUS,
RESTOREFH, SAVEFH, SEEK, and SEQUENCE for all types.  </cref></t>
      <t><cref source="Tom"> Of special merit is SETATTR.  Do we want to
allow the clients to be able to truncate the data files?  Which
also brings up DEALLOCATE.  Perhaps we want CHUNK_DEALLOCATE?  That
way we can swap out chunks with the client file.  CHUNK_DEALLOCATE_GUARD.
Really need to determine capabilities of XFS swap!  </cref></t>
    </section>
    <section anchor="sec-layouthint">
      <name>Flexible File Layout Type Return</name>
      <t>layoutreturn_file4 is used in the LAYOUTRETURN operation to convey
layout-type-specific information to the server.  It is defined in
Section 18.44.1 of <xref target="RFC8881"/> (also shown in <xref target="fig-LAYOUTRETURN"/>).</t>
      <figure anchor="fig-LAYOUTRETURN">
        <name>Layout Return XDR</name>
        <sourcecode type="xdr"><![CDATA[
      /* Constants used for LAYOUTRETURN and CB_LAYOUTRECALL */
      const LAYOUT4_RET_REC_FILE      = 1;
      const LAYOUT4_RET_REC_FSID      = 2;
      const LAYOUT4_RET_REC_ALL       = 3;

      enum layoutreturn_type4 {
              LAYOUTRETURN4_FILE = LAYOUT4_RET_REC_FILE,
              LAYOUTRETURN4_FSID = LAYOUT4_RET_REC_FSID,
              LAYOUTRETURN4_ALL  = LAYOUT4_RET_REC_ALL
      };

   struct layoutreturn_file4 {
           offset4         lrf_offset;
           length4         lrf_length;
           stateid4        lrf_stateid;
           /* layouttype4 specific data */
           opaque          lrf_body<>;
   };

   union layoutreturn4 switch(layoutreturn_type4 lr_returntype) {
           case LAYOUTRETURN4_FILE:
                   layoutreturn_file4      lr_layout;
           default:
                   void;
   };

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           bool                    lora_reclaim;
           layouttype4             lora_layout_type;
           layoutiomode4           lora_iomode;
           layoutreturn4           lora_layoutreturn;
   };
]]></sourcecode>
      </figure>
      <t>If the lora_layout_type layout type is LAYOUT4_FLEX_FILES and the
lr_returntype is LAYOUTRETURN4_FILE, then the lrf_body opaque value
is defined by ffv2_layoutreturn4 (see <xref target="sec-ff_layoutreturn4"/>).  This
allows the client to report I/O error information or layout usage
statistics back to the metadata server as defined below.  Note that
while the data structures are built on concepts introduced in
NFSv4.2, the effective discriminated union (lora_layout_type combined
with ffv2_layoutreturn4) allows an NFSv4.1 metadata server to
utilize the data.</t>
      <section anchor="sec-io-error">
        <name>I/O Error Reporting</name>
        <section anchor="sec-ff_ioerr4">
          <name>ffv2_ioerr4</name>
          <figure anchor="fig-ff_ioerr4">
            <name>ffv2_ioerr4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_ioerr4 {
   ///         offset4        ffie_offset;
   ///         length4        ffie_length;
   ///         stateid4       ffie_stateid;
   ///         device_error4  ffie_errors<>;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Recall that <xref target="RFC7862"/> defines device_error4 as in <xref target="fig-device_error4"/>:</t>
          <figure anchor="fig-device_error4">
            <name>device_error4</name>
            <sourcecode type="xdr"><![CDATA[
   struct device_error4 {
           deviceid4       de_deviceid;
           nfsstat4        de_status;
           nfs_opnum4      de_opnum;
   };
]]></sourcecode>
          </figure>
          <t>The ffv2_ioerr4 structure is used to return error indications for
data files that generated errors during data transfers.  These are
hints to the metadata server that there are problems with that file.
For each error, ffie_errors.de_deviceid, ffie_offset, and ffie_length
represent the storage device and byte range within the file in which
the error occurred; ffie_errors represents the operation and type
of error.  The use of device_error4 is described in Section 15.6
of <xref target="RFC7862"/>.</t>
          <t>Even though the storage device might be accessed via NFSv3 and
reports back NFSv3 errors to the client, the client is responsible
for mapping these to appropriate NFSv4 status codes as de_status.
Likewise, the NFSv3 operations need to be mapped to equivalent NFSv4
operations.</t>
        </section>
      </section>
      <section anchor="sec-layout-stats">
        <name>Layout Usage Statistics</name>
        <section anchor="ffiolatency4">
          <name>ffv2_io_latency4</name>
          <figure anchor="fig-ff_io_latency4">
            <name>ffv2_io_latency4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_io_latency4 {
   ///         uint64_t       ffil_ops_requested;
   ///         uint64_t       ffil_bytes_requested;
   ///         uint64_t       ffil_ops_completed;
   ///         uint64_t       ffil_bytes_completed;
   ///         uint64_t       ffil_bytes_not_delivered;
   ///         nfstime4       ffil_total_busy_time;
   ///         nfstime4       ffil_aggregate_completion_time;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Both operation counts and bytes transferred are kept in the
ffv2_io_latency4 (see <xref target="fig-ff_io_latency4"/>).  As seen in ffv2_layoutupdate4
(see <xref target="sec-ff_layoutupdate4"/>), READ and WRITE operations are
aggregated separately.  READ operations are used for the ffv2_io_latency4
ffl_read.  Both WRITE and COMMIT operations are used for the
ffv2_io_latency4 ffl_write.  "Requested" counters track what the
client is attempting to do, and "completed" counters track what was
done.  There is no requirement that the client only report completed
results that have matching requested results from the reported
period.</t>
          <t>ffil_bytes_not_delivered is used to track the aggregate number of
bytes requested but not fulfilled due to error conditions.
ffil_total_busy_time is the aggregate time spent with outstanding
RPC calls. ffil_aggregate_completion_time is the sum of all round-trip
times for completed RPC calls.</t>
          <t>In Section 3.3.1 of <xref target="RFC8881"/>, the nfstime4 is defined as the
number of seconds and nanoseconds since midnight or zero hour January
1, 1970 Coordinated Universal Time (UTC).  The use of nfstime4 in
ff_io_latency4 is to store time since the start of the first I/O
from the client after receiving the layout.  In other words, these
are to be decoded as duration and not as a date and time.</t>
          <t>Note that LAYOUTSTATS are cumulative, i.e., not reset each time the
operation is sent.  If two LAYOUTSTATS operations for the same file
and layout stateid originate from the same NFS client and are
processed at the same time by the metadata server, then the one
containing the larger values contains the most recent time series
data.</t>
        </section>
        <section anchor="sec-ff_layoutupdate4">
          <name>ffv2_layoutupdate4</name>
          <figure anchor="fig-ff_layoutupdate4">
            <name>ffv2_layoutupdate4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutupdate4 {
   ///         netaddr4         ffl_addr;
   ///         nfs_fh4          ffl_fhandle;
   ///         ffv2_io_latency4 ffl_read;
   ///         ffv2_io_latency4 ffl_write;
   ///         nfstime4         ffl_duration;
   ///         bool             ffl_local;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>ffl_addr differentiates which network address the client is connected
to on the storage device.  In the case of multipathing, ffl_fhandle
indicates which read-only copy was selected.  ffl_read and ffl_write
convey the latencies for READ and WRITE operations, respectively.
ffl_duration is used to indicate the time period over which the
statistics were collected.  If true, ffl_local indicates that the
I/O was serviced by the client's cache.  This flag allows the client
to inform the metadata server about "hot" access to a file it would
not normally be allowed to report on.</t>
        </section>
        <section anchor="ffiostats4">
          <name>ffv2_iostats4</name>
          <figure anchor="fig-ff_iostats4">
            <name>ffv2_iostats4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_iostats4 {
   ///         offset4            ffis_offset;
   ///         length4            ffis_length;
   ///         stateid4           ffis_stateid;
   ///         io_info4           ffis_read;
   ///         io_info4           ffis_write;
   ///         deviceid4          ffis_deviceid;
   ///         ffv2_layoutupdate4 ffis_layoutupdate;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t><xref target="RFC7862"/> defines io_info4 as in <xref target="fig-io_info4"/>.</t>
          <figure anchor="fig-io_info4">
            <name>io_info4</name>
            <sourcecode type="xdr"><![CDATA[
   struct io_info4 {
           uint64_t        ii_count;
           uint64_t        ii_bytes;
   };
]]></sourcecode>
          </figure>
          <t>With pNFS, data transfers are performed directly between the pNFS
client and the storage devices.  Therefore, the metadata server has
no direct knowledge of the I/O operations being done and thus cannot
create on its own statistical information about client I/O to
optimize the data storage location.  ffv2_iostats4 <bcp14>MAY</bcp14> be used by the
client to report I/O statistics back to the metadata server upon
returning the layout.</t>
          <t>Since it is not feasible for the client to report every I/O that
used the layout, the client <bcp14>MAY</bcp14> identify "hot" byte ranges for which
to report I/O statistics.  The definition and/or configuration
mechanism of what is considered "hot" and the size of the reported
byte range are out of the scope of this document.  For client
implementation, providing reasonable default values and an optional
run-time management interface to control these parameters is
suggested.  For example, a client can define the default byte-range
resolution to be 1 MB in size and the thresholds for reporting to
be 1 MB/second or 10 I/O operations per second.</t>
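          <t>A non-normative sketch of such a screen, using the example
defaults above (1 MB resolution, thresholds of 1 MB/second or 10 I/O
operations per second); all names here are illustrative:</t>
          <sourcecode type="python"><![CDATA[
```python
RESOLUTION = 1 << 20  # 1 MB bucket size (example default)


def hot_ranges(samples, interval_secs,
               bytes_per_sec=1 << 20, ops_per_sec=10):
    # samples: iterable of (offset, length) I/Os seen in the interval.
    # An I/O is credited to the bucket holding its starting offset;
    # a production tracker would split I/Os that span buckets.
    buckets = {}  # bucket index -> (ops, bytes)
    for offset, length in samples:
        i = offset // RESOLUTION
        ops, nbytes = buckets.get(i, (0, 0))
        buckets[i] = (ops + 1, nbytes + length)
    return sorted(i * RESOLUTION for i, (ops, nbytes) in buckets.items()
                  if ops / interval_secs >= ops_per_sec
                  or nbytes / interval_secs >= bytes_per_sec)
```
]]></sourcecode>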
          <t>For each byte range, ffis_offset and ffis_length represent the
starting offset of the range and the range length in bytes.
ffis_read.ii_count, ffis_read.ii_bytes, ffis_write.ii_count, and
ffis_write.ii_bytes represent the number of contiguous READ and
WRITE I/Os and the respective aggregate number of bytes transferred
within the reported byte range.</t>
          <t>The combination of ffis_deviceid and ffl_addr uniquely identifies
both the storage path and the network route to it.  Finally,
ffl_fhandle allows the metadata server to differentiate between
multiple read-only copies of the file on the same storage device.</t>
        </section>
      </section>
      <section anchor="sec-ff_layoutreturn4">
        <name>ffv2_layoutreturn4</name>
        <figure anchor="fig-ff_layoutreturn4">
          <name>ffv2_layoutreturn4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutreturn4 {
   ///         ffv2_ioerr4     fflr_ioerr_report<>;
   ///         ffv2_iostats4   fflr_iostats_report<>;
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>When data file I/O operations fail, fflr_ioerr_report&lt;&gt; is used to
report these errors to the metadata server as an array of elements
of type ffv2_ioerr4.  Each element in the array represents an error
that occurred on the data file identified by ffie_errors.de_deviceid.
If no errors are to be reported, the size of the fflr_ioerr_report&lt;&gt;
array is set to zero.  The client <bcp14>MAY</bcp14> also use fflr_iostats_report&lt;&gt;
to report a list of I/O statistics as an array of elements of type
ffv2_iostats4.  Each element in the array represents statistics for
a particular byte range.  Byte ranges are not guaranteed to be
disjoint and <bcp14>MAY</bcp14> repeat or intersect.</t>
      </section>
    </section>
    <section anchor="sec-LAYOUTERROR">
      <name>Flexible File Layout Type LAYOUTERROR</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send error
information to the metadata server (see <xref target="sec-io-error"/>), it <bcp14>MAY</bcp14>
use LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTERROR4args is treated the same as ffv2_ioerr4.</t>
    </section>
    <section anchor="flexible-file-layout-type-layoutstats">
      <name>Flexible File Layout Type LAYOUTSTATS</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send I/O
statistics to the metadata server (see <xref target="sec-layout-stats"/>), it
<bcp14>MAY</bcp14> use LAYOUTSTATS (see Section 15.7 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTSTATS4args.lsa_layoutupdate is overloaded with the same
contents as in ffis_layoutupdate.</t>
    </section>
    <section anchor="flexible-file-layout-type-creation-hint">
      <name>Flexible File Layout Type Creation Hint</name>
      <t>The layouthint4 type is defined in <xref target="RFC8881"/> as in
<xref target="fig-layouthint4-v1"/>.</t>
      <figure anchor="fig-layouthint4-v1">
        <name>layouthint4 v1</name>
        <sourcecode type="xdr"><![CDATA[
   struct layouthint4 {
       layouttype4        loh_type;
       opaque             loh_body<>;
   };
]]></sourcecode>
      </figure>
      <t>The layouthint4 structure is used by the client to pass a hint about
the type of layout it would like created for a particular file.  If
the loh_type layout type is LAYOUT4_FLEX_FILES, then the loh_body
opaque value is defined by the ff_layouthint4 type.</t>
    </section>
    <section anchor="fflayouthint4">
      <name>ff_layouthint4</name>
      <figure anchor="fig-ff_layouthint4-v2">
        <name>ff_layouthint4 (v1 compatibility)</name>
        <sourcecode type="xdr"><![CDATA[
   union ff_mirrors_hint switch (bool ffmc_valid) {
       case TRUE:
           uint32_t    ffmc_mirrors;
       case FALSE:
           void;
   };

   struct ff_layouthint4 {
       ff_mirrors_hint    fflh_mirrors_hint;
   };
]]></sourcecode>
      </figure>
      <t>The ff_layouthint4 is retained for backwards compatibility with
Flex Files v1 layouts.  For Flex Files v2 layouts, clients
<bcp14>SHOULD</bcp14> use ffv2_layouthint4 (<xref target="fig-ffv2_layouthint4"/>) instead,
which provides coding type selection and data protection geometry
hints via ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>).</t>
    </section>
    <section anchor="recalling-a-layout">
      <name>Recalling a Layout</name>
      <t>While Section 12.5.5 of <xref target="RFC8881"/> discusses reasons independent
of layout type for recalling a layout, the flexible file layout
type metadata server should recall outstanding layouts in the
following cases:</t>
      <ul spacing="normal">
        <li>
          <t>When the file's security policy changes, i.e., ACLs or permission
mode bits are set.</t>
        </li>
        <li>
          <t>When the file's layout changes, rendering outstanding layouts
invalid.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the need to enforce
locking constraints.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the requirements
regarding resilvering as described in <xref target="sec-mds-resilvering"/>.</t>
        </li>
      </ul>
      <section anchor="cbrecallany">
        <name>CB_RECALL_ANY</name>
        <t>The metadata server can use the CB_RECALL_ANY callback operation
to notify the client to return some or all of its layouts.  Section
22.3 of <xref target="RFC8881"/> defines the allowed types of the "NFSv4 Recallable
Object Types Registry".</t>
        <figure anchor="fig-new-rca4">
          <name>RCA4 masks for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MIN     = 20;
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MAX     = 21;
   ///
]]></sourcecode>
        </figure>
        <figure anchor="fig-CB_RECALL_ANY4args">
          <name>CB_RECALL_ANY4args XDR</name>
          <sourcecode type="xdr"><![CDATA[
   struct  CB_RECALL_ANY4args      {
       uint32_t        craa_layouts_to_keep;
       bitmap4         craa_type_mask;
   };
]]></sourcecode>
        </figure>
        <t>Typically, CB_RECALL_ANY will be used to recall client state when
the server needs to reclaim resources.  The craa_type_mask bitmap
specifies the type of resources that are recalled, and the
craa_layouts_to_keep value specifies how many of the recalled
flexible file layouts the client is allowed to keep.  The mask flags
for the flexible file layout type are defined as in <xref target="fig-mask-flags"/>.</t>
        <figure anchor="fig-mask-flags">
          <name>Recall Mask Flags for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_cb_recall_any_mask {
   ///     PNFS_FF_RCA4_TYPE_MASK_READ = 20,
   ///     PNFS_FF_RCA4_TYPE_MASK_RW   = 21
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>The flags represent the iomode of the recalled layouts.  In response,
the client <bcp14>SHOULD</bcp14> return layouts of the recalled iomode that it
needs the least, keeping at most craa_layouts_to_keep flexible file
layouts.</t>
        <t>The PNFS_FF_RCA4_TYPE_MASK_READ flag notifies the client to return
layouts of iomode LAYOUTIOMODE4_READ.  Similarly, the
PNFS_FF_RCA4_TYPE_MASK_RW flag notifies the client to return layouts
of iomode LAYOUTIOMODE4_RW.  When both mask flags are set, the
client is notified to return layouts of either iomode.</t>
      </section>
    </section>
    <section anchor="client-fencing">
      <name>Client Fencing</name>
      <t>In cases where clients are uncommunicative and their lease has
expired or when clients fail to return recalled layouts within a
lease period, the server <bcp14>MAY</bcp14> revoke client layouts and reassign
these resources to other clients (see Section 12.5.5 of <xref target="RFC8881"/>).
To avoid data corruption, the metadata server <bcp14>MUST</bcp14> fence off the
revoked clients from the respective data files as described in
<xref target="sec-Fencing-Clients"/>.</t>
    </section>
    <section anchor="new-nfsv42-error-values">
      <name>New NFSv4.2 Error Values</name>
      <figure anchor="fig-errors-xdr">
        <name>Errors XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* Erasure Coding error constants; added to nfsstat4 enum */
   ///
   /// const NFS4ERR_CODING_NOT_SUPPORTED   = 10097;
   /// const NFS4ERR_PAYLOAD_NOT_CONSISTENT = 10098;
   /// const NFS4ERR_CHUNK_LOCKED           = 10099;
   /// const NFS4ERR_CHUNK_GUARDED          = 10100;
   ///
]]></sourcecode>
      </figure>
      <t>The new error codes are shown in <xref target="fig-errors-xdr"/>.</t>
      <section anchor="error-definitions">
        <name>Error Definitions</name>
        <table anchor="tbl-protocol-errors">
          <name>New Error Definitions</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Number</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">10097</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CODING_NOT_SUPPORTED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_NOT_CONSISTENT</td>
              <td align="left">10098</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_LOCKED</td>
              <td align="left">10099</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_LOCKED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_GUARDED</td>
              <td align="left">10100</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_GUARDED"/></td>
            </tr>
          </tbody>
        </table>
        <section anchor="sec-NFS4ERR_CODING_NOT_SUPPORTED">
          <name>NFS4ERR_CODING_NOT_SUPPORTED (Error Code 10097)</name>
          <t>The client requested an ffv2_coding_type4 that the metadata server
does not support.  For example, if the client sends a layouthint
requesting an erasure coding type that the metadata server does not
support, this error code can be returned.  The client might have to
send the layouthint several times to determine the overlapping set
of supported erasure coding types.</t>
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT">
          <name>NFS4ERR_PAYLOAD_NOT_CONSISTENT (Error Code 10098)</name>
          <t>The client encountered a payload in which the blocks were
inconsistent and remained so.  As the client cannot tell whether
another client is actively writing, it informs the metadata server
of this error via LAYOUTERROR.  The metadata server can then arrange
for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_LOCKED">
          <name>NFS4ERR_CHUNK_LOCKED (Error Code 10099)</name>
          <t>The client tried an operation on a chunk that resulted in the data
server reporting that the chunk was locked.  The client will then
inform the metadata server of this error via LAYOUTERROR.  The
metadata server can then arrange for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_GUARDED">
          <name>NFS4ERR_CHUNK_GUARDED (Error Code 10100)</name>
          <t>The client attempted a guarded CHUNK_WRITE whose guard did not match
the guard on the chunk in the data file.  As such, the CHUNK_WRITE was
rejected, and the client should refresh its cached copy of the chunk.</t>
          <t><cref source="Tom">This really points out either we need an array of
errors in the chunk operation responses or we need to not send an
array of chunks in the requests. The arrays were picked in order to
reduce the header to data cost, but really do not make sense.</cref></t>
          <t><cref source="Tom">Trying out an array of errors.</cref></t>
        </section>
      </section>
      <section anchor="operations-and-their-valid-errors">
        <name>Operations and Their Valid Errors</name>
        <t>The operations and their valid errors are presented in
<xref target="tbl-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-ops-and-errors">
          <name>Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CHUNK_COMMIT</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ERROR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_FINALIZE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_HEADER_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_LOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_LOCKED, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_PAYLOAD_NOT_CONSISTENT, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_REPAIRED</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ROLLBACK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_UNLOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_GUARDED, NFS4ERR_CHUNK_LOCKED, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE_REPAIR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="callback-operations-and-their-valid-errors">
        <name>Callback Operations and Their Valid Errors</name>
        <t>The callback operations and their valid errors are presented in
<xref target="tbl-cb-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-cb-ops-and-errors">
          <name>Callback Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Callback Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CB_CHUNK_REPAIR</td>
              <td align="left">NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_CODING_NOT_SUPPORTED, NFS4ERR_DEADSESSION, NFS4ERR_DELAY, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, NFS4ERR_NOTSUPP, NFS4ERR_OLD_STATEID, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="errors-and-the-operations-that-use-them">
        <name>Errors and the Operations That Use Them</name>
        <t>The errors and the operations that use them are presented in
<xref target="tbl-errors-and-ops"/>.  All operations not defined in this document
are defined in Section 18 of <xref target="RFC8881"/> and Section 15 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-errors-and-ops">
          <name>Errors and the Operations That Use Them</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Operations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">CB_CHUNK_REPAIR, LAYOUTGET</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="exchgid4flagusepnfsds">
      <name>EXCHGID4_FLAG_USE_PNFS_DS</name>
      <figure anchor="fig-EXCHGID4_FLAG_USE_PNFS_DS">
        <name>The EXCHGID4_FLAG_USE_PNFS_DS</name>
        <sourcecode type="xdr"><![CDATA[
   /// const EXCHGID4_FLAG_USE_ERASURE_DS      = 0x00100000;
]]></sourcecode>
      </figure>
      <t>When a data server connects to a metadata server, it can state its
pNFS role via EXCHANGE_ID (see Section 18.35 of <xref target="RFC8881"/>).
The data server can use EXCHGID4_FLAG_USE_ERASURE_DS (see
<xref target="fig-EXCHGID4_FLAG_USE_PNFS_DS"/>) to indicate that it supports the
new NFSv4.2 operations introduced in this document.  Section 13.1
of <xref target="RFC8881"/> describes the interaction of the various pNFS roles
masked by EXCHGID4_FLAG_MASK_PNFS.  However, that mask does not cover
EXCHGID4_FLAG_USE_ERASURE_DS; i.e., EXCHGID4_FLAG_USE_ERASURE_DS can
be used in combination with all of the pNFS flags.</t>
      <t>If the data server sets EXCHGID4_FLAG_USE_ERASURE_DS during the
EXCHANGE_ID operation, then it <bcp14>MUST</bcp14> support all of the operations
in <xref target="tbl-protocol-ops"/>.  Further, this support is orthogonal to the
Erasure Coding Type selected.  The data server is unaware of which type
is driving the I/O.</t>
    </section>
    <section anchor="new-nfsv42-attributes">
      <name>New NFSv4.2 Attributes</name>
      <section anchor="attribute-89-fattr4codingblocksize">
        <name>Attribute 89: fattr4_coding_block_size</name>
        <figure anchor="fig-fattr4_coding_block_size">
          <name>XDR for fattr4_coding_block_size</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t                  fattr4_coding_block_size;
   ///
   /// const FATTR4_CODING_BLOCK_SIZE  = 89;
   ///
]]></sourcecode>
        </figure>
        <t>The new attribute fattr4_coding_block_size (see
<xref target="fig-fattr4_coding_block_size"/>) is an <bcp14>OPTIONAL</bcp14> NFSv4.2 attribute
that <bcp14>MUST</bcp14> be supported if the metadata server supports the Flexible
File Version 2 Layout Type.  By querying it, the client can determine
the data block size it is to use when coding the data blocks to
chunks.</t>
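        <t>As a non-normative illustration of the intended use, the sketch
below shows how a client might map a byte range onto coding blocks
once it has queried fattr4_coding_block_size.  The helper name is
purely illustrative and not part of the protocol.</t>
        <figure anchor="fig-example-coding-block-size-use">
          <name>Example: Mapping a Byte Range to Coding Blocks</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: map a byte range onto coding blocks, given the
# block size obtained from fattr4_coding_block_size.
def blocks_for_range(byte_offset, byte_count, coding_block_size):
    """Return (first_block, block_count) covering the byte range."""
    first = byte_offset // coding_block_size
    last = (byte_offset + byte_count - 1) // coding_block_size
    return (first, last - first + 1)

# A 10 byte write at byte offset 4090 with a 4096 byte coding block
# size touches blocks 0 and 1.
]]></sourcecode>
        </figure>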
      </section>
    </section>
    <section anchor="new-nfsv42-common-data-structures">
      <name>New NFSv4.2 Common Data Structures</name>
      <section anchor="sec-chunk_guard4">
        <name>chunk_guard4</name>
        <figure anchor="fig-chunk_guard4">
          <name>XDR for chunk_guard4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_guard4 {
   ///     uint32_t   cg_gen_id;
   ///     uint32_t   cg_client_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_guard4 (see <xref target="fig-chunk_guard4"/>) is effectively a 64-bit
value, with the upper 32 bits, cg_gen_id, being the current generation
id of the chunk on the data server and the lower 32 bits, cg_client_id, being
a unique id established when the client performed the EXCHANGE_ID operation
(see Section 18.35 of <xref target="RFC8881"/>) with the metadata server.  The
lower 32 bits are passed back to the client in the LAYOUTGET operation (see
Section 18.43 of <xref target="RFC8881"/>) as the ffm_client_id (see
<xref target="sec-ffv2-mirror4"/>).</t>
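        <t>As a non-normative sketch, the packing of chunk_guard4 into a
single 64-bit value can be expressed as follows; the helper names are
illustrative and not part of the protocol.</t>
        <figure anchor="fig-example-chunk-guard-pack">
          <name>Example: Packing and Unpacking a chunk_guard4</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: chunk_guard4 as a 64 bit value, with the upper
# 32 bits being cg_gen_id and the lower 32 bits being cg_client_id.
def pack_guard(cg_gen_id, cg_client_id):
    assert 0 <= cg_gen_id < 2**32 and 0 <= cg_client_id < 2**32
    return (cg_gen_id << 32) | cg_client_id

def unpack_guard(guard):
    return (guard >> 32, guard & 0xFFFFFFFF)
]]></sourcecode>
        </figure>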
      </section>
      <section anchor="chunkowner4">
        <name>chunk_owner4</name>
        <figure anchor="fig-chunk_owner4">
          <name>XDR for chunk_owner4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_owner4 {
   ///     chunk_guard4   co_guard;
   ///     uint32_t       co_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_owner4 (see <xref target="fig-chunk_owner4"/>) is used to determine
when and by whom a block was written.  The co_id is used to identify
the block and <bcp14>MUST</bcp14> be the index of the chunk within the file, i.e.,
the offset of the start of the chunk divided by the chunk
length.  The co_guard is a chunk_guard4 (see <xref target="sec-chunk_guard4"/>),
used to identify a given transaction.</t>
        <t>The co_guard is like the change attribute (see Section 5.8.1.4 of
<xref target="RFC8881"/>) in that each chunk write by a given client has to have
a unique co_guard; i.e., it can be determined to which transaction,
across all data files, a given chunk corresponds.</t>
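        <t>The derivation of co_id can be sketched as follows (a
non-normative illustration; the helper name is not part of the
protocol, and fixed-length chunks are assumed):</t>
        <figure anchor="fig-example-co-id">
          <name>Example: Deriving co_id from a Chunk Offset</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: co_id is the index of the chunk within the
# file, i.e., the byte offset of the chunk start divided by the
# chunk length.
def co_id_for_chunk(chunk_start_offset, chunk_length):
    # The chunk start is assumed to be aligned to the chunk length.
    assert chunk_start_offset % chunk_length == 0
    return chunk_start_offset // chunk_length
]]></sourcecode>
        </figure>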
      </section>
    </section>
    <section anchor="new-nfsv42-operations">
      <name>New NFSv4.2 Operations</name>
      <figure anchor="fig-ops-xdr">
        <name>Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New operations for Erasure Coding start here */
   ///
   ///  OP_CHUNK_COMMIT        = 77,
   ///  OP_CHUNK_ERROR         = 78,
   ///  OP_CHUNK_FINALIZE      = 79,
   ///  OP_CHUNK_HEADER_READ   = 80,
   ///  OP_CHUNK_LOCK          = 81,
   ///  OP_CHUNK_READ          = 82,
   ///  OP_CHUNK_REPAIRED      = 83,
   ///  OP_CHUNK_ROLLBACK      = 84,
   ///  OP_CHUNK_UNLOCK        = 85,
   ///  OP_CHUNK_WRITE         = 86,
   ///  OP_CHUNK_WRITE_REPAIR  = 87,
   ///
]]></sourcecode>
      </figure>
      <table anchor="tbl-protocol-ops">
        <name>Protocol OPs</name>
        <thead>
          <tr>
            <th align="left">Operation</th>
            <th align="left">Number</th>
            <th align="left">Target Server</th>
            <th align="left">Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">CHUNK_COMMIT</td>
            <td align="left">77</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_COMMIT"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ERROR</td>
            <td align="left">78</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_ERROR"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_FINALIZE</td>
            <td align="left">79</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_FINALIZE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_HEADER_READ</td>
            <td align="left">80</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_HEADER_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_LOCK</td>
            <td align="left">81</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_LOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_READ</td>
            <td align="left">82</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_REPAIRED</td>
            <td align="left">83</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_REPAIRED"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ROLLBACK</td>
            <td align="left">84</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_ROLLBACK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_UNLOCK</td>
            <td align="left">85</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_UNLOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE</td>
            <td align="left">86</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE_REPAIR</td>
            <td align="left">87</td>
            <td align="left">DS</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE_REPAIR"/></td>
          </tr>
        </tbody>
      </table>
      <section anchor="sec-CHUNK_COMMIT">
        <name>Operation 77: CHUNK_COMMIT - Activate Cached Chunk Data</name>
        <section anchor="arguments">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_COMMIT4args">
            <name>XDR for CHUNK_COMMIT4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cca_offset;
   ///     count4          cca_count;
   ///     chunk_owner4    cca_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_COMMIT4resok">
            <name>XDR for CHUNK_COMMIT4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4resok {
   ///     verifier4       ccr_writeverf;
   ///     nfsstat4        ccr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_COMMIT4res">
            <name>XDR for CHUNK_COMMIT4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_COMMIT4res switch (nfsstat4 ccr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_COMMIT4resok   ccr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description">
          <name>DESCRIPTION</name>
          <t>CHUNK_COMMIT is COMMIT (see Section 18.3 of <xref target="RFC8881"/>) with
the additional semantics that the chunk_owner activates the blocks.
As such, all of the normal semantics of COMMIT directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_COMMIT
works on blocks and not a raw data stream.  As such, cca_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cca_count is a count of
blocks to activate and not bytes to activate.</t>
          <t>Further, while it may appear that the combination of cca_offset and
cca_count is redundant with cca_chunks, the purpose of cca_chunks
is to allow the data server to differentiate between potentially
multiple pending blocks.</t>
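          <t>The block-based addressing can be sketched as follows (a
non-normative illustration; the dictionary shape is illustrative,
while the field names are those of CHUNK_COMMIT4args):</t>
          <figure anchor="fig-example-chunk-commit-args">
            <name>Example: Assembling CHUNK_COMMIT4args</name>
            <sourcecode type="python"><![CDATA[
# Illustrative only: cca_offset and cca_count are expressed in
# blocks, not bytes, and cca_chunks selects which owners' pending
# blocks are to be activated.
def make_chunk_commit_args(first_block, block_count, owners):
    return {
        "cca_offset": first_block,   # starting block offset
        "cca_count": block_count,    # number of blocks to activate
        "cca_chunks": list(owners),  # chunk_owner4 values
    }
]]></sourcecode>
          </figure>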
          <t><cref source="Tom">Describe how CHUNK_COMMIT and CHUNK_FINALIZE interact.
How does CHUNK_COMMIT interact with a locked chunk?</cref></t>
        </section>
      </section>
      <section anchor="sec-CHUNK_ERROR">
        <name>Operation 78: CHUNK_ERROR - Report Error on Cached Chunk Data</name>
        <section anchor="arguments-1">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ERROR4args">
            <name>XDR for CHUNK_ERROR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cea_stateid;
   ///     offset4         cea_offset;
   ///     count4          cea_count;
   ///     nfsstat4        cea_error;
   ///     chunk_owner4    cea_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-1">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ERROR4res">
            <name>XDR for CHUNK_ERROR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4res {
   ///     nfsstat4        cer_status;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-1">
          <name>DESCRIPTION</name>
          <t>CHUNK_ERROR allows a client to report that one or more chunks at
the specified block range are in error.  The cea_offset is the
starting block offset and cea_count is the number of blocks
affected.  The cea_error indicates the type of error detected
(e.g., NFS4ERR_PAYLOAD_NOT_CONSISTENT for a CRC mismatch).</t>
          <t>The data server records the error state for the affected blocks.
Once marked as errored, the blocks are not returned by CHUNK_READ
until they are repaired via CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
and the repair is confirmed via CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>).</t>
          <t>The client <bcp14>SHOULD</bcp14> report errors via CHUNK_ERROR before reporting
them to the metadata server via LAYOUTERROR.  This allows the data
server to prevent other clients from reading corrupt data while
the metadata server coordinates repair.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_FINALIZE">
        <name>Operation 79: CHUNK_FINALIZE - Transition Chunks from Pending to Finalized</name>
        <section anchor="arguments-2">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4args">
            <name>XDR for CHUNK_FINALIZE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cfa_offset;
   ///     count4          cfa_count;
   ///     chunk_owner4    cfa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-2">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4resok">
            <name>XDR for CHUNK_FINALIZE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4resok {
   ///     verifier4       cfr_writeverf;
   ///     nfsstat4        cfr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_FINALIZE4res">
            <name>XDR for CHUNK_FINALIZE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_FINALIZE4res switch (nfsstat4 cfr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_FINALIZE4resok   cfr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-2">
          <name>DESCRIPTION</name>
          <t>CHUNK_FINALIZE transitions blocks from the PENDING state (set by
CHUNK_WRITE) to the FINALIZED state.  A finalized block is visible
to the owning client for reads and is eligible for CHUNK_COMMIT.</t>
          <t>The cfa_offset is the starting block offset and cfa_count is the
number of blocks to finalize.  The cfa_chunks array lists the
chunk_owner4 entries whose blocks are to be finalized.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING state;
if not, the corresponding entry in the per-owner status array
cfr_status is set to NFS4ERR_INVAL.</t>
          <t>CHUNK_FINALIZE serves as the CRC validation checkpoint: the data
server <bcp14>SHOULD</bcp14> have validated the CRC32 of each block at CHUNK_WRITE
time.  After CHUNK_FINALIZE, the block metadata (CRC, owner, state)
is persisted to stable storage so that it survives data server
restarts.</t>
          <t>Blocks that have been finalized but not yet committed <bcp14>MAY</bcp14> be rolled
back via CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).</t>
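          <t>The resulting per-block life cycle can be sketched as follows (a
non-normative illustration; the state names follow the text, but the
class itself is not part of the protocol):</t>
          <figure anchor="fig-example-block-states">
            <name>Example: Per-Block State Transitions</name>
            <sourcecode type="python"><![CDATA[
# Illustrative only: CHUNK_WRITE leaves a block PENDING,
# CHUNK_FINALIZE moves it to FINALIZED, CHUNK_COMMIT activates it,
# and CHUNK_ROLLBACK undoes blocks not yet committed.
class Block:
    def __init__(self):
        self.state = None

    def write(self):
        self.state = "PENDING"

    def finalize(self):
        if self.state != "PENDING":
            raise ValueError("NFS4ERR_INVAL")
        self.state = "FINALIZED"

    def commit(self):
        if self.state != "FINALIZED":
            raise ValueError("NFS4ERR_INVAL")
        self.state = "COMMITTED"

    def rollback(self):
        if self.state == "COMMITTED":
            raise ValueError("NFS4ERR_INVAL")
        self.state = None
]]></sourcecode>
          </figure>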
        </section>
      </section>
      <section anchor="sec-CHUNK_HEADER_READ">
        <name>Operation 80: CHUNK_HEADER_READ - Read Chunk Header from File</name>
        <section anchor="arguments-3">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4args">
            <name>XDR for CHUNK_HEADER_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    chra_stateid;
   ///     offset4     chra_offset;
   ///     count4      chra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-3">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4resok">
            <name>XDR for CHUNK_HEADER_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4resok {
   ///     bool            chrr_eof;
   ///     nfsstat4        chrr_status<>;
   ///     bool            chrr_locked<>;
   ///     chunk_owner4    chrr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <t><cref source="Tom">Do we want to have a chunk_owner for reads versus writes?
Instead of co-arrays, have one single in the responses?</cref></t>
          <figure anchor="fig-CHUNK_HEADER_READ4res">
            <name>XDR for CHUNK_HEADER_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_HEADER_READ4res switch (nfsstat4 chrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_HEADER_READ4resok     chrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-3">
          <name>DESCRIPTION</name>
          <t>CHUNK_HEADER_READ differs from CHUNK_READ (see <xref target="sec-CHUNK_READ"/>)
in that it reads only the chunk headers, and not the chunk payloads,
in the desired block range.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_LOCK">
        <name>Operation 81: CHUNK_LOCK - Lock Cached Chunk Data</name>
        <section anchor="arguments-4">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_LOCK4args">
            <name>XDR for CHUNK_LOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_LOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cla_stateid;
   ///     offset4         cla_offset;
   ///     count4          cla_count;
   ///     chunk_owner4    cla_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-4">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_LOCK4res">
            <name>XDR for CHUNK_LOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_LOCK4res switch (nfsstat4 clr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     case NFS4ERR_CHUNK_LOCKED:
   ///         chunk_owner4    clr_owner;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-4">
          <name>DESCRIPTION</name>
          <t>CHUNK_LOCK acquires an exclusive lock on the block range specified
by cla_offset and cla_count.  While locked, other clients' CHUNK_WRITE
operations to the same block range will fail with NFS4ERR_CHUNK_LOCKED.
The lock is associated with the chunk_owner4 in cla_owner.</t>
          <t>If the blocks are already locked by a different owner, the operation
returns NFS4ERR_CHUNK_LOCKED with the clr_owner field identifying the
current lock holder.</t>
          <t>CHUNK_LOCK is used in the multiple writer mode (<xref target="sec-multi-writer"/>)
to coordinate concurrent access to the same block range.  A client
that needs to repair chunks <bcp14>SHOULD</bcp14> acquire the lock before writing
replacement data.</t>
          <t>The lock is released by CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) or
implicitly when the client's lease expires.</t>
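          <t>A client-side retry loop for lock contention might look like the
following non-normative sketch, in which chunk_lock() is a
hypothetical stand-in for issuing the CHUNK_LOCK operation:</t>
          <figure anchor="fig-example-chunk-lock-retry">
            <name>Example: Retrying CHUNK_LOCK on Contention</name>
            <sourcecode type="python"><![CDATA[
import time

# Illustrative only: retry while another owner holds the range.
def lock_with_retry(chunk_lock, args, retries=5, delay=0.1):
    for _ in range(retries):
        status, holder = chunk_lock(args)
        if status == "NFS4_OK":
            return True
        if status != "NFS4ERR_CHUNK_LOCKED":
            raise RuntimeError(status)
        time.sleep(delay)  # holder (clr_owner) names the lock holder
    return False
]]></sourcecode>
          </figure>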
        </section>
      </section>
      <section anchor="sec-CHUNK_READ">
        <name>Operation 82: CHUNK_READ - Read Chunks from File</name>
        <section anchor="arguments-5">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_READ4args">
            <name>XDR for CHUNK_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    cra_stateid;
   ///     offset4     cra_offset;
   ///     count4      cra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-5">
          <name>RESULTS</name>
          <figure anchor="fig-read_chunk4">
            <name>XDR for read_chunk4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct read_chunk4 {
   ///     uint32_t        cr_crc;
   ///     uint32_t        cr_effective_len;
   ///     chunk_owner4    cr_owner;
   ///     uint32_t        cr_payload_id;
   ///     bool            cr_locked<>;  // TDH - make a flag
   ///     nfsstat4        cr_status<>;
   ///     opaque          cr_chunk<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4resok">
            <name>XDR for CHUNK_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4resok {
   ///     bool        crr_eof;
   ///     read_chunk4 crr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4res">
            <name>XDR for CHUNK_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_READ4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///          CHUNK_READ4resok     crr_resok4;
   ///     default:
   ///          void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-5">
          <name>DESCRIPTION</name>
          <t>CHUNK_READ is READ (see Section 18.22 of <xref target="RFC8881"/>) with additional
semantics over the chunk_owner.  As such, all of the normal semantics
of READ directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_READ
works on blocks and not a raw data stream.  As such, cra_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cra_count is a count of
blocks to read and not bytes to read.</t>
          <t>When reading a set of blocks across the data servers, it can be the
case that some data servers do not have any data at that location.
In that case, the data server either sets crr_eof if cra_offset
is beyond the last block that the data server is aware of, or it
returns an empty block for that block.</t>
          <t>For example, in <xref target="fig-example-CHUNK_READ4args"/>, the client asks
for 4 blocks starting with the 3rd block in the file.  The second
data server responds as in <xref target="fig-example-CHUNK_READ4resok"/>.  The
client would read this as follows: there is valid data for blocks 2
and 4, there is a hole at block 3, and there is no data for block 5.
The data server <bcp14>MUST</bcp14> calculate a valid cr_crc for block 3 based on
the generated fields.</t>
          <figure anchor="fig-example-CHUNK_READ4args">
            <name>Example: CHUNK_READ4args parameters</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4args                |
  +--------------------------------+
  | cra_stateid: 0                 |
  | cra_offset: 2                  |
  | cra_count: 4                   |
   +--------------------------------+
]]></artwork>
          </figure>
          <figure anchor="fig-example-CHUNK_READ4resok">
            <name>Example: Resulting CHUNK_READ4resok reply</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4resok               |
  +--------------------------------+
  | crr_eof: true                  |
   | crr_chunks[0]:                 |
   |     cr_crc: 0x3faddace         |
   |     cr_owner:                  |
   |         co_id: 2               |
   |         co_guard:              |
   |             cg_gen_id   : 3    |
   |             cg_client_id: 6    |
   |     cr_payload_id: 1           |
   |     cr_chunk: ....             |
   | crr_chunks[1]:                 |
   |     cr_crc: 0xdeade4e5         |
   |     cr_owner:                  |
   |         co_id: 3               |
   |         co_guard:              |
   |             cg_gen_id   : 0    |
   |             cg_client_id: 0    |
   |     cr_payload_id: 1           |
   |     cr_chunk: 0000...00000     |
   | crr_chunks[2]:                 |
   |     cr_crc: 0x7778abcd         |
   |     cr_owner:                  |
   |         co_id: 4               |
   |         co_guard:              |
   |             cg_gen_id   : 3    |
   |             cg_client_id: 6    |
   |     cr_payload_id: 1           |
   |     cr_chunk: ....             |
  +--------------------------------+
]]></artwork>
          </figure>
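          <t>Assuming the checksum in use is CRC-32 over the chunk payload (an
illustrative assumption; the exact CRC input is not restated here),
the cr_crc that the data server generates for a zero-filled hole
block could be computed as in the following non-normative sketch:</t>
          <figure anchor="fig-example-hole-crc">
            <name>Example: CRC for a Generated Hole Block</name>
            <sourcecode type="python"><![CDATA[
import zlib

# Illustrative only: the CRC of an all-zero chunk of the given
# length, as a data server might generate for a hole.
def crc_for_hole(chunk_length):
    return zlib.crc32(bytes(chunk_length))
]]></sourcecode>
          </figure>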
        </section>
      </section>
      <section anchor="sec-CHUNK_REPAIRED">
        <name>Operation 83: CHUNK_REPAIRED - Confirm Repair of Errored Chunk Data</name>
        <section anchor="arguments-6">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4args">
            <name>XDR for CHUNK_REPAIRED4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_REPAIRED4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cpa_stateid;
   ///     offset4         cpa_offset;
   ///     count4          cpa_count;
   ///     chunk_owner4    cpa_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-6">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4res">
            <name>XDR for CHUNK_REPAIRED4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_REPAIRED4res switch (nfsstat4 cpr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-6">
          <name>DESCRIPTION</name>
          <t>CHUNK_REPAIRED signals that blocks previously marked as errored
(via CHUNK_ERROR, <xref target="sec-CHUNK_ERROR"/>) have been repaired.  The
repair client writes replacement data via CHUNK_WRITE_REPAIR
(<xref target="sec-CHUNK_WRITE_REPAIR"/>), then calls CHUNK_REPAIRED to clear
the error state and make the blocks available for normal reads.</t>
          <t>The cpa_offset and cpa_count identify the repaired block range.
The cpa_owner identifies the repair client that performed the
repair.  The data server verifies that the blocks were previously
in error and that the repair data has been written and finalized.</t>
          <t>If the blocks are not in the errored state, the operation returns
NFS4ERR_INVAL.</t>
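          <t>As a non-normative illustration, the checks above can be sketched
in Python.  The state names and helper structures here are hypothetical
server-side bookkeeping, not part of the protocol.</t>
          <figure anchor="fig-ex-chunk-repaired">
            <name>Sketch of CHUNK_REPAIRED State Checks</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch: clear the error state only if the whole
# range was previously errored and the repair data was finalized.
NFS4_OK = 0
NFS4ERR_INVAL = 22
FINALIZED = "FINALIZED"

def chunk_repaired(errored_blocks, repair_state, offset, count):
    targets = range(offset, offset + count)
    if any(b not in errored_blocks for b in targets):
        return NFS4ERR_INVAL       # blocks were not in the errored state
    if any(repair_state.get(b) != FINALIZED for b in targets):
        return NFS4ERR_INVAL       # repair data not written and finalized
    for b in targets:
        errored_blocks.discard(b)  # blocks are again readable
    return NFS4_OK
]]></sourcecode>
          </figure>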
        </section>
      </section>
      <section anchor="sec-CHUNK_ROLLBACK">
        <name>Operation 84: CHUNK_ROLLBACK - Rollback Changes on Cached Chunk Data</name>
        <section anchor="arguments-7">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4args">
            <name>XDR for CHUNK_ROLLBACK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         crb_offset;
   ///     count4          crb_count;
   ///     chunk_owner4    crb_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-7">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4resok">
            <name>XDR for CHUNK_ROLLBACK4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4resok {
   ///     verifier4       crr_writeverf;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_ROLLBACK4res">
            <name>XDR for CHUNK_ROLLBACK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_ROLLBACK4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_ROLLBACK4resok   crr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-7">
          <name>DESCRIPTION</name>
          <t>CHUNK_ROLLBACK reverts blocks from the PENDING or FINALIZED state
back to their previous state, effectively undoing a CHUNK_WRITE
that has not yet been committed via CHUNK_COMMIT.</t>
          <t>The crb_offset is the starting block offset and crb_count is the
number of blocks to roll back.  The crb_chunks array lists the
chunk_owner4 entries whose blocks are to be rolled back.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING or
FINALIZED state; blocks that have already been committed via
CHUNK_COMMIT cannot be rolled back.</t>
          <t>CHUNK_ROLLBACK is used in two scenarios:</t>
          <ol spacing="normal" type="1"><li>
              <t>A client discovers an encoding error after CHUNK_WRITE and
before CHUNK_COMMIT, and needs to undo the write to try again.</t>
            </li>
            <li>
              <t>A repair client needs to undo a repair attempt that was found
to be incorrect before committing it.</t>
            </li>
          </ol>
          <t>The data server deletes the pending chunk data and restores the
block metadata to EMPTY.  If the block was in the FINALIZED state,
the persisted metadata is also removed.</t>
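          <t>A non-normative sketch of the rollback transition, assuming a
per-block state field on the data server (the names are illustrative,
and vetting the whole range first is one possible policy):</t>
          <figure anchor="fig-ex-chunk-rollback">
            <name>Sketch of CHUNK_ROLLBACK State Transitions</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch: only PENDING or FINALIZED blocks may be
# rolled back; committed blocks are refused.
EMPTY, PENDING, FINALIZED, COMMITTED = (
    "EMPTY", "PENDING", "FINALIZED", "COMMITTED")
NFS4_OK = 0
NFS4ERR_INVAL = 22

def chunk_rollback(block_states, offset, count):
    targets = range(offset, offset + count)
    if any(block_states.get(b) not in (PENDING, FINALIZED)
           for b in targets):
        return NFS4ERR_INVAL       # e.g., an already-committed block
    for b in targets:
        # A FINALIZED block also has its persisted metadata removed
        # (not modeled here).
        block_states[b] = EMPTY
    return NFS4_OK
]]></sourcecode>
          </figure>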
        </section>
      </section>
      <section anchor="sec-CHUNK_UNLOCK">
        <name>Operation 85: CHUNK_UNLOCK - Unlock Cached Chunk Data</name>
        <section anchor="arguments-8">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4args">
            <name>XDR for CHUNK_UNLOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_UNLOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cua_stateid;
   ///     offset4         cua_offset;
   ///     count4          cua_count;
   ///     chunk_owner4    cua_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-8">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4res">
            <name>XDR for CHUNK_UNLOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_UNLOCK4res switch (nfsstat4 cur_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-8">
          <name>DESCRIPTION</name>
          <t>CHUNK_UNLOCK releases the exclusive lock on the block range
previously acquired by CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>).  The
cua_owner <bcp14>MUST</bcp14> match the owner that acquired the lock; otherwise
the operation returns NFS4ERR_INVAL.</t>
          <t>If the blocks are not locked, the operation returns NFS4_OK;
CHUNK_UNLOCK is thus idempotent.</t>
          <t>A client <bcp14>SHOULD</bcp14> release chunk locks promptly after completing
its write or repair operation.  Chunk locks are also released
implicitly when the client's lease expires.</t>
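          <t>The owner check and the idempotent success case can be sketched
as follows; this is a non-normative illustration with hypothetical
names:</t>
          <figure anchor="fig-ex-chunk-unlock">
            <name>Sketch of CHUNK_UNLOCK Handling</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch: locks maps a block number to the owner of
# an exclusive chunk lock on the data server.
NFS4_OK = 0
NFS4ERR_INVAL = 22

def chunk_unlock(locks, offset, count, owner):
    targets = range(offset, offset + count)
    held = [b for b in targets if b in locks]
    if not held:
        return NFS4_OK             # nothing locked: idempotent success
    if any(locks[b] != owner for b in held):
        return NFS4ERR_INVAL       # cua_owner does not match
    for b in held:
        del locks[b]
    return NFS4_OK
]]></sourcecode>
          </figure>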
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE">
        <name>Operation 86: CHUNK_WRITE - Write Chunks to File</name>
        <section anchor="arguments-9">
          <name>ARGUMENTS</name>
          <figure anchor="fig-write_chunk_guard4">
            <name>XDR for write_chunk_guard4</name>
            <sourcecode type="xdr"><![CDATA[
   /// union write_chunk_guard4 switch (bool cwg_check) {
   ///     case TRUE:
   ///         chunk_guard4   cwg_guard;
   ///     case FALSE:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4args">
            <name>XDR for CHUNK_WRITE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY = 0x00000001;
   ///
   /// struct CHUNK_WRITE4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwa_stateid;
   ///     offset4            cwa_offset;
   ///     stable_how4        cwa_stable;
   ///     chunk_owner4       cwa_owner;
   ///     uint32_t           cwa_payload_id;
   ///     uint32_t           cwa_flags;
   ///     write_chunk_guard4 cwa_guard;
   ///     uint32_t           cwa_chunk_size;
   ///     uint32_t           cwa_crc32s<>;
   ///     opaque             cwa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-9">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE4resok">
            <name>XDR for CHUNK_WRITE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE4resok {
   ///     count4          cwr_count;
   ///     stable_how4     cwr_committed;
   ///     verifier4       cwr_writeverf;
   ///     nfsstat4        cwr_block_status<>;
   ///     bool            cwr_block_activated<>;
   ///     chunk_owner4    cwr_owners<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4res">
            <name>XDR for CHUNK_WRITE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE4res switch (nfsstat4 cwr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE4resok    cwr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-9">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE is WRITE (see Section 18.32 of <xref target="RFC8881"/>) with
additional semantics for the chunk_owner and the activation of
blocks.  As such, all of the normal semantics of WRITE apply
directly.</t>
          <t>The main difference between the two operations is that CHUNK_WRITE
operates on blocks and not on a raw data stream.  As such, cwa_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cwr_count is a count of
written blocks and not of written bytes.</t>
          <t>If cwa_stable is FILE_SYNC4, the data server <bcp14>MUST</bcp14> commit the written
header and block data plus all file system metadata to stable storage
before returning results.  This corresponds to the NFSv2 protocol
semantics.  Any other behavior constitutes a protocol violation.
If cwa_stable is DATA_SYNC4, then the data server <bcp14>MUST</bcp14> commit all
of the header and block data to stable storage and enough of the
metadata to retrieve the data before returning.  The data server
implementer is free to implement DATA_SYNC4 in the same fashion as
FILE_SYNC4, but with a possible performance drop.  If cwa_stable
is UNSTABLE4, the data server is free to commit any part of the
header and block data and the metadata to stable storage, including
all or none, before returning a reply to the client.  There is no
guarantee whether or when any uncommitted data will subsequently
be committed to stable storage.  The only guarantees made by the
data server are that it will not destroy any data without changing
the value of writeverf and that it will not commit the data and
metadata at a level less than that requested by the client.</t>
          <t>The activation of header and block data interacts with the co_activated
field of each written block.  If the data is not committed to
stable storage, then the co_activated field <bcp14>MUST NOT</bcp14> be set to true.
Once the data is committed to stable storage, the data server
can set the block's co_activated if one of these conditions applies:</t>
          <ul spacing="normal">
            <li>
              <t>it is the first write to that block and the
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY flag is set, or</t>
            </li>
            <li>
              <t>a CHUNK_COMMIT is issued later for that block.</t>
            </li>
          </ul>
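          <t>The activation conditions above can be sketched as a single
predicate; this non-normative Python fragment uses hypothetical
argument names for the server's bookkeeping:</t>
          <figure anchor="fig-ex-co-activated">
            <name>Sketch of the co_activated Decision</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of when a data server may set co_activated.
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY = 0x00000001

def may_set_co_activated(on_stable_storage, first_write,
                         cwa_flags, commit_seen):
    # co_activated MUST NOT be true while the data is not on
    # stable storage.
    if not on_stable_storage:
        return False
    # First write with the flag set, or a later CHUNK_COMMIT.
    if first_write and (cwa_flags & CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY):
        return True
    return commit_seen
]]></sourcecode>
          </figure>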
          <t>There are subtle interactions with write holes caused by racing
clients.  One client could win the race, but because it used a
cwa_stable of UNSTABLE4, the subsequent writes from the second
client with a cwa_stable of FILE_SYNC4 can instead be awarded
co_activated set to true for each of the blocks in the payload.</t>
          <t>Finally, the interaction of cwa_stable can cause a client that
receives a response with co_activated of false to mistakenly believe
that the blocks are not activated.  A subsequent CHUNK_READ or
HEADER_READ might show that co_activated is true without any
interaction by the client via CHUNK_COMMIT.</t>
          <section anchor="guarding-the-write">
            <name>Guarding the Write</name>
            <t>A guarded CHUNK_WRITE is one in which the writing of a block <bcp14>MUST</bcp14> fail if
cwa_guard.cwg_check is TRUE and the target chunk does not have the
same cg_gen_id as cwa_guard.cwg_guard.cg_gen_id.  This is
useful in read-update-write scenarios.  The client reads a block,
updates it, and is prepared to write it back.  It guards the write
such that if another writer has modified the block, the data server
will reject the modification.</t>
            <t>As the chunk_guard4 (see <xref target="fig-chunk_guard4"/>) does not have a
chunk_id and the CHUNK_WRITE applies to all blocks in the range from
cwa_offset to the length of cwa_chunks, each of the target blocks
<bcp14>MUST</bcp14> have the same cg_gen_id and cg_client_id.  The client <bcp14>SHOULD</bcp14>
present the smallest possible set of blocks to meet this
requirement.</t>
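            <t>A non-normative sketch of the guard comparison, checking every
target block against the presented guard (the helper names are
illustrative):</t>
            <figure anchor="fig-ex-write-guard">
              <name>Sketch of the Guarded CHUNK_WRITE Check</name>
              <sourcecode type="python"><![CDATA[
# Non-normative sketch: a guarded write proceeds only if every
# target block carries the same guard as cwa_guard.cwg_guard.
from collections import namedtuple

ChunkGuard = namedtuple("ChunkGuard", ["cg_gen_id", "cg_client_id"])

def guard_permits_write(cwg_check, cwg_guard, target_guards):
    if not cwg_check:
        return True                # unguarded write
    return all(g == cwg_guard for g in target_guards)
]]></sourcecode>
            </figure>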
            <t><cref source="Tom"> Is the DS supposed to vet all blocks first or
proceed until the first error?  Or do all blocks and return an array
of errors?  (This last one is a no-go.)  Also, if we do the vet
first, what happens if a CHUNK_WRITE comes in after the vetting?
Are we to lock the file during this process?  Even if we do that,
we still have the issue of multiple DSes.  </cref></t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE_REPAIR">
        <name>Operation 87: CHUNK_WRITE_REPAIR - Write Repaired Cached Chunk Data</name>
        <section anchor="arguments-10">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4args">
            <name>XDR for CHUNK_WRITE_REPAIR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwra_stateid;
   ///     offset4            cwra_offset;
   ///     stable_how4        cwra_stable;
   ///     chunk_owner4       cwra_owner;
   ///     uint32_t           cwra_payload_id;
   ///     uint32_t           cwra_chunk_size;
   ///     uint32_t           cwra_crc32s<>;
   ///     opaque             cwra_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-10">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4resok">
            <name>XDR for CHUNK_WRITE_REPAIR4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4resok {
   ///     count4          cwrr_count;
   ///     stable_how4     cwrr_committed;
   ///     verifier4       cwrr_writeverf;
   ///     nfsstat4        cwrr_block_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4res">
            <name>XDR for CHUNK_WRITE_REPAIR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE_REPAIR4res switch (nfsstat4 cwrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE_REPAIR4resok   cwrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-10">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE_REPAIR has the same semantics as CHUNK_WRITE
(<xref target="sec-CHUNK_WRITE"/>) but is used specifically for writing
reconstructed chunk data to a replacement data server during
repair operations.</t>
          <t>The repair workflow is:</t>
          <ol spacing="normal" type="1"><li>
              <t>The repair client reads surviving chunks from the remaining
data servers via CHUNK_READ.</t>
            </li>
            <li>
              <t>The client reconstructs the missing chunks using the erasure
coding algorithm (RS matrix inversion or Mojette corner-peeling).</t>
            </li>
            <li>
              <t>The client acquires a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
target data server to prevent concurrent writes during repair.</t>
            </li>
            <li>
              <t>The client writes the reconstructed data via CHUNK_WRITE_REPAIR.</t>
            </li>
            <li>
              <t>The client calls CHUNK_FINALIZE and CHUNK_COMMIT to persist
the repair.</t>
            </li>
            <li>
              <t>The client calls CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the error state.</t>
            </li>
            <li>
              <t>The client releases the lock via CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>).</t>
            </li>
          </ol>
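          <t>The reconstruction in step 2 depends on the erasure coding type in
use.  As a non-normative stand-in, the fragment below uses a
single-parity XOR code, which is much simpler than the RS and Mojette
codes this document defines but shows the shape of the computation:
a single lost chunk is the XOR of all surviving chunks.</t>
          <figure anchor="fig-ex-xor-repair">
            <name>Single-Parity Reconstruction Stand-In</name>
            <sourcecode type="python"><![CDATA[
# Non-normative illustration only: single-parity XOR, not the RS
# or Mojette encodings defined by this document.
def xor_chunks(chunks):
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, byte in enumerate(c):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x10\x20"]
parity = xor_chunks(data)              # held on the parity server
# Suppose data[0] is lost: rebuild it from parity and survivors.
rebuilt = xor_chunks([parity, data[1]])
]]></sourcecode>
          </figure>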
          <t>CHUNK_WRITE_REPAIR is distinguished from CHUNK_WRITE to allow the
data server to apply different policies to repair writes (e.g.,
bypassing guard checks, logging repair activity, or prioritizing
repair I/O).  The CRC32 validation on the repair data follows the
same rules as CHUNK_WRITE.</t>
          <t>The target blocks <bcp14>SHOULD</bcp14> be in the errored state (set by
CHUNK_ERROR) or EMPTY.  If the blocks are in the COMMITTED state
with valid data, the data server <bcp14>MAY</bcp14> reject the repair to prevent
overwriting good data.</t>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The combination of components in a pNFS system is required to
preserve the security properties of NFSv4.1+ with respect to an
entity accessing data via a client.  The pNFS feature partitions
the NFSv4.1+ file system protocol into two parts: the control
protocol and the data protocol.  As the control protocol in this
document is NFS, the security properties are equivalent to the
version of NFS being used.  The flexible file layout further divides
the data protocol into metadata and data paths.  The security
properties of the metadata path are equivalent to those of NFSv4.1+
(see Sections 1.7.1 and 2.2.1 of <xref target="RFC8881"/>).  And the security
properties of the data path are equivalent to those of the version
of NFS used to access the storage device, with the provision that
the metadata server is responsible for authenticating client access
to the data file.  The metadata server provides appropriate credentials
to the client to access data files on the storage device.  It is
also responsible for revoking access for a client to the storage
device.</t>
      <t>The metadata server enforces the file access control policy at
LAYOUTGET time.  The client <bcp14>MUST</bcp14> use RPC authorization credentials
for getting the layout for the requested iomode (LAYOUTIOMODE4_READ
or LAYOUTIOMODE4_RW), and the server verifies the permissions and
ACL for these credentials, possibly returning NFS4ERR_ACCESS if the
client is not allowed the requested iomode.  If the LAYOUTGET
operation succeeds, the client receives, as part of the layout, a
set of credentials allowing it I/O access to the specified data
files corresponding to the requested iomode.  When the client acts
on I/O operations on behalf of its local users, it <bcp14>MUST</bcp14> authenticate
and authorize the user by issuing respective OPEN and ACCESS calls
to the metadata server, similar to having NFSv4 data delegations.</t>
      <t>The combination of filehandle, synthetic uid, and gid in the layout
is the way that the metadata server enforces access control to the
data server.  The client only has access to filehandles of file
objects and not directory objects.  Thus, given a filehandle in a
layout, it is not possible to guess the parent directory filehandle.
Further, as the data file permissions only allow the given synthetic
uid read/write permission and the given synthetic gid read permission,
knowing the synthetic ids of one file does not necessarily allow
access to any other data file on the storage device.</t>
      <t>The metadata server can also deny access at any time by fencing the
data file, which means changing the synthetic ids.  In turn, that
forces the client to return its current layout and get a new layout
if it wants to continue I/O to the data file.</t>
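      <t>The fencing model can be sketched in a few lines; this
non-normative fragment models only the uid comparison (gid handling
and the layout recall traffic are omitted):</t>
      <figure anchor="fig-ex-fencing">
        <name>Sketch of Fencing via Synthetic ids</name>
        <sourcecode type="python"><![CDATA[
# Non-normative sketch: access is granted only when the credential
# uid matches the data file's current synthetic uid, so rotating
# that uid revokes every credential issued with the old value.
def may_access(data_file, cred_uid):
    return cred_uid == data_file["synthetic_uid"]

def fence(data_file):
    data_file["synthetic_uid"] += 1    # any new value suffices

df = {"synthetic_uid": 10001}
layout_uid = df["synthetic_uid"]       # handed out in a layout
allowed_before = may_access(df, layout_uid)
fence(df)
allowed_after = may_access(df, layout_uid)
]]></sourcecode>
      </figure>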
      <t>If access is allowed, the client uses the corresponding (read-only
or read/write) credentials to perform the I/O operations at the
data file's storage devices.  When the metadata server receives a
request to change a file's permissions or ACL, it <bcp14>SHOULD</bcp14> recall all
layouts for that file and then <bcp14>MUST</bcp14> fence off any clients still
holding outstanding layouts for the respective files by implicitly
invalidating the previously distributed credential on all data files
comprising the file in question.  It is <bcp14>REQUIRED</bcp14> that this be done
before committing to the new permissions and/or ACL.  By requesting
new layouts, the clients will reauthorize access against the modified
access control metadata.  Recalling the layouts in this case is
intended to prevent clients from getting an error on I/Os done after
the client was fenced off.</t>
      <section anchor="crc32-integrity-scope">
        <name>CRC32 Integrity Scope</name>
        <t>The CRC32 values carried in CHUNK_WRITE and returned from CHUNK_READ
are intended to detect accidental data corruption during storage or
transmission, such as bit flips in storage media or network
errors.  CRC32 is not a cryptographic hash and does not protect
against intentional modification: an adversary with access to the
network path could replace a chunk and recompute a valid CRC32 to
match.  The "data integrity" provided by the CRC32 mechanism in this
document refers to error detection, not protection against an active
attacker.  Deployments requiring protection against active attackers
<bcp14>SHOULD</bcp14> use RPC-over-TLS (see <xref target="sec-tls"/>) or RPCSEC_GSS.</t>
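        <t>The distinction can be demonstrated directly with the standard
CRC32 from zlib; the payloads below are of course arbitrary:</t>
        <figure anchor="fig-ex-crc32">
          <name>CRC32 Detects Accidents, Not Attacks</name>
          <sourcecode type="python"><![CDATA[
import zlib

chunk = b"payload bytes"
crc = zlib.crc32(chunk)

# Any single bit flip is detected ...
flipped = bytes([chunk[0] ^ 0x01]) + chunk[1:]
assert zlib.crc32(flipped) != crc

# ... but a replaced chunk with a recomputed CRC passes the check,
# which is why CRC32 is error detection, not tamper protection.
forged = b"attacker data"
forged_crc = zlib.crc32(forged)
assert zlib.crc32(forged) == forged_crc
]]></sourcecode>
        </figure>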
      </section>
      <section anchor="chunk-lock-and-lease-expiry">
        <name>Chunk Lock and Lease Expiry</name>
        <t>When a client holds a chunk lock (acquired via CHUNK_LOCK) and its
lease expires or the client crashes, the lock is released implicitly
by the data server.  This opens a window in which another client
may write to the previously locked range before the original client's
repair is complete.  Implementations <bcp14>SHOULD</bcp14> ensure that the lease
period for chunk locks is sufficient to complete repair operations,
and <bcp14>SHOULD</bcp14> implement CHUNK_UNLOCK explicitly on abort paths.  The
metadata server's LAYOUTERROR and LAYOUTRETURN mechanisms provide
the coordination point for detecting and resolving such races.</t>
      </section>
      <section anchor="error-code-information-disclosure">
        <name>Error Code Information Disclosure</name>
        <t>The new error codes NFS4ERR_CHUNK_LOCKED (10099) and
NFS4ERR_PAYLOAD_NOT_CONSISTENT (10098) convey information about
chunk state to the caller.  Both of these errors <bcp14>MAY</bcp14> be returned
to callers whose credentials have not been verified by the data
server (e.g., when the AUTH_SYS uid presented does not match the
synthetic uid on the data file).  The information they reveal
(that a chunk is locked, or that a CRC mismatch occurred) does
not directly disclose file contents but may indicate concurrent
write activity.  Implementations that are concerned about this
level of disclosure <bcp14>SHOULD</bcp14> require that operations on CHUNK ops
only succeed after credential verification and return
NFS4ERR_ACCESS for unverified callers rather than the more
specific error codes.</t>
      </section>
      <section anchor="sec-tls">
        <name>Transport Layer Security</name>
        <t>RPC-over-TLS <xref target="RFC9289"/> <bcp14>MAY</bcp14> be used to protect traffic between the
client and the metadata server and between the client and data servers.
When RPC-over-TLS is in use on the data server path, the synthetic
uid/gid credentials carried in AUTH_SYS remain the access control
mechanism; TLS provides confidentiality and integrity for the transport
but does not replace the fencing model described in <xref target="sec-Fencing-Clients"/>.
Servers that require transport security <bcp14>SHOULD</bcp14> advertise this via the
SECINFO mechanism rather than silently dropping connections.</t>
      </section>
      <section anchor="rpcsecgss-and-security-services">
        <name>RPCSEC_GSS and Security Services</name>
        <t><cref source="Tom">Why we don't want to support RPCSEC_GSS.</cref></t>
        <t>Because of the special use of principals within the loosely coupled
model, the issues are different depending on the coupling model.</t>
        <section anchor="loosely-coupled">
          <name>Loosely Coupled</name>
          <t>RPCSEC_GSS version 3 (RPCSEC_GSSv3) <xref target="RFC7861"/> contains facilities
that would allow it to be used to authorize the client to the storage
device on behalf of the metadata server.  Doing so would require
that each of the metadata server, storage device, and client would
need to implement RPCSEC_GSSv3 using an RPC-application-defined
structured privilege assertion in a manner described in Section
4.9.1 of <xref target="RFC7862"/>.  The specifics necessary to do so are not
described in this document.  This is principally because any such
specification would require extensive implementation work on a wide
range of storage devices, which would be unlikely to result in a
widely usable specification for a considerable time.</t>
          <t>As a result, the layout type described in this document will not
provide support for use of RPCSEC_GSS together with the loosely
coupled model.  However, future layout types could be specified,
which would allow such support, either through the use of RPCSEC_GSSv3
or in other ways.</t>
        </section>
        <section anchor="tightly-coupled">
          <name>Tightly Coupled</name>
          <t>With tight coupling, the principal used to access the metadata file
is exactly the same as used to access the data file.  The storage
device can use the control protocol to validate any RPC credentials.
As a result, there are no security issues related to using RPCSEC_GSS
with a tightly coupled system.  For example, if Kerberos V5 Generic
Security Service Application Program Interface (GSS-API) <xref target="RFC4121"/>
is used as the security mechanism, then the storage device could
use a control protocol to validate the RPC credentials to the
metadata server.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t><xref target="RFC8881"/> introduced the "pNFS Layout Types Registry"; new layout
type numbers in this registry need to be assigned by IANA.  This
document defines a new layout type number: LAYOUT4_FLEX_FILES_V2
(see <xref target="tbl_layout_types"/>).</t>
      <table anchor="tbl_layout_types">
        <name>Layout Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Layout Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">LAYOUT4_FLEX_FILES_V2</td>
            <td align="left">0x5</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t><xref target="RFC8881"/> also introduced the "NFSv4 Recallable Object Types
Registry".  This document defines new recallable objects for
RCA4_TYPE_MASK_FF2_LAYOUT_MIN and RCA4_TYPE_MASK_FF2_LAYOUT_MAX
(see <xref target="tbl_recallables"/>).</t>
      <table anchor="tbl_recallables">
        <name>Recallable Object Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Recallable Object Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MIN</td>
            <td align="left">20</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MAX</td>
            <td align="left">21</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t>This document introduces the 'Flexible File Version 2 Layout Type
Erasure Coding Type Registry'.  The registry uses a 32-bit value
space partitioned into ranges based on the intended scope of the
encoding type (see <xref target="tbl-coding-ranges"/>).</t>
      <table anchor="tbl-coding-ranges">
        <name>Erasure Coding Type Value Ranges</name>
        <thead>
          <tr>
            <th align="left">Range</th>
            <th align="left">Purpose</th>
            <th align="left">Allocation Policy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">0x0000–0x00FF</td>
            <td align="left">Standards Track</td>
            <td align="left">IETF Review (RFC required)</td>
          </tr>
          <tr>
            <td align="left">0x0100–0x0FFF</td>
            <td align="left">Experimental</td>
            <td align="left">Expert Review</td>
          </tr>
          <tr>
            <td align="left">0x1000–0x7FFF</td>
            <td align="left">Vendor (open)</td>
            <td align="left">First Come First Served</td>
          </tr>
          <tr>
            <td align="left">0x8000–0xFFFE</td>
            <td align="left">Private/proprietary</td>
            <td align="left">No registration required</td>
          </tr>
          <tr>
            <td align="left">0xFFFF</td>
            <td align="left">Reserved</td>
            <td align="left">—</td>
          </tr>
        </tbody>
      </table>
      <dl>
        <dt>Standards Track (0x0000–0x00FF)</dt>
        <dd>
          <t>Encoding types intended for broad interoperability.  The
specification <bcp14>MUST</bcp14> include a complete mathematical description
sufficient for independent interoperable implementations (see
<xref target="encoding-type-interoperability"/>).  Allocated by IETF Review.</t>
        </dd>
        <dt>Experimental (0x0100–0x0FFF)</dt>
        <dd>
          <t>Encoding types under development or evaluation.  An Internet-Draft
is sufficient for allocation.  The specification <bcp14>SHOULD</bcp14> include
enough detail for interoperability testing.  Allocated by Expert
Review.</t>
        </dd>
        <dt>Vendor (open) (0x1000–0x7FFF)</dt>
        <dd>
          <t>Encoding types with a published specification or patent reference.
Interoperability is expected among implementations that license or
implement the specification.  The registration <bcp14>MUST</bcp14> include either a
mathematical specification or a patent reference.  Allocated First Come
First Served.</t>
        </dd>
        <dt>Private/proprietary (0x8000–0xFFFE)</dt>
        <dd>
          <t>Encoding types for use within a single vendor's ecosystem.
No registration is required.  Interoperability with other
implementations is not expected.  The encoding type name <bcp14>SHOULD</bcp14>
include an organizational identifier (e.g.,
FFV2_ENCODING_ACME_FOOBAR).  A client that encounters a value
in this range from an unrecognized server <bcp14>SHOULD</bcp14> treat it as an
unsupported encoding type.</t>
        </dd>
      </dl>
      <t>This partitioning prevents contention for small numbers in the
Standards Track range and provides a clear signal to clients about
what level of interoperability to expect.</t>
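      <t>A client-side helper for classifying a received value against
these ranges might look as follows; this non-normative fragment
covers only the partitioned 0x0000 through 0xFFFF span:</t>
      <figure anchor="fig-ex-coding-range">
        <name>Sketch of Erasure Coding Type Range Classification</name>
        <sourcecode type="python"><![CDATA[
# Non-normative helper mapping a coding type value to its registry
# range; values above 0xFFFF are not partitioned by the table.
def coding_type_range(value):
    if value == 0xFFFF:
        return "Reserved"
    if 0x0000 <= value <= 0x00FF:
        return "Standards Track"
    if 0x0100 <= value <= 0x0FFF:
        return "Experimental"
    if 0x1000 <= value <= 0x7FFF:
        return "Vendor (open)"
    if 0x8000 <= value <= 0xFFFE:
        return "Private/proprietary"
    return "Unassigned"
]]></sourcecode>
      </figure>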
      <t>This document defines the FFV2_CODING_MIRRORED type for Client-Side
Mirroring, together with the Mojette and Reed-Solomon Erasure Coding
types (see <xref target="tbl-coding-types"/>).</t>
      <table anchor="tbl-coding-types">
        <name>Flexible File Version 2 Layout Type Erasure Coding Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Erasure Coding Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">FFV2_CODING_MIRRORED</td>
            <td align="left">1</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_SYSTEMATIC</td>
            <td align="left">2</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC</td>
            <td align="left">3</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_RS_VANDERMONDE</td>
            <td align="left">4</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
        </tbody>
      </table>
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The following people from Hammerspace were instrumental in driving
the Flexible File Version 2 Layout Type: David Flynn, Trond Myklebust,
Didier Feron, Jean-Pierre Monchanin, Pierre Evenou, and Brian Pawlowski.</t>
      <t>Pierre Evenou contributed the Mojette Transform encoding type
specification, drawing on the work of Nicolas Normand, Benoit Parrein,
and the discrete geometry research group at the University of Nantes.</t>
      <t>Christoph Hellwig was instrumental in making sure the Flexible File
Version 2 Layout Type was applicable to more than the Mojette
Transform.</t>
      <t>Chris Inacio, Brian Pawlowski, and Gorry Fairhurst guided this
process.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC4121">
          <front>
            <title>The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2</title>
            <author fullname="L. Zhu" initials="L." surname="Zhu"/>
            <author fullname="K. Jaganathan" initials="K." surname="Jaganathan"/>
            <author fullname="S. Hartman" initials="S." surname="Hartman"/>
            <date month="July" year="2005"/>
            <abstract>
              <t>This document defines protocols, procedures, and conventions to be employed by peers implementing the Generic Security Service Application Program Interface (GSS-API) when using the Kerberos Version 5 mechanism.</t>
              <t>RFC 1964 is updated and incremental changes are proposed in response to recent developments such as the introduction of Kerberos cryptosystem framework. These changes support the inclusion of new cryptosystems, by defining new per-message tokens along with their encryption and checksum algorithms based on the cryptosystem profiles. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4121"/>
          <seriesInfo name="DOI" value="10.17487/RFC4121"/>
        </reference>
        <reference anchor="RFC4506">
          <front>
            <title>XDR: External Data Representation Standard</title>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <date month="May" year="2006"/>
            <abstract>
              <t>This document describes the External Data Representation Standard (XDR) protocol as it is currently deployed and accepted. This document obsoletes RFC 1832. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="67"/>
          <seriesInfo name="RFC" value="4506"/>
          <seriesInfo name="DOI" value="10.17487/RFC4506"/>
        </reference>
        <reference anchor="RFC5531">
          <front>
            <title>RPC: Remote Procedure Call Protocol Specification Version 2</title>
            <author fullname="R. Thurlow" initials="R." surname="Thurlow"/>
            <date month="May" year="2009"/>
            <abstract>
              <t>This document describes the Open Network Computing (ONC) Remote Procedure Call (RPC) version 2 protocol as it is currently deployed and accepted. This document obsoletes RFC 1831. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5531"/>
          <seriesInfo name="DOI" value="10.17487/RFC5531"/>
        </reference>
        <reference anchor="RFC5662">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 External Data Representation Standard (XDR) Description</title>
            <author fullname="S. Shepler" initials="S." role="editor" surname="Shepler"/>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="January" year="2010"/>
            <abstract>
              <t>This document provides the External Data Representation Standard (XDR) description for Network File System version 4 (NFSv4) minor version 1. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5662"/>
          <seriesInfo name="DOI" value="10.17487/RFC5662"/>
        </reference>
        <reference anchor="RFC7530">
          <front>
            <title>Network File System (NFS) Version 4 Protocol</title>
            <author fullname="T. Haynes" initials="T." role="editor" surname="Haynes"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.</t>
              <t>This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7530"/>
          <seriesInfo name="DOI" value="10.17487/RFC7530"/>
        </reference>
        <reference anchor="RFC7861">
          <front>
            <title>Remote Procedure Call (RPC) Security Version 3</title>
            <author fullname="A. Adamson" initials="A." surname="Adamson"/>
            <author fullname="N. Williams" initials="N." surname="Williams"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document specifies version 3 of the Remote Procedure Call (RPC) security protocol (RPCSEC_GSS). This protocol provides support for multi-principal authentication of client hosts and user principals to a server (constructed by generic composition), security label assertions for multi-level security and type enforcement, structured privilege assertions, and channel bindings. This document updates RFC 5403.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7861"/>
          <seriesInfo name="DOI" value="10.17487/RFC7861"/>
        </reference>
        <reference anchor="RFC7862">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 Protocol</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document describes NFS version 4 minor version 2; it describes the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include the following: Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7862"/>
          <seriesInfo name="DOI" value="10.17487/RFC7862"/>
        </reference>
        <reference anchor="RFC7863">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document provides the External Data Representation (XDR) description for NFS version 4 minor version 2.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7863"/>
          <seriesInfo name="DOI" value="10.17487/RFC7863"/>
        </reference>
        <reference anchor="RFC8178">
          <front>
            <title>Rules for NFSv4 Extensions and Minor Versions</title>
            <author fullname="D. Noveck" initials="D." surname="Noveck"/>
            <date month="July" year="2017"/>
            <abstract>
              <t>This document describes the rules relating to the extension of the NFSv4 family of protocols. It covers the creation of minor versions, the addition of optional features to existing minor versions, and the correction of flaws in features already published as Proposed Standards. The rules relating to the construction of minor versions and the interaction of minor version implementations that appear in this document supersede the minor versioning rules in RFC 5661 and other RFCs defining minor versions.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8178"/>
          <seriesInfo name="DOI" value="10.17487/RFC8178"/>
        </reference>
        <reference anchor="RFC8434">
          <front>
            <title>Requirements for Parallel NFS (pNFS) Layout Types</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>This document defines the requirements that individual Parallel NFS (pNFS) layout types need to meet in order to work within the pNFS framework as defined in RFC 5661. In so doing, this document aims to clearly distinguish between requirements for pNFS as a whole and those specifically directed to the pNFS file layout. The lack of a clear separation between the two sets of requirements has been troublesome for those specifying and evaluating new layout types. In this regard, this document updates RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8434"/>
          <seriesInfo name="DOI" value="10.17487/RFC8434"/>
        </reference>
        <reference anchor="RFC8435">
          <front>
            <title>Parallel NFS (pNFS) Flexible File Layout</title>
            <author fullname="B. Halevy" initials="B." surname="Halevy"/>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>Parallel NFS (pNFS) allows a separation between the metadata (onto a metadata server) and data (onto a storage device) for a file. The flexible file layout type is defined in this document as an extension to pNFS that allows the use of storage devices that require only a limited degree of interaction with the metadata server and use already-existing protocols. Client-side mirroring is also added to provide replication of files.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8435"/>
          <seriesInfo name="DOI" value="10.17487/RFC8435"/>
        </reference>
        <reference anchor="RFC8881">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 Protocol</title>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <author fullname="C. Lever" initials="C." surname="Lever"/>
            <date month="August" year="2020"/>
            <abstract>
              <t>This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 7530) and protocol extensions made subsequently. The later minor version has no dependencies on NFS version 4 minor version 0, and is considered a separate protocol.</t>
              <t>This document obsoletes RFC 5661. It substantially revises the treatment of features relating to multi-server namespace, superseding the description of those features appearing in RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8881"/>
          <seriesInfo name="DOI" value="10.17487/RFC8881"/>
        </reference>
        <reference anchor="RFC9289">
          <front>
            <title>Towards Remote Procedure Call Encryption by Default</title>
            <author fullname="T. Myklebust" initials="T." surname="Myklebust"/>
            <author fullname="C. Lever" initials="C." role="editor" surname="Lever"/>
            <date month="September" year="2022"/>
            <abstract>
              <t>This document describes a mechanism that, through the use of opportunistic Transport Layer Security (TLS), enables encryption of Remote Procedure Call (RPC) transactions while they are in transit. The proposed mechanism interoperates with Open Network Computing (ONC) RPC implementations that do not support it. This document updates RFC 5531.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9289"/>
          <seriesInfo name="DOI" value="10.17487/RFC9289"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Plank97">
          <front>
            <title>A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like Systems</title>
            <author initials="J." surname="Plank" fullname="J. Plank">
              <organization/>
            </author>
            <date year="1997" month="September"/>
          </front>
        </reference>
        <reference anchor="RFC1813">
          <front>
            <title>NFS Version 3 Protocol Specification</title>
            <author fullname="B. Callaghan" initials="B." surname="Callaghan"/>
            <author fullname="B. Pawlowski" initials="B." surname="Pawlowski"/>
            <author fullname="P. Staubach" initials="P." surname="Staubach"/>
            <date month="June" year="1995"/>
            <abstract>
              <t>This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations. This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1813"/>
          <seriesInfo name="DOI" value="10.17487/RFC1813"/>
        </reference>
        <reference anchor="RFC4519">
          <front>
            <title>Lightweight Directory Access Protocol (LDAP): Schema for User Applications</title>
            <author fullname="A. Sciberras" initials="A." role="editor" surname="Sciberras"/>
            <date month="June" year="2006"/>
            <abstract>
              <t>This document is an integral part of the Lightweight Directory Access Protocol (LDAP) technical specification. It provides a technical specification of attribute types and object classes intended for use by LDAP directory clients for many directory services, such as White Pages. These objects are widely used as a basis for the schema in many LDAP directories. This document does not cover attributes used for the administration of directory servers, nor does it include directory objects defined for specific uses in other documents. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4519"/>
          <seriesInfo name="DOI" value="10.17487/RFC4519"/>
        </reference>
        <reference anchor="PARREIN">
          <front>
            <title>Multiple Description Coding Using Exact Discrete Radon Transform</title>
            <author initials="B." surname="Parrein" fullname="B. Parrein">
              <organization/>
            </author>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="J.-P." surname="Guedon" fullname="J.-P. Guedon">
              <organization/>
            </author>
            <date year="2001"/>
          </front>
          <seriesInfo name="IEEE" value="Data Compression Conference (DCC)"/>
        </reference>
        <reference anchor="NORMAND">
          <front>
            <title>A Geometry Driven Reconstruction Algorithm for the Mojette Transform</title>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="A." surname="Kingston" fullname="A. Kingston">
              <organization/>
            </author>
            <author initials="P." surname="Evenou" fullname="P. Evenou">
              <organization/>
            </author>
            <date year="2006"/>
          </front>
          <seriesInfo name="LNCS" value="4245, pp. 122-133, DGCI 2006"/>
        </reference>
        <reference anchor="KATZ">
          <front>
            <title>Questions of Uniqueness and Resolution in Reconstruction from Projections</title>
            <author initials="M." surname="Katz" fullname="M. Katz">
              <organization/>
            </author>
            <date year="1978"/>
          </front>
          <seriesInfo name="Springer" value="Lecture Notes in Biomathematics"/>
        </reference>
      </references>
    </references>
  </back>
  <!-- ##markdown-source:
H4sIADrFxmkAA+y9bXYcx5Uo+D9WkU2/OQSsQpEAIYoCLbshAJTQJgE+AJJa
r48HL1GVBaRZlVmdWQWwbLHPLGIWMGuZpcxK5n5G3IjMAiBL7fabMzzdMpmV
GR83btzvj62tLbcoF9NiL3ufN/l0Wkyzkzfn2cYc/ruZvZkWH8uraZG9KeE/
b/NVvVxk3xdNW9ZVtuPyq6umuN2j16JXbnfcuB5V+QzGHTf5ZLF1k6+qot2q
Ju3t7tYE3p/A6+3W7c7W8xdulC+K67pZ7WXtYuzG8K+97K+H+xdHn9yortqi
apftXrZoloUr5w39rV3sPH/+5XNYQ1Pke9k3RVXA8t1d3Xy4burlfC87KRb4
L17X+apdFDO/9F33oVjBr+O97LhaFE1VLLYOcZ3OtYu8Gl/m07qCRayK1s3L
vezfFvVokLV1s2iKSQt/W834L7DLWT6fl9X1IBvVs1lRLdo/OZcvFzd1s+ey
LZfBn7KC5V8Ms28JCPSIYXNxU8/y1j6vm+u8Kv+SL2CZe/ADDNm083xU0K/F
LC+ne9m0vr5ZNf98jf8awrTOVXUzg29uC5gzO3tzsLu9s61//fz5S/nr55+/
0Kefv3y5I3/94vMXz/Wvr15uh7/uhL++kL++2v7ilf5198Vu+Ovn+tdXr3SE
L3defbnnXFlN7OLeT/Pqw5df7NF+BPX2s4vlom7KfJrB4ZwVxXjrvJ7WM/jH
QT0G4GYwRPYmX04XWxf1FE66GhUA1exs//hwa1p+0AOmQT3w4c8Ww/5fhjwv
PVPgJw8XeXNdLPaym8Vivvfs2V1xNSyKUTtcLj4Mi/Hy2X/8eY5vP5P/5nM4
mGcH51tfvtx68WJneLPg2Rl7z4s5LOeqaLLtL7/8guGx/Wr7hT+T7S8JGPtn
Z0fHJxEwnryDbZZzQNrDoh015RxRQeHwXYv/PfqYjxbZYQk/F4siO8vH8MYF
QKVFUD/pQAHhwHv+eoj3vCnKKvnhZJid4DFV4+SHfxluvR9m3ywLmMNsEG7f
Nv2zLZqyaPGQda7jo6Mj2MZhvshh2bN5U7Qtb6GaFE2BR7dxeHCwies8OT17
t39yGANgH65zPSsWzSo7bABtECWQDsC1HxEw9qdALcrFzYzwYnFTZO/qPxcL
AMWjgLB2r/vD7I8A33ZRp+ABEBzBQuplDIKXa0Dw9uTgHDayu7P7+SCbz4fZ
9s7O1vaLF4Ps8JuDY/oSF/jH/Yv/EW/9vy+LFrfYZvUk+64q/30JhK1tM1gr
AKGtp0sCQNkByaSpZ9n7BsBA/27v2/872Ga++IvZyvaXX7xas5XzeQMgKYDq
Pnni3NbWVpZfwbSAgM71sQx4UN/BgmGoOfxMi7sCUlzAMeJJwbnmY0SNjbpa
1Fnu/AOY+7ZoNmmv9g1gCnWTXxfZuLgtR8UmHXqeIQMZZkBCC9fLpi5W8yLw
qqxs4ftJWRVjhN7iBv9dj5ZIsbO8dXmVFR8XwGnwbZgVNwNv5QvdEK592RZ4
LvF6WnrNNcW/L8sGfq+mK1jdtJyVC5hrXFw3BX1VIqPJ+bTuAHkjaDjePO0d
Z8mnwNbGqy3YF+ADXPl5UwMPqqctbJluFj7gs8at5dO2dvl4DDPi4pv6thwX
NOU1XJQVfPR1DVMeTEvY71aLP87KpqnxbGlSWIw7avJ2CVsQWpPrLYPhG9r8
mEA/jqcfMlZU8O/LE/zPor48g8UD5J1DGrXk6w8gYKgjnwVq+wEgB7QUOFvN
mAEQv93NkGfj7MTFM+Rw+K8pgCHbIOnhn8tiMRkCn9wcuLubcnRD+29GN0Ap
xhmcxL/9aQNpeAtEHD+Xn4b62TN88Kwt6H/+QBz1Eof/ikbfHGbn9bKBVY1q
ABKCpmxbuJW8BKY3fhejHJEbni6rdOprgNzyCrnzM5x56+6ahZ9nVvjZBOD9
IBv+hjbsuSUAJRk9e8ToOOBFWB/8ZbQEcl8tCClJHgLsh8O7RqoMaHG8yCrg
ty2izVXhxsBPmhnfkkl2V2R3OdwP+G1Uz1fZ7Xa2gFuC/77dAVHFvvNnEMlg
inE5mTg66gIGvoPvAAerBcz0Bt5vyusbmLC+G+B3iFU4Lu4e39fhkTADLutj
HAamw59gmH3A9AHc1TlgH5AMpna4oll9C0O2OPB82dL1csVsfpO3AAUijwh5
IhutXBK9QAsgFS1A7nIfCdcPx+8HNPG/Hp4hUdBLO8tXiOUgFC4IksNLxvxZ
OR5PC+d+g7JkU4+ZIjt3XPXK1Bst0INzmXl7B+/FX/8qktOnT5uDXroAfH7Z
wD6nTNtwwRkTf7iwTIGA1KC4AAhzdwNsljbKd7Vs3bRGGXvM9LJh0ONRIT9e
2FFbvuL+J6V1LclXSCfHuLibekwsCmSJvLrGk6KZYEcJcYQZeXcgIn76pCTY
4R5jqq2Uejui30R0iFridraucvy3AYxsHbeTj2A6/HnZKuYg1D3h3CMC84KX
g7LYp08DRzRn+JwfoiAMD5kQDbftsQyIEvAPO/I2CMifPuF9CwQ3z66n9RWI
saBFgDQyAxIyzZAx3ObTgi8SrVcuCOGiM9CHabKrfPRhq4DZ8Oo09dTvABAN
bw8ccDkD8RA5F+xWeKuLeKvhJrKZz3oOZr8KIxHFcUhxCoA2fMsnlZULOOc7
IhkgCMD4IJTNitENKCntjEgAUMF6OR3rB7nrrBsoTp6do16VN4A2IKONPnjm
yyiZCfskBQpR0OXd/eMpt3Dvy0nJXNwcEO11BMRef3SKdrt8SDdFd0C9MT0i
AfydkQ1QEkBIlBJ0WrgfSAL0vPGufAByNi3G1yIawLmXSk6zeU4YaeSfVNxR
5ttzPMcg3gEfHdifncocTDurSXkNBGCctcvRDaNWHx4ADcwmy+lU7ogjMtwy
NgqNIGrBl1yXVPWOBTeyde2qgt8W5SiTvSpoRyRgyDz4Q1mNSwDVEu6EnwdJ
LRBHIKzjEtEunxphBICIL8queU44DKBycyD0RC8QPZpiPi2BppHmDberHMvf
cfHwY17SaMBeQJ7V++ZXAND9AdDcjfrkoUHvvuXIkUHohUWxD7Ubwln8Jr9F
eQOXyIN5EPM8guoCI4LwvBx9AKIq7+PrKPcxu8KdoOnDE3hgJdkdSGTw7+sa
38UHMtPQmaFJBkX1oBwziSz4OwR51Qo7g+9vcmCZgDojkACI4zszIl+3JZ0k
Is8qW87HwkSOUThhYbjAlwdml0REUIbFR2/3fzz97uLo7Oz0LKtBY851apZz
+iBtpDoCUXLy9Ek4YDlZXnMxDgfulMCqRCSwSWRff6w4tEcqFldpUVdw0njv
XXrvvXCtrzJccNrilk4B2GOl8MkRJI27I1IJlIGwhI4bd+K5QXa1ghVOAQzK
wwQzRI6Ccz4eFkOC9yrj0RCH4MiLGBZo+FignaTVK0FbQ4oO8xJ0aiL0+l5G
MgJKUwCxb+u7As8CcH1E0mnY7eimAJwd4DJxl6O6aZZzlsIQYxBMsjl3heQM
hSwUzIXOjW5qIGC41jvEaVJiy8VyUTCeXtf1mIVNkFpQDClHyynfHEueGRX5
k/XaHyzh1iuAAFk+PtwOncoWL44Vat3exsHZwYsdVjILUYhGpBCJ5uWuAFAf
eLMLNTcQCyFl9YZUH9ousLCbZfWBpexSZEPkIMWd85ch3O36ztwiwo16OkVp
QC99Sq/hnL7zsg7sjWgu7AmIkOWP21+8QgGG781t2dKkwvCch+hVAcpEkUV6
MBxSn8hDZGxFALhallOkN25RzxX/8PumUoIPiMckkq/VBgjVmzwaGiaBcV+T
6RZQxBHV04leCNu2p84iRpv9TyYru5dv3h796+Wb47dH55ff7/xPvGWoLJDS
Vo2LOUhRSJCs0LxgpCO9ug16+HocUgTadhs908KcRrzdFAq/uKtjmRpBdXJ6
4fA070AIQlvxHAByhUYfAPmbN6DgyBdAPvEOkfTQoPSQt/LKtrxCQjiKAbQ4
pDv7yqXe7f8INHs+rxsYATX+aBltOYPLlFdFvWynq9coYwlRLKZwEfg6ATEs
kZJaqAG2oCDI+//m6ILEtaJdwAn95jdwLfBciJWDyk84v+fcXpadBqIEOIBm
TWTJ+LsKSMVHFCWvVcjPs0g0mgAiwSL9LWMUQmnmCsWpfA40jS0e8iFdTS9R
RjM6VN+Q8FrlpvwLYUQV7hWP4LwQi0b9ZaVMwUqpvEWSUlgmlXeYN4Tlwv8x
KJFming4YGKvnG9AJxoUNkLSGSrUV4VZQSxJ9kn7rl+SZJ2vNFY5FK2IQ6Vy
d1bkoxt7YwwsVG4OO4exFkig88YoBUzPUt2C2GGEVMS9HEkKoL0XBcuuDwJe
hIA8UluHTCnQlJHJ7cmN4oBSO5wHCTEVykQLYDPMr1ukeijMtAOGXj4rjMrF
SLPUe9ijlMDJKcT1IcKsT7JkwAHxLnLU2/FasawT6P6Alh9k30Gf8Evcj010
0/USUAB3W1yTquVcwHFeyz7/w0s7ZWXW8hSEJsCGwhpcZQj8O49woTcHaY63
87E0MyDIsBLlbxjvCwbqkbZ4yEPhG/5XAMEs/6CmShwT/z5TP0nYsezBGdOG
EWpi+6YCIDFnZi3seKZGlDyFTxskKWb5wLXya74+xPPHXqeBjQOyoW3MvIJj
qpkWYFuClIW3CG20JMGTzYwpJMnfDD84lv4tsUE3O0c0698eS5WkygYhhy04
skcBH043y+GdXI3kc7FewTwbOMymOSO6gHj97XmrAllfoRtCeS2N2AYEYMqP
eo4DvELZBuQNEZJaTwMIXDK2jAd0CwVWT7DzxQK0dxQdXfSiIqnQxY3D8029
eWTWl+e8BZX/dQ+A9PTxnSrAZmNOFoKUVK1NNZFea54yVGBSVCN/FlZKA2E0
3PyuqkkKRNtnF2AhSYbBW3D87FTURc+HjAhpHsr1pe0YyqmQSdh9WF3q7Yhs
bbdlrpY250liZOH8fLi9M9xNrZywkOtyHODC1v5yTCIcCHsNLBi0+qUIbCXK
cWjZIbmB18afMORB05jW1TXeCN5HGNmyYi/voBGDbyCdRuC89FK8Y2Etdpyy
Gk2XiDQx5xckDHAwMN10qgDxXlht7IMvXw/kOH43WVmjKVGP6rrJWb0UUx1e
pS3S+Wv+xzPW9hE1YgOEH1A4go7YLq+20E7TJoyaZAeAf/S7N0MCEuPYIpTy
GqPDb7MXwxfDnedEGQENdoZfds3dCO+mnJMyDjcaVAcXm8hhjO5XtFOSQVEq
WsFu0RBdmB2yqKU73N55tXUFqu+/L3OGPlvU6eMsj0jCktyuAE1VN9BsZ0dV
ojFWCMjXzCb9hTNmIm9pRFuPvWle7ku8AsPPhx3PAPpQlg2e+MBFb6bgCbN5
roGIVaLoXI3Jn6ZW4xhYzMLYLCCPvG+kh2R0sJ80DovT/iqQABUunPICfyny
6+umuM4tA9ZPYOKM4VjEF5U8sijNTdGD5FKxF5Zc1y0yk+V86omwJ+tdK3Vr
rr6h4cL6lEjHwu+jud8t8HjQuuRo5hgN40Q/nha3ufoGZCQeYoAiZw3KnEAi
hgAoZMV0ItSJTfBCl+jFRTkj5WuagxoBN9Mf1MAfArwxyI5O3ziYoeSfisUI
tpwypI13noXSxtcyUf+hpZZBeozZqGf56LzumPnELED22KYYgSBizLu3NUqD
gpOst3RPALUs+GFeNLgW8lGBCjGCE14FW2Qrulf8IlJOa6HB5V2TmsPuw4BF
RFfRUtXwmReszvSxdT0wx6K63iIydvVJ61mG7nRhgBOy8Om71szXOrHLDthe
ijqBiHSqQqrhFuPEqi0GUo7WeyAMVlisEouXXsVIHqBv0OVwQ96ccU0aiyA6
2tuuYOhZYnkHif26VJsQ7OwI3luxrVtUO7waPOvAofjDMK5qnQfFjmndqhFd
A36SDXlTAcKFbFk5aqEEy96dix1vNS9HrB5qlAOHDUhI4bTOx62Iy7ho/gqQ
nHgQMVaPo/4gDbseFZPlVNB4QEITnHvO1r0BH37KNoS1pI7g1KHwjQ5Oa5sV
uRoUjSmRzcRo7GezL5mHgMsBF4ThJlP0lufVKvvh7PjiqGUiMAC8ATAUwifV
yii8vh/DCQp8NVMgoMykXhoVMR7YsR/0FI3TOLAymtFo2bT9Xpq7ErCfTNBA
4+YY2cPOIdwfLE7jz2T9tFpv6hqI0TZmJOmQDiEVu7m8ozBAiEDhDfmBdGLc
Hlld0HDKUTaR5t5LCUS+d+hprIyFXO7WPWSDfJrojyESi+Z4oCJ4eiPA4kUh
9tLOAnpGhf2gtSzshFU5um4A96slyhlsUPNX6Oxo/xD58K8riqHlCvU91MsB
gFvAQEkZv080A6Ccq4iDJh20cV5X8IDXlukYABkg+xVT+QHJlVskV9JsGO8L
3PpaGAfKoMKBaJMWawKcOKw0qC3CMXgZzDWQ24gwG1jLKrupp2OkYfmcne+4
UC+WnxtxcWe4HSRAcngzR2RhEXhTHINVGKnQGxQTy2vPRTB77DEERoLe1cpv
E/YNHKLLT7s+beQK+SiyWHtvvz94G34XQuJgZb+Mp03KBjjlB4XPh89myuac
YT+suQHd1g14rvbBGI8psgLdQNPFDbM4twFsbJKXU3TlbgaGB0+V52X/z//x
fxoTWZiTWCTjyXQ1dIf6PlpKYPbrgs1r0xUjTvr5rCQrASzJRlW773OUm+Fv
EuKm3NSAEb8W7okAXpDSF8vSiJsU/8N22dia2SNeowTDNw/hqsoDct2JeDg7
hl+WEcUKK+puboNRgtUB+MCc6IAdmEwgkZU65hfI6MMQnpDfE67i3NIaLtCr
+yi7BYOGXgcq3gZ7KuvqcNkLNdmFJ6yU0IKDnxUwuKhQnRCBRMCCTicVR9k3
yYF1tfr+5dKhfZssjLh5EiqZmeEBAUjwZba0Oh98Kq+LmxuW/LNZAYsW7C46
sy6Ht4A+SyQCZP38QE5thPqTd9+dXzwZ8P+i7wz/fnb03787Pjs6xL+ff7v/
9q3/i5M3zr89/e7tYfhb+PLg9N27o5ND/hieZtEj9+Td/o9PWAJ4cvr+4vj0
ZP/tk55I4aYQAw2F884x/h49Ay4KJfr64P3//X9t7wJF/icgyTvb218CSeZ/
vNr+AukzXlaezd9d9uk7oPZF3lDMFAry+bxcAPMmVbC9Qf6PZw6A/O2/IWT+
tJf97mo03979vTzADUcPFWbRQ4JZ90nnYwZiz6OeaTw0o+cJpOP17v8Y/Vvh
bh7+7g9TDCrb2n71h987DLHMDoQEITqeCyc4FNuo837QxPeEBGR0Q+YAT0TI
OkBxMjAgnBsH6xFBIGLX+ekB31ufMW9oo9CMWYZMnpyyhMQMdN4ZaLH5iIxg
qF3H62+HHUc4OvlboBEcM4BS3ERd2KmFFx0OGO0LAGL9lAOaUZGqw0RiNOjZ
+KDH7eXQCcsXQUI6keuJFb1vITBP3TLPIwINilXkeqytPj3oZyEsiRdRjDtQ
oe0hnG+VCxdSM1RbgJaA0iUeDTuuEQuPL8jI5dzOMInWLPWcKcQ1GpDBpkYy
jNlGXutB55WjPozivWhEVgxE2hC5KjXKgCMuBuwNB1jlV+VU9LT+GNNy7Dzi
0ksce6TiF2kOHBCrxhIQWIJAOV0xGgBDgRNzwTzOpA9YWFHdlk1dzeRYLBbS
8pndrggVFeYDFquZwImcbkgnA3csUZkKNbpCCf7pt7TFKIEjyL4vnJF9McYY
wwJWGHWrEgCtOoqyZUOZnhundYCoXVerGRrpvDmUjiE91uSO8wGrZVU1E85i
8dqC8fwrqcD8o5yMUYma4To++bVXU+IvLHo7dwZzNuMpaqYiz6rYZjGyL0YU
Dd3eDkeIN8iWc3K/j4ryltVVO1litn413EWrtbNWazwAjmIMdokWZEyUpGmB
RO9Y1EVcQn6Ko3oT94uOdiNhNi2qs97mmUcxAmx0R7uJ+BdNkEii31/RzUZ6
JRYPuq8jzpugSDWymthtY/yqd1bGAPMg9y64Wf4B4yhl4BBsW7auO296KlfF
dckKE6iHzarXI5hfqXmcY2laaytGAXoSrDtZT0IVqBcUNOE4YoaRnpd2ibea
bhJ+iBFPl+c/nhwMUjRI7EFqrCKBBAOZEH/GquDH36Tur3BeQxfZPw1Ub3IO
w0ZrxgMDslG0c4LixJJQkXZVjW6AyIHI6g3tZDdnmHnN1fVOIZfwDft6JcdK
SC++HUsp2V9/A2RyS97ekrc/AU95NK3pC8nOfEg26idttiGKScv5dNf0TL2s
7aYnKx4fB6JNEEEErgKiJqoqkw7WcqhiMbaeRjGEKWmD2TL63iWvs5eS32aB
wTpRhEQTTZlMbnfG7SUFzZKBnv8tO8DQRoQiPr3kz3zqTFBn1BbkfJgfAYXg
QeMEV6sGvpwVM8S59+hfH1MwBSzfbZy9P5CgRUyhBh4zArUbOT+I5iSR9Hq3
RMNTxKgnE9ypsmYKIqAAkPEa0yffZ2QqhHQ+5n5J9szxM/SSKKS7Me4qUsVW
Foq9nmEES4nchC2gTPJAOEClF5cK+7VbxKNS2UCiOZV33sG1uTbvBsSSMEKB
CcoKDg1hOeVT7B6dnV3uHxwcnZ9zLO5QLgDz6vWCVMep0tqI4wwDLgqHkIY3
YI1TxYfXGPdcEjsd8WFNhLpPWNHiU4GdauizY4Yo7lA9NcDiUfDV8ckOYiN5
6c2CHJLn9XDcPkrqQLXW3GQ01d6QVrhciHFWDhvDJSTkH0iy3CBlQ0bB0vNl
Noi2FDRn8Bmo/Kn5W8yNE8tl5EjzE9K97VtxUwPj2GiXc6ZDmL7rcz+6pJIp
RE2WPIKtEhm8ZeqdQ2sJ2j59VplP2SN2CxPSTBoTTtIsGg7VC+nQQIRJDmip
Y+GYZBA6+hQWRFs8JoxqofW3KgujFRjVFx+XK5FiWX6Nfl20TeFdQqlRPngN
GnqFjKfE7JCQNAJIRcF6XjGJ88L2v7v4FpjrufeKmkN1waMTRUU+pJHGwMdw
fJq8YN5U5LdFOwZNACsZqNazVVZbMNwWJzUG3WuRz+bky+AwNb+JWaHGVUls
mN+sWoK8f2WjGF4PbT4P26dG9ZZkJXKCmH4IiCarxNIeQNQBms2KTHvR7tsB
ho9R+CZfUTiWG/IXeFqSiMyPYqaAmiwIJ7yCqeyAECq7Khciye8zOh0IOr2l
POX9g7ebKjcGeszcxtvghLz4+UVa29c8CSa36MKlciJsel4UIWa952phWhAI
MsvrGzMP1QCQkiJOA9BKdQuaTzSYK/XA+bwWyflJjMEDpYMPSMQ9kbNOyHdk
PWPjRsf2Zkhzah7R8L971UeWK1huIG1sS35GJYXYE5kW8uwuX1lVWp18MGzR
oI/b+jai6Dp8yQU/p3BINNwSQNj7oKplzJ0z7wPBkOTl6AaImJp5I7FGTQbX
VBdD84vEbkBc9ev9w8vzi/2Lo+NDzXvkT31wNPoR+aAJLsskRfGRgeUheZy5
A3NGyjprF5xxjuwDNI+C88nKyvkaCACJNXjMw4qVkSw5zFyt+5ky4zRhbh3x
y9sPEnqCbNQ7zvZFOZzXsJhBb9qsyYJNmLE6iv3jShM93AayFz1ziryTW1Uu
OFwNBKSVPUlxzwV+aTPWfoMaxXFsxERtiO0B51YUbJ+hcO+ELE9Fa5YIX5XG
IuGxDSqBkYAxsNxf5OohTQTvOsbtjaiyQCeRmEyMGj1YEq3M8vGsrEqsHQJD
iQIGN4MGQfWdDCDIIDGXFbcU0inlmN/iN3cFfXnoY3mEBL/3YZdvD/ffb2ZP
YadPAZ7FdOwk3wjt8BI/0rcYdZLrlGIxxt2NCWkkfa8qWDX3IWockOnVmguy
5t+ty6C9yccuV4evURtUD8uz8yO4vxdnmc8WbIsQIJYw9Y40ZVhWN+v1oqsE
0d5ZNpiy+YXYcYNVJ7aa+gr+NcnbG19khii8nlU4IjaMXtwgxMh0Fq2d0tgw
CzmodVlHreMNOrUB393UmuHIOgf+OlO+magoa33KP8ShWlQHhswGI42g1msX
LHOJGQPppUTVwH23+kCIBLcDDURcRIPKmqOhiDJF/Uhgttl9P6xDIay40SZK
jLed4+RqCELxfp3e6INgeB148OM2Ch9hw3HuolfinORFSC3iSRErwm4xS6dA
SwrGneD04UoTOOkmXaPIpWYCr9p8V3EepZeMjdDJ+uA6ogYreO45SAFkpEYr
gDcahmAvIfJ8fVmHCYfBHN0GhKsKTdFhaMAFyZmVMuTEajwux0Fg4rgCFJsG
lBQsdiwX2VPoDMcRl2d61C7UPyAEydgXGENkmhZrQY1lHp8WQ8EuGqsdFosf
2NW2ym6O2NuP4OODX8tm6PpyETqJJFLqlT2pseTKcPREBZw0u5sd4cjrKvNt
LCSts2EEg9tFz6UAwGFxCpDegHd9yLAs3J5z//Ef/5G1NwWId1vN3Vazhf+H
da22dQMoksH9wUcvv/wiOyxGWbabbW/vvdjOZDM4iPvrXvabSXm9hdNu0Q9c
teurJ+9kIU9bEKFRv5/oh08+OXfat4VBoDqal9hyYARrNMsKaN4Yrhy5ZSJE
fyZ3l+QJxoi+bW7pNre/3P18B/6282p3+1X/NnHxl317xR/ivR6u3ecPUcIM
RhPMi0rDKTRgrhr73JFBsMiRUIuHB0IQ2YjUFNixUmLkU0kRfMFeyOQZBNsb
olZXFKCoQ2+QH0brdozg3m4ShfSPZpRzgNyalcgYvVHFvM6bK44LM1cvKuUg
zlBlU1j/AO1yE4a9E9MPm0rnWF+JorHYNF96k1pLwjKzF44h0e3DaAO+MaXk
haL4j55AnoaPVqahiCFPRTVjQC3o6aUZg/4yLgwviZgYE1EnlF2WYnlIVHcE
aCutG02iTFkBuPB7kTcAp8bJjz5/kxClqvU1NNt4nFduEkJyKj+OSKnDh7D+
Odh0r/MPGeb9dLyq/NjaDQwSGr6HP6D/DO4+HA/1b2KQhNvKdf2uolCocDNZ
jWkDh1BFXLeHaY2T/s3OHZzWt2uy2znQ/dOd3iwwxhGc5qpHNuYKjlCDPbY9
QR6wez6Db1HUwPu85UuemRQzOH2jnBzujeeAoWEx4B37HEUW9S7m1U0Pf1vp
mfTyEl2cs0zHCDscP73k1gh7ZK8O/y2yyZeYgRlOKjlOK88RKs/ZMA4c+dQH
BJryJSCjd0Q1H+0vClBTZQye7CULrhu349rkM/mN0qd9dihb5ONwO+eXCO1p
GElMpTCSJK75zPKT15pnej0EekfcEDSVomUKV4VfZsxNS/qqrwX2RTZQ6OzB
mdLwg8QDIWNoAZqX+RJwgbUI2qzkKgwK5uUPPIW293GdYn7eRPKRQ12dECif
BPhnyRlG/5CIt753p3XEpA7SR4rh719RpIJOwxcbM8Z77AYen2KsEHpRFh0U
e//y+CkdQ7fQgMcx/YWXohMJDZTD+Qj6HlEeEUfk0JXUxwQP4X9tp02dd+FV
9C1oxQQ6h6dgPkPbCqXEnJ3uc/7x5h3HULul9B7G96aovw30fDls4eOZeYWT
0DU5EPR6e7z5uGEm4gvBswWj5TAqokHSrNBNOOCTO0knhX6uqNLZOB9M6CaE
kVC04hVG3f8unczT8sah+//Fs0fJvgeAHqDXGN1s6Sg5x1muvTvfX49v7TCs
SZ36GRtMquDQKudaD4hx1jSlAoOFUC1wNUR+QU9np1MDKw9WTarFFBQpQLcu
eVD4llRcvpSEtCCCIPVQrO1Ay0vg4GAbQqEw1j2753vnXTrj/fkYiy1TJZqc
oCOwE5RyJGeHpodbYQpcVgqIccwVrWynhocoG6K8LJS0yM8nCqUoEho4NctM
+RNsD5e+X6vJQCOmRMaRg04NaYDthW+MWg2sQ8uUhuUu0cMi7iD5kYmdAMUE
dpf3N0M3kPO6HgsTESMPmlzM35eKGbUeGhcmzmdQBusGsl38pu1q7Q2NaY3L
qXEJ1UbjhoSjN8NqLYnv33W58wiUPButGyYlbD0qgG/cfe/HixZu/uh7vPt1
tYILMKeofU4shsXFiAF1ctckYKCHCaUUUCFCDZKxsreWTxZoEAzHIz9pitrF
ZcfugEHK5HdT2Xe6oMgrjn42D9vrd0OUHFlwx6Z7IV3cjlxYL4FMLBQhyIUt
/mZMZbDbYi8rHSK9gIie7zopLsxQ5leRDeg6o1C0kQ6c+EE5zzphN1uwWRya
SXjGJa6jT6bzxTf6wHgyDY9DRKSkYbpyvhJpuwXqAnnMg6ti9kA0I4ZRZH1v
ZrBHOJ8LNuIi47eo6kxyg0+3YlmKtIm7FAdudKeRRiTgakVDjW++mrHGN19k
sIFDjevXaN/KBhY1bucEdcuJto7MJw0uE59qnqr5ahV1QFbe6gL8FTKDNuOR
n5e1G5nnSANvWo38Y5HBqKZJJXneI0n2uzsakRzYZgfyr5EJ6WeKFESBbgqX
ZKdmoKkhmfjq2xfA7TKtvwrPOXO5NnlbpXI2Biq1o8pegUSO8cn0AgKVAJH1
0sfw4yE3sgAhW3OJUX7jnLBhCaqanOLIYxFPAR2GnsUQ/wjmaD7VMnB8wawA
sQR4lbXF+pkg/5EED0sdEgGpddmNchJbw1e0dc+nxcS1Yh47x1V4Q0n6YZb6
2tvtZviMIohpamgrn0vJc+kyRhODefhKE8zpgo2LLwaxai2YllZWwrOQqwz5
kDWNkJPoQjQYweqmejtS6AIMT5iskpFJ4/BKk7G9EfrH3AvIFVx/0p7Sq0Ec
Fai1whdhidPiOhIvzQobYvdS5IC4xVSaI5ERpOgp4+C6cj7ZoDttnE6ANhik
B5XMYUo4RBwDX4zEwGXSdTEvdn55SSqajEtQwTohLpAr+Q41nVcHg/PeoHmj
2lmM5upaB/qjXNKcA4T96s2uoB1Xtx36KCLWlJ0885AVH1R0th7VT9mUSu3i
75uuBvpAw+jYG0dtqHqlJJEN1ZEiw8DEQ0GFpk3lbdTcBvokz+eaT+z8TbKp
TLCTRN9J9He4RExDNFvGP6imbE28Ni0QzsDlHKPyVAhyLATB6tg0DB9116KN
N61BzpiJlZLNUkogJbuUPPZNdM95+Y1k0Jayc87DgynPQuHRj1gFVNC3OTWH
wt0Un8wZ+dLKRy3BZJH4qzeB81F9kTAq8SreOK6iL2FrxPIvm2EbzsiaHqR+
yBX0IN/DAjWEPB38eVTy5x5vjXEy1Z4Rru+foc/Np+5QNnRkdWVDvvex+6Ew
SO1kYnxqp23cRgQXY73woNhM3+JitSDoHgELtw5dnd4z1IAm4/3kJ41DQwsL
c41E9X/4SgB+XqYiFnmiW501VHpkUugcguVCD1KncVm0LI3jQZK5gS5JtOzU
8BzhEqRII7TZtO6wuf7SZJQzUHjtfl+wmIkspjPiw6rLaXqnEHWuMNQn/DDD
RJLkpbmztai9z4YQt4gb5NVfEaWCa8LBnEtOkyO3Hl5TwNg274gns2VT+Iia
bz7X603nmr6ugY10R47HwMVISQyFjIQwXWRywYR01GBRZEg5WJ6RAuO4i4ID
RJnwWsLP6lRv7Ojeo/6Zq1XDqqC4FE0zhnl/svk0id1r67VZMe2bcciFT/Sp
C0unwER/SHTpOIObwxkbGw2Bw22opGfm4T2JlwLNhTO60taRfe2/dZ/QkGkI
++5NipxwvFEOyTdsFBsN6xv17B+5UTQE2qjNUZVGSicuL5o/MLhfE+J1B52k
AFSiPDY01rt2dR9JAif8A5xhliZCmOeOjywKMZf0ahtkSS8WRF7z6MbHrQWK
q+3f664twVqj4iqOz6rHz0mbOITO3pzxcPTetO//uIW3ZrL0T/vUmmvVjH6I
TEpUyANh3dLkirgo6pHk7adFBloTMlHLUEKpkBqtzSfCMHAJCj4eOg4Nu1ox
JMyGecnqORvmlcRhXuLCqxMF01X8Q7TZHP4EDQR6p0eT5kDGZI0sr8PhuN+D
d2IxOU8IZMbEMry7dYvF4222WNBj0ut30WNHe6+78XMLghFrE/Ivrw+XJbar
6Ms2Qov72/i43ZDR+EVrQIUKgkIgW+smiMY2pOgHdAXIzqPyf82FFO3jDAKL
p5uzLoCetaKOMCP727b+1vEJEgbIw4jAMli1RMXfIyMUjq3omCZ52Kf4KU6c
eIWIN5v6cIrFI+E7GC8SkOtDu3gU9R85fBvokOOzOBuOuRuKwoTrpPx8e/PJ
5pN6gDhG7M0ptYy1+IpQVLH4Her84UiyNMvVrPyLrDmkjd8zPnr9YhIUUu7I
ekN9KST1+3skNKTtirBNOPVN1aJvUCYFEZny6rGaPehOBBRZqRttb/81FeGa
YkJAhZk2SLwYxIoouCzrgVi72dq3zN13yWhqpIw3R82woHSYTW+hGqvMjfMg
TWaav800/ibD+7RP8CSYBkbzp9TMFAnly/ouLZJdkkg6Oxtvqnz0kWeV1kKi
WHoYD6oN04ruNNQ491/2OPK/t/f25wWlftOJ1iqMm5Nnmotlq3KGGgJI3WhG
q9vkKHyOahdLOkwxJCNlOOhC5G57e/Nxg8TFykwKgroWKIdHdJ4HHDnFRwjN
WO7kAhPHpXTCWXYJy1nePNhs6M6ck3C2v7fTO//59LD3Zq/7+97R0bYkRvTe
HHP9h++T7UffrfzQ3k/60FarOjzJrjfKfurVYOwqGaeVFJX7uP3gc5vUEe8A
i8R8vSvzt3cMXSllqoJY1ZsVvQ9ZNvX3DRyfcToNzgVqjOvaw6G03REt79eE
6eYvEvd9rrVbOzUCsqUmBO+C6oVLpVgqfYrZjyQbCDlyRS1ujMHySPxcM1vV
2WgOMkUnGQ1CQiqr+GfZoya4KClGjHR80HnbCoqQEXr1kDDeYMvduDaeWneQ
GmcZdu5rqsHwqYiJG94lkNPQTcCH98BgJxvUSV2ORTKmHBsubHbR4yH3YBK8
cpHp6BTOFxB4r0byZCXEk9FZpfGPciKWW4rCkP3hYLp4g6M6oh/CGSGxg76L
jZqc5VDfDsNujicamplFyc4iTAjH0g2r9yPdC3a1E6q8ojLtFVynuI/EjmdS
n7iNiKLtdDo0ntOyBSc/LvFZJeg6p3Vm3DJU5kzHJ29ODg53tMRul4pulHg0
kdQXb9zdr/R36MJX/qhZ72SrDeStF3inFt4mrxs030GmJU4r5lexALXPjx6h
Qx+ximGLSeoQEAPNkqSAvEnQgcmOzYc9LxMuqYwuuezTlGopk/sHhq3PowHS
jKlOWxoUnTruin3/YqZjJsZ2p4/FB7+M/vonwFcuqurYemmYVK20Zk3ZbxMO
QbQ8V9B9utUR4ng+ZfdPm8mBMG2HGXrbiiFX9eCRDsL8Qyif9wOYEPiaBOJY
ApFN2eA9Ym4EW/Y2u/b2Fs73eE9upzqrUpbx8Df1Is4+pI+THb/DIBSmGZ9O
QGzuNw9rXfEdrmnlglmEWeXdd6enJ2fnVGvu+2Tr0aMXz75rf2hBLjo/9HzB
Q5wAizW8o2p2/NCLpQ9Rqqx9Ch+Cx1r5KFuMN2ANfXkptiHjtZwI60QgBF1B
LVhSSzUN/ajYyBt14L2dsFvJrXy78M9t8pY9RLdSV5iIMOFSXAhfgy2Wb8Yt
bwb+g4lqWXuQGm9tnwv2ivt83uyzvX2912gzua8XLeMzrdp70J295Q1d1IU0
oz4UxEbBhDfUzcGb/dMDidVfuqhrvGv7yK5pcRXOdunS1uq6a9htS33YEO1U
5zTOVzbHqltFGUoPx9WC0FxaeUWVLRrySmTyLeC5KA/5COY6f6u/tcPABOEM
BPha5O7kqTaT5SQRCnZWU+7sSu9usQ4Rpuwo4YDeKJWdwifRCCo5IOwMdjIU
yhdqTEXBze+g4/pGPq9v5AJ6jrYS4Y0oYJw2U6FYNKmHhi9QKhSvZvVUCkgF
dntTRV83yk/h6lK1tQyxPoZa4d1GtWs1LsEG4ipdZKBud8tKzIPjPUQjjHEb
qDDdotWSBVGQdcm1JFWgjc+2tvbRaa+v+IvG0bGnPlpnREKRqApf7YXAgAhq
TOMaMXA+WKuNx8FGM/hIfX40rcjggGRr3keKFk7XLQn009CRJcvYgFdpW8Zk
xWVUlhevI6xk+zoq62tbSIajyeKiZY0VHeQDgS2ZSYYaPaaOWY/zFnlpN0P9
N7KAmBdgIkWZ/ZXRsjXSQLmWGsiGGIZi++eSLhSXiUamFhAQitUsGQ6DEfUp
zkcEXsVMMW5WJ5w/txMJpKVqDhnRDOQKccGMwnuc99gKeIeP1iBGWzE1USM5
+6DSf2Di1Ep3jioNMJOTUpAKV4NIZAyxKNMaFLITH6giZJVtLkNFOS9vxGAX
O5fZWW4rqhtAH9yQc5Lo36M9j8WTiqmniJux4E9mP+tnr1XTxru2mFYb8IRc
tZyshMiqRoLCeUWOJhON5ayaHuUr1g1ehCCjv27VfHibJG/5uVoZS0Q8kjBE
qGKSlbwkL4vB2eqd/L7jz+Te/v5htxs+v9w7ADkxfD44BG4QPh79cPjT6fHZ
4UH46vjt+73X5uNJ+DfcOSg8hC+6h2fvD8+O9t69Pjdfnu+9PuSh85jZ9fzl
Y66NaNko7GsV2Ot/0lL9APr84RmbB/6LxvzrB4lXYfIr9tNeqL96l3mp/ksP
xLIVaxfH7ruiZ4ene8daFf0fd0I8BuA/9rXv3hoK+se9lm/7X0u2Ir/cRc33
Javu6f6vPpc0QaGlf8RR+BVjVqU3vohV573z+peqMPvquVpRXmh6uu4rOPQv
/r+SHZoL4EWH1f5EAsbLnuVGxrZRJxT4jDt5fngcUfzeQRfo6/jk7UKSarNC
rHSdHncPjs0A6qetQXwnr1tGuJgiO3ZFlDYbe+1dZCuTndCpbInqFuYxBN9L
3mGQyVU2/pVCrRj4cLwwbk+YFlbiS+jy+VK6fNJKl3fYEBMr9raQqqXKO2yJ
SZNwO6Lwvjo8t9sZL0/NlnrX1tBGJoc/7f/w6vgAA5j2XvXedQ975Gk56C7w
RjfbH57tdd+dHcIjPNPvk0efHqGy/Aitv9bsu/BdOnKklYWNyDAs0MSRos85
nJL7VzcD5GzX+ZinDnvee/vqsHd8UHNPPN983PBOiJMXAwEwkyopCzQWnF/F
wMcagrB0WfBlEsC3cHYS/mjyM8l9p7ZASZQ3LglzBiLwrEaSjZ/l480t1whe
YG+IAuTOsjL1dSfwq49pSUjIfgkqh94xDl2LJ0P+OJwNFroorjOJP00DaDY7
1jByZ9lyeWPr0jWFhXfqn88nUYYGhaWYkoc0dnLnbfrA2ihiKIPlXfoyAUnC
LbBE5LdA42xn7KOSPbNjCLvlyHsRWceVtR0xIq6W2ZFeMDoVvi8uMcdJwYRr
nqbzEBDmzcLRKaiS+cRXZRaTKQa9E/BtACc4fnjCKMSG0PZmUjemIsbvPybP
X+wmwxQ+7qhdnSywVLigwT3wbcCKG6mR4c+irr5rdY4dIe7sjrLQl3h79rqk
GX8PA2vP81jwAu+WODgjI+GidpFnKvXLsLBbc+wXtaFYP8pLODk9Bylj7zVu
sK68f4eEARJxYV0Hb5QXf0QjBs4wDR+b7CJA6G0bpUwZCsnf5hlbs/I4B5Hz
3sRR4PzpYRRxrlIxE7CMuYKCq5cgaoxNHFvxGlS2z5USCeDaV5ljioswxNn+
ar9amHMUP2ejQEwkUv+yd5lNerV85bgBr4NtUws9it5Uoyb7m6eg6AGDOBTN
i2nDwx+OsKLq0x0MG2J0804IwIP9hm1/vE1Rh50wqY6k5NJuCqh5KLntuEqf
McyyafXAw1py9YaoY78Y0je6uTjrLUFxrsKyDXl1hYH0Gu8odIRG7pnc8A0W
6la4kRtpDgG5GD0A0Vg1coNCwvG6QuE2Ak5GWSpYoumwWrjjeqDfuqDVYHh0
WAQ95JwM93F7gyOZNXJWibe4hjVfCClg28SEGtEJghTzvxfRasJt7iJUeVMr
ofJvNUKVBxqEyt8LoWqoXGAVXHaAQLiAGLAWmNYjSLm8xizTQpk06gioQpKe
nffscVqUsEAWVwbZp5iAa6B/Kko4YVBXWS3PNcLx4S4GOYZah9wC/NJx7msY
K3tnqDhP21FusKjP6x1Xnxn6hqh0qal44PNNwxsoSYFHwkm5/uKJTsyTzeeb
W5s76HKJiJZWQ0tzyyJRPYeLMAA5n+j8gfGhB9kcah1L8IHP1A/td3qgIkWY
iUv7ZVFVXLKyBkKpvq9Q07x5HwStZUm8DT5QgyKqCUa8wRRj1YiygTu31+KB
AAHi2bNOs1Fk8sdGz1saRQZ6bPSipVFkEUd55VFLo8gijY22WhpFlmJstN3a
yFhisdHjtkbWboqNdloaRVZObPSkpVFkk8RGTxc1UqsMNvIrHvErNFaYKCGj
ymqkUBIp3rFEaeJ6zhEGaganheQj/tHG+txyR6SgJ+2eKX7o2TN9upsk9dfx
ybfPckhN6DEmIu7x+Yo9cg5nrcOY4LjDFyt2qM/W+4zpk354/mjFPs2z9W5j
iuZut1bsFp+t9xcTP/e3vWJ/beOLzwn393jl/vjZRp/RseI+d1btU56t9xmf
Qu7zyYp98rP1HuMjyz0+XbFHera1Q3+8ucNVD459tj2azNi6TuU74CxqmQzM
4Nmz3fggbyR7KEqjfWWfohySfbqISOtgzSI6vBwbsnf26t2bw7fnTZOYyG/2
GQO9rwLaYuh9bVEHjer30zbAKIqAMnhL2CyAD8Uyo4hs2oqUrhgJIcqdaMyg
Jhw2GhDzxdU5O+y+e73i2mC8cC16H7N+hnnm0e/6/ZIhQeCHYTSvOgQ3tmTs
4RXnxW9fNjFqQTOrTYaTLettfbalH1oY03pNkMdwa/Fe7dZBLFpGydOjf+9E
q2ArQ9g+fCLmCqtw1xr4vT047O6fHZNlwrnoJIFgKv9q6G2taptLB4xrimUs
s3E6odxyXzvY0CwH2aUzVVvZcrDpfKyTsaoxrJvtcKjD8pBY6XSKgHicQJbm
E4+T0s8iuCyEuLQ2VZFV7awdArVQrSwxZ3g4zqRMr32lgywdC1LwHGVuf5YR
ZcgrHTg9Vmv0R4NVrv3S/G9mWWjjjGpD1QTbwjJJPqdA0FAVLVhrKscZmBS3
NAlrzGnHxgrp2YsoOvRPhLRVW45uVRiw4OuEXxChSPvjmhA5moFpW7I04MfX
IXbMqiHSTxgKJ0INsIIwp3gE9sZGq+m8nBaM1Bh+cmyeCmWpVsHOSabFjL4a
jW4Cjo4undJlW1TYgZjVKfcqOjehjJ8X3NTqvul+gNZkK4+Pmvwudm0JouQj
89vWCDMQKXcjeXNDSlWIEwtZ2dLrT9FC7nH7GSCMe15+dUzDfpa24hk2Lsls
tUsya7skG5cJtCJv2vKrFF+Jn5ZzWrMWrZw2/H6/W5SfQwb+y/K5lLY4xR3D
XHgf+J+XXQdMXwJMlTah7hhnaEKptWMEUpYgzpSRGTRLcSAsKMDH5ZOoxkLY
bDHkuHYuiufL77jafAz+Fx1al5JNNQTQ6+ZHOKYhF5N/41KS2cCtZZuXm3eF
SwmexP7ZPtZIp7jedbmF4sKpfUR/ptfxa9jtqGmVOlLPbk4ICDstP3A2JT2j
SEt6KwnAkKYJoKknaD0OFiankOsbYaYYEi0lJVqk97XFsvm6C7BrFFfNmIHD
nEApQ39esVprV5l0YeppjgyWyN7s0BuTnFTX9eHmSE7jRWA1bRHjmtxaNULY
EUekBPkTb7uZzWmj7DLEUeOkecpWMzV7Xdur+x6NvDJlPiNm/WK3fiVsJOdo
SWO8xn0+MfTyU7l8YIiEEAfX+SDi3F6lvxfz1qe+jvIyXI0vD1dSXoarKi/x
HFo5WtTkfqzXP7qKCjNcWYUZrqrC1N6/fHorqTG2dYsiM/xyRaY2Vp7mV1Vl
orHfuRbL7i9/5Gb+wFXKR30yaVSZGhUeFHRtAel1ZTyh8LJWsd7DWsVyUPmm
ypGZcdEdeQrInTiKFCOlPJVUqiCgB3CUX3pAWSsaKtsc1u7GBRoG3Y3D+G50
9bsRZ6Ij1uvRn0FJcECAPX46Oq4w+JLrOKP8bS4jRgX06yDoeo4e+8Yvt8L9
e5GAh115n09bnfDvXI4YhOqw9g4FSr+F8dzoU8BuN1i/lBJGNBUXFHaDQBgF
7202SIU4e6XuQLzhKZRNSs9cZf0PlBqz27hY5GIjzUweEbA46OTxNokaBOXK
7q6ZvY6pCAfSE5WUiEdkbv9w/6xBn52EptzhtVpHLQjrkueUEkm1LQhWVxE/
q8KEHZUfc5ylTYvEKulAWKj0vBRi8fVVLlBhMqQuZU5uMirLMkZf30BRmcuC
8CfIJWukBDWVrrUbQdfrF+fzR3pxWoM16jqpajY/cLIPnWQKf7B3pTVV3+u6
NA/+ao2nf1Xere5QozvuVGoTX6itvLMx+Fb+WW91v9vSPt1yYdbLQsDQy15W
3HFXYqPGZbmwO9aSay0b0gW2XE28aE7pznUL13CbicDWy+YjlEZmsHANYAWb
OTvMs+q37jigOfaLDU5863APqGghTuQoC1lykmAXLAVLZILa6FvEgrAJ95cL
mkvo9+CrSgf1WdxjoxZICZa/sKlIpAPjiFJHe+5rUuHecfgA5xz61MVBVpHG
RbxVcaZj1ralrI1cPRvJa+Tty4025Ni5Fx/DJ76OyWa0oslmtJpqMFpJNRit
ZIwJk2ylA//znRzOHhR6qv2EjL7kgEREHbWup+M0Hm0uS1lbll93mvxUFy/f
HTI2kXDaJ4g4hqD+1B/NK8RmYfHUGqHZAuSFQHdxY6iGRVilDsLsIbQp4vWd
WFv/JpKejGVfi94jPq99J2WiE4AOGVrbFp/jv1WKT6uq6Ockw/nItGg3MC5Z
iTSEHhvBOB0hl7hRky6F5ASTvchuUfiwFKGo2vP+wzCUCOD0ZqOBjznSKGaN
BaSpYCkCGqDZLo3EEo7lrd90CZWMcChyGv22wT+gTYhAfNXoQRWe5WWhRE/b
+pOupGV/kJEaODUyLokKIkK00JMEKUInYhASZAYETBilfYbvloJedu/KjBCQ
jG1MvOprbd7y9QSRq8fTUd7P0adUi2xELEfCU2JYpqrBzbd37U1hJdRqgWx6
f6H060ijqwijK8iiq4mid8ig9xY+8TCxOLc43lfG1+uX/WVRlNjER95i6Yzl
V1Eby23pUWBL6tHGDQHWiK/YKjk/+AGohiAJUspkWCooLxCT6+jOfRF9F0u+
dj1rO2R+ajM0NSnzDjWg36IB2Nf3VxbU75LQaxLfkkt+iRj8RVJwczl05vcQ
f1e4sZcLvvrrshubBVqptFJ38G9vtwdmBw+/W+7hD87xpc58zFUSiftruvLJ
B/JFjvzyf6wjv7zLke8LEUZOfKrjIzl36vZIBRnKrxyH+dZ86pUNEyaBI60k
p42Adm1TRVxhHXhywz+m4jv09b3csShY2FUEWSjoMyoZCSvxWF1+2xzInR4M
s2n31EiEWt6UZE1hJqEXvlCGHU9nN7Le7KxLZW+0KJPWjwpodfxN/fLTArla
qq36wKiqOzoyT2ZevHtcemNyFOx+TouClaFc7GTkIOsIfrVlNMSGKBPtPKBd
XgtaUDrgtDQ2ezIIP6fwcng3gqjzeLeJknYIupTbpShjYui6jPqxh7D1NZ7j
TpInMogG/mM/HSE8P8acyOv5MjfPPk4uSLTjM+Ek3wWB3lEWrhhr1ik/5Ywj
fsM2fPvtxh1/vnVxCKqBOg5/blfvychcu8mjekfU062h4t1ku9HGNKLDvZvs
NNvUx9Q+vG9joMZ2kvWZxvzzbmM1Qmk1vWS/4nqHu/ML15v4wy4VMV28lCpv
/PHRn3cXNMI/TH2wcZ8eD+EGxEJzrY3o5mt2ZBpRw4Lf2kNa2F7ciDIyar3F
jaihpoHBv3eTxwsb+Wym3eRpY+BBZt1NthYuAQ56N9mEPy1j+oLFHKDFbCd7
0t7oCxbz8eJGX7KYjxY2MotZa3TPxcT8eVhPyqP/dYv57Nmz5+lFf/DVFnNn
caP/SZS5AsO4gxlGeobnhmeEj4g3dqMpmiVuRPS2FoLHgYdKrMxGss+hNBi7
J6CFhxzts8gC7CNq7mk44Ke+jiV4uqIleLqaJXi6kiV4upIlOJ7oAuXINLmX
Rdg/2a4wTr+KVfjLrbjR8JZP/U7dUAgUccDTUWUk34riphAvApS1RoyaW6vF
cHXaspvWjTtZQ9NEFlQ7oMil5A5L6na+BWFsblkYm8A3IJRSVT+GaM0cZWnp
6hF6KMCSRcbadD+mOZX4oHUVhZYcQRqqMY1N2VOvmGlSaIimi4MiN8PzZNcN
9UVtAJ7GX+KWhPLQM794LdAQEr1kFCCLoxs21GlEpoju0lheTD1iAiltnOT2
cmlUH/bRZgFHxU90GCETXt6a2VuVO1cPzYi56I7nohpLsJGcFYJstM91du6O
fvYhB/djo/LU14mhKy9WYo/QbAX2iK1Ws51Fc2hnE7bJ/aIC/KOrxNCVbTF0
ywe9zOgXtbnT8Gdafy3jX+tY7239W4HR27HfuRZLGb2eIYyFRTCRRWFxWAUt
jntzppY6MAdlInq0LaTFfDIo2K4UBTtxXFHlY4iIrYRAosDk40g4f2xWiITT
s7MsEg4jlQg6wlfwufgaoXAcAKUdf41guKJ0tT34LjJssXFN3I3NxYyTu/rp
hOyA8UAbdGF9hNdFUvWzCaJFYSW1rU3vzKMacAUZ+tBsNunb2hipiWbjJFTM
9gH6Fm+eHRbbjLxXEAmHloDhDJDYypskvUxzRFHYxgHEl2L8ZKq/prAG46nc
mQhMMQSioDHwXiFkfIkGaB2TLBwD5bSE9A+yUaYZBGqeFWgJMmxSGRUMueM2
rha3B289fHN6/jOVnjRG3evUx4/UdpprBIXAPt8VhbhXaMcdwwYMGtflE70u
xfe5kbybjO4ONBGv6L1uR37m66gY8xVVjPlqKsZ8JRVjvpKKYafZynxNg3up
F/Jc+4U0/29WLszglk36jvtGiFA885ISc1ekiDMah4QEGJ9+06Mv/nw1ceuu
MlulLB2NyM7Epu971UiD7zjO5DqvOOOjIaQ24ofbxV6NWlncB2ybWwNBf8yJ
iBgAu9dIkuHIA2YwqoQVwNBwQYi7YkFT4EgYFIG4SMwuTS0EfTWsyb7pheNT
iHtwqMSvCYB4uhsx+Y3kRxqFhEBQNkstAIJhBe7iMXxEaEq9CIgnqoPbv77s
UVx22/nwpXCbQVUBdwk6aAIv1UrirnpmWkZbOzXNFm0SK+PgWYUWoQu7vb39
8+P3e/Dx+KhHN4kgctKfUPKxjUtTN1+FSdOircanpWkLq+bw9N5Vcb1T6xS+
Xsqutc87gzKk4YLAjAWtCUsyatiyp9jwTrAu7ZGftGiHS1uX/cfbd4V32J7v
VPzMxrdy8PD7/VQ+fq5F32vcwtdlyy1cJwBuJXJr1LKhP16vnIMFLQWQcaXg
ct9cU9zvDDK/Fhv7iruwRI01De7SYX3TFnnh+ssVWDtEmdtXVV/DsJcvwDJB
gu8YEH75Hw2Aivb4lfsgVEjWqyJVEGSBRFOsGNyC3/PwNLrFfZXoFlafvyi8
5fp/bHiL8o5aeItaIWuL4L/GMBeWzsKtgl0cHb8+7HV/fru/02kEhHAMArEg
r3xCZxLhz2CHNEiuxz6aMwIerVZ1A+rZONL04gQs55OaUQaU8tlwjipNWDaA
eRpGi3h524lCI4XIK9xZrPlLoTEXGaxyrrUo8xmC+SapfyqB30YaZVNfjYO9
8z2zGpOlSwJzdULx7SvSTDrDBtmkmF9eyVlxdoFgKco8+5iFt9bXqGnSJkGV
PAIcwTMsM7IP+K/NnJScKRp5mFZXeJjTylkawFw2wd6YFhVlbqpxPcXDOSiL
KWvrYeUw1e7d2+753svXhy1kZEalKwebNQ1glAsISjnPYhrCeCPQmQaEzIr8
B/0Qk6zTWDe2gIAsL4TEkjyvpkblOBRggJ4yQvUlUtKKszherlTLRiROg8cQ
+mp+UWFtrgkytYvMGJoag5W9o3wZ/6oKuN8gE/zNKACIDGiSY0OvYvB/YGXF
TYgcw61CYGPCyxRkACmKjdjXKhQEN4btzRxtXfBAjwRdOYKnR/BfCnpLJSYt
FJq8sPXlhZtH98SCg6EoL5UJ4i+CjEERYZQcKscrZm7BWKSGnzAZXnlXO3b+
KNu3SLYAnei3J4w1zSmxGCsj4BP2LUv2tp1bOGT51Key82+qeAj5kPFCaDoV
JRAMJC+bbshd536TJAHQdZiX1cwYAb030ldJX0VFoiLSnAFM/WPPkU00x5u2
mlMt5Zmk50Uxf3xsCHV4fjEbRWj6sqk8SAyFw6tvLukH0AapVNJWYCdPJgF6
m4LvrjWZD6OL0A1HWK4UBom86SKjzhDklvpMLQeHdazxoXA+1YeqFn3HsYOh
NCPxvLizwBg1vBMDJAfiZYx2kqGnDQEldSKWm1lTtFkJw/hJ9BlK5e96VQIz
HBwATd0iz7hxDj9/yJCnXGQjuTxwo/hkYh41ks9lNlPXKadIosgQjR9eNkxH
HGwau0bVhOPb4m3rzMKaMGqYsk0f5CqtWMc4MdhX5qAjWeNaKRtDvmaXIGIw
bY6Qf0Kx+BVqngqhRsYWtB211aLMq2DNkeQZnLlCNXPRbcyKymF5VKfd9CYV
fB5NKP5qmjHSqli8tfIDl6VFEsOLNoT/YL3JqFP5lzZQyQdEUtjn4XyExIJe
jI35FNPn2ZoSfA9xSVyBUeCpdBw/UhG+vWArTNEKVjL/4q6ANMQvczzjBauC
m8GRqMy3hi8Qq0lR6K8ag3jLheGVWhqXv6PLhqtz8mVOz2iYs9OitCtBxNdW
OHUapOX3I3KuAAPNM4VCqx0/zoIreJdFkBfZgOGv9fThXIS728MsGy8QS0Q3
uutJfdeRyZjornjX2LTppBwQPz/GOvbVrB58XgWJDIY6zuhiybH2KRltx3wH
t6RfJ8e8ygddLpYg2Nwfs5ldGb5bihLGUvSlNGm4c8iF9Vvg11i62T7GLh6U
s3xRUOeLgsIDa0TSo7RiaCzSWibFxmWxuY71jaqCCk1fZ4l4uGBUjl6JGHrk
zJuCYlQRCUb7C9dxRvvJll95Fo/0b90eJsfRMrESpTqcr21Cp6HAFD30SSIu
uBlEOuu4a7zbkXb9ttKNiPvh8wMPuhk+3g5L9/xZZAZWhCm1Bp9p2Mty91MU
vHMvJ5R98utZOVdImbNtV7VzlqsbOsuVLZ3l/Uyd5f0sk+V9TJPlvWyT0a4t
Ng3ZZl9gqdTHVzNYrmixvIfJ8h42y1WBo1rmdffyrW5hNE+0Ghp/paUxHrhM
/OsbHO0sVlqdO82PytuuBCWILsBg/UuryF7XDE5EqyTK9hpjIYEgfSpRrT4i
znMmmw5SsoKCektL2oyO1BgFYvqu7gLUMEX5Hs2IQ0RMzSWmw/wWSViMFuQD
HUyMENzBaY7mBtyEKOvLwgPuHXDIRiS6+XnxCoJYX5k3zCuVbsWy6OiQco7a
6LKA5bkaJ2tnXfTqlvknuBU/SukjWL03xV/hXiTAKOCbG9MMFIXJJTpYH0ej
CKgFd3mUNdsIRiGCUA1iViEFTUK8qF9yA3t0wJ1oCNKIF9Pu9ZJoV+jkSdSJ
DW31OFYBi1Z0XBwlR5HQNPxmQ3dPF3Z3J7gj6mS4NyPG+42iaKHnZ7VtN95/
ElTCDJdm6G+2Hj8sbwbTgeWdc00ig9rCIpNFBXa1LSN7g7FNTwv0f2tuJB8R
3h2GBHUXN1htCDeTy7eQhlR1YCKXl2GLWW3MZ6DfFhgclyOx5n83B/L44YmE
KAgymAEYKyZmZzRtzgNZsn5VzkdZncnI0Y6kdI0iCPFkURBuDeqOIrQRCqE1
QKlS2FZSOoigzn08IJkSQrZgi11972erEsnswrFx6IhR9fSyKAYK6vBPVOAQ
TtDsBhMmqnwQ6sYQXdVwpYsxKPuE5oliMtcGFMM8wUJosEfB+kepCoy+A0Rk
OCMzpALojqvVbH3LthK0JNAMgHQmDuOzZzeCeoHD9gc2jYytUp8wS7HiGdmA
2dzl1LxPL7AeBG+3ByGsIH8QPlXtijEBq0GOnG+kiiB7JORbdvyY9rZT1qG0
hCSuCwyjs3AdcN9x3WB/BfwXCdFzW1olMQbhXSazHkpZOp7ZiIvRDdmnI0WY
KtcYN085WGQnwgen6exKlX8do4v3KrKbY/vWgQt0uKz7p6gsWZVsbT7b3KK3
bm9ub27VHYnkdBlEC9UyiJUGwLobLaGTJdQCUoqjchWsuYPsY963BeHgrQgy
WbB9uhWZlsidTF8eXDKdo1Y/I1tEAKTk9ylYpS/mJKtd75bejBWjgXvC1EtC
VAdtcMBQ6r6fABUt8zFVooTHxbNjiwwQpkQlxUMHNlF8oMufe2P45fAS05+T
/sTVWht/NoFH+5kal9HLyT36g4K3wA0iWIc6cgLOaC4ysn+gafLsdJ/WFZj8
3wUi0iwGDvOSVXS+8uQcCPxz8CvkBWPurPFLj0/enBwc7jCaATSuffvjus+w
bsnzIPcVyVNSkdnt7b/Wd1bRbnXUwHJjPEdx8XVJtNd8cfE90KWaDVpnEW4O
v4IBoAm90ZSmH2XFg+iTIR5lh2w+pmgbLxh87cQ0ZAbPg+BAXbxT6whEPria
QDqZ9GIIUWnZMoEfa8UN0XXjYPT4GuOdR4NtdpWOhjg0DL9DPIMREoZgJBCd
mHOXEaa2EgxfPtga7b5obhGv8JQD6JOT08O3tNOyFySZ6RGrkXYnqfJxPkpL
QRyUrfy4w0cP45YvIzWgdnfiAl3BuxDUoLqZwBtgxMkcK0HiCC5zDxfFm+LE
Q3Od3gRb98LjVjtlcos0qjzqepPPELWrsKdhfJUO1xUXf6Xa0Or852iLorxJ
5BfqdQ57waXwUtMLSQhOCSz3pG1NjpdzZcVoRp7YF4SONkMNjLSKmWh0EmlO
oUYFj8ivtIOVJn3rIRuqw5P+qNeeoC0h5IbQtuM+TPhM0BHwTfMBLRraI9ks
qHblSYbrm5a5js2FFU99hEGYUDvvbue16MAhbg4nViUl8rLCJ/LTANEPs0lf
Ycv8WzpScHCcpYj4IZ7e5pSQ08B4gG+xE8cZ3m5rFZCtFo+nB0ZjPkx0jTZh
qvmrZD0kl3E64ZrjSLH5ZE7Fk5PGJUnxJTIzhZzXqF8ZwVz1nZj3rJGbA2nC
CeASb/x6xOJYZ8NYBOqixn740IV1+6aqbUxlWVl9e5TpwikQDkjzlTqU2mFE
wSUwotd0WHyAMrIkCgzh5auC69THEJEvgT1NFO5UDIdEAwq8T2Znhzh15J2C
TmYpL1LcZ2ZZI/NzZJs+btnlE9WkhFpM+DgqilxZc2BWGOk5qmHpUIUoc29+
oEkAp6D14TBqElOSs8M/vOMUUuZ9eUU1M+GEuWYmidANUlntbn7Ia8olmmUb
UFUMBBndlFUi7qVwhei5wtSYyrqcsoGrcV2lAHjbGW1cLJdUvr48GfByLGY+
yyYDlku9hcOWS1DhJtXsUb4jK1oI9lM4cxYoAQdpgDKeOI5cdOBjeNUl6R7d
PpA4cxSvHs/Jp16WOSck1VKKQi2MGmyrY6U1TIKrfOCaUYZtKjsv1R6Q2Yq5
Rs9RAeNH7Hjlxf/57//Bfm7F/7mAwzAc5VNaPH1qnA1AEcTgnGyGZjbHzqFN
naxKUUCJN9NZcVmmU2B4eOFdsdKj7BkVIxiw092luUgYo3Us7pIraoAqRQqX
E/v2rTjkZCSsmfQF9YfMh75gK68kHoC5AeDhEaPnHfMn5Jp+4KNaaM8eqF7g
I2T4qXGG7CSvxk21s8wIZxdGZ+uv4BVmJ07Xn8wdZ0hn36WzWdr/QDLDAcyh
uBkTRbJ6j7vX9jSzDX20csLBRIjfQAvExvnrri3yOxtVjA6JLbqH+71X3a4S
La3Yaw1EeU35EYeYH3EjkFpePUHG5ksJs9VrzaedBPsXWrrW2XENsmaUcJEI
+1P7XAl0kglfaEBeGn4oe1GXsqB1QR5GjKaaDNACPJEbV33egtSJpbxM7E3E
UAXVlP3KwvKwCXClSwzw8MkjavbigCJMViEVQcP15C6T7YBhzTUOjKaHk3LA
NPNiEOpYSxILRvTMh0O02km1MOm/kf1SdUjwlpeEUMHI9AirrRkwSDgXWJ/G
WB5c7QKFy9FUnWEyoM9nh+fvzt4G4q/0dDArVPRUpE4qJUDzEvonXkophMWI
RHgKScDQIEm7kSJf+6guHk8oSpF6Osir/qggUzmxTrxA+GT1C9TXWxFl17Ye
PXrxgqjO3VHwiJo+X8e75GOGMSvh1bBSIDfxxrBlUY0A6NZHmntZzK5CzJfU
/NFyAcK7CVmWHtDEWisIkVOa01bhbIm267mNLcMg9Zt81Mveu/Mfet2fu6jK
JBJ1kBkW69PCXKT1qKTrJQM12dqZz7DMEt6LQO5wM7io/nVeBehizTaLKkYl
RZ8E0gHdKjggF5QYFllwS0UIwUuc2Bw6jvBgajUr43NwfFjVAt1yzHgYJT+U
0Z1J28fcmaMfYaMGnpqCpKeIvFjyK1KBiaTgO9SQYdii4Wt+WpC1eNf6QjT+
2nY1gwOehvnEb7HSBLzwirP2RJwFhuPUcWZJXc4J1ViiGlOv0xssz6G2TY42
QN7uXMT5ydb3Yvv5i8+flTjVKCcXCtaRQX5jI/bVMtII3tWwVowENQH+prn1
mG3yrRENKCehAm+oohmYjZypE+tEqEM+vMwH0ckxYpM/Cuy3o4djEdF5pvVd
giPw5j4q/CV9kt0b7ykvsql8PtM1d0ik/oipoEECtSh8aGgZYXgvlVIcMHIh
bswRN9jYZznz8+dN1xWnog/GJUL0++ut1goejXLQLCdIypxdkbhPcIEfvz06
MUKJJakqH1FMM8V7T7kE2GQilmEhqSAF0Pw9ReH4UNtyrvfj1Q1H1Uy+mfn6
DxSDBAM1UkTPuZcc2KmmLiJlNh7hV6CEwDpMcQdRmlPTSwH8AAYJAtwUi6zQ
KnZCtA6b64NPq5FjQQ/69ed4wgTkGO52X7p1ZqZq6X+crIVvPz5e5+Py7PnT
LTguSD4oZyXDtJ8DgeRk3U8VX5KtHvlMku69qTuygy2z5saWtpZjhqIgIU1U
hce0JDrhYdhItqbdrGZmT02QLHblJhIcFuQGuxLirE758FIQHvO4jUE2zPFm
Y6fuHDk9bOtHoDTUsKsKqRQmRq6qcTqZELyAORDimHA7my+MNwKWfFthPH3k
QOUtORTyPyhwISSQ1UWd0plQMTwEYAaCo8haCbydEDu/8nyWeXe0wCA2wb1E
Wdx5dNlQgAFJUkDBIPxoBGLdRqHWHu4VyWMyyj9knLvAKTJsq8NeEFOk4kD0
aEjiEFAXIdnv0GpPAZepdNOxRngq/7h4ZXzSgBMe6E8x3U98SM0xAS2OUyi8
k0bOqpOzKicuSX4orjOiu+GcfIJmPJVoZhfGfN1xdnn4JJFQKOPpKELu7Kqk
PBuxKcfj+/jYUQFMMehdpzeVHv5zjFm2h/9HmgFFMiu7kOK7SiJtzip/rMiM
goXNPqUkxsw0QCat2p6re5xqBx+NiPMqa3dmYkynVNoiWkVvjLn7NhubLzH8
kyJcGsI3QX9KJdOBz3NYOyeh8jNZKN1Q9tdirloECDxMfp+VF1lZVMn7J8kr
xKeFq7l+WSR7gVEkpyXaAMZkBCmHeFWuwXs39k6Phc3ubG0Dm3UaL6RhR9qn
v9JMWkZ8ypiwnETTL1tHfLa2jmpDqPNc8s8f773da/jmjdsUxYSyGMz74i56
QA7x10z050T0Z9klWuduHnxnDbF0QBm2J1imSmmaKE++IE6aX0olVByNMLVg
a2A+XEV23sR0vyuK207v6PXhTz1Mguj23m87MQXMLkY9fqhHh5TjUZJbO4nk
LVI4/blN3lM60m0Ca6BfYfHn2+RNPgFSec83aoUolbcbGxvUVdsA4PtHn57Q
89DV+cuDrUfYlL6wkJj4BQW+1Ueq4W52oHu0WmQxefA53iky1te3i31IbC8k
vnpCrhXeOue3Tm+SxqLjkpfhaXXZABd1Z/t7O73zn08Pe2/2ur/vHR1t93gd
em+O2e+1pMXeT3Z/whvC9rSPWXeqbZtglxqbRIvr92n5kG+TbUaFNfv1Otqt
2zs62fsJO9m6qxPdbDNt3esFs65te7xXfs+Zr3xzpNEbBNwhS5FsW4J3h5Ly
u89RefQSpYZvhJf700pOkDR5vL2B9lIy6IJMgYzOR8TQHYw3PuP4WZDvYMKt
0DasSZQebYqOciCGDf56g3sK5EDyx21yKiXkb5O9keLPJ6fs9qftka1mVI3/
/Pf/i/84OoJvuuiXoJQRUC77H+Cb48PzI5j1xxyIfA3pSEOM1rkreHRL+zii
Pg4/oTFrzMZn+TjTLuSZLX3vM37mPUweSHINrXbQMewKJijsY+I1/5MUpIE+
/lwfh6cPccIlJR895IAN4OGwIbfJ20K3R6FpxCQpvRzxu884TAq+JiuFJ714
kT1IbgtR8DGj1SfSq6/iWm2h153bTZLDSZRL7imAgODLImX1k6Jv0gtUOW7E
ShfLheR24pxZvvnEQDhG1Q+tOOivZymQ7P/OGBSHXDKcVCg5JfrCUV3YrYj+
gJsqUW7guDfqY5QoIqY7ubECCQGdRuSxViOf1pWZTwakN3zMRsWUTjNKI3jE
FPdnb8KCxSSbbRyU6ZBc97V5pv4o1BQLXkY1nPJCOsnoHgAt5SNZp3iiyYwd
WfXZMr07P9+YtNdqtN86YU3Wnl+MOOozHmpBphHvYkBvE1aXqA2P5NMpF19P
xwUGkbRZyoApZJQ8aNLObXRJtGTRcYoITyT0FH0oV83xps0R22ULx93Z4w6r
13ay12oMoHUNVYcR00KqdSU/0nZ8A4vTL0S8dXVGYQIpyQFfW1naHtIyXH1J
xe2lCy/LFrPxCd7OkiPmzy2u0iUIuBxiBSfD4+OqvdcdHb3f7h2+3T9BwMTe
3v6bw97RycnLvbN1U3qMNxVfOMdhV+zkgrvIi5d0R5AXEfWOCbrDLidU8zau
8ov4GpRbjjEqEzefiDaWDeL5bMpV6685dk+RK7VSm64qsJQHF4u8WYNd8hBR
OAqxeBJ+zZjNDGrMTlo2zVNambftNo9qIVuyWZcLVIZDiqcFluV9c4wuD3R+
45ZRIgW9cKOLiv6bHI2xFOTQuJNjAbrtuiDxbHUZWmUqubRbR5kYiblFlN6O
hLOYkN6c/O7w/Bxzorvnh2/2zo/3pZftX9Pb25O3tsdbqV9wz97OQEfYe3tw
ePbmBP4bVAEB8F/eW+0aj9SFFcS/1q2ryZj/lOz1MRwJdOVL+hJeysSdDb5/
QMnXLIpmEuOOHdHZ+yEdj+GtJCASUHVOiRFyLSJ4TskJKTpSt3iku8lBCscE
mt5MJh04RJgD/+bmwyi7mGPO5UE+ADbijuBMwM+/y9LJxil8AS99U0xIrYav
5RvMmyzmbCF8WebAIk7TawzO/5AjN7aNWMmWgBM8P5qUQn4JiuiJGEUst3Rg
hum1sduyLW2YvM1BXwee85bgSQad5CW8DNjQaQqvhpE6Hw+eg0yDks5lBhcH
yuDo+krL/lVyWRbzqWLQvpvkZOEFPoDdI0YHWob2r0rg+MX0KvkhG13nlwJQ
Gu/COP3ALkpx/UZ049rpBvsRG6nm9bLnWNw6skzOL5MmTtOI4LpJ+3nRqa8+
b8mrooR5HqV5eTXHe/JynjN0QV45TX51/w9b+TuYhKcCAA==

-->

</rfc>
