<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.34 (Ruby 2.6.10) -->
<?rfc docmapping="yes"?>
<?rfc comments="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-haynes-nfsv4-flexfiles-v2-04" category="std" consensus="true" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="Flex File Layout v2">Parallel NFS (pNFS) Flexible File Layout Version 2</title>
    <seriesInfo name="Internet-Draft" value="draft-haynes-nfsv4-flexfiles-v2-04"/>
    <author initials="T." surname="Haynes" fullname="Thomas Haynes">
      <organization>Hammerspace</organization>
      <address>
        <email>loghyr@gmail.com</email>
      </address>
    </author>
    <date/>
    <area>General</area>
    <workgroup>Network File System Version 4</workgroup>
    <keyword>Internet-Draft</keyword>
    <abstract>
      <?line 81?>

<t>Parallel NFS (pNFS) allows a separation between the metadata (onto a
metadata server) and data (onto a storage device) for a file.  The
Flexible File Layout Type Version 2 is defined in this document as
an extension to pNFS that allows the use of storage devices that
require only a limited degree of interaction with the metadata
server and use already-existing protocols.  Data protection is also
added to provide integrity: both client-side mirroring and erasure
coding are used to protect the data.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 92?>

<t>Discussion of this draft takes place
on the NFSv4 working group mailing list (nfsv4@ietf.org),
which is archived at
<eref target="https://mailarchive.ietf.org/arch/search/?email_list=nfsv4"/>. The source
code and issue tracker for this draft can be found at
<eref target="https://github.com/ietf-wg-nfsv4/flexfiles-v2"/>.</t>
      <t>Working Group information can be found at <eref target="https://github.com/ietf-wg-nfsv4"/>.</t>
    </note>
  </front>
  <middle>
    <?line 103?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>In Parallel NFS (pNFS) (see Section 12 of <xref target="RFC8881"/>), the metadata
server returns layout type structures that describe where file data is
located.  There are different layout types for different storage systems
and methods of arranging data on storage devices.  <xref target="RFC8435"/> defined
the Flexible File Version 1 Layout Type used with file-based data
servers that are accessed using the NFS protocols: NFSv3 <xref target="RFC1813"/>,
NFSv4.0 <xref target="RFC7530"/>, NFSv4.1 <xref target="RFC8881"/>, and NFSv4.2 <xref target="RFC7862"/>.</t>
      <t>To provide a global state model equivalent to that of the files
layout type, a back-end control protocol might be implemented between
the metadata server and NFSv4.1+ storage devices.  An implementation
can either define its own proprietary mechanism or it could define a
control protocol in a Standards Track document.  The requirements for
a control protocol are specified in <xref target="RFC8881"/> and clarified in
<xref target="RFC8434"/>.</t>
      <t>The control protocol described in this document is based on NFS.  It
does not provide for knowledge of stateids to be passed between the
metadata server and the storage devices.  Instead, the storage
devices are configured such that the metadata server has full access
rights to the data file system and then the metadata server uses
synthetic ids to control client access to individual data files.</t>
      <t>In traditional mirroring of data, the server is responsible for
replicating, validating, and repairing copies of the data file.  With
client-side mirroring, the metadata server provides a layout that
presents the available mirrors to the client.  The client then picks
a mirror to read from and ensures that all writes go to all mirrors.
The client only considers the write transaction to have succeeded if
all mirrors are successfully updated.  In case of error, the client
can use the LAYOUTERROR operation to inform the metadata server,
which is then responsible for the repairing of the mirrored copies of
the file.</t>
<t>Client-side mirroring provides for replication of data but does
not provide for integrity of data.  In the event of an error, a user
would be able to repair the file by silvering the mirror contents;
that is, by picking one of the mirror instances and replicating it to
the other instance locations.</t>
<t>However, without integrity checks, silent corruption cannot be
detected, and choosing which copy is the good one is difficult.  This
document updates the Flexible File Layout Type to version 2 by
providing error-detection integrity (CRC32) for erasure coding.  Data
blocks are transformed into a header and a chunk.  This document also
introduces new operations that allow the client to roll back writes
to the data file.</t>
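      <t>As an illustration of the header-plus-chunk transform, the
following sketch computes a CRC32 over a data block and prepends a
small header.  The header layout shown (CRC followed by length) and
its field sizes are hypothetical; the normative encoding is given by
the XDR definitions in this document.</t>
      <figure anchor="fig-chunk-crc-sketch">
        <name>Sketch of the Header-plus-Chunk Transform</name>
        <sourcecode type="python"><![CDATA[
import struct
import zlib

def encode_chunk(data_block: bytes) -> bytes:
    # Hypothetical header: 32-bit CRC32 of the block, 32-bit length.
    crc = zlib.crc32(data_block) & 0xFFFFFFFF
    header = struct.pack(">II", crc, len(data_block))
    return header + data_block

def decode_chunk(chunk: bytes) -> bytes:
    # Verify the CRC32 before returning the data block; a mismatch
    # indicates silent corruption on the wire or at rest.
    crc, length = struct.unpack(">II", chunk[:8])
    data = chunk[8:8 + length]
    if zlib.crc32(data) & 0xFFFFFFFF != crc:
        raise ValueError("CRC32 mismatch: silent corruption detected")
    return data
]]></sourcecode>
      </figure>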
      <t>Using the process detailed in <xref target="RFC8178"/>, the revisions in this
document become an extension of NFSv4.2 <xref target="RFC7862"/>.  They are built on
top of the external data representation (XDR) <xref target="RFC4506"/> generated
from <xref target="RFC7863"/>.</t>
      <t>This document defines <tt>LAYOUT4_FLEX_FILES_V2</tt>, a new and independent
layout type that coexists with the Flexible File Layout Type version 1
(<tt>LAYOUT4_FLEX_FILES</tt>, <xref target="RFC8435"/>).  The two layout types are not
backward compatible: an FFv2 layout cannot be parsed as an FFv1 layout
and vice versa.  A server <bcp14>MAY</bcp14> support both layout types simultaneously;
a client selects the desired layout type in its LAYOUTGET request.</t>
    </section>
    <section anchor="requirements-language">
      <name>Requirements Language</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

</section>
    <section anchor="sec-motivation">
      <name>Motivation</name>
      <t>Server-side erasure coding places the erasure-coding compute at
the server, which becomes a bottleneck as the number of concurrent
clients grows.  Moving the erasure transform to the client
parallelizes the compute across all writers: each client encodes
locally and fans out the resulting chunks to the data servers
directly, keeping the metadata server in its coordinator role for
metadata rather than making it a data-path funnel.</t>
      <t>Flex Files v1 (<xref target="RFC8435"/>) already places the replication
transform at the client via client-side mirroring, but mirroring
provides no integrity check: silent byte corruption is
undetectable, and repairing a damaged mirror requires choosing a
trusted copy essentially blind.  Flex Files v2 adds two integrity
mechanisms -- a per-chunk CRC32 for on-wire and at-rest bit-flip
detection, and the chunk_guard4 compare-and-swap primitive (see
<xref target="sec-chunk_guard4"/>) for detecting concurrent-writer
inconsistency -- while preserving the client-side compute model.
The chunk_guard4 per-chunk header is 8 bytes total (a 32-bit
generation id and a 32-bit owning-client short-id); this keeps
the metadata-server overhead for maintaining erasure-coding
consistency to the smallest value that still admits a CAS
tiebreaker.</t>
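      <t>The 8-byte chunk_guard4 header described above can be sketched
as follows.  The byte order and field order shown are assumptions for
illustration only; the normative layout is the chunk_guard4 XDR
definition (see <xref target="sec-chunk_guard4"/>).</t>
      <figure anchor="fig-guard-pack-sketch">
        <name>Sketch of the 8-Byte chunk_guard4 Header</name>
        <sourcecode type="python"><![CDATA[
import struct

# Assumed layout: 32-bit generation id, then 32-bit owning-client
# short-id, big-endian (XDR-style).  Illustrative only.
GUARD_FMT = ">II"

def pack_chunk_guard(generation, client_short_id):
    return struct.pack(GUARD_FMT, generation, client_short_id)

def unpack_chunk_guard(raw):
    return struct.unpack(GUARD_FMT, raw)
]]></sourcecode>
      </figure>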
      <t>An alternative to client-side erasure coding is to keep the
erasure-coding transform inside the storage system -- that is, on
the data servers themselves, or on a server-side pre-ingest stage
between the client and the data servers.  This approach has real
advantages: a single codec is fixed at the storage system, so
clients do not have to negotiate codec support; repair never
traverses the client; and the wire protocol stays minimal because
no on-wire consistency primitives are needed.</t>
      <t>Flex Files v2 does not choose that path, for three reasons:</t>
      <ul spacing="normal">
        <li>
          <t><strong>Scale bottleneck.</strong>  The storage system becomes precisely
the scale bottleneck described at the start of this section:
large-scale parallel workloads drive the aggregate
erasure-coding compute beyond what a bounded storage tier can
supply, while clients are the naturally horizontally scaling
resource.</t>
        </li>
        <li>
          <t><strong>Loss of per-file codec flexibility.</strong>  A single
server-fixed codec forecloses the option of picking different
codecs for different files in the same namespace, which
matters when files have different durability and performance
requirements.</t>
        </li>
        <li>
          <t><strong>Benchmark evidence.</strong>  Measurements summarized in
<xref target="sec-implementation-status"/> show that client-side encoding
with the overhead introduced here is competitive with
server-side encoding on realistic workloads, and scales the
encoding compute with the writer population rather than with
the data-server count.</t>
        </li>
      </ul>
      <t>The right answer for a given deployment is not universal;
<xref target="sec-rejected-alternatives"/> records the alternatives considered
and why each was not chosen for Flex Files v2's target workload
classes.</t>
      <t>Client-side erasure coding turns write-hole recovery into a
protocol-level concern rather than an implementer-internal one.
In Flex Files v1, the replication transform produces independent
full-copy mirrors, so a partial write is detected and repaired by
resilvering from a surviving copy.  A single server-side
coordinator has enough visibility to drive that repair without
help from the client.  Under a (k, m) erasure code, in contrast,
a write transaction fans out across multiple data servers with no
single server-side actor holding whole-transaction visibility:
when the client fails mid-fan-out, the partial state across data
servers must be reconciled by the metadata server, and the
reconciliation protocol must be specified on the wire so that any
compliant client, data server, or repair agent can participate.
The chunk_guard4 CAS primitive, the PENDING / FINALIZED /
COMMITTED state machine, the CHUNK_LOCK escrow mechanism, and
CB_CHUNK_REPAIR together form that on-wire reconciliation
protocol.</t>
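      <t>The compare-and-swap idea at the heart of that reconciliation
can be sketched schematically: a guard update is accepted only if the
caller's expected guard matches the stored one, and a losing writer
observes the winner's guard.  This sketch is a conceptual illustration
only; the normative CAS semantics, states, and tiebreaker are defined
by the chunk_guard4 machinery later in this document.</t>
      <figure anchor="fig-guard-cas-sketch">
        <name>Schematic Compare-and-Swap on a Per-Chunk Guard</name>
        <sourcecode type="python"><![CDATA[
class ChunkGuardStore:
    """Schematic per-chunk guard table on a data server (illustrative)."""

    def __init__(self):
        self.guards = {}  # chunk id -> (generation, client_short_id)

    def cas(self, chunk_id, expected, new):
        # Install `new` only if the stored guard equals `expected`;
        # return the guard now stored, so a losing writer can observe
        # the winning writer's guard and back off.
        if self.guards.get(chunk_id) == expected:
            self.guards[chunk_id] = new
        return self.guards.get(chunk_id)
]]></sourcecode>
      </figure>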
      <t>Scope note: the consistency goal of Flex Files v2 is RAID
consistency across the chunks that make up an encoded stripe, not
POSIX write ordering across arbitrary application writes.  The
protocol does not attempt to make overlapping application writes
from different clients atomic: that is the province of file
locking (<xref target="RFC8881"/>, Section 12) and of application-level
coordination.  What the protocol does guarantee is that the
chunks comprising a given stripe agree on which write produced
them, so that readers and repair clients never observe a
half-applied stripe.  Readers who need cross-write ordering
beyond a single stripe <bcp14>MUST</bcp14> use the existing NFSv4 locking
primitives.</t>
    </section>
    <section anchor="sec-use-cases">
      <name>Use Cases</name>
      <t>The protocol is designed around three workload classes.  The
percentages below reflect the expected deployment mix in
installations that choose Flex Files v2 for its combination of
integrity and performance; individual deployments may diverge.</t>
      <dl>
        <dt>Single writer, multiple readers (approximately 90% of expected deployments):</dt>
        <dd>
          <t>The common case is a file written by one client and subsequently
read by many.  Examples include artifacts deposited by batch
jobs, container images, and media files.  The protocol is
optimized for this case; see <xref target="sec-system-model-progress"/>.</t>
        </dd>
        <dt>Multiple writers without sustained contention (approximately 9% of expected deployments):</dt>
        <dd>
          <t>Files with multiple concurrent writers where races on the same
chunk are rare.  Examples include shared-directory append-only
logs and distributed builds.  The chunk_guard4 CAS primitive and
per-chunk locking cover this case without penalizing the common
single-writer path.</t>
        </dd>
        <dt>Multiple writers with high-frequency contention, no overwrite (approximately 1% of expected deployments):</dt>
        <dd>
          <t>High-performance computing (HPC) checkpoint workloads, in which
many ranks write disjoint regions of the same file in lockstep.
The protocol relies on block alignment to keep per-chunk
contention rare despite overall high writer count.  Contention
that does occur is resolved via the deterministic tiebreaker
rule defined in <xref target="sec-chunk_guard4"/>.</t>
        </dd>
      </dl>
      <t>Scale targets include multi-thousand-client deployments (on the
order of tens of thousands of concurrent clients for HPC
checkpointing), parallel-filesystem replacements, and multi-rack
shared-storage clusters.  The repair protocol (see
<xref target="sec-repair-selection"/>) is designed to let such deployments
tolerate data-server failures and concurrent-writer races without
blocking the critical path for the first two workload classes.</t>
    </section>
    <section anchor="definitions">
      <name>Definitions</name>
      <dl>
        <dt>chunk:</dt>
        <dd>
          <t>one of the units of data exchanged with a data server after a
transformation has been applied to a data block.  The resulting chunk
may be a different size than the data block.</t>
        </dd>
        <dt>control communication requirements:</dt>
        <dd>
          <t>the specification for information on layouts, stateids, file metadata,
and file data that must be communicated between the metadata server and
the storage devices.  There is a separate set of requirements for each
layout type.</t>
        </dd>
        <dt>control protocol:</dt>
        <dd>
          <t>the particular mechanism that an implementation of a layout type would
use to meet the control communication requirement for that layout type.
This need not be a protocol as normally understood.  In some cases,
the same protocol may be used as a control protocol and storage protocol.</t>
        </dd>
        <dt>client-side mirroring:</dt>
        <dd>
          <t>a feature in which the client, not the server, is responsible for
updating all of the mirrored copies of a layout segment.</t>
        </dd>
        <dt>data block:</dt>
        <dd>
          <t>A block of data in the client's cache for a file.</t>
        </dd>
        <dt>data file:</dt>
        <dd>
          <t>The data portion of the file, stored on the data server.</t>
        </dd>
        <dt>replication of data:</dt>
        <dd>
          <t>Data replication is making and storing multiple copies of data in
different locations.</t>
        </dd>
        <dt>Erasure Coding:</dt>
        <dd>
          <t>a data protection scheme in which a block of data is divided into
fragments and additional redundant fragments are computed to provide
parity.  The resulting chunks are stored in different locations.</t>
        </dd>
        <dt>Client Side Erasure Coding:</dt>
        <dd>
          <t>a feature in which the client, not the server, applies the
erasure-coding transform to data blocks and writes the resulting
chunks to the storage devices.</t>
        </dd>
        <dt>(file) data:</dt>
        <dd>
          <t>that part of the file system object that contains the data to be read
or written.  It is the contents of the object rather than the attributes
of the object.</t>
        </dd>
        <dt>data server (DS):</dt>
        <dd>
          <t>a pNFS server that provides the file's data when the file system
object is accessed over a file-based protocol.</t>
        </dd>
        <dt>fencing:</dt>
        <dd>
          <t>the process by which the metadata server prevents the storage devices
from processing I/O from a specific client to a specific file.</t>
        </dd>
        <dt>file layout type:</dt>
        <dd>
          <t>a layout type in which the storage devices are accessed via the NFS
protocol (see Section 5.12.4 of <xref target="RFC8881"/>).</t>
        </dd>
        <dt>gid:</dt>
        <dd>
          <t>the group id, a numeric value that identifies to which group a file
belongs.</t>
        </dd>
        <dt>layout:</dt>
        <dd>
          <t>the information a client uses to access file data on a storage device.
This information includes specification of the protocol (layout type)
and the identity of the storage devices to be used.</t>
        </dd>
        <dt>layout iomode:</dt>
        <dd>
          <t>a grant of either read-only or read/write I/O to the client.</t>
        </dd>
        <dt>layout segment:</dt>
        <dd>
          <t>a sub-division of a layout.  That sub-division might be by the layout
iomode (see Sections 3.3.20 and 12.2.9 of <xref target="RFC8881"/>), a striping pattern
(see Section 13.3 of <xref target="RFC8881"/>), or requested byte range.</t>
        </dd>
        <dt>layout stateid:</dt>
        <dd>
          <t>a 128-bit quantity returned by a server that uniquely defines the
layout state provided by the server for a specific layout that describes
a layout type and file (see Section 12.5.2 of <xref target="RFC8881"/>).  Further,
Section 12.5.3 of <xref target="RFC8881"/> describes differences in handling between
layout stateids and other stateid types.</t>
        </dd>
        <dt>layout type:</dt>
        <dd>
          <t>a specification of both the storage protocol used to access the data
and the aggregation scheme used to lay out the file data on the underlying
storage devices.</t>
        </dd>
        <dt>loose coupling:</dt>
        <dd>
          <t>when the control protocol is a storage protocol.</t>
        </dd>
        <dt>(file) metadata:</dt>
        <dd>
          <t>the part of the file system object that contains various descriptive
data relevant to the file object, as opposed to the file data itself.
This could include the time of last modification, access time, EOF
position, etc.</t>
        </dd>
        <dt>metadata server (MDS):</dt>
        <dd>
          <t>the pNFS server that provides metadata information for a file system
object.  It is also responsible for generating, recalling, and revoking
layouts for file system objects, for performing directory operations,
and for performing I/O operations to regular files when the clients
direct these to the metadata server itself.</t>
        </dd>
        <dt>mirror:</dt>
        <dd>
          <t>a copy of a layout segment.  Note that if one copy of the mirror is
updated, then all copies must be updated.</t>
        </dd>
        <dt>non-systematic encoding:</dt>
        <dd>
          <t>An erasure coding scheme in which the encoded shards do not contain
verbatim copies of the original data.  Every read requires decoding,
even when no shards are lost.  The Mojette non-systematic transform is
an example.  Non-systematic encodings are typically used for archival
workloads where reads are infrequent.</t>
        </dd>
        <dt>recalling a layout:</dt>
        <dd>
          <t>a graceful recall, via a callback, of a specific layout by the metadata
server to the client.  Graceful here means that the client would have
the opportunity to flush any WRITEs, etc., before returning the layout
to the metadata server.</t>
        </dd>
        <dt>revoking a layout:</dt>
        <dd>
          <t>an invalidation of a specific layout by the metadata server.
Once revocation occurs, the metadata server will not accept as valid any
reference to the revoked layout, and a storage device will not accept
any client access based on the layout.</t>
        </dd>
        <dt>resilvering:</dt>
        <dd>
          <t>the act of rebuilding a mirrored copy of a layout segment from a
known good copy of the layout segment.  Note that this can also be done
to create a new mirrored copy of the layout segment.</t>
        </dd>
        <dt>rsize:</dt>
        <dd>
          <t>the data transfer buffer size used for READs.</t>
        </dd>
        <dt>stateid:</dt>
        <dd>
          <t>a 128-bit quantity returned by a server that uniquely defines the set
of locking-related state provided by the server.  Stateids may designate
state related to open files, byte-range locks, delegations, or layouts.</t>
        </dd>
        <dt>storage device:</dt>
        <dd>
          <t>the target to which clients may direct I/O requests when they hold
an appropriate layout.  See Section 2.1 of <xref target="RFC8434"/> for further
discussion of the difference between a data server and a storage device.</t>
        </dd>
        <dt>storage protocol:</dt>
        <dd>
          <t>the protocol used by clients to do I/O operations to the storage
device.  Each layout type specifies the set of storage protocols.</t>
        </dd>
        <dt>systematic encoding:</dt>
        <dd>
          <t>An erasure coding scheme in which the first k of the k+m encoded
shards are identical to the original k data blocks.  A healthy read
(no failures) requires no decoding -- the data shards are read directly.
Decoding is triggered only when data shards are missing.  Reed-Solomon
Vandermonde and Mojette systematic are examples.</t>
        </dd>
        <dt>tight coupling:</dt>
        <dd>
          <t>an arrangement in which the control protocol is one designed
specifically for control communication.  It may be either a proprietary
protocol adapted specifically to a particular metadata server or a
protocol based on a Standards Track document.</t>
        </dd>
        <dt>uid:</dt>
        <dd>
          <t>the user id, a numeric value that identifies which user owns a file.</t>
        </dd>
        <dt>write hole:</dt>
        <dd>
          <t>a data corruption scenario that can arise when either two clients
write to the same chunk concurrently or one client overwrites an
existing chunk of data.</t>
        </dd>
        <dt>wsize:</dt>
        <dd>
          <t>the data transfer buffer size used for WRITEs.</t>
        </dd>
      </dl>
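      <t>As a minimal concrete example of the systematic property defined
above, consider k data shards protected by a single XOR parity shard
(m = 1): a healthy read returns the data shards verbatim, and any one
lost shard can be rebuilt from the remaining shards.  This sketch is
illustrative only and is not one of the codecs specified by this
document.</t>
      <figure anchor="fig-xor-systematic-sketch">
        <name>Systematic Encoding with a Single XOR Parity Shard</name>
        <sourcecode type="python"><![CDATA[
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_systematic(data_shards):
    # k shards in, k+1 out: the first k are verbatim copies (the
    # systematic property); the last is their XOR parity.
    parity = data_shards[0]
    for shard in data_shards[1:]:
        parity = xor_bytes(parity, shard)
    return list(data_shards) + [parity]

def recover(shards):
    # Rebuild at most one missing shard (marked None) by XOR of the
    # rest, then return the original k data shards.
    missing = [i for i, s in enumerate(shards) if s is None]
    if len(missing) > 1:
        raise ValueError("single XOR parity tolerates only one loss")
    if missing:
        present = [s for s in shards if s is not None]
        rebuilt = present[0]
        for s in present[1:]:
            rebuilt = xor_bytes(rebuilt, s)
        shards[missing[0]] = rebuilt
    return shards[:-1]
]]></sourcecode>
      </figure>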
    </section>
    <section anchor="coupling-of-storage-devices">
      <name>Coupling of Storage Devices</name>
      <t>A server implementation may choose either a loosely coupled model or a
tightly coupled model between the metadata server and the storage devices.
<xref target="RFC8434"/> describes the general problems facing pNFS implementations.
This document details how the new flexible file layout type addresses
these issues.  To implement the tightly coupled model, a control protocol
has to be defined.  As the flexible file layout imposes no special
requirements on the client, the control protocol will need to provide:</t>
      <ol spacing="normal" type="1"><li>
          <t>management of both security and LAYOUTCOMMITs and</t>
        </li>
        <li>
          <t>a global stateid model and management of these stateids.</t>
        </li>
      </ol>
      <t>When implementing the loosely coupled model, the only control protocol
will be a version of NFS, with no ability to provide a global stateid
model or to prevent clients from using layouts inappropriately.  To enable
client use in that environment, this document will specify how security,
state, and locking are to be managed.</t>
      <t>The loosely and tightly coupled locking models defined in Section 2.3
of <xref target="RFC8435"/> apply equally to this layout type, including the use of
anonymous stateids with loosely coupled storage devices, the handling
of lock and delegation stateids, and the mandatory byte-range lock
requirements for the tightly coupled model.</t>
      <section anchor="layoutcommit">
        <name>LAYOUTCOMMIT</name>
        <t>Regardless of the coupling model, the metadata server has the
responsibility, upon receiving a LAYOUTCOMMIT (see Section 18.42 of
<xref target="RFC8881"/>) to ensure that the semantics of pNFS are respected (see
Section 3.1 of <xref target="RFC8434"/>).  These do include a requirement that data
written to a data storage device be stable before the occurrence of
the LAYOUTCOMMIT.</t>
        <t>It is the responsibility of the client to make sure the data file is
stable before the metadata server begins to query the storage devices
about the changes to the file.  If any WRITE to a storage device did not
result with stable_how equal to FILE_SYNC, a LAYOUTCOMMIT to the metadata
server <bcp14>MUST</bcp14> be preceded by a COMMIT to the storage devices written to.
Note that if the client has not done a COMMIT to the storage device, then
the LAYOUTCOMMIT might not be synchronized to the last WRITE operation
to the storage device.</t>
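        <t>The ordering requirement above can be sketched from the
client's perspective as follows.  The helper and the RPC-sending
callables are hypothetical stand-ins, not part of any protocol or API
defined here; the sketch only shows that a COMMIT to every storage
device holding unstable WRITEs precedes the LAYOUTCOMMIT.</t>
        <figure anchor="fig-layoutcommit-order-sketch">
          <name>Sketch of COMMIT-before-LAYOUTCOMMIT Ordering</name>
          <sourcecode type="python"><![CDATA[
FILE_SYNC = "FILE_SYNC"

def commit_then_layoutcommit(write_results, send_commit, send_layoutcommit):
    # write_results: iterable of (storage_device, stable_how) pairs.
    unstable = {dev for dev, stable_how in write_results
                if stable_how != FILE_SYNC}
    for dev in sorted(unstable):
        send_commit(dev)       # make the data stable on each device first
    send_layoutcommit()        # only then notify the metadata server
]]></sourcecode>
        </figure>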
      </section>
      <section anchor="sec-Fencing-Clients">
        <name>Fencing Clients from the Storage Device</name>
        <t>With loosely coupled storage devices, the metadata server uses synthetic
uids (user ids) and gids (group ids) for the data file, where the uid
owner of the data file is allowed read/write access and the gid owner
is allowed read-only access.  As part of the layout (see ffv2ds_user
and ffv2ds_group in <xref target="sec-ffv2_layout"/>), the client is provided
with the user and group to be used in the Remote Procedure Call
(RPC) <xref target="RFC5531"/> credentials needed to access the data file.
Fencing off of clients is achieved by the metadata server changing
the synthetic uid and/or gid owners of the data file on the storage
device to implicitly revoke the outstanding RPC credentials.  A
client presenting the wrong credential for the desired access will
get an NFS4ERR_ACCESS error.</t>
        <t>With this loosely coupled model, the metadata server is not able to fence
off a single client; it is forced to fence off all clients.  However,
as the other clients react to the fencing, returning their layouts and
trying to get new ones, the metadata server can hand out a new uid and
gid to allow access.</t>
        <t>It is <bcp14>RECOMMENDED</bcp14> to implement common access control methods at the
storage device file system to allow only the metadata server root
(super user) access to the storage device and to set the owner of all
directories holding data files to the root user.  This approach provides
a practical model to enforce access control and fence off cooperative
clients, but it cannot protect against malicious clients; hence, it
provides a level of security equivalent to AUTH_SYS.  It is <bcp14>RECOMMENDED</bcp14>
that the communication between the metadata server and storage device
be secure from eavesdroppers and man-in-the-middle protocol tampering.
The security measure could be physical security (e.g., the servers
are co-located in a physically secure area), encrypted communications,
or some other technique.</t>
        <t>With tightly coupled storage devices, the metadata server sets the
user and group owners, mode bits, and Access Control List (ACL) of
the data file to be the same as the metadata file.  The client must then
authenticate with the storage device and go through the same authorization
process it would use via the metadata server.  In the case of
tight coupling, fencing is the responsibility of the control protocol and
is not described in detail in this document.  However, implementations
of the tightly coupled locking model (see <xref target="sec-state-locking"/>) will
need a way to prevent access by certain clients to specific files by
invalidating the corresponding stateids on the storage device.  In such
a scenario, the client will be given an error of NFS4ERR_BAD_STATEID.</t>
        <t>The client need not know the model used between the metadata server and
the storage device.  It need only react consistently to any errors in
interacting with the storage device.  It <bcp14>SHOULD</bcp14> both return the layout
and error to the metadata server and ask for a new layout.  At that point,
the metadata server can either hand out a new layout, hand out no layout
(forcing the I/O through it), or deny the client further access to
the file.</t>
        <section anchor="implementation-notes-for-synthetic-uidsgids">
          <name>Implementation Notes for Synthetic uids/gids</name>
          <t>The selection method for the synthetic uids and gids to be used for
fencing in loosely coupled storage devices is strictly an implementation
issue.  That is, an administrator might restrict a range of such ids
available to the Lightweight Directory Access Protocol (LDAP) 'uid' field
<xref target="RFC4519"/>.  The administrator might also be able to choose an id that
would never be used to grant access.  Then, when the metadata server had
a request to access a file, a SETATTR would be sent to the storage device
to set the owner and group of the data file.  The user and group might
be selected in a round-robin fashion from the range of available ids.</t>
          <t>Those ids would be sent back as ffv2ds_user and ffv2ds_group to the
client, who would present them as the RPC credentials to the storage
device.  When the client is done accessing the file and the metadata
server knows that no other client is accessing the file, it can
reset the owner and group to restrict access to the data file.</t>
          <t>When the metadata server wants to fence off a client, it changes the
synthetic uid and/or gid to the restricted ids.  Note that using a
restricted id ensures that there is a change of owner and at least one
id available that never gets allowed access.</t>
          <t>Under an AUTH_SYS security model, synthetic uids and gids of 0 <bcp14>SHOULD</bcp14> be
avoided.  These typically either grant super access to files on a storage
device or are mapped to an anonymous id.  In the first case, even if the
data file is fenced, the client might still be able to access the file.
In the second case, multiple ids might be mapped to the anonymous ids.</t>
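          <t>A possible shape for such an allocator is sketched below:
ids are handed out round-robin from the administrator-configured
range, and neither id 0 nor the restricted (fencing) ids are ever
handed out.  All names here are hypothetical; id selection remains
strictly an implementation matter.</t>
          <figure anchor="fig-synthetic-id-sketch">
            <name>Sketch of a Round-Robin Synthetic id Allocator</name>
            <sourcecode type="python"><![CDATA[
import itertools

class SyntheticIdAllocator:
    def __init__(self, id_range, restricted):
        # Never hand out id 0 (root / anonymous-mapping hazards) or
        # the restricted ids reserved for fencing.
        usable = [i for i in id_range if i != 0 and i not in restricted]
        self._cycle = itertools.cycle(usable)

    def next_id(self):
        return next(self._cycle)
]]></sourcecode>
          </figure>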
        </section>
        <section anchor="example-of-using-synthetic-uidsgids">
          <name>Example of using Synthetic uids/gids</name>
          <t>The user loghyr creates a file "ompha.c" on the metadata server, which
then creates a corresponding data file on the storage device.</t>
          <t>The metadata server entry may look like:</t>
          <figure anchor="fig-meta-ompha">
            <name>Metadata's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r--r--    1 loghyr  staff    1697 Dec  4 11:31 ompha.c
]]></sourcecode>
          </figure>
          <t>On the storage device, the file may be assigned some unpredictable
synthetic uid/gid to deny access:</t>
          <figure anchor="fig-data-ompha">
            <name>Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19452   28418    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>When the file is opened on a client and accessed, the client will try
to get a layout for the data file.  Since the layout knows nothing about
the user (and does not care), it does not matter whether the user loghyr
or garbo opens the file.  The client has to present a uid of 19452
to get write permission.  If it presents any other value for the uid,
then it must give a gid of 28418 to get read access.</t>
          <t>Further, if the metadata server decides to fence the file, it <bcp14>SHOULD</bcp14>
change the uid and/or gid such that these values neither match earlier
values for that file nor match a predictable change based on an earlier
fencing.</t>
          <figure anchor="fig-fenced-ompha">
            <name>Fenced Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19453   28419    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
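          <t>The rotation requirement above can be sketched as follows.  This is a
non-normative Python illustration; the function name and the candidate-rejection
rules shown are illustrative only:</t>
          <figure anchor="fig-ex-fence-rotate">
            <name>Sketch: rotating synthetic ids on fencing (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch: rotating a synthetic id when fencing a file.
# The metadata server tracks the ids previously used for the file and
# draws fresh random values, rejecting candidates that repeat an old
# value or predictably extend the last one.
import secrets

def next_synthetic_id(history):
    """Pick a synthetic id that neither repeats nor predictably
    extends the ids already used for this file."""
    while True:
        candidate = secrets.randbelow(2**31 - 2) + 1  # avoid 0 (root)
        if candidate in history:
            continue                  # never reuse an earlier value
        if history and candidate == history[-1] + 1:
            continue                  # avoid a predictable increment
        history.append(candidate)
        return candidate

uid_history = [19452]                 # ids used before fencing
new_uid = next_synthetic_id(uid_history)
assert new_uid not in (19452, 19453)
]]></sourcecode>
          </figure>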
          <t>The set of synthetic gids on the storage device <bcp14>SHOULD</bcp14> be selected such
that there is no mapping in any of the name services used by the storage
device, i.e., each group <bcp14>SHOULD</bcp14> have no members.</t>
          <t>If the layout segment has an iomode of LAYOUTIOMODE4_READ, then the
metadata server <bcp14>SHOULD</bcp14> return a synthetic uid that is not set on the
storage device.  Only the synthetic gid would be valid.</t>
          <t>The client is thus solely responsible for enforcing file permissions
in a loosely coupled model.  To allow loghyr write access, it will send
an RPC to the storage device with a credential of 19452:28418.  To allow
garbo read access, it will send an RPC to the storage device with a
credential of 1066:28418.  The value of the uid does not matter as long
as it is not the synthetic uid granted when getting the layout.</t>
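          <t>The client-side credential choice can be sketched as follows, using
the synthetic ids from the example figures (19452/28418).  This is a
non-normative Python illustration; the names and the arbitrary uid 1066 are
invented for the example:</t>
          <figure anchor="fig-ex-loose-cred">
            <name>Sketch: client credential selection (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the client-side choice of AUTH_SYS
# credential in the loosely coupled model.  The synthetic ids come
# from the layout; FAKE_UID is any value other than the synthetic uid.
SYNTHETIC_UID = 19452   # from the layout (uid grants write)
SYNTHETIC_GID = 28418   # from the layout (gid grants read)
FAKE_UID = 1066         # arbitrary, must differ from SYNTHETIC_UID

def credential_for(iomode):
    """Return the (uid, gid) the client presents to the storage
    device after it has locally checked the user's permissions."""
    if iomode == "RW":
        return (SYNTHETIC_UID, SYNTHETIC_GID)  # uid grants write
    return (FAKE_UID, SYNTHETIC_GID)           # gid grants read only

assert credential_for("RW") == (19452, 28418)
assert credential_for("READ")[0] != SYNTHETIC_UID
]]></sourcecode>
          </figure>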
          <t>While pushing the enforcement of permission checking onto the client
may seem to weaken security, the client may already be responsible
for enforcing permissions before modifications are sent to a server.
With cached writes, the client is always responsible for tracking who is
modifying a file and making sure to not coalesce requests from multiple
users into one request.</t>
        </section>
      </section>
      <section anchor="sec-state-locking">
        <name>State and Locking Models</name>
        <t>An implementation can always be deployed as a loosely coupled model.
There is, however, no way for a storage device to indicate over an NFS
protocol that it can definitively participate in a tightly coupled model:</t>
        <ul spacing="normal">
          <li>
            <t>Storage devices implementing the NFSv3 and NFSv4.0 protocols are
always treated as loosely coupled.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do not return the
EXCHGID4_FLAG_USE_PNFS_DS flag set in the reply to EXCHANGE_ID are
indicating that they are to be treated as loosely coupled.  From the
locking viewpoint, they are treated in the same way as NFSv4.0 storage
devices.</t>
          </li>
          <li>
            <t>NFSv4.1+ storage devices that do identify themselves with the
EXCHGID4_FLAG_USE_PNFS_DS flag set in the reply to EXCHANGE_ID can
potentially be tightly coupled.  They would use a back-end control
protocol to implement the global stateid model as described in <xref target="RFC8881"/>.</t>
          </li>
        </ul>
        <t>A storage device would have to be either discovered or advertised over
the control protocol to enable a tightly coupled model.</t>
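        <t>The classification above can be sketched as follows.  This is a
non-normative Python illustration; the flag value is taken from <xref target="RFC8881"/>
and the function name is invented for the example:</t>
        <figure anchor="fig-ex-coupling">
          <name>Sketch: classifying a storage device's coupling (non-normative)</name>
          <sourcecode type="python"><![CDATA[
# Non-normative sketch: how a metadata server classifies a storage
# device's possible coupling model from its NFS version and the
# EXCHGID4_FLAG_USE_PNFS_DS flag returned by EXCHANGE_ID.
EXCHGID4_FLAG_USE_PNFS_DS = 0x00040000   # value from RFC 8881

def coupling_model(version, minorversion, exchange_id_flags):
    if version == 3 or (version == 4 and minorversion == 0):
        return "loose"                   # NFSv3/NFSv4.0: always loose
    if exchange_id_flags & EXCHGID4_FLAG_USE_PNFS_DS:
        return "tight-capable"           # may use a control protocol
    return "loose"                       # treated like NFSv4.0

assert coupling_model(3, 0, 0) == "loose"
assert coupling_model(4, 2, EXCHGID4_FLAG_USE_PNFS_DS) == "tight-capable"
assert coupling_model(4, 1, 0) == "loose"
]]></sourcecode>
        </figure>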
        <section anchor="loosely-coupled-locking-model">
          <name>Loosely Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
When an NFSv4 version is used as the data access protocol, the metadata
server may make stateid-related requests of the storage devices.  However,
it is not required to do so, and the resulting stateids are known only
to the metadata server and the storage device.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with by the metadata server.  Stateids are
selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server may need to interact with the storage device to
locate the file to be opened, but no locking-related functionality
need be used on the storage device.</t>
            </li>
            <li>
              <t>OPEN_DOWNGRADE and CLOSE only require local execution on the
metadata server.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and only
used on the metadata server.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  No storage device
involvement is required.</t>
            </li>
            <li>
              <t>TEST_STATEID and FREE_STATEID are processed locally on the
metadata server, without storage device involvement.</t>
            </li>
          </ul>
          <t>All I/O operations to the storage device are done using the anonymous
stateid.  Thus, the storage device has no information about the openowner
and lockowner responsible for issuing a particular I/O operation.
As a result:</t>
          <ul spacing="normal">
            <li>
              <t>Mandatory byte-range locking cannot be supported because the
storage device has no way of distinguishing I/O done on behalf of
the lock owner from I/O done by others.</t>
            </li>
            <li>
              <t>Enforcement of share reservations is the responsibility of the
client.  Even though I/O is done using the anonymous stateid, the
client <bcp14>MUST</bcp14> ensure that it has a valid stateid associated with the
openowner.</t>
            </li>
          </ul>
          <t>In the event that a stateid is revoked, the metadata server is responsible
for preventing client access, since it has no way of being sure that
the client is aware that the stateid in question has been revoked.</t>
          <t>As the client never receives a stateid generated by a storage device,
there is no client lease on the storage device and no prospect of lease
expiration, even when access is via NFSv4 protocols.  Clients will
have leases on the metadata server.  In dealing with lease expiration,
the metadata server may need to use fencing to prevent revoked stateids
from being relied upon by a client unaware of the fact that they have
been revoked.</t>
        </section>
        <section anchor="tightly-coupled-locking-model">
          <name>Tightly Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
These stateids <bcp14>MUST</bcp14> be made known to the storage device using control
protocol facilities.  For flex files v2 deployments in which the storage
devices are NFSv4.2 servers, those facilities are provided by the
TRUST_STATEID, REVOKE_STATEID, and BULK_REVOKE_STATEID operations
defined in <xref target="sec-tight-coupling-control"/>.</t>
          <t>The metadata server and a storage device establish that they can
use TRUST_STATEID via a two-part handshake, both parts of which
<bcp14>MUST</bcp14> succeed before the metadata server may issue TRUST_STATEID
against that storage device for production traffic:</t>
          <ol spacing="normal" type="1"><li>
              <t><strong>Capability probe.</strong>  At control-session setup the metadata
server sends a TRUST_STATEID against the anonymous stateid
(see <xref target="sec-tight-coupling-probe"/>).  A storage device that
supports tight coupling <bcp14>MUST</bcp14> reject the probe with
NFS4ERR_INVAL; a storage device that does not support tight
coupling returns NFS4ERR_NOTSUPP and the metadata server
falls back to loose coupling.  The metadata server records
the result per storage device in ffdv_tightly_coupled.</t>
            </li>
            <li>
              <t><strong>Control-session gating.</strong>  The metadata server presents
EXCHGID4_FLAG_USE_PNFS_MDS at EXCHANGE_ID when it opens the
control session to the storage device
(see <xref target="sec-tight-coupling-control-session"/>).  The storage
device <bcp14>MUST</bcp14> reject any incoming TRUST_STATEID,
REVOKE_STATEID, or BULK_REVOKE_STATEID that does not arrive
on such a session with NFS4ERR_PERM.  This is the
authorization mechanism that distinguishes the metadata
server from ordinary pNFS clients, which connect with
EXCHGID4_FLAG_USE_PNFS_DS or EXCHGID4_FLAG_USE_NON_PNFS and
are therefore structurally unable to invoke these operations.</t>
            </li>
          </ol>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with primarily on the metadata server.  Stateids
are selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server needs to interact with the storage device to
locate the file to be opened and to make the storage device aware of
the association between the metadata-server-chosen stateid and the
client and openowner that it represents.  OPEN_DOWNGRADE and CLOSE
are executed initially on the metadata server, but the state change
<bcp14>MUST</bcp14> be propagated to the storage device.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and are
available for use on the metadata server.  Because I/O operations
are allowed to present lock stateids, the metadata server needs the
ability to make the storage device aware of the association between
the metadata-server-chosen stateid and the corresponding open stateid
it is associated with.</t>
            </li>
            <li>
              <t>Mandatory byte-range locks can be supported when both the metadata
server and the storage devices have the appropriate support.  As in
the case of advisory byte-range locks, these are assigned by the
metadata server and are available for use on the metadata server.
To enable mandatory lock enforcement on the storage device, the
metadata server needs the ability to make the storage device aware
of the association between the metadata-server-chosen stateid and
the client, openowner, and lock (i.e., lockowner, byte-range, and
lock-type) that it represents.  Because I/O operations are allowed
to present lock stateids, this information needs to be propagated to
all storage devices to which I/O might be directed rather than only
to the storage devices that contain the locked region.</t>
            </li>
            <li>
              <t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  Because I/O operations
are allowed to present delegation stateids, the metadata server
requires the ability:</t>
              <ol spacing="normal" type="1"><li>
                  <t>to make the storage device aware of the association between
the metadata-server-chosen stateid and the filehandle and
delegation type it represents</t>
                </li>
                <li>
                  <t>to break such an association.</t>
                </li>
              </ol>
            </li>
            <li>
              <t>TEST_STATEID is processed locally on the metadata server, without
storage device involvement.</t>
            </li>
            <li>
              <t>FREE_STATEID is processed on the metadata server, but the metadata
server requires the ability to propagate the request to the
corresponding storage devices.</t>
            </li>
          </ul>
          <t>Because the client will possess and use stateids valid on the storage
device, there will be a client lease on the storage device, and the
possibility of lease expiration does exist.  The best approach for the
storage device is to retain these locks as a courtesy.  However, if it
does not do so, control protocol facilities need to provide the means
to synchronize lock state between the metadata server and storage device.</t>
          <t>Clients will also have leases on the metadata server that are subject
to expiration.  In dealing with lease expiration, the metadata server
would be expected to use control protocol facilities enabling it to
invalidate revoked stateids on the storage device.  In the event the
client is not responsive, the metadata server may need to use fencing
to prevent revoked stateids from being acted upon by the storage device.</t>
        </section>
      </section>
      <section anchor="sec-tight-coupling-control">
        <name>Tight Coupling Control Protocol</name>
        <t>When an NFSv4.2 storage device participates in a tightly coupled
deployment, the metadata server and the storage devices need a
control protocol that:</t>
        <ol spacing="normal" type="1"><li>
            <t>registers the layout stateid with each storage device so the
storage device can validate client I/O independently; and</t>
          </li>
          <li>
            <t>revokes trust promptly when the metadata server withdraws the
client's authorization -- for example, on CB_LAYOUTRECALL
timeout, lease expiry, or layout return after error.</t>
          </li>
        </ol>
        <t>This specification defines that control protocol as three new
NFSv4.2 operations: TRUST_STATEID (<xref target="sec-TRUST_STATEID"/>),
REVOKE_STATEID (<xref target="sec-REVOKE_STATEID"/>), and BULK_REVOKE_STATEID
(<xref target="sec-BULK_REVOKE_STATEID"/>).  These operations are sent by the
metadata server to each storage device over a dedicated control
session (see <xref target="sec-tight-coupling-control-session"/>) and <bcp14>MUST NOT</bcp14>
be sent by pNFS clients.</t>
        <section anchor="sec-tight-coupling-probe">
          <name>Capability Discovery</name>
          <t>A storage device indicates support for tight coupling implicitly,
by processing TRUST_STATEID rather than returning NFS4ERR_NOTSUPP.
The metadata server probes each storage device during
control-session setup:</t>
          <figure anchor="fig-trust-stateid-probe">
            <name>TRUST_STATEID capability probe</name>
            <artwork><![CDATA[
SEQUENCE + PUTROOTFH + TRUST_STATEID(
    tsa_layout_stateid = ANONYMOUS_STATEID,
    tsa_iomode         = LAYOUTIOMODE4_READ,
    tsa_expire         = 0,
    tsa_principal      = "")
]]></artwork>
          </figure>
          <t>The anonymous stateid is used deliberately: a correctly implemented
storage device <bcp14>MUST</bcp14> reject it (see <xref target="sec-TRUST_STATEID"/>), so the
probe cannot accidentally register garbage in the trust table.  The
metadata server interprets the probe response as follows:</t>
          <ul spacing="normal">
            <li>
              <t>NFS4ERR_NOTSUPP: tight coupling is not supported on this
storage device.  The metadata server falls back to loose coupling
(anonymous stateid plus fencing) and sets ffdv_tightly_coupled
to false for this storage device.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: tight coupling is supported.  The anonymous
stateid was correctly rejected.  The metadata server records the
capability and sets ffdv_tightly_coupled to true for this
storage device.</t>
            </li>
            <li>
              <t>NFS4_OK: the storage device accepted an anonymous stateid into
its trust table.  This is a storage device bug.  The metadata
server <bcp14>SHOULD</bcp14> log the anomaly.  It <bcp14>MAY</bcp14> treat the capability as
confirmed to avoid downgrading to loose coupling, but it <bcp14>MUST</bcp14>
immediately issue REVOKE_STATEID to remove the bogus entry.</t>
            </li>
          </ul>
          <t>The capability is recorded per storage device, not per file.
Partial support across a mirror set is permitted: each
ff_device_versions4 entry returned by GETDEVICEINFO carries its
own ffdv_tightly_coupled flag, set independently.</t>
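          <t>The probe interpretation above can be sketched as follows.  This is
a non-normative Python illustration; the error-code values are those defined
in <xref target="RFC8881"/>, and the function name is invented for the example:</t>
          <figure anchor="fig-ex-probe">
            <name>Sketch: interpreting the capability probe (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the metadata server's interpretation of
# the TRUST_STATEID capability probe against the anonymous stateid.
NFS4_OK, NFS4ERR_INVAL, NFS4ERR_NOTSUPP = 0, 22, 10004

def interpret_probe(status):
    """Return (tightly_coupled, needs_cleanup_revoke)."""
    if status == NFS4ERR_NOTSUPP:
        return (False, False)   # fall back to loose coupling
    if status == NFS4ERR_INVAL:
        return (True, False)    # anonymous stateid correctly rejected
    if status == NFS4_OK:
        # Storage device bug: it accepted the anonymous stateid.
        # The capability may be kept, but the bogus trust entry
        # must be removed with an immediate REVOKE_STATEID.
        return (True, True)
    raise ValueError("unexpected probe status: %d" % status)

assert interpret_probe(NFS4ERR_INVAL) == (True, False)
assert interpret_probe(NFS4ERR_NOTSUPP) == (False, False)
]]></sourcecode>
          </figure>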
        </section>
        <section anchor="sec-tight-coupling-control-session">
          <name>Control Session</name>
          <t>The metadata server establishes an NFSv4.2 session to each
tight-coupling-capable storage device at startup.  On this session
the metadata server acts as the storage device's client and
presents EXCHGID4_FLAG_USE_PNFS_MDS in its EXCHANGE_ID args.</t>
          <t>The storage device <bcp14>MUST</bcp14> verify that any incoming TRUST_STATEID,
REVOKE_STATEID, or BULK_REVOKE_STATEID compound arrives on a
session whose owning client presented EXCHGID4_FLAG_USE_PNFS_MDS
in its EXCHANGE_ID args.  Requests that arrive on any other
session <bcp14>MUST</bcp14> be rejected with NFS4ERR_PERM.  This is the sole
access control on these operations; a pNFS client connecting to
the storage device does not present EXCHGID4_FLAG_USE_PNFS_MDS
and therefore cannot invoke them.</t>
          <t>The EXCHGID4_FLAG_USE_PNFS_MDS check replaces any path- or
filehandle-level gating.  TRUST_STATEID operates on a filehandle
that may be any file on the storage device, and the metadata
server is the sole authority that can legitimately speak this
protocol.</t>
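          <t>The gating check above can be sketched as follows.  This is a
non-normative Python illustration; the flag value is taken from
<xref target="RFC8881"/>, and the function name is invented for the example:</t>
          <figure anchor="fig-ex-gate">
            <name>Sketch: control-session gating (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the storage device's gate on the control
# operations.  The sole access control is the EXCHGID4_FLAG_USE_PNFS_MDS
# flag presented by the session's owning client at EXCHANGE_ID time.
EXCHGID4_FLAG_USE_PNFS_MDS = 0x00020000   # value from RFC 8881
NFS4_OK, NFS4ERR_PERM = 0, 1

CONTROL_OPS = {"TRUST_STATEID", "REVOKE_STATEID", "BULK_REVOKE_STATEID"}

def gate_control_op(op_name, session_client_flags):
    """Reject control operations on non-metadata-server sessions;
    all other operations pass through unaffected."""
    if op_name in CONTROL_OPS and not (
            session_client_flags & EXCHGID4_FLAG_USE_PNFS_MDS):
        return NFS4ERR_PERM     # not a metadata-server session
    return NFS4_OK

assert gate_control_op("TRUST_STATEID", EXCHGID4_FLAG_USE_PNFS_MDS) == NFS4_OK
assert gate_control_op("TRUST_STATEID", 0) == NFS4ERR_PERM
]]></sourcecode>
          </figure>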
          <t>Because the EXCHGID4_FLAG_USE_PNFS_MDS check relies on the owning
client's self-declaration at EXCHANGE_ID time, the storage device
cannot by itself distinguish a legitimate metadata server from any
other host that sets the flag.  Deployments are therefore
responsible for constraining who can establish a control session
in the first place.  Two mechanisms are <bcp14>RECOMMENDED</bcp14>:</t>
          <ol spacing="normal" type="1"><li>
              <t>The control session <bcp14>SHOULD</bcp14> use RPCSEC_GSS with a machine
principal that the storage device has been configured to
accept as a metadata server.  The storage device validates
the principal before accepting EXCHANGE_ID with
EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>Alternatively, the control session <bcp14>SHOULD</bcp14> run over a
network path isolated from pNFS clients (for example, a
dedicated management VLAN or mutual TLS (<xref target="RFC9289"/>) with
an allowlisted client certificate), such that only
configured metadata servers can reach the storage device on
that path.</t>
            </li>
          </ol>
          <t>Deploying neither mechanism reduces the authorization strength
of TRUST_STATEID and the revocation operations to "any host
that can reach the storage device can invoke them"; a strict
deployment <bcp14>MUST</bcp14> apply at least one of the above.</t>
        </section>
        <section anchor="sec-tight-coupling-layoutget">
          <name>Flow at LAYOUTGET</name>
          <t>For each new or refreshed layout segment, the metadata server:</t>
          <ol spacing="normal" type="1"><li>
              <t>chooses the layout stateid (as it would without tight coupling);</t>
            </li>
            <li>
              <t>identifies the tight-coupling-capable storage devices in the
mirror set (those for which ffdv_tightly_coupled is true);</t>
            </li>
            <li>
              <t>fans out TRUST_STATEID to each such storage device,
specifying the layout stateid, the layout iomode, a
tsa_expire derived from the metadata server's lease (see
<xref target="sec-tight-coupling-lease"/>), and the client's authenticated
identity in tsa_principal;</t>
            </li>
            <li>
              <t>waits for all fan-outs to complete (or reach their
per-storage-device timeout) before returning the layout.</t>
            </li>
          </ol>
          <t>If every storage device in the mirror set rejects the TRUST_STATEID
fan-out, the metadata server <bcp14>MUST NOT</bcp14> return the layout; instead it
returns NFS4ERR_LAYOUTTRYLATER.  If some storage devices accept and
others reject, the metadata server <bcp14>MAY</bcp14> return a layout covering
only the accepting storage devices, subject to the mirror-set rules
for minimum acceptable coverage.  A storage device that returns
NFS4ERR_DELAY is retried until either success or the metadata
server's LAYOUTGET-response budget is exhausted.  If a storage
device returns NFS4ERR_NOTSUPP at this time (having accepted the
probe earlier), the metadata server <bcp14>MUST</bcp14> clear
ffdv_tightly_coupled for this storage device, fall back to loose
coupling, and re-issue the layout accordingly.</t>
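          <t>The mirror-set decision above can be sketched as follows.  This is a
non-normative Python illustration; the function name is invented, and the
minimum-coverage rules are elided:</t>
          <figure anchor="fig-ex-fanout">
            <name>Sketch: TRUST_STATEID fan-out decision (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the TRUST_STATEID fan-out decision at
# LAYOUTGET.  Each entry in `results` is the status of one storage
# device's TRUST_STATEID reply after any NFS4ERR_DELAY retries.
NFS4_OK, NFS4ERR_NOTSUPP = 0, 10004

def layout_for_mirror_set(results):
    """Return the storage devices a layout may cover, or None to
    signal NFS4ERR_LAYOUTTRYLATER.  `results` maps device -> status."""
    accepted = [ds for ds, st in results.items() if st == NFS4_OK]
    if not accepted:
        return None          # every device rejected: try later
    return accepted          # may cover accepting devices only

assert layout_for_mirror_set({"ds1": NFS4_OK,
                              "ds2": NFS4ERR_NOTSUPP}) == ["ds1"]
assert layout_for_mirror_set({"ds1": NFS4ERR_NOTSUPP}) is None
]]></sourcecode>
          </figure>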
        </section>
        <section anchor="sec-tight-coupling-principal">
          <name>Principal Binding and the Kerberos Gap</name>
          <t>Flex files v1 has a known gap: a client authenticated to the
metadata server with Kerberos has no way to present the same
authenticated identity to the storage device, because v1 layouts
carry only ffds_user / ffds_group (POSIX uid/gid for AUTH_SYS).  A
strict Kerberos deployment on v1 must either allow AUTH_SYS from
the metadata server's subnet or accept that the v1 data path is
not Kerberos-protected.</t>
          <t>The tsa_principal field in TRUST_STATEID closes that gap.  When a
client authenticates to the metadata server as a Kerberos
principal (e.g., alice@REALM), the metadata server passes that
principal name to each storage device in tsa_principal.  The
storage device then enforces a two-part check on each CHUNK
operation that presents the layout stateid:</t>
          <ol spacing="normal" type="a"><li>
              <t>the stateid is in the trust table and has not expired; and</t>
            </li>
            <li>
              <t>the caller's authenticated identity (the RPCSEC_GSS display
name on the CHUNK compound) matches tsa_principal.</t>
            </li>
          </ol>
          <t>Both conditions <bcp14>MUST</bcp14> hold.  On principal mismatch the storage
device <bcp14>MUST</bcp14> return NFS4ERR_ACCESS -- the semantics are "you do
not have an authorized layout for this file", which matches the
existing fencing error and avoids the confusion of
NFS4ERR_WRONGSEC (which directs the client to re-authenticate
with a different flavor) or NFS4ERR_BAD_STATEID (which directs
the client to return the layout).</t>
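          <t>The two-part check can be sketched as follows.  This is a
non-normative Python illustration; the trust-table representation and the
function name are invented for the example:</t>
          <figure anchor="fig-ex-chunk-check">
            <name>Sketch: two-part CHUNK check (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the storage device's two-part check on a
# CHUNK operation presenting a layout stateid.  `trust_table` maps a
# stateid to its trust entry; an empty principal disables check (b).
import time
NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BAD_STATEID = 0, 13, 10025

def check_chunk(trust_table, stateid, caller_principal, now=None):
    now = time.time() if now is None else now
    entry = trust_table.get(stateid)
    if entry is None or entry["expire"] <= now:
        return NFS4ERR_BAD_STATEID   # (a) not trusted or expired
    principal = entry["principal"]
    if principal and principal != caller_principal:
        return NFS4ERR_ACCESS        # (b) principal mismatch
    return NFS4_OK                   # both conditions hold

table = {"s1": {"expire": 10_000, "principal": "alice@REALM"}}
assert check_chunk(table, "s1", "alice@REALM", now=0) == NFS4_OK
assert check_chunk(table, "s1", "mallory@REALM", now=0) == NFS4ERR_ACCESS
assert check_chunk(table, "s2", "alice@REALM", now=0) == NFS4ERR_BAD_STATEID
]]></sourcecode>
          </figure>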
          <t>The metadata server <bcp14>MUST</bcp14> populate tsa_principal with the
RPCSEC_GSS display name of the authenticated client when the
client authenticated to the metadata server via RPCSEC_GSS.  The
metadata server <bcp14>MUST</bcp14> set tsa_principal to the empty string only
for AUTH_SYS and TLS clients (for which there is no
server-verified per-user identity).  Setting tsa_principal to the empty
string for an RPCSEC_GSS client disables the principal check on
the storage device and silently re-opens the flex files v1
Kerberos gap; it is a metadata server bug, not a protocol option.</t>
          <t>If tsa_principal is the empty string, no principal check applies.
This is the expected setting for AUTH_SYS and TLS clients:</t>
          <ul spacing="normal">
            <li>
              <t>AUTH_SYS clients have no server-verified identity.  The
storage device's stateid check and the AUTH_SYS uid/gid on the
data file together constitute the authorization.  In a tightly
coupled deployment the data file's owner/group need not match
the metadata file's, since ffv2ds_user and ffv2ds_group are
ignored (see <xref target="sec-ffv2-mirror4"/>).</t>
            </li>
            <li>
              <t>TLS clients have transport-layer authentication via mutual TLS
(<xref target="RFC9289"/>).  The TLS layer authenticates the client machine;
the stateid check confirms the metadata server authorized that
machine to access this file.  The machine-level authentication
is handled beneath the RPC layer and is not reflected in
tsa_principal.  Opportunistic TLS (STARTTLS without certificate
verification) provides encryption but not authentication, and
therefore has the same authorization properties as plain
AUTH_SYS.</t>
            </li>
          </ul>
        </section>
        <section anchor="sec-tight-coupling-trust-gap">
          <name>Client-Detected Trust Gap</name>
          <t>A window exists between a successful TRUST_STATEID fan-out and
the client's first I/O to the storage device.  A transient failure
may cause the storage device to forget or reject the entry before
the client's first CHUNK_WRITE arrives.  The client cannot
distinguish this case from legitimate revocation; both surface as
NFS4ERR_BAD_STATEID on the storage device.</t>
          <t>The recovery path:</t>
          <ol spacing="normal" type="1"><li>
              <t>The client sends LAYOUTERROR(layout_stateid, device_id,
NFS4ERR_BAD_STATEID) to the metadata server.</t>
            </li>
            <li>
              <t>The metadata server retries TRUST_STATEID against the
reporting storage device.  If the retry succeeds, the
metadata server returns NFS4_OK for LAYOUTERROR.  The client
retries the original I/O.</t>
            </li>
            <li>
              <t>If the retry fails -- the storage device is unreachable or
returns a hard error -- the metadata server issues
CB_LAYOUTRECALL for that device and the client returns the
layout segment covering that storage device.  The client is
expected to re-request via LAYOUTGET.</t>
            </li>
          </ol>
          <t>This is the same LAYOUTERROR path used for NFS4ERR_ACCESS or
NFS4ERR_PERM in the fencing model (see <xref target="sec-Fencing-Clients"/>),
with the metadata server's action being "retry TRUST_STATEID"
instead of "rotate uid/gid".</t>
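          <t>The metadata server's side of this recovery path can be sketched as
follows.  This is a non-normative Python illustration; the callback names
are invented for the example:</t>
          <figure anchor="fig-ex-layouterror">
            <name>Sketch: LAYOUTERROR handling for the trust gap (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the metadata server's LAYOUTERROR handler
# for the trust gap.  `retry_trust` re-issues TRUST_STATEID to the
# reporting storage device and returns True on success; `recall_layout`
# issues CB_LAYOUTRECALL for that device.
def handle_layouterror_bad_stateid(retry_trust, recall_layout, device):
    if retry_trust(device):
        return "NFS4_OK"        # client retries the original I/O
    recall_layout(device)       # device unreachable or hard error
    return "recalled"           # client re-requests via LAYOUTGET

assert handle_layouterror_bad_stateid(
    lambda d: True, lambda d: None, "ds1") == "NFS4_OK"
assert handle_layouterror_bad_stateid(
    lambda d: False, lambda d: None, "ds1") == "recalled"
]]></sourcecode>
          </figure>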
        </section>
        <section anchor="sec-tight-coupling-lease">
          <name>Lease and Renewal</name>
          <t>tsa_expire in a TRUST_STATEID request is a wall-clock expiry
instant expressed as an nfstime4.  The metadata server <bcp14>MUST</bcp14> set
tsa_expire to the current wall-clock time plus the metadata
server's client lease period.</t>
          <t>The metadata server <bcp14>MUST</bcp14> re-issue TRUST_STATEID for an entry
before tsa_expire while the corresponding layout is outstanding.
The <bcp14>RECOMMENDED</bcp14> trigger is: when an entry is within half the
lease period of its tsa_expire, re-issue TRUST_STATEID with a
refreshed tsa_expire.  Renewing on every SEQUENCE that keeps the
layout stateid alive is correct but produces
metadata-server-to-storage-device traffic proportional to the
client's SEQUENCE rate, which is undesirable in steady state.</t>
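          <t>The <bcp14>RECOMMENDED</bcp14> trigger can be sketched as follows.  This is a
non-normative Python illustration; times are in seconds and the function
name is invented for the example:</t>
          <figure anchor="fig-ex-renewal">
            <name>Sketch: half-lease renewal trigger (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the renewal trigger: re-issue TRUST_STATEID
# once an entry is within half the lease period of its tsa_expire,
# rather than on every client SEQUENCE.
def needs_renewal(now, tsa_expire, lease_period):
    return now >= tsa_expire - lease_period / 2

LEASE = 90                     # seconds, metadata server lease period
expire = 1000
assert not needs_renewal(900, expire, LEASE)  # 100 s left, > 45 s
assert needs_renewal(960, expire, LEASE)      # 40 s left, <= 45 s
]]></sourcecode>
          </figure>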
          <t>If an entry expires on the storage device before the metadata
server renews it -- for example, because the metadata server is
partitioned from the storage device for longer than the lease
period -- the storage device <bcp14>MUST</bcp14> return NFS4ERR_BAD_STATEID to
the client on the next CHUNK operation.  The client returns the
layout to the metadata server and re-requests.  This is the same
recovery path as the trust gap described above.</t>
        </section>
        <section anchor="sec-tight-coupling-ds-crash">
          <name>Storage Device Crash Recovery</name>
          <t>The trust table is volatile.  The storage device <bcp14>MUST NOT</bcp14> persist
trust entries across restarts; a storage device restart therefore
empties the trust table.</t>
          <t>The client detects a storage device restart via NFS4ERR_BADSESSION
or NFS4ERR_STALE_CLIENTID on its data server session.  The client
returns the affected layout segment to the metadata server via
LAYOUTRETURN and re-requests via LAYOUTGET.  The metadata server
then fans out fresh TRUST_STATEID operations to the recovered
storage device.</t>
          <t>Before a planned restart (software upgrade, etc.), a storage
device <bcp14>SHOULD</bcp14> drain in-flight CHUNK operations before shutting down.</t>
        </section>
        <section anchor="sec-tight-coupling-mds-crash">
          <name>Metadata Server Crash Recovery</name>
          <t>When the metadata server restarts, its control sessions to the
storage devices are lost.  Trust entries remain on the storage
devices until tsa_expire, but the metadata server is no longer
renewing them; the entries are effectively orphaned until the
metadata server completes grace.</t>
          <t>When the metadata server reconnects to a storage device with a
new boot epoch -- that is, when the metadata server's EXCHANGE_ID
presents a new boot verifier in its client owner to the storage
device -- the storage device <bcp14>SHOULD</bcp14> mark all trust entries
established under the prior metadata-server epoch as
pending-revalidation.  While an entry is pending-revalidation:</t>
          <ul spacing="normal">
            <li>
              <t>I/O that presents the entry's stateid <bcp14>MUST</bcp14> receive
NFS4ERR_DELAY, not NFS4ERR_BAD_STATEID.  NFS4ERR_DELAY tells
the client to retry with the same stateid -- the metadata
server is recovering and may yet revalidate the entry.
NFS4ERR_BAD_STATEID would instead cause the client to return
the layout immediately, producing a thundering herd against
the metadata server during grace.</t>
            </li>
            <li>
              <t>An entry remains pending-revalidation until the metadata
server either re-issues TRUST_STATEID for it (which transitions
it back to trusted) or until the entry's tsa_expire elapses
(which removes it).</t>
            </li>
          </ul>
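          <t>The trust-entry states across a metadata-server restart can be
sketched as follows.  This is a non-normative Python illustration; the state
names and functions are invented for the example:</t>
          <figure anchor="fig-ex-pending">
            <name>Sketch: pending-revalidation states (non-normative)</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the trust-entry states on a storage device
# across a metadata-server restart.  I/O against a pending entry gets
# NFS4ERR_DELAY so the client retries instead of returning the layout.
NFS4_OK, NFS4ERR_DELAY, NFS4ERR_BAD_STATEID = 0, 10008, 10025

def io_status(entry_state):
    return {"trusted": NFS4_OK,
            "pending": NFS4ERR_DELAY,        # MDS may yet revalidate
            "expired": NFS4ERR_BAD_STATEID,  # entry removed
            }[entry_state]

def on_mds_epoch_change(entries):
    """Mark every entry from the prior metadata-server epoch."""
    return {sid: "pending" for sid in entries}

entries = on_mds_epoch_change({"s1": "trusted", "s2": "trusted"})
assert io_status(entries["s1"]) == NFS4ERR_DELAY
entries["s1"] = "trusted"          # TRUST_STATEID re-issued for s1
assert io_status(entries["s1"]) == NFS4_OK
]]></sourcecode>
          </figure>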
          <t>The metadata server's recovery sequence is:</t>
          <ol spacing="normal" type="1"><li>
              <t>Reconnect to each storage device and establish a fresh
control session.</t>
            </li>
            <li>
              <t>Optionally issue BULK_REVOKE_STATEID with an all-zeros
clientid to each storage device.  This clears the prior trust
table eagerly; skipping this step is correct, because orphan
entries expire via tsa_expire.</t>
            </li>
            <li>
              <t>Enter grace and accept RECLAIM operations from clients.  For
each reclaimed layout, fan out TRUST_STATEID to the relevant
storage devices.</t>
            </li>
            <li>
              <t>Exit grace.  Clients that did not reclaim in time have their
state revoked; the metadata server issues REVOKE_STATEID or
BULK_REVOKE_STATEID on their behalf.</t>
            </li>
          </ol>
          <t>Metadata servers <bcp14>SHOULD</bcp14> persist the set of outstanding
TRUST_STATEID entries (clientid, layout stateid, storage device
address, tsa_expire) to stable storage.  With this persistence
the metadata server can re-issue TRUST_STATEID for all known
entries immediately upon reconnecting to each storage device,
before clients begin reclaiming.  This shrinks the window during
which the storage device returns NFS4ERR_DELAY for client I/O.
Persistence is a latency optimization, not a correctness
requirement: the re-layout path handles recovery in all cases.</t>
        </section>
        <section anchor="sec-tight-coupling-compat">
          <name>Backward Compatibility</name>
          <ul spacing="normal">
            <li>
              <t>NFSv3 storage devices are unchanged.  They are always treated
as loosely coupled; TRUST_STATEID does not exist on NFSv3
servers.</t>
            </li>
            <li>
              <t>NFSv4.2 storage devices for which the TRUST_STATEID probe
returns NFS4ERR_NOTSUPP are treated as loosely coupled;
fencing is the only revocation mechanism, the same as for
NFSv3.</t>
            </li>
            <li>
              <t>NFSv4.2 storage devices for which the probe returns
NFS4ERR_INVAL support tight coupling; the metadata server uses
TRUST_STATEID at LAYOUTGET and REVOKE_STATEID or
BULK_REVOKE_STATEID for revocation instead of fencing.</t>
            </li>
          </ul>
          <t>A single deployment <bcp14>MAY</bcp14> contain a mix of tight-coupled and
loose-coupled storage devices; each is negotiated independently
via the probe.</t>
        </section>
      </section>
    </section>
    <section anchor="device-addressing-and-discovery">
      <name>Device Addressing and Discovery</name>
      <t>Data operations to a storage device require the client to know the
network address of the storage device.  The NFSv4.1+ GETDEVICEINFO
operation (Section 18.40 of <xref target="RFC8881"/>) is used by the client to
retrieve that information.</t>
      <section anchor="sec-ff_device_addr4">
        <name>ff_device_addr4</name>
        <t>The ff_device_addr4 data structure (see <xref target="fig-ff_device_addr4"/>)
is returned by the server as the layout-type-specific opaque field
da_addr_body in the device_addr4 structure by a successful GETDEVICEINFO
operation.</t>
        <t>The ff_device_versions4 and ff_device_addr4 structures are
reused unchanged from <xref target="RFC8435"/>; they are reproduced here for
reader convenience and are not part of the XDR extracted from
this document.</t>
        <figure anchor="fig-ff_device_versions4">
          <name>ff_device_versions4 (reused from RFC 8435)</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_versions4 {
           uint32_t        ffdv_version;
           uint32_t        ffdv_minorversion;
           uint32_t        ffdv_rsize;
           uint32_t        ffdv_wsize;
           bool            ffdv_tightly_coupled;
   };
]]></sourcecode>
        </figure>
        <figure anchor="fig-ff_device_addr4">
          <name>ff_device_addr4 (reused from RFC 8435)</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_addr4 {
           multipath_list4     ffda_netaddrs;
           ff_device_versions4 ffda_versions<>;
   };
]]></sourcecode>
        </figure>
        <t>The ffda_netaddrs field is used to locate the storage device.  It
<bcp14>MUST</bcp14> be set by the server to a list holding one or more of the device
network addresses.</t>
        <t>The ffda_versions array allows the metadata server to present choices
as to NFS version, minor version, and coupling strength to the
client.  The ffdv_version and ffdv_minorversion represent the NFS
protocol to be used to access the storage device.  This layout
specification defines the semantics for ffdv_version values 3 and 4.  If
ffdv_version equals 3, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to
0 and ffdv_tightly_coupled to false.  The client <bcp14>MUST</bcp14> then access the
storage device using the NFSv3 protocol <xref target="RFC1813"/>.  If ffdv_version
equals 4, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to one of the
NFSv4 minor version numbers, and the client <bcp14>MUST</bcp14> access the storage
device using NFSv4 with the specified minor version.</t>
        <t>Note that while the client might determine that it cannot use any of
the configured combinations of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled when it gets the device list from the metadata
server, there is no way for it to indicate to the metadata server
which device it is version incompatible with.  However, if the client
waits until it retrieves the layout from the metadata server, it can
at that time clearly identify the storage device in question (see
<xref target="sec-version-errors"/>).</t>
        <t>The ffdv_rsize and ffdv_wsize are used to communicate the maximum
rsize and wsize supported by the storage device.  As the storage
device can have a different rsize or wsize than the metadata server,
the ffdv_rsize and ffdv_wsize allow the metadata server to
communicate that information on behalf of the storage device.</t>
        <t>ffdv_tightly_coupled informs the client as to whether the
metadata server is tightly coupled with this storage device.  Note
that even if the data protocol is at least NFSv4.1, it may still
be the case that there is loose coupling in effect.  For an NFSv4.2
storage device, the metadata server sets ffdv_tightly_coupled to
true only after confirming the storage device implements the
TRUST_STATEID control protocol via the capability probe described
in <xref target="sec-tight-coupling-probe"/>.  An NFSv4.2 storage device that
does not implement TRUST_STATEID (returning NFS4ERR_NOTSUPP to the
probe) <bcp14>MUST</bcp14> be advertised with ffdv_tightly_coupled set to false.</t>
        <t>If ffdv_tightly_coupled is not set, then the client <bcp14>MUST</bcp14> commit
writes to the storage devices for the file before sending a
LAYOUTCOMMIT to the metadata server.  That is, the writes <bcp14>MUST</bcp14> be
committed by the client to stable storage via issuing WRITEs with
stable_how == FILE_SYNC or by issuing a COMMIT after WRITEs with
stable_how != FILE_SYNC (see Section 3.3.7 of <xref target="RFC1813"/>).</t>
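        <t>The commit obligation above can be condensed into a small decision
rule.  The following is a sketch, assuming the stable_how numeric
values of RFC 1813, and simplifying the tightly coupled case to "no
extra COMMIT required before LAYOUTCOMMIT".</t>

```python
# stable_how values as defined in RFC 1813 / RFC 8881.
UNSTABLE, DATA_SYNC, FILE_SYNC = 0, 1, 2

def commit_needed_before_layoutcommit(tightly_coupled, stable_how):
    """True when the client must issue COMMIT to the storage device
    before sending LAYOUTCOMMIT to the metadata server."""
    if tightly_coupled:
        return False                   # simplification for this sketch
    return stable_how != FILE_SYNC     # FILE_SYNC writes are already stable
```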
      </section>
      <section anchor="storage-device-multipathing">
        <name>Storage Device Multipathing</name>
        <t>The flexible file layout type supports multipathing to multiple
storage device addresses.  Storage-device-level multipathing is used
for bandwidth scaling via trunking and for higher availability of use
in the event of a storage device failure.  Multipathing allows the
client to switch to another storage device address that may be that
of another storage device that is exporting the same data stripe
unit, without having to contact the metadata server for a new layout.</t>
        <t>To support storage device multipathing, ffda_netaddrs contains an
array of one or more storage device network addresses.  This array
(data type multipath_list4) represents a list of storage devices
(each identified by a network address), with the possibility that
some storage device will appear in the list multiple times.</t>
        <t>The client is free to use any of the network addresses as a
destination to send storage device requests.  If some network
addresses are less desirable paths to the data than others, then the
metadata server <bcp14>SHOULD NOT</bcp14> include those network addresses in
ffda_netaddrs.  If less desirable network addresses exist to provide
failover, the <bcp14>RECOMMENDED</bcp14> method to offer the addresses is to provide
them in a replacement device-ID-to-device-address mapping or a
replacement device ID.  When a client finds no response from the
storage device using all addresses available in ffda_netaddrs, it
<bcp14>SHOULD</bcp14> send a GETDEVICEINFO to attempt to replace the existing
device-ID-to-device-address mappings.  If the metadata server detects
that all network paths represented by ffda_netaddrs are unavailable,
the metadata server <bcp14>SHOULD</bcp14> send a CB_NOTIFY_DEVICEID (if the client
has indicated it wants device ID notifications for changed device
IDs) to change the device-ID-to-device-address mappings to the
available addresses.  If the device ID itself will be replaced, the
metadata server <bcp14>SHOULD</bcp14> recall all layouts with the device ID and thus
force the client to get new layouts and device ID mappings via
LAYOUTGET and GETDEVICEINFO.</t>
        <t>Generally, if two network addresses appear in ffda_netaddrs, they
will designate the same storage device.  When the storage device is
accessed over NFSv4.1 or a higher minor version, the two storage
device addresses will support the implementation of client ID or
session trunking (the latter is <bcp14>RECOMMENDED</bcp14>) as defined in <xref target="RFC8881"/>.
The two storage device addresses will share the same server owner or
major ID of the server owner.  It is not always necessary for the two
storage device addresses to designate the same storage device with
trunking being used.  For example, the data could be read-only, and
the data consist of exact replicas.</t>
      </section>
    </section>
    <section anchor="flexible-file-version-2-layout-type">
      <name>Flexible File Version 2 Layout Type</name>
      <t>The original layouttype4 introduced in <xref target="RFC5662"/> is extended as shown in
<xref target="fig-orig-layout"/>.  The layout_content4 and layout4 structures are
reused unchanged from <xref target="RFC5662"/>; the layouttype4 enum is extended
with the new LAYOUT4_FLEX_FILES_V2 value.  The full enum and
surrounding structures below are reproduced for reader
convenience; only the new constant LAYOUT4_FLEX_FILES_V2 is part
of the XDR extracted from this document (see
<xref target="fig-orig-layout-extract"/>).</t>
      <figure anchor="fig-orig-layout">
        <name>The original layout type (illustrative; reused from RFC 5662 with extension)</name>
        <sourcecode type="xdr"><![CDATA[
       enum layouttype4 {
           LAYOUT4_NFSV4_1_FILES   = 1,
           LAYOUT4_OSD2_OBJECTS    = 2,
           LAYOUT4_BLOCK_VOLUME    = 3,
           LAYOUT4_FLEX_FILES      = 4,
           LAYOUT4_FLEX_FILES_V2   = 5
       };

       struct layout_content4 {
           layouttype4             loc_type;
           opaque                  loc_body<>;
       };

       struct layout4 {
           offset4                 lo_offset;
           length4                 lo_length;
           layoutiomode4           lo_iomode;
           layout_content4         lo_content;
       };
]]></sourcecode>
      </figure>
      <t>The extracted XDR contribution for this extension is the new
layouttype4 constant alone:</t>
      <figure anchor="fig-orig-layout-extract">
        <name>New layouttype4 value (extracted)</name>
        <sourcecode type="xdr"><![CDATA[
   /// const LAYOUT4_FLEX_FILES_V2 = 5;
]]></sourcecode>
      </figure>
      <t>This document defines structures associated with the layouttype4
value LAYOUT4_FLEX_FILES_V2.  <xref target="RFC8881"/> specifies the loc_body structure
as an XDR type "opaque".  The opaque layout is uninterpreted by the
generic pNFS client layers but is interpreted by the flexible file
layout type implementation.  This section defines the structure of
this otherwise opaque value, ffv2_layout4.</t>
      <section anchor="ffv2codingtype4">
        <name>ffv2_coding_type4</name>
        <figure anchor="fig-ffv2_coding_type4">
          <name>The coding type</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_coding_type4 {
   ///     FFV2_CODING_MIRRORED                  = 1,
   ///     FFV2_ENCODING_MOJETTE_SYSTEMATIC      = 2,
   ///     FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC  = 3,
   ///     FFV2_ENCODING_RS_VANDERMONDE          = 4
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type4 (see <xref target="fig-ffv2_coding_type4"/>) establishes
a new IANA registry, the 'Flexible Files Version 2 Erasure Coding
Type Registry'.  That is, instead of defining a new layout type for
each erasure code, this document defines a new erasure coding type.
Except for FFV2_CODING_MIRRORED, each of the types is expected to
employ the new operations in this document.</t>
        <t>The 32-bit ffv2_coding_type4 value space is partitioned by
intended scope -- Standards Track, Experimental, Vendor (open),
and Private / proprietary -- with different allocation policies
per range, so that vendors can assign codec values without
consuming standards-track codepoints.  See
<xref target="tbl-coding-ranges"/> and the accompanying prose in
<xref target="iana-considerations"/> for the range assignments and allocation
policies.</t>
        <t>FFV2_CODING_MIRRORED provides replication of the data rather than
coded integrity protection.  As such, it does not need operations
like CHUNK_WRITE (see <xref target="sec-CHUNK_WRITE"/>).</t>
        <section anchor="encoding-type-interoperability">
          <name>Encoding Type Interoperability</name>
          <t>The data servers do not interpret erasure-coded data -- they store and
return opaque chunks.  The NFS wire protocol likewise does not depend
on the encoding mathematics.  However, a client that writes data using
one encoding type <bcp14>MUST</bcp14> be able to read it back, and a different
client implementation <bcp14>MUST</bcp14> be able to read data written by the first
client if both claim to support the same encoding type.</t>
          <t>This interoperability requirement means that each registered
encoding type <bcp14>MUST</bcp14> fully specify the encoding and decoding
mathematics such that two independent implementations produce
byte-identical encoded output for the same input.  The specification
of a new encoding type <bcp14>MUST</bcp14> include one of the following:</t>
          <ol spacing="normal" type="1"><li>
              <t>A complete mathematical specification of the encoding and decoding
algorithms, including all parameters (e.g., field polynomial,
matrix construction, element size) sufficient for an independent
implementation to produce interoperable results.</t>
            </li>
            <li>
              <t>A reference to a published patent or pending patent application
that contains the algorithm specification.  Implementors can then
evaluate the licensing terms and decide whether to support the
encoding type.</t>
            </li>
            <li>
              <t>A declaration that the encoding type is a proprietary
implementation.  In this case, the encoding type name <bcp14>SHOULD</bcp14>
include an organizational prefix (e.g.,
FFV2_ENCODING_ACME_FOOBAR) to signal that interoperability is
limited to implementations licensed by that organization.</t>
            </li>
          </ol>
          <t>Option 1 is <bcp14>RECOMMENDED</bcp14> for encoding types intended for broad
interoperability.  Options 2 and 3 allow vendors to register encoding
types for use within their own ecosystems while preserving the
encoding type namespace.</t>
          <t>The rationale for this requirement is that erasure coding moves
computation from the server to the client.  If the client cannot
determine how data was encoded, it cannot decode it.  Unlike layout
types (where the server controls the storage format), encoding types
require client-side agreement on the mathematics.</t>
        </section>
      </section>
      <section anchor="sec-ffv2_layout">
        <name>ffv2_layout4</name>
        <section anchor="sec-ffv2_flags4">
          <name>ffv2_flags4</name>
          <figure anchor="fig-ffv2_flags4">
            <name>The ffv2_flags4</name>
            <sourcecode type="xdr"><![CDATA[
   /// const FFV2_FLAGS_NO_LAYOUTCOMMIT  = FF_FLAGS_NO_LAYOUTCOMMIT;
   /// const FFV2_FLAGS_NO_IO_THRU_MDS   = FF_FLAGS_NO_IO_THRU_MDS;
   /// const FFV2_FLAGS_NO_READ_IO       = FF_FLAGS_NO_READ_IO;
   /// const FFV2_FLAGS_WRITE_ONE_MIRROR =
   ///     FF_FLAGS_WRITE_ONE_MIRROR;
   /// const FFV2_FLAGS_ONLY_ONE_WRITER  = 0x00000010;
   ///
   /// typedef uint32_t            ffv2_flags4;
]]></sourcecode>
          </figure>
          <t>The ffv2_flags4 in <xref target="fig-ffv2_flags4"/>  is a bitmap that allows the
metadata server to inform the client of particular conditions that
may result from more or less tight coupling of the storage devices.</t>
          <t>Each flag below describes both the semantics when set and the
normative requirement it places on the client.  When a flag is
not set, the client <bcp14>MUST</bcp14> follow the default behavior described
for its unset state.</t>
          <dl>
            <dt>FFV2_FLAGS_NO_LAYOUTCOMMIT:</dt>
            <dd>
              <t>When set, the client <bcp14>MAY</bcp14> omit the LAYOUTCOMMIT to the
metadata server.  When unset, the client <bcp14>MUST</bcp14> send LAYOUTCOMMIT
per <xref target="RFC8881"/> Section 18.42.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_IO_THRU_MDS:</dt>
            <dd>
              <t>When set, the client <bcp14>MUST NOT</bcp14> proxy I/O operations through
the metadata server, even after detecting a network disconnect
to a storage device.  When unset, the client <bcp14>MAY</bcp14> retry failed
I/O via the metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_READ_IO:</dt>
            <dd>
              <t>When set, the client <bcp14>MUST NOT</bcp14> issue READ against layouts of
iomode LAYOUTIOMODE4_RW, and <bcp14>MUST</bcp14> instead request a separate
layout of iomode LAYOUTIOMODE4_READ for any read I/O.  When
unset, the client <bcp14>MAY</bcp14> issue READ against either iomode.</t>
            </dd>
            <dt>FFV2_FLAGS_WRITE_ONE_MIRROR:</dt>
            <dd>
              <t>When set, the client <bcp14>MAY</bcp14> update only one mirror of each
layout segment (see <xref target="sec-CSM"/>) and rely on the metadata
server or a peer data server to propagate the update to the
remaining mirrors.  When unset, the client <bcp14>MUST</bcp14> update all
mirrors.</t>
            </dd>
            <dt>FFV2_FLAGS_ONLY_ONE_WRITER:</dt>
            <dd>
              <t>When set, the client is the exclusive writer for the layout
and <bcp14>MAY</bcp14> issue CHUNK_WRITE without setting cwa_guard, retaining
the ability to use CHUNK_ROLLBACK in the event of a write hole
caused by overwriting.  When unset, the client <bcp14>MUST</bcp14> set
cwa_guard on every CHUNK_WRITE so that chunk_guard4 CAS can
prevent collisions across concurrent writers.</t>
            </dd>
          </dl>
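          <t>A client-side reading of these flags can be sketched as follows,
assuming the FF_FLAGS_* numeric values of RFC 8435 for the first
four constants; has_flag avoids bitwise operators purely for
presentation (each mask is a single bit, so integer division plus
parity is equivalent).</t>

```python
FFV2_FLAGS_NO_LAYOUTCOMMIT  = 0x00000001
FFV2_FLAGS_NO_IO_THRU_MDS   = 0x00000002
FFV2_FLAGS_NO_READ_IO       = 0x00000004
FFV2_FLAGS_WRITE_ONE_MIRROR = 0x00000008
FFV2_FLAGS_ONLY_ONE_WRITER  = 0x00000010

def has_flag(flags, mask):
    # mask is a single bit
    return (flags // mask) % 2 == 1

def must_set_cwa_guard(flags):
    # Exclusive writers may omit cwa_guard on CHUNK_WRITE;
    # all other clients must set it.
    return not has_flag(flags, FFV2_FLAGS_ONLY_ONE_WRITER)

def may_proxy_io_through_mds(flags):
    return not has_flag(flags, FFV2_FLAGS_NO_IO_THRU_MDS)
```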
        </section>
      </section>
      <section anchor="ffv2fileinfo4">
        <name>ffv2_file_info4</name>
        <figure anchor="fig-ffv2_file_info4">
          <name>The ffv2_file_info4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_file_info4 {
   ///     stateid4                fffi_stateid;
   ///     nfs_fh4                 fffi_fh_vers;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_file_info4 is a new structure that addresses the stateid
issue discussed in Section 5.1 of <xref target="RFC8435"/>.  In version 1
of the Flexible File Layout Type, the singleton ffds_stateid was
combined with the ffds_fh_vers array; i.e., a single stateid had to
cover every per-version filehandle.  In <xref target="fig-ffv2_file_info4"/>, each NFSv4
filehandle has a one-to-one correspondence to a stateid.</t>
      </section>
      <section anchor="sec-ffv2_ds_flags4">
        <name>ffv2_ds_flags4</name>
        <figure anchor="fig-ffv2_ds_flags4">
          <name>The ffv2_ds_flags4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const FFV2_DS_FLAGS_ACTIVE        = 0x00000001;
   /// const FFV2_DS_FLAGS_SPARE         = 0x00000002;
   /// const FFV2_DS_FLAGS_PARITY        = 0x00000004;
   /// const FFV2_DS_FLAGS_REPAIR        = 0x00000008;
   /// typedef uint32_t            ffv2_ds_flags4;
]]></sourcecode>
        </figure>
        <t>The ffv2_ds_flags4 (in <xref target="fig-ffv2_ds_flags4"/>) describes the
state of the data servers.  Erasure Coding algorithms take both
systematic and non-systematic approaches.  In a systematic code, the
bits for integrity are placed within the resulting transformed
chunks.  Such an implementation would typically see FFV2_DS_FLAGS_ACTIVE
and FFV2_DS_FLAGS_SPARE data servers.  The FFV2_DS_FLAGS_SPARE ones
allow the client to repair a payload without engaging the metadata
server.  I.e., if one of the FFV2_DS_FLAGS_ACTIVE data servers did
not respond to a CHUNK_WRITE, the client could fail the chunk over
to the FFV2_DS_FLAGS_SPARE data server.</t>
        <t>With the non-systematic approach, the data and the integrity
information live on different data servers.  Such an implementation
would typically see FFV2_DS_FLAGS_ACTIVE and FFV2_DS_FLAGS_PARITY
data servers.  If the implementation wanted to allow for local
repair, it would also use FFV2_DS_FLAGS_SPARE.</t>
        <t>The FFV2_DS_FLAGS_REPAIR flag informs the client that the
indicated data server is a replacement for a previously failed
ACTIVE data server, whose content has been (or is being)
reconstructed from the surviving shards of the mirror set.  A
REPAIR data server differs from a SPARE in two ways:</t>
        <ul spacing="normal">
          <li>
            <t>A SPARE is standing by with no payload; the client <bcp14>MAY</bcp14> fail
over to it at write time without metadata-server coordination.</t>
          </li>
          <li>
            <t>A REPAIR has been promoted by the metadata server to replace a
failed ACTIVE, and its payload was placed there by a repair
client executing the flow in <xref target="sec-repair-selection"/> rather
than directly by the original writer.  The flag is the
client's indication that reads from this data server return
erasure-decoded content rather than content produced by the
original write.</t>
          </li>
        </ul>
        <t>Clients that rely on write-provenance information (for example,
deployments that track which client wrote which generation)
<bcp14>SHOULD</bcp14> be aware of the REPAIR flag so they do not treat the
reconstructed payload as if it had been written directly by the
cg_client_id recorded in the chunk_guard4; the guard values
still match across the mirror set by construction, but the
physical write path differs.</t>
        <t>Over the lifetime of a file, a single data server <bcp14>MAY</bcp14> transition
ACTIVE -&gt; REPAIR (on replacement) or REPAIR -&gt; ACTIVE (once the
metadata server has accepted the reconstructed content as
authoritative and the fail-over is complete); the metadata
server reflects the current flag set in the next layout it
returns.</t>
      </section>
      <section anchor="ffv2dataserver4">
        <name>ffv2_data_server4</name>
        <figure anchor="fig-ffv2_data_server4">
          <name>The ffv2_data_server4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_server4 {
   ///     deviceid4               ffv2ds_deviceid;
   ///     uint32_t                ffv2ds_efficiency;
   ///     ffv2_file_info4         ffv2ds_file_info<>;
   ///     fattr4_owner            ffv2ds_user;
   ///     fattr4_owner_group      ffv2ds_group;
   ///     ffv2_ds_flags4          ffv2ds_flags;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_server4 (in <xref target="fig-ffv2_data_server4"/>) describes a data
file and how to access it via the different NFS protocols.</t>
      </section>
      <section anchor="ffv2codingtypedata4">
        <name>ffv2_coding_type_data4</name>
        <figure anchor="fig-ffv2_coding_type_data4">
          <name>The ffv2_coding_type_data4</name>
          <sourcecode type="xdr"><![CDATA[
   /// union ffv2_coding_type_data4 switch
   ///         (ffv2_coding_type4 fctd_coding) {
   ///     case FFV2_CODING_MIRRORED:
   ///         ffv2_data_protection4   fctd_protection;
   ///     default:
   ///         ffv2_data_protection4   fctd_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type_data4 (in <xref target="fig-ffv2_coding_type_data4"/>) describes
the data protection geometry for the layout.  All coding types carry an
ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>) specifying the
number of data and parity shards.  The coding type enum determines how
the shards are encoded; the protection structure determines how many
shards there are.</t>
        <t>Although the FFV2_CODING_MIRRORED case and the default case currently
carry the same type, the union form is intentional.  Future revisions
of this specification may assign distinct arm types to specific coding
types; using a union now avoids an incompatible change to the XDR at
that time.</t>
        <t>For FFV2_CODING_MIRRORED, fdp_data is 1 and fdp_parity is the number
of additional copies (e.g., fdp_parity=2 for 3-way mirroring).
Erasure coding types registered in companion documents (e.g.,
Reed-Solomon Vandermonde, Mojette systematic) use fdp_data &gt;= 2 and
fdp_parity &gt;= 1.</t>
        <figure anchor="fig-ffv2_stripes4">
          <name>The ffv2_striping and ffv2_stripes4</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_striping {
   ///     FFV2_STRIPING_NONE = 0,
   ///     FFV2_STRIPING_SPARSE = 1,
   ///     FFV2_STRIPING_DENSE = 2
   /// };
   ///
   /// struct ffv2_stripes4 {
   ///         ffv2_data_server4       ffs_data_servers<>;
   /// };
]]></sourcecode>
        </figure>
        <t>Each stripe contains a set of data servers in ffs_data_servers.
If the stripe is part of a ffv2_coding_type_data4 of
FFV2_CODING_MIRRORED, then the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.</t>
      </section>
      <section anchor="ffv2key4">
        <name>ffv2_key4</name>
        <figure anchor="fig-ffv2_key4">
          <name>The ffv2_key4</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t ffv2_key4;
]]></sourcecode>
        </figure>
        <t>The ffv2_key4 is an opaque 64-bit identifier used to associate a
mirror instance with its backing storage key.  The value is assigned
by the metadata server and is opaque to the client.</t>
      </section>
      <section anchor="sec-ffv2-mirror4">
        <name>ffv2_mirror4</name>
        <figure anchor="fig-ffv2_mirror4">
          <name>The ffv2_mirror4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_mirror4 {
   ///         ffv2_coding_type_data4  ffm_coding_type_data;
   ///         ffv2_key4               ffm_key;
   ///         ffv2_striping           ffm_striping;
   ///         uint32_t                ffm_striping_unit_size;
   ///         uint32_t                ffm_client_id;
   ///         ffv2_stripes4           ffm_stripes<>;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_mirror4 (in <xref target="fig-ffv2_mirror4"/>) describes the Flexible
File Layout Version 2 specific fields.</t>
        <t>The ffm_client_id is a 32-bit value, assigned by the metadata
server at layout-grant time, that the client <bcp14>MUST</bcp14> use as the
cg_client_id field of chunk_guard4 (see <xref target="sec-chunk_guard4"/>) in
every CHUNK_WRITE it issues against the mirror's data servers.
Its purpose is to satisfy the 32-bit-per-field budget of
chunk_guard4 while preserving the guarantee that concurrent
writers on the same file are distinguishable:</t>
        <ul spacing="normal">
          <li>
            <t>The NFSv4 clientid4 (<xref target="RFC8881"/>) is a 64-bit structured
value whose low 32 bits (a slot index) are not guaranteed
unique across clients that hold layouts on the same file.
Folding clientid4 to 32 bits locally at each client could
therefore collide with another client's folded value and
violate the uniqueness contract on chunk_guard4.</t>
          </li>
          <li>
            <t>Only the metadata server has the information needed to avoid
such collisions: it sees every layout it grants on a file and
can assign a dense 32-bit ffm_client_id that is guaranteed
distinct from the ffm_client_ids assigned to other clients
holding concurrent write layouts on the same file.  The
metadata server <bcp14>MUST</bcp14> assign ffm_client_id subject to this
uniqueness rule.</t>
          </li>
          <li>
            <t>Because cg_client_id participates in the deterministic
tiebreaker for racing writers (see <xref target="sec-chunk_guard4"/>),
having the metadata server assign it also lets the metadata
server influence which client wins contention by choosing
the numeric ordering of the values it hands out.  Specific
ordering policies are implementation-defined and out of
scope for this document, but the protocol mechanism is
present.</t>
          </li>
        </ul>
        <t>An ffm_client_id is scoped to the file and layout for which it
was granted.  A client that holds layouts on two different files
may receive two different ffm_client_ids from the same metadata
server, and a client that relinquishes and later re-acquires a
layout on a given file <bcp14>MAY</bcp14> be assigned a different ffm_client_id.
ffm_client_id does NOT survive a metadata server restart: the
metadata server reassigns values as clients reclaim layouts
during the grace period.</t>
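        <t>One way a metadata server might satisfy the per-file uniqueness
rule is sketched below; FileState and its methods are illustrative
names only, and the dense counter is one assignment policy among
many.</t>

```python
# Per-file assignment of ffm_client_id: each client holding a
# concurrent write layout on the file gets a distinct dense 32-bit
# value.  Values are not stable across layout release or metadata
# server restart.
class FileState:
    def __init__(self):
        self.by_client = {}    # clientid4 -> ffm_client_id
        self.next_id = 1

    def grant(self, clientid4):
        # Reuse the value while this client still holds a layout on
        # the file; otherwise hand out the next dense 32-bit value.
        if clientid4 not in self.by_client:
            self.by_client[clientid4] = self.next_id % 0x100000000
            self.next_id += 1
        return self.by_client[clientid4]

    def release(self, clientid4):
        # A later re-acquisition MAY observe a different value.
        self.by_client.pop(clientid4, None)
```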
        <t>The ffm_coding_type_data indicates which encoding type is used
by the mirror.</t>
        <t>The ffm_striping selects the striping method used by the
mirror.  The three permissible values are FFV2_STRIPING_NONE
(the mirror is not striped), FFV2_STRIPING_SPARSE (stripe units
are mapped to the same physical offset on every data server,
leaving holes), and FFV2_STRIPING_DENSE (stripe units are
packed contiguously on each data server without holes).  See
<xref target="sec-striping"/> for the mapping math for each option.</t>
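        <t>The two striped mappings can be sketched with the classic pNFS
file-layout arithmetic; the normative mapping math is given in the
striping section of this document, and this sketch only mirrors the
prose descriptions above.  For example, with a 64 KiB stripe unit
over three data servers, logical offset 196608 (stripe unit 3) lands
on data server 0 at offset 196608 in the sparse case but at offset
65536 in the dense case.</t>

```python
def map_offset(striping, offset, unit, stripe_count):
    """Map a logical file offset to (data_server_index, ds_offset)."""
    if striping == "none":
        return 0, offset
    su = offset // unit                  # stripe unit number
    ds = su % stripe_count               # round-robin data server
    if striping == "sparse":
        # Same physical offset on every data server, leaving holes.
        return ds, offset
    # Dense: stripe units packed contiguously on each data server.
    return ds, (su // stripe_count) * unit + offset % unit
```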
        <t>The ffm_striping_unit_size is the stripe unit size used
by the mirror.  The minimum stripe unit size is 64 bytes.  If
the value of ffm_striping is FFV2_STRIPING_NONE, then the value
of ffm_striping_unit_size <bcp14>MUST</bcp14> be 1.</t>
        <t>The ffm_stripes is the array of stripes for the mirror; the
length of the array is the stripe count.  If there is no
striping or the ffm_coding_type_data is FFV2_CODING_MIRRORED,
then the length of ffm_stripes <bcp14>MUST</bcp14> be 1.</t>
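        <t>The structural invariants stated in this section can be collected
into a single check.  The following is a sketch with illustrative
names, using per-stripe data-server counts in place of the full XDR
structures.</t>

```python
def validate_mirror(coding, striping, unit_size, stripe_ds_counts):
    """Check the MUST-level geometry rules for one ffv2_mirror4."""
    if striping == "none" and unit_size != 1:
        return False                 # NONE requires a unit size of 1
    if striping != "none" and 64 > unit_size:
        return False                 # minimum stripe unit is 64 bytes
    if coding == "mirrored" or striping == "none":
        if len(stripe_ds_counts) != 1:
            return False             # stripe count must be 1
    if coding == "mirrored":
        # Each stripe carries exactly one data server.
        return all(n == 1 for n in stripe_ds_counts)
    return True
```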
      </section>
      <section anchor="ffv2layout4">
        <name>ffv2_layout4</name>
        <figure anchor="fig-ffv2_layout4">
          <name>The ffv2_layout4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layout4 {
   ///     ffv2_mirror4            ffl_mirrors<>;
   ///     ffv2_flags4             ffl_flags;
   ///     uint32_t                ffl_stats_collect_hint;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layout4 (in <xref target="fig-ffv2_layout4"/>) describes the Flexible
File Layout Version 2.</t>
        <t>The ffl_mirrors field is the array of mirrored storage devices that
provide the storage for the current stripe; see <xref target="fig-parallel-filesystem"/>.</t>
        <t>The ffl_stats_collect_hint field provides a hint to the client on
how often the server wants it to report LAYOUTSTATS for a file.
The time is in seconds.</t>
        <figure anchor="fig-parallel-filesystem">
          <name>The Relationship between MDS and DSes</name>
          <artwork><![CDATA[
                +-----------+
                |           |
                |           |
                |   File    |
                |           |
                |           |
                +-----+-----+
                      |
     +-------------+-----+----------------+
     |                   |                |
+----+-----+       +-----+----+       +---+----------+
| Mirror 1 |       | Mirror 2 |       | Mirror 3     |
| MIRRORED |       | MIRRORED |       | REED_SOLOMON |
+----+-----+       +-----+----+       +---+----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
|+-----------+     | Stripe 1  |      |+-----------+
+| Stripe N  |     +-----------+      +| Stripe N  |
 +-----------+           |             +-----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
| Storage   |      | Storage   |      |+-----------+
| Device    |      | Device    |      ||+-----------+
+-----------+      +-----------+      +||  Storage  |
                                       +|  Devices  |
                                        +-----------+
]]></artwork>
        </figure>
        <t>As shown in <xref target="fig-parallel-filesystem"/>, if the ffm_coding_type_data
is FFV2_CODING_MIRRORED, then each of the stripes <bcp14>MUST</bcp14>
have only one storage device; i.e., the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.  The other encoding types can have any number of
storage devices.</t>
        <t>The abstraction here is that for FFV2_CODING_MIRRORED, each
stripe describes exactly one data server, while for all other
encoding types, each of the stripes describes a set of data servers
to which the chunks are distributed.  Further, the payload length
can differ per stripe.</t>
      </section>
      <section anchor="ffv2dataprotection4">
        <name>ffv2_data_protection4</name>
        <figure anchor="fig-ffv2_data_protection4">
          <name>The ffv2_data_protection4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_protection4 {
   ///     uint32_t fdp_data;    /* data shards (k) */
   ///     uint32_t fdp_parity;  /* parity/redundancy shards (m) */
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_protection4 (in <xref target="fig-ffv2_data_protection4"/>) describes
the data protection geometry as a pair of counts: the number of data
shards (fdp_data, also known as k) and the number of parity or
redundancy shards (fdp_parity, also known as m).  This structure is
used in both layout hints and layout responses, and applies
uniformly to all coding types:</t>
        <table anchor="fig-protection-examples">
          <name>Example data protection configurations</name>
          <thead>
            <tr>
              <th align="left">Protection Mode</th>
              <th align="left">fdp_data</th>
              <th align="left">fdp_parity</th>
              <th align="left">Total DSes</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Mirroring (3-way)</td>
              <td align="left">1</td>
              <td align="left">2</td>
              <td align="left">3</td>
              <td align="left">3 copies, no encoding</td>
            </tr>
            <tr>
              <td align="left">Striping (6-way)</td>
              <td align="left">6</td>
              <td align="left">0</td>
              <td align="left">6</td>
              <td align="left">Parallel I/O, no redundancy</td>
            </tr>
            <tr>
              <td align="left">RS Vandermonde 4+2</td>
              <td align="left">4</td>
              <td align="left">2</td>
              <td align="left">6</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
            <tr>
              <td align="left">Mojette-sys 8+2</td>
              <td align="left">8</td>
              <td align="left">2</td>
              <td align="left">10</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
          </tbody>
        </table>
        <t>By expressing all protection modes as (fdp_data, fdp_parity) pairs,
a single structure serves mirroring, striping, and all erasure
coding types.  The coding type (<xref target="fig-ffv2_coding_type4"/>) determines
<em>how</em> the shards are encoded; the protection structure determines
<em>how many</em> shards there are.</t>
        <t>The total number of data servers required is fdp_data + fdp_parity.
The storage overhead is fdp_parity / fdp_data (e.g., 50% for 4+2,
25% for 8+2).</t>
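        <t>The arithmetic above can be illustrated with a short fragment; the
function names are illustrative only and are not protocol elements:</t>
        <sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of the (fdp_data, fdp_parity) arithmetic above;
# the names are for exposition only and are not protocol elements.

def total_data_servers(fdp_data, fdp_parity):
    """Total number of DSes required by a protection geometry."""
    return fdp_data + fdp_parity

def storage_overhead(fdp_data, fdp_parity):
    """Storage overhead as a fraction of the user data size."""
    return fdp_parity / fdp_data

# RS Vandermonde 4+2: 6 DSes, 50% overhead
assert total_data_servers(4, 2) == 6
assert storage_overhead(4, 2) == 0.5

# Mojette-sys 8+2: 10 DSes, 25% overhead
assert total_data_servers(8, 2) == 10
assert storage_overhead(8, 2) == 0.25
```
]]></sourcecode>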
      </section>
      <section anchor="sec-ffv2-layouthint">
        <name>ffv2_layouthint4</name>
        <figure anchor="fig-ffv2_layouthint4">
          <name>The ffv2_layouthint4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layouthint4 {
   ///     ffv2_coding_type4       fflh_supported_types<>;
   ///     ffv2_data_protection4   fflh_preferred_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layouthint4 (in <xref target="fig-ffv2_layouthint4"/>) describes the
layout_hint (see Section 5.12.4 of <xref target="RFC8881"/>) that the client can
provide to the metadata server.</t>
        <t>The client provides two hints:</t>
        <dl>
          <dt>fflh_supported_types</dt>
          <dd>
            <t>An ordered list of coding types the client supports,
with the most preferred type first.  The server <bcp14>SHOULD</bcp14> select a type
from this list but <bcp14>MAY</bcp14> choose any type it supports.  If the server
does not support any of the listed types, it returns
NFS4ERR_CODING_NOT_SUPPORTED, and the client can retry
with a different list to discover the overlapping set.</t>
          </dd>
          <dt>fflh_preferred_protection</dt>
          <dd>
            <t>The client's preferred data protection geometry as a
(fdp_data, fdp_parity) pair.  The server <bcp14>SHOULD</bcp14> honor this hint but
<bcp14>MAY</bcp14> override it based on server-side policy.  A server that manages
data protection via administrative policy (e.g., per-directory or
per-export objectives) will typically ignore this hint and return the
geometry dictated by policy.</t>
          </dd>
        </dl>
        <t>For example, a client that prefers Mojette systematic with 8+2
protection would send:</t>
        <artwork><![CDATA[
fflh_supported_types = { FFV2_CODING_MIRRORED,
                         FFV2_ENCODING_MOJETTE_SYSTEMATIC,
                         FFV2_ENCODING_RS_VANDERMONDE }
fflh_preferred_protection = { fdp_data = 8, fdp_parity = 2 }
]]></artwork>
        <t>A server with a policy of RS 4+2 for this directory would ignore
both hints and return a layout with FFV2_ENCODING_RS_VANDERMONDE
and (fdp_data=4, fdp_parity=2).  A server without erasure coding
might return FFV2_CODING_MIRRORED with (fdp_data=1, fdp_parity=2)
for 3-way mirroring.</t>
        <section anchor="sec-codec-negotiation">
          <name>Codec Negotiation</name>
          <t>Because the coding-type registry is expected to grow over time
(new erasure codes are added, older ones fall out of favour,
vendors register private codes; see <xref target="iana-considerations"/>),
neither clients nor metadata servers are required to implement
every registered codec.  The protocol uses ffv2_layouthint4 as
the negotiation surface:</t>
          <dl>
            <dt>Client-side advertisement:</dt>
            <dd>
              <t>A client that wishes to influence codec selection <bcp14>SHOULD</bcp14>
send the set of codecs it actually implements in
fflh_supported_types.  A client <bcp14>MUST NOT</bcp14> claim support for
a codec it cannot encode or decode: a false advertisement
produces silent data unavailability when the resulting layout
is issued.</t>
            </dd>
            <dt>Metadata-server selection:</dt>
            <dd>
              <t>The metadata server <bcp14>SHOULD</bcp14> select a codec from the client's
fflh_supported_types list when the server's policy permits.
The server <bcp14>MAY</bcp14> override the hint when its policy dictates a
specific codec (for example, per-export objectives); in that
case the server issues a layout with the policy-dictated
codec and the client <bcp14>MUST</bcp14> either honour it or fail its I/O
with NFS4ERR_CODING_NOT_SUPPORTED.</t>
            </dd>
            <dt>Fallback when no overlap exists:</dt>
            <dd>
              <t>If the server's policy cannot be satisfied by any codec the
client supports, the server returns
NFS4ERR_CODING_NOT_SUPPORTED on the LAYOUTGET.  The client
<bcp14>MAY</bcp14> retry with a different (possibly empty)
fflh_supported_types list to learn the server's codec repertoire through
the errors returned, and <bcp14>MAY</bcp14> fall back to I/O via the
metadata server if no mutually-supported codec exists
(see <xref target="sec-Fencing-Clients"/> for the MDS-I/O fallback).</t>
            </dd>
            <dt>Runtime codec change:</dt>
            <dd>
              <t>If a metadata server changes its codec policy after layouts
have been issued (for example, a deployment upgrade that
retires an older codec), the metadata server <bcp14>MUST</bcp14> recall the
affected layouts via CB_LAYOUTRECALL and may re-issue new
layouts with the new codec.  Clients that do not support the
new codec return the layout via LAYOUTRETURN with
NFS4ERR_CODING_NOT_SUPPORTED, and the server either grants a
layout using a mutually supported codec or the client falls
back to I/O via the metadata server.</t>
            </dd>
          </dl>
          <t>This mechanism deliberately avoids a separate capability-bit
handshake at EXCHANGE_ID.  ffv2_layouthint4 already provides a
per-request negotiation surface; adding a session-level
capability set would duplicate it and would complicate codec
upgrades without adding value, because a client that
genuinely upgrades its codec set at runtime can simply update
fflh_supported_types on its next LAYOUTGET.</t>
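          <t>The selection rules above can be sketched as follows; the function
and constant names are illustrative only and do not correspond to any
normative interface:</t>
          <sourcecode type="python"><![CDATA[
```python
# Hypothetical sketch of metadata-server codec selection under the
# negotiation rules above; names are illustrative, not protocol.

NFS4ERR_CODING_NOT_SUPPORTED = "NFS4ERR_CODING_NOT_SUPPORTED"

def select_coding_type(client_supported, server_supported, policy_type=None):
    """Pick a coding type for a LAYOUTGET, or raise when no overlap exists.

    client_supported: fflh_supported_types, most preferred first.
    server_supported: the set of codecs the server implements.
    policy_type:      a codec dictated by server-side policy, if any.
    """
    if policy_type is not None:
        # Policy overrides the hint; the client either honours the
        # resulting layout or fails its I/O.
        return policy_type
    for coding_type in client_supported:
        if coding_type in server_supported:
            return coding_type
    # No mutually supported codec: the LAYOUTGET fails and the client
    # may fall back to I/O through the metadata server.
    raise ValueError(NFS4ERR_CODING_NOT_SUPPORTED)

assert select_coding_type(
    ["MOJETTE_SYSTEMATIC", "RS_VANDERMONDE"],
    {"RS_VANDERMONDE", "MIRRORED"}) == "RS_VANDERMONDE"
```
]]></sourcecode>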
          <t>Note: In <xref target="fig-ffv2_layout4"/>, ffv2_coding_type_data4 is an enumerated
union with the payload of each arm being defined by the protection
type.  ffm_client_id tells the client which id to use when interacting
with the data servers.</t>
          <t>The ffv2_layout4 structure (see <xref target="fig-ffv2_layout4"/>) specifies a layout
in that portion of the data file described in the current layout
segment.  It is either a single instance or a set of mirrored copies
of that portion of the data file.  When mirroring is in effect, it
protects against loss of data in layout segments.</t>
          <t>While not explicitly shown in <xref target="fig-ffv2_layout4"/>, each layout4
element returned in the logr_layout array of LAYOUTGET4res (see
Section 18.43.2 of <xref target="RFC8881"/>) describes a layout segment.  Hence,
each ffv2_layout4 also describes a layout segment.  It is possible
that the file is concatenated from more than one layout segment.
Each layout segment <bcp14>MAY</bcp14> represent different striping parameters.</t>
          <t>The ffm_striping_unit_size field (inside each ffv2_mirror4) is the
stripe unit size in use for that mirror.  The number of stripes is
given by the number of elements in ffs_data_servers within each
ffv2_stripes4.  If the number of stripes is one, then the value for
ffm_striping_unit_size <bcp14>MUST</bcp14> default to zero.  The mapping scheme
(sparse or dense) is selected per mirror by ffm_striping and is
detailed in <xref target="sec-striping"/>.  Note
that there is an assumption here that both the stripe unit size and
the number of stripes are the same across all mirrors.</t>
          <t>The ffl_mirrors field represents an array of state information for
each mirrored copy of the current layout segment.  Each element is
described by a ffv2_mirror4 type.</t>
          <t>ffv2ds_deviceid provides the deviceid of the storage device holding
the data file.</t>
          <t>ffv2ds_file_info is an array of ffv2_file_info4 structures, each
pairing a filehandle (fffi_fh_vers) with a stateid (fffi_stateid).
There <bcp14>MUST</bcp14> be exactly as many elements in ffv2ds_file_info as there
are in ffda_versions.  Each element of the array corresponds to a
particular combination of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled provided for the device.  The array allows for
server implementations that have different filehandles and stateids
for different combinations of version, minor version, and coupling
strength.  See <xref target="sec-version-errors"/> for how to handle versioning
issues between the client and storage devices.</t>
          <t>For tight coupling, fffi_stateid provides the stateid to be used
by the client to access the file.  The metadata server registers
fffi_stateid with each tight-coupling-capable storage device via
TRUST_STATEID (see <xref target="sec-tight-coupling-control"/>) before returning
the layout; the storage device validates subsequent CHUNK operations
against its trust table.</t>
          <t>For loose coupling and an NFSv4 storage device, the client <bcp14>MUST</bcp14> use
the anonymous stateid to perform I/O on the storage device, because
the metadata server stateid has no meaning to a storage device that
is not participating in the control protocol.  In this case the
metadata server <bcp14>MUST</bcp14> set fffi_stateid to the anonymous stateid.</t>
          <t>For an NFSv3 storage device (ffdv_version = 3), the tight-coupling
model does not apply: <xref target="sec-ff_device_addr4"/> requires
ffdv_tightly_coupled to be FALSE whenever ffdv_version equals 3,
because NFSv3 has no wire encoding for stateids.  The corresponding
fffi_stateid element in the ffv2ds_file_info array <bcp14>MUST</bcp14> therefore
be the anonymous stateid and is unused; an NFSv3 data server uses
the synthetic-uid fencing model (see <xref target="sec-Fencing-Clients"/>)
rather than a stateid-based trust table.</t>
          <t>This specification of the fffi_stateid restricts both models for
NFSv4.x storage protocols:</t>
          <dl>
            <dt>loosely coupled</dt>
            <dd>
              <t>the stateid has to be an anonymous stateid</t>
            </dd>
            <dt>tightly coupled</dt>
            <dd>
              <t>the stateid has to be a global stateid</t>
            </dd>
          </dl>
          <t>By pairing each fffi_fh_vers with its own fffi_stateid inside
ffv2_file_info4, the v2 layout addresses the v1 limitation where a
singleton stateid was shared across all filehandles.  Each open file
on the storage device can now have its own stateid, eliminating the
ambiguity present in the v1 structure.</t>
          <t>For loosely coupled storage devices, ffv2ds_user and ffv2ds_group
provide the synthetic user and group to be used in the RPC credentials
that the client presents to the storage device to access the data
files.  For tightly coupled storage devices, the user and group on
the storage device will be the same as on the metadata server; that
is, if ffdv_tightly_coupled (see <xref target="sec-ff_device_addr4"/>) is set,
then the client <bcp14>MUST</bcp14> ignore both ffv2ds_user and ffv2ds_group.</t>
          <t>The allowed values for both ffv2ds_user and ffv2ds_group are specified
as owner and owner_group, respectively, in Section 5.9 of <xref target="RFC8881"/>.
For NFSv3 compatibility, user and group strings that consist of
decimal numeric values with no leading zeros can be given a special
interpretation by clients and servers that choose to provide such
support.  The receiver may treat such a user or group string as
representing the same user as would be represented by an NFSv3 uid
or gid having the corresponding numeric value.  Note that if using
Kerberos for security, the expectation is that these values will
be a name@domain string.</t>
          <t>ffv2ds_efficiency describes the metadata server's evaluation as to
the effectiveness of each mirror.  Note that this is per layout and
not per device as the metric may change due to perceived load,
availability to the metadata server, etc.  Higher values denote
higher perceived utility.  The way the client can select the best
mirror to access is discussed in <xref target="sec-select-mirror"/>.</t>
        </section>
        <section anchor="error-codes-from-layoutget">
          <name>Error Codes from LAYOUTGET</name>
          <t><xref target="RFC8881"/> provides little guidance as to how the client is to
proceed with a LAYOUTGET that returns an error of
NFS4ERR_LAYOUTTRYLATER, NFS4ERR_LAYOUTUNAVAILABLE, or NFS4ERR_DELAY.
Within the context of this document:</t>
          <dl>
            <dt>NFS4ERR_LAYOUTUNAVAILABLE</dt>
            <dd>
              <t>there is no layout available and the I/O is to go to the metadata
server.  Note that it is possible to have had a layout before a
recall and not after.</t>
            </dd>
            <dt>NFS4ERR_LAYOUTTRYLATER</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should continue
with I/O to the storage devices.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should not
continue with I/O to the storage devices.</t>
            </dd>
          </dl>
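          <t>The dispositions above can be summarized in a short sketch; the
return strings are mnemonic only and are not protocol elements:</t>
          <sourcecode type="python"><![CDATA[
```python
# Sketch of the client dispositions above for LAYOUTGET errors;
# the return strings are mnemonic only, not protocol elements.

def layoutget_error_action(error, have_layout):
    if error == "NFS4ERR_LAYOUTUNAVAILABLE":
        return "io_through_mds"          # no layout will be granted
    if error == "NFS4ERR_LAYOUTTRYLATER":
        # keep using an existing layout while retrying LAYOUTGET
        return "continue_ds_io" if have_layout else "retry_later"
    if error == "NFS4ERR_DELAY":
        # do not continue DS I/O even with a layout in hand
        return "pause_and_retry"
    return "other"

assert layoutget_error_action("NFS4ERR_LAYOUTTRYLATER", True) == "continue_ds_io"
assert layoutget_error_action("NFS4ERR_DELAY", True) == "pause_and_retry"
```
]]></sourcecode>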
        </section>
        <section anchor="client-interactions-with-ffflagsnoiothrumds">
          <name>Client Interactions with FF_FLAGS_NO_IO_THRU_MDS</name>
          <t>Even if the metadata server provides the FF_FLAGS_NO_IO_THRU_MDS
flag, the client can still perform I/O to the metadata server.  The
flag functions as a hint: it indicates that the metadata server
prefers to separate the metadata I/O from the data I/O, most likely
for performance reasons.</t>
        </section>
      </section>
      <section anchor="layoutcommit-1">
        <name>LAYOUTCOMMIT</name>
        <t>The flexible file layout does not use lou_body inside the
loca_layoutupdate argument to LAYOUTCOMMIT.  If lou_type is
LAYOUT4_FLEX_FILES, the lou_body field <bcp14>MUST</bcp14> have a zero length (see
Section 18.42.1 of <xref target="RFC8881"/>).</t>
      </section>
      <section anchor="interactions-between-devices-and-layouts">
        <name>Interactions between Devices and Layouts</name>
        <t>The file layout type is defined such that the relationship between
multipathing and filehandles can result in either 0, 1, or N
filehandles (see Section 13.3 of <xref target="RFC8881"/>).  Some rationales for
this are clustered servers that share the same filehandle or allow
for multiple read-only copies of the file on the same storage device.
In the flexible file layout type, while there is an array of
filehandles, they are independent of the multipathing being used.
If the metadata server wants to provide multiple read-only copies
of the same file on the same storage device, then it should provide
multiple mirrored instances, each with a different ff_device_addr4.
The client can then determine that, since each of the fffi_fh_vers
values within ffv2ds_file_info are different, there are multiple
copies of the file for the current layout segment available.</t>
      </section>
      <section anchor="sec-version-errors">
        <name>Handling Version Errors</name>
        <t>When the metadata server provides the ffda_versions array in the
ff_device_addr4 (see <xref target="sec-ff_device_addr4"/>), the client is able
to determine whether or not it can access a storage device with any
of the supplied combinations of ffdv_version, ffdv_minorversion,
and ffdv_tightly_coupled.  However, due to the limitations of
reporting errors in GETDEVICEINFO (see Section 18.40 in <xref target="RFC8881"/>),
the client is not able to specify which specific device it cannot
communicate with over one of the provided ffdv_version and
ffdv_minorversion combinations.  Using ff_ioerr4 (<xref target="sec-ff_ioerr4"/>)
inside either the LAYOUTRETURN (see Section 18.44 of <xref target="RFC8881"/>)
or the LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/> and <xref target="sec-LAYOUTERROR"/>
of this document), the client can isolate the problematic storage
device.</t>
        <t>The error code to return for LAYOUTRETURN and/or LAYOUTERROR is
NFS4ERR_MINOR_VERS_MISMATCH.  It does not matter whether the mismatch
is a major version (e.g., the client can use NFSv3 but not NFSv4) or a
minor version (e.g., the client can use NFSv4.1 but not NFSv4.2); the
error indicates that, for all the supplied combinations of ffdv_version
and ffdv_minorversion, the client cannot communicate with the storage
device.  The client can retry the GETDEVICEINFO to see if the
metadata server can provide a different combination, or it can fall
back to doing the I/O through the metadata server.</t>
      </section>
    </section>
    <section anchor="sec-striping">
      <name>Striping</name>
      <t>The flexible file layout type version 2 inherits the dense and
sparse striping dispositions defined by the file layout type in
Section 13.4 of <xref target="RFC8881"/>.  The disposition for a given
mirror is selected by the ffm_striping field (see
<xref target="sec-ffv2-mirror4"/>) and applies to every data server in that
mirror's ffs_data_servers list.  Three values are permitted:</t>
      <dl>
        <dt>FFV2_STRIPING_NONE:</dt>
        <dd>
          <t>The mirror is not striped.  ffm_striping_unit_size <bcp14>MUST</bcp14> be 1
and ffm_stripes <bcp14>MUST</bcp14> contain exactly one stripe.  The entire
mirror lives on that stripe's single data server list, with
no offset transformation.</t>
        </dd>
        <dt>FFV2_STRIPING_SPARSE:</dt>
        <dd>
          <t>Logical offsets within the file map to the same numeric
offset on each data server.  A data server that does not own
the stripe unit at a given logical offset presents a hole at
that offset.  This is the simpler model and matches the
mental picture of "the file is laid out end-to-end on each
data server, but each data server stores only its stripe
units".</t>
        </dd>
        <dt>FFV2_STRIPING_DENSE:</dt>
        <dd>
          <t>Stripe units owned by a given data server are packed
contiguously on that data server, with no holes.  The
logical offset is transformed into a compact physical offset
on the target data server.  This matches pre-existing
deployments that follow the dense layout convention of
Section 13.4.4 of <xref target="RFC8881"/>.</t>
        </dd>
      </dl>
      <t>The mapping math for sparse and dense is given in
<xref target="fig-striping-math"/>.  Common definitions apply to both.</t>
      <figure anchor="fig-striping-math">
        <name>Sparse and dense stripe mapping math</name>
        <artwork><![CDATA[
L: logical offset within the file (bytes)
U: stripe-unit size in bytes  = ffm_striping_unit_size
W: stripe width               = length of ffs_data_servers
S: stripe size in bytes       = W * U
N: stripe number              = L / S
i: index (0-based) of the data server that owns L
                              = (L / U) mod W
R: byte offset within the stripe unit
                              = L mod U

FFV2_STRIPING_SPARSE:
  physical offset on data server i:
      P_sparse(L) = L
  other data servers see a hole at offset L.

FFV2_STRIPING_DENSE:
  physical offset on data server i:
      P_dense(L) = N * U + R
             = (L / S) * U + (L mod U)
  each data server stores only the stripe units it owns,
  packed contiguously.
]]></artwork>
      </figure>
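      <t>The mapping math in the figure above transcribes directly into code;
the helper names here are illustrative and are not part of the protocol:</t>
      <sourcecode type="python"><![CDATA[
```python
# The stripe-mapping math above, transcribed directly; helper names
# are illustrative and not part of the protocol.

def stripe_index(L, U, W):
    """0-based index i of the data server that owns logical offset L."""
    return (L // U) % W

def physical_offset(L, U, W, dense):
    """Physical offset of L on its owning data server.

    Sparse: same numeric offset; other DSes see a hole there.
    Dense:  owned stripe units are packed contiguously (N*U + R).
    """
    if not dense:
        return L                       # P_sparse(L) = L
    S = W * U                          # stripe size
    N = L // S                         # stripe number
    R = L % U                          # offset within the stripe unit
    return N * U + R                   # P_dense(L)

# Example: U = 4096 bytes, W = 3 data servers, L = 20480 (stripe
# unit 5).  Unit 5 lands on DS (5 mod 3) = 2; the dense offset is
# (20480 // 12288) * 4096 + 0 = 4096.
assert stripe_index(20480, 4096, 3) == 2
assert physical_offset(20480, 4096, 3, dense=False) == 20480
assert physical_offset(20480, 4096, 3, dense=True) == 4096
```
]]></sourcecode>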
    </section>
    <section anchor="recovering-from-client-io-errors">
      <name>Recovering from Client I/O Errors</name>
      <t>The pNFS client may encounter errors when directly accessing the
storage devices.  However, it is the responsibility of the metadata
server to recover from the I/O errors.  When the LAYOUT4_FLEX_FILES
layout type is used, the client <bcp14>MUST</bcp14> report the I/O errors to the
server at LAYOUTRETURN time using the ff_ioerr4 structure (see
<xref target="sec-ff_ioerr4"/>).</t>
      <t>The metadata server analyzes the error and determines the required
recovery operations such as recovering media failures or reconstructing
missing data files.</t>
      <t>The metadata server <bcp14>MUST</bcp14> recall any outstanding layouts to allow
it exclusive write access to the stripes being recovered and to
prevent other clients from hitting the same error condition.  In
these cases, the server <bcp14>MUST</bcp14> complete recovery before handing out
any new layouts to the affected byte ranges.</t>
      <t>Although the client implementation has the option to propagate a
corresponding error to the application that initiated the I/O
operation and drop any unwritten data, the client should attempt
to retry the original I/O operation by either requesting a new
layout or sending the I/O via regular NFSv4.1+ READ or WRITE
operations to the metadata server.  The client <bcp14>SHOULD</bcp14> attempt to
retrieve a new layout and retry the I/O operation using the storage
device first and only retry the I/O operation via the metadata
server if the error persists.</t>
    </section>
    <section anchor="client-side-protection-modes">
      <name>Client-Side Protection Modes</name>
      <section anchor="sec-CSM">
        <name>Client-Side Mirroring</name>
        <t>The flexible file layout type has a simple model in place for the
mirroring of the file data constrained by a layout segment.  There
is no assumption that each copy of the mirror is stored identically
on the storage devices.  For example, one device might employ
compression or deduplication on the data.  However, the over-the-wire
transfer of the file contents <bcp14>MUST</bcp14> appear identical.  Note, this
is a constraint of the selected XDR representation in which each
mirrored copy of the layout segment has the same striping pattern
(see <xref target="fig-parallel-filesystem"/>).</t>
        <t>The metadata server is responsible for determining the number of
mirrored copies and the location of each mirror.  While the client
may provide a hint to how many copies it wants (see <xref target="sec-ffv2-layouthint"/>),
the metadata server can ignore that hint; in any event, the client
has no means to dictate either the storage device (which also means
the coupling and/or protocol levels to access the layout segments)
or the location of said storage device.</t>
        <t>The updating of mirrored layout segments is done via client-side
mirroring.  With this approach, the client is responsible for making
sure modifications are made on all copies of the layout segments
it is informed of via the layout.  If a layout segment is being
resilvered to a storage device, that mirrored copy will not be in
the layout.  Thus, the metadata server <bcp14>MUST</bcp14> update that copy until
the client is presented it in a layout.  If the FF_FLAGS_WRITE_ONE_MIRROR
is set in ffl_flags, the client need only update one of the mirrors
(see <xref target="sec-write-mirrors"/>).  If the client is writing to the layout
segments via the metadata server, then the metadata server <bcp14>MUST</bcp14>
update all copies of the mirror.  As seen in <xref target="sec-mds-resilvering"/>,
during the resilvering, the layout is recalled, and the client has
to make modifications via the metadata server.</t>
        <section anchor="sec-select-mirror">
          <name>Selecting a Mirror</name>
          <t>When the metadata server grants a layout to a client, it <bcp14>MAY</bcp14> let
the client know how fast it expects each mirror to be once the
request arrives at the storage devices via the ffv2ds_efficiency
member.  While the algorithms to calculate that value are left to
the metadata server implementations, factors that could contribute
to that calculation include speed of the storage device, physical
memory available to the device, operating system version, current
load, etc.</t>
          <t>However, what should not be involved in that calculation is a
perceived network distance between the client and the storage device.
The client is better situated for making that determination based
on past interaction with the storage device over the different
available network interfaces between the two; that is, the metadata
server might not know about a transient outage between the client
and storage device because it has no presence on the given subnet.</t>
          <t>As such, it is the client that decides which mirror to access for
reading the file.  The requirements for writing to mirrored layout
segments are presented below.</t>
        </section>
        <section anchor="sec-write-mirrors">
          <name>Writing to Mirrors</name>
          <section anchor="single-storage-device-updates-mirrors">
            <name>Single Storage Device Updates Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is set, the
client only needs to update one of the copies of the layout segment.
For this case, the storage device <bcp14>MUST</bcp14> ensure that all copies of
the mirror are updated when any one of the mirrors is updated.  If
the storage device gets an error when updating one of the mirrors,
then it <bcp14>MUST</bcp14> inform the client that the original WRITE had an error.
The client then <bcp14>MUST</bcp14> inform the metadata server (see <xref target="sec-write-errors"/>).
The client's responsibility with respect to COMMIT is explained in
<xref target="sec-write-commits"/>.  The client may choose any one of the mirrors
and may use ffv2ds_efficiency as described in <xref target="sec-select-mirror"/>
when making this choice.</t>
          </section>
          <section anchor="client-updates-all-mirrors">
            <name>Client Updates All Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is not set, the
client is responsible for updating all mirrored copies of the layout
segments that it is given in the layout.  A single failed update
is sufficient to fail the entire operation.  If all but one copy
is updated successfully and the last one provides an error, then
the client needs to inform the metadata server about the error.
The client can use either LAYOUTRETURN or LAYOUTERROR to inform the
metadata server that the update failed to that storage device.  If
the client is updating the mirrors serially, then it <bcp14>SHOULD</bcp14> stop
at the first error encountered and report that to the metadata
server.  If the client is updating the mirrors in parallel, then
it <bcp14>SHOULD</bcp14> wait until all storage devices respond so that it can
report all errors encountered during the update.</t>
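            <t>The serial and parallel error-reporting behaviours above can be
sketched as follows; write_fn stands in for the client's per-mirror WRITE
and is an assumption of this example, not a protocol element:</t>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of the error-reporting rules above for updating all mirrors;
# write_fn stands in for the client's per-mirror WRITE and is an
# assumption of this example, not a protocol element.

def update_mirrors_serially(mirrors, write_fn):
    """Stop at the first error and report only it to the MDS."""
    for mirror in mirrors:
        err = write_fn(mirror)
        if err is not None:
            return [err]               # report the first error seen
    return []

def update_mirrors_in_parallel(mirrors, write_fn):
    """Collect a response from every storage device, then report all
    errors encountered during the update."""
    return [e for e in (write_fn(m) for m in mirrors) if e is not None]

writes = {"ds1": None, "ds2": "EIO", "ds3": "EIO"}
assert update_mirrors_serially(["ds1", "ds2", "ds3"], writes.get) == ["EIO"]
assert update_mirrors_in_parallel(["ds1", "ds2", "ds3"], writes.get) == ["EIO", "EIO"]
```
]]></sourcecode>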
          </section>
          <section anchor="sec-write-errors">
            <name>Handling Write Errors</name>
            <t>When the client reports a write error to the metadata server, the
metadata server is responsible for determining if it wants to remove
the errant mirror from the layout, if the mirror has recovered from
some transient error, etc.  When the client tries to get a new
layout, the metadata server informs it of the decision by the
contents of the layout.  The client <bcp14>MUST NOT</bcp14> assume that the contents
of the previous layout will match those of the new one.  If it has
updates that were not committed to all mirrors, then it <bcp14>MUST</bcp14> resend
those updates to all mirrors.</t>
            <t>There is no provision in the protocol for the metadata server to
directly determine that the client has or has not recovered from
an error.  For example, if a storage device was network partitioned
from the client and the client reported the error to the metadata
server, then the network partition would be repaired, and all of
the copies would be successfully updated.  There is no mechanism
for the client to report that fact, and the metadata server is
forced to repair the file across the mirror.</t>
            <t>If the client supports NFSv4.2, it can use LAYOUTERROR and LAYOUTRETURN
to provide hints to the metadata server about the recovery efforts.
A LAYOUTERROR on a file is for a non-fatal error.  A subsequent
LAYOUTRETURN without a ff_ioerr4 indicates that the client successfully
replayed the I/O to all mirrors.  Any LAYOUTRETURN with a ff_ioerr4
is an error that the metadata server needs to repair.  The client
<bcp14>MUST</bcp14> be prepared for the LAYOUTERROR to trigger a CB_LAYOUTRECALL
if the metadata server determines it needs to start repairing the
file.</t>
          </section>
          <section anchor="sec-write-commits">
            <name>Handling Write COMMITs</name>
            <t>When stable writes are done to the metadata server or to a single
replica (if allowed by the use of FF_FLAGS_WRITE_ONE_MIRROR), it
is the responsibility of the receiving node to propagate the written
data stably, before replying to the client.</t>
            <t>In the corresponding cases in which unstable writes are done, the
receiving node does not have any such obligation, although it may
choose to asynchronously propagate the updates.  However, once a
COMMIT is replied to, all replicas <bcp14>MUST</bcp14> reflect the writes that
have been done, and this data <bcp14>MUST</bcp14> have been committed to stable
storage on all replicas.</t>
            <t>In order to avoid situations in which stale data is read from
replicas to which writes have not been propagated:</t>
            <ul spacing="normal">
              <li>
                <t>A client that has outstanding unstable writes made to single
node (metadata server or storage device) <bcp14>MUST</bcp14> do all reads from
that same node.</t>
              </li>
              <li>
                <t>When writes are flushed to the server (for example, to implement
close-to-open semantics), a COMMIT must be done by the client
to ensure that up-to-date written data will be available
irrespective of the particular replica read.</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="sec-mds-resilvering">
          <name>Metadata Server Resilvering of the File</name>
          <t>The metadata server may elect to create a new mirror of the layout
segments at any time.  This might be to resilver a copy on a storage
device that was down for servicing, to provide a copy of the layout
segments on storage with different storage performance characteristics,
etc.  As the client will not be aware of the new mirror and the
metadata server will not be aware of updates that the client is
making to the layout segments, the metadata server <bcp14>MUST</bcp14> recall the
writable layout segment(s) that it is resilvering.  If the client
issues a LAYOUTGET for a writable layout segment that is in the
process of being resilvered, then the metadata server can deny that
request with an NFS4ERR_LAYOUTUNAVAILABLE.  The client would then
have to perform the I/O through the metadata server.</t>
        </section>
      </section>
      <section anchor="erasure-coding">
        <name>Erasure Coding</name>
        <t>Erasure Coding takes a data block and transforms it into a payload
to send to the data servers (see <xref target="fig-encoding-data-block"/>).  It
generates a metadata header and a transformed block per data server.
The header is metadata information for the transformed block.
Hereafter, the metadata is referred to simply as the header and the
transformed block as the chunk.  The payload of a data block is the
set of generated headers and chunks for that data block.</t>
        <t>The guard is a unique identifier generated by the client to describe
the current write transaction (see <xref target="sec-chunk_guard4"/>).  The
intent is to have a unique, non-opaque value for comparison.
The payload_id describes the position within the payload.  Finally,
the crc32 is the 32-bit CRC calculated over the header (with the
crc32 field set to 0) and the chunk.  Because the crc32 covers both
parts of the payload, integrity is ensured for both parts.</t>
        <t>While the data block might have a length of 4kB, that does not
necessarily mean that the length of the chunk is 4kB.  That length
is determined by the erasure coding type algorithm.  For example,
Reed-Solomon might use 4kB chunks, with data protection provided
by parity chunks.  Another example is the Mojette Transform, which
might use 1kB chunks.</t>
        <t>The payload contains redundancy which will allow the erasure
coding type algorithm to repair chunks in the payload as it is
transformed back to a data block (see <xref target="fig-decoding-db"/>).</t>
        <t>The protocol provides two levels of payload integrity, consumed at
different points in the read path:</t>
        <dl>
          <dt>Consistency:</dt>
          <dd>
            <t>A payload is <strong>consistent</strong> when all of the chunks that belong
to it carry the same chunk_guard4 value (see
<xref target="sec-chunk_guard4"/>).  Consistency alone does NOT imply the
bytes are free of corruption; it means only that every chunk in
the payload came from the same write transaction.  A reader
detects inconsistency when it assembles a payload and finds
differing chunk_guard4 values across chunks.</t>
          </dd>
          <dt>Integrity:</dt>
          <dd>
            <t>A payload has <strong>integrity</strong> when it is consistent AND every
contained chunk passes its CRC32 check.  Integrity is the
precondition for returning the payload's data block to the
application.</t>
          </dd>
        </dl>
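<t>The two checks above can be sketched as predicates over an assembled
payload.  The dictionary representation and header serialization below
are illustrative assumptions; only the rule that the CRC is computed
with the crc32 field zeroed comes from this document:</t>
<sourcecode type="python"><![CDATA[
import zlib

def header_bytes(gen_id, client_id, payload_id, crc32):
    # Hypothetical serialization; this document does not fix the layout.
    return b"%d:%d:%d:%d" % (gen_id, client_id, payload_id, crc32)

def chunk_crc32(c):
    # The CRC is computed with the crc32 field itself set to 0.
    return zlib.crc32(
        header_bytes(c["gen_id"], c["client_id"], c["payload_id"], 0)
        + c["data"])

def is_consistent(chunks):
    # Consistency: one chunk_guard4 value across the whole payload.
    return len({(c["gen_id"], c["client_id"]) for c in chunks}) == 1

def has_integrity(chunks):
    # Integrity: consistency plus a passing CRC32 on every chunk.
    return is_consistent(chunks) and all(
        c["crc32"] == chunk_crc32(c) for c in chunks)

def make_chunk(gen_id, client_id, payload_id, data):
    c = {"gen_id": gen_id, "client_id": client_id,
         "payload_id": payload_id, "data": data}
    c["crc32"] = chunk_crc32(c)
    return c

good = [make_chunk(3, 6, i, b"x" * 4) for i in range(3)]
assert is_consistent(good) and has_integrity(good)

torn = [make_chunk(3, 6, 0, b"a"), make_chunk(4, 6, 1, b"b")]
assert not is_consistent(torn)        # mixed write transactions

bad = [make_chunk(3, 6, 0, b"a")]
bad[0]["data"] = b"z"                 # byte-level corruption
assert is_consistent(bad) and not has_integrity(bad)
]]></sourcecode>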
        <t>The separation matters because the two checks detect different
failure modes.  Consistency detects protocol-level failures (racing
writers, partial writes, rollback windows); the CRC32 detects
byte-level corruption (network errors, media errors, software bugs
in the erasure transform).  Neither subsumes the other.</t>
        <t>The two-level integrity model also reflects a deeper property of
distributed writes: <strong>last-writer-wins does not apply to a payload
spread across independent data servers.</strong>  The ordering of writes
arriving at one data server may differ from the ordering arriving
at another; the "last" write on DSa may well be the "first" on
DSc.  The chunk_guard4 CAS primitive (see <xref target="sec-chunk_guard4"/>)
resolves this by serializing concurrent writers per chunk rather
than by imposing a global order.</t>
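<t>A minimal sketch of the per-chunk serialization idea, assuming a data
server that tracks one in-flight guard per chunk.  The DataServer class
and status strings below are illustrative, not wire formats:</t>
<sourcecode type="python"><![CDATA[
class DataServer:
    def __init__(self):
        self.pending = {}   # chunk_id -> guard of the in-flight write

    def chunk_write(self, chunk_id, guard):
        holder = self.pending.get(chunk_id)
        if holder is not None and holder != guard:
            # Another writer's chunk is pending: serialize, don't reorder.
            return ("NFS4ERR_CHUNK_LOCKED", holder)
        self.pending[chunk_id] = guard      # compare-and-set succeeded
        return ("NFS4_OK", guard)

    def chunk_finalize(self, chunk_id):
        self.pending.pop(chunk_id, None)    # chunk may be written again

ds = DataServer()
assert ds.chunk_write(0, (3, 6)) == ("NFS4_OK", (3, 6))
# A racing writer may be "first" on one data server and "last" on
# another; the guard check serializes it per chunk instead:
assert ds.chunk_write(0, (4, 9)) == ("NFS4ERR_CHUNK_LOCKED", (3, 6))
ds.chunk_finalize(0)
assert ds.chunk_write(0, (4, 9))[0] == "NFS4_OK"
]]></sourcecode>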
        <t>The erasure coding algorithm itself might not be sufficient to
detect all byte-level errors in the chunks.  The CRC32 checks
allow the data server to detect chunks with integrity issues; the
erasure decoding algorithm can then reconstruct the affected
chunks from the remaining integral chunks in the payload.</t>
        <section anchor="encoding-a-data-block">
          <name>Encoding a Data Block</name>
          <figure anchor="fig-encoding-data-block">
            <name>Encoding a Data Block</name>
            <artwork><![CDATA[
                 +-------------+
                 | data block  |
                 +-------+-----+
                         |
                         |
   +---------------------+-------------------------------+
   |            Erasure Encoding (Transform Forward)     |
   +---+----------------------+---------------------+----+
       |                      |                     |
       |                      |                     |
   +---+------------+     +---+------------+     +--+-------------+
   | HEADER         | ... | HEADER         | ... | HEADER         |
   +----------------+     +----------------+     +----------------+
   | guard:         | ... | guard:         | ... | guard:         |
   |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
   |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
   | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
   | crc32   :      | ... | crc32   :      | ... | crc32   :      |
   +----------------+     +----------------+     +----------------+
   | CHUNK          | ... | CHUNK          | ... | CHUNK          |
   +----------------+     +----------------+     +----------------+
   | data: ....     | ... | data: ....     | ... | data: ....     |
   +----------------+     +----------------+     +----------------+
     Data Server 1          Data Server N          Data Server 6
]]></artwork>
          </figure>
          <t>Each data block of the file resident in the client's cache of the
file will be encoded into N different payloads to be sent to the
data servers as shown in <xref target="fig-encoding-data-block"/>.  As CHUNK_WRITE
(see <xref target="sec-CHUNK_WRITE"/>) can encode multiple write_chunk4 into a
single transaction, a more accurate description of a CHUNK_WRITE
is in <xref target="fig-example-chunk-write-args"/>.</t>
          <figure anchor="fig-example-chunk-write-args">
            <name>Example of CHUNK_WRITE_args</name>
            <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 3     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x32ef89             |
  |         [1]:  0x56fa89             |
  |         [2]:  0x7693af             |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
          </figure>
          <t>This describes a write of 3 blocks of data starting at an offset
of 1 block into the file.  As each block shares the cwa_owner, it is
presented only once.  That is, the data server can construct the
header for the i'th chunk in cwa_chunks from the cwa_payload_id, the
cwa_owner, and the i'th crc32 in cwa_crc32s.  The cwa_chunks are
sent together as a single byte stream to increase performance.</t>
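<t>The header reconstruction described above can be sketched as follows.
The dictionary-based representation is illustrative; only the rule that
each header combines the shared cwa_owner and cwa_payload_id with the
i'th crc32 and the i'th slice of cwa_chunks is from this document:</t>
<sourcecode type="python"><![CDATA[
def expand_headers(args):
    """Expand a multi-chunk CHUNK_WRITE into per-chunk headers."""
    size = args["cwa_chunk_size"]
    headers = []
    for i, crc in enumerate(args["cwa_crc32s"]):
        headers.append({
            "guard": args["cwa_owner"]["co_guard"],   # shared by all chunks
            "payload_id": args["cwa_payload_id"],
            "chunk_id": args["cwa_offset"] + i,       # i'th block of the write
            "crc32": crc,                             # i'th entry of cwa_crc32s
            "data": args["cwa_chunks"][i * size:(i + 1) * size],
        })
    return headers

args = {
    "cwa_offset": 1,
    "cwa_payload_id": 0,
    "cwa_owner": {"co_guard": {"cg_gen_id": 3, "cg_client_id": 6}},
    "cwa_chunk_size": 4,
    "cwa_crc32s": [0x32ef89, 0x56fa89, 0x7693af],
    "cwa_chunks": b"aaaabbbbcccc",
}
hs = expand_headers(args)
assert [h["chunk_id"] for h in hs] == [1, 2, 3]
assert hs[1]["data"] == b"bbbb"
]]></sourcecode>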
          <t>Assuming that there were no issues, <xref target="fig-example-chunk-write-res"/>
illustrates the results.  The payload sequence id is implicit in
the CHUNK_WRITEargs.</t>
          <figure anchor="fig-example-chunk-write-res">
            <name>Example of CHUNK_WRITE_res</name>
            <artwork><![CDATA[
  +-------------------------------+
  | CHUNK_WRITEresok              |
  +-------------------------------+
  | cwr_count: 3                  |
  | cwr_committed: FILE_SYNC4     |
  | cwr_writeverf: 0xf1234abc     |
  | cwr_owners[0]:                |
  |        co_chunk_id: 1         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[1]:                |
  |        co_chunk_id: 2         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[2]:                |
  |        co_chunk_id: 3         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  +-------------------------------+
]]></artwork>
          </figure>
          <section anchor="calculating-the-crc32">
            <name>Calculating the CRC32</name>
            <figure anchor="fig-calc-before">
              <name>CRC32 Before Calculation</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id : 0     |
  | crc32   : 0        |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
        Data Server 1
]]></artwork>
            </figure>
            <t>Assuming the header and payload in <xref target="fig-calc-before"/>, the crc32
needs to be calculated in order to fill in the crc32 field.  In
this case, the crc32 is calculated over the four header fields shown
(with the crc32 field set to 0) and the chunk.  In this example, it
is calculated to be 0x21de8.  The resulting CHUNK_WRITE is shown in
<xref target="fig-calc-crc-after"/>.</t>
            <figure anchor="fig-calc-crc-after">
              <name>CRC32 After Calculation</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x21de8              |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
            </figure>
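<t>The zero-then-compute procedure can be sketched as follows; the byte
serialization of the header is an assumption, as this document does not
fix it here:</t>
<sourcecode type="python"><![CDATA[
import zlib

def fill_crc32(header, chunk):
    """Compute the header CRC with the crc32 field zeroed, then store it."""
    h = dict(header, crc32=0)                # crc32 field zeroed first
    raw = b":".join(
        str(h[k]).encode("ascii")
        for k in ("gen_id", "client_id", "payload_id", "crc32"))
    header["crc32"] = zlib.crc32(raw + chunk)
    return header["crc32"]

hdr = {"gen_id": 7, "client_id": 6, "payload_id": 0, "crc32": 0}
crc = fill_crc32(hdr, b"....")
# A reader re-zeroes the field, recomputes, and compares:
assert fill_crc32(dict(hdr, crc32=0), b"....") == crc
]]></sourcecode>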
          </section>
        </section>
        <section anchor="decoding-a-data-block">
          <name>Decoding a Data Block</name>
          <figure anchor="fig-decoding-db">
            <name>Decoding a Data Block</name>
            <artwork><![CDATA[
    Data Server 1          Data Server N          Data Server 6
  +----------------+     +----------------+     +----------------+
  | HEADER         | ... | HEADER         | ... | HEADER         |
  +----------------+     +----------------+     +----------------+
  | guard:         | ... | guard:         | ... | guard:         |
  |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
  |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
  | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
  | crc32   :      | ... | crc32   :      | ... | crc32   :      |
  +----------------+     +----------------+     +----------------+
  | CHUNK          | ... | CHUNK          | ... | CHUNK          |
  +----------------+     +----------------+     +----------------+
  | data: ....     | ... | data: ....     | ... | data: ....     |
  +---+------------+     +--+-------------+     +-+--------------+
      |                     |                     |
      |                     |                     |
  +---+---------------------+---------------------+-----+
  |            Erasure Decoding (Transform Reverse)     |
  +---------------------+-------------------------------+
                        |
                        |
                +-------+-----+
                | data block  |
                +-------------+
]]></artwork>
          </figure>
          <t>When reading chunks via a CHUNK_READ operation, the client will
decode them into data blocks as shown in <xref target="fig-decoding-db"/>.</t>
          <t>At this time, the client could detect issues in the integrity of
the data.  The handling and repair are out of the scope of this
document and <bcp14>MUST</bcp14> be addressed in the document describing each
erasure coding type.</t>
          <section anchor="checking-the-crc32">
            <name>Checking the CRC32</name>
            <figure anchor="fig-example-chunk-read-crc">
              <name>CRC32 on the Wire</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_READresok                    |
  +------------------------------------+
  | crr_eof: false                     |
  | crr_chunks[0]:                     |
  |        cr_crc: 0x21de8             |
  |        cr_owner:                   |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  |        cr_chunk  :  ......         |
  +------------------------------------+
]]></artwork>
            </figure>
            <t>Assuming the CHUNK_READ results in <xref target="fig-example-chunk-read-crc"/>,
the crc32 needs to be checked in order to ensure data integrity.
Conceptually, a header and payload can be built as shown in
<xref target="fig-example-crc-checked"/>.  The crc32 is calculated over the four
header fields shown (with the crc32 field set to 0) and the cr_chunk.
In this example, it is calculated to be 0x21de8, which matches cr_crc.
Thus this data server's part of the payload has data integrity.</t>
            <figure anchor="fig-example-crc-checked">
              <name>CRC32 Being Checked</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id  : 0    |
  | crc32    : 0       |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
       Data Server 1
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="write-modes">
          <name>Write Modes</name>
          <t>There are two basic writing modes for erasure coding.  They depend
on whether the metadata server sets FFV2_FLAGS_ONLY_ONE_WRITER in the
ffl_flags in the ffv2_layout4 (see <xref target="fig-ffv2_layout4"/>), which informs
the client whether it is the only writer to the file.  If it is the
only writer, then CHUNK_WRITE with the cwa_guard not set can be used
to write chunks.  In this scenario there is no write contention, but
write holes can still occur as the client overwrites old data.  Thus
the client does not need guarded writes, but it does need the ability
to roll back writes.  If it is not the only writer, then CHUNK_WRITE
with the cwa_guard set <bcp14>MUST</bcp14> be used to write chunks.
In this scenario, write holes can also be caused by multiple clients
writing to the same chunk.  Thus the client needs guarded writes to
prevent overwrites, and it still needs the ability to roll back
writes.</t>
          <t>In both modes, clients <bcp14>MUST NOT</bcp14> overwrite payloads that are
already inconsistent.  This directly follows from <xref target="sec-reading-chunks"/>
and <bcp14>MUST</bcp14> be handled as discussed there.  Once the payload has
been determined to be consistent, the client can use those chunks as
a basis for read/modify/update.</t>
          <t>CHUNK_WRITE is a two-pass operation in cooperation with CHUNK_FINALIZE
(<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).
It writes to the data file, and the data server is responsible for
retaining a copy of the old header and chunk. A subsequent CHUNK_READ
would return the new chunk. However, until either the CHUNK_FINALIZE
or CHUNK_ROLLBACK is presented, a subsequent CHUNK_WRITE <bcp14>MUST</bcp14> result
in the locking of the chunk, as if a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>)
had been performed on the chunk. As such, further CHUNK_WRITEs by
any client <bcp14>MUST</bcp14> be denied until the chunk is unlocked by CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
          <t>If the CHUNK_WRITE results in a consistent data block, then the
client will send a CHUNK_FINALIZE in a subsequent compound to inform
the data server that the chunk is consistent and can be overwritten
by another CHUNK_WRITE.</t>
          <t>If the CHUNK_WRITE results in an inconsistent data block, or if the
data server returns NFS4ERR_CHUNK_LOCKED, the client reports the
condition to the metadata server via LAYOUTERROR with an error code
of NFS4ERR_PAYLOAD_NOT_CONSISTENT.</t>
        </section>
        <section anchor="sec-repair-selection">
          <name>Selecting the Repair Client</name>
          <t>The repair topology involves three actors communicating along
distinct paths, as shown in <xref target="fig-repair-topology"/>.</t>
          <figure anchor="fig-repair-topology">
            <name>Repair topology</name>
            <artwork><![CDATA[
     +------------+              +----------------+
     | Reporting  |              |                |
     | client     | ----(1)----> |    Metadata    |
     | (detects   | LAYOUTERROR  |    server      |
     |  error)    |              |                |
     +------------+              +----------------+
                                     |        ^
                          (2a), (2b) |        |
                    CB_CHUNK_REPAIR  |        | (3)
                    (RACE or SCRUB)  |        |
          +--------------------------+        |
          |    +------------------------------+
          |    |
          v    |
     +----+----+---+           +----------------------+
     |  Repair     | ----(4)-> |                      |
     |  client     |  CHUNK_   |    Data servers      |
     |  (selected  |  LOCK_    |    (mirror set for   |
     |   by MDS)   |  ADOPT,   |    affected ranges)  |
     |             |  CHUNK_   |                      |
     |             |  WRITE_   +----------------------+
     |             |  REPAIR,
     |             |  CHUNK_
     |             |  FINALIZE,
     |             |  CHUNK_
     |             |  COMMIT,
     |             |  CHUNK_
     |             |  REPAIRED
     +-------------+

     (1) Reporter LAYOUTERRORs the MDS.
     (2a) MDS selects a repair client (may be same as reporter).
     (2b) MDS escrows the chunk lock and issues CB_CHUNK_REPAIR.
     (3)  Repair client adopts the lock and drives the repair.
     (4)  Repair client issues CHUNK_* ops against the mirror set.
]]></artwork>
          </figure>
          <t>The metadata server is the authority that selects which client
(or, in a tightly coupled deployment, which data server) repairs
an inconsistent payload.  This is analogous to the way the
metadata server assigns per-mirror priority via ffv2ds_efficiency
(see <xref target="sec-select-mirror"/>): the protocol does not prescribe the
selection algorithm, and each deployment <bcp14>MAY</bcp14> tune its policy.</t>
          <t>Implementations <bcp14>MAY</bcp14> consider factors such as:</t>
          <ul spacing="normal">
            <li>
              <t>Whether a client holds an active write layout on the affected
payload (the client most likely to hold surviving shards in
cache).</t>
            </li>
            <li>
              <t>Whether a client has previously reported consistent shards to
the metadata server via LAYOUTSTATS or a prior LAYOUTERROR.</t>
            </li>
            <li>
              <t>Whether the layout exposes a data server carrying
FFV2_DS_FLAGS_REPAIR as a target for reconstructed shards.</t>
            </li>
            <li>
              <t>Network proximity, observed latency, or recent client load --
the same class of information that informs ffv2ds_efficiency.</t>
            </li>
          </ul>
          <t>The selection algorithm is not normative.  What is normative is
that every client <bcp14>MUST</bcp14> be prepared to:</t>
          <ol spacing="normal" type="1"><li>
              <t>Receive a repair request for a payload on which it does not
hold an outstanding write layout and to which it did not write;
and</t>
            </li>
            <li>
              <t>Continue its own workload after reporting
NFS4ERR_PAYLOAD_NOT_CONSISTENT without itself being selected
to repair the payload it reported.</t>
            </li>
          </ol>
          <t>The metadata server signals the selected client via the
CB_CHUNK_REPAIR callback (<xref target="sec-CB_CHUNK_REPAIR"/>), which
identifies the file, the affected ranges (each with its own
triggering nfsstat4), and a wall-clock deadline.  A client that
receives CB_CHUNK_REPAIR for a file for which it does not
already hold a layout <bcp14>MUST</bcp14> acquire a layout via LAYOUTGET before
attempting the repair.</t>
          <t>Operationally, CB_CHUNK_REPAIR is an exceptional path, triggered
only by concurrent-writer races or data-server failures.  It is not
a steady-state operation, and its frequency is a function of
racing-writer and data-server-failure rates in the deployment
rather than of normal client workload.  Implementations <bcp14>SHOULD</bcp14>
treat the CB_CHUNK_REPAIR handler as rare-path code and avoid
over-optimizing it.  Implementations <bcp14>SHOULD</bcp14>, however, provision
enough client-side compute to handle a repair transaction
without stalling their foreground I/O, because foreground
throughput during repair is the externally observable cost of
this callback.</t>
        </section>
        <section anchor="repair-protocol-normative-vs-informative">
          <name>Repair Protocol: Normative vs. Informative</name>
          <t>The selection algorithm is non-normative and deployment-tunable.
The externally-observable state transitions of the repair flow
are normative.  The line between the two is drawn at what
another party on the wire -- the metadata server, another
client, a reader -- can observe.  What no other party can see
(client-internal ordering, retry policy, whether to CHUNK_READ
first to confirm the failure) is left to implementations.</t>
          <t>The following requirements are normative.  An implementation
that violates any of these can leak inconsistency or write-holes
into the cluster:</t>
          <ol spacing="normal" type="1"><li>
              <t><strong>Final state flat.</strong>  Every shard in every range identified
in a CB_CHUNK_REPAIR <bcp14>MUST</bcp14> reach either the COMMITTED state
(repaired) or the EMPTY state (rolled back).  No shard is
left in PENDING or FINALIZED indefinitely.</t>
            </li>
            <li>
              <t><strong>Lock before write.</strong>  The repair client <bcp14>MUST</bcp14> adopt the
lock on every affected range via CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT (<xref target="sec-CHUNK_LOCK"/>) before issuing
any CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK, or CHUNK_WRITE on a
chunk in that range.  The lock on the affected chunks is
held continuously from the failure that triggered
CB_CHUNK_REPAIR through the adoption; at no point is the
range unlocked.</t>
            </li>
            <li>
              <t><strong>Clear the errored state.</strong>  On the reconstruction path,
the repair client <bcp14>MUST</bcp14> issue CHUNK_REPAIRED
(<xref target="sec-CHUNK_REPAIRED"/>) after CHUNK_COMMIT.  Without it,
readers continue to see holes regardless of on-disk state.</t>
            </li>
            <li>
              <t><strong>Release locks explicitly.</strong>  CHUNK_ROLLBACK does not
release chunk locks.  On the rollback path the client <bcp14>MUST</bcp14>
issue CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) on each affected
chunk.  A client that walks away without either completing
CHUNK_REPAIRED or issuing CHUNK_UNLOCK holds the locks
until lease expiry, blocking progress for other writers.</t>
            </li>
            <li>
              <t><strong>Deadline honored.</strong>  The client <bcp14>MUST</bcp14> drive every range to
its final flat state before ccra_deadline, or <bcp14>MUST</bcp14> respond
to the CB_CHUNK_REPAIR with NFS4ERR_DELAY (requesting an
extension), NFS4ERR_CODING_NOT_SUPPORTED (declining), or
NFS4ERR_PAYLOAD_LOST (declaring the data unrecoverable).
A deadline that elapses without any of these leaves the
metadata server free to re-select; the client <bcp14>MUST NOT</bcp14>
continue repair-related CHUNK operations after the
deadline without first re-verifying its layout and the
chunk lock state.</t>
            </li>
            <li>
              <t><strong>Terminal return codes.</strong>  NFS4ERR_CODING_NOT_SUPPORTED
<bcp14>MUST</bcp14> mean "decline; select another client."
NFS4ERR_PAYLOAD_LOST <bcp14>MUST</bcp14> mean "the data is not
recoverable; do not retry."  The metadata server relies on
these to decide whether to re-issue.</t>
            </li>
          </ol>
          <t>The following aspects are informative / implementation-defined:</t>
          <ul spacing="normal">
            <li>
              <t>Choice between the reconstruction path (CHUNK_WRITE_REPAIR)
and the rollback path (CHUNK_ROLLBACK) on a given range.  The
protocol <bcp14>MUST</bcp14> support both; the client <bcp14>MAY</bcp14> use either based
on its local state and whether reconstruction is feasible
from surviving shards.</t>
            </li>
            <li>
              <t>Ordering among multiple affected ranges in a single
CB_CHUNK_REPAIR (parallel or serial).</t>
            </li>
            <li>
              <t>Whether to issue CHUNK_READ to confirm the failure mode
before reconstructing.</t>
            </li>
            <li>
              <t>Retry policy on transient CHUNK_WRITE_REPAIR errors below the
deadline cutoff.</t>
            </li>
            <li>
              <t>How the repair status is surfaced to local filesystem API
callers.</t>
            </li>
          </ul>
        </section>
        <section anchor="carrying-out-the-repair">
          <name>Carrying Out the Repair</name>
          <t>With the normative framing above, the reconstruction path is:</t>
          <ol spacing="normal" type="1"><li>
              <t>CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT on each affected
range (<xref target="sec-CHUNK_LOCK"/>).</t>
            </li>
            <li>
              <t>CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>) with the
reconstructed data for each inconsistent shard.  The
client's chunk_owner4 on this and all subsequent operations
is the one it presented in the CHUNK_LOCK ADOPT above;
prior owners' generation ids are now historical.</t>
            </li>
            <li>
              <t>CHUNK_FINALIZE (<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_COMMIT
(<xref target="sec-CHUNK_COMMIT"/>) to persist the repaired shards.</t>
            </li>
            <li>
              <t>CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to clear the
errored state.</t>
            </li>
          </ol>
          <t>The rollback path, when reconstruction is not possible:</t>
          <ol spacing="normal" type="1"><li>
              <t>CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT on each affected
range.</t>
            </li>
            <li>
              <t>CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>) on each affected
shard to restore the previously committed content.</t>
            </li>
            <li>
              <t>CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) on each shard.</t>
            </li>
          </ol>
          <t>In both paths, the repair client <bcp14>SHOULD</bcp14> target reconstructed
shards according to the following fallback order: first, any
data server in the layout carrying FFV2_DS_FLAGS_REPAIR; then
the data server that reported the failure (the one carrying the
failing shard at the range identified by ccr_offset and ccr_count
in the CB_CHUNK_REPAIR argument); then, if both of the above are
unreachable, a data server carrying FFV2_DS_FLAGS_SPARE.  If
none of the above are available, the client <bcp14>MUST</bcp14> return
NFS4ERR_PAYLOAD_LOST on the CB_CHUNK_REPAIR response.</t>
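<t>The fallback order above can be sketched as a selection helper.  The
numeric flag values and the server representation below are illustrative
assumptions; only the ordering is from this document:</t>
<sourcecode type="python"><![CDATA[
# Illustrative bit values; the real values are defined by the XDR.
FFV2_DS_FLAGS_REPAIR = 0x1
FFV2_DS_FLAGS_SPARE = 0x2

def pick_repair_target(servers, reporting_ds):
    # 1. Any reachable data server advertising the REPAIR flag.
    for ds in servers:
        if ds["flags"] & FFV2_DS_FLAGS_REPAIR and ds["reachable"]:
            return ds["name"]
    # 2. The data server that reported the failure.
    for ds in servers:
        if ds["name"] == reporting_ds and ds["reachable"]:
            return ds["name"]
    # 3. A spare data server.
    for ds in servers:
        if ds["flags"] & FFV2_DS_FLAGS_SPARE and ds["reachable"]:
            return ds["name"]
    # 4. Nothing available: the repair must be declared lost.
    return "NFS4ERR_PAYLOAD_LOST"

layout = [
    {"name": "ds1", "flags": 0, "reachable": True},
    {"name": "ds2", "flags": FFV2_DS_FLAGS_REPAIR, "reachable": True},
]
assert pick_repair_target(layout, "ds1") == "ds2"   # REPAIR flag wins
]]></sourcecode>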
          <section anchor="single-writer-mode">
            <name>Single Writer Mode</name>
            <t>In single writer mode, the metadata server sets FFV2_FLAGS_ONLY_ONE_WRITER
in ffl_flags, indicating that no other client holds a write layout for
the file.  The client sends CHUNK_WRITE with cwa_guard.cwg_check set to
FALSE, omitting the guard value.  Because only one writer is active,
there is no risk of two clients overwriting the same chunk concurrently.</t>
            <t>The single writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = FALSE) for each
shard.  The data server places the written block in the PENDING state
and retains a copy of the previous block for rollback.</t>
              </li>
              <li>
                <t>The client issues CHUNK_FINALIZE to advance the blocks from PENDING
to FINALIZED, validating the per-block CRC32.</t>
              </li>
              <li>
                <t>The client issues CHUNK_COMMIT to advance the blocks from FINALIZED
to COMMITTED, persisting the block metadata to stable storage.</t>
              </li>
            </ol>
            <t>If the client detects an error after CHUNK_WRITE but before CHUNK_FINALIZE
(e.g., a CRC mismatch on a subsequent CHUNK_READ), it issues CHUNK_ROLLBACK
to restore the previous block content.  CHUNK_ROLLBACK does not lock the
chunk; the next CHUNK_WRITE is permitted immediately.</t>
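<t>The three-step sequence and its rollback can be modeled as a small
per-chunk state machine.  The Chunk class below is a non-normative
sketch; the state names follow this document:</t>
<sourcecode type="python"><![CDATA[
class Chunk:
    def __init__(self):
        self.state = "EMPTY"      # no block written yet
        self.data = None
        self.prev = None

    def chunk_write(self, data):
        # The data server retains the old block for a later rollback.
        self.prev, self.data = self.data, data
        self.state = "PENDING"

    def chunk_finalize(self):
        assert self.state == "PENDING"
        self.state = "FINALIZED"  # per-block CRC32 validated here

    def chunk_commit(self):
        assert self.state == "FINALIZED"
        self.state = "COMMITTED"  # block metadata on stable storage

    def chunk_rollback(self):
        # Restore the previous content; rollback does not lock the chunk.
        self.data = self.prev
        self.state = "COMMITTED" if self.prev is not None else "EMPTY"

c = Chunk()
c.chunk_write(b"v1"); c.chunk_finalize(); c.chunk_commit()
c.chunk_write(b"v2"); c.chunk_rollback()   # error found before FINALIZE
assert c.data == b"v1" and c.state == "COMMITTED"
]]></sourcecode>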
          </section>
          <section anchor="repairing-single-writer-payloads">
            <name>Repairing Single Writer Payloads</name>
            <t>In single writer mode, inconsistent blocks arise from a client or data
server failure during a CHUNK_WRITE / CHUNK_FINALIZE sequence.  Because
no other writer is active, the original writer is the typical choice
for repair, but the metadata server <bcp14>MAY</bcp14> designate any client according
to the rules in <xref target="sec-repair-selection"/>.  A designated client that
did not originate the writes <bcp14>MUST</bcp14> follow the rollback path of that
section if it cannot reconstruct the payload from surviving shards.</t>
            <t>The repair sequence when the selected client is the original writer is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_READ to identify which blocks are in an
inconsistent state (PENDING with a CRC mismatch, or in the errored
state set by a prior CHUNK_ERROR).</t>
              </li>
              <li>
                <t>For each errored block, the repair client reconstructs the correct
data using the erasure coding algorithm (RS matrix inversion or Mojette
back-projection) from the surviving consistent blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
to write the reconstructed data.  CHUNK_WRITE_REPAIR bypasses the guard
check and applies different data server policies (e.g., allowing writes
to blocks in the errored state).</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_FINALIZE and CHUNK_COMMIT to persist the
repaired blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the errored state and make the blocks available for normal reads.</t>
              </li>
            </ol>
          </section>
          <section anchor="sec-multi-writer">
            <name>Multiple Writer Mode</name>
            <t>In multiple writer mode, the metadata server does not set
FFV2_FLAGS_ONLY_ONE_WRITER, indicating that concurrent writers may hold
write layouts for the file.  The client sends CHUNK_WRITE with
cwa_guard.cwg_check set to TRUE, supplying a chunk_guard4 in cwa_guard.cwg_guard
that uniquely identifies this write transaction across all data servers.</t>
            <t>The multiple writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client selects a unique chunk_guard4 for this transaction.  The
cg_client_id identifies the client (derived from the client's
clientid4); the cg_gen_id is a per-client generation counter
incremented for each new transaction.</t>
              </li>
              <li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = TRUE) for each
shard.  The data server checks that no other client's block is in the
PENDING state for this chunk.  If another client's block is already
pending, the data server returns NFS4ERR_CHUNK_LOCKED with the
clr_owner field identifying the lock holder.</t>
              </li>
              <li>
                <t>On NFS4ERR_CHUNK_LOCKED, the client <bcp14>MUST</bcp14> back off.  It issues
CHUNK_ROLLBACK for any shards it has already written in this
transaction, then retries after a delay.</t>
              </li>
              <li>
                <t>If all CHUNK_WRITEs succeed, the client issues CHUNK_FINALIZE and
CHUNK_COMMIT as in single writer mode.</t>
              </li>
            </ol>
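<t>The guard construction in step 1 can be sketched as follows.  This
is a non-normative illustration: only the field names cg_client_id and
cg_gen_id come from this document; the GuardFactory class and its
method are hypothetical.</t>
<sourcecode type="python"><![CDATA[
```python
import itertools

class GuardFactory:
    """Illustrative (non-normative) chunk_guard4 generator.

    cg_client_id is fixed per client (derived from its clientid4);
    cg_gen_id is a monotonically increasing per-client generation
    counter, so the (cg_client_id, cg_gen_id) pair is unique across
    all data servers.
    """
    def __init__(self, cg_client_id):
        self.cg_client_id = cg_client_id
        self._gen = itertools.count(1)   # per-client generation counter

    def next_guard(self):
        # Distinct client ids plus a never-repeating counter make every
        # guard unique across clients and transactions.
        return (self.cg_client_id, next(self._gen))

a, b = GuardFactory(0x1111), GuardFactory(0x2222)
guards = [a.next_guard(), a.next_guard(), b.next_guard()]
assert len(set(guards)) == 3
```
]]></sourcecode>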
            <t>The guard ensures that the complete set of shards forming a consistent
erasure-coded block all carry the same chunk_guard4.  A reader that
encounters shards with different guard values knows the payload is not
yet consistent and <bcp14>MUST</bcp14> either retry or report NFS4ERR_PAYLOAD_NOT_CONSISTENT.</t>
          </section>
          <section anchor="repairing-multiple-writer-payloads">
            <name>Repairing Multiple Writer Payloads</name>
            <t>In multiple writer mode, inconsistent blocks can arise from two sources:
a client failure leaving some shards in PENDING state, or two clients
writing different data to the same chunk before one has committed.</t>
            <t>The metadata server coordinates repair by designating a repair
client according to the rules in <xref target="sec-repair-selection"/>.  The
FFV2_DS_FLAGS_REPAIR flag, when present on a data server in the
layout, identifies the target data server into which reconstructed
shards should be written; it does not by itself identify the
repair client.  The repair sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
affected block range of each data server.  If any lock attempt returns
NFS4ERR_CHUNK_LOCKED, the repair client records the existing lock
holder's chunk_owner4 and proceeds; the lock holder's data is a
candidate for the winning payload.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_READ on all data servers to retrieve
the current payload.  It examines the chunk_owner4 of each shard to
identify which transaction (if any) produced a consistent set across
all k data shards.</t>
              </li>
              <li>
                <t>If a consistent set is found (all k data shards carry the same
chunk_guard4), that payload is the winner.  The repair client issues
CHUNK_WRITE_REPAIR to copy the winner's data to any data servers whose
shard is inconsistent, followed by CHUNK_FINALIZE and CHUNK_COMMIT.</t>
              </li>
              <li>
                <t>If no consistent set exists (all available payloads are partial), the
repair client selects one transaction's payload as authoritative
(typically the one with the most complete set of shards, or the most
recent cg_gen_id) and proceeds as above.</t>
              </li>
              <li>
                <t>After all data servers carry consistent, finalized, committed data, the
repair client issues CHUNK_REPAIRED to clear the errored state and
CHUNK_UNLOCK to release the locks acquired in step 1.</t>
              </li>
              <li>
                <t>The repair client reports success to the metadata server via
LAYOUTRETURN.</t>
              </li>
            </ol>
          </section>
        </section>
        <section anchor="sec-reading-chunks">
          <name>Reading Chunks</name>
<t>The client reads chunks from the data file via CHUNK_READ.  The
number of chunks in the payload that need to be consistent depends
on both the Erasure Coding Type and the level of protection selected.
If the client has enough consistent chunks in the payload, it can
use them to build a data block.  If it does not, it can either
return a LAYOUTERROR of NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server or retry the CHUNK_READ until enough consistent
chunks are available.</t>
<t>Because another client might be writing to the chunks as they are
being read, it is entirely possible to read the chunks while they are
not consistent.  In that case, the inconsistent chunks may in fact
contain the newer data, and a better action than building a data
block from the old chunks is to retry the CHUNK_READ to see whether
the remaining chunks have been overwritten.</t>
        </section>
        <section anchor="whole-file-repair">
          <name>Whole File Repair</name>
          <t>Whole-file repair is the case in which too many data servers have
failed, or too many chunks have been lost, for the per-range repair
flow defined in <xref target="sec-repair-selection"/> to reconstruct the file in
place.  In this case the metadata server <bcp14>MUST</bcp14> either:</t>
          <ol spacing="normal" type="1"><li>
              <t>Construct a new layout backed by replacement data servers and
drive the reconstruction via the <strong>Data Mover</strong> mechanism (a
designated data server acts as the source of truth for client
I/O during the transition, pushing reconstructed content to the
replacement data servers in the background).  The Data Mover
mechanism also covers the non-repair cases where a file's layout
must change while remaining available to clients: policy-driven
layout transitions, data server maintenance evacuation,
administrative ingest, TLS coverage transition, and
filehandle-backend migration.</t>
            </li>
            <li>
              <t>If the metadata server has no data-mover-capable data server
available, or the surviving shards are insufficient to
reconstruct any portion of the file, terminate the affected
byte ranges with NFS4ERR_PAYLOAD_LOST (see
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>).</t>
            </li>
          </ol>
          <t>The Data Mover mechanism is work in progress and is specified in a
companion document.  The current design is maintained at
<eref target="https://github.com/ietf-wg-nfsv4/flexfiles-v2-data-mover">https://github.com/ietf-wg-nfsv4/flexfiles-v2-data-mover</eref>.  Once
the mechanism is submitted as an Internet-Draft, this section will
be updated with a normative reference to that draft.</t>
          <t>Implementations that do not support the Data Mover mechanism can
still perform recovery for cases where per-range repair suffices,
using CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>) and the repair
client selection rules in <xref target="sec-repair-selection"/>.  Such
implementations will surface NFS4ERR_PAYLOAD_LOST on any failure
that exceeds the reach of per-range repair, including the
multi-data-server failure scenarios the Data Mover mechanism is
intended to handle.
        </section>
      </section>
      <section anchor="mixing-of-coding-types">
        <name>Mixing of Coding Types</name>
        <t>Multiple coding types can be present in a Flexible File Version 2
Layout Type layout.  The ffv2_layout4 has an array of ffv2_mirror4,
each of which has a ffv2_coding_type4.  The main reason to allow
for this is to provide for either the assimilation of a non-erasure
coded file to an erasure coded file or the exporting of an erasure
coded file to a non-erasure coded file.</t>
<t>Assume there is an additional ffv2_coding_type4 of FFV2_CODING_REED_SOLOMON
that requires 8 data servers (in this example, 4 active, 2 parity,
and 2 spare).  The user wants to actively assimilate
a regular file.  As such, a layout might be as represented in
<xref target="fig-example_mixing"/>.  As this is an assimilation, most of the
data reads will be satisfied by READ (see Section 18.22 of <xref target="RFC8881"/>)
calls to index 0.  However, as this is also an active file, there
could also be CHUNK_READ (see <xref target="sec-CHUNK_READ"/>) calls to the other
indexes.</t>
        <figure anchor="fig-example_mixing">
          <name>Example of Mixed Coding Types in a Layout</name>
          <artwork><![CDATA[
 +-----------------------------------------------------+
 | ffv2_layout4:                                       |
 +-----------------------------------------------------+
 |     ffl_mirrors[0]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: 0                     |
 |         ffm_coding: FFV2_CODING_MIRRORED            |
 +-----------------------------------------------------+
 |     ffl_mirrors[1]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[1]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[2]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[3]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[4]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[5]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[6]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |             ffv2_data_server4[7]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |     ffm_coding: FFV2_CODING_REED_SOLOMON            |
 +-----------------------------------------------------+
]]></artwork>
        </figure>
<t>When performing I/O via the FFV2_CODING_MIRRORED coding type, the
non-transformed data is used, whereas with other coding types, a
metadata header and transformed block are sent.  Further, when
reading data from the instance files, the client <bcp14>MUST</bcp14> be
prepared for one coding type to supply data and the other type not
to supply data.  That is, the CHUNK_READ calls to the data servers
in mirror 1 might return rlr_eof set to true (see
<xref target="fig-read_chunk4"/>), indicating that there is no data, while
the READ call to the data server in mirror 0 might return eof set to
false, indicating that there is data.  The client <bcp14>MUST</bcp14>
determine whether there is in fact data.  An example use case is the
active assimilation of a file to ensure integrity.  As the client is
helping to translate the file to the new coding scheme, it is
actively modifying the file and might be reading it sequentially in
order to translate it.  The READ calls to mirror 0 would return data,
and the CHUNK_READ calls to mirror 1 would not.  As the client
overwrites the file, both the WRITE and CHUNK_WRITE calls would send
data to all of the data servers.  Finally, if the client reads back a
section that had been modified earlier, both the READ and CHUNK_READ
calls would return data.</t>
      </section>
      <section anchor="sec-rs-encoding">
        <name>Reed-Solomon Vandermonde Encoding (FFV2_ENCODING_RS_VANDERMONDE)</name>
        <section anchor="overview">
          <name>Overview</name>
          <t>Reed-Solomon (RS) codes are Maximum Distance Separable (MDS) codes:
for a (k+m, k) code, any k of the k+m encoded shards suffice to
recover the original data.  The code tolerates the simultaneous loss
of up to m shards.  <xref target="Plank97"/> is a tutorial treatment of RS
coding in RAID-like systems and is the recommended background
reading for implementers unfamiliar with the construction used
here.</t>
        </section>
        <section anchor="galois-field-arithmetic">
          <name>Galois Field Arithmetic</name>
          <t>All RS operations are performed over GF(2^8), the Galois field with
256 elements.  Each element is represented as a byte.</t>
          <dl>
            <dt>Irreducible Polynomial</dt>
            <dd>
              <t>The field is constructed using the irreducible polynomial
x^8 + x^4 + x^3 + x^2 + 1 (0x11d in hexadecimal).  The primitive
element (generator) is g = 2, which has multiplicative order 255.</t>
            </dd>
            <dt>Addition</dt>
            <dd>
              <t>Addition in GF(2^8) is bitwise XOR.</t>
            </dd>
            <dt>Multiplication</dt>
            <dd>
              <t>Multiplication uses log/antilog tables.  For non-zero elements
a and b: a * b = exp(log(a) + log(b)), where the exp table is
doubled to 512 entries to avoid modular reduction on the index sum.</t>
            </dd>
          </dl>
          <t>These are the classical constructions from Berlekamp (1968) and
Peterson &amp; Weldon (1972).  The log/antilog table approach for GF(2^8)
multiplication predates all known patents on SIMD-accelerated GF
arithmetic.  Implementors considering SIMD acceleration of GF(2^8)
operations should be aware of US Patent 8,683,296 (StreamScale),
which covers certain SIMD-based GF multiplication techniques.</t>
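<t>The table construction and multiplication rule described above can
be transcribed directly.  The following non-normative Python sketch
builds the log/antilog tables for polynomial 0x11d with generator
g = 2 and multiplies through the doubled exp table:</t>
<sourcecode type="python"><![CDATA[
```python
# Build the GF(2^8) log/antilog tables for polynomial 0x11d with
# generator g = 2, then multiply via exp[log(a) + log(b)].
POLY = 0x11d

exp = [0] * 512                 # doubled so log(a) + log(b) needs no mod 255
log = [0] * 256
x = 1
for i in range(255):
    exp[i] = x
    log[x] = i
    x <<= 1                     # multiply by the generator g = 2
    if x & 0x100:
        x ^= POLY               # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
for i in range(255, 512):
    exp[i] = exp[i - 255]

def gf_add(a, b):
    return a ^ b                # addition in GF(2^8) is bitwise XOR

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]

# g has multiplicative order 255: g^255 wraps back to 1.
assert x == 1
# 2 * 2^7 = 2^8, which reduces to 0x1d under 0x11d:
assert gf_mul(2, 0x80) == 0x1d
```
]]></sourcecode>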
        </section>
        <section anchor="encoding-matrix">
          <name>Encoding Matrix</name>
          <t>The encoding process uses a (k+m) x k Vandermonde matrix, normalized
so that its top k rows form the identity matrix:</t>
          <ol spacing="normal" type="1"><li>
<t>Construct a (k+m) x k Vandermonde matrix V where V[i][j] = i^j
in GF(2^8); row i holds the successive powers of evaluation point
i, so any k rows of V form a classic Vandermonde matrix and are
linearly independent.</t>
            </li>
            <li>
              <t>Extract the top k x k sub-matrix T from V.</t>
            </li>
            <li>
              <t>Compute T_inv = T^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply: E = V * T_inv.  The result has an identity block on top
(rows 0 through k-1) and the parity generation matrix P on the
bottom (rows k through k+m-1).</t>
            </li>
          </ol>
          <t>The identity block makes the code systematic: data shards pass through
unchanged, and only the parity sub-matrix P is needed during encoding.</t>
        </section>
        <section anchor="encoding">
          <name>Encoding</name>
          <t>Given k data shards, each of shard_len bytes, encoding produces m
parity shards, each also shard_len bytes:</t>
          <artwork><![CDATA[
For each byte position j in [0, shard_len):
  For each parity shard i in [0, m):
    parity[i][j] = sum over s in [0, k) of P[i][s] * data[s][j]
]]></artwork>
          <t>where the sum and product are in GF(2^8).  All shards (data and
parity) are the same size.</t>
        </section>
        <section anchor="decoding">
          <name>Decoding</name>
          <t>When one or more shards are lost (up to m), reconstruction proceeds
by matrix inversion:</t>
          <ol spacing="normal" type="1"><li>
              <t>Select k available shards (from the k+m total).</t>
            </li>
            <li>
              <t>Form a k x k sub-matrix S of the encoding matrix E by selecting the
rows corresponding to the available shards.</t>
            </li>
            <li>
              <t>Compute S_inv = S^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply S_inv by the vector of available shard data at each byte
position to recover the original k data shards.</t>
            </li>
            <li>
              <t>If any parity shards are also missing, regenerate them by
re-encoding from the recovered data shards.</t>
            </li>
          </ol>
          <t>The reconstruction cost is dominated by the matrix inversion, which
is O(k^2) in GF(2^8) multiplications.</t>
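<t>The matrix construction and decode procedure can be exercised end
to end.  The following non-normative Python sketch builds the
systematic encoding matrix (with rows as evaluation points,
V[i][j] = i^j, so that every k-row submatrix is a classic Vandermonde
matrix and therefore invertible), encodes four 8-byte data shards
into two parity shards, erases two data shards, and recovers them by
inverting the surviving rows of E.  The byte-level loops favor
clarity over speed.</t>
<sourcecode type="python"><![CDATA[
```python
# Non-normative RS round trip over GF(2^8), polynomial 0x11d, g = 2.
POLY = 0x11d

exp, log = [0] * 512, [0] * 256
x = 1
for i in range(255):
    exp[i], log[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= POLY                      # reduce modulo the field polynomial
for i in range(255, 512):
    exp[i] = exp[i - 255]              # doubled table: no mod on index sums

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def gf_inv(a):
    return exp[255 - log[a]]

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def mat_mul(A, B):
    out = []
    for row in A:
        out.append([0] * len(B[0]))
        for j in range(len(B[0])):
            acc = 0
            for s, a in enumerate(row):
                acc ^= gf_mul(a, B[s][j])    # GF addition is XOR
            out[-1][j] = acc
    return out

def mat_inv(M):
    # Gauss-Jordan elimination over GF(2^8) on the augmented [M | I].
    k = len(M)
    A = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(M)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        s = gf_inv(A[col][col])
        A[col] = [gf_mul(s, v) for v in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [v ^ gf_mul(f, w) for v, w in zip(A[r], A[col])]
    return [row[k:] for row in A]

def encoding_matrix(k, m):
    V = [[gf_pow(i, j) for j in range(k)] for i in range(k + m)]
    return mat_mul(V, mat_inv(V[:k]))    # E = V * T^(-1): identity on top

k, m, shard_len = 4, 2, 8
E = encoding_matrix(k, m)
data = [[(7 * (shard_len * i + j)) % 256 for j in range(shard_len)]
        for i in range(k)]
parity = [[0] * shard_len for _ in range(m)]
for i in range(m):
    for j in range(shard_len):
        for s in range(k):
            parity[i][j] ^= gf_mul(E[k + i][s], data[s][j])
shards = data + parity

# Lose data shards 1 and 3; any k = 4 surviving rows of E suffice.
avail = [0, 2, 4, 5]
S_inv = mat_inv([E[r][:] for r in avail])
recovered = [[0] * shard_len for _ in range(k)]
for s in range(k):
    for j in range(shard_len):
        for t in range(k):
            recovered[s][j] ^= gf_mul(S_inv[s][t], shards[avail[t]][j])
assert recovered == data
```
]]></sourcecode>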
        </section>
        <section anchor="rs-interoperability-requirements">
          <name>RS Interoperability Requirements</name>
          <t>For two implementations of FFV2_ENCODING_RS_VANDERMONDE to
interoperate, they <bcp14>MUST</bcp14> agree on all of the following parameters.
Any deviation produces a different encoding matrix and renders
data unrecoverable by a different implementation.</t>
          <ul spacing="normal">
            <li>
              <t>Irreducible polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x11d)</t>
            </li>
            <li>
              <t>Primitive element: g = 2</t>
            </li>
            <li>
<t>Vandermonde construction: V[i][j] = i^j in GF(2^8) (evaluation point i raised to power j)</t>
            </li>
            <li>
              <t>Matrix normalization: E = V * (V[0..k-1])^(-1)</t>
            </li>
          </ul>
          <t>These four parameters fully determine the encoding matrix for any
(k, m) configuration.</t>
        </section>
        <section anchor="rs-shard-sizes">
          <name>RS Shard Sizes</name>
          <t>All RS shards (data and parity) are exactly shard_len bytes.  This
simplifies the CHUNK operation protocol: chunk_size is exactly the
shard size for all mirrors.</t>
          <table anchor="tbl-rs-shards">
            <name>RS shard sizes for common configurations</name>
            <thead>
              <tr>
                <th align="left">Configuration</th>
                <th align="left">File Size</th>
                <th align="left">Shard Size</th>
                <th align="left">Total Storage</th>
                <th align="left">Overhead</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">4+2</td>
                <td align="left">4 KB</td>
                <td align="left">1 KB</td>
                <td align="left">6 KB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">4+2</td>
                <td align="left">1 MB</td>
                <td align="left">256 KB</td>
                <td align="left">1.5 MB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">4 KB</td>
                <td align="left">512 B</td>
                <td align="left">5 KB</td>
                <td align="left">25%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">1 MB</td>
                <td align="left">128 KB</td>
                <td align="left">1.25 MB</td>
                <td align="left">25%</td>
              </tr>
            </tbody>
          </table>
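<t>Each table row follows from shard_len = file_size / k (the file
sizes shown divide evenly by k), total storage
shard_len * (k + m), and overhead m / k.  A quick non-normative
check:</t>
<sourcecode type="python"><![CDATA[
```python
def rs_storage(k, m, file_size):
    # Assumes file_size divides evenly by k, as in the table above.
    shard_len = file_size // k
    return shard_len, shard_len * (k + m), m / k

# 4+2 on a 4 KB file: 1 KB shards, 6 KB total, 50% overhead.
assert rs_storage(4, 2, 4 * 1024) == (1024, 6 * 1024, 0.5)
# 8+2 on a 4 KB file: 512 B shards, 5 KB total, 25% overhead.
assert rs_storage(8, 2, 4 * 1024) == (512, 5 * 1024, 0.25)
# 8+2 on a 1 MB file: 128 KB shards, 1.25 MB total.
assert rs_storage(8, 2, 1 << 20) == (128 * 1024, 1310720, 0.25)
```
]]></sourcecode>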
        </section>
      </section>
      <section anchor="sec-mojette-encoding">
        <name>Mojette Transform Encoding (FFV2_ENCODING_MOJETTE_SYSTEMATIC, FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC)</name>
        <section anchor="overview-1">
          <name>Overview</name>
          <t>The Mojette Transform is an erasure coding technique based on discrete
geometry rather than algebraic field operations.  It computes 1D
projections of a 2D grid along selected directions.  Given enough
projections, the original grid can be reconstructed exactly.</t>
          <t>The transform operates on unsigned integer elements using modular
addition.  The element size is an implementation choice: 128-bit
elements leverage SSE SIMD instructions; 64-bit elements are
compatible with NEON and AVX2 vector widths.  No Galois field
operations are required.</t>
        </section>
        <section anchor="grid-structure">
          <name>Grid Structure</name>
          <t>Data is arranged as a P x Q grid of unsigned integer elements,
where P is the number of columns and Q is the number of rows.
For k data shards of S bytes each with W-byte elements:</t>
          <artwork><![CDATA[
P = S / W       (columns per row)
Q = k           (rows = data shards)
]]></artwork>
        </section>
        <section anchor="directions">
          <name>Directions</name>
          <t>A direction is a pair of coprime integers (p_i, q_i).  Implementations
<bcp14>SHOULD</bcp14> use q_i = 1 for all directions <xref target="PARREIN"/>.  For n = k + m total
shards, n directions are generated with non-zero p values symmetric
around zero:</t>
          <ul spacing="normal">
            <li>
              <t>For n = 4: p = {-2, -1, 1, 2}</t>
            </li>
            <li>
              <t>For n = 6: p = {-3, -2, -1, 1, 2, 3}</t>
            </li>
          </ul>
        </section>
        <section anchor="forward-transform-encoding">
          <name>Forward Transform (Encoding)</name>
          <t>For each direction (p_i, q_i), the forward transform computes a 1D
projection.  Each bin sums the grid elements along a discrete line:</t>
          <artwork><![CDATA[
Projection(b, p, q) = SUM over all (row, col) where
                       row * p - col * q + offset = b
                       of Grid[row][col]
]]></artwork>
          <t>The number of bins B in a projection is:</t>
          <artwork><![CDATA[
B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1
]]></artwork>
          <t>For q = 1, this simplifies to:</t>
          <artwork><![CDATA[
B = abs(p) * (Q - 1) + P
]]></artwork>
          <t>The byte size of the projection is B * W.</t>
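<t>The forward transform and the bin-count formula can be checked
directly.  The following non-normative Python sketch assumes
q_i = 1 and uses the line indexing row * p - col * q + offset = b,
which yields exactly B(p, q, P, Q) distinct bins:</t>
<sourcecode type="python"><![CDATA[
```python
def num_bins(p, q, P, Q):
    # B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1
    return abs(p) * (Q - 1) + abs(q) * (P - 1) + 1

def forward(grid, p):
    # q = 1; offset shifts the smallest line index to bin 0.
    Q, P = len(grid), len(grid[0])
    off = -min(r * p - c for r in range(Q) for c in range(P))
    proj = [0] * num_bins(p, 1, P, Q)
    for r in range(Q):
        for c in range(P):
            proj[r * p - c + off] += grid[r][c]   # modular add for W-byte elems
    return proj

# For q = 1 the formula collapses to abs(p) * (Q - 1) + P:
assert [num_bins(p, 1, 512, 4) for p in (1, 2, 3)] == [515, 518, 521]

# Every grid element lands in exactly one bin, so mass is conserved:
grid = [[r * 11 + c for c in range(8)] for r in range(4)]
assert sum(forward(grid, 2)) == sum(map(sum, grid))
```
]]></sourcecode>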
        </section>
        <section anchor="katz-reconstruction-criterion">
          <name>Katz Reconstruction Criterion</name>
          <t>Reconstruction is possible if and only if the Katz criterion
<xref target="KATZ"/> holds:</t>
          <artwork><![CDATA[
SUM(i=1..n) |q_i| >= Q    OR    SUM(i=1..n) |p_i| >= P
]]></artwork>
          <t>When all q_i = 1, the q-sum simplifies to n &gt;= Q.</t>
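<t>A direct, non-normative transcription of the criterion:</t>
<sourcecode type="python"><![CDATA[
```python
def katz_ok(dirs, P, Q):
    # Reconstruction is possible iff
    # SUM |q_i| >= Q  or  SUM |p_i| >= P.
    return (sum(abs(q) for _, q in dirs) >= Q or
            sum(abs(p) for p, _ in dirs) >= P)

# With all q_i = 1 the q-sum is just the number of projections n:
assert katz_ok([(-2, 1), (-1, 1), (1, 1), (2, 1)], 512, 4)   # n = 4 >= Q = 4
assert not katz_ok([(-1, 1), (1, 1)], 512, 4)                # n = 2 < Q
```
]]></sourcecode>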
        </section>
        <section anchor="inverse-transform-decoding">
          <name>Inverse Transform (Decoding)</name>
          <t>The inverse uses the corner-peeling algorithm:</t>
          <ol spacing="normal" type="1"><li>
              <t>Count how many unknown elements contribute to each bin.</t>
            </li>
            <li>
              <t>Find any bin with exactly one contributor (singleton).</t>
            </li>
            <li>
              <t>Recover the element, subtract from all projections.</t>
            </li>
            <li>
              <t>Repeat until all elements are recovered.</t>
            </li>
          </ol>
          <t>The algorithm is O(n * P * Q).</t>
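<t>The corner-peeling loop can be exercised on a small grid.  The
sketch below is non-normative; it repeats the forward transform for
self-containment, assumes q_i = 1 directions, and uses Python
integers in place of fixed-width modular arithmetic (the peeling
subtraction works identically modulo 2^W):</t>
<sourcecode type="python"><![CDATA[
```python
from collections import deque

def _offset(p, P, Q):
    # Smallest offset making every bin index non-negative (q = 1).
    return -min(r * p - c for r in range(Q) for c in range(P))

def forward(grid, p):
    Q, P = len(grid), len(grid[0])
    off = _offset(p, P, Q)
    proj = [0] * (abs(p) * (Q - 1) + P)
    for r in range(Q):
        for c in range(P):
            proj[r * p - c + off] += grid[r][c]
    return proj

def peel(projs, ps, P, Q):
    """Corner-peeling inverse for q_i = 1 directions ps."""
    res = {p: list(projs[p]) for p in ps}        # residual bin sums
    offs = {p: _offset(p, P, Q) for p in ps}
    unknown = {(r, c) for r in range(Q) for c in range(P)}
    counts = {p: [0] * len(res[p]) for p in ps}  # unknowns per bin
    for p in ps:
        for r, c in unknown:
            counts[p][r * p - c + offs[p]] += 1
    grid = [[0] * P for _ in range(Q)]
    todo = deque((p, b) for p in ps
                 for b, n in enumerate(counts[p]) if n == 1)
    while unknown:
        p, b = todo.popleft()
        if counts[p][b] != 1:
            continue                              # stale queue entry
        # Locate the single remaining unknown cell on this line.
        r, c = next((r, r * p + offs[p] - b) for r in range(Q)
                    if (r, r * p + offs[p] - b) in unknown)
        grid[r][c] = res[p][b]                    # bin holds exactly this element
        unknown.remove((r, c))
        for p2 in ps:                             # subtract from every projection
            b2 = r * p2 - c + offs[p2]
            res[p2][b2] -= grid[r][c]
            counts[p2][b2] -= 1
            if counts[p2][b2] == 1:
                todo.append((p2, b2))
    return grid

P, Q = 6, 3
ps = [-2, -1, 1]                                  # n = 3 >= Q satisfies Katz
grid0 = [[(5 * r + 3 * c) % 17 for c in range(P)] for r in range(Q)]
projs = {p: forward(grid0, p) for p in ps}
assert peel(projs, ps, P, Q) == grid0
```
]]></sourcecode>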
        </section>
        <section anchor="systematic-mojette">
          <name>Systematic Mojette</name>
          <t>In the systematic form (FFV2_ENCODING_MOJETTE_SYSTEMATIC), the first
k shards are the original data rows and the remaining m shards are
projections.  Healthy reads require no decoding.</t>
          <t>Reconstruction of missing data rows proceeds via the
corner-peeling algorithm of <xref target="NORMAND"/>:</t>
          <ol spacing="normal" type="1"><li>
              <t>Load available parity projections.</t>
            </li>
            <li>
              <t>Subtract contributions of present data rows (residual).</t>
            </li>
            <li>
              <t>Corner-peel the residual to recover missing rows.</t>
            </li>
          </ol>
          <t>Reconstruction cost is O(m * k) -- a fundamental advantage over RS
at wide geometries (k &gt;= 8).</t>
        </section>
        <section anchor="non-systematic-mojette">
          <name>Non-Systematic Mojette</name>
          <t>In the non-systematic form (FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC),
all k + m shards are projections.  Every read requires the full
inverse transform.  This provides constant performance regardless of
failure count, but at higher baseline read cost than systematic.</t>
        </section>
        <section anchor="mojette-shard-sizes">
          <name>Mojette Shard Sizes</name>
          <t>Unlike RS, Mojette parity shard sizes vary by direction:</t>
          <table anchor="tbl-mojette-proj-sizes">
            <name>Mojette projection sizes for 4+2, 4KB shards, 64-bit elements</name>
            <thead>
              <tr>
                <th align="left">Direction (p, q)</th>
                <th align="left">Bins (B) for P=512, Q=4</th>
                <th align="left">Size (bytes, 64-bit elements)</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">(-3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
              <tr>
                <td align="left">(-2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(-1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
            </tbody>
          </table>
          <t>When using CHUNK operations, the chunk_size is a nominal stride; the
last chunk in a parity shard <bcp14>MAY</bcp14> be shorter than the stride.</t>
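<t>The byte sizes in the table follow mechanically from the q = 1
bin-count formula.  A non-normative check, assuming a 4 KB shard of
64-bit elements (so P = 4096 / 8 = 512 columns) and Q = 4 rows:</t>
<sourcecode type="python"><![CDATA[
```python
def proj_size_bytes(p, P, Q, W):
    bins = abs(p) * (Q - 1) + (P - 1) + 1   # q = 1
    return bins * W

P, Q, W = 4096 // 8, 4, 8                   # 512 columns, 4 rows, 8-byte elements
assert [proj_size_bytes(p, P, Q, W) for p in (1, 2, 3)] == [4120, 4144, 4168]
# Sizes are symmetric in the sign of p:
assert proj_size_bytes(-3, P, Q, W) == proj_size_bytes(3, P, Q, W)
```
]]></sourcecode>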
        </section>
      </section>
      <section anchor="comparison-of-encoding-types">
        <name>Comparison of Encoding Types</name>
        <table anchor="tbl-encoding-comparison">
          <name>Comparison of erasure encoding types</name>
          <thead>
            <tr>
              <th align="left">Property</th>
              <th align="left">Reed-Solomon</th>
              <th align="left">Mojette Systematic</th>
              <th align="left">Mojette Non-Systematic</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">MDS guarantee</td>
              <td align="left">Yes</td>
              <td align="left">Yes (Katz)</td>
              <td align="left">Yes (Katz)</td>
            </tr>
            <tr>
              <td align="left">Shard sizes</td>
              <td align="left">Uniform</td>
              <td align="left">Variable</td>
              <td align="left">Variable</td>
            </tr>
            <tr>
              <td align="left">Reconstruction cost</td>
              <td align="left">O(k^2)</td>
              <td align="left">O(m * k)</td>
              <td align="left">O(m * k)</td>
            </tr>
            <tr>
              <td align="left">Healthy read cost</td>
              <td align="left">Zero</td>
              <td align="left">Zero</td>
              <td align="left">Full decode</td>
            </tr>
            <tr>
              <td align="left">GF operations</td>
              <td align="left">Yes (GF(2^8))</td>
              <td align="left">No</td>
              <td align="left">No</td>
            </tr>
            <tr>
              <td align="left">Recommended k</td>
              <td align="left">k &lt;= 6</td>
              <td align="left">k &gt;= 4</td>
              <td align="left">Archive only</td>
            </tr>
          </tbody>
        </table>
        <t>At small k (k &lt;= 6), RS is the conservative choice with uniform shard
sizes.  At wider geometries (k &gt;= 8), systematic Mojette offers lower
reconstruction cost.  Non-systematic Mojette is suitable only for
archive workloads where reads are infrequent.</t>
      </section>
      <section anchor="sec-spare-substitution">
        <name>First-Line Substitution to a Spare</name>
        <t>When a client's CHUNK_WRITE to an FFV2_DS_FLAGS_ACTIVE data server
fails with a transport-level error, NFS4ERR_IO, NFS4ERR_NOSPC, or
any other code that indicates the data server cannot accept the
shard, and the layout includes a data server flagged
FFV2_DS_FLAGS_SPARE (<xref target="sec-ffv2_ds_flags4"/>) that is not already
holding a shard for the affected payload, the client <bcp14>MAY</bcp14> substitute
the spare for the failing active data server for this write.</t>
        <t>Substitution avoids the full metadata-server repair flow.  The
client issues CHUNK_WRITE to the spare in place of the failing
ACTIVE and, if successful, proceeds with CHUNK_FINALIZE and
CHUNK_COMMIT against the full set of data servers the payload
now resides on (the k-1 healthy ACTIVE plus the substituted
SPARE).  The spare becomes the i-th shard holder for the
affected payload.</t>
        <t>The client <bcp14>MUST</bcp14> inform the metadata server of the substitution
before returning the layout.  This is done via LAYOUTERROR on
the failing ACTIVE (reporting the error code the client
encountered) in the same compound as, or before, any
LAYOUTSTATS reporting of the substitution.  The metadata server
uses the LAYOUTERROR to decide whether to update the layout in
place -- promoting the spare to ACTIVE and demoting the failing
ACTIVE to a stale-or-unreachable state -- or to push new
layouts via CB_RECALL_ANY to other clients so readers do not
continue to consult the failing ACTIVE.</t>
        <t>Substitution is optional.  A client that does not implement it,
or does not have a suitable spare in the layout, falls through
to the normal write-hole handling below.  Substitution is also
not available to clients writing with cwa_stable == FILE_SYNC
unless the client is prepared to drive FILE_SYNC semantics on
the spare as well; otherwise the substitution silently
downgrades the durability contract.</t>
        <t>Substitution <bcp14>MUST NOT</bcp14> be used when the existing PENDING state
on any shard of the affected payload carries a different
chunk_guard4 than the current transaction (the range has been
adopted by a repair client already -- the normal repair flow
applies and substitution would collide).</t>
      </section>
      <section anchor="handling-write-holes">
<name>Handling Write Holes</name>
        <t>A write hole occurs when a client begins writing a stripe but does not
successfully write all k+m shards before a failure.  Some data servers
will hold new data while others still hold old data, producing an
inconsistent payload.</t>
        <t>The CHUNK_WRITE / CHUNK_ROLLBACK mechanism addresses this.  When a client
issues CHUNK_WRITE, the data server retains a copy of the previous shard
and places the new data in the PENDING state.  If any shard write fails,
the client issues CHUNK_ROLLBACK to each data server that received a
CHUNK_WRITE, restoring the previous content.  The payload remains
consistent from the reader's perspective throughout, because PENDING
blocks carry the new chunk_guard4 value and CHUNK_READ returns the last
COMMITTED or FINALIZED block when a PENDING block exists.</t>
        <t>A single-shard CHUNK_WRITE failure <bcp14>MAY</bcp14> also be handled without
CHUNK_ROLLBACK by substituting the failing data server with an
FFV2_DS_FLAGS_SPARE, per <xref target="sec-spare-substitution"/>.  This
avoids engaging the metadata server's repair flow and is the
preferred path on transient single-DS failures when the layout
exposes a suitable spare.</t>
        <t>In the multiple writer model, a write hole can also arise when two clients
are racing.  The chunk_guard4 value on each shard identifies which
transaction wrote it.  A reader that finds shards with different guard
values detects the inconsistency and either retries (if a concurrent write
is still in progress) or reports NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server to trigger repair.</t>
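<t>The reader-side detection can be sketched as follows; this is a
non-normative illustration in which guard values are simplified to
integers and the function names are hypothetical.</t>
<figure anchor="fig-sketch-guard-read">
<name>Sketch of reader-side guard-consistency detection</name>
<sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of the reader-side guard check described
# above; function names and return conventions are hypothetical.

def read_stripe(fetch_shards, max_retries=3):
    """fetch_shards() returns one (payload, guard) pair per shard.
    Mismatched guards mean a concurrent write may still be in
    flight: retry, then report to the metadata server."""
    for _ in range(max_retries):
        shards = fetch_shards()
        if len({g for (_, g) in shards}) == 1:   # guards agree
            return ("NFS4_OK", [p for (p, _) in shards])
    return ("NFS4ERR_PAYLOAD_NOT_CONSISTENT", None)

# A racing write settles on the second attempt:
attempts = [
    [(b"new", 8), (b"old", 7)],   # inconsistent: write in flight
    [(b"new", 8), (b"new", 8)],   # consistent
]
status, payloads = read_stripe(lambda: attempts.pop(0))
```
]]></sourcecode>
</figure>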
        <t>When substitution and CHUNK_ROLLBACK are both unavailable, and
the payload cannot be reconstructed because too many shards have
been lost (for example, a catastrophic multi-DS failure with no
spares provisioned), the repair flow ultimately terminates with
NFS4ERR_PAYLOAD_LOST; see
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>.</t>
      </section>
    </section>
    <section anchor="sec-system-model">
      <name>System Model and Correctness</name>
      <t>The design decisions in this document -- centralized coordination
through the metadata server, CAS semantics via chunk_guard4,
pessimistic lock escrow during repair, and erasure-coded reads
from any sufficient subset -- depart visibly from a classical
distributed-consensus protocol such as Paxos or Raft.  This
section states the system model those decisions rest on, the
consistency and progress guarantees the protocol provides under
that model, and how the protocol relates to (and when it relies
on) classical consensus.  It is intended as the correctness
framing for implementers and reviewers; the normative wire
behavior is defined in the preceding sections.</t>
      <section anchor="sec-system-model-wire">
        <name>Wire Semantics vs Implementation</name>
        <t>The protocol defines wire semantics, not data-server
implementation.  The operations introduced in
<xref target="sec-new-ops"/> (CHUNK_WRITE, CHUNK_FINALIZE, CHUNK_COMMIT,
CHUNK_ROLLBACK, CHUNK_LOCK / CHUNK_UNLOCK, CHUNK_READ,
CHUNK_REPAIRED, CHUNK_ERROR, CHUNK_HEADER_READ,
CHUNK_WRITE_REPAIR) together with the per-chunk state machine
(<xref target="sec-system-model-chunk-state"/>) and the chunk_guard4 CAS
(<xref target="sec-chunk_guard4"/>) are the entire surface a peer observes.
The data server's internal representation of persistent state is
not exposed on the wire, and two data-server implementations
that satisfy the same wire semantics <bcp14>MAY</bcp14> differ arbitrarily in
their internal structure.</t>
        <t>In particular, the protocol does NOT exchange:</t>
        <ul spacing="normal">
          <li>
            <t>which on-disk layout (log-structured, append-only,
in-place-overwrite, external object store, key-value store,
or any other) a data server uses to persist chunks;</t>
          </li>
          <li>
            <t>whether a data server holds PENDING and FINALIZED chunks in
a single blob or in distinct regions;</t>
          </li>
          <li>
            <t>how a data server represents the CHUNK_LOCK table, the guard
epoch, or the escrow owner;</t>
          </li>
          <li>
            <t>whether a data server's chunk retention beyond COMMIT is
implemented via shadow blocks, journals, reference counts,
or copy-on-write.</t>
          </li>
        </ul>
        <t>This decoupling is deliberate.  It lets the protocol accommodate
future smart-DS designs -- including designs that integrate more
closely with storage back-ends that already provide atomic
replace, multi-version concurrency, or internal erasure coding --
without protocol revisions, provided the wire semantics are
preserved.  Conversely, a data server implementer is free to
pick the representation that best fits the underlying storage
stack without fear that some less common implementation choice
is disallowed.</t>
        <t>The counterpart of this rule is that the wire is the entire
contract.  Any behavior a client relies on <bcp14>MUST</bcp14> be observable
via the operations listed above; any behavior that is not
observable (cache state, background scrubbing cadence,
internal retry ordering, on-disk layout) is implementation
detail and <bcp14>MUST NOT</bcp14> be depended upon.</t>
      </section>
      <section anchor="sec-system-model-roles">
        <name>Actors and Roles</name>
        <t>For any given file, three actors participate:</t>

        <dl>
          <dt>pNFS client:</dt>
          <dd>
            <t>Issues CHUNK operations to data servers over the data path;
issues LAYOUTGET, LAYOUTRETURN, LAYOUTERROR, and SEQUENCE to
the metadata server on the control path.  Authenticates to the
metadata server via AUTH_SYS, RPCSEC_GSS, or TLS.  <bcp14>MAY</bcp14> be
selected as a repair client via CB_CHUNK_REPAIR.</t>
          </dd>
          <dt>Metadata server (MDS):</dt>
          <dd>
            <t>Is the sole coordinator for the file.  Grants, renews, and
revokes layouts; issues TRUST_STATEID / REVOKE_STATEID /
BULK_REVOKE_STATEID to each tight-coupled data server; selects
the repair client under the rules in
<xref target="sec-repair-selection"/>; owns the reserved
CHUNK_GUARD_CLIENT_ID_MDS escrow identity for in-flight repair.</t>
          </dd>
          <dt>Data server (DS):</dt>
          <dd>
            <t>Persists chunks and enforces the per-file trust table, the
per-chunk guard CAS (chunk_guard4), the per-chunk lock state
(including the MDS-escrow owner), and the chunk state machine
(EMPTY / PENDING / FINALIZED / COMMITTED).  Has no
coordinator role.  Has no knowledge of the erasure coding type
in use for any file: the erasure transform is performed
entirely at the client, and the data server stores the
resulting chunks without interpreting their contents.</t>
          </dd>
        </dl>
        <t>The protocol does NOT mandate how a data server implements the
chunk state machine or stores PENDING chunks.  An implementation
<bcp14>MAY</bcp14> use per-client staging files, a single append-only instance
file with an index, a separate metadata-header file paired with
a blocks file, a log-structured store, or any other
representation that preserves the normative semantics (the
EMPTY / PENDING / FINALIZED / COMMITTED transitions, the
chunk_guard4 CAS, lock continuity across revocation, and the
integrity checks).  The choice is a data-server implementation
concern and is transparent to clients and the metadata server.</t>
        <t>Each file is owned by exactly one metadata server at any given
instant.  Ownership transfer between metadata servers (for
example, during MDS failover) is implementation-defined and out
of scope for this document; see <xref target="sec-system-model-consensus"/>.</t>
      </section>
      <section anchor="sec-system-model-failures">
        <name>Failure Model</name>
        <t>The protocol assumes:</t>
        <dl>
          <dt>Crash-stop:</dt>
          <dd>
            <t>Clients, metadata servers, and data servers fail by stopping.
A restarted component rejoins the protocol with a fresh epoch
and participates in the grace / reclaim path already defined
in <xref target="RFC8881"/>.  Correct components do not exhibit arbitrary
(Byzantine) behavior.</t>
          </dd>
          <dt>Fail-silent data servers:</dt>
          <dd>
            <t>Data servers report honestly about the state of the data they
hold.  The protocol detects on-disk bit rot via CRC32
(see <xref target="sec-CHUNK_WRITE"/>) but does not defend against a data
server that deliberately lies about whether a chunk is
COMMITTED or what its contents are.  Byzantine data servers
are explicitly outside the trust model; see
<xref target="sec-system-model-nongoals"/>.</t>
          </dd>
          <dt>Authenticated writers and their own data:</dt>
          <dd>
            <t>An authenticated client may write arbitrary (even
semantically-invalid) bytes into chunks it owns.  The CRC32
check detects transport corruption, not adversarial content.
This matches the existing NFSv4 authorization model: once
you have write access, you may write anything.</t>
          </dd>
          <dt>Network partitions:</dt>
          <dd>
            <t>The protocol is partition-tolerant at the cost of availability
during the partition window.  A client partitioned from a
data server recovers via LAYOUTERROR and may be issued a new
layout (possibly against a spare, see
<xref target="sec-spare-substitution"/>).  An MDS partitioned from a data
server eventually renews trust entries on reconnection; in
the interim, the data server returns NFS4ERR_DELAY for
affected stateids (see <xref target="sec-tight-coupling-mds-crash"/>).
Message loss is bounded by RPC retransmit; eventual delivery
is assumed once the partition heals.
</t>
            <t>Split-brain scenarios (in which a partitioned minority of
the data servers in a mirror set attempts to make progress
independently of the majority) cannot drive inconsistent
writes to COMMITTED state.  The chunk_guard4 CAS on each
write requires the guard value of a successor chunk to
strictly exceed the guard value of its predecessor; on
partition heal, any writes attempted on the minority side
are detected by the majority because their guard values do
not satisfy the CAS precondition, and those writes are
discarded.  When reconciliation is impossible -- for example,
the erasure code has lost too many shards across both sides
of the partition to reconstruct any single consistent
generation -- the repair flow terminates with
NFS4ERR_PAYLOAD_LOST (see <xref target="sec-NFS4ERR_PAYLOAD_LOST"/>),
which is terminal for the affected ranges.</t>
          </dd>
          <dt>Lease bound:</dt>
          <dd>
            <t>All state held by a data server on behalf of a metadata server
is bounded by the TRUST_STATEID expiry (see
<xref target="sec-tight-coupling-lease"/>).  An orphaned entry will
eventually expire even if the metadata server never returns.</t>
          </dd>
        </dl>
      </section>
      <section anchor="sec-system-model-chunk-state">
        <name>Chunk State Machine</name>
        <t>Each chunk on a data server occupies exactly one of four states.
The transitions below are the complete set; any implementation
of the data server's chunk state table <bcp14>MUST</bcp14> admit these
transitions and no others.</t>
        <figure anchor="fig-chunk-state-machine">
          <name>Chunk lifecycle on the data server</name>
          <artwork><![CDATA[
                       CHUNK_WRITE
                    (fresh cg_gen_id)
      +---------+ ------------------> +-----------+
      |  EMPTY  |                     |  PENDING  |
      +---------+ <------------------ +-----------+
           ^        CHUNK_ROLLBACK        |  ^
           |       (discard PENDING)      |  | CHUNK_WRITE
           |                              |  | (replace PENDING,
           |                              |  |  same writer, same
           |                              |  |  cg_gen_id)
           |                              |
           |           CHUNK_FINALIZE     |
           |          (writer stops       |
           |           further writes)    |
           |                              v
           |                       +-------------+
           |        CHUNK_ROLLBACK |  FINALIZED  |
           |       (discard        +-------------+
           |        FINALIZED)           |
           |                             | CHUNK_COMMIT
           |                             |  (make durable and
           |                             |   globally visible)
           |                             v
           |                       +-------------+
           + <-------------------- |  COMMITTED  |
                CHUNK_ROLLBACK     +-------------+
             (only via repair;          |
              replaces with a newer     | CHUNK_WRITE with a higher
              COMMITTED generation      | cg_gen_id begins a new
              or discards per the       | PENDING successor;
              rollback invariant)       | the prior COMMITTED is
                                        | retained until its
                                        | successor is COMMITTED
                                        | (see the rollback
                                        v  invariant below)
                                  (next PENDING
                                   against same chunk)
]]></artwork>
        </figure>
        <t>States:</t>
        <dl>
          <dt>EMPTY:</dt>
          <dd>
            <t>The chunk has no payload.  CHUNK_READ returns a zero-filled
result; CHUNK_WRITE against an EMPTY chunk is the first write.</t>
          </dd>
          <dt>PENDING:</dt>
          <dd>
            <t>The chunk has payload accepted by CHUNK_WRITE but not yet
finalized.  Not visible to CHUNK_READ (see
<xref target="sec-system-model-consistency"/>).  Further CHUNK_WRITEs from
the same writer <bcp14>MAY</bcp14> replace the payload in place (same
cg_gen_id).</t>
          </dd>
          <dt>FINALIZED:</dt>
          <dd>
            <t>The writer has signaled via CHUNK_FINALIZE that it will send
no more CHUNK_WRITEs for this generation.  Still not visible
to CHUNK_READ, but a candidate for CHUNK_COMMIT.</t>
          </dd>
          <dt>COMMITTED:</dt>
          <dd>
            <t>The chunk is durable and globally visible.  Subsequent
CHUNK_READs return this content until a newer COMMITTED
generation replaces it.  A higher-generation PENDING successor
<bcp14>MAY</bcp14> exist concurrently; the rollback invariant in
<xref target="sec-system-model-consistency"/> requires the data server to
retain the COMMITTED content while that successor exists.</t>
          </dd>
        </dl>
        <t>Transitions are driven by the operations named on the arrows.
CHUNK_ROLLBACK against a COMMITTED chunk is used only on the
repair path (see <xref target="sec-CHUNK_ROLLBACK"/>) and replaces the chunk
with a newer COMMITTED generation chosen by the repair client,
rather than returning the chunk to EMPTY.</t>
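<t>The transition set above can be encoded as a table; the sketch
below is illustrative only.  A data server is free to represent the
state machine however it likes, provided it admits exactly these
transitions.  The modeling of the COMMITTED row is a simplification:
a CHUNK_WRITE against a COMMITTED chunk begins a PENDING successor
while the prior COMMITTED generation is retained per the rollback
invariant.</t>
<figure anchor="fig-sketch-state-table">
<name>Sketch of the chunk transition table</name>
<sourcecode type="python"><![CDATA[
```python
# Illustrative encoding of the transition table in the figure
# above; names are hypothetical, not protocol elements.

ALLOWED = {
    ("EMPTY",     "CHUNK_WRITE"):    "PENDING",
    ("PENDING",   "CHUNK_WRITE"):    "PENDING",    # same writer/gen
    ("PENDING",   "CHUNK_ROLLBACK"): "EMPTY",
    ("PENDING",   "CHUNK_FINALIZE"): "FINALIZED",
    ("FINALIZED", "CHUNK_ROLLBACK"): "EMPTY",
    ("FINALIZED", "CHUNK_COMMIT"):   "COMMITTED",
    ("COMMITTED", "CHUNK_WRITE"):    "PENDING",    # higher cg_gen_id;
                                                   # prior COMMITTED
                                                   # retained
    ("COMMITTED", "CHUNK_ROLLBACK"): "COMMITTED",  # repair path only
}

def step(state, op):
    """Advance the per-chunk state machine; reject anything else."""
    try:
        return ALLOWED[(state, op)]
    except KeyError:
        raise ValueError(f"illegal transition: {op} in {state}")
```
]]></sourcecode>
</figure>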
      </section>
      <section anchor="sec-system-model-consistency">
        <name>Consistency Guarantees</name>
        <t>The protocol provides <strong>per-chunk linearizability on COMMITTED
state</strong>:</t>
        <ol spacing="normal" type="1"><li>
            <t>Once CHUNK_COMMIT returns success to a writer for a given
chunk, every subsequent CHUNK_READ whose stateid postdates
the COMMIT observes either that writer's data or the data of
a later committed write.  A reader <bcp14>MUST NOT</bcp14> observe a
rolled-back write as if it had committed.</t>
          </li>
          <li>
            <t>Concurrent writers on the same chunk in multi-writer mode
serialize via chunk_guard4.  On guard conflict one writer
succeeds; the other receives NFS4ERR_CHUNK_GUARDED and <bcp14>MUST</bcp14>
either abandon the write or re-read and retry.  At most one
generation becomes COMMITTED per serialized decision.</t>
          </li>
          <li>
            <t>During repair, the chunk's lock is held continuously -- first
by the original writer, then transferred to the MDS-escrow
owner on REVOKE_STATEID, and finally adopted by the repair
client via CHUNK_LOCK_FLAGS_ADOPT.  No writer that did not
hold the lock may observe or mutate the chunk.  The
invariant "a chunk with a live lock has exactly one logical
owner at any instant" is preserved across revocation.</t>
          </li>
        </ol>
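<t>The per-chunk CAS in rule 2 can be sketched as follows.  This is
a non-normative illustration: the guard is simplified to an integer
generation, and the three-phase PENDING / FINALIZED / COMMITTED flow
is collapsed into a single step for brevity.</t>
<figure anchor="fig-sketch-guard-cas">
<name>Sketch of the chunk_guard4 CAS evaluated at the data server</name>
<sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of the guard CAS the data server evaluates
# locally; names are hypothetical.  The normative comparison rule
# is defined with chunk_guard4; this sketch reduces it to an
# integer generation and elides FINALIZE/COMMIT for brevity.

class Chunk:
    def __init__(self):
        self.guard = 0
        self.committed = b""

    def chunk_write(self, expected_guard, new_guard, data):
        """CAS: accept only if the caller saw the current guard."""
        if expected_guard != self.guard:
            return "NFS4ERR_CHUNK_GUARDED"   # stale writer loses
        self.guard = new_guard
        self.committed = data
        return "NFS4_OK"

c = Chunk()
first = c.chunk_write(0, 1, b"A")    # winner of the race
second = c.chunk_write(0, 1, b"B")   # loser: guard already advanced
```
]]></sourcecode>
</figure>
<t>The losing writer is expected to re-read and retry on top of the
winning generation, as rule 2 requires.</t>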
        <t>Across multiple chunks the protocol makes <strong>no multi-chunk
atomicity or ordering guarantee</strong>.  A reader that reads chunk A
at one offset and chunk B at another <bcp14>MAY</bcp14> observe A's new value
and B's old value simultaneously.  Applications that require
multi-chunk atomicity <bcp14>MUST</bcp14> layer it above this protocol -- for
example, via file-level checksums, application-level generation
fields, or external transaction managers.</t>
        <t><strong>The chunk is the unit of atomicity.</strong>  Two properties follow:</t>
        <ol spacing="normal" type="1"><li>
            <t>Chunk-aligned writes do not interfere.  Two concurrent
writers whose writes cover disjoint chunks -- even writes
that cover adjacent chunks -- never race.  Each write
terminates independently at COMMITTED per the per-chunk
linearizability rule above.</t>
          </li>
          <li>
            <t>Sub-chunk overlapping writes from different writers
produce chunk-resolution-granularity contention.  When two
concurrent writers target overlapping byte ranges within a
single chunk, chunk_guard4 resolves them: one writer's
entire chunk-generation wins and becomes COMMITTED; the
other writer sees NFS4ERR_CHUNK_GUARDED and is expected to
re-read and retry if it wishes to apply its change on top
of the winning generation (see
<xref target="sec-NFS4ERR_CHUNK_GUARDED"/>).  The protocol does NOT
produce byte-level merges of overlapping sub-chunk writes:
the losing writer's bytes are not preserved as a partial
update within the winning generation.</t>
          </li>
        </ol>
        <t>Applications that require byte-level write merging or sub-chunk
ordering guarantees <bcp14>MUST</bcp14> serialize such writes externally, for
example via NFSv4 byte-range locks (<xref target="RFC8881"/>, Section 12).
The chunk size that bounds the atomicity unit for a given file
is the product of ffm_striping_unit_size and the stripe width
W in <xref target="fig-striping-math"/>; applications can query
fattr4_coding_block_size (see <xref target="sec-fattr4_coding_block_size"/>)
to learn the effective chunk size and align their writes
accordingly.</t>
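<t>A worked example, with assumed (non-normative) values for the
layout parameters:</t>
<figure anchor="fig-sketch-chunk-size">
<name>Worked example of the effective chunk size</name>
<sourcecode type="python"><![CDATA[
```python
# Worked example with assumed values: an ffm_striping_unit_size of
# 64 KiB and a stripe width W of 8 give the effective chunk size
# that bounds the atomicity unit.  The helper name is illustrative.

ffm_striping_unit_size = 64 * 1024   # assumed, bytes
stripe_width_W = 8                   # assumed

coding_block_size = ffm_striping_unit_size * stripe_width_W  # 512 KiB

def is_chunk_aligned(offset, length):
    """Chunk-aligned writes never contend with writes covering
    disjoint chunks (see rule 1 above)."""
    return (offset % coding_block_size == 0 and
            length % coding_block_size == 0)
```
]]></sourcecode>
</figure>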
        <t>This choice -- chunk-boundary atomicity rather than stripe- or
block-boundary atomicity -- is load-bearing for the rest of the
consistency story: the chunk_guard4 CAS evaluates at the chunk
level, the PENDING / FINALIZED / COMMITTED state machine is per
chunk, CHUNK_LOCK is per chunk, and repair via CB_CHUNK_REPAIR
operates on chunks.  A different atomicity boundary would
require redefining those primitives, which this revision does
not.</t>
        <dl>
          <dt>Erasure-coded reads:</dt>
          <dd>
            <t>A reader of an erasure-coded file reconstructs the plaintext
from any k shards of the (k+m)-shard
stripe; the guard values on those shards <bcp14>MUST</bcp14> agree.  Shards
with stale guards are ignored.  This is not a quorum read in
the Paxos sense -- there is no voting on a value; there is
only reconstruction of the single value identified by the
current guard.</t>
          </dd>
          <dt>Rollback invariant:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> retain the prior FINALIZED or COMMITTED
content of a chunk while any successor PENDING chunk exists.
A corollary of this rule is the <strong>lowest-guard-recoverable</strong>
property: as long as at least k data servers in the mirror
set retain the chunk at some generation G or lower, the
payload that was COMMITTED at generation G (or earlier) can
be reconstructed.  This is the correctness basis for
CHUNK_ROLLBACK (see <xref target="sec-CHUNK_ROLLBACK"/>): rollback does not
synthesize data, it simply selects the lowest-generation
chunks whose guards agree across the mirror set and discards
the higher-generation PENDING or FINALIZED chunks that
triggered the rollback.  The protocol never relies on locating
or reconstructing data from outside the mirror set.</t>
          </dd>
          <dt>Visibility of non-committed state:</dt>
          <dd>
            <t>PENDING and FINALIZED chunks <bcp14>MUST NOT</bcp14> be globally visible.
CHUNK_READ returns only COMMITTED content; a CHUNK_READ whose
target chunk is currently PENDING or FINALIZED sees the
predecessor COMMITTED content (or an EMPTY chunk if none
exists), not the in-progress successor.  A writer observing
its own PENDING or FINALIZED chunk <bcp14>MAY</bcp14> receive the in-progress
content on the same stateid that produced it, but no other
stateid -- on the same or a different client -- sees it.
The retention window that makes the prior COMMITTED content
available to CHUNK_READ and to CHUNK_ROLLBACK is itself
bounded; see <xref target="sec-system-model-retention-scope"/> for the
normative scoping rule.</t>
          </dd>
        </dl>
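<t>The erasure-coded read rule can be sketched with the smallest
possible code: a single XOR parity (k = 2, m = 1) standing in for
whatever erasure code the layout specifies, and integer guards
standing in for chunk_guard4.  All names are hypothetical and the
sketch is non-normative.</t>
<figure anchor="fig-sketch-ec-read">
<name>Sketch of reconstruction from k of k+m shards</name>
<sourcecode type="python"><![CDATA[
```python
# Minimal illustrative sketch of "reconstruct from any k of the
# k+m shards with matching guards", using single XOR parity
# (k = 2, m = 1) as a stand-in for the negotiated erasure code.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def reconstruct(shards):
    """shards maps shard index -> (payload, guard); index 2 is the
    parity shard.  Shards with stale guards are ignored rather
    than voted on: the highest guard identifies the single
    committed generation (an integer simplification)."""
    current = max(g for (_, g) in shards.values())
    live = {i: p for i, (p, g) in shards.items() if g == current}
    if len(live) < 2:                  # fewer than k shards survive
        return None                    # -> NFS4ERR_PAYLOAD_LOST
    if 0 in live and 1 in live:        # both data shards present
        return live[0] + live[1]
    missing = 0 if 0 not in live else 1
    other = 1 - missing
    data = xor_bytes(live[other], live[2])   # survivor ^ parity
    return data + live[other] if missing == 0 else live[other] + data

d0, d1 = b"AB", b"CD"
parity = xor_bytes(d0, d1)
# Shard 1 carries a stale guard (6); shards 0 and 2 suffice.
result = reconstruct({0: (d0, 7), 1: (b"??", 6), 2: (parity, 7)})
```
]]></sourcecode>
</figure>
<t>Note that no voting occurs: the stale shard is simply excluded,
and the value is reconstructed, not agreed upon.</t>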
      </section>
      <section anchor="sec-system-model-retention-scope">
        <name>Ownership and Scope of Retained Prior Content</name>
        <t>The rollback invariant in <xref target="sec-system-model-consistency"/>
requires a data server to retain the prior FINALIZED or
COMMITTED content of a chunk while any successor PENDING chunk
exists.  That retained content -- sometimes informally called
the "safe buffer" -- is not global state.  It is scoped to the
stateid that wrote the PENDING successor, and its retention and
visibility are governed by that owning stateid's lease.</t>
        <dl>
          <dt>Owner:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> record, alongside each PENDING chunk,
the owning stateid (the stateid presented on the CHUNK_WRITE
that produced the PENDING).  This is the owning writer's
stateid; it identifies the client and openowner/lockowner
that the data server will release the PENDING to on
CHUNK_FINALIZE or CHUNK_COMMIT, and that the MDS will treat
as the authoritative owner for purposes of
<xref target="sec-system-model-progress"/>.</t>
          </dd>
          <dt>Visibility:</dt>
          <dd>
            <t>Before transition to COMMITTED, the PENDING content is
visible only on the owning stateid.  A CHUNK_READ presenting
any other stateid (from the same client or a different
client) <bcp14>MUST</bcp14> observe the predecessor COMMITTED or EMPTY
state, not the PENDING successor.  This is the normative
form of the "non-committed data <bcp14>MUST NOT</bcp14> be globally visible"
rule in the Visibility bullet above.</t>
          </dd>
          <dt>Retention window:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> retain the predecessor COMMITTED (or
FINALIZED) content that the PENDING is superseding for as
long as the owning stateid's lease is valid.  If the owning
stateid's lease expires without the PENDING reaching
COMMITTED, the retention obligation for that PENDING ends
(see <xref target="sec-system-model-progress"/> for the scavenger rule
that drives demotion).  If the PENDING does reach COMMITTED,
the new COMMITTED generation supersedes the prior one under
the standard rollback invariant and its own retention is
governed by any newer PENDING successor.</t>
          </dd>
        </dl>
        <t>The practical effect is that the "safe buffer" for a chunk is
not an unbounded chunk-global state but a per-writer window
bounded by that writer's lease.  The data server always has a
rule for discarding retained prior content -- it is the
owning stateid's lease expiry -- so a chunk cannot accumulate
indefinitely many retained generations even in the presence of
dropped or partitioned writers.</t>
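<t>The scoping rules above can be sketched as follows; the class and
method names are hypothetical and the sketch is non-normative.</t>
<figure anchor="fig-sketch-retention">
<name>Sketch of per-stateid retention and visibility</name>
<sourcecode type="python"><![CDATA[
```python
# Illustrative sketch of the per-writer "safe buffer" scoping rule:
# PENDING content is visible only on the owning stateid, and the
# retained predecessor stands again when that stateid's lease
# expires.  All names are hypothetical.

class ChunkSlot:
    def __init__(self, committed=b"old"):
        self.committed = committed      # globally visible generation
        self.pending = None             # (owner_stateid, payload)

    def chunk_write(self, stateid, payload):
        self.pending = (stateid, payload)

    def chunk_read(self, stateid):
        """Only the owning stateid observes its own PENDING."""
        if self.pending and self.pending[0] == stateid:
            return self.pending[1]
        return self.committed

    def lease_expired(self, stateid):
        """The retention obligation ends with the owner's lease."""
        if self.pending and self.pending[0] == stateid:
            self.pending = None         # demote: predecessor stands

slot = ChunkSlot()
slot.chunk_write("S1", b"new")
```
]]></sourcecode>
</figure>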
      </section>
      <section anchor="sec-system-model-progress">
        <name>Progress and Termination</name>
        <t>Under the failure model above, the protocol guarantees the
following progress properties:</t>
        <dl>
          <dt>Data-path progress:</dt>
          <dd>
            <t>If all mirrors are reachable and none are failed, a
CHUNK_WRITE followed by CHUNK_FINALIZE followed by
CHUNK_COMMIT completes in O(1) round trips independent of
cluster size.  In particular, there is no consensus round,
no leader election, and no quorum voting on the write
itself.  The three operations <bcp14>MAY</bcp14> be amortized across
compounds: a steady-state writer sending a series of
CHUNK_WRITEs can piggyback the CHUNK_FINALIZE of the previous
write on the compound that carries the next write (for
example, <tt>SEQUENCE + PUTFH + CHUNK_FINALIZE + CHUNK_WRITE</tt>),
reducing the data-path happy case to a single round trip per
CHUNK_WRITE rather than three.  The CHUNK_COMMIT for the
final write in a sequence <bcp14>MAY</bcp14> similarly ride on the CLOSE
compound.  These compound-packing optimizations are
permitted by the normal NFSv4.2 compound rules and require
no protocol extensions.</t>
          </dd>
          <dt>Repair termination:</dt>
          <dd>
            <t>Every CB_CHUNK_REPAIR completes in bounded time.  The client
selected as the repair client either:
</t>
            <ol spacing="normal" type="1"><li>
                <t>returns NFS4_OK for every range in ccra_ranges (repair
succeeded), or</t>
              </li>
              <li>
                <t>returns NFS4ERR_PAYLOAD_LOST for one or more ranges (the
erasure code lost too many shards to reconstruct; the
data is permanently unrecoverable), or</t>
              </li>
              <li>
                <t>fails to respond within the ccra_deadline, in which case
the metadata server <bcp14>MUST</bcp14> re-select under the rules in
<xref target="sec-repair-selection"/> or <bcp14>MUST</bcp14> declare the ranges lost.</t>
              </li>
            </ol>
            <t>NFS4ERR_PAYLOAD_LOST is terminal for the affected ranges.
The protocol makes no further attempt to recover them.</t>
          </dd>
          <dt>Eventual trust-table convergence:</dt>
          <dd>
            <t>After a metadata server restart, each data server's trust
table converges to the metadata server's view within one
metadata-server lease period.  Entries that the metadata
server does not re-issue expire naturally via tsa_expire;
entries that the metadata server does re-issue transition
from pending-revalidation back to active on the next
TRUST_STATEID (see <xref target="sec-tight-coupling-mds-crash"/>).</t>
          </dd>
          <dt>Orphaned PENDING scavenger:</dt>
          <dd>
            <t>A PENDING chunk whose owning stateid (see
<xref target="sec-system-model-retention-scope"/>) has expired without
transition to FINALIZED or COMMITTED is an orphan.  The
metadata server <bcp14>MUST</bcp14> drive demotion of orphaned PENDINGs so
that no chunk remains in a non-terminal state indefinitely:
</t>
            <ol spacing="normal" type="1"><li>
                <t>When an owning stateid's lease expires, the metadata
server identifies every PENDING chunk owned by that
stateid (either from its own bookkeeping or by query
against the data server) and issues the control-plane
operations needed to demote each PENDING.</t>
              </li>
              <li>
                <t>Demotion replaces the PENDING with the predecessor
COMMITTED (or EMPTY) content that the data server has
been retaining under
<xref target="sec-system-model-retention-scope"/>.  The data server
<bcp14>MUST NOT</bcp14> wait for a separate client action before
performing the demotion.</t>
              </li>
              <li>
                <t>Any CHUNK_LOCK held in escrow on behalf of the expired
stateid (see <xref target="sec-chunk_guard_mds"/>) is released after
an MDS-defined grace period.  The grace period exists to
let a recovering client reclaim its lock via the grace /
reclaim path defined in <xref target="RFC8881"/>; on expiry of the
grace period without reclaim, the lock becomes available
for new CHUNK_LOCK_FLAGS_ADOPT acquirers.</t>
              </li>
            </ol>
            <t>The scavenger timeout (the delay between lease expiry and
demotion) is implementation-defined but <bcp14>SHOULD</bcp14> be tied to
the metadata server lease period so that it composes
naturally with existing NFSv4 grace / reclaim semantics.  A
scavenger timeout shorter than the lease risks racing an
in-progress client reclaim; a timeout substantially longer
than the lease extends the retention budget without a
commensurate benefit.</t>
          </dd>
        </dl>
        <t>The protocol does NOT guarantee progress if the metadata server
is unavailable for longer than its lease period -- this is the
standard NFSv4 lease assumption and is inherited unchanged.</t>
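<t>The scavenger rules can be sketched as follows.  This is a
hypothetical, non-normative illustration of the MDS-driven flow:
orphaned PENDINGs are demoted without waiting on a client, and
escrowed locks become adoptable only after the grace period.</t>
<figure anchor="fig-sketch-scavenger">
<name>Sketch of the orphaned PENDING scavenger</name>
<sourcecode type="python"><![CDATA[
```python
# Hypothetical sketch of the scavenger rules above; all names are
# illustrative, not protocol elements.

class FakeDS:
    def __init__(self):
        self.demoted = []
    def demote(self, chunk):
        # Replace the PENDING with the retained predecessor content.
        self.demoted.append(chunk)

def scavenge(expired_stateid, pending_index, escrow_locks, now, grace):
    """pending_index: stateid -> [(ds, chunk), ...];
    escrow_locks: list of {'stateid': ..., 'adoptable_at': ...}."""
    demoted = []
    # 1. Demote every orphaned PENDING without waiting on a client.
    for ds, chunk in pending_index.pop(expired_stateid, []):
        ds.demote(chunk)
        demoted.append(chunk)
    # 2. Escrowed locks stay reclaimable for one grace period, then
    #    become available to CHUNK_LOCK_FLAGS_ADOPT acquirers.
    for lock in escrow_locks:
        if lock["stateid"] == expired_stateid:
            lock["adoptable_at"] = now + grace
    return demoted

ds = FakeDS()
locks = [{"stateid": "S1", "adoptable_at": None}]
gone = scavenge("S1", {"S1": [(ds, "chunk-17")]}, locks,
                now=100, grace=90)
```
]]></sourcecode>
</figure>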
      </section>
      <section anchor="sec-system-model-consensus">
        <name>Relation to Classical Consensus</name>
        <t>Classical consensus protocols (Paxos, Raft, Viewstamped
Replication) solve the problem of reaching agreement among
mutually-distrusting replicas in the absence of a trusted
coordinator.  They typically cost two or three round trips per
decision, require a majority of replicas to be live and
reachable for progress, and impose the overhead of leader
election and log replication.</t>
        <t>This protocol is not a consensus protocol and does not attempt
to be.  Its approach instead is:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Designated coordinator.</strong>  The metadata server is the
coordinator for a file.  Clients accept the MDS's authority
for layout grants, stateid registration, repair client
selection, and revocation.  This assumption is the same one
made by <xref target="RFC8434"/> and all pNFS layout types to date.</t>
          </li>
          <li>
            <t><strong>Per-chunk CAS, not per-chunk voting.</strong>  Concurrent writes
on the same chunk serialize via chunk_guard4 as a CAS
primitive (see <xref target="sec-chunk_guard4"/>).  No replica vote is
required; the data server that owns the chunk evaluates the
guard locally and rejects stale writes with
NFS4ERR_CHUNK_GUARDED.</t>
          </li>
          <li>
            <t><strong>Pessimistic locks off the critical path.</strong>  CHUNK_LOCK is
used only during repair, never on the normal write path.
Lock escrow (see <xref target="sec-chunk_guard_mds"/>) preserves the
"exactly one owner" invariant across stateid revocation
without requiring a consensus round to elect the next owner.</t>
          </li>
          <li>
            <t><strong>Erasure-coded reads replace quorum reads.</strong>  A reader
reconstructs from any k of k+m shards with matching guards.
No voting is needed because there is no disagreement to
resolve: the guard identifies the single generation that was
committed.</t>
          </li>
        </ol>
        <t>The result is a data path with O(1) round-trip cost independent
of the number of replicas, and a repair path whose cost is
bounded by the number of affected chunks rather than by the
cluster size.</t>
        <t>Metadata-server high availability is orthogonal.  Deployments
that require a highly-available metadata server <bcp14>MAY</bcp14> replicate
metadata-server state across multiple metadata server instances
using classical consensus (Raft, Paxos, or equivalent).  Such
replication is implementation-defined; from a pNFS client's
perspective a highly-available metadata server looks like a
single metadata server that occasionally resets its session and
triggers grace-period reclaim, and the client's behavior is
already specified by <xref target="RFC8881"/>.  This protocol neither
requires nor precludes such an implementation.</t>
      </section>
      <section anchor="sec-system-model-nongoals">
        <name>Non-Goals</name>
        <t>For clarity, the protocol explicitly does not provide:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Byzantine fault tolerance.</strong>  A data server that
deliberately misreports its state, or a client that
bypasses its own authentication, is outside the trust model.
Deployments requiring Byzantine tolerance <bcp14>MUST</bcp14> add it in a
layer above or below this protocol.</t>
          </li>
          <li>
            <t><strong>Metadata server high availability.</strong>  Single-MDS-per-file
is the protocol model.  MDS HA, if deployed, is implemented
below the wire protocol and transparent to clients.</t>
          </li>
          <li>
            <t><strong>Cross-file atomicity.</strong>  Writes to multiple files are not
atomic at the protocol level.  File-system-level transactions
are not defined.</t>
          </li>
          <li>
            <t><strong>Multi-chunk atomicity within a single file.</strong>  COMMITs on
distinct chunks are independent.  A reader may observe a
partial write across chunks; applications must layer their
own consistency if they need otherwise.</t>
          </li>
          <li>
            <t><strong>Global linearizability across unrelated files.</strong>  Each
file's COMMITTED state is linearizable in isolation; no
total order is defined across files.</t>
          </li>
          <li>
            <t><strong>Authenticated malicious client protection.</strong>  An
authenticated client may write garbage into its own chunks
with a correctly computed CRC32; see
<xref target="sec-security-crc32-scope"/>.  The CRC32 check is a
transport-integrity check, not an adversarial-integrity
check.</t>
          </li>
          <li>
            <t><strong>General-purpose intent primitive.</strong>  Christoph Hellwig
observed at IETF 121 (November 2024) that the intent-based
pattern used here (CHUNK_WRITE -&gt; CHUNK_FINALIZE -&gt;
CHUNK_COMMIT with CHUNK_ROLLBACK as the abort path) has
potential applicability beyond erasure coding -- for
example, as a general multi-target atomic-ish write
primitive.  This document scopes the mechanism to erasure
coding: the on-wire operations carry erasure-coding-specific
semantics (chunk_guard4, mirror-set repair, per-codec
geometry), and generalising the primitive is explicit
future work.  Protocol extensions that reuse the
intent / finalize / commit pattern in other contexts are
not precluded by this document but are not defined by it.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="nfsv42-operations-allowed-to-data-files">
      <name>NFSv4.2 Operations Allowed to Data Files</name>
      <t>In the Flex Files Version 1 Layout Type (<xref target="RFC8435"/>), the data path
between client and data server was NFSv3 (<xref target="RFC1813"/>); the
operations a client sent to a data file were limited to READ,
WRITE, and COMMIT, and the operations the metadata server sent on
its control plane to the data server were limited to GETATTR,
SETATTR, CREATE, and REMOVE.  An NFSv4.2 data server, as used by
the Flex Files Version 2 Layout Type, exposes a much larger
operation set.  This section defines which operations a client <bcp14>MAY</bcp14>
send to a data file, which operations the metadata server <bcp14>MAY</bcp14>
send, and which operations a data server <bcp14>MUST</bcp14> reject.</t>
      <t>The restrictions below apply only to operations directed at a data
file on a data server.  Clients retain the full NFSv4.2 operation
set for files visible through the metadata server, including the
operations prohibited below (RENAME, LINK, CLONE, COPY, ACL-scoped
SETATTR, and so on).  The metadata server <bcp14>MAY</bcp14> internally use
operations on data files that clients <bcp14>MUST NOT</bcp14> send, as part of
its control-plane duties for the file (see
<xref target="sec-system-model-roles"/>).</t>
      <section anchor="sec-ops-mds">
        <name>Control Plane: Metadata Server to Data Server</name>
        <t>When the metadata server acts as a client to a data server, it is
managing the data file on behalf of the metadata file's namespace.
A data server <bcp14>MUST</bcp14> support the following operations on data files
when issued by the metadata server:</t>
        <ul spacing="normal">
          <li>
            <t>SEQUENCE, PUTFH, PUTROOTFH, GETFH (<xref target="RFC8881"/> Sections 18.46,
18.19, 18.21, 18.8): session and filehandle plumbing.</t>
          </li>
          <li>
            <t>LOOKUP (<xref target="RFC8881"/> Section 18.13): runway pool directory
traversal.</t>
          </li>
          <li>
            <t>GETATTR (<xref target="RFC8881"/> Section 18.7): reflected GETATTR after a
write layout is returned, and any other attribute queries the
metadata server needs to reconcile its cached view.</t>
          </li>
          <li>
            <t>SETATTR (<xref target="RFC8881"/> Section 18.30): data file truncate for
MDS-level SETATTR(size) fan-out, synthetic uid/gid rotation
for fencing, and mode-bit initialisation on runway assignment.</t>
          </li>
          <li>
            <t>CREATE (<xref target="RFC8881"/> Section 18.4): runway pool file creation.</t>
          </li>
          <li>
            <t>REMOVE (<xref target="RFC8881"/> Section 18.25): cleanup on MDS file
unlink.</t>
          </li>
          <li>
            <t>OPEN, CLOSE (<xref target="RFC8881"/> Sections 18.16, 18.2): used by the
metadata server when it acts as a client to the data server
for InBand or proxy I/O.</t>
          </li>
          <li>
            <t>EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION,
BIND_CONN_TO_SESSION, DESTROY_CLIENTID (<xref target="RFC8881"/> Sections
18.35, 18.36, 18.37, 18.34, 18.50): control-session
management.  The metadata server sets
EXCHGID4_FLAG_USE_NON_PNFS in its EXCHANGE_ID.  A data
server that supports the tight-coupling control protocol
(see <xref target="sec-tight-coupling-control-session"/>) identifies the
metadata server's session by EXCHGID4_FLAG_USE_PNFS_MDS and
accepts TRUST_STATEID, REVOKE_STATEID, and
BULK_REVOKE_STATEID on that session.</t>
          </li>
          <li>
            <t>TRUST_STATEID (<xref target="sec-TRUST_STATEID"/>), REVOKE_STATEID
(<xref target="sec-REVOKE_STATEID"/>), BULK_REVOKE_STATEID
(<xref target="sec-BULK_REVOKE_STATEID"/>): the MDS-to-DS tight-coupling
trust-table control operations.</t>
          </li>
        </ul>
        <t>The metadata server <bcp14>MAY</bcp14> also use other NFSv4.2 operations on data
files as implementation-defined control-plane actions (for
example, COPY or CLONE to migrate a data file between data
servers during a data mover operation).  The list above is the
minimum set a Flex Files v2 data server <bcp14>MUST</bcp14> support for the
metadata server's use.</t>
      </section>
      <section anchor="sec-ops-client">
        <name>Data Path: Client to Data Server</name>
        <t>A pNFS client with an active Flex Files v2 layout <bcp14>MUST</bcp14> restrict
the operations it issues against data files to the operations
defined below.  A data server <bcp14>MUST</bcp14> reject any other operation on
a data file with NFS4ERR_NOTSUPP.</t>
        <section anchor="session-and-identity-plumbing">
          <name>Session and Identity Plumbing</name>
          <t>Required for all protection modes:</t>
          <ul spacing="normal">
            <li>
              <t>SEQUENCE, PUTFH, GETFH, PUTROOTFH (<xref target="RFC8881"/> Sections 18.46,
18.19, 18.8, 18.21).</t>
            </li>
            <li>
              <t>EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION,
BIND_CONN_TO_SESSION, DESTROY_CLIENTID (<xref target="RFC8881"/> Sections
18.35, 18.36, 18.37, 18.34, 18.50).</t>
            </li>
            <li>
              <t>RECLAIM_COMPLETE (<xref target="RFC8881"/> Section 18.51).</t>
            </li>
            <li>
              <t>SECINFO, SECINFO_NO_NAME (<xref target="RFC8881"/> Sections 18.29, 18.45):
discovery of acceptable security flavours on the data
server.</t>
            </li>
          </ul>
          <t>These operations are baseline NFSv4.2 session plumbing and are
supported on data files as on any NFSv4.2 file.</t>
        </section>
        <section anchor="getattr-on-a-data-file">
          <name>GETATTR on a Data File</name>
          <t>GETATTR <bcp14>MAY</bcp14> be issued by a client against a data file.  The
primary use case is repair: a repair client selected by
CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>) may need to query the
per-server file size or allocation state when reconstructing a
payload, and the data mover described informally in
<xref target="sec-system-model-roles"/> similarly benefits from attribute
queries on surviving mirrors.  Diagnostic use is also permitted.</t>
          <t>Clients <bcp14>MUST NOT</bcp14> treat GETATTR values returned by a data server as
authoritative for any file attribute (size, timestamps, owner,
mode, ACL, and so on).  The metadata server is the sole authority
for file attributes.  Values returned by a data server reflect the
per-server data file instance only and <bcp14>MAY</bcp14> diverge from the
metadata server's view, particularly during a write layout's
lifetime or during a Data Mover transition.  A client that uses a
data-server GETATTR result to determine the file's visible size
will observe inconsistencies.</t>
        </section>
        <section anchor="setattr-on-a-data-file">
          <name>SETATTR on a Data File</name>
          <t>Clients <bcp14>MUST NOT</bcp14> issue SETATTR against a data file.  A data server
<bcp14>MUST</bcp14> reject a client SETATTR with NFS4ERR_NOTSUPP.</t>
          <t>Attribute changes on data files <bcp14>MUST</bcp14> be reconciled with the
metadata server's view and cannot be applied unilaterally by a
client.  A client that wants to truncate, change the mode, change
ownership, or otherwise modify attributes on a file <bcp14>MUST</bcp14> issue
SETATTR to the metadata server for the file's MDS handle; the
metadata server fans the change out to the data files as a
control-plane operation.</t>
          <t>This rule explicitly covers truncate (SETATTR with size in the
bitmap): a client <bcp14>MUST NOT</bcp14> truncate a data file directly.
Similarly, a client <bcp14>MUST NOT</bcp14> issue DEALLOCATE against a data
file; see the next subsection.</t>
        </section>
        <section anchor="mirrored-data-files-ffv2codingmirrored">
          <name>Mirrored Data Files (FFV2_CODING_MIRRORED)</name>
          <t>For a mirror whose ffm_coding_type_data is FFV2_CODING_MIRRORED
(see <xref target="sec-ffv2-mirror4"/>), client operations on the data file
follow the same pattern as the File Layout Type in <xref target="RFC8881"/>
Section 13.6 and the Flex Files v1 Layout Type in <xref target="RFC8435"/>:</t>
          <t>Required:</t>
          <ul spacing="normal">
            <li>
              <t>READ (<xref target="RFC8881"/> Section 18.22).</t>
            </li>
            <li>
              <t>WRITE (<xref target="RFC8881"/> Section 18.32).</t>
            </li>
            <li>
              <t>COMMIT (<xref target="RFC8881"/> Section 18.3).</t>
            </li>
          </ul>
          <t>Optional (the client <bcp14>MAY</bcp14> send, and the data server <bcp14>MAY</bcp14> support):</t>
          <ul spacing="normal">
            <li>
              <t>READ_PLUS (<xref target="RFC7862"/> Section 15.10): hole-aware reads.</t>
            </li>
            <li>
              <t>SEEK (<xref target="RFC7862"/> Section 15.11): hole and data detection.</t>
            </li>
            <li>
              <t>ALLOCATE (<xref target="RFC7862"/> Section 15.1): space reservation hint.</t>
            </li>
          </ul>
          <t>The client <bcp14>MUST NOT</bcp14> send:</t>
          <ul spacing="normal">
            <li>
              <t>DEALLOCATE (<xref target="RFC7862"/> Section 15.4): hole punching is a
metadata-server responsibility; the client issues DEALLOCATE
on the metadata-server filehandle, and the metadata server
fans out to the data servers as a control-plane operation.</t>
            </li>
          </ul>
        </section>
        <section anchor="erasure-coded-data-files-ffv2encoding">
          <name>Erasure-Coded Data Files (FFV2_ENCODING_*)</name>
          <t>For a mirror whose ffm_coding_type_data is any of the
erasure-coding types defined in this document
(FFV2_ENCODING_MOJETTE_SYSTEMATIC,
FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC, or
FFV2_ENCODING_RS_VANDERMONDE), client operations use the CHUNK_*
operations rather than READ / WRITE / COMMIT.</t>
          <t>Required for all erasure-coded clients:</t>
          <ul spacing="normal">
            <li>
              <t>CHUNK_WRITE (<xref target="sec-CHUNK_WRITE"/>).</t>
            </li>
            <li>
              <t>CHUNK_READ (<xref target="sec-CHUNK_READ"/>).</t>
            </li>
            <li>
              <t>CHUNK_FINALIZE (<xref target="sec-CHUNK_FINALIZE"/>).</t>
            </li>
            <li>
              <t>CHUNK_COMMIT (<xref target="sec-CHUNK_COMMIT"/>).</t>
            </li>
            <li>
              <t>CHUNK_HEADER_READ (<xref target="sec-CHUNK_HEADER_READ"/>).</t>
            </li>
            <li>
              <t>CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) and CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
            </li>
            <li>
              <t>CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).</t>
            </li>
          </ul>
          <t>Required for clients that participate in repair:</t>
          <ul spacing="normal">
            <li>
              <t>CHUNK_ERROR (<xref target="sec-CHUNK_ERROR"/>).</t>
            </li>
            <li>
              <t>CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>).</t>
            </li>
            <li>
              <t>CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>).</t>
            </li>
          </ul>
          <t>Clients <bcp14>MUST NOT</bcp14> send:</t>
          <ul spacing="normal">
            <li>
              <t>READ, WRITE, COMMIT against an erasure-coded data file.  A
data server <bcp14>MUST</bcp14> reject these with NFS4ERR_NOTSUPP and <bcp14>MAY</bcp14>
log the client for operator attention; this case is almost
always a client bug in which the client did not inspect the
mirror's ffm_coding_type_data before issuing I/O.</t>
            </li>
            <li>
              <t>READ_PLUS, SEEK, ALLOCATE, DEALLOCATE against an erasure-
coded data file.  Chunk-level allocation is a
metadata-server responsibility.</t>
            </li>
          </ul>
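          <t>A client therefore needs to branch on the mirror's
ffm_coding_type_data before issuing any I/O to a data file.  The
following non-normative Python sketch illustrates the dispatch; the
coding types are represented as strings rather than their XDR enum
values.</t>
          <figure>
            <name>Non-Normative Sketch: I/O Dispatch by Coding Type</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the client-side dispatch on
# ffm_coding_type_data.  The coding types are represented as strings
# here rather than as their XDR enum values.
FFV2_CODING_MIRRORED                 = "mirrored"
FFV2_ENCODING_MOJETTE_SYSTEMATIC     = "mojette-systematic"
FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC = "mojette-non-systematic"
FFV2_ENCODING_RS_VANDERMONDE         = "rs-vandermonde"

ERASURE_CODED = {
    FFV2_ENCODING_MOJETTE_SYSTEMATIC,
    FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC,
    FFV2_ENCODING_RS_VANDERMONDE,
}

def read_op_for(coding_type):
    """Select the wire operation for a read.  A client that skips
    this check and sends READ to an erasure-coded data file gets
    NFS4ERR_NOTSUPP back from the data server."""
    if coding_type == FFV2_CODING_MIRRORED:
        return "READ"
    if coding_type in ERASURE_CODED:
        return "CHUNK_READ"
    raise ValueError("unknown ffm_coding_type_data")
]]></sourcecode>
          </figure>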
        </section>
        <section anchor="operations-that-must-not-be-sent-to-a-data-file">
          <name>Operations That MUST NOT Be Sent to a Data File</name>
          <t>Clients <bcp14>MUST NOT</bcp14> send the following operations to a data server
on a data file, regardless of protection mode.  A data server
<bcp14>MUST</bcp14> return NFS4ERR_NOTSUPP:</t>
          <ul spacing="normal">
            <li>
              <t>OPEN, CLOSE, OPEN_DOWNGRADE (<xref target="RFC8881"/>
Sections 18.16, 18.2, 18.18) and OPEN_CONFIRM.  Opens occur on the
metadata server; the stateid obtained there is used on the
data path.</t>
            </li>
            <li>
              <t>LOCK, LOCKT, LOCKU (<xref target="RFC8881"/> Sections
18.10, 18.11, 18.12) and RELEASE_LOCKOWNER.  Byte-range locks on data
files are not supported; erasure-coded files use CHUNK_LOCK, and
mirrored files rely on metadata-server coordination.</t>
            </li>
            <li>
              <t>DELEGPURGE, DELEGRETURN, WANT_DELEGATION
(<xref target="RFC8881"/> Sections 18.5, 18.6, and 18.49).  Delegations
are issued by the metadata server.</t>
            </li>
            <li>
              <t>Any operation whose purpose is to manipulate the file's
namespace: RENAME, LINK, SYMLINK, CREATE (when used by a client for
file creation, as distinct from the metadata server's runway pool
creation), and REMOVE.  Namespace operations belong on the metadata
server.</t>
            </li>
            <li>
              <t>Any ACL-scoped SETATTR or GETATTR bit (FATTR4_ACL,
FATTR4_DACL, FATTR4_SACL).  Access control on data files is
delegated to the metadata server.</t>
            </li>
            <li>
              <t>CLONE, COPY, COPY_NOTIFY, OFFLOAD_CANCEL, OFFLOAD_STATUS
(<xref target="RFC7862"/> Sections 15.13, 15.2, 15.3, 15.8, 15.9).
File-level data migration is a metadata-server responsibility.</t>
            </li>
            <li>
              <t>LAYOUTGET, LAYOUTCOMMIT, LAYOUTRETURN, LAYOUTSTATS,
LAYOUTERROR, GETDEVICEINFO, GETDEVICELIST (<xref target="RFC8881"/>
Sections 18.43, 18.42, 18.44, <xref target="RFC7862"/> Sections 15.7,
15.6, <xref target="RFC8881"/> Sections 18.40, 18.41).  Layout operations
belong on the metadata server.</t>
            </li>
            <li>
              <t>TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID
(<xref target="sec-TRUST_STATEID"/>, <xref target="sec-REVOKE_STATEID"/>,
<xref target="sec-BULK_REVOKE_STATEID"/>).  These are MDS-to-DS
control-plane operations; a data server rejects them with
NFS4ERR_PERM when received on a client session (see
<xref target="sec-tight-coupling-control-session"/>).</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="callback-path-data-server-to-client">
        <name>Callback Path: Data Server to Client</name>
        <t>A data server does not call back directly to pNFS clients.
Recall notifications and repair coordination flow through the
metadata server's backchannel session with the client.  The
callbacks a client will observe that affect its data files are:</t>
        <ul spacing="normal">
          <li>
            <t>CB_LAYOUTRECALL (<xref target="RFC8881"/> Section 20.3).</t>
          </li>
          <li>
            <t>CB_NOTIFY_DEVICEID (<xref target="RFC8881"/> Section 20.12).</t>
          </li>
          <li>
            <t>CB_RECALL_ANY (<xref target="RFC8881"/> Section 20.6).</t>
          </li>
          <li>
            <t>CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>).</t>
          </li>
        </ul>
        <t>A data server influences these callbacks only indirectly, via
LAYOUTERROR reports the client issues to the metadata server or
by returning error codes that prompt the client to report.  A
data server <bcp14>MUST NOT</bcp14> attempt to send CB_* operations to clients
directly.</t>
      </section>
      <section anchor="summary-table">
        <name>Summary Table</name>
        <t><xref target="tbl-ops-allowed"/> lists each relevant NFSv4.2 operation and its
applicability on a data file in each direction.  "required" means
the data server <bcp14>MUST</bcp14> support the operation when received on the
indicated path; "OPT" means the data server <bcp14>MAY</bcp14> support it and the
client <bcp14>MUST</bcp14> tolerate the absence of support; "<bcp14>MUST NOT</bcp14>" means the
client <bcp14>MUST NOT</bcp14> send the operation and the data server <bcp14>MUST</bcp14> reject
it with NFS4ERR_NOTSUPP; "<bcp14>MAY</bcp14>" means the metadata server <bcp14>MAY</bcp14> use
the operation as an implementation-defined control-plane action.</t>
        <table anchor="tbl-ops-allowed">
          <name>NFSv4.2 operations allowed on data files</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Client -&gt; DS</th>
              <th align="left">MDS -&gt; DS</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">SEQUENCE, PUTFH, GETFH, PUTROOTFH</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION, BIND_CONN_TO_SESSION, DESTROY_CLIENTID</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">RECLAIM_COMPLETE</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">SECINFO, SECINFO_NO_NAME</td>
              <td align="left">required</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">GETATTR</td>
              <td align="left">OPT (non-authoritative)</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">SETATTR</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">LOOKUP, CREATE, REMOVE</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">READ, WRITE, COMMIT</td>
              <td align="left">required (mirrored); <bcp14>MUST NOT</bcp14> (erasure-coded)</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">READ_PLUS, SEEK, ALLOCATE</td>
              <td align="left">OPT (mirrored); <bcp14>MUST NOT</bcp14> (erasure-coded)</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">DEALLOCATE</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE, CHUNK_READ, CHUNK_FINALIZE, CHUNK_COMMIT, CHUNK_HEADER_READ, CHUNK_LOCK, CHUNK_UNLOCK, CHUNK_ROLLBACK</td>
              <td align="left">required (erasure-coded); <bcp14>MUST NOT</bcp14> (mirrored)</td>
              <td align="left">not used</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ERROR, CHUNK_REPAIRED, CHUNK_WRITE_REPAIR</td>
              <td align="left">required (erasure-coded repair clients); <bcp14>MUST NOT</bcp14> (mirrored)</td>
              <td align="left">not used</td>
            </tr>
            <tr>
              <td align="left">OPEN, CLOSE, OPEN_DOWNGRADE, OPEN_CONFIRM</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">OPT (proxy I/O)</td>
            </tr>
            <tr>
              <td align="left">LOCK, LOCKU, LOCKT, RELEASE_LOCKOWNER</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">DELEGPURGE, DELEGRETURN, WANT_DELEGATION</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">RENAME, LINK, SYMLINK</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">CLONE, COPY, COPY_NOTIFY, OFFLOAD_CANCEL, OFFLOAD_STATUS</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14> (data migration)</td>
            </tr>
            <tr>
              <td align="left">LAYOUTGET, LAYOUTCOMMIT, LAYOUTRETURN, LAYOUTSTATS, LAYOUTERROR, GETDEVICEINFO, GETDEVICELIST</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">ACL-scoped GETATTR/SETATTR bits</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required (tight coupling)</td>
            </tr>
          </tbody>
        </table>
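        <t>The admission rules summarised above can be sketched as a
data-server-side check.  The following Python fragment is
non-normative: only a few rows of the table are reproduced, and the
per-coding-type READ versus CHUNK_* distinction is omitted for
brevity.</t>
        <figure>
          <name>Non-Normative Sketch: Data-Server Admission Check</name>
          <sourcecode type="python"><![CDATA[
# Non-normative sketch of a data server's admission check, derived
# from the summary table.  Only a few rows are reproduced, and the
# per-coding-type READ versus CHUNK_* distinction is omitted.
NFS4_OK         = 0
NFS4ERR_PERM    = 1
NFS4ERR_NOTSUPP = 10004

CLIENT_ALLOWED = {"SEQUENCE", "PUTFH", "GETFH", "GETATTR",
                  "READ", "WRITE", "COMMIT",
                  "CHUNK_READ", "CHUNK_WRITE"}
CONTROL_ONLY   = {"TRUST_STATEID", "REVOKE_STATEID",
                  "BULK_REVOKE_STATEID"}
MDS_ALLOWED    = CLIENT_ALLOWED | {"SETATTR", "LOOKUP", "CREATE",
                                   "REMOVE"} | CONTROL_ONLY

def admit(op, session_is_mds):
    """Decide whether an operation on a data file is admitted on
    the given session."""
    if not session_is_mds and op in CONTROL_ONLY:
        return NFS4ERR_PERM       # control ops on a client session
    allowed = MDS_ALLOWED if session_is_mds else CLIENT_ALLOWED
    return NFS4_OK if op in allowed else NFS4ERR_NOTSUPP
]]></sourcecode>
        </figure>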
      </section>
    </section>
    <section anchor="sec-layouthint">
      <name>Flexible File Layout Type Return</name>
      <t>layoutreturn_file4 is used in the LAYOUTRETURN operation to convey
layout-type-specific information to the server.  It is defined in
Section 18.44.1 of <xref target="RFC8881"/> (also shown in <xref target="fig-LAYOUTRETURN"/>).</t>
      <figure anchor="fig-LAYOUTRETURN">
        <name>Layout Return XDR</name>
        <sourcecode type="xdr"><![CDATA[
      /* Constants used for LAYOUTRETURN and CB_LAYOUTRECALL */
      const LAYOUT4_RET_REC_FILE      = 1;
      const LAYOUT4_RET_REC_FSID      = 2;
      const LAYOUT4_RET_REC_ALL       = 3;

      enum layoutreturn_type4 {
              LAYOUTRETURN4_FILE = LAYOUT4_RET_REC_FILE,
              LAYOUTRETURN4_FSID = LAYOUT4_RET_REC_FSID,
              LAYOUTRETURN4_ALL  = LAYOUT4_RET_REC_ALL
      };

   struct layoutreturn_file4 {
           offset4         lrf_offset;
           length4         lrf_length;
           stateid4        lrf_stateid;
           /* layouttype4 specific data */
           opaque          lrf_body<>;
   };

   union layoutreturn4 switch(layoutreturn_type4 lr_returntype) {
           case LAYOUTRETURN4_FILE:
                   layoutreturn_file4      lr_layout;
           default:
                   void;
   };

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           bool                    lora_reclaim;
           layouttype4             lora_layout_type;
           layoutiomode4           lora_iomode;
           layoutreturn4           lora_layoutreturn;
   };
]]></sourcecode>
      </figure>
      <t>If the lora_layout_type layout type is LAYOUT4_FLEX_FILES and the
lr_returntype is LAYOUTRETURN4_FILE, then the lrf_body opaque value
is defined by ff_layoutreturn4 (see <xref target="sec-ff_layoutreturn4"/>).  This
allows the client to report I/O error information or layout usage
statistics back to the metadata server as defined below.  Note that
while the data structures are built on concepts introduced in
NFSv4.2, the effective discriminated union (lora_layout_type combined
with ff_layoutreturn4) allows for an NFSv4.1 metadata server to
utilize the data.</t>
      <section anchor="sec-io-error">
        <name>I/O Error Reporting</name>
        <section anchor="sec-ff_ioerr4">
          <name>ffv2_ioerr4</name>
          <figure anchor="fig-ff_ioerr4">
            <name>ffv2_ioerr4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_ioerr4 {
   ///         offset4        ffie_offset;
   ///         length4        ffie_length;
   ///         stateid4       ffie_stateid;
   ///         device_error4  ffie_errors<>;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Recall that <xref target="RFC7862"/> defines device_error4 as in <xref target="fig-device_error4"/>:</t>
          <figure anchor="fig-device_error4">
            <name>device_error4</name>
            <sourcecode type="xdr"><![CDATA[
   struct device_error4 {
           deviceid4       de_deviceid;
           nfsstat4        de_status;
           nfs_opnum4      de_opnum;
   };
]]></sourcecode>
          </figure>
          <t>The ffv2_ioerr4 structure is used to return error indications for
data files that generated errors during data transfers.  These are
hints to the metadata server that there are problems with that file.
For each error, ffie_errors.de_deviceid, ffie_offset, and ffie_length
represent the storage device and the byte range within the file in
which the error occurred; ffie_errors represents the operation and
the type of error.  The use of device_error4 is described in
Section 15.6 of <xref target="RFC7862"/>.</t>
          <t>Even though the storage device might be accessed via NFSv3 and
reports back NFSv3 errors to the client, the client is responsible
for mapping these to appropriate NFSv4 status codes as de_status.
Likewise, the NFSv3 operations need to be mapped to equivalent NFSv4
operations.</t>
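          <t>The following non-normative Python sketch illustrates such
a mapping for a few common status codes and procedure numbers; a real
client needs the complete tables.</t>
          <figure>
            <name>Non-Normative Sketch: NFSv3-to-NFSv4 Error Mapping</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of the NFSv3-to-NFSv4 mapping a client applies
# when filling in de_status and de_opnum.  Only a few common codes
# are shown; a real client needs the complete tables.
NFS4ERR_IO = 5     # fallback for codes with no direct equivalent

NFS3_TO_NFS4_STATUS = {
    0:     0,      # NFS3_OK         -> NFS4_OK
    5:     5,      # NFS3ERR_IO      -> NFS4ERR_IO
    13:    13,     # NFS3ERR_ACCES   -> NFS4ERR_ACCESS
    70:    70,     # NFS3ERR_STALE   -> NFS4ERR_STALE
    10008: 10008,  # NFS3ERR_JUKEBOX -> NFS4ERR_DELAY
}

NFS3_PROC_TO_NFS4_OP = {
    6:  25,        # NFSPROC3_READ   -> OP_READ
    7:  38,        # NFSPROC3_WRITE  -> OP_WRITE
    21: 5,         # NFSPROC3_COMMIT -> OP_COMMIT
}

def map_de_status(nfs3_status):
    """Map an NFSv3 status to the de_status value to report."""
    return NFS3_TO_NFS4_STATUS.get(nfs3_status, NFS4ERR_IO)
]]></sourcecode>
          </figure>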
        </section>
      </section>
      <section anchor="sec-layout-stats">
        <name>Layout Usage Statistics</name>
        <section anchor="ffiolatency4">
          <name>ffv2_io_latency4</name>
          <figure anchor="fig-ff_io_latency4">
            <name>ffv2_io_latency4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_io_latency4 {
   ///         uint64_t       ffil_ops_requested;
   ///         uint64_t       ffil_bytes_requested;
   ///         uint64_t       ffil_ops_completed;
   ///         uint64_t       ffil_bytes_completed;
   ///         uint64_t       ffil_bytes_not_delivered;
   ///         nfstime4       ffil_total_busy_time;
   ///         nfstime4       ffil_aggregate_completion_time;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Both operation counts and bytes transferred are kept in
ffv2_io_latency4 (see <xref target="fig-ff_io_latency4"/>).  As seen in
ffv2_layoutupdate4 (see <xref target="sec-ff_layoutupdate4"/>), READ and
WRITE operations are aggregated separately: READ operations are
accumulated in ffl_read, while WRITE and COMMIT operations are
accumulated in ffl_write.  "Requested" counters track what the
client is attempting to do, and "completed" counters track what was
actually done.  There is no requirement that the client only report
completed results that have matching requested results from the
reported period.</t>
          <t>ffil_bytes_not_delivered is used to track the aggregate number of
bytes requested but not fulfilled due to error conditions.
ffil_total_busy_time is the aggregate time spent with outstanding
RPC calls. ffil_aggregate_completion_time is the sum of all round-trip
times for completed RPC calls.</t>
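          <t>The counter semantics above can be illustrated with a
short non-normative Python sketch.  Round-trip times are plain
floating-point seconds here rather than nfstime4, and busy-time
bookkeeping is omitted.</t>
          <figure>
            <name>Non-Normative Sketch: Latency Counter Accumulation</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch of accumulating ffv2_io_latency4 counters.
# Round-trip times are plain floating-point seconds here rather than
# nfstime4, and busy-time bookkeeping is omitted.
from dataclasses import dataclass

@dataclass
class IoLatency:
    ops_requested: int = 0
    bytes_requested: int = 0
    ops_completed: int = 0
    bytes_completed: int = 0
    bytes_not_delivered: int = 0
    aggregate_completion_time: float = 0.0

    def request(self, nbytes):
        """Called when the RPC is issued."""
        self.ops_requested += 1
        self.bytes_requested += nbytes

    def complete(self, nbytes_done, nbytes_asked, rtt):
        """Called when the RPC completes (possibly short on error)."""
        self.ops_completed += 1
        self.bytes_completed += nbytes_done
        if nbytes_done < nbytes_asked:
            self.bytes_not_delivered += nbytes_asked - nbytes_done
        self.aggregate_completion_time += rtt
]]></sourcecode>
          </figure>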
          <t>Section 3.3.1 of <xref target="RFC8881"/> defines nfstime4 as the
number of seconds and nanoseconds since midnight or zero hour January
1, 1970 Coordinated Universal Time (UTC).  The nfstime4 fields in
ffv2_io_latency4 instead store the time elapsed since the start of the
first I/O from the client after receiving the layout.  In other words,
they are to be decoded as durations and not as dates and times.</t>
          <t>Note that LAYOUTSTATS are cumulative, i.e., not reset each time the
operation is sent.  If two LAYOUTSTATS operations for the same file
and layout stateid originate from the same NFS client and are
processed at the same time by the metadata server, then the one
containing the larger values contains the most recent time series
data.</t>
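          <t>Because the counters are cumulative and never decrease,
the ordering rule above reduces to a simple comparison.  The following
Python fragment is a non-normative sketch; the reports are represented
as plain dictionaries of counters.</t>
          <figure>
            <name>Non-Normative Sketch: Ordering Cumulative Reports</name>
            <sourcecode type="python"><![CDATA[
# Non-normative sketch: because LAYOUTSTATS counters are cumulative
# and never decrease, two concurrently processed reports for the same
# file, layout stateid, and client can be ordered by comparing any
# monotonic counter.
def newer_report(a, b):
    """a and b are cumulative counter snapshots (dicts); the one with
    the larger value carries the more recent time series data."""
    if a["ffil_ops_completed"] >= b["ffil_ops_completed"]:
        return a
    return b
]]></sourcecode>
          </figure>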
        </section>
        <section anchor="sec-ff_layoutupdate4">
          <name>ffv2_layoutupdate4</name>
          <figure anchor="fig-ff_layoutupdate4">
            <name>ffv2_layoutupdate4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutupdate4 {
   ///         netaddr4         ffl_addr;
   ///         nfs_fh4          ffl_fhandle;
   ///         ffv2_io_latency4 ffl_read;
   ///         ffv2_io_latency4 ffl_write;
   ///         nfstime4         ffl_duration;
   ///         bool             ffl_local;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>ffl_addr differentiates which network address the client is connected
to on the storage device.  In the case of multipathing, ffl_fhandle
indicates which read-only copy was selected.  ffl_read and ffl_write
convey the latency statistics for READ and WRITE operations,
respectively.  ffl_duration indicates the time period over which the
statistics were collected.  If true, ffl_local indicates that the
I/O was serviced by the client's cache.  This flag allows the client
to inform the metadata server about "hot" access to a file it would
not normally be allowed to report on.</t>
        </section>
        <section anchor="ffiostats4">
          <name>ffv2_iostats4</name>
          <figure anchor="fig-ff_iostats4">
            <name>ffv2_iostats4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_iostats4 {
   ///         offset4            ffis_offset;
   ///         length4            ffis_length;
   ///         stateid4           ffis_stateid;
   ///         io_info4           ffis_read;
   ///         io_info4           ffis_write;
   ///         deviceid4          ffis_deviceid;
   ///         ffv2_layoutupdate4 ffis_layoutupdate;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t><xref target="RFC7862"/> defines io_info4 as in <xref target="fig-io_info4"/>.</t>
          <figure anchor="fig-io_info4">
            <name>io_info4</name>
            <sourcecode type="xdr"><![CDATA[
   struct io_info4 {
           uint64_t        ii_count;
           uint64_t        ii_bytes;
   };
]]></sourcecode>
          </figure>
          <t>With pNFS, data transfers are performed directly between the pNFS
client and the storage devices.  Therefore, the metadata server has
no direct knowledge of the I/O operations being done and thus cannot
on its own gather statistical information about client I/O with which
to optimize data storage locations.  ffv2_iostats4 <bcp14>MAY</bcp14> be used by the
client to report I/O statistics back to the metadata server upon
returning the layout.</t>
          <t>Since it is not feasible for the client to report every I/O that
used the layout, the client <bcp14>MAY</bcp14> identify "hot" byte ranges for which
to report I/O statistics.  The definition and/or configuration
mechanism of what is considered "hot" and the size of the reported
byte range are outside the scope of this document.  Client
implementations are encouraged to provide reasonable default values
and an optional run-time management interface to control these
parameters.  For example, a client can define the default byte-range
resolution to be 1 MB in size and the thresholds for reporting to
be 1 MB/second or 10 I/O operations per second.</t>
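          <t>As a non-normative illustration of the example defaults above, the
following Python sketch selects "hot" 1 MB byte ranges from a window of
observed I/Os.  The bucket resolution, thresholds, and function name are
illustrative values from this section, not protocol requirements.</t>
          <figure anchor="fig-ex-hot-ranges">
            <name>Illustrative Hot Byte-Range Selection</name>
            <sourcecode type="python"><![CDATA[
# Illustrative only: pick "hot" byte ranges to report via ffv2_iostats4.
RESOLUTION = 1 << 20                 # 1 MB byte-range buckets (example default)
BYTES_PER_SEC_THRESHOLD = 1 << 20    # 1 MB/second (example default)
IOPS_THRESHOLD = 10                  # 10 I/O operations/second (example default)

def hot_ranges(ios, window_secs):
    """ios: iterable of (offset, length, is_write) seen in the window."""
    buckets = {}                     # bucket index -> [I/O count, byte count]
    for offset, length, _ in ios:
        b = offset // RESOLUTION
        stats = buckets.setdefault(b, [0, 0])
        stats[0] += 1
        stats[1] += length
    hot = []
    for b, (count, nbytes) in sorted(buckets.items()):
        if (count / window_secs >= IOPS_THRESHOLD or
                nbytes / window_secs >= BYTES_PER_SEC_THRESHOLD):
            # Report this bucket as (ffis_offset, ffis_length).
            hot.append((b * RESOLUTION, RESOLUTION))
    return hot
]]></sourcecode>
          </figure>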
          <t>For each byte range, ffis_offset and ffis_length represent the
starting offset of the range and the range length in bytes.
ffis_read.ii_count, ffis_read.ii_bytes, ffis_write.ii_count, and
ffis_write.ii_bytes represent the number of contiguous READ and
WRITE I/Os and the respective aggregate number of bytes transferred
within the reported byte range.</t>
          <t>The combination of ffis_deviceid and ffl_addr uniquely identifies
both the storage path and the network route to it.  Finally,
ffl_fhandle allows the metadata server to differentiate between
multiple read-only copies of the file on the same storage device.</t>
        </section>
      </section>
      <section anchor="sec-ff_layoutreturn4">
        <name>ffv2_layoutreturn4</name>
        <figure anchor="fig-ff_layoutreturn4">
          <name>ffv2_layoutreturn4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutreturn4 {
   ///         ffv2_ioerr4     fflr_ioerr_report<>;
   ///         ffv2_iostats4   fflr_iostats_report<>;
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>When data file I/O operations fail, fflr_ioerr_report&lt;&gt; is used to
report these errors to the metadata server as an array of elements
of type ffv2_ioerr4.  Each element in the array represents an error
that occurred on the data file identified by ffie_errors.de_deviceid.
If no errors are to be reported, the size of the fflr_ioerr_report&lt;&gt;
array is set to zero.  The client <bcp14>MAY</bcp14> also use fflr_iostats_report&lt;&gt;
to report a list of I/O statistics as an array of elements of type
ffv2_iostats4.  Each element in the array represents statistics for
a particular byte range.  Byte ranges are not guaranteed to be
disjoint and <bcp14>MAY</bcp14> repeat or intersect.</t>
      </section>
    </section>
    <section anchor="sec-LAYOUTERROR">
      <name>Flexible File Layout Type LAYOUTERROR</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send error
information to the metadata server (see <xref target="sec-io-error"/>), it <bcp14>MAY</bcp14>
use LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTERROR4args is treated the same as ff_ioerr4.</t>
    </section>
    <section anchor="flexible-file-layout-type-layoutstats">
      <name>Flexible File Layout Type LAYOUTSTATS</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send I/O
statistics to the metadata server (see <xref target="sec-layout-stats"/>), it
<bcp14>MAY</bcp14> use LAYOUTSTATS (see Section 15.7 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTSTATS4args.lsa_layoutupdate is overloaded with the same
contents as in ffis_layoutupdate.</t>
    </section>
    <section anchor="flexible-file-layout-type-creation-hint">
      <name>Flexible File Layout Type Creation Hint</name>
      <t>The layouthint4 type is defined in <xref target="RFC8881"/> as in
<xref target="fig-layouthint4-v1"/>.</t>
      <figure anchor="fig-layouthint4-v1">
        <name>layouthint4 v1</name>
        <sourcecode type="xdr"><![CDATA[
   struct layouthint4 {
       layouttype4        loh_type;
       opaque             loh_body<>;
   };
]]></sourcecode>
      </figure>
      <t>The layouthint4 structure is used by the client to pass a hint about
the type of layout it would like created for a particular file.  If
the loh_type layout type is LAYOUT4_FLEX_FILES, then the loh_body
opaque value is defined by the ff_layouthint4 type.</t>
    </section>
    <section anchor="fflayouthint4">
      <name>ff_layouthint4</name>
      <figure anchor="fig-ff_layouthint4-v2">
        <name>ff_layouthint4 (v1 compatibility)</name>
        <sourcecode type="xdr"><![CDATA[
   union ff_mirrors_hint switch (bool ffmc_valid) {
       case TRUE:
           uint32_t    ffmc_mirrors;
       case FALSE:
           void;
   };

   struct ff_layouthint4 {
       ff_mirrors_hint    fflh_mirrors_hint;
   };
]]></sourcecode>
      </figure>
      <t>The ff_layouthint4 is retained for backwards compatibility with
Flex Files v1 layouts.  For Flex Files v2 layouts, clients
<bcp14>SHOULD</bcp14> use ffv2_layouthint4 (<xref target="fig-ffv2_layouthint4"/>) instead,
which provides coding type selection and data protection geometry
hints via ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>).</t>
    </section>
    <section anchor="recalling-a-layout">
      <name>Recalling a Layout</name>
      <t>While Section 12.5.5 of <xref target="RFC8881"/> discusses reasons independent
of layout type for recalling a layout, the flexible file layout
type metadata server should recall outstanding layouts in the
following cases:</t>
      <ul spacing="normal">
        <li>
          <t>When the file's security policy changes, i.e., ACLs or permission
mode bits are set.</t>
        </li>
        <li>
          <t>When the file's layout changes, rendering outstanding layouts
invalid.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the need to enforce
locking constraints.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the requirements
regarding resilvering as described in <xref target="sec-mds-resilvering"/>.</t>
        </li>
      </ul>
      <section anchor="cbrecallany">
        <name>CB_RECALL_ANY</name>
        <t>The metadata server can use the CB_RECALL_ANY callback operation
to notify the client to return some or all of its layouts.  Section
22.3 of <xref target="RFC8881"/> defines the allowed types in the "NFSv4 Recallable
Object Types Registry".</t>
        <figure anchor="fig-new-rca4">
          <name>RCA4 masks for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MIN     = 20;
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MAX     = 21;
   ///
]]></sourcecode>
        </figure>
        <figure anchor="fig-CB_RECALL_ANY4args">
          <name>CB_RECALL_ANY4args XDR</name>
          <sourcecode type="xdr"><![CDATA[
   struct  CB_RECALL_ANY4args      {
       uint32_t        craa_layouts_to_keep;
       bitmap4         craa_type_mask;
   };
]]></sourcecode>
        </figure>
        <t>Typically, CB_RECALL_ANY will be used to recall client state when
the server needs to reclaim resources.  The craa_type_mask bitmap
specifies the type of resources that are recalled, and the
craa_layouts_to_keep value specifies how many of the recalled
flexible file layouts the client is allowed to keep.  The mask flags
for the flexible file layout type are defined as in <xref target="fig-mask-flags"/>.</t>
        <figure anchor="fig-mask-flags">
          <name>Recall Mask Flags for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_cb_recall_any_mask {
   ///     PNFS_FF_RCA4_TYPE_MASK_READ = 20,
   ///     PNFS_FF_RCA4_TYPE_MASK_RW   = 21
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>The flags represent the iomode of the recalled layouts.  In response,
the client <bcp14>SHOULD</bcp14> return layouts of the recalled iomode that it
needs the least, keeping at most craa_layouts_to_keep flexible file
layouts.</t>
        <t>The PNFS_FF_RCA4_TYPE_MASK_READ flag notifies the client to return
layouts of iomode LAYOUTIOMODE4_READ.  Similarly, the
PNFS_FF_RCA4_TYPE_MASK_RW flag notifies the client to return layouts
of iomode LAYOUTIOMODE4_RW.  When both mask flags are set, the
client is notified to return layouts of either iomode.</t>
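        <t>As a non-normative sketch, a client's selection of layouts to
return in response to CB_RECALL_ANY might look as follows.  The
least-recently-used ordering and the single-word bitmap are
illustrative assumptions; this document leaves the "least needed"
policy to the client.</t>
        <figure anchor="fig-ex-recall-selection">
          <name>Illustrative CB_RECALL_ANY Layout Selection</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: pick layouts to return for CB_RECALL_ANY.
PNFS_FF_RCA4_TYPE_MASK_READ = 20
PNFS_FF_RCA4_TYPE_MASK_RW = 21
LAYOUTIOMODE4_READ, LAYOUTIOMODE4_RW = 1, 2

def layouts_to_return(layouts, craa_type_mask, craa_layouts_to_keep):
    """layouts: list of (iomode, last_used); returns layouts to return."""
    wanted = set()
    if craa_type_mask & (1 << PNFS_FF_RCA4_TYPE_MASK_READ):
        wanted.add(LAYOUTIOMODE4_READ)
    if craa_type_mask & (1 << PNFS_FF_RCA4_TYPE_MASK_RW):
        wanted.add(LAYOUTIOMODE4_RW)
    recallable = [l for l in layouts if l[0] in wanted]
    # Keep the craa_layouts_to_keep most recently used layouts;
    # return the oldest ones first (an assumed "least needed" policy).
    recallable.sort(key=lambda l: l[1])
    excess = len(recallable) - craa_layouts_to_keep
    return recallable[:max(excess, 0)]
]]></sourcecode>
        </figure>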
      </section>
    </section>
    <section anchor="client-fencing">
      <name>Client Fencing</name>
      <t>In cases where clients are uncommunicative and their lease has
expired or when clients fail to return recalled layouts within a
lease period, the server <bcp14>MAY</bcp14> revoke client layouts and reassign
these resources to other clients (see Section 12.5.5 of <xref target="RFC8881"/>).
To avoid data corruption, the metadata server <bcp14>MUST</bcp14> fence off the
revoked clients from the respective data files as described in
<xref target="sec-Fencing-Clients"/>.</t>
    </section>
    <section anchor="new-nfsv42-error-values">
      <name>New NFSv4.2 Error Values</name>
      <figure anchor="fig-errors-xdr">
        <name>Errors XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* Erasure Coding error constants; added to nfsstat4 enum */
   ///
   /// const NFS4ERR_CODING_NOT_SUPPORTED   = 10097;
   /// const NFS4ERR_PAYLOAD_NOT_CONSISTENT = 10098;
   /// const NFS4ERR_CHUNK_LOCKED           = 10099;
   /// const NFS4ERR_CHUNK_GUARDED          = 10100;
   /// const NFS4ERR_PAYLOAD_LOST           = 10101;
   ///
]]></sourcecode>
      </figure>
      <t>The new error codes are shown in <xref target="fig-errors-xdr"/>.</t>
      <section anchor="error-definitions">
        <name>Error Definitions</name>
        <table anchor="tbl-protocol-errors">
          <name>Error Definitions</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Number</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">10097</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CODING_NOT_SUPPORTED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_NOT_CONSISTENT</td>
              <td align="left">10098</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_LOCKED</td>
              <td align="left">10099</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_LOCKED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_GUARDED</td>
              <td align="left">10100</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_GUARDED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_LOST</td>
              <td align="left">10101</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_LOST"/></td>
            </tr>
          </tbody>
        </table>
        <section anchor="sec-NFS4ERR_CODING_NOT_SUPPORTED">
          <name>NFS4ERR_CODING_NOT_SUPPORTED (Error Code 10097)</name>
          <t>The client requested an ffv2_coding_type4 that the metadata server
does not support.  For example, if the client sends a layout hint
requesting an erasure coding type that the metadata server does not
support, this error code can be returned.  The client might have to
send the layout hint several times to determine the overlapping set
of supported erasure coding types.</t>
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT">
          <name>NFS4ERR_PAYLOAD_NOT_CONSISTENT (Error Code 10098)</name>
          <t>The client encountered a payload whose blocks were inconsistent
and remained inconsistent.  As the client cannot tell whether another
client is actively writing, it informs the metadata server of this
error via LAYOUTERROR.  The metadata server can then arrange for
repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_LOCKED">
          <name>NFS4ERR_CHUNK_LOCKED (Error Code 10099)</name>
          <t>The client tried an operation on a chunk, and the data server
reported that the chunk was locked.  The client then informs the
metadata server of this error via LAYOUTERROR.  The metadata server
can then arrange for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_GUARDED">
          <name>NFS4ERR_CHUNK_GUARDED (Error Code 10100)</name>
          <t>The client tried a guarded CHUNK_WRITE whose guard did not match
the guard on the chunk in the data file.  As such, the CHUNK_WRITE was
rejected, and the client should refresh its cached copy of the
chunk.</t>
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_LOST">
          <name>NFS4ERR_PAYLOAD_LOST (Error Code 10101)</name>
          <t>Returned by a repair client on the CB_CHUNK_REPAIR response
(ccrr_status) to indicate that the identified ranges cannot be
repaired and the underlying data is no longer recoverable.
Causes include: too few surviving shards to meet the
reconstruction threshold (Katz criterion for Mojette, any
k-of-(k+m) subset for Reed-Solomon Vandermonde), inability to
roll back to a previously committed payload because that payload
is also lost, or exhaustion of all FFV2_DS_FLAGS_SPARE and
FFV2_DS_FLAGS_REPAIR data servers available in the layout.</t>
          <t>On receipt, the metadata server <bcp14>MUST NOT</bcp14> retry the repair by
selecting a different client -- the payload is damaged and the
metadata server transitions the affected file or byte range into
an implementation-defined damaged state.  Operator notification
and restore-from-snapshot are out of scope for this specification.</t>
          <t>NFS4ERR_PAYLOAD_LOST is distinct from NFS4ERR_DELAY (transient;
metadata server <bcp14>MAY</bcp14> extend the deadline or re-select) and from
NFS4ERR_IO (per-operation failure; metadata server <bcp14>MAY</bcp14> retry or
re-select).  Only NFS4ERR_PAYLOAD_LOST is terminal.</t>
        </section>
      </section>
      <section anchor="operations-and-their-valid-errors">
        <name>Operations and Their Valid Errors</name>
        <t>The operations and their valid errors are presented in
<xref target="tbl-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-ops-and-errors">
          <name>Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CHUNK_COMMIT</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ERROR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_FINALIZE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_HEADER_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_LOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_LOCKED, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_PAYLOAD_NOT_CONSISTENT, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_REPAIRED</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ROLLBACK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_UNLOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_GUARDED, NFS4ERR_CHUNK_LOCKED, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE_REPAIR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">TRUST_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, NFS4ERR_INVAL, NFS4ERR_NOFILEHANDLE, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">REVOKE_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, NFS4ERR_INVAL, NFS4ERR_NOFILEHANDLE, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">BULK_REVOKE_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="callback-operations-and-their-valid-errors">
        <name>Callback Operations and Their Valid Errors</name>
        <t>The callback operations and their valid errors are presented in
<xref target="tbl-cb-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-cb-ops-and-errors">
          <name>Callback Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Callback Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CB_CHUNK_REPAIR</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DEADSESSION, NFS4ERR_DELAY, NFS4ERR_CODING_NOT_SUPPORTED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, NFS4ERR_NOTSUPP, NFS4ERR_OLD_STATEID, NFS4ERR_PAYLOAD_LOST, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="errors-and-the-operations-that-use-them">
        <name>Errors and the Operations That Use Them</name>
        <t>The errors and the operations that use them are presented in
<xref target="tbl-errors-and-ops"/>.  All operations not defined in this document
are defined in Section 18 of <xref target="RFC8881"/> and Section 15 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-errors-and-ops">
          <name>Errors and the Operations That Use Them</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Operations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">CB_CHUNK_REPAIR, LAYOUTGET</td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_LOST</td>
              <td align="left">CB_CHUNK_REPAIR</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="exchgid4flagusepnfsds">
      <name>EXCHGID4_FLAG_USE_PNFS_DS</name>
      <figure anchor="fig-EXCHGID4_FLAG_USE_PNFS_DS">
        <name>The EXCHGID4_FLAG_USE_PNFS_DS</name>
        <sourcecode type="xdr"><![CDATA[
   /// const EXCHGID4_FLAG_USE_ERASURE_DS      = 0x00100000;
]]></sourcecode>
      </figure>
      <t>When a data server connects to a metadata server, it can state its
pNFS role via EXCHANGE_ID (see Section 18.35 of <xref target="RFC8881"/>).
The data server can use EXCHGID4_FLAG_USE_ERASURE_DS (see
<xref target="fig-EXCHGID4_FLAG_USE_PNFS_DS"/>) to indicate that it supports the
new NFSv4.2 operations introduced in this document.  Section 13.1
of <xref target="RFC8881"/> describes the interaction of the various pNFS roles
masked by EXCHGID4_FLAG_MASK_PNFS.  That mask does not cover
EXCHGID4_FLAG_USE_ERASURE_DS; i.e., EXCHGID4_FLAG_USE_ERASURE_DS can
be used in combination with all of the pNFS flags.</t>
      <t>If the data server sets EXCHGID4_FLAG_USE_ERASURE_DS during the
EXCHANGE_ID operation, then it <bcp14>MUST</bcp14> support all of the operations
in <xref target="tbl-protocol-ops"/>.  Further, this support is orthogonal to the
Erasure Coding Type selected.  The data server is unaware of which type
is driving the I/O.</t>
    </section>
    <section anchor="new-nfsv42-attributes">
      <name>New NFSv4.2 Attributes</name>
      <section anchor="sec-fattr4_coding_block_size">
        <name>Attribute 89: fattr4_coding_block_size</name>
        <figure anchor="fig-fattr4_coding_block_size">
          <name>XDR for fattr4_coding_block_size</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t                  fattr4_coding_block_size;
   ///
   /// const FATTR4_CODING_BLOCK_SIZE  = 89;
   ///
]]></sourcecode>
        </figure>
        <t>The new attribute fattr4_coding_block_size (see
<xref target="fig-fattr4_coding_block_size"/>) is an <bcp14>OPTIONAL</bcp14> NFSv4.2 attribute
that <bcp14>MUST</bcp14> be supported if the metadata server supports the Flexible
File Version 2 Layout Type.  By querying it, the client can determine
the data block size it is to use when coding the data blocks to
chunks.</t>
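        <t>As a non-normative sketch, a client might align an application
write to coding blocks of fattr4_coding_block_size bytes as follows;
the helper name is illustrative.  Partial head or tail blocks would
require read-modify-write of the surrounding block before coding.</t>
        <figure anchor="fig-ex-coding-blocks">
          <name>Illustrative Coding-Block Alignment</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: split a write into per-coding-block pieces.
def coding_blocks(offset, length, block_size):
    """Yield (block_offset, offset_within_block, byte_count) triples
    covering [offset, offset + length) in block_size units."""
    end = offset + length
    pos = offset
    out = []
    while pos < end:
        block_off = (pos // block_size) * block_size
        n = min(block_off + block_size, end) - pos
        out.append((block_off, pos - block_off, n))
        pos += n
    return out
]]></sourcecode>
        </figure>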
      </section>
    </section>
    <section anchor="new-nfsv42-common-data-structures">
      <name>New NFSv4.2 Common Data Structures</name>
      <section anchor="sec-chunk_guard4">
        <name>chunk_guard4</name>
        <figure anchor="fig-chunk_guard4">
          <name>XDR for chunk_guard4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_GUARD_CLIENT_ID_MDS  = 0xFFFFFFFF;
   ///
   /// struct chunk_guard4 {
   ///     uint32_t   cg_gen_id;
   ///     uint32_t   cg_client_id;
   /// };
]]></sourcecode>
        </figure>
        <t>On the wire, each chunk in a CHUNK_WRITE carries a 12-byte header --
the 8-byte chunk_guard4 followed by the 4-byte cr_crc -- and then the
opaque payload, as shown in
<xref target="fig-chunk-wire-layout"/>.  The payload length is carried
separately in the CHUNK_WRITE4args cwa_chunks&lt;&gt; slot; the
diagram shows the per-chunk framing only.</t>
        <figure anchor="fig-chunk-wire-layout">
          <name>Per-chunk wire layout</name>
          <artwork><![CDATA[
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                          cg_gen_id                            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         cg_client_id                          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                           cr_crc                              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    opaque payload ...                         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Bytes 0-3:   cg_gen_id      (per-chunk generation counter)
   Bytes 4-7:   cg_client_id   (owning-client short id)
   Bytes 8-11:  cr_crc         (CRC32 over the opaque payload)
   Bytes 12-N:  opaque payload (encoded shard; variable length)
]]></artwork>
        </figure>
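        <t>The framing in <xref target="fig-chunk-wire-layout"/> can be
sketched in Python as follows (non-normative).  Big-endian packing
matches XDR's byte order; the use of the zlib CRC-32 polynomial for
cr_crc is an assumption for illustration only.</t>
        <figure anchor="fig-ex-chunk-framing">
          <name>Illustrative Per-Chunk Framing</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: pack and unpack the per-chunk wire framing.
import struct
import zlib

def frame_chunk(cg_gen_id, cg_client_id, payload):
    """12-byte big-endian header (gen id, client id, CRC) + payload."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return struct.pack(">III", cg_gen_id, cg_client_id, crc) + payload

def parse_chunk(frame):
    """Split a frame back into (cg_gen_id, cg_client_id, payload),
    verifying the CRC over the opaque payload."""
    cg_gen_id, cg_client_id, crc = struct.unpack_from(">III", frame)
    payload = frame[12:]
    if zlib.crc32(payload) & 0xFFFFFFFF != crc:
        raise ValueError("payload CRC mismatch")
    return cg_gen_id, cg_client_id, payload
]]></sourcecode>
        </figure>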
        <t>The chunk_guard4 (see <xref target="fig-chunk_guard4"/>) is effectively a 64-bit
value identifying a specific write transaction on a specific chunk.
It has two fields:</t>
        <dl>
          <dt>cg_gen_id:</dt>
          <dd>
            <t>A per-chunk monotonic generation counter.  Each chunk's gen_id
starts at 0 when the chunk is first written and is incremented
on each successful write by any client.  cg_gen_id is NOT a
timestamp -- the protocol does not rely on a global clock,
and no interpretation of cg_gen_id as a wall-clock value is
supported.  cg_gen_id values are NOT comparable across distinct
chunks; a given cg_gen_id is only meaningful within the scope
of a single chunk on a single file.</t>
          </dd>
          <dt>cg_client_id:</dt>
          <dd>
            <t>A 32-bit value established by the metadata server at the time
the client's layout is granted (see <xref target="sec-ffv2-mirror4"/> and
ffm_client_id).  The metadata server <bcp14>MUST</bcp14> assign distinct
cg_client_id values to distinct clients that hold concurrent
write layouts on the same file.  cg_client_id is opaque with
respect to client identity -- a data server <bcp14>MUST NOT</bcp14>
interpret its bits as naming or ordering clients in any
external sense.  The value supports two operations only:
equality comparison (to detect whether two chunks were written
by the same transaction) and numeric comparison (to implement
the tiebreaker rule below).</t>
          </dd>
          <dt>Uniqueness contract:</dt>
          <dd>
            <t>The pair (cg_gen_id, cg_client_id) uniquely identifies a write
transaction on a chunk.  Neither field alone is globally
unique; two clients <bcp14>MAY</bcp14> independently write with the same
cg_gen_id on the same chunk (in particular, both may write
with cg_gen_id equal to some prior value + 1), and the
cg_client_id is what makes the resulting transactions
distinguishable.</t>
          </dd>
          <dt>Deterministic tiebreaker for concurrent writers:</dt>
          <dd>
            <t>When two or more clients race on the same chunk in the
multi-writer mode, the client whose cg_client_id compares
numerically lowest wins the race.  A data server enforces this
by accepting the first CHUNK_WRITE whose guard check succeeds
and rejecting later writers with NFS4ERR_CHUNK_GUARDED; across
the mirror set, the subset of data servers on which each
client wins will vary, but the deterministic tiebreaker
ensures all clients agree on which client's write ultimately
becomes COMMITTED.  A client that lost the race on at least
one data server <bcp14>MUST</bcp14> re-read the chunk and <bcp14>MAY</bcp14> retry its write
with a refreshed cg_gen_id.  A client that detects no forward
progress after a bounded number of retries <bcp14>MUST</bcp14> escalate via
LAYOUTERROR and the repair coordination flow in
<xref target="sec-repair-selection"/>.</t>
          </dd>
        </dl>
        <t>The numeric ordering of cg_client_id values is arbitrary with
respect to the clients' external identities -- it is a
deterministic total order over the opaque 32-bit values, not a
preference ordering over the clients themselves.  A deployment
that requires a specific client to win a race <bcp14>MUST</bcp14> arrange
cg_client_id assignment at the metadata server; the protocol does
not provide a preference mechanism at layout-grant time.</t>
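        <t>The deterministic tiebreaker above can be sketched as follows
(non-normative; the function name is illustrative).  Given the guards
observed for the same target generation across racing writers, every
party computes the same winner.</t>
        <figure anchor="fig-ex-tiebreaker">
          <name>Illustrative Concurrent-Writer Tiebreaker</name>
          <sourcecode type="python"><![CDATA[
# Illustrative only: the numerically lowest cg_client_id wins the race.
def winning_guard(guards):
    """guards: iterable of (cg_gen_id, cg_client_id) pairs observed for
    the same chunk and the same target generation."""
    gens = {g for g, _ in guards}
    assert len(gens) == 1, "tiebreaker applies within one generation only"
    return min(guards, key=lambda g: g[1])  # lowest cg_client_id wins
]]></sourcecode>
        </figure>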
        <section anchor="metadata-server-assignment-rules-for-cgclientid">
          <name>Metadata-Server Assignment Rules for cg_client_id</name>
          <t>To uphold the uniqueness contract, the metadata server <bcp14>MUST</bcp14>
follow these rules when assigning cg_client_id (that is, when
populating ffm_client_id at layout-grant time):</t>
          <ul spacing="normal">
            <li>
              <t>Two clients holding concurrent write layouts on the same
file <bcp14>MUST</bcp14> receive distinct cg_client_id values.  A client
that holds only a read layout need not be assigned a
distinct value.</t>
            </li>
            <li>
              <t>The reserved sentinel CHUNK_GUARD_CLIENT_ID_MDS (0xFFFFFFFF)
<bcp14>MUST NOT</bcp14> be assigned to any client.</t>
            </li>
            <li>
              <t>A cg_client_id <bcp14>MAY</bcp14> be reused by the metadata server after
the prior holder's layout has been fully returned (via
LAYOUTRETURN or revocation).  The metadata server <bcp14>SHOULD</bcp14>
avoid reusing a cg_client_id within a single lease period
to simplify diagnosis of stale writes.</t>
            </li>
            <li>
              <t>cg_client_id values do not persist across metadata-server
restart.  Clients reclaiming layouts during the grace period
receive freshly assigned values; the protocol does not rely
on any pre-restart assignment surviving.</t>
            </li>
          </ul>
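          <t>The assignment rules above can be sketched as a simple allocator
(non-normative; the class name and data structures are illustrative).</t>
          <figure anchor="fig-ex-client-id-alloc">
            <name>Illustrative cg_client_id Allocator</name>
            <sourcecode type="python"><![CDATA[
# Illustrative only: a metadata-server cg_client_id allocator that
# gives distinct ids to concurrent writers on a file, never assigns
# the sentinel, and allows reuse only after the layout is returned.
CHUNK_GUARD_CLIENT_ID_MDS = 0xFFFFFFFF

class ClientIdAllocator:
    def __init__(self):
        self.next_id = 0
        self.held = {}                     # (file, client) -> cg_client_id

    def grant(self, file_handle, client):
        key = (file_handle, client)
        if key in self.held:               # re-grant keeps the same id
            return self.held[key]
        cid = self.next_id
        self.next_id += 1
        assert cid != CHUNK_GUARD_CLIENT_ID_MDS  # sentinel never assigned
        self.held[key] = cid
        return cid

    def layout_returned(self, file_handle, client):
        # The id becomes eligible for reuse (ideally not within a lease).
        self.held.pop((file_handle, client), None)
]]></sourcecode>
          </figure>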
        </section>
        <section anchor="data-server-collision-handling">
          <name>Data-Server Collision Handling</name>
          <t>A (cg_gen_id, cg_client_id) pair that the uniqueness contract
would otherwise render unique can nonetheless collide if a
client and the metadata server disagree about which
cg_client_id the client currently holds, or if a client
presents a spoofed cg_client_id.  The data server enforces the
contract locally:</t>
          <ul spacing="normal">
            <li>
              <t>If the data server receives a CHUNK_WRITE whose
chunk_guard4 has the same (cg_gen_id, cg_client_id) as a
chunk already in PENDING, FINALIZED, or COMMITTED state
AND the presented payload differs from the retained
payload, the data server <bcp14>MUST</bcp14> reject the write with
NFS4ERR_CHUNK_GUARDED and <bcp14>SHOULD</bcp14> report the collision to
the metadata server via LAYOUTERROR.  This situation is a
protocol violation on one side of the conversation; the
metadata server resolves it by revoking the offending
client's layout and selecting a repair client under
<xref target="sec-repair-selection"/>.</t>
            </li>
            <li>
              <t>If a client presents CHUNK_GUARD_CLIENT_ID_MDS as
cg_client_id in any client-originated operation, the data
server <bcp14>MUST</bcp14> reject the operation with NFS4ERR_INVAL (see
<xref target="sec-chunk_guard_mds"/>).</t>
            </li>
            <li>
              <t>A cg_client_id that does not match any layout the data
server has been told about (via TRUST_STATEID) <bcp14>MUST</bcp14> be
rejected.  Unknown cg_client_id values are treated as stale
layouts; the data server returns the error specified in
<xref target="sec-tight-coupling-control"/> for unknown stateids.</t>
            </li>
          </ul>
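          <t>The enforcement rules above can be modeled as a small sketch.
The following Python fragment is illustrative only: the names
validate_chunk_write, known_client_ids, and retained are
hypothetical, and string values stand in for the NFSv4 status
codes.</t>
          <figure anchor="fig-example-guard-enforcement">
            <name>Illustrative sketch of local guard enforcement</name>
            <sourcecode type="python"><![CDATA[
CHUNK_GUARD_CLIENT_ID_MDS = 0xFFFFFFFF

def validate_chunk_write(guard, payload, known_client_ids, retained):
    """Model of the data server's local checks on a CHUNK_WRITE.

    guard            -- (cg_gen_id, cg_client_id) from the chunk_guard4
    payload          -- bytes presented by the writer
    known_client_ids -- cg_client_id values learned via TRUST_STATEID
    retained         -- dict: guard -> retained payload, for chunks in
                        PENDING, FINALIZED, or COMMITTED state
    """
    gen_id, client_id = guard
    # Clients may never present the reserved MDS sentinel.
    if client_id == CHUNK_GUARD_CLIENT_ID_MDS:
        return "NFS4ERR_INVAL"
    # Unknown cg_client_id values are treated as stale layouts.
    if client_id not in known_client_ids:
        return "NFS4ERR_BAD_STATEID"  # stand-in for the unknown-stateid error
    # Same guard but a different payload: a collision, rejected here
    # and reported to the metadata server via LAYOUTERROR.
    if guard in retained and retained[guard] != payload:
        return "NFS4ERR_CHUNK_GUARDED"
    return "NFS4_OK"
]]></sourcecode>
          </figure>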
        </section>
        <section anchor="sec-chunk_guard_mds">
          <name>Reserved cg_client_id Value: CHUNK_GUARD_CLIENT_ID_MDS</name>
          <t>The value <tt>CHUNK_GUARD_CLIENT_ID_MDS</tt> (0xFFFFFFFF) is reserved.
It denotes that the chunk lock is held by the metadata server
itself, in escrow during a repair coordination sequence (see
<xref target="sec-repair-selection"/>).  The data server produces a
chunk_guard4 with this cg_client_id when the metadata server
revokes the prior holder's stateid while that holder still holds
chunk locks; the locks <bcp14>MUST NOT</bcp14> be dropped and are transferred to
the MDS-escrow owner instead.</t>
          <t>The metadata server does not originate CHUNK_LOCK or CHUNK_WRITE
traffic on its own session.  Clients <bcp14>MUST NOT</bcp14> present
CHUNK_GUARD_CLIENT_ID_MDS as the cg_client_id of any
client-originated chunk_guard4 or chunk_owner4.  A data server
that receives such a value from a client <bcp14>MUST</bcp14> reject the
operation with NFS4ERR_INVAL.</t>
          <t>The MDS-escrow owner is released only by a CHUNK_LOCK from the
client selected via CB_CHUNK_REPAIR, carrying
CHUNK_LOCK_FLAGS_ADOPT.  See <xref target="sec-CHUNK_LOCK"/>.</t>
        </section>
      </section>
      <section anchor="chunkowner4">
        <name>chunk_owner4</name>
        <figure anchor="fig-chunk_owner4">
          <name>XDR for chunk_owner4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_owner4 {
   ///     chunk_guard4   co_guard;
   ///     uint32_t       co_chunk_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_owner4 (see <xref target="fig-chunk_owner4"/>) is used to determine
when and by whom a block was written.  The co_chunk_id is used
to identify the chunk and <bcp14>MUST</bcp14> be the index of the chunk within
the file.  I.e., it is the offset of the start of the chunk
divided by the chunk length.  The co_guard is a chunk_guard4
(see <xref target="sec-chunk_guard4"/>), used to identify a given
transaction.</t>
        <t>The co_guard is like the change attribute (see Section 5.8.1.4 of
<xref target="RFC8881"/>) in that each chunk write by a given client must have
a unique co_guard.  I.e., it can be determined to which transaction
across all data files a chunk corresponds.</t>
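        <t>As a minimal sketch (the helper name co_chunk_id is illustrative,
not a protocol element), the index can be computed from a
chunk-aligned byte offset and the chunk length:</t>
        <figure anchor="fig-example-co_chunk_id">
          <name>Illustrative computation of co_chunk_id</name>
          <sourcecode type="python"><![CDATA[
def co_chunk_id(byte_offset, chunk_len):
    """co_chunk_id is the index of the chunk within the file: the
    offset of the start of the chunk divided by the chunk length."""
    assert byte_offset % chunk_len == 0  # offsets are chunk-aligned
    return byte_offset // chunk_len
]]></sourcecode>
        </figure>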
      </section>
    </section>
    <section anchor="sec-new-ops">
      <name>New NFSv4.2 Operations</name>
      <figure anchor="fig-ops-xdr">
        <name>Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New operations for Erasure Coding start here */
   ///
   ///  OP_CHUNK_COMMIT        = 77,
   ///  OP_CHUNK_ERROR         = 78,
   ///  OP_CHUNK_FINALIZE      = 79,
   ///  OP_CHUNK_HEADER_READ   = 80,
   ///  OP_CHUNK_LOCK          = 81,
   ///  OP_CHUNK_READ          = 82,
   ///  OP_CHUNK_REPAIRED      = 83,
   ///  OP_CHUNK_ROLLBACK      = 84,
   ///  OP_CHUNK_UNLOCK        = 85,
   ///  OP_CHUNK_WRITE         = 86,
   ///  OP_CHUNK_WRITE_REPAIR  = 87,
   ///
   /// /* MDS-to-DS control-plane operations for tight coupling */
   ///
   ///  OP_TRUST_STATEID       = 88,
   ///  OP_REVOKE_STATEID      = 89,
   ///  OP_BULK_REVOKE_STATEID = 90,
   ///
]]></sourcecode>
      </figure>
      <t>The following amendment blocks extend the nfs_argop4 and
nfs_resop4 dispatch unions defined in <xref target="RFC7863"/> with arms for
each of the new operations defined in this document.  A consumer
that combines this document's extracted XDR with the RFC 7863
XDR applies these amendments at the union's extension point.</t>
      <figure anchor="fig-nfs_argop4-amend">
        <name>nfs_argop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_argop4 amendment block */
   ///
   /// case OP_CHUNK_COMMIT: CHUNK_COMMIT4args opchunkcommit;
   /// case OP_CHUNK_ERROR: CHUNK_ERROR4args opchunkerror;
   /// case OP_CHUNK_FINALIZE: CHUNK_FINALIZE4args opchunkfinalize;
   /// case OP_CHUNK_HEADER_READ:
   ///     CHUNK_HEADER_READ4args opchunkheaderread;
   /// case OP_CHUNK_LOCK: CHUNK_LOCK4args opchunklock;
   /// case OP_CHUNK_READ: CHUNK_READ4args opchunkread;
   /// case OP_CHUNK_REPAIRED: CHUNK_REPAIRED4args opchunkrepaired;
   /// case OP_CHUNK_ROLLBACK: CHUNK_ROLLBACK4args opchunkrollback;
   /// case OP_CHUNK_UNLOCK: CHUNK_UNLOCK4args opchunkunlock;
   /// case OP_CHUNK_WRITE: CHUNK_WRITE4args opchunkwrite;
   /// case OP_CHUNK_WRITE_REPAIR:
   ///     CHUNK_WRITE_REPAIR4args opchunkwriterepair;
   /// case OP_TRUST_STATEID: TRUST_STATEID4args optruststateid;
   /// case OP_REVOKE_STATEID: REVOKE_STATEID4args oprevokestateid;
   /// case OP_BULK_REVOKE_STATEID:
   ///     BULK_REVOKE_STATEID4args opbulkrevokestateid;
]]></sourcecode>
      </figure>
      <figure anchor="fig-nfs_resop4-amend">
        <name>nfs_resop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_resop4 amendment block */
   ///
   /// case OP_CHUNK_COMMIT: CHUNK_COMMIT4res opchunkcommit;
   /// case OP_CHUNK_ERROR: CHUNK_ERROR4res opchunkerror;
   /// case OP_CHUNK_FINALIZE: CHUNK_FINALIZE4res opchunkfinalize;
   /// case OP_CHUNK_HEADER_READ:
   ///     CHUNK_HEADER_READ4res opchunkheaderread;
   /// case OP_CHUNK_LOCK: CHUNK_LOCK4res opchunklock;
   /// case OP_CHUNK_READ: CHUNK_READ4res opchunkread;
   /// case OP_CHUNK_REPAIRED: CHUNK_REPAIRED4res opchunkrepaired;
   /// case OP_CHUNK_ROLLBACK: CHUNK_ROLLBACK4res opchunkrollback;
   /// case OP_CHUNK_UNLOCK: CHUNK_UNLOCK4res opchunkunlock;
   /// case OP_CHUNK_WRITE: CHUNK_WRITE4res opchunkwrite;
   /// case OP_CHUNK_WRITE_REPAIR:
   ///     CHUNK_WRITE_REPAIR4res opchunkwriterepair;
   /// case OP_TRUST_STATEID: TRUST_STATEID4res optruststateid;
   /// case OP_REVOKE_STATEID: REVOKE_STATEID4res oprevokestateid;
   /// case OP_BULK_REVOKE_STATEID:
   ///     BULK_REVOKE_STATEID4res opbulkrevokestateid;
]]></sourcecode>
      </figure>
      <t>Operations 77 through 87 (the CHUNK_* operations) are sent by
clients to storage devices on the data path.  Operations 88
through 90 (TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID)
are sent by the metadata server to storage devices on the
MDS-to-DS control session (see
<xref target="sec-tight-coupling-control-session"/>); they <bcp14>MUST NOT</bcp14> be sent by
pNFS clients.</t>
      <table anchor="tbl-protocol-ops">
        <name>Protocol OPs</name>
        <thead>
          <tr>
            <th align="left">Operation</th>
            <th align="left">Number</th>
            <th align="left">Target Server</th>
            <th align="left">Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">CHUNK_COMMIT</td>
            <td align="left">77</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_COMMIT"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ERROR</td>
            <td align="left">78</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_ERROR"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_FINALIZE</td>
            <td align="left">79</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_FINALIZE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_HEADER_READ</td>
            <td align="left">80</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_HEADER_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_LOCK</td>
            <td align="left">81</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_LOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_READ</td>
            <td align="left">82</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_REPAIRED</td>
            <td align="left">83</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_REPAIRED"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ROLLBACK</td>
            <td align="left">84</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_ROLLBACK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_UNLOCK</td>
            <td align="left">85</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_UNLOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE</td>
            <td align="left">86</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE_REPAIR</td>
            <td align="left">87</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE_REPAIR"/></td>
          </tr>
          <tr>
            <td align="left">TRUST_STATEID</td>
            <td align="left">88</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-TRUST_STATEID"/></td>
          </tr>
          <tr>
            <td align="left">REVOKE_STATEID</td>
            <td align="left">89</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-REVOKE_STATEID"/></td>
          </tr>
          <tr>
            <td align="left">BULK_REVOKE_STATEID</td>
            <td align="left">90</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-BULK_REVOKE_STATEID"/></td>
          </tr>
        </tbody>
      </table>
      <section anchor="sec-CHUNK_COMMIT">
        <name>Operation 77: CHUNK_COMMIT - Activate Cached Chunk Data</name>
        <section anchor="arguments">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_COMMIT4args">
            <name>XDR for CHUNK_COMMIT4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cca_offset;
   ///     count4          cca_count;
   ///     chunk_owner4    cca_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_COMMIT4resok">
            <name>XDR for CHUNK_COMMIT4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4resok {
   ///     verifier4       ccr_writeverf;
   ///     nfsstat4        ccr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_COMMIT4res">
            <name>XDR for CHUNK_COMMIT4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_COMMIT4res switch (nfsstat4 ccr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_COMMIT4resok   ccr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description">
          <name>DESCRIPTION</name>
          <t>CHUNK_COMMIT is COMMIT (see Section 18.3 of <xref target="RFC8881"/>) with
additional chunk_owner semantics that activate the blocks.
As such, all of the normal semantics of COMMIT directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_COMMIT
works on blocks and not a raw data stream.  As such, cca_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cca_count is a count of
blocks to activate and not bytes to activate.</t>
          <t>While it may appear that the combination of cca_offset and
cca_count is redundant with cca_chunks, the purpose of cca_chunks
is to allow the data server to differentiate between potentially
multiple pending blocks.</t>
          <section anchor="interaction-with-chunkfinalize">
            <name>Interaction with CHUNK_FINALIZE</name>
            <t>CHUNK_COMMIT transitions a chunk from FINALIZED to COMMITTED
(see <xref target="sec-system-model-chunk-state"/>).  A chunk <bcp14>MUST</bcp14> have
previously been transitioned from PENDING to FINALIZED via
CHUNK_FINALIZE before CHUNK_COMMIT is accepted:</t>
            <ul spacing="normal">
              <li>
                <t>If the target chunk is PENDING (i.e., the writer never
issued CHUNK_FINALIZE), the data server <bcp14>MUST</bcp14> reject the
CHUNK_COMMIT entry for that chunk with
NFS4ERR_PAYLOAD_NOT_CONSISTENT in the corresponding
ccr_status slot.  The writer is expected to either issue
CHUNK_FINALIZE to advance the state or CHUNK_ROLLBACK to
abandon the PENDING generation.</t>
              </li>
              <li>
                <t>If the target chunk is EMPTY (no generation to commit), the
data server <bcp14>MUST</bcp14> reject with NFS4ERR_PAYLOAD_NOT_CONSISTENT
for that chunk.</t>
              </li>
              <li>
                <t>If the target chunk is already COMMITTED at the generation
identified by the cca_chunks entry's cg_gen_id, the
CHUNK_COMMIT is idempotent and <bcp14>MUST</bcp14> succeed.  Idempotence
preserves the NFSv4 COMMIT contract for duplicate-request
retransmission.</t>
              </li>
              <li>
                <t>If the target chunk is FINALIZED at a different generation
than the one named in the cca_chunks entry, the data server
<bcp14>MUST</bcp14> reject with NFS4ERR_CHUNK_GUARDED.  A client that sees
this has lost a race and <bcp14>SHOULD</bcp14> re-read the chunk (see
<xref target="sec-chunk_guard4"/>).</t>
              </li>
            </ul>
            <t>The three-step CHUNK_WRITE -&gt; CHUNK_FINALIZE -&gt; CHUNK_COMMIT
sequence <bcp14>MAY</bcp14> be pipelined within a single NFSv4.2 compound
(see <xref target="sec-system-model-progress"/>); each operation evaluates the
current state of the target chunks independently.</t>
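            <t>The per-chunk decision rules above can be modeled as follows.
This Python fragment is an illustrative sketch, not normative:
chunk_commit_status and the state strings are hypothetical names
for the states in the system model, and combinations not covered
by the rules above are treated conservatively as guarded.</t>
            <figure anchor="fig-example-chunk-commit-states">
              <name>Illustrative model of CHUNK_COMMIT state handling</name>
              <sourcecode type="python"><![CDATA[
EMPTY, PENDING, FINALIZED, COMMITTED = \
    "EMPTY", "PENDING", "FINALIZED", "COMMITTED"

def chunk_commit_status(state, stored_gen_id, requested_gen_id):
    """Return (new_state, per-chunk ccr_status) for one cca_chunks entry."""
    if state in (EMPTY, PENDING):
        # Nothing commit-ready: the writer must CHUNK_FINALIZE first
        # (or CHUNK_ROLLBACK a PENDING generation).
        return state, "NFS4ERR_PAYLOAD_NOT_CONSISTENT"
    if state == COMMITTED and stored_gen_id == requested_gen_id:
        # Duplicate retransmission: idempotent success.
        return COMMITTED, "NFS4_OK"
    if state == FINALIZED and stored_gen_id == requested_gen_id:
        return COMMITTED, "NFS4_OK"
    # FINALIZED (or COMMITTED) at a different generation: lost a race.
    return state, "NFS4ERR_CHUNK_GUARDED"
]]></sourcecode>
            </figure>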
          </section>
          <section anchor="interaction-with-a-locked-chunk">
            <name>Interaction with a Locked Chunk</name>
            <t>When a chunk is locked via CHUNK_LOCK (see <xref target="sec-CHUNK_LOCK"/>),
CHUNK_COMMIT is permitted only when the submitter owns the
lock -- that is, when the stateid carried on the compound
matches the lock holder's stateid (or is a
CHUNK_LOCK_FLAGS_ADOPT-transferred continuation):</t>
            <ul spacing="normal">
              <li>
                <t>The owning writer <bcp14>MAY</bcp14> issue CHUNK_COMMIT; the chunk
transitions from FINALIZED to COMMITTED normally.</t>
              </li>
              <li>
                <t>A non-owning client <bcp14>MUST</bcp14> receive NFS4ERR_CHUNK_LOCKED in
the corresponding ccr_status slot.  The chunk's state is
not changed.</t>
              </li>
              <li>
                <t>During repair, the MDS-escrow owner
(CHUNK_GUARD_CLIENT_ID_MDS, see <xref target="sec-chunk_guard_mds"/>)
holds the lock while the repair client adopts it via
CHUNK_LOCK_FLAGS_ADOPT.  CHUNK_COMMIT during the escrow
window is permitted only to the holder of the adopted
lock.</t>
              </li>
            </ul>
            <t>This rule is what <xref target="sec-system-model-consistency"/> calls
"lock continuity across revocation": the COMMIT privilege
follows the lock without gaps in which a non-owner could race.</t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_ERROR">
        <name>Operation 78: CHUNK_ERROR - Report Error on Cached Chunk Data</name>
        <section anchor="arguments-1">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ERROR4args">
            <name>XDR for CHUNK_ERROR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cea_stateid;
   ///     offset4         cea_offset;
   ///     count4          cea_count;
   ///     nfsstat4        cea_error;
   ///     chunk_owner4    cea_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-1">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ERROR4res">
            <name>XDR for CHUNK_ERROR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4res {
   ///     nfsstat4        cer_status;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-1">
          <name>DESCRIPTION</name>
          <t>CHUNK_ERROR allows a client to report that one or more chunks at
the specified block range are in error.  The cea_offset is the
starting block offset and cea_count is the number of blocks
affected.  The cea_error indicates the type of error detected
(e.g., NFS4ERR_PAYLOAD_NOT_CONSISTENT for a CRC mismatch).</t>
          <t>The data server records the error state for the affected blocks.
Once marked as errored, the blocks are not returned by CHUNK_READ
until they are repaired via CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
and the repair is confirmed via CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>).</t>
          <t>The client <bcp14>SHOULD</bcp14> report errors via CHUNK_ERROR before reporting
them to the metadata server via LAYOUTERROR.  This allows the data
server to prevent other clients from reading corrupt data while
the metadata server coordinates repair.</t>
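          <t>A minimal sketch of the data server's bookkeeping (the names
mark_errored and readable are illustrative, not protocol
elements):</t>
          <figure anchor="fig-example-chunk-error">
            <name>Illustrative error-state bookkeeping</name>
            <sourcecode type="python"><![CDATA[
def mark_errored(error_map, cea_offset, cea_count, cea_error):
    """Record cea_error for cea_count blocks starting at block
    cea_offset; errored blocks are withheld from CHUNK_READ until
    repaired and confirmed via CHUNK_REPAIRED."""
    for block in range(cea_offset, cea_offset + cea_count):
        error_map[block] = cea_error
    return error_map

def readable(error_map, block):
    """CHUNK_READ returns a block only if it is not marked errored."""
    return block not in error_map
]]></sourcecode>
          </figure>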
        </section>
      </section>
      <section anchor="sec-CHUNK_FINALIZE">
        <name>Operation 79: CHUNK_FINALIZE - Transition Chunks from Pending to Finalized</name>
        <section anchor="arguments-2">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4args">
            <name>XDR for CHUNK_FINALIZE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cfa_offset;
   ///     count4          cfa_count;
   ///     chunk_owner4    cfa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-2">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4resok">
            <name>XDR for CHUNK_FINALIZE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4resok {
   ///     verifier4       cfr_writeverf;
   ///     nfsstat4        cfr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_FINALIZE4res">
            <name>XDR for CHUNK_FINALIZE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_FINALIZE4res switch (nfsstat4 cfr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_FINALIZE4resok   cfr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-2">
          <name>DESCRIPTION</name>
          <t>CHUNK_FINALIZE transitions blocks from the PENDING state (set by
CHUNK_WRITE) to the FINALIZED state.  A finalized block is visible
to the owning client for reads and is eligible for CHUNK_COMMIT.</t>
          <t>The cfa_offset is the starting block offset and cfa_count is the
number of blocks to finalize.  The cfa_chunks array lists the
chunk_owner4 entries whose blocks are to be finalized.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING state;
if not, the corresponding entry in the per-owner status array
cfr_status is set to NFS4ERR_INVAL.</t>
          <t>CHUNK_FINALIZE serves as the CRC validation checkpoint: the data
server <bcp14>SHOULD</bcp14> have validated the CRC32 of each block at CHUNK_WRITE
time.  After CHUNK_FINALIZE, the block metadata (CRC, owner, state)
is persisted to stable storage so that it survives data server
restarts.</t>
          <t>Blocks that have been finalized but not yet committed <bcp14>MAY</bcp14> be rolled
back via CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).</t>
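          <t>The per-owner rule reduces to a small transition (an illustrative
sketch; chunk_finalize_status is a hypothetical name):</t>
          <figure anchor="fig-example-chunk-finalize">
            <name>Illustrative model of the finalize transition</name>
            <sourcecode type="python"><![CDATA[
def chunk_finalize_status(state):
    """Only PENDING blocks may be finalized; any other state yields
    NFS4ERR_INVAL in the per-owner status array."""
    if state == "PENDING":
        return "FINALIZED", "NFS4_OK"
    return state, "NFS4ERR_INVAL"
]]></sourcecode>
          </figure>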
        </section>
      </section>
      <section anchor="sec-CHUNK_HEADER_READ">
        <name>Operation 80: CHUNK_HEADER_READ - Read Chunk Header from File</name>
        <section anchor="arguments-3">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4args">
            <name>XDR for CHUNK_HEADER_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    chra_stateid;
   ///     offset4     chra_offset;
   ///     count4      chra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-3">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4resok">
            <name>XDR for CHUNK_HEADER_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4resok {
   ///     bool            chrr_eof;
   ///     nfsstat4        chrr_status<>;
   ///     bool            chrr_locked<>;
   ///     chunk_owner4    chrr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_HEADER_READ4res">
            <name>XDR for CHUNK_HEADER_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_HEADER_READ4res switch (nfsstat4 chrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_HEADER_READ4resok     chrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-3">
          <name>DESCRIPTION</name>
          <t>CHUNK_HEADER_READ differs from CHUNK_READ in that it reads only
the chunk headers in the desired data range.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_LOCK">
        <name>Operation 81: CHUNK_LOCK - Lock Cached Chunk Data</name>
        <section anchor="arguments-4">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_LOCK4args">
            <name>XDR for CHUNK_LOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_LOCK_FLAGS_ADOPT  = 0x00000001;
   ///
   /// struct CHUNK_LOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cla_stateid;
   ///     offset4         cla_offset;
   ///     count4          cla_count;
   ///     uint32_t        cla_flags;
   ///     chunk_owner4    cla_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-4">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_LOCK4res">
            <name>XDR for CHUNK_LOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_LOCK4res switch (nfsstat4 clr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     case NFS4ERR_CHUNK_LOCKED:
   ///         chunk_owner4    clr_owner;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-4">
          <name>DESCRIPTION</name>
          <t>CHUNK_LOCK acquires an exclusive lock on the block range specified
by cla_offset and cla_count.  While locked, other clients' CHUNK_WRITE
operations to the same block range will fail with NFS4ERR_CHUNK_LOCKED.
The lock is associated with the chunk_owner4 in cla_owner.</t>
          <t>If the blocks are already locked by a different owner and
cla_flags does not include CHUNK_LOCK_FLAGS_ADOPT, the operation
returns NFS4ERR_CHUNK_LOCKED with the clr_owner field identifying
the current lock holder.</t>
          <t>CHUNK_LOCK is used in multiple-writer mode (<xref target="sec-multi-writer"/>)
to coordinate concurrent access to the same block range, and in the
repair flow (<xref target="sec-repair-selection"/>) to transfer lock ownership
to a repair client.</t>
          <t>The lock is released by CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) or
implicitly when the client's lease expires.</t>
          <section anchor="lock-transfer-via-chunklockflagsadopt">
            <name>Lock Transfer via CHUNK_LOCK_FLAGS_ADOPT</name>
            <t>The CHUNK_LOCK_FLAGS_ADOPT flag in cla_flags requests an atomic
transfer of lock ownership to cla_owner for every chunk in
[cla_offset, cla_offset+cla_count).  The data server <bcp14>MUST</bcp14> perform
the transfer as a single atomic step per chunk: there is no window
in which the chunk is unlocked.  After a successful ADOPT, subsequent
CHUNK_WRITE, CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK, and CHUNK_UNLOCK
operations <bcp14>MUST</bcp14> present cla_owner as their chunk_owner4.</t>
            <t>CHUNK_LOCK_FLAGS_ADOPT is the sole mechanism by which a chunk lock
can change hands without first being released.  The lock ordering
invariant -- that every chunk in a payload transitioning through
repair is held by exactly one owner continuously from failure
detection to repair completion -- depends on it.</t>
            <t>CHUNK_LOCK_FLAGS_ADOPT is valid only when the caller has been
selected as the repair client for the range by the metadata server,
typically via CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>).  A data
server that receives CHUNK_LOCK with the ADOPT flag from a client
that has not been so designated <bcp14>MAY</bcp14> reject the operation with
NFS4ERR_ACCESS.  The mechanism by which the data server determines
designation is coupling-model dependent:</t>
            <ul spacing="normal">
              <li>
                <t>In a tightly coupled deployment, the metadata server notifies the
data server via the control protocol (e.g., TRUST_STATEID with
the new client's stateid or a similar facility).</t>
              </li>
              <li>
                <t>In a loosely coupled deployment, the data server <bcp14>MAY</bcp14> rely on the
metadata server's authentication of the client and accept ADOPT
from any authenticated client holding a current layout that
includes the range.  The cost is that a misbehaving client can
trigger spurious ownership transfers; the write-hole exposure is
bounded by the chunk_guard4 checks that subsequent CHUNK_WRITEs
from displaced writers experience.</t>
              </li>
            </ul>
            <t>The current lock holder at the moment of ADOPT <bcp14>MAY</bcp14> be:</t>
            <ol spacing="normal" type="1"><li>
                <t>Another client whose stateid remains valid (for example, a
client that has stopped making progress but has not yet lost
its lease).  The prior owner's PENDING or FINALIZED shards
remain on disk until the new owner issues CHUNK_WRITE_REPAIR,
CHUNK_ROLLBACK, or CHUNK_COMMIT.</t>
              </li>
              <li>
                <t>The metadata server itself, acting through the
CHUNK_GUARD_CLIENT_ID_MDS escrow owner
(<xref target="sec-chunk_guard_mds"/>).  This occurs when the metadata
server has revoked the prior holder's stateid in a tightly
coupled deployment.</t>
              </li>
            </ol>
            <t>In either case, ADOPT's effect from the repair client's
perspective is the same: after a successful return, the caller
holds the lock and may drive the range to consistency.</t>
            <t>The data server <bcp14>MUST</bcp14> reject CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT if cla_owner's cg_client_id equals
CHUNK_GUARD_CLIENT_ID_MDS -- that value is reserved for server
production and <bcp14>MUST NOT</bcp14> be presented by a client.  The operation
returns NFS4ERR_INVAL in that case.</t>
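            <t>The adoption step can be sketched as follows; the function name
chunk_lock_adopt and the owner tuple representation are
illustrative, not protocol elements.  Validation happens before
any ownership changes, so no chunk in the range is ever observed
unlocked during the transfer:</t>
            <figure anchor="fig-example-chunk-lock-adopt">
              <name>Illustrative atomic lock adoption</name>
              <sourcecode type="python"><![CDATA[
CHUNK_GUARD_CLIENT_ID_MDS = 0xFFFFFFFF

def chunk_lock_adopt(locks, chunk_ids, cla_owner):
    """Transfer lock ownership of every chunk in chunk_ids to
    cla_owner, modeled here as a (cg_gen_id, cg_client_id) tuple.
    locks maps chunk_id -> current owner tuple."""
    gen_id, client_id = cla_owner
    # The MDS sentinel is reserved for server production.
    if client_id == CHUNK_GUARD_CLIENT_ID_MDS:
        return "NFS4ERR_INVAL"
    # Flip the whole range only after validation: each chunk goes
    # directly from the prior owner (a client or the MDS-escrow
    # owner) to cla_owner, with no unlocked window.
    for chunk_id in chunk_ids:
        locks[chunk_id] = cla_owner
    return "NFS4_OK"
]]></sourcecode>
            </figure>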
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_READ">
        <name>Operation 82: CHUNK_READ - Read Chunks from File</name>
        <section anchor="arguments-5">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_READ4args">
            <name>XDR for CHUNK_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    cra_stateid;
   ///     offset4     cra_offset;
   ///     count4      cra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-5">
          <name>RESULTS</name>
          <figure anchor="fig-read_chunk4">
            <name>XDR for read_chunk4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct read_chunk4 {
   ///     uint32_t        cr_crc;
   ///     uint32_t        cr_effective_len;
   ///     chunk_owner4    cr_owner;
   ///     uint32_t        cr_payload_id;
   ///     bool            cr_locked;
   ///     nfsstat4        cr_status;
   ///     opaque          cr_chunk<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4resok">
            <name>XDR for CHUNK_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4resok {
   ///     bool        crr_eof;
   ///     read_chunk4 crr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4res">
            <name>XDR for CHUNK_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_READ4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///          CHUNK_READ4resok     crr_resok4;
   ///     default:
   ///          void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-5">
          <name>DESCRIPTION</name>
          <t>CHUNK_READ is READ (see Section 18.22 of <xref target="RFC8881"/>) with additional
semantics over the chunk_owner.  As such, all of the normal semantics
of READ directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_READ
works on blocks and not a raw data stream.  As such, cra_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cra_count is a count of
blocks to read and not bytes to read.</t>
          <t>When reading a set of blocks across the data servers, it can be the
case that some data servers do not have any data at that location.
In that case, the data server either sets crr_eof, if cra_offset
exceeds the number of blocks it is aware of, or returns an empty
block for that block.</t>
          <t>For example, in <xref target="fig-example-CHUNK_READ4args"/>, the client asks
for 4 blocks starting with the 3rd block in the file.  The second
data server responds as in <xref target="fig-example-CHUNK_READ4resok"/>.  The
client would read this as: there is valid data for blocks 2 and 4,
there is a hole at block 3, and there is no data for block 5.  The
data server <bcp14>MUST</bcp14> calculate a valid cr_crc for block 3 based on the
generated fields.</t>
          <figure anchor="fig-example-CHUNK_READ4args">
            <name>Example: CHUNK_READ4args parameters</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4args                |
  +--------------------------------+
  | cra_stateid: 0                 |
  | cra_offset: 2                  |
  | cra_count: 4                   |
  +--------------------------------+
]]></artwork>
          </figure>
          <figure anchor="fig-example-CHUNK_READ4resok">
            <name>Example: Resulting CHUNK_READ4resok reply</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4resok               |
  +--------------------------------+
  | crr_eof: true                  |
  | crr_chunks[0]:                 |
  |     cr_crc: 0x3faddace         |
  |     cr_owner:                  |
  |         co_chunk_id: 2         |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  | crr_chunks[1]:                 |
  |     cr_crc: 0xdeade4e5         |
  |     cr_owner:                  |
  |         co_chunk_id: 3         |
  |         co_guard:              |
  |             cg_gen_id   : 0    |
  |             cg_client_id: 0    |
  |     cr_payload_id: 1           |
  |     cr_chunk: 0000...00000     |
  | crr_chunks[2]:                 |
  |     cr_crc: 0x7778abcd         |
  |     cr_owner:                  |
  |         co_chunk_id: 4         |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  +--------------------------------+
]]></artwork>
          </figure>
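          <t>The example above can be interpreted mechanically.  The following
non-normative Python sketch classifies each requested block as data,
a hole, or beyond EOF; the dataclass names mirror the XDR fields,
but the helper itself is illustrative and not part of the protocol.</t>

```python
from dataclasses import dataclass

@dataclass
class ChunkGuard:
    cg_gen_id: int
    cg_client_id: int

@dataclass
class ChunkOwner:
    co_chunk_id: int
    co_guard: ChunkGuard

@dataclass
class Chunk:
    cr_crc: int
    cr_owner: ChunkOwner
    cr_payload_id: int
    cr_chunk: bytes

def classify(chunks, crr_eof, cra_offset, cra_count):
    """Map each requested block offset to 'data', 'hole', or 'eof'.

    A returned chunk whose guard has cg_gen_id == 0 and
    cg_client_id == 0 is a server-generated hole; requested blocks
    past the last returned chunk are beyond EOF when crr_eof is set.
    """
    by_id = {c.cr_owner.co_chunk_id: c for c in chunks}
    result = {}
    for blk in range(cra_offset, cra_offset + cra_count):
        c = by_id.get(blk)
        if c is None:
            result[blk] = "eof" if crr_eof else "hole"
        elif (c.cr_owner.co_guard.cg_gen_id == 0
              and c.cr_owner.co_guard.cg_client_id == 0):
            result[blk] = "hole"
        else:
            result[blk] = "data"
    return result
```

          <t>Run against the example reply, the sketch reports data for
blocks 2 and 4, a hole at block 3, and EOF at block 5.</t>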
        </section>
      </section>
      <section anchor="sec-CHUNK_REPAIRED">
        <name>Operation 83: CHUNK_REPAIRED - Confirm Repair of Errored Chunk Data</name>
        <section anchor="arguments-6">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4args">
            <name>XDR for CHUNK_REPAIRED4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_REPAIRED4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cra_stateid;
   ///     offset4         cra_offset;
   ///     count4          cra_count;
   ///     chunk_owner4    cra_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-6">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4res">
            <name>XDR for CHUNK_REPAIRED4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_REPAIRED4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-6">
          <name>DESCRIPTION</name>
          <t>CHUNK_REPAIRED signals that blocks previously marked as errored
(via CHUNK_ERROR, <xref target="sec-CHUNK_ERROR"/>) have been repaired.  The
repair client writes replacement data via CHUNK_WRITE_REPAIR
(<xref target="sec-CHUNK_WRITE_REPAIR"/>), then calls CHUNK_REPAIRED to clear
the error state and make the blocks available for normal reads.</t>
          <t>The cra_offset and cra_count identify the repaired block range.
The cra_owner identifies the repair client that performed the
repair.  The data server verifies that the blocks were previously
in error and that the repair data has been written and finalized.</t>
          <t>If the blocks are not in the ERRORED state, the operation returns
NFS4ERR_INVAL.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_ROLLBACK">
        <name>Operation 84: CHUNK_ROLLBACK - Rollback Changes on Cached Chunk Data</name>
        <section anchor="arguments-7">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4args">
            <name>XDR for CHUNK_ROLLBACK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cra_offset;
   ///     count4          cra_count;
   ///     chunk_owner4    cra_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-7">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4resok">
            <name>XDR for CHUNK_ROLLBACK4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4resok {
   ///     verifier4       crr_writeverf;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_ROLLBACK4res">
            <name>XDR for CHUNK_ROLLBACK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_ROLLBACK4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_ROLLBACK4resok   crr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-7">
          <name>DESCRIPTION</name>
          <t>CHUNK_ROLLBACK reverts blocks in the PENDING or FINALIZED state
back to the EMPTY state, effectively undoing a CHUNK_WRITE
that has not yet been committed via CHUNK_COMMIT.</t>
          <t>The cra_offset is the starting block offset and cra_count is the
number of blocks to roll back.  The cra_chunks array lists the
chunk_owner4 entries whose blocks are to be rolled back.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING or
FINALIZED state; blocks that have already been committed via
CHUNK_COMMIT cannot be rolled back.</t>
          <t>CHUNK_ROLLBACK is used in two scenarios:</t>
          <ol spacing="normal" type="1"><li>
              <t>A client discovers an encoding error after CHUNK_WRITE and
before CHUNK_COMMIT, and needs to undo the write to try again.</t>
            </li>
            <li>
              <t>A repair client needs to undo a repair attempt that was found
to be incorrect before committing it.</t>
            </li>
          </ol>
          <t>The data server deletes the pending chunk data and restores the
block metadata to the EMPTY state.  If the block was in the FINALIZED
state, the persisted metadata is also removed.</t>
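          <t>The state transitions above can be sketched as a small,
non-normative function; the error returned for a block that is not
in PENDING or FINALIZED is assumed here to be NFS4ERR_INVAL, by
analogy with the other CHUNK operations.</t>

```python
# Illustrative chunk-state machine for CHUNK_ROLLBACK; the state
# names (EMPTY, PENDING, FINALIZED, COMMITTED) follow the draft,
# but this helper is a sketch, not normative server logic.
EMPTY, PENDING, FINALIZED, COMMITTED = "EMPTY", "PENDING", "FINALIZED", "COMMITTED"

def rollback_block(state):
    """Return (new_state, status) for a single block being rolled back."""
    if state in (PENDING, FINALIZED):
        # Pending chunk data is deleted; FINALIZED additionally drops
        # the persisted metadata.  Either way the block becomes EMPTY.
        return EMPTY, "NFS4_OK"
    # Blocks already committed via CHUNK_COMMIT cannot be rolled back.
    return state, "NFS4ERR_INVAL"
```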
        </section>
      </section>
      <section anchor="sec-CHUNK_UNLOCK">
        <name>Operation 85: CHUNK_UNLOCK - Unlock Cached Chunk Data</name>
        <section anchor="arguments-8">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4args">
            <name>XDR for CHUNK_UNLOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_UNLOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cua_stateid;
   ///     offset4         cua_offset;
   ///     count4          cua_count;
   ///     chunk_owner4    cua_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-8">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4res">
            <name>XDR for CHUNK_UNLOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_UNLOCK4res switch (nfsstat4 cur_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-8">
          <name>DESCRIPTION</name>
          <t>CHUNK_UNLOCK releases the exclusive lock on the block range
previously acquired by CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>).  The
cua_owner <bcp14>MUST</bcp14> match the owner that acquired the lock; otherwise
the operation returns NFS4ERR_INVAL.</t>
          <t>If the blocks are not locked, the operation returns NFS4_OK
(idempotent).</t>
          <t>A client <bcp14>SHOULD</bcp14> release chunk locks promptly after completing
its write or repair operation.  Chunk locks are also released
implicitly when the client's lease expires.</t>
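          <t>The unlock semantics can be sketched as follows; the lock-table
representation is illustrative, while the owner-mismatch and
idempotent-success behaviors follow the text above.</t>

```python
def chunk_unlock(locks, fh_range, cua_owner):
    """Sketch of CHUNK_UNLOCK: release a lock if held by cua_owner.

    `locks` maps a (filehandle, block-range) key to the owning
    chunk_owner; returning NFS4_OK on an absent lock models the
    idempotent behavior described above.
    """
    holder = locks.get(fh_range)
    if holder is None:
        return "NFS4_OK"          # not locked: idempotent success
    if holder != cua_owner:
        return "NFS4ERR_INVAL"    # owner mismatch
    del locks[fh_range]
    return "NFS4_OK"
```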
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE">
        <name>Operation 86: CHUNK_WRITE - Write Chunks to File</name>
        <section anchor="arguments-9">
          <name>ARGUMENTS</name>
          <figure anchor="fig-write_chunk_guard4">
            <name>XDR for write_chunk_guard4</name>
            <sourcecode type="xdr"><![CDATA[
   /// union write_chunk_guard4 switch (bool cwg_check) {
   ///     case TRUE:
   ///         chunk_guard4   cwg_guard;
   ///     case FALSE:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4args">
            <name>XDR for CHUNK_WRITE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY = 0x00000001;
   ///
   /// struct CHUNK_WRITE4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwa_stateid;
   ///     offset4            cwa_offset;
   ///     stable_how4        cwa_stable;
   ///     chunk_owner4       cwa_owner;
   ///     uint32_t           cwa_payload_id;
   ///     uint32_t           cwa_flags;
   ///     write_chunk_guard4 cwa_guard;
   ///     uint32_t           cwa_chunk_size;
   ///     uint32_t           cwa_crc32s<>;
   ///     opaque             cwa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-9">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE4resok">
            <name>XDR for CHUNK_WRITE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE4resok {
   ///     count4          cwr_count;
   ///     stable_how4     cwr_committed;
   ///     verifier4       cwr_writeverf;
   ///     nfsstat4        cwr_block_status<>;
   ///     bool            cwr_block_activated<>;
   ///     chunk_owner4    cwr_owners<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4res">
            <name>XDR for CHUNK_WRITE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE4res switch (nfsstat4 cwr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE4resok    cwr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-9">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE is WRITE (see Section 18.32 of <xref target="RFC8881"/>) with
additional semantics concerning the chunk_owner and the activation of
blocks.  As such, all of the normal semantics of WRITE directly
apply.</t>
          <t>The main difference between the two operations is that CHUNK_WRITE
works on blocks and not on a raw data stream.  As such, cwa_offset is
the starting block offset in the file, not the byte offset, and
cwr_count is a count of written blocks, not written bytes.  Some
erasure coding types can have different block sizes depending on the
block type.</t>
          <t>If cwa_stable is FILE_SYNC4, the data server <bcp14>MUST</bcp14> commit the written
header and block data plus all file system metadata to stable storage
before returning results.  This corresponds to the NFSv2 protocol
semantics.  Any other behavior constitutes a protocol violation.</t>
          <t>If cwa_stable is DATA_SYNC4, then the data server <bcp14>MUST</bcp14> commit all
of the header and block data to stable storage and enough of the
metadata to retrieve the data before returning.  The data server
implementer is free to implement DATA_SYNC4 in the same fashion as
FILE_SYNC4, but with a possible performance drop.</t>
          <t>If cwa_stable is UNSTABLE4, the data server is free to commit any
part of the header and block data and the metadata to stable storage,
including all or none, before returning a reply to the client.  There
is no guarantee whether or when any uncommitted data will subsequently
be committed to stable storage.  The only guarantees made by the
data server are that it will not destroy any data without changing
the value of cwr_writeverf and that it will not commit the data and
metadata at a level less than that requested by the client.</t>
          <t>The activation of header and block data interacts with the
co_activated field of each of the written blocks.  If the data is not
committed to stable storage, then the co_activated field <bcp14>MUST NOT</bcp14> be
set to true.  Once the data is committed to stable storage, the data
server can set the block's co_activated field if one of these
conditions applies:</t>
          <ul spacing="normal">
            <li>
              <t>it is the first write to that block and the
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY flag is set</t>
            </li>
            <li>
              <t>a CHUNK_COMMIT is issued later for that block.</t>
            </li>
          </ul>
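          <t>The activation conditions above can be summarized in a small,
non-normative predicate.  The Python sketch below is illustrative
only; the parameter names (e.g., first_write, chunk_commit_issued)
are stand-ins for state the data server tracks internally.</t>

```python
# Flag value taken from the CHUNK_WRITE4args XDR above.
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY = 0x00000001

def may_activate(committed_stable, first_write, cwa_flags,
                 chunk_commit_issued):
    """Sketch of when a data server may set co_activated for a block.

    Data not yet on stable storage MUST NOT be activated; otherwise
    activation happens on a first write carrying
    CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY, or on a later CHUNK_COMMIT.
    """
    if not committed_stable:
        return False
    if first_write and (cwa_flags & CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY):
        return True
    return chunk_commit_issued
```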
          <t>There are subtle interactions with write holes caused by racing
clients.  One client could win the race to write a set of blocks,
but because it used a cwa_stable of UNSTABLE4, a subsequent write
from the second client with a cwa_stable of FILE_SYNC4 can be the
one awarded co_activated set to true for each of the blocks in its
payload.</t>
          <t>Finally, the interaction of cwa_stable can cause a client to
mistakenly believe that, because it received a response with
co_activated of false, the blocks are not activated.  A subsequent
CHUNK_READ or HEADER_READ might show that co_activated is true
without any interaction by the client via CHUNK_COMMIT.</t>
          <section anchor="guarding-the-write">
            <name>Guarding the Write</name>
            <t>A guarded CHUNK_WRITE is one in which the writing of a block
<bcp14>MUST</bcp14> fail if cwa_guard.cwg_check is TRUE and the target chunk does
not have the same cg_gen_id as cwa_guard.cwg_guard.cg_gen_id.  This
is useful in read-update-write scenarios.  The client reads a block,
updates it, and is prepared to write it back.  It guards the write
such that if another writer has modified the block, the data server
will reject the modification.</t>
            <t>As the chunk_guard4 (see <xref target="fig-chunk_guard4"/>) does not have a
chunk_id and the CHUNK_WRITE applies to all blocks in the range from
cwa_offset to the length of cwa_chunks, each of the target blocks
<bcp14>MUST</bcp14> have the same cg_gen_id and cg_client_id.  The client <bcp14>SHOULD</bcp14>
present as small a set of blocks as possible to meet this
requirement.</t>
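            <t>The guard check amounts to a compare-and-swap over the target
range.  The following non-normative sketch evaluates the guard per
block; NFS4ERR_CHUNK_GUARDED is the per-block error named in the
acceptance rules of this document, and the helper itself is
illustrative.</t>

```python
def check_guard(cwg_check, guard_gen_id, block_gen_ids):
    """Sketch of the guarded-write check: every target block must
    still carry the cg_gen_id the client read, i.e., a
    compare-and-swap over the whole range.

    `block_gen_ids` lists the current cg_gen_id of each target
    block; the result models one cwr_block_status entry per block.
    """
    if not cwg_check:
        # Unguarded write: no generation comparison is performed.
        return ["NFS4_OK"] * len(block_gen_ids)
    return ["NFS4_OK" if gen == guard_gen_id else "NFS4ERR_CHUNK_GUARDED"
            for gen in block_gen_ids]
```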
          </section>
          <section anchor="per-block-acceptance-semantics">
            <name>Per-Block Acceptance Semantics</name>
            <t>A CHUNK_WRITE targets a contiguous range of blocks on a single
data server.  The data server evaluates each block independently
and reports the outcome per block in cwr_block_status (see
<xref target="fig-CHUNK_WRITE4resok"/>):</t>
            <ul spacing="normal">
              <li>
                <t>Each block is subjected to the guard check (when
cwa_guard.cwg_check is TRUE), the cg_client_id validation
(see <xref target="sec-chunk_guard4"/>), and any other local preconditions
(storage-space limits, tight-coupling trust-table state,
etc.).</t>
              </li>
              <li>
                <t>Blocks that pass their preconditions are written and their
cwr_block_status entry is NFS4_OK.  Blocks that fail produce
the appropriate error code
(NFS4ERR_CHUNK_GUARDED, NFS4ERR_NOSPC, etc.) in the
corresponding cwr_block_status slot, and their data is
NOT persisted.</t>
              </li>
              <li>
                <t>cwr_count reflects only the blocks that were written
successfully; failed blocks do not contribute.</t>
              </li>
              <li>
                <t>The top-level cwr_status is NFS4_OK when the call itself was
structurally valid and the data server could evaluate each
block.  Per-block failures are reported in cwr_block_status,
not by failing the whole operation.  The data server returns
a top-level error only if it could not evaluate the request
at all (for example, NFS4ERR_BADXDR, NFS4ERR_SERVERFAULT).</t>
              </li>
            </ul>
            <t>This is the "continue and report" discipline.  It is
intentionally not all-or-none: atomicity is already per-chunk
(see <xref target="sec-system-model-consistency"/>), so there is no
file-level correctness reason to reject the entire compound
because of a single chunk guard failure.  Per-block reporting
gives the client the information it needs to construct a
targeted CHUNK_ROLLBACK or CHUNK_WRITE retry that covers only
the blocks that failed.</t>
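            <t>The "continue and report" discipline can be sketched as a
simple per-block loop.  In this non-normative Python fragment, the
precheck callback is a hypothetical stand-in for the guard,
ownership, and space checks; only the relationship between
cwr_block_status and cwr_count follows the text.</t>

```python
def chunk_write_eval(blocks, precheck):
    """Sketch of 'continue and report': each block is evaluated
    independently, cwr_block_status carries the per-block outcome,
    and cwr_count counts only the successful blocks.

    `precheck(block)` returns an nfsstat4-style string for one block.
    """
    statuses = [precheck(b) for b in blocks]
    cwr_count = sum(1 for s in statuses if s == "NFS4_OK")
    return cwr_count, statuses
```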
            <t>The data server does not hold a file-wide lock across the
per-block evaluation.  The chunk_guard4 CAS is evaluated
atomically per chunk at the point the data server updates that
chunk's state, so an interleaving CHUNK_WRITE from a different
client that arrives mid-compound will either win its own CAS
race (and the losing client sees NFS4ERR_CHUNK_GUARDED for the
contested block) or be rejected itself, without introducing
data-server-level locking beyond the per-chunk scope.</t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE_REPAIR">
        <name>Operation 87: CHUNK_WRITE_REPAIR - Write Repaired Cached Chunk Data</name>
        <section anchor="arguments-10">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4args">
            <name>XDR for CHUNK_WRITE_REPAIR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwra_stateid;
   ///     offset4            cwra_offset;
   ///     stable_how4        cwra_stable;
   ///     chunk_owner4       cwra_owner;
   ///     uint32_t           cwra_payload_id;
   ///     uint32_t           cwra_chunk_size;
   ///     uint32_t           cwra_crc32s<>;
   ///     opaque             cwra_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-10">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4resok">
            <name>XDR for CHUNK_WRITE_REPAIR4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4resok {
   ///     count4          cwrr_count;
   ///     stable_how4     cwrr_committed;
   ///     verifier4       cwrr_writeverf;
   ///     nfsstat4        cwrr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4res">
            <name>XDR for CHUNK_WRITE_REPAIR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE_REPAIR4res switch (nfsstat4 cwrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE_REPAIR4resok   cwrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-10">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE_REPAIR has the same semantics as CHUNK_WRITE
(<xref target="sec-CHUNK_WRITE"/>) but is used specifically for writing
reconstructed chunk data to a replacement data server during
repair operations.</t>
          <t>The repair workflow is:</t>
          <ol spacing="normal" type="1"><li>
              <t>The repair client reads surviving chunks from the remaining
data servers via CHUNK_READ.</t>
            </li>
            <li>
              <t>The client reconstructs the missing chunks using the erasure
coding algorithm (RS matrix inversion or Mojette corner-peeling).</t>
            </li>
            <li>
              <t>The client acquires a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
target data server to prevent concurrent writes during repair.</t>
            </li>
            <li>
              <t>The client writes the reconstructed data via CHUNK_WRITE_REPAIR.</t>
            </li>
            <li>
              <t>The client calls CHUNK_FINALIZE and CHUNK_COMMIT to persist
the repair.</t>
            </li>
            <li>
              <t>The client calls CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the error state.</t>
            </li>
            <li>
              <t>The client releases the lock via CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>).</t>
            </li>
          </ol>
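          <t>The seven-step repair workflow above can be sketched as the
sequence of operations a repair client issues.  In this
non-normative Python sketch, the ds objects and their method names
are hypothetical stand-ins for the CHUNK operations, and the
erasure-coding reconstruction itself is passed in as a callback.</t>

```python
def repair(ds_survivors, ds_target, reconstruct, offset, count, owner):
    """Illustrative repair-client driver for the workflow above."""
    # 1. Read surviving chunks from the remaining data servers.
    survivors = [ds.chunk_read(offset, count) for ds in ds_survivors]
    # 2. Reconstruct the missing chunks (RS matrix inversion or
    #    Mojette corner-peeling; elided behind the callback).
    repaired = reconstruct(survivors)
    # 3. Lock the target range to exclude concurrent writers.
    ds_target.chunk_lock(offset, count, owner)
    try:
        # 4. Write the reconstructed data.
        ds_target.chunk_write_repair(offset, repaired, owner)
        # 5. Finalize and commit the repair.
        ds_target.chunk_finalize(offset, count, owner)
        ds_target.chunk_commit(offset, count, owner)
        # 6. Clear the error state.
        ds_target.chunk_repaired(offset, count, owner)
    finally:
        # 7. Release the lock.
        ds_target.chunk_unlock(offset, count, owner)
```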
          <t>CHUNK_WRITE_REPAIR is distinguished from CHUNK_WRITE to allow the
data server to apply different policies to repair writes (e.g.,
bypassing guard checks, logging repair activity, or prioritizing
repair I/O).  The CRC32 validation on the repair data follows the
same rules as CHUNK_WRITE.</t>
          <t>The target blocks <bcp14>SHOULD</bcp14> be in the ERRORED state (set by
CHUNK_ERROR) or the EMPTY state.  If the blocks are in the COMMITTED
state with valid data, the data server <bcp14>MAY</bcp14> reject the repair to
prevent overwriting good data.</t>
        </section>
      </section>
      <section anchor="sec-TRUST_STATEID">
        <name>Operation 88: TRUST_STATEID - Register Layout Stateid on Data Server</name>
        <section anchor="arguments-11">
          <name>ARGUMENTS</name>
          <figure anchor="fig-TRUST_STATEID4args">
            <name>XDR for TRUST_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct TRUST_STATEID4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        tsa_layout_stateid;
   ///     layoutiomode4   tsa_iomode;
   ///     nfstime4        tsa_expire;
   ///     utf8str_cs      tsa_principal;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-11">
          <name>RESULTS</name>
          <figure anchor="fig-TRUST_STATEID4res">
            <name>XDR for TRUST_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union TRUST_STATEID4res switch (nfsstat4 tsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-11">
          <name>DESCRIPTION</name>
          <t>TRUST_STATEID registers a layout stateid with the data server so
that subsequent CHUNK operations presenting that stateid can be
validated against the data server's per-file trust table.  It is
the mechanism by which tight coupling (see
<xref target="sec-tight-coupling-control"/>) is established between the
metadata server and the data server for a particular layout.</t>
          <t>TRUST_STATEID operates on the current filehandle; a PUTFH naming
the data server's file <bcp14>MUST</bcp14> precede it in the same compound.</t>
          <t>tsa_layout_stateid is the stateid the metadata server issued in
the LAYOUTGET that produced this layout.  It <bcp14>MUST NOT</bcp14> be a special
stateid (anonymous, invalid, read-bypass, or current).  The sole
exception is the capability probe described in
<xref target="sec-tight-coupling-probe"/>: when the metadata server sends
TRUST_STATEID with tsa_layout_stateid set to the anonymous stateid
against the root filehandle, the data server <bcp14>MUST</bcp14> reject the
request with NFS4ERR_INVAL.  That rejection is the positive
response to the probe.</t>
          <t>tsa_iomode is the iomode of the layout (LAYOUTIOMODE4_READ or
LAYOUTIOMODE4_RW).  The data server <bcp14>MAY</bcp14> enforce this against the
CHUNK operation presented: a READ-iomode trust entry does not
authorize CHUNK_WRITE.</t>
          <t>tsa_expire is the absolute wall-clock time at which the trust
entry becomes invalid if not renewed.  See
<xref target="sec-tight-coupling-lease"/>.  The data server <bcp14>MUST</bcp14> reject a
TRUST_STATEID whose tsa_expire has an nseconds field &gt;= 10^9 with
NFS4ERR_INVAL.</t>
          <t>tsa_principal is the client's authenticated identity as verified
by the metadata server at LAYOUTGET time.  For RPCSEC_GSS clients
this is the GSS display name (e.g., "alice@REALM").  For AUTH_SYS
and TLS clients, tsa_principal <bcp14>MUST</bcp14> be the empty string,
indicating that no principal binding is enforced on subsequent
CHUNK operations.  See <xref target="sec-tight-coupling-principal"/>.</t>
          <t>If the data server receives TRUST_STATEID on a session whose
owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.  The data
server <bcp14>MUST NOT</bcp14> process TRUST_STATEID on a regular client
session.</t>
          <t>If a trust entry already exists for the same tsa_layout_stateid
on the same current filehandle, TRUST_STATEID atomically updates
tsa_expire and tsa_principal; this is the renewal path (see
<xref target="sec-tight-coupling-lease"/>).</t>
          <t>At registration time, the data server tags the new trust entry
with the identity of the metadata server -- derived from the
clientid of the owning client of the control session on which
TRUST_STATEID arrived.  This tag is consulted by REVOKE_STATEID
and BULK_REVOKE_STATEID to ensure that revocation only affects
entries registered by the same metadata server (see
<xref target="sec-BULK_REVOKE_STATEID"/>).  In a multi-metadata-server
deployment sharing a single data server, each metadata server
registers and revokes only its own entries; the tag is opaque to
pNFS clients and is not carried on the wire.</t>
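          <t>The precondition checks above can be collected into one
non-normative validation sketch.  The boolean plumbing (e.g.,
session_is_mds) is illustrative; the error codes follow the text,
while the check ordering is one plausible arrangement.</t>

```python
ANON = "anonymous"  # stands in for the all-zero special stateid

def validate_trust_stateid(stateid, is_special, current_fh, root_fh,
                           expire_nseconds, session_is_mds):
    """Sketch of the TRUST_STATEID precondition checks described
    above; field spellings follow the draft, the plumbing does not.
    """
    if not session_is_mds:
        return "NFS4ERR_PERM"       # not a control (MDS) session
    if stateid == ANON and current_fh == root_fh:
        return "NFS4ERR_INVAL"      # capability probe: positive response
    if is_special:
        return "NFS4ERR_BAD_STATEID"
    if expire_nseconds >= 10**9:
        return "NFS4ERR_INVAL"      # malformed nfstime4
    return "NFS4_OK"
```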
        </section>
        <section anchor="response-codes">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the trust entry is registered (or updated).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_BAD_STATEID: tsa_layout_stateid was a special stateid
other than the anonymous stateid on the root filehandle.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request; the metadata server <bcp14>SHOULD</bcp14> retry.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: tsa_layout_stateid was the anonymous stateid
on the root filehandle (the capability probe, for which
NFS4ERR_INVAL is the expected positive response); tsa_expire is
malformed; or the current filehandle is a directory (except in the
capability-probe case).</t>
            </li>
            <li>
              <t>NFS4ERR_NOFILEHANDLE: no current filehandle is set.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
TRUST_STATEID.  This is the negative capability-probe response (see
<xref target="sec-tight-coupling-probe"/>).</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="sec-REVOKE_STATEID">
        <name>Operation 89: REVOKE_STATEID - Revoke Registered Stateid on Data Server</name>
        <section anchor="arguments-12">
          <name>ARGUMENTS</name>
          <figure anchor="fig-REVOKE_STATEID4args">
            <name>XDR for REVOKE_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct REVOKE_STATEID4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        rsa_layout_stateid;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-12">
          <name>RESULTS</name>
          <figure anchor="fig-REVOKE_STATEID4res">
            <name>XDR for REVOKE_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union REVOKE_STATEID4res switch (nfsstat4 rsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-12">
          <name>DESCRIPTION</name>
          <t>REVOKE_STATEID invalidates a single trust entry on the data
server.  Subsequent CHUNK operations that present the revoked
stateid <bcp14>MUST</bcp14> fail with NFS4ERR_BAD_STATEID.</t>
          <t>The metadata server calls REVOKE_STATEID in any of the following
situations:</t>
          <ul spacing="normal">
            <li>
              <t>CB_LAYOUTRECALL timeout: the client did not return the layout
within the recall timeout.  REVOKE_STATEID terminates the
client's ability to issue further I/O to the data server
without waiting for tsa_expire.</t>
            </li>
            <li>
              <t>LAYOUTERROR with NFS4ERR_ACCESS or NFS4ERR_PERM: the data
server rejected the client's I/O; the trust entry is stale
and must be removed.  This mirrors the fencing case in the
loose-coupled model.</t>
            </li>
            <li>
              <t>Explicit LAYOUTRETURN: the client returned the layout cleanly.
The metadata server <bcp14>MAY</bcp14> issue REVOKE_STATEID at this time or
<bcp14>MAY</bcp14> rely on tsa_expire; either is correct.</t>
            </li>
          </ul>
          <t>REVOKE_STATEID operates on the current filehandle; a PUTFH naming
the data server's file <bcp14>MUST</bcp14> precede it in the same compound.  The
filehandle and rsa_layout_stateid together identify the trust
entry to revoke.</t>
          <t>In-flight CHUNK operations that arrived before REVOKE_STATEID
completes <bcp14>MAY</bcp14> be allowed to finish.  The data server <bcp14>MUST NOT</bcp14>
process new CHUNK operations presenting rsa_layout_stateid after
REVOKE_STATEID returns.</t>
          <t>Lock state (see <xref target="sec-CHUNK_LOCK"/>) held by the revoked stateid
is NOT released as part of REVOKE_STATEID; the data server <bcp14>MUST</bcp14>
transfer each held lock to the MDS-escrow owner (see
<xref target="sec-chunk_guard_mds"/>).  Dropping a chunk lock during
revocation would permit a write hole and is prohibited; the
repair coordination sequence in <xref target="sec-repair-selection"/> assumes
that locks held by a revoked writer remain held until a repair
client adopts them via CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT.</t>
          <t>If the data server receives REVOKE_STATEID on a session whose
owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.</t>
          <t>REVOKE_STATEID is scoped to the issuing metadata server's entries
(see the tagging rule in <xref target="sec-TRUST_STATEID"/>).  The data server
<bcp14>MUST NOT</bcp14> remove an entry that was registered by a different
metadata server, even if rsa_layout_stateid happens to match.  In
a multi-metadata-server deployment, one metadata server therefore
cannot revoke another metadata server's entries.</t>
          <t>REVOKE_STATEID is idempotent: revoking a stateid that has no
matching trust entry (either no entry exists, or the entry was
registered by a different metadata server) returns NFS4_OK.  The
metadata server therefore does not need to track precisely which
entries are currently live on which data server in order to revoke
safely.</t>
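          <t>The idempotent, metadata-server-scoped behavior can be sketched
as follows; the trust-table representation and the mds_tag field
name are illustrative, not normative.</t>

```python
def revoke_stateid(trust_table, fh, stateid, issuing_mds):
    """Sketch of REVOKE_STATEID: an entry is removed only if the
    filehandle and stateid match AND it was registered by the
    issuing metadata server; otherwise NFS4_OK with no effect.
    """
    entry = trust_table.get((fh, stateid))
    if entry is not None and entry["mds_tag"] == issuing_mds:
        del trust_table[(fh, stateid)]
    # Idempotent: a missing or foreign entry is still NFS4_OK.
    return "NFS4_OK"
```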
        </section>
        <section anchor="response-codes-1">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the trust entry was removed, or no matching entry
existed (idempotent).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_BAD_STATEID: rsa_layout_stateid was a special stateid.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: rsa_layout_stateid was the anonymous stateid.</t>
            </li>
            <li>
              <t>NFS4ERR_NOFILEHANDLE: no current filehandle is set.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
REVOKE_STATEID.</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="sec-BULK_REVOKE_STATEID">
        <name>Operation 90: BULK_REVOKE_STATEID - Revoke All Stateids for a Client</name>
        <section anchor="arguments-13">
          <name>ARGUMENTS</name>
          <figure anchor="fig-BULK_REVOKE_STATEID4args">
            <name>XDR for BULK_REVOKE_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct BULK_REVOKE_STATEID4args {
   ///     clientid4       brsa_clientid;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-13">
          <name>RESULTS</name>
          <figure anchor="fig-BULK_REVOKE_STATEID4res">
            <name>XDR for BULK_REVOKE_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union BULK_REVOKE_STATEID4res switch (nfsstat4 brsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-13">
          <name>DESCRIPTION</name>
          <t>BULK_REVOKE_STATEID removes every trust entry on the data server
that was registered on behalf of the named client.  The data
server applies this as a scan over its trust table.</t>
          <t>The metadata server calls BULK_REVOKE_STATEID in any of the
following situations:</t>
          <ul spacing="normal">
            <li>
              <t>Client lease expiry: when a client's lease on the metadata
server expires, the metadata server revokes all of that
client's layouts.  A single BULK_REVOKE_STATEID replaces the N
per-file REVOKE_STATEID compounds that per-entry revocation
would require.</t>
            </li>
            <li>
              <t>CB_LAYOUTRECALL with LAYOUTRECALL4_ALL: the metadata server is
recalling all layouts for a client.  BULK_REVOKE_STATEID is the
data-server-side complement.</t>
            </li>
            <li>
              <t>Metadata server restart cleanup: after the metadata server
reconnects to a data server, it <bcp14>MAY</bcp14> issue
BULK_REVOKE_STATEID(brsa_clientid = all-zeros) to clear the
prior trust table before re-issuing TRUST_STATEID as clients
reclaim.  See <xref target="sec-tight-coupling-mds-crash"/>.</t>
            </li>
          </ul>
          <t>BULK_REVOKE_STATEID is scoped to the issuing metadata server's
entries (see the tagging rule in <xref target="sec-TRUST_STATEID"/>).  The
data server <bcp14>MUST NOT</bcp14> affect entries registered by a different
metadata server.  Consequently, in a multi-metadata-server
deployment sharing a single data server, one metadata server
cannot clear another metadata server's entries via
BULK_REVOKE_STATEID.</t>
          <t>A brsa_clientid value of zero is special: it means "revoke
every entry owned by the issuing metadata server, regardless of
which pNFS client registered it".  The data server
<bcp14>MUST</bcp14> interpret this value as a clear of the issuing metadata
server's entries only, and <bcp14>MUST NOT</bcp14> treat it either as "the pNFS
client whose clientid happens to be zero" or as a global table
clear across metadata servers.</t>
          <t>BULK_REVOKE_STATEID does not operate on the current filehandle;
no PUTFH is required in the compound.</t>
          <t>If the data server receives BULK_REVOKE_STATEID on a session
whose owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.</t>
          <t>Like REVOKE_STATEID, BULK_REVOKE_STATEID is idempotent (no error
is returned if there are no matching entries) and preserves chunk
locks held under any revoked stateid by transferring them to the
MDS-escrow owner (see <xref target="sec-chunk_guard_mds"/>), rather than
dropping them.</t>
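<t>Taken together, the rules above amount to a scoped scan of the
trust table with a zero-clientid wildcard and lock transfer to
escrow.  The sketch below is an illustrative model, not a normative
algorithm; the names (bulk_revoke, MDS_ESCROW) are hypothetical.</t>

```python
# Illustrative model of BULK_REVOKE_STATEID: scoped to the issuing
# MDS, clientid 0 as a wildcard over that MDS's entries only, and
# chunk locks held under revoked stateids transferred to the
# MDS-escrow owner rather than dropped.
NFS4_OK = 0
MDS_ESCROW = "mds-escrow"

def bulk_revoke(entries, chunk_locks, issuing_mds, clientid):
    """entries: dict (mds_id, clientid, stateid) -> metadata;
    chunk_locks: dict lock_id -> owning stateid."""
    victims = [k for k in entries
               if k[0] == issuing_mds and (clientid == 0 or k[1] == clientid)]
    for key in victims:
        entries.pop(key)
        stateid = key[2]
        # Preserve chunk locks by moving them to the escrow owner.
        for lock_id, owner in chunk_locks.items():
            if owner == stateid:
                chunk_locks[lock_id] = MDS_ESCROW
    return NFS4_OK  # idempotent even when nothing matched
```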
        </section>
        <section anchor="response-codes-2">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the matching entries were removed, or there were
none (idempotent).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
BULK_REVOKE_STATEID.</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
    </section>
    <section anchor="new-nfsv42-callback-operations">
      <name>New NFSv4.2 Callback Operations</name>
      <figure anchor="fig-cb-ops-xdr">
        <name>Callback Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New callback operations for Erasure Coding start here */
   ///
   ///  OP_CB_CHUNK_REPAIR     = 16,
   ///
]]></sourcecode>
      </figure>
      <t>The following amendment blocks extend the nfs_cb_argop4 and
nfs_cb_resop4 dispatch unions defined in <xref target="RFC7863"/> with arms
for the new callback operation defined in this document.</t>
      <figure anchor="fig-nfs_cb_argop4-amend">
        <name>nfs_cb_argop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_cb_argop4 amendment block */
   ///
   /// case OP_CB_CHUNK_REPAIR: CB_CHUNK_REPAIR4args opcbchunkrepair;
]]></sourcecode>
      </figure>
      <figure anchor="fig-nfs_cb_resop4-amend">
        <name>nfs_cb_resop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_cb_resop4 amendment block */
   ///
   /// case OP_CB_CHUNK_REPAIR: CB_CHUNK_REPAIR4res opcbchunkrepair;
]]></sourcecode>
      </figure>
      <section anchor="sec-CB_CHUNK_REPAIR">
        <name>Callback Operation 16: CB_CHUNK_REPAIR - Request Repair of Inconsistent Chunk Ranges</name>
        <section anchor="arguments-14">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CB_CHUNK_REPAIR4args">
            <name>XDR for CB_CHUNK_REPAIR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// enum cb_chunk_repair_reason4 {
   ///     CB_REPAIR_REASON_RACE  = 1,
   ///     CB_REPAIR_REASON_SCRUB = 2
   /// };
   ///
   /// struct cb_chunk_range4 {
   ///     offset4         ccr_offset;
   ///     count4          ccr_count;
   ///     nfsstat4        ccr_error;
   /// };
   ///
   /// struct CB_CHUNK_REPAIR4args {
   ///     nfs_fh4                     ccra_fh;
   ///     stateid4                    ccra_layout_stateid;
   ///     nfstime4                    ccra_deadline;
   ///     cb_chunk_repair_reason4     ccra_reason;
   ///     cb_chunk_range4             ccra_ranges<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-14">
          <name>RESULTS</name>
          <figure anchor="fig-CB_CHUNK_REPAIR4res">
            <name>XDR for CB_CHUNK_REPAIR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CB_CHUNK_REPAIR4res {
   ///     nfsstat4           ccrr_status;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-14">
          <name>DESCRIPTION</name>
          <t>CB_CHUNK_REPAIR is sent by the metadata server to request that
a selected client repair one or more inconsistent chunk ranges.
Selection follows the rules in <xref target="sec-repair-selection"/>; those
rules are normative for how the client <bcp14>MUST</bcp14> respond on receipt
of this callback.</t>
          <t>The ccra_fh identifies the file whose chunks are inconsistent.
The callback compound carries the filehandle directly; there is
no preceding PUTFH in callback compounds.</t>
          <t>The ccra_layout_stateid carries the recipient client's current
layout stateid for the file if one is held.  A client that does
not hold a layout on ccra_fh <bcp14>MUST</bcp14> ignore ccra_layout_stateid
(it will be the anonymous stateid) and <bcp14>MUST</bcp14> acquire one via
LAYOUTGET before issuing any CHUNK operation on the ranges.</t>
          <t>The ccra_deadline is a wall-clock nfstime4 (seconds and
nanoseconds since the epoch, as defined in Section 3.3.1 of
<xref target="RFC8881"/>) by which the client is expected to have driven every
range to completion (CHUNK_REPAIRED on the reconstruction path,
or CHUNK_UNLOCK on the rollback path).  Missing the deadline
does not corrupt state -- the metadata server <bcp14>MAY</bcp14> re-select
another repair client after the deadline elapses -- but a
client that has missed the deadline <bcp14>MUST</bcp14> re-verify its layout
and the chunk lock state before continuing any repair-related
CHUNK operation.</t>
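<t>Because nfstime4 carries seconds and nanoseconds separately, the
deadline comparison and the post-deadline re-verification obligation
can be sketched as follows (illustrative helper names, not protocol
elements):</t>

```python
# ccra_deadline contract: times are nfstime4-style (seconds,
# nanoseconds) tuples; tuple comparison orders them correctly.
def deadline_passed(now, deadline):
    return now > deadline

def repair_step_allowed(now, deadline, reverified):
    # Before the deadline, repair may proceed.  After it, the client
    # MUST first have re-verified its layout and chunk-lock state.
    return now <= deadline or reverified
```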
          <t>The ccra_reason distinguishes the two flows that cause the
metadata server to issue a repair callback:</t>
          <dl>
            <dt>CB_REPAIR_REASON_RACE:</dt>
            <dd>
              <t>A live-race repair.  A client (not necessarily the recipient
of this callback) detected a chunk-level inconsistency at
write or read time and reported it via LAYOUTERROR.  The
metadata server is driving repair synchronously because the
affected chunk is on the critical path of some I/O.  The
recipient <bcp14>SHOULD</bcp14> prioritise the callback over background
work.</t>
            </dd>
            <dt>CB_REPAIR_REASON_SCRUB:</dt>
            <dd>
              <t>A background scrub.  The metadata server has detected stale
or inconsistent payloads during a scheduled integrity sweep
and is opportunistically driving repair.  No client is
currently blocked on these ranges.  The recipient <bcp14>MAY</bcp14>
schedule the callback at lower priority than
CB_REPAIR_REASON_RACE, and <bcp14>MAY</bcp14> return NFS4ERR_DELAY to defer
repair to a more convenient time; the metadata server will
retry.</t>
            </dd>
          </dl>
          <t>The two reasons share all other semantics: the same ccra_ranges
encoding, the same response codes, and the same deadline contract.
Only the priority and retry behaviour differs.</t>
          <t>The ccra_ranges array lists every chunk range the metadata
server requests the client to repair.  Each entry carries its
own ccr_error describing the failure mode the client is being
asked to remedy.  The repair strategy depends on the error code;
see <xref target="sec-repair-selection"/> for the normative and guidance
split.</t>
          <t>The metadata server <bcp14>SHOULD</bcp14> keep each CB_CHUNK_REPAIR compound
within the back-channel maximum (ca_maxrequestsize) negotiated
in CREATE_SESSION (see Section 18.36.3 of <xref target="RFC8881"/>).  If the
set of affected ranges would exceed that maximum, the metadata
server <bcp14>MAY</bcp14> issue multiple CB_CHUNK_REPAIR callbacks to the same
client.  Each callback is independent; the client drives each
to completion before the deadline on that callback's ranges.</t>
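<t>One way a metadata server might batch ranges so each callback stays
within the back-channel maximum is sketched below.  The fixed and
per-range byte counts are illustrative stand-ins, not the real XDR
encoded sizes.</t>

```python
# Illustrative batching of repair ranges into multiple CB_CHUNK_REPAIR
# callbacks, each sized to fit within ca_maxrequestsize.
def split_ranges(ranges, max_request, fixed_overhead=200, per_range=24):
    budget = max_request - fixed_overhead
    assert budget >= per_range, "max_request too small for even one range"
    batches, batch = [], []
    for r in ranges:
        if (len(batch) + 1) * per_range > budget:
            batches.append(batch)
            batch = []
        batch.append(r)
    if batch:
        batches.append(batch)
    return batches
```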
          <t>The fact that a range appears in ccra_ranges implies the data
server holds a chunk lock on the range (the failure occurred in
or around a PENDING or FINALIZED state that established the
lock).  The repair client <bcp14>MUST</bcp14> use CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT (<xref target="sec-CHUNK_LOCK"/>) to take ownership
of the lock before issuing CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK,
or CHUNK_WRITE on any chunk in a requested range.</t>
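<t>A recipient's obligations on receipt of the callback can be
summarized as a small plan: acquire a layout if none is held, adopt
the existing chunk lock, then repair, with the reason code driving
scheduling priority.  The sketch below is illustrative; the step
strings mirror operation names in this document but the helper itself
is hypothetical.</t>

```python
# Illustrative client-side planning for CB_CHUNK_REPAIR.
CB_REPAIR_REASON_RACE, CB_REPAIR_REASON_SCRUB = 1, 2

def plan_repair(reason, holds_layout):
    steps = []
    if not holds_layout:
        steps.append("LAYOUTGET")         # MUST acquire a layout first
    steps.append("CHUNK_LOCK(ADOPT)")     # take ownership of the lock
    steps.append("CHUNK_WRITE_REPAIR")    # then reconstruct
    # RACE repairs are on some I/O's critical path; SCRUB is not.
    priority = "foreground" if reason == CB_REPAIR_REASON_RACE else "background"
    return priority, steps
```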
        </section>
        <section anchor="response-codes-3">
          <name>Response Codes</name>
          <t>The ccrr_status value returned by the client has the following
normative meanings to the metadata server:</t>
          <dl>
            <dt>NFS4_OK</dt>
            <dd>
              <t>The client has accepted the request and driven every range in
this callback to completion (CHUNK_REPAIRED or CHUNK_UNLOCK on
every affected chunk).  The metadata server clears the repair
queue entry.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>The client has accepted the request but requires more time.
The metadata server <bcp14>MAY</bcp14> extend the deadline by issuing a new
CB_CHUNK_REPAIR with a later ccra_deadline, or <bcp14>MAY</bcp14> re-select
another client.  The client continues to hold any locks it has
adopted until the original or extended deadline.</t>
            </dd>
            <dt>NFS4ERR_CODING_NOT_SUPPORTED</dt>
            <dd>
              <t>The client does not implement the encoding type of the layout
and cannot reconstruct.  The metadata server <bcp14>MUST NOT</bcp14> retry with
the same client and <bcp14>SHOULD</bcp14> select a different client.</t>
            </dd>
            <dt>NFS4ERR_PAYLOAD_LOST</dt>
            <dd>
              <t>The client has concluded that the identified ranges cannot
be repaired -- there are not enough surviving shards to
reconstruct and rollback is also impossible.  The metadata
server <bcp14>MUST NOT</bcp14> retry the repair and transitions the affected
ranges into an implementation-defined damaged state.  See
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>.</t>
            </dd>
          </dl>
          <t>All other error codes listed in <xref target="tbl-cb-ops-and-errors"/> are
treated by the metadata server as retriable: the metadata server
<bcp14>MAY</bcp14> issue a subsequent CB_CHUNK_REPAIR to the same or a
different client.  If the client becomes unreachable (no
response within the deadline), the metadata server re-selects
per <xref target="sec-repair-selection"/>.</t>
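<t>The metadata server's dispatch on ccrr_status described above can be
sketched as follows.  The action strings are illustrative labels for
the normative outcomes, not protocol elements.</t>

```python
# Illustrative MDS dispatch on the client's CB_CHUNK_REPAIR status.
def mds_action(status):
    if status == "NFS4_OK":
        return "clear_queue_entry"
    if status == "NFS4ERR_DELAY":
        return "extend_deadline_or_reselect"
    if status == "NFS4ERR_CODING_NOT_SUPPORTED":
        return "select_different_client"   # MUST NOT retry same client
    if status == "NFS4ERR_PAYLOAD_LOST":
        return "mark_damaged"              # MUST NOT retry the repair
    return "retry_same_or_other_client"    # all other codes are retriable
```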
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The combination of components in a pNFS system is required to
preserve the security properties of NFSv4.1+ with respect to an
entity accessing data via a client.  The pNFS feature partitions
the NFSv4.1+ file system protocol into two parts: the control
protocol and the data protocol.  As the control protocol in this
document is NFS, the security properties are equivalent to the
version of NFS being used.  The flexible file layout further divides
the data protocol into metadata and data paths.  The security
properties of the metadata path are equivalent to those of NFSv4.1+
(see Sections 1.7.1 and 2.2.1 of <xref target="RFC8881"/>), and the security
properties of the data path are equivalent to those of the version
of NFS used to access the storage device, with the provision that
the metadata server is responsible for authenticating client access
to the data file.  The metadata server provides appropriate credentials
to the client to access data files on the storage device.  It is
also responsible for revoking access for a client to the storage
device.</t>
      <t>The metadata server enforces the file access control policy at
LAYOUTGET time.  The client <bcp14>MUST</bcp14> use RPC authorization credentials
for getting the layout for the requested iomode (LAYOUTIOMODE4_READ
or LAYOUTIOMODE4_RW), and the server verifies the permissions and
ACL for these credentials, possibly returning NFS4ERR_ACCESS if the
client is not allowed the requested iomode.  If the LAYOUTGET
operation succeeds, the client receives, as part of the layout, a
set of credentials allowing it I/O access to the specified data
files corresponding to the requested iomode.  When the client acts
on I/O operations on behalf of its local users, it <bcp14>MUST</bcp14> authenticate
and authorize the user by issuing respective OPEN and ACCESS calls
to the metadata server, similar to having NFSv4 data delegations.</t>
      <t>The combination of filehandle, synthetic uid, and gid in the layout
is the way that the metadata server enforces access control to the
data server.  The client only has access to filehandles of file
objects and not directory objects.  Thus, given a filehandle in a
layout, it is not possible to guess the parent directory filehandle.
Further, as the data file permissions only allow the given synthetic
uid read/write permission and the given synthetic gid read permission,
knowing the synthetic ids of one file does not necessarily allow
access to any other data file on the storage device.</t>
      <t>The metadata server can also deny access at any time by fencing the
data file, which means changing the synthetic ids.  In turn, that
forces the client to return its current layout and get a new layout
if it wants to continue I/O to the data file.</t>
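<t>The fencing model above can be sketched minimally: rotating the
data file's synthetic ids invalidates every credential distributed
with outstanding layouts.  The dictionary shape and helper names here
are illustrative, not part of the protocol.</t>

```python
# Illustrative model of fencing via synthetic-id rotation.
def fence(datafile):
    # Any change to the synthetic ids invalidates credentials carried
    # in previously distributed layouts.
    datafile["uid"] += 1
    datafile["gid"] += 1

def write_allowed(datafile, cred_uid):
    # Only the current synthetic uid has read/write permission.
    return cred_uid == datafile["uid"]
```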
      <t>If access is allowed, the client uses the corresponding (read-only
or read/write) credentials to perform the I/O operations at the
data file's storage devices.  When the metadata server receives a
request to change a file's permissions or ACL, it <bcp14>SHOULD</bcp14> recall all
layouts for that file and then <bcp14>MUST</bcp14> fence off any clients still
holding outstanding layouts for the respective files by implicitly
invalidating the previously distributed credentials on all data files
comprising the file in question.  It is <bcp14>REQUIRED</bcp14> that this be done
before committing to the new permissions and/or ACL.  By requesting
new layouts, the clients will reauthorize access against the modified
access control metadata.  Recalling the layouts in this case is
intended to prevent clients from getting an error on I/Os done after
the client was fenced off.</t>
      <section anchor="sec-security-crc32-scope">
        <name>CRC32 Integrity Scope</name>
        <t>The CRC32 values carried in CHUNK_WRITE and returned from CHUNK_READ
are intended to detect accidental data corruption during storage or
transmission -- for example, bit flips in storage media or network
errors.  CRC32 is not a cryptographic hash and does not protect
against intentional modification: an adversary with access to the
network path could replace a chunk and recompute a valid CRC32 to
match.  The "data integrity" provided by the CRC32 mechanism in this
document refers to error detection, not protection against an active
attacker.  Deployments requiring protection against active attackers
<bcp14>SHOULD</bcp14> use RPC-over-TLS (see <xref target="sec-tls"/>) or RPCSEC_GSS.</t>
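<t>The distinction is easy to demonstrate: CRC32 catches an accidental
bit flip, but an active attacker who replaces the payload can simply
recompute a matching checksum.</t>

```python
import zlib

# CRC32 is error detection, not authentication.
def crc_ok(data, crc):
    return zlib.crc32(data) & 0xFFFFFFFF == crc

payload = b"chunk payload"
crc = zlib.crc32(payload) & 0xFFFFFFFF

# Accidental corruption: a single flipped bit is detected.
flipped = bytes([payload[0] ^ 0x01]) + payload[1:]

# Active attacker: substituted data with a recomputed CRC32 passes.
tampered = b"attacker data"
tampered_crc = zlib.crc32(tampered) & 0xFFFFFFFF
```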
        <t>An authenticated client is in the "active attacker" role with
respect to its own chunks, in a restricted sense.  The data
server validates the CRC32 against the bytes the client
provided, so an authenticated client that chooses to send
semantically invalid bytes with a correctly computed CRC32 will
have those bytes accepted.  The residual surface differs per
authentication model:</t>
        <ul spacing="normal">
          <li>
            <t>Under AUTH_SYS with loose coupling, the residual surface is
essentially the pre-existing attack surface of NFSv3 writes:
any host that can reach the data server with a valid uid can
write nonsense to chunks that uid owns.  This is the Flex
Files v1 authorization model, which Flex Files v2 inherits
without modification for this path.</t>
          </li>
          <li>
            <t>Under RPCSEC_GSS or TLS with mutual authentication, the
residual surface reduces to: only the authenticated client
can write nonsense into chunks it owns.  Cross-client
corruption is prevented because the data server verifies the
principal before accepting the write.  The remaining attack
surface is the client's own integrity: any deployment that
relies on data integrity above the wire <bcp14>MUST</bcp14> apply
application-level content validation.</t>
          </li>
        </ul>
        <t>Flex Files v2 does not attempt to defend against this
authenticated-but-malicious case.  The CRC32 mechanism is a
transport-integrity check, not a content-integrity check; the
system trust model assumes that an authenticated principal is
entitled to destroy the content of chunks it owns.</t>
      </section>
      <section anchor="chunk-lock-and-lease-expiry">
        <name>Chunk Lock and Lease Expiry</name>
        <t>When a client holds a chunk lock (acquired via CHUNK_LOCK) and its
lease expires or the client crashes, the lock is released implicitly
by the data server.  This opens a window in which another client
may write to the previously locked range before the original client's
repair is complete.  Implementations <bcp14>SHOULD</bcp14> ensure that the lease
period for chunk locks is sufficient to complete repair operations,
and <bcp14>SHOULD</bcp14> implement CHUNK_UNLOCK explicitly on abort paths.  The
metadata server's LAYOUTERROR and LAYOUTRETURN mechanisms provide
the coordination point for detecting and resolving such races.</t>
      </section>
      <section anchor="error-code-information-disclosure">
        <name>Error Code Information Disclosure</name>
        <t>The new error codes NFS4ERR_CHUNK_LOCKED (10099) and
NFS4ERR_PAYLOAD_NOT_CONSISTENT (10098) convey information about
chunk state to the caller.  Both of these errors <bcp14>MAY</bcp14> be returned
to callers whose credentials have not been verified by the data
server (e.g., when the AUTH_SYS uid presented does not match the
synthetic uid on the data file).  The information they reveal --
that a chunk is locked, or that a CRC mismatch occurred -- does
not directly disclose file contents but may indicate concurrent
write activity.  Implementations that are concerned about this
level of disclosure <bcp14>SHOULD</bcp14> require that CHUNK operations
succeed only after credential verification and return
NFS4ERR_ACCESS for unverified callers rather than the more
specific error codes.</t>
      </section>
      <section anchor="sec-tls">
        <name>Transport Layer Security</name>
        <t>RPC-over-TLS <xref target="RFC9289"/> <bcp14>MAY</bcp14> be used to protect traffic between the
client and the metadata server and between the client and data servers.
When RPC-over-TLS is in use on the data server path, the synthetic
uid/gid credentials carried in AUTH_SYS remain the access control
mechanism; TLS provides confidentiality and integrity for the transport
but does not replace the fencing model described in <xref target="sec-Fencing-Clients"/>.
Servers that require transport security <bcp14>SHOULD</bcp14> advertise this via the
SECINFO mechanism rather than silently dropping connections.</t>
      </section>
      <section anchor="rpcsecgss-and-security-services">
        <name>RPCSEC_GSS and Security Services</name>
        <t>This document does not specify how RPCSEC_GSS <xref target="RFC7861"/> is
used between the client and a storage device in the loosely
coupled model, and the reasons differ between the two coupling
models.  Because the loosely coupled model uses synthetic
credentials that are managed by the metadata server rather than
shared with the storage device, a full RPCSEC_GSS integration
would require protocol work (RPCSEC_GSSv3 structured privilege
assertions, per <xref target="RFC7861"/>) on all three of the metadata
server, the storage device, and the client.  In the tightly
coupled model the principal used to access the data file is the
same as the one used to access the metadata file, so
RPCSEC_GSS applies unchanged.  The two subsections below treat
each model in turn.</t>
        <section anchor="loosely-coupled">
          <name>Loosely Coupled</name>
          <t>RPCSEC_GSS version 3 (RPCSEC_GSSv3) <xref target="RFC7861"/> contains facilities
that would allow it to be used to authorize the client to the storage
device on behalf of the metadata server.  Doing so would require
that each of the metadata server, storage device, and client would
need to implement RPCSEC_GSSv3 using an RPC-application-defined
structured privilege assertion in a manner described in Section
4.9.1 of <xref target="RFC7862"/>.  The specifics necessary to do so are not
described in this document.  This is principally because any such
specification would require extensive implementation work on a wide
range of storage devices, which would be unlikely to result in a
widely usable specification for a considerable time.</t>
          <t>As a result, the layout type described in this document will not
provide support for use of RPCSEC_GSS together with the loosely
coupled model.  However, future layout types could be specified,
which would allow such support, either through the use of RPCSEC_GSSv3
or in other ways.</t>
        </section>
        <section anchor="tightly-coupled">
          <name>Tightly Coupled</name>
          <t>With tight coupling, the principal used to access the metadata file
is exactly the same as used to access the data file.  The storage
device can use the control protocol to validate any RPC credentials.
As a result, there are no security issues related to using RPCSEC_GSS
with a tightly coupled system.  For example, if Kerberos V5 Generic
Security Service Application Program Interface (GSS-API) <xref target="RFC4121"/>
is used as the security mechanism, then the storage device could
use a control protocol to validate the RPC credentials to the
metadata server.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t><xref target="RFC8881"/> introduced the "pNFS Layout Types Registry"; new layout
type numbers in this registry need to be assigned by IANA.  This
document defines a new layout type number: LAYOUT4_FLEX_FILES_V2
(see <xref target="tbl_layout_types"/>).</t>
      <table anchor="tbl_layout_types">
        <name>Layout Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Layout Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">LAYOUT4_FLEX_FILES_V2</td>
            <td align="left">0x5</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t><xref target="RFC8881"/> also introduced the "NFSv4 Recallable Object Types
Registry".  This document defines new recallable objects for
RCA4_TYPE_MASK_FF2_LAYOUT_MIN and RCA4_TYPE_MASK_FF2_LAYOUT_MAX
(see <xref target="tbl_recallables"/>).</t>
      <table anchor="tbl_recallables">
        <name>Recallable Object Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Recallable Object Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MIN</td>
            <td align="left">20</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MAX</td>
            <td align="left">21</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t>This document introduces the "Flexible File Version 2 Layout Type
Erasure Coding Type Registry".  The registry uses a 32-bit value
space partitioned into ranges based on the intended scope of the
encoding type (see <xref target="tbl-coding-ranges"/>).</t>
      <table anchor="tbl-coding-ranges">
        <name>Erasure Coding Type Value Ranges</name>
        <thead>
          <tr>
            <th align="left">Range</th>
            <th align="left">Purpose</th>
            <th align="left">Allocation Policy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">0x0000-0x00FF</td>
            <td align="left">Standards Track</td>
            <td align="left">IETF Review (RFC required)</td>
          </tr>
          <tr>
            <td align="left">0x0100-0x0FFF</td>
            <td align="left">Experimental</td>
            <td align="left">Expert Review</td>
          </tr>
          <tr>
            <td align="left">0x1000-0x7FFF</td>
            <td align="left">Vendor (open)</td>
            <td align="left">First Come First Served</td>
          </tr>
          <tr>
            <td align="left">0x8000-0xFFFE</td>
            <td align="left">Private/proprietary</td>
            <td align="left">No registration required</td>
          </tr>
          <tr>
            <td align="left">0xFFFF</td>
            <td align="left">Reserved</td>
            <td align="left">--</td>
          </tr>
        </tbody>
      </table>
      <dl>
        <dt>Standards Track (0x0000-0x00FF)</dt>
        <dd>
          <t>Encoding types intended for broad interoperability.  The
specification <bcp14>MUST</bcp14> include a complete mathematical description
sufficient for independent interoperable implementations (see
<xref target="encoding-type-interoperability"/>).  Allocated by IETF Review.</t>
        </dd>
        <dt>Experimental (0x0100-0x0FFF)</dt>
        <dd>
          <t>Encoding types under development or evaluation.  An Internet-Draft
is sufficient for allocation.  The specification <bcp14>SHOULD</bcp14> include
enough detail for interoperability testing.  Allocated by Expert
Review.</t>
        </dd>
        <dt>Vendor (open) (0x1000-0x7FFF)</dt>
        <dd>
          <t>Encoding types with a published specification or patent reference.
Interoperability is expected among implementations that license or
implement the specification.  The registration <bcp14>MUST</bcp14> include either a
math specification or a patent reference.  Allocated First Come
First Served.</t>
        </dd>
        <dt>Private/proprietary (0x8000-0xFFFE)</dt>
        <dd>
          <t>Encoding types for use within a single vendor's ecosystem.
No IANA registration is required.  Interoperability with other
implementations is not expected.  To reduce the likelihood of
accidental codepoint collisions between independent vendors,
implementations <bcp14>SHOULD</bcp14> derive the low-order 15 bits of any value
in this range from that vendor's Private Enterprise Number
<xref target="IANA-PEN"/> (for example, by hashing the PEN into the 15-bit
space and reserving one well-known offset per codec).  The
encoding type name <bcp14>SHOULD</bcp14> include an organizational identifier
(e.g., FFV2_ENCODING_ACME_FOOBAR).  A client that encounters a
value in this range from an unrecognized server <bcp14>SHOULD</bcp14> treat
it as an unsupported encoding type.</t>
        </dd>
      </dl>
      <t>This partitioning prevents contention for small numbers in the
Standards Track range and provides a clear signal to clients about
what level of interoperability to expect.</t>
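<t>The partitioning and the suggested PEN-derived private codepoint can
be sketched as follows.  The hash construction shown is one possible
scheme satisfying the <bcp14>SHOULD</bcp14> above, not a normative
algorithm.</t>

```python
import hashlib

# Map an erasure coding type value to its allocation policy range.
def allocation_policy(value):
    if value <= 0x00FF:
        return "IETF Review"
    if value <= 0x0FFF:
        return "Expert Review"
    if value <= 0x7FFF:
        return "First Come First Served"
    if value <= 0xFFFE:
        return "No registration required"
    return "Reserved"  # 0xFFFF

# One illustrative way to derive a private codepoint from a vendor's
# Private Enterprise Number: hash the PEN into the 15-bit space and
# set the high bit so the value lands in 0x8000-0xFFFE.
def private_codepoint(pen, offset=0):
    h = int.from_bytes(hashlib.sha256(str(pen).encode()).digest()[:2], "big")
    v = 0x8000 | ((h + offset) & 0x7FFF)
    return 0x8000 if v == 0xFFFF else v  # avoid the reserved value
```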
      <t>This document defines the FFV2_CODING_MIRRORED type for Client-Side
Mirroring (see <xref target="tbl-coding-types"/>).</t>
      <table anchor="tbl-coding-types">
        <name>Flexible File Version 2 Layout Type Erasure Coding Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Erasure Coding Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">FFV2_CODING_MIRRORED</td>
            <td align="left">1</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_SYSTEMATIC</td>
            <td align="left">2</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC</td>
            <td align="left">3</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_RS_VANDERMONDE</td>
            <td align="left">4</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
        </tbody>
      </table>
      <section anchor="iana-flag-words">
        <name>Flag-Word Allocation</name>
        <t>This document defines three bitmap spaces -- ffv2_flags4
(see <xref target="sec-ffv2_flags4"/>), ffv2_ds_flags4 (see
<xref target="sec-ffv2_ds_flags4"/>), and cwa_flags (see
<xref target="sec-CHUNK_WRITE"/>) -- whose allocated bits are enumerated in
this document.  Following the precedent of ff_flags4 in
<xref target="RFC8435"/>, IANA does not maintain a registry for any of these
bitmap spaces.  Future bit allocations are made by a document
that updates or obsoletes this one.  Implementations <bcp14>MUST</bcp14>
treat unknown bits as reserved and <bcp14>MUST NOT</bcp14> assign meaning to
them locally.</t>
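        <t>A minimal sketch of that rule follows; the mask value below is a
placeholder, not a bit assignment made by this document:</t>
        <sourcecode type="python"><![CDATA[
# Hypothetical sketch of the reserved-bit rule for the ffv2 flag
# words: a receiver masks a flag word down to the bits it implements
# and assigns no local meaning to the rest.
KNOWN_FLAGS_MASK = 0x00000003  # placeholder for the implemented bits

def known_flags(received):
    # Unknown bits are reserved: strip them rather than rejecting
    # the request or inventing local semantics for them.
    return received & KNOWN_FLAGS_MASK
]]></sourcecode>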
      </section>
    </section>
    <section anchor="xdr-description-of-the-flexible-file-layout-type">
      <name>XDR Description of the Flexible File Layout Type</name>
      <t>This document contains the External Data Representation (XDR)
<xref target="RFC4506"/> description of the flexible file layout type.  The XDR
description is embedded in this document in a way that makes it simple
for the reader to extract into a ready-to-compile form.  The reader can
feed this document into the shell script in <xref target="fig-extract"/> to produce
the machine-readable XDR description of the flexible file layout type.</t>
      <figure anchor="fig-extract">
        <name>extract.sh</name>
        <sourcecode type="shell"><![CDATA[
#!/bin/sh
grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'
]]></sourcecode>
      </figure>
      <t>That is, if the above script is stored in a file called "extract.sh"
and this document is in a file called "spec.txt", then the reader can
run the script as in <xref target="fig-extract-example"/>.</t>
      <figure anchor="fig-extract-example">
        <name>Example use of extract.sh</name>
        <sourcecode type="shell"><![CDATA[
sh extract.sh < spec.txt > flex_files2_prot.x
]]></sourcecode>
      </figure>
      <t>The effect of the script is to remove from each line the leading
whitespace and the sentinel sequence "///".</t>
      <t>XDR descriptions with the sentinel sequence are embedded throughout
the document.</t>
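      <t>For readers without a POSIX shell, the extraction performed by
extract.sh can be sketched in Python (an unofficial equivalent,
assuming newline-stripped input lines):</t>
      <sourcecode type="python"><![CDATA[
# Unofficial Python equivalent of extract.sh: keep only lines
# carrying the sentinel, then strip leading whitespace and the
# sentinel itself.
import re

def extract(lines):
    out = []
    for line in lines:
        if re.match(r" *///", line):
            line = re.sub(r"^ */// ", "", line)  # sentinel plus space
            line = re.sub(r"^ *///$", "", line)  # bare sentinel line
            out.append(line)
    return out
]]></sourcecode>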
      <t>Note that the XDR code contained in this document depends on types
from the NFSv4.1 nfs4_prot.x file <xref target="RFC5662"/>.  This includes both NFS
types whose names end in 4, such as offset4 and length4, and
more generic types such as uint32_t and uint64_t.</t>
      <t>While the XDR can be appended to that from <xref target="RFC7863"/>, the various
code snippets belong in their respective areas of that XDR.</t>
    </section>
    <section numbered="false" anchor="sec-implementation-status">
      <name>Implementation Status</name>
      <t>Note to RFC Editor: please remove this section prior to publication,
per <xref target="RFC7942"/>.</t>
      <t>This section records the implementation status of this specification
at the time of writing.  The purpose, per <xref target="RFC7942"/>, is to help
reviewers evaluate the protocol against running code and to document
which parts have been validated end-to-end versus specified on paper.</t>
      <section numbered="false" anchor="reffs-mds-and-ds-and-ecdemo-client">
        <name>reffs (MDS and DS) and ec_demo (Client)</name>
        <dl>
          <dt>Organization:</dt>
          <dd>
            <t>Independent / open source.</t>
          </dd>
          <dt>License:</dt>
          <dd>
            <t>AGPL-3.0-or-later.</t>
          </dd>
          <dt>Source:</dt>
          <dd>
            <t><eref target="https://github.com/loghyr/reffs">https://github.com/loghyr/reffs</eref>.</t>
          </dd>
          <dt>Implementation:</dt>
          <dd>
            <t><tt>reffs</tt> is an NFSv4.2 server written in C that acts as both a
metadata server (MDS) and a data server (DS) in a Flex Files v2
deployment.  <tt>ec_demo</tt> is a client-side library with a
demonstration driver that exercises the Flex Files v2 data path
over NFSv4.2 with all three erasure-coding types defined in this
document.</t>
          </dd>
        </dl>
        <t>Coverage:</t>
        <ul spacing="normal">
          <li>
            <t>CHUNK_WRITE, CHUNK_READ, CHUNK_FINALIZE, and CHUNK_COMMIT (the
happy-path data-plane operations) are implemented end-to-end and
have been exercised against the three codec families (Reed-Solomon
Vandermonde, Mojette systematic, Mojette non-systematic).</t>
          </li>
          <li>
            <t>The chunk_guard4 CAS primitive, including the conflict-detection
and deterministic-tiebreaker rules in <xref target="sec-chunk_guard4"/>, is
implemented on both the client and the data server.</t>
          </li>
          <li>
            <t>Per-chunk CRC32 integrity checking (see
<xref target="sec-security-crc32-scope"/>) is implemented end-to-end.</t>
          </li>
          <li>
            <t>Per-inode persistent storage of chunk state (PENDING / FINALIZED
/ COMMITTED) is implemented using write-temp / fdatasync / rename
for crash safety.</t>
          </li>
          <li>
            <t>The repair data path (CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT,
CHUNK_WRITE_REPAIR, CHUNK_REPAIRED, CHUNK_ROLLBACK, and
CB_CHUNK_REPAIR) is <strong>specified but not yet implemented</strong> in the
prototype.  The corresponding operations currently return
NFS4ERR_NOTSUPP.  A fault-injection test harness is in place to
drive the repair path once it is implemented.</t>
          </li>
          <li>
            <t>The tight-coupling control protocol (TRUST_STATEID,
REVOKE_STATEID, BULK_REVOKE_STATEID) is <strong>specified but not yet
implemented</strong>.  Data servers advertise loose coupling via
<tt>ffdv_tightly_coupled = false</tt>, and synthetic AUTH_SYS
credentials with fencing are used for access control.</t>
          </li>
        </ul>
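        <t>The write-temp / fdatasync / rename pattern from the chunk-state
bullet above can be sketched as follows (the generic POSIX idiom,
not the prototype's actual code; os.fsync stands in for fdatasync
for portability):</t>
        <sourcecode type="python"><![CDATA[
# Sketch of crash-safe state persistence via write-temp / sync /
# rename.  Generic POSIX idiom, not code from reffs.
import os

def persist_chunk_state(path, state):
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, state)
        os.fsync(fd)       # data durable before it becomes visible
    finally:
        os.close(fd)
    os.rename(tmp, path)   # atomic replacement on POSIX filesystems
]]></sourcecode>
        <t>A crash before the rename leaves the previous state file intact; a
crash after it leaves the new state fully written, so a reader never
observes a torn update.</t>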
        <dl>
          <dt>Level of maturity:</dt>
          <dd>
            <t>Research-quality prototype.  The implementation demonstrates the
protocol and has produced the benchmark data summarized below.
It is not production-ready; in particular, it does not yet
implement the repair path required to tolerate concurrent-writer
races or multi-DS failure reconstruction.</t>
          </dd>
          <dt>Contact:</dt>
          <dd>
            <t>loghyr@gmail.com.</t>
          </dd>
          <dt>Last update:</dt>
          <dd>
            <t>April 2026.</t>
          </dd>
        </dl>
      </section>
      <section numbered="false" anchor="interoperability-and-benchmarks">
        <name>Interoperability and Benchmarks</name>
        <t>The reffs + ec_demo implementation has been benchmarked against
itself (no second Flex Files v2 implementation is known to the
authors at the time of writing).  The benchmark suite exercises
five I/O strategies -- plain mirroring, pure striping, Reed-Solomon
Vandermonde, Mojette systematic, and Mojette non-systematic -- at
five file sizes (4 KB, 16 KB, 64 KB, 256 KB, and 1 MB), at two
parity geometries (4+2 and 8+2), and on two platforms (an Apple M4
host running macOS with a Rocky Linux 8.10 Docker container, and a
Fedora 43 native Linux host on aarch64).  Each data point is the
mean of five measured runs.  Data servers run as Docker containers
on a single-host bridge network, so absolute latency numbers
reflect encoding and RPC fan-out cost with near-zero network
latency; real deployments will see higher absolute values but
similar overhead ratios.</t>
        <t>Selected findings:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Erasure-coded write overhead is modest at small and mid sizes.</strong>
At 4 KB to 64 KB payloads, all three EC codecs add 14% to 21%
write latency relative to plain mirroring.  Above 64 KB the
encoding cost begins to dominate; at 1 MB Reed-Solomon and Mojette
systematic reach approximately +54%, Mojette non-systematic
approximately +62%.</t>
          </li>
          <li>
            <t><strong>The dominant write cost is encoding, not fan-out.</strong>
A pure-striping variant (6 data shards, no parity) isolates the two
costs.  At 1 MB, plain mirroring writes in 64 ms, striping in
71 ms (+11%), Reed-Solomon in 103 ms (+60%).  Of the 39 ms
Reed-Solomon penalty, only 7 ms comes from parallel fan-out; the
remaining 32 ms is encoding plus two additional parity RPCs.</t>
          </li>
          <li>
            <t><strong>Reconstruction of a missing data shard is essentially free for
systematic codecs at 4+2.</strong>  Reed-Solomon and Mojette systematic
add 1% to 6% to read latency in degraded-1 mode (one data shard
missing, reconstructed from the remaining five).  A client that
discovers a failed DS at read time can reconstruct transparently
with no user-visible latency impact.</t>
          </li>
          <li>
            <t><strong>At 8+2, systematic-codec reconstruction diverges.</strong>  Mojette
systematic reconstruction overhead stays at approximately +4% at
1 MB, while Reed-Solomon grows to approximately +54% due to the
O(k^2) cost of inverting a k x k matrix in GF(2^8).  Mojette
systematic's back-projection algorithm scales with m (parity
count) rather than k (data count) and is therefore preferable at
wider geometries.</t>
          </li>
          <li>
            <t><strong>Mojette non-systematic applies a full inverse transform on
every read</strong> regardless of whether any shard is missing.  At
1 MB this produces approximately 4x read overhead at 4+2 and
approximately 7x at 8+2.  This codec is suitable only for
write-once cold storage where reads are rare; it should not be
the default for interactive workloads.</t>
          </li>
          <li>
            <t><strong>Results are consistent across platforms.</strong>  The largest absolute
latency delta between macOS M4 and Fedora 43 at 1 MB is 20 ms
on writes.  Codec ordering, overhead percentages, and
qualitative scaling behavior are reproducible across operating
systems and Docker implementations.</t>
          </li>
        </ul>
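        <t>The "reconstruction is essentially free for systematic codecs"
finding can be illustrated with the simplest systematic code,
single-parity XOR (k data shards, m = 1); the Reed-Solomon and
Mojette codecs benchmarked above generalize this to m > 1:</t>
        <sourcecode type="python"><![CDATA[
# Illustration only: single-parity XOR, the simplest systematic
# code.  The data shards are stored verbatim, so a normal read
# touches no codec at all; one missing shard is recovered by
# XOR-ing the survivors with the parity shard.
def xor_parity(shards):
    parity = bytes(len(shards[0]))
    for s in shards:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

def reconstruct_missing(surviving, parity):
    # parity XOR (all surviving shards) == the one missing shard
    missing = parity
    for s in surviving:
        missing = bytes(a ^ b for a, b in zip(missing, s))
    return missing
]]></sourcecode>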
        <t>The benchmarks confirm that the protocol's central design claims
hold in practice: client-side erasure coding is affordable at
typical payload sizes; systematic codecs reconstruct missing
shards cheaply; and the scaling properties of the three codec
families follow directly from their published algorithmic
complexities.</t>
        <t>The benchmarks also identify two non-goals for deployment: Mojette
non-systematic is not a viable general-purpose read codec, and
Reed-Solomon at k greater than approximately 6 loses its
"reconstruction is free" property.  These observations inform the
choice of default codec and geometry in implementations that
consume this specification.</t>
        <t>A full benchmark report with per-size tables, figures, and the
platform comparison is available alongside the source code.</t>
      </section>
    </section>
    <section numbered="false" anchor="sec-rejected-alternatives">
      <name>Design Rationale: Rejected Alternatives</name>
      <t>The design of Flex Files v2 went through several iterations between
2024 and 2026 that are recorded here for the benefit of future
reviewers and implementers.  Each alternative below was considered
and rejected, with the specific concern that led to its rejection.
Understanding why these approaches were rejected may help reviewers
evaluate the current design against a fuller space of possibilities
and may guide future extensions or replacements.</t>
      <section numbered="false" anchor="proprietary-projection-header-inside-opaque-payload">
        <name>Proprietary Projection Header Inside Opaque Payload</name>
        <t>The earliest iteration placed a 16-byte Mojette-specific header at
the start of the READ/WRITE opaque payload, interpreted in the
endianness of the writer's host.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>It embedded a specific erasure-coding type (Mojette) into the
generic replication-method framework, preventing alternate
codings from reusing the same wire format.</t>
          </li>
          <li>
            <t>The header bytes were not XDR-aligned, which required every
implementation to handle endianness explicitly rather than
relying on XDR's natural byte order.</t>
          </li>
          <li>
            <t>Carrying integrity and identification data inside an opaque
payload violated the XDR self-description model that the rest of
NFSv4 relies on.</t>
          </li>
        </ul>
        <t>The rejection of this approach at IETF 120 (July 2024) motivated
the shift to explicit XDR-encoded chunk headers and the
chunk_guard4 structure, both visible in the wire format.</t>
      </section>
      <section numbered="false" anchor="per-client-swap-files-with-mds-mappingrecall">
        <name>Per-Client Swap Files with MDS MAPPING_RECALL</name>
        <t>One proposal split logical and physical chunk addressing: the
metadata server maintained a mapping from logical offset to
physical location, and the client appended new chunks to a
per-client staging file on each data server before asking the
metadata server to atomically remap the file to the new chunks.
This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The MAPPING_RECALL operation required to atomically update the
mapping would, in a multi-writer deployment, have to recall all
outstanding read/write layouts on the file -- grinding the
application to a halt during every remap.</t>
          </li>
          <li>
            <t>Each client required its own staging file on every data server,
producing N * M staging files for N clients and M data servers,
all of which had to be reconciled on client restart.</t>
          </li>
          <li>
            <t>The approach traded throughput for correctness in the rare
multi-writer case, inverting the expected workload mix, in which
single-writer access dominates.</t>
          </li>
        </ul>
      </section>
      <section numbered="false" anchor="server-side-byte-range-lock-manager-per-file">
        <name>Server-Side Byte-Range Lock Manager per File</name>
        <t>Another proposal relied on byte-range locks obtained by clients
before writing, with the lock manager state spread across the data
servers.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>A failed lock holder required a lock manager to arbitrate
recovery, effectively reintroducing a centralized decision
point for each chunk.</t>
          </li>
          <li>
            <t>The lock recall path for HPC checkpoint workloads (many ranks
writing disjoint regions) would have required thousands of
locks per file, with recall storms on every phase transition.</t>
          </li>
          <li>
            <t>The design did not specify how the lock manager itself would
be replicated for high availability, deferring the hardest
part of the problem.</t>
          </li>
        </ul>
        <t>The current design uses CHUNK_LOCK (see <xref target="sec-CHUNK_LOCK"/>) but
only on the repair path, not on the normal write path.</t>
      </section>
      <section numbered="false" anchor="modified-two-touch-paxos-on-each-chunk">
        <name>Modified Two-Touch Paxos on Each Chunk</name>
        <t>A fully distributed-consensus proposal placed a lightweight
(modified two-touch) Paxos round on each chunk write, reaching
agreement among the data servers holding the mirror set.  This was
rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The constant-factor cost per write (two or three round trips,
leader election overhead, majority quorum requirement) was
unacceptable for workloads where single-writer throughput
dominates the deployment mix.</t>
          </li>
          <li>
            <t>The approach demanded that data servers be peers in a
consensus protocol, which is a substantially heavier
requirement than being independent chunk stores.</t>
          </li>
          <li>
            <t>A majority of (k+m) data servers must be reachable for any
progress, which is a strictly stronger availability requirement
than the k-of-(k+m) needed for erasure-coded reads.</t>
          </li>
        </ul>
        <t>Working-group feedback on this proposal was uniformly negative.
The current design retains the option -- nothing in this
specification prevents an implementation from running classical
consensus internally among MDS replicas (see
<xref target="sec-system-model-consensus"/>) -- but does not require it per
write.</t>
      </section>
      <section numbered="false" anchor="automatic-commit-of-empty-chunks">
        <name>Automatic Commit of Empty Chunks</name>
        <t>An earlier version included a WRITE_BLOCK_FLAGS_COMMIT_IF_EMPTY
flag (later renamed CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY) that
automatically committed a write to a previously-empty chunk
without a separate CHUNK_COMMIT round trip.  The flag is retained
in the current design, but its scope was narrowed: it performs well
in the exclusive-writer case but produces chunks that
cannot be rolled back if a racing writer appears concurrently,
requiring either hole-punching or an extension of CHUNK_ROLLBACK
to work on committed chunks.  The narrow scope is documented in
the flag's definition; a broader version was rejected because it
created rollback liabilities that were disproportionate to the
single-RTT savings.</t>
      </section>
      <section numbered="false" anchor="global-clock-or-wall-clock-based-generation-counter">
        <name>Global Clock or Wall-Clock-Based Generation Counter</name>
        <t>An early design used a wall-clock timestamp as the cg_gen_id.
This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>No global clock exists among the many clients of a
multi-rack deployment.  Clock skew can cause a newer write
to appear to have an earlier timestamp than an older one.</t>
          </li>
          <li>
            <t>Timestamps at millisecond or microsecond resolution are not
fine-grained enough to disambiguate bursty writes from the
same client.</t>
          </li>
          <li>
            <t>Mixing client identity bits into the low-order bits of a
timestamp (to make it unique) reduces effective timestamp
resolution without providing a useful total ordering.</t>
          </li>
        </ul>
        <t>The current design uses a per-chunk monotonic counter scoped to
the chunk on the data server, with cg_client_id as the
disambiguator across clients.  See <xref target="sec-chunk_guard4"/>.</t>
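        <t>The resulting ordering can be sketched as a simple lexicographic
key (illustrative only; the normative comparison rules are those of
chunk_guard4):</t>
        <sourcecode type="python"><![CDATA[
# Illustrative only: a total order over writes to one chunk built
# from the per-chunk monotonic counter plus the client
# disambiguator.  The normative comparison rules are defined with
# chunk_guard4; this merely shows why no global clock is needed.
def guard_key(cg_gen_id, cg_client_id):
    # Lexicographic order: generation first, client id breaks ties.
    return (cg_gen_id, cg_client_id)
]]></sourcecode>
        <t>Any two writes with distinct (generation, client) pairs are totally
ordered, independent of any wall clock.</t>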
      </section>
      <section numbered="false" anchor="layout-level-generation-counter">
        <name>Layout-Level Generation Counter</name>
        <t>Christoph Hellwig proposed at IETF 122 (March 2025) adding a
generation counter to the layout itself, transmitted to the
data servers alongside each I/O, so that the metadata server
could redirect writes to new data servers without issuing a
full CB_LAYOUTRECALL storm across every holder of the file.
This is a natural extension of the per-chunk cg_gen_id: where
cg_gen_id disambiguates successive writes to the same chunk, a
layout-level counter would disambiguate successive placements
of the same data.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The use case is already covered.  CB_CHUNK_REPAIR (see
<xref target="sec-CB_CHUNK_REPAIR"/>) and the Data Mover / Proxy-DS
mechanism (see the companion Data Mover design) together
handle mid-layout remap without requiring a layout-level
epoch on the wire.  CB_CHUNK_REPAIR reaches the specific
chunks that need redirection; the Data Mover reaches the
broader re-placement case; between them the full remap
space is covered.</t>
          </li>
          <li>
            <t>Adding a layout-level counter introduces a second,
potentially-conflicting epoch alongside cg_gen_id.  The CAS
semantics on the data server would have to compose the two
generations (per-chunk and per-layout), which multiplies
the states the data server must reason about without
strengthening any guarantee the protocol offers today.</t>
          </li>
          <li>
            <t>The CB_LAYOUTRECALL storm that motivated the proposal is a
worst-case cost that the current design pays only during a
genuine data-server retirement or full re-placement.
Partial remaps -- the common case -- already flow through
CB_CHUNK_REPAIR + layout refresh on LAYOUTGET without
disturbing other holders.</t>
          </li>
        </ul>
        <t>If a future revision determines that layout-level generation is
needed, it can be added as a protocol extension: the on-wire
surface is additive rather than a replacement, because
cg_gen_id's semantics are independent of any outer layout
epoch.</t>
      </section>
      <section numbered="false" anchor="declustered-raid-with-dynamic-parity-mapping">
        <name>Declustered RAID with Dynamic Parity Mapping</name>
        <t>Christoph Hellwig raised at IETF 121 (November 2024) the
possibility of borrowing from declustered RAID designs: the
metadata server maintains, for every fixed-size region of each
file, a mapping from logical address to the specific data
servers that currently hold that region's data and parity
shards; writes do not update chunks in place but instead produce
a new parity stripe on a freshly allocated set of data servers,
and the mapping is atomically swapped on the metadata server
once the new stripe is durable.  The attraction is that
overwrite is replaced by remap, eliminating the write-hole
problem entirely at the cost of moving consistency into the
mapping table.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The mapping load scales with the file's chunk count, not with
the file count.  A single large file with billions of chunks
produces a billion-entry mapping that the metadata server
must maintain with transactional semantics; the cost therefore
falls hardest on the "a few large files" regime that
pNFS is designed for.</t>
          </li>
          <li>
            <t>Remapping storms during rebalancing, data-server addition, or
data-server failure require atomic updates to many mapping
entries at once.  Providing those updates with the
reasonable-latency bounds required by HPC checkpoint
workloads is an open research problem, not a specifiable
protocol.</t>
          </li>
          <li>
            <t>The approach reintroduces the metadata-server scale bottleneck
that client-side erasure coding is designed to avoid: every
write traverses the mapping table, and the mapping table is
the hot-spot under concurrent writes.</t>
          </li>
          <li>
            <t>The mapping table becomes the single point of failure that
the rest of the Flex Files architecture works hard to avoid;
replicating it with strong consistency requires a consensus
protocol on the metadata server, which the current design
deliberately does not require (see <xref target="sec-system-model-consensus"/>).</t>
          </li>
        </ul>
        <t>The current design uses fixed per-file chunk placement decided
at LAYOUTGET time plus chunk_guard4 CAS for writes, which
localizes consistency decisions to the chunks being written
rather than to a global mapping table.</t>
      </section>
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The following people from Hammerspace were instrumental in driving the
Flexible File Version 2 Layout Type: David Flynn, Trond Myklebust, Didier
Feron, Jean-Pierre Monchanin, Pierre Evenou, and Brian Pawlowski.</t>
      <t>Pierre Evenou contributed the Mojette Transform encoding type
specification, drawing on the work of Nicolas Normand, Benoit Parrein,
and the discrete geometry research group at the University of Nantes.</t>
      <t>Christoph Hellwig was instrumental in making sure the Flexible File
Version 2 Layout Type was applicable to more than the Mojette
Transform.</t>
      <t>David Black clarified at IETF 124 that the consistency goal of
Flex Files v2 is RAID consistency across the chunks of a stripe
rather than POSIX write ordering across application writes; that
framing is reflected in <xref target="sec-motivation"/> and in the Non-Goals
of <xref target="sec-system-model-consistency"/>.</t>
      <t>Chris Inacio, Brian Pawlowski, and Gorry Fairhurst guided this
process.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC4121">
          <front>
            <title>The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2</title>
            <author fullname="L. Zhu" initials="L." surname="Zhu"/>
            <author fullname="K. Jaganathan" initials="K." surname="Jaganathan"/>
            <author fullname="S. Hartman" initials="S." surname="Hartman"/>
            <date month="July" year="2005"/>
            <abstract>
              <t>This document defines protocols, procedures, and conventions to be employed by peers implementing the Generic Security Service Application Program Interface (GSS-API) when using the Kerberos Version 5 mechanism.</t>
              <t>RFC 1964 is updated and incremental changes are proposed in response to recent developments such as the introduction of Kerberos cryptosystem framework. These changes support the inclusion of new cryptosystems, by defining new per-message tokens along with their encryption and checksum algorithms based on the cryptosystem profiles. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4121"/>
          <seriesInfo name="DOI" value="10.17487/RFC4121"/>
        </reference>
        <reference anchor="RFC4506">
          <front>
            <title>XDR: External Data Representation Standard</title>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <date month="May" year="2006"/>
            <abstract>
              <t>This document describes the External Data Representation Standard (XDR) protocol as it is currently deployed and accepted. This document obsoletes RFC 1832. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="67"/>
          <seriesInfo name="RFC" value="4506"/>
          <seriesInfo name="DOI" value="10.17487/RFC4506"/>
        </reference>
        <reference anchor="RFC5531">
          <front>
            <title>RPC: Remote Procedure Call Protocol Specification Version 2</title>
            <author fullname="R. Thurlow" initials="R." surname="Thurlow"/>
            <date month="May" year="2009"/>
            <abstract>
              <t>This document describes the Open Network Computing (ONC) Remote Procedure Call (RPC) version 2 protocol as it is currently deployed and accepted. This document obsoletes RFC 1831. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5531"/>
          <seriesInfo name="DOI" value="10.17487/RFC5531"/>
        </reference>
        <reference anchor="RFC5662">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 External Data Representation Standard (XDR) Description</title>
            <author fullname="S. Shepler" initials="S." role="editor" surname="Shepler"/>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="January" year="2010"/>
            <abstract>
              <t>This document provides the External Data Representation Standard (XDR) description for Network File System version 4 (NFSv4) minor version 1. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5662"/>
          <seriesInfo name="DOI" value="10.17487/RFC5662"/>
        </reference>
        <reference anchor="RFC7530">
          <front>
            <title>Network File System (NFS) Version 4 Protocol</title>
            <author fullname="T. Haynes" initials="T." role="editor" surname="Haynes"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.</t>
              <t>This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7530"/>
          <seriesInfo name="DOI" value="10.17487/RFC7530"/>
        </reference>
        <reference anchor="RFC7861">
          <front>
            <title>Remote Procedure Call (RPC) Security Version 3</title>
            <author fullname="A. Adamson" initials="A." surname="Adamson"/>
            <author fullname="N. Williams" initials="N." surname="Williams"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document specifies version 3 of the Remote Procedure Call (RPC) security protocol (RPCSEC_GSS). This protocol provides support for multi-principal authentication of client hosts and user principals to a server (constructed by generic composition), security label assertions for multi-level security and type enforcement, structured privilege assertions, and channel bindings. This document updates RFC 5403.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7861"/>
          <seriesInfo name="DOI" value="10.17487/RFC7861"/>
        </reference>
        <reference anchor="RFC7862">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 Protocol</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document describes NFS version 4 minor version 2; it describes the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include the following: Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7862"/>
          <seriesInfo name="DOI" value="10.17487/RFC7862"/>
        </reference>
        <reference anchor="RFC7863">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document provides the External Data Representation (XDR) description for NFS version 4 minor version 2.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7863"/>
          <seriesInfo name="DOI" value="10.17487/RFC7863"/>
        </reference>
        <reference anchor="RFC8178">
          <front>
            <title>Rules for NFSv4 Extensions and Minor Versions</title>
            <author fullname="D. Noveck" initials="D." surname="Noveck"/>
            <date month="July" year="2017"/>
            <abstract>
              <t>This document describes the rules relating to the extension of the NFSv4 family of protocols. It covers the creation of minor versions, the addition of optional features to existing minor versions, and the correction of flaws in features already published as Proposed Standards. The rules relating to the construction of minor versions and the interaction of minor version implementations that appear in this document supersede the minor versioning rules in RFC 5661 and other RFCs defining minor versions.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8178"/>
          <seriesInfo name="DOI" value="10.17487/RFC8178"/>
        </reference>
        <reference anchor="RFC8434">
          <front>
            <title>Requirements for Parallel NFS (pNFS) Layout Types</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>This document defines the requirements that individual Parallel NFS (pNFS) layout types need to meet in order to work within the pNFS framework as defined in RFC 5661. In so doing, this document aims to clearly distinguish between requirements for pNFS as a whole and those specifically directed to the pNFS file layout. The lack of a clear separation between the two sets of requirements has been troublesome for those specifying and evaluating new layout types. In this regard, this document updates RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8434"/>
          <seriesInfo name="DOI" value="10.17487/RFC8434"/>
        </reference>
        <reference anchor="RFC8435">
          <front>
            <title>Parallel NFS (pNFS) Flexible File Layout</title>
            <author fullname="B. Halevy" initials="B." surname="Halevy"/>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>Parallel NFS (pNFS) allows a separation between the metadata (onto a metadata server) and data (onto a storage device) for a file. The flexible file layout type is defined in this document as an extension to pNFS that allows the use of storage devices that require only a limited degree of interaction with the metadata server and use already-existing protocols. Client-side mirroring is also added to provide replication of files.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8435"/>
          <seriesInfo name="DOI" value="10.17487/RFC8435"/>
        </reference>
        <reference anchor="RFC8881">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 Protocol</title>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <author fullname="C. Lever" initials="C." surname="Lever"/>
            <date month="August" year="2020"/>
            <abstract>
              <t>This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 7530) and protocol extensions made subsequently. The later minor version has no dependencies on NFS version 4 minor version 0, and is considered a separate protocol.</t>
              <t>This document obsoletes RFC 5661. It substantially revises the treatment of features relating to multi-server namespace, superseding the description of those features appearing in RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8881"/>
          <seriesInfo name="DOI" value="10.17487/RFC8881"/>
        </reference>
        <reference anchor="RFC9289">
          <front>
            <title>Towards Remote Procedure Call Encryption by Default</title>
            <author fullname="T. Myklebust" initials="T." surname="Myklebust"/>
            <author fullname="C. Lever" initials="C." role="editor" surname="Lever"/>
            <date month="September" year="2022"/>
            <abstract>
              <t>This document describes a mechanism that, through the use of opportunistic Transport Layer Security (TLS), enables encryption of Remote Procedure Call (RPC) transactions while they are in transit. The proposed mechanism interoperates with Open Network Computing (ONC) RPC implementations that do not support it. This document updates RFC 5531.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9289"/>
          <seriesInfo name="DOI" value="10.17487/RFC9289"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Plank97">
          <front>
            <title>A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like Systems</title>
            <author initials="J." surname="Plank" fullname="J. Plank">
              <organization/>
            </author>
            <date year="1997" month="September"/>
          </front>
          <seriesInfo name="Software: Practice and Experience" value="27(9), pp. 995-1012"/>
        </reference>
        <reference anchor="IANA-PEN" target="https://www.iana.org/assignments/enterprise-numbers/">
          <front>
            <title>Private Enterprise Numbers</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC1813">
          <front>
            <title>NFS Version 3 Protocol Specification</title>
            <author fullname="B. Callaghan" initials="B." surname="Callaghan"/>
            <author fullname="B. Pawlowski" initials="B." surname="Pawlowski"/>
            <author fullname="P. Staubach" initials="P." surname="Staubach"/>
            <date month="June" year="1995"/>
            <abstract>
              <t>This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations. This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1813"/>
          <seriesInfo name="DOI" value="10.17487/RFC1813"/>
        </reference>
        <reference anchor="RFC4519">
          <front>
            <title>Lightweight Directory Access Protocol (LDAP): Schema for User Applications</title>
            <author fullname="A. Sciberras" initials="A." role="editor" surname="Sciberras"/>
            <date month="June" year="2006"/>
            <abstract>
              <t>This document is an integral part of the Lightweight Directory Access Protocol (LDAP) technical specification. It provides a technical specification of attribute types and object classes intended for use by LDAP directory clients for many directory services, such as White Pages. These objects are widely used as a basis for the schema in many LDAP directories. This document does not cover attributes used for the administration of directory servers, nor does it include directory objects defined for specific uses in other documents. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4519"/>
          <seriesInfo name="DOI" value="10.17487/RFC4519"/>
        </reference>
        <reference anchor="RFC7942">
          <front>
            <title>Improving Awareness of Running Code: The Implementation Status Section</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
            <abstract>
              <t>This document describes a simple process that allows authors of Internet-Drafts to record the status of known implementations by including an Implementation Status section. This will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature.</t>
              <t>This process is not mandatory. Authors of Internet-Drafts are encouraged to consider using the process for their documents, and working groups are invited to think about applying the process to all of their protocol specifications. This document obsoletes RFC 6982, advancing it to a Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="205"/>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>
        <reference anchor="PARREIN">
          <front>
            <title>Multiple Description Coding Using Exact Discrete Radon Transform</title>
            <author initials="B." surname="Parrein" fullname="B. Parrein">
              <organization/>
            </author>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="J.-P." surname="Guedon" fullname="J.-P. Guedon">
              <organization/>
            </author>
            <date year="2001"/>
          </front>
          <seriesInfo name="IEEE" value="Data Compression Conference (DCC)"/>
        </reference>
        <reference anchor="NORMAND">
          <front>
            <title>A Geometry Driven Reconstruction Algorithm for the Mojette Transform</title>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="A." surname="Kingston" fullname="A. Kingston">
              <organization/>
            </author>
            <author initials="P." surname="Evenou" fullname="P. Evenou">
              <organization/>
            </author>
            <date year="2006"/>
          </front>
          <seriesInfo name="LNCS" value="4245, pp. 122-133, DGCI 2006"/>
        </reference>
        <reference anchor="KATZ">
          <front>
            <title>Questions of Uniqueness and Resolution in Reconstruction from Projections</title>
            <author initials="M." surname="Katz" fullname="M. Katz">
              <organization/>
            </author>
            <date year="1978"/>
          </front>
          <seriesInfo name="Lecture Notes in Biomathematics" value="Volume 26, Springer"/>
        </reference>
      </references>
    </references>
  </back>
  <!-- ##markdown-source:
H4sIAGjn6GkAA9y963bcVpIm+n8/BUa1ZpmsyqRMinbZVLu7aZKy2ZZINUnZ
5enViwNmgiRKmUAWkEkqq0r9LPMs58lOfHHZFwBJyd2zzplzvGaqKSQu+xI7
rl9EjMdjtyyXs+Ige5s3+WxWzLKzV5fZ1oL+dzt7NSs+lDezIntV0v+8ztf1
apn9XDRtWVfZnstvbpri4YBvS2552HPTelLlc3rvtMlvl+P7fF0V7bi6bR/2
x7d0/y3d3o4f9sZf7rtJvizu6mZ9kLXLqZvSvw6yvx0fXp18dJO6aouqXbUH
2bJZFa5cNPxXu9z78stvv6QxNEV+kP1QVAUN3z3Wzfu7pl4tDrKzYol/ybgu
1+2ymPuh77v3xZp+nR5kp9WyaKpiOT7GOJ1rl3k1vc5ndUWDWBetW5QH2b8t
68koa+tm2RS3Lf21nssfNMt5vliU1d0om9TzeVEt2393Ll8t7+vmwGVjl9F/
ZUXDv9rJfuRF4EuyNlf39Txv4+t1c5dX5V/zJQ3zgH6gVzbtIp8U/Gsxz8vZ
QTar7+7XzT/f4V879FnnqrqZ0zMPBX0zu3h1tL+7t2t/fvXl1/rnV1+9sKtf
ff31nv75x69efGl/fvP1bvhzL/z5Qv/8ZveP39if+y/2w59f2Z/ffGNv+Hbv
m28PnCur23hwb2d59f7bPx7wfJT0DrOr1bJuynyW0eZcFMV0fFnP6jn946ie
0uJm9IrsVb6aLcdX9Yx2upoUtKrZxeHp8XhWvrcN5pf6xaf/xrL2/7Ij3+Vr
tvidi8u8uSuWB9n9crk4eP78sbjZKYpJu7Navt8ppqvn//HnBe5+rv+bL2hj
nh9djr/9evzixd7O/VK+LtR7WSxoODdFk+1+++0f6YfTw7PD8duTs2Tez942
5QPdn52ACBdN2RbZ2QqPtc9cd0wtBvX4uFPmVb5DZPI8b9vyrmKSe174F4wr
ecHz3lpgNei5Ax5LNNbbfNYWsmW73+y+8GSz+63t/7f7TApvDy8uTk47U3hD
m1Iu6IgdF+2kKRcgXNu1dy3+9+RDPllmxyX9XNBcL/Ip3XFFe9iCMJ4NjlN2
6PsdcKWmKKvOD2c72RmIqpp2fviXnfHbneyHVUHfiKZIvGKX/9kWTVm0IEn7
1unJyQlN4zhf5jTs+aIp2lamUN0WTQFC2zo+OtrGOM/OL94cnh2nC3BIzKee
F8tmnR3TdhYgYHAtYlITXozDGfG2cnk/Zype3hfZm/rPxZKW4rMWYeNcD3ey
n2h922XdXR5aghMaSL1Kl+DrDUvw+uzokiayv7f/1ShbLHay3b298e6LF6Ps
+IejU34SA/zp8Op/pFP/11XRYoptVt9m76ryLytiw22b0VhpEdp6tuIFKHtL
ctvU8+xtQ8vA/26fmv8bmma+/Gs0ld1v//jNhqlc0hmo7gqSEc+eOTcej7P8
hj5LBOjckICjC/UjDZhetaCfeXA3JDgK2kbsFO1rPgVpbNXVss5y5y/Qtx+K
ZpvnGt9BIqxu8rsimxYP5aTY5k3PM4i7nYwYfuEGherVelEEyZqVLT1/W1bF
FKu3vMe/68kKhz3LW5dXWfFhSXIRd9NXMRm6K1/ahDD2FXET2pd0PC3f5pri
L6uyod+r2ZpGNyvn5ZK+NS3umoKfKsFQctmtRyLeZDWcTJ7njq/kMxLC0/WY
5kX0QEd+0dQkMetZS1Pmk4ULsteYGjGc2uXTKX0Rg2/qh3Ja8Cfv6KCs6aHv
a/rk0ayk+Y5b/Dgvm6bG3vJHaTDupMnbFU1BeU1up4xe3/Dkp7z00/TzO0IV
Ff37+gz/s6yvL2jwtPLOgUet5PjTEsiqQysgPvyeVo44P8nhWiiDVvxhP4OG
ga+zzpFBHuNfM1qGbIt1nX8ui+Ut2PX2yD3el5N7nn8zuSdOMc1oJ/7t37eM
u+Nx/WnHHnuOC8/bgv/PP7H8v8brv+O3b+9kl/WqoVFNalokLE3ZtnQqZQjC
b/wsJjmIm66uqu6n72jlVjfQJZ7jy+PHO1HVnseq2jYt3i864R94wl6206J0
3p59xtu3dTfm5XQ6K5z7HbSxpp4Kl3DutBrUSrdaotFLJafdPezV3/6musfH
j9ujQVol2bNqiFPN5Lwtcd6EIRERyakg8ocIo0k83hPr5zMr9FO2blZDS53K
GaYfQWTT8pZlxDJ+aytk53+y89eyhoKzO8Xg7usps02Sb3l1hyXlL9GMOgeW
viizIyXr40djCw5zTDmJcY/dhKfwQeATjOmMb3L8O1oYnTqmk0/oc/h5xXJb
qTwc5gMm+hcyHKgKHz+OHJ+DnS/lIlRJuiiHY2c33pYRU6f8sKd3k4r58SPR
wFVgAnl2N6tvSBEkPZwk5JzIepaBWT3kM6wm8QseL59P2SLam7D69JnsJp+8
Hxf0NZI5RE4zPwMitLv7Jai0nJPKAm5Ks1V+7xJ+H3E4ncwfBjbmsApv4lPg
cAoKWm16VnYqK5e0z48VBkHCid5PisK8mNyTmt/OSSGjG2igq9nUHshdb9wk
A/LsEpZJ3hDZkN4wee8FgpBkpiyd9UGQoMv788cut4tiUt6WIlmiDeK5TogB
2Y/OyG5fNom+0XuhnZgBMUV/C7ERSdIS0ihPl2QV0vkg5uv3G2flfVU/zorp
nYor2veSJkk7TRu1yJkiI5ncFcEmEAa255RUDuLto/hnZ3IQa0HzuS3viAFM
s3Y1uRfSGqKDe7LSblezmZ4R14CQWqFG5RHMLeSQ25CqwXfRiWxdu67ot2U5
yXSutrQTFnr6HfxQVtOSlmpFZ8J/p91h5kiKzbQE2eWzSEDSIuJGnbV8kzaD
uNyCVC3mFyCPpljMSuJpbLvS6Sqn+jcGTz/mJb9tUi9Ix7Lz5kdAq/sLkbmb
DMno0eC8dcuhbdmBhSoCjZtpFs/kD5CBGKK8zC+xfEdJXdeIV3hRTt4TU9X7
cTt0EdEwMRM4DzyDJ1GSPZKWQP++q3EvLuiXdlz0ataLoLKWU2GRhTyHJa9a
1Yvo+fv8oQDpTMhmxSG4ddEb5biteCdBPOtstZiqEDmFwBQFrcDNo2iWzESg
V+HS68Nfz99dnVxcnF9kNdmcuX1aZO/QSkeaBi9RZ+f5kbDBurMy5mIaNtwZ
g+XDTy/TtenoY35b8WpPVKJC8aBuaKdx7l333HuFz26VdcFniwfeBRKPla1P
jiVp3COzSuIMTCW83ZiJlwbZzZpGOKNlMBmmlIEDBjrbcac7xQ6v9zqTt4GG
aMuLdC3gOljC09DakeCpgaPTd3l1amb0dl/GOgLsGVqxH+vHAntBtD5hjSnM
dnJfEM2OMEzMclI3zWohdhQoBsukk3M3YGdQXaEsKp+b3NfEwDDWR9A0G1bl
crUshE7v6po3cQ2tBWpIOSETnU9OzJ6FFOWRzRYJDeHBGyW0srJ9mA7vylgG
J0aeTW/r6OLoxZ4YPoUo6VBP6SGzBm5ood7LZJdmArMIYQPqntVxni6JsPtV
9V4H74IJRAYEbmddERKleAyHI5z1+jE6VUwr9WzmoB4YF+gycNq4d175ocky
E6ZJEleKBebuH7+BRiMH6aFs+asqAcMob8jonRdZYqzRrg3pQMzX1rwiN6ty
BgbklvXCCBLPN5VJAKJE4Zlyzrb+dHyxLW+Dr48k+R17Q4lmHLNB+9ALleMx
GYjO0Wb/U/jM/vWr1yd/un51+vrk8vrnvf+JY4fFZcuimhYLUqvAoWIteilU
yMZfG4zFzURlFLXrtgY+S9+M9N1tZfnLxzpVsrFUZ+dXvJuPpBXB/bqgBbmB
Z4KW/NWrhz17gvgpDhWrEw3UibzVW3b1FtbKoRfw4MCIDk1svTn8lZj4YlE3
9AaYpckw2nJOpyuvinrVztYvoXQplyxmdDLkfBF3LMFa41UjaoFmKPP/4eSK
9beiXe7ADLqIdbnXZB+soLmwgHrPXAta4LM37y6vno3k/2It8PfFyb++O704
Ocbflz8evn7t/3B6x+WP5+9eH4e/wpNH52/enJwdy8N0NUsuuWe0Es9EPXh2
/vbq9Pzs8PWzAfdEU6juVopTsmDmReci1hW/P3r7f/2v3X3a6v9Ge723u/st
0a38g04XaZywwSr5Ggtj+SeYtssXC7KHWSkmYTvJF+WSGMIIu9reQ9WGgUbr
+Pt/w8r8+0H2DzeTxe7+P+oFTDi5aGuWXOQ161/pPSyLOHBp4DN+NZPrnZVO
x3v4a/JvW/fo4j/80wxWw3j3m3/6Rwfiyd7US7iVccT+9ru2mIzn/sJH5y6Z
rFljmxp/zoQ/i4dDaFZ/GesvOF0kYuA2CGrlKBNFQxgdFDs6IEsSaiTfsBu4
U9zRYGQkqCarpmEFh89IC7fJIxT1NyRVlOnaiLxgSDVAt1CHQPlXHagf2aSp
4Xs0Fa8he7XIaXR6IIsKPhIx5KGLgbJuc/guV0vl5C182ZgshE4qHNRYJoHa
0LGerUd0EIuFVzE6qq4e7klNB7Ws8iU0o1r1bn8vcWhoD8Q+q2yei45Ax4e/
NyZWRhb7qqqKGVGyD7C1GbGsrZg/mvMt3rtIC3NhHdW00eV4KI1V9ZR3KGz+
n87rd1Xd1WEOTIW5WS+LSI+B5rGqRDeAJtO1KTDHOXG0qWlaaru20G1qlr+5
4yBfoaoM/BLVsuSNuyF6hwodL8pelk9hRz1GY3Teym4zOIIzUhDGvLUZayis
oNTV+BGOUNY2lmMaA82mXI5vZ+XCee1mFClf9Pw18eNmui8ih84I/ThuH/MF
aQxwpJZkE8BHRQY0Tl/8CDaM3UPyZj5YdirGQraurNjwoMlXkzVGTodsBm2k
AHUZycV7Z0eAvSVqx8TjDBNX3Yr49Te8aSByYp7ZVp692BvTxJ3qDryLU1XC
5Cd4MejrYxNw9yQRx+V0+6UIAJyHNnGijPU01PQ/92yU0cznOW0Q/X/RIWMe
4+Jp69lr5zjstCdkoK5U1SBtF4b4dI4jlmdHh5duWRY3dAreFw0dlkPIBdaX
eCdgW0dr1eF4JR9zjJ19Cx2uFw5PyaZg4mdQW582iIdVkgSqxYsUcww8Midl
4KHA7yC4zH6TAdG+jhGzaDEziPk4/mDeAKW++MWm0ZM0bGrwOfgoaBFmLp8+
5BVeRRyQbqeXz3i+xQTTvS0/sOd5YC4IcXvePK3ZEmELl5aoKu5IiuRLe5Oq
RC/N/qpg7YDbYGzGmvlVL/3w+aR53xHNdt0SB6hK2mTIkJwMPDIS/ZmM6cGf
LLWR2N7ucMa9zLuXmI8ovYCVjtTwRXSD1qilNx84N86y3//+ksRBEQmund//
XlTOzj4HIdfiCQScImlXQsnPIRlkpvlaTkWr5hFZJ5UoxxyqmiGwO+YXZSbR
OJgwq/Mp/PVMuXCI3N01xR0tO57aIJNvinVNK8zWIEQwWO/UD5/ORgMNGC/A
pkF2CUexnWaNDaI6X64aZrF0tMu/kr3M/8AocTzpeWJBHG7Y0bV7DYlLoh0M
ho1voY1bVv3LGWI5WM1DJUInMbsHvvsD83a+vSaZOquNauqFuQ9gl7Nn3Jzp
eAE/03Wys1NMdFHasHxecPCQgROqpODReb6EXsC6pD7C5B3eM6UFkIEz0dK8
OMBRCf4idrHaEnxP5Hk/z5v3WQExSXfylN8UvFWiwberOd1RtuJWzeDQh2BI
XcdjeD5XLWm+UGLVqooZV6Vskp73RpZnrd4YnrL2i4MO8iiWIo7wQLT6yQvB
ksA3ELubBCIUmcckyvvCBFh1SM8PRGRXtqgXq5lIj1i9sc8bDzPJMCFaXapv
mV2q9M32kX6QoOkdh9PJ6pzVa/Mp43SvqpLNtNlLFbFN8Wf2kYwjxo+VJLpi
U4lPUvSTd+2RiZzz0VmLrviYe/7RgkYAO4k5zBetYjL8OhG/hIsa9HC0WcxI
7IkXaXwPTRAjozms1enhjCuOZ8RHZ6wV0GiTRcyjYAPtIVtXFUNm6DyeVslA
d0ddRTCSZQtzm8QWPRyUY9a11HsJYQClKW+gd6n7k+PSkT9KmD8c9GtHzMH7
3cT9SoRPCsuDupHXO4ETxIToYjUZQgzIhbv7DH4VPYu0RMYS6VSoxAFRwXS/
L2YL+WDiKH5XsRcp23o/yubb8YYQSyBOwQ73vF2OyGbv+3a9ZaBmxdxgLols
Z/KvatefFD3H06lnvP+P2PRx/IEwuwP3eJ+K+9u8nEEyTsc0jDGNQnbTtkJC
YzqwJJQ3X7Xs5gB1VRP2Wd2sB13EPoxut5ZCJSFUpq8K4SINfLNobjUMl1dr
B15Aj1fGrkZZ8iHxCmPDSBpVEoXmmUxKkszFgLpKGl2Q9zL1t2Qan579kD3P
Xp2S+Xv6P06Os+cONvPp1RX9rdFCOsJkC8sjRz++O/vp+vX50U8ZHA/EUr09
wLN3R99fyz0XJ28PTy+IyOhc3wvzmWuUUTWRdJH8YaUzf0mEzS7b4kCt0aCz
3NU4nbcdQ4WOEGBribare+kNDHVhklVYZKsFexDZeoVUb0pEOemL7u355emf
lHbpBMnJMzO4IY29QayR9EPPA8TrqTiUEMQzrQnicb5gTyl/GhxqJvDGgdeI
czHITq9OLOt5OTkwrdhcqcQGxGsNwQsrnEX7VhIiDkF9AdYgABC+K7wxsAvA
ObLsF4vZpfMBLRFNFoUMQe5xurogWZLHYouKlJGFJRplAEylng1ZXGWYHHRn
Jdn4EONGIkbo14C14ay+4UNA7P0+n92OeSp+D2noCjwBd2CNNuO9G6c76lS9
85q8jpRdWRYl8sgbgaXo6rqgNbNbMXtHtx/lULTEMUSPjxGGaj+KDA4B55ad
lncAIOUNQzpEeTaxl5nYU1oqSCsUm4OYBhzvTXELB6iObiEiIxLl8/IDlCGO
ncxmseNedff01HC8iF0q8xvde4SogkOio669TOKm/rPEIkkzn0J9uIMWeylL
KtrLKHB529ottq4+kIWyLEgV/vbL/84RO52Pi168feAONDpZz+e1hvdgn0lY
Cp+gww5+jEhTZNi1KyITQOfIeBAtM2e2TfOAyDz5kEPqQ1pPZitIFuKdtzl8
y/T5umXgFt1+ky9Fy/0zkd2IxRuZ2TD24WpRdW5eTEsLIctooy3Hw1C+5+Vf
FTzFJgwm8jID5kaULbGHxuxsGNPjd4BMcmjBY0HVAWcimqbY8mCmFoXjuEVn
bbG0boBUdGmFFFji+m0KzpPwSVZ/G/aF1cEeYMuBXSA5/9wUQ0vb3tMP07E4
+WphnqQfjeGBZrOtvpPjPi1xCm9WvParcja15dwsyDJFcAZnjPFA1gTDWvtV
o0+TVv5X7/FhumItnql2bDo36YibFj+7J616fMuxBQiasPwQIczg+XbX2Yzd
hM77m/EjXhsdNzUImKP/+PZoW/yDi7qslrFFUVaxMVataSPAjoXh0Zr+mR8g
g9dApd6c4zNEj3PkcFksdvCKhH6bYlbKnnN4kfR9RUd7B49feTEjPSE2jOUi
a5HZ7gMSCWa8cGbViJmSAROsz4gxk0tgO6snRIWKsahnAPbBuyqRH3oeLg62
rYKbis/5CupkQHkOeQtZw4CTQKyOQKl8BMagkxbeR2UnMaPbEup3LEd4LQtb
U3moTZ3yXnbh4NMeurCFtK3bI++nYEtfnSIwMOio8ReVw/DAAFByeprMEUED
b5fmuDIEQti/yGMqP40lhIaoxcftRCLRhs6KpUB2oim7JecGLFMbE8o0I0By
QYSl3lblFGZL3NiZ5ANHd5DqMcvEGV8byKAhzRie5p4sZBl7jB1lVE7rRN84
cDgy5wFfMBRpuIGYhH56Z3i9PEtQTrfwDefBjBMZCHvpBt5CUy44hD71UXa/
2MkXHaQgMBQxTpG4vhia3tMob3AejwYORLa3aoGxN0SmyKdV7AW9R2AeYbj0
/yQICvNSkV4jOdxmoYzYIg/YS9GE1RoJI0hBYUPAPTcMCrsy/4gHfcNwY7BJ
F0LHPoE41h2thdFtmLkYNatZ3kT4PrWSOjBBVmyTcDADURyrc6R8F8XSrImn
F16pMk8QqDsS42eNUuPeeYQBhK7fzNm1B08hUXNdKySpBV6BFcKR86w3mIRC
NCuLoA/gC6vgd4yMpME4kywcKSMF3I6Flw2RIcxmTgRjGw3h2BjIIgjw2WYw
U1jutrhj1KRzgcZlLIcqOQy4VMZG+RcQzpP7Isbz6yvwt7zhyk4OXOMeRi7S
a8QrE+zoiFrpRQPAKXnlsSI+/K9la9FCW238HalENmOdg4tQyhE+KYXP2wJ0
0PJZSzOeGx46765PG0BRgt4hkzC/k+PDkaOphyfSxInY4CSIbsE7LQsApjsZ
TThFcBuzOQLkiXJIhtLJ+tG2DE9J/G/ZJchseHoCEMvFEWuWg4CxdY66fPic
Rarkkyb96DtbeM12tEcaZmgSWLLFDeqbP4sdJFgtvLENBCCcHyo/CWozEhgr
a6az4dbs3fq+2DPI7s2lKqStS240IlW+uHV8uW0nj7NG9LpMwUK9NocvxL+U
eQdVNDGnAwEnNeA4K7J5jDSPuMAtaaB+L2J81c06Ovl91ChjAdtsgJuLB0Jf
g1Nw+vzcux5VDkXor+iiHl+eTsQ5bWU6QJ0wum4yTQKbN6WPltUlao33a3y1
s7u3s99NWKCB3JXTsC6STFJOGXy1mhcNDTiKfyLKsIRLjvUGGZs8IivvYIBX
dzgRMo/w5lgUe6TSqpU3KeQ4SF4JVSYzVtESv0c10rYj+ZUIwzpEa7rtLCIo
cxEE6ND6yvGAxPGzycoapqdt1R28PGysCOoeR4ntNXE85tPnYl2ANFIssX+h
SgR7I9njY7gO2o6gZt0B8ef4d59RoH5WhZPJGJPNb7MXOy929r5kzkhksLfz
bT9zJRfvDmNxOFhVuTTbhd7Rf0oRFEUrroAlDNyK/Rs2Q1G1bIa7e99wQP8v
q1xWX5JjxI+QJyxhxVl9tJoGFIRJEb/VmIb3NJvazWLSH7gI8e2TBgDbjk+a
1/s6CT47X+30knyA/1g12PGRS+7sLk/4mpcaEvog1bmacrqWJYCkiyUiTBC+
ekkwf2FZY5bRo37GCsY07Y8CK1DhwJks8IfCYr6RALZH6MMer5QcVFxgbW62
huOvq/bSkNmpRpbsYuaZcIg79BJO2ujoRzxcRZ8x6VT5/Wzp90AynmxQ3ZoF
vCNOka2zAtgFO6n8JnkFw/vqxaLWlUhXoFySsXir3EmyacxSxo3Lcs7GF9lp
S6Bk/EaN/CbQHaPs5PyVY5ca/1QsJzTlrkDaeuNFKE98oxD1D8bcMmiPqRj1
Ip+hzV3EvoFygM5qCuDXokyNh5p9vWpX8f39HWgFAaEuGwmqm5MrIKfV9kpv
BOeMwdUY3h2bORJC70SvDCOHS2LODCLkdMOcqOp2ijgAOaStZxmyNVUA3ooL
Ve8NCj/jziTFgmNAChAVlc5MSMvBQNGESp2ZObwzFtxWZbHqxnD1KCb6gA/L
3HNilmJmlNABnb+hV887STSksd+VhuaGG5IDwez29UC4aSFfHTmoP7LGVW3f
gdoxq1vLh7F88s6EIvSSpgyzu5PXcnDmCgVZL0qBSPokWslK1foaAlFRP2uR
63CIyMXLuGRrRmnUb2QkrifF7WqmZDxipYn2nf4Gknskm98VG50gpuV0dnOD
frCX89jmRV6F0I9pO5LxAdiHZHAwiomknESZb2er9h5RzeyXi9Ork1aYwIjo
BiAVlZPmH1JZP0zhvApyNLuLAJ3JEq5MxfjEjP1Lz+FsxYtN0MD12A4nXD0C
J8dxPeJxCySOS54XR22bwsob6Ph5tB6kPlLoXypIuq90WKk0Y83n/IUV4qXw
2IDAOlEWgr0u7EGXhYot90FOoPq9Q9JgFSW76Nl6gm2og70SFovMGuIi2L0J
UTEC6mxv9gYw8FaaD7xlYSZiyvFxo3W/WUHPEIeaP0IXJ4fHkMP/e1UxeK5g
76nbckwClI3xp1QzWpRLU3E4GsZuVUDM5Cl7B60MUGvC5UesV45ZrxQn/Ige
nKmaItBGlUA8yZhqwjopdsabLeZylqAcSw1IG1Vmg2hZM5ACPIzDFIuGgYhe
Lb+M1MW9nd2gAXLuqkhEURZJNqUp/kWkFXqHYsfzOnAQojkOOAITRe9m7acJ
GEs9IE8jLVHTUyEVgEdKstUVieE3Pq7uECou0Mj+azJNHNzvbX3e/2FuYs5F
4kcsN3jHdQJeqr2PnMecJA3U8Wx5LyLObZEYM5/8dhB4dNVknkBqzUMWPskS
0gD4O+64iDC89PG7QrxrmizSe3xespOA4+6hwpD7OYfaTH9pAQUTptEq4mkV
nljfJdt8qSoN0uRMfnHLps7MAe0aCozFM5y3HSB0bzVXsef3FRVRnbBq7eZx
WnlwOpAYWDAbiF+89HAuc1Kn4gJyPrzC8/EnEs+dW8V+C+RnfpbbQpaGbycm
biFyep2Y6sBKmccuXBGbhAccZRq0k6KCNaH6iC4LojOmjUqW4ZqFdm1IrzpE
FiUWy8hsryRggzQ0Ko5W5wEWersmrNKQf7MkEM1CgkVHSkR44aUe5WN1bjmf
gtYJHoAEFCThyYDNO85ZphcisYILJ/CWMrn2fvpE8GTIG7MTVwSI7Gr2WUkB
NpAjGS1zMkPyCXsxYB6l4293ejmISwa83Wu+JsTwrWUPdl108BgDcFBwrgFD
LFDwBJpwHT6kVt/AxEcDcQuHKJq4mjQeC7albtChgdB3GK4MjRxHjDTjJHZU
xwbRaJgJiCpVJDVwiI52dxAZz5WPmB+hLUjNM5iL5AwK+I29FM7t7XQqZ5S2
zxyTTV4oy2ZeDtR0Abf0S+e12yGKkrlYdny6iDwhjjVZgqcku44MJpnlAc45
XO+jnDpPuHyT5IH7yDRUPylOYtYuSZygEczWQgbEEmjHXPBvSignRxLYQ9nU
DAsYZWneIg9fGOaaSdHWfCR6kWjEFh8OWY6yuFNFMduq8RHq0J89y1NMCjwF
5eWFi5QX1HtBXHeNCijGw3nUScUT8XTYvknZJ9KV6mo9h5fF+7N4G7rb2jnj
ssHmGjPVUjAvXt2LQrfGKlCfLGdvQkdPdL2g6sajCZZIPDEmb+cu6JvNdFa0
3nw2wRtT5FC9jiWjW9WRwoQ3IuOf46eTQlDJefKxjt/xm519uB1d7HbEBkhF
iWBYtqQlQBWSTAjwO1FWWgXPMLDB3vqip55qhnMLeyRgvJIgr3hNYfgaiixE
+TsG2g2fbPArNVn5vE4E9MD4SzZ742mjloiPNqUL5pfcx1AYF6rzjwuflK3r
f7e7KzfFXSkaL+n3zXowpJPfmH9TwBBt7OyDCnQbzPNsoOAaKYgc9XYCeRCi
l6Fd41TzScKDSDa/vvz17GjUJYOOQW/eBgZc3nDC1qSYmoWWPtONX4T92nGJ
Ayta1XvNOYA5+okXilert4MahdBYf7uuJvfE5BjFp69hx6esmTc93OAn9BC+
kmCd1mBrA74+1VIUR6p3j/XujyRTPpvXDJXHyXx5HGiYbbalqmUrsOA7vmZh
snbbsxVPjyPVB5khklQhNVNBUB2qlTIRxTQOFaknw1gbfS3j513ndgkzyd2i
MMRecGXRzFNubx/2pu01FzBhD6v8W2dg0C9cvZbHfBmzoJCaMe98zg0vCq8H
vyfEygy5cFHMQXNvESCdcjSchu+2LgDQYw6EgrAkYyZkOEl6bauZdQPhCdXR
jTDq21vGjyl9cBSYI/ibUg7kPEOoMNH5+kcrSTN9Dje3rXS/3pDHciZmMtfB
QfbBpIQ0EReWsDxSDmC2YKg033iK2CrTDbSQhsnORzo2d9G9gbC0goOuCXQF
B09GzrWt9k8uLq4Pj45OLi+lLsqOHgCR1ZsVqZ5XvI2rv2SImBcOKx3SODWl
smSaoOFNZLNulbvfistbdoVmamVonKbjSzzLdo2oeBKCLbKzo9TLWXq/jmCq
vCWF6XPdlWrTSYav7Z6DaMij4Zt1sxHv1vJLxJL1BJkYimoh2P6KGFRgte6B
6Z9WS08R/h1pEEdC/Af53A6NuKlJcGy1q4XwIZT39HW4+qxSOETNrhheW2My
OGUWXoG9a1lAoX6Xd7rWNSuoTS+b18JIDiY+cobgahHlmHUQ3vruWjBv8ZQw
qZXXP5gu3Epyf+lLoijUJ8vvEJiDdwFnCVqjPvAyu8f7SMlchkoAZG5ylhr8
T2aYpDX6Dt9d/UjC9dKHtaJNdcEln8DaPmWRpouP0kj88UJkU5E/FO2ULIGF
pWSQVjYuqzG9biwFJoPttcznC3ZGC87IT2JemHdMi0wt7tctr7y/ZavYuduJ
a6uJh2FSj7VCpBTrsweRPyujRKFyYuq0ms16IYUNotm3I+B/GH8nR5S25Z4d
vp6XdFTmzxKmbSHIGdeRFcJlR0xQqHegmvyhkNORktNrrmN6ePR62/TGwI9F
2ngvirIX/33V1g6tbIKwW8TguDi6+A7jBNKBo4USbaTIIBcwfIdrBGuBdGcI
otLiOtEjhsbphlB8jTGtv9Zx542MD35CIx6APjpl30mhG3Fu9MrkRKy56x4x
/NaT5qPoFZqQAWtsrD/DSGHxxK6FXDLRgyltURp6bdEgSBk7pxN4FG5yIVDl
kxAaWRBxH5tpmUrnzDuxgSldTe5d7h11iVpjLgNJwbJab+o3YKn6/eHx9eXV
4dXJ6bHVoJRHPboVgSDZaF6XVadc5Gcig4VR8UtZOohk9Dl6S/WfkuVRSG0/
zl3SGslI8BymY3mtFgRiT44I1zh+yFUKrXjhJuaXt+8VOwAx6iMfh2ocMkx/
NFjCNKpI2hHGFunzlyurseW2IF5szxk6paeqXAreiBSkdbyTGl8J8jKuHvg7
WBSnqRMT1pD4Ay5jVbB9DuXeKVtW/L9BNE0bS5THNpgEkQYMZLA/yNWnLBGc
dQCvuH5DD63t2MVo8K+SeSXXH0FaR8Npy2KAoXoMXgLznR0gEJBIUsCUQmlL
3ebXeOax4CePPRhDWfBbj5t7fXz4djv7gmb6Ba1nMZs6LfWGklkKABgajEU5
7ZPqMc65rgtX3hSWKamKNwFjJIg6b9ZcceGtx03VTO/zqcstYheZDWaH5dnl
CZ3fq4vMV25si4Dw6Qj1njYViax+BdKrvhHEcxfdYCbuFxbHnMI4buqbEqnd
7b0vQs8c3vYqbJE4Rq9QBYALtKZj5wqCqAgbzLqsZ9bJBJ35gJHjqdUmxebA
r3OTmx0TZWNQ8JdOpjhLlMr0QDuwLKC9Z67jxgC/VFgEMsAieyBAeeMXjVRd
hENlw9YwJMhIP1GY48KKv2wiocdc5U9kxHjfOT5ujiCo95vsRo9ikHFg4zkp
L3hdVlpYKrklrQ+7DLkh8lFQRZgt0iwKeFIAHMDnw5Hm5eSTxGla5ibwpo1W
I6i8ZhwpnWIPbmJqNIIvvQQpiI3U8AJ4p2FA6yiTl+MrNkzYDJHoMaLXTGiG
98CBS5qzGGWQxOY8LqdBYZLAMNSmERdoVT+WS/wpvIfTRMoLP5KyTRFDivwL
QiH6mRbJ9lP9js9rYLSCgW3DYPFAPNrWxI3mdmL5ZOM3ihk+vtJSR6EgPmn3
WT1f3Oc7k2em4PTKKEg6IxTa6NlUSdrkwwgOt6uBQ0ELh0LhpL2R7HqfocnN
gXP/8R//kbX3Bal34+Zx3Izx/9D3YtcmAJWMzg8uff3tH7PjYpJl+9nu7sGL
3Uwng5e4vx1kv7st78b47Jh/kK4e3z17owP5oiUVGvb9rT347KNz50NTGAWu
Y4llrabqsUWzqojnTUspBpce4ed6dlmfEIoYmubYprn77f5Xe/TX3jf7u98M
TxODvx6aK2cFJnM93jjPX5KMB8SDF0VlAfEoedvA/6PgkWOlFptHShD7iMwV
2PNSArrCVREif6GwZ1Js75lb3TDCzF69xXEYX+SKzu02c0h/SSocQVqLEZmS
N0zMu7y5qbUclT96SVltDYaamEIt6pJLMfDaO3X9aGUEZLcynEZc86V3qbWs
LIt4ERSATZ/eNpITU2pi3x2nR4v771a3Vj/DmA/PRQ3ybR707qGZkv0yLSJZ
kggxYaJOObsOJZYhSQ144q08brhEhbPOkWGfFXlD69Q4/dEn4DGhVLXdBreN
p3mTJgFUUfn3qJa68ymqf6FU/+1vpHphyCndv+Jr2WbyF/VbEEb+uN5ttPSC
fAp6F9t9qVSt6ky7pbFWVnlrGsW6eA9ZGTfYVF8Dom3k2t1cq0l0D/0y1/HC
+7nrFjsTh6B7TNxQgSU/gz4vcZTT8zfnxyf710DpKXR4OdBqQD+mNlze8WJb
wRMcRF68asAjuYNMYHU/JksbdEw2uVNrl30RiOXWs4Kt0xQfLu5ALrzEpSL9
qWwdq7+DLmiJlYtHVKVHHP/gIyNB8aJi7B2U1GFHqOYrR45zsIsvv/76gP7n
j9GHnLCf6FSnn8k+4zOu95k/+s/c66E1wsK2dJljDpd8dQePeOn3q2dWih5V
TMXyIVYUgBEGav1F6nKu2ntfvFbcsoa0CPsg9RCk2FpazxYCsy3EO/2I2gBV
AB4kShTdZ6Veb4qYAlxKAdHmWyw2znjQVMqQEqfAYvYxcobrVOv7dANQ+ewR
BRt73QQaLXAPC6dsHX9sLdF1b4hoxqoEjg0kj8pykyKgPdkgM4WPPZatlEeD
hROVx/6dAlgFDKNesTcCqpB4ZOoR45KgHQyV4IB5Qoz6QREBS2sePi04jszG
RkCHiOeOGA78a5rrlBKrtu5gL6fkRVZpZqCwCymINdWiAQ/4blQdS6zXQbSE
1K+87DoyujAeadkT+th8GaCioAQUotB1WLL+OpXzkayA1Drc1Acn00IYvKvB
vYU3n/zp6McfTo9R3v3wh+t3lyfXb+kt18eX2e0svxNTv+a7Ds9+OLk+PdZU
gqk2JhE/hRQOXEeYmyeGmmWvzK43hynkmzrIklfpS+KqkdhNeqktVT+N6nPW
QZGOzOO19Kv3D/7nVoVrptXLQksgc+3RnoPYmgiIHOG+bE+0QiLllN6SAuaG
AWRt6tCO4DA7jFHs8GifX2FVLbQbUtlynRvoP3QWpvTnsrScYTfoUl8akmvT
ETBb77XSwJH+mPAE1eW7CPkIgS1wHc2fHAUCQemevCkZcZ/PlhJ431RPT/Nw
tfOClpyMwOoBb8fDEWbwsO+xcmXrCyt4G0EtZFuQ4cZmEAyCypEv+Bl6vjqc
WxsHh4McVNjRVLHqbR0gXqGGSMiQpFWSRAyukPSE/3rQ7v2B3f4cFyHNmOSu
b8g2+tR+MUIN1TuhgXPPQ+GH529PzuQO3rPsiT2L8yCUE3rldQOEQTzxbT0p
eUze6a+yHO1Ej+20+IiJLx9RV1XhOwcNDchqGnU/iy02pKgFHTZGzuRgSzAy
WLByFsWIlSAwnP2dNb5dVROp14Dy6fQW/qi5hjd5L3TVr4/Pfzn74eLw+ISX
6ej1+eWJxVKk2SRX3s+KD6TeWDEY5Yj93Cl66eH0oWwHEIWttRqMm7hZVf/N
72SATpmE/oRarD2YkYLfX19ptzcATvs3F8cTtGJ1w+L1G5zrcUil+ayXi+4A
jQHMRqrGYQE0ZQZ9zWalRKbCidBoKUuKs7rreoc0qB5QvspK7RovkCFenVxe
WSiOJ/fq4uQkXIjf/+ndGIXScCn1RkOAdCGz4Ml0GR8ubiSdK2pd6L2ClnLF
8nHltzt5XjB4aX0ED0TEqREAmCGAxSXc1YQRJhK9N8qySIa/4w5bxnaCkwrD
erMJOMvJBr5xjZZb5+PIhdJtYYcnAj0GOQqStLAqxUTBWHiVGG2BmpSIf2td
ZintwhPTyEjd6qLeqCNHFaCT1MrhMl+ZdCfQTXoqdM7F1yxj80QEAIcWMTqL
Zwxso53QUfoSgWXGmNxSrXxNdjRtZoBvS8FD3VxtqXdvzcekcpN/nM8DZ0hu
hG51bTKNufNGxjmS6PzFDcyXne26KYKRdK+dViL76zFPYMc2sCr7i/ZhDtXA
dKg4QnEpfg1RCABaatnrW3zLKM07TP27Lnbh6KsQCdng0mb2UDHIn0HQnHqP
21HesWw08z4kNqueU7YM2hC9KG4hbCBUxjWwaslvazexVI5YQP77yLwMNvr6
YKg8lrM4YhZAjvATliVr4kJK0MjGcQXCqWDMeRUtAaGSrbPyCLkVRGBNk1OR
O5vGWu2Vqrz/39Bqr5KsEg+WnudTUxGHObecdNX9g3GM9CFiGiXrqa+QwIly
sLe+0UNU6HCoQE/S5dMarilea6SsLXzCpFecLOuuLt4FgTfKLk5+Pv/pJPwb
JP79u9eoHh3/EO2A69V2ZCNmbEijsU7ad1cd1Da761Uwlp0YekRCiM6CXpMh
a2b98rEeMywZGjOx6vekWzMQBRdb6SKIEBZvmHayfArDjzPCeIj0a84QhNqi
JQVhMjO0vs7wGKEroSQ7Zb///VG+sMwgpI9JA4XDpRHFuC3Eh0aG8WqR2kCZ
72nA7kNwtHQRwrAGRAk/HgGpOvvDg5EEjZ6Za44JE8xtlsLI5ABIXwL+OL/M
N0Lw+KbTs58PX78c8B754qLsTdb+c/wNft5/x/pa2wvPzq8u37192wv/6zLx
w7esKTKKATVlksowO8MmiHZS4MeDMQhnY1+Dy25vpw/XarFfBx/Snux2Z1fv
2NPju64M1P/icBJ/eYPn5M3xJWLzscfkUeNLPtClqyYuBvv2MAjlaaroUGVo
UWisB4/rWsREgHAHWjtxTZWUufAjXQZDZ2aIv6SEkTdoi8DP1wK1Y5euzI45
u9HF25OLNwYyLsOKJHjKbt3KSH0s2o0HjyWglGQnLZYToDzaWNP8xfIN1L/Z
A0az7v94dn7GN1gJZetY0wibMq+BlrQ0fAFMCckGaIuILf8/6HcIonajomIe
CKez+j/UAwGtqP3f5H4w2Dw7rYaUR1WWzDCx6W7CiGuh37H2bPH6vja4CJYC
W+Sm7ntbwXdThaaxyZFh2yPOC5bppbaj24QMuVlFeroGgPGakEZWL/I7q7Sx
ybPy/wMniEUZPFgK6sAq2A79MX6vJm5q+tseGLYqwiiw7RqSUodGohQsyxIl
In+KCjeRoFHn51FhBxLEdVUiJUT8r51zvfO0f8BTQHANsNDzdediXv10Yr+6
67u6vbzXaMYmbGSTb6LMkbLcAfoYIEkjkc+nD+ZRPjQQEo+ZCJII7Eas0tA4
PIF8NnWw+2AjgXwmdfhlVbyjZ1Ah3zzbEtSD9zzFxXhG9hL8OubqmsOsbfhQ
xSeKh/LEoeoU/vQyocvMJKY4GyrmKfoAhuCxfJIohXhFVNHWHKfLrpsyiysJ
er8VRzvu2MH2/4o39TczrMFc+g3qui+RE9EmKR1AAe1k/1UWlv02NgZhLiqQ
18VY2/WzkZK5MenxSPdkpNz9QNXUKh7SgItZUl4HHcobvckDDtHUoUwfSdzW
yUc+JckHGOrQ3mh5CzkPaix5dLxpJJ00mm6Y+fvg5E1yZRY1Kp8IOngV+1vE
1zmYJDsSVTkL9Tk+7cEL/cHwxciB23WkiSnC1XHUDLrBTH0ao4L+ummZpdaR
tFPceqVC8LMrkmftOkmSArjQecNHY5O9mHHk1OnUVtEtzKuWswxCknzE6H5j
DqKvOS6uSUm5+LR/Ul3L0PZXXJwTAwpL+jnuy0Fe4QFkvm+L+jGfWiUWo6X0
nibW7RO+ip6r86kkr9ht7guv+JCyuMWtl9tnel3dE17XLPK65jxTc7oO6tHe
mxrqLVmKo8+zEeDQBg+dS4P28CWm9BxBdtpBzE7UKmp4FTapZpLG12s7wTSk
HjSIPu6sIvIwqWIs9MN4yc6QW8+Mur9Ar/RUoHvJwZnQLnK2fukLD+nuAEAE
MC+NcL5YWhG2oaliSNMmf4z8Mmahpu6I8VjQjQLlR3vl7Oj7a0FsXpwcHb5+
Lf6ocl5wIlt0TNZRRUKP10QLFZ+gz56QtGpzKKyYL/uHhpEZ6IFWFY/O6CBI
+4OO53FLvEfJRZR1cB2Hjt6XXpVi4MMuZqdPDPwUV5Pp6HiSuCRaeI8f1YMU
otX8p8VUWzyYl94cTL/FSSa19WD7np1fuZswoNhhZKGPyCN8rLih9fAJFR/t
AA7JsHetd56yKEqdtKF2xMjdrONOAulexqppqI7QcbnuDDrxeYDt4PpOV9zf
b9DNLUkQ7vLkX9+dnB2dZH/I3hLNn59fvfqR/k5GtyWHoM21dsi1Hf3vssOz
87Nf35y/u0xdjbhXEdD233dDQGh/Mx+p+OYvw29kLVbgfDP77dmz7QSAzmxh
bOgk8YMrDj1d5UknDmBA9J7f3oOlSPEsbzg0NVsfWOINJ3FGbpGu7hE7Zctl
TMS9s2pMUgat0fh8MmGEHyukxns5rwIfUatEeCGD/7VFYi9gDE8aCbil75AZ
YL2cT5/4Fzu0dtCj5CRQYMqs9PTrCewhOn0qJoCXbPV3YTFbtSat5XxzuYEh
77/acvSR1hJBOOW27/BKAiND0/RTtNxXj/PgqarUy9uIGKw99NOxDa+bBzJ8
ck6szjerMJ+BtfZTuj7/6WDQRuM6yuwUHaLzSmGay7ZHUeLH7xcAW3VDOJG5
ovkLs9pjK+Y5F807XWZvDn8VSKx6ecIitNov77Zs5poiiCRE0sEfq7smn2qI
PKUZX2kEx02gptx8ktsLSgCxG9mAQTCv1RN1U9+tWsmCs1SIMCQGW2DTiulA
CEraRuG65BW+tb7JKgisRa5VjQfgFoYggPNL2osD6ft1e3st77tWjGa7r1l5
cY3mH06ujk9+Pj06OT17dY6ULK74QhuGmlPDdAOU70i+GmtUXvip4nGp8uAp
zdQL2A1JhBYs5vZ3USDch794pt13Y6FnfVJlpbIh4cQ5LHqE5VWDcAruUZoP
de75oo188s5njD0R+SkrPgQpVPyuVdIY4vCoNs4Y7Pzp2Ntnxt3Q4JL74ErI
TXJpvSb0yIgC2vEI7KPTog3fHLZ0m+aFIsUK31Vrkfuf11VIq/Mft4iCsblP
Bf44i8h1SgfVZocHvfGl9YjSCVn4iI/7QA2NEJY0N9cTE1d7R0N4KlpDyG6u
O/tExJdTaqwDpaQbolHjOEPpB++nkqbRFmXOOoqdTNYyo8NTTvtvSzIrvXlz
/m5UA7ODzI5W2wybpRIkTKwZqQ5La7hKhkj+XqRI1G4ldgN9xkpYA1TcLrTo
vGWFdhvjaTGZ5eq46QTMpQHKQDTcMIhr7dkRR4W5EpTNoq9ScKX+au0kC5SO
iMFDTOkBJ9yBvzageZLQrutCLFGQZdnkZWUJR1zdxENi8m6E35Vx6jpTCmjg
sQ6hbvlkVJ9KrWoWOR28gMpPbMnF26PLk6PrHy4vLftNu8+zYhyU4giv1wNq
MuqLBevdqvGu86hdQz4QHBvgd2asB3hG+L5CeeSdWLUEJfGJgDwRl+I2Dmdo
RpVLhlJa0bizOM2qUsuR31wVSzQNkR6qJZ0FQZpz47bI8su2Eks/VxSFWZ5R
BeOfXx+egUnPV0tU8rx6falN5L/d++ZbqX6kk+IUL1KhZyU3xjIuhsQTNvmR
PB1Sfi3kEG9IZ/Ul5NawLTewpdIWONNugBzAE8rGsvskYg+xQEvEibmOE78H
kXhR3dEs6tsunMnnYoQOIAk2+hmYFU6a83xm43gn0onEOO6zl9p+bLKMvFUi
XaQKcVz/wscVbmizTXV5xeX8lmpLkmo0rLyInXpXLElteaVtVqWWIJTxWzr0
yEJMM3cHvWZ6VKW0zaD7ayuPqoMZ5jw1KrZfConHffXuNbvqU0qRBdN54yN1
ckshhujpyCGvQUWQGxesCgzgxQ5gWWjQTONL99x7Z1Y9F4JY4Vq0Os1PTeDS
vm45G/12vCLLflpAvZiGmjidhSYBIs41LmaMhwfdPnyP912F4IV69qzqmwSO
fO8/LGHsSKDl2KfleMxLrRiNYCItzpirUKKUUQ0uQRJnS3r8CXmX3LXK2k2P
YxyWegi3n+rjIynjBbua+pA2XpOwv6JqCaGkOEgd57CT1/xf/SpgLzPAFJEW
XaJ0cYrpk+N0dfHra/rGhVRa4MIavXaUKjlIqRbAvg50w2jI4vOp7Eoh7Gvj
yt+WpB4kR6/ioIYvPJiI12fM67OaoT0nF6Oqyvlqrq+RYgjcY/2u2ISrNEyj
s/kfn9AKiNFH7AnefiKcmWUWMmgV5cGbIS2MSM9zo7F3r9yspndi9RUf7nP0
JZ9qbelueZyN+EptKMSt5bbuc6knbtZ8cBppmYftJwhiQseGVNZBU3HYVTJi
d03qrXHB8pa+cGOxsqPjT8NjnN5dMDbfekXh+1KCkXZyfyqam4IM5eyHfLHJ
96rPgo1H8Oxdzb/QFk354iCqmRKzAQuIDgUIwvejHIkogs7yLJ8XLn2jZyob
6mdb9gwNUsvaOtjsa8lUoz3Qcl7P5W+pMbH19vzy9E++XA02xco4MTjYae0r
P+RIfJKEpm9xmRNr1sEVF3wdKLDcIfMZOvvqpkIdicYOtlcmH3a1VbNoVQ4a
un19rEVdfU+C1EvLNeTA0jrO15mKUPoCbZnVGsvdwMb5LKyexY9tt3G48E0t
morissU/X5wcvn6z4Tws8tZGET3O5UE2BCm6wkO9rT22QnNRbFAbo+LFdKor
effRj+/OfnJeo1JVztwTfeFK+gca+C0DdC4r2wEXMB8qq/MuIneqAbSbHYNV
zWa86RvoeQs3RUYHWWFkzojGyuujdh9PwXsrtqUaDZY0WSQyLQETQ6GtUnRH
5kQoVCzOnbD4c9JVuaJNH1lgrnQWIZ062NrDKTRHgJX1jBYvm9ZMrhwnh4qu
qm/Q+DzTAz95ZiBiP5H7InTjsQwdqZ/JUDK4Jn3n7NuVNiLxsuSXi/OzH2gR
sy15rcCPkiQpdkSO431wauKFvuNkuT7UzTbO5kCp0s7LXfflHeG/vSEBhJd3
US9WM0aSJMfY57D1aULp4dZbFoGcDEtiBW6e4Mq90SCfJHxsQ1RD0khQwSAZ
rb6ymC+Wa7YxuBIKGVwxL+X9gz2X2IQ+t8cnoClEybGHrxQX8Fg7BMhp2eam
cFqyZeNAnA6EVcwqPly6LLSeOL0WoLF3GM8YcoFxxKCcScVYoqKo0FYsIZ0X
FcRrrZp7z9qHM1+c2XkIQ9cLBUyhwFEyN/U3xYs8kjS8dOQw5pBY5WKfoMeM
tLpuT+2MxKT8r7ZfVoRJN8jvj22L0kwvTPJFiHjoCFUH8V8w0RuQzXEl6jup
dsbOoXK5UtRVYlMLTMXDMiSiITpWJKzxmH8xjYrRls9FCfAFh5kRdeG/+oSl
dj5ZGVSho+VdxY0mowAkbhuLGr0vjeGBinvdWWFuL4ZIBgxpfCCcXjA7nNPg
HOHYXewfURcS3tp7vEj4oDq0Xtpc0z3SgFA7yCsipm6ZUvq2pPKj8ngLWMkd
6rFNZ8UL1lr6BalxVZErzhn1onQm1TSAjm59/VfXjVRDxFnf1xb1ntiJRLz7
4gp/mZsg8hThFULOMpzt0OZZC7ozrnIlTVjSoXuEbnB331tUpFfSnJGD+G7B
oRNi5zJ+X1DfgkS8RePjQjS97IqVjY2qusTgidcwVOKRVH20woEUbaOul2pJ
oYNuqhyqUetrZ3urXvyqXCB6MIcBFh6TqxSJlraPXPMqOLb7dZNoje5E740S
6CT0Jgb80CBY8bmWHjcao0nrGooj28Xua20J22oPgciXHRxsL7UD26q5zcHg
g1kaC/ynSnsiVMluBWjrsXdZxiWJi2Kk0mvPL7ZSHMdI33aNuolQ9wa+v70x
oYddW0PKhdjS7eZ8Sf5WU+Cc9K1/sZbFFYl90czR1sPrh77nDenr859YvkST
TtZEPy0D5GiGNRclUtsRb1ny+VtuIGg6Zw9ruqrYS8SqeN3Y23k4eYYOoapA
6gv6uf1oMciPdfBnofxj3BIk7K19xRalU43QvC3ZQNZsSiSCLkiQnaRdGLAY
LN97OQzbVkY8JlposRl9N8qO2k6rEwcOzZYxPbvXe6Db+QkAN58a1rdo84kC
0PGyZ7J3CQE+c+YHI/31Gek8OIwq/Z/5ek/sisRSX5AceMw3oDfFG+lc5Odk
aGYH26VryAoYvWs2nkgmCcMIeTgoqEz/bAQmLhUkq9sWrp/9DafL9OD441b4
j3uxLeNvsROJ4TTDHqwEsI3OJfX0KZPB+346XFz0XGakzlK8w/AeuZyhGE4x
Ot18x23cVEnAbkmrHmnBS/cdaFUH/RSeBEmUqE8xkxog8Uyw0wxw8UMZbZqB
FoAMMYLwDEfMiRjEslA/rofP8fl6XxQLOYqdIEE+Q2y99JAhFuSSsE7Hvpsa
say9j9lEliS0s+wGt0QFpbT+O+2hH0vDzSTFrGHexM2lpOw8Yj9c4pGHJkq+
X0eZ6KYSrAM5+86zXloYjoN0EbVRMZkBrucY1oz5xKGBgeR+1NM0fCSbtVzs
Q7d3mCsPuQ5igaoIA6V8nXJVfFBBH9XUSfhkzHCtSebmqmSBhbZdiARci4ng
NiSLuHVIlYrK4iVBsE6HvKMmJ03jongKxTptxxPcpjCe2HOEoigImgY1eWgh
EVNA46MSgT9+GiTDSqSAnVD0HgUfBkoN6E9R1B0WpA+DRbCzpCTtlFXPAQSa
vU9rudjOXpJ4OT0/c5HMob1+fXJ99Pr05OxKdCjwgXiTNLacKgfRHmd08EQg
diTrZh+GM/l99e7irEsFHUk6yNuldrUP1jErGoSVxPWqlJR6aFRa07czUkx7
PUn8jpG0rW+XnMq1WgBuh+r7y8nOttWzngIPQaxjfDuTHIf0fPgqsO39Six7
4PaMWK3ae3YpC/Q51DqPyHVjUwcb/oj3tIMVsGVxvQAWl4mTXKKEjJtijlkO
Jji1Gg+KBUg3Zysq08SF78CvXGMCg+6cv/RWhlWEKZi2pCZr3SyIufnQ01DU
wqKSLUoHT57seAFqYCxVq0V4B+scIyJ+gwZxpIOTrGA+qg1olgoIMiBH0Gbx
kNKp9qMbkhZR1e+h8QnL7jrRFeAxz5v3HJJNGU3AGk5ZpklRT7K569Bqfmy4
RJ4Q7NuCtQk6f9Znig+7VFZG23VTIIZuFC+UdCfq+un5wcivpOKG6165yIbi
0KL42IZaTnVuzJbFbNamKbvq0aVhhpIIXM5cv9yxKSI0rqJY1QSQMsnrbM3h
ZZ9/4yez44ZNP8U2mNrcyxv0Dmdf6M16mHs87kiVHSlet7zn7cM/SCJMzSTs
ebys5j5nMXiah1+w8jhZHNrhzQsHaWhpNFRmamDXSuVqe0vzsouDwafclksf
GGUKLabspg/fM9qIdN9ili9aMfH0pQJFhtK0wTv/hd89FPAm8cEl3Vq17y/s
hG+KW3HXrwi4xmJkqEiNmvDnC9ErPXp6CKQqjINhT+O/cgwuZFdJn42BoZjm
w2Fo7+vmyt4r2XZVRQp6pkHmV/u+lCr+GpouFpHyHJRK4ZlitxqTkOXm3nxB
dReD/qRaSvsaXR2NeZKN8frw9E0s0FgVDT1OX6lJz3NrAHEs514jQKi8GkbV
iFyeEVGqx6Gfhwvz7uQDUZSQdyhFp/Vxpups5G+ysQxLzsoplI2+NkqnfPmE
h6GLiNd5DZYZq+QDWseRhvomfWVr7FrVQo3GcS2+yI5Ly5z5fdoymhn14EQd
eGg+nTZc0zDs57ak7cdwKfB03xBXR8TNbYcWQ0Brm01YEj0MLXA22ji1wDq8
x1jlIaofmQVsrnVuTm5baYWwQN73xN/ey7lQx6kmbvWqzgXVLcWMiOxg/KrP
ptxxb8MyiPMBUb5qsuYIz1z9wRb+0cNV0VK7qDX8gRKxQurEUBEnecSeykq6
AiMn2RS/74lFPsLzdURqC31KMys2JBvglo+++vmLPvAI2mklRW58IXKpfRDX
lueKCL2a7S87m+xh5OygBqnzR4N4SAqxd3Nx2yyJGnbezaAcF/n/erieqDL8
wFDxaKdBqNY49oBMj/AcRT5+HpVK8IcXv2X8lhgmkKiskyGVlobzmMZhLrNS
Eddx+cZ4TXaqDXGhISZ0yx56P/HIcxea2hxa5+oYUkqnwUp4IAvnA+uhgeak
RpTjtfdXOqv0Ug419Pnirl6WWso/yqdx1gFWigoS3Wdmkx8K0zK1y+eYOoeW
OB3zbcDAlXrWqZJlfUidIZ6VMw6XX1fT0vcSSNKIIgTK1qUW7Nr9Zmf/S7wr
qr+/7RMhb9bpaJw4zx+sJG4o26Lp8CG5CcPc13PfuaruiO69QlJWMs28wdxr
qPP8x20nqD2fLiVSyMBCQR3lyjVj34C2XuR/QWYd97uc5vy665t6ujaHdDKg
MBYpXhvCWBuWdac7sZDhJdHa6+H3S6H4puA19/xO9BHZl/0XX338yKdvrRVY
1Y84zRjBAC6AJi4Srn4oqpL5vxVB4uQZOE+UaP50fEFcEC1WDDXv0jbC0jXq
w7SRqDrGOTitv1mtFvy3Kqvli73rpf2boYd678tP3kjCsW4++2667a/Fp297
7N1Glu8s+ucggpof+Pgy7Xc1MHtNNx76aUt3k/eQdjDDFm4j9fiphdVDE49X
GteQ/L1GvsG+jTm/rsCFp02bzG5oKHy3/fMf/vHpyckIehOTy5snJWQfjcrw
gK3v/xrV7uvH/JbOss2gS6YHmnklJu8b3nOeAJn/dahDpGpjh0kW7U40NlsD
xHG57dFMWpYOVFIJgNDJfQ254KR7HVJL9DWjjCk2/DPnfiiaUWy5Fqm/Xtlz
fDCUM3QOQCh0xONLG/yEdsRpu8th+0tbL28qSRFj6SB447G1mbT42ee4qEuG
TcIKHWVfhLZmPaxWf1YkQb4MEx5IfOZM7tT1zq9bRvW8B1xIoa676JF+rZh7
7n6z+4J7Gp/eJtNzOof93ziHKE9FCnaklJBVK24W181W0JSX3n65ZBLywuDz
kW1D0lD8DSLr0IE2irDFnVHhSW/mjI0JLaEgCbiHD3fJswY5lphEGvlNWamO
UqerNeqvhUBQhvZy5Cvl3llSns6SD3IvH8RZUaxlBMhTaLZvd7UZHUx0pchI
jc1zXNH2g1N02RjhYEdc+yksmpO8EHHncKUx0XQSfO6mPBbfzDg3MDXsdfZ8
wLESNW4awBn7MvucBCPxbx36WLrCC2bLcw6WfeEYPco/m8ATaLrzVVV6fjvP
PyBZwoUH5Zmo+8NgiSMu0ThAqbCkBWcbQVfl5bAx+A8ftOsulXRw3zwPhrEP
M2WXTizVQJPuE0OzccO5EPKKBKGWa1FB3+20X2Wj7bWOevSOiN4q4qhK4lzU
31iR9sapYKpbHpyq70xV3MgPPY5R3IbHmLehXYOclbREAmhKYgxa4z6k6Hf4
5jBY/qnKFI4rU7BpKmWPFK5n7LdL3VYuRbh2JzWgWwnJDKtuwZYQEXWbat5r
TfUd9hJvKKbFiEHvBAgtyjr1lTYW4jFRzt/a9unxUc8xJoHBldPmayLeOPy+
KXNPu3xGIimWHjgAJfEq7qQ4DI6z3rFaItmidIWm4WicEgCL06tN8C4I4Cgk
pJ/TGTsZxHLAPuy453hHrX0NQ+gEsuHktut7OunffZe9On1Nxv+vZ0fgHjfr
qOGNjlJIbcML/lv8AjYazbZ9sfNi54/ethUtYNt3ekyi6W9My4YDTjjtrPgg
+eFYRYv5owylL80/jx7igpnWY7Lrk/fKqP+uQjwUk5q8SFVmhrDfEHN8LKfA
CU6kbh+fkWZVvTcnA267JzKCIJRSt76oIr3GEtWl3l192/c6KHiShhavQaQc
u2h3ae0nrNLmlSTfD880i8sc8LHDl4cfse62xQdDBHrvljkEykXhiOsvQ3cn
zYtjUVct88lwUFb6ZyJy6VMxr2rv2eqMI96DUceWUacSAFpObAd4uyMjpPOu
vhWiujg/7LZ4kExLHetuOypwakYPWiGlB9xtiY/KEou1uU7nq9ujoEXG1TZ5
PwbSPLXW5GJBSou5Q3gARtas1LQ73QbCt6hdpzUW497L3TVggBtpEIDJ5lYs
npvzDvjBFDpjCan6Mhe9DBF9kFpAOWEZPU+UJeaav6C69tPdl4F1IU1xtuKa
nhCo/SmUlUsoQ4bYGUf/MXE4h5qhDoeuNn03gbvR4O5rVuJqKFb8e/T9Nn4L
4AUCPdRqJXMB0DBnOT0Gokz/YefS2mTjYLj+QxmHpiVBz3aY7MQpa+M+ydWU
4GEjDBGBaJt89W3p3xGWDsqN06WXFs2dmkfgMiRk5guNNfNghZlpmpb7jKm2
AdLb7+q+1CwqFMKhYcc1JdpwEuV8pRxBAhN+dsN9n9LZHX0PReL01a/XOknS
NVILBPB9M3amXF4gBx/wewPdIGq4zIEfdRWq++P0uOUoWdSL/nPWyDSbsFsx
5zqNPSwYhhZsscK/ujXaQW1jb/MJUwb9f02QDdwpvFkM5hWneE+6nnBg9wMz
l2LF4VE/l4DFssBDQlfo08F9p2YAKGADHushZuX5YIdq4YJ1PHWc+LvK+7Ty
Hke1szSkGLdaKKmQTrWm8vPJNHne8S/hNRhsxxgLY5Z25xa4uS+6zamJM1ug
kMMvvl6X6RNbYupyO/OyjfnStvTrjRpPRd16r9KB9dQeHdi91uDRpRLyUDBR
4+b5n2myGNht7IiRHnpcSU6VYw39VQWWD61hTN2lIWzUvLjv7Kf2S1RLvxqC
Jl9JKfZXMbzVC5iJlUaG730Ms2jkk1n0hqpVIU5PT7iA+YxOcCtxo1emZL6C
kvmz+iv2steibl6RiiDi1icqCPFDd9hHGT8LAtiWfPX113sfP4o+tUTEiqON
7T0y5kl4STgFL9PILptMV97BcQ1NhwhE4hVy7bfEKeTzLyOXiYy0qFbzeFAB
y48DLacVRYJO/nQNXf7y+uc9VB9amSvwdkUExC/B6rarpkEWsvpZbWw3BZeJ
SaMjEktEYMRFgZGXma9DgQFwhh/Q+MMjAbYgb1iJHQ6eZEnwxPw4naUe6zNi
g0RRAAGy0NziFUtiADYsYhI/71/vysgyFEXdHQ3dd355vHd9/v2/nBxd4Ta6
b2/wvu9fnx/9dP3z+et3b07kvheD94XlkOvfZfufuA/Lhvu+sts+vnT2p0Y9
uhSXzDheivi/WT25xtUk6KFRvd5/uBnBPY15PDWMzudJ9yIbfD/r/jerr+Wn
5Psz9vQP3i0/vexPTUrm7Kd3y8WBu8MqRXfrtXhucUwnIj1fF7fPScQM2SIG
vUIFNMBSX2bdMA+OtVb8xgEGk/JRn3AScC7Yp1PeSCNmn3DvnzJsA0pdx1vs
D2A+I7PqIDkez58/l983kBkR2caJ25mzBTjz6oN8l3lMtuXnoLOKD7NFSmIm
ONAoK3qrk7cOjnYni0Wn9+6rk1npNXzLSVIQVpb36ZmQ+jNli0r4IZOGrGQr
/xua43CbTaSRRDUXObG1lVqqbdZ/KHV/uJhWUqXCw5nU55IElnz8nCMNyPSB
LfZYtn7ovFQjzmPWMtP74qDhK5MaPP5aVrVLE8wye7fJOcbv+O/Vq5/3ro/O
j0/Pfrh+c4o0NTKxev8ZG02eOjmz587/5eTqCv6ly6uTN4dXp0f21N5nPIVK
o/GTxmKHn7ogGjkkfevizTn9bzzCfXuoF7ntLkB01uU6b1sI0nbvT3AWnR+B
BSkkegItyok35fTw7FArVKv69UWiyLSRJnPS5Nx4+KgWWCAo6EIf/QJ6Hbcg
iiA+TEDi/WPdIGhCDHJgz0f60lH2WCjd6UPp7/wwAy4B/GS/2hBdjAT5o0Ie
02/VL+VTI8kYndVrZ2pDhOcpq6wLoMBqv9gb35TLgUUXFtEucgHpxQlRN0gO
VN2tndA3APW+BK4yRzXpK+JU70c0GaRB8TGcjWi9qylNawtlIbZHXAT1bVM+
QNd9nknrLTJgaLPoVcyyQtgGjj4NCi9qVK2nbUaNY+0CxZXKyT5+4C9IpULp
ewTqKiYyk9Z3ygGnXs1FM9Mhj8Fc3/Pti7oUbO0la0jLm9lYlkWaTrXEEi1m
iqJSRHYV159Dx+dCFNgyr/Ixq9VTW316ymwA6RcmA9TCnzC+/RydzZE2aJA5
sNelNTXdzCbW5aX79JK55V0jPlaHXyROhjp6HLHxIQYu7RARyax8XyT55FHA
L7rMjXsBpjypJoGAs1MwaX6bx1QWeoOgnMrODXriE/DutPZTYIafFXJUsA1w
I+BeySyQanXSj0nz6ZRnT+7JQLIkeMiUx1L6VkkUB7NkFh+a6jB4zmnGiA0a
BS/uC8TuJm0clfWeJ4ltS+iBB8YOJge/q38HSyQfidEOnY3Uu2O4/kjbHXuC
911kUtt48B38VYxgWfguMFwVwL/kVrL4Bae9rBPrmy3MZKQ+hbq7lRH0VnoJ
yeQVeS6tAMhqGpg2LKO1lWpM11f8I/IPFy12VJwUVnsEbuwsSmvJqo77wom/
eZLP5BOg7dVy4Qsr6YTLii5ZSmGMOeEYAPPNgWmY4zWqACp9Cug2SX04DOUZ
w1xQfT3Bteizw2uQoX/cHcol38/hf+Rvms+SWDANn9vdaFkxQTARw1hX9bwk
NivlRki//aAFg6XxNAkOjSQigL1Ny4usXXGeSvQ1WmIpWZ+Qnnh0sc4xYcys
GXIrmRqQuEzBUssizxYrS4tCe6CK61poRoxd4Yo8odBJ3ONO0xxtOdJlhFy2
MRrTh/scL0GuzcqcKCi8Vgn2pkAMXZcbbbF89Dw5FPyGzpF4gcnFJaR9ObqU
UBjMHgmz/lJa0yitvjEaeAnXsNIMRzyvdIc4QXOXVwqPzxGVJo3ig9KC62lq
h0dvTq5fnZ9/f3gheQlwLc0MlNA53lJlYVbOS1UkuudM1tG0b1QNjgZDSyRZ
OtluxysnSdfx/IS3sO7A8cOmzqeuOx6f9tOSgoYte6GoCxPyzP+0AYm93snr
rYmmZt4vOVkE3iU6ZO2anpi3ikdi73nzoBE9198G1n6slokue9TFI+aJpTFE
VexMgiCVCuFoYjlCOiGb3AP5ghM5+LI7pVs8UArxZGH6eWtcbhQhp5iVAF5E
r3pXsTxXeJ2sztYjQzKiASjIIQXoCWZle9TZO8vA0OGNoeJk+V1TJH1HY7Fp
oOtgPXnEtb/0UZQJvoLC6G1yj1z5OGxwM8mjdPcl2THXCXCATJJXr4Z/e/nU
S07Pr69+vHjH9eW7L4l+e/IdaCJEN3vbKH6H/rb5eVaxrs/PTlTny75LzbEN
t21+4fnZ61/5Rn7kgtsYffiS/9v90h6zp7HPZKr0wMr4L9qPARtPty6y7qLL
z7LYvNNb2SfceZ5UZeGjZJbM80VmsS+L9Q8AYQUfFR8bErLSmG5FLDsu9sih
ZYT9RXLJaRSUbiNR0k7fnUGgFsj6hJs80pDVq2sIoDb0IQ6wVYYaAmFjzSUr
wYQ9FCkT0Ur9vtKF5woa8eTvadVTg+Ak6BtRSTRmdZtjikCcPSCrMWCUJIsU
zhiMySpubD5LpN8c6Bh6Xz38NSPdQ6ThAHCnu182F/50f/wciYxfw5Ze7JCK
00L2eqOODuhTg/aVI5r6w7rbGHh539Sru/uhgOlI4HGC9pHorDkCJDw3RUIN
J+C5gQSaJyYvFam1oBNtEcZkaLN+aatBbvM5E7b+Q4eh4JUFK8lU1NZonYZo
v4xCCztzg1jtIIwJeunSe+BQ02bwNfimaJtrsV2QCyjjdcMLMjBYTY+WL6QL
0eWHnyDa1YKzzDnKAp1ea5wjDIbOQJ2qGlHhp6PLN9bVryn6nXmt9AyHSRcF
6KQL24875Oow9LBI3jgrDjyc9hPHRZ8m9ujsgWRNOpz/iSXxxTBJ20THUjFr
G285qRLBhOC3JvYUGPDJqmhOHvPru1XeTEfa7ZY1tPukWTDUNHnHxfnr198f
Hv2U9dFgPBAkVqAji+V8ISKNHyRd9WmOQrawjSUUSIqHbh4kdhvIjfvZ0eEl
46WtE+uEOGupyRlSWYYOuq9oxYvVRo5hOKWvIZj6bmGfU5PclrqFNem4F7G5
JcPNyuO9jB+obtvr24EIDz9we88Q+ZdPeWijkfQkuP+pI8XDI6WvwOE96rS/
98VsESUKyLCd0A445YrRBbTlxtS/Ar7gNs4oixywHi2/a3HONDgdOWJ952cr
N8mZmEuo31KP1MYi6QRxmERv0CUTPJwfBLs7JP/BMjQYErPkiqk2QzHyYs3G
L9THj/FLom5NWjmeOBFQMGBIoRZZsKjtA4HOMNSeyuwvftwQp2IWcXypXOLw
6Or0Z+/LD8rhl7tDOqV/7PLt4UWIAESP7T35GD11evXrwGP7Tz52cfL28PRi
4LFvXn626urXZYD+w0J2yd//klJ/eGAr1WPD6pOY4L+gKnCdRsGksdyJEPaW
262lAjqxgdghxFTtACBgJfOSzVkYWiyOzupqHF/SVuQCkRK+Gn4WQNiNdRoJ
LmO8XeBSWT6vqzutoCAKM1uCqDoCjZvUFGaZ8JZrZ/uO40gqtNC+lNLFHlJ0
iPZYrgxRV2d9sPhDt9F5aV3Iy4gLwCzykiVxvp7VeeiEU1R3+Z1heTuyO/Cc
29jdN3hoQiUMPqqi9YkqwtiFRCAJGAcanlzF4pno/8TsUVTJWNSGfY5QP1wP
2O/oTBrouRBS6Szr5++eG1yE/u7pIe98R3wbrvuZvFKnk+yg1NWD51R2bxT6
GHGfeYDGB1ZL/TSDXEPspn4ajfnxXAA1dnJoUuSq4LWhEpT1qp15bd2IIbYU
pDOiQiBC1zN07ilbgW5tc6E99dMmlQZXzUPJjilg0qY+lT404uHuGzq7BDJa
SnhIWtBlQkdQqh45W80KmNsPbWb1UKBVsRBE3XQ5LC+7OjNmC2Zbm+G9zCwA
Ivlkdrq6Za8mNbdfUWchD0DH7tcF3dPrKLw/YOcbwpbrJcnKZ7LyYqGAl/lz
LjWkJ4U2WxQAuhAU10CXWRUfisnKY/pvQX4+gUdupinMRDsh41PaYaunutIu
B0QGOmaPXBF90KBhYrSbc9nXxVSa8/5kGEVtjNbq1RBm37SGw8TRN/UEFnfq
tmseY6Y4C2xdMkQ6M0ldHzNo+EekLJHqm3OZliiLLWlYFzVM03dILFWSHa3r
Avqy6CXGekgpcQNXI6jFZf6UyuNDK32o1xYW9H16OyfHth3KGIqqEl1Nha4s
ONbZKze5u5bRXZfT0FNXzY/YEpBTIOaDhJIdZ7tJOXwzBtLTiY+kARgty+cW
9+uWo0JyarhsjZ5ZeNEfFFA/K28LPlFsAkFNRNjRCop0WlmFMmDGiMb/aIu4
JZnaxsG4IJj+RDfp7XSTYJl73jXWSqPeTlm67EZoeeusz6d4tSxGjlM6rpWX
WnhsOy3SEoq1cuV6ZdBqWQkRcMdgxWV9WHo4ka+IGSvE9MZreePTpld8Y2p8
ibOmb32paWA/J/bXkM4ZPVNowG2yTp7qmlGdp/wvig/0j+XLZbN/Lcjk/sfQ
fGHj/dqKIb6fr/THFRTc3rhw+UlrMlncnkId/djRqePHump19Bs06+BwzflE
sDEl/YagBvq6A+XSO9KCAgRQgOEB2mE8F3+wT0KrqlQzsn+zpp/FK4n/tvro
mtvJcqpXtlPi4+TZIeDHQfe1YVW07xUNDHvF7w6XXqakzU7h//K7PoHx0uXo
bnzvjnT3+y/okEDvhoQOAqY9DJjETT0vlhEGXxPt0Nh1liVxSWmHliOBamgx
trq0GP2IcaTNKJ2UWEigOYuc1XFR6Xyj3RByZLygD/W1oGPptCM6IFdmlYif
MNBomsHtkj6P/rFrpy8QRSjnuoOHM2hqd/fBsunijCZWaj4OJvBF5c2ztXaQ
88iKpble7JQgLKNokkrCp0hUWPFAoUSzP028OVBFE6AEojSK45LOFRM0A5/r
ZiGgbTWT4vDvS0vy0iGgNpV2xsrTOgs+BUkCsECw5to8FpJ3Rzq0DuPwbqcL
pgFMbleSXOmK7q8hiJkAGFMylfAT+hDVC64yqPgN/9B3e0ygL8aoKCF6BFjD
jjtJ48ky+QC4gVgUFBpDWxXbZx9wF0UxHV/WM1Ktq+znHNVN53AojbI39Z8L
0ouy1tuR2+yO9TP7x+8k9O6imdG13Z0nYK6cCYth9iGul1cXp2+xiGfnZydw
3/Qxpv4W2CaXJ8OgV3/T8ckZ37MXsaQsiWTGwl5ydNuOoE85n8kdu97Gl9tI
BA8xP/+BiOfptexhD9jWEynFiEtRuq5VpkwwcJzTlX5+x51aJJLfoJBM1Q+H
uWd9O4ggjBL3BfwvdVTS7/lqR7uRdHxfrPsCMfa6fb1/vQy3DiwTLvfEAi6m
koBvK/nIKqTv633GqfqM4iaUFzKsO9mFqoFL6wqrJw2zEEC7MurlQh9QBiw4
17JVXkPG/AYDVHss6XhS2EZYIu1hFXljfVurJ7VR/9wgffb3lq7Pe5dfDj7M
i5n+h4fp8vD9/hSn99vl3kObVd/w0DWy4699qbPPfdxbaE+MtIgV1PDR4lMn
1la8S416PSVIu7mjkISOZZE6GkcmXByZCJhzL7oYwRdKgEUzFueTwrM1CcFI
tOsjMQsqN9NofNcgVwVybBTAaknksC20DmFqCQukEHmYcTAsCn/G17kQY+X6
ETWubsSFfaO2SirXvmhTt6A7hdtm1SwYQC2SneRRq2BRWYExOh3K2LSDMTG3
ZIhDkC622uFd1CIOIWLnNGLni8RDfREDoimyqFMWYI7iM7u616qVvqz0vvaX
i2pS5saovELGpV+FyYhHED6mF3vied+iZZgx2nlafNj2hRD9sPlhOjrgOBZ1
jP01KDcXAvidqXDd9FdakC6MmdbXPs9eVmlvz0Gp2E2tVc+1bxtHP6dWoF/r
ZIQ+ZPSRQr0j1vPtoaxnPsjNM0AJX0GbId+JRhvvnxSGPbecxyE/BK7HfiiA
11UEQMHjcoVwZIdI7QEIkUi31aCv9xtkfD54ybSXoo46Sh4gqxKIx5AhER9O
qwmS7pTXU70jN3kqyBiunhCtIaMvrXZgN668eYN9V8nBZkg6j3TgSedyQX1G
u4Pu5bIT32sp84Q5CKYK1UAK3+zX7A3uJ8hEUxY3TZG/V/AAbTYmZedtMydh
Xc/KpgzJX5kOfM6IAsysattQf4HqdiZl6VM3JDQudVlxM641aifWrSKvVWnn
TDR4A5sI/6U5JOxYRLUJMSEvlY2LY1WfsPwNPs1pqGNs+enQJQQsw6PmHBoP
KzUtPvTy8JkLvsKyAna1/gPsue5Gw57Ce31XW+8eiZoNay8mlJZr5UxwKvlh
Eh8BXbYJET7WcUNg5FMppo67THR/T49AiHGAjHvV9SQXIv58U8zK6i/gxUWr
419Kh4R8wtA5FG0x8BGO7V0JjBZPF97RmyIcu3zTuHZcunqcGQLAlERiioEW
tdrn5WDQb0oHgL/ZGuXkgXNb0Xxrw659JFhecQ+ApM/ZkJ7HjcV463r4cy7M
ZOoBy9voPV6zk5iGz4GUi1rYJSqrrPq0asrLe9TRWeC0o1bPzJ8KEHrfyHNb
kU/cyoaxZjbdHg1bfFtq3EBZbDnMjZoZgYSZZLwHXfKcA6QnKeE3K4STADeE
QkM+RNkxIJNPcvGABZkK6tou71YS5rM26fEW+0pP/AWfMQbOZksapX1ZXRtg
owUUz6l81tK4u0NBX/Z9ucI4OYtjaKe1axMx4zmZ5L0n6E1f72fIlJFgrPOc
TSzAiEDo1v6ORoYjP+U6T0WDjq3HZG6FbzTmy1TZdb9UPBl2cblgoIZH0gUh
ZSXCzVtlTudnYpXmNhyjQQPZDRrIYQZDtrGi25808pJM/sTXbvZFYsnM9HLb
df9HEOruE6lr/mn7asYwqPYaChOxg+v7UnP1N5lMNv6uyaTXU5PJbu6YTHr5
t5pMno78ooT6zQk1ya/9svksTJwWplIC8rkOSchJdvllFrKOgXClJZqNWdax
x4xTIG1E/WW01CzrlYyKOaE5m+/t5+CirW+XaWlfKapUGoAFiUmCpUUFyEsF
IYiGz3wZcUL2siLHva7YnsTedXf8D+Pw3x96v/49/vs/8Stv23/y2c2/ypj/
sGHMyVPx9Pwj485/+o6/D72m/94/RG/qDyi+9If4E3/P3ojY2/Uv9Zf2+pde
6Nfogjnfo3v6ly5OTo6vL89fn785P/vPD/K3rMNvXjP7SjyczZfc3/t3/D27
FAa/67+S3uX+4G85s1uGPpTc5YZuGZjJwFH5P2WpfGHQsC4Dl7oPaRXR+KH+
pe4Cf97w/v73UKx04ARv+I/2RUfQ/oanOosRS6gBJh0LqotiJvkd9+XCd35H
jhf3QbksWvjoD0MlqqdYv9VHHlIq3CalQpSnuIJDrE44TkTgktW7vawRQSN+
tsNeKq+we6GTehnKYlfrzEcou80oVbLlNy27aiB/TbNii+zp+hROFbMg3bmu
mKZZJKlAh1odFtnNPN40B7MdDS5XHPiX6IlLoidcGNuaCEkpAO/V48o/sHFf
rRp8UFCwhh7S1cUq3cRogQUXhcXXu0CTKAb8abBJHE3+26CKZtG3l7j6/Pe6
WhK93Xq/nf3++cbHJET3kh+Tv5+TFrSqpjk6a9k75vE7NgI34oEOgjeiGwYA
HEnUfADE0Qmcf2YAn/HxjOaFdxpaf3sQBVotiGbB7i1by5H4i7hxGl7yfttH
tcOTGuDkHjW9RQvL233XfNtXF/IB+LJ1K01rYJC2uiag/7Wx/8WqlWoXBs6E
pzUgGwouztlaAbFJ5PfAESt/GxbnDfKs/h6Ctn+P49B/z67qJdnJYG7M7rHM
kqZNugbxT6+ncGVFjj1v041QWqClvOD/LxFrdGMLzOTvToUqP/i1Pfg1/f8v
9f++Vb6JLK+RFGf164rHLy7jeHS2/wd8cV+//DWPfQacYIEMcOLSWgS6FTVJ
gtdj4sfZN/zkN/rk7pdPPuplhV/CsQIZWyP0E/l3jw6tB4UIEab677kzue+n
NZvF9yNBjd0+ESGGzdlmSm5HzoP6AgExH2sDCmDkXTQjqxVjKFAXE8cAomTr
iapJASjifjz/JfsvAE34+TeHZ79mA0ATtlCYDjuIGGPXmgDLlpwn5D9EayVW
jskogAnvuXxKGxP78/Csgiu++vK/s3Ah0hq5va/kH0Qt2z2jHUczidiGy08H
bZPHexZ6AvjK9Prs/tr3tOCfBo37ISQWHl1whQ1arM/FZMUjHLba+bchy10e
GrTe+aeeBa9OWLF/kzL3X+3s7u3s93q4dQOTkm6nBvpw2f+ktLe3ruFwZv56
gA4a/SXmtMfDSnz0aJCqBU8T3SgaiNXPH4UqoPO65UbLsvxyurjQjhWS6RRU
hicgkxLqLkC6+cPw6nMjwHtuigFVTLy34cOhDoR2tPaViqxSSVTHHC/VIXHN
at8y0bpDqJp2dn51jS4R5xdX0NU6rX+k9ygJWplz7CmfaW3wqXYLFKw7/TFT
pybyEXTlhyiUlz/s2xdttJBPinv3BO8cXPf7urIoClMhLbXjDHW6pym5LkZ2
k3NJ40oflRoWHLRZc+jD8g2kQUBFPKd13VECUJpPJeolJSn1DcZ8EK4WxHnd
sFqBC9JAIKs5AkfPtNtSdzhk2JR3Vc3VOWwCkmXMta2WXChRV2daTpa5Zkvo
2AWv5isAp0EUWfB2APYlIV3iiy6an2TbIBlfKl0Onqrsu+xvG/ynG823TxUu
/OxHO3UIP26mPh6mFw7fZd/EtAQAWca1RbhhZ/DsQ9eUHaVTRsoKFJQQofM7
qy3Iedscq3tBz9N9y03h49c+NQvOgvMk/91+ChDcjqnTp7El8EAnbbX0w4Oo
Th5E+MZu5xtuAIWotd6OuJjemfYcxcKKxOQie+MqXIdmVETd2EMBuFCSsVO2
8K6BH5SPXTkv3BbX4ApT0xhTPuViNwAbNJzyh241Mw2k0t8P9YoMOisR5OsD
LbTOIL/I3LqDRfq2R67SCgMWrAM/6QihVis4q9oSF0tSKEyEz+TVUV7lw7ho
R9sXtXmrdRvDCtMC3OYTAFCO4oI71s+HGyGzbEsr0kmsVCqjaCBcSiH6XKao
wBSX3BBRY0KxmLD7maz2lbCl0CSp5CSkIXYQx419vQmJdJrQ0k7AuY4mVC0S
bTPjIiX46wAObnQiSucqIW/OaSKLq5z5VEbf6EBKCzxa+CYkrWr9goyLyzA2
aRq1Drc8Nb8+ByayNnZMUAEvM/FBbZNwmxZJZOlj2ssPAlF4DQdXUU8tS6Rb
IsLwHMsG7V/nH1aZ0Ep+XIyOphEmSVvZsDR6KZiOfClImDYpE2V4roSbsXHA
Xx+bROJn+ZNDvQX1dEFOr1CDBlvO+bCYB5mKeJrf/JTuAlFHdHkjeWYFQN6m
jkjnjZa3L9GgwhoryaGRJ8PMrDtNtdZxL+M8vaALxosx0JR6aKCG2PHNHpK+
kXg61H3paV1b0hKHjh86jJDWk9ITno5pCv1Li7zp0JXMqClow5e1tG6W4jaZ
4F2ke59vVKyVXjjVE60zsML04qgYzRDaqLzFDsxXwi3GoWeffF22BA9G4J9X
0id7rOmHUaT8zfHlGB+81S2GvXaBpodz42IC3Lc97iMz5HcpyyBP6NZL3R7D
XWSZ+EI5TVBYQueY5HHv7tXirsmnhT8gtGYCPqlUIvGntoc71jHxa4cRXcWc
G+AVAb+HFT76XgsvXZwcHb5+zdsh0JqxlM1AXfIs888kXQpM1iQpnZo42alw
6G/P7HNX7y7OPn30WD/LvcDguemZVkidZxCWh+HpQgB6KW1Y6FU7+dB97WdS
nVUqDZCoaTEjUxSOH0AaNefD1yeKevYBz+cYx3Wfvy+Afjz509GPh2c/nFxz
e6G+ZJ4hN3ftDU7W5q0E0oC8fskpHzx77WAibdRc1DcQ0la0x+lKamCydcKt
L/kyZ0vKdV4rpwToKxnHeSUKEL5RvStR/lFefVVWWBX/inA0uCoZ6Yx2woic
W8j7tRYX0maYA6KsFunDGZmBw0nb14NOBRQf9d+EZxewf8H4OxYjkr4TpIy6
6LU+E2cCSfsTw9QpFCYyO7l6Zxe4Wcxmia2vELip1SMSqYp6lDkX9wougBS1
3Ac6bGgMn2IeQj19OyhOhW7G3eVChVj+HKPYfOE2n56sSAVrnCxFqnwPGj2Q
3rXo0yEYOKBqnsdJiINXkrCeGIWVWfJmgUIOpI8n98rSpQ+g7xkAy+bvo3vT
olpYxF8Ytc0q4AdQe4k4UScEly6hxoQMbmOFbU2C+b5w9V2jDwVsiKfSfTBu
rm0dl5N7sbPX81HFwaZ0+KgIDdV6JBXfE1LgKMGTj8pOqYjX1qsepVlKcSmU
yM19XYi5eAYgbqqi+0LJLuoULRPlwrp0B83CY6NCReGnIWgCZtkq2VrKwnQV
tbRt1Q36iLNKkspq86fEOLXgDw7IMCfgTT3L4Y4i2B+9yKdVeuXgY5IXEtxo
Q9/COnYhbWyiPIVps0RI4hZ/LZraIHfmCJvcF7BfW1rZVu2Zqi14gcRkQK0C
bpzFyA/u2hbB7iTHyEm5IGualCIKk2a9HvEmsPXVfBGitXxDqEXZ3RprAtVf
maQJlmYdQG8JFeaGkVhxY8gqhvexaIuA+75LQsyDvDszZW7RkWESt/POq2Rs
kWt7pEk9Uri5k7EfeYzvC5/mP1zj0+D4LuWC/p0+N9+W3+bbzekPbVk0Pg7v
pSgHUfmvrbhQ27ZZA1rvS3/Vf21zQKQJIEuLryMgCTMmPSyd0eYanmF4rTWP
02JmbXeVE9xlKEjGvoXcJSVWfVv2/1JXdtuhqTcHfOXMKz8QrQZ7yw3ixALp
VKsW3DqU+xSnLost7jldTe7kF93WbTDvh9zpdYdXWJVYMD6GDggKOBvukC6N
cKU2ge663oJXqH1tyJRIQ8l7DUhbdfamtWpHSXXAlNbtIn36JoUOh1pZWi/B
pNDOsAPEfFu0bvHXpPcSaKfT95pV3lnvdKEHYqetdTAOu6+QCtUQx9or2nfA
5tMpnOLl0CEmpl5O2SnSrm5aaOw0V85Uiyq+OtNXoM7SaYU1jTHrKnd6l3MI
Vlt4dz43UKCzFQU6r+pqPa9XbbwTNADOkOf6s0NNGL1CP9i8096E3CjY30XO
lUMH6s6KGaAY/JDEU0ondnHSpp3OO3XqB3McrLxmSnYauOtNWBdTV+5Fd4Rb
MctADyI1pFNScIiqz0LfDoAm1gdKNbe3yuiv0VQRtob6aNthViNn4dXh68sT
1voL7sgcD4MeJ00O/ZDMsJKx64pzVxEPisDpNp7iQ/HGMTH2ZJm8HKsUSdZl
08zpeImXloXnborhtbXc5FWFk/0yLHK8XXA6SzmJdUX/h2hgvELCpzhiMlnZ
pzw02y6uMeWF01gCaum5uerXc7CmGfEqIIemKWEysKLCYxDGLt1GP3gy8cVa
DpzjAzlby5EsOLoY8zjOFeS9xTC7a+Wc0sFnPJ7dzeobdPCwR79fZya9VQsO
Qjskm8N6SaYpirPraAZC4A97pulErUBxfVfaMWiNPEFUuFDI1DNe7p6ZQ4uK
dLVI2JlMR+cl6ZU2yGvY8ke1DBabnYqmaCBCg6lyK5XmcpKSdys4Msy+UEp+
2A0aT8xA/YL3hNkorlskxTSiukQpZN9IN/P3SjmjINdsHBdvj7IJrQpy/egQ
uy7YwOuqllvU4ZiJPPT1hVpttJrS0MCU8FRnjPBI9D9kPYqDxt12S0rrCX5p
bJwLUw6ytOj49rihGiHLKLslllQaeuZz+NR+GDgUOpjl/EruzicfZcvCHCBT
dA6UClacDRlqU40YGidBCe6AHNcH/rZjoe8whQm3s9ou7GAbdZcfnIY7SWsm
uHa9degGMxd4Eqd+Ru3CwORnRc7sHdZea9BQsVJzmUw+c75bVe4TS9UHy8qb
AVOlyDSrE6FJOictO3WuqeDQTMqGfb9S845Tm3OZVN0kc0Lk0BtflkrItCRL
0KpHUTphxy3DvaAgSeDwVuZ/PgU3kV/pAqkZKnMicpTOVz8VzQ2vEwvDgkw5
3gmONHC4V9bHgMR0vS3Cgs9mjrkuWq/887RGKXSdYTC7QiG1TvpQ57B80Vof
IHyQWTqfPnFXYfsK8U1FhmgyJ9Z94KLxEQM2WliD4uL/0r7Zfxwrg93SqkJT
qRFCt/JOTjN4L0cuCVMOY5yI1y7hxf9RWmzr6hAfg9WvfbfDa1dLa5gDukHc
voPp0UglrtJSLa1ESlQfrU1rcavDgR/T2iW+31t2wg8fcUj+/2bvXZvjOI50
4e/9KzrkOEcz1AwkgKRMgytHgAAk4ZgEYACUVuu1cRozDaDNue30DEHY1H8/
lU9eKqu7B4Rs7eWNeBm7FtnTXV1dl6y8PPkkvFLmUssyX6XBjI/QudWEiBiq
MVyQfLjepkS9YH0gQT8qlYq7iC3nkgaMkBu8xMbUD0en4Zz4iYuzn14Hi+Js
kKfX3x7v/bB39Hrv1WuhC9WfDw7DDVug13XaMHm1lZhKM7KD7rGxTdUjNAPR
Fk2sWC9hE1L3mefiZt5cA5GB2O2uxFXI1uP7EgyX5lgUu6jItIq9VAJExGur
2W0dorTP9Rx5ZBRlEtJ7lQOaMU4Tzk53zRXP0nJFGii55aqwYCZeLEHOw20A
n1bfSoCD3rAu2cNOo9J5FNeu+5iq/95eUzkm7Xn+6Z5j1+xLTXuNKpBrQdBA
naVLsuyQzhfJMmnaXYlhv6kFygMdtGQBeEu92bkBZsnEEuDevF7PpMeFpjEm
vLZCnGx6lA88dfeekWhE86KhueQ+xH4FT5HJlfxkwAhMKmg1YTI/+QxIFUq3
nysNaFo/hp2VvlawLgyzItdgZVlzZWNxctPLiR5FXPpaaGN5wzWXQ+/9W9jR
TG1IHn7Wrq0sCTz6HvaZQvXijBxoGJqC0opN7PjKCBybkK9NVpW6jzSziuTA
a4l380i4AVDOAI2guaqL0EHaCVPZlOA0RFqrnhDvVWMIKeoqkTueo1BfDfLt
AWksx5m/N8EHbz/detr6ujw/p61tddfYLoQ8LlCBbC0Qq0S9gikUFSDnYeU8
o/kd/H38HahgWIyHyL0Saj41U2mcPOFKIycrE0L9zpXFPIjMSJS46MVH7EcC
y4JZ932dSyX89uPNQoyMHJNgzd3FqcNOudz4oVpDIzIfbf5aiZJEISitZ9a6
efI11qi5Wy1YS8Mu2fKAbq3iGHMLMKkDCmSOyiQZzFvemVPZO13eS+cH1tIg
YJeQ7mcdc9/MCm/E1uxYl134Pc0mTY9mrB9yaIQxkg1XMMU9y04bLxXuiW9e
2Q8YCdwYxoctv2alnwIhx7kbZq2FGT4a1Xd5LkRDbLkThQXq3tbQGslL45bz
/FNxgIxNxLYl68vtjiPdXvSIoF4VJ8fDG8OjHQYnKIwHhz8c7R8eHX970hAz
QY5+xSpulDOcARjHBlqT6FlatJYxAoaok1Ew+CJVkZmuZ4zVwNgAyupKR8So
hncvWhDEj0kyhlTEETCaMKfVPHwkKMdkkvkC+eU0MFuJe65MUT2tQWilYmRz
/9ghqg6mTz3f+toe+u2Lr3ekDjZ3xj32889ZU2vut/SQqo7cYGFowmgzFF2W
WaYyFrKBtX3gQ8GHAGwz7c7kG0NnvrRr/AFVzIB4c3R8cnb5w+HZefjr+Zu9
i/3vOQZvOkDoAGHDrCYsbc2qBsV7Bl63afHXGP5RmL/7pugdpvwOahIuTCJc
z5LQ0QPPPguHfPL01g4PXlYKr6UpW5qHK4CyDVuQ7vBLLu63NByXzg+9vrWk
nXqbJTG5Zv4I7kx3IXS9UvTZVhyBntTjquiOxEGBEKFEILFMMWLjuWr70GYZ
3Nip0gYxncekRRbLFlV/QEuEkqRTtxOmICyPaqUR5BlzJGcS8LdAfrCog8Em
RScbCKW2CjbLnCbU3JwyzK5FYQCBEyqLBEsGMND3eGSBQDhiOfeEmlTq2UkO
Kg1si1LJsMHG4tjCYRAOFb0lnijHDcWo5tCzXSlOl7AKRah1F1XU1iYe0UjC
o5DEFkOPsOwm+eeSxs1vJGNxyeBCfjdVyhH3a6Fd+LzuqrtA3zrAzgCccq5s
VFYcSQsjdxFd4ZNfz28cjZVpL7ZCUPzU8V6J+w1cc5H5qsFKBRB+UjaFMaAi
5eZ3M0X+ekAIVVgVn+Yk6VX0khfgucoZ+soFoHGH5j8rKRMi8UuJJjF6dTW6
La36CYL0wQatGCkXlvpn9sWUIFdUzIwX1GAqhEa5CfKZIFj0bjKSlC1WLhJR
mEJKW1jV8plCc7iqP2vNCGjAMCHnngmM/NECLOGBSagIaVGDJozR7ilTGA95
UgpIfMmgCousjY2xpiGMpbUIgzhHgsF0QXyZDdozLANeLatglZarvGm/EzZW
xj7M4hAobGE6bNVsScrGkkwT8RQ+7b2QJTJVoZdTbUnFYrRFdSbCkUuvz5jl
lQc1SD7G+On+HtJDkHn74QAiSnOSnSJHEe1FqGe+uhWOo9e7zWFs7qMeCM/6
2dtdWQzDBJ+GX6mgXLeYyX7Ux0LD4/BB6Z9vHmLkOLdHGy+TR3/Mn+Rvs2O7
S6BYjRe8zr/Mz7Nql0li895XHHDtd5SQk415F8bq9ScYVb7Je9Tw2z5t1fzH
7GwXnesYRCcoPtnma7T2dpPUy7vI+5IzZldecXrJi6b3uk/NhqvMZ5KkYZFO
YXJJW3y9aYf/kpdjmfK7j2mS8i/ys6zxrRi/87783JNP74fbHhRKjSFFohXN
GCHrO6gHt5Kc7mSbaEL3eXN7SfN+HxK5TdB/zkok8EIhIBebOiaD7sT2Ku/g
BZVHEb2OYhqEcViTm0ntLICkraISm4kaGW76QJ0lV1mdWSG/kFideTsafNbQ
9znl2DKsqK9lUhs3Gi7e3ZY1nFzkNmnjc4ReLW1YS/JFVu3E1ABOfq3f60yz
FACetQ01FZAtZvlicv83sflZ0+fJtEoePGSccJjJkNz7mtUcHax1uJjOc1wV
kfyCqHhj7SZOFuVJM3BjvaF/PnkFeefrlVWN00QUrd6XVatmEWELpM/d4pcK
eNph4cRFFEhq/658Giam/7ZapeFNNQ2lvDtASxkHFQm1lOZsiUrIJahyG0SJ
nNzK93CR43skyLhvo3YsYQeCcokco2YlFfUjpBUOlb2a6T7TItBFlkZZSw3P
4ZULTgGxGnE4CwstxkUJc7YIeM2EdjFH65kVPkP6vOuc+PDI4J0uUKc82m1W
Hi6piU56kDgXJOulkqrnd8a8u0QiqbfGKHdnWd4AHyrW7RdczTvcDJ74zFdd
fyASoT2X5EvpOa0W6nlVwn0eZ0yzr+Wb0k+JGze1ZpnNgbEIJKc3Pd8sym4Q
1Gu3fRdkL9bIcLDwz/CcDNwGeU/N3kN/R6TkYSOV6o1/yj7l4sGsfIvuHU5v
rpYojswsJm54NyfGmsVCUZnK28JeXwAyzIFNBzTHmmTqeIfgdibpip3CQOGA
4qAbfKSwGsu8A1UYzwvntIcJDyoredqYcIc0UorCa/IULsxMIfKnDtY1MT2E
vwwJs5exms2odxsJIQYX25F4h8O6tZ5LRHYA1xZ7hGzUzFtvFjgVEzKohWAe
ZkrZTLZMJ/K94WJWqSEOecvZIEfVLOt9gp5002FT1fHwFTe3UbjLxojEcI0k
IQthU2RMQX0pgOJHDXpokivpD9G7oyyoWppKG6YKrwhdJH7slIpHfbVd/iMj
ziiYaQupzMDCv1efv3bIwWVrZjVB4rL3njaxqTxxSOrBY+wydoBgcj1adj9S
/eoGgqyR+mQeVz+SNRm/zTgTJhHhR9m6NieNJhHJo31DEmoU6QLixtcC0xxD
S6oGR/93c3FMi3dAuJNaEySL4TnZtTOljFg6eyaTRgCt0buMdT9OAymRc6GS
1CqyIZm3sQm0VG6Q9HU1YU2hA+EsVU7SbQVoneR6V7MsedfF7bp+IFVX4r2C
FVvQebqqJo1QQYRSVYh4FsmnrHx0nitCnxwfChNHxlg8TsAQ1uZkJqi4BR9E
0hUXTZC0m8ztFS7YKj9w9DTFOBBjfLhHAOJxKDJbPs2DzbwXq02xKvBWamS8
tQBMJOzBUptFbNF0XA91OpHTNPAc+O6XgV9JWJt0hmiauvu6W+LOmNNabS7S
DV+lyIxzJnyANiNsvOIaTvBPDwTsmmnP7LRBt2DwUBJe0Dj90nkHmC1Vui6Q
dCDYuNqLUkG0WmFWzTYulkv4KCVI3yS51s9toeWyaUlCPZHQsbI8vS0MLeXy
6KqXYi5L4hy9Xil8rnWapIk3g/BJRIxjKEvF+TDvZoalRz/Iu/hgDNbCGMDQ
ckMu1sBMd/oM4t2JqCpZzXqjKGqUi8cUsBbj0Mo/AOIBY5dlpiPcMXZAET4s
Md7PJ+8VUNzsNLFsRBTerFzdzZfvQDEKQMqGNJ72pyXxbwg7xKHqarXm1E8T
weJXlLNa9HLyBJFKtcBKikiQVsxGzzLj77I4SxbHUj8DDVEie5qPFH58KWZI
Q3aqEsy6Gg0g1nhxBW1cChTDqluvqDPt4cnaWU6WzV6tNOOCJe7IsArsRazX
VzMwkO2xIewdDZ4Xh5C+yJ/Hcd4CQl6D/bMwG8YlQYntzYIShVOiLG2cx1Gg
wksc0bZlsI5V7PwYH39TeaBAKsdxN0kpjj8oybIQN79dcGaTtJBlnzp0FLEV
jxyFhEPEGCv95B7HD6RC+/R56JhnNLalDQ26ViDzwMygUGBakqMjcwYEDSC/
f8weJzgfWucgvDt8Wyxr0XjpTbly0FE0FnWqVosCka8UGg+lpbWcEnuZ654B
kylvSXY22ms21hSmrfO8tOPctfV53fSeYa8LZp7mjDFpwrE1YbMOnvbYMoV3
K0rqSQ1sBjAbLWGHzqGEJEjrbkGyi8jOvBlLnGH4TajRWrmds677Gw+X1PVN
hXr/yTWOgGJjnXfourYkYrpztH2SBR93uYPoakwj1Wr3NHx4zTndQqtBO28t
I4dZAw3SyqKS0eUgmjGliaDEEbbgfRZXPQk9kmDXaxRzUxuNjgS6OxahkJXJ
Kp3XSGy7P7A6WZibj6MF3KIVITZU4i5toDKSl7SwALaxRO7ImKne0CRG190e
p9Sm0AuI0HhFzoeIY1Mar9V8kRnzAvl/WECYw1v8kuYmLlZNR1VEbLcU7s6+
kF9G7HWZh9iduyL8HZYGprup3ImXMK/ntuqIMlU6x7TAeIfvvtOseUxtmxlk
7Ud4aQ/b51AbriYfx68krZc9vInTssuAaE30J9wQ1XV0CcBBOQ2KSyaLj0pb
yhFhYQEDaicuqNvoEhcajQx48aiQyIbglIvmZ5JvkYH6RJLj3J3dliOva47o
SGwuaBy1OFAhetTHlAiTVAYbdx78bGXcEvpwZqCysC4os9EY2UhqUcg3/Ewi
XO4jx2iQA7xCWZcSq02k110pdSf5VBBORkf5EPeNhALI1ZvxS6yleZskwnIh
IIJqcYNx38VZYkWfmoJgnll8KUWDNuy+XKaZ+t+YajuJG87F6roD0khNiPqL
DGmSvEG3bjD7NQ1P3gjiju/cBVnLjG69JknPKijGE4m+RSOSM8huTCR+VH78
mBsxVmZQVkv19/KMLLZoULf3KT0+4iXB3YsuU0k4ddZ+1kitUPY8hbMNFMVF
p4U/FgBUd8dG5jDMTKnaLV3csWQhnaCQgEQ520teEYt9VrVAqGbz2fC6IECK
LpQ9RxSQtbjR2JyJEb8GIC/58DhBJKHDHo1Bm+ZmIXLq+w4mNveqrHL6q72s
ORh2jvNMpXSDCpha0I9LR7HROJ6D1Lu5AX9Ug40u25CQ4gKVldMlUB5ReqLB
YeFQ6TyAWG1NTyDVU+UIqpFfzoeOFPUgDWfD0hDrTvQvzEKYrbxXXVv6qkDl
1iwtN2qVffBbPRi75pRN5EkKVDXG+Oh3CcdJqRL6jvtBpLNYTO6da86qih9p
PpqPECK0GeMJ61n3qAzEb5R0y5BgVgUGkeP51aS6EaRlofHMCvZAFhNWi/p+
NrpdEncnQZ3S75NzwAde4LwqsmiOYAYgSgZY/zIjtR4r15aqKN8CwGEkaeTP
YlFVSRnpmEGDW5IjjAfGEAnip9a38viCE96KCIvjBa5DG+HQjEbK8BGFnC/W
fas4I91Gd9iNVM7iOI25kHSjumpRJwH15nTCw07fwosYYMNwpdex2NMjrS+E
VXP56EKKrxqAj1GFoTGu+IsN5tbQ9WRd37ram2KjJgSZCfcxQeEmYakQbg9k
A3U5LSh6hvqbapVOiSbiSjZuwkGDjs0T18B6QY3BEvDhbEuaN9cViH2XMW/c
gPeRoki3P42E+mGU/zc/5687i75nbQFl7VgkNf3W3QE2AGYmYoqPKHdbo9Oi
lG4wJAsm9Sd0ieH34E67EuA7vxiBx8U9n2aNEDZrc2SEg4ICMfll+IVd6XMX
h2uHHWNHQC7BKwmnkCeOEzIOl3oXlAzyOYYRoQLQ9SBjVXovJVl0kZjirlgm
uql6e1gDaVkKnc8m+mtidWXqWPBRDgtDPY6dldYa9mD6dK/ue3PfLYWm9ZcZ
W3LMYmaNY0PT6lbV9B7kQnOCuuJkNPz1QECGFKtxSYuI5KaGDCRbZ3M6dGp/
sI4JyxRyzBEVmfryMM7+N8GWZO72faalz9J/56viHQYHz11N5qN3PPuKftXi
l4XSfmbIIJiZMEpggC4SrmQ8Q5B6o2UJhoEFVaoCOcZgqiQjLBEee8t9WqSA
Q/Z4yBNgnpVGGtx26GGrNTJCSPqSX1xTLmILtZKuxgojStWW9PFWwQu+n3If
Sp0p1XykS02GWYkamQNUh2Qs7+AAv5RMM9LG+LhEo1GyXVIauXi84CSuKwqI
WaMthjH1DbJJk9S4x1dJ7GJzjXjBTlcwhCWJXrJnpSOc+D4LB1BB/zRSR8ZR
ByFFoHw3Qij2nRBJWJaFw8DKzTSH5O0Nqht/wnL0dEcjDeFvV2TcLEdJpEgE
nUxiT+MyGT/LaRm8xb+Khcl0Kl/dSxaMunCo0g2daeYql54NELO5QUkL8vvi
DB1HWhY9CyPnqm0jXhh81MhYRkzzs3evJLSummM2K0kyhaEMq5XQEFEGp8Wa
8Q3UmdAG5i3cxXdkyDkWm8GWSVrKgnFNFqFsmPDZGcUKz+eTOWHEXd/Du3T9
pry9cXgYTgAg0XxaSbFzqQbCj8Iek/qJUg3MDG9qUIqoZBdJrsfA4krWm23t
jXy4Aix1d0qCSu2Lo4kiSYdeYaD8jnJfcWycZS6fnq5akg84sVLRIRlUiXhw
ohSFGCBKryKYyNw2SeklAbygjB6/0UZ7AJTUml4YTiRXUXFeCUEl207hGcpy
pkIXzMxD4QQpbWGN1vmTJyP9efXkiQSF4CWJS06UAoq3cbYDKankdFgKng96
rxcsIiUA2g33b5I8rmfhpUCo0Z4gVx3L7hWndTDIHyo0JUKhpMZyuQZi7iWs
KkCPBA1OADp4LWS/WIKOLRIkaKsbCp1viUw4LpYQMZzfwTzM1Wzk+iz1Isiv
WE6vQIMZlwhS+WdjsOLzNMHUbI1SrU4f2SpkQ8lcN6aLDJsnT2wl6HSx7hSn
Md87PuAR0FwaDljxcCwKEKIRTn7/bD8IzNFtibP0yIs7GfgFAM7jmCNnNJF+
QD+v/YoXqHd42mFtZbULPwWq+QF1V1tIWmUxulPLgLuYusCuuQpgY+3o7Ohm
Ym74iNTuBX0axOM0y+R8hRFTTMQ4G+TLuRa/CDM2v6PCHdQfHiBpPaNVKC3H
5Zf31PlYimOXUeL6r3p+vYKKfbW+qTPZnSqYTXrQZjiWUA95y8L2FnwzXdKa
f3dzeX0UvZIVRjg6MfahBZZUngJmcrmEQyVzpVrlq3fDWqKIFvuFlsM7Epsp
D2SqMdYLSBVZrZ5aIeFxD8uSy+WSG0DMPn5jBqgNooGrZulaGHk823FrWhP6
YAaTDoPCM/QZfcFnsn3DbBycF2jprowccJ8hDvUZscUdnGvloGQf7u+dU02j
aQVDd7OyRJA5QrDU7C25upc4WPU3bO35LNHAlkx1xduOiR4zED1eof7PXKo5
CCMivtUStJODO55LYdeWk2sHCoHj2sU8M9k3iG3G9RqT+aNQl4FwQiBMUExa
SyIHuh29IuDVIzLNMCGZdl0PO9d546JwWRN4l6YBZKon6/wvS2IsY1JVelkY
ps7zWJwPh0ocWuQH1P1XJI84qa2VWdUobd++4aOXaV3ltLWFLza1YC194qe0
K8Ok2c1/8MKkArqahDYMPVOnSNULUmjcT9+64R0PdMg+s6se+8bLH//xp1rd
5Brpmy93TOzH/PvDvYPDM/e+ra2tx1/unKTYj0dd5n5Akuy2XvjIyzrjwR4k
IyvPd1FRmG9+5GVtw+p37KI2sN78qMvchjP3dlGkmG9OLr/pvvxc22CTjbqW
fPgjL/9688L81a0JeOTlX68fJHJ26S1byQsfefnX6kfO4lM8uNvxS/3l4+7L
XyfZlR3eI6sK3SWsUTX30JI9+QGfUkJOO6gclYcVUFmu8JD6QBETM4+21F3m
7O9jX4Oel6RyE9fiUlkpSZrVJqyb9VM6nWLsosXa4GiXR5O7y0QNMUJlHjCw
GOETdIZLnG/PJFVdyIm9XUJ+f9QtKUajNajexq4KOZxTvgvs/5Res+XNSo1E
A4vlTY00cz4kP3Xo2Br56N9CjeQdfz7+shZHd4XyO5NA2dQi38gJx7t+dXbf
yLGf3ZxSWC/Pfzref7bhxiihOt/uXk3EurvtO5Ib3Z/R/LIhyTfciJtvLhOB
/dCNiVRO+8iKK5LkSVxuf/XsRffHQKbWG78mffWfvvpzuPOrD093yusXv3vo
xm2+8fnX18XDN+7wjb/9+ndPi+uOG+1janzI1paJu+TGRy6zRDZt2A7NsvVh
T7m1fkm3SHHvqs59RaKnIq/EGJEiTVBmqcSQZMhfhyWL+9QYFFD0niQrcBsg
3RMntC45BWHDzWEw6IxiwmS+b5Vbg5bqbmE9Afanije7UA3NUn0e1Ho2VyJI
J45/ci3uFoGAxl6qy5WbY6dsfFRWnJph1jyqp4gMvmHKKKRgIjWYqoEUU0Y6
UgCwTiJmwKkHm9kg/SuAdgSEJebJ4AEpGMby55+zMFRrFJ+29HAiXqwb/n9G
shBZGfxn1ZSLbGk6VEMsPlqytoQqmZrvfvlCV1G6vARgUUVI586ieyS235KP
8R4MUVhL10Esfrje3nn6rLgaNe7BzNcsHh4QIEESsmAikbW98Z4uadmWqm1B
2XVPU0a2+rz9S/q88z+jzzu/pM9PN97zn93nT6/VT8ljyMAHxXG4A9IY+Kd9
jRKJfxK+Db8DO/XdlsH3YP/5iY6h01Fxo/Xb5IcNx3XDjnITbubOV4/sVcMw
ecQTbELk/lB98An+k5gHySRSnG4oGCyZOHYwveJr+zGOh2lzcjsJy/pAi+qv
rm0qWmjhwszwcVdlzLlDtoRBka7pHFSbAWcQBwqV4CJJr7EgpGvMsrye8YOJ
WZC1g8p4i8Qcte5PhMuuGq2v5sTT/9WHne1x+cIypLTStlvviGunxgiGJXR5
CGby/1+Z/x+rzP/2wRv/25R5rLmuG//zVO90yaaCYg+XmnKCpPtB+aB795/x
WfwqfpNfwc34q/Tin3Yy/go+xl/BxfgreBh/BQfjrzIj/7R78VfpxT/tXHy0
010uN/qh+sMGb3/31X/wmc3RjYeuftGUzRpSMdHjQipnZBRRJdqNk/TA+9qj
svnbH/PLp4JRn4pmNaMmXlo73IqK6k5RDEH9Iwf4OOdbTg+iUFC3JLNVaSJm
QtCBckV4GeK2U3aCxn53eWITTA35AaTMEGF+UxJmLgzPYUzBkoo+GAOZggAT
uiNgEzWhQnIWCRIEwOw6UhSNwudoYZtMKbrxgOaHaB06K6Vmd4n/qJLyd1kH
ZEuTO/YpRLvRsvnkH6/o0SR0+BfS7fPIFkfL5WU5v94lFum63Lhc+UZeEF1+
Anej/BktSZ/Z7dRQmjdu1OL+2/U39zFwrHVoUr+K/xLFKMisSZQpIXX4sVqW
bXPLbUhxdSXmVnf7RC0TbaTE8qL12TC7JPMgxQtuESJtVC5Wa84gLroMPykL
d7WuJqvE1Gp0bkkgDbw4Zt4/aL5lbfOtAxNsE9ZhvmUd5lveMN/WAhHRz7Ha
y47ag+BczZH5/46vQt0SqZ7lnBX/M3wVm10VHUuo6bKA8c0/mRXCmXXCNHhh
hU8IunZV1NXIiEymXNKNMLYNOA8vMYKtEYBK+fuaqQdMqwjiXU6hOzl+/RMy
6GCnn8W6JUbLECvfSpGlZx5/6q9TENJ4A3zCv5ZriHwvCDcwoEmTBbi4DQqb
MGNA592SVuG9F4YgJtuSQe9CJJHJhkfBUcr+wjAbUkn3YD0qZ8WymmvNGU7M
xc2aCg61gngduAkwhEOczCliatB+oWYJIy0ZWvPJOLOTf53cZag4cIih3waj
43dVfBNcUYxpihUJI7gQD1jWuLBoNEcte8yoUShJtYvOIcs6R6w1JkAOwm22
Ftx2rOAjRLENkrOI9e0YKD4PZIAyTTmc50ZBi5iUZMTNxjpsecewZc1hQ26h
FTWutdxHHdP7bTJjaF95BlGOLtPyBQmKV1PDLDmeqdsl3sXhe1Fo+SykWJHX
7rjsFHyVsewjlmdo+gQZXQ6rqulIDtPLmZhQTRuMyprXzcQAok5TWCwjWVML
JLcYfwmKtvsvjY+i4TIsJM2B8p+Md7Wi3M74T6wyfu7bo+O910f/dpj1PHpB
r2ptC9EfTl6/frW3/4c8uVevAuh+tMrjUrAIJee7y3H7MJEFkdIKEDBNtAt7
1h/csiz3WqXpoeVknHIg9W5Wkisnz1iKLZOFOOrKxpBQsc70wz1vIWkzrZfz
NCjRQ9hfGvolq8YlRqIrAyhhEcfx+qQ5tnSFwKhEjcTpsBwKBb2hT3Yx8q7r
9RKf47pzHvY6aJk9R8YVihVQPjGPQpJ0sp5Rd1lIcENvj6kr6SLha5zfIAl8
fhBU0QS3o4OsRysvJuNlPtURuWpFYzK4GTfelIMyX3NWmzvbOvlw7MNcN7CI
+CBSYUIZ5qjrO28O4ae/MJE06UfOld/YY42sJKsmFsYJPzxIBIPSxayYAkUQ
+huy9sn+9owEmr4Y60ARAYq+83Tvp9cneweXQaJe7p8cnx+dXxweXwjENrI7
0ovO2CAWdilO6WUjWRiqQqckp1eJLuaL+WR+c69cgPQFlNIhHIexRhJDhynf
ZIwqG1Svo1jd1oMOJ4C8Utt2IZC8oR5+kSqcm+BvH+nLpA5a09hr2X4f9RmZ
Gv4Htdbb7tN/fs/PWGa0f6anqQv0Dz9H/IxMYPoenrd+R2c29e0fGYOH/9ib
/hL+73EP9HaKPj/5ACC64w2fesC5vsKn9Xau+nnvaf+hJz4mf6V/dbxg/9Wl
Hh2ne0dn8cnu/vTO9vYPaVef75+9fdX/1O3dnfol979v3d/lhM3TXzpR7x9z
3cj8LyzdZ31dtx1dteeSJS9iUPrFlpdiKBvP9YzXm/5F8u1SxyDvSfY6qbik
3/jn6Ox5c3Au62jv4OT0YqDPWTEBriPQT55Lxjnt50Pflz7HiIPWSHeMZ/oc
r6DBg73Z8KOedP/Qw8wP8Q89yl0+POhcWRlfDeJN5GSZENKxRRCmaUvuo40f
/ilk7qSNanIlL58eZexciWkBfjFutG8NXHED5CwlzTwe35bpLi7dxq7VBp72
bY0r69R4vpCac9bGmImAV3Zc6ePPWo/r6/CuJ0GpDl91Q4mnwiJkSzitNtM4
qdTfcJYejp9t4MEQE7tYB3MA/momHZFRZVNH2BJ6RMEG7UjKgOZSBtRVqtL0
WveKvnw4sVKmukvM19biZFRjZX5DbGmidtwVnLDZ4pOq6+pmhmQoYaykZCv+
AlJN2szKDjjdYLrs73Lej2bNmmVOGjiQmOiCaR8x/YhxiVxIyIYAXNKr9axE
PmQY/WpETrijlIMZd2EsyNBQPmYpEAMGmh/Fa6I81WRhj7lAMZOnsFmqlT1m
adpTbm7CnlPxfHVu8PtPiJZy+Z4z6AggOq45uRXY97BZuvpR1MZqBy4EIVdz
MystBYs7/4T2eH6xd3GOqs88gX7T+7djU/Gnlh8W8zoyUxirxnJ5z5XT4N86
OBcXl5y1AH5KEbbrtMwOcXOiw/TCY+V9W84/UPbefdCrr/AO4g6GvT2QMj2w
DnhMMNLDoXwu+zMmBVODeOIJKQ/D9BmtRWoZra2Vpo6dGbf0vgQPYiEOH7mG
5HGXqpxaYUYotpqH9bW9hTpTRIwdZadykTAJii6gJnsLnFIkwxxDVUKO1FiY
vEvGFfvl8ONLPE8lObMdzrsNevGadwyp4TQDjNUCcsMqCOOxh40KY4CTvEZm
bVDdAA2kHHmWuB55AjcU5SCRU0yk3IcqGzIoQumeNVU8YquBx0kt2vR3lH2G
yMyMnKM2f+gg2dOihOS9WLJbxisTMjgQiF3XhJF61hd2wiBCJ5PhCAfSuCwo
8ghouKe3Ev6x9lknK8FKbbNwrxzPhLjBWJQYvT4XZxmBjjtejbuemHYYdZdJ
maBYV4APyexEnUhE+wfmfVctt9HN3dawCw/fBwoJcRtk6Q2UNU+rNlzBfycZ
tpK0HIZ5xJW4kIkjc69531yQWCtQZ0Hcha8fApWWp0WeKhTEYmT3PTvMrtez
kZZp5PRxfSd2SHzdUDPTGTmuEV47YzJO/c2R+htEDETAJHID8e6hvjZOHaaw
zQj6znu6OXDseIRLexmkxZCGjcs6YzUR+1qGMj1UJWtaIZ5QrTa+aUAVFNgJ
ZuyiWTkDL5GrfgIny3pVMlMM9SAKJZeolOneJq63iSyZcAstpJslnDRHX54M
LP8/Xs+EDCm8RLl2pXlRgMoPVK+Hgoci7pHdMKLzEhF8Ds5hK4vbQhSsU1Eb
dvNjk8PvwzI5Uqn/vvyEVJ8NowTninI6zcOgRnD1+ouki0PXRV56TJVrFd3j
TsqvqdxbgayFeHRQcyQImoUDUJ1mWQQJTBRlJBnUR0XsBveqZVB5pnDadR3t
A/VqZVpdoxDKC3oAIRM+TfUAo6K87g10BzF89GR1oMjBTHPZQZHGNb9YsRrE
WuBz75ZlmmhOTgl/50QR2VR9lLDlchnN0hgi+dldz6vE1RRoDuPerPE8H79B
M5owhdVMvcqodTcLby3eNfg+pExBOUQAJQM+hQ/bdbhjKUf1kyegNJLZvg7N
gxPhEAc9lBcSEnzu45iIXE985kFvb252cR/TceKd0zDxLg4P+G14vKect/1c
os6Hb04vfpL+9CiuImw1oJ2Ya59YUcBghx6cHh4fHB1/R02oEXoA0gdUjy2p
jib0gSdPXtNxJZhvDI9RQKRmHh80ZHfh9MXLkN+pg5EenjiCnA9cK0O7a6I1
wg/Q6SPXbpG5pioJTbTPIBDrvOHVh97o/bokC/C8ssqwroWu6i6Vr0kUASUs
4NG9LaV8S1ChWCW31Cg9RliF07OPv7ixFjxbHAYUPDi8QcED5ChcZDDVex9m
7SlmbX9C1dggToWVH+sDc3eiJEKxuiUqooRTmZWy7rmFUZz7nooDIQ0NyS8I
IzHYF9d5KUtFLVYK+W1LYVIbqeYJ6jqNYoZTI6zeiVD7BQE9rup38i1Z9gzf
ehbEOeWMMYqMSjhQylZYwvSxjWiOaUv8an4wOhvqrTg8GqDEueuUblRxwkZ2
A8KBkbw7WGKFx51NmFuUNaU3DToiBQHJ2tYjVgSClOLUhZ4ONyIOvA3SHrGp
qo4QXqYc/uFvD8NVLYlZV6NVQTm4WUp5FzkPhO8kDPhzDPiBKK+hcaoiNzaJ
4JcL3C2JHIQNmrM2BhFKwlPkluzk0WhZXKpujE2qcTUi81WToUtbghKu9sjB
YVBuSVLG2pugicLBPSPNJ+jkFoU5IUkI2+X87enpyRnJ2964DB9DMck+daPT
2nl9ErqGGwsrGYDzdz0Tam1SC8TNtWcqP89zOSkWZDobS7Y/oMLUvI8l4FvW
D2iyYDqJA+Vlc31SwJwXme4q8U2FJQ9ME+NzXDlRwerLG62v2j0+xcMLib/z
+p51TWPSV6rFKEA5zVW26ddYNRdch2miIdoRqJ5o6Tw0E2gTnwTuvM94XsqX
osRZ5E4ooD/bPFOuEZupygsDm7KXRALM/PhBv9n6jBd3cxbCUKLMyUyFJpM+
c9EkrwyFUYOoaGk0Rc1lzEiZqaKSmn/Z0GWGOJaFDnkftV8SdbFDkue99iFI
URONyKfSrZdKyT6T5XJ5FncEkitLXXMYTWGrB24jXYNh+7n6Jlx2K6dmsWrm
I1OgqEM6Vo3vIAREkFEV0xXjJG16yMhNdGI0UtM5AbOUcaFprHM0WVmhm/Kj
p0VGuDIv8T4lPrfVvHEA7h1sUGoBYAlvMK5yX0SamjxzWjMUCiuu0Z4z5XZC
NSzZY7Y5R+vV/PqamvxeeJ3k2KahXXOxqjXqkiFkzsMea57me6dH8C+Gz4Z0
5yRLdt7lJ1IlgI2rLPtRYUrRRLpeFoCbFldh5ww2LsWqFtW5oe5tUvU6T0s+
Qrq0QNFUO8au12LniL4eg13Z5o8uSAawELSPupE4yrHsbDdoiOxzofdjpPIz
1hKr2opSOAxDFLmiQwhCDFXbXIHMmYMeYMR4bDDU7LNjJy3nDH+uLLLYN2O1
j+7y0ItVsHCpIC+rhg2IxSMgQKy5tTU9vi5QQyng7Bahc+VCT2voKxuVRtpU
qrzymZ0osII38OJrwESJbemByMG8hgT5tZZgsto+iY7qbogtMuYoX3EZ3tL7
8SMnv4Aek7l7nKrJCzWi6gRe0dbtpZSS+OOTjZBJ8KAYjYLN71CC8Qi7Vqcq
nAK7rCeQ5+E+Qb4kNb4sQNAZHsA5MusG9iTVY1Tc9nQDWbO0cuhXOye0+mfT
HqeA84gSDJi9A/igkXArKIqreVCEcUJyB5NIzlAdByMszh7sUNp/GamBYSpI
oxhsCJE0RuD8dO/skDG3M1dLzpqM/P2DlsbHWlXWqfrMuz9FcHiWgyJlE39k
RyiBoLF+hKZI/KN0unWzwtdUM3AzojlLa/dKKRhj9TDvUxpjS0MYhBZUl3zK
wk4IsoSfife3YWq3Rnc3lwCCA3oQbJFv916fHwblnraaqu8MvwVxK7FJi/MS
DmqaDxkCkuwI/AHNazjlJZmmNGXEcyroVcWZ6QsivtYRSk4s3JSMNH95ZCSR
Y9R9dBKn5q/udX3xNzk+tm+HWqZSSAbRT+NiAq/76jbWkBAudF5F6joylxQn
TzExcwrgtIpb3ACCfXNz3e5s/hY7n4gna/y+kKq+mikGXVD6kbFNaF6sAU1f
5QrJUWCa3w/gP8vSTS+WohsPvNZeJC82H91AD0F9sRCF6y6x+ipaFqJV/EnR
Ygbe8w4Unl5Co4ti2QTzlls3W6gbcrafT6uaC5vNZ12QVdJe+0JU4D5eD65s
w9EkX6SH0kbnClt/JIWx0l8KGvdDipitgBmQg66aglhXPI+/iV59GsxULp0K
+nujbEoUNs0tDJtTWKEtei5xpSyNK2lQImFaCyZZY2HqtoxiIjMJ1hITrONp
6dP4M3z99wtSz6SmZ8YBcfpyTj3oErRkXQXbmWKgKy5KpLAXPagzOaiX60lZ
x8KiLfQmaO1iW+MkFKlxYum4q8tUCiyf9YAOgxL7v6AaKaKMXUsdM60653mq
LIuq28LzAFMThnda0qMZ/VWFujXYUXguN2N91KwTFUHZ5W0RlQz8zeDG92YB
O99VNEoxMr8TGROsBNHQaCGE8SCdSIRCFp2e+wLghVg336oxospwBFM3PscN
riCpqBbWCD4Odk7VKp828hD3zs6JwHtZfSAgL9cDhy9OePxDUzTVw8UyXMAE
9x3huk1gaxdG0fvAFDzaghP5K9zuqe0pRly3WXh1LyzpduJn8FuVAhQDsXlZ
Oy7J5Hgku71C8J8FrqrCwoLNvZIVk044z3afDaJPDIMJmqYh1rC2MlivYnDZ
MD9/xEr/tCGGYekOJORc1/hdcj7GwujXSB9DJByFtFSmv1HnjNMxtVgU/SRB
+J8h21PuzIc0Tzt4KNVsswbaVjo7SL0JrUi6Z+YVz9pSPB+reWabNc/84uxt
UDzJecZV7IqUr5wyd5KHeY1ygS9UbQnqaIJRqeqOqjDC5E7+h4TBXRA1jcH9
tLIZ8Z1SOibp9LXWMU+rLIifxGdQ5w14jcJEyYv3XkqRuh8+rzPzs1TjZ8Ld
H3O3AecgHU/acY4QKeor0ppDx1LkBcKUEoR8dx9USR9Ur2lGH6ddSwGELpPn
8zoWHZI8stBQom7HYbb05euGC9y3oplx5C0qgQhrc0g+lJSSOMlGE8nElyI8
ekbqcYKX0s4B0X0Q9CezT2e6MCQODoTra0X00KhneUu9BAJqdm/gSMY/KuxJ
bZWKfW+Qw55Yd8U+Ii5RzIo1lVII+5vlsZQMd5Ndc1HSRtLeRikdeyyCmtPt
2xpqUhWKc+h9YTYO87FSEPQo+VqKDmiGnB6ryicxZApkKW4VPuKB2i2u8gmr
aFb7utZXNYrYOZu4zt/NFKHtqsxQCOW+XDVTrTC3EgFgpMhcAYSPSkfyJkDz
4EiMgO6DossMQFJsNAXIUq/n62Wwd3czswvUEKAoHBRRqoBtiNx0Q0KtcwZ/
psZ+Q3lopdeqFUduBVrG5vPbgHgczaHZA8wix/rVvWnuvDL4etY0BvJfYAyQ
vO4E7ZLXRhyt4qVm27Lt5bOS3w05Lx7G9AGrBNrpeKxvtYyU7O6XHvGIShsM
LTV9fYUSrk7r2UqwKp3n2wNa0sb0THGpwf2hcSbef+xjDBu3LFL0vQrre0lJ
YLSlyt/MhS3b0rKt4y/HipYTjwPo0giDAgncjEiAfINqJJZjLuXhxbVW+KET
A5Ke0MPjeNxQGHiG/NxYkWPncbaUlJBNGNfhXSA5/B7jB6EnSljMQzhagZED
RZJxRxJfuXZOblFUGyZbUh+vwrj3aQTGa4qGJdmpcP1CWcJ0hu6+kw6rBfqU
j4fmQ0jRJqhjr/VQQwRnGhgXKdyXInFOiOowl8utLnhV81RMLBoEIhf3rg2d
UnJkhRWXjP8dpZubjsLaRvyygVj1PhV4ozFiR+ds3hwcLMyahyYaBpa6T7a0
1GnqD1JDpqlwomJ1nM7Pa88fqukyDO+kGJU4VLi+GLttNXSJpIvuA3agaDq6
h/vCWQWqaPaTPYRXk1uejS2mNWytdF4EydiSU6L6G+kUMcpDz2wYg26bzcfI
2nZZXCQSKsKGY7iP7vxaMeEINYbOLfJtRmq0l55mI0uJ9gfykOnNviq74XOZ
MGyfwXKaSZyQLvCpZ2+kJdIkJo+8AhE7SEJGjq3ZenpVolpwd1U/VrlLI/Tx
qdtG1WLFHxsVWC9QQVDAE1x1icr3Lecr8XCpH2qr4dOls11R1vGFnT0UHbVi
thRZadRZKaAGhnSiTBonRQiNcSSpVZ790pdy1QrW1hjDwm5gYHXSDPNPppNr
oY2OmtvyItYHV7cJnsJoCZh157GfAIL4hhUUS0E3CE4iy8aKSHpQuw0FLmnN
KXcvnSRLsrI1esxbqBj7Ju60LOi9RNtXWewpU/4zP0OlvSGWFC1bRmDz1odl
kuUnRCbsNr/jueYUkitywC1zOdi43hitCA17NkrXzrsHWiCW4Vg0jgxI5MzR
Isje/ZFwmFzU25AgdGkoBVs8cp84lmMV+NV8nk9bhw/WJmnYJAJJ5Opd0otY
mX4yr3EYLS2Kw6qVaLmEps8FGPWQUsuDkHqc0fdqliHU5fiHRioiOwte8+ZQ
IIE1yLXKJUBJpiyfnFQ+PbQ+bfgPa5XQApBsOC5pViWBiYCW9NwbmpQnT0Kf
RmG6q3oazlQB6ZnbPknIHK10fYtxA3f8MhyUXNjXisejNrWEO6ChW+rCIF+s
61tG3XunqoR+XBXIzd8pC5hGhDM/pBpxHr9K0I36XaArAgqvtj2iR1EBsCSL
Bszf54o+5EbWdLLfYoHwxozV5aL2sYqB2eFQEFjDOBkznUWXwzFolDFEKWUE
Bsv3xWjNPJeMPB8HbbVCiQlkABLoLCzgi9fn/E1UCN6PsK4D+hjOtBli9ZBj
tbpZFtElpbXSm6uSzpYZc2gOp0gFGhULfKi7i/sWcQOyn1q5phzcSMsNNqBR
UCWR/xfrNEtyHGM7xRGfgF1Q4UMgeAlIN4XRSi1Z2cZd98SqunEBucVDPlDK
Fq1mEcLMqeM5wSwZ70HRG5QyDs+Eb1CiTnXoihXCGwsly2m+ub5qscr+5Xa1
WtS7X355E75jfbUVGvqyKlfXw7ub4ey6fv/sy+tJ+QEAu+H7nWGcl98LVVTG
s+i6XK+vRAksEPc9QpJNuRoeLIvr1YDFkkbRQKFKlGDggRprmCkC8lAMHfYt
NihVoaZmOnKepUQ1O80FxbnaNLThwM6CkRm0Wy1sL2DZexYobm82xbTUryzr
QcaBpxbuclMyZoSqJl6NmLv1GHfG+ZrSORvfzoxDDIzsXo3IG7xXR5Bk8n5g
xb/5hZ/XnLcDh9NkrScxO6V4DTQjzEraVm8echhlKyrACvuW5QOO5PxN9UFY
pZxuWmeZucgcu2ytfEfqswEI9tuwSqHV4Fj/QQJ8O9lrln7QdVkSysZIOAdv
eaUG06YA0AM/cgb/s0EGu5xqwkIHwL18B/fqknr1TJql3UWDVzO5ESJpmXm4
K+GWQ+Fs9q3HzChiHZhWsWR8gcPClf0mP3/FIh9wCgt16g8iBymFnamAqJVZ
vqEJ37xrRWsTlaK1crJrMWbGJkLbNj+dXgMHm4Ddzw4PDy7PT16fvDk5zoQ0
j3n2Xii7QFLQNdgCy/yuoCOM+oU7gqJq41Fm5Ae8WU+Kpas9xYqopf+acsx0
HA51mhKwhmmltabV71bGDZGM/4Dt6rkjvGIDTstT1eG2WvF2UEBBAXEuG3n7
xdbODj3+p7Nv91+8eLH9535GBnzNPF/j8kP+VeiAcbgVriekMUQeBsvUxgSS
61AZEJ3y26rbR1e5bJ+8E34DaiXD20ur8/QoHt8u4piPyRbaXJAi/fPxn3oj
dIvriWzNzbTMjTdGspjr6/qSZvNSNLqHHk8e5IfD97qnn4X3P+pBfXhcM1Lw
gfoivqtT2WW7ye56c0QG6+FB48FfcVQ7Ckx9qqv/M0Y19fLv7V8c/XD4qDdu
/5e/cee//I1P/8vf+OzXeePp3tnRxU+PeuPz//I3fv3rvBEo6a4H22/87X/S
GzfJG3+aN974j8qcDlJrOZY7aqcF3ZCS+5xeyBofa3axhIPo8XQX+QC4hEOn
4HTKJLuoSRFiO1ZYSXHg61FPPMMDSpoitY4NFHHLOaV0EHQUs2U9M7trVYpt
qgbBRtq3zHQ6yO58HYpYiJMObmLpgn0OO6wDbFBmjn+H/U0Oa58ozwzRif63
qBjQ78zvPPd3JZU6nc5BqoWnx82ck0RIs7ZFKxOP63KCwgsKGQr2d8lGspJi
FmOpJhyJaxTe5OAEBk7nGAOMNFihnd1ygVzp1Vdpr6hHXM0M9SA+9V5XasMn
TxPOmRZf2bidEgSCJqeP7c2UlB8ecPYuclas6HttK0AVdilNELn3RYHVjlDp
5NtyslDPMK08YftX/6CMCzykvCjq0W051QJvmenezM6sdl9T6a6cui0o7Iqi
U5kuX+eQjKUVrEMyfDZdUFB5crKv8juNj/P82G7QxdpYgu7hsN7wcIZAerOB
5nB5JnXn9Ql/40LUWEgxOMhYKVzkHmKXyQpj9yH9Nm9R5JKVA3oNqhxRJYEU
NikAFSqiRwR8TUaVjJkgO6MsluEpglBrPAdj4Hit45DwMMgCx8fDzD4LRtjw
fD6ZT8N7fghPhiU7D/+bW2nzHgTm4bHK/vPLH/aODw7Pguw/OOxrmKu2euJS
YuAkfOf7qrzLsuQVvbPzPudEwxX3pvhQTdfT/KAScXZOMgvOvR44MnHrbsa0
UL13X0wH+Tu+ivSr3Cqrh5+sSrriKtgpAx529uSwaFPQtN+3qNczn5Sxcm7Y
c+tJ6FRJKQETCpuHN60XWFsaMydH3umkmL373W9//lk4yteUiRhaB+UR/MXh
ubPzTIYzLP+zvaODIZHi5Zyhah489ZJPp+wSiU5l20Y0EObpIdG6nl0XQThU
xdJR7Hs/O51VGYjceWK+Kybz8K5vgW7bAxS6XFWjYOKHtXp2nmTLL0vPy00D
+N23vZ2/vOBYtjbFQDnAQXeef52X3DkanUOAufnfTIkerXArjEx+u2U4p9Yj
eGtO55P72XwahjDLdsUxw0i8OvdO+ojwrtzTi/j0h7+8yL/IP/zlGf73Kf53
J/zvdt776sP2Ntykt0H0UhhwSnnQUiN5SSx8FGbXjvcEbEl8xaETN/k3+c7A
uX4EmAXILR2zkG47z5+T20R8JPgU/Qe9WMaR2ruqVneE1fpXIiBU/xYak+fS
SzShtCBvviwokDgPY0D7BTIFeOTZ8G/lcm7TkLGcvNoNw/0kvwqdLz8seuG5
XtEPo0F/uer35dBUZxG3maMW1PpqwkrE8+0dihkCWkiyjbi5SBjBC4Mp4BNK
K1KRS6NeT9mdXUuJkVshKuRsELdQJQT+qlxOynfhPMx727/7+gUcpNkpHaXk
P/vf+Y9hKZAc2f7db3d0ylqDQeD25ZwWH+0XGetsmo5jWIpjZk0iWMuMaAAX
xLcILEZ+fvTmYFiMRiXLhHFoJStst3j+Meb8ZmJN5PGEJ3N7Us5s7YPbXRH2
VdyhHNd1/vY8P0UX8heDr188Hez87uu8d4664edhvMr+wGKnjLwolwihorOg
HwgvyhvfuSpHt4Aya/K7ifY3yH7gaIOKb47I1zUvM5a5/fxDELT+dOC8iYGg
3wnpkdXii6/gs1uEB0BzC0c61gOQS6t7eZYDjD6++NCb8h9kff7w73+q/v3P
//6nv/77n8NK/utfKoY96/hyJOnwQ1AsJAjKXaFW6/XVUFq74LX2A4Oe9oUL
7uKymr0nmPNfesPtvgiY74p1WKzkMp1UCP6kG5ihQbJD73fzw/D8D2Gfoa2k
CK56lG0croxqab4ApAfj9VWuxEjvqBOq4iwKUM062Ld8yqnD6AU1YBU+ixt6
Fxv6YhqakqhS4/WU3KAZNGM9kMILRrsJzgtVN6S9bD3jEOSYGR+RN+r66Mb5
FGp5OP/JgOIIrK6zxlrMsu8QmkzgZYNc/ez49+WEgAX3qFfjlyuB3YIUzvT9
/ll4RhtP77Kf07KNELtbzDlimf+VZvdPXw3iU/3dLI+5Sf4teaU3T3FTLr/+
qfrzn/5KCzSIPz43a70xaC/he07pjvrPYaHQ54a/hdvRqSyKYXpWwFhj7BDO
y9KFF84TCvDw/PRUH5ZB6JuwBRaXyu5upQVoxUSGTUhY4mXpQ6SESsh7ou30
By3CC8GHUVGJZgoVb2yusRCmM4altatmw5LKtpqvQEEi6V+Ut9jareeq49mk
yw+H5GmvfTUHQMto9SMfDHRKDhXT7Eu6/c9l+5//w9tfmrji7fC+JI5lGGzp
e8V4WcXVh9QFXYCC5Gipq0245vMtRdomC58z6GnZT6u6FgJDkRsCsLq6Zwie
6ezRryBvNrBFmp2YLAJwVZL5O+eg+Fg/vLkijHC2zk967/6y0/cKUHpY6REV
9FDEiHFcSjmlM8eMmGH3gkCyEfbUwNMGg4WMgcpaXrF5dy+0fjfEOCVYXg38
GwcE2SXTkhnC9gjvU76vVJEQEVQ4PHxzrXL6OJ1rddamz+K8yPh0+lVbxIh0
1Knl7uafUnP74dlT1WlVK9xlLTb85M/aktIf5JuIga/ebZ+3bu7C06xBmBqA
Z+Mh2AtPf7W1FQ6yf/9zH7tKVcHr+XrpRjS/XhOS1XtM2rtdEmOy3jsSt8xJ
dLM2BIksm3PssPMg8GozaZpCMvdCMlgAqJjVOCSEJD6raSoivL/BKGZMUbu5
K3DOdRbRKskk3vT4Bd8QOiVxldDtj6QDxQ/JP3K4mj4g/D1+TfjHBcnK/JzT
6sO/ycQmv2L+Mfs4HA5DU8++2AnXn+V/eBX+s83/+Zr/8/yr/0X3yS3b+Ru6
SOYa37v1nK/obS98S6T447/8z53n/h5panvnhTa181xbp/vIq7u6mpCLQFna
pWCATAxGhlMPyfSFZHEjUn/2MwMCOC03j8WLN7ko3pz8n8OLi8PL85/OLw7f
7F0c7Q/y7juOT47dXerMmPKbNns0SBi2uyMU0I3it6p8MzsYSReqr7YMKz27
KedT4BY9uXIxuSmvlkU1Eqs3GgycLiDExXW+fZDFzOSa/YI7B/nNshpz0aGY
NM5V4aQN1rMYdOpbaKTvox2BVaToOFnaciiYN1t6Cq64INwIW4RY+yocPkuz
RuVYFasxUwyBaMpqcesuKpp8t8IfsEvLbRgM58zaJbQy9sX5+SGbYJWzLV/m
Xz+j+2M/CgTPp8Hig0RlzNbhyTHkw94P/7qj5/ddNV7d1kwy6x0eWcNRIsS9
Y3Wz0PidowMEtMgONPVkCViN+D9Og67zRx5r8i5tGrWBqIWn6iVyKPD5ZD2d
sQvpj+2fSRnawmGZJm6En85Z0OWR3f3HIVRhfavoyaekE+Vf5j9KlKenr1xQ
xtv8rp/9MdzhyzCzAfKNf2GftVtooLYag4COa1NSXAlNhc8iP0ypIxGE9+Ky
GuT/cVn129zfmbA7kf883BHevG2SNq598tXtnZ0dHh0D6QFnCTr+RS56aKam
w8w/RpOr+pOg0czJstCMwfp+Snu5GmUF84LTz2Ay1Pc82w13f5P/fbgzyIfb
gzz8387P7vev9fen4Xd3zyB/KgIo3HlH8jKKnJ6KwH4WzZk4onHMBqLMcANx
z5o0KVJ5or67K0qbWE+FRICWadxAkDGFiTPwe+uSsYZ6V4N8EfrQp0X09g0b
QzQvtEYoO2TSl3BNVwgzJ2rPSdAkFvmQllr423+E6RIiq2/yq01Pkccl9PZP
4Zk//yk0IcbVRbI3rojK5xWHDuOnc9Ic3f6qRz0f5KeD/I/U/4+Lj6TU/DH0
ZZtcZx//A/8+1X9zaV5MxH/QGlRco9Mg5tp0+L24qnuLftLiaewmdiLkoLEM
uR6Gbj/JfxRR84di9TeqreGV833kiMKDeNZijDO8PzLGxIKXOAQaG9nTf//7
H/Yu/u3nn5moSnofJrJXfbO9tTXrh0G4rD7mv/8mCJ/w5wQ1xpLfF/K7fBus
TloAslF5Zf7HkKzdZKTCpqBW5RuPYEz447anlmxfHBtyx7o2h8YybNrhoiwn
CfGHep7WoN+6Y1x+0N3gAbTlTRjwZXUlNQJK2QxbsFMrSlIID9HugDxQbQ/8
bPpgWAY9TotezWfBUgym5pmz6+RNxJBwxe4q5u0h/Gk8lbfIvjwrFyVYEShx
hG7wp1g02eRETtj+T3qzsFJOw///sa/lD83HY1QnXGjXu3/Yb/dJxUoFC/Hh
Ze+8BdqKtLBdHuGuilufuqe8PkJYuLKYrG7vJSgmByzCvKU5kRqrO2wWMXvd
Sy2dTYumbFoa9Pjf/358cvYmWIw//8wr5TVy8FxuH4ztZI7CojjXWbT5V7VM
QamxP71wpRqv4fiAA8J6I2PDv3pPgH4Un+fNr1ZT/KQ3DTP9rk+4f1T/GBc4
JSfM87Ui/QjNnZ1nxMJNkFPRQsH18o623AtdKMfhlHtgsdAh+MgF09CzBxln
kn6RTH6eTj6XGkA6ksy8BGWDqZjpZreTTGt6CZBWQkUFJdtyAAvRxYRnPVOg
MpgBmIcqDMptdaNkvqCfRQ8wwNDP4xfLKKkhkBieb2eI752dD+z3xG/IJs/7
YnmP/HY9sXfJHjxw5zdOzo/5Kzqpeq+Ye+P0m2CPhSPpm2dkINIJ0ROnaEPH
7UfDsEdaxTY19Xxnmwy77a9fwIbrkarBP2y/wA/PnskP2/bDc/yw8xX/sOn6
poY636xWodpaNPdDHhUxD23c4rkXDcVgxQ7yZ8HeVI2t8emfKQRI4PgNHvBB
zHAzi53Az1OpeRFUhxL549mkqFexWkKRziIxlBHe4ZbLB2J9QIyiAQ6wk3eR
uBhYNpnJKmj2j1TOJXQrNPkxjcV/jAsr7rF4sbE3baKpwiAlX4eVX5KP4Kcw
YPy/PTrZ+41/ZOpi4KH9mL+dVdjIH/MfQq8h8Pxfs49NRQM746M69j5GEeT/
Gh7zslwf+jdSoe0/365JXS8Rh6AnvvvWx6Kl3+J6ouaP5/I/0imNmr8Ll9/l
/xI0avwlSDTaKHvL0S2is6TrxPWnNv5wFOdJFmA6c2rXm1cK2ClaZ3vBXp2y
QOvxe8OpeHZu6YLEPbp8XwjIHbTmUBrWMtRYSxkmgJz6LJaXXXJ54A9oXQlz
8hlSQPiuXGYdLlpYromk1ieRmFNxwBTDQryjhYyT1nTSlBc+g4XDnatMrXiF
f0uH//A1ycpwCNZh9Nbqxy7yc4KgiV+lpr8Pa3eP7tIisvp4aA1nNXSCTX3q
F8nxWtOEcCJQssOQ85eRNx7LIBydxL8fn5yf7qPuAcoSKIBP8Foe7dUgN2KC
PwrxShkYTOEgZk5z/gFnyrQKBxL28qYcZ13gS8kTYoinwDSfgabMau+tjO+I
VHKmRGFxpEmkRtXhk589Z71NAYPkMC+R+ksohQV8lvRck1a4Qk6WJfMNPEA8
ow37qBXNXH0oyWXfTEClRDILCXoh+dJ88tzDTJZCGHWgqCRxP7x7EDU+R3+d
sBilFEau6im6LpwJKZ9HTMXOiHMcWhp7u8DN/G64TShPCDnp2WKyFiyRDfg4
w0QrcoE/8Iqklyy0arhSxg+mLdGJyZrTupWQCXDxmpmF21tJ6ddpT8i4M+5+
BcXFxetqpI7JqmmWX58zd7UuFvngnhVOZBPHarN7TKJRMVFxJ0mfZcYgLXxf
ME8Fd49ptn3xzviSjo/a6ixjkZlV6D+is44Fpx2m+5hTqJFNG2y0eeQ5xvSF
h+JSDC26GxpLFSKRasmVw/ly6JizhdYivADJ4shLJkxmpsR84IJ4dXl2uL/3
+vXl3vFPdJenAgiyfG5VhjjlMfPFhuhcIBRCe9aa+zhM+VyKGLbq9hj5grll
UeFovkxpGcDGKyeL7eE4ogMQqkdAgWJQmUoxVifj0nzUUxSGQKZj2lGKcwLb
2Zn9rKQIxpEtzMTffJN/e/Sa7Njj/Ww9gz3gRCTMiIid5gx2eyCsqCmBjUa1
bgL+QgKCl5PJS54VILqaazNoWZOSiLCz8fxudrMsxnq6rC3KCfMxyN7mrGi9
G4WfR25Y40ZKKaslu5NFiVKsN0QIqFuqNHKZJYSHptNq1nDCOARrFVmiBG4h
cGqGUl4cC26WyFYOOynkZ8yZrmygcJLSPkoGjvG1o/lkEnYr26f597o4mNSR
y9lle+6f+XwUul3zUBnx2VV5Q+aULo4C6vqihP1nhbPiYTK5lxah4n1hFqsI
z0LzXWl1EoWaPzQyIPtRqtQYLjhvH4sk7NmV3UD/z/B1jidXXM+ps4A1S/4u
5majEXR8A+MxpYkLhSbKILrhyNqnbyd74kO856zBIrAaWdXtg7sI1SNLGK9P
HmEocmCb7yYG0m9Th1xH5QSUlQ0nSJZ8D9N8G1W69jtSfF/Ew11cU7WjOfHw
iIKZxIiblsoaMbkFxBgEm1YCVdp2Y+RTniyjItEdhgBCE7OtdJUsM+tVFmsk
JrUMJX+EJ1RHmC8yMxVhUIWckcOvyapRDwgphZpDyknYY62JlTXG/ure7cz0
lEvmg9XxWZeWCwp5LY3etgl+1tC7KJTl7Ka4sXzz9Fz/vPbyw+GoKfslyLMl
ZN3qNi1AJOMRTGWtshulqbBtxJrf6UG2Ze6vLjrGycBKOUD+gIgRybtgY+SX
OBpFuG9Rl1dx6O2VkZQ68WyDDKrx8viOiJq4Om7Cfkl8XOMHiS8zCWMpNz8D
eH3ZUBSejyyXMEsrYYtLyIUJ5sNizXFU9CMpZv0PsiohXQQFJXOr2QxJlhwU
bh/pioWGPYfB7QhCCtToLd1JOLMcER/t1g1tDD4yimD4MQafvIfMfc7joUUQ
LMewbZfzRZgkIUmIy02DiBmWVB0rFQeNOGFCxKKmh6eoGRAJSHgWO8ugvMw5
f2ozwQgdn7m4lcBNLUktTKE+I21IDHbcMsTKFuoyIQ0htbmGX0aIaI1kBFV3
CSXOsODI6FlBW4o1P1t1fPf3zp1uRQqv3w2DbFEiD6omFwZLuHpEMcGkvDIb
4SljLFwXGUdXaAIj+QtKRqDLY9L3qJIuBcbuYwkFQapnY6K8QThoDEIryryq
Y504SoIiDfC0+DBHKe8zIiRRCJG6L1cxuYQHHyMbLgRR44aUTqtcyHyz5h40
1hfz8tUaHuSumAd8TcAuZvVQ2TQbI+KV3M+1EhFx60mZuhnXpidVLCPW+xSw
j0+3suTG31EkRPy0ijKtnNZKWmEwHIFpwr9eRm0QxykVew47K2ywCuU+Pf2V
HN+jkjPVLABD+mD+I0WHzuMKqhs4gY5VPaSXydK2EeH31Vx12lbkANaNc2g0
GFdEgjuXZRgaZeIE5QS9PBz/w/mi/vnnpHThoOGiGCT0k4PGGTzwtK1fJiSI
A6dE2GPCp6i/wfbVf3wfbjw8S+5PiimGZXHDprFlF4GFHB5xNlqn4XQK45WJ
7yoZXtzH1eo92U1yzIV9r8/667hfM1LAVGdsNsSETi4NLuxdc7XyVCuwMt6W
cmSpGFJWoLSCFmGL0tTykT/2NcfFpXc39/PehJzyFmP2D0dHna4ermaCQzd8
1VUV5OOyoqg7TMhqGTtcK2aIVQ2wh44ILDVI9y1MFTIIyw8MxwfaRNKStISw
eDAo3WdoDZOnckFUkEPy+g44g2II1X1oGZADK/8expkCMKjnEy6/K++HrJnw
BXpcKMth1PQb7k72vMRqDszz8pL7yksrfYBLYqkySxMQ9V1jRgSHrbKOB4X3
SgqPjGELj0h83QD4Re8hmVc0DBpZFQ7VyftpFeuOWdGMoLdIaRMsRj52QNK7
+TOUmpi0JVq+xLpZ3s/pnGWnI5O3R7E4xpEXtItxaJzNhkH+1/mapqAeONIr
eM9qHXcyyMI8DtUlC68dRVHWC+jk+NekugJ8iaX2pFw1zgwi0Z6GHUt+g+v1
CuxN07DwSGfhEx9UcpH6SS+Kp5zSjyEK5oSqCwoRKSsQGFIEiiupoIQEHlFX
gPIeFav5tBplwrI3EI1Ji7KYijm6lwIzsjIbYMvhMNNKvu54Y9WqHujLxra9
3fZk8EGJuRsz8SEizJSh26D/jgcZqJG5SnG2qLgYVFPg4HOv6ES/rmTYcTBz
TQwZniwIIvAASBliJr0lqUIeBfinBBrbCYcklTss/IIJjdUvzE5WqDUw2Cn/
cj2R5HKh4ccoSKCKhWxmDihKTb/P7SAuYoqyVARWpgERxLRxMmV1dMfghITt
WGp6Mm5G23RRjSw2kvdG4UgplXo+ZsHmYeetr65QdScYN2EzDDIn6Jl7n7Pw
Bg0RiEzLdPAoQb+oJpHEX7xrTJNLGaYLAZaHkRghzY9uPUPF9g5VYkk/QJeg
JVHwEyy+K0oszCECbovJNZNw3UvpYcoyD7J7EZR1GeJdyvo8cq4PP5yreRqX
MEARrpKxi8Kp4jlhd/d3hxeDhLJ44P3gfMqdH/7x7eHx/qEwInbGEGYa0QyK
zQTvolWyDhdpF6kaueI8nA7m5Hzv7cX3hAQZ5Gen++eH+5ffnZ9jS1+8Pg9N
cUAdpN2KXQZMNvUjii/cazeUONt4HfLHZST5VIZFrubIfBkjXsxl8B2p05C0
QUmrjbkyCJA55ciJJ/6ljuzFWVgzlxSRODw6CGrY2eEPJ384jBfo2VdvX1MP
kx/UebUiuoQh5HRKbapFv2udhvTrIT34ulAD0n2b2AFf0jmlmeUs3SJx9ndv
984OLvdfHwWz+/Lo4JLAA3K6WZogdPfZ8HoiBBlieR/4kdaBPuUj3jitYYpR
SEp9gqQ6MuvEkphM41nLtVtUr+RaHGQP9lp08l7/jOXXkT+ZMhOGjxn6o7o/
SNXPhvZKDRy+Ob34KUyl6h5fOs3jy1gCkYJ334OUFIT3bkWRCLAfkVMc5vbG
ApfN3ID7RSnpq+u6tMIvkAfJ/SufY2AJ+dBMlMJZy6pgjcQv9RsC6lqto81J
qRClQvQshw8EajjC1LtHK4/9pJoH1lZBwymKoFlb0TKRyy/uGHlUIueu6bgb
FeBe88DLtOS6K4UUWoN7UJhwTCt0Oq7R5WRYfeKb5BR1PAHWiVWUeEPh68Ht
UnYMHpfCqmRW7OhJNWtVkr1CnHVpBKprKAWwGr9RI6HgSvbIFZlS+dpAOwtr
kFtRy2q2po0thbNIvI0K4+rFw8YnI3Wc+uadBHqlUkRDt0FEGsSoXM7MGwtE
RrEUShQNzukSbRwTYY0Bec5cMTX2LqJJHmPbPFqKVTxPM55r0l9OUDX8tlrI
DiJYX7m6A4dK2kINB15mDjzxKb0Rnx2dsR0axFA9EwBRr1fEEFIHhdwVsFK3
GLxyeZeFrB4V9ssFXeNbdcrDOdShaKjTuum3KMDRSSDt/SA4bsOynC8gl/d5
yAetr+YpT8aBGoeTPzy8AMQ2Z29yGNQlk2NPg17EiuBf59WsYUoIDCeoxfUt
G05aPNepQkabHcyGsKC+JJ/rpKim7KtX40BGV0Tk3/8uvJkIEIizMvZGg95h
ndxWBAVUKxu5rr1X93+jfTUr+6Z9huGmoR5yUDYZBYzagR8WKeh0G95V0yoM
2qyULWVhJhKeq5Hclvdao8ZIRcy3tJJqH6ydUk/Db6zTUOFe9LbJ3QmXDDlE
fIySxodotBW9wnuSlacYEovGX+g1B1fR9Wi0CrqRq674MNOd0jmo/CcriSrB
6limsU6aZuRTUjZvha0a9KVKwB984mMBs4falJZkbc/ms5t5MHixG7xmObbS
hSI4KE/ojqmMMF3hrCiS+7VEQmEBXF0ReY9qFPBIsbglKqZhNUNN5b7kRaFs
kzocVtCjZDZtorgwngVNFHoGJ+h6wUIV6IQxjVABXiArdp8LxAaFU8tGnaNg
DLx/pmVf/iaMDzRCuzlJV3o4qKMMtpCvQ8B6gMvum2f3q1tGyh8H0UfU4diH
OCp2lWDHlmdVx5+HzIc0W5lyIfy3EkUBXgEFVyO3vj1MBXrGgGwYgMR+0+qH
hVVrNa+M8Jo00UZchpMMR9a+x1yOgJ5XD5fktNy77YBDZ9BYbB3xxj5rGiTs
231sbipaOas1Cu+woSArW4lxiLKbAkgz1sBfin7OATVKqpl+ujjhwWH4eKAx
c1d4C4KGAqJOPDgjgkCs03E9HJHsB4t8ePhNWBLkfiH+KnANkRktDMWn+7CZ
w5qdVuF80g+DxCDsPVuRcqqMsewac0xwN1IKKeUndGA1pMTVmSP+7lmZjCIZ
2mk1m0O/mF/r4DSLKhTK4Ia6VVxFjIndqByrRkH4bBCLfQaZI0UEir/OOcdb
AnuM4vFoCnpWyd5cgXMDKLTisWSQSDTWnk1zE1wRQV0+AiQhTx3kLFvXFEyC
NuPrr/un59eQvMRRVPLzLwluxAwjbviZBU0+Q4Yp+rBtnEkMq4BmceXpGnio
YqgTwjUpiDhGr0Gj7/zcNCALrPZx5TVICmdplzizj5IEQ3Nwrf3IrJakJRJv
mUK5gmqlaWnDYe7DqbpEPDE5AEeIvDYDs6LXIuILhGbG+YDp0m2US8HzbDak
C8QR7ghsyYdmm/HYPN9c9+Hhmg/4SKGZrLXdSRvSyyUmwp57jUJX2M98/BFs
FarILaWNM59E6rtxzqcWPpL3uhMP9NbUxREO9orOzkSeNgQQym+ZRJ0vF7cF
7XcSjvdc2IGM1ihC0WbJJYOq7vIfs9KJSFGRUWIrP8fnvhEzskNR9tEnMSt4
E7ZqOhJQbEHi21sZxPpPPBUct92K6e58ejIoMbKauYJr7OZsmEVeRWzECHji
GGbCVCTjIJPp9rrM/Ctpf2k9XeNr7/7jNMfOW3qsosfKb3JXZPD9Im8z9f4+
Yfj9Qp75mOdsqXYwDsvvasESSXDrPf/SflHne/DnL+kXGs4jvusv/nbtUU9k
kPakbz9/3DRW3R/jf/4IyDPAwdLs4Jc2INFCKLcDK6P4i1pozeFjH918ewMy
n+fvN9/eEwQUmYy1/ryp6WsmN5bzof/gvR1/3j/m3pSE+ovORxqLJ1yPnpXu
HtkC+gVvsTb7/tfHf+/HBAjwS57Le1CUgCyeWMHERz+d30zmVxDRDIkpf8HS
+mfnqJNDPP89NRW1tHQU8adDHjzwljBCcA2SrcFn+kvX60bjsskty2hGyBX5
Pg+olJ85h7TRRuy7UyqkjVjoXdDJZt+4P4Sy5yXI3Bt0mmgDhq5VdfNl8wvm
kwlohcnKDYbobNW3h9mBQwG52Ef2Bzzqz0fBB1PIDCnq1eqXPBw15KCBWAd+
QQPQrqCayTc++tn3eRwOPs77j3i2Nys/rAzh+4jXqEka62L3E8J7p6cM1Suu
SYgc6KiC+nc/Qp5eU4mgFERoQuT4wylsZj0rF1I/LRYd7sAZF+ArodjMpBzH
8MDLZHGbZT2T0149RxJFW9YrywyT0enoilW1Rfqcr7/LbyEXFxka9yUUcCsl
izxGReghzaNR7WaDR8kB6FgxFXp9/1LmnlU7wx3IiEjqCe/hopaT1rPCx3YI
k2NR5b4NgLRHI4B6hjTQrsKrnbTCoSrlu0oW22H6wNCYdll9zFGcUA4CoLez
OFL4Kj9YkuneqH/dqHds+7Axg+TSjkdK65SQFB1OD42BRnptrez6XINSSiwK
p4UI1GT3Oylp4lewzSxfh+6OlvyD9yNMHjxqDqQ8uX+ZyAonAHwkdfMSSi3+
FKHMO8eqmkZpqp+r1VQJ2mFiz5D6F17TJ0udyzWKNeYwALNiGo38YsmkEE3Y
s/nBXDd0DtcMdIOhg5CPWLXwv7dLR0mbCuGz6bBgapaci50n3YjcAvYxSUx7
kHkGtDQdUb0mLHLU/nOg2O8iDrbLBHRT1wiXGFD2yRMXUg6ytyCPq6RjEZWO
LUpI6CdPpDYqlTpMto1JU1dCutCNzwz2HKGiMwGvG5Dlu+SMCt4zXqihgrm6
/Yi0ZwW+ajwel5fBIGPhuELksNVHnzuECLvcUCVtVS5dfW4W3S51wEAx8gZ2
2GLnlGNU8VQnc03Ge0W5f+PYoFT13G/kByxrXbjxOEQpDkC+XDIFXhZeXEH+
twDhmAHxUhFh4KQarbgOOprgp2kiyrFAjDldUnKEorfVYSEODwwNhOdlRIur
cFVhofhiZDOgPIlsiNXynpP3uUgcAwr88tcc37g3SIWzzxsb/ptJafODFNVu
W4EKwkrdYzh6JKI7X9cTZNYxFQ+9XMWGkvCoiblCFooERCXHMQVM4HGAJmiq
UvwK+/iuuXpF7rL94q7m9e2wOoau1Bz+g5PTC2bUk/nmUFU1BhKMHkdKHDUp
LNX3tgiJrXi90hRdjImkk9NzUZp/ppEtEU3k1ObWUKzc+Xkm8xtg/ONXSzxZ
IsmfSUYow2fagXOKVPE1ywmSsFESGWWq7SdP6DDHWmfJydBHyJqlwdciuv/J
k1Y2j6sbn+8Rjw87q8CJRpPDv7zir+BFTyehDuDe5zWy0ODaRdbeq3CFxltg
va7wxQSrehH5ebUHOAIz9x15/A7IjUlxX6IOOnB/fOjbULCDN4bbaZVQwF84
Gxh1sJ7WACrru+XHuKcy8DByrriBlX0+1LSYFTfsKHvyJNFhGIZZcThLO771
5ElYSHeoxkmcLBXIZghUqdWwoaeH/QqSRvFwS9gZwR2C6G5xG1HjwLpS0Xfn
neNM7RRsOoqjW9n3MDjwh/JNIu2LldxdjP9aUH6Nu1s8pFzkG15OTsTCk9FD
ncZJQoOpKEqgVXi2eRYCPYrpFNEeVD2ZfOrapABgQD8O8Y+YZiYDgIaFMJk/
IUjRej5BKG54EyaPoO6aCM3AaY0ZrO64VnMz44zIGYrlTblKutGsyIyiyDgU
xM/PB3AS4UFXBJQz3XWnyefccclD4H474X5XiW+2JeVfKt5KTh91lZUPHkAg
EF6wx98KVDeOGzly76r6VupxoD4X4vVcITyWFtD4R+gnFCvX9Q0lqZM+sd3U
Cf1K5pOGXHbptFzSwIf3+jmpbcHwItk1XWYyr23tkNrCIfhiyeXHnPCtNZQo
AlvoGmSGuz+SBPQmGeY7zWc7dR3kEsvY36wtmGuWc1FBQQ6YLH4VRwQfd5IO
go5D+3gt584zoKznAC6DWDV2p8+BBwkU0IsYUk6hGhZkUfJCpDlNE0I1q+wg
Qh0BFDOeXiLrnQr20kPMgqWQLEmIB/Nt9iODb8hPoY8QMf8tYUsLP6qU7Bp0
2OV9dl2sVstnWhAYgDl+gbMsNt0T1hrxQUyC5BF6A4S+rEJwbj2FHJZgpchK
ymMgMOaN0BOTocmYNcpKxK7FuBEGJI6aNz7404mGg1O3u+6nTAjSwoqgBJOA
lAw3wdhaiWCfuUewwPvdqLH4kLKwriN+6ywqLElW/D6F/kvxlIwRzUS+udQW
/kEFnxhyZIV1AKozz+Yc8ZhOoMfxsCHSGmO8sSh0fV2JIUennlVWqrV6Eicl
SIYGZAplYhEEsJ3AyVFOVYSSMtZDV8baRXVl2U+oun3YkPAmbU4CDS2+c+zI
9Cgq0nDavAbtF8wLl0bHYRrAVuPHYzUBOiRxDeFdTogpJvK40GndzObLcuwI
d4AYCltpvlxPmS8tYkg4wZQQg6UEpK324XumnUFsEx17aT8jBD4DZKXJlon9
ziciq3+WYa5aPZxbct6i38Q/2fKemJ/Ie0QwEM4hwk7muIjnDZePukkQopaD
Ah4TnjJ1mCQ4YXOf5MAazck8pcXYznkpg/JNGTL1aojvGLoCDE+eZHyQgYxv
Nwe6gBg4sCsprL0yBu0IUmGMBaoUwlZd+Y9VtZiTeNyR+x19N/jaIvZd3Ips
uxfeTAwXkmd74HlG4T+AW+jxZta6W0wcmo6Z3VdFXdWKLWq4jB5y/OxGj5mR
pNA3388oSk0ymclLqhXT+GphFuGwkGGPuru6QFQl1h1xw8kzMKfi8OZq22gA
RPfDZk9gstDMHivQayESkFQw/bCmjqPIA013msDem91gNy2TvZTUZ/Wwx9j9
sGt+ICep+JWuwZ8avS+Q4ZxF8VDmo09UajlfU4eruaOw81u+yJd50XI2YWxY
lTZTyfym3SNblzGnwKGVOnyfPYDi09ABRgHP8i7uM16S0XJDy3W3rY8zSHRo
tmdlPkjtJTjo5tkXVz6cP80XJLLHOafU9yaQfU3mFrJYBWLw2cB3EoOXawCq
WDwzxSkSbsKwVYoCLV2WKAMn+ZWxOlczPCe9BabLU1+5GYU2N2/ucZIKq7A1
4QUUvM9GcLr1aghY+88/GxkdghKWtBB+hLNqPRHWUwe9R24ZQPFUB1MDhaf8
PTLmXdl0jVdLCaIur/0nXfaZueyLhsP+4cMpay/iX3IyZXIy0QwXejSUY2uL
lgFxfFZTWOdgKabtPEJwCOwkn9XFNTFT0QL6TBRP2iC89SOXErIoMVDqzsuS
tcvMMF6XtD6zJkj7J65BQgu8j8IKFRDopJypUlAAD80ZrHgNuSUJ+xWmH1P/
kDpAOvqA6wdATiIdLhk5g/ylL2HCMXOIWylR2XENFE+6ad239xvHo7zEW/ry
jpcoghwpd1aWYsUpH4tyBofhl2Qq4G/24mZwCPG8cJYAtucngnj8ZlFyWyCw
EZNTbKU0TWBlNIkysxADYgwyZHwltUjhzqRNu1gvmdGI/f8dW0aFIaD38ajC
TL5isrMIRUsgs6mVosub1U6N17pgU2NWIdOd3JJ5FcEeqVltERgbFwcQpHRz
Imkz80D3edWp85M3etcxFf6Bo8mmPx5FrT3TWEAmCWFioBYOK9afpWc81sND
J/hnCCCuJ8aX6DSGq3WQCivzv501DoxH6t9dX95jZdBhlXQKbbnpCIAymJgc
SqtGXGCaVVluT69KBnoW+RXM/RZvdPvN7mVcaExK9H0AaaY811iCUYLNrybV
DSuE15porg0QDQE93Ntw6sWNYHZ9UDrflzNwT60lpI6IxRKRJKb8pCIP9mn6
LujK6LHrq0o38sJ3xkp1jJPDn7yRzOmjOIUVpV0GM7TjVFSZTjpRHBXekl6U
0/bioG17kWu8lLzpRP7DnpiEPyA9n9jzZPlEsGOpFpTCisVv6s4ugSKQ31m0
Ol7NWYJE9vFMPmXaS72Y3BX3NRdZzbCFriNeiiNpcvzyaLpDuFopadyGlSvI
Z5zX9oGRBXo9XVMsNSMHOzk8kGoFSLq9M85tLWBn25A1SD2CVB4H45PO7/ky
yZUQF7foVaeqEtMEX4hzfwO3ka1jqkegSeNKQcbUU5AmDWKZlFgqcxUQ9d0x
PrLLmeBD4Ab0d066v/aF7qRQibLdMox5xiWhqUfgpImHoLATzpnQIiKE7HB0
P8WnJB6uQGxY6Ce97X7O5BHkvUmiIHIUjibrGj55KpIaOt7i3DEPSyT+QosD
geZM2CWlSfcDRWmLByf6ZSx+LBZL0MJlJa/AGeEQHlJaoJgS0/HfLOrIhgrT
JNe74C6lVEnGjsXgwkx5wUvOSbpuDC07axfBCL6H3IjaU1Q/Uo7PTGNYkQRC
yJo5NiU8sizVPsh+5aRaGHcS6Pu/xjLxRX769uLb78N/Gy/+wnf0/3JiBFe+
FETI2BbcbbFYkLoMer6Y+B2nG47Qxqry3l4Muyb1+TXkzJzrGDznvCRGaoyY
NZMY6cI6IdcaqbKqib4+OT/0c8XvqOOwhQ8YvcOyWAT9XzL8LF1mQRt75QLr
wpWL0MHWThx8poBgVy6HZHlJ2mamQMSsFpK0M3b3rqLYwFblSisNB3C6jVQc
k62iqVFMH5unXB0RBaCKGWModpElRkFUn+x2efIHzvThWi+IhoS3jUbL4lJC
dz2HKYiIDhAlhrVFl3cajbayb67l8NQ6x9qyRuZogfrUos60ojRhKIb1cklh
ZOd6eIJ9JUl119hXQnZwuQK0hwLFPnaFLx+HLU2x10Fu6XO0xvV1cCs1kmRE
0RPmjw0cITHK1+YJocFBI2PKxJaUFhkoGg/O8usc3kflK+UN3xp7NsJK1XwA
yV1rVD+eUixAExORaDnkLJkR+JluaB9yYOB6hWTm5sBI3vqgRRj8uSRussvL
N6kcNh1cs8RYqNMlfqtmlQPWGBZUPY12/aGkhZrGpPfzxsEjltAdJhDprZoW
Ffboein2QZGv6uKSfwC2u9zUctKsNRlNN4uDLPicCKsBernglHAezLX8gwi0
mcRP0oywxyaiZieaCGZKpurTEtVJPfrsEW7a/pshvi03VV8wPgsj7iB2Bjh+
vQHbHYiQwqOcvBahRZ1bjrNKVf9HuLvxqcTLb/bCbG6kcOCY5gOFjETbQUJJ
6FTJKDqZuHu2ycASo2nQWmd5XGvOmcFCNx15Y9tQT3nufJs9gcNh8ahxcTWf
v3tXlguJmIdHOQYsz/rSGm70+gJzAJMSqxPglCIWwplJOo94hdDnig1T8mV5
p9GWnQMHOhMJRFW/MVJYRjtY35WYw+wJ6LCDE57CwrIcwAPMyj6Ng1lpUeI+
vF7b9ow+bd6Cu8JC+8ZWo66okSAMyUejDwpNkClNMjBbdgwRuZsLDgNMGJaj
Uib5dFGE4XkztdZEFAEusH0Z9j/tQkR3sTiDcnC9ip9VIN3e+FOYBsSEJo2F
vyTBAYXCkLuBPCF6UIDESLnpmEeEViewfspHJ0Qj+nzCN+L4ZR0G4yVyvdny
k4i+PJx0TT0U0uJAwl3h1YoFMv+8Pk+zCNO/ExcZphOa3FIy6y8S9wMpYOA7
4EmdgBSBKW0SW1Xyvcwz8QB5DZngUkD3ivJODHPUdaj4442sYU1XgEZaM1wt
HltSHDPhtGhSvhjjEa1JnIqtj20VWONeLKv6XS386TkHQn3QKF0SFO6y9oj+
gV6KTpLryhy3vn3ozmPVaY3Fcz2m+JjOeyFq/pQMQ+zKq2DrX1NoZwNZVizP
Zl3tTn0m4I7jLMe64c5yT7HI/XwAEGBuycz8QzzwfCvoHBbGlQ4G5yDUqxWy
txg5NhZnwxlRQ6uz1zig980K3oC3ZzqjLNtv00bbcAT1G1CGAZiyB/kPQbMK
/Z0ugogJZoqiivo5AHnqoAijAN+qugA5XAza8WApz26y6ZoTy4fg617zulty
exaxL67U54J6YWT9jzNH48YS6J742ZgZRso/3s0ZSE+muncpkJGp6O2B4cqK
SKyAHksfViiyADAy7dHoE4GTXhaExGSwpdhTGpYDUZJRU+xuyFR3x62TuX2m
SPmLBHBrqJIO7nKE1VUFFTU8QzcRWaoJ5bWc03FLJzngKLUAYp88OQARLPPt
uBEEkrZDfFQWK24xMRbKw7ivBGFWX43OiqDlaGiDlQvsBiZ/uRHmRj2UiPs3
TL+wmiXmKCNAU1eNQ3KLX99tEnHxczRXVJNpmADSdPi0ePb0WbCfGJE2yUHj
Kf1CiUAh7FTM7JMnp5ZsAl424BvtEjuLMH7NxAnWNtq5E5tTJBgtSfza0AgU
frXh0H7GMM/jua4k6oyih3RZj1+29CCNBrp0IIdp0+lmwBRBKJA3gGH/K/Ah
jIgS4KTSaHSDYyU3gsYwrQVAPi6WoVS6GjIH7KQYR4+AY8yoZT81agcw4kNt
HleOilvDw69d6YGHtZ+E4w/PfpbQS1Bs7jPvs2fUS1zFuizxbFQ0aCLYv9dw
SIJbFA4A88PhJWHYnmHYOmB1lt3o8GY1xk2xdjL9DlZnKLp3QM3Fmkg478Fn
pVDZMdv+tKrED1qZMu/4ZszFSjTGJtMN+oxDgOGTvI4aMVlx/LkIigKpRNTE
lCQGW6AUmvEYshqIzkd/8RAORK6sHH3GyuIRq8irZGdZYly13CQMWanOnDXY
VWIT5i0RkI93UQoGL3FSR7Jb9ToQDCrh5QJ14jKsmBup43YQujm/BxNnluCf
OVs9HJlR0WiZupIEC2K1rOnxYIu1aGTAtCS/MHDWGZfk7agpkfdYFxDFgPyC
oYtBklAYF+kGo9vMnXKbddqXyr7kSJU/rzNftOkRnz0Jpi3RV7+jzDdZYq2q
NBB+ozD/1XwmrGA18auTclaTlBI4hUDOatZ/h6KxmdVg1LRaB9UVwMiUEJE6
b8jMlA8xPe5nbKlH3MsMykUp9Ui5WkmTW1WUPioW+x1R8HWpd0bPl2XfUhCN
czUaASTHAGh6hSRbco2CJ08iheB1gaqETDU3KkXyNIeYbRnHZBhkv5YTqla1
Buw9Tbk+dnW/KFD9TD0WjiUQKgDtlG6iQogut3Gc7I0fYF1X5p4xAoqSbcJp
UJwDhZKWoI3yk7UlQ9Lkr25taQzNOZeuIuNZyZRh9zRzzdD9HDCR7/dQHnWM
76BAm982bNFrt4QQPlENu+lbtdf7tOuZ0znNo/rRmNVMJICgV3M6gO3AEwp7
t7cC+U6J+pQRJsuPEzRccpfxTQoTJm17G8rOtDRNAtLTAuomFAS4fmqB4VgF
CSWxXpb+APDZeD4tEbMtiSnGyAiRKAUv0mSJKS0yXhwrymCgp2lx+qQBtgrv
cVzGqpL6kd9xGL2ZpCUvpfjDBGo5Rh3feSjEdXTl87qVP0A5DdYYQ1CqcPAW
TGPIZNer+YoKglAyjK/MIy/lV0n/UvLOoEmFWUDRPWGDJEQaZpK3PFf0eJjA
86ZYXhUIEYVlpduZhzcTDQlKEdDOMNumC6rYxISdTdrRMmjXYcSGo+Xo6U7D
E4cHhN6T1ATzHaOydIMfWTg+Z57mM94D/wDdZhMHLWUyFEQWV1BaRfWc1+Tt
siI+pdv8+3IyuasYdHylWVCr/Ojw4tt8e2c77x0H2QJNYuernWf96KvkdodX
5H3jxbmipCTWfqFy+RJE+fD3zUjs8Pet2LorpxyJBgR4dkXMp6T29NU1upjD
aRJWjKx9hTJxIZRW9Y68GS6G/cJa3USSZgWezNt6WNUu2TGOoJyHVpMMkyt4
cquLScoy94BdONQJ1jGpnArJQed/5gKOLt+EohxyHo84lmPE3UnZMkFADDk5
gM0MmHtBQuNBqfN+L/z08rVVHatVqtnGWYE4WrGNuVALcbpuARjSjPjmoueJ
js3+May1L41WJfyVtWNbHhTaWnEpdKTNWFBaEvGgQ4gS68cYSJ5UHtNNcIT9
xiLXJ3FI9wTCEeYBVMsk7msrr/jtpPzAl/IfpAzMdv6a7eqLYFdrvtyzp8+J
PjFapbQCM/WJOrBmAsYsECp+/1Ra2X6x/TS0wlFdN+umTNRy+om9wCzytIUm
NDn8EVxDS+p5FVboJ+p2vohHh2+kZvR5pozLKLFB4RCNRyZf0Hj5d4cXexcX
Z4PsXP4SRNjhnvbk7PDNyQ+HTMqoM5HU3iuEEuTqPtsw+Dt+8Ad5rI85JWVy
QrtyGYcOCQ+yC7X8nZVV4wpVHaMc7IyMwCuNkR60H+mMgcvT/Mkdb+kImJMH
IhqFIGX11I7IqIWjgFC6sa1xtRTEg9FuY0k0eSWdN8uhMFFiXqfBWs1IQpBT
i9Uk4zl6qGRiUvHCL9xwuoIFHVY2fUrv7PB4701YDq+PjqlK3OuTY6oyd3L6
0yDf23/Nx984rh5UYCZksmb+dtmEWnuH8A518v75LE6fyCHl/beIlswVM04T
RsktfI4D5uO1JOHHQi0cB+4KqqH8zs/9SBGDHXRKDe3mpl2fG/T/wP2TTZ35
oqbI9c9SWLRrkRXk/yjcmo1L1SYFxj64BzxkKdcVksbV7AWilRG1T1C3gyWU
7bVXbL1egGAco2HAvE0Dn3EtR2bMVrbf9IPYIFNU1oBBWfjP2ckJ/hoky7ff
J9nJmpxc59svtp59DYRW+Nv27wb0n51t/OdFf9fbv+gQlxYOQm09vQIneXj1
65OTP7w97WwfjT6nzLP17C4ogYs5hVGw9eYcYQ4qGTSuCZoSGbixrd9SU+W1
gJX07oKRIxHgJs7bSgmrSpEoEYlOqcuoBYpYdxUdfG3u2tJBh0YoaUGLnMpb
jYEm2eLhf7jjT78KPY9rKFios5HwdtFbyRhk+0ga6pGXqB9M69kQRak5SY98
petq/OUN+RbnK/MsQuYEgwNVs8C0HvbT8ApGbEXaW1VL7cSZzgS5bm5mU9DY
h/7zUbOx+88aU4iPGFHeArwOoQE+nzY2sEOLYDQpi9l6gbpjVJZDLN/1LBgt
79DKyenh8YCxd5vX6/bXvEpDi3LibZo8LYTateUbx7EO49HsFfJCEMn5cJ8f
fXmCnh3+6/73e8ffHV4Sfw6P1uX54fn50Uno8MHhedhsP9kFauzV0fEB1UQ+
vrw4ad/JFZsIgNP1lbIdnz7Hhz7l7336W/7PM/znOS0oFbWySzEEIEyZllYT
va2jMLUjfdB3RwfPELi+fHt+eHl8cnx5St62igOT7pO31KPjME9CgbZgFw78
LgmCKGpAotfSs5vxRo1vAfYgcRF3TPDn0UEXFkH7i+hrUBJLYukcj2oU/hp0
0SNhBjtKf6lfWl6LpdFAVPHXJReh3qYtYTD41vQH3NvxavdAx69I75U423A1
p6qP6fiyrE3Qd5icePCIEtWlJaAAOtkgLDxbio8dWpk4iDYiFVLlQPxAjco9
pNIAzkUaDtxPFdem9Kq7mgd4q+ZzSyxI7psCgmidVDWIihmKO09imdMgJafr
KWcoe835/c7m81vxxe0lua41hRL6yWmwZnZFidystLBgCnrLnvd4W50rwfKl
fZNjThRh1n2zhpkCVQYYLcVxea1u3rBqMjP7SOXcavpxncrtztJoMgQRlJhX
1HmNAgZ98fzt6SlG5jfh+6NecaSF6k5FqyCYM4cqOaw8mThfE862eoPSA03H
6T6/QOl5IapP/3+quJdjdv/13tEbcuacvj584MR+Lh9yfrh/dPztyUD/Eubh
ksyIzSOzw+PxLJzY4kwFRAsYCBagECDqfMuvJ8X7+TpyAqbHBEuVOlmT5GMg
nxb5Kk2YqCBX1ZIVtmWZyYbjJFG3eAu8kZahNgF3MK8vVQ1hzplvIsv0uiRn
RMXaVIO0FpMiGghBSm4coqUgQTiSdDj2B+22ClwaoD6Y401YvkjxxmU68MhP
Clfxas4YTEgY8jXJBsS2AlUD7wuJLov7987KckROgyITYopGRUEWjmOKgldX
QM5Z8rJVOu8yzly6hKCkNJisGnWmGjWy4JbvK0rw1ywiCmNWxc1sjrj/mgcR
x4slTBB1bdPQRIqsTapwtqhu366UUdRZmkTrCzM63R9a9gCgMiCXKGRJ0fZB
Rt8Ms/oRdrTiS6g4aQS4qB8gvo4+/odPdV2sm+bER6GqUVh2aoDrEnXJAXzP
NbG241giW2XgEqMidqJIzKbP64zIqmlQwFeu92AXvWFYv2GwfXUoKEaoFV5k
Prqs0yYxewCAGSxdmk/g8+guoTnJkBitgRlX8mdUlbUeIRu2eGvxMHpeb+/e
38k5lyXnnH6eNrDhVNuzVcUIvKYHRSstmx05NjDzhsliJkjOUbwq2QEPiF8F
3lfsVVo/GXewNRV3xYzrLam1OVBaOXgRsML5QjZXrgcEYC1QRTdV1/duCfNo
YyHigzC26nPakHWROH/Ct5FCzn6El11fT0avApGYBG+dGmwm/ossVSftiFHw
HJJIXSxbapOZ9d1L5hRylR18WbCdp8Wiv+s8m1EWydNe1WGfBnGWnat8HHQ8
y0vx4HDv9euT/T1PyB7VZ6byMPgPuK1GFt3/Tf4GkjQshOhyz3vffvvDTlAK
CCN/+eaIKq4dHvQ5wm9luBjKQsRxQtlG4LZLTX3qaiHzbG/X73eG3NIz2Cia
tZ8YAckkSeJpRLtpgELiTdT3JCCQwrczU2eebn1tp5dXgbe7n0ZAYTdqkaws
Msv8Jv/EDqtLHEjb6MWRuySUtvE2JMwAfFhMGOcd3eN5dHA3IwL4lbWdfuzz
5enrt+fyrt+++HrHv+v51jb5AW7D0TMs7iRDlwBb0PwO/7D5sW15LAZWuKiY
GrW2Qje2QN5B8nLmjJFjReS2mqk3vrn26bv5q9z639T6M+3eghDNAjorurK1
OAVPORZeOgiOGj7xdVlEXzabic7NODcdFbYgm5oCSc1P9jBtFEm0eRW+tw/4
XmsHB2OGd+CTX7Z5YYwlFaqHmStRXfsEiSTmlzXe++bk/xxekK3z0/nF4Zu9
i6P9QdZ9B3mLNt91dn75w97xweHZm5Pwv53SQmKaEot+4mMPHj2HbfulbEyl
TNzqMBFTJkGJVfCC80HyXldV1q14l4oJR6EWrqT3WHw9uU+vpvdGWRHv5Gvp
fd+Htxyedbze/ZA+ATxscitdUYZ/vvL2mK4535G/3Phw45DbwB/XHHQNBzFL
TywNTItM7CI3+lwGNGkbl5qjT7bQYWsG+Gp6LyavaVS1fuGOt5TCKI24oIYE
fWW2XJ2UdFklOiPM4w3ukRWM3i5dUXV2YMvmN15gIc0Z22DO6bQzRu5wvY1C
rSUiqIc7kwkzTM24Wt/EfGPXrLCyk+mwUNuCJCkkS1DIOqUK56JBhpIYUT+4
nUgDHDADOygGnWpNHD9BaDTGkPm4OfTh7NnHCXuRqQ6QAJYum+NXQeu3AN9D
FgJHrDcF5JrxwSzGiDm2/f+4e9vuNo4kTfR7/Yq6mj3HpBugRIq2JWrcszRF
2dqWRF2Ssttnz11MESiQ1QJRGBQgit3y/Pab8UREZmRWAQRldffMaua4CaAq
KytfIuPliSfm5WUxdydHA9rRxFW1yrhAIZVkafCCNEGQHj4Mnp/88ubHUycG
5PPRyZsXL09fRwoIDVdXlAT/3RXv1iOyYd140SE2HC4VKN/hWueDVIHs9YWw
oHigtwDw9W6P25CI4JFbGfTfd/w/5+T+fnV8eHYMGeVe57g7Vqa+sN1H3G+O
RO4+5u7vb6N2dUK7HAdMDabQe46eddDM8gEU5KZ3+1+rfs1XzUtmvUqXok9+
UY3puXvBH9++O/0RW8H9fXp8/u7UzeQvh2/OB/jGnZQnb9a+N7sAWdtdoRo9
3gYWfFJexowTa4PErNORluCdtaxReAibVOadVjNQ4RiDjRMCJaZ9kMcwhLNf
XwseQSKIAl1DLQDseYkS0nAzzI7sPwkm6o8IjwjA5Y0+Ctpa2Ibkkg4kLKve
LyAhgoMguCAoJLr1gv7cH5Bzhx4hH5/D2SMfztwHVEDlKjA+WBJZ9JyKMuKZ
CEU4unoWgTXov7TjX75wH05evAAPw9Hhm6PjV+EzxXXencnB3VoHDTRw2hXf
YIe7VYH/PsF/n3Ll6BehHAO7/BBGUfF6p2ylfYwC3m7wevKn4qH4ky5w/kQ9
PsOImrrfcMk/P/755dExu6H9x1cvz87XCrB93vT7LMH293tdGwID8R278r/Z
+baXr/T3s0DZ36VpFavRxD0EQb1+fd0RNlwftEsCgr28O/jXC/DW7jCf56Eh
KeejfXy6dloehFtOXIx/UVbf63bRY6eye0cyEa2O2OXj/drspV9bQrgdylVE
TyEsZxwTs5Ew5KcivS8Bzfj8A0o7Y24LdbbQTSZY5sze0xJXucsJ1KnyMdCk
W6Et9Z8DOKvDDUePIy/UtJz4V/cUBN7tRsGBobyaUcciHyaU5ELo1xZN5Mma
S1LF0Q8D3VhHTpXq9i/sPYJ/gS9nKTKQDbbCveHu2N3zt3Dbg8M3v668+lt/
8WZxi510zqrpeAKCpUb04DA68FpXU51CFI7JjMDINSekbcev8C4Sz/9tHkqO
lbCX6aBX02ReX0sWaoB/8GOgxbdUeFIJDZkNtEP31l8nSqGsuiz4/rDIz5bX
CBOdgzAg+9vfFhcThHgLRsu6sZ6ACwH8F0Su8IFSB1tBdSUAzGLkdax8gu8B
vDjoBHvkH2im5wM3WMW0yVqOphSIZrWCZOvTxqD5YkA/qXjP8gcnb8+l7XVO
LEBvWLPOrDeI821ExzCp3HKba1+nwTwk6/InJZ3vdKoFoyyrFp0WGT3w8Ff7
Ql0gCMJHJo9r2ilYa5EORAfxKdgr+Yp/nxQt0P9j7nSl1q+kQCU/faKG+/0+
/c/dUfFPPhe48+kdP6L9ewXFN42Im8fZJ+OBrWj3Zp29+01WRsU3bZgWRPol
NayK5up/bv7fOsWHGIOiGOX2XT2+u2G/Ne4zFIzeDGBzAfL9/oa7/Cor791S
q2v7WXjaVmS1bcuw+8Y73RDpOG/Srr6lNm4cGZ831qtWh/FO9Yy7sZe4FXuR
67DXdhD2IrvVuvR6qTPPjnH82nZA/Ci560nXgnVvuiy6fOyN63X54VY+LwZI
NBs+fnNfSOeUyCLwUM5tPxWbuShWznP303jxbOgG6G57dcOddve9VuWKhj/X
OLVN8prfiu3MbRnq+1uR9zAh4074P+nBxhcgcvnhWXAENJ+/j+9vCEZPsnsE
plOuphONGJUUT1RGLSXeAbzUKyLXBBUUz5wqSrFSYCpakdZT9j8y/pBxHxS6
c/fxB1anB9Tavnf3SfKLnTajDS1q5l+8lSb65Ev2CXYKMNJL4V3UDBsufhBi
VCHyS4b/zi4ph9ZW2QJiqLmi5FFf2sx2i42S//zP/8w/jpSx7OHX4D1aABWB
9yF3e/QyBSv6kRH2tTKOAVYl1++76T0nU8rJ7FdySnyf7z5bf+mZWwdy6d76
S+nBeunjZ5lcW06X13k0PzTE+/nfsniN2nfa5x5+39nx3vobqb8dN7qv19+I
3rdvdF/Lbb/xKzFKLe9YcdEbcV3Uff95Mh8P+Ltn9rJJOb1cXMWX8XfRZeLS
3reXaa0Ie51bLtwxHmO/jrHL/KLgDs6K/1iWpieuyYt6dPuvf0SL8rbLKa1o
+7KuVWeODK+2OuZ0Mh/wZ/q4HQ8IIkHtST5IJoX70h5c6eOAf4pe2m1BInPo
bOlDLSMUz17Uj2J+2cR9pV337tSdXeeDFz8dsM0aj94FJZR09bwmCmGhfosm
2kxL6wb+EYPYcVNVU0hmP72Jv+64Xuep8yH8ow6JEzYQ3akoUtkt0lcE75+f
n5KQllIDac8t+xRJRt1JL14d/xlTfebt6midhGvtutB60fQgWZm6ZLmAsJG9
F7f5eDyIXz4CAcW/qT8SDCfuIGo6vSykeYlbxp4Cgf1r2RSXXO4GfFCNp7Dt
zN8zvRWU+hsujVMsMq7oE3wAWKPLuZBWXCyrCcpDEQQPmSAVGelSE2qayfnK
qcihaiVBoCmFm0nSeB9vteZsWBNuuRxlcDGkI7WdywAxFFXcPbttNpo6c8t0
wkVC+S3Er0SDeIxBPMWokq+Lz++q7mN0f+N4KD27qt03+/K7//xbdCY+fPhQ
NzGBu/w9+pv+S+TveFyVVv7aSxMZjEuNDLaXJnIYl1o5bK8dlR+qYTnAS+7L
tfjQiIili3/Tv6KtGMZC9qH/gvaf+IvhJrTRBU10jh/M/IOsbUS/AG1mRlZG
Nb77b7GgpZ/C64/KgX4ViaHpuKFB2Q/3YZCWTXrVoJ457WDfX4WPXbIp7pQM
SvTlA6nUFYbObyOvCnLdLZJkuq9H3t9OyTxp6rDwepUjvt6n6uA6rWTf2NhG
RvroSqevUmTMOQ4ixJKNOufdj5wM8AKlFodX/NieXTo7Zsx7dlUzAMysXWKs
4opGEhR3W/+ylDnkotG+UrWlglcHLQAZGYOzaLAQegcRn+lO7h/SdHk1nYQh
4jJcKzh0JGSNk1UGWR5A/TZw/G2majSvciFmz4mUTtLTk1e7hnFC2GPEQZHw
WgjtAlNfsq8e4pq/lpeRaeOToBc780OkERBRIuHh2tIcK+A62PN6Nq/IR8zM
p7zoxauPE0D2wU72qnpfEliZn8K9SGinhbKTnsMfAisZt59FaXAkQ+W4fkcn
U34WTiZrMaFmRxNJ3QHFzqfD2/07JK2/ri1ul27df7s/WATBOHG7uRmQzVgS
0WlLPHbdgerb97yHnqI1I+7zlM+5Z1ovBkQH9oFqeLbucwKN8g/27X0gLhpc
LJvbAf220T3F5eUc0Xnto5vh+O41p0aYI3t0+G9JTP5QLwxDBZnxoD0VidB4
yUamPsmp90SHKhDz5BmiZbWfDkajQ8p05aJDXrPguun7WZd+Jr9J4qmUkmQA
ZJyIlfkhGnlW8sntjsClk6QtbzUv9HgI6308ngwIfEwIHRoVflggUVnXVjoW
1BZSUiiYdapr+AGPLyP4SeLcyCGQBdEiYTtIkzof1SzLH/gV2t0GsUyO6qkk
enkmS/HSXEf88Qpk5VrMUG996xmnuMiZd1V8KAOVpt+KuV7ki+BxM+52oU/P
slUbxZ6/3H+E0HQKAyFlxusvPJSrnS6IucS1Tdkno2XJBEocLnUHuMi/rt2m
SU7hUfjWWcWapEoEfMRWTfmbp2+PEPJ1B/r6behzp5YghSZdLDB3ZlxWE1BT
HeE8NA2eIT3eHu88brmJ+EDwYsGSnXFEMdB3NpSUM+KdOy2mtX5uqimOwdEU
J6HryV/LeZ27A3Oe/69iuizmtxlB1J5+9yg/UmyBe8C7acWsFvk5veXWu/Oj
7fjUDt2apqufoVh0FOsooxeCx5trdXqnXswb2FaZX0mawwhCDI7iKoEJywWu
jsV5uzeuv1xqonFiYF7KKTkq2V9fQEkLKggY0oSEh7UelBLKMm99WRcu9rjU
VauoQFm1U+70pEYKJVpDK8PrLSzxDeoTMqyCjOObOmrUyBBf148SS5BtAhZt
PrY9bnFeXWJK8rjipMmw1jRTp3GIjiMbHReig92QOmNWE6s0xXuleAQPN9Ep
aaqi/CjBZSKTpckhsYL5Rb5k5m09ViYiQR4suVi+r1UzkhZaBya9z2gevBsk
dumbrqN1MDauNVw5lhyu9OKWhqMnw2ZXQu7fdbhzD3R5tq5uuZToahBX333u
x4MWTv7oezr7dbRCydIKdNkMv3aDS7RuRCaK+gCxBuzWwxT5wRlqx3bo3rxR
cVfBEoNJOAuyLi57dgY8PEOfTYPdxwE1rGe3oErTfOQdPxti5MiAZ+y6l6Ur
iZbYYxd0mK9UIXpQ5tlPQggYOy32sNIu4gFY9EKfWzNPiyDWrQ8I/GjDeqId
hzyYL8temM08vLmezxm5SfiN5zSOHhDraXlB3aPMZuNJcZm3XFcZeowysJ0e
qAuSMQ+u6sUDsYwYI872njsP6+VkhNqZU82pJhsqkOWJ3uCzgliXgjVxl+HA
F93ppBENuNnQUeMv38xZ4y9f5bBxm5rGr3V9pxhYdXG3JEg9J3p15D5pSZl4
V/Ormq82MQdk5K0twF+RMOhyHvn3sn4jcx8s8LbXyN8WOYwSSyqvqgE02Wd3
XAQ9sMsP5B8jL6Sf6W1+IYWOkJW9xEHDzhauS0QqpAIxlY2Fdgvdl5nDtS3b
GtWzKcOk17nFiH4UXPL0gPz9tL5xKutlqaoPbfIIGg5fEsqR4onLRhKnMyDM
QdqmbLNexEB+BHc072rpOD1gUWdS2jHyKPObTEK9Cbs6hFfC8lJ1esM39HUv
Z/U0C9BGo8dl2Rm0Qq58C82+LDiBXjWj1oO5XBhejXzlLJt9k5GXBtyATL90
K5Iu+Lj4YBCv1orX0npYXANNdMiHbGm45SdHRBa4XN3Mwhbj05G4vMneESGr
ywjMF+PYaDK+N1qdAHbzJYj98weT8ki01D57LYvRez1hOWeDrWjqKbhOJCym
2hxTyaHuJ2X4ZvPltI8zLZBvMbHimNJjOS6O/AF2cJGRfV3CBq2arFleXsJE
k34F5lydCyr0yjKFl6F05sLnwWQor7DU0Lpbf7v56x9I5mDAdPSo7E1zVU9G
PH9zH0VwC13ueciGDxk6u4/SXTYDexj9vpMFx2oY/p49cdSHqkdKHvlQMxgy
yLPiq3VSeRqlw5Lkw/dTCVOSZbBP+STZUSHYy6PvcF3PHCLmQnJbxj+opWxd
vMEypKlzy5Vor1UJYoJYGp0mdLUMZQna1njbG5QZN7GuZDOUmkKNkJJEy8bx
Oef1N+igy2nlLH2CVnvOtAyam5W+KGihPVYF1dnbCyzSCjujAhloLzP6pdWP
2uGqWP3VkyDzlPGRMsqljINz3JbCSfRfdsO2gpGJHaRxyA3sIN/CCjMEkQ7+
PJnz5wFPjQkyJfeI1Pf34HP7rjuMDe1ZamzI9w+UzTSgvpOdSfVoe139Niq4
OOtFBsVu+o4QK3GOzecF0spLlpANKqdQqNNHhnaYkl6v8OW5cKcJaBQSJ8q0
2AbiHy3CiLB6JRLdGazZobD5tNZ3CJ4L3Ui91mHRMTQZdxLuBhyS5NmJCiPH
1HedM2yOv4KJ5dwDk/N9xWDmMpiZUR82HU7TOgXbCsPqY4UI50rqma1Zkb54
nYRFslHV/KWuRFuTQjFEt4SwHh1TzLC8Fk9mMzh4i5pvAtAhWMBcQkYBbTgj
r6+dFIOR6FNsfAnUyOWiRcxIZSgqHCNcfizGXUjmBi+8DvhZuuqNH91H1Ml/
XjGt9bKJXxOX2/BaHofXtpO34rVv+iEHPtanDix2gUF/9Fh30TQO72NDFxhu
Q+5CKLmjIEuLxuzRjaYO/rV/6jyRI9Ms7LsnKQrC8URlkqgROQ3TifruHzlR
6AImamfSFJHRibI17p2Inc2wQWEGM6lb24jx2LJY75rVI027/aminLpzr+ZT
WH3fI4siXpAyAlniwRlbrebW/ofdFYarbd/brh1grUl9FeOzUvycXBND6OzJ
GXdHz037/A+7dGrma/91v1p7rNroh8ilhCzEomlQj2q6YDsS0X4MMpV7FDpq
8QxxPaqh7FneEUaACzPCy3HG0LCrDSFhFuYlo5dZmFcew7wkhJcuCl5X8Q/R
ZDP8yV0gjH4DvDQDGfMteF7H4+vhAAXJDXARbszz03cxTJE8Fo/32GOB26TV
Z9FtLw5fncX3rQAjJi/kH552lzW2q+jLroUWt9f/sNfS0fhBW24VUozKbTlO
09s2IBp7YaVlBGTmyfi/QeG96HZOy42prriZRoRRFxNs0/M5iVINmBUXrwJL
Z9UTFX8PxmWW2L2MPbJS8qvJDZdQKL0ZqKMM5YXWQhH4DuFF8CC6bhCui3uR
/igZw1S+lmJ9zD/I0o1UYZJ1Xp7v7Xyz800KECfE3hIlw9iKb9ICgHY3sTEc
HmTdIV3yPsNdLV7tK+xtbshGQ3VyfNzfc4zQ2haGIl+tQDjyPLvqrJ5Uw1vl
FNQw2uHRqwYc5USjGLi/aypqWnG9F9Tv6Gxb3t03OadhARqro9dglphiM5vW
fEFofTcuseVpGhfhNFP8TUnnKZM6EG2HEIRTjdfKFyO7b9smQI+OMhELO2+a
aiKFxYsEEsXaw/Wo6ZurcKYhR92mSHdTYpNDxrNXRRnVmudsaoK4V0c6enpk
CHyuqa+VzJUWMQpC+40uizzb29t53Fri4mWGgaChBXB9ic3D2SKyhZCLfHIB
aqJzXHXKFXZvH+y0bGfOSTg9OtwfnP/69njw+vDsT4MXL/YkMWLw+uUbSNXv
871Hzza+6fDPetNupzk8LW/682HhzWBqKr8uqEA47dAPew9+69I64hlglZiP
dxX+9ozBkTIvVBFrBot68L4sZ/68YeLHEFzAxWBFoq50nREdz5c36PhFcN/n
Wpe6lywgcAaoA5mrTtDKUPYFT/ML3aCjRAXqsZM7cDn3fvbkHeQVMy1FKdz9
oqv4ezmixpSC1IcykBZmXSMoSkZo9aq+IYfobXDYcjNZl0xNA6QmWEaNK/Mu
dZ9Cdk02vkshR9cN4MNHYKiRPhpJ9VhaxsixwaE0vBhwlwfuJXjkItcRigu8
eDFIljy8hLQzeptc/IvsiPWeotBlvzl4XbymXr3AD2GPQO3Ad7FTk7Mc0ukw
4ublVKGZZS8z8yHKhEgsnbC0HWmejadFJquStFF3BLvjlOYR4njBCIjORRRN
Z6Zd43daN+CI4zLtR9lOQqB+Z6bf0lWWTC9PXp88P95HOyRzA38rLfXVE3f3
I/0ZuvKRv+zIoQdfbVjeeoD3EnibPG7UfgZcSygZK49iBUqIBF5wcRhApqB1
kBCZa48FkDcNNrDUtqenV3NMYImQXPlxhrTFWmqr6P3kgDR9SteWL9yZcVMc
+xc3nS1Q/KF+74fRH/+gb+F6NRl7L42QqrXUnfQkNva7lEOnWp7XeUEWBGuv
VG1yOePwT5fLAYmbYyGpwKLPuKej8P4ByufjADFXsdVAhF5dJqUvdHSsgeT5
m/LG+1s434NJw1NRpSLj4dfKKpofsZruIX2c7PiMQCi8Znw6AcTc1w+TpvgM
93Xbmcrzzcn5gFgyTk6p0CjyHB89evrds+6b3h7+ihRhuuvo5M3Zy7Pz4zfn
ctOTFTeFbPbj58bI45uerr1JysrHN7nb7ujeq5Oz8zy9qVsvYTdz3w28Ct9j
djzTWZ6LvHXKS0Q8g/0b56eGdlTX5Nl97kOkDehB+NuV/z7lbzis9MndSWtq
JlQilJksDCB3zOAnnkH6g1fiuuudqvnJtrligrnNJ+02u69PW41WALf1tKN/
5qruFnQ5fOJVsKoJuWzVu2F1cBO7q9+IrkILmrGthY76Gl0xy8VO8wNJGVg7
TVt8GxEF83Rti2N97WRF1MsB/StGuCH53Dc0oSnPsafcEkIeUg1gdlaRd5i8
to23l9mxIo+ksyZwf0buAw+kTsVs+lRSQaheqt9VsLsQ5uHiCXGwhnNVgLlW
jzJJatu5hgAQxYTrPbTrEMAXK6koHJM2tUc6XkULEdyxM9KJfJJO5IodEk2l
OyoYt47JlJIeMdnrBbNhAjZnzWXgYp3Uv22irzmpwEwnjS4N/6J0iqWb6WKK
k9WC6wXmh3IRQCFW6ifvjg4L9CLjOSRfkIlerKimQd2AI5OiXhT/pwiXcIeY
0HEy9pH8SEf8aWvrWDkSjfNiTtoVwB2hqBFBMVCJXOGVhN8PTnMT+LCgCp8w
wLc6HYAmiJateR7sPXrdbA3eUBEsa4axVUShaxjzDYdRhWg8jm4ku8dRhWnX
QCLaSMqHpeBujagSFCNZApYHbtP4MF9aJcHiHSTFLIdXrLXZB1A+B1N+lSMP
eFCppX66MaFhbPskPbSy44qtjYMhHZbdVRsaBwQlmdpiL3GdIHnDlGpPzbBs
aziczyXVbTsB0mqR8BAzl0ivrxki+8YMwpJcfZNbn3rJiS5EfMkJAyQDyVe0
kx0VKOXC9WnLA/fs2inBN6akT3MFxzXRxpblQvTiUHwIryaIo3zrT8Xir87k
qxak+k+xGl/XfykXC/Dt32bv+/W4v/X+D9fbXPGCybBPnRHZP6snzqKZOjWY
Ou/+GpUU7JsqIx5hG2rlhgQQ11m9H6hSPReQ57JCXmheOOOEvXegLceXmVYh
mtRkqgKGdeWuUuQN2doguH9+huqCZ4Ozt4enSG/K4u9lAuO6AB+ceQQgmaxh
j+E7Ea692WKN6UGcMXPyqStaiNbPxW0mjngueKcgHF1Y/T6DMfWkIMbJ6+Iy
LIWWxAhFfcSpiPx34UGmITFAO3ea1tlq7jt9FPxVzDjNnOaWmDNjww55Ln0y
oPrNtJi55bKwOD7G8LG3h7AaQsOhNRU6Nyi9LHzIFBEiy0yven78iliK+E1L
Cvl0Ef2h/LoQCZbFCFXKIDz7POLMr08N+8e/PMm3qFpTODXIInb6wrNOKkGe
TZxr2iaNEoGkVr0QKypUJjeLKc+pL+cw038mJz2bDw3L4jq+jK15+PIteEZ8
RGqaeu6h6UgUWc5ARImFYOHYIvGtqhLW9xYlIadebOqW/3U3CcyvpEsUI6mJ
aA+jcgveVnLDOTj5U8+P6+HR0fHZWfj8w+FzZ8f14iUSPr746fjPb5nszM/1
m58PX5mPJ+Fv4ZMMX5wdn/58fPri8N2rc/Pl+eGr4xa7Wv75fU56tK4X9rG+
isV/o6GyVTL+Tn3+/Z1EYY7fMZ9WPf3ds8xD9XfdEOtGrNu4ue+I+rIg/9gd
4lkU/7GPZSrHf7g8YN359y5bsQbuWs33XVZnb49+975MiCr/7lvh9/U5Liu9
alLSPv4g9IwgIlzR8db6IPzQT4dvnhNX1OqdfHz6es0aSvgO/6t3t4OjcYPu
Jj2710MjXkevW6mT8E6Njp2GgUJ/QxWwjUi4ry44vPgvpQ62B8Brg5v9i3TG
xO7+/BV7+NxTXK9YLF3+2o1UpZdnz1+aDqSStLUIT1519NDaExvKIF2wrQXg
8Q0br0VZvDJP6oxICze9I4TgVXn9O40XCbRQf12//Wq1nECfs1ifrF2s33Qu
1jtiObk1bzrWr12q65aRNJWsZmXR/fH4PF8T54i7k+6IuDu6JOIhTuJid00v
FgPI5H98+XwffpPBu7PjAULtz89WwJHa1x+fHp69Oz0eKA/+9/mjj4/ITfmI
wn82hLfyWdpzWm8rL0KQD1H6uI6KJPFL8nerKjR71Km8heHNT+LTVPQ9DU8L
yoeQYChuQoW3d7AjujBoa4cFVVo4+rjy7QT/HvsVKx+FEaYUE5M2+yhiT2xl
WZryrbtZC73G4XB2NSHNhEskqHP6QzEnD14YgiYjeAR7UeOXASCD3sY986f6
ppQEBPcaPqQEZAVBN9cNlw9zrR1TN/CZArSqaZSiB1yiAPkWkgnNeI4dn1kR
QUZLN7xrHyYseTQFdhH5KdBEi0Vc0sP0wdQ4QiQ6ilOqeHyxnFOoR3IYfPUO
gpgurupLlLHlfIwsgRqcB0SwD8gltdGXU65MiwRfBKso64mWyjyw06C8oJMK
FvzgK2o3ODxCge0nTw/yMdWk3teIJmJfA2R8SYLgip/biYLUGyfuW7nz4d+q
tp51oiekgpiI6R/otB6cwcvyvet4dyLgqncR6eT0D3hAV10XoRBCdfuVzRqx
sHKcCAyOxLWTt8RWf/iKFoDOjH+G4MS1tnkIl0qkuAWSNkLFJ69kSF75uZyj
vNKeTWNBClv+H8tyjpBFFSepc2K0hHBDkRu8hVTzXgibEolKRixJ/Da6mC7J
EAVqWqvwqL6m4APXqvJ0sliRuGOAUJUmptqv2muNV4ixk6UMitvTAyrqgiPs
hfxLl5fgXeOHWoyhwbkOLweX5XSQsGHEF/Ag2msSYGv0pGQp2t+w/E44snFT
zZG6TlljkzgoNyzmc0XKPekjmHBVFqNyLoj4kJYiWSsSu+gRfEqxNLJs8fg+
PUzydyDGzk3AQ7PGG3nuKAvsdhqGMb1jdO7wphjwMvjXP+bNpF5wgfpRVVzO
i2t0gvtPvn6OGo7dD4DOT1EAisaPhvpRh6632/HdXsd3j6WFXffr43w//yb/
Nv8uf5I/vc931MYf+r/z/zKohCv/+VW2+hIonn/3ntjV/M/tCQHGB8P5cN2I
/N17Eu+ffGdn5+/dE2rlB3AcPOo/PshbS2MrbBhhAvZ0meV8O9y93/9O7rbz
ueX2PuoK+gg+aScjc9+T/u7uQWvst45Ojx7vMcVUW66Y23f3+m8OWuO2RaAb
wi4g0P0MKimiuCxbttvC0koklZhv/ZvTjxL6VZh2JGINA2h0jPBR7OnIJ4Qi
+Ha/f1EtMknsE6IWDgX7SgVguOCwrurWU/s7HrKTvWTcA1HtjatyMqKMJD99
B9kB1S8O0+fOQqc8Tt3t7YnU9HVc+VWTcxOcq1HQqe9U8kd8DBvYRSNEhtTd
RSll7wA84MyeEi3UUueuWYJwa7ycyPsRqGJ6G4oxhpXnGkElP7odkK9FcT3z
gXHRgYONoIV+i/xyUl8UlHThVAOg+Jn+kM2UGeXvqaUSHgZmxBunePdxm0+5
xNurWhR1T8lk3KKgbiL/D9iLvBjO6ybEsKkJPpWokudlRZzN0VuCXYMyj90C
wMAEbhGEzzF+43Aq88DzYuBvBApk953M/OM9WmjyOjSEF5OquVpZZ1jJE2m8
Me5eWwuJZ67Hl2AgGMWlBT7s9Tkfc5+dK3Q/SpNrl7ZXQMagfjJAPB40K0Zk
uEFYItiAqIA9YCpUF4CoKaa4nxeYB9gbrhLJ0Y0eQBPBAkRrqgoYPNSKlJ26
uKVVWLTLFLp1wPl2ss7gBOCEvoZKMEPRoIqXkq+nL0DQ+ukt3UrYhTmZa005
bYTPVjNzvO7tdrox4mn1ILe2/I9lAUANL8WqoTIHgpJ0b+H2LZD2dDsvR8Ya
yr6lFmRNMHNmkDsMmJgur12vh2nrHkKiy2VRlRfzsnhPgKTlpOQiD5QP+g4c
N1NfkNk1jkXKal81z7f8ruhFU7PdRY9D2xXMh/TYVEayaCQsPqdUQDA6s5oY
xmj5QkBMbiUf2jX9jIdFC8sTe1ZIOBXEZJmk+EfHpF1dvD+33KyGtPCepojc
hm6judAE5g/gV8opnM2rWvhH8z/ku9sheyvdGVXDzFvXbswbzWIg+h6yk8LI
cLVrzs1cOhnAMLHsuRhgoG2wk8e8vbqfuNfzBjPGGam0DOfuQDF5KHMizGoP
RRWq1FO/+twW8l0jg5BLmkcvx6uN68LLCgQhIxkcdOgoKSs9mRzF0aaUnNWG
Uay8wInwUQimr5SDN8Ieog+MXxxelWSJ0pFVjho9SRiZyBmu9BoyMnFp0iiY
+kyOBN0iLCZ9gpAi5qgYgMWd1QoPppMT066g02nDyFOn0tz2wA3NaKfuqYRw
mDZcRsVnI7q/L+dlGZ7i5Twvdpqpa9hbGLnSzYS7n/E558fPMdgKFqXlR9A7
PxXYhgtOHePjv7Oqax9MpkGZCAw2BLAiwRlvlkJxn5S7oxun1RMWeEBGuvmn
9HxqwGkMl6ByZXblwm1IQlOODM0XPZZEC3pXNm6lkUeVXMB5VBXd0IatKEtd
QZ7yycgX9X3ePYILcPeIRPXnAWskrTOPHDlzd4zMqSwxDidzMoXt03wVjg85
qOhl3FHFbpQiSxYIUYTzw1s6tlUbGqZ9LjJ3pAGkSNPru6w3hsO4vHav+gHZ
q247lrNJfYvzAXMjKd9NpMj6vLsbOgl5AbFSwODnSLERTQEES92pCM/aGiLY
XIUOgQGm+iaBwbDQnLU+1BulxyYc8Wt5QF+qnR+GLpwuJ0pybjqZUY7acgal
hHG7rbNvNWJUHCr0OyXL4QHQuvnNoTnYAdlin7+bJ6QWz+oZWLuJrseqX50v
uM38Befm9KNOS3p/JPy7NCnoeOT+kw2NutNGQ2uvZrNbWRyK7iZqcAHCOVU1
QT7AWGh5eQLAhoNsKEuU6QfO+eijgRyBhLyiuu+r3XVbwVkHY9Kjde3TKDgU
7BM85zB+L+EMnZeWW6alV5PQUfnPZzu9NarUy8uSFXdBLKzOBpjc+jSZfCsS
QFq3cY50SwbSrlKsOfEX5xbSJamLbGRGL6DZnWpN2CRPdNmpJKTmEQkC+dKm
dVMhYdWZExNWIEthgOgSXyMwKFB7lLqi1pH2tS9JS6xvk525o2mvjWbEWzaJ
EFIhK2Ro+6nLDycErSWdQ+5Ih1jwhqOYqDTTMxxLzNRvRI3HzYtIeG7EwZHb
rxW83z8R4yIydQ/XqLM4NHwCQIdsyJhoCAk8VBtHOD7kUkn1mZI+P+Hb3POd
YKOsn5S6t5WmVTV87jNfLrPARtNmPfS8/91YYosCWU8P0Q0cCAKdNK/rMR/L
vqmOgJJRyZgki14XbLwTMmNoDXVE2mRm6TktXc2b1+qHuSqaoICungSy+P29
Ti8iuQPX8tvjNxT76eUKsH2O9/aaDwd46dbDN89lTSmIQX1PjOePEouZNQia
iDrG09c0te7ZG++tDrqtU7VkFIMm+CtDJK+IhsnyvNaZrISuRCQKH1aLped/
L0Rz4i3zoaonPqmKVDqi+dVIJRjo5w0z+nt9P3kmmG5pIisiwOWMcd3OtRsy
rj/iVd0gHJEEZxIm4jQcpMXcoXHxyvKsvH7prj4fiqZta03NadD35SlGSUiX
c8rIddQ9sSHPIDIaACLi8J5/F7O0B9ejhqmV2odQGi8nNi/qqToz233yx82C
lBSWBnTUxMDFbQ0OsoD9iwaK302J1HvaKfDB6Cn0aEXDpwR4g1iEP+vY3nTU
8a5lUJrSkIwiTRrFlvtabLkvtMy//Qb1ayk9Ek57ze48VYUg6ilS8g/WzH0r
HIjBZ72djfJ/X3nzv0eahVRMQyfgrXWqeW1LHogEgtvRXXtFropuLSJzFlE5
GVPyFJknc6cjynFYdJoiDWXzkpIrEePuvbHdIaZnjAshARDJVvGAUFgu0iDU
J5x2mIkWmi6tR2u8aMlP0QMpyrwg2xYnThbGRtYNx3utqjaa16gLJ5VgoqJd
TvrRTW5S+jJgbokQroGJ0na6CaL8Pgr1Z0xmAJ0F4RDK3OPGZMVYivoSnF5G
kfH9FbGTrZM6vCjs+JL3d3qbtcVONDk+rot33E/dIWqAyVFKqZdu3fBixjnl
ZWMirrJ14kqGsD3CtOqhSY5Ys0f2pBlGPRpVYVEUCs6lFhSOosAUIMlCC5Kx
d/j85O054Erqig6XKFODHZZVVNP2mjgwHw0ylfzmv1fF5nO+hu+6Kzwvz+sM
z/NvHh0S3dAKNvH3EmxS+qmArmD7EYXtSG2iyWacBSU2iwtYE/JD37UtFFTR
cgKJu0awI4wDG5UfvT4gQTOyLbLFlXe6CxXBQmuHxQzyUYUsNJGNKrLaA2sn
SwTE8EKP2WFHGks0Xba4XhKU64XiNvpmEp3JjNfUM7mHJ4D+k3vCdPcesROB
A7/ZebKzu7NPNAQRPBDO0EIKaMkg+UiYhod4S0CTrUGKQNmaqv1LX+xYCrWC
n++RYrXCm2RieJEHMC3vquncRKWD1OVRG0hj4KB8OBLhGyHQ1hDb0P1Jza8E
gMbzDTKjFp1NfvJ20JEVmH+ff/ddr31RlIZHFz3puChKmqOLnnZcFGWpEe7r
UcdFUZYYXbTbcVGUvUUX7XVeZLKj6KLHXRfZXCa6aL/joijziC76puOiKE+I
Lvp21UUeROwu8iNuppeE/qLuE6qSVbH+bFJMy1aZN1B7qNrWPc8d2TL01HgK
O3JUgMqLLurIDHEXPfVTGAlhAsYbZiCzxi07UKD8dCbldAR/gADPTLIxVVkr
5pf1bB/RT/pI5o776GzuGTRykO9GrM0KOX/stFh2cRMZB3FmQD6IIJzG+2gV
9p19a+4KcizzcS9VzZv4wq/QcTK9XSt06PjIlutNTt3J6Nti5maMtTcq6qzv
3uTBc1FPv5JBgLU5IyL6NiufWyt2dOJB7OCxIv9TsvkPogRhxnvVM4gt5gh4
1n03pMKBzdSN7oW5seJWlRUHScJt1MCYsroNmjRpw4iSA6svtH6NGmVgXVRt
K26WtvmB0aaiu2lQV9yHfpjM0ui+Nc9TEXWQJHQm9zNbxao2RIIdJNmZcRs1
56SsaIMF3EGUaBndv5yueXsItYM2blDujQuWddwqr90xkfbndps8MK2WI5l3
ENvd2ojTTZtFWqZNG4il3EGSw6dNiA22oo0OcRm9X8fv2vDFcvI+aTwihPU7
vo8dr0J2tSR40IbciuQQQfolJAcFnz5TcJhbP0tumPu/mNgwbd5fapib7yM0
zG2fIzOi2z9PZNgmPkNimNvvKzDMrV9KXqRNfo644DZ+j7TgFr68sOB2N5AV
vMfbsqJ775OsMOrad985jWReLy+vnKpKEVF12XxtFKdt4WWlNtSb4os1m/qG
UWEjqrrlOXDwrCdPMn3W00f5VjQRvWRYe11jsp2ZfnT65lf2KWvp3Optsk6+
bkdpX650Vih8abeRG02HBVlPMjY7QAZ3crjkecRleU71khe5BMT4R8tv+Ykb
QjJk3s32wje5iZS7z/It7sa2/9H6d/hepoAMLcZGILf4ZMMWcW/aYGwwcoNP
N2xQ703bjO1L/PDk0YZtmnvTZmOLlJvd3bBZ9pbF7cXGK7e3t2F7Xf2L7Vxu
7/HG7fG9rTYjs5jb3N+0Tbk3bTO2ornNbzZsk+9NW4xNbm7x2w1bxL2dDdoc
308k9u7RoNwb2u0ywLldu31eB7mzHdqN7g1Ndpnr3OTTO5uM7w1tdln3+OHp
ozvb7Li3m//V5ES/1djnyVvNgg/C8LvvYt0y7+eHhPFHwAA8hPkR/GrIPmOv
WSS8OER1ePrju9fHb87bqdPimm5bvpF/2unIR+9OTymG8OKnA8bjfP3QXpFW
lx4Oi67K0sgDMIWZ6bJQpTh2h4sfWq+SrKtVzu72GyQe79YFcL4ggHd89u7V
hmNDqkJC80/lQcaVdBW9nXPtUPfDOHovT65trmTmxg3fi5++7sVwBd4seRmu
ytSyU7Qsk+9a6NN2EqMgDU3ILyKVjP519JJfD3/vR6Mg5WlbbfiKTRuMwl1j
4Of2+fHZ0elLZKhmWbSTKoWatjPuWwn3AEgUo1HF9XydJnNdTFGELqATw5pl
GtzCQ4DZkbeTeTZSk33N9d9tg2Ptlq+dTU6yWw0kFtXUczkOy6iudgLbr8T1
bt86o4qu0PTEu8iJKwtgIm+0gLUzua6Z/RchvLCXKV3Fx1Do9dhI1h9DESHf
Lt6fMjn9NTZSc0Y49C7iZIQbQNUceCtD1m6TMXKe0yrDGHN9MpOt7sWLxG3w
Zz3OfE6vTlXosBTiDb9QKWNtjyPJ1QIoezctZWFQVkktXjNq5KwNXeGKKaPl
dFQwJjWIN4Z2zJbzGUHEpRH+KeM05UJxm3mizHcW2c1n9QJfUSqCL7irQ6fr
kjbKv+QvDcUCnLWxhpnsHssFqrEdxFs9pIk65RFNNkrW3DaL8rpP2PyJZMXB
YmOcwKE0BsMBcSnD1cpoEv/okkk2FU9FTwyPJ0xjomBfoJ58nooBBuyXowgc
tmBzw6eg6TO2uL6WB09RdR1BF1ZNs/RExvrQ7TtBWFkiP3M3Y/NbYTQtFibM
aTFaK7jEZQ+GMJuinbxYR9qyhDXlDSh58OOMY+NUiEuqhNDrhM75UaRVOPpQ
kPQRWbAoA2TBq8sMCysu3OKXPapDGPICd9aN+PHrt+e/upOptomEUoizWvC4
Aqi7YmgjJEH3cNHt8Tiv7ZFC+AJQT7Z+6CDWQVSjGNPhtzFP7ldNblCDXWuA
8htH5TXv3xAGl0wRCszqr1wxbSYAII54I6Kq54jHQNKbjshgJxqXvhDxM+oK
e0oKxa0dgrC/ENQN0jkeAjegPOeE4ZsW14ELPR2K1vag21dOY4RMbKVlOBkj
GTCEcgKferNQuH8EY0zTQhQRl0bxt+XYJZ7q0ompchbZW/0/ppvDfyPnrQdI
CYZ7Vs3KCYfPE0y0RsEpH4nyRlhktgSmJplsP+P4fkDOlAS0KRaKfBVgvWzP
9mw2cQLaymOgyF+BlZ4NDc9z5FcEc9YzpiY4CYy8t+b/dq+lhKFCIfi3AeLx
QK9meYHv5wT34XfCGY/0XJONEIQQJXMxlYPnhdexBFxRNgdaaYHEtuo585ms
wP/0LeiLtlQ1ZRCr5jfQYkcWukpV5PiRDI1WxDODO8nz6Bxdc36KnohpAjRz
Wk/78rgYUsUA9S5qTQE6to6HFWeD5mgL2RTnx9VaFVJqPT5ncCD7k3krpzgt
um9rJRitl3fCZwSISvdy6oafOAXzlQlAtxjVswVQv5LMsBLGFS1AA/bnTtOt
N25nUIpVa3FKNpSgCGVT4cmMu6YeQmIQNo1SUzVzskvz0UIbw9vffgMLY5M9
wCvK6qJEW0HUhESMBwd4pvR+RgRJk/JSa4bacXJ7lwC3l8WsCWVACl04YCdD
mQNKa0y9Dk+iqFTez08Z+M1sdWRHrvU9HEtN93u4Hkzc/J6eB9nAwagui0Ea
Xej0UJSbeSjKLg9Fy5J3V8UBu04/Bj2SPq03c81YdJq54ff7uTBCiPFv699F
xcEm3VxpjPuf19nikuzIa7cweXo+06BYQIfwecB8dBVcPzuAuNn+49oDXA6W
cd4qzEpjwuIs6TZhSUnwM674wZC+yRZTpoUPTOMMKleOvLhiJv/GOaNOTmyV
O5c7d/FfS9Xvo9Oj3KllOL5UF0nSVur5KMK1Q1xr8UtfokFtvRNkJBbz94yc
xz2lJIqoS2BeStZSqEoSXO6ZG5hqwsEdtmSlgEhQACLX8dZqx/B2lmS5VshP
Glfz66g979Xf6vbX68CkxSixgoQJNLTGS05MQV+Nh5bTtcr3DfNYiiBybYUf
qvbuBDZ1Ja48iPOdFE/OfURNQZ5NHGpZ16M92h6VqmmUWsL6aYoCcPL63KsV
LKPl4W/F8idDWeABo0hy+3jSvYR3DFv6vZ7j8WZyebyR53i8qec4fodOiRZd
cj/RawEad/qPxxv7j8eb+o+T569/vY18yBHkpO1FHn++FznpK7/mF/UjR32/
cyzWnV/BKWL0eJGjPjNP/R4smrfonLm4zYxE3FbBE5R/LYxzmI/9Rr3QBJ4P
VQNaRLkrtgO4jFcxapQnydmbl1wxN3GUq9gcJ2fjCvcuzsZxfDZm6dlIb6I9
1uPR70Fk29/mE6f2ipFqtyu5AyokopPz0xxGrsmLMoyDkEhluO0rP9ziigkq
AXe78fkD1bQ9G8+yakyHXa/DLGI3nNyF2j1Qm8VSwqtkxnSixMZyISSYUfZK
slTETyNZOHTCg7NaiLKIEAQY14PWwSIHG9zicks50kaIxmzMLgGetSJiHcnA
NeDWE6gp4h6Z0z+cP0SN1mMLrsdjtZ2xQQSrZcQAErBRKY6kqfPADUy5zJSa
HaVqMcuXG5IfZLEgJ4veh9PSw1Kn1Hyng9yWC1OgS/Pga1QJB7G40RLU8bjV
HYHfTg/OJ4/04LRoCbJ1CrVsfgLeTQxzOrrsWWlxEvc6LluY3N9h8Qyv5neb
O7jojjMV18QHaqfsbHW+U36mV93vtEzQh+mBeVHXk9z8c12fD8r6jrPyat5x
WK5sjr1ayZUt7YKu3Ey9aL/SneO20TGcAjXbJ/HV/POP4nav/Wt/0QM5fYt7
jM2Kg9lu6ShV3gCPNE+qWrB3h09Nds0x0rXxNSXLBkYOxBmszJY02bW4VydG
yGt6h58EPtG7RIelBE69WUppj3+hOHXXngo49t/rX5ls6F+ZbKbHT7r0+CTZ
EVeBJ339fpxs5F8JQ9G5zvzPdwotuxE96re9AyefswGjTRNdnfp1W7e2h2We
DMvv263+VVcP3x1qM7ZIMVSipmlefhxOlg15rlnjtEF9dup4vS67uDVri7VS
XUM7xBxHS5jFdy82wL+KFCKDlBA9GnQe9plgQKM6jV1BKB58rruginnRNPWw
glrm852i2aB6ALpIA+W/0XU1xChRFWRshiCbwEsITqAbIqSSSyHWFaKiF9NC
ZMqI0BkpCJ3XpSMEh4ZIFu4KjTKZmMpONMmaKixy1GMQDEGfKmyWuY+cQwj1
qvfDckUVIHhdNWtMYyiEgOJYAmPa1kp6AjQl4R1ZgPTOzVU1y1C6Iwo1iNmk
c+5z0L2bTNCdW12ozW0q6gmqo2G1sOGuQEcCbqTy44z2hkblcJicaxfjWJud
Ze7aisOC1ouuQF47EgPGFiwW9XU1zPw4OGsiHgpmKS10PVAJXKfV33oGxux/
h23ZM1v0D357dvFAwDpzq5JqW2NJ+Q6AMFeCo9y5HAHYWSnp67CQ5qWUKOag
TRZVPPdBSs7BgO14KOR8hidYNghIEilcu7DGea/Dq9lLbA5ec3amrXjhV2Ra
BjOCbABWCaVCtiL+6G3zemIp5ZBsz5GdQGSREXpKcsfdf0eNjwYxG+VFyUE7
XrgyKTzZwrrnhhFk1lonmLLJo8k2NeaD64MjachbyIJLV+lGyo8F4Gxw50sE
CoEuxvZAO5OiuBk7ywXw4YlHiAgWX7o+cei6YVKMtaPGFaHi4DKF2wxDTeap
IQrlN7WxRfWm86nQnVDRyxa3M6EO7eCX8NIg/loxT5H/OCLRMLLUS2WznSNa
jUzs6UYY7dzLNjV010sm82DmyxUkQb5WMVc59IxvraWW4pk8JwDh8vhZQu7k
80MQ8sw93AAh85e0iJBHgirc7kpSsD2bYzd7IdeHFoxDjP2hUWcvDmeteFIp
CbjE+HPBUy0k99mLX8UDIPzSVNfVpKC6KkNUEmdmJHR8UtdNuabj7YrOTB7O
/U5eyz23WLpfCALqAYThVGDiGaDUeO5dCzzx01t7IyEThNdBGBaLcEIrTVNB
kB/RFZqwrC0irH9FQsYdQbUAM5uFR5MWFI+6KK+KD4wsM5VWFvPqkqrEN7Ml
l2YyR4cIdaHYCU/J8vAc9whlTLVMHMqLAteY9MKI6ijhTYw8SoefFFR1Sqlz
Cd82rwiJo17PtubiyT5rJIG5KeCNxm4nt2Z3d/LDqVUpxVepa2ZeEjxXJc4W
jsiPBUmtnjDBGbDSFYirmFjougBHmSeQvViGbUwuMMIxAVm2EP1AT1JmPFJP
qLo23VfGk0y1CRqGeQE9XIMD/X3uA3mc/C+cOs3SC53oxAuQinDotf3Jezud
NJHKKlUMF+aMiIFvXUxFLRzJSsYyCcnVQzevTZstirFdnpWM8wTZd7qCNKoy
0iljuzXZ56S/TxUsSdZZj9fLV1qKwXLzmePkqyYjL+qMizX4c92psQfCHMzw
J6+esKpuzq0swcSQeCBEMtXLKs1JBQ3ao0w6wscWZ5ccNCsP1HFQYb5KSLrA
L96sYZ5SXUJrHwQy1THYquEmZmYwSEGPe5QMwkCDCMPIl3Q4X2vaMOmdento
plrOmz2bfRz5gJsV3t/7u32/jL93E3fvBt7ezZy9d3h57+3eJduWHaarK1NJ
/6hOy3qX0HzgK54MJuV0vWeoywPS0aLotGldrJaLWB3E613OLRwNZor5r21b
6O1qD7IdtWQezE9dnuL2+rvDnT7s8KTbxw83dnjf5ene3MW9xrf9Wa7t9nDo
m9/Dp72Bm2y9N1t/XecmYy91k+N/0yylvb3ONKU8pCll69OUQobP2owkKswp
bvQvmY8ELNFnZSPN/9tmI83vykYCKr2ViTRnHkkArxU+VORCaacjx0DRxPJo
LHUbAu60MViDppeNikEIizbekwwL/Fj42gtDydh4ac5RKS4hxMusCenpK6JE
Cz2GacvKj6h20Ymty306VVIkVEqEki7pD3hyHF/PFrcy3j6T40JwuC+sAg4q
LNqd8k16xP32W1QppGjeNxm1uK8988vM2+GP5x6UERabqCNOU6inoyyhJgbx
HTkZ1vUGYkjrFSp5JfN1S9pCpfABtprY3mDKPddj6e8eVtJ+L/PXFTkMOx2h
/LEv+eK9aHEj+TfSiZbS6BRRqjpDiXPyeCmtFu59nF8INyeWniSIkK6HImKm
FiL94/KZ/IS9jErO3fGPqtt9aqlWyb9Pm7dkNKuDjuqMn/xFvIoPuiozhouw
uQ/y/fY1aZ+6u/eH6EBZsWR9WW3+uU27RfXCrsk70+gh+wXHO5ydnznekA8H
udNRyrz175O/iPWN//3o/ztYcRH949XnJu7j47E7ASnhp/MinHzthsxFuDAQ
lNp5bl0EEzRpLb4IF5pShwf545UXhcpq+betjgfN9CCqExoPAbvFd9KCjp89
mCMKg++X33Rf9BmD+Xj1RZ8zmI9WXmQGM7nonoNJoXU3ngix/77B/O67754U
F8NR90WfMZj7qy/677QyNxAYdwjDyM7w0vDUFylrXTovnQorqrf1AzxOqcTy
fn7EkHTKgSEnjlNVjhk1vwrW4ZHp93QPWMLD3wvS2MBNoNdtAtJouQvwbcvI
3gh+Eb/oCuPIXHIvGEZEAfeFDMYWFOPzoRNR99a/+p22oSxQBFwmjdF8m9yk
zLdyPbKtJBei10VRtW1gmZriIbpgHBvj2jzYUO7QheMcymJ3Oki2Lh0E+veU
U+HSbYjYc1nMszTThX2fwpetxtCHopoUCnkWgxboLnX+z2P8SDDMLAW5T20x
6IKdcD87zEO5yHbcEFMisW32NsvgdcTBJQvAGEDyMiihGSY008wmUd3lYnkw
WvR1N2zB3ACf7oKdMH4kpBFReSsa3gQ1osZdlkKcYym6n7IpkkdVSBSd0KRx
bO7OIvTQ3fuJ0Yjz9ffmonxp8bih7yx6h24xYS+5H7rWUlzemYsy78pFWd/p
dU6/6Jo7HX+Wi/MLyfLOvt7b+7eBoLd9v3Ms1gp63UOUU0bVeVell8RRP1SQ
woZjnFQ190JEt7YtGL6cjmr2K0VJAxZPQIFIiJUAyA9CPs4omd8jo2S+QUYJ
If5zehlNKZl/kZQSTiTQhr9EUkk9z5I5eBY5tti5Jhi/9mDG5AjDYiqVCW1H
W+vCQuxu6rwZltNiXtWNBKz1MBpVzbCGo4/cZlPxb8pRYrJCmNlCqmt3MOaw
z2jKHrwaCyfE9BlJd5sXl0U15XjwYXIoxnd6bF3hxuB6JmcmlQwZg7ghz2Wu
qimyc4YL7ZMMHL1EteiIbY7KSamZuOqeZfQSOzZRZ5dSVwRMkuS/uKeCgGbH
U6GEaiYy6clMw9FmEmR8U0gVbciPe12jRFNyXH4Tcwe7w/Id4Gp3nI8CKbzX
6Wj5zH+vibHc0MRYbnaGLjc6Q5cbmRj2NTuFr7ngXuaFYXduH0jLf7JxYTq3
7qXvOG9kEQo6UFLL74JnW5IuwXUbQGwbDitgWHVx66yyWOXacosrhQky7khb
VdTDs1C8MutUUlt5eN1qr0LFV7fhpi3bCmxMBALzUtUnmzNs1xQTIzSPE2g0
IJCuCl8kcKWWneYiq+xW0EcTQ4lphUHhkB4M17wvethKmm8jpnEnaH5BLwTo
gKzwBObA3LB3yRjeInilQYTZ0k2CGPPw5nIAEFfX/jg/fXe8IpMh1MVyDbQL
Y+H2F4evztr3r9szHb1Ndk37ii6N1ebnsEErcJmj85c/H7qPL18MmMps41wd
UzjidwppDNpmclou7RDVnOY5uKpv9pNG3ddrxbW2eSf0Qi5cAb9YcXU7Iahj
TunCO4upaYt8Z2OrJqy7ej58vJdmE6bwDtvynYafmfhOCR5+v5/J5wsLpPZe
6xS+mXecwukC4KtEb42ubNmPNxtzGbgrIZk3TNL0lytP553JmjfiY99wFtaY
seaCu2zYUNKhrS/cfL4Ba7so7/ZFzdfQ7fUDsE6R4DPGKb/8R4tltxu/ch+a
XWGPUbpdwKYFTbEhuIW+5+4puiX7IugWNp8/C95y898W3qKyI4G3qBcyGQT/
9S3XdnfaWThVmGry1fHg7Nc3R/srKFRZBHnj0zUmabtchhKd5AIbkyVXJcRo
MQtbZOnFRAaZJwciHZBTZSiS1CjK2BQx1Cw0Ym/c8ykHAXlFMzu9lTxEhs3X
c9YZqsVygYLj7erXO+3ReH54fmhGY7p2SNy7ZrLiu0ekTd5AF5RTwLL5zswO
EDGEVqWAi/FlOkZtlzYUVUQEGMEzpprwVAxTvzbvpMsZyXzjorkC+rfJ7Bog
ULwwUs7qBgwo6lwHGS1V6WVrPYwcUVa8e3N2fvjDq+OOZWR6pSPnJmsWCoSu
WFAqeVavoZ6kWJC6D/lDcYip+7q1tgoORupCsphmReVkpMC49eR66jR+LCVS
T7ngKvnughOJ6aQobzWkSDihdlEaR1Ors4qgphQp/6jGSb+RZjtFACA40CRx
Ho+i7TwqnSirbwNyTFPOkIem+aIM/HZj65WCEMawrZmtrQMe1iOyUCbu7on7
L0BvxVSzppDPaBJIbMZmdE6s2BiV0J82Jge2DjoGEGG2bmIs3IKzSB0/4WV4
5LNk2/mtbJ8iybZx7ZwF+9aWpZC42aesmdtuaYEMQbSp4pyw/LYL1ZgT9fA6
DbKqRsr3TSfkQZZ9nZsiu5xZGJyAPhqpmyXbxETiFFUw6aD9xVWbrltotgl8
Nm9j/njboPrR8mJBsjMQ2sqkcicJCkdH31Jyd901tEq1LlGen0w9GJCpMm9E
SIFOmMJwqK8LGCTJposSjTnrnv2xhZXgbhwTORT2p8ZQfb1qxg76CCvLvLix
IBgV3kkAyZFEGaOZ5GxPs4DydBHLyaxUR2yEEX6SYoYToWg24whm+tAdZJvS
q1sGx+y6cj+/L1GYu5zI4VH4elTER0TL57JcaOiUTlTQ3sf9dw8bFxMGm8ah
UXXh+GvptM3MwBoYtXtlywlyjYqxzRXI9D2Bv9notKxprFSMkVyzQxAJmK5A
CPK2fyTLUwlm4Wwh3xHsUc8U7/Vk782h9QB1a+yLaEMWgIqgcsOjNu2Od6nQ
/eRC8UdTRNztOQKg5WGJ0UEb4D9Fk8eNyl96gWo+TiV180xJSRVjkfvLGdFQ
sTclxB40TsOjI3Rk/Cq9jG8hrt6ecpTNyAs2Z/nFTbmlIXGZlwsesCaEGTKo
ynxq0AnC6pVwClC86tqpt4jX+NXS5hnHYWOyYfkehTlTtY5gcIgzoVUTXWt9
JyNcZL6+uc5HFFzRqreo55BsP07dqnmWRZEX3YBLkuvuo3cR6W43s0y8UJX6
MgpBvTKzTkLGoLviWWPXZqaJ67ifOKjdAZuCz5ugkbmuXpc4WNxaofPY2VWS
LYcd8bac98H+lR8ioRWq25lPNnC7w44UvwybE+6CS8oV9yOkj68Di7rVUzpA
FoEi3RClRTToGceFiBVUiscvF0MynIh1wGO9U2eFVsbr9CE4y5Ypwo/NM8na
u/iLL7lAT+Ly77ybt0gWZOI7WrHVpbJElIEXmOTo5nWl6ZFS7C0TgvdTxnQZ
jnlugJWIfjOjM29SOS2DipRExf9y1ITsq9aBSJi7t1wMdzhfOrdsb7OiUd6D
6HEQ5hawgmuyDs+QEvN55/xO/ARISU4iRHopXAQz98VsjtooHPJ0BjB+3eos
LRDoed+cnL096vHbKKsIHGcRg3raReJR74XXUEWN7iSFzgcJeXyC5Twvx0RE
0AjdeDjqOCxahjGipkKO6OT2Gd7bM/1qNgeS4SunnpQ7nqZ+Uc/6rD4HF5QZ
zpglQXJ3Ke6JR8KtuJwzzwGA/yrhYtpaUph0u2G3IZwMJS2HEJCcDSZ8aJRL
2O07DmWnQ4o1xWkxuEkP1RukNNjoSbrpFbPk7i/Mu5fCZe7ew50hlSp59Ajf
b9ZLfJ2MApZ1ktitC+WHw+d/fn4aPp8dn/58fPri8N2r823lhBdd+YEQX5R5
kDYPEKCvaEeVfOi55UIKx5SdYa6bUHUmk34975MteSCsKMQTb4qTEFMlc5it
rLljmeedKABvYzA2yUmi64ND7tMSHPRFo3wc/syk3s1NnQXRgVlzEe4WVkFY
usl0R0sgcDBfVlrAxAPoSO+Ehc+UEgY+AB8KfNxFxseE16g8NCJ2WMKLcZtL
1XpAIWj6s3Sb8UbqghT4Q76ekH6PkbqpRpqG7XOwKMFb3k7WUliakUJxdHgG
ilZZcKOMpxTT7XluFIUCWtDWVlNtCswOUdEGzKzTzaG2TkowNkTjIeQh3uuX
WdwiVdKg6biuRn2dXzbQJdGLzCGKZNY3U3qPDGbRlkqDSd0YUlqqzNJdxEXJ
VTLaE2K708ARURKgL6WcksohoAp5RSwfTsbTwqHx6PN4yMqlJuAzLW9r6ZHf
GE5NdfKiFRz9LgqOKm+LxkhPFQy6HpQRQVrvBc1oF3j/ErG/DXHgeu2m0b/5
5uG/Fiic/nVG1Ob3CwDO7xevm98nYLcxVLM9a6sDJvayz4jfmWLeG4TxNozj
3SOQd49I3qa05B3vdffwbR53s+XPu8JvvzP+FndcXvzLh+HsW2w0OncG5VS2
XRWBe8TExIqIAKYDsk+xOvJ4KfJQ4JF8bClyguQyFHxeycRNFNB3yqoX5wzo
MYuaPFkKjFHwvnxPwTXw+lWCdDS/RX4H5qL28L/G0rJQgE8q5EW50Lb4xOHz
QGzjG/bvxSOI0mnhCcvGFxXieBubDJy5Pbms3fBcXedbp2eEdZpXH91hRo+F
a22ev67/Ui5AeTh3crM/K6li2CUpkY+jXgQCzbtwVoF4St0DSfVILVhhWBbF
KTmyFZ5cF/ajLshFPJh2rtfkgLhGvokasQkfniU9EOppxclazSY16nyfvl3Z
3J2lQ6RMIXJLZMJ8bolr+btk2g0mDtpdeMO1pI87ndvPbR+n89NGWVbNlRaz
jNwfpuBnlkwZvPAmYjurCRWmjAG8RXh2mP8su7gl25sm03ganDU/qS8vwxSz
M9XZFKB3AjmS28t/NRvy5cMTpZ9i3nlDXy8BYpuNYipTsdeRCmOlQkbL7Fnf
lWLrAso6Sk1JCikgbwlKYxdst9GiQHDF+apqjJKHgz3k0K/ib/NGj7xd2DYZ
GRPqtL2sa26npWA+OUg46Ihn6JIssXn+iunZzpR9bholaLOGGRdB31C5jG76
InrloikGzCbXqVryT1VNtua+XM6fUq2BQgBRqwxbjFW5xfiJe5PBsAmXuUU5
dZZyMVl1eHa8c3Jwtq/YEAYc39ipWCyafxoOuN27te+9UleIl+lcFikdNEIj
qCxpPkhqt0tTZwlJH291C5cRrzIfk0VokINZWShvgUyCpmX2foVSFH0sV/gf
c+i23nPC0fk2dSXCPt5zKW5b2lyxT7MvJJJ0QpB9Ds2ZhbTBAGUpz12XK4wL
eRGmoCLGjLkM4U46yjw4nBgHF4icw/SKROLqbC3Xztt35y9+ooqqGlGPxwTj
oXyzw3KEOIrFVqgl7x7f3scmWwcfI4iDh0wg+irQIq6K9ePxuTh32esqFCXy
opgSG8suWGEkgiItvllM6+ntdb0kwpopJr/H4SU+tHAWyYDo2UNEuKCSmSnb
KLssZ8UFuDqpMxdg7h/OqwvucudU47rffjtoswb6BU1Ms1mbP7RDFOYmXONf
Sgc0s4t5Xtd2cu8uEy2OyJiLnLHtNCKAP/xF8HUyGrOaiHk/0L0SXJWu4Z1l
DbBw1lvkkwSTZLtv8Ty/PHl98vx4XwOqWfLtL538zu7sLMmFB8gC+SnDGGSJ
WAj8fgdukdBT+tId3uLs+FcvXEbMp047+WuZaBLhKNGXKi7calk6neGG/KdD
hq9RAJqc6p7TFg/J+CEXJUV9Gl2NOZcJciM8LW8QZz5bJTmgIip7z8oJLdLl
hAw103PYZh8GU8YCNPkfv893H/2fpzFTr2Y2RKei3wqaExAzxHLGMtUWbdTK
B6d+18J3w2N2OFcQIlKl07dHZ8dHgx/PzpRbP1sY/zZ9zxSstyj+rAy8D9xQ
Dsv/6Sb21esH29LW4bvznwZnv54h8nb+yrfYi896n3EHRRCsT04vcAPey6Tq
oj9LpqSY6W0XFYdoSIrzMoR2lXJ9WzMTk5uvkBXSrptgn1CSFGRk3uZEuCNE
WaK+Ns91FlftGlUcetBo6/Gfj3760R3RQMoM3p0dD966SQeJZrHI6NfDNz8e
D14+Xyk3wBnqi0wen742KzKzlyIgNa/Bqd/Ra3f649gSiml5CX77ItqYGoEo
PyIbU1mzce60JWVW22OpddilfM3GMS4eb7vPcfBGqmFuFyS2LQU4Cyc6Vx/6
snWR4LMQtUfrzrvF3x7qBZHo05fEoGuGIvN6kd9tIlDTHQYmc3K0jwL8h0e6
Guk98UpRZmhhudZFVQvvfSJX2Ivv8RsLhleRpe50SwY+nR7/fPKnY70D2/CH
d6/ISrbf08lRToEdFqidliTmGBpXGm0yTblVpTFg8TDR6fubyeh4KKeIgW+b
q0Po7eLszwIbL3iOhRmP406WnJ3j/cnDM6PYIg5HpMASetXAhrzOM4FWYPTE
Y7yos5nbX768p0BZEHWNy5DfuCW6462Ltydvzo6dEeo0boRk1SIIZ1CIcJtR
pALlvPJHboHKfSHw6E7M+eXyGj0JkUwoQBTqHqW36BAfdCkxN1x7gbU0r704
m4PRAgK47NBwvAMgVm6ihz8/dmfKQWsv0eqkFLu5m0YkpBcCJxHZFLw+UIOe
dW4nn47nxi96KE7Kle/ara25B6pC35ZPOtUdbwvrLFZCrosJk4A8y0UqdrcI
ompORajdCthi/VY1+KDcstIKizJeDG9OCBnoDofnr44P6CTsfo7TUpPbzs/e
vX3bnpVQXUZR29TDSMYEbFh3J73uib3u7l6nhcdvQ8fWgZ11lWcdB6qISfbm
3fNMjR5qAvft8RCQBVehl5UpTzXdbLl+nh4kYha+H5I33gXkWl3r/Ukk44bu
n/iuL+L/ma/2/yTuiK6HJw6Jjks29MQkd3a6Yub/PFdMR/fWv/pKZ0yycMQi
KTh3RM46e2zUAe2deSTc2RpfjJjuAeYn9PjePA/g08j2NKeI5kul1avhim+9
AAPPWI9h9zBtIWeqMlCiYbjc0Q8DNj5Oj48OX72CCuYW3YEFiOgWNxT5vDRp
VqizCqosgWWSJtx4pMoNCpho5fYgQ8h6EncCZa6Q7yMfc8IT+cHVnLZKhTyY
zOabgl3C0IT9ecBALFNPPB5Vrr5Cx0RbCialDDw2IjL3XL+edekSbjZRb4Mp
s5YNM4sIJ4XI8OuKy6VjZsrpEHon7ZWAfEPhk76WQwCiiN/o+CPnpec6a+fv
Tt9Es+UryhvXAkVeppRwl2u9mXgJkf+Axz2ZsoIxpmzH1xj3qM5K8CcrYEUz
t4Ykn5PW/sHeN+Y9MGcyFNC2crKoLznfJyIms44KRHtou6IcRX88gYeze4/r
4Sk5SIniLwQFbgykzi5CT4xPHVfTqrla5dNw2kOmFiRZQuu8vR0vCX6EdEIE
tefeCrXPfMinzLvim1pjykgvr8YRsvHkPNRpKxqf4RU/81mnKR0qosGGwJPY
gcR736kPfVunxJo0nYVKns/r2UyK83iihxD09mYVMz7PSDJRwlPIWgmg+fqq
uqgIuiEOQqmTJUXzqBEW+pyvsqr+HZUtdJZDkynT9/tQtKvwwynoeikig9+5
goxS+Ch6rBjVMw6LXyeV6tYVFrnDnZJu1/8K/pT20dwwtMxDuklwUafa5Z7E
rmSMptiWHIVdTsxkxSG/3zocrJn34bAoZ3Ynj3O8KVJT3OL90hpmVORtSr7O
jl16VcxmJRfLBEELDPNshWEelcOiPLJUrJNQgxTKhOWKl5nP5Vg5Yp2jHshZ
DrghcQT4OIanMcvQd49al6HakhNiWssX7MLqqbHGXxIEeuVYpj3eTilkROav
HIhgahHEVUpSOsFAR0mFUmPs31EHCwW05ZSaEAHah9L7gGK7espV/cJJkTXF
uESO+309EryaoDH0OKU19+PJjq8857Ejh0VEmfMFHRYdi7PTYfF38Dp0+RRW
dKfTp/BPsNXj3fJ/pXn99NFBp8vS29iHTvcX47qRgOwRvwTb1l2uxw0N7I5b
O6xsdemqEX1Bi0a/XGVCrmw6MSRXXbehId11e6c1ffFPNKdXdXKDoVhpWHct
GZZujVQ7XWFW69HbdcJSaqjbvJOxpxwprn1dxo4QjM8FlKIernHCQNRSNi8C
N6wzsrteJrK0M29p5y1Lm/eC4Q67lWh4EcxK/rVeWVdPOMe6y3aqb91TsaAK
pWkc8hNsGerS6J4dIEZZuL6hFjwIJLlSLS31brjLeBqDgg1LXaqqAEi50+l2
gHFuv9kfuP8crABHUKPsbGCs50RfTcSOXwed8+W9DzahoaEEEzbOJJHS9fJ1
a4BBE8Pm9HJmKwmmUQ/uYT2dItEMSNxIB3TWhje66eKOnm5F8iv/HmlJfy3n
dbPtmbr1Vbi+olnGgQKjr/pxErNqfFSZ+zopqut1UVlnWvWH86K5QlR2xchu
qJl7/eqzNPN2oRzSzDk25tloN9bHiQqQPOfC4tHjopS/MxLWoY2rDs7zdqcK
DqbajlEW+aRaGHN9MHsBuHeo6g/SdqPFI4AZWj3u/Z2pnT8QW4BlsEjfm2mI
JK6YvB5p587UBikIcYxAFzYxOjvy1eLBKmMK+VJOqxEHE78HRDMPkAj2tBdZ
a5woktiLi1gS3ROoTsTicM0+ACDH9dKXeoK+5QfIGF5OO6ZxeoDqxNShy0l9
4YaauW5k+jgLLRmbZsXG8DqjOMDW+L8yp6GyA6xqVGSOfGgq4MrWmfFdXbC6
ZmZ1zX+sLf+qep+eIr1VUjpYNk49qhmXnGFUxMHJRc+EiCS1ktzS2MaqmEn5
04YdQZlxvVAZ5DlO78SdhT0gHqm5ZBlci1TLOn1R7Qxw8UX1cjfjGs/NRuqW
ogY3tA3T1+IMZWsh8ijQ15y/64TPl7QL/w5W3T3Nq045+H+JjZW/KW/AbLa/
s5cfFVLAwdtdTWpQqO7+8GvcOdQ7jCeYtKBj4aA74oQYVlywTnzg0ZsGJ28H
TiGzaRswFb7Pd7/t6bXWThhe9OtZ03d9UtOgo9+5W2YwCc5tACp3ivp0hBNU
cgbKj26RcrTCGUGD4cXALc16tg86KvmG8r3cNwR3A20xbKqGbJxqyuIRFIff
Pfn28W+/yWk4v+bSgoocag+UvR+H0KgeLkX7S4w4N9hJ5+LXaA8qrLX2wB7k
yRdsbNaz4QVkB3t5Y7MsenIfT9ZxX9upB+3svfAeMqRf7j3IUNzkNfjBna/R
3acHXECqvcTc8mx1A+4IFgGhlNTLqc/GX0hO8SnXR5Gs4riNO70S5XR5nbv+
srTndx1w6n5Sctm1zG0SnPfs5M3g9PDoGDurt/YyZ0C/+8Fdtmcs9XhexDMS
ekEvlDy9xRJPFdA2YIkfduW1tjJQqTY0ncnP7upj54r/W9L2YHzVVccRzync
j2mObZqFHV+/JnUmzYlp3Uvl+IggIh6jFbPt7+IvVtzDc9N6Er5fl7/bNXBp
dmrHNRvnPXds4XReojnnjrcqbt/R7a6c2vYlKxJqk+0NV+3UE5y1HP211wDg
/qAfJhy+99YJS4UpeOGv6zlXvfDigaOFPDM72ZkG8GyGnSTXrQn2UaiQwmWS
hTcXTlzKEsDrMyOaD9mLwgymmxy8+E6Tny2YXZQC6iL6tPIL74i0LBY8NGLW
aNmW+OWkrpYKUk84wUjK0Iq4xZWt95komFWTAfVNUXc6ysVOmbZbbGxPE7e9
fRhFXWacT6pOKrGJsiQHSk9yvKTQJlasxsOfZUk1SIfMDIOINOUGVkeOLdDL
KUqctPuYbSlNpkDhWyGG7WBwSoIwekQ2e8DyiwtGTViyM9JcjNrQkUWDpkKI
aX5NRoUXX1uatAA9yfVQPzs9U5gry1kNauZIU1J26Mc7j3d2iQItYocOSVxh
eRKq/+PM02gxiTGp11P2HmTMFcb8rii/4JrfSnKDfdKqT2BGLoqzjXpZUjwj
4FtlWdFV5Pt5LQngULZlgDJvMRDsZDmTJUOY7y4BwegV2auZemHijPbg1fOz
UE6KGeUku1YpJ7+ISFzAhee6Jpgbf5Ns6z4SQBjtLLgpj3gN0ATutS+7A+oi
XTUiYOblBPQ1ySKyy0b4g2zGM+80ItIeiwBDXfFlwxyFHRKUkUC+aJDu7gNI
47Y6c5Ad0A6k4Ggf9DS+/J7fllsccSUjiI3FaPMD85yIum03jAtecoLgEM4Z
I8+Gtzl7uE2ZD6rcjZwjz/kENxQwEgYNJo5Ed2/bu4y1bXK1m9vp8GpeT7nw
ipIvieOVvY6ed6EK2CbKVB5qOoR7PdSCf/nwJDw6SD9BVGsiODdvLBbqGP11
Odd6TUTMsNMxIVAcZUbCHXkznC8vdrrBX1eQDzLYHr9GReDtuSikMZ6rgEIo
V+VoOYFUWZSXc8LvNTdlOVP4G1D8NAXOXGsWklcSD67r0ps6yBkY5T7cfsFV
Y2REGy8mc2Gh0NFzexohEulPPHYA29yUPsv+lh0x+QrdXDyJkBKRAwtOENod
TpSqf18z1AvWI9xoOZnIcsGtwW7wPB0rfDfQ85yRf1ML81cDv3LJIRyIJs8Y
cmAQbkF5zLTIWC/87LHg5MppzA9eMiGvpSCI3olS4PnheSgMXsKjvpyL6zw6
nvjhUV04diQb9SmOYXknJbSzmICsDqsB9I3sjVZFwclNwh0Fg0PTTfUoEL6z
nJMYo3MLtLxZ0bzno4tYMke3fgHx9qbso/LyNmeCSr+BA4Phsyw4+DqAXd7J
4FU8WkJO+I6IdjNrZhNfOm1FJsV7t2kY+Jaqu57zzcBsaVn3Kel66sThdfGx
unbW6NawGLi/dXirv5bbTuBe1ouKiXan+ZFb5efHg7PjszOnWbcrRXy78zit
FeH5HTKhIvXSTuafI3uUP1EKAEj60+uc/AAzRXjFqQrtF5Z96zn/aeFmPqB3
zGzQsrfJSRx4RZ9FmGXmVQMnYqyYyBEbndT1VI9FbvmrJlbJxsVQCdtkbVO4
oJjDCLAbAmWj5Mi1b06aaBNjEa3ql2/ZdVwPIQORTE0xCBbhxZoalNw3m0BP
kwaGt3ixW4ODTrHNYIPddDc0QVQTmOu9XFUzrYaA10uU3zYzSy/hETRqIPOy
1BxYl4OVsySVeV7KBLORq+LuiMSdF1KedJNDSz5qEHNKKztTQMiHbUyBMveN
X4rJ7nX6kBYuo/P2PG60APGuqITeK+3m0WrOMvvI8zfKz12qdEtdzri1WBnZ
XnHeI4JlKylnrnNLwd/tZFl04m38aqQWS7yq4eMQqcydYg/p6sHt67fhxW0w
lshn27L+haWdWekjOwlxkG7lPkKFeK555gbF3LKZ6JaaUETjLTMgbEvF31I/
3fl4SWTtOZhJqfvEwSQdMAN3dELblGIcAwpynJyeHz9Px7Ed5xAMpKkZE/MD
wGjwOE5vRa2YZANYBaaQNnfQH8TOcQ3KCcRDFkEtfVUHH2Q5/PXVyeFzt//P
zruWBdFaTZYjPQkQvVUPhT8x+AWyC1P4my01H8ZbaIWUQClGOhH4SC3XGav4
dTgKUMDPDaeQZCcD08rGVp5SLxthlFHQr1I0f+l3VKbyfbpgtk+dNVhgfTWv
R8V1calhxIi/oGsQAaQ49JpeUDgaqFQa2FhcTDTi4rrYx2UNAcrnZYZIdxBq
LWoBREvnFUXmOsE0WTiSi4hKJtl45jBGWDxrrRNPBSUrQrkdltM5HcKIDToz
MJBkGI1G99D2KlyTbOmGKF9XKmIIppFKs4QeS6AOt/58FO2cA+gXCtongnfS
rqYIguJ8AYpBKhfZ8DslIUsUmYdBn0F81+V8ARjCWIJ4u39gMUUvClYRWjAE
eAEdxFACgYG7rYgFFPowdrNKmgAIbbj/9Fz/AFtiyRc2wtokU4LuEnNBEtgz
f1FEnqPfcmUsc71tFHZ5pqExobDurRwH2sQ0bu7UFd2eVBFPvIdRkmoZRGso
bz2elB/BbI83E4ed5oCNnBQYlU3W6je/cihZMxVKPDK51UzUPmbxXEXrDCZ6
V8drZlmWcf+YWaW5yXd3vtvZxVP3dvbgS0u150MZ7TWd2KgDdKEMYSZDCE5I
WltYUfwUKXczKj9Uw7IXeKPcUz9UGH84xbv2GJY7diZPA23ywGhicCL8vMzm
5Enps65jCE8mgWa54YduT1HDTmBrO8ESlPfxDXt7LH47z0ElZVvjrof0BG7N
wgK9LJM6ZNJet4EmbCbGwS4t+o1ChIDwQ7U4XMzx6PXt07dHuTLqsBiyg0Hd
vCwXCzVsdR+IgRnUXyHs2eqgDCItukUZ5Fny9b2Ek0YLd89BrYkCAc7WPDx6
pc9sotnqaQWK2zyU1kpSKhmUkwUTXAjVOcmt4y3CyeFHMAsOcrDvlyPxYgRm
UCCdejbNLAyY+1oNVtN57gRXMkdaqe4cWQ6++jy0BV56cQECubLjBX6JSwQT
vWND1Cv0GIPLiGDLcAejIoRbGPOGIaEIKBgmISh9gYKJHkFXW01ZzhkyWE6c
hYiZlrkAaDnrtl56eVNdV8Q5wy59mcoP+7z1qLD7ZcQIm5ydlkSmuZ26J7ge
50viFIP7o/KoNVFfhbngprgN+uHK7ZbsMjlFYtSm2V9gE1HThOc09K/R7mY1
ioGE6oiBA0J+QavEj3YJG62IkkTc50wXWOWXtq3JcrlUUUx1dqb2AZamwxdz
LJpYiEY7kQlflJhUeuRHOnMjDXf3Q3Z+hzv9Vk/uwJTAQR6u7WXvp7wnsAX8
pRVjSCmkhX6ZrKngw0ffsjDiodBJeKFu2b0KYj9lHd7tWNWUUA3CfYJPn4pS
SLq0Xw30lJ6ErBjYqqXw2q/ELDckt7jKVWZku3VFwu9L21PxmSKHsa7LBdum
flmjrMVNMV1owQQuOpEmruOQZD4nfrOqUbEYybalUt/GsmcLRH2opCBxDp74
7UjEMX0vkaCgiUT88KYL44YSBnZiGivK2lq4wEsLT5JH73vF7jBtMFrBcyeI
XmGzeMoYMARQ5UyL2OeCEFpklaSfsCEgqbYej9kRJBRAzYJc6GS0o3yXa2RR
8CDFbZZWNLI8J7HpC7tnnuRBVwsxzVYc5qEIGpd0GZkRhk/KvYAfQuRzzysf
leTw9DTH+HA5Cqgp+enx//sOzhuRffBNu3019YVQhSLeHDS0ypKz+SGPKaU2
3OpBBKeVX5DRSdnkUn8rHCG6rwxJotbxyhKpqyuAmBx8tkWQ6I3HzTF5gVRS
GbFe6smupSMg3lLlhtJnpTIMLdIGAyEp6mYvUNYP1gCxdI05IU3okF/6kNMZ
pRwIkEv17D6KEPSRjSAIRM+ivIQXgqmjqojJXoKG4ic0PNHQqxhMEV6QQ2Y0
nvByFLIqJAoNeCFHynSP1XPOclc53e/nUY2bC7dPxpNqhnHVm67LkbMRKQu0
XFDML2Pzf0fHQRUst0hvZ4v6cl7MnCyks/CK7SGV3GQzwSsmE2+q3kRF2Q5o
cooRWRuFOI5iTSmTnrDRMpTcHqQMeQc3DyTtDeKDLIT3mXu8kPRgOcEfYNR8
BPGBmgzep8F3BZrZlkU6p4AceqfRoQVbaD374jgZ5d3pDSEWsmKxKIbvoU48
97kdavnT5HXdzRJFb20yEW6i3/cpWtsnjkWDC19MGjDEW1ZHcv5ME97IoDaL
8vQgedoDcnmx8yQzHgZlUmPgT0/d5STDOKxbThu1Saw3LLDchKG2ogF1o41A
yXR2tOxNZ/c5lnJFHCaYF6J29dWaKQrcV8ZPbl+LbzJviJO+snJ0ySBmKvX1
yCLmu9QN7SMcTTVaUmLycj6mxShhSxKhmbVl3VSCUIXT8t4hB0D5MbkrYF/x
BMY9PUvi5jla7fYFHw0+jFr2kZsNMYdJ83eIJ+GxUMYfcJzcKa51o2NWoODj
sE33LEPEo7ZkDucAfZiSR02YZwX7hfboQrcsfE1t0cFfTMqPdPMLHIofdhOT
FKOjShVdqxc6cTN1+l3FOWNKwGOlhxy9xJvhpMOOGWHDZkoU2a9kqK+XCxrU
eH4w4hwkTwbdncRLqGz1Qaje1rUGgSRwo5mMDxxGMkJO3MrYHFEyT9/cGGQ4
l82kkwyULh77EU2OtaYlG0+JUfls56Xqq6lRn/yqleoYslgAZPArzOy8r3iD
e0F5wEWgQ0KapnvOywmnReWxZM2Li1r8l0SVKLYm1TfAOqQMWR59X5YMJ4Qp
PEDFaqPV4A8X13dii1V4xNQyipOPxk5Q3ylV/WsiqSVVC7pDVOjASHrSNXFk
EoikH94ERRV6evBxP9Ofma1FfKScGIl1rTwsEs5NJZgl+GWn7USPey69rV7S
kllCk7UkKgoOwVdal/kVknqPkfKbZb/YlN+uwPCWIApHCbELYw5p85kc4rLx
XIcS2aIUTUV9aBlMT8tjtF85XlObGqAdyoSj4uLTEZV+UcKLOKDmTvFbW4o6
Up8FucPhTRNx9yE0XdTKp8P1tUGNRDpzFF7xRSosNSpej16KYgJVzQjRMIoN
V/8cj6lcBxt22r6HAHvLqJeZUFgIyEVB1vKjjhyMgAu3Iq2fOcXRue1qycew
DAxjV1jljSo8rPhaWiEuRTcOKg00Z9KsmnrCgTGq0EuYO115x9CAKBDuNORQ
0u951QwnNWrkQBkmc8GGm+KicfS+VM1l99Gjp0+x6loxQIptHp28OXt5dn78
5pwvfbLNCKjbqJqgG6mllMxTrIK4fKnKLa25H+qF1tVtBHbj2bFUGweGAzc0
CnE2Zi9UA05gc3tL+bZzs8JV3xGObE9C7899Oik9MXoQbNBURZIYH1fEU0A2
n8ba7Zu7C5BYWLrl3u9nAh/xOEHeIpK/h5+c/CMcKT/Tg0CIvFgBzYrIRhnL
CQ0DDE6RRw2C8LQthay7NKWGMt6qWnimY5MJexnfVMIGwuSxDOdDwU3TyK+l
YNUzAhoNxO5OhaqSN9R1W7y5grA1djXPmegQwQzLEucy7YXl1E+wrgiTYSkG
7ZxwV1wvyy502SXneqZQRRiCYWkci61IUtazLFLlEdd5uvfk6W+/6dLUAIxY
CBQ6JmkTla4wgfYulwp9by63cfk4qxhnRtQhthCWgSwiiroQqjr2f5G/8CF5
AO2+MWaw3whCPQa9KnIHZF5kPYP65kM77vdxJW1Cz5haRKj6Yvw5ntEi9RtM
LUc4T8S5x+e0rSkhVtQLvqDPVBoNhXzPpJaY0GXLQvTz6yOUslRh2ArAtuIK
ZDRPTjN9+ebFidE97JJqqgkDUn32rhA7iHeclpRRb3GW+Oe6/pFnTWrTeqPV
DwAv01ukhJhGNJ9x1y04rge/cqkUiRfP+93JinEHfcQiGSJBCjhlMylqnILI
av1kuI1OuR+M9itt51Hb7LsMSy7yTKpwcUYgIBIrEAs2WRpQWFP+Jo1vOtG7
nEzsqPGyY+KRiHUkhIzhutgK9zhzTMsts/r3wc32ZZk5NZFWCmkHOaMO/IRs
qxtwcTUvyzSanGl8pbPLCvz3oAkZceLaSOdKlCpVSDsivsHRXpnaXxJWIJda
xz1+xNl13tSZXbtClLOcsmNXjWtaEYCHSOj7okRcgtAnGbO+o8MVu9cFmOc0
YF4lR/xWmX2SIgIex5OxHa18Ej1kRxAAkwhiK+VP5Mnl6Ei1ENYG/7JRvGxd
1LdNJNQmB3leQ9OqYxob7gZevfvOXufcq2OTmsqUgi5onNGy5PqGBQt+a58J
2ijrWre5X7fCY0Iw4XksTAXAkO3vPDWoBTfke76iih6ejY/4gAR1VMPtwzit
LGo0TqAO/ga/fk3uBFmupLr6M9pyceqOBcKuId9XDLbiHYy8fqoKLclHlGAR
xzLUgcGt0vKYTqr3tBwR5KHSDBzTo1aIv6ABRinukgAHFEqEOB9gjdlhwx42
10zPOMUZuLd6ZNghT6Mn56cbCORIsG7DcA+zTTw3rZeCnWLdDfhP9U2JdTde
Ajtk+qMcDxcmzN3L7PDwToI5If3pKXmKE3KA4y049hz378PjrGb2Q+5kcdvo
5j9nkRY2/y94g6g2WO9uEReJqwypaAVUYOxmEXfrRKOu6Hjjk29ID7MW6Mm1
pI5RrFVCbZjDbKc1+YGDxCscANXB8i4kb473cxi7THx6Ivv9Wco+Cync46MD
1Tj/Uzm/IPqn/Odv8h9Lt63dGZsqGvlhEBT52zkFBK4RLGGX0pZ7bv/w7UsR
s/u7e07MZlpiVivVapteHepxUK59ovHCyrCp148j3ZuMowYUUpkLHN/LwzeH
CYbPqeeV0x76w+hbp6ob1JWvWC5okwfA00nhx3PsBa4FML998MzGcbFvp8vr
i3IeAltSnebWs4VeQMBWlwIkp06KrAvxCBbPTRQmzk3zB+IJIH6R4z8PiJ7y
bPDzXibhgsXFRNNRsXe5VE7+yb5E/oYWPv59yn8Gvv1T7sZAv3KywP33dTV1
K+hnPmib/JNrpO8MUWqqqwPu+0cfv8H9rqnzH57vPqJL8cVunCf/CbneaU81
ydt29BCjhajKg2SmGK+bTBdDUDjcCHF7AmQGT13mp04PmNag05DPw92K+HDC
NTs9OtwfnP/69njw+vDsT4MXL/aEg27w+iXDZtZccfhnOz/hCWF6uvusM9U1
TW6WWpOEwfXztL7Ln/K9R3kyX6+i2fp0RyOHf6ZGdu9qRCfbvLbO9Yq3TqY9
nis/5yxuvnqh4E9yLOtQ5Ht2wWcJnQ0eoqvhK+9Ll90KO6TIH+/1KaaKmK9T
NUj+eUAtZy3WCku/gHdUTGkf5UX8WJkdY2h+WAx9/rrPLYXlALXkU/52OZ+R
o+YT8aMq6/hbRg1iemSqH3185P716X9evHCfzwjUANj7ORiKP+Uvj89fENdq
5Zb4Fq0ixSdvc0Pu1l1u4QVaOP5IntFrjk3Lx4U2IHfs8jO/4zt+dq/tFuMW
OYBdo24+5s3CDfl1KX/C2h7pzU/4ZnfvMb2oU0GdoH/IOE8n0t1EfKLszqjE
l8dUSxsv+MmnjK4eYTzCiovHVtdc11rg3cWsLrTi0uHbisZ3G/kLx3ZGmzDt
pIldzOuC3RhA7Ep9CvH1xjqi0Nkh9wGnoLiZr8mUJV8gYfxYI0RMKTNu6TG0
J5/CZh84SRXfRknvdSX2qd/9tI+CPObFJsdUWDlucUbrYitaNZ3jwixpI/L/
1TNsYNJLaFcVAnE5nLKKMS0X/efzYgywX/KWhV/9iYnBg6jOdx7GTFJARm4Z
VRMZpfg18wVDX9J35WWe+beN1/RWtOQ7X1e0stlSU+nijtZwr3ncAaFTdrKX
aecsZUJxXRPotMvbSpUaKSxJxHZRHlD0zFi8dSw6pTkkYMVVu79Fu8d20MIu
z+wud2PXtaW3on3fOYJqyUiGh2fI/ICpIOrGYS1KbubkAzS96OVM7gVcJMng
YoZgcGTpqAocRseeRq6WoDEbT2QBVldUPbweZwa7Q+5hjrc4xXVSNeLlYJ+Y
3aH8Ek2v9WhZwVxoUCy1mz6zwu9+Q+geYCrJnuATySuZOCmkMGGxCMMkw++G
F1SZ5LV8Aw3SyQAatP7b4zdOj9qKUUQAwl5prJkQwZwY4j7sfkMHohyFEkki
s4FgdFMiEJxM+gQInQpzFBxfNDJDZV6ND0GUHY13Lvkr6vmlsxoYTECBVE0C
m2cSfnnx4ue9wfEbyZM7PHp9PHhxcvLD4el2i8uFHriccvnAjDM5O8aNjLkp
AY4u3XMBd7Hp1eymovoeDV8pJq67MHqfHVFUvJLAACBE/xsNsqhXoLkmD2Bk
MJStU0dyhcFCqYkQwm9K+lExYSJfKW2ISNkN5IIGW9pSr5bFvdPyKIsGDIAH
DbAM7+uXFIEk5CFNGciX8MD+GXlPXqMWkZbpjjWa2PzoOnWh3G5ugahGKipP
Zy9zY290GCJ7kWobL6TXJ//r+JxS3H89Oz9+fXj+8kha2fs9rb0hVo3Q4qf8
8ee0duosrMM3z49PX5+4/wZDKt/foLVEG4qMrQ2U586pSzR08ti8mBSX/V+c
xLKqqtjcY/rtxv3WtNT5sPDIG+4EzHUxyyFjwJczHn/YG9DtzX5mEHHma7Ck
4vOoka9sfZ/4F1wMV+pNwV/Zaw2ak/z07vEcLS6CikByGKlWbu+CjXfk86+N
+/KFJ6wUZMMQbgukEoy1kyh2Tgbt/uNvfvutxyeZiR1X8F5zeV+2TKAJebb4
psyi4aLnsvOO7JagMTUSORmVwmUt/WQntFTqpZO+phLc5UKZ7p1Q74jzSr0l
okheTlnc86A0ch6QzmLplNnjoenwBOAkxZaTV7jCSU5kn/nzoOOqUzxendag
SxaR9/TTXccfSZl00hEFGk9LCczzgtxyj9rmcd//5tG37gActZ/bmU8IGc+6
lGsjs7eRvuYk+WjU5bLFFPq8leviPUhJKH/GDWwWwOaFFIApP4JcRZKF8cNt
f1H3yTKoOEPt2ut0uInAfGPm0EgMZYlaXLmjOefucjiUePbkOW4EOAxNWg4n
9xVEGFz2qXUYEjQ79xolZgzEY7N/+X8eXlTTh81VdjkvZ/lX/yf/+uHDh1/l
/+NrJ6XIbP6q+Tf+Lv+3f/sq/e5/uO8idkAdHZFf8nGnuWJHARF3Nz1JIxPM
mr445yvwFEncCxCAUW6bEXataCCbjltIT95ZfFw8MN5NMx/zpTg8+eFF0xr4
vihdSD0O49Vc5aE3+b/m+pz8jxjwAVIR9gbkJd352DU02q63d+WjeN/TEXOi
jKnvZVbDaDHjDY2gUzkg+i8mBeFwoABCcwJBChMozCZLVMYgJYeoZXxVM9fu
AzeTD9xbJgupMcHZ1m0QsrqpJIoATyt55wPT7pt6YSBd9ADSOFUgdO1HS9QD
t6AW9dZEXSLJ25cB5jmHuPjmWx/hwoKAwuq0fMIfuTsyOVdZ7RypIbjf47hI
0Sifas8N5/RyceX+KBfDHeRxke7s/jcD88UlO+fFHNK7l247P94bcNCePny7
P6AB+AUc0f7liym8zLOZzzTg9Bh6R8NyzKGTD8WcEHcZhqyZVu6uBUdnyeTE
Aq7mNg+moLC/VgehB0J8x0cEqvgslRw3oTpgUpXf3Jpl3bccff9gXEyakhYj
z2UNRfB4VLnNeuCWFdCKshAxkxJG1qoVNRvbgv3NQrT96f4e9ta5vYnUfLBB
XLUig0L4olxykSGcyQLjSpZjIBfZfUA7aMZeOhvqx8N7so3cxp5R2cCqvCF1
X7wfpSgHmlsvmFcnOZhZvx5JRlMdjmypl0Ap+oxbY8yahEnIIhnRWUELkELk
y8ZkpoIwccYRkn8hpK9bj/kWKPnd5c/PGCFaDgcjN9j5Fuv5291TdWKMNPDF
vTRG7kNgQPOmXs6RrveK/RTMK/fj21f9xzuPnGnbBxGL+/0MF+Lnf71aLGbN
wcOHl277LC923KH3cFJfXt3OH6K/f6QkuGjacNu/48d/B+Z36unQFfzu5moB
Ozw/EiTJkBUW7N2ii8ePhmVb8DHR9/Q1DoMIy4x6MB5M7VbFv8swcpfESOMy
MZPqYh5yY/jOa9CSCK04eQAE1Vd+LOdUWS4A7w18WlP/wbdHt+h7+3oerFOX
rL/3Iy9LQl6ObgSpekTtFZclkhyMVtwzKU36t5JJsV7N3x2dvH798hy8VK5l
qo5x20fCD8qhzNwxUhqo3zZTzeq0xsu4AGNhWOs6JKMo04TfFM6GfFxcV4Ch
bJ06pah/Vk/qa5QS+rkgX6T7e+S26uv6L6VbFhIvJT9r+G5Kksp/v41EBNrn
pj7Dfn50SCi26roiydiTA0FVfkK1OZm06Pt0ooxZDekz1TEGpWF/UZUXTqS+
J+hSzApsH8WCxDVgh4iwJ/UiYlqN2Dl8NLSfvy3n3J5mfcVYdzXfMy373pkH
5+yhqlkxSf4pzmAfIZtYeR999ppg3bVIrJKRPQxcZO7xD3NeOOfHz1tP4/g3
YKh9ShdwF4/pRYlfE6yD5E5ybQDITSj2nEooLm795AluOzBmbCX0ZXk3fRlR
ra/hHxNarRYfmSzchAoHr/X110EoE5yRzL3bcmFf9+uvQzlnnBDG+ohTew1i
NnBfCv61VTEDDjJUdnNT9Rc5EskV7jbYfCp5xe7BgqqsXQsj75CUAWRGUtSr
XSST5Mc6Lv7UDu1vRaWZaIQ3qOqybuzizfH114S8MhhYg9yMs7PAuezE9Xg8
+jAQKMVAoRTf5zjv/p0lW4BwK9rV3WjxCFhDikIliQZUBGz2CApLJ6J66K6J
qYfycegUozBWMR9e9f9jyVjYdOYThSWcGzZ5yFD1EKfBzAbIL1zvrq6L+XuR
Ectr96FidOikvkGF75eBnAC3Qm+DFfoMS4Pcm8PlpOAyZN5fwZOQEJLZJWP4
kNz/T7iUUUCY97l4MXKQ4PwhfnWU0XIKinIbxvzPOKjcYAwXGD7WE/7n5bW7
mBQHGueiUQ8Hax9OYE/yvUd734oS1IoO0LD9oKPUdCs/LE1Ie/qDV5eSqaGR
x3HlRzwcWFm1aMrJGGWJmHU7TZOL23KzwX4WAbwwQFET8lOVVHMJwlQ3S4Lu
e1UiGxMrKyX3C3dpxZ42t+fd9F6rR7dHWi0Bdpyxho/RYXrnUQoXUOdxSs8i
9gTNq8+JbtQd1vv5n37o5bvf4n++5U973/BHam03f/0DOe8WBCfN3DKk6bos
6+tSisHt/2EPF/7/jV3bchtHsnyfr5hwhMKkBMiWrJV9xCeKkmxtmBJD4tnd
87KKATEkx8TNM4BI/P2pzKzq6QFB2Q/rWIHAXLqrquuSlfXLk+ee5INxA6OV
OZrImth3zDsExKkuT18UbJwMb3teXXz8HEHbp+XFzbb8vVls7spfnj77sXyD
bos2RZWtLl8V7+qpnW/li5/KhUgf9RteGRhDaPPLF4dBPaqzh3UiR9wiKyaC
ETFGdsRj2kN1uxYMCQUTqt0nIVFM1MjGvPGkbaZXdbR8q9UWmT00U8PlBvO1
Vx4sKLmcae6de4cEspydmMYtxsjnXOCKXJaF2SYOEEzN5H6xI+Q9Zpn/67wB
yNZem0lFbTHu7x30ZrmLIJCBq3kNYhEeYwAAfo6hC5cNzziNwHz8+G3vyNbT
oO2OXzcdsYzosF57eQXvMsf4OgjY08ePzbYcr0tIFlSJIpaIqUeZw/z2RJ4k
jg2TuxeP8PXnzx4V0TAbq0h4Ho/H5a764KRl9km3kXVOq8xVnZjqaWbcdDlH
11Z9hGeHnA+ULdclu0imSWr2JTnWXWMfAYj65B8vHj3kyRbl7pdfPn/0VIt7
zrwKngMwU74mnxK5zUQSDUPvkmHrCWcCRmKMp3I7wZwCLnHw0s8Ysi/ip6V0
Fsf4cpa6xqHMJW/F0aJ6/9HucnrTMw4gW9B5N+pv2MDP+flZCfV+8uzZo8Oh
qcJPnv34k/788sdHUMaPSnX99D/2KXyP/OsWuVaz9XakBuGf8TtRETKFYq+A
/N8sVuHId7ZvxEUPapevmjJjsEMmTY1XNd18maZ1vv6fhqMNUOvlWIDE9seF
5IWzpvFLyCvgaQO5COk1YX/ynBv1kDyVQ+GAuFPaXz5S+s8UK4S9gcNx1Vam
e+NnYss+QN23fzi7gj/xKD+pgw1jPWhYhsW7V7CFv9lYvCGfLcagcWhgxs6v
PveeylMNO5V836J0c7Uk19UY1HWTWa+zdriSupyLbuJmp8UoW4WxYsidQRNT
hORXNCLlQ6o43L4wSxbvbMVDNFQ8Myp8XUm7Br0NNumq5biF5R71Lqeb6IO0
K3w8uPnv80MpK2u+9HRJgntT3tn/7Jdtc4f9+/XdwfP//sJZGHve4ftO5OB2
uwgNqtkVaN2v55h4PAt4y7w8kPwWPv3pcNDwdFMeOJcJ/+Rc/sQ5s5N3RSAJ
KwtcAiDo2+wk9715wHuIHhNv3+Hrdt60ReoihtrOj2w7YDu2O3JUkHi2EYRO
udzSAvmuKAu4CqzhcBte3Eke0z5L1zzoG3735zv81eQscscSMYKcGo3bpa2R
GivCZXx1AWbhiKBvrzW3EUMUEFy09p8jFpKuswmMhQYE+vjuHvrkPCA4tnnc
JaMDELouGF7SOMOrUN7P2Z4A8V+ng9zuEwo1rWe22QF2kR91yvl7Ze8exblm
L/38R1ndpdMrkEKBK0K0C61HWlbzzC/gCF+Rro+Lq+BI5y6kkoUJTRtotTK1
No1q76NWPUrmCEUJk3jc3J3aQeM4xVhyob1LsZ33lYaIszBzqMY4BIL0UOjk
KOSODFOMl7j2F/WrQRLQc3KlHxFIEl7a4k9DKyzo8/EfdE/kwhztsfG5HXQh
Lpzw+MJWcIX5S4m90VfrPo1olj4rUvpMpOZ913AYcQR0Cd6WLAQ69ohdvGOj
1f0lFGBbaJ4tz0No9tUSobO61MN/fJXM047uJ7qir+QkVqWkmo09/y6V5FtI
WIan3tos0xUZj91QDdX0ZTkj4wwoEr7bMedNx5P2u1g6h3KiojaBgx74sUWQ
pxUX18tGxa/QRWm9WOBo6nik7gP2Fbj3Zh7ljgGYryiOZfj6+E6DamSaVxxG
jt414qxHdsxebVpXHj5YqDmRpgj+9XrVVztrJX2o/lBIKTPMzfPhWet5IyH/
5PAsE+tP9R/y1o9nrLuvSfOm6k/rfxtX2d8eqP7QAdXVbdWGEfGtHAS1EnUw
7kCGrVPmy61PYZG9LA9C/DJ1jaroY49IGxold1vB+rIRKIO4iaxEw0MrJZTI
mMUILnsP72K8FXc5W0pAgkZcnN4649BN7eTeHe8ATu/gI2eUn7lPCzLeJFa6
2+utwB4SV3uIfoavLzx69lFiKtPzF4MSUxAR+uomJioKEobErJxfSGyU0SrJ
8MkujVkkdfSFeV+dE/R5/zWDPs+onGVoz7Pek/hNpfH3XKjy46r603yYM1m3
h+XBQk6YonW/10pNoizz7OUYTE5hK8Zpia91K6cq1uhat3OoW/zg8yH0CG5g
R/1E8SiKAK44bdAA2SU7qRyV2XwE2nGc31ZdvxfepCh2qPfrvoRdlRmlwL2C
THngr3GYYBvIhEUpGOsc/ZtmOq5BWNJW81oBvoMM6fO5fNYiImL0LMPd1pvE
M8jGN9L4iG9CTEtYcF87J9aqnVL/P28+mQazbyn6IlM2TzPc8syf81eAF5bU
p9kyZjQoeau2eIe2QpLibrbCC2RGwYG05XCuqdcyypOqbbeK+xI1EZTVYaKO
O3PyIgobMizcbBa5ms7r2Z4SRdUc6bhxjnCJBuoq0pj0rfF7dRgllqSnkQ0M
OY/qcSgrjh0C6J+Z33Pwz429OYzUod1iTXzuVFJ63VyuHZ7JFeKaM4SsYzqY
9qZLlnxQjUrdvCOVhSLs8W7+4WZTT+2kUIm3/HxbrdzW0mChJHx6fHZG5OHb
k+Pff3+gCLygH2RWAxxbGFSEHCz9FoJWr7cd/+GsftNpK/b6V/v69hLgjboy
r0SVQNGNizqsGHz6cenAuO32xvcICA5vdlIzC6aAD3CuLpiGK4Wi4pqtU47O
Hyn4t7qbYI3dM++uWi/nPp0M0e1K2Khmlqhq+id4WvyFwYAsDVc+m/aYJ9Cz
myq9HRYjFo6Nuc7kpzS6bFfmZo1UVWWcn4hVUUvOOFEzjuCg7fQ2J77heGwe
VbOIsucOFZiGm12bTQoyywjL7CmlzZrIFLzc/n5BR3hvf/jrvEveKx5w9kFA
ncDQj8vTAQfK4FIO0rmupqpwiSkIBzOzDWBVjwfi4dFbx6TU2MFJU6lv+BYx
pLMPLpx1GCsEpDXbM2g43HtZgf9ZJlSRupuh1O8RIZp583cK+kjqpvyu7yFY
z7qUNXSFFpcJcdnlazOaY/WQkUfslLQZLfEpUPT9+nzsDF1Jp2nmVGrGBYVH
Fz/WcuK6OknUukFF64WIUd50fnHj1B2tV4C7FZ11D8+ial0EZ81fHq3HkRzi
tRFs+Xg4SlA1vCXksJ00LHf46EREmNuRY9/Mm6P2Rmuhsice17ElwFx3dnVQ
4BKxFg0GVbuXEd7YFYp1L3zxN7Quo9au36YwvDyYc0JnZcYh+BiZ8Gu6P/hN
QH+JkFCjPRW2twMW+XcVsWwUMe0M9th5rTVIhI+CNMK867VodV1F1qTxwMJf
wL3EaTO9xzBzby+9kiVCitAk6b+XPlEAiOCCFbaRRiC24YsgVjU948JmnprJ
oB1f82CPH3qw7NDMqvcZMHw4ZAx1BuZW0vzYVI9UMts/59Sumae9nXsSOnXq
3Mbl+e1yfL4EFO+sultyHWm6yNX3gDbRvR5wQbP3HDR0Xa9jyZ+dofp8W+O/
xUGQKiNOHq9x40O/s8a5xWmlo5XPPVI5AAmAysJcFWHVPbYDCunKoL8m3Jc5
dvtT7tAWD59PDIyrxXqMsXakxurU6qPVO0BkzwgLKQU9LVL1HY31TP5lmosd
iZ6RidQfmiD552bZbuYh5XiLQz6R/XqzEBVmFZM6ekVSemxoJnuTK3iTzKVb
m5760gztHhM/BeFsGkc1WL0JIC7et1PJ0c62lWmhZOM7n4yEFVPK3l73axMz
QNM7KiOhyTZ5w1igZsywKmlnli8tlenKwc2T+eHw8eYbFpfKfnKS9w34aXkF
N2z4gCT6BcOa2b8FFDvX2PwxdZI5UdrNeHk51gOA4cA1vh5U6ZiwBEzVdgrd
J5gouyqBVtds2kVKs0odYPE3iwau6gzECVeMtZ/uswJt3WP+l/LbzR/BGdYE
fLXpdrptUz/WvSlcHiUF+HKGtgWznEW/t4wQF9xDqRX8ZDd3gzYSZazGDCJ6
lfeOkh3+MrHVNNQgkey56TnemIvHrNcJqd2x2W/nK4ymoi/50BHuMXObKJKa
mKtWlQIwvc4QTsJbfXn/7svb07Pz/yvQmlIeaEKeIFXTAfjJcVEn5+//hXGg
8btD5a2qeGYuklPS886J6bPKeD7HNd+HIl4EHzDkGFW2dT3EEvaWJA19qq7U
6ylPpPBQZ0dKsN5wKNWND/FaWASJwQmvBGAqfOZBpV4NOWO2ZCAPyh0uXimV
AyY6bJWu01i9iWaOw2Zyptwlx31epOplm8Z+9qiX2XZU9HTh3olr1rker0Ci
xYi4JeF95F0gB0OoGdgtg9WoX3M9oK+VXtnXIAPDR/eSVvN7h4XSITiyp2cb
eyZK+9wxW0RQtfHAT/P0Zk0VaSTZT6YS7CSknpNgqufyLNxqfzo/LzuOkolc
0q+z5cRswokmnrblvzHInv8av6bzTR4bqe+JGj2/qRbbzIGgVOJ6F7w6Cotm
pOeroLG5uPpyVS++NNO/DNo+LMsrPaguRQLvLjt65/n8CRR2GagxKGOf5wA5
rJftbhA02r475RWCyDhiaYOXLkw+f4dZjtD8/l2U5jaZoXu85JxJHHTxBcYq
8wYtywIiAXTVwCfXP8kVu1Ep0Gm77OZADpshl//vvfZAMTRdNZ80V8w7TjZt
t95G2T5KBwxl+hmSeprT5q7pJ4QpkWM/ZVdZ6mHqu6FTKzTXIb3qAWa5YaBs
g8a05s9NfZj4vpOX33/f+cHj7cL6qNFW/r8tvHlw9mZrDu1UbeobDmnF7LsO
bNt7cwQWTPhSLqV7U29/82P9PvmmO+0mfFoOkz8XyCJbXkIKGTi5VGlY5F7o
sKuSmufGAh7+XbU5uW5NlJer6/K3eja7ba78nK6nWVrreXlwCpwT8lr/OCTI
ActXXPU3iTWIvVSrmIKHUeljLdbr6FcZjGrqsoIEXd73P3wkpumhMVBFDJVQ
4SpEcL1kImZw5dj1NDa2YGnl5PWXYFtmCoaxU6y4wicPOKMTjtN5grOuSrnL
gdFmVJMEJJmXVx7kpw8GetSJ9LYjk13/JimJy4uN0mipRL6u9VbQONDL7HJ9
6j5GL2vMu2a2/I1MFSyTD2+xTSJEtGRgTeqE3SGgDjGPOG34VzhHkcEj7O2U
HQ0/oIRwtx2/+awmjeBXZcAnnP18ZR+BpLr/lZTyMLHf4beei54307GLn5J1
IQL9IVyV+VpySsNqCaLGPpO65+3obHtsES4nQ4NsngJpwEIuecTuvG52EUbT
fvyaM532igt+lPOtClhDweUr0caufAJAbIciB1fNcq+wZLxKleNSlV9brgNu
NI62BjorXJVeOfsD0/n4j7lpMbij22Pt8qyGU6wvnVVPsLCyNyLmYffaw/Ry
3fpWHqYRXpoLj+JVKSAEk03dvfsyRBKFrbNUuxzwidctG/RqDVZYoAJmDqkt
0k7H1vLSR8dMq20fQe63Hmr+jZx/XEcxDycVIPuztDNzTI26SEM99ri0K0CK
mNbwtGrlS7VpHI81TlO31hFf2pnhMtILE/HeZwB0Vy48nc9TpiMJy42HAWDX
1ftSA+UYV+PHu2rwpEzadWnHK7WmH22ZLTKyIpt2Qgc3nN4p2ao55CzKjYgV
OmHd1TcTDuVAhLOjxvx5xaJEp0c7pCpwPKJj85Jt1qjb5WIMzS6y6RmC6yHh
loGbqrzmOQqb2FtvTEVLEq+hT30w7wQu9uB2OWcOpBr5If2mRtyxhsaWn47f
v5Ez8GZrcZh5EmfCDJ4qw/93T2xz04bn9bPy4INZBfzUS1GEBaTSL9MKkyUC
hlSBme4+mCSx+3YlB/ADkj3VHGF4V0+FTVBSk83I6B9WsvKBmo9XjXbmbF4M
ssWSiL77hdAbp/TGnb73gbC0GgKuCSJzFEfqdMlw3EspMZUj+mAYQi7s9atp
apQXPaPDOIlFrcUqS7H3+YZKg/ow0dzzGBWJ1N1fGvLW13S6W3j3iVJu18Mh
OixKS35zRHUbourc/lZrNno7fIWBKg4DReKMmz31OPGKzKisZw1zZINRM2OE
o4XnY0scBajVRpEjYIfz5Vfv9FHrFwGjwdDp77hOT/eXrkX8ROCnDHsY7hZw
V3KkcHwpmcuxVm74fbLBhjHVcXBIEcamv/FqE4Q+xDLELJa+osRz0L8wRilg
m57qQdeTcV2XkXfomeHgaitQKA3zoOM/A7Czih5FoQSZ3XQYYfSdSRb5QOMF
uu8o3XN1v/OxwVUKMaidYtSUT8fSpzqe3MsAfm60tUWtFRuWRoNzI5DKmC9B
a539re/FUe5KYpt4RBiGLdJa0X9aqEMD8x1EH3aWwiyN5IofxxYrOsPxDJEZ
B9RwghxQNlndJHdYWfFT1HPC6gZmL3LrrVVRV4jxP25PcJe8e2pPPrivDtVD
auFYF0opiu/rmR1HmsUkw/RN3F/aLYT0X5cICBKkwrNmbUWMazcwGFSmvuw9
+NhlieWVpd15BdtGHr4+8xTgy3sqpytMaqHeaXSlPKpdAS3lEhCCl0Ek+P8z
7BbWvEEHLL6PnelY8Ekve6Sd9pKxpitTCJSJHtgT3/bOKbWZVB20vO23luEc
3nek1Hs9aybsRYM7tZuXzQpLDyZ1v5ER4JlHR1XmiAard+VRVJwCM7Yuh+O/
1TJwr9uYFQ/umr9TQRId9ofnCxXFynRq+oGm6oK3wxeDsSdIzHoSa2itgfk7
vkD/2ayeXilafBCudZloj2i9fqvmcxNcxiJMADaEVDp3ZKNWd3w7eH6Kh1mo
XlmcZCbDvrpdmFk6b5GeOt3ezOqJGdxR+cbMiRngd3ULq/XPulqMz+wDTCpb
chpBYx/7J2+/Imkl3XmNbhVzrW7twbubBrSF+ZfUs+mTXLGUgUo/T3jzAQnc
sNxgRrWtbh3WxAOVadrL8kNj8moH4Afmnc1XfW03M9E3Fw92pncP0A0BTFqP
GU2WTJUUP4j+d9EwSSvv7QOCFaj2fXfwlkw0w22YV8S3+HSsHdqlYj8pGK7j
UA8fXz3XlC6vDQWCN61TwFe1ja9nSHxe2HGmMmfvnr7Iwp5MpoETRpl7p1Oy
kzuafzODFLjcs5dGbtJA6s8+fn7/n+gj8/xe/DyHsUjnjmTvgLtz0+3Nc3U2
Y8YjPPsVKKoXaYr5B/MhfgXUueDAhL0GxV+AKTtuXPl+UV00y9GukEpyfzUX
fVu+q5r2GrlWwTRFXQRvDVmep8X/AwDSIKNJ6wQA

-->

</rfc>
