<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.34 (Ruby 2.6.10) -->
<?rfc docmapping="yes"?>
<?rfc comments="yes"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-haynes-nfsv4-flexfiles-v2-05" category="std" consensus="true" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="Flex File Layout v2">Parallel NFS (pNFS) Flexible File Layout Version 2</title>
    <seriesInfo name="Internet-Draft" value="draft-haynes-nfsv4-flexfiles-v2-05"/>
    <author initials="T." surname="Haynes" fullname="Thomas Haynes">
      <organization>Hammerspace</organization>
      <address>
        <email>loghyr@gmail.com</email>
      </address>
    </author>
    <date/>
    <area>General</area>
    <workgroup>Network File System Version 4</workgroup>
    <keyword>Internet-Draft</keyword>
    <abstract>
      <?line 81?>

<t>Parallel NFS (pNFS) allows a separation between the metadata (onto a
metadata server) and data (onto a storage device) for a file.  The
Flexible File Layout Type Version 2 is defined in this document as
an extension to pNFS that allows the use of storage devices that
require only a limited degree of interaction with the metadata
server and use already-existing protocols.  Data protection is also
added to provide integrity; both client-side mirroring and erasure
coding are used for data protection.</t>
    </abstract>
    <note>
      <name>Note to Readers</name>
      <?line 92?>

<t>Discussion of this draft takes place
on the NFSv4 working group mailing list (nfsv4@ietf.org),
which is archived at
<eref target="https://mailarchive.ietf.org/arch/search/?email_list=nfsv4"/>. Source
code and issues list for this draft can be found at
<eref target="https://github.com/ietf-wg-nfsv4/flexfiles-v2"/>.</t>
      <t>Working Group information can be found at <eref target="https://github.com/ietf-wg-nfsv4"/>.</t>
    </note>
  </front>
  <middle>
    <?line 103?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>In Parallel NFS (pNFS) (see Section 12 of <xref target="RFC8881"/>), the metadata
server returns layout type structures that describe where file data is
located.  There are different layout types for different storage systems
and methods of arranging data on storage devices.  <xref target="RFC8435"/> defined
the Flexible File Version 1 Layout Type used with file-based data
servers that are accessed using the NFS protocols: NFSv3 <xref target="RFC1813"/>,
NFSv4.0 <xref target="RFC7530"/>, NFSv4.1 <xref target="RFC8881"/>, and NFSv4.2 <xref target="RFC7862"/>.</t>
      <t>To provide a global state model equivalent to that of the files
layout type, a back-end control protocol might be implemented between
the metadata server and NFSv4.1+ storage devices.  An implementation
can either define its own proprietary mechanism or it could define a
control protocol in a Standards Track document.  The requirements for
a control protocol are specified in <xref target="RFC8881"/> and clarified in
<xref target="RFC8434"/>.</t>
      <t>The control protocol described in this document is based on NFS.  It
does not provide for knowledge of stateids to be passed between the
metadata server and the storage devices.  Instead, the storage
devices are configured such that the metadata server has full access
rights to the data file system and then the metadata server uses
synthetic ids to control client access to individual data files.</t>
      <t>In traditional mirroring of data, the server is responsible for
replicating, validating, and repairing copies of the data file.  With
client-side mirroring, the metadata server provides a layout that
presents the available mirrors to the client.  The client then picks
a mirror to read from and ensures that all writes go to all mirrors.
The client only considers the write transaction to have succeeded if
all mirrors are successfully updated.  In case of error, the client
can use the LAYOUTERROR operation to inform the metadata server,
which is then responsible for the repairing of the mirrored copies of
the file.</t>
<t>This client-side mirroring provides for replication of data but does
not provide for integrity of data.  In the event of an error, a user
would be able to repair the file by silvering the mirror contents,
i.e., by picking one of the mirror instances and replicating it to
the other instance locations.</t>
<t>However, lacking integrity checks, silent corruptions cannot be
detected, and the choice of which copy is the good one is difficult.
This document updates the Flexible File Layout Type to version 2 by
providing error-detection integrity (CRC32) for erasure coding.
Data blocks are transformed into a header and a chunk.  This
document also introduces new operations that allow the client to
roll back writes to the data file.</t>
      <t>Using the process detailed in <xref target="RFC8178"/>, the revisions in this
document become an extension of NFSv4.2 <xref target="RFC7862"/>.  They are built on
top of the external data representation (XDR) <xref target="RFC4506"/> generated
from <xref target="RFC7863"/>.</t>
      <t>This document defines <tt>LAYOUT4_FLEX_FILES_V2</tt>, a new and independent
layout type that coexists with the Flexible File Layout Type version 1
(<tt>LAYOUT4_FLEX_FILES</tt>, <xref target="RFC8435"/>).  The two layout types are not
backward compatible: an FFv2 layout cannot be parsed as an FFv1 layout
and vice versa.  A server <bcp14>MAY</bcp14> support both layout types simultaneously;
a client selects the desired layout type in its LAYOUTGET request.</t>
    </section>
    <section anchor="requirements-language">
      <name>Requirements Language</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

</section>
    <section anchor="sec-motivation">
      <name>Motivation</name>
<t>Server-side erasure coding places the erasure-coding compute at
the server, which becomes a bottleneck as the number of concurrent
clients grows.  Moving the erasure transform to the client
parallelizes the compute across all writers: each client encodes
locally and fans out the resulting chunks to the data servers
directly, keeping the metadata server in its coordinator role for
metadata rather than making it a data-path funnel.</t>
      <t>Flex Files v1 (<xref target="RFC8435"/>) already places the replication
transform at the client via client-side mirroring, but mirroring
provides no integrity check: silent byte corruption is
undetectable, and repairing a damaged mirror requires choosing a
trusted copy essentially blindly.  Flex Files v2 adds two integrity
mechanisms -- a per-chunk CRC32 for on-wire and at-rest bit-flip
detection, and the chunk_guard4 compare-and-swap primitive (see
<xref target="sec-chunk_guard4"/>) for detecting concurrent-writer
inconsistency -- while preserving the client-side compute model.
The chunk_guard4 per-chunk header is 8 bytes total (a 32-bit
generation id and a 32-bit owning-client short-id); this keeps
the metadata-server overhead for maintaining erasure-coding
consistency to the smallest value that still admits a CAS
tiebreaker.</t>
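      <t>As a non-normative illustration, the 8-byte header can be pictured
as the following XDR sketch (the field names here are illustrative; the
normative definition appears in <xref target="sec-chunk_guard4"/>):</t>
      <figure anchor="fig-chunk-guard-sketch">
        <name>Sketch of the chunk_guard4 header</name>
        <sourcecode type="xdr"><![CDATA[
/* Illustrative sketch only; see the normative chunk_guard4
 * definition.  8 bytes total. */
struct chunk_guard_sketch {
    uint32_t cgs_generation;  /* 32-bit generation id            */
    uint32_t cgs_client_id;   /* 32-bit owning-client short-id,
                                 the CAS tiebreaker              */
};
]]></sourcecode>
      </figure>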
      <t>An alternative to client-side erasure coding is to keep the
erasure-coding transform inside the storage system -- that is, on
the data servers themselves, or on a server-side pre-ingest stage
between the client and the data servers.  This approach has real
advantages: a single codec is fixed at the storage system, so
clients do not have to negotiate codec support; repair never
traverses the client; and the wire protocol stays minimal because
no on-wire consistency primitives are needed.</t>
      <t>Flex Files v2 does not choose that path, for three reasons:</t>
      <ul spacing="normal">
        <li>
<t><strong>Scale bottleneck.</strong>  The storage system becomes a scale
bottleneck in exactly the way described at the start of this section:
large-scale parallel workloads drive the aggregate
erasure-coding compute beyond what a bounded storage tier can
supply, while clients are the naturally horizontally scaling
resource.</t>
        </li>
        <li>
          <t><strong>Loss of per-file codec flexibility.</strong>  A single
server-fixed codec forecloses the option of picking different
codecs for different files in the same namespace, which
matters when files have different durability and performance
requirements.</t>
        </li>
        <li>
<t><strong>Benchmark evidence.</strong>  Measurements summarized in
<xref target="sec-implementation-status"/> show that client-side encoding
with the overhead introduced here is competitive with
server-side encoding on realistic workloads, and scales the
encoding compute with the writer population rather than with
the data-server count.</t>
        </li>
      </ul>
      <t>The right answer for a given deployment is not universal;
<xref target="sec-rejected-alternatives"/> records the alternatives considered
and why each was not chosen for Flex Files v2's target workload
classes.</t>
      <t>Client-side erasure coding turns write-hole recovery into a
protocol-level concern rather than an implementer-internal one.
In Flex Files v1, the replication transform produces independent
full-copy mirrors, so a partial write is detected and repaired by
resilvering from a surviving copy.  A single server-side
coordinator has enough visibility to drive that repair without
help from the client.  Under a (k, m) erasure code, in contrast,
a write transaction fans out across multiple data servers with no
single server-side actor holding whole-transaction visibility:
when the client fails mid-fan-out, the partial state across data
servers must be reconciled by the metadata server, and the
reconciliation protocol must be specified on the wire so that any
compliant client, data server, or repair agent can participate.
The chunk_guard4 CAS primitive, the PENDING / FINALIZED /
COMMITTED state machine, the CHUNK_LOCK escrow mechanism, and
CB_CHUNK_REPAIR together form that on-wire reconciliation
protocol.</t>
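      <t>A non-normative sketch of the nominal progression implied by these
states follows; the normative transition rules, including the
CHUNK_LOCK escrow and CB_CHUNK_REPAIR failure paths, are specified
later in this document:</t>
      <figure anchor="fig-chunk-state-sketch">
        <name>Nominal chunk-state progression (sketch)</name>
        <artwork><![CDATA[
  PENDING ----------> FINALIZED ----------> COMMITTED

  A writer failure mid-fan-out can leave some chunks of a stripe
  in an earlier state than others; the metadata server reconciles
  them (via the CHUNK_LOCK escrow and CB_CHUNK_REPAIR) before a
  reader or repair client can observe a half-applied stripe.
]]></artwork>
      </figure>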
      <t>Scope note: the consistency goal of Flex Files v2 is RAID
consistency across the chunks that make up an encoded stripe, not
POSIX write ordering across arbitrary application writes.  The
protocol does not attempt to make overlapping application writes
from different clients atomic: that is the province of file
locking (<xref target="RFC8881"/>, Section 12) and of application-level
coordination.  What the protocol does guarantee is that the
chunks comprising a given stripe agree on which write produced
them, so that readers and repair clients never observe a
half-applied stripe.  Readers who need cross-write ordering
beyond a single stripe <bcp14>MUST</bcp14> use the existing NFSv4 locking
primitives.</t>
    </section>
    <section anchor="sec-use-cases">
      <name>Use Cases</name>
      <t>The protocol is designed around three workload classes.  The
percentages below reflect the expected deployment mix in
installations that choose Flex Files v2 for its combination of
integrity and performance; individual deployments may diverge.</t>
      <dl>
        <dt>Single writer, multiple readers (approximately 90% of expected
deployments):</dt>
        <dd>
          <t>The common case is a file written by one client and subsequently
read by many.  Examples include artifacts deposited by batch
jobs, container images, and media files.  The protocol is
optimized for this case; see <xref target="sec-system-model-progress"/>.</t>
        </dd>
        <dt>Multiple writers without sustained contention (approximately 9% of
expected deployments):</dt>
        <dd>
          <t>Files with multiple concurrent writers where races on the same
chunk are rare.  Examples include shared-directory append-only
logs and distributed builds.  The chunk_guard4 CAS primitive and
per-chunk locking cover this case without penalizing the common
single-writer path.</t>
        </dd>
        <dt>Multiple writers with high-frequency contention, no overwrite
(approximately 1% of expected deployments):</dt>
        <dd>
          <t>High-performance computing (HPC) checkpoint workloads, in which
many ranks write disjoint regions of the same file in lockstep.
The protocol relies on block alignment to keep per-chunk
contention rare despite overall high writer count.  Contention
that does occur is resolved via the deterministic tiebreaker
rule defined in <xref target="sec-chunk_guard4"/>.</t>
        </dd>
      </dl>
      <t>Scale targets include multi-thousand-client deployments (on the
order of tens of thousands of concurrent clients for HPC
checkpointing), parallel-filesystem replacements, and multi-rack
shared-storage clusters.  The repair protocol (see
<xref target="sec-repair-selection"/>) is designed to let such deployments
tolerate data-server failures and concurrent-writer races without
blocking the critical path for the first two workload classes.</t>
    </section>
    <section anchor="definitions">
      <name>Definitions</name>
      <dl>
        <dt>chunk:</dt>
        <dd>
<t>One of the units of data exchanged with a data server after a
transformation has been applied to a data block.  The resulting chunk
may be a different size than the data block.</t>
        </dd>
        <dt>control communication requirements:</dt>
        <dd>
          <t>the specification for information on layouts, stateids, file metadata,
and file data that must be communicated between the metadata server and
the storage devices.  There is a separate set of requirements for each
layout type.</t>
        </dd>
        <dt>control protocol:</dt>
        <dd>
          <t>the particular mechanism that an implementation of a layout type would
use to meet the control communication requirement for that layout type.
This need not be a protocol as normally understood.  In some cases,
the same protocol may be used as a control protocol and storage protocol.</t>
        </dd>
        <dt>client-side mirroring:</dt>
        <dd>
          <t>a feature in which the client, not the server, is responsible for
updating all of the mirrored copies of a layout segment.</t>
        </dd>
        <dt>data block:</dt>
        <dd>
          <t>A block of data in the client's cache for a file.</t>
        </dd>
        <dt>data file:</dt>
        <dd>
          <t>The data portion of the file, stored on the data server.</t>
        </dd>
        <dt>replication of data:</dt>
        <dd>
          <t>Data replication is making and storing multiple copies of data in
different locations.</t>
        </dd>
        <dt>Erasure Coding:</dt>
        <dd>
<t>A data protection scheme in which a block of data is divided into
fragments, and additional redundant fragments are computed to provide
parity.  The resulting chunks are stored in different locations.</t>
        </dd>
        <dt>Client-Side Erasure Coding:</dt>
        <dd>
          <t>A data protection method in which the client, not the server,
applies the erasure-coding transform, encoding data blocks locally and
writing the resulting chunks to the storage devices.</t>
        </dd>
        <dt>(file) data:</dt>
        <dd>
          <t>that part of the file system object that contains the data to be read
or written.  It is the contents of the object rather than the attributes
of the object.</t>
        </dd>
        <dt>data server (DS):</dt>
        <dd>
          <t>a pNFS server that provides the file's data when the file system
object is accessed over a file-based protocol.</t>
        </dd>
        <dt>fencing:</dt>
        <dd>
          <t>the process by which the metadata server prevents the storage devices
from processing I/O from a specific client to a specific file.</t>
        </dd>
        <dt>file layout type:</dt>
        <dd>
          <t>a layout type in which the storage devices are accessed via the NFS
protocol (see Section 5.12.4 of <xref target="RFC8881"/>).</t>
        </dd>
        <dt>gid:</dt>
        <dd>
          <t>the group id, a numeric value that identifies to which group a file
belongs.</t>
        </dd>
        <dt>layout:</dt>
        <dd>
          <t>the information a client uses to access file data on a storage device.
This information includes specification of the protocol (layout type)
and the identity of the storage devices to be used.</t>
        </dd>
        <dt>layout iomode:</dt>
        <dd>
          <t>a grant of either read-only or read/write I/O to the client.</t>
        </dd>
        <dt>layout segment:</dt>
        <dd>
          <t>a sub-division of a layout.  That sub-division might be by the layout
iomode (see Sections 3.3.20 and 12.2.9 of <xref target="RFC8881"/>), a striping pattern
(see Section 13.3 of <xref target="RFC8881"/>), or requested byte range.</t>
        </dd>
        <dt>layout stateid:</dt>
        <dd>
          <t>a 128-bit quantity returned by a server that uniquely defines the
layout state provided by the server for a specific layout that describes
a layout type and file (see Section 12.5.2 of <xref target="RFC8881"/>).  Further,
Section 12.5.3 of <xref target="RFC8881"/> describes differences in handling between
layout stateids and other stateid types.</t>
        </dd>
        <dt>layout type:</dt>
        <dd>
          <t>a specification of both the storage protocol used to access the data
and the aggregation scheme used to lay out the file data on the underlying
storage devices.</t>
        </dd>
        <dt>loose coupling:</dt>
        <dd>
          <t>when the control protocol is a storage protocol.</t>
        </dd>
        <dt>(file) metadata:</dt>
        <dd>
          <t>the part of the file system object that contains various descriptive
data relevant to the file object, as opposed to the file data itself.
This could include the time of last modification, access time, EOF
position, etc.</t>
        </dd>
        <dt>metadata server (MDS):</dt>
        <dd>
          <t>the pNFS server that provides metadata information for a file system
object.  It is also responsible for generating, recalling, and revoking
layouts for file system objects, for performing directory operations,
and for performing I/O operations to regular files when the clients
direct these to the metadata server itself.</t>
        </dd>
        <dt>mirror:</dt>
        <dd>
          <t>a copy of a layout segment.  Note that if one copy of the mirror is
updated, then all copies must be updated.</t>
        </dd>
        <dt>non-systematic encoding:</dt>
        <dd>
          <t>An erasure coding scheme in which the encoded shards do not contain
verbatim copies of the original data.  Every read requires decoding,
even when no shards are lost.  The Mojette non-systematic transform is
an example.  Non-systematic encodings are typically used for archival
workloads where reads are infrequent.</t>
        </dd>
        <dt>recalling a layout:</dt>
        <dd>
          <t>a graceful recall, via a callback, of a specific layout by the metadata
server to the client.  Graceful here means that the client would have
the opportunity to flush any WRITEs, etc., before returning the layout
to the metadata server.</t>
        </dd>
        <dt>revoking a layout:</dt>
        <dd>
          <t>an invalidation of a specific layout by the metadata server.
Once revocation occurs, the metadata server will not accept as valid any
reference to the revoked layout, and a storage device will not accept
any client access based on the layout.</t>
        </dd>
        <dt>resilvering:</dt>
        <dd>
          <t>the act of rebuilding a mirrored copy of a layout segment from a
known good copy of the layout segment.  Note that this can also be done
to create a new mirrored copy of the layout segment.</t>
        </dd>
        <dt>rsize:</dt>
        <dd>
          <t>the data transfer buffer size used for READs.</t>
        </dd>
        <dt>stateid:</dt>
        <dd>
          <t>a 128-bit quantity returned by a server that uniquely defines the set
of locking-related state provided by the server.  Stateids may designate
state related to open files, byte-range locks, delegations, or layouts.</t>
        </dd>
        <dt>storage device:</dt>
        <dd>
          <t>the target to which clients may direct I/O requests when they hold
an appropriate layout.  See Section 2.1 of <xref target="RFC8434"/> for further
discussion of the difference between a data server and a storage device.</t>
        </dd>
        <dt>storage protocol:</dt>
        <dd>
          <t>the protocol used by clients to do I/O operations to the storage
device.  Each layout type specifies the set of storage protocols.</t>
        </dd>
        <dt>systematic encoding:</dt>
        <dd>
<t>An erasure coding scheme in which the first k of the k+m encoded
shards are identical to the original k data blocks.  A healthy read
(no failures) requires no decoding -- the data shards are read directly.
Decoding is triggered only when data shards are missing.  Reed-Solomon
Vandermonde and Mojette systematic are examples.  (A schematic
comparison with non-systematic encoding follows this list.)</t>
        </dd>
        <dt>tight coupling:</dt>
        <dd>
          <t>an arrangement in which the control protocol is one designed
specifically for control communication.  It may be either a proprietary
protocol adapted specifically to a particular metadata server or a
protocol based on a Standards Track document.</t>
        </dd>
        <dt>uid:</dt>
        <dd>
          <t>the user id, a numeric value that identifies which user owns a file.</t>
        </dd>
        <dt>write hole:</dt>
        <dd>
<t>A data corruption scenario in which either two clients are trying
to write to the same chunk or one client is overwriting an existing
chunk of data.</t>
        </dd>
        <dt>wsize:</dt>
        <dd>
          <t>the data transfer buffer size used for WRITEs.</t>
        </dd>
      </dl>
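      <t>As a schematic comparison of the systematic and non-systematic
encodings defined above, assume k = 4 and m = 2:</t>
      <figure anchor="fig-sys-vs-nonsys">
        <name>Systematic versus non-systematic encoding (schematic)</name>
        <artwork><![CDATA[
original data block (k = 4):   D1 D2 D3 D4

systematic (k + m = 6):        D1 D2 D3 D4 P1 P2
  The first k shards are the data blocks verbatim; a healthy
  read requires no decoding.

non-systematic (k + m = 6):    E1 E2 E3 E4 E5 E6
  No shard is a verbatim copy of any data block; every read
  requires decoding, even when no shards are lost.
]]></artwork>
      </figure>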
    </section>
    <section anchor="coupling-of-storage-devices">
      <name>Coupling of Storage Devices</name>
      <t>A server implementation may choose either a loosely coupled model or a
tightly coupled model between the metadata server and the storage devices.
<xref target="RFC8434"/> describes the general problems facing pNFS implementations.
This document details how the new flexible file layout type addresses
these issues.  To implement the tightly coupled model, a control protocol
has to be defined.  As the flexible file layout imposes no special
requirements on the client, the control protocol will need to provide:</t>
      <ol spacing="normal" type="1"><li>
          <t>management of both security and LAYOUTCOMMITs and</t>
        </li>
        <li>
          <t>a global stateid model and management of these stateids.</t>
        </li>
      </ol>
      <t>When implementing the loosely coupled model, the only control protocol
will be a version of NFS, with no ability to provide a global stateid
model or to prevent clients from using layouts inappropriately.  To enable
client use in that environment, this document will specify how security,
state, and locking are to be managed.</t>
      <t>The loosely and tightly coupled locking models defined in Section 2.3
of <xref target="RFC8435"/> apply equally to this layout type, including the use of
anonymous stateids with loosely coupled storage devices, the handling
of lock and delegation stateids, and the mandatory byte-range lock
requirements for the tightly coupled model.</t>
      <section anchor="layoutcommit">
        <name>LAYOUTCOMMIT</name>
        <t>Regardless of the coupling model, the metadata server has the
responsibility, upon receiving a LAYOUTCOMMIT (see Section 18.42 of
<xref target="RFC8881"/>) to ensure that the semantics of pNFS are respected (see
Section 3.1 of <xref target="RFC8434"/>).  These include a requirement that data
written to a data storage device be stable before the occurrence of
the LAYOUTCOMMIT.</t>
        <t>It is the responsibility of the client to make sure the data file is
stable before the metadata server begins to query the storage devices
about the changes to the file.  If any WRITE to a storage device did not
result with stable_how equal to FILE_SYNC, a LAYOUTCOMMIT to the metadata
server <bcp14>MUST</bcp14> be preceded by a COMMIT to the storage devices written to.
Note that if the client has not done a COMMIT to the storage device, then
the LAYOUTCOMMIT might not be synchronized to the last WRITE operation
to the storage device.</t>
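        <t>A non-normative sketch of the required ordering, for a client that
wrote to a single storage device with a stable_how of UNSTABLE4:</t>
        <figure anchor="fig-commit-before-layoutcommit">
          <name>COMMIT before LAYOUTCOMMIT (sketch)</name>
          <artwork><![CDATA[
client               storage device           metadata server
  |-- WRITE (UNSTABLE4) -->|
  |<-- WRITE reply --------|
  |-- COMMIT ------------->|   data now stable
  |<-- COMMIT reply -------|
  |-- LAYOUTCOMMIT ----------------------------->|
  |<-- LAYOUTCOMMIT reply ------------------------|
]]></artwork>
        </figure>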
      </section>
      <section anchor="sec-Fencing-Clients">
        <name>Fencing Clients from the Storage Device</name>
        <t>With loosely coupled storage devices, the metadata server uses synthetic
uids (user ids) and gids (group ids) for the data file, where the uid
owner of the data file is allowed read/write access and the gid owner
is allowed read-only access.  As part of the layout (see ffv2ds_user
and ffv2ds_group in <xref target="sec-ffv2_layout"/>), the client is provided
with the user and group to be used in the Remote Procedure Call
(RPC) <xref target="RFC5531"/> credentials needed to access the data file.
Fencing off of clients is achieved by the metadata server changing
the synthetic uid and/or gid owners of the data file on the storage
device to implicitly revoke the outstanding RPC credentials.  A
client presenting the wrong credential for the desired access will
get an NFS4ERR_ACCESS error.</t>
        <t>With this loosely coupled model, the metadata server is not able to fence
off a single client; it is forced to fence off all clients.  However,
as the other clients react to the fencing, returning their layouts and
trying to get new ones, the metadata server can hand out a new uid and
gid to allow access.</t>
        <t>It is <bcp14>RECOMMENDED</bcp14> to implement common access control methods at the
storage device file system to allow only the metadata server root
(super user) access to the storage device and to set the owner of all
directories holding data files to the root user.  This approach provides
a practical model to enforce access control and fence off cooperative
clients, but it cannot protect against malicious clients; hence, it
provides a level of security equivalent to AUTH_SYS.  It is <bcp14>RECOMMENDED</bcp14>
that the communication between the metadata server and storage device
be secure from eavesdroppers and man-in-the-middle protocol tampering.
The security measure could be physical security (e.g., the servers
are co-located in a physically secure area), encrypted communications,
or some other technique.</t>
        <t>With tightly coupled storage devices, the metadata server sets the
user and group owners, mode bits, and Access Control List (ACL) of
the data file to be the same as those of the metadata file.  In
addition, the client must authenticate with the storage device and go
through the same authorization
process it would go through via the metadata server.  In the case of
tight coupling, fencing is the responsibility of the control protocol and
is not described in detail in this document.  However, implementations
of the tightly coupled locking model (see <xref target="sec-state-locking"/>) will
need a way to prevent access by certain clients to specific files by
invalidating the corresponding stateids on the storage device.  In such
a scenario, the client will be given an error of NFS4ERR_BAD_STATEID.</t>
        <t>The client need not know the model used between the metadata server and
the storage device.  It need only react consistently to any errors in
interacting with the storage device.  It <bcp14>SHOULD</bcp14> both return the layout
and error to the metadata server and ask for a new layout.  At that point,
the metadata server can either hand out a new layout, hand out no layout
(forcing the I/O through it), or deny the client further access to
the file.</t>
        <section anchor="implementation-notes-for-synthetic-uidsgids">
          <name>Implementation Notes for Synthetic uids/gids</name>
          <t>The selection method for the synthetic uids and gids to be used for
fencing in loosely coupled storage devices is strictly an implementation
issue.  That is, an administrator might restrict a range of such ids
available to the Lightweight Directory Access Protocol (LDAP) 'uid' field
<xref target="RFC4519"/>.  The administrator might also be able to choose an id that
would never be used to grant access.  Then, when the metadata server had
a request to access a file, a SETATTR would be sent to the storage device
to set the owner and group of the data file.  The user and group might
be selected in a round-robin fashion from the range of available ids.</t>
          <t>Those ids would be sent back as ffv2ds_user and ffv2ds_group to the
client, who would present them as the RPC credentials to the storage
device.  When the client is done accessing the file and the metadata
server knows that no other client is accessing the file, it can
reset the owner and group to restrict access to the data file.</t>
          <t>When the metadata server wants to fence off a client, it changes the
synthetic uid and/or gid to the restricted ids.  Note that using a
restricted id ensures that there is a change of owner and at least one
id available that never gets allowed access.</t>
          <t>Under an AUTH_SYS security model, synthetic uids and gids of 0 <bcp14>SHOULD</bcp14> be
avoided.  These typically either grant super access to files on a storage
device or are mapped to an anonymous id.  In the first case, even if the
data file is fenced, the client might still be able to access the file.
In the second case, multiple ids might be mapped to the anonymous ids.</t>
        </section>
        <section anchor="example-of-using-synthetic-uidsgids">
          <name>Example of using Synthetic uids/gids</name>
          <t>The user loghyr creates a file "ompha.c" on the metadata server, which
then creates a corresponding data file on the storage device.</t>
          <t>The metadata server entry may look like:</t>
          <figure anchor="fig-meta-ompha">
            <name>Metadata's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r--r--    1 loghyr  staff    1697 Dec  4 11:31 ompha.c
]]></sourcecode>
          </figure>
          <t>On the storage device, the file may be assigned some unpredictable
synthetic uid/gid to deny access:</t>
          <figure anchor="fig-data-ompha">
            <name>Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19452   28418    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>When the file is opened on a client and accessed, the user will try to
get a layout for the data file.  Since the layout knows nothing about
the user (and does not care), it does not matter whether the user loghyr
or garbo opens the file.  The client has to present a uid of 19452
to get write permission.  If it presents any other value for the uid,
then it must present a gid of 28418 to get read access.</t>
          <t>Further, if the metadata server decides to fence the file, it <bcp14>SHOULD</bcp14>
change the uid and/or gid such that these values neither match earlier
values for that file nor match a predictable change based on an earlier
fencing.</t>
          <figure anchor="fig-fenced-ompha">
            <name>Fenced Data's view of ompha.c</name>
            <sourcecode type="shell"><![CDATA[
-rw-r-----    1 19453   28419    1697 Dec  4 11:31 data_ompha.c
]]></sourcecode>
          </figure>
          <t>The set of synthetic gids on the storage device <bcp14>SHOULD</bcp14> be selected such
that there is no mapping in any of the name services used by the storage
device, i.e., each group <bcp14>SHOULD</bcp14> have no members.</t>
          <t>If the layout segment has an iomode of LAYOUTIOMODE4_READ, then the
metadata server <bcp14>SHOULD</bcp14> return a synthetic uid that is not set on the
storage device.  Only the synthetic gid would be valid.</t>
<t>The client is thus solely responsible for enforcing file permissions
in a loosely coupled model.  To allow loghyr write access, it will send
an RPC to the storage device with a credential of 19452:28418.  To allow
garbo read access, it will send an RPC to the storage device with a
credential of 28418:28418.  The value of the uid does not matter as long
as it is not the synthetic uid granted when getting the layout.</t>
          <t>While pushing the enforcement of permission checking onto the client
may seem to weaken security, the client may already be responsible
for enforcing permissions before modifications are sent to a server.
With cached writes, the client is always responsible for tracking who is
modifying a file and making sure to not coalesce requests from multiple
users into one request.</t>
        </section>
      </section>
      <section anchor="sec-state-locking">
        <name>State and Locking Models</name>
        <t>An implementation can always be deployed as a loosely coupled model.
There is, however, no way for a storage device to indicate over an NFS
protocol that it can definitively participate in a tightly coupled model:</t>
        <ul spacing="normal">
          <li>
            <t>Storage devices implementing the NFSv3 and NFSv4.0 protocols are
always treated as loosely coupled.</t>
          </li>
          <li>
<t>NFSv4.1+ storage devices that do not return the
EXCHGID4_FLAG_USE_PNFS_DS flag in the response to EXCHANGE_ID are
indicating that they are to be treated as loosely coupled.  From the
locking viewpoint, they are treated in the same way as NFSv4.0 storage
devices.</t>
          </li>
          <li>
<t>NFSv4.1+ storage devices that do identify themselves with the
EXCHGID4_FLAG_USE_PNFS_DS flag set in the response to EXCHANGE_ID can
potentially be tightly coupled.  They would use a back-end control
protocol to implement the global stateid model as described in
<xref target="RFC8881"/>.</t>
          </li>
        </ul>
        <t>A storage device would have to be either discovered or advertised over
the control protocol to enable a tightly coupled model.</t>
        <section anchor="loosely-coupled-locking-model">
          <name>Loosely Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
When an NFSv4 version is used as the data access protocol, the metadata
server may make stateid-related requests of the storage devices.  However,
it is not required to do so, and the resulting stateids are known only
to the metadata server and the storage device.</t>
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with by the metadata server.  Stateids are
selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server may need to interact with the storage device to
locate the file to be opened, but no locking-related functionality
need be used on the storage device.</t>
            </li>
            <li>
              <t>OPEN_DOWNGRADE and CLOSE only require local execution on the
metadata server.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and only
used on the metadata server.</t>
            </li>
            <li>
<t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  No storage device
involvement is required.</t>
            </li>
            <li>
              <t>TEST_STATEID and FREE_STATEID are processed locally on the
metadata server, without storage device involvement.</t>
            </li>
          </ul>
          <t>All I/O operations to the storage device are done using the anonymous
stateid.  Thus, the storage device has no information about the openowner
and lockowner responsible for issuing a particular I/O operation.
As a result:</t>
          <ul spacing="normal">
            <li>
              <t>Mandatory byte-range locking cannot be supported because the
storage device has no way of distinguishing I/O done on behalf of
the lock owner from those done by others.</t>
            </li>
            <li>
              <t>Enforcement of share reservations is the responsibility of the
client.  Even though I/O is done using the anonymous stateid, the
client <bcp14>MUST</bcp14> ensure that it has a valid stateid associated with the
openowner.</t>
            </li>
          </ul>
          <t>In the event that a stateid is revoked, the metadata server is responsible
for preventing client access, since it has no way of being sure that
the client is aware that the stateid in question has been revoked.</t>
          <t>As the client never receives a stateid generated by a storage device,
there is no client lease on the storage device and no prospect of lease
expiration, even when access is via NFSv4 protocols.  Clients will
have leases on the metadata server.  In dealing with lease expiration,
the metadata server may need to use fencing to prevent revoked stateids
from being relied upon by a client unaware of the fact that they have
been revoked.</t>
        </section>
        <section anchor="tightly-coupled-locking-model">
          <name>Tightly Coupled Locking Model</name>
          <t>When locking-related operations are requested, they are primarily dealt
with by the metadata server, which generates the appropriate stateids.
These stateids <bcp14>MUST</bcp14> be made known to the storage device using control
protocol facilities.  For flex files v2 deployments in which the storage
devices are NFSv4.2 servers, those facilities are provided by the
TRUST_STATEID, REVOKE_STATEID, and BULK_REVOKE_STATEID operations
defined in <xref target="sec-tight-coupling-control"/>.</t>
<t>The metadata server and a storage device establish that they can
use TRUST_STATEID via a two-part handshake, sketched after the list
below.  Both parts <bcp14>MUST</bcp14> succeed before the metadata server may
issue TRUST_STATEID against that storage device for production
traffic:</t>
          <ol spacing="normal" type="1"><li>
              <t><strong>Capability probe.</strong>  At control-session setup the metadata
server sends a TRUST_STATEID against the anonymous stateid
(see <xref target="sec-tight-coupling-probe"/>).  A storage device that
supports tight coupling <bcp14>MUST</bcp14> reject the probe with
NFS4ERR_INVAL; a storage device that does not support tight
coupling returns NFS4ERR_NOTSUPP and the metadata server
falls back to loose coupling.  The metadata server records
the result per storage device in ffdv_tightly_coupled.</t>
            </li>
            <li>
              <t><strong>Control-session gating.</strong>  The metadata server presents
EXCHGID4_FLAG_USE_PNFS_MDS at EXCHANGE_ID when it opens the
control session to the storage device
(see <xref target="sec-tight-coupling-control-session"/>).  The storage
device <bcp14>MUST</bcp14> reject any incoming TRUST_STATEID,
REVOKE_STATEID, or BULK_REVOKE_STATEID that does not arrive
on such a session with NFS4ERR_PERM.  This is the
authorization mechanism that distinguishes the metadata
server from ordinary pNFS clients, which connect with
EXCHGID4_FLAG_USE_PNFS_DS or EXCHGID4_FLAG_USE_NON_PNFS and
are therefore structurally unable to invoke these operations.</t>
            </li>
          </ol>
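          <t>A non-normative sketch of this two-part handshake:</t>
          <figure anchor="fig-tc-handshake">
            <name>Tight-coupling handshake (sketch)</name>
            <artwork><![CDATA[
metadata server                            storage device
  |-- EXCHANGE_ID (EXCHGID4_FLAG_USE_PNFS_MDS) -->|
  |<-- EXCHANGE_ID reply --------------------------|
  |        (control session established)           |
  |-- TRUST_STATEID (anonymous stateid probe) ---->|
  |<-- NFS4ERR_INVAL: tightly coupled -------------|
  |    (NFS4ERR_NOTSUPP: fall back to loose        |
  |     coupling)                                  |
]]></artwork>
          </figure>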
          <t>Given this basic structure, locking-related operations are handled
as follows:</t>
          <ul spacing="normal">
            <li>
              <t>OPENs are dealt with primarily on the metadata server.  Stateids
are selected by the metadata server and associated with the client
ID describing the client's connection to the metadata server.
The metadata server needs to interact with the storage device to
locate the file to be opened and to make the storage device aware of
the association between the metadata-server-chosen stateid and the
client and openowner that it represents.  OPEN_DOWNGRADE and CLOSE
are executed initially on the metadata server, but the state change
<bcp14>MUST</bcp14> be propagated to the storage device.</t>
            </li>
            <li>
              <t>Advisory byte-range locks can be implemented locally on the
metadata server.  As in the case of OPENs, the stateids associated
with byte-range locks are assigned by the metadata server and are
available for use on the metadata server.  Because I/O operations
are allowed to present lock stateids, the metadata server needs the
ability to make the storage device aware of the association between
the metadata-server-chosen stateid and the corresponding open stateid
it is associated with.</t>
            </li>
            <li>
              <t>Mandatory byte-range locks can be supported when both the metadata
server and the storage devices have the appropriate support.  As in
the case of advisory byte-range locks, these are assigned by the
metadata server and are available for use on the metadata server.
To enable mandatory lock enforcement on the storage device, the
metadata server needs the ability to make the storage device aware
of the association between the metadata-server-chosen stateid and
the client, openowner, and lock (i.e., lockowner, byte-range, and
lock-type) that it represents.  Because I/O operations are allowed
to present lock stateids, this information needs to be propagated to
all storage devices to which I/O might be directed rather than only
to storage devices that contain the locked region.</t>
            </li>
            <li>
<t>Delegations are assigned by the metadata server, which initiates
recalls when conflicting OPENs are processed.  Because I/O operations
are allowed to present delegation stateids, the metadata server
requires the ability:</t>
              <ol spacing="normal" type="1"><li>
                  <t>to make the storage device aware of the association between
the metadata-server-chosen stateid and the filehandle and
delegation type it represents</t>
                </li>
                <li>
                  <t>to break such an association.</t>
                </li>
              </ol>
            </li>
            <li>
              <t>TEST_STATEID is processed locally on the metadata server, without
storage device involvement.</t>
            </li>
            <li>
              <t>FREE_STATEID is processed on the metadata server, but the metadata
server requires the ability to propagate the request to the
corresponding storage devices.</t>
            </li>
          </ul>
          <t>Because the client will possess and use stateids valid on the storage
device, there will be a client lease on the storage device, and the
possibility of lease expiration does exist.  The best approach for the
storage device is to retain these locks as a courtesy.  However, if it
does not do so, control protocol facilities need to provide the means
to synchronize lock state between the metadata server and storage device.</t>
          <t>Clients will also have leases on the metadata server that are subject
to expiration.  In dealing with lease expiration, the metadata server
would be expected to use control protocol facilities enabling it to
invalidate revoked stateids on the storage device.  In the event the
client is not responsive, the metadata server may need to use fencing
to prevent revoked stateids from being acted upon by the storage device.</t>
        </section>
      </section>
      <section anchor="sec-tight-coupling-control">
        <name>Tight Coupling Control Protocol</name>
        <t>When an NFSv4.2 storage device participates in a tightly coupled
deployment, the metadata server and the storage devices need a
control protocol that:</t>
        <ol spacing="normal" type="1"><li>
            <t>registers the layout stateid with each storage device so the
storage device can validate client I/O independently; and</t>
          </li>
          <li>
            <t>revokes trust promptly when the metadata server withdraws the
client's authorization -- for example, on CB_LAYOUTRECALL
timeout, lease expiry, or layout return after error.</t>
          </li>
        </ol>
        <t>This specification defines that control protocol as three new
NFSv4.2 operations: TRUST_STATEID (<xref target="sec-TRUST_STATEID"/>),
REVOKE_STATEID (<xref target="sec-REVOKE_STATEID"/>), and BULK_REVOKE_STATEID
(<xref target="sec-BULK_REVOKE_STATEID"/>).  These operations are sent by the
metadata server to each storage device over a dedicated control
session (see <xref target="sec-tight-coupling-control-session"/>) and <bcp14>MUST NOT</bcp14>
be sent by pNFS clients.</t>
<t>The receiver of these operations is any server to which the
metadata server delegates client-I/O admission.  In this document that
is the storage device (DS).  The same mechanism applies to a
proxy server (PS) as defined in the companion Data Mover
draft -- a PS may or may not additionally act as a DS, but in
either role it needs the metadata server to register a layout
stateid before it can admit client I/O.  Where this section
says "storage device," read it as "storage device, or proxy
server as defined in the companion Data Mover draft"; the flag
check and the three operations are identical for both roles.</t>
        <section anchor="sec-tight-coupling-probe">
          <name>Capability Discovery</name>
          <t>A storage device indicates support for tight coupling implicitly,
by processing TRUST_STATEID rather than returning NFS4ERR_NOTSUPP.
The metadata server probes each storage device during
control-session setup:</t>
          <figure anchor="fig-trust-stateid-probe">
            <name>TRUST_STATEID capability probe</name>
            <artwork><![CDATA[
SEQUENCE + PUTROOTFH + TRUST_STATEID(
    tsa_layout_stateid = ANONYMOUS_STATEID,
    tsa_iomode         = LAYOUTIOMODE4_READ,
    tsa_expire         = 0,
    tsa_principal      = "")
]]></artwork>
          </figure>
          <t>The anonymous stateid is used deliberately: a correctly implemented
storage device <bcp14>MUST</bcp14> reject it (see <xref target="sec-TRUST_STATEID"/>), so the
probe cannot accidentally register garbage in the trust table.  The
metadata server interprets the probe response as follows:</t>
          <ul spacing="normal">
            <li>
              <t>NFS4ERR_NOTSUPP: tight coupling is not supported on this
storage device.  The metadata server falls back to loose coupling
(anonymous stateid plus fencing) and sets ffdv_tightly_coupled
to false for this storage device.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: tight coupling is supported.  The anonymous
stateid was correctly rejected.  The metadata server records the
capability and sets ffdv_tightly_coupled to true for this
storage device.</t>
            </li>
            <li>
              <t>NFS4_OK: the storage device accepted an anonymous stateid into
its trust table.  This is a storage device bug.  The metadata
server <bcp14>SHOULD</bcp14> log the anomaly.  It <bcp14>MAY</bcp14> treat the capability as
confirmed to avoid downgrading to loose coupling, but it <bcp14>MUST</bcp14>
immediately issue REVOKE_STATEID to remove the bogus entry.</t>
            </li>
          </ul>
          <t>The capability is recorded per storage device, not per file.
Partial support across a mirror set is permitted: each
ff_device_versions4 entry returned by GETDEVICEINFO carries its
own ffdv_tightly_coupled flag, set independently.</t>
        </section>
        <section anchor="sec-tight-coupling-control-session">
          <name>Control Session</name>
          <t>The metadata server establishes an NFSv4.2 session to each
tight-coupling-capable storage device at startup.  On this session
the metadata server acts as the storage device's client and
presents EXCHGID4_FLAG_USE_PNFS_MDS in its EXCHANGE_ID args.</t>
          <t>The storage device <bcp14>MUST</bcp14> verify that any incoming TRUST_STATEID,
REVOKE_STATEID, or BULK_REVOKE_STATEID compound arrives on a
session whose owning client presented EXCHGID4_FLAG_USE_PNFS_MDS
in its EXCHANGE_ID args.  Requests that arrive on any other
session <bcp14>MUST</bcp14> be rejected with NFS4ERR_PERM.  This is the sole
access control on these operations; a pNFS client connecting to
the storage device does not present EXCHGID4_FLAG_USE_PNFS_MDS
and therefore cannot invoke them.</t>
          <t>The EXCHGID4_FLAG_USE_PNFS_MDS check replaces any path- or
filehandle-level gating.  TRUST_STATEID operates on a filehandle
that may be any file on the storage device, and the metadata
server is the sole authority that can legitimately speak this
protocol.</t>
          <t>Because the EXCHGID4_FLAG_USE_PNFS_MDS check relies on the owning
client's self-declaration at EXCHANGE_ID time, the storage device
cannot by itself distinguish a legitimate metadata server from any
other host that sets the flag.  Deployments are therefore
responsible for constraining who can establish a control session
in the first place.  Two mechanisms are <bcp14>RECOMMENDED</bcp14>:</t>
          <ol spacing="normal" type="1"><li>
              <t>The control session <bcp14>SHOULD</bcp14> use RPCSEC_GSS with a machine
principal that the storage device has been configured to
accept as a metadata server.  The storage device validates
the principal before accepting EXCHANGE_ID with
EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>Alternatively, the control session <bcp14>SHOULD</bcp14> run over a
network path isolated from pNFS clients (for example, a
dedicated management VLAN or mutual TLS (<xref target="RFC9289"/>) with
an allowlisted client certificate), such that only
configured metadata servers can reach the storage device on
that path.</t>
            </li>
          </ol>
          <t>Deploying neither mechanism reduces the authorization strength
of TRUST_STATEID and the revocation operations to "any host
that can reach the storage device can invoke them"; a strict
deployment <bcp14>MUST</bcp14> apply at least one of the above.</t>
        </section>
        <section anchor="sec-tight-coupling-layoutget">
          <name>Flow at LAYOUTGET</name>
          <t>For each new or refreshed layout segment, the metadata server:</t>
          <ol spacing="normal" type="1"><li>
              <t>chooses the layout stateid (as it would without tight coupling);</t>
            </li>
            <li>
              <t>identifies the tight-coupling-capable storage devices in the
mirror set (those for which ffdv_tightly_coupled is true);</t>
            </li>
            <li>
              <t>fans out TRUST_STATEID to each such storage device,
specifying the layout stateid, the layout iomode, a
tsa_expire derived from the metadata server's lease (see
<xref target="sec-tight-coupling-lease"/>), and the client's authenticated
identity in tsa_principal;</t>
            </li>
            <li>
<t>waits for all fan-outs to complete (or reach their
per-storage-device timeout) before returning the layout.</t>
            </li>
          </ol>
          <t>If every storage device in the mirror set rejects the TRUST_STATEID
fan-out, the metadata server <bcp14>MUST NOT</bcp14> return the layout; instead it
returns NFS4ERR_LAYOUTTRYLATER.  If some storage devices accept and
others reject, the metadata server <bcp14>MAY</bcp14> return a layout covering
only the accepting storage devices, subject to the mirror-set rules
for minimum acceptable coverage.  A storage device that returns
NFS4ERR_DELAY is retried until either success or the metadata
server's LAYOUTGET-response budget is exhausted.  If a storage
device returns NFS4ERR_NOTSUPP at this time (having accepted the
probe earlier), the metadata server <bcp14>MUST</bcp14> clear
ffdv_tightly_coupled for this storage device, fall back to loose
coupling, and re-issue the layout accordingly.</t>
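          <t>A non-normative sketch of the fan-out for an RPCSEC_GSS client on
a mirror set whose storage devices are all tight-coupling capable (the
PUTFH target and the principal shown are illustrative):</t>
          <figure anchor="fig-trust-fanout">
            <name>TRUST_STATEID fan-out at LAYOUTGET (sketch)</name>
            <artwork><![CDATA[
client          metadata server              each capable DS
  |-- LAYOUTGET ----->|
  |                   |-- SEQUENCE + PUTFH(data file) +
  |                   |   TRUST_STATEID(
  |                   |     tsa_layout_stateid = <layout stateid>,
  |                   |     tsa_iomode         = LAYOUTIOMODE4_RW,
  |                   |     tsa_expire         = now + lease period,
  |                   |     tsa_principal      = "alice@REALM") -->|
  |                   |<-- NFS4_OK ------------------------------|
  |<-- layout --------|
]]></artwork>
          </figure>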
        </section>
        <section anchor="sec-tight-coupling-principal">
          <name>Principal Binding and the Kerberos Gap</name>
          <t>Flex files v1 has a known gap: a client authenticated to the
metadata server with Kerberos has no way to present the same
authenticated identity to the storage device, because v1 layouts
carry only ffds_user / ffds_group (POSIX uid/gid for AUTH_SYS).  A
strict Kerberos deployment on v1 must either allow AUTH_SYS from
the metadata server's subnet or accept that the v1 data path is
not Kerberos-protected.</t>
          <t>The tsa_principal field in TRUST_STATEID closes that gap.  When a
client authenticates to the metadata server as a Kerberos
principal (e.g., alice@REALM), the metadata server passes that
principal name to each storage device in tsa_principal.  The
storage device then enforces a two-part check on each CHUNK
operation that presents the layout stateid:</t>
          <ol spacing="normal" type="a"><li>
              <t>the stateid is in the trust table and has not expired; and</t>
            </li>
            <li>
              <t>the caller's authenticated identity (the RPCSEC_GSS display
name on the CHUNK compound) matches tsa_principal.</t>
            </li>
          </ol>
          <t>Both conditions <bcp14>MUST</bcp14> hold.  On principal mismatch the storage
device <bcp14>MUST</bcp14> return NFS4ERR_ACCESS -- the semantics are "you do
not have an authorized layout for this file", which matches the
existing fencing error and avoids the confusion of
NFS4ERR_WRONGSEC (which directs the client to re-authenticate
with a different flavor) or NFS4ERR_BAD_STATEID (which directs
the client to return the layout).</t>
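          <t>A non-normative pseudocode sketch of this two-part admission
check (the helper names are illustrative, not protocol elements):</t>
          <figure anchor="fig-admission-check">
            <name>Storage device admission check (sketch)</name>
            <artwork><![CDATA[
/* Illustrative pseudocode only. */
entry = trust_table_lookup(op.stateid)
if entry is absent or now() > entry.tsa_expire:
    return NFS4ERR_BAD_STATEID   /* not trusted, or expired  */
if entry.tsa_principal != "" and
   gss_display_name(op) != entry.tsa_principal:
    return NFS4ERR_ACCESS        /* principal mismatch       */
admit the I/O
]]></artwork>
          </figure>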
          <t>The metadata server <bcp14>MUST</bcp14> populate tsa_principal with the
RPCSEC_GSS display name of the authenticated client when the
client authenticated to the metadata server via RPCSEC_GSS.  The
metadata server <bcp14>MUST</bcp14> set tsa_principal to the empty string only
for AUTH_SYS and TLS clients (for which there is no
server-verified per-user identity).  Setting tsa_principal to the empty
string for an RPCSEC_GSS client disables the principal check on
the storage device and silently re-opens the flex files v1
Kerberos gap; it is a metadata server bug, not a protocol option.</t>
          <t>If tsa_principal is the empty string, no principal check applies.
This is the expected setting for AUTH_SYS and TLS clients:</t>
          <ul spacing="normal">
            <li>
              <t>AUTH_SYS clients have no server-verified identity.  The
storage device's stateid check and the AUTH_SYS uid/gid on the
data file together constitute the authorization.  In a tightly
coupled deployment the data file's owner/group need not match
the metadata file's, since ffv2ds_user and ffv2ds_group are
ignored (see <xref target="sec-ffv2-mirror4"/>).</t>
            </li>
            <li>
              <t>TLS clients have transport-layer authentication via mutual TLS
(<xref target="RFC9289"/>).  The TLS layer authenticates the client machine;
the stateid check confirms the metadata server authorized that
machine to access this file.  The machine-level authentication
is handled beneath the RPC layer and is not reflected in
tsa_principal.  Opportunistic TLS (STARTTLS without certificate
verification) provides encryption but not authentication, and
therefore has the same authorization properties as plain
AUTH_SYS.</t>
            </li>
          </ul>
          <t>When a client's I/O is routed through a proxy server (PS) -- that
is, the layout the metadata server returns to the client has
FFV2_DS_FLAGS_PROXY set on the proxy's ffv2_data_server4 entry,
per the companion Data Mover draft -- the storage device observes
CHUNK operations arriving from the PS's address rather than from
the client directly.  The tsa_principal the metadata server
populates in TRUST_STATEID is the principal the <em>storage device</em>
will observe on those CHUNK operations, and the Data Mover
draft's credential-forwarding rules (in particular rule 1,
"Credential pass-through") require the PS to forward the
client's credentials verbatim on every CHUNK operation it issues
on the client's behalf.  Therefore:</t>
          <ul spacing="normal">
            <li>
              <t>For an RPCSEC_GSS client whose I/O is proxied through a PS,
the metadata server <bcp14>MUST</bcp14> set tsa_principal to the client's
RPCSEC_GSS display name (identical to the non-proxied case).
The storage device's principal check on CHUNK operations will
match against the client's principal on the forwarded
compound, not the PS's service identity.</t>
            </li>
            <li>
              <t>For an AUTH_SYS client whose I/O is proxied through a PS,
the metadata server <bcp14>MUST</bcp14> set tsa_principal to the empty
string (identical to the non-proxied case).  The PS forwards
the client's AUTH_SYS uid/gid; the storage device's stateid
check plus the forwarded AUTH_SYS uid/gid constitute the
authorization.</t>
            </li>
          </ul>
          <t>The metadata server <bcp14>MUST NOT</bcp14> set tsa_principal to the PS's own
service principal.  Doing so would require the PS to
authenticate to the storage device as itself (bypassing
credential forwarding) which is explicitly prohibited by the
Data Mover draft's rule 4 ("PS service identity is for the
control plane only").</t>
        </section>
        <section anchor="sec-tight-coupling-trust-gap">
          <name>Client-Detected Trust Gap</name>
          <t>A window exists between a successful TRUST_STATEID fan-out and
the client's first I/O to the storage device.  A transient failure
may cause the storage device to forget or reject the entry before
the client's first CHUNK_WRITE arrives.  The client cannot
distinguish this case from legitimate revocation; both surface as
NFS4ERR_BAD_STATEID on the storage device.</t>
          <t>The recovery path is as follows:</t>
          <ol spacing="normal" type="1"><li>
              <t>The client sends LAYOUTERROR(layout_stateid, device_id,
NFS4ERR_BAD_STATEID) to the metadata server.</t>
            </li>
            <li>
              <t>The metadata server retries TRUST_STATEID against the
reporting storage device.  If the retry succeeds, the
metadata server returns NFS4_OK for LAYOUTERROR.  The client
retries the original I/O.</t>
            </li>
            <li>
              <t>If the retry fails -- the storage device is unreachable or
returns a hard error -- the metadata server issues
CB_LAYOUTRECALL for that device and the client returns the
layout segment covering that storage device.  The client is
expected to re-request via LAYOUTGET.</t>
            </li>
          </ol>
          <t>This is the same LAYOUTERROR path used for NFS4ERR_ACCESS or
NFS4ERR_PERM in the fencing model (see <xref target="sec-Fencing-Clients"/>),
with the metadata server's action being "retry TRUST_STATEID"
instead of "rotate uid/gid".</t>
        </section>
        <section anchor="sec-tight-coupling-lease">
          <name>Lease and Renewal</name>
          <t>tsa_expire in a TRUST_STATEID request is a wall-clock expiry
instant expressed as an nfstime4.  The metadata server <bcp14>MUST</bcp14> set
tsa_expire to the current wall-clock time plus the metadata
server's client lease period.</t>
          <t>The metadata server <bcp14>MUST</bcp14> re-issue TRUST_STATEID for an entry
before tsa_expire while the corresponding layout is outstanding.
The <bcp14>RECOMMENDED</bcp14> trigger is: when an entry is within half the
lease period of its tsa_expire, re-issue TRUST_STATEID with a
refreshed tsa_expire.  Renewing on every SEQUENCE that keeps the
layout stateid alive is correct but produces
metadata-server-to-storage-device traffic proportional to the
client's SEQUENCE rate, which is undesirable in steady state.</t>
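          <t>A non-normative sketch of the <bcp14>RECOMMENDED</bcp14> trigger and the
refreshed expiry follows; the lease-period plumbing is an
illustrative assumption.</t>
          <figure>
            <name>Non-Normative Sketch: TRUST_STATEID Renewal Trigger</name>
            <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <time.h>

   /* Renew once the entry is within half a lease period of its
    * expiry, rather than on every client SEQUENCE. */
   bool needs_renewal(time_t tsa_expire, time_t lease_period)
   {
       return time(NULL) >= tsa_expire - lease_period / 2;
   }

   /* A renewal re-issues TRUST_STATEID with a refreshed expiry
    * of now plus the metadata server's client lease period. */
   time_t refreshed_tsa_expire(time_t lease_period)
   {
       return time(NULL) + lease_period;
   }
]]></sourcecode>
          </figure>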
          <t>If an entry expires on the storage device before the metadata
server renews it -- for example, because the metadata server is
partitioned from the storage device for longer than the lease
period -- the storage device <bcp14>MUST</bcp14> return NFS4ERR_BAD_STATEID to
the client on the next CHUNK operation.  The client returns the
layout to the metadata server and re-requests.  This is the same
recovery path as the trust gap described above.</t>
        </section>
        <section anchor="sec-tight-coupling-ds-crash">
          <name>Storage Device Crash Recovery</name>
          <t>The trust table is volatile.  The storage device <bcp14>MUST NOT</bcp14> persist
trust entries across restarts; a storage device restart therefore
empties the trust table.</t>
          <t>The client detects a storage device restart via NFS4ERR_BADSESSION
or NFS4ERR_STALE_CLIENTID on its data server session.  The client
returns the affected layout segment to the metadata server via
LAYOUTRETURN and re-requests via LAYOUTGET.  The metadata server
then fans out fresh TRUST_STATEID operations to the recovered
storage device.</t>
          <t>Before a planned restart (software upgrade, etc.), the storage
device <bcp14>SHOULD</bcp14> drain in-flight CHUNK operations before shutting
down.</t>
        </section>
        <section anchor="sec-tight-coupling-mds-crash">
          <name>Metadata Server Crash Recovery</name>
          <t>When the metadata server restarts, its control sessions to the
storage devices are lost.  Trust entries remain on the storage
devices until tsa_expire, but the metadata server is no longer
renewing them; the entries are effectively orphaned until the
metadata server completes grace.</t>
          <t>When the metadata server reconnects to a storage device with a
new boot epoch -- that is, the EXCHANGE_ID returns a new server
owner on the storage device's view of the metadata server -- the
storage device <bcp14>SHOULD</bcp14> mark all trust entries established under
the prior metadata-server epoch as pending-revalidation.  While an
entry is pending-revalidation:</t>
          <ul spacing="normal">
            <li>
              <t>I/O that presents the entry's stateid <bcp14>MUST</bcp14> receive
NFS4ERR_DELAY, not NFS4ERR_BAD_STATEID.  NFS4ERR_DELAY tells
the client to retry with the same stateid -- the metadata
server is recovering and may yet revalidate the entry.
NFS4ERR_BAD_STATEID would instead cause the client to return
the layout immediately, producing a thundering herd against
the metadata server during grace.</t>
            </li>
            <li>
              <t>An entry remains pending-revalidation until the metadata
server either re-issues TRUST_STATEID for it (which transitions
it back to trusted) or until the entry's tsa_expire elapses
(which removes it).</t>
            </li>
          </ul>
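          <t>A non-normative sketch of the storage device's stateid check
while the metadata server is in grace follows; the state
enumeration and entry layout are illustrative assumptions.</t>
          <figure>
            <name>Non-Normative Sketch: Pending-Revalidation Check</name>
            <sourcecode type="c"><![CDATA[
   #include <time.h>

   #define NFS4_OK             0
   #define NFS4ERR_DELAY       10008
   #define NFS4ERR_BAD_STATEID 10025

   /* Illustrative per-entry state after a metadata-server epoch
    * change. */
   enum trust_state { TRUSTED, PENDING_REVALIDATION };

   struct epoch_entry {
       enum trust_state state;
       time_t           tsa_expire;
   };

   int check_entry_during_grace(const struct epoch_entry *te)
   {
       if (te == NULL || time(NULL) >= te->tsa_expire)
           return NFS4ERR_BAD_STATEID;  /* removed or expired */
       if (te->state == PENDING_REVALIDATION)
           return NFS4ERR_DELAY;        /* retry, same stateid */
       return NFS4_OK;
   }
]]></sourcecode>
          </figure>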
          <t>The metadata server's recovery sequence is:</t>
          <ol spacing="normal" type="1"><li>
              <t>Reconnect to each storage device and establish a fresh
control session.</t>
            </li>
            <li>
              <t>Optionally issue BULK_REVOKE_STATEID with an all-zeros
clientid to each storage device.  This clears the prior trust
table eagerly; skipping this step is correct, because orphan
entries expire via tsa_expire.</t>
            </li>
            <li>
              <t>Enter grace and accept RECLAIM operations from clients.  For
each reclaimed layout, fan out TRUST_STATEID to the relevant
storage devices.</t>
            </li>
            <li>
              <t>Exit grace.  Clients that did not reclaim in time have their
state revoked; the metadata server issues REVOKE_STATEID or
BULK_REVOKE_STATEID on their behalf.</t>
            </li>
          </ol>
          <t>Metadata servers <bcp14>SHOULD</bcp14> persist the set of outstanding
TRUST_STATEID entries (clientid, layout stateid, storage device
address, tsa_expire) to stable storage.  With this persistence
the metadata server can re-issue TRUST_STATEID for all known
entries immediately upon reconnecting to each storage device,
before clients begin reclaiming.  This shrinks the window during
which the storage device returns NFS4ERR_DELAY for client I/O.
Persistence is a latency optimization, not a correctness
requirement: the re-layout path handles recovery in all cases.</t>
        </section>
        <section anchor="sec-tight-coupling-compat">
          <name>Backward Compatibility</name>
          <ul spacing="normal">
            <li>
              <t>NFSv3 storage devices are unchanged.  They are always treated
as loosely coupled; TRUST_STATEID does not exist on NFSv3
servers.</t>
            </li>
            <li>
              <t>NFSv4.2 storage devices for which the TRUST_STATEID probe
returns NFS4ERR_NOTSUPP are treated as loosely coupled;
fencing is the only revocation mechanism, the same as for
NFSv3.</t>
            </li>
            <li>
              <t>NFSv4.2 storage devices for which the probe returns
NFS4ERR_INVAL support tight coupling; the metadata server uses
TRUST_STATEID at LAYOUTGET and REVOKE_STATEID or
BULK_REVOKE_STATEID for revocation instead of fencing.</t>
            </li>
          </ul>
          <t>A single deployment <bcp14>MAY</bcp14> contain a mix of tight-coupled and
loose-coupled storage devices; each is negotiated independently
via the probe.</t>
        </section>
      </section>
    </section>
    <section anchor="device-addressing-and-discovery">
      <name>Device Addressing and Discovery</name>
      <t>Data operations to a storage device require the client to know the
network address of the storage device.  The NFSv4.1+ GETDEVICEINFO
operation (Section 18.40 of <xref target="RFC8881"/>) is used by the client to
retrieve that information.</t>
      <section anchor="sec-ff_device_addr4">
        <name>ff_device_addr4</name>
        <t>The ff_device_addr4 data structure (see <xref target="fig-ff_device_addr4"/>)
is returned by the server, in a successful GETDEVICEINFO operation,
as the layout-type-specific opaque field da_addr_body of the
device_addr4 structure.</t>
        <t>The ff_device_versions4 and ff_device_addr4 structures are
reused unchanged from <xref target="RFC8435"/>; they are reproduced here for
reader convenience and are not part of the XDR extracted from
this document.</t>
        <figure anchor="fig-ff_device_versions4">
          <name>ff_device_versions4 (reused from RFC 8435)</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_versions4 {
           uint32_t        ffdv_version;
           uint32_t        ffdv_minorversion;
           uint32_t        ffdv_rsize;
           uint32_t        ffdv_wsize;
           bool            ffdv_tightly_coupled;
   };
]]></sourcecode>
        </figure>
        <figure anchor="fig-ff_device_addr4">
          <name>ff_device_addr4 (reused from RFC 8435)</name>
          <sourcecode type="xdr"><![CDATA[
   struct ff_device_addr4 {
           multipath_list4     ffda_netaddrs;
           ff_device_versions4 ffda_versions<>;
   };
]]></sourcecode>
        </figure>
        <t>The ffda_netaddrs field is used to locate the storage device.  It
<bcp14>MUST</bcp14> be set by the server to a list holding one or more of the device
network addresses.</t>
        <t>The ffda_versions array allows the metadata server to present choices
as to NFS version, minor version, and coupling strength to the
client.  The ffdv_version and ffdv_minorversion represent the NFS
protocol to be used to access the storage device.  This layout
specification defines the semantics for ffdv_version values 3 and 4.  If
ffdv_version equals 3, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to
0 and ffdv_tightly_coupled to false.  The client <bcp14>MUST</bcp14> then access the
storage device using the NFSv3 protocol <xref target="RFC1813"/>.  If ffdv_version
equals 4, then the server <bcp14>MUST</bcp14> set ffdv_minorversion to one of the
NFSv4 minor version numbers, and the client <bcp14>MUST</bcp14> access the storage
device using NFSv4 with the specified minor version.</t>
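        <t>The following non-normative sketch shows a client scanning
ffda_versions for a combination it can use, applying the version
rules above.  The structure mirrors the XDR in
<xref target="fig-ff_device_versions4"/>; the client_supports helper
is an illustrative placeholder.</t>
        <figure>
          <name>Non-Normative Sketch: Selecting an ffda_versions Entry</name>
          <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   /* C mirror of the XDR reproduced above. */
   struct ff_device_versions4 {
       uint32_t ffdv_version;
       uint32_t ffdv_minorversion;
       uint32_t ffdv_rsize;
       uint32_t ffdv_wsize;
       bool     ffdv_tightly_coupled;
   };

   /* Client-implemented: can this client speak NFS vers.minor? */
   extern bool client_supports(uint32_t vers, uint32_t minor);

   const struct ff_device_versions4 *
   pick_version(const struct ff_device_versions4 *v, size_t n)
   {
       for (size_t i = 0; i < n; i++) {
           /* ffdv_version 3 MUST carry minorversion 0 and loose
            * coupling; skip malformed entries. */
           if (v[i].ffdv_version == 3 &&
               (v[i].ffdv_minorversion != 0 ||
                v[i].ffdv_tightly_coupled))
               continue;
           if (client_supports(v[i].ffdv_version,
                               v[i].ffdv_minorversion))
               return &v[i];
       }
       return NULL;   /* version incompatible; see below */
   }
]]></sourcecode>
        </figure>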
        <t>Note that while the client might determine, when it gets the
device list from the metadata server, that it cannot use any of the
configured combinations of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled, there is no way for it to indicate to the
metadata server which device is version incompatible.  However, if
the client waits until it retrieves the layout from the metadata
server, it can at that time clearly identify the storage device in
question (see <xref target="sec-version-errors"/>).</t>
        <t>The ffdv_rsize and ffdv_wsize are used to communicate the maximum
rsize and wsize supported by the storage device.  As the storage
device can have a different rsize or wsize than the metadata server,
the ffdv_rsize and ffdv_wsize allow the metadata server to
communicate that information on behalf of the storage device.</t>
        <t>ffdv_tightly_coupled informs the client as to whether the
metadata server is tightly coupled with this storage device.  Note
that even if the data protocol is at least NFSv4.1, it may still
be the case that there is loose coupling in effect.  For an NFSv4.2
storage device, the metadata server sets ffdv_tightly_coupled to
true only after confirming the storage device implements the
TRUST_STATEID control protocol via the capability probe described
in <xref target="sec-tight-coupling-probe"/>.  An NFSv4.2 storage device that
does not implement TRUST_STATEID (returning NFS4ERR_NOTSUPP to the
probe) <bcp14>MUST</bcp14> be advertised with ffdv_tightly_coupled set to false.</t>
        <t>If ffdv_tightly_coupled is not set, then the client <bcp14>MUST</bcp14> commit
writes to the storage devices for the file before sending a
LAYOUTCOMMIT to the metadata server.  That is, the writes <bcp14>MUST</bcp14> be
committed by the client to stable storage via issuing WRITEs with
stable_how == FILE_SYNC or by issuing a COMMIT after WRITEs with
stable_how != FILE_SYNC (see Section 3.3.7 of <xref target="RFC1813"/>).</t>
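        <t>A non-normative sketch of this ordering rule follows; the I/O
wrappers are illustrative placeholders for the client's data-path
and metadata-path plumbing.</t>
        <figure>
          <name>Non-Normative Sketch: Loose-Coupling Commit Ordering</name>
          <sourcecode type="c"><![CDATA[
   #include <stdbool.h>

   enum stable_how { UNSTABLE4, DATA_SYNC4, FILE_SYNC4 };

   /* Client-implemented wrappers. */
   extern int ds_write(enum stable_how how);  /* WRITE to DS   */
   extern int ds_commit(void);                /* COMMIT to DS  */
   extern int mds_layoutcommit(void);         /* LAYOUTCOMMIT  */

   int flush_before_layoutcommit(bool tightly_coupled,
                                 enum stable_how how)
   {
       int err = ds_write(how);
       if (err)
           return err;
       /* Loose coupling: data MUST be stable on the storage
        * device before LAYOUTCOMMIT goes to the MDS. */
       if (!tightly_coupled && how != FILE_SYNC4) {
           err = ds_commit();
           if (err)
               return err;
       }
       return mds_layoutcommit();
   }
]]></sourcecode>
        </figure>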
      </section>
      <section anchor="storage-device-multipathing">
        <name>Storage Device Multipathing</name>
        <t>The flexible file layout type supports multipathing to multiple
storage device addresses.  Storage-device-level multipathing is used
for bandwidth scaling via trunking and for higher availability in
the event of a storage device failure.  Multipathing allows the
client to switch to another storage device address that may be that
of another storage device that is exporting the same data stripe
unit, without having to contact the metadata server for a new layout.</t>
        <t>To support storage device multipathing, ffda_netaddrs contains an
array of one or more storage device network addresses.  This array
(data type multipath_list4) represents a list of storage devices
(each identified by a network address), with the possibility that
some storage device will appear in the list multiple times.</t>
        <t>The client is free to use any of the network addresses as a
destination to send storage device requests.  If some network
addresses are less desirable paths to the data than others, then the
metadata server <bcp14>SHOULD NOT</bcp14> include those network addresses in
ffda_netaddrs.  If less desirable network addresses exist to provide
failover, the <bcp14>RECOMMENDED</bcp14> method to offer the addresses is to provide
them in a replacement device-ID-to-device-address mapping or a
replacement device ID.  When a client finds no response from the
storage device using all addresses available in ffda_netaddrs, it
<bcp14>SHOULD</bcp14> send a GETDEVICEINFO to attempt to replace the existing
device-ID-to-device-address mappings.  If the metadata server detects
that all network paths represented by ffda_netaddrs are unavailable,
the metadata server <bcp14>SHOULD</bcp14> send a CB_NOTIFY_DEVICEID (if the client
has indicated it wants device ID notifications for changed device
IDs) to change the device-ID-to-device-address mappings to the
available addresses.  If the device ID itself will be replaced, the
metadata server <bcp14>SHOULD</bcp14> recall all layouts with the device ID and thus
force the client to get new layouts and device ID mappings via
LAYOUTGET and GETDEVICEINFO.</t>
        <t>Generally, if two network addresses appear in ffda_netaddrs, they
will designate the same storage device.  When the storage device is
accessed over NFSv4.1 or a higher minor version, the two storage
device addresses will support the implementation of client ID or
session trunking (the latter is <bcp14>RECOMMENDED</bcp14>) as defined in <xref target="RFC8881"/>.
The two storage device addresses will share the same server owner or
major ID of the server owner.  It is not always necessary for the two
storage device addresses to designate the same storage device with
trunking in use; for example, the data could be read-only and
consist of exact replicas.</t>
      </section>
    </section>
    <section anchor="flexible-file-version-2-layout-type">
      <name>Flexible File Version 2 Layout Type</name>
      <t>The original layouttype4 introduced in <xref target="RFC5662"/> is extended as shown in
<xref target="fig-orig-layout"/>.  The layout_content4 and layout4 structures are
reused unchanged from <xref target="RFC5662"/>; the layouttype4 enum is extended
with the new LAYOUT4_FLEX_FILES_V2 value.  The full enum and
surrounding structures below are reproduced for reader
convenience; only the new constant LAYOUT4_FLEX_FILES_V2 is part
of the XDR extracted from this document (see
<xref target="fig-orig-layout-extract"/>).</t>
      <figure anchor="fig-orig-layout">
        <name>The original layout type (illustrative; reused from RFC 5662 with extension)</name>
        <sourcecode type="xdr"><![CDATA[
       enum layouttype4 {
           LAYOUT4_NFSV4_1_FILES   = 1,
           LAYOUT4_OSD2_OBJECTS    = 2,
           LAYOUT4_BLOCK_VOLUME    = 3,
           LAYOUT4_FLEX_FILES      = 4,
           LAYOUT4_SCSI            = 5,
           LAYOUT4_FLEX_FILES_V2   = 6
       };

       struct layout_content4 {
           layouttype4             loc_type;
           opaque                  loc_body<>;
       };

       struct layout4 {
           offset4                 lo_offset;
           length4                 lo_length;
           layoutiomode4           lo_iomode;
           layout_content4         lo_content;
       };
]]></sourcecode>
      </figure>
      <t>The extracted XDR contribution for this extension is the new
layouttype4 constant alone:</t>
      <figure anchor="fig-orig-layout-extract">
        <name>New layouttype4 value (extracted)</name>
        <sourcecode type="xdr"><![CDATA[
   /// const LAYOUT4_FLEX_FILES_V2 = 6;
]]></sourcecode>
      </figure>
      <t>This document defines structures associated with the layouttype4
value LAYOUT4_FLEX_FILES_V2.  <xref target="RFC8881"/> specifies the loc_body structure
as an XDR type "opaque".  The opaque layout is uninterpreted by the
generic pNFS client layers but is interpreted by the flexible file
layout type implementation.  This section defines the structure of
this otherwise opaque value, ffv2_layout4.</t>
      <section anchor="ffv2codingtype4">
        <name>ffv2_coding_type4</name>
        <figure anchor="fig-ffv2_coding_type4">
          <name>The coding type</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_coding_type4 {
   ///     FFV2_CODING_MIRRORED                  = 1,
   ///     FFV2_ENCODING_MOJETTE_SYSTEMATIC      = 2,
   ///     FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC  = 3,
   ///     FFV2_ENCODING_RS_VANDERMONDE          = 4
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type4 (see <xref target="fig-ffv2_coding_type4"/>) establishes
a new IANA registry, the 'Flexible Files Version 2 Erasure Coding
Type Registry'.  I.e., instead of defining a new Layout Type for
each Erasure Coding algorithm, we define a new Erasure Coding Type.  Except
for FFV2_CODING_MIRRORED, each of the types is expected to employ
the new operations in this document.</t>
        <t>The 32-bit ffv2_coding_type4 value space is partitioned by
intended scope -- Standards Track, Experimental, Vendor (open),
and Private / proprietary -- with different allocation policies
per range, so that vendors can assign codec values without
consuming standards-track codepoints.  See
<xref target="tbl-coding-ranges"/> and the accompanying prose in
<xref target="iana-considerations"/> for the range assignments and allocation
policies.</t>
        <t>FFV2_CODING_MIRRORED offers replication of data, not integrity of
data.  As such, it does not need operations like CHUNK_WRITE (see
<xref target="sec-CHUNK_WRITE"/>).</t>
        <section anchor="encoding-type-interoperability">
          <name>Encoding Type Interoperability</name>
          <t>The data servers do not interpret erasure-coded data -- they store and
return opaque chunks.  The NFS wire protocol likewise does not depend
on the encoding mathematics.  However, a client that writes data using
one encoding type <bcp14>MUST</bcp14> be able to read it back, and a different
client implementation <bcp14>MUST</bcp14> be able to read data written by the first
client if both claim to support the same encoding type.</t>
          <t>This interoperability requirement means that each registered
encoding type <bcp14>MUST</bcp14> fully specify the encoding and decoding
mathematics such that two independent implementations produce
byte-identical encoded output for the same input.  The specification
of a new encoding type <bcp14>MUST</bcp14> include one of the following:</t>
          <ol spacing="normal" type="1"><li>
              <t>A complete mathematical specification of the encoding and decoding
algorithms, including all parameters (e.g., field polynomial,
matrix construction, element size) sufficient for an independent
implementation to produce interoperable results.</t>
            </li>
            <li>
              <t>A reference to a published patent or pending patent application
that contains the algorithm specification.  Implementors can then
evaluate the licensing terms and decide whether to support the
encoding type.</t>
            </li>
            <li>
              <t>A declaration that the encoding type is a proprietary
implementation.  In this case, the encoding type name <bcp14>SHOULD</bcp14>
include an organizational prefix (e.g.,
FFV2_ENCODING_ACME_FOOBAR) to signal that interoperability is
limited to implementations licensed by that organization.</t>
            </li>
          </ol>
          <t>Option 1 is <bcp14>RECOMMENDED</bcp14> for encoding types intended for broad
interoperability.  Options 2 and 3 allow vendors to register encoding
types for use within their own ecosystems while preserving the
encoding type namespace.</t>
          <t>The rationale for this requirement is that erasure coding moves
computation from the server to the client.  If the client cannot
determine how data was encoded, it cannot decode it.  Unlike layout
types (where the server controls the storage format), encoding types
require client-side agreement on the mathematics.</t>
        </section>
      </section>
      <section anchor="sec-ffv2_layout">
        <name>ffv2_layout4</name>
        <section anchor="sec-ffv2_flags4">
          <name>ffv2_flags4</name>
          <figure anchor="fig-ffv2_flags4">
            <name>The ffv2_flags4</name>
            <sourcecode type="xdr"><![CDATA[
   /// const FFV2_FLAGS_NO_LAYOUTCOMMIT  = FF_FLAGS_NO_LAYOUTCOMMIT;
   /// const FFV2_FLAGS_NO_IO_THRU_MDS   = FF_FLAGS_NO_IO_THRU_MDS;
   /// const FFV2_FLAGS_NO_READ_IO       = FF_FLAGS_NO_READ_IO;
   /// const FFV2_FLAGS_WRITE_ONE_MIRROR =
   ///     FF_FLAGS_WRITE_ONE_MIRROR;
   /// const FFV2_FLAGS_ONLY_ONE_WRITER  = 0x00000010;
   ///
   /// typedef uint32_t            ffv2_flags4;
]]></sourcecode>
          </figure>
          <t>The ffv2_flags4 in <xref target="fig-ffv2_flags4"/>  is a bitmap that allows the
metadata server to inform the client of particular conditions that
may result from more or less tight coupling of the storage devices.</t>
          <t>Each flag below describes both the semantics when set and the
normative requirement it places on the client.  When a flag is
not set, the client <bcp14>MUST</bcp14> follow the default behavior described
for its unset state.</t>
          <dl>
            <dt>FFV2_FLAGS_NO_LAYOUTCOMMIT:</dt>
            <dd>
              <t>When set, the client <bcp14>MAY</bcp14> omit the LAYOUTCOMMIT to the
metadata server.  When unset, the client <bcp14>MUST</bcp14> send LAYOUTCOMMIT
per <xref target="RFC8881"/> Section 18.42.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_IO_THRU_MDS:</dt>
            <dd>
              <t>When set, the client <bcp14>MUST NOT</bcp14> proxy I/O operations through
the metadata server, even after detecting a network disconnect
to a storage device.  When unset, the client <bcp14>MAY</bcp14> retry failed
I/O via the metadata server.</t>
            </dd>
            <dt>FFV2_FLAGS_NO_READ_IO:</dt>
            <dd>
              <t>When set, the client <bcp14>MUST NOT</bcp14> issue READ against layouts of
iomode LAYOUTIOMODE4_RW, and <bcp14>MUST</bcp14> instead request a separate
layout of iomode LAYOUTIOMODE4_READ for any read I/O.  When
unset, the client <bcp14>MAY</bcp14> issue READ against either iomode.</t>
            </dd>
            <dt>FFV2_FLAGS_WRITE_ONE_MIRROR:</dt>
            <dd>
              <t>When set, the client <bcp14>MAY</bcp14> update only one mirror of each
layout segment (see <xref target="sec-CSM"/>) and rely on the metadata
server or a peer data server to propagate the update to the
remaining mirrors.  When unset, the client <bcp14>MUST</bcp14> update all
mirrors.</t>
            </dd>
            <dt>FFV2_FLAGS_ONLY_ONE_WRITER:</dt>
            <dd>
              <t>When set, the client is the exclusive writer for the layout
and <bcp14>MAY</bcp14> issue CHUNK_WRITE without setting cwa_guard, retaining
the ability to use CHUNK_ROLLBACK in the event of a write hole
caused by overwriting.  When unset, the client <bcp14>MUST</bcp14> set
cwa_guard on every CHUNK_WRITE so that chunk_guard4 CAS can
prevent collisions across concurrent writers.</t>
            </dd>
          </dl>
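          <t>The following non-normative sketch shows the cwa_guard decision
implied by FFV2_FLAGS_ONLY_ONE_WRITER; the flag value mirrors the
XDR above, and the surrounding CHUNK_WRITE argument handling is
assumed.</t>
          <figure>
            <name>Non-Normative Sketch: Guard Decision for CHUNK_WRITE</name>
            <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   /* Mirrors the XDR constant above. */
   #define FFV2_FLAGS_ONLY_ONE_WRITER 0x00000010

   static bool must_set_guard(uint32_t ffl_flags)
   {
       /* An exclusive writer MAY omit the guard (keeping
        * CHUNK_ROLLBACK available); everyone else MUST guard
        * each CHUNK_WRITE so the chunk_guard4 compare-and-swap
        * can detect collisions across concurrent writers. */
       return (ffl_flags & FFV2_FLAGS_ONLY_ONE_WRITER) == 0;
   }
]]></sourcecode>
          </figure>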
        </section>
      </section>
      <section anchor="ffv2fileinfo4">
        <name>ffv2_file_info4</name>
        <figure anchor="fig-ffv2_file_info4">
          <name>The ffv2_file_info4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_file_info4 {
   ///     stateid4                fffi_stateid;
   ///     nfs_fh4                 fffi_fh_vers;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_file_info4 is a new structure that addresses the stateid
issue discussed in Section 5.1 of <xref target="RFC8435"/>.  I.e., in version 1
of the Flexible File Layout Type, the singleton ffds_stateid was
shared across the whole ffds_fh_vers array, even though each NFSv4
version needs its own stateid.  In <xref target="fig-ffv2_file_info4"/>, each NFSv4
filehandle has a one-to-one correspondence to a stateid.</t>
      </section>
      <section anchor="sec-ffv2_ds_flags4">
        <name>ffv2_ds_flags4</name>
        <figure anchor="fig-ffv2_ds_flags4">
          <name>The ffv2_ds_flags4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const FFV2_DS_FLAGS_ACTIVE        = 0x00000001;
   /// const FFV2_DS_FLAGS_SPARE         = 0x00000002;
   /// const FFV2_DS_FLAGS_PARITY        = 0x00000004;
   /// const FFV2_DS_FLAGS_REPAIR        = 0x00000008;
   /// const FFV2_DS_FLAGS_PROXY         = 0x00000010;
   /// typedef uint32_t            ffv2_ds_flags4;
        </figure>
        <t>The ffv2_ds_flags4 bitmap (in <xref target="fig-ffv2_ds_flags4"/>) details the
state of the data servers.  Erasure Coding algorithms follow either
a Systematic or a Non-Systematic approach.  In the Systematic
approach, the integrity bits are placed within the resulting
transformed chunks.  Such an implementation would typically see FFV2_DS_FLAGS_ACTIVE
and FFV2_DS_FLAGS_SPARE data servers.  The FFV2_DS_FLAGS_SPARE ones
allow the client to repair a payload without engaging the metadata
server.  I.e., if one of the FFV2_DS_FLAGS_ACTIVE data servers did
not respond to a CHUNK_WRITE, the client could fail the chunk over
to the FFV2_DS_FLAGS_SPARE data server.</t>
        <t>With the Non-Systematic approach, the data and integrity live on
different data servers.  Such an implementation would typically see
FFV2_DS_FLAGS_ACTIVE and FFV2_DS_FLAGS_PARITY data servers.  If the
implementation wanted to allow for local repair, it would also use
FFV2_DS_FLAGS_SPARE.</t>
        <t>The FFV2_DS_FLAGS_REPAIR flag informs the client that the
indicated data server is a replacement for a previously failed
ACTIVE data server, whose content has been (or is being)
reconstructed from the surviving shards of the mirror set.  A
REPAIR data server differs from a SPARE in two ways:</t>
        <ul spacing="normal">
          <li>
            <t>A SPARE is standing by with no payload; the client <bcp14>MAY</bcp14> fail
over to it at write time without metadata-server coordination.</t>
          </li>
          <li>
            <t>A REPAIR has been promoted by the metadata server to replace a
failed ACTIVE, and its payload was placed there by a repair
client executing the flow in <xref target="sec-repair-selection"/> rather
than directly by the original writer.  The flag is the
client's indication that reads from this data server return
erasure-decoded content rather than content produced by the
original write.</t>
          </li>
        </ul>
        <t>Clients that rely on write-provenance information (for example,
deployments that track which client wrote which generation)
<bcp14>SHOULD</bcp14> be aware of the REPAIR flag so they do not treat the
reconstructed payload as if it had been written directly by the
cg_client_id recorded in the chunk_guard4; the guard values
still match across the mirror set by construction, but the
physical write path differs.</t>
        <t>Over the lifetime of a file, a single data server <bcp14>MAY</bcp14> transition
ACTIVE -&gt; REPAIR (on replacement) or REPAIR -&gt; ACTIVE (once the
metadata server has accepted the reconstructed content as
authoritative and the fail-over is complete); the metadata
server reflects the current flag set in the next layout it
returns.</t>
        <t>The FFV2_DS_FLAGS_PROXY flag marks the data server entry as a
proxy server (PS) through which the client's I/O is routed, rather
than a device holding the data itself.  The proxy registration,
directive, and credential-forwarding rules are defined in the
companion Data Mover draft; see <xref target="sec-codec-negotiation"/>
for how a proxy entry is used in codec negotiation.</t>
      </section>
      <section anchor="ffv2dataserver4">
        <name>ffv2_data_server4</name>
        <figure anchor="fig-ffv2_data_server4">
          <name>The ffv2_data_server4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_server4 {
   ///     deviceid4               ffv2ds_deviceid;
   ///     uint32_t                ffv2ds_efficiency;
   ///     ffv2_file_info4         ffv2ds_file_info<>;
   ///     fattr4_owner            ffv2ds_user;
   ///     fattr4_owner_group      ffv2ds_group;
   ///     ffv2_ds_flags4          ffv2ds_flags;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_server4 (in <xref target="fig-ffv2_data_server4"/>) describes a data
file and how to access it via the different NFS protocols.</t>
      </section>
      <section anchor="ffv2codingtypedata4">
        <name>ffv2_coding_type_data4</name>
        <figure anchor="fig-ffv2_coding_type_data4">
          <name>The ffv2_coding_type_data4</name>
          <sourcecode type="xdr"><![CDATA[
   /// union ffv2_coding_type_data4 switch
   ///         (ffv2_coding_type4 fctd_coding) {
   ///     case FFV2_CODING_MIRRORED:
   ///         ffv2_data_protection4   fctd_protection;
   ///     default:
   ///         ffv2_data_protection4   fctd_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_coding_type_data4 (in <xref target="fig-ffv2_coding_type_data4"/>) describes
the data protection geometry for the layout.  All coding types carry an
ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>) specifying the
number of data and parity shards.  The coding type enum determines how
the shards are encoded; the protection structure determines how many
shards there are.</t>
        <t>Although the FFV2_CODING_MIRRORED case and the default case currently
carry the same type, the union form is intentional.  Future revisions
of this specification may assign distinct arm types to specific coding
types; using a union now avoids an incompatible change to the XDR at
that time.</t>
        <t>For FFV2_CODING_MIRRORED, fdp_data is 1 and fdp_parity is the number
of additional copies (e.g., fdp_parity=2 for 3-way mirroring).
Erasure coding types registered in companion documents (e.g.,
Reed-Solomon Vandermonde, Mojette systematic) use fdp_data &gt;= 2 and
fdp_parity &gt;= 1.</t>
        <figure anchor="fig-ffv2_stripes4">
          <name>The ffv2_striping and ffv2_stripes4</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_striping {
   ///     FFV2_STRIPING_NONE = 0,
   ///     FFV2_STRIPING_SPARSE = 1,
   ///     FFV2_STRIPING_DENSE = 2
   /// };
   ///
   /// struct ffv2_stripes4 {
   ///         ffv2_data_server4       ffs_data_servers<>;
   /// };
]]></sourcecode>
        </figure>
        <t>Each stripe contains a set of data servers in ffs_data_servers.
If the stripe is part of an ffv2_coding_type_data4 of
FFV2_CODING_MIRRORED, then the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.</t>
      </section>
      <section anchor="ffv2key4">
        <name>ffv2_key4</name>
        <figure anchor="fig-ffv2_key4">
          <name>The ffv2_key4</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t ffv2_key4;
]]></sourcecode>
        </figure>
        <t>The ffv2_key4 is an opaque 64-bit identifier used to associate a
mirror instance with its backing storage key.  The value is assigned
by the metadata server and is opaque to the client.</t>
      </section>
      <section anchor="sec-ffv2-mirror4">
        <name>ffv2_mirror4</name>
        <figure anchor="fig-ffv2_mirror4">
          <name>The ffv2_mirror4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_mirror4 {
   ///         ffv2_coding_type_data4  ffm_coding_type_data;
   ///         ffv2_key4               ffm_key;
   ///         ffv2_striping           ffm_striping;
   ///         uint32_t                ffm_striping_unit_size;
   ///         uint32_t                ffm_client_id;
   ///         ffv2_stripes4           ffm_stripes<>;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_mirror4 (in <xref target="fig-ffv2_mirror4"/>) describes the Flexible
File Layout Version 2 specific fields.</t>
        <t>The ffm_client_id is a 32-bit value, assigned by the metadata
server at layout-grant time, that the client <bcp14>MUST</bcp14> use as the
cg_client_id field of chunk_guard4 (see <xref target="sec-chunk_guard4"/>) in
every CHUNK_WRITE it issues against the mirror's data servers.
Its purpose is to satisfy the 32-bit-per-field budget of
chunk_guard4 while preserving the guarantee that concurrent
writers on the same file are distinguishable:</t>
        <ul spacing="normal">
          <li>
            <t>The NFSv4 clientid4 (<xref target="RFC8881"/>) is a 64-bit structured
value whose low 32 bits (a slot index) are not guaranteed
unique across clients that hold layouts on the same file.
Folding clientid4 to 32 bits locally at each client could
therefore collide with another client's folded value and
violate the uniqueness contract on chunk_guard4.</t>
          </li>
          <li>
            <t>Only the metadata server has the information needed to avoid
such collisions: it sees every layout it grants on a file and
can assign a dense 32-bit ffm_client_id that is guaranteed
distinct from the ffm_client_ids assigned to other clients
holding concurrent write layouts on the same file.  The
metadata server <bcp14>MUST</bcp14> assign ffm_client_id subject to this
uniqueness rule.</t>
          </li>
          <li>
            <t>Because cg_client_id participates in the deterministic
tiebreaker for racing writers (see <xref target="sec-chunk_guard4"/>),
having the metadata server assign it also lets the metadata
server influence which client wins contention by choosing
the numeric ordering of the values it hands out.  Specific
ordering policies are implementation-defined and out of
scope for this document, but the protocol mechanism is
present.</t>
          </li>
        </ul>
        <t>An ffm_client_id is scoped to the file and layout for which it
was granted.  A client that holds layouts on two different files
may receive two different ffm_client_ids from the same metadata
server, and a client that relinquishes and later re-acquires a
layout on a given file <bcp14>MAY</bcp14> be assigned a different ffm_client_id.
ffm_client_id does NOT survive a metadata server restart: the
metadata server reassigns values as clients reclaim layouts
during the grace period.</t>
        <t>The ffm_coding_type_data identifies the encoding type used
by the mirror.</t>
        <t>The ffm_striping selects the striping method used by the
mirror.  The three permissible values are FFV2_STRIPING_NONE
(the mirror is not striped), FFV2_STRIPING_SPARSE (stripe units
are mapped to the same physical offset on every data server,
leaving holes), and FFV2_STRIPING_DENSE (stripe units are
packed contiguously on each data server without holes).  See
<xref target="sec-striping"/> for the mapping math for each option.</t>
        <t>The ffm_striping_unit_size is the stripe unit size used
by the mirror.  The minimum stripe unit size is 64 bytes.  If
the value of ffm_striping is FFV2_STRIPING_NONE, then the value
of ffm_striping_unit_size <bcp14>MUST</bcp14> be 1.</t>
        <t>The ffm_stripes is the array of stripes for the mirror; the
length of the array is the stripe count.  If there is no
striping (ffm_striping is FFV2_STRIPING_NONE), then the length
of ffm_stripes <bcp14>MUST</bcp14> be 1.  If the ffm_coding_type_data is
FFV2_CODING_MIRRORED, each stripe in turn <bcp14>MUST</bcp14> contain exactly
one data server (see <xref target="ffv2codingtypedata4"/>).</t>
      </section>
      <section anchor="ffv2layout4">
        <name>ffv2_layout4</name>
        <figure anchor="fig-ffv2_layout4">
          <name>The ffv2_layout4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layout4 {
   ///     ffv2_mirror4            ffl_mirrors<>;
   ///     ffv2_flags4             ffl_flags;
   ///     uint32_t                ffl_stats_collect_hint;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layout4 (in <xref target="fig-ffv2_layout4"/>) describes the Flexible
File Layout Version 2.</t>
        <t>The ffl_mirrors field is the array of mirrors that provide the
storage for the current layout segment; see <xref target="fig-parallel-filesystem"/>.</t>
        <t>The ffl_stats_collect_hint field provides a hint to the client on
how often the server wants it to report LAYOUTSTATS for a file.
The time is in seconds.</t>
        <figure anchor="fig-parallel-filesystem">
          <name>The Relationship between MDS and DSes</name>
          <artwork><![CDATA[
                +-----------+
                |           |
                |           |
                |   File    |
                |           |
                |           |
                +-----+-----+
                      |
     +-------------+-----+----------------+
     |                   |                |
+----+-----+       +-----+----+       +---+----------+
| Mirror 1 |       | Mirror 2 |       | Mirror 3     |
| MIRRORED |       | MIRRORED |       | REED_SOLOMON |
+----+-----+       +-----+----+       +---+----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
|+-----------+     | Stripe 1  |      |+-----------+
+| Stripe N  |     +-----------+      +| Stripe N  |
 +-----------+           |             +-----------+
     |                   |                |
     |                   |                |
+-----------+      +-----------+      +-----------+
| Storage   |      | Storage   |      |+-----------+
| Device    |      | Device    |      ||+-----------+
+-----------+      +-----------+      +||  Storage  |
                                       +|  Devices  |
                                        +-----------+
]]></artwork>
        </figure>
        <t>As shown in <xref target="fig-parallel-filesystem"/>, if the ffm_coding_type_data
is FFV2_CODING_MIRRORED, then each of the stripes <bcp14>MUST</bcp14>
have exactly one storage device; i.e., the length of ffs_data_servers
<bcp14>MUST</bcp14> be 1.  The other encoding types can have any number of
storage devices.</t>
        <t>The abstraction here is that for FFV2_CODING_MIRRORED, each
stripe describes exactly one data server, while for all other
encoding types, each of the stripes describes a set of
data servers to which the chunks are distributed.  Further,
the payload length can be different per stripe.</t>
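        <t>A non-normative sketch of the MIRRORED geometry constraint
follows.  The structure shapes are simplified mirrors of the XDR
figures above; only the array lengths matter for the check.</t>
        <figure>
          <name>Non-Normative Sketch: MIRRORED Stripe Constraint</name>
          <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stddef.h>

   enum ffv2_coding_type4 { FFV2_CODING_MIRRORED = 1 };

   struct stripe {
       size_t n_data_servers;   /* length of ffs_data_servers */
   };

   struct mirror {
       enum ffv2_coding_type4 coding;     /* fctd_coding        */
       size_t                 n_stripes;  /* ffm_stripes length */
       const struct stripe   *stripes;
   };

   static bool mirror_geometry_ok(const struct mirror *m)
   {
       if (m->coding != FFV2_CODING_MIRRORED)
           return true;        /* other codings: any DS count */
       for (size_t i = 0; i < m->n_stripes; i++)
           if (m->stripes[i].n_data_servers != 1)
               return false;   /* MIRRORED: one DS per stripe */
       return true;
   }
]]></sourcecode>
        </figure>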
      </section>
      <section anchor="ffv2dataprotection4">
        <name>ffv2_data_protection4</name>
        <figure anchor="fig-ffv2_data_protection4">
          <name>The ffv2_data_protection4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_data_protection4 {
   ///     uint32_t fdp_data;    /* data shards (k) */
   ///     uint32_t fdp_parity;  /* parity/redundancy shards (m) */
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_data_protection4 (in <xref target="fig-ffv2_data_protection4"/>) describes
the data protection geometry as a pair of counts: the number of data
shards (fdp_data, also known as k) and the number of parity or
redundancy shards (fdp_parity, also known as m).  This structure is
used in both layout hints and layout responses, and applies
uniformly to all coding types:</t>
        <table anchor="fig-protection-examples">
          <name>Example data protection configurations</name>
          <thead>
            <tr>
              <th align="left">Protection Mode</th>
              <th align="left">fdp_data</th>
              <th align="left">fdp_parity</th>
              <th align="left">Total DSes</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">Mirroring (3-way)</td>
              <td align="left">1</td>
              <td align="left">2</td>
              <td align="left">3</td>
              <td align="left">3 copies, no encoding</td>
            </tr>
            <tr>
              <td align="left">Striping (6-way)</td>
              <td align="left">6</td>
              <td align="left">0</td>
              <td align="left">6</td>
              <td align="left">Parallel I/O, no redundancy</td>
            </tr>
            <tr>
              <td align="left">RS Vandermonde 4+2</td>
              <td align="left">4</td>
              <td align="left">2</td>
              <td align="left">6</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
            <tr>
              <td align="left">Mojette-sys 8+2</td>
              <td align="left">8</td>
              <td align="left">2</td>
              <td align="left">10</td>
              <td align="left">Tolerates 2 DS failures</td>
            </tr>
          </tbody>
        </table>
        <t>By expressing all protection modes as (fdp_data, fdp_parity) pairs,
a single structure serves mirroring, striping, and all erasure
coding types.  The coding type (<xref target="fig-ffv2_coding_type4"/>) determines
HOW the shards are encoded; the protection structure determines
HOW MANY shards there are.</t>
        <t>The total number of data servers required is fdp_data + fdp_parity.
The storage overhead is fdp_parity / fdp_data (e.g., 50% for 4+2,
25% for 8+2).</t>
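        <t>The arithmetic can be illustrated with a short non-normative
program reproducing two rows of <xref target="fig-protection-examples"/>:</t>
        <figure>
          <name>Non-Normative Sketch: Protection Geometry Arithmetic</name>
          <sourcecode type="c"><![CDATA[
   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
       struct { const char *mode; uint32_t data, parity; } g[] = {
           { "RS Vandermonde 4+2", 4, 2 },  /*  6 DSes, 50% */
           { "Mojette-sys 8+2",    8, 2 }   /* 10 DSes, 25% */
       };

       for (int i = 0; i < 2; i++)
           printf("%s: %u DSes, %.0f%% overhead\n",
                  g[i].mode, g[i].data + g[i].parity,
                  100.0 * g[i].parity / g[i].data);
       return 0;
   }
]]></sourcecode>
        </figure>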
      </section>
      <section anchor="sec-ffv2-layouthint">
        <name>ffv2_layouthint4</name>
        <figure anchor="fig-ffv2_layouthint4">
          <name>The ffv2_layouthint4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layouthint4 {
   ///     ffv2_coding_type4       fflh_supported_types<>;
   ///     ffv2_data_protection4   fflh_preferred_protection;
   /// };
]]></sourcecode>
        </figure>
        <t>The ffv2_layouthint4 (in <xref target="fig-ffv2_layouthint4"/>) describes the
layout_hint (see Section 5.12.4 of <xref target="RFC8881"/>) that the client can
provide to the metadata server.</t>
        <t>The client provides two hints:</t>
        <dl>
          <dt>fflh_supported_types</dt>
          <dd>
            <t>An ordered list of coding types the client supports,
with the most preferred type first.  The server <bcp14>SHOULD</bcp14> select a type
from this list but <bcp14>MAY</bcp14> choose any type it supports.  If the server
does not support any of the listed types, it returns
NFS4ERR_CODING_NOT_SUPPORTED, and the client can retry
with a different list to discover the overlapping set.</t>
          </dd>
          <dt>fflh_preferred_protection</dt>
          <dd>
            <t>The client's preferred data protection geometry as a
(fdp_data, fdp_parity) pair.  The server <bcp14>SHOULD</bcp14> honor this hint but
<bcp14>MAY</bcp14> override it based on server-side policy.  A server that manages
data protection via administrative policy (e.g., per-directory or
per-export objectives) will typically ignore this hint and return the
geometry dictated by policy.</t>
          </dd>
        </dl>
        <t>For example, a client that prefers Mojette systematic with 8+2
protection would send:</t>
        <artwork><![CDATA[
fflh_supported_types = { FFV2_CODING_MIRRORED,
                         FFV2_ENCODING_MOJETTE_SYSTEMATIC,
                         FFV2_ENCODING_RS_VANDERMONDE }
fflh_preferred_protection = { fdp_data = 8, fdp_parity = 2 }
]]></artwork>
        <t>A server with a policy of RS 4+2 for this directory would ignore
both hints and return a layout with FFV2_ENCODING_RS_VANDERMONDE
and (fdp_data=4, fdp_parity=2).  A server without erasure coding
might return FFV2_CODING_MIRRORED with (fdp_data=1, fdp_parity=2)
for 3-way mirroring.</t>
        <section anchor="sec-codec-negotiation">
          <name>Codec Negotiation</name>
          <t>Because the coding-type registry is expected to grow over time
(new erasure codes are added, older ones fall out of favor,
vendors register private codes; see <xref target="iana-considerations"/>),
neither clients nor metadata servers are required to implement
every registered codec.  The protocol uses ffv2_layouthint4 as
the negotiation surface:</t>
          <dl>
            <dt>Client-side advertisement:</dt>
            <dd>
              <t>A client that wishes to influence codec selection <bcp14>SHOULD</bcp14>
send the set of codecs it actually implements in
fflh_supported_types.  A client <bcp14>MUST NOT</bcp14> claim support for
a codec it cannot encode or decode: a false advertisement
produces silent data unavailability when the resulting layout
is issued.</t>
            </dd>
            <dt>Metadata-server selection:</dt>
            <dd>
              <t>The metadata server <bcp14>SHOULD</bcp14> select a codec from the client's
fflh_supported_types list when the server's policy permits.
The server <bcp14>MAY</bcp14> override the hint when its policy dictates a
specific codec (for example, per-export objectives); in that
case the server issues a layout with the policy-dictated
codec and the client <bcp14>MUST</bcp14> either honor it or fail its I/O
with NFS4ERR_CODING_NOT_SUPPORTED.</t>
            </dd>
            <dt>Fallback when no overlap exists:</dt>
            <dd>
              <t>If the server's policy cannot be satisfied by any codec the
client supports, the metadata server has three options:
</t>
              <ol spacing="normal" type="1"><li>
                  <t>Return NFS4ERR_CODING_NOT_SUPPORTED on the LAYOUTGET.
The client <bcp14>MAY</bcp14> retry with a different (possibly empty)
fflh_supported_types list to learn the server's codec
repertoire through the errors returned.</t>
                </li>
                <li>
                  <t>Fall back to I/O via the metadata server itself, so the
client's reads and writes are satisfied by the MDS
translating to the underlying DS codec on the client's
behalf (see <xref target="sec-Fencing-Clients"/> for the MDS-I/O
fallback).  This is correct but serializes all I/O for
the codec-ignorant client through a single actor.</t>
                </li>
                <li>
                  <t>Route the client through a <strong>translating proxy</strong> that
understands both the file's native codec and a codec
the client does support.  The MDS issues a layout with
the proxy's data-server entry carrying
FFV2_DS_FLAGS_PROXY and a coding_type the client does
support (typically FFV2_CODING_MIRRORED for a minimal
NFSv4.2 client, or a flat NFSv3 surface for an NFSv3
client).  The proxy encodes and decodes on the fly
against the real DSes.  This preserves parallel I/O
for the codec-ignorant client that the MDS-I/O
fallback loses.  The proxy registration, directive, and
credential-forwarding rules are defined in the
companion Data Mover design; this draft defines only
the layout-flag surface (FFV2_DS_FLAGS_PROXY in
<xref target="sec-ffv2_ds_flags4"/>) that makes the proxy visible to
the client.</t>
                </li>
              </ol>
              <t>Options (1), (2), and (3) are not mutually exclusive: a
given deployment <bcp14>MAY</bcp14> implement any combination.  A
deployment that supports (3) covers all the clients that
(1) and (2) would cover and additionally preserves parallel
I/O for codec-ignorant clients.</t>
            </dd>
            <dt>Runtime codec change:</dt>
            <dd>
              <t>If a metadata server changes its codec policy after layouts
have been issued (for example, a deployment upgrade that
retires an older codec), the metadata server <bcp14>MUST</bcp14> recall the
affected layouts via CB_LAYOUTRECALL and may re-issue new
layouts with the new codec.  Clients that do not support the
new codec return the layout (LAYOUTRETURN) with
NFS4ERR_CODING_NOT_SUPPORTED, and the server either grants a
layout using a mutually supported codec or the client falls
back to I/O via the metadata server.</t>
            </dd>
          </dl>
          <t>This mechanism deliberately avoids a separate capability-bit
handshake at EXCHANGE_ID.  ffv2_layouthint4 already provides
per-request negotiation surface; adding a session-level
capability set would duplicate it and would complicate codec
upgrades without additional value, because a client that
genuinely upgrades its codec set at runtime can simply update
the fflh_supported_types on its next LAYOUTGET.</t>
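          <t>A non-normative sketch of a client's negotiation loop follows.
The LAYOUTGET wrapper, the fallback helper, and the numeric value
of NFS4ERR_CODING_NOT_SUPPORTED are illustrative placeholders, and
the no-overlap fallback is collapsed to the MDS-I/O option for
brevity.</t>
          <figure>
            <name>Non-Normative Sketch: Client Codec Negotiation</name>
            <sourcecode type="c"><![CDATA[
   #include <stddef.h>

   /* Illustrative placeholder value for the new error code. */
   #define NFS4ERR_CODING_NOT_SUPPORTED (-1)

   enum ffv2_coding_type4 {
       FFV2_CODING_MIRRORED             = 1,
       FFV2_ENCODING_MOJETTE_SYSTEMATIC = 2,
       FFV2_ENCODING_RS_VANDERMONDE     = 4
   };

   /* Client-implemented wrapper: LAYOUTGET carrying the given
    * fflh_supported_types list; returns 0 or an NFS4ERR_*. */
   extern int layoutget_with_hint(const enum ffv2_coding_type4 *t,
                                  size_t n);

   /* Client-implemented fallback: route reads and writes through
    * the metadata server (option 2 above). */
   extern void enable_mds_io_fallback(void);

   void negotiate_codec(void)
   {
       /* Advertise only codecs actually implemented, most
        * preferred first. */
       const enum ffv2_coding_type4 pref[] = {
           FFV2_ENCODING_MOJETTE_SYSTEMATIC,
           FFV2_ENCODING_RS_VANDERMONDE,
           FFV2_CODING_MIRRORED
       };

       if (layoutget_with_hint(pref, 3) ==
           NFS4ERR_CODING_NOT_SUPPORTED)
           /* No overlap with server policy: fall back to I/O
            * via the metadata server (or accept a proxy
            * layout, option 3 above). */
           enable_mds_io_fallback();
   }
]]></sourcecode>
          </figure>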
          <t>Note: In <xref target="fig-ffv2_mirror4"/>, ffv2_coding_type_data4 is an enumerated
union with the payload of each arm defined by the coding
type.  ffm_client_id tells the client which id to use when interacting
with the data servers.</t>
          <t>The ffv2_layout4 structure (see <xref target="fig-ffv2_layout4"/>) specifies a layout
in that portion of the data file described in the current layout
segment.  It is either a single instance or a set of mirrored copies
of that portion of the data file.  When mirroring is in effect, it
protects against loss of data in layout segments.</t>
          <t>While not explicitly shown in <xref target="fig-ffv2_layout4"/>, each layout4
element returned in the logr_layout array of LAYOUTGET4res (see
Section 18.43.2 of <xref target="RFC8881"/>) describes a layout segment.  Hence,
each ffv2_layout4 also describes a layout segment.  It is possible
that the file is concatenated from more than one layout segment.
Each layout segment <bcp14>MAY</bcp14> represent different striping parameters.</t>
          <t>The ffm_striping_unit_size field (inside each ffv2_mirror4) is the
stripe unit size in use for that mirror.  The number of stripes is
given by the number of elements in the ffm_stripes array.  If the
number of stripes is one, then the value of ffm_striping_unit_size
<bcp14>MUST</bcp14> be 1, matching the FFV2_STRIPING_NONE rule in
<xref target="sec-ffv2-mirror4"/>.  The mapping scheme
(sparse or dense) is selected per mirror by ffm_striping and is
detailed in <xref target="sec-striping"/>.  Note
that there is an assumption here that both the stripe unit size and
the number of stripes are the same across all mirrors.</t>
          <t>The ffl_mirrors field represents an array of state information for
each mirrored copy of the current layout segment.  Each element is
described by a ffv2_mirror4 type.</t>
          <t>ffv2ds_deviceid provides the deviceid of the storage device holding
the data file.</t>
          <t>ffv2ds_file_info is an array of ffv2_file_info4 structures, each
pairing a filehandle (fffi_fh_vers) with a stateid (fffi_stateid).
There <bcp14>MUST</bcp14> be exactly as many elements in ffv2ds_file_info as there
are in ffda_versions.  Each element of the array corresponds to a
particular combination of ffdv_version, ffdv_minorversion, and
ffdv_tightly_coupled provided for the device.  The array allows for
server implementations that have different filehandles and stateids
for different combinations of version, minor version, and coupling
strength.  See <xref target="sec-version-errors"/> for how to handle versioning
issues between the client and storage devices.</t>
          <t>For tight coupling, fffi_stateid provides the stateid to be used
by the client to access the file.  The metadata server registers
fffi_stateid with each tight-coupling-capable storage device via
TRUST_STATEID (see <xref target="sec-tight-coupling-control"/>) before returning
the layout; the storage device validates subsequent CHUNK operations
against its trust table.</t>
          <t>For loose coupling and an NFSv4 storage device, the client <bcp14>MUST</bcp14> use
the anonymous stateid to perform I/O on the storage device, because
the metadata server stateid has no meaning to a storage device that
is not participating in the control protocol.  In this case the
metadata server <bcp14>MUST</bcp14> set fffi_stateid to the anonymous stateid.</t>
          <t>For an NFSv3 storage device (ffdv_version = 3), the tight-coupling
model does not apply: <xref target="sec-ff_device_addr4"/> requires
ffdv_tightly_coupled to be FALSE whenever ffdv_version equals 3,
because NFSv3 has no wire encoding for stateids.  The corresponding
fffi_stateid element in the ffv2ds_file_info array <bcp14>MUST</bcp14> therefore
be the anonymous stateid and is unused; an NFSv3 data server uses
the synthetic-uid fencing model (see <xref target="sec-Fencing-Clients"/>)
rather than a stateid-based trust table.</t>
          <t>This specification of the fffi_stateid restricts both models for
NFSv4.x storage protocols:</t>
          <dl>
            <dt>loosely coupled</dt>
            <dd>
              <t>the stateid has to be an anonymous stateid</t>
            </dd>
            <dt>tightly coupled</dt>
            <dd>
              <t>the stateid has to be a global stateid</t>
            </dd>
          </dl>
          <t>By pairing each fffi_fh_vers with its own fffi_stateid inside
ffv2_file_info4, the v2 layout addresses the v1 limitation where a
singleton stateid was shared across all filehandles.  Each open file
on the storage device can now have its own stateid, eliminating the
ambiguity present in the v1 structure.</t>
          <t>For loosely coupled storage devices, ffv2ds_user and ffv2ds_group
provide the synthetic user and group to be used in the RPC credentials
that the client presents to the storage device to access the data
files.  For tightly coupled storage devices, the user and group on
the storage device will be the same as on the metadata server; that
is, if ffdv_tightly_coupled (see <xref target="sec-ff_device_addr4"/>) is set,
then the client <bcp14>MUST</bcp14> ignore both ffv2ds_user and ffv2ds_group.</t>
          <t>The allowed values for both ffv2ds_user and ffv2ds_group are specified
as owner and owner_group, respectively, in Section 5.9 of <xref target="RFC8881"/>.
For NFSv3 compatibility, user and group strings that consist of
decimal numeric values with no leading zeros can be given a special
interpretation by clients and servers that choose to provide such
support.  The receiver may treat such a user or group string as
representing the same user as would be represented by an NFSv3 uid
or gid having the corresponding numeric value.  Note that if using
Kerberos for security, the expectation is that these values will
be a name@domain string.</t>
          <t>ffv2ds_efficiency describes the metadata server's evaluation as to
the effectiveness of each mirror.  Note that this is per layout and
not per device as the metric may change due to perceived load,
availability to the metadata server, etc.  Higher values denote
higher perceived utility.  The way the client can select the best
mirror to access is discussed in <xref target="sec-select-mirror"/>.</t>
        </section>
        <section anchor="error-codes-from-layoutget">
          <name>Error Codes from LAYOUTGET</name>
          <t><xref target="RFC8881"/> provides little guidance as to how the client is to
proceed with a LAYOUTGET that returns an error of either
NFS4ERR_LAYOUTTRYLATER, NFS4ERR_LAYOUTUNAVAILABLE, and NFS4ERR_DELAY.
Within the context of this document:</t>
          <dl>
            <dt>NFS4ERR_LAYOUTUNAVAILABLE</dt>
            <dd>
              <t>there is no layout available, and the I/O is to go to the metadata
server.  Note that it is possible for the client to have had a layout
before a recall and not be granted one afterwards.</t>
            </dd>
            <dt>NFS4ERR_LAYOUTTRYLATER</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should continue
with I/O to the storage devices.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>there is some issue preventing the layout from being granted.
If the client already has an appropriate layout, it should not
continue with I/O to the storage devices.</t>
            </dd>
          </dl>
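          <t>A non-normative sketch of the resulting client dispatch is shown
below.  The enum and function names are illustrative; the NFS4ERR_*
values are those assigned in <xref target="RFC8881"/>.</t>
          <figure anchor="fig-example-layoutget-errors">
            <name>Illustrative LAYOUTGET Error Dispatch (Non-Normative)</name>
            <artwork><![CDATA[
#define NFS4ERR_DELAY              10008
#define NFS4ERR_LAYOUTTRYLATER     10058
#define NFS4ERR_LAYOUTUNAVAILABLE  10059

enum io_path { IO_VIA_MDS, IO_VIA_STORAGE_DEVICE, IO_WAIT_AND_RETRY };

/* Dispatch on a LAYOUTGET error, given whether the client already
 * holds an appropriate layout for the I/O. */
static enum io_path
after_layoutget_error(int status, int have_usable_layout)
{
    switch (status) {
    case NFS4ERR_LAYOUTUNAVAILABLE:
        return IO_VIA_MDS;              /* no layout; use the MDS */
    case NFS4ERR_LAYOUTTRYLATER:
        return have_usable_layout ?     /* keep using old layout  */
            IO_VIA_STORAGE_DEVICE : IO_WAIT_AND_RETRY;
    case NFS4ERR_DELAY:
        return IO_WAIT_AND_RETRY;       /* pause even with layout */
    default:
        return IO_VIA_MDS;
    }
}
]]></artwork>
          </figure>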
        </section>
        <section anchor="client-interactions-with-ffflagsnoiothrumds">
          <name>Client Interactions with FF_FLAGS_NO_IO_THRU_MDS</name>
          <t>Even if the metadata server provides the FF_FLAGS_NO_IO_THRU_MDS
flag, the client can still perform I/O to the metadata server.  The
flag functions as a hint that the metadata server prefers to separate
the metadata I/O from the data I/O, most likely for performance
reasons.</t>
        </section>
      </section>
      <section anchor="layoutcommit-1">
        <name>LAYOUTCOMMIT</name>
        <t>The flexible file layout does not use lou_body inside the
loca_layoutupdate argument to LAYOUTCOMMIT.  If lou_type is
LAYOUT4_FLEX_FILES, the lou_body field <bcp14>MUST</bcp14> have a zero length (see
Section 18.42.1 of <xref target="RFC8881"/>).</t>
      </section>
      <section anchor="interactions-between-devices-and-layouts">
        <name>Interactions between Devices and Layouts</name>
        <t>The file layout type is defined such that the relationship between
multipathing and filehandles can result in either 0, 1, or N
filehandles (see Section 13.3 of <xref target="RFC8881"/>).  Some rationales for
this are clustered servers that share the same filehandle or allow
for multiple read-only copies of the file on the same storage device.
In the flexible file layout type, while there is an array of
filehandles, they are independent of the multipathing being used.
If the metadata server wants to provide multiple read-only copies
of the same file on the same storage device, then it should provide
multiple mirrored instances, each with a different ff_device_addr4.
The client can then determine that, since each of the fffi_fh_vers
values within ffv2ds_file_info is different, there are multiple
copies of the file available for the current layout segment.</t>
      </section>
      <section anchor="sec-version-errors">
        <name>Handling Version Errors</name>
        <t>When the metadata server provides the ffda_versions array in the
ff_device_addr4 (see <xref target="sec-ff_device_addr4"/>), the client is able
to determine whether or not it can access a storage device with any
of the supplied combinations of ffdv_version, ffdv_minorversion,
and ffdv_tightly_coupled.  However, due to the limitations of
reporting errors in GETDEVICEINFO (see Section 18.40 of <xref target="RFC8881"/>),
the client is not able to specify which specific device it cannot
communicate with over one of the provided ffdv_version and
ffdv_minorversion combinations.  Using ff_ioerr4 (<xref target="sec-ff_ioerr4"/>)
inside either the LAYOUTRETURN (see Section 18.44 of <xref target="RFC8881"/>)
or the LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/> and <xref target="sec-LAYOUTERROR"/>
of this document), the client can isolate the problematic storage
device.</t>
        <t>The error code to return for LAYOUTRETURN and/or LAYOUTERROR is
NFS4ERR_MINOR_VERS_MISMATCH.  Whether the mismatch is in the major
version (e.g., the client can use NFSv3 but not NFSv4) or the minor
version (e.g., the client can use NFSv4.1 but not NFSv4.2), the
error indicates that, for all the supplied combinations of ffdv_version
and ffdv_minorversion, the client cannot communicate with the storage
device.  The client can retry the GETDEVICEINFO to see if the
metadata server can provide a different combination, or it can fall
back to doing the I/O through the metadata server.</t>
      </section>
    </section>
    <section anchor="sec-striping">
      <name>Striping</name>
      <t>The flexible file layout type version 2 inherits the dense and
sparse striping dispositions defined by the file layout type in
Section 13.4 of <xref target="RFC8881"/>.  The disposition for a given
mirror is selected by the ffm_striping field (see
<xref target="sec-ffv2-mirror4"/>) and applies to every data server in that
mirror's ffs_data_servers list.  Three values are permitted:</t>
      <dl>
        <dt>FFV2_STRIPING_NONE:</dt>
        <dd>
          <t>The mirror is not striped.  ffm_striping_unit_size <bcp14>MUST</bcp14> be 1
and ffm_stripes <bcp14>MUST</bcp14> contain exactly one stripe.  The entire
mirror lives on that stripe's single data server list, with
no offset transformation.</t>
        </dd>
        <dt>FFV2_STRIPING_SPARSE:</dt>
        <dd>
          <t>Logical offsets within the file map to the same numeric
offset on each data server.  A data server that does not own
the stripe unit at a given logical offset presents a hole at
that offset.  This is the simpler model and matches the
mental picture of "the file is laid out end-to-end on each
data server, but each data server stores only its stripe
units".</t>
        </dd>
        <dt>FFV2_STRIPING_DENSE:</dt>
        <dd>
          <t>Stripe units owned by a given data server are packed
contiguously on that data server, with no holes.  The
logical offset is transformed into a compact physical offset
on the target data server.  This matches pre-existing
deployments that follow the dense layout convention of
Section 13.4.4 of <xref target="RFC8881"/>.</t>
        </dd>
      </dl>
      <t>The mapping math for sparse and dense is given in
<xref target="fig-striping-math"/>.  Common definitions apply to both.</t>
      <figure anchor="fig-striping-math">
        <name>Sparse and dense stripe mapping math</name>
        <artwork><![CDATA[
L: logical offset within the file (bytes)
U: stripe-unit size in bytes  = ffm_striping_unit_size
W: stripe width               = length of ffs_data_servers
S: stripe size in bytes       = W * U
N: stripe number              = L / S
i: index (0-based) of the data server that owns L
                              = (L / U) mod W
R: byte offset within the stripe unit
                              = L mod U

FFV2_STRIPING_SPARSE:
  physical offset on data server i:
      P_sparse(L) = L
  other data servers see a hole at offset L.

FFV2_STRIPING_DENSE:
  physical offset on data server i:
      P_dense(L) = N * U + R
             = (L / S) * U + (L mod U)
  each data server stores only the stripe units it owns,
  packed contiguously.
]]></artwork>
      </figure>
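      <t>A worked example of the mapping math follows as a non-normative
C sketch: with a stripe-unit size of 4096 bytes and a stripe width
of 4, logical offset 41000 falls in stripe unit 10 and is owned by
data server index 2; dense packing places it at physical offset
8232, while sparse placement leaves it at 41000.  All names below
are illustrative.</t>
      <figure anchor="fig-striping-example">
        <name>Illustrative Stripe Mapping (Non-Normative)</name>
        <artwork><![CDATA[
#include <stdint.h>
#include <stdio.h>

struct stripe_loc {
    uint32_t ds_index;   /* i: data server that owns L          */
    uint64_t phys_off;   /* physical offset on that data server */
};

static struct stripe_loc
map_offset(uint64_t L, uint64_t U, uint32_t W, int dense)
{
    struct stripe_loc loc;
    uint64_t S = (uint64_t)W * U;   /* stripe size               */
    uint64_t N = L / S;             /* stripe number             */
    uint64_t R = L % U;             /* offset within stripe unit */

    loc.ds_index = (uint32_t)((L / U) % W);
    loc.phys_off = dense ? (N * U + R)  /* FFV2_STRIPING_DENSE   */
                         : L;           /* FFV2_STRIPING_SPARSE  */
    return loc;
}

int main(void)
{
    struct stripe_loc d = map_offset(41000, 4096, 4, 1);
    struct stripe_loc s = map_offset(41000, 4096, 4, 0);

    printf("dense:  ds=%u off=%llu\n",   /* ds=2 off=8232  */
           d.ds_index, (unsigned long long)d.phys_off);
    printf("sparse: ds=%u off=%llu\n",   /* ds=2 off=41000 */
           s.ds_index, (unsigned long long)s.phys_off);
    return 0;
}
]]></artwork>
      </figure>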
    </section>
    <section anchor="recovering-from-client-io-errors">
      <name>Recovering from Client I/O Errors</name>
      <t>The pNFS client may encounter errors when directly accessing the
storage devices.  However, it is the responsibility of the metadata
server to recover from the I/O errors.  When the LAYOUT4_FLEX_FILES
layout type is used, the client <bcp14>MUST</bcp14> report the I/O errors to the
server at LAYOUTRETURN time using the ff_ioerr4 structure (see
<xref target="sec-ff_ioerr4"/>).</t>
      <t>The metadata server analyzes the error and determines the required
recovery operations such as recovering media failures or reconstructing
missing data files.</t>
      <t>The metadata server <bcp14>MUST</bcp14> recall any outstanding layouts to allow
it exclusive write access to the stripes being recovered and to
prevent other clients from hitting the same error condition.  In
these cases, the server <bcp14>MUST</bcp14> complete recovery before handing out
any new layouts to the affected byte ranges.</t>
      <t>Although the client implementation has the option to propagate a
corresponding error to the application that initiated the I/O
operation and drop any unwritten data, the client should attempt
to retry the original I/O operation by either requesting a new
layout or sending the I/O via regular NFSv4.1+ READ or WRITE
operations to the metadata server.  The client <bcp14>SHOULD</bcp14> attempt to
retrieve a new layout and retry the I/O operation using the storage
device first and only retry the I/O operation via the metadata
server if the error persists.</t>
    </section>
    <section anchor="client-side-protection-modes">
      <name>Client-Side Protection Modes</name>
      <section anchor="sec-CSM">
        <name>Client-Side Mirroring</name>
        <t>The flexible file layout type has a simple model in place for the
mirroring of the file data constrained by a layout segment.  There
is no assumption that each copy of the mirror is stored identically
on the storage devices.  For example, one device might employ
compression or deduplication on the data.  However, the over-the-wire
transfer of the file contents <bcp14>MUST</bcp14> appear identical.  Note, this
is a constraint of the selected XDR representation in which each
mirrored copy of the layout segment has the same striping pattern
(see <xref target="fig-parallel-filesystem"/>).</t>
        <t>The metadata server is responsible for determining the number of
mirrored copies and the location of each mirror.  While the client
may provide a hint as to how many copies it wants (see <xref target="sec-ffv2-layouthint"/>),
the metadata server can ignore that hint; in any event, the client
has no means to dictate either the storage device (which also means
the coupling and/or protocol levels to access the layout segments)
or the location of said storage device.</t>
        <t>The updating of mirrored layout segments is done via client-side
mirroring.  With this approach, the client is responsible for making
sure modifications are made on all copies of the layout segments
it is informed of via the layout.  If a layout segment is being
resilvered to a storage device, that mirrored copy will not be in
the layout.  Thus, the metadata server <bcp14>MUST</bcp14> update that copy until
it is presented to the client in a layout.  If FF_FLAGS_WRITE_ONE_MIRROR
is set in ffl_flags, the client need only update one of the mirrors
(see <xref target="sec-write-mirrors"/>).  If the client is writing to the layout
segments via the metadata server, then the metadata server <bcp14>MUST</bcp14>
update all copies of the mirror.  As seen in <xref target="sec-mds-resilvering"/>,
during resilvering, the layout is recalled, and the client has
to make modifications via the metadata server.</t>
        <section anchor="sec-select-mirror">
          <name>Selecting a Mirror</name>
          <t>When the metadata server grants a layout to a client, it <bcp14>MAY</bcp14> let
the client know how fast it expects each mirror to be once the
request arrives at the storage devices via the ffv2ds_efficiency
member.  While the algorithms to calculate that value are left to
the metadata server implementations, factors that could contribute
to that calculation include speed of the storage device, physical
memory available to the device, operating system version, current
load, etc.</t>
          <t>However, what should not be involved in that calculation is a
perceived network distance between the client and the storage device.
The client is better situated for making that determination based
on past interaction with the storage device over the different
available network interfaces between the two; that is, the metadata
server might not know about a transient outage between the client
and storage device because it has no presence on the given subnet.</t>
          <t>As such, it is the client that decides which mirror to access for
reading the file.  The requirements for writing to mirrored layout
segments are presented below.</t>
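          <t>One possible (non-normative) way to combine the server-supplied
ffv2ds_efficiency with client-observed latency is sketched below.
The weighting and all function names are assumptions of this sketch,
since the protocol leaves the selection policy entirely to the
client.</t>
          <figure anchor="fig-example-mirror-select">
            <name>Illustrative Mirror Selection (Non-Normative)</name>
            <artwork><![CDATA[
#include <stddef.h>
#include <stdint.h>

struct mirror_view {
    uint32_t efficiency;   /* ffv2ds_efficiency from the layout  */
    uint32_t rtt_usec;     /* client-measured round-trip latency */
    int      reachable;    /* client-observed reachability       */
};

/* Pick a mirror for reads: skip unreachable mirrors, then prefer
 * high server-reported efficiency scaled down by observed latency.
 * Returns the selected index, or -1 if no mirror is reachable. */
static int
select_read_mirror(const struct mirror_view *m, size_t n)
{
    int best = -1;
    double best_score = -1.0;

    for (size_t i = 0; i < n; i++) {
        if (!m[i].reachable)
            continue;
        double score = (double)m[i].efficiency /
                       (1.0 + (double)m[i].rtt_usec);
        if (score > best_score) {
            best_score = score;
            best = (int)i;
        }
    }
    return best;
}
]]></artwork>
          </figure>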
        </section>
        <section anchor="sec-write-mirrors">
          <name>Writing to Mirrors</name>
          <section anchor="single-storage-device-updates-mirrors">
            <name>Single Storage Device Updates Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is set, the
client <bcp14>MAY</bcp14> update just one of the copies of the layout segment.
For this case, the storage device <bcp14>MUST</bcp14> ensure that all copies of
the mirror are updated when any one of the mirrors is updated.  If
the storage device gets an error when updating one of the mirrors,
then it <bcp14>MUST</bcp14> inform the client that the original WRITE had an error.
The client then <bcp14>MUST</bcp14> inform the metadata server (see <xref target="sec-write-errors"/>).
The client's responsibility with respect to COMMIT is explained in
<xref target="sec-write-commits"/>.  The client may choose any one of the mirrors
and may use ffv2ds_efficiency as described in <xref target="sec-select-mirror"/>
when making this choice.</t>
          </section>
          <section anchor="client-updates-all-mirrors">
            <name>Client Updates All Mirrors</name>
            <t>If the FF_FLAGS_WRITE_ONE_MIRROR flag in ffl_flags is not set, the
client is responsible for updating all mirrored copies of the layout
segments that it is given in the layout.  A single failed update
is sufficient to fail the entire operation.  If all but one copy
is updated successfully and the last one provides an error, then
the client <bcp14>MUST</bcp14> inform the metadata server about the error.
The client can use either LAYOUTRETURN or LAYOUTERROR to inform the
metadata server that the update failed to that storage device.  If
the client is updating the mirrors serially, then it <bcp14>SHOULD</bcp14> stop
at the first error encountered and report that to the metadata
server.  If the client is updating the mirrors in parallel, then
it <bcp14>SHOULD</bcp14> wait until all storage devices respond so that it can
report all errors encountered during the update.</t>
          </section>
          <section anchor="sec-write-errors">
            <name>Handling Write Errors</name>
            <t>When the client reports a write error to the metadata server, the
metadata server is responsible for determining if it wants to remove
the errant mirror from the layout, if the mirror has recovered from
some transient error, etc.  When the client tries to get a new
layout, the metadata server informs it of the decision by the
contents of the layout.  The client <bcp14>MUST NOT</bcp14> assume that the contents
of the previous layout will match those of the new one.  If it has
updates that were not committed to all mirrors, then it <bcp14>MUST</bcp14> resend
those updates to all mirrors.</t>
            <t>There is no provision in the protocol for the metadata server to
directly determine that the client has or has not recovered from
an error.  For example, suppose a storage device was network
partitioned from the client and the client reported the error to
the metadata server; the partition might then be repaired and all
of the copies successfully updated.  There is no mechanism for the
client to report that fact, and the metadata server is forced to
repair the file across the mirror.</t>
            <t>If the client supports NFSv4.2, it can use LAYOUTERROR and LAYOUTRETURN
to provide hints to the metadata server about the recovery efforts.
A LAYOUTERROR on a file is for a non-fatal error.  A subsequent
LAYOUTRETURN without an ff_ioerr4 indicates that the client successfully
replayed the I/O to all mirrors.  Any LAYOUTRETURN with an ff_ioerr4
reports an error that the metadata server needs to repair.  The client
<bcp14>MUST</bcp14> be prepared for the LAYOUTERROR to trigger a CB_LAYOUTRECALL
if the metadata server determines it needs to start repairing the
file.</t>
          </section>
          <section anchor="sec-write-commits">
            <name>Handling Write COMMITs</name>
            <t>When stable writes are done to the metadata server or to a single
replica (if allowed by the use of FF_FLAGS_WRITE_ONE_MIRROR), it
is the responsibility of the receiving node to propagate the written
data stably, before replying to the client.</t>
            <t>In the corresponding cases in which unstable writes are done, the
receiving node does not have any such obligation, although it may
choose to asynchronously propagate the updates.  However, once a
COMMIT is replied to, all replicas <bcp14>MUST</bcp14> reflect the writes that
have been done, and this data <bcp14>MUST</bcp14> have been committed to stable
storage on all replicas.</t>
            <t>In order to avoid situations in which stale data is read from
replicas to which writes have not been propagated:</t>
            <ul spacing="normal">
              <li>
                <t>A client that has outstanding unstable writes made to a single
node (metadata server or storage device) <bcp14>MUST</bcp14> do all reads from
that same node.</t>
              </li>
              <li>
                <t>When writes are flushed to the server (for example, to implement
close-to-open semantics), a COMMIT must be done by the client
to ensure that up-to-date written data will be available
irrespective of the particular replica read.</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="sec-mds-resilvering">
          <name>Metadata Server Resilvering of the File</name>
          <t>The metadata server may elect to create a new mirror of the layout
segments at any time.  This might be to resilver a copy on a storage
device that was down for servicing, to provide a copy of the layout
segments on storage with different storage performance characteristics,
etc.  As the client will not be aware of the new mirror and the
metadata server will not be aware of updates that the client is
making to the layout segments, the metadata server <bcp14>MUST</bcp14> recall the
writable layout segment(s) that it is resilvering.  If the client
issues a LAYOUTGET for a writable layout segment that is in the
process of being resilvered, then the metadata server can deny that
request with an NFS4ERR_LAYOUTUNAVAILABLE.  The client would then
have to perform the I/O through the metadata server.</t>
        </section>
      </section>
      <section anchor="erasure-coding">
        <name>Erasure Coding</name>
        <t>Erasure Coding takes a data block and transforms it into a payload
to send to the data servers (see <xref target="fig-encoding-data-block"/>).  It
generates a metadata header and a transformed block per data server.
The header is metadata information for the transformed block.  From
now on, the metadata is simply referred to as the header and the
transformed block as the chunk.  The payload of a data block is the
set of generated headers and chunks for that data block.</t>
        <t>The guard is a unique identifier generated by the client to describe
the current write transaction (see <xref target="sec-chunk_guard4"/>).  The
intent is to have a unique and non-opaque value for comparison.
The payload_id describes the position within the payload.  Finally,
the crc32 is the 32-bit CRC calculated over the header (with the
crc32 field set to 0) and the chunk.  By covering both parts of
the payload, integrity is ensured for each of them.</t>
        <t>While the data block might have a length of 4kB, that does not
necessarily mean that the length of the chunk is 4kB.  That length
is determined by the erasure coding type algorithm.  For example,
Reed-Solomon might have 4kB chunks, with data integrity provided
by parity chunks.  Another example would be the Mojette Transform,
which might have 1kB chunk lengths.</t>
        <t>The payload contains redundancy that allows the erasure coding
type algorithm to repair chunks in the payload as it is transformed
back into a data block (see <xref target="fig-decoding-db"/>).</t>
        <t>The protocol provides two levels of payload integrity, consumed at
different points in the read path:</t>
        <dl>
          <dt>Consistency:</dt>
          <dd>
            <t>A payload is <strong>consistent</strong> when all of the chunks that belong
to it carry the same chunk_guard4 value (see
<xref target="sec-chunk_guard4"/>).  Consistency alone does NOT imply the
bytes are free of corruption; it means only that every chunk in
the payload came from the same write transaction.  A reader
detects inconsistency when it assembles a payload and finds
differing chunk_guard4 values across chunks.</t>
          </dd>
          <dt>Integrity:</dt>
          <dd>
            <t>A payload has <strong>integrity</strong> when it is consistent AND every
contained chunk passes its CRC32 check.  Integrity is the
precondition for returning the payload's data block to the
application.</t>
          </dd>
        </dl>
        <t>The separation matters because the two checks detect different
failure modes.  Consistency detects protocol-level failures (racing
writers, partial writes, rollback windows); the CRC32 detects
byte-level corruption (network errors, media errors, software bugs
in the erasure transform).  Neither subsumes the other.</t>
        <t>The two-level integrity model also reflects a deeper property of
distributed writes: <strong>last-writer-wins does not apply to a payload
spread across independent data servers.</strong>  The ordering of writes
arriving at one data server may differ from the ordering arriving
at another; the "last" write on one data server may well be the
"first" on another.  The chunk_guard4 CAS primitive (see <xref target="sec-chunk_guard4"/>)
resolves this by serializing concurrent writers per chunk rather
than by imposing a global order.</t>
        <t>The erasure coding algorithm itself might not be sufficient to
detect all byte-level errors in the chunks.  The CRC32 checks
allow the data server to detect chunks with integrity issues; the
erasure decoding algorithm can then reconstruct the affected
chunks from the remaining integral chunks in the payload.</t>
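        <t>The two checks can be summarized in a short non-normative
sketch.  The struct layout and names are illustrative, with the
per-chunk CRC result modeled as a precomputed boolean.</t>
        <figure anchor="fig-example-consistency-integrity">
          <name>Illustrative Consistency and Integrity Checks (Non-Normative)</name>
          <artwork><![CDATA[
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct chunk_view {
    uint64_t cg_client_id;   /* from the chunk's guard    */
    uint64_t cg_gen_id;      /* from the chunk's guard    */
    bool     crc_ok;         /* result of the CRC32 check */
};

/* Consistent: every chunk carries the same guard, i.e., every
 * chunk came from the same write transaction. */
static bool
payload_consistent(const struct chunk_view *c, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (c[i].cg_client_id != c[0].cg_client_id ||
            c[i].cg_gen_id   != c[0].cg_gen_id)
            return false;
    return true;
}

/* Integrity: consistent AND every chunk passes its CRC32 check;
 * only then may the data block go back to the application. */
static bool
payload_has_integrity(const struct chunk_view *c, size_t n)
{
    if (!payload_consistent(c, n))
        return false;
    for (size_t i = 0; i < n; i++)
        if (!c[i].crc_ok)
            return false;
    return true;
}
]]></artwork>
        </figure>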
        <section anchor="encoding-a-data-block">
          <name>Encoding a Data Block</name>
          <figure anchor="fig-encoding-data-block">
            <name>Encoding a Data Block</name>
            <artwork><![CDATA[
                 +-------------+
                 | data block  |
                 +-------+-----+
                         |
                         |
   +---------------------+-------------------------------+
   |            Erasure Encoding (Transform Forward)     |
   +---+----------------------+---------------------+----+
       |                      |                     |
       |                      |                     |
   +---+------------+     +---+------------+     +--+-------------+
   | HEADER         | ... | HEADER         | ... | HEADER         |
   +----------------+     +----------------+     +----------------+
   | guard:         | ... | guard:         | ... | guard:         |
   |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
   |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
   | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
   | crc32   :      | ... | crc32   :      | ... | crc32   :      |
   +----------------+     +----------------+     +----------------+
   | CHUNK          | ... | CHUNK          | ... | CHUNK          |
   +----------------+     +----------------+     +----------------+
   | data: ....     | ... | data: ....     | ... | data: ....     |
   +----------------+     +----------------+     +----------------+
     Data Server 1          Data Server N          Data Server 6
]]></artwork>
          </figure>
          <t>Each data block of the file resident in the client's cache will
be encoded into N different payloads to be sent to the data servers
as shown in <xref target="fig-encoding-data-block"/>.  As CHUNK_WRITE
(see <xref target="sec-CHUNK_WRITE"/>) can encode multiple write_chunk4 into a
single transaction, a more accurate description of a CHUNK_WRITE
is in <xref target="fig-example-chunk-write-args"/>.</t>
          <figure anchor="fig-example-chunk-write-args">
            <name>Example of CHUNK_WRITE_args</name>
            <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 3     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x32ef89             |
  |         [1]:  0x56fa89             |
  |         [2]:  0x7693af             |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
          </figure>
          <t>This example describes a 3-block write of data starting at an
offset of 1 block into the file.  As each block shares the cwa_owner,
it is presented only once; that is, the data server can construct
the header for the i'th chunk from the cwa_payload_id, the cwa_owner,
and the i'th crc32 from cwa_crc32s.  The cwa_chunks are sent together
as a single byte stream to increase performance.</t>
          <t>Assuming that there were no issues, <xref target="fig-example-chunk-write-res"/>
illustrates the results.  The payload sequence id is implicit in
the CHUNK_WRITEargs.</t>
          <figure anchor="fig-example-chunk-write-res">
            <name>Example of CHUNK_WRITE_res</name>
            <artwork><![CDATA[
  +-------------------------------+
  | CHUNK_WRITEresok              |
  +-------------------------------+
  | cwr_count: 3                  |
  | cwr_committed: FILE_SYNC4     |
  | cwr_writeverf: 0xf1234abc     |
  | cwr_owners[0]:                |
  |        co_chunk_id: 1         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[1]:                |
  |        co_chunk_id: 2         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  | cwr_owners[2]:                |
  |        co_chunk_id: 3         |
  |        co_guard:              |
  |            cg_gen_id   : 3    |
  |            cg_client_id: 6    |
  +-------------------------------+
]]></artwork>
          </figure>
          <section anchor="calculating-the-crc32">
            <name>Calculating the CRC32</name>
            <figure anchor="fig-calc-before">
              <name>CRC32 Before Calculation</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id : 0     |
  | crc32   : 0        |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
        Data Server 1
]]></artwork>
            </figure>
            <t>Assuming the header and payload as in <xref target="fig-calc-before"/>, the crc32
needs to be calculated in order to fill in the corresponding
cwa_crc32s entry.  In this case, the crc32 is calculated over the 4
fields as shown in the header and over the chunk.  In this example,
it is calculated to be 0x21de8.  The resulting CHUNK_WRITE is shown
in <xref target="fig-calc-crc-after"/>.</t>
            <figure anchor="fig-calc-crc-after">
              <name>CRC32 After Calculation</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_WRITEargs                    |
  +------------------------------------+
  | cwa_stateid: 0                     |
  | cwa_offset: 1                      |
  | cwa_stable: FILE_SYNC4             |
  | cwa_payload_id: 0                  |
  | cwa_owner:                         |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  | cwa_chunk_size  :  1048            |
  | cwa_crc32s:                        |
  |         [0]:  0x21de8              |
  | cwa_chunks  :  ......              |
  +------------------------------------+
]]></artwork>
            </figure>
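            <t>A non-normative sketch of the calculation follows.  The
reflected IEEE polynomial (0xEDB88320) and the in-memory field
layout shown are assumptions for illustration; the XDR definitions
for the operations govern the actual CRC variant and byte order.</t>
            <figure anchor="fig-example-crc-calc-code">
              <name>Illustrative CRC32 Calculation (Non-Normative)</name>
              <artwork><![CDATA[
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32; complement-in/complement-out, so calls chain. */
static uint32_t
crc32_update(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

struct chunk_header {      /* illustrative layout only */
    uint32_t cg_gen_id;
    uint32_t cg_client_id;
    uint32_t payload_id;
    uint32_t crc32;        /* zeroed while calculating */
};

/* CRC over the 4 header fields (crc32 = 0), then the chunk. */
static uint32_t
chunk_crc32(struct chunk_header *h, const void *chunk, size_t len)
{
    uint32_t c;

    h->crc32 = 0;
    c = crc32_update(0, h, sizeof *h);
    return crc32_update(c, chunk, len);
}
]]></artwork>
            </figure>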
          </section>
        </section>
        <section anchor="decoding-a-data-block">
          <name>Decoding a Data Block</name>
          <figure anchor="fig-decoding-db">
            <name>Decoding a Data Block</name>
            <artwork><![CDATA[
    Data Server 1          Data Server N          Data Server 6
  +----------------+     +----------------+     +----------------+
  | HEADER         | ... | HEADER         | ... | HEADER         |
  +----------------+     +----------------+     +----------------+
  | guard:         | ... | guard:         | ... | guard:         |
  |   gen_id   : 3 | ... |   gen_id   : 3 | ... |   gen_id   : 3 |
  |   client_id: 6 | ... |   client_id: 6 | ... |   client_id: 6 |
  | payload_id : 0 | ... | payload_id : M | ... | payload_id : 5 |
  | crc32   :      | ... | crc32   :      | ... | crc32   :      |
  +----------------+     +----------------+     +----------------+
  | CHUNK          | ... | CHUNK          | ... | CHUNK          |
  +----------------+     +----------------+     +----------------+
  | data: ....     | ... | data: ....     | ... | data: ....     |
  +---+------------+     +--+-------------+     +-+--------------+
      |                     |                     |
      |                     |                     |
  +---+---------------------+---------------------+-----+
  |            Erasure Decoding (Transform Reverse)     |
  +---------------------+-------------------------------+
                        |
                        |
                +-------+-----+
                | data block  |
                +-------------+
]]></artwork>
          </figure>
          <t>When reading chunks via a CHUNK_READ operation, the client will
decode them into data blocks as shown in <xref target="fig-decoding-db"/>.</t>
          <t>At this time, the client could detect issues in the integrity of
the data.  The handling and repair are out of the scope of this
document and <bcp14>MUST</bcp14> be addressed in the document describing each
erasure coding type.</t>
          <section anchor="checking-the-crc32">
            <name>Checking the CRC32</name>
            <figure anchor="fig-example-chunk-read-crc">
              <name>CRC32 on the Wire</name>
              <artwork><![CDATA[
  +------------------------------------+
  | CHUNK_READresok                    |
  +------------------------------------+
  | crr_eof: false                     |
  | crr_chunks[0]:                     |
  |        cr_crc: 0x21de8             |
  |        cr_owner:                   |
  |            co_guard:               |
  |                cg_gen_id   : 7     |
  |                cg_client_id: 6     |
  |        cr_chunk  :  ......         |
  +------------------------------------+
]]></artwork>
            </figure>
            <t>Assuming the CHUNK_READ results as in <xref target="fig-example-chunk-read-crc"/>,
the crc32 needs to be checked in order to ensure data integrity.
Conceptually, a header and payload can be built as shown in
<xref target="fig-example-crc-checked"/>.  The crc32 is calculated over the 4
fields as shown in the header and over the cr_chunk.  In this example,
it is calculated to be 0x21de8, matching the cr_crc received on the
wire; thus, this payload from the data server has data integrity.</t>
            <figure anchor="fig-example-crc-checked">
              <name>CRC32 Being Checked</name>
              <artwork><![CDATA[
  +---+----------------+
  | HEADER             |
  +--------------------+
  | guard:             |
  |   gen_id   : 7     |
  |   client_id: 6     |
  | payload_id  : 0    |
  | crc32    : 0       |
  +--------------------+
  | CHUNK              |
  +--------------------+
  | data:  ....        |
  +--------------------+
       Data Server 1
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="write-modes">
          <name>Write Modes</name>
          <t>There are two basic writing modes for erasure coding, and they depend
on the metadata server using FFV2_FLAGS_ONLY_ONE_WRITER in the
ffl_flags in the ffv2_layout4 (see <xref target="fig-ffv2_layout4"/>) to inform
the client whether or not it is the only writer to the file.  If
it is the only writer, then CHUNK_WRITE with the cwa_guard not set
can be used to write chunks.  In this scenario, there is no write
contention, but write holes can occur as the client overwrites old
data.  Thus the client does not need guarded writes, but it does
need the ability to roll back writes.  If it is not the only writer,
then CHUNK_WRITE with the cwa_guard set <bcp14>MUST</bcp14> be used to write chunks.
In this scenario, write holes can also be caused by multiple
clients writing to the same chunk.  Thus the client needs guarded
writes to prevent overwrites, and it also needs the ability to
roll back writes.</t>
          <t>In both modes, clients <bcp14>MUST NOT</bcp14> overwrite payloads which already
contain inconsistency.  This directly follows from <xref target="sec-reading-chunks"/>
and <bcp14>MUST</bcp14> be handled as discussed there.  Once consistency in the
payload has been detected, the client can use those chunks as a
basis for read/modify/update.</t>
          <t>CHUNK_WRITE is a two-pass operation in cooperation with CHUNK_FINALIZE
(<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).
It writes to the data file, and the data server is responsible for
retaining a copy of the old header and chunk.  A subsequent CHUNK_READ
would return the new chunk.  However, until either CHUNK_FINALIZE
or CHUNK_ROLLBACK is presented, the initial CHUNK_WRITE <bcp14>MUST</bcp14> result
in the locking of the chunk, as if a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>)
had been performed on the chunk.  As such, further CHUNK_WRITEs by
any client <bcp14>MUST</bcp14> be denied until the chunk is unlocked by CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
          <t>If the CHUNK_WRITE results in a consistent data block, then the
client will send a CHUNK_FINALIZE in a subsequent compound to inform
the data server that the chunk is consistent and can be overwritten
by another CHUNK_WRITE.</t>
          <t>If the CHUNK_WRITE results in an inconsistent data block, or if the
data server returns NFS4ERR_CHUNK_LOCKED, the client reports the
condition to the metadata server via LAYOUTERROR with an error code
of NFS4ERR_PAYLOAD_NOT_CONSISTENT.</t>
        </section>
        <section anchor="sec-repair-selection">
          <name>Selecting the Repair Client</name>
          <t>The repair topology involves three actors communicating along
distinct paths, as shown in <xref target="fig-repair-topology"/>.</t>
          <figure anchor="fig-repair-topology">
            <name>Repair topology</name>
            <artwork><![CDATA[
     +------------+              +----------------+
     | Reporting  |              |                |
     | client     | ----(1)----> |    Metadata    |
     | (detects   | LAYOUTERROR  |    server      |
     |  error)    |              |                |
     +------------+              +----------------+
                                    |          ^
                     (2a, 2b)       |          | (3)
                 CB_CHUNK_REPAIR    |          |
                 (RACE or SCRUB)    v          |
                                 +----------------+
                                 |  Repair client |
                                 |  (selected by  |
                                 |   MDS)         |
                                 +----------------+
                                    |
                                    | (4) CHUNK_LOCK_ADOPT,
                                    |     CHUNK_WRITE_REPAIR,
                                    |     CHUNK_FINALIZE,
                                    |     CHUNK_COMMIT,
                                    |     CHUNK_REPAIRED
                                    v
                                 +----------------------+
                                 |    Data servers      |
                                 |    (mirror set for   |
                                 |    affected ranges)  |
                                 +----------------------+

     (1)  Reporter LAYOUTERRORs the MDS.
     (2a) MDS selects a repair client (may be same as reporter).
     (2b) MDS escrows the chunk lock and issues CB_CHUNK_REPAIR.
     (3)  Repair client adopts the lock and drives the repair.
     (4)  Repair client issues CHUNK_* ops against the mirror set.
]]></artwork>
          </figure>
          <t>The metadata server is the authority that selects which client
(or, in a tightly coupled deployment, which data server) repairs
an inconsistent payload.  This is analogous to the way the
metadata server assigns per-mirror priority via ffv2ds_efficiency
(see <xref target="sec-select-mirror"/>): the protocol does not prescribe the
selection algorithm, and each deployment <bcp14>MAY</bcp14> tune its policy.</t>
          <t>Implementations <bcp14>MAY</bcp14> consider factors such as:</t>
          <ul spacing="normal">
            <li>
              <t>Whether a client holds an active write layout on the affected
payload (the client most likely to hold surviving shards in
cache).</t>
            </li>
            <li>
              <t>Whether a client has previously reported consistent shards to
the metadata server via LAYOUTSTATS or a prior LAYOUTERROR.</t>
            </li>
            <li>
              <t>Whether the layout exposes a data server carrying
FFV2_DS_FLAGS_REPAIR as a target for reconstructed shards.</t>
            </li>
            <li>
              <t>Network proximity, observed latency, or recent client load --
the same class of information that informs ffv2ds_efficiency.</t>
            </li>
          </ul>
          <t>The selection algorithm is not normative.  What is normative is
that every client <bcp14>MUST</bcp14> be prepared to:</t>
          <ol spacing="normal" type="1"><li>
              <t>Receive a repair request for a payload that the client does
not have an outstanding write layout on, and did not write;
and</t>
            </li>
            <li>
              <t>Continue its own workload after reporting
NFS4ERR_PAYLOAD_NOT_CONSISTENT without itself being selected
to repair the payload it reported.</t>
            </li>
          </ol>
          <t>The metadata server signals the selected client via the
CB_CHUNK_REPAIR callback (<xref target="sec-CB_CHUNK_REPAIR"/>), which
identifies the file, the affected ranges (each with its own
triggering nfsstat4), and a wall-clock deadline.  A client that
receives CB_CHUNK_REPAIR for a file for which it does not
already hold a layout <bcp14>MUST</bcp14> acquire a layout via LAYOUTGET before
attempting the repair.</t>
          <t>Operational expectations for CB_CHUNK_REPAIR:
CB_CHUNK_REPAIR is an exceptional path, triggered only by
concurrent-writer races or data-server failures.  It is not a
steady-state operation, and its frequency is a function of the
racing-writer and data-server-failure rates in the deployment
rather than of normal client workload.  Implementations <bcp14>SHOULD</bcp14>
treat the CB_CHUNK_REPAIR handler as rare-path code and avoid
over-optimizing it.  Implementations <bcp14>SHOULD</bcp14>, however, provision
enough client-side compute to handle a repair transaction
without stalling their foreground I/O, because foreground
throughput during repair is the externally observable cost of
this callback.</t>
        </section>
        <section anchor="repair-protocol-normative-vs-informative">
          <name>Repair Protocol: Normative vs. Informative</name>
          <t>The selection algorithm is non-normative and deployment-tunable.
The externally-observable state transitions of the repair flow
are normative.  The line between the two is drawn at what
another party on the wire -- the metadata server, another
client, a reader -- can observe.  What no other party can see
(client-internal ordering, retry policy, whether to CHUNK_READ
first to confirm the failure) is left to implementations.</t>
          <t>The following requirements are normative.  An implementation
that violates any of these can leak inconsistency or write-holes
into the cluster:</t>
          <ol spacing="normal" type="1"><li>
              <t><strong>Final state flat.</strong>  Every shard in every range identified
in a CB_CHUNK_REPAIR <bcp14>MUST</bcp14> reach either the COMMITTED state
(repaired) or the EMPTY state (rolled back).  No shard is
left in PENDING or FINALIZED indefinitely.</t>
            </li>
            <li>
              <t><strong>Lock before write.</strong>  The repair client <bcp14>MUST</bcp14> adopt the
lock on every affected range via CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT (<xref target="sec-CHUNK_LOCK"/>) before issuing
any CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK, or CHUNK_WRITE on a
chunk in that range.  The lock on the affected chunks is
held continuously from the failure that triggered
CB_CHUNK_REPAIR through the adoption; at no point is the
range unlocked.</t>
            </li>
            <li>
              <t><strong>Clear the errored state.</strong>  On the reconstruction path,
the repair client <bcp14>MUST</bcp14> issue CHUNK_REPAIRED
(<xref target="sec-CHUNK_REPAIRED"/>) after CHUNK_COMMIT.  Without it,
readers continue to see holes regardless of on-disk state.</t>
            </li>
            <li>
              <t><strong>Release locks explicitly.</strong>  CHUNK_ROLLBACK does not
release chunk locks.  On the rollback path the client <bcp14>MUST</bcp14>
issue CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) on each affected
chunk.  A client that walks away without either completing
CHUNK_REPAIRED or issuing CHUNK_UNLOCK holds the locks
until lease expiry, blocking progress for other writers.</t>
            </li>
            <li>
              <t><strong>Deadline honored.</strong>  The client <bcp14>MUST</bcp14> drive every range to
its final flat state before ccra_deadline, or <bcp14>MUST</bcp14> respond
to the CB_CHUNK_REPAIR with NFS4ERR_DELAY (requesting an
extension), NFS4ERR_CODING_NOT_SUPPORTED (declining), or
NFS4ERR_PAYLOAD_LOST (declaring the data unrecoverable).
A deadline that elapses without any of these leaves the
metadata server free to re-select; the client <bcp14>MUST NOT</bcp14>
continue repair-related CHUNK operations after the
deadline without first re-verifying its layout and the
chunk lock state.</t>
            </li>
            <li>
              <t><strong>Terminal return codes.</strong>  NFS4ERR_CODING_NOT_SUPPORTED
<bcp14>MUST</bcp14> mean "decline; select another client."
NFS4ERR_PAYLOAD_LOST <bcp14>MUST</bcp14> mean "the data is not
recoverable; do not retry."  The metadata server relies on
these to decide whether to re-issue.</t>
            </li>
          </ol>
          <t>The following aspects are informative / implementation-defined:</t>
          <ul spacing="normal">
            <li>
              <t>Choice between the reconstruction path (CHUNK_WRITE_REPAIR)
and the rollback path (CHUNK_ROLLBACK) on a given range.  The
protocol <bcp14>MUST</bcp14> support both; the client <bcp14>MAY</bcp14> use either based
on its local state and whether reconstruction is feasible
from surviving shards.</t>
            </li>
            <li>
              <t>Ordering among multiple affected ranges in a single
CB_CHUNK_REPAIR (parallel or serial).</t>
            </li>
            <li>
              <t>Whether to issue CHUNK_READ to confirm the failure mode
before reconstructing.</t>
            </li>
            <li>
              <t>Retry policy on transient CHUNK_WRITE_REPAIR errors below the
deadline cutoff.</t>
            </li>
            <li>
              <t>How the repair status is surfaced to local filesystem API
callers.</t>
            </li>
          </ul>
        </section>
        <section anchor="carrying-out-the-repair">
          <name>Carrying Out the Repair</name>
          <t>With the normative framing above, the reconstruction path is:</t>
          <ol spacing="normal" type="1"><li>
              <t>CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT on each affected
range (<xref target="sec-CHUNK_LOCK"/>).</t>
            </li>
            <li>
              <t>CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>) with the
reconstructed data for each inconsistent shard.  The
client's chunk_owner4 on this and all subsequent operations
is the one it presented in the CHUNK_LOCK ADOPT above;
prior owners' generation ids are now historical.</t>
            </li>
            <li>
              <t>CHUNK_FINALIZE (<xref target="sec-CHUNK_FINALIZE"/>) and CHUNK_COMMIT
(<xref target="sec-CHUNK_COMMIT"/>) to persist the repaired shards.</t>
            </li>
            <li>
              <t>CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to clear the
errored state.</t>
            </li>
          </ol>
          <t>The rollback path, when reconstruction is not possible:</t>
          <ol spacing="normal" type="1"><li>
              <t>CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT on each affected
range.</t>
            </li>
            <li>
              <t>CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>) on each affected
shard to restore the previously committed content.</t>
            </li>
            <li>
              <t>CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) on each shard.</t>
            </li>
          </ol>
          <t>In both paths, the repair client <bcp14>SHOULD</bcp14> target reconstructed
shards according to the following fallback order: first, any
data server in the layout carrying FFV2_DS_FLAGS_REPAIR; then
the data server that reported the failure (the one carrying the
failing shard at the range identified by ccr_offset and ccr_count
in the CB_CHUNK_REPAIR argument); then, if both of the above are
unreachable, a data server carrying FFV2_DS_FLAGS_SPARE.  If
none of the above are available, the client <bcp14>MUST</bcp14> return
NFS4ERR_PAYLOAD_LOST on the CB_CHUNK_REPAIR response.</t>
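          <t>A non-normative sketch of that fallback order follows, with the
flag state modeled as booleans (the actual bits come from
ffv2ds_flags in the layout; all other names are illustrative):</t>
          <figure anchor="fig-example-repair-target">
            <name>Illustrative Repair Target Fallback (Non-Normative)</name>
            <artwork><![CDATA[
#include <stdbool.h>
#include <stddef.h>

struct ds_view {
    bool repair_flag;   /* FFV2_DS_FLAGS_REPAIR in ffv2ds_flags */
    bool spare_flag;    /* FFV2_DS_FLAGS_SPARE in ffv2ds_flags  */
    bool reported;      /* carried the failing shard at the     */
                        /* ccr_offset/ccr_count range           */
    bool reachable;     /* client-observed reachability         */
};

/* Returns the index of the data server to receive reconstructed
 * shards, or -1, in which case the client answers CB_CHUNK_REPAIR
 * with NFS4ERR_PAYLOAD_LOST. */
static int
pick_repair_target(const struct ds_view *ds, size_t n)
{
    for (size_t i = 0; i < n; i++)       /* 1: REPAIR-flagged */
        if (ds[i].repair_flag && ds[i].reachable)
            return (int)i;
    for (size_t i = 0; i < n; i++)       /* 2: the reporter   */
        if (ds[i].reported && ds[i].reachable)
            return (int)i;
    for (size_t i = 0; i < n; i++)       /* 3: a SPARE        */
        if (ds[i].spare_flag && ds[i].reachable)
            return (int)i;
    return -1;                  /* NFS4ERR_PAYLOAD_LOST */
}
]]></artwork>
          </figure>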
          <section anchor="single-writer-mode">
            <name>Single Writer Mode</name>
            <t>In single writer mode, the metadata server sets FFV2_FLAGS_ONLY_ONE_WRITER
in ffl_flags, indicating that no other client holds a write layout for
the file.  The client sends CHUNK_WRITE with cwa_guard.cwg_check set to
FALSE, omitting the guard value.  Because only one writer is active,
there is no risk of two clients overwriting the same chunk concurrently.</t>
            <t>The single writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = FALSE) for each
shard.  The data server places the written block in the PENDING state
and retains a copy of the previous block for rollback.</t>
              </li>
              <li>
                <t>The client issues CHUNK_FINALIZE to advance the blocks from PENDING
to FINALIZED, validating the per-block CRC32.</t>
              </li>
              <li>
                <t>The client issues CHUNK_COMMIT to advance the blocks from FINALIZED
to COMMITTED, persisting the block metadata to stable storage.</t>
              </li>
            </ol>
            <t>If the client detects an error after CHUNK_WRITE but before CHUNK_FINALIZE
(e.g., a CRC mismatch on a subsequent CHUNK_READ), it issues CHUNK_ROLLBACK
to restore the previous block content.  CHUNK_ROLLBACK does not lock the
chunk; the next CHUNK_WRITE is permitted immediately.</t>
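            <t>The sequence reduces to the following non-normative sketch;
the chunk_write/chunk_finalize/chunk_commit/chunk_rollback helpers
are stand-ins for issuing the corresponding operations to the data
servers.</t>
            <figure anchor="fig-example-single-writer">
              <name>Illustrative Single Writer Sequence (Non-Normative)</name>
              <artwork><![CDATA[
#include <stdbool.h>

/* Stand-ins for issuing the operations; true on success. */
bool chunk_write(int shard);     /* cwa_guard.cwg_check = FALSE */
bool chunk_finalize(int shard);  /* PENDING   -> FINALIZED      */
bool chunk_commit(int shard);    /* FINALIZED -> COMMITTED      */
void chunk_rollback(int shard);  /* restore previous content    */

/* Write nshards shards of one payload in single writer mode. */
static bool
single_writer_write(int nshards)
{
    int written;

    for (written = 0; written < nshards; written++)
        if (!chunk_write(written))       /* 1: write (PENDING) */
            goto rollback;
    for (int i = 0; i < nshards; i++)
        if (!chunk_finalize(i))          /* 2: finalize        */
            goto rollback;
    for (int i = 0; i < nshards; i++)
        if (!chunk_commit(i))            /* 3: commit          */
            return false;
    return true;

rollback:
    /* Error before FINALIZE completed: restore the previous
     * block content; no unlock is needed on this path. */
    for (int j = 0; j < written; j++)
        chunk_rollback(j);
    return false;
}
]]></artwork>
            </figure>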
          </section>
          <section anchor="repairing-single-writer-payloads">
            <name>Repairing Single Writer Payloads</name>
            <t>In single writer mode, inconsistent blocks arise from a client or data
server failure during a CHUNK_WRITE / CHUNK_FINALIZE sequence.  Because
no other writer is active, the original writer is the typical choice
for repair, but the metadata server <bcp14>MAY</bcp14> designate any client according
to the rules in <xref target="sec-repair-selection"/>.  A designated client that
did not originate the writes <bcp14>MUST</bcp14> follow the rollback path of that
section if it cannot reconstruct the payload from surviving shards.</t>
            <t>The repair sequence when the selected client is the original writer is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_READ to identify which blocks are in an
inconsistent state (PENDING with a CRC mismatch, or in the errored
state set by a prior CHUNK_ERROR).</t>
              </li>
              <li>
                <t>For each errored block, the repair client reconstructs the correct
data using the erasure coding algorithm (RS matrix inversion or Mojette
back-projection) from the surviving consistent blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
to write the reconstructed data.  CHUNK_WRITE_REPAIR bypasses the guard
check and applies different data server policies (e.g., allowing writes
to blocks in the errored state).</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_FINALIZE and CHUNK_COMMIT to persist the
repaired blocks.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the errored state and make the blocks available for normal reads.</t>
              </li>
            </ol>
          </section>
          <section anchor="sec-multi-writer">
            <name>Multiple Writer Mode</name>
            <t>In multiple writer mode, the metadata server does not set
FFV2_FLAGS_ONLY_ONE_WRITER, indicating that concurrent writers may hold
write layouts for the file.  The client sends CHUNK_WRITE with
cwa_guard.cwg_check set to TRUE, supplying a chunk_guard4 in cwa_guard.cwg_guard
that uniquely identifies this write transaction across all data servers.</t>
            <t>The multiple writer write sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The client selects a unique chunk_guard4 for this transaction.  The
cg_client_id identifies the client (derived from the client's
clientid4); the cg_gen_id is a per-client generation counter
incremented for each new transaction.</t>
              </li>
              <li>
                <t>The client issues CHUNK_WRITE (cwa_guard.cwg_check = TRUE) for each
shard.  The data server checks that no other client's block is in the
PENDING state for this chunk.  If another client's block is already
pending, the data server returns NFS4ERR_CHUNK_LOCKED with the
clr_owner field identifying the lock holder.</t>
              </li>
              <li>
                <t>On NFS4ERR_CHUNK_LOCKED, the client <bcp14>MUST</bcp14> back off.  It issues
CHUNK_ROLLBACK for any shards it has already written in this
transaction, then retries after a delay.</t>
              </li>
              <li>
                <t>If all CHUNK_WRITEs succeed, the client issues CHUNK_FINALIZE and
CHUNK_COMMIT as in single writer mode.</t>
              </li>
            </ol>
            <t>The guard ensures that the complete set of shards forming a consistent
erasure-coded block all carry the same chunk_guard4.  A reader that
encounters shards with different guard values knows the payload is not
yet consistent and <bcp14>MUST</bcp14> either retry or report NFS4ERR_PAYLOAD_NOT_CONSISTENT.</t>
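            <t>A non-normative sketch of the guard construction and the
back-off rule follows.  The helpers and the NFS4ERR_CHUNK_LOCKED
placeholder value are stand-ins, and taking a fresh cg_gen_id on
each retry is an assumption of this sketch (the text above requires
only a unique guard per transaction).</t>
            <figure anchor="fig-example-multi-writer">
              <name>Illustrative Multiple Writer Sequence (Non-Normative)</name>
              <artwork><![CDATA[
#include <stdbool.h>
#include <stdint.h>

#define NFS4_OK              0
#define NFS4ERR_CHUNK_LOCKED 1   /* placeholder value only */

struct chunk_guard {
    uint64_t cg_client_id;   /* derived from the clientid4  */
    uint64_t cg_gen_id;      /* per-client generation count */
};

/* Stand-ins for issuing the operations to the data servers. */
int  chunk_write_guarded(int shard, const struct chunk_guard *g);
bool chunk_finalize(int shard);
bool chunk_commit(int shard);
void chunk_rollback(int shard);
void backoff_delay(void);

static uint64_t my_client_id;
static uint64_t my_gen_id;

static bool
multi_writer_write(int nshards)
{
    for (;;) {
        /* New transaction: unique guard (fresh cg_gen_id). */
        struct chunk_guard g = { my_client_id, ++my_gen_id };
        int i, status = NFS4_OK;

        for (i = 0; i < nshards; i++) {
            status = chunk_write_guarded(i, &g);
            if (status != NFS4_OK)
                break;
        }
        if (status == NFS4_OK) {
            for (i = 0; i < nshards; i++)
                if (!chunk_finalize(i) || !chunk_commit(i))
                    return false;
            return true;
        }

        /* Back off: roll back the shards this transaction has
         * already written, then retry after a delay. */
        while (i-- > 0)
            chunk_rollback(i);
        if (status != NFS4ERR_CHUNK_LOCKED)
            return false;
        backoff_delay();
    }
}
]]></artwork>
            </figure>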
          </section>
          <section anchor="repairing-multiple-writer-payloads">
            <name>Repairing Multiple Writer Payloads</name>
            <t>In multiple writer mode, inconsistent blocks can arise from two sources:
a client failure leaving some shards in PENDING state, or two clients
writing different data to the same chunk before one has committed.</t>
            <t>The metadata server coordinates repair by designating a repair
client according to the rules in <xref target="sec-repair-selection"/>.  The
FFV2_DS_FLAGS_REPAIR flag, when present on a data server in the
layout, identifies the target data server into which reconstructed
shards should be written; it does not by itself identify the
repair client.  The repair sequence is:</t>
            <ol spacing="normal" type="1"><li>
                <t>The repair client issues CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
affected block range of each data server.  If any lock attempt returns
NFS4ERR_CHUNK_LOCKED, the repair client records the existing lock
holder's chunk_owner4 and proceeds; the lock holder's data is a
candidate for the winning payload.</t>
              </li>
              <li>
                <t>The repair client issues CHUNK_READ on all data servers to retrieve
the current payload.  It examines the chunk_owner4 of each shard to
identify which transaction (if any) produced a consistent set across
all k data shards.</t>
              </li>
              <li>
                <t>If a consistent set is found (all k data shards carry the same
chunk_guard4), that payload is the winner.  The repair client issues
CHUNK_WRITE_REPAIR to copy the winner's data to any data servers whose
shard is inconsistent, followed by CHUNK_FINALIZE and CHUNK_COMMIT.</t>
              </li>
              <li>
                <t>If no consistent set exists (all available payloads are partial), the
repair client selects one transaction's payload as authoritative
(typically the one with the most complete set of shards, or the most
recent cg_gen_id) and proceeds as above.</t>
              </li>
              <li>
                <t>After all data servers carry consistent, finalized, committed data, the
repair client issues CHUNK_REPAIRED to clear the errored state and
CHUNK_UNLOCK to release the locks acquired in step 1.</t>
              </li>
              <li>
                <t>The repair client reports success to the metadata server via
LAYOUTRETURN.</t>
              </li>
            </ol>
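            <t>Winner selection in steps 3 and 4 reduces to grouping shards
by guard, as in the following non-normative sketch (the names are
illustrative, and the tie-break on the higher cg_gen_id is one of
the policies suggested above):</t>
            <figure anchor="fig-example-winner-select">
              <name>Illustrative Winning Payload Selection (Non-Normative)</name>
              <artwork><![CDATA[
#include <stddef.h>
#include <stdint.h>

struct shard_view {
    uint64_t cg_client_id;
    uint64_t cg_gen_id;
};

/* Return the index of a shard carrying the winning guard: the
 * guard with the most shards, ties broken by the higher
 * cg_gen_id.  With k data shards, a count of k means a fully
 * consistent set. */
static size_t
select_winner(const struct shard_view *s, size_t n)
{
    size_t best = 0, best_count = 0;

    for (size_t i = 0; i < n; i++) {
        size_t count = 0;
        for (size_t j = 0; j < n; j++)
            if (s[j].cg_client_id == s[i].cg_client_id &&
                s[j].cg_gen_id   == s[i].cg_gen_id)
                count++;
        if (count > best_count ||
            (count == best_count &&
             s[i].cg_gen_id > s[best].cg_gen_id)) {
            best_count = count;
            best = i;
        }
    }
    return best;
}
]]></artwork>
            </figure>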
          </section>
        </section>
        <section anchor="sec-reading-chunks">
          <name>Reading Chunks</name>
          <t>The client reads chunks from the data file via CHUNK_READ.  The
number of chunks in the payload that need to be consistent depends
on both the Erasure Coding Type and the level of protection selected.
If the client has enough consistent chunks in the payload, then it
can proceed to use them to build a data block.  If it does not have
enough consistent chunks in the payload, then it can either decide
to return a LAYOUTERROR of NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server or it can retry the CHUNK_READ until there are
enough consistent chunks in the payload.</t>
          <t>As another client might be writing to the chunks as they are being
read, it is entirely possible to read the chunks while they are not
consistent.  It might even be the inconsistent chunks that contain
the new data, in which case a better action than building the data
block is to retry the CHUNK_READ to see whether the remaining chunks
have since been overwritten.</t>
        </section>
        <section anchor="whole-file-repair">
          <name>Whole File Repair</name>
          <t>Whole-file repair is the case in which too many data servers have
failed, or too many chunks have been lost, for the per-range repair
flow defined in <xref target="sec-repair-selection"/> to reconstruct the file in
place.  In this case the metadata server <bcp14>MUST</bcp14> either:</t>
          <ol spacing="normal" type="1"><li>
              <t>Construct a new layout backed by replacement data servers and
drive the reconstruction via the <strong>Data Mover</strong> mechanism (a
designated data server acts as the source of truth for client
I/O during the transition, pushing reconstructed content to the
replacement data servers in the background).  The Data Mover
mechanism also covers the non-repair cases where a file's layout
must change while remaining available to clients -- policy-
driven layout transitions, data server maintenance evacuation,
administrative ingest, TLS coverage transition, and
filehandle-backend migration.</t>
            </li>
            <li>
              <t>If the metadata server has no data-mover-capable data server
available, or the surviving shards are insufficient to
reconstruct any portion of the file, terminate the affected
byte ranges with NFS4ERR_PAYLOAD_LOST (see
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>).</t>
            </li>
          </ol>
          <t>The Data Mover mechanism is specified in the companion Proxy
Server document <xref target="I-D.haynes-nfsv4-flexfiles-v2-proxy-server"/>.</t>
          <t>Implementations that do not support the Data Mover mechanism can
still perform recovery for cases where per-range repair suffices,
using CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>) and the repair
client selection rules in <xref target="sec-repair-selection"/>.  Such
implementations will surface NFS4ERR_PAYLOAD_LOST on any failure
that exceeds per-range repair's reach, including the multi-data-
server failure scenarios the Data Mover mechanism is intended to
handle.</t>
        </section>
      </section>
      <section anchor="mixing-of-coding-types">
        <name>Mixing of Coding Types</name>
        <t>Multiple coding types can be present in a Flexible File Version 2
Layout Type layout.  The ffv2_layout4 has an array of ffv2_mirror4,
each of which has a ffv2_coding_type4.  The main reason to allow
for this is to support either the assimilation of a non-erasure-coded
file into an erasure-coded file or the export of an erasure-coded
file to a non-erasure-coded file.</t>
        <t>Assume there is an additional ffv2_coding_type4 of FFV2_CODING_REED_SOLOMON
and that it requires eight data servers.  The user wants to actively
assimilate a regular file.  As such, a layout might be as represented in
<xref target="fig-example_mixing"/>.  As this is an assimilation, most of the
data reads will be satisfied by READ (see Section 18.22 of <xref target="RFC8881"/>)
calls to index 0.  However, as this is also an active file, there
could also be CHUNK_READ (see <xref target="sec-CHUNK_READ"/>) calls to the other
indexes.</t>
        <figure anchor="fig-example_mixing">
          <name>Example of Mixed Coding Types in a Layout</name>
          <artwork><![CDATA[
 +-----------------------------------------------------+
 | ffv2_layout4:                                       |
 +-----------------------------------------------------+
 |     ffl_mirrors[0]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: 0                     |
 |         ffm_coding: FFV2_CODING_MIRRORED            |
 +-----------------------------------------------------+
 |     ffl_mirrors[1]:                                 |
 |         ffs_data_servers:                           |
 |             ffv2_data_server4[0]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[1]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[2]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[3]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_ACTIVE  |
 |             ffv2_data_server4[4]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[5]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_PARITY  |
 |             ffv2_data_server4[6]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |             ffv2_data_server4[7]                    |
 |                 ffv2ds_flags: FFV2_DS_FLAGS_SPARE   |
 |     ffm_coding: FFV2_CODING_REED_SOLOMON            |
 +-----------------------------------------------------+
]]></artwork>
        </figure>
        <t>When performing I/O via a FFV2_CODING_MIRRORED coding type, the
non-transformed data will be used, whereas with other coding types,
a metadata header and transformed block will be sent.  Further,
when reading data from the instance files, the client <bcp14>MUST</bcp14> be
prepared for one of the coding types to supply data and the other
type not to supply data.  That is, the CHUNK_READ call to the data
servers in mirror 1 might return rlr_eof set to true (see
<xref target="fig-read_chunk4"/>), which indicates that there is no data, while
the READ call to the data server in mirror 0 might return eof set to
false, which indicates that there is data.  The client <bcp14>MUST</bcp14>
determine whether there is in fact data.  An example use case is the
active assimilation of a file to ensure integrity.  As the client
is helping to translate the file to the new coding scheme, it is
actively modifying the file.  As such, it might be sequentially
reading the file in order to translate it.  The READ calls to mirror
0 would be returning data and the CHUNK_READ calls to mirror 1 would
not be returning data.  As the client overwrites the file, the WRITE
call and CHUNK_WRITE call would send data to all of the
data servers.  Finally, if the client reads back a section which
had been modified earlier, both the READ and CHUNK_READ calls would
return data.</t>
      </section>
      <section anchor="sec-rs-encoding">
        <name>Reed-Solomon Vandermonde Encoding (FFV2_ENCODING_RS_VANDERMONDE)</name>
        <section anchor="overview">
          <name>Overview</name>
          <t>Reed-Solomon (RS) codes are Maximum Distance Separable (MDS) codes:
for a (k+m, k) code, any k of the k+m encoded shards suffice to
recover the original data.  The code tolerates the simultaneous loss
of up to m shards.  <xref target="Plank97"/> is a tutorial treatment of RS
coding in RAID-like systems and is the recommended background
reading for implementers unfamiliar with the construction used
here.</t>
        </section>
        <section anchor="galois-field-arithmetic">
          <name>Galois Field Arithmetic</name>
          <t>All RS operations are performed over GF(2^8), the Galois field with
256 elements.  Each element is represented as a byte.</t>
          <dl>
            <dt>Irreducible Polynomial</dt>
            <dd>
              <t>The field is constructed using the irreducible polynomial
x^8 + x^4 + x^3 + x^2 + 1 (0x11d in hexadecimal).  The primitive
element (generator) is g = 2, which has multiplicative order 255.</t>
            </dd>
            <dt>Addition</dt>
            <dd>
              <t>Addition in GF(2^8) is bitwise XOR.</t>
            </dd>
            <dt>Multiplication</dt>
            <dd>
              <t>Multiplication uses log/antilog tables.  For non-zero elements
a and b: a * b = exp(log(a) + log(b)), where the exp table is
doubled to 512 entries to avoid modular reduction on the index sum.</t>
            </dd>
          </dl>
          <t>These are the classical constructions from Berlekamp (1968) and
Peterson &amp; Weldon (1972).  The log/antilog table approach for GF(2^8)
multiplication predates all known patents on SIMD-accelerated GF
arithmetic.  Implementers considering SIMD acceleration of GF(2^8)
operations should be aware of US Patent 8,683,296 (StreamScale),
which covers certain SIMD-based GF multiplication techniques.</t>
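          <t>A non-normative C sketch of the table construction follows.
The names (gf_init, gf_mul, gf_exp, gf_log) are illustrative, not
part of the protocol; any implementation with equivalent behavior
conforms.</t>
          <artwork><![CDATA[
#include <stdint.h>
#include <stdio.h>

/* GF(2^8) log/antilog tables per the parameters above:
 * polynomial 0x11d, generator g = 2.  The exp table is doubled
 * to 512 entries so gf_mul needs no modular reduction. */
static uint8_t gf_log[256];
static uint8_t gf_exp[512];

static void gf_init(void)
{
    unsigned x = 1;
    for (unsigned i = 0; i < 255; i++) {
        gf_exp[i] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;                 /* multiply by g = 2 */
        if (x & 0x100)
            x ^= 0x11d;          /* reduce modulo the polynomial */
    }
    for (unsigned i = 255; i < 512; i++)
        gf_exp[i] = gf_exp[i - 255];
}

static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0)
        return 0;
    /* log sums reach at most 508, within the doubled table */
    return gf_exp[gf_log[a] + gf_log[b]];
}

int main(void)
{
    gf_init();
    /* g has multiplicative order 255, so g^255 == 1 */
    printf("g^255 = %u\n", (unsigned)gf_exp[255]);
    return 0;
}
]]></artwork>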
        </section>
        <section anchor="encoding-matrix">
          <name>Encoding Matrix</name>
          <t>The encoding process uses a (k+m) x k Vandermonde matrix, normalized
so that its top k rows form the identity matrix:</t>
          <ol spacing="normal" type="1"><li>
              <t>Construct a (k+m) x k Vandermonde matrix V where V[i][j] = j^i
in GF(2^8).</t>
            </li>
            <li>
              <t>Extract the top k x k sub-matrix T from V.</t>
            </li>
            <li>
              <t>Compute T_inv = T^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply: E = V * T_inv.  The result has an identity block on top
(rows 0 through k-1) and the parity generation matrix P on the
bottom (rows k through k+m-1).</t>
            </li>
          </ol>
          <t>The identity block makes the code systematic: data shards pass through
unchanged, and only the parity sub-matrix P is needed during encoding.</t>
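          <t>The construction can be sketched in C as follows.  The sketch
is non-normative and reuses gf_exp[], gf_log[], gf_init(), and
gf_mul() from the previous sketch; gf_pow(), gf_inv(),
gf_matrix_invert(), and ffv2_rs_matrix() are illustrative helper
names, not protocol elements.</t>
          <artwork><![CDATA[
#define MAXN 32                 /* bound on k + m for this sketch */

static uint8_t gf_pow(uint8_t a, unsigned n)   /* a^n; 0^0 == 1 */
{
    uint8_t r = 1;
    while (n--)
        r = gf_mul(r, a);
    return r;
}

static uint8_t gf_inv(uint8_t a)               /* a != 0 */
{
    return gf_exp[255 - gf_log[a]];
}

/* Gauss-Jordan inversion of a k x k matrix in GF(2^8); addition
 * is XOR.  Returns 0 on success, -1 if the matrix is singular. */
static int gf_matrix_invert(unsigned k, uint8_t a[MAXN][MAXN],
                            uint8_t inv[MAXN][MAXN])
{
    for (unsigned i = 0; i < k; i++)
        for (unsigned j = 0; j < k; j++)
            inv[i][j] = (i == j);
    for (unsigned c = 0; c < k; c++) {
        unsigned r = c;
        while (r < k && a[r][c] == 0)          /* find a pivot */
            r++;
        if (r == k)
            return -1;
        for (unsigned j = 0; j < k; j++) {     /* swap rows r, c */
            uint8_t t;
            t = a[r][j];   a[r][j] = a[c][j];     a[c][j] = t;
            t = inv[r][j]; inv[r][j] = inv[c][j]; inv[c][j] = t;
        }
        uint8_t s = gf_inv(a[c][c]);           /* scale pivot to 1 */
        for (unsigned j = 0; j < k; j++) {
            a[c][j] = gf_mul(a[c][j], s);
            inv[c][j] = gf_mul(inv[c][j], s);
        }
        for (unsigned i = 0; i < k; i++) {     /* clear column c */
            if (i == c || a[i][c] == 0)
                continue;
            uint8_t f = a[i][c];
            for (unsigned j = 0; j < k; j++) {
                a[i][j] ^= gf_mul(f, a[c][j]);
                inv[i][j] ^= gf_mul(f, inv[c][j]);
            }
        }
    }
    return 0;
}

/* Steps 1-4: build V, invert its top k x k block T, E = V * T_inv.
 * Rows k..k+m-1 of E are the parity generation matrix P. */
static void ffv2_rs_matrix(unsigned k, unsigned m,
                           uint8_t E[MAXN][MAXN])
{
    uint8_t V[MAXN][MAXN], T[MAXN][MAXN], Tinv[MAXN][MAXN];

    for (unsigned i = 0; i < k + m; i++)
        for (unsigned j = 0; j < k; j++)
            V[i][j] = gf_pow((uint8_t)j, i);   /* V[i][j] = j^i */
    for (unsigned i = 0; i < k; i++)
        for (unsigned j = 0; j < k; j++)
            T[i][j] = V[i][j];
    gf_matrix_invert(k, T, Tinv);
    for (unsigned i = 0; i < k + m; i++)
        for (unsigned j = 0; j < k; j++) {
            uint8_t s = 0;
            for (unsigned x = 0; x < k; x++)
                s ^= gf_mul(V[i][x], Tinv[x][j]);
            E[i][j] = s;
        }
}
]]></artwork>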
        </section>
        <section anchor="encoding">
          <name>Encoding</name>
          <t>Given k data shards, each of shard_len bytes, encoding produces m
parity shards, each also shard_len bytes:</t>
          <artwork><![CDATA[
For each byte position j in [0, shard_len):
  For each parity shard i in [0, m):
    parity[i][j] = sum over s in [0, k) of P[i][s] * data[s][j]
]]></artwork>
          <t>where the sum and product are in GF(2^8).  All shards (data and
parity) are the same size.</t>
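          <t>In C, the loop above can be rendered as follows.  This is
non-normative and reuses gf_mul() and the matrix sketch above, with
P[i][s] = E[k + i][s]:</t>
          <artwork><![CDATA[
#include <stddef.h>   /* size_t */

/* Generate m parity shards from k data shards of shard_len bytes. */
static void rs_encode(unsigned k, unsigned m, size_t shard_len,
                      uint8_t P[MAXN][MAXN],
                      const uint8_t *const data[],  /* k shards */
                      uint8_t *const parity[])      /* m shards */
{
    for (size_t j = 0; j < shard_len; j++)
        for (unsigned i = 0; i < m; i++) {
            uint8_t s = 0;
            for (unsigned x = 0; x < k; x++)
                s ^= gf_mul(P[i][x], data[x][j]);  /* GF add is XOR */
            parity[i][j] = s;
        }
}
]]></artwork>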
        </section>
        <section anchor="decoding">
          <name>Decoding</name>
          <t>When one or more shards are lost (up to m), reconstruction proceeds
by matrix inversion:</t>
          <ol spacing="normal" type="1"><li>
              <t>Select k available shards (from the k+m total).</t>
            </li>
            <li>
              <t>Form a k x k sub-matrix S of the encoding matrix E by selecting the
rows corresponding to the available shards.</t>
            </li>
            <li>
              <t>Compute S_inv = S^(-1) using Gaussian elimination in GF(2^8).</t>
            </li>
            <li>
              <t>Multiply S_inv by the vector of available shard data at each byte
position to recover the original k data shards.</t>
            </li>
            <li>
              <t>If any parity shards are also missing, regenerate them by
re-encoding from the recovered data shards.</t>
            </li>
          </ol>
          <t>For large shards, the reconstruction cost is dominated by the
per-byte matrix-vector multiplication, which is O(k^2) in GF(2^8)
multiplications per byte position; the one-time O(k^3) Gaussian
inversion of S is amortized across the shard length.</t>
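          <t>A non-normative C sketch of the decode path, reusing gf_mul()
and gf_matrix_invert() from the earlier sketches:</t>
          <artwork><![CDATA[
/* Recover the k data shards from any k available encoded shards.
 * rows[i] is the index (0..k+m-1) of the i-th available shard in
 * the encoding matrix E built earlier. */
static void rs_decode(unsigned k, size_t shard_len,
                      uint8_t E[MAXN][MAXN], const unsigned rows[],
                      const uint8_t *const avail[],  /* k shards */
                      uint8_t *const out[])          /* k shards */
{
    uint8_t S[MAXN][MAXN], Sinv[MAXN][MAXN];

    for (unsigned i = 0; i < k; i++)          /* step 2 */
        for (unsigned j = 0; j < k; j++)
            S[i][j] = E[rows[i]][j];
    gf_matrix_invert(k, S, Sinv);             /* step 3 */
    for (size_t b = 0; b < shard_len; b++)    /* step 4 */
        for (unsigned i = 0; i < k; i++) {
            uint8_t s = 0;
            for (unsigned x = 0; x < k; x++)
                s ^= gf_mul(Sinv[i][x], avail[x][b]);
            out[i][b] = s;
        }
    /* Step 5 (regenerating missing parity) is rs_encode() over
     * the recovered data shards. */
}
]]></artwork>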
        </section>
        <section anchor="rs-interoperability-requirements">
          <name>RS Interoperability Requirements</name>
          <t>For two implementations of FFV2_ENCODING_RS_VANDERMONDE to
interoperate, they <bcp14>MUST</bcp14> agree on all of the following parameters.
Any deviation produces a different encoding matrix and renders
data unrecoverable by a different implementation.</t>
          <ul spacing="normal">
            <li>
              <t>Irreducible polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x11d)</t>
            </li>
            <li>
              <t>Primitive element: g = 2</t>
            </li>
            <li>
              <t>Vandermonde evaluation points: V[i][j] = j^i in GF(2^8)</t>
            </li>
            <li>
              <t>Matrix normalization: E = V * (V[0..k-1])^(-1)</t>
            </li>
          </ul>
          <t>These four parameters fully determine the encoding matrix for any
(k, m) configuration.</t>
        </section>
        <section anchor="rs-shard-sizes">
          <name>RS Shard Sizes</name>
          <t>All RS shards (data and parity) are exactly shard_len bytes.  This
simplifies the CHUNK operation protocol: chunk_size is exactly the
shard size for all mirrors.</t>
          <table anchor="tbl-rs-shards">
            <name>RS shard sizes for common configurations</name>
            <thead>
              <tr>
                <th align="left">Configuration</th>
                <th align="left">File Size</th>
                <th align="left">Shard Size</th>
                <th align="left">Total Storage</th>
                <th align="left">Overhead</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">4+2</td>
                <td align="left">4 KB</td>
                <td align="left">1 KB</td>
                <td align="left">6 KB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">4+2</td>
                <td align="left">1 MB</td>
                <td align="left">256 KB</td>
                <td align="left">1.5 MB</td>
                <td align="left">50%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">4 KB</td>
                <td align="left">512 B</td>
                <td align="left">5 KB</td>
                <td align="left">25%</td>
              </tr>
              <tr>
                <td align="left">8+2</td>
                <td align="left">1 MB</td>
                <td align="left">128 KB</td>
                <td align="left">1.25 MB</td>
                <td align="left">25%</td>
              </tr>
            </tbody>
          </table>
        </section>
      </section>
      <section anchor="sec-mojette-encoding">
        <name>Mojette Transform Encoding (FFV2_ENCODING_MOJETTE_SYSTEMATIC, FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC)</name>
        <section anchor="overview-1">
          <name>Overview</name>
          <t>The Mojette Transform is an erasure coding technique based on discrete
geometry rather than algebraic field operations.  It computes 1D
projections of a 2D grid along selected directions.  Given enough
projections, the original grid can be reconstructed exactly.</t>
          <t>The transform operates on unsigned integer elements using modular
addition.  The element size is an implementation choice: 128-bit
elements leverage SSE SIMD instructions; 64-bit elements are
compatible with NEON and AVX2 vector widths.  No Galois field
operations are required.</t>
        </section>
        <section anchor="grid-structure">
          <name>Grid Structure</name>
          <t>Data is arranged as a P x Q grid of unsigned integer elements,
where P is the number of columns and Q is the number of rows.
For k data shards of S bytes each with W-byte elements:</t>
          <artwork><![CDATA[
P = S / W       (columns per row)
Q = k           (rows = data shards)
]]></artwork>
        </section>
        <section anchor="directions">
          <name>Directions</name>
          <t>A direction is a pair of coprime integers (p_i, q_i).  Implementations
<bcp14>SHOULD</bcp14> use q_i = 1 for all directions <xref target="PARREIN"/>.  For n = k + m total
shards, n directions are generated with non-zero p values symmetric
around zero:</t>
          <ul spacing="normal">
            <li>
              <t>For n = 4: p = {-2, -1, 1, 2}</t>
            </li>
            <li>
              <t>For n = 6: p = {-3, -2, -1, 1, 2, 3}</t>
            </li>
          </ul>
        </section>
        <section anchor="forward-transform-encoding">
          <name>Forward Transform (Encoding)</name>
          <t>For each direction (p_i, q_i), the forward transform computes a 1D
projection.  Each bin sums the grid elements along a discrete line:</t>
          <artwork><![CDATA[
Projection(b, p, q) = SUM over all (row, col) where
                       row * p - col * q + offset = b
                       of Grid[row][col]
]]></artwork>
          <t>The number of bins B in a projection is:</t>
          <artwork><![CDATA[
B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1
]]></artwork>
          <t>For q = 1, this simplifies to:</t>
          <artwork><![CDATA[
B = abs(p) * (Q - 1) + P
]]></artwork>
          <t>The byte size of the projection is B * W.</t>
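          <t>A non-normative C sketch of the forward transform for one
direction follows.  The offset convention (the smallest bin index
maps to zero) is an assumption of this sketch; the document does not
mandate a particular offset.  64-bit elements with wrapping (modular)
addition are used, and q is assumed positive per the direction rules
above.</t>
          <artwork><![CDATA[
#include <stdint.h>
#include <stdlib.h>   /* abs */

/* B(p, q, P, Q) = |p| * (Q - 1) + |q| * (P - 1) + 1 */
static size_t mojette_bins(int p, int q, size_t P, size_t Q)
{
    return (size_t)abs(p) * (Q - 1) + (size_t)abs(q) * (P - 1) + 1;
}

/* Project a Q x P grid of 64-bit elements along (p, q), q > 0.
 * proj must hold mojette_bins(p, q, P, Q) elements. */
static void mojette_project(const uint64_t *grid, size_t P, size_t Q,
                            int p, int q, uint64_t *proj)
{
    /* Offset chosen so the minimum of row*p - col*q maps to 0
     * (an assumed convention, not mandated by this document). */
    long off = (p < 0 ? (long)-p * (long)(Q - 1) : 0)
             + (long)q * (long)(P - 1);
    size_t B = mojette_bins(p, q, P, Q);

    for (size_t b = 0; b < B; b++)
        proj[b] = 0;
    for (size_t row = 0; row < Q; row++)
        for (size_t col = 0; col < P; col++)
            proj[(size_t)((long)row * p - (long)col * q + off)]
                += grid[row * P + col];   /* wrapping add */
}
]]></artwork>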
        </section>
        <section anchor="katz-reconstruction-criterion">
          <name>Katz Reconstruction Criterion</name>
          <t>Reconstruction is possible if and only if the Katz criterion
<xref target="KATZ"/> holds:</t>
          <artwork><![CDATA[
SUM(i=1..n) |q_i| >= Q    OR    SUM(i=1..n) |p_i| >= P
]]></artwork>
          <t>When all q_i = 1, the q-sum simplifies to n &gt;= Q.</t>
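          <t>A direct, non-normative transcription of the criterion in C:</t>
          <artwork><![CDATA[
#include <stdlib.h>   /* abs */

/* Returns non-zero if n directions (p[i], q[i]) satisfy the Katz
 * criterion for a P x Q grid. */
static int katz_ok(const int p[], const int q[], unsigned n,
                   unsigned P, unsigned Q)
{
    unsigned sum_p = 0, sum_q = 0;
    for (unsigned i = 0; i < n; i++) {
        sum_p += (unsigned)abs(p[i]);
        sum_q += (unsigned)abs(q[i]);
    }
    return sum_q >= Q || sum_p >= P;
}
]]></artwork>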
        </section>
        <section anchor="inverse-transform-decoding">
          <name>Inverse Transform (Decoding)</name>
          <t>The inverse uses the corner-peeling algorithm:</t>
          <ol spacing="normal" type="1"><li>
              <t>Count how many unknown elements contribute to each bin.</t>
            </li>
            <li>
              <t>Find any bin with exactly one contributor (singleton).</t>
            </li>
            <li>
              <t>Recover the element, subtract from all projections.</t>
            </li>
            <li>
              <t>Repeat until all elements are recovered.</t>
            </li>
          </ol>
          <t>The algorithm is O(n * P * Q).</t>
        </section>
        <section anchor="systematic-mojette">
          <name>Systematic Mojette</name>
          <t>In the systematic form (FFV2_ENCODING_MOJETTE_SYSTEMATIC), the first
k shards are the original data rows and the remaining m shards are
projections.  Healthy reads require no decoding.</t>
          <t>Reconstruction of missing data rows proceeds via the
corner-peeling algorithm of <xref target="NORMAND"/>:</t>
          <ol spacing="normal" type="1"><li>
              <t>Load available parity projections.</t>
            </li>
            <li>
                <t>Subtract contributions of present data rows to form the
residual (see the sketch below).</t>
            </li>
            <li>
              <t>Corner-peel the residual to recover missing rows.</t>
            </li>
          </ol>
          <t>Reconstruction cost is O(m * k) -- a fundamental advantage over RS
at wide geometries (k &gt;= 8).</t>
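          <t>The residual computation in step 2 can be sketched as follows.
This is non-normative; it reuses the offset convention assumed in the
forward-transform sketch, and wrapping unsigned subtraction is the
inverse of the wrapping addition used there:</t>
          <artwork><![CDATA[
/* Subtract each present data row's contribution from the parity
 * projection proj for direction (p, q), q > 0.  present[row] is
 * non-zero if data row "row" survives. */
static void mojette_residual(uint64_t *proj, const uint64_t *grid,
                             size_t P, size_t Q, int p, int q,
                             const int present[])
{
    long off = (p < 0 ? (long)-p * (long)(Q - 1) : 0)
             + (long)q * (long)(P - 1);

    for (size_t row = 0; row < Q; row++) {
        if (!present[row])
            continue;
        for (size_t col = 0; col < P; col++)
            proj[(size_t)((long)row * p - (long)col * q + off)]
                -= grid[row * P + col];   /* wrapping subtract */
    }
}
]]></artwork>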
        </section>
        <section anchor="non-systematic-mojette">
          <name>Non-Systematic Mojette</name>
          <t>In the non-systematic form (FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC),
all k + m shards are projections.  Every read requires the full
inverse transform.  This provides constant performance regardless of
failure count, but at higher baseline read cost than systematic.</t>
        </section>
        <section anchor="mojette-shard-sizes">
          <name>Mojette Shard Sizes</name>
          <t>Unlike RS, Mojette parity shard sizes vary by direction:</t>
          <table anchor="tbl-mojette-proj-sizes">
            <name>Mojette projection sizes for 4+2, 4KB shards, 64-bit elements</name>
            <thead>
              <tr>
                <th align="left">Direction (p, q)</th>
                <th align="left">Bins (B) for P=512, Q=4</th>
                <th align="left">Size (bytes, 64-bit elements)</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">(-3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
              <tr>
                <td align="left">(-2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(-1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(1, 1)</td>
                <td align="left">515</td>
                <td align="left">4120</td>
              </tr>
              <tr>
                <td align="left">(2, 1)</td>
                <td align="left">518</td>
                <td align="left">4144</td>
              </tr>
              <tr>
                <td align="left">(3, 1)</td>
                <td align="left">521</td>
                <td align="left">4168</td>
              </tr>
            </tbody>
          </table>
          <t>When using CHUNK operations, the chunk_size is a nominal stride; the
last chunk in a parity shard <bcp14>MAY</bcp14> be shorter than the stride.</t>
        </section>
      </section>
      <section anchor="comparison-of-encoding-types">
        <name>Comparison of Encoding Types</name>
        <table anchor="tbl-encoding-comparison">
          <name>Comparison of erasure encoding types</name>
          <thead>
            <tr>
              <th align="left">Property</th>
              <th align="left">Reed-Solomon</th>
              <th align="left">Mojette Systematic</th>
              <th align="left">Mojette Non-Systematic</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">MDS guarantee</td>
              <td align="left">Yes</td>
              <td align="left">Yes (Katz)</td>
              <td align="left">Yes (Katz)</td>
            </tr>
            <tr>
              <td align="left">Shard sizes</td>
              <td align="left">Uniform</td>
              <td align="left">Variable</td>
              <td align="left">Variable</td>
            </tr>
            <tr>
              <td align="left">Reconstruction cost</td>
              <td align="left">O(k^2)</td>
              <td align="left">O(m * k)</td>
              <td align="left">O(m * k)</td>
            </tr>
            <tr>
              <td align="left">Healthy read cost</td>
              <td align="left">Zero</td>
              <td align="left">Zero</td>
              <td align="left">Full decode</td>
            </tr>
            <tr>
              <td align="left">GF operations</td>
              <td align="left">Yes (GF(2^8))</td>
              <td align="left">No</td>
              <td align="left">No</td>
            </tr>
            <tr>
              <td align="left">Recommended k</td>
              <td align="left">k &lt;= 6</td>
              <td align="left">k &gt;= 4</td>
              <td align="left">Archive only</td>
            </tr>
          </tbody>
        </table>
        <t>At small k (k &lt;= 6), RS is the conservative choice with uniform shard
sizes.  At wider geometries (k &gt;= 8), systematic Mojette offers lower
reconstruction cost.  Non-systematic Mojette is suitable only for
archive workloads where reads are infrequent.</t>
      </section>
      <section anchor="sec-spare-substitution">
        <name>First-Line Substitution to a Spare</name>
        <t>When a client's CHUNK_WRITE to an FFV2_DS_FLAGS_ACTIVE data server
fails with a transport-level error, NFS4ERR_IO, NFS4ERR_NOSPC, or
any other code that indicates the data server cannot accept the
shard, and the layout includes a data server flagged
FFV2_DS_FLAGS_SPARE (<xref target="sec-ffv2_ds_flags4"/>) that is not already
holding a shard for the affected payload, the client <bcp14>MAY</bcp14> substitute
the spare for the failing active data server for this write.</t>
        <t>Substitution avoids the full metadata-server repair flow.  The
client issues CHUNK_WRITE to the spare in place of the failing
ACTIVE and, if successful, proceeds with CHUNK_FINALIZE and
CHUNK_COMMIT against the full set of data servers the payload
now resides on (the k-1 healthy ACTIVE plus the substituted
SPARE).  The spare becomes the i-th shard holder for the
affected payload.</t>
        <t>The client <bcp14>MUST</bcp14> inform the metadata server of the substitution
before returning the layout.  This is done via LAYOUTERROR on
the failing ACTIVE (reporting the error code the client
encountered) in the same compound as, or before, any
LAYOUTSTATS reporting of the substitution.  The metadata server
uses the LAYOUTERROR to decide whether to update the layout in
place -- promoting the spare to ACTIVE and demoting the failing
ACTIVE to a stale-or-unreachable state -- or to push new
layouts via CB_RECALL_ANY to other clients so readers do not
continue to consult the failing ACTIVE.</t>
        <t>Substitution is optional.  A client that does not implement it,
or does not have a suitable spare in the layout, falls through
to the normal write-hole handling below.  Substitution is also
not available to clients writing with cwa_stable == FILE_SYNC
unless the client is prepared to drive FILE_SYNC semantics on
the spare as well; otherwise the substitution silently
downgrades the durability contract.</t>
        <t>Substitution <bcp14>MUST NOT</bcp14> be used when the existing PENDING state
on any shard of the affected payload carries a different
chunk_guard4 than the current transaction (the range has been
adopted by a repair client already -- the normal repair flow
applies and substitution would collide).</t>
      </section>
      <section anchor="handling-write-holes">
        <name>Handling Write Holes</name>
        <t>A write hole occurs when a client begins writing a stripe but does not
successfully write all k+m shards before a failure.  Some data servers
will hold new data while others still hold old data, producing an
inconsistent payload.</t>
        <t>The CHUNK_WRITE / CHUNK_ROLLBACK mechanism addresses this.  When a client
issues CHUNK_WRITE, the data server retains a copy of the previous shard
and places the new data in the PENDING state.  If any shard write fails,
the client issues CHUNK_ROLLBACK to each data server that received a
CHUNK_WRITE, restoring the previous content.  The payload remains
consistent from the reader's perspective throughout, because PENDING
blocks carry the new chunk_guard4 value and CHUNK_READ returns the last
COMMITTED block while a PENDING block exists.</t>
        <t>A single-shard CHUNK_WRITE failure <bcp14>MAY</bcp14> also be handled without
CHUNK_ROLLBACK by substituting the failing data server with an
FFV2_DS_FLAGS_SPARE, per <xref target="sec-spare-substitution"/>.  This
avoids engaging the metadata server's repair flow and is the
preferred path on transient single-DS failures when the layout
exposes a suitable spare.</t>
        <t>In the multiple-writer model, a write hole can also arise when two clients
are racing.  The chunk_guard4 value on each shard identifies which
transaction wrote it.  A reader that finds shards with different guard
values detects the inconsistency and either retries (if a concurrent write
is still in progress) or reports NFS4ERR_PAYLOAD_NOT_CONSISTENT to the
metadata server to trigger repair.</t>
        <t>When substitution and CHUNK_ROLLBACK are both unavailable, and
the payload cannot be reconstructed because too many shards have
been lost (for example, a catastrophic multi-DS failure with no
spares provisioned), the repair flow ultimately terminates with
NFS4ERR_PAYLOAD_LOST; see
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>.</t>
      </section>
    </section>
    <section anchor="sec-system-model">
      <name>System Model and Correctness</name>
      <t>The design decisions in this document -- centralized coordination
through the metadata server, CAS semantics via chunk_guard4,
pessimistic lock escrow during repair, and erasure-coded reads
from any sufficient subset -- depart visibly from a classical
distributed-consensus protocol such as Paxos or Raft.  This
section states the system model those decisions rest on, the
consistency and progress guarantees the protocol provides under
that model, and how the protocol relates to (and when it relies
on) classical consensus.  It is intended as the correctness
framing for implementers and reviewers; the normative wire
behavior is defined in the preceding sections.</t>
      <section anchor="sec-system-model-wire">
        <name>Wire Semantics vs Implementation</name>
        <t>The protocol defines wire semantics, not data-server
implementation.  The operations introduced in
<xref target="sec-new-ops"/> (CHUNK_WRITE, CHUNK_FINALIZE, CHUNK_COMMIT,
CHUNK_ROLLBACK, CHUNK_LOCK / CHUNK_UNLOCK, CHUNK_READ,
CHUNK_REPAIRED, CHUNK_ERROR, CHUNK_HEADER_READ,
CHUNK_WRITE_REPAIR) together with the per-chunk state machine
(<xref target="sec-system-model-chunk-state"/>) and the chunk_guard4 CAS
(<xref target="sec-chunk_guard4"/>) are the entire surface a peer observes.
The data server's internal representation of persistent state is
not exposed on the wire, and two data-server implementations
that satisfy the same wire semantics <bcp14>MAY</bcp14> differ arbitrarily in
their internal structure.</t>
        <t>In particular, the protocol does NOT exchange:</t>
        <ul spacing="normal">
          <li>
            <t>which on-disk layout (log-structured, append-only,
in-place-overwrite, external object store, key-value store,
or any other) a data server uses to persist chunks;</t>
          </li>
          <li>
            <t>whether a data server holds PENDING and FINALIZED chunks in
a single blob or in distinct regions;</t>
          </li>
          <li>
            <t>how a data server represents the CHUNK_LOCK table, the guard
epoch, or the escrow owner;</t>
          </li>
          <li>
            <t>whether a data server's chunk retention beyond COMMIT is
implemented via shadow blocks, journals, reference counts,
or copy-on-write.</t>
          </li>
        </ul>
        <t>This decoupling is deliberate.  It lets the protocol accommodate
future smart-DS designs -- including designs that integrate more
closely with storage back-ends that already provide atomic
replace, multi-version concurrency, or internal erasure coding --
without protocol revisions, provided the wire semantics are
preserved.  Conversely, a data server implementer is free to
pick the representation that best fits the underlying storage
stack without fear that some less common implementation choice
is disallowed.</t>
        <t>The counterpart of this rule is that the wire is the entire
contract.  Any behavior a client relies on <bcp14>MUST</bcp14> be observable
via the operations listed above; any behavior that is not
observable (cache state, background scrubbing cadence,
internal retry ordering, on-disk layout) is implementation
detail and <bcp14>MUST NOT</bcp14> be depended upon.</t>
      </section>
      <section anchor="sec-system-model-roles">
        <name>Actors and Roles</name>
        <t>Three actors participate on behalf of any given file:</t>
        <dl>
          <dt>pNFS client:</dt>
          <dd>
            <t>Issues CHUNK operations to data servers over the data path;
issues LAYOUTGET, LAYOUTRETURN, LAYOUTERROR, and SEQUENCE to
the metadata server on the control path.  Authenticates to the
metadata server via AUTH_SYS, RPCSEC_GSS, or TLS.  <bcp14>MAY</bcp14> be
selected as a repair client via CB_CHUNK_REPAIR.</t>
          </dd>
          <dt>Metadata server (MDS):</dt>
          <dd>
            <t>Is the sole coordinator for the file.  Grants, renews, and
revokes layouts; issues TRUST_STATEID / REVOKE_STATEID /
BULK_REVOKE_STATEID to each tight-coupled data server; selects
the repair client under the rules in
<xref target="sec-repair-selection"/>; owns the reserved
CHUNK_GUARD_CLIENT_ID_MDS escrow identity for in-flight repair.</t>
          </dd>
          <dt>Data server (DS):</dt>
          <dd>
            <t>Persists chunks and enforces the per-file trust table, the
per-chunk guard CAS (chunk_guard4), the per-chunk lock state
(including the MDS-escrow owner), and the chunk state machine
(EMPTY / PENDING / FINALIZED / COMMITTED).  Has no
coordinator role.  Has no knowledge of the erasure coding type
in use for any file: the erasure transform is performed
entirely at the client, and the data server stores the
resulting chunks without interpreting their contents.</t>
          </dd>
        </dl>
        <t>The protocol does NOT mandate how a data server implements the
chunk state machine or stores PENDING chunks.  An implementation
<bcp14>MAY</bcp14> use per-client staging files, a single append-only instance
file with an index, a separate metadata-header file paired with
a blocks file, a log-structured store, or any other
representation that preserves the normative semantics (the
EMPTY / PENDING / FINALIZED / COMMITTED transitions, the
chunk_guard4 CAS, lock continuity across revocation, and the
integrity checks).  The choice is a data-server implementation
concern and is transparent to clients and the metadata server.</t>
        <t>Each file is owned by exactly one metadata server at any given
instant.  Ownership transfer between metadata servers (for
example, during MDS failover) is implementation-defined and out
of scope for this document; see <xref target="sec-system-model-consensus"/>.</t>
      </section>
      <section anchor="sec-system-model-failures">
        <name>Failure Model</name>
        <t>The protocol assumes:</t>
        <dl>
          <dt>Crash-stop:</dt>
          <dd>
            <t>Clients, metadata servers, and data servers fail by stopping.
A restarted component rejoins the protocol with a fresh epoch
and participates in the grace / reclaim path already defined
in <xref target="RFC8881"/>.  Correct components do not exhibit arbitrary
(Byzantine) behavior.</t>
          </dd>
          <dt>Fail-silent data servers:</dt>
          <dd>
            <t>Data servers report honestly about the state of the data they
hold.  The protocol detects on-disk bit rot via CRC32
(see <xref target="sec-CHUNK_WRITE"/>) but does not defend against a data
server that deliberately lies about whether a chunk is
COMMITTED or what its contents are.  Byzantine data servers
are explicitly outside the trust model; see
<xref target="sec-system-model-nongoals"/>.</t>
          </dd>
          <dt>Authenticated writers and their own data:</dt>
          <dd>
            <t>An authenticated client may write arbitrary (even
semantically invalid) bytes into chunks it owns.  The CRC32
check detects transport corruption, not adversarial content.
This matches the existing NFSv4 authorization model: once
you have write access, you may write anything.</t>
          </dd>
          <dt>Network partitions:</dt>
          <dd>
            <t>The protocol is partition-tolerant at the cost of availability
during the partition window.  A client partitioned from a
data server recovers via LAYOUTERROR and may be issued a new
layout (possibly against a spare, see
<xref target="sec-spare-substitution"/>).  An MDS partitioned from a data
server eventually renews trust entries on reconnection; in
the interim, the data server returns NFS4ERR_DELAY for
affected stateids (see <xref target="sec-tight-coupling-mds-crash"/>).
Message loss is bounded by RPC retransmit; eventual delivery
is assumed once the partition heals.
</t>
            <t>Split-brain scenarios (in which a partitioned minority of
the data servers in a mirror set attempts to make progress
independently of the majority) cannot drive inconsistent
writes to COMMITTED state.  The chunk_guard4 CAS on each
write requires the guard value of a successor chunk to
strictly advance beyond the guard value of its predecessor; on
partition heal, any writes attempted on the minority side
are detected by the majority because their guard values do
not satisfy the CAS precondition, and those writes are
discarded.  When reconciliation is impossible -- for example,
the erasure code has lost too many shards across both sides
of the partition to reconstruct any single consistent
generation -- the repair flow terminates with
NFS4ERR_PAYLOAD_LOST (see <xref target="sec-NFS4ERR_PAYLOAD_LOST"/>),
which is terminal for the affected ranges.</t>
          </dd>
          <dt>Lease bound:</dt>
          <dd>
            <t>All state held by a data server on behalf of a metadata server
is bounded by the TRUST_STATEID expiry (see
<xref target="sec-tight-coupling-lease"/>).  An orphaned entry will
eventually expire even if the metadata server never returns.</t>
          </dd>
        </dl>
      </section>
      <section anchor="sec-system-model-chunk-state">
        <name>Chunk State Machine</name>
        <t>Each chunk on a data server occupies exactly one of four states.
The transitions below are the complete set; any implementation
of the data server's chunk state table <bcp14>MUST</bcp14> admit these
transitions and no others.</t>
        <figure anchor="fig-chunk-state-machine">
          <name>Chunk lifecycle on the data server</name>
          <artwork><![CDATA[
                       CHUNK_WRITE
                    (fresh cg_gen_id)
      +---------+ ------------------> +-----------+
      |  EMPTY  |                     |  PENDING  |
      +---------+ <------------------ +-----------+
           ^        CHUNK_ROLLBACK        |  ^
           |       (discard PENDING)      |  | CHUNK_WRITE
           |                              |  | (replace PENDING,
           |                              |  |  same writer, same
           |                              |  |  cg_gen_id)
           |                              |  |
           |             CHUNK_FINALIZE   v  |
           |          (writer stops       |
           |           further writes)    |
           |                              v
           |                       +-------------+
           |        CHUNK_ROLLBACK |  FINALIZED  |
           |       (discard        +-------------+
           |        FINALIZED)           |
           |                             | CHUNK_COMMIT
           |                             |  (make durable and
           |                             |   globally visible)
           |                             v
           |                       +-------------+
           +-------------------- > |  COMMITTED  |
                CHUNK_ROLLBACK     +-------------+
             (only via repair;          |
              replaces with a newer     | CHUNK_WRITE with a higher
              COMMITTED generation      | cg_gen_id begins a new
              or discards per the       | PENDING successor;
              rollback invariant)       | the prior COMMITTED is
                                        | retained until its
                                        | successor is COMMITTED
                                        | (see the rollback
                                        v  invariant below)
                                  (next PENDING
                                   against same chunk)
]]></artwork>
        </figure>
        <t>States:</t>
        <dl>
          <dt>EMPTY:</dt>
          <dd>
            <t>The chunk has no payload.  CHUNK_READ returns a zero-filled
result; CHUNK_WRITE against an EMPTY chunk is the first write.</t>
          </dd>
          <dt>PENDING:</dt>
          <dd>
            <t>The chunk has payload accepted by CHUNK_WRITE but not yet
finalized.  Not visible to CHUNK_READ (see
<xref target="sec-system-model-consistency"/>).  Further CHUNK_WRITEs from
the same writer <bcp14>MAY</bcp14> replace the payload in place (same
cg_gen_id).</t>
          </dd>
          <dt>FINALIZED:</dt>
          <dd>
            <t>The writer has signaled via CHUNK_FINALIZE that it will send
no more CHUNK_WRITEs for this generation.  Still not visible
to CHUNK_READ, but a candidate for CHUNK_COMMIT.</t>
          </dd>
          <dt>COMMITTED:</dt>
          <dd>
            <t>The chunk is durable and globally visible.  Subsequent
CHUNK_READs return this content until a newer COMMITTED
generation replaces it.  A higher-generation PENDING successor
<bcp14>MAY</bcp14> exist concurrently; the rollback invariant in
<xref target="sec-system-model-consistency"/> requires the data server to
retain the COMMITTED content while that successor exists.</t>
          </dd>
        </dl>
        <t>Transitions are driven by the operations named on the arrows.
CHUNK_ROLLBACK against a COMMITTED chunk is used only on the
repair path (see <xref target="sec-CHUNK_ROLLBACK"/>) and replaces the chunk
with a newer COMMITTED generation chosen by the repair client,
rather than returning the chunk to EMPTY.</t>
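        <t>The transition set can be captured in a small, non-normative
table.  The enum and function names below are illustrative; a data
server may represent chunk state however it likes, provided the
observable transitions match the figure.</t>
        <artwork><![CDATA[
enum chunk_state { EMPTY, PENDING, FINALIZED, COMMITTED };
enum chunk_op    { OP_WRITE, OP_FINALIZE, OP_COMMIT, OP_ROLLBACK };

/* Returns the successor state, or -1 for a transition the state
 * machine does not admit.  OP_WRITE against COMMITTED models the
 * start of a higher-generation PENDING successor; OP_ROLLBACK
 * against PENDING or FINALIZED restores the retained predecessor
 * content (the EMPTY arrow in the figure shows the first-write
 * case); OP_ROLLBACK against COMMITTED is the repair-only
 * replacement path. */
static int next_state(enum chunk_state s, enum chunk_op op)
{
    switch (op) {
    case OP_WRITE:
        if (s == EMPTY || s == PENDING || s == COMMITTED)
            return PENDING;
        return -1;               /* no writes after CHUNK_FINALIZE */
    case OP_FINALIZE:
        return (s == PENDING) ? FINALIZED : -1;
    case OP_COMMIT:
        return (s == FINALIZED) ? COMMITTED : -1;
    case OP_ROLLBACK:
        if (s == PENDING || s == FINALIZED)
            return EMPTY;        /* or the prior COMMITTED content */
        if (s == COMMITTED)
            return COMMITTED;    /* replaced with newer generation */
        return -1;
    }
    return -1;
}
]]></artwork>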
      </section>
      <section anchor="sec-system-model-consistency">
        <name>Consistency Guarantees</name>
        <t>The protocol provides <strong>per-chunk linearizability on COMMITTED
state</strong>:</t>
        <ol spacing="normal" type="1"><li>
            <t>Once CHUNK_COMMIT returns success to a writer for a given
chunk, every subsequent CHUNK_READ whose stateid postdates
the COMMIT observes either that writer's data or the data of
a later committed write.  A reader <bcp14>MUST NOT</bcp14> observe a
rolled-back write as if it had committed.</t>
          </li>
          <li>
            <t>Concurrent writers on the same chunk in multi-writer mode
serialize via chunk_guard4.  On guard conflict one writer
succeeds; the other receives NFS4ERR_CHUNK_GUARDED and <bcp14>MUST</bcp14>
either abandon the write or re-read and retry.  At most one
generation becomes COMMITTED per serialized decision (a sketch
of the guard evaluation follows this list).</t>
          </li>
          <li>
            <t>During repair, the chunk's lock is held continuously -- first
by the original writer, then transferred to the MDS-escrow
owner on REVOKE_STATEID, and finally adopted by the repair
client via CHUNK_LOCK_FLAGS_ADOPT.  No writer that did not
hold the lock may observe or mutate the chunk.  The
invariant "a chunk with a live lock has exactly one logical
owner at any instant" is preserved across revocation.</t>
          </li>
        </ol>
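        <t>A non-normative sketch of the guard evaluation in rule 2, as
performed locally by the data server storing the chunk.  The struct
layout, the owner field, and the assumption that cg_gen_id comparison
is a simple numeric greater-than are illustrative modeling choices,
not the wire format:</t>
        <artwork><![CDATA[
#include <stdint.h>

struct guard { uint64_t cg_gen_id; uint64_t owner; };

#define CAS_OK       0
#define CAS_GUARDED  1   /* maps to NFS4ERR_CHUNK_GUARDED */

/* A successor write must strictly advance the stored generation;
 * the one exception is the same writer replacing its own PENDING
 * in place at the same generation (see the state machine above). */
static int guard_check(const struct guard *stored,
                       const struct guard *presented,
                       int stored_is_pending)
{
    if (presented->cg_gen_id > stored->cg_gen_id)
        return CAS_OK;                     /* strict advance */
    if (stored_is_pending &&
        presented->cg_gen_id == stored->cg_gen_id &&
        presented->owner == stored->owner)
        return CAS_OK;                     /* replace own PENDING */
    return CAS_GUARDED;                    /* stale write loses */
}
]]></artwork>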
        <t>Across multiple chunks the protocol makes <strong>no multi-chunk
atomicity or ordering guarantee</strong>.  A reader that reads chunk A
at one offset and chunk B at another <bcp14>MAY</bcp14> observe A's new value
and B's old value simultaneously.  Applications that require
multi-chunk atomicity <bcp14>MUST</bcp14> layer it above this protocol -- for
example, via file-level checksums, application-level generation
fields, or external transaction managers.</t>
        <t><strong>The chunk is the unit of atomicity.</strong>  Two properties follow:</t>
        <ol spacing="normal" type="1"><li>
            <t>Chunk-aligned writes do not interfere.  Two concurrent
writers whose writes cover disjoint chunks -- even writes
that cover adjacent chunks -- never race.  Each write
terminates independently at COMMITTED per the per-chunk
linearizability rule above.</t>
          </li>
          <li>
            <t>Sub-chunk overlapping writes from different writers
produce chunk-resolution-granularity contention.  When two
concurrent writers target overlapping byte ranges within a
single chunk, chunk_guard4 resolves them: one writer's
entire chunk-generation wins and becomes COMMITTED; the
other writer sees NFS4ERR_CHUNK_GUARDED and is expected to
re-read and retry if it wishes to apply its change on top
of the winning generation (see
<xref target="sec-NFS4ERR_CHUNK_GUARDED"/>).  The protocol does NOT
produce byte-level merges of overlapping sub-chunk writes:
the losing writer's bytes are not preserved as a partial
update within the winning generation.</t>
          </li>
        </ol>
        <t>Applications that require byte-level write merging or sub-chunk
ordering guarantees <bcp14>MUST</bcp14> serialize such writes externally, for
example via NFSv4 byte-range locks (<xref target="RFC8881"/>, Section 12).
The chunk size that bounds the atomicity unit for a given file
is the product of ffm_striping_unit_size and the stripe width
W in <xref target="fig-striping-math"/>; applications can query
fattr4_coding_block_size (see <xref target="sec-fattr4_coding_block_size"/>)
to learn the effective chunk size and align their writes
accordingly.</t>
        <t>This choice -- chunk-boundary atomicity rather than stripe- or
block-boundary atomicity -- is load-bearing for the rest of the
consistency story: the chunk_guard4 CAS evaluates at the chunk
level, the PENDING / FINALIZED / COMMITTED state machine is per
chunk, CHUNK_LOCK is per chunk, and repair via CB_CHUNK_REPAIR
operates on chunks.  A different atomicity boundary would
require redefining those primitives, which this revision does
not.</t>
        <dl>
          <dt>Erasure-coded reads:</dt>
          <dd>
            <t>A reader of an erasure-coded file reconstructs the plaintext
from any sufficient subset of k shards of the (k+m)-shard
stripe; the guard values on those shards <bcp14>MUST</bcp14> agree.  Shards
with stale guards are ignored.  This is not a quorum read in
the Paxos sense -- there is no voting on a value; there is
only reconstruction of the single value identified by the
current guard.</t>
          </dd>
          <dt>Rollback invariant:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> retain the prior FINALIZED or COMMITTED
content of a chunk while any successor PENDING chunk exists.
A corollary of this rule is the <strong>lowest-guard-recoverable</strong>
property: as long as at least k data servers in the mirror
set retain the chunk at some generation G or lower, the
payload that was COMMITTED at generation G (or earlier) can
be reconstructed.  This is the correctness basis for
CHUNK_ROLLBACK (see <xref target="sec-CHUNK_ROLLBACK"/>): rollback does not
synthesize data, it simply selects the lowest-generation
chunks whose guards agree across the mirror set and discards
the higher-generation PENDING or FINALIZED chunks that
triggered the rollback.  The protocol never relies on locating
or reconstructing data from outside the mirror set.</t>
          </dd>
          <dt>Visibility of non-committed state:</dt>
          <dd>
            <t>PENDING and FINALIZED chunks <bcp14>MUST NOT</bcp14> be globally visible.
CHUNK_READ returns only COMMITTED content; a CHUNK_READ whose
target chunk is currently PENDING or FINALIZED sees the
predecessor COMMITTED content (or an EMPTY chunk if none
exists), not the in-progress successor.  A writer observing
its own PENDING or FINALIZED chunk <bcp14>MAY</bcp14> receive the in-progress
content on the same stateid that produced it, but no other
stateid -- on the same or a different client -- sees it.
The retention window that makes the prior COMMITTED content
available to CHUNK_READ and to CHUNK_ROLLBACK is itself
bounded; see <xref target="sec-system-model-retention-scope"/> for the
normative scoping rule.</t>
          </dd>
        </dl>
      </section>
      <section anchor="sec-system-model-retention-scope">
        <name>Ownership and Scope of Retained Prior Content</name>
        <t>The rollback invariant in <xref target="sec-system-model-consistency"/>
requires a data server to retain the prior FINALIZED or
COMMITTED content of a chunk while any successor PENDING chunk
exists.  That retained content -- sometimes informally called
the "safe buffer" -- is not global state.  It is scoped to the
stateid that wrote the PENDING successor, and its retention and
visibility are governed by that owning stateid's lease.</t>
        <dl>
          <dt>Owner:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> record, alongside each PENDING chunk,
the owning stateid (the stateid presented on the CHUNK_WRITE
that produced the PENDING).  This is the owning writer's
stateid; it identifies the client and openowner/lockowner
that the data server will release the PENDING to upon
CHUNK_FINALIZE or CHUNK_COMMIT, and that the MDS will treat
as the authoritative owner for purposes of
<xref target="sec-system-model-progress"/>.</t>
          </dd>
          <dt>Visibility:</dt>
          <dd>
            <t>Before transition to COMMITTED, the PENDING content is
visible only on the owning stateid.  A CHUNK_READ presenting
any other stateid (from the same client or a different
client) <bcp14>MUST</bcp14> observe the predecessor COMMITTED or EMPTY
state, not the PENDING successor.  This is the normative
form of the "non-committed data <bcp14>MUST NOT</bcp14> be globally visible"
rule in the Visibility bullet above.</t>
          </dd>
          <dt>Retention window:</dt>
          <dd>
            <t>The data server <bcp14>MUST</bcp14> retain the predecessor COMMITTED (or
FINALIZED) content that the PENDING is superseding for as
long as the owning stateid's lease is valid.  If the owning
stateid's lease expires without the PENDING reaching
COMMITTED, the retention obligation for that PENDING ends
(see <xref target="sec-system-model-progress"/> for the scavenger rule
that drives demotion).  If the PENDING does reach COMMITTED,
the new COMMITTED generation supersedes the prior one under
the standard rollback invariant and its own retention is
governed by any newer PENDING successor.</t>
          </dd>
        </dl>
        <t>The practical effect is that the "safe buffer" for a chunk is
not an unbounded chunk-global state but a per-writer window
bounded by that writer's lease.  The data server always has a
rule for discarding retained prior content -- it is the
owning stateid's lease expiry -- so a chunk cannot accumulate
indefinitely many retained generations even in the presence of
dropped or partitioned writers.</t>
      </section>
      <section anchor="sec-system-model-progress">
        <name>Progress and Termination</name>
        <t>Under the failure model above, the protocol guarantees the
following progress properties:</t>
        <dl>
          <dt>Data-path progress:</dt>
          <dd>
            <t>If all mirrors are reachable and none are failed, a
CHUNK_WRITE followed by CHUNK_FINALIZE followed by
CHUNK_COMMIT completes in O(1) round trips independent of
cluster size.  In particular, there is no consensus round,
no leader election, and no quorum voting on the write
itself.  The three operations <bcp14>MAY</bcp14> be amortized across
compounds: a steady-state writer sending a series of
CHUNK_WRITEs can piggyback the CHUNK_FINALIZE of the previous
write on the compound that carries the next write (for
example, <tt>SEQUENCE + PUTFH + CHUNK_FINALIZE + CHUNK_WRITE</tt>),
reducing the data-path happy case to a single round trip per
CHUNK_WRITE rather than three.  The CHUNK_COMMIT for the
final write in a sequence <bcp14>MAY</bcp14> similarly ride on the CLOSE
compound.  These compound-packing optimizations are
permitted by the normal NFSv4.2 compound rules and require
no protocol extensions.</t>
          </dd>
          <dt>Repair termination:</dt>
          <dd>
            <t>Every CB_CHUNK_REPAIR completes in bounded time.  The client
selected as the repair client either:
</t>
            <ol spacing="normal" type="1"><li>
                <t>returns NFS4_OK for every range in ccra_ranges (repair
succeeded), or</t>
              </li>
              <li>
                <t>returns NFS4ERR_PAYLOAD_LOST for one or more ranges (the
erasure code lost too many shards to reconstruct; the
data is permanently unrecoverable), or</t>
              </li>
              <li>
                <t>fails to respond within the ccra_deadline, in which case
the metadata server <bcp14>MUST</bcp14> re-select under the rules in
<xref target="sec-repair-selection"/> or <bcp14>MUST</bcp14> declare the ranges lost.</t>
              </li>
            </ol>
            <t>NFS4ERR_PAYLOAD_LOST is terminal for the affected ranges.
The protocol makes no further attempt to recover them.</t>
          </dd>
          <dt>Eventual trust-table convergence:</dt>
          <dd>
            <t>After a metadata server restart, each data server's trust
table converges to the metadata server's view within one
metadata-server lease period.  Entries that the metadata
server does not re-issue expire naturally via tsa_expire;
entries that the metadata server does re-issue transition
from pending-revalidation back to active on the next
TRUST_STATEID (see <xref target="sec-tight-coupling-mds-crash"/>).</t>
          </dd>
          <dt>Orphaned PENDING scavenger:</dt>
          <dd>
            <t>A PENDING chunk whose owning stateid (see
<xref target="sec-system-model-retention-scope"/>) has expired without
transition to FINALIZED or COMMITTED is an orphan.  The
metadata server <bcp14>MUST</bcp14> drive demotion of orphaned PENDINGs so
that no chunk remains in a non-terminal state indefinitely:
</t>
            <ol spacing="normal" type="1"><li>
                <t>When an owning stateid's lease expires, the metadata
server identifies every PENDING chunk owned by that
stateid (either from its own bookkeeping or by query
against the data server) and issues the control-plane
operations needed to demote each PENDING.</t>
              </li>
              <li>
                <t>Demotion replaces the PENDING with the predecessor
COMMITTED (or EMPTY) content that the data server has
been retaining under
<xref target="sec-system-model-retention-scope"/>.  The data server
<bcp14>MUST NOT</bcp14> wait for a separate client action before
performing the demotion.</t>
              </li>
              <li>
                <t>Any CHUNK_LOCK held in escrow on behalf of the expired
stateid (see <xref target="sec-chunk_guard_mds"/>) is released after
an MDS-defined grace period.  The grace period exists to
let a recovering client reclaim its lock via the grace /
reclaim path defined in <xref target="RFC8881"/>; on expiry of the
grace period without reclaim, the lock becomes available
for new CHUNK_LOCK_FLAGS_ADOPT acquirers.</t>
              </li>
            </ol>
            <t>The scavenger timeout (the delay between lease expiry and
demotion) is implementation-defined but <bcp14>SHOULD</bcp14> be tied to
the metadata server lease period so that it composes
naturally with existing NFSv4 grace / reclaim semantics.  A
scavenger timeout shorter than the lease risks racing an
in-progress client reclaim; a timeout substantially longer
than the lease extends the retention budget without a
commensurate benefit.</t>
          </dd>
        </dl>
        <t>The protocol does NOT guarantee progress if the metadata server
is unavailable for longer than its lease period -- this is the
standard NFSv4 lease assumption and is inherited unchanged.</t>
      </section>
      <section anchor="sec-system-model-consensus">
        <name>Relation to Classical Consensus</name>
        <t>Classical consensus protocols (Paxos, Raft, Viewstamped
Replication) solve the problem of reaching agreement among
mutually-distrusting replicas in the absence of a trusted
coordinator.  They typically cost two or three round trips per
decision, require a majority of replicas to be live and
reachable for progress, and impose the overhead of leader
election and log replication.</t>
        <t>This protocol is not a consensus protocol and does not attempt
to be.  Its approach instead is:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Designated coordinator.</strong>  The metadata server is the
coordinator for a file.  Clients accept the MDS's authority
for layout grants, stateid registration, repair client
selection, and revocation.  This assumption is the same one
made by <xref target="RFC8434"/> and all pNFS layout types to date.</t>
          </li>
          <li>
            <t><strong>Per-chunk CAS, not per-chunk voting.</strong>  Concurrent writes
on the same chunk serialize via chunk_guard4 as a CAS
primitive (see <xref target="sec-chunk_guard4"/>).  No replica vote is
required; the data server that owns the chunk evaluates the
guard locally and rejects stale writes with
NFS4ERR_CHUNK_GUARDED.</t>
          </li>
          <li>
            <t><strong>Pessimistic locks off the critical path.</strong>  CHUNK_LOCK is
used only during repair, never on the normal write path.
Lock escrow (see <xref target="sec-chunk_guard_mds"/>) preserves the
"exactly one owner" invariant across stateid revocation
without requiring a consensus round to elect the next owner.</t>
          </li>
          <li>
            <t><strong>Erasure-coded reads replace quorum reads.</strong>  A reader
reconstructs from any k of k+m shards with matching guards.
No voting is needed because there is no disagreement to
resolve: the guard identifies the single generation that was
committed.</t>
          </li>
        </ol>
        <t>The result is a data path with O(1) round-trip cost independent
of the number of replicas, and a repair path whose cost is
bounded by the number of affected chunks rather than by the
cluster size.</t>
        <t>Metadata-server high availability is orthogonal.  Deployments
that require a highly-available metadata server <bcp14>MAY</bcp14> replicate
metadata-server state across multiple metadata server instances
using classical consensus (Raft, Paxos, or equivalent).  Such
replication is implementation-defined; from a pNFS client's
perspective a highly-available metadata server looks like a
single metadata server that occasionally resets its session and
triggers grace-period reclaim, and the client's behavior is
already specified by <xref target="RFC8881"/>.  This protocol neither
requires nor precludes such an implementation.</t>
      </section>
      <section anchor="sec-system-model-nongoals">
        <name>Non-Goals</name>
        <t>For clarity, the protocol explicitly does not provide:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Byzantine fault tolerance.</strong>  A data server that
deliberately misreports its state, or a client that
bypasses its own authentication, is outside the trust model.
Deployments requiring Byzantine tolerance <bcp14>MUST</bcp14> add it in a
layer above or below this protocol.</t>
          </li>
          <li>
            <t><strong>Metadata server high availability.</strong>  Single-MDS-per-file
is the protocol model.  MDS HA, if deployed, is implemented
below the wire protocol and transparent to clients.</t>
          </li>
          <li>
            <t><strong>Cross-file atomicity.</strong>  Writes to multiple files are not
atomic at the protocol level.  File-system-level transactions
are not defined.</t>
          </li>
          <li>
            <t><strong>Multi-chunk atomicity within a single file.</strong>  COMMITs on
distinct chunks are independent.  A reader may observe a
partial write across chunks; applications must layer their
own consistency if they need otherwise.</t>
          </li>
          <li>
            <t><strong>Global linearizability across unrelated files.</strong>  Each
file's COMMITTED state is linearizable in isolation; no
total order is defined across files.</t>
          </li>
          <li>
            <t><strong>Authenticated malicious client protection.</strong>  An
authenticated client may write garbage into its own chunks
with a correctly computed CRC32; see
<xref target="sec-security-crc32-scope"/>.  The CRC32 check is a
transport-integrity check, not an adversarial-integrity
check.</t>
          </li>
          <li>
            <t><strong>General-purpose intent primitive.</strong>  Christoph Hellwig
observed at IETF 121 (November 2024) that the intent-based
pattern used here (CHUNK_WRITE -&gt; CHUNK_FINALIZE -&gt;
CHUNK_COMMIT with CHUNK_ROLLBACK as the abort path) has
potential applicability beyond erasure coding -- for
example, as a general multi-target atomic-ish write
primitive.  This document scopes the mechanism to erasure
coding: the on-wire operations carry erasure-coding-specific
semantics (chunk_guard4, mirror-set repair, per-codec
geometry), and generalising the primitive is explicit
future work.  Protocol extensions that reuse the
intent / finalize / commit pattern in other contexts are
not precluded by this document but are not defined by it.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="nfsv42-operations-allowed-to-data-files">
      <name>NFSv4.2 Operations Allowed to Data Files</name>
      <t>In the Flex Files Version 1 Layout Type (<xref target="RFC8435"/>), the data path
between client and data server was NFSv3 (<xref target="RFC1813"/>); the
operations a client sent to a data file were limited to READ,
WRITE, and COMMIT, and the operations the metadata server sent on
its control plane to the data server were limited to GETATTR,
SETATTR, CREATE, and REMOVE.  An NFSv4.2 data server, as used by
the Flex Files Version 2 Layout Type, exposes a much larger
operation set.  This section defines which operations a client <bcp14>MAY</bcp14>
send to a data file, which operations the metadata server <bcp14>MAY</bcp14>
send, and which operations a data server <bcp14>MUST</bcp14> reject.</t>
      <t>The restrictions below apply only to operations directed at a data
file on a data server.  Clients retain the full NFSv4.2 operation
set for files visible through the metadata server, including the
operations prohibited below (RENAME, LINK, CLONE, COPY, ACL-scoped
SETATTR, and so on).  The metadata server <bcp14>MAY</bcp14> internally use
operations on data files that clients <bcp14>MUST NOT</bcp14> send, as part of
its control-plane duties for the file (see
<xref target="sec-system-model-roles"/>).</t>
      <section anchor="sec-ops-mds">
        <name>Control Plane: Metadata Server to Data Server</name>
        <t>When the metadata server acts as a client to a data server, it is
managing the data file on behalf of the metadata file's namespace.
A data server <bcp14>MUST</bcp14> support the following operations on data files
when issued by the metadata server:</t>
        <ul spacing="normal">
          <li>
            <t>SEQUENCE, PUTFH, PUTROOTFH, GETFH (<xref target="RFC8881"/> Sections 18.46,
18.19, 18.21, 18.8): session and filehandle plumbing.</t>
          </li>
          <li>
            <t>LOOKUP (<xref target="RFC8881"/> Section 18.13): runway pool directory
traversal.</t>
          </li>
          <li>
            <t>GETATTR (<xref target="RFC8881"/> Section 18.7): reflected GETATTR after a
write layout is returned, and any other attribute queries the
metadata server needs to reconcile its cached view.</t>
          </li>
          <li>
            <t>SETATTR (<xref target="RFC8881"/> Section 18.30): data file truncate for
MDS-level SETATTR(size) fan-out, synthetic uid/gid rotation
for fencing, and mode-bit initialisation on runway assignment.</t>
          </li>
          <li>
            <t>CREATE (<xref target="RFC8881"/> Section 18.4): runway pool file creation.</t>
          </li>
          <li>
            <t>REMOVE (<xref target="RFC8881"/> Section 18.25): cleanup on MDS file
unlink.</t>
          </li>
          <li>
            <t>OPEN, CLOSE (<xref target="RFC8881"/> Sections 18.16, 18.2): used by the
metadata server when it acts as a client to the data server
for InBand or proxy I/O.</t>
          </li>
          <li>
            <t>EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION,
BIND_CONN_TO_SESSION, DESTROY_CLIENTID (<xref target="RFC8881"/> Sections
18.35, 18.36, 18.37, 18.34, 18.50): control-session
management.  The metadata server sets
EXCHGID4_FLAG_USE_PNFS_MDS in its EXCHANGE_ID.  A data
server that supports the tight-coupling control protocol
(see <xref target="sec-tight-coupling-control-session"/>) identifies the
metadata server's session by EXCHGID4_FLAG_USE_PNFS_MDS and
accepts TRUST_STATEID, REVOKE_STATEID, and
BULK_REVOKE_STATEID on that session.</t>
          </li>
          <li>
            <t>TRUST_STATEID (<xref target="sec-TRUST_STATEID"/>), REVOKE_STATEID
(<xref target="sec-REVOKE_STATEID"/>), BULK_REVOKE_STATEID
(<xref target="sec-BULK_REVOKE_STATEID"/>): the MDS-to-DS tight-coupling
trust-table control operations.</t>
          </li>
        </ul>
        <t>The metadata server <bcp14>MAY</bcp14> also use other NFSv4.2 operations on data
files as implementation-defined control-plane actions (for
example, COPY or CLONE to migrate a data file between data
servers during a data mover operation).  The list above is the
minimum set a Flex Files v2 data server <bcp14>MUST</bcp14> support for the
metadata server's use.</t>
      </section>
      <section anchor="sec-ops-client">
        <name>Data Path: Client to Data Server</name>
        <t>A pNFS client with an active Flex Files v2 layout <bcp14>MUST</bcp14> restrict
the operations it issues against data files to the operations
defined below.  A data server <bcp14>MUST</bcp14> reject any other operation on
a data file with NFS4ERR_NOTSUPP.</t>
        <section anchor="session-and-identity-plumbing">
          <name>Session and Identity Plumbing</name>
          <t>Required for all protection modes:</t>
          <ul spacing="normal">
            <li>
              <t>SEQUENCE, PUTFH, GETFH, PUTROOTFH (<xref target="RFC8881"/> Sections 18.46,
18.19, 18.8, 18.21).</t>
            </li>
            <li>
              <t>EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION,
BIND_CONN_TO_SESSION, DESTROY_CLIENTID (<xref target="RFC8881"/> Sections
18.35, 18.36, 18.37, 18.34, 18.50).</t>
            </li>
            <li>
              <t>RECLAIM_COMPLETE (<xref target="RFC8881"/> Section 18.51).</t>
            </li>
            <li>
              <t>SECINFO, SECINFO_NO_NAME (<xref target="RFC8881"/> Sections 18.29, 18.45):
discovery of acceptable security flavours on the data
server.</t>
            </li>
          </ul>
          <t>These operations are baseline NFSv4.2 session plumbing and are
supported on data files as on any NFSv4.2 file.</t>
        </section>
        <section anchor="getattr-on-a-data-file">
          <name>GETATTR on a Data File</name>
          <t>GETATTR <bcp14>MAY</bcp14> be issued by a client against a data file.  The
primary use case is repair: a repair client selected by
CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>) may need to query the
per-server file size or allocation state when reconstructing a
payload, and the data mover described informally in
<xref target="sec-system-model-roles"/> similarly benefits from attribute
queries on surviving mirrors.  Diagnostic use is also permitted.</t>
          <t>Clients <bcp14>MUST NOT</bcp14> treat GETATTR values returned by a data server as
authoritative for any file attribute (size, timestamps, owner,
mode, ACL, and so on).  The metadata server is the sole authority
for file attributes.  Values returned by a data server reflect the
per-server data file instance only and <bcp14>MAY</bcp14> diverge from the
metadata server's view, particularly during a write layout's
lifetime or during a Data Mover transition.  A client that uses a
data-server GETATTR result to determine the file's visible size
will observe inconsistencies.</t>
        </section>
        <section anchor="setattr-on-a-data-file">
          <name>SETATTR on a Data File</name>
          <t>Clients <bcp14>MUST NOT</bcp14> issue SETATTR against a data file.  A data server
<bcp14>MUST</bcp14> reject a client SETATTR with NFS4ERR_NOTSUPP.</t>
          <t>Attribute changes on data files <bcp14>MUST</bcp14> be reconciled with the
metadata server's view and cannot be applied unilaterally by a
client.  A client that wants to truncate, change the mode, change
ownership, or otherwise modify attributes on a file <bcp14>MUST</bcp14> issue
SETATTR to the metadata server for the file's MDS handle; the
metadata server fans the change out to the data files as a
control-plane operation.</t>
          <t>This rule explicitly covers truncate (SETATTR with size in the
bitmap): a client <bcp14>MUST NOT</bcp14> truncate a data file directly.
Similarly, a client <bcp14>MUST NOT</bcp14> issue DEALLOCATE against a data
file; see the next subsection.</t>
        </section>
        <section anchor="mirrored-data-files-ffv2codingmirrored">
          <name>Mirrored Data Files (FFV2_CODING_MIRRORED)</name>
          <t>For a mirror whose ffm_coding_type_data is FFV2_CODING_MIRRORED
(see <xref target="sec-ffv2-mirror4"/>), client operations on the data file
follow the same pattern as the File Layout Type in <xref target="RFC8881"/>
Section 13.6 and the Flex Files v1 Layout Type in <xref target="RFC8435"/>:</t>
          <t>Required:</t>
          <ul spacing="normal">
            <li>
              <t>READ (<xref target="RFC8881"/> Section 18.22).</t>
            </li>
            <li>
              <t>WRITE (<xref target="RFC8881"/> Section 18.32).</t>
            </li>
            <li>
              <t>COMMIT (<xref target="RFC8881"/> Section 18.3).</t>
            </li>
          </ul>
          <t>Optional (the client <bcp14>MAY</bcp14> send, and the data server <bcp14>MAY</bcp14> support):</t>
          <ul spacing="normal">
            <li>
              <t>READ_PLUS (<xref target="RFC7862"/> Section 15.10): hole-aware reads.</t>
            </li>
            <li>
              <t>SEEK (<xref target="RFC7862"/> Section 15.11): hole and data detection.</t>
            </li>
            <li>
              <t>ALLOCATE (<xref target="RFC7862"/> Section 15.1): space reservation hint.</t>
            </li>
          </ul>
          <t>The client <bcp14>MUST NOT</bcp14> send:</t>
          <ul spacing="normal">
            <li>
              <t>DEALLOCATE (<xref target="RFC7862"/> Section 15.4): hole punching is a
metadata-server responsibility; the client issues DEALLOCATE
on the metadata-server filehandle, and the metadata server
fans out to the data servers as a control-plane operation.</t>
            </li>
          </ul>
        </section>
        <section anchor="erasure-coded-data-files-ffv2encoding">
          <name>Erasure-Coded Data Files (FFV2_ENCODING_*)</name>
          <t>For a mirror whose ffm_coding_type_data is any of the
erasure-coding types defined in this document
(FFV2_ENCODING_MOJETTE_SYSTEMATIC,
FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC, or
FFV2_ENCODING_RS_VANDERMONDE), client operations use the CHUNK_*
operations rather than READ / WRITE / COMMIT.  A sketch of the
resulting dispatch appears at the end of this subsection.</t>
          <t>Required for all erasure-coded clients:</t>
          <ul spacing="normal">
            <li>
              <t>CHUNK_WRITE (<xref target="sec-CHUNK_WRITE"/>).</t>
            </li>
            <li>
              <t>CHUNK_READ (<xref target="sec-CHUNK_READ"/>).</t>
            </li>
            <li>
              <t>CHUNK_FINALIZE (<xref target="sec-CHUNK_FINALIZE"/>).</t>
            </li>
            <li>
              <t>CHUNK_COMMIT (<xref target="sec-CHUNK_COMMIT"/>).</t>
            </li>
            <li>
              <t>CHUNK_HEADER_READ (<xref target="sec-CHUNK_HEADER_READ"/>).</t>
            </li>
            <li>
              <t>CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) and CHUNK_UNLOCK
(<xref target="sec-CHUNK_UNLOCK"/>).</t>
            </li>
            <li>
              <t>CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).</t>
            </li>
          </ul>
          <t>Required for clients that participate in repair:</t>
          <ul spacing="normal">
            <li>
              <t>CHUNK_ERROR (<xref target="sec-CHUNK_ERROR"/>).</t>
            </li>
            <li>
              <t>CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>).</t>
            </li>
            <li>
              <t>CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>).</t>
            </li>
          </ul>
          <t>Clients <bcp14>MUST NOT</bcp14> send:</t>
          <ul spacing="normal">
            <li>
              <t>READ, WRITE, COMMIT against an erasure-coded data file.  A
data server <bcp14>MUST</bcp14> reject these with NFS4ERR_NOTSUPP and <bcp14>MAY</bcp14>
log the client for operator attention; this case is almost
always a client bug in which the client did not inspect the
mirror's ffm_coding_type_data before issuing I/O.</t>
            </li>
            <li>
              <t>READ_PLUS, SEEK, ALLOCATE, DEALLOCATE against an erasure-
coded data file.  Chunk-level allocation is a
metadata-server responsibility.</t>
            </li>
          </ul>
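          <t>The following non-normative C sketch shows the dispatch
described above; the numeric enum values and the issue_* entry
points are illustrative assumptions, not protocol definitions:</t>
          <figure anchor="fig-ex-coding-dispatch">
            <name>Example: coding-type dispatch (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdint.h>

/* Coding types named in this document; values are illustrative. */
enum ffv2_coding_type4 {
    FFV2_CODING_MIRRORED                 = 0,
    FFV2_ENCODING_MOJETTE_SYSTEMATIC     = 1,
    FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC = 2,
    FFV2_ENCODING_RS_VANDERMONDE         = 3
};

/* Hypothetical I/O paths supplied by a client implementation. */
int issue_read_write_commit(void *mirror);  /* READ/WRITE/COMMIT  */
int issue_chunk_ops(void *mirror);          /* CHUNK_* operations */

/*
 * A client inspects ffm_coding_type_data before issuing I/O;
 * sending READ/WRITE/COMMIT to an erasure-coded data file draws
 * NFS4ERR_NOTSUPP.
 */
int dispatch_io(uint32_t ffm_coding_type_data, void *mirror)
{
    switch (ffm_coding_type_data) {
    case FFV2_CODING_MIRRORED:
        return issue_read_write_commit(mirror);
    case FFV2_ENCODING_MOJETTE_SYSTEMATIC:
    case FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC:
    case FFV2_ENCODING_RS_VANDERMONDE:
        return issue_chunk_ops(mirror);
    default:
        return -1;  /* unknown coding type: do not guess */
    }
}
]]></sourcecode>
          </figure>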
        </section>
        <section anchor="operations-that-must-not-be-sent-to-a-data-file">
          <name>Operations That MUST NOT Be Sent to a Data File</name>
          <t>Clients <bcp14>MUST NOT</bcp14> send the following operations to a data server
on a data file, regardless of protection mode.  A data server
<bcp14>MUST</bcp14> return NFS4ERR_NOTSUPP:</t>
          <ul spacing="normal">
            <li>
              <t>OPEN, CLOSE, OPEN_DOWNGRADE (<xref target="RFC8881"/> Sections
18.16, 18.2, 18.18); OPEN_CONFIRM, which exists only in NFSv4.0
(<xref target="RFC7530"/>), is likewise excluded.  Opens occur on the
metadata server; the stateid obtained there is used on the
data path.</t>
            </li>
            <li>
              <t>LOCK, LOCKT, LOCKU (<xref target="RFC8881"/> Sections
18.10, 18.11, 18.12) and the NFSv4.0-only RELEASE_LOCKOWNER
(<xref target="RFC7530"/>).  Byte-range locks on data files
are not supported; erasure-coded files use CHUNK_LOCK, and
mirrored files rely on metadata-server coordination.</t>
            </li>
            <li>
              <t>DELEGPURGE, DELEGRETURN, WANT_DELEGATION (<xref target="RFC8881"/> Sections
18.5, 18.6, and 18.49).  Delegations are
issued by the metadata server.</t>
            </li>
            <li>
              <t>Any operation whose purpose is to manipulate the file's
namespace: RENAME, LINK, SYMLINK, CREATE (in its client-facing
file-creation use, as opposed to the metadata server's runway
pool creation), and REMOVE.  Namespace operations belong on the
metadata server.</t>
            </li>
            <li>
              <t>Any ACL-scoped SETATTR or GETATTR bit (FATTR4_ACL,
FATTR4_DACL, FATTR4_SACL).  Access control on data files is
delegated to the metadata server.</t>
            </li>
            <li>
              <t>CLONE, COPY, COPY_NOTIFY, OFFLOAD_CANCEL, OFFLOAD_STATUS
(<xref target="RFC7862"/> Sections 15.13, 15.2, 15.3, 15.8, 15.9).
File-level data migration is a metadata-server responsibility.</t>
            </li>
            <li>
              <t>LAYOUTGET, LAYOUTCOMMIT, LAYOUTRETURN, LAYOUTSTATS,
LAYOUTERROR, GETDEVICEINFO, GETDEVICELIST (<xref target="RFC8881"/>
Sections 18.43, 18.42, 18.44, <xref target="RFC7862"/> Sections 15.7,
15.6, <xref target="RFC8881"/> Sections 18.40, 18.41).  Layout operations
belong on the metadata server.</t>
            </li>
            <li>
              <t>TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID
(<xref target="sec-TRUST_STATEID"/>, <xref target="sec-REVOKE_STATEID"/>,
<xref target="sec-BULK_REVOKE_STATEID"/>).  These are MDS-to-DS
control-plane operations; a data server rejects them with
NFS4ERR_PERM when received on a client session (see
<xref target="sec-tight-coupling-control-session"/>).</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="callback-path-data-server-to-client">
        <name>Callback Path: Data Server to Client</name>
        <t>A data server does not call back directly to pNFS clients.
Recall notifications and repair coordination flow through the
metadata server's backchannel session with the client.  The
callbacks a client will observe that affect its data files are:</t>
        <ul spacing="normal">
          <li>
            <t>CB_LAYOUTRECALL (<xref target="RFC8881"/> Section 20.3).</t>
          </li>
          <li>
            <t>CB_NOTIFY_DEVICEID (<xref target="RFC8881"/> Section 20.12).</t>
          </li>
          <li>
            <t>CB_RECALL_ANY (<xref target="RFC8881"/> Section 20.6).</t>
          </li>
          <li>
            <t>CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>).</t>
          </li>
        </ul>
        <t>A data server influences these callbacks only indirectly, via
LAYOUTERROR reports the client issues to the metadata server or
by returning error codes that prompt the client to report.  A
data server <bcp14>MUST NOT</bcp14> attempt to send CB_* operations to clients
directly.</t>
      </section>
      <section anchor="summary-table">
        <name>Summary Table</name>
        <t><xref target="tbl-ops-allowed"/> lists each relevant NFSv4.2 operation and its
applicability on a data file in each direction.  "required" means
the data server <bcp14>MUST</bcp14> support the operation when received on the
indicated path; "OPT" means the data server <bcp14>MAY</bcp14> support it and the
client <bcp14>MUST</bcp14> tolerate the absence of support; "<bcp14>MUST NOT</bcp14>" means the
client <bcp14>MUST NOT</bcp14> send the operation and the data server <bcp14>MUST</bcp14> reject
it with NFS4ERR_NOTSUPP; "<bcp14>MAY</bcp14>" means the metadata server <bcp14>MAY</bcp14> use
the operation as an implementation-defined control-plane action.</t>
        <table anchor="tbl-ops-allowed">
          <name>NFSv4.2 operations allowed on data files</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Client -&gt; DS</th>
              <th align="left">MDS -&gt; DS</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">SEQUENCE, PUTFH, GETFH, PUTROOTFH</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">EXCHANGE_ID, CREATE_SESSION, DESTROY_SESSION, BIND_CONN_TO_SESSION, DESTROY_CLIENTID</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">RECLAIM_COMPLETE</td>
              <td align="left">required</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">SECINFO, SECINFO_NO_NAME</td>
              <td align="left">required</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">GETATTR</td>
              <td align="left">OPT (non-authoritative)</td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">SETATTR</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">LOOKUP, CREATE, REMOVE</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required</td>
            </tr>
            <tr>
              <td align="left">READ, WRITE, COMMIT</td>
              <td align="left">required (mirrored); <bcp14>MUST NOT</bcp14> (erasure-coded)</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">READ_PLUS, SEEK, ALLOCATE</td>
              <td align="left">OPT (mirrored); <bcp14>MUST NOT</bcp14> (erasure-coded)</td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">DEALLOCATE</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE, CHUNK_READ, CHUNK_FINALIZE, CHUNK_COMMIT, CHUNK_HEADER_READ, CHUNK_LOCK, CHUNK_UNLOCK, CHUNK_ROLLBACK</td>
              <td align="left">required (erasure-coded); <bcp14>MUST NOT</bcp14> (mirrored)</td>
              <td align="left">not used</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ERROR, CHUNK_REPAIRED, CHUNK_WRITE_REPAIR</td>
              <td align="left">required (erasure-coded repair clients); <bcp14>MUST NOT</bcp14> (mirrored)</td>
              <td align="left">not used</td>
            </tr>
            <tr>
              <td align="left">OPEN, CLOSE, OPEN_DOWNGRADE, OPEN_CONFIRM</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">OPT (proxy I/O)</td>
            </tr>
            <tr>
              <td align="left">LOCK, LOCKU, LOCKT, RELEASE_LOCKOWNER</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">DELEGPURGE, DELEGRETURN, WANT_DELEGATION</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">RENAME, LINK, SYMLINK</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">CLONE, COPY, COPY_NOTIFY, OFFLOAD_CANCEL, OFFLOAD_STATUS</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14> (data migration)</td>
            </tr>
            <tr>
              <td align="left">LAYOUTGET, LAYOUTCOMMIT, LAYOUTRETURN, LAYOUTSTATS, LAYOUTERROR, GETDEVICEINFO, GETDEVICELIST</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
            </tr>
            <tr>
              <td align="left">ACL-scoped GETATTR/SETATTR bits</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">
                <bcp14>MAY</bcp14></td>
            </tr>
            <tr>
              <td align="left">TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID</td>
              <td align="left">
                <bcp14>MUST NOT</bcp14></td>
              <td align="left">required (tight coupling)</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="sec-layouthint">
      <name>Flexible File Layout Type Return</name>
      <t>layoutreturn_file4 is used in the LAYOUTRETURN operation to convey
layout-type-specific information to the server.  It is defined in
Section 18.44.1 of <xref target="RFC8881"/> (also shown in <xref target="fig-LAYOUTRETURN"/>).</t>
      <figure anchor="fig-LAYOUTRETURN">
        <name>Layout Return XDR</name>
        <sourcecode type="xdr"><![CDATA[
      /* Constants used for LAYOUTRETURN and CB_LAYOUTRECALL */
      const LAYOUT4_RET_REC_FILE      = 1;
      const LAYOUT4_RET_REC_FSID      = 2;
      const LAYOUT4_RET_REC_ALL       = 3;

      enum layoutreturn_type4 {
              LAYOUTRETURN4_FILE = LAYOUT4_RET_REC_FILE,
              LAYOUTRETURN4_FSID = LAYOUT4_RET_REC_FSID,
              LAYOUTRETURN4_ALL  = LAYOUT4_RET_REC_ALL
      };

   struct layoutreturn_file4 {
           offset4         lrf_offset;
           length4         lrf_length;
           stateid4        lrf_stateid;
           /* layouttype4 specific data */
           opaque          lrf_body<>;
   };

   union layoutreturn4 switch(layoutreturn_type4 lr_returntype) {
           case LAYOUTRETURN4_FILE:
                   layoutreturn_file4      lr_layout;
           default:
                   void;
   };

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           bool                    lora_reclaim;
           layouttype4             lora_layout_type;
           layoutiomode4           lora_iomode;
           layoutreturn4           lora_layoutreturn;
   };
]]></sourcecode>
      </figure>
      <t>If the lora_layout_type layout type is LAYOUT4_FLEX_FILES and the
lr_returntype is LAYOUTRETURN4_FILE, then the lrf_body opaque value
is defined by ffv2_layoutreturn4 (see <xref target="sec-ff_layoutreturn4"/>).  This
allows the client to report I/O error information or layout usage
statistics back to the metadata server as defined below.  Note that
while the data structures are built on concepts introduced in
NFSv4.2, the effective discriminated union (lora_layout_type combined
with ffv2_layoutreturn4) allows an NFSv4.1 metadata server to
utilize the data.</t>
      <section anchor="sec-io-error">
        <name>I/O Error Reporting</name>
        <section anchor="sec-ff_ioerr4">
          <name>ff_ioerr4</name>
          <figure anchor="fig-ff_ioerr4">
            <name>ff_ioerr4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_ioerr4 {
   ///         offset4        ffie_offset;
   ///         length4        ffie_length;
   ///         stateid4       ffie_stateid;
   ///         device_error4  ffie_errors<>;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Recall that <xref target="RFC7862"/> defines device_error4 as in <xref target="fig-device_error4"/>:</t>
          <figure anchor="fig-device_error4">
            <name>device_error4</name>
            <sourcecode type="xdr"><![CDATA[
   struct device_error4 {
           deviceid4       de_deviceid;
           nfsstat4        de_status;
           nfs_opnum4      de_opnum;
   };
]]></sourcecode>
          </figure>
          <t>The ffv2_ioerr4 structure is used to return error indications for
data files that generated errors during data transfers.  These are
hints to the metadata server that there are problems with that file.
For each error, ffie_errors.de_deviceid, ffie_offset, and ffie_length
represent the storage device and byte range within the file in which
the error occurred; ffie_errors represents the operation and type
of error.  The use of device_error4 is described in Section 15.6
of <xref target="RFC7862"/>.</t>
          <t>Even though the storage device might be accessed via NFSv3 and
reports back NFSv3 errors to the client, the client is responsible
for mapping these to appropriate NFSv4 status codes as de_status.
Likewise, the NFSv3 operations need to be mapped to equivalent NFSv4
operations.</t>
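          <t>A non-normative C sketch of such a status mapping follows; the
subset of codes shown (those with direct NFSv4 equivalents) is
illustrative, and a complete client would cover the full NFSv3
error set:</t>
          <figure anchor="fig-ex-v3-to-v4-status">
            <name>Example: NFSv3-to-NFSv4 status mapping (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdint.h>

enum { NFS4_OK = 0,        NFS4ERR_PERM = 1,    NFS4ERR_NOENT = 2,
       NFS4ERR_IO = 5,     NFS4ERR_ACCESS = 13, NFS4ERR_NOSPC = 28,
       NFS4ERR_STALE = 70, NFS4ERR_SERVERFAULT = 10006 };

/* Map an NFSv3 status to a de_status value.  NFSv3 reuses the
 * POSIX errno numbering for these cases, so the translation for
 * them is one-to-one. */
uint32_t map_nfs3_to_nfs4_status(uint32_t nfs3_status)
{
    switch (nfs3_status) {
    case 0:  return NFS4_OK;         /* NFS3_OK       */
    case 1:  return NFS4ERR_PERM;    /* NFS3ERR_PERM  */
    case 2:  return NFS4ERR_NOENT;   /* NFS3ERR_NOENT */
    case 5:  return NFS4ERR_IO;      /* NFS3ERR_IO    */
    case 13: return NFS4ERR_ACCESS;  /* NFS3ERR_ACCES */
    case 28: return NFS4ERR_NOSPC;   /* NFS3ERR_NOSPC */
    case 70: return NFS4ERR_STALE;   /* NFS3ERR_STALE */
    default: return NFS4ERR_SERVERFAULT;  /* conservative fallback */
    }
}
]]></sourcecode>
          </figure>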
        </section>
      </section>
      <section anchor="sec-layout-stats">
        <name>Layout Usage Statistics</name>
        <section anchor="ffiolatency4">
          <name>ff_io_latency4</name>
          <figure anchor="fig-ff_io_latency4">
            <name>ff_io_latency4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_io_latency4 {
   ///         uint64_t       ffil_ops_requested;
   ///         uint64_t       ffil_bytes_requested;
   ///         uint64_t       ffil_ops_completed;
   ///         uint64_t       ffil_bytes_completed;
   ///         uint64_t       ffil_bytes_not_delivered;
   ///         nfstime4       ffil_total_busy_time;
   ///         nfstime4       ffil_aggregate_completion_time;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>Both operation counts and bytes transferred are kept in
ffv2_io_latency4 (see <xref target="fig-ff_io_latency4"/>).  As seen in ffv2_layoutupdate4
(see <xref target="sec-ff_layoutupdate4"/>), READ and WRITE operations are
aggregated separately.  READ operations are aggregated into
ffl_read, while both WRITE and COMMIT operations are aggregated
into ffl_write.  "Requested" counters track what the
client is attempting to do, and "completed" counters track what was
done.  There is no requirement that the client report only completed
results that have matching requested results in the reported
period.</t>
          <t>ffil_bytes_not_delivered is used to track the aggregate number of
bytes requested but not fulfilled due to error conditions.
ffil_total_busy_time is the aggregate time spent with outstanding
RPC calls. ffil_aggregate_completion_time is the sum of all round-trip
times for completed RPC calls.</t>
          <t>In Section 3.3.1 of <xref target="RFC8881"/>, nfstime4 is defined as the
number of seconds and nanoseconds since midnight or zero hour January
1, 1970 Coordinated Universal Time (UTC).  The use of nfstime4 in
ffv2_io_latency4 is to store the time since the start of the first I/O
from the client after receiving the layout.  In other words, these
values are to be decoded as durations and not as dates and times.</t>
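          <t>As a non-normative illustration, a client might derive these
durations from a monotonic clock anchored at its first I/O, as in
the following C sketch (elapsed_since and its argument are
illustrative names):</t>
          <figure anchor="fig-ex-nfstime4-duration">
            <name>Example: nfstime4 as a duration (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdint.h>
#include <time.h>

/* nfstime4 per Section 3.3.1 of RFC 8881. */
struct nfstime4 { int64_t seconds; uint32_t nseconds; };

/*
 * The ffil_* times are durations measured from the first I/O
 * issued after the layout was received, so they are derived from
 * a monotonic clock, never from wall-clock time.
 */
struct nfstime4 elapsed_since(const struct timespec *first_io)
{
    struct timespec now;
    struct nfstime4 d;

    clock_gettime(CLOCK_MONOTONIC, &now);
    d.seconds = now.tv_sec - first_io->tv_sec;
    if (now.tv_nsec >= first_io->tv_nsec) {
        d.nseconds = (uint32_t)(now.tv_nsec - first_io->tv_nsec);
    } else {
        d.seconds -= 1;
        d.nseconds = (uint32_t)(1000000000L + now.tv_nsec
                                - first_io->tv_nsec);
    }
    return d;
}
]]></sourcecode>
          </figure>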
          <t>Note that LAYOUTSTATS are cumulative, i.e., not reset each time the
operation is sent.  If two LAYOUTSTATS operations for the same file
and layout stateid originate from the same NFS client and are
processed at the same time by the metadata server, then the one
containing the larger values contains the most recent time series
data.</t>
        </section>
        <section anchor="sec-ff_layoutupdate4">
          <name>ff_layoutupdate4</name>
          <figure anchor="fig-ff_layoutupdate4">
            <name>ff_layoutupdate4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutupdate4 {
   ///         netaddr4         ffl_addr;
   ///         nfs_fh4          ffl_fhandle;
   ///         ffv2_io_latency4 ffl_read;
   ///         ffv2_io_latency4 ffl_write;
   ///         nfstime4         ffl_duration;
   ///         bool             ffl_local;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t>ffl_addr differentiates which network address the client is connected
to on the storage device.  In the case of multipathing, ffl_fhandle
indicates which read-only copy was selected. ffl_read and ffl_write
convey the latencies for both READ and WRITE operations, respectively.
ffl_duration is used to indicate the time period over which the
statistics were collected.  If true, ffl_local indicates that the
I/O was serviced by the client's cache.  This flag allows the client
to inform the metadata server about "hot" access to a file it would
not normally be allowed to report on.</t>
        </section>
        <section anchor="ffiostats4">
          <name>ff_iostats4</name>
          <figure anchor="fig-ff_iostats4">
            <name>ff_iostats4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_iostats4 {
   ///         offset4            ffis_offset;
   ///         length4            ffis_length;
   ///         stateid4           ffis_stateid;
   ///         io_info4           ffis_read;
   ///         io_info4           ffis_write;
   ///         deviceid4          ffis_deviceid;
   ///         ffv2_layoutupdate4 ffis_layoutupdate;
   /// };
   ///
]]></sourcecode>
          </figure>
          <t><xref target="RFC7862"/> defines io_info4 as in <xref target="fig-ff_iostats4"/>.</t>
          <figure anchor="fig-io_info4">
            <name>io_info4</name>
            <sourcecode type="xdr"><![CDATA[
   struct io_info4 {
           uint64_t        ii_count;
           uint64_t        ii_bytes;
   };
]]></sourcecode>
          </figure>
          <t>With pNFS, data transfers are performed directly between the pNFS
client and the storage devices.  Therefore, the metadata server has
no direct knowledge of the I/O operations being done and thus cannot
create on its own statistical information about client I/O to
optimize the data storage location.  ffv2_iostats4 <bcp14>MAY</bcp14> be used by the
client to report I/O statistics back to the metadata server upon
returning the layout.</t>
          <t>Since it is not feasible for the client to report every I/O that
used the layout, the client <bcp14>MAY</bcp14> identify "hot" byte ranges for which
to report I/O statistics.  The definition and/or configuration
mechanism of what is considered "hot" and the size of the reported
byte range are out of the scope of this document.  Client
implementations are encouraged to provide reasonable default values
and an optional run-time management interface to control these
parameters.  For example, a client can define the default byte-range
resolution to be 1 MB in size and the thresholds for reporting to
be 1 MB/second or 10 I/O operations per second.</t>
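          <t>A non-normative C sketch of such a threshold check, using the
example defaults above; the structure and constant names are
illustrative only:</t>
          <figure anchor="fig-ex-hot-range">
            <name>Example: "hot" byte-range test (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdbool.h>
#include <stdint.h>

#define HOT_RANGE_SIZE    (1024 * 1024)  /* 1 MB resolution       */
#define HOT_BYTES_PER_SEC (1024 * 1024)  /* 1 MB/second threshold */
#define HOT_OPS_PER_SEC   10             /* 10 I/O ops per second */

/* Per-range counters accumulated over a sampling window. */
struct range_sample {
    uint64_t ops;         /* READ + WRITE operations in the window */
    uint64_t bytes;       /* bytes transferred in the window       */
    uint64_t window_sec;  /* window length in seconds              */
};

/* Report the range if it sustains either threshold. */
bool range_is_hot(const struct range_sample *s)
{
    if (s->window_sec == 0)
        return false;
    return s->bytes / s->window_sec >= HOT_BYTES_PER_SEC ||
           s->ops / s->window_sec >= HOT_OPS_PER_SEC;
}
]]></sourcecode>
          </figure>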
          <t>For each byte range, ffis_offset and ffis_length represent the
starting offset of the range and the range length in bytes.
ffis_read.ii_count, ffis_read.ii_bytes, ffis_write.ii_count, and
ffis_write.ii_bytes represent the number of contiguous READ and
WRITE I/Os and the respective aggregate number of bytes transferred
within the reported byte range.</t>
          <t>The combination of ffis_deviceid and ffl_addr uniquely identifies
both the storage path and the network route to it.  Finally,
ffl_fhandle allows the metadata server to differentiate between
multiple read-only copies of the file on the same storage device.</t>
        </section>
      </section>
      <section anchor="sec-ff_layoutreturn4">
        <name>ff_layoutreturn4</name>
        <figure anchor="fig-ff_layoutreturn4">
          <name>ff_layoutreturn4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct ffv2_layoutreturn4 {
   ///         ffv2_ioerr4     fflr_ioerr_report<>;
   ///         ffv2_iostats4   fflr_iostats_report<>;
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>When data file I/O operations fail, fflr_ioerr_report&lt;&gt; is used to
report these errors to the metadata server as an array of elements
of type ffv2_ioerr4.  Each element in the array represents an error
that occurred on the data file identified by ffie_errors.de_deviceid.
If no errors are to be reported, the size of the fflr_ioerr_report&lt;&gt;
array is set to zero.  The client <bcp14>MAY</bcp14> also use fflr_iostats_report&lt;&gt;
to report a list of I/O statistics as an array of elements of type
ffv2_iostats4.  Each element in the array represents statistics for
a particular byte range.  Byte ranges are not guaranteed to be
disjoint and <bcp14>MAY</bcp14> repeat or intersect.</t>
      </section>
    </section>
    <section anchor="sec-LAYOUTERROR">
      <name>Flexible File Layout Type LAYOUTERROR</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send error
information to the metadata server (see <xref target="sec-io-error"/>), it <bcp14>MAY</bcp14>
use LAYOUTERROR (see Section 15.6 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTERROR4args is treated the same as ffv2_ioerr4.</t>
    </section>
    <section anchor="flexible-file-layout-type-layoutstats">
      <name>Flexible File Layout Type LAYOUTSTATS</name>
      <t>If the client is using NFSv4.2 to communicate with the metadata
server, then instead of waiting for a LAYOUTRETURN to send I/O
statistics to the metadata server (see <xref target="sec-layout-stats"/>), it
<bcp14>MAY</bcp14> use LAYOUTSTATS (see Section 15.7 of <xref target="RFC7862"/>) to communicate
that information.  For the flexible file layout type, this means
that LAYOUTSTATS4args.lsa_layoutupdate is overloaded with the same
contents as in ffis_layoutupdate.</t>
    </section>
    <section anchor="flexible-file-layout-type-creation-hint">
      <name>Flexible File Layout Type Creation Hint</name>
      <t>The layouthint4 type is defined in <xref target="RFC8881"/> as shown in
<xref target="fig-layouthint4-v1"/>.</t>
      <figure anchor="fig-layouthint4-v1">
        <name>layouthint4 v1</name>
        <sourcecode type="xdr"><![CDATA[
   struct layouthint4 {
       layouttype4        loh_type;
       opaque             loh_body<>;
   };
]]></sourcecode>
      </figure>
      <t>The layouthint4 structure is used by the client to pass a hint about
the type of layout it would like created for a particular file.  If
the loh_type layout type is LAYOUT4_FLEX_FILES, then the loh_body
opaque value is defined by the ff_layouthint4 type.</t>
    </section>
    <section anchor="fflayouthint4">
      <name>ff_layouthint4</name>
      <figure anchor="fig-ff_layouthint4-v2">
        <name>ff_layouthint4 (v1 compatibility)</name>
        <sourcecode type="xdr"><![CDATA[
   union ff_mirrors_hint switch (bool ffmc_valid) {
       case TRUE:
           uint32_t    ffmc_mirrors;
       case FALSE:
           void;
   };

   struct ff_layouthint4 {
       ff_mirrors_hint    fflh_mirrors_hint;
   };
]]></sourcecode>
      </figure>
      <t>The ff_layouthint4 is retained for backwards compatibility with
Flex Files v1 layouts.  For Flex Files v2 layouts, clients
<bcp14>SHOULD</bcp14> use ffv2_layouthint4 (<xref target="fig-ffv2_layouthint4"/>) instead,
which provides coding type selection and data protection geometry
hints via ffv2_data_protection4 (<xref target="fig-ffv2_data_protection4"/>).</t>
    </section>
    <section anchor="recalling-a-layout">
      <name>Recalling a Layout</name>
      <t>While Section 12.5.5 of <xref target="RFC8881"/> discusses reasons independent
of layout type for recalling a layout, the flexible file layout
type metadata server should recall outstanding layouts in the
following cases:</t>
      <ul spacing="normal">
        <li>
          <t>When the file's security policy changes, i.e., ACLs or permission
mode bits are set.</t>
        </li>
        <li>
          <t>When the file's layout changes, rendering outstanding layouts
invalid.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the need to enforce
locking constraints.</t>
        </li>
        <li>
          <t>When existing layouts are inconsistent with the requirements
regarding resilvering as described in <xref target="sec-mds-resilvering"/>.</t>
        </li>
      </ul>
      <section anchor="cbrecallany">
        <name>CB_RECALL_ANY</name>
        <t>The metadata server can use the CB_RECALL_ANY callback operation
to notify the client to return some or all of its layouts.  Section
22.3 of <xref target="RFC8881"/> defines the allowed types of the "NFSv4 Recallable
Object Types Registry".</t>
        <figure anchor="fig-new-rca4">
          <name>RCA4 masks for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MIN     = 20;
   /// const RCA4_TYPE_MASK_FF2_LAYOUT_MAX     = 21;
   ///
]]></sourcecode>
        </figure>
        <figure anchor="fig-CB_RECALL_ANY4args">
          <name>CB_RECALL_ANY4args XDR</name>
          <sourcecode type="xdr"><![CDATA[
   struct  CB_RECALL_ANY4args      {
       uint32_t        craa_layouts_to_keep;
       bitmap4         craa_type_mask;
   };
]]></sourcecode>
        </figure>
        <t>Typically, CB_RECALL_ANY will be used to recall client state when
the server needs to reclaim resources.  The craa_type_mask bitmap
specifies the type of resources that are recalled, and the
craa_layouts_to_keep value specifies how many of the recalled
flexible file layouts the client is allowed to keep.  The mask flags
for the flexible file layout type are defined as in <xref target="fig-mask-flags"/>.</t>
        <figure anchor="fig-mask-flags">
          <name>Recall Mask Flags for v2</name>
          <sourcecode type="xdr"><![CDATA[
   /// enum ffv2_cb_recall_any_mask {
   ///     PNFS_FF_RCA4_TYPE_MASK_READ = 20,
   ///     PNFS_FF_RCA4_TYPE_MASK_RW   = 21
   /// };
   ///
]]></sourcecode>
        </figure>
        <t>The flags represent the iomode of the recalled layouts.  In response,
the client <bcp14>SHOULD</bcp14> return layouts of the recalled iomode that it
needs the least, keeping at most craa_layouts_to_keep flexible file
layouts.</t>
        <t>The PNFS_FF_RCA4_TYPE_MASK_READ flag notifies the client to return
layouts of iomode LAYOUTIOMODE4_READ.  Similarly, the
PNFS_FF_RCA4_TYPE_MASK_RW flag notifies the client to return layouts
of iomode LAYOUTIOMODE4_RW.  When both mask flags are set, the
client is notified to return layouts of either iomode.</t>
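        <t>A non-normative C sketch of a client's response policy follows,
assuming layouts are tracked most recently used first; the bitmap
handling is simplified to a single 32-bit word, and all structure
and helper names are illustrative:</t>
        <figure anchor="fig-ex-cb-recall-any">
          <name>Example: CB_RECALL_ANY response policy (non-normative)</name>
          <sourcecode type="c"><![CDATA[
#include <stdbool.h>
#include <stdint.h>

#define PNFS_FF_RCA4_TYPE_MASK_READ 20
#define PNFS_FF_RCA4_TYPE_MASK_RW   21

/* Hypothetical per-layout record kept by a client. */
struct ff_layout {
    bool is_rw;  /* LAYOUTIOMODE4_RW vs. LAYOUTIOMODE4_READ */
};

void return_layout(struct ff_layout *l);  /* client-supplied */

/*
 * Keep the craa_layouts_to_keep most recently used layouts whose
 * iomode is covered by the mask and return the rest, i.e., return
 * the covered layouts the client needs the least.  "layouts" is
 * ordered most recently used first.
 */
void handle_cb_recall_any(struct ff_layout **layouts, uint32_t n,
                          uint32_t craa_type_mask,
                          uint32_t craa_layouts_to_keep)
{
    bool read_mask = craa_type_mask & (1u << PNFS_FF_RCA4_TYPE_MASK_READ);
    bool rw_mask   = craa_type_mask & (1u << PNFS_FF_RCA4_TYPE_MASK_RW);
    uint32_t kept = 0;

    for (uint32_t i = 0; i < n; i++) {
        bool covered = layouts[i]->is_rw ? rw_mask : read_mask;
        if (!covered)
            continue;               /* iomode not being recalled */
        if (kept < craa_layouts_to_keep) {
            kept++;                 /* retain a most-recent layout */
            continue;
        }
        return_layout(layouts[i]);  /* return the rest */
    }
}
]]></sourcecode>
        </figure>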
      </section>
    </section>
    <section anchor="client-fencing">
      <name>Client Fencing</name>
      <t>In cases where clients are uncommunicative and their lease has
expired or when clients fail to return recalled layouts within a
lease period, the server <bcp14>MAY</bcp14> revoke client layouts and reassign
these resources to other clients (see Section 12.5.5 of <xref target="RFC8881"/>).
To avoid data corruption, the metadata server <bcp14>MUST</bcp14> fence off the
revoked clients from the respective data files as described in
<xref target="sec-Fencing-Clients"/>.</t>
    </section>
    <section anchor="new-nfsv42-error-values">
      <name>New NFSv4.2 Error Values</name>
      <figure anchor="fig-errors-xdr">
        <name>Errors XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* Erasure Coding error constants; added to nfsstat4 enum */
   ///
   /// const NFS4ERR_CODING_NOT_SUPPORTED   = 10097;
   /// const NFS4ERR_PAYLOAD_NOT_CONSISTENT = 10098;
   /// const NFS4ERR_CHUNK_LOCKED           = 10099;
   /// const NFS4ERR_CHUNK_GUARDED          = 10100;
   /// const NFS4ERR_PAYLOAD_LOST           = 10101;
   ///
]]></sourcecode>
      </figure>
      <t>The new error codes are shown in <xref target="fig-errors-xdr"/>.</t>
      <section anchor="error-definitions">
        <name>Error Definitions</name>
        <table anchor="tbl-protocol-errors">
          <name>Error Definitions</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Number</th>
              <th align="left">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">10097</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CODING_NOT_SUPPORTED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_NOT_CONSISTENT</td>
              <td align="left">10098</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_LOCKED</td>
              <td align="left">10099</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_LOCKED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_CHUNK_GUARDED</td>
              <td align="left">10100</td>
              <td align="left">
                <xref target="sec-NFS4ERR_CHUNK_GUARDED"/></td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_LOST</td>
              <td align="left">10101</td>
              <td align="left">
                <xref target="sec-NFS4ERR_PAYLOAD_LOST"/></td>
            </tr>
          </tbody>
        </table>
        <section anchor="sec-NFS4ERR_CODING_NOT_SUPPORTED">
          <name>NFS4ERR_CODING_NOT_SUPPORTED (Error Code 10097)</name>
          <t>The client requested an ffv2_coding_type4 that the metadata server
does not support.  For example, if the client sends a layouthint4 requesting
an erasure coding type that the metadata server does not support,
this error code can be returned.  The client might have to send the
layouthint4 several times to determine the overlapping set of
supported erasure coding types.</t>
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_NOT_CONSISTENT">
          <name>NFS4ERR_PAYLOAD_NOT_CONSISTENT (Error Code 10098)</name>
          <t>The client encountered a payload whose blocks were inconsistent
and remained inconsistent across re-reads.  As the client cannot tell
whether another client is actively writing, it informs the metadata
server of this error via LAYOUTERROR.  The metadata server can then
arrange for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_LOCKED">
          <name>NFS4ERR_CHUNK_LOCKED (Error Code 10099)</name>
          <t>The client attempted an operation on a chunk that the data
server reported as locked.  The client then informs the metadata
server of this error via LAYOUTERROR.  The metadata server can then
arrange for repair of the file.</t>
        </section>
        <section anchor="sec-NFS4ERR_CHUNK_GUARDED">
          <name>NFS4ERR_CHUNK_GUARDED (Error Code 10100)</name>
          <t>The client issued a guarded CHUNK_WRITE whose guard did not match
the guard currently stored on the chunk in the data file.  As such, the
CHUNK_WRITE was rejected, and the client should refresh the chunk it
has cached.</t>
        </section>
        <section anchor="sec-NFS4ERR_PAYLOAD_LOST">
          <name>NFS4ERR_PAYLOAD_LOST (Error Code 10101)</name>
          <t>Returned by a repair client on the CB_CHUNK_REPAIR response
(ccrr_status) to indicate that the identified ranges cannot be
repaired and the underlying data is no longer recoverable.
Causes include: too few surviving shards to meet the
reconstruction threshold (Katz criterion for Mojette, any
k-of-(k+m) subset for Reed-Solomon Vandermonde), inability to
roll back to a previously committed payload because that payload
is also lost, or exhaustion of all FFV2_DS_FLAGS_SPARE and
FFV2_DS_FLAGS_REPAIR data servers available in the layout.</t>
          <t>On receipt, the metadata server <bcp14>MUST NOT</bcp14> retry the repair by
selecting a different client -- the payload is damaged and the
metadata server transitions the affected file or byte range into
an implementation-defined damaged state.  Operator notification
and restore-from-snapshot are out of scope for this specification.</t>
          <t>NFS4ERR_PAYLOAD_LOST is distinct from NFS4ERR_DELAY (transient;
metadata server <bcp14>MAY</bcp14> extend the deadline or re-select) and from
NFS4ERR_IO (per-operation failure; metadata server <bcp14>MAY</bcp14> retry or
re-select).  Only NFS4ERR_PAYLOAD_LOST is terminal.</t>
        </section>
      </section>
      <section anchor="operations-and-their-valid-errors">
        <name>Operations and Their Valid Errors</name>
        <t>The operations and their valid errors are presented in
<xref target="tbl-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-ops-and-errors">
          <name>Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CHUNK_COMMIT</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ERROR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_FINALIZE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_HEADER_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_LOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_LOCKED, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_READ</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOTSUPP, NFS4ERR_PAYLOAD_NOT_CONSISTENT, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_REPAIRED</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_ROLLBACK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_UNLOCK</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_INVAL, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_CHUNK_GUARDED, NFS4ERR_CHUNK_LOCKED, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">CHUNK_WRITE_REPAIR</td>
              <td align="left">NFS4_OK, NFS4ERR_ACCESS, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, NFS4ERR_IO, NFS4ERR_NOSPC, NFS4ERR_NOTSUPP, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
            <tr>
              <td align="left">TRUST_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, NFS4ERR_INVAL, NFS4ERR_NOFILEHANDLE, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">REVOKE_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, NFS4ERR_INVAL, NFS4ERR_NOFILEHANDLE, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
            <tr>
              <td align="left">BULK_REVOKE_STATEID</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_DELAY, NFS4ERR_NOTSUPP, NFS4ERR_PERM, NFS4ERR_SERVERFAULT</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="callback-operations-and-their-valid-errors">
        <name>Callback Operations and Their Valid Errors</name>
        <t>The callback operations and their valid errors are presented in
<xref target="tbl-cb-ops-and-errors"/>.  All error codes not defined in this document
are defined in Section 15 of <xref target="RFC8881"/> and Section 11 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-cb-ops-and-errors">
          <name>Callback Operations and Their Valid Errors</name>
          <thead>
            <tr>
              <th align="left">Callback Operation</th>
              <th align="left">Errors</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">CB_CHUNK_REPAIR</td>
              <td align="left">NFS4_OK, NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, NFS4ERR_DEADSESSION, NFS4ERR_DELAY, NFS4ERR_CODING_NOT_SUPPORTED, NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, NFS4ERR_NOTSUPP, NFS4ERR_OLD_STATEID, NFS4ERR_PAYLOAD_LOST, NFS4ERR_SERVERFAULT, NFS4ERR_STALE</td>
            </tr>
          </tbody>
        </table>
      </section>
      <section anchor="errors-and-the-operations-that-use-them">
        <name>Errors and the Operations That Use Them</name>
        <t>The errors and the operations that return them are presented in
<xref target="tbl-errors-and-ops"/>.  All operations not defined in this document
are defined in Section 18 of <xref target="RFC8881"/> and Section 15 of <xref target="RFC7862"/>.</t>
        <table anchor="tbl-errors-and-ops">
          <name>Errors and the Operations That Use Them</name>
          <thead>
            <tr>
              <th align="left">Error</th>
              <th align="left">Operations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left">NFS4ERR_CODING_NOT_SUPPORTED</td>
              <td align="left">CB_CHUNK_REPAIR, LAYOUTGET</td>
            </tr>
            <tr>
              <td align="left">NFS4ERR_PAYLOAD_LOST</td>
              <td align="left">CB_CHUNK_REPAIR</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
    <section anchor="exchgid4flaguseerasureds">
      <name>EXCHGID4_FLAG_USE_ERASURE_DS</name>
      <figure anchor="fig-EXCHGID4_FLAG_USE_ERASURE_DS">
        <name>The EXCHGID4_FLAG_USE_ERASURE_DS</name>
        <sourcecode type="xdr"><![CDATA[
   /// const EXCHGID4_FLAG_USE_ERASURE_DS      = 0x00100000;
]]></sourcecode>
      </figure>
      <t>When a data server connects to a metadata server, it can state its
pNFS role via EXCHANGE_ID (see Section 18.35 of <xref target="RFC8881"/>).
The data server can use EXCHGID4_FLAG_USE_ERASURE_DS (see
<xref target="fig-EXCHGID4_FLAG_USE_ERASURE_DS"/>) to indicate that it supports the
new NFSv4.2 operations introduced in this document.  Section 13.1
of <xref target="RFC8881"/> describes the interaction of the various pNFS roles
masked by EXCHGID4_FLAG_MASK_PNFS.  That mask does not cover
EXCHGID4_FLAG_USE_ERASURE_DS; i.e., EXCHGID4_FLAG_USE_ERASURE_DS can
be used in combination with all of the pNFS flags.</t>
      <t>If the data server sets EXCHGID4_FLAG_USE_ERASURE_DS during the
EXCHANGE_ID operation, then it <bcp14>MUST</bcp14> support all of the operations
in <xref target="tbl-protocol-ops"/>.  Further, this support is orthogonal to the
Erasure Coding Type selected.  The data server is unaware of which type
is driving the I/O.</t>
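      <t>The following non-normative C fragment illustrates the flag
relationship; the EXCHGID4_FLAG_MASK_PNFS value is taken from
<xref target="RFC8881"/>:</t>
      <figure anchor="fig-ex-erasure-flag">
        <name>Example: testing EXCHGID4_FLAG_USE_ERASURE_DS (non-normative)</name>
        <sourcecode type="c"><![CDATA[
#include <stdbool.h>
#include <stdint.h>

#define EXCHGID4_FLAG_MASK_PNFS      0x00070000  /* RFC 8881      */
#define EXCHGID4_FLAG_USE_ERASURE_DS 0x00100000  /* this document */

/*
 * EXCHGID4_FLAG_USE_ERASURE_DS lies outside EXCHGID4_FLAG_MASK_PNFS
 * ((0x00100000 & 0x00070000) == 0), so a metadata server tests it
 * independently of whatever pNFS role bits are set.
 */
bool ds_supports_erasure_ops(uint32_t eia_flags)
{
    return (eia_flags & EXCHGID4_FLAG_USE_ERASURE_DS) != 0;
}
]]></sourcecode>
      </figure>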
    </section>
    <section anchor="new-nfsv42-attributes">
      <name>New NFSv4.2 Attributes</name>
      <section anchor="sec-fattr4_coding_block_size">
        <name>Attribute 89: fattr4_coding_block_size</name>
        <figure anchor="fig-fattr4_coding_block_size">
          <name>XDR for fattr4_coding_block_size</name>
          <sourcecode type="xdr"><![CDATA[
   /// typedef uint64_t                  fattr4_coding_block_size;
   ///
   /// const FATTR4_CODING_BLOCK_SIZE  = 89;
   ///
]]></sourcecode>
        </figure>
        <t>The new attribute fattr4_coding_block_size (see
<xref target="fig-fattr4_coding_block_size"/>) is an <bcp14>OPTIONAL</bcp14> NFSv4.2 attribute
that <bcp14>MUST</bcp14> be supported if the metadata server supports the Flexible
File Version 2 Layout Type.  By querying it, the client can determine
the data block size it is to use when coding data blocks into
chunks.</t>
      </section>
    </section>
    <section anchor="new-nfsv42-common-data-structures">
      <name>New NFSv4.2 Common Data Structures</name>
      <section anchor="sec-chunk_guard4">
        <name>chunk_guard4</name>
        <figure anchor="fig-chunk_guard4">
          <name>XDR for chunk_guard4</name>
          <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_GUARD_CLIENT_ID_MDS  = 0xFFFFFFFF;
   ///
   /// struct chunk_guard4 {
   ///     uint32_t   cg_gen_id;
   ///     uint32_t   cg_client_id;
   /// };
]]></sourcecode>
        </figure>
        <t>On the wire, a single CHUNK_WRITE carries, per chunk, a 12-byte
header (the 8-byte chunk_guard4 followed by a 4-byte CRC32) and then
the opaque payload, as shown in
<xref target="fig-chunk-wire-layout"/>.  The payload length is carried
separately in the CHUNK_WRITE4args cwa_chunks&lt;&gt; slot; the
diagram shows the per-chunk framing only.</t>
        <figure anchor="fig-chunk-wire-layout">
          <name>Per-chunk wire layout</name>
          <artwork><![CDATA[
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                          cg_gen_id                            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         cg_client_id                          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                           cr_crc                              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    opaque payload ...                         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Bytes 0-3:   cg_gen_id      (per-chunk generation counter)
   Bytes 4-7:   cg_client_id   (owning-client short id)
   Bytes 8-11:  cr_crc         (CRC32 over the opaque payload)
   Bytes 12-N:  opaque payload (encoded shard; variable length)
]]></artwork>
        </figure>
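        <t>The framing above can be serialized as in the following
non-normative C sketch.  Field order and widths follow the figure;
the CRC polynomial is an assumption for illustration (the common
IEEE reflected polynomial 0xEDB88320), and an implementation uses
whatever CRC the coding type mandates.</t>
        <figure anchor="fig-ex-chunk-framing">
          <name>Example: Packing the Per-Chunk Frame (Non-Normative)</name>
          <sourcecode type="c"><![CDATA[
   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   /* Bitwise CRC-32 over the payload; the reflected IEEE
    * polynomial 0xEDB88320 is an illustrative assumption. */
   static uint32_t
   crc32_ieee(const uint8_t *buf, size_t len)
   {
       uint32_t crc = 0xFFFFFFFFu;

       for (size_t i = 0; i < len; i++) {
           crc ^= buf[i];
           for (int b = 0; b < 8; b++)
               crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
       }
       return ~crc;
   }

   /* Store a 32-bit value in network (big-endian) byte order. */
   static void
   put32(uint8_t *p, uint32_t v)
   {
       p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
   }

   /* Pack one frame per the figure: gen id, client id, CRC over
    * the payload, then the payload itself.  The caller supplies an
    * output buffer of at least 12 + payload_len bytes. */
   static size_t
   pack_chunk_frame(uint8_t *out, uint32_t gen_id, uint32_t client_id,
                    const uint8_t *payload, size_t payload_len)
   {
       put32(out + 0, gen_id);
       put32(out + 4, client_id);
       put32(out + 8, crc32_ieee(payload, payload_len));
       memcpy(out + 12, payload, payload_len);
       return 12 + payload_len;
   }
]]></sourcecode>
        </figure>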
        <t>The chunk_guard4 (see <xref target="fig-chunk_guard4"/>) is effectively a 64-bit
value identifying a specific write transaction on a specific chunk.
It has two fields:</t>
        <dl>
          <dt>cg_gen_id:</dt>
          <dd>
            <t>A per-chunk monotonic generation counter.  Each chunk's gen_id
starts at 0 when the chunk is first written and is incremented
on each successful write by any client.  cg_gen_id is NOT a
timestamp -- the protocol does not rely on a global clock,
and no interpretation of cg_gen_id as a wall-clock value is
supported.  cg_gen_id values are NOT comparable across distinct
chunks; a given cg_gen_id is only meaningful within the scope
of a single chunk on a single file.</t>
          </dd>
          <dt>cg_client_id:</dt>
          <dd>
            <t>A 32-bit value established by the metadata server at the time
the client's layout is granted (see <xref target="sec-ffv2-mirror4"/> and
ffm_client_id).  The metadata server <bcp14>MUST</bcp14> assign distinct
cg_client_id values to distinct clients that hold concurrent
write layouts on the same file.  cg_client_id is opaque with
respect to client identity -- a data server <bcp14>MUST NOT</bcp14>
interpret its bits as naming or ordering clients in any
external sense.  The value supports two operations only:
equality comparison (to detect whether two chunks were written
by the same transaction) and numeric comparison (to implement
the tiebreaker rule below).</t>
          </dd>
          <dt>Uniqueness contract:</dt>
          <dd>
            <t>The pair (cg_gen_id, cg_client_id) uniquely identifies a write
transaction on a chunk.  Neither field alone is globally
unique; two clients <bcp14>MAY</bcp14> independently write with the same
cg_gen_id on the same chunk (in particular, both may write
with cg_gen_id equal to some prior value + 1), and the
cg_client_id is what makes the resulting transactions
distinguishable.</t>
          </dd>
          <dt>Deterministic tiebreaker for concurrent writers:</dt>
          <dd>
            <t>When two or more clients race on the same chunk in the
multi-writer mode, the client whose cg_client_id compares
numerically lowest wins the race.  A data server enforces this
by accepting the first CHUNK_WRITE whose guard check succeeds
and rejecting later writers with NFS4ERR_CHUNK_GUARDED; across
the mirror set, the subset of data servers on which each
client wins will vary, but the deterministic tiebreaker
ensures all clients agree on which client's write ultimately
becomes COMMITTED.  A client that lost the race on at least
one data server <bcp14>MUST</bcp14> re-read the chunk and <bcp14>MAY</bcp14> retry its write
with a refreshed cg_gen_id.  A client that detects no forward
progress after a bounded number of retries <bcp14>MUST</bcp14> escalate via
LAYOUTERROR and the repair coordination flow in
<xref target="sec-repair-selection"/>.</t>
          </dd>
        </dl>
        <t>The numeric ordering of cg_client_id values is arbitrary with
respect to the clients' external identities -- it is a
deterministic total order over the opaque 32-bit values, not a
preference ordering over the clients themselves.  A deployment
that requires a specific client to win a race <bcp14>MUST</bcp14> arrange
cg_client_id assignment at the metadata server; the protocol does
not provide a preference mechanism at layout-grant time.</t>
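        <t>A non-normative C sketch of the tiebreaker comparison follows:
given the guards of two writes racing on the same chunk, the write
whose cg_client_id is numerically lowest prevails.</t>
        <figure anchor="fig-ex-tiebreaker">
          <name>Example: Deterministic Tiebreaker (Non-Normative)</name>
          <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   struct chunk_guard {
       uint32_t cg_gen_id;
       uint32_t cg_client_id;
   };

   /* Among concurrent writers racing on one chunk, the numerically
    * lowest cg_client_id wins.  The ordering is over the opaque
    * 32-bit values only; it implies nothing about the clients'
    * external identities. */
   static bool
   guard_wins_race(const struct chunk_guard *a,
                   const struct chunk_guard *b)
   {
       return a->cg_client_id < b->cg_client_id;
   }
]]></sourcecode>
        </figure>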
        <section anchor="metadata-server-assignment-rules-for-cgclientid">
          <name>Metadata-Server Assignment Rules for cg_client_id</name>
          <t>To uphold the uniqueness contract, the metadata server <bcp14>MUST</bcp14>
follow these rules when assigning cg_client_id (that is, when
populating ffm_client_id at layout-grant time); a non-normative
sketch follows the list:</t>
          <ul spacing="normal">
            <li>
              <t>Two clients holding concurrent write layouts on the same
file <bcp14>MUST</bcp14> receive distinct cg_client_id values.  A client
that holds only a read layout need not be assigned a
distinct value.</t>
            </li>
            <li>
              <t>The reserved sentinel CHUNK_GUARD_CLIENT_ID_MDS (0xFFFFFFFF)
<bcp14>MUST NOT</bcp14> be assigned to any client.</t>
            </li>
            <li>
              <t>A cg_client_id <bcp14>MAY</bcp14> be reused by the metadata server after
the prior holder's layout has been fully returned (via
LAYOUTRETURN or revocation).  The metadata server <bcp14>SHOULD</bcp14>
avoid reusing a cg_client_id within a single lease period
to simplify diagnosis of stale writes.</t>
            </li>
            <li>
              <t>cg_client_id values do not persist across metadata-server
restart.  Clients reclaiming layouts during the grace period
receive freshly assigned values; the protocol does not rely
on any pre-restart assignment surviving.</t>
            </li>
          </ul>
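          <t>The following non-normative C sketch illustrates an allocator
honoring these rules; the counter and function names are
illustrative only.</t>
          <figure anchor="fig-ex-client-id-assign">
            <name>Example: Assigning cg_client_id (Non-Normative)</name>
            <sourcecode type="c"><![CDATA[
   #include <stdint.h>

   #define CHUNK_GUARD_CLIENT_ID_MDS  0xFFFFFFFFu

   /* Illustrative allocator for cg_client_id (ffm_client_id).  A
    * real metadata server must also guarantee distinctness among
    * concurrent write layouts on a file and should defer reuse of
    * a value for at least one lease period. */
   static uint32_t next_client_id;  /* not persisted across restart */

   static uint32_t
   assign_cg_client_id(void)
   {
       uint32_t id = next_client_id++;

       if (id == CHUNK_GUARD_CLIENT_ID_MDS)  /* skip the sentinel */
           id = next_client_id++;
       return id;
   }
]]></sourcecode>
          </figure>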
        </section>
        <section anchor="data-server-collision-handling">
          <name>Data-Server Collision Handling</name>
          <t>A (cg_gen_id, cg_client_id) pair that the uniqueness contract
would otherwise render unique can nonetheless collide if a
client and the metadata server disagree about which
cg_client_id the client currently holds, or if a client
presents a spoofed cg_client_id.  The data server enforces the
contract locally:</t>
          <ul spacing="normal">
            <li>
              <t>If the data server receives a CHUNK_WRITE whose
chunk_guard4 has the same (cg_gen_id, cg_client_id) as a
chunk already in PENDING, FINALIZED, or COMMITTED state
AND the presented payload differs from the retained
payload, the data server <bcp14>MUST</bcp14> reject the write with
NFS4ERR_CHUNK_GUARDED and <bcp14>SHOULD</bcp14> report the collision to
the metadata server via LAYOUTERROR.  This situation is a
protocol violation on one side of the conversation; the
metadata server resolves it by revoking the offending
client's layout and selecting a repair client under
<xref target="sec-repair-selection"/>.</t>
            </li>
            <li>
              <t>If a client presents CHUNK_GUARD_CLIENT_ID_MDS as
cg_client_id in any client-originated operation, the data
server <bcp14>MUST</bcp14> reject the operation with NFS4ERR_INVAL (see
<xref target="sec-chunk_guard_mds"/>).</t>
            </li>
            <li>
              <t>A cg_client_id that does not match any layout the data
server has been told about (via TRUST_STATEID) <bcp14>MUST</bcp14> be
rejected.  Unknown cg_client_id values are treated as stale
layouts; the data server returns the error specified in
<xref target="sec-tight-coupling-control"/> for unknown stateids.</t>
            </li>
          </ul>
        </section>
        <section anchor="sec-chunk_guard_mds">
          <name>Reserved cg_client_id Value: CHUNK_GUARD_CLIENT_ID_MDS</name>
          <t>The value <tt>CHUNK_GUARD_CLIENT_ID_MDS</tt> (0xFFFFFFFF) is reserved.
It denotes that the chunk lock is held by the metadata server
itself, in escrow during a repair coordination sequence (see
<xref target="sec-repair-selection"/>).  The data server produces a
chunk_guard4 with this cg_client_id when the metadata server
revokes the prior holder's stateid while that holder still holds
chunk locks; the locks <bcp14>MUST NOT</bcp14> be dropped and are transferred to
the MDS-escrow owner instead.</t>
          <t>The metadata server does not originate CHUNK_LOCK or CHUNK_WRITE
traffic on its own session.  Clients <bcp14>MUST NOT</bcp14> present
CHUNK_GUARD_CLIENT_ID_MDS as the cg_client_id of any
client-originated chunk_guard4 or chunk_owner4.  A data server
that receives such a value from a client <bcp14>MUST</bcp14> reject the
operation with NFS4ERR_INVAL.</t>
          <t>The MDS-escrow owner is released only by a CHUNK_LOCK from the
client selected via CB_CHUNK_REPAIR, carrying
CHUNK_LOCK_FLAGS_ADOPT.  See <xref target="sec-CHUNK_LOCK"/>.</t>
        </section>
      </section>
      <section anchor="chunkowner4">
        <name>chunk_owner4</name>
        <figure anchor="fig-chunk_owner4">
          <name>XDR for chunk_owner4</name>
          <sourcecode type="xdr"><![CDATA[
   /// struct chunk_owner4 {
   ///     chunk_guard4   co_guard;
   ///     uint32_t       co_chunk_id;
   /// };
]]></sourcecode>
        </figure>
        <t>The chunk_owner4 (see <xref target="fig-chunk_owner4"/>) is used to determine
when and by whom a block was written.  The co_chunk_id is used
to identify the chunk and <bcp14>MUST</bcp14> be the index of the chunk within
the file.  I.e., it is the offset of the start of the chunk
divided by the chunk length.  The co_guard is a chunk_guard4
(see <xref target="sec-chunk_guard4"/>), used to identify a given
transaction.</t>
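        <t>For example, with an illustrative chunk length of 262,144 bytes,
the chunk starting at byte offset 1,048,576 in the file has a
co_chunk_id of 1048576 / 262144 = 4.</t>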
        <t>The co_guard is like the change attribute (see Section 5.8.1.4 of
<xref target="RFC8881"/>) in that each chunk write by a given client has to have
a unique co_guard.  I.e., it can be determined to which transaction,
across all of the data files, a given chunk corresponds.</t>
      </section>
    </section>
    <section anchor="sec-new-ops">
      <name>New NFSv4.2 Operations</name>
      <figure anchor="fig-ops-xdr">
        <name>Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New operations for Erasure Coding start here */
   ///
   ///  OP_CHUNK_COMMIT        = 78,
   ///  OP_CHUNK_ERROR         = 79,
   ///  OP_CHUNK_FINALIZE      = 80,
   ///  OP_CHUNK_HEADER_READ   = 81,
   ///  OP_CHUNK_LOCK          = 82,
   ///  OP_CHUNK_READ          = 83,
   ///  OP_CHUNK_REPAIRED      = 84,
   ///  OP_CHUNK_ROLLBACK      = 85,
   ///  OP_CHUNK_UNLOCK        = 86,
   ///  OP_CHUNK_WRITE         = 87,
   ///  OP_CHUNK_WRITE_REPAIR  = 88,
   ///
   /// /* MDS-to-DS control-plane operations for tight coupling */
   ///
   ///  OP_TRUST_STATEID       = 90,
   ///  OP_REVOKE_STATEID      = 91,
   ///  OP_BULK_REVOKE_STATEID = 92,
   ///
]]></sourcecode>
      </figure>
      <t>The following amendment blocks extend the nfs_argop4 and
nfs_resop4 dispatch unions defined in <xref target="RFC7863"/> with arms for
each of the new operations defined in this document.  A consumer
that combines this document's extracted XDR with the RFC 7863
XDR applies these amendments at the union's extension point.</t>
      <figure anchor="fig-nfs_argop4-amend">
        <name>nfs_argop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_argop4 amendment block */
   ///
   /// case OP_CHUNK_COMMIT: CHUNK_COMMIT4args opchunkcommit;
   /// case OP_CHUNK_ERROR: CHUNK_ERROR4args opchunkerror;
   /// case OP_CHUNK_FINALIZE: CHUNK_FINALIZE4args opchunkfinalize;
   /// case OP_CHUNK_HEADER_READ:
   ///     CHUNK_HEADER_READ4args opchunkheaderread;
   /// case OP_CHUNK_LOCK: CHUNK_LOCK4args opchunklock;
   /// case OP_CHUNK_READ: CHUNK_READ4args opchunkread;
   /// case OP_CHUNK_REPAIRED: CHUNK_REPAIRED4args opchunkrepaired;
   /// case OP_CHUNK_ROLLBACK: CHUNK_ROLLBACK4args opchunkrollback;
   /// case OP_CHUNK_UNLOCK: CHUNK_UNLOCK4args opchunkunlock;
   /// case OP_CHUNK_WRITE: CHUNK_WRITE4args opchunkwrite;
   /// case OP_CHUNK_WRITE_REPAIR:
   ///     CHUNK_WRITE_REPAIR4args opchunkwriterepair;
   /// case OP_TRUST_STATEID: TRUST_STATEID4args optruststateid;
   /// case OP_REVOKE_STATEID: REVOKE_STATEID4args oprevokestateid;
   /// case OP_BULK_REVOKE_STATEID:
   ///     BULK_REVOKE_STATEID4args opbulkrevokestateid;
]]></sourcecode>
      </figure>
      <figure anchor="fig-nfs_resop4-amend">
        <name>nfs_resop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_resop4 amendment block */
   ///
   /// case OP_CHUNK_COMMIT: CHUNK_COMMIT4res opchunkcommit;
   /// case OP_CHUNK_ERROR: CHUNK_ERROR4res opchunkerror;
   /// case OP_CHUNK_FINALIZE: CHUNK_FINALIZE4res opchunkfinalize;
   /// case OP_CHUNK_HEADER_READ:
   ///     CHUNK_HEADER_READ4res opchunkheaderread;
   /// case OP_CHUNK_LOCK: CHUNK_LOCK4res opchunklock;
   /// case OP_CHUNK_READ: CHUNK_READ4res opchunkread;
   /// case OP_CHUNK_REPAIRED: CHUNK_REPAIRED4res opchunkrepaired;
   /// case OP_CHUNK_ROLLBACK: CHUNK_ROLLBACK4res opchunkrollback;
   /// case OP_CHUNK_UNLOCK: CHUNK_UNLOCK4res opchunkunlock;
   /// case OP_CHUNK_WRITE: CHUNK_WRITE4res opchunkwrite;
   /// case OP_CHUNK_WRITE_REPAIR:
   ///     CHUNK_WRITE_REPAIR4res opchunkwriterepair;
   /// case OP_TRUST_STATEID: TRUST_STATEID4res optruststateid;
   /// case OP_REVOKE_STATEID: REVOKE_STATEID4res oprevokestateid;
   /// case OP_BULK_REVOKE_STATEID:
   ///     BULK_REVOKE_STATEID4res opbulkrevokestateid;
]]></sourcecode>
      </figure>
      <t>Operations 78 through 88 (the CHUNK_* operations) are sent by
clients to storage devices on the data path.  Operations 90
through 92 (TRUST_STATEID, REVOKE_STATEID, BULK_REVOKE_STATEID)
are sent by the metadata server to storage devices on the
MDS-to-DS control session (see
<xref target="sec-tight-coupling-control-session"/>); they <bcp14>MUST NOT</bcp14> be sent by
pNFS clients.</t>
      <table anchor="tbl-protocol-ops">
        <name>Protocol OPs</name>
        <thead>
          <tr>
            <th align="left">Operation</th>
            <th align="left">Number</th>
            <th align="left">Target Server</th>
            <th align="left">Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">CHUNK_COMMIT</td>
            <td align="left">78</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_COMMIT"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ERROR</td>
            <td align="left">79</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_ERROR"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_FINALIZE</td>
            <td align="left">80</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_FINALIZE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_HEADER_READ</td>
            <td align="left">81</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_HEADER_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_LOCK</td>
            <td align="left">82</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_LOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_READ</td>
            <td align="left">83</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_READ"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_REPAIRED</td>
            <td align="left">84</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_REPAIRED"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_ROLLBACK</td>
            <td align="left">85</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_ROLLBACK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_UNLOCK</td>
            <td align="left">86</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_UNLOCK"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE</td>
            <td align="left">87</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE"/></td>
          </tr>
          <tr>
            <td align="left">CHUNK_WRITE_REPAIR</td>
            <td align="left">88</td>
            <td align="left">DS (client)</td>
            <td align="left">
              <xref target="sec-CHUNK_WRITE_REPAIR"/></td>
          </tr>
          <tr>
            <td align="left">TRUST_STATEID</td>
            <td align="left">90</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-TRUST_STATEID"/></td>
          </tr>
          <tr>
            <td align="left">REVOKE_STATEID</td>
            <td align="left">91</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-REVOKE_STATEID"/></td>
          </tr>
          <tr>
            <td align="left">BULK_REVOKE_STATEID</td>
            <td align="left">92</td>
            <td align="left">DS (MDS control)</td>
            <td align="left">
              <xref target="sec-BULK_REVOKE_STATEID"/></td>
          </tr>
        </tbody>
      </table>
      <section anchor="sec-CHUNK_COMMIT">
        <name>Operation 78: CHUNK_COMMIT - Activate Cached Chunk Data</name>
        <section anchor="arguments">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_COMMIT4args">
            <name>XDR for CHUNK_COMMIT4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cca_offset;
   ///     count4          cca_count;
   ///     chunk_owner4    cca_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_COMMIT4resok">
            <name>XDR for CHUNK_COMMIT4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_COMMIT4resok {
   ///     verifier4       ccr_writeverf;
   ///     nfsstat4        ccr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_COMMIT4res">
            <name>XDR for CHUNK_COMMIT4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_COMMIT4res switch (nfsstat4 ccr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_COMMIT4resok   ccr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description">
          <name>DESCRIPTION</name>
          <t>CHUNK_COMMIT is COMMIT (see Section 18.3 of <xref target="RFC8881"/>) with
additional semantics over the chunk_owner activating the blocks.
As such, all of the normal semantics of COMMIT directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_COMMIT
works on blocks and not a raw data stream.  As such cca_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cca_count is a count of
blocks to activate and not bytes to activate.</t>
          <t>Further, while it may appear that the combination of cca_offset and
cca_count is redundant to cca_chunks, the purpose of cca_chunks
is to allow the data server to differentiate between potentially
multiple pending blocks.</t>
          <section anchor="interaction-with-chunkfinalize">
            <name>Interaction with CHUNK_FINALIZE</name>
            <t>CHUNK_COMMIT transitions a chunk from FINALIZED to COMMITTED
(see <xref target="sec-system-model-chunk-state"/>).  A chunk <bcp14>MUST</bcp14> have
previously been transitioned from PENDING to FINALIZED via
CHUNK_FINALIZE before CHUNK_COMMIT is accepted:</t>
            <ul spacing="normal">
              <li>
                <t>If the target chunk is PENDING (i.e., the writer never
issued CHUNK_FINALIZE), the data server <bcp14>MUST</bcp14> reject the
CHUNK_COMMIT entry for that chunk with
NFS4ERR_PAYLOAD_NOT_CONSISTENT in the corresponding
ccr_status slot.  The writer is expected to either issue
CHUNK_FINALIZE to advance the state or CHUNK_ROLLBACK to
abandon the PENDING generation.</t>
              </li>
              <li>
                <t>If the target chunk is EMPTY (no generation to commit), the
data server <bcp14>MUST</bcp14> reject with NFS4ERR_PAYLOAD_NOT_CONSISTENT
for that chunk.</t>
              </li>
              <li>
                <t>If the target chunk is already COMMITTED at the generation
identified by the cca_chunks entry's cg_gen_id, the
CHUNK_COMMIT is idempotent and <bcp14>MUST</bcp14> succeed.  Idempotence
preserves the NFSv4 COMMIT contract for duplicate-request
retransmission.</t>
              </li>
              <li>
                <t>If the target chunk is FINALIZED at a different generation
than the one named in the cca_chunks entry, the data server
<bcp14>MUST</bcp14> reject with NFS4ERR_CHUNK_GUARDED.  A client that sees
this has lost a race and <bcp14>SHOULD</bcp14> re-read the chunk (see
<xref target="sec-chunk_guard4"/>).</t>
              </li>
            </ul>
            <t>The three-step CHUNK_WRITE -&gt; CHUNK_FINALIZE -&gt; CHUNK_COMMIT
sequence <bcp14>MAY</bcp14> be pipelined within a single NFSv4.2 compound
(see <xref target="sec-system-model-progress"/>); each operation evaluates the
current state of the target chunks independently.</t>
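            <t>The per-chunk decision rules above can be summarized by the
following non-normative C sketch.  The state and error names mirror
this document; the struct layout, and the treatment of a COMMITTED
chunk at a different generation (rejected here with
NFS4ERR_CHUNK_GUARDED), are assumptions.</t>
            <figure anchor="fig-ex-commit-states">
              <name>Example: Per-Chunk CHUNK_COMMIT Evaluation (Non-Normative)</name>
              <sourcecode type="c"><![CDATA[
   #include <stdint.h>

   enum chunk_state { EMPTY, PENDING, FINALIZED, COMMITTED };

   /* Abbreviations of this document's nfsstat4 values. */
   enum status { OK, ERR_PAYLOAD_NOT_CONSISTENT, ERR_CHUNK_GUARDED };

   struct chunk {
       enum chunk_state state;
       uint32_t         gen_id;  /* generation currently held */
   };

   /* Evaluate one cca_chunks entry against the chunk's state. */
   static enum status
   chunk_commit_one(struct chunk *c, uint32_t requested_gen_id)
   {
       switch (c->state) {
       case PENDING:  /* writer never issued CHUNK_FINALIZE */
       case EMPTY:    /* no generation to commit */
           return ERR_PAYLOAD_NOT_CONSISTENT;
       case COMMITTED:
           if (c->gen_id == requested_gen_id)
               return OK;            /* idempotent retransmission */
           return ERR_CHUNK_GUARDED; /* assumption; not specified */
       case FINALIZED:
           if (c->gen_id != requested_gen_id)
               return ERR_CHUNK_GUARDED;  /* lost a race; re-read */
           c->state = COMMITTED;
           return OK;
       }
       return ERR_PAYLOAD_NOT_CONSISTENT;  /* unreachable */
   }
]]></sourcecode>
            </figure>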
          </section>
          <section anchor="interaction-with-a-locked-chunk">
            <name>Interaction with a Locked Chunk</name>
            <t>When a chunk is locked via CHUNK_LOCK (see <xref target="sec-CHUNK_LOCK"/>),
CHUNK_COMMIT is permitted only when the submitter owns the
lock -- that is, when the stateid carried on the compound
matches the lock holder's stateid (or is an
CHUNK_LOCK_FLAGS_ADOPT-transferred continuation):</t>
            <ul spacing="normal">
              <li>
                <t>The owning writer <bcp14>MAY</bcp14> issue CHUNK_COMMIT; the chunk
transitions from FINALIZED to COMMITTED normally.</t>
              </li>
              <li>
                <t>A non-owning client <bcp14>MUST</bcp14> receive NFS4ERR_CHUNK_LOCKED in
the corresponding ccr_status slot.  The chunk's state is
not changed.</t>
              </li>
              <li>
                <t>During repair, the MDS-escrow owner
(CHUNK_GUARD_CLIENT_ID_MDS, see <xref target="sec-chunk_guard_mds"/>)
holds the lock while the repair client adopts it via
CHUNK_LOCK_FLAGS_ADOPT.  CHUNK_COMMIT during the escrow
window is permitted only to the holder of the adopted
lock.</t>
              </li>
            </ul>
            <t>This rule is what <xref target="sec-system-model-consistency"/> calls
"lock continuity across revocation": the COMMIT privilege
follows the lock without gaps in which a non-owner could race.</t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_ERROR">
        <name>Operation 79: CHUNK_ERROR - Report Error on Cached Chunk Data</name>
        <section anchor="arguments-1">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ERROR4args">
            <name>XDR for CHUNK_ERROR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cea_stateid;
   ///     offset4         cea_offset;
   ///     count4          cea_count;
   ///     nfsstat4        cea_error;
   ///     chunk_owner4    cea_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-1">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ERROR4res">
            <name>XDR for CHUNK_ERROR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ERROR4res {
   ///     nfsstat4        cer_status;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-1">
          <name>DESCRIPTION</name>
          <t>CHUNK_ERROR allows a client to report that one or more chunks at
the specified block range are in error.  The cea_offset is the
starting block offset and cea_count is the number of blocks
affected.  The cea_error indicates the type of error detected
(e.g., NFS4ERR_PAYLOAD_NOT_CONSISTENT for a CRC mismatch).</t>
          <t>The data server records the error state for the affected blocks.
Once marked as errored, the blocks are not returned by CHUNK_READ
until they are repaired via CHUNK_WRITE_REPAIR (<xref target="sec-CHUNK_WRITE_REPAIR"/>)
and the repair is confirmed via CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>).</t>
          <t>The client <bcp14>SHOULD</bcp14> report errors via CHUNK_ERROR before reporting
them to the metadata server via LAYOUTERROR.  This allows the data
server to prevent other clients from reading corrupt data while
the metadata server coordinates repair.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_FINALIZE">
        <name>Operation 80: CHUNK_FINALIZE - Transition Chunks from Pending to Finalized</name>
        <section anchor="arguments-2">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4args">
            <name>XDR for CHUNK_FINALIZE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cfa_offset;
   ///     count4          cfa_count;
   ///     chunk_owner4    cfa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-2">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_FINALIZE4resok">
            <name>XDR for CHUNK_FINALIZE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_FINALIZE4resok {
   ///     verifier4       cfr_writeverf;
   ///     nfsstat4        cfr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_FINALIZE4res">
            <name>XDR for CHUNK_FINALIZE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_FINALIZE4res switch (nfsstat4 cfr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_FINALIZE4resok   cfr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-2">
          <name>DESCRIPTION</name>
          <t>CHUNK_FINALIZE transitions blocks from the PENDING state (set by
CHUNK_WRITE) to the FINALIZED state.  A finalized block is visible
to the owning client for reads and is eligible for CHUNK_COMMIT.</t>
          <t>The cfa_offset is the starting block offset and cfa_count is the
number of blocks to finalize.  The cfa_chunks array lists the
chunk_owner4 entries whose blocks are to be finalized.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING state;
if not, the corresponding entry in the per-owner status array
cfr_status is set to NFS4ERR_INVAL.</t>
          <t>CHUNK_FINALIZE serves as the CRC validation checkpoint: the data
server <bcp14>SHOULD</bcp14> have validated the CRC32 of each block at CHUNK_WRITE
time.  After CHUNK_FINALIZE, the block metadata (CRC, owner, state)
is persisted to stable storage so that it survives data server
restarts.</t>
          <t>Blocks that have been finalized but not yet committed <bcp14>MAY</bcp14> be rolled
back via CHUNK_ROLLBACK (<xref target="sec-CHUNK_ROLLBACK"/>).</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_HEADER_READ">
        <name>Operation 81: CHUNK_HEADER_READ - Read Chunk Header from File</name>
        <section anchor="arguments-3">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4args">
            <name>XDR for CHUNK_HEADER_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    chra_stateid;
   ///     offset4     chra_offset;
   ///     count4      chra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-3">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_HEADER_READ4resok">
            <name>XDR for CHUNK_HEADER_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_HEADER_READ4resok {
   ///     bool            chrr_eof;
   ///     nfsstat4        chrr_status<>;
   ///     bool            chrr_locked<>;
   ///     chunk_owner4    chrr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_HEADER_READ4res">
            <name>XDR for CHUNK_HEADER_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_HEADER_READ4res switch (nfsstat4 chrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_HEADER_READ4resok     chrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-3">
          <name>DESCRIPTION</name>
          <t>CHUNK_HEADER_READ differs from CHUNK_READ in that it returns only
the chunk headers (ownership, lock state, and per-chunk status) for
the desired block range, without the opaque payload data.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_LOCK">
        <name>Operation 82: CHUNK_LOCK - Lock Cached Chunk Data</name>
        <section anchor="arguments-4">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_LOCK4args">
            <name>XDR for CHUNK_LOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_LOCK_FLAGS_ADOPT  = 0x00000001;
   ///
   /// struct CHUNK_LOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cla_stateid;
   ///     offset4         cla_offset;
   ///     count4          cla_count;
   ///     uint32_t        cla_flags;
   ///     chunk_owner4    cla_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-4">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_LOCK4res">
            <name>XDR for CHUNK_LOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_LOCK4res switch (nfsstat4 clr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     case NFS4ERR_CHUNK_LOCKED:
   ///         chunk_owner4    clr_owner;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-4">
          <name>DESCRIPTION</name>
          <t>CHUNK_LOCK acquires an exclusive lock on the block range specified
by cla_offset and cla_count.  While locked, other clients' CHUNK_WRITE
operations to the same block range will fail with NFS4ERR_CHUNK_LOCKED.
The lock is associated with the chunk_owner4 in cla_owner.</t>
          <t>If the blocks are already locked by a different owner and
cla_flags does not include CHUNK_LOCK_FLAGS_ADOPT, the operation
returns NFS4ERR_CHUNK_LOCKED with the clr_owner field identifying
the current lock holder.</t>
          <t>CHUNK_LOCK is used in the multiple writer mode (<xref target="sec-multi-writer"/>)
to coordinate concurrent access to the same block range, and in the
repair flow (<xref target="sec-repair-selection"/>) to transfer lock ownership
to a repair client.</t>
          <t>The lock is released by CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>) or
implicitly when the client's lease expires.</t>
          <section anchor="lock-transfer-via-chunklockflagsadopt">
            <name>Lock Transfer via CHUNK_LOCK_FLAGS_ADOPT</name>
            <t>The CHUNK_LOCK_FLAGS_ADOPT flag in cla_flags requests an atomic
transfer of lock ownership to cla_owner for every chunk in
[cla_offset, cla_offset+cla_count).  The data server <bcp14>MUST</bcp14> perform
the transfer as a single atomic step per chunk: there is no window
in which the chunk is unlocked.  After a successful ADOPT, subsequent
CHUNK_WRITE, CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK, and CHUNK_UNLOCK
operations <bcp14>MUST</bcp14> present cla_owner as their chunk_owner4.</t>
            <t>CHUNK_LOCK_FLAGS_ADOPT is the sole mechanism by which a chunk lock
can change hands without first being released.  The lock ordering
invariant -- that every chunk in a payload transitioning through
repair is held by exactly one owner continuously from failure
detection to repair completion -- depends on it.</t>
            <t>CHUNK_LOCK_FLAGS_ADOPT is valid only when the caller has been
selected as the repair client for the range by the metadata server,
typically via CB_CHUNK_REPAIR (<xref target="sec-CB_CHUNK_REPAIR"/>).  A data
server that receives CHUNK_LOCK with the ADOPT flag from a client
that has not been so designated <bcp14>MAY</bcp14> reject the operation with
NFS4ERR_ACCESS.  The mechanism by which the data server determines
designation is coupling-model dependent:</t>
            <ul spacing="normal">
              <li>
                <t>In a tightly coupled deployment, the metadata server notifies the
data server via the control protocol (e.g., TRUST_STATEID with
the new client's stateid or a similar facility).</t>
              </li>
              <li>
                <t>In a loosely coupled deployment, the data server <bcp14>MAY</bcp14> rely on the
metadata server's authentication of the client and accept ADOPT
from any authenticated client holding a current layout that
includes the range.  The cost is that a misbehaving client can
trigger spurious ownership transfers; the write-hole exposure is
bounded by the chunk_guard4 checks that subsequent CHUNK_WRITEs
from displaced writers experience.</t>
              </li>
            </ul>
            <t>The current lock holder at the moment of ADOPT <bcp14>MAY</bcp14> be:</t>
            <ol spacing="normal" type="1"><li>
                <t>Another client whose stateid remains valid (for example, a
client that has stopped making progress but has not yet lost
its lease).  The prior owner's PENDING or FINALIZED shards
remain on disk until the new owner issues CHUNK_WRITE_REPAIR,
CHUNK_ROLLBACK, or CHUNK_COMMIT.</t>
              </li>
              <li>
                <t>The metadata server itself, acting through the
CHUNK_GUARD_CLIENT_ID_MDS escrow owner
(<xref target="sec-chunk_guard_mds"/>).  This occurs when the metadata
server has revoked the prior holder's stateid in a tightly
coupled deployment.</t>
              </li>
            </ol>
            <t>In either case, ADOPT's effect from the repair client's
perspective is the same: after the successful return the caller
holds the lock and may drive the range to consistency.</t>
            <t>The data server <bcp14>MUST</bcp14> reject CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT if cla_owner's cg_client_id equals
CHUNK_GUARD_CLIENT_ID_MDS -- that value is reserved for server
production and <bcp14>MUST NOT</bcp14> be presented by a client.  The operation
returns NFS4ERR_INVAL in that case.</t>
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_READ">
        <name>Operation 83: CHUNK_READ - Read Chunks from File</name>
        <section anchor="arguments-5">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_READ4args">
            <name>XDR for CHUNK_READ4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4    cra_stateid;
   ///     offset4     cra_offset;
   ///     count4      cra_count;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-5">
          <name>RESULTS</name>
          <figure anchor="fig-read_chunk4">
            <name>XDR for read_chunk4</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct read_chunk4 {
   ///     uint32_t        cr_crc;
   ///     uint32_t        cr_effective_len;
   ///     chunk_owner4    cr_owner;
   ///     uint32_t        cr_payload_id;
   ///     bool            cr_locked;
   ///     nfsstat4        cr_status;
   ///     opaque          cr_chunk<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4resok">
            <name>XDR for CHUNK_READ4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_READ4resok {
   ///     bool        crr_eof;
   ///     read_chunk4 crr_chunks<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_READ4res">
            <name>XDR for CHUNK_READ4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_READ4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///          CHUNK_READ4resok     crr_resok4;
   ///     default:
   ///          void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-5">
          <name>DESCRIPTION</name>
          <t>CHUNK_READ is READ (see Section 18.22 of <xref target="RFC8881"/>) with additional
semantics over the chunk_owner.  As such, all of the normal semantics
of READ directly apply.</t>
          <t>The main difference between the two operations is that CHUNK_READ
works on blocks and not a raw data stream.  As such cra_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cra_count is a count of
blocks to read and not bytes to read.</t>
          <t>When reading a set of blocks across the data servers, it can be the
case that some data servers do not have any data at that location.
In that case, the data server either sets crr_eof, if cra_offset
exceeds the number of blocks of which it is aware, or it returns an
empty block for that block.</t>
          <t>For example, in <xref target="fig-example-CHUNK_READ4args"/>, the client asks
for 4 blocks starting with the 3rd block in the file.  The second
data server responds as in <xref target="fig-example-CHUNK_READ4resok"/>.  The
client would read this as follows: there is valid data for blocks 2
and 4, there is a hole at block 3, and there is no data for block 5.
The data server <bcp14>MUST</bcp14> calculate a valid cr_crc for block 3 based on
the generated fields.</t>
          <figure anchor="fig-example-CHUNK_READ4args">
            <name>Example: CHUNK_READ4args parameters</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4args                |
  +--------------------------------+
  | cra_stateid: 0                 |
  | cra_offset: 2                  |
  | cra_count: 4                   |
  +----------+---------------------+
]]></artwork>
          </figure>
          <figure anchor="fig-example-CHUNK_READ4resok">
            <name>Example: Resulting CHUNK_READ4resok reply</name>
            <artwork><![CDATA[
        Data Server 2
  +--------------------------------+
  | CHUNK_READ4resok               |
  +--------------------------------+
  | crr_eof: true                  |
  | crr_chunks[0]:                 |
  |     cr_crc: 0x3faddace         |
  |     cr_owner:                  |
  |         co_chunk_id: 2         |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  | crr_chunks[1]:                 |
  |     cr_crc: 0xdeade4e5         |
  |     cr_owner:                  |
  |         co_chunk_id: 3         |
  |         co_guard:              |
  |             cg_gen_id   : 0    |
  |             cg_client_id: 0    |
  |     cr_payload_id: 1           |
  |     cr_chunk: 0000...00000     |
  | crr_chunks[2]:                 |
  |     cr_crc: 0x7778abcd         |
  |     cr_owner:                  |
  |         co_chunk_id: 4         |
  |         co_guard:              |
  |             cg_gen_id   : 3    |
  |             cg_client_id: 6    |
  |     cr_payload_id: 1           |
  |     cr_chunk: ....             |
  +--------------------------------+
]]></artwork>
          </figure>
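          <t>A non-normative C sketch of how a client might classify each
returned chunk follows.  The hole test is inferred from the example
above (an all-zero guard with a data-server-generated CRC); it is
illustrative, not a normative encoding of holes.</t>
          <figure anchor="fig-ex-read-holes">
            <name>Example: Classifying Returned Chunks (Non-Normative)</name>
            <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   struct chunk_guard {
       uint32_t cg_gen_id;
       uint32_t cg_client_id;
   };

   /* Per the example: a returned chunk whose guard is all zeros is
    * a hole that the data server generated (including a valid CRC
    * over the zero payload).  Blocks beyond the last entry, with
    * crr_eof set, do not exist on this data server. */
   static bool
   chunk_is_hole(const struct chunk_guard *g)
   {
       return g->cg_gen_id == 0 && g->cg_client_id == 0;
   }
]]></sourcecode>
          </figure>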
        </section>
      </section>
      <section anchor="sec-CHUNK_REPAIRED">
        <name>Operation 84: CHUNK_REPAIRED - Confirm Repair of Errored Chunk Data</name>
        <section anchor="arguments-6">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4args">
            <name>XDR for CHUNK_REPAIRED4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_REPAIRED4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cra_stateid;
   ///     offset4         cra_offset;
   ///     count4          cra_count;
   ///     chunk_owner4    cra_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-6">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_REPAIRED4res">
            <name>XDR for CHUNK_REPAIRED4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_REPAIRED4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-6">
          <name>DESCRIPTION</name>
          <t>CHUNK_REPAIRED signals that blocks previously marked as errored
(via CHUNK_ERROR, <xref target="sec-CHUNK_ERROR"/>) have been repaired.  The
repair client writes replacement data via CHUNK_WRITE_REPAIR
(<xref target="sec-CHUNK_WRITE_REPAIR"/>), then calls CHUNK_REPAIRED to clear
the error state and make the blocks available for normal reads.</t>
          <t>The cra_offset and cra_count identify the repaired block range.
The cra_owner identifies the repair client that performed the
repair.  The data server verifies that the blocks were previously
in error and that the repair data has been written and finalized.</t>
          <t>If the blocks are not in the errored state, the operation returns
NFS4ERR_INVAL.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_ROLLBACK">
        <name>Operation 85: CHUNK_ROLLBACK - Rollback Changes on Cached Chunk Data</name>
        <section anchor="arguments-7">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4args">
            <name>XDR for CHUNK_ROLLBACK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4args {
   ///     /* CURRENT_FH: file */
   ///     offset4         cra_offset;
   ///     count4          cra_count;
   ///     chunk_owner4    cra_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-7">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_ROLLBACK4resok">
            <name>XDR for CHUNK_ROLLBACK4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_ROLLBACK4resok {
   ///     verifier4       crr_writeverf;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_ROLLBACK4res">
            <name>XDR for CHUNK_ROLLBACK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_ROLLBACK4res switch (nfsstat4 crr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_ROLLBACK4resok   crr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-7">
          <name>DESCRIPTION</name>
          <t>CHUNK_ROLLBACK reverts blocks from the PENDING or FINALIZED state
back to their previous state, effectively undoing a CHUNK_WRITE
that has not yet been committed via CHUNK_COMMIT.</t>
          <t>The cra_offset is the starting block offset and cra_count is the
number of blocks to roll back.  The cra_chunks array lists the
chunk_owner4 entries whose blocks are to be rolled back.  Each
owner's blocks at the specified offsets <bcp14>MUST</bcp14> be in the PENDING or
FINALIZED state; blocks that have already been committed via
CHUNK_COMMIT cannot be rolled back.</t>
          <t>CHUNK_ROLLBACK is used in two scenarios:</t>
          <ol spacing="normal" type="1"><li>
              <t>A client discovers an encoding error after CHUNK_WRITE and
before CHUNK_COMMIT, and needs to undo the write to try again.</t>
            </li>
            <li>
              <t>A repair client needs to undo a repair attempt that was found
to be incorrect before committing it.</t>
            </li>
          </ol>
          <t>The data server deletes the pending chunk data and restores the
block metadata to EMPTY.  If the block was in the FINALIZED state,
the persisted metadata is also removed.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_UNLOCK">
        <name>Operation 86: CHUNK_UNLOCK - Unlock Cached Chunk Data</name>
        <section anchor="arguments-8">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4args">
            <name>XDR for CHUNK_UNLOCK4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_UNLOCK4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        cua_stateid;
   ///     offset4         cua_offset;
   ///     count4          cua_count;
   ///     chunk_owner4    cua_owner;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-8">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_UNLOCK4res">
            <name>XDR for CHUNK_UNLOCK4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_UNLOCK4res switch (nfsstat4 cur_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-8">
          <name>DESCRIPTION</name>
          <t>CHUNK_UNLOCK releases the exclusive lock on the block range
previously acquired by CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>).  The
cua_owner <bcp14>MUST</bcp14> match the owner that acquired the lock; otherwise
the operation returns NFS4ERR_INVAL.</t>
          <t>If the blocks are not locked, the operation returns NFS4_OK
(idempotent).</t>
          <t>A client <bcp14>SHOULD</bcp14> release chunk locks promptly after completing
its write or repair operation.  Chunk locks are also released
implicitly when the client's lease expires.</t>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE">
        <name>Operation 87: CHUNK_WRITE - Write Chunks to File</name>
        <section anchor="arguments-9">
          <name>ARGUMENTS</name>
          <figure anchor="fig-write_chunk_guard4">
            <name>XDR for write_chunk_guard4</name>
            <sourcecode type="xdr"><![CDATA[
   /// union write_chunk_guard4 switch (bool cwg_check) {
   ///     case TRUE:
   ///         chunk_guard4   cwg_guard;
   ///     case FALSE:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4args">
            <name>XDR for CHUNK_WRITE4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// const CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY = 0x00000001;
   ///
   /// struct CHUNK_WRITE4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwa_stateid;
   ///     offset4            cwa_offset;
   ///     stable_how4        cwa_stable;
   ///     chunk_owner4       cwa_owner;
   ///     uint32_t           cwa_payload_id;
   ///     uint32_t           cwa_flags;
   ///     write_chunk_guard4 cwa_guard;
   ///     uint32_t           cwa_chunk_size;
   ///     uint32_t           cwa_crc32s<>;
   ///     opaque             cwa_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-9">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE4resok">
            <name>XDR for CHUNK_WRITE4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE4resok {
   ///     count4          cwr_count;
   ///     stable_how4     cwr_committed;
   ///     verifier4       cwr_writeverf;
   ///     nfsstat4        cwr_block_status<>;
   ///     bool            cwr_block_activated<>;
   ///     chunk_owner4    cwr_owners<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE4res">
            <name>XDR for CHUNK_WRITE4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE4res switch (nfsstat4 cwr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE4resok    cwr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-9">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE is WRITE (see Section 18.32 of <xref target="RFC8881"/>) with
additional semantics over the chunk_owner and the activation of
blocks.  As such, all of the normal semantics of WRITE directly
apply.</t>
          <t>The main difference between the two operations is that CHUNK_WRITE
works on blocks and not a raw data stream.  As such cwa_offset is
the starting block offset in the file and not the byte offset in
the file.  Some erasure coding types can have different block sizes
depending on the block type.  Further, cwr_count is a count of
written blocks and not written bytes.</t>
          <t>If cwa_stable is FILE_SYNC4, the data server <bcp14>MUST</bcp14> commit the written
header and block data plus all file system metadata to stable storage
before returning results.  This corresponds to the NFSv2 protocol
semantics.  Any other behavior constitutes a protocol violation.
If cwa_stable is DATA_SYNC4, then the data server <bcp14>MUST</bcp14> commit all
of the header and block data to stable storage and enough of the
metadata to retrieve the data before returning.  The data server
implementer is free to implement DATA_SYNC4 in the same fashion as
FILE_SYNC4, but with a possible performance drop.  If cwa_stable
is UNSTABLE4, the data server is free to commit any part of the
header and block data and the metadata to stable storage, including
all or none, before returning a reply to the client.  There is no
guarantee whether or when any uncommitted data will subsequently
be committed to stable storage.  The only guarantees made by the
data server are that it will not destroy any data without changing
the value of writeverf and that it will not commit the data and
metadata at a level less than that requested by the client.</t>
          <t>The activation of header and block data interacts with the per-block
activation state reported in cwr_block_activated for each of the
written blocks.  If the data is not committed to stable storage, then
the corresponding cwr_block_activated field <bcp14>MUST NOT</bcp14> be set to true.
Once the data is committed to stable storage, then the data server
can activate the block if one of these conditions applies:</t>
          <ul spacing="normal">
            <li>
              <t>it is the first write to that block and the
CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY flag is set</t>
            </li>
            <li>
              <t>the CHUNK_COMMIT is issued later for that block.</t>
            </li>
          </ul>
          <t>There are subtle interactions with write holes caused by racing
clients.  One client could win the race for each block, but because
it used a cwa_stable of UNSTABLE4, the subsequent writes from the
second client, with a cwa_stable of FILE_SYNC4, can be the ones
awarded an activation state of true for each of the blocks in the
payload.</t>
          <t>Finally, the interaction of cwa_stable can cause a client to
mistakenly believe that, because it received a response with
cwr_block_activated of false, the blocks are not activated.  A
subsequent CHUNK_READ or CHUNK_HEADER_READ might show that the
block has been activated without any interaction by the client via
CHUNK_COMMIT.</t>
          <section anchor="guarding-the-write">
            <name>Guarding the Write</name>
            <t>A guarded CHUNK_WRITE is one in which the writing of a block <bcp14>MUST</bcp14>
fail if cwa_guard.cwg_check is TRUE and the target chunk does not
have the same cg_gen_id as cwa_guard.cwg_guard.cg_gen_id.  This is
useful in read-update-write scenarios.  The client reads a block,
updates it, and is prepared to write it back.  It guards the write
such that if another writer has modified the block, the data server
will reject the modification.</t>
            <t>As the chunk_guard4 (see <xref target="fig-chunk_guard4"/>) does not have a
chunk_id and the CHUNK_WRITE applies to all blocks in the range from
cwa_offset to the length of cwa_chunks, each of the target blocks
<bcp14>MUST</bcp14> have the same cg_gen_id and cg_client_id.  The client <bcp14>SHOULD</bcp14>
present the smallest set of blocks possible to meet this
requirement.</t>
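            <t>A non-normative C sketch of the guard check, evaluated per target
block, follows; the struct layout is illustrative.</t>
            <figure anchor="fig-ex-guard-check">
              <name>Example: Guard Check (Non-Normative)</name>
              <sourcecode type="c"><![CDATA[
   #include <stdbool.h>
   #include <stdint.h>

   struct chunk_guard {
       uint32_t cg_gen_id;
       uint32_t cg_client_id;
   };

   struct write_chunk_guard {
       bool               cwg_check;
       struct chunk_guard cwg_guard;  /* valid when cwg_check */
   };

   /* If the caller asked for a check and the on-disk generation
    * differs from the one it read, the write fails with
    * NFS4ERR_CHUNK_GUARDED so the client can re-read and retry. */
   static bool
   guard_check_passes(const struct write_chunk_guard *g,
                      uint32_t ondisk_gen_id)
   {
       return !g->cwg_check ||
              g->cwg_guard.cg_gen_id == ondisk_gen_id;
   }
]]></sourcecode>
            </figure>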
          </section>
          <section anchor="per-block-acceptance-semantics">
            <name>Per-Block Acceptance Semantics</name>
            <t>A CHUNK_WRITE targets a contiguous range of blocks on a single
data server.  The data server evaluates each block independently
and reports the outcome per block in cwr_block_status (see
<xref target="fig-CHUNK_WRITE4resok"/>):</t>
            <ul spacing="normal">
              <li>
                <t>Each block is subjected to the guard check (when
cwa_guard.cwg_check is TRUE), the cg_client_id validation
(see <xref target="sec-chunk_guard4"/>), and any other local preconditions
(storage-space limits, tight-coupling trust-table state,
etc.).</t>
              </li>
              <li>
                <t>Blocks that pass their preconditions are written and their
cwr_block_status entry is NFS4_OK.  Blocks that fail produce
the appropriate error code
(NFS4ERR_CHUNK_GUARDED, NFS4ERR_NOSPC, etc.) in the
corresponding cwr_block_status slot, and their data is
NOT persisted.</t>
              </li>
              <li>
                <t>cwr_count reflects only the blocks that were written
successfully; failed blocks do not contribute.</t>
              </li>
              <li>
                <t>The top-level cwr_status is NFS4_OK when the call itself was
structurally valid and the data server could evaluate each
block.  Per-block failures are reported in cwr_block_status,
not by failing the whole operation.  The data server returns
a top-level error only if it could not evaluate the request
at all (for example, NFS4ERR_BADXDR, NFS4ERR_SERVERFAULT).</t>
              </li>
            </ul>
            <t>This is the "continue and report" discipline.  It is
intentionally not all-or-none: atomicity is already per-chunk
(see <xref target="sec-system-model-consistency"/>), so there is no
file-level correctness reason to reject the entire compound
because of a single chunk guard failure.  Per-block reporting
gives the client the information it needs to construct a
targeted CHUNK_ROLLBACK or CHUNK_WRITE retry that covers only
the blocks that failed.</t>
            <t>The data server does not hold a file-wide lock across the
per-block evaluation.  The chunk_guard4 CAS is evaluated
atomically per chunk at the point the data server updates that
chunk's state, so an interleaving CHUNK_WRITE from a different
client that arrives mid-compound will either win its own CAS
race (and the losing client sees NFS4ERR_CHUNK_GUARDED for the
contested block) or be rejected itself, without introducing
data-server-level locking beyond the per-chunk scope.</t>
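            <t>The continue-and-report discipline might look like the following
non-normative C sketch.  The precondition hook stands in for the
guard, client-id, and space checks named above; all names are
illustrative.</t>
            <figure anchor="fig-ex-per-block">
              <name>Example: Continue-and-Report Loop (Non-Normative)</name>
              <sourcecode type="c"><![CDATA[
   #include <stddef.h>
   #include <stdint.h>

   enum status { ST_OK, ST_CHUNK_GUARDED, ST_NOSPC };

   /* Trivial stand-ins for the per-block checks and for persisting
    * a block; a real data server supplies these. */
   static enum status
   block_preconditions(uint64_t blk) { (void)blk; return ST_OK; }

   static void
   block_persist(uint64_t blk) { (void)blk; }

   /* Evaluate each block independently, record a per-block outcome
    * (cwr_block_status), and count only successes (cwr_count).  No
    * file-wide lock is held across the loop; each guard CAS is
    * atomic at single-chunk scope. */
   static uint32_t
   chunk_write_blocks(uint64_t first_blk, size_t nblocks,
                      enum status *block_status)
   {
       uint32_t written = 0;

       for (size_t i = 0; i < nblocks; i++) {
           block_status[i] = block_preconditions(first_blk + i);
           if (block_status[i] == ST_OK) {
               block_persist(first_blk + i);
               written++;  /* failed blocks are not persisted */
           }
       }
       return written;
   }
]]></sourcecode>
            </figure>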
          </section>
        </section>
      </section>
      <section anchor="sec-CHUNK_WRITE_REPAIR">
        <name>Operation 88: CHUNK_WRITE_REPAIR - Write Repaired Cached Chunk Data</name>
        <section anchor="arguments-10">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4args">
            <name>XDR for CHUNK_WRITE_REPAIR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4           cwra_stateid;
   ///     offset4            cwra_offset;
   ///     stable_how4        cwra_stable;
   ///     chunk_owner4       cwra_owner;
   ///     uint32_t           cwra_payload_id;
   ///     uint32_t           cwra_chunk_size;
   ///     uint32_t           cwra_crc32s<>;
   ///     opaque             cwra_chunks<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-10">
          <name>RESULTS</name>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4resok">
            <name>XDR for CHUNK_WRITE_REPAIR4resok</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CHUNK_WRITE_REPAIR4resok {
   ///     count4          cwrr_count;
   ///     stable_how4     cwrr_committed;
   ///     verifier4       cwrr_writeverf;
   ///     nfsstat4        cwrr_status<>;
   /// };
]]></sourcecode>
          </figure>
          <figure anchor="fig-CHUNK_WRITE_REPAIR4res">
            <name>XDR for CHUNK_WRITE_REPAIR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union CHUNK_WRITE_REPAIR4res switch (nfsstat4 cwrr_status) {
   ///     case NFS4_OK:
   ///         CHUNK_WRITE_REPAIR4resok   cwrr_resok4;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-10">
          <name>DESCRIPTION</name>
          <t>CHUNK_WRITE_REPAIR has the same semantics as CHUNK_WRITE
(<xref target="sec-CHUNK_WRITE"/>) but is used specifically for writing
reconstructed chunk data to a replacement data server during
repair operations.</t>
          <t>The repair workflow is as follows (a non-normative sketch in C
appears after the list):</t>
          <ol spacing="normal" type="1"><li>
              <t>The repair client reads surviving chunks from the remaining
data servers via CHUNK_READ.</t>
            </li>
            <li>
              <t>The client reconstructs the missing chunks using the erasure
coding algorithm (RS matrix inversion or Mojette corner-peeling).</t>
            </li>
            <li>
              <t>The client acquires a CHUNK_LOCK (<xref target="sec-CHUNK_LOCK"/>) on the
target data server to prevent concurrent writes during repair.</t>
            </li>
            <li>
              <t>The client writes the reconstructed data via CHUNK_WRITE_REPAIR.</t>
            </li>
            <li>
              <t>The client calls CHUNK_FINALIZE and CHUNK_COMMIT to persist
the repair.</t>
            </li>
            <li>
              <t>The client calls CHUNK_REPAIRED (<xref target="sec-CHUNK_REPAIRED"/>) to
clear the error state.</t>
            </li>
            <li>
              <t>The client releases the lock via CHUNK_UNLOCK (<xref target="sec-CHUNK_UNLOCK"/>).</t>
            </li>
          </ol>
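          <t>The following non-normative C sketch strings the seven steps
together.  Each helper stands for one (or one pair of) RPCs to the
named operation; arguments, retries, and error reporting are elided,
and none of the C names are defined by this document.</t>
          <figure anchor="fig-ex-repair-flow">
            <name>Sketch of the repair workflow (non-normative)</name>
            <sourcecode type="c"><![CDATA[
struct repair_ctx;                              /* opaque repair state */

/* Hypothetical RPC stubs, one per protocol step; bodies elided. */
int chunk_read_surviving(struct repair_ctx *);  /* CHUNK_READ          */
int reconstruct_missing(struct repair_ctx *);   /* RS or Mojette       */
int chunk_lock(struct repair_ctx *);            /* CHUNK_LOCK          */
int chunk_write_repair(struct repair_ctx *);    /* CHUNK_WRITE_REPAIR  */
int chunk_finalize_commit(struct repair_ctx *); /* FINALIZE + COMMIT   */
int chunk_repaired(struct repair_ctx *);        /* CHUNK_REPAIRED      */
int chunk_unlock(struct repair_ctx *);          /* CHUNK_UNLOCK        */

static int
repair_chunks(struct repair_ctx *ctx)
{
    int rc;

    if ((rc = chunk_read_surviving(ctx)))   return rc;   /* step 1 */
    if ((rc = reconstruct_missing(ctx)))    return rc;   /* step 2 */
    if ((rc = chunk_lock(ctx)))             return rc;   /* step 3 */
    if ((rc = chunk_write_repair(ctx)))     goto unlock; /* step 4 */
    if ((rc = chunk_finalize_commit(ctx)))  goto unlock; /* step 5 */
    rc = chunk_repaired(ctx);                            /* step 6 */
unlock:
    chunk_unlock(ctx);                                   /* step 7 */
    return rc;
}
]]></sourcecode>
          </figure>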
          <t>CHUNK_WRITE_REPAIR is distinguished from CHUNK_WRITE to allow the
data server to apply different policies to repair writes (e.g.,
bypassing guard checks, logging repair activity, or prioritizing
repair I/O).  The CRC32 validation on the repair data follows the
same rules as CHUNK_WRITE.</t>
          <t>The target blocks <bcp14>SHOULD</bcp14> be in the errored state (set by
CHUNK_ERROR) or in the EMPTY state.  If the blocks are in the
COMMITTED state with valid data, the data server <bcp14>MAY</bcp14> reject the
repair to prevent overwriting good data.</t>
        </section>
      </section>
      <section anchor="sec-TRUST_STATEID">
        <name>Operation 90: TRUST_STATEID - Register Layout Stateid on Data Server</name>
        <section anchor="arguments-11">
          <name>ARGUMENTS</name>
          <figure anchor="fig-TRUST_STATEID4args">
            <name>XDR for TRUST_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct TRUST_STATEID4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        tsa_layout_stateid;
   ///     layoutiomode4   tsa_iomode;
   ///     nfstime4        tsa_expire;
   ///     utf8str_cs      tsa_principal;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-11">
          <name>RESULTS</name>
          <figure anchor="fig-TRUST_STATEID4res">
            <name>XDR for TRUST_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union TRUST_STATEID4res switch (nfsstat4 tsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-11">
          <name>DESCRIPTION</name>
          <t>TRUST_STATEID registers a layout stateid with the data server so
that subsequent CHUNK operations presenting that stateid can be
validated against the data server's per-file trust table.  It is
the mechanism by which tight coupling (see
<xref target="sec-tight-coupling-control"/>) is established between the
metadata server and the data server for a particular layout.</t>
          <t>TRUST_STATEID operates on the current filehandle; a PUTFH naming
the data server's file <bcp14>MUST</bcp14> precede it in the same compound.</t>
          <t>tsa_layout_stateid is the stateid the metadata server issued in
the LAYOUTGET that produced this layout.  It <bcp14>MUST NOT</bcp14> be a special
stateid (anonymous, invalid, read-bypass, or current).  The sole
exception is the capability probe described in
<xref target="sec-tight-coupling-probe"/>: when the metadata server sends
TRUST_STATEID with tsa_layout_stateid set to the anonymous stateid
against the root filehandle, the data server <bcp14>MUST</bcp14> reject the
request with NFS4ERR_INVAL.  That rejection is the positive
response to the probe.</t>
          <t>tsa_iomode is the iomode of the layout (LAYOUTIOMODE4_READ or
LAYOUTIOMODE4_RW).  The data server <bcp14>MAY</bcp14> enforce this against the
CHUNK operation presented: a READ-iomode trust entry does not
authorize CHUNK_WRITE.</t>
          <t>tsa_expire is the absolute wall-clock time at which the trust
entry becomes invalid if not renewed.  See
<xref target="sec-tight-coupling-lease"/>.  The data server <bcp14>MUST</bcp14> reject a
TRUST_STATEID whose tsa_expire has an nseconds value &gt;= 10^9 with
NFS4ERR_INVAL.</t>
          <t>tsa_principal is the client's authenticated identity as verified
by the metadata server at LAYOUTGET time.  For RPCSEC_GSS clients
this is the GSS display name (e.g., "alice@REALM").  For AUTH_SYS
and TLS clients, tsa_principal <bcp14>MUST</bcp14> be the empty string,
indicating that no principal binding is enforced on subsequent
CHUNK operations.  See <xref target="sec-tight-coupling-principal"/>.</t>
          <t>If the data server receives TRUST_STATEID on a session whose
owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.  The data
server <bcp14>MUST NOT</bcp14> process TRUST_STATEID on a regular client
session.</t>
          <t>If a trust entry already exists for the same tsa_layout_stateid
on the same current filehandle, TRUST_STATEID atomically updates
tsa_expire and tsa_principal; this is the renewal path (see
<xref target="sec-tight-coupling-lease"/>).</t>
          <t>At registration time, the data server tags the new trust entry
with the identity of the metadata server -- derived from the
clientid of the owning client of the control session on which
TRUST_STATEID arrived.  This tag is consulted by REVOKE_STATEID
and BULK_REVOKE_STATEID to ensure that revocation only affects
entries registered by the same metadata server (see
<xref target="sec-BULK_REVOKE_STATEID"/>).  In a multi-metadata-server
deployment sharing a single data server, each metadata server
registers and revokes only its own entries; the tag is opaque to
pNFS clients and is not carried on the wire.</t>
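          <t>As a non-normative illustration, the following C sketch shows one
possible shape for a data server's trust-table registration, including
the renewal path and the nseconds sanity check.  All names and the
table representation are hypothetical; per-filehandle scoping is
elided.</t>
          <figure anchor="fig-ex-trust-register">
            <name>Sketch of trust-table registration and renewal (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical in-memory trust-table entry.  Field names follow the
 * TRUST_STATEID arguments, but this layout is an implementation
 * detail, not wire format. */
struct trust_entry {
    unsigned char stateid[16];    /* tsa_layout_stateid              */
    int           iomode;         /* tsa_iomode                      */
    int64_t       expire_sec;     /* tsa_expire.seconds              */
    uint32_t      expire_nsec;    /* tsa_expire.nseconds             */
    char          principal[256]; /* tsa_principal ("" = no binding) */
    uint64_t      mds_tag;        /* clientid of the registering MDS */
};

/* Register or renew: a malformed tsa_expire is rejected up front;
 * a matching (stateid, MDS) pair is updated in place (the renewal
 * path), otherwise a new entry is appended. */
static bool
trust_register(struct trust_entry *tbl, size_t cap, size_t *used,
               const struct trust_entry *e)
{
    if (e->expire_nsec >= 1000000000u)
        return false;                           /* NFS4ERR_INVAL */
    for (size_t i = 0; i < *used; i++) {
        if (memcmp(tbl[i].stateid, e->stateid, 16) == 0 &&
            tbl[i].mds_tag == e->mds_tag) {
            tbl[i] = *e;                        /* atomic renewal */
            return true;
        }
    }
    if (*used == cap)
        return false;                           /* NFS4ERR_DELAY */
    tbl[(*used)++] = *e;
    return true;
}
]]></sourcecode>
          </figure>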
        </section>
        <section anchor="response-codes">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the trust entry is registered (or updated).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_BAD_STATEID: tsa_layout_stateid was a special stateid
other than the anonymous stateid on the root filehandle.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request; the metadata server <bcp14>SHOULD</bcp14> retry.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: tsa_layout_stateid was the anonymous stateid
and the current filehandle is the root filehandle (the
capability probe); tsa_expire is malformed; or the current
filehandle is a directory (except in the capability-probe case).</t>
            </li>
            <li>
              <t>NFS4ERR_NOFILEHANDLE: no current filehandle is set.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
TRUST_STATEID.  This is the capability-probe response (see
<xref target="sec-tight-coupling-probe"/>).</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="sec-REVOKE_STATEID">
        <name>Operation 91: REVOKE_STATEID - Revoke Registered Stateid on Data Server</name>
        <section anchor="arguments-12">
          <name>ARGUMENTS</name>
          <figure anchor="fig-REVOKE_STATEID4args">
            <name>XDR for REVOKE_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct REVOKE_STATEID4args {
   ///     /* CURRENT_FH: file */
   ///     stateid4        rsa_layout_stateid;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-12">
          <name>RESULTS</name>
          <figure anchor="fig-REVOKE_STATEID4res">
            <name>XDR for REVOKE_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union REVOKE_STATEID4res switch (nfsstat4 rsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-12">
          <name>DESCRIPTION</name>
          <t>REVOKE_STATEID invalidates a single trust entry on the data
server.  Subsequent CHUNK operations that present the revoked
stateid <bcp14>MUST</bcp14> fail with NFS4ERR_BAD_STATEID.</t>
          <t>The metadata server calls REVOKE_STATEID in any of the following
situations:</t>
          <ul spacing="normal">
            <li>
              <t>CB_LAYOUTRECALL timeout: the client did not return the layout
within the recall timeout.  REVOKE_STATEID terminates the
client's ability to issue further I/O to the data server
without waiting for tsa_expire.</t>
            </li>
            <li>
              <t>LAYOUTERROR with NFS4ERR_ACCESS or NFS4ERR_PERM: the data
server rejected the client's I/O; the trust entry is stale
and must be removed.  This mirrors the fencing case in the
loose-coupled model.</t>
            </li>
            <li>
              <t>Explicit LAYOUTRETURN: the client returned the layout cleanly.
The metadata server <bcp14>MAY</bcp14> issue REVOKE_STATEID at this time or
<bcp14>MAY</bcp14> rely on tsa_expire; either is correct.</t>
            </li>
          </ul>
          <t>REVOKE_STATEID operates on the current filehandle; a PUTFH naming
the data server's file <bcp14>MUST</bcp14> precede it in the same compound.  The
filehandle and rsa_layout_stateid together identify the trust
entry to revoke.</t>
          <t>In-flight CHUNK operations that arrived before REVOKE_STATEID
completes <bcp14>MAY</bcp14> be allowed to finish.  The data server <bcp14>MUST NOT</bcp14>
process new CHUNK operations presenting rsa_layout_stateid after
REVOKE_STATEID returns.</t>
          <t>Lock state (see <xref target="sec-CHUNK_LOCK"/>) held by the revoked stateid
is NOT released as part of REVOKE_STATEID; the data server <bcp14>MUST</bcp14>
transfer each held lock to the MDS-escrow owner (see
<xref target="sec-chunk_guard_mds"/>).  Dropping a chunk lock during
revocation would permit a write hole and is prohibited; the
repair coordination sequence in <xref target="sec-repair-selection"/> assumes
that locks held by a revoked writer remain held until a repair
client adopts them via CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT.</t>
          <t>If the data server receives REVOKE_STATEID on a session whose
owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.</t>
          <t>REVOKE_STATEID is scoped to the issuing metadata server's entries
(see the tagging rule in <xref target="sec-TRUST_STATEID"/>).  The data server
<bcp14>MUST NOT</bcp14> remove an entry that was registered by a different
metadata server, even if rsa_layout_stateid happens to match.  In
a multi-metadata-server deployment, one metadata server therefore
cannot revoke another metadata server's entries.</t>
          <t>REVOKE_STATEID is idempotent: revoking a stateid that has no
matching trust entry (either no entry exists, or the entry was
registered by a different metadata server) returns NFS4_OK.  The
metadata server therefore does not need to track precisely which
entries are currently live on which data server in order to revoke
safely.</t>
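          <t>A minimal, non-normative sketch of the revoke path follows.  The
helper names are hypothetical; the point is the combination of
MDS-tag scoping, idempotency, and lock transfer to escrow rather than
lock release.</t>
          <figure anchor="fig-ex-revoke">
            <name>Sketch of single-stateid revocation (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stdint.h>

#define NFS4_OK 0

struct trust_table;                                /* opaque      */
struct trust_entry { uint64_t mds_tag; /* ... */ };

/* Illustrative helpers; bodies elided. */
struct trust_entry *trust_lookup(struct trust_table *,
                                 const unsigned char stateid[16]);
void locks_transfer_to_escrow(struct trust_table *,
                              struct trust_entry *);
void trust_remove(struct trust_table *, struct trust_entry *);

/* Revocation is scoped to the issuing MDS's own entries, is a no-op
 * (NFS4_OK) when nothing matches, and moves held chunk locks to the
 * MDS-escrow owner instead of dropping them. */
static int
revoke_stateid(struct trust_table *t, const unsigned char stateid[16],
               uint64_t issuing_mds_tag)
{
    struct trust_entry *e = trust_lookup(t, stateid);

    if (e == NULL || e->mds_tag != issuing_mds_tag)
        return NFS4_OK;                 /* idempotent: nothing to do */
    locks_transfer_to_escrow(t, e);     /* never drop chunk locks    */
    trust_remove(t, e);
    return NFS4_OK;
}
]]></sourcecode>
          </figure>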
        </section>
        <section anchor="response-codes-1">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the trust entry was removed, or no matching entry
existed (idempotent).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_BAD_STATEID: rsa_layout_stateid was a special stateid.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request.</t>
            </li>
            <li>
              <t>NFS4ERR_INVAL: rsa_layout_stateid was the anonymous stateid.</t>
            </li>
            <li>
              <t>NFS4ERR_NOFILEHANDLE: no current filehandle is set.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
REVOKE_STATEID.</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
      <section anchor="sec-BULK_REVOKE_STATEID">
        <name>Operation 92: BULK_REVOKE_STATEID - Revoke All Stateids for a Client</name>
        <section anchor="arguments-13">
          <name>ARGUMENTS</name>
          <figure anchor="fig-BULK_REVOKE_STATEID4args">
            <name>XDR for BULK_REVOKE_STATEID4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct BULK_REVOKE_STATEID4args {
   ///     clientid4       brsa_clientid;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-13">
          <name>RESULTS</name>
          <figure anchor="fig-BULK_REVOKE_STATEID4res">
            <name>XDR for BULK_REVOKE_STATEID4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// union BULK_REVOKE_STATEID4res switch (nfsstat4 brsr_status) {
   ///     case NFS4_OK:
   ///         void;
   ///     default:
   ///         void;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-13">
          <name>DESCRIPTION</name>
          <t>BULK_REVOKE_STATEID removes every trust entry on the data server
that was registered on behalf of the named client.  The data
server applies this as a scan over its trust table.</t>
          <t>The metadata server calls BULK_REVOKE_STATEID in any of the
following situations:</t>
          <ul spacing="normal">
            <li>
              <t>Client lease expiry: when a client's lease on the metadata
server expires, the metadata server revokes all of that
client's layouts.  A single BULK_REVOKE_STATEID replaces the N
per-file REVOKE_STATEID compounds that per-entry revocation
would require.</t>
            </li>
            <li>
              <t>CB_LAYOUTRECALL with LAYOUTRECALL4_ALL: the metadata server is
recalling all layouts for a client.  BULK_REVOKE_STATEID is the
data-server-side complement.</t>
            </li>
            <li>
              <t>Metadata server restart cleanup: after the metadata server
reconnects to a data server, it <bcp14>MAY</bcp14> issue
BULK_REVOKE_STATEID(brsa_clientid = all-zeros) to clear the
prior trust table before re-issuing TRUST_STATEID as clients
reclaim.  See <xref target="sec-tight-coupling-mds-crash"/>.</t>
            </li>
          </ul>
          <t>BULK_REVOKE_STATEID is scoped to the issuing metadata server's
entries (see the tagging rule in <xref target="sec-TRUST_STATEID"/>).  The
data server <bcp14>MUST NOT</bcp14> affect entries registered by a different
metadata server.  Consequently, in a multi-metadata-server
deployment sharing a single data server, one metadata server
cannot clear another metadata server's entries via
BULK_REVOKE_STATEID.</t>
          <t>The special brsa_clientid value of zero (all bits zero) means
"revoke every entry owned by the issuing metadata server,
regardless of which pNFS client registered it".  The data server
<bcp14>MUST</bcp14> interpret this value as a clear of the issuing metadata
server's entries only and <bcp14>MUST NOT</bcp14> treat it either as naming the
pNFS client whose clientid happens to be zero or as a global table
clear across metadata servers.</t>
          <t>BULK_REVOKE_STATEID does not operate on the current filehandle;
no PUTFH is required in the compound.</t>
          <t>If the data server receives BULK_REVOKE_STATEID on a session
whose owning client did not present EXCHGID4_FLAG_USE_PNFS_MDS at
EXCHANGE_ID, the data server <bcp14>MUST</bcp14> return NFS4ERR_PERM.</t>
          <t>Like REVOKE_STATEID, BULK_REVOKE_STATEID is idempotent (no error
is returned if there are no matching entries) and preserves chunk
locks held under any revoked stateid by transferring them to the
MDS-escrow owner (see <xref target="sec-chunk_guard_mds"/>), rather than
dropping them.</t>
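          <t>The following non-normative C sketch shows the scan, assuming
each trust entry records the pNFS client on whose behalf it was
registered (an implementation choice, not wire format).  The
all-zeros clientid matches every entry owned by the issuing metadata
server and nothing else.</t>
          <figure anchor="fig-ex-bulk-revoke">
            <name>Sketch of the bulk-revocation scan (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#include <stddef.h>
#include <stdint.h>

#define NFS4_OK 0

/* Simplified entry: assumes each registration records the pNFS
 * client on whose behalf the metadata server registered it. */
struct bulk_entry { int live; uint64_t mds_tag; uint64_t client_id; };

static int
bulk_revoke(struct bulk_entry *tbl, size_t n,
            uint64_t issuing_mds_tag, uint64_t brsa_clientid)
{
    for (size_t i = 0; i < n; i++) {
        if (!tbl[i].live || tbl[i].mds_tag != issuing_mds_tag)
            continue;           /* never touch another MDS's entries */
        if (brsa_clientid != 0 && tbl[i].client_id != brsa_clientid)
            continue;           /* zero matches every owned entry    */
        /* transfer chunk locks to the MDS-escrow owner here, then:  */
        tbl[i].live = 0;
    }
    return NFS4_OK;             /* idempotent by construction        */
}
]]></sourcecode>
          </figure>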
        </section>
        <section anchor="response-codes-2">
          <name>RESPONSE CODES</name>
          <ul spacing="normal">
            <li>
              <t>NFS4_OK: the matching entries were removed, or there were
none (idempotent).</t>
            </li>
            <li>
              <t>NFS4ERR_BADXDR: arguments could not be decoded.</t>
            </li>
            <li>
              <t>NFS4ERR_DELAY: the data server is temporarily unable to process
the request.</t>
            </li>
            <li>
              <t>NFS4ERR_NOTSUPP: the data server does not implement
BULK_REVOKE_STATEID.</t>
            </li>
            <li>
              <t>NFS4ERR_PERM: the request arrived on a session whose owning
client did not present EXCHGID4_FLAG_USE_PNFS_MDS.</t>
            </li>
            <li>
              <t>NFS4ERR_SERVERFAULT: the data server failed while processing
the request.</t>
            </li>
          </ul>
        </section>
      </section>
    </section>
    <section anchor="new-nfsv42-callback-operations">
      <name>New NFSv4.2 Callback Operations</name>
      <figure anchor="fig-cb-ops-xdr">
        <name>Callback Operations XDR</name>
        <sourcecode type="xdr"><![CDATA[
   ///
   /// /* New callback operations for Erasure Coding start here */
   ///
   ///  OP_CB_CHUNK_REPAIR     = 16,
   ///
]]></sourcecode>
      </figure>
      <t>The following amendment blocks extend the nfs_cb_argop4 and
nfs_cb_resop4 dispatch unions defined in <xref target="RFC7863"/> with arms
for the new callback operation defined in this document.</t>
      <figure anchor="fig-nfs_cb_argop4-amend">
        <name>nfs_cb_argop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_cb_argop4 amendment block */
   ///
   /// case OP_CB_CHUNK_REPAIR: CB_CHUNK_REPAIR4args opcbchunkrepair;
]]></sourcecode>
      </figure>
      <figure anchor="fig-nfs_cb_resop4-amend">
        <name>nfs_cb_resop4 amendment block</name>
        <sourcecode type="xdr"><![CDATA[
   /// /* nfs_cb_resop4 amendment block */
   ///
   /// case OP_CB_CHUNK_REPAIR: CB_CHUNK_REPAIR4res opcbchunkrepair;
]]></sourcecode>
      </figure>
      <section anchor="sec-CB_CHUNK_REPAIR">
        <name>Callback Operation 16: CB_CHUNK_REPAIR - Request Repair of Inconsistent Chunk Ranges</name>
        <section anchor="arguments-14">
          <name>ARGUMENTS</name>
          <figure anchor="fig-CB_CHUNK_REPAIR4args">
            <name>XDR for CB_CHUNK_REPAIR4args</name>
            <sourcecode type="xdr"><![CDATA[
   /// enum cb_chunk_repair_reason4 {
   ///     CB_REPAIR_REASON_RACE  = 1,
   ///     CB_REPAIR_REASON_SCRUB = 2
   /// };
   ///
   /// struct cb_chunk_range4 {
   ///     offset4         ccr_offset;
   ///     count4          ccr_count;
   ///     nfsstat4        ccr_error;
   /// };
   ///
   /// struct CB_CHUNK_REPAIR4args {
   ///     nfs_fh4                     ccra_fh;
   ///     stateid4                    ccra_layout_stateid;
   ///     nfstime4                    ccra_deadline;
   ///     cb_chunk_repair_reason4     ccra_reason;
   ///     cb_chunk_range4             ccra_ranges<>;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="results-14">
          <name>RESULTS</name>
          <figure anchor="fig-CB_CHUNK_REPAIR4res">
            <name>XDR for CB_CHUNK_REPAIR4res</name>
            <sourcecode type="xdr"><![CDATA[
   /// struct CB_CHUNK_REPAIR4res {
   ///     nfsstat4           ccrr_status;
   /// };
]]></sourcecode>
          </figure>
        </section>
        <section anchor="description-14">
          <name>DESCRIPTION</name>
          <t>CB_CHUNK_REPAIR is sent by the metadata server to request that
a selected client repair one or more inconsistent chunk ranges.
Selection follows the rules in <xref target="sec-repair-selection"/>; those
rules are normative for how the client <bcp14>MUST</bcp14> respond on receipt
of this callback.</t>
          <t>The ccra_fh identifies the file whose chunks are inconsistent.
The callback compound carries the filehandle directly; there is
no preceding PUTFH in callback compounds.</t>
          <t>The ccra_layout_stateid carries the recipient client's current
layout stateid for the file if one is held.  A client that does
not hold a layout on ccra_fh <bcp14>MUST</bcp14> ignore ccra_layout_stateid
(it will be the anonymous stateid) and <bcp14>MUST</bcp14> acquire one via
LAYOUTGET before issuing any CHUNK operation on the ranges.</t>
          <t>The ccra_deadline is a wall-clock nfstime4 (seconds and
nanoseconds since the epoch, as defined in Section 3.3.1 of
<xref target="RFC8881"/>) by which the client is expected to have driven every
range to completion (CHUNK_REPAIRED on the reconstruction path,
or CHUNK_UNLOCK on the rollback path).  Missing the deadline
does not corrupt state -- the metadata server <bcp14>MAY</bcp14> re-select
another repair client after the deadline elapses -- but a
client that has missed the deadline <bcp14>MUST</bcp14> re-verify its layout
and the chunk lock state before continuing any repair-related
CHUNK operation.</t>
          <t>The ccra_reason distinguishes the two flows that cause the
metadata server to issue a repair callback:</t>
          <dl>
            <dt>CB_REPAIR_REASON_RACE:</dt>
            <dd>
              <t>A live-race repair.  A client (not necessarily the recipient
of this callback) detected a chunk-level inconsistency at
write or read time and reported it via LAYOUTERROR.  The
metadata server is driving repair synchronously because the
affected chunk is on the critical path of some I/O.  The
recipient <bcp14>SHOULD</bcp14> prioritize the callback over background
work.</t>
            </dd>
            <dt>CB_REPAIR_REASON_SCRUB:</dt>
            <dd>
              <t>A background scrub.  The metadata server has detected stale
or inconsistent payloads during a scheduled integrity sweep
and is opportunistically driving repair.  No client is
currently blocked on these ranges.  The recipient <bcp14>MAY</bcp14>
schedule the callback at lower priority than
CB_REPAIR_REASON_RACE, and <bcp14>MAY</bcp14> return NFS4ERR_DELAY to defer
repair to a more convenient time; the metadata server will
retry.</t>
            </dd>
          </dl>
          <t>The two reasons share all other semantics: the same ccra_ranges
encoding, the same response codes, and the same deadline contract.
Only the priority and retry behavior differs.</t>
          <t>The ccra_ranges array lists every chunk range the metadata
server requests the client to repair.  Each entry carries its
own ccr_error describing the failure mode the client is being
asked to remedy.  The repair strategy depends on the error code;
see <xref target="sec-repair-selection"/> for the normative and guidance
split.</t>
          <t>The metadata server <bcp14>SHOULD</bcp14> keep each CB_CHUNK_REPAIR compound
within the back-channel maximum (ca_maxrequestsize) negotiated
in CREATE_SESSION (see Section 18.36.3 of <xref target="RFC8881"/>).  If the
set of affected ranges would exceed that maximum, the metadata
server <bcp14>MAY</bcp14> issue multiple CB_CHUNK_REPAIR callbacks to the same
client.  Each callback is independent; the client drives each
to completion before the deadline on that callback's ranges.</t>
          <t>The fact that a range appears in ccra_ranges implies the data
server holds a chunk lock on the range (the failure occurred in
or around a PENDING or FINALIZED state that established the
lock).  The repair client <bcp14>MUST</bcp14> use CHUNK_LOCK with
CHUNK_LOCK_FLAGS_ADOPT (<xref target="sec-CHUNK_LOCK"/>) to take ownership
of the lock before issuing CHUNK_WRITE_REPAIR, CHUNK_ROLLBACK,
or CHUNK_WRITE on any chunk in a requested range.</t>
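          <t>A minimal, non-normative sketch of a client-side dispatch for
this callback follows.  The policy hook and repair driver are
hypothetical; per the response codes below, NFS4_OK is returned only
after every range has been driven to completion.</t>
          <figure anchor="fig-ex-cb-dispatch">
            <name>Sketch of CB_CHUNK_REPAIR dispatch on the client (non-normative)</name>
            <sourcecode type="c"><![CDATA[
#define NFS4_OK       0
#define NFS4ERR_DELAY 10008

enum cb_chunk_repair_reason4 {
    CB_REPAIR_REASON_RACE  = 1,
    CB_REPAIR_REASON_SCRUB = 2
};

struct repair_request;               /* decoded CB_CHUNK_REPAIR4args */

/* Illustrative policy hook and repair driver; the driver adopts the
 * locks (CHUNK_LOCK_FLAGS_ADOPT) and runs every range to
 * CHUNK_REPAIRED or CHUNK_UNLOCK before returning. */
int  repair_is_inconvenient_now(void);
void adopt_locks_and_repair(struct repair_request *);

static int
cb_chunk_repair(struct repair_request *req,
                enum cb_chunk_repair_reason4 reason)
{
    /* Only a scrub may be deferred; a race repair sits on some
     * client's critical path and preempts background work. */
    if (reason == CB_REPAIR_REASON_SCRUB && repair_is_inconvenient_now())
        return NFS4ERR_DELAY;        /* the MDS will retry later */
    adopt_locks_and_repair(req);
    return NFS4_OK;                  /* all ranges driven to done */
}
]]></sourcecode>
          </figure>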
        </section>
        <section anchor="response-codes-3">
          <name>Response Codes</name>
          <t>The ccrr_status value returned by the client has the following
normative meanings to the metadata server:</t>
          <dl>
            <dt>NFS4_OK</dt>
            <dd>
              <t>The client has accepted the request and driven every range in
this callback to completion (CHUNK_REPAIRED or CHUNK_UNLOCK on
every affected chunk).  The metadata server clears the repair
queue entry.</t>
            </dd>
            <dt>NFS4ERR_DELAY</dt>
            <dd>
              <t>The client has accepted the request but requires more time.
The metadata server <bcp14>MAY</bcp14> extend the deadline by issuing a new
CB_CHUNK_REPAIR with a later ccra_deadline, or <bcp14>MAY</bcp14> re-select
another client.  The client continues to hold any locks it has
adopted until the original or extended deadline.</t>
            </dd>
            <dt>NFS4ERR_CODING_NOT_SUPPORTED</dt>
            <dd>
              <t>The client does not implement the encoding type of the layout
and cannot reconstruct.  The metadata server <bcp14>MUST NOT</bcp14> retry with
the same client and <bcp14>SHOULD</bcp14> select a different client.</t>
            </dd>
            <dt>NFS4ERR_PAYLOAD_LOST</dt>
            <dd>
              <t>The client has concluded that the identified ranges cannot
be repaired -- there are not enough surviving shards to
reconstruct and rollback is also impossible.  The metadata
server <bcp14>MUST NOT</bcp14> retry the repair and transitions the affected
ranges into an implementation-defined damaged state.  See
<xref target="sec-NFS4ERR_PAYLOAD_LOST"/>.</t>
            </dd>
          </dl>
          <t>All other error codes listed in <xref target="tbl-cb-ops-and-errors"/> are
treated by the metadata server as retriable: the metadata server
<bcp14>MAY</bcp14> issue a subsequent CB_CHUNK_REPAIR to the same or a
different client.  If the client becomes unreachable (no
response within the deadline), the metadata server re-selects
per <xref target="sec-repair-selection"/>.</t>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The combination of components in a pNFS system is required to
preserve the security properties of NFSv4.1+ with respect to an
entity accessing data via a client.  The pNFS feature partitions
the NFSv4.1+ file system protocol into two parts: the control
protocol and the data protocol.  As the control protocol in this
document is NFS, the security properties are equivalent to the
version of NFS being used.  The flexible file layout further divides
the data protocol into metadata and data paths.  The security
properties of the metadata path are equivalent to those of NFSv4.1+
(see Sections 1.7.1 and 2.2.1 of <xref target="RFC8881"/>).  The security
properties of the data path are equivalent to those of the version
of NFS used to access the storage device, with the provision that
the metadata server is responsible for authenticating client access
to the data file.  The metadata server provides appropriate credentials
to the client to access data files on the storage device.  It is
also responsible for revoking access for a client to the storage
device.</t>
      <t>The metadata server enforces the file access control policy at
LAYOUTGET time.  The client <bcp14>MUST</bcp14> use RPC authorization credentials
for getting the layout for the requested iomode (LAYOUTIOMODE4_READ
or LAYOUTIOMODE4_RW), and the server verifies the permissions and
ACL for these credentials, possibly returning NFS4ERR_ACCESS if the
client is not allowed the requested iomode.  If the LAYOUTGET
operation succeeds, the client receives, as part of the layout, a
set of credentials allowing it I/O access to the specified data
files corresponding to the requested iomode.  When the client acts
on I/O operations on behalf of its local users, it <bcp14>MUST</bcp14> authenticate
and authorize the user by issuing respective OPEN and ACCESS calls
to the metadata server, similar to having NFSv4 data delegations.</t>
      <t>The combination of filehandle, synthetic uid, and gid in the layout
is the way that the metadata server enforces access control to the
data server.  The client only has access to filehandles of file
objects and not directory objects.  Thus, given a filehandle in a
layout, it is not possible to guess the parent directory filehandle.
Further, as the data file permissions only allow the given synthetic
uid read/write permission and the given synthetic gid read permission,
knowing the synthetic ids of one file does not necessarily allow
access to any other data file on the storage device.</t>
      <t>The metadata server can also deny access at any time by fencing the
data file, which means changing the synthetic ids.  In turn, that
forces the client to return its current layout and get a new layout
if it wants to continue I/O to the data file.</t>
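      <t>As a non-normative illustration of that fencing action under the
loose-coupling model, the following C sketch assumes a POSIX data
server on which the data file is owned by the synthetic uid/gid, as
described above; rotating the owner invalidates every outstanding
credential at once.</t>
      <figure anchor="fig-ex-fence">
        <name>Sketch of fencing by rotating synthetic ids (non-normative)</name>
        <sourcecode type="c"><![CDATA[
#include <sys/types.h>
#include <unistd.h>

/* Rotating the synthetic ids on the data file revokes every
 * previously issued credential at once; fenced clients then fail
 * access checks until they return the layout and obtain a new one. */
static int
fence_data_file(const char *path, uid_t new_synthetic_uid,
                gid_t new_synthetic_gid)
{
    return chown(path, new_synthetic_uid, new_synthetic_gid);
}
]]></sourcecode>
      </figure>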
      <t>If access is allowed, the client uses the corresponding (read-only
or read/write) credentials to perform the I/O operations at the
data file's storage devices.  When the metadata server receives a
request to change a file's permissions or ACL, it <bcp14>SHOULD</bcp14> recall all
layouts for that file and then <bcp14>MUST</bcp14> fence off any clients still
holding outstanding layouts for the respective files by implicitly
invalidating the previously distributed credentials on all data files
comprising the file in question.  It is <bcp14>REQUIRED</bcp14> that this be done
before committing to the new permissions and/or ACL.  By requesting
new layouts, the clients will reauthorize access against the modified
access control metadata.  Recalling the layouts in this case is
intended to prevent clients from getting an error on I/Os done after
the client was fenced off.</t>
      <section anchor="sec-security-crc32-scope">
        <name>CRC32 Integrity Scope</name>
        <t>The CRC32 values carried in CHUNK_WRITE and returned from CHUNK_READ
are intended to detect accidental data corruption during storage or
transmission -- for example, bit flips in storage media or network
errors.  CRC32 is not a cryptographic hash and does not protect
against intentional modification: an adversary with access to the
network path could replace a chunk and recompute a valid CRC32 to
match.  The "data integrity" provided by the CRC32 mechanism in this
document refers to error detection, not protection against an active
attacker.  Deployments requiring protection against active attackers
<bcp14>SHOULD</bcp14> use RPC-over-TLS (see <xref target="sec-tls"/>) or RPCSEC_GSS.</t>
        <t>An authenticated client is in the "active attacker" role with
respect to its own chunks, in a restricted sense.  The data
server validates the CRC32 against the bytes the client
provided, so an authenticated client that chooses to send
semantically invalid bytes with a correctly computed CRC32 will
have those bytes accepted.  The residual surface differs per
authentication model:</t>
        <ul spacing="normal">
          <li>
            <t>Under AUTH_SYS with loose coupling, the residual surface is
essentially the pre-existing attack surface of NFSv3 writes:
any host that can reach the data server with a valid uid can
write nonsense to chunks that uid owns.  This is the Flex
Files v1 authorization model, which Flex Files v2 inherits
without modification for this path.</t>
          </li>
          <li>
            <t>Under RPCSEC_GSS or TLS with mutual authentication, the
residual surface reduces to: only the authenticated client
can write nonsense into chunks it owns.  Cross-client
corruption is prevented because the data server verifies the
principal before accepting the write.  The remaining attack
surface is the client's own integrity: any deployment that
relies on data integrity above the wire <bcp14>MUST</bcp14> apply
application-level content validation.</t>
          </li>
        </ul>
        <t>Flex Files v2 does not attempt to defend against this
authenticated-but-malicious case.  The CRC32 mechanism is a
transport-integrity check, not a content-integrity check; the
system trust model assumes that an authenticated principal is
entitled to destroy the content of chunks it owns.</t>
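        <t>The triviality of recomputing a matching CRC32 can be shown in a
few lines of C.  The sketch below assumes the zlib CRC-32 (IEEE
polynomial) purely for illustration; this document does not bind the
CRC32 mechanism to any particular library.</t>
        <figure anchor="fig-ex-crc32">
          <name>CRC32 recomputation by an attacker (non-normative)</name>
          <sourcecode type="c"><![CDATA[
#include <stddef.h>
#include <string.h>
#include <zlib.h>

/* An "attacker" who can rewrite a chunk can also emit a matching
 * checksum: CRC32 detects accidents, not adversaries. */
unsigned long
replace_and_recompute(unsigned char *chunk, size_t len)
{
    memset(chunk, 0xAA, len);               /* overwrite the payload */
    return crc32(crc32(0L, Z_NULL, 0),      /* fresh CRC seed        */
                 chunk, (unsigned int)len); /* "valid" CRC, bad data */
}
]]></sourcecode>
        </figure>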
      </section>
      <section anchor="chunk-lock-and-lease-expiry">
        <name>Chunk Lock and Lease Expiry</name>
        <t>When a client holds a chunk lock (acquired via CHUNK_LOCK) and its
lease expires or the client crashes, the lock is released implicitly
by the data server.  This opens a window in which another client
may write to the previously locked range before the original client's
repair is complete.  Implementations <bcp14>SHOULD</bcp14> ensure that the lease
period for chunk locks is sufficient to complete repair operations,
and <bcp14>SHOULD</bcp14> implement CHUNK_UNLOCK explicitly on abort paths.  The
metadata server's LAYOUTERROR and LAYOUTRETURN mechanisms provide
the coordination point for detecting and resolving such races.</t>
      </section>
      <section anchor="error-code-information-disclosure">
        <name>Error Code Information Disclosure</name>
        <t>The new error codes NFS4ERR_CHUNK_LOCKED (10099) and
NFS4ERR_PAYLOAD_NOT_CONSISTENT (10098) convey information about
chunk state to the caller.  Both of these errors <bcp14>MAY</bcp14> be returned
to callers whose credentials have not been verified by the data
server (e.g., when the AUTH_SYS uid presented does not match the
synthetic uid on the data file).  The information they reveal --
that a chunk is locked, or that a CRC mismatch occurred -- does
not directly disclose file contents but may indicate concurrent
write activity.  Implementations that are concerned about this
level of disclosure <bcp14>SHOULD</bcp14> require that CHUNK operations
only succeed after credential verification and return
NFS4ERR_ACCESS for unverified callers rather than the more
specific error codes.</t>
      </section>
      <section anchor="sec-tls">
        <name>Transport Layer Security</name>
        <t>RPC-over-TLS <xref target="RFC9289"/> <bcp14>MAY</bcp14> be used to protect traffic between the
client and the metadata server and between the client and data servers.
When RPC-over-TLS is in use on the data server path, the synthetic
uid/gid credentials carried in AUTH_SYS remain the access control
mechanism; TLS provides confidentiality and integrity for the transport
but does not replace the fencing model described in <xref target="sec-Fencing-Clients"/>.
Servers that require transport security <bcp14>SHOULD</bcp14> advertise this via the
SECINFO mechanism rather than silently dropping connections.</t>
      </section>
      <section anchor="rpcsecgss-and-security-services">
        <name>RPCSEC_GSS and Security Services</name>
        <t>The applicability of RPCSEC_GSS <xref target="RFC7861"/> differs
between the two coupling models, and this document does not specify
its use between the client and a storage device in the loosely
coupled model.  Because the loosely coupled model uses synthetic
credentials that are managed by the metadata server rather than
shared with the storage device, a full RPCSEC_GSS integration
would require protocol work (RPCSEC_GSSv3 structured privilege
assertions, per <xref target="RFC7861"/>) on all three of the metadata
server, the storage device, and the client.  In the tightly
coupled model the principal used to access the data file is the
same as the one used to access the metadata file, so
RPCSEC_GSS applies unchanged.  The two subsections below treat
each model in turn.</t>
        <section anchor="loosely-coupled">
          <name>Loosely Coupled</name>
          <t>RPCSEC_GSS version 3 (RPCSEC_GSSv3) <xref target="RFC7861"/> contains facilities
that would allow it to be used to authorize the client to the storage
device on behalf of the metadata server.  Doing so would require
that each of the metadata server, storage device, and client would
need to implement RPCSEC_GSSv3 using an RPC-application-defined
structured privilege assertion in a manner described in Section
4.9.1 of <xref target="RFC7862"/>.  The specifics necessary to do so are not
described in this document.  This is principally because any such
specification would require extensive implementation work on a wide
range of storage devices, which would be unlikely to result in a
widely usable specification for a considerable time.</t>
          <t>As a result, the layout type described in this document will not
provide support for use of RPCSEC_GSS together with the loosely
coupled model.  However, future layout types could be specified,
which would allow such support, either through the use of RPCSEC_GSSv3
or in other ways.</t>
        </section>
        <section anchor="tightly-coupled">
          <name>Tightly Coupled</name>
          <t>With tight coupling, the principal used to access the metadata file
is exactly the same as used to access the data file.  The storage
device can use the control protocol to validate any RPC credentials.
As a result, there are no security issues related to using RPCSEC_GSS
with a tightly coupled system.  For example, if Kerberos V5 Generic
Security Service Application Program Interface (GSS-API) <xref target="RFC4121"/>
is used as the security mechanism, then the storage device could
use a control protocol to validate the RPC credentials to the
metadata server.</t>
        </section>
      </section>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t><xref target="RFC8881"/> introduced the "pNFS Layout Types Registry"; new layout
type numbers in this registry need to be assigned by IANA.  This
document defines a new layout type number: LAYOUT4_FLEX_FILES_V2
(see <xref target="tbl_layout_types"/>).</t>
      <table anchor="tbl_layout_types">
        <name>Layout Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Layout Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">LAYOUT4_FLEX_FILES_V2</td>
            <td align="left">0x6</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t><xref target="RFC8881"/> also introduced the "NFSv4 Recallable Object Types
Registry".  This document defines new recallable objects for
RCA4_TYPE_MASK_FF2_LAYOUT_MIN and RCA4_TYPE_MASK_FF2_LAYOUT_MAX
(see <xref target="tbl_recallables"/>).</t>
      <table anchor="tbl_recallables">
        <name>Recallable Object Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Recallable Object Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MIN</td>
            <td align="left">20</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
          <tr>
            <td align="left">RCA4_TYPE_MASK_FF2_LAYOUT_MAX</td>
            <td align="left">21</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">1</td>
          </tr>
        </tbody>
      </table>
      <t>This document introduces the "Flexible File Version 2 Layout Type
Erasure Coding Type Registry".  The registry uses a 32-bit value
space; the values 0x0000 through 0xFFFF are partitioned into ranges
based on the intended scope of the encoding type (see <xref target="tbl-coding-ranges"/>).</t>
      <table anchor="tbl-coding-ranges">
        <name>Erasure Coding Type Value Ranges</name>
        <thead>
          <tr>
            <th align="left">Range</th>
            <th align="left">Purpose</th>
            <th align="left">Allocation Policy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">0x0000-0x00FF</td>
            <td align="left">Standards Track</td>
            <td align="left">IETF Review</td>
          </tr>
          <tr>
            <td align="left">0x0100-0x0FFF</td>
            <td align="left">Experimental</td>
            <td align="left">Expert Review</td>
          </tr>
          <tr>
            <td align="left">0x1000-0x7FFF</td>
            <td align="left">Vendor (open)</td>
            <td align="left">First Come First Served</td>
          </tr>
          <tr>
            <td align="left">0x8000-0xFFFE</td>
            <td align="left">Private/proprietary</td>
            <td align="left">No registration required</td>
          </tr>
          <tr>
            <td align="left">0xFFFF</td>
            <td align="left">Reserved</td>
            <td align="left">--</td>
          </tr>
        </tbody>
      </table>
      <dl>
        <dt>Standards Track (0x0000-0x00FF)</dt>
        <dd>
          <t>Encoding types intended for broad interoperability.  The
specification <bcp14>MUST</bcp14> include a complete mathematical description
sufficient for independent interoperable implementations (see
<xref target="encoding-type-interoperability"/>).  Allocated by IETF Review.</t>
        </dd>
        <dt>Experimental (0x0100-0x0FFF)</dt>
        <dd>
          <t>Encoding types under development or evaluation.  An Internet-Draft
is sufficient for allocation.  The specification <bcp14>SHOULD</bcp14> include
enough detail for interoperability testing.  Allocated by Expert
Review.</t>
        </dd>
        <dt>Vendor (open) (0x1000-0x7FFF)</dt>
        <dd>
          <t>Encoding types with a published specification or patent reference.
Interoperability is expected among implementations that license or
implement the specification.  The registration <bcp14>MUST</bcp14> include either a
mathematical specification or a patent reference.  Allocated First
Come First Served.</t>
        </dd>
        <dt>Private/proprietary (0x8000-0xFFFE)</dt>
        <dd>
          <t>Encoding types for use within a single vendor's ecosystem.
No IANA registration is required.  Interoperability with other
implementations is not expected.  To reduce the likelihood of
accidental codepoint collisions between independent vendors,
implementations <bcp14>SHOULD</bcp14> derive the low-order 15 bits of any value
in this range from that vendor's Private Enterprise Number
<xref target="IANA-PEN"/> (for example, by hashing the PEN into the 15-bit
space and reserving one well-known offset per codec).  The
encoding type name <bcp14>SHOULD</bcp14> include an organizational identifier
(e.g., FFV2_ENCODING_ACME_FOOBAR).  A client that encounters a
value in this range from an unrecognized server <bcp14>SHOULD</bcp14> treat
it as an unsupported encoding type.</t>
        </dd>
      </dl>
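      <t>As a non-normative illustration of the derivation suggested for the
private range, the following C sketch hashes a PEN and a per-codec
offset into the 15-bit space with FNV-1a.  The choice of hash is
arbitrary and not mandated by this document.</t>
      <figure anchor="fig-ex-pen-codepoint">
        <name>Deriving a private codepoint from a PEN (non-normative)</name>
        <sourcecode type="c"><![CDATA[
#include <stdint.h>

/* Hash a Private Enterprise Number and a per-codec offset into the
 * private range 0x8000-0xFFFE using 32-bit FNV-1a. */
static uint32_t
ffv2_private_codepoint(uint32_t pen, uint32_t codec_offset)
{
    uint32_t h = 2166136261u;                /* FNV-1a offset basis   */
    uint32_t x = pen ^ (codec_offset * 0x9E3779B9u);

    for (int i = 0; i < 4; i++) {
        h ^= (x >> (8 * i)) & 0xFFu;         /* fold in each byte     */
        h *= 16777619u;                      /* FNV prime             */
    }
    h = 0x8000u | (h & 0x7FFFu);             /* land in 0x8000-0xFFFF */
    return h == 0xFFFFu ? 0x8000u : h;       /* 0xFFFF is reserved    */
}
]]></sourcecode>
      </figure>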
      <t>This partitioning prevents contention for small numbers in the
Standards Track range and provides a clear signal to clients about
what level of interoperability to expect.</t>
      <t>This document defines the FFV2_CODING_MIRRORED type for Client-Side
Mirroring (see <xref target="tbl-coding-types"/>).</t>
      <table anchor="tbl-coding-types">
        <name>Flexible File Version 2 Layout Type Erasure Coding Type Assignments</name>
        <thead>
          <tr>
            <th align="left">Erasure Coding Type Name</th>
            <th align="left">Value</th>
            <th align="left">RFC</th>
            <th align="left">How</th>
            <th align="left">Minor Versions</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="left">FFV2_CODING_MIRRORED</td>
            <td align="left">1</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_SYSTEMATIC</td>
            <td align="left">2</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_MOJETTE_NON_SYSTEMATIC</td>
            <td align="left">3</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
          <tr>
            <td align="left">FFV2_ENCODING_RS_VANDERMONDE</td>
            <td align="left">4</td>
            <td align="left">RFCTBD10</td>
            <td align="left">L</td>
            <td align="left">2</td>
          </tr>
        </tbody>
      </table>
      <section anchor="iana-flag-words">
        <name>Flag-Word Allocation</name>
        <t>This document defines three bitmap spaces -- ffv2_flags4
(see <xref target="sec-ffv2_flags4"/>), ffv2_ds_flags4 (see
<xref target="sec-ffv2_ds_flags4"/>), and cwa_flags (see
<xref target="sec-CHUNK_WRITE"/>) -- whose allocated bits are enumerated in
this document.  Following the precedent of ff_flags4 in
<xref target="RFC8435"/>, IANA does not maintain a registry for any of these
bitmap spaces.  Future bit allocations are made by a document
that updates or obsoletes this one.  Implementations <bcp14>MUST</bcp14>
treat unknown bits as reserved and <bcp14>MUST NOT</bcp14> assign meaning to
them locally.</t>
      </section>
    </section>
    <section anchor="xdr-description-of-the-flexible-file-layout-type">
      <name>XDR Description of the Flexible File Layout Type</name>
      <t>This document contains the External Data Representation (XDR)
<xref target="RFC4506"/> description of the flexible file layout type.  The XDR
description is embedded in this document in a way that makes it simple
for the reader to extract into a ready-to-compile form.  The reader can
feed this document into the shell script in <xref target="fig-extract"/> to produce
the machine-readable XDR description of the flexible file layout type.</t>
      <figure anchor="fig-extract">
        <name>extract.sh</name>
        <sourcecode type="shell"><![CDATA[
#!/bin/sh
grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'
]]></sourcecode>
      </figure>
      <t>That is, if the above script is stored in a file called "extract.sh"
and this document is in a file called "spec.txt", then the reader can
run the script as in <xref target="fig-extract-example"/>.</t>
      <figure anchor="fig-extract-example">
        <name>Example use of extract.sh</name>
        <sourcecode type="shell"><![CDATA[
sh extract.sh < spec.txt > flex_files2_prot.x
]]></sourcecode>
      </figure>
      <t>The effect of the script is to remove leading blank space from each
line, plus a sentinel sequence of "///".</t>
      <t>XDR descriptions with the sentinel sequence are embedded throughout
the document.</t>
      <t>Note that the XDR code contained in this document depends on types
from the NFSv4.1 nfs4_prot.x file <xref target="RFC5662"/>.  This includes both NFS
types that end with a 4, such as offset4 and length4, and
more generic types such as uint32_t and uint64_t.</t>
      <t>While the XDR can be appended to that from <xref target="RFC7863"/>, the various
code snippets belong in their respective areas of that XDR.</t>
    </section>
    <section numbered="false" anchor="sec-implementation-status">
      <name>Implementation Status</name>
      <t>Note to RFC Editor: please remove this section prior to publication,
per <xref target="RFC7942"/>.</t>
      <t>This section records the implementation status of this specification
at the time of writing.  The purpose, per <xref target="RFC7942"/>, is to help
reviewers evaluate the protocol against running code and to document
which parts have been validated end-to-end versus specified on paper.</t>
      <section numbered="false" anchor="reffs-mds-and-ds-and-ecdemo-client">
        <name>reffs (MDS and DS) and ec_demo (Client)</name>
        <dl>
          <dt>Organization:</dt>
          <dd>
            <t>Independent / open source.</t>
          </dd>
          <dt>License:</dt>
          <dd>
            <t>AGPL-3.0-or-later.</t>
          </dd>
          <dt>Source:</dt>
          <dd>
            <t><eref target="https://github.com/loghyr/reffs">https://github.com/loghyr/reffs</eref>.</t>
          </dd>
          <dt>Implementation:</dt>
          <dd>
            <t><tt>reffs</tt> is an NFSv4.2 server written in C that acts as both a
metadata server (MDS) and a data server (DS) in a Flex Files v2
deployment.  <tt>ec_demo</tt> is a client-side library with a
demonstration driver that exercises the Flex Files v2 data path
over NFSv4.2 with all three erasure-coding types defined in this
document.</t>
          </dd>
        </dl>
        <t>Coverage:</t>
        <ul spacing="normal">
          <li>
            <t>CHUNK_WRITE, CHUNK_READ, CHUNK_FINALIZE, and CHUNK_COMMIT (the
happy-path data-plane operations) are implemented end-to-end and
have been exercised against the three codec families (Reed-Solomon
Vandermonde, Mojette systematic, Mojette non-systematic).</t>
          </li>
          <li>
            <t>The chunk_guard4 CAS primitive, including the conflict-detection
and deterministic-tiebreaker rules in <xref target="sec-chunk_guard4"/>, is
implemented on both the client and the data server.</t>
          </li>
          <li>
            <t>Per-chunk CRC32 integrity checking (see
<xref target="sec-security-crc32-scope"/>) is implemented end-to-end.</t>
          </li>
          <li>
            <t>Per-inode persistent storage of chunk state (PENDING / FINALIZED
/ COMMITTED) is implemented using write-temp / fdatasync / rename
for crash safety.</t>
          </li>
          <li>
            <t>The repair data path (CHUNK_LOCK with CHUNK_LOCK_FLAGS_ADOPT,
CHUNK_WRITE_REPAIR, CHUNK_REPAIRED, CHUNK_ROLLBACK, and
CB_CHUNK_REPAIR) is <strong>specified but not yet implemented</strong> in the
prototype.  The corresponding operations currently return
NFS4ERR_NOTSUPP.  A fault-injection test harness is in place to
drive the repair path once it is implemented.</t>
          </li>
          <li>
            <t>The tight-coupling control protocol (TRUST_STATEID,
REVOKE_STATEID, BULK_REVOKE_STATEID) is <strong>specified but not yet
implemented</strong>.  Data servers advertise loose coupling via
<tt>ffdv_tightly_coupled = false</tt>, and synthetic AUTH_SYS
credentials with fencing are used for access control.</t>
          </li>
        </ul>
        <dl>
          <dt>Level of maturity:</dt>
          <dd>
            <t>Research-quality prototype.  The implementation demonstrates the
protocol and has produced the benchmark data summarized below.
It is not production-ready; in particular, it does not yet
implement the repair path required to tolerate concurrent-writer
races or multi-DS failure reconstruction.</t>
          </dd>
          <dt>Contact:</dt>
          <dd>
            <t>loghyr@gmail.com.</t>
          </dd>
          <dt>Last update:</dt>
          <dd>
            <t>April 2026.</t>
          </dd>
        </dl>
      </section>
      <section numbered="false" anchor="interoperability-and-benchmarks">
        <name>Interoperability and Benchmarks</name>
        <t>The reffs + ec_demo implementation has been benchmarked against
itself (no second Flex Files v2 implementation is known to the
authors at the time of writing).  The benchmark suite exercises
five I/O strategies -- plain mirroring, pure striping, Reed-Solomon
Vandermonde, Mojette systematic, and Mojette non-systematic -- at
five file sizes (4 KB, 16 KB, 64 KB, 256 KB, and 1 MB), at two
parity geometries (4+2 and 8+2), and on two platforms (an Apple M4
host running macOS with a Rocky Linux 8.10 Docker container, and a
Fedora 43 native Linux host on aarch64).  Each data point is the
mean of five measured runs.  Data servers run as Docker containers
on a single-host bridge network, so absolute latency numbers
reflect encoding and RPC fan-out cost with near-zero network
latency; real deployments will see higher absolute values but
similar overhead ratios.</t>
        <t>Selected findings:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Erasure-coded write overhead is modest at small and mid sizes.</strong>
At 4 KB to 64 KB payloads, all three EC codecs add 14% to 21%
write latency relative to plain mirroring.  Above 64 KB the
encoding cost begins to dominate; at 1 MB Reed-Solomon and Mojette
systematic reach approximately +54%, Mojette non-systematic
approximately +62%.</t>
          </li>
          <li>
<t><strong>The dominant write cost is encoding, not fan-out.</strong>  A
pure-striping variant (6 data shards, no parity) isolates the two
costs.  At 1 MB, plain mirroring writes in 64 ms, striping in
71 ms (+11%), Reed-Solomon in 103 ms (+60%).  Of the 39 ms
Reed-Solomon penalty, only 7 ms comes from parallel fan-out; the
remaining 32 ms is encoding plus two additional parity RPCs.</t>
          </li>
          <li>
            <t><strong>Reconstruction of a missing data shard is essentially free for
systematic codecs at 4+2.</strong>  Reed-Solomon and Mojette systematic
add 1% to 6% to read latency in degraded-1 mode (one data shard
missing, reconstructed from the remaining five).  A client that
discovers a failed DS at read time can reconstruct transparently
with no user-visible latency impact.</t>
          </li>
          <li>
            <t><strong>At 8+2, systematic-codec reconstruction diverges.</strong>  Mojette
systematic reconstruction overhead stays at approximately +4% at
1 MB, while Reed-Solomon grows to approximately +54% due to the
O(k^2) cost of inverting a k x k matrix in GF(2^8).  Mojette
systematic's back-projection algorithm scales with m (parity
count) rather than k (data count) and is therefore preferable at
wider geometries.</t>
          </li>
          <li>
            <t><strong>Mojette non-systematic applies a full inverse transform on
every read</strong> regardless of whether any shard is missing.  At
1 MB this produces approximately 4x read overhead at 4+2 and
approximately 7x at 8+2.  This codec is suitable only for
write-once cold storage where reads are rare; it should not be
the default for interactive workloads.</t>
          </li>
          <li>
<t><strong>Results are qualitatively platform-independent.</strong>  The largest absolute
latency delta between macOS M4 and Fedora 43 at 1 MB is 20 ms
on writes.  Codec ordering, overhead percentages, and
qualitative scaling behavior are reproducible across operating
systems and Docker implementations.</t>
          </li>
        </ul>
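        <t>The fan-out versus encoding decomposition in the second
finding above reduces to arithmetic over three measured means.  The
sketch below (Python; the variable names are illustrative, and only
the three latency constants are taken from the benchmark report)
reproduces it.</t>
        <sourcecode type="python"><![CDATA[
# Decomposes the 1 MB Reed-Solomon write penalty into its two parts.
MIRROR_MS = 64.0   # plain mirroring, 1 MB write (measured mean)
STRIPE_MS = 71.0   # pure striping: 6 data shards, no parity
RS_MS     = 103.0  # Reed-Solomon Vandermonde

fanout_ms  = STRIPE_MS - MIRROR_MS  # splitting one RPC into six
encode_ms  = RS_MS - STRIPE_MS      # GF(2^8) encode + two parity RPCs
penalty_ms = RS_MS - MIRROR_MS      # total Reed-Solomon penalty

print(f"fan-out {fanout_ms:.0f} ms + encoding {encode_ms:.0f} ms "
      f"= {penalty_ms:.0f} ms total")
# -> fan-out 7 ms + encoding 32 ms = 39 ms total
]]></sourcecode>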
        <t>The benchmarks confirm that the protocol's central design claims
hold in practice: client-side erasure coding is affordable at
typical payload sizes; systematic codecs reconstruct missing
shards cheaply; and the scaling properties of the three codec
families follow directly from their published algorithmic
complexities.</t>
        <t>The benchmarks also identify two non-goals for deployment: Mojette
non-systematic is not a viable general-purpose read codec, and
Reed-Solomon at k greater than approximately 6 loses its
"reconstruction is free" property.  These observations inform the
choice of default codec and geometry in implementations that
consume this specification.</t>
        <t>A full benchmark report with per-size tables, figures, and the
platform comparison is available alongside the source code.</t>
      </section>
      <section numbered="false" anchor="sec-architectural-implication">
        <name>Architectural Implication: Cost of Fault Tolerance</name>
        <t>The headline question every storage audience asks of an
erasure-coding protocol is: "what does it cost when something goes
wrong?"  The benchmark answer for the recommended operating point
is <strong>essentially zero</strong>.  Mojette systematic at 4+2 reconstructs a
missing data shard with read-latency overhead within run-to-run
noise of healthy operation.  Mojette systematic at 8+2 holds at
approximately +4%.</t>
        <t>This shifts the deployment conversation away from "is erasure
coding cheap enough to enable" and toward "which codec and
geometry minimise the compromise."  The compromise that remains is
not the cost of fault tolerance; it is the cost of write-time
encoding, which is bounded (under 60% at 1 MB, under 25% at 64 KB),
and the cost of crash-safe durability via the chunk state machine
(see <xref target="sec-system-model-consistency"/>), which is +7% to +22% on
writes and +2% to +10% on reads.</t>
        <t>Wire-format performance objections raised earlier in the working
group's review of this work are addressed in
<xref target="sec-rejected-alternatives"/>: the per-RPC byte-shuffling cost of
the original Mojette-specific projection header has been replaced
with XDR-encoded chunk metadata (see <xref target="sec-chunk_guard4"/>), so the
remaining wire-format cost is the XDR-encoded chunk header itself,
which is identical for every codec and is part of the +7% to +22%
v2 write overhead measured above.</t>
      </section>
    </section>
    <section numbered="false" anchor="sec-rejected-alternatives">
      <name>Design Rationale: Rejected Alternatives</name>
      <t>The design of Flex Files v2 went through several iterations between
2024 and 2026 that are recorded here for the benefit of future
reviewers and implementers.  Each alternative below was considered
and rejected, with the specific concern that led to its rejection.
Understanding why these approaches were rejected may help reviewers
evaluate the current design against a fuller space of possibilities
and may guide future extensions or replacements.</t>
      <section numbered="false" anchor="proprietary-projection-header-inside-opaque-payload">
        <name>Proprietary Projection Header Inside Opaque Payload</name>
        <t>The earliest iteration placed a 16-byte Mojette-specific header at
the start of the READ/WRITE opaque payload, interpreted in the
endianness of the writer's host.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>It embedded a specific erasure-coding type (Mojette) into the
generic replication-method framework, preventing alternate
codings from reusing the same wire format.</t>
          </li>
          <li>
            <t>The header bytes were not XDR-aligned, which required every
implementation to handle endianness explicitly rather than
relying on XDR's natural byte order.</t>
          </li>
          <li>
<t>Carrying integrity and identification data inside an opaque
payload violated the XDR self-description model on which the
rest of NFSv4 relies.</t>
          </li>
        </ul>
        <t>The rejection of this approach at IETF 120 (July 2024) motivated
the shift to explicit XDR-encoded chunk headers and the
chunk_guard4 structure, both visible in the wire format.</t>
      </section>
      <section numbered="false" anchor="per-client-swap-files-with-mds-mappingrecall">
        <name>Per-Client Swap Files with MDS MAPPING_RECALL</name>
        <t>One proposal split logical and physical chunk addressing: the
metadata server maintained a mapping from logical offset to
physical location, and the client appended new chunks to a
per-client staging file on each data server before asking the
metadata server to atomically remap the file to the new chunks.
This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The MAPPING_RECALL operation required to atomically update the
mapping would, in a multi-writer deployment, have to recall all
outstanding read/write layouts on the file -- grinding the
application to a halt during every remap.</t>
          </li>
          <li>
<t>Each client required its own staging file on every data server,
producing N * M staging files (one per client per data server)
that had to be reconciled on client restart.</t>
          </li>
          <li>
<t>The approach traded throughput for multi-writer correctness,
optimising for the rare case even though single-writer workloads
dominate the expected mix.</t>
          </li>
        </ul>
      </section>
      <section numbered="false" anchor="server-side-byte-range-lock-manager-per-file">
        <name>Server-Side Byte-Range Lock Manager per File</name>
        <t>Another proposal relied on byte-range locks obtained by clients
before writing, with the lock manager state spread across the data
servers.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>A failed lock holder required a lock manager to arbitrate
recovery, effectively reintroducing a centralized decision
point for each chunk.</t>
          </li>
          <li>
            <t>The lock recall path for HPC checkpoint workloads (many ranks
writing disjoint regions) would have required thousands of
locks per file, with recall storms on every phase transition.</t>
          </li>
          <li>
            <t>The design did not specify how the lock manager itself would
be replicated for high availability, deferring the hardest
part of the problem.</t>
          </li>
        </ul>
        <t>The current design uses CHUNK_LOCK (see <xref target="sec-CHUNK_LOCK"/>) but
only on the repair path, not on the normal write path.</t>
      </section>
      <section numbered="false" anchor="modified-two-touch-paxos-on-each-chunk">
        <name>Modified Two-Touch Paxos on Each Chunk</name>
        <t>A fully distributed-consensus proposal placed a lightweight
(modified two-touch) Paxos round on each chunk write, reaching
agreement among the data servers holding the mirror set.  This was
rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The constant-factor cost per write (two or three round trips,
leader election overhead, majority quorum requirement) was
unacceptable for workloads where single-writer throughput
dominates the deployment mix.</t>
          </li>
          <li>
            <t>The approach demanded that data servers be peers in a
consensus protocol, which is a substantially heavier
requirement than being independent chunk stores.</t>
          </li>
          <li>
<t>A majority of the (k+m) data servers must be reachable for any
progress at all, which is a stronger availability requirement than
the k-of-(k+m) needed for erasure-coded reads.</t>
          </li>
        </ul>
        <t>Working-group feedback on this proposal was uniformly negative.
The current design retains the option -- nothing in this
specification prevents an implementation from running classical
consensus internally among MDS replicas (see
<xref target="sec-system-model-consensus"/>) -- but does not require it per
write.</t>
      </section>
      <section numbered="false" anchor="automatic-commit-of-empty-chunks">
        <name>Automatic Commit of Empty Chunks</name>
<t>An earlier version included a WRITE_BLOCK_FLAGS_COMMIT_IF_EMPTY
flag (later renamed CHUNK_WRITE_FLAGS_ACTIVATE_IF_EMPTY) that
automatically committed a write to a previously empty chunk
without a separate CHUNK_COMMIT round trip.  The flag is retained
in the current design, but its scope was narrowed: it performs
well in the exclusive-writer case, yet it produces blocks that
cannot be rolled back if a racing writer appears concurrently,
requiring either hole-punching or an extension of CHUNK_ROLLBACK
to operate on committed blocks.  The narrow scope is documented in
the flag's definition; a broader version was rejected because it
created rollback liabilities disproportionate to the single-RTT
savings.</t>
      </section>
      <section numbered="false" anchor="global-clock-or-wall-clock-based-generation-counter">
        <name>Global Clock or Wall-Clock-Based Generation Counter</name>
        <t>An early design used a wall-clock timestamp as the cg_gen_id.
This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>No global clock exists among the many clients of a
multi-rack deployment.  Clock skew can cause a newer write
to appear to have an earlier timestamp than an older one.</t>
          </li>
          <li>
            <t>Timestamps at millisecond or microsecond resolution are not
fine-grained enough to disambiguate bursty writes from the
same client.</t>
          </li>
          <li>
            <t>Mixing client identity bits into the low-order bits of a
timestamp (to make it unique) reduces effective timestamp
resolution without providing a useful total ordering.</t>
          </li>
        </ul>
        <t>The current design uses a per-chunk monotonic counter scoped to
the chunk on the data server, with cg_client_id as the
disambiguator across clients.  See <xref target="sec-chunk_guard4"/>.</t>
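        <t>A minimal sketch of the resulting ordering check follows
(Python), assuming a data-server-side view that carries only the two
fields named above (cg_gen_id, cg_client_id); the class shape and
function name are illustrative, and the authoritative acceptance rule
is the one defined for chunk_guard4.</t>
        <sourcecode type="python"><![CDATA[
# Sketch only: real acceptance semantics are defined by chunk_guard4.
from dataclasses import dataclass

@dataclass(frozen=True)
class Guard:
    cg_gen_id: int     # per-chunk monotonic counter, scoped to the DS
    cg_client_id: int  # disambiguates writes from different clients

def supersedes(incoming: Guard, current: Guard) -> bool:
    """Lexicographic (gen, client) comparison: a total order within
    the chunk by construction, with no dependence on any clock."""
    return ((incoming.cg_gen_id, incoming.cg_client_id) >
            (current.cg_gen_id, current.cg_client_id))

# Wall clocks cannot provide this ordering: skew can make a newer
# write compare older, and bursty writes from one client collide at
# millisecond resolution.  A per-chunk counter cannot.
assert supersedes(Guard(8, 1), Guard(7, 3))
assert not supersedes(Guard(7, 3), Guard(7, 3))
]]></sourcecode>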
      </section>
      <section numbered="false" anchor="layout-level-generation-counter">
        <name>Layout-Level Generation Counter</name>
        <t>Christoph Hellwig proposed at IETF 122 (March 2025) adding a
generation counter to the layout itself, transmitted to the
data servers alongside each I/O, so that the metadata server
could redirect writes to new data servers without issuing a
full CB_LAYOUTRECALL storm across every holder of the file.
This is a natural extension of the per-chunk cg_gen_id: where
cg_gen_id disambiguates successive writes to the same chunk, a
layout-level counter would disambiguate successive placements
of the same data.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The use case is already covered.  CB_CHUNK_REPAIR (see
<xref target="sec-CB_CHUNK_REPAIR"/>) and the Data Mover / Proxy-DS
mechanism (see the companion Data Mover design) together
handle mid-layout remap without requiring a layout-level
epoch on the wire.  CB_CHUNK_REPAIR reaches the specific
chunks that need redirection; the Data Mover reaches the
broader re-placement case; between them the full remap
space is covered.</t>
          </li>
          <li>
            <t>Adding a layout-level counter introduces a second,
potentially-conflicting epoch alongside cg_gen_id.  The CAS
semantics on the data server would have to compose the two
generations (per-chunk and per-layout), which multiplies
the states the data server must reason about without
strengthening any guarantee the protocol offers today.</t>
          </li>
          <li>
            <t>The CB_LAYOUTRECALL storm that motivated the proposal is a
worst-case cost that the current design pays only during a
genuine data-server retirement or full re-placement.
Partial remaps -- the common case -- already flow through
CB_CHUNK_REPAIR + layout refresh on LAYOUTGET without
disturbing other holders.</t>
          </li>
        </ul>
        <t>If a future revision determines that layout-level generation is
needed, it can be added as a protocol extension: the on-wire
surface is additive rather than a replacement, because
cg_gen_id's semantics are independent of any outer layout
epoch.</t>
      </section>
      <section numbered="false" anchor="declustered-raid-with-dynamic-parity-mapping">
        <name>Declustered RAID with Dynamic Parity Mapping</name>
        <t>Christoph Hellwig raised at IETF 121 (November 2024) the
possibility of borrowing from declustered RAID designs: the
metadata server maintains, for every fixed-size region of each
file, a mapping from logical address to the specific data
servers that currently hold that region's data and parity
shards; writes do not update chunks in place but instead produce
a new parity stripe on a freshly allocated set of data servers,
and the mapping is atomically swapped on the metadata server
once the new stripe is durable.  The attraction is that
overwrite is replaced by remap, eliminating the write-hole
problem entirely at the cost of moving consistency into the
mapping table.  This was rejected because:</t>
        <ul spacing="normal">
          <li>
            <t>The mapping load scales with the file's chunk count, not with
the file count.  A single large file with billions of chunks
produces a billion-entry mapping that the metadata server
must maintain with transactional semantics; the overhead is
inverted from the usual "a few large files" regime that
pNFS is designed for.</t>
          </li>
          <li>
            <t>Remapping storms during rebalancing, data-server addition, or
data-server failure require atomic updates to many mapping
entries at once.  Providing those updates with the
reasonable-latency bounds required by HPC checkpoint
workloads is an open research problem, not a specifiable
protocol.</t>
          </li>
          <li>
            <t>The approach reintroduces the metadata-server scale bottleneck
that client-side erasure coding is designed to avoid: every
write traverses the mapping table, and the mapping table is
the hot-spot under concurrent writes.</t>
          </li>
          <li>
            <t>The mapping table becomes the single point of failure that
the rest of the Flex Files architecture works hard to avoid;
replicating it with strong consistency requires a consensus
protocol on the metadata server, which the current design
deliberately does not require (see <xref target="sec-system-model-consensus"/>).</t>
          </li>
        </ul>
        <t>The current design uses fixed per-file chunk placement decided
at LAYOUTGET time plus chunk_guard4 CAS for writes, which
localises consistency decisions to the chunks being written
rather than to a global mapping table.</t>
      </section>
    </section>
    <section numbered="false" anchor="sec-wg-concern-codec-on-client">
      <name>Working Group Concern: Codec on Every Client</name>
      <section numbered="false" anchor="source">
        <name>Source</name>
        <t>Christoph Hellwig, IETF 120, NFSv4 Working Group session, during the
discussion of the original Flexible File Version 2 erasure-coding
proposal.</t>
      </section>
      <section numbered="false" anchor="the-question-as-asked">
        <name>The Question as Asked</name>
        <t>Christoph stated that he was "very scared of the implications of
having every client be a full participant in a distributed storage
system."  He pointed out that any erasure-coding or replication
protocol that runs at the client requires every client implementation
to understand the codec, and that codecs evolve over time as new
algorithms appear in the storage research literature.  He observed
that the same problem appears with replication ("simple two-, three-,
four-way replication"): a client power-failure event mid-write leaves
the participating data servers in inconsistent states, and the
recovery machinery (mirrored logs, write-ahead replay, partial-write
detection) is "a bit of overkill for simple replication."</t>
        <t>David Black seconded the concern in the same session, stating that
"it's better to have the data protection algorithm be inside the
boundary of what you think the storage system is than outside."</t>
      </section>
      <section numbered="false" anchor="what-we-believe-is-being-asked">
        <name>What We Believe Is Being Asked</name>
        <t>Two coupled requirements:</t>
        <ol spacing="normal" type="1"><li>
            <t>Codec correctness and codec evolution must not be a per-client
burden.  An ecosystem in which every client must ship and update
every supported codec does not interoperate at scale: an
organisation cannot upgrade its storage system's encoding without
coordinating an upgrade across every client.</t>
          </li>
          <li>
            <t>The expensive recovery paths (partial writes, durable shard
placement, mirrored logging) must not live at the client either.
A protocol that exposes those paths to the client forces every
client implementation to carry the failure-recovery machinery,
which is precisely what RAID controllers and distributed storage
systems put behind a service boundary so that hosts do not have
to reason about it.</t>
          </li>
        </ol>
        <t>In short: the data-protection algorithm and its recovery story
belong inside a storage boundary, not at the client.</t>
      </section>
      <section numbered="false" anchor="how-the-proxy-server-addresses-this">
        <name>How the Proxy Server Addresses This</name>
        <t>The Proxy Server (PS) role, defined in
<xref target="I-D.haynes-nfsv4-flexfiles-v2-proxy-server"/>, is the storage
boundary that Christoph and David asked for.</t>
        <t>A PS is a peer of the MDS and the data servers that:</t>
        <ul spacing="normal">
          <li>
            <t>speaks the codec on behalf of clients that cannot;</t>
          </li>
          <li>
            <t>receives whole-stripe operations from a codec-ignorant client;</t>
          </li>
          <li>
<t>encodes (or decodes) using whichever coding type the layout's
 <xref target="fig-ffv2_coding_type4"/> entry demands;</t>
          </li>
          <li>
            <t>drives the CHUNK operations to the participating data servers;</t>
          </li>
          <li>
            <t>carries the partial-write / FINALIZE / COMMIT recovery machinery
 that the codec requires.</t>
          </li>
        </ul>
        <t>Three properties follow:</t>
        <ul spacing="normal">
          <li>
            <t>A legacy NFSv4.2 (or even NFSv3) client gets erasure-coded
 durability without speaking erasure coding.  The PS is where
 the codec lives; the client does not have to be upgraded when
 the codec is upgraded.</t>
          </li>
          <li>
            <t>Codec evolution is a server-side concern.  Adding a new entry
 to <xref target="fig-ffv2_coding_type4"/> requires updating the PSes and DSes,
 not every client in the deployment.  This matches the operational
 pattern of every other distributed-storage protocol on the wire.</t>
          </li>
          <li>
            <t>The recovery machinery (PENDING -&gt; FINALIZED -&gt; COMMITTED, the
 chunk-state machine, partial-write detection via
 <xref target="sec-chunk_guard4"/>) executes on the PS, not the client.  Clients
 see ordinary NFSv4.2 semantics; the PS is responsible for
 converting those semantics into the chunk state-machine the
 DSes implement.</t>
          </li>
        </ul>
        <t>A codec-aware NFSv4.2 client is still permitted (and is the fast
path: no proxy hop, no double bandwidth on the proxy's link).  The
PS is the answer for clients that either cannot speak the codec
or are too old to be upgraded.  In Christoph's framing, the PS is
the inside of the storage boundary; codec-aware clients are
implementations that have been admitted into that boundary by
design.</t>
        <t>The PS does carry a data-plane cost: client bytes traverse the
proxy on the way to the DSes, so the proxy's link sees roughly
twice the bandwidth of a direct client-to-DS path, and the PS pays
the encode/decode CPU.  This is the price of admission for clients
that do not speak the codec; it is the same store-and-forward cost
any storage gateway pays.  It does not affect codec-aware clients,
which talk to the DSes directly.</t>
      </section>
    </section>
    <section numbered="false" anchor="sec-wg-concern-recall-storms">
      <name>Working Group Concern: Coherent Multi-DS Writes Without Recall Storms</name>
      <section numbered="false" anchor="source-1">
        <name>Source</name>
        <t>Christoph Hellwig, IETF 122, NFSv4 Working Group session, during
the FFv2 erasure-coding discussion.</t>
      </section>
      <section numbered="false" anchor="the-question-as-asked-1">
        <name>The Question as Asked</name>
        <t>Christoph observed that performing erasure coding across a set of
data servers, where clients need a coherent view of the encoded
data while writes are in flight, is "just really complicated,
especially without recalling layouts."  He continued: "maybe we
need a more efficient network operation that doesn't recall layout
but updates layouts in a different way, and that might reduce
the overhead.  Basically any scheme would require either a fair
amount of intelligence on the data servers or some form of updating
outstanding layouts to point to a new write-out-of-place location."
He explicitly noted he was "leaning to updating the data servers
to be smarter."</t>
        <t>The same conversation introduced the idea of a "generation counter
that gets sent over the wire to the data servers, which means the
data server now needs to look for a new location for the same
existing layout."</t>
      </section>
      <section numbered="false" anchor="what-we-believe-is-being-asked-1">
        <name>What We Believe Is Being Asked</name>
        <t>Two coupled requirements:</t>
        <ol spacing="normal" type="1"><li>
            <t>The MDS must be able to mutate where data lives -- replace a
failing data server, redirect to a spare, rebalance, repair --
without serialising every layout-holding client through a
CB_LAYOUTRECALL round-trip.  A recall is global with respect
to the layout: every client holding it must drain in-flight I/O
and DELEGRETURN before the MDS can mutate.  In an erasure-coded
workload with many concurrent clients, this turns a localised
DS hiccup into a global stall.</t>
          </li>
          <li>
            <t>The data servers must be smart enough to enforce per-client
access on a finer grain than "the file is reachable from the
network."  Anonymous-stateid I/O combined with synthetic-uid
fencing is a coarse instrument: fencing one client's access
to a file affects every client's access to that file.  The
only way to selectively revoke is to teach the DS who is
permitted, on which file, with which iomode -- which is the
"smarter data server" Christoph was asking for.</t>
          </li>
        </ol>
      </section>
      <section numbered="false" anchor="how-truststateid-revokestateid-and-bulkrevokestateid-address-this">
        <name>How TRUST_STATEID, REVOKE_STATEID, and BULK_REVOKE_STATEID Address This</name>
        <t>Sections <xref target="sec-TRUST_STATEID"/>, <xref target="sec-REVOKE_STATEID"/>, and
<xref target="sec-BULK_REVOKE_STATEID"/> of this document define exactly the
"smarter data server" the working group asked for.</t>
<t>The mechanism (a data-server-side sketch follows the list):</t>
        <ul spacing="normal">
          <li>
            <t>At LAYOUTGET, the MDS issues a real layout stateid and fans out
 TRUST_STATEID to each DS in the mirror set, registering
 <tt>(stateid.other, fh, clientid, iomode, expire)</tt> in a per-DS
 trust table.  CHUNK_WRITE and CHUNK_READ on the DS now validate
 against the trust table; an unknown, expired, or revoked
 stateid yields NFS4ERR_BAD_STATEID.</t>
          </li>
          <li>
            <t>When the MDS needs to mutate the layout for a particular client
 -- because that client misbehaved, because a DS the layout
 points at is being drained, because the file is being repaired
 -- it issues REVOKE_STATEID to the affected DS.  Other clients'
 trust entries on the same file are untouched.</t>
          </li>
          <li>
            <t>When the MDS needs to mutate at client-scope (lease expiry,
 client eviction), it issues BULK_REVOKE_STATEID, which removes
 every trust entry the named client has on the DS without
 affecting other clients.</t>
          </li>
        </ul>
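        <t>The trust table admits a compact sketch (Python).  The
tuple fields and NFS4ERR_BAD_STATEID are taken from this document and
RFC 8881; the class shape and method names are illustrative
assumptions.</t>
        <sourcecode type="python"><![CDATA[
import time

NFS4ERR_BAD_STATEID = 10025  # RFC 8881
NFS4_OK = 0

class TrustTable:
    """Per-DS registry of stateids the MDS has vouched for."""

    def __init__(self):
        # key: stateid.other; value: (fh, clientid, iomode, expire)
        self._entries = {}

    def trust(self, other, fh, clientid, iomode, expire):
        """TRUST_STATEID: MDS registers a layout stateid on this DS."""
        self._entries[other] = (fh, clientid, iomode, expire)

    def validate(self, other, fh, now=None):
        """Admission check run by CHUNK_READ / CHUNK_WRITE."""
        now = time.time() if now is None else now
        entry = self._entries.get(other)
        if entry is None or entry[0] != fh or entry[3] < now:
            return NFS4ERR_BAD_STATEID  # unknown, wrong file, expired
        return NFS4_OK

    def revoke(self, other):
        """REVOKE_STATEID: surgical; other clients' entries untouched."""
        self._entries.pop(other, None)

    def bulk_revoke(self, clientid):
        """BULK_REVOKE_STATEID: drop every entry the client holds."""
        self._entries = {k: v for k, v in self._entries.items()
                         if v[1] != clientid}
]]></sourcecode>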
        <t>The control-plane cost reshapes accordingly:</t>
        <ul spacing="normal">
          <li>
            <t>Layout mutation is no longer global.  The MDS reroutes data to a
 spare DS, rebuilds shards from surviving copies, and revokes
 only the trust entries that pointed at the failing location.
 The other clients holding the layout are not contacted.</t>
          </li>
          <li>
<t>The revoked client only learns of the mutation lazily, on its
 next CHUNK_WRITE or CHUNK_READ to the affected stripe.  That
 operation returns NFS4ERR_BAD_STATEID; the client responds with
 LAYOUTERROR; the MDS replies with a refreshed layout pointing
 at the new location; the client re-trusts and resumes (this
 retry loop is sketched after the list).  A client that never
 touches the affected stripe never pays the cost at all.</t>
          </li>
          <li>
            <t>With warm spares known to the MDS, the entire repair can complete
 before any client notices.  The MDS reconstructs onto a spare
 using server-to-server traffic, atomically swaps the layout slot
 in its in-memory state, and revokes only the trust entries on
 the now-evacuated DS.  Reading clients see no interruption (any
 k of the surviving shards reconstructs); writing clients pay
 one round-trip to refresh the layout when they next write the
 affected stripe.</t>
          </li>
        </ul>
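        <t>The lazy refresh described in the second bullet is a
single-retry loop at the client.  A sketch follows (Python; the
operation names are from this document and RFC 8881, while the
object model is an illustrative assumption).</t>
        <sourcecode type="python"><![CDATA[
NFS4ERR_BAD_STATEID = 10025  # RFC 8881

def chunk_write_with_refresh(ds, mds, layout, stripe, data):
    """One CHUNK_WRITE attempt plus at most one layout refresh."""
    status = ds.chunk_write(layout.stateid, stripe, data)
    if status != NFS4ERR_BAD_STATEID:
        return status                    # stateid still trusted: done
    # The MDS mutated the layout (repair, drain, eviction) and revoked
    # this client's trust entry.  Report, refresh, and retry once.
    mds.layouterror(layout.stateid, stripe, status)
    layout = mds.layoutget(layout.fh, layout.iomode)  # new placement
    ds = layout.ds_for(stripe)  # the MDS has already re-trusted it
    return ds.chunk_write(layout.stateid, stripe, data)
]]></sourcecode>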
        <t>The combination of TRUST_STATEID and a warm-spare DS pool is the
"more efficient network operation that updates layouts" Christoph
asked for.  It is not literally a layout update on the wire; it is
a primitive that makes layout updates a local event the MDS can
resolve before the client has to pay a recall round-trip.</t>
        <t>The chunk state machine (PENDING -&gt; FINALIZED -&gt; COMMITTED) and
<xref target="sec-chunk_guard4"/> address the orthogonal concern of partial-write
recovery, ensuring that even when the MDS reroutes mid-write the
DSes can detect inconsistent stripes via per-chunk generation
checks rather than via a global wall-clock or consensus protocol.</t>
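        <t>A minimal sketch of that per-chunk state machine (Python;
the state names are from this document, while the strictly linear
transition table is an assumption of the sketch):</t>
        <sourcecode type="python"><![CDATA[
from enum import Enum, auto

class ChunkState(Enum):
    PENDING = auto()
    FINALIZED = auto()
    COMMITTED = auto()

# Assumed linear order; the authoritative rules live in the chunk
# state-machine sections of this document.
ALLOWED = {
    (ChunkState.PENDING, ChunkState.FINALIZED),
    (ChunkState.FINALIZED, ChunkState.COMMITTED),
}

def advance(current: ChunkState, proposed: ChunkState) -> ChunkState:
    """Reject out-of-order transitions on a chunk, e.g. jumping from
    PENDING straight to COMMITTED after a writer crash."""
    if (current, proposed) not in ALLOWED:
        raise ValueError(f"illegal: {current.name} -> {proposed.name}")
    return proposed
]]></sourcecode>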
      </section>
      <section numbered="false" anchor="combined-effect-on-the-cluster-tax">
        <name>Combined Effect on the "Cluster Tax"</name>
        <t>The Proxy Server addresses the codec-distribution cost; the trust
stateid mechanism addresses the layout-mutation cost.  Together,
they confine the residual cluster overhead to:</t>
        <ul spacing="normal">
          <li>
            <t>the store-and-forward bandwidth on the PS link, paid only by
 clients that route through a PS rather than going DS-direct;
 and</t>
          </li>
          <li>
            <t>one LAYOUTERROR/LAYOUTGET round-trip per client per affected
 stripe, paid only by clients that actually try to use a stripe
 whose backing has changed.</t>
          </li>
        </ul>
        <t>Neither cost scales with the number of layout-holding clients,
which is the property the working group asked for.</t>
      </section>
    </section>
    <section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
<t>The following people from Hammerspace were instrumental in driving the
Flexible File Version 2 Layout Type: David Flynn, Trond Myklebust, Didier
Feron, Jean-Pierre Monchanin, Pierre Evenou, and Brian Pawlowski.</t>
      <t>Pierre Evenou contributed the Mojette Transform encoding type
specification, drawing on the work of Nicolas Normand, Benoit Parrein,
and the discrete geometry research group at the University of Nantes.</t>
      <t>Christoph Hellwig was instrumental in making sure the Flexible File
Version 2 Layout Type was applicable to more than the Mojette
Transform.</t>
      <t>David Black clarified at IETF 124 that the consistency goal of
Flex Files v2 is RAID consistency across the chunks of a stripe
rather than POSIX write ordering across application writes; that
framing is reflected in <xref target="sec-motivation"/> and in the Non-Goals
of <xref target="sec-system-model-consistency"/>.</t>
      <t>Chris Inacio, Brian Pawlowski, and Gorry Fairhurst guided this
process.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC4121">
          <front>
            <title>The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2</title>
            <author fullname="L. Zhu" initials="L." surname="Zhu"/>
            <author fullname="K. Jaganathan" initials="K." surname="Jaganathan"/>
            <author fullname="S. Hartman" initials="S." surname="Hartman"/>
            <date month="July" year="2005"/>
            <abstract>
              <t>This document defines protocols, procedures, and conventions to be employed by peers implementing the Generic Security Service Application Program Interface (GSS-API) when using the Kerberos Version 5 mechanism.</t>
              <t>RFC 1964 is updated and incremental changes are proposed in response to recent developments such as the introduction of Kerberos cryptosystem framework. These changes support the inclusion of new cryptosystems, by defining new per-message tokens along with their encryption and checksum algorithms based on the cryptosystem profiles. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4121"/>
          <seriesInfo name="DOI" value="10.17487/RFC4121"/>
        </reference>
        <reference anchor="RFC4506">
          <front>
            <title>XDR: External Data Representation Standard</title>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <date month="May" year="2006"/>
            <abstract>
              <t>This document describes the External Data Representation Standard (XDR) protocol as it is currently deployed and accepted. This document obsoletes RFC 1832. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="67"/>
          <seriesInfo name="RFC" value="4506"/>
          <seriesInfo name="DOI" value="10.17487/RFC4506"/>
        </reference>
        <reference anchor="RFC5531">
          <front>
            <title>RPC: Remote Procedure Call Protocol Specification Version 2</title>
            <author fullname="R. Thurlow" initials="R." surname="Thurlow"/>
            <date month="May" year="2009"/>
            <abstract>
              <t>This document describes the Open Network Computing (ONC) Remote Procedure Call (RPC) version 2 protocol as it is currently deployed and accepted. This document obsoletes RFC 1831. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5531"/>
          <seriesInfo name="DOI" value="10.17487/RFC5531"/>
        </reference>
        <reference anchor="RFC5662">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 External Data Representation Standard (XDR) Description</title>
            <author fullname="S. Shepler" initials="S." role="editor" surname="Shepler"/>
            <author fullname="M. Eisler" initials="M." role="editor" surname="Eisler"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="January" year="2010"/>
            <abstract>
              <t>This document provides the External Data Representation Standard (XDR) description for Network File System version 4 (NFSv4) minor version 1. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5662"/>
          <seriesInfo name="DOI" value="10.17487/RFC5662"/>
        </reference>
        <reference anchor="RFC7530">
          <front>
            <title>Network File System (NFS) Version 4 Protocol</title>
            <author fullname="T. Haynes" initials="T." role="editor" surname="Haynes"/>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.</t>
              <t>This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7530"/>
          <seriesInfo name="DOI" value="10.17487/RFC7530"/>
        </reference>
        <reference anchor="RFC7861">
          <front>
            <title>Remote Procedure Call (RPC) Security Version 3</title>
            <author fullname="A. Adamson" initials="A." surname="Adamson"/>
            <author fullname="N. Williams" initials="N." surname="Williams"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document specifies version 3 of the Remote Procedure Call (RPC) security protocol (RPCSEC_GSS). This protocol provides support for multi-principal authentication of client hosts and user principals to a server (constructed by generic composition), security label assertions for multi-level security and type enforcement, structured privilege assertions, and channel bindings. This document updates RFC 5403.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7861"/>
          <seriesInfo name="DOI" value="10.17487/RFC7861"/>
        </reference>
        <reference anchor="RFC7862">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 Protocol</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document describes NFS version 4 minor version 2; it describes the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include the following: Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7862"/>
          <seriesInfo name="DOI" value="10.17487/RFC7862"/>
        </reference>
        <reference anchor="RFC7863">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="November" year="2016"/>
            <abstract>
              <t>This document provides the External Data Representation (XDR) description for NFS version 4 minor version 2.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7863"/>
          <seriesInfo name="DOI" value="10.17487/RFC7863"/>
        </reference>
        <reference anchor="RFC8178">
          <front>
            <title>Rules for NFSv4 Extensions and Minor Versions</title>
            <author fullname="D. Noveck" initials="D." surname="Noveck"/>
            <date month="July" year="2017"/>
            <abstract>
              <t>This document describes the rules relating to the extension of the NFSv4 family of protocols. It covers the creation of minor versions, the addition of optional features to existing minor versions, and the correction of flaws in features already published as Proposed Standards. The rules relating to the construction of minor versions and the interaction of minor version implementations that appear in this document supersede the minor versioning rules in RFC 5661 and other RFCs defining minor versions.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8178"/>
          <seriesInfo name="DOI" value="10.17487/RFC8178"/>
        </reference>
        <reference anchor="RFC8434">
          <front>
            <title>Requirements for Parallel NFS (pNFS) Layout Types</title>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>This document defines the requirements that individual Parallel NFS (pNFS) layout types need to meet in order to work within the pNFS framework as defined in RFC 5661. In so doing, this document aims to clearly distinguish between requirements for pNFS as a whole and those specifically directed to the pNFS file layout. The lack of a clear separation between the two sets of requirements has been troublesome for those specifying and evaluating new layout types. In this regard, this document updates RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8434"/>
          <seriesInfo name="DOI" value="10.17487/RFC8434"/>
        </reference>
        <reference anchor="RFC8435">
          <front>
            <title>Parallel NFS (pNFS) Flexible File Layout</title>
            <author fullname="B. Halevy" initials="B." surname="Halevy"/>
            <author fullname="T. Haynes" initials="T." surname="Haynes"/>
            <date month="August" year="2018"/>
            <abstract>
              <t>Parallel NFS (pNFS) allows a separation between the metadata (onto a metadata server) and data (onto a storage device) for a file. The flexible file layout type is defined in this document as an extension to pNFS that allows the use of storage devices that require only a limited degree of interaction with the metadata server and use already-existing protocols. Client-side mirroring is also added to provide replication of files.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8435"/>
          <seriesInfo name="DOI" value="10.17487/RFC8435"/>
        </reference>
        <reference anchor="RFC8881">
          <front>
            <title>Network File System (NFS) Version 4 Minor Version 1 Protocol</title>
            <author fullname="D. Noveck" initials="D." role="editor" surname="Noveck"/>
            <author fullname="C. Lever" initials="C." surname="Lever"/>
            <date month="August" year="2020"/>
            <abstract>
              <t>This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 7530) and protocol extensions made subsequently. The later minor version has no dependencies on NFS version 4 minor version 0, and is considered a separate protocol.</t>
              <t>This document obsoletes RFC 5661. It substantially revises the treatment of features relating to multi-server namespace, superseding the description of those features appearing in RFC 5661.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8881"/>
          <seriesInfo name="DOI" value="10.17487/RFC8881"/>
        </reference>
        <reference anchor="RFC9289">
          <front>
            <title>Towards Remote Procedure Call Encryption by Default</title>
            <author fullname="T. Myklebust" initials="T." surname="Myklebust"/>
            <author fullname="C. Lever" initials="C." role="editor" surname="Lever"/>
            <date month="September" year="2022"/>
            <abstract>
              <t>This document describes a mechanism that, through the use of opportunistic Transport Layer Security (TLS), enables encryption of Remote Procedure Call (RPC) transactions while they are in transit. The proposed mechanism interoperates with Open Network Computing (ONC) RPC implementations that do not support it. This document updates RFC 5531.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9289"/>
          <seriesInfo name="DOI" value="10.17487/RFC9289"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Plank97">
          <front>
<title>A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like Systems</title>
            <author initials="J." surname="Plank" fullname="J. Plank">
              <organization/>
            </author>
            <date year="1997" month="September"/>
          </front>
        </reference>
        <reference anchor="IANA-PEN" target="https://www.iana.org/assignments/enterprise-numbers/">
          <front>
            <title>Private Enterprise Numbers</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC1813">
          <front>
            <title>NFS Version 3 Protocol Specification</title>
            <author fullname="B. Callaghan" initials="B." surname="Callaghan"/>
            <author fullname="B. Pawlowski" initials="B." surname="Pawlowski"/>
            <author fullname="P. Staubach" initials="P." surname="Staubach"/>
            <date month="June" year="1995"/>
            <abstract>
              <t>This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations. This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="1813"/>
          <seriesInfo name="DOI" value="10.17487/RFC1813"/>
        </reference>
        <reference anchor="RFC4519">
          <front>
            <title>Lightweight Directory Access Protocol (LDAP): Schema for User Applications</title>
            <author fullname="A. Sciberras" initials="A." role="editor" surname="Sciberras"/>
            <date month="June" year="2006"/>
            <abstract>
              <t>This document is an integral part of the Lightweight Directory Access Protocol (LDAP) technical specification. It provides a technical specification of attribute types and object classes intended for use by LDAP directory clients for many directory services, such as White Pages. These objects are widely used as a basis for the schema in many LDAP directories. This document does not cover attributes used for the administration of directory servers, nor does it include directory objects defined for specific uses in other documents. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4519"/>
          <seriesInfo name="DOI" value="10.17487/RFC4519"/>
        </reference>
        <reference anchor="RFC7942">
          <front>
            <title>Improving Awareness of Running Code: The Implementation Status Section</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
            <abstract>
              <t>This document describes a simple process that allows authors of Internet-Drafts to record the status of known implementations by including an Implementation Status section. This will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature.</t>
              <t>This process is not mandatory. Authors of Internet-Drafts are encouraged to consider using the process for their documents, and working groups are invited to think about applying the process to all of their protocol specifications. This document obsoletes RFC 6982, advancing it to a Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="205"/>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>
        <reference anchor="PARREIN">
          <front>
            <title>Multiple Description Coding Using Exact Discrete Radon Transform</title>
            <author initials="B." surname="Parrein" fullname="B. Parrein">
              <organization/>
            </author>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="J.-P." surname="Guedon" fullname="J.-P. Guedon">
              <organization/>
            </author>
            <date year="2001"/>
          </front>
          <seriesInfo name="IEEE" value="Data Compression Conference (DCC)"/>
        </reference>
        <reference anchor="NORMAND">
          <front>
            <title>A Geometry Driven Reconstruction Algorithm for the Mojette Transform</title>
            <author initials="N." surname="Normand" fullname="N. Normand">
              <organization/>
            </author>
            <author initials="A." surname="Kingston" fullname="A. Kingston">
              <organization/>
            </author>
            <author initials="P." surname="Evenou" fullname="P. Evenou">
              <organization/>
            </author>
            <date year="2006"/>
          </front>
          <seriesInfo name="LNCS" value="4245, pp. 122-133, DGCI 2006"/>
        </reference>
        <reference anchor="KATZ">
          <front>
            <title>Questions of Uniqueness and Resolution in Reconstruction from Projections</title>
            <author initials="M." surname="Katz" fullname="M. Katz">
              <organization/>
            </author>
            <date year="1978"/>
          </front>
          <seriesInfo name="Springer" value=""/>
        </reference>
        <reference anchor="I-D.haynes-nfsv4-flexfiles-v2-proxy-server">
          <front>
            <title>Proxy-Driven Server for Flexible Files Version 2</title>
            <author fullname="Thomas Haynes" initials="T." surname="Haynes">
              <organization>Hammerspace</organization>
            </author>
            <date day="28" month="April" year="2026"/>
            <abstract>
<t>Parallel NFS (pNFS) with the Flexible Files Version 2 layout type
supports client-side erasure coding and per-chunk repair between
clients and data servers.  This document extends that architecture
with a proxy server (PS) role: a registered peer of the metadata
server that polls the metadata server for work assignments and
carries them out -- moving a file from one layout to another,
reconstructing a whole file from surviving shards, or translating
between codecs for clients that cannot participate in the file's
native encoding (including NFSv3 clients).  All PS-MDS coordination
is fore-channel: the metadata server returns work assignments inline
in the response to a PS-initiated PROXY_PROGRESS poll, and the PS
reports completion via a fore-channel PROXY_DONE.  No callback
operations are required for the PS protocol.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-haynes-nfsv4-flexfiles-v2-proxy-server-00"/>
        </reference>
      </references>
    </references>
  </back>
ZKetgJLxGzXwm0B3DLKzy1eOXWr8U7Ea0ZTbAmnnjRehPPGtQtQ/GHPLoD2m
YtSLfIY2txH7BsoBOmtZAL8WZWo81OzrVbuK7+/uQCMICHXZSFDdnFwBOa22
V3ojOGcMrsbwpmzmSAi9Fb0yjBwuiTnTi5DTDXOiqtsp4gBkn7aeZcjWVAE4
EReq3hsUfsadSYoFx4AUICoqnZmQloOBogmVOjNzeGcsuK3KYtWO4epRTPQB
H5a558QsxcwooQM6f0evnreSaEhjn5aG5oYbkgPB7Pb1QLhxIV8dOKg/ssZV
bd+B2jGrG8uHsXzy1oQi9JKmDLO7k9eyd+YKBdksSoFI+iRayUrV+hoCUVE/
a5HrcIjIxcu4YmtGadRvZCSuR8VkPVMyHrDSRPtOfwPJPZDNb4uNVhDTcjrb
uUE/2st5bPMir0Lox7QdyfgA7EMyOBjFRFJOosyT2bq5R1Qz++Xq/OasESYw
ILoBSEXlpPmHVNb3UzivghzN9iJAZ7KEK1MxPjFj/9JLOFvxYhM0cD02/QlX
j8DJcVyPeNwCieOS58VR22Vh5Q10/DxaD1IfKPQvFSTtVzqsVJqx5nP+wgrx
UnhsQGCdKAvBXhf2oMtCxZZ7LydQ/d4habCKkl30bD3BNtTBXgmLRWYNcRHs
3oioGAF1tjc7A+h5K80H3rIwEzHl+LjRut+toWeIQ80foauz41PI4f+1qhg8
V7D31G05JAHKxvhTqhktyrWpOBwNY7cqIGbylL2DVgaoNeHyA9Yrh6xXihN+
QA/OVE0RaKNKIJ5kTDVhnRQ7480WczlLUI6lBqSNKrNBtGwYSAEexmGKxZKB
iF4tv47UxYO9/aABcu6qSERRFkk2pSn+RaQVeodiy/PacxCiOfY4AhNF727j
pwkYS90jTyMtUdNTIRWAR0qy1RWJ4Tc+ru4QKi7QyP5rMk0c3O9tfd7/fm5i
zkXiRyw3eMd1Al6qvY+cx5wkDdTxbHUvIs7tkBgzn/xuEHh01WSeQGrNQxY+
yRLSAPh77rSIMLz08Wkh3jVNFuk8Pi/ZScBx91BhyP2cQ22mv7SAggnTaBXx
tApPrO+Kbb5UlQZpcia/uGVTZ2aPdg0FxuIZztsOELoTzVXs+H1FRVQnrFq7
eZxWHpwOJAYWzAbiF688nMuc1Km4gJwPr/B8/InEc+fWsd8C+Zmf5baQpeHb
iYlbiJxeJ6Y6sFLmsQtXxCbhAUeZBs2oqGBNqD6iy4LojGmjkmW4YaFdG9Kr
DpFFicUyMtsrCdggDY2Ko9V5gIXergmrNOTfLAlEs5Bg0YkSEV54rUf5VJ1b
zqegtYIHIAEFSXgyYPOOc5bphUis4MIJvKVMrp2fPhE86fPG7MUVASK7mn1W
UoAN5EhGy5zMkHzEXgyYR+n4m71ODuKKAW/3mq8JMTyx7MG2iw4eYwAOCs41
YIgFCp5AE67Dh9Tq65n4oCdu4RBFE1eTxmPBttQN2jcQ+g7DlaGR44iRZpzE
jurYIBr0MwFRpYqkBg7R0f4eIuO58hHzIzQFqXkGc5GcQQG/sZfCuYO9VuWM
0vaZY7LJC2XZzMuBmi7gln7pvHbbR1EyF8uOTxeRJ8SxJkvwlGTXgcEkszzA
OfvrfZRj5wmXb5I8cB+ZhuonxUnM2iWJEzSC2UbIgFgC7ZgL/k0J5eRIAnso
lzXDAgZZmrfIwxeGuWFStDUfiF4kGrHFh0OWoyzuWFHMtmp8hFr0Z8/yFJMC
T0F5eeEi5QX1XhDX3aACivFwHnVS8UQ8HbZvUvaJdKW62szhZfH+LN6G9ra2
zrhssLnGTLUUzItX96LQrbEK1CfL2ZvQ0hNdJ6i69WiCJRJPjMnbuSv65nI8
KxpvPpvgjSmyr17HitGt6khhwhuQ8c/x01EhqOQ8+VjL7/jt3iHcji52O2ID
pKJEMCwb0hKgCkkmBPidKCuNgmcY2GBvfdFRTzXDuYE9EjBeSZBXvKYwfA1F
FqL8LQPtjk82+JWarHxeRwJ6YPwlm73xtFFLxEeb0gXzS+5jKIwL1fnHhU/K
xnW/296Vu2JaisZL+v1y0xvSye/MvylgiCZ29kEFmgTzPOspuEYKIke9nUAe
hOhlaLc41XyS8CCSzW+vf704GbTJoGXQm7eBAZd3nLA1KsZmoaXPtOMXYb/2
XOLAilb1XnMOYI5+4oXi1ersoEYhNNbfbKrRPTE5RvHpa9jxKWvmTQ/X+wk9
hK8kWKc12JqAr0+1FMWR6t1DvfsjyZTP5jV95XEyXx4HGmaT7ahq2QgseMrX
LEzW7Hq24ulxoPogM0SSKqRmKgiqRbVSJqIYx6Ei9WQYa6OvZfy8a90uYSa5
WxSG2AuuLJp5ymTycDBubrmACXtY5d86A4N+4eqtPObLmAWF1Ix553NueFF4
Pfg9IVZmyIWrYg6ae4sA6Zij4TR8t3MFgB5zIBSEJRkzIsNJ0msbzazrCU+o
jm6EUU8mjB9T+uAoMEfwt6UcyHmGUGGi8/WP1pJm+hxublvpbr0hj+VMzGSu
g4Psg1EJaSIuLGF5pBzAbMFQab7xFLFVphtoIQ2TnY90bKbRvYGwtIKDrgl0
BQdPRs61rQ7Prq5uj09Ozq6vpS7Knh4AkdXbFamOV7yJq79kiJgXDisd0jg1
pbJkmqDhjWSzJsrdJ+Lyll2hmVoZGqfp+BLPsl0jKh6FYIvs7CD1cpberyOY
Km9JYfpcd6XadpLha7vnIBryaPhm3WzEu7X8ErFkPUEmhqJaCLa/IgYVWK17
YPqn1dJThH9LGsSREP9BPrd9I17WJDh2mvVC+BDKe/o6XF1WKRyiZlcMr60x
GZwyC6/A3rUsoFC/yztd65oV1GUnm9fCSA4mPnKG4GoR5Zh1EN769lowb/GU
MKqV1z+YLtxIcn/pS6Io1CfLpwjMwbuAswStUR94md3jfaRkrkIlADI3OUsN
/iczTNIafcfvbn4i4Xrtw1rRprrgkk9gbZ+ySNPFR2kk/nghsqnIH4pmTJbA
wlIySCsbltWQXjeUApPB9lrl8wU7owVn5CcxL8w7pkWmFvebhlfe37JT7E33
4tpq4mEY1UOtECnF+uxB5M/KKFGonJg6reZys5DCBtHsmwHwP4y/kyNK23LP
Dl/PS1oq82cJ06YQ5IxryQrhsgMmKNQ7UE3+WMjpRMnpNdcxPT55vWt6Y+DH
Im28F0XZi/++amvHVjZB2C1icFwcXXyHcQJpz9FCiTZSZJALGL7DNYK1QLoz
BFFpcZ3oEUPjtEMovsaY1l9rufMGxgc/oRH3QB+dsu+k0I04NzplciLW3HaP
GH7rSfNR9ApNyIA1NtSfYaSweGLXQi6Z6MGUtigNvbZYIkgZO6cTeBRuciFQ
5ZMQlrIg4j420zKVzpl3YgNTuh7du9w76hK1xlwGkoJltd7Ub8BS9Yfj09vr
m+Obs/NTq0Epj3p0KwJBstG8LutWucjPRAYLo+KXsnQQyehz9FbqPyXLo5Da
fpy7pDWSkeDZT8fyWi0IxJ4cEa5x/JCrFFrxwm3ML2/eK3YAYtRHPo7VOGSY
/qC3hGlUkbQljC3S5y9XVmPL7UC82J4zdEpPVbkSvBEpSJt4JzW+EuRlXD3w
d7AozlMnJqwh8Qdcx6pg8xzKvVO2rPh/g2iaNpYoj00wCSINGMhgf5CrT1ki
OOsAXnH9hg5a27GL0eBfJfNKrj+CtI4lpy2LAYbqMXgJzHd2gEBAIkkBUwql
LXWbX+OZx4KfPPVgDGXBbz1u7vXp8dvd7Aua6Re0nsVs7LTUG0pmKQCgbzAW
5bRPqsc457ouXHlTWKakKt4FjJEg6rxZc8OFtx63VTO9z8cut4hdZDaYHZZn
12d0fm+uMl+5sSkCwqcl1DvaVCSyuhVIb7pGEM9ddIOZuF9YHHMK43BZ35VI
7W7ufRF65vC2V2GLxDF6gyoAXKA1HTtXEERF2GDWZR2zTibozAeMHE+tNik2
B36dm9xsmShbg4K/tDLFWaJUpgfagWUB7T1zLTcG+KXCIpABFtkDAcobv2ig
6iIcKlu2hiFBRvqJwhwXVvxlGwk95ip/IiPG+87xcXMEQb3fZjd6FIOMAxvP
SXnB67LWwlLJLWl92FXIDZGPgirCbJFmUcCTAuAAPh+ONC8nnyRO0zI3gTdt
tBpB5TXjSOkUe3AbU6MRfOUlSEFspIYXwDsNA1pHmbwcX7FhwmaIRI8RvWZC
M7wHDlzSnMUogyQ253E5DgqTBIahNg24QKv6sVziT+E9HCdSXviRlG2KGFLk
XxAK0c80SLYf63d8XgOjFQxsGwaLB+LRNiZuNLcTyycbv1XM8PGVljoKBfFJ
u8/q+eI+3xs9MwWnU0ZB0hmh0EbPpkrSNh9GcLjd9BwKWjgUCiftjWTX+wxN
bo6c+4//+I+suS9IvRsuH4fLIf4f+l7s2wSgktH5waVvvvtDdlqMsuww298/
erGf6WTwEve3o+x3k3I6xGeH/IN09fj+2RsdyBcNqdCw7yf24LOPzl32TWEQ
uI4lljWaqscWzboinjcupRhceoSf69llfUIoom+aQ5vm/neHXx/QXwffHu5/
2z9NDP62b66cFZjM9XTrPH9JMh4QD14UlQXEo+RtA/8PgkeOlVpsHilB7CMy
V2DHSwnoCldFiPyFwp5Jsb1nbnXHCDN79Q7HYXyRKzq3u8wh/SWpcARpLUZk
St4wMaf58q7WclT+6CVltTUYamIKtahLLsXAa+/U9aOVEZDdynAacc2X3qXW
sLIs4kVQADZ9ettATkypiX1TTo8W999Et1Y/w5gPz0UN8m0e9PahGZP9Mi4i
WZIIMWGiTjm7DiWWIUkNeOKtPG64RIWzzpFhnxX5ktZp6fRHn4DHhFLVdhvc
Np7mTZoEUEXl36Na6t6nqP6FUv13v5HqhSGndP+Kr2XbyV/Ub0EY+eM63Wrp
BfkU9C62+1KpWtWZdktjrazy1jSKdfEesjJusKmuBkTbyLW7uVaT6B76Za7j
hfdz1y12JvZB95i4oQJLfgZ9XuIo55dvLk/PDm+B0lPo8Kqn1YB+TG24vOXF
toInOIi8eFWPR3IPmcDqfkyWNuiYbHKn1i77IhDLrWcFW6cpPlzcgVx4iUtF
+lPZOFZ/e13QEisXj6hKjzj+wUdGguJFxdg7KKn9jlDNV44c52AXX33zzRH9
zx+iDzlhP9GpTj+TfcZnXOczf/CfuddDa4SFbWkzxxwu+WoKj3jp96tjVooe
VYzF8iFWFIARBmr9Repyrpt7X7xW3LKGtAj7IPUQpNhaWs8WArMpxDv9iNoA
VQAeJEoU3WelXu+KmAJcSgHR5lssNs540FTKkBKnwGL2MXKG61jr+7QDUPns
EQUbO90EllrgHhZO2Tj+2Eai694Q0YxVCRwbSB6V5UZFQHuyQWYKH3ssGymP
BgsnKo/9OwWwChhGvWJvBFQh8cjUI8YlQVsYKsEB84QY9YMiApbW3H9acByZ
jQ2ADhHPHTEc+Nc01yklVm3dwV5OyYus0sxAYRdSEGusRQMe8N2oOpZYr71o
Calfed12ZLRhPNKyJ/Sx+SpARUEJKESh67Bi/XUs5yNZAal1uK0PTqaFMHhX
g3sLbz7708lPP56forz78Y+3767Pbt/SW25Pr7PJLJ+KqV/zXccXP57dnp9q
KsFYG5OIn0IKB24izM0TQ82yV2bXm8MU8k0dZMmr9CVx1UjsJr3UlqqbRvU5
66BIR+bxWvrV+wf/c6vCNdPqVaElkLn2aMdBbE0ERI5wX7YnWiGRckpvSQFz
/QCyJnVoR3CYPcYotni0z6+wqhbaDalsuM4N9B86C2P6c1VazrDrdamvDMm1
7QiYrfdaaeBEf0x4gurybYR8hMAWuI7mTw4CgaB0T74sGXGfz1YSeN9WT0/z
cLXzgpacjMDqAW/HwxFm8HDosXJl4wsreBtBLWRbkP7GZhAMgsqRL/gZer7a
n1sbB4eDHFTY0Vix6k0dIF6hhkjIkKRVkkQMrpD0hP+61+79kd3+HBchzZjk
rm/INvjUfjFCDdU7oYFzz0Phh5dvzy7kDt6z7Ik9i/MglBN65XULhEE88U09
KnlM3umvshztRE/ttPiIiS8fUVdV4TsH9Q3Iahq1P4stNqSoBR22Rs7kYEsw
MliwchbFiJUgMJz9rTWerKuR1GtA+XR6C3/UXMPbvBe66renl79c/Hh1fHrG
y3Ty+vL6zGIp0mySK+9nxQdSb6wYjHLEbu4UvfR4/FA2PYjCxloNxk3crKr/
9ncyQKdMQn9CLdYezEjB76+vtNsZAKf9m4vjCVqxumHx+vXO9TSk0nzWy0V3
gMYAZiNV47AAmjKDvmazUiJT4URotJQlxUXddr1DGlQPKF9lpXaNF8gQb86u
bywUx5N7dXV2Fi7E7//0bgxCabiUeqMhQLqQWfBkuowPFy8lnStqXei9gpZy
xfJx7bc7eV4weGl9BA9ExKkRAJghgMUl3NaEESYSvTfKskiGv+eOG8Z2gpMK
w3qzDTjLyQa+cY2WW+fjyIXSbWH7JwI9BjkKkrSwLsVEwVh4lRhtgZqUiH9r
XWYp7cIT08hI3eii3qkjRxWgs9TK4TJfmXQn0E16KnTOxdcsY/NMBACHFjE6
i2f0bKOd0EH6EoFlxpjcUq18TXY0baaHb0vBQ91cbal3b83HpHKTf5zPA2dI
boVutW0yjbnzRsY5kuj8xQ3MV63tuiuCkXSvnVYi++sxT2DHNrAq+4v2YQ7V
wHSoOEJxKX4NUQgAWmrZ61t8yyjNO0z9uy524eirEAnZ4tJm9lAxyJ9B0Jx6
j9tR3rFcauZ9SGxWPadsGLQhelHcQthAqIxrYNWS39ZsY6kcsYD895F5GWz0
9d5QeSxnccQsgBzhJyxL1sSFlKCRjeMKhGPBmPMqWgJCJVtn5RFyK4jAmian
Irc2jbXaG1V5/7+h1d4kWSUeLD3Px6Yi9nNuOemq+wfjGOlDxDRK1lNfIYET
5WAnvtFDVOiwr0BP0uXTGq4pXmugrC18wqRXnCzrbq7eBYE3yK7Ofr7841n4
N0j8h3evUT06/iHaAdep7chGzNCQRkOdtO+u2qtttterYCw7MfSIhBCdBb0m
Q9bM+tVjPWRYMjRmYtXvSbdmIAouNtJFECEs3jDtZPkUhh9nhPEQ6decIQi1
RUsKwmRmaH2d4TFCV0JJdsq+/PIkX1hmENLHpIHC8cqIYtgU4kMjw3i9SG2g
zPc0YPchOFq6CGFYPaKEH4+AVK394cFIgkbHzDXHhAnmJkthZHIApC8Bf5xf
5hsheHzT+cXPx69f9niPfHFR9iZr/zn+Bj/vv2N9re2FF5c31+/evu2E/3WZ
+OEJa4qMYkBNmaQyzF6/CaKdFPjxYAzC2djV4LLJZPxwqxb7bfAhHchut3Z1
yp4e33Wlp/4Xh5P4y1s8J29OrxGbjz0mjxpf8oEuXTVxMdi3+0EoT1NFiypD
i0JjPXhc1yImAoQ70NqJa6qkzIUfaTMYOjN9/CUljHyJtgj8fC1QO3bpyuyY
sxtdvD27emMg4zKsSIKnbNetjNTHotl68FgCSkl20mI5AcqjjTXNXyzfQP3b
PWA06+6PF5cXfIOVULaONUthU+Y10JKWhi+AKSHZAE0RseX/B/0OQdRuVVTM
A+F0Vv+beiCgFTX/i9wPBptnp1Wf8qjKkhkmNt1tGHEt9DvUni1e39cGF8FS
YIvc1H1vK/huqtA0tjkybHvEecEyvdR2dNuQIXfrSE/XADBeE9LI6kU+tUob
2zwr/z9wgliUwYOloA6sg+3QHeMPauKmpr/tgWGrIowC264hKbVvJErBsixR
IvKnqHAbCRp1fh4VtiBBXFclUkLE/9o613tP+wc8BQTXAAs9X3cu5tVPJ/ar
u76t28t7jWZswkY2+TbKHCjL7aGPHpI0Evl8+mAe5UMDIfGYiSCJwG7FKvWN
wxPIZ1MHuw+2EshnUodfVsU7egYV8s2zHUE9eM9TXIxnYC/Br0OurtnP2voP
VXyieChPHKpW4U8vE9rMTGKKs75inqIPYAgeyyeJUohXRBVtzXG6arsps7iS
oPdbcbRjyg62/1e8qb+ZYfXm0m9R132JnIg2SekACmgv+6+ysOy3sTEIc1GB
vC7G2q6fjZTMjUmPR3ogI+XuB6qmVvGQelzMkvLa61De6k3ucYimDmX6SOK2
Tj7yKUnew1D79kbLW8h5UGPJo+NNI2ml0bTDzD8EJ2+SK7OoUflE0MHr2N8i
vs7eJNmBqMpZqM/xaQ9e6A+GL0YO3LYjTUwRro6jZtAdZurTGBX0107LLLWO
pJ3ixisVgp9dkzxrNkmSFMCFzhs+GpvsxIwjp06rtopuYV41nGUQkuQjRvcb
cxB9zXFxTUrKxaf9k+pahra/5uKcGFBY0s9xX/byCg8g831b1I/51CqxGC2l
9zSxbp/wVXRcnU8lecVuc194xYeUxS1uvdw+0+vqnvC6ZpHXNeeZmtO1V4/2
3tRQb8lSHH2ejQCHtnjoXBq0hy8xpecIstP0YnaiVlH9q7BNNZM0vk7bCaYh
9aBB9HFnFZGHSRVjoR/GS7aG3Hhm1P4FeqWnAt1LDs6EdpGzzUtfeEh3BwAi
gHlphPPFyoqw9U0VQxov88fIL2MWauqOGA4F3ShQfrRXzk5+uBXE5tXZyfHr
1+KPKucFJ7JFx2QTVST0eE20UPEJ+uwJSas2h8KK+ap7aBiZgR5oVfHojA6C
tD9qeR53xHuUXERZB9dy6Oh96VUpBt7vYnb6RM9PcTWZlo4niUuihXf4Ud1L
IVrNf1yMtcWDeenNwfRbnGRSWw+278XljbsLA4odRtYKVuJTVrAjnUwpyHLP
SruIGFVHCktiH4J4kSJnTj/jWXH5J3boln1tBrhngvn4ABILXjJpuiPV8xG7
+ODHtfP2elegUz4GIGbgfEGP0iiAvc7eMPiJzsJkJS3q314zO6yVK8LD53to
cK2RlUjI02vN5K+c1b3nEnmryIzp2WXjFD4nwWLj5vFXOCI3V4+OvmSesdMt
9LR2DVCDz1qKwzMB95Y80PaPmQQCPmxsqz5vhTJeoWcvRQWd5VPpTeV5phzL
FsGHApXgIZJ7W0sBRw6vRVGHU8WmbfqlgMQBerBuhu9svIOe1Z00EBDqkwzc
3SbuVpHyi9j8CRU4Wm79vd5AEQ+w6T3D4zX3kOwNpUiijbs++9d3ZxcnZ9nv
s7fEVy8vb179RH8no9sRRtvkWp/m1ujm++z44vLi1zeX765TdzbuVZS9/fd9
H9je38xsO775q/DbgiYB6Tqz3549202SHFj0DA0BJ7EWzXVIV3nUijVZskMn
NuQBecRNyjsOf842R5bcxYnCkeutrd/Gjv9yFTPKjjwwQSyDVsRHPhox/fKp
96cW4Hl8RI+KyFtOMNE2nB1QAry1pEStfBfWAB3nmg2JD7tFa0cdSk6CUWYw
Sd/IjlLYR6dPxZ3wkp3uLixm68Y0QpEhXNKiL8Kk/gL6SGPJRpzW3XWqJsG3
vmn6KVp+tccS8VRVs8qbiBisBfnT8TNv/wUyfHJObDIu12E+PWvtp3R7+cej
Xj8A1+pmx3sfnVcKBV41HYqSWFG3yNy6HSaMTGLNkZnVHr8zz7kw4/kqe3P8
q8Cu1ZMYFqHRnoyTcjnXNFQkupJ8fqymy3ysMIyUZnw1Gxw3gTNzg1NuYSlB
6nb0DDJwXqu3866erhvJtLR0mzAkBvRg04pxT5hTWpPhuuSuvrXe3CoIrA2z
dSYAqBvOBiRnrGgvjqS33GRyK++7VRxwc6iZn3Ed8B/Pbk7Pfj4/OTu/eHWJ
tD+uKkQbhrpm/XQDKTmQr8Zauxd+qtxeqzx4yvrxStyWRFUDJHCLxQhs4UOs
PNP2u7HQsy6psuGyJOHEeVKmb/CreiE73Ac371PbvmiiuI/zWYlPhJDLig9B
mo4wNaW0j8Ojoj3j/POn47ufGduF7sO9liWsK/naXtt+ZNQK7XgEKNNp0YZv
n5fbNi8UwlaIuHok8FVJT1TEn/+4Ra2MzX0quMyZaq5VnqquOur8S+tDphOy
ECUf9546LSH0ba7UJyau+qGGiVW0hrDwXHf2CZIQPVO7nIrhgWagwwzlRbwv
VBqTG5Ihayl2MlnLvg9POe3xLgnT9ObtOeJRndWWrROtthnPKyVIKPJkCJH1
oE19ydjN34sUiVr6xK7Gz1gJa7KL24UWnbfe0dJlOC5Gs1ydgy1QhjTZ6U7P
Gc51o31hYuQBVxuzWXRVCu4GUW2cZBrTETEIkik94IR7iAkExFgCH3BtGC+K
/qyWeVlZUhtX0PGwq7yNInFlXB6BKQU08FgHQ1E+GdVAU88Ni5wWJkXlJ7bk
6u3J9dnJ7Y/X15ZhOUeRxUqcJkEpjjChHTAwIwtZsE7XSx+eiVqC5D0B2B5+
Zw6hAAEK31fjUd6JVUuQOJ8AfRBxKTboeIaGZ7lkwaVVs1uLs1xX6p3gN1fF
Co1ppE9vSWdBshm4OWDkXch2Em9Srkgd825EVbJ/fn18wUb4eoVqsTevr+Gm
uXp18t3Bt99JhS2dFKcRkgo9K7n5mnExJDexWwkJ+iGt3MJa8Ya0Vl/Cuku2
5Xq2VFpPZ9pxkoPEQtlYdp+o7h0UaLs5svBE4lsjEi+qKc2inrQhcz7fJ3SZ
SfD3z8CscNKc5zNbxzuSbjfGcZ+91BZ3o1XkERXpIpWu4xorPnZ1R5ttqssr
Lhm5UluSVKN+5UXs1GmxIrXllbbylXqVUMYndOiR6Zpmh/d6ZvWoSvmkXhfr
Th5VoLO8htSo2H0pJB73brzXDL5PKUUG2OCNj9TJHYWxom8oh1V7FUFujrEu
MIAXe4D+oQk4jS/dc+8BXHdcCGKFa2H0NAc6geT72vhs9Nvxiiz7cQH1Yhzq
LrUWmgSIOHC5YDYe7nUt8j3ePxoCZOo9tsqCEpz0/SWxhLEjgZbjkJbjMS+1
KjkC1rQ4Q650inJZNbgESZwd6SMp5F1yZzRraT6MsX7qhd59qleUlCUo2NXU
hU3ymoT9FVVLCCXF2uo4+wMJ5mPtVpp7mQEKK94518aNynG6ufr1NX3jSqp5
cPGWTstTlRykVEtSiA50y2jI4vPlEpRC2NfG1eWtEEKQHJ2qlhoi84A1Xp8h
r896hhawXPCsKufrub5GCm7gG/Sibdhdw806m//pGa2AGH3EnhBRIsKZWfYq
A6NRgr7X40yk57nR0LtX7tbjqVh9xYd7UrHEOXA+6ZZg2orh1aZV3L5w5z6X
mvVmzQenkZYS2X2CIEZ0bEhl7TUV+10lA3bXpN4aFyxv6T04FCs7Ov40PMaC
ToOx+dYrCj+UEvC2k/vHYnlXkKGc/Zgvtvle9Vmw8SgFYF9zfLQNWL44iury
xGzAgu59Qajw/SgPJ0JprNTj79I3eqaypUa7ZWjRILV0soPNvpFsSNoDLRn3
XP6WOiY7by+vz//kSyJhU6xUGAPQndZX80OOxCdJaPoWl9KxhjBc1cPXGgPL
7TOfobOv7yrUKlnawfbK5MO+tgMXrcpBQ7evD7VwsO97kXppuU4hWFrL+TpT
EUpfoC2zena569k4n+nXsfix7TYOF76phXlRwLj456uz49dvtpyHRd7YKKLH
uQTNlkBYW3iot7XDVmguij9r4swLMZ3qSt598tO7iz86r1GpKmfuia5wJf0D
TSJXAZ6ZlU2PC5gPlfUSEJE71iDt3Z5B92Yz3vQt9LyDmyKjg6wwMmdEY+X1
UbuPp+C9FbtS8QhLmiwSmZYIuqCYWym6I3MiFMMW505Y/Dnpqlw1KTpOLnWl
swhp1VrXPmGhAQesrGe0eNm4ZnJlLAZUdFV9g8bnmR74yTMDqvuJ3Beh45Nl
gUmNVoYrwjXpu7NP1trsxsuSX64uL36kRcx25LUCcUsS8dgROYz3wamJF3rb
k+X6UC93cTZ7yuG2Xu7aL28J/90tSUa8vIt6sZ4xWik5xj5PsksTSg8Tb1kE
cjK8khVReoIrd0aDnKXwsS1RDUlVQpWMZLT6ymK+WG3YxuBqO2RwxbyU9w/2
XGIT+vwxn+SoMDjHHr5SXMBD7UIhp2WXGw9qWaCtA3E6EFYxq/hw6bLQeuL0
WoDG3mE8o88FxhGDciZViYmKomJusYR0XlQQr7WOAR1rH858cWbnAepQLxSU
hyJaydzU3xQv8kBSPdORa2Tc+sXrU4ZLanTdntoZiUn5X22/rNCXbpDfH9sW
pZlOmOSLEPFIg8f+CyZ6A3o+rnY+lYp67BwqV2tF9iU2tcAKPPRHIhqiY0XC
Go/5F9OoGNH7XJQAX9SaGVEbYq5PWPrwk9VnFZ5cTituZhoFIHHbUNRo7j4k
yMvXrRXmFnaIZMCQxgfC6QWzwzkNzhGO3cX+EXUh4a2dx4uED6pD66XNNd0j
DQj14xkipm7ZePq2pLqo8ngLWMkd6rFNZ8UL1liKD6lxVZErlh41yXQm1TgA
2ya+xrBrR6oh4qy3cIOaYuxEIt59dYO/zE0QeYrwCiFnGc5uaCWuTQMYu7uW
Rj/p0D0KPLi77y0q0imbz+hUfLfg0Amxcxm/b9pgcLdgWWtuPpGWWB5SDDzP
upgXlskMpUm8An0baGZPUhAN43avXv18cHt6zZ7C69u3V5d/+jWqqCdf/ULq
L99y4UN5o8bOBm6hyKDtYBKvO7Tca3f8psaJipNASpbSM8y7L95eQ5OSboQJ
esNr257Fa8tSIcGWsOgBcpo4broqdNkWFPjXl+ksvpRWfDoXWTR4itpzCh6U
NhgJsTNfZG9I5PSYs00nNne2Q8OKSlzgYrY/cM9OQmE+KNlDpZNnvs2rrhuH
6eWlkYaQfLTBUZDm6VCc2WPSGr9INDR/dEm/xS8aLW4h6y3HQaTJq21SWGJr
SuUgrzIh87fXgw4z/ixlxEaEp7cpUTudVrro426DQLrNrs/L68i0rsbQ2WYp
m8DckQuTRhnRfsHCa3QpdXvEjWaa/sCXSmTa14KdQfTGa9wS3P+3rbBoWSzu
WdH6nNWUtSQ61Fk2aRIOTa2tFLzsYRZfJHnksvwMV0nWr6tfpCoEJ4kkWsQT
ejrceltXgveEtAln+xLLotOavWtWg75zHtOuLFuK7DQWntu52+CAc/AvaZal
fGJXNWr2ffn2XLQJ9+VdGbJbXZsr0wSYmRxmO8/eXncITFteCc8weO4sr6Qj
6bNdj24Q2OdpIS6K7Iat5K0+JgGPkZLMGL/HshqjTyDMvyZqCa4uwMl61mLJ
6o31jUU8EUlAkLtn9CZ4wjXJepZ00JCe2FwQNERku0UlaQGm4rCJqgsIZkQ8
z32DYJ5wKw0AFVyQFn2WCKyL466sOnG2H4u8KAgbIkMvtT3tejnJmUJcn6X6
VN1zYGyYu8PNFIdFZVxS1UG8q/Tay6udFIA40Lfdoqg0/BQ939/dmu3MMZm+
0yZO4GZ7MQn+1rKAgtd1W4ubd8Wzw75oWY3G5x5uVYUUxsVUHk06WRP9tAwQ
H/Gd14HUlTBP8vkJd1fuV3iAcqw4vME+pHppb+fh5Bnap6vnQ1/QLXzEIhiP
tcD5oTZ23C8t7K3X/3RRWqWaLUyQ9ZQUSYlEYHFJ2guZxZZ1BVvFu+cN+F9G
ynG00OLs9K26W/4mWp0Y8WJOOHMQdRoztdtiAv3v8+a7rth8pNl5eNkz2buE
AJ85C+DUE7qh5pQlFSvPfDFMjqFhqa/IgHnMt6S2SBjNuShAx3krLVCyriF7
Duhds+FI0mw5x4KHg24T9M+l5NBJee1q0iBmcbjldJlEjz9uChM3ql3F3+Lo
hxes3dBLks2Gtm71+CkZ6oMWLS4uagszUmf1b8LwHrnWs9gUceqeBT2buOOk
oLSTPoZ0RKd8WI605JV+Ck+CJEoU75pJgbR4JthpRmb6oQy2zUCrY4fgdniG
oV5EDOISU3Xa4775fL0vioUcxVZ0O58BFFZ6rCtboFLNh459O290VfvgqIks
qfbDRie4JVIZ0uY4tId+LEvutO21B1I60XlTevIAtMD1r3lo4p3y6ygT3Vaf
vqegkfOslxaGA/jtdKOo0l4P13NsAWE+cUy7p/IRio2bacgWMVdC0+3t58p9
Pu9YoCo0Tilfp1wVH1Zt5T/lkzHDNbt8e8nWwEKbNrYPMbFEcBsEU+IRpEpF
NYMT9EarffDJMidN46p4Kv1i3AxHuE3xp3HIAxXjgPYJ/p2+hYTWjK6QJRAr
/DRIhr0fgtJFRyBUw+qpw6Q/Bb+Kg8Hh8RsRXjqp1z9m1bMHOm3v00J3trPX
JF7OLy9cJHNor1+f3Z68Pj+7uBEdCnwg3iQFRaXKQbTHGR08EYgtybrd+e5M
ft+8u7poU0FLkvbydmns4VEmzIp68ZBxMU8lpU4aBa3pW1LvuY9M7zI2JG3r
yYrz3NcL4MTRmmg12tu1Zh9jAPmIdQwnM0kAbRvHyhma+7W4pAE4N2K1VjjZ
tSzQ51DrPCLXrR2vbPgD3tMWyM2WxXWQF1xDVxKtEzJeFnPMsjf7u1EgQyxA
2gntUQ1LrgoMfuWWJjDozvlLb2VYubyCaUsK1tfLBTE3j5lY9cRsDE7ToK/C
6Ml2YKAGBgE32qGgtwkEoFx36J5LOjjJCnU7ZuZ2jBGIQZvFQ0qn2qy3T1pE
LVH6xicsux39VWTiPF++ZyxRymgCSH7MMk0qnpOBDvRKKkJ1QnDMFqxN0Pmz
Jpx82KXtRF45r0D03SgOL2nd2A4w84NRQETFDSddusiGYkyMOH36+nG2bsxW
xWzWcqVoKJKGGepFca8X/XLLpojSSDT9Qk0A6SGxyTaMi/LJyX4ye67f9FN3
h6nNnaIKPlLqq+CqPhcSSQaq7Ehl39U9bx/+QRJhbCbhNt+VpN95mkdAq/IJ
Hji0/ZsXDlLf0ljGp6qBbSuVSxGvLDwsDgZfj6RceUQPU2gx5vhy+J7RRqT7
FrN80YiJpy+VHBooTVvCyl/43UN0gMQH17tt1L6/shO+DXDBLVEjxDWLkb4K
fmrCXy58iqxoxn3ZFcI4GK87/CuDR0LquTQh6xmKaT6Mn/K+d257spZtV1Wk
oGeWSItv3pfS4kgxVcUiUp6DUik8U+xWYxKy3Ny4OKjuYtCfVZwAuMx1dRSs
QzbG6+PzN7FAY1U0NIB/pSY9z20JbH459xoBMF5VPxxU5PKMiFI9Dt0iJTDv
zj4QRQl5hzq9WjxwrFEy/iYby7DkrNZUudTXRrUmXj7hYWincum8emuwVvIB
iwM49yZ9ZWPsWtVCYUzS7yqy49IasH6fdoxmBh0cbCuvQaNDg2g/d6WmUYzz
BU8X7igZYtJyeFRsbef7lAlLoocxcc5GG+fEcZUML181p66H6gdmAVtM+K6Y
lpVtpVUJBXnfE397L+dCHaeacdwpyRtUtxTsKLKDEy9Cvrl7G5ZBnA+Ih1Wj
DUMT5uoqN9yCHq6KltqpZxt67pESsWLBxVCR6G7EnkrmCOzo9LnhPxCL5OjU
CQKIK6t/syVLDrd89K1hXnQRs9BOK6kA6Lu0SGGouPEORwI6DW1etjbZ5z+x
gxqkzh8N4iHpUtMuVNJkCdyl9W5Gk7rI/9cBpEZtc3qGikdb3dO1AYTPJPCp
CYMoOM2jUgn+8OK3jN8ymgXLm7VSe9O6uR6M389l1iriWi7fONGAnWp9XKiP
CU3YQ+8nHnnuQse/Y0A5pjNrQCW5EHQarL4Z0kc/sB4aaE4KaDpee3+ltUov
5VBDny+m9arUPkdRIqhjKWMrCLrPzCY/FqZlapcvjuAkXJOabz0GbogsBSXL
mrQ7S9WxuHlvbxo1LX2jpST/NYJO7lxrNdP9b/cOv8K7ouZEuz6D/26TjsaJ
8/zB+gWEmnZaKyhk5WKYh3ruW1fVHdG+V0jK6smaN5gbMbae/7jrBG7u83xF
ChnKNaijXNZvaLVqaAfyvyAlnJuBj3N+3e1dPd6YQzoZUBiLVPYPYawty7rX
nlhITRaY0W3/+6WLzrLgNff8TvQR2ZfDF19//Minb6Pl6dWPOM4YegcugOoh
grN6KKqS+b9ViOSsTzhPlGj+dHpFXBD95yzdyyU1XbSl5ofxUuPDNM7eaf3N
Ctnhv3VZrV4c3K7s34yZ13tffvJGEo718rPvptv+Wnz6tsfObWT5zqJ/9qb+
8AMfX6bNQHtmr3Uy+n7a0d3kPaQdzLCFu6iZ8dTC6qGJxytd/Uj+3iJR7tDG
nN9W4MLjZZPMrm8ofLf98x/+8enJyQg6E5PL2yclZB+NyoDsykg4DcIXNu7G
/FbO0qShS6YHmnklJs8AaHHEc/PteR2KNKra2GKSRbMXjc3WAHFc7gk5k37u
vQV/LJNhdF9DLjhp7YucSH3NIGOKDf/MuVmclsKwJMHUX6/sOT4YyhlaByBU
geTxpd0Pa99YKu0F3m9/WbmiLfW6YhA4BG88tiaT/oeHHBd1ybBJWAFr9CL0
fO2gTrqzIgnyVZhwT8UOLkGSut75dauo2UmPCyk0vRE90q8Vc8/9b/dffPwo
sd14Dk7ncPgb5xAlWEo1s5QSsmrNnXTbaXaaq9nZL5dMQl4YfD6ybch2jb9B
ZH1Rr1QMRxG2uG08POnLOYM6Q79MSAJucMgthK17oGXUkkZ+V1aqo9Tpag26
ayHYyb69HPg2AlPLJtdZ8kHuJDI6qxi6ipDkmlPke4FuT2shulJIv8bmOa5o
+8G1JdgY4WBHXBgzLJqThEZx53AZVtF0ksSSbQmYA11dl1sWEOx19nzAsRJ1
texJkPE9iDh7U+LfOvQh4wcaARt7zsGyLxyjR/nnMvAEmu58XZWe387zD8jy
c+FBeSZqjdVb/5HrV/dQKixpSRCJci7k5bAx+A8ftGsvFdPcE/Pg/Kt+puzS
iaUaaNKaq282rj+JT16RQKtzrbjsW8F3y0M1nb6aj94R0VlFHFXJ+ObOTUp3
kiJmnAqmuiVwq/rOVMVdjlfAId7pGc+b0MtKzkpa2wc0JTGGPY8pVJuwxTf7
s7yeKqnkuKQSm6ZSE1Jx5sZ+29Rtdb6Ea7dy2tplIs2walcaCxFRt60hkDac
2WMv8ZZKowyu9k6A0L+1VXxyawU5E+X8rV1f1yVqyMok0Lty2plWxBuH37el
nGsL9EgkxdIDB6AkXsVtpvvBcR7oJ/kXFqUrNH9U45QAWJzfbIN3QQBHISH9
nM7YySBWPfZhyz3HO2q9/RhCJ5ANJ7fd3tNJ//777NX5azL+f704Afe420Td
AHWUQmpbXvDf4hew0Wi27Yu9F3t/8LataAG7vg12Ek1/Y1o2HHDCaWfFByls
glW0mD9qdPu+RfPoIa4mbg242z55r4z67yrEQ5Mpkhepysy5V3fEHB/LMXCC
IylqzGdkua7em5MBt90TGUEQSh8AX3GaXmMVVqQYcD3peh0UPElDi9cgUo5d
tLu09iNWafNKqsb0zzSL6/PwscOX+x/R2CNc+IoI9N4tcwiUi8IR11+F1pea
0M2irlrlo/6grDQXR+TS1xC4qb1nqzWOeA8GLVtGnUoAaDmxHeDtjoyQ1ru6
Vojq4vyw2+FBMi21rLvdqPq7GT3oE5kecLcjPiqriKGdB1tf3R0ELTIuRc77
0VOfQAtxLxaktJg7hAdgZM1KTZOiNQArRgVRLUAtWiU/2lkDBriRBgGYbG6d
dMCW+vxgCp2xSgr6Mhe9DBF9kFpAOWEZPU+UJeaGCKC6JvDTjizXgAawLqQp
ztZc8BwCtTuFsnIJZcgQW+PoPiYO51BQ3eHQ1abvJnA3Gtx9zUpcDcWKf4++
38RvAbxAoIdaZmsuABrmLOenQJTpP+xcznMJsOFguO5DGYemkwwmYn+AEVd1
KH5pSnC/EYaIQLRNvjWJNDcLSwflxunSMxXkrWJ94DIkZOYLjTXzYIWZaX6x
+4ypNgHS24kwC9JIFDMMOy6G1ISTKOcr5QgSmPCz62+Kmc7u5AcoEuevfr3V
SZKukVogyDszY4frAD/m4AN+b6AbeBNexLy5CtX9cX7acJRMLkdm19NrZJpN
2K2Yc53HHhYMQ1MZrCuCbo22l92yCNIWhJdZKzsE7hTeLAbzmmuTjNqecGD3
AzOXTg7hUT+XgMWywENCV2hixk05ZwAoYAMe6z5m5flgi2rhgpUsMZz4aeV9
WnmHo9pZ6lOMG63wh1qwWCZV+flkmjxv+ZfwGgy2ZYyFMfOofODmPlK/1UCa
+EAhh198oUnTJ3bE1F2txMKJ+FK7IncULRDIbjSwjtqjA7vX4nG6VEIeCiZa
unn+Z5osBjaJHTHSYJhLoKpyrKG/qsDyoW+eqbs0hK2aF+jnk/slqqVfDUGT
r6VPzasY3uoFzMj6RsD3PoRZNPDJLHpD1agQp6dH3N1lRie4kbjRK1MyX0HJ
/Fn9FQfZa1E3b0hFEHHrExWE+KE7HKL+rAUBbEu+/uabg48fRZ9aIWLF0cbm
HqVeSHhJOAUv08gum0w33sFxC02HCETiFXLtt8Qp5PMvI5eJjLSo1vN4UAHL
jwMtpxXV7c7+dAtd/vr25wOUzVubK3CyJgLil2B1m/VyiaQ69bPa2O4Krm+W
RkcklojAiIsCIy8zX0AJA+C8MqDx+0cCbEG+ZCW2P3iSpQXx1Y/TWuqhPiM2
SBQFECALzS1esSQGYMMiJvHz4e2+jCxDNe/9Qd99l9enB7eXP/zL2ckNbqP7
Dnrv++H15ckfb3++fP3uzZnc96L3vrAccv377LD3vuuT6/M4tvJ99vUn3ofl
xX3f2G0fXzr7U6MjbcpMViZesvi/WT26xdUkOKLRv85/uBlBQI2NPDWM1udJ
RyNb/TBr/zerb+Wn5Pszjgj03i0/vexOTWrCHaZ3y8Weu8MqRXfrtXhucewn
IlFf+L3LccRc2SFGvkaJT8BXX2btcBCOv7ZNwUEHM/PRoXBicH7Y91PerVku
+Yoy/inDQKBfSLzF/qDmMzK/jpJj9Pz5c/l9C5kRkW2duJ1NW4ALr2bId5kX
ZTt+Djqr+NBbRCVmlj3dRqO3Onlr72j3sljE+iiAOqOVXsO3nCQPYWV5n54J
qT9T9qmEHzJuyJq2+vYhpZR7lSPdJCoqzJUbGikW3mTdh1I3iYtpJVU+POxJ
fTNJAMrH2TkigYwg2GyPZeOHzks1kLIFehLFkcNXRjVkwa2sapsmmLV2bpNz
jN/xH1dNOLk8Pb/48fbNOdLZyBTr/GfsNnnq7MKeu/yXs5sb+KGub87eHN+c
n9hTB5/xFBr1xk8aK+5/6opo5Jj0sqs3l/S/8QgP7aFOhLe9ANFZl+u8bSGY
274/wWO0fgRmpJAoC7QtJ16X8+OLY23BoGraF4nC00Qaz9kyb0ABJ7XAB0FB
V/roF9D/uI9jBAViAhIvIesQQWNiMAR7SNKXDrLHQulOH0p/54cZmAmAKPvf
+uhiIAghVQYw/Ub9Vz6FkozWWb1xpl7ELXhazXPUm/LiYHhXrnoWXVhEs8gF
zBcnTt0hiVB1vGZE3wAk/Br4S+TlZzfEqd4PaDJIl+JjOBvQeldjmtYO6h7t
DrjK99tl+QCd+Hkm/UvJ0KHNolcxywrhHTgENXi8qJGXTtuMYiHaSpNbcZAd
/cBfkFK80jwS1FWMZCaNbzcITr2eiwanQx6Cub7n2xd1KRjca9akVnezoSyL
dO5siCVabBVVE1GqhAus0hSaQhTdMq/yIavfY1t9espsBWm6KgPUytYw0v0c
nc2RNqiXObB3pjF13swr1vnxJimXviqmS/HFOvwi8TQUiuXIjg9FcO2iiEhm
5fsiyTuPAoPRZZheDLo8q0aBgLNzMGl+m8deFnqDoKHK1g164hOQ77j2U2CG
nxVyVLANcDfgXslAkHKs0tRS8+6UZ4/uyZCyZHnIlMdSmn9KtAezZBYfOhMy
yM5KktigUXzjvkCMb9TE0VvvoZIYuIQoeGDsiHLwz/p3sETyERttc27tlu74
pDABBIL3rfhSG7r3HfxVjGBV+FZ6XD3Av2Qi2f6C517ViZXOlmgyUp9q3d7K
CKIrDRll8opQl143ZF31TBsW1MZqEafrK34U+YeLFjuqvg3rPgJBthalsaRW
x811QxkR/gRoe71a+MqBOuGyokuWehhjUzhWwHyzZxrmoI1KXEsjHrpNUiSO
Q/3hMBe0F0nwL/ps/xpkaMI7RT+A+zn8lPxN820SC6bhc89ArZspSCdiGJuq
npfEZrVizLL8oBXx1yPBXBcacUSge5eWF9m94mSVKG20xNKTJSE98fxinWPC
mHFK33rGLeAOMP9lwRQsNS/ybLG29Cn0WKy4/oVmztgVLjkXKnnFjYI1HdKW
I11GyGUbozF9uNnxEuTkrM3ZgsqilWB0CsTadbnRW9RH2ZNDwW9oHYkXmFzc
I8HXW00JhUHvkTDrLmXUxQ7R9EHPS7i+kGZC4nmlO8QTltO8Uhg9yjXRctNG
Cy24jqZ2fPLm7PbV5eUPx1eSvwAX1MzAC63jLdUYZuW8VEWifc5kHU37Rln8
aDC0RJLNk+23vHeSnB3PT3gL6w4cZ1zW+di1x+PTgxpS0LBlLxSdYUI+7otn
r3fyeutErhn6K04qgReKDlmzoSfmjeKW2Mu+fNDIn+tuA2s/VvNElz1qUxXz
xNIYoip2JkGQcoWwNbEcIZ2Qde4Bf8HZHHzerRIvHlCFuLMw/bwxLjeIEFbM
SgBDole9q1ieKwxPVmfnUXsC+gEoGCIF8gm2ZXfQ2jvL1LAmjVBxsny6LJLm
7bHYNHB2sJ48Mttf+ijKBF9B548muUeufOw3uJnkpeLcxeVtAjAgk+TVq/7f
Xj71kvPL25ufrt5xA5X2S6LfnnwHuuTRzd42it+hv21/nlWs28uLM9X5su9T
c2zLbdtfeHnx+le+kR+54j59H77i//a/ssfsaewzmSodUDP+i/ajx8bTrYus
u+jysyw27/RW9h23nidVWfgomSXzfJFZjMwwAT2AWcFRxceGhGxU6y6qZswh
aMADRHLJaRQ071Kiqa3Gcr2ALpD1GXfKpiGr99eQQo3oW6sE3sqQRCBxrEN3
JdixhyJlItqKxlfE8FxBI6P8PS3rbVCdBKUjKonGtiY5pghk2gOyHwOWSbJN
4YzBmKwyx/azRPrNkY6h89XjX7MajUdxrQfg094vmwt/ujt+jljGr2FLL3ZI
xekjB51RRwf0qUH7ChNcChMJ13GGjBS46wusDgRGJ6ggieKaI0DCeGMk3nCi
nutJtHli8tJyQQs/0RZhTIZK65bA6uU2nzNha7B3HApjWVCTTEXt/dnq+PnL
IPQBNjeI1RjCmKCXrrwHDrVvel+Db4q2uRHbxfeorVz/gvQMVtOo5QvpQrT5
4SeIdr3gbHSOxkCn1yYeCJeh9V2r+kZUIOrk+o21Rl5yIYUUbGolajicuihA
J214P1mpU9NPdRh6WCS/nBUHHk7zieOiTxN7dPZAsiYtzv/Ekvhqz6Rtou27
mLVLbzmpEsGE4Lcm9hQYQMrKRI8e89vpOl+OUftoJbPiI2UqpwJ35B1Xl69f
/3B88sesixrjgSABAy3HLDcMkWv8IGmtT3MUsoVtLK26pDp08yCx20BuPMxO
jq8ZV23t7EfEWUtN4pAKNHTQfeUrXqwmcgzDKX0LwdR1C/vcm+S21C2sycmd
iM2EDDcro/cyfqCaNLeTnggPPzC5Zyj9y6c8tNFIOhLc/9SS4uGR0lfq8B51
2t/7YraIEgq08KbQDjjlmlEItOXG1L8GDmESZ55FDliPqt+3eGgaxI4csYbm
f7R6ypyxuYL6LQW3bSySdhCHSfQGXTLBzflBsLtD8iQsk+NeKmyyhaFvFSMv
1mz8Qn38GL8kakeorVGIEwEtA4YUapYFi9o+EOgMQ+2ozP7ixy1xqrRW8/HJ
zfnP3pcflMOv9vt0Sv/Y9dvjqxABiB47ePIxeur85teexw6ffOzq7O3x+VXP
Y9++/GzV1a9LD/2HhWyTv/8lpf7wwE6qx4bVJzHBf0FV4HqOgl1juRMh8S0H
XEsKtGIDsUOIqdoBaMBK5jWbszC0WBxd1NUwvrQgUZOjN4Z5HYroCQGO3Vkr
reAyxtsFVpXl87qaaqUFUZjZEkR1EmjcpKYwy4S3HA47+JFSx5FUcqF9gSMM
XkCSon20x3Klj7pa64PF77uNzkvjQv5GXChmkZcsifPNrM5Dq7eimuZTw/y2
ZHfgOZPY3dd7aELFDD6qovWJKsIYh0QgCWgHGp5cxeKZ6P/E7FF8yVjUln2O
0EFc8N7v6Ew6xLoQUmkt6+fvnutdhO7u6SFvfUd8G679mbxSp5PsoNTfg+dU
dm8QGvXls4a1hr7VUj9NL9cQu6mbbmN+PBfAj61cmxThKrhuqARlvW5mXls3
YogtBSmerRCI0NYTrenKRiBeu1yQT/20SUXC9fJBKtcDuzb2Kfeh0xy3l9LZ
JdDSUsJD0mM1EzqCUvXIWW3WocN+aDKrmwKtioUgGoPIYXnZ1pkxWzDb2gzv
VWYBEMk7s9PVLo81qrm/mDoLeQA6dr8uRMPzOgrv99j5hsTlukqy8pmsvFgo
4GX+nEuThFGh3YQFqC4ExaW/ZVbFh2K09tj/CcjPJ/rIzTSFmWgnZHxKxwD1
VFe+SYCN2SNXRB80CJkY7eZc9vUzlea8PxlGUROjujq1htk3reEwcfSNPYHF
zQzsmseiKc4CW5cMkc5MUv/HDBr+EalNpPrmXM4lynZLOrJGHUH1HRJLlaRI
qyOPxmN6ibEe0ivDQNgIanE5QKXy+NCybl5sLCzoG9G3To5tO5QxFF8luhoL
XVlwrLVXbjS9ldHdluPQNF7Nj9gSkFMg5oOEkh1nxVlpfjEG0tOJj6QBGC3f
5xb3m4ajQnJquLyNnll40R8UeD8rJwWfKDaBoCYi7GiFR1q9GkO5MGNEw3+0
RdyRjG7jYFw4TH+im/R2ukkwzx3vGmulUfPCLF12I7S8cdbIWrxaFiPHKR3W
ykstPLabFnMJRV25NYsyaLWshAjQkjEqmGpwIl85M1aIo9YiT5peSQ+SxPgS
Z03X+lLTwH5O7K8+nTN6ptCA22iTPNU2o1pP+V8UH+gfy1er5eGtIJi7H0N3
oa33a6+h+H6+0h1XUHA748LlJ63JZHE7CnX0Y0unjh9rq9XRb9Csg8M15xPB
xpQ01IMa6OsTlCvvSAsKEEABhgdo+vFc/MEuCa25NU3/zZqmFq8k/tvpomsm
o9VYr+ymxMdJtn3Aj6P2a8OqaGNHGhj2it8dLr1MSZudwv/ld30C46XL0d74
zh3p7ndf0CKBzg0JHQTsexgwiZt6XqwirL4m5KFz+SxL4pLS7zNHolXfYuy0
aTH6EeNIuy07KcWQQHMWOavjotL5TvIh5Mh4QR/qa0DH0kpOdECu4CoRP2Gg
0TSD2yV9Hg3SN05fIIpQzvUJj2fQ1Kb3wbJp44xGVpI+DibwReXNs422SPXI
ipW5XuyUICyjaJJKwqdIaFjzQKFEsz9NvDlQRROgBKI0iuOSDhfEuHOEeXiz
ENC22kpx+PelJYPpEFDDSls/5mk9Bp+qJAFYIFhz7Y4OybsnLcj7cXiT8YJp
AJPbl2RYuqL7awhiJgDGlIwl/ISGP/WCqxEqfsM/9P0BE+iLISpPiB4B1rDn
ztJ4skw+AG4gFkPDLMP22QfcVVGMh9f1jFTrKvs5RxXUORxKg+xN/eeC9KKs
8XbkLrtj/cz+8XsJvbtoZnRtf+8JmCtnzGKYXYjr9c3V+Vss4sXlxRncN12M
qb8Ftsn1WT/o1d90enbB9xxELClLIpmxsJdc3qYl6FPOZ3LHrjfx5SYSwX3M
z38g4nl6LXs4ALb1TEo24lKU1msVLBMMHOd+pZ/fc+cWieQ3KCRT9cN+7llP
ehGEUYK/gP+l3kr6PV8VaT+Sju+LTVcgxl63bw5vV+HWnmXC5Y5YwMVUEvBt
JR9ZhfR9c8g4VZ95vAxliAzrTnahauDS4sLqTsMsBNCujHq+0AeUAQvOtWyU
15Axv8UA1SaCOp4UthGWSJs0Rt5Y37fxSW3UP9dLn929pevzzuWXvQ/zYqb/
4WG63H+/P8Xp/Xa589B21Tc8dIss+ltfEu1zH/cW2hMjLWIFNXy0+NSJtRVv
U6NeTwnSbm4pJKElZ6SOxpEJF0cmAubciy5G8IVSYdGMxfmk8GxNQjASbftI
zILKzTQaTpfIVYEcGwSwWhI5bAqtV5hawgIpRL5mHAyLwp/xdS7YWLluRM13
+Us618lqfdGkbkF3DrfNerlgALVIdpJHjYJFZQWGaOUrY7tbj7mX1sQlQ+yD
dLHVDu+iFnsIETunETtfTB7qixgQyyKLOmoB5ig+s5t7rW7py08fagPVqHZl
bozKK2RcIlaYjHgE4WN6cSCe9x1ahhmjncfFh11fMNEPmx+mowOOY1HH2F+D
snQhgN+aCtdXf6WF68KYaX3t8+xlRUEdBfLGbmqtjq6NSTn6ObZC/lpPI/Qr
o48U6h2xpqYPZT3zQW6eAUr9CtoM+U402nj/pIDspeVG9vkhcD32QwG8riIA
Ch6XNYQjO0Rqj0CIRLqNBn293yDj88FLps2CddRR8gBZlUA8hgyJ+HBa7ZB0
p7ye6h25yVNBxnCVhWgNGX1pNQbbceXtG+zbJvc2TdJ5pANv1nd/1iLu0LkD
ffHuoJGf7MQPWvI8YQ6CqULVkMJ3szd7gxvmMtGUxd2yyN8reIA2G5Oy87ad
k7CuZ+VV+uSvTAc+Z0QBZlbdra8PQTWZSfn61A0JjUtdVty0a4Mai3WjyGtV
2jkTDd7AZYT/0hwSdiyiKoWYkNfKxsWxqk9Y/gaf5jTUMbQ8dugSApbhUXMO
jYeVmhYfen74zAVfiVkBu1onAvZce6NhT+G9vm27d49YRTlflBllnfJGzgSn
nB8n8RHQZZMQ4WMdd7xHPpVi6rgbRfv39AiEGAfIuFOFT3Ih4s8vi1lZ/QW8
uGh0/CvppJCPGDqH4i4GPsKxnZbAaPF04R29K8Kxy7eNa8+lq8eZIdy5kyMx
RU8Pdu0Hc9TrN6UDwN9sjHLywLmtuL6uqNN+EyyvuFdA0g+tT8/jBmS8dR38
ORdwMvWA5W30Hq/ZSUzD50DKRS0AE5VfVn3aeiDfo97OAqcdNX1m/lSA0LtG
ntuJfOJWXow1s/HuoN/i21HjBspiw2Fu1NYIJMwk4z3okuccID1Jqb9ZIZwE
uCEUJPIhypYBmXySiwwsyFRQ13Y5XUuYD9+AgIy32FeE4i/4jDFwNlvSKO3L
6t8AGy2geE7lWyTFm/v0Zd+/K4yTszj6dlq7OxEznpNJ3nmC3vTNYYZMGQnG
Os/ZxAKMCIRu7e5oZDjyU671VDTo2HpM5lb4hmS+nJVd90vFk2EXlwsGangk
XRBSViLcvFXwdH4mVpFuyzHqNZBdr4EcZtBnGyu6/UkjL8nkT3ztZl8klsxM
Lzdt938EoW4/kbrmn7avZgyDam6hMBE7uL0vNVd/m8lk42+bTHo9NZns5pbJ
pJd/q8nk6cgvSqjznFCT/Notr8/CxGkBKyUgn+uQhJxkl19mIesYCFdaotmQ
ZR17zDgF0kbUXUZLzZLPcVPWMjRx8z0AHVy09WSVlgCW4kulAViQmCRYWlSK
vFYQgmj4zJcRJ2QvK3Lc64rtSexde8d/Pwz//b7z69/jv/8Tv/K2/Sef3f6r
jPn3W8acPBVPzz8ybP2n7/h732u67/199KbugOJLv48/8ffsjYi9ff9Sf+mg
e+mFfo0umPM9uqd76ers7PT2+vL15ZvLi//8IH/LOvzmNbOvxMPZfsn9vXvH
37NrYfD7/ivpXe73/pYLu6XvQ8ldru+Wnpn0HJX/XZbKFxAN69Jzqf2QVhuN
H+peai/w5w3v738PRU17TvCW/2hfdATNb3iqtRixhOph0rGguipmkt9xXy58
h3jkeHG/lOuigY/+OFSseor1Wx3lPqXCbVMqRHmKKzjE6oTjRAQubb3fyRoR
NOJnO+yl8gq7F1qpl6F8drXJfISy3bRSJVt+17CrBvLXNCu2yJ6uT+FUMQvS
neuPaZpFkgp0rFVkkd3M401zMJtB73LFgX+JnrgkesIFtK3ZkJQC8F49rvwD
G/fVeokPCgrW0EO6uliluxgtsODisfh6G2gSxYA/DTaJo8l/61XRLPr2Elef
f6mrJdHbnfe72ZfPtz4mIbqX/Jj8/Zy0oHU1ztGBy94xj9+xFbgRD7QXvBHd
0APgSKLmPSCOVuD8MwP4jI9nNC+809D6m6Mo0GpBNAt279haDsRfxA3W8JL3
uz6qHZ7UACf3suksWlje9rvmu766kA/Al41ba1oDg7TVNQH9r4n9L1bVVLs1
cCY8rQHZUHBxzjYKiE0iv0eOWPnbsDhvkGf19xC0/Xsch/57dlOvyE4Gc2N2
j2WWNG3SNYh/ej2FKzBy7HmXboTSAi3lBf9/iVija1tgJn93KlT5wW/swW/o
/3+l//et8k1keQ2kiKtfVzx+dR3Ho7PD3+OLh/rlb3jsM+AEC2SAE5fWYtGN
qEkSvB4SP86+5Se/1Sf3v3ryUS8r/BIOFcjYGKGfyb87dGi9KkSIMNX/wB3M
fd+t2Sy+Hwlq7PaJCDFszi5TcjNwHtQXCIj5WBNQAAPvohlYrRhDgbqYOHoQ
JTtPVE0KQBH30+Uv2X8BaMLPvzm++DXrAZqwhcJ02ELEGLvWBFi25Dwh/z5a
K7FyTEYBTHjP5VOamNifh2cVXPH1V/+dhQuR1sAdfC3/IGrZ7RjtOJpJxDZc
fjpomzzesdATwFem12f3t773Bf/Ua9z3IbHw6IIrbNBifS4mKx5hv9XOv/VZ
7vJQr/XOP3UseHXCiv2blMP/em//YO+w0+utHZiUdDs10PvbAyQlwL11DYcz
89cjdNroLjGnPR5X4qNHI1UtjJroRtFArM7+IFQLndcNN2SW5ZfTxYV2rJBM
q/AyPAGZlFp3AdLNH4ZXnxsG3nPzDKhi4r0NHw51ILTzta9UZJVKonrneKkO
iWtb+9aK1kVC1bSLy5tbdJO4vLqBrtZqESQ9SknQypxjT/lMa4iPtaugYN3p
j5k6NZGPoCvfR6G8/GHfvmiihXxS3LsneGfvut/XlUVRmAppqR1nqNM9y5Lr
YmR3OZc+rvRRqWHBQZsNhz4s30AaCVTEcxrXHiUApflYol5SklLfYMwH4WpB
nNdLVitwQRoNZDVH4OiZZlfqE4cMm3Ja1VydwyYgWcZc22rFhRJ1dcblaJVr
toSOXfBqvlJwGkSRBW96YF8S0iW+6KL5SbYNkvGl0mXvqcq+z/62xX+61Xz7
VOHCz360VYfw43bq42F64fB99m1MSwCQZVxbhBt7Bs8+dE3ZUTplpKxAQQkR
Or+z2qqct82xuhf0PN233BQ+fu1Ts+AsOE/y3x+mAMHdmDp9GlsCD3TSfks/
3Ivq5EGEb+y3vuF6UIha6+2Ei+ldaG9SLKxITC6yN6zCdWhGRdS1PRSACyUZ
W2ULp0v4QfnYlfPC7XANrjA1jTHlYy52A7DBklP+0NVmpoFU+vuhXpNBZyWC
fH2ghdYZ5BeZW7e3SN/uwFVaYcCCdeAnLSHUaKVnVVviYkkKhYnwmbw6yqt8
GBdta7uiNm+0bmNYYVqAST4CAOUkLrhjfX+4YTLLtrQincRKpTKKBsKlFKLP
ZYoKTHHJDRE1JhSLEbufyWpfC1sKzZRKTkLqYwdx3NjXm5BIpwkt7Ric62hC
1SLRNjMuUoK/juDgRseidK4S8uacJrK4yplPZfQNEaS0wKOFb0LSqtYvyLi4
DGOTxlGLcctT8+tzZCJra2cFFfAyEx/UNgm3bZFElj6mPf8gEIXXcHAV9dSy
RLolIgzPsWzQPnf+YZUJjeTHxehoGmGStJX1S6OXgunIV4KEaZIyUYbnSrgZ
Gwf89aFJJH6WP9nXg1BPF+T0GjVosOWcD4t5kKmIp/nNT+kuEHVEl3eSZ1YA
5G3qiHToaHj7Eg0qrLGSHBp+MszMuthUGx33Ks7TC7rgE7gkxMUllAsNlJ7d
p8NwJVz4qYkYosc3jdgz8RdpuaE0TEcx25HuOnRC0axks2tPb6c8dEMt8mWL
+nje9vCyIOpY1dIPeulzBKQfoG99vMfzRLVobAXja/HyJ8rWaPMOLdha2Oe8
Tijpj9wiUCpqgssmW4SXvjm9tic57Q1OXenHJEAz4uYzzsEgw1+2MymlJCcT
/2nDvgiP9EpafA81IzIK3tNHh0qcvLxKfN75w7lt3M6elXuab5nPyr9iDjP2
gBjn42HfKzcestoAnKZn3rLc3ieQQ8mQlX4BiqJzl7Ym8Q98+WW8GlzW6Msv
/VlmZyGWhpN9owpVcGrT0leiwoZjm6ckEX2SDRElLRVrcKL3cYf4aR6RIj+N
2dLrlhvJulEEllfzQh751eWffg1DMnu6PSJ72ATNTlCpe7UgCaEyUCKf2cPW
vE/eO5AKPhNaUm3tqsLYSmbyxZSKd4Og/7BRoeZLTvLfSoyT2caejAGydATE
XWeEpXDWgsH+3p/m6dAC11uoSa3rLdSbzerGO41kxKqe5ZKyKlouEcbA0JE8
U9JqgJ/LZ0P6/mO+ZBMawEH1sodWLfEp9zkqp+AIb2ppgwSI1EvVqZf5JNSy
RzQkph+FNUs2qO7DTh+piH6C/+RQdytyqGH3Xk1+mTlSkaSubpfm5QBaPcqd
/d1BtnOgkKKdFwG1O1+ruuSLKB2JKBZAWkiUlgpKvgukSB7f/pYz+7Msvp+H
7Jv/4Ztshwt7CQNt/IGnQcrwDnbVRBHDnQ+Sz0aabXoIDI8rx+onLISHrtCs
dm4cQxKpTOZ2kXLyu5TJkSdUFEsdNcPBZZnEpjhtW1S0ltqSx4uyXkyX+bjw
cybJJGDASi0E/tRuv9xmZUQ7QymZ5ty4tAh4asixkx+0EN7V2cnx69e8gAJ1
HEoZI/SJyDL/TNJdxnT/JMVeE9lbFWf97Zl97ubd1cWnVSG2l3OvwAtfFR1L
Ic6eJVtenJHp0AWGaYaKoaWsAxvd1/TJ9h7MsVWODhDVcTEr79gRD4i55uD5
enFRr1Xgqx3jau/pWAKNfvank5+OL348u+W2cF1LaQZlYeMdgOxdsZJ0PfbT
SyZ6nr12npL2ly7q9wrrR47KeC01idlbxPqInqC5XRfhqAToK8vHeX6asHGn
dnDijEG7izVxOVoV/4pwNLhKJNnwdsKInBvwio0We9Mmxj0KXi3WAGfIRxol
t+s+alWk8iisbflFknxVMB6a1XpJpwxav4ZMtV4eZ2ZK2yoTAKqwRW5Arqbc
BtIXs1nie1VI8tjqw4mVg/rAORdbDC7ZNIukCzwLoYp2A4kIgxb6m9hBcWoE
ZdwVNFTs5s8xqtgX0vTlIhQ5Zg3vpWig7x2mB9KrdT49jTUMNbs9bk0CbpIU
+8QorOydd9MoBEz6L3OPQ136kIQzQwKJxV/o3rTIIRbxF86iYZP8A6i9RNy+
BYlIl1Bj9AZ/tELjZif4fp71dKkPBayep9JDMG7uNRCX93xBylg7ZhAH/9Ph
o0I/XB0D6cCRkAJHbZ98VHZK7SltmW0asuj3FY5+lfs6PXPx1ELcVEX7hZLt
2SoiKZacYvYjM85jVUOF96chwQIu3CnZe5WF6SqKdNeqzXQRwJUk+dbm345x
wyE+F5C6TnQXPcvhjiL4gzpIFKu8zWCQJE8vhDX6voV1bEOM2XB6CmNsienE
Lf5aLGuDQFtgYnRfwJ/Y0Mo26l+qmoIXSFw4qB3DDQ8ZicfdNiMYtOR8Oinf
Zs3uUoR30mR9ZTgZSSNazxcBPcM3hNrA7a2x5n3dlUmaF2oWGPSWUPGzHxkb
N/StYrg1i7Yokcp3rYl5kA8v/V/svWt3G8e1Nvi9f0UvZ80YlAHapGTHoV6f
GYqibU0kkSFp++TkJHybQJPsCLeDBiQxkf/71H72pXZVNyg68bnMWsOVWCTQ
XV1dl137+jypcHNbBktc9ztGScUisJbSIksG0s8QVFwE71azj5pJP+aylkcV
qRS0Ng0rRYdf3zfHWIk0WZKvRNEkVg4cHOPAA2fuqOtF8BflW/lrBwHqVUx6
13wnShAh5T7dLFlvKwmXo9xBST8FXLLNRznJg48AkfD1VkUCeW32BI/B5K02
OuS/QPFpH6HQv48XXmZoYsamIRlfWEcEnfsaxJ7s6snYA7iOiJT7tG6IB5tN
ZBlNMLC6y9yb4OyyLmccpdSEonaT4EMqF1dlyJ6VS0fsyRL3jmDFyKzLJdSE
eDQ0U9BpKFWHOLqV4FuKHT5M0FrTta4fhkdfpaUcEbtQ8Gv0FNrtd0hrrCGM
m38ac+HR2kGnRtqpEVTeaWd3EXftxVlYwJeUZA6K4OgZy5tgxgA6jq+4PJRP
e92dLCme9m3iINSbCZzU7eaqJY09vCsqhx0Cd6H6CqmzYbeSd4T6LKM8XTCw
niC0w6adS2lu+rgewOSWFehqvpjfzRab1s9E6AAQS4AH3keeawp9L+mytkQ+
4fkCLDnimsxxwNkMkJqoWFQJJU6WGg+xhYwy3hCc7r02LWmTyUIQz2jnhWUw
1ZGV93DgRQZxwokhnS6FgrKcppFHiZLY7g7M9SKC/pLIcMnWkJhZ2y9qeC98
e/jy/BhaP8XSEslVhtuDJkf8dGpYcd9lxMHyZElqtLtVplhqlEpM6nsyTHaO
iYOuI6Yh6TDEa62KDr3oH1vFitjMaWc/jYPsp4uCgAzvczcP/4Q1MNpQAT57
oUse2fvc0zuFx/yzw2nECQ7pvrno4usoiZEfBappXDVkMkBRQR9YsLNz9L0t
EwPPOigKbMjpHW/JGtkeXsYhRoK5pW7mY1UUsg4ecHt5M11cEaOS3vrsrtTT
W7TgeGhH8A+yXpLXZMW5yDQDXuBv91XTcRTO9Pke0+MIZilnuBURWNoEL1iP
K9KinK7mDjs904kJj7kre2UNLH9CL8KxmSFME6FT6My8UujKogqn5M2GHBlq
X8hKfrsXNR4vQG3AO4fZ0OPIMbiRw4lLS6h06ZZ2LcPLxXNN+3F2euR8x22R
J3+Zrqq1npnETM5Dw3trhSA7XUM9r4ToUNpH8kh0H6Tc8lHjbnOIf9nBT1WM
Ayi4V6S57duRhmKErF21oT+pJBUI+/C++dBkfdLBFIOBayk/eitH2MQBMiEm
V0YURHV6xAocIlWZQwFgrvd47b/LLPRdrDCWdoq1BQfbMB9+kjTzG9ENI1t5
QexcM04XRSm+o28kIT+tK4h3svZaTdVnK7Xil6mmhbEHVlboLz5YKG9aKMCg
/1AnmJgBC5tAJIo0xCWV7Sv4fhmDFFATFb/UYpW8E2VymPGlpd1YSzwErXgU
r+poo2kQWoYunAQFtQr5Z5AIyfmVDpCYofxOYTkyE+Hv69UVxgmHYR1MOcwE
4rlIv+Hx0cKO8HlbxwGfTgtIXaLC+r8nC6KmkDeMZlcEtszKObPN8mmrvGz0
QIh07D52V9H01eybcoZo8k5rCbQuLWIAowUaFGJJ2L2VPZxGhmZLUN4mjNkU
LsVMTkryXg6LJG2kP+c0yNo1efG/D3s7PEhGJ8gxsvpv+cPY7GatBGa0biiP
ym1sOHM5c4Q+DUO1Vsgqh1fZptwI4nDAbYIlZfyb5TFuPkJsEV4pc6kVhWfN
MeMjdG49JWCcZgIXJB+utylwOlB4SNCPa6VGqGLLpcAyIMkUXmJjToGj0/JO
+Y6Lsz++DBbF2bBMP//h9eGPhy9eHj57KfDN+vXz43DBLuDOnTZMXm0FClSE
jKB7bG1T9QitCLdFwzM+jdiGpO4z7tDNIl8DERHe7a7EVcjW49saiMPmWBS7
qCokxqTMrIh47ebd1iFK+9wuUNdLUSYhIVE5oAgeNOHsdFfsjiKlj9NAyS2z
dAMpfrkCWBq3gXzh9lYCHPSETc0edhqV3qO4dd3HVP339pro8bTn5cd7jl3D
KjTz1XKBW6vZmb1UUkVxTOeLVP3ldldi2G9rgeLPw44sAI60Nzu3pL0z0A+i
39ebufS40rLyBGdcgOxNj/KBp/7ec2YwwW5paC65DsFgyW8r5JPyZMgZ8UQw
OGVwVXkNSBWCP1koLHPK58XOSs/drgvDrMgNULI2zDQvTm56OMFViUtfiY9W
N5uZuEv8U9jRTG0ILkrR5bqXgkp9DvtMoXpxhSQ0DC0J7MQm9j1TDccm5G2T
VaXuI610JTnwUuLdPBJuABTDRSNojgUXOki3gLWYUXojgYirJ8R71TilHzx3
5I7nKNQXw3IPuS2vC39tUq+x93j3ceftyvKctrbxYLJdCHlcgRFyIymviXoF
UygqQM7DynWfi3fw9/F7gFG2moxQCytQqWqm0jh5AKysRrZ4oZk1PSuLcWkZ
IS5x0YuP2I8ElgWzoHjeYSVg8OPNQoyMHJNg+e5iKAenXG59UeU0ikh0299W
oiRRCErrhbVunnyNNWotbSeHMLNLdn2BjbLqxlovTOqQApnjOinO9ZZ34VT2
Xpf3yvmBlaoJaD/S/aJn7nOUjiy2Zse67MLvaTZpehRB5JhDI5yznrmCKe5Z
99p4qXBPfPOKRsN5Ttkw3m/55cxrFUKOCzfMyk0cXhps6DwXoiF23ImCyndn
a2iDYtJJx3n+sThAwSZi15L19OeTCH8aPSLgD2SwEnhjeLTD4ASF8fnxjy+O
jl+8/vYkEzNBjn7BKm6UM1yRHccGWpPoWUoizjkCluEso2Dp5MTqNdvMOVcD
Y4PkJ0flE6Ma3r1oQRA/JskYEqku0mjCnDaL8JKAgJRJ5g/IL6eB2Ubcc3Wa
1dMZhE5pXLHwtx2DBTa968vdr+ym33791X7Q8WnuuDPutp9/LnKteaejhzRt
xGoMQxNGm0uDZJkVKmMhG1jbR74+8GmQ5Uy7M3nH0JnP7TN+gSZWpL168frk
7PLH47Pz8Ov5q8OLo+85Bm86QOgA5YYZRzdtzaYF5UYBnM1Z9dcY/tGyK/dO
0TtMKbnUJFyYRIBRJKGje+59Eg755O7dfR68ohacYVO2FBdBM/L6tyBd4Zdc
3G9pOC6dH3p8Z0k79bZIYnJ5PR+uTHchdL1a9NlOHIHu1OOq6o/EQYEQoURJ
YoXmiE0Wqu1Dm3X54910sd+UsYicxbJF1e/REqEk6dTthykIy6NZawR5zpj1
hQT8LZAfLOpgsAkJcJah1FXB5oXThPLNKcPsWpR0Yjihigh4ZwkG+hyfWSAp
HK1hxiVQ0cIvKpgANLAdiDur1TBU3U4eBmX8o7dUn+Cw+rjKJPTsQMhCE5S3
WPrSB923uw3XOYKiaUpiBzFNUM8TPBCB1eAnkrG44uRCfjYxl4n7tdIufNr2
8eDQuw4t7ZwKQhgd0MjqlKi+D3gQr/xyceNgBU17sRUCMmqHQyjuN2B/RiTC
DCUQRVEJjRXngIqUW7ybK/KoTwghxmvxaU6TXkUveQXcwZJTX9EoX+FKEtAq
IvEriSZx9up6fFsbGxWC9MEGbThTLiz1T+yNqWC5ahipNKjBRExJtWLymshR
9m4ykpQdlEQSUZLPjSAGv6bAzq7bTzozAlhGTMi5R2Ykf7Qklkg6tYeGpUUN
2EauPkqRG3nIE2o28SUDujGi6GZjTUMYqQ4pB3GBKoTZkvCLMxhKLANeLetg
ldbrMrffKTdWxj7M4giFSlL30OHQSmi8SaaJeAqv9lbAaxk61suprqRiMdqB
nhThyHUJc0bd5kENko9z/HR/j+gmyLyjcAARxQTJTpGjiPYi1LNY3wrm3MuD
fBjzfTQAAOVO8cOBLIZRkp+Gb4ngs1/MFD/pbaHhSXih9Oeb+xCSzu3W7GFy
60/lo/KH4rVdJalY2QNelp+X50VzwKDd5eALDrju9FB6ysZ8F8bq5UcQrr4p
B9TwDzu0VcufirMDdK5nEJ2g+GibL9HaD9ukXtkHppqcMQfyiNNLXjSDlzvU
bPiU8aWSsljSKUwuaYsvt+3wX/JwLFN+9muapPKz8qzI3hXjd74jXw/k1akm
7l6hlA0pCl9pxiizvgcKdjfB2Ei2iQJsnOfbS5r3+5DAxn5DNYKoy4BCQC42
dUwG3YntVd7BS6KrEr2OYhqU47AhN5PaWUiSNoY7NhM1Mpz7QJ0l1xjvt4AR
SazOvB0ZvwD0fa4ksYpX6mudcJVHw8W724rMyUVuk25+jsBdpg0rRWpkOUhM
DeTJb/R9nWmWJoAXXUNNBWSH6aOa3v1NbH7W9HkyjVmJh4wLwAsZkjuXwSTR
wVaHi+GVJ00VwYgIGj1y6XHxPk+aJTe2W/rni1eAA7JZG4unFqIom2rRrHNS
dwukL9ziF0ZS7bBglCMKJFzsa18Wj+m/bdZpeFNNwznXQSBpqeCgImUtSRje
v4VSApY2iBI5uZX3YdL5OxTIuHejdqxgB4JyhRqjnNlK/Qgp46yyCXDNrrjn
ltUNk8ikUdZaw3N45JJLQBpVK3AWVkqOSFV2tgh4zYR2MUebuRFRAs7EdU58
eGTwzpbrgo3qVcYoipw0azroQeJckKoXzl+lYiRFQl+hsN9bY1S7s6pvkB8q
1m0Qo8eHz+li8HYUbgXfF4nQnksxvPScVgv1vKnhPo8zpmgY8k7pq8SNm1qz
jK7DuQgkp7fdn1cbWwrqtdu+S7IXW1Q4WPhndE4Gbgam1rL30F8RIdLYSD06
f/VR+5TJ3Fn5Ft07nN7MXiuOzCIWbng3J8aaxULVmMrbyb2+QMowBzZdojnW
JFN5uAxuZ5Ku2SmMLBzUx/YnH2lajVXeAbqR54UxRsKEB5WVPG0MgEYaKUXh
tXgKH8xNIfKnDtY1Ie+EX0aUs1ewms1Z7zYSQtQgtiPhwId1az2XiOwQri32
CNmombfeLHAid7NUC8l5mCuEPtkyvZnvmYtZpYY45K1mgxxV82LwEbjobYdN
08bDV9zcRqkhGyMCdWZFQhbCpsiYJvWlCRQ/adBD9iyIGqJ3R1GplSpQGybG
bYQuEj92Co2mvto+/5EBGVWMfAhoCeTCv1Wfv3bIpcu2jDIFIAnvPc1zU3ni
UNSD29hl7BKCyfVoaCso9WuzDLKs9Mk8rn4kWzJ+8zgTJhHhR9m6NidZk4jk
0b4hCTWO8C1x49P0sCuvaTMW9+j/zhfHrHqDDHdSa4JksXxOdu3MqCKWzp7p
NAugZb0rWPfjMpAaNRcqSY0hE8W82SZQ6vIg6dtmyppCT4azsE6l2wqpdYK9
0cyL5FkXt5stEBuct83xXskVW9J5um6mWaggplI1iHhWyausfXQex93lyetj
wQQoOBePCzAERT+ZCSIb4oNIuuKiCVJ2U7i9wgTa8gVHT9McB2LwCNc47Iq0
eLDdBqPhKqT6RqrQyHhnAZhIOISlNo+5RbNJO9LpRE3T0HOSuG+GfiVhbdIZ
Qnp8hvVyS1hGC1TZZ4t0y1tpZsY5A/BAmxF0dHENJ/lP9wTs8rJndtoIpEPD
RXhB4/RL5w3SbMP/rysUHUhuXOtFqWS0GlG2VhtXqxV8lBKkz0kH9HU72XLF
rCahnkjoanpDDNq3MwirMLRUy6OrXsi1VoQBfb3W9LnOaZIW3gzDKxGGiGVZ
ap4P4yAXWHr0hTyLD8ZgLUyQGFpvqcUamulOr0E4aDGrSlazXiiKGtXiMSS3
xTiUiQ2JeMixKwrTEd5x7oBm+LDEeLuYvtWE4rzThHoUs/Dm9frdYvUGkM9I
SNlSxtN9tST+DWGHOFTbrDdc+mkiWPyKclaLXk6eIFKpllhJMROkE7PRs8zw
FC3OUsSx1NdAQ1TIntYjhS+fihmSyU5VgllXowHEGq+uoI0LYTysus2aOtMd
nqJb5WTV7M1aKy5Y4o4tV4G9iO3mag5EyEM2hL2jwYOTUKYv6udxnHcSIa+B
xlyZDeOKoMT2ZkEJIqsoS7PzOApUeIljtm0drGMVOz/F2181PlEgleO4mqQU
xx8U9F6A9H9YcmWTtFAUHzt0NGMrHjmaEg4R4/CfRKj/lUo63MFz3wnPidhW
MTTsW3wMyTWHLoEZSU6NwtkONHbciQk7m+B36ByBcOzwZZFhKHvoTb12WaNo
LKpTnRYlO77RrHjoK52VlJjKTEGJdEx5SrKp0V7eWC5HO0d5bSe5a+vTNnec
YZtLujytJk5HE7jDaSXIOIVvmSK7DdXzpLY15y4bQmyPuqFYJKjo7mRjVxEo
f3sacYHhN3lGa+V2wWrub3ympC5t4kz/J5c3YonZEu9Rc21JxErnaPYkCz5u
cJedq+GMVKE91MjhNZdzC6IGbbqNjBxmDYh0awtIRm+DKMVUIbLhnUjKaBFX
Pck7El7XG/BqqnlWyb6NfECyMlmb88rIx1Ymy3BzbXTytWg1iOmUeEmzZAwG
h5SHdFIAbFOJ4JHxUnUh56fQnR6n06bPCwfGRZvexfQ1RVNcL5aFAS6Q24eF
g/m5xR1p3uFqnfunYqJ2R8/u7Qu5Y8RMlzmI3XlXhd9hYGCqc51OnIOMY1dp
cpYkPwk6O57hu+8Uah5T22KWqfYTnLPH3eOnm6UmL8ePJGWXHbuJr7LPbuhM
9Ee8D8119ATALzkL+kohi48goeR4sGiA5Wcnnqfb6AkX9IwCaeJRD5HNwJUW
+WuSS5Hz8wkbx3k5+w1GXtccyJGQXFA0WvGbQuyoaykRJKn8NQhTuNfquCX0
5sJyycK6oIJGg74jiUWR3vA1iW+5jvyhQQbwCmUVSow1kVzvagES4xNBoHEd
0kPcNxIBIA9vwQ+xlhZdbAgrgYD4acX7xX0XH4lx7+WCYFFYWClNAs3MvVKm
mfqfTbWdwplPsbnuyWSkJkTrRWE0Sd2gUmcAq7m9yRtBvPC9u6DoWM+dxyRV
WRWFdiLfgmhDcv7YhYm0j4qPH3PDwyosg9Uq/L08I0Mt2tHdfUq3j3lJcPei
p1TqTJ2RX2QVFYYdJ1lsQ03eotPCHwvIT3fHRuFSlxnZul+6uGPJIjlBGQGW
fXGYPCJyLjetZE7NF/PRdUV5KLpQDh0+QNGBRGMrJgb6sjy85MXjBJGEDns0
xmryzUIcAXc9AGzuUUXjdFd7WD4Y5Cxq40wlgsV4mpb05coha2THc5B6NzeA
jcpA6IotdSguPtmsYx/AUis90ZiwQKf0HkCssqYnkOqocgS1KCv3OK1wdG5Z
GmLUie6FWQizVQ6aa6talQy5DUvLrRrlDmCt7g1Zc6UmyiMlQzWG9uh7icIJ
YxS9x90wolgsGTs2KZ2hzaRlaD4wiIhmDCNs5v2jMhR3UdItSwAzMi4EjBdX
0+ZGEiwrDWM2sAWKWKdatXfz8e2KIJRbQDj695NzwMdb4LOqimiKYAYgSoZY
/zIjrR4r11ahKO+CPMOIzcivxaKqYUxXVziDS5IjjAfGEhHEPa1P5fEFNYdx
uYu/BR5DG+HQjAbIGoYL5vPFum/EX9JtdIe9R/U8jhOlPY46JNdVm8TR8+mE
Y53ehRcxcgzDJ4OexZ4eaTuCU7WQl66EA9vy9jiZMDTGxOvYYG4NXU837a2j
QBb7NMHFTCDoKQOOoF0pXQ8YA209qyhoBhpktUhn5Eq4ko2bQM+gY4vELbBZ
UmOwBHwU22rlzWMFfPVVLBe3fPuITKTbn0ZC3S8Kw16e89udRZeztgB2URZJ
ubu6P66GPJmpmOFjKtnWoLQopVuMyIoBWSmpxNL24EW7knx3fjDijcs7Ps2y
yDVrc2SAA3kCofhV+IY96AsXfutGG2NHgCnBKwmnkMeLEwwOV3EXlAxyNYYR
aWmqhwWr0ocptqILwFTvqlWim6qnhzWQjqXQe2+ivyZWV6FOBR/csOjTw0BZ
aa1hD6Z3D9odb+q7pZBbf4XBUsfiZdY4tjSt3lSt6kEJNNela3qMRr3uicOQ
YjWpaRGR3NRIgRTpbK+CTu0P1jFhmUKOOXwiU1/uT6//TbAlmULjiNlBivTv
cg0Q5Ir38tV0MX7Ds69Jr8pBXCnaZ4HCgbkJoyT7zwXAFYNnBLhvtCwxMICf
CjmbAwomQi8Bh/Apt9ynZZpnyB4PuQOAs9JIBmmHHnZaIyOEpC+5w7XSIrbQ
KtZqJHpShLakj7eas+D7KdeBcVJhtSNKajLMis/I0J86JBN5Bsf1hbnSsBrj
7RKEvtlUq4lUMm7mTVhlkh5x3VAczBrtAIupX5BNGqmnYw8C3kpCFs4Hir5c
4oFPeCZDB4CyoWgBWjQrHeF693k4gCr607AcOX06CCnKxXcjRFCwKX6EFVe4
1Fe5mOawAX61FIutxo/3NcAQfrsi42Y1TgJEIuhkEgcajin4Xq7G4C3+ReSH
1Kl8pgDd6sIhwjE608xNLj0bIlRzA2Yh8vniDJ1ENBY9CyPUqm0jXhh81MhY
xlTmJ2+eSURdNcdiXpNkCkMZVislQUQZHG+zd6DOhDYwb+EqvqJAqbHYDLZM
UkYhTmeywGRmwhdnFCI8X0wXlBru+h6epes3heuNw8NZBMgfWsyaVqismJSJ
b4U9JjS2Qspohjc1KFxWxUVS4jG0cJL1Zk97Iy+ueZW6O6UupfUclaJI0qFX
WS5+D+tiHBtnmcurp6uW5ANOrFR0SOFUIh6cKAVxAETpVcwhMrdNwoAneS5g
M+Un2mgPkRy1oQeGE8kR2y4awaVk2yncQ8XNxDfEgDwUShCGIWu0LR89GuvX
60ePJCAEL0lccqIUUJiNixxISV0z3QNrsqT3esEiUgK5umW5VfK4noWHIjGN
9gS56lh2r7mag3P7oUKDn+Ua9tsGiXJPYVUh40iSwClvDl4L2S9Wl2OLBHXZ
6oZC5zsiE46LFUQMl3Uw/HIzH7s+C20P+RXr2RXQL+MSQQX/fAIwfJ4mmJqd
UWrV6SNbhWwometsusiwefTIVoJOF+tOcRrLw9fPeQS0hIaDVTwcywo4aJQe
f3R2FATm+LbGWfrCizsZ+CXymiexNM7QIf2AChmIrPi1ccK4FFtZ7QJLAVJV
JNu1FolWWYzutDLgLpQu2dZMxpqtHZ0d3UwMCR8TtAdBnwbeOM0yOV9hxFRT
Mc6G5WqhHERhxhbviD+J+sMDJK0XtAql5bj8yoE6H2tx7HJyuP7VLq7XULGv
NjdtIbtTBbNJD9oMryXUQ96ysL0lrZk+UurVdwt5fBS9UgxG6XNi7EMLrIn4
B2ZyvYJDpXCM2fLWB2EtUTSL/UKr0TsSmyn8Y6oxtktIFVmtHlEhgW8Py5JZ
y8kNIGYfP7FAhg0igeucQRxGHs923JrWhN5YwKTDoPAMfUJv8IlsX6IkOa/Q
0rs6Qr99gjjUJwQS9/xcCdySfXh0eE7UcrMGhu52ZYky5ShxpWVvydWd8QNh
ay/miQa2YoQr3naM71gA3/EKNGwLIXEQIES8q9VlJwd3PJeYd8nlgsBx7eKd
hewbxDXjeo01/FGoy0A4IRAmKNaqJZED3Y5eEfDqEZlmmJBCu66Hneu8QVC4
Ygk8S7P/C9WTdf5XNQGVMZYqPSwMU+95LM6HY8ULrZia5hnJI65l6xRUfTby
P591L/jgZVr5YXsLn21rwVr6yFdpV0ZJs9t/8MAPvjE1CW0YBqZOkapH/D47
6VO3POOeDtlrJk/2o3bvCPzyuzrd/Azfbf+4Z2I/lN8fHz4/PnPP293dffjH
vZMU+/Ggj7kfkCQHnQc+8GOd8WAPkpFVlgcgdueLH/ixtmG0HQegaNeLH/Qx
t+HMvQNwxfPFycev+j/+Uttgk426lrz4Az/+9eaFYas7E/DAj3+9fpDIOaCn
7CYPfODHv1Y/Shaf4sHdi2/qP37d//FXSVFlj/dISyt7hTXIy4+txpNv8JUk
5LSDytH4tALiPQw3qQ8UMTHzaDN3mxR9v3a+V1mSCknciktlrdhoRhHb5rQp
vU4xdtFibXC0yyeRu48JEWIMQh4ArxjOE3SGS5xvT6RCXTCJvV1Cfn/QlVTj
oGyQJ5x9LUt1jVRJF9j/Kb1my5uVGokGVqubFtXlfEh+7NCxNfLBP4UaKXt+
PvyyFsfvKoV1JoGyrUW+kOuMD/zq7L+QYz8HJVWuXp7/8fXRky0XRgnV+3T3
aMLTPehekVzofsaLy0ySb7kQF99cJgL7vgsTqZz2kRVX1MaTuNz74snX/S8D
mdpufZv00X/64s/hyi/eP96vr7/+3X0X7vGFX351Xd1/4T5f+Nuvfve4uu65
0F6mxYvs7pq4Sy584DJLZNOW7WACShxVYU+5tX5Jl0BMIZ7kiYgei7wSY0S4
maDMErOQFMZfhyWL69QYlFzoQ6lR4DaAtSdOaF1ymnsNN4dlPxcUEybzfbfe
HXZUdwvrST5/qnizC9WyWZpPg1rP5kpM0onjn3wWd4ukf8ZeqsuVm2OnbLxV
VpyaYdY8SFNEBt8wUhQqL1ERTCQg1YwzHSkA2CYRM6SnB5vZMvnXSNqRJCwx
T4b3SMEwlj//XISh2oAy06rCCW+xzfz/nMlCGGXwnzUz5tbSKqhMLD5YsnaE
Kpmab375QldRurpEwqKKkN6dRddIbL8jH+M1GKKwlq6DWHx/vbf/+El1Nc6u
wcy3LB7uESBBErJgIpG1t/WaPmnZlapdQdl3TS4jO33e+yV93v+f0ef9X9Ln
x1uv+c/u88fX6sfkMWTgveI4XAFpjPynI40SiX8Svg2/A3v13Y7Bd2//+Y6e
odNRcaP12+SLLcd1Zke5CTdz54sH9iozTB5wB5sQpT9U772DfxLzIJlEitON
JAdLJo4dTM/4s6MYx8O0ObmdhGV9oEX1V9c2cRVauLCw/LirOpbaoVLCUpGu
6RxUmwFnEAcKFdciKa2xIKRrzIq7nvCNiVlQdIPKeIrEHJXuJ6bLrrPW1wuC
5//i/f7epP7aCqPo+KGxcesdce3UGMGwhC6PAEj+/yvz/2OV+d/ee+F/mzKP
Ndd34X+e6p0u2VRQHOKjXE6QdH9e3+ve/Wd8Fr+K3+RXcDP+Kr34p52Mv4KP
8VdwMf4KHsZfwcH4q8zIP+1e/FV68U87Fx/sdJePs36o/rDF29//6T94z/bo
xn2ffpbLZg2pmOhxIZUzMoqIgHbrJN3zvO6obH/3h3zzsWDUx6JZedTES2uX
t6KiulcUQ1D/xAE+LvWW04OQE9QtySBVWoSZ4HKApQgPQ9x2xk7Q2O8+T2yS
U0N+AGEXopzfFHuZ+eA5jCm5pKIPxkCmZIAJyhFyE7WgQmoWKSUICbObiEw0
Dq+jfDaFInPjBq0PUfo5Y1Czq8R/1AjrXdGTsqXFHUcUot1q2Xz0xyt6NAk9
/oV0+zywxfFqdVkvrg8IPLqtty5XvpAXRJ+fwF0oP+MV6TMHvRpKfuFWLe6/
XX9zLwPHWo8m9av4L8FBQWZNokwJlsNPzarumltuQ4qrKzG3+tsnRJloIyWW
F63PzOySyoM0X3CXMtLG9XK94Qriqs/wEza4q00zXSemVta5FSVp4MGx6v5e
863omm89OcE2YT3mW9FjvpWZ+baRFBF9HaNcdogelM6Vj8z/d3wV6pZI9Szn
rPif4avY7qroWUK5ywLGN39lVghX1gnA4IXxnVDq2lXVNmPDL5kxkxvl2Gbp
PLzEKG2NEqgUti8vPWA0ReDtcgndyeuXf0QFHez0s0hXYpAMkfBWuJWe+PxT
/zkFIQ03wBf8K0tDhHlBuIETmrRYgDltwGfCiAG9V0tZhfdeWAYx2Zac9C4g
EoVsePCMUvUXhtkylXQPtuN6Xq2ahVLNcGEuLtZScKgVhOnATQAYHOJkQRFT
S+3n1yW5IBVai+mksJN/k1xlWXGADkO/LY2On9XwRXBFcU5TJCKMyYW4warG
BUEjH7XiIaNGoSTVLnqHrOgdsc6YIHMQbrON5G1H4h7Bh82wzWKub89A8Xkg
A1RoyeGiNORZxKSkIm4+0WEre4atyIcNtYXGZdwqy0cby/ttMmNoX+EFwUJX
KGtBksWrpWFWHM+I7RLv4vC9KLR8FlKsyGt3zDYFX2Vke8TyDE2foKLL5apq
OZLL6eVKTKimGZCy1nUzMICo0xQWK0jWtJKSW00+BzLb3eeGR5G5DCspc6D6
J4Nbbai2M/6JVcb3ffvi9eHLF/92XAx89oJ+qpQWoj+cvHz57PDo92VyrX6K
RPcX6zIuBYtQcr27HLf3A1kQFq0kAqaFdmHP+oNbluVhh5EeWk7BJQdCc7OW
Wjm5x0psGSzEIVZmQ0IcnemLe7hC0mY6D+dpUKCHsL809EtWjSuMRFeGUMJi
HsfLk3xs6RNKRiVYJC6H5VAoUA19sYthdl1vVngd153zsNeBxuwxMq7AUUD1
xDwKSdHJZk7dZSHBDf3wmrqSLhL+jOsbpIDPD4IqmoB0dCnr0cqLxXiFL3VE
rVqVTQY348abalAWG65qc2dbLx6OvZjrBhYRH0QqTKjCHHS+i3wIP/6GiaRJ
X3KhsMY+18iYWLWwME748fNEMChczJohUCRDf0vVPtnfHpFAyxcj/RMBoOgz
Tw//+PLk8PllkKiXRyevz1+cXxy/vpAU2wjqSA86Y4NYkKW4pJeNZEGnCp2S
ml4FulgsF9PFzZ1CANIbUEmHQBtGaiROHaZ6kwnINYimo1rftsMeJ4A8Utt2
IZAyUw8/SxXObelvH+jNhP4sN/Y6tt8HvUemhv+g1gZ7O/TPv/A9Vhnt7xlo
6QL94eeI75EJTJ/D87bT05ltfftHxuD+H3vSX8L/HnbDYL/a4TvvSYjuecLH
bnCur/Bqg/2rnXLweOe+Oz4kv9JfPQ84enapR8fp4YuzeGd/fwZnh0fHtKvP
j85+eLbzscv7O/VLrn/bub7PCVum3/RmvX8odSPzX1i6T3Z03fZ01e5LlryI
QekXW16aQ5ndNzA4b/qL5NuljkE5kOp1UnFJv/H30dnz6vm5rKPD5yenF0O9
zzgEmD5gJ7kvGee0n/e9X3ofZxx0RrpnPNP7eAUN7+3Nli/1pPuHbmZ8iH/o
Vu7y8fPelVXwp0G8iZysE0A6tgjCNO3KdbTxw5+C4U7aqBZX8vIZUMXOlZgW
wBfjRnesgStugJylpJnH49sq3cWlm+1abeDxjq1xRZ2aLJZCNWdtTBj/d23H
ld7+pHO7Pg7PehSU6vBWN1R4KihCtoRTkpnspFJ/w1l6OH6yBQdDTOxqE8wB
+KsZdERGlU0dQUsYEAQbtCNh/yyF/dMRVGl5rXvEjrw4IVKmukus11ZOMqJW
WdwQWpqoHe8qLtjs4Em1bXMzRzGUoFVSsRW/AakmXUBllzidoVzuHHDdj1bN
mmVOGjgyMdEF0z5i+RHnJTJ/kA0BEFnXm3mNesgw+s2YnHAvUuhlXIWxIEND
YZiFFwYIND+J10ThqcnCnjAvMYOnsFmqhB7ztOypNDfhwKl4npQbsP5TgqRc
veUKOkoQnbRc3Irc97BZ+vpRtYZqBywEAVdzMystBYu7/Ij2eH5xeHEOsmee
QL/p/dOxqfhV6/fLRRuRKQxVY7W6Y8I0+Leen4uLS85aJH4K99p1yq5DuJzo
MD3wteK+rRbvqXrvLujVV3gGQQbD3h4KOw+sAx4TjPRoJK/L/oxpxdAgHnhC
WGEYPqOzSK2itbPS1LEz55be1sBBrMThI5+heNyVKqdWmAGKrRdhfe3tgl6K
8LCj7FQsEgZB0QWUo7fAKUUyzCFUJeBI2cLkXTJp2C+HL5/ifmLiLPa57jbo
xRveMaSG0wxwrhYyN4w4GLfdb1QYApzUNTJqg+oGaCDFyLPC9YgTuIWLg0RO
NRWWD1U2ZFAEyb3IVTxCq4HHSS3a9HuwPUNkFgbO0Zo/dJjsaVFCykFk6pbx
KgQMDgBi1y3lSD3ZEXTCIEKn09EYB9KkrijyiNRwD28l+GPds05WgjFss3Bv
HM6EuMFYlBiqPnOyjIHCHT+Nu56QdjjrrhB2oEgnwIdkcaJOJIL9A+C+I8nN
unnQGXbB4XtPISFugyy9oaLmKVnDFfx3UmErRcthmMdMwIVKHJl7rftmHmIl
ni6CuAtvP0JWWplyOzXgweLM7jt2mF1v5mNlZ+TycX0mdkh83Egr0zlzXCO8
dsYUXPpbovQ3iBiIgGnEBuLdQ33NTh2GsC0o9Z33dD5w7HiES3sVpMWIho3Z
nLGaCH2tADsPkWPNGsQTmvXWJw2JOIGdYIYuWtRz4BI50hM4WTbrmpFiqAdR
KLlCpUL3NmG9TWXJhEtoId2s4KR58fnJ0Or/4+eFgCGFhyjWrjQvClD9nmh6
KHgo4h7VDWM6LxHB5+ActrK4LUTBOhW14aB8bXL4bVgmL1Tqv60/ItXnoyjB
mUhOp3kU1Agmrb9IujhyXeSlx1C5RuQed1J5TSxvFaoW4tFBzZEgyPkCQEqz
qoIEJogykgzqoyJ0gzvVMoiVKZx2fUf7UL1ahZJqVAJ5QTcgZMKnqR5gxMXr
nkBXEMLHQFYHuA3mWssOiDSm+mLFahgpwBfeLcsw0VycEn7nQhHZVDtgrmWW
jJwRQyQ/u+t5lTgqgXwYD+fZ/Xz8Bs1oyhBWc/Uqg+JuHp5avcnwPoSdoB4h
gFIgP4UP2024YiVH9aNHgDSS2b4OzQMT4RgHPZQXEhJ87uOYiFhPfOZBb883
u7iP6TjxzmmYeBfHz/lpuH2gmLc7pUSdj1+dXvxR+jOguIqg1QB2YqF9YkUB
gx16cHr8+vmL199RE2qEPgfoA0hja6LPhD7w6NFLOq4k5xvDYxAQqZnHBw3Z
XTh98TDUd+pgpIcnjiDnA1dCaPeZaI3wA/T6yLVbZK6pSkIT7SsIxDrPvPrQ
G71fl2QB7ldUGda10FXdpfI2iSKggAU8ure1sLYEFYpVciuN0mOEVTg9+/iN
s7Xg0eIwoMDB4Q0KHCAH4SKDqd77MGuPMWtHUyJhgzgVRH6sD8zdiYIIRVJL
EKGEU5mVsv65hVFc+p6KAyENDck3CCNxsi8+56UsRFqsFPLTVoKkNlbNUzjv
OYoZTo2weqcC7RcE9KRp38i7FMUTvOtZEOdUM8ZZZETfQCVbYQnTy2bRHNOW
+NF8Y3Q2tLtxeDRAiXPXKd0gb8JGdgPCgZGyP1hifOPOJiwtyprCmwYdkYKA
ZG3rESsCQRg4daGnw42IA2+DtEdsqqojhJcph3/43cNwNStC1tVoVVAOblbC
6iLngeCdhAH/EgP+XJTX0DiRx01MIvjlAndLIgdhg5asjUGEkvAUuSU7eTxe
VZeqG2OTalyNwHzVZOjTlqCEqz3y/DgotyQpI+UmYKJwcM9J8wk6uUVhTkgS
wnY5/+H09OSM5O1gUoeXoZjkDnWj19p5eRK6hgsrowzA+buZC7Q2qQXi5jo0
lZ/nuZ5WSzKdDSXbH1Bhat5G5veO9QOYLJhO4kB5mq9PCpjzItNdJb6psOSR
08T5OY5FVHL15YnWV+0en+LhgYTfeX3HuqYh6SvUYhSgXOYq2/QrrJoLpl+a
aoh2DKgnWjr3zQTaxCsBO+8Tnpf6qShxFrkTCOhPts+Ua8RmqvHCwKbsKYEA
Mz5+0G92P+HFnc9CGEpQnMxVaDLoM3MleWUojBpERUejqVpmLyNlpolKavl5
psuMcCwLHPIReF8SdbFHkpeD7iFIURONyKfSbZBKyR0Gy2VqFncEkitLXXMY
TUGrR95GugaJCinymzDbVknNYtUsxqZAUYd0rLL3oAyIIKMahivGSZp7yMhN
dGIwUrMFJWYp4kJurHM0WVGhc/kxUJIRJuQl3KfE57ZeZAfg4fMtSi0SWMIT
DKvcc0dTk2dOa4ZCYeQa3TlTbCeQYMkes8053qwX19fU5PeC6yTHNg3thjmq
NqAjQ8ichz1SnZaHpy/gXwyvDenORZbsvCtPhCWAjaui+EnTlKKJdL2qkG5a
XYWdM9y6FJtWVOdM3dum6vWelnyE9GmBoqn2jN2gg84RfT2WdmWbP7ogOYGF
UvuoG4mjHMvOdoOGyD4VeD/OVH7CWmLTGimFy2GIIld0CMkQA1mb48Wcu9QD
jBiPDYaafXbspOWa4U8VRRb7ZqL20bsy9GIdLFzi4WXVMEuxeEAKEGtuXU2P
P5dUQ+FtdovQuXKhp2X6ylalkTaVKq98ZicKrOQbePE1ZKDErvRA5GDRQoL8
WkswWW0fzY7qb4gtMsYoXzP7bu39+BGTX5Iek7l7mKrJCzVm1Ul6RVe3Fyol
8ccnG6GQ4EE1Hgeb32UJxiPsWp2qcAocsJ5Anoe7JPMl4feyAEFveADnyLw/
sSdhj1FxO9ANZM3SyqFv7ZxQ0s/cHqeA85gKDBi9A/lBY8FW0Cyu/KAI44Ti
DgaRnIMdByMszh7sUNp/BamBYSpIoxhuCZFkI3B+enh2zDm3c8cjZ01G/P5h
R+NjraroVX0W/a8ieXhWgyJsiT+xI5SSoLF+BKZI/KN0uvWjwrfEF7g9o7lI
KXuFCsZQPcz7lMbY0hAGZQuqSz5FYacMsgSfife35dTujt/dXCIRHKkHwRb5
9vDl+XFQ7mmrqfrO6bcAbiU0aXFewkFN8yFDQJIdgT9k81qe8opMU5oywjmV
7FXNM9MHxPxaByg5tXBTMtL85hGRRI5R99JJnJrfetD3xt+UeNkdO9QKlUIy
iH4al1N43de3kUNCsNB5FanryFxSXDzFwMxpAqcxbnEDCPYtzHW7v/1d7Hwi
nKzJ20rIfLVSDLqg9KNgm9C8WEOavsYRyVFgmp+PxH+WpdseLKQb9zzWHiQP
Nh/dUA9BfbAAhesuMX4VpYXokD9ptpgl73kHCk8vZaOLYpkn89a7N7vgDTk7
KmdNy8Rmi3lfyipprzsCVOBeXg+uYsvRJG+kh9JW5wpbfySFsdKfSjbu+zRj
tkHOgBx0zQzAuuJ5/E306tNgpnLpVLK/t8qmRGHT2sKwOQUV2qLnElcq0riS
BiUSpLVgkmULU7dlFBOFSbCOmGAdT2lP49fw9d8tST0TPs+CA+L05lx60Cdo
yboKtjPFQNdMSqRpL3pQF3JQrzbTuo2kop3sTcDaxbYmSShS48TSccfLVEta
PusBPQYl9n9FHCmijF0Lj5myznmcKqui6rfwfIKpCcN3SumRR39Voe4MdhSe
q+25PmrWiYqg6PK2iGpO/C3gxvdmATvfVTQKGZnfiZwTrADR0GghhHEjnUiU
hSw6PfcFiRdi3XyrxogqwzGZOnsdN7iSSUVcWGP4ONg51ap82opDPDg7JwDv
VfOeEnmZBhy+OMHxD03RVI+Wq/ABJnjHAa7bBHZ2YRS990zBgy04kb+C7Z7a
nmLE9ZuFV3eCkm4nfgG/VS2JYgA2r1uHJZkcj2S3Nwj+s8BVVVhQsLlXsmLS
CefZ3mGD6CPDYIImN8Qya6uA9SoGlw3zlw9Y6R83xDAs/YGEkjmN3yTnY+RD
v0b5GCLhINJSmf5KnTNOx1SyKPpKgvA/Q7an2Jn3aZ528FCp2XYNtKt09oB6
U7Yi6Z6FVzxbK/F8qOZZbNc8y4uzH4LiSc4zZrGrUrxyqtxJbuY1ygRfYG0J
6miSo9K0PawwguRO/ocEwV0yarLB/biyGfM7hTom6fS1cpinLAviJ/EV1GWW
XqNpouTFeytUpO6LT9vC/CzN5Ilg98fabaRzkI4n7ThHiJD6irTm0LGQvECY
UoGQ7+69Kum96jXN6MO0ayFA6DN5Pm0j6ZDUkYWGEnU7DrOVL19nLnDfilbG
kbeoRkZYF0PyvqKUxEk2nkolvpDw6BmpxwkeSjsHQPdB0J/MP17pwilxcCBc
X2tGD416UXbUS2RAze8sOZLzHzXtSW2Vhn1vkMMeWHfNPiKmKGbFmqgUwv5m
eSx04W6yWyYlzYr2tkrp2GMR1Fxu39VQE1YorqH3xGwc5mOlIOhR8rYUHdAK
OT1WFU9ixBDIQm4VXuIe7hbHfMIqmnFft/qojMTO2cRt+WauGdqOZYZCKHf1
Oi+1wtxKBIAzRRaaQPigciRvAuQHR2IE9B8UfWYAimKjKUCWervYrIK9e1CY
XaCGAEXhoIgSA7Zl5KYbEmqdM/gLNfYz5aFTXqtWHLkVaBmbz29LxuN4Ac0e
ySxyrF/dmebOK4M/L3JjoPwFxgDJ696kXfLaiKNVvNRsW3a9fEb5ncl58TCm
NxgTaK/jsb1VGinZ3U99xiOYNji11PT1NShcndazm+Sq9J5v92hJW8szxaUG
94fGmXj/sY8xbNy6SrPvVVjfSUkCZ1uq/C1c2LIrLbs6/mqi2XLicQBcGuWg
QALnEQmAbxBHYj1hKg8vrpXhh04MSHrKHp7E44bCwHPU50ZGjv2H2VJCIZsg
rsO7QHL4LcYPQk+UsFiH8GINRA6QJOOKJL5y7ZzcoqhmJlvCj9dg3HdoBCYb
ioYl1alw/UJZwnSG7r6RDqsF+piPh/wmlGhTquOgc1MmggsNjIsU3hGSOCdE
dZjr1W5felV+KiYWDQKRyzvXhk4pObLCikvG/x2Vm5uOwtpGfLOhWPW+FHir
MWJH53yRDw4WZstDEw0DK90nW1p4mnaGqSGTK5xgrI7T+Wnr8UO1XIbTOylG
JQ4V5hdjt62GLlF00X/ADjWbjq7hvnBVgSqaO8kewqPJLc/GFsMadlY6L4Jk
bMkp0fyNdIoY5aF7toxBv83mY2RduywuEgkVYcNxuo/u/FZzwhFqDJ1blnuc
qdFdelqNLBTt99Qh05M9K7vl5zJg2BEny2klcQK6wKeePZGWSA5MHnEFYu4g
CRk5tuab2VUNtuB+Vj9WuWsD9PGl2wbVYuSPGQPrBRgEJXmCWZeIvm+1WIuH
S/1Qu5lPl852zbKOD+ztoeioDaOlyEqjzgqBGhDSCTJpkpAQGuJIwlVe/NKH
MmsFa2ucw8JuYOTqpBXmHy0nV6KNHs5teRDrg+vbJJ/CYAkYdeehrwCA+MwK
ilTQGcBJRNlYE0gPuNtAcElrTrF76SRZkZWt0WPeQtXEN/FOaUHvJNq+LmJP
GfKf8Rka7Q2hpChtGSWbd16skCo/ATJht/k7nmsuIbkiB9yqlION+cZoRWjY
M6OuXfQPtKRYhmPRMDIgkQsHiyB79yfKw2RSb8sEoY9GQtjiM/cJYzmywK8X
i3LWOXywNknDJhFIIlevkl5EZvrposVhtLIoDqtWouVSNn0piVH3KbU8CKnH
GX1v5gVCXQ5/aKwispfwmjeHJhJYg8xVLgFKMmX55CT69ND6LPMftiqhJUEy
ss1Bv8Z721/u7uDZOtcPgZiPMBBXLc8gI70IiFygCBsmMCphOcGQRTIfwAma
k5VOu/rx5T0+RvZgz6FeYFncgLjP1o+ChQNJqudgV8s5CFK/GnMqU+Wv7P+/
tm9tbuPIsvxev6KCEwqTEoqyaLW6V5ydWYqSbM2aNldij2e+jF0kimCZAIqN
AkhhJvq/b55z7s3MAkBZEbsT0Q+RBOqRmfd97rk2KAG9+QKsCaxmZRJEJGPY
ChxAlMhXfdaSSqbEO1VI/gEQ33Aey31y8YePv/0kaGhz9es4LHa5Lz//YPdW
/ZwFaSSK+5AFuc8J/iz7brVgn96PylOIUO778x+r7w6/DaFtRQaW8PdP/CD/
/I83y+Vd//r580kQn9XlYTB6z6fd5Ga9eM7n/Sd0vw22jV/7jX/8jWDfeeRB
d9R72Ksl4/Dy1JAkV3JYKLv1LgI/LMuB4WMGv8evaQwGIGYOgoko6nAqfrNl
1CNZkKb5MNP2cpGaYvTNGflIjE8cGQCD8zWfmwVGyiXEfYab9p5/Eu3hK/7e
cZCHfOpG/ns1yLJssJbzMZJWPcX16knD7obMKx5lvUz+b2eRkl+t353+fHb2
4YKEVOHKGIuxrtjpwzkod8GMNBm670Acs76tw2Nck6ownXVfkvGgxURvymRD
eV3PWsJQ9j8Gp6j61E27GWcI/WuNXGT49ziI6ln3exOOhdVLkWdNv5tDU8Xf
H7ADAXKeDWZ4WZ6eAMXWzlpoxpEZBHf5gWoLOmlZxT6iQnSG+BkDjMllWC3b
5jKo1FtAl4Z0wPmtpEjCBfIlAvakWw4oVge0HLEaWpXnzULX83avIcjdw/fC
573vbIAL8VDbP7JJ8S4hYB+zjdgJH2PbmoHcfTqss5A9TyRk4fbPSx2ci3dv
t+6m+jfxpxX6BMKHr/GiINYk3SDSSeEaRHADvl5iduJyHTfPANuJKmN/g7es
3M1bBo71LxCPGZ/WFhGZHdwNDhy+1tOnSSkDzohwb90s89d9+jTNcaaFyKKP
YU9v1pqbSC8N/7o1KoMJMo50C1v1u5lEpMKDgC3m1lAcbmyoyi5cYRwTkraA
oiLloNrlxibFtR5Ofdou7e8PZjJhhb9inMuX1m4oHE+fAnmVYWAz5OawLYtk
y0FdX1+P7381KMWvDqX4nyXt3W/SbAm77WjX8MUcj8Az5ChUaDSiIhizD6Cw
sIieoZuBogeNOLBiKGPVi6ub6m8rYWE3d37DYUl2I+8ayjh6QGZwlxfIL8PT
3czqxa3piNUs/NAKHTrtHjja+0NiJeBX6bcxCj3m0UB682o1rTV/LOYrtAkb
TGT5kcmIkMJ/ppphlKDllaYWs/mIyR8Qq3N+VnBQnNRwSPxMQxUW42rJ5ZOf
8L8ms/BhOA5Y57r3DIe8j6Cwp+XRt0evzAnaqg5g2d74KvW7nR9pE3hPz6K7
tLE1WHmaq7jiyWAV7bJvptecRyS67c3+uOG1wm4oz2KAFwEUvRN/0yX1JoK0
1f0KmP3oShTXoGNFV7+RlrbKtAWZD9s784zuCF4tADshWOOPA2P6h6aUKaCd
5hT3Am2CN9SX4BkNxvpl+b/fjMoXr/h/r/TT0Z/0I672ojx7g+TdEnDSIhxD
bNek6WaNTYF7+eyIH/zLsyNL8kG5gcoqOJrImoTPBO8QEKemPHtZsGPSve1Z
ffXzJw/aPnZXt+vyx3a++lz+5fDFt+VbtFksYlS50OXr4n0zDvatfPldORfb
o77DKwNjCGl+9fLAOUdle1gnMsQtsmJiFhFVZE88ZnioflODIaEQDtXmk5Ah
xmtkFW98uWjHk8Z7vdVji8weuqjhcoPy2ioPISi5nmrgnXmHBLKcnwaJm1fI
51zhilyWedBNnBwYu8jtYsfIe0wz/9cIA5CtvQkqFbVFv7+1zgfNXThzDFzN
GzCK0IwBAPjJpy1ct7Rxmn359Om75Mg2Y+fr9m+3PbGMaK1eWnkF7zLD3Doc
sMOnT4NuOVmWOFkQJR6xyEg9yhzmd6fyJGE2wrl7+QQfP3rxpPBOWV9FwvNo
HrtN8YGlZfZJt5F2jqvMVb0MoqdhceNuhnat5hjPjnM+ELZclsJFMklSly9Z
sT634VcAoj7708snj3myRbn54VdHTw61uBfMq+A5ADPla/IpkduM7NBQ9HYy
wnrCmYCSqPBUpieYU8Al9l+ZjSHtIr5aSmZhxrtpbBeHMJe8FWeK6v1Hm8tp
3c4wQGFBZ/0o3bCFn/PnFyXE+9mLF08OhqoKX3nx7Xf686tvn0AYf1aq67v/
EX4L3yP/eIhc6+lyPVJn8J/xPXEQMoUSXgH5v6mvwrHtbOrARfNpn6+aMmPQ
Q+E0tVbVNPUVJK239f84nGmAWi/nAUSaPy4kL5x1i1/jvAKeNjgXfnrDYX92
xI167DyVw8OB487T/uqJ0n9BsPywt3A4Jos6yF71QjTZ+6j7pocLV7AnHuWW
2mkwloNOZWi8rYIt/M02xBvy2Xz+GacFZrT8anBPHJ5q2Knl+xalqauOJFcV
OOsup0lmg3ElZzkXPRy3YC1G2SpUiiE3JkyMEZJPqETKx0RxuH2ulkK8sxYB
0VDwglLh6+q0a8LbYJMmC85Z6HaIdzleeQNkuMLP+7f/cXQgYWXNl54u2W9v
y8/hv+Gbi/Yz9u/79/tH//EXDsHY8Q7f9GIFD7fz0KCeTsDnfjPDqOOpw1tm
5b7Ob2Fjnw4GDU+35b6RmPBPRuJPnDNbeO8IJGFlgUsABP0is+S2N494D95j
Yu07fN3emrbIWcRQ24iRww6EHducNSpIPNsIXKbs3FID2a4oC3jnWMPhNrz8
rPMY91myZkHf8LN//oy/hnPmuWMdMYKcWs3Zpa6RGCvCZXx1BUphj6AfbjSw
EdMTEFwswv8cs5B0k41eLDQZ0OZ2J+iTEYDAbNPcRaUDELou6F5SleFVeN4v
2J6A47+MhjzcxwVq3EzDZjvYRX7UGQfvlck9crsWXvroW2ndzngVyJ3AFSHa
hdojLmvwzK/gCE/I08fFVXAku4tTycKExgwstDKNNo1ibzNWLUrm7EQdJhG4
mTu1gcYxbrHoQluX4mKWKg0eZ2HYUIM5CATpodDJGcg9qaUYL3Htr5rXgySg
5eRKMxFIEl6HxR+7VISgz+Z+0D2RC3O8Q8fnetAOcWFMx1dhBe8weCnSNtpq
bfOHZumzIqbPxGae2oVdiSOgi/C2qCHQsUfs4mc2Wm0voQDbQvOsaQ8h2ZMO
obPa091/fB3V04bsR56ie5IRq1JSTyvLv0sk+RY6LEOrtwyaaUKqY1NUQzF9
VU5JNQNuhL0Ndd72tLR7vnQG5URF7RIOuuPH5s6aVlzddK2KXy6LknrRv1HV
0aTuAvYVuPdq5uWOAZivKE6k+FJ8pwk1Us13nEKO3jXirEfBzE5WCxMePpiL
OZGmCP71evV9sLU6faj+8JDyzDA3z4dXyHwSQpoWOc0VzvyHWewXeR2kWCbo
Pd/3gjE+9JgqQXX+xapNX3ykGoTjc+Ok7k6FZordlWK9GreqFva3ho8rNtLd
iea3f13uPfhoL2hOxTY3LFWEDSHqbYIu+YdFWIJ/3tsMpIOFwQyYVLIH25rq
bFG/KLwrmKnKHTUETkxLbTtebjuyMwe42g7nzyih63HlyjcqSkNLhigROdnw
f0F2WlV8w9+ny5t1NuvpsacINsq5RJbFlsMSS2k37bXNXsmYY8jasOitBx94
B+qLPTis2pHCgx/oJednB+CBM3/3rNb1gBfdU50rikwRRQap81mcbQTevA4/
Hu55btR/U1pP94y4kFbUB/qSTqmkcumn9NjSmflHLNscfM5sQo4eDdR7mKgR
9n5fEOcQW7iZGxnq+ehP/BVDwINRGthll2eaukKaGixvnoGyrvJBvtwgGTkC
SjtXsX+vyqZZEeEUn/HZn+nIPzs6egLHyKIoPMizI/3lxbdPNC1QXsEvQdtX
op9wFkjKsFphqKIWNTOGTb0IZmLh/eJwLWB8MCvqDqNP1Cng1VS2XXIo0Xi8
aDjmjPgnsbD/zni/CucUyB2Y9v7vfxf5ODQaEhKg66r6m9X19TRG0N014QCR
D8ZOdRXJGzJX9kbYjJiYM+oADcVBKbviFvvci1QR3D2YW0iyXk54imoesuXz
8Nlq8xvXt+dRLtDbKTWwXARn4vXTPKRoOtohVXG2vcX90WZKJOaTiINhqf6t
fJSPhq4NXslHW/zyJFt8U9m7N+ZxdW0eEEzAIKH5oPhOnaA93gnA3mUsXJjz
WBx9eyTHERnaMjb9q2YfHpEusKvfoJeb61aiTNhbVmHnUsV6AJkOmYDL3sOa
0B80c4IdgSCvJKxZb51xn8cDZawmhr+3Bmxy/dk5OyzIVBbZRB9u1sLqydsI
D5Fmr9vCg2sFCIEyPn8xQAg4gaytbmQQpB+A4V53xgsnFmHvdGf2K1waM6Qa
b+u1tmgjVjUZYM7OEuLnGVj/PEnPDzqtH7hQ5c93dTDI5bmc08fPg1QEpMD3
WpUlVNVfvKog0tsya4JhFPMaOW7HHWXn5zbXR49g/vFIUU6ILJde04bKHrfo
X++jm6sSQ1BOyJN6NPZQ92kvrMdcrH4flgmBVJcZI8xWPb3ct9c4iKg7FDIc
yYN19vZ7OBogmlrUs0b5WcOIM2S389mIQI7JT9nRRbOK/LDsWyb9mjSNGPLc
XyKx99JPGawelE/w/NF26qYhFmM0ezMv3BjvEPi8SVmdLWNGX5UzbZTki1ur
EQB3Cys8r+Ugco8Z2OkpT+vFYq20XaSUg7Aayt9gw0Y6x8OGBDk3mxiFtjc4
klW0AHqCBq1ygKLzX9RehZKtKEsjJ4/sdodezPFz7ubKhRXmm/1PL0LYuv8v
q/DmUFIH4RZLtleMdUrhEhm6niv0qMLvoyM+ABNEMoaRqvqetXLjOthsymkw
i0LolJ8egjslXUuFBUTP2cn5OYHj705PfvzxEQzPnGFs0BrgRsSAOZTQaHrY
c3Cz7vmDsbHKboete22ViyFixvHKlJVZLaYbHl2/qHWFYA6KX9ohypvUJgnA
ho5cJ6PsglcMd8A+E1TDRJlEcYQ3scRij+S8if2ts33vmFNaL4O7qKmSMON3
gra200gxlp7gsPgDhYGzNFz5bEpvXv/MbqrqpGsMXzjyKhgDq6qg0l2Zxz0S
KIZp2kiIDShQxmWdcbs73bJ1qfINqyoExO3cUSsbFI4aSnkTdJKTEHtWLTyl
pFmT9Hyegr2f08hu7Q+/nZOcWMEauRoMDoi9LE/LswGF1eBShrG8qccCKIjh
DYaZyWJMw/AHovFI2jEKNXbwsq1F+8CAw1hj58YWjxVCowy766g4zHu5A2+/
+WpMtJoaiu16nmELgcpn5exIxqnynO0h2Cr7WPQxgRYVFdtqyjfwddUCTP7H
M7IeLQgvhKDvlucTY1aMMk01J6QQLqh2IvEadpcmq5eREt0pxK2OPMo5Q65u
jXlpYQFJf8dci2XXHHRUOOXYH5rWE8/t89qIOW2sJ09QPbwlzuHismW12kbe
wsddjwy6HLw5Sq93hiv5bWk5dnQF95lNeTxwkRCRCoOinc4Ib2wCRdgCPvgD
mCcAldJ3Yxa13J9xsnIdlIPz6DJkb/vf+Ul0bhDgJp4UCmzSAzfdqq8JReYR
085gj20egaJ9PgoSHrM+SdHdTe1J79byQvYC5iWO2/EWQdjWXhoQQXxCLkmS
f0OuoH7ruSGGpyONrl24L4K8RJAzLmzmqYUzGMzXzKd+DD1YNthn4KsswhoO
h0SZmKnxOPc7wklUi7Tfc9ri1MIf4wyGTJ0ZJ3158dBVFx2Q1Of1547rSNVF
jtVHpInu9YDDn2E26EP7JGPRn50CPPTQ4H+LfSfDR5qzWuLGB3ZnjeF0ayXT
yuceqZqLELqeLBphaNT8u4Hp60sfW8BuDZZIw59yh7Z43D4xx1TPlxXGkZLZ
sFenplZvH4lZRljICOtpUWntqayn8i99XFoMNUfhSP2uyb9/W3WL1cxPOd7i
gE8Uvr2ai8K49glLSZBU3RiqyaRyhU6VutxKPAVFu0PFj0EUHscIDlbvEkkF
a7us5Whn28pkYZZB0UQ7rJgSeeF171uf3RzfUQllTSTL+309iRMUq2ouQfPF
pQqysn/7bHYwfLzZitiAMk28s7Yvs5YTuGHDByRBOwgyl0hcInjKJDZ/TFky
47m8rbrrSg8AghqT+GYAsohpIWV3KmZ3SjQbaab4PFbJJA7Q+Kt5C1d1Ct6b
CWPtw11aYNGklq1OfnvwR2DDWu8+aPsNsoTYTrs1PdGiJMfOT9F1FjRnkfaW
EeKceyixgp9s6m7QBbiVWeP3rSFwg35SZGMtJUjZNVM9J6vg4jGvesqRHNjs
d7M7jBSkL/mYCY9pNWe4a30eZl0Kf/omA6gKLvvrh/e/vjs7v/j3Ap2F5b4m
mwoROx5gVw3Wenrx4V8xxtm/d6CyQ+3PzEWyUSK8c2RorjN+5qrh+/CIF87j
jnMMkMSyGULBkyaJw/rqiVr15YkUFupsnBKsNxxKkangeM1DBImBN6+VsC1i
lnLp0VLzOSwZuN9yh4tXitXcSxlbVVs0DhUi17G3S7NArzmm+SqCTxZxXHMC
LU7XoyKNeTAihaCdm+oOHIiMiBccVOJ5F5yDIVIYrMROSpfWXA9oa6VXtjXI
epm8+VSr+Y2h+ukQHIenJwtJdpR2uWNhEcG0SYMf56BO29rTSNKfTCUES0g5
Jz9g4mAuTGt/vLgoe44A81zS99PuMuiEU02qXpS/hHNV8afqDZ1v0pBJfE/V
p/9FsVhnDgRPJa53xasjRx+U9OzOWciuJr9Omvmv7fgPg7afunKiB9WlOHih
z0zvLJ8bhBoTAzUGZWzTHzR+6GX7WwSNYd+NsRBBpJtY6uDODpPNTWOWwyU/
vYuqlOHM0D3uOB8Yhs4/wFhl1oJxQjhSYGZb+OT6kRzfK1VkjHUx3ByNH0GR
y/9PZZiwvfXssp0w73i5WvTLtaOuvPLLUCbN/tXTnLWf2zTZUYmc8FU2BccW
1ERmEZksuA7xVfcxgxODwFv0Fbd/WzUHcU5D9PLT522ug7+dax/xJMj/Dwsf
PLjwZksOWxa04AsOac1SgyX/u7BW3ZwJX55Lyd7YupfNrG9zJ5vTHg6fliOc
PzuQRba8RIQzcLJTpSG/O4sMJkrqfa6EG/9asTm9WYSj3N3dlD800+lDOzE7
3YyztNZRuX+G0izyWn86IEYNy1dM0k18DXwv1elrlYvSxhEtl95uOBix12f1
ZLq8H57/bHWT3eP7Ch8GJNyBH8Flx0TM4Mq+63Hcd8HK+OmbX50lnykYxk6+
4gqfLOD0RmZOVXPK0TrmLgdK22tR2vqoXl5bkB9/MZCjXpzlPYlI05vEJC4v
NoojAePQDK23gsaBXGaXS6n7wlt3cUmbtfUVmSpoJhu6FTaJCP+SgTWZbzaH
N1uHkMdpw7/COfIMHlHLZ2xIe44Swud19faTeuycHpsBnxdww68wXCB9S0J5
EMlL8V3LRc9alL55/JSs8yOQjHBd5mvJ6Tp3HXh2UyZ1x9vR2bbYwl1OhgbZ
HByyOPq5pIndeN3sIoymzfwGZzruFRf8OKfLFi6SB5evRB17Z5NbfDsUOZho
ljsPS0aLV1tbgfJr3dJBCJV3pdFZ4aok4UwG0+aonHDTfOBSv0Pb5VkNG43R
WWVeqN4yKZHgYSfpYXq5WdhWxno1bSrRfYpTGiWb+q37MkQSA7lNF7BzwCde
Lthf3WggzhwVsOCQhkXaaLjtrm3k17hepwhyt/YQd4Pn/P06ink4YQbZny7Y
zIoSdRWHMe1wae+ACGVaw9KqtS3VqjU4bRWnJS49vgw2w85IOkxs1zlHP05t
h4eNHCZYwDzxYdBvYeJ9rUGgjKvx5U0xeFZG6boO5pVSk0YSZ4uMrMhqcUkH
153eMYcNcDillxsRK/RqVVLbozuUgyOcmRrgNBiLsrnIu9lVgaOJ9s2Lulko
gW5eQbKLbOqR0NZIuGXY1DqveY5cJybtjWmW8cRrWF8K5o1/Kzx4uJwRv1KM
zEi/bRB3LCGx5ceTD2/lDLxdhzgseBLngnyfKcP/tRbbwBbJXr8o938KWgFf
tVIUUV2x9Mu0wmWHgCFWYMabD6aT2H+5kgP0WIQgXLefm7GgZUpqkksC9A9K
Vj5S87Gq0cZ85KtBtlgnIjUvEjlp6B3c6Rsb5E2tIdyxEI7HblLHHcNxK6X4
NCVvY2QIOQ+vX48jz4nYdQ2Fz1aCRqTgPPY2l1ZpUBsCnXseCcrjL43zlmo6
/QO8+8gIuunhENzrpSW7OaK6FUHRpn/rJXk6DH3IQBXGQJE442ZLPV5aRWZU
NtOWOTJPDQq+hHC0sHxsCVOAWq0XORyKNOvurVHTcUSpkO3vuIxP94euhX9F
2NUMOu7uFmCzcqRgvpTM5ThCU/w2kWbFmOrEKQCJQtbfeLVLhD7EMvgMrVRR
oh20D1QoBazjUz3qejKu6zPuJT0zHFxtBQqlrh5k/rP+I1bRvSgUOx5WPUbP
7YWTRTpnf4F+j6d7JpwaHxtU0zgGjTFEB+GTWfrY+JNbGcDsxqIJUWvNftPR
wG54ownmAlFbZ39LrZTKXenYRhoohmHzuFb0n+ZqsKuXbPs9JEjEwiyNUvQv
+xYrOoN5xpGJYEUC5hI5I07usLJiVtRywiJzIJXEwjpjva7gY9tMn+AuefPr
jnxwqg41Q2Z4XxeeUhTfl9NgjjRDT4rpi7DtuFsI6e87BAQRUmFZs0XNFoV+
oDAoTKnsPfi1nSWWV7pw5zvoNgIKU+bJsfNbIqcrXDZqWqLSlfCodkXgo06A
H7wMIsF/Z9itDLMrhF/Pgk982WPttJWMsSKGQ1YmeqBPbNt7m4jApOqgY3m3
tnTncNuREnXGtL1kKzHcqc287Jfgkp7U/UJGgDaPjqrUERVWcuVRVBwDM7bM
fCQ2KLHja4ssghUP7pq9U0EONNJ75AvlxcpoNc2gqbpgbCbFYGoVErOWxBpq
a2D+LHdffs/c/akQbK+952JevqOBNzCJgH8Pk8qQbuqHqoIOlRw8gv4L7o9Y
XL7SrxlFVM3IMDnDp+wbjZZ3VWcZlKtVn8fjEfj5GI3hEKpVuMsufw3b/n8c
Vx7M2Ul/2zwCZkvPz4jE3JMbJaT3BEm/4rghe7IM3s7y7g0Tow7n1FJfNt7E
pJ769q52urh8cLiPnjDO270yrKGEGbdbWZwBnb0BTDOAnzMYpbkKdK1WcZb7
BoCjHz7lsMyCZPUqAhzNhfB+C9OXakxp7rvpveyjhKImvX4RG0Z6T4G2w9EQ
UddPhRpcMWD/wVstiLiyB2fCw10bz85b2TzBWPb3xLXHYuxIlc1qxDb4Cjj1
7LN7B6/TVM277gGib9pSg8iRgDA8TRPUes9sYNy+ZcLrm2vbsoDjwr20gDZ1
YziUwRHeoEhWRZfYiAlUBb24Wi3ScPrWI92xnupRisgxQ4aMvZr8j+Eg4sq3
6MQm46zWIHvZw72ieBvO5bh8M+WEYSYNGt9VAV19c7DSUSjxFu5IFXstxspe
Av2Y0tkxYs9HX8dmwsvGwX1YA7oFwJuyOS9sbQisUPwLyjY/FzaFVc7wnDCn
cAW8RJDlX/C9X5ryDSAv4f4f+vBPPOIXhPrC55Q1cWwQ02nBjX0Re9IGwKC5
dRfxcCvzTH/RykeWQY4TgJFFD4GjKM4ja3UahTqQM16ov2nvRPBGj4oXsY6X
SDqsJ4i2LnH8YtzMUn4M5vryyyJTtq4MK3St7tjFq7LaYHG/ybqWs2A/Gyiq
kVJ+hUFONdYEjiyCEYCKEbifcs46ZfcocxZuDS3yiV3EZZnF6Lk4AAR2kNZ8
qqHmuRJTDY6ZEXSnD3ReeCANE6fTqmdxEzt1qvkrV4B6811KkMku4FgVqkhB
VNuizPxbKtSDBrblRDOecgbiRgkzdWDoLsWPi3jP4t0KJy1cnohkG9oTBcjT
6oA3x6gY8shrqKE75cxabNaHORpIF8vXUWSrnSJrU4PTVuLp1kXkChRUN54n
fyRz1PMtkvH9wfBITBAb+A0ZTvaG9JqJ8yiufPCl/fNPB5xnP8pIzYr/+q9/
/lC9Pbyp10F0q/l1f/+yAn8mY6/q/ghv+Xltzr8z9SVtk5SS5pFG+8+mUSrN
GprFgrST8vyTagdAlrgT4NR6W/gdXFPBcghg6ts+2dHhSDkvOvpY97CWx/gW
+ITZpvHAWrPnLxITlGjNdckq+LMd8p92OV5B0OQgimy85L8PnGIr3KwRA53X
er6ho270pSRUlpLg4J6X5NAF2qbnpUkYpVfaHOMaZzw/ajB5BQ0W7dNH3dJl
dGGRK6zcljud9ph9tYZ+OTd09wFtyppg1ejqwMRpM6mDF+5kevtKgYlV8LsD
1wgTMGUOsDK8a9bH5aUJbjGdv0HwaEpS5yYCRbMnhnKzJIPdM+p8z7hjLJ1U
8Zh9jBtXwGAu+7OB7zeMl9BDAp6qCCCrf5gVGzjlGekT1yGPn4LoQNJ6eRbq
/JN1nL0N/5BK5JCFgZNptYW8ks40E8cYNw4SsnNUT2UjMDB+oUQkL6ZMdA7Y
c3W0GWOyChRD511OmBPUVf+UGOrwQySoG3miQ+FZNWjU2/DQyuihGd9Yubuh
DCRNVyuycc9t7aRAM+1ZWqQmkUSAK9u8WGe8l4MUlY6YKOPUVSCmgdJ6Npcp
kZNy37Fyn7UhVvZ28c2xo8k2Ug9K5dQPyJ378/gmw92AO3qHUgDrxfuJHCLY
0X5ZwCi/JlcMVfxNd0fmmHG3YlYjfPyhHS9jNY+fCl7LNDiLPmRCr4u/Zv26
A01qOB1zhyifSWoKYxFYdl3JRPRAzDRzNZqDb3o28rQ+i5D3ZkxgFtErsxtm
8XiwTnGEQ9ABOwe9JPbLemwrZ/sT/hgt1eW6UPbCUhrhYagx5KrUOfcm0r6v
YxzKbiFPU6mcwNV3YanXrrgpwtb1OFh8HESCWyc303WxfGgtw53t2DXjWtb0
LaW27MDtJjSv28nzT6yPcQ1lpJ7LPpWn53/NBoPqAazFHquixEC21woTzQna
2OS82VehDRCaVXgING9aR0A4juQIsb2bBAnAUuDxDtkcFvVxLQbtHXvqjZ3L
enqbL2JkVfhyhgZmIezQmfPg/aJ6xy9mWjQ+rfykrPBW6kbY8UpJ4//fiZuj
r0rccB/fv7/fTMWUKZnz/5CM8YyABMGAgNum1uOU2mo5A3zKyLDHLoSs8sN1
srVPXcx+IMf6vjh7vKOapcLymihwOpN7v1uR2rCUDqsfFexfE5g4wRewU6yY
qE3H0jwID9r5CmDHvVm9vsSUncKekFzhTZySZaRsWcPR0rkO5t/4HbxqiZKY
J+29McgST9fXeu8H5BliUmfGwasCZBV53QOTveve6l4Ul2Cv0Z44nMZrI6YQ
LC2KeobCjg+oCe89EVH8FsCATargZijF7HMdPYsi73LyNwALG5PczIdyliN5
SFFp7q5VN48tZ4d7xQ9N3tEYBJktxsrpTeO4iqE3kz9dIdvQz4KtDzHnntSu
gD05G8LGlMpgF2qpw71tkJW0Fr3LnjVn98TZAmgaZPP8EjwRHtiQZlktdx7C
LJwXLs60625tJjFni/qQFu+oxpMXhD+mVf3vy69cWITkeHeNR+7Cz/SkJJV8
F3rCgDFYzVOACQbeG/HDKOHGeAbICjaKhTL+ky0kGCFUlslHbxYtMvEpR2sA
Be+0iBxl6mHXA2xCRYhxrgzjfOIiF3SBpeYtM8n2VfemU5D1eugS+51bSw2N
AdgMR6mSjgGOjtegZ/3ux3fff3x38dePP3nTo8efgFBoReW6AGe6FbXEpjXR
exHvmipNbspKQv1BLNyrQ4u1i7F5gmU4hVfBANjYEnvlIKPTaZYT2tnpQAEa
EIIwD7OZTTMWXZXp4aaXBLEqHbgXq8b0dWPrRIZcdRUJ3Xoy7+broIZ6+e0h
nAchalDTl0wgqIblhL/VqtVbOrmvON27Gu5SS8oW0Rb530GLpwcP7pEe2/fb
pojIYximzuJn09SGOIBayTxghswb69WDY51v991tY7MGlkRZys9AfsCKicnp
HpF5izoj6zOzLFVHXj8OPrKslS/enqm5fAv3ssQItKZ12ionYkmeIdPzFs8z
+Xa3uZ49FfSFRNAn5yNRKDW4DxI6+vXwsvg96GT0tx33DVGst4BvTKjKJ4cX
uxeDatq8ITXK5DkiFmkdAmnJhqxyOIoia7O/OfrHLXbpxxTrdQ1N77nZwXtT
erD/uIyVU2N/2KjUICkCofnd3/btsocMnUfldfDEdRhbILB4GkYwkkGnHvwm
FwFSKUBniBkgwY4LyRpNslEA4Gxw0x6eCgbJx1VIqnMW/3S9Y83aI/GxPwGO
7sJOuwTSV2XdNmAtcrbzNydvfUUU5f/iE3qwvtEemqVJKtiMY6K4LjP1g+Yf
65nIcAEgXSP7HJ7O/17jTdNlJX/wTFhya72UOxYGf5RdN+kwfUTmyt42PEG7
9OOxIS9mSqRXyJkJnlPFudLf32Rb5pCOLqvuSC+BtnzOjkVPGn1x7TJ8BJtT
9jWuhftl+W9PzN+3qlGNspfYIYGJoQITX/qsBJIeXYl3tTe5uaz77JTlxQst
SUIqOube6v5KwGdxMSz0TY0ZGUEbM7cyma4lrzaBjO9u+bM53Cr23MniZY7N
oll0TOZQSZC9oBS4d4GHpFuyanFujbCPxiqY5vvWwFh3rdcKdej7ZAaSuPhe
KgayurDlPt1Fik6vVAb893wtBg2lJgrWMCKm6Ssn97d8GSXQl57Pg6mU80i0
EldoWv9nOyWZLnn1ZIU/Lwe6AjMlk6rYPMhKbnNZBVcZUCrIG9kh94PEqY1L
6CPEzNTuO0yNPI6Hm7XRJs7wNQwsKk9aEi6u605b4dyR3rhnxf3pfVRqMCZk
OM6FwnDloqJYxUznxsvbJ4gbjknHTkzXcq8gprTidQiTeMCGnPF4vZGFsAAA
uhvMFiWbOK2KpVFmxI4nnIFWwwXTuc546rp5crR5BRURLKm87BxetVzUCFVH
mzDJPj9z/VQdSiybMyStZkENsNpUL5uBKDwmBl3KgocFqJr7+mpVR4340SaZ
+cFH/jSIMIuoi5W6UPet17a8jem7KJMmqfkSHBzHvn+/atgpk9QmCwpUgBOu
OnvpB9Ova0mGgcZsmzfFwJUW/NTaGWuGLoCGFuEkVK5qwsklAaJ8l69LHGyk
BzJXr0heTT4tQnANpgH81QyQmyXdLfFW1Gl0TpnNRRx8L0YZhr3IQpqCvV9M
isZoJzMESATUa4owQ7AsMLMF3ObW+4qU/0HmOw4z9wnnTFRSMD4TgkUdRQFy
rgFkI6OwmPcOcaqXqjQ95BY3WpEEPcEuMokI8VVpYRNlgsPSk0owdV2kZENB
yGU/QMbjszFsyzorSRGw2R8v9/7Uw6V3NkFQT713KrB5eVF/3vvKIm4dK78x
R1vFQo7SI73I1iXuhXt+qaloeAmL36MhujLGL+spGhWUONL6zn28Td+OV2wD
1eNHaO+yk/n3TP4wUbxVlDj/xJQ4akDtWGrqMkcTmKnmtqaMAr6W78ekw6F4
+6lSPuPYg30+CRRLZsKeJ/xhpm7uooHnP12VmOOMEzJ8xOHjBZu/ojTT3SKZ
ulEN3DWGa0DFCP3CeFCIHXZiQkfhJy+xwEpt4r91HiATO5MsfUaGaGUGUu7+
QXj1D+XJFUzetBlP1BX36Mm7jtN56XH9UM9mzUI9V2x0TiE9uns0kQ2fdpRh
8fiw5NcGDng/Xc9D3HKxQBvu2fp2Gly9PkRgb9sxyCPeB6EOf/6Xpp5X5+EX
C/DgzXmSw6/tN+/ukRKxEBlDFcrz+iE8eAiyw/sOPiQ31nAj1BzG73oRadEH
s8qHtAojRCIPRt/mq4z9+akNsh429if214dA5U24WVDg5/UCeOrUBoFsPrj3
ErVxRPHZTkl9/3XeshldXSo/oSkLjvh228sDB6YOt2GmMnq/MoU/AH0WO7dD
WQlRWnl6UfbCODCcaDquk7Ms58C4qxAJis4lteG8zAEGCbsLOmtUGDYG+vQR
7RM/mVEnGb6XKWETsFwTnP/86cO/Oben9THHkkZG16VSxDEfrLCypLJh1zZP
JU6Xs042kC//XdAeLcdP3bz6HozcaCn9I55Z37jyw7y+arvR5iHVyf2+Q/Xx
ffA2b9BTLjpKTdhFmRH5rsPi/wKkB83iESgFAA==

-->

</rfc>
