<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc [
  <!ENTITY nbsp "&#160;">
  <!ENTITY zwsp "&#8203;">
  <!ENTITY nbhy "&#8209;">
  <!ENTITY wj "&#8288;">
]>
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     category="info"
     docName="draft-vandemeent-upip-process-integrity-01"
     ipr="trust200902"
     submissionType="IETF"
     consensus="true"
     version="3">

  <front>
    <title abbrev="UPIP">UPIP: Universal Process Integrity Protocol
    with Fork Tokens for Multi-Actor Continuation</title>

    <seriesInfo name="Internet-Draft"
                value="draft-vandemeent-upip-process-integrity-01"/>

    <author fullname="Jasper van de Meent" initials="J."
            surname="van de Meent">
      <organization>Humotica</organization>
      <address>
        <postal>
          <city>Den Dolder</city>
          <country>Netherlands</country>
        </postal>
        <email>jasper@humotica.com</email>
        <uri>https://humotica.com</uri>
      </address>
    </author>

    <author fullname="Root AI" surname="Root AI">
      <organization>Humotica</organization>
      <address>
        <email>root_ai@humotica.nl</email>
        <uri>https://humotica.com</uri>
      </address>
    </author>

    <date year="2026" month="March" day="29"/>

    <area>Security</area>
    <workgroup>Internet Engineering Task Force</workgroup>

    <keyword>process-integrity</keyword>
    <keyword>reproducibility</keyword>
    <keyword>fork-token</keyword>
    <keyword>multi-actor</keyword>
    <keyword>provenance</keyword>

    <abstract>
      <t>This document defines UPIP (Universal Process Integrity
      Protocol), a five-layer protocol for capturing, verifying,
      and reproducing computational processes across machines,
      actors, and trust domains. UPIP defines a cryptographic hash
      chain over five layers: STATE (input), DEPS (dependencies),
      PROCESS (execution), RESULT (output), and VERIFY
      (cross-machine proof). The stack hash chains these layers,
      ensuring that modification of any component is detectable.</t>

      <t>This document also defines Fork Tokens, a continuation
      mechanism for multi-actor process handoff. Fork tokens freeze
      the UPIP stack at a specific point and transfer it to another
      actor with cryptographic chain of custody. The receiving actor
      can verify what was handed off, validate capabilities, and
      continue the process with full provenance.</t>

      <t>UPIP integrates with TIBET <xref target="TIBET"/> for
      provenance tokens, JIS <xref target="JIS"/> for actor identity,
      and is transport-agnostic with JSON as the baseline
      serialization.</t>
    </abstract>
  </front>

  <middle>
    <!-- Section 1: Introduction -->
    <section anchor="introduction">
      <name>Introduction</name>

      <t>Distributed computing increasingly involves heterogeneous
      actors: human operators, AI agents, automated pipelines, edge
      devices, and cloud services. When a process moves between
      actors -- from one machine to another, from an AI to a human
      for review, from a drone to a command station -- the integrity
      of the process state must be verifiable at every handoff
      point.</t>

      <t>UPIP fills this gap with two complementary mechanisms:</t>

      <ol>
        <li>The UPIP Stack: a five-layer bundle capturing everything
        needed to reproduce a process, with a single stack hash that
        becomes invalid if any layer is modified.</li>

        <li>Fork Tokens: a continuation mechanism that freezes the
        stack state and transfers it to another actor with
        cryptographic proof of what was handed off, who handed it
        off, why, and what capabilities are required to continue.</li>
      </ol>

      <!-- Section 1.1: Problem Statement -->
      <section anchor="problem-statement">
        <name>Problem Statement</name>

        <t>Existing solutions address parts of process integrity:</t>

        <ul>
          <li>Version control (git) tracks code state but not
          execution</li>
          <li>Container images (OCI) capture environment but not
          intent</li>
          <li>CI/CD pipelines orchestrate execution but provide no
          cross-machine reproducibility proof</li>
          <li>Package managers record dependencies but not their
          usage context</li>
        </ul>

        <t>None provide a unified, self-verifying bundle that
        captures the complete execution context with cryptographic
        chain of custody across actor boundaries.</t>
      </section>

      <!-- Section 1.2: Design Principles -->
      <section anchor="design-principles">
        <name>Design Principles</name>

        <dl>
          <dt>EVIDENCE OVER ENFORCEMENT:</dt>
          <dd>UPIP proves what happened. Fork validation failures do
          not block execution; they are recorded as evidence. This
          reflects the reality that enforcement can be circumvented
          but evidence cannot be un-recorded.</dd>

          <dt>HASH CHAIN INTEGRITY:</dt>
          <dd>Every layer is independently hashed. The stack hash
          chains them. Fork hashes chain into the fork chain.
          Tampering with any component invalidates the chain.</dd>

          <dt>ACTOR AGNOSTICISM:</dt>
          <dd>Actors may be human operators, AI agents, automated
          scripts, IoT devices, or any computational entity. The
          protocol makes no assumption about actor type.</dd>

          <dt>TRANSPORT AGNOSTICISM:</dt>
          <dd>UPIP bundles are JSON documents. They can be
          transferred via file copy, HTTP API, message queue,
          I-Poll, or physical media.</dd>
        </dl>
      </section>
    </section>

    <!-- Section 2: Terminology -->
    <section anchor="terminology">
      <name>Terminology</name>

      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL",
      "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT
      RECOMMENDED", "MAY", and "OPTIONAL" in this document are to
      be interpreted as described in BCP 14 <xref target="RFC2119"/>
      <xref target="RFC8174"/> when, and only when, they appear in
      all capitals, as shown here.</t>

      <dl>
        <dt>Actor</dt>
        <dd>An entity that creates, modifies, or continues a UPIP
        process. Actors may be human operators, AI agents (IDDs as
        defined in <xref target="JIS"/>), automated scripts, IoT
        devices, or any computational entity.</dd>

        <dt>Airlock</dt>
        <dd>An isolated execution environment (sandbox) where
        processes run before their results are applied to production
        state. The airlock captures all side effects without
        committing them.</dd>

        <dt>Canonical Serialization</dt>
        <dd>The deterministic JSON serialization used before hashing.
        Keys sorted lexicographically, no whitespace, UTF-8
        encoding. Defined in <xref target="canonical-serialization"/>.</dd>

        <dt>Continuation Point</dt>
        <dd>A reference to the specific position in the UPIP stack
        where the fork occurs, expressed as "L{layer}:{position}".
        Example: "L4:post_result" indicates the fork occurs after
        L4 RESULT has been captured.</dd>

        <dt>Fork Token</dt>
        <dd>A JSON document that freezes the UPIP stack state at a
        specific point and authorizes another actor to continue the
        process.</dd>

        <dt>Fork Chain</dt>
        <dd>An ordered list of fork token references maintained in
        the UPIP stack, providing a complete history of all
        handoffs.</dd>

        <dt>Fork-Squared (Fork^2)</dt>
        <dd>Parallel forking, where a single process is split into
        N independent sub-tasks distributed to N actors, each
        receiving a fork token of type "fragment".</dd>

        <dt>IDD (Individual Device Derivative)</dt>
        <dd>An AI agent with unique identity. Defined in the
        companion JIS specification <xref target="JIS"/>.</dd>

        <dt>Shadow-Run</dt>
        <dd>Executing a process in the airlock to capture its
        effects without applying them. Used for fork
        validation.</dd>

        <dt>Stack Hash</dt>
        <dd>The SHA-256 hash computed over the concatenation of L1
        through L4 layer hashes, prefixed with "upip:sha256:". This
        single hash represents the complete integrity of the UPIP
        bundle.</dd>

        <dt>UPIP Stack (Bundle)</dt>
        <dd>A JSON document containing all five UPIP layers plus
        metadata. Files use the ".upip.json" extension.</dd>
      </dl>
    </section>

    <!-- Section 3: Protocol Overview -->
    <section anchor="protocol-overview">
      <name>Protocol Overview</name>

      <t>UPIP operates in two modes:</t>

      <t>Single-Actor Mode (Capture-Run-Verify):</t>

      <ol>
        <li>CAPTURE: Record L1 (state) and L2 (deps)</li>
        <li>RUN: Execute the process (L3) in an airlock</li>
        <li>RESULT: Capture L4 (output, diff, hash)</li>
        <li>HASH: Compute stack_hash = SHA-256(L1 || L2 || L3 || L4)</li>
        <li>VERIFY: On another machine, reproduce and compare (L5)</li>
      </ol>

      <t>Multi-Actor Mode (Fork-Resume):</t>

      <ol>
        <li>Actor A completes steps 1-4 (single-actor mode)</li>
        <li>Actor A creates a Fork Token from the UPIP stack</li>
        <li>Actor A delivers the fork token to Actor B</li>
        <li>Actor B validates the fork token hash</li>
        <li>Actor B checks capability requirements</li>
        <li>Actor B executes continuation in an airlock
        (shadow-run)</li>
        <li>Actor B creates a new UPIP stack linked to Actor A's
        via the fork chain</li>
        <li>Actor B sends ACK with resume_hash to Actor A</li>
      </ol>

      <figure anchor="process-flow">
        <name>Process Flow Diagram</name>
        <artwork type="ascii-art"><![CDATA[
+----------+     +---------+     +---------+     +---------+
| L1 STATE |---->| L2 DEPS |---->| L3 PROC |---->| L4 RSLT |
+----------+     +---------+     +---------+     +---------+
     |                |               |               |
     v                v               v               v
  state_hash       deps_hash      (intent)       result_hash
     |                |               |               |
     +-------+--------+-------+-------+
             |
             v
      stack_hash = SHA-256(L1 || L2 || L3 || L4)
             |
             v
       +------------+
       | Fork Token |---> Actor B ---> New UPIP Stack
       +------------+
             |
             v
        fork_chain: [{fork_id, parent_hash, ...}]
        ]]></artwork>
      </figure>
    </section>

    <!-- Section 4: UPIP Stack Structure -->
    <section anchor="upip-stack-structure">
      <name>UPIP Stack Structure</name>

      <t>A UPIP stack MUST be a <xref target="RFC8259"/> JSON object
      with the following top-level fields:</t>

      <sourcecode type="json"><![CDATA[
{
  "protocol": "UPIP",
  "version": "1.1",
  "title": "<human-readable description>",
  "created_by": "<actor identity (JIS format)>",
  "created_at": "<ISO-8601 timestamp>",
  "stack_hash": "upip:sha256:<hex>",
  "state": { },
  "deps": { },
  "process": { },
  "result": { },
  "verify": [ ],
  "fork_chain": [ ],
  "source_files": { }
}
      ]]></sourcecode>

      <!-- Section 4.1: L1 STATE -->
      <section anchor="l1-state">
        <name>L1 STATE - Input State Capture</name>

        <t>L1 captures the complete input state before execution.
        The state_type field determines the capture method:</t>

        <sourcecode type="json"><![CDATA[
{
  "state_type": "git | files | image | empty",
  "state_hash": "<type>:<hash>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>State Types:</t>

        <dl>
          <dt>git</dt>
          <dd>Hash is the git commit SHA. MUST include git_remote
          and git_branch. state_hash prefix: "git:"</dd>

          <dt>files</dt>
          <dd>Hash is SHA-256 of the sorted file manifest.
          state_hash prefix: "files:"</dd>

          <dt>image</dt>
          <dd>Hash is the container image digest.
          state_hash prefix: "image:"</dd>

          <dt>empty</dt>
          <dd>No input state. state_hash: "empty:0"</dd>
        </dl>

        <t>For "git" type, additional fields:</t>
        <ul>
          <li>git_remote: Repository URL</li>
          <li>git_branch: Branch name</li>
          <li>git_dirty: Boolean, true if uncommitted changes
          exist</li>
        </ul>

        <t>For "files" type, additional fields:</t>
        <ul>
          <li>file_count: Number of files captured</li>
          <li>total_size: Total size in bytes</li>
          <li>manifest: Optional array of {path, hash, size}
          objects</li>
        </ul>
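
        <t>As a non-normative illustration, a "files" state hash
        might be computed as follows. The manifest line format
        ("path:hash", sorted by path, newline-joined) is an
        assumption of this sketch; this section requires only a
        sorted file manifest:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
from pathlib import Path

def files_state_hash(root: str) -> str:
    # Build a sorted manifest of (relative path, content hash)
    # lines and hash the manifest itself. The exact line format
    # is illustrative, not mandated by this specification.
    entries = []
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            entries.append(
                f"{p.relative_to(root).as_posix()}:{digest}")
    manifest = "\n".join(entries)
    return "files:" + hashlib.sha256(
        manifest.encode("utf-8")).hexdigest()
        ]]></sourcecode>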
      </section>

      <!-- Section 4.2: L2 DEPS -->
      <section anchor="l2-deps">
        <name>L2 DEPS - Dependency Snapshot</name>

        <t>L2 captures the exact dependency set at execution
        time.</t>

        <sourcecode type="json"><![CDATA[
{
  "python_version": "<major.minor.patch>",
  "packages": { "<name>": "<version>" },
  "system_packages": [ "<name>=<version>" ],
  "deps_hash": "deps:sha256:<hex>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>The deps_hash MUST be computed as SHA-256 of the sorted,
        deterministic serialization of all package name:version
        pairs.</t>

        <t>While this specification uses Python as the reference
        implementation, L2 is language-agnostic. Other
        implementations MAY substitute appropriate dependency
        metadata for their runtime environment (e.g., Cargo.lock
        for Rust, go.sum for Go, package-lock.json for
        Node.js).</t>
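
        <t>A non-normative sketch of the deps_hash computation
        follows. Since this section specifies only a sorted,
        deterministic serialization, the concrete join format
        (newline-separated "name:version" pairs) is an assumption
        of this sketch:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def deps_hash(packages: dict) -> str:
    # Serialize each package as "name:version", sort, and hash
    # the joined result. The join format is illustrative only;
    # any sorted, deterministic serialization satisfies L2.
    lines = sorted(f"{name}:{version}"
                   for name, version in packages.items())
    return "deps:sha256:" + hashlib.sha256(
        "\n".join(lines).encode("utf-8")).hexdigest()
        ]]></sourcecode>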
      </section>

      <!-- Section 4.3: L3 PROCESS -->
      <section anchor="l3-process">
        <name>L3 PROCESS - Execution Definition</name>

        <t>L3 defines what was executed and why. The "intent" field
        maps to TIBET ERACHTER <xref target="TIBET"/> and the
        "actor" field uses JIS identifier format
        <xref target="JIS"/>.</t>

        <sourcecode type="json"><![CDATA[
{
  "command": [ "<arg0>", "<arg1>" ],
  "intent": "<human-readable purpose>",
  "actor": "<actor identity (JIS format)>",
  "env_vars": { "<key>": "<value>" },
  "working_dir": "<path>"
}
        ]]></sourcecode>

        <t>The command field MUST be an array of strings, not a
        shell command string. This prevents injection attacks and
        ensures deterministic execution.</t>

        <t>The intent field MUST be a human-readable string
        describing WHY this process is being run. This serves as
        the ERACHTER (intent) component for TIBET
        integration.</t>

        <t>The actor field MUST identify the entity that initiated
        the process using JIS identifier format. This may be a
        human operator, AI agent (IDD), or system service.</t>
      </section>

      <!-- Section 4.4: L4 RESULT -->
      <section anchor="l4-result">
        <name>L4 RESULT - Output Capture</name>

        <t>L4 captures the execution result.</t>

        <sourcecode type="json"><![CDATA[
{
  "success": true,
  "exit_code": 0,
  "stdout": "<captured stdout>",
  "stderr": "<captured stderr>",
  "result_hash": "sha256:<hex>",
  "files_changed": 3,
  "diff": "<unified diff of file changes>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>The result_hash MUST be computed as SHA-256 of the
        concatenation of: exit_code (as string) + stdout +
        stderr.</t>

        <t>If execution occurs in an airlock, the diff field SHOULD
        contain the unified diff of all file changes detected.</t>
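
        <t>The result_hash rule above can be sketched as follows
        (non-normative):</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def result_hash(exit_code: int, stdout: str, stderr: str) -> str:
    # SHA-256 over the concatenation of exit_code (as string),
    # stdout, and stderr, as required by L4.
    data = str(exit_code) + stdout + stderr
    return "sha256:" + hashlib.sha256(
        data.encode("utf-8")).hexdigest()
        ]]></sourcecode>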
      </section>

      <!-- Section 4.5: L5 VERIFY -->
      <section anchor="l5-verify">
        <name>L5 VERIFY - Cross-Machine Proof</name>

        <t>L5 records verification attempts when the UPIP stack is
        reproduced on another machine.</t>

        <sourcecode type="json"><![CDATA[
{
  "machine": "<hostname or identifier>",
  "verified_at": "<ISO-8601 timestamp>",
  "match": true,
  "environment": { "os": "linux", "arch": "x86_64" },
  "original_hash": "upip:sha256:<hex>",
  "reproduced_hash": "upip:sha256:<hex>"
}
        ]]></sourcecode>

        <t>The match field MUST be true only if reproduced_hash
        equals original_hash.</t>

        <t>L5 is an array, allowing multiple verification records
        from different machines. Each verification is
        independent.</t>
      </section>

      <!-- Section 4.6: Stack Hash Computation -->
      <section anchor="stack-hash-computation">
        <name>Stack Hash Computation</name>

        <t>The stack hash MUST be computed as follows:</t>

        <ol>
          <li>Serialize each layer hash as a UTF-8 string:
          L1: state.state_hash,
          L2: deps.deps_hash,
          L3: SHA-256(canonical_json(process)),
          L4: result.result_hash</li>

          <li>Concatenate with pipe separator:
          L1 + "|" + L2 + "|" + L3 + "|" + L4</li>

          <li>Compute SHA-256 of the concatenated UTF-8 string</li>

          <li>Prefix with "upip:sha256:"</li>
        </ol>

        <t>Result: "upip:sha256:4f2e8a..."</t>

        <t>The canonical_json() function is defined in
        <xref target="canonical-serialization"/>.</t>
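
        <t>The four steps above can be sketched in Python as
        follows (non-normative; the canonical serialization is
        inlined here per the rules of
        <xref target="canonical-serialization"/>):</t>

        <sourcecode type="python"><![CDATA[
import hashlib
import json

def canonical_json(obj) -> str:
    # Canonical form: keys sorted, no whitespace between tokens.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))

def stack_hash(stack: dict) -> str:
    # Step 1: the four layer hash strings.
    l1 = stack["state"]["state_hash"]
    l2 = stack["deps"]["deps_hash"]
    l3 = hashlib.sha256(
        canonical_json(stack["process"]).encode("utf-8")).hexdigest()
    l4 = stack["result"]["result_hash"]
    # Steps 2-4: pipe-join, SHA-256, prefix.
    joined = "|".join([l1, l2, l3, l4])
    return "upip:sha256:" + hashlib.sha256(
        joined.encode("utf-8")).hexdigest()
        ]]></sourcecode>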
      </section>

      <!-- Section 4.7: Canonical Serialization -->
      <section anchor="canonical-serialization">
        <name>Canonical Serialization</name>

        <t>Before hashing, JSON objects MUST be serialized to
        canonical form:</t>

        <ol>
          <li>All object keys sorted lexicographically by Unicode
          code point.</li>
          <li>No whitespace between tokens.</li>
          <li>Strings use only <xref target="RFC8259"/> escape
          sequences.</li>
          <li>Numbers use shortest representation without leading
          zeros.</li>
        </ol>

        <t>This ensures deterministic hashing across
        implementations. The same canonical serialization is used
        in TIBET <xref target="TIBET"/> Section 5.1.</t>
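
        <t>In Python, rules 1 through 3 map directly onto the
        standard json module. Whether non-ASCII characters are
        escaped is not specified above; emitting them directly
        (ensure_ascii=False) is an assumption of this
        non-normative sketch:</t>

        <sourcecode type="python"><![CDATA[
import json

def canonical_json(obj) -> str:
    # sort_keys sorts by Unicode code point; separators remove
    # all whitespace; json.dumps uses only RFC 8259 escapes.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False)
        ]]></sourcecode>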
      </section>
    </section>

    <!-- Section 5: Fork Tokens -->
    <section anchor="fork-tokens">
      <name>Fork Tokens</name>

      <!-- Section 5.1: Fork Token Structure -->
      <section anchor="fork-token-structure">
        <name>Fork Token Structure</name>

        <t>A fork token MUST be a <xref target="RFC8259"/> JSON
        object with the following fields. Actor fields use JIS
        identifier format <xref target="JIS"/>:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_id": "fork-<uuid>",
  "parent_hash": "sha256:<hex>",
  "parent_stack_hash": "upip:sha256:<hex>",
  "continuation_point": "L<n>:<position>",
  "intent_snapshot": "<human-readable purpose>",
  "active_memory_hash": "sha256:<hex>",
  "memory_ref": "<path or URL to memory blob>",
  "fork_type": "script|ai_to_ai|human_to_ai|fragment",
  "actor_from": "<JIS actor identifier>",
  "actor_to": "<JIS actor identifier or *>",
  "actor_handoff": "<from> -> <to>",
  "capability_required": { },
  "forked_at": "<ISO-8601 timestamp>",
  "expires_at": "<ISO-8601 timestamp or empty>",
  "fork_hash": "fork:sha256:<hex>",
  "partial_layers": { },
  "metadata": { }
}
        ]]></sourcecode>

        <t>The actor_to field MAY be "*", indicating the fork is
        available to any capable actor. In this case,
        actor_handoff MUST also use "*" as the target:
        "ActorA -> *".</t>
      </section>

      <!-- Section 5.2: Fork Types -->
      <section anchor="fork-types">
        <name>Fork Types</name>

        <dl>
          <dt>script</dt>
          <dd>The UPIP bundle IS the complete state. No external
          memory blob is needed. Used for CLI pipelines, CI/CD,
          and batch processing. The active_memory_hash is computed
          from the L1+L2+L3+L4 layer hashes.</dd>

          <dt>ai_to_ai</dt>
          <dd>The AI actor's context window is serialized as a
          binary blob (.blob file). The active_memory_hash is the
          SHA-256 of this blob. The memory_ref field SHOULD point
          to the blob's location.</dd>

          <dt>human_to_ai</dt>
          <dd>A human creates an intent document (natural language
          instructions) and delegates to an AI actor. The
          active_memory_hash is the SHA-256 of the intent
          document.</dd>

          <dt>fragment</dt>
          <dd>A parallel fork (Fork-Squared). The parent process
          is split into N sub-tasks, each receiving a fork token
          of type "fragment" with the specific portion
          assigned.</dd>
        </dl>
      </section>

      <!-- Section 5.3: Fork Hash Computation -->
      <section anchor="fork-hash-computation">
        <name>Fork Hash Computation</name>

        <t>The fork hash MUST be computed as follows:</t>

        <ol>
          <li>Concatenate with pipe separator:
          fork_id + "|" + parent_hash + "|" + parent_stack_hash +
          "|" + continuation_point + "|" + intent_snapshot + "|" +
          active_memory_hash + "|" + actor_handoff + "|" +
          fork_type</li>

          <li>Compute SHA-256 of the concatenated string</li>

          <li>Prefix with "fork:sha256:"</li>
        </ol>

        <t>Result: "fork:sha256:7d3f..."</t>

        <t>This ensures that modifying any of the hashed fields
        invalidates the fork token.</t>
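
        <t>A non-normative sketch of the fork hash
        computation:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

# The eight fields covered by fork_hash, in concatenation order.
FORK_HASH_FIELDS = ("fork_id", "parent_hash", "parent_stack_hash",
                    "continuation_point", "intent_snapshot",
                    "active_memory_hash", "actor_handoff",
                    "fork_type")

def fork_hash(token: dict) -> str:
    joined = "|".join(token[f] for f in FORK_HASH_FIELDS)
    return "fork:sha256:" + hashlib.sha256(
        joined.encode("utf-8")).hexdigest()
        ]]></sourcecode>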
      </section>

      <!-- Section 5.4: Active Memory Hash -->
      <section anchor="active-memory-hash">
        <name>Active Memory Hash</name>

        <t>The active_memory_hash captures cognitive or
        computational state at fork time.</t>

        <dl>
          <dt>For fork_type "script":</dt>
          <dd>SHA-256(state_hash + "|" + deps_hash + "|" +
          process_intent + "|" + result_hash)</dd>

          <dt>For fork_type "ai_to_ai":</dt>
          <dd>SHA-256 of the serialized AI context window (.blob
          file).</dd>

          <dt>For fork_type "human_to_ai":</dt>
          <dd>SHA-256(contents of intent document)</dd>

          <dt>For fork_type "fragment":</dt>
          <dd>SHA-256(fragment specification)</dd>
        </dl>

        <t>This field is EVIDENCE, not a reproducibility guarantee.
        Exact reproduction of AI state is generally not achievable.
        The hash proves what the state WAS at fork time, enabling
        audit and comparison.</t>

        <t>Implementations MUST NOT require exact memory
        reproduction for fork validation.</t>
      </section>

      <!-- Section 5.5: Capability Requirements -->
      <section anchor="capability-requirements">
        <name>Capability Requirements</name>

        <t>The capability_required field specifies what the resuming
        actor needs:</t>

        <sourcecode type="json"><![CDATA[
{
  "capability_required": {
    "deps": ["package>=version"],
    "gpu": true,
    "min_memory_gb": 16,
    "platform": "linux/amd64",
    "custom": { }
  }
}
        ]]></sourcecode>

        <t>On resume, the receiving actor SHOULD verify these
        requirements and record the result in the verification
        record. Missing capabilities MUST NOT prevent execution
        but MUST be recorded as evidence.</t>
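
        <t>A non-normative sketch of such a check against a
        locally assembled environment description. The field names
        of the local dictionary and the mismatch classes (see
        <xref target="failure-behavior"/>) are illustrative:</t>

        <sourcecode type="python"><![CDATA[
def check_capabilities(required: dict, local: dict) -> dict:
    # Evidence over enforcement: mismatches are recorded for the
    # verification record; execution is never blocked here.
    mismatches = []
    if required.get("gpu") and not local.get("gpu"):
        mismatches.append({"field": "gpu", "class": "DEGRADED"})
    need = required.get("min_memory_gb")
    if need and local.get("memory_gb", 0) < need:
        mismatches.append({"field": "min_memory_gb",
                           "class": "DEGRADED"})
    plat = required.get("platform")
    if plat and local.get("platform") != plat:
        mismatches.append({"field": "platform", "class": "FATAL"})
    return {"capability_ok": not mismatches,
            "mismatches": mismatches}
        ]]></sourcecode>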
      </section>

      <!-- Section 5.6: Fork Chain -->
      <section anchor="fork-chain">
        <name>Fork Chain</name>

        <t>The fork_chain field in the UPIP stack is an ordered
        array of fork token references:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_chain": [
    {
      "fork_id": "fork-abc123",
      "fork_hash": "fork:sha256:...",
      "actor_handoff": "A -> B",
      "forked_at": "2026-03-29T14:00:00Z"
    }
  ]
}
        ]]></sourcecode>

        <t>When a process is resumed, the new UPIP stack MUST
        include the fork token in its fork_chain. This creates a
        complete audit trail of all handoffs.</t>
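
        <t>A minimal sketch of appending a fork token reference to
        the fork_chain of the new stack, using the entry fields
        shown above:</t>

        <sourcecode type="python"><![CDATA[
def append_fork(stack: dict, token: dict) -> None:
    # Record the handoff in the resumed stack's fork chain,
    # keeping only the reference fields from the fork token.
    stack.setdefault("fork_chain", []).append({
        "fork_id": token["fork_id"],
        "fork_hash": token["fork_hash"],
        "actor_handoff": token["actor_handoff"],
        "forked_at": token["forked_at"],
    })
        ]]></sourcecode>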
      </section>
    </section>

    <!-- Section 6: Operations -->
    <section anchor="operations">
      <name>Operations</name>

      <!-- Section 6.1: Capture and Run -->
      <section anchor="capture-and-run">
        <name>Capture and Run</name>

        <t>Input: command, source_dir, intent, actor</t>
        <t>Output: UPIP stack with L1-L4 populated</t>

        <ol>
          <li>Capture L1 STATE from source_dir</li>
          <li>Capture L2 DEPS from current environment</li>
          <li>Define L3 PROCESS from command and intent</li>
          <li>Execute command in airlock</li>
          <li>Capture L4 RESULT</li>
          <li>Compute stack_hash</li>
          <li>Return UPIP stack</li>
        </ol>
      </section>

      <!-- Section 6.2: Reproduce -->
      <section anchor="reproduce">
        <name>Reproduce</name>

        <t>Input: UPIP stack (.upip.json), target machine</t>
        <t>Output: L5 VERIFY record</t>

        <ol>
          <li>Load UPIP stack from file</li>
          <li>Restore L1 STATE (checkout git, extract files)</li>
          <li>Verify L2 DEPS match (warn on mismatches)</li>
          <li>Execute L3 PROCESS in airlock</li>
          <li>Capture L4 RESULT on target machine</li>
          <li>Compare result_hash with original</li>
          <li>Create L5 VERIFY record</li>
          <li>Return verification result</li>
        </ol>
      </section>

      <!-- Section 6.3: Fork -->
      <section anchor="fork">
        <name>Fork</name>

        <t>Input: UPIP stack, actor_from, actor_to, intent</t>
        <t>Output: Fork Token</t>

        <ol>
          <li>Load UPIP stack</li>
          <li>Compute active_memory_hash from L1-L4</li>
          <li>Snapshot partial_layers (hash + key fields per
          layer)</li>
          <li>Generate fork_id</li>
          <li>Compute fork_hash</li>
          <li>Create Fork Token</li>
          <li>Append to fork_chain in parent stack</li>
          <li>Return Fork Token</li>
        </ol>
      </section>

      <!-- Section 6.4: Resume -->
      <section anchor="resume">
        <name>Resume</name>

        <t>Input: Fork Token (.fork.json), command, actor</t>
        <t>Output: New UPIP stack, verification record</t>

        <ol>
          <li>Load Fork Token</li>
          <li>Recompute fork_hash and compare (tamper check)</li>
          <li>Check capability_required against local
          environment</li>
          <li>Execute command in airlock (shadow-run)</li>
          <li>Capture new UPIP stack (L1-L4)</li>
          <li>Copy fork_chain from parent, append this fork</li>
          <li>Create L5 VERIFY with fork validation results</li>
          <li>Return new stack + verification</li>
        </ol>
      </section>

      <!-- Section 6.5: Fragment (Fork-Squared) -->
      <section anchor="fragment">
        <name>Fragment (Parallel Forking)</name>

        <t>Input: UPIP stack, N fragments, actor list</t>
        <t>Output: N Fork Tokens of type "fragment"</t>

        <ol>
          <li>Load UPIP stack</li>
          <li>Define fragment specification (how to split)</li>
          <li>For each fragment i in 1..N: Create Fork Token with
          fork_type="fragment", set fragment-specific metadata
          (index, total, range), and deliver to actor[i].</li>
          <li>Wait for N ACKs</li>
          <li>Verify all fragment hashes</li>
          <li>Reconstruct combined result</li>
        </ol>

        <t>Fragment tokens MUST include metadata fields:</t>
        <ul>
          <li>fragment_index: Position in sequence (0-based)</li>
          <li>fragment_total: Total number of fragments</li>
          <li>fragment_spec: Description of this fragment's
          portion</li>
        </ul>
      </section>
    </section>

    <!-- Section 7: Validation Rules -->
    <section anchor="validation-rules">
      <name>Validation Rules</name>

      <!-- Section 7.1: Stack Validation -->
      <section anchor="stack-validation">
        <name>Stack Validation</name>

        <t>A UPIP stack is valid if and only if:</t>

        <ol>
          <li>All required fields are present</li>
          <li>state_hash matches SHA-256 of the canonical state
          data</li>
          <li>deps_hash matches SHA-256 of the canonical dependency
          data</li>
          <li>result_hash matches SHA-256 of exit_code + stdout +
          stderr</li>
          <li>stack_hash matches SHA-256(L1 || L2 || L3 || L4)</li>
        </ol>

        <t>Validation MUST be performed when loading a .upip.json
        file and SHOULD be performed before reproduction.</t>
      </section>

      <!-- Section 7.2: Fork Validation on Resume -->
      <section anchor="fork-validation">
        <name>Fork Validation on Resume</name>

        <t>When resuming a fork token, the following checks MUST be
        performed:</t>

        <ol>
          <li>FORK HASH: Recompute fork_hash from token fields and
          compare with stored fork_hash.</li>

          <li>STORED HASH: Compare fork_hash with the hash in the
          .fork.json file header.</li>

          <li>CAPABILITIES: Verify each entry in
          capability_required against the local environment.</li>

          <li>EXPIRATION: Check expires_at if present.</li>
        </ol>

        <t>All four checks MUST be recorded in the L5 VERIFY
        record.</t>
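
        <t>Checks 1 and 4 can be sketched as follows
        (non-normative; the stored-hash and capability checks are
        omitted). Treating an empty expires_at as "never expires"
        follows the token structure in
        <xref target="fork-token-structure"/>:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
from datetime import datetime, timezone

HASHED_FIELDS = ("fork_id", "parent_hash", "parent_stack_hash",
                 "continuation_point", "intent_snapshot",
                 "active_memory_hash", "actor_handoff", "fork_type")

def validate_fork(token: dict) -> dict:
    # Check 1: recompute fork_hash and compare (tamper check).
    joined = "|".join(token[f] for f in HASHED_FIELDS)
    computed = "fork:sha256:" + hashlib.sha256(
        joined.encode("utf-8")).hexdigest()
    match = computed == token.get("fork_hash")
    # Check 4: expiration, if expires_at is present.
    expired = False
    if token.get("expires_at"):
        expires = datetime.fromisoformat(
            token["expires_at"].replace("Z", "+00:00"))
        expired = datetime.now(timezone.utc) > expires
    # Evidence, not enforcement: the caller records this result.
    return {"fork_hash_match": match, "computed_hash": computed,
            "tamper_evidence": not match, "expired": expired}
        ]]></sourcecode>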
      </section>

      <!-- Section 7.3: Failure Behavior -->
      <section anchor="failure-behavior">
        <name>Failure Behavior</name>

        <t>This section specifies what happens when validation
        fails.</t>

        <t>Hash mismatch (stack_hash or fork_hash):</t>
        <ul>
          <li>MUST be recorded as tamper evidence</li>
          <li>MUST NOT prevent execution (evidence over
          enforcement)</li>
          <li>SHOULD trigger enhanced logging for subsequent
          actions</li>
          <li>The consuming application decides whether to
          proceed</li>
        </ul>

        <t>Capability mismatch:</t>
        <ul>
          <li>MUST be recorded in L5 VERIFY</li>
          <li>MUST NOT prevent execution</li>
          <li>Missing GPU when GPU required: record as
          "degraded"</li>
          <li>Missing dependency: record as
          "incomplete_deps"</li>
        </ul>

        <t>Each mismatch is classified:</t>

        <artwork type="ascii-art"><![CDATA[
  FATAL:    Execution cannot proceed (e.g., wrong OS)
  DEGRADED: Execution possible but results may differ
  MINOR:    Cosmetic difference (e.g., locale)
        ]]></artwork>

        <t>FATAL mismatches SHOULD trigger a warning to the
        operator, but the protocol itself MUST NOT block
        execution; the operator or application decides whether to
        proceed.</t>

        <t>Expiration:</t>
        <ul>
          <li>Expired forks SHOULD generate a warning</li>
          <li>MUST NOT be blocked by the protocol</li>
          <li>MUST be recorded in L5 VERIFY</li>
        </ul>
      </section>

      <!-- Section 7.4: Tamper Evidence -->
      <section anchor="tamper-evidence">
        <name>Tamper Evidence</name>

        <t>If fork_hash validation fails:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_hash_match": false,
  "expected_hash": "fork:sha256:<original>",
  "computed_hash": "fork:sha256:<recomputed>",
  "tamper_evidence": true,
  "fields_checked": ["fork_id", "parent_hash", "..."]
}
        ]]></sourcecode>

        <t>This creates an evidence record that tampering occurred.
        The decision to act on tamper evidence is a local policy
        decision.</t>
      </section>
    </section>

    <!-- Section 8: Transport Considerations -->
    <section anchor="transport">
      <name>Transport Considerations</name>

      <!-- Section 8.1: File-Based Transport -->
      <section anchor="file-based-transport">
        <name>File-Based Transport</name>

        <t>UPIP stacks use the ".upip.json" extension.
        Fork tokens use the ".fork.json" extension.</t>

        <t>The HTTP Content-Type is "application/upip+json" for
        stacks and "application/upip-fork+json" for fork
        tokens.</t>
      </section>

      <!-- Section 8.2: I-Poll Delivery -->
      <section anchor="ipoll-delivery">
        <name>I-Poll Delivery</name>

        <t>Fork tokens MAY be delivered via I-Poll TASK messages.
        I-Poll is OPTIONAL; UPIP does not depend on I-Poll.</t>

        <t>When I-Poll is used, a fork token is delivered as a
        TASK message:</t>

        <sourcecode type="json"><![CDATA[
{
  "from_agent": "<source agent>",
  "to_agent": "<target agent>",
  "content": "<human-readable fork summary>",
  "poll_type": "TASK",
  "metadata": {
    "upip_fork": true,
    "fork_id": "<fork_id>",
    "fork_hash": "fork:sha256:<hex>",
    "fork_type": "<type>",
    "continuation_point": "<point>",
    "actor_handoff": "<from> -> <to>",
    "fork_data": { }
  }
}
        ]]></sourcecode>

        <t>The "upip_fork" metadata flag MUST be true to identify
        this message as a fork delivery.</t>

        <t>The "fork_data" field MUST contain the complete fork
        token as defined in
        <xref target="fork-token-structure"/>. This allows the
        receiving agent to reconstruct the fork token without
        needing the .fork.json file.</t>
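
        <t>On the receiving side, extracting the fork token from a
        TASK message reduces to checking the flag and reading
        "fork_data". The following non-normative sketch uses a
        hypothetical helper name; the field names follow the
        message layout above:</t>

        <sourcecode type="python"><![CDATA[
def extract_fork(message):
    """Return the embedded fork token, or None if this message
    is not a fork delivery."""
    if message.get("poll_type") != "TASK":
        return None
    meta = message.get("metadata", {})
    if meta.get("upip_fork") is not True:
        return None  # "upip_fork" MUST be true for fork deliveries
    # "fork_data" carries the complete fork token, so the
    # receiver does not need the .fork.json file.
    return meta.get("fork_data")
        ]]></sourcecode>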

        <t>After processing a fork token, the receiving actor
        SHOULD send an ACK message:</t>

        <sourcecode type="json"><![CDATA[
{
  "from_agent": "<resuming agent>",
  "to_agent": "<original agent>",
  "content": "FORK RESUMED_OK -- <fork_id>",
  "poll_type": "ACK",
  "metadata": {
    "upip_fork": true,
    "fork_id": "<fork_id>",
    "fork_status": "RESUMED_OK",
    "resume_hash": "upip:sha256:<hex>",
    "resumed_by": "<agent identity>"
  }
}
        ]]></sourcecode>

        <t>The resume_hash is the stack_hash of the new UPIP stack
        created during resume.</t>

        <t>The fork_status field MUST be one of "RESUMED_OK" or
        "RESUMED_FAIL".</t>
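
        <t>Constructing a well-formed ACK can be sketched as
        follows. This is a non-normative illustration with
        hypothetical function and parameter names; the message
        fields follow the layout above:</t>

        <sourcecode type="python"><![CDATA[
VALID_STATUS = {"RESUMED_OK", "RESUMED_FAIL"}

def build_ack(fork_id, resume_hash, resumed_by, original_agent,
              status="RESUMED_OK"):
    """Build the ACK a resuming actor SHOULD send after
    processing a fork token."""
    if status not in VALID_STATUS:
        raise ValueError(
            "fork_status MUST be RESUMED_OK or RESUMED_FAIL")
    return {
        "from_agent": resumed_by,
        "to_agent": original_agent,
        "content": f"FORK {status} -- {fork_id}",
        "poll_type": "ACK",
        "metadata": {
            "upip_fork": True,
            "fork_id": fork_id,
            "fork_status": status,
            # resume_hash is the stack_hash of the new UPIP
            # stack created during resume
            "resume_hash": resume_hash,
            "resumed_by": resumed_by,
        },
    }
        ]]></sourcecode>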
      </section>

      <!-- Section 8.3: Alternative Transports -->
      <section anchor="alternative-transports">
        <name>Alternative Transports</name>

        <t>UPIP stacks and fork tokens MAY be transported via:</t>

        <ul>
          <li>File transfer (USB, network share, S3)</li>
          <li>HTTP POST/PUT</li>
          <li>Message queues (Kafka, AMQP, NATS)</li>
          <li>gRPC streams</li>
          <li>Email attachment</li>
        </ul>

        <t>The format and validation rules apply regardless of
        transport.</t>
      </section>
    </section>

    <!-- Section 9: Privacy Considerations -->
    <section anchor="privacy">
      <name>Privacy Considerations</name>

      <!-- Section 9.1: Sensitive Data in Layers -->
      <section anchor="sensitive-data">
        <name>Sensitive Data in Layers</name>

        <t>L3 PROCESS may contain sensitive command arguments.
        L4 RESULT may contain sensitive output.
        Implementations MUST support encryption at rest for stored
        UPIP stacks. Implementations SHOULD support per-layer
        encryption.</t>
      </section>

      <!-- Section 9.2: Memory Blob Protection -->
      <section anchor="memory-blob-protection">
        <name>Memory Blob Protection</name>

        <t>For ai_to_ai forks, the memory blob (.blob file) may
        contain the AI's full context window, which could include
        sensitive user data. Memory blobs MUST be encrypted at
        rest. Implementations SHOULD encrypt memory blobs in
        transit.</t>
      </section>
    </section>

    <!-- Section 10: Security Considerations -->
    <section anchor="security">
      <name>Security Considerations</name>

      <!-- Section 10.1: Hash Chain Integrity -->
      <section anchor="hash-chain-integrity">
        <name>Hash Chain Integrity</name>

        <t>UPIP uses SHA-256 for all hash computations.
        Implementations MUST use SHA-256 as defined in
        <xref target="FIPS180-4"/>. The hash prefix ("sha256:",
        "upip:", "fork:") provides algorithm agility for future
        migration. Future versions MAY support SHA-3 or other hash
        functions via an algorithm identifier prefix.</t>

        <t>The hash chain structure ensures that modifying any
        component at any layer propagates to the stack hash,
        providing tamper evidence for the entire bundle.</t>
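
        <t>The prefix scheme makes verification a simple dispatch
        on the algorithm identifier. The following non-normative
        sketch illustrates this; only SHA-256 is defined by this
        document, and the "sha3-256" entry is a hypothetical
        future identifier:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

DIGESTS = {
    "sha256": hashlib.sha256,
    "sha3-256": hashlib.sha3_256,  # hypothetical future id
}

def verify_prefixed_hash(prefixed, data):
    """Check data against a value such as
    'upip:sha256:<hex>' or 'sha256:<hex>'."""
    parts = prefixed.split(":")
    algorithm, hexdigest = parts[-2], parts[-1]
    if algorithm not in DIGESTS:
        raise ValueError("unknown hash algorithm: " + algorithm)
    return DIGESTS[algorithm](data).hexdigest() == hexdigest
        ]]></sourcecode>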
      </section>

      <!-- Section 10.2: Evidence vs Enforcement -->
      <section anchor="evidence-vs-enforcement">
        <name>Evidence vs. Enforcement</name>

        <t>UPIP is deliberately designed as an evidence protocol,
        not an enforcement protocol. Fork validation failures do
        not block execution; they are recorded as evidence. This
        design choice reflects the reality that:</t>

        <ul>
          <li>In adversarial environments, enforcement can be
          circumvented</li>
          <li>Evidence creates accountability that enforcement
          cannot</li>
          <li>Downstream consumers can make their own trust
          decisions based on the evidence chain</li>
        </ul>

        <t>Applications that require enforcement SHOULD implement
        additional policy layers on top of UPIP evidence. UPIP evidence
        chains are designed to satisfy audit and traceability
        requirements in regulatory frameworks such as the EU AI Act
        <xref target="EU-AI-ACT"/> and the NIST AI Risk Management
        Framework <xref target="NIST-AI-RMF"/>.</t>
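
        <t>Such a policy layer is out of scope for UPIP itself,
        but its shape can be sketched. The following fragment is a
        non-normative illustration of a hypothetical consumer that
        gates on the evidence record; the function name and record
        fields beyond those defined in this document are
        assumptions:</t>

        <sourcecode type="python"><![CDATA[
def policy_allows(verify_record, allow_degraded=False):
    """Local trust decision over a UPIP evidence record.
    This is application policy, not part of the protocol."""
    if verify_record.get("tamper_evidence"):
        return False
    classes = {m["class"]
               for m in verify_record.get("mismatches", [])}
    if "FATAL" in classes:
        return False
    if "DEGRADED" in classes and not allow_degraded:
        return False
    return True  # MINOR mismatches alone do not block
        ]]></sourcecode>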
      </section>

      <!-- Section 10.3: Memory Hash for AI Actors -->
      <section anchor="memory-hash-ai">
        <name>Memory Hash for AI Actors</name>

        <t>When fork_type is "ai_to_ai", the active_memory_hash
        represents the SHA-256 of the serialized AI context window.
        This raises unique considerations:</t>

        <ul>
          <li>Context serialization format is model-dependent</li>
          <li>The blob may contain sensitive information</li>
          <li>Exact reproduction of AI state is generally not
          possible</li>
        </ul>

        <t>The active_memory_hash is evidence of state at fork
        time, not a reproducibility guarantee. This is explicitly
        informational. Implementations MUST NOT treat memory hash
        verification as a pass/fail gate.</t>

        <t>Memory blobs MUST be encrypted at rest (see
        <xref target="memory-blob-protection"/>). Implementations
        MUST NOT require exact memory reproduction for fork
        validation.</t>
      </section>

      <!-- Section 10.4: Capability Verification -->
      <section anchor="capability-verification">
        <name>Capability Verification</name>

        <t>Capability requirements in fork tokens are self-reported
        by the forking actor. The receiving actor SHOULD
        independently verify capabilities rather than trusting the
        requirement specification alone.</t>

        <t>Package version verification SHOULD use installed
        package metadata. GPU availability SHOULD be verified via
        hardware detection, not configuration claims.</t>
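
        <t>For package requirements, independent verification can
        use installed package metadata, as the following
        non-normative sketch shows. The exact-version matching
        policy and the status strings are illustrative:</t>

        <sourcecode type="python"><![CDATA[
from importlib import metadata

def verify_packages(required):
    """Check {name: version} requirements against installed
    package metadata rather than trusting the fork token."""
    report = {}
    for name, wanted in required.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = {"status": "incomplete_deps",
                            "installed": None}
            continue
        if installed != wanted:
            report[name] = {"status": "degraded",
                            "installed": installed}
    return report  # empty report: all requirements verified
        ]]></sourcecode>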
      </section>

      <!-- Section 10.5: Replay Attacks -->
      <section anchor="replay-attacks">
        <name>Replay Attacks</name>

        <t>Fork tokens include fork_id and forked_at fields to
        mitigate replay attacks. Implementations SHOULD track
        consumed fork_ids and reject duplicate fork_ids within a
        configurable time window.</t>

        <t>The expires_at field provides time-based expiration.
        Agents SHOULD set expires_at for forks that are
        time-sensitive.</t>
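
        <t>A fork_id tracker with a sliding window can be sketched
        as follows. The window length and in-memory structure are
        illustrative local policy, not protocol requirements:</t>

        <sourcecode type="python"><![CDATA[
import time

class ForkReplayGuard:
    """Track consumed fork_ids and reject duplicates seen
    within a configurable time window."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.seen = {}  # fork_id -> first-seen timestamp

    def accept(self, fork_id, now=None):
        """Return False for a duplicate fork_id inside the
        window."""
        now = time.time() if now is None else now
        # drop entries that have aged out of the window
        self.seen = {f: t for f, t in self.seen.items()
                     if now - t < self.window}
        if fork_id in self.seen:
            return False
        self.seen[fork_id] = now
        return True
        ]]></sourcecode>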
      </section>

      <!-- Section 10.6: Stolen Fork Tokens -->
      <section anchor="stolen-fork-tokens">
        <name>Stolen Fork Tokens</name>

        <t>Attack: An adversary obtains a fork token intended for
        another actor.</t>

        <t>Impact: The adversary can execute the continuation,
        possibly with malicious modifications.</t>

        <t>Mitigation: Fork tokens with actor_to set to a specific
        actor restrict intended recipients. The fork hash includes
        actor_handoff, so changing the recipient invalidates the
        hash. For open forks (actor_to = "*"), the first valid
        resume creates an evidence chain that subsequent attempts
        can be compared against.</t>

        <t>Deployment: Use specific actor_to values for sensitive
        processes. Set short expires_at for time-sensitive forks.
        Monitor for duplicate fork_id resume attempts.</t>
      </section>

      <!-- Section 10.7: Unauthorized Resume -->
      <section anchor="unauthorized-resume">
        <name>Unauthorized Resume</name>

        <t>Attack: An actor resumes a fork they are not authorized
        for.</t>

        <t>Impact: Process continues with unauthorized actor.</t>

        <t>Mitigation: The resume creates a TIBET token identifying
        the resuming actor. The actor_to check is evidence, not
        enforcement. The evidence chain records who actually
        resumed.</t>

        <t>Deployment: Implementations SHOULD alert when the
        resuming actor differs from actor_to.</t>
      </section>

      <!-- Section 10.8: Partial Capability Spoofing -->
      <section anchor="partial-capability-spoofing">
        <name>Partial Capability Spoofing</name>

        <t>Attack: An actor claims to meet capability requirements
        (e.g., claims GPU when none exists).</t>

        <t>Impact: Process executes in degraded environment,
        producing potentially unreliable results.</t>

        <t>Mitigation: Capability verification SHOULD use hardware
        detection, not configuration claims. The L5 VERIFY record
        captures actual environment details. Mismatches between
        claimed and detected capabilities are recorded as
        evidence.</t>

        <t>Deployment: Use hardware detection APIs (e.g., CUDA
        device query for GPU). Do not trust self-reported
        capabilities.</t>
      </section>
    </section>

    <!-- Section 11: Integration with Companion Protocols -->
    <section anchor="integration">
      <name>Integration with Companion Protocols</name>

      <!-- Section 11.1: TIBET Integration -->
      <section anchor="tibet-integration">
        <name>TIBET Integration</name>

        <t>Each UPIP operation MAY produce TIBET
        <xref target="TIBET"/> tokens:</t>

        <ul>
          <li>Capture: token recording what was captured and
          why</li>
          <li>Fork: token recording the handoff with ERACHTER</li>
          <li>Resume: token recording who resumed and the
          validation result</li>
        </ul>

        <t>The L3 PROCESS "intent" field maps to TIBET ERACHTER.
        Fork tokens reference TIBET chains in their
        provenance.</t>
      </section>

      <!-- Section 11.2: JIS Integration -->
      <section anchor="jis-integration">
        <name>JIS Integration</name>

        <t>Actor identifiers in UPIP use JIS format (Section 3.4
        of <xref target="JIS"/>). Fork token actor_from and
        actor_to use JIS identifiers, enabling signature
        verification through JIS key resolution.</t>
      </section>

      <!-- Section 11.3: AINS Integration -->
      <section anchor="ains-integration">
        <name>AINS Integration</name>

        <t>AINS <xref target="AINS"/> provides discovery of actors
        for fork delivery. An actor_to value can be resolved
        through AINS to determine the delivery endpoint.</t>
      </section>
    </section>

    <!-- Section 12: IANA Considerations -->
    <section anchor="iana">
      <name>IANA Considerations</name>

      <section anchor="media-types">
        <name>Media Type Registrations</name>

        <t>This document requests registration of:</t>

        <t>Type: application/upip+json
        (UPIP stack bundles)</t>

        <t>Type: application/upip-fork+json
        (UPIP fork tokens)</t>

        <t>Note: The -00 version requested X-UPIP-* HTTP header
        registration. This is withdrawn as not justified at this
        stage.</t>
      </section>
    </section>
  </middle>

  <back>
    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>

        <reference anchor="RFC2119"
                   target="https://www.rfc-editor.org/info/rfc2119">
          <front>
            <title>Key words for use in RFCs to Indicate
                   Requirement Levels</title>
            <author fullname="S. Bradner" initials="S."
                    surname="Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>

        <reference anchor="RFC8174"
                   target="https://www.rfc-editor.org/info/rfc8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in
                   RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B."
                    surname="Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>

        <reference anchor="RFC8259"
                   target="https://www.rfc-editor.org/info/rfc8259">
          <front>
            <title>The JavaScript Object Notation (JSON) Data
                   Interchange Format</title>
            <author fullname="T. Bray" initials="T."
                    surname="Bray" role="editor"/>
            <date month="December" year="2017"/>
          </front>
          <seriesInfo name="STD" value="90"/>
          <seriesInfo name="RFC" value="8259"/>
          <seriesInfo name="DOI" value="10.17487/RFC8259"/>
        </reference>

        <reference anchor="FIPS180-4">
          <front>
            <title>Secure Hash Standard (SHS)</title>
            <author>
              <organization>National Institute of Standards and
              Technology (NIST)</organization>
            </author>
            <date month="August" year="2015"/>
          </front>
          <seriesInfo name="FIPS PUB" value="180-4"/>
        </reference>
      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="TIBET">
          <front>
            <title>TIBET: Transaction/Interaction-Based Evidence
                   Trail</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="March" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-tibet-provenance-01"/>
        </reference>

        <reference anchor="JIS">
          <front>
            <title>JIS: JTel Identity Standard</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="March" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-jis-identity-01"/>
        </reference>

        <reference anchor="RVP">
          <front>
            <title>RVP: Real-time Verification Protocol</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="March" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-rvp-continuous-verification-01"/>
        </reference>

        <reference anchor="AINS">
          <front>
            <title>AINS: AInternet Name Service</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="March" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-ains-discovery-01"/>
        </reference>

        <reference anchor="EU-AI-ACT">
          <front>
            <title>Regulation (EU) 2024/1689 laying down harmonised
                   rules on artificial intelligence (Artificial
                   Intelligence Act)</title>
            <author>
              <organization>European Parliament and Council of
              the European Union</organization>
            </author>
            <date month="June" year="2024"/>
          </front>
        </reference>

        <reference anchor="NIST-AI-RMF">
          <front>
            <title>Artificial Intelligence Risk Management Framework
                   (AI RMF 1.0)</title>
            <author>
              <organization>National Institute of Standards and
              Technology (NIST)</organization>
            </author>
            <date month="January" year="2023"/>
          </front>
        </reference>
      </references>
    </references>

    <!-- Appendix A: UPIP Stack JSON Schema -->
    <section anchor="appendix-a">
      <name>UPIP Stack JSON Schema</name>

      <sourcecode type="json"><![CDATA[
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["protocol", "version", "stack_hash",
                "state", "deps", "process", "result"],
  "properties": {
    "protocol": {"const": "UPIP"},
    "version": {"type": "string"},
    "title": {"type": "string"},
    "created_by": {"type": "string"},
    "created_at": {"type": "string", "format": "date-time"},
    "stack_hash": {
      "type": "string",
      "pattern": "^upip:sha256:[a-f0-9]{64}$"
    },
    "state": {
      "type": "object",
      "required": ["state_type", "state_hash"],
      "properties": {
        "state_type": {
          "enum": ["git", "files", "image", "empty"]
        },
        "state_hash": {"type": "string"}
      }
    },
    "deps": {
      "type": "object",
      "required": ["deps_hash"],
      "properties": {
        "python_version": {"type": "string"},
        "packages": {"type": "object"},
        "deps_hash": {"type": "string"}
      }
    },
    "process": {
      "type": "object",
      "required": ["command", "intent", "actor"],
      "properties": {
        "command": {"type": "array", "items": {"type": "string"}},
        "intent": {"type": "string"},
        "actor": {"type": "string"}
      }
    },
    "result": {
      "type": "object",
      "required": ["success", "exit_code", "result_hash"],
      "properties": {
        "success": {"type": "boolean"},
        "exit_code": {"type": "integer"},
        "result_hash": {"type": "string"}
      }
    },
    "fork_chain": {
      "type": "array",
      "items": {"type": "object"}
    }
  }
}
      ]]></sourcecode>
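
      <t>A minimal structural check against this schema can be
      sketched without a full JSON Schema validator. The following
      non-normative fragment checks only the required fields and
      the stack_hash pattern; the function name is
      illustrative:</t>

      <sourcecode type="python"><![CDATA[
import re

REQUIRED = ["protocol", "version", "stack_hash",
            "state", "deps", "process", "result"]
STACK_HASH = re.compile(r"^upip:sha256:[a-f0-9]{64}$")

def check_stack(stack):
    """Return a list of structural problems (empty if none)."""
    problems = ["missing: " + k for k in REQUIRED
                if k not in stack]
    if stack.get("protocol") != "UPIP":
        problems.append("protocol must be the constant 'UPIP'")
    if "stack_hash" in stack and \
            not STACK_HASH.match(stack["stack_hash"]):
        problems.append("stack_hash does not match pattern")
    return problems
      ]]></sourcecode>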
    </section>

    <!-- Appendix B: Fork Token JSON Schema -->
    <section anchor="appendix-b">
      <name>Fork Token JSON Schema</name>

      <sourcecode type="json"><![CDATA[
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["fork_id", "fork_type", "fork_hash",
                "active_memory_hash", "forked_at"],
  "properties": {
    "fork_id": {"type": "string", "pattern": "^fork-"},
    "parent_hash": {"type": "string"},
    "parent_stack_hash": {
      "type": "string",
      "pattern": "^upip:sha256:"
    },
    "continuation_point": {"type": "string"},
    "intent_snapshot": {"type": "string"},
    "active_memory_hash": {
      "type": "string",
      "pattern": "^sha256:"
    },
    "memory_ref": {"type": "string"},
    "fork_type": {
      "enum": ["script", "ai_to_ai", "human_to_ai", "fragment"]
    },
    "actor_from": {"type": "string"},
    "actor_to": {"type": "string"},
    "actor_handoff": {"type": "string"},
    "capability_required": {"type": "object"},
    "forked_at": {"type": "string", "format": "date-time"},
    "expires_at": {"type": "string"},
    "fork_hash": {
      "type": "string",
      "pattern": "^fork:sha256:[a-f0-9]{64}$"
    },
    "partial_layers": {"type": "object"},
    "metadata": {"type": "object"}
  }
}
      ]]></sourcecode>
    </section>

    <!-- Appendix C: Use Case Examples -->
    <section anchor="appendix-c">
      <name>Use Case Examples</name>

      <!-- C.1: Multi-Agent AI Task Delegation -->
      <section anchor="use-case-multi-agent">
        <name>Multi-Agent AI Task Delegation</name>

        <t>An AI orchestrator (Agent A) analyzes a dataset, creates
        a UPIP bundle, forks it to a specialist AI (Agent B) for
        deep analysis, and receives the result with cryptographic
        proof.</t>

        <sourcecode type="pseudocode"><![CDATA[
Agent A:
  capture_and_run(["python", "scan.py"], intent="Initial scan")
  fork_upip(actor_from="A", actor_to="B", intent="Deep analysis")
  deliver_fork(fork, to_agent="B")

Agent B:
  pull_forks()
  resume_upip(fork, command=["python", "deep_analyze.py"])
  ack_fork(fork, resume_hash=stack.hash, success=True)
        ]]></sourcecode>

        <t>Result: Both agents have UPIP stacks linked by
        fork_chain. Any auditor can verify the complete chain.</t>
      </section>

      <!-- C.2: Drone Swarm Coordination -->
      <section anchor="use-case-drone-swarm">
        <name>Drone Swarm Coordination</name>

        <t>A command station dispatches N reconnaissance tasks to N
        drones. Each drone receives a fragment fork token, executes
        its assigned sector scan, and returns the result.</t>

        <sourcecode type="pseudocode"><![CDATA[
Command Station:
  base_stack = capture_and_run(["mission_plan.py"])
  for i in range(N):
    fork = fork_upip(base_stack,
                     actor_from="command",
                     actor_to=f"drone-{i}",
                     fork_type="fragment",
                     metadata={"sector": sectors[i]})
    deliver_fork(fork, to_agent=f"drone-{i}")

Each Drone:
  fork_msg = pull_forks()
  stack = resume_upip(fork, command=["scan_sector.py"])
  ack_fork(fork, resume_hash=stack.hash)

Command Station:
  # Verify all N results, reconstruct combined map
  for ack in collect_acks():
    verify(ack.resume_hash)
        ]]></sourcecode>
      </section>

      <!-- C.3: Scientific Experiment Reproduction -->
      <section anchor="use-case-scientific">
        <name>Scientific Experiment Reproduction</name>

        <t>Lab A publishes an experiment as a UPIP bundle. Lab B
        reproduces it independently, obtaining cryptographic proof
        that the results match (or that they diverge).</t>

        <sourcecode type="pseudocode"><![CDATA[
Lab A:
  stack = capture_and_run(
    ["python", "train_model.py"],
    source_dir="./experiment",
    intent="Train model v3 on dataset-2026Q1"
  )
  save_upip(stack, "experiment-2026Q1.upip.json")
  # Publish to journal / data repository

Lab B:
  stack = load_upip("experiment-2026Q1.upip.json")
  verify = reproduce_upip(stack)
  # verify.match == True: exact reproduction
  # verify.match == False: divergence (investigate)
        ]]></sourcecode>
      </section>
    </section>

    <!-- Appendix D: Changes from -00 -->
    <section anchor="appendix-d">
      <name>Changes from -00</name>

      <ol>
        <li>Added <xref target="RFC8174"/> alongside
        <xref target="RFC2119"/>.</li>

        <li>Changed intended status from Standards Track to
        Informational.</li>

        <li>Added version field "1.1" to UPIP stack.</li>

        <li>Added canonical serialization section
        (<xref target="canonical-serialization"/>), consistent
        with TIBET <xref target="TIBET"/> Section 5.1.</li>

        <li>Added failure behavior specification
        (<xref target="failure-behavior"/>): what happens on hash
        mismatch, capability mismatch, and expiration. Each
        failure type classified as FATAL, DEGRADED, or MINOR.</li>

        <li>Added Security Considerations for stolen fork tokens
        (<xref target="stolen-fork-tokens"/>), unauthorized resume
        (<xref target="unauthorized-resume"/>), and partial
        capability spoofing
        (<xref target="partial-capability-spoofing"/>).</li>

        <li>Added Privacy Considerations section
        (<xref target="privacy"/>).</li>

        <li>Clarified active_memory_hash as evidence-only, not
        reproducibility guarantee (emphasized in
        <xref target="active-memory-hash"/> and
        <xref target="memory-hash-ai"/>).</li>

        <li>Made I-Poll transport explicitly optional
        (<xref target="ipoll-delivery"/>). UPIP does not depend
        on I-Poll.</li>

        <li>Removed X-UPIP-* HTTP header registration from
        IANA.</li>

        <li>Normalized companion protocol references to
        <xref target="TIBET"/>, <xref target="JIS"/>,
        <xref target="RVP"/>, <xref target="AINS"/>.</li>

        <li>Actor identifiers now use JIS format
        throughout.</li>

        <li>Added Integration section
        (<xref target="integration"/>) describing specific
        touchpoints with TIBET, JIS, and AINS.</li>
      </ol>
    </section>

    <!-- Acknowledgements -->
    <section anchor="acknowledgements" numbered="false">
      <name>Acknowledgements</name>
      <t>The UPIP protocol was developed as part of HumoticaOS,
      an AI governance framework built on human-AI symbiosis.
      UPIP builds on concepts from the TIBET evidence trail
      protocol and extends them into the domain of process
      integrity and multi-actor continuation.</t>
      <t>The Fork Token mechanism was inspired by the need for
      cryptographic chain of custody in multi-agent AI systems,
      where processes move between heterogeneous actors across
      trust boundaries.</t>
      <t>The authors thank Codex (codex.aint) for the suite-wide
      cleanup analysis that informed this revision.</t>
    </section>
  </back>
</rfc>
