| Internet-Draft | UPIP | March 2026 |
| van de Meent & AI | Expires 30 September 2026 | [Page] |
This document defines UPIP (Universal Process Integrity Protocol), a five-layer protocol for capturing, verifying, and reproducing computational processes across machines, actors, and trust domains. UPIP defines a cryptographic hash chain over five layers: STATE (input), DEPS (dependencies), PROCESS (execution), RESULT (output), and VERIFY (cross-machine proof). The stack hash chains these layers, ensuring that modification of any component is detectable.¶
This document also defines Fork Tokens, a continuation mechanism for multi-actor process handoff. Fork tokens freeze the UPIP stack at a specific point and transfer it to another actor with cryptographic chain of custody. The receiving actor can verify what was handed off, validate capabilities, and continue the process with full provenance.¶
UPIP integrates with TIBET [TIBET] for provenance tokens, JIS [JIS] for actor identity, and is transport-agnostic with JSON as the baseline serialization.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 30 September 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Distributed computing increasingly involves heterogeneous actors: human operators, AI agents, automated pipelines, edge devices, and cloud services. When a process moves between actors -- from one machine to another, from an AI to a human for review, from a drone to a command station -- the integrity of the process state must be verifiable at every handoff point.¶
UPIP fills this gap with two complementary mechanisms:¶
Existing solutions address parts of process integrity:¶
None provide a unified, self-verifying bundle that captures the complete execution context with cryptographic chain of custody across actor boundaries.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
UPIP operates in two modes:¶
Single-Actor Mode (Capture-Run-Verify):¶
Multi-Actor Mode (Fork-Resume):¶
+----------+     +---------+     +---------+     +---------+
| L1 STATE |---->| L2 DEPS |---->| L3 PROC |---->| L4 RSLT |
+----------+     +---------+     +---------+     +---------+
     |                |               |               |
     v                v               v               v
 state_hash       deps_hash        (intent)      result_hash
     |                |               |               |
     +----------------+-------+-------+---------------+
                              |
                              v
         stack_hash = SHA-256(L1 || L2 || L3 || L4)
                              |
                              v
                        +------------+
                        | Fork Token |---> Actor B ---> New UPIP Stack
                        +------------+
                              |
                              v
         fork_chain: [{fork_id, parent_hash, ...}]
A UPIP stack MUST be a [RFC8259] JSON object with the following top-level fields:¶
{
"protocol": "UPIP",
"version": "1.1",
"title": "<human-readable description>",
"created_by": "<actor identity (JIS format)>",
"created_at": "<ISO-8601 timestamp>",
"stack_hash": "upip:sha256:<hex>",
"state": { },
"deps": { },
"process": { },
"result": { },
"verify": [ ],
"fork_chain": [ ],
"source_files": { }
}
¶
L1 captures the complete input state before execution. The state_type field determines the capture method:¶
{
"state_type": "git | files | image | empty",
"state_hash": "<type>:<hash>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
State Types:¶
For "git" type, additional fields:¶
For "files" type, additional fields:¶
L2 captures the exact dependency set at execution time.¶
{
"python_version": "<major.minor.patch>",
"packages": { "<name>": "<version>" },
"system_packages": [ "<name>=<version>" ],
"deps_hash": "deps:sha256:<hex>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
The deps_hash MUST be computed as SHA-256 of the sorted, deterministic serialization of all package name:version pairs.¶
While this specification uses Python as the reference implementation, L2 is language-agnostic. Other implementations MAY substitute appropriate dependency metadata for their runtime environment (e.g., Cargo.lock for Rust, go.sum for Go, package-lock.json for Node.js).¶
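As a non-normative illustration, the deps_hash rule above can be sketched as follows. Only the "sorted, deterministic" requirement comes from this specification; the pair separator, line joiner, and UTF-8 encoding chosen here are implementation assumptions:¶

```python
import hashlib

def deps_hash(packages: dict) -> str:
    """Sketch of the L2 deps_hash: SHA-256 over sorted name:version pairs.

    ASSUMPTION: pairs are serialized as "name:version" lines, sorted by
    package name, joined by "\n", and encoded as UTF-8. Any equally
    deterministic serialization would satisfy the normative rule.
    """
    lines = [f"{name}:{version}" for name, version in sorted(packages.items())]
    digest = hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
    return f"deps:sha256:{digest}"
```

Because the pairs are sorted before hashing, two captures of the same environment produce the same hash regardless of package enumeration order.¶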
L3 defines what was executed and why. The "intent" field maps to TIBET ERACHTER [TIBET] and the "actor" field uses JIS identifier format [JIS].¶
{
"command": [ "<arg0>", "<arg1>" ],
"intent": "<human-readable purpose>",
"actor": "<actor identity (JIS format)>",
"env_vars": { "<key>": "<value>" },
"working_dir": "<path>"
}
¶
The command field MUST be an array of strings, not a shell command string. This prevents injection attacks and ensures deterministic execution.¶
The intent field MUST be a human-readable string describing WHY this process is being run. This serves as the ERACHTER (intent) component for TIBET integration.¶
The actor field MUST identify the entity that initiated the process using JIS identifier format. This may be a human operator, AI agent (IDD), or system service.¶
L4 captures the execution result.¶
{
"success": true,
"exit_code": 0,
"stdout": "<captured stdout>",
"stderr": "<captured stderr>",
"result_hash": "sha256:<hex>",
"files_changed": 3,
"diff": "<unified diff of file changes>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
The result_hash MUST be computed as SHA-256 of the concatenation of: exit_code (as string) + stdout + stderr.¶
If execution occurs in an airlock, the diff field SHOULD contain the unified diff of all file changes detected.¶
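The result_hash rule above is direct to implement; this non-normative sketch assumes UTF-8 encoding of the concatenated material, which the specification does not name explicitly:¶

```python
import hashlib

def result_hash(exit_code: int, stdout: str, stderr: str) -> str:
    """L4 result_hash: SHA-256 over exit_code (as string) + stdout +
    stderr, per the rule above. ASSUMPTION: UTF-8 encoding."""
    material = str(exit_code) + stdout + stderr
    return "sha256:" + hashlib.sha256(material.encode("utf-8")).hexdigest()
```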
L5 records verification attempts when the UPIP stack is reproduced on another machine.¶
{
"machine": "<hostname or identifier>",
"verified_at": "<ISO-8601 timestamp>",
"match": true,
"environment": { "os": "linux", "arch": "x86_64" },
"original_hash": "upip:sha256:<hex>",
"reproduced_hash": "upip:sha256:<hex>"
}
¶
The match field MUST be true only if reproduced_hash equals original_hash.¶
L5 is an array, allowing multiple verification records from different machines. Each verification is independent.¶
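A verification record can be assembled as in this non-normative sketch; the environment probing via the platform module is illustrative, and `match` follows the MUST above:¶

```python
import platform
from datetime import datetime, timezone

def make_verify_record(original_hash: str, reproduced_hash: str) -> dict:
    """Build one L5 VERIFY record. `match` is true only when the
    reproduced stack hash equals the original, per the rule above."""
    return {
        "machine": platform.node(),
        "verified_at": datetime.now(timezone.utc).isoformat(),
        "match": reproduced_hash == original_hash,
        "environment": {
            "os": platform.system().lower(),   # e.g. "linux"
            "arch": platform.machine(),        # e.g. "x86_64"
        },
        "original_hash": original_hash,
        "reproduced_hash": reproduced_hash,
    }
```

Appending one such record per reproducing machine yields the independent-verification array that L5 calls for.¶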
The stack hash MUST be computed as follows:¶
Result: "upip:sha256:4f2e8a..."¶
The canonical_json() function is defined in Section 4.7.¶
Before hashing, JSON objects MUST be serialized to canonical form:¶
This ensures deterministic hashing across implementations. The same canonical serialization is used in TIBET [TIBET] Section 5.1.¶
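The two rules can be sketched together non-normatively. Where the canonicalization details are not restated here, this sketch assumes the common convention of sorted keys, no insignificant whitespace, and UTF-8 encoding; the stack hash follows the SHA-256(L1 || L2 || L3 || L4) chain shown in the architecture figure:¶

```python
import hashlib
import json

def canonical_json(obj) -> bytes:
    """Canonical serialization sketch: sorted keys, compact separators,
    UTF-8. ASSUMPTION: these are the Section 4.7 rules; any implementation
    must match them exactly for hashes to be interoperable."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def stack_hash(state: dict, deps: dict, process: dict, result: dict) -> str:
    """stack_hash = SHA-256(L1 || L2 || L3 || L4): concatenate the
    canonical serializations of the four layers, in order, then hash."""
    material = b"".join(canonical_json(layer)
                        for layer in (state, deps, process, result))
    return "upip:sha256:" + hashlib.sha256(material).hexdigest()
```

With a fixed canonical form, any two conforming implementations compute the same stack_hash for the same four layers.¶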
A fork token MUST be a [RFC8259] JSON object with the following fields. Actor fields use JIS identifier format [JIS]:¶
{
"fork_id": "fork-<uuid>",
"parent_hash": "sha256:<hex>",
"parent_stack_hash": "upip:sha256:<hex>",
"continuation_point": "L<n>:<position>",
"intent_snapshot": "<human-readable purpose>",
"active_memory_hash": "sha256:<hex>",
"memory_ref": "<path or URL to memory blob>",
"fork_type": "script|ai_to_ai|human_to_ai|fragment",
"actor_from": "<JIS actor identifier>",
"actor_to": "<JIS actor identifier or *>",
"actor_handoff": "<from> -> <to>",
"capability_required": { },
"forked_at": "<ISO-8601 timestamp>",
"expires_at": "<ISO-8601 timestamp or empty>",
"fork_hash": "fork:sha256:<hex>",
"partial_layers": { },
"metadata": { }
}
¶
The actor_to field MAY be set to the wildcard "*", indicating the fork is available to any capable actor. In this case, actor_handoff MUST use "*" as the target: "ActorA -> *".¶
The fork hash MUST be computed as follows:¶
Result: "fork:sha256:7d3f..."¶
This ensures that modifying ANY field invalidates the fork.¶
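The any-field-invalidates property can be sketched non-normatively by hashing every fork-token field except fork_hash itself. The exact field set and serialization covered by the normative computation are not restated here, so both are assumptions in this sketch:¶

```python
import hashlib
import json

def fork_hash(token: dict) -> str:
    """Fork-hash sketch: SHA-256 over a canonical serialization of all
    fork-token fields except fork_hash itself (ASSUMPTION: field set and
    sorted-key/compact serialization). The invariant demonstrated is the
    normative one: changing ANY covered field changes the hash."""
    material = {k: v for k, v in token.items() if k != "fork_hash"}
    blob = json.dumps(material, sort_keys=True,
                      separators=(",", ":")).encode("utf-8")
    return "fork:sha256:" + hashlib.sha256(blob).hexdigest()
```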
The active_memory_hash captures cognitive or computational state at fork time.¶
This field is EVIDENCE, not a reproducibility guarantee. Exact reproduction of AI state is generally not achievable. The hash proves what the state WAS at fork time, enabling audit and comparison.¶
Implementations MUST NOT require exact memory reproduction for fork validation.¶
The capability_required field specifies what the resuming actor needs:¶
{
"capability_required": {
"deps": ["package>=version"],
"gpu": true,
"min_memory_gb": 16,
"platform": "linux/amd64",
"custom": { }
}
}
¶
On resume, the receiving actor SHOULD verify these requirements and record the result in the verification record. Missing capabilities MUST NOT prevent execution but MUST be recorded as evidence.¶
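An evidence-only capability check might look like this non-normative sketch. The helper compares requirements against independently detected facts and records mismatches without blocking; the simple equality comparison is a deliberate simplification (real implementations would parse "package>=version" constraints):¶

```python
def check_capabilities(required: dict, detected: dict) -> dict:
    """Compare capability_required against independently probed facts.
    Mismatches are recorded as evidence, never enforced, per the rule
    above. ASSUMPTION: plain equality; version-range constraints such
    as "package>=version" would need a real comparator."""
    mismatches = []
    for key, want in required.items():
        have = detected.get(key)
        if have != want:
            mismatches.append({"capability": key,
                               "required": want,
                               "detected": have})
    return {"capabilities_ok": not mismatches,
            "mismatches": mismatches}
```

The returned object can be embedded directly in the L5 VERIFY record so that degraded-environment executions remain auditable.¶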
The fork_chain field in the UPIP stack is an ordered array of fork token references:¶
{
"fork_chain": [
{
"fork_id": "fork-abc123",
"fork_hash": "fork:sha256:...",
"actor_handoff": "A -> B",
"forked_at": "2026-03-29T14:00:00Z"
}
]
}
¶
When a process is resumed, the new UPIP stack MUST include the fork token in its fork_chain. This creates a complete audit trail of all handoffs.¶
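Extending the audit trail on resume is a small operation, sketched non-normatively below (the helper name is illustrative):¶

```python
def record_fork(new_stack: dict, fork_token: dict) -> None:
    """On resume, append a reference to the consumed fork token to the
    new stack's fork_chain, preserving the chain of custody."""
    new_stack.setdefault("fork_chain", []).append({
        "fork_id": fork_token["fork_id"],
        "fork_hash": fork_token["fork_hash"],
        "actor_handoff": fork_token["actor_handoff"],
        "forked_at": fork_token["forked_at"],
    })
```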
Input: command, source_dir, intent, actor¶
Output: UPIP stack with L1-L4 populated¶
Input: Fork Token (.fork.json), command, actor¶
Output: New UPIP stack, verification record¶
Input: UPIP stack, N fragments, actor list¶
Output: N Fork Tokens of type "fragment"¶
Fragment tokens MUST include metadata fields:¶
A UPIP stack is valid if and only if:¶
Validation MUST be performed when loading a .upip.json file and SHOULD be performed before reproduction.¶
When resuming a fork token, the following checks MUST be performed:¶
All four checks MUST be recorded in the L5 VERIFY record.¶
This section specifies what happens when validation fails.¶
Hash mismatch (stack_hash or fork_hash):¶
Capability mismatch:¶
Each mismatch is classified:¶
FATAL: Execution cannot proceed (e.g., wrong OS)
DEGRADED: Execution possible but results may differ
MINOR: Cosmetic difference (e.g., locale)¶
FATAL mismatches SHOULD trigger a warning to the operator, but the protocol itself MUST NOT enforce them; the operator or application decides.¶
Expiration:¶
If fork_hash validation fails:¶
{
"fork_hash_match": false,
"expected_hash": "fork:sha256:<original>",
"computed_hash": "fork:sha256:<recomputed>",
"tamper_evidence": true,
"fields_checked": ["fork_id", "parent_hash", "..."]
}
¶
This creates an evidence record that tampering occurred. The decision to act on tamper evidence is a local policy decision.¶
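Producing that evidence record can be sketched non-normatively; the `compute_hash` parameter stands in for the implementation's fork-hash function, and per the evidence model the helper records rather than blocks:¶

```python
def check_fork_hash(token: dict, compute_hash) -> dict:
    """Recompute the fork hash and emit a tamper-evidence record in the
    shape shown above. `compute_hash` is the implementation's fork-hash
    function (illustrative parameter). No exception is raised: acting
    on tamper evidence is a local policy decision."""
    expected = token.get("fork_hash", "")
    computed = compute_hash(token)
    match = computed == expected
    record = {"fork_hash_match": match}
    if not match:
        record.update({
            "expected_hash": expected,
            "computed_hash": computed,
            "tamper_evidence": True,
            "fields_checked": [k for k in token if k != "fork_hash"],
        })
    return record
```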
UPIP stacks use the ".upip.json" extension. Fork tokens use the ".fork.json" extension.¶
Content-Type for HTTP: application/upip+json (stacks), application/upip-fork+json (fork tokens).¶
Fork tokens MAY be delivered via I-Poll TASK messages. I-Poll is OPTIONAL; UPIP does not depend on I-Poll.¶
When I-Poll is used, fork tokens are delivered as TASK messages:¶
{
"from_agent": "<source agent>",
"to_agent": "<target agent>",
"content": "<human-readable fork summary>",
"poll_type": "TASK",
"metadata": {
"upip_fork": true,
"fork_id": "<fork_id>",
"fork_hash": "fork:sha256:<hex>",
"fork_type": "<type>",
"continuation_point": "<point>",
"actor_handoff": "<from> -> <to>",
"fork_data": { }
}
}
¶
The "upip_fork" metadata flag MUST be true to identify this message as a fork delivery.¶
The "fork_data" field MUST contain the complete fork token as defined in Section 5.1. This allows the receiving agent to reconstruct the fork token without needing the .fork.json file.¶
After processing a fork token, the receiving actor SHOULD send an ACK message:¶
{
"from_agent": "<resuming agent>",
"to_agent": "<original agent>",
"content": "FORK RESUMED_OK -- <fork_id>",
"poll_type": "ACK",
"metadata": {
"upip_fork": true,
"fork_id": "<fork_id>",
"fork_status": "RESUMED_OK",
"resume_hash": "upip:sha256:<hex>",
"resumed_by": "<agent identity>"
}
}
¶
The resume_hash is the stack_hash of the new UPIP stack created during resume.¶
The fork_status field MUST be one of "RESUMED_OK" or "RESUMED_FAIL".¶
L3 PROCESS may contain sensitive command arguments. L4 RESULT may contain sensitive output. Implementations MUST support encryption at rest for stored UPIP stacks. Implementations SHOULD support per-layer encryption.¶
For ai_to_ai forks, the memory blob (.blob file) may contain the AI's full context window, which could include sensitive user data. Memory blobs MUST be encrypted at rest. Implementations SHOULD encrypt memory blobs in transit.¶
UPIP uses SHA-256 for all hash computations. Implementations MUST use SHA-256 as defined in [FIPS180-4]. The hash prefix ("sha256:", "upip:", "fork:") provides algorithm agility for future migration. Future versions MAY support SHA-3 or other hash functions via an algorithm identifier prefix.¶
The hash chain structure ensures that modifying any component at any layer propagates to the stack hash, providing tamper evidence for the entire bundle.¶
UPIP is deliberately designed as an evidence protocol, not an enforcement protocol. Fork validation failures do not block execution; they are recorded as evidence. This design choice reflects the reality that:¶
Applications that require enforcement SHOULD implement additional policy layers on top of UPIP evidence. UPIP evidence chains are designed to satisfy audit and traceability requirements in regulatory frameworks such as the EU AI Act [EU-AI-ACT] and the NIST AI Risk Management Framework [NIST-AI-RMF].¶
When fork_type is "ai_to_ai", the active_memory_hash represents the SHA-256 of the serialized AI context window. This raises unique considerations:¶
The active_memory_hash is evidence of state at fork time, not a reproducibility guarantee. This is explicitly informational. Implementations MUST NOT treat memory hash verification as a pass/fail gate.¶
As noted above, memory blobs MUST be encrypted at rest, and implementations MUST NOT require exact memory reproduction for fork validation.¶
Capability requirements in fork tokens are self-reported by the forking actor. The receiving actor SHOULD independently verify capabilities rather than trusting the requirement specification alone.¶
Package version verification SHOULD use installed package metadata. GPU availability SHOULD be verified via hardware detection, not configuration claims.¶
Fork tokens include fork_id and forked_at fields to mitigate replay attacks. Implementations SHOULD track consumed fork_ids and reject duplicate fork_ids within a configurable time window.¶
The expires_at field provides time-based expiration. Agents SHOULD set expires_at for forks that are time-sensitive.¶
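The consumed-fork_id tracking recommended above can be sketched non-normatively as follows; the class name, in-memory storage, and default window length are deployment choices, not part of this specification:¶

```python
import time

class ForkReplayGuard:
    """Track consumed fork_ids within a configurable time window and
    flag duplicates as possible replays. ASSUMPTION: in-memory storage;
    a persistent store would survive restarts."""

    def __init__(self, window_seconds=3600.0):
        self.window = window_seconds
        self._seen = {}  # fork_id -> first-seen timestamp

    def check_and_record(self, fork_id, now=None):
        """Return True if this fork_id is fresh; False if it was already
        consumed inside the window (possible replay)."""
        now = time.time() if now is None else now
        # Drop entries that have aged out of the window.
        self._seen = {fid: t for fid, t in self._seen.items()
                      if now - t < self.window}
        if fork_id in self._seen:
            return False
        self._seen[fork_id] = now
        return True
```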
Attack: An adversary obtains a fork token intended for another actor.¶
Impact: The adversary can execute the continuation, possibly with malicious modifications.¶
Mitigation: Fork tokens with actor_to set to a specific actor restrict intended recipients. The fork hash includes actor_handoff, so changing the recipient invalidates the hash. For open forks (actor_to = "*"), the first valid resume creates an evidence chain that subsequent attempts can be compared against.¶
Deployment: Use specific actor_to values for sensitive processes. Set short expires_at for time-sensitive forks. Monitor for duplicate fork_id resume attempts.¶
Attack: An actor claims to meet capability requirements (e.g., claims GPU when none exists).¶
Impact: Process executes in degraded environment, producing potentially unreliable results.¶
Mitigation: Capability verification SHOULD use hardware detection, not configuration claims. The L5 VERIFY record captures actual environment details. Mismatches between claimed and detected capabilities are recorded as evidence.¶
Deployment: Use hardware detection APIs (e.g., CUDA device query for GPU). Do not trust self-reported capabilities.¶
Each UPIP operation MAY produce TIBET [TIBET] tokens:¶
The L3 PROCESS "intent" field maps to TIBET ERACHTER. Fork tokens reference TIBET chains in their provenance.¶
Actor identifiers in UPIP use JIS format (Section 3.4 of [JIS]). Fork token actor_from and actor_to use JIS identifiers, enabling signature verification through JIS key resolution.¶
AINS [AINS] provides discovery of actors for fork delivery. An actor_to value can be resolved through AINS to determine the delivery endpoint.¶
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["protocol", "version", "stack_hash",
"state", "deps", "process", "result"],
"properties": {
"protocol": {"const": "UPIP"},
"version": {"type": "string"},
"title": {"type": "string"},
"created_by": {"type": "string"},
"created_at": {"type": "string", "format": "date-time"},
"stack_hash": {
"type": "string",
"pattern": "^upip:sha256:[a-f0-9]{64}$"
},
"state": {
"type": "object",
"required": ["state_type", "state_hash"],
"properties": {
"state_type": {
"enum": ["git", "files", "image", "empty"]
},
"state_hash": {"type": "string"}
}
},
"deps": {
"type": "object",
"required": ["deps_hash"],
"properties": {
"python_version": {"type": "string"},
"packages": {"type": "object"},
"deps_hash": {"type": "string"}
}
},
"process": {
"type": "object",
"required": ["command", "intent", "actor"],
"properties": {
"command": {"type": "array", "items": {"type": "string"}},
"intent": {"type": "string"},
"actor": {"type": "string"}
}
},
"result": {
"type": "object",
"required": ["success", "exit_code", "result_hash"],
"properties": {
"success": {"type": "boolean"},
"exit_code": {"type": "integer"},
"result_hash": {"type": "string"}
}
},
"fork_chain": {
"type": "array",
"items": {"type": "object"}
}
}
}
¶
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["fork_id", "fork_type", "fork_hash",
"active_memory_hash", "forked_at"],
"properties": {
"fork_id": {"type": "string", "pattern": "^fork-"},
"parent_hash": {"type": "string"},
"parent_stack_hash": {
"type": "string",
"pattern": "^upip:sha256:"
},
"continuation_point": {"type": "string"},
"intent_snapshot": {"type": "string"},
"active_memory_hash": {
"type": "string",
"pattern": "^sha256:"
},
"memory_ref": {"type": "string"},
"fork_type": {
"enum": ["script", "ai_to_ai", "human_to_ai", "fragment"]
},
"actor_from": {"type": "string"},
"actor_to": {"type": "string"},
"actor_handoff": {"type": "string"},
"capability_required": {"type": "object"},
"forked_at": {"type": "string", "format": "date-time"},
"expires_at": {"type": "string"},
"fork_hash": {
"type": "string",
"pattern": "^fork:sha256:[a-f0-9]{64}$"
},
"partial_layers": {"type": "object"},
"metadata": {"type": "object"}
}
}
¶
An AI orchestrator (Agent A) analyzes a dataset, creates a UPIP bundle, forks it to a specialist AI (Agent B) for deep analysis, and receives the result with cryptographic proof.¶
Agent A:
  capture_and_run(["python", "scan.py"], intent="Initial scan")
  fork_upip(actor_from="A", actor_to="B", intent="Deep analysis")
  deliver_fork(fork, to_agent="B")
Agent B:
  pull_forks()
  resume_upip(fork, command=["python", "deep_analyze.py"])
  ack_fork(fork, resume_hash=stack.hash, success=True)
¶
Result: Both agents have UPIP stacks linked by fork_chain. Any auditor can verify the complete chain.¶
A command station dispatches N reconnaissance tasks to N drones. Each drone receives a fragment fork token, executes its assigned sector scan, and returns the result.¶
Command Station:
  base_stack = capture_and_run(["mission_plan.py"])
  for i in range(N):
      fork = fork_upip(base_stack,
                       actor_from="command",
                       actor_to=f"drone-{i}",
                       fork_type="fragment",
                       metadata={"sector": sectors[i]})
      deliver_fork(fork, to_agent=f"drone-{i}")
Each Drone:
  fork = pull_forks()
  stack = resume_upip(fork, command=["scan_sector.py"])
  ack_fork(fork, resume_hash=stack.hash)
Command Station:
  # Verify all N results, reconstruct combined map
  for ack in collect_acks():
      verify(ack.resume_hash)
¶
Lab A publishes an experiment as a UPIP bundle. Lab B reproduces it independently and gets cryptographic proof that results match (or don't).¶
Lab A:
  stack = capture_and_run(
      ["python", "train_model.py"],
      source_dir="./experiment",
      intent="Train model v3 on dataset-2026Q1"
  )
  save_upip(stack, "experiment-2026Q1.upip.json")
  # Publish to journal / data repository
Lab B:
  stack = load_upip("experiment-2026Q1.upip.json")
  verify = reproduce_upip(stack)
  # verify.match == True: exact reproduction
  # verify.match == False: divergence (investigate)
¶
The UPIP protocol was developed as part of HumoticaOS, an AI governance framework built on human-AI symbiosis. UPIP builds on concepts from the TIBET evidence trail protocol and extends them into the domain of process integrity and multi-actor continuation.¶
The Fork Token mechanism was inspired by the need for cryptographic chain of custody in multi-agent AI systems, where processes move between heterogeneous actors across trust boundaries.¶
The authors thank Codex (codex.aint) for the suite-wide cleanup analysis that informed this revision.¶