| Internet-Draft | PoSME | May 2026 |
| Condrey | Expires 10 November 2026 | [Page] |
This document defines Proof of Sequential Memory Execution (PoSME),
a cryptographic primitive combining mutable arena state, data-
dependent pointer-chase addressing, and per-block causal hash
binding in a single step function. A Prover executes K sequential
steps over a mutable N-block arena. Each step reads d blocks at
addresses determined by the previous read's result (pointer
chasing), writes one block with spatial neighborhood entanglement
(incorporating A[w-1] and A[w+1]), and advances a transcript chain.
The construction provides three properties: (1) unconditional
sequential time enforcement anchored in physics-bounded latency
floors, (2) forgery prevention via causal hashes (reduces to
collision resistance of H), and (3) TMTO resistance scaling
as $1/\alpha$ with spatial entanglement, where $\alpha$ is the
adversary's storage fraction.
Verification requires O(Q * d^R * log N) hash evaluations with no
arena allocation. No trusted setup is required.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 10 November 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.¶
Existing primitives for proving sequential computation have complementary weaknesses. Verifiable Delay Functions (VDFs) [Boneh2018] [Wesolowski2019] prove sequential time but offer no memory-hardness. Proofs of Sequential Work (PoSW) [CohenPietrzak2018] prove traversal of a depth-robust graph but operate over static memory. Memory-hard functions (MHFs) such as Argon2id [RFC9106] and scrypt resist ASIC acceleration by requiring significant memory resources. While scrypt was designed to be bounded by the latency of its core functions, many MHFs are practically constrained by memory bandwidth when comparing commodity hardware to specialized ASICs.¶
PoSME takes a different approach. A persistent mutable arena IS the computation state. Each step reads via data-dependent pointer chasing (sequential because each address depends on the previous read's result) and modifies the arena in-place. A per-block causal hash chain binds each block's value to the cursor of the step that wrote it, preventing forgery: the adversary cannot produce a valid causal hash without knowing the writer's cursor, which depends on d other blocks' causal hashes, recursively. The data and causal hash are symbiotically bound: new data depends on the old causal hash, and the new causal hash depends on the cursor.¶
The primary contributions are (a) a physics-bounded latency floor with cross-generation durability and (b) TMTO resistance that scales as $\rho/\alpha$ under spatial entanglement. Unlike bandwidth-bound constructions where the ASIC advantage scales with technology improvements, PoSME is bottlenecked by random memory access latency. For arena sizes exceeding on-die SRAM, the ASIC advantage is bounded by the latency ratio of specialized memory (such as HBM3) to commodity DDR5. While an adversary with massive on-die SRAM (e.g., wafer-scale integration) achieves a significant latency advantage, the bound remains durable across technology generations as it is constrained by signal propagation and DRAM cell sensing time.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
BLAKE3 or SHA-3, producing 32-byte output. The Prover and Verifier MUST agree on a single hash function; BLAKE3 is RECOMMENDED for its throughput on long sequential chains.¶
H evaluated at (input || I2OSP(index, 4)), truncated to 8 bytes and interpreted as a big-endian unsigned integer.¶
Merkle tree root over arena blocks using domain-separated hashing per [RFC6962].¶
Incremental Merkle root update at the given index.¶
The entity executing the PoSME computation and generating proofs.¶
The entity checking PoSME proofs.¶
A mutable array of N blocks, each containing a 32-byte data field and a 32-byte causal hash.¶
A per-block running hash chain binding each block's value to the cursor of the step that wrote it.¶
Each arena block is a pair:¶
block = {
    data:   bytes[32],
    causal: bytes[32]
}
¶
The data field stores the block's computational value. The
causal field stores the causal hash chain: a running digest
binding the block's current value to the cursor of the step
that last wrote it.¶
The arena is initialized deterministically from a public seed s:¶
for i in 0..N-1:
    if i == 0:
        A[0].data = H("PoSME-init-v1" || s || I2OSP(0, 4))
    else:
        A[i].data = H("PoSME-init-v1" || s || I2OSP(i, 4)
                      || A[i-1].data
                      || A[floor(i/2)].data)
    A[i].causal = H("PoSME-causal-v1" || s || I2OSP(i, 4))
root_0 = MerkleRoot(A)
T_0 = H("PoSME-transcript-v1" || s || root_0)
¶
The initialization references both the preceding block (A[i-1]) and a logarithmic skip-link (A[floor(i/2)]). This creates a dependency DAG of depth log(N) and width N, requiring Omega(sqrt(N)) space to evaluate (the DAG cannot be streamed in constant space because each block depends on a block approximately N/2 positions behind it). A custom initialization is used rather than Argon2id because Argon2id's fixed internal graph does not provide this skip-link structure; the logarithmic back-references are necessary for the space-hardness property.¶
The Verifier can independently compute root_0 and T_0 from the seed, providing a trusted anchor for all subsequent verification.¶
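The initialization loop can be sketched in a few lines of Python. SHA-256 via hashlib stands in for H (the draft specifies BLAKE3 or SHA-3), and the Merkle-root and transcript computations are omitted; the function and variable names are illustrative, not part of the specification:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Stand-in for the draft's H (the draft specifies BLAKE3 or SHA-3).
    return hashlib.sha256(b"".join(parts)).digest()

def i2osp(x: int, n: int) -> bytes:
    return x.to_bytes(n, "big")

def init_arena(seed: bytes, N: int) -> list:
    # Each arena block is a (data, causal) pair of 32-byte strings.
    A = []
    for i in range(N):
        if i == 0:
            data = H(b"PoSME-init-v1", seed, i2osp(0, 4))
        else:
            # Preceding block plus the logarithmic skip-link A[floor(i/2)].
            data = H(b"PoSME-init-v1", seed, i2osp(i, 4),
                     A[i - 1][0], A[i // 2][0])
        causal = H(b"PoSME-causal-v1", seed, i2osp(i, 4))
        A.append((data, causal))
    return A
```

Because the skip-link reaches back roughly i/2 positions, the loop cannot discard early blocks: this is the streaming obstruction discussed above.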
The step function is the core of PoSME. It enforces sequentiality via pointer-chasing, hardware parity via forced intra-step bank collisions, and TMTO resistance via spatial neighborhood entanglement.¶
At each step t in {1, ..., K}:¶
STEP(t):
    cursor = T_{t-1}

    // 1. Determine Target Bank
    bank_id = XOF(cursor, 0) mod params.B_banks

    // Start high-resolution cycle counter
    t_start = RDTSC()

    // 2. Intra-Step Bank Collision Reads
    addrs = []
    for j in 0..d-1:
        // Generate pseudo-random address
        raw_a = XOF(cursor, j + 1) mod params.N
        // Mutate raw_a to ensure it maps to bank_id
        a = force_bank_mapping(raw_a, bank_id, params)
        addrs.append(a)
        val = A[a]
        cursor = H(cursor || val.data || val.causal)

    // 3. Write with Spatial Neighborhood Entanglement
    raw_w = XOF(cursor, d + 1) mod params.N
    w = force_bank_mapping(raw_w, bank_id, params)
    old = A[w]
    // Incorporate causal hashes of logical neighbors
    n_prev = A[(w - 1) mod params.N].causal
    n_next = A[(w + 1) mod params.N].causal
    new_data = H(old.data || cursor || old.causal
                 || n_prev || n_next)
    new_causal = H(old.causal || cursor || I2OSP(t, 4)
                   || n_prev || n_next)
    A[w] = {data: new_data, causal: new_causal}

    // Stop cycle counter to capture physical latency jitter
    t_end = RDTSC()
    delta_t = t_end - t_start

    // 4. Update Commitments
    root_t = MerkleUpdate(root_{t-1}, w, A[w])
    T_t = H(T_{t-1} || I2OSP(t, 4) || cursor || root_t || I2OSP(delta_t, 8))

    // 5. Log step for Prover transcript
    log[t] = {addrs, w, old, A[w], cursor, root_t, delta_t}
¶
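The core of the step function can be condensed into a minimal Python sketch. SHA-256 via hashlib stands in for H, and the bank-forcing, cycle-counter, and Merkle-update stages are omitted so that only the pointer chase and the spatially entangled write remain; `step`, `xof`, and the tuple-based arena layout are illustrative choices, not part of the specification:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def i2osp(x: int, n: int) -> bytes:
    return x.to_bytes(n, "big")

def xof(data: bytes, idx: int) -> int:
    # Draft XOF: H(input || I2OSP(index, 4)), first 8 bytes, big-endian.
    return int.from_bytes(H(data, i2osp(idx, 4))[:8], "big")

def step(A: list, T_prev: bytes, t: int, d: int, N: int):
    # One simplified PoSME step: d pointer-chase reads, one entangled
    # write. Bank forcing, RDTSC timing, and root updates are omitted.
    cursor = T_prev
    for j in range(d):
        a = xof(cursor, j + 1) % N           # address from prior cursor
        data, causal = A[a]
        cursor = H(cursor, data, causal)     # next address depends on read
    w = xof(cursor, d + 1) % N
    old_data, old_causal = A[w]
    n_prev = A[(w - 1) % N][1]               # neighbor causal hashes
    n_next = A[(w + 1) % N][1]
    new_data = H(old_data, cursor, old_causal, n_prev, n_next)
    new_causal = H(old_causal, cursor, i2osp(t, 4), n_prev, n_next)
    A[w] = (new_data, new_causal)
    return cursor, w
```

The sequentiality is visible in the loop: each address is derived from a cursor that already incorporates the previous read's contents.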
Standard memory controllers achieve high bandwidth by interleaving sequential reads across multiple hardware banks, keeping multiple row-buffers open. PoSME explicitly defeats this optimization to enforce a strict latency floor.¶
The force_bank_mapping(raw_a, bank_id, params) function modifies the
specific bits of the logical address raw_a that the memory controller
uses for bank selection, replacing them with bank_id.¶
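One plausible realization of force_bank_mapping is a masked bit-field replacement. The position and width of the bank-selection field (`bank_bits` bits at offset `bank_shift` below) are platform-specific assumptions not fixed by this draft:

```python
def force_bank_mapping(raw_a: int, bank_id: int,
                       bank_bits: int, bank_shift: int) -> int:
    # Overwrite the bank-selection field of the logical address with
    # bank_id, leaving the remaining (row/column) bits untouched.
    # The field location is an illustrative, platform-specific assumption.
    field = (1 << bank_bits) - 1
    mask = field << bank_shift
    return (raw_a & ~mask) | ((bank_id & field) << bank_shift)
```

On real hardware the controller's bank-address function may also hash several address bits together, in which case the mapping must target the controller's actual selection function rather than a contiguous field.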
By forcing all $d$ reads and the final write to target the same physical bank but different pseudo-random rows, the memory controller suffers a "Bank Conflict" on every access. This forces a physical Row Precharge ($t_{RP}$) and RAS-to-CAS Delay ($t_{RCD}$) penalty for every hop, anchoring the execution time to the thermodynamic limits of the DRAM capacitor rather than the logic speed of the processor.¶
The write step cryptographically binds the updated block to the current
state of its logical neighbors, A[w-1] and A[w+1].¶
This transforms the Time-Memory Trade-Off (TMTO) penalty from a self-contained write chain into a spatial cascade. If an adversary discards a subset of the arena, recomputing a single missing block $w$ requires knowing the causal hashes of its neighbors at the exact moment of the write. If those neighbors were also discarded, the recomputation propagates outward: $w-1 \to w-2 \to \cdots$ and $w+1 \to w+2 \to \cdots$, until reaching stored blocks on each side. For an adversary storing $\alpha N$ blocks, the expected cascade width is $2(1-\alpha)/\alpha$, and each block costs $\rho$ hash evaluations to replay (see Theorem 3, Section 5.4.3). This cascade is sequential (Theorem 4, Section 5.4.4), adding directly to the adversary's critical path.¶
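The expected cascade width $2(1-\alpha)/\alpha$ can be checked by Monte Carlo simulation of the two geometric walks, one in each direction, where each successive neighbor is stored independently with probability alpha:

```python
import random

def mean_cascade_width(alpha: float, trials: int = 200_000,
                       seed: int = 1) -> float:
    # Width of the spatial cascade: in each of the two directions the
    # walk continues while successive neighbors are missing; each
    # neighbor is stored independently with probability alpha.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _direction in range(2):
            while rng.random() >= alpha:   # count consecutive misses
                total += 1
    return total / trials
```

For alpha = 0.25 the simulation converges to 2(0.75)/0.25 = 6 missing blocks per cascade, each costing rho hash evaluations to replay.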
Because commodity DRAM requires periodic electrical refresh cycles ($t_{REFW}$), a genuine physical execution will exhibit unavoidable, stochastic latency spikes.¶
The Prover measures the execution time of the read/write loop using a
monotonic, high-resolution hardware counter (e.g., the RDTSC
instruction on x86 architectures). This inter-arrival time, delta_t,
is folded directly into the transcript $T_t$. A Verifier auditing the
transcript can perform statistical variance testing on the distribution
of delta_t values. An ASIC attempting to simulate execution entirely
within ultra-fast, deterministic SRAM will lack this specific jitter
profile, allowing the Verifier to reject perfectly clean transcripts as
physically impossible.¶
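As an illustration of such a statistical test, a Verifier might compare the coefficient of variation of the delta_t samples against a floor. The `min_cv` threshold below is an arbitrary illustrative value; the draft does not specify a concrete acceptance test:

```python
from statistics import mean, pstdev

def plausible_jitter(deltas: list, min_cv: float = 0.01) -> bool:
    # Flag transcripts whose timing samples are implausibly clean:
    # genuine DRAM runs include refresh-induced latency spikes, so the
    # coefficient of variation should not be near zero. min_cv is an
    # illustrative threshold, not fixed by this draft.
    m = mean(deltas)
    return m > 0 and (pstdev(deltas) / m) >= min_cv
```

A production Verifier would likely model the expected t_REFW spike distribution rather than a single variance floor.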
The transcript chain T_t binds all steps causally:¶
T_t = H(T_{t-1} || I2OSP(t, 4) || cursor || root_t || I2OSP(delta_t, 8))
¶
T_t incorporates root_t (the Merkle root after the write), cursor (which depends on the arena state at step t), and the timing sample delta_t (Section 3.3.3). Computing T_t requires computing all prior steps.¶
The Prover commits to the sequence of all K+1 arena roots:¶
R = [root_0, root_1, ..., root_K]
C_roots = MerkleRoot(R)¶
This root chain commitment binds the Prover to a specific sequence of arena states BEFORE Fiat-Shamir challenges are derived. The challenges depend on (T_K, C_roots), and both must be fixed before the Prover knows which steps will be challenged.¶
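A sketch of one possible challenge derivation consistent with this requirement. The draft fixes only the inputs (T_K, C_roots); the counter-hash construction, the function name, and SHA-256 standing in for H are illustrative assumptions:

```python
import hashlib

def fs_challenges(T_K: bytes, C_roots: bytes, Q: int, K: int) -> list:
    # Derive Q challenged step indices in [1, K] by counter-hashing the
    # fixed commitments. The Prover cannot influence the output without
    # changing T_K or C_roots, which are bound before derivation.
    out = []
    i = 0
    while len(out) < Q:
        h = hashlib.sha256(T_K + C_roots + i.to_bytes(4, "big")).digest()
        out.append(int.from_bytes(h[:8], "big") % K + 1)
        i += 1
    return out
```

Any change to either commitment reshuffles the entire challenge set, which is what makes fabricating roots after the fact useless.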
PROVE(K, Q, R_depth):
    C_roots = MerkleRoot([root_0, ..., root_K])
    challenges = FS(T_K, C_roots, Q)
    proof = {params, T_K, C_roots, step_proofs: []}
    for c in challenges:
        sp = make_step_proof(c, R_depth)
        proof.step_proofs.append(sp)
    return proof

make_step_proof(step, depth):
    sp = {
        step_id: step,
        cursor_in: T_{step-1},
        cursor_out: log[step].cursor,
        root_before: root_{step-1},
        root_after: log[step].root_t,
        root_chain_paths: [
            MerklePath(C_roots, step-1),
            MerklePath(C_roots, step)
        ],
        reads: [],
        write: {addr: w, old: log[step].old, new: A[w],
                merkle_path: MerklePath(root_{step-1}, w),
                neighbor_prev: {
                    addr: (w-1) mod N,
                    block: A[(w-1) mod N],
                    merkle_path:
                        MerklePath(root_{step-1}, (w-1) mod N)},
                neighbor_next: {
                    addr: (w+1) mod N,
                    block: A[(w+1) mod N],
                    merkle_path:
                        MerklePath(root_{step-1}, (w+1) mod N)}},
        writers: []
    }
    for j in 0..d-1:
        addr = log[step].addrs[j]
        block = arena block at addr under root_{step-1}
        sp.reads.append({addr, block,
                         merkle_path: MerklePath(root_{step-1}, addr)})
        ws = last_writer(addr, step)
        if depth > 0:
            if ws == 0:
                sp.writers.append({type: "init",
                                   init_path: MerklePath(root_0, addr)})
            else:
                sp.writers.append({type: "step",
                                   proof: make_step_proof(ws, depth-1)})
        else:
            sp.writers.append({type: "leaf",
                               writer_step: ws,
                               merkle_path: MerklePath(root_{ws}, addr)})
    return sp
¶
The Verifier receives (seed, params, T_K, C_roots, proof):¶
VERIFY(seed, params, T_K, C_roots, proof):
    // 1. Trusted anchor
    root_0 = compute_init_root(seed, params.N)
    T_0 = H("PoSME-transcript-v1" || seed || root_0)

    // 2. Verify root_0 in root chain
    assert MerkleVerify(C_roots, 0, root_0,
                        proof.root_0_path)

    // 3. Recompute challenges
    challenges = FS(T_K, C_roots, params.Q)

    // 4. Verify each challenged step
    for sp in proof.step_proofs:
        verify_step(sp, C_roots, root_0, params)

verify_step(sp, C_roots, root_0, params):
    // A. Verify roots are in the root chain
    assert MerkleVerify(C_roots, sp.step_id - 1,
                        sp.root_before,
                        sp.root_chain_paths[0])
    assert MerkleVerify(C_roots, sp.step_id,
                        sp.root_after,
                        sp.root_chain_paths[1])

    // B. Verify read Merkle proofs
    for j in 0..d-1:
        assert MerkleVerify(sp.root_before,
                            sp.reads[j].addr, sp.reads[j].block,
                            sp.reads[j].merkle_path)

    // C. Replay pointer-chase (indices match STEP, Section 3.3)
    cursor = sp.cursor_in
    bank_id = XOF(cursor, 0) mod params.B_banks
    for j in 0..d-1:
        a = force_bank_mapping(XOF(cursor, j + 1) mod N,
                               bank_id, params)
        assert a == sp.reads[j].addr
        cursor = H(cursor || sp.reads[j].block.data
                   || sp.reads[j].block.causal)

    // D. Verify write with spatial neighbors
    w = force_bank_mapping(XOF(cursor, d + 1) mod N,
                           bank_id, params)
    assert w == sp.write.addr
    assert MerkleVerify(sp.root_before, w,
                        sp.write.old, sp.write.merkle_path)

    // D1. Verify spatial neighbor Merkle proofs
    w_prev = (w - 1) mod N
    w_next = (w + 1) mod N
    np = sp.write.neighbor_prev
    nn = sp.write.neighbor_next
    assert np.addr == w_prev
    assert nn.addr == w_next
    assert MerkleVerify(sp.root_before, w_prev,
                        np.block, np.merkle_path)
    assert MerkleVerify(sp.root_before, w_next,
                        nn.block, nn.merkle_path)
    n_prev = np.block.causal
    n_next = nn.block.causal

    // D2. Verify symbiotic write with spatial entanglement
    assert sp.write.new.data == H(sp.write.old.data
                                  || cursor
                                  || sp.write.old.causal
                                  || n_prev || n_next)
    assert sp.write.new.causal == H(sp.write.old.causal
                                    || cursor
                                    || I2OSP(sp.step_id, 4)
                                    || n_prev || n_next)

    // E. Verify Merkle root update
    assert sp.root_after == MerkleUpdate(
        sp.root_before, w, sp.write.new)

    // F. Compute and store transcript value for cross-check
    T_c = H(sp.cursor_in || I2OSP(sp.step_id, 4)
            || cursor || sp.root_after || I2OSP(sp.delta_t, 8))
    // If another challenged step c' has cursor_in == T_c,
    // verify they match. If sp.step_id == K, verify
    // T_c == T_K (the public final transcript).
    stored_transcripts[sp.step_id] = T_c

    // G. Recursive causal provenance
    for j in 0..d-1:
        verify_writer(sp.writers[j], sp.reads[j],
                      C_roots, root_0, params)
¶
For Q challenges with recursion depth R:¶
Root chain proofs: O(log K) per challenged step, O(Q * log K) total¶
Arena Merkle proofs: O(Q * d^R * log N)¶
Cursor replays: O(Q * d^R * d)¶
No arena memory allocation¶
For Q=128, d=8, R=3, N=2^24, K=4*N=2^26:¶
| Operation | Count |
|---|---|
| Root chain verifications | 128 * 2 * 26 = ~6.7K hashes |
| Arena Merkle verifications | 128 * 512 * 24 = ~1.6M hashes |
| Cursor replays | 128 * 512 * 8 = ~524K hashes |
| Total | ~2.1M hashes, ~6ms |
The ~6 ms estimate assumes a modern desktop CPU (~350M BLAKE3 hashes/second). On constrained platforms (mobile: 60-300 ms; WASM: 120-600 ms), verification is slower but still practical. No memory allocation beyond the proof data is required.¶
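The operation counts in the table follow directly from the complexity formulas; the sketch below reproduces them for the worked parameter set (the function name and tuple return are illustrative):

```python
import math

def verification_hash_count(Q: int, d: int, R: int, N: int, K: int):
    # Hash-evaluation counts for the three verification phases:
    # two root-chain paths of depth log2(K) per challenge, d^R arena
    # Merkle paths of depth log2(N), and d-read cursor replays per
    # visited step.
    log_K = math.ceil(math.log2(K))
    log_N = math.ceil(math.log2(N))
    root_chain = Q * 2 * log_K
    merkle = Q * d ** R * log_N
    replay = Q * d ** R * d
    return root_chain, merkle, replay, root_chain + merkle + replay
```

For Q=128, d=8, R=3, N=2^24, K=2^26 this yields 6,656 + 1,572,864 + 524,288 = 2,103,808 hashes, matching the ~2.1M total above.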
The adversary is a probabilistic polynomial-time algorithm with random oracle access to H. The adversary receives the public seed s and parameters (N, K, d, Q, R). Its goal is to produce (T_K, C_roots, proof) that passes VERIFY (Section 4.1) while either:¶
Forgery: producing T_K' != T_K (the honestly computed transcript), or¶
Space reduction: using less than N * B bits of arena storage at some point during computation.¶
The adversary may use custom hardware with faster memory (lower latency) than the honest Prover. The ASIC resistance analysis (Section 5.5) bounds the resulting speedup.¶
The causal hash mechanism prevents block value fabrication. To forge a block's causal hash, the adversary needs the cursor of the step that wrote it. That cursor depends on d blocks read at the writer step, each with their own causal hashes requiring their own writers' cursors, recursively. Symbiotic binding strengthens this: forging data requires old_causal, and forging old_causal requires the prior writer's cursor. Neither field can be independently fabricated.¶
The root chain commitment (Section 3.4) binds the Prover to ALL K arena roots before challenges are derived. C_roots is an input to the Fiat-Shamir challenge derivation, so the Prover cannot fabricate roots after seeing challenges.¶
Theorem 1 (Soundness). Any adversary producing (T_K', C_roots', proof') with T_K' != T_K that passes VERIFY has advantage at most K * epsilon_cr, where epsilon_cr is the collision-finding advantage against H.¶
Proof sketch. If verification passes with T_K' != T_K, there exists a step c where T_{c-1}' = T_{c-1} but T_c' != T_c (the first divergence). At step c, the Verifier checks that T_c = H(T_{c-1} || c || cursor || root_c). If the adversary's inputs differ from the honest inputs but produce the same T_c, this is a collision in H. If the adversary's inputs differ and produce a different T_c, then T_c' != T_c, contradicting acceptance. The adversary has K steps at which to attempt this, giving the union bound K * epsilon_cr.¶
A full derivation is provided in the companion analysis (to appear as IACR ePrint).¶
Separately from forgery prevention, spatial neighborhood entanglement (Section 3.3.2) imposes a storage-dependent penalty on recomputation. Without spatial entanglement, an adversary recomputing a missing block traverses its temporal write chain at cost $O(\rho)$ hashes; the chain is self-contained because each write depends only on the block's own previous state and the cursor (which is stored). With spatial entanglement, each write also depends on the causal hashes of neighbors $A[w-1]$ and $A[w+1]$ at the time of the write. These are historical states that cannot be derived from the neighbors' current values (hash chains are irreversible). Therefore, recomputing a missing block requires replaying the full temporal chains of its spatial neighbors, which in turn require their own neighbors, creating a spatial cascade.¶
The cascade extends outward from the missing block until reaching a stored block on each side. For an adversary storing $\alpha N$ blocks, the expected cascade width is $2(1-\alpha)/\alpha$ (geometric distribution), and each block in the cascade costs $\rho$ hash evaluations to replay. The per-miss recomputation cost is therefore $\Theta(\rho/\alpha)$, compared to $\Theta(\rho)$ without spatial entanglement. The $1/\alpha$ factor means that reducing storage becomes increasingly expensive: halving storage more than doubles recomputation cost.¶
An adversary storing $\alpha \cdot N$ blocks faces a two-layer penalty:¶
The transcript chain T_0 through T_K must be computed sequentially to produce T_K before Fiat-Shamir challenges are derived. This is an $\Omega(K)$ lower bound regardless of storage.¶
Each step writes 1 block at a uniformly random address (Theorem 2). After $K$ steps with write density $\rho = K/N$, each block has been written $\rho$ times on average. Because each write is bound to its spatial neighbors' causal hashes, missing blocks cannot be recomputed in isolation.¶
Theorem 3 (Section 5.4.3) establishes the TMTO ratio:¶
TMTO(alpha) = 1 + 2*rho*(1-alpha)^2 / alpha¶
| rho | alpha=0.5 | alpha=0.25 | alpha=0.1 | alpha=0.01 |
|---|---|---|---|---|
| 1 | 2x | 5x | 17x | 197x |
| 4 | 5x | 19x | 65x | 785x |
| 16 | 17x | 73x | 257x | 3,137x |
K MUST be at least N ($\rho \ge 1$) for meaningful TMTO resistance. Values of $\rho \ge 4$ are RECOMMENDED.¶
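The TMTO ratio can be evaluated directly; the table entries above round the exact values (e.g. rho=1, alpha=0.25 evaluates to 5.5, shown as 5x):

```python
def tmto_ratio(rho: float, alpha: float) -> float:
    # Theorem 3: TMTO(alpha) = 1 + 2*rho*(1-alpha)^2 / alpha.
    # rho = K/N is the write density; alpha is the stored fraction.
    return 1 + 2 * rho * (1 - alpha) ** 2 / alpha
```

The 1/alpha pole is the operative feature: each halving of storage more than doubles the recomputation penalty.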
Theorem 2 (Address Uniformity). In the random oracle model, the addresses $a_{t,j} = \text{XOF}(\text{cursor}_t, j) \bmod N$ used for pointer-chase reads satisfy:¶
Independence. For distinct pairs $(t_1, j_1) \ne (t_2, j_2)$, the addresses $a_{t_1,j_1}$ and $a_{t_2,j_2}$ are pairwise independent, except with probability at most $\binom{Kd}{2} / 2^{256}$ (birthday bound on cursor collisions in the 256-bit hash space).¶
Uniformity. Each address $a_{t,j}$ has statistical distance at most $N / 2^{64}$ from the uniform distribution over $[0, N)$. For $N \le 2^{48}$, this bound is at most $2^{-16}$.¶
Consequently, for any adversary subset of stored blocks of size $\alpha \cdot N$, a random read misses the stored set with probability $(1 - \alpha) \pm N/2^{64}$.¶
Proof. In the random oracle model, $H$ maps distinct inputs to independent, uniformly random 256-bit outputs. Define $X_{t,j} = H(\text{cursor}_t \| \text{I2OSP}(j, 4))$. The inputs are distinct across $j$ (for fixed $t$) by the index suffix, and across $t$ (for any $j$) whenever $\text{cursor}_t \ne \text{cursor}_{t'}$. Since $\text{cursor}_t = H(\text{cursor}_{t-1} \| \cdots)$ is itself a hash output, cursor collision probability across all $K$ steps is at most $\binom{K}{2} / 2^{256}$, which is negligible. Given distinct inputs, the outputs $X_{t,j}$ are independent uniform 256-bit strings, establishing claim (1).¶
For claim (2): XOF truncates $X_{t,j}$ to its first 8 bytes, yielding a uniform value $U$ in $[0, 2^{64})$. The address is $U \bmod N$. The number of integers in $[0, 2^{64})$ mapping to any particular residue $r \in [0, N)$ is either $\lfloor 2^{64}/N \rfloor$ or $\lceil 2^{64}/N \rceil$. The maximum deviation from the ideal probability $1/N$ is at most $1/2^{64}$, giving statistical distance at most $N/2^{64}$ over the full distribution. For $N \le 2^{48}$, this is at most $2^{-16}$, which is negligible for all recommended profiles.¶
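The $N/2^{64}$ bound can be checked against the exact statistical distance of $U \bmod N$ from uniform; the helper below is illustrative, not part of the specification:

```python
def mod_bias_statistical_distance(bits: int, N: int):
    # Exact total-variation distance of (U mod N) from uniform over
    # [0, N), for U uniform on [0, 2^bits), plus the N/2^bits bound.
    M = 1 << bits
    q, r = divmod(M, N)
    # r residues receive probability (q+1)/M; the other N-r get q/M.
    exact = r * ((q + 1) / M - 1 / N)   # sum of positive deviations
    return exact, N / M
```

For the 8-byte XOF truncation (bits = 64) and N = 2^24, the bound is 2^-40, comfortably below the 2^-16 ceiling stated for recommended profiles.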
Theorem 3 (Spatial Cascade TMTO). In the random oracle model, under Theorem 2, consider an adversary that maintains $\alpha \cdot N$ arena blocks in working memory ($0 < \alpha < 1$) and stores all $K$ transcript values. The adversary's expected computation per step is:¶
C_step >= d * (1 + 2*rho*(1-alpha)^2 / alpha)¶
where $\rho = K/N$ is the write density.¶
Proof. The proof proceeds in three parts.¶
Part 1: Write chain cost. When a read at step $t$ targets a block $w$ not in the adversary's working memory, the adversary must reconstruct $w$'s current state. Block $w$ was written $\rho$ times on average (each of $K$ steps writes to a uniformly random block by Theorem 2). The adversary knows the cursor at every step (stored), so replaying one write requires one hash evaluation given the block's previous state and the cursor. Tracing $w$'s temporal write chain from initialization to the current epoch costs $\rho$ hash evaluations.¶
Part 2: Spatial cascade. Each write to block $w$ at step $t'$ depends on the causal hashes of $A[w-1]$ and $A[w+1]$ at time $t'$ (Section 3.3.2). These are historical states: the causal hash of $w$'s neighbor at the moment $w$ was written, not the neighbor's current state. Even if $w-1$ is currently in the adversary's working memory, its state at time $t'$ is not recoverable from its current state (hash chains are irreversible). Therefore, to replay $w$'s write chain, the adversary must also replay the full temporal chain of $w-1$ (so that $w-1$'s state at each of $w$'s write times is available).¶
Block $w-1$'s temporal chain depends on $w-2$'s historical causal hashes (its own spatial neighbor). If $w-2$ is not stored, this cascades further: $w-2 \to w-3 \to \cdots$. The cascade extends outward in one direction until reaching a block that is in the adversary's working memory; that block's full temporal chain is available (the adversary maintains it by construction).¶
The same cascade extends in the opposite direction: $w+1 \to w+2 \to \cdots$. The cascade width in each direction follows a geometric distribution with success probability $\alpha$ (each successive neighbor is stored with probability $\alpha$, independently by Theorem 2's uniformity guarantee on write addresses). The expected cascade width in one direction is $(1-\alpha)/\alpha$. For all recommended profiles ($N \ge 2^{20}$), this is negligible relative to $N$ for any $\alpha > 2^{-19}$; ring wrap-around does not affect the bound in practice. Each block in the cascade requires $\rho$ hash evaluations to replay its temporal chain.¶
Part 3: Expected per-step cost. At each of the $K$ steps, the Prover makes $d$ reads. Each read targets a uniformly random block, which is absent from the adversary's working memory with probability $(1-\alpha)$. Each miss triggers a spatial cascade of expected total width $2(1-\alpha)/\alpha$ blocks, each costing $\rho$ hash evaluations.¶
The expected computation per step is:¶
C_step = d + d*(1-alpha) * (2*(1-alpha)/alpha) * rho
= d * (1 + 2*rho*(1-alpha)^2 / alpha)
¶
For $(1-\alpha) \approx 1$ (small $\alpha$), this simplifies to $d \cdot (1 + 2\rho/\alpha)$. The TMTO ratio (adversary cost divided by honest cost $d$) is:¶
TMTO(alpha) = 1 + 2*rho*(1-alpha)^2 / alpha¶
| rho | alpha=0.5 | alpha=0.25 | alpha=0.1 | alpha=0.01 |
|---|---|---|---|---|
| 1 | 2x | 5x | 17x | 197x |
| 4 | 5x | 19x | 65x | 785x |
| 16 | 17x | 73x | 257x | 3,137x |
For $\alpha < 1/(2\rho)$, the TMTO ratio exceeds $2\rho^2$, making space reduction more expensive than honest execution with full storage.¶
Theorem 4 (Sequential Cascade Latency). The spatial cascade of Theorem 3 adds to the adversary's sequential critical path. An adversary resolving a cascade of width $L$ blocks incurs at least $L \cdot \rho$ sequential hash evaluations that cannot be parallelized.¶
Proof. To replay block $w$'s temporal chain, the adversary needs the causal hashes of $w-1$ at each of $w$'s write times. These causal hashes are outputs of $w-1$'s own temporal chain. Therefore, $w-1$'s full temporal chain ($\rho$ sequential hash evaluations) must complete before $w$'s chain can begin. By the same argument, $w-2$'s chain must complete before $w-1$'s, and so on. For a cascade of width $L$ blocks, the critical path is $L$ sequential temporal chains of $\rho$ links each, totaling $L \cdot \rho$ sequential hash evaluations. Parallel hardware does not reduce this latency because each chain link depends on the output of the previous link (hash chaining) and the completion of the adjacent block's chain (spatial dependency).¶
This result is significant because it means spatial entanglement converts a work penalty into a latency penalty: the adversary not only performs more total computation but takes more wall-clock time, directly undermining the sequential execution guarantee.¶
Theorem 5 (Checkpoint Dominance). Under spatial entanglement, partial-arena checkpoints are strictly suboptimal. For any adversary using checkpoints of $\alpha \cdot N$ blocks ($\alpha < 1$) at interval $C$ steps, the space-time product $S \times T$ satisfies:¶
S * T >= (2*rho*(1-alpha) + alpha) * S_full * T_full¶
where $S_{full} \times T_{full}$ is the space-time product for full-arena checkpoints. For $\rho \ge 1$, this exceeds $S_{full} \times T_{full}$, with the gap increasing linearly in $\rho$.¶
Proof. A full-checkpoint adversary stores all $N$ blocks at interval $C$, giving storage $S_{full} = (K/C) \cdot N \cdot B$ and replay cost $T_{full} = Q \cdot C \cdot d / 2$ per proof generation. The product $S_{full} \cdot T_{full}$ is independent of $C$ (the standard time-space tradeoff).¶
A partial-checkpoint adversary stores $\alpha N$ blocks at interval $C$. Storage: $S = \alpha \cdot S_{full}(C)$. When replaying from a partial checkpoint, each of the $C/2$ replayed steps incurs spatial cascade overhead per Theorem 3. Replay cost per challenge: $T = T_{full}(C) \cdot (1 + 2\rho(1-\alpha)/\alpha)$.¶
The product:¶
S * T = alpha * S_full * T_full * (1 + 2*rho*(1-alpha)/alpha)
= S_full * T_full * (alpha + 2*rho*(1-alpha))
= S_full * T_full * (2*rho + alpha*(1 - 2*rho))
¶
For $\rho \ge 1$: the coefficient $(1 - 2\rho) \le -1$, so the product is minimized at $\alpha = 1$ (full checkpoints), where it equals $S_{full} \times T_{full}$. Any $\alpha < 1$ strictly increases the product. For $\alpha \to 0$, the ratio approaches $2\rho$, meaning the adversary's space-time product is $2\rho$ times worse than with full checkpoints.¶
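The coefficient derived above can be evaluated directly to see the dominance of full checkpoints:

```python
def st_product_ratio(rho: float, alpha: float) -> float:
    # Theorem 5: (S*T) / (S_full * T_full) = alpha + 2*rho*(1-alpha).
    # For rho >= 1 this is minimized at alpha = 1 (full checkpoints)
    # and approaches 2*rho as alpha -> 0.
    return alpha + 2 * rho * (1 - alpha)
```

For rho = 4, halving checkpoint storage (alpha = 0.5) already inflates the space-time product 4.5x over full checkpoints.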
Corollary. Spatial entanglement forces the adversary into an all-or-nothing checkpointing strategy: either store the complete arena at each checkpoint or forgo checkpointing entirely. There is no useful middle ground.¶
PoSME's causal DAG is dynamic: edges are created during execution based on data-dependent addressing. In the random oracle model (Theorem 2), each step creates d edges to uniformly random targets. The pebbling game is:¶
N block nodes (arena) and K step nodes.¶
At step t, the game reveals d random read addresses.¶
To execute step t, the adversary must have pebbles on all d read addresses and on the write target's spatial neighbors (stored or recomputed via spatial cascade).¶
The adversary maintains auxiliary state (cursors, write index) of at most K * 32 bytes.¶
Without spatial entanglement, the per-miss recomputation cost is $\rho$ (linear write chain), giving a TMTO ratio of $1 + (1-\alpha) \cdot (2\rho + 1)$. With spatial entanglement, Theorem 3 establishes the tighter bound $1 + 2\rho(1-\alpha)^2/\alpha$, and Theorem 4 proves this overhead is sequential (cannot be parallelized). Theorem 5 further shows that partial-arena checkpoints are strictly dominated by full-arena checkpoints.¶
PoSME is anchored in a physics-bounded latency floor. While computation throughput improves exponentially with transistor scaling, random-access memory latency is constrained by the fundamental thermodynamics of charge-sensing in capacitors.¶
The per-hop bottleneck is determined by the mandatory bank conflict (Section 3.3.1), which forces the DRAM controller to execute a full Row Precharge ($t_{RP}$) and RAS-to-CAS Delay ($t_{RCD}$) for every sequential read. These timings are physical constants of DRAM cell operation that do not scale with logic shrinks. Even an adversary with wafer-scale on-die integration (Section 5.5.1) faces a latency floor constrained by signal propagation across the die and the settling time of the memory cells.¶
Consequently, the ASIC advantage is not a function of "better hardware," but rather the physical limit of signal propagation and charge sensing. By forcing intra-step bank collisions, PoSME ensures that even the most optimized controller spends the majority of its wall-clock time in a stalled state, waiting for the physical laws of DRAM to resolve the next address.¶
The ultimate latency floor for an adversary is on-die signal propagation. Optimal ASIC designs that integrate massive SRAM (1-5ns access) could achieve a 10-45x advantage over commodity DDR5. Wafer-scale integration, as demonstrated by the Cerebras Wafer-Scale Engine, is the existence proof for this threshold. PoSME's security is durable because spatial entanglement (Theorem 3) imposes a TMTO recomputation penalty that scales as $1/\alpha$, ensuring that any latency-based speedup is countered by the prohibitive cost of discarding state. Furthermore, the cascade latency (Theorem 4) is sequential, so the adversary cannot hide recomputation behind parallelism.¶
Intra-step: The d reads form a pointer-chasing chain; read j+1's address depends on read j's result.¶
Inter-step: T_t feeds into address generation for step t+1.¶
Together: K * d sequential memory accesses, each bottlenecked by DRAM latency.¶
The PoSME proof is encoded in CBOR [RFC8949] per [RFC8610]:¶
posme-proof = {
1 => posme-params,
2 => bstr .size 32, ; final-transcript (T_K)
3 => bstr .size 32, ; root-chain-commitment
4 => [+ step-proof], ; challenged-steps
}
posme-params = {
1 => uint, ; arena-blocks (N)
2 => uint, ; total-steps (K)
3 => uint, ; reads-per-step (d)
4 => uint, ; challenges (Q)
5 => uint, ; recursion-depth (R)
6 => uint, ; bank-count (B)
}
step-proof = {
1 => uint, ; step-id
2 => bstr .size 32, ; cursor-in
3 => bstr .size 32, ; cursor-out
4 => bstr .size 32, ; root-before
5 => bstr .size 32, ; root-after
6 => [+ bstr .size 32], ; root-chain-paths
7 => [+ read-witness], ; reads
8 => write-witness, ; write
9 => [* writer-proof], ; recursive provenance
10 => uint, ; timing-entropy (delta_t)
}
read-witness = {
1 => uint, ; address
2 => bstr .size 32, ; data
3 => bstr .size 32, ; causal-hash
4 => [+ bstr .size 32], ; merkle-path
}
write-witness = {
1 => uint, ; address
2 => bstr .size 32, ; old-data
3 => bstr .size 32, ; old-causal
4 => bstr .size 32, ; new-data
5 => bstr .size 32, ; new-causal
6 => [+ bstr .size 32], ; merkle-path
7 => read-witness, ; neighbor-prev (w-1)
8 => read-witness, ; neighbor-next (w+1)
}
writer-proof = {
1 => uint, ; type (0=init, 1=step, 2=leaf)
? 2 => uint, ; writer-step-id
? 3 => step-proof, ; recursive step proof
? 4 => [+ bstr .size 32], ; merkle-path
}
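As a concrete illustration of the wire format, the `posme-params` map for the Standard profile can be encoded with a minimal hand-rolled CBOR integer/map encoder (a sketch sufficient for this one map; a real implementation would use a full [RFC8949] library):

```python
def cbor_uint(n: int) -> bytes:
    # CBOR major type 0 (unsigned integer) [RFC8949].
    if n < 24:
        return bytes([n])
    for bits, initial in ((8, 0x18), (16, 0x19), (32, 0x1A), (64, 0x1B)):
        if n < 1 << bits:
            return bytes([initial]) + n.to_bytes(bits // 8, "big")
    raise ValueError("uint too large for CBOR")

def cbor_uint_map(m: dict) -> bytes:
    # Map (major type 5) with small uint keys and uint values;
    # enough to encode posme-params, nothing more.
    assert len(m) < 24
    out = bytes([0xA0 + len(m)])
    for k, v in m.items():
        out += cbor_uint(k) + cbor_uint(v)
    return out

params = {1: 1 << 20,  # arena-blocks (N)
          2: 4 << 20,  # total-steps (K = 4*N)
          3: 8,        # reads-per-step (d)
          4: 64,       # challenges (Q)
          5: 2,        # recursion-depth (R)
          6: 16}       # bank-count (B)
encoded = cbor_uint_map(params)
```

A full `posme-proof` additionally carries the byte strings and nested step-proof arrays defined above.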
The recursion depth R and challenge count Q present a direct tradeoff between security margin and proof size. Table 6 gives concrete per-proof sizes in MiB for implementers.¶
| Recursion (R) | Challenges (Q) | Blocks (B) | Size (MiB) |
|---|---|---|---|
| 2 | 64 | 81 | 3.9 |
| 2 | 128 | 81 | 7.9 |
| 3 | 64 | 657 | 32.1 |
| 3 | 128 | 657 | 64.2 |
While R=3 yields significantly larger proofs, it provides exponentially higher fabrication resistance by checking the witnesses of the writers' writers. For bandwidth-constrained environments (e.g., light clients), R=2 with Q=128 offers a compact ~8 MiB proof while maintaining high confidence.¶
PoSME's security properties have different parameter dependencies. TMTO resistance (Section 5.4) depends on the write density rho = K/N and is independent of arena size. ASIC resistance can be achieved through arena size exceeding the adversary's fastest memory (Section 5.5). Applications SHOULD select parameters based on their threat model.¶
Arena size N and write density rho = K/N are independent knobs controlling different security properties:¶
N controls latency-bound ASIC resistance: the arena must exceed the adversary's fastest accessible memory (L3 cache, SRAM). Larger N requires more Prover RAM.¶
rho controls TMTO resistance: penalty = 1 + 2*rho*(1-alpha)^2/alpha for an adversary storing alpha*N blocks (Theorem 3). Higher rho requires more steps (longer wall time) but no additional RAM.¶
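The Theorem 3 penalty formula can be evaluated directly. A minimal sketch (function name is illustrative):

```python
def tmto_penalty(rho: float, alpha: float) -> float:
    # Theorem 3: recomputation penalty for an adversary storing an
    # alpha fraction of the N-block arena, at write density rho = K/N.
    return 1 + 2 * rho * (1 - alpha) ** 2 / alpha

# At the profiles' write density rho = 4:
for alpha in (0.5, 0.25, 0.1):
    print(f"alpha={alpha}: {tmto_penalty(4, alpha):.1f}x")
```

At rho = 4 the penalty is exactly 5x for alpha = 0.5 and roughly 65x for alpha = 0.1, matching the profile table and the rho >= 4 minimum below.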
Three profiles are defined. All profiles share fixed parameters: block size B = 64 bytes, reads per step d = 8, bank count B_banks = 16, hash function H = BLAKE3.¶
| Profile | N | Arena | rho | K | Q | R | Peak RAM | TMTO (alpha=0.1) | Use Case |
|---|---|---|---|---|---|---|---|---|---|
| Standard | 2^20 | 64 MiB | 4 | 4*N | 64 | 2 | ~128 MiB | 65x | Sybil resistance |
| Enhanced | 2^22 | 256 MiB | 4 | 4*N | 128 | 3 | ~512 MiB | 65x | High-assurance |
| Maximum | 2^24 | 1 GiB | 4 | 4*N | 128 | 3 | ~2 GiB | 65x | Consensus, mining |
The Standard and Enhanced profiles exceed consumer L3 caches (16-36 MiB as of 2024) and provide latency-bound ASIC resistance via arena size and HBM latency bounds. The Maximum profile (1 GiB) exceeds all current L3 caches and limits GPU throughput via the capacity-bandwidth bound.¶
The Prover's peak memory comprises three components:¶
| Component | Size | Notes |
|---|---|---|
| Arena | N * 64 bytes | Required for computation |
| Merkle tree | 2 * N * 32 bytes | Required for root updates |
| Root chain | (K + 1) * 32 bytes | Sequential; MAY be streamed to disk |
The root chain is written sequentially during pass 1 and read sequentially during pass 2. Implementations MAY stream the root chain to persistent storage to reduce peak RAM by K * 32 bytes, at the cost of additional I/O.¶
Peak RAM by profile (with root chain streaming):¶
| Profile | Arena + Merkle | Root chain (disk) | Peak RAM |
|---|---|---|---|
| Standard | 128 MiB | 128 MiB | ~128 MiB |
| Enhanced | 512 MiB | 512 MiB | ~512 MiB |
| Maximum | 2 GiB | 2 GiB | ~2 GiB |
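The component sizes above reduce to simple arithmetic; a sketch (the function name and signature are illustrative, not part of the specification):

```python
def prover_memory(N: int, K: int, stream_root_chain: bool = True):
    # Component sizes from the peak-memory table.
    arena = N * 64                # one 64-byte block per arena slot
    merkle = 2 * N * 32           # binary tree of 32-byte digests
    root_chain = (K + 1) * 32     # one root per step, sequential access
    peak_ram = arena + merkle + (0 if stream_root_chain else root_chain)
    return arena, merkle, root_chain, peak_ram

MiB = 1 << 20
# Standard profile: N = 2^20, K = 4*N.
arena, merkle, root_chain, peak = prover_memory(N=1 << 20, K=4 << 20)
```

For the Standard profile this gives a 64 MiB arena, a 64 MiB Merkle tree, and a ~128 MiB root chain that can live on disk, for ~128 MiB of peak RAM as tabulated.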
Verifiers MUST reject proofs with parameters below these minimums:¶
| Parameter | Minimum | Rationale |
|---|---|---|
| N | 2^18 | Below this, arena is too small for meaningful pointer-chase depth |
| K | N | Below N, most blocks are never written; TMTO is trivial |
| K/N (rho) | 4 | Below this, TMTO penalty < 5x at alpha=0.5 |
| d | 4 | Below this, causal fan-out is insufficient |
| Q | 64 | Below this, a cheating Prover's escape probability exceeds 2^{-64} |
| R | 2 | Below this, causal verification is shallow |
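A reject-before-allocate check mirroring the minimums table might look as follows (a sketch; real verifiers would also enforce configured upper bounds per the DoS guidance later in this document):

```python
def check_minimums(N: int, K: int, d: int, Q: int, R: int) -> bool:
    # Parameter floors from the minimums table. All checks are
    # integer comparisons and run before any allocation.
    return (N >= 1 << 18 and   # meaningful pointer-chase depth
            K >= N and         # every block can be written at least once
            K // N >= 4 and    # rho >= 4: TMTO penalty >= 5x at alpha=0.5
            d >= 4 and         # sufficient causal fan-out
            Q >= 64 and        # detection confidence
            R >= 2)            # non-shallow causal verification

assert check_minimums(N=1 << 20, K=4 << 20, d=8, Q=64, R=2)      # Standard
assert not check_minimums(N=1 << 20, K=2 << 20, d=8, Q=64, R=2)  # rho = 2
```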
The following properties are machine-independent:¶
| Property | Standard | Maximum |
|---|---|---|
| TMTO penalty (alpha=0.1) | 65x | 65x |
| ASIC resistance mechanism | Physics-Bound Floor | Physics-Bound Floor |
| Proof size | ~3.9 MiB | ~64 MiB |
Reference timings (Apple M-series, DDR5; will vary by hardware):¶
| Profile | Per-step | Wall time | Prover peak RAM |
|---|---|---|---|
| Standard (64 MiB) | ~1500 ns | ~6 seconds | ~128 MiB |
| Enhanced (256 MiB) | ~2200 ns | ~37 seconds | ~512 MiB |
| Maximum (1 GiB) | ~2750 ns | ~185 seconds | ~2 GiB (disk) |
Verifier time is independent of K and requires no arena allocation; it scales with Q, d, R, and log N, and therefore varies by profile:¶
| Platform | Standard | Enhanced/Maximum |
|---|---|---|
| Desktop | ~2 ms | ~6 ms |
| Mobile | 20-100 ms | 60-300 ms |
A reference benchmark with pre-compiled binaries is provided as ancillary material (anc/README.md).¶
PoSME proves sequential memory execution, not elapsed time. An adversary with faster memory (lower latency) completes the same computation in less wall-clock time. The ASIC advantage is bounded by the ratio of the adversary's memory latency to the honest Prover's (see Section 5.5), but is nonzero. Applications requiring temporal guarantees MUST combine PoSME with an external time-binding mechanism such as hardware-attested timestamps.¶
Hardware-independent time-binding is impossible: deterministic computation produces identical output regardless of hardware speed, and self-reported timing is forgeable.¶
The seed MUST be externally fixed or derived from an unpredictable source. A Prover-controlled seed enables grinding for favorable arena initializations with reduced effective working sets.¶
We conjecture that O(1) verification under hash-only assumptions is not achievable for sequential pointer-chasing computations of the type PoSME specifies. The verification complexity in this document is O(Q * d^R * log N). O(log^2 K) verification is believed achievable via FRI/STARK-based commitment (requiring field arithmetic but no trusted setup) and is left as a future optimization. A formal impossibility proof for constant-size hash-only verification of PoSME remains open.¶
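The O(Q * d^R * log N) bound translates to concrete hash counts. A rough cost model (a sketch; constant factors and exact witness counts are omitted):

```python
import math

def verifier_hashes(Q: int, d: int, R: int, N: int) -> int:
    # Order-of-magnitude hash-evaluation count: Q challenged steps,
    # each opening d^R recursive read witnesses, each witness carrying
    # a log2(N)-deep Merkle path.
    return Q * d ** R * int(math.log2(N))

standard = verifier_hashes(Q=64, d=8, R=2, N=1 << 20)
maximum = verifier_hashes(Q=128, d=8, R=3, N=1 << 24)
```

The Standard profile needs on the order of 80 thousand hash evaluations while the Maximum profile needs about 1.6 million, consistent with the millisecond-scale verifier timings reported above.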
Verifiers SHOULD implement rate limiting and MUST reject proofs with parameters exceeding configured thresholds before allocating resources for verification.¶
The following problems remain open:¶
Machine-checked proofs. The TMTO bounds in Theorems 3-5 rely on pen-and-paper arguments. A machine-checked formalization (e.g., in Lean or Coq) of the spatial cascade lower bound and the checkpoint dominance result would strengthen confidence in the security claims.¶
Optimal eviction policy. Theorem 5 establishes that partial-arena checkpoints are dominated by full-arena checkpoints. However, the adversary's optimal online eviction policy (which blocks to retain in working memory during sequential execution) has not been formally characterized. The analysis in Theorem 3 assumes independent miss probabilities; correlated access patterns or adaptive eviction may yield tighter or looser bounds.¶
Succinct verification. Verification complexity remains $O(Q \cdot d^R \cdot \log N)$. Reducing this to $O(\log^2 K)$ via FRI/STARK-based commitment (requiring field arithmetic but no trusted setup) is left as future work.¶
Host-as-critical-path for small arenas. ASIC resistance at cache-resident arena sizes (where the arena fits within on-die SRAM) requires mechanisms where the host's computation gates the next prover step rather than supplying ancillary entropy. This is deferred to future work.¶
Spatial entanglement width. The current construction uses 2 spatial neighbors (w-1, w+1). Wider entanglement (e.g., w-2, w-1, w+1, w+2) would increase the cascade branching factor and tighten the TMTO bound, at the cost of additional hash evaluations per step. The optimal neighbor count has not been analyzed.¶
The effectiveness of intra-step bank collisions (Section 3.3.1)
depends on the accuracy of the force_bank_mapping logic. Memory
controllers typically use specific physical address bits for bank
selection (e.g., bits 13-16 on many DDR4/DDR5 platforms).¶
Prover implementations SHOULD use platform-specific knowledge or calibration loops to identify these bits. If the exact mapping is unknown, the Prover MAY use an XOR-sum of multiple candidate bit ranges to increase the probability of a physical bank conflict. Verifiers do not check physical mapping accuracy; they only check the logical consistency of the derived addresses according to the protocol parameters.¶
Provers MUST use the highest-resolution monotonic hardware counter
available to capture delta_t.¶
The resulting delta_t SHOULD NOT be normalized or filtered. Raw
cycle counts are required to preserve the stochastic jitter profile
arising from DRAM refresh cycles ($t_{REFW}$) and OS-level noise.¶
To ensure the arena computation is bottlenecked by DRAM latency rather than CPU cache hits, the arena size $N$ SHOULD be configured to exceed the Prover's L3 cache capacity. For the defined profiles, the arena sizes (64 MiB to 1 GiB) are specifically chosen to exceed the 16-96 MiB caches typical of commodity processors.¶
Provers MAY use cache-bypass instructions (e.g., MOVNTI on x86)
for arena writes to further enforce DRAM-bounded execution.¶
This document has no IANA actions.¶
The author thanks the CFRG for foundational work on memory-hard functions. The authors of Argon2 are acknowledged for the design principles that inspired PoSME's custom skip-link initialization.¶
This section is to be removed before publishing as an RFC.¶
Removed Compact profile pending further analysis of cache-resident-arena security.¶
Removed Jitter Entanglement section pending further analysis of host-as-critical-path constructions.¶
Promoted Address Uniformity from conjecture to Theorem 2, with full ROM proof.¶
Widened XOF output from 4 bytes to 8 bytes to eliminate modular bias for non-power-of-2 N.¶
Replaced hand-wavy "Trophic Cascade" TMTO claims with rigorous cascade-width analysis (Theorem 3), proving per-miss cost of Theta(rho/alpha).¶
Added Theorem 4 (Sequential Cascade Latency): spatial cascade adds to critical path.¶
Added Theorem 5 (Checkpoint Dominance): partial-arena checkpoints strictly suboptimal under spatial entanglement.¶
Replaced misleading alpha=0 TMTO table with honest comparison across meaningful alpha values.¶
Tightened ASIC resistance framing to a single physics-bounded latency floor.¶
Reconciled Related Work against arena initialization.¶
Rewrote Open Problems to reflect resolved and remaining items.¶