Formal Foundations

Research & Foundations

ActProof is built on formal mathematical foundations — not heuristics. Every metric, every threshold, every diagnostic is derived from a rigorous framework rooted in vector balancing theory, convergence analysis, and structural stability.

Core Algorithm

Retrograde Balancing Algorithm (RBA)

RBA maps system dynamics to vector balancing problems — measuring how efficiently a system converges toward its target, and what structural cost it pays along the way.

Core Problem Definition

Given a target vector p (the system's intended state) and a set of component vectors a₁, a₂, ..., aₙ (the actual parts of the system's behavior), the RBA operator measures the convergence cost τ — the minimum number of components needed to reach the target within a tolerance threshold ε.

τ = min{ k : ‖Σ(a₁..aₖ) − p‖ ≤ ε }
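
The greedy form of this operator can be sketched in a few lines. The following is an illustrative Python sketch only; the name `tau_greedy` and the list-of-vectors representation are assumptions, not the SemanticRBA API.

```python
import numpy as np

def tau_greedy(p, components, eps):
    """Greedy convergence cost: at each step add the unused component
    that brings the running sum closest to the target p.
    Returns the number of components used, or None if eps is never reached."""
    remaining = list(range(len(components)))
    total = np.zeros_like(p, dtype=float)
    for k in range(1, len(components) + 1):
        # pick the component minimizing the distance after addition
        best = min(remaining,
                   key=lambda i: np.linalg.norm(total + components[i] - p))
        total = total + components[best]
        remaining.remove(best)
        if np.linalg.norm(total - p) <= eps:
            return k
    return None
```

For example, with target `[1, 1]` and components `[1, 0]` and `[0, 1]`, the greedy path needs both components, so τ = 2.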

Balancing Strategies

The algorithm evaluates convergence through three independent strategies, then compares them to detect hidden structural properties:

τgreedy

Greedy Strategy

At each step, selects the component that minimizes current distance to the target. Used as the baseline — the "obvious" path a system would take.

τdeletion

Deletion Experiment

Systematic removal of components (largest-first, K1 correction) to identify bottlenecks. Based on combinatorial optimization principles.

τbeam

Beam Search

Explores multiple paths simultaneously to find the absolute shortest convergence path. Reveals optimal cost that may be invisible to greedy approaches.

The Convergence Gap

The central diagnostic metric. When greedy and optimal paths differ significantly, the system contains hidden structural properties — analogous to Move 37 in Go, where the "surprising" move was simply the point where greedy evaluation could no longer capture the board's true state.

Δτ = τgreedy − min(τbeam, τdeletion)

Δτ ≈ 0: System is predictable — the greedy path is optimal.
Δτ ≫ 0: System has hidden shortcuts or resilience. Structural complexity detected.
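
Computing the gap requires at least one stronger strategy alongside greedy. A minimal beam-search sketch in Python (hedged: `tau_beam`, `convergence_gap`, and the default beam `width` are illustrative choices, not the production algorithm):

```python
import numpy as np

def tau_beam(p, components, eps, width=8):
    """Beam search convergence cost: keep the `width` most promising
    partial sums at each depth; return the smallest k reaching eps."""
    beam = [(frozenset(), np.zeros_like(p, dtype=float))]
    for k in range(1, len(components) + 1):
        candidates = []
        for used, total in beam:
            for i in range(len(components)):
                if i not in used:
                    candidates.append((used | {i}, total + components[i]))
        # deduplicate by index set, keep the `width` closest to the target
        seen, next_beam = set(), []
        for used, total in sorted(candidates,
                                  key=lambda c: np.linalg.norm(c[1] - p)):
            if used not in seen:
                seen.add(used)
                next_beam.append((used, total))
            if len(next_beam) == width:
                break
        beam = next_beam
        if beam and np.linalg.norm(beam[0][1] - p) <= eps:
            return k
    return None

def convergence_gap(t_greedy, t_beam, t_deletion):
    """delta_tau = tau_greedy - min(tau_beam, tau_deletion)."""
    return t_greedy - min(t_beam, t_deletion)
```

With target `[2, 0]` and components `[1, 0]`, `[1, 0]`, `[2, 0]`, beam search finds the one-component path immediately, while a strategy that needed three components would yield a gap of 2.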

Implementation Stages

1

Mathematical Foundation

SemanticRBA: Greedy, Deletion, and Beam Search implementations with deterministic validation tests.

2

Vector Stability Score (VSS)

HRZ Calculator (Hazard/Rigidity/Z-score) measuring the smoothness of convergence. Chaotic VSS = instability.

3

Resilience Analysis

Counterfactual probes: deletion tests ("what if we lose a component?"), constraint tightening, boundary analysis.

4

Classification & Playbooks

RegimeClassifier maps metrics to system states. PlaybookEngine selects appropriate interventions per regime.

5

System Integration

ActProofDiagnosticEngine with PseudoNumberMapper (logs → vectors) and AutoCalibrator (adaptive thresholds).

Formal Model

Unified Diagnostic Model (UDM)

The mathematical layer from which all ActProof systems derive their metric definitions. UDM defines the formal spaces and operators used across CCC, ActProof OS, and every diagnostic module.

S

State Space

The set of all possible system states. Each observation at time t maps to a point in S. Trajectory through S reveals structural evolution.

M

Model Space

The space of expected behaviors — what the system "should" do given its design parameters. Divergence between S and M generates Control Cost.

C

Compensation Space

Maps inter-module dependencies. When component A masks the failure of component B, the compensation vector lands in C. High activity in C = high MCI.

Φ

Accumulator Memory

Global memory of structural debt. Φ integrates Control Cost over time. When Φ exceeds the critical threshold Φcritical, the system enters flashover.

Metric Derivation

Metric             | Formal Definition | Diagnostic Meaning
CC (Control Cost)  | ‖s(t) − m(t)‖     | Effort spent steering vs. producing
T (Latent Tension) | ∇CC(t) · ΔS       | Hidden stress gradient — rising T = silent accumulation
MCI                | ‖C(t)‖ / ‖S(t)‖   | Ratio of compensatory to productive activity
Φ                  | Σ CC(0..t)        | Cumulative structural debt over time
Flashover          | Φ(t) ≥ Φcritical  | System can no longer sustain its own regulation cost

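
These derivations translate directly into code. A sketch under the stated definitions (function names and the aligned-trajectory representation are assumptions; Latent Tension is omitted because its gradient form needs a discretization choice):

```python
import numpy as np

def udm_metrics(states, models, compensations):
    """Per-step UDM metrics from aligned trajectories in S, M, C."""
    # Control Cost: CC(t) = ||s(t) - m(t)||
    cc = [float(np.linalg.norm(s - m)) for s, m in zip(states, models)]
    # Accumulator memory: Phi(t) = sum of CC over 0..t
    phi = np.cumsum(cc)
    # MCI: ratio of compensatory to productive activity, ||C(t)|| / ||S(t)||
    mci = [float(np.linalg.norm(c)) / max(float(np.linalg.norm(s)), 1e-12)
           for c, s in zip(compensations, states)]
    return cc, mci, phi

def flashover(phi, phi_critical):
    """First index t* where Phi(t) >= Phi_critical, else None."""
    for t, value in enumerate(phi):
        if value >= phi_critical:
            return t
    return None
```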
Classification

Regime Classification

RBA classifies every system into one of four diagnostic regimes based on convergence cost, gap ratio, and stability score. Each regime has a corresponding playbook.

STEERABLE

Low cost, low gap, stable VSS. The system responds efficiently to steering. Optimal operating state.

CC: low · Δτ ≈ 0 · VSS: stable

Playbook: Monitor. No intervention needed.

RESILIENT

Large convergence gap — the system "finds a way" better than greedy. Hidden shortcuts exist. Structurally complex but functional.

CC: moderate · Δτ ≫ 0 · VSS: stable

Playbook: Investigate hidden paths. Document structural dependencies.

RIGID

High cost, low gap. The system is stubborn — it reaches the target but at excessive cost. Structural debt accumulates. Pre-flashover warning zone.

CC: high · Δτ ≈ 0 · Φ: rising

Playbook: Reduce control cost. Adjust parameters via Control Knobs.

CRITICAL

Unstable VSS, high HRZ markers, or Φ approaching Φcritical. System integrity is compromised. Flashover imminent without intervention.

CC: very high · VSS: chaotic · Φ → Φcrit

Playbook: Immediate intervention. Circuit breaker. Emergency parameter reset.
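
The four-regime logic above can be sketched as a threshold classifier. All thresholds here are illustrative placeholders, not the calibrated values used by RegimeClassifier:

```python
def classify_regime(cc, gap, vss_stable, phi, phi_critical):
    """Map RBA/UDM metrics to one of the four diagnostic regimes.
    cc: control cost; gap: convergence gap (delta-tau);
    vss_stable: whether the stability score is non-chaotic;
    phi / phi_critical: accumulated structural debt vs. critical level.
    Thresholds (0.5, 1, 0.9) are illustrative, not calibrated."""
    if not vss_stable or phi >= 0.9 * phi_critical:
        return "CRITICAL"    # chaotic VSS or Phi near Phi_critical
    if cc >= 0.5 and gap <= 1:
        return "RIGID"       # high cost, low gap: stubborn convergence
    if gap > 1:
        return "RESILIENT"   # large gap: hidden shortcuts exist
    return "STEERABLE"       # low cost, low gap, stable VSS
```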

Detection

Flashover Detection Pipeline

The Flashover pipeline compares system behavior under shallow and deep analysis to identify the critical point t* where structural integrity collapses.


Parse

Extract metrics from system logs at two analysis depths: shallow (fast evaluation) and deep (thorough simulation).


Detect

Calculate Δstability (deviation between shallow and deep evaluations) and Level Gap for each observation point.

Locate t*

Identify the Flashover Point t* — the first observation where significant deviation appears between evaluation depths.


Report

Generate structured diagnostics report with t* location, severity amplitude, and regime classification.

Key Flashover Metrics

t*

Flashover Point

The first moment in a system's trajectory where structural collapse becomes detectable under deeper analysis.

Δstab

Delta Stability

The difference between shallow and deep evaluation at each point. Large Δstab = the system "looks fine" only when you don't look closely.

LGap

Level Gap

Score difference between analysis levels. Maps to the structural divergence between perceived and actual system state.
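
Locating t* from the two evaluation series reduces to a first-exceedance scan. A sketch (the threshold value and the series representation are assumptions):

```python
def locate_flashover(shallow, deep, threshold):
    """Flashover Point t*: first index where the deviation between
    shallow and deep evaluation (delta-stability) exceeds the
    threshold; returns None if the trajectory never diverges."""
    for t, (s, d) in enumerate(zip(shallow, deep)):
        if abs(d - s) > threshold:
            return t
    return None
```

A system scoring 0.9 under shallow evaluation at every step, but dropping to 0.3 under deep evaluation at step 2, yields t* = 2.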

Example Analysis

Example: LLM Hallucination Chain

A multi-step LLM pipeline processing a factual query. ActProof metrics reveal structural degradation before the final hallucinated output appears.

Setup

Goal: correct factual answer. Pipeline: retrieval → reasoning → generation (30 steps). Standard metrics (BLEU, perplexity) show normal output until step 28. ActProof structural metrics tell a different story.

Metric Timeline

Step  | CC   | T    | MCI  | Φ         | Regime
1–10  | 0.12 | 0.03 | 0.05 | 1.2       | STEERABLE
11–18 | 0.31 | 0.14 | 0.22 | 3.8       | RESILIENT
19–23 | 0.58 | 0.41 | 0.38 | 6.9       | RIGID
24–27 | 0.82 | 0.67 | 0.61 | 9.4       | CRITICAL
28–30 | 0.91 | 0.73 | 0.72 | Φ > Φcrit | ⚡ FLASHOVER

Diagnostic Insight

Standard output metrics (perplexity, token confidence) stayed within normal bounds until step 28. ActProof's Control Cost crossed the RIGID threshold at step 19 — nine steps before the hallucination became visible. MCI revealed that the reasoning module was silently compensating for retrieval failures since step 15. The system entered CRITICAL regime at step 24, giving a 4-step intervention window before flashover.

Intervention window: tRIGID(19) → thallucination(28) = 9 steps early warning

Limitations

This is a simplified illustration based on internal testing. ActProof does not claim to prevent all failure modes. Detection depends on structural cost being measurable via the available telemetry signals. Systems with purely stochastic failures (hardware faults, external attacks) are outside the scope of structural diagnostics.

Protocol v0.1

AioP — Structural Foundations

AioP (ActProof-over-Protocol) extends the diagnostic framework into executable, verifiable commerce between autonomous agents. Every execution produces a cryptographic proof.

Deterministic Execution

Contracts are versioned and immutable. Sessions pin a specific contract hash. Actions are validated against JSON Schema + constraints. Same input always produces the same execution_id — proven by canonical JSON serialization and HMAC-SHA256 server signatures.
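
The determinism claim rests on canonical serialization plus a keyed hash, both available in the Python standard library. A minimal sketch (`canonical_json` and `execution_id` are illustrative names, not the AioP API):

```python
import hashlib
import hmac
import json

def canonical_json(obj):
    """Deterministic serialization: sorted keys, fixed separators,
    so semantically equal inputs always yield identical bytes."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def execution_id(action, server_key):
    """Same input always produces the same id: HMAC-SHA256 of the
    canonical body under the server-held key."""
    return hmac.new(server_key, canonical_json(action), hashlib.sha256).hexdigest()
```

Because keys are sorted before hashing, `{"b": 1, "a": 2}` and `{"a": 2, "b": 1}` produce the same execution_id.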

RBA Integration

AioP execution logs are direct input for RBA analysis. The contract defines the target environment. The agent's execution path maps to component vectors. Convergence Gap between the agent's actual path and the contract's optimal path reveals structural tension in agent behavior.

Proof Chain

Every execution generates a proof artifact containing: the canonical body, execution parameters, server signature, and proof hash. The proof is independently verifiable — any party can reconstruct the canonical form and verify the HMAC signature.

Counterfactual Testing

AioP proofs enable RBA counterfactual probes: replay a session against modified constraints to measure system resilience. "What if the contract changed mid-session?" reveals how robust the agent's decision-making is to environmental shifts.

Methodology

From entropy and information to the geometry of deviations

ActProof as a methodology of structural description: not a system for "error detection," but for the honest transition from trace to claim.

Beyond corrective analysis

Classical analytical approaches often rely on a hidden assumption: that deviation from an expected pattern is primarily an error, a disturbance, or noise. In this framing, the goal of analysis becomes detecting anomalies and restoring the system to its "correct" state. This approach works in many operational areas, but has a serious epistemic limitation: it assumes the reference model is appropriate, and that everything deviating from it is secondary.

ActProof proposes a shift. Instead of treating deviation as an error relative to a pre-assumed norm, it treats deviation as a potentially significant trace of process structure. This means moving from corrective analysis to structural analysis: not only asking whether a system deviates from expectation, but also what the form of that deviation tells us about the geometry of the system, its transition history, and the limits of the adopted description model.

Models: operationally true, not ontologically absolute

One key starting point is Berry's insight that physical models are operationally true but should not be identified with reality itself. The world of science consists not of one absolute description, but of a hierarchy of modeling levels: rays, waves, fields, quanta, asymptotic approximations, effective descriptions. Each level can be correct within a defined scope, but none constitutes the ultimate, direct "thing itself."

This has fundamental significance for ActProof. If the model is not reality, then system analysis should not consist of reflexively measuring compliance with a supposedly unique correct pattern. Instead, we should ask: at what level of description is the model operationally true, what relations does it genuinely preserve, and which system properties does it reveal versus which does it conceal.

Truth is understood here not ontologically — as fully capturing "how things really are" — but epistemically: as honestly maintaining claims that can be justified on the basis of data, relations, and transition traces. Such truth is local, conditional, and procedural.

Cognition as description reduction

If a model is always just one of the possible levels of capturing the world, then cognition must be understood as a process of description reduction. Neither humans, scientific theories, nor technical systems operate on the full, unreduced microstate of reality. Instead, they build compressed representations: macrostates, classes, categories, indicators, relations, and narratives.

This reduction is not an error of cognition but its necessary condition. Without it, neither prediction, nor action, nor communication would be possible. A full microdescription of any complex system would be so rich as to be practically useless. However, every such operation has a cost: along with the transition from detail to overview, we lose the history of local differences, trajectory subtleties, intermediate state ambiguities, and alternative interpretation paths.

Entropy: the cost of epistemic compression

In statistical physics, entropy describes the number of possible microstates compatible with the same macrodescription. In Shannon's information theory, entropy measures uncertainty — the average cost of identifying which state among possible ones actually occurred. In both cases, it concerns the same deep structure: the multiplicity of possibilities hidden under a single shortened description.

From this perspective, entropy can be understood epistemically — as the cost of description reduction. Every macrostate, every diagnosis, every report and narrative hides beneath it a wealth of detailed states. The more we compress reality into a single operational image, the greater the risk that we lose structural features that don't fit within the adopted representation.

ActProof does not eliminate the cost of compression — it makes that cost explicit.

Non-holonomicity: deviation as memory of the path

Non-holonomic systems and geometric phase phenomena show that the end state is not a complete description of the system's evolution. It is possible for certain local parameters to "return" to the starting point, while the system as a whole ends up in a different position or relational state. The outcome is determined not solely by the end state, but also by the path the system traveled.

In the ActProof context, an observed deviation in data does not have to mean a simple error or rule violation. It may be a trace of the system having traversed a trajectory that cannot be reduced to a simple comparison of "expected state vs. current state." Such a trace carries information about the geometry of the process: its history, transition sequence, and accumulation of tension or control cost.

Deviation is not an error relative to a norm — it is the memory of a transition.

Caustics: microstructure as signal, not noise

A caustic doesn't arise because new information is "added" to the system — it arises because subtle local structure is revealed through an appropriate projection. Minor irregularities, unnoticeable in direct observation, can reveal a distinct global pattern after passing through the right amplification geometry.

In analytical practice, minor local differences are often treated as noise to be smoothed out. From the ActProof perspective, such micro-irregularities may carry structural information. They need not be disturbances — they may constitute the seed of a pattern that reveals itself only under the right data mapping. Normalization and standardization should therefore be applied with caution: smoothing data may remove precisely those subtleties that carry information about system geometry.

Optimal epistemic compression

The deeper we go into formalism, the easier it is to lose sight of the phenomenon itself. An extremely detailed description may be correct yet practically unusable for understanding and action. Conversely, a description that is too shallow reduces complexity to simple labels, losing transition history and dependency structure.

This tension leads to a concept central to ActProof methodology: the optimal level of epistemic compression. A level of description that preserves the justification path and essential process geometry while remaining operationally useful. ActProof's task is to find and maintain this level — neither total granularity nor total simplification.

ActProof as the epistemic layer

From these considerations, ActProof is understood as an epistemic layer placed between the raw event backend and the narrative frontend. Its function is not just audit or anomaly classification — it is the control of transition from trace to claim. It preserves the relationship between events and subsequent interpretations, maintains path memory (not just end-state images), allows marking the epistemic status of claims (trace, inference, aggregate, operational hypothesis), and limits the risk of excessive narrative smoothing that could hide compression cost.

ActProof does not eliminate the cost of description. ActProof ensures that cost is not hidden.

Formalization

Directions of Formalization

Three pillars defining the next stage of ActProof's formal development: control efficiency, interoperable proof format, and second-order diagnostics.

Pillar I — Control Efficiency (η)

Not every costly control intervention is pathological. The critical question is whether control achieves its purpose — reducing tension — or merely masks growing instability. Control Efficiency measures the ratio of tension reduction to the cost of intervention.

ηc = max(0, Tbefore − Tafter) / CC
High η: Intervention effectively reduces tension at low cost. Useful work — analogous to efficient steering.
Low η: High cost sustains the status quo without resolving the source. Energy dissipated as heat. Pathological compensation.
η ≈ 0: Control fails to help; when the unclamped ratio (Tbefore − Tafter) / CC is negative, it actively deepens tension. Tightening a bolt that is already cracking. Highest-level warning.
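
As a sketch, ηc is a one-line computation. The guard against non-positive CC and the function name are added assumptions:

```python
def control_efficiency(t_before, t_after, cc):
    """eta_c = max(0, T_before - T_after) / CC.
    Tension reduction achieved per unit of control cost; clamped at 0
    when the intervention fails to reduce tension at all."""
    if cc <= 0:
        raise ValueError("control cost must be positive")
    return max(0.0, t_before - t_after) / cc
```

An intervention that drops tension from 0.8 to 0.2 at cost 0.3 scores η = 2.0; one that raises tension scores 0 under the clamped form.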

Pillar II — Canonical Proof Format

Agent A does not need to trust Agent B if B delivers a cryptographic proof of its action in a standardized, minimal format. AioP aims to become an interoperable layer for proof-carrying execution — not a closed feature, but an open protocol that any framework can import, export, and verify.

Minimal proof record:
contract_hash · pre_state_hash · action (canonical) · post_state_hash · signature

Anyone possessing this record and the initial contract hash can independently verify that the action was permitted and that the end state follows from applying the action to the start state — without understanding the complexity of the agent that produced it.

↗ Open-source reference implementation on GitHub

Pillar III — Masking Coefficient (M)

In systems aware of being measured — teams, organizations, sociotechnical processes — measurement itself changes behavior. The critical diagnostic question becomes not just "what is the tension level?" but "what is the cost the system pays to appear stable?"

M = f(delays, micro-corrections, declaration inconsistencies)

Low T, Low M: System responds fluidly. Communication is direct. Corrections are rare. Healthy state.
Medium T, Low M: System knows about problems and works on them openly. Healthy tension — acknowledged and managed.
High T, High M: System invests energy in appearing stable. Double bookkeeping. Everyone pretends things are fine at the cost of growing internal contradiction. Flashover precursor.
Critical (Masking Loop): The effort required to preserve the appearance of stability begins to generate additional latent tension, creating a self-reinforcing concealment spiral. A candidate precursor to flashover in observer-aware systems. Research hypothesis: one candidate signal is when masking pressure grows faster than latent tension itself.

The more a system knows it is being measured, the less you can trust any single visible indicator — and the more you must observe relationships, trajectories, and side effects of adaptation. The observer effect does not invalidate measurement — it becomes part of the phenomenon under study.

Foundations

Theoretical Foundations

The mathematical concepts underlying ActProof draw from several established areas of research.

Vector Balancing

The fundamental problem of representing a target as a combination of given components. RBA extends classical vector balancing with multi-strategy convergence analysis to detect structural properties invisible to single-path evaluation.

Combinatorial Optimization

K1 Correction (Łuczak)

The Deletion Experiment strategy is based on the "largest-first" removal principle from combinatorial analysis. By systematically removing the highest-magnitude components, RBA identifies structural bottlenecks efficiently.

Combinatorial Analysis

Steinitz Lemma

The theoretical guarantee that vector rearrangements converge within bounded error. Provides the mathematical foundation for the claim that convergence cost τ is a meaningful structural metric — not an artifact of ordering.
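
For reference, a standard statement of the lemma (for vectors of norm at most 1 summing to zero, the prefix-sum bound d is the classical result; this is the textbook form, not an ActProof-specific statement):

```latex
% Steinitz lemma: x_1, ..., x_n in R^d with ||x_i|| <= 1 and
% sum_i x_i = 0 admit a reordering with uniformly bounded prefix sums.
\|x_i\| \le 1,\quad \sum_{i=1}^{n} x_i = 0
\;\Longrightarrow\;
\exists\, \pi \in S_n :\ \max_{1 \le k \le n}
\Bigl\| \sum_{i=1}^{k} x_{\pi(i)} \Bigr\| \le d
```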

Functional Analysis

HMAC-SHA256 Proof Chain

The cryptographic layer ensuring proof integrity. Each execution proof is signed with a server-held key using HMAC-SHA256 over the canonical JSON body. Independent verification requires only the public proof artifact.

Applied Cryptography

Event Sourcing & CQRS

ActProof OS uses Event Sourcing as its structural foundation. Every operation is an immutable event in an append-only log. This enables deterministic rebuild and state verification (Stage 1F), which feeds directly into the Flashover detection pipeline.

Distributed Systems Architecture

Positional Theory

The framework's approach to evaluating "hidden tension" draws from positional analysis in strategic games. A system can appear balanced while structural pressure accumulates — visible only through cost-based metrics, not output metrics.

Game Theory & Strategy

Explore the production systems

See how these research foundations translate into deployed, operational infrastructure.