CBR Core Theorem Paper | Canonical Law Form and a Testable Accessibility-Signature Theorem
Copyright Page
Constraint-Based Realization | Canonical Law Form and a Testable Accessibility-Signature Theorem
Copyright © Robert Duran IV. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the copyright holder, except for brief quotations used in scholarly review, criticism, or citation consistent with applicable law.
This volume is a work of theoretical research and formal argument. It advances a proposed framework in quantum foundations and should be read accordingly. Statements labeled as axioms, assumptions, propositions, theorems, conjectures, interpretive claims, or empirical hypotheses carry different evidential and logical status, which is specified within the text. No claim should be read more strongly than the status assigned to it.
The author has attempted to distinguish, throughout, between formal results, conditional arguments, heuristic remarks, and open problems. Readers are encouraged to evaluate the framework on the basis of explicit assumptions, stated definitions, proof status, and empirical consequences rather than on rhetoric, pedigree, or interpretive preference.
First edition.
Printed in the United States of America.
For permissions, inquiries, or scholarly correspondence, contact:
505-520-7554
Abstract
Standard quantum dynamics determines the evolution of states, amplitudes, and correlations with high precision, but does not by itself specify the law by which a single realized outcome obtains in an individual measurement context. Constraint-Based Realization is formulated here as a context-indexed realization theory in which each context C determines an admissible class 𝒜(C) of realization-compatible channels and a realization functional ℛ_C, with the realized channel selected by the canonical rule Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ). Under the stated axioms, the canonical realization map is shown to be uniquely fixed up to operational equivalence within the restricted admissibility class, so that realization is not determined by representational convention, arbitrary labeling, or post hoc weighting. The theory’s empirical burden is then made explicit through an operational accessibility parameter η governing the physically relevant accessibility of outcome-defining records in a designated protocol family. If accessibility contributes nontrivially to realization law, then realization-sensitive observables cannot remain globally within the same baseline smooth-response class across all accessibility regimes. If the canonical protocol family exhibits only baseline-class behavior across the physically relevant η-domain, then CBR in its present canonical form is false. The result is a canonically specified, mathematically constrained, and experimentally vulnerable realization-law theory candidate.
1. Introduction
1.1 The unresolved target
The present paper addresses a narrow but foundational question in quantum theory: what, if anything, constitutes the physical law by which one outcome structure is realized in an individual measurement context? This question must be stated with care. Standard quantum mechanics supplies a formal account of state evolution, whether in closed form through unitary propagation or in open-system form through effective dynamical maps. It also supports a rich theory of correlations, decoherence, environmental entanglement, and instrument-level state updating. What it does not by itself transparently supply is a law of single realized outcome selection. The unresolved target of this paper lies precisely there.
It is therefore necessary to distinguish three issues that are often conflated. First, there is the evolution of the quantum state or reduced state under the ordinary dynamical rules of the theory. Second, there is the formation of measurement-correlated records, including pointer-state stabilization, branching structure in decohering descriptions, or effective suppression of interference in reduced descriptions. Third, there is the further question of why, in a given physical circumstance, one outcome structure is realized rather than the mere persistence of a formal superposed or correlated description across alternatives. The first two topics are indispensable to any realistic account of measurement. Neither of them, however, automatically settles the third.
This distinction is not semantic. Unitary or open-system evolution specifies how amplitudes, phases, and correlations evolve. Decoherence explains why interference may become effectively inaccessible in reduced descriptions and why certain bases become dynamically stable for record formation. Yet decoherence, taken by itself, does not transparently furnish a physical rule stating why a single outcome obtains rather than merely why a reduced observer-relative or subsystem-relative description becomes quasiclassical. That gap may be interpreted away in some frameworks, absorbed into branching ontology in others, or treated as requiring an additional physical principle. The present paper proceeds only from the claim that if one seeks a realization law, then that law must be stated in explicit physical and mathematical form.
Accordingly, this paper does not enter the measurement problem through generalized interpretive discourse. Its concern is not to survey philosophies of quantum theory, compare metaphysical packages, or restate familiar interpretive positions in new language. Its concern is narrower and more demanding. It asks whether outcome realization can be formulated as a constrained law-selection problem, whether that law can be canonically specified rather than ad hoc, and whether it can be rendered vulnerable to empirical failure. Constraint-Based Realization is introduced here not as a generic interpretive stance, but as a candidate realization-law framework whose adequacy depends on formal admissibility, restricted uniqueness, and operational consequence.
1.2 What this paper does
This paper does not widen the CBR program. It fixes its minimal canon. Its task is to state one realization-law object sharply enough that it can be evaluated as a theory candidate rather than as a developing framework. Accordingly, the paper has four exact aims. It fixes a canonical law form, restricts the admissible realization class, defines an operational accessibility variable, and derives a finite empirical burden for the resulting theory. These aims are not independent. Together they convert CBR from a broadened research architecture into a compact theorem-bearing proposal.
First, the paper canonizes the law form. For each measurement context C, it defines an admissible class 𝒜(C) of realization-compatible channels and a realization functional ℛ_C, with the selected channel given by the rule Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ). The point of this construction is not maximal generality. It is to eliminate residual plasticity at the level of the realization law itself. A theory candidate cannot remain indefinitely permissive about its central selection rule and still claim formal seriousness.
Second, the paper restricts admissibility. Not every formally definable channel is permitted to count as a realization law. The admissible class is narrowed by excluding channels whose apparent selectivity depends only on representational artifacts, arbitrary labeling, accessibility-insensitive degeneracy, or hidden post hoc weighting. The resulting uniqueness claim is intentionally restricted rather than universal: the paper aims to show that within the canonical admissibility class, realization is fixed up to operational equivalence and not by descriptive accident.
Third, the paper operationalizes accessibility. If record structure is physically relevant to realization, then accessibility must be defined as a protocol-level quantity rather than left as a conceptual placeholder. The paper therefore introduces η as an operational accessibility parameter governing the physically relevant availability of outcome-defining records. This is the bridge between the formal law and the empirical domain in which the law must either become visible or fail.
Fourth, the paper derives an explicit empirical burden. It identifies a canonical protocol family in which accessibility can become realization-relevant in a nontrivial way and proves a restricted accessibility-signature claim for that domain. It then states the corresponding public failure condition: if the canonical protocol family exhibits only baseline-class behavior across the physically relevant accessibility regime, then CBR in its present canonical form is false. The paper therefore does not end with a sharpened interpretation. It ends with a finite experimental liability.
These four results are the whole task of the paper. Canonical law form without admissibility restriction remains underdetermined. Admissibility restriction without operational accessibility remains inert. Accessibility without a signature theorem remains interpretive. A signature claim without a failure condition remains incomplete. The paper is constructed to close exactly that sequence and no more.
1.3 What this paper does not claim
The strength of a realization-law proposal depends not only on what it asserts, but on what it refuses to assert without proof. The present paper therefore makes its non-claims explicit and keeps them narrow. Its aim is not to present CBR as a finished universal completion of quantum foundations, but to isolate one canonical law form, one admissibility structure, one operational control variable, and one finite empirical burden. Everything beyond that is either outside the scope of the paper or left deliberately open.
First, this paper does not claim universal closure over all possible realization-law alternatives. The results established here are internal to the canonical CBR form under the stated axioms and regularity assumptions. They do not show that every logically conceivable realization-law framework outside this structure is impossible, nor that every rival admissibility architecture has been eliminated in full generality. What is claimed is narrower and stronger: within the declared scope, the canonical CBR law is non-arbitrary, structurally constrained, and sufficiently rigid to incur a genuine empirical burden.
Second, this paper does not claim a wholly premise-independent derivation of the full probabilistic structure of quantum theory. In particular, it does not claim final universal Born-neutrality closure. The present results discipline where probabilistic burden may enter and exclude overt insertion of weights by fiat at the level of the law form, but they do not yet establish that every appearance of Born-type structure has been derived from wholly independent principles. That deeper theorem burden remains separate. It is not denied, and it is not concealed.
Third, this paper does not claim broad empirical deviation from standard quantum mechanics across ordinary measurement settings. The empirical burden developed here is restricted to a designated accessibility-sensitive protocol family. That restriction is intentional. A law candidate becomes scientifically legible by exposing one finite test domain first, not by claiming ubiquitous visible departure everywhere. The present paper therefore does not argue that every measurement context should exhibit realization-sensitive anomaly. It argues only that if accessibility enters realization law nontrivially, then there must exist a designated protocol regime in which baseline-class global equivalence fails.
Fourth, this paper does not claim to settle every interpretive question surrounding quantum measurement. It does not attempt to dissolve the problem by metaphysical stipulation, nor does it attempt to refute all alternatives by interpretive comparison alone. Its concern is narrower: whether one can write down a canonically specified realization law, restrict it enough to make it non-arbitrary, connect it to an operational accessibility variable, and state a finite condition under which it would fail. The paper should therefore be read as a law-candidate compression of the CBR program, not as a universal manifesto about all of quantum foundations.
These non-claims do not weaken the present result. They are what keep it scientifically credible. A theory candidate does not become stronger by claiming more than it has earned. It becomes stronger by forcing one exact object into the open, stating what has been fixed, stating what has not been fixed, and exposing the fixed part to failure without pretending that the unfixed part has already been solved. That is the standard this paper adopts.
1.4 Why this paper is necessary
The present paper is necessary because the earlier stages of the CBR program, however substantial, do not by themselves yield a single theorem-bearing object that can be judged as a law candidate in its own right. A research program may contain formal architecture, narrowing arguments, comparative pressure, and empirical ambition while still remaining too distributed to count as one canonically specified theory. That is the gap this paper is written to close. Its necessity lies not in expanding the program, but in compressing it to the point where the central law form, the admissibility structure, the operational control variable, and the empirical failure condition all stand together in one place.
Without that compression, the framework remains vulnerable to a familiar objection: that it may be serious, suggestive, and increasingly disciplined, but still too permissive at the point where a theory must become exact. A realization-law proposal does not become scientifically legible merely by arguing that the measurement problem is real, or that accessibility may matter, or that some later experiment might discriminate among completion strategies. It becomes legible when the following are fixed simultaneously: the law form, the admissible class, the operational variable through which empirical burden is incurred, and the condition under which failure of the relevant signature counts against the theory itself. The present paper is necessary because none of those pieces can carry the full burden alone.
More specifically, the earlier work established four prerequisites that now require compression rather than repetition. It established that realization must be distinguished from ordinary evolution and record registration. It established that admissibility cannot remain indefinite if the framework is to be more than a structured redescription. It established that accessibility, if it is physically relevant, must become operational rather than merely conceptual. And it established that a law candidate must incur an empirical burden rather than remain indefinitely sheltered inside interpretive language. What had not yet been produced was a compact canonical statement in which those four burdens are bound together tightly enough that a skeptic can no longer ask what the actual theory is. This paper is that statement.
Its necessity is therefore methodological as much as formal. A theory candidate must eventually move from developmental architecture to canonical exposure. At that point, the relevant question is no longer whether the surrounding program is interesting or ambitious. The relevant question is whether one exact object can be written down such that it is constrained enough to be judged, narrow enough to be challenged, and vulnerable enough to fail. The present paper is necessary because it is the first point at which CBR is required to meet that standard directly.
2. Conceptual Target and Formal Setting
2.1 Evolution, registration, realization
The formal clarity of the present framework depends on maintaining a strict distinction among three layers of physical description: evolution, registration, and realization. These layers are related, but they are not identical, and any realization-law proposal becomes unclear if they are allowed to collapse into one another.
Evolution refers to the ordinary dynamical behavior of quantum states or open-system state descriptions. In the closed-system setting, this is represented by unitary propagation on a Hilbert space ℋ. In more general settings, one may consider effective dynamical maps, reduced-state evolution, or CPTP descriptions appropriate to instrument-level or environment-coupled contexts. The essential point is that evolution concerns how the formal state description changes in time under the accepted dynamical rules of the theory. It specifies amplitude transport, phase evolution, entangling interaction, and reduced-state transformation. It does not, by that fact alone, specify a law of realized outcome selection.
Registration refers to the physical formation of records. This includes the emergence of stable pointer states, the establishment of durable correlations between a system and a measuring apparatus, the propagation of information into environmental degrees of freedom, and the formation of structures that support later retrieval or effective classical description. Registration is a physical achievement of a measurement interaction: it is what makes there be something record-like in the world. It may suppress interference in accessible reduced descriptions and may support robust macroscopic observables. But registration, even when fully developed, does not yet answer the further question of which outcome structure is physically realized in a single case.
Realization, as used in this paper, denotes that further physical act or law-governed selection by which one outcome structure obtains. It is not identified with mere observation, with linguistic declaration, or with epistemic update. Nor is it defined as a synonym for decoherence or branching. It refers to the physically relevant selection of an outcome channel within a measurement context once the admissible structures of evolution and registration have been established. In the CBR framework, realization is the level at which a law must operate if the measurement problem is to be treated as a question of outcome selection rather than merely of state description.
This threefold distinction is not introduced to multiply entities unnecessarily, but to prevent equivocation. If evolution is taken to do all explanatory work, then one must explain how ordinary dynamical propagation alone yields single realized outcomes. If registration is taken to do all explanatory work, then one must explain why record formation alone counts as selection rather than merely as correlation structure. If neither explanation is accepted as complete, then a distinct realization level becomes unavoidable for the purposes of the present framework. CBR is therefore located explicitly at the third level. It presupposes the ordinary formal apparatus of evolution and the physical relevance of registration, but it does not reduce realization to either of them.
This distinction also disciplines the scope of the theory. CBR is not a replacement for quantum dynamics, and it is not a competing theory of decoherence. It is a proposal concerning what additional law structure is required if one insists that single-outcome realization is a physical question not exhausted by evolution and registration alone. The formal setting introduced below is constructed specifically to capture that restricted but foundational target.
2.2 Mathematical setting
Let ℋ denote the Hilbert space associated with the total physical degrees of freedom relevant to the measurement context under consideration, and let 𝒟(ℋ) denote the set of density operators on ℋ. A measurement context is denoted by C. The symbol C is intended to represent more than an abstract observable label. It includes, insofar as relevant to realization, the physically specified measurement arrangement: the instrument structure, the interaction architecture, the record-bearing degrees of freedom, the timing relations relevant to registration and retrieval, and the accessibility properties of any outcome-defining information carriers.
The framework begins with the assumption that, for each physically well-defined context C, there exists a nonempty class 𝒜(C) of admissible realization-compatible channels. Each Φ ∈ 𝒜(C) is understood as a candidate channel describing a physically permissible realization outcome structure relative to that context. The present paper does not equate admissibility with arbitrary formal constructibility. Admissibility is restricted by physical criteria to be developed later, including non-arbitrariness, representational invariance, record-structural relevance, and accessibility coherence. Thus 𝒜(C) is not the set of all mathematically definable maps on 𝒟(ℋ), but the set of candidate realization channels surviving the specified physical constraints.
A realization functional ℛ_C is then defined on 𝒜(C), with codomain in ℝ or in an ordered subset thereof sufficient to support comparison and minimization. Its role is to assign to each admissible candidate channel Φ a realization burden measure reflecting the extent to which the channel satisfies or violates the canonical physical constraints encoded by the framework. In generic schematic form one may write ℛ_C : 𝒜(C) → ℝ, with lower values corresponding to greater canonical admissibility. The selected realization channel is then given by the rule
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
This formula is the schematic heart of canonical CBR. It states that realization, within a given context, is selected not by arbitrary labeling, stipulation, or hidden reintroduction of weights, but by constrained extremization over a physically admissible class. The minimization need not imply naive energetic minimality or variational structure in the ordinary dynamical sense. Rather, it encodes the claim that realization is fixed by the most constraint-satisfying member of the admissible class once the context is physically specified.
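The logic of the constrained-extremization rule, together with the later requirement that uniqueness hold only up to operational equivalence, can be made concrete in a deliberately small toy setting. The sketch below is illustrative only: the channel labels and burden values are hypothetical, the admissible class is finite, and the function stands in for the canonical ℛ_C, which the paper has not yet specified.

```python
# Toy sketch of the canonical selection rule in a finite, hypothetical setting:
# each candidate channel Phi in A(C) carries a realization burden R_C(Phi), and
# the realized channel is any minimizer. Ties are read as members of a single
# operational equivalence class, not as distinct physical selections.

def select_realization(admissible, burden, tol=1e-12):
    """Return the operational equivalence class of burden minimizers.

    admissible : iterable of candidate channel labels (nonempty, per Axiom A2)
    burden     : callable mapping a candidate to its real burden value (R_C)
    """
    candidates = list(admissible)
    if not candidates:
        raise ValueError("A(C) must be nonempty (Axiom A2)")
    values = {phi: burden(phi) for phi in candidates}
    r_min = min(values.values())
    # All channels within tolerance of the minimum form one equivalence class.
    return {phi for phi, r in values.items() if r - r_min <= tol}

# Hypothetical burden values for three candidate channels in one context:
R = {"Phi_1": 0.7, "Phi_2": 0.3, "Phi_3": 0.3}
selected = select_realization(R.keys(), R.get)
print(selected)  # both Phi_2 and Phi_3: unique only up to equivalence
```

The tie between Phi_2 and Phi_3 models the restricted uniqueness claim of Section 2.2: when two admissible channels incur the same burden because they differ only representationally, the rule returns the equivalence class rather than a privileged member.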
Several clarifications are necessary. First, Φ★_C need not be unique in the strict syntactic sense. What matters physically is uniqueness up to operational equivalence. If two candidate channels differ only by representational features that leave all physically relevant observables and accessibility relations unchanged, they are regarded as belonging to the same operational equivalence class. The strongest uniqueness result sought in the present paper is therefore restricted uniqueness modulo operational equivalence, not a demand for formal uniqueness under every possible redescription.
Second, the use of channel language does not commit the framework to a fully ordinary instrument reading of realization. The channel formalism is adopted because it offers a mathematically disciplined way to represent context-dependent transformations and because it allows admissibility, equivalence, and operational consequence to be stated with precision. Whether every physically relevant feature of realization can be encoded in standard CPTP language without extension is a separate technical question. For the purposes of the present paper, the channel-theoretic setting is used as the clearest canonical vehicle for law specification.
Third, the role of accessibility will become essential. Not every record is physically relevant in the same way, and not every formal correlation should count equally in realization judgments. For that reason, the measurement context C is not exhausted by the bare system-apparatus observable structure. It also includes, in a physically significant way, the conditions under which outcome-defining information is stored, retrievable, stable, and available to physical interaction. This motivates the later introduction of an operational accessibility parameter η and the possibility of accessibility-sensitive distinctions within the admissible class 𝒜(C).
The mathematical setting is therefore deliberately modest and deliberately sharp. It assumes only what is needed to turn the realization question into a law-selection problem with explicit formal objects: a state space 𝒟(ℋ), a physically specified context C, a constrained admissible class 𝒜(C), a realization functional ℛ_C, and a selected channel Φ★_C defined by constrained minimization. Everything that follows concerns how these objects are to be interpreted, restricted, and rendered empirically vulnerable.
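The formal objects just enumerated can be bundled into a single schematic structure, which may help keep their dependencies visible. The sketch below is a hypothetical typing exercise, not part of the canonical formalism: channels are represented by bare labels, the context carries its own admissible class and burden functional, and ties are broken deterministically for simplicity.

```python
# Hypothetical typed sketch of the Section 2.2 setting: a context C bundles
# its admissible class A(C) with its realization functional R_C, and the
# selected channel Phi*_C is defined by constrained minimization over A(C).

from dataclasses import dataclass, field
from typing import Callable, FrozenSet

Channel = str  # placeholder label for a candidate realization channel

@dataclass(frozen=True)
class Context:
    label: str
    admissible: FrozenSet[Channel]  # A(C), required to be nonempty (Axiom A2)
    burden: Callable[[Channel], float] = field(compare=False)  # R_C

    def realized_channel(self) -> Channel:
        # Phi*_C = arg min over A(C) of R_C(Phi). Ties would form one
        # operational equivalence class; here they are broken by label order.
        if not self.admissible:
            raise ValueError("A(C) must be nonempty (Axiom A2)")
        return min(sorted(self.admissible), key=self.burden)

# Hypothetical two-channel context with invented burden values:
C = Context("toy-context", frozenset({"Phi_1", "Phi_2"}),
            {"Phi_1": 0.9, "Phi_2": 0.4}.get)
print(C.realized_channel())  # Phi_2
```

Nothing in this sketch fixes what ℛ_C actually measures; it records only the shape of the law-selection problem: a context determines an admissible class and a burden, and the realized channel is the constrained minimizer.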
2.3 The formal question
With the foregoing distinctions and notation in place, the central formal question of the paper can be stated cleanly:
Given a measurement context C, what physical law selects the realized outcome channel Φ★_C from the admissible class 𝒜(C)?
This formulation is more precise than the generic question of “how collapse occurs,” and more disciplined than the vague request for an “interpretation of measurement.” It presupposes that the physically relevant problem is not merely to describe the evolution of state amplitudes, nor merely to note the existence of correlated records, but to identify the law by which one realization-compatible channel is selected from among those channels that remain physically admissible in the given context.
Several features of this formulation deserve emphasis. The question is context-indexed. There is no presumption that realization law is expressed independently of physical measurement architecture, record structure, or accessibility conditions. Context-dependence is not treated here as a defect; it is treated as a natural feature of any law whose role is to connect outcome selection to physically meaningful constraints. At the same time, context-indexing must not degenerate into arbitrariness. The law must remain invariant under physically irrelevant reparameterizations and must preserve operational equivalence across equivalent implementations.
The question is also selective rather than purely descriptive. It does not ask how one may redescribe a measurement after the fact, nor how one may update a state assignment upon learning an outcome. It asks what channel is selected, by what law, before any merely epistemic reinterpretation is introduced. In that sense, the law sought here is ontically or physically selective even if its formal expression is operational.
Finally, the question is empirically loaded. A selection law that never affects any protocol-level observable structure may remain formally admissible, but it is scientifically underconstrained. For this reason, the present paper does not stop at asking whether a canonical Φ★_C can be defined. It asks whether the law selecting Φ★_C has any protocol-sensitive consequence once accessibility is varied in a controlled way. If it does, then the theory bears empirical risk. If it does not, then its scientific standing is correspondingly weaker.
The remainder of the paper is organized around that formal question. The next stage is to specify the axiom set under which admissibility and canonicality are to be judged. From there, the paper defines the realization functional in canonical form, establishes restricted uniqueness, introduces accessibility as an operational variable, and derives a signature theorem together with a binary falsification condition. The overarching aim is to determine whether CBR can be stated not merely as a foundational framework, but as a canonically specified realization law that lives or dies on a finite and public empirical burden.
3. Axioms of Canonical CBR
The present section states the minimal axiom set under which CBR is to be read as a canonical realization-law proposal rather than as a merely suggestive framework. The purpose of these axioms is not to maximize generality. It is to minimize arbitrariness. A realization theory that leaves its admissible structures indefinite, its selection rule plastic, or its empirical standing optional does not yet count as a serious law candidate. The axioms below therefore do only the work that is necessary: they preserve compatibility with ordinary quantum dynamics, define context-indexed admissibility, exclude nonphysical sources of selectivity, constrain the role of record structure and accessibility, discipline the entry of probabilistic structure, and impose a genuine empirical burden. Taken together, they do not prove CBR true. They fix the standard under which it must stand or fail.
The logic of the section is cumulative. A1 protects the baseline dynamical core of quantum theory outside realization selection. A2 ensures that realization is posed as a context-indexed law-selection problem rather than an unconstrained formal add-on. A3 through A5 exclude arbitrariness and representational instability. A6 prevents the theory from solving the probabilistic problem by definitional insertion. A7 prevents the law from remaining formally insulated from experiment. The resulting axiom set is intentionally short because the burden of the paper is not to state every desirable feature of a realization theory. It is to state the least set of constraints under which the canonical law form becomes non-arbitrary, operationally meaningful, and vulnerable to disconfirmation.
3.1 Axiom A1 — Standard dynamical compatibility
Axiom A1. Outside realization selection, CBR does not modify ordinary quantum dynamics.
More precisely, let ρ ∈ 𝒟(ℋ) denote the physical state description relevant to a given measurement context. Then, prior to and apart from the realization-selection step, the evolution of ρ is governed by the standard dynamical structure already admitted by the baseline theory, whether in unitary form for effectively closed systems or in ordinary open-system form where environmental or instrument-level reduction is appropriate. CBR is therefore not proposed as an alternative Hamiltonian theory, not as a nonlinear deformation of Schrödinger evolution, and not as a general-purpose modification of state propagation.
The significance of A1 is methodological and substantive. Methodologically, it prevents the framework from purchasing apparent explanatory power by silently altering the well-confirmed dynamical core of quantum theory. Substantively, it clarifies the restricted role of realization law: CBR is intended to act only at the level where a realized outcome channel is selected from among channels compatible with the already established dynamical and registration structure of the context. This means that any explanatory novelty claimed by the framework must come from the law of realization itself, not from hidden alterations in the background evolution.
A1 does not imply that realization is temporally external to all dynamical description, nor that realization cannot be represented through channel-theoretic objects acting on 𝒟(ℋ). It states only that the dynamical content of the baseline formalism remains intact unless and until the realization problem becomes physically relevant. This axiom is therefore the first barrier against interpretive inflation. A realization-law theory that modifies ordinary evolution without necessity has already failed to isolate its target.
3.2 Axiom A2 — Context-indexed admissibility
Axiom A2. For every physically well-defined measurement context C, there exists a nonempty admissible class 𝒜(C) of realization-compatible channels.
This axiom is the formal entry point of canonical CBR. It asserts that realization is not represented by an unconstrained global rule independent of physical measurement architecture, nor by an arbitrary post hoc selection among all mathematically definable maps. Rather, each context C determines a physically meaningful admissible class 𝒜(C), whose elements are candidate realization channels surviving the baseline dynamical structure, the record-bearing configuration of the apparatus and environment, and the invariance conditions imposed by the framework.
The requirement that 𝒜(C) be nonempty is essential. A realization law that yields no candidate channel in some physically meaningful context is not incomplete merely in a technical sense; it is physically undefined. At the same time, A2 does not permit 𝒜(C) to be identified with the unrestricted set of all channels on 𝒟(ℋ). Admissibility here is already understood as a constrained notion. The detailed exclusion criteria are deferred to Section 5, but the axiom makes clear that context-indexed admissibility is a primitive structural ingredient of the theory rather than a later auxiliary device.
The dependence on C is neither optional nor embarrassing. If realization is genuinely sensitive to the physical structure of measurement, then it would be artificial to require its law to ignore the very features through which records are formed, stabilized, stored, or rendered accessible. Context-indexing is therefore not a concession to arbitrariness, but a recognition that realization must be defined relative to physically specified measurement circumstances. The burden imposed by later axioms is to ensure that this dependence remains physically lawful rather than representationally unconstrained.
3.3 Axiom A3 — Physical non-arbitrariness
Axiom A3. The realized channel cannot be selected by unconstrained labeling, gauge choice, coordinate dependence, or purely formal redescription.
This axiom excludes the central failure mode of weak realization proposals: the appearance of law without genuine selection. If two candidate constructions differ only by relabeling branches, redescribing equivalent states, changing coordinates, or applying a transformation with no physically relevant content, then a supposed selection rule that privileges one over the other is not a physical law. It is merely a disguised choice of representation.
A3 therefore imposes a non-arbitrariness requirement on both admissibility and minimization. No candidate channel Φ may count as distinct in a way relevant to realization unless that distinctness survives interpretation in terms of record structure, accessibility, operational consequence, or other physically meaningful content. Equally, no term entering the realization functional ℛ_C may reward or penalize a channel merely for how it is written rather than for what it physically implies.
This axiom does not require that every admissible distinction be directly observable in a single experiment. It requires only that distinctions relevant to realization be anchored in physical structure rather than formal bookkeeping. In that sense, A3 does not collapse the theory into naïve empiricism. It enforces a stricter standard: the framework must distinguish physical law from notation-dependent preference.
3.4 Axiom A4 — Record-structural relevance
Axiom A4. Realization selection depends only on physically meaningful record structure and accessibility properties of the measurement context.
The role of this axiom is to specify the domain of relevance for the realization law. Not every mathematical feature of a measurement description is realization-relevant. What matters, under canonical CBR, is the structure by which outcome-defining information is physically instantiated: correlation patterns, record-bearing degrees of freedom, stability properties, retrievability conditions, and the accessibility relations that determine whether a record functions as a physically operative element of the context rather than as a merely formal residue in the total state description.
A4 thus excludes two opposite mistakes. On one side, it blocks the theory from treating every microscopic entanglement trace as equally realization-relevant regardless of whether it contributes to stable or accessible record structure. On the other side, it prevents the theory from appealing to vague observational or epistemic notions detached from physical record architecture. The axiom insists that realization be grounded in physically meaningful record structure, neither less nor more.
This axiom prepares the way for the introduction of the operational accessibility parameter η in Section 6. At this stage, however, the point is conceptual, not yet metric. The law of realization must be sensitive to the existence and structure of records in a way that is physically discriminating. Otherwise the admissible class becomes either too permissive to explain anything or too narrow to reflect actual measurement conditions.
3.5 Axiom A5 — Consistency under equivalent representations
Axiom A5. Physically equivalent implementations must not yield inequivalent realization judgments solely by reparameterization or representational change.
A5 is stronger than A3 in a specific sense. A3 forbids explicitly arbitrary selection criteria. A5 requires positive stability of the framework under equivalence. Suppose two descriptions of a measurement context differ only by changes of basis within a physically irrelevant subspace, redescription of instrument variables that leave operational content unchanged, or other transformations preserving the full realization-relevant physical structure. Then the theory must assign the same realization verdict class to both descriptions. If it does not, the framework is not merely inelegant; it is physically ill-posed.
This consistency requirement applies both to admissibility and to the selected channel Φ★_C up to operational equivalence. The law need not preserve strict syntactic identity under every reformulation, since equivalent channels may be represented differently. What it must preserve is the equivalence class of the realization verdict. Thus A5 justifies the use of operational-equivalence language later in the uniqueness theorem: a physically serious law need not distinguish what the world does not distinguish.
The philosophical importance of A5 should not obscure its formal necessity. Without it, the same laboratory arrangement could in principle support incompatible realization judgments under mathematically convenient but physically idle reformulations. Such a theory would not merely be incomplete. It would be internally unstable.
3.6 Axiom A6 — Restricted Born-neutrality condition
Axiom A6. If Born-type weighting enters the realized outcome law, it must arise through the admissibility and realization structure itself and not by stipulation at the level of definition.
This axiom does not claim that the present paper has already delivered a wholly premise-independent derivation of the full probabilistic structure of quantum theory. Nor does it claim that every possible route by which amplitude-squared weighting might reappear has already been excluded in universal form. Its function is narrower and more exact. It forbids the theory from treating the desired probabilistic result as a hidden input and then redescribing that input as if it were a derived consequence of the law.
The necessity of A6 is methodological and formal at once. A realization-law framework fails in a particularly serious way if its strongest burden is concealed precisely at the point where weighting enters. If the law is permitted to encode the target weighting by fiat, then neither admissibility nor minimization does genuine explanatory work. The appearance of derivation is purchased by definitional placement rather than by theoretical constraint. A6 excludes that move. It requires that any legitimate weighting structure be forced by the interaction of the admissible class, the realization functional, and the invariance and consistency conditions already imposed elsewhere in the axiom set.
The condition is therefore intentionally restricted. It is strong enough to prevent disguised circularity at the level relevant to the present paper, but not so strong that it pretends to have already solved the entire probabilistic closure problem. That remaining burden is not denied here. It is isolated. The purpose of A6 is precisely to make it impossible for the framework to claim more closure than it has earned while still allowing the present paper to proceed with a law form that is not probabilistically vacuous.
3.7 Axiom A7 — Empirical accountability
Axiom A7. A realization-law proposal that claims physical standing must entail at least one finite operational protocol family under which failure would count against the theory.
This axiom is the final discipline imposed on canonical CBR. A realization framework may be mathematically coherent, conceptually sharp, and even internally constrained while still remaining scientifically incomplete if it never reaches the point where the world could count against it. A7 blocks that incompleteness. It does not require that the theory deviate from standard quantum mechanics in all ordinary settings. It does not require that every realization-relevant distinction become generically visible in simple experiments. It requires only that the law, once canonically specified, bear a finite empirical burden in at least one designated domain where the quantities on which it depends are operationally controllable.
The axiom is deliberately asymmetric. It does not assert that a law candidate must already be broadly confirmed. It asserts that a law candidate may not remain empirically idle everywhere. This distinction matters. Many frameworks remain indefinitely protected because their core claims are never narrowed to a public failure condition. A7 forbids that protection. If accessibility is physically relevant to realization, then there must exist a designated protocol family in which the law’s accessibility dependence can either become visible or fail to do so. That is why the later sections of the paper do not treat empirical exposure as optional. They treat it as axiomatic.
The significance of A7 is therefore larger than its brevity suggests. It is the axiom that converts the present paper from a work of canonical compression into a work of scientific exposure. Once A7 is accepted, the theory must eventually name its protocol family, its operational variable, its signature burden, and its failure condition. Without that sequence, the law remains formally interesting but scientifically incomplete. With it, the theory becomes vulnerable in the only way a theory can become vulnerable: by giving the world a finite opportunity to disagree.
4. Canonical Law Form
The central task of this section is to state the realization law in its minimal canonical form. The aim is not to produce the richest possible functional, but the leanest one capable of carrying the full burden imposed by A1–A7. A law form that includes too many terms becomes interpretively permissive and mathematically opaque; one that includes too few fails to exclude arbitrary or empirically idle rivals. The canonical form proposed here is therefore built around three irreducible burden components: representational invariance, record-structural coherence, and accessibility consistency. Any further burden either reduces to one of these, encodes a redundant reformulation, or introduces structure not yet justified by the axiom set.
4.1 Definition of the realization functional
Let C be a physically specified measurement context and let 𝒜(C) denote the corresponding admissible class of realization-compatible channels. The realization functional is defined on 𝒜(C) by
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ),
where α, β, γ ≥ 0 are fixed theory-level coefficients and where each term measures a distinct burden that a candidate realization channel must bear if it is to count as physically admissible in canonical form.
The first term, Ξ_C(Φ), is the representational invariance burden. It measures the extent to which the realization verdict induced by Φ fails to remain stable under physically irrelevant reformulations of the context. A channel carries low Ξ_C burden only if its realization effect is invariant under relabeling, coordinate change, equivalent encoding, or other descriptive modifications that leave the realization-relevant physical content of the measurement arrangement unchanged. This term is forced by the requirement that realization law not depend on notation.
The second term, Ω_C(Φ), is the record-structural coherence burden. It measures the extent to which Φ fails to align its realization verdict with the actual record structure of the context. A channel carries low Ω_C burden only if it tracks physically meaningful record-bearing organization rather than merely formal branch multiplicity or mathematically available but operationally idle distinctions. This term is forced by the requirement that realization law be anchored in the record structure actually produced by the measurement context.
The third term, Λ_C(Φ), is the accessibility-consistency burden. It measures the extent to which Φ fails to treat operational accessibility as a realization-relevant physical variable in a coherent and stable way. A channel carries low Λ_C burden only if it responds consistently to the physically relevant accessibility structure of the context and does not assign inequivalent realization verdicts to accessibility-equivalent configurations. This term is forced by the requirement that accessibility, once admitted as physically relevant, enter the law in a way that is operational rather than merely rhetorical.
The coefficients α, β, and γ are not empirical fit parameters to be tuned from protocol to protocol. They belong to the law form itself. Their role is to fix the comparative weight of the three irreducible burden classes once and for all at the level of the canonical theory. Up to overall positive rescaling, what matters is their relative structure, not arbitrary normalization. A theory that altered these coefficients opportunistically across contexts would not possess a canonical realization law; it would possess only a family of loosely related selection heuristics.
The present decomposition is deliberately minimal. It does not include a separate probability-matching term, because that would risk violating the restricted Born-neutrality discipline unless independently forced by the admissibility structure. It does not include a distinct branch-count penalty, because unsupported branch multiplicity is already penalized insofar as it produces record-incoherent realization structure and therefore contributes to Ω_C. It does not include a separate gauge-fixing term, because purely descriptive arbitrariness is already captured by Ξ_C. The three-term form is therefore not intended as one useful functional among many. It is intended as the smallest burden decomposition sufficient to carry the exact constraints imposed by the axiom set.
With this definition in place, the selected realization channel is given by
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
The role of the realization functional is therefore exact: it converts realization from an informal appeal to physical selection into a constrained minimization problem over a non-arbitrary admissible class. The burden of the next subsection is to show why this three-term form is canonical rather than merely convenient.
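To make the minimization structure concrete, the selection rule can be rendered as a toy computation over a finite discretization of the admissible class. The function names, the dictionary encoding of candidate channels, and all numeric burden values below are illustrative assumptions, not part of the theory; the sketch shows only the arithmetic form ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ) and the argmin step.

```python
# Toy finite sketch of the canonical selection rule. Candidate channels
# are modeled as dictionaries of precomputed burden coordinates; all
# names and values are hypothetical illustrations, not theory content.

def realization_functional(phi, alpha, beta, gamma):
    """R_C(Phi) = alpha*Xi_C + beta*Omega_C + gamma*Lambda_C for one candidate."""
    return alpha * phi["xi"] + beta * phi["omega"] + gamma * phi["lam"]

def select_channel(admissible, alpha=1.0, beta=1.0, gamma=1.0):
    """Return the admissible candidate minimizing the total burden."""
    return min(admissible, key=lambda p: realization_functional(p, alpha, beta, gamma))

# Hypothetical admissible class A(C) with three surviving candidates.
A_C = [
    {"name": "phi1", "xi": 0.2, "omega": 0.1, "lam": 0.3},
    {"name": "phi2", "xi": 0.0, "omega": 0.0, "lam": 0.1},
    {"name": "phi3", "xi": 0.5, "omega": 0.2, "lam": 0.0},
]

best = select_channel(A_C)
# phi2 carries the smallest total burden under unit coefficients.
```

Note that, as L112 requires, α, β, γ are fixed once at the level of the law; only their relative structure matters, so rescaling all three by a positive constant leaves the selected channel unchanged.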
4.2 Canonical selection rule
The canonical realization channel associated with context C is defined by the constrained minimization rule
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
This expression states that realization is selected by minimizing the total burden induced by representational instability, record-structural incoherence, and accessibility inconsistency within the admissible class. The rule is not to be read as a dynamical variational principle in the ordinary action-based sense. It is a law-selection principle: among all channels surviving admissibility, the physically realized channel is the one that best satisfies the canonical realization constraints.
Three notions of uniqueness must be distinguished.
Strict uniqueness obtains if there exists exactly one Φ ∈ 𝒜(C) achieving the minimum of ℛ_C.
Equivalence-class uniqueness obtains if every minimizer belongs to a single operational equivalence class, even if multiple syntactic representatives realize the same physical verdict.
Weak uniqueness modulo operational equivalence obtains if all minimizing channels induce identical realization judgments on the physically relevant observables and accessibility structure, even when their internal formal encodings differ.
The present paper aims only at the second and third notions, and in practice the principal target is equivalence-class uniqueness. This is both physically appropriate and mathematically stable. A law of realization should not be rejected merely because two mathematically distinct encodings describe the same physical outcome structure. What matters is that the theory select one realization verdict class, not one privileged notation.
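The operative notion, equivalence-class uniqueness, can be stated as a checkable condition: collect every minimizer of ℛ_C and verify that all of them induce one and the same realization verdict. The sketch below assumes, purely for illustration, that operational equivalence is modeled as equality of an induced verdict value; the data and names are hypothetical.

```python
# Sketch of the equivalence-class uniqueness check. Operational
# equivalence is modeled (illustratively) as equality of an induced
# "verdict"; two syntactic encodings may differ while their verdicts agree.

def minimizers(admissible, burden, tol=1e-12):
    """All candidates attaining the minimum of the burden functional."""
    m = min(burden(p) for p in admissible)
    return [p for p in admissible if burden(p) <= m + tol]

def equivalence_class_unique(admissible, burden, verdict):
    """True iff every minimizer induces the same realization verdict."""
    mins = minimizers(admissible, burden)
    return len({verdict(p) for p in mins}) == 1

# Two formally distinct minimizers sharing one physical verdict class.
A_C = [
    {"name": "phi_a", "burden": 0.1, "verdict": "outcome_up"},
    {"name": "phi_b", "burden": 0.1, "verdict": "outcome_up"},
    {"name": "phi_c", "burden": 0.4, "verdict": "outcome_down"},
]

ok = equivalence_class_unique(A_C, lambda p: p["burden"], lambda p: p["verdict"])
```

Here strict uniqueness fails (two minimizers exist) while equivalence-class uniqueness holds, which is exactly the situation the paper treats as acceptable.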
4.3 Why this is the canonical form
The claim that ℛ_C is canonical must be justified with precision. In the present paper, canonicality does not mean metaphysical inevitability in every imaginable realization-law framework. It means something narrower and stronger within scope: given the stated axioms, no burden term appearing in the functional is dispensable, and no omitted term is independently required to make the law physically serious. The canonical claim is therefore a claim of minimal sufficiency under constraint, not of absolute finality across all logically possible theories.
The first reason this form is canonical is that each retained term is axiom-forced. Without Ξ_C, the theory cannot exclude realization verdicts that change under physically irrelevant reformulation, and thus fails to satisfy the non-arbitrariness and representation-consistency requirements. Without Ω_C, the theory loses its anchoring in the actual record-bearing structure of the measurement context and can no longer distinguish physically meaningful realization channels from structurally ungrounded formal alternatives. Without Λ_C, accessibility can be named but not lawfully integrated, and the theory loses both its operational content and the route by which empirical accountability is incurred. The retained terms are therefore not stylistic choices. They are the minimal burden coordinates required by the axioms themselves.
The second reason this form is canonical is that the main omitted alternatives are either redundant, illicit, or empirically idle. A separate term rewarding preferred basis choice without additional physical content would merely duplicate representational dependence already penalized by Ξ_C. A separate term penalizing unsupported multiplicity of branches would add nothing not already captured by Ω_C so long as record-structural coherence is defined correctly. A separate empirical-fit term designed to reward agreement with future data would violate the logic of the paper by importing outcome-level success into the law itself. A direct probability-insertion term would violate the restricted Born-neutrality discipline unless independently derived from the admissibility structure. Thus the omitted candidates are not absent through neglect. They are absent because the exact work they might appear to do is already done by the canonical burden terms or because their inclusion would deform the theory into something less disciplined.
The third reason this form is canonical is that it is sufficient to support the full theorem program of the paper. The realization functional, as defined, is strong enough to support restricted uniqueness, strong enough to make accessibility operationally relevant, and strong enough to impose a finite empirical burden through the designated protocol family. A richer functional may be conceivable, but richness alone is not a virtue at this stage. Unless additional structure is forced by theorem or experiment, added burden terms would expand the descriptive surface of the law without increasing its explanatory or empirical necessity. The correct canonical form is therefore the smallest one that can already bear the full load of law selection, admissibility restriction, and empirical exposure.
Canonicality in this sense is not an ornamental label. It is the statement that, within the declared scope of the paper, the law has been compressed to the point where its surviving structure is no longer optional. That is what distinguishes a canonically specified realization functional from a merely well-designed heuristic. The next task is therefore not to enrich the law further, but to show that the minimization problem it defines is well-posed and that the admissibility structure it acts on is narrow enough to sustain a non-arbitrary uniqueness result.
4.4 Regularity and existence conditions
To avoid purely symbolic formalism, the minimization rule must be mathematically well-posed. The following regularity conditions are therefore imposed.
First, for each physically valid context C, the admissible class 𝒜(C) is assumed nonempty by A2 and is taken to be closed under the relevant operational-equivalence topology on candidate channels. Second, ℛ_C is assumed bounded below on 𝒜(C). Since Ξ_C, Ω_C, and Λ_C are burden measures with nonnegative codomain, this is natural in the present framework. Third, ℛ_C is assumed lower semicontinuous on 𝒜(C) with respect to the topology induced by the physically relevant operational observables and accessibility structure. Under standard compactness or coercivity conditions on the admissible class, these assumptions suffice to ensure existence of at least one minimizer.
The present paper does not require maximal abstract generality in the functional-analytic setting. What it requires is that the minimization problem not be empty or ill-defined in the physically relevant domain. A representative existence statement may therefore be recorded as follows.
Proposition 4.1. Let C be a measurement context such that 𝒜(C) is nonempty and compact under the operational-equivalence topology, and let ℛ_C be lower semicontinuous and bounded below on 𝒜(C). Then there exists at least one channel Φ̄ ∈ 𝒜(C) such that ℛ_C(Φ̄) = inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
The proof is standard: lower semicontinuity of a bounded-below functional on a compact set guarantees attainment of the infimum. The substantive point is not the proof itself, but its interpretive consequence. Once the functional and admissible class are given physical content, the realization rule ceases to be schematic aspiration and becomes a mathematically defined selection problem.
Finally, equivalence-class well-definedness must also be secured. If Φ ∼ Ψ denotes operational equivalence, then ℛ_C must descend consistently to the quotient 𝒜(C)/∼ whenever Ξ_C is minimized. In practical terms, this means that channels differing only by physically irrelevant reformulation must not receive inequivalent canonical burden assignments once the invariance conditions are properly enforced. That requirement is indispensable for the restricted uniqueness theorem of the next section.
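The quotient requirement just stated admits a direct operational test: the burden assignment must be constant on each operational-equivalence class, so that ℛ_C is well-defined on 𝒜(C)/∼. The following sketch assumes, as an illustrative device, that each candidate carries an explicit equivalence-class label; all names and values are hypothetical.

```python
# Minimal sketch of equivalence-class well-definedness: R_C descends to
# the quotient A(C)/~ iff the burden is constant on every equivalence
# class. Class labels and burden values are illustrative assumptions.

def descends_to_quotient(admissible, burden, eq_class, tol=1e-12):
    """True iff the burden assignment is constant on each equivalence class."""
    by_class = {}
    for p in admissible:
        by_class.setdefault(eq_class(p), []).append(burden(p))
    return all(max(v) - min(v) <= tol for v in by_class.values())

A_C = [
    {"name": "phi1", "class": "c1", "burden": 0.2},
    {"name": "phi1_prime", "class": "c1", "burden": 0.2},  # reformulation of phi1
    {"name": "phi2", "class": "c2", "burden": 0.5},
]

well_defined = descends_to_quotient(A_C, lambda p: p["burden"], lambda p: p["class"])
```

If two representatives of the same class received different burdens, the canonical law would penalize notation rather than physics, which is precisely what the invariance conditions forbid.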
5. Admissibility and Restricted Uniqueness Theorem
The canonical law form acquires physical and mathematical force only if the admissible class is narrow enough to exclude merely formal rivals and rigid enough to support a non-arbitrary selection result. Without that narrowing, the realization functional would act on too broad a space to justify any serious uniqueness claim. The purpose of the present section is therefore twofold. First, it defines admissibility in exact terms appropriate to a realization-law theory rather than to an unrestricted family of formally constructible channels. Second, it establishes the strongest uniqueness result the present paper is entitled to claim: uniqueness of the selected realization channel up to operational equivalence within the declared admissible class.
This restriction of scope matters. The theorem proved here is not a universal impossibility result for every conceivable realization-law alternative. It is a theorem about the canonical CBR form under the axioms already stated. What it establishes is that, once the law form and admissibility constraints are fixed, realization is not left floating among many inequivalent candidates. It is fixed up to operationally null reformulation. That is the exact level of uniqueness required for the rest of the paper to carry force.
5.1 Definition of admissibility
A channel Φ belongs to 𝒜(C) if and only if it satisfies all of the following conditions relative to the measurement context C.
First, Φ must be dynamically compatible with the baseline quantum evolution preserved by A1. It may govern realization selection, but it may not alter the ordinary pre-realization dynamics already admitted by the underlying theory. Any channel whose apparent success depends on covert modification of the baseline evolution is excluded from 𝒜(C).
Second, Φ must be representationally invariant. Its realization verdict must not depend on arbitrary relabeling, coordinate choice, gauge choice, or mathematically equivalent reformulations of the same physical situation. A channel that distinguishes among physically equivalent descriptions does not encode realization law. It encodes descriptive accident. Such channels are excluded.
Third, Φ must be record-structurally coherent. It must assign realization in a way that is anchored in the physically meaningful record structure of the context and not in unsupported formal multiplicity. A channel that privileges distinctions lacking stable record-bearing significance, or that ignores distinctions that are physically encoded in the record architecture, fails admissibility.
Fourth, Φ must be accessibility-consistent. If operational accessibility is physically relevant to realization, then channels that ignore accessibility altogether, treat accessibility-equivalent contexts differently, or respond to undeclared accessibility distinctions are excluded. Admissibility therefore requires that the realization verdict depend only on the physically relevant accessibility structure actually present in C.
Fifth, Φ must satisfy restricted probabilistic discipline. It may not reproduce the desired weighting structure by overt insertion at the level of channel definition. Any channel whose apparent success depends on smuggling the target outcome law directly into its formal specification is excluded, because such a channel violates the restricted Born-neutrality condition imposed by A6.
These positive conditions define admissibility by exclusion of the central failure modes of realization-law proposals. In particular, the following channel classes are not admissible:
channels whose selectivity is label-dependent or representation-sensitive
channels that are accessibility-blind despite claiming accessibility relevance
channels that preserve spurious realization distinctions unsupported by record structure
channels that achieve apparent success by concealed probability insertion
channels that modify baseline dynamics rather than govern realization selection
Thus 𝒜(C) is not the space of all formally writable realization maps. It is the residual class surviving every admissibility burden imposed by the paper’s axiom set. This is the correct domain for the realization functional, because the point of the law is not to rank arbitrary formal possibilities. It is to select among the channels still standing after nonphysical sources of selectivity have been removed.
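The five admissibility conditions act as a conjunction of filters: a candidate survives into 𝒜(C) only if it passes every one. The sketch below encodes this as predicate flags attached to each candidate; the flag names paraphrase the conditions of this subsection, and the candidates themselves are hypothetical.

```python
# Sketch of admissibility as a conjunction of exclusion predicates,
# assuming each candidate channel is annotated with boolean flags for
# the five conditions of Section 5.1. Flags and names are illustrative.

CONDITIONS = (
    "dynamically_compatible",    # A1: no covert change to baseline dynamics
    "representation_invariant",  # A3/A5: verdict survives redescription
    "record_coherent",           # A4: anchored in physical record structure
    "accessibility_consistent",  # A4: depends only on declared accessibility
    "born_neutral",              # A6: no overt probability insertion
)

def admissible_class(candidates):
    """Residual class surviving every admissibility condition."""
    return [c for c in candidates if all(c[k] for k in CONDITIONS)]

candidates = [
    dict(name="phi_ok", **{k: True for k in CONDITIONS}),
    dict(name="phi_label_dep",
         **{k: k != "representation_invariant" for k in CONDITIONS}),
    dict(name="phi_smuggler",
         **{k: k != "born_neutral" for k in CONDITIONS}),
]

A_C = admissible_class(candidates)
```

The label-dependent and probability-smuggling candidates are excluded before minimization even begins, which is the sense in which 𝒜(C) is a residual class rather than the space of all formally writable maps.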
5.2 Restricted Canonical Uniqueness Theorem
The main result of this section can now be stated precisely.
Theorem 1 (Restricted Canonical Uniqueness). Let C be a physically specified measurement context, let 𝒜(C) be the admissible class defined above, and let the realization functional be given by
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ),
with α, β, γ ≥ 0 fixed and with the regularity conditions required for existence of a minimizer satisfied. Then the selected channel
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ)
is unique up to operational equivalence within 𝒜(C).
The theorem is intentionally restricted in two senses. First, it does not claim strict syntactic uniqueness of the selected channel under every possible formal representation. Second, it does not claim to eliminate every logically conceivable realization-law alternative outside the canonical admissibility framework. What it claims is exactly what the present paper needs: within the declared law form and admissibility class, the selected realization verdict is not many-valued except through reformulations that are physically null.
5.3 Proof architecture
The proof proceeds by exhaustion of the possible ways in which two admissible minimizers could differ.
Assume that Φ₁ and Φ₂ both belong to 𝒜(C) and both minimize ℛ_C. If they differ only by physically irrelevant reformulation, then by definition they are operationally equivalent and no contradiction arises. The only nontrivial case is therefore the case in which Φ₁ and Φ₂ are not operationally equivalent.
Suppose first that they differ in representational invariance burden. Then one of them assigns distinct realization significance to a purely descriptive reformulation that the other does not. But such a channel would either fail admissibility outright or incur a larger Ξ_C burden, contradicting equal minimality within 𝒜(C).
Suppose next that they differ in record-structural coherence burden. Then one of them tracks the physically meaningful record architecture less well than the other. But then the worse-aligned channel incurs a larger Ω_C burden, contradicting equal minimality.
Suppose next that they differ in accessibility-consistency burden. Then one of them either responds to accessibility in an undeclared or inconsistent way, or fails to treat accessibility-equivalent contexts equivalently. But then the worse channel incurs a larger Λ_C burden, again contradicting equal minimality.
The only remaining possibility is that Φ₁ and Φ₂ differ formally while inducing the same realization verdict on all physically relevant observables and accessibility structures. But that is precisely what it means for them to be operationally equivalent. Therefore no two inequivalent admissible channels can both minimize ℛ_C.
It follows that the selected channel is unique up to operational equivalence.
5.4 What the theorem rules out
The theorem rules out the main residual ambiguity that would otherwise weaken the canonical law form. It rules out the possibility that, after axiomatic narrowing, realization is still left among many physically distinct admissible minima. It rules out the possibility that the law remains underdetermined except by notation. And it rules out the possibility that the realization functional merely ranks a broad equivalence class of rival physical verdicts without selecting one.
In other words, the theorem shows that canonical CBR does not merely constrain realization. It selects it, modulo only those distinctions the physical content of the theory itself treats as null.
5.5 Corollary on non-arbitrary selection
Corollary 1.1. Within canonical CBR, realization selection is not free parameterization disguised as law.
This follows directly from Theorem 1 and the admissibility definition. Since physically inequivalent minima cannot coexist within 𝒜(C), whatever freedom remains in the formal representation of the selected channel does not correspond to freedom in the physical realization verdict. The law therefore earns a stronger status than mere structured redescription. It imposes an actual restriction on what may count as the realized outcome channel in a given context.
5.6 Corollary on representation stability
Corollary 1.2. If two descriptions of the same measurement context are physically equivalent, then the selected realization verdict class is the same under both descriptions.
This corollary is the direct operational consequence of the uniqueness theorem together with representational admissibility. A law whose selected realization changed under purely descriptive reformulation would not be a physical law of realization at all. The corollary therefore records, in explicit form, the stability property already built into the admissibility definition and confirmed by the uniqueness theorem.
5.7 Why this theorem matters
This theorem is the first point in the paper at which the canonical law form becomes more than a disciplined proposal. Before it, the paper has a candidate functional and a candidate admissibility structure. After it, the paper has a selected realization verdict class that is fixed by the law rather than merely guided by it. That is the transition that matters.
The theorem does not yet settle whether the selected law is empirically right. Later sections bear that burden. But it does settle the prior question of whether the law is sufficiently exact to support empirical exposure in the first place. Without restricted uniqueness, later protocol and falsification sections would be premature. With it, the paper is entitled to move forward from canonical selection to operational consequence.
6. Operational Accessibility
The preceding sections define a canonical realization law and show that, within the admissible class, the selected realization channel is fixed up to operational equivalence. That is still not enough to make the theory empirically legible. A realization law becomes physically vulnerable only when the variable on which its distinctiveness depends is itself operationally defined. The purpose of the present section is therefore to state accessibility in exact physical terms and to make it the bridge between canonical law and empirical burden.
This move is necessary because “record existence” is too weak a notion for the present theory. A formally available correlation, a fragile record that cannot be recovered without destroying its content, and a stable record that can be retrieved and disseminated through multiple physical channels do not occupy the same status in the structure of a measurement context. If realization is sensitive only to the bare presence of correlation, then accessibility drops out of the theory and the law collapses toward baseline indifference. If, however, realization is sensitive to the physically operative availability of the record, then that availability must be defined in a way that can enter both the law and the experimental protocol. The present section does exactly that. It defines accessibility as a structured operational quantity, reduces it to a control parameter η, and identifies the critical accessibility regime η_c at which the theory’s empirical burden becomes concentrated.
6.1 Why accessibility matters
A record may exist in more than one physically relevant sense. It may exist merely as a formal correlation in the global state. It may exist as a transient microscopic trace that disappears before retrieval. Or it may exist as a stable, retrievable, and operationally available structure capable of entering further physical interactions. These cases are not equivalent for the purposes of a realization law. A theory that treats them as equivalent gives up the central distinction it needs in order to claim that record accessibility is physically relevant to outcome selection.
The present paper therefore does not identify accessibility with observation, knowledge, or human awareness. Accessibility is a physical property of the record-bearing structure itself. It concerns whether the outcome-defining information carried by that structure can be retrieved with fidelity, whether it persists over the relevant timescale, whether obtaining it destroys the structure that carries it, and whether it is available beyond a single fragile channel. Accessibility is thus neither merely epistemic nor merely formal. It is the physically relevant degree to which the record functions as an operative part of the measurement context rather than as a mathematically available but dynamically inert residue.
This distinction matters because CBR does not treat realization as identical to registration. Registration may produce correlation. Accessibility determines whether that correlated record becomes physically available in the sense required to influence the realized outcome law. If accessibility is ignored, then the law can still be written, but its claimed empirical burden becomes obscure. If accessibility is admitted, then it must be made operational. That is the burden of this section.
6.2 Definition of the accessibility parameter η
Let η denote the operational accessibility parameter associated with the measurement context C. The parameter is normalized so that
η ∈ [0,1],
where η = 0 represents the limiting regime in which the outcome-defining record is effectively inaccessible in the operational sense relevant to realization, and η = 1 represents the limiting regime in which the record is maximally retrievable, stable, and physically available within the declared protocol.
The parameter η is not primitive. It is a reduced scalar built from the physically relevant ingredients of accessibility. Let those ingredients be:
R, retrieval fidelity
P, public or intersubjective accessibility
T, temporal stability
D, destructive burden of readout
S, redundancy spread
These are not arbitrary components. They are the minimal quantities required to distinguish a merely formal record from a physically operative one.
Retrieval fidelity R ∈ [0,1] measures the degree to which the record can be recovered in a way that preserves the relevant outcome-defining content. Public accessibility P ∈ [0,1] measures the extent to which the record is available through more than one effective retrieval channel or physical access route. Temporal stability T ∈ [0,1] measures whether the record persists throughout the relevant realization window. Destructive burden D ∈ [0,1] measures how much retrieval damages or consumes the record-bearing structure. Redundancy spread S ∈ [0,1] measures the extent to which the record is distributed across more than one effective carrier.
The reduction of these quantities to η may depend on the experimental platform. At the level of the Core Theorem Paper, the platform-specific reduction is deferred, and the parameter is defined abstractly as
η = η(R, P, T, D, S),
with the following required properties:
η is monotone nondecreasing in R, P, T, and S.
η is monotone nonincreasing in D.
η = 0 if the record is operationally inaccessible in the relevant sense.
η = 1 only in the limiting case of maximal operational accessibility under the declared protocol.
η is invariant under physically equivalent descriptions of the same accessibility regime.
These conditions are sufficient for the present paper because the role of η here is structural and theorem-bearing rather than platform-final. What matters is that accessibility is no longer a loose descriptive label. It is now a normalized operational variable through which the law can become empirically exposed.
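The required properties of the reduction can be made concrete with a toy sketch. The multiplicative aggregation below, including the use of (1 − D) and a fifth-root normalization, is an illustrative assumption only; the paper deliberately defers any platform-specific form of η(R, P, T, D, S).

```python
# Illustrative toy reduction of η from (R, P, T, D, S). The functional
# form is an assumption for illustration, not the paper's committed law.

def eta(R: float, P: float, T: float, D: float, S: float) -> float:
    """Reduce (R, P, T, D, S) in [0,1]^5 to a scalar eta in [0,1].

    Monotone nondecreasing in R, P, T, S; nonincreasing in D;
    eta = 0 when any positive ingredient vanishes (record inaccessible);
    eta = 1 only when R = P = T = S = 1 and D = 0.
    """
    for x in (R, P, T, D, S):
        if not 0.0 <= x <= 1.0:
            raise ValueError("all ingredients must lie in [0, 1]")
    # Geometric-style aggregation: a vanishing ingredient, or maximal
    # destructive burden D = 1, drives eta to 0; the fifth root keeps
    # the result normalized to [0, 1].
    return (R * P * T * S * (1.0 - D)) ** 0.2
```

Any reduction with these monotonicity and endpoint properties would serve the structural role η plays in the theorems; the geometric form is chosen only because it makes the endpoint conditions immediate.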
6.3 Accessibility-equivalence
Once η is introduced, the theory must also define when two measurement contexts count as accessibility-equivalent. Without that notion, accessibility could reintroduce arbitrariness under a different name.
Two contexts C₁ and C₂ are said to be accessibility-equivalent, written
C₁ ≈_η C₂,
if and only if they satisfy three conditions.
First, they realize the same value of η within the tolerance relevant to the declared protocol.
Second, they preserve the same realization-relevant record structure up to operational equivalence. That is, any differences between them must not alter what the record makes physically available in the sense relevant to outcome selection.
Third, any remaining differences between the contexts must be representational, implementational, or descriptively superficial rather than differences in the actual physical accessibility regime.
This definition is essential because it prevents the law from reacting to engineering detail that does not matter physically while preserving the right to distinguish contexts whose accessibility differs in a realization-relevant way. Accessibility-equivalence is therefore the operational companion of representational invariance. It ensures that η is a physical control variable and not merely a compressed notation for uncontrolled experimental variation.
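The three conditions of the definition can be summarized as a predicate. Representing a context by its η value, a record-structure class, and a tuple of residual descriptors is an illustrative assumption; the point is only that the first two fields are load-bearing while the residual descriptors, per condition three, must be ignored.

```python
# Minimal sketch of the accessibility-equivalence predicate ≈_η.
# The Context representation and the default tolerance are placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    eta: float             # operational accessibility of the context
    record_class: str      # realization-relevant record structure, up to operational equivalence
    descriptors: tuple = ()  # representational / implementational detail (must not matter)

def accessibility_equivalent(c1: Context, c2: Context, tol: float = 0.01) -> bool:
    """Return True iff c1 ≈_η c2 under the declared protocol tolerance."""
    same_eta = abs(c1.eta - c2.eta) <= tol            # condition one
    same_record = c1.record_class == c2.record_class  # condition two
    return same_eta and same_record                   # condition three: descriptors ignored
```

For example, two implementations differing only in mounting hardware but realizing the same η and record structure come out equivalent, while a context whose record has been erased does not.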
6.4 Accessibility and the realization law
With η defined, accessibility enters the canonical law through the accessibility-consistency burden Λ_C. The point of this term is not merely to register accessibility, but to force the selected realization channel to respond coherently to changes in the operational availability of record structure. This means that the law cannot both claim accessibility relevance and remain completely indifferent to η across all contexts.
At the same time, the present paper does not claim that every variation in η must produce visible deviation in every observable. Accessibility enters the law as a structured burden term, not as a guarantee of ubiquitous anomaly. The correct claim is narrower: if η contributes nontrivially to realization selection, then there must exist at least one designated protocol family in which the induced observable response cannot remain globally trapped inside the same baseline smooth-response class for all η-regimes.
This is why η matters at exactly the level chosen here. It is neither too thin to carry empirical consequence nor too overbuilt to function only as a platform-specific artifact. It is the minimal operational bridge between realization law and experimental burden.
6.5 Critical accessibility and the definition of η_c
The existence of an accessibility parameter alone is not enough to generate a discriminating theory. What matters is whether there is a regime in which accessibility becomes decisive for realization selection. The present paper therefore introduces a critical accessibility value η_c.
The role of η_c is precise. It is the accessibility value at which the accessibility-sensitive contribution to the realization burden becomes large enough to alter the minimization ordering over the admissible channel class. In other words, η_c is the point at which the law ceases to treat accessibility as subdominant background structure and begins to treat it as realization-effective.
At the level of the canonical theory, η_c is defined implicitly by the condition that the accessibility-sensitive burden becomes order-determining relative to the non-accessibility burden structure. Thus η_c is not introduced as an arbitrary marker on the control axis. It is the boundary between two realization regimes: one in which accessibility does not yet change the selected equivalence class of realization channels, and one in which it does.
This definition is intentionally abstract at the level of the Core paper. The exact numerical or platform-specific determination of η_c belongs to a later implementation volume. But the formal role of η_c is already fixed here: it is the point at which the law’s accessibility dependence ceases to be latent and becomes selection-relevant.
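The structural role of η_c as an ordering threshold can be illustrated with a toy two-channel minimization. The channel names, the constant base burdens, and the linear accessibility-sensitive term are all assumptions made for illustration; only the structural point is intended: η_c is the value at which the accessibility term first changes which channel minimizes ℛ_C.

```python
# Toy illustration of η_c as an ordering threshold in the canonical law
# Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ). All numerical values are assumed.

def realization_burden(channel: str, eta: float) -> float:
    base = {"decohere-like": 0.30, "record-tracking": 0.50}  # non-accessibility burdens (assumed)
    lam = {"decohere-like": 0.80, "record-tracking": 0.10}   # accessibility sensitivity (assumed)
    return base[channel] + lam[channel] * eta                # burden = base + accessibility term

def selected_channel(eta: float) -> str:
    """Select the admissible channel minimizing the realization burden."""
    channels = ("decohere-like", "record-tracking")
    return min(channels, key=lambda ch: realization_burden(ch, eta))

# Here eta_c solves 0.30 + 0.80*eta = 0.50 + 0.10*eta, i.e. eta_c = 0.2/0.7:
# below it the selected class is "decohere-like", above it "record-tracking".
```

Below the crossing point the accessibility term is subdominant and the selection is unchanged; above it the selected equivalence class flips. That crossing is the formal content assigned to η_c in this section.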
6.6 Accessibility regimes
The introduction of η and η_c allows the theory to distinguish several accessibility regimes relevant to its empirical burden.
The low-accessibility regime is the region in which η is sufficiently small that the record exists, if at all, only in a physically weak, unstable, or operationally inaccessible sense. In this regime, accessibility does not dominate realization selection.
The precritical regime is the region below η_c in which accessibility is rising but has not yet changed the selected realization class. The law may be accumulating pressure against the baseline here without yet having crossed its internal threshold.
The critical regime is the neighborhood of η_c in which accessibility first becomes realization-effective. This is the regime in which the accessibility-sensitive signature burden of the theory is expected to concentrate.
The postcritical regime is the region above η_c in which accessibility has already altered the selected realization ordering. If the theory is correct, its strongest departure from the baseline is expected to emerge here or at the transition into it.
The asymptotic high-accessibility regime is the limiting region in which the record is maximally retrievable, stable, and available under the declared protocol.
These regime distinctions are not decorative. They prepare the exact logic of the later theorem. The theory does not need to predict anomalous behavior uniformly across all η. It needs only to predict that if accessibility matters to realization, there will be a determinate regime in which that relevance becomes empirically nontrivial.
6.7 Why this operationalization is sufficient for the present paper
The accessibility construction of this section is deliberately intermediate in scope. It is more exact than a conceptual placeholder, but less platform-specific than a full implementation-level reduction. That is the right level for the present paper.
A weaker treatment would leave accessibility too vague to bear theorem-level weight. A stronger treatment would collapse the paper prematurely into a platform-specific derivation before the canonical law form, admissibility structure, and signature class had been established in general canonical terms. The present section therefore does exactly what the paper requires: it makes accessibility operational enough to enter the law, exact enough to support a designated protocol family, and structured enough to sustain a critical accessibility regime. That is sufficient for the accessibility-signature theorem and the falsification condition that follow.
7. Canonical Protocol Family
The canonical law form established in the preceding sections remains scientifically incomplete unless it is forced into a definite test domain. A realization-law theory does not become empirically legible merely by asserting that accessibility may matter in principle. It becomes empirically legible when one exact class of protocols is identified in which accessibility can vary in a controlled way while the baseline structure of the experiment remains fixed. The purpose of the present section is therefore to specify the canonical protocol family relative to which the paper’s empirical burden is defined. The aim is not breadth. It is discrimination. The theory does not need many loosely related possible tests. It needs one designated class of experiments in which global baseline equivalence would become untenable if accessibility were genuinely realization-relevant.
7.1 Why one protocol family is enough
A theory candidate becomes experimentally meaningful through one finite discriminator, not through diffuse applicability. If the law can be written only in general terms and never tied to a controlled protocol family, then empirical accountability remains rhetorical. Conversely, once one exact family is fixed in which the theory either produces a non-baseline response or fails to do so, the scientific status of the law changes decisively. The point of the present paper is therefore not to survey every possible realization-sensitive experiment. It is to identify one protocol family in which the variables on which the law depends can be operationalized sharply enough that failure would count against the theory. One such domain is sufficient for the present stage of the program because the question is not whether CBR is already universally confirmed. The question is whether it can be made vulnerable in a finite and public way.
This restriction is methodological rather than defensive. A broad list of possible applications would weaken the paper by making the empirical burden harder to locate. What is needed instead is a designated protocol family that isolates the distinction between correlation and accessibility, keeps the baseline dynamics fixed, and allows the theory’s realization-sensitive content to manifest in one controlled observable class. That is exactly what the family introduced below is designed to do.
7.2 Flagship protocol choice
The canonical protocol family adopted in this paper is the class of delayed-choice quantum eraser and record-accessibility interferometric protocols. This family is selected because it isolates, with unusual clarity, the difference between the mere existence of path-correlated record structure and the operational accessibility of that structure. In ordinary interferometric settings, standard quantum theory already explains coherence loss, conditional recovery, and the role of path distinguishability. The significance of the delayed-choice accessibility-sensitive family is that it makes it possible to vary the operational status of the record without reducing the experiment to a trivial measurement-versus-no-measurement dichotomy. The theory is therefore tested not on whether records exist at all, but on whether their accessibility is physically relevant to realized outcome selection.
This family is also the right one for the present paper because it aligns with every major structural distinction already introduced. It separates evolution from realization, preserves the role of record-bearing subsystems, and provides a natural setting in which the accessibility parameter η can become nontrivial. Most importantly, it allows the paper to ask a sharply bounded question: given fixed signal–idler architecture and tunable record accessibility, does the realized outcome law remain everywhere inside the same smooth-response class as the standard baseline, or not? That question is narrow enough to be answerable and strong enough to matter.
7.3 Experimental structure
The canonical protocol family contains five indispensable elements: a signal subsystem, an idler or record subsystem, a controlled accessibility structure, a visibility-like observable, and a delayed retrieval-or-erasure logic. These elements are not introduced as independent modules. Together they define the exact kind of measurement context in which accessibility may become realization-relevant.
The signal subsystem is the degree of freedom whose coherence or interference structure is directly observed. In a two-path implementation, the signal occupies a Hilbert space spanned by distinct path states and is prepared so that interference is available in the absence of effective path distinguishability. The signal observable is then a visibility-type quantity reconstructed from the signal statistics.
The idler or record subsystem is the physical carrier of path-defining information. Its role is to store, stabilize, or mediate the record structure correlated with the signal alternatives. What matters is not that the idler merely exists, but that it carries a record whose operational accessibility can be varied while leaving the overall platform otherwise intact.
The controlled accessibility structure is the point at which the protocol family becomes realization-relevant. The experiment must allow the idler record to pass through regimes in which it is weakly accessible, partially accessible, or strongly accessible in the operational sense defined earlier. This control may be implemented through retrieval fidelity, dissemination, redundancy, timing, or readout burden, provided that the resulting accessibility can be summarized by η without ambiguity at the level required by the theorem.
The primary observable is an interference visibility or realization-sensitive analogue thereof. The paper takes this class of observables as central because it is the cleanest way to compare the standard baseline response to the CBR response under controlled accessibility variation.
Finally, the delayed retrieval-or-erasure logic is what makes the family especially suited to the paper’s purpose. The accessibility or erasure structure of the record may be fixed or implemented after signal detection while still remaining part of the total physical context C. This delayed-choice feature ensures that accessibility is not confused with naïve chronology of observation. It is instead treated as part of the full context relative to which realization is defined.
7.4 Baseline comparison theory
The protocol family must be paired with an exact baseline comparator if the later signature theorem is to have any force. Let V_SQM(η) denote the baseline visibility function associated with the designated family when analyzed within standard quantum mechanics using ordinary unitary evolution, entanglement, decoherence accounting, and conditional reconstruction logic, but without realization-law augmentation. The significance of this definition is not that the paper claims one universal closed-form baseline across every conceivable implementation. The significance is that, for any fixed realization of the designated family, there exists a standard response class generated by the orthodox treatment of coherence, distinguishability, and erasure. That response class is the proper comparator for canonical CBR.
In the present paper, the burden on CBR is comparative rather than absolute. The question is not whether the baseline predicts visibility variation with accessibility. It plainly does in any accessibility-sensitive interferometric setting. The question is whether the CBR response can remain globally trapped inside the same baseline smooth-response class once accessibility is treated as realization-effective rather than merely as an ordinary control of distinguishability. If accessibility is physically irrelevant to realization, then no departure is required. If accessibility enters the realization law nontrivially, then global coincidence with V_SQM(η) becomes untenable. The baseline function therefore plays a precise role: it defines the response class CBR must either remain within or leave.
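One member of the baseline smooth-response class can be sketched explicitly. Identifying η with an effective which-path distinguishability D_eff, so that the standard complementarity trade-off V² + D² ≤ 1 is saturated, is purely an illustrative assumption; the paper commits only to the existence of a smooth baseline class for each fixed implementation, not to this closed form.

```python
# Toy member of the baseline smooth-response class V_SQM(η), under the
# assumed identification D_eff = η. Not a committed prediction.

import math

def v_sqm(eta: float) -> float:
    """Smooth baseline visibility: V = sqrt(1 - D_eff^2) with D_eff = eta."""
    return math.sqrt(max(0.0, 1.0 - eta * eta))
```

The baseline plainly varies with η, which is why the theorem's burden is comparative: the question is not whether visibility responds to accessibility, but whether the CBR response can stay globally inside this smooth class.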
7.5 CBR observable
Let V_CBR(η) denote the visibility response predicted by canonical CBR on the designated protocol family. More generally, if visibility alone is not sufficient in a particular implementation, let S_CBR(η) denote the corresponding realization-sensitive signature map, provided that visibility remains its primary component and that the observable class is fixed before empirical comparison. The purpose of introducing this notation is to ensure that the law’s empirical burden is stated at the level of a concrete response, not merely at the level of qualitative expectation.
The relation between V_CBR(η) and V_SQM(η) is the decisive issue. If accessibility enters the realization law only symbolically and never changes the selected realization structure, then V_CBR(η) may collapse into baseline equivalence. But if accessibility contributes nontrivially to realization selection, then the CBR response cannot remain globally identical to the standard baseline across all η-regimes. The task of the next section is to turn that claim into theorem form. The role of the present section is simply to make the comparison domain exact: one designated protocol family, one baseline response class, one realization-sensitive observable, and one controlled accessibility variable through which empirical difference, if real, must appear.
8. Accessibility-Signature Theorem
The canonical law form and the designated protocol family are not sufficient by themselves to give the theory empirical force. A realization-law proposal becomes experimentally meaningful only when it identifies a determinate observable consequence of its central physical claim. In the present paper, that claim is that operational accessibility may enter outcome realization nontrivially. The purpose of this section is therefore to state the exact empirical consequence that follows if that claim is true. The result is not that CBR predicts arbitrary anomaly wherever accessibility is varied. The result is narrower and stronger: in the designated accessibility-sensitive protocol family, realization-sensitive observables cannot remain globally trapped inside the same smooth baseline response class if accessibility is genuinely realization-effective. That consequence is the paper’s first empirical theorem.
The section proceeds in two layers. First, it states the theorem in restricted but decisive form. Second, it distinguishes the strongest local morphology the present framework can justify from the weaker residual claim that survives if the strongest regularity assumptions are relaxed. This distinction is essential. It prevents the theorem from overclaiming while ensuring that the theory does not retreat into vagueness at the moment empirical burden is incurred.
8.1 Statement of theorem
The principal empirical result of the paper may now be stated precisely.
Theorem 2 (Accessibility-Signature Theorem). Let 𝒫 denote the designated family of delayed-choice quantum eraser and record-accessibility interferometric protocols introduced in Section 7, and let η ∈ [0,1] be the operational accessibility parameter associated with each context C ∈ 𝒫. Suppose canonical CBR holds in the law form
Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with ℛ_C satisfying the canonical burden decomposition already defined and with accessibility entering the law nontrivially through the accessibility-consistency burden Λ_C. Then the induced realization-sensitive observable, written V_CBR(η) in the visibility representation or more generally S_CBR(η), cannot remain globally contained in the same smooth baseline response class as V_SQM(η) or S_SQM(η) across all η-regimes. Equivalently, if accessibility contributes nontrivially to realization selection, then there exists a nonempty η-regime in which the realized response leaves the declared baseline class.
This theorem is intentionally restricted. It does not claim that every accessibility-sensitive protocol must display large deviation. It does not claim that every η-regime must differ from baseline. It claims only what the law is entitled to claim: once accessibility becomes realization-effective, there must exist a designated controlled regime in which the induced response is no longer globally baseline-equivalent.
8.2 Primary predicted signature
The strongest primary morphology justified by the present framework is a critical-regime derivative break or kink near η_c. This is the main empirical claim of the paper. It is not one candidate among many equally weighted anomaly types. It is the preferred signature because it is the natural observable image of a regime change in the minimizing realization class. If accessibility first becomes realization-effective at a critical value η_c, then the cleanest resulting departure is not a diffuse anomaly spread uniformly across the control domain, but a localized change in the response law at the transition point itself.
The force of this choice is methodological as well as formal. A theory becomes harder to dismiss when it names one principal signature rather than a loose menu of possible deviations. The present paper therefore treats the critical-regime kink as the primary signal and regards all weaker signatures as residual forms of the same underlying accessibility-sensitive transition.
8.3 Strong form
Under the strongest regularity assumptions compatible with the canonical law form, the accessibility-signature theorem takes a sharpened local form. There exists a critical accessibility value η_c ∈ (0,1) such that the realized response is continuous at η_c but fails to remain in the same local differentiability class as the baseline response in any neighborhood of η_c. In practical terms, the observable develops a kink, slope discontinuity, or equivalent local nonanalyticity at the transition point.
The significance of this strong form is not merely that it predicts a noticeable feature. It predicts the right kind of feature. A derivative break is the natural response-level signature of a regime change in the selected realization class. It is localized, structurally tied to the law, and not easily confused with a generic smooth perturbation of the baseline. If such a feature is present, then the theory’s empirical distinctness is not merely numerical but structural.
This strong form should not be overstated. The present paper does not claim to have derived a literal discontinuity for all possible implementations of the protocol family. It claims that, under the regularity assumptions most natural to the canonical form, a critical-regime kink is the strongest and best-motivated empirical morphology.
8.4 Weak form
If the stronger regularity assumptions are relaxed while accessibility remains realization-effective, the empirical burden does not disappear. It weakens in a controlled way. In that case the theorem yields a bounded non-baseline deviation class concentrated in a neighborhood of η_c. The realized response may then remain continuous and differentiable while still failing to belong to the declared baseline class within that critical window.
This weaker form is still scientifically meaningful. It says that even if the transition is smoothed at the level of the observable, the law does not regain global baseline equivalence. What survives is a bounded critical-regime departure not reducible to ordinary smooth baseline deformation. The weak form is therefore not a second theory. It is the residual empirical content of the same theory when the strongest local regularity claim is softened.
8.5 Proof architecture
The logic of the theorem is a two-case argument. Suppose first that accessibility is physically irrelevant to realization. Then Λ_C either contributes nothing to the ordering of admissible realization channels or remains operationally inert across the designated protocol family. In that case the law collapses toward baseline equivalence and the theory has no accessibility-sensitive empirical burden.
Suppose instead that accessibility is physically relevant to realization. Then varying η alters the accessibility-sensitive contribution to the realization burden, and there must exist a regime in which this contribution changes the ordering of admissible channels. Once that happens, the selected realization class changes while the baseline comparator remains governed only by ordinary coherence, distinguishability, and erasure logic. Therefore the realized response cannot remain globally inside the same smooth baseline class across all η-regimes.
The only remaining question concerns morphology. If the change in realization ordering occurs with the exact regularity assumed by the canonical reduced form, the resulting response develops a critical-regime derivative break at η_c. If that local transition is smoothed while preserving nontrivial accessibility-sensitive ordering, then the response still leaves the baseline class through a bounded deviation band in a neighborhood of η_c. This establishes the strong and weak forms respectively.
8.6 Why the signature concentrates near η_c
The theorem predicts concentration near η_c for a structural reason. The law does not claim that accessibility modifies realization everywhere equally. It claims that accessibility becomes decisive only when it is large enough to alter the minimization ordering over the admissible channel class. That threshold is exactly what η_c represents. Below η_c, accessibility may be present but not yet realization-dominant. Above η_c, accessibility has already changed the selected realization regime. The transition between those conditions is where the strongest observable burden naturally appears.
This localization should not be mistaken for weakness. It is precisely what one should expect from a law whose empirical distinctness enters through a threshold in realization relevance rather than through a generic alteration of all dynamics. A theory that predicted anomaly everywhere would be broader, but not necessarily better. The present theory predicts anomaly where its own internal logic requires it.
8.7 Why the signature is not a baseline artifact
The accessibility signature is not a baseline artifact because the baseline comparator has already been fixed independently of the realization law. In the designated protocol family, V_SQM(η) is defined by standard quantum evolution, entanglement, decoherence accounting, and conditional reconstruction without realization-law augmentation. The accessibility-sensitive signature arises only when η enters the realized outcome law itself. It is therefore not produced by merely re-labeling ordinary distinguishability or by re-describing standard erasure logic in new terms.
Nor is the signature generated by arbitrary choice of observable. The observable class has already been fixed at the level of the protocol family. What changes is not the meaning of the measurement, but the law governing how realization tracks accessibility within that measurement context. The signature is therefore tied to the exact object the theory claims to contribute: a realization-sensitive law, not a new notation for standard quantum interference.
8.8 What this theorem achieves
The accessibility-signature theorem is the point at which the canonical CBR proposal becomes empirically nontrivial. Before this section, the paper has established a canonized law form, a restricted admissibility structure, and an operational accessibility variable. After this section, the paper has shown that if accessibility is genuinely realization-effective, then one designated family of experiments must eventually leave the baseline class. That is the first finite empirical burden of the theory.
The theorem does not yet say that the signal will survive realistic noise or that its absence would kill the theory. Those burdens belong to the next sections. But it does establish the indispensable intermediate result: once accessibility is admitted into the realization law nontrivially, the theory cannot remain everywhere observationally ordinary in the exact domain it has chosen for its own empirical exposure.
9. Falsification Theorem
A realization-law proposal is not yet a scientific theory merely because it defines a law form and names an empirical signature. It becomes scientifically complete only when it states the exact condition under which failure of that signature counts as failure of the theory itself. The purpose of the present section is therefore to close the empirical logic opened by the accessibility-signature theorem. The result is not an evidential suggestion, not a heuristic warning, and not a merely methodological preference. It is a finite invalidation criterion for the canonical law form developed in this paper.
This section matters because many foundational proposals remain indefinitely protected at exactly this point. Their core claims may be formalized, and their possible empirical consequences may even be discussed, but the theory never reaches the stage at which a negative result is allowed to count decisively against it. The present paper refuses that protection. Once the canonical law, the admissible class, the protocol family, the operational variable, and the response class have all been fixed, the theory must either produce the required departure in its designated domain or fail there. That is the burden this section makes explicit.
9.1 The binary failure condition
The failure condition of canonical CBR may now be stated as a theorem.
Theorem 3 (Failure Criterion). Let 𝒫 denote the designated accessibility-sensitive protocol family, let η ∈ [0,1] be the operational accessibility variable defined for that family, and let V_SQM(η) denote the corresponding standard baseline response class under ordinary quantum evolution, decoherence accounting, and conditional reconstruction, without realization-law augmentation. Suppose further that the canonical realization law of the present paper is fixed exactly as stated, together with its admissible class, its accessibility-sensitive burden structure, and its signature burden. If all physically valid realizations of 𝒫 exhibit only baseline-class behavior across the physically relevant and experimentally accessible η-domain, with no threshold-sensitive, nonanalytic, or statistically significant accessibility-linked departure beyond declared uncertainty and model tolerance, then canonical CBR in its present law form is false.
This theorem should be read exactly as written. It does not say merely that the evidence would count against the theory or that confidence in the framework should be reduced. It says that the canonical law form of the present paper would fail if the designated protocol family were explored under valid conditions and returned only baseline-class behavior throughout the relevant accessibility regime. The claim is binary because the law has already been narrowed to the point where its empirical burden is finite. Once that burden is not met, the theory in its present canonical form does not remain partially intact. It fails.
9.2 Why this is genuine falsification
The failure criterion is genuine because the relevant objects have already been fixed in advance. The law form is fixed by
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ),
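The structure of this law can be illustrated with a toy finite-class sketch. The channel labels, burden values, and tie-breaking convention below are all hypothetical illustrations, not the paper's formal objects: the admissible class is modeled as a finite set of labels and the realization functional as a plain numeric map.

```python
# Toy stand-in for the canonical law: channels are labels, the admissible
# class A(C) is a finite set, and R_C assigns each channel a burden value.
# All names and numbers here are illustrative, not the paper's objects.

def select_channel(admissible, burden):
    """Return the arg-min of the burden functional over the admissible class.

    Ties are broken by label order so the selection is well defined, a
    toy analogue of uniqueness up to operational equivalence.
    """
    return min(sorted(admissible), key=burden)

# Hypothetical context: three realization-compatible channels with
# illustrative burden values.
burdens = {"phi_1": 0.42, "phi_2": 0.17, "phi_3": 0.31}
selected = select_channel(burdens.keys(), burdens.get)
print(selected)  # phi_2: the unique burden minimizer in this toy class
```

The sketch shows only the logical shape of the selection rule: a constrained minimization over an antecedently fixed class, with no probabilistic weighting inserted at the level of the law.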
with the realization functional already canonized and the admissible class already narrowed by explicit conditions. The protocol family is fixed by the delayed-choice quantum eraser and record-accessibility interferometric structure specified earlier. The operational variable is fixed by the accessibility parameter η and its associated equivalence structure. The response burden is fixed by the accessibility-signature theorem, which states that global baseline equivalence is impossible if accessibility is realization-effective. The failure class is therefore finite, public, and theory-internal.
This removes the standard escape routes by which weakly specified foundational proposals preserve themselves after a negative result. The present theory cannot respond to a null outcome by saying that a different unspecified observable was really intended, that a different undeclared protocol family was the true empirical target, that accessibility was meant in a looser sense than the one operationally defined, or that the law was never intended to incur failure in the first place. All such moves would change the theory rather than preserve it. The current theorem is therefore not a rhetorical use of the word “falsification.” It is a consequence of having already frozen the relevant formal and operational structure.
Of course, no scientific theory is invalidated by an experiment that fails to realize its own declared test conditions. That is why the theorem is framed in terms of physically valid realizations, declared uncertainty, and baseline tolerance. The point is not that any negative-looking result is fatal. The point is that, once the designated domain has been honestly probed under the conditions required for the law to show itself, continued baseline behavior is fatal to the canonical form of the theory.
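The decision structure of Theorem 3, together with the valid-realization caveat just stated, can be sketched as a small decision function over a hypothetical run log. The run format and flag names are illustrative assumptions, not part of the formal theory.

```python
def canonical_cbr_falsified(runs):
    """Sketch of the Theorem 3 decision logic on a hypothetical run log.

    Each run is a dict with two flags:
      'valid'     -- the run realized the declared test conditions;
      'departure' -- it showed an accessibility-linked departure beyond
                     declared uncertainty and model tolerance.
    The criterion fires only when the domain was validly probed and no
    valid run left the baseline class.
    """
    valid_runs = [r for r in runs if r["valid"]]
    if not valid_runs:
        return False  # the declared test conditions were never realized
    return not any(r["departure"] for r in valid_runs)

# Hypothetical run logs:
null_everywhere = [{"valid": True, "departure": False}] * 5
one_departure = null_everywhere + [{"valid": True, "departure": True}]
all_invalid = [{"valid": False, "departure": False}] * 5

print(canonical_cbr_falsified(null_everywhere))  # True: criterion fires
print(canonical_cbr_falsified(one_departure))    # False
print(canonical_cbr_falsified(all_invalid))      # False: never validly probed
```

The third case encodes the caveat above: an experiment that fails to realize its own declared test conditions cannot trigger the failure criterion.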
9.3 What would survive failure
The severity of the failure criterion makes it essential to state, with equal precision, what would and would not survive invalidation. Failure of the present theorem would not establish that every conceivable realization-law philosophy is false. It would not prove that no physically meaningful outcome-selection law can exist. It would not refute all frameworks in which accessibility, record structure, or post-dynamical selection might matter in some different form.
What it would refute is narrower and exactly what this paper has earned the right to claim: the canonical CBR law form as stated here. More precisely, it would refute the conjunction of the following claims:
that realization is governed by the canonical minimization law introduced in this paper;
that admissibility is correctly captured by the present restricted structure;
that accessibility enters realization law in the operational manner defined here;
that the designated protocol family is a valid exposure domain for the theory’s distinctness;
that the resulting signature burden is correctly described by the present theorem.
If the failure criterion is triggered, then that conjunction is false. Broader realization-law reasoning may remain logically open, but this canonical instantiation does not. That distinction matters because it prevents both overclaim and retreat. The theory is not allowed to collapse into a vague philosophical preference once its exact empirical burden fails. What survives is only the possibility of some different theory, not the truth of this one.
9.4 Why this sharpens the paper rather than weakens it
A common mistake in foundational writing is to treat explicit failure conditions as if they diminish the ambition of the theory. In fact, the opposite is true. A theory that names what would kill it becomes stronger, not weaker, because it replaces interpretive elasticity with scientific exposure. The present paper gains force precisely by making clear that its law form does not live merely in conceptual space. It lives under a finite empirical liability.
This sharpening matters especially for a realization-law framework. Such frameworks are often criticized, sometimes correctly, for never reducing themselves to a test-bearing claim. By stating a finite protocol family, a finite operational variable, a finite signature burden, and a finite failure condition, the present paper passes the point at which that criticism can still apply in the same way. Whether the law ultimately survives experiment is a separate matter. What matters here is that the theory now permits the world to answer it.
9.5 The role of falsification in the overall logic of the paper
This theorem is the final step in the paper’s internal sequence. Section 4 fixed the canonical law form. Section 5 narrowed admissibility and established restricted uniqueness. Section 6 operationalized accessibility. Section 7 fixed the canonical protocol family. Section 8 proved that if accessibility matters to realization, then the designated observable family cannot remain globally baseline-equivalent. The present section closes that sequence by stating the consequence of empirical failure. Without this theorem, the paper would end with a sharpened empirical suggestion. With it, the paper ends with a theory.
That is the correct role of the falsification theorem. It is not an appendix to the argument. It is the point at which the earlier formal work becomes scientifically complete enough to matter. A realization-law proposal that never states what would invalidate it remains unfinished. The present paper does not remain unfinished in that way.
10. Relationship to Born Structure
The preceding sections establish a canonical law form, a restricted uniqueness result, a designated protocol family, an accessibility-sensitive empirical signature, and a finite failure condition. None of those results, by themselves, settles the deepest remaining burden for any realization-law proposal: the status of probabilistic structure, and in particular the relation of the canonical law to Born-type weighting. The purpose of the present section is therefore not to enlarge the paper’s claims, but to state with precision what has and has not been achieved. This is necessary because a realization-law theory can fail in two opposite ways at this stage. It can overclaim closure it has not earned, or it can understate the significance of what it has actually fixed. The present section is written to avoid both errors.
10.1 What is and is not claimed
This paper does not claim final universal Born-neutrality closure. It does not prove that every appearance of amplitude-squared weighting in the quantum formalism has been derived from wholly premise-independent principles. It does not show that every conceivable route by which probabilistic structure might re-enter the realization problem has been eliminated. Those results would require a more extensive theorem program than the one undertaken here.
What the paper does claim is narrower and exact. It claims that the canonical realization law can be written without overt insertion of the target weighting at the level of law definition, that admissibility and minimization can be disciplined strongly enough to prevent obvious probabilistic stipulation, and that the resulting theory can incur a finite empirical burden without pretending that the full probability problem has already disappeared. In other words, the paper claims restricted probabilistic discipline, not final probabilistic closure.
This distinction is essential. A theory that smuggles its desired weighting into the law by definition gains apparent elegance only by hiding its strongest burden. A theory that refuses such insertion but clearly marks what remains open has not solved everything, but it has done something scientifically stronger: it has separated what is already fixed from what still requires proof.
10.2 Why this does not collapse the paper
The absence of final Born-neutrality closure does not collapse the present paper because the empirical and formal achievements of the paper do not depend on pretending that every deeper probabilistic theorem has already been won. A theory may become scientifically meaningful before it becomes universally complete. What it must not do is disguise unresolved burdens as if they had already been discharged.
The present paper avoids that failure. The canonical law form is not probabilistically empty. It defines a non-arbitrary admissibility structure, a constrained realization functional, and an accessibility-sensitive empirical burden. The restricted uniqueness theorem does not depend on having already solved every probability question in full generality. The accessibility-signature theorem likewise does not depend on a completed derivation of all probabilistic structure. What those results require is that the law be sufficiently disciplined to avoid overt probabilistic insertion and sufficiently exact to incur empirical liability. Those conditions have been met within the stated scope of the paper.
The right conclusion is therefore not that unresolved Born-structure questions render the present results inert. The right conclusion is that the present results isolate the probabilistic burden rather than confusing it with the law’s canonical, operational, and empirical burdens. That isolation is itself a form of progress.
10.3 Restricted probabilistic corollary
A limited but important corollary can now be stated.
Corollary 10.1. Within the canonical protocol family and under the restricted assumptions of the present paper, the realization law does not require overt insertion of outcome weights at the level of operational prediction in order to generate its accessibility-sensitive empirical burden.
The meaning of this corollary is precise. The paper’s signature claim is not obtained by first encoding the desired weighting structure directly into the observable law and then calling the result empirical consequence. Rather, the signature arises because accessibility enters the realization law through the admissibility and burden structure, and because that law cannot remain globally baseline-equivalent if accessibility is realization-effective. The empirical burden therefore does not depend on an explicit probability rule being smuggled into the operational statement of the theory.
This corollary is intentionally restricted. It does not say that all residual probabilistic questions have been answered. It says only that the paper’s central empirical claim is not vacuous in the sense of being produced by overt probabilistic insertion. That is enough for the present stage of the theory. It secures the integrity of the paper’s empirical argument without overstating what has been derived.
10.4 Open theorem burden
The remaining burden must now be stated plainly. A full non-circular derivation of Born-type probabilistic structure remains an open theorem program. If canonical CBR is to advance from the level achieved in this paper to a more complete foundational theory, it must either show that the relevant probabilistic structure is forced by the admissibility and realization architecture under stronger general assumptions, or show with equal clarity which restricted probabilistic premise remains irreducible and why that premise is physically justified rather than merely inserted.
That burden is not closed here. It is deferred, and it is deferred explicitly. This matters because the paper does not need to claim more than it has established in order to be significant. The canonical law has been fixed. The admissible class has been narrowed. Accessibility has been operationalized. A designated protocol family has been chosen. An empirical signature theorem has been stated. A failure condition has been made public. Once those results are in place, the unresolved Born-structure burden becomes exactly what it should become: not a hidden weakness inside the present theorem, but the next exact place where the theory must either deepen or stop.
10.5 Why the restraint matters
This section is restrained for a reason. A realization-law proposal becomes stronger when it is exact about the boundary between achieved closure and open burden. Overstatement would weaken the paper by making it appear to claim a finality its own formal results do not yet justify. Understatement would also weaken it by failing to recognize that canonicality, operational accessibility, empirical signature, and falsification already amount to a real theoretical achievement. The correct position lies between those extremes.
The present paper has not solved every probabilistic question. But it has done enough to make those questions sharper, narrower, and harder to hide. That is the correct relation between the current theorem and the Born problem: not final resolution, but disciplined isolation under a law that has now become canonically specified, operationally meaningful, and experimentally vulnerable.
11. Rival Explanations and Exclusion Analysis
The formal and empirical claims of canonical CBR acquire significance only if they are not trivially absorbed by more familiar explanatory categories. The present section therefore addresses the nearest rival readings and states, in each case, what the current paper excludes and what it does not. The aim is not polemical excess. It is to prevent category confusion. A realization-law proposal fails if its apparent novelty consists only in rebranding existing explanatory resources without adding a distinct law form, a distinct admissibility structure, or a distinct empirical burden.
11.1 Decoherence-only baseline
The first rival is the decoherence-only baseline. Decoherence explains, with genuine physical content, how interference becomes effectively suppressed in reduced descriptions, how stable pointer structures emerge under environmental monitoring, and how records may become robust against recoherence in practical conditions. None of that is denied here. The exclusion claim is narrower and more precise: decoherence alone does not by itself supply a realization law.
The reason is structural. Decoherence transforms the descriptive situation by altering which superpositions remain locally accessible and by stabilizing certain correlations into record-bearing forms. It explains why reduced density operators may become approximately diagonal in relevant pointer bases and why interference terms become practically irrelevant. What it does not transparently do is identify a law selecting one realized outcome channel from among the physically meaningful possibilities compatible with the decohered structure. Registration is not identical to realization. Suppression of accessible interference is not itself a law of single realized outcome selection.
If one denies that any such law is needed, then one exits the present framework and enters an alternative interpretive strategy. That is a legitimate move, but it is not an answer from within decoherence alone. The current paper therefore excludes the claim that decoherence, unaided, already supplies the kind of context-indexed selection law formalized here. If it did, then the admissibility class 𝒜(C), the realization functional ℛ_C, and the selected channel Φ★_C would be redundant. The entire theorem program of this paper would collapse into baseline language. The fact that a distinct canonical law form must be introduced is itself evidence that decoherence and realization are not being treated as identical.
11.2 Collapse-style reinterpretations
A second objection is that canonical CBR is merely collapse theory in renamed form. This objection must be answered carefully, because superficial similarity is easy to overstate. Both collapse-style proposals and realization-law proposals address the problem of actual outcomes. But sameness of target is not sameness of theory.
Canonical CBR is not defined by a stochastic state jump inserted into the dynamics, nor by a generic collapse postulate applied whenever measurement is declared to occur. Under A1, the theory does not modify ordinary quantum evolution outside realization selection. Its central object is not a collapse operator or collapse rate, but a context-indexed admissible class of realization-compatible channels together with a burden-minimization law. The key formal relation is
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ),
not a primitive dynamical interruption rule.
This distinction matters. A collapse-style theory typically introduces an additional dynamical ingredient that directly alters how state evolution proceeds. Canonical CBR instead treats realization as a law-governed selection over admissible channels constrained by representational invariance, record-structural coherence, and accessibility consistency. Its central burden is not to specify when collapse happens in an already accepted informal sense, but to define a non-arbitrary law of selection in contexts where registration alone does not settle realization.
This does not prove that collapse-style theories are wrong or that no deeper formal relation could ever be drawn. It proves only the point needed here: canonical CBR is not merely relabeled collapse. If one insists on calling every single-outcome law a “collapse theory” in the broadest possible semantic sense, then the label loses discriminatory value. The present paper instead uses a more useful distinction: collapse-style dynamical supplementation and canonical realization-law selection are not the same formal move.
11.3 Everettian absorption objection
A third objection is that branching plus observer-conditioning already absorbs everything CBR is trying to accomplish. On that view, there is no need for a realization law because the full post-measurement structure is already given by branching, and apparent outcome selection is explained by conditional location within the branching state.
The present paper does not attempt to refute Everettian theory in general. It addresses a narrower point. Branching plus observer-conditioning is not equivalent to realization-law selection unless one treats branch-relative conditioning as a complete substitute for the physical question of which outcome structure is realized. Canonical CBR does not grant that substitution. It is built precisely on the premise that one may still ask, within a context of record formation and branching-like structure, what law selects the realized outcome channel.
The difference is therefore not cosmetic. Everettian absorption dissolves the target of the present paper by denying that single-outcome realization requires further law. Canonical CBR retains that target and formalizes it. The two approaches are thus not rival parametrizations of the same law; they are different answers to the question of whether a realization law is needed at all.
This point is sharpened by the protocol analysis. If accessibility is physically irrelevant to realization because realization itself is not a distinct law-governed event, then canonical CBR should collapse toward baseline equivalence and the accessibility-signature theorem loses force. If, however, realization is a physically meaningful selection problem, then the theorem burden becomes live. The Everettian objection therefore does not absorb canonical CBR from within its own assumptions. It rejects the motivating question. That is a substantive rival stance, not an internal reduction.
11.4 Hidden-variable objection
A fourth objection is that canonical CBR is merely a disguised hidden-variable scheme. This objection is also too quick. Hidden-variable approaches typically supplement the quantum state with additional state variables whose evolution or configuration helps determine realized outcomes. Canonical CBR, by contrast, does not postulate a hidden ontic state appended to ρ, nor does it introduce latent trajectory variables, pre-existing definite values, or supplementary microstates whose evolution explains selection.
Its formal resources are context-indexed admissibility, a realization functional, and a selected channel. These are law-structural objects, not hidden-state assignments. The framework does not say that the outcome is fixed by an unobserved variable λ carried by the system and revealed by measurement. It says that, given a measurement context C and its realization-relevant physical structure, one admissible realization channel is selected by constrained minimization. That is a different explanatory architecture.
Of course, one could attempt to reinterpret the selected channel as if it encoded an effective hidden variable. But such reinterpretation would be external to the formal content actually introduced. No λ-like supplementation appears in the canonical law form, no hidden-value dynamics is specified, and no appeal is made to pre-existing value assignments. The hidden-variable objection therefore fails as a description of the current theory. Canonical CBR may be contested on other grounds, but it is not well described as a disguised hidden-state supplementation scheme.
11.5 Why ordinary smooth-response baselines are insufficient
The preceding rival analyses converge on one central point: none of the familiar absorptions can eliminate the need for the theorem burden once canonical CBR is granted its own target. That is why ordinary smooth-response baselines are insufficient as a final answer within the present framework.
If decoherence alone settled realization, if collapse language already captured the law form, if branching dissolved the target, or if hidden-state supplementation were what the theory secretly meant, then there would be no need for Theorem 2. But once canonical CBR is stated on its own terms, with η treated as realization-relevant and A7 requiring empirical accountability, the protocol family cannot be allowed to remain globally ordinary in the same smooth-response class as the baseline for all η-regimes.
That is the force of the accessibility-signature theorem. Smooth baseline behavior is not “insufficient” because smoothness is physically suspicious in itself. It is insufficient because a law that treats accessibility as physically relevant to realization cannot remain operationally inert across the very protocol family designed to vary accessibility. The smooth baseline is therefore not excluded as an approximation in many regimes. It is excluded as a globally exhaustive description if canonical CBR is correct.
12. Robustness, Noise, and Experimental Practicality
A realization-law proposal does not become experimentally serious merely by naming a signature. It becomes experimentally serious when it specifies the ordinary ways in which that signature may be distorted, mimicked, suppressed, or rendered undecidable, and when it states the conditions under which the predicted signal remains distinguishable from those ordinary effects. The purpose of the present section is therefore not auxiliary. It is to convert the accessibility-signature theorem from a formal consequence of the exact model into an experimentally accountable claim.
This section does four exact things. First, it defines the perturbative structure within which the exact response must be observed. Second, it identifies the principal false-positive mechanisms by which ordinary apparatus behavior could imitate the predicted anomaly. Third, it identifies the principal false-negative mechanisms by which a genuine realization-sensitive signal could be washed out. Fourth, it states the criterion under which the observed response counts as statistically separated from the baseline class rather than merely visually suggestive. The section is written relative to the exact platform already fixed in the paper. It does not broaden the theory to generic laboratory messiness. It specifies the exact nonidealities the theory is entitled to confront on its own declared test domain.
12.1 Noise model
Let V_model(η) denote the ideal model-level response, which may be either the exact baseline response V_SQM(η) or the exact instantiated realization-law response V_CBR(η), depending on which hypothesis is under comparison. The experimentally observed visibility is written as
V_obs(η) = V_model(η) + δ_det(η) + δ_erase(η) + δ_env(η) + δ_cal(η),
where each correction term denotes a physically distinct perturbation channel on the declared platform.
The term δ_det(η) collects perturbations originating in the detection apparatus: finite efficiency, path-dependent detection asymmetry, timing jitter, dead-time distortions, and dark-count background. The term δ_erase(η) collects perturbations associated with branch-control imperfections: incomplete erasure, residual which-path distinguishability, retrieval impurity, or mixed branch realization caused by imperfect switching between retrieval and erasure logic. The term δ_env(η) collects undeclared environmental contributions: thermal drift, stray entanglement, uncontrolled leakage of record information, and decoherence channels not already incorporated into the declared smooth baseline class. The term δ_cal(η) collects the perturbation induced by uncertainty in the accessibility construction itself, including both uncertainty in η and the induced vertical uncertainty in the reconstructed response.
This decomposition is not merely classificatory. It enforces the correct discipline on the model comparison. If an observed critical-regime feature can be explained by one or more of these ordinary perturbations within the declared tolerance envelope, then the feature does not count as evidence for the instantiated realization law. Conversely, if the exact model predicts a critical-regime signature whose scale exceeds the combined perturbative envelope in the relevant window, then absence of the signature becomes theory-relevant rather than instrument-relevant.
For the purposes of the present section, the perturbations are assumed bounded on the experimentally sampled domain. Thus there exist nonnegative constants ε_det, ε_erase, ε_env, and ε_cal such that
|δ_det(η)| ≤ ε_det,
|δ_erase(η)| ≤ ε_erase,
|δ_env(η)| ≤ ε_env,
|δ_cal(η)| ≤ ε_cal
throughout the tested η-range. The total perturbative envelope is then
ε_tot = ε_det + ε_erase + ε_env + ε_cal.
The significance of ε_tot is exact. It is the largest ordinary departure from the ideal model response the platform is allowed to produce without leaving the declared perturbative regime.
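The envelope bookkeeping of this subsection can be sketched directly. The numerical ε values below are placeholders, not measured platform numbers; the sketch shows only the decision discipline: a deviation from the ideal model response is theory-relevant only if it exceeds the summed envelope ε_tot.

```python
# Illustrative bookkeeping for the perturbative envelope of Section 12.1.
# The epsilon values are hypothetical placeholders, not platform data.
EPS = {"det": 0.010, "erase": 0.015, "env": 0.008, "cal": 0.012}
eps_tot = sum(EPS.values())  # total ordinary-perturbation envelope

def theory_relevant(observed, model):
    """A deviation counts as theory-relevant only if it exceeds eps_tot.

    observed, model: visibility values at the same eta point.
    """
    return abs(observed - model) > eps_tot

print(round(eps_tot, 3))            # 0.045 for these placeholder values
print(theory_relevant(0.62, 0.70))  # True: |0.08| exceeds the envelope
print(theory_relevant(0.68, 0.70))  # False: within ordinary perturbation
```

The converse discipline in the text follows the same arithmetic: if the instantiated model predicts a critical-regime signature larger than ε_tot and none is seen, the absence is theory-relevant rather than instrument-relevant.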
12.2 False positive controls
A false positive occurs when ordinary apparatus behavior generates an apparent critical-regime structure that could be mistaken for the realization-sensitive signature predicted by the theory. The exact platform of the paper is especially vulnerable to four such mechanisms.
The first is η-correlated detector distortion. If the experimental procedure used to vary accessibility also alters optical alignment, detector thresholds, timing resolution, or branch-count normalization, then the resulting apparatus response may produce an apparent local slope change or visibility suppression unrelated to the realization law. This possibility is particularly serious because the theory’s primary signature is concentrated near η_c. Any control procedure that introduces localized detector drift in the same regime can mimic the very morphology the theory predicts.
The second false-positive mechanism is incomplete branch purity. If the retrieval and erasure operations are not cleanly implemented, then mixed branch logic may produce an apparent deformation of the visibility response near the transition region. In particular, residual which-path information in the nominal erasure branch, or partial erasure in the nominal retrieval branch, can create local irregularities that resemble a weak-form deviation class without reflecting realization-sensitive structure.
The third mechanism is baseline misspecification. If the declared baseline class 𝒮_baseline omits an ordinary smooth control dependence already present in the exact platform, then the resulting model mismatch may be misread as theory support. This is why the baseline must be defined before decisive comparison and why its perturbative enlargement must be honest rather than strategically narrow.
The fourth mechanism is postselection or reconstruction bias. Since visibility is reconstructed from signal statistics and may, in some branches, depend on conditional sorting, any η-dependent bias in how data are filtered, binned, or normalized may manufacture an apparent anomaly. A realization-law signal does not become stronger because the data pipeline is more complex. On the contrary, the more branch-sensitive the platform, the more stringent the requirement that reconstruction logic be fixed in advance.
These mechanisms determine the necessary false-positive controls. Detector calibration must be performed independently of η tuning. Branch purity must be benchmarked before comparison with the theory response. The baseline class must be declared before the discriminating fit. Data reconstruction and conditional sorting procedures must remain fixed across the sampled accessibility domain. Only under those conditions does a critical-regime departure acquire any evidential meaning.
12.3 False negative risks
A false negative occurs when the theory’s exact or weak-form signature is present in the ideal model but is not resolved in the experiment. This possibility is as important as false positives, because the paper’s invalidation logic applies only when the platform has actually entered a detectability-valid regime.
The first false-negative risk is insufficient accessibility leverage. If the experiment fails to probe sufficiently across η_c into the postcritical regime, then the exact signature may never become large enough to outgrow the perturbative envelope. A theory whose signal begins only after the accessibility threshold cannot be invalidated by a dataset confined to subcritical values.
The second risk is critical-window smearing. Even when η_c is nominally sampled, poor calibration of η or overly coarse averaging may spread data points across the transition region so broadly that a local derivative break is washed into apparent smoothness. This is especially dangerous because the theory’s primary signature is local. The more concentrated the signal, the more important accurate horizontal placement becomes.
The third risk is environmental over-smoothing. Uncontrolled decoherence, thermal drift, or leakage into undeclared record channels may smooth the response enough that the strong-form signature is lost, even if a weak-form bounded deviation remains in principle. If the total perturbative envelope becomes too large, the experiment exits the regime in which the theory’s signal can be decisively tested.
The fourth risk is underpowered reconstruction. If detector efficiency, counting statistics, or branch-resolved sampling are too weak in the postcritical regime, the visibility response may remain too noisy for the predicted separation to emerge above the declared tolerance threshold.
These risks do not weaken the theory. They define the conditions under which the theory is being given a fair test. A null result in a regime where one or more of these false-negative risks dominates does not yet count as theory failure. It counts only as failure to reach the detectability-valid domain required by the paper’s later invalidation theorem.
12.4 Statistical distinguishability requirement
The present paper does not treat visual suggestiveness as evidence. The exact burden of the theory requires a stricter criterion: the observed response must be statistically distinguishable from the declared perturbed baseline class in the critical regime. This means that evidential separation must be defined comparatively rather than impressionistically.
Let 𝒮_baseline^pert denote the perturbed baseline response class obtained by enlarging the exact baseline V_SQM(η) through the bounded perturbative envelope already declared. Let 𝒮_CBR^pert denote the corresponding perturbed realization-law response class generated by the exact CBR response V_CBR(η) together with the same perturbative discipline. The observed data count as evidentially separated in favor of the instantiated CBR model only if there exists a nonempty critical-regime window U around η_c such that no member of 𝒮_baseline^pert reproduces the observed response on U within the declared tolerance, while at least one member of 𝒮_CBR^pert does.
At the level of the exact main-text model, this condition reduces to a scale inequality. Let ΔV(η) = V_CBR(η) − V_SQM(η) be the exact separation function. Then the theory becomes experimentally resolvable on a sampled postcritical window U_δ if
sup_{η ∈ U_δ} |ΔV(η)| > 2ε_tot.
The meaning of this inequality is straightforward. The exact CBR departure must exceed twice the total perturbative envelope so that the perturbed response bands around the baseline and realization-law models no longer overlap on at least one nonempty subset of the critical regime. If this condition is not met, the experiment may still be suggestive, but it has not yet achieved discrimination. If it is met, then the absence of the predicted signature becomes scientifically consequential.
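Once the sampled separation function and the four perturbative bounds are in hand, the scale inequality above can be checked directly. The following sketch is illustrative only: the response samples and the numerical epsilon bounds are hypothetical, not values fixed by the theory.

```python
# Hypothetical numerical check of the distinguishability inequality
#   sup_{eta in U_delta} |DeltaV(eta)| > 2 * eps_tot   (Section 12.4).
# All numerical values below are illustrative, not fixed by the paper.

def total_envelope(eps_det, eps_erase, eps_env, eps_cal):
    """eps_tot = eps_det + eps_erase + eps_env + eps_cal (Section 12.1)."""
    return eps_det + eps_erase + eps_env + eps_cal

def is_resolvable(delta_v_samples, eps_tot):
    """True iff sup |DeltaV| over the sampled window exceeds 2 * eps_tot,
    so the perturbed baseline and CBR response bands cannot overlap
    everywhere on the window."""
    return max(abs(dv) for dv in delta_v_samples) > 2.0 * eps_tot

# Hypothetical bounds and sampled DeltaV(eta) on a postcritical window.
eps_tot = total_envelope(0.01, 0.015, 0.02, 0.005)   # = 0.05
postcritical_delta_v = [0.02, 0.06, 0.11, 0.09]

print(is_resolvable(postcritical_delta_v, eps_tot))  # sup = 0.11 > 0.10, so True
```

The factor of two is what keeps the two perturbed response bands from touching: each band may wander by at most ε_tot from its ideal center, so the centers must be separated by more than 2ε_tot somewhere on the window.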
In the strong-form regime, this distinguishability may appear as a resolvable local derivative change near η_c. In the weak-form regime, it appears as a bounded non-baseline deviation class of sufficient amplitude to escape the perturbed baseline envelope. The same standard governs both cases: a theory-supporting signal must remain outside the baseline class after all declared ordinary imperfections have been accounted for.
12.5 Why this section matters to the theorem structure
This section is the point at which the paper’s empirical claim becomes technically serious. Without it, the accessibility-signature theorem would specify a formal burden but not the conditions under which that burden survives ordinary experimental imperfection. With it, the paper now contains the complete intermediate logic between ideal theory and invalidation: exact signal, exact perturbative structure, exact false-positive controls, exact false-negative boundaries, and exact distinguishability criterion.
That intermediate logic is indispensable. A theory that can be supported only in ideal mathematical space is not yet experimentally mature. A theory that states in advance what level of perturbation it can survive, and what level would render its test inconclusive, has crossed into a different category. It is no longer merely elegant. It is accountable.
13. Limits of the Present Result
The present paper becomes stronger, not weaker, by stating its limits with precision. The canonical law form, the restricted uniqueness result, the operational accessibility variable, the designated protocol family, the accessibility-signature theorem, and the falsification criterion together establish a real theory candidate. They do not establish a final universal completion of quantum foundations. The purpose of this section is therefore not defensive qualification. It is exact scope control. A result of this kind must identify the boundary between what has been fixed and what remains open, not because that boundary diminishes the theory, but because without it the theory would lose the very discipline that gives it force.
13.1 Limits of the theorem class
The theorems of the present paper are restricted to the canonical CBR form under the stated axioms, regularity assumptions, and designated protocol family. They do not constitute a universal no-go result for every conceivable realization-law alternative. Nor do they show that every measurement context in quantum theory must display a realization-sensitive deviation from the standard baseline. What has been proved is narrower and exactly sufficient for the paper’s burden: if accessibility enters the canonical realization law nontrivially, then the designated accessibility-sensitive protocol family cannot remain globally trapped inside the same smooth baseline response class across all accessibility regimes.
This restriction is essential. A theorem that claimed more would exceed what the paper has actually established. A theorem that claimed less would fail to convert the framework into a vulnerable theory candidate. The present theorem class is therefore deliberately exact: strong enough to force empirical exposure, narrow enough to remain defensible.
13.2 Limits of accessibility reduction
The operational accessibility variable η is necessary for the paper’s empirical program, but its present construction is not claimed to be uniquely determined across all possible experimental architectures. In the canonical form of the present paper, η is a structured operational variable defined by the physically relevant availability of outcome-defining records. That is sufficient for theorem-level work at the canonical stage. It does not imply that every future platform must admit the same scalar reduction, the same primitive ingredients, or the same calibration route.
This limit matters because the theory’s empirical burden depends on accessibility being physically meaningful rather than merely convenient. The present paper has done enough to make accessibility law-relevant and operationally controllable in the designated protocol family. It has not yet proved that one universal accessibility scalar exhausts every future realization-sensitive context. That broader task belongs to later implementation work, not to the present canonical compression.
13.3 Limits on probabilistic closure
The paper does not complete the full non-circular derivation of Born-type probabilistic structure. It excludes overt insertion of the target weighting at the level of the canonical law form and isolates the probabilistic burden rather than concealing it, but it does not yet establish final universal Born-neutrality closure. That result would require a separate and deeper theorem program.
This limitation is substantive, but it is not destabilizing. The present paper does not need universal probabilistic closure in order to establish a canonical law form, a restricted admissibility structure, an operational accessibility variable, a designated signature domain, and a finite falsification criterion. What remains open is not whether the theory has acquired empirical burden, but whether its probabilistic foundations can ultimately be closed without residual premise dependence. That is an important burden, but it is the next burden, not the present one.
13.4 Limits of empirical scope
The empirical exposure achieved here is finite rather than universal. The paper identifies one designated protocol family in which accessibility may become realization-effective and in which the canonical law cannot remain globally baseline-equivalent if its central claim is true. It does not claim that the same signature must already be visible in broad ordinary measurement settings, nor that every implementation of the designated family will be equally diagnostic. Experimental leverage, calibration quality, perturbative control, and protocol fidelity all matter.
This is not a retreat from empirical seriousness. It is what empirical seriousness requires at this stage. A theory candidate does not first become scientifically real by claiming observable anomaly everywhere. It becomes scientifically real by binding itself to one domain in which anomaly must appear if the theory is correct and by stating the condition under which its absence counts against the theory. The empirical scope of the present paper is therefore intentionally narrow because narrowness is what permits real exposure.
13.5 Why the result still matters
These limits do not reduce the significance of the present result. They define it. The paper has fixed a canonical law form, narrowed admissibility, operationalized accessibility, identified a designated test family, derived an accessibility-sensitive signature burden, and stated a finite failure condition. That is enough to convert CBR from a broadened realization-law program into a canonically specified and empirically vulnerable theory candidate.
One exact theory object is sufficient for that transition. A framework does not need universal closure over every rival, universal deviation in every setting, or final settlement of every probabilistic burden in order to become scientifically meaningful. It needs only to become explicit enough to be judged, narrow enough to be tested, and vulnerable enough to fail. The present paper has reached that threshold. Its limits are therefore not a sign of incompleteness in the pejorative sense. They are the exact conditions under which the result becomes credible.
14. Conclusion
This paper has fixed the exact object on which its claims stand. The designated empirical domain is an accessibility-sensitive delayed-choice quantum eraser and related record-accessibility interferometric protocol family in which the physically relevant availability of outcome-defining records is represented by an operational accessibility variable η. Within that domain, the paper has argued that realization is not exhausted by ordinary evolution or by record registration alone, and has therefore posed the problem in the form required for a realization law: given a context C and an admissible class 𝒜(C) of realization-compatible channels, what law selects the realized channel? The canonical answer proposed here is the minimization rule
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with ℛ_C canonically decomposed into representational invariance, record-structural coherence, and accessibility-consistency burdens.
On that basis, the paper has established a restricted uniqueness result. Within the declared admissibility class, the selected realization channel is unique up to operational equivalence. This means that the law does not leave realization floating among many physically inequivalent candidates, nor does it permit realization verdicts to depend on notation, arbitrary labeling, or formally convenient but physically idle redescriptions. The canonical law therefore functions as an actual selection rule rather than as a merely structured heuristic.
The paper has then shown that if accessibility enters realization law nontrivially, the resulting theory incurs a finite empirical burden in the designated protocol family. The standard comparator V_SQM(η) and the realization-sensitive response V_CBR(η) cannot remain globally equivalent across all accessibility regimes. More strongly, under the stated regularity assumptions, the non-equivalence concentrates in a critical accessibility regime near η_c and appears as a local derivative break or kink in the primary observable; if those stronger regularity assumptions are weakened, a bounded non-baseline deviation class remains. The theory therefore does not merely claim that accessibility matters conceptually. It claims that accessibility, if physically realization-effective, must leave an observable trace in a finite experimental domain.
Finally, the paper has stated the consequence of failure. Because the law form, admissibility structure, operational variable, protocol family, baseline class, and signature burden have all been fixed in advance, the absence of the required critical-regime departure is not merely disappointing evidence. It is theory-relevant failure. If the designated protocol family exhibits only baseline-class behavior across the physically relevant and experimentally accessible η-domain under the declared validity conditions, then canonical CBR in its present law form is false.
The result is therefore exact in both directions. The paper does not claim universal closure over every rival realization law, universal deviation across all measurement settings, or final non-circular closure of all Born-structure questions. What it does claim is narrower and sufficient: one canonically specified realization law, one restricted uniqueness result, one operational accessibility variable, one designated empirical signature class, and one finite failure condition. With this paper, CBR is no longer presented merely as a broad realization-law architecture or a progressively sharpened foundational program. It is presented as a canonically specified, operationally exposed, and finitely falsifiable theory candidate whose central burden is now public.
Appendix
Appendix A. Formal Definitions and Notation
This appendix fixes the formal objects used throughout the paper and states the exact meanings of the principal symbols, maps, and equivalence relations appearing in the canonical formulation of Constraint-Based Realization. Its purpose is not merely editorial. A realization-law proposal cannot sustain canonicality, uniqueness, or falsification claims if its basic objects remain only informally specified. The present appendix therefore supplies the minimal formal backbone required for the main theorems of the paper to be read as statements about a definite mathematical structure rather than about a family of loosely related intuitions.
The guiding principle of the appendix is scope discipline. Only those objects required for the canonical law form, the admissibility definition, the accessibility-signature theorem, and the failure criterion are fixed here. The appendix does not attempt to supply a complete universal ontology for all possible realization-law theories. It supplies the exact formal language needed for the canonical CBR form developed in the main text.
A.1. Hilbert-space setting
Let ℋ denote the Hilbert space associated with the physical system under consideration. The theory is intended to apply at the level of ordinary quantum state representation and therefore assumes the standard state-space formalism of quantum mechanics as its baseline dynamical setting.
Let 𝒟(ℋ) denote the set of density operators on ℋ, that is, the set of positive trace-class operators ρ on ℋ such that Tr(ρ) = 1.
Throughout the paper, ρ ∈ 𝒟(ℋ) denotes the physical state description relevant to the measurement context under analysis. The canonical CBR formalism does not replace this state-space structure. It acts on it at the level of realization selection.
Where useful, a context may induce a tensor-factor decomposition
ℋ = ℋ_s ⊗ ℋ_r ⊗ ℋ_e,
where ℋ_s is a signal or measured subsystem, ℋ_r is a record-bearing subsystem, and ℋ_e is an environment or auxiliary sector. No theorem in the present paper depends on a universal privileged decomposition of this kind, but the notation is convenient for designated protocol families in which record accessibility is physically meaningful.
A.2. Measurement context
A measurement context is denoted by C.
The context C is not a single operator or a single basis choice. It is a structured physical specification containing, at minimum, the following:
the state space relevant to the measurement arrangement,
the instrument or interaction structure by which outcome-defining correlations are generated,
the record-bearing architecture through which outcomes may become physically encoded,
the operational accessibility structure associated with those records,
the observable or observable class used in the designated protocol family.
Accordingly, C should be understood as a context object, not as shorthand for “measurement basis.” Two contexts may share the same Hilbert space and the same initial state while differing in record accessibility, retrieval timing, or record dissemination structure, and are therefore not identical for purposes of canonical CBR.
When convenient, one may write
C = (ℋ, ρ, ℐ, ℛec, 𝒪, η),
where ℐ denotes instrument structure, ℛec denotes record structure, 𝒪 denotes the operational observable class, and η denotes the accessibility parameter associated with the context. This tuple should be read as schematic rather than exhaustive.
A.3. Admissible class of realization-compatible channels
For each context C, canonical CBR assigns a nonempty admissible class
𝒜(C)
of realization-compatible channels.
An element Φ ∈ 𝒜(C) is a candidate realization channel. Its exact mathematical status is that of an effective map acting on 𝒟(ℋ) or on the relevant context-restricted state domain, subject to the admissibility conditions stated in the main text. The paper does not require that every admissible Φ be given as a universal microscopic channel formula on all of 𝒟(ℋ). It requires only that each admissible Φ define a realization verdict structure compatible with the physical context C.
The intended minimal properties of admissible realization channels are these:
they preserve compatibility with the baseline quantum dynamics outside realization selection,
they are invariant under physically irrelevant reformulations,
they track physically meaningful record structure,
they treat accessibility-equivalent contexts consistently,
they do not encode probabilistic weighting by direct stipulation.
The class 𝒜(C) is therefore context-indexed and physically constrained. It is not the set of all formally writable maps on 𝒟(ℋ).
A.4. Realization functional
The canonical realization functional assigned to context C is
ℛ_C : 𝒜(C) → ℝ_≥0,
with canonical form
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ),
where α, β, γ ≥ 0 are fixed theory-level coefficients.
The three burden terms are defined as follows.
A.4.1. Representational invariance burden
Ξ_C(Φ) is the representational invariance burden. It measures the extent to which the realization verdict induced by Φ depends on physically irrelevant reformulation of the context.
A low value of Ξ_C(Φ) means that Φ is stable under relabeling, coordinate change, equivalent encoding, and related descriptive transformations that leave the realization-relevant physical content unchanged.
A.4.2. Record-structural coherence burden
Ω_C(Φ) is the record-structural coherence burden. It measures the extent to which Φ fails to align realization selection with the actual record-bearing organization of the context.
A low value of Ω_C(Φ) means that Φ tracks physically meaningful record structure rather than merely formal branch multiplicity or unsupported distinctions.
A.4.3. Accessibility-consistency burden
Λ_C(Φ) is the accessibility-consistency burden. It measures the extent to which Φ fails to respond coherently to the operational accessibility structure of the context.
A low value of Λ_C(Φ) means that Φ treats accessibility as physically relevant only through the declared operational structure of C, and does so consistently across accessibility-equivalent realizations.
A.5. Selected realization channel
The canonical realization channel selected by context C is
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
This expression is the central law form of the paper.
The symbol Φ★_C denotes the selected channel only up to the equivalence notion appropriate to the paper’s uniqueness theorem. Unless otherwise stated, the selection rule should therefore be read as selecting a realization verdict class rather than a syntactically unique formula in every representational encoding.
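When 𝒜(C) is represented by a finite set of candidate channels whose three burden terms have already been evaluated, the canonical rule reduces to a direct arg min. The sketch below is a minimal illustration under that finiteness assumption; the candidate labels, burden values, and coefficients are hypothetical, and tie-breaking up to operational equivalence is not modeled.

```python
# Minimal sketch of the canonical selection rule
#   Phi*_C = arg min_{Phi in A(C)} R_C(Phi),
# with R_C(Phi) = alpha*Xi_C + beta*Omega_C + gamma*Lambda_C (A.4-A.5).
# Candidate channels, burden values, and coefficients are hypothetical.

def realization_functional(burdens, alpha, beta, gamma):
    """R_C(Phi) for one candidate, given its (Xi, Omega, Lambda) burdens."""
    xi, omega, lam = burdens
    return alpha * xi + beta * omega + gamma * lam

def select_channel(admissible, alpha=1.0, beta=1.0, gamma=1.0):
    """Return the label in A(C) minimizing R_C.

    `admissible` maps a channel label to its (Xi, Omega, Lambda) burdens.
    A real implementation would resolve ties only up to operational
    equivalence; this sketch simply returns one minimizer.
    """
    return min(admissible,
               key=lambda phi: realization_functional(admissible[phi],
                                                      alpha, beta, gamma))

# Hypothetical admissible class with pre-evaluated burden terms.
A_C = {
    "Phi_1": (0.4, 0.2, 0.1),   # (Xi_C, Omega_C, Lambda_C)
    "Phi_2": (0.1, 0.3, 0.2),
    "Phi_3": (0.2, 0.2, 0.5),
}

print(select_channel(A_C))  # Phi_2, with R_C = 0.6, the minimum
```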
A.6. Operational equivalence
The uniqueness theorem of the paper is not a claim of strict syntactic uniqueness. It is a claim of uniqueness up to operational equivalence.
Two admissible realization channels Φ₁ and Φ₂ are operationally equivalent, written
Φ₁ ≈ Φ₂,
if and only if they agree on all realization-relevant observables, record-structural verdicts, and accessibility-sensitive consequences of the designated context C.
More explicitly, Φ₁ ≈ Φ₂ means that any difference between them is physically null relative to:
the context’s declared observable class,
the context’s record-bearing structure,
the context’s accessibility regime,
the protocol family in which empirical burden is incurred.
Operational equivalence is therefore stronger than mere formal similarity and weaker than strict symbolic identity. It is the correct equivalence notion for a realization-law theory because the law should not distinguish what the physical content of the context does not distinguish.
A.7. Accessibility parameter
The operational accessibility parameter is denoted by η.
At the level of the Core Theorem Paper, η is a normalized scalar satisfying
η ∈ [0,1].
Its intended meaning is the physically relevant operational accessibility of outcome-defining records in the designated protocol family.
The limiting values are interpreted as follows:
η = 0 corresponds to an effectively inaccessible record regime,
η = 1 corresponds to a maximally accessible record regime within the declared protocol.
The parameter η is not primitive in the strongest ontological sense. It is a reduced control variable built from physically measurable features of record availability. In more detailed implementations, one may write
η = η(R, P, T, D, S),
where:
R denotes retrieval fidelity,
P denotes public or intersubjective accessibility,
T denotes temporal stability,
D denotes destructive burden of readout,
S denotes redundancy spread.
The exact reduction may vary across implementation volumes, but within the present paper η is treated as a well-defined operational scalar attached to the context C.
A.8. Accessibility-equivalence
Two contexts C₁ and C₂ are said to be accessibility-equivalent, written
C₁ ≈_η C₂,
if and only if:
they realize the same operational accessibility value η within the tolerance relevant to the declared protocol,
they preserve the same realization-relevant record structure up to operational equivalence,
any remaining differences are implementation-level or descriptive rather than differences in realization-relevant physical accessibility.
This relation is necessary because the theory’s accessibility dependence must not collapse into sensitivity to irrelevant engineering detail. Accessibility-equivalence plays, at the operational level, the same stabilizing role that representational invariance plays at the formal level.
A.9. Critical accessibility value
The critical accessibility value is denoted by η_c.
Its role is to mark the accessibility threshold at which the accessibility-sensitive contribution to the realization burden becomes order-determining in the minimization problem over 𝒜(C).
At the canonical level, η_c is defined implicitly by the condition that the accessibility-sensitive burden changes the selected realization ordering. In a reduced formulation, one may express this by saying that η_c is the value at which the leading competing admissible realization classes become burden-balanced.
Thus η_c is not merely an empirical marker. It is a law-internal threshold separating regimes in which accessibility is subdominant from regimes in which it becomes realization-effective.
A.10. Baseline response and CBR response
Let V_SQM(η) denote the standard baseline response function associated with the designated protocol family under ordinary quantum dynamics, decoherence accounting, and conditional reconstruction, but without realization-law augmentation.
Let V_CBR(η) denote the corresponding response function induced by canonical CBR in the same protocol family.
Where visibility alone is insufficient, one may use the more general notation
S_SQM(η), S_CBR(η),
for the baseline and realization-sensitive signature maps respectively, provided visibility remains their primary or paradigmatic component in the designated protocol family.
The paper’s empirical theorems are stated comparatively: they do not require that V_CBR(η) differ from V_SQM(η) in every regime, only that global containment in the same smooth baseline class becomes impossible if accessibility is realization-effective.
A.11. Baseline smooth-response class
The baseline smooth-response class is denoted by
𝒮_baseline.
It is the class of admissible baseline responses associated with ordinary standard-quantum treatment of the designated protocol family.
At the level of the present paper, 𝒮_baseline is defined functionally rather than by one universal closed formula. Its key properties are:
global continuity on the admissible η-domain,
ordinary smooth or piecewise smooth behavior under the declared baseline dynamics,
absence of realization-law critical structure inserted by fiat.
The accessibility-signature theorem states, in effect, that V_CBR(η) or S_CBR(η) cannot remain globally confined to 𝒮_baseline if accessibility contributes nontrivially to realization selection.
A.12. Strong-form and weak-form signature classes
The empirical signature burden of the theory is formulated in two layers.
The strong form is the class of responses exhibiting a critical-regime derivative break, kink, slope discontinuity, or equivalent local nonanalyticity near η_c.
The weak form is the class of responses exhibiting a bounded non-baseline deviation band in a nonempty neighborhood of η_c when the stronger regularity assumptions required for a literal kink are relaxed.
These are not two unrelated predictions. The weak form is the residual empirical burden of the same canonical law when the strongest local regularity claim is softened.
A.13. Physically valid realization of the protocol family
The falsification theorem refers to “physically valid realizations” of the designated protocol family. This phrase must be fixed.
A physically valid realization of the protocol family is any implementation that satisfies the following:
it instantiates the declared signal–idler or record-accessibility architecture relevant to the protocol family,
it calibrates η through the declared accessibility logic,
it samples a domain sufficient to probe the relevant critical and postcritical accessibility regimes,
it controls perturbations to the degree required by the declared detectability conditions,
it preserves the intended observable class and branch logic of the protocol family.
This definition is necessary because no theory is invalidated by a test that fails to realize its own declared domain.
A.14. Declared uncertainty and model tolerance
The falsification criterion also refers to declared uncertainty and baseline tolerance. These terms denote the bounded range within which ordinary detector, erasure, environmental, and calibration effects may deform the observed response without counting as evidence for or against the theory beyond the declared perturbative regime.
At the level of the canonical paper, these quantities are not yet fixed numerically. They are part of the exact implementation burden of later work. What matters here is that the theory’s empirical exposure is defined relative to a finite and declared tolerance structure rather than to an idealized notion of perfect experiment.
A.15. Summary of fixed formal objects
For clarity, the appendix fixes the following core objects:
ℋ: system Hilbert space
𝒟(ℋ): density-operator space
C: measurement context
𝒜(C): admissible class of realization-compatible channels
ℛ_C: realization functional
Ξ_C: representational invariance burden
Ω_C: record-structural coherence burden
Λ_C: accessibility-consistency burden
Φ★_C: selected realization channel
η: operational accessibility parameter
η_c: critical accessibility value
V_SQM(η): standard baseline response
V_CBR(η): CBR response
𝒮_baseline: baseline smooth-response class
≈: operational equivalence of channels
≈_η: accessibility-equivalence of contexts
These objects are sufficient to support the canonical law form, the restricted uniqueness theorem, the accessibility-signature theorem, and the failure criterion of the paper.
A.16. Why this appendix matters
The present appendix does not complete the theory in the universal sense. It does something more immediate and necessary: it prevents the central claims of the paper from floating free of their own formal vocabulary. Once these objects are fixed, the paper’s theorems can be read as statements about a definite canonical structure rather than as high-level programmatic gestures. That is exactly the level of formal closure the Core Theorem Paper requires.
Appendix B
Appendix B. Existence and Regularity Lemmas
This appendix states the minimal existence and regularity results required for the canonical law form of the paper to be mathematically well-posed. The main text introduces the realization rule
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ),
and then treats the selected realization channel as a definite object, unique up to operational equivalence. That treatment is justified only if three prior questions can be answered in the affirmative. First, does the admissible class 𝒜(C) contain enough structure for minimization to make sense? Second, does the realization functional ℛ_C attain a minimum on that class? Third, do the regularity properties of the class and the functional suffice to support the restricted uniqueness theorem stated in the main text?
The purpose of the present appendix is not to solve every possible functional-analytic problem that may arise in arbitrarily general realizations of the theory. It is to state the least set of assumptions under which the canonical rule is mathematically legitimate in the exact sense required by the paper. The results are therefore deliberately modest. They do not prove universal existence and uniqueness in every imaginable realization-law framework. They prove that, under a controlled set of regularity assumptions, the canonical CBR law is a well-posed minimization problem and not a merely symbolic selection slogan.
B.1. The mathematical role of existence and regularity
A realization-law proposal becomes formally serious only when its central selection rule denotes an actual selected object rather than an aspirational one. If the admissible class is too loose, if the realization functional fails to be bounded below, or if infima are not attained, then the law form may still look elegant while failing to define a realizational choice in any exact sense. The same is true if uniqueness claims are made in a setting where the underlying minimization geometry leaves large unresolved degeneracies unconstrained by the theory itself.
The present appendix therefore does not function as a technical afterthought. It is part of the law’s legitimacy. The main text treats Φ★_C as a real selection outcome of the canonical theory. The results proved here explain when that treatment is mathematically licensed.
B.2. Basic setting and standing assumptions
Let C be a fixed physical measurement context, and let 𝒜(C) be the admissible class of realization-compatible channels associated with that context.
For purposes of this appendix, 𝒜(C) is treated as a subset of a topological space 𝒳_C of candidate realization maps equipped with a topology τ_C strong enough to make the burden terms Ξ_C, Ω_C, and Λ_C meaningful and weak enough that admissibility is not destroyed by ordinary limiting operations. The exact construction of 𝒳_C is not fixed universally here, because the paper does not require a single all-context microscopic realization category. What matters is that for each context C, there exists a topological ambient space in which 𝒜(C) sits as a regular subspace.
The standing assumptions used throughout this appendix are the following.
Assumption B1 (Nonemptiness).
𝒜(C) ≠ ∅ for every physically valid context C.
Assumption B2 (Sequential precompactness or compactness).
Every sequence in 𝒜(C) has a τ_C-convergent subsequence whose limit lies in 𝒜(C); that is, 𝒜(C) is sequentially compact in the topology relevant to the burden functionals.
Assumption B3 (Lower boundedness).
Each burden term Ξ_C, Ω_C, and Λ_C is bounded below on 𝒜(C), and therefore ℛ_C is bounded below on 𝒜(C).
Assumption B4 (Lower semicontinuity).
Ξ_C, Ω_C, and Λ_C are lower semicontinuous on 𝒜(C) with respect to τ_C.
Assumption B5 (Operational quotient regularity).
The operational equivalence relation ≈ on 𝒜(C) is such that the quotient space 𝒜(C)/≈ inherits a well-defined separation structure sufficient for minimizer classes to be compared meaningfully.
These assumptions are not stronger than the paper needs. They are exactly the assumptions required to justify the existence of a minimizer and to state uniqueness up to operational equivalence without category error.
B.3. Lower boundedness of the realization functional
We begin with the simplest lemma.
Lemma B.1 (Lower boundedness of ℛ_C).
Under Assumption B3 and with α, β, γ ≥ 0, the realization functional
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ)
is bounded below on 𝒜(C).
Proof
By assumption, there exist real constants m_Ξ, m_Ω, and m_Λ such that
Ξ_C(Φ) ≥ m_Ξ,
Ω_C(Φ) ≥ m_Ω,
Λ_C(Φ) ≥ m_Λ
for all Φ ∈ 𝒜(C).
Since α, β, γ are nonnegative, it follows that for every Φ ∈ 𝒜(C),
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ)
≥ αm_Ξ + βm_Ω + γm_Λ.
Thus ℛ_C is bounded below by the finite constant
m_R = αm_Ξ + βm_Ω + γm_Λ.
This lemma is elementary, but it performs indispensable work. It guarantees that minimization is numerically meaningful: the canonical law is not attempting to minimize a functional that is unbounded below.
B.4. Lower semicontinuity of the realization functional
The second lemma shows that ℛ_C inherits lower semicontinuity from its burden terms.
Lemma B.2 (Lower semicontinuity of ℛ_C).
Under Assumption B4 and with α, β, γ ≥ 0, the realization functional ℛ_C is lower semicontinuous on 𝒜(C).
Proof
Let {Φ_n} be a sequence in 𝒜(C) converging to Φ in the topology τ_C. Since Ξ_C, Ω_C, and Λ_C are lower semicontinuous,
Ξ_C(Φ) ≤ lim inf_{n→∞} Ξ_C(Φ_n),
Ω_C(Φ) ≤ lim inf_{n→∞} Ω_C(Φ_n),
Λ_C(Φ) ≤ lim inf_{n→∞} Λ_C(Φ_n).
Multiplying by α, β, and γ respectively and using nonnegativity gives
αΞ_C(Φ) ≤ α lim inf_{n→∞} Ξ_C(Φ_n),
βΩ_C(Φ) ≤ β lim inf_{n→∞} Ω_C(Φ_n),
γΛ_C(Φ) ≤ γ lim inf_{n→∞} Λ_C(Φ_n).
Summing and using the superadditivity of the lower limit, lim inf a_n + lim inf b_n ≤ lim inf (a_n + b_n), yields
ℛ_C(Φ) ≤ lim inf_{n→∞} ℛ_C(Φ_n).
Hence ℛ_C is lower semicontinuous.
This lemma is the essential regularity input for existence of minimizers. Without it, minimizing sequences could drift toward lower values without the limit carrying the same burden value.
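The mechanism of Lemma B.2 can be exercised numerically. The sketch below is purely illustrative: the functions Xi, Omega, and Lam are invented scalar stand-ins for the burden terms, not the theory's actual functionals. Xi is lower semicontinuous but discontinuous at 0, which is exactly the regularity the lemma requires, and the defining inequality ℛ(Φ) ≤ lim inf ℛ(Φ_n) is checked along a sequence approaching the jump point, with the lim inf approximated by a tail minimum.

```python
# Hedged numerical sketch (not part of the formal proof): Xi, Omega,
# and Lam are toy placeholders for the burden terms of R_C.
def Xi(x):   # lower semicontinuous step: jumps *up* away from x = 0
    return 0.0 if x <= 0 else 1.0

def Omega(x):
    return x * x

def Lam(x):
    return abs(x)

def R(x, a=1.0, b=1.0, g=1.0):  # weighted burden with a, b, g >= 0
    return a * Xi(x) + b * Omega(x) + g * Lam(x)

# Sequence x_n -> 0 from the right: R(0) must not exceed lim inf R(x_n).
xs = [1.0 / n for n in range(1, 200)]
tail = [R(x) for x in xs[100:]]          # tail minimum as a lim inf proxy
assert R(0.0) <= min(tail)
```

The point of the toy Xi is that lower semicontinuity tolerates an upward jump away from the limit point; a downward jump would violate the inequality and break the existence argument that follows.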
B.5. Existence of minimizing sequences
Before establishing actual minimizers, it is useful to note that minimizing sequences always exist.
Lemma B.3 (Existence of minimizing sequences).
If 𝒜(C) is nonempty and ℛ_C is bounded below on 𝒜(C), then there exists a sequence {Φ_n} in 𝒜(C) such that
ℛ_C(Φ_n) → inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Proof
Let
m★ = inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
By definition of the infimum, for each positive integer n there exists Φ_n ∈ 𝒜(C) such that
ℛ_C(Φ_n) < m★ + 1/n.
Then {Φ_n} is a minimizing sequence, since
m★ ≤ ℛ_C(Φ_n) < m★ + 1/n
for all n, and therefore
ℛ_C(Φ_n) → m★.
This lemma is standard, but it clarifies that the canonical law always generates a direction of descent under the standing assumptions. The next question is whether that descent actually terminates in an admissible minimizer.
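The construction in the proof of Lemma B.3 can be sketched on a finite toy class. In the fragment below the channel labels and burden values are hypothetical; pick(n) realizes the defining property ℛ_C(Φ_n) < m★ + 1/n directly from the definition of the infimum.

```python
# Hypothetical finite admissible class: channel labels with
# precomputed burden values standing in for R_C (all values invented).
burdens = {"phi_a": 0.9, "phi_b": 0.4, "phi_c": 0.4, "phi_d": 1.3}

m_star = min(burdens.values())            # inf of R_C over A(C)

def pick(n):
    # Lemma B.3 construction: some Phi_n satisfies R(Phi_n) < m_star + 1/n.
    for phi, r in burdens.items():
        if r < m_star + 1.0 / n:
            return phi
    raise RuntimeError("contradicts the definition of the infimum")

seq = [pick(n) for n in range(1, 6)]      # a minimizing sequence
assert all(burdens[p] - m_star < 1.0 / n
           for n, p in enumerate(seq, start=1))
```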
B.6. Existence of canonical minimizers
We now state the central existence result.
Proposition B.4 (Existence of a canonical minimizer).
Under Assumptions B1 through B4, the realization functional ℛ_C attains its minimum on 𝒜(C). That is, there exists at least one Φ★_C ∈ 𝒜(C) such that
ℛ_C(Φ★_C) = min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Proof
By Lemma B.3, there exists a minimizing sequence {Φ_n} in 𝒜(C) such that
ℛ_C(Φ_n) → inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
By Assumption B2, the sequence {Φ_n} has a convergent subsequence {Φ_{n_k}} whose limit Φ★_C lies in 𝒜(C).
By Lemma B.2, ℛ_C is lower semicontinuous, so
ℛ_C(Φ★_C) ≤ lim inf_{k→∞} ℛ_C(Φ_{n_k}).
But {Φ_{n_k}} is a subsequence of a minimizing sequence, hence
lim_{k→∞} ℛ_C(Φ_{n_k}) = inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Therefore
ℛ_C(Φ★_C) ≤ inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Since the infimum is by definition a lower bound for all values of ℛ_C on 𝒜(C), one also has
ℛ_C(Φ★_C) ≥ inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Hence equality holds:
ℛ_C(Φ★_C) = inf_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Thus Φ★_C is a minimizer.
This proposition is the exact mathematical warrant for speaking of a selected realization channel rather than merely of an asymptotically preferred one.
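On a discretized compact slice of a hypothetical admissible class, the content of Proposition B.4 reduces to a direct search: compactness guarantees the grid is finite, and attainment of the minimum is then immediate. In the sketch below the three burden placeholders, the one-parameter slice, and the weights are all invented for illustration; only the structure of the weighted minimization comes from the text.

```python
# Hedged sketch, not the theorem's proof: discretize a compact
# one-parameter slice of A(C) and locate the minimizer of the
# weighted burden R_C = a*Xi + b*Omega + g*Lam directly.
a, b, g = 1.0, 2.0, 0.5                   # nonnegative weights (toy)

def Xi(t):    return (t - 0.3) ** 2       # representational burden (toy)
def Omega(t): return abs(t - 0.5)         # record-structural burden (toy)
def Lam(t):   return t                    # accessibility burden (toy)

def R(t):
    return a * Xi(t) + b * Omega(t) + g * Lam(t)

grid = [k / 1000 for k in range(1001)]    # compact slice of A(C)
t_star = min(grid, key=R)                 # the selected Phi*_C on the grid
assert all(R(t_star) <= R(t) for t in grid)
```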
B.7. Quotient formulation and minimizer classes
The main text states uniqueness only up to operational equivalence. To support that properly, one must pass from individual admissible channels to equivalence classes.
Let [Φ] denote the operational-equivalence class of Φ under ≈. Define the quotient set
𝒜̄(C) = 𝒜(C)/≈.
If operational equivalence is respected by the burden terms in the sense that
Φ₁ ≈ Φ₂ ⇒ Ξ_C(Φ₁) = Ξ_C(Φ₂),
Φ₁ ≈ Φ₂ ⇒ Ω_C(Φ₁) = Ω_C(Φ₂),
Φ₁ ≈ Φ₂ ⇒ Λ_C(Φ₁) = Λ_C(Φ₂),
then ℛ_C descends to a well-defined quotient functional
ℛ̄_C([Φ]) = ℛ_C(Φ).
This quotient formulation is conceptually important because the theory never intended to distinguish channels that are operationally null relative to the designated observables, record structure, and accessibility regime. The true object selected by the canonical law is therefore a minimizer class in 𝒜̄(C), not necessarily a unique syntactic representative in 𝒜(C).
This observation does not weaken the law. It strengthens its physical interpretation by removing distinctions the theory itself declares irrelevant.
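The well-definedness condition stated above is directly checkable: if every burden term is constant on each operational-equivalence class, then ℛ_C takes exactly one value per class and the quotient functional ℛ̄_C is well defined. A minimal toy sketch, with hypothetical channel labels, class assignments, and burden values:

```python
# Toy check that R_C descends to the quotient A(C)/~ when every burden
# term is constant on each equivalence class (all values are invented).
channels = {                     # channel -> (class label, Xi, Omega, Lam)
    "phi1": ("c1", 0.2, 0.1, 0.3),
    "phi2": ("c1", 0.2, 0.1, 0.3),   # phi1 ~ phi2: identical burdens
    "phi3": ("c2", 0.5, 0.4, 0.0),
}

def R(xi, om, lam, a=1.0, b=1.0, g=1.0):
    return a * xi + b * om + g * lam

quotient = {}
for phi, (cls, xi, om, lam) in channels.items():
    val = R(xi, om, lam)
    # Well-definedness: every representative of a class must agree.
    assert quotient.setdefault(cls, val) == val

assert quotient["c1"] != quotient["c2"]
```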
B.8. Strict separation and restricted uniqueness
Existence alone does not imply the restricted uniqueness theorem of the main text. For that, one needs a condition ensuring that two non-equivalent admissible minimizers cannot persist with exactly equal burden.
The relevant condition can be stated as follows.
Assumption B6 (Strict separation modulo operational equivalence).
If Φ₁ and Φ₂ are admissible channels with Φ₁ ≉ Φ₂, then at least one of the burden terms Ξ_C, Ω_C, or Λ_C assigns them strictly different values, and the difference survives the nonnegative coefficient weighting of ℛ_C, so that
Φ₁ ≉ Φ₂ ⇒ ℛ_C(Φ₁) ≠ ℛ_C(Φ₂)
for any pair of minimizing candidates not related by operational equivalence.
This assumption is exactly the regularity-level version of the informal uniqueness logic used in the main text: two physically inequivalent admissible realization verdicts cannot tie forever unless the law has failed to distinguish them at the burden level.
Under this assumption, one obtains the following result.
Proposition B.5 (Uniqueness up to operational equivalence).
Under Assumptions B1 through B6, the minimizer of ℛ_C on 𝒜(C) is unique up to operational equivalence.
Proof
By Proposition B.4, a minimizer Φ★_C exists.
Suppose there are two minimizers Φ₁ and Φ₂ in 𝒜(C) such that
ℛ_C(Φ₁) = ℛ_C(Φ₂) = min_{Φ ∈ 𝒜(C)} ℛ_C(Φ).
Assume, for contradiction, that Φ₁ ≉ Φ₂. Then by Assumption B6,
ℛ_C(Φ₁) ≠ ℛ_C(Φ₂),
contrary to the equality above.
Therefore any two minimizers must satisfy
Φ₁ ≈ Φ₂.
Hence the minimizer is unique up to operational equivalence.
This proposition is the exact regularity-level underpinning of the restricted uniqueness theorem in the main text.
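The conclusion of Proposition B.5 can be phrased as a finite-class sanity check: under strict separation, every burden-minimizing channel must fall into a single operational-equivalence class. A toy illustration, with invented labels, class assignments, and burden values:

```python
# Illustrative check of Proposition B.5's conclusion on a finite toy
# class (labels and values are hypothetical, not derived from the theory).
channels = {           # channel -> (equivalence class, R_C value)
    "phi1": ("c1", 0.40),
    "phi2": ("c1", 0.40),    # same class, same burden: permitted tie
    "phi3": ("c2", 0.75),
    "phi4": ("c3", 1.10),
}

m_star = min(r for _, r in channels.values())
minimizer_classes = {cls for cls, r in channels.values() if r == m_star}
assert len(minimizer_classes) == 1       # unique up to ~
```

Note that the tie between phi1 and phi2 is exactly the kind of degeneracy the theory permits: Assumption B6 forbids equal burden only across operationally inequivalent channels.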
B.9. On the necessity of the regularity assumptions
The standing assumptions of this appendix are not arbitrary pieces of technical scaffolding. Each corresponds to a genuine requirement of the law.
Nonemptiness of 𝒜(C) is necessary because a realization law that admits no physically valid realization channels in an allowed context is not a law but a contradiction.
Compactness or sequential precompactness is necessary because the theory must not drive minimization toward ever-lower burden without ever selecting an admissible object.
Lower semicontinuity is necessary because the limiting object of a minimizing sequence must not suddenly incur a larger burden than all nearby approximants without structural reason.
Quotient regularity is necessary because the uniqueness theorem is a theorem about physical verdict classes, not formal syntax.
Strict separation modulo operational equivalence is necessary because otherwise the theory would tolerate unresolved physically meaningful degeneracy at the exact point where it claims to select.
These conditions are therefore the smallest regularity package that makes the canonical law mathematically worthy of its own notation.
B.10. What this appendix does and does not establish
This appendix establishes that, under the declared regularity assumptions, the canonical realization functional is bounded below, lower semicontinuous, attains a minimum on the admissible class, descends appropriately to operational-equivalence classes, and supports uniqueness up to operational equivalence. That is enough to justify the exact law form and the restricted uniqueness theorem used in the main text.
What the appendix does not establish is a universal representation-independent existence theorem for every possible microscopic realization category of all conceivable CBR-like theories. It does not prove that every future implementation volume will inherit the same compactness or separation structure automatically. Those stronger results would require a broader formal development. The present appendix establishes only what the Core Theorem Paper itself needs: that the canonical law is well-posed as a constrained minimization problem and that its selected object is mathematically real within the paper’s scope.
B.11. Consequence for the status of the canonical law
The net effect of the results above is decisive. The canonical law
Φ★_C = arg min_{Φ ∈ 𝒜(C)} ℛ_C(Φ)
is not merely a suggestive expression. Under the standing assumptions, it denotes an actual selected realization class. That is the minimum mathematical threshold a realization-law proposal must cross before it can bear the further burdens of accessibility sensitivity, empirical signature, and falsification. This appendix shows that the Core Theorem Paper has crossed that threshold.
Appendix C
Appendix C. Accessibility Parameter Construction
This appendix defines the operational accessibility parameter η used throughout the paper and specifies the formal conditions under which η may function as a realization-relevant control variable in the canonical law. Its purpose is to remove any ambiguity at the point where the theory passes from law form to empirical exposure. The main text treats accessibility as physically relevant to outcome realization and therefore requires a variable that is neither merely intuitive nor merely implementation-specific. If η were left as a loose label for “how available a record seems,” then the accessibility-signature theorem would rest on an unstable foundation. The role of the present appendix is to prevent that outcome. It states the mathematical status of η, defines the primitive operational ingredients from which it is to be constructed, introduces the reduction conditions any admissible η-map must satisfy, and clarifies the notions of accessibility-equivalence and critical accessibility required by the main theorem sequence.
The appendix is deliberately intermediate in scope. It does not yet attempt to fix one universal microscopic accessibility functional for all possible implementations of realization-sensitive protocols. Nor does it collapse immediately into one exact platform-specific formula of the kind appropriate to a later implementation volume. Instead, it supplies the strongest construction the Core Theorem Paper requires: accessibility is made operational, structured, normalized, and theorem-ready, while still remaining abstract enough to apply across the canonical protocol family designated in the main text.
C.1. Why an accessibility parameter is required
The paper’s central empirical claim is conditional: if accessibility enters realization law nontrivially, then the realized response cannot remain globally trapped inside the same smooth baseline class across all accessibility regimes. That claim is meaningful only if accessibility is itself represented by a physically disciplined object. The need for η follows directly from the structure of the argument.
A purely formal record, a transient microscopic trace, and a stable retrievable record are not equivalent in the context of a realization law. Each may encode correlation, but only some may count as physically operative records in the sense relevant to realized outcome selection. A theory that collapses all such cases into a single undifferentiated notion of “record presence” cannot then meaningfully claim that accessibility matters. Conversely, a theory that allows accessibility to matter but never defines it operationally cannot incur an empirical burden. The parameter η is therefore required as the bridge between the canonical law and the designated protocol family. It is the structured quantity through which the theory distinguishes bare correlation from operationally effective record availability.
The parameter is not introduced to add phenomenology to the law. It is introduced to prevent vagueness. Without η, accessibility remains conceptually suggestive but physically ungoverned. With η, the theory acquires a controlled axis along which realization relevance may vary, a critical regime η_c at which realization ordering may change, and a finite empirical domain in which departure from the baseline can be stated and tested.
C.2. Formal status of η
For each measurement context C in the designated protocol family, let
η(C) ∈ [0,1]
denote the operational accessibility parameter associated with the outcome-defining record structure of that context.
The normalization is interpreted as follows.
η(C) = 0 corresponds to the limiting regime in which the relevant record is operationally inaccessible in the sense relevant to realization.
η(C) = 1 corresponds to the limiting regime in which the record is maximally accessible within the physical constraints of the protocol.
The interval structure is essential because the empirical theorems of the paper compare response behavior across accessibility regimes rather than merely between two binary cases. The accessibility variable must therefore support regime structure, not just dichotomy.
At the level of the Core Theorem Paper, η is treated as a context-indexed scalar functional
η : 𝒞 → [0,1],
where 𝒞 is the class of physically valid measurement contexts relevant to the designated protocol family. The role of η is not to replace the full physical description of the context. It is to provide a reduced but realization-relevant operational coordinate on that context class.
C.3. Primitive operational ingredients
The accessibility parameter is not primitive in the strong sense. It is reduced from a finite set of operational ingredients that together characterize the physically relevant availability of the record. The canonical ingredients are the following.
C.3.1. Retrieval fidelity
Let R(C) ∈ [0,1] denote the retrieval fidelity of the record associated with context C. This quantity measures the extent to which the outcome-defining content of the record can be recovered by the declared retrieval procedure without falling below the tolerance relevant to the protocol. High retrieval fidelity indicates that the record is not merely present but recoverable as a physically meaningful carrier of outcome structure.
C.3.2. Public or intersubjective accessibility
Let P(C) ∈ [0,1] denote the public accessibility of the record in context C. The term “public” is used operationally, not psychologically. It refers to the extent to which the record is available through more than one effective physical access route or stable readout channel, rather than being confined to a single fragile interaction. P distinguishes an effectively private microscopic trace from a record whose physical availability has become more broadly distributed.
C.3.3. Temporal stability
Let T(C) ∈ [0,1] denote the temporal stability of the record. This quantity measures whether the record remains available over the timescale relevant to the realization question. A record that exists only momentarily but decays before retrieval is not equivalent to a record that persists throughout the protocol window. T therefore tracks the persistence of the record as an operative physical object rather than as a transient formal correlation.
C.3.4. Destructive burden of readout
Let D(C) ∈ [0,1] denote the destructive burden of readout. This quantity measures the extent to which retrieving the record destroys, scrambles, or irreversibly consumes the record-bearing structure. Accessibility is not exhausted by whether one can obtain the record once. A maximally destructive readout is not equivalent to stable record availability. The variable D therefore enters accessibility with opposite monotonicity to the other ingredients.
C.3.5. Redundancy spread
Let S(C) ∈ [0,1] denote the redundancy spread of the record. This quantity measures the degree to which the record is distributed across multiple effective carriers or channels rather than confined to a single delicate locus. A redundant record is more operationally available than a nonredundant one, even if both are in principle retrievable.
These five ingredients are the canonical operational coordinates of accessibility in the present paper. They are not asserted to be the only conceivable accessibility primitives in every future theory. They are the minimal ingredients needed here to distinguish operational availability from mere correlation and to support the empirical logic of the main theorem.
C.4. Admissible accessibility reduction
The accessibility parameter η is generated by a reduction map
η(C) = f(R(C), P(C), T(C), D(C), S(C)),
where f is an admissible accessibility reduction if and only if it satisfies the following conditions.
C.4.1. Normalization
For every context C in the relevant class,
0 ≤ η(C) ≤ 1.
The reduction must map physically admissible input tuples into the unit interval.
C.4.2. Monotonicity in enabling variables
η must be monotone nondecreasing in retrieval fidelity R, public accessibility P, temporal stability T, and redundancy spread S. If any of these increase while the others remain fixed, accessibility cannot decrease.
C.4.3. Reverse monotonicity in destructive burden
η must be monotone nonincreasing in D. If the destructive burden of readout increases while all enabling variables remain fixed, accessibility cannot increase.
C.4.4. Boundary sensitivity
The reduction must respect the physically intended endpoint structure. In particular, if the relevant record is absent as an operationally accessible object, then η must vanish. If the record is fully retrievable, stably available, minimally destructive to access, and appropriately spread across operative carriers within the protocol, then η must approach its maximal value.
C.4.5. Invariance under accessibility-equivalent realizations
If two contexts are physically accessibility-equivalent in the sense defined below, then the reduction must assign them the same η value up to the declared tolerance of the protocol. This prevents η from encoding arbitrary implementation detail rather than accessibility itself.
C.4.6. Protocol compatibility
The reduction must be expressible in terms of quantities that are, at least in principle, calibratable within the designated protocol family. A reduction that depends on inaccessible or theory-internal quantities not operationally tied to the protocol does not count as admissible for the purposes of the present paper.
These six conditions do not determine a unique reduction map in universal form. They do determine the class of reductions within which η may function as the accessibility parameter of canonical CBR.
C.5. Canonical reduced form
Although the Core Theorem Paper does not require one final platform-locked formula, it is useful to state a canonical reduced representative satisfying the admissibility conditions above. The simplest such representative is the geometric accessibility form
η(C) = [R(C) · P(C) · T(C) · (1 − D(C)) · S(C)]^(1/5).
This expression has several properties that make it canonical at the present level of abstraction.
First, it is normalized to [0,1].
Second, it is monotone in the enabling variables and antitone in D.
Third, it penalizes near-zero values in any one indispensable accessibility ingredient, reflecting the fact that a record can fail to be operationally accessible in a decisive way even if several other ingredients are large.
Fourth, it is symmetric among the positive accessibility ingredients in the absence of a stronger reason to privilege one over the others.
The present paper does not claim that this representative is the unique final reduction for all future implementations. It claims only that it is a natural canonical representative of the admissible reduction class and is sufficient for expressing the exact theorem burden of the paper.
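The canonical reduced form and the admissibility conditions of C.4 can be exercised directly. In the sketch below, only the functional form η = [R · P · T · (1 − D) · S]^(1/5) is taken from the text; all numerical sample values are invented for illustration.

```python
# Canonical geometric accessibility form of Section C.5 (sample
# ingredient values below are illustrative, not measured quantities).
def eta(R, P, T, D, S):
    for v in (R, P, T, D, S):
        assert 0.0 <= v <= 1.0           # admissible input tuple
    return (R * P * T * (1.0 - D) * S) ** 0.2

# Normalization and boundary sensitivity (C.4.1, C.4.4):
assert eta(1.0, 1.0, 1.0, 0.0, 1.0) == 1.0   # maximally accessible record
assert eta(0.0, 0.9, 0.9, 0.1, 0.9) == 0.0   # unretrievable record: eta = 0

# Monotonicity in an enabling variable (C.4.2):
assert eta(0.8, 0.7, 0.9, 0.2, 0.6) < eta(0.9, 0.7, 0.9, 0.2, 0.6)

# Reverse monotonicity in destructive burden D (C.4.3):
assert eta(0.8, 0.7, 0.9, 0.3, 0.6) < eta(0.8, 0.7, 0.9, 0.2, 0.6)
```

The geometric mean also exhibits the penalization property noted in the text: driving any single enabling ingredient toward zero drives η toward zero regardless of how large the others are.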
C.6. Accessibility-equivalence of contexts
The introduction of η requires a corresponding equivalence relation on contexts. Let C₁ and C₂ be two physically valid contexts. Define
C₁ ≈_η C₂
if and only if the following hold.
First, C₁ and C₂ instantiate the same operational accessibility value η up to the resolution or tolerance relevant to the protocol.
Second, the outcome-defining record structures of C₁ and C₂ are equivalent with respect to their realization-relevant availability, even if their microscopic implementations differ.
Third, any remaining differences between C₁ and C₂ are implementation-level, descriptive, or physically inert relative to the observable burden of the designated protocol family.
This relation plays an important stabilizing role in the theory. It ensures that accessibility enters the realization law as a genuine physical control variable rather than as shorthand for uncontrolled engineering variation. It also ensures that the accessibility-consistency burden Λ_C can be defined on a physically meaningful quotient of the context class.
C.7. Accessibility-consistency burden revisited
With η fixed as an operationally meaningful context variable, the accessibility-consistency burden can be stated more sharply. For a candidate realization channel Φ ∈ 𝒜(C), the burden Λ_C(Φ) measures the degree to which Φ fails to respect the accessibility structure encoded by η.
At minimum, this means the following.
If two contexts are accessibility-equivalent, then Φ should not assign them inequivalent realization verdicts without additional physically relevant distinction.
If η varies across the designated protocol family in a way that changes the operative accessibility structure of the record, then Φ should not remain globally indifferent to that variation if the theory claims accessibility relevance.
If accessibility is physically irrelevant to realization, then Λ_C should either collapse to a constant or fail to alter the minimization ordering. In that case the theory ceases to differ from baseline on accessibility grounds.
Thus η is not merely a labeling device for later experiment. It enters directly into the burden structure that determines whether the law is accessibility-sensitive in a nontrivial way.
C.8. Critical accessibility and η_c
The existence of η by itself does not yet yield a signature theorem. What matters is whether there exists a critical accessibility value η_c at which the accessibility-sensitive contribution to the realization burden becomes order-determining.
Formally, η_c is defined as the accessibility value at which the leading competing admissible realization classes become balanced under the total burden ordering. In other words, η_c is the value at which the accessibility-sensitive term changes from subdominant to order-changing in the minimization problem over 𝒜(C).
At the level of the canonical theory, η_c is therefore not introduced as an arbitrary phenomenological marker. It is the threshold at which accessibility becomes realization-effective in the law itself. That is why the signature theorem of the main text localizes empirical burden near η_c rather than claiming generic non-baseline behavior everywhere on [0,1].
The exact numerical or model-specific determination of η_c is deferred to platform-level work. What matters here is that η_c has a precise formal role: it separates accessibility regimes that differ in their realization relevance.
C.9. Accessibility regimes
Once η and η_c are defined, the canonical protocol family may be partitioned into physically meaningful accessibility regimes.
The low-accessibility regime consists of contexts for which η is sufficiently small that the record remains operationally weak, unstable, private, destructive to access, or otherwise realization-subdominant.
The precritical regime consists of contexts with η < η_c but close enough to η_c that accessibility is increasing toward realization relevance while not yet changing the selected minimizer.
The critical regime is the neighborhood of η_c in which accessibility first becomes order-determining in the realization law.
The postcritical regime consists of contexts with η > η_c, in which accessibility has already altered the selected realization ordering.
The asymptotic high-accessibility regime consists of contexts approaching η = 1, where the record is maximally operative under the declared accessibility reduction.
These regimes are not introduced for descriptive convenience alone. They provide the exact regime structure needed for the accessibility-signature theorem and the failure criterion.
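The five-regime partition can be expressed as a simple classifier. In the sketch below, the critical value eta_c and the regime boundaries delta, eta_low, and eta_high are hypothetical placeholders; the paper explicitly defers their numerical determination to platform-level work.

```python
# Illustrative regime classifier for Section C.9.  All thresholds are
# hypothetical placeholders, not values asserted by the paper.
def regime(eta, eta_c=0.6, delta=0.05, eta_low=0.2, eta_high=0.95):
    if eta < eta_low:
        return "low"                 # record operationally weak
    if eta < eta_c - delta:
        return "precritical"         # approaching realization relevance
    if eta <= eta_c + delta:
        return "critical"            # neighborhood of eta_c
    if eta < eta_high:
        return "postcritical"        # realization ordering already altered
    return "asymptotic"              # approaching eta = 1

assert [regime(x) for x in (0.1, 0.4, 0.6, 0.8, 0.99)] == \
       ["low", "precritical", "critical", "postcritical", "asymptotic"]
```

A classifier of this shape makes the localization claim of the signature theorem concrete: the empirical burden concentrates in the "critical" band rather than being spread uniformly over [0, 1].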
C.10. Why this construction is sufficient for the Core Theorem Paper
The accessibility construction of this appendix is sufficient for the canonical paper because it does exactly the work the paper requires and no more. It turns accessibility into a structured operational variable, ties that variable to the law through the burden term Λ_C, defines an equivalence relation that prevents η from collapsing into implementation noise, and introduces the critical threshold η_c needed for the theorem-bearing empirical logic of the paper.
A weaker construction would leave accessibility too vague to support theorem-level burden. A stronger platform-locked construction would be premature at the canonical stage and would collapse the paper into implementation detail before the canonical law had completed its own task. The present appendix therefore occupies the correct level of abstraction: operationally exact, canonically structured, and ready to bear the empirical theorem without pretending to be the final implementation manual.
C.11. Consequence for the status of the theory
With this appendix in place, η is no longer a conceptual placeholder. It is a canonically admissible operational control variable. That changes the status of the main theorems. The accessibility-signature theorem is now a statement about a defined physical parameter rather than about a suggestive qualitative feature of measurement contexts. Likewise, the falsification theorem now refers to a designated operational domain rather than to an informal family of “more or less accessible” experiments. This is exactly the level of accessibility construction required if the Core Theorem Paper is to function as a real law-compression document rather than a high-level program note.
Appendix D
Appendix D. Baseline Decoherence Model
This appendix defines the exact baseline comparator against which the canonical CBR response is evaluated in the designated protocol family. Its function is not merely pedagogical. The accessibility-signature theorem and the falsification theorem of the main text require a comparator that is explicit enough to prevent two opposite failures. On the one hand, the baseline must not be caricatured so severely that any nontrivial response seems to favor the realization law by default. On the other hand, the baseline must not be broadened so loosely that any possible critical-regime deviation can be absorbed into an ad hoc “ordinary” response class after the fact. The present appendix is written to avoid both errors. It gives the strongest exact baseline the canonical paper needs: standard quantum evolution, standard entanglement and decoherence accounting, standard conditional erasure and retrieval logic, and a smooth-response class defined without realization-law augmentation.
The role of the appendix is therefore precise. It identifies what ordinary theory predicts in the designated accessibility-sensitive protocol family when one varies the effective operational accessibility of the record while keeping the signal–idler architecture fixed. The point is not to prove one universal closed-form visibility law for every conceivable implementation. The point is to define the exact response class from which CBR must depart if accessibility truly enters realization law nontrivially. The appendix proceeds by fixing the joint signal–record structure, deriving the reduced signal response, defining the baseline visibility function V_SQM(η), and then specifying the baseline smooth-response class 𝒮_baseline used throughout the main text.
D.1. Scope and baseline assumptions
The baseline model is defined relative to the canonical protocol family introduced in the main text. The family consists of delayed-choice quantum eraser and record-accessibility interferometric contexts in which:
a signal subsystem carries coherence or interference structure,
a record-bearing subsystem becomes correlated with the signal alternatives,
the operational accessibility of that record may vary across the protocol family,
the observable burden is carried by a visibility-like response V or by a realization-sensitive analogue S whose primary component is visibility.
Within that family, the baseline is defined by the following assumptions.
Assumption D1 (Standard unitary or open-system evolution).
The pre-realization dynamics of the joint system are governed entirely by ordinary quantum evolution, including ordinary entanglement and, where relevant, ordinary open-system decoherence modeling.
Assumption D2 (No realization-law augmentation).
No additional realization-sensitive law acts on the system beyond the ordinary baseline dynamical and measurement description. In particular, there is no accessibility-sensitive selection term modifying the response law.
Assumption D3 (Ordinary record-distinguishability logic).
Any accessibility dependence appearing in the baseline arises through the ordinary effect of record-bearing distinguishability, conditional reconstruction, erasure, or loss of recoverable coherence, not through a separate law of realized outcome selection.
Assumption D4 (Smooth ordinary response).
Where the control parameter η is varied smoothly through the designated protocol family, the induced baseline response remains within an ordinary smooth-response class unless an independently declared apparatus discontinuity has been introduced and modeled before theory comparison.
These assumptions define the strongest ordinary comparator the paper is entitled to face. The baseline is therefore not “weak” in the sense of being artificially simplified to help CBR. It is strong precisely because it is the standard theory stated honestly on the same protocol family.
D.2. Canonical signal–record setting
Let the total Hilbert space be
ℋ = ℋ_s ⊗ ℋ_r,
where ℋ_s is the signal space and ℋ_r is the record-bearing space.
For simplicity and without loss of theorem-level generality at the canonical stage, let the signal subsystem be two-dimensional with orthonormal basis
{|u⟩, |d⟩},
representing two alternatives whose coherence is visible in the signal observable. Let the corresponding record-bearing states be
{|r_u⟩, |r_d⟩} ⊂ ℋ_r,
not assumed to be perfectly orthogonal in all operational regimes.
The canonical joint state of the designated family may be written as
|Ψ⟩ = (1/√2)(|u⟩ ⊗ |r_u⟩ + e^{iφ}|d⟩ ⊗ |r_d⟩),
where φ is a controllable relative phase.
The corresponding density operator is
ρ = |Ψ⟩⟨Ψ|.
This state is sufficient to represent the ordinary interference-versus-record tradeoff of the baseline and to make visible how accessibility-dependent record structure modifies the signal response without already introducing a realization-law term.
D.3. Reduced signal state
To derive the baseline visibility response, trace over the record subsystem. The reduced signal state is
ρ_s = Tr_r(ρ).
Expanding the joint density operator gives
ρ = (1/2)[ |u⟩⟨u| ⊗ |r_u⟩⟨r_u| + e^{-iφ}|u⟩⟨d| ⊗ |r_u⟩⟨r_d| + e^{iφ}|d⟩⟨u| ⊗ |r_d⟩⟨r_u| + |d⟩⟨d| ⊗ |r_d⟩⟨r_d| ].
Taking the partial trace over the record sector yields
ρ_s = (1/2)[ |u⟩⟨u| + |d⟩⟨d| + e^{-iφ}μ |u⟩⟨d| + e^{iφ}μ★ |d⟩⟨u| ],
where
μ = ⟨r_d|r_u⟩
is the effective record overlap and μ★ denotes its complex conjugate.
This expression is the canonical reduced signal state for the baseline. It shows exactly how the coherence of the signal subsystem depends on the distinguishability structure of the correlated record sector.
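The reduction of this section can be checked numerically. The sketch below is illustrative only (the explicit record vectors and the helper names joint_state and reduced_signal_state are choices made here, not part of the formalism): it builds |Ψ⟩ for a chosen real overlap μ, traces out the record factor, and confirms that the signal coherence equals (1/2)e^{-iφ}μ.

```python
import numpy as np

def joint_state(phi, mu):
    """|Psi> = (1/sqrt2)(|u>|r_u> + e^{i phi}|d>|r_d>), with <r_d|r_u> = mu (real here)."""
    u, d = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    r_u = np.array([1.0, 0.0])
    # Illustrative choice: |r_d> in the same 2D record space with overlap mu.
    r_d = np.array([mu, np.sqrt(1.0 - mu**2)])
    return (np.kron(u, r_u) + np.exp(1j * phi) * np.kron(d, r_d)) / np.sqrt(2)

def reduced_signal_state(psi):
    """Partial trace over the 2D record factor of a 2x2 composite pure state."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (s, r, s', r')
    return np.einsum('srtr->st', rho)                    # sum over the record index

phi, mu = 0.7, 0.4
rho_s = reduced_signal_state(joint_state(phi, mu))
coh = rho_s[0, 1]  # should equal (1/2) e^{-i phi} mu, as in the closed form above
```

With these illustrative parameters the off-diagonal element reproduces the closed-form coherence term exactly, which is the content of the reduced-state formula.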
D.4. Visibility reconstruction
Let the signal be observed in the phase-sensitive basis
|χ(θ)⟩ = (1/√2)(|u⟩ + e^{-iθ}|d⟩),
with θ the controllable reconstruction phase. The corresponding signal intensity is
I(θ) = ⟨χ(θ)|ρ_s|χ(θ)⟩.
Substituting the reduced state yields
I(θ) = (1/2)[ 1 + Re(μ e^{-i(φ+θ)}) ].
Writing μ = |μ|e^{iα}, one obtains
I(θ) = (1/2)[ 1 + |μ| cos(φ + θ − α) ].
The interference visibility is therefore
V = |μ|.
This is the exact baseline visibility relation at the canonical level. Ordinary theory predicts that any variation in the observable visibility is mediated through the effective overlap structure of the record-bearing states.
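The relation V = |μ| can be recovered numerically by sweeping the reconstruction phase and reading the visibility off the intensity extrema. The sketch below is illustrative (the helper name intensity and the parameter values are choices made here; the sign convention on θ does not affect V):

```python
import numpy as np

def intensity(theta, phi, mu):
    """I(theta) = <chi(theta)| rho_s |chi(theta)> for the reduced state above."""
    rho_s = 0.5 * np.array([[1.0, np.exp(-1j * phi) * mu],
                            [np.exp(1j * phi) * np.conj(mu), 1.0]])
    chi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)  # phase-sensitive basis
    return float((chi.conj() @ rho_s @ chi).real)

phi, mu = 0.3, 0.6
I = np.array([intensity(t, phi, mu) for t in np.linspace(0, 2 * np.pi, 2001)])
V = (I.max() - I.min()) / (I.max() + I.min())  # recovered fringe visibility, ~ |mu|
```

The recovered V agrees with |μ| to sampling precision, mirroring the exact baseline visibility relation.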
D.5. From record overlap to accessibility dependence
The paper’s designated protocol family is not parameterized directly by μ, but by the accessibility variable η defined in Appendix C. The baseline must therefore specify how η enters the ordinary response without already importing a realization-law correction.
The minimal canonical choice is that increasing record accessibility corresponds to decreasing effective overlap between the record-bearing alternatives in the sense relevant to signal coherence. This yields a monotone map
μ = μ(η),
with
μ(0) = 1,
μ(1) = 0,
and μ nonincreasing on [0,1].
At the canonical level, the simplest representative satisfying these conditions is
μ(η) = 1 − η.
Since η ∈ [0,1], this gives the exact representative baseline visibility law
V_SQM(η) = |μ(η)| = 1 − η.
The paper does not claim that every microscopic implementation of the canonical protocol family must produce exactly this linear law before perturbation. It claims that this is the canonical representative of the baseline response class: smooth, monotone, endpoint-correct, and free of realization-law threshold structure.
The importance of this choice is conceptual as much as formal. It provides a baseline that is exact enough to support theorem comparison while remaining general enough to stand for the ordinary smooth response expected from standard distinguishability logic.
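The canonical representative can be realized concretely. The construction below is one illustrative choice among many (record_states and V_SQM are hypothetical helper names, and this particular record parameterization is not claimed to be the paper's unique implementation): a one-parameter family of record states whose overlap is exactly 1 − η.

```python
import numpy as np

def record_states(eta):
    """Return |r_u>, |r_d> with <r_d|r_u> = 1 - eta, for eta in [0, 1]."""
    r_u = np.array([1.0, 0.0])
    m = 1.0 - eta                       # target overlap mu(eta) = 1 - eta
    r_d = np.array([m, np.sqrt(1.0 - m**2)])
    return r_u, r_d

def V_SQM(eta):
    """Representative baseline visibility V_SQM(eta) = |mu(eta)| = 1 - eta."""
    r_u, r_d = record_states(eta)
    return abs(np.dot(r_d, r_u))
```

The endpoints μ(0) = 1 and μ(1) = 0 and the monotone decline hold by construction, matching the stated conditions on μ(η).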
D.6. Baseline response class 𝒮_baseline
The representative law
V_SQM(η) = 1 − η
defines the center of the baseline smooth-response class, but the theorems of the main text require a class rather than a single formula. Let 𝒮_baseline denote the class of ordinary accessibility-dependent responses satisfying the following conditions.
Condition D6.1 (Continuity).
Each V ∈ 𝒮_baseline is continuous on the admissible η-domain.
Condition D6.2 (Ordinary local smoothness).
Each V ∈ 𝒮_baseline is C¹ in every neighborhood not containing an independently declared apparatus discontinuity.
Condition D6.3 (Monotone coherence loss).
Each V ∈ 𝒮_baseline is monotone nonincreasing in η, reflecting the ordinary baseline principle that greater record accessibility never increases visibility.
Condition D6.4 (Correct endpoint structure).
Each V ∈ 𝒮_baseline satisfies the correct endpoint interpretation up to declared tolerance: near η = 0 the visibility is maximal or near-maximal, and near η = 1 the visibility is minimal or near-minimal.
Condition D6.5 (No intrinsic realization threshold).
No member of 𝒮_baseline contains a critical-regime derivative break, kink, or equivalent local nonanalyticity at η_c unless such structure is independently built into the ordinary apparatus model and declared before theory comparison.
The exact representative law 1 − η is therefore not the entire baseline class, but the central canonical member of that class. The function of 𝒮_baseline is to capture ordinary smooth-response behavior under standard quantum reasoning while excluding precisely the kind of critical-regime realization signature that the canonical CBR law predicts.
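Conditions D6.1–D6.5 can be rendered as a crude numerical membership test on a sampled response curve. The sketch below is schematic (the tolerances, grid, and the finite-difference proxy for the C¹ condition are illustrative assumptions; in_baseline_class is a hypothetical helper name):

```python
import numpy as np

def in_baseline_class(V, etas, kink_tol=0.2, end_tol=0.05):
    vals = np.array([V(e) for e in etas])
    # D6.3: monotone nonincreasing (up to numerical noise).
    monotone = np.all(np.diff(vals) <= 1e-9)
    # D6.4: endpoint structure, V near-maximal at eta=0 and near-minimal at eta=1.
    endpoints = vals[0] >= 1 - end_tol and vals[-1] <= end_tol
    # D6.2 / D6.5: no large jump in the finite-difference derivative (no kink).
    dV = np.diff(vals) / np.diff(etas)
    smooth = np.max(np.abs(np.diff(dV))) < kink_tol
    return bool(monotone and endpoints and smooth)

etas = np.linspace(0, 1, 201)
baseline_ok = in_baseline_class(lambda e: 1 - e, etas)                        # central member
kinked_ok = in_baseline_class(lambda e: 1 - e - 0.5 * max(0.0, e - 0.6), etas)  # kinked curve
```

Under these illustrative tolerances, the representative 1 − η passes all three checks, while a curve carrying a critical-regime derivative break fails the smoothness condition, which is precisely the exclusion that gives D6.5 its force.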
D.7. Delayed-choice and erasure structure in the baseline
The canonical protocol family includes delayed-choice and erasure-like contexts, so the baseline must state what ordinary theory predicts there.
The baseline answer is standard in form. Delayed retrieval or delayed erasure changes the conditional organization of the data and may change which subensembles exhibit recovered interference, but it does not introduce a new law of outcome realization. Within ordinary theory, the delayed choice affects how the correlations are read, partitioned, or recombined, not whether a realization-sensitive accessibility threshold enters the law itself.
Thus the baseline already permits:
smooth visibility degradation with increasing effective record accessibility,
conditional recovery of interference under erasure-type operations,
delayed-choice restructuring of subensemble visibility patterns.
What it does not permit, absent an independently modeled apparatus discontinuity, is an intrinsic accessibility-sensitive critical transition in the response law itself. That exclusion is exactly why the accessibility-signature theorem of the main text has force.
D.8. Perturbed baseline class
The main text also requires a perturbed baseline class large enough to absorb ordinary nonidealities without becoming so large that any possible anomaly can be redescribed as baseline behavior. Let 𝒮_baseline^pert denote the perturbed baseline class obtained by enlarging 𝒮_baseline through the bounded perturbative envelope associated with detector noise, erasure imperfections, environmental contributions, and calibration uncertainty.
Thus V ∈ 𝒮_baseline^pert if and only if there exists V₀ ∈ 𝒮_baseline such that
|V(η) − V₀(η)| ≤ ε_tot(η)
throughout the declared experimental domain, where ε_tot is the total declared perturbative tolerance envelope.
The importance of this definition is that it preserves the baseline’s character while allowing realistic imperfection. A perturbed baseline remains baseline-class only if its deviation from the exact representative can be explained by bounded ordinary experimental nonidealities. It does not become free to acquire arbitrary critical-regime structure merely because the experiment is imperfect.
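The membership condition for the perturbed class can be written directly. The sketch below is illustrative (the constant tolerance, the sinusoidal imperfection, and the helper name within_envelope are assumptions made here, not declared structure of the paper):

```python
import numpy as np

def within_envelope(V_obs, V0, eps_tot, etas):
    """V_obs is perturbed-baseline-class iff |V_obs - V0| <= eps_tot on the domain."""
    return all(abs(V_obs(e) - V0(e)) <= eps_tot(e) for e in etas)

etas = np.linspace(0, 1, 101)
V0 = lambda e: 1 - e                                   # exact representative member
eps = lambda e: 0.03                                   # declared constant tolerance
noisy = lambda e: 1 - e + 0.02 * np.sin(40 * e)        # bounded ordinary imperfection
kinked = lambda e: 1 - e - 0.3 * max(0.0, e - 0.5)     # critical-regime deviation

ok_noisy = within_envelope(noisy, V0, eps, etas)
ok_kinked = within_envelope(kinked, V0, eps, etas)
```

The bounded ordinary imperfection stays inside the envelope, while the critical-regime deviation escapes it: imperfection alone does not license critical structure.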
D.9. Why the baseline is strong enough
A common weakness in foundational comparison is that the standard comparator is left too vague, making it easy either to overstate the distinctness of the proposed theory or to rescue the baseline afterward by informal broadening. The present appendix avoids both errors.
The baseline is strong enough because it already contains:
the full standard signal–record entanglement structure,
ordinary coherence loss through overlap reduction,
delayed-choice and erasure logic,
a smooth accessibility-dependent response class,
a bounded perturbative enlargement.
This is everything the standard theory is entitled to contribute within the designated canonical protocol family absent realization-law augmentation. If the canonical CBR law leaves this class, the departure is not created by weakening the comparator. It is created by introducing a realization-sensitive law where the comparator has none.
D.10. What this appendix establishes
This appendix establishes the exact comparator required by the Core Theorem Paper. It shows that the canonical protocol family has an ordinary baseline representation in which accessibility affects visibility through smooth overlap-based response, with no intrinsic critical accessibility threshold built into the law. It defines the representative response
V_SQM(η) = 1 − η,
and the surrounding class 𝒮_baseline of ordinary smooth-response deformations. That is sufficient for the main paper’s comparative theorem sequence:
The accessibility-signature theorem now has a defined class from which CBR must depart.
The falsification theorem now has a definite baseline envelope relative to which null results are to be judged.
The baseline comparator is therefore no longer implicit. It is fixed.
D.11. Consequence for the canonical paper
With this appendix in place, the paper’s empirical logic becomes much tighter. The baseline is no longer “whatever standard quantum theory might reasonably do.” It is the exact smooth-response class generated by the ordinary dynamics of the designated protocol family. That change matters because it deprives the later theorems of any ambiguity at the point of comparison. The canonical law now faces a defined comparator, not a moving target. That is exactly what a theorem-bearing law-compression paper requires.
Appendix E
Appendix E. Accessibility-Signature Derivation Details
This appendix supplies the formal derivation details underlying the accessibility-signature theorem stated in the main text. Its purpose is to convert the theorem from a compressed law-to-signature claim into a stepwise argument showing how nontrivial accessibility dependence in the canonical realization law forces departure from the declared baseline response class in the designated protocol family. The appendix therefore does not introduce a new law or a new platform. It works entirely within the exact formal setting already fixed by the paper: the canonical realization rule, the restricted admissibility class, the operational accessibility parameter η, the critical accessibility value η_c, and the smooth baseline response class 𝒮_baseline.
The derivation is organized in four stages. First, it defines the comparison structure between the baseline response and the realization-sensitive response. Second, it derives the accessibility-sensitive departure condition from the canonical burden ordering. Third, it states the strong-form local signature result in exact differential terms. Fourth, it derives the weak-form bounded deviation result that remains when the sharp transition assumptions are relaxed. The goal throughout is not to exaggerate what the paper has established. It is to show that, once accessibility enters realization selection nontrivially, departure from the baseline class is not optional within the designated protocol family.
E.1. Formal comparison setting
Let 𝒫 denote the designated protocol family of delayed-choice quantum eraser and record-accessibility interferometric contexts introduced in the main text. For each context C ∈ 𝒫, let η(C) ∈ [0,1] be the operational accessibility parameter constructed in Appendix C.
Let
V_SQM(η)
denote the baseline response associated with standard quantum evolution, ordinary record-distinguishability logic, and the smooth baseline class 𝒮_baseline defined in Appendix D.
Let
V_CBR(η)
denote the realization-sensitive response induced by the canonical realization law
Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ).
The main theorem of the paper is comparative. It does not require a fully universal closed-form response law for every member of 𝒫. What it requires is that, within the designated family, the baseline response class remain smooth in the ordinary sense while the canonical realization law, if accessibility-sensitive, forces a non-baseline response regime. To formalize that comparison, define the response difference
ΔV(η) = V_CBR(η) − V_SQM(η).
The accessibility-signature theorem is then the claim that if accessibility enters realization law nontrivially, ΔV cannot vanish identically across the full admissible η-domain while V_CBR remains inside 𝒮_baseline globally.
E.2. Accessibility-sensitive ordering and critical accessibility
The derivation begins from the realization law rather than from the response curves themselves. Let Φ_sub and Φ_sup denote the leading admissible realization channel classes on either side of the accessibility transition. The canonical law selects between them according to the sign of the total burden difference
Δℛ(η) = ℛ_C(Φ_sup; η) − ℛ_C(Φ_sub; η).
Using the canonical burden decomposition, this becomes
Δℛ(η) = αΔΞ(η) + βΔΩ(η) + γΔΛ(η),
where
ΔΞ(η) = Ξ_C(Φ_sup; η) − Ξ_C(Φ_sub; η),
ΔΩ(η) = Ω_C(Φ_sup; η) − Ω_C(Φ_sub; η),
ΔΛ(η) = Λ_C(Φ_sup; η) − Λ_C(Φ_sub; η).
The main-text regularity assumptions imply that the representational and record-structural differentials are either constant or slowly varying across the narrow transition window, while the accessibility-consistency differential carries the decisive η-dependence. Thus, in the critical regime, one may write
Δℛ(η) = A + B(η − η_c) + o(|η − η_c|),
where A vanishes at the transition by definition of η_c, B ≠ 0 when accessibility is realization-effective, and the remainder term is higher order in the local expansion.
Thus η_c is defined by
Δℛ(η_c) = 0,
and accessibility is nontrivial in the precise sense that
dΔℛ/dη |_(η=η_c) ≠ 0.
This is the law-level source of the critical regime. The selected realization class changes when the sign of Δℛ changes. The empirical signature is therefore not introduced at the level of observables first. It is inherited from a transition in the burden ordering of the canonical law.
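The sign-change structure of the burden ordering can be sketched in a few lines. Everything numerical here is illustrative (the linear coefficient B, the critical value, the convention that Φ_sub is selected below η_c, and the helper names delta_R and selected_class are assumptions for concreteness):

```python
ETA_C = 0.6
B = -1.3  # nonzero slope at eta_c: accessibility is realization-effective

def delta_R(eta):
    """Local expansion Delta_R(eta) = B (eta - eta_c); the constant A vanishes at eta_c."""
    return B * (eta - ETA_C)

def selected_class(eta):
    """Arg-min rule: Phi_sub is selected while Delta_R > 0 (Phi_sup carries more burden)."""
    return 'Phi_sub' if delta_R(eta) > 0 else 'Phi_sup'
```

The realized class flips exactly where Δℛ changes sign, which is the law-level event the observable signature inherits.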
E.3. General response decomposition
The next step is to express the observable response induced by the selected realization class. Let the baseline response be written as
V_SQM(η) = V₀(η),
where V₀ belongs to 𝒮_baseline and is smooth in the ordinary sense across the critical region.
Now let the CBR response be decomposed as
V_CBR(η) = V₀(η) + ΔV_real(η),
where ΔV_real is the realization-sensitive correction induced by the change in selected minimizer.
The theorem does not require ΔV_real to be nonzero everywhere. In fact, the canonical logic suggests the opposite. If the accessibility-sensitive burden is subdominant below η_c, then the selected realization class may agree with the baseline-compatible class throughout the low-accessibility and precritical regimes. What matters is that once accessibility becomes order-determining, ΔV_real cannot remain globally zero while the law remains accessibility-sensitive.
The simplest reduced form consistent with the exact main-text logic is therefore
ΔV_real(η) = 0 for η ≤ η_c,
and
ΔV_real(η) ≠ 0 for η > η_c,
with the nonzero correction inheriting its local structure from the burden-order transition at η_c.
E.4. Strong-form derivation
The strongest derivation corresponds to the case in which the selected realization class changes sharply at η_c and the response map from realized channel class to observable visibility is locally regular. In that case the correction term may be written, to leading order, as
ΔV_real(η) = −κ max{0, η − η_c},
with κ > 0 a fixed response coefficient determined by the local observable sensitivity of the protocol to the realization-class transition.
Hence
V_CBR(η) = V₀(η) − κ max{0, η − η_c}.
If one now chooses the canonical baseline representative
V₀(η) = 1 − η,
one recovers the exact main-text form
V_CBR(η) = 1 − η − κ max{0, η − η_c}.
But the theorem does not depend on that particular representative alone. Its force lies in the local structure of the correction term.
For η < η_c,
dV_CBR/dη = dV₀/dη.
For η > η_c,
dV_CBR/dη = dV₀/dη − κ.
Since κ > 0, the one-sided derivatives differ at η_c. Therefore V_CBR is continuous but not C¹ at η_c provided V₀ is itself C¹ there. Since every member of the declared baseline class is locally smooth at η_c in the absence of independently declared apparatus discontinuity, it follows that V_CBR leaves 𝒮_baseline in any neighborhood of η_c.
This establishes the strong form: a critical-regime derivative break, or kink, is the natural local response morphology induced by a nontrivial accessibility-sensitive transition in the realization ordering.
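The one-sided derivative mismatch can be verified numerically for the canonical representative V₀(η) = 1 − η. The parameter values and helper names below are illustrative:

```python
ETA_C, KAPPA = 0.6, 0.4

def V_CBR(eta):
    """Strong-form response: V0(eta) - kappa * max(0, eta - eta_c), with V0 = 1 - eta."""
    return 1 - eta - KAPPA * max(0.0, eta - ETA_C)

def one_sided_derivatives(f, x, h=1e-6):
    """Crude left and right difference quotients at x."""
    left = (f(x) - f(x - h)) / h
    right = (f(x + h) - f(x)) / h
    return left, right

left, right = one_sided_derivatives(V_CBR, ETA_C)
jump = left - right  # should equal kappa: the derivative break at eta_c
```

The response is continuous at η_c, but the difference quotients from the two sides differ by κ, which is the regularity-class separation the strong form asserts.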
E.5. Differential statement of the strong form
The strong-form signature may be stated in exact differential language as follows.
Assume:
V₀ ∈ 𝒮_baseline is C¹ in a neighborhood U of η_c,
ΔV_real(η) = −κ max{0, η − η_c} with κ > 0.
Then
V_CBR ∈ C⁰(U),
but
dV_CBR/dη |_(η→η_c^-) ≠ dV_CBR/dη |_(η→η_c^+).
Therefore
V_CBR ∉ C¹(U),
while every admissible baseline response V ∈ 𝒮_baseline satisfies
V ∈ C¹(U).
Hence
V_CBR ∉ 𝒮_baseline.
This is the exact regularity-class separation on which the strong-form theorem rests. The result is stronger than a mere amplitude difference. It states that the realized response changes local class.
E.6. Weak-form derivation under smoothed transition
The strong form depends on a sharp local transition in the realization-sensitive correction. The paper does not claim that every implementation of the canonical protocol family must exhibit that exact local idealization. It therefore requires a weak-form derivation that survives smoothing of the transition while preserving nontrivial accessibility dependence.
Let g_w(x) be a smooth transition function with width parameter w > 0 such that
g_w(x) ≈ 0 for x ≪ −w,
g_w(x) ≈ x for x ≫ w,
and
g_w(x) → max{0, x} as w → 0.
Define the smoothed realization-sensitive correction by
ΔV_real^w(η) = −κ g_w(η − η_c),
with κ > 0.
Then the smoothed CBR response is
V_CBR^w(η) = V₀(η) − κ g_w(η − η_c).
This response is now C¹, and possibly smoother, at η_c. The strong-form kink may therefore disappear. But the response still differs from the baseline in any neighborhood of η_c in which g_w is nontrivial. Specifically, define
ΔV_w(η) = V_CBR^w(η) − V₀(η) = −κ g_w(η − η_c).
Since g_w is not identically zero on any neighborhood that crosses η_c, and since κ > 0, there exists a nonempty interval U around η_c such that
sup_{η ∈ U} |ΔV_w(η)| > 0.
If U is chosen small enough that the baseline class remains locally smooth and the perturbative envelope remains controlled, then V_CBR^w cannot be globally absorbed into the same baseline smooth-response class unless the deviation amplitude falls entirely below the declared tolerance. The weak form is therefore a bounded non-baseline deviation band concentrated in a neighborhood of η_c.
This proves the weak form: even when the local derivative discontinuity is regularized away, nontrivial accessibility-sensitive realization produces a localized response class not reducible to ordinary baseline smoothness.
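A concrete g_w satisfying the three stated limits is the softplus-style function g_w(x) = w·log(1 + e^{x/w}). This choice is an illustrative assumption, not the paper's unique transition function; the sketch below shows that the smoothed correction still produces a nonzero deviation band near η_c.

```python
import numpy as np

def g_w(x, w):
    """Smoothed ramp: ~0 for x << -w, ~x for x >> w, -> max(0, x) as w -> 0."""
    return w * np.log1p(np.exp(x / w))

ETA_C, KAPPA, W = 0.6, 0.4, 0.02  # illustrative parameter values

def V_CBR_w(eta):
    """Smoothed CBR response: V0(eta) - kappa * g_w(eta - eta_c), with V0 = 1 - eta."""
    return (1 - eta) - KAPPA * g_w(eta - ETA_C, W)

# Sup-deviation from the exact baseline on a window crossing eta_c:
etas = np.linspace(0.5, 0.7, 401)
dev = max(abs(V_CBR_w(e) - (1 - e)) for e in etas)
```

The kink is regularized away (the response is now differentiable at η_c), yet the sup-deviation on any window crossing η_c remains strictly positive, which is the content of the weak form.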
E.7. Local bounded-deviation class
The weak form can be expressed more precisely by defining a bounded deviation class around η_c.
Let U_δ = [η_c − δ, η_c + δ] ∩ [0,1] be a critical window of radius δ > 0. Define
𝒟_sig(U_δ; ε_sig) = {f : sup_{η ∈ U_δ} |f(η) − V₀(η)| ≥ ε_sig},
for some ε_sig > 0.
Then the weak-form theorem says that if accessibility enters realization law nontrivially and the smoothing width remains finite, there exist δ > 0 and ε_sig > 0 such that
V_CBR^w ∈ 𝒟_sig(U_δ; ε_sig),
while no admissible member of 𝒮_baseline enters the same class unless the baseline is improperly enlarged beyond its declared smooth-response structure.
This formulation is useful because it separates the weak form from the strong form cleanly. The strong form is a local regularity-class departure. The weak form is a local amplitude-separation departure. Both are non-baseline signature classes. The weak form is simply the one that survives smoothing.
E.8. Why the signature is localized
The theorem does not predict a diffuse anomaly across the entire accessibility domain; it predicts a localized signature. That localization is not a weakness. It follows from the internal law structure.
Below η_c, accessibility is present but not order-determining. The minimizer remains in the same realization class, and the response may remain baseline-compatible. At η_c, the accessibility-sensitive burden becomes order-changing. Above η_c, the selected realization class changes and the response departs accordingly.
Thus the location of the signature is not chosen phenomenologically. It is inherited from the transition structure of Δℛ(η). The law predicts a critical region because its internal ordering changes there. That is why η_c is not merely a convenient empirical marker. It is the law-internal source of the signal.
E.9. Exclusion of baseline absorption
A possible objection is that one could always enlarge the baseline class enough to absorb whatever local deviation the theory predicts. That objection fails within the present paper because 𝒮_baseline is not defined after the fact. It is fixed in Appendix D as the ordinary smooth-response class generated by standard quantum evolution, ordinary entanglement and decoherence accounting, and conditional erasure/retrieval logic, without realization-law augmentation.
To absorb the strong-form signature, the baseline would need to acquire a local nonanalyticity not independently present in the declared ordinary dynamics. To absorb the weak-form signature, the baseline would need to admit a bounded critical-regime deviation of the exact same type without explanation from standard ordinary control dependence. Either move would enlarge the baseline class beyond the structure fixed by the paper. That is not rebuttal. It is theory alteration.
The accessibility-signature theorem therefore compares canonical CBR not to an infinitely elastic baseline, but to the strongest honest ordinary comparator the designated protocol family supplies.
E.10. Relation to the theorem in the main text
The main text states the accessibility-signature theorem in deliberately compressed form. The present appendix shows how that theorem follows from the canonical law in exact logical order:
accessibility enters the burden structure through Λ_C,
Λ_C changes the minimization ordering at η_c,
the selected realization class therefore changes,
the realized response acquires a correction term tied to that transition,
the correction leaves the ordinary baseline class either by regularity-class departure or by bounded critical-regime deviation.
This chain is the formal core of the empirical claim. Without it, the theorem could appear as a high-level assertion that “something different should happen if accessibility matters.” With it, the theorem becomes a derivation from the exact law structure.
E.11. What this appendix establishes
This appendix establishes the derivational details of the paper’s central empirical result. It shows that nontrivial accessibility-sensitive realization cannot remain globally baseline-equivalent within the designated protocol family, and that the resulting departure has two exact admissible forms:
a strong-form critical-regime derivative break or kink,
a weak-form bounded non-baseline deviation band.
That is sufficient for the Core Theorem Paper. The paper does not yet need one fully platform-locked numerical response law of the kind appropriate to a later implementation volume. It needs a theorem-bearing derivation that makes its empirical burden exact in structure, localized in regime, and tied to the canonical law rather than to informal expectation. This appendix provides that derivation.
Appendix F
Appendix F. Failure Analysis and Model Invalidation Logic
This appendix gives the full failure logic of the canonical theory developed in the main text. Its purpose is to state, without interpretive slack, what must be true before a negative result counts against the theory, what exact form that negative result must take, and what exactly is invalidated when it occurs. The main text already states a failure criterion in theorem form. The present appendix makes that theorem fully explicit by fixing the relevant logical structure: the objects that must be frozen before invalidation is meaningful, the null-result class relative to the declared protocol family, the distinction between inconclusive and invalidating negative outcomes, and the exact scope of what survives theory failure.
This appendix is necessary because a realization-law proposal becomes scientifically serious only when it ceases to protect itself through underspecification. Many foundational programs permit themselves indefinite resilience by leaving the law, the protocol family, the operational variable, or the baseline comparator open enough that failure can always be redescribed as a misunderstanding of what was “really meant.” The canonical form developed in this paper is designed to block that retreat. The present appendix completes that design. It does not merely say that the theory is in principle testable. It defines the exact condition under which the world is allowed to disagree with it.
F.1. Why invalidation must be formalized
A law candidate does not become more scientific merely by using the language of falsifiability. It becomes more scientific when it identifies a finite domain in which its own claims would force it to fail if the relevant signature does not appear. That requirement is especially strict for a realization-law theory, because such theories are often challenged on the ground that they reformulate the measurement problem without ever allowing experiment to discriminate meaningfully among law-level alternatives. If the canonical CBR form is to avoid that challenge, then its failure condition must be made exact.
The logic of this appendix is therefore not optional or stylistic. The earlier sections have already fixed the canonical law, the admissibility structure, the accessibility parameter η, the baseline smooth-response class, and the accessibility-signature theorem. Once those objects are fixed, the only remaining question is whether a null result leaves the theory intact, weakens it only probabilistically, or falsifies it outright. The present appendix answers that question.
F.2. Objects that must be frozen before failure becomes meaningful
A negative result can invalidate the theory only if the theory has already fixed the objects that define its empirical burden. In the canonical CBR form of this paper, the following objects must be treated as frozen.
First, the law form is fixed:
Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ).
This means that the theory may not change the form of the realization rule after the protocol has been specified.
Second, the admissibility structure is fixed. The class 𝒜(C) has already been narrowed by representational invariance, record-structural coherence, accessibility consistency, and restricted probabilistic discipline. The theory may not save itself after a null result by informally appealing to undeclared admissible channels outside that structure.
Third, the operational variable is fixed. Accessibility is represented by η, defined as the operational accessibility parameter associated with the designated protocol family. The theory may not reinterpret failure by replacing η with a looser or differently motivated quantity after the fact.
Fourth, the protocol family is fixed. The designated exposure domain is the delayed-choice quantum eraser and record-accessibility interferometric family developed in the main text. The theory may not move the target to some other undeclared family once the designated family has returned baseline-class behavior.
Fifth, the observable burden is fixed. The theory is exposed through the visibility response V or a realization-sensitive signature map S whose primary component is already fixed by the designated protocol family. It may not respond to a null result by claiming that some different undeclared observable was the true empirical target.
Sixth, the baseline comparator is fixed. The response class 𝒮_baseline and its ordinary perturbed enlargement are already defined in Appendix D. The theory may not retrospectively weaken its own burden by saying that the baseline should have been broader all along unless that broader class was independently justified before comparison.
Once these objects are fixed, failure acquires the right logical status. A null result no longer bears merely on a broad philosophical attitude. It bears on one exact theory candidate.
F.3. Null-result class
The failure logic of the paper requires a precise notion of what counts as a null result. The correct definition is comparative rather than impressionistic.
Let V_obs(η) denote the observed response in the designated protocol family. Let 𝒮_baseline^pert denote the perturbed baseline class consisting of all responses obtainable from the declared baseline class 𝒮_baseline by ordinary bounded detector, erasure, environmental, and calibration perturbations within the declared tolerance envelope.
Then the observed response belongs to the null-result class, written
V_obs ∈ 𝒩_null,
if and only if all of the following hold.
Baseline containment: There exists some V_base ∈ 𝒮_baseline^pert such that V_obs remains within the declared comparison tolerance of V_base across the experimentally sampled η-domain relevant to the test.
No strong-form signature: The observed response exhibits no critical-regime derivative break, kink, local nonanalyticity, or equivalent local regularity failure in the neighborhood of η_c beyond what is already allowed by the declared perturbed baseline class.
No weak-form signature: The observed response exhibits no bounded critical-regime deviation band of amplitude sufficient to escape the declared perturbed baseline envelope in the region where the theory predicts accessibility-sensitive departure.
No hidden postcritical separation: The observed response does not agree with the baseline only in the subcritical region while leaving untested the postcritical region in which the theory’s signature is expected to emerge.
This definition is strict enough to support invalidation without making the theory hostage to every visually smooth dataset. A null result is not simply “the experiment looked ordinary.” It is the condition that, under the declared comparison framework, the observed response remained entirely inside the ordinary baseline class throughout the domain that matters.
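The four-part membership condition above is a conjunction of checks and can be sketched schematically. Every name in the sketch (`baseline_responses`, `kink_detected`, `deviation_amplitude`, and so on) is an illustrative stand-in for the corresponding declared object, not part of the paper's formal apparatus; in particular, the signature flags are assumed to come from an upstream regularity analysis of the data.

```python
import numpy as np

def in_null_result_class(eta, v_obs, baseline_responses, tol,
                         postcritical_sampled, kink_detected,
                         deviation_amplitude):
    """Schematic check of V_obs ∈ 𝒩_null per the four conditions of F.3.

    `baseline_responses` stands in for the perturbed baseline class as a
    finite list of callables; `tol` is the declared comparison tolerance.
    """
    # Baseline containment: some perturbed baseline member tracks V_obs
    # within tolerance across the sampled η-domain.
    contained = any(np.max(np.abs(v_obs - vb(eta))) <= tol
                    for vb in baseline_responses)
    # No strong-form signature: no derivative break or kink near η_c.
    no_strong = not kink_detected
    # No weak-form signature: any deviation band stays inside the envelope.
    no_weak = deviation_amplitude <= tol
    # No hidden postcritical separation: the postcritical regime was probed.
    no_hidden_gap = postcritical_sampled
    return contained and no_strong and no_weak and no_hidden_gap
```

The sketch makes the comparative character of the definition explicit: membership is decided against the declared baseline class and tolerance, never by visual smoothness alone.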
F.4. Detectability-valid negative results
Not every null result belongs to the invalidating class. A central distinction of the paper is that between a detectability-valid negative result and a merely underdetermined one.
A negative result is detectability-valid only if the experiment actually entered a regime in which the theory’s signature, if present at the strength predicted by the exact canonical form, would have been resolvable. This requires that the declared perturbative envelope be small enough, the accessibility range broad enough, and the protocol fidelity high enough that the accessibility-signature theorem was given a fair chance to manifest.
Thus a detectability-valid negative result must satisfy the following conditions.
Accessibility-range condition: The sampled η-domain includes both a neighborhood of η_c and a nonempty postcritical regime. A dataset confined entirely below η_c does not test the theory’s signature burden and therefore cannot invalidate the theory.
Perturbative-control condition: The detector, erasure, environmental, and calibration uncertainties remain within the declared tolerance envelope used to define the perturbed baseline class.
Protocol-validity condition: The physical implementation genuinely belongs to the designated protocol family and preserves the signal–record architecture, accessibility logic, and observable structure required by the main theorem.
Resolution condition: The data have enough effective resolution in the critical and postcritical regimes to distinguish baseline containment from the signal class predicted by the theory.
Only under these conditions may a null result count as theory-relevant failure. This restriction does not weaken the failure criterion. It makes it scientifically legitimate. A theory is not refuted by a test that never entered the domain in which the theory claimed distinctness.
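The four validity conditions form a gate that a dataset must pass before its nullity is theory-relevant. A minimal sketch follows, assuming summary inputs that are illustrative only: the critical window width, the perturbation budget, and the resolution figures are stand-ins for the declared tolerance and resolution framework, not quantities fixed by the paper.

```python
def detectability_valid(eta_samples, eta_c, window, perturbation_budget,
                        tolerance_envelope, protocol_faithful,
                        effective_resolution, required_resolution):
    """Schematic gate for the four conditions of F.4 (illustrative names)."""
    # Accessibility-range condition: samples near η_c and a nonempty
    # postcritical set.
    near_critical = any(abs(e - eta_c) < window for e in eta_samples)
    postcritical = any(e > eta_c for e in eta_samples)
    # Perturbative-control condition: uncertainties within the envelope.
    controlled = perturbation_budget <= tolerance_envelope
    # Resolution condition: baseline containment must be distinguishable
    # from the predicted signal class in the critical/postcritical windows.
    resolved = effective_resolution >= required_resolution
    # Protocol-validity condition enters as a single faithfulness flag here.
    return (near_critical and postcritical and controlled
            and protocol_faithful and resolved)
```

A dataset confined below η_c fails the gate at the first two checks, which is exactly why it cannot invalidate the theory.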
F.5. Formal failure criterion
We may now state the failure logic in explicit theorem-ready form.
Proposition F.1 (Canonical failure criterion).
Let the canonical CBR law, the admissible class 𝒜(C), the accessibility parameter η, the designated protocol family 𝒫, the baseline class 𝒮_baseline, and the declared perturbative envelope all be fixed as stated in the paper. Suppose an observed response V_obs belongs to the null-result class 𝒩_null and that the associated experiment satisfies the detectability-validity conditions of Section F.4. Then canonical CBR in its present law form is false.
Proof.
By the accessibility-signature theorem of the main text, if accessibility enters realization law nontrivially, then the realization-sensitive response cannot remain globally contained in the same smooth baseline class across all η-regimes in the designated protocol family. In strong form, the response leaves the baseline class through a critical-regime derivative break or kink. In weak form, it leaves the baseline class through a bounded non-baseline deviation band near η_c.
Suppose now that the experiment is detectability-valid and that V_obs ∈ 𝒩_null. By definition of 𝒩_null, the observed response remains within the perturbed baseline class and exhibits neither the strong-form nor the weak-form signature in the region where the theory predicts departure. Since the experiment is detectability-valid, this absence cannot be excused by under-sampling, unresolved perturbation, or protocol mismatch within the declared tolerance. Therefore the empirical consequence required by nontrivial accessibility-sensitive realization has failed to occur. The conjunction of the canonical law, the admissibility structure, the accessibility relevance claim, and the designated protocol burden is therefore false. Hence canonical CBR in its present law form is false.
This proposition is the exact formal content of the paper’s falsification language. It states not merely that evidence would lean against the theory, but that the exact canonical form fails if the declared burden is not met under valid test conditions.
F.6. Why this is genuine invalidation rather than evidential weakening
It is important to distinguish invalidation from mere reduction of confidence. The paper’s failure criterion is stronger than ordinary evidential discouragement because the canonical law has already been narrowed to a finite, protocol-bearing claim. Once that narrowing has occurred, the theory no longer has room to preserve itself by vague reinterpretation.
The invalidation is genuine for four reasons.
First, the law is already canonically fixed. The theory cannot respond to failure by saying that the intended law was more flexible than the paper stated. If that were true, the paper’s theorem burden would have been misrepresented.
Second, the protocol family is already designated and public. The theory cannot answer a null result by migrating to some new undeclared test domain without admitting that the present paper did not, in fact, state its real empirical burden.
Third, the accessibility parameter is already operationalized. The theory cannot respond to a negative result by saying that accessibility was really something else unless it abandons the canonical construction of η used to prove its own theorem.
Fourth, the baseline comparator is already fixed in advance. The theory cannot save itself by broadening the baseline after the fact only when that broadening is needed to absorb failure.
These constraints are what make the theory vulnerable in the correct scientific sense. The present appendix therefore does not merely assign a philosophical meaning to falsification. It shows why falsification, in the canonical form of this paper, is logically binding.
F.7. What survives failure
A precise failure criterion also requires a precise statement of scope. If the canonical form fails, what exactly has failed?
What fails is the exact conjunction of claims made by the Core Theorem Paper:
that realization is governed by the canonical minimization law stated in the paper,
that admissibility is captured by the restricted class 𝒜(C),
that accessibility enters the law nontrivially through the canonical burden structure,
that the designated protocol family is a valid exposure domain for that law,
and that the resulting response must leave the baseline class in the manner stated by the theorem.
If a detectability-valid null result occurs, that conjunction does not survive.
What does not automatically fail is every conceivable realization-law proposal. Failure of the present canonical form does not prove that no law of outcome realization exists, that no accessibility-sensitive theory is possible, or that every nonstandard completion of quantum theory is false. The paper does not claim that much, and the invalidation theorem does not purchase it.
This clarification strengthens the paper rather than weakens it. A theory that is precise about what dies when it fails is more credible than one that attempts to shelter itself by vague overbreadth.
F.8. Distinguishing three kinds of negative outcome
For complete clarity, the present appendix distinguishes three kinds of negative outcome.
F.8.1. Irrelevant negative outcome
This occurs when the observed response is baseline-class, but the experiment never actually entered the relevant η-domain or never implemented the designated protocol family faithfully. Such a result says little or nothing about the theory.
F.8.2. Inconclusive negative outcome
This occurs when the experiment did probe the intended domain, but detector, erasure, environmental, or calibration uncertainty remained too large for the theory’s signature burden to become resolvable. Such a result does not support the theory, but neither does it invalidate it.
F.8.3. Invalidating negative outcome
This occurs when the experiment is detectability-valid, enters the critical and postcritical accessibility regimes, and nevertheless returns only null-result-class behavior. This is the only kind of negative outcome that falsifies the canonical theory.
These distinctions are essential. Without them, the failure criterion would either be too weak to matter or too strong to be scientifically fair.
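The three-way taxonomy combines the null-result check of F.3 with the validity gate of F.4, and its decision order can be sketched directly. The boolean flags are illustrative summaries of those upstream checks, not new primitives of the theory.

```python
def classify_negative_outcome(is_null_result, entered_domain,
                              protocol_faithful, within_tolerance,
                              resolvable):
    """Sort a baseline-class dataset into the three classes of F.8.

    Only the "invalidating" verdict licenses the failure criterion of F.5.
    """
    if not is_null_result:
        raise ValueError("not a negative outcome: response left the baseline class")
    # F.8.1: wrong domain or unfaithful protocol implementation.
    if not (entered_domain and protocol_faithful):
        return "irrelevant"
    # F.8.2: right domain, but perturbations or resolution preclude a
    # detectability-valid test.
    if not (within_tolerance and resolvable):
        return "inconclusive"
    # F.8.3: detectability-valid null result.
    return "invalidating"
```

The ordering matters: domain and protocol questions are settled before control and resolution questions, so a result is never called inconclusive when it was in fact irrelevant.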
F.9. Why failure sharpens rather than weakens the theory
A common misunderstanding is that an explicit invalidation clause makes a foundational proposal easier to dismiss and therefore weaker. The opposite is true. A theory that permits itself to fail under a finite, public, and theory-internal condition becomes stronger as a scientific object because it stops protecting itself through ambiguity.
In the case of canonical CBR, the failure logic does not diminish the paper’s significance. It is one of its central achievements. The theory no longer lives only as a broad programmatic architecture. It now lives under a defined empirical liability. That is exactly the transition the paper is meant to accomplish.
F.10. Consequence for the status of the canonical paper
With this appendix in place, the Core Theorem Paper now contains the complete theorem sequence required for a canonically compressed law candidate:
a formal law form,
a restricted admissibility structure,
an operational accessibility variable,
a baseline comparator,
an accessibility-signature theorem,
and a binary invalidation criterion.
This does not make the theory empirically confirmed. It does make it scientifically exposed. That is the highest form of maturity the canonical paper itself can reach without collapsing into a platform-specific implementation paper.
F.11. Conclusion of the appendix
This appendix has completed the failure analysis and model invalidation logic of the Core Theorem Paper. It has defined the null-result class, distinguished detectability-valid failure from merely irrelevant or inconclusive negative results, and shown that if the designated protocol family returns only baseline-class behavior under the declared valid conditions, then canonical CBR in its present law form is false. The paper therefore no longer ends merely with a sharpened law proposal. It ends with a law proposal that has publicly stated the condition under which it dies.
Appendix G. Failure Analysis and Model Invalidation Logic
This appendix completes the empirical logic of the canonical theory by stating, in exact form, the conditions under which a negative result counts against the theory, the conditions under which it does not, and the precise scope of what is invalidated if the declared burden fails. Its purpose is not rhetorical. The main text already fixes the canonical realization law, the restricted admissibility class, the operational accessibility parameter η, the designated protocol family, the baseline smooth-response class, and the accessibility-signature theorem. Once those objects have been fixed, the theory must no longer be permitted to preserve itself through ambiguity about what would count as failure. The present appendix therefore does not merely restate that the theory is testable. It defines the exact structure by which the world is allowed to disagree with it.
The logic of the appendix is organized around five questions. What objects must be frozen before invalidation becomes meaningful? What counts as a null result relative to the declared protocol family? Under what conditions is a null result merely irrelevant or inconclusive rather than theory-relevant? What exactly is falsified when a detectability-valid null result occurs? And what, if anything, survives once the canonical form fails? These questions are not peripheral to the paper. They are the final condition under which the canonical law stops being only a disciplined proposal and becomes a finite physical theory candidate.
G.1. Frozen objects of the canonical test
A negative result can invalidate a theory only if the theory has already fixed the objects that define its empirical burden. In the canonical form developed in the main text, the following objects are frozen.
First, the law form is fixed:
Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ),
with
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ).
Second, the admissibility structure is fixed by the restricted class 𝒜(C), which excludes channels that are representationally unstable, record-incoherent, accessibility-inconsistent, probabilistically stipulative, or dynamically illicit.
Third, the operational variable is fixed. Accessibility is represented by η, constructed as an operational control variable of the designated protocol family rather than as a post hoc interpretive label.
Fourth, the protocol family is fixed. The paper has already named delayed-choice quantum eraser and record-accessibility interferometric contexts as the designated exposure domain of the canonical law.
Fifth, the observable burden is fixed. The theory is exposed through a visibility-type response V or an equivalent realization-sensitive signature map S whose primary physical content is already specified by the protocol family.
Sixth, the baseline comparator is fixed. The ordinary response class 𝒮_baseline, together with its declared perturbative enlargement, is the only standard comparator relevant to the paper’s empirical theorem.
Once these objects are fixed, the theory loses the right to reinterpret failure by silently changing its own law, target, variable, or comparator. That loss of flexibility is not a defect. It is the condition of scientific seriousness.
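As a toy illustration of the frozen law form, the selection rule can be sketched over a finite stand-in for the admissible class. The functionals Ξ_C, Ω_C, Λ_C and the channel labels below are purely illustrative; ties among minimizers play the role of operational-equivalence classes, which the finite toy resolves by returning all minimizers.

```python
def select_realization_channel(admissible, Xi, Omega, Lam,
                               alpha, beta, gamma):
    """Toy selection per Φ★C = arg min over Φ ∈ 𝒜(C) of ℛ_C(Φ),
    with ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ).

    `admissible` is a finite stand-in for 𝒜(C); Xi, Omega, Lam are
    illustrative component functionals mapping channels to costs.
    """
    # Evaluate the frozen realization functional on each candidate channel.
    scores = {phi: alpha * Xi(phi) + beta * Omega(phi) + gamma * Lam(phi)
              for phi in admissible}
    minimum = min(scores.values())
    # Return the full minimizer set; a singleton (up to the declared
    # operational equivalence) is what Theorem 1 asserts.
    return [phi for phi, s in scores.items() if s == minimum]
```

The point of the sketch is only structural: once 𝒜(C), the component functionals, and the weights are frozen, the verdict is computed, not chosen.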
G.2. Null-result class
Let V_obs(η) denote the observed response on the designated protocol family, and let 𝒮_baseline^pert denote the perturbed baseline class consisting of all responses obtainable from the declared smooth baseline class under the bounded ordinary perturbations allowed by the paper. The observed response belongs to the null-result class, written
V_obs ∈ 𝒩_null,
if and only if the following hold.
First, there exists some V_base ∈ 𝒮_baseline^pert such that V_obs remains within the declared comparison tolerance of V_base across the experimentally relevant η-domain.
Second, the response exhibits no critical-regime derivative break, kink, local nonanalyticity, or equivalent strong-form departure in the neighborhood of η_c beyond what is already allowed by the declared perturbed baseline class.
Third, the response exhibits no bounded non-baseline deviation band of sufficient amplitude to leave the perturbed baseline envelope in the critical or postcritical accessibility regime.
Fourth, the observed response does not merely agree with the baseline below η_c while leaving the postcritical regime untested or unresolved.
This definition makes the null-result class exact. A null result is not simply a response that “looks ordinary.” It is a response that remains wholly absorbable into the declared baseline class throughout the part of the η-domain that matters to the theory.
G.3. Three kinds of negative outcome
Not every negative outcome is theory-relevant. The paper therefore distinguishes three kinds of negative result.
An irrelevant negative outcome occurs when the experimental implementation never actually realizes the designated protocol family or never enters the accessibility regimes relevant to the signature theorem. Such a result says nothing decisive about the canonical law because the theory’s own exposure domain was never reached.
An inconclusive negative outcome occurs when the protocol family is approximately realized and the relevant η-domain is at least partially sampled, but detector uncertainty, erasure impurity, environmental smoothing, or calibration uncertainty remain too large for the theory’s signature burden to become resolvable. Such a result does not support the theory, but neither does it falsify it. It shows only that the experiment did not attain a detectability-valid test regime.
An invalidating negative outcome occurs when the designated protocol family is realized under the declared validity conditions, the critical and postcritical accessibility regimes are sampled, the perturbative envelope remains within the declared tolerance, and the observed response still belongs to the null-result class. Only this third kind of negative outcome falsifies the canonical law form of the paper.
These distinctions are essential. Without them, the theory would either be unfairly vulnerable to bad tests or unfairly shielded from good ones.
G.4. Detectability-validity conditions
A null result counts against the theory only if it occurs under conditions in which the theory’s signature, if present at the strength required by the canonical form, would have been resolvable. A negative result is therefore detectability-valid only if the following conditions are satisfied.
First, the experiment must realize the designated protocol family faithfully enough that the signal–record structure, delayed-choice logic, and accessibility-sensitive control axis of the main text are physically present.
Second, the accessibility parameter η must be calibrated in a way consistent with the paper’s operational construction. It is not enough to vary a laboratory knob that is interpreted as “more or less accessible.” The experiment must instantiate a genuine accessibility control variable in the sense relevant to the theory.
Third, the sampled η-domain must include a neighborhood of η_c and a nonempty postcritical region. A dataset confined entirely to low-accessibility or precritical values does not test the signature burden of the canonical law.
Fourth, detector, erasure, environmental, and calibration perturbations must remain within the declared tolerance envelope. If the perturbations are too large, the experiment is not operating in the regime in which absence of signal is theory-relevant.
Fifth, the observable reconstruction must have sufficient effective resolution in the critical and postcritical windows to distinguish baseline containment from accessibility-sensitive deviation.
Only when these conditions are satisfied is a null result allowed to count as an answer to the theory.
G.5. Failure theorem in explicit form
The failure theorem of the main text can now be restated with full logical precision.
Theorem G.1 (Canonical Failure Criterion).
Let the law form, the admissibility structure, the operational accessibility parameter η, the designated protocol family, the visibility-based observable burden, and the perturbed baseline class all be fixed exactly as stated in the canonical paper. Suppose an observed response V_obs belongs to the null-result class 𝒩_null and that the corresponding experiment satisfies the detectability-validity conditions of Section G.4. Then canonical CBR in its present law form is false.
Proof.
The accessibility-signature theorem states that if accessibility enters realization law nontrivially, then the realized response cannot remain globally trapped inside the same smooth baseline class across all η-regimes in the designated protocol family. In strong form, the response leaves the baseline class through a critical-regime derivative break or kink near η_c. In weak form, it leaves the baseline class through a bounded non-baseline deviation band in a neighborhood of η_c.
Suppose now that the experiment is detectability-valid and that V_obs ∈ 𝒩_null. By definition of 𝒩_null, the observed response exhibits neither the strong-form nor the weak-form departure and remains wholly contained in the declared perturbed baseline class across the empirically relevant η-domain. Since the test is detectability-valid, this absence cannot be excused by under-sampling, unresolved ordinary perturbation, or protocol mismatch within the declared tolerance. Therefore the empirical consequence required by nontrivial accessibility-sensitive realization fails. The conjunction of the canonical law form, its admissibility structure, and its accessibility-relevance claim is therefore false.
This theorem is binary because the canonical paper has already done the narrowing work required for binary failure to make sense. A null result no longer bears merely on interpretive preference. It bears on one exact law candidate.
G.6. Why the invalidation is genuine
The invalidation theorem is genuine because the paper has already removed the standard escape routes. The law cannot be redescribed without changing the theory. The protocol family cannot be moved without changing the theory. The accessibility variable cannot be loosened without changing the theory. The baseline comparator cannot be enlarged after the fact without changing the comparison. The observable burden cannot be reassigned to some undeclared quantity without changing the empirical claim. Once those objects are frozen, continued null-result behavior under detectability-valid conditions is not merely discouraging. It is incompatible with the canonical theory.
This is the point at which the paper becomes scientifically exposed. It is no longer enough to say that the framework remains interesting or that some other future test might matter more. If the canonical paper’s own designated burden fails under valid conditions, then the canonical paper’s own law form fails with it.
G.7. What survives failure
It is important to state what failure does not prove. The failure theorem does not show that every conceivable realization-law theory is false. It does not show that no accessibility-sensitive completion of quantum outcome selection can exist. It does not refute every nonstandard completion of quantum theory. Those stronger claims are not made in the paper and are not purchased by the present theorem.
What failure does refute is the exact conjunction of claims made by the canonical paper:
that realization is governed by the canonical minimization law introduced here,
that admissibility is captured by the restricted class 𝒜(C),
that accessibility enters the law nontrivially through the burden structure,
that the designated protocol family is a valid exposure domain for that law,
and that the required response must leave the baseline class in the exact manner stated by the theorem.
If a detectability-valid null result occurs, that conjunction is false. What survives is only the logical possibility of some different theory.
G.8. Why this appendix strengthens the paper
A theory becomes stronger when it states the exact condition under which it dies. The present appendix therefore does not weaken the canonical paper by adding a failure clause. It completes the paper’s scientific posture. Without this appendix, the theory would end as a sharpened law proposal with an empirical suggestion. With it, the theory ends as a law proposal that has identified the exact condition under which the world is permitted to reject it.
That is the correct final status of the canonical paper. It is not yet an empirically confirmed theory. It is now a canonically specified, operationally exposed, and finitely falsifiable one.
Theorem Spine
The canonical theory developed in this paper is closed by three theorems. Together they exhaust the burden claimed at the level of the present work. Their order is logically necessary. A realization-law proposal must first show that its law form selects non-arbitrarily within a restricted admissible class. It must then show that, if accessibility is realization-effective, the resulting response cannot remain globally contained within the declared standard baseline class. It must finally state the exact condition under which failure of that response burden counts against the theory itself. The present paper is organized to satisfy exactly those three burdens.
Theorem 1 — Restricted Canonical Uniqueness.
Let C be a physically specified measurement context and let 𝒜(C) be the restricted admissible class of realization-compatible channels defined by dynamical compatibility, representational invariance, record-structural coherence, accessibility consistency, and restricted probabilistic discipline. Let the canonical realization functional be
ℛ_C(Φ) = αΞ_C(Φ) + βΩ_C(Φ) + γΛ_C(Φ),
with α, β, γ ≥ 0 fixed. Under the stated existence and regularity assumptions, the selected realization channel
Φ★C = arg min{Φ ∈ 𝒜(C)} ℛ_C(Φ)
exists and is unique up to operational equivalence within 𝒜(C). Hence the canonical law does not merely constrain realization; it selects a unique physical verdict class modulo operationally null reformulation.
Theorem 2 — Accessibility-Signature Theorem.
Let 𝒫 be the designated accessibility-sensitive protocol family and let η ∈ [0,1] be the operational accessibility parameter associated with contexts in 𝒫. If accessibility enters the canonical realization law nontrivially through the accessibility-consistency burden Λ_C, then the induced realization-sensitive response cannot remain globally contained in the declared smooth baseline class across the full admissible η-domain. Under the strongest regularity assumptions, the resulting non-equivalence is localized near a critical accessibility value η_c and appears as a critical-regime derivative break or kink in the primary observable. If those stronger regularity assumptions are relaxed while the accessibility-sensitive transition remains nontrivial, a bounded non-baseline deviation class persists in a nonempty neighborhood of η_c. Hence the theorem fixes both the regime and the admissible form of the theory’s first empirical manifestation.
Theorem 3 — Failure Criterion.
Let the canonical law form, the admissibility structure, the operational accessibility parameter η, the designated protocol family, the baseline comparator, and the observable burden all be fixed exactly as stated in this paper. If the protocol family, under detectability-valid conditions, exhibits only baseline-class behavior across the physically relevant and experimentally accessible η-domain, with neither the strong-form nor weak-form accessibility signature appearing beyond the declared tolerance, then canonical CBR in its present form is false. Hence the theory stands not merely under comparative interpretation, but under a finite public invalidation condition.
These three theorems are sufficient and complete with respect to the claim of the canonical paper. The first fixes the law as a genuine selection rule. The second fixes the empirical regime and admissible signature by which that law becomes observable if accessibility is realization-effective. The third fixes the condition under which absence of that signature counts as failure of the theory. No weaker sequence would make the canonical law scientifically vulnerable, and no additional theorem is required to state the present paper’s empirical burden. In that sense, the theorem spine closes the canonical model as a testable theory candidate.