// 01
The AION Definition
What a Framework Actually Is
Not a document. Not an opinion. A reasoning machine with declared failure conditions.

In the AION methodology, a framework is a structured reasoning system with exactly one epistemic function per module, explicit invalidation conditions, registered formulas under Protocol I, and a convergence state that measures how much empirical evidence supports it.

It is not a best-practices guide. Not a consulting recommendation. Not a methodology deck. It is an instrument — and like any instrument, its first obligation is to report honestly when its measurement is unreliable.

An AION Framework IS
  • A structured reasoning system with one epistemic function per module
  • Falsifiable — carries explicit conditions under which it is invalid
  • Versioned — every change documented with a delta and a trigger
  • Convergence-tracked — M-NASCENT through M-STRONG, honest at each stage
  • Protocol I registered — every quantitative metric has a declared formula
  • CEV-audited — arithmetic verified against edge cases before deployment
  • ECF-tagged — every claim carries [D] / [R] / [S] / [?] epistemic status
  • NBP-equipped — every assertion has a falsification condition and a CF score
  • Deployable in any domain — the core methodology does not change
  • Honest about what it cannot yet measure — M-NASCENT is not a failure
An AION Framework IS NOT
  • A PowerPoint deck of recommendations
  • A best-practices guide with no failure mode
  • A document that cannot tell you when it is wrong
  • A one-size-fits-all methodology applied uniformly across harm tiers
  • A consulting opinion dressed in framework language
  • A system with undefined metrics that produce unverifiable outputs
  • Declared M-STRONG on day one — that is confabulation
  • A living document quietly updated to match what happened
  • A proprietary black box — invalidation conditions are public by default
  • Patched — defects are documented and rebuilt from first principles, never fixed silently
01
Axiom Register
The foundational claims that must hold for the framework to be valid. If an axiom is disputed before execution, the framework halts. Axioms are declared before modules are designed — never retrofitted to match conclusions.
Declared before build
02
Module Isolation
Each module is assigned exactly one epistemic function. A module assigned multiple functions cannot enforce boundaries on any of them. Cognitive drift is a structural consequence of role-mixing — not a failure of execution.
One function per module
03
Protocol I Registration
Every quantitative metric registered: formula, domain, measurement class, CF score, and at least one falsification condition. Unregistered metrics are [?] unverified. A metric without Protocol I registration cannot produce certified outputs.
Every metric has a formula
04
Invalidation Conditions
The exact conditions under which the framework declares itself invalid — not merely uncertain, but structurally unable to produce reliable output. This is not a limitation. It is the primary instrument of trust. A framework without invalidation conditions is a promise, not an instrument.
Explicitly declared
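The registration rule in item 03 can be sketched as a data structure. This is a hypothetical shape: the field names and the [?]-demotion rule follow the description above, but Protocol I's concrete schema is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Protocol I registry entry. Field names
# (formula, domain, cf_score, ...) are illustrative, not the official schema.

@dataclass
class MetricRegistration:
    name: str
    formula: str             # declared formula, stated explicitly
    domain: str              # valid input domain for the formula
    measurement_class: str   # e.g. "ratio", "ordinal"
    cf_score: float          # CF score for the metric itself
    falsification_conditions: list[str] = field(default_factory=list)

    @property
    def epistemic_tag(self) -> str:
        # A metric without at least one falsification condition is
        # incomplete: it drops to [?] and cannot certify outputs.
        return "[?]" if not self.falsification_conditions else "[S]"

gini = MetricRegistration(
    name="Gini",
    formula="G = sum_ij |x_i - x_j| / (2 * n**2 * mean(x))",
    domain="x_i >= 0, at least one x_i > 0",
    measurement_class="ratio",
    cf_score=0.7,
)
assert gini.epistemic_tag == "[?]"   # no falsification condition -> unverified
```

The design point is the property, not the dataclass: the tag is derived from the registration's completeness, so it cannot be asserted independently of the evidence.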
// 02
7-Phase Build Sequence
The Build Protocol
The sequence is fixed. No phase skips. No shortcuts past Protocol I.

The build protocol exists because frameworks designed out of order break in the same predictable ways. Axioms defined after modules produce circular reasoning. Formulas registered after deployment cannot be falsified retroactively. The sequence enforces architectural integrity before a single measurement is taken.

01
Domain Scoping
Define the domain, the failure modes it carries, the harm potential of output error, and the regulatory or epistemic context. Harm Risk Tier is declared here — it governs verification requirements for the entire build. A framework for medical decision support operates at a different epistemic constraint level than a framework for content scoring. These constraints cannot be adjusted after the build begins.
Domain declaration · Harm Risk Tier · Regulatory context · Scope boundary statement
02
Failure Mode Mapping
Identify the specific ways this framework can produce wrong outputs. Not theoretical failure — documented failure patterns from analogous systems, from red team sessions, from the AION failure archive. Each failure mode becomes a candidate for an invalidation condition. If you cannot name how a framework fails, you cannot build one that knows when it is failing.
Failure mode register · Red team candidates · Invalidation condition draft set
03
Axiom Registration
Governing axioms declared before modules are designed. Each axiom carries an epistemic tag ([D] / [R] / [S]) and a verification statement. Axioms tagged [S] or [?] require explicit CF scores. Any axiom that cannot be stated precisely enough to carry a tag is not ready to be an axiom — it is still a hypothesis. Hypotheses are tracked in the open threads register, not the axiom register.
Formal axiom register · ECF tags assigned · CF scores for [S] and [?] axioms
04
Module Specification
Each module specified with: one epistemic function, declared boundaries, explicit prohibitions, input and output constraints, violation conditions, bias safeguards, and domain adapter hooks. The module specification is the contract. If a downstream module receives output from a module declared in violation, it halts. No silent propagation of boundary breaches.
Module specification set · Fence rules · Violation conditions · Bias safeguard registry
05
Protocol I Formula Registration
Every quantitative metric registered before any output is generated. Registration requires: formula declaration, domain specification, measurement class, CF score, and at least one falsification condition tied to an FCL entry threshold. A metric without Protocol I registration is tagged [?] and cannot produce certified outputs. This phase surfaces undefined metrics that were assumed to be derivable — the FSVE Gini error and the CPA-001 BRS undefined input were both invisible until this registration was enforced.
Protocol I registry · Formula set · CF scores · NBP entries for each metric
06
CEV Arithmetic Audit
Calculated Empirical Verification: every formula verified against edge cases, boundary conditions, and the full domain of valid inputs before deployment. The FSVE v3.5 Gini formula produced negative values for every input — a sign error undetected until CEV was applied. CPA-001 v2.1 carried a BRS formula with an undefined input variable — the primary certainty metric was not computable. CEV does not verify intent. It verifies that the mathematics does what the specification claims.
CEV finding register · All findings resolved · CEV Audit Record installed as a framework section
07
Convergence State Declaration
The framework is assigned its honest convergence state — M-NASCENT on day one for every framework, regardless of how thoroughly it was designed. Convergence advances through empirical FCL entries from real deployments, not through confidence or elapsed time. The convergence state is public, versioned, and falsifiable. Declaring M-STRONG without FCL evidence misrepresents the framework's evidence base.
Convergence certificate · FCL path declared · Advancement criteria specified
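Phase 06 can be illustrated with a minimal edge-case sweep. The buggy Gini below is a hypothetical reconstruction of the kind of sign error described for FSVE v3.5, not the actual FSVE formula, and `cev_check` is an illustrative name.

```python
# Illustrative CEV-style edge-case sweep. CEV does not verify intent;
# it verifies that the mathematics stays inside its declared domain.

def gini_buggy(xs):
    # Hypothetical sign error: the absolute-difference sum is negated.
    n = len(xs)
    mean = sum(xs) / n
    total = -sum(abs(a - b) for a in xs for b in xs)   # <- wrong sign
    return total / (2 * n * n * mean)

def gini_correct(xs):
    n = len(xs)
    mean = sum(xs) / n
    total = sum(abs(a - b) for a in xs for b in xs)
    return total / (2 * n * n * mean)

# Edge cases: perfect equality, extreme concentration, a plain spread.
EDGE_CASES = [[1, 1, 1, 1], [0, 0, 0, 1], [1, 2, 3, 4], [5, 0, 0, 0, 0]]

def cev_check(gini):
    findings = []
    for xs in EDGE_CASES:
        g = gini(xs)
        if not (0.0 <= g <= 1.0):            # Gini must lie in [0, 1]
            findings.append((xs, g, "out of declared domain [0, 1]"))
    return findings

assert cev_check(gini_correct) == []
assert len(cev_check(gini_buggy)) == 3       # negative on every unequal input
```

A sweep like this is cheap, mechanical, and catches the class of defect that code review reliably misses: a formula that is syntactically plausible and arithmetically wrong.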
// 03
Domain-Agnostic · Any System
Every Domain Is In Scope
The methodology does not change. The harm tier and adapter parameters do.

The AION methodology is domain-agnostic by design. The same 7-phase build protocol, the same Protocol I registration, the same CEV audit applies whether the framework governs an AI decision system or a clinical trial pipeline. What changes is the harm tier, the authoritative source set, the confidence ceilings, and the fail-safe activation conditions. The architecture beneath them does not.

Reference Implementation
AI Systems
Output certification · Failure logging · Epistemic audit · Prompt architecture · Model evaluation · Benchmark integrity
Harm Tier 3–5
Life Sciences
Medical / Clinical
Clinical decision support · Drug safety evaluation · Trial data integrity · Diagnostic pipeline verification · Incident reporting
Harm Tier 5
Infrastructure
Defense & Government
Intelligence analysis integrity · Mission parameter verification · Policy decision frameworks · Classified system audit
Harm Tier 5
Compliance
Legal & Regulatory
Compliance verification · Jurisdictional constraint mapping · Regulatory filing integrity · Contract scope anchoring · eDiscovery chain
Harm Tier 4
Capital Markets
Financial Systems
Risk model certainty scoring · Investment recommendation integrity · Algorithmic trading audit · Disclosure verification · Model governance
Harm Tier 4
Research
Scientific Pipelines
Hypothesis registration integrity · Replication tracking · Peer review epistemic scoring · Data chain verification · Pre-registration enforcement
Harm Tier 3–4
Operations
Industrial Safety
Process control failure classification · Near-miss logging · Safety margin certainty scoring · Incident report integrity · Cascading failure mapping
Harm Tier 4–5
Media & Intelligence
Information Systems
Source chain integrity · Claim verification frameworks · Disinformation detection certainty · Editorial decision auditing · Archive chain
Harm Tier 3–4
Climate & Environment
Environmental Models
Scenario model epistemic integrity · Projection uncertainty scoring · Policy recommendation certainty · Data pipeline verification
Harm Tier 3
Knowledge Systems
Education & Assessment
Evaluation framework integrity · Assessment scoring certainty · Learning outcome verification · Credential chain auditing
Harm Tier 2–3
Supply Chain
Operations & Logistics
Vendor claim verification · Supply chain integrity scoring · Provenance chain certainty · Failure cascade mapping · ESG claim verification
Harm Tier 2–3
Custom
Any Domain
If your system produces outputs that inform decisions with real consequences, a certainty framework applies. The domain scoping phase handles everything that is specific to you.
Tier TBD

Harm Tier governs verification requirements, confidence ceilings, and fail-safe activation conditions. Tier 5 = physical harm or death possible. Tier 1 = negligible consequence. See CDIP v1.5 for full Harm Tier protocol.
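The tier note above can be sketched as a lookup table. All numeric values here are placeholders — the real ceilings and fail-safe conditions live in CDIP v1.5, which is not reproduced in this document.

```python
# Hypothetical harm-tier table. Tier semantics follow the note above
# (Tier 5 = physical harm or death possible, Tier 1 = negligible);
# the ceilings and fail-safe flags are placeholder values, not CDIP v1.5.

HARM_TIERS = {
    1: {"confidence_ceiling": 0.95, "fail_safe_required": False},
    2: {"confidence_ceiling": 0.90, "fail_safe_required": False},
    3: {"confidence_ceiling": 0.85, "fail_safe_required": True},
    4: {"confidence_ceiling": 0.80, "fail_safe_required": True},
    5: {"confidence_ceiling": 0.70, "fail_safe_required": True},
}

def certified_confidence(raw_confidence: float, tier: int) -> float:
    """Clamp a framework's reported confidence to its tier ceiling."""
    ceiling = HARM_TIERS[tier]["confidence_ceiling"]
    return min(raw_confidence, ceiling)

assert certified_confidence(0.99, 5) == 0.70   # Tier 5 caps hardest
assert certified_confidence(0.60, 1) == 0.60   # below ceiling passes through
```

The clamp direction is the point: a higher harm tier never raises what the framework may claim, only lowers it.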

// 04
The Honesty Instrument
The Convergence Ladder
M-NASCENT is not a failure. It is the only honest starting point.

Convergence state measures how much empirical evidence supports the framework's claims — not how carefully it was designed, not how long it has been in use. Every framework begins at M-NASCENT. Every advance requires documented FCL entries from real deployments. A framework declared M-STRONG without FCL evidence is a misrepresentation of the same kind as a clinical trial with fabricated data.

M-NASCENT
Specified and internally consistent — not yet empirically calibrated. Framework architecture is complete: axioms registered, modules specified, Protocol I formulas declared, CEV audit passed, invalidation conditions stated. FCL entries: 0 or below advancement threshold. All quantitative claims carry [S] tags indicating theoretical basis only. This is not a limitation — it is the accurate description of a new instrument before its first field use.
FCL: 0 → threshold
M-MODERATE
Empirically tested — predictive validity partially confirmed. FCL entries from real deployments logged. Some [S] claims upgraded to [R] or [D] based on observed outcomes. Formula coefficients have initial calibration data. Invalidation conditions have been tested — none triggered, but the test is documented. Edge cases identified in production are recorded in the FCL.
FCL: threshold met
M-STRONG
Convergence validated — calibrated across multiple deployment contexts. Formula coefficients empirically calibrated, not theoretically asserted. Inter-rater reliability targets met. Predictive validity documented across domain variants. Adversarial tests conducted and passed. At least one prior version was superseded by documented findings — the framework has been falsified and rebuilt, not only confirmed. LAV v1.5 is the only framework in the AION stack at this state.
Multi-domain FCL
M-VERY STRONG
Cross-validated and independently replicated. Applied by independent auditors outside the original build context. Replication studies confirm core predictive validity. External review has attempted falsification and documented the attempt. At least one major structural revision driven by empirical finding is in the record. This state is rare by design — most frameworks should not reach it.
Independent replication
CONSTITUTIONAL
Foundational — governs other frameworks, not governed by them. Reserved for frameworks that define the rules under which all other frameworks operate. In the AION stack, only the EIGHT LAWS hold Constitutional status. A Constitutional framework that fails does not produce a wrong output — it produces a system that can no longer be trusted at all.
Architectural only
What convergence state is not
  • Not a quality score — M-NASCENT can be architecturally perfect and empirically untested simultaneously
  • Not a function of elapsed time — a framework used for 10 years with no FCL entries is still M-NASCENT
  • Not awarded — earned through documented FCL entries, not through client confidence or architect assertion
  • Not permanent — a framework at M-STRONG can regress if FCL entries reveal systematic error
  • Not hidden — convergence state is public and versioned in every framework release
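The elapsed-time bullet can be made concrete with a small sketch. The threshold value and function names are illustrative assumptions, not AION constants.

```python
from enum import Enum

# Sketch of the advancement rule described above: state advances on FCL
# evidence only. The threshold is a placeholder; real values are declared
# per framework in its FCL path document. M-STRONG and above require
# multi-domain and independently replicated evidence, not modeled here.

class Convergence(Enum):
    M_NASCENT = 0
    M_MODERATE = 1
    M_STRONG = 2

FCL_THRESHOLD_MODERATE = 5      # placeholder advancement threshold

def convergence_state(fcl_entries: int, years_in_use: float) -> Convergence:
    # Elapsed time is deliberately ignored: a framework used for 10 years
    # with zero FCL entries is still M-NASCENT.
    if fcl_entries >= FCL_THRESHOLD_MODERATE:
        return Convergence.M_MODERATE
    return Convergence.M_NASCENT

assert convergence_state(0, years_in_use=10.0) is Convergence.M_NASCENT
assert convergence_state(5, years_in_use=0.1) is Convergence.M_MODERATE
```

Taking `years_in_use` as a parameter and then ignoring it is deliberate: the signature documents the temptation the rule exists to refuse.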
// 05
What You Receive
The Deliverable Specification
Every item is required. None are optional. The framework is not certified without all of them.

A custom AION framework engagement produces a complete, deployable framework document plus the full supporting architecture needed to maintain it: formula registry, falsification conditions, convergence certificate, and a CEV audit record that cannot be quietly updated after delivery. You receive not just the framework but the instrument for knowing when it is wrong.

Full Deliverable Set — Custom AION Framework Engagement
  • Framework Document: complete specification with domain lock, term definitions, axiom register, and a module set with a full specification template per module. All sections in AION-standard format. Version 1.0 on delivery — versioned for all future iterations. Primary · Versioned
  • Protocol I Registry: every quantitative metric registered with formula, domain, measurement class, CF score, inter-rater reliability target, and a minimum of one falsification condition per formula. Unregistered metrics are moved to open threads with [?] status — they do not appear in the live framework. Required · All metrics
  • NBP Entry Set: Nullification Boundary Protocol entries for every [S]-tagged claim. Each entry states the exact empirical condition under which the claim is falsified, the resolution protocol, and the required sign-off for revision. These entries are what separate a framework from a collection of opinions. Required · [S] claims
  • CEV Audit Record: full arithmetic verification of every formula, scanned for edge cases, boundary conditions, and domain errors. Findings register with severity classifications and resolution documentation. A clean CEV record means zero unresolved findings — not zero findings. The distinction matters. Required · Pre-delivery
  • Invalidation Conditions: the complete set of conditions under which the framework declares itself invalid — not uncertain, but structurally unable to produce reliable output. Written as falsification tests, not caveats. The framework can fail these tests. That is the point. Required · All frameworks
  • ECF Tag Distribution: every factual claim carries an epistemic tag, [D] data / [R] reasoned / [S] strategic / [?] unverified. Tag distribution summary in the framework header. No claim exits without a tag. The tag count is falsifiable — if the numbers do not match the inline tags, it is a protocol failure. Applied · All claims
  • Convergence Certificate: formal convergence state declaration on delivery, M-NASCENT for every new framework. Includes the current FCL entry count, the advancement threshold, what the first FCL entry requires, and the path to M-MODERATE as a measurable milestone — not a time estimate. Issued · Day one
  • Version Delta Architecture: the framework ships with its versioning structure in place — v1.0 with the version trigger documentation format ready. Every future revision produces a delta entry: what changed, what triggered the change, what the previous value was, and the severity of the change. No silent updates. Infrastructure · v1.0
  • FCL Path Documentation: the Framework Calibration Log is initialized empty on delivery — this is correct. The FCL path document specifies how to log entries from live deployments, what constitutes a valid FCL entry, the threshold for M-MODERATE advancement, and the review process for convergence state revision. Empty on delivery
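The falsifiable tag-count claim in the ECF Tag Distribution item can be sketched as a simple audit. The [D]/[R]/[S]/[?] syntax is from the document; the function names and sample text are illustrative.

```python
import re

# Sketch of the tag-count check: the header's declared distribution
# must match the tags actually present inline. A mismatch is a protocol
# failure, not a rounding issue.

TAG_PATTERN = re.compile(r"\[(D|R|S|\?)\]")

def tag_distribution(framework_text: str) -> dict:
    counts = {"D": 0, "R": 0, "S": 0, "?": 0}
    for m in TAG_PATTERN.finditer(framework_text):
        counts[m.group(1)] += 1
    return counts

def audit_header(declared: dict, framework_text: str) -> bool:
    return declared == tag_distribution(framework_text)

body = "Latency rose 12% [D]. Root cause is queue depth [R]. Fix ships Q3 [S]."
assert audit_header({"D": 1, "R": 1, "S": 1, "?": 0}, body)
assert not audit_header({"D": 2, "R": 1, "S": 1, "?": 0}, body)
```

Because the check is a pure text scan, any reader can run it against a delivered framework without trusting the header — which is exactly what makes the header falsifiable.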
// 06
Live Evidence Base
Reference Implementations
The AION stack is the proof of the methodology. Not a demo — a deployed record.

The frameworks offered here are not theoretical. The AION stack itself is built under this methodology — and CEV audits conducted this month found critical errors in frameworks that were already deployed and in active use. The methodology found them. The deliverables were corrected. The correction log is public. That is what the methodology looks like in operation.

FSVE
v3.6 — CEV Correction Release · March 2026
M-MODERATE
Certainty Scoring Engine — 6 score dimensions, validity threshold enforcement, multi-perspective reviewer architecture. The CEV v3.6 audit found a sign error in the Gini formula that produced negative values for all inputs. Corrected. All prior v3.5 laundering clearances using Gini are void and require a re-run under v3.6. The error was not visible until CEV was applied. That is the entire argument for the methodology.
8 CEV findings — 2 critical, 1 structural gap, 2 major, 3 minor
EV: 0.525 — bottleneck E=0.35, empirical not architectural
NBP-FORMULA-GINI-01 — first [D]-tagged NBP entry in FSVE
FCL entries: 0 — convergence honest, not suppressed
LAV
v1.5 — Active · March 2026
M-STRONG
Linguistic Anchor Validation — the only M-STRONG framework in the AION stack. 45 validated entries, 77.5% running mean. Language precision enforcement for all framework output. M-STRONG status earned through documented FCL entries across multiple deployment contexts. The reference case for what convergence advancement requires in practice — and for how long it takes.
45 validated entries — active registry
Running mean: 77.5% — above M-STRONG threshold
Only framework in the stack at M-STRONG
The benchmark every other framework is measured against
CPA-001
v2.2 — FSVE v3.6 Audit Release · March 2026
M-NASCENT
Cognitive Prompt Architecture — bias-constrained, domain-activatable reasoning framework. FSVE v3.6 review found the BRS formula carried an undefined input variable in v2.1 — the primary certainty metric was not computable. All prior v2.1 BRS values are [?] unverified. v2.2 registers DFS, RS, and OEI formulas under Protocol I for the first time. This is v2.2 because the methodology caught what v2.1 should not have shipped with.
12 CEV findings — 1 critical, 4 major, 2 moderate, 5 minor
3 Protocol I registrations new in v2.2: DFS, RS, OEI
3 NBP entries — NBP-CPA-001 through NBP-CPA-003
First version that can produce reproducible BRS outputs
What the reference implementations demonstrate
  • The methodology finds critical errors in frameworks it produced. That is not a failure of the methodology — that is the methodology working exactly as designed.
  • M-NASCENT with zero FCL entries and a clean CEV record is more trustworthy than M-STRONG asserted without evidence.
  • The correction log is always public. There is no incentive to suppress findings — suppression would be detectable in the next CEV audit.
  • Every critical finding strengthens the framework. FSVE v3.6 is a more reliable certainty engine precisely because the Gini error is documented and the correction is registered.

The framework that knows when it is wrong.

Framework engineering engagements begin with a scoping conversation — domain, failure modes, harm tier, deployment context. The deliverable is specified before the build begins. If the scope cannot be defined precisely enough to produce a Protocol I registry, the engagement does not proceed.

Engagement via LinkedIn DM. No RFP required — a clear domain statement
and a willingness to name your failure modes are enough to begin.