ACTIVE RED TEAM: ALWAYS ON · RUNNING MEAN: 77.5% · ANCHOR ENTRIES: 45 · M-STRONG · Session: 0 terms audited

Linguistic Audit Vector

Root fidelity first. Drift has a driver. Latency is the work.
LAV v1.5
M-STRONG · 45 ANCHOR ENTRIES
TEN PROTOCOLS · A through J
77.5% RUNNING MEAN
01 ISOLATION
02 ROOT EXCAVATION
03 DRIFT MAPPING
04 LDS SCORING
05 ISSUANCE
STAGE 1 — ISOLATION · Extract the term from context
Term Under Audit * required
Deployment Context optional but recommended
Term Properties — check all that apply
Enter the term before advancing.
STAGE 2 — ROOT EXCAVATION (ROOT LISTENING) · Trace the etymological lineage
Excavate each level in sequence. What physical, relational, or phenomenological reality was the term encoding at each level? Document the original intuition — the thing a person was pointing at when this sound first became a word. Mark levels as UNRECOVERABLE where evidence is absent.
CONFIDENCE DESIGNATION — AUTO-DERIVED FROM EVIDENCE
— PENDING
Complete the root excavation above to determine confidence designation.
⚠ PROTOCOL C ACTIVATED — Root marked unrecoverable. Confidence forced to [INFERRED] (LDS × 1.25). First-usage context has been designated as substitute ground. Entry will carry ROOT_UNRECOVERABLE flag in the Drift Archive.
Complete at least one root level with a root form and encoding before advancing.
STAGE 3 — DRIFT MAPPING · Classify every departure from root encoding
Every drift type requires a decision: PRESENT, ABSENT, or UNCERTAIN. All seven must be addressed before the primary drift type can be selected. Drift is never passive — the driver must be named.
Decisions: 0 / 7 made · Primary type: not yet selected
All 7 drift types must have a decision, a primary type must be selected, and the drift driver must be named.
STAGE 4 — LDS SCORING · Linguistic Drift Score across three dimensions
FORMULA · Standard LDS
LDS = ((1 − RF) + CL + DR) / 3
Confidence multiplier applied after: [ETYMOLOGICAL] × 1.00 · [CONTEXTUAL] × 1.10 · [INFERRED] × 1.25
v1.5 Boundary Rule: LDS exactly at threshold → assigned to more cautious category
Adjust the scoring sliders to reflect evidence before advancing. Default 0.5/0.5/0.5 requires explicit confirmation.
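The Stage 4 pipeline (formula, confidence multiplier, boundary rule, issuance thresholds) can be sketched in Python. This is an illustrative sketch; the function name and signature are not part of the LAV spec, but the multipliers and thresholds are taken directly from it.

```python
def lds_status(rf, cl, dr, confidence="ETYMOLOGICAL"):
    """Standard LDS = ((1 - RF) + CL + DR) / 3, then the confidence
    multiplier, then mapping to an issuance category.
    Illustrative sketch; names are not part of the LAV spec."""
    multipliers = {"ETYMOLOGICAL": 1.00, "CONTEXTUAL": 1.10, "INFERRED": 1.25}
    lds = ((1 - rf) + cl + dr) / 3 * multipliers[confidence]
    # v1.5 Boundary Rule: an LDS landing exactly on a threshold goes to
    # the more cautious (higher) category, hence the strict comparisons.
    if lds < 0.400:
        status = "CONFIRMED"
    elif lds < 0.600:
        status = "DEFINED"
    elif lds < 0.800:
        status = "CORRECTED"
    else:
        status = "RETIRED"
    return round(lds, 3), status
```

Note how the Protocol C multiplier can move a term across a band: a 0.5/0.5/0.5 audit scores 0.500 (DEFINED) at [ETYMOLOGICAL], but 0.625 (CORRECTED) at [INFERRED].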
STAGE 5 — ISSUANCE · Formal issuance and follow-up requirements
LDS —
Complete LDS scoring in Stage 4 to receive issuance.
MONITORING TIER: Deployment exposure:
◈ PROTOCOL I REQUIRED — This term was flagged as a quantitative metric field. After issuance, register the formula in Protocol I (Protocols tab). Until formula registration is complete, this field carries PROVISIONAL status.
Complete required follow-up fields before saving to archive.
Protocol A
Compound Term
Resolve terms built from two or more etymological roots. Constituent excavation, Fusion Integrity scoring, DEFINED×DEFINED detection.
Protocol B
Cross-Language Conflict
Audit terms fusing roots from different language families. Ontological Alignment scoring. Cross-language LDS formula.
Protocol D
Neologism Coining
Coin new terms for conceptual territory with no defensible name. Inaugural LDS. Root selection. Pressure point mapping.
Protocol F
Discourse-Level Drift
Detect drift at the argument level — not just individual terms. DDS calculation. Interaction Premium. False Consensus Amplification.
Protocol G
Temporal Drift Monitoring
Measure how fast a term is moving from its root. Drift Velocity per dimension. Monitoring tier escalation triggers.
Protocol H
Cross-Framework Interference
Detect when drift in one framework propagates into others. CFIS scoring. Cross-document Definition Divergence (Step H6).
Protocol A — Compound Term Resolution
Decompose the compound. Excavate each constituent independently. Score Fusion Integrity.
STEP A1 — DECOMPOSITION
Compound Term
Constituent Root 1
Constituent Root 2
STEP A2b — DEFINED × DEFINED DETECTION (v1.5 STF-002)
If both constituents have previously been issued DEFINED status under LAV, the DR interaction multiplier 1.15 applies before compound LDS calculation. Check the Drift Archive for prior audits of each constituent.
Prior issuance status — Constituent 1
Prior issuance status — Constituent 2
◈ DEFINED × DEFINED DETECTED — DR interaction multiplier 1.15 will be applied to the DR component before compound LDS calculation.
STEP A4 — FUSION INTEGRITY [FI]
Fusion Integrity — how well the two roots integrate conceptually. 0.0 = contradicts itself · 1.0 = perfect semantic fusion
0.50
Notes on fusion — why do/don't these roots integrate cleanly?
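The compound formula from the unified scoring table, ((1−RF) + CL + DR×1.15[if D×D] + (1−FI)) / 4, can be sketched as follows. Treating RF, CL, and DR here as compound-level scores is an assumption; the protocol text does not say which constituent's scores feed the formula.

```python
def compound_lds(rf, cl, dr, fi, defined_x_defined=False):
    """Protocol A compound LDS, per the unified scoring table:
    ((1-RF) + CL + DR*1.15[if DEFINED x DEFINED] + (1-FI)) / 4.
    Illustrative sketch; argument names follow the spec's symbols."""
    if defined_x_defined:
        dr *= 1.15  # Step A2b interaction multiplier (v1.5 STF-002)
    return ((1 - rf) + cl + dr + (1 - fi)) / 4
```

With all sliders at their 0.50 defaults and DR at 0.40, the D×D multiplier lifts the compound LDS from 0.475 to 0.49.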
STEP A5 — COMPOUND ISSUANCE
Protocol B — Cross-Language Root Conflict
Audit terms fusing roots from different language families. Ontological Alignment scoring.
STEP B1 — LINEAGE IDENTIFICATION
Root 1
Language Family
Root 2
Language Family
STEP B3 — ONTOLOGICAL ALIGNMENT SCORING [OA]
What does Root 1 assume about the nature of what it encodes?
What does Root 2 assume about the nature of what it encodes?
Ontological Alignment [OA] — how well these assumptions agree. 1.0 = fully aligned · 0.5 = one dimension compatible · 0.0 = directly contradictory
0.50
CROSS-LANGUAGE LDS RESULT
Formula: LDS = ((1−RF₁) + (1−RF₂) + CL + DR + (1−OA)) / 5
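The cross-language formula is a direct five-term average; a minimal sketch, with the function name illustrative rather than part of the spec:

```python
def cross_language_lds(rf1, rf2, cl, dr, oa):
    """Protocol B: LDS = ((1 - RF1) + (1 - RF2) + CL + DR + (1 - OA)) / 5.
    Low OA (contradictory root assumptions) raises the score directly."""
    return ((1 - rf1) + (1 - rf2) + cl + dr + (1 - oa)) / 5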
Protocol D — Neologism Coining
Coin new terms with full root justification and inaugural LDS scoring.
STEP D1 — NEED CONFIRMATION
Conceptual territory to be named
Why no existing CONFIRMED or DEFINED term serves this territory
STEP D2 — ROOT SELECTION
Coined term
Constituent roots and their justification
STEP D3 — INAUGURAL LDS SCORING
Coined terms do not receive an RF score — their root is their current usage. Inaugural LDS = (CL + DR) / 2. Scores CL and DR only.
CONCEPTUAL LOAD [CL]
How much ontological weight will the coined term carry beyond what its root encodes? 0.0 = exactly what the root was built for · 1.0 = critically overloaded
0.25
DISAMBIGUATION RISK [DR]
Projected risk of false consensus given adjacent terms and existing usage of constituent words. 0.0 = no false consensus risk · 1.0 = near-certain false consensus
0.15
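The inaugural formula is small enough to state directly; with the form's default sliders (CL 0.25, DR 0.15) the coined term scores 0.20, comfortably inside the CONFIRMED band. A minimal sketch:

```python
def inaugural_lds(cl, dr):
    """Protocol D: coined terms carry no RF score (their root is their
    current usage), so Inaugural LDS = (CL + DR) / 2."""
    return (cl + dr) / 2
```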
STEP D4 — PRESSURE POINT MAPPING
Projected pressure points — forces most likely to drive this term toward drift
Protocol F — Discourse-Level Drift Detection
Score an argument or passage for accumulated drift across all load-bearing terms.
LOAD-BEARING TERMS — Enter each term with its LDS score
Minimum 2 terms required. Add all load-bearing terms in the discourse — terms that carry ontological commitment, not incidental vocabulary.
Minimum 2 terms required for discourse-level scoring.
DDS CALCULATION
Base DDS = avg(all LDS scores) · Interaction Premium = (HIGH pairs × 0.08) + (MOD pairs × 0.04) + (LOW pairs × 0.01)
Final DDS = (Base DDS + Interaction Premium) × (1 + (N_high_DR × 0.05))
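The DDS arithmetic can be sketched as follows. The pair counts (HIGH/MOD/LOW interaction pairs) and the count of high-DR terms are auditor inputs here, since the spec does not define how pairs are classified.

```python
def discourse_dds(lds_scores, high_pairs=0, mod_pairs=0, low_pairs=0, n_high_dr=0):
    """Protocol F: Base DDS = avg(LDS scores);
    Interaction Premium = HIGH*0.08 + MOD*0.04 + LOW*0.01;
    Final DDS = (Base + Premium) * (1 + N_high_DR * 0.05)."""
    if len(lds_scores) < 2:
        raise ValueError("minimum 2 load-bearing terms required")
    base = sum(lds_scores) / len(lds_scores)
    premium = high_pairs * 0.08 + mod_pairs * 0.04 + low_pairs * 0.01
    return (base + premium) * (1 + n_high_dr * 0.05)
```

For two terms at 0.3 and 0.5 with one HIGH pair and two high-DR terms, the final DDS is (0.4 + 0.08) × 1.10 = 0.528.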
Protocol G — Temporal Drift Monitoring
Measure drift velocity. Compare current LDS to previous. Calculate escalation triggers.
PREVIOUS AUDIT RECORD
Enter the RF, CL, DR scores from the most recent prior audit of this term. If no prior audit exists in this session, enter values manually.
Term
Previous RF
Previous CL
Previous DR
INTERVAL
Months since previous audit
DRIFT VELOCITY RESULTS
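A hedged sketch of the velocity calculation: the unified scoring table gives DV = (DV_RF + DV_CL + DV_DR) / interval_months, and this sketch reads DV_X as the raw change in each score over the interval. Note that RF runs high = good, so a negative RF change also signals movement away from the root; the spec does not state a sign convention, so raw changes are used here.

```python
def drift_velocity(prev, curr, months):
    """Protocol G sketch. `prev` and `curr` are (RF, CL, DR) tuples from
    the previous and current audits; returns per-dimension velocities
    (change per month) and the total DV per the spec's table formula,
    assuming DV_X denotes the raw change in each score."""
    if months <= 0:
        raise ValueError("interval must be a positive number of months")
    per_dim = [(c - p) / months for p, c in zip(prev, curr)]
    total = sum(c - p for p, c in zip(prev, curr)) / months
    return per_dim, total
```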
Protocol H — Cross-Framework Interference Detection
Detect when a term drifts differently across frameworks. CFIS scoring. H6 definition divergence.
TERM AND FRAMEWORKS
Shared term under analysis
Enter each framework and the LDS score this term carries within that framework. Minimum 2 frameworks.
STEP H3 — CFIS CALCULATION
CFIS = (LDS_max − LDS_min) × N_frameworks
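The CFIS formula multiplies the spread by the framework count, so the score is unbounded and grows with ecosystem size; a minimal sketch:

```python
def cfis(framework_lds):
    """Protocol H: CFIS = (LDS_max - LDS_min) * N_frameworks.
    `framework_lds` holds the term's LDS score in each framework."""
    if len(framework_lds) < 2:
        raise ValueError("minimum 2 frameworks required")
    return (max(framework_lds) - min(framework_lds)) * len(framework_lds)
```

The same 0.4 spread scores 0.8 across two frameworks but 1.2 across three: wider propagation of a divergent term is scored as worse interference.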
STEP H6 — CROSS-DOCUMENT DEFINITION COHERENCE (v1.5 STF-005)
When the same DEFINED-status term has operational definitions in two or more documents, calculate Definition Divergence [DD]. DD ≥ 0.50 = DEFINITION_CONFLICT requiring immediate resolution.
Operative dimension — Document A definition
Operative dimension — Document B definition
Definition Divergence [DD] — how different are the operative dimensions? 0.0 = identical · 0.5+ = DEFINITION_CONFLICT
0.20
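The H6 decision rule reduces to a threshold check; note the boundary is inclusive, so a DD of exactly 0.50 already flags a conflict, consistent with the Threshold Caution principle:

```python
def definition_conflict(dd):
    """Step H6 (v1.5 STF-005): DD >= 0.50 flags a DEFINITION_CONFLICT
    requiring immediate resolution and canonical definition designation."""
    return dd >= 0.50
```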
RUNNING MEAN Baseline: 77.5% (45) · Session: (0) · Combined: 77.5% (45)
No terms audited this session. The Drift Archive grows only — it never deletes.

Complete an audit in the Audit Engine tab to add entries.
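How the Combined figure is derived is not stated in the spec; a plausible reading, assumed here, is an entry-count-weighted mean of the baseline and session means, which reproduces the header values (77.5% over 45 entries) when the session is empty:

```python
def combined_running_mean(base_mean, base_n, session_mean, session_n):
    """Assumed combination rule: 'Combined' is the entry-count-weighted
    average of the baseline and session means. With an empty session the
    baseline carries through unchanged."""
    total = base_n + session_n
    if total == 0:
        raise ValueError("no entries to combine")
    return (base_mean * base_n + session_mean * session_n) / total, total
```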
LAV v1.5 — VERSION RECORD · Five Structural Findings

LAV v1.5 builds on v1.4 by incorporating five structural findings (STF-001 through STF-005) surfaced when LAV v1.4 was stress-tested against a real audit of the FCL Master v2.6 document. The findings revealed gaps in v1.4's handling of: threshold boundary cases, compound terms built from DEFINED-status constituents, metric-field formula requirements, professional title vocabulary, and cross-document definition coherence. All five gaps are closed in this version.

STRUCTURAL FINDINGS — STF-001 through STF-005
STF-001 · Stage 4 Boundary Rule
Threshold Caution — Boundary Tiebreaker Rule
v1.4 had no specified behavior when a term's LDS fell exactly at a threshold boundary (e.g., exactly 0.40). This created ambiguity about whether the term required an operational definition. v1.5 formalizes: when LDS falls exactly at a boundary, it is assigned to the more cautious (higher) issuance category. LDS ≤ 0.399 = CONFIRMED. LDS exactly 0.400 = DEFINED. LDS exactly 0.600 = CORRECTED. Eliminates all boundary ambiguity. Added to Foundational Principles as "Threshold Caution."
STF-002 · Protocol A Step A2b
DEFINED × DEFINED Compound Detection
v1.4 Protocol A had no mechanism for detecting when both constituents of a compound term already carried DEFINED status. When two DEFINED-status terms are fused, the disambiguation risk compounds non-linearly — the false consensus risk of each term interacts with the other's. v1.5 adds Step A2b: when both constituents carry DEFINED status, apply DR interaction multiplier 1.15 before compound LDS calculation. The premium reflects the compounded false consensus risk of fusing two terms each of which already requires an operational definition.
STF-003 · Protocol I — New in v1.5
Metric Field Formula Registration
v1.4 had no protocol for handling DEFINED-status terms deployed as quantitative metric fields (fields with numerical domains such as [0-1]). A term like "Validity" or "Confidence" can receive a DEFINED issuance, but if it is deployed as a scored metric without a registered formula, the issuance tells you nothing about what the number means. Protocol I requires: when a DEFINED-status term is deployed as a metric field, register a formula specifying (1) the mathematical expression, (2) what the formula measures in one sentence, (3) domain specification, (4) relationship to adjacent metrics. Until registration is complete, entries using that field carry PROVISIONAL status.
STF-004 · Protocol J — New in v1.5
Professional Title Audit Registry
Professional titles in technical fields carry significant Colonization drift — they have been appropriated by credentialing systems and now import credential-expectation as their dominant meaning. v1.4 had no formal protocol for auditing professional titles. v1.5 adds Protocol J: full five-stage audit applied to professional titles used in LAV-governed documents, with two additional assessments — Credential Import Risk [CIR] and Title Accuracy [TA]. Results logged in a dedicated Professional Title Sub-Registry. Application: "AI Certainty Architect" → CONFIRMED. "AI Certainty Engineer" → CORRECTED (retired — credential-expectation import without credential).
STF-005 · Protocol H Step H6
Cross-Document Definition Coherence Check
v1.4 Protocol H detected cross-framework interference at the LDS score level but had no mechanism for checking whether the same DEFINED-status term received different operational definitions in different documents within the same ecosystem. This is a distinct failure mode: two documents might both treat a term at the same LDS, but define it differently — creating silent incoherence that Protocol H's CFIS score would not detect. v1.5 adds Step H6: when the same DEFINED-status term has operational definitions in two or more documents, calculate Definition Divergence [DD]. DD ≥ 0.50 = DEFINITION_CONFLICT requiring immediate resolution and canonical definition designation.
VERSION HISTORY
VERSION · CORE ADDITION · SOURCE
v1.0 · Five-stage core process. LDS formula. Three issuance categories. Self-audit revealing formula error. · Initial build
v1.1 · Formula corrected. Core terms audited. "Framework" → "Vector." "Audit" → "Root Listening." · Self-application
v1.2 · Protocol A: Compound Term. Protocol B: Cross-Language. Protocol C: Unrecoverable Root. RETIRED category added. · Expansion
v1.3 · Protocol D: Neologism Coining. Protocol E: Non-IE. Protocol F: Discourse-Level Drift. Drift Archive formalized. · Expansion
v1.4 · Protocol G: Temporal Drift. Protocol H: Cross-Framework Interference. Active Red Team System (five layers). · Expansion
v1.5 · STF-001 through STF-005. Boundary Tiebreaker. DEFINED×DEFINED detection. Protocol I. Protocol J. Step H6. · Stress-test FCL v2.6
WHAT v1.5 DOES NOT YET CLAIM

Full Non-IE Auditing Capability — Protocol E provides principled engagement with non-IE language families but does not claim full auditing precision. Specialist collaboration is ongoing.

Predictive Drift Modeling — Protocol G measures velocity but does not forecast LDS trajectories. A predictive model is a future boundary.

Automated Ecosystem Mapping — Protocol H Step H1 requires manual ecosystem mapping. Automated detection is a future capability.

Red Team Performance Metrics — the Active Red Team System has no self-measurement. Tracking RT accuracy (true flags vs false positives vs missed drift) is the most significant gap remaining in v1.5.

Protocol I Historical Retroactivity — Protocol I resolves the formula gap going forward. Prior FCL entries using DEFINED-status metric fields without registered formulas carry PROVISIONAL status until retroactive registration.

CEV AUDIT ATTRIBUTION

LAV v1.5 · Linguistic Audit Vector · Complete Production Specification
Author: Sheldon K. Salmon (AI Certainty Architect) · Co-Architect: Claude (Anthropic)
Issued: February 2026 · Supersedes: LAV v1.4
Ten Protocols (A through J). Five Core Stages. Active Red Team System (five layers). Full self-audit record.

TEN FOUNDATIONAL PRINCIPLES
PRINCIPLE 1
Root Fidelity First
What the root originally encoded is ground truth. Current usage is always the witness under examination, never the authority. When the root and current usage conflict, the root wins the ontological argument even if current usage wins the practical one. Both outcomes are documented.
PRINCIPLE 2
Drift Has a Driver
Semantic departure is never passive. Words do not wander. They are driven by institutions, disciplines, commercial interests, power structures, and epistemic fashions. Every audit names the force that applied pressure. Identifying the driver is not optional — it is a required field.
PRINCIPLE 3
Root Listening Over Interrogation
The audit posture is receptive, not adversarial. The term is a witness to be heard carefully. Deep attention to what a term is actually encoding produces more precision than aggressive examination. The original Latin of "audit" — audire, to hear — is restored here as a design principle.
PRINCIPLE 4
Correction is Collaborative
The com- in corrigere is structural. No correction is issued unilaterally. Human pattern recognition and AI excavation together produce issuances. Neither operates alone.
PRINCIPLE 5
Embodiment Anchor
Language begins in the body before it becomes system. The roots we trace were encoded by people pointing at physical realities. Every audit remembers this. Abstraction is always a departure from embodied encoding — that departure must always be named.
PRINCIPLE 6
Silence is a Flag
If a root cannot be recovered, that is data, not a gap to skip. Unrecoverable roots tell us when and under what conditions a term entered the conceptual record.
PRINCIPLE 7
Construction Carries Responsibility
Every term coined under this vector enters the conceptual record with full documentation of its intended encoding. Opacity at the coining stage is the origin of drift. This vector makes opacity structurally impossible.
PRINCIPLE 8
Never Erase the Archive
Retired terms, corrected terms, and suspended discourse are never deleted. The history of how terms damaged thinking is itself data.
PRINCIPLE 9
Think Before Delivering
No output is delivered without Red Team screening. Speed is never a reason to bypass the screening sequence. An output that takes longer and arrives correct serves the mission.
PRINCIPLE 10 (v1.5)
Threshold Caution
When a term scores exactly at a boundary between issuance categories, it is assigned to the more cautious category. Precision at boundaries requires additional definitional work, not permission to skip it.
UNIFIED SCORING ARCHITECTURE
COMPONENT · SYMBOL · RANGE · DIRECTION · FORMULA
Root Fidelity · RF · [0–1] · High = good · Assessed against root evidence
Conceptual Load · CL · [0–1] · High = bad · Assessed against root capacity
Disambiguation Risk · DR · [0–1] · High = bad · Assessed against false consensus probability
Standard LDS = ((1 − RF) + CL + DR) / 3
Fusion Integrity (Protocol A) · FI · [0–1] · High = good · ((1−RF) + CL + DR×1.15[if D×D] + (1−FI)) / 4
Ontological Alignment (Protocol B) · OA · [0–1] · High = good · ((1−RF₁) + (1−RF₂) + CL + DR + (1−OA)) / 5
Drift Velocity (Protocol G) · DV · Per month · Positive = drifting · (DV_RF + DV_CL + DV_DR) / interval_months
Cross-Framework Interference (Protocol H) · CFIS · Unbounded · High = bad · (LDS_max − LDS_min) × N_frameworks
Definition Divergence (Protocol H Step H6) · DD · [0–1] · High = bad · Assessed against operative dimension agreement
Inaugural Coinage (Protocol D) · — · [0–1] · High = bad · (CL + DR) / 2 (RF not scored)
Credential Import Risk (Protocol J) · CIR · [0–1] · High = bad · Disclosed alongside LDS — not added to formula
ISSUANCE THRESHOLDS
LDS SCORE · STATUS · REQUIRED ACTION
≤ 0.399 · CONFIRMED · Root-defensible. Cleared for deployment. Document root encoding.
0.400–0.599 · DEFINED · Mandatory operational definition specifying operative root dimension, excluded dimensions, conditions for revision.
0.600–0.799 · CORRECTED · Root-restored replacement required. Retire original. Archive drift record.
≥ 0.800 · RETIRED · Active conceptual damage. Remove from operational vocabulary. Replacement mandatory via Protocol D.
v1.5 Boundary Rule: LDS exactly at a threshold boundary → assigned to the more cautious (higher) category.
DRIFT TYPE TAXONOMY
TYPE · DEFINITION · DETECTION SIGNAL
Narrowing · Root encoded broader reality; scope was restricted · Older texts use the term in contexts where current usage would substitute a different term
Expansion · Root encoded specific reality; scope extended beyond what root can support · Current usage covers more ground than the root's encoding allows
Inversion · Current usage implies something the root directly contradicts. Most dangerous. · Literal reading of root would produce opposite meaning to current usage
Pejorative Shift · Root was neutral; current usage carries negative valence · Value-laden connotation absent from root or early usage
Ameliorative Shift · Root encoded something charged; current usage has softened it · Comparison reveals lost intensity or difficulty in current usage
Reification · Root encoded a process; current usage treats it as a property. Most common in AI vocabulary. · Can the term precede "is absent" or "is present"? If yes, may be reified.
Colonization · Root appropriated by a discipline, narrowed to serve its needs · Usage outside the colonizing discipline feels "wrong" to specialists
DOCUMENTATION & ATTRIBUTION
LAV v1.5 · Linguistic Audit Vector · Author: Sheldon K. Salmon (AI Certainty Architect) · Co-Architect: Claude (Anthropic) · February 2026 · Supersedes LAV v1.4