Scientific Papers & Technical Foundations

Deterministic AI is built on first principles. We publish our foundational research openly so engineers, enterprises, regulators and partners can independently evaluate the rigor, coherence, and long-term viability of the technology.

The Deterministic Computation Law: A Formal Mathematical Framework for Reproducible Artificial Intelligence

Foundational Research Paper

This paper introduces the Deterministic Computation Law (DCL), a formal mathematical framework establishing the necessary and sufficient structure for reproducible computation. Derived from three minimal axioms (input determinism, representation invariance, and replayable reasoning), the work proves that all reproducible computation must factor through canonicalization followed by deterministic reasoning. DCL provides an architecture-independent foundation for deterministic and auditable artificial intelligence across scientific, regulated, and safety-critical domains.
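
The factor-through structure described above, canonicalization followed by a pure deterministic step, can be sketched in a few lines. This is an illustrative sketch, not the paper's formalism; `canonicalize`, `reason`, and `compute` are hypothetical names introduced for the example.

```python
import json

def canonicalize(record: dict) -> str:
    # Map every representation of the same input to one canonical form:
    # sorted keys, fixed separators, no insignificant whitespace.
    return json.dumps(record, sort_keys=True, separators=(",", ":"))

def reason(canonical: str) -> int:
    # A pure, deterministic function of the canonical form only.
    return sum(ord(c) for c in canonical)

def compute(record: dict) -> int:
    # The DCL factorization: canonicalization, then deterministic reasoning.
    return reason(canonicalize(record))

# Two different representations of the same input yield the same output.
assert compute({"x": 1, "y": 2}) == compute({"y": 2, "x": 1})
```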

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Reproducibility Is a Semantic Property of Computation: Why Determinism and Replay Are Required for Machine Learning

Foundational Research Paper

This paper explains why many machine learning results cannot be exactly reproduced and shows that the problem comes from how computations are executed, not from poor experimentation. It proves that when execution is nondeterministic, exact replay and verification are impossible, and it identifies the precise conditions under which machine learning results can be made repeatable, auditable, and verifiable. Together, these results reframe the reproducibility crisis and show that deterministic computation is essential for scientific, regulated, and safety-critical AI systems.
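
As a minimal illustration of the replay condition the paper identifies (not its proof apparatus), a run whose only randomness comes from a fixed seed can be replayed and verified bit-for-bit; the `run` function and its trace hashing are assumptions made for this example.

```python
import hashlib
import random

def run(seed: int) -> str:
    # A toy "experiment": all randomness flows from one explicit seed,
    # so the execution trace can be replayed and verified exactly.
    rng = random.Random(seed)
    trace = [rng.random() for _ in range(1000)]
    return hashlib.sha256(repr(trace).encode()).hexdigest()

# Replay: identical seed, identical trace hash, hence verifiable.
assert run(42) == run(42)
```

With an unseeded or hardware-scheduled source of randomness, no such equality can be checked after the fact, which is the verification failure the paper formalizes.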

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

The Determinism Requirement for AGI: Why Stable Intelligence, Memory, Identity and Multi-Agent Cognition Require Deterministic Cognitive Substrates

Foundational Research Paper

This work establishes a formal requirement for determinism in artificial general intelligence by modeling AGI as a stateful dynamical system. It proves that reproducible reasoning, stable memory, identity continuity, and multi-agent synchronization necessarily require a deterministic cognitive core. The work provides a foundational theoretical constraint for building auditable, reliable, and scalable intelligent systems.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

A Unified Covariant Framework for Quantum Dynamics, Electromagnetism and Gravity Using the Deterministic Computation Law

Foundational & Cross-Domain Theory

This work formalizes the Deterministic Computation Law (DCL) as a mathematical framework for reproducible computation, bridging deterministic system behavior and probabilistic dynamics within a single model. Although developed using tools from theoretical physics, the results directly inform how deterministic AI systems, stable inference, and reproducible decision pipelines can be designed and analyzed. The paper establishes a law-level foundation for computation where identical canonical inputs provably lead to identical outcomes.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

A Reproducibility Index for Artificial Intelligence Systems: A Quantitative Measure of Determinism, Invariance, and Replayability

Foundational Research Paper

Modern AI systems often fail to be reproducible not by accident, but by design. This paper introduces a Reproducibility Index, grounded in the Deterministic Computation Law, that quantitatively measures determinism, representation invariance, and replayable reasoning. It provides a practical, auditable framework for certifying trustworthy AI in scientific, regulated, and safety-critical domains.
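
The paper's index is grounded in the Deterministic Computation Law; as a loose stand-in to show what a quantitative reproducibility score looks like, one can measure how often repeated runs agree on a modal output. `reproducibility_index` below is a hypothetical name and definition, not the paper's.

```python
from collections import Counter
from typing import Any, Callable

def reproducibility_index(system: Callable[[], Any], runs: int = 20) -> float:
    # Naive stand-in for a reproducibility score: the fraction of runs
    # that agree with the most common (modal) output. 1.0 means fully
    # deterministic under repetition; lower values indicate drift.
    outputs = [repr(system()) for _ in range(runs)]
    _, modal_count = Counter(outputs).most_common(1)[0]
    return modal_count / runs

# A deterministic system scores exactly 1.0.
assert reproducibility_index(lambda: 2 + 2) == 1.0
```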

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

A Mathematical Theory of Creativity in Artificial Intelligence: Deterministic State Evolution, Semantic Novelty and Value Optimization

Foundational Research Paper

This paper introduces a rigorous mathematical framework for creativity in artificial intelligence, defining creativity as the deterministic generation of semantically novel and high-value outputs under explicit constraints. It shows how deterministic state evolution enables reproducible, auditable, and long-horizon creative reasoning that stochastic generative models cannot guarantee. The work provides a foundational theory for building trustworthy, memory-centric creative AI systems aligned with the Deterministic Computation Law.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

A Deterministic Computation Model Suggesting Why Double-Stranded Helical Architectures Are Favored for Long-Horizon Information Preservation

Foundational Research Paper

This paper introduces a deterministic, information-theoretic model explaining why double-stranded, helical architectures naturally emerge in systems that must preserve information reliably over long time horizons. Using the Deterministic Computation Law (DCL), it shows how redundancy, canonicalization, and geometric constraints favor DNA-like structures without invoking biological mechanisms. The work provides a unifying computational lens linking reproducible intelligence, fault-tolerant systems, and the geometry of durable information storage.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Eliminating Litigation Risk: Deterministic AI Proves When Lawsuits Cannot Succeed Under the Law

Foundational Research Paper

Most legal AI predicts outcomes. This paper introduces a deterministic framework that proves when lawsuits cannot succeed under the law, enabling audit-grade, explainable, and defensible legal risk elimination.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Why Probabilistic Artificial Intelligence Cannot Be Audit Grade

Foundational Research Paper

Automated decisions are increasingly judged by courts and regulators, yet most AI systems cannot reliably replay and verify a single decision after the fact. This paper proves, using a simple and technology-neutral principle, that any AI system whose outcome remains probabilistic once the evidentiary record is fixed cannot meet audit-grade due-process standards. It draws a clear line between statistical accuracy and legal accountability, showing when and why determinism is not optional but required.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

The Structural Impossibility of Singular Intelligence: A Fundamental Limit on Inductive Compression and AGI

Foundational Research Paper

Why doesn’t intelligence converge into one perfect super-mind? This paper shows why it can’t. Any system that must survive uncertainty, mistakes and change fails when decisions come from one place. Long-lasting intelligence must be plural by design. It’s a law, not a choice.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Why Training Pipelines Prevent Reproducibility in Modern AI Systems: Structural Limits of Stochastic Training

Foundational Research Paper

Modern AI systems often cannot be exactly replayed or audited, even with identical data and code. This paper shows that irreproducibility is not a tooling flaw but a structural consequence of today’s stochastic training pipelines. It provides a rigorous mathematical framework explaining why exact training replay fails and what true AI accountability requires.
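
One concrete mechanism consistent with this argument (an illustration, not drawn from the paper itself) is that floating-point addition is not associative, so reductions whose order is scheduled nondeterministically by parallel hardware can diverge even with identical data, code, and seeds:

```python
# Floating-point addition is not associative: regrouping the same three
# terms changes the rounded result. Parallel gradient accumulation that
# sums in a hardware-scheduled order therefore need not replay exactly.
a, b, c = 0.1, 0.2, 0.3
assert (a + b) + c != a + (b + c)
```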

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Black Holes as Deterministic Canonicalization Computers Under the Deterministic Computation Law (DCL)

Foundational & Cross-Domain Theory

This paper uses black holes, among the best-studied physical systems, to explain how the Deterministic Computation Law (DCL) turns complex inputs into stable, repeatable results without losing information. It shows that determinism, reproducibility, and consistent outputs are not just engineering choices, but essential features of reliable and auditable information systems. The paper is conceptual in nature and helps explain the theoretical foundations behind deterministic, regulator-ready AI and computing platforms.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

The Dynamical Casimir Effect as a Boundary-Driven Reproducibility Transition Under Deterministic Computation Law (DCL)

Foundational Research Paper

This paper explains the Dynamical Casimir Effect as a process in which changing boundaries render hidden quantum fluctuations stable and measurable. The energy always comes from the external drive, while the vacuum only defines which states are possible. The Deterministic Computation Law reframes this as a reproducibility shift: something becomes “real” when it can be measured consistently under fixed conditions.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Aviation Safety and Connectivity: Deterministic Network Behavior as a System Requirement

Foundational Research Paper

This paper examines why aviation connectivity systems increasingly require deterministic network behavior as they transition from best-effort internet to safety-adjacent infrastructure. It introduces a systems-level framework for understanding reproducibility, auditability, and replayable network behavior without prescribing specific implementations.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Information Conservation Under Deterministic Computation: A Canonical Invariance Theorem for Representation-Independent Information

Foundational Research Paper

This paper provides a theoretical foundation for several advanced computational and information-theoretic applications. By moving from a probabilistic view of information to a deterministic, representation-independent one, it shows how identity can be verified and informational value conserved across computations.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Deterministic Computation as a Prerequisite for Certifiable Quantum Navigation

Foundational Research Paper

This paper shows why quantum sensors alone are insufficient for safety-critical navigation. It proves that only deterministic, replayable computation can make GPS-denied navigation auditable, certifiable, and legally defensible, independent of sensor precision. The work establishes determinism as a foundational requirement for deployable quantum navigation and autonomous systems.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Advancing AlphaFold-Class Systems Beyond Accuracy: Deterministic Computation for Reproducible Scientific AI

Foundational Research Paper

AlphaFold has transformed protein structure prediction through near-experimental accuracy, but accuracy alone is not enough for scientific or regulatory reliability. This paper introduces deterministic computation as the missing foundation for reproducible, auditable AI, showing how AlphaFold-class systems can produce stable scientific evidence without altering predictive models. It establishes determinism as an independent and essential axis of progress in trustworthy scientific AI.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Unifying Shannon Information Theory and Turing Computation Through Deterministic Representation

Foundational Research Paper

This paper provides a foundational framework unifying Shannon information theory and Turing computation through deterministic, reproducible state representation. It clarifies the structural conditions required for probabilistic information to be reliably used within deterministic computation. The work establishes a theoretical basis for reproducibility, auditability, and stability in modern computing and AI systems.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Deterministic Cognition in Artificial General Intelligence: Probability-1 Coherence, Identity Continuity and Stable Cognitive Trajectories

Foundational Research Paper

This paper introduces a foundational law for Artificial General Intelligence that explains why stable, reproducible, and interpretable cognition requires deterministic internal evolution. It shows that core AGI properties, such as identity continuity, consistent reasoning, and coordinated multi-agent behavior, can only be guaranteed when identical histories always produce identical internal states. The work provides a rigorous theoretical basis for building AGI systems that are auditable, reliable, and stable over long time horizons.
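
The condition that identical histories produce identical internal states can be sketched as a pure state-evolution function; `evolve` and `state_after` are illustrative names for this example, not the paper's construction.

```python
import hashlib

def evolve(state: bytes, event: str) -> bytes:
    # Deterministic internal evolution: the next state is a pure function
    # of (current state, event), with no hidden randomness.
    return hashlib.sha256(state + event.encode()).digest()

def state_after(history: list[str]) -> bytes:
    # Fold the full event history through the evolution function.
    state = b"init"
    for event in history:
        state = evolve(state, event)
    return state

# Two agents with identical histories end in identical internal states,
# the condition placed here on identity continuity and synchronization.
h = ["observe:A", "act:B", "observe:C"]
assert state_after(h) == state_after(list(h))
```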

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Why Current AI Is Not Alive: A Formal Framework for Defining Artificial Life

Foundational Research Paper

This paper presents a formal, system-independent framework that defines life as autonomous maintenance of identity over time. It rigorously distinguishes life from intelligence, explains why current AI systems are not alive despite advanced capabilities, and outlines precise criteria under which artificial life could be meaningfully defined.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Proportional Justice in AI: A Mathematical Framework for Geometric Fairness, Stateful Decision-Making and Verifiable Accountability

Foundational Research Paper

This paper shows mathematically that many real-world decisions require proportional fairness, where outcomes scale with responsibility, risk, and past behavior. It proves that stateless AI systems cannot deliver this kind of fairness over time, and that deterministic memory is required for consistent, auditable decisions. The result is a practical foundation for building AI systems that are fair, explainable, and governable in high-stakes settings.
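
As a toy sketch of the proportionality claim (the paper's actual framework is geometric and considerably richer), an outcome can scale with responsibility and with a deterministic record of past behavior, an input a stateless system simply does not have; `sanction` and its parameters are hypothetical.

```python
def sanction(base: float, responsibility: float, prior_violations: int) -> float:
    # Proportional outcome: scales with the responsibility share and with
    # the recorded history. A stateless system, having no prior_violations
    # input, cannot make the outcome depend on past behavior at all.
    return base * responsibility * (1 + prior_violations)

first = sanction(100.0, 0.5, prior_violations=0)   # 50.0
repeat = sanction(100.0, 0.5, prior_violations=2)  # 150.0
assert repeat > first
```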

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Identity Is Not Evidence: Authority-Based Failure in Human and AI Decision Systems

Foundational Research Paper

Most decision systems, human or AI, quietly rely on authority and reputation instead of facts, leading to inconsistency and hidden risk. This paper proves why those shortcuts fail as systems scale and environments change. It presents a practical, deterministic framework for making decisions that are consistent, verifiable, and audit-ready by design.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Protein Folding as Deterministic Computation: A Natural Proof-of-Existence for the Deterministic Computation Law

Foundational Research Paper

This paper explains how protein folding follows the Deterministic Computation Law, showing that even though molecular motion is noisy, the final folded structure is stable and repeatable. It demonstrates that different folding paths can still lead to the same outcome when biological processes are viewed through deterministic, invariant representations. The work connects protein folding, information theory, and deterministic computation to explain why biological systems can be reliable and auditable.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

Compliance-Native AI System Architecture Under the EU AI Act: Deterministic AI

Policy & Regulatory Analysis

This paper explains how deterministic AI can make it easier to build AI systems that comply with the EU AI Act. It shows, in clear and practical terms, how reproducible and predictable AI behavior supports auditing, oversight and ongoing regulatory compliance. The paper serves as a straightforward guide for regulators, enterprises and engineers who want AI systems that are trustworthy by design and compliant with the EU AI Act.

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.

US Federal Reserve Data Reveals a New Way to Fund the AI Economy: The 0.6% Solution

Policy & Regulatory Analysis

U.S. Federal Reserve payments data reveals a simple, scalable way to fund universal basic income and core public services in an AI-driven economy. It demonstrates that a uniform 0.6% tax on settlement-level electronic payments can raise roughly $7 trillion per year without relying on labor, income, or consumption taxes. The result is a transparent, automation-aligned fiscal model designed for an economy where AI increasingly drives production and value creation.
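
The arithmetic implied by the abstract can be checked directly; note the settlement-volume figure below is back-derived from the stated rate and revenue, not quoted from Federal Reserve tables.

```python
# Back-of-envelope check of the stated figures: a 0.6% tax raising
# roughly $7 trillion per year implies the annual settlement base.
tax_rate = 0.006
target_revenue = 7e12                      # ~$7 trillion per year
implied_volume = target_revenue / tax_rate
print(f"Implied annual settlement volume: ${implied_volume:,.0f}")
# -> roughly $1.17 quadrillion in settlement-level electronic payments
```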

Published on Zenodo, CERN’s open research archive, and assigned a permanent DOI.