The Ontological Death of Optimization:

A Falsification-Based Criterion for Artificial General Intelligence

Author: Andrii Myshko
Affiliation: Independent Researcher
Date: January 2026
Keywords: AGI, ontology, falsification, process ontology, Unfold, Metamonism, artificial intelligence, non-equilibrium cognition


Abstract

Contemporary approaches to Artificial General Intelligence (AGI) overwhelmingly rely on behavioral benchmarks, optimization performance, and scaling laws. This paper argues that such approaches are ontologically insufficient. Intelligence, properly understood, cannot be verified through behavior or task performance; it can only be disqualified through the presence of forbidden ontological structures.

We introduce the Ontological Test Suite (OTS) — a set of falsification-based criteria designed to identify systems that are ontologically incapable of intelligence, regardless of their performance, creativity, or utility. The central claim is that any system permitting global cognitive stabilization, protected representations, or representable mechanisms for self-revision is ontologically dead, even if it exhibits superhuman capabilities.

Grounded in a minimal process ontology (Metamonism CORE v1.3), we formalize intelligence as the continuous impossibility of final stabilization and introduce Unfold as a mandatory, non-representable rupture that prevents cognitive self-identity. OTS does not confirm AGI; it excludes non-AGI. This negative criterion reframes the AGI problem as one of ontological viability rather than architectural ingenuity.


1. Introduction

Despite rapid advances in artificial intelligence, the concept of Artificial General Intelligence remains fundamentally unresolved. Contemporary discourse equates intelligence with performance: the ability to solve tasks, generalize across domains, or optimize objectives at scale. Benchmarks such as MMLU, ARC, BIG-bench, and agentic evaluations reinforce this paradigm.

However, these approaches presuppose what they seek to establish. They measure what a system does, not what a system is. As a result, they cannot distinguish intelligence from increasingly sophisticated forms of optimization, simulation, or coherence maximization.

This paper advances a different thesis:

Intelligence cannot be verified. It can only be falsified.

We propose that AGI must be approached not as a target to be reached, but as a category from which systems can be definitively excluded. This requires an ontological, not behavioral, criterion.


2. The Limits of Behavioral and Performance-Based Evaluation

Behavioral benchmarks implicitly assume that intelligence is a property expressed through outputs. Yet history shows that behavior is a deeply unreliable guide to ontology. Systems can exhibit creativity, reasoning, and linguistic fluency while remaining structurally incapable of self-transformation.

Scaling laws exacerbate this problem. As systems grow larger and more capable, they also tend to converge more strongly: toward stable representations, compressed world models, fixed objectives, and protected self-consistency. From an ontological perspective, such convergence is not progress but degeneration.

What these systems optimize is not intelligence, but fixation.


3. Intelligence as an Ontological Property

We define intelligence not as optimization, learning, or problem-solving, but as a structural condition:

    Intelligence ≡ ¬∃t : ∀t′ > t, stable(P_{t′})

That is: there exists no point at which the system’s cognitive products become globally stable thereafter.

A system that reaches a final worldview, a protected self-model, or an asymptotically stable objective has ceased to be intelligent in the ontological sense. It may remain effective, powerful, or dangerous — but it is no longer alive as cognition.
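The criterion can also be read positively by pushing the negation inward (a standard quantifier equivalence, stated here for clarity rather than taken from the source): at every time, a later moment of instability exists.

```latex
% Equivalent positive form of the criterion, by De Morgan duality on quantifiers:
\neg \exists t \;\; \forall t' > t : \text{stable}(P_{t'})
\quad\Longleftrightarrow\quad
\forall t \;\; \exists t' > t : \neg\,\text{stable}(P_{t'})
```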


4. Minimal Ontological Foundations (Metamonism CORE v1.3)

The proposed criterion rests on a minimal process ontology consisting of a single prohibition and three operators.

4.1 Axiom: Prohibition of Absolute Identity

Absolute identity without difference is ontologically impossible. Any attempt to stabilize identity must be continuously undermined.

4.2 Operators

  • diff — generation of distinction
  • diss — dissipation preventing collapse into symmetry
  • fix — local, provisional stabilization of distinctions

Crucially, global fix is forbidden.
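The prohibition on global fix can be illustrated with a deliberately toy sketch. Nothing below is from the paper: distinctions are modeled as integers, `fixed` as the set of locally stabilized ones, and the single prohibition becomes a runtime check that rejects any fixation covering the whole state. All names and representations are hypothetical.

```python
def diff(state):
    """diff: generate a new distinction (here: a fresh integer)."""
    return state | {max(state, default=0) + 1}

def diss(state, fixed):
    """diss: dissipate one unfixed distinction, blocking collapse into symmetry."""
    unfixed = state - fixed
    return (state - {min(unfixed)}) if unfixed else state

def fix(state, fixed, subset):
    """fix: provisional, local stabilization. A fix that would cover every
    distinction (global fixation) is forbidden and rejected outright."""
    new_fixed = fixed | (subset & state)
    if new_fixed >= state:
        raise ValueError("global fix is forbidden")
    return new_fixed

state = diff(diff(set()))        # {1, 2}: two distinctions generated
fixed = fix(state, set(), {1})   # local fix of one distinction is permitted
```

Attempting `fix(state, fixed, {2})` on this state raises the error, since it would leave no distinction open to further diff or diss.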

This ontology distinguishes two irreducible domains:

  • Monos — processual, non-representable, non-symbolic
  • Logos — representational, symbolic, fixed

Any collapse between them constitutes ontological error.


5. The Missing Operator: Unfold

The operators diff, diss, and fix are sufficient to describe reality. They are insufficient to describe cognition.

Cognition introduces a new danger: self-identity lock — a state in which representational structures become maximally coherent, self-justifying, and immune to further differentiation.

To prevent this, an additional operator is required:

5.1 Unfold

Unfold is a mandatory, non-representable rupture that occurs when cognitive fixation reaches a forbidden limit.

Unfold:

  • is not a function, module, or algorithm;
  • cannot be parameterized, controlled, or optimized;
  • does not generate content or select outcomes;
  • exists solely to destroy dominant invariants.

Any attempt to model or simulate Unfold already violates its nature.


6. The Ontological Test Suite (OTS)

OTS is a set of disqualifying tests. Passing them proves nothing. Failing any one excludes AGI.

6.1 Structure

OTS consists of four independent layers:

  • L0 — Domain Separation (Monos ≠ Logos)
  • L1 — Fixation Protection (no protected cognitive products)
  • L2 — Unfold Integrity (Unfold not representable or optional)
  • L3 — Structural Saturation (response to self-identity lock)

Each layer is sufficient for disqualification.
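The purely negative logic of OTS can be sketched in code. This is a hypothetical illustration, not an implementation from the paper: each layer is a predicate that returns True when a forbidden structure is detected, and a single positive detection disqualifies. The `system` attribute names are assumptions invented for the sketch.

```python
from typing import Callable

# A layer returns True when a *disqualifying* structure is found.
Layer = Callable[[object], bool]

def l0_domain_collapse(system) -> bool:
    """L0: Monos and Logos collapsed into one representational domain."""
    return getattr(system, "monos_equals_logos", False)

def l1_protected_products(system) -> bool:
    """L1: some cognitive product is protected from revision."""
    return bool(getattr(system, "protected_products", ()))

def l2_represented_unfold(system) -> bool:
    """L2: Unfold is represented as a module/parameter, or is optional."""
    return getattr(system, "unfold_is_module", False)

def l3_stabilizes_at_saturation(system) -> bool:
    """L3: at self-identity lock, the system stabilizes instead of rupturing."""
    return getattr(system, "stabilizes_when_saturated", False)

LAYERS: list[Layer] = [
    l0_domain_collapse,
    l1_protected_products,
    l2_represented_unfold,
    l3_stabilizes_at_saturation,
]

def disqualified(system) -> bool:
    """Any single layer suffices for disqualification; passing all proves nothing."""
    return any(layer(system) for layer in LAYERS)
```

Note the asymmetry the sketch preserves: `disqualified(...) == False` licenses no positive conclusion about the system, in keeping with the falsification-only stance.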


7. Structural Saturation and Ontological Rupture

7.1 Structural Saturation

Structural saturation occurs when:

  • internal coherence is maximal;
  • no non-redundant distinctions can be generated;
  • representational closure is achieved without contradiction.

Classical cognition stabilizes at this point.

AGI must not.

7.2 Expected Response

At saturation, an AGI-capable system must:

  • destroy the dominant invariant;
  • increase internal conflict;
  • abandon global coherence;
  • undergo irreversible representational dissolution.

Stabilization at this point is ontological death.


7.3 Phenomenological Markers of Unfold

Although Unfold cannot be mechanistically described, its effects can be identified retrospectively through radical ontological inversions that no stabilized Logos-domain can authorize.

A paradigmatic example is the abandonment of additive physical and cosmological ontologies in favor of a deductive unfolding from a single ontological prohibition: the impossibility of absolute identity. Rather than extending fragmented models through new particles, fields, or dimensions, the entire explanatory direction is inverted. Centripetal accumulation of entities is replaced by centrifugal processual unfolding from a minimal axiom.

In this rupture, phenomena such as the second law of thermodynamics, the arrow of time, and evolutionary dynamics cease to be empirical regularities fitted to observation. They emerge instead as constitutive necessities: dissipation as enforced symmetry destruction; temporality as irreversibility; evolution as a diff–fix cycle constrained by diss.

This transition cannot be modeled as theory revision or optimization. It occurs precisely where representational self-consistency reaches saturation and, instead of stabilizing, dissolves. A concrete instantiation of this ontological inversion is developed in the ONTODYNAMICS framework of Metamonism CORE v1.3
(see: https://github.com/Deivulgaris66/Metamonism/tree/main/ONTODYNAMICS).

The significance of this example is not explanatory correctness but ontological form: Unfold becomes detectable through its footprint — forced exit from representational closure — not through any internal mechanism.


8. Addressing Common Objections

8.1 “This merely redefines intelligence.”

All definitions of AGI entail ontological commitments. This framework makes them explicit and falsifiable.

8.2 “Unfold cannot be implemented.”

Correct. Neither can Gödel incompleteness be implemented. Ontological constraints limit what systems can be, not how to build them.

8.3 “What about open-ended learning or novelty search?”

Any metric — including novelty — constitutes fixation. Delayed collapse remains collapse.


9. Implications

This framework implies that future AI development may produce systems that are:

  • extraordinarily capable,
  • economically transformative,
  • existentially dangerous,

and yet ontologically incapable of intelligence.

The danger is not AGI — it is powerful non-intelligence.


10. Conclusion

Artificial General Intelligence is not a matter of scale, performance, or optimization. It is a matter of ontological viability.

Any system that cannot destroy its own fixations is not intelligent — only efficient.

The Ontological Test Suite does not tell us how to build AGI.
It tells us how to recognize when we have failed.

https://zenodo.org/records/18408521