// AOP Adaptive Output Protocol

Engineering
the Margin.

The first protocol that verifies AI-generated code produces identical results across models, languages, and runtimes.

42,579
words
153
tokens
99.8% compression
Time
The gap between task start and completion.
Engineering the Hour.
Error
The distance between as-built and as-designed.
Calculating the Tolerance.
Cognition
The mental space freed from menial decision-making.
Architecting Mental Surplus.
Profit
The delta between operational cost and output value.
Structuring the Spread.
// 001 — Flagship Product

16 Modes. Any Model. Patent Filed.

AOP is a behavioral protocol that gives any LLM a consistent mode system — switch between EXPLORE, BUILD, HARDEN, CHAOS, and 12 more on command. Same interface. Predictable output. Every major model, validated.

Unified Reasoning Framework — Adaptive Output Protocol
16
Operational modes — 8 core, 8 experimental
554+
API calls in empirical study
13
Models validated — Claude, Gemini, GPT, Ollama locals
✓ Patent
Provisional filed March 2026
📄 Published Preprint
The Mechanical Translation Hypothesis
Zenodo · March 2026 · DOI: 10.5281/zenodo.19024296
Read Paper →
📄 Published Preprint
Impossibility Ring: Formal Proof of Closed Constraint Topology
Zenodo · March 2026 · DOI: 10.5281/zenodo.19076419
Read Paper →
01

Mode-Switched Output

Prefix any prompt with a mode tag. The model switches behavior on command — exploratory, structured, adversarial, generative. No fine-tuning required.
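As a minimal sketch of what mode-tag prefixing could look like in practice (the `[MODE]` bracket syntax and the `tag_prompt` helper are illustrative assumptions, not the published AOP notation):

```python
# Hypothetical sketch of AOP-style mode tagging. The protocol text itself
# would live in the system prompt; each user prompt is prefixed with a mode.
# Only 4 of the 16 modes are listed here, and the [MODE] syntax is assumed.

AOP_MODES = {"EXPLORE", "BUILD", "HARDEN", "CHAOS"}

def tag_prompt(mode: str, prompt: str) -> str:
    """Prefix a prompt with a mode tag so the model switches behavior."""
    if mode not in AOP_MODES:
        raise ValueError(f"unknown mode: {mode}")
    return f"[{mode}] {prompt}"

print(tag_prompt("BUILD", "Write a CSV parser."))
# → [BUILD] Write a CSV parser.
```

The same tagged string works unchanged whether it is sent to a hosted API or a local model, which is the portability claim above.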

02

Model-Agnostic by Design

AOP runs identically on Claude, Gemini, GPT, and local models. Switch providers without rewriting your prompts. Your protocol investment is permanent.

03

Formally Verified

The VOID Parity Gate fuzzes 10K+ inputs to prove behavioral equivalence across languages and runtimes. AOP isn’t just a pattern — it’s a provable system.
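A parity gate of this kind can be sketched in a few lines: feed two implementations the same random inputs and flag the first divergence. This is a simplified single-language illustration, not the VOID gate itself, which additionally spans languages and runtimes:

```python
import random

def parity_fuzz(impl_a, impl_b, gen_input, trials=10_000, seed=0):
    """Fuzz two implementations with identical random inputs and
    return the first divergence found, or None if they agree on
    every trial (a toy single-runtime parity check)."""
    rng = random.Random(seed)
    for i in range(trials):
        x = gen_input(rng)
        a, b = impl_a(x), impl_b(x)
        if a != b:
            return {"trial": i, "input": x, "a": a, "b": b}
    return None  # behavioral equivalence across all trials

# Example: a branched abs() and a branchless arithmetic abs() should agree.
branched = lambda x: x if x >= 0 else -x
branchless = lambda x: (x ^ (x >> 63)) - (x >> 63)  # valid for |x| < 2**63

assert parity_fuzz(branched, branchless,
                   lambda rng: rng.randint(-2**62, 2**62)) is None
```

The branched/branchless pair above mirrors the "Verification Blind Spots" finding further down the page: the two are behaviorally identical, yet only one exposes its constraint as control flow.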

// 001.5 — CHAOS Lineage

130-Generation Mode Discovery

Interactive visualization of the CHAOS/EVOLVE mode discovery run. 130 generations, 78 mode survivors, 60% survival rate.

SideLineLabs · Adaptive Output Protocol

CHAOS LINEAGE

v3.1  ·  130-Generation Mode Discovery  ·  2026-03-11
130
Total Gens
78
Modes
60%
Survival Rate
3
Sessions
Engineering the Margin.  ·  sidelinelabs.org
// 002 — Get the Protocol

Get AOP.

The complete developer guide — spec, examples, and tooling for embedding AOP into any model or agent pipeline.

DEVOLVED0
$99
✓ The core protocol — 12 lines of formal notation
✓ Drop into any system prompt, works immediately
✓ Compatible with ChatGPT, Claude, Gemini, Llama, Mistral, Qwen
✓ No dependencies. No configuration.
Get devolved0 →
Most Popular
DEVOLVED0 PRO
$499
✓ devolved1 — the full protocol spec
✓ 1-hour call with Blake — your workflow, your stack
✓ Newest version delivered on every version bump
✓ Direct access for questions
Get Pro →
FOUNDER EDITION
$999
✓ Lifetime license — every edition, forever
✓ 2-hour consultation — deep integration session
✓ Founder status — direct line to the lab
✓ All future versions included automatically
Contact for Founder Edition →
// Free Preview
If you can decode it, you don’t need the call.
The voided math version — pure formal notation, no guide, no hand-holding.
//devolved →
// 003 — Core Principles

Design Principles

AOP and every tool we ship follows four rules. If a solution doesn’t measurably expand the user’s margin — in time, accuracy, or autonomy — it doesn’t leave the labs.

01

Zero Configuration

Prefix a prompt, get a mode. No fine-tuning, no API wrappers, no infrastructure. The protocol works the moment you paste it.

→ Instant adoption, any model
02

Portable by Default

Switch from Claude to Gemini to a local Ollama model. Your prompts, modes, and workflows transfer unchanged.

→ No vendor lock-in
03

Incremental Gain

Small daily improvements in prompt structure compound into measurably better output. Each mode sharpens the next.

→ Compounding returns on prompts
04

Provable Output

The VOID Parity Gate fuzzes 10K+ inputs across languages and runtimes. Every behavioral claim is backed by empirical data.

→ Verified, not vibes
// 004 — AI Philosophy

AI Is a Tool, Not an Authority.

If it is not practical, transparent, and helpful — it does not ship.

Practical Value Over Novelty

We don’t add AI features because they’re trendy. We add them only when they solve a real, repeatable problem.

Transparency Is Non-Negotiable

When AI is involved, users know when it’s being used, what it’s doing, and why. No black-box behavior.

No Dependency by Design

Our tools make users more capable over time, not dependent. Automation follows understanding.

Automation Is Earned

We don’t automate what isn’t understood. Automation comes after clarity, not before it.

Safety in Real-World Environments

Our software is built for environments where accuracy matters. AI supports safe, compliant work — it does not bypass process or professional standards.

Long-Term Responsibility

Technology changes quickly. Responsibility does not. We prioritize durable, understandable systems over short-term trends.

SideLineLabs uses AI to support people — not replace them — and builds technology with long-term responsibility in mind.

// 005 — Research

Published & Pending Research.

Empirical findings from building AOP — documenting novel behaviors, failure modes, and verification methods in large language model systems.

Mechanical Translation Hypothesis: BAE Fidelity as a Proxy for Instruction-Following Quality in LLMs
Models that succeed at behavioral-algebraic expression translation treat it as mechanical conversion, not creative interpretation. VOID parity correlates with instruction-following fidelity across 10 models and 5 languages.
Published · Zenodo →
Cross-Model Collaboration Produces Superior Prompt Specifications: LLM-to-LLM Iteration Outperforms Human and Single-Model Self-Revision
Iterative spec revision between two frontier models improved compliance from 50% to 100% across a 35-revision longitudinal study. Single-model self-revision plateaued; cross-model revision did not.
Published · Zenodo →
Global Attractors in LLM Creative Generation: Mode Collapse in Domain Selection and a One-Line Fix
Frontier LLMs converge on a single domain ~85% of the time when asked to select freely from "any field of human knowledge." A single anti-repetition constraint eliminates the collapse entirely across two independent models.
Published · Zenodo →
Structure Is Instruction: How Formatting, Proximity, and Examples Determine LLM System-Prompt Compliance Independent of Content
Formatting, example placement, and instruction proximity drive compliance more than semantic content. Adding abstract rules produced zero measured gain; restoring a single dropped example produced the largest single improvement observed.
Published · Zenodo →
System Prompt Leakage Resistance: Quantifying Information Extraction from LLM System Prompts Under Naive Adversarial Attack
Identifier obfuscation reduces system prompt information leakage by 87–100% across 4 models and 11 adversarial prompts. No model successfully reverse-maps obfuscated identifiers. Social engineering triggers up to 2,333 characters of verbatim regurgitation from unprotected prompts.
Published · Zenodo →
Verification Blind Spots: When Branchless Code Defeats Static Analysis
AST-pattern-based static analysis tools miss safety-critical constraints when program semantics shift from control flow to arithmetic. Three branchless benchmarks are invisible to constraint scanners despite Z3-proven equivalence to branched counterparts. Formalizes the control-flow assumption and proposes dual-sort SMT mitigation.
Published · Zenodo →
Impossibility Ring: Formal Proof of Closed Constraint Topology
Four fundamental computational limits — provability (Gödel), symmetry-breaking (Galois), compression (Kolmogorov), and self-reference (strange loops) — form a closed topological ring. Z3 verifies closure, symmetry, and node irreducibility. 66-generation CHAOS enrichment validates structural properties empirically. The ring is an inescapable constraint on any system that classifies, generates, and verifies its own output.
Published · Zenodo →
Diversity Pressure at the Capability Ceiling: No Measurable Benefit on Ceiling-Difficulty Targets
A 50-run controlled experiment showing evolutionary diversity pressure produces no measurable benefit on ceiling-difficulty targets. Both evolutionary and single-shot generation fail at comparable rates. Diversity pressure is calibrated for the boundary zone — applying it beyond incurs overhead with no correctness return. A negative result published because it refines the conditions under which diversity forcing works.
Published · Zenodo →
// Personality Quiz

Find Your AI Personality
// 006 — Connect

Let’s Talk.

SideLineLabs is in active development on AOP. If you work in enterprise AI, AI safety research, or regulated industries — we want to hear from you.