FLOPs: The Economics of Open Intelligence

By 0xbelgianwaffles, FLOpsInc
July 30, 2025

Markets don't run on code or slogans. They run on trust—trust that the thing you're buying means what it claims, that rules won't change mid‑game, that the people steering the ship are aligned with the people fueling it.

Over the past two years, crypto's loudest projects learned the hard way that attention without production decays fast. Tokens launched, unlock charts bled, and "infrastructure" that never touched real users turned out to be little more than extraction machines in prettier clothes.

Meanwhile, AI became the world's most valuable commodity—and its most centralized. Compute, data, and model weights are bottlenecked behind a handful of clouds and labs. When power centralizes, censorship is not a risk; it's a reflex. Regulations aimed at "safety" increasingly hard‑code what models may say, ship, or share. In that world, liquidity pools and memes won't save us. We need working systems that route capital to productive compute and push intelligence out to the edges.

Protocol Thesis

FLOPs exists for exactly this moment: a simple, two‑track primitive that turns capital into decentralized training power today, and community attention into a governance flywheel tomorrow. Credits buy compute and return mined DeAI rewards; the token steers the treasury, the brand, and the long arc of value capture.

Why Decentralized Training (DeAI) Is Inevitable

Centralization → Chokepoints → Policy control. The EU's new GPAI Code of Practice and guidance under the AI Act formalize obligations for general‑purpose models (documentation, copyright reporting, even systemic‑risk controls), with compliance beginning August 2, 2025 for key classes of models. That is governance at the model layer, exercised through a handful of large providers and deployment venues. In the U.S., Executive Order 14110 and the Commerce reporting rules drafted under it pushed in the same direction: compute thresholds and disclosure as levers over capability.

The antidote isn't ignoring safety. It's architecting systems that are resilient to capture, much like the internet itself. That means training and post‑training that can proceed over heterogeneous, globally distributed hardware with orders‑of‑magnitude lower communication, on permissionless rails that any community can extend.

  • DisTrO (Nous Research): ~857× reduction in inter‑node bandwidth during 1.2B‑parameter pre‑training while matching the loss of AdamW + all‑reduce.
  • NoLoCo: eliminates all‑reduce entirely via pairwise weight averaging with a Nesterov‑style outer step (sketched in code below).
  • Prime Intellect: trained INTELLECT‑1 across continents with 83–96% utilization and ~400× lower comms than vanilla data‑parallel.
  • Gensyn: a compute protocol with off‑chain verification games and on‑chain settlement.
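
To make the NoLoCo‑style pattern concrete, here is a minimal toy sketch. It is our own rendering, not the authors' implementation: the worker count, momentum coefficient, and fake inner update are all invented. What it shows is the shape of the idea: local steps, random pairwise weight averaging instead of a global all‑reduce, and a Nesterov‑style outer momentum step on each worker's drift since the last sync.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WORKERS, DIM = 8, 4     # illustrative sizes, not values from the paper
BETA = 0.9                # outer-momentum coefficient (assumed value)

weights  = [rng.normal(size=DIM) for _ in range(N_WORKERS)]
momentum = [np.zeros(DIM) for _ in range(N_WORKERS)]

def inner_steps(w):
    """Stand-in for a burst of local SGD steps on worker-private data."""
    return w - 0.01 * rng.normal(size=w.shape)

def sync_round(weights, momentum):
    snapshot = [w.copy() for w in weights]          # state at last sync
    weights = [inner_steps(w) for w in weights]     # local progress
    # Gossip step: pair workers at random; each pair averages weights,
    # so no worker ever waits on a collective all-reduce.
    order = rng.permutation(N_WORKERS)
    for a, b in zip(order[0::2], order[1::2]):
        avg = 0.5 * (weights[a] + weights[b])
        weights[a], weights[b] = avg, avg.copy()
    # Nesterov-style outer step on each worker's drift since the snapshot.
    for i in range(N_WORKERS):
        delta = weights[i] - snapshot[i]
        momentum[i] = BETA * momentum[i] + delta
        weights[i] = snapshot[i] + delta + BETA * momentum[i]
    return weights, momentum

for _ in range(10):
    weights, momentum = sync_round(weights, momentum)
```

The point of the pairwise topology is that each sync round costs one peer exchange per worker, independent of cluster size, which is why churn and slow links hurt far less than they do under all‑reduce.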

Centralization Is Censorship by Other Means

When inference and training live in a few stacks, policy meets product overnight. The EU's AI Act guidance, U.S. reporting rules, and the rapid cadence of model‑behavior rulebooks for "acceptable speech" prove the point: control the hosts, and you control the outputs. Whether you call it "moderation," "risk mitigation," or "model spec," the mechanism is the same: centralized chokepoints.

Decentralized training doesn't abolish safety—it distributes it. Communities can form their own governance and proof regimes; regulators can focus on provenance and use rather than pre‑clearing every weight update; and innovation happens in the open, not at the grace of a platform.

The Economics of DeAI (and Why FLOPs Matters)

Compute is the commodity; coordination is the product. Across every DeAI effort, the same economics recur:

  • Communication is the tax. Techniques like DisTrO, NoLoCo, and DiLoCo‑style protocols reduce bandwidth from gigabytes/step to megabytes or less, flipping the constraint from "co‑located H100s over NVLink" to "stable consumer internet" (see the back‑of‑envelope numbers after this list).
  • Verification is the contract. Work‑proofs (Gensyn) and trustless rollout verification (Prime) generalize Bitcoin's insight: pay for correct work, not promises.
  • Heterogeneity is an advantage, not a bug. Systems that treat churn, mixed GPUs, and varied bandwidth as first‑class capture more supply and lower the clearing price for useful compute.
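
Those orders of magnitude are easy to sanity‑check. A back‑of‑envelope sketch, assuming fp32 gradients and using the ~857× factor DisTrO reports for 1.2B‑parameter pre‑training (the sync pattern and byte math are deliberately simplified):

```python
# Illustrative arithmetic only; 857x is the DisTrO-reported figure for
# 1.2B-parameter pre-training, and fp32 gradients are an assumption.
params = 1.2e9
bytes_per_param = 4                    # fp32
naive = params * bytes_per_param       # per-step gradient exchange, per peer
reduced = naive / 857
print(f"naive all-reduce: {naive / 1e9:.1f} GB/step")    # ~4.8 GB/step
print(f"DisTrO-style:     {reduced / 1e6:.1f} MB/step")  # ~5.6 MB/step
```

At single‑digit megabytes per step, a stable consumer uplink, not NVLink‑class interconnect, becomes the binding constraint.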

FLOPs Dual-Track System

Track A — Credits

Users deposit USDC, priced directly in real‑world FLOPs of compute, to sponsor targeted decentralized training campaigns. Contributors receive pro‑rata distributions of the mined rewards each epoch (see the accounting sketch after Track B).

Track B — Token

A governance asset that steers the treasury and captures fee flows plus any optional share of mining rewards the DAO votes to allocate.
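
A minimal sketch of the Track A flow, with every name, price, and number invented for illustration (the real pricing, epoch cadence, and reward units are protocol parameters, not shown here):

```python
USDC_PER_PFLOP = 2.00   # hypothetical posted compute price, USDC per PFLOP

deposits = {"alice": 500.0, "bob": 1500.0}            # USDC deposited
credits = {u: usd / USDC_PER_PFLOP for u, usd in deposits.items()}

def distribute_epoch(credits, epoch_rewards):
    """Split one epoch's mined DeAI rewards pro-rata by Credit balance."""
    total = sum(credits.values())
    return {u: epoch_rewards * bal / total for u, bal in credits.items()}

payouts = distribute_epoch(credits, epoch_rewards=100.0)
print(payouts)  # alice funded 25% of the compute -> {'alice': 25.0, 'bob': 75.0}
```

Note what the sketch deliberately excludes: the token. Contributor accounting is pure cost‑in, rewards‑out; governance lives entirely on Track B.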

Where FLOPs Fits in the DeAI Stack

Training protocols (DisTrO, NoLoCo, DiLoCo descendants) are the how. FLOPs funds them, tracks them, and pushes liquidity to the ones performing in the wild.

Compute networks (Gensyn, Nous Psyche, Prime) are the markets. FLOPs allocates Credits to them as a diversified basket—exposure without picking a single winner.

Inference/verification L1s (e.g., Ambient) are the edges where models meet users. As they mature, governance can direct part of the treasury to strategic exposure or co‑mining programs.

Critical Point

FLOPs doesn't promise circular magic. Contributors get their share of mined output directly. Token holders govern how to grow the pie (more compute, liquidity, R&D, open‑source grants) and if/when to recycle treasury value back to holders. Clean separation; no muddled hybrids.

Why This Is Also a Cultural Bet

We don't buy the idea that a handful of labs should write civilization's speech rules in model weights. The better story—the one that spreads—looks like this:

  • Open coordination on a fast chain (Psyche's Solana‑based orchestration is one example) so no single switch kills a run.
  • Swarm learning as default, whether it's RL‑Swarm peers learning together or model‑parallel over everyday links.
  • Proofs over trust, on‑chain and off‑chain, so reputation compounds in public.

If Bitcoin made energy legible and Hyperliquid made trading incentives legible, FLOPs aims to make compute for training legible—so capital flows to where it builds rather than extracts.

How FLOPs Earns Trust (and Keeps It)

1. Simple Accounting

Credits are cost‑based IOUs for compute. Rewards flow back pro‑rata by epoch. Token economics are not mixed into contributor accounting.

2. Transparent Routing

Dashboards show allocations by network and epoch performance—no mystery meat.

3. Governance with Brakes

Treasury spending is timelocked; quorum and majority thresholds gate every vote; a delegated council restrains knee‑jerk capture while the base grows (a toy sketch follows).
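
In code terms, the brakes look roughly like this toy sketch (the quorum, majority, and 48‑hour figures are invented placeholders, not FLOPs parameters):

```python
import time
from dataclasses import dataclass

QUORUM = 0.20            # assumed: 20% of token supply must vote
MAJORITY = 0.50          # assumed: simple majority of votes cast
TIMELOCK_S = 48 * 3600   # assumed: 48-hour delay before any spend executes

@dataclass
class Proposal:
    amount: float                  # treasury spend requested
    votes_for: float = 0.0
    votes_against: float = 0.0
    queued_at: float | None = None

def queue(p: Proposal, total_supply: float) -> None:
    """Queue a passing proposal; the timelock clock starts here."""
    cast = p.votes_for + p.votes_against
    assert cast / total_supply >= QUORUM, "quorum not met"
    assert p.votes_for / cast > MAJORITY, "majority not met"
    p.queued_at = time.time()

def execute(p: Proposal) -> float:
    """Release funds only after the timelock has fully elapsed."""
    assert p.queued_at is not None, "proposal was never queued"
    assert time.time() - p.queued_at >= TIMELOCK_S, "timelock still active"
    return p.amount
```

The delay is the point: a captured majority still has to telegraph every spend publicly before it settles, giving the base time to exit or counter‑organize.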

4. Regulatory Hygiene

Credits are prepaid compute; the token is governance + treasury exposure with no fixed dividend.

What We'll Fund First

  • Compute campaigns across Gensyn (work‑proof training), Nous Psyche (DisTrO‑native orchestration), and Prime (fault‑tolerant global training).
  • Research bounties to productionize model‑parallel over the internet (Protocol Models / "Pluralis" line), and to harden gossip optimizers on flaky links.
  • Verification + telemetry so contributors and token holders can see, in near‑real time, how much compute was bought, where, and what it returned.
  • Open‑source grants for adapters that let small labs and indie builders plug into these networks without bespoke infra.

Risks We Acknowledge

Protocol Under-delivery

Not every network or optimizer will meet its claims; FLOPs mitigates via a diversified basket and rapid reallocation.

Hardware Shocks

GPU supply squeezes remain volatile; we'll mix cloud leases with OTC blocks and partner forward contracts.

Governance Capture

We build in timelocks, delegate frameworks, and public reporting to restrain rash moves while still shipping.

Regulatory Drift

We'll keep Credits clearly separated from token mechanics and route distributions through governance, not promises.

The Closing Argument

If the last cycle taught us anything, it's that reputation > information. Tweets are free; capital at risk is not. The way to rebuild trust is to point dollars at engines that produce—and prove it.

DisTrO, NoLoCo, and Protocol‑Model work tore down the myth that only super‑clusters can train frontier models. Gensyn, Psyche, Prime, and others have shown the outlines of a permissionless training internet. The EU and U.S. have shown us what happens when power centralizes. The choice is plain:

Wait for someone else's governance to decide what intelligence is allowed to learn—or fund the open alternative and spin the flywheel ourselves.

FLOPs is the attention‑optimized gateway to decentralized training. Credits turn USDC into compute and raw DeAI rewards; the token turns community conviction into a treasury that multiplies those effects. It is not a promise of riches; it is an instrument to aim ingenuity.

If you believe intelligence should be widely owned—and that economies work best when trust is earned in the open—then help us route the next dollar of compute.

Not investment advice. Do your own research; we'll publish ours in public.

Source Notes & References

• Nous Research, A Preliminary Report on DisTrO—857× bandwidth reduction in 1.2B pre‑training

• Ramasinghe et al., Protocol Models / Pluralis line—model‑parallel over low bandwidth

• Kolehmainen et al., NoLoCo—no all‑reduce; pairwise synchronization

• Gensyn—compute protocol (verification and settlement), RL‑Swarm, NoLoCo explainer

• Prime Intellect—INTELLECT‑1 technical report, INTELLECT‑2 decentralized RL, OpenDiLoCo

• Policy context—EU GPAI Code of Practice & guidance under the AI Act; U.S. EO 14110 and Commerce reporting

FLOPs turns community capital into measurable compute that advances decentralized AI—and lets the people paying for that progress govern how its value is recycled.