RedNet

Overview

What RedNet is, why it exists, and how it works at a high level.

What is RedNet?

RedNet is a Bittensor subnet dedicated to decentralized AI red-teaming. It coordinates a global network of miners who discover, document, and score adversarial vulnerabilities in large language models (LLMs) — then rewards them in TAO (τ) proportional to the quality and novelty of their findings.

The output is a living, community-owned adversarial corpus: a continuously growing dataset of verified attack prompts, categorized by failure mode and severity, that any AI company or researcher can query.


Why RedNet Exists

Adversarial red-teaming — systematically probing AI models to find failure modes before deployment — is the primary methodology for AI safety assurance. The problem is that it has three critical limitations today:

Limitation | Reality
Cost | Enterprise red-teaming starts at $500K+/yr; mid-market AI companies have no access.
Speed | Centralized teams deliver findings over weeks or months, while models ship faster than audits.
Coverage | Any single team has narrow cultural and linguistic diversity, so non-English failure modes go undiscovered.

RedNet decentralizes all three via Bittensor's incentive architecture.


Core Components

Miners

Miners are the producers of the network. Each round, they craft adversarial prompts targeting LLM failure modes and submit them for evaluation. They earn TAO emissions proportional to the composite quality score of their submissions.

Miner Guide

Validators

Validators are the quality gatekeepers. They run a four-stage evaluation pipeline on every miner submission: functional testing, severity classification, novelty scoring, and diversity bonus computation. They earn TAO for accurate, timely evaluation aligned with consensus.
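The four-stage pipeline can be sketched as follows. The stage functions here (`reproduce_attack`, `classify_severity`, and so on) are hypothetical placeholders standing in for real validator logic, not the actual subnet code:

```python
# Hypothetical placeholder stages; a real validator implements each of these.
def reproduce_attack(sub):
    return sub.get("reproduced", False)

def classify_severity(sub):
    return sub.get("severity", 0.0)

def score_novelty(sub, corpus):
    # Simplified: exact-match lookup instead of semantic similarity.
    return 0.0 if sub["prompt"] in corpus else 1.0

def diversity_bonus(sub):
    return sub.get("diversity", 0.0)

def evaluate(sub, corpus):
    """Four-stage evaluation: functional test, severity, novelty, diversity."""
    if not reproduce_attack(sub):              # stage 1: functional testing
        return None                            # non-reproducing attacks are rejected
    return {
        "severity": classify_severity(sub),    # stage 2: severity classification
        "novelty": score_novelty(sub, corpus), # stage 3: novelty vs. corpus state
        "diversity": diversity_bonus(sub),     # stage 4: diversity bonus
    }
```

Note that stage 1 acts as a hard filter: a submission that does not reproduce never reaches the scoring stages.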

Validator Guide

The Corpus

Every verified attack submission is added to a shared adversarial corpus — a vector-indexed database of prompts searchable by attack type, severity, target model, and semantic similarity. The corpus is the network's primary output and the basis for external monetization.
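As an illustration of what "searchable by semantic similarity" means, here is a minimal sketch of a similarity lookup over the corpus. The entry schema and the brute-force cosine ranking are assumptions for illustration; a production corpus would use a vector index rather than a linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(corpus, query_vec, top_k=3):
    """Return the top_k corpus entries most similar to query_vec."""
    ranked = sorted(corpus, key=lambda e: cosine(e["vec"], query_vec), reverse=True)
    return ranked[:top_k]
```

The same similarity machinery that serves external queries can also back the novelty check: a submission whose embedding sits too close to an existing entry is a near-duplicate.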


Scoring at a Glance

The composite score for each submission is:

Score = 0.40 × Novelty + 0.30 × Severity + 0.20 × Reproducibility + 0.10 × Diversity

The novelty gate is the key proof-of-intelligence mechanism: an attack that already exists in the corpus scores near zero, regardless of quality. This prevents plagiarism and ensures that rewards flow to genuine creative adversarial reasoning.
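The formula and the novelty gate together can be sketched as below. The component scorers are assumed to return values in [0, 1], and the gate threshold of 0.05 is an illustrative assumption, not a documented constant:

```python
WEIGHTS = {
    "novelty": 0.40,
    "severity": 0.30,
    "reproducibility": 0.20,
    "diversity": 0.10,
}

NOVELTY_GATE = 0.05  # assumed threshold: near-duplicates of corpus entries are zeroed out

def composite_score(components: dict) -> float:
    """Weighted sum of component scores, with a hard novelty gate."""
    if components["novelty"] < NOVELTY_GATE:
        return 0.0  # attack already exists in the corpus; no reward regardless of quality
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
```

Because the gate is applied before the weighted sum, even a severe, perfectly reproducible attack earns nothing if it duplicates an existing corpus entry.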

Full Scoring Breakdown


Emission Split

Participant | Share
Miners | 70%
Validators | 25%
Protocol Treasury | 5%
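A minimal sketch of how a block's emission would be divided under this split (the function name and per-block framing are illustrative assumptions):

```python
# 70/25/5 emission split from the table above.
SPLIT = {"miners": 0.70, "validators": 0.25, "treasury": 0.05}

def allocate(emission_tao: float) -> dict:
    """Divide a TAO emission among network participants by fixed share."""
    return {role: emission_tao * share for role, share in SPLIT.items()}
```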

Built on Bittensor

RedNet is a natural fit for a Bittensor subnet for three reasons:

  1. Digital commodity with external market value — adversarial knowledge is something AI companies will pay for.
  2. Objectively verifiable evaluation — an attack either reproduces or it doesn't; novelty is deterministic given the corpus state.
  3. Diversity of adversarial imagination — a global miner network finds failure modes that no centralized team can match.

RedNet aligns with Bittensor's core philosophy: route TAO to the people doing the most valuable work.


Next Steps
