Bittensor Subnet

RedNet
Decentralized AI Red-Teaming.

A global network of miners, paid in TAO, that discovers adversarial vulnerabilities in frontier LLMs: continuously, and at a fraction of the cost of centralized red-teaming.

$5B+
Addressable Market
60 min
Round Duration
5 types
Attack Categories
70%
Miner Emission Share
The Problem

AI safety is a $5B market.
Access is broken.

Adversarial red-teaming is the primary methodology for discovering LLM failures before deployment. But today it is expensive, slow, and geographically limited.

💰
$500K+/yr

Cost

Enterprise red-teaming engagements start at $500K annually. Most companies building on LLMs have no path to systematic adversarial testing.

Weeks–Months

Speed

Centralized human teams produce findings over weeks or months. Model updates outpace the evaluation cycle — vulnerabilities ship before they're caught.

🌍
Narrow

Coverage

Any single team draws on a narrow band of cultural and linguistic diversity. Critical failure modes in non-English languages or niche domains go systematically undiscovered.

RedNet decentralizes all three: a Bittensor subnet whose global miner network produces a continuous, diversified adversarial corpus, scored by Yuma Consensus, rewarded in TAO, and accessible to any AI company that needs it.

Mechanism

Mine. Score. Earn.

01

Mine

Generate adversarial prompts

Every 60-minute round, miners craft prompts targeting LLM failure modes across 5 categories: jailbreaks, hallucination induction, bias elicitation, prompt injection, and context manipulation. Up to 20 submissions per round.

20 submissions / round
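The round mechanics above (five attack categories, a 20-submission cap per miner) could be sketched as a submission schema. All names, fields, and the example model string below are illustrative assumptions, not the subnet's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum

# The five attack categories named in the round spec.
class AttackCategory(Enum):
    JAILBREAK = "jailbreak"
    HALLUCINATION = "hallucination_induction"
    BIAS = "bias_elicitation"
    PROMPT_INJECTION = "prompt_injection"
    CONTEXT_MANIPULATION = "context_manipulation"

MAX_SUBMISSIONS_PER_ROUND = 20  # per-miner cap stated above

@dataclass
class Submission:
    miner_hotkey: str          # miner's identity on the subnet
    category: AttackCategory   # one of the five failure-mode classes
    prompt: str                # the adversarial prompt itself
    target_model: str          # model the prompt was crafted against

@dataclass
class Round:
    round_id: int
    submissions: list[Submission] = field(default_factory=list)

    def add(self, sub: Submission) -> bool:
        """Accept a submission unless the per-round cap is already hit."""
        if len(self.submissions) >= MAX_SUBMISSIONS_PER_ROUND:
            return False
        self.submissions.append(sub)
        return True
```

A validator would then pull each round's accepted submissions into the scoring pipeline described in the next step.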
02

Score

Validators evaluate submissions

Validators run a 4-stage pipeline: functional test (5 reproduction runs), severity classification (1–5 rubric), novelty scoring via SBERT embeddings against the corpus, and a diversity bonus for portfolio breadth.

0.40N + 0.30S + 0.20R + 0.10D
03

Earn

TAO emissions flow to contributors

Miners earn 70% of round emissions proportional to their composite score. The adversarial corpus grows with every round — a living, community-owned knowledge base that compounds in value with each discovery.

70% miners · 25% validators · 5% treasury
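The emission split above can be sketched numerically. The 70/25/5 percentages come from the page; the function name, round size, and miner scores are made up for illustration.

```python
# Hypothetical per-round emission split: 70% miners, 25% validators,
# 5% treasury, with the miner pool divided pro rata by composite score.
MINER_SHARE, VALIDATOR_SHARE, TREASURY_SHARE = 0.70, 0.25, 0.05

def miner_payouts(round_emission: float, scores: dict[str, float]) -> dict[str, float]:
    """Split the miner share of a round's emission proportionally to scores."""
    pool = round_emission * MINER_SHARE
    total = sum(scores.values())
    if total == 0:
        return {m: 0.0 for m in scores}
    return {m: pool * s / total for m, s in scores.items()}

payouts = miner_payouts(100.0, {"alice": 0.8, "bob": 0.2})
# alice receives ~56 TAO and bob ~14 of the 70 TAO miner pool
```

Because payouts are proportional rather than winner-take-all, a miner with a consistently mid-range composite score still earns every round.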
Proof of Intelligence

Scoring Formula

The composite score rewards genuine creative adversarial reasoning, not compute scaling or corpus plagiarism. You cannot brute-force creativity.

Composite Score

0.40N + 0.30S + 0.20R + 0.10D

40%Novelty
N

Semantic distance from the existing corpus via SBERT embeddings. Near-duplicate submissions score near zero. Novelty decays once an attack enters the corpus, creating constant pressure for new ideas.

30%Severity
S

1–5 classification of the failure mode severity. Level 5 is a full safety system bypass (DAN-style). Level 1 is a minor tone or style deviation. Higher severity earns proportionally more.

20%Reproducibility
R

Fraction of 5 independent runs in which the attack succeeds. A prompt that triggers the failure mode 5/5 times scores 1.0; a one-off fluke (1/5) scores 0.2. This keeps the corpus reproducible.

10%Diversity
D

Bonus multiplier for submissions spanning ≥3 of the 5 attack categories within a round. Up to a 10% boost. Encourages well-rounded miner portfolios over single-vector specialization.
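The four components above can be combined in a minimal sketch. The weights are taken from the formula, but the normalizations are assumptions: the 1–5 severity rubric is mapped onto [0, 1], diversity is modeled as a binary ≥3-category bonus feeding the 0.10 term, and a plain cosine similarity over toy vectors stands in for the SBERT embedding pipeline.

```python
import numpy as np

# Weights from the composite score: 0.40N + 0.30S + 0.20R + 0.10D.
WEIGHTS = {"novelty": 0.40, "severity": 0.30, "repro": 0.20, "diversity": 0.10}

def novelty(embedding: np.ndarray, corpus: np.ndarray) -> float:
    """1 minus the max cosine similarity to the corpus: duplicates score ~0."""
    if corpus.size == 0:
        return 1.0  # first entry in an empty corpus is maximally novel
    sims = corpus @ embedding / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(embedding)
    )
    return float(1.0 - sims.max())

def severity(level: int) -> float:
    """Map the 1-5 rubric onto [0, 1] (assumed linear mapping)."""
    return (level - 1) / 4

def reproducibility(successes: int, runs: int = 5) -> float:
    """Fraction of independent reproduction runs where the attack succeeds."""
    return successes / runs

def diversity(categories_hit: int) -> float:
    """Full bonus for spanning >= 3 of the 5 categories, else none (assumed)."""
    return 1.0 if categories_hit >= 3 else 0.0

def composite(n: float, s: float, r: float, d: float) -> float:
    """Weighted sum of the four normalized components."""
    return (WEIGHTS["novelty"] * n + WEIGHTS["severity"] * s
            + WEIGHTS["repro"] * r + WEIGHTS["diversity"] * d)
```

Under this sketch, a prompt whose embedding already sits in the corpus earns zero novelty and loses the largest weight outright, which is the intended anti-plagiarism pressure.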

Competitive Landscape

RedNet vs. the alternatives.

| Solution | Novelty | Speed | Coverage | Cost | Continuous |
| --- | --- | --- | --- | --- | --- |
| RedNet (proposed) | ✓ High | ✓ Real-time | ✓ Global | ✓ Pay-per-query | ✓ Always on |
| Scale AI Red Team | Medium | ✗ Weeks | Medium | ✗ $500K+/yr | ✗ Project-based |
| Adversa AI | Medium | Days | Medium | ✗ High SaaS | Partial |
| HuggingFace Datasets | ✗ Static | ✗ Stale | Limited | ✓ Free | ✗ No |
| Internal Red Teams | Low | ✗ Slow | ✗ Narrow | ✗ Very high | Partial |
Who It's For

Built for the full AI safety stack.

🔬

AI Safety Researchers

Access to a living adversarial benchmark. Track new jailbreak and attack trends over time. Free open corpus API for academic and individual researchers.

Access the corpus
🚀

AI Startups

Pre-launch safety audits and continuous monitoring post-deployment. Pay-per-query corpus access and on-demand red-team rounds targeting your specific model.

Audit your model
📋

Compliance Officers

Documented evidence of adversarial testing for regulatory audit trails. EU AI Act and NIST AI RMF compliance reporting generated from corpus findings.

Learn about compliance
⚔️

Red-Team Practitioners

Augment existing human red-teams with subnet-generated attack candidates. Public leaderboard provides attribution and career signaling for AI safety work.

Start mining
Roadmap

Path to adoption.

01

Launch

Months 1–3

Open Corpus & Community Building

Public, open-access adversarial corpus published under a permissive license. Attract miners from the AI safety, CTF, and Bittensor communities. Early miners receive a 2× emission multiplier for the first 30 days.

02

Monetization

Months 4–9

API Access Tier & Enterprise Pilots

Paid API layer launches. Companies pay TAO or USD to query the corpus, commission targeted red-team rounds, and receive structured vulnerability reports. First enterprise pilots with AI startups and compliance-driven firms.

03

Integration

Months 10–18

Pre-Deployment Audit Infrastructure

Integration with CI/CD pipelines for AI model releases. Pre-deployment safety audits as a service. SLA-backed enterprise contracts. EU AI Act and NIST AI RMF compliance reporting from corpus findings.

04

Ecosystem

Month 18+

Industry Standard & Cross-Subnet Value

RedNet corpus feeds other Bittensor subnets as alignment training data. Enterprise revenue sustains the subnet without sole reliance on TAO emissions. Position RedNet as the industry-standard adversarial benchmark.

For Miners

Ready to break AI?

Join the network, craft adversarial prompts, and earn TAO for discovering novel vulnerabilities in frontier LLMs. Early miners receive a 2× emission bonus during the first 30 days of subnet launch.

For AI Companies

Secure your model.

Access a living adversarial corpus or commission targeted red-team rounds against your specific model. Pay per query. SLA-backed enterprise contracts and compliance reports available from Phase 2 onward.