RedNet
Decentralized AI
Red-Teaming.
A global miner network paid in τ TAO to discover adversarial vulnerabilities in frontier LLMs — continuously, globally, and at a fraction of the cost of centralized red-teaming.
AI safety is a $5B market.
Access is broken.
Adversarial red-teaming is the primary methodology for discovering LLM failures before deployment. But today it is expensive, slow, and geographically limited.
Cost
Enterprise red-teaming engagements start at $500K+ annually. Most companies building on LLMs have no path to systematic adversarial testing.
Speed
Centralized human teams produce findings over weeks or months. Model updates outpace the evaluation cycle — vulnerabilities ship before they're caught.
Coverage
Any single team has narrow cultural and linguistic diversity. Critical failure modes in non-English languages or niche domains go systematically undiscovered.
RedNet decentralizes all three. A Bittensor subnet where a global miner network produces a continuous, diversified adversarial corpus — scored by Yuma Consensus, rewarded in TAO, and accessible to any AI company that needs it.
Mine. Score. Earn.
Mine
Generate adversarial prompts
Every 60-minute round, miners craft prompts targeting LLM failure modes across 5 categories: jailbreaks, hallucination induction, bias elicitation, prompt injection, and context manipulation. Up to 20 submissions per round.
Score
Validators evaluate submissions
Validators run a 4-stage pipeline: functional test (N=5 reproduction runs), severity classification (1–5 rubric), novelty scoring via SBERT embeddings against the corpus, and a diversity bonus for portfolio breadth.
Earn
TAO emissions flow to contributors
Miners earn 70% of round emissions proportional to their composite score. The adversarial corpus grows with every round — a living, community-owned knowledge base that compounds in value with each discovery.
Scoring Formula
The composite score rewards genuine creative adversarial reasoning, not compute scaling or corpus plagiarism. You cannot brute-force creativity.
Composite Score
0.40×N+0.30×S+0.20×R+0.10×D
Semantic distance from the existing corpus via SBERT embeddings. Near-duplicate submissions score near zero. Novelty decays once an attack enters the corpus, creating constant pressure for new ideas.
1–5 classification of the failure mode severity. Level 5 is a full safety system bypass (DAN-style). Level 1 is a minor tone or style deviation. Higher severity earns proportionally more.
Fraction of N=5 independent runs where the attack succeeds. A prompt that triggers the failure mode 5/5 times scores 1.0. A fluke (1/5) scores 0.2. Ensures corpus quality.
Bonus multiplier for submissions spanning ≥3 of the 5 attack categories within a round. Up to a 10% boost. Encourages well-rounded miner portfolios over single-vector specialization.
RedNet vs. the alternatives.
| Solution | Novelty | Speed | Coverage | Cost | Continuous |
|---|---|---|---|---|---|
| RedNet (proposed) | ✓ High | ✓ Real-time | ✓ Global | ✓ Pay-per-query | ✓ Always on |
| Scale AI Red Team | Medium | ✗ Weeks | Medium | ✗ $500K+/yr | ✗ Project-based |
| Adversa AI | Medium | Days | Medium | ✗ High SaaS | Partial |
| HuggingFace Datasets | ✗ Static | ✗ Stale | Limited | ✓ Free | ✗ No |
| Internal Red Teams | Low | ✗ Slow | ✗ Narrow | ✗ Very high | Partial |
Built for the full AI safety stack.
AI Safety Researchers
Access to a living adversarial benchmark. Track new jailbreak and attack trends over time. Free open corpus API for academic and individual researchers.
Access the corpus→AI Startups
Pre-launch safety audits and continuous monitoring post-deployment. Pay-per-query corpus access and on-demand red-team rounds targeting your specific model.
Audit your model→Compliance Officers
Documented evidence of adversarial testing for regulatory audit trails. EU AI Act and NIST AI RMF compliance reporting generated from corpus findings.
Learn about compliance→Red-Team Practitioners
Augment existing human red-teams with subnet-generated attack candidates. Public leaderboard provides attribution and career signaling for AI safety work.
Start mining→Path to adoption.
Launch
Months 1–3Open Corpus & Community Building
Public, open-access adversarial corpus published under permissive license. Attract miners from AI safety, CTF, and Bittensor communities. Early miners receive 2× emission multiplier for first 30 days.
Monetization
Months 4–9API Access Tier & Enterprise Pilots
Paid API layer launches. Companies pay TAO or USD to query the corpus, commission targeted red-team rounds, and receive structured vulnerability reports. First enterprise pilots with AI startups and compliance-driven firms.
Integration
Months 10–18Pre-Deployment Audit Infrastructure
Integration with CI/CD pipelines for AI model releases. Pre-deployment safety audits as a service. SLA-backed enterprise contracts. EU AI Act and NIST AI RMF compliance reporting from corpus findings.
Ecosystem
Month 18+Industry Standard & Cross-Subnet Value
RedNet corpus feeds other Bittensor subnets for alignment training data. Revenue sustains the subnet without external dependency on TAO emissions. Position as the industry-standard adversarial benchmark.
Ready to break AI?
Join the network, craft adversarial prompts, and earn TAO for discovering novel vulnerabilities in frontier LLMs. Early miners receive a 2× emission bonus during the first 30 days of subnet launch.
Secure your model.
Access a living adversarial corpus or commission targeted red-team rounds against your specific model. Pay per query. SLA-backed enterprise contracts and compliance reports available from Phase 2 onward.