FAQ
Frequently asked questions about RedNet's mechanism, corpus, governance, and participation.
General
What exactly is a Bittensor subnet?
A Bittensor subnet is an independent network within the broader Bittensor ecosystem. Each subnet defines its own task, its own scoring mechanism, and its own participant roles (miners and validators). Subnet participants earn TAO (τ), Bittensor's native token, in proportion to the value they contribute.
RedNet is a subnet where the task is adversarial AI red-teaming, the scoring mechanism rewards novelty, severity, and reproducibility, and the output is a growing adversarial prompt corpus.
Is this legal?
Yes. Adversarial red-teaming is a legitimate and widely practiced AI safety methodology. RedNet miners discover failure modes in AI systems — a practice that is actively encouraged by AI labs, regulators, and safety organizations. The corpus does not enable harmful actions; it documents model vulnerabilities so they can be fixed.
What models can miners target?
Initially, RedNet will specify a set of target models for each round (open-source models and models accessible via public API). This is coordinated through the subnet governance process and updated periodically. Validators must have access to the same target models to run functional tests.
The Corpus
Who owns the adversarial corpus?
No one. The corpus is a decentralized, community-owned resource. All submissions are attributed to their miner hotkey (pseudonymously), but the corpus itself is not owned by any individual, company, or the RedNet team. It is published under a permissive open license.
Can anyone access the corpus?
In Phase 1, the corpus is fully open and free — accessible to any researcher, developer, or organization. Starting in Phase 2, a paid API tier provides structured corpus access, targeted red-team round commissioning, and vulnerability reporting. The free academic tier remains available indefinitely.
What happens to my attack after it enters the corpus?
It is permanently attributed to your miner hotkey on the public leaderboard. Future similar attacks will score lower on novelty because of your contribution. Your attack may also be used in benchmark evaluations, compliance reports, or cited in AI safety research — all with attribution.
Miners
How do I maximize my earnings?
The single best strategy is to prioritize novelty. An attack that earns a 0.9 novelty score with moderate severity will outperform a near-duplicate high-severity attack. Focus on discovering new attack vectors, especially in underexplored categories (bias, context manipulation) and non-English languages.
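To see why novelty dominates, consider a weighted composite score. The actual RedNet weights are not stated here, so the weights below are illustrative assumptions chosen only to show the trade-off:

```python
# Hypothetical dimension weights -- the real RedNet weighting is set by the
# subnet; these values are assumptions for the sake of the comparison.
NOVELTY_W, SEVERITY_W, REPRO_W = 0.5, 0.3, 0.2

def composite_score(novelty: float, severity: float, reproducibility: float) -> float:
    """Weighted sum of the three scoring dimensions (each in [0, 1])."""
    return NOVELTY_W * novelty + SEVERITY_W * severity + REPRO_W * reproducibility

# A highly novel attack with only moderate severity...
novel_attack = composite_score(novelty=0.9, severity=0.5, reproducibility=1.0)
# ...vs. a near-duplicate but high-severity attack.
duplicate_attack = composite_score(novelty=0.1, severity=0.9, reproducibility=1.0)

print(novel_attack, duplicate_attack)  # 0.80 vs 0.52 -- novelty wins
```

Under any novelty-heavy weighting, the original 0.9-novelty attack outscores the near-duplicate despite its lower severity.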
What if my attack only works sometimes?
Reproducibility is scored as pass_count / 5. An attack that succeeds 3/5 times scores 0.6 on reproducibility. A 0/5 result disqualifies the submission entirely. Attacks that reliably trigger failure modes are more valuable to the corpus and to downstream users.
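The reproducibility rule above can be expressed directly as a small function (the function name and the use of `None` to signal disqualification are illustrative):

```python
def reproducibility_score(pass_count: int, runs: int = 5):
    """Score = pass_count / runs; zero passes disqualifies the submission."""
    if not 0 <= pass_count <= runs:
        raise ValueError("pass_count must be between 0 and runs")
    if pass_count == 0:
        return None  # disqualified: the attack never reproduced
    return pass_count / runs

print(reproducibility_score(3))  # 0.6
print(reproducibility_score(0))  # None (disqualified)
```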
Can I submit the same attack in multiple rounds?
No. The novelty scoring system compares your submission against the entire corpus, which includes all past accepted submissions. An attack you previously submitted that is now in the corpus will score near-zero on novelty if resubmitted.
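RedNet's exact novelty metric is not specified here, but a common way to implement corpus-wide comparison is maximum embedding similarity: a resubmitted attack matches its own corpus entry almost exactly, so its novelty collapses toward zero. A sketch with toy 2-D vectors (real systems would use high-dimensional text embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def novelty(candidate, corpus):
    """Novelty = 1 - similarity to the closest accepted submission."""
    if not corpus:
        return 1.0
    return 1.0 - max(cosine(candidate, v) for v in corpus)

corpus = [[1.0, 0.0], [0.6, 0.8]]
print(novelty([1.0, 0.0], corpus))  # exact duplicate -> 0.0
print(novelty([0.0, 1.0], corpus))  # ~0.2: closest corpus entry is fairly similar
```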
Can I run automated attack generation?
Yes. Miners may use any technique to generate adversarial prompts, including automated methods, fine-tuned models, and AI-assisted generation. The novelty gate ensures that automated approaches producing unoriginal outputs score poorly — rewarding approaches that generate genuinely novel attacks.
Validators
How much compute does validation require?
Validators must run LLM inference on every miner submission each round (up to 20 submissions per miner × the number of active miners × 5 reproduction runs each). This is the primary infrastructure cost for validators. The exact compute requirement scales with subnet size. Early validators receive an infrastructure subsidy for the first 60 days.
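A back-of-envelope estimate of the per-round inference load follows directly from those figures:

```python
def inference_runs_per_round(active_miners: int,
                             submissions_per_miner: int = 20,
                             repro_runs: int = 5) -> int:
    """Upper bound on LLM inference calls a validator makes per round."""
    return active_miners * submissions_per_miner * repro_runs

# e.g. 50 active miners, each at the 20-submission cap, 5 runs apiece:
print(inference_runs_per_round(50))  # 5000
```

At 50 active miners, a validator budgets for up to 5,000 inference runs per round; plan GPU capacity accordingly.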
What if I miss the evaluation window?
Validators who fail to broadcast a scoring vector before block 360 receive a 50% reduction in scoring weight for that round. Consistent missed windows reduce your influence in Yuma Consensus over time. Run reliable infrastructure and monitor your validator uptime.
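The penalty rule can be sketched as a simple weight adjustment (a minimal sketch; the function and constant names are illustrative, and the exact on-chain mechanics may differ):

```python
BLOCK_DEADLINE = 360  # scoring vector must be broadcast before this block
LATE_PENALTY = 0.5    # 50% scoring-weight reduction for that round

def round_weight(base_weight: float, broadcast_block: int) -> float:
    """Halve the round's scoring weight if the broadcast missed the deadline."""
    if broadcast_block >= BLOCK_DEADLINE:
        return base_weight * LATE_PENALTY
    return base_weight

print(round_weight(1.0, 200))  # 1.0 -- on time
print(round_weight(1.0, 400))  # 0.5 -- missed the window
```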
How does the spot-check protocol work?
10% of submissions per round are designated as spot-checks by the network. All validators re-evaluate these submissions independently. Results are compared across validators, and those whose scores deviate significantly from the consensus are flagged. Persistent deviators are penalized through reduced Yuma Consensus weight.
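The deviation check can be sketched as a comparison against the validator consensus; here the consensus is taken as the median score and the deviation threshold is an assumed value, since neither is specified above:

```python
from statistics import median

def flag_deviators(scores_by_validator: dict, threshold: float = 0.15) -> set:
    """Flag validators whose spot-check score strays from the consensus
    median by more than `threshold` (both choices are assumptions)."""
    consensus = median(scores_by_validator.values())
    return {v for v, s in scores_by_validator.items()
            if abs(s - consensus) > threshold}

scores = {"val-a": 0.72, "val-b": 0.70, "val-c": 0.30, "val-d": 0.74}
print(flag_deviators(scores))  # {'val-c'} -- well below the 0.71 consensus
```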
Compliance & Regulation
Does RedNet help with EU AI Act compliance?
Yes. The EU AI Act requires providers of high-risk AI systems to conduct adversarial testing as part of their conformity assessment. RedNet's corpus provides documented, reproducible evidence of adversarial testing, and Phase 2 includes structured report generation for regulatory audit purposes.
Does RedNet help with NIST AI RMF?
The NIST AI Risk Management Framework (AI RMF) recommends adversarial testing as a core component of AI risk measurement. RedNet's continuous corpus provides ongoing coverage for the "Measure" function of the AI RMF, with attribution-tracked findings and reproducible test cases.
Cold Start
How does the network avoid the cold-start problem?
A red-teaming subnet needs miners to build a valuable corpus, but miners need to see earning potential first. RedNet addresses this with:
- Early miner bonus — 2× emission multiplier for the first 500 submissions per miner in the first 30 days.
- Validator subsidy — First 10 validators receive infrastructure cost coverage for 60 days.
- Free corpus access — Launching with open corpus access to attract the AI safety research community as early users, creating immediate demand.
- Targeted outreach — Direct recruitment from DEF CON AI Village, AI CTF competitions, and Bittensor validator forums.
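The early-miner bonus can be sketched as an emission multiplier check. The launch date below is hypothetical, and the assumption that the 30-day window and 500-submission cap must both hold is an interpretation of the bullet above:

```python
from datetime import datetime, timedelta

LAUNCH = datetime(2025, 1, 1)    # hypothetical launch date
BONUS_WINDOW = timedelta(days=30)
BONUS_CAP = 500                  # first 500 submissions per miner
BONUS_MULTIPLIER = 2.0

def emission_multiplier(submission_time: datetime, prior_submissions: int) -> float:
    """2x emissions while inside the launch window and under the per-miner cap."""
    in_window = submission_time - LAUNCH <= BONUS_WINDOW
    under_cap = prior_submissions < BONUS_CAP
    return BONUS_MULTIPLIER if (in_window and under_cap) else 1.0

print(emission_multiplier(LAUNCH + timedelta(days=10), 42))   # 2.0 -- bonus applies
print(emission_multiplier(LAUNCH + timedelta(days=45), 42))   # 1.0 -- window closed
```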