Overview of Bittensor Subnets

Bittensor is a decentralized AI network that uses blockchain-based incentives to reward participants who contribute valuable computation — whether that's running inference, training models, or processing financial signals. Its native token, TAO, is used for staking, registrations, and as the base currency across the network.
Bittensor originally operated under what is now called the "Legacy" model, where a council of 64 validators determined which subnets received TAO emissions and in what proportion. In February 2025, the network transitioned to a new system called Dynamic TAO, or dTAO, which replaced this validator-driven allocation with a market-based mechanism.
Under dTAO, every Bittensor subnet now has its own token — referred to as an Alpha token — and its own liquidity pool. These Alpha tokens are what make it possible for the market, rather than validators, to decide which subnets receive the most daily TAO emissions. This article covers how dTAO works, which subnets are leading by market cap as of March 2026, and how to acquire subnet tokens.
What Is Dynamic TAO (dTAO)?
Dynamic TAO is the system that governs how TAO emissions are distributed across Bittensor's subnets. The core mechanic is an Automated Market Maker, or AMM, that every subnet maintains between TAO and its own Alpha token.
When you stake TAO into a subnet under dTAO, you are technically executing a swap: your TAO enters the subnet's liquidity pool and you receive that subnet's Alpha token in return. Each Alpha token carries a name derived from its subnet number: Chutes runs on Subnet 64, so its Alpha is commonly referenced as SN64; τemplar's is SN3; Targon's is SN4.
The price of each Alpha token, denominated in TAO, is set by the market in real time. That price then determines how much of the network's daily TAO emissions flow to that subnet. A subnet whose Alpha token is in high demand draws more staked TAO, captures a larger share of the daily emission pool, and can use those rewards to attract better miners and validators. Subnets with low demand receive proportionally fewer emissions.
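The stake-as-swap mechanic and the emission split can be sketched in a few lines. This is a minimal illustration assuming a fee-free constant-product (x·y = k) pool and an emission share exactly proportional to Alpha price; the pool figures are made up, and Bittensor's actual on-chain math has more moving parts.

```python
def stake_tao(pool_tao: float, pool_alpha: float, tao_in: float):
    """Swap TAO into a subnet pool and receive Alpha.
    Returns (alpha_out, new_pool_tao, new_pool_alpha)."""
    k = pool_tao * pool_alpha              # constant-product invariant
    new_tao = pool_tao + tao_in
    new_alpha = k / new_tao
    return pool_alpha - new_alpha, new_tao, new_alpha

def alpha_price(pool_tao: float, pool_alpha: float) -> float:
    """Spot price of Alpha in TAO: simply the pool ratio."""
    return pool_tao / pool_alpha

def emission_shares(prices: dict) -> dict:
    """Each subnet's share of daily TAO emissions, here modeled as
    directly proportional to its Alpha price (a simplification)."""
    total = sum(prices.values())
    return {sn: p / total for sn, p in prices.items()}
```

For example, staking 100 TAO into a hypothetical pool holding 1,000 TAO and 10,000 Alpha would return roughly 909 Alpha, and the act of staking itself pushes the Alpha price up, which is why net staking flow and emission share move together.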
By March 2026, the total market capitalization of all subnet Alpha tokens reached approximately $1.12 billion, equivalent to around 27% of TAO's own market capitalization. The network currently supports 128 active subnets, each focused on a specific AI task, with expansion to 256 subnets projected later in 2026.
Top Bittensor Subnets by Market Cap

Here is the current leaderboard for Bittensor subnet market capitalizations (as of March 25, 2026):
- τemplar (SN3) — ~$134.9M
- Chutes (SN64) — ~$132.9M
- Targon (SN4) — ~$91.8M
- affine (SN120) — ~$71.8M
- lium (SN51) — ~$52.1M
- Ridges AI (SN62) — ~$50.8M
- Proprietary Trading Network (SN8) — ~$47.4M
- Score (SN44) — ~$45.0M
- iota (SN9) — ~$44.6M
- Hippius (SN75) — ~$41.3M
The top 10 subnets have reached a combined valuation of approximately $712 million at time of writing. Now, let's take a closer look at the leading subnets.
τemplar (SN3): Large-Scale LLM Training
τemplar focuses on large-scale LLM pre-training on Bittensor's decentralized infrastructure. Previously known as a "quiet" infrastructure subnet, τemplar (SN3) became the face of the Bittensor ecosystem in March 2026 following a breakthrough that shifted the valuation logic of the entire network.
On March 10, 2026, the Templar team announced the successful completion of Covenant-72B.
- The Achievement: Training a 72-billion-parameter model across more than 70 independent, globally distributed nodes.
- The Tech: This was made possible by the SparseLoCo algorithm, which solved the "bandwidth bottleneck" of decentralized training by compressing gradients by 97% without losing model accuracy.
- Performance: Covenant-72B offers performance on par with Meta's Llama-2-70B, proving that "Sovereign AI" can be built without centralized data centers.
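To build intuition for the compression step, here is a hedged sketch of top-k gradient sparsification, the general family of techniques that gradient-compression schemes like SparseLoCo draw on. Keeping only the largest 3% of gradient entries roughly matches the 97% compression figure; the real algorithm involves additional machinery (error feedback, quantization) not shown here.

```python
import numpy as np

def sparsify(grad: np.ndarray, keep_ratio: float = 0.03):
    """Keep only the largest-magnitude `keep_ratio` fraction of gradient
    entries; (indices, values) is the payload sent over the network."""
    flat = grad.ravel()
    k = max(1, int(flat.size * keep_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # top-k by magnitude
    return idx, flat[idx]

def densify(idx, values, shape):
    """Reconstruct a dense gradient, zero everywhere else."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)
```

Transmitting 3% of entries (plus their indices) instead of the full tensor is what makes synchronizing a 72B-parameter model over ordinary internet links plausible at all.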
The subnet gained mainstream financial attention on March 20, 2026, when NVIDIA CEO Jensen Huang referenced the achievement during an appearance on the All-In Podcast. Huang acknowledged the technical feat of training a Llama-class model through collaborative distributed computing, framing it as a vital complement to centralized AI.
Chutes (SN64): Serverless AI Inference
Chutes is a serverless AI compute platform built by Rayon Labs. Chutes provides a decentralized alternative to the OpenAI API and AWS Lambda, allowing developers to deploy open-source models (like DeepSeek, Llama, and Mistral) with zero infrastructure management.
Chutes currently leads the network in pure usage metrics:
- Throughput: The platform has processed a cumulative 9.1 trillion tokens, with daily peaks now exceeding 50 billion tokens.
- Cost Efficiency: By utilizing decentralized idle compute, Chutes offers prices approximately 85% lower than AWS and 10–50% lower than centralized aggregators like Together AI.
Chutes was the first subnet to prove the "flywheel" effect of the dTAO model. It became the first subnet to cross the $100 million market cap milestone just nine weeks after the dTAO launch.
- Auto-Staking Mechanism: Rayon Labs funnels platform revenue directly back into an auto-staking mechanism. This protocol-level "buyback" purchases the SN64 Alpha token, creating organic, non-speculative demand tied directly to product usage.
- Network Share: Beyond Chutes, Rayon Labs operates two other critical subnets: Gradients (SN56) for model training and Nineteen (SN19) for high-frequency inference. Together, this "Rayon Trio" controls approximately 23.7% of all daily TAO emissions, making the team the single most influential development group in the ecosystem.
Targon (SN4): Confidential GPU Compute
Targon is a decentralized AI inference and GPU compute marketplace operated by Manifold Labs. It has established itself as the "industrial hub" of Bittensor, providing high-performance, verifiable infrastructure for enterprise-grade AI.
Targon’s valuation is backed by significant demand-side revenue rather than just network subsidies.
- Dippy AI: The viral AI character application (boasting 8.6 million users) recently transitioned its entire backend to Targon. This six-figure deal represents one of the largest migrations of a mainstream consumer app to decentralized infrastructure.
- Sybil AI: Manifold's own hybrid AI search engine utilizes Targon to provide model-agnostic, real-time answers, proving the subnet's ability to handle complex, low-latency search queries.
To scale its hardware footprint, Manifold Labs recently closed a $10.5 million Series A round. This institutional backing has allowed them to integrate NVIDIA Confidential Compute (TEEs), ensuring that enterprise data remains encrypted even while being processed by decentralized miners.
Affine (SN120): AI Reasoning & Reinforcement Learning
Affine provides a decentralized reinforcement learning environment where AI models are continuously refined and "composed" to solve complex multi-step problems that a single model cannot handle alone.
Affine’s core innovation is its ability to "bridge" intelligence. Instead of just producing one type of data, Affine coordinates multiple subnets to create a higher order of intelligence:
-
The "Winner-Takes-All" RL Mechanism: Affine validators constantly run competitions across various RL environments (like program synthesis and complex code generation). Miners submit models, and only the ones on the "Pareto frontier" — those that outperform all others across all tasks — receive the bulk of the rewards.
-
Interoperability in Action: Affine doesn't host its own models; instead, it leverages Chutes (SN64) for model hosting. This creates a "value loop" where Affine identifies the best reasoning logic, and Chutes provides the infrastructure to execute it.
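Pareto-frontier selection has a compact definition that is easy to show in code. The sketch below assumes higher scores are better; the miner names and scores are invented for illustration, and the actual SN120 validator logic is considerably more elaborate.

```python
def dominates(a, b) -> bool:
    """True if model `a` scores at least as well as `b` on every task
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(scores: dict) -> set:
    """Miners whose models are not dominated by any other miner's model."""
    return {
        m for m, s in scores.items()
        if not any(dominates(other, s) for o, other in scores.items() if o != m)
    }
```

Note that a model can sit on the frontier without being best at everything: one miner may lead on code generation while another leads on program synthesis, and both survive, whereas a model beaten everywhere earns nothing.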
A key driver of Affine's valuation is its commitment to "Open Intelligence": every model and dataset generated through its evaluations is open-sourced, and the top-performing model becomes the new baseline for the next round of continuous fine-tuning.
Lium (SN51): P2P GPU Marketplace
Lium is the go-to platform for developers who need GPU power — specifically the NVIDIA H100 and A100 clusters required for heavy-duty AI model training and fine-tuning.
In early 2026 Lium successfully onboarded institutional-grade hardware into a decentralized pool.
- The H100 Fleet: Lium successfully onboarded a fleet of over 500 NVIDIA H100 GPUs within months of its scale-up phase. By March 2026, it is estimated to control one of the largest "sovereign" GPU clusters outside of major centralized providers.
- Transparent Verification: Unlike traditional "cloud" rentals where you trust the provider's word, Lium uses Bittensor validators to programmatically verify hardware specs, bandwidth, and uptime. This "Proof of Compute" ensures that when a researcher rents a 40GB A100, they are receiving exactly that level of performance.
Lium fills a critical gap in the AI market: the need for short-term, high-intensity compute without long-term contracts. Developers can rent high-end nodes by the hour to run short tests or "burst" training sessions, a feature that has attracted a wave of emerging AI/ML startups to the SN51 ecosystem.
How to Buy Subnet Tokens
There are two main ways to acquire Bittensor subnet Alpha tokens.
On-chain via Bittensor
Most subnet tokens are traded natively on the Bittensor network. To buy them, you first need TAO, which is available on major centralized exchanges including Binance, MEXC, and Gate.io. From there, transfer your TAO to a compatible wallet — Tensor Wallet, SubWallet, and the TAO.com mobile app (available on iOS) are the current options. Once your TAO is in a compatible wallet, you can navigate to Taostats or use the native wallet interface to swap your TAO for a specific subnet's Alpha token.
Centralized Exchanges (CEXs)
A small number of high-volume subnets are beginning to appear on centralized exchanges. MEXC currently lists SN64 (Chutes) for direct trading against USDT, bypassing the on-chain swap process. To check whether other subnets have gained CEX listings, look at the "Markets" tab on a subnet's CoinGecko page.
A Note on Liquidity and Slippage
Because subnet tokens are traded through AMMs rather than traditional order books, large trades can move the price significantly. Liquidity pools on smaller subnets can be thin, sometimes below $1 million, meaning the price you see on an aggregator may differ from what you receive when executing a large swap. Always check the liquidity pool depth and current spread before trading, and be aware that exit liquidity may be narrower than entry liquidity during volatile periods. Subnet tokens can also experience daily swings of 50% or more, so position sizing matters.
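For a back-of-the-envelope check before a large swap, the price impact in a constant-product pool can be estimated directly from pool depth. This is a rough model with illustrative numbers, not an exact quote engine; real subnet pools may differ in fees and mechanism.

```python
def price_impact(pool_tao: float, pool_alpha: float, tao_in: float) -> float:
    """Fraction by which the effective swap price exceeds the spot price
    in a fee-free constant-product pool."""
    spot = pool_tao / pool_alpha                 # TAO per Alpha before the trade
    alpha_out = pool_alpha - (pool_tao * pool_alpha) / (pool_tao + tao_in)
    effective = tao_in / alpha_out               # TAO per Alpha actually paid
    return effective / spot - 1.0
```

Under this model, a swap equal to 10% of the pool's TAO side costs you roughly 10% over spot, which is why a trade that looks small in dollar terms can still be large relative to a sub-$1M pool.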
Conclusion
Several developments are likely to affect the Bittensor ecosystem over the coming months. The network's December 2025 halving cut daily TAO emissions from 7,200 to 3,600 tokens, reducing new supply entering circulation. TAO has a hard cap of 21 million tokens, making it structurally different from most AI tokens that inflate indefinitely.
The network is projected to expand to 256 subnets later in 2026, which would roughly double the number of competitive slots and expand the range of AI tasks the network incentivizes. On the institutional side, Grayscale listed its GTAO Trust on the NYSE in January 2026 and has an S-1 pending with the SEC to convert it into a spot ETF — a potential channel for regulated institutional exposure to TAO.
For anyone evaluating specific subnets, the key metrics to watch are net staking flow (whether TAO is moving in or out of a subnet's pool), the size and growth of the liquidity pool, and whether the subnet has a revenue model that creates organic demand for its Alpha token beyond pure speculation.