Author: CJ_Blockchain
On February 3, 2025, a model called DeepSeek-R1 was quietly launched on the National Supercomputing Internet Platform.

In the following month, it swept the globe: performance directly comparable to top closed-source models, at a training cost that could fairly be described as dirt cheap.
This triggered a sharp drop in US AI stocks and ushered in the "DeepSeek" era for Chinese AI.
On March 10, 2026, Bittensor's Subnet 3, Templar, announced the completion of the largest decentralized large language model (LLM) pre-training run in history: Covenant-72B.
The run: 72 billion parameters pre-trained on roughly 1.1 trillion tokens, executed entirely over the permissionless Bittensor Subnet 3 network, with more than 70 independent nodes participating freely.
Bittensor has ushered in its own DeepSeek moment.
Templar's predecessor on SN3 was operated by Omega Labs and initially focused on collecting and mining multimodal data. As the Bittensor mechanism evolved, the subnet completed a strategic leap from "data transporter" to "model builder".
Templar is now positioned as globally distributed infrastructure for pre-training large models: it aggregates heterogeneous compute worldwide through an incentive mechanism, aiming to solve both the extreme computational cost and the centralized censorship problems of large-model training. The successful delivery of Covenant-72B validates the maturity of this decentralized production model.
Covenant-72B is Templar's landmark achievement and currently the largest dense-architecture model pre-trained on a decentralized network.
Key parameters: 72 billion parameters, pre-trained on the high-quality DCLM corpus.
Performance benchmark: in base-model evaluations, its performance is broadly on par with Meta's Llama-2-70B.
Instruction optimization: after fine-tuning, Covenant-72B-Chat is strongly competitive on IF (instruction following) and MATH (mathematical reasoning), even surpassing closed-source models of the same size on certain metrics.
Inference efficiency: the model achieves a throughput of roughly 450 tokens/sec, addressing the response-latency pain point of large models in practical applications.
The biggest challenge in training a 72B model over the ordinary internet is the communication bandwidth bottleneck between nodes. Templar's breakthrough here is its core algorithm, SparseLoCo.
Extreme compression: the algorithm selects only 1%-3% of the most significant gradient components for transmission and quantizes them down to 2 bits, drastically reducing the demand on network bandwidth.
Low-frequency synchronization: unlike the step-by-step synchronization of traditional clusters, SparseLoCo lets nodes iterate locally for 15-250 steps before each global synchronization.
Error compensation: through a local gradient-accumulation (error-feedback) mechanism, convergence accuracy is preserved even though more than 97% of the information is dropped on each exchange (see the sketch below).
This approach demonstrates that top-tier intelligence can be produced over ordinary, globally distributed networks, with no need for expensive dedicated interconnects such as InfiniBand.
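To make the mechanics concrete, here is a minimal, self-contained Python simulation of the three ideas above working together. It is an illustrative sketch only: the toy quadratic objective, the node count, the hyperparameters (DIM, H, K_FRAC, LR), and every function name are our assumptions, not Templar's actual SparseLoCo implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NODES, ROUNDS, H, LR = 10_000, 4, 30, 25, 0.05
K_FRAC = 0.02                      # transmit only ~2% of components (the 1%-3% claim)
target = rng.normal(size=DIM)      # toy optimum every node is pulling toward

def local_grad(params):
    """Noisy gradient of the toy objective 0.5 * ||params - target||^2."""
    return (params - target) + 0.1 * rng.normal(size=DIM)

def compress(delta, error_buf):
    """Top-k sparsification + 2-bit quantization with error feedback."""
    corrected = delta + error_buf                      # fold back previously dropped signal
    k = max(1, int(K_FRAC * DIM))
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # largest-magnitude k components
    vals = corrected[idx]
    scale = np.abs(vals).max() / 2.0                   # 2 bits -> 4 signed levels
    if scale == 0.0:
        scale = 1.0                                    # avoid divide-by-zero on a zero delta
    q = np.clip(np.round(vals / scale), -2, 1)         # signed int2 range {-2, -1, 0, 1}
    decoded = np.zeros(DIM)
    decoded[idx] = q * scale                           # what a peer reconstructs from the wire
    return decoded, corrected - decoded                # (applied update, new error buffer)

params = np.zeros(DIM)                                 # globally synced model state
errors = [np.zeros(DIM) for _ in range(NODES)]

for _ in range(ROUNDS):
    updates = []
    for n in range(NODES):
        local = params.copy()
        for _ in range(H):                             # H cheap local steps, zero network traffic
            local -= LR * local_grad(local)
        delta = params - local                         # pseudo-gradient since the last sync
        decoded, errors[n] = compress(delta, errors[n])
        updates.append(decoded)
    params -= np.mean(updates, axis=0)                 # the only cross-node exchange per round

print("final loss:", 0.5 * np.sum((params - target) ** 2))
```

At these settings each node ships about 200 two-bit values plus their indices per round instead of 10,000 float32s, a few hundred bytes versus roughly 40 KB. The error buffer carries everything that was dropped forward into later rounds, which is why losing over 97% of the signal per exchange does not wreck convergence.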
Templar's technological achievements have attracted the attention of the mainstream AI community and the capital market:
Authoritative Recognition:
In his analysis report, Anthropic co-founder Jack Clark categorized Templar as the world's largest active decentralized training network, noting that its growth rate has exceeded industry expectations.
Jason Calacanis (host of the All-In Podcast and a well-known Silicon Valley investor) recently gave an in-depth explanation of Bittensor's mechanism on his blog and hinted that he has been buying.
Organizational Structure:
Grayscale continues to increase its holdings in TAO, making it a core holding in the decentralized AI sector.
DCG established Yuma, specifically focused on accelerating the development of the Bittensor (TAO) ecosystem, which is seen as DCG's biggest and most direct bet on decentralized AI.
Market performance:
$TAO: following Templar's announcement of the completed 72B training run, TAO surged more than 30%, showing standout strength amid a choppy BTC market.
$Templar (SN3): the protagonist itself has surged 75% in 7 days, becoming the current leader in capturing Bittensor emissions. Its market cap is still only ~$70M.
Templar's success has opened up entirely new possibilities for the Bittensor ecosystem:
Breaking through the value ceiling: Bittensor has long been dismissed as merely "incentivized air." Templar proves the protocol can produce commercially competitive productivity tools, shifting TAO's valuation logic from "narrative-driven" to "product-driven."
The potential of heterogeneous computing power: as "heterogeneous SparseLoCo" matures, consumer-grade graphics cards (such as the RTX 4090) could participate directly in training models with hundreds of billions of parameters, democratizing access to compute.
Certainty opportunities in subnets: under the dTAO mechanism, subnets like Templar, with strong technical moats and the ability to continuously ship high-performance models, have tokens with high long-term allocation value.
Templar's current market cap is ~$75M, with an FDV of ~$350M.
For context, mainstream large-model companies are currently valued as follows: OpenAI (~$840 billion), Anthropic (~$350 billion), and MiniMax (~$45 billion).
This is not to say Templar can be compared directly to these companies, but amid scarce narratives, waning attention, and fading faith in decentralization, its emergence is undoubtedly a shot in the arm for decentralized AI.
Templar has demonstrated that decentralized environments can not only store data but also produce intelligence. Covenant-72B is just the beginning: with the vertical integration of SN3 (pre-training), SN39 (compute), and SN81 (reinforcement learning), the prototype of a decentralized OpenAI running on-chain has emerged.
Since its inception, the crypto industry has seen countless narratives debunked. Once-popular ideas such as decentralized storage, decentralized compute, and the decentralized computer have all seemingly been proven hollow. It is gratifying, then, that some projects are still steadfastly moving forward on the path of decentralization, and delivering results.
Templar's success is not only a DeepSeek moment for Bittensor, but may also be a DeepSeek moment for Crypto.

