Understanding ASI-1 Mini, the Web3-native large language model
Discover QBio, a medical AI tool focused on breast density classification and transparent report generation. Upload a mammogram, and within minutes it tells you whether the breast density is category A, B, C, or D, along with a detailed report explaining its decision-making process.
Developed by Fetch and Hybrid, QBio is just an appetizer; the real star is ASI-1 Mini.
Fetch is a long-standing project. In the years when DeFi dominated the market's attention, Fetch focused on AI + Crypto, concentrating on the development and application of general-purpose multi-agent technology.
What is ASI-1 Mini
In February this year, Fetch launched the world's first Web3-native large language model (LLM), ASI-1 Mini. What does Web3-native mean? Put simply, it integrates seamlessly with the blockchain: you can not only use the AI, but also invest in, train, and own it through $FET tokens and ASI wallets.
So what exactly is the ASI-1 Mini?
It is a large language model designed for agentic AI: it can coordinate multiple AI agents and handle complex, multi-step tasks.
For example, the ASI reasoning agent behind QBio is part of ASI-1 Mini. It not only classifies breast density but also explains its decision-making process, addressing AI's "black box" problem. What's more, ASI-1 Mini needs only two GPUs to run; compared with other LLMs (such as DeepSeek, which requires 16 H100 GPUs), the cost is very low, making it practical for budget-conscious institutions.
How exactly is ASI-1 Mini innovative
The performance of ASI-1 Mini is comparable to that of leading LLMs, but at a significantly lower hardware cost. It features dynamic reasoning modes and advanced adaptive capabilities for more efficient, context-aware decision-making.
MoM and MoA
These are both acronyms, but don't worry, they're simple: Mixture of Models (MoM) and Mixture of Agents (MoA).
Imagine a team of AI experts, each focused on a different task, working together seamlessly. This not only improves efficiency but also makes the decision-making process more transparent. For example, in medical image analysis, the MoM might select one model specializing in image recognition and another specializing in text generation, while the MoA coordinates the outputs of the two so that the final report is both accurate and easy to read.
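The MoM/MoA division of labor described above can be sketched in a few lines of Python. All names here are illustrative assumptions for the pattern, not Fetch's actual API:

```python
# Illustrative sketch of the Mixture-of-Models / Mixture-of-Agents idea.
# Every name below is hypothetical; this is not Fetch's real interface.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ModelOutput:
    model_name: str
    content: str


# --- Mixture of Models (MoM): a registry of specialist models ---
def image_recognition_model(task: str) -> str:
    return f"density features extracted from '{task}'"


def text_generation_model(task: str) -> str:
    return f"readable report drafted for '{task}'"


SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "image": image_recognition_model,
    "text": text_generation_model,
}


def route(task: str, needed: List[str]) -> List[ModelOutput]:
    """MoM router: invoke only the specialists the task needs."""
    return [ModelOutput(name, SPECIALISTS[name](task)) for name in needed]


# --- Mixture of Agents (MoA): coordinate the specialists' outputs ---
def coordinate(outputs: List[ModelOutput]) -> str:
    """MoA coordinator: merge specialist outputs into one labeled report."""
    return " | ".join(f"[{o.model_name}] {o.content}" for o in outputs)


report = coordinate(route("mammogram_001", ["image", "text"]))
print(report)
```

The key design point is the separation of concerns: the router picks *which* models run, while the coordinator decides *how* their outputs combine, which is also what makes each step of the pipeline inspectable.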
Transparency and extensibility
Traditional LLMs tend to be "black boxes": you ask a question, they give an answer, but as to why they answered that way, sorry, no comment. ASI-1 Mini is different. Through continuous multi-step reasoning, it can tell you "I chose this answer for these reasons," which is crucial in fields like medicine.
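The idea of an answer that carries its own reasoning can be illustrated with a toy classifier that records why it made each decision. The structure, thresholds, and feature names below are invented for illustration, not ASI-1 Mini's actual output format:

```python
# Toy sketch of multi-step reasoning with an explanation trace.
# Thresholds and feature names are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ReasonedAnswer:
    answer: str = ""
    steps: List[str] = field(default_factory=list)  # why each decision was made


def classify_with_trace(features: dict) -> ReasonedAnswer:
    """Toy density classifier that logs the reason behind its answer."""
    trace = ReasonedAnswer()
    ratio = features.get("fibroglandular_ratio", 0.0)
    if ratio > 0.75:
        trace.steps.append("fibroglandular ratio > 0.75 -> extremely dense")
        trace.answer = "D"
    elif ratio > 0.5:
        trace.steps.append("fibroglandular ratio > 0.5 -> heterogeneously dense")
        trace.answer = "C"
    else:
        trace.steps.append("low fibroglandular ratio -> mostly fatty tissue")
        trace.answer = "A"
    return trace


result = classify_with_trace({"fibroglandular_ratio": 0.6})
print(result.answer, result.steps)  # -> C, with the matching reasoning step
```

Returning the trace alongside the answer is what lets a clinician audit the decision instead of taking it on faith.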
ASI-1 Mini is slated to support a context window of up to 10 million tokens and multimodal capabilities (e.g., image and video processing), with a future Cortex series of models focused on cutting-edge fields such as robotics and biotechnology.
Hardware efficiency
While other LLMs come with high hardware costs, ASI-1 Mini requires only two GPUs to run. This means even a small clinic can afford it, without a million-dollar data center.
Why is it so efficient? Because ASI-1 Mini is designed around a "less is more" philosophy: it optimizes the algorithms and model structure to make the most of limited computing resources. Other LLMs, by contrast, tend to chase ever-larger models at the cost of significant resource consumption.
Community-driven
Unlike other large language models, ASI-1 Mini is community-driven through decentralized training. It is a tiered freemium product for $FET holders, who can connect a Web3 wallet to unlock full functionality. The more FET tokens you hold in your wallet, the more of the model's capabilities you can explore.
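A tiered freemium model gated by wallet balance is straightforward to sketch. The tier names and FET thresholds below are invented for illustration; Fetch has not published these exact numbers:

```python
# Hypothetical sketch of tiered access gated by $FET balance.
# Tier names and thresholds are invented, not Fetch's actual pricing.

from typing import List, Tuple

TIERS: List[Tuple[int, str]] = [  # (minimum FET balance, tier name), highest first
    (10_000, "pro"),
    (1_000, "plus"),
    (0, "free"),
]


def access_tier(fet_balance: float) -> str:
    """Return the highest tier a wallet's FET balance unlocks."""
    for minimum, name in TIERS:
        if fet_balance >= minimum:
            return name
    return "free"


print(access_tier(500))     # -> free
print(access_tier(2_500))   # -> plus
print(access_tier(50_000))  # -> pro
```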
This community-driven model works like crowdfunding, except what is being funded is the training and validation of AI. High tech is no longer reserved for the elite; everyone can participate.
Today, when LLMs are already fairly mature, why build ASI-1 Mini at all? The answer is simple: it fills the gap where Web3 converges with AI.
At present, LLMs such as ChatGPT and Grok mainly serve centralized environments; ASI-1 Mini is the first LLM designed for decentralized ecosystems. It not only makes AI more transparent and efficient, but also lets community members benefit directly from AI's growth.
The emergence of ASI-1 Mini marks AI's transformation from "black box" to "transparent," from "centralized" to "decentralized," and from "tool" to "asset." It can play a role not only in medicine (as with QBio), but also in finance, law, scientific research, and many other fields.
This month, Fetch partnered with Rivalz to integrate the ASI-1 Mini into Rivalz's Agentic Data Coordination System (ADCS) for on-chain AI inference. With this collaboration, decentralized applications can access advanced AI inference capabilities directly on the blockchain.
Traditional blockchain environments are resource-constrained: smart contracts can only handle lightweight tasks, typically fetching simple data (such as prices) through oracles, and cannot directly run complex AI models. ADCS solves this problem: the heavy computation for AI inference happens off-chain, and the results are safely returned to the blockchain, preserving decentralization and trust.
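The off-chain inference pattern described above follows a request/fulfill flow familiar from oracle designs. The sketch below is a toy model of that flow; the names and interfaces are assumptions, not the real ADCS contract API:

```python
# Minimal sketch of the off-chain AI inference pattern ADCS describes.
# Names and flow are illustrative assumptions, not the real ADCS interface.

import hashlib
import json


def run_inference_offchain(prompt: str) -> str:
    """Stand-in for heavy AI inference executed off-chain."""
    return f"inference result for: {prompt}"


def attest(result: str) -> str:
    """Stand-in for attesting a result before posting it on-chain."""
    return hashlib.sha256(result.encode()).hexdigest()


class Chain:
    """Toy on-chain store: contracts only ever see small payloads."""

    def __init__(self) -> None:
        self.storage: dict = {}

    def fulfill(self, request_id: str, payload: dict) -> None:
        self.storage[request_id] = payload


chain = Chain()

# 1. A smart contract emits an inference request; an off-chain node picks it up.
request_id, prompt = "req-1", "classify breast density for scan 42"
# 2. The heavy model runs off-chain, where compute is cheap and unconstrained.
result = run_inference_offchain(prompt)
# 3. Only the compact result plus an attestation hash return on-chain.
chain.fulfill(request_id, {"result": result, "attestation": attest(result)})
print(json.dumps(chain.storage["req-1"], indent=2))
```

The design choice to keep only the result and its hash on-chain is what reconciles heavyweight AI inference with the blockchain's tight gas and storage limits.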