
From understanding Skills to knowing how to build a Crypto Research Skill

2026/03/10 20:20
23 min read

Author: @BlazingKevin_, Researcher at asset management firm Blockbooster

1. The background and evolution of Agent Skills

In 2025, the AI Agent field stood at a critical juncture, transitioning from "technical concept" to "engineering implementation." In this process, Anthropic's exploration of capability encapsulation unexpectedly catalyzed an industry-wide paradigm shift.


On October 16, 2025, Anthropic officially launched Agent Skills. Initially, the official positioning of the feature was restrained: it was regarded merely as an auxiliary module for improving Claude's performance on specific vertical tasks (such as complex code logic and specialized data analysis).

However, market and developer feedback far exceeded expectations. It was quickly discovered that this "modular capability" design demonstrated extremely high decoupling and flexibility in actual engineering. It not only reduced the redundancy of Prompt tuning but also significantly improved the stability of Agents performing specific tasks. This experience quickly triggered a chain reaction in the developer community. Within a short period, leading productivity tools and integrated development environments (IDEs), including VS Code, Codex, and Cursor, followed suit, successively completing underlying support for the Agent Skill architecture.

Faced with the spontaneous expansion of the ecosystem, Anthropic recognized the mechanism's universal underlying value. On December 18, 2025, it made a landmark decision: officially releasing Agent Skill as an open standard.

Following this, on January 29, 2026, the official detailed user manual for Skill was released, completely breaking down the technical barriers to cross-platform and cross-product reuse at the protocol level. This series of actions signifies that Agent Skill has completely shed its label as a "Claude exclusive accessory" and has officially evolved into a universal underlying design pattern in the entire AI Agent field.

At this point, a question arises: what core pain points at the underlying engineering level does Agent Skill, embraced by major companies and core developers alike, actually solve? And what are the essential differences and collaborative relationships between it and the currently popular MCP?

To thoroughly clarify these issues and ultimately apply them to the practical construction of investment research in the crypto industry, this article will explore the following topics step by step:

  • Conceptual analysis: The essence of Agent Skills and their basic architecture.
  • Basic workflow: The underlying operating logic and execution flow.
  • Advanced mechanisms: An in-depth analysis of the two advanced features, Reference and Script.
  • Practical case study: The essential differences between Agent Skill and MCP, and their combined application in Crypto investment research scenarios.

2. What is Agent Skill and its basic structure?

What exactly is an Agent Skill? In the simplest terms, it's essentially a "personalized instruction manual" that the large model can consult at any time.

When using AI in our daily lives, we often encounter a pain point: every time we start a new conversation, we have to re-type the same long instructions. Agent Skill was created to solve this problem.

For a practical example: Suppose you want to create an "intelligent customer service" agent. You can clearly write down the rules in your Skill: "When encountering a user complaint, the first step must be to calm them down, and you must never make any promises of compensation." Another example: If you frequently need to create "meeting summaries," you can directly define a template in your Skill: "Each time you output a meeting summary, you must strictly follow the format of the three sections: 'Attendees,' 'Core Issues,' and 'Final Decisions.'"

With this "instruction manual," you won't need to repeat that long string of instructions in every conversation. When the large model receives a task, it will automatically consult the corresponding Skill and immediately know which standard to use to perform the task.

Of course, "documentation" is just a simplified analogy for easier understanding. In reality, Agent Skill can do far more than simply provide formatting guidelines; we will break down its killer advanced features in detail in later chapters. But in the initial stages, you can think of it as an efficient task instruction manual.

Next, we'll use the familiar scenario of a "meeting summary" to see how to create an Agent Skill. The entire process doesn't require complex programming knowledge.

Based on the current conventions of mainstream tools (such as Claude Code), we need to find (or create) a folder called .claude/skills in the user's home directory. This is the "headquarters" where all skills are stored.

First, create a new folder in this directory, named exactly after your Agent Skill. Second, create a text file named skill.md inside the folder you just created.

Every Agent Skill must have a skill.md file. Its purpose is to tell the AI: who I am, what I can do, and how to work according to my instructions. Opening this file, you'll find it clearly divided into two parts:

At the very beginning of the file, usually enclosed by a pair of --- markers, is the metadata area, which contains only two core attributes: name and description.

  • name: The name of the skill; it must exactly match the name of the enclosing folder.
  • description: This is an extremely important field. It explains the skill's specific purpose to the large model. The AI continuously scans all skill descriptions in the background to determine which skill should be used to answer the user's question, so writing an accurate and comprehensive description is a prerequisite for your skill being activated reliably.

The rest of the text, below the closing ---, consists of the specific rules written for the AI; officially, these are called the "instructions." This is where you get creative: describe in detail the logic the model must follow. In the meeting-summary example, you could state here in plain language: "The list of attendees, the topics discussed, and the final decisions must be extracted."

Once you've completed these steps, a simple yet highly practical Agent Skill will be created.
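As a concrete sketch, the skill.md for this meeting-summary example might look like the following. The folder name, description wording, and rules are illustrative, not an official template:

```markdown
---
name: meeting-summary
description: Summarizes meeting transcripts into a structured report. Use this skill whenever the user asks to summarize, recap, or write minutes for a meeting.
---

# Instructions

When summarizing a meeting, always output exactly three sections, in this order:

1. Attendees — list every participant mentioned in the transcript.
2. Core Issues — the topics that were actually discussed, one bullet each.
3. Final Decisions — only decisions that were explicitly agreed on; never infer
   decisions that were not stated.
```

Note how the description answers "when should this skill be used," not just "what it does" — that phrasing is what the model matches against when routing requests.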

However, a truly useful skill often begins with meticulous upfront design. Clearly defining your goals, scope, and success criteria before typing your first line will make your build process much more efficient.

The first step in building a skill is not to think about "what tricks can I get the AI to do," but to ask yourself: "What repetitive problems do I need to solve in my daily work?" It is recommended to first define two to three specific scenarios the skill should cover.

Secondly, define the criteria for success. How do you know if the skill you've written is good or not? Before you start, set several measurable standards for it. For example, a quantitative standard could be "whether the processing speed has increased," while a qualitative standard could be "whether the meeting decisions it extracts are accurate enough and without omissions every time."

3. Basic operational workflow of Agent Skill

Having learned about the basics of Agent Skill, we can't help but ask: how exactly does this "documentation" work in actual operation?

If you've recently used a product like Manus AI, you've likely experienced this scenario: when you pose a specific question, the AI doesn't immediately launch into a long-winded answer or start hallucinating. Instead, it astutely recognizes that "this matter falls under the jurisdiction of a specific Agent Skill." Then a prompt appears on screen asking whether you allow that Skill to be invoked.

Once you click "Agree," the AI behaves like a completely different person, outputting results exactly according to the preset rules.

Behind this seemingly simple "apply-agree-execute" interaction lies a highly sophisticated underlying workflow. To fully explain this mechanism, we need to first identify the "three core roles" involved in the interaction throughout the process:

  1. User: The person who initiates the task request.
  2. Client-side tool (such as Claude Code): The intermediary that coordinates and manages the process.
  3. Large Language Model: The "brain" responsible for understanding intent and generating the final result.

When we input a request into the system (e.g., "Please summarize this morning's project meeting"), the following four steps of precise collaboration occur between these three roles:

Step 1: Lightweight Scan (Transferring Metadata)

After a user enters a request, the client tool (Claude Code) doesn't immediately send all the documentation to the large model. Instead, it packages the user's request together with the "names" and "descriptions" of all Agent Skills in the current system (the metadata layer mentioned in the previous chapter) and sends that to the large model. Even if you have installed a dozen or several dozen Skills, the large model receives only a "lightweight directory." This design greatly conserves the model's attention and avoids mutual interference between pieces of information.

Step 2: Precise Intent Matching

After receiving the user's request and the "Skill Directory," the large model performs rapid semantic analysis. It discovers that the user's request is to "summarize the meeting," and that the directory contains a Skill called "Meeting Summary Assistant" whose description matches the task perfectly. At this point, the large model tells the client tool: "I found that this task can be solved with 'Meeting Summary Assistant.'"

Step 3: Loading the Complete Instructions on Demand

After receiving this feedback, the client tool (Claude Code) enters the dedicated folder of the "Meeting Summary Assistant" and reads the complete skill.md text. Note that this is a crucial design point: only now is the complete instruction content read, and the system reads only this one selected Skill. The unselected Skills remain quietly in the directory, consuming no resources.

Step 4: Strict Execution and Output

Finally, the client tool sends the user's original request together with the complete skill.md content of the Meeting Summary Assistant to the large model. This time, the large model is no longer making choices but entering execution mode. It strictly follows the rules defined in skill.md (e.g., it must extract attendees, core topics, and final decisions), generates a highly structured response, and hands it to the client tool to display to the user.

4. Core Mechanism 1: On-Demand Loading and Reference

The workflow in the previous chapter introduced the first core underlying mechanism of Agent Skill: on-demand loading.

Although the names and descriptions of all skills are always visible to the large model, the specific instructions are only actually retrieved into the model's context after the skill is precisely hit.

This significantly conserves valuable token resources. Imagine that even if you deploy a dozen large-scale Skills simultaneously, such as "viral copywriting," "meeting summaries," and "on-chain data analysis," the model initially only needs to perform a very low-cost "directory search." Only after a target is selected will the system feed the corresponding skill.md file to the model. This "on-demand loading" is the first layer of secret to keeping Agent Skills lightweight and efficient.

However, for advanced users who pursue ultimate efficiency, simply achieving the first level of on-demand loading is not enough.

As our business deepens, we often want our skills to become smarter. Take the "Meeting Summary Assistant" as an example. We want it to not only simply summarize the topics but also provide incremental insights: when a meeting decides to spend money, it can directly indicate in the summary whether it complies with the group's financial compliance; when external collaborations are involved, it can automatically alert to potential legal risks. This way, when the team reviews the summary, they can instantly spot key compliance warnings, eliminating the tedious process of double-checking regulations.

However, this creates a serious engineering contradiction: for the Skill to possess this capability, all the lengthy "Financial Regulations" and "Legal Provisions" would have to be crammed into the skill.md file, making the core instruction file incredibly bloated. Even for a purely technical morning meeting, the model would be forced to load tens of thousands of words of financial and legal boilerplate, which both wastes tokens and easily causes the model to lose focus.

So, could we implement an additional layer of "on-demand within on-demand" on top of on-demand loading? For example, could the system only show the model financial regulations when the meeting actually touches on the topic of "money"?

The answer is yes. The Reference mechanism in the Agent Skill system was created precisely for this purpose.

The essence of a Reference is a conditionally triggered external knowledge base. Let's see how it elegantly solves the pain points above:

  1. Create an external reference file: First, add a separate file (a Reference, in technical terms) to this Skill's directory. Name it 集团财务手册.md ("Group Financial Handbook"), and have it detail the reimbursement standards (e.g., an accommodation allowance of 500 yuan/night, meal expenses of 300 yuan/person/day, and so on).
  2. Set trigger conditions: Next, return to the core skill.md file and add a dedicated "Financial Reminder Rule." You can define it explicitly in natural language: "Trigger only when the meeting content mentions words such as money, budget, procurement, or expenses. After triggering, the 集团财务手册.md file must be read. Based on its contents, indicate whether any amount in the meeting decisions exceeds the limit and specify the corresponding approver."
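Concretely, the rule added to skill.md could read like this. The heading and wording are illustrative; only the 集团财务手册.md filename comes from the example above:

```markdown
## Financial Reminder Rule

Trigger condition: the meeting content mentions money, budget, procurement,
or expenses.

When triggered:
1. Read the file 集团财务手册.md in this skill's folder.
2. Check every amount in the meeting decisions against the limits in that file.
3. In the summary, flag any amount that exceeds its limit and name the
   corresponding approver.
```

Because the rule names both the trigger words and the exact file to read, the model never has to guess when the handbook is relevant.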

Once the setup is complete, a brilliant, dynamic collaboration begins when we review the budget allocation in our next meeting:

  1. The client tool scans and requests your use of the "Meeting Summary Assistant" Skill (completing the first layer of on-demand loading).
  2. While reading the meeting minutes, the model notices the word "budget" and immediately triggers the rule we embedded in skill.md.
  3. At this point, the system sends you a second request: "Do you allow reading 集团财务手册.md?" (completing the second layer of on-demand loading: the dynamically triggered Reference).
  4. Once authorized, the model cross-references the meeting content with dynamically introduced financial standards, ultimately outputting a high-quality summary that includes not only "participants, topics, and decisions," but also "financial compliance warnings."

Please remember the core characteristic of Reference: it is strictly condition-gated. Conversely, if today's meeting is a technical debriefing about code logic with nothing to do with money, then the 集团财务手册.md file will lie quietly on the hard drive, never consuming a single token of compute.

5. Script and Progressive Disclosure Mechanism

Having discussed the Reference mechanism for solving information overload, let's move on to another killer feature of Agent Skill: code execution (Script).

For a mature agent, simply "searching for information" and "writing summaries" is not enough; true automation is achieved when it can directly get the job done. This is where scripts come in.

Let's continue using our "Meeting Summary Assistant" as an example. After the summary is written, it usually needs to be synchronized to the company's internal system. To achieve this final step, we create a new Python script named upload.py in the Skill folder, which contains the upload logic for connecting to the company server.

Next, we return to the core skill.md file and add an explicit instruction: "When a user mentions words such as 'upload,' 'sync,' or 'send to server,' you must run the upload.py script to push the generated summary content to the server."
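A minimal sketch of what such an upload.py might contain follows. The endpoint URL, environment variable, and payload fields are all hypothetical placeholders for whatever your internal system actually expects:

```python
"""upload.py — push a generated meeting summary to the company server.

Illustrative sketch only: SUMMARY_API_URL and the JSON envelope are
invented for this example, not part of any real system.
"""
import json
import os
import sys
import urllib.request

API_URL = os.environ.get("SUMMARY_API_URL",
                         "https://intranet.example.com/api/summaries")

def build_payload(title: str, summary_md: str) -> bytes:
    """Wrap the summary in the JSON envelope the (hypothetical) server expects."""
    return json.dumps(
        {"title": title, "format": "markdown", "body": summary_md}
    ).encode("utf-8")

def upload(title: str, summary_md: str) -> int:
    """POST the summary; returns the HTTP status code."""
    req = urllib.request.Request(
        API_URL,
        data=build_payload(title, summary_md),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The model never reads this code — it only runs the script.
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(upload(sys.argv[1], sys.stdin.read()))
```

The agent invokes this file as a whole; the separation of build_payload from upload just makes the envelope easy to test without a live server.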

Then, when you say to the AI, "The summary is well written, please sync it to the server":

The client tool will immediately ask your permission to execute the upload.py file. But note a crucial piece of underlying logic: during this process, the AI does not "read" the contents of the code; it merely "executes" it.

This means that even if your Python script contains 10,000 lines of extremely complex business logic, its consumption of the large model's context is almost zero. The AI treats the script as a "black box" tool: it only cares about how to start the tool and whether it ultimately succeeds, not how the box works inside.

This leads to the fundamental difference in mechanism between the two advanced features, Reference and Script:

  • Reference (read): It "moves" the content of external files into the model's brain (context) as a reference, and therefore consumes Tokens.
  • Script (Run): It is triggered and run directly in the external environment. As long as you clearly define the running method, it will not occupy the model's context.

Of course, here's a tip to avoid a common pitfall: when writing skill.md, you must state the script's trigger conditions and execution commands with absolute clarity. If the AI encounters ambiguous instructions and doesn't know how to proceed, it may fall back on reading the code itself to find clues, and that reading will cost you tokens. The ironclad rule for writing skills is therefore: define the rules as clearly and comprehensively as possible.

At this point, we've actually pieced together all the core components of Agent Skill. It's time to pause and summarize from a holistic perspective.

If you carefully review the entire loading process, you'll find that Agent Skill's design philosophy is an extremely sophisticated progressive disclosure mechanism. To maximize computational efficiency while maintaining high performance, the system is strictly divided into three layers, with the triggering conditions for each layer becoming progressively tighter:

  • First layer: the metadata layer (always loaded). This layer stores the name and description of every Agent Skill. It acts as a "resident directory" for the large model and is extremely lightweight; the model glances at it on every request to complete the initial routing match.
  • Second layer: the instruction layer (loaded on demand). This corresponds to the specific rules in skill.md. Only when the first layer confirms which skill owns the task will the AI "open" this layer and load the specific rules into its context.
  • Third layer: the resource layer (on-demand within on-demand). This is the deepest and largest layer. It contains three kinds of components:
    • References: for example, 集团财务手册.md is read only when a specific condition is triggered in the conversation (such as mentioning "money").
    • Scripts, such as upload.py , are only executed when a specific action (such as "uploading") is required.
    • Assets include things like company logos, custom fonts, and specific PDF templates needed to generate research reports. These are only used when the final output is generated.
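The three layers can be pictured as one nested data structure. This is only a mental model for the reader, not a format any runtime actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSkill:
    # Layer 1 — metadata: always loaded, scanned on every request
    name: str
    description: str
    # Layer 2 — instructions: the body of skill.md, loaded only on a match
    instructions: str = ""
    # Layer 3 — resources: touched only when a specific rule or action fires
    references: dict[str, str] = field(default_factory=dict)  # file -> trigger condition
    scripts: dict[str, str] = field(default_factory=dict)     # file -> trigger action
    assets: list[str] = field(default_factory=list)           # logos, fonts, templates

summary_skill = AgentSkill(
    name="meeting-summary",
    description="Summarize meetings into attendees / issues / decisions",
    instructions="Extract attendees, core issues, and final decisions.",
    references={"集团财务手册.md": "meeting mentions money/budget/procurement"},
    scripts={"upload.py": "user asks to upload/sync/send to server"},
    assets=["company-logo.png"],
)
```

Reading top to bottom, each field is strictly more expensive — and strictly less often loaded — than the one above it, which is the whole point of progressive disclosure.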

6. The essential differences between Agent Skill and MCP, and their practical combination

Having discussed the advanced uses of Agent Skills, many readers familiar with underlying AI protocols might have a strong sense of déjà vu: the Script mechanism of Agent Skills seems remarkably similar to the recently popular MCP (Model Context Protocol). Essentially, aren't they both about enabling large models to connect to and manipulate the external world?

Since there is functional overlap, which one should we choose when building a Crypto Research workflow?

On this issue, Anthropic has drawn a clear line between the roles of the two, and the distinction hits the nail on the head.

MCP is essentially a "data pipeline," responsible for supplying external information to large models in a standardized way (such as querying the latest block height on a chain, pulling real-time candlestick charts from exchanges, or reading local research PDFs). Agent Skill, by contrast, is essentially a set of standard operating procedures (SOPs), responsible for regulating how the large model should work once it has that data (such as stipulating that research reports must include a token economics model and that output conclusions must carry risk warnings).

At this point, some tech enthusiasts might object: "Since Agent Skill can also run Python code, can't I just write some logic in the script to connect to the database or call the API? Agent Skill can completely do the work of MCP!"

Indeed, in engineering terms, Agent Skill can also pull data. But doing so is awkward and unprofessional.

This "lack of professionalism" manifests itself in two fatal dimensions:

  1. Operation Mechanism and State Maintenance : Agent Skill's scripts are "stateless," executing independently each time they are triggered and then disappearing after completion. MCP, on the other hand, is an independently running long-running service that can maintain persistent connections to external data sources (such as WebSocket long connections, which will be mentioned below), something a simple script simply cannot do.
  2. Security and stability : Allowing AI to run a Python script with the highest system privileges every time poses a significant security risk; while MCP provides a standardized isolation environment and authentication mechanism.

Therefore, when building an advanced Crypto Research system, the strongest approach is not to pick one of the two, but to combine them into a powerful pairing: MCP supplies the water; Skill brews the tea.

To give everyone a direct feel for the power of this combination, we'll take opennews-mcp, built by Web3 developer Cryptoxiao, as an example and break down how to use API-enhanced Skills to create a fully automated crypto news intelligence center.

The core logic of this type of Skill is to use the Skill's instruction orchestration to encapsulate the discrete API capabilities provided by MCP into an intelligent agent oriented toward the final investment-research goal.

This system endows AI with capabilities in four core modules:

Module 1: News Source Discovery

This is the entry point for the AI to understand the limits of the tool's capabilities. Through the tools in discovery.py, the AI can dynamically learn which channels it can obtain information from.

| Utility function (Python) | SKILL.md description | Code-level capability |
| --- | --- | --- |
| get_news_sources | Get all available news source categories | Calls the underlying api.get_engine_tree() and returns a complete tree of all news engines (such as news, listing, onchain) and their specific sources (such as Bloomberg, Binance), allowing the AI to show the user which news sources are available. |
| list_news_types | List all available news type codes | Also calls api.get_engine_tree(), but flattens the tree into a simple list, making it easy for the AI to pass a precise news_type parameter when calling other tools. |

Module Two: Multi-dimensional News Retrieval

This is the core query module, implemented by news.py , which provides a variety of news retrieval methods, ranging from simple to complex.

| Utility function (Python) | SKILL.md description | Code-level capability |
| --- | --- | --- |
| get_latest_news | Get the most recent crypto news | Calls api.search_news() directly with no filters, retrieving the raw "fire hose" of news items. |
| search_news | Search crypto news by keyword | Accepts a keyword parameter and calls api.search_news(query=keyword) for full-text keyword search. |
| search_news_by_coin | Search news related to a specific coin | Accepts a coin parameter (such as "BTC") and calls api.search_news(coins=[coin]), the most common query-by-currency path. |
| get_news_by_source | Get news from a specific source | Accepts engine_type and news_type and calls api.search_news(engine_types={...}) for precise filtering by news source. |
| search_news_advanced | Advanced news search with multiple filters | A "super tool" that combines parameters such as coins, keyword, engine_types, and has_coin to construct complex api.search_news() requests for multi-dimensional cross-filtering. |

Module 3: AI-Enabled Analysis and Insights

This set of tools leverages AI analysis results already computed by the 6551.io backend, allowing the AI Agent to directly query "opinions" rather than just "facts."

| Utility function (Python) | SKILL.md description | Code-level capability |
| --- | --- | --- |
| get_high_score_news | Get highly rated news articles | Accepts a min_score parameter; first retrieves a batch of the latest news, then filters inside the MCP server, returning only items whose aiRating.score meets or exceeds the threshold, sorted in descending order of score. |
| get_news_by_signal | Get news filtered by trading signal | Accepts a signal parameter (long, short, neutral) and filters the retrieved news on the server, returning only items whose aiRating.signal matches. |

Key insight: when the AI Agent invokes these tools, it is unaware that the MCP server internally performs a two-step "get-filter" process. To the AI, it simply calls a magical tool that directly returns "highly rated news" or "bullish news," which greatly simplifies its workflow.
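The hidden "get-filter" pattern can be sketched in a few lines. The aiRating.score field name follows the table above; the fetch function and fake feed are illustrative:

```python
def get_high_score_news(fetch_latest, min_score: float = 7.0, limit: int = 50):
    """Step 1: pull a batch of recent news. Step 2: filter and sort inside the
    server, so the agent only ever sees the already-curated result."""
    batch = fetch_latest(limit=limit)
    scored = [n for n in batch
              if n.get("aiRating", {}).get("score", 0) >= min_score]
    return sorted(scored, key=lambda n: n["aiRating"]["score"], reverse=True)

# Fake feed for demonstration purposes.
feed = [
    {"title": "noise",            "aiRating": {"score": 3.1}},
    {"title": "ZK breakthrough",  "aiRating": {"score": 9.2}},
    {"title": "listing rumor",    "aiRating": {"score": 7.5}},
]
top = get_high_score_news(lambda limit: feed, min_score=7.0)
```

Doing the filtering server-side means low-scoring items never enter the model's context at all — the same token-saving logic as progressive disclosure, applied to data instead of instructions.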

Module 4: Real-time News Stream

This is opennews-mcp's "killer" capability, implemented by realtime.py , which gives AI the ability to listen for real-time events.

| Utility function (Python) | SKILL.md description | Code-level capability |
| --- | --- | --- |
| subscribe_latest_news | Subscribe to real-time news updates | ws.subscribe_latest() establishes a long-lived WebSocket connection and subscribes to specific topics based on parameters such as coins and engine_types, then continuously receives push notifications for wait_seconds seconds before returning all collected news at once. |

Key insight: This functionality cannot be achieved with a pure Skill because it requires maintaining a stateful, persistent network connection. It can only be done through a dedicated MCP server.

Once these MCP-driven tools are written into the Agent Skill's command flow, your AI officially transforms from a "general chat assistant" into a "Wall Street-level Web3 analyst." It can fully automate complex workflows that previously required researchers to spend hours on:

Workflow Example 1: Rapid Due Diligence (DD) for New Currencies

  1. Command issued : The user enters "Conduct an in-depth investigation into the newly launched @NewCryptoCoin project".
  2. Basic assessment : The agent automatically calls opentwitter.get_twitter_user to retrieve official Twitter data.
  3. Endorsement cross-validation: By calling opentwitter.get_twitter_kol_followers, the agent analyzes which top KOLs or VCs have quietly followed the project.
  4. Full-network public opinion search : Use opennews.search_news_by_coin to retrieve media reports and public relations actions.
  5. Signal-to-noise ratio filtering : Call opennews.get_high_score_news to remove worthless news flashes and only read high-scoring long articles.
  6. Output Research Report : Based on the preset research report format in Skill, Agent outputs a standard due diligence report that includes "fundamentals, community asset structure, media attention, and AI comprehensive rating".

Workflow Example 2: Real-time Event-Driven Transaction Signal Discovery

  1. Instruction issued : The user inputs "Help me monitor the market around the clock and look for sudden trading opportunities in the 'Zero Knowledge Proof (ZK)' sector."
  2. Deploy Sentinels : The Agent calls opennews.subscribe_latest_news to establish a WebSocket long connection, precisely listening to news streams whose content contains "ZK" or "Zero-Knowledge Proof" and is associated with a specific token.
  3. Capture positive news: When the system captures high-weight positive news about a project (say, SomeCoin) achieving a ZK technology breakthrough, and the sentiment indicator is judged Long, the agent immediately breaks out of its dormant listening state.
  4. Community sentiment resonance test : The agent calls Twitter search tools in milliseconds to check whether multiple core KOLs in the ZK field are simultaneously amplifying the event.
  5. Alert Trigger : If the conditions of "media premiere + community resonance" are met, the Agent will immediately push a high-certainty Alpha trading alert to the user.

Thus, by standardizing behavioral logic through Agent Skills and connecting the data arteries through MCP, a highly automated, professional Crypto Research workflow comes full circle.
