MemryX Unveils MX4 Roadmap: Enabling Distributed, Asynchronous Dataflow for Highly Efficient Data Center AI

ANN ARBOR, Mich., Dec. 26, 2025 /PRNewswire/ — MemryX Inc., a company delivering production AI inference acceleration, today announced its strategic roadmap for the MX4. The next-generation accelerator is engineered to scale the company’s “at-memory” dataflow architecture from edge deployments into the data center, leveraging 3D hybrid-bonded memory to eliminate the industry’s most pressing bottleneck: the “memory wall.”

MemryX is currently in production with its MX3 silicon, delivering >20× better performance per watt than mainstream GPUs for targeted AI inference applications. With MX4, MemryX is extending that production-proven foundation to address data center workloads increasingly constrained not by compute, but by memory capacity, bandwidth, and energy efficiency.

MemryX has now signed an agreement with a next-generation 3D memory partner to execute a dedicated 2026 test chip program, validating a targeted ~5µm-class hybrid-bonded interface and direct-to-tile memory integration. The partner is not disclosed at this time.

The announcement comes as the semiconductor industry increasingly prioritizes deterministic inference architectures for the next era of AI processing, a shift reinforced by recent multibillion-dollar licensing and investment activity across AI hardware, such as Nvidia’s $20B deal with Groq, which underscores the strategic value of efficient inference solutions. While the first generation of dataflow solutions proved the efficiency of 2D SRAM, MemryX is moving into the third dimension to address the power, cost, and complexity constraints of frontier AI workloads.

Software Continuity: Leveraging the MX3 Compiler Foundation

MemryX plans to leverage its mature, production-proven MX3 software stack — including its compiler and runtime — as the foundation for MX4. While MX4 introduces new capabilities to support larger memory footprints and data center-scale configurations, the roadmap is designed to preserve key elements of the MX3 programming model and toolchain to accelerate adoption and shorten time-to-deployment for existing and new customers.
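
For context, the MX3 toolchain follows a compile-once, then-deploy pattern: a trained model is compiled offline into a dataflow program (DFP) that the runtime streams data through. The sketch below illustrates that pattern only; the `NeuralCompiler` and `AsyncAccl` names, their arguments, and the end-of-stream convention are assumptions based on the MX3-era Python SDK described on the MemryX developer hub, and no MX4 API has been published, so treat this as an illustrative sketch rather than a definitive interface.

```python
# Illustrative only: class, method, and argument names are assumed from the
# MX3-era MemryX Python SDK (developer.memryx.com) and may differ in the
# shipping release; nothing here describes a published MX4 API.
import numpy as np
from memryx import NeuralCompiler, AsyncAccl

# 1. Compile a trained model (ONNX here, as an example) into a DFP.
nc = NeuralCompiler(models="mobilenet_v2.onnx", verbose=1)
dfp = nc.run()  # binary dataflow program the accelerator executes

# 2. Load the DFP and stream inputs/outputs asynchronously.
accl = AsyncAccl(dfp)
frames = iter(np.random.rand(8, 1, 224, 224, 3).astype(np.float32))

def send_frame():
    # Producer callback: return the next input tensor, or None to end the stream
    # (assumed convention).
    return next(frames, None)

def receive_result(*outputs):
    # Consumer callback: invoked as each output becomes available.
    print("top-1 class:", int(np.argmax(outputs[0])))

accl.connect_input(send_frame)
accl.connect_output(receive_result)
accl.wait()  # block until all queued inferences have drained
```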

Beyond LLMs: Powering Frontier Inference

While Large Language Models (LLMs) remain a priority, the data center is rapidly evolving toward Large Action Models (LAMs), high-resolution multimodal vision, and real-time recommendation engines. These “frontier workloads” require massive memory capacity and predictable throughput that traditional 2.5D HBM-based architectures struggle to provide efficiently.

The MX4 addresses this by physically bonding high-bandwidth memory directly to compute tiles, shifting the focus from data movement back to high-efficiency computation.
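
To put the capacity requirement in perspective, a back-of-envelope calculation (generic arithmetic, not a MemryX specification) shows why weight storage alone approaches the multi-chip, >1TB configurations described in the roadmap below:

```python
# Back-of-envelope only: why frontier-model inference is capacity bound.
# Generic arithmetic, not MemryX specifications or benchmarks.
def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate storage for model weights alone (excludes KV cache and activations)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

for params in (70, 400, 1000):      # billions of parameters
    for bits in (16, 8, 4):         # common inference precisions
        print(f"{params}B params @ {bits}-bit ≈ "
              f"{weight_footprint_gb(params, bits):,.0f} GB")
```

At 8-bit precision, a trillion-parameter model already needs roughly 1 TB for weights alone, before any working memory for context or activations.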

The Asynchronous Advantage: Scalability Without Bottlenecks

The MX4 represents a fundamental departure from synchronous chip designs. Many current accelerators rely on a global synchronous clock, which can introduce clock skew and thermal challenges as designs scale using 3D stacks.

Like the MX3, the MX4 utilizes a data-driven producer/consumer flow-control model and avoids the centralized memory bottlenecks common in traditional architectures by enabling direct interfaces from 3D memory to compute tiles. However, rather than relying on 2D embedded SRAM as the MX3 does, the MX4 connects compute tiles directly to 3D memories without a single shared controller.

  • Asynchronous Scaling: Tiles operate independently, processing only when data is available and downstream consumers are ready. This naturally manages backpressure and reduces the switching overhead and clocking complexities inherent in synchronous architectures (a conceptual sketch of this flow-control pattern follows this list).
  • Direct-to-Tile 3D Interface: By targeting a ~5µm-class hybrid bonding pitch, MX4 enables a distributed vertical interconnect in which individual compute engines access memory layers directly—without relying on a single shared memory controller used by today’s HBM-based designs.
  • Technology Agnostic: The architecture is designed to support multiple 3D direct-to-memory formats, including today’s stacked DRAM and emerging FeRAM-class technologies.
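
The producer/consumer flow control described above can be pictured with a simple software analogy. The sketch below is a conceptual model only, using threads and a bounded queue to stand in for tiles and an on-chip link; it is not MemryX firmware, RTL, or any published API.

```python
# Conceptual analogy only: asynchronous, data-driven "tiles" connected by a
# bounded link that provides natural backpressure. Not MemryX hardware.
import threading
import queue

link = queue.Queue(maxsize=4)   # bounded link between two "tiles"
done = object()                 # end-of-stream sentinel

def producer_tile():
    for i in range(16):
        link.put(i)             # blocks when the consumer falls behind (backpressure)
    link.put(done)

def consumer_tile():
    while True:
        item = link.get()       # runs only when data is available (data-driven)
        if item is done:
            break
        print("processed", item * item)

threads = [threading.Thread(target=producer_tile),
           threading.Thread(target=consumer_tile)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each side advances only when its neighbor is ready, so no global clock or central arbiter is needed to keep the pipeline balanced.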

Roadmap to Production

  • 2026: Dedicated test chip (in partnership with a 3D memory provider) to validate ~5µm-class hybrid bonding interface and direct-to-tile 3D memory integration
  • 2027: First MX4 customer sampling
  • 2028: Production release, scaling from single-chip systems to multi-chip data center arrays supporting >1TB memory configurations

“The industry has recognized that deterministic dataflow is a compelling path forward for AI inference, but both efficiency and scale are critical,” said Keith Kressin, CEO of MemryX. “By combining our production-proven architecture—including an asynchronous flow model—with 3D hybrid bonding, we are removing the physical barriers to power-efficient trillion-parameter scalability. We aren’t just building a faster chip; we are building a more practical roadmap for the future of AI.”

Learn More

To review the architectural foundation of the MX4, visit the MemryX MX3 Architecture Overview: https://developer.memryx.com/architecture/architecture.html 

Specifications, partners, and timelines are targets and subject to change.

About MemryX Inc.

MemryX Inc. is a fabless semiconductor company focused on AI inference acceleration, with a production-proven “at-memory” dataflow architecture that delivers superior efficiency for edge and upcoming data center applications. Backed by $44M in Series B funding from investors including HarbourVest, NEOM Investment Fund (NIF), Arm IoT Fund, eLab Ventures, M Ventures, and Motus Ventures, MemryX is driving the next wave of AI hardware innovation from its headquarters in Ann Arbor, Michigan.

Media and Investor Contact: Roger Peene, VP Marketing

Email: [email protected]

Website: www.memryx.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/memryx-unveils-mx4-roadmap-enabling-distributed-asynchronous-dataflow-for-highly-efficient-data-center-ai-302649698.html

SOURCE MemryX
