When MemryX Inc. dropped news of its $44M Series B, it didn't just shake up edge AI; it sent a message to the whole semiconductor industry: you can't fake flow. You either design from the memory out, or you're just rearranging the same old bottlenecks.
Founded by Dr. Wei Lu and Dr. Zhengya Zhang out of the University of Michigan, MemryX has been building in silence since 2019. And now? Now they're making noise, with silicon. With a team of 59 and growing, they've stayed lean while their MX3 accelerator went from research paper to production-ready, landing over a dozen customers that aren't just testing the chip; they're embedding it. You don't get there without real architecture, real tech, and real leadership. CEO Keith Kressin didn't leave Qualcomm to play small ball, and CTO Dr. Lu didn't pioneer compute-at-memory to chase incremental gains.
So what makes the MX3 hit different? Start with 6 TFLOPS per chip. Multiply that by four in an M.2 form factor and you've got 24 TFLOPS, sipping just 0.6 to 2 watts per chip. The secret sauce is a native dataflow architecture that sidesteps traditional memory bottlenecks: MemryX built a pipeline where memory isn't just adjacent to compute, it is the interconnect. No separate control plane. No network-on-chip chaos. Just pure, elegant compute, moving at the speed of data.
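As a quick back-of-the-envelope check, here is the arithmetic implied by those figures. The per-chip throughput and power numbers are taken straight from the announcement; the aggregate and efficiency values are simply derived from them, not vendor benchmarks.

```python
# Back-of-the-envelope math using the figures quoted above.
# Per-chip numbers come from the announcement; the rest is derived.

TFLOPS_PER_CHIP = 6          # MX3 per-chip throughput (as stated)
CHIPS_PER_M2_MODULE = 4      # four chips on one M.2 module (as stated)
WATTS_PER_CHIP = (0.6, 2.0)  # quoted per-chip power envelope

module_tflops = TFLOPS_PER_CHIP * CHIPS_PER_M2_MODULE          # 24 TFLOPS
module_watts = tuple(w * CHIPS_PER_M2_MODULE for w in WATTS_PER_CHIP)

# Efficiency range implied by those numbers (best and worst case).
eff_best = module_tflops / module_watts[0]   # at 0.6 W per chip
eff_worst = module_tflops / module_watts[1]  # at 2.0 W per chip

print(f"Aggregate module throughput: {module_tflops} TFLOPS")
print(f"Module power envelope: {module_watts[0]:.1f}-{module_watts[1]:.1f} W")
print(f"Implied efficiency: {eff_worst:.1f}-{eff_best:.1f} TFLOPS/W")
```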
This isn't a "GPU alternative"; it's a new lane, one that moves edge AI into real-world deployments across industrial PCs, video management systems (VMS), autonomous vehicles, and edge servers. It's not just about performance; it's about access. MemryX doesn't need you to retrain your model or rewrite your code: their 1-click compilation tool handles it. They've already verified hundreds of models across PyTorch, TensorFlow, ONNX, and Keras, and they're just getting started.
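To make the "no retraining, no rewriting" claim concrete, here is a minimal sketch of what bringing an existing model to such a toolchain typically looks like. The PyTorch and ONNX export calls below are standard, real APIs; the final compile command is a hypothetical placeholder, since the announcement doesn't spell out MemryX's SDK interface.

```python
# Sketch of the "bring your existing model" workflow described above.
# The export uses standard PyTorch/ONNX APIs; the compile step at the
# end is a HYPOTHETICAL placeholder, not a documented MemryX CLI.

import torch
import torchvision

# Take an off-the-shelf, already-trained model -- no retraining involved.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX, one of the framework formats the post says is supported.
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx", opset_version=13)

# Hypothetical final step: hand the ONNX file to the vendor's "1-click"
# compiler, which maps it onto the accelerator's dataflow pipeline.
# (Illustrative command only.)
# $ mx_compile mobilenet_v2.onnx --target mx3
```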
And now, with support from a tight mix of new and returning investors, including Arm IoT Capital, eLab Ventures, and HarbourVest Partners, they're scaling the MX3, finalizing the MX4, and expanding operations from Ann Arbor to Taipei, Bangalore, and beyond. They've even inked a deal with Saudi Arabia's National Semiconductor Hub to accelerate AI deployment across the region. That's not just global reach; it's strategic territory.
This isn't a moment; it's a movement. When the rest of the chip world zigged toward centralization, MemryX zagged to the edge, and they brought real compute with them.
Let's connect on LinkedIn and Twitter (X) and keep the momentum going across the tech ecosystem. Whether you're a founder shaping the future, a leader driving change, a VC backing bold ideas, or an investor spotting the next big thing, together we're pushing boundaries. Proud to be building the future with you.