Published: May 14, 2025
TensorWave Raises $100M to Build the World’s Largest Liquid-Cooled AMD GPU Deployment

Today, we’re announcing something big: TensorWave has raised $100M in Series A funding to accelerate the deployment of the world’s largest liquid-cooled AMD GPU cluster, consisting of 8,192 MI325X GPUs.
The round was co-led by Magnetar and AMD Ventures, with additional participation from Prosperity7, Maverick Silicon, and Nexus Venture Partners.
An AMD-Exclusive Cloud
At TensorWave, we made a deliberate choice early on: go all-in on AMD.
Why? Because AMD isn’t just catching up; it’s building the future of AI hardware. With 256GB of HBM3E, massive compute density, and robust support for open ecosystems via ROCm, the MI325X is a generational leap forward.
We’re not abstracting that power; we’re giving developers direct, optimized access to it.
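What does that look like in practice? As a minimal sketch (our illustration, assuming a ROCm build of PyTorch, not anything TensorWave-specific), ROCm reuses the familiar torch.cuda API, so existing CUDA-style code runs unmodified on Instinct GPUs:

```python
import torch

# Assumes a ROCm build of PyTorch; ROCm exposes AMD GPUs through the
# familiar torch.cuda API, so CUDA-style code runs as-is on Instinct.
assert torch.cuda.is_available(), "no ROCm-visible GPU found"
print(torch.cuda.get_device_name(0))  # reports the AMD Instinct device
print(torch.version.hip)              # set on ROCm builds (None on CUDA builds)

x = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
y = x @ x.T  # matmul dispatched to AMD's ROCm BLAS libraries
print(y.shape, y.dtype)
```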
“Our belief is simple: specialization wins. We’ve been AMD-native from day one. That depth of focus has let us unlock performance gains across training, fine-tuning, and inference by optimizing every layer of the stack around MI325X.”
— Darrick Horton, CEO, TensorWave
This isn’t just about more VRAM; it’s about a new design philosophy, and the quick sizing math after this list shows why:
- Fewer GPUs
- Bigger models
- Tighter pipelines
- Predictable scale
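Here’s the sizing math behind “fewer GPUs, bigger models,” as a weights-only sketch (our illustrative arithmetic; real jobs also budget for activations, KV cache, and optimizer state):

```python
import math

HBM_PER_GPU_GB = 256  # MI325X HBM3E capacity

def min_gpus_for_weights(params_billion: float, bytes_per_param: float = 2.0) -> int:
    """Weights-only lower bound; bf16/fp16 stores 2 bytes per parameter."""
    weights_gb = params_billion * bytes_per_param  # 1B params x N bytes = N GB
    return math.ceil(weights_gb / HBM_PER_GPU_GB)

print(min_gpus_for_weights(70))   # 1 -- a 70B model's bf16 weights (~140 GB) fit on one GPU
print(min_gpus_for_weights(405))  # 4 -- 405B bf16 weights (~810 GB) span four GPUs
```

Fewer GPUs per model means fewer parallelism boundaries to cross, which is what “tighter pipelines” and “predictable scale” cash out to.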
Our commitment to AMD goes deeper than hardware specs. We believe developers deserve choice and that open ecosystems drive innovation. That’s why we’re building a cloud aligned with those values: by developers, for developers, with performance that doesn’t compromise.
And we’re just getting started.
Largest Liquid-Cooled AMD GPU Deployment
Deploying 8,192 MI325X GPUs is one thing. Making them run at peak performance, sustainably and efficiently, is another.
At TensorWave, we’re building the world’s largest direct liquid-cooled AMD GPU deployment, purpose-built for training and fine-tuning today’s largest models, at scale and without compromise.
Air cooling just doesn’t cut it at this density (the rough rack math after the list below shows why). As AI workloads grow larger and more memory-intensive, heat dissipation becomes a choke point, not just for performance, but for reliability and uptime.
Our direct liquid cooling systems allow us to:
- Pack more GPUs per rack without thermal throttling
- Maintain consistently high throughput for long-running training jobs
- Improve energy efficiency while extending hardware longevity
- Deliver sustained performance for high-intensity inference workloads
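To put rough numbers on the density claim (assumed figures for illustration, not published TensorWave specs):

```python
GPU_TDP_KW = 1.0        # MI325X is rated at roughly 1 kW per GPU
GPUS_PER_NODE = 8       # typical 8-GPU Instinct platform
NODE_OVERHEAD_KW = 2.0  # CPUs, NICs, fans, etc. (rough assumption)
NODES_PER_RACK = 4      # assumed density target

rack_kw = NODES_PER_RACK * (GPUS_PER_NODE * GPU_TDP_KW + NODE_OVERHEAD_KW)
print(f"~{rack_kw:.0f} kW per rack")  # ~40 kW, far beyond typical air-cooled racks (~15-20 kW)
```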
This isn’t theoretical. It’s running now, and it’s built to scale.
“When you deploy thousands of high-bandwidth GPUs, thermals aren’t a footnote; they’re a first-principles problem. We engineered our system from the ground up to make high-density, high-performance clusters viable, and liquid cooling is the unlock.”
— Piotr Tomasik, President & COO, TensorWave
This is infrastructure designed not around constraints but around what modern AI workloads actually need.
Whether you’re training multi-billion parameter frontier models, fine-tuning LLMs, or pushing longer context windows in inference, we deliver raw, sustained performance without trade-offs.
For AI builders pushing boundaries, we’re removing the friction, starting with physics.
The Expanding Ecosystem
TensorWave isn’t just building infrastructure; we’re building momentum behind an ecosystem that gives developers a real choice.
We’ve seen it firsthand: builders are tired of locked-down platforms, unpredictable pricing, and waiting in line for GPUs they’ll never fully control. They’re moving fast, building on open models, and demanding cloud infrastructure that actually keeps up.
That’s why we’re going all-in on AMD and helping shape an ecosystem that’s open, performant, and built for the devs who are pushing AI forward.
“Open-source models are moving faster than anyone expected. If you’re building with them, you need a stack that doesn’t slow you down. AMD’s powering that shift, and we’re making it real... and at scale.”
— Jeff Tatarchuk, Chief Growth Officer, TensorWave
We’ve optimized TensorWave for the builders who don’t want to play gatekeeper games. Whether you’re tuning a Llama 3 checkpoint, running a massive MoE, or experimenting with long-context RAG, you get raw performance, real access, and white-glove support from people who’ve actually done this before.
And this ecosystem? It’s growing.
More developers are leaning into ROCm. More OSS communities are optimizing for AMD Instinct™ GPUs. And more enterprise teams are realizing that AMD isn’t just a viable option; it’s the smart one.
The next era of AI won’t be dominated by walled gardens. It’ll be shaped by openness, speed, and flexibility.
That’s the wave we’re riding. And we’re bringing the community with us.
About Our Series A
This $100M Series A isn’t just validation; it’s fuel to meet the demand we’re seeing from hyperscalers and enterprise customers.
Our Series A lets us accelerate everything: our MI325X cluster rollout, our liquid-cooled architecture, our team’s growth, and our ability to support the world’s most ambitious AI teams with infrastructure that doesn’t slow them down.
The era of the generalized cloud is over.
Modern AI workloads demand more in every aspect: more memory, more consistency, more throughput. You can’t train tomorrow’s models on yesterday’s infrastructure. And you definitely can’t productionize AI on shared, overbooked hardware abstracted beyond recognition.
“We’re scaling fast because our customers are scaling faster. We’re not here to offer another cloud—we’re here to build the one that AI actually needs.”
— Darrick Horton, CEO, TensorWave
Whether you’re a startup training frontier models or an enterprise team fine-tuning for production, TensorWave gives you speed, predictability, and full-stack support, without the bottlenecks.
To the builders out there: if you’ve experienced vendor lock-in or felt the limits of your current infra, we built this for you.
Talk to a sales engineer today to start building on MI325X GPUs in the TensorWave cloud.
Fill out the form below to reserve your MI325X GPUs today.
About TensorWave
TensorWave is the AI and HPC cloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models—training or inference.
Ready to get started? Connect with a Sales Engineer.