Published: Jul 17, 2025

A First Look at TensorWave’s AMD MI325X Training Cluster

At Advancing AI Day 2025, TensorWave CEO and co-founder Darrick Horton took the stage to offer a behind-the-scenes look at something game-changing: TensorWave’s AMD MI325X training cluster—designed from the ground up for AI scale, speed, and specialization.

“We’re not your general-purpose cloud. We don’t do CPUs. We don’t do VMs. We build massive GPU clusters for AI. Period.”

– Darrick Horton, CEO of TensorWave

No CPUs. No Virtual Machines. Just Raw AI Power.

While most clouds try to be everything for everyone, TensorWave is unapologetically focused. Darrick made it clear: the future of AI infrastructure demands specialization, not generalization. That’s why TensorWave went all-in on AMD’s MI325X GPUs, built to support larger models, higher memory demands, and serious training workloads.

Why MI325X?

The MI325X isn’t just another GPU; it’s the next step in AMD’s AI roadmap. With 256GB of HBM3e memory and 6 TB/s of memory bandwidth per GPU (up from 192GB and 5.3 TB/s on the MI300X), it’s tailored for LLM training and other memory-intensive workloads.

TensorWave’s MI325X clusters are:

  • Purpose-built for training massive models: no overhead, no shared infra
  • Optimized for ROCm and open-source frameworks
  • Scalable across thousands of GPUs with deterministic performance
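The post itself doesn't include code, but as an illustrative sketch of that ROCm support: a ROCm-enabled PyTorch build exposes AMD GPUs through the familiar `torch.cuda` namespace, so checking what a node can see looks like this (the function name and report format here are our own, not TensorWave's):

```python
def describe_rocm_gpus():
    """Return a short report of visible AMD GPUs, or why none were found."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed."
    # ROCm builds of PyTorch set torch.version.hip; CUDA builds leave it None.
    if getattr(torch.version, "hip", None) is None:
        return "This PyTorch build was not compiled against ROCm."
    # ROCm reuses the torch.cuda device namespace for AMD GPUs.
    if not torch.cuda.is_available():
        return "ROCm build, but no AMD GPUs are visible."
    lines = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3  # MI325X reports ~256 GB
        lines.append(f"GPU {i}: {props.name}, {mem_gb:.0f} GB")
    return "\n".join(lines)

print(describe_rocm_gpus())
```

On an MI325X node this would list each device with roughly 256 GB of memory; on a machine without ROCm it simply explains why no GPUs were found.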

Rethinking the Cloud Stack

Darrick also shared how TensorWave is rebuilding the stack to give teams more control. Forget about abstracted, one-size-fits-all infrastructure. This is infrastructure tuned for AI, from bare metal to software stack, with direct access to the hardware that matters.

About TensorWave

TensorWave is the AMD GPU cloud purpose-built for performance. Powered exclusively by Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models—training or inference.

Ready to get started? Connect with a Sales Engineer.