Lower TCO + Higher Performance
AMD Instinct™ MI300X
Supercharge your AI inference workloads with an impressive 192GB of HBM3 memory per GPU.
MI300X Server Configuration
TensorWave's AMD Instinct MI300X servers include ultra-high memory bandwidth and blazing fast networking powered by RoCEv2 interconnects.
Whether you're fine-tuning models or scaling inference, TensorWave’s AMD MI300X cluster delivers high memory per GPU and ultra-low TCO. Built for inference, it provides a cost-efficient path from development to production.
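The "high memory per GPU" claim is easy to sanity-check with back-of-the-envelope arithmetic: fp16 weights take 2 bytes per parameter, so a 70B-parameter model needs roughly 140 GB, which fits in a single MI300X's 192 GB of HBM. The sketch below is illustrative only (the `fits_on_gpus` helper is ours, not TensorWave tooling) and ignores KV cache, activations, and framework overhead:

```python
def fits_on_gpus(params_billions, bytes_per_param=2, num_gpus=1, hbm_gb=192):
    """Rough check: do the model weights fit in aggregate HBM?

    Counts weights only (fp16 by default); KV cache, activations, and
    runtime overhead are ignored, so treat a True result as an
    optimistic lower bound, not a serving guarantee.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9
    return weights_gb <= num_gpus * hbm_gb

print(fits_on_gpus(70))               # True: ~140 GB fits in one 192 GB GPU
print(fits_on_gpus(405))              # False: ~810 GB needs multiple GPUs
print(fits_on_gpus(405, num_gpus=8))  # True: fits in a full 8-GPU node
```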
Standard Node Config
Bare Metal
with optional managed Kubernetes & Slurm
Accelerators
8x AMD Instinct MI300X
CPUs
2x AMD EPYC 9654 (Genoa)
Memory
2.3 TB DDR5 4800 MT/s
Local Storage
4x 3.84TB NVMe Drives + 2x 960GB M.2
Node Interconnects
3.2 Tb/s
Peak Network Storage
50 PB
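The per-node headline numbers follow directly from the spec list: eight 192 GB GPUs give 1,536 GB of HBM per node, and the 3.2 Tb/s fabric works out to 400 GB/s. A small sketch of that arithmetic (constants taken from the spec sheet above):

```python
GPUS_PER_NODE = 8
HBM_GB_PER_GPU = 192        # MI300X HBM capacity per GPU
INTERCONNECT_GBPS = 3200    # 3.2 Tb/s node interconnect, in Gb/s

# Aggregate HBM available to a single 8-GPU node, in GB.
node_hbm_gb = GPUS_PER_NODE * HBM_GB_PER_GPU

# Fabric bandwidth converted from gigabits to gigabytes per second.
fabric_gbytes_per_s = INTERCONNECT_GBPS / 8

print(node_hbm_gb)          # 1536
print(fabric_gbytes_per_s)  # 400.0
```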
AMD Instinct MI300X
Dual AMD EPYC 9654 (Genoa) CPUs, 2.3TB DDR5, 3.2 Tb/s RoCEv2 (2x400Gb/s front-end + storage)
AIR COOLED