TensorWave Making Waves at GTC
Apr 23, 2024

At this year's highly anticipated GTC event in San Jose, Jensen Huang, CEO of NVIDIA, did exactly what was expected: he hyped up and announced a new product line (the Blackwell series) before the company has even shipped its next product (the H200), in an effort to overshadow what AMD has available right now.
And we at TensorWave made sure to let everyone know.
We circled the venue of the well-known event with an LED truck, offering the red pill to every attendee in visual range. Our goal is to promote optionality in the AI market, reminding interested parties that there are other viable options available today.
The news we wanted to share? That TensorWave is the first to market at scale with cloud AI development infrastructure based on Instinct MI300X GPUs from Advanced Micro Devices (AMD).
How did we choose to make our splash? Simple: crash the 2024 GTC AI conference in San Jose, California, an event sponsored by NVIDIA, the GPU market's Goliath to AMD's David.
Why would we choose to announce our launch at an event populated with NVIDIA GPU fans? To make a point: AMD has a compelling GPU product offering that beats NVIDIA’s flagship H100 GPU on several metrics and soon the H200 as well. Our service makes it easy for AI developers to leverage the superior power that AMD is currently offering.
AMD Instinct MI300X vs. NVIDIA H100
AMD introduced the Instinct MI300X in late 2023. Why did TensorWave choose a new and unproven hardware platform as the basis for its services?
For one thing, all indications are that the MI300X is a superior product. Consider these specifications:
| | AMD MI300X | NVIDIA H100 |
| --- | --- | --- |
| Memory capacity | 192 GB | 80 GB |
| Memory bandwidth | 5.3 TB/s | 3.3 TB/s |
| Stream processors | 19,456 | 14,592 |
| Engine clock | 2,100 MHz | 1,755 MHz |
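To make the memory-capacity gap concrete, here is a back-of-the-envelope sketch (our illustration, not an official sizing guide): how many GPUs are needed just to hold the weights of a hypothetical 70-billion-parameter model in FP16, assuming 2 bytes per parameter and ignoring activations, KV cache, and optimizer state.

```python
import math

# Assumption: FP16 weights at 2 bytes per parameter; activations,
# KV cache, and optimizer state are deliberately ignored.
PARAMS = 70e9
BYTES_PER_PARAM = 2
weights_gb = PARAMS * BYTES_PER_PARAM / 1e9  # 140 GB of weights

MI300X_GB = 192  # per-GPU memory from the table above
H100_GB = 80

mi300x_needed = math.ceil(weights_gb / MI300X_GB)
h100_needed = math.ceil(weights_gb / H100_GB)

print(f"Model weights: {weights_gb:.0f} GB")
print(f"MI300X GPUs needed: {mi300x_needed}")  # 1
print(f"H100 GPUs needed: {h100_needed}")      # 2
```

Under these simplifying assumptions, a single MI300X holds the entire model, while the same model must be split across two H100s.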
Furthermore, published benchmark testing results lean heavily in favor of the MI300X. All of these numbers represent trillions of floating-point operations per second (TFLOPS):
| | AMD MI300X | NVIDIA H100 |
| --- | --- | --- |
| FP64 | 81.7 | 33.5 |
| FP64 Matrix | 163.4 | 66.9 (Tensor) |
| FP32 | 163.4 | 66.9 |
| FP32 Matrix | 163.4 | N/A |
| FP16 | 1,307.4 | 133.8 / 989.4 (Tensor) |
| FP16 Sparse | 2,614.9 | 1,978.9 |
| BFLOAT16 | 1,307.4 | 133.8 / 989.4 (Tensor) |
| BFLOAT16 Sparse | 2,614.9 | 1,978.9 |
| FP8 | 2,614.9 | 1,978.9 |
| FP8 Sparse | 5,229.8 | 3,957.8 |
| INT8 | 2,614.9 | 1,978.9 |
| INT8 Sparse | 5,229.8 | 3,957.8 |
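To put those benchmark numbers in perspective, this short sketch computes the MI300X-to-H100 ratio for a few representative rows. Where the H100 lists both a plain and a Tensor-core figure, the higher (Tensor) figure is used to keep the comparison conservative; the values are taken directly from the table above.

```python
# Peak throughput (TFLOPS) pairs from the table: (MI300X, H100).
# For FP16, the H100's higher Tensor-core figure is used.
specs = {
    "FP64":        (81.7,   33.5),
    "FP32":        (163.4,  66.9),
    "FP16":        (1307.4, 989.4),
    "FP8":         (2614.9, 1978.9),
    "INT8 Sparse": (5229.8, 3957.8),
}

for name, (mi300x, h100) in specs.items():
    print(f"{name:12s} MI300X/H100 = {mi300x / h100:.2f}x")
```

Even against the H100's Tensor-core numbers, the MI300X comes out ahead on every row: roughly 2.4x at double precision and about 1.3x at the lower precisions typical of AI training and inference.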
Although NVIDIA has disputed some of these performance comparisons, this may be a sign that they fear losing market share to AMD. And they have a reason for that fear: Aside from the obvious performance advantages, AMD’s GPUs are more available. Up to this point, NVIDIA has dominated the GPU market, so much so that their order backlog is reported to be a year or more. MI300X GPUs are available today.
The TensorWave Advantage
The performance advantage of AMD’s MI300X GPUs means that TensorWave can offer benefits that other cloud AI infrastructure providers can’t, such as:
• Easy scalability
• Higher bandwidth and much lower latency
• Native support for PyTorch and TensorFlow with no code modifications
• Implementation options to meet your needs
These performance advantages mean your AI development cycles are shorter and your total cost of ownership is lower.
TensorWave’s Coming-Out Party at GTC
Making our entrance at this year's GTC, complete with a digital sign truck, was an unconventional and risky way to introduce ourselves to the AI development community, but it worked! People are talking about AMD's GPUs as game changers, and TensorWave attracted significant business interest from participants at the event.
For more information on how TensorWave can help you accelerate your AI development plans, with products that are actually available right now, contact us today.