I assume you forgot the percent sign after 25-50? Because I originally interpreted that as "25-50 times the cost" for a split second before realizing that it couldn't be right...
The part that's not clear to me is how. The value I see in AMD's participation is opening up the CUDA walled garden. This is different from what's good for TW. They would be better off with an AMD/TW walled garden that they can provide service for at better prices. But better prices alone won't be enough to get companies to move from the largest/only walled garden to a new/smaller one.
The best I could see is developing a service platform that frictionlessly and more efficiently runs CUDA workloads on AMD using a proprietary translation. Not a bad bet, IMO.
Wait for H100 and use Nvidia support and any Nvidia card to get started?
In my company, 90% of computers have an Nvidia card, so you can get started with CUDA immediately to begin data aggregation and plan your AI training and inferencing while waiting for deployment. Totally forget that approach with AMD.
Nvidia also assists larger customers with DGX Cloud access through their large deployed supercomputers.
Nvidia's AI Workbench helps a lot here, as you can easily transfer your applications from local RTX cards to on-prem or cloud data centers.
MI300X was released in Dec of last year, only months ago. It has 192 GB, while H100s only have 80 GB.
There are at least 4 companies now providing bare-metal cloud access to these cards, plus 2 more that are hyperscalers (Oracle and MSFT).
It is critically important for the long term safety of AI that we are not dependent on a single source for all of the hardware and software related to AI.
It takes a lot of effort to course correct a large ship, give it some time.
So, HIP at a raw level is as performant as CUDA. The real problems come from the higher-level stack (BLAS and LAPACK libraries, for example). But not all software needs the higher-level stack. So then it becomes a cost-benefit analysis.
A 15k AMD part vs a 60k Nvidia part. For the price of 100 Nvidia GPUs, you can buy 200 AMD GPUs and still fund at least 2-3 engineers for 3 years at 300k each to fix the specific library for that GPU. If you can make that work for a lower-level library right now, then it makes sense to sustain it in the future.
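The break-even arithmetic above can be sketched out. A minimal sketch, using the commenter's rough figures (15k/60k per GPU, 300k per engineer-year), which are illustrative assumptions, not real quotes:

```python
# Back-of-the-envelope check of the GPU cost trade-off above.
# All dollar figures are the comment's rough numbers, not real quotes.
nvidia_gpu_price = 60_000    # per Nvidia part, USD
amd_gpu_price = 15_000       # per AMD part, USD
engineer_cost = 300_000      # fully loaded, per engineer per year, USD

nvidia_budget = 100 * nvidia_gpu_price        # $6.0M for 100 Nvidia GPUs
amd_hw_cost = 200 * amd_gpu_price             # $3.0M for 200 AMD GPUs
leftover = nvidia_budget - amd_hw_cost        # $3.0M left for software work

engineer_years = leftover / engineer_cost     # 10 engineer-years
engineers_for_3y = engineer_years / 3         # ~3.3 engineers for 3 years

print(f"leftover: ${leftover:,}")             # leftover: $3,000,000
print(f"engineer-years: {engineer_years}")    # engineer-years: 10.0
```

So the leftover budget covers roughly 3 engineers for 3 years, consistent with the "at least 2-3 engineers" claim above.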
AMD did several acqui-hires in the recent past too, and now AMD has more money than it has ever had. For about a year it has stated that AI software is a top priority of the company, and the results are surfacing.
> TensorWave will fund its bit barn build by using its GPUs as collateral for a large round of debt financing, an approach used by other datacenter operators.
Hmmm. This seems like an odd risk for the lenders. It's asymmetrical, with little upside, and the security depends on continuing demand for GPUs. When the AI tide goes out, who will be left with the losses?
I see this as super risky as well. I'm taking a totally different approach with my business. We are only growing with customer demand. First off, we are getting a decent number of GPUs to rent, which should cover our initial capacity needs. Then as we grow, we will push revenue + further investment back into more purchase orders. We can also order and deploy compute on a very short timeframe, so if we have a customer that wants a bunch of compute that we don't have today, we will get it online relatively quickly. Grandiose claims of 20k GPUs by the end of the year almost never work out the way you want them to.
AMD already has a stock price that reflects their whole bag of products. TW's stock price is effectively zero for a pre-listing VC investment. TW is a cloud-service bet, not a chip bet (i.e., higher up the stack = more value), and TW is AI-only.
TensorWave doesn't compete with Nvidia but with CoreWeave. I bet they might even have the same founder lol.
CoreWeave is doing the same thing but with Nvidia, also using its GPUs as collateral. But CoreWeave did it last year, when the H100 was worth far more as collateral than it is today. And CoreWeave has actually been backed by Nvidia and Microsoft in some funding rounds.
On Nvidia HW, there are something like 10x as many AI startups doing what TensorWave is doing. TensorWave is taking a riskier path by going with AMD instead of Nvidia. Being among the few such startups might give them a large benefit, but it also depends a lot on AMD's support on the SW side. I wouldn't bet on that, especially not with AMD HW as collateral.
It could be great for everyone else that TensorWave is willing to invest in it and hopefully help drive improvements in the software.
https://www.techpowerup.com/318652/financial-analyst-outs-am...