I never understood this reaction the market is having. It's like reading the tea leaves - effectively random and not helpful.
I think it makes more sense if someone thinks "Gen AI is just NVIDIA - and if China has Gen AI, then they must have their own NVIDIA" so they sell.
But it makes the most sense if someone thinks "Headlines link US lead in Gen AI and NVIDIA, bad headlines for Gen AI must mean bad news for NVIDIA".
And the theoretically ultimate market analysis guru probably thinks "Everyone is wrong about Gen AI and NVIDIA being intimately linked, but that will make them sell regarding this news, so I must also sell and buy back at bottom"
> I never understood this reaction the market is having. It's like reading the tea leaves - effectively random and not helpful.
You're exactly right.
People in the US treat the market like the Oracle of Delphi. It's really just a bunch of people who don't have a grasp on things like AI or the tech industry at large placing wagers on who's gonna make the most money in those fields.
And you can apply that to most fields that publicly-traded companies operate in.
> And the theoretically ultimate market analysis guru probably thinks "Everyone is wrong about Gen AI and NVIDIA being intimately linked, but that will make them sell regarding this news, so I must also sell and buy back at bottom"
That's most likely exactly what's going on.
Markets aren't about intrinsic value; they're about predicting what everyone else is going to do. Couple that with the fact that credit is shackled to confidence, and that so much of market valuation is based on available credit. One stiff breeze is all it takes to shake confidence, collapse credit, and spark a run on the market.
From the reporting, it seems the large drop has much to do with the perception that DeepSeek spent relatively little training its models, which suggests a great deal can be accomplished without many billions in infrastructure spend allocated largely to purchasing more NVIDIA chips.
Isn’t it that the current market price of NVDA was based on the number of chips they need to sell? Because to train and run models you need so many GPUs. Now that DeepSeek is showing you need fewer GPUs to train and run a model, the value of NVDA drops since they won’t sell as many.
Parent's comment was changed enough that my comment is meaningless. They previously said that you don't need NVIDIA for DeepSeek. I'll leave mine alone.
There are so many investors in the market that it's hard to figure out what or why anything happens.
But roughly, I suspect the main thing is "enough people thought NVDA was the only supplier for AI chips, and now they realize there's at least one other" that it slipped.
At this point, this reaction from the market means nothing. All these stocks were at an all-time high, so a drop was inevitable. Tomorrow they can come up with a different spin and move the stock up again.
> Theoretically, anything that lets someone do more with the same number of their chips should be bullish.
If NVidia makes money per unit of compute capacity, and a new method requires less capacity, then all other things being equal NVidia will make less money.
Now, you might say "demand will make more use of the available resources", but that really depends, and certainly there is a limit to demand for anything.
That's what I meant by the market saying that it doesn't think people will use AI that much.
As of right now, there's a limited number of use cases for GenAI. Maybe that will change now that the barriers to entry have been substantially lowered and more people can play with ideas.
I especially see a limit in demand for the number of models. Eventually you have good-enough models, and then you need less training and thus less hardware.
Also, Nvidia's profits are based on margins. If there is less demand, margins will most likely shrink unless they limit supply. Thus their profit will go down either because they sell fewer units or because they make less per unit sold.
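A rough back-of-the-envelope sketch of that point (all numbers here are made up for illustration, not actual NVIDIA figures):

    # profit = units sold * (selling price - cost per unit)
    def profit(units, price, cost):
        return units * (price - cost)

    # hypothetical baseline: 100 units at a $30k price, $10k cost -> $2.0M
    print(profit(100, 30_000, 10_000))

    # hypothetical demand drop: fewer units and a thinner margin -> $1.2M
    print(profit(80, 25_000, 10_000))

Either lever on its own cuts profit; a demand shock tends to move both at once.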
That's an interesting framing, but "all other things being equal" is doing a lot of work there. In an alternate timeline where ChatGPT needed 1000x compute capacity to run, Nvidia wouldn't make 1000x the revenue from OpenAI; ChatGPT as we know it simply wouldn't exist.
The bull case as I see it is that demand for AI capacity will always rise to meet supply, at least until it starts to hit some very high natural limits based on things like human population size and availability of natural resources on or in reasonable proximity to Earth. Just as we quickly found uses for more than 640 KB RAM and gigabit+ Internet connections, there's no shortage of what could be done with 1000x more AI capacity. Best case scenario, we're eventually going to start throwing as much compute as humanly possible at running fully automated factories, automatically building new factories and infrastructure, running automated factories that build the machines that automatically build factories and infrastructure, and so on. Looking forward a bit further, it's not hard to imagine an AI-driven process of terraforming and industrializing Mars.
I don't know how much if any of that would be possible with current AI software given a hypothetical infinitely powerful GPU, but AI is still rapidly improving. Once it gets to the point where it can make humanoid robots do tasks at a lower cost than human labor, the demand ceiling will shoot sky-high and become a self-reinforcing feedback loop. AI will be used 24/7 to churn out new AI capacity, robots, power infrastructure, and so on, along with all the other things we might want it to produce (cars, cities, high-speed rail, drone carriers, food, etc.).
Imagine having an equivalent to AWS that could be used for provisioning and managing low-cost automatons with comparable physical and cognitive capabilities to average human laborers, along with self-driving cars, AI-controlled construction and manufacturing equipment, and so on. That would be on top of all the purely digital capabilities that are already commonplace and rapidly improving. Essentially, every public works project or business idea that anyone could conceive of would ultimately become viable to attempt with a dramatically smaller amount of capital than today, so long as the necessary natural resources were physically available and there were no insurmountable legal/regulatory roadblocks. We have an awful lot of undeveloped land and an awful lot of people on the planet who could certainly find interesting things to do with a glut of AI capacity.
> The bull case as I see it is that demand for AI capacity will always rise to meet supply, at least until it starts to hit some very high natural limits
I guess you’re assuming AGI is close and you mean demand for AGI? Because I think there will be significant, but certainly not endless, demand for things like ChatGPT in its current form. OpenAI’s revenue last year was $4B, which is very impressive but doesn’t feel like “endless” demand. By comparison, Apple’s revenue the same year was $400B. There are limits to what LLMs in their current form can do.
I don't have a strong opinion on whether something we'd universally agree to call "AGI" is particularly close, but in terms of physical automation, I would say that we're on the cusp of having a lot of not-quite-AGI AIs that all together could have a similar effect. It doesn't seem to me like we'll need AGI for "good enough" versions of things like Waymo, Tesla Optimus, or automated specialized construction and manufacturing machinery. I can easily imagine that another 5-10 years of steady advancements followed by throwing all of our collective economic might at full-scale production deployment would be sufficient to kick off the next industrial revolution. Just as the first two helped make slave labor obsolete, IRIII would more generally make most physical human labor obsolete.
As far as current LLM capabilities, I do think there's a massive amount of untapped demand even for that. ChatGPT is like the AOL of genAI — the first majorly successful consumer mass market product, but still ultimately just a proof of concept to capture the popular imagination. The real value will be seen as agents start getting plugged into everything, new generations of startups and small businesses are built from the ground up on LLM-driven business processes by non-technical founders with tiny teams, and any random teenager has a team of AI assistants actively managing their social life and interacting with a variety of platforms on their behalf.
Tons of things that big businesses and public figures pay full-time salaries for humans to do will suddenly become accessible to small uncapitalized ventures and everyday people. None of that requires a fundamental improvement to LLMs as we know them today, just cost reduction through increased supply and continued work by the tech industry to integrate and package these capabilities in user-friendly forms. If ChatGPT is AOL, then 5G, mobile, IoT, smart devices, e-commerce, streaming, social media, and so on should all be right around the corner.
Unless, of course, the market is saying "there's only so much we see anyone doing with genAI."
Which is what the 15% haircut they've taken today would indicate they're saying.