I heard Nvidia is gimping consumer-grade cards so they're not good at LLM training; is this true? If so, are they gimped only for training, or also for running LLMs?
I guess the limited amount of RAM is also a way to limit the cards.
Many Nvidia "gaming" SKUs are already at the point where memory is often the biggest likely limitation on their gaming use case, and they'll be noticeably better products for the consumer with a small cost increase by adding more memory.
So I'd say there's good evidence that something outside cost and value to the gaming use case is why they don't have higher memory SKUs, and eating into "professional" priced AI SKUs is an obvious possibility.
I doubt anyone outside Nvidia itself knows "for sure", but it's a pretty strong indication.
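
To illustrate why VRAM is the binding constraint for running LLMs locally, here's a rough back-of-envelope sketch. The parameter counts and bytes-per-parameter figures are illustrative assumptions (common open-model sizes and quantization levels), not anything specific to Nvidia's lineup, and real usage also needs headroom for activations and the KV cache:

    # Back-of-envelope VRAM estimate for holding LLM weights in memory.
    # Illustrative assumptions only; actual memory use is higher.
    BYTES_PER_PARAM = {
        "fp16": 2.0,   # 16-bit floats
        "int8": 1.0,   # 8-bit quantization
        "int4": 0.5,   # 4-bit quantization
    }

    def weight_vram_gib(params_billions: float, precision: str) -> float:
        """Approximate GiB needed just to store the model weights."""
        total_bytes = params_billions * 1e9 * BYTES_PER_PARAM[precision]
        return total_bytes / (1024 ** 3)

    for params in (7, 13, 70):  # billions of parameters
        row = ", ".join(
            f"{prec}: {weight_vram_gib(params, prec):.1f} GiB"
            for prec in BYTES_PER_PARAM
        )
        print(f"{params}B model -> {row}")

Running this, a ~7B model at fp16 needs roughly 13 GiB just for weights, a 13B model roughly 24 GiB, and a 70B model around 130 GiB at fp16 or ~33 GiB even at 4-bit. So a 24 GB consumer card is right at the edge for mid-sized models, which is consistent with the point above that memory, not compute, is what caps these cards for LLM use.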