I highly doubt that Nvidia dropped the ball this hard with Pascal.
A much more obvious and sensible conclusion is that Nvidia is currently developing their next chip, called Volta. We already know that the Department of Energy contracted Nvidia and IBM (for lots and lots of money) to provide a good Volta GPU + POWER9 CPU combo for the new Summit and Sierra supercomputers set for completion in 2017.[1] This means Nvidia has known since at least 2014 that they'd have very little time between their Pascal release and the more pressing Volta release. It's been on their roadmap for a while now.
The Fermi, Kepler, and Maxwell architectures each had two or three years between them. Pascal and Volta are set to have a year or less.
1: http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers
>Short version: nVidia's PASCAL might see Christmas this year, maybe not.
How do you figure?
The article's two main claims are:
a) Had Pascal taped out in June 2015, as everyone had reported, it would easily have made it to market by now.
b) At the time of CES 2016, Pascal hadn't yet taped out. Nvidia had only received "bring up tools" in the last few days of 2015; actual silicon typically arrives a few weeks after the tools.
Going by the article, Pascal probably taped out for real in late January or early February. If anything, it seems on track for probably a late Q2 2016 release, maybe early Q3. No way it'll be Christmas unless something goes catastrophically wrong.
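For a rough feel of the arithmetic (the four-to-six-month tape-out-to-shelves lead time below is my own ballpark assumption for illustration, not a figure from the article), here's a quick Python sketch:

    from datetime import date, timedelta

    # Tape-out dates implied by the article's timeline (assumed for illustration).
    tapeout_estimates = [date(2016, 1, 25), date(2016, 2, 8)]

    # Assumed tape-out-to-retail lead time of roughly 4-6 months (16-26 weeks).
    lead_times = [timedelta(weeks=w) for w in (16, 26)]

    for tapeout in tapeout_estimates:
        for lead in lead_times:
            launch = tapeout + lead
            quarter = (launch.month - 1) // 3 + 1
            print(f"tape-out {tapeout} + {lead.days // 7} weeks -> Q{quarter} {launch.year}")

Every combination lands in Q2 or Q3 2016, which is why a slip to Christmas would take something going badly wrong.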
I have no idea about the author's sources, but even if you take every piece of evidence he presents as true and his sources as accurate, he doesn't come within a country mile of having enough evidence to claim with certainty that an Nvidia executive flat-out lied about anything.
I can't believe how negatively everyone here is viewing SemiAccurate... as a chip designer who is very involved in the business side of the industry, I find SemiAccurate one of the best news sources I've got. Everyone in the industry shits on NVIDIA's process because they have, time and time again, lied about benchmarks, tape-out dates, et cetera.
I only use NVIDIA GPUs, and I think they are decent products most of the time (except for their Linux driver support), but I take every statement from Jen-Hsun with a HUGE grain of salt and wait until I've talked to a friend at NVIDIA, who almost always relays the team's displeasure at Jen-Hsun's bullshitting.
Nvidia earns a lot of flak, and SemiA might be a great site overall. But the only times I see SemiA linked are when Charlie has written an article that gives the distinct impression that at some point Nvidia ran over his dog. That is my entire experience with the site over many years.
The only thing worse than NVIDIA's Linux support is AMD's and Intel's. Then again, the issue I had that made me switch to NVIDIA was multi-monitor support, which is probably not an issue for the vast majority of people.
Sorry, what does this story mean? Is Nvidia doing retrocomputing - writing code in Pascal and it's out on tape? And also Silicon? I speak geek but not this dialect of geek.
So there are two sources of the "tape" in "tape out". Back in the good ol' days (pre-1980s), chip designs were done on paper by the engineers and then transferred onto rubylith tape (http://tingilinde.typepad.com/.a/6a00d83451b54669e2017ee846b...), mostly by women. The rubylith was then moved to the fab, where it was used as the mask for photolithography (the start of manufacturing the chip).
The other source of the name is that from the '80s through the '90s, when EDA tools started being used in the industry, designs were all done on computers, and the final file containing the information for the fab (.GDS2) was put onto storage media (tape) and sent to the fab.
In both cases, it is basically the final design step before you wait however long for the silicon to come back from the fab. It is also a huge stressor, as the fabrication runs are prepaid, so when you are approaching the tape-out date it is typically extreme overtime for everyone involved.
I should probably get back to work; 2 1/2 months to tape out myself... hopefully will get some sleep between now and then.
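For anyone curious what that final fab handoff looks like nowadays, here's a minimal sketch using the open-source gdstk Python library (assuming it is installed); the cell, layer, and file names are made up for illustration and have nothing to do with any real design:

    import gdstk

    # Build a trivial "design": one top cell containing a single rectangle on layer 1.
    # In a real flow this geometry would come from place-and-route, not hand-drawn shapes.
    lib = gdstk.Library(name="toy_design")
    top = lib.new_cell("TOP")
    top.add(gdstk.rectangle((0, 0), (2.0, 1.0), layer=1, datatype=0))

    # "Tape out": stream the layout to a GDSII file, the .GDS2 format mentioned above.
    # These days it goes to the fab over a network rather than on physical tape.
    lib.write_gds("toy_design.gds")

A real tape-out database is orders of magnitude bigger and has to clear sign-off checks (DRC/LVS and the like) first, but the end product handed to the fab is still just a stream file like this.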
I'm not sure if you're straight-faced when you say "kind of excited". I'm kind of... "is this useful to anyone"?
Is this going to make much difference to tasks such as getting more frames out of games like Fallout, or to crunching compute tasks like cracking password hashes?
Thank you for asking. The site would just have needed one or two sentences of explanation for context, especially for something "important" (not sure it really is, in the larger sense).
It means there's an excessive amount of inside baseball terminology for people paying too much attention to the machinations of chip manufacturers. It all seems very important.
It seems like he has a reasonable chain of logic for every component but one: nobody has any evidence that the "BGA" component here was specifically for Pascal, Volta, or anything at all.
So it certainly could indicate they don't have real silicon yet for whatever the component involved is, but nobody I've seen has presented a compelling argument for it being Pascal in particular.
Nvidia would do well to cozy up to Intel at this point, and pray for an acquisition. I can't see such an inept company surviving for much longer on its own.
I have been using Nvidia cards since the original GeForce 256 in 2000. I purchased one of the first ATI 8514/Ultra cards in the early 90s. I own two R9 290s, and at work I use Tesla K40s. I follow GPU computing avidly, as I do machine learning mainly in OpenCL (for my sins), precisely because I want choice in the market. It would be much easier for me to do CUDA; I resist because I reject proprietary APIs.
I know exactly what I am talking about. I don't forgive Nvidia its sins, and I actually prefer AMD, but I am not about to lie to myself that Nvidia is somehow an unsuccessful company when it is worth 7x more than AMD, and AMD also includes an x86 line. It's a no-contest scenario in the eyes of the market, much to my chagrin, but I cannot deny the reality.