News of Nvidia’s Pascal tapeout and silicon is important (semiaccurate.com)
62 points by nkurz on Feb 7, 2016 | 31 comments


I highly doubt that Nvidia dropped the ball this hard with Pascal.

A much more obvious and sensible conclusion is that Nvidia is currently developing their next chip, called Volta. We already know that the Department of Energy contracted Nvidia and IBM (for lots and lots of money) to provide a Volta GPU + POWER9 CPU combo for the new Summit and Sierra supercomputers, set for completion in 2017.[1] This means Nvidia has known since at least 2014 that they'd have very little time between their Pascal release and the more pressing Volta release. It's been on their roadmap for a while now.

The Fermi, Kepler, and Maxwell architectures each had two or three years between them. Pascal and Volta are set to have a year or less.

1: http://www.anandtech.com/show/8727/nvidia-ibm-supercomputers


Short version: nVidia's PASCAL might see Christmas this year, maybe not.

The tenacity with which fans spin far-reaching narratives out of small, disconnected events to support their dreams and fantasies never fails to amaze.


>Short version: nVidia's PASCAL might see Christmas this year, maybe not.

How do you figure?

The article's two main claims are:

a) Had Pascal taped out in June 2015 as everyone had reported, it'd have easily already made it to market by now.

b) At the time of CES 2016, Pascal hadn't yet taped out. Nvidia had only received "bring up tools" in the last few days of 2015; actual silicon typically arrives a few weeks after the tools.

Going by the article, Pascal probably taped out for real in late January or early February. If anything, it seems on track for probably a late Q2 2016 release, maybe early Q3. No way it'll be Christmas unless something goes catastrophically wrong.


I have no idea about the author's sources, but even if you take every piece of evidence he presents as true and his sources as accurate, he doesn't come within a country mile of having enough evidence to claim with certainty that an Nvidia executive flat-out lied about anything.


Wait, is a CEO lying (Jen-Hsun in particular) supposed to be some sort of scandal?

Maybe if you interpret "lie" using the courtroom definition, sure, it would be a scandal, but that's not what the author was going for.


Nvidia is a public company. What the CEO has said is enough for shareholders and other investors to sue the company.


Semi-accurate all right. The accurate part is not original and the original part is mere fanciful speculation.


/r/hardware discussion seems to doubt this site's reliability: https://www.reddit.com/r/hardware/comments/43q5oa/news_of_nv...


I can't believe how negatively everyone is viewing SemiAccurate. As a chip designer very involved in the business side of the industry, I find SemiAccurate one of the best news sources I've got. Everyone in the industry shits on NVIDIA's process because they have, time and time again, lied about benchmarks, tapeout dates, et cetera.

I only use NVIDIA GPUs, and think that most of the time they are decent products (except for their Linux driver support), but I take every statement from Jen-Hsun with a HUGE grain of salt and wait until I talk to a friend at NVIDIA, who almost always relays the team's displeasure at Jen-Hsun's bullshitting.


Nvidia earns a lot of flak, and SemiA might be a great site overall. But the only times I see SemiA linked are when Charlie has written an article that gives the distinct impression that at some point Nvidia ran over his dog. That is my entire experience with the site over many years.


The only thing worse than NVIDIA's Linux support is AMD's and Intel's. Then again, the issue that made me switch to NVIDIA was multi-monitor support, which is probably not a problem for the vast majority of people.


One might say this site is only semi-accurate.


Sorry, what does this story mean? Is Nvidia doing retrocomputing - writing code in Pascal and it's out on tape? And also Silicon? I speak geek but not this dialect of geek.


So there are two sources of the "tape" in "tape out". Back in the good ol' days (pre-1980s), chip designs were done on paper by the engineers and then transferred onto rubylith tape (http://tingilinde.typepad.com/.a/6a00d83451b54669e2017ee846b...), mostly by women. The rubylith was then moved to the fab, where it was used as the mask for photolithography (the start of manufacturing the chip).

The other source of the name is that from the 80s through the 90s, as EDA tools came into use in the industry, designs were done entirely on computers, and the final file containing the information for the fab (.GDS2) was put onto a storage medium (tape) and sent to the fab.

In both cases, it is basically the final design step before you wait however long for the silicon to come back from the fab. It is also a huge stressor, since the fabrication runs are prepaid, so when you are approaching the tape-out date it is typically extreme overtime for everyone involved.
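For anyone curious what that final fab file actually looks like in practice, here's a toy sketch that writes a (completely made-up) GDSII layout using the gdspy Python library; a real tapeout database is generated by the EDA flow and holds billions of polygons, not a hand-drawn rectangle.

    # Toy GDSII write-out with the gdspy Python library.
    # The cell name, layer number, and geometry are invented purely for
    # illustration; this is not any real design.
    import gdspy

    lib = gdspy.GdsLibrary(unit=1e-6, precision=1e-9)  # micron units, nm grid
    top = lib.new_cell("TOP")

    # One metal rectangle on an arbitrary layer. Real layouts come out of
    # place-and-route tools, not hand-drawn shapes.
    top.add(gdspy.Rectangle((0, 0), (10.0, 2.0), layer=1))

    lib.write_gds("chip.gds")  # this binary file is what goes to the fab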

I should probably get back to work; 2 1/2 months to tape out myself... hopefully I will get some sleep between now and then.


Thanks for the reply; it both answers the question and is interesting in its own right.


A little Googling reveals that Pascal is the code name of Nvidia's next-gen GPU. (I know, I was kind of excited too)


I'm not sure if you're straight-faced when you say "kind of excited". I'm kind of... "is this useful to anyone?"

Is this going to make much difference to tasks such as getting more frames out of games like Fallout, or crunching more compute tasks like cracking password hashes?


Tapeout is when your ASIC design is fabricated.


Thank you for asking. It would have taken just one or two sentences of explanation on that site to give the context, especially for something "important" (not sure it really is, in the larger sense).


It turns out the UCSD Pascal System is the last of its kind:

https://en.wikipedia.org/wiki/UCSD_p-System

Unless you count Wirth's Oberon RISC CPUs:

https://www.inf.ethz.ch/personal/wirth/ProjectOberon/index.h...


It means there's an excessive amount of inside baseball terminology for people paying too much attention to the machinations of chip manufacturers. It all seems very important.


Charlie is known to be anti-Nvidia for some reason. Best to ignore his articles on Nvidia.


Nvidia is anti-consumer (cheating in tests, pushing proprietary tech, and fighting against interoperability), so I don't blame him.


Two wrongs don't make a right.


It seems like he has a reasonable chain of logic for every component but one: nobody has any evidence that the "BGA" component in here was specifically for Pascal, Volta, or anything at all.

So it certainly could indicate that they don't have real silicon yet for whatever the component involved is, but nobody I've seen has presented a compelling argument for it being Pascal in particular.


Nvidia cheating, lying, and using dirty tactics? Nothing new for AMD, just a repeat of Intel's 2000-2006 phase.


AMD doesn't own Nvidia.


Nvidia would do well to cozy up to Intel at this point, and pray for an acquisition. I can't see such an inept company surviving for much longer on its own.


It's valued at 14 billion dollars. Hardly distressed or inept.


If you've kept up with their history at all, they're one of the most inept companies out there. They seem to succeed in spite of themselves.


I have been using Nvidia cards since the original GeForce 256 in 2000. I purchased one of the first ATI 8514 Ultra cards in the early 90s. I own two R9 290s, and at work I use Tesla K40s. I follow GPU computing avidly, as I do machine learning using mainly OpenCL (for my sins), precisely because I want choice in the market. It would be much easier for me to do CUDA; I resist for reasons of rejecting proprietary APIs.

I know exactly what I am talking about. I don't forgive Nvidia its sins, and I actually prefer AMD, but I am not about to lie to myself that Nvidia is somehow an unsuccessful company when it is worth 7x more than AMD, and AMD also includes an x86 line. It's a no-contest scenario in the eyes of the market, much to my chagrin, but I cannot deny the reality:

Nvidia is crushing AMD.



