Intel kills off the 10nm process? (semiaccurate.com)
192 points by douglasfshearer on Oct 22, 2018 | 102 comments



@intelnews https://twitter.com/intelnews/status/1054397715071651841 “Media reports published today that Intel is ending work on the 10nm process are untrue. We are making good progress on 10nm. Yields are improving consistent with the timeline we shared during our last earnings report.”


Charlie at SemiAccurate has broken many, many items of uncomfortable news about Intel and others that have later been proven fully rather than merely 'semi accurate'.

For me, at least, he has earned trust, so seeing this official denial leaves me conflicted.


Unfortunately... the full details are likely behind the paywall. Charlie is usually "fully correct" behind a paywall, but the "teasers" are less accurate and may have a bit of hype or hypotheticals in them.

Which is fine: I realize he writes like that so that he can get paid. But still, it means you have to take the rumors he hands out "for free" with a grain of salt.

Chances are: "10nm is cancelled" is closer to "some version of 10nm has been cancelled: Intel may be making progress on EUV or some other technology which they will THEN call 10nm in the future." But those details are what matters with regards to Intel's Ice Lake release plans, as well as AMD's plan for Zen2 / EPYC Rome in the upcoming years.


Intel can call a tweaked 14nm "10nm" and probably will; those designations are just marketing now. The real issues are: Will there be an even greater delay before a significant process improvement, meaning higher density, lower power consumption, and higher clocks (greater than single-digit-percentage-point tweaks)? Were billions of dollars of investment largely wasted? Will it require even more investment to get the next node out?

By the way, even if it means more delays and more money, it could still be an improvement for Intel if their previous path wasn't working well. But they would be motivated to hide it anyway because they would have hidden all the wasted time and money up to this point and wouldn't want to have to own all that.


Yeah... I don't trust Intel to tell the truth at this point. There were the recent i9 Principled Technologies shenanigans, and before that the "hey, look at this new CPU coming" demo, where they forgot to mention that what they showed was running on an industrial chiller. Sure, you're still working on 10nm, I believe you. If by "still working on it" you mean you have a skeleton crew figuring out how to dismantle and sell or repurpose your 10nm fabrication line.


Not to mention "launching" 10nm with 2 low speed cores on a 4 core die and a busted GPU.


Note that they don't state that they still intend to use their 10nm process for volume production.


Read this together with the Oct. 18 news:

Intel Split Technology and Manufacturing Into 3 Divisions https://www.game-debate.com/news/25940/intel-split-technolog...

As of this week, Intel has announced it’s splitting its manufacturing group into three distinct segments in a massive shake-up aimed at bolstering its development.

The move is tied into the departure of long-time senior VP Sohail Ahmed, who’s been with Intel for 34 years and is currently the head of technology and manufacturing at Intel. Ahmed will be moving on shortly, and Intel will be using this moment to restructure its business.


I dreaded these types of restructurings when I was at Intel. It was interesting watching morale plummet while, on the surface, everyone stayed optimistic and in denial about the business regressing. Deep down, though, people just couldn't shake the feeling that large layoffs were on the horizon, and sure enough they came.


When were you there?


If this is true, then we will likely see MacBooks with 7nm A12+ in 2019 or 2020. They might surpass Intel on raw performance if TSMC's 7nm process is good (and Apple paid a lot of its R&D costs).


Aren't the A-series chips ARM? Windows has had quite a bit of difficulty getting regular desktop stuff to work on ARM (x86 emulation is slow).


Yes, they are. And yes, Windows does.

That MS has a working x86 compatibility layer makes me think that Apple could well have something like this too. If the new chips have performance comparable with Intel's, even a 5x performance hit for "legacy" apps might not be catastrophic, since developers in the Apple ecosystem are usually fast to update their stuff and cater to the demands of the early adopters with lots of money.

Apple has gone the emulation route with the PPC/x86 transition before. With their tight grip on the ecosystem, including the development tool chain, I think most software will be updated very quickly.

This is quite different from the situation with Windows software, most of which feels like the devs hate it, the platform, and themselves.


The emulation route was 68k to PPC (classic apps running on OS X), for x86 they went with fat binaries, unless I'm misunderstanding you?


68k to PPC had emulation, but also PPC to Intel with Apple's "Rosetta"

https://en.wikipedia.org/wiki/Rosetta_(software)


I stand corrected. Thank you.


There were 68k/PPC fat binaries too, though this was as much about compatibility with older systems as it was for performance (the emulated 68k system on PPC was quicker than any 68k hardware).


I have been enjoying Windows since 3.0, even though I had a Linux zealot phase about 20 years ago. Maybe I have Stockholm syndrome.


The first Intel powered Macs shipped in early 2006 with 10.4.4, but Intel compatible builds of OS X date back to 2001/2002 internally.

https://www.macrumors.com/2012/06/10/a-bit-of-history-behind...

A lot of the guts are shared with iOS, which runs natively on those chips already. I think it's safe to assume they have internal macOS builds running on their A series processors as well. They've probably been testing that for years now.


> The first Intel powered Macs shipped in early 2006 with 10.4.4, but Intel compatible builds of OS X date back to 2001/2002 internally.

NeXTSTEP/OPENSTEP ran on x86 already. The early Apple releases, called Rhapsody, were released for x86 and PowerPC.

It's possible they dropped support for x86 in early Mac OS X Server 1 releases (1999/2000), and readded it around the time of Mac OS X 10.0 to 10.2 (2001/2002), but I expect there was support in the codebase for the whole time.

It may have been practically unmaintained (and untested, and maybe even without ensuring it compiles), but I doubt they actually removed the x86 code that was there.


There was a big kernel change from Rhapsody and OS X Server 1 to Mac OS X Public Beta. Rhapsody/OSXS1 and NeXTSTEP had used Mach 2.5 with the BSD 4.3 personality (Rhapsody/OSXS1 was, essentially, just re-skinned NeXTSTEP without any compatibility with the classic Mac OS API ("Blue Box", later "Carbon")). With OS X Public Beta, the kernel was replaced with a new one based on Mach 3, with a new Unix personality built by porting FreeBSD's upper layers onto the Mach microkernel.

It's not inconceivable that a lot of the previous x86 compatibility was lost or broken at that time. Certainly anecdotes from the Marklar x86 skunkworks team indicated that they spent about 2 years porting and fixing a lot of bugs, which had to be submitted to the normal kernel team via patches that were very carefully written to seem as though they were requesting changes related to niche PPC behaviours (for instance, the PPC was bi-endian, so you could plausibly start submitting changes related to various endian brokenness as if you had tried to use that).

And of course, other layers -- Quartz, Carbon, I/O Kit, and a bunch of others, had never existed in NeXTSTEP and may have needed their own porting work from scratch. NeXTSTEP ran on x86, but a lot had changed since then.


But more important than a lot of the actual porting work was that the overall portability work had been done. Gone were most of the inline assembly / platform-specific hacks and optimizations that earlier iterations of Mac OS and 68k NeXT code likely had.


Right, if it was unmaintained (which I could easily believe it was!) I wouldn't be at all surprised by it being a multi-year effort to get it working again with all the big changes made to OS X in that time period.

And things that were rewritten from scratch for Aqua (in 10.0, like the entire graphics stack) will have never run on a little-endian system, and those alone would be a major porting effort.


I'm thinking about everything non-Apple that runs on Apple desktops. I have no doubt they have the expertise to build the Apple stuff for ARM.


I could see Apple working around the emulation speed issue by entering an agreement with AMD where Apple designs 95% of the chip, has AMD design an instruction decoder to translate x86(-64) to native uops for the most common instructions (only falling back to emulation for uncommon instructions), and has AMD "manufacture" the chip (so it technically falls under AMD's x86 license).

This would allow Apple to avoid much of the overhead of software emulation and I'm sure AMD would be happy to play along since it gets them a (thin) slice of Apple's margins which they would otherwise not have. After a few generations when x86+ARM fat binaries are the norm in the MacOS ecosystem they could drop the x86 decoder (falling back to software emulation only) and presto.


Apple has far more control over their platform than they did in previous platform migrations. They'll more likely announce a 'little checkbox' in the developer tools, put minimal effort into emulation performance, and mandate that applications going forward comply. Problem solved.


Mac OS is not Windows :) Apple's never shied away from migrating platforms when they deem it useful, and having things running in a secret lab for years.


Last time they migrated from a niche instruction set to the dominant instruction set in the PC and server space however. That is not comparable to a migration from the dominant instruction set to a niche instruction set, which a migration from x86 to ARM would be in the current computing landscape.

Last time, the Mac platform basically existed in isolation, thus the only problem was that apps for this platform had to be recompiled. This time, the Mac is no longer isolated - millions of developers write client- and server-side applications on Macs that are to be run on mostly x86-based servers, and their toolchain implicitly relies on the architecture being the same on dev and prod machines. That is not to say that it's impossible to change the architecture of the dev machines to something else - it's just a huge additional drawback that was not to be considered at all back then in the PowerPC->x86 transition.

These two facts tend to get downplayed or overlooked pretty frequently when it comes to the "ARM-based MacBooks" discussion, but I consider them fairly substantial and they dampen my enthusiasm for such a transition quite a lot.


To be fair, how many developers with Macbooks are actually writing platform-specific code? I'm under the impression that most Macbook-wielding developers are web developers and work mostly with JavaScript, Python, Ruby, etc., which all have ARM runtimes available. Even the "IDEs" (Atom, VS Code) are written in JavaScript nowadays, or at least in Java with minor C parts (JetBrains), which is also available for ARM platforms. Also, none of the web stuff is ever running on Mac OS; it's almost always Linux, maybe some Windows IIS.

There are also a lot of people only using their Macbook for presentations, text writing, or even only surfing the web. Apple's own office suite will be ported to ARM when they change their CPU architecture, Microsoft has Office for ARM available (or at least in the pipeline for 2019), and LibreOffice is available for ARM as well.

If Apple really wanted to do this, they would release their small Macbook (non-Pro) with ARM first and then describe a plan to change to ARM for the Macbook Pro line within a few years. No need for a transition period where emulation takes place, everything important is already ARM-ready. The iMac Pro is another thing, that might actually be harder but I imagine manageable if Adobe etc. are willing to invest/to be paid to support ARM.


>To be fair, how many developers with Macbooks are actually writing platform-specific code? I'm under the impression that most Macbook-wielding developers are web developers

There are of course the millions of iOS developers.

And besides that, in any conference, from C to Rust to C++ to Java, you'll see tons of Macbook-wielding developers, often the majority.

And when it comes to keynote speakers at conferences (as opposed to audience) the PC laptop is the exception as opposed to the norm...


> There are of course the millions of iOS developers.

Which is already ARM, and switching Mac OS to ARM could even be an advantage, so I don't see your point?


The point goes to the claim "I'm under the impression that most Macbook-wielding developers are web developers"...


Everyone that is targeting OS X, iOS, tvOS and watchOS.

The developers that Apple cares about.


ARM wouldn't be a niche instruction set in the current computing landscape. It might even be the dominant one, if we consider how many people are carrying around multi-GHz ARM computers in their pocket every day.


But a MacBook is not a smartphone. A MacBook is a laptop, which is a portable version of a PC, which clearly has x86 as the dominant instruction set today.


That's fair enough.

I've done a fair amount of development work on both an ARM Chromebook and a Raspberry Pi, and I didn't run into any major issues.

It depends heavily on your tech stack, though. I found that developing on ARM and deploying to x86 was no big deal with Node, Python, and Go. Your mileage may vary with other languages and VMs.
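For what it's worth, the Go part of that is just an environment-variable switch at build time; a minimal sketch (the package path and output name are made-up examples):

    # on an ARM dev box, produce an x86-64 Linux binary for deployment
    GOOS=linux GOARCH=amd64 go build -o server ./cmd/server
    file server   # should report an x86-64 ELF executable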


1. Apple's ARM computers have insanely fast CPUs/GPUs in them, kind of way more than they need.

2. https://www.theverge.com/2018/10/15/17969754/adobe-photoshop...

3. Apple rewrote all of their apps (or killed them), so they're bound to be cross-platform. Why else start from scratch and release with fewer features? Final Cut Pro X, iWork, Logic Pro X.

90% of Apple's sales are for ARM computers; I bet they'd love to only make one OS, it would save loads of money.


ARM is not yet dominant in the desktop computing landscape, but it might become so. Apple are notorious early adopters and, developing the chips themselves, have some great insight on the potential.

They are also in a position to isolate themselves again now.


ARM is not niche. It is the dominant platform on mobile.


But a MacBook is neither a smartphone nor a tablet, which is what the term 'mobile' refers to. A MacBook is a laptop, which is a portable version of a PC, and in that landscape, ARM is a niche instruction set.


I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode, or upload fat binaries with ARM and x86 machine code (NeXTSTEP aka OSX did this already a quarter century ago), or maybe even statically translate x86 machine code to ARM on the app store "server side".

Most command-line code installed through Homebrew is compiled on the user's machine anyway, which leaves the closed-source and legacy UI applications not distributed through the app store (but by the time Macs switch to ARM, OSX will probably forbid running those anyway).
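The fat-binary plumbing, at least, already exists in Apple's toolchain; a minimal sketch of how a universal binary gets assembled with clang and lipo, assuming, purely for illustration, an SDK that can emit an arm64 macOS slice (file names are made up):

    clang -arch x86_64 -o hello_x86 hello.c
    clang -arch arm64  -o hello_arm64 hello.c       # assumes an arm64 macOS SDK exists
    lipo -create -output hello_universal hello_x86 hello_arm64
    lipo -info hello_universal                      # lists the architectures in the fat binary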


> I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode

LLVM bitcode remains architecture-specific (if not platform-specific); you cannot just recompile x86 bitcode for ARM.

> or upload fat-binaries with ARM and x86 machine code (NextStep aka OSX did this already a quarter century ago)

That doesn't obviate the need for a transition compatibility layer, complex software can take years to port to different architectures.


> I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode, or […]

LLVM bitcode is platform specific. It deliberately isn't designed to be portable.


x86 disassembly is Turing complete. Even a modestly aggressive optimizer can defeat decompilation (and disassembly).


A huge amount of x86 is no longer patented, and those instructions make up the bulk of common x86 code. That could drastically reduce the overhead.

Even if they went the full-blown emulation route, A12 is almost an order of magnitude faster than the old designs Windows was running on.


I'm thinking this is not true. This is the core of Intel's business, I'd be shocked if they decided to kill it off. They may miss a deadline, maybe two - however if they want to survive as the behemoth they are today, they'll have to deliver.


I think the idea is that they will eventually create something called 10nm but it will be very different from the current process in development.


They've already missed at least two 10nm deadlines.


> They may miss a deadline, maybe two

They've missed like at least 3 already. 10nm was supposed to be ready in the second half of 2015. Now their most optimistic schedule is late 2019.


>They may miss a deadline, maybe two

Too late for that, more like missing 5 deadlines at this point. 10nm at Intel is a disaster, I am not surprised they would scrap the whole thing.


Intel is still working on 7nm. That will be their next big play.


I don't see any sources in SemiAccurate's article. Also, they tend to be heavily biased and callous with regard to Intel.


He called them internal "moles" within Intel.


The most interesting question is whether the dead Intel process node should be blamed on bad choices (e.g. deferring adoption of EUV lithography) or on incompetence, and also on shortsighted financial management or on optimistic/clueless technical leadership: we can live without Cannon Lake CPUs, but where will Intel be in 5 years?


TSMC 7FF isn't based on EUV either, and their 7+ is only going to use it on 4 "non-critical layers" (i.e., they're still testing it out).

The best speculation I've seen is this:

https://wccftech.com/analysis-about-intels-10nm-process/

> Our sources tell us it had to do primarily with Intel overextending too early. SAQP or Self Aligning Quad Patterning is the technique the company used to make its 10nm process and it was the first in the industry to attempt to do so.


Isn't GF's 7nm process also dead? [edit: corrected now]


Argh, thanks, I keep typing GF when I mean TSMC. I haven't had enough coffee. Edited my post to correct.


Afaik, only Samsung has figured out EUV for mass production?


I found this /g/ post of a while ago interesting: https://pastebin.com/TbtYmtyB


So is this legit? Mods have been killing this story with "NSFB" flag on /r/intel almost as fast as Intel killed 10nm for the past few hours. Not safe for who's business exactly in this case?


Why couldn't Intel get this working when other fabs could? Hubris, lack of talent, or really bad decisions?


My understanding is that what Intel targeted for 10nm was very ambitious, so much so that it beats out what TSMC calls 7nm. In other words, other fabs haven't gotten it working yet either.

Process node names have been detached from the reality of corresponding to specific feature sizes. It's up to the companies to figure out what performance they want to label 10nm versus 7nm, and Intel's processes have generally been more aggressive than the others. What's changed is that Intel has gone from unreachably ahead to merely in the competition.


I think Intel is starting to fall behind a bit, but perhaps not as much as it sounds from the headline; remember, their "10 nm" process had better actual sizing numbers than what some others were calling their "7 nm" process. That they failed and will only come out a little ahead is a big problem for them, but it's not like they've failed so badly that they're a generation behind.


From what I understand, Intel's 10nm process is roughly the same as 7nm processes from other foundries, and Intel's 14nm is more or less 10nm on someone else's scale.


I see this argument presented over and over and over again but never any supporting information to prove that claim.


Oh, I can help with that; the numbers — which may not be up to date, for obvious reasons — are easy to find: https://www.semiwiki.com/forum/content/7602-semicon-west-int...

Note that there isn't a single widely accepted way to say who has the better density, but this table sums up the various metrics pretty well.


I don't know how much this is still true but traditionally Intel processes have better drive currents but more restrictive design rules than their competitors' at a given node. So I'm skeptical that their densities would equal TSMC's in practice. This isn't to deny that Intel 10nm and TSMC 7nm are roughly equivalent, just to say they make different trade offs in a way that chart doesn't cover.


Density is not really an important metric to end users though. What matters is power consumption and achievable clock speeds but these are very difficult to directly compare between processes unless someone has made the same design on both (as has happened with several phone SoCs dual sourcing from TSMC/Samsung).


Density is important for the fab. Power consumption and clock speeds are something that can be tuned per design even for the same process.


Well yes obviously it's important to the fab. Higher density allows them to fit more dies on a wafer, sell for slightly higher margins, whatever. Great for Intel but again absolutely zero reason for end users to care. Pricing is basically completely divorced from the manufacturing costs anyway.

When someone says something like "Intel's 14nm is as good as TSMC's 10nm" I think most people would expect this to be talking about the performance of the chips being produced.


> When someone says something like "Intel's 14nm is as good as TSMC's 10nm" I think most people would expect this to be talking about the performance of the chips being produced.

One could reasonably assume that when discussing this on a consumer-oriented forum or publication.

But this is an article (from an author with a spotty record at best) about the state of the industry, not about whether or not you'll be able to clock your CPU a few MHz higher.


Density is critically important, because density is directly related to cost: a chip with the same performance in half the area is roughly half the cost to produce.
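To put rough numbers on that, here's a minimal back-of-the-envelope sketch using the common dies-per-wafer approximation; the formula choice and the 100/50 mm^2 die areas are my own illustrative assumptions, and it ignores yield and scribe lines:

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # gross dies (wafer area / die area) minus an edge-loss correction term
        r = wafer_diameter_mm / 2
        return math.floor(math.pi * r ** 2 / die_area_mm2
                          - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(dies_per_wafer(100))  # ~640 candidate dies at 100 mm^2
    print(dies_per_wafer(50))   # ~1319 at 50 mm^2, slightly more than double

So halving the die area a little more than doubles the dies per wafer before yield even enters the picture, which is where the cost argument comes from.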


https://www.semiwiki.com/forum/content/7602-semicon-west-int... See the table "7nm comparison"

https://www.semiwiki.com/forum/content/7544-7nm-5nm-3nm-logi...

And: https://wccftech.com/analysis-about-intels-10nm-process/

tl;dr: Intel 10nm gets 106.1M transistors per mm^2; TSMC 7FF gets 96.49M. Intel 10nm has an HD SRAM cell size of 0.0312 square micrometers; TSMC 7FF's is 0.0270.

Intel gets a few more transistors per area, TSMC gets more SRAM per area, but on balance, they're pretty similar. From the second article:

"From figure 3 the 4 processes have similar overall process density. GF has the smallest CPP x M2P x Tracks, Intel has the highest MTx/mm2 value and Samsung has the smallest SRAM cell size. The size of a design in each of these processes will therefore be design dependent and I would not judge any of the four processes to be significantly denser than the others. In terms of relative performance, we have no way to judge that currently."

[Note: Updated this post to quote the TSMC numbers instead of the GF numbers, since TSMC is shipping and GF has pulled the plug on 7nm]
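For a rough sense of scale, here is just the arithmetic on the two comparisons quoted above (nothing beyond taking ratios of those figures):

    # ratios of the quoted density figures, purely illustrative
    print(106.1 / 96.49)    # ~1.10: Intel quotes ~10% more logic transistors per mm^2
    print(0.0270 / 0.0312)  # ~0.87: TSMC's quoted HD SRAM cell is ~13% smaller by area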


It seems as though nm has stopped being a useful unit of measurement in this domain, and we should switch to something else--or several somethings else.


Does SRAM size here have a practical effect on cache speed? Or is it mostly just down to real estate budget?


The numbers here are just real estate - speed is going to be a big "it depends" that can't really be predicted just from area.

(in general, cache speed is more affected by size, and that's an architectural decision -- see, for example, Intel's move to a 1MB L2 in Skylake-X instead of a 256KB L2 (11 cycles -> 13 cycles, but Intel did a lot of work to try to speed up the cache to reduce the pain)).


The graph in TFA does show Intel's 14nm as having better density than competing 14nm nodes, though the lead seems to be less than a full process node.


It seems kind of like how incorrect news about science becomes "common knowledge". This person probably saw someone else post this, and now they're just parroting it back. That other person might have had it from a news article, or maybe they were just parroting another comment in turn. Even if it was from the news, was that news sourced from a sophisticated engineering perspective, or was it a bit of marketing fluff? Hard to know. I'm not saying whether this is correct or incorrect either, just making a comment about the sources of knowledge in general in the age of internet comments.

But it would be pretty quiet on here if people only talked about what they actually know.


This is a classic ad hominem argument.

There was a claim made, someone asked for evidence, and then a third person provided the evidence. Done. This is a good way for discussions to work. What would really kill the discussion is if people stopped and gathered evidence first, and compiled it into every comment they made. I trust people are skeptical enough not to believe garbage, and if they’re curious they can ask questions or do additional research themselves.


Other fabs couldn't either. Intel's 10nm depended highly on EUV; the only working 7nm (TSMC's) doesn't use EUV at all yet. The first EUV fab will be much better than any existing one today, and it still seems like Intel will claim that prize.

Though TSMC certainly isn't letting them have it for free.



Neither; most probably a side effect of starting before the others, when research wasn't finished, so they dug themselves into a problem too deep to be commercially viable while others were growing on smaller parts of the market. Now that EUV is polished and stable, the competition can just jump on the bandwagon while Intel has to kill its old pipeline.


Can someone who has access tell us what information the paywalled article is using to make this claim? EDIT: an r/hardware post is claiming the article is based on anonymous sources that the author knows at Intel. The article claims Intel is abandoning the 10nm roadmap --> no Ice Lake 10nm+? Intel also had some issues with 14nm, which is why Broadwell and Skylake came out at nearly the same time. Whatever the case, Intel will probably have to respond to this.


Does this mean they'll skip to 7nm? I don't believe it.


Process node-size names are more or less marketing games these days anyway. What TSMC calls 7nm is roughly equivalent to what Intel called 10nm.

It's likely that Intel will roll out a node at that size, but being forced to abandon their previous attempt is a crushing blow that has set them back years, and will likely set them back years more to come.

It takes years to develop a node size, and they have had to throw out most of their work.


Pressure from AMD Ryzen is real. Exciting to see this happening again.


Yeah, no kidding. The king of kings, the 9900K, is only marginally faster than the 2700X in almost every benchmark, with the 2700X being a lot cheaper. If I were building a PC today I would pick AMD without any hesitation.


They've talked about skipping it since 2012: https://youtu.be/NGFhc8R_uO4?t=3072


Most likely it does. They've been talking about skipping 10nm altogether for over a year now.


I don't like the choice of the word "knifing". British English or something?


That raises the question of why something as important as CPU development has been left to a private company. Wouldn't it be better to nationalise Intel and make sure it develops CPUs that serve the people and help advance society? Corporate interests are not always good for humanity, and I think it is time for the state to step in. Intel has had free rein for too long.


Your post reads like trolling, but if you're actually interested in the real history of this, semiconductor manufacturing has been a joint industry/govt. thing for almost its entire lifetime. As one starting point, read about SEMATECH: https://en.wikipedia.org/wiki/SEMATECH

DARPA has long played a huge role in furthering US semiconductor capabilities.


Look, we have state police and an army, and then see how disastrous private healthcare turned out to be.

Maybe DARPA should step up their game then: instead of nationalising Intel, they could nationalise its IP and work on an affordable and _fast_ CPU for the people, since Intel fails to deliver.

We have yet to see a private company land a human on the moon. Imagine what CPUs we could have today if the state really took over.


I'm not trying to argue with you, because you still sound like you're trolling. I'm providing some information about this topic for others so they have a better grounding for reasoning about the issue. Semiconductors are one of the most interesting grounds for discussing the role of private industry and the state in advancing the state of the art in technology. See, for example:

http://mitsloan.mit.edu/shared/ods/documents/?DocumentID=461...

https://www.nature.com/articles/s41928-017-0005-9

and

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1545155

[Full disclosure, I'm married to one of the authors.]


If we had a single organization developing our national CPU manufacturing process, it would likely have been staffed much like Intel, since Intel is the big gorilla and the people best able to climb the organizational ladder ended up there. Thus, I think it would be more accurate to say that this is an example of why you DON'T have a single, national organization do something as important as this.


I am not saying there should be one. Each state could have its own division / separate entity to enable competition.


yeah, and that would end up about as glorious as state pension funds did.


We can look to NASA as an example of how well that might play out...


I am taking your comment as a concise way to imply significant inefficiency and poor efficacy at NASA. If I am misreading, please let me know.

Such polar implications about the efficacy of private vs. government (whether in R&D or other domains) represent reality poorly, and in fact your example contradicts itself with the enormous amounts of beneficial R&D NACA and NASA did and which industry built upon. This doesn't excuse the extremely wasteful behavior of NASA (e.g., SLS); only to say that you can't paint it all with the same brush of "wasteful, inefficient, government".

It is necessary to have capable people acting with good judgment to do "good and effective" work. Neither government nor industry have a monopoly on people of mediocre effectiveness or judgment. It is true that the government doesn't have market pressure to call it on the carpet for wastefulness. But that's the same attribute that enables it to undertake moonshots or do hard, expensive R&D that benefits society as a whole.


That's fair. I don't mean to imply that the public sector can't do anything well - just look at the Canadian health care system vs the US one - hardly perfect, but substantially better on the whole. However, the public sector often has less pressure to do things efficiently, or they have additional requirements, like we see with NASA splitting contracts up among various states for political reasons.

I don't think semiconductors would be better handled by the public sector - even if one could somehow get political support for that, which seems very unlikely in the US.


>>just look at the Canadian health care system vs the US one - hardly perfect, but substantially better on the whole.

The thing is, you say "on the whole" but I am sure you mean one or two parameters. I am not from the US and I abhor the monstrosity that is the US healthcare system, but no one here can deny that the US does have the absolute best healthcare in the world, as long as you can afford it. No wonder people from all over the world come to the US for difficult operations, because they usually get access to the best techniques, best doctors, and best equipment. The only issue with all of this is that it costs a tonne of money and it's ruining American society.

I don't think it's difficult to see that other sectors are the same - there are things that NASA excels at, and there are things which almost any space-oriented startup can do better. But it always depends on what sort of thing is a priority for you.


> I don't think semiconductors would be better handled by the public sector - even if one could somehow get political support for that, which seems very unlikely in the US.

Again, take a look at the SEMATECH example discussed above. I understand it to be considered a successful undertaking on the whole. For a present-day example along similar lines, see DARPA's Electronics Resurgence Initiative ("ERI").



