* very cash heavy (EDA tools, IP licenses, engineer wages, fab money, etc.)
* very challenging technically (balancing computation, power, size, etc.)
* lots of work needed on the software side (compiler, SDK, optimized libs, etc.)
It is in a totally different world compared to an MVP or the lean startup concept.
A hardware (circuit-board level) startup is already challenging (cash heavy, logistics challenges, etc.); a chip startup is 100x more so. The latter takes more than a hundred people and hundreds of millions in investment just to get started.
I mostly agree, except that I'm not sure I'd say it's very challenging technically. The logic design itself that I've seen is mostly extremely simple, just extensive. Verif is a bit fiddly but not really that hard, compared to e.g. writing a compiler or automating GUI testing. I don't know anything about physical design though; maybe that's really hard?
I 100% agree with your other points though. It costs an absolute fortune, and everyone always underestimates the software effort. It's probably 5-10x the hardware effort, depending on what your chip does.
Also another thing I didn't anticipate is how backwards all the tooling and people are in the chip design space. CI is a novel concept. I'm sure there are companies not using version control. Everything runs on TCL which is on par with BASIC. SystemVerilog is not a good language (though it does at least have an amazing reference manual). The standard verification method (UVM) is ok from an actual verification point of view but basically a who's who of worst practices from a programming point of view.
Maybe it's different elsewhere but I was amazed how much convincing I had to do to get people to adopt practices that are just taken for granted in the software world, like auto-formatters. There are a lot of luddites.
The only "ok this is actually pretty good" thing I've seen is formal verification which is basically magic.
In chip design, there are a lot of non-software people who have been burned by software people. Given the amounts of money flying around, there are always a lot of charlatans looking to take a chunk out of you.
We used to do CI--a complete CI run on our chip took 4 days. The library took 3 weeks to go through CI. Version control sucks unless your data is text--generally that's only your Verilog. Tcl is a decent enough language--the problem is that the stuff you write is no more than a one-off and the real product is your chip--so nobody is going to reward you for a "good" script in any language. SystemVerilog wasn't meant to be a good language--it's EDA vendor lock-in meant to extract maximal money from hardware people.
And I've seen more verification in hardware before shipping a product than I EVER have seen in any software role. Yes, even those with "good" testing.
A sim run for a few of my customers with very basic chips might take 48-72 hours on a crazy fast machine. The designers are also SUPER concerned with getting everything right--with chip design there's no 'fixing' broken parts. If you don't get the design right when it goes to the fab, that entire run is ruined. Millions of dollars and, at the bare minimum, months of setback. It could kill the project or even the company.
So rather than just say "ok we won't do simulations of the full chip in CI; we'll just do module testbenches which are much quicker" you scrapped the whole thing?
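For what it's worth, by "module testbenches" I mean something at roughly this scale -- a minimal cocotb-style sketch against a hypothetical 8-bit adder with ports a, b and sum (the port names and tool choice are just for illustration), which runs in seconds rather than days:

    # Minimal cocotb-style module testbench sketch for a hypothetical 8-bit
    # combinational adder. Cheap enough to run on every commit in CI.
    import cocotb
    from cocotb.triggers import Timer


    @cocotb.test()
    async def adder_smoke_test(dut):
        for a, b in [(0, 0), (3, 4), (255, 1)]:
            dut.a.value = a
            dut.b.value = b
            await Timer(1, "ns")                      # let combinational logic settle
            assert int(dut.sum.value) == (a + b) % 256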
> Version control sucks unless your data is text--generally that's only your Verilog.
So you don't use version control because it "only" works with the most important things in your repo?
> Tcl is a decent enough language
Sure, compared to Bash or BASIC or other ancient terrible languages. It's pretty terrible compared to any vaguely good language. Even PHP or 90s JavaScript are leagues better.
>the stuff you write is no more than a one off and the real product is your chip--so nobody is going to reward you for a "good" script in any language.
Yeah, I've heard this from people at my company too. So do you really completely throw away your entire infrastructure and start from scratch for every chip? If not, it's hardly a "one off". And yes, people are going to reward you - your future selves will say "thank god you didn't choose such a bug-prone language that I'd spend all my time fixing basic type errors".
> I've seen more verification in hardware before shipping a product than I EVER have seen in any software role.
For obvious reasons. Chips are much simpler than software and therefore way easier to verify, and missed bugs are extraordinarily expensive. I bet FPGAs don't get as much verification.
Your post is exactly the sort of attitude I was talking about - thanks for the demonstration!
EXACTLY. This lazy, backwards approach to tooling and workflow is why I jumped ship from logic design to software despite having an EE degree. VHDL/Verilog tooling is heinously terrible and everyone in the industry seems to be actively opposed to doing anything about it.
And your post is a great example of why I still get paid a lot of money to come clean up when VLSI design goes wrong. :)
You think Verilog/VHDL and its simulations are the most important thing to a VLSI designer. That's a very narrow lens that clouds your thinking and is only applicable in the world of ASICs (that's not really true there, either, but I'm being generous).
Verilog/VHDL is of very little importance to chips with lots of analog or RF blocks. And, for extremely large chips, Verilog/VHDL generally takes a back seat to things like timing closure (and probably power consumption analysis) since once the Verilog/VHDL is written and tested, it's STATIC. As a VLSI designer, you have to close a block every 2 weeks and once it's closed it needs to stay that way or you'll never ship that chip. Repeat for 18+ months.
In that environment, version control is a nice way to validate that I haven't accidentally mangled something, but it doesn't have the same force as it does to software developers who have an infinitely malleable, continuously changing codebase. In fact, source control often gets in the way because non-text merging isn't arbitrarily resolvable. This means that you don't have nice distributed version control, you have the old-school Subversion-type of source control which locks files.
CI doesn't work in VLSI the way it does in software. Tests aren't infinitely fractal. Sometimes that job that creates the data for your static timing analyzer takes 5 hours and that's just how it goes and that's just the start of your pipeline. If you touch a header file in software, a long compile is a couple minutes. A couple of hours is an absurdly large software codebase. In VLSI design, a fundamental library component may take hours to check timing and generate timing values if your fab house generated new models. And that's just one of hundreds of library components. If your fab releases a new transistor model deck, it will take weeks for that to propagate through your design.
As for languages, Tcl is FINE. Creating scripting hooks is absurdly more complicated than you think (look at the grief KiCad has keeping Python supported--Blender had to create a whole GUI library from scratch to have Python hooks--Altium's Python scripting is still a horribly broken bodge). Tcl means that you get scripting hooks that work. I haven't seen any other language do it better yet. And the fact that you haven't been burned by floating point in VLSI and think Javascript, which only has floats, is a better language says something ...
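For anyone who hasn't been burned yet, here's the classic binary floating-point trap, shown in Python for illustration (JavaScript's numbers have exactly the same representation issue), the sort of thing that bites a script comparing timing or current numbers:

    # Decimal fractions like 0.1 aren't exactly representable in binary floats.
    from decimal import Decimal

    print(0.1 + 0.2 == 0.3)    # False
    print(0.1 + 0.2)           # 0.30000000000000004

    # Exact decimal arithmetic avoids the surprise:
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True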
Could things be better? Sure!
The fact that the pay is crap compared to FAANGs is the first problem. That's why most of us who know what we are doing have left VLSI design for software and can only be brought back with large chunks of cash.
As for Verilog/VHDL--the problem is that they are fit for purpose with hardware. If you want to displace them, you need to create something that takes into account the nature of simultaneity in hardware systems and makes it explicit. And, in modern programming languages, we've gone backward on that front--async makes simultaneity more implicit rather than less.
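To make "explicit simultaneity" concrete, here's a rough sketch in plain Python (not any existing HDL): every register's next value is computed from the old state, and all of them commit together on the clock edge, which is why a hardware swap needs no temporary.

    # Rough sketch of hardware-style simultaneity: compute every "next" value
    # from the OLD state, then commit them all at once, as on a clock edge.
    def clock_edge(state, next_state):
        return next_state(state)    # all updates are based on the pre-edge state

    state = {"a": 1, "b": 2}
    state = clock_edge(state, lambda s: {"a": s["b"], "b": s["a"]})
    print(state)    # {'a': 2, 'b': 1} -- both registers swapped "simultaneously"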
Scripting hooks could be better. Open source tools help because you have all the code.
Non-binary formats help because you can read and decode the dumb things. You can now also put everything into standard source control tooling.
Analog and RF are still the wild west in many ways. Computers are faster, but we really don't have fundamentally better ways of doing that kind of design than we did 20 years ago. It's a bit surprising that we don't have the equivalent of RF "standard cells".
There are, without doubt, luddites in VLSI. However, there are also a lot of greybeards with deep scars. Do not confuse the two even if they look very similar.
> Chips are much simpler than software and therefore way easier to verify
I've got a few friends who are waiting for your insights. One has a power grid that seems to oscillate at weird frequencies, another has a VCO that pulls on the 2nd full moon of the month, and another that can't run down 2 orders of magnitude in leakage current.
Do keep in mind that VLSI design is larger than the slice that you have been exposed to. Dismissing that will make it quite a lot harder to convince people of your positions.
> Maybe it's different elsewhere but I was amazed how much convincing I had to do to get people to adopt practices that are just taken for granted in the software world, like auto-formatters.
Tell me more about this place where people don't get hung up on how I indent my comments.
They're obviously huge, but if you looked at any small part of one you would find that it is quite simple. I don't mean the overall thing is simple, but nobody is designing the whole thing themselves.
In my experience an individual software component can be much more complex than a hardware component.
You heard IshKebab: the entire field of chip design is not very technically challenging. Obviously they have the pedigree to back up such a bold statement and are too modest to tell us.
- ONiO: Single-chip microcontroller with built-in energy harvesting and radio communication, enabling IoT devices without a battery or dedicated charging.
> - Ascenium: Software-defined CPU without an instruction set. Highly parallel architecture with extensive compiler integration.
This is marketing BS if I've ever heard it. Their "instruction set" is LLVM. They're doing nothing more than what Transmeta (dynamic/programmable ISA) or Lisp/Arm-Java machines (running higher level code at a machine level) did before them.
Yeah, agree on the instruction set part - however they are moving the abstraction level to a point where there should be more room for optimization. I don't know enough about the technology to make any judgement - I just know a really smart guy who works there. Hopefully it turns into something cool :-)
It would represent a true architectural revolution if it ever actually came to fruition. Lots of past discussion on HackerNews over the decade(!) it’s been under development.
I guess they're technically a startup, but people working part time together for equity with no outside capital isn't what we normally mean by the word.
I'm designing an affordable electron microscope for high schools and small businesses. While that doesn't sound like a chip startup, the long term goal is to create multipurpose tools that enable semiconductor fabrication with electron beam milling and chemical vapor deposition.
No, looks like a really cool company. Their hyperspectral imaging sensors will definitely enable a lot of cool science. They're limited to optical frequencies though, so not as useful for the kind of micro- or nano-scale imaging semiconductors require.
I worked on their hyperspectral imaging sensors, providing the surrounding support (dev kit + SW stack). It's definitely cool and has a lot of applications, especially detecting adulteration in food and the like.
I've jumped around it and didn't see any mention of cost/transistor going down. So I assume that isn't a respected engineer saying the opposite of what all the data clearly shows, and instead it's a hyped title to a talk about something slightly different.
This is one of the most respected engineers in the industry; he built chips at Intel and AMD (x86-64), and even Tesla's chips. He talks about how there are lots of innovations at different points in the CPU stack that keep giving performance gains: even things like predicting what code will run next, running multiple copies in parallel, and only using the result of the computation that's actually needed.
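That last bit is speculative execution. As a toy software analogy (nothing like the real microarchitecture, just the "do both and throw one away" idea, sketched in Python):

    # Toy analogy for speculation: start both branch outcomes before the
    # condition is known, then keep only the result the condition selects.
    from concurrent.futures import ThreadPoolExecutor

    def speculate(condition, if_taken, if_not_taken):
        with ThreadPoolExecutor(max_workers=2) as pool:
            taken = pool.submit(if_taken)        # both paths run "in parallel"
            not_taken = pool.submit(if_not_taken)
            return taken.result() if condition() else not_taken.result()

    print(speculate(lambda: True, lambda: sum(range(10)), lambda: -1))   # 45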
Cost of memory transistors has gone up on the last couple of nodes. NVIDIA has complained about it publicly.
Batteries are not on Moore's Law, and more of our systems are battery powered, so portable electronics are not on Moore's Law. Transistors are not cheaper if you can't turn them on.
Power scales with area (capacitance) and with voltage squared. We can't turn voltage down any further, and cheaper transistors mean area goes up. This means that most computation chips are fairly close to their thermal limits. Transistors are not cheaper if your cooling solution costs 10x more.
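Back-of-the-envelope, using the usual dynamic-power relation P ≈ α·C·V²·f with capacitance growing roughly with area (illustrative numbers only, not any real process):

    # Back-of-the-envelope dynamic power: P ~ alpha * C * V^2 * f.
    # Capacitance grows roughly with area, so more/cheaper transistors at a
    # fixed voltage mean proportionally more power; only lowering V wins quadratically.
    def dynamic_power(alpha, cap_farads, volts, freq_hz):
        return alpha * cap_farads * volts**2 * freq_hz

    base = dynamic_power(0.1, 1e-9, 0.8, 2e9)        # made-up illustrative numbers
    double_area = dynamic_power(0.1, 2e-9, 0.8, 2e9)
    print(double_area / base)                        # 2.0: double the area, double the power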
https://infiniteseconds.com/ ~ single-chip IP, energy harvesting, sensing, flexible, low-power, IoT ready, in-house manufacturing, big-infrastructure ready. "Paint your Night with Light".
I worked for Ember for a while which was a startup that produced a ZigBee chip. It'll be hard to target cutting edge nodes but it can work. I left when they were eventually bought by Silicon Labs.