"Furthermore, the routing in CPUs is a known entity at manufacturing time."
The 'routing' of a CPU is much more variable than the routing of an FPGA. Data moves around a CPU based on the program that is executing. The control logic of a CPU is the equivalent of the routing logic of an FPGA.
"On FPGAs, the routing is highly variable and must be re-negotiated at nearly every compile cycle."
The 'routing' of a CPU is the same: the compiler has to perform register allocation afresh on every compilation. There's obviously a tradeoff between fast compilation and the most efficient use of resources. Both problems are NP-complete, I believe.
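To make the register-allocation analogy concrete, here's a minimal Python sketch of the greedy graph-colouring heuristic that compilers usually settle for instead of solving the problem exactly. The variables and interference sets are invented for illustration:

    # Variables that are live at the same time "interfere" and can't share a
    # register; assigning them to a fixed register set is graph colouring.
    # Compilers fall back on fast heuristics like this greedy pass.

    def greedy_register_allocation(interference, registers):
        """interference: {variable: set of conflicting variables}.
        Returns {variable: register}, spilling to memory when colours run out."""
        assignment = {}
        # Colour the most-constrained variables first.
        for var in sorted(interference, key=lambda v: len(interference[v]), reverse=True):
            taken = {assignment[n] for n in interference[var] if n in assignment}
            free = [r for r in registers if r not in taken]
            assignment[var] = free[0] if free else "SPILL_TO_MEMORY"
        return assignment

    # Hypothetical liveness result for a tiny program, with three registers.
    conflicts = {
        "a": {"b", "c"},
        "b": {"a", "c", "d"},
        "c": {"a", "b"},
        "d": {"b"},
    }
    print(greedy_register_allocation(conflicts, ["r0", "r1", "r2"]))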
"Nowadays, we are seeing more FPGAs with dedicated, pre-made hardware blocks inside of them, such as FPUs and even CPU cores."
You just contradicted yourself. Previously, you said "FPGAs always have to match that dense silicon through configurable silicon".
Your next paragraph talks about toolchain issues. This is hardly an insurmountable problem. Someone just needs to design a high-level language that can be synthesised; something akin to a Python of the FPGA world, if you will.
"Another problem with FPGAs is the additional hardware on board needed to configure them."
I don't quite understand: do you mean the hardware that reads the bitstream, etc., or the hardware that is required for the FPGA to be configurable, like routing, LUTs, etc.?
"Another guy I work with is intent on running CPU cores on FPGAs."
I do agree with you here; this is a weird perversion if the purpose is not eventually to create an ASIC.
I also don't believe that future processors will be FPGAs, but I do believe they will be a lot closer to FPGAs than to today's CPUs.
"Someone just needs to design a high-level language that can be synthesised; something akin to a Python of the FPGA world, if you will."
The advantage of FPGAs is that they allow nontrivial parallelism. On a CPU with 4 cores, you can run 4 instructions at a time (ignoring pipelining and superscalar execution). On an FPGA, you can run any number of operations at the same time, as long as the FPGA is big enough. The problem is not the low-level nature of hardware description languages; the problem is that we still don't have a smart compiler that can release us from the difficulty of writing nontrivial massively-parallel code.
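A toy back-of-the-envelope sketch of that point, with an invented issue width and fabric capacity, just to show where the advantage comes from:

    # On a 4-core CPU roughly 4 independent operations complete per step; on an
    # FPGA every operation that fits in the fabric can fire on the same clock edge.

    def cpu_steps(num_ops, issue_width=4):
        # The CPU still funnels everything through a fixed number of slots per cycle.
        return -(-num_ops // issue_width)          # ceiling division

    def fpga_steps(num_ops, fabric_capacity=10_000):
        # As long as the design fits, each op has its own piece of silicon.
        return 1 if num_ops <= fabric_capacity else -(-num_ops // fabric_capacity)

    for n in (16, 1_000, 100_000):
        print(n, "ops ->", cpu_steps(n), "CPU steps,", fpga_steps(n), "FPGA clocks")

The caveat, of course, is that the FPGA clock is slower and the design has to fit, which is exactly the "big enough" condition above.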
"The advantage of FPGAs is that they allow nontrivial parallelism."
Want a system on a chip with 2 cores, leaving plenty of space for an Ethernet accelerator, or 3 cores without space for the Ethernet accelerator? It's only an include and some minor configuration away.
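A toy sketch of that tradeoff (the LUT counts are invented; real soft cores and MACs vary a lot):

    FABRIC_LUTS = 20_000                                  # assumed FPGA size
    COST = {"cpu_core": 6_000, "ethernet_mac": 4_000}     # assumed LUT costs

    def fits(**wanted):
        # Sum up what the requested blocks consume and check it against the fabric.
        used = sum(COST[block] * count for block, count in wanted.items())
        return used, used <= FABRIC_LUTS

    print(fits(cpu_core=2, ethernet_mac=1))   # (16000, True)  -> 2 cores + Ethernet
    print(fits(cpu_core=3, ethernet_mac=1))   # (22000, False) -> doesn't fit
    print(fits(cpu_core=3))                   # (18000, True)  -> 3 cores, no Ethernet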
"the problem is that we still don't have a smart compiler that can release us from the difficulty"
We still don't have a smart programmer either... it's hard to spec. Erlang looking elegant doesn't magically make it easy to map a non-technical description of requirements to Erlang.
Thanks for clearing up some of my blurrier points :) I should not have included "always" in that statement about matching the density of silicon. That breaks my own rule about absolutes.
I hadn't considered the routing of a CPU at compile time to be similar to the routing of an FPGA. They're at different scales and have different challenges... I guess it's because I have mainly seen FPGAs in time-critical situations (where CPUs can't be used) in which it was very difficult to predict their performance and they required lots of hand-tweaking. That was in SONET routing, btw. CPUs, on the other hand, usually have some time to spare, whether it is because they are overspecced, or because they are used in applications which tolerate variation in execution time (user interfaces), in comparison to FPGAs used for e.g. translating data between communications busses. It is simple to measure, reproduce and predict specs for common algorithms like the FFT, convolution, etc. for CPUs. I believe this is because the operations in the "routing" of CPU algorithms are based on the sum of known, discrete, orthogonal events like memory fetches, ALU operations, etc.
Inside FPGAs, though, the timing is not as uniformly discretized. It's about taking routing resources from a large pool with a large geographical component, which makes timing highly non-linear and tricky to predict.
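To illustrate what I mean by "sum of discrete events", here's a rough sketch with made-up per-operation costs; the point is only that the CPU estimate composes linearly, not that the numbers are accurate:

    import math

    CYCLES = {"mul": 3, "add": 1, "load": 4, "store": 4}   # assumed per-op costs

    def fft_cycle_estimate(n):
        # Radix-2 FFT: (n/2)*log2(n) butterflies, each roughly
        # 4 multiplies, 6 adds, plus the memory traffic for two complex samples.
        butterflies = (n // 2) * int(math.log2(n))
        per_butterfly = (4 * CYCLES["mul"] + 6 * CYCLES["add"]
                         + 4 * CYCLES["load"] + 2 * CYCLES["store"])
        return butterflies * per_butterfly

    for n in (256, 1024, 4096):
        print(f"{n}-point FFT ~ {fft_cycle_estimate(n):,} cycles")

There is no equally simple closed form for when an FPGA design will meet timing, because that depends on where the placer happens to put things.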
I think we all agree a better language would be the key to getting more out of FPGA technology! And I would like to see more FPGA-elements on traditional CPUs like ARM, x86, AVR, PIC, etc. I wonder what elements an improved hardware description language would use? It could certainly be trivially parseable by tools like antlr while still giving bit-level access...
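Purely as a hypothetical sketch of the flavour I mean (this is not a real library or tool), something like this would be trivially parseable while still speaking in bits:

    # A circuit described as ordinary Python data: a toy "4-bit counter" whose
    # next-state function is what a synthesiser would turn into LUTs and flip-flops.

    def make_counter(width=8):
        mask = (1 << width) - 1          # explicit bit width, explicit wraparound
        state = {"count": 0}
        def tick(enable):
            if enable:
                state["count"] = (state["count"] + 1) & mask
            return state["count"]
        return tick

    counter = make_counter(width=4)
    print([counter(enable=True) for _ in range(20)])   # wraps after 16 counts, as 4 bits must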
W/r/t additional hardware, I meant hardware that configures the bitstream, such as a header or CPU interface, as well as power supplies. Take this schematic, which I grabbed as an example: http://upload.wikimedia.org/wikipedia/en/3/3f/WillWare_Usb_f... . You can see the FPGA has THREE power supplies (1.2V, 2.5V, 3.3V). This particular design doesn't need additional clock sources, fortunately, but needing them is rather common. Understand that this isn't a fatal flaw with FPGAs, it's simply a disadvantage: one doesn't add an FPGA alone; one adds an FPGA, power supplies, possibly a crystal, and a programming interface. It means using an FPGA incurs a bunch of overhead.
As for getting my account banned due to my reaction to a bogus article about a way of multiplication fraudulently claimed to be taught to Japanese school kids: Well if you folks in this little community don't need me around and don't feel what I have to say counts for anything and you don't like the way I say what I have to say, and you can't tolerate differences in people who are different from you, then go ahead and enjoy gassing each other up and blowing smoke up each other's asses in your sealed-off little echo chamber without me. I don't need you, either. I will still profit and benefit from the information here without contributing anything back if you're all so sure you know all the answers. And I'll be sure to let everyone I encounter know what an open and swell bunch of folks they can expect here, after everyone with a differing or seasoned opinion is silently silenced and banned. Keep slurping up your daily vomit from Jacques Mattheij. http://24.media.tumblr.com/tumblr_lfiferFT7c1qa55edo1_400.jp... .
"I guess it's because I have mainly seen FPGAs in time-critical situations (where CPUs can't be used) in which it was very difficult to predict their performance and they required lots of hand-tweaking."
I suspect that if you used FPGAs for less time critical applications, you'd have more room for productivity tradeoffs.
"Inside FPGAs, though, the timing is not as uniformly discretized. It's about taking routing resources from a large pool with a large geographical component, which makes timing highly non-linear and tricky to predict."
Such details can be abstracted over. If you only create synchronous circuits, for example, these subtle timing considerations can be handled automatically.
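For example, here's a minimal sketch (the netlist and delays are invented) of the check a synchronous flow performs for you: the longest register-to-register combinational path just has to fit in one clock period, so the designer never reasons about individual wire delays.

    # Combinational netlist between register outputs ("reg_*") and the next
    # register's input; each edge carries an assumed gate + routing delay in ns.
    edges = {
        "reg_a": [("add1", 2.1)],
        "reg_b": [("add1", 2.4)],
        "add1":  [("mul1", 3.9)],
        "mul1":  [("reg_out", 1.0)],
    }

    def longest_path(node):
        if node not in edges:            # reached the next register's input
            return 0.0
        return max(delay + longest_path(nxt) for nxt, delay in edges[node])

    CLOCK_PERIOD_NS = 10.0   # assumed 100 MHz target
    worst = max(longest_path(start) for start in ("reg_a", "reg_b"))
    print(f"critical path {worst:.1f} ns ->",
          "meets timing" if worst <= CLOCK_PERIOD_NS else "fails timing")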
"W/r/t additional hardware, I meant hardware that configures the bitstream, such as a header or CPU interface, as well as power supplies."
I don't really understand why such additional hardware is necessary, so I'm not able to comment.