
I too decry the lack of openness with respect to the data file encoding which would allow for more open source tools to be created. I had a hilarious discussion with Xilinx's VP of tools about this[1].

The interesting thing about HDLs and HDL work flows is that they can "look" like software and yet not be software. VHDL and Verilog are the only "languages" I know where you can write something that is both syntactically correct and cannot be inferred into logic by the synthesis part of the tool (the equivalent of the code generator in a compiler). I am not aware of any equivalent to the Turing definition of computability which would prove that for any legal construct in language X there is an implementation of synchronous logic Y that could implement it. Most of the bugs are easy to avoid once you know them, though, things like a register being assigned two different values in the same process block.
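As a rough illustration of the point above (my own sketch, not from the original comment), here are two classic cases of Verilog that a simulator happily accepts but a synthesizer cannot turn into logic:

```verilog
// Illustrative sketch: legal Verilog syntax that synthesis tools
// reject or silently ignore. Module and signal names are made up.

module not_inferable (
    input  wire clk,
    input  wire a,
    input  wire b,
    output reg  q
);
    // 1. A timing delay: "#5" is perfectly legal syntax, but it has
    //    no hardware meaning, so synthesis ignores or rejects it.
    always @(posedge clk)
        q <= #5 a;

    // 2. The same register driven from two separate always blocks:
    //    a simulator resolves this at runtime, but synthesis flags
    //    a multi-driven net it cannot implement as a flip-flop.
    reg r;
    always @(posedge clk) r <= a;
    always @(posedge clk) r <= b;   // second driver -- not inferable
endmodule
```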

That said, I've been playing lately with an "Ultra96"[2] board, which is pretty freakin' cool: a quad-core 64-bit ARM CPU and a nice chunk of FPGA fabric to play with as well. I think it can be the basis for a pretty sweet SDR setup.

[1] His assertion was that things had to be hidden to protect the value of the software. When I countered that they could put a 5% tax on every chip they sold and allocate it to software, which would give his team more money than it has today, he argued he would lose sales to cheaper FPGAs. So I asked how the people trying to sell C compilers were holding up, and whether anyone would buy chips if they had to pay extra for their tools while a top chip maker gave its tools away for free. Then he said it needed to be proprietary to keep the quality up, so I went back and asked what C compiler his team used. He admitted they used GCC (when compiling for Petalinux etc.), and when I asked why they didn't use a proprietary C/C++ compiler, he said the proprietary ones didn't keep up with the standards and that gcc generally generated code just as good. He was left with "just because" as his only rationale for not making all of the documentation freely available and using the cost of the chips to cover the tools cost (an advantage which, I told him, would go away as soon as the open source community had caught up with and surpassed Vivado).

[2] http://zedboard.org/product/ultra96




Verilog has behavioural and synthesizable subsets: one is intended for simulation and testbenches, the other for hardware generation. Once you understand this you can learn which constructs, though syntactically correct, should not be used for hardware generation.
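To make the split concrete (an illustrative sketch of my own, not from the comment), compare a testbench-only construct with its synthesizable counterpart:

```verilog
// Behavioural subset: initial blocks, delays, and $display are fine
// in simulation but have no hardware equivalent.
module tb;
    reg clk = 0;
    always #5 clk = ~clk;          // free-running clock, sim only

    initial begin
        #100 $display("done");     // system task, sim only
        $finish;
    end
endmodule

// Synthesizable subset: edge-triggered logic with a synchronous
// reset maps cleanly onto flip-flops and LUTs.
module counter (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] count
);
    always @(posedge clk)
        if (rst) count <= 8'd0;
        else     count <= count + 8'd1;
endmodule
```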

Also, HDLs are not alone in having 'quirks' that experienced engineers need to know about... C/C++ for example. cough undefined behaviour cough

Code coverage and quality checking tools are quite good in the EE world; verification has some powerful tools that can almost eliminate hardware bugs, which is especially important for ASIC design.

On the open sourcing of tools: there are free ones that target real FPGAs, so can you say why these have not surpassed Vivado? In fact, while impressive in their own right, they are very primitive in comparison to Vivado despite having been open source for years. The open source Verilog tools also don't fully support all of Verilog 2008/2012.

So I don't buy the argument that open sourcing the synthesis and PnR tools would dramatically affect FPGA sales. Instead, higher level compilation and abstraction may be the key.


1: Verilog started as a simulation language.

2: If the tools were open source, people would be free to improve on these bags of pain that we have the pleasure of spending thousands of dollars per license on.


So you not only want open source hardware specs but open sourced existing tools? I don't see Intel open sourcing icc, or other software-development-centric companies open sourcing their IDEs, so why should FPGA vendors? The original point was allowing open source tools to be developed, like gcc, and the response was that it has not happened: some ground-up tools do exist, but they are very primitive.


I'm not sure that the OP was stating that it is a requirement that the FPGA manufacturers open source their tooling, only that the devices be well-documented (without NDA requirements) so that someone could have an opportunity to do so without the imperfections that reverse-engineering a product can entail.

To your example, 'I don't see Intel open-sourcing ICC': while it would be really nice if they did[0], it's not a requirement, in the same way that it wasn't a requirement for gcc to exist.

I could be wrong here -- I do not develop for the FPGA space -- however, I've run into similar problems all over the embedded space. Try developing something on one of ARM's Secure MCU products that lands comfortably in just the "open-source software" category (skipping hardware altogether). It's... tricky. To get details on the design of the security features of these products, you have to execute multiple NDAs. And this is in a security space where openness is considered a security feature. In theory, at least, if you interact with one of these NDA-protected features, publishing the source code might be a violation.

Unfortunately, I suspect that many of these features are as good as the secrets that are kept[1] -- exposure of the documentation would likely yield viable attacks[2].

[0] Not least because it would make it possible to port some of the optimizations that icc enables for Intel processors but disables for AMD/others, so they could be used on... AMD/others.

[1] To clarify, I have not signed any NDAs with ARM, so this is entirely speculation. I'll be a party to one, shortly, so I won't be talking on the subject assuming -- as I suspect -- that doing so would run afoul of the NDA provisions.

[2] At some point the hardware world will learn from Intel and others that security through obscurity... isn't. As far as we know, the issues Intel experienced with their management component existed for years without a public breach, but the vulnerability was shockingly bad and was almost certainly known to adversarial governments and black-hats, who kept it as carefully guarded a secret as Intel kept the details of the management component itself. So the secrecy succeeded in keeping attackers in business and customers in the dark... making everyone feel secure.


Languages can have undocumented behavior; that's fine. But when the HDL compiler compiles code that should have defined behaviour incorrectly, that's just pure frustration. I swear I've lost so many days just rewriting pieces of logic until I found a variation that Vivado would compile according to the HDL spec.


One of the things to realise about HDLs is that they describe hardware, and real hardware is non-deterministic: there are race conditions, clock crossings, metastability, etc.

It's honestly not possible to have "defined behaviour" in all circumstances. Verilog simulators don't define event order, and if you depend on it, stuff will break (we all fought that battle 20 years ago). More importantly, you kind of hope stuff breaks, to indicate that you might be building designs that won't work on real hardware.
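The standard mitigation for one of those hazards, metastability at a clock-domain crossing, is the two-flop synchronizer. A minimal sketch (names and the vendor attribute are illustrative; ASYNC_REG is a Xilinx synthesis attribute):

```verilog
// Two-flop synchronizer: brings a signal from another clock domain
// into clk_dst while containing metastability. Illustrative sketch.

module sync2 (
    input  wire clk_dst,   // destination clock domain
    input  wire d_async,   // signal from another clock domain
    output wire q_sync
);
    (* ASYNC_REG = "TRUE" *)   // hint to keep the two flops adjacent
    reg meta, stable;

    always @(posedge clk_dst) begin
        meta   <= d_async;   // first flop may go metastable
        stable <= meta;      // second flop gives it a cycle to settle
    end

    assign q_sync = stable;
endmodule
```

This only makes a single-bit crossing safe; multi-bit buses need a handshake or an async FIFO on top of it.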


Can you give example of a verilog code snippet vivado actually mis compiles? I've found it incredibly reliable and often use it to double check other tools' results.


> where you can write something that is both syntactically correct and cannot by inferred into logic by the synthesis part of the tool (equivalent of the code generator in a compiler).

A decade ago, programming GPU shaders was similar to that. You could use a higher-level language but there were tight limits on everything, instructions, branches, texture lookups, etc. Here’s a summary for GL, DX was the same because they were hardware limitations of corresponding GPUs: https://stackoverflow.com/a/5601884/126995 Modern GPU chips are way more capable so that’s mostly history now.


Take a look at the LimeSDR Mini device [1] which includes an Intel MAX 10.

[1] https://wiki.myriadrf.org/LimeSDR-Mini_v1.1_hardware_descrip...


Now compare that FPGA with the Xilinx ZU3EG [1] (used on the Ultra96). I have been looking at the USB 3 support on that processor as a means of driving a regular LimeSDR board. Even better would be pulling 4x PCIe[2] lanes off the Zynq and driving an M.2 form factor XTRX or something similar. For my purposes I want to ensure the SDR has MIMO capability.

[1] https://www.xilinx.com/support/documentation/selection-guide...

[2] And yes I know I would need a ZU4EG rather than the ZU3EG that is on the board. If I was doing my own board I'd use the 4EG and add some extra memory attached right to the FPGA fabric.


The argument in [1] doesn't make sense to me. Xilinx tools are free, as in beer, and the sale of ICs funds those tools, so they are already implementing your system of a "tax on every chip". Surely the VP of tools at Xilinx knows this.

What problem are you unable to solve with an FPGA because you do not have the source code to Vivado?

I understand why Xilinx doesn't want to release their source: they don't want to support it. Supporting code is a huge overhead, and it's not clear, to me anyway, what Xilinx gains from the expenditure.


My argument to the VP was essentially not that they release their source, but rather that they release all of the details you need to create, sign, and then load a bitstream file into their FPGAs, and also that they document how the chip is laid out and how you can floorplan it via hints in the .bit file.

If they did just that, then the source code would "appear" as people wrote back ends for the various open source HDLs that are already in existence.

The challenge that I see is that Xilinx has already had the experience of selling that data to vendors like Synopsys, who sell their own synthesis tools and charge major money for them. And like a person holding an unvested stock option on a stock that goes from price A, above the strike price, to price B, below it, they feel as if they "lost" money. Similarly, Xilinx can't see "giving up" the thousands, if not millions, of licensing dollars they get from tools vendors just to enable an open source community to start, because they fundamentally can't see that a vibrant open source ecosystem benefits all players. And this in spite of the gcc example, which is pretty incontrovertible in my opinion.


They do document how the chip is laid out and exactly what hardware resources it has. If you want to make your own backend you can: use get_property/set_property on physical constraints, extract and redo placement on the entire design via TCL, or save to your own intermediate file and work from that. I don't think one needs access to Xilinx's binary formats.
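For instance, the kind of thing the parent describes might look like this from the Vivado TCL console (a rough sketch; the commands are Vivado's documented TCL API, but the cell name is made up and assumes an opened, implemented design):

```tcl
# Inspect where a cell was placed
get_property LOC [get_cells my_core/state_reg[0]]

# Pin it to a specific slice yourself
set_property LOC SLICE_X10Y42 [get_cells my_core/state_reg[0]]

# Dump every cell's name and location, e.g. to feed your own
# placement tool, then read the edits back with set_property
foreach cell [get_cells -hierarchical] {
    puts "[get_property NAME $cell] [get_property LOC $cell]"
}
```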


> Xilinx tools are free, as in beer

This is untrue. [1]

> Supporting code is a huge overhead and it's not clear, to me anyway, what Xilinx gains from the expenditure.

In what world is this the case? You host the project on GitHub or the like and let people contribute bug fixes, at the cost of filtering pull requests. Letting the community contribute bug fixes is a huge reason companies open source their tools.

[1]: https://www.xilinx.com/products/design-tools/vivado.html#buy


From your link: Vivado HL WebPACK™ Edition: no-cost, device-limited version of the Vivado HL Design Edition

It's not as simple as an upload to github. https://opensource.com/business/16/5/how-transition-product-...


The keywords are "device-limited". Xilinx WebPack won't build bitstreams for some of the larger and faster devices.


It is deprecated: it won't generate bitstreams for any new FPGA.


You're mixing up WebPack (which is a licensing plan for some of Xilinx's tools) and Xilinx ISE (which is one of those specific tools).

Xilinx ISE is indeed deprecated. There are quite a few parts in production which it will still generate bitstreams for, though.

Vivado is the newer replacement. It will not build designs for parts older than 7-series, though, so Xilinx ISE is still required to work with 6-series and older parts, as well as with Xilinx CPLDs.

WebPack licenses are not deprecated. The WebPack program is still active, and will generate limited licenses for both Xilinx ISE and Vivado.


You're right. I wasn't aware that the WebPack licensing program is for Vivado as well. Thanks for the clarification.


Most EE EDA/CAD tools are ancient monstrosities of patchwork, with a user experience reminiscent of using Eclipse 0.01. Full of bugs, and lacking fast CLI tools: everything must start a core process that takes many seconds just to boot. It's ridiculous. Software engineers don't know how good they have it in terms of tools.


I started off doing digital design but quickly switched to embedded software after seeing the state of tooling. It's not just the tools themselves either. There are folks who will vehemently defend the way things are and shoot down even the slightest improvement efforts as naïve. With that culture in place, I'm happy just having someone else slap a cortex-mX on a board and programming it with gcc/makefiles/openocd.


Being able to do that for embedded software is also a sign that things are changing. For the longest time, embedded devices were only programmable from a vendor-supported IDE (looking at you, TI and Cypress). At least now the open source community has figured out how to get around the limitations, and the tools are starting to flourish with increasing vendor support.




