Clash: A modern, functional, hardware description language (clash-lang.org)
96 points by lelf on May 7, 2020 | 40 comments



I have a couple of articles on my website about my experiences with (previous versions of) Clash if anyone wants some examples. I used Clash in a processor design class and it was amazing. We had some "competitions" between the various CPUs in the class (e.g. performance and correctness given some ISA spec) and I was able to create one of the top few CPUs with only a small fraction of the effort of my classmates using a "traditional" HDL like VHDL or Verilog.

Slides from a Clash talk I gave in probably 2016:

https://yager.io/talks/CLaSH.pdf

Simple non-pipelined CPU (I've been meaning to release part 2 for years but never got around to it):

https://yager.io/CPU/CPU1.html

SKI calculus processor on an FPGA:

https://yager.io/HaSKI/HaSKI.html

A superscalar out-of-order processor I built for a CPU design class at UT Austin:

https://github.com/wyager/Lambda17

(Take a look at Hardware.hs to see some of the cool parametric stuff you can do.)

Like I said, these are all with old versions of Clash - it's changed in the last few years, but I haven't done any HDL projects recently.


The biggest pain point of "programming" hardware is the vendor tools. Imagine if all modern languages, from Swift to Rust, from Python to Fortran, had to compile to C. That's the situation with HDLs - most of them compile to Verilog (some to VHDL). The world needs an open standard to rule them all: a low-level HDL IR that, like LLVM, can give birth to endless variations of cool languages while giving them all the flexibility they need. The synthesis backend would then only have to be written once. I proposed joining efforts[1] on building such a language, either from scratch or based on something existing.

[1] https://github.com/SymbiFlow/ideas/issues/19


There was some interesting discussion between the LLHD[1] and MLIR[2] folks about just this topic recently[3]. My takeaway: modeling behavioral HDL semantics in the IR is a huge mess that has to account for all of the complexity in the existing HDL landscape. However, there is hope for modeling structural HDL semantics in an IR that could be the target for many languages.

You mentioned LLVM, and I think of MLIR as sort of a successor to LLVM. I'm hopeful that a low-level HDL IR is standardized as an MLIR dialect, so the compiler ecosystem and the hardware ecosystem can start to unite.

[1] https://news.ycombinator.com/item?id=22825107 [2] https://news.ycombinator.com/item?id=22429107 [3] https://drive.google.com/file/d/1x7B0IRdcJ5JBQvfHbPUBcShbTFC...


I love the idea of LLHD. Also LNAST. There are some great ideas here.

https://github.com/masc-ucsc/livehd


There was another language kinda similar to this (a functional HDL), called Bluespec. It was originally very Haskell-like, but at some point had to change its syntax to a more C-like version because of its unpopularity with hardware engineers. I had to use the newer version in my Computer Architecture class, and although the experience was way better than Verilog, it had very little documentation/tutorials (being a commercial product with almost no users), which made understanding the more complex parts of the language really hard.

Nowadays Chisel seems to be the best bet if you want a sane Verilog alternative (with functional-language features) for your HDL needs. (https://www.chisel-lang.org/) It was originally a transpiler to Verilog, but it has since gained its own IR (FIRRTL) for optimization purposes.


As I recall, Chisel is a very shallowly embedded HDL, which is completely different from the approach used by Clash. Clash is a subset of Haskell that is directly compiled to a hardware representation, which has some unique advantages and disadvantages. One big advantage of Clash's approach is that you can use code from any regular Haskell library as long as the code is within the subset of Haskell supported by Clash. E.g. you could import `Linear.V3` and have access to a 3-dimensional vector type with all the operators and stuff already defined, and this would compile to hardware no problem.
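
To make that concrete, a minimal sketch of the "import a normal library" point (assuming, as claimed above, that linear's V3 and its instances fall within Clash's supported subset; Clash's default language extensions assumed):

    import Clash.Prelude
    import Linear.V3 (V3 (..))

    -- Component-wise sum of two 3-vectors of 8-bit signed values, using the
    -- Num instance the linear package already defines; this would synthesize
    -- to three parallel adders.
    addV3 :: V3 (Signed 8) -> V3 (Signed 8) -> V3 (Signed 8)
    addV3 = (+)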

It turns out that Haskell's non-strict semantics are a shockingly close match for the semantics of hardware (the only difference I'm aware of being pretty obscure, having to do with zero-width busses).


> It turns out that Haskell's non-strict semantics are a shockingly close match for the semantics of hardware

Doesn't Haskell need automatic memory management in order to implement higher-order functions, that being perhaps the most prominent difference in semantics between Haskell/OCaml and Rust? Is Clash limited to first-class functions a la C?


No, Clash supports higher-order functions just fine. The only thing it can't really represent is general recursion; you need to use type-constrained recursion. Clash isn't fully general there yet, so it provides some nice bounded-recursion primitives.
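
A minimal sketch of what that looks like in practice (standard Clash.Prelude names; Clash's default language extensions assumed):

    import Clash.Prelude

    -- 'map' is an ordinary higher-order function; over a fixed-size Vec it
    -- unrolls into four parallel multipliers rather than a loop.
    scaleAll :: Signed 8 -> Vec 4 (Signed 8) -> Vec 4 (Signed 8)
    scaleAll k = map (* k)

    -- 'fold' stands in for general recursion: the vector length lives in the
    -- type, so the recursion depth is known at compile time (an adder tree).
    sumAll :: Vec (n + 1) (Signed 16) -> Signed 16
    sumAll = fold (+)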


Yes, but the Clash reference documentation states that it doesn't do behavioral synthesis. It structurally maps the representation of the supported subset of fully-applied Haskell functions to either combinational or sequential circuits, depending on whether the function takes a `Signal`-type argument. The fact that the representation is shared leads to many advantages compared to e.g. Chisel, but this is definitely a "shallow embedding" approach.
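
A minimal sketch of that combinational-vs-sequential split (standard Clash.Prelude names; Clash's default language extensions assumed):

    import Clash.Prelude

    -- No Signal argument: a pure function, mapped to combinational logic.
    adder :: Unsigned 8 -> Unsigned 8 -> Unsigned 8
    adder a b = a + b

    -- Signal argument plus 'register': mapped to sequential logic, here a
    -- running sum with one flip-flop stage of feedback.
    accumulate :: HiddenClockResetEnable dom
               => Signal dom (Unsigned 8) -> Signal dom (Unsigned 8)
    accumulate x = acc
      where acc = register 0 (acc + x)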


"Signal" is just [] without an empty constructor (i.e. a lazy corecursive stream). As far as your Haskell code is concerned, there is nothing special whatsoever about signals, and indeed it's just a normal data type until the Clash-specific Core layer kicks in.

I think I mixed up "shallow" and "deep" embedding - by "deep embedding", I meant to say that there is no secondary AST manually defined for the embedded language, but I think that is actually a shallow embedding. Looks like I can't correct my earlier comment.

Only the top-level function has to be first-order - you can use higher-order functions within the body of your code.


Bluespec is still around and actually got open-sourced recently! [0] The Bluespec company is focused on using their tools to build RISC-V cores: https://bluespec.com/

[0]: https://github.com/B-Lang-org/bsc


Well, I keep asking myself what the advantage is supposed to be if more and more programming languages get synthesizable subsets for developing hardware. People who absolutely want to use a functional programming language for this can already use Chisel or SpinalHDL, for example, which are based on Scala. So now there is yet another HDL based on Haskell (and don't forget e.g. Bluespec). There are also HDLs based on OCaml or ML. But I think it is not wrong to assume that the task of developing error-free hardware hasn't become any easier as a result.


> So now there is yet another

Clash has been around for a while now.


> if there are synthesizable subsets for more and more programming languages

Haskell is the only language I'm aware of that has semantics approaching those of hardware - in particular, purity and non-strict evaluation are absolutely required to match (or closely approximate) the semantics of digital hardware. (To get you started: bottom is equivalent to X in HDL simulator nomenclature.)
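
A toy illustration of that bottom/X correspondence (plain Haskell, just for intuition, not Clash API):

    -- A two-way mux that never inspects the deselected input.
    mux2 :: Bool -> a -> a -> a
    mux2 sel a b = if sel then a else b

    -- Under non-strict evaluation, 'mux2 True 1 undefined' is simply 1, just
    -- as a hardware mux output is well-defined even when the deselected
    -- input is X in simulation.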

> Chisel or SpinalHDL... HDL based on OCaml or ML

Chisel is a shallow embedding. Not sure about SpinalHDL. Hardcaml is a shallow embedding. Very different approach from Clash, and I prefer Clash's deep/direct embedding approach.


I don't think this builds right now, but it's my favorite example of a project really leveraging the expressivity of Clash to do hardware design: https://github.com/cbiffle/cfm


I hate to rag on new stuff, especially in the HDL space where I think it's desperately needed, but the syntax of this language is so foreign that I think it's extremely unlikely it really gets a large amount of adoption. We can't even get Verilog 1995 people to switch to 2001, let alone SystemVerilog.


I think this works well for people who are interested in both Haskell and hardware design. The syntax is pretty nutty if you're just a hardware designer, but it's ultimately a language embedded in Haskell, so it has the friction of having to be embedded but the benefit of the Haskell ecosystem. I don't think it'll see super wide adoption, but imagine it will have its uses and fans.


> works well for people who are interested in both Haskell and hardware design

This intersection is not likely to differ much from the empty set.


Well, Chisel itself is proof otherwise. The existence of Chisel, built on Scala, is also proof otherwise. People get interested in things where there's an incentive to learn. Have a little faith.


Hello, I'm the empty set.


Congrats ;-)


> I hate to rag on new stuff

Haskell/ML syntax is hardly "new stuff".

> I think it's extremely unlikely you really get a large amount of adoption

Clash is used to manufacture production hardware by at least one of the top 5 largest companies in the world.


I know it's not new stuff, go look at Bluespec and their open-source HDL.

For most hardware designers, though, it is new. Most hardware designers are EEs and not CS people. I know many designers who do not even want to learn a programming language like Python, which has a relatively easy-to-read syntax. ML/functional programming languages require a completely different mindset, just like hardware design does if you come from programming languages.

I think the world is ripe for a new HDL. But the right foundations need to be there. It needs first-class interop with existing HDLs (VHDL, Verilog/SV). It needs to integrate with all the existing EDA tools in mixed-language environments, and be easy to debug. Debugging is a huge problem in the "new HDL" space right now. Go look at Chisel. Google used it to design some, if not all, of their TPU, but their verification engineers had a really difficult time with debug because it compiled to Verilog, which was then difficult to reason about.

I have so many thoughts on this space and I want to make it better. It would be an absolute dream to develop a new HDL that enabled hardware designers and verification engineers to have all the superpowers that SWEs have in today's world. Being proficient both in various programming languages and in Verilog/SV/UVM/etc., it's so wild to see how big a gap there is in productivity tools. Essentially SWEs have all the good stuff.


> I have so many thoughts on this space and I want to make it better

Have you written them down somewhere? Would be interested. Are you a HW designer?

> It would be an absolute dream to develop a new HDL that enabled Hardware designers and Verif engineers to have all the superpowers that SWEs have in today's world

Wasn't that the intention of SystemVerilog?


Yeah, I've been capturing these ideas over the last couple of years. I probably have a stream-of-consciousness document like 30 pages long. I just haven't really had the time to organize it and start figuring out what the "API" would look like.

> Wasn't that the intention of SystemVerilog?

I believe it was, but it clearly didn't pan out. A few obvious reasons:

- The language has way too much bolted onto it. If you compare SV to C++ or even Java, there are a mind-boggling ~250 keywords. Nobody can memorize all of that. If you don't have unbelievable lint/code-aid tools (which we don't...), how on earth is someone supposed to be productive? It's way too much complexity. C++ has something like 80-ish? C# maybe 100-ish? The fat needs to be trimmed.

- Verif engineers and designers work differently, so why does a verif engineer use the same language as a designer? They probably shouldn't, just like it's a no-no to have a designer verify their own block. Design and verification need to have very clear-cut lines from a language standpoint, and in SV they really don't.

- I think people made the wrong decision trying to bolt on all these OO features, inspired by the success of Java and OOP in the early 2000s. HDL design should be more functional than OO. And it should definitely be separated from a verif lang.

I have so many more thoughts, everything from language syntax, to GUIs, to code analysis, etc. I should probably share it somewhere. Maybe I should just build it. I don't know.


Agree. SV is even harder to parse and validate than C++, which is quite an achievement. I think they also made a wrong decision by replacing the Verilog standard with the SV standard.

But there are still a lot of useful parts in SV. I've thought of creating a kind of "Verilog 20" which just incorporates the useful synthesizable features from SV, including parts of SVA and interfaces. I would even consider leaving out the parts of Verilog which are not synthesizable and using a different language for verification altogether (e.g. C++, or Python as with Cocotb).

I recently came across Wirth's Lola and even built an IDE for it (https://github.com/rochus-keller/LolaCreator), but its practicality and usefulness are still to be demonstrated and the compiler has issues.


I haven't heard of this one. Will take a look. I've been cataloguing everything I've found HDL-wise, and maybe one day I'll put out a potential spec/API for what "the best parts" could look like. Similar to Rust's "inspired by" parts.

I completely agree with the Verilog20 idea. That's the way forward 100%.

Cocotb is awesome in my brief usage so far. The thing that remains to be seen, though, is performance benchmarking. It would be great to compare the sim runs of two identical tests, one in UVM and one in Cocotb. I highly doubt it could match the performance of a well-written UVM test, and this becomes an issue as your SoC gets bigger.


> I haven't heard of this one

Here is the specification: https://inf.ethz.ch/personal/wirth/Lola/index.html

I don't yet have representative experimental data; the language is designed for the synthesis of synchronous FPGA designs and thus avoids some of the issues Verilog and other HDLs suffer from. But it's too early for me to give a recommendation.

> The thing that remains to be seen though is performance benchmarking

Agree. I often use Verilator, and I also experiment with LuaJIT (see https://github.com/rochus-keller/LjTools), which is one of the fastest VMs available; I intend to use it for HW simulation and debugging too.


Yes, and hardware designers that refuse to upgrade from horrendous HDLs like Verilog are going to get BTFO over the next few decades by upstarts using stuff like Clash, which lets you debug using SWE strategies before doing anything on hardware. Basically anything new is going to BTFO Verilog, really - Clash, Chisel, nMigen.


> "HDLs like Verilog are going to get BTFO over the next few decades by upstarts using stuff like clash.."

Like I've been saying, I want a new HDL. The hardware world needs it. I don't want to get into the Moore's Law debate, but there are tons of efficiencies in TTM to be had with a newer, more modern Digital Design workflow, starting with a new foundational HDL.

WRT getting BTFO... I don't know. The entire semi industry pushes back on this. Why? Because if you fuck up a chip at 7nm you've wasted millions of dollars not just on fab, but on R&D. The hardware world cannot afford to 'move fast and break things' like software can. You have a multi-level problem here if you want a new HDL. You need the mega EDA companies to have first-class support, and then you need the massive chip companies to demand it. Why would Intel change to a new HDL when they have generations of experience in VHDL/Verilog, internal tools built to support teams (millions of dollars' worth), hardware designers who are productive in these langs, etc.? The EDA companies build the tools the big dawgs want. The amount of risk involved in changing HDLs is massive, let alone to a new language without a formal spec. That's why it has to be piecewise. If we see a new HDL succeed, it will be because of how it interops with existing solutions. It will have a spec. It will be easy to transition from Verilog, or drop down into Verilog where needed, etc.

Upstarts in semi are super different from upstarts in software. You don't just get a new grad designing a new CPU/AI coprocessor/etc. and making millions from it like you do a new grad designing a widget that solves a human product problem in software. The core synthesizable side of SV is actually quite intuitive and good. I think all the bloat from trying to add OO ideas to it was the biggest mistake. Basically people tried piggybacking off of the success of Java in the 90s and thought that would be a good idea. If you've seen a massive UVM/SV code repo, you'd understand how much of a disaster it is.


It turns out that CS folks with distributed systems and functional programming experience can be super productive hardware designers, because distributed systems have very similar design-level tradeoffs to ASIC design (e.g. how many copies of this processing element do we use? what's the correct ratio between storage and compute?). Both have pretty rough verification and correctness problems related to not being able to share state instantaneously, so things like clock-domain crossing aren't that foreign to the distributed systems folks.

You don't have a notion of clock cycles, making timing, or fan-out load in distributed systems, so if you are going to try this you definitely need some extremely experienced digital logic designers and chip architects around to keep everyone on the rails, but in turn the CS folks are really good at producing useful abstractions that make architecture exploration easy while maintaining correctness.

TL;DR: I've found it easier to teach folks that know Haskell and distributed systems design about hardware constraints than to teach RTL designers about functional abstractions and been really productive with a mix of both groups.


>Clash is used to manufacture production hardware by at least one of the top 5 largest companies in the world.

Can you elaborate on that?


Clash needs Verilog-to-Clash and VHDL-to-Clash translators. Make the translators available online for learning purposes, for small samples. Then you can at least leverage your existing knowledge, see how it translates to the new language, and then a tutorial using this could easily show how it's an improvement, either in expressiveness or safety checking.


I've seen clash used in production. Old firms aren't going to pick it up but there are upstarts leveraging it.


Ok, now can we have automatic Clash-to-Linux-dts (Device Tree Source) generation? So we can port Linux to a new board with a click? That would be something. And it might drive adoption of Clash, just for that integration advantage.


I think you might be skipping a few layers of abstraction there, but you could have something like a parametric PCIe or I2C core that would autogenerate both the hardware (de)serialization logic and the Linux-side driver logic. If your driver was written in Haskell, you could even use the exact same library on the hardware side and the software side. So let's say you have an FPGA card in a PCIe slot - with a change to a single file, you could change the wire protocol on both the Linux side and the FPGA side.
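
A minimal sketch of that shared-library idea (all names here are hypothetical; assumes Clash's default language extensions and its derivable BitPack class):

    import Clash.Prelude

    -- One record describes the register layout; both the FPGA design and a
    -- Haskell-side driver built against this same module agree on the wire
    -- format by construction.
    data SensorRegs = SensorRegs
      { status :: BitVector 8
      , sample :: Signed 16
      } deriving (Generic, BitPack)

    -- On the hardware side, 'pack' gives the serialized bus representation
    -- (8 + 16 = 24 bits here); software can use 'unpack' to decode it.
    toWire :: SensorRegs -> BitVector (BitSize SensorRegs)
    toWire = pack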


I think this is just the abstraction needed to generate the device tree. Pinouts, bus address strapping, modes - all are (or could be) specified in the Clash source and emitted through some template or schema. I guess that's the missing part - the dts schema. But it's the closest we've ever got!


It wouldn't be too hard to build. Clash types are already required to derive Generic. You could take all those definitions and use the Haskell Generics API to generate a device tree.
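
A minimal sketch of that Generics-based approach (plain GHC.Generics; all type and function names are hypothetical, and a real generator would emit device-tree properties rather than bare field names):

    {-# LANGUAGE DeriveGeneric, FlexibleContexts, FlexibleInstances, TypeOperators #-}
    import GHC.Generics

    -- Hypothetical peripheral-config record; a Clash design would already
    -- have something like this with a Generic instance.
    data UartConfig = UartConfig
      { baseAddr :: Int
      , irqLine  :: Int
      } deriving (Generic, Show)

    -- Walk the Generic representation of a single-constructor record and
    -- collect its field (selector) names.
    class GFieldNames f where
      gFieldNames :: f p -> [String]

    instance (GFieldNames a, GFieldNames b) => GFieldNames (a :*: b) where
      gFieldNames (a :*: b) = gFieldNames a ++ gFieldNames b

    instance Selector s => GFieldNames (M1 S s f) where
      gFieldNames m = [selName m]

    instance GFieldNames f => GFieldNames (M1 C c f) where
      gFieldNames (M1 x) = gFieldNames x

    instance GFieldNames f => GFieldNames (M1 D d f) where
      gFieldNames (M1 x) = gFieldNames x

    fieldNames :: (Generic a, GFieldNames (Rep a)) => a -> [String]
    fieldNames = gFieldNames . from

    -- ghci> fieldNames (UartConfig 0x1000 5)
    -- ["baseAddr","irqLine"]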


This reminds me of Cryptol (https://cryptol.net), which can be compiled into VHDL. It is a more general language than its description as a cryptographic DSL suggests.




