It's pure vapourware; word is that they burnt through their runway hiring a team of overpriced Haskell consultants who built a new programming language and didn't deliver a product.
> In a way that doesn't involve A16Z cynically mis-representing blockchain as some people's champion.
That's precisely the game for these funds: keep that illusion up. They buy up a bunch of tokens at a presale discount, pump the price, and then dump on a bunch of foreign retail investors based on inside information before the inevitable failure of the "product". A16z's crypto fund has made a killing doing just this, and it's entirely legal. The SEC is only going to go after the founders, and a16z will have long exited their position by the time of the lawsuit.
This is how all of SV works, and always has. Pump up company valuations with hype, then eventually dump it all on institutional and retail investors. In some cases the companies actually work out, so they might hang on to their stock, but in most cases they get out before the companies' eventual demise.
There are likewise some inaccuracies in this post. Both OPs seem to agree that Libra has no transaction privacy, and then pick nits about features not being there at this point in time:
> Libra’s byzantine tolerance on a permissioned network is an incoherent design.
This post's argument rests on there being an unused flag in the configuration file for a feature which is not implemented. The Libra whitepaper indicates that a permissionless design is beyond their ability to implement at this time. There is no reason to believe a proof-of-stake model is what is intended. Even the whitepaper states:
> The challenge is that as of today we do not believe that there is a proven solution that can deliver the scale, stability, and security needed to support billions of people and transactions across the globe through a permissionless network.
> Libra HotStuff BFT is not capable of achieving the throughput necessary for a payment rail
It seems like actual benchmarks that simulate real system load are needed before a conclusion can be drawn.
> Libra’s Move language is unsound
It seems like the OP takes a more traditional view of type checkers, as a Haskell programmer. Pushing type-checking down to the bytecode level is a very nontraditional approach to compiler design, and it contradicts the original Libra whitepaper's claim to implement linear types. Libra does not provide a linear typechecker or formal verification at this time.
> Libra’s cryptography engineering is unsound
This section is subject to some debate. There appear to be some unpublished audits that may lend credibility to the Rust libraries. The original post doesn't claim that they are insecure, just that the more audits and testing the libraries undergo, the more trusted they should become (cf. libsodium).
It also seems that Libra implements a lot of extraneous next-gen crypto that is dead code, not used for any purpose in the core logic.
> Libra has no capacity for consumer protection mechanisms
The section agrees with the OP that this does not exist.
Things like Big-O notation are part of introductory computer science classes and introductory algorithms classes.
Big-O notation also applies when an algorithm is distributed and runs over a network. An algorithm that's O(n^2) will always be slower than an algorithm that's O(n) once the input is large enough (and O(2n) is just O(n)). You don't need to run benchmarks to find that out.
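A minimal sketch of the point (function names are illustrative, not from any library): count the operations an O(n^2) pairwise scan performs versus an O(n) single pass. Constant factors aside, the quadratic count dominates for large n regardless of where the work runs.

```python
# Illustrative sketch: operation counts for quadratic vs linear algorithms.

def quadratic_ops(n):
    """Operations for comparing every pair, e.g. a naive duplicate check."""
    return sum(1 for i in range(n) for j in range(n))  # n * n comparisons

def linear_ops(n):
    """Operations for a single pass, e.g. a hash-set membership check."""
    return sum(1 for _ in range(n))  # n lookups

for n in (10, 100, 1000):
    print(n, quadratic_ops(n), linear_ops(n))
```

At n = 1000 the quadratic version already does a million operations against the linear version's thousand; no benchmark is needed to predict which scales.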
Like most things, the kernel of tomorrow's ideas is already here. Over the next five years, these ideas will give rise to what programming looks like in the future:
This is the general trend: more composable abstractions, smarter compilers, and languages that can reason about more of our programs for us.
Indeed, Haskell code for Arduinos, robots, FFI GHC hooks for composition performance, OpenGL/OpenCV... can't get more practical and thrilling than that.
Static typing is mostly orthogonal; Haskell is blazing fast mostly because of aggressive compiler optimizations and the last decade of hard work by Simon Marlow and others on the parallel runtime.
The second part is true, but that optimization would have been much harder, if not impossible, to implement had it relied on some sort of flow analysis instead of static typing.
In my experience the failure rate is far below 5% for the vanilla HM algorithm. I've never seen a pathological input in the wild that wasn't contrived specifically to test the typechecker.
Please do write one. Please don't publish one. Writing is a great way to think.
Without yet really understanding monads, I've come to realize that their only real problem is the name: being named after a rather obscure (for most!) category-theory concept that itself seems just about as simple as monads are. I find they are a little like the Y combinator in that regard.
I'm not sure whether what we need is category theory in elementary school or a different vocabulary for discussing monads as they relate to programming, but I'm almost convinced the confusion has to do with the language used to describe them.
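To make the point that the idea is simpler than the name, here's a minimal Maybe-style sketch in Python (an illustration of the concept, not any library's API): a `bind` that chains computations which may fail, without nested None checks.

```python
# Illustrative sketch: the core monad idea as plain function chaining.

def bind(value, fn):
    """Apply fn to value, short-circuiting on None (a Maybe-style bind)."""
    return None if value is None else fn(value)

def parse_int(s):
    """Parse a string to int, returning None on failure."""
    try:
        return int(s)
    except ValueError:
        return None

def reciprocal(n):
    """Return 1/n, or None when n is zero."""
    return None if n == 0 else 1 / n

# Chain: parse, then take the reciprocal; any failure yields None.
print(bind(bind("4", parse_int), reciprocal))     # 0.25
print(bind(bind("oops", parse_int), reciprocal))  # None
print(bind(bind("0", parse_int), reciprocal))     # None
```

That's the whole trick: a uniform way to sequence steps through a context (here, possible failure). The category-theory vocabulary describes the same pattern, just far more abstractly.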
Programming is a very practical art that sits on top of the much more abstract art of mathematics. It's useful to know orders of magnitude, graph theory, statistics, logic... but the art by which these concepts are animated and put to use is rather prosaic. The discipline of computing has forged a set of concepts that are "appropriately abstract" (much of it, I think, due to Knuth), yet monads seem to lie just outside the grasp of many programmers. And I don't think it is because they are an intrinsically hard concept.