Agreed for ML. However, F# is not exactly ML. It lives on top of, and has to interoperate with, the existing .NET ecosystem.
That means, for example, that a lot of the datatypes you'll end up dealing with are object-oriented. Those can be difficult to work with in a generic way without resorting to reflection.
It is a really nice language. Being able to interoperate with .NET is a strength, not a weakness. It's quite principled in its design, it's just that some of its design principles are pragmatic ones.
Concrete example: It doesn't have type classes. This was a deliberate and thoughtful move. The designers haven't been able to figure out how to implement type classes in a way that wouldn't interact poorly with the rest of the .NET ecosystem, so they've opted to keep them out rather than introduce a hacky implementation. Contrast this with Scala, where it's very easy to accidentally break compatibility with Java code when you're working with traits.
It's not like type classes are themselves suuuper-principled.
> Being able to interoperate with .NET is a strength, not a weakness.
Being able to interoperate with .NET is a good thing. Letting the .NET object model permeate the whole language's type structure is less of a good thing (to put it mildly). OCaml did the right thing w.r.t. objects: You can use objects if you so wish, but the language doesn't ram them down your throat.
Reading this makes me wonder in which situations locking will actually help. If you have two distributed updates like "UPDATE table SET value = 6 WHERE key = 'world'" and "UPDATE table SET value = 5 WHERE key = 'world'", isn't there some kind of design (or usage) issue? Are immutable tables transformed with monads going to be a thing in distributed relational databases?
UPDATE table SET value = value + 2 WHERE key = 'world';
UPDATE table SET value = value * 1.1 WHERE key = 'world';
When these UPDATEs are executed concurrently, PostgreSQL guarantees that they are serialised such that the value becomes either (value + 2) * 1.1 or (value * 1.1) + 2, i.e. there’s no lost update. To achieve this, PostgreSQL takes a row-level lock and blocks the second UPDATE until the first one has committed.
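A minimal sketch of that blocking behaviour, assuming a hypothetical table kv(key text, value numeric) standing in for "table" above (TABLE is a reserved word in PostgreSQL, so I've renamed it) and a starting value of 10:

CREATE TABLE kv (key text PRIMARY KEY, value numeric);
INSERT INTO kv VALUES ('world', 10);

-- session 1
BEGIN;
UPDATE kv SET value = value + 2 WHERE key = 'world';   -- takes a row-level lock on 'world'

-- session 2, concurrently
BEGIN;
UPDATE kv SET value = value * 1.1 WHERE key = 'world'; -- blocks here, waiting on session 1's lock

-- session 1
COMMIT;  -- value is now 12; session 2's UPDATE resumes against the committed row

-- session 2
COMMIT;  -- final value is (10 + 2) * 1.1 = 13.2, not 11 or 12: no lost update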
An alternative mechanism could have been to create a log of changes. The order in which the changes are added to the log then determines the order of execution. That way, we don’t have to block the UPDATE, right?
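To make that idea concrete, here's a rough sketch of what such a change log could look like. The kv_log table, its columns, and the textual op encoding are all hypothetical, not anything Citus or PostgreSQL actually does:

CREATE TABLE kv_log (
    id  bigserial PRIMARY KEY,  -- append order would determine execution order
    key text NOT NULL,
    op  text NOT NULL           -- e.g. '+2' or '*1.1', to be applied later by some replayer
);

-- both "UPDATEs" become plain appends, which never conflict with each other
INSERT INTO kv_log (key, op) VALUES ('world', '+2');
INSERT INTO kv_log (key, op) VALUES ('world', '*1.1');
-- the current value is whatever you get by folding the log over the base row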
Unfortunately, things are a little more complicated in an RDBMS. Each of the UPDATEs could be part of a bigger transaction block that could (atomically) roll back. Moreover, the RDBMS provides consistency, which means that the new value should be visible immediately after the UPDATE. However, the new value cannot be determined until all preceding UPDATEs have been committed or rolled back. So in the end, the UPDATE has to block until all preceding transactions that modified the same rows are done and the new value can be computed, just as it would with row-level locks, except with more bookkeeping (and serialisation issues).
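A sketch of the two cases that force the wait, again using the hypothetical kv table from above:

-- session 1: the first UPDATE is part of a larger transaction that may roll back
BEGIN;
UPDATE kv SET value = value + 2 WHERE key = 'world';
-- ... other statements ...
ROLLBACK;  -- the +2 never happened

-- session 2, started while session 1 was still open
BEGIN;
UPDATE kv SET value = value * 1.1 WHERE key = 'world';
-- the resulting value depends on whether session 1 commits or rolls back,
-- so this statement has to wait for session 1 to finish either way
SELECT value FROM kv WHERE key = 'world';  -- must already see the new value
COMMIT;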
If you strip out some of the functionality of an RDBMS, such as multi-statement transaction blocks or consistency, then you could potentially avoid locks, but Citus distributes PostgreSQL, which does provide this functionality.
I was in the startup world in the early 2010s, and saw MongoDB used in a number of startups (over ~5 years).
These companies included everything from very early stage Y Combinator-backed startups to one of the most famous unicorns (tech issues at growing companies are rarely discussed publicly, so I was lucky that my friends were willing to privately share; compare that to successful tech decisions, which are widely discussed and blogged about).
In that time, I heard many stories about the issues companies had with it - and angry debates about whether it was the right choice. In the early years, you were ancient if you used something more conventional than Mongo; in later years, at many startups you were dumb if you used Mongo (both views have their issues).
I wanted to understand:
- Why was MongoDB so popular
- What issues did startup engineers have with it
- Given these issues, why was it chosen in the first place
And most importantly, what lessons does this case study have for future dev tool decisions.
I'd say that sentence generalizes pretty well.