burgerdev's comments

> some boring dude working at [some company] that just [happens to offer coupons for the first 300 likes]

I'd say that sentence generalizes pretty well.


Maybe more appropriate: "Wow, my phone can _optionally_ run an onion service!"

https://github.com/guardianproject/haven/blob/0fd6f690ef6303...


In choose your own adventure style: http://sethrobertson.github.io/GitFixUm/fixup.html


> if isGenTypeOf typedefof<_ option> then

Wow! This code looks horrible. Why don't you just accept the fact that the language is strongly typed and abolish reflection altogether?


I apologize for the tone of my comment, but I still think that ML is not supposed to be written like that.


Agreed for ML. However, F# is not exactly ML. It lives on top of, and has to interoperate with, the existing .NET ecosystem.

That means, for example, that a lot of the datatypes you'll end up dealing with are object-oriented. Those can be difficult to work with in a generic way without resorting to reflection.
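
To give a rough idea of what such reflection looks like, here is a minimal F# sketch (the helper name isOptionType is my own, not from the quoted code) that checks at runtime whether a System.Type is an instantiation of F#'s option type:

    open System

    // Hypothetical helper: true when 't' is some instantiation of option<'T>.
    let isOptionType (t: Type) =
        t.IsGenericType
        && t.GetGenericTypeDefinition() = typedefof<_ option>

    // Usage: holds for 'int option', not for plain 'int'.
    printfn "%b" (isOptionType typeof<int option>)   // true
    printfn "%b" (isOptionType typeof<int>)          // false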


Sounds a bit like the Scala story: could be a really nice language, but the baggage ... :)


It is a really nice language. Being able to interoperate with .NET is a strength, not a weakness. It's quite principled in its design, it's just that some of its design principles are pragmatic ones.

Concrete example: It doesn't have typeclasses. This was a deliberate and thoughtful move. The designers haven't been able to figure out how to implement type classes in a way that wouldn't interact poorly with the rest of the .NET ecosystem, so they've opted to keep them out rather than introducing a hacky implementation. Contrast this with Scala, where it's very easy to accidentally break compatibility with Java code when you're working with traits.


> Concrete example: It doesn't have typeclasses.

It's not like type classes are themselves suuuper-principled.

> Being able to interoperate with .NET is a strength, not a weakness.

Being able to interoperate with .NET is a good thing. Letting the .NET object model permeate the whole language's type structure is less of a good thing (to put it mildly). OCaml did the right thing w.r.t. objects: You can use objects if you so wish, but the language doesn't ram them down your throat.


F# and OCaml are different languages with different goals. It should come as no surprise that they make different sacrifices.

Breaking compatibility with .NET, which is F#'s biggest benefit, would be counterproductive.


My suggestion wouldn't “break compatibility” with .NET any more than, say, C++/CLI (or whatever it is called nowadays) does.



Those are essentially ADTs --- not "type-classes" in the ML/Haskell/etc sense at all.


Maybe the difference is that ZeroVM is a NaCl sandbox, while LightVM is a Xen VM?


ZeroVM, thanks for the link. I wonder whether it solves glibc-type dependencies across platforms; it seems unclear.


Reading this makes me wonder in which situations locking will actually help. If you have two distributed updates like "UPDATE table SET value = 6 WHERE key = 'world'" and "UPDATE table SET value = 5 WHERE key = 'world'", isn't there some kind of design (or usage) issue? Are immutable tables transformed with monads going to be a thing in distributed relational databases?


Consider UPDATEs that depend on each other:

    UPDATE table SET value = value + 2 WHERE key = 'world';
    UPDATE table SET value = value * 1.1 WHERE key = 'world';

When these UPDATEs are executed concurrently, PostgreSQL guarantees that they are serialised such that the value becomes either (value + 2) * 1.1 or (value * 1.1) + 2, i.e. there’s no lost update. To achieve this, PostgreSQL blocks the second UPDATE until the first one is committed, using a row-level lock.

An alternative mechanism could have been to create a log of changes. The order in which the changes are added to the log then determines the order of execution. That way, we don’t have to block the UPDATE, right?

Unfortunately, things are a little more complicated in an RDBMS. Each of the UPDATEs could be part of a bigger transaction block that could (atomically) roll back. Moreover, the RDBMS provides consistency which means that the new value should be visible immediately after the UPDATE. However, the new value cannot be determined until all preceding UPDATEs have been committed or rolled back. So in the end, the UPDATE will have to block until all preceding transactions that modified the same rows are done and the new value can be computed, just like what would happen with row-level locks, except with more bookkeeping (and serialisation issues).

If you strip out some of the functionality of an RDBMS such as multi-statement transaction blocks or consistency, then you could potentially avoid locks, but Citus distributes PostgreSQL, which does provide this functionality.
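
To make the blocking concrete, here is roughly how the two statements above interleave across two sessions (illustrative only, reusing the placeholder names table/key/value from the example):

    -- Session 1
    BEGIN;
    UPDATE table SET value = value + 2 WHERE key = 'world';
    -- session 1 now holds a row-level lock on the 'world' row

    -- Session 2, running concurrently
    BEGIN;
    UPDATE table SET value = value * 1.1 WHERE key = 'world';
    -- blocks here until session 1 commits or rolls back

    -- Session 1
    COMMIT;
    -- session 2's UPDATE now sees the committed value and proceeds,
    -- so the result is (value + 2) * 1.1 and no update is lost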


Maybe I missed it in the posts, but would you mind explaining why you focus on MongoDB in particular and NoSQL in general?


I was in the startup world in the early 2010s, and saw MongoDB used in a number of startups (over ~5 years).

These companies included everything from very early stage Y Combinator-backed startups to one of the most famous unicorns (tech issues at growing companies are rarely discussed publicly, so I was lucky that my friends were willing to share privately; compare that to successful tech decisions, which are widely discussed and blogged about).

In that time, I heard many stories about the issues companies had with it - and angry debates about whether it was the right choice. In the early years, you were ancient if you used something more conventional than Mongo; in later years, at many startups you were dumb if you used Mongo (both views have their issues).

I wanted to understand:

- Why was MongoDB so popular

- What issues did startup engineers have with it

- Given these issues, why was it chosen in the first place

And most importantly, what lessons does this case study have for future dev tool decisions.


Since you seem to like snow sports, you should perhaps try downhill longboarding.


I think the blog post is not so much about advanced code smell mitigation strategies, but rather about highlighting the features of IntelliJ.


> I'm not aware of any unsynchronized concurrent queue implementation.

Would the queues from https://www.liblfds.org/ qualify?


No, as it clearly falls under the second part of my statement:

> Being lock-free implies using Compare-And-Swap - a very powerful synchronization primitive.

Just look inside https://github.com/liblfds/liblfds7.1.1/blob/master/liblfds7...

It's full of synchronization stuff. By "unsynchronized" I mean NOT using that.
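
For contrast, here is a minimal C11 sketch of the kind of compare-and-swap primitive meant here (not taken from liblfds, just an illustration):

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic int slot = 0;

    /* Atomically replace 'slot' with 'desired' only if it still holds
     * 'expected'; returns false (and leaves 'slot' unchanged) otherwise.
     * Lock-free structures such as the liblfds queues are typically built
     * from loops around exactly this kind of operation. */
    bool try_swap(int expected, int desired)
    {
        return atomic_compare_exchange_strong(&slot, &expected, desired);
    }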

