
https://www.daniellittle.dev/ I blog occasionally, just to share or think. Regular topics include the web, FP, and .NET.


I recently wrote about why linking is useful: https://www.daniellittle.dev/practical-hypermedia-controls. I've been using hypermedia in my APIs for a few years now, and it's been invaluable for making good APIs. One of the things I really love about it is that you can test and use the API directly much more easily, because the knowledge of how to use it lives in the API itself, instead of being half hardcoded into the client.


The article goes through a series of examples to motivate the following:

1. Variables should not be allowed to change their type.

2. Objects containing the same values should be equal by default.

3. Comparing objects of different types is a compile-time error.

4. Objects must always be initialized to a valid state. Not doing so is a compile-time error.

5. Once created, objects and collections must be immutable.

6. No nulls allowed.

7. Missing data or errors must be made explicit in the function signature.

The idea is that each feature or constraint _enables_ you to reason about and predict more of a program's behaviour than you otherwise could.
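To make these concrete, here's a rough sketch of my own (in Rust, one language that enforces most of these out of the box) of how the constraints surface:

    #[derive(Debug, Clone, PartialEq, Eq)]
    struct Customer {
        name: String,
        age: u32,
    }

    // (7) Possible absence is explicit in the signature: no nulls, no surprises.
    fn find_customer(name: &str) -> Option<Customer> {
        if name == "Ada" {
            Some(Customer { name: name.into(), age: 36 })
        } else {
            None
        }
    }

    fn main() {
        let x = 5;
        // x = "five";          // (1) compile-time error: mismatched types

        let a = Customer { name: "Ada".into(), age: 36 };
        let b = a.clone();
        assert_eq!(a, b);       // (2) structural equality via derive
        // assert_eq!(a, x);    // (3) compile-time error: no comparison across types
        // let c: Customer;
        // println!("{:?}", c); // (4) compile-time error: possibly-uninitialized `c`
        // a.age = 37;          // (5) compile-time error: `a` is not declared mutable

        // (6) There is no null; absence has to be pattern-matched away.
        match find_customer("Ada") {
            Some(c) => println!("found {:?} (x = {})", c, x),
            None => println!("no such customer"),
        }
    }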

I encourage anyone interested in these ideas to play around with F# or a similar language and get a feel for how they influence your code. If you've mastered one paradigm, such as OO, one of the best ways to find holes in your mental models is to look at the same problems from another point of view. Even if you keep writing most of your code the way you do today, in the language you use today, it can still be beneficial.


> 5. Once created, objects and collections must be immutable.

So this language would not be general purpose, as it would not be suitable for high-performance computing.

Large scale simulations almost always involve arrays that are modified in place. Being able to somehow declare a collection to be immutable would be highly useful, but not having the option of mutable collections limits the kinds of problems that can be approached with the language.


I'm not going to claim that mutability is never useful for performance, but many large-scale simulations can be expressed quite elegantly using bulk operations on arrays or other structures, with no mutability in sight. Both particle simulations à la n-body and stencil operations are in this category. An efficient low-level implementation of such bulk operations involves mutable updates, just like any functional language is compiled to "impure" assembly code, but the programming model used for application programming can remain pure.
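For instance, here's a toy sketch (mine, in Rust) of a one-dimensional stencil as a pure bulk operation: each step reads the old grid and produces a new one, and the only writes happen inside the freshly allocated result.

    // Pure 3-point stencil: every output cell averages its neighbourhood
    // in the input; the input itself is never modified.
    fn step(input: &[f64]) -> Vec<f64> {
        (0..input.len())
            .map(|i| {
                let left = input[i.saturating_sub(1)];
                let right = input[(i + 1).min(input.len() - 1)];
                (left + input[i] + right) / 3.0
            })
            .collect()
    }

    fn main() {
        // A grid with a single heat spike, built without mutation.
        let initial: Vec<f64> = (0..8).map(|i| if i == 3 { 1.0 } else { 0.0 }).collect();
        // Three steps, each a bulk operation yielding a fresh array.
        let result = (0..3).fold(initial, |grid, _| step(&grid));
        println!("{:?}", result);
    }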


Interesting. Can you explain, with a somewhat simple example, how this can be implemented efficiently, or at all? I mean preserving the appearance of immutability at the source-language level, while mutating the original structure under the hood for performance.


Any vectorised operation in Numpy is an example of this. The pure subset of Numpy can be used to write useful programs, but the Numpy functions/methods are mostly implemented in impure C.

Another example is completely pure array programming such as in Accelerate[0] or Futhark[1].

[0]: http://www.acceleratehs.org/

[1]: https://futhark-lang.org


A somewhat related idea is called "benign effects." The idea is that you write code with an immutable interface that uses mutation in its implementation.

So there are "effects" (non-functional state changes) that are encapsulated ("benign").

I learned this term in reference to Standard ML at CMU.

This is different from what you're asking because it isn't a compiler optimization and it isn't actually checked by the language at all, but it works pretty well in practice.

It's like unsafe in Rust: you write most of your code assuming a useful property that you then break in the small percentage of code that needs to break it.
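A small illustrative sketch (mine, in Rust, which polices the encapsulation with interior mutability rather than leaving it unchecked): memoization hidden behind a method that takes `&self`, so callers see a logically pure function.

    use std::cell::RefCell;
    use std::collections::HashMap;

    // The cache mutation is an encapsulated ("benign") effect:
    // no caller can ever observe it.
    struct Fib {
        cache: RefCell<HashMap<u64, u64>>,
    }

    impl Fib {
        fn new() -> Self {
            Fib { cache: RefCell::new(HashMap::new()) }
        }

        // Logically a pure function from u64 to u64.
        fn get(&self, n: u64) -> u64 {
            if let Some(&v) = self.cache.borrow().get(&n) {
                return v;
            }
            let v = if n < 2 { n } else { self.get(n - 1) + self.get(n - 2) };
            self.cache.borrow_mut().insert(n, v);
            v
        }
    }

    fn main() {
        let fib = Fib::new();
        println!("{}", fib.get(50)); // fast, despite the naive-looking recursion
    }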


Not very knowledgeable on this myself, unfortunately, but I believe that in graphics programming, shaders written in GLSL often take the form of a series of functional, mathematical transformations of vertices. Those transforms are run on the GPU as highly parallelized array operations, probably using a lot of mutable state. But those details are mostly hidden from the shader programmer.


C++ supports this via the mutable keyword https://stackoverflow.com/questions/105014/does-the-mutable-... though not particularly for performance purposes.


Thanks to all who replied.


I concur; I should have been more precise in my comment.


Isn’t it possible (at least in theory) to make mutability an implementation detail of the compiler/runtime? Rust’s borrow checker approaches this, but the abstraction is leaky or nonexistent. Additionally, many high-performance computing applications (e.g. Tensorflow) abstract away expensive mutable operations, so at least in theory it should be possible to isolate mutability to small segments of code where it is opt-in.


Yes, Haskell as a pure functional language does this too. A naive copy-by-value handling of lists will usually end up in the same order of magnitude for performance as mutate-in-place linked lists in C. The compiler can track those immutable values and just mutate them in place when it can guarantee that's a safe operation. The vast majority of the time, you can get away with just copying a pointer or renaming, rather than copying the whole value.

The caveat is that, in my experience, it's a fair bit harder to reason about performance, as the execution model is even more abstracted away from the hardware than even something like the C model is (which is no longer a good fit either, in this era of speculative execution and multi-level caches.)


> The caveat is that, in my experience, it's a fair bit harder to reason about performance, as the execution model is even more abstracted away from the hardware than even something like the C model is (which is no longer a good fit either, in this era of speculative execution and multi-level caches.)

One solution is to have a tool annotate the code with notes about performance, with the tool developed and distributed along with the compiler so it can never fall out of sync with it.


I think if performance is part of the requirements of your code, then performance must be a part of your type signature.

For example, a tail-recursive function needs to have tail recursion reflected in its type.


This is where linear types, and more generally quantitative type theory, come into play. Also eagerness/laziness annotations.

Tail recursion is not necessary to annotate imo, but I guess the compiler/linter could complain if it finds recursion it can't apply tail-call optimisation to. These kinds of warnings are similar to mutable languages warning about things that are probably bad but sometimes necessary.


It’s necessary to annotate tail recursion because you are making it clear to the compiler that your initial assumption about the performance of this function is that it will not explode the stack.

The reason it must be made explicit is because when somebody else comes later on to change that function they may miss the fact that it doesn’t explode only because it’s tail-recursive.

You could of course document the requirement - but why document if you can make it a compiler option? “I don’t want this to compile unless I get the behaviour I expect from it”.

Also, as far as I am aware, C-style functions cannot be tail-recursive because they cannot clean up the stack after themselves, so you can’t support tail recursion across FFI.


Rust's im[1] and rpds[2] crates are refcounted pointers to immutable data structures, but support mutable operations on &mut instances. When an instance is cloned, it merely creates another pointer. When an instance is modified, it uses Arc::make_mut() to only clone each tree node if it has other users. This approach has runtime overhead, but makes nested updates (foo[0][0].attr = 1) as simple as mutable structures.

This somewhat resembles immer.js (uses a proxy around an immutable structure which records updates). Contrast this approach to Clojure transients (whose children don't magically become transient), and whatever Haskell does (https://news.ycombinator.com/item?id=24740384).

[1]: https://docs.rs/im/

[2]: https://docs.rs/rpds/
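The make_mut mechanism is visible even without those crates; here's a minimal std-only sketch (mine) of the same copy-on-write idea, which im/rpds apply per tree node rather than to the whole structure:

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(vec![1, 2, 3]);
        let mut b = Rc::clone(&a); // O(1) "copy": just a refcount bump

        // make_mut sees the Vec is shared, so it clones it for `b`;
        // if `b` were the sole owner, it would mutate in place instead.
        Rc::make_mut(&mut b).push(4);

        println!("{:?} {:?}", a, b); // [1, 2, 3] [1, 2, 3, 4]
    }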


Linear types fix this problem, by letting you prove to the compiler that logically immutable operations can be implemented as in-place updates.


Immutability is an abstraction; it doesn't forbid in-place modification of data. What it forbids is other code that holds references to the array from observing the data as it was prior to the modification, which would be a logical error.


F# and OCaml have mutable arrays.


Rust's borrow checker would prevent Example 5 from compiling, since once you add `cust` to the collection you can't touch it anymore (unless you insert a clone instead). So in this case at least, the inability to reason about the code can be resolved by banning mutable aliasing, without eliminating mutability.


Rust's ownership system would prevent example 5 from working (because you have to move the instance into the set).

The borrow checker is about validating that references don't outlive their target, and enforcing R^W (many readers xor one writer).
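Concretely, something like this (my reconstruction; the article's actual example may differ):

    use std::collections::HashSet;

    #[derive(Hash, PartialEq, Eq)]
    struct Customer {
        name: String,
    }

    fn main() {
        let mut customers = HashSet::new();
        let cust = Customer { name: "Ada".to_string() };
        customers.insert(cust); // `cust` is moved into the set here

        // println!("{}", cust.name);
        // ^ compile-time error: borrow of moved value `cust`.
        // To keep using it, you'd have to insert a clone instead.
    }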


Oops.


> 1. Variables should not be allowed to change their type.

This sounds nice, but is there a way to accomplish it without losing some expressibility or concision? Rather than looking at JS, consider low-level operations on a small chunk of memory as a niche example. Interpreting the same region as a buffer of 64-bit ints vs 16-bit uints gives entirely different behavior to the standard operators like addition, multiplication, and shifts, and there are plenty of cases where it makes sense to mix and match those operators. It's possible to construct a single type that encompasses all that behavior, but the price for doing so is a formidable wall of similar-looking method names rather than just being able to use a plus sign or other easier-to-understand constructs.


> Interpreting the same region as a buffer of 64-bit ints vs 16-bit uints

The variable doesn't change its type, though; instead, you change your interpretation of it. That's a very different, and explicit, operation.

Although example 1 doesn't strictly have anything to do with types, you could get the same behaviour with `x = 0` in any language with closures (like JavaScript), or `foo = 0` in a language which passes parameters by (mutable) reference, as long as that information is not visible in the caller.
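For example, a small sketch (mine, in safe Rust; pointer casts or transmute are the unchecked equivalents):

    fn main() {
        let buf: [u8; 8] = [1, 0, 0, 0, 0, 0, 0, 0];

        // The same eight bytes under two explicit interpretations. No
        // variable changes type; we build new typed values from raw memory.
        let as_u64 = u64::from_le_bytes(buf);
        let as_u16s: Vec<u16> = buf
            .chunks_exact(2)
            .map(|c| u16::from_le_bytes([c[0], c[1]]))
            .collect();

        println!("{}", as_u64);    // 1
        println!("{:?}", as_u16s); // [1, 0, 0, 0]
    }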


In descendants of ML (Haskell, OCaml, Rust, etc.) you can use algebraic data types to condense your wall of methods into one function.
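For example, a toy sketch:

    // One `add` over an algebraic data type instead of a wall of
    // add_u16 / add_u64 / ... methods. The enum tag is a runtime value.
    enum Word {
        U16(u16),
        U64(u64),
    }

    fn add(a: Word, b: Word) -> Option<Word> {
        match (a, b) {
            (Word::U16(x), Word::U16(y)) => Some(Word::U16(x.wrapping_add(y))),
            (Word::U64(x), Word::U64(y)) => Some(Word::U64(x.wrapping_add(y))),
            _ => None, // mixed-width addition is rejected explicitly
        }
    }

    fn main() {
        if let Some(Word::U16(s)) = add(Word::U16(2), Word::U16(3)) {
            println!("{}", s); // 5
        }
    }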


Algebraic data types have runtime case information, and won't let you reinterpret the underlying bits of a binary buffer between types. I think grandparent meant they wanted pointer casts, unions, reinterpret_cast, or transmute.


Console or media center



http://knockoutjs.com/ is also a great, simple data-binding lib, and it has no dependencies.


I recently looked into React and Knockout at work, as previously we had been using jQuery for all frontend stuff. I'm not much of a frontend developer, but I found things to like about both React and KO. We aren't making single-page apps, but we do need to create richer user interfaces in the browser. I spent a week exploring both, creating two different UIs in each.

For environments where you don't have dedicated frontend people, I think Knockout is more approachable. I very much like the flow-based programming ideas behind React, but in the end I think the need to either learn and implement the Flux architecture or follow chains of JavaScript callbacks really dissuaded my coworkers from supporting it.

KO has a more familiar template-like approach. And now that the latest version also offers components, I like that I can use templates where they work and switch to components when necessary, which should eliminate some of the boilerplate I faced in React. I'm very excited to try out the new KO features, and I'm always surprised not to see much mention of KO in frontend discussions on HN.

That being said, while working with KO there were times when I created bugs that would have been impossible or difficult to create with React. But I think for a team of backend developers who don't do much frontend work and aren't that familiar with functional programming, React (whether callback-property style or using Flux) is too much of a jump. For personal projects I'm interested in trying out React+Flux, though.


If you're going to dig any further into React, check out cortex's approach to simplifying nested data: https://github.com/mquan/cortex


I absolutely adore KnockoutJS; it's completely changed the way I write JS (I'm not a huge fan or user of JavaScript). And 3.2 has added component support (with templates), which for me is a huge bonus, as I was doing something similar manually.


I love Knockout as well; however, I would love to see something like this added. Not having to declare all the bindings, and just auto-binding to a form, would be pretty useful for small stuff like single-page forms.


I hardly use an ORM these days; however, this article has a few issues.

First, even when using SQL, the author runs into the problem that splitting the database in two and keeping a separate reporting database would be much more efficient than having one database try to meet all your needs. When you do writes you want transactions and third normal form, but for reporting third normal form becomes a downside. This applies to both attribute creep and data retrieval.

Second, the dual-schema problem is one that I think most ORM users know how to avoid. I generate the schema from the code directly, maybe with a little bit of fluent migrations to help move data.

The issue with transactions is a strange one. Ideally this is handled as a cross-cutting concern in your application, so it's consistent and transactions are explicit and predictable. I'd do the same thing in any application.

The biggest issue here, I think, is that the author has chosen the wrong tool. This application sounds like it would be well suited to event sourcing. I'm not going to go into it here, but event-sourced systems solve these issues in an interesting way. Plus, the data is event-based anyway.


On the front end, a Durandal.js single-page app (similar to Angular, but MVVM). On the back end, a REST web service written with ASP.NET Web API. Octopus Deploy for deployment.

It's also based on CQRS and DDD, so data storage is aggregate-based.



But it's also not closed source, because OpenGL is just a specification, like Wayland.


The correct term is open standard.


One API for every device is great for developers targeting the Xbox and other Windows devices.


It makes the job of engine developers somewhat easier in the long run, but very few game developers are writing raw DirectX these days.

And (almost) nobody's targeting just Microsoft devices, even developers who are paid for temporary exclusives. It's all about the cross-platform engines.

