
Purely out of curiosity, do you know of any talk that would explain in detail why something like Swift's protocol extensions can be implemented in conjunction with generics, while Go's interfaces can't?

From the outside, the two features (Swift's protocols with extensions and Go's interfaces) seem fairly close, which made me wonder. I haven't had the time to think about it in much detail, so I'm just wondering if anybody already has.



You certainly can implement Go-like interfaces alongside generics. There's nothing inconsistent about them.


Yes, but the typical consequences that arise out of that do not fit with the design goals of Go.

Just off the top of my head: boxed types, for example, lead to Java-style inheritance patterns. That sits awkwardly with the composability and readability of Go code. You read Go code like a tree; you read Java code like a multiply linked list.


It can be done; the question is whether it can be done in a way that fits with everything else in Go.

There are technical drawbacks in the implementation of parametric polymorphism that don't suit Go's design goals: http://research.swtch.com/generic.


All of the supposed reasons given in that post have been answered to death by multiple people (including PL experts), and none holds up to much scrutiny.

"We just can't be bothered" or "we don't think they're much good, and we don't have a need for them" would be a much more realistic answer.


I haven't seen any rejoinders to that document that contradict the fact that you pay for parametric polymorphism with either slower compilation or slower runtime.

Go isn't interested in being slow on either count. Any generics solution has to have a fast runtime, because it would need to replace the builtin parametric slices and maps. And the compiler is already too slow; in fact, a lot of work is slated to try to make it faster.

Lots of effort went into studying the problem, so it most definitely isn't an issue of "can't be bothered".


> I haven't seen any rejoinders to that document that contradict the fact that you pay for parametric polymorphism with either slower compilation or slower runtime.

I think the error in that reasoning is attributing this issue to generics specifically when this is actually a problem with code reuse in general.

Say I have a linked list and want to use it to hold both integers and strings. My two options, in any language (whether that language has generics or not), are: (1) write specialized code for every type I want to use it with (go generate), potentially bloating the code and resulting in extra compilation time; (2) use some sort of existential type (interface{} in Go) and share the code but pay a performance cost at runtime. The dilemma arises because of the problem itself, not because of generics.

It is of course right that there is a tradeoff here (although there are many other potential solutions that aren't at one extreme or the other, .NET-style JIT compilation or intensional type analysis for example). But not having generics doesn't eliminate the tradeoff. Generics are just a way for the compiler to automate the work that a programmer would otherwise have to do. If you don't have generics, you still have that dilemma, except that you have to write the code yourself instead of the compiler doing it for you.


That is a powerful theoretical concern that I shared when I first experimented with Go several years ago. In practice I have not found it to be a problem.

It seems (and this is purely an analysis of my own experience and the large amount of Go I read inside Google) that the types of data structures and algorithms I need parametrized follow a power law distribution. Mostly I need a list, or a hash map, and the builtin slice and map types almost always meet my needs. For the long tail of parameterized types in that power distribution, it appears that either it is not a performance hot spot, in which case I can borrow from the dynamic dispatch tools in the language (that is, some variant of an interface{}), or performance is so critical that even in C++ there is no obvious generic data type.

The latter is an interesting case I ran into in my prior life as a C++ programmer, and in OCaml. Someone had created a generic version of a data structure (usually more than one someone), but what I needed in my performance hot spot was some variant that wasn't captured by the parameterization.

An old example that comes to mind: I inherited a struct whose memory layout described a wire protocol. It had an 8-byte piece of padding in the middle of it. That was the perfect place to hide the pointer for a linked list I wanted to create in an intermediate step. Of course, none of the many template-based linked list data structures I had in C++ could do that, so I rolled my own.

So as unexpected as this may be, I find a combination of map/slice, interface{}, and rolling my own meets my generics needs in Go well enough that I wouldn't want to trade off slower compilation for it, or bad compile-time error messages. I still miss generics, and it may be that the particular kinds of programming I do mean I miss them less than others. A co-worker with nearly as much Go experience as I have says he misses them more often. The ones that come up most often for me are sorting and some kind of ordered map, but even those bug me less than once a month.


You shifted the argument from "generics are either bad for compile time or bad for performance, and therefore bad for Go" to "I don't need generics". I can't argue that you need generics, since I don't know the code you're writing (though I think it's likely you're leaving a good deal of performance on the table). But I do think the former problem has little to do with generics as a language feature.


My goal was to explain how I make do with the features that exist (paying one of the tradeoffs: productivity, compile time, or runtime), and that any more general purpose implementation of generics would, to be orthogonal with existing features, necessarily slow compilation (because it would have to take over the job of slices and maps today, and would become prevalent in our APIs).

I'm not trying to shift the argument; I'm trying to explain how it plays out in Go in practice. There is a generics dilemma whose cost programmers pay today, and it should be possible to turn that into a language feature with the same tradeoffs. In practice, I don't think a generics language feature that captures how I program in Go is possible. (Or at least, I haven't thought of one nor seen a proposal that does.)

Like many others, I'd like to see a prototype that proves me wrong.


> any more general purpose implementation of generics would, to be orthogonal with existing features, necessarily slow compilation (because it would have to take over the job of slices and maps today, and would become prevalent in our APIs).

It would not slow down compilation. Slices and maps are just built-in generics. Generics would codegen exactly the same way as slices and maps do now. Slices and maps compile down into calls to builtin runtime functions (using intensional type analysis IIRC); a generic version of them would, if implemented properly, call down into those exact same functions.

> There is a generics dilemma that programmers pay today and it should be possible to turn that into a language feature with the same tradeoffs.

Yes, it is possible and many languages have done it. All Golang needs to do is what Swift (just to name probably the closest analogue) does, with interfaces for runtime dispatch and generics for compile-time monomorphization. Or, if you want to continue using intensional type analysis to reduce compile time, implement that as certain variants of OCaml did (though you'll pay a performance cost to do so and I don't think it's worth it).


Compile-time specialization is exactly what would slow the compiler down too much. Doing it for slices and maps is already pushing it; doing it for more types (and judging by how generics are used in every other language, it would be a lot of types) would be a significant slowdown.

That said, your second point is good; my argument is weak because the compiler could stick to just specializing maps and slices and otherwise boxing. I strongly suspect any passable generics implementation for Go will need to do this (which, due to the necessary stdlib changes, is off the table until Go 2, so understand that I comment on this topic purely for the sake of conversation; I don't think anything can be done any time soon). The degree to which it needs to do it, I'm not sure.

There is still an orthogonality issue with interfaces. There's a lot of overlap between a dynamic dispatch mechanism and parametric polymorphism. As an API designer I'm a bit worried about it. I suspect if a good generics implementation came along though that argument would get pushed aside.

On this topic: one of the better prototypes I saw had a lot of trouble producing good error messages. I suspect this is a solvable problem, but how is not clear to me yet.

Also, we call it Go, not Golang.


>That is a powerful theoretical concern that I shared when I first experimented with Go several years ago. In practice I have not found it to be a problem.

People that didn't find it to be a problem (e.g. due to the kind of stuff they are working on) are not the ones concerned about its lack though.

It's like someone saying "I just do web development with Python, so I see no reason for NumPy to exist".


This is a fantastic explanation. Thank you for phrasing it so concisely.


>Lots of effort went into studying the problem, so it most definitely isn't an issue of "can't be bothered".

Citation needed.

I've seen absolutely zero sample implementations, testing of implementation options, or other such work going into adding generics in Go.

And, aside from a couple of blog posts and shooting down people asking for it in comments, I've seen no real organized discussion, e.g. like what would go on for a Python PEP to be accepted.


If Go's overriding focus is speed, then why have a garbage collector? Or write the compiler in Go?

Or do you think that just maybe using speed as a reason not to do something is a bit of a cop-out, especially without any benchmarks to back it up?


There seem to be a few points in your comment, let me try to answer them as best I can:

The Go 1.5 compiler is not slower because it is written in Go; it is slower because it was mechanically translated from another language. Human eyes will improve it over time.

Garbage collection is a throughput hit the language designers (and I) are willing to accept for improved safety and simpler APIs. I'd rather not pay it, but I choose it over spending a large chunk of my API documentation describing various ownership scenarios like I did in C++. It's a tradeoff I find acceptable for most programs. You won't find me using a GC on a clock slower than 100 MHz, or in a sub-millisecond realtime system (but I probably won't be using linux either there).

I'm also willing to pay the performance hit on non-critical generic code, and I use interface{} for that where I can. If benchmarks show it's a problem, I'll do something differently. That might be hand-rolling an algorithm, which is unfortunate for the programmer who follows me and has to read it. But it doesn't come up much.

The performance price for generating large amounts of extra code in the compiler is reasonably well understood, and not something that would be amenable to simply trying and benchmarking. One would have to implement generics, then spend several programmer-years tuning the compiler over various programs; compilers are complex machines.

Again, I haven't met a compiler expert who doesn't think that generating code for widely used generics would be expensive. And they would be widely used: any good generics solution would have to replace maps and slices, and would permeate the standard library.



