Hacker News
Go 1.5 Release Notes (golang.org)
123 points by raingrove on July 31, 2015 | hide | past | favorite | 63 comments


The release notes mention the new Go compiler is about 2x slower than the old C based one. It also mentions there's ongoing work to improve this. I wonder if there are any estimates on how close to the original performance they think they can get. Do they expect to reach parity by, say, 1.6 or will it always be slower?

For my tiny hobby projects, compile times aren't an issue at all so I'm asking purely out of curiosity.


They automatically translated C code to Go code. The objective was to produce correct Go code, speed was not an immediate concern - the auto-translated code mostly is very bad Go code, speed wise. Optimizing the Go code base of the compiler is the next step, planned to happen in subsequent releases.


>the auto-translated code mostly is very bad Go code, speed wise.

Is it? It might not be particularly optimized, but why would that ("very bad speed wise") be the case?


Generally, compilers aren't very good at compiling code that isn't idiomatic for the language. This is by design, since compiler writers try to make idiomatic, common code fast first and foremost.


> I wonder if there are any estimates on how close to the original performance they think they can get. Do they expect to reach parity by, say, 1.6 or will it always be slower?

I imagine that's just a side-effect of rewriting the entire compiler; I'd expect the next versions to improve on the speed.

Though in the meantime, if compilation time is an issue, you can always try gccgo. On my larger projects, it's slightly faster than gc 1.5. YMMV, obviously.


The new compiler is now garbage collected, and is generating lots of garbage, whereas the old compiler ran without any garbage collection. There is an interesting thread from the mailing list about this [0].

[0] https://groups.google.com/d/topic/golang-dev/6obxRcm-rqc/dis...


What's the performance of gccgo nowadays? Back when I tried it, certain things were slightly faster but a lot was slower compared to the official compiler.

I read that it mainly had to do with the lack of escape analysis; has this changed in recent gccgo releases?


Here's the Go 1.5 release schedule: https://groups.google.com/forum/#!topic/golang-nuts/hm9tWv53...

The final version should be released in mid-August.


Quite significant that Bell Labs Unix alums don't want to put up with C any more and actively purge it from their tree?


> Quite significant that Bell Labs Unix alums don't want to put up with C any more and actively purge it from their tree?

Go is quite literally written by the people who invented C, wrote C, and made it what it is today. (Ken Thompson wrote one of the first commits to the language).

The reason for purging C from the Go toolchain is that it makes it harder for Go developers to contribute. Go developers already all know Go (by definition) - why make them use a language that they may not know as well in order to contribute back to the project?

Furthermore, mixing C code with Go code can cause performance issues if not done right. While you can expect the official project to "do it right", this still raises the bar for people who know Go and want to contribute, but don't know C as well.

And frankly, I don't think it's that significant to say that they don't want to write C anymore. C was a great language for its time, and it still is a great language in many ways. But it's almost half a century old; it would be more surprising if after almost 50 years they couldn't write a language that they prefer even more!


> Furthermore, mixing C code with Go code can cause performance issues if not done right. While you can expect the official project to "do it right", this still raises the bar for people who know Go and want to contribute, but don't know C as well.

This is why Go was maintaining its own C compiler -- they had modified it to work well with Go, as far as I am aware[1].

Removing it means one less compiler to maintain.

[1] From Go 1.4, the source for the Go C compiler: https://github.com/golang/go/tree/883bc6ed0ea815293fe6309d66...


I don't have the link handy right now, but IIRC there was a mention of some system-level differences: Go might want to change things in the toolchain, below the language level, in a fundamentally different direction from the C toolchain. An example was in the area of how stacks are created and operated on.

I think one could foresee making more changes in the future in that area. It's sort of exciting to think that these fairly low level foundational details which much of programming relies on, could be rethought and maybe get us to a different place than we are now.


More that I think most compilers want to be written in the language that they compile. Go has been moving toward this for a while with the compiler rewrite in Go. It's hard to call yourself a C successor when your compiler depends on a C compiler.


I think it is more dogfooding, no? Or proof of concept/completeness?


I agree. Another benefit is that people interested in Go but not in C can now contribute to the compiler itself.



Bell Labs alums have never had a specific penchant for C, per se, though their languages have mostly had C-like ALGOL syntax.


The "stop the world" phase of the collector will almost always be under 10 milliseconds and usually much less.

...and all of a sudden, GC becomes a non-issue for all but the most highly performing software.


10ms is still a gargantuan amount of time for any type of application that must present a smoothly animated UI (not just games): about 60% of the per-frame budget at 60 fps. Every frame where the GC decides to kick in would result in a skipped frame, which is very noticeable. The only acceptable type of garbage collector for this kind of application (and I wouldn't call an animated UI application "most highly performing software") is one that allows full control over when it triggers and for how long. This sentence from the release notes, "The 'stop the world' phase of the collector will almost always be under 10 milliseconds and usually much less.", is completely meaningless in this regard; it basically still says "time required for garbage collection can be anything, sometimes less than 10ms, sometimes more, who knows...".


If you'd seen the graphs you'd see that the stop time is related to total memory used. 10ms only happens with many gigabytes of memory in use. For more reasonable amounts (up to a few gigs) the stop times are 1-3ms.


Did you read it? With 1 GB of heap that's less than 1ms, and if you're writing a UI or a game engine there are ways to recycle objects to keep the GC from working over piles of short-lived garbage.


That's highly oversimplified. It traded throughput for pause time.

(Pause time is not the only metric that matters; treating it as if it were is a common misconception about GC that bugs me.)


No, there are many more potential issues with GCs beyond this one metric.


tl;dr - No change in language, one minor consistency oversight fixed. Big changes on implementation, including compiler being bootstrapped.


And still no generics. Honestly, at this point I just wish they'd add a 'void' alias for 'interface {}' so at least non-typesafe code was only as ugly as the equivalent C -_-


Russ Cox said this year that generics aren't left out because of political reasons or design choice. They are left out because of technical constraints which are:

- Generics in current form won't work across the board with all parts of Go.

- Generics of form that will work for Go across the board are not technically trivial.

Honestly, I appreciate Go's philosophy of not implementing until fully understood and accepted. Besides, Go's primary design principle is that of composability. So you're using reflection in one form or another. Achieving generic behaviour with interface and reflection is much more consistent than generics as we know from other languages.


>Russ Cox said this year that generics aren't left out because of political reasons or design choice.

They've been saying that from the beginning. There are no real technical constraints; it's been done in all kinds of languages and it has been a well understood feature for 2+ decades. They just don't want to make the compromises needed, while letting the developers continue to deal with all of them...

See also pcwalton's (of Rust fame) comment below.


They've been solved for many other languages but not for Go.

The lessons learned don't necessarily translate well between languages, because different problems need to be solved. Each language is a bag of features that need to work well with all the other features included in the language.


Do you find Go imposing any special set of constraints? If anything it is far less featureful, and more plainly ALGOL-like, than most of the other languages that have found a way to fit generics.


As far as I know it's not that something couldn't be shoehorned in, but rather that the language designers' tastes result in additional technical constraints.

Apparently they don't want to support only boxed types (like Java), and they also don't want to generate multiple implementations of each generic function for each size, resulting in code bloat (like C++), or to generate code at runtime (like, say, Julia).

You could argue that they should just make a choice and go with it because generics are so important, and some language designers would do that, but then these differences in design goals are why we have different languages in the first place.

This is based on an early article by Russ Cox [1]; I don't know how the team's position has evolved since then.

[1] http://research.swtch.com/generic


Go does have some things like interfaces that don't work quite like they do in other languages.

I have yet to see a full technical proposal that integrates well with the existing language features and has no nasty corner cases. If such a thing existed, I'd be among the first to stand up and applaud it.

The best solution currently (IMHO), is code generation with things like gen [1]. Though I have yet to use that in a real project, so I can't say for sure it is actually the best solution.

[1] https://clipperhouse.github.io/gen/


Purely out of curiosity, do you know of any talk that would explain in detail why something like Swift's protocol extensions can be implemented in conjunction with generics, but Go interfaces can't?

From the outside, the two features (Swift's protocols with extensions and Go interfaces) seem fairly close, so that made me wonder. I didn't have the time to think about it in much detail, so I'm just wondering if anybody already has.


You certainly can implement Go-like interfaces alongside generics. There's nothing inconsistent about them.


Yes, but the typical consequences that arise out of that do not fit with design goals of Go.

Just off the top of my head: boxed types, for example, lead to Java-style inheritance patterns. This sits awfully with the composability and readability of Go code. You read Go code like a tree. You read Java code like a multiply linked list.


It can be done, the question is can it be done in a way that fits with everything else in Go?

There are technical drawbacks in the implementation of parametric polymorphism that don't suit Go's design goals: http://research.swtch.com/generic.


All of the supposed reasons given in that post have been answered to death by multiple people (including PL experts), and none holds to much scrutiny.

"We just can't be bothered" or "we don't think they're much good, and we don't have a need for them" would be a much more realistic answer.


I haven't seen any rejoinders to that document that contradict the fact that you pay for parametric polymorphism with either slower compilation or slower runtime.

Go isn't interested in either being slow. Any generics solution has to have fast runtime, because they need to replace the builtin parametric slices and maps. And the compiler is already too slow, in fact there's a lot of work slated to try and make it faster.

Lots of effort went into studying the problem, so it most definitely isn't an issue of "can't be bothered".


> I haven't seen any rejoinders to that document that contradict the fact that you pay for parametric polymorphism with either slower compilation or slower runtime.

I think the error in that reasoning is attributing this issue to generics specifically when this is actually a problem with code reuse in general.

Say I have a linked list and want to use it to hold both integers and strings. My two options, in any language—whether that language has generics or not—are: (1) write specialized code for every type I want to use it with (go generate), potentially bloating the code and resulting in extra compilation time; (2) use some sort of existential type (interface{} in Go) and share the code but pay a performance cost at runtime. The dilemma arises from the problem itself, not from generics.

It is of course right that there is a tradeoff here (although there are many other potential solutions that aren't at one extreme or the other, .NET-style JIT compilation or intensional type analysis for example). But not having generics doesn't eliminate the tradeoff. Generics are just a way for the compiler to automate the work that a programmer would otherwise have to do. If you don't have generics, you still have that dilemma, except that you have to write the code yourself instead of the compiler doing it for you.


That is a powerful theoretical concern that I shared when I first experimented with Go several years ago. In practice I have not found it to be a problem.

It seems (and this is purely an analysis of my own experience and the large amount of Go I read inside Google) that the types of data structures and algorithms I need parametrized follow a power law distribution. Mostly I need a list, or a hash map, and the builtin slice and map types almost always meet my needs. For the long tail of parameterized types in that power distribution, it appears that either it is not a performance hot spot, in which case I can borrow from the dynamic dispatch tools in the language (that is, some variant of an interface{}), or performance is so critical that even in C++ there is no obvious generic data type.

The latter is an interesting case I ran into in my prior life as a C++ programmer, and in OCaml. Someone had created a generic version of a data structure (usually more than one someone), but what I needed in my performance hot spot was some variant that wasn't captured by the parameterization.

An old example that comes to mind: I inherited a struct whose memory layout described a wire protocol. It had an 8-byte piece of padding in the middle of it. That was the perfect place to hide the pointer for a linked list I wanted to create in an intermediate step. Of course, none of the many template-based linked list data structures I had in C++ could do that, so I rolled my own.

So as unexpected as this may be, I find a combination of map/slice, interface{}, and rolling my own meets my generics needs in Go well enough that I wouldn't want to trade off slower compilation for it, or bad compile-time error messages. I still miss generics, and it may be that the particular kinds of programming I do mean I miss them less than others. A co-worker with nearly as much Go experience as I have says he misses them more often. The ones that come up most often for me are sorting and some kind of ordered map, but even those bug me less than once a month.


You shifted the argument from "generics are either bad for compile time or bad for performance, and therefore bad for Go" to "I don't need generics". I can't argue that you need generics, since I don't know the code you're writing (though I think it's likely you're leaving a good deal of performance on the table). But I do think the former problem has little to do with generics as a language feature.


My goal was to explain how I make do with the features that exist (paying one of the tradeoffs, productivity, compile time, or runtime), and that any more general purpose implementation of generics would, to be orthogonal with existing features, necessarily slow compilation (because it would have to take over the job of slices and maps today, and would become prevalent in our APIs).

I'm not trying to shift the argument; I'm trying to explain how this works out for Go in practice. There is a generics dilemma whose price programmers pay today, and it should be possible to turn that into a language feature with the same tradeoffs. In practice, I don't think a generics language feature that captures how I program in Go is possible. (Or at least, I haven't thought of one nor seen a proposal that does.)

Like many others, I'd like to see a prototype that proves me wrong.


> any more general purpose implementation of generics would, to be orthogonal with existing features, necessarily slow compilation (because it would have to take over the job of slices and maps today, and would become prevalent in our APIs).

It would not slow down compilation. Slices and maps are just built-in generics. Generics would codegen exactly the same way as slices and maps do now. Slices and maps compile down into calls to builtin runtime functions (using intensional type analysis IIRC); a generic version of them would, if implemented properly, call down into those exact same functions.

> There is a generics dilemma that programmers pay today and it should be possible to turn that into a language feature with the same tradeoffs.

Yes, it is possible and many languages have done it. All Golang needs to do is what Swift (just to name probably the closest analogue) does, with interfaces for runtime dispatch and generics for compile-time monomorphization. Or, if you want to continue using intensional type analysis to reduce compile time, implement that as certain variants of OCaml did (though you'll pay a performance cost to do so and I don't think it's worth it).


Compile-time specialization is exactly what would slow the compiler down too much. Doing it for slices and maps is already pushing it, doing it for more types (and judging by how generics is used in every other language, it would be a lot of types) would be a significant slowdown.

That said, your second point is good; my argument is weak because the compiler could stick to just specializing maps and slices and otherwise boxing. I strongly suspect any passable generics implementation for Go will need to do this (which, due to necessary stdlib changes, is off the table until Go 2, so understand I comment on this topic purely for the sake of conversation; I don't think anything can be done any time soon). The degree to which it needs to do it, I'm not sure.

There is still an orthogonality issue with interfaces. There's a lot of overlap between a dynamic dispatch mechanism and parametric polymorphism. As an API designer I'm a bit worried about it. I suspect if a good generics implementation came along though that argument would get pushed aside.

On this topic: one of the better prototypes I saw had a lot of trouble producing good error messages. I suspect this is a solvable problem, but how is not clear to me yet.

Also, we call it Go, not Golang.


>That is a powerful theoretical concern that I shared when I first experimented with Go several years ago. In practice I have not found it to be a problem.

People that didn't find it to be a problem (e.g. due to the kind of stuff they are working on) are not the ones concerned about its lack though.

It's like someone saying "I just do web development with Python, so I see no reason for NumPy to exist".


This is a fantastic explanation. Thank you for phrasing it so concisely.


>Lots of effort went into studying the problem, so it most definitely isn't an issue of "can't be bothered".

Citation needed.

I've seen absolutely zero sample implementations, testing of implementation options, or other such work going on with regard to adding generics to Go.

And, aside from a couple of blog posts and shooting down people asking for it in comments, I've seen no real organized discussion, e.g. like what would go on for a Python PEP to be accepted.


If Go's overriding focus is speed, then why have a garbage collector? Or write the compiler in Go?

Or do you think that just maybe using speed as a reason not to do something is a bit of a cop-out? Especially without any benchmarks to back it up.


There seem to be a few points in your comment, let me try to answer them as best I can:

The Go 1.5 compiler is not slower because it is written in Go, it is slower because it was mechanically translated from another language. Human eyes will improve it over time.

Garbage collection is a throughput hit the language designers (and I) are willing to accept for improved safety and simpler APIs. I'd rather not pay it, but I choose it over spending a large chunk of my API documentation describing various ownership scenarios like I did in C++. It's a tradeoff I find acceptable for most programs. You won't find me using a GC on a clock slower than 100 MHz, or in a sub-millisecond realtime system (but I probably won't be using linux either there).

I'm also willing to pay the performance hit on non-critical generic code, and I use interface{} for that where I can. If benchmarks show it's a problem, I'll do something differently. That might be hand-rolling an algorithm, which is unfortunate for the programmer who follows me and has to read it. But it doesn't come up much.

The performance price for generating large amounts of extra code in the compiler is reasonably well understood, and not something that would be amenable to simply trying and benchmarking. One would have to implement generics, then spend several programmer-years tuning the compiler over various programs; compilers are complex machines.

Again, I haven't met a compiler expert who doesn't think that generating code for widely used generics would be expensive. And they would be widely used, any good generics solution would have to replace maps and slices, and would permeate the standard library.


Were you surprised? They've said generics won't appear before 2.0, and they may never appear.

We have this discussion after every minor release. 1.6 will be released in December. See you then.


AFAIK you can create a named type from interface{}

type V interface{}

Don't hold your breath when it comes to generics. There is no way they can retrofit them without breaking the language; it's too late. Even features like covariance or unions are out of the question.

An option would be to write a super-set of Go with generics that would compile to Go code. What it would do, basically, is use interface{} everywhere the type variable T is required and insert type assertions for the user. I've been thinking about that.


> An option would be to write a super-set of Go with generics that would compile to Go code. What it would do, basically, is use interface{} everywhere the type variable T is required and insert type assertions for the user.

I think that's pretty much how you'd want to do it, it's type erasure just like generics in Java - sure, some people complain about not having reified generics, but I'm more concerned about type-safety, Go already has some runtime reflection capabilities anyway.


> An option would be to write a super-set of Go with generics that would compile to Go code.

You could call it go++!

(But seriously, that's pretty much what C++ did to C.)


> In Go 1.5, the order in which goroutines are scheduled has been changed. The properties of the scheduler were never defined by the language, but programs that depended on the scheduling order may be broken by this change.

Are there any more details about the change to goroutine scheduling? Is this more than just the change to the default GOMAXPROCS value?


> Are there any more details about the change to goroutine scheduling?

That would defeat the point of repeating that scheduling order is undefined, wouldn't it?

But according to issue 11372 the answer is that up to 1.4 the scheduler would run goroutines in definition order so if you started goroutines 1,2,3,4,5 it tended to run 1,2,3,4,5. In 1.5 it's biased to favour the last-started routine so it'd run 5,1,2,3,4. Again scheduling order is undefined so the order may change again in 1.6, explicitly relying on it would be an idiotic idea.

They have also introduced limited scheduling randomisation in race detection mode[0] which they intend to randomise further in 1.6 to make scheduling-order dependencies easier to suss out and fix.

> Is this more than just the change to the default GOMAXPROCS value?

The change to GOMAXPROCS wouldn't have changed the order in which the scheduler picks routines, since it could already be set to a non-default value previously.

[0] https://github.com/golang/go/commit/202807789946a8f3f415bf00...


Good to see momentum is still growing with the arm64 port; I remember a year ago it was looking like it might never happen.

https://twitter.com/maver/status/496376555237806080

But in February the story changed, and with this release it's well on its way to a full port.

https://twitter.com/davecheney/status/567621293109821440


And by the time it's stable, it will be useless for official iOS development, since Apple will require developers to submit apps in LLVM bytecode (they nicknamed it "Bitcode"). For Apple Watch apps, this is already mandatory.

For Apple this makes sense if they want to switch architectures or add extensions to their CPUs without requiring all developers to re-submit their apps, but it sucks a bit for the darwin/arm64 port of Go. Maybe we'll see an LLVM bytecode target in a few years? Then Go applications could also benefit from LLVM optimizations.


Back when the dynamic linking design doc was published, it included support for a 'plugin' build mode, wherein a package was compiled into a .so, plus a 'plugin' package to load and access said shared objects. Having written a number of things that would have been greatly simplified by such a facility, I'm sad to see it didn't make the cut and has been all but forgotten. It annoys me greatly that C can employ plugins written in Go, but Go cannot.


I don't believe it's been forgotten, they just ran out of time. There aren't infinite code monkeys working on this thing. AFAIK, almost all of the dynamic linking stuff is being done by one guy - Michael Hudson (a fellow Canonicaler who we're paying to help out with the Go toolchain).

Projects slip, especially ones like this which are fairly large and complex. I want Go plugins, too... but it takes time, and shared libraries were higher priority. Once the bugs are shaken out of shared libraries, plugins become a pretty logical (and hopefully small) next step.


My "go" experience comes with the latest Docker 1.7. OK, okay, it consumes 37% of 512 MB of memory. That's fine, I will shut down all of my apps for you. :P


They're defaulting away from SSLv3, nice!


Anyone know how to cross compile with cgo?


In Go 1.4, you need to build the Go tools for a given target beforehand. In Go 1.5, you'll only have to set some environment variables before you call go build; that's it. Unfortunately, this doesn't include cgo:

    env GOOS=linux GOARCH=arm GOARM=7 go build hello.go
This link is a good summary of the situation now and in the future:

http://dave.cheney.net/2015/03/03/cross-compilation-just-got...

This one talks about cgo:

https://medium.com/@rakyll/go-1-5-cross-compilation-488092ba...




