The State of Go: Where we are in February 2016 (golang.org)
281 points by signa11 on Feb 3, 2016 | 216 comments



The Go runtime is really starting to look sexy. 20 ms GC pauses on 200+ GB heaps!

I remember a thread discussing a pauseless GC on the dev mailing list, where Gil Tene, of Azul C4 fame, commented that having a clear policy on safepoints from the beginning was paramount to building a good GC. It looks like the community is strongly biased towards doing the fundamental things well and attracting the right people.

And on top of that we're getting BLAS and LAPACK bindings from https://github.com/gonum


You have to be very careful about these sorts of GC statistics. Things are often not quite what they seem and they depend a lot on the type of app you run.

The first thing to be aware of is that with modern collectors (I have no idea how modern Go's new collector is though), GC pause time depends on how much live data there is in the young generation. So you can easily have enormous heaps with very low pause times if all your objects die young and hardly ever make it into the old generations, because then you never really need to collect the rest of the heap at all.

Of course, outside of synthetic benchmarks, many apps don't show such pleasant behaviour, and often GC algorithms face difficult tradeoffs that are only really knowable by the developers. For instance, do you want low pause times, or less CPU used by the collector (higher throughput)? It turns out that's a fundamental tradeoff and the right answer usually depends whether your app is a user facing server (needs low pause times) or a batch job (better to pause for long periods but complete faster). No runtime can know that, which is why the JVM has tons of tuning knobs. Left to its own devices you can theoretically get away with only tweaking a single knob which is target pause time (in G1). Set it lower and CPU usage of the collector goes up but it'll try and pause for less time. Set it higher and the collector gets more efficient.

Or you can just buy Zing and get rid of GC pauses entirely.

So a stat like "20ms GC pauses on 200GB heaps" doesn't mean much by itself. You can get very low pause times with huge heaps out of the JVM as well:

  http://www.slideshare.net/HBaseCon/dev-session-7-49202969
but of course, you have to pay the piper somehow ... assuming non-weird heap usage the program will run slower overall.


The Go GC is not generational; those 20ms are for a full GC. Certainly the details of the application's heap usage are going to affect GC times, but this is not the time for just a nursery collection.


OK, the tradeoff they're making that I missed is that it's not a compacting collector. So eventually your heap can fragment to the point where allocation gets expensive or impossible. Unusual design choice.


Unlike Java, Go has first-class value types, and memory layout can be controlled by developers. That leads to far fewer objects on the heap and more compact layouts, both of which mean far less fragmentation. As you can see here, Go apps use considerably less memory than Java: https://benchmarksgame.alioth.debian.org/u64q/go.html


Unfortunately it's impossible to reliably measure the memory usage of Java that way because the JVM will happily prefer to keep allocating memory from the OS rather than garbage collect. It makes a kind of sense: GC has a CPU cost that gets lower the more memory is given to the heap, so if you have spare memory lying around, may as well deploy it to make things run faster.

Of course that isn't always what you want (e.g. desktop apps) ... sometimes you'd rather spend the CPU and minimise the heap size. The latest Java versions on some platforms will keep track of total free system RAM and if some other program is allocating memory quickly, it'll GC harder to reduce its own usage and give back memory to the OS.

In the benchmarks game I suspect there aren't any other programs running at the same time, so Java will go ahead and use all the RAM it can get. Measuring it therefore won't give reasonable results as the heap will be full of garbage.

Value types don't have much to do with fragmentation, if anything they make it worse because embedding a value type into a larger container type results in needing larger allocations that are harder to satisfy when fragmentation gets serious. But ultimately a similar amount of data is going to end up in the heap no matter what. Yes, you can save some pointers and some object headers, so it'll be a bit less. But not so much that it solves fragmentation.


You can't really compare total memory usage of a JIT to total memory usage of an AOT compiler that way if what you're trying to show is that value types reduce memory usage.

Also, I suspect that the fact that JVMs use a generational GC (and a compacting GC) blows everything else out of the water when it comes to fragmentation. There's no way a best-fit malloc implementation can possibly beat bump allocation in the nursery for low fragmentation.


Those default memory use measurements are just a way to check if a particular 100 line toy benchmark program has been written to exploit time / space trade-off.


Go doesn't use a generational GC AFAIK.


Oh wow thanks for pointing out gonum! Definitely going to be playing around with this.


The only thing missing is a good binding to a javascript engine, for code that has to run on both client and server.


If you need to write code that runs on your go server and on the client, there's always https://github.com/gopherjs/gopherjs


That is nice. But I hope you understand that such a solution is not preferred in every situation. For example, for performance reasons, it is always better to code in the native language.


At work (Lytics) 100% of our backend code has been in Go since the beginning over 3 years ago, so slide #6 highlights one of the most important things about nearly every Go release:

> Changes to the language: None

https://talks.golang.org/2016/state-of-go.slide#6

We generally get our entire stack upgraded to the latest release within 1-3 months with little effort. It wouldn't be much more work to be ready to upgrade on release day, but we haven't found a reason to worry about it.

Go is such a breath of fresh air compared to past Java and Python jobs where production was usually at least a major release version behind the latest and there was extra effort spent getting everyone using the same implementation (Oracle Java v OpenJDK or Ubuntu's Python v CentOS's -- there are differences!).

Conservative releases aren't a must-have for a language, but I do appreciate having one less operational headache to consider.


> Conservative releases aren't a must-have for a language, but I do appreciate having one less operational headache to consider.

Considering the nature of the work, the programming field seems to be particularly rife with "common wisdom" that's not supported by data. This especially goes towards language design. (In the Chris Granger talk that was recently posted here, he noted that programmers who said they "never" used the mouse, only the keyboard, actually used the mouse 50% of the time.)

The reality of the programming field is that it's "almost a field" [1] -- struggling just as much with empiricism as alchemy was before it evolved into the science of chemistry. Language features definitely suffer from the irrationality of the almost-a-field of programming.

Golang seems to be led by good empiricists who are targeting a specific set of use cases for programming in the large.

    [1] - https://news.ycombinator.com/item?id=9812487


Can you speak to why you were behind latest on the JVM? I can think of less than a handful of breaking changes over the last 15 years. I'd say it was more stable than Go.


Issues I've seen with JDK upgrades over the years (that I can think of right now):

* when "enum" keyword was added, if you had a variable named enum you had to rename it.

* If you use any of the sun.* packages you're asking for trouble on major upgrades.

* GC changes over the years can change how your program runs (latency changes, OOM issues). This probably applies to Go as well.


GC changes is a big one in the JVM, but not so much in Go (so far) as Go offers basically no GC tunables.

So while your awesome CMS tweaks from JDK7 become worthless once you switch to G1 on JDK8, with Go all you can do to optimize the GC is to create less garbage to begin with.

Not trying to say one approach is better than the other: Go's approach is operationally simpler but far less sophisticated than the JVM's.


I don't disagree with the simplicity argument, but as it relates to backwards compatibility, we've already had 1 major change to the GC that caused performance differences that needed to be investigated. With Java they'd typically have left the option to use the old GC (which, again, increases operational complexity), so in that case the backwards compatibility argument seems to favor Java.


>as Go offers basically no GC tunables.

This isn't actually true, Go has a single tunable: "GOGC" https://golang.org/pkg/runtime/


I consider that "basically no" compared to Java's daunting array of options.


From that list I would only consider the first one.

Using sun.* packages or relying on GC behaviour is a way to make Java code not portable across certified JVMs.

For example I took part in some projects that were married to IBM JVM, because they were relying on its features.


> relying on GC behaviour

You don't really have a choice in the matter. If you're writing high throughput or low latency applications you are dependent on the JVM's GC behavior, period.


Yes, but what I mean is that each JVM has its own list of GC algorithms.

Just as a very basic example, a certified JVM 8 is not required to have G1.

As for high throughput or low latency applications, yeah, actually one is dependent on the whole stack, hence why HFT is already moving onto FPGAs.


> If you use any of the sun.* packages you're asking for trouble on major upgrades.

IIRC, that caution about using sun.* packages was mentioned by Sun in docs of early Java versions, like 1.2 / 1.4 etc. Not sure about more recent ones.


While I agree that Java has been one of the most stable and backwards-compatible languages out there (especially compared to things like Scala, which regularly breaks source and binary backwards compat), there are still rough edges. I am not the original poster, but I can speak for why JVM deployments take time on Hadoop (which I work on).

* In order to upgrade the JVM, all the software you run has to work with the new JVM. If there's even one minor library that doesn't work or is suboptimal, you can't cut over. This is not an issue with Go because everything is compiled to an x86_64 binary there, whereas in Java everything is a jar file that must be run under (almost always a single) JVM.

* During a JVM version change, often Oracle removes or modifies non-public APIs that you need to achieve acceptable performance. For example, there is still no public way to free a DirectByteBuffer or create a FileChannel from a FileDescriptor, so you have to use the non-public APIs. Go almost never has this sort of problem since they tend to provide public APIs for everything you need, including platform-specific things.

* We run really big JVM heaps (>100 GB), and so minor changes in the GC behavior or default settings can cause major issues. For example, JDK8 changed the defaults for many GC tunables. It takes weeks of work at least for us to validate that there are no significant regressions. This is an issue that I would expect Go to have as well since they are changing the GC.

* Enterprise customers are extremely risk-averse, and they're not enthusiastic about deploying a new JVM. Operationally, they don't see any upside, only downsides. Of course JVM upgrades have to happen eventually, but they usually happen when new software is rolling out as well. Oracle's decision to stop shipping security updates for older JVM versions has "helped" in a sense by making the issue seem more urgent.

* Open source projects don't like dropping support for users running older software. There is usually someone around to argue against dropping support for anything that rolled out within the last 5 years.


The first issue is surely an issue with Go as well, if they ever do change the language or std lib in a way that isn't perfectly bug-for-bug compatible? As to upgrade Go I thought you have to recompile everything including all dependencies (I guess there's no stabilised ABI?), so it amounts to the same thing: if one of your dependencies doesn't play nice with the new Go, you can't upgrade.

The private APIs thing is indeed one of the most common issues, along with needed upgrades to bytecode rewriting libraries.


The difference is that you can upgrade each Go application to Go 1.6 separately, whereas with Java, once you upgrade the JVM, all of your applications get upgraded at once. While you could technically have two different JVMs installed side-by-side, in practice nobody actually does this because of the operational complexity and the way that Java handles dependencies. Perhaps this will become an issue for Go if people start using the new shared library feature. For Java, _everything_ is a shared library (sort of), so this is an omnipresent issue.

I think both Go and Java have been good about avoiding backwards incompatible changes in the standard library. Both languages have an explicit policy of avoiding these whenever it is at all possible.

Frankly, the Go standard library is a lot better written than the Java one. For example, I dare you to figure out how to call statvfs from Java, or figure out how many hardlinks there are to a specific file inode. Or make an asynchronous DNS lookup. Even simple things like creating a socket without doing a DNS lookup are very difficult to achieve in the Java standard library.


I really don't get this at all. The JVM is a program. It sits in a single directory. I routinely have several installed on my laptop. There are no operational complexities from having multiple different versions installed, if you want that.

I suspect this really boils down to operational complexities of Linux distros, not Java. If your package manager only lets you install one JVM then maybe this seems "complicated" relative to Go, but that's not a Java problem.

WRT the standard library, yes the Java standard library doesn't expose UNIX specific syscalls. It exposes stuff at a higher level instead because it tries to be portable. That's a different tradeoff to what Go makes but I wouldn't say that makes it badly written. To me badly written would mean buggy, confusingly designed, too small or too big etc. If you want to write non-portable software then that means you may have to link in an extra library or so (like JNA).

Creating sockets does not do DNS lookups in Java. You may be thinking of the URL class, which does, and there's a URI class that avoids that.


It's not a Linux issue. People almost always install the Oracle JVM themselves rather than using a package manager (it's a long story...)

The desire for a single JVM comes partly from the architecture of Hadoop itself. Hadoop is structured as a framework (you give your MR job to YARN and it runs it by creating new JVMs for you).

The Java standard library is weak in many areas. The "write once, run anywhere" ideology is part of it, but there are also just... weak parts.


Actually we have deployments where the JRE is packaged alongside the application, because in some customers teams have the freedom to choose their JDKs.

However we also have projects where the Java version is married to whatever the Websphere deployment of the day supports.

And on Android, well there is no upgrade at all. Which is yet another reason to use the NDK, even with all the 3rd class developer treatment, at least the C++ compiler gets updated and doesn't depend on the Android version of the target devices.


> This is not an issue with Go because everything is compiled to an x86_64 binary there

I don't think the situation is all that different. All the Java code ends up being binary as well, at least after running a while.

At the moment, I think most people's Go projects simply have fewer dependencies. If you poke about on some of the popular Go projects, you'll see some of the bad coding practices that will result in upgrade breakages like checking for exact error strings (unfortunately needed sometimes, I know), so I foresee the same issues over time.


The biggest issue I've seen with JDK upgrades is not the language changes but bytecode changes. Many tools and frameworks such as Guice, Spring, PowerMock, AspectJ, and code coverage tools all operate on byte code, not source code. Therefore, whenever there is a classfile format change, all these tools potentially break until they are updated.


One aspect is that upgrading Go only requires upgrading it on the build infrastructure rather than a deployment of a new JVM. The "next build" will simply be a binary built with a new compiler version.


Hm, it's been over 2 years, so I don't remember many details. I do remember having a couple projects (Kafka, maybe more?) using Scala which required specific JDK versions. Also some databases would recommend certain JDK versions.

So that means you either run different JDKs for different services or you stick with the lowest common denominator.


Perhaps they were using some non-Sun/Oracle programming software besides Java, something like Groovy.


This is my experience as well. The only thing that somewhat breaks between Go projects is vendoring (or lack thereof). But in comparison to the pain of supporting Python applications that must run on everything from 2.6.6 to 3.5, Go is a walk in the park.


Python is entirely different; the two languages cannot be compared with regards to backwards compatibility.

Python has been around since the early 90s, while Go was conceived very recently. Sooner or later, the quirks that resulted from designing the language so long ago had to be addressed, and that's why there's Python 3. The same can be seen in Java and Ruby.

Once Go is used actively for ~9 years (Python 2.0 to 3.0) and suffers no changes that break backwards compatibility, we can compare it with Python.


The concurrent map access checks in Go1.6rc1 have already uncovered one such bug in my code. Love it!

Oh, and I've already made use of the whitespace-stripping in text templates, too! :)

One thing I was worried about from the focus on reducing maximum GC pause times (i.e. latency) was that this might negatively affect GC throughput. For example, maybe the pauses are shorter but there are many more of them. The project I'm working on at the moment exercises the GC heavily but is not concerned with latency (it's bulk data-processing), and I didn't see any significant regression in performance/throughput from 1.5 to 1.6rc1. So, yay.


Aggressive GC latency improvements, like the ones Go is making, virtually always negatively affect throughput. For example, Azul C4 has lower throughput than HotSpot (at least per the numbers cited in the paper). There's no free lunch in GC.


But I would argue that most people who use Go, use it to write user-facing server apps, or at least server apps in which response time is an important metric. I don't know anybody who uses Go to primarily write batch jobs where throughput matters more than latency.


Yeah, but that's circular - if you did want to write a high performance batch job, maybe you wouldn't use Go because of the GC.

And hi by the way ;)


As I understand it, you would have seen the performance hit going from 1.4 to 1.5.


Based on Rick’s talks, this is the sort of trade-off choice one deliberately makes. Reduced throughput can be understood as amortized GC.


A Golang beginner's question: do these GC improvements make Golang a suitable language/ platform for writing games?

EDIT: I realise this is a vague question. I suppose I was wondering if the order of magnitude GC performance in Go is likely to interfere with game loops you might find in reasonably CPU/ GPU intensive Indie games (i.e. NOT Crysis).


You have to ensure that the GC never makes you drop a frame. For 60Hz, that means staying below 16.7ms.

Given that a Go 1.6 GC will still take about 4ms, you have 12.7ms to generate a frame, which can be too limiting for some CPU-intensive games, but is perfectly acceptable for many games.

(In Go 1.5, a GC was much more likely to make you drop a frame, as it could easily average 40ms.)

On the other hand, there are fewer high-quality libraries in Go than in C++ or in JS. That may be the most limiting factor.


Interesting fact: Unreal Engine uses a GCd heap (or used to at least) for its core game state, so that means many AAA games use GC. And you can hit 60fps with it.

Their secret is they keep the game state heap really small. Like, 50 megabytes, maybe.



Another weird thing is, the background collection is still using a core (plus somewhat slowing foreground code, with write barriers) when it runs. It's kinda like you have a varying amount of CPU power.

One approach is just to program as if you had less CPU, as if there were always a GC running. I suppose if you have some code that isn't smoothness critical (game AI, say), or CPU-affecting detail settings you can twiddle without looking too glitchy, maybe you can figure out some way to shed work when you start to fall behind.

It'd be really cool to see someone attempt a game or such in Go--boundary pushing's always fun, more so when the boundaries are recently expanded.


Back in the 80's and early 90's, that type of remark was directed at anyone trying to use Turbo Pascal, C, AMOS, Turbo Basic, Forth, Modula-2.... for game development.

No sane game developer would use anything other than Assembly.

The more things change, the more they stay the same.


Wasn't to discourage at all--was trying to give my understanding (based on the design docs, talks, etc.) of what your CPU budget looks like and speculate about ways to deal.

"It'd be really cool" was absolutely sincere; the Go folks have given us some new toys and it'd be neat to see how far we can take 'em.


That was my point as well.

The culture in the gamedev world is such that the big teams only switch tooling when forced to do so. Only amateurs tend to try out new ways.

Back when the move from Assembly to higher level languages started, many games would be filled with inline Assembly.

The compilers weren't that good at generating code, and they didn't want to lose the power of Assembly.

Just like now with the managed runtimes and having tons of C and C++ underneath. Languages that weren't that speedy 30 years ago.

We need to keep the spirit of taking things far alive, because in computing seeing is believing.


I just have to say this is a great answer. Thanks.


It all depends on which games people intend to write.

Apparently younger generations are unaware that C was seen as a managed language in the 80's and early 90's, with compilers not generating good enough code for game development.

In the 90's I saw lots of Turbo Pascal and C code where the functions were plain wrappers for inline Assembly.

So unless you intend to write Crysis in Go, there are lots of games you can write with it.


Lots of neat and profitable games have been made in slower languages, so you should be fine, though in some cases you might have to replicate an entire engine. Not being real familiar with Go, the thing you'll want to look out for if your game gets "big" is unpredictable GC events, more so even than the absolute max duration of any particular GC event. If Go (or whatever else) doesn't provide enough tuning to support "soft" realtime systems, you'll end up structuring your program to allocate in pools and release all references only at certain points to try and make the GC predictable, which is a common pattern for non-GC languages in games. At that point you've all but lost the let-me-not-care-about-memory-management justification for the language, except for the safety aspects, which don't tend to be a high priority for games...


I would say "yes", based on this project: https://github.com/thinkofdeath/steven

It's a voxel game based on OpenGL. It implements much of vanilla minecraft. Anecdotally, I'd say it performs much better for me than minecraft itself does, at longer view distances, and with essentially no observable GC pauses (which minecraft suffers from quite visibly). Also, a stabler heap size, etc.

The capabilities are definitely there.

I'm spending some of my weekend moments playing around with more GL stuff based on what I've learned from reading this project, and it's quite fun. Build times are right up there where you'd expect, too -- seconds or less! (The first build takes a few moments for running gcc for the c bindings to GL, but after that, those cache nicely.) A 1-second turnaround for recompiling a whole game is an incredible breath of fresh air.

And of course, I shipped my demo game to a friend on a mac the same day I started writing. I'm on a linux. Not bad.


I can't access that link at the moment but I will have a look at it, thank you. I had exactly the same idea- to play around with bindings to OpenGL and build some demos.


In addition to what others have said, you can relieve much of the pressure on the GC by making liberal use of `sync.Pool`.


To be honest, for 60fps, and now with VR - 120fps and more - it won't be good enough. But... I could be wrong... Lots of games do have additional scripting languages (.gsc in COD games, lua in others, etc.) - these all have garbage collectors. The key to controlling that is checking your high watermarks (while playing the level) and doing incremental GC.

Whether you can write the whole game in it - I don't know, but it'll be pretty good for tools/editors/pipeline/etc.


Yes and no. While the previous GC pauses wouldn't really have affected anything the size of a hobby game, the improvements are welcome. The bigger problems with Go regarding game development are the lack of operator overloading and interfacing with C, the latter being a pain when it comes to memory management.


Would these GC improvements put Golang in the same league as C# (which is widely used for game development e.g. Unity3D, the .NET runtime has a GC) or can these comparisons not be made?


As a heads-up when talking about Unity3D, please be aware of the prehistoric .NET runtime they are still shipping versus what Xamarin and Microsoft deliver.

So always take the JIT/GC complaints in the Unity3D context with that caveat in mind.


Thanks. Whilst I was aware that Unity3D was shipping with an ancient version of Mono (and more or less consequently with an ancient version of C#), I wasn't aware there were a lot of JIT/GC related complaints against it.


I have dual feelings with regard to Unity guys.

On one side they did a great job increasing the visibility of C# among game developers, which tend to only switch languages when the OS and console SDKs push them to do so.

On the other hand, they spread the feeling that C# is bad for game development among developers who don't understand "language != implementation" and take their Unity experience as representative of how C# implementations perform in general.

However, I should also say that they are aware of it and planning to improve the situation after their IL2CPP compiler stabilizes.


So I wondered that as well and did some Googling.

Found a blog post by Joel Webber, who I have not heard of, but he was working on a minecraft clone and had some advice on avoiding GC and memory layout:

http://www.j15r.com/blog/2015/01/25/Game_Development_in_Go

Also found an engine, Azul 3D, and they made this claim:

https://azul3d.org/doc/faq.html#what-about-the-garbage-colle...

Then there is termloop, which is a terminal based engine:

https://github.com/JoelOtter/termloop

Fun for indie game stuff.

So, yeah. There's that.


Nim (http://nim-lang.org) is a language that in some ways fills a similar niche to Go, and it is suitable for games because you can tune or even change the GC. That's not the case with Go, so Go may be less suitable for soft real-time uses like games.


It should be suitable, but whether it is also depends on your memory usage. GCs are not black boxes that magically work or don't work; they get a bad reputation from people who do heap allocations without thinking about them. The key to good GC performance is the allocation profile. Go gives you very good control over heap allocation, so it should be possible to arrange the main game loop such that no fresh heap is allocated, which also means the GC does not run. The Go GC runs when the allocated heap grows to a set multiple (by default 2x) of the heap size after the last GC run. Adjusting this factor to your memory usage should give you pretty good control over when the GC runs and when it doesn't.


absolutely - if you want to write your games for the terminal https://github.com/JoelOtter/termloop


There are a heck of a lot games written in C#, which is not only interpreted, but probably has a less-tuned GC. Including heavy-processors like Kerbal Space Program.

If KSP works in C#, you can write a game in Go no problem.


C# is not interpreted, and the CLR has a generational GC on par with the JVM. Many C# games are built on game engines like Unity which are implemented in C++, anyway.


> C# is not interpreted

Unless you're asserting that e.g. Python is not interpreted; or you're using the relatively recent native toolchain, C# is interpreted. That's its original state and its widest deployment pattern.


You're misinformed, I suspect because you are conflating .NET programs being distributed as bytecode with .NET programs being interpreted.

It is true that .NET programs are traditionally distributed as CLR bytecode. This is similar to a .class file or a .pyc file. But the CLR does now, and always has, had a JIT. E.g., the first line or two of https://msdn.microsoft.com/en-us/library/ht8ecch6(v=vs.71).a... , which is for .NET 1.1, which explicitly states that, even at that time, the bytecode is JITed prior to execution.


> You're misinformed, I suspect because you are conflating .NET programs being distributed as bytecode with .NET programs being interpreted.

No and no.

> But the CLR does now, and always has, had a JIT.

And Python has had a JIT[0] for as long as the framework has existed.

[0] https://en.wikipedia.org/wiki/Psyco now replaced by the pypy project


.NET is JIT'd in its most popular form, the .NET Framework.

Python is interpreted in what was its most popular form, but may not be now, CPython.

This is using the definition of a JIT as an execution engine which takes in some form of bytecode and, at runtime, emits architecture-specific assembly code to a page, marks that page executable, and changes the IP to that page.

If CPython executes in that manner then I'm mistaken about CPython's execution engine and would also consider CPython to be a JIT.


Python doesn't have a JIT by default (CPython, the reference implementation). Does the same apply to .NET?


.NET as distributed by Microsoft:

- JIT and AOT compilation via NGEN up to .NET 4.5.2

- Starting with .NET 4.6, RyuJIT which uses the Visual C++ backend and exposes SIMD support to .NET languages

- When targeting Windows 8 and 8.1, AOT compilation to native code in a format called MDIL. Basically requires dynamic linking on device, everything else will be native code already

- When targeting Window 10 store applications onwards, AOT compilation to static executables

- .NET Compact Framework also always JITs

- .NET Micro Framework is the only one that does interpret MSIL

Also, Microsoft's .NET JIT compilers, with the exception of the .NET Micro Framework, always JIT the code; there is no threshold to trigger compilation as on most JVMs.


What are you talking about? .Net is compiled to native code on load. (Or earlier)


Did I get down-voted because some OTHER person started a dumb argument about interpreted vs. compiled in my thread? Awesome.


Their solution to the template whitespace thing underlines a fundamental difference between what the Go core developers consider to be good language/library design and what I do.

To me adding the - to the template tag {{foo -}} to get rid of whitespace on that side of the tag is totally unintuitive and a really kludgy solution. Sure, it's terse and being terse can be nice, but terseness to me probably doesn't even make it into my top 10 concerns when designing a language or library.

In my mind a lot more thought and consideration should be put into solutions for problems that are going into core libraries. Stuffing cute hacks into the core libraries willy nilly leads you to PHP. The consequences of which I deal with daily.


This change simply draws from Jinja, which has had this feature for nearly 10 years[0]. It's a simple and efficient solution to the problem of trimming whitespace on either side of a template tag, I fail to see what is kludgy about it (and intuitiveness is in the eye of the beholder). In my experience it's clear, simple and doesn't make the template less readable.

> Sure, it's terse and being terse can be nice, but terseness to me probably doesn't even make it into my top 10 concerns when designing a language or library.

The whole point of this feature is to be terse, otherwise you could already use template comment to trim formatting whitespace.

[0] http://jinja.pocoo.org/docs/dev/templates/#whitespace-contro...


Perl's Template Toolkit has had this feature since at least 2001 (and I remember it being available before that but my googlefu is failing me right now.)

cf http://www.perl.com/pub/2001/01/tt2.html

> If these tags are replaced with [%- -%] then the preceding or following linefeed is suppressed.


Nice, thanks for the history lesson. Now I wonder whether Armin Ronacher got it from there or independently reinvented it.


> The whole point of this feature is to be terse...

I get that. I don't think you understood my objection, which was that being terse is not as important to me as being elegant and easily understood. Looking at {{foo -}}, it's not clear what that - is going to do. You have to already know, or look it up. There is basically no way to tell from context what the desired behavior is. That's bad design IMHO.


How much programming language syntax is really intuitive, as compared to what you've grown familiar with over time?

Once you use the Go(/Jinja/Perl) syntax once or twice, then putting a minus sign to remove whitespace will be easily understood.

I am interested in an alternative that you find more elegant.


> I am interested in an alternative that you find more elegant.

Why not make it a filter/pipeline (I haven't used template/text in ages, does it support this?), like in Django {{ thing|trimleft }}.


{{ foo -}} does not strip white space from the output of foo. It eats up the following white space.

So "{{ foo -}} bar" renders the same as "{{ foo }}bar".


It's worth observing that while text/template may ship with the core code, it's a very, well, library-y library. Get 10 programmers together and ask about text templating and you'll probably get 11 answers. It's very easy to replace it with whatever floats your boat; it's not deeply integrated into anything else, and just ties in to very standard io.Writer interfaces and such.

Bikeshedding about text/template really is just bikeshedding about text/template moreso than Go qua Go.


That's a fair point, but official libraries should be held to a higher standard, I think.


I fail to see what's so "kludgy" about it.


How does the Go GC compare with the JVM one?


The JVM's GC is most likely significantly better. On the other hand, golang's GC needs to collect fewer objects, in some cases orders of magnitude fewer.

If you compare a slice of structs with 1000 elements, it'll be one object (and allocation) in golang. An equivalent array in the JVM requires the array itself + 1000 Objects: 1001 allocations. In this case, golang has a lot less object graph to GC.

Of course slice of 1000 interfaces or pointers faces the same 1001 issue in golang as well.

You could emulate the same GC load in the JVM, at the cost of runtime performance, by storing the objects in a byte array and [de]serializing as needed, but that's neither idiomatic nor acceptable most of the time.


Could you explain what essential things the JVM does better than Go at this point? Does it stop the world less? Or does it do more things in parallel? Thanks


Although Go's GC is tunable to some extent, the open source HotSpot JVM already has multiple GC implementations that you can choose based on your use case and tune further. There is also work being done in the OpenJDK project on a GC that can collect > 100GB heaps in < 10ms [1]. And there are alternative proprietary implementations available today that already have no stop-the-world collections [2].

[1] http://openjdk.java.net/jeps/189 [2] https://www.azul.com/products/zing/


From this "work being done in the OpenJDK project for a GC that can collect > 100GB heaps in < 10ms" I deduce that Go 1.6 is not that bad at all and 1.7 or 1.8 could even beat that easily. https://talks.golang.org/2016/state-of-go.slide#37 https://github.com/golang/proposal/blob/master/design/12800-...


Latency isn't the only concern; you also have to look at throughput. The JVM's GC has been carefully tuned to strike a balance here.

In particular, Go's GC is not yet generational from the talks I've seen, which is a large throughput loss compared to the GC of the JVM.


If it is carefully tuned, why does it need such a big GC tuning guide and hundreds of JVM flags to tune the runtime? Any Java product of consequence ships with custom GC settings, meaning the defaults were not found suitable.

https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc...


Because the JVM developers had customers who asked for the ability to tune the GC for their particular application.

Go will receive those feature requests too. The Go developers may not be as willing to provide so many knobs (which is a position I'm entirely sympathetic to, don't get me wrong). But the settings always exist, regardless of whether Google hammers in values for them or leaves them adjustable. GC is full of tradeoffs; they're fundamental to the problem.


The G1 collector has a single knob that is supposed to be a master knob: you pick your pause time goal. Lower means shorter pauses but overall more CPU time spent on collection. Higher means longer pauses but less time spent on collection and thus more CPU time spent on your app. Batch job? Give it a high goal. Latency sensitive game or server? Give it a low goal.

There are many other flags too, and you can tune them if you want to squeeze more performance out of your system, but you don't have to use them if you don't want to.


Just like C and C++ compilers have lots of options to tune all generated code.

Sometimes just -O2 or -O3 isn't enough, regardless of how much tuning has gone into them.


Depends what you compare it to. As I have written above, you can get low pause times with huge heaps today. In practice very few apps need such low pause times with such giant heaps and as such most users prefer to tolerate higher pauses to get more throughput. There are cases where that's not true, the high frequency trading world seems to be one, but that's why companies like Azul make money. You can get JVMs that never pause. Just not for free.


With respect to garbage collection only and ignoring things like reliable debugging support, the primary thing it does is compaction.

If your memory manager does not compact the heap (i.e. never moves anything), then this implies a couple of things:

1. You can run out of memory whilst still technically having enough bytes available for a requested allocation, if those bytes are not contiguous. Most allocators bucket allocations by size to try and avoid the worst of this, but ultimately if you don't move things around it can always bite you.

2. The allocator has to go find a space for something when you request space. As the heap gets more and more fragmented this can slow down. If your collector is able to move objects then you can do things generationally which means allocation is effectively free (just bump a pointer).

In the JVM world there are two state of the art collectors, the open source G1 and Azul's commercial C4 collector (C4 == continuous compacting concurrent collector). Both can compact the heap concurrently. It is considered an important feature for reliability because otherwise big programs can get into a state where they can't stay up forever because eventually their heap gets so fragmented that they have to restart. Note that not all programs suffer from this. It depends a lot on how a program uses memory, the types of allocations they do, their predictability, etc. But if your program does start to suffer from heap fragmentation then oh boy, is it ever painful to fix.

The Go team have made a collector that does not move things. This means it can superficially look very good compared to other runtimes, but it's comparing apples to oranges: the collectors aren't doing the same amount of work.

The two JVM collectors have a few other tricks up their sleeves. G1 can deduplicate strings on the heap. If you have a string like "GET" or "index.html" 1000 times in your heap, G1 can rewrite the pointers so there's only a single copy instead. C4's stand-out feature is that your app doesn't pause for GC ever, all collection is done whilst the app is running, and Azul's custom JVM is tuned to keep all other pause times absolutely minimal as well. However IIRC it needs some kernel patches in order to do this, due to the unique stresses it places on the Linux VMM subsystem.


While I agree that compaction is desirable in theory, empirically it's not really necessary. For example, there are no C/C++ malloc/free implementations that compact, because compaction would change the address of pointers, breaking the C language. Long-lived C and C++ applications seem to get by just fine without the ability to move objects in memory.

Java code also tends to make more allocations than Go code, simply because Java does not (yet) have value types, and Go does. This isn't really anything to do with the GC, but it does mean that Java _needs_ a more powerful GC just to handle the sometimes much greater volume of allocations. It also makes Java programmers sometimes have to resort to hacks like arrays of primitive types (I've done this before).

People like to talk about how important generational GC is, and how big a problem it is that Go doesn't have it. But I have also seen that if there is too high a volume of data in the young-gen in Java, short-lived objects get tenured anyway. In practice, the generational assumption isn't always true. If you use libraries like Protobuffers that create a ton of garbage, you can pretty easily exceed the GC's ability to keep up with short-lived garbage.

I'm really curious to see how Go's GC works out for big heaps in practice. I can say that my experience with Java heaps above 100 GB has not been good. (To be fair, most of my Java experience has been under CMS, not the new G1 collector.)


Experience with C/C++ is exactly why people tend to value compaction. I've absolutely encountered servers and other long-lived apps written in C++ that suffer from heap fragmentation, and required serious attention from skilled developers to try and fix things (sometimes by adding or removing fields from structures). It can be a huge time sink because the code isn't actually buggy and the problem is often not easily localised to one section of code. It's not common that you encounter big firefighting efforts though, because often for a server it's easier to just restart it in this sort of situation.

As an example, Windows has a special malloc called the "low fragmentation heap" specifically to help fight this kind of problem - if fragmentation was never an issue in practice, such a feature would not exist.

CMS was never designed for 100GB+ heaps so I am not surprised your experience was poor. G1 can handle such heaps although the Intel/HBase presentation suggested aiming for more like 100msec pause times is reasonable there.

The main thing I'd guess you have to watch out for with huge Go heaps is how long it takes to complete a collection. If it's really scanning the entire heap in each collection then I'd guess you can outrun the GC quite easily if your allocation rate is high.


It's true heaps can be a pain with C/C++. 64-bit is pretty ok, it's rare to have any issues.

32-bit is painful and messy. If possible, one thing that may help is to allocate large (virtual-memory-wise) objects once at the beginning of a new process and have separate heaps for different threads/purposes. Not only can heap fragmentation be an issue, but also virtual memory fragmentation; the latter is usually what turns out to be fatal. One way to mitigate issues with multiple large allocations is to change memory mappings as needed... Yeah, it can get messy.

64-bit systems are way easier. Large allocations can be handled by allocating page size blocks of memory from OS (VirtualAlloc / mmap). OS can move and compact physical memory just fine. At most you'll end up with holes in the virtual memory mappings, but it's not a real issue with 64 bit systems.

Small allocations can be handled with an allocator that is smart enough to group allocations by 2^n size (or does some other smarter tricks to practically eliminate fragmentation).

Other ways are to use arenas or multiple heaps. For example per thread or per object.

There are also compactible heaps. You just need to lock the memory object before use to get a pointer to it and unlock when you're done. The heap manager is free to move the memory block as it pleases, because no one is allowed to have a pointer to the block. Harder to use, yes, but hey, no fragmentation!

Yeah, Java is better in some ways for being able to compact memory always. That said, I've also cursed it to hell for ending up in practically infinite gc loop when used memory is nearing maximum heap size.

There's no free lunch in memory management.


Well, I can only say that your experience is different than mine. I worked with C++ for 10 years, on mostly server side software, and never encountered a problem that we traced back to heap fragmentation. I'm not sure exactly why this was the case... perhaps the use of object pools prevented it, or perhaps it just isn't that big of a problem on modern 64 bit servers.

At Cloudera, we still mostly use CMS because the version of G1 shipped in JDK6 wasn't considered mature, and we only recently upgraded to JDK7. We are currently looking into defaulting to G1, but it will take time to feel confident about that. G1 is not a silver bullet anyway. You can still get multi-minute pauses with heaps bigger than 100GB. A stop-the-world GC is still lurking in wait if certain conditions are met, and some workloads always trigger it... like starting the HDFS NameNode.


Ouch, not even upgrading to JDK8? That's not really a new JVM anymore now.

G1 has improved a lot over time. What I've been writing was based on the assumption of using the latest version of it.

Yes, full stop-the-world GCs are painful, but they'll be painful in any GC. If Go runs out of memory entirely then I assume they have to do the same thing.


Go is also better about allocating on the stack (vs heap) and provides a very solid pool implementation (sync.Pool).

Both of these can dramatically reduce GC pressure too.


I'd like to see evidence for this. the JVM has been great at stack allocation for over a decade [1][2].

[1] http://www.stefankrause.net/wp/?p=64 [2] http://www.ibm.com/developerworks/library/j-jtp09275/


"Great" is pushing it a bit. The JVM will not do inter-procedural escape analysis unless the called method is inlined into the callee and so the compiler can treat it as a single method for optimisation purposes. So forget about stack allocating an object high up the call stack even if it's only used lower down and could theoretically have been done so.

That said, JVMs do not actually stack allocate anything. They do a smarter optimisation called scalar replacement. The object is effectively decomposed into local variables that are then subject to further optimisation, for instance, completely deleting a field that isn't used.

Value types will be added to the JVM eventually in the Valhalla project. Go fans may note here that Go has value types, but this is a dodge - the bulk of the work being done so far in Valhalla is a major upgrade of the support for generics, because the Java (and .NET) teams believe that value types without generic specialisation is a fairly useless feature. If they didn't do that you could have MyValueType[] as an array, but not a List<MyValueType> or Map<String, MyValueType> which would make it fairly useless. Go gets around this problem by simply not letting users define their own generic data structures and baking a few simple ones into the language itself. This is hardly a solution.


It is straightforward because Go has first-class value types, which are most likely to live on the stack, vs. Java, where everything except primitives is a reference type and most likely to live on the heap. Java data structures are also really bloated.

https://www.cs.virginia.edu/kim/publicity/pldi09tutorials/me...


Back when Java was introduced I was disappointed that they decided to ignore value types, specifically given that Cedar, Modula-3, Eiffel and Oberon variants all had them.

Also that they went VM instead of AOT like those languages main implementations.

Oh well, at least they are now on the roadmap for Java 10, 30 years later.


The golang object pool is a bit of a problem (compared to the JVM alternatives) due to lack of generics. You tend to need to do object pooling when you have tight performance requirements which is at odds with the type manipulation you have to do with the sync.Pool.

So the golang pool is good for the case where you have GC heavy but non-latency sensitive operations, but not the more general performance sensitive problems.


Have you benched "myPool.Get()" vs "myPool.Get().(*myStruct)"? I don't think the "type manipulation" is the problem you think it is.


At the x86 level, myPool.Get(Index) is going to be at least as expensive as cmp/jae/mov (3 cycles), and myPool.Get().(myStruct) is going to be at least as expensive as cmp/jae/cmp/jne/mov (5 cycles). So unless you have some way of hiding the latency, the type check is 67% slower by cycle count.

The experience of every JIT developer is that dynamic type checks do matter a lot in hot paths.


Not disagreeing, but I think that was a bit inaccurate.

If that branch is mispredicted, we're talking about 12-20 cycles. Ok, I assume it's a range check and thus (nearly) always not taken. So if it's in hot path, it'll always be correctly predicted. Modern CPUs will most likely fuse cmp+jae into one micro-op, so predicted-not-taken + mov will take 2 cycles (+latency).

"cmp/jae/cmp/jne/mov" will of course be fused into 3 micro-ops. But don't you mean "cmp/jae/cmp/je/mov"? I'm assuming second compare is a NULL check (or at least that instructions are ordered that way second branch is practically never taken). I think that also takes 2 cycles (both branches execute on same clock cycle + mov), but not sure how fused predicted-not-takens behave.

L3 miss for that mov, well... might well be 200 cycles.


Ah yeah, I wasn't sure if fusion was going to happen. You're probably right in macro-op terms; sorry about that.

The first compare is a bounds check against the array backing the pool, and the second compare is against the type field on the interface, not a null check. Golang interfaces are "fat pointers" with two words: a data pointer and a vtable pointer. So the first cmp is against a register, while the second cmp is against memory, data dependent on the register index. The address of the cmp has to be at least checked to determine if it faults, so I would think at least some part of it would have to be serialized after the first branch, making it slower than the version without the type guard.


> Ah yeah, I wasn't sure if fusion was going to happen.

Well, I didn't profile that case. Who knows what will really happen. Modern x86 processors are hard to understand.

> ... while the second cmp is against memory, data dependent on the register index

Hmm... that sounds like something that would dominate the cost? Memory access and data dependency. Ouch.

Also of course in that case, second compare+branch can't be fused, because cmp has a memory operand.


This is a small fixed cost as compared to the cost of not using the pool. I wasn't suggesting the operation is free.


But not using a pool is not the alternative we are talking about. Rather, it's hand-rolling your own every time. Something the JVM doesn't require.


>Rather, it's hand-rolling your own every time.

So your hand-rolled one will not have the typing overhead we are discussing, but it will have 2 much worse issues.

  1. sync.Pool's have thread local storage, something your own pools will not have.
  2. sync.Pool's are GC aware; meaning if the allocator is having trouble it can drain "free" pool objects to gain memory. Your custom pool will not have this integration.
I have a feeling that the performance you gain by not type-checking you will lose by not having #1.


I think you are missing my point. So I'll restate it. sync.Pool does not help with GC issues compared to the JVM because the JVM also has object pools, further those object pools are actually better for the low latency case because the language does not force them to make a choice between dynamic type checks and specific use abstractions.

[edit] As pcwalton points out. My whole argument is actually null and void due to type erasure...doh.


To be fair, though, aren't generics on the JVM type-erased? So you're going to have a type assertion at the JIT level either way.


Hopefully Java will get value types around version 10, but that is still quite far away.

In the meantime, Azul and IBM JVMs JITs are able to optimize "value types" if the classes follow certain patterns or with some annotation help.


Yeah, and they can collect the unreachable objects in the array when no references exist to the array itself.

From the QCon talk linked to in the slides it sounds like the Go GC is benefiting from the reduced number of objects being allocated and the fact that those objects can never move. Makes things a lot simpler if you can get away with it, but I can imagine a reference into an array, and the inability to move objects could combine in bad ways if you're unlucky.


In terms of what? Latency? I think Go is down to something like 10ms, and from what I understand, pressure on the GC can be relieved by using things such as `sync.Pool`.


    {{range . -}}
      <li>{{.}}</li>
    {{end -}}

This seems like a bit of a hack to be honest. Would anything break if

    {{range .}}
      <li>{{.}}</li>
    {{end}}

worked as expected?


It would become unusable for general-purpose text templating[0], and would either break interspersing dynamic text ("this is {{ name }}" with name=Bob would be rendered as "this isBob") within static text or would need a semantic understanding of HTML.

And even then the presence or absence of whitespace in HTML does have rendering impacts (though I don't remember one offhand — aside from linebreaks — it's been a long time since I last hit one), so the template author must be able to control it and importantly to keep whitespace present between static and dynamic items.

[0] html/template is a relatively thin layer over text/template with built-in XSS security where the original does "raw" output by default


I just think that a simpler solution would be to always ignore the first line break after the range begin and end tags. I can't really see the case where that would cause an issue, if you REALLY do need the "extra" line break just put it inside the range.

Whitespace shouldn't be touched: just pass it through and don't process it.

That would make the templates look more like those of Django and Jinja2, which most seem comfortable with.


Funny that you mention Jinja.

This is actually a feature that's been in Jinja forever that I've always wanted in Go's text/template

http://jinja.pocoo.org/docs/dev/templates/#whitespace-contro...


> Whitespace shouldn't be touched: just pass it through and don't process it.

That's what html/template does by default, the point of the addition is this can be inconvenient as you may want whitespace for source readability but can't have whitespace in the output.

> That would make the templates look more like those of Django and Jinja2, which most seem comfortable with.

The behaviour outlined here is the same as jinja's: output text nodes as-is by default (whitespace and all), specify `-` to trim whitespace on the corresponding side of a template item.


Whitespace inserts a text node in the DOM. I think.


Whitespace is text, yes?


The general philosophy in the Golang ecosystem is to enable powerful and expressive tools wherever possible without introducing magic. While one might typically want the latter expression to automatically strip whitespace, what happens when you do want the whitespace?

This is a problem I have in my Jade (aka Pug) templates, which trims whitespace around the contents of each element by default -- and there, I have to append `#{' '}` HTML literals at the end of lines in order to assert whitespace where I need it. But overall, the whitespace-trimming in Jade is great because it's a targeted, somewhat-opinionated tool, contrary to the Golang standard library.

If you need/want templating to work differently, though, there's no reason to not use some other template library. Or you could fork the standard library, make modifications to suit your needs, and use that instead. It's very easy to do that with Go.


The point is that it depends on your definition of "expected". I'd rather have my library act in a well-defined way that can be stated as succinctly as possible.


The problem then is what whitespace do you keep and what do you ditch? The default behavior you want is to maintain everything inside. But I agree the - tag seems like a really inelegant hack.


The default behavior has to be backwards compatible, so there was only one serious option.


What? They could have done any number of things. They could have added a parameter "-trim" that's more clear than the "-" alone. They could have made it "8<" that looks like a little pair of scissors. Or they could have actually spent some time thinking of a good idea instead of those ones that are about as a bad as the one they went with.


The idea they went with, which was one of the ones I proposed, is used in a variety of open source projects, so it had the advantage of being somewhat familiar and vetted. And what I meant to say is that they had to modify things in a backward compatible way, which means keep whitespace by default and trim with new syntax.


That ticket was created by me, and yes, treating whitespace like that would obviously break plain text templates and anything else where whitespace is significant. Remember that Go templates are for more than just html. What about generating CSV files for example? Plaintext emails? I use it for generating Go code, so the whitespace is sometimes significant (to terminate a line of code) but moreover careful control of whitespace is necessary to get readable code.


Sometimes you want the whitespace, sometimes you don't. So both are not the same.


I develop a lot of command line tools for Linux using shell scripting. The scripts are getting huge and ugly, so I have been looking at Go and it seems I can do so many things by just using the standard library and in general a big improvement over using scripting.

However, every time Go is discussed on HN I see many posts criticizing the language for various reasons, and this has put me off getting started learning Go.


I suggest just writing something in it.

A lot of the criticisms turned me off as well, but I dove in anyway.

I stopped paying attention to the criticisms when I found out how incredibly productive I was able to be in Go.

Give it a try and make your own call on it. The cognitive overhead of jumping into Go is so small compared to many other languages.


> The cognitive overhead of jumping into Go is so small compared to many other languages.

I'm not using Go now, though I have tried it out years ago. Your point is exactly why it stays on my radar for possible future use. Especially if you consider any sort of business aspect, ramping up the help. I would strongly consider using it for any serious backend project in a business environment due to the performance, ease of learning and compatibility promise.


Programming languages are kind of like religion, or one's favorite beverage/food. Everyone has their own preference/coding style. I wouldn't let negative comments on HN steer you away from learning Go. Of course, I'm biased because I write Go code and enjoy it, but there will always be folks who love something and hate something. You'll never know whether you enjoy coding in Go unless you try it for yourself.

Some of the criticisms of Go are valid and some are just haters doing what haters do--hating. Keep in mind Go is very young for a programming language. It's only about 6 years old, but its use is becoming more and more widespread as it matures.

Although you are correct in that the standard library is pretty much all you need to write CLI apps, this library is definitely useful if you're willing to pull in a third-party dependency: https://github.com/codegangsta/cli


Definitely take a look at Go. If criticism on HN is what you're concerned about, I can tell you that when I started writing Go ~3 years ago it got a lot more criticism on HN than it does now. I've used it as my primary language since then and it's a fun language.


I second taking a look at Go, but be aware that for replacing shell scripts Go has one big disadvantage: binaries tend to get rather large, because all libraries are statically linked.

I have one script that writes to Postgres, does an HTTP request and needs to read a file. 100 LOC resulted in a 7.2M binary.


I'm doing much the same as you: command line tools for Linux, Windows and OS X, and I really like Go for that use case. The incredibly easy distribution of the final tools makes life awesomely easy.


I'll never use Go myself, but criticisms on a forum are a bad reason for not using a language if you think it will be a good tool (as you seem to) for your purposes.


Are there any published benchmarks around allocation times after many garbage collection events with the golang garbage collector?

I'm wondering how it holds up without compaction and in the face of fragmented heaps.


> Changes to the language: None

Best part of Golang, more languages need to borrow this feature. Second best is how relatively easy it is to get going with it.


I get where we are...but where are we going?



I'm wondering who is using Go in production outside the US?

I've noticed a couple of e-commerce companies, and Google itself, in Singapore. A few Russian companies are doing small infra projects.

Anyone else seriously investing in Golang?


It's incredibly large in China (Baidu, Qiniu). I apologize since I haven't looked into countries specifically, but to respond to "anyone else seriously investing in Go?", I'll paste it here for you anyways.

Walmart, Apple, Facebook, eBay, Intel, Google, Mozilla, IBM, Microsoft, Red Hat, DigitalOcean, Zynga, Yahoo, BBC, VMware, Uber, GitHub, Getty Images, Twitter, Stack Exchange, Docker, SpaceX, Baidu, Qiniu, Imgur, CloudFlare, Bitbucket, Dell, Twitch, Dailymotion, bitly, Cisco, Verizon, Dropbox, Adobe, New York Times, HP, Canonical, Cloud Foundry, 99designs, BuySellAds, CoreOS, MongoDB, Basecamp, Rackspace, Booking, MalwareBytes, Kingsoft, Iron.io, OpenShift, Heroku, Square, Spring, Tumblr, Symantec, Comcast, CBS, SendGrid, Digitally Imported, Pivotal, Couchbase, Koding, Shopify, Shutterfly, MaxCDN, Linden Lab, SolarWinds, IMVU, EMC, Teradata, and I'm sure many more which I'm unaware of, are all using Go in some capacity.


How about adding a flag to the bloody compiler to allow me to compile my code with unused bits?

I refuse to touch Go until that happens. Go is the asshole of programming languages. It forces you to put code in deeply nested, annoying directories:

    vim foo.go
    vim ../../../github.com/blahblah/moreblahblah/blah.go
    sigh
But that I could live with. But constantly commenting the code out (which of course leads to commenting out even more code!) is the real PITA.

All said and done, it looks like a useful, if uninspired, language. I can't avoid it, so I may as well use it. But this nonsense has to stop!


You can use the blank identifier _ to keep an otherwise-unused import around during development:

    import _ "net"
Make sure you don't check such lines of code into your repo, because there is a concrete benefit to having the language enforce this import strictness.

But the person who would check in code with unused _ imports, would probably abuse a hypothetical "allow unused imports" flag in production. Go is probably not for that person.
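The same blank-identifier trick also covers unused variables, which is usually what drives the comment-out cascade. A minimal sketch (the names are made up):

```go
package main

import (
	"fmt"

	_ "net/http" // side-effect import: stays legal even while nothing uses it
)

// computeDebugInfo stands in for whatever produces the value you're
// temporarily not printing.
func computeDebugInfo() string { return "details" }

func main() {
	debug := computeDebugInfo()
	_ = debug // silences "declared but not used" while the print below is disabled
	// fmt.Println(debug)
	fmt.Println("ok")
}
```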


Most Go developers just use goimports. So really this "unused import" warning is a "you forgot to push the magic fix-it button." (Why didn't it push it for me?)

Or more to the point, if Go can figure out imports, why require them to be explicit at all?


What do people think of Go's new(ish) vendoring?


Does Go have an interactive debugger yet?


yes yes, you wrote an AI that beat Fan Hui, we heard, stop bragging


[flagged]


> If there's a guy with the king midas touch of shit, it's Pike.

Such personal attacks are a bannable offense on Hacker News. Please don't ever post anything like this again.

Generic programming language flamewar comments are a kind of trolling and also not wanted here.


The value of Go is:

1) Reasonably fast (better than Node at CPU-bound tasks, faster than Ruby/Python)

2) Statically compiled

3) Great tooling (except package management, but it's getting better)

If you want a reasonably fast statically compiled language, what would you use?

Java? Lots of JEE/app server issues to consider.

TypeScript? I'm a fan, but it's compile-time type checking, not runtime type checking.

Typed Clojure? Too obscure.


Please keep in mind I may have completely misinterpreted your comment.

> Java? Lots of JEE/App server issues to consider

Why? Why not just package as an assembly, scp it up to a server, fire up Netty in a main-class and call it a day (this is basically all Play Framework does)?

It's going to be faster/simpler in every way than any Ruby/Nginx stack you can imagine. No need to fight with matching cores to processes per deployment, no unnecessary proxies or additional services to watch/manage.

Sure you can go crazy. Or maybe even need to for some use case, but even in that scenario you can bet your business on the fact that it'll still be a far simpler solution than the comparable dynamic stack.

Not that I'm advocating Java exactly. I just feel like if deployment/server-issues are an issue, compared to a dynamic stack with similar throughput, then you've done something horribly horribly nightmarishly wrong. ;-)


Sorry, I used to work at IBM and in my mind Java is WebSphere and a lot of suffering.

Certainly, there are other simple ways to use the Java stack with things like Play. DropWizard, etc.

But there is certainly a lot more complexity around tooling and deployments in the Java ecosystem than in either Go or Node.js (although I can't say I'm a Java expert, so it could just be that I'm wrong).


I can totally understand that. Feels a bit unfair to paint an entire platform with the same brush if you haven't tried the more modern alternatives though.

This is a full HelloWorld web-server in Spray: http://scastie.org/14680

Play can get you to Hello World even easier:

  $ brew install typesafe-activator
  $ activator new my-first-app play-scala
  $ activator run
That's going from an off-the-shelf Mac with only homebrew installed to running your first Play app. To deploy it just run the `dist` task instead, and copy the generated .zip file to a server. Unzip it and run `./bin/my-first-app`. The only dependency you have on that system is Java.

Stick a load-balancer in front and you're good to go.

It's light-years beyond anything I've ever experienced on any other platform on the *nix side of things. Especially since it's also so much faster than any other platform I've worked with (decade-old C# doesn't count).


With Spring Boot (not saying Spring is one of the "uncomplex" parts of the Java ecosystem) build and deployment goes like this: `gradle build`, `scp` the jar file to a server and then you can symlink it to `/etc/init.d/` and just use it as a service.


>If you want a reasonably fast statically compiled language, what would you use?

I'd go for Rust.


Rust seems like a nice systems language, but is it really best suited for non-systems programming? I may be too risk averse, but it seems too early to make that bet at this point.


I'm learning Rust by writing a toy compiler, definitely a high-level task, and I'm finding that Rust is pretty good for that. Some highlights:

- Algebraic datatypes and pattern matching: extremely nice for compiler work where you need to manipulate symbol data.

- Result + try! macro: a bit more verbose than a language that can let an exception bubble up, but definitely nicer than the boilerplate that people in Go must use.

- Module system: easy to expose only what you want.

- Cargo: one of the best, maybe the best, language package manager that I've used. Super easy to add dependencies to your project.

I will say this, however: I'm moving more slowly than I would in a language like OCaml. I still haven't assimilated all of the borrow checker rules into my programming subconscious, so I still make errors, and it can take a little while to figure out what's happening and how best to address the problem.


Works fine for the web needs.


D.

* C/C++-like familiar syntax
* As fast as C++ (faster in some cases, slower in others)
* Type inference (auto keyword)
* Garbage collected by default
* Multiple compiler implementations (DMD, GDC, LDC)
* Debugging symbol support
* Package management with versioning and a package repo
* IDE/editor support - Visual Studio/Xamarin/Emacs
* Very easy C FFI
* Compile-time function evaluation
* Metaprogramming
* A concurrency story that is more "mainstream" than Go's (no "goroutines" and other fancy stuff)
* Multiple new books on the topic

D has a lot going for it, except for breathless fans :) (not knocking breathless fans here. D can use some)


> If you want a reasonably fast statically compiled language, what would you use?

Anything that is C ABI compatible. I'm sympathetic to Go's philosophy, but Go is not that.

In general, I'd be on the lookout for Swift on the server in the next few years. It has the backing of Apple, is compiled, and the possibility of removing yet-another-language from your stack. Since many shops want or need an iOS frontend, running Swift on the server makes sense.


> If you want a reasonably fast statically compiled language, what would you use?

Nim. More similar to Go than most languages are to each other, but Nim has (a) more modern features, (b) better compatibility/integration with C libraries, and (c) an amazing macro system. The only thing in Go's favor is the size of its community/ecosystem, which seems to be an accident of timing and most definitely not about inherent quality.


Nim doesn't have interfaces. The amount of plumbing required to emulate them is ridiculous. It does have generics, though. My point is: how can recent OO languages miss this kind of thing when designing their type systems? Crystal looks like a better bet, but unfortunately it doesn't run on Windows.


At risk of sounding like a Go advocate when generics come up, the lack of interfaces doesn't bother me because interfaces themselves are a bit of a hack. When you have inheritance, object variants, generics, and a strong template/macro system, what's left for interfaces to do? I'd much rather have all of these other things than interfaces alone.


The fact that you can create an interface, and an unrelated library (which you didn't write) can have structures that implement your interface automatically, is pretty useful (and cool) IMHO.


Fair point, but much of that benefit is lost without decent support for those third-party implementations to be loaded dynamically. So Nim is missing one piece, but those other features I mentioned can get you very close to the same place. Go is missing the other piece, with no really good way to make up for it. That problem's not even solvable as long as Go's runtime makes no provision for interfacing to code that doesn't play by its own (ever-changing, undocumented) rules about things like goroutines and GC. I know which shortcoming I'd rather live with.


> Go's runtime makes no provision for interfacing to code that doesn't play by its own

This alone makes Go really unfit for developing system libraries.


There is very little chance I could sell a CEO on using Nim. The fact of the matter is that the community and ecosystem are parts of the entire platform and the platform matters a huge deal when you're betting your business on it. Accident or not.


Not disagreeing with you, but that only matters if you need to sell a CEO on your choice of language. Many of us don't, and every language had to cross that chasm somehow. Once upon a time, there were plenty of CEOs who didn't believe you should use Java for anything that mattered. Ditto for Javascript. Most relevantly, ditto for Go. If size of the existing community and ecosystem were the main criterion for choosing a language, Go would still be in the same bucket as Nim. Somebody has to use a language because of its own inherent strength before the CEOs can be convinced.


Agreed, I think timing with Go is a huge factor. Though I think the size and scope of Go's popularity is overstated. I looked into it for a recent web project and found many formerly popular libraries now unmaintained. I felt more confident and went with Django on Python 3.5.

If I want something compiled, the killer for Go for me has always been C ABI incompatibility. It's its own island. I'm not saying I wouldn't consider using Go for some things, but that has been a big red flag vs other choices.


Nim is fun but it's not even at version 1 and has a very small developer and user base.

"The only thing in Go's favor ..."

The selectivity of a true ideologue.


It's the Go fanatics who are the ideologues here. Anything Go has is important, anything it doesn't have is trivial. What a horrible attitude to bring into any kind of technical discussion.

I've defended Go against detractors who say it's worthless because it has mandatory GC or doesn't have generics (which I do consider a weakness but not a fatal one). There's nothing particularly wrong with Go, but there's not all that much special about it either and it does have flaws. Can you name some other feature that (a) distinguishes Go from Every Other Language and (b) matters? Something that might sway a CEO, as we were discussing? "Goroutines" perhaps? Riiight. That CEO will be really impressed. Don't project your bias onto others.


Great tooling? Does Go have any quality IDEs with integrated debuggers yet?

I feel that open source developers mean something entirely different with the phrase "great tooling" than developers used to Visual Studio would mean. :)


Absolutely! They do not mean a big GUI-based IDE when they say tooling. In fact, this is why Go will remain an unviable option for .NET developers. Go is likely to be much more popular among dynamic-language users and even some Java developers who are tired of enterprisey bloat.


https://github.com/visualfc/liteide exists with, basically, many of the same things as you see people augment vim, etc. with (autocomplete, format on save, etc.) but a GUI editor. It has interactive debugging, but I don't use it so I dunno how it is. I can't promise you'll like it, but I dig it for some uses and it's there. The GUI itself seems to be maintained by one developer and as such it is unlikely to ever have full feature parity with Visual Studio. :)


> In fact, this is why Go will remain an unviable option for .NET developers.

Why should we even bother with Go, if we have C#, F#, JIT, AOT compilation to dynamic binaries (NGEN), AOT compilation to static binaries (.NET Native), NuGET already available?

And yes, I do know Go and even tried some early contributions before the 1.0 release.

For me it is only a step forward for C developers.

Still a very important one, as we need more widespread use of safer compiled languages.


Does Go even have any good command-line debuggers? All of the things I've seen either involve insanity like preprocessing your source to inject hooks between each line of code, or don't actually work.


Delve is an open-source debugger for Go, and many editors, such as Visual Studio Code, can work with it. There is an animated GIF of the debugger on the vscode-go page. https://github.com/derekparker/delve https://github.com/Microsoft/vscode-go


Due to the way Delve rewrites Go source code and relies on having it available, I wouldn't call it a debugger.


> Am I in the minority to think of Go as a completely unnecessary move-along-now-nothing-to-see-here project?

Ignoring your broadside of insults, I keep asking myself the same thing after working with it: What is so compelling about Go? It's better than C, but we already have good choices in that area. I feel like there's a bandwagon effect around the language.


Personally, it's replaced Python and Ruby for my web development. I like the compile-time typechecking and the ability to annotate structs and then use those annotations as direct rules for serializing data via XML / JSON serializers. The compile-time typechecking means I don't need 100% coverage to make sure I haven't typo'd a variable name somewhere.

But if you weren't using Python or Ruby for your web development, YMMV.


I see Go as having a nice niche for "systems" code that is above C but more low-level than, say, Python. I think that is why we see it used in Docker, Kubernetes, etc. The ability to distribute binaries is quite handy for containers.

I am surprised that people want to use Go for web apps though.

For whatever reason, Dart seems unpopular with the HN crowd, which is a shame because it is a super productive language. It has a fast VM, great tooling, a sane package management system, and a lot of deployment options. Server-side libraries are still lacking, but the situation is improving.


> I am surprised that people want to use Go for web apps though.

This. I'd put it under consideration for where I'd pick Java (an increasingly limited space now for myself), but I really do not think it competes directly with Python/Ruby or C-ABI compatible compiled languages like Rust. It's on an island like Java. As a result, you're going all-in with Go, but netting a much smaller ecosystem than Java's. I prefer to not go all-in with either.

If I were to reach for Go for webapps, I'd be just as if not more inclined to look at Swift as it matures.


The language enforces simplicity (making it easy to read), the standard library is dope, and the runtime is fast/robust.


> Go as a completely unnecessary move-along-now-nothing-to-see-here project

I don't like Go's type system, but it handles concurrency quite nicely. It's fairly easy to learn and use, so I can see why managers chose it. And it's backed by Google, so it can give a sense of safety and be used as an argument to sell the language. So it's a bit unfair to say it's unnecessary. It can help when performance is a concern and one doesn't want to go down to C or C++ for some reason.
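By "handles concurrency nicely" I mostly mean goroutines and channels are cheap and built in. A toy sketch:

```go
package main

import "fmt"

// sumSquares fans the work out to one goroutine per element and
// collects the results on a channel.
func sumSquares(parts []int) int {
	results := make(chan int)
	for _, p := range parts {
		go func(v int) { results <- v * v }(p) // each piece runs concurrently
	}
	total := 0
	for range parts {
		total += <-results // order of arrival doesn't matter for a sum
	}
	return total
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3, 4})) // 30
}
```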


Devil's advocate:

> but it handles concurrency quite nicely

So does Erlang/Elixir, Haskell, Clojure.

> And it's backed by Google so it can give a sense of safety

How is Dart doing in this department?


Dart is doing fine.

It has not replaced Javascript (that was never going to happen) but it has seen a slow but steady growth in adoption.

There are some amazing things happening with Dart on mobile (flutter.io), Dart on embedded (fletch), and improved JS interop with things like the Dart Dev Compiler (DDC).

Angular 2 is still in Beta, but has first class support for Dart.

Perhaps it did not live up to its initial hype, but Dart is a pretty "safe" bet.


What would you suggest?

aside: Your history is riddled with inflammatory comments... I like outside-the-box discussion. I gave you an upvote and I really like Python.


Yeah, you're in the minority. Technology is more or less a popularity contest. I've seen plenty of really good programming languages that simply die out because they don't have high-profile personalities behind them. And there's plenty of "meh" or even terrible languages that make it big because the hype is strong with this one. It's not great but at the end of the day, bits are bits and bytes are bytes, most programming languages are all pretty decent ways to manipulate them. You'd have to do pretty badly to go so terribly wrong that nobody should ever use your language, and I don't think I've ever seen a language that bad.


Brainfuck wants a word with you


I thought esoteric languages were obviously excluded :P


[flagged]


Because when a man goes, no one looks at him funny.


Yes, but at the same time, women are tired of being treated like they're special. All of my friends who are women and engineers (or similar) would much rather be called an "engineer", not a "woman engineer". They HATE that. Treat them as equals. By giving them all these special titles and groups, the message somewhat becomes, "hey, you can't really compete with us, so we gave you special groups where you can thrive and feel good about yourself." Actually talk to the majority of women out there; they don't want that. They want to compete in the same field as men.


This is why I posed the original question. Treating somebody "differently" is discriminatory. We should deal with those that look funny at women and not treat women in a special manner.


The claim I hear when I mention this to feminists is that "Every other day is a men's day".

Or I get the

"But they are intimidating and I'm scared to go."


Why someone should look funny at a woman?


is this kind of like "why is there a black history month and not a white history month"?



so... it is?


" I just think that treating them "differently" is not the solution."

They are already treated differently ... duh.


everybody goes.


Oh, that Go.


On FF 44.0:

- The header is cut off

- Slide 20 gives an error when running [c: template: redefinition of template "list"]

- Stable sort example does not seem to work. Gives same output as regular sort


From the third slide:

> Most of the code examples won't run except locally and using Go 1.6.

> The playground still runs Go 1.5.


Ah... thanks. I think I rushed past the preamble slides and missed that.



