
> - Restricting the language to its target use: systems programming, not applications or data science or AI...

Go has a GC and a very heavy runtime with green threads, leading to cumbersome/slow C interop.

It certainly isn't viable as a systems programming language, which is by design. That's an odd myth that has persisted ever since the language described itself as such in the beginning. They removed that wording years ago, I think.

It's primarily a competitor to Java et al., not to C or Rust, and you see that when looking at the domains it is primarily used in, although it tends to sit a bit lower on the stack due to the limited nature of its type system and its great support for concurrency.



Man, arguments about the definition of "systems programming" are almost as much fun as the old "dynamic" vs "static" language wars.


> the old "dynamic" vs "static" language wars.

We used to argue about dynamic vs static languages. We still do, but we used to, too.


then there's this https://en.wikipedia.org/wiki/Dynamic_programming

which has nothing to do with types nor variables but with algorithm optimization


The trick is to throw memory at it. If memoization helps, that'll work without the memory hit!


RIP Mitch, you were one of the greats.


:) ...Mitch


IIRC, Google tried to use Go for their Fuchsia TCP stack and then backtracked. Not a systems programming language for sure.


Sure it backtracked, because the guy pushing for Go left the team, and the rest is history.

Is writing compilers, linkers, IoT and bare metal firmware systems programming?


I worked on Fuchsia for many years and maintained the Go fork for a good while. Fuchsia shipped the gVisor-based (Go) netstack to Google Home devices.

The Go fork was a pain for a number of reasons. Some were historical, but more deeply, the plan for fixing them was complicated by the runtime making fairly core architectural assumptions that the world has fds and epoll-like behavior. Those constraints cause challenges even for current systems, and even for Linux, where you may not want to be constrained by that anymore. Eventually Fuchsia abandoned Go for new software because the folks hired to rewrite the integration ran out of motivation to do so, and the properties of the runtime as written presented atrocious performance on a power/performance curve - not suitable for battery-powered devices. Binary sizes also made integration into storage-constrained systems more painful, and without a large number of components written in the language to bundle together, the build size is too large. Rust and C++ also often produce large binaries, but those can be substantially mitigated with dynamic linking, provided you have a strong package system that avoids the ABI problem, as Fuchsia does.

The cost of crossing the cgo/syscall boundary remains high, and it got higher over the time that Fuchsia was in major development due to the increased cost of Spectre and Meltdown mitigations.

The cgo/syscall boundary cost shows up a lot in my current job too, where we do things like talk to sqlite constantly for small objects or shuffle small packets at or below common MTU sizes. Go is slow at these things in the same way that other managed runtimes are - for the same reasons. It's hard to integrate foreign APIs unless the standard library has already integrated them into the core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this by which Go has a high cost of implementation for lower-level problems - problems that involve high-frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers in or out of the program and do lots of work on them, and when your concurrency model fits the channel/goroutine model OK. If you have a problem that involves higher-frequency operations, or more interesting targets, you'll find the lack of broader atomics and the inability to cheaply or precisely schedule work problematic.


All valid reasons. However, as the USB Armory's bare-metal Go unikernel proves, had the people behind Go's introduction stayed on the team, battling for it, maybe those issues would have been sorted out with Go still in the picture instead of a rewrite.

Similar to Longhorn/Midori versus Android: on one side, Microsoft WinDev politics managed to kill any effort to use .NET instead of COM/C++; on the other side, Google teams collaborated to actually ship a managed OS, nowadays used by billions of people across the world.

In both cases, politics and product-management vision won over the relevance of the respective technical stacks.

I always take "A is better than B" arguments with a grain of salt when they rest on technical matters alone.


I see you citing the USB Armory a lot, but I haven't yet seen any acknowledgement that it too is a Go fork. Not everything runs on that fork; some things need patching.

It's interesting that you raise collaboration points here. When Russ was getting into the Go modules design, he reached out and I made time for him, giving him a brain dump of knowledge from working on Ruby gems for many years and the Bundler introduction into that ecosystem and the forge deprecation/Gemcutter transition, plus insights from having watched npm and Cargo due to adjacencies. He took a lot of notes, and things from that showed up in the posts and design. When Fuchsia was starting to rumble about dropping Go, I reached out to him about it, hoping to discuss some of the key points - he never got back to me.


It is written in TamaGo, originally developed by people at F-Secure.

I don't see the issue with it being a fork; plenty of languages have multiple implementations, with various degrees of pluses and minuses.

As for the rest, thanks for sharing the experience.


I don’t recall anything but a single definition of the term until Google muddied the waters.


I totally agree that Go is best suited outside of systems programming, but to me that always seemed like a complete accident - its creators explicitly said their goal was to replace C++. But somehow it completely failed to do so, while simultaneously finding enormous success as a statically typed (and an order of magnitude faster) alternative to Python.


It may help to understand the context. At the time Go was created you could choose between three languages at Google: Python, C++ and Java.

Well, to be honest, if you chose Python you were kind of looked down on as a bit of a loser (*) at Google, so there were really two languages: C++ and Java.

Weeeell, to be honest, if you chose Java you would not be working on anything that was really performance critical so there was really just one language: C++.

So we wrote lots and lots of servers in C++. Even those that, strictly speaking, didn't have to be very fast. That wasn't the nicest experience in the world. Not least because C++ is ancient and the linking stage would end up being a massive bottleneck during large builds. But also because C++ has a lot of sharp edges. And while the bro coders would never admit that they were wasting time looking over their shoulder, the grown-ups started taking this seriously as a problem. A problem major enough to warrant having some really good people look into remedying it.

So yes, at Google, Go did replace lots of C++ and did so successfully.

(*) Yes, that sentiment was expressed. Sometimes by people whose names would be very familiar to you.


> At the time Go was created you could choose between three languages at Google: Python, C++ and Java.

Out of curiosity, what languages can you currently choose from at Google?


Just you guess: Python, C++ and Java… and Go.

Or JavaScript, Dart, Objective-C, Swift, Rust, even C#. But then it depends on the problem domain. Google is huge, so it depends. And that's true even if you pick Python, C++, Java or Go. Your team will already have decided it for you.


I haven't worked there for a long time so I wouldn't know. I don't even know if they are still as disciplined about what languages are allowed and how those languages are used.

Can someone still at Google chime in on this?


(Sorry about the asterisk causing everything to be in italics. I forgot about formatting directives when adding the footnote.)


I initially thought it was a stylistic choice to make the word "loser" a function invocation.


Go appears to be made with radical focus on a niche that isn't particularly well specified outside the heads of its benevolent directorate for life. Opinionated to the point of "if you use Go outside that unnamed niche you've got no one to blame but yourself". Could almost be called a solution looking for a problem. But it also appears to be quite successful at finding problem-fit, no doubt helped by the clarity of that focus. They've been very open about what they consider Go not to be, or ever become - unlike practically every other language; the rest all seem to eventually fall into the trap of advertising themselves with what boils down to "in a pinch you could also use it for everything else".

It's quite plausible that before Go, its creators would have chosen C++ for problems they consider in "The Go Niche". That would be perfectly sufficient to declare it a C++ replacement in that niche. Just not a universal C++ replacement.


Consider this: the authors fixed some of Plan 9's design errors, including the demise of Alef, by creating Inferno and Limbo (yes, it was a response to Java OS, but still).

In Inferno, C is only used for the kernel, the Limbo VM (with a JIT), and little else, like the Tk bindings; everything else is written in Limbo.

Replace Limbo with AOT compiled Go, and that is what systems programming is in the minds of UNIX, Plan 9 and Inferno authors.


>its creators explicitly said their goal was to replace C++

Nowadays when we say "C++" we mostly mean work that should be replaced by Rust, but back then it wasn't like that.

I would argue that Go successfully replaced C++ in specific domains (networking, etc.) and changed your perspective on what "C++" means.


That's nothing new, Java successfully replaced C++ in enterprise code in the mid-to-late 1990s. Because it was safe from memory bugs.


Mid 2000s in my experience. And not because it was safe from memory bugs so much as safe from memory leaks. Still had plenty of NPEs.


Java kind of gets around the memory leak problem by allocating all of the leak up front for the JVM. ;)


I'm a JVM guy, but this is a good one :-)


NPE isn't a memory corruption bug.


Java is safe from a class of memory bugs and leaks

It is still possible to leak resources in Java

You have to fool the GC but it is surprisingly easy to do.


Those are safe.

And it’s not like Go didn’t just copy nulls (plus even has shitty initialization problems now, e.g. with make!)


Safer, I'd say, but it also introduced its own issues. Its type system and - maybe more importantly - its ecosystem around things like unit testing and static analysis were, at the time, a few leaps ahead of C++, making it a favorite for enterprise systems and codebases where getting 5 SIG stars is a badge of honor and/or requirement.

Java is easier than C++ as well, harder to mess things up. That said, Go feels easier again than Java because it got rid of a lot of cruft.


Except it did replace C++ in the domains it claimed it would replace C++ in. It made clear from day one that you wouldn't write something like a kernel in it. It was never trying to replace every last use of C++.

You may have a point that Python would have replaced C++ in those places instead if Go had never materialized. It was clear C++ was already on the way out, with Python among those starting to push into its place around the time Go was conceived. Go found its success in the places where Python was also trying to replace C++.


I'm not sure what you were meaning by "it".

The main domain the original team behind Go were aiming at was clearly network software, especially servers.

But there was no consensus on whether kernels could be a goal one day. Rob Pike originally thought Go could be a good language for writing kernels, if they made just a few tweaks to the runtime[1], but Ian Lance Taylor didn't see real kernels ever being written in Go[2]. In the pre-release versions of Go, Russ Cox wrote an example minimalistic kernel[3] that could directly run Go (the kernel itself is written in C and x86 assembly) - it never really went beyond running a few toy programs and eventually became broken and unmaintained, so it was removed.

[1]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...

[2]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...

[3]: https://github.com/golang/go/tree/weekly.2011-01-12/src/pkg/...


You might like Biscuit: https://github.com/mit-pdos/biscuit

It was specifically written to measure the performance overhead of a high-level language in a kernel, so it explicitly makes the same design choices as Linux: monolithic, POSIX-compatible.

Results look pretty good.


What domains are those? It seems to mostly be an alternative to what people have use(d) Java or C# for.


The original Go announcement spells it all out pretty nicely.


> You may have a point that Python would have replaced C++ in those places instead if Go had never materialized.

I don't think Python was starting to occupy C++ space; they have entirely different abilities. Of course, I am also glad it didn't happen.


I don't think so either, but as we move past that side tangent and return to the discussion, there was the battle of the 'event systems'. Node.js was also created in this timeframe to compete on much the same ground. And then came Go, after which most contenders, including Python, backed down. If you are writing these kinds of programs today, it is highly likely that you are using Go, Node.js, or some language that is even newer than Go (e.g. Rust or Elixir). C++ isn't even on the consideration list anymore.


> its creators explicitly said their goal was to replace C++

I think that is a far clearer goal if you look at C++ as it is used inside Google. If you combine the Google C++ style guide and Abseil, you can see the heritage of Go very clearly.


> ...finding enormous success as a statically typed (and an order of magnitude faster) alternative to Python.

An alternative to the Node.js/{Java|Type}Script catastrophe.

I have not written any Go, but I have been dragged into Node.js programming, and it is awful.

I do not believe in rewriting systems (except in exceptional circumstances that do not apply) so I'm stuck. But for someone else's choice...


Interesting that you say Node programming is a catastrophe, I think it’s fantastic.


I have used Node a lot.

And many other systems over the decades.

Node is catastrophic because it perpetuates mistakes made fifty years ago, where other modern systems (looking at Go and Rust) learnt from those past mistakes and chose not to repeat them.

If Node were not popular it would not be catastrophic


Which mistakes?


Here's a classic tale of Go replacing C++: https://go.dev/talks/2013/oscon-dl.slide#1


It's not "an accident", and Go didn't "somehow" fail to replace C++ in its systems programming domain. The reason Go failed to replace C and C++ is no mystery to anyone: a mandatory GC and a rather heavyweight runtime.

Where the performance overhead of having a GC was less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: command-line tools and network programs.

Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean it in the way most of us interpreted them.

1. The Go team members were using "systems programming language" in a very wide sense that includes everything that is not scripting or the web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages are languages that REAL programmers use, unlike those 'scripting languages'"). Ironically, this usage seems to originate with John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).

Ousterhout's definition of "system programming language" is: Designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, often provide higher-level facilities such as objects and threads.

Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints, or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing is optional there, but so is it in Java (Object) and C (void*, casting). When we talk about the archetypical "strongly typed" language today, we would refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used for writing applications from scratch back in 2009, and far from being ill-adapted to writing data structures and algorithms from scratch, Python became the most common language universities use for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although the GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.

The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software means operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors, or even complex beasts like a web browser. The closest software Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by those features, which are implemented in C.

During the first years of Go, the Go team was confronted on golang-nuts by people who wanted to use Go for writing systems software, and they usually evaded answering these questions directly. When pressed, they would admit that Go was not ready for writing OS kernels, at least not yet[4][5][6], but that the GC could be disabled if you wanted to[7] (of course, there isn't any way to free memory then, so it's rather moot). Eventually, the team came to the conclusion that disabling the GC is not meant for production use[8][9], but that was not apparent in the early days.

Eventually the references to a "systems language" disappeared from Go's official homepage, and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].

In hindsight, I think the main "systems programming task" that Rob Pike and other members at the Go team envisioned was the main task that Google needed: writing highly concurrent server code.

2. The Go team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management, and different programmers using different subsets. I couldn't find any claim from the Go team that Go was meant as a general replacement for C and C++, but the media and the wider programming community generally took Go as a replacement language for C and C++.

When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.

Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but they were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.

Popular static languages like Java and C# were also inconvenient, but in a different way. Both were fully capable of powering high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment were also an afterthought. Java had Maven and Ivy, and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with a classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.

The mass migration of dynamic-language programmers to Go was surprising for the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine tuning, and a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation), and other important tools ("govet", "goprof" and "goinstall", which was later broken into "go get" and "go install") were added within one year of its initial release.

The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.

The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team were thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies match what you needed was practically impossible without essentially packaging your app for every single OS and distro out there or relying on the user to be a to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VM, so they were horrible candidates even if you'd solve the dependency issues. So the best solution was usually C or C++ with either the "./configure && ./make install" pattern or making a static binary - both solutions were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, so all you add to do is package a static binary.

[1]: https://opensource.googleblog.com/2009/11/hey-ho-lets-go.htm...

[2]: https://web.archive.org/web/20091114043422/http://www.golang...

[3]: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...

[4]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...

[5]: https://groups.google.com/g/golang-nuts/c/BO1vBge4L-o/m/lU1_...

[6]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...

[7]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/M9r1...

[8]: https://groups.google.com/g/golang-nuts/c/qKB9h_pS1p8/m/1NlO...

[9]: https://github.com/golang/go/issues/13761#issuecomment-16772...

[10]: https://go.dev/talks/2011/Real_World_Go.pdf (Slide #25)


I'm impressed. That's the most thorough and well-researched comment I've ever seen on Hacker News. Thank you for taking the time and effort to write it up.


Thank you! I really appreciate it, since it did take a while to write ;)


It compares NuGet with Maven, calling the former cumbersome. It's a tell of gaps in the research, but also a showcase of the overarching problem where C# is held back by people bundling it together with Java and the issues of Java's ecosystem (because NuGet is excellent and on par with Cargo's crates).


NuGet was only released in 2010, so I wasn't really referring to it. I was referring to Maven the build system (the build tool part, not the Maven/Ivy dependency management, which was quite a breeze) and MSBuild. Both required wrangling verbose XML and understanding a lot of syntax (or letting the IDE spew out everything for you and then getting totally lost when you need to fix something or go beyond what the IDE UI allows). If anything, MSBuild was somewhat worse than Maven, since the documentation was quite bad, at least back then.

That being said, I'm not sure if you used NuGet in its early days, but I did, and it was not a fun experience. I remember that the NuGet project used to get corrupted quite often and I had to reinstall everything (and back then there was no lockfile, if my memory serves me right, so you'd end up getting different versions).

In terms of performance, ASP.NET (not ASP.NET Core) was as bad as contemporary Java EE frameworks, if not worse. You could make a high performance web server by targeting OWIN directly (like you could target the Servlet API with Java), but that came later.

I think you are the one bundling things together here: you are confusing the current C#/.NET Core ecosystem with the way it was back in the .NET 4.0/Visual Studio 2008 era. Windows-centric, very hard to automate through the CLI, XML-obsessed, and with rather brittle tooling.

C# did have a lot of good points over Java back then (and certainly now): a less verbose language, better generics (no type erasure), lambda expressions, extension methods, LINQ, etc. Visual Studio was also a better IDE than Eclipse. I personally chose C# over Java at the time (when I could target Windows), but I'm not trying to hide the limits it had back then.


Fair enough. You are right, and I apologize for the rather hasty comment. .NET in 2010 was a completely different beast and an unlikely choice in that context. It would be good for the industry if the perception of that past were not extrapolated onto the current state of affairs.


I agree. As someone unfamiliar with Go's history, that was incredibly well written. It felt like I cathartically followed Go's entire journey.


> Systems software are either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features which are implemented in C.

The Android GPU debugger, the USB Armory bare-metal unikernel firmware, the Go compiler, the Go linker, bare metal on maker boards like Arduino and ESP32.

> Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought. Java had Maven and Ivy and .Net had NuGet (in 2010) and MSBuild, but these where quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server is compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.

Usually a problem only for those who refuse to actually learn the Java and .NET ecosystems.

Still doing great after 25 years, now being copied with the VC ideas to sponsor Kubernetes + WASM selling startups.


Unfortunately I have to quibble a bit, although bravo for such a high effort post.

> When you read through the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers

I worked at Google from 2006-2014 and I wouldn't agree with this characterisation, nor actually with many of the things Rob Pike says in his talk.

In 2009 most Google web servers (by unique codebase, I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++, like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java; for example, the Gmail and Accounts frontends were written in Java, but the spam filter was written in C++.

Rob's talk is frankly somewhat weird to read as a result. He claims to have been solving Big Problems that only Google had, but AFAIK nobody in Google's senior management asked him to do Go, despite a heavy investment in infrastructure. Java and C++ were working fine at the time, and issues like build times were essentially solved by Blaze (a.k.a. Bazel) combined with a truly huge build cluster. Blaze is a command-line tool written in ... drumroll ... Java (with a bit of C++, IIRC).

Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all. Google servers were all heavily multi-threaded and async at that time, and every server exposed a /threadz URL on its management port that would show you the stack traces of every thread (in both C++ and Java). I have clear memories of debugging race conditions in servers there, well before Go existed.

> The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought.

Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.

At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.

I was on-call for Java servers at Google at various points. I don't remember GC being a major issue even back then (and nowadays modern Java GC is far better than Go's). It was sometimes a minor issue requiring tuning to get the best performance out of the hardware. I do remember JITC being a problem because some server codebases were so large that they warmed up too slowly, and this would cause request timeouts when hitting new servers that had just started up, so some products needed convoluted workarounds like pre-warming before answering healthy to the load balancer.

Overall, the story told by Rob Pike about Go's design criteria doesn't match my own recollection of what Google was actually doing. The main project Pike was known for at Google in that era was Sawzall, a Go-like language designed specifically for logs processing, which Google phased out years ago (except in one last project where it's used for scripting purposes, and where I heard the team has now taken over maintenance of the Sawzall runtime - that project was written by me, lol, sorry guys). So maybe his primary experience of Google was actually writing languages for batch jobs rather than web servers, and this explains his divergent views about what was common practice back then?

I agree with your assessment of Go's success outside of Google.


This. I worked at Google around the same time. Adwords and Gmail were customers of my team.

I remember appreciating how much nicer it was to run Java servers, because best practice for C++ was (and presumably still is) to immediately abort the entire process any time an invariant was broken. This meant that it wasn't uncommon to experience queries of death that would trivially shoot down entire clusters. With Java, on the other hand, you'd just abort the specific request and keep chugging.

I didn't really see any appreciable attrition to golang from Java during my time at Google. Similarly, at my last job, the majority of work in golang was from people transitioning from Ruby. I later learned a common reason to choose golang over Java was confusion about the Java tooling / development workflow. For example, folks coming from Ruby would often debug with log statements and process restarts instead of just using a debugger and hot patching code.


Yeah. C++ has exceptions but using them in combination with manual memory management is nearly impossible, despite RAII making it appear like it should be a reasonable thing to do. I was immediately burned by this the first time I wrote a codebase that combined C++ and exceptions, ugh, never again. Pretty sure I never encountered a C++ codebase that didn't ban exceptions by policy and rely on error codes instead.

This very C oriented mindset can be seen in Go's design too, even though Go has GC. I worked with a company using Go once where I was getting 500s from their servers when trying to use their API, and couldn't figure out why. I asked them to check their logs to tell me what was going wrong. They told me their logs didn't have any information about it, because the error code being logged only reflected that something had gone wrong somewhere inside a giant library and there were no stack traces to pinpoint the issue. Their suggested solution: just keep trying random things until you figure it out.

That was an immediate and visceral reminder of the value of exceptions, and by implication, GC.


> In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.

Thank you. I don't know much about the breakdown of different services by language at Google circa 2009, so your feedback helps me put things in focus. I knew that Java was more popular than the way Rob described it (in his 2012 talk[1], not this essay), but I didn't know by how much.

I would still argue that replacing C and C++ in server code was the main impetus for developing Go. This would be a rather strange impetus outside of a big tech company like Google, which was writing a lot of C++ server code to begin with. But it also seems that Go was developed quite independently of Google's own problems.

> Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all.

I can't say anything about Google, but I also found that statement baffling. If you wanted to develop a scalable network server in Java at that time, you pretty much had to use threads. With C++ you had a few other alternatives (you could develop a single threaded server using an asynchronous library such as Boost ASIO, for instance), but that was probably harder than dealing with deadlocks and race conditions (which are still very much a problem in Go, the same way they are in multi-threaded C++ and Java).

> Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.

Yes, I am aware of that part, and it makes it clearer for me that Go wasn't trying to solve any particular problem with the way Java was used within Google. I also don't think Go won over many experienced Java developers who already knew how to deal with Java. But it did offer a simpler build-deployment-and-configuration story than Java, and that's why it attracted many Python and Node.js developers where Java failed to do so.

Many commentators have mentioned better performance and fewer errors with static typing as the main attraction of Go for dynamic-language programmers, but that cannot be the only reason, since Java had both of these long before Go came into being.

> At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.

Frankly speaking, GC was a more minor problem for people coming from dynamic languages. The main issue for this type of developer is that the GC in Java has to be configured: in practice, most of the developers I've worked with (even seasoned Java developers) do not know how to configure and benchmark the Java GC, which is quite an issue.

JVM warmup was and still is a major issue in Java. New features like AppCDS help a lot to solve this issue, but they require some knowledge, understanding and work. Go solves that out of the box by forgoing JIT (of course, it loses other important optimizations that a JIT natively enables, like monomorphic dispatch).

[1] https://go.dev/talks/2012/splash.article


The Google codebase had the delightful combination of both heavily async callback oriented APIs and also heavy use of multithreading. Not surprising for a company for whom software performance was an existential problem.

The core libraries were not only multi-threaded, but threaded in such a way that there was no way to shut them down cleanly. I was rather surprised when I first learned this fact during initial training, but the rationale made perfect sense: clean shutdown in heavily threaded code is hard and can introduce a lot of synchronization bugs, but Google software was all designed on the assumption that the whole machine might die at any moment. So why bother with clean shutdown when you had to support unclean shutdown anyway? Might as well just SIGKILL things when you're done with them.

And by core libraries I mean things like the RPC library, without which you couldn't do anything at all. So that I think shows the extent to which threading was not banned at Google.


As an aside:

This principle (always shutdown uncleanly) was a significant point of design discussion in Kubernetes, another one of the projects that adapted lessons learned inside Google on the outside (and had to change as a result).

All of the core services (kubelet, apiserver, etc) mostly expect to shutdown uncleanly, because as a project we needed to handle unclean shutdowns anyway (and could fix bugs when they happened).

But quite a bit of the software run by Kubernetes (both early and today) doesn’t always necessarily behave that way - most notably Postgres in containers in the early days of Docker behaved badly when KILLed (where Linux terminates the process without it having a chance to react).

So faced with the expectation that Kubernetes would run a wide range of software where a Google-specific principle didn’t hold and couldn’t be enforced, Kubernetes always (modulo bugs or helpful contributors regressing under tested code paths) sends TERM, waits a few seconds, then KILLs.

And the lack of graceful Go http server shutdown (as well as it being hard to do correctly in big complex servers) for many years also made Kube apiservers harder to run in a highly available fashion for most deployers. If you don’t fully control the load balancing infrastructure in front of every server like Google does (because every large company already has a general load balancer approach built from Apache or nginx or haproxy for F5 or Cisco or …), or enforce that all clients handle all errors gracefully, you tend to prefer draining servers via code vs letting those errors escape to users. We ended up having to retrofit graceful shutdown to most of Kube’s server software after the fact, which was more effort than doing it from the beginning.

In a very real sense, Google’s economy of software scale is that it can enforce and drive consistent tradeoffs and principles across multiple projects where making a tradeoff saves effort in multiple domains. That is similar to the design principles in a programming language ecosystem like Go or orchestrator like Kubernetes, but is more extensive.

But those principles are inevitably under communicated to users (because who reads the docs before picking a programming language to implement a new project in?) and under enforced by projects (“you must be this tall to operate your own Kubernetes cluster”).


John Ousterhout had famously written that threads were bad, and many people agreed with him because they seemed to be very hard to use.

Google software avoided them almost always, pretty much banning them outright, and the engineers doing the banning cited Ousterhout

Yeah this is simply not true. It was threads AND async in the C++ world (and C++ was of course most of the cycles)

The ONLY way to use all your cores is either threads or processes, and Google favored threads over processes (at least fork() based concurrency, which Burrows told people not to use).

For example, I'm 99.99% sure MapReduce workers started a bunch of threads to use the cores within a machine, not a bunch of processes. It's probably in the MapReduce paper.

So it can't be even a little true that threads were "avoided almost always"

---

What I will say is that the pattern of, say, fanning out requests to 50 or 200 servers and joining the results was async in C++. It wasn't idiomatic to use threads for that because of the cost, not because threads are "hard to use". (I learned that from hacking on Jeff Dean's tiny low latency gsearch code in 2006)

But even as early as 2009, people pushed back and used shitloads of threads, because ASYNC is hard to use -- it's a lot of manual state management.

e.g. From the paper about the incremental indexing system that launched in ~2009

https://research.google/pubs/large-scale-incremental-process...

https://storage.googleapis.com/gweb-research2023-media/pubto...

Early in the implementation of Percolator, we decided to make all API calls blocking and rely on running THOUSANDS OF THREADS PER MACHINE to provide enough parallelism to maintain good CPU utilization. We chose this thread-per-request model mainly to make application code easier to write, compared to the event-driven model. Forcing users to bundle up their state each of the (many) times they fetched a data item from the table would have made application development much more difficult. Our experience with thread-per-request was, on the whole, positive: application code is simple, we achieve good utilization on many-core machines, and crash debugging is simplified by meaningful and complete stack traces. We encountered fewer race conditions in application code than we feared. The biggest drawbacks of the approach were scalability issues in the Linux kernel and Google infrastructure related to high thread counts. Our in-house kernel development team was able to deploy fixes to address the kernel issues.

To say threads were "almost always avoided" is indeed ridiculous -- IIRC this was a few dedicated clusters of >20,000 machines running 2000-5000+ threads each ... (on I'd guess ~32 cores at the time)

I remember being in a meeting where the indexing VP mentioned the kernel patches mentioned above, which is why I thought of that paper

Also as you say there were threads all over the place in other areas too, GWS, MapReduce, etc.


I like that a lot of the comments are concerned with Go's failure to replace C++ as the de facto systems language yet Rob Pike doesn't mention anything about this in a retrospective article about what the language designers got wrong. He must be so embarrassed about it.


> Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure).

The Python runtime does not care about type hints; TS types are erased when compiled down to JS. How does Java's type erasure make it less strongly typed than those two? You can compare the capability of the type systems, and Java's may well be lesser, but I don't really see how erasure matters here.


So, it’s Java 1.2, but worse. Cool contribution!


> (and an order of magnitude faster) alternative to Python

Python is increasingly an easy to use wrapper over low-level C/C++ code.

So in many use cases it is faster than Go.


Than pure Go code, sure. But not really faster than Go code that's a wrapper over the same low-level C/C++ code.


That depends. C function call overhead for Go is quite large (it needs to allocate a larger stack, put it on its own thread and prevent pre-emption) and possibly larger than for CPython, which relies on calling into C for pretty much everything it does, so obviously has that path well-optimized.

So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.


> So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.

I don't have experience with Python, but I can definitely say switching between Go and C is super slow. I'm using a golang package which is a wrapper around SQLite: at some point I had a custom function written as a call-back to a Go function; profiling showed that a huge amount of time was spent in the transition code marshalling stuff back and forth between Go and C. I ended up writing the function in C so that the C sqlite3 library could call it directly, and it sped up my benchmarks significantly, maybe 5x. Even though sqlite3 is local, I still end up trying to minimize requests and data shipped out of the database, because transferring data in and out is so expensive.

(And if you're curious, yes I have considered trying to use one of the "pure go" sqlite3 packages; in large part it's a question of trust: the core sqlite3 library is tested fantastically well; do I trust the reimplementations enough not to lose my data? The performance would have to be pretty compelling to make it worth the risk.)

I think in general discouraging CGo makes sense, as in the vast majority of cases a re-implementation is better in the long run; so de-prioritizing CGo performance also makes sense. But there are exceptions, particularly for libraries where you want functionality to be identical, like sqlite3 or Qt, and there the CGo performance is a distinct downside.


Do you have an example of that? What I’ve heard over and over in comments here is that a) C interop in Go is slow, and b) Go devs discourage using it.

(Java is a similar story in my experience.)

In Python, (b) at least is definitely not true.


And yet many popular tools now written in Go used to be written in C++: Kubernetes, databases and the like.


Kubernetes mostly displaced tools written in Ruby (Puppet, Chef, Vagrant) or Python (Ansible, Fabric?). While a lot of older datastores are written in C++, new ones that were started post-2000ish tended to be written in Java or similar.


Kubernetes has nothing to do with the Ruby / Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.

Kubernetes is the equivalent of Borg /Omega at Google which is written in C++.


> Kubernetes has nothing to do with the Ruby / Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.

It's what Kubernetes is being used for in most places where I've seen it used.

> Kubernetes is the equivalent of Borg /Omega at Google which is written in C++.

Maybe, but most Kubernetes users aren't Google and weren't using those things.


Early versions of Rust were a lot like Golang with some added OCaml flavor. Complete with general GC, green threading etc. They pivoted to current Rust with its focus on static borrowcheck and zero-overhead abstractions very late in the language's evolution (though still pre-1.0 obviously) because they weren't OK with the heavy runtime and cumbersome interop with C FFI. So there's that.


> Complete with general GC, green threading etc.

AFAIK there was never "general GC". There was a GC'd smart pointer (@), and its implementation never got beyond refcounting, it was moved behind a feature gate (and a later-removed Gc library type) in 0.9 and removed in 0.10.

Ur-Rust was closer to an "applications" language for sure, and thus closer to Go's (possibly by virtue of being closer to OCaml), but it was always focused much more strongly on type safety and lifting constraints to types, as well as more interested in low-level concerns: unique pointers (~) and move semantics (if in a different form) were part of Rust 0.1.

That is what the community glommed onto, leading to "the pivot": there were applications language aplenty, but there was a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.


> a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.

I didn't know I wanted this, but yes, I did want this and when I got it I was much more enthusiastic than I'd ever been about languages like Python or Java.

I bounced off Go, it's not bad but it didn't do anything I cared about enough to push through all the usual annoyances of a new language, whereas Rust was extremely compelling by the time I looked into it (only a few years ago) and has only improved since.


Both Rust and Go are descendants of Limbo, Pike's prior language, although while Limbo's DNA remains strong in Go it's much more diffuse in Rust.


While those influences are important to Rust's history, they were mostly removed from the language before 1.0, notably green threads and the focus on channels as a core concurrency primitive. Channels still exist as a library in stdlib, but they're infinitely buffered by default, and aren't widely used.


"systems" can mean "distributed systems", "network systems" etc. both of which Go is suitable for. It's obviously not a great choice for "operating systems" which is well known.


let's just pretend that when go lang people say "systems programming" they mean something closer to "network (systems) programming", which is where go shines the brightest


And yet Google replaced the go based network stack in Fuchsia with rust for performance reasons.


I understand ysofunny's comment to have meant basically microservices/contemporary web backend.


Hold on a minute.

You are confusing the network stack (as in OS development) and network applications. Go is the undisputed king of the backend, but no reasonable person has ever claimed it's a good choice for OS development.


After the guy responsible for it left the team.


This sounds more like it was for personnel reasons than for performance reasons.


For people of Pike's generation, "systems programming" means, roughly, the OS plus the utilities that would come with an OS. Well, Go may not be useful for writing the OS, but for the OS-level utilities, it works just fine.


Has it found success in OS-level utilities? What popular utilities are written in Go?


Not sure these are really popular, but I cannot resist advertising a few utilities written in Go that I regularly use in my daily workflow:

- gdu: a NCDU clone, much faster on SSD mounts [1]

- duf: a `df` clone with a nicer interface [2]

- massren: a `vidir` clone (simpler to use but with fewer options) [3]

- gotop: a `top` clone [4]

- micro: a nice TUI editor [5]

Building this kind of tool in Go makes sense, as the executables are statically compiled and thus easy to install on remote servers.

[1]: https://github.com/dundee/gdu

[2]: https://github.com/muesli/duf

[3]: https://github.com/laurent22/massren

[4]: https://github.com/xxxserxxx/gotop

[5]: https://github.com/zyedidia/micro


Not sure what should be counted as OS-level.

Is the docker CLI an OS-level utility? What about lazygit? chezmoi? dive? fzf?

Actually many popular utilities are written in Go


Docker and Podman.


Being self hosted?

Gokrazy userspace?

gVisor?


Niklaus Wirth, rest his soul, would disagree.

As would the folks at WithSecure, who sell the USB Armory with firmware written in Go.

https://www.withsecure.com/en/solutions/innovative-security-...

Back in my day, writing compilers and OS services were also systems programming.


The shells scripts that bring up a machine are also "systems programming".


Maybe the better term for go would be server-systems programming.


Server programming?

The term "systems programming" seems to be interpreted very differently by different people which in practice renders it useless. It is probably best to not use it at all to avoid confusion.


> runtime ... GC ... not viable as systems programming language

A GC can work fine. At the lower levels, people want to save every flop, but at the higher levels uncounted millions are wasted by JS, Electron apps etc. etc. We can sacrifice a little at the bottom (in the kernel) for great comfort, without a noticeable difference. But you don't even have to make sacrifices: a high performance kernel only needs to allocate at startup, without ever freeing memory, allowing you to e.g. skip GC completely (turn it off with a compiler flag). This does require the kernel to implement specific optimizations, though, which aren't typically part of a language spec.

Anyway, some OS implemented with a GC: Oberon/Bluebottle (the Oberon language was designed specifically to implement the Oberon OS), JavaOS, JX, JNode, Smalltalk (was the OS for the first Smalltalk systems), Lisp in old Lisp machines... Interval Research even worked on a real time OS written in Smalltalk.

Indeed, GC can work in hard real time systems, e.g. the Aonix PERC Ultra, an embedded real time Java for missile control (but Go's current runtime's GC pauses are unpredictable...).

Particularly when we consider modern hardware problems (basic OS research already basically stopped in the 90s, yay risc processor design...), with minimal hardware support for high speed context switching because of processor speed vs. memory access latency... Well, it's not like we can utilize such minuscule operations anyway. Why don't we just have sensible processors which don't encourage us to unroll loops, which have die space to store context...

There were Java processors [2] which implemented the JVM in hardware, with Java bytecode as machine code. Before LLVM gained dominance, there were processors optimized for many languages (even Forths!)

David Chisnall, an RTOS and FreeBSD contributor, recently went into quite a bit of depth [1], ending with:

> everything that isn’t an allocator, a context-switch routine, or an ISR, can be written in a fully type-safe GC’d language

[1] https://lobste.rs/s/e6tz0r/memory_safety_is_red_herring#c_gf...

[2] https://www.electronicdesign.com/technologies/embedded/artic...


The nice thing about Java is you can choose which GC to use


Not only the GC, the JIT compiler, the AOT compiler, the full implementation even.


Seconding this. Go also has some opinionated standard libraries (want to differentiate between header casings in http requests because your clients/downstream services do? Go fuck yourself!) and shies you away from doing hacky, ugly, dangerous things you need in a systems language.

It’s absolutely an applications language.


> and shies you away from doing hacky, ugly, dangerous things you need in a systems language.

But... You end up doing hacky and ugly things all the time because Go is such a restricted language with so many opinions about what should and should not be done. Generics alone...


Meanwhile in the real world, the protocol authors agree with Go, and HTTP/2 and onward force lower case headers on everyone.


Headers are case-sensitive?


They aren't, but because you can send Foo-Bar as fOo-BaR on the wire, someone somewhere depends on it. People don't read the specs, they look at the example data, and decide that's how their program works now.

Postel's Law allows this. A different law might say "if anything is invalid or weird, reject it instantly" and there would be a lot less security bugs. But we also wouldn't have TCP or HTTP.


No, and that wasn't the claim being made. The claim being made was that there can be engineering value in preserving the case of existing headers.

Example: An HTTP proxy that preserves the case of HTTP headers is going to cause less breakage than one that changes them. In a perfect world, it would make no difference, but that isn't the world we live in.


Are you sure they are discarded and unrecoverable? Can't that be simply recovered by using textproto.MIMEHeader and iterating over the header map?

Seems that it could be a middleware away, I don't see the big deal if so.


You may well be right -- I don't know Go at all, and just assumed the parent post's claim is true.


Only per the HTTP spec, and this is the same misunderstanding that the golang developers have. Because it's so common to preserve header casing as requests traverse networks in the real world, many users' applications or even developers' APIs depend on header casing whether intentionally or not. So if you want to interact with them, or proxy them, you probably can't use Go to do so (ok, actually you can, but you have to go down to the TCP level and abandon their http request library).

Go makes the argument that they can format your headers in canonicalized casing because casing shouldn't matter per the HTTP spec. That's fine for applications I guess, though still kind of an overreach given they have added code to modify your headers in a particular way you might not want to spend cycles on - but unacceptable for a systems language/infrastructure implementation.


I think you meant to say that headers are not case-sensitive according to the HTTP spec, but some clients and servers do treat header names as case-sensitive in practice.

What Go does here is kinda moot nowadays, since HTTP/2 and HTTP/3 force all header names into lower-case, so they would also break non-conformant clients and servers.


That is in fact what I meant to say, and I thought I said it. Anyway, HTTP/1.1 is still in use a lot of places.

I think most people here don’t have any experience building for the kind of use cases I’m considering here (imagine a proxy like Envoy, which btw does give you at least the option to configure header casing transformations). When you have customers that can’t be forced to behave in a certain way up/down stream, you have to deal with this kind of stuff.


The Go standard library is probably being too opinionated here, but it's in line with the general worse-is-better philosophy behind Go: simplicity of implementation is more important than correctness of interface. In this case, the interface can even be claimed to be correct (according to the spec), but it cannot cover all use-cases.

If my memory serves me right, we did use Traefik at work in the past, and I remember having this issue with some legacy clients, which didn't expect headers to be transformed. Or perhaps the issue was with Envoy (which converts everything to lowercase by default, but does allow a great deal of customization).


Wait, are the headers canonicalized if you retrieve them from r.Header where r is a request?

I mean, if the safest thing is to conform to the HTTP spec, shouldn't there be an escape hatch for the rarer cases that's easier than going all the way down to the TCP level?


It's been a while since I battled this but IIRC, you can set non-canonicalized headers on requests you serialize yourself (for egress) with a simple workaround (directly add the header to the request's header map rather than using the setter function), but if you use Go's default http handler libraries, they "helpfully" canonicalize headers for you when deserializing incoming requests before invoking your http handler. So you are unable to access the original casing that way, unless you use a TCP server instead.


I'll just leave this here (emphasis mine)

https://en.wikipedia.org/wiki/Systems_programming

> The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g. word processor), whereas systems programming aims to produce software and software platforms which *provide services to other software*, are performance constrained, or both (e.g. operating systems, computational science applications, game engines, industrial automation, and software as a service applications).


> It certainly isn't viable as systems programming language

It is perfectly viable as a systems programming language. Remember, systems are the alternative to scripts. Go is in no way a scripting language...

You must be involved in Rust circles? They somehow became confused about what systems are, just as they became confused about what enums are. That is where you will find the odd myths.


It’s all admittedly a somewhat handwaving discussion, but in ‘systems programming’ ‘systems’ is generally understood to be opposite to ‘applications’, not ‘scripts’.


All software is application. That’s what software is for!


I wouldn't consider a driver an application


We live in an age in which a PC running an OS, which has drivers in it, is something that can be done by Javascript in a browser.


application: the action of putting something into operation

What's a driver if not something that carries out an action of putting something (an electronic device, typically) into operation?


Indeed - I’ve seen this refrain about “systems programming” countless times. I’m not sure how one can sustain the argument that a “system” is only an OS kernel, network stack or graphics driver.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: