I really, really appreciate key people taking the time for retrospectives. It makes a huge difference to people now who want to make a real difference.
But I'm not sure Rob Pike states clearly enough what they got right (IMO): they managed the forces on the project as well as the language, by:
- Restricting the language to its target use: systems programming, not applications or data science or AI...
- Defining the language and its principles clearly. This avoids eons of waste in implementing ambiguity and designing at cross-purposes.
- Putting quality first: it's always cheaper for all concerned to fix problems before deploying, even if it's harder for the community or OS contributors or people waiting for new features.
- Sharing leadership with the community. They maintained strict control over the language, releases, and core messaging, but they also allowed others to lead in many downstream aspects.
Stated but under-appreciated is the degree to which Google itself didn't interfere. I suspect it's because Go actually served its objectives and is critical to Google. I wonder if that could be true today for a new project. It's interesting to compare Dart, which has zero uptake outside Flutter even though there are orders of magnitude more application code than systems code.
Go was probably the key technology that migrated server-side software off Java bloatware to native containers. It dominates back-end infrastructure and underlies most of the web application infrastructure of the last 10 years. The benefit to Google and the community from that alternative has been huge. Somehow amidst all that growth, the team remained small and kept all its key players.
> - Restricting the language to its target use: systems programming, not applications or data science or AI...
Go has a GC and a very heavy runtime with green threads, leading to cumbersome/slow C interop.
It certainly isn't viable as a systems programming language, and that's by design. It's an odd myth that has persisted ever since the language described itself that way in the beginning. They removed that wording years ago, I think.
It's primarily a competitor to Java et al, not to C or Rust, and you see that when looking at the domains it is primarily used in, although it tends to sit a bit lower on the stack due to the limited nature of the type system and the great support for concurrency.
I worked on Fuchsia for many years, and maintained the Go fork for a good while. Fuchsia shipped the gVisor-based (Go) netstack on Google Home devices.
The Go fork was a pain for a number of reasons, some of them historical, but more deeply the plan for fixing that was complicated because the runtime makes fairly core architectural assumptions that the world has fds and epoll-like behavior. Those constraints cause challenges even for current systems, and even for Linux, where you may not want to be constrained by that anymore. Eventually Fuchsia abandoned Go for new software because the folks hired to rewrite the integration ran out of motivation to do so, and the runtime as written presented atrocious performance on a power/performance curve - not suitable for battery-powered devices. Binary sizes also made integration into storage-constrained systems more painful, and without a large number of components written in the language to bundle together, the build size is too large. Rust and C++ also often produce large binaries, but those can be substantially mitigated with dynamic linking, provided you have a strong package system that avoids the ABI problem, as Fuchsia does.
The cost of crossing the cgo/syscall boundary remains high, and got higher over the time that Fuchsia was in major development due to the increased cost of spectre and meltdown mitigations.
The cgo/syscall boundary cost shows up in my current job a lot too, where we do things like talk to sqlite constantly for small objects or shuffle small packets at or below common MTU sizes. Go is slow at these things in the same way that other managed runtimes are - for the same reasons. It's hard to integrate foreign APIs unless the standard library already integrated them in the core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this where Go has a high cost of implementation for lower-level problems - problems that involve high-frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers into or out of the program and do lots of work on them, and when your concurrency model fits the channel/goroutine model reasonably well. If you have a problem that involves higher-frequency operations, or more interesting targets, you'll find the lack of broader atomics and the inability to cheaply or precisely schedule work problematic.
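To make the boundary-cost point concrete, here's a minimal cgo sketch (the C function `sum` and the 64-byte "packet" size are just illustrative) contrasting many small crossings with one large, amortized call:

```go
package main

/*
static int sum(const char *buf, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += buf[i];
    return s;
}
*/
import "C"

import (
	"fmt"
	"time"
	"unsafe"
)

func main() {
	buf := make([]byte, 1<<20)

	// Many small calls: pays the Go->C transition cost on every crossing.
	start := time.Now()
	total := 0
	for off := 0; off+64 <= len(buf); off += 64 {
		total += int(C.sum((*C.char)(unsafe.Pointer(&buf[off])), 64))
	}
	fmt.Println("per-packet calls: ", time.Since(start), total)

	// One large call: the same work amortized over a single crossing.
	start = time.Now()
	total = int(C.sum((*C.char)(unsafe.Pointer(&buf[0])), C.int(len(buf))))
	fmt.Println("single large call:", time.Since(start), total)
}
```

The absolute numbers depend on hardware and Go version, but the gap between the two loops is the "pass big buffers, not small packets" tax being described here.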
All valid reasons. However, as the USB Armory's bare-metal Go unikernel shows, had the people behind Go's introduction stayed on the team and kept battling for it, maybe those issues would have been sorted out with Go still in the picture, instead of a rewrite.
It's similar to Longhorn/Midori versus Android: on one side, Microsoft WinDev politics managed to kill any effort to use .NET instead of COM/C++; on the other, Google teams collaborated to actually ship a managed OS, nowadays used by billions of people across the world.
In both cases, politics and product-management vision won over the relevance of the related technical stacks.
I always take "A is better than B" arguments with a grain of salt when they rest only on technical matters.
I see you citing USB Armory a lot, but I haven't yet seen any acknowledgement that it too is a Go fork. Not everything runs on that fork; some things need patching.
It's interesting that you raise collaboration points here. When Russ was getting into the Go modules design he reached out, and I made time for him, giving him a brain dump of knowledge from working on Ruby gems for many years, the Bundler introduction into that ecosystem, and the forge deprecation/Gemcutter transition, plus insights from having watched npm and cargo due to adjacencies. He took a lot of notes, and things from that showed up in the posts and design. When Fuchsia was starting to rumble about dropping Go, I reached out to him about it, hoping to discuss some of the key points - he never got back to me.
I totally agree that Go is best suited outside of systems programming, but to me that always seemed like a complete accident - its creators explicitly said their goal was to replace C++. But somehow it completely failed to do so, while simultaneously finding enormous success as a statically typed (and an order of magnitude faster) alternative to Python.
It may help to understand the context. At the time Go was created you could choose between three languages at Google: Python, C++ and Java.
Well, to be honest, if you chose Python you were kind of looked down on as a bit of a loser (*) at Google, so there were really two languages: C++ and Java.
Weeeell, to be honest, if you chose Java you would not be working on anything that was really performance critical so there was really just one language: C++.
So we wrote lots and lots of servers in C++. Even those who strictly speaking didn't have to be very fast. That wasn't the nicest experience in the world. Not least because C++ is ancient and the linking stage would end up being a massive bottleneck during large builds. But also because C++ has a lot of sharp edges. And while the bro coders would never admit that they were wasting time looking over their shoulder, the grown ups started taking this seriously as a problem. A problem major enough to warrant having some really good people look into remedying that.
So yes, at Google, Go did replace lots of C++ and did so successfully.
(*) Yes, that sentiment was expressed. Sometimes by people whose names would be very familiar to you.
Or JavaScript, Dart, Objective-C, Swift, Rust; even C#. But then it depends on the problem domain. Google is huge, so it depends. And that's even if you pick Python, C++, Java or Go. So your team will already have decided it for you.
I haven't worked there for a long time so I wouldn't know. I don't even know if they are still as disciplined about what languages are allowed and how those languages are used.
Go appears to have been made with a radical focus on a niche that isn't particularly well specified outside the heads of its benevolent directorate for life. Opinionated to the point of "if you use Go outside that unnamed niche you've got no one to blame but yourself". Could almost be called a solution looking for a problem. But it also appears to be quite successful at finding problem-fit, no doubt helped by the clarity of that focus. They've been very open about what they consider Go not to be or ever become. Unlike practically every other language: they all seem to eventually fall into the trap of advertising themselves with what boils down to "in a pinch you could also use it for everything else".
It's quite plausible that before Go, its creators would have chosen C++ for problems they consider in "The Go Niche". That would be perfectly sufficient to declare it a C++ replacement in that niche. Just not a universal C++ replacement.
Consider this: the authors fixed some of the Plan 9 design errors, including the demise of Alef, by creating Inferno and Limbo (yeah, it was a response to JavaOS, but still).
In Inferno, C is only used for the kernel, the Limbo VM (with a JIT), and little else like the Tk bindings; everything else is written in Limbo.
Replace Limbo with AOT compiled Go, and that is what systems programming is in the minds of UNIX, Plan 9 and Inferno authors.
Safer, I'd say, but it also introduced its own issues. Its type system and - maybe more importantly - its ecosystem around things like unit testing and static analysis at the time was a few leaps ahead of C++, making it a favorite for enterprise systems and codebases where getting 5 SIG stars is a badge of honor and/or requirement.
Java is easier than C++ as well, harder to mess things up. That said, Go feels easier again than Java because it got rid of a lot of cruft.
Except it did replace C++ in the domains it claimed it would replace C++ in. It made clear from day one that you wouldn't write something like a kernel in it. It was never trying to replace every last use of C++.
You may have a point that Python would have replaced C++ in those places instead if Go had never materialized. It was clear C++ was already on the way out, with Python among those starting to push into its place around the time Go was conceived. Go found its success in the places where Python was also trying to replace C++.
The main domain the original team behind Go were aiming at was clearly network software, especially servers.
But there was no consensus on whether kernels could be a goal one day. Rob Pike originally thought Go could be a good language for writing kernels, if they made just a few tweaks to the runtime[1], but Ian Lance Taylor didn't see real kernels ever being written in Go[2]. In the pre-release versions of Go, Russ Cox wrote an example minimalistic kernel[3] that can directly run Go (the kernel itself is written in C and x86 assembly) - it never really went beyond running a few toy programs and eventually became broken and unmaintained, so it was removed.
It was specifically written to measure the performance overhead of a high-level language in kernel, so it explicitly makes the same design choices as Linux: monolithic, POSIX-compatible.
I don't think so either, but as we move past that side tangent and return to the discussion, there was the battle of the 'event systems'. Node.js was also created in this timeframe to compete on much the same ground. And then came Go, after which most contenders, including Python, backed down. If you are writing these kinds of programs today, it is highly likely that you are using Go, Node.js, or some language that is even newer than Go (e.g. Rust or Elixir). C++ isn't even on the consideration list anymore.
> its creators explicitly said their goal was to replace C++
I think that is a far clearer goal if you look at C++ as it is used inside Google. If you combine the Google C++ style guide and Abseil, you can see the heritage of Go very clearly.
Node is catastrophic because it perpetuates mistakes made fifty years ago, whereas other modern systems (looking at Go and Rust) learnt from those past mistakes and don't repeat them.
If Node were not popular it would not be catastrophic
It's not "an accident", and Go didn't "somehow" fail to replace C++ in its systems programming domain. The reason Go failed to replace C and C++ is not a mystery to anyone: a mandatory GC and a rather heavyweight runtime.
When the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: Command line tools and network programs.
Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean in the way most of us interpreted them.
1. The Go team members were using "systems programming language" in a very wide sense that includes everything that is not scripting or web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages" are languages that REAL programmers use, unlike those "scripting languages"). Ironically, this usage seems to originate from John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).
Ousterhout's definition of "system programming language" is: Designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, often provide higher-level facilities such as objects and threads.
Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing is optional, but so it is in Java (Object), and C (void*, casting). When we talk about the archetypical "strongly typed" language today we would refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used "for writing applications from scratch" back in 2009, and far from being ill-adapted for writing data structures and algorithms from scratch, Python became the most common language that universities are using for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.
The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software are either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features, which are implemented in C.
During the first years of Go, the Go language team was confronted on golang-nuts by people who wanted to use go for writing systems software and they usually evaded directly answering these questions. When pressed, they would admit that Go is not ready for writing OS kernels, at least not now[4][5][6], but GC could be disabled if you want to[7] (of course, there isn't any way to free memory then, so it's kinda moot). Eventually, the team came to a conclusion that disabling GC is not meant for production use[8][9], but that was not apparent in the early days.
Eventually the references for "systems language" disappeared from Go's official homepage and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].
In hindsight, I think the main "systems programming task" that Rob Pike and other members at the Go team envisioned was the main task that Google needed: writing highly concurrent server code.
2. The Go Team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management and different programmers using different subsets. I couldn't find any claim that go was meant as a general replacement for C and C++ anywhere from the Go Team, but the media and the wider programming community generally took Go as a replacement language for C and C++.
When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.
Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming, for other reasons). Some of these languages had threads, but these were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.
Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought. Java had Maven and Ivy and .Net had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
The mass migration of dynamic language programmers to Go was surprising for the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and Enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine tuning, and a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation), and other important tools ("govet", "goprof" and "goinstall", which was later broken into "go get" and "go install") were added within one year of its initial release.
The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.
The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team were thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies matched what you needed was practically impossible without essentially packaging your app for every single OS and distro out there, or relying on the user to be able to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VMs, so they were horrible candidates even if you'd solved the dependency issues. So the best solution was usually C or C++ with either the "./configure && make install" pattern or making a static binary - both solutions were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, since all you had to do was package a static binary.
I'm impressed. That's the most thorough and well-researched comment I've seen on Hackernews, ever. Thank you for taking the time and effort in writing it up.
It compares NuGet with Maven, calling the former cumbersome. That's a tell of gaps in the research, but also a showcase of the overarching problem where C# is held back by people bundling it together with Java and the issues of Java's ecosystem (because NuGet is excellent and on par with Cargo and crates.io).
NuGet was only released in 2010, so I wasn't really referring to it. I was referring to Maven the build system (the build tool part, not the Maven/Ivy dependency management part, which was quite a breeze) and MSBuild. Both of which required wrangling with verbose XML and understanding a lot of syntax (or letting the IDE spew out everything for you and then getting totally lost when you need to fix something or go beyond what the IDE UI allows you to do). If anything, MSBuild was somewhat worse than Maven, since the documentation was quite bad, at least back then.
That being said, I'm not sure if you've used NuGet in its early days of existence, but I did, and it was not a fun experience. I remember that the NuGet project used to get corrupted quite often and I had to reinstall everything (and back then, there was no lockfile if my memory serves me right, so you'd be getting different versions).
In terms of performance, ASP.NET (not ASP.NET Core) was as bad as contemporary Java EE frameworks, if not worse. You could make a high performance web server by targeting OWIN directly (like you could target the Servlet API with Java), but that came later.
I think you are the one who is bundling things together here: you are confusing the current C#/.NET Core ecosystem with the way it was back in the .NET 4.0/Visual Studio 2008 era. Windows-centric, very hard to automate through the CLI, XML-obsessed, and rather brittle tooling.
C# did have a lot of good points over Java back then (and certainly now): a less verbose language, better generics (no type erasure), lambda expressions, extension methods, LINQ, etc. Visual Studio was also a better IDE than Eclipse. I personally chose C# over Java at the time (when I could target Windows), but I'm not trying to hide the limits it had back then.
Fair enough. You are right, and I apologize for the rather hasty comment. .NET in 2010 was a completely different beast and an unlikely choice in this context. It would be good for the industry if the perception of that past were not extrapolated onto the current state of affairs.
> Systems software are either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features which are implemented in C.
Android GPU debugger, USB Armory bare metal unikernel firmware, Go compiler, Go linker, bare metal on maker boards like Arduino and ESP32
> Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought. Java had Maven and Ivy and .Net had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
Usually a problem only for those that refuse to actually learn about Java and .NET ecosystems.
Still doing great after 25 years, now being copied by the VC-sponsored startups selling Kubernetes + WASM.
Unfortunately I have to quibble a bit, although bravo for such a high effort post.
> When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers
I worked at Google from 2006-2014 and I wouldn't agree with this characterisation, nor actually with many of the things Rob Pike says in his talk.
In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.
Rob's talk is frankly somewhat weird to read as a result. He claims to have been solving Big Problems that only Google had, but AFAIK nobody in Google's senior management asked him to do Go despite a heavy investment in infrastructure. Java and C++ were working fine at the time and issues like build times were essentially solved by Blaze (a.k.a. Bazel) combined with a truly huge build cluster. Blaze is a command line written in ... drumroll ... Java (with a bit of C++ iirc).
Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all. Google servers were all heavily multi-threaded and async at that time, and every server exposed a /threadz URL on its management port that would show you the stack traces of every thread (in both C++ and Java). I have clear memories of debugging race conditions in servers there, well before Go existed.
> The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought.
Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.
At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.
I was on-call for Java servers at Google at various points. I don't remember GC being a major issue even back then (and nowadays modern Java GC is far better than Go's). It was sometimes a minor issue requiring tuning to get the best performance out of the hardware. I do remember JITC being a problem because some server codebases were so large that they warmed up too slowly, and this would cause request timeouts when hitting new servers that had just started up, so some products needed convoluted workarounds like pre-warming before answering healthy to the load balancer.
Overall, the story told by Rob Pike about Go's design criteria doesn't match my own recollection of what Google was actually doing. The main project Pike was known for at Google in that era was Sawzall, a Go-like language designed specifically for logs processing, which Google has phased out years ago (except in one last project where it's used for scripting purposes and where I heard the team has now taken over maintenance of the Sawzall runtime, that project was written by me, lol sorry guys). So maybe his primary experience of Google was actually writing languages for batch jobs rather than web servers and this explains his divergent views about what was common practice back then?
I agree with your assessment of Go's success outside of Google.
This. I worked at Google around the same time. Adwords and Gmail were customers of my team.
I remember appreciating how much nicer it was to run Java servers, because best practice for C++ was (and presumably still is) to immediately abort the entire process any time an invariant was broken. This meant that it wasn't uncommon to experience queries of death that would trivially shoot down entire clusters. With Java, on the other hand, you'd just abort the specific request and keep chugging.
I didn't really see any appreciable attrition to golang from Java during my time at Google. Similarly, at my last job, the majority of work in golang was from people transitioning from Ruby. I later learned a common reason to choose golang over Java was confusion about the Java tooling / development workflow. For example, folks coming from Ruby would often debug with log statements and process restarts instead of just using a debugger and hot-patching code.
Yeah. C++ has exceptions but using them in combination with manual memory management is nearly impossible, despite RAII making it appear like it should be a reasonable thing to do. I was immediately burned by this the first time I wrote a codebase that combined C++ and exceptions, ugh, never again. Pretty sure I never encountered a C++ codebase that didn't ban exceptions by policy and rely on error codes instead.
This very C oriented mindset can be seen in Go's design too, even though Go has GC. I worked with a company using Go once where I was getting 500s from their servers when trying to use their API, and couldn't figure out why. I asked them to check their logs to tell me what was going wrong. They told me their logs didn't have any information about it, because the error code being logged only reflected that something had gone wrong somewhere inside a giant library and there were no stack traces to pinpoint the issue. Their suggested solution: just keep trying random things until you figure it out.
That was an immediate and visceral reminder of the value of exceptions, and by implication, GC.
> In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.
Thank you. I don't know much about the breakdown of different services by language at Google circa 2009, so your feedback helps me put things in focus. I knew that Java was more popular than the way Rob described it (in his 2012 talk[1], not this essay), but I didn't know by how much.
I would still argue that replacing C and C++ in server code was the main impetus for developing Go. This would be a rather strange impetus outside of a big tech company like Google, which was writing a lot of C++ server code to begin with. But it also seems that Go was developed quite independently of Google's own problems.
> Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all.
I can't say anything about Google, but I also found that statement baffling. If you wanted to develop a scalable network server in Java at that time, you pretty much had to use threads. With C++ you had a few other alternatives (you could develop a single-threaded server using an asynchronous library like Boost.Asio, for instance), but that was probably harder than dealing with deadlocks and race conditions (which are still very much a problem in Go, the same way they are in multi-threaded C++ and Java).
> Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.
Yes, I am aware of that part, and it makes it clearer to me that Go wasn't trying to solve any particular problem with the way Java was used within Google. I also don't think Go won over many experienced Java developers who already knew how to deal with Java. But it did offer a simpler build-deployment-and-configuration story than Java, and that's why it attracted many Python and Node.js developers where Java failed to do so.
Many commentators have mentioned better performance and fewer errors with static typing as the main attraction for dynamic language programmers coming to Go, but that cannot be the only reason, since Java had both of these long before Go came to being.
> At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.
Frankly speaking, GC was a more minor problem for people coming from dynamic languages. The main issue for this type of developer is that the GC in Java is configurable. In practice, most of the developers I've worked with (even seasoned Java developers) do not know how to configure and benchmark the Java GC, which is quite an issue.
JVM Warmup was and still is a major issue in Java. New features like AppCDS help a lot to solve this issue, but it requires some knowledge, understanding and work. Go solves that out of the box, by foregoing JIT (Of course, it loses other important optimizations that JIT natively enables like monomorphic dispatch).
The Google codebase had the delightful combination of both heavily async callback oriented APIs and also heavy use of multithreading. Not surprising for a company for whom software performance was an existential problem.
The core libraries were not only multi-threaded, but threaded in such a way that there was no way to shut them down cleanly. I was rather surprised when I first learned this fact during initial training, but the rationale made perfect sense: clean shutdown in heavily threaded code is hard and can introduce a lot of synchronization bugs, but Google software was all designed on the assumption that the whole machine might die at any moment. So why bother with clean shutdown when you had to support unclean shutdown anyway. Might as well just SIGKILL things when you're done with them.
And by core libraries I mean things like the RPC library, without which you couldn't do anything at all. So that I think shows the extent to which threading was not banned at Google.
This principle (always shutdown uncleanly) was a significant point of design discussion in Kubernetes, another one of the projects that adapted lessons learned inside Google on the outside (and had to change as a result).
All of the core services (kubelet, apiserver, etc) mostly expect to shutdown uncleanly, because as a project we needed to handle unclean shutdowns anyway (and could fix bugs when they happened).
But quite a bit of the software run by Kubernetes (both early and today) doesn’t always necessarily behave that way - most notably Postgres in containers in the early days of Docker behaved badly when KILLed (where Linux terminates the process without it having a chance to react).
So faced with the expectation that Kubernetes would run a wide range of software where a Google-specific principle didn’t hold and couldn’t be enforced, Kubernetes always (modulo bugs or helpful contributors regressing under tested code paths) sends TERM, waits a few seconds, then KILLs.
And the lack of graceful Go http server shutdown (as well as it being hard to do correctly in big complex servers) for many years also made Kube apiservers harder to run in a highly available fashion for most deployers. If you don’t fully control the load balancing infrastructure in front of every server like Google does (because every large company already has a general load balancer approach built from Apache or nginx or haproxy for F5 or Cisco or …), or enforce that all clients handle all errors gracefully, you tend to prefer draining servers via code vs letting those errors escape to users. We ended up having to retrofit graceful shutdown to most of Kube’s server software after the fact, which was more effort than doing it from the beginning.
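For reference, since Go 1.8 the standard library does expose graceful draining via http.Server.Shutdown; a minimal sketch of the "TERM, drain, then exit" pattern described above (the port, timeout, and handler are illustrative):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	// Wait for SIGTERM (what an orchestrator sends first), then drain
	// in-flight requests before exiting instead of dropping them.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
}
```

For most of Kube's early life this wasn't available and had to be retrofitted by hand, which is the work being described above.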
In a very real sense, Google’s economy of software scale is that it can enforce and drive consistent tradeoffs and principles across multiple projects where making a tradeoff saves effort in multiple domains. That is similar to the design principles in a programming language ecosystem like Go or orchestrator like Kubernetes, but is more extensive.
But those principles are inevitably under communicated to users (because who reads the docs before picking a programming language to implement a new project in?) and under enforced by projects (“you must be this tall to operate your own Kubernetes cluster”).
John Ousterhout had famously written that threads were bad, and many people agreed with him because they seemed to be very hard to use.
Google software avoided them almost always, pretty much banning them outright, and the engineers doing the banning cited Ousterhout
Yeah this is simply not true. It was threads AND async in the C++ world (and C++ was of course most of the cycles)
The ONLY way to use all your cores is either threads or processes, and Google favored threads over processes (at least fork() based concurrency, which Burrows told people not to use).
For example, I'm 99.99% sure MapReduce workers started a bunch of threads to use the cores within a machine, not a bunch of processes. It's probably in the MapReduce paper.
So it can't be even a little true that threads were "avoided almost always"
---
What I will say is that the pattern of, say, fanning out requests to 50 or 200 servers and joining, in C++, was async. It wasn't idiomatic to use threads for that because of the cost, not because threads are "hard to use". (I learned that from hacking on Jeff Dean's tiny low-latency gsearch code in 2006.)
But even as early as 2009, people pushed back and used shitloads of threads, because ASYNC is hard to use -- it's a lot of manual state management.
e.g. From the paper about the incremental indexing system that launched in ~2009
Early in the implementation of Percolator, we decided to make all API calls blocking and rely on running THOUSANDS OF THREADS PER MACHINE to provide enough parallelism to maintain good CPU utilization. We chose this thread-per-request model mainly to make application code easier to write, compared to the event-driven model. Forcing users to bundle up their state each of the (many) times they fetched a data item from the table would have made application development much more difficult. Our experience with thread-per-request was, on the whole, positive: application code is simple, we achieve good utilization on many-core machines, and crash debugging is simplified by meaningful and complete stack traces. We encountered fewer race conditions in application code than we feared. The biggest drawbacks of the approach were scalability issues in the Linux kernel and Google infrastructure related to high thread counts. Our in-house kernel development team was able to deploy fixes to address the kernel issues
To say threads were "almost always avoided" is indeed ridiculous -- IIRC this was a few dedicated clusters of >20,000 machines running 2000-5000+ threads each ... (on I'd guess ~32 cores at the time)
I remember being in a meeting where the indexing VP mentioned the kernel patches mentioned above, which is why I thought of that paper
Also as you say there were threads all over the place in other areas too, GWS, MapReduce, etc.
I like that a lot of the comments are concerned with Go's failure to replace C++ as the de facto systems language yet Rob Pike doesn't mention anything about this in a retrospective article about what the language designers got wrong. He must be so embarrassed about it.
> Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure).
The Python runtime does not care about type hints; TS types are erased when compiled down to JS. How does Java's type erasure make it less strongly typed than those two? You can compare the capability of the type systems, and Java's may well be lesser, but I don't really see how erasure matters here.
That depends. C function call overhead for Go is quite large (it needs to allocate a larger stack, put it on its own thread and prevent pre-emption) and possibly larger than for CPython, which relies on calling into C for pretty much everything it does, so obviously has that path well-optimized.
So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.
> So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.
I don't have experience with Python, but I can definitely say switching between Go and C is super slow. I'm using a golang package which is a wrapper around SQLite: at some point I had a custom function written as a call-back to a Go function; profiling showed that a huge amount of time was spent in the transition code marshalling stuff back and forth between Go and C. I ended up writing the function in C so that the C sqlite3 library could call it directly, and it sped up my benchmarks significantly, maybe 5x. Even though sqlite3 is local, I still end up trying to minimize requests and data shipped out of the database, because transferring data in and out is so expensive.
(And if you're curious, yes I have considered trying to use one of the "pure go" sqlite3 packages; in large part it's a question of trust: the core sqlite3 library is tested fantastically well; do I trust the reimplementations enough not to lose my data? The performance would have to be pretty compelling to make it worth the risk.)
I think in general discouraging CGo makes sense, as in the vast majority of cases a re-implementation is better in the long run; so de-prioritizing CGo performance also makes sense. But there are exceptions, particularly for libraries where you want functionality to be identical, like sqlite3 or Qt, and there the CGo performance is a distinct downside.
Kubernetes mostly displaced tools written in Ruby (Puppet, Chef, Vagrant) or Python (Ansible, Fabric?). While a lot of older datastores are written in C++, new ones that were started post-2000ish tended to be written in Java or similar.
Kubernetes has nothing to do with the Ruby/Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.
Kubernetes is the equivalent of Borg /Omega at Google which is written in C++.
> Kubernetes has nothing to do with the Ruby/Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.
It's what Kubernetes is being used for in most places where I've seen it used.
> Kubernetes is the equivalent of Borg /Omega at Google which is written in C++.
Maybe, but most Kubernetes users aren't Google and weren't using those things.
Early versions of Rust were a lot like Golang with some added OCaml flavor. Complete with general GC, green threading etc. They pivoted to current Rust with its focus on static borrowcheck and zero-overhead abstractions very late in the language's evolution (though still pre-1.0 obviously) because they weren't OK with the heavy runtime and cumbersome interop with C FFI. So there's that.
AFAIK there was never "general GC". There was a GC'd smart pointer (@), and its implementation never got beyond refcounting, it was moved behind a feature gate (and a later-removed Gc library type) in 0.9 and removed in 0.10.
Ur-Rust was closer to an "applications" language for sure, and thus closer to Go's territory (possibly by virtue of being closer to OCaml), but it was always focused much more strongly on type safety and lifting constraints to types, as well as more interested in low-level concerns: unique pointers (~) and move semantics (if in a different form) were part of Rust 0.1.
That is what the community glommed onto, leading to "the pivot": there were application languages aplenty, but there was a real hunger for a type-heavy and memory-safe low-level / systems programming language, and Rust had the bones of it.
> a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.
I didn't know I wanted this, but yes, I did want this and when I got it I was much more enthusiastic than I'd ever been about languages like Python or Java.
I bounced off Go, it's not bad but it didn't do anything I cared about enough to push through all the usual annoyances of a new language, whereas Rust was extremely compelling by the time I looked into it (only a few years ago) and has only improved since.
While those influences are important to Rust's history, they were mostly removed from the language before 1.0, notably green threads and the focus on channels as a core concurrency primitive. Channels still exist as a library in stdlib, but they're infinitely buffered by default, and aren't widely used.
"systems" can mean "distributed systems", "network systems" etc. both of which Go is suitable for. It's obviously not a great choice for "operating systems" which is well known.
Let's just pretend that when Go people say "systems programming" they mean something closer to "network (systems) programming", which is where Go shines the brightest.
You are confusing the network stack (as in OS development) and network applications. Go is the undisputed king of the backend, but no reasonable person has ever claimed it's a good choice for OS development.
For people of Pike's generation, "systems programming" means, roughly, the OS plus the utilities that would come with an OS. Well, Go may not be useful for writing the OS, but for the OS-level utilities, it works just fine.
The term "systems programming" seems to be interpreted very differently by different people which in practice renders it useless. It is probably best to not use it at all to avoid confusion.
> runtime ... GC ... not viable as systems programming language
A GC can work fine. At the lower levels, people want to save every flop, but at the higher levels uncounted millions are wasted by JS, Electron apps, etc. We can sacrifice a little on the bottom (in the kernel) for great comfort, without a meaningful difference. But you don't even have to make sacrifices. A high-performance kernel only needs to allocate at startup, without freeing memory, allowing you to e.g. skip GC completely (turn it off with a compiler flag). This does require the kernel to implement specific optimizations though, which aren't typically part of a language spec.
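In Go specifically, the closest analogue today isn't a compiler flag but the GOGC=off environment variable or its runtime equivalent; a minimal sketch of the allocate-at-startup idea (the arena size is just illustrative):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Disable the collector entirely (equivalent to running with GOGC=off).
	// Everything after this point should live off memory reserved up front.
	debug.SetGCPercent(-1)

	// Reserve a fixed arena at startup and never free it - the
	// allocate-only pattern described above.
	arena := make([]byte, 64<<20)
	fmt.Println("reserved", len(arena), "bytes; GC disabled")
}
```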
Anyway, some OS implemented with a GC: Oberon/Bluebottle (the Oberon language was designed specifically to implement the Oberon OS), JavaOS, JX, JNode, Smalltalk (was the OS for the first Smalltalk systems), Lisp in old Lisp machines... Interval Research even worked on a real time OS written in Smalltalk.
Indeed, GC can work in hard real-time systems, e.g. the Aonix PERC Ultra, embedded real-time Java for missile control (but Go's current runtime's GC pauses are unpredictable...)
Particularly when we consider modern hardware problems (basic OS research already basically stopped in the 90s, yay risc processor design...), with minimal hardware support for high speed context switching because of processor speed vs. memory access latency... Well, it's not like we can utilize such minuscule operations anyway. Why don't we just have sensible processors which don't encourage us to unroll loops, which have die space to store context...
There were Java processors [2] which implement the JVM in hardware, with Java bytecode as machine code. Before llvm gained dominance, there were processors optimized to many languages (even Forths!)
David Chisnall, an RTOS and FreeBSD contributor, recently went into quite a bit of depth [1], ending with:
> everything that isn’t an allocator, a context-switch routine, or an ISR, can be written in a fully type-safe GC’d language
Seconding this. Go also has some opinionated standard libraries (want to differentiate between header casings in http requests because your clients/downstream services do? Go fuck yourself!) and shies you away from doing hacky, ugly, dangerous things you need in a systems language.
> and shies you away from doing hacky, ugly, dangerous things you need in a systems language.
But... You end up doing hacky and ugly things all the time because Go is such a restricted language with so many opinions about what should and should not be done. Generics alone...
They aren't, but because you can send Foo-Bar as fOo-BaR on the wire, someone somewhere depends on it. People don't read the specs, they look at the example data, and decide that's how their program works now.
Postel's Law allows this. A different law might say "if anything is invalid or weird, reject it instantly" and there would be a lot less security bugs. But we also wouldn't have TCP or HTTP.
No, and that wasn't the claim being made. The claim being made was that there can be engineering value in preserving the case of existing headers.
Example: An HTTP proxy that preserves the case of HTTP headers is going to cause less breakage than one that changes them. In a perfect world, it would make no difference, but that isn't the world we live in.
Only per the HTTP spec, and this is the same misunderstanding that the golang developers have. Because it's so common to preserve header casing as requests traverse networks in the real world, many users' applications or even developers' APIs depend on header casing whether intentionally or not. So if you want to interact with them, or proxy them, you probably can't use Go to do so (ok, actually you can, but you have to go down to the TCP level and abandon their http request library).
Go makes the argument that they can format your headers in canonicalized casing because casing shouldn't matter per the HTTP spec. That's fine for applications I guess, though still kind of an overreach given they have added code to modify your headers in a particular way you might not want to spend cycles on - but unacceptable for a systems language/infrastructure implementation.
I think you wanted to say that header names are not case-sensitive according to the HTTP spec, but some clients and servers do treat them as case-sensitive in practice.
What Go does here is kinda moot nowadays, since HTTP/2.0 and HTTP/3.0 force all header names into lower-case, so they would also break non-conformant clients and servers.
That is in fact what I meant to say, and I thought I said it. Anyway, HTTP/1.1 is still in use a lot of places.
I think most people here don’t have any experience building for the kind of use cases I’m considering here (imagine a proxy like Envoy, which btw does give you at least the option to configure header casing transformations). When you have customers that can’t be forced to behave in a certain way up/down stream, you have to deal with this kind of stuff.
The Go standard library is probably being too opinionated here, but it's in line with the general worse-is-better philosophy behind Go: simplicity of implementation is more important than correctness of interface. In this case, the interface can even be claimed to be correct (according to the spec), but it cannot cover all use-cases.
If my memory serves me right, we did use Traefik at work in the past, and I remember having this issue with some legacy clients, which didn't expect headers to be transformed. Or perhaps the issue was with Envoy (which converts everything to lowercase by default, but does allow a great deal of customization).
Wait, are the headers canonicalized if you retrieve them from r.Header where r is a request?
I mean, if the safest thing is to conform to the HTTP spec, shouldn't there be an escape hatch for the rarer cases that's easier than going all the way down to the TCP level?
It's been a while since I battled this but IIRC, you can set uncanonicalized headers on requests you serialize yourself (for egress) with a simple workaround (directly add the header to the request's header map rather than using the setter function), but if you use Go's default http handler libraries, it "helpfully" canonicalizes headers for you when it deserializes incoming requests and then invokes your http handler. So you are unable to access the original casing that way, unless you instead use a TCP server.
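For anyone who hasn't hit this, a minimal sketch of the behavior (the header name is made up):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := make(http.Header)

	// Set() canonicalizes the key: "x-custom-ID" is stored as "X-Custom-Id".
	h.Set("x-custom-ID", "1")

	// Writing into the map directly keeps the exact casing on the wire for
	// requests/responses you serialize yourself. Incoming requests parsed by
	// net/http have already been canonicalized, so this only helps egress.
	h["x-custom-ID"] = []string{"2"}

	for k := range h {
		fmt.Println(k)
	}
	// Prints both "X-Custom-Id" and "x-custom-ID" as distinct keys.
}
```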
> The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g. word processor), whereas systems programming aims to produce software and software platforms which *provide services to other software*, are performance constrained, or both (e.g. operating systems, computational science applications, game engines, industrial automation, and software as a service applications).
> It certainly isn't viable as systems programming language
It is perfectly viable as a systems programming language. Remember, systems are the alternative to scripts. Go is in no way a scripting language...
You must be involved in Rust circles? They somehow became confused about what systems are, just as they became confused about what enums are. That is where you will find the odd myths.
It’s all admittedly a somewhat handwaving discussion, but in ‘systems programming’ ‘systems’ is generally understood to be opposite to ‘applications’, not ‘scripts’.
Indeed - I’ve seen this refrain about “systems programming” countless times. I’m not sure how one can sustain the argument that a “system” is only an OS kernel, network stack or graphics driver.
Not sure I would agree with the community leading aspect. It still feels like Google decides.
My particular point would be versioning. At first Go refused to acknowledge the problem. Then, when there was finally growing community consensus, Go said forget everything else, now we are doing modules.
I also recall the refusal to make monotonically-increasing time a public API until Cloudflare had a leap-second outage.
I think Go's language leadership is one of the worst if not the worst I've ever seen when it comes to managing a language community/PR. Both Ian and Rob come off as dismissive of the community and sometimes outright abrasive in some of the interactions I've seen. Russ Cox seems like a good person, though.
They probably think being hardheaded "protects" the language from feature creep and bad design, but it has also significantly delayed progress (see generics) and generally made me completely turned off from participating in language development or community in any meaningful way, even though I actually like the language. I think there are ways to prevent feature creep and steer the language well without being dismissive or a jerk.
Personally their handling of versioning, generics and ESPECIALLY monotonic time (in all 3 cases, seemingly treating everyone raising concerns about the lack of a good solution as if they were cranks and/or saying fix it yourself) definitely soured me on Go and I would never choose it for a project or choose to work for a company that uses it as language #1 or language #2.
It just left a bad taste in my mouth to see the needs and expertise of actual customers ignored by the Go team that way since the people in charge happened to be deploying in environments (i.e. Google) where those problems weren't 'real' problems
Undeniable that people have built and shipped incredible software with it, though.
> It just left a bad taste in my mouth to see the needs and expertise of actual customers ignored by the Go team that way since the people in charge happened to be deploying in environments (i.e. Google) where those problems weren't 'real' problems
I feel for this, but only to an extent. It's hard to work in any service industry and retain any notion of, "the customer intelligently knows what they want," as a part of your personal beliefs. At the end of the day, you had an idea for a product, and you have to trust your gut on that product's direction, which is going to make some group of people feel a little unheard.
Backend people use Go in my company. They do great things with it. It works well enough when the interface between a Go program and another one is a socket kind of thing.
But we also have a couple of system utilities for embedded computers written in Go. I still get frustrated that I have to go and break my git configuration to enable ssh-based git clones and set a bunch of environment variables for private repos. Then there is CGO stuff like reading comments as code interfaces. Those things are an incredible waste of time for the embedded developers, and they make it harder for no reason to onboard people. Go generally spits out cryptic errors when building private repos like those.
I always wanted and still want to create a wrapper that launches a container, applies whatever "broken" configuration makes the Go compiler happy, figures out file permissions and runs the compiler. The wrapper should be the only Go executable on my host system, and each repo should come with a config file for it.
> I still get frustrated that I have to go and break my git configuration to enable ssh-based git clones and
Just curious.. But why would you disable ssh-based git authentication? It's significantly more convenient when interacting with private repositories than supplying a username and password to https git endpoints.
> set a bunch of environment variables for private repos.
Set up a private Go module proxy. Use something like Athens. The module proxy can handle authentication to your private module repositories itself, then you just add a line in your personal bashrc that specifies your proxy.
In general I don't have complaints about the things you take issue with, so I'll pass on those.
> Just curious.. But why would you disable ssh-based git authentication?
I don't disable it. However, not every git repo requires ssh to pull. When working with other languages, if there is a library that I purely depend on, it is perfectly okay to use https only and I use https.
However, to use private repos with Golang, one has to modify the global git configuration to reroute all https traffic to ssh, because Golang's module system uses only https and the private repos are ssh-authenticated. There is no way to specify which repo is ssh and which repo is https. The last time I used Go, it was at 1.19.
> Set up a private Go module proxy. Use something like Athens. The module proxy can handle authentication to your private module repositories itself, then you just add a line in your personal bashrc that specifies your proxy.
Why should we put more things on our stack just to make a language which claims to be modern work? Why do we have to change the global configuration of a build server to make it work? Rust doesn't require this. Heck, our Yocto bitbake recipes for C++ can do crazy things with URLs, including automatically fetching submodules.
Maybe it would make sense to make that change if we used Go everyday but we don't.
> But why would you disable ssh-based git authentication?
Ask the Go developers. AFAIK the only package manager where I have to change my global git configuration to make it work. Even the venerable CPAN and tlmgr behave better.
> Then there is CGO stuff like reading comments as code interfaces.
That's not exactly novel, and while I agree that it's meh, what really grinds my gears is the claims / assertions that Go doesn't have pragmas or macros, while they're over there using specially formatted comments as exactly that, like it's 2001-era Java.
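For anyone who hasn't seen the pattern being complained about, here's a minimal sketch (plain cgo plus a compiler directive, nothing project-specific):

    package main

    import "fmt"

    // The comment block immediately above `import "C"` is cgo's "preamble":
    // it is compiled as C source even though, syntactically, it is just a comment.

    /*
    int add(int a, int b) { return a + b; }
    */
    import "C"

    // Compiler directives are also just specially formatted comments:
    //go:noinline
    func addViaC(a, b int) int {
        return int(C.add(C.int(a), C.int(b)))
    }

    func main() {
        fmt.Println(addViaC(1, 2))
    }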
Anyone who's run an open source project is used to getting feature requests or complaints from groups like:
* people who are merely interested but have no plans to use your project
* people with strong opinions not backed by actual experience
* people with a specific interest (like a new API or feature) who want to integrate it into as many projects as possible
From a naive perspective, it makes sense to treat a request like 'we need monotonic time' as something that doesn't necessarily have any merit. The Go team are very experienced and opinionated, and it seems like it was a request that ran against their own instincts. The design complication probably was distasteful as well.
The problem is, the only reason they never needed monotonic time in the past was that many of them spent all their time working in special environments that didn't need it (Google doesn't do leap seconds). In practice other people shipping software in the wider world do need it, and that's why they were asking for it. Their expertise was loudly disregarded even though the requests came with justification and problem scenarios.
For anyone not familiar with the monotonic time issue, the implementation was found to be incorrect, and the go devs basically closed it and went “just use google smear time like we do lol, not an issue, bye”.
It did eventually get fixed I believe, but it was a shitty way of handling it.
Even the "fix" is... ugh: instead of exposing monotonic time, time.Time contains both a wallclock time and an optional monotonic time, and operations update and use "the right one(s)".
Also it's not that the implementation was incorrect, it's that Go simply didn't provide access to monotonic time in any way. It had that feature internally, just gave no way to access it (hence https://github.com/golang/go/issues/16658).
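For reference, my rough understanding of how it works since Go 1.9:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()            // carries a wall-clock reading plus a monotonic reading
        time.Sleep(10 * time.Millisecond)

        fmt.Println(time.Since(start)) // uses the monotonic reading, so wall-clock jumps don't skew it
        fmt.Println(start.Round(0))    // Round(0) strips the monotonic reading, leaving wall time only
    }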
I mostly loved what they did with modules/package mgmt. I think SIV was a mistake, but modules was miles better than the previous projects, even with SIV. Some people seemed to take it very personally that the Go team didn't adopt their prior solution, but idk why they expected the Go team to use their package manager. I think the SAT-style package manager proposed would have created a lot more usability problems for developers and would have been much harder to maintain.
They took it personally because the Go team led the community and those project leaders on for years, saying it would be looking, learning, and communicating...
And then dropped the module spec and implementation and mandated it all in about two days. With no warning or feedback rounds or really any listening at all, just "here it is, we're done", out of nowhere.
They have every right to be personally insulted by that.
I was a Java dev and love using Go now, but I have to say I'm not sure if many of my Ex-Java-Colleagues would like Go. Go is kind of odd in that even when it was new, it was kind of boring.
I think a lot of people in the Java world (not least myself) enjoy trying to refactor a codebase for new Java features (e.g. streams, which are amazing). In the Go world, the enjoyment comes from trying to find the simplest, plainest abstractions.
Isn’t the entire language designed explicitly to prevent programmers from building their own sophisticated abstractions that could confuse other programmers who don’t understand that other person’s code? As I understand it, if you can read Go and understand basic programming you should be competent with Go, and if you know your algorithms you should be proficient.
I hated old Java, but the modern language isn’t as bad now: some people have added better syntax shortcuts, the libraries are nearly twenty years more polished, and the IDE can nearly half-write my code for me, so the boilerplate and mind-numbing aspect isn’t so bad… I loathe Go because using it feels like programming with my hands tied behind my back, typing on a keyboard with sandpaper keycaps. Despite that, I didn’t bother “learning” Go; I could just read it based on my Python/C/Basic/Java/C# experience, without needing any extra learning.
K8s just so happens to be coded in Golang. A quick look at that overall codebase should be enough to disabuse people of this notion that Golang developers cannot possibly come up with confusing or overly sophisticated abstractions.
My experience with reading Go is that the language not giving tools to build good abstractions has failed to stop developers from trying to do so anyway. There's never a line of code where I just plain don't know what's even going on syntactically as some languages can have, but understanding what it's actually doing can still require hopping through several files.
In short: a simple (programming) language does mean that every small part/line is simple. But it doesn't mean that the combination of all parts/lines is simple. Rather the opposite.
Very true! I think a lot of the accidental complexity of early Java systems was rooted in the not-so-powerful language. If the language is too powerful (like Scala 2), developers do insane things with it. If the language is not powerful enough, developers create their own helpers and tricks everywhere and have to write a lot of additional code to do so.
Just compare Java streams with how collections are handled in Go and scratch your head at how someone can come up with such a restricted language in this century.
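To make the comparison concrete: the hand-rolled Go equivalent of a one-liner like users.stream().filter(User::isActive).map(User::getName).toList() ends up looking roughly like this (the User type is just an illustration):

    package main

    import "fmt"

    type User struct {
        Name   string
        Active bool
    }

    // activeNames spells out by hand what a filter/map pipeline expresses in one line.
    func activeNames(users []User) []string {
        names := make([]string, 0, len(users))
        for _, u := range users {
            if u.Active {
                names = append(names, u.Name)
            }
        }
        return names
    }

    func main() {
        fmt.Println(activeNames([]User{{"a", true}, {"b", false}}))
    }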
> and have to write a lot of additional code to do so.
And most importantly: you have to read a lot of code like this, and understand its assumptions, failure modes, runtime behavior and bugs, which are different every time, instead of just reading "ConcurrentHashMap" and being done with your day.
Eh, not really. Go’s philosophy around abstractions is quite poor. Duck typing begs engineers to create poor abstractions, such that simply reading a codebase does not necessarily lead to understanding. The bolted-on generics implementation actually makes this worse.
> As a Java dev, I love boring. That's why I picked Java. Boring means less outages.
This is why I personally love Go too :)
There's very little room for fancy tricks, in most cases there is just one way to do things. It might be verbose, but writing code is the least time consuming part of my job anyway.
> Go is kind of odd in that even when it was new, it was kind of boring.
Java was designed to be boring, too. That’s why, for example, it doesn’t have unsigned integers: it means programmers need not spend time choosing between signed and unsigned integers.
> > Go is kind of odd in that even when it was new, it was kind of boring.
> Java was designed to be boring, too.
Which is something of an irritation and bemusement to me; I remember when Java came out, and the pitch at the time was that it was written as a simple language that average programmers could easily use. And it was SOUNDLY SHIT ON for that. Who would want to use something for... pish posh, AVERAGE programmers!?!
Then Go essentially did the same marketing move, and this was seen as some glorious genius big brain move from on high.
Yeah, Java has been trying to add every feature under the Sun recently and it's definitely not a boring language anymore (since Java 21 at least, it's impossible to claim otherwise with things like pattern matching being in the language).
As a Java guy, I think this is looking like a desperate attempt to remain relevant while forgetting why the language succeeded in the first place.
That’s an absolutely terrible take. Java is still very, very conservative with every change, and new features almost always have only local behavior, so not knowing them still lets you fully understand a program.
Like, records are a very easy concept, fixing the similar feature in, say, C#, where they are mutable. Sealed classes/interfaces are a natural extension of the already existing final logic: they just add a middle option between no other class and every other class being able to inherit from a superclass.
C# records default to immutability. However, struct records, being a lower-level construct, default to mutable (which can be changed with the readonly keyword):
record User(string Name, DateOnly DoB); // immutable
record struct Cursor(int X, int Y); // mutable
readonly record struct Point(int X, int Y); // immutable
Almost every Golang program I've seen was ugly. It's strange, given that they designed the language from scratch: all the ugliness was ingrained in its structure from day one.
> Different people different taste, different context, different standards of beauty.
Every statement about aesthetics is subjective, you don't have to remind me of that. BTW, what did YOU write about Amazon shows 6 days ago?
No one was pontificating about your opinion, right?
> It's interesting to compare Dart, which has zero uptake outside Flutter
Caveat: I work on Dart.
I don't see that that's a very damning critique of Dart. Every language needs libraries/frameworks to be suited for a domain. Flutter is a framework for writing client apps in Dart. Without Flutter, no matter how much you like Dart the language, you'd be spending a hell of a lot of time just figuring out how to get pixels on screen on Android and iOS. Few people have the desire or time for that.
Anyone writing applications builds on a stack of libraries and frameworks. The only difference I see between Go and Dart with Flutter is that Go's domain libraries for networking, serialization, crypto and other stuff you need for servers are in the standard library.
Dart has a bunch of basics in the built in libraries like collections and asynchrony, but the domain-specific stuff relies on external packages like Flutter.
That's in large part because Dart has had a robust package management story from very early on: many "core" libraries written and maintained by the Dart team are still shipped using the package manager instead of being built-in libraries because it makes them much easier to evolve.
I prefer that Flutter isn't baked into Dart's standard libraries, because UI frameworks tend to be shorter-lived than languages. Flutter is wonderful, but I wouldn't be surprised if twenty years from now something better comes along. When that happens, it will be easier for Dart users to adopt it and not use Flutter because Flutter isn't baked in.
I don’t disagree with all that but it seems tangential to the point being made, that people just aren’t using Dart except for Flutter apps. So compared to Go it’s very much a niche language (although maybe it’s a really big niche, I don’t know).
Go is used for command-line tools too, e.g. esbuild.
It's a question of whether the tail is wagging the dog. Flutter is more important than Dart, unless Dart finds a way to expand into another niche. I don't think any one Go framework is bigger than the language itself (even if you were to include the standard library networking utils as a framework).
Hmm, I can't quite tell if we're disagreeing or not!
I guess my point is that, yes, in each case there's an interconnected set of tools, and in the case of Go it's called "Go" or "Golang". It's a GC language with good kernel bindings but poor interop otherwise, so it's good for low-level stuff that runs directly on top of a kernel, like network servers and CLI tools.
In the Dart case the interconnected set of things is called "Flutter" and it's a cross-platform UI toolkit that happens to use its own language. Exactly the same category as Qt, in fact -- I can't even remember what Qt's special C++ variant is called, if it even has a name. I'm not aware of Qt's C++ extensions being used anywhere else.
You could definitely argue that they're both in a niche, and the UI niche is a nice big one. It feels like Dart could expand to other areas (more easily than Go could be used for a UI) but for whatever reason it hasn't. Similar to server-side Swift.
> Go was probably the key technology that migrated server-side software off Java bloatware to native containers
Interesting point of view - Golang might be pithily described as "Java done right". That has little to do with "systems programming" per se but can be quite valuable in its own terms.
Java has a culture of over-engineering, to the point where even a logging library contains a string interpolator capable of executing remote code. Go successfully jettisoned this culture, even if the language itself repeated many of the same old mistakes that Java originally did.
> Java has a culture of over-engineering [which] Go successfully jettisoned
[looks at the code bases of several recent jobs]
[shakes head in violent disagreement]
If I'm having to open 6-8 different files just to follow what a HTTP handler does because it calls into an interface which calls into an interface which calls into an interface (and there's no possibility that any of these will ever have a different implementation) ... I think we're firmly into over-engineering territory.
Java is a beautiful and capable language. I have written ORMs in both Java and Go, and the former was much easier to implement. Java has a culture problem though where developers seem to somehow enjoy discovering new ways to complicate their codebases. Beans are injected into codebases in waves like artillery at the Somme. Errors become inscrutable requiring breakpoints in an IDE to determine origin. What you describe with debugging a HTTP handler in your Go project is the norm in every commercial Java project I have ever contributed to. It's a real shame that you are seeing these same kinds of issues in Go codebases.
> But don't those exist primarily for unit testing?
I believe that's why people insert them everywhere, yes, but in the codebases I'm talking about, many (I'd say the majority, to be honest) of the interfaces aren't used for testing because they've just been cargo-culted rather than actually considered.
(Obviously this is with hindsight - they may well have been considered at the time but the end result doesn't reflect that.)
It's indeed horrible when debugging. OTOH, there's merit to the idea that better testing means less overall time spent (on either testing or debugging), so design choices that make testing easier provide a gain -- provided that good tests are actually implemented.
Agree. Open source OAuth go libraries have this too. It's like working with C++ code from the bad old days when everyone thought inheritance was the primary abstraction tool to use.
Interface, actual implementation, Factory, FactoryImpl, you get the idea.
Java lends itself to over-engineering more than most languages. Especially since it seems that every project has that one committer who must be getting paid per line and creates the most complex structures for stuff that should've been a single static function.
Nice. My reply would have been something like: it combines the performance of Lisp with the productivity of C++. These days Java the language is much better though, thanks to Brian Goetz.
The performance of the JVM was definitely a fair criticism in its early years, and still is when writing performance-critical applications like databases, but it's still possibly the fastest managed runtime around, and is often only marginally slower than native code on hot paths. It seems the reputation has stuck, though, to the point that I've seen young programmers make stock jokes about Java being slow when their proposed alternative is Python.
Yes, it's possible to write Java without any boxing of primitives or garbage collection, but one can't use any of the standard libraries, and it's not really Java one is writing but a very restricted subset. I don't think these benchmarks are particularly indicative of real-world performance. But of course Java is still hundreds (thousands?) of times faster than Python.
It’s okay to do some allocations - they can be scalar-replaced onto the stack via escape analysis, and even if they’re not, the cost is negligible. The problem is mindless allocation, not allocation itself.
> it combines the performance of Lisp with the productivity of C++
Is that supposed to be a jab? Because IME SBCL Lisp is in the same ballpark as Go (albeit offering a fully interactive development environment), and C++ is far from being the worst choice when it comes down to productivity.
Hopefully you agree Lisp is more productive than C++? Lisp is however not quite fast or efficient enough to displace C++ completely, mainly because, like Java and Go, it has a garbage collector. C++ was very much the language in Java's crosshairs. Java made programming a bit safer, nulls and threads notwithstanding, but was certainly not as productive as Lisp. Meanwhile Lisp evolved into Haskell and OCaml, two very productive languages which thankfully are inspiring everyone else to improve. Phil Wadler (from the original Haskell committee) has even been helping the Go team.
I consider that a bad practice, because it doesn't make things obvious. I guess it works so well in Go, because the language itself is small, so that you don't have to remember much of these "syntax tricks". Making things explicit, but not too verbose, is the best way in my opinion. JetBrains has done amazing work in this area with Kotlin.
for (item in collection) {
...
}
list.map { it + 1 }
fun printAll(vararg strings: String)
constructor(...)
companion object
I like the `for..in` which reads like plain English. Or `vararg` is pretty clear - compare that to "*" and the like in other languages. Or `constructor` can not be more explicit, there is no need to teach anyone that the name must be the same as the class name (and changed accordingly). Same is true for companion object (compare with Scala).
I've always found it eye rolling how often this is given as some sort of "mic drop" against Java. Yeah it's a little weird having to have plain functions actually be "static methods", but it's a very minor detail. And I really hope people aren't evaluating their long-term use of a language based on how tersely you can write Hello World
From looking at what the Go team had to say about Go in its earliest days, Go had very little to do with Java, and they weren't very concerned with fixing Java's issues.
The "Bloated Abstractions" issue in Java is more of a cultural thing than an issue of the language. You could even say it's partially because early Java (especially before Java 1.5) was too much like Go!
Java used to have the same philosophy around abstractions, and Sun/Oracle were pretty conservative about adding new language features. To compensate for the lack of good language-level abstractions, Java developers used complicated techniques and design patterns, for example:
1. XML configuration, because there were no annotations.
2. Annotations, because there were no generics and closures.
3. Observers/Visitors/Strategies/etc. because there weren't any closures.
4. Heavy Inheritance, because there was no delegation.
5. Complicated POJOs and beans, since Java didn't have properties or immutable records.
Many other software development cultures lived through similar limitations without evolving to the levels of abstraction astronaut achievement award of Java.
Very true, and Go (or bash for that matter) is proof that language limitations do not mandate complexity. Complexity is mandated by perceived need and culture. You can easily see how complexity would play out in Go by looking at Kubernetes. Most projects did not fall into the complexity trap, but there are many differences in form, function and context:
- Early Java projects before Java EE (like Applets and early GUI apps) did not have this level of complex abstraction. The code wasn't great (old Java APIs like StringBuffer and Date were often quite horrible), but it was simple.
- Java started getting complex with J2EE. J2EE was strongly motivated by enterprise requirements for interoperability and dynamically configurable and interchangeable components.
- Another source of complexity was the popularity of XML and the widespread belief (back in the early 2000s) that moving part or all of your business logic to XML was a good thing.
- And then there was the design patterns obsessions, where design patterns transformed from being a common pattern observed in code into something that should be emulated.
- Most of the early J2EE complexity is dead (EJBs, XML configuration, CORBA, SOAP), but many users don't want to give up component interoperability and powerful dynamic configuration. That's why Java frameworks like Spring, Java EE and even modern frameworks like Quarkus or Micronaut have all their annotations and implicitness.
- Go was luckier(?) to be born in a different time and popularized in a different context.
- There was no top-down enterprise-oriented backing for Go (even within Google, it was a bottom up project).
- The Go compiler, runtime and libraries were fully open-source from day one, under a permissive license. Third party libraries were also almost universally open source. The existence of open source culture made concerns about vendor lock-in moot, and interchangeability was not a thing.
- The XML hype died and there was a consensus that component wiring should be done in code.
- Many of us got sober on design patterns. Peter Norvig's critique[1] gained traction outside of the LISP community and the anti-design pattern view became dominant within the dynamic language community as well, even in strongly OO languages like Ruby. This is the community that was feeding Go.
- Most servers written in Go were just smaller in scope than the equivalent Java projects. This has many causes (less top-down big-rewrite enterprise projects, microservices gaining popularity, UI logic usually moving to front-end or mobile app).
I believe even the language's own designers would agree with that sentiment. There's just generally a lot of things about Go that are great for low-level microservices but not great for 1M+ line of code business applications maintained by large teams.
I can't speak for others, but personally if I'm writing software with complex business logic, I'd want null safety, better error handling, a richer type system, easier testing/mocking... I've also never liked that a panic in one goroutine crashes the whole application (you can recover if it's your own code, sure, but not if it happened in a goroutine launched by some library).
I'd disagree with most of that, but the panic in goroutines really hits home. It's so annoying to have to remember to implement recover in every goroutine you start just to avoid crashing your application. I don't get why there's no global recover option that can recover from panics in goroutines as well.
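The usual workaround is a wrapper that every goroutine has to be started through; a rough sketch (safego and the logging policy here are made up for illustration, not any standard API):

    package main

    import (
        "log"
        "time"
    )

    // safego starts f in a goroutine and converts any panic into a log entry
    // instead of crashing the whole process. recover only works from inside
    // the panicking goroutine, which is why the deferred handler lives here.
    func safego(f func()) {
        go func() {
            defer func() {
                if r := recover(); r != nil {
                    log.Printf("goroutine panicked: %v", r)
                }
            }()
            f()
        }()
    }

    func main() {
        safego(func() { panic("boom") })
        time.Sleep(100 * time.Millisecond) // give the goroutine time to run
    }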
Goroutines are expected to communicate on channels. If one goroutine simply vanishes, then its counterparties on any channels will deadlock - either waiting for messages that never come, or trying to send messages to it after channel buffers fill up.
Crash is significantly easier to detect and understand than deadlock.
"Too low level" "lacks the power" - I don't understand what this means. What are things that are hard to do in Go business applications that other languages do better?
Here is an example: Go lets structs be passed by value or by reference. The programmer needs to decide, and that adds complexity that is largely irrelevant for modeling complex business logic. Java does not provide a choice, which keeps it simple.
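A minimal sketch of the choice in question (Account is just an illustrative type):

    package main

    import "fmt"

    type Account struct{ Balance int }

    func depositByValue(a Account, amt int)    { a.Balance += amt } // mutates a copy; caller unaffected
    func depositByPointer(a *Account, amt int) { a.Balance += amt } // mutates the caller's value

    func main() {
        acct := Account{}
        depositByValue(acct, 100)
        fmt.Println(acct.Balance) // 0
        depositByPointer(&acct, 100)
        fmt.Println(acct.Balance) // 100
    }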
Simple, but with an inherent performance issue and GC load; note also that Java's Valhalla project adds value types with more stack based semantics, so there too you as a developer get a choice.
It lacks modeling capability that you'd find even in languages like Java and C#. Enums, records, pattern matching, switch expressions, and yes even inheritance where it makes sense.
Go has pretty powerful composition, reuse, higher-order functions etc. for dealing with byte arrays and streams. Not so much for business domain entities.
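For example, what passes for an enum is an integer constant block like the sketch below; nothing stops out-of-range values, and switches over it aren't checked for exhaustiveness the way a real sum type would be:

    package main

    import "fmt"

    type Status int

    const (
        Pending Status = iota
        Active
        Closed
    )

    func main() {
        s := Status(42) // perfectly legal, even though no such variant exists
        switch s {      // the compiler does not warn about unhandled variants
        case Pending:
            fmt.Println("pending")
        }
    }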
But then again the internet is everywhere now: desktop, servers, watches, washing machines, industrial systems, sensors ... So "internet language" is a somewhat pointless term.
That term isn’t meant to include mobile apps, desktop apps, web apps (even though those all use the internet, of course). Nobody is using Go for any of those, as far as I know.
So I think it is a useful term, and captures the things Go is good at surprisingly well.
> Nobody is using Go for any of those, as far as I know.
Quite a few of us are, fwiw. I personally have done all three, Tailscale uses Gio UI for their mobile apps, and a fair amount of games have used https://ebitengine.org to release for Xbox/Nintendo Switch/Mobile/Webassembly/Cross-platform PC.
I actually kinda thought that Google should have made Go the default language for android. It could have been a compelling argument to the rest of the world as a reason to use it as a systems language.
It feels like they didn't want to dogfood it outside of their servers, though.
Yep, I'd say one of the things they could have done better is making this distinction clearer to people. I spent multiple years being confused about what made Go a "systems" language, when it didn't seem very good for that at all. When all the devops / infrastructure tooling started being written in it, its niche suddenly became more clear to me.
I picked up Go in 2012 as a Python dev who needed to do some bit twiddling over the wire for Modbus. I never shipped that code, but it blew my mind how easy it was to just twiddle those bits and bytes, and it just worked.
A decade later and a couple almost full time Go jobs under my belt and it still surprises me how well most things Just Work™.
I love the Go language and I love the Go community.
I appreciate what Rob, Ian, Russ and the others do for Go, and I appreciate that this talk / blog is honest about the "bumps in the road" working with the community. There's not much point in beating a dead horse around this, but having lived through it I find it very hard to believe they didn't know exactly how they were behaving, especially in regards to the package management debacle. Nevertheless, the blog is also correct that we have landed at a very good solution (Drew's legitimate complaints aside).
Here's to another 10 years of Go and the inspired / similar languages (Zig, Deno, etc) and hoping we continue to grow as a healthy community.
My favorite thing about the core Go team is their willingness to say "No" (to all sorts of stuff) and "Wait for the right implementation" (for generics).
I'm a computational biologist rather than a programmer, so my use of Go waxes and wanes, but when I come back to Go, my code compiles and the language works the way I expect.
That being said, I do appreciate Rob Pike's willingness to admit mistakes on the learning curve on community engagement without capitulating on adding all the shiny objects.
Wrt. generics they followed in Java's footsteps: they asked the PLT community to come up with a reasonably elegant model that would mesh well with the rest of the language, and then largely stuck to that.
It took years to root out and torch flawed APIs from the JVM ecosystem. After that example, it’s hard to defend neglecting the problem again and launching with no solution.
If the cost of adding generics later is "existing code still works, and now this can work too" versus "the old version of generics is flawed, burn everything down and use this version instead" or "generics didn't work out the way we wanted, use this new thing that's totally-not-generics-wink-wink"...
Well, I'd say waiting is the right call. You're already used to adding three extra lines of boilerplate after every single line of code for error handling, living without generics wasn't that hard.
Hell, I recall half the go community was convinced they weren't needed at all by the time they came around.
There's a gulf between "not launching perfectly" and launching with obvious, blatant deficiencies.
Go did the latter. Generics were always going to have to be implemented eventually, at least two languages in basically the space Go was targeting had had to bite that bullet in the decade before it was published. Support for third-party external dependencies only made that more dire.
Instead of doing the work up-front and launching with generics, they decided to launch with ad-hoc generic builtins and "lalala can't hear you" for a decade, then finally implement half-assed generics.
The idea that it would be better if Go had waited an additional year or two to launch with generics is laughable. That extra year probably makes the difference for the language's success.
Why not? I think his reasoning makes sense. Unless you mean that having a language with better design (but being released later) is not an improvement.
My argument is that timing is very important to adoption. Unix is not the best OS to have been designed, by far, but it was the first free one. If Go had been delayed, something else may have filled the slot, and there's no reason to believe it would have been a better something else. I.e. if Go had released 2 years later, but with generics, it would have been too late, and no one would have cared.
I'm not sure the creators of Unix would have agreed. Unix was a step in a very different direction at the time. It was a reaction to baroque operating systems that did a lot of stuff and were quite complex. The key to Unix is simplicity and, if you shave it down, that it was really a system interface definition - which enabled other people to create Unixen by offering the same system call interface with the same semantics.
It was neither free, nor do I think the timing played much of a role since there wasn't any comparable OS being made at the time. The important bit of Unix was a set of key ideas.
I think that generics is really difficult to do well when added to a language later. At the very least you will have a lot of pre-existing code, including the standard library, that would have benefitted from generics, but doesn't because it was written before generics existed. And you will almost certainly have cases where generics don't mesh well with other features. I think that if go was designed from the beginning with generics, then generics probably would have worked better in go (and similarly for java).
There shouldn't be anything in the chosen implementation that prevents generics on methods. The work just hasn't been done yet. There are only so many hours in the day. Feel free to jump in if you have some to spare.
A wrong implementation would hamstring making such improvements in the future. It is possible that will still happen in some unforeseen way anyway, but the earlier proposals visibly exhibited problems from the get-go. They were clearly wrong. So far, this one does look right.
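For context, a rough sketch of where the current implementation stands, as I understand it: generic functions and types work, but a method can't introduce type parameters of its own (Set and Map here are just illustrative names):

    package main

    import "fmt"

    // Generic functions and types are supported.
    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, len(in))
        for i, v := range in {
            out[i] = f(v)
        }
        return out
    }

    type Set[T comparable] struct{ items map[T]struct{} }

    // The following is not allowed today, because methods cannot declare
    // additional type parameters beyond those of their receiver:
    //
    //   func (s Set[T]) Map[U comparable](f func(T) U) Set[U] { ... }

    func main() {
        fmt.Println(Map([]int{1, 2, 3}, func(i int) string { return fmt.Sprint(i * 2) }))
    }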
Currently I've been using different AI tools (Bard, GPT-4) to just straight-up convert my old python utilities to Go.
There are a few that worked right out of the box, for a few I've had to adjust stuff mostly because of the AI model's information about APIs being a bit out of date.
But the fact that I can just scp a program to a server and it Just Works is amazing compared to the "what's the current venv system du jour" dance I had to do with Python every time.
Being that Deno is tooling rather than a language, I think it is safe to say it is inspired by / comparable to Go tooling. To me it feels like writing JS/TS with Go tooling.
That's nice, but also rather self-congratulatory.
I was expecting some kind of acknowledgment of the deeper issues in the language.
But perhaps that's the central issue, that the language is perfect in their eyes.
I'm the problem.
Well, okay then.
I can't recommend the language, because of its type system, the error handling, the unsafe concurrency, the simplistic syntax, nil, default zero values, and the large number of mainstream packages that are abandoned.
I now use Rust as my main language. It has a flourishing ecosystem and is visionary in so many ways that Go is not.
Put more pointedly, I'm sure Go had its day, when it was competing with PHP as a backend language.
I don't think this comment really contributes to the discussion. It comes across as Rust advocacy without having any tangible points to make.
First paragraph: "I don't like it". Second paragraph: snide comment. Third paragraph: "I don't like it". Fourth paragraph: "Rust is better". Fifth paragraph: snide comment.
There's one thing in there worth discussing IMO: the focus on zero values (incl. nil).
That's the Go mistake, the one that causes most of the issues for the intended audience, the one that can't really be fixed. It's a shame Pike doesn't really discuss this, even if it's hopeless now.
The rest is just people projecting and self-selecting outside of the intended audience. Don't like it, don't use it, we don't all need to agree with you.
Interfaces and how they ended up limiting generics (parametric polymorphism) are a tradeoff. Structural interfaces (duck typing in an otherwise statically typed, composition-over-inheritance language) are innovative, interesting, and offer many benefits, enough to compensate for any drawbacks. This is mentioned in the talk.
But the fact that anyone can just conjure a zero value out of thin air, and this is fine because it's zero-initialized (a decade after Java had proved this was really not good enough), is pretty inexcusable. And this is not just a "default": it's actually impossible by design to enforce initialization in any way.
Then they did this in a language with pointers, which by necessity are zero initialized to nil, and simply added some timid steps to make nils more useable/useful (like nil receivers being valid). Which unfortunately, in the end, only further complicates static analysis and tooling that might ameliorate the issue.
Finally, if this wasn't enough of a problem, nil panics (any panics, in fact) are a hard crash if a goroutine doesn't handle them, and: it's impossible to add a global handler, it's impossible to prevent goroutines from being created that don't handle panics. So any code that you call can crash your program, and this is considered good form.
If you really feel that zero values are useful enough to justify all this, please explain. Because I just don't see it. This isn't a wildly innovative feature that shapes idiomatic programming in an amazing way. The standard library is full of awkward hacks to make zero values useful (esp. in the face of backwards compatibility), where simple enforced construction would be much better.
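A small illustration of the kind of thing I mean (Cache and NewCache are just illustrative; nothing forces callers to go through the constructor):

    package main

    type Cache struct {
        entries map[string]string
    }

    // NewCache is the "intended" constructor, but nothing forces anyone to use it.
    func NewCache() *Cache {
        return &Cache{entries: make(map[string]string)}
    }

    func main() {
        var c Cache          // zero value conjured out of thin air: entries is nil
        c.entries["k"] = "v" // panic: assignment to entry in nil map
    }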
I love Go. It's my favourite programming tool. I wish I could use it more professionally. But not acknowledging this error, or being dismissive, and doing nothing about it, helps no one.
I like zero values because when I create a struct I can know for certain that accessing a non-pointer field won't crash the program. That's a pretty good value add. My type system also knows this, so I don't have to worry about asserting non-nil values or adding needless if value == nil checks.
I understand that you like zero values as an alternative to initialization of objects as nil like Java does it. I think the parent comment explains pretty well what the issues with this approach are. Taking Rust as an example (because that's what I'm most familiar with), it's possible to simply enforce that variables are initialized explicitly or that safe constructors are provided, avoiding all the nil safety issues.
No they’re not, they’re a horrible decision, and some of the “solutions” I’ve seen for working around them are band-aid-code at best.
The design decisions around zero values infect protobufs too, and they suck to work around. The fact that an empty message can successfully deserialise into any valid protobuf is an insane decision and should have been thrown out long ago.
> The fact that an empty message can successfully deserialise into any valid protobuf is an insane decision and should have been thrown out long ago
The reason for this is that the protobuf wire format is designed for very high entropy: It contains only a minimal amount of metadata and consists mostly of data. This means you can deserialize most wire messages as a different message. This is a tradeoff: smaller message size for loss of schema information. This just means that schemas need to be handled at a higher level. This tradeoff makes some sense if you process millions of protos per second.
BTW: Dismissing a tradeoff like this as insane is derogatory. You can do better
This is the main drawback I also experience with zero values: knowing whether something was set or it's the zero value. Like I said, there are some workarounds, but it's a tradeoff. Perhaps it's a tradeoff you don't like.
1) The article should have acknowledged the issues that a wide part of the community are experiencing.
2) I accept that I'm the problem. But then I must leave.
3) The consequence of their attitude is that I cannot recommend the language anymore. I used to be excited about it, and that disappointment makes me angry.
4) If only Go was trying to be better. Rust is just an example of the visionary leadership that I expect from Go. I want Go to be visionary! Because they got some things right, like fast compilation, cross compilation, simple syntax, and a focus on simple concurrency. But it's like those ideas never developed.
5) Rust is a counterexample, that a language can be visionary, without giving up on its fundamentals.
6) Acknowledgment that Go was the best solution at a time. But also that the time seems to have passed.
All these plan9 scientists love their own brand. I started using Go in 2012, but after they killed deps.dev I gave it up. Some years later, when I wanted to get work done at work, I tried to introduce it on my team, and another engineer spent a good amount of time looking into the language and listed all the reasons why it sucked, and he was right. The main takeaway was: yeah, it's simple, but it does silly things that make it a pain to use (error handling and unused imports, to name a few). I personally like the error handling but hated the type system.
If you use goimports (which also runs gofmt) after commenting out code, you just have to save your file and it will remove any unused imports. There is no reason to go to the extreme extent of compiling your own modified version of go just for this. The tooling is already there.
The OP mentions that the re-import is the problem (in response to your tooling suggestion). If you comment out code while testing, the imports are auto-removed. When you uncomment, you need to add the correct imports again.
I might not have used `goimports` before. I read now that it also auto-imports when you uncomment. That's neat, but it could auto-import the wrong thing, and I'm not sure how it would handle conflicts. It still seems worse than just ignoring my unused imports. Unused imports should be more of a linting thing? If the compiler knows it's unused, I don't see why it can't just ignore it.
Yes, it can import the wrong thing (or version) but it doesn't happen that often in my experience. If you use the libraries in other files it can go off of what is already in your go.mod file.
I agree, I've used Go before I learned Rust and seeing the differences really changed my mind. I used to use OCaml before so I understood the value of Option and Result types over try/catch and `nil`, but I used Go because it was easy. However, that easiness comes at a cost, namely maintenance over time. You want to get it right the first time around and not have to face challenges later on.
Not to be flippant, but I've often heard Go described as taking the programming language advances over the last 50 years and throwing them away.
> but I've often heard Go described as taking the programming language advances over the last 50 years and throwing them away.
That's by design, right? Go is very opinionated. They looked at other languages that they hated, e.g., C++/Java and didn't want to replicate them. But then adding their own mistakes along the way.
The brand new mistake that surprised me was that nil does not always equal nil. So just checking for nil is sometimes not enough; one has to "cast" nil to the type you're expecting. And Goland doesn't catch it. In C++/Java/Python/C, null always equals null. But not in Golang. Shrug.
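For anyone who hasn't hit it, a minimal reproduction of the nil-doesn't-equal-nil surprise (MyErr is just an illustrative type):

    package main

    import "fmt"

    type MyErr struct{}

    func (e *MyErr) Error() string { return "boom" }

    func doWork() error {
        var e *MyErr // nil pointer
        return e     // wrapped into a non-nil error interface value
    }

    func main() {
        err := doWork()
        fmt.Println(err == nil) // false, even though the underlying pointer is nil
    }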
The primary issue I have is that go doesn't make it very easy to write unit tests. You have to use interfaces everywhere just to inject your mocks.
I feel like that's the big mistake they made. Any new language needs to make it super easy to write unit tests without forcing major design decisions that affects development.
> That's by design, right? Go is very opinionated. They looked at other languages that they hated, e.g., C++/Java and didn't want to replicate them.
It’s a pity there’s no other languages in the world that they could have taken good design from, rather than looking at a handful of languages they didn’t like and basically throwing the baby out with the bath water.
I've always thought the best languages were just a step up from another language.
Something like C with garbage collection, that compiled natively and had a sane FFI to C, would be a good addition. Add a decent library to get work done quickly and you've got a great solution.
That's what it felt like Golang was trying to do. I do think it's easier to pick up than C. But it's not something I would say you must write your code in.
People have raised dozens of issues on the Go tracker because they got stumped by this. You don't really need to look hard on a search engine to see thousands of questions/issues.
If you don't believe me, take a project like k8s and, starting from its inception, you can see dozens of issues where people have to explain the difference between a function returning a nil interface and returning an interface that holds a nil value. This is explained a stupendous number of times.
Now repeat this for thousands of Go projects.
Of course you can claim this is not an issue, in the same way that anyone can claim that buffer overflows are not an issue in C/C++ for "real" programmers.
I see people say that they're confused by it a lot, sure. But do the bugs get to production? That was my question. Buffer overflows get to production all the time! But do the concrete-vs-interface nils get to production, or are they caught in dev and then someone opens an issue to ask why it doesn't work? I suspect it's more the latter.
Obviously, it would be better if Go didn't have this problem. If I made Go 2, it would have blank interface be "none" and blank pointer be "null". Even better, I would add a reference type that can't be null and sum types or something like that. But these things are all relative.
In Python, people using "is" for numbers is a problem. But in practice only very junior developers do that and it mostly gets caught in dev. There was the Digg outage caused by a default argument being [] instead of None https://lethain.com/digg-v4/ and that I see as a slightly more serious bug that can slip by without good code review.
Every language has pitfalls and the question is whether they outweigh the other benefits.
> But do the bugs get to production? That was my question.
Yes, in my case anyway. That code didn't have any unit tests around it. Goland didn't flag it. It's a pattern that was "surprising" to me at least. Particularly since it breaks with tradition.
I consider Goland to be invaluable when coding go. But still, it's not perfect.
> There was the Digg outage caused by a default argument being [] instead of None
FWIW Pycharm flags this now, and can fix this automatically. Not sure about VSCode.
I guess the point I'm making is IDE's should find the novice mistakes, and then a lot of my complaints about languages go away.
Yep. Every now and again I look into contemporary Go and try to give it the benefit of the doubt, but it just looks like intentionally going back to Java 5 [1] and touting the lack of features as a feature. What I find surprising is its apparent popularity among people who also like languages like TypeScript and Kotlin, which go in the opposite direction, with high expressiveness.
To be more controversial, its flaws remind me of COBOL, both in the flawed belief that a lack of expressive power makes your programs easier, rather than harder, to read, and in the sense that both languages seemed completely ignorant of any programming language design ideas outside of their bubble and just stayed stuck in the past.
[1] As of 2022. Prior to that it famously didn't even have generics
I agree with the programming language sentiments, but in its defense, Go is very much a language for and by people who don't care about programming languages. I mean this in the nicest way. Rob Pike doesn't care that the concurrency is unsafe or that nil is the billion-dollar mistake. Neither do most users of Go. Does that make Go a good language? No. But that's beside the point. It's a convenient, good-enough language combined with a compelling set of tools that make it easy to use. It's the crocs of programming languages.
Well, some of us kind of care, but not enough to pay the price of ending up using a language where people care more about language design than writing programs. Go is very much a language for getting things done rather than winning beauty contests. And it has proven itself as a productive language.
I've worked with perhaps half a dozen real "language enthusiasts" in my time. People who spend lots of time obsessing over "perfect" language design and non-mainstream languages. People who never stick to a language for more than a year or three, who insist everyone else accommodate their current fascination with some language, and who leave behind codebases that will be hard to maintain because they are a patchwork of languages, not caring that the organization then has to take the time both to train people in those languages and to ensure it has enough people experienced with each language to spend a portion of their time training newbies.
On a few occasions I actually researched their job history and found that these people had a tendency to make a lot of noise, but produce very little of consequence. They'd find jobs at the outskirts of projects where it would be easier to indulge in their interests without clashing with the principals. Most of their code would be gone just months after they left the position.
My advice is that if you care about building stuff, don't hire language enthusiasts.
Okay, circling back here. So your position is that nil safety would reduce the value of Go's "getting things done"? As someone who uses Go in production, I don't agree at all.
No, I'm saying that nil safety is a lower priority for me than overall productivity. If Go had nil safety that would be very nice. But it doesn't. And it doesn't bother me all that much. Go set out to be a practical language for writing servers - not to tick all the boxes.
In the 8 or so years I've used Go for production software, it hasn't actually turned out to be a big problem compared to my experience with C and C++. If it was a huge problem it should have manifested as such. But in my experience, nil errors are extremely rare in our production code.
If this hadn't been the case, we would probably have invested in Rust. But I can't deny that when I look at those of my friends who use Rust, I'm not exactly impressed with what they accomplish. Even a couple of years in, I see them spending a lot of time fighting with the compiler, having to backtrack and re-think what they are doing or, sometimes, having to rewrite/replace code (sometimes third party) that isn't as strict as they wish to be.
Is it worth the extra effort?
That being said, sure, I think Go would be less of a "getting things done" if the maintainers had felt an obligation to add every checklist item to the language from the start. As someone else pointed out, the fact that they had the guts to say no to a bunch of things was a really good thing. Just throwing all your favorite ingredients in the pot and stirring it doesn't guarantee you'll end up with a delicious dinner. There's a lot more to it than that.
I've learned Rust but Rust's user community makes it impossible for me to like the language. This hasn't changed over the years, if at all it has gotten worse. If I need high integrity and safety, I'll use Ada or even Ada/Spark with formal verification. For anything else, Go leads to much more productivity than Rust.
In my opinion, Rust is a prime example of overengineering (just like C++).
I am pretty sure Go is not going anywhere. Pretty much anyone can read Go and understand what is going on which is def. not true for Rust. It's very possible that if Mojo pans out it might be the "mass market" lang. that brings a lot of the Rust goodness to the avg. dev.
I think you're misreading the article. Pike doesn't say that the language is perfect at all. He says that they did better on the community aspects, but they admit the flaws in the language.
> First, what's good and bad in a programming language is largely a matter of opinion rather than fact, despite the certainty with which many people argue about even the most trivial features of Go or any other language.
> Also, there has already been plenty of discussion about things such as where the newlines go, how nil works, using upper case for export, garbage collection, error handling, and so on. There are certainly things to say there, but little that hasn't already been said.
Not a rhetorical question, a genuine question. I don't know, so I'm asking.
nil/null is really problematic, true. But how do other languages handle this? Is it that the program must be statically analyzed to ensure that no nil/null path exists, or are there other solutions as well?
The core problem with null/nil in Go (and Java) is that it is not modeled in the type system. In Java, any reference (which is most types) can be secretly null which is not tracked by the compiler. Go one-ups this and has the same concept for pointers but also introduces a second kind of nil (so nil doesn't always equal nil [0]).
All approaches come down to modeling it in the type system instead of it being invisible.
One approach is modeling it as a sum type [1]. A sum type is a type that models when it's one thing OR another thing. Like it's "a OR b". "null OR not null". So a lot of languages have a type called `Maybe<T>` which models it's a "T OR Nothing". Different languages use different names (like `Option` [2]) but it's the same idea. The key is that null is now modeled using normal / non-special / user-defined types so that it's no longer invisible.
Another approach is using so-called "nullable types" and flow typing [3]. For example, `Foo` is a normal type and `Foo?` is a nullable version of `Foo`. You're not allowed to pass a `Foo?` to a function that requires a `Foo` (the other way is fine). When doing a null check, the compiler narrows a `Foo?` to a `Foo` inside the non-null branch of the check. This is one capability of a more general compiler/language technique sometimes referred to as "narrowing" [4] or "static flow analysis" [5]
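To make the first approach concrete in Go terms, here's a hand-rolled sketch of an Option type using generics. Nothing like this is in the standard library, and without pattern matching it can't give the same compile-time guarantees, but it shows the shape of the idea:

    package main

    import "fmt"

    // Option models "a T, or nothing" as an ordinary type instead of an invisible nil.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get forces the caller to acknowledge the "nothing" case.
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func main() {
        if v, ok := Some(42).Get(); ok {
            fmt.Println(v)
        }
    }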
I know I sound salty here, but 10 years ago I got ridiculed on go-nuts, with dismissive comments from Rob Pike, because I dared to suggest that the way go get and module imports over the wire (1) worked, (2) were advertised in all their docs for beginners, and (3) were subsequently used throughout the community was ultimately harmful / shortsighted.
The way Go's package system works, especially before modules, really feels like a hack over an earlier and even more limited system that was designed to be used entirely inside the Google monorepo and was then made to work outside it. The weird global namespace tree makes sense there, and the emphasis on checked-in codegen also makes sense there when you consider that Google also includes build artifacts in their monorepo.
This was exactly what happened. Rob Pike mentioned in another talk that they overfitted the pre-module system to how Google deals with packages. So I think he/they have conceded this was a mistake
It's interesting that what they came up with is better than what's out there for other languages.
Yeah, you have the "v2" / forever v0 problem. But it's still better than what I need to deal with when using npm or (doing sign of the cross) anything with python.
This seems like a more personal account of the ACM article they published [1]. In both they recognize that they didn't make a great new programming language in terms of a language specification but instead did a great job building up all the things around programming languages that may end up being even more important.
In the submitted article they talk about inventing an approach to using interfaces and also an approach to concurrency. Goroutines are identical to Haskell threads, and interfaces are very similar to Haskell typeclasses (now that they support generic arguments). Haskell's preceded Go's; it's interesting to see procedural programmers independently discover the power of ideas from functional programming.
Go's one language innovation is to not require an interface implementation to declare the interface it implements. This is awful from a safety perspective but in practice it causes few issues and gets rid of awful circular dependency issues experienced in Haskell and now Rust.
I think you're understating how important the fact that interfaces are structurally typed is to the overall effect of the feature on the language and—even more—its idioms and ecosystem.
Go would be a deeply different language if types had to declare the interfaces they implement ahead of time. It's one of Go's main distinguishing features (or at least was until TypeScript came out and also had structurally typed interfaces, for different reasons).
> Go would be a deeply different language if types had to declare the interfaces they implement ahead of time.
GP literally mentions haskell which uses nominative typing and does not require types declaring the interfaces they implement (typeclasses they instantiate) ahead of time.
If anything, Go's solution is worse because you have to conform the interface you declare to whatever already exists on the type. Type classes make the "type class interface" independent from the underlying type. And then it turns out Go's structural interfaces also play hell with generics, leading to the current... thing.
My understanding is that in Haskell, all instances of type classes are explicitly declared. They don't have to be declared at the type declaration, but they must be explicitly declared. Unlike Rust, Haskell does allow orphan instances, so you can approximate some of the flexibility of structural typing, but it's still not structural like interfaces are in Go.
That's a significant difference in the design space. And, in particular, it makes generics harder. With explicit instance declarations, you have a natural place to define the type parameters of the instance itself and then specify how those map to the type arguments of the type class being implemented. With implicit interface implementations, there's no place for that association to be authored.
I'm not saying Go's solution is better or worse, just that it's not them half-assed cribbing type classes. It's a deliberately designed different feature.
(My actual opinion is that I think interfaces in Go are more intuitive for users and easier to learn and use, at the expense of losing some expressiveness for more complex cases. Whether that's the right trade-off depends a lot on the kinds of programs and users the language is targeting. In the case of Go, I think that trade-off makes sense for their goals.)
I get so tired of functional programmers claiming to have invented everything first and assuming that what other languages do are just failed imitations instead of deliberate differences with important subleties they are overlooking. In particular, in these kinds of discussions, the "Haskell/Lisp/Smalltalk did it first" folks rarely take into account trade-offs, usability, or context when evaluating features.
We can't lump Lisp and Haskell into the same category. Whenever Lisp did something first, it was always easy to understand and use, compared to the twisted reinventions. The reinventions are Jack Skellington's imitation of Christmas.
About the only criticism you could lob at the Lisp original feature would be some low-brow grumble about parentheses.
Yes, because naming core operations in the language after assembly instructions in an ancient IBM vacuum-tube computer because they are incidentally how those operations happened to have been implemented once is the height of clarity.
> Go's one language innovation is to not require an interface implementation to declare the interface it implements.
Uh? How is that innovation? 100% of mainstream languages that I can think of that predate Go do this.
Can you name one programming language that we should care about which, once you define an interface, FORCES YOU to provide an implementation of said interface?
The point is that in most languages something doesn't implement an interface unless it declares that it does so; in Java or C#, if you don't explicitly declare that your type implements Writer, then it doesn't implement Writer, even if you implemented all the methods of Writer. Whereas Go offers something similar to e.g. Python's behaviour where things are "duck typed": you don't have to explicitly reference a particular interface, you just implement the right methods. Of course in (traditional) Python that works because the language doesn't have real ("static") types at all. Having "static duck typing" is pretty rare - TypeScript now has it (and Python itself sort of has it), but when Go did it it was something that was pretty much new for mainstream languages (a small sketch follows below).
(IMO it's a misfeature; having explicit interfaces communicates intent and allows you to do things like AutoCloseable vs Closeable in Java - but that's a matter of judgement)
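Here is the small sketch mentioned above, a hedged illustration of Go's structural ("static duck") typing: the type satisfies io.Writer without ever naming it. MyBuffer is an invented example type.

    package main

    import (
        "fmt"
        "io"
    )

    // MyBuffer never declares that it implements io.Writer.
    type MyBuffer struct{ data []byte }

    // Having a Write method with the right signature is all it takes.
    func (b *MyBuffer) Write(p []byte) (int, error) {
        b.data = append(b.data, p...)
        return len(p), nil
    }

    func main() {
        var w io.Writer = &MyBuffer{} // compiles: structural satisfaction
        fmt.Fprintf(w, "hello")
    }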
> Haskell did nominative "interfaces" (typeclasses) and post-creation conformance 20 years before Go happened.
Not the same thing: yes, you can have orphan instances (with a flag), but you still need an explicit typeclass instance somewhere. Whereas Go interfaces are structural rather than nominal. (And much as I might wish otherwise, Haskell barely qualifies as a mainstream language.)
I haven't seen anyone mention Go's issues with crypto yet. After OpenSSH deprecated SHA1, the Go team took a year (!) to add support for SHA2 to x/crypto/ssh [1]. Gitea was one famous victim [2]. Furthermore, it doesn't instill confidence to see a crypto maintainer bashing on GnuPG [3] and trying to discredit Dan Bernstein [4].
The x/ packages languish a lot; the problem is that they follow a development model that slows the rate at which new contributors succeed in getting patches accepted.
The stdlib crypto package I would describe as another of Go's big successes. OpenSSL has been a disaster for a very long time, and Go and perhaps most significantly agl managed to build and ship a very broadly exercised alternative implementation with an average very high quality, particularly from the perspective of having significantly fewer footguns in the public API.
This mentions gofmt as a "what we got right" and I think that's especially worth underscoring.
This seems to many language inventors and proponents like a small thing but it delivers huge value because it eliminates one common bike shedding opportunity entirely from day zero of a Go project. I've seen several newer languages embrace this, either copying it quite intentionally or just figuring hey Go has one so we should make one as well.
I've seen some pretty weird formatting rules but I have very rarely seen rules I couldn't get used to, whereas I have worked on plenty of codebases without enforced formatting rules where as a result it was harder to understand the code.
> the existence of a solid, well-made library with most of what one needed to write 21st century server code was a major asset.
Yes. Go was funded by Google because Google had a problem. They needed to write a lot of server-side application code. Python is too slow, and C++ is too brittle. Go does very well in that niche. A big bonus was that Google people wrote the libraries you need for that sort of thing, and uses them internally. So, when you used a library, it was something where even the error cases were heavily exercised.
I have some technical arguments with the language, mainly around the early emphasis on using queues for everything, including making mutexes out of queues. They got the "green thread" thing right for their use case. The "colors of functions" thing is a problem, and the arcane tricks needed to make async and threads play together in Rust are just too much. Go gives up a little performance for great simplicity.
I'm amused at the old hostility to threads. I started out on UNIVAC mainframes, which had threads starting in 1967. (They were called "activities") By 1972, there were not only user-space threads, but they ran on symmetrical multiprocessors. There was async I/O, with user-space completion functions.
There were built-in instructions for locks. The operating system was threaded internally.
I thought of threads and multiprocessors as normal, and felt the loss of them when moving to UNIX. It was decades before UNIX/Linux caught up in this area. Several generations of programmers had no clue how to use a shared-memory multiprocessor. The early concurrency theory from Dijkstra was re-invented, with different terminology and often worse design than the original. The Go people understood Dijkstra's bounded buffers, and understood why bounded buffers of length 0 and 1 are useful. It was nice seeing that again. With the right primitives, concurrency isn't that hard. If you try to do it by sprinkling locks around, which was the classic pthreads mindset, it will never work right. It didn't help that UNIX/Linux had an early tradition of really crappy CPU dispatchers, so that unblocking a thread worked terribly.
> Yes. Go was funded by Google because Google had a problem.
random HN commentators say things like this a lot because they don't understand how Google works or appreciate how big it is.
Go was written because *Rob Pike* had a problem - he didn't like C++ or Java or Python, and because he's Rob Pike, Google-as-a-company let him go and write a new language for a few years. Larry or Sergey or Sundar or Eric didn't go to Rob and ask him to do anything.
more or less, Go was funded (ie people were allowed to work on it, since that's about the only type of company money that matters in Google Engineering) because Rob Pike asked for it to be funded and Google liked having Rob Pike work there, and thought he'd almost certainly do something interesting.
because Rob Pike is an excellent engineer, Go turned out to be quite good in some ways and a good match for his initial problem - that writing Google prod apps required using C++ or Python or Java. and some other people in Google agreed, and started using Go, and some of those people had power and "encouraged" others to use it instead of Python.
> Yes. Go was funded by Google because Google had a problem. They needed to write a lot of server-side application code. Python is too slow, and C++ is too brittle.
I've said this elsewhere on the thread but will repeat: this doesn't match my memory of working at Google during this time period. This belief feels a bit like "go was for systems programming", a popular meme that doesn't make sense when examined.
In 2009 Python wasn't really used for servers at all at Google outside of a few internal-facing utilities, so that idea can be disposed of immediately (exception: the YouTube acquisition). The biggest Python server developed by Google itself was iirc Mondrian, the code review tool, written by Guido van Rossum himself. It was later replaced by Critique, written in Java.
At the time Google had a fairly strict three languages policy, designed to kill programming language fights. You could use C++, Java or Python. Python was used for scripts, C++ for infrastructure and Java for web servers. There was some overlap: Java was introduced to Google later than C++, and some web servers were still written in C++ because it didn't make sense to try and rewrite them, even though there were some initiatives around trying to do incremental ports to Java that hadn't really taken off. Also some infrastructure servers were written in Java (e.g. a database called Megastore). Core libraries were mostly C++ with JNI and Python bindings.
There was also a loophole in that policy, in that some teams invented their own languages for internal infrastructure, so in practice the Google codebase also had some use of custom config languages and in particular Sawzall, also by Rob Pike. It looked syntactically a bit like Go.
But overall people had a lot of freedom to choose, and new web servers were being mostly written in Java, with new database engines and similar being mostly written in C++.
Writing async code did suck, and Go definitely got that right. But Go wasn't a project initiated by the senior management to solve a problem Google had. That's not because management was afraid of initiating such projects. Huge numbers of internal infrastructure projects were created and staffed by the relevant teams to directly solve developer pain points (e.g. the giant networking upgrades they embarked on at that time). But they were telegraphed in advance.
I don't recall much complaining about this state of affairs. Build times were an issue until they got Bazel/Blaze working with remote build clusters, and at that point you could compile giant codebases in seconds because everything was cached remotely. Local compiler speed became largely irrelevant, especially as javac was very fast anyway. I don't recall exactly when this was, but I developed C++ servers around that era and Pike's 45 minute compile wasn't a common occurrence for me after Blaze caching appeared. I can imagine that would happen if you changed a core library and then recompiled something very high up the stack.
The announcement of Go came as something of a surprise. If it had actually been developed to solve Google's problems it'd have been launched internally first and then developed with internal users for years before being exposed publicly, as was normal for Google. But this one launched to the public first. I recall people in my neck of the woods wondering what it was for: not something you'd expect if it'd really been driven by internal demand.
Interesting bit here about the decision to use Ken Thompson's C compiler rather than LLVM: something that people grumbled about, and that resulted in less optimal generated code (especially in earlier versions). The flip side of that decision is that they were able to do segmented stacks quickly; they might not have done them at all if they'd had to implement them in LLVM and fit the LLVM ABI.
(He cites this as an example of the benefit of that decision, not the only benefit).
That part of the interview is incorrect about LLVM. I implemented segmented stacks via LLVM for Rust. It's actually pretty easy, because there is support for them in X86FrameLowering already (and it was there at the time of Go's release too). If you enable stack segmentation, then LLVM emits checks in the function prolog to call into __morestack to allocate more stack as needed. (The Windows MSVC ABI needs very similar code in order to support _chkstk, which is a requirement on that platform, so the __morestack support goes naturally together with it.)
Actually, getting GDB to understand segmented stacks was harder than any part of the compiler implementation. That's independent of the backend.
What I think the author might be confusing it with is relocatable stacks. That was hard to implement at the time, because it requires precise GC, though Azul has implemented it now in LLVM. Back then, the easiest way to implement precise GC would have been to spill all registers across function calls, which requires some more implementation effort, though not an inordinate amount. (Note that I think the Plan 9 compiler does this anyway, so that wouldn't be a performance regression over 6g/8g.) In any case, Azul's GC support now has the proper implementation which allows roots to be stored in registers.
He didn't say it was not possible, but that it would have required too much effort in modifying the ABI and the garbage collector.
> because it requires precise GC
Which is why they avoided LLVM for a much smaller and easier to manipulate existing compiler. Their point is that it would have slowed things down too much to even try it inside someone else's project. Sometimes "roll your own" is the best idea.
> He didn't say it was not possible, but that it would have required too much effort in modifying the ABI and the garbage collector.
__morestack doesn't really have an ABI. It's just a call to an address emitted in the function prolog. LLVM and 6g/8g emit calls to it the exact same way. I suppose you could consider the stack limit part of the ABI, but it's trivial: it's just [gs:0x18] or something like that (also it is trivial to change in LLVM).
The garbage collector is irrelevant here as the GC only needs to be able to unwind the stack and find metadata. Only the runtime implementation of __morestack has any effect on this; the compiler isn't involved at all.
> Which is why they avoided LLVM for a much smaller and easier to manipulate existing compiler. Their point is that it would have slowed things down too much to even try it inside someone else's project.
I was suggesting that Rob Pike possibly confused segmented stacks with relocatable stacks. Segmented stacks have only minimal interaction with the GC, while relocatable stacks have major interaction.
Assuming good faith, either (1) Rob misremembered the problem being relocatable stacks instead of segmented stacks; or (2) the Go team didn't realize that LLVM had segmented stack support, so this part of the reasoning was mistaken. (Not saying there weren't other reasons; I'm only talking about this specific one.)
> Segmented stacks have only minimal interaction with the GC, while relocatable stacks have major interaction.
Okay.. this is where I'm losing your argument. Can you quantify the difference here between 'minimal' and 'major' from the 2012 perspective this was framed in?
The GC needs to unwind the stack to find roots. The only difference between segmented stacks and contiguous stacks as far as unwinding the stack is concerned is that in segmented stacks the stack pointer isn't monotonically increasing as you go up. This is usually not a problem. (The only time I've seen it be a problem is in GDB, where some versions have a "stack corruption check" that ensures that the stack pointer is monotonically increasing and assumes the stack has been corrupted if it isn't. To make such versions of GDB compatible with segmented stacks, you just need to remove that check.)
Relocatable stacks are a different beast. With relocatable stacks, pointers into the stack move when the stack resizes. This means that you must be able to find those pointers, which may be anywhere in the stack or the heap, and update them. The garbage collector already knows how to do that--walking the heap is its job, after all--so typically resizable stacks are implemented by having the stack resizing machinery call into the GC to perform the pointer updates.
Note that, as an alternative implementation of relocatable stacks, you can simply forbid pointers into the stack. This means that your GC doesn't need to be moving. I believe that's what Go does, as, as far as I'm aware, Go doesn't have a moving GC (though I'm not up to date and very much could be wrong here). This doesn't help Pike's argument, though, because in that scenario the impact of relocatable stacks on the GC is much less.
As an aside, in my view the legitimate reasons to not use LLVM would have been (1) compile times and (2) that precise GC, which Go didn't ship with but which was added not too long thereafter, was hard in LLVM at the time due to SelectionDAG and lower layers losing the difference between integer and pointer.
> you can simply forbid pointers into the stack. This means that your GC doesn't need to be moving. I believe that's what Go does
I might have misunderstood your comment, but FWIW, Go does allow pointers into the stack from the stack.
When resizing/moving/copying a stack, the Go runtime does indeed find those pointers (via a stack map) and adjust them to point to the new stack locations. For example:
(The growable stacks I think replaced the segmented stacks circa Go 1.3 or so; I can't speak to whether they were contemplating growable stacks in the early days whilst considering whether to start their project with the Plan 9 toolchain, LLVM, or GCC, but to your broader point, they were likely considering multiple factors, including how quickly they could adapt the Plan 9 toolchain).
> SelectionDAG and lower layers losing the difference between integer and pointer
Doesn't pointer-provenance support address this point nowadays? AIUI, a barebones version of what amounts to provenance ("pointer safety" IIRC) was even included in the C++ standard as a gesture towards GC support. It's been removed from the upcoming version of the standard, having become redundant.
I think CHERI addresses the issue, but I don't know how much of that is in upstream LLVM. Pointer provenance as used for optimization mostly affects the IR-level optimizations, not CodeGen ones.
In any case, Azul's work addresses the GC metadata problem nowadays.
They eventually moved away from segmented stacks, right? In Go 1.3, released 2014. (Due to the "hot spot" issue.[1]) So while the ability to experiment was valuable, this specific example is not, like, perfect.
> Critics often complained we should just do generics, because they are "easy", and perhaps they can be in some languages, but the existence of interfaces meant that any new form of polymorphism had to take them into account.
I've been noodling on a statically typed hobby language and one of the things I'm trying to tackle is something like interfaces plus generics. And I have certainly found first-hand that Rob is right. It is really hard to get them to play nicely together.
I still think it's worth doing. Personally, I'd find it pretty unrewarding to use a statically-typed language that doesn't let me define my own generic types. I used to program in BASIC where you had GOSUB for subroutines but there was no way to write subroutines where you passed arguments to them. I don't care to repeat that experience at the type system level.
But I can definitely sympathize with the Go team for taking a long time to find a good design. Designing a good language is hard. Designing a good language with a type system is 10x harder. Designing a good type system with generics is 10x harder than that.
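A rough illustration of why the two features have to be designed together: in Go, the same interface can be used both as an ordinary value type (dynamic dispatch) and as a generic constraint, and keeping those two roles coherent is part of the difficulty. This is a sketch only; fmt.Stringer is the real interface, the two functions are invented.

    import "fmt"

    // PrintOne uses the interface as a value type: dynamic dispatch at runtime.
    func PrintOne(x fmt.Stringer) {
        fmt.Println(x.String())
    }

    // PrintAll uses the same interface as a constraint on a type parameter:
    // the concrete type is fixed at instantiation time.
    func PrintAll[T fmt.Stringer](xs []T) {
        for _, x := range xs {
            fmt.Println(x.String())
        }
    }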
> But I can definitely sympathize with the Go team
I don't.
The hardest part in implementing generics is when you support inheritance of implementation.
Go doesn't.
Go had the easiest job in implementing generics.
The only reason why they didn't was not technical: it was ideological and purely based on ignorance, and the fact that most Go designers stopped paying attention to the field of PLT in the late 90s.
All the ML and functional languages don’t seem to have this problem, and a lot of them have type systems that are far more sophisticated and capable than Go’s.
SML actually has a very simple, unsophisticated type system that isn't anywhere near as expressive as generics in most other languages (Java, C#, Go, etc.).
In SML, there's no way to define a generic hash table that works with any type that implements a hashing operation and uses that hash function automatically. Type parameters are entirely opaque types that you can't really do anything with. To make a generic hash table, the user has to explicitly pass in a hash function each time they create it.
In other languages, you can place a bound on the type parameter that defines what operations are supported and then the operations are found (either at runtime or monomorphization time) based on the type argument.
If you don't have bounds, lots of things get easier for the language designer. But lots of things get much more tedious for the language user. It's probably not a coincidence that every language newer than ML with parametric polymorphism has some sort of bound/trait/constraint/type class thing. But bounds are where most of the complexity comes from.
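To make the contrast concrete, here is a hedged Go sketch (the Hasher constraint and HashSet type are invented for illustration): with a bound, the hash operation is found via the type argument, rather than being passed in by hand the way SML requires.

    // Hasher is a bound: any type argument must supply its own Hash method.
    type Hasher interface {
        Hash() uint64
    }

    // HashSet locates the hash operation through the bound, not through an
    // extra function argument supplied at construction time.
    type HashSet[K Hasher] struct {
        buckets map[uint64][]K
    }

    func NewHashSet[K Hasher]() *HashSet[K] {
        return &HashSet[K]{buckets: make(map[uint64][]K)}
    }

    // Add places k in its hash bucket (duplicate checks omitted for brevity).
    func (s *HashSet[K]) Add(k K) {
        h := k.Hash()
        s.buckets[h] = append(s.buckets[h], k)
    }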
I do agree SML's type system feels elegant and useful at first, and becomes suddenly quite limiting when attempting the kind of domain modeling or library level genericity you'd do in e.g Java.
On the other hand, isn't the hashmap use case you mention addressed with the module system and functors? Like the set abstraction described in this SO answer [0].
Wondering where my understanding of your message or of SML falls short.
Keep in mind that Scala had been released before golang was. And became popular around the time golang was released. Without GOOG-scale resources behind it.
The biggest win for Go is its approach based on composition rather than inheritance.
There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult behind needing to follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or you are tagged as a bad engineer. We don’t have thousands of traits to define every possible code interaction, yes. From a type system point of view, Go is lacking compared to HM-based type systems, yes. Yes, it’s all pros and cons with this decision. We can agree on that.
I’ve seen that the predominant enemy for a software project is software engineers. Go keeps them in line for the sake of the project.
> There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult behind needing to follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or you are tagged as a bad engineer.
Isn't Go the big driver of Kubernetes? It feels like overarchitecturing is still there in Go projects, they've just made it distributed.
I’ve read comments from people before about how go’s lack of generics has caused significant amounts of extra code to be written in the K8s codebase. I do wonder if we could snap our fingers and magically have a K8s written in Rust, a lisp, Zig tomorrow, what that would look like, what it would be like to maintain and build and what the codebases would be like in comparison. Would make an interesting intellectual exercise.
> There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult behind needing to follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or you are tagged as a bad engineer.
My experience is that you can also write simple non-over-engineered code in other languages too. Yes, it can require pushing against the wind of the prevailing culture sometimes but it's not, like, impossible.
If one chunk of code depends upon one and only one other chunk of code, then forcing the programmer to put an interface between them for unit testing does a disservice to the programmer.
I really do prefer languages that make it easy to write unit tests.
1. Nil pointers (two types of them, even!). We knew better even then.
2. Insisting that the language doesn't have exceptions, when it does. User code must be exception safe, yet basically never use exceptions. The standard library swallows exceptions (fmt and http).
Those are the biggest day to day concrete problems. There are many more that are more abstract, but also hurt.
Right, people really need to take Tony's Billion Dollar Mistake more seriously, which means no you can't have "null" or "nil" or whatever you're calling it. We know that's a bad mistake so you shouldn't repeat it.
Just as I can excuse using a fat pointer to some bytes as your "string" type in a very close to the metal language. I can excuse the possibility of a null pointer or reference in such a language, just above the level where we're doing machine code. It's not nice, but you're banging rocks together and there's a zero value in this register and so fine, let's have a "null" pointer. This is not something to be proud of, it shouldn't make its way into code that can avoid it, but it will need to exist in the very heart of the fire.
Go is far above that level, so it needs to just not expose Go programmers to Tony's mistake at all. It should have been defined out of existence.
I tend to feel it's just error handling in general. It's not even something I'd care so much about, if it didn't seem like everyone else felt like the way Go does errors is great.
You can't/shouldn't do custom error types, even though it's an enticing, sexy interface, because of things related to the massive nil/nil mistake you've covered (see the sketch below). We had errors-as-values in popular, large languages (Javascript callbacks?), and by the mid-10s everyone recognized that they're kinda whatever, mostly just a different way of doing the same thing less conveniently, and that community got rid of them (as a side-effect of the more general push to get rid of callback hell, but certainly no effort was made to keep errors-as-values around). We say "being forced to handle errors is great in Go", but (1) you don't have to handle them, you just have to acknowledge them with `_`, and (2) Java has had checked exceptions for years, and everyone also recognizes that those are ish. And, as you say, Go has two fundamentally different kinds of errors functions can throw (what color is your function?), except the facilities for handling panics are essentially a goto (which, I love pointing out lest we all forget, Go also has a literal goto). Sure, working code shouldn't panic; but all that asserts is that Go wasn't designed to be fault tolerant.
To be clear, I don't feel as much passion for hating on Ruby, because I don't use Ruby. I use Go. I don't wish to hate on the language for the purpose of hate; I wish that more people would agree that the situation is quite poor, rather than good, and that we could make meaningful positive change to the language.
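Here is the sketch referred to above: the classic typed-nil trap that makes custom error types hazardous. The names are hypothetical, but the behavior is standard Go: an interface holding a nil pointer of a concrete type is itself not nil.

    package main

    import "fmt"

    type MyErr struct{ msg string }

    func (e *MyErr) Error() string { return e.msg }

    func doWork(fail bool) error {
        var e *MyErr // nil pointer of a concrete type
        if fail {
            e = &MyErr{msg: "boom"}
        }
        return e // even when e is nil, the returned interface is NOT nil
    }

    func main() {
        if err := doWork(false); err != nil {
            // This branch runs: err holds a (*MyErr)(nil), which != nil.
            fmt.Println("reported as an error despite no failure")
        }
    }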
Go has been my daily driver for over a decade. I was in the past a C++ programmer. In what ways am I writing exception-safe code when I write ordinary Go code?
I've run into issues where panics cause half of what should be a multistep but assumed to be atomic transaction to occur, putting the system into a goofy state that required fairly manual intervention. In my case a system daemon that required someone to manually fix up system state on the CLI and restart the system.
That's like, strong-form exception safety, a problem in most mainstream languages. But when C++ people talk about "exception safety", they're talking about basic or weak-form exception safety: not leaving dangling pointers and resources as a result of unexpected control transfer. That style of defensiveness is not common in Go code.
Well that's the thing, I am talking about resources left 'open' since they didn't complete their lifecycle due to the unexpected control flow. Yes, it's not common in go code, but I think that's more a combo of the GC making dangling memory not a problem, and the environment that most go code lives in (ie. kubernetes clusters or some equivalent) where the other resources leaked are eventually reclaimed by the autoscaler and other devops automation.
The GC is ubiquitous, and definitely a point in favor for go for the vast majority of use cases, but I've found it more difficult than anticipated to write go code that manipulates resources other than memory that the environment you're running in won't clean up for you. And that's coming from C++ code originally including the exception safety issues.
> panic = very serious problem, what do you expect?
Even the Go standard library itself panics (and recovers) when you try to e.g. json-encode a NaN, which doesn't seem like something that should make the system unstable.
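For what it's worth, that internal panic never reaches the caller; from the outside it surfaces as an ordinary error (a small sketch; the exact error text may vary by Go version):

    package main

    import (
        "encoding/json"
        "fmt"
        "math"
    )

    func main() {
        _, err := json.Marshal(math.NaN())
        fmt.Println(err) // an "unsupported value" error, not a crash
    }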
encoding/json largely moved away from using exceptions internally for errors, I believe due to performance reasons. Also you're making a strawman by equating one explicit intentional use of a specific panic with all panics everywhere.
This is a direct result of Go's lacking error handling, and would not be necessary if they'd have learned from C++ and Java's mistakes, instead of just repeating them.
That code is not exception safe. It may be a contrived example, but in code review I've many times seen real code that is not exception safe, for basically the same reason.
A held lock is fairly benign (it only causes a deadlock, not corrupted data) if net/http.Server swallows the panic in your handler, or fmt.Print swallows it. But some other errors are not as benign.
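A minimal sketch of the held-lock case (process is a stand-in for arbitrary work that might panic): the first version is not exception safe, the second is, because the deferred Unlock runs while the panic unwinds the stack.

    import "sync"

    var mu sync.Mutex

    func process() {
        // stand-in for work that might panic
    }

    // Not exception safe: if process panics and something upstream recovers,
    // the mutex stays locked forever.
    func riskyUpdate() {
        mu.Lock()
        process()
        mu.Unlock()
    }

    // Exception safe: the deferred Unlock runs even if process panics.
    func safeUpdate() {
        mu.Lock()
        defer mu.Unlock()
        process()
    }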
> Insisting that the language doesn't have exceptions
Insist in what way? The Go website insists that Go has actual exceptions, unlike the pretend exceptions that are actually errors passed around using goto like you find in Java and other languages inspired by it.
Other than the fact that they spelled "throw" "panic", "catch" "recover", and "finally" "defer", how are Go exceptions different from what you find in Java?
I get that Go devs like to claim they are completely different because you are supposed to use them differently, but under the hood they are identical as far as I can tell.
Disclaimer: just trying to directly answer the question, but also may be wrong in many ways. Please correct me.
I thought one difference was that a panic in a goroutine kills the whole process vs. an exception in a Java thread would just kill that thread. That could be more of a consequence of "Goroutines are not threads" rather than "panics are not exceptions".
Java Checked exceptions are certainly quite different than Go panics in terms of compile-time checks and what code the user of the language must write.
I thought there are some differences with how stack traces are accessed on caught/recovered exceptions? It's been a while now but I thought you needed something special to get the Go stack trace out. Fairly minor detail though.
Error is an interface in Go vs. a base class that's extended. Probably more of a result of other language design decisions rather than a decision in this particular area.
I haven't really seen the catch-and-rethrow paradigm for Go panics, but it's kinda different because `panic` only accepts a string argument whereas you can rethrow an "object" in Java (side note - sometimes Go error handling ends up having a lot of string concatenation ex. errors.Wrap because of the focus on "errors are strings"). The lack of catch-and-rethrow is more of a usage difference than a design difference, to your main point.
Probably others but can't think of them right now.
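For reference, a rough sketch of how the pieces line up mechanically (note that panic accepts any value, not just a string, as the spec discussion below confirms); the function names are invented:

    import "fmt"

    // mightPanic plays the role of "throw".
    func mightPanic() {
        panic(fmt.Errorf("something went wrong"))
    }

    // caller shows the defer ("finally") and recover ("catch") halves.
    func caller() (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        mightPanic()
        return nil
    }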
> I thought one difference was that a panic in a goroutine kills the whole process
Only if not caught. But in any case this still means that you need to write exception safe code, because you don't know if the function you call will throw, and you don't know if the function calling you will catch.
Thanks for the clarification on the panic function signature - I missed it when checking the spec but it's definitely in there and defined as taking `interface{}`
> sometimes Go error handling ends up having a lot of string concatenation ex. errors.Wrap
Do you mean errors.Unwrap? There is no errors.Wrap function.
It does no string concatenation. It merely checks to see if the error value has an Unwrap method and returns the result of that method if available, or nil otherwise.
The only thing that comes close to having anything to do with strings in the error package is errors.New. Even then, the focus is not on the string. It is on the returned value, used when you want to store a 'constant' value for use with errors.Is.
> because of the focus on "errors are strings"
What does that mean? The focus is on types and values that satisfy the error interface. string does not satisfy the error interface. Strings cannot be errors, at least not when errors are passed as an error interface type, as is the convention.
I'm not a Go expert, is this function deprecated/removed?
RE: errors are strings
I probably didn't explain it too well but, for example, you're required to have a string to create an error of course
    // errors
    func New(message string) error
and then after a bunch of wrapping it ends up coming out like: "get product: fetch user: json unmarshal: invalid data"
So it's a bunch of string concatenation. Yes, as you said, `error` is not a string, it's an interface, but a lot of the time it's basically acting like a string that's continuously growing. Rather than something more structured that's composed together, and then later (possibly) serialized to a string.
The "errors" interface mixes the concerns of error tracking with error serializing. I might not need to serialize the error to string because I'm serializing to other formats (protobuf, idk) but I have to use everything that revolves around a string-based API. That's what I meant by "errors are strings", even though that's hyperbole and not literally the case.
> I was referring to this errors.Wrap function [...] is this function deprecated/removed?
Got it. As the github.com identifier implies, that is a third-party library.
I suppose you could argue that said library is focused on strings, but then you could write a similar library in any language. Would you say that would also make those languages focused on strings? Probably not.
I don't think anyone would recommend that you use said library.
> you're required to have a string to create an error of course
It is true that said function does exist in the errors package (the standard library one), but it is for defining error 'constants' meant to be compared for equality. Consider:
    var MyError = errors.New("my error")

    if err == MyError {
        // We got MyError!
    }
It is not the string that is significant. It is the comparable memory address that is significant.
Note: For reasons, you are bound to want to use errors.Is rather than == in this case, but the intent is the same. I choose to use == here as I think it explains it better.
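For completeness, a small sketch of why errors.Is is preferred over == once wrapping enters the picture (fetchUser and ErrNotFound are invented for the example):

    import (
        "errors"
        "fmt"
    )

    var ErrNotFound = errors.New("not found")

    func fetchUser(id int) error {
        // hypothetical failure path; %w wraps the sentinel value
        return fmt.Errorf("fetch user %d: %w", id, ErrNotFound)
    }

    func lookup() {
        err := fetchUser(42)
        if errors.Is(err, ErrNotFound) {
            // matches through the wrapping, where a plain == would not
        }
    }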
> Rather than something more structured that's composed together, and then later (possibly) serialized to a string.
No, that's exactly what a Go error is. They are fully-fledged types, able to pack any kind of data you want in them. Consider:
    type MyError struct {
        Code int
        User User
        // ...
    }

    // Error is needed to satisfy the error interface.
    func (m MyError) Error() string {
        return fmt.Sprintf("code %d caused by %s", m.Code, m.User.Name)
    }

    func doSomething(user User) error {
        return MyError{Code: 1, User: user}
    }
That's an error in Go. You are right about strings to the extent that serialization to a string is a requirement of the error interface, but you don't ever need to use it. It is not significant to any meaningful degree. But as logging/reporting is a common way to deal with errors, it is understandable why the serialization is a requirement.
Of course, you don't have to use the error interface at all. Some functions in the Go standard library, as an example, use an int type to signify an error. You probably want to use `error` in the common case, though, so that your code plays nice with the expectations of others. Deviation should carry careful consideration.
As to why press a button that does nothing? For the same reason fidget spinners were all the rage a few years back: Bored people like to do something with their hands.
Perhaps if the comment had a question that was left unanswered, people wouldn't have become so bored?
Not really. The only suggested difference is that you "need to do something special to get a stack trace out" in Go, but that one is not even true. There is nothing special needed to get the stack trace. I suspect he is confusing exceptions with errors, the latter of which have no need for stack traces in the general case, so you would need to do something special to include the stack trace in an error for your special case.
There is more content in the thread, but not about exceptions. So, with that, here's your time to shine!
Given how they bring up how fmt and http swallow them, I believe the parent is referring to panics rather than errors returned via standard control flow.
I guess I'm confused, since panics are equally errors passed around by gotos, as much as Java exceptions are. Probably more so, since at least with Java it ends up being part of the function type signature the vast majority of the time.
It creates 2 disparate types of error handling that don't neatly mesh together. You have to handle error return values, but you also have to handle exceptions (panics) because they still exist.
My issue is mostly implementing both ways of bubbling up an error to somewhere it can be handled. I think having either error return values or exceptions is preferable to having both. I don't think exceptions are perfect, but if panic() absolutely has to exist then I'd rather have an entirely exception-based language than a language that uses both systems simultaneously.
E.g. if I write a function that accesses an element of an array without bounds-checking, it could panic and I have to handle that exception. Bounds-checking basically just becomes finding things that would throw exceptions and converting them to errors so we can pretend that exceptions don't exist.
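That conversion looks roughly like this (a sketch; the at helper is made up): you check the precondition yourself and return an error, so the runtime's bounds-check panic never fires.

    import "fmt"

    // at returns an error instead of letting the slice index panic.
    func at(xs []int, i int) (int, error) {
        if i < 0 || i >= len(xs) {
            return 0, fmt.Errorf("index %d out of range (len %d)", i, len(xs))
        }
        return xs[i], nil
    }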
They are disparate conditions. Errors happen in response to conditions that occur during the execution of the application. Exceptions happen in response to conditions that occurred when the code was written. Very different things.
It is highly unlikely that you want to handle an exception. It's the runtime equivalent of a compiler error. Do you also want to handle compiler errors so that your faulty code still compiles? Of course not, so why would you want to do the same when your coding mistakes are noticed at runtime?
There are, uh, exceptions to that when it is necessary to handle exceptions, but if you see it as routine you're doing something wrong. If you overloaded that with errors, forcing it to be routine, you'd have a nightmare on your hands (like in those other languages that have tried it).
> Exceptions happen in response to conditions that occurred when the code was written.
Huh? Stack overflow? Out of memory?
> It is highly unlikely that you want to handle an exception
It is very likely that I want to handle an exception. In fact, I want to handle all exceptions and keep my process and all other concurrent requests to it running. And don't tell me, that's not possible, because I've been doing that for decades. In Java that is.
> Stack overflow?

Exception. The minimum available stack space is a known quantity. Exceeding it means you made a mistake.
> Out of memory?
Error. The available heap is typically not predictable. Your allocation function should provide an error state; and, indeed, malloc and friends do.
> And don't tell me, that's not possible
It is perfectly possible. Probably not a good idea, though, as you have proven that your code is fundamentally broken. Would you put your code in production if there was a way to ignore compiler failure?
Pick something. I don't care. Let's say failure for reasons of having no return statement in a function that declares itself to return something. If you could flip a switch to see that code still compile somehow, knowing that the program is not correct, would you deploy it to production?
> Go literally does not warn you until it hits the error at runtime for these exceptions.
True, but only because the Go compiler isn't very smart. It trades having a simpler compiler for allowing some programmer faults to not be caught until runtime. But if there was such a thing as an ideal Go compiler, those exceptions would be caught at compile time.
When it comes to exceptions, the fault is in the code itself, unlike errors where the fault is external to the program. Theoretically, those faults could be found before runtime. But it is a really hard problem to solve; hence why we accept exceptions as a practical tradeoff. We are just engineers at the end of the day.
Except we could just treat them the same, and we could have a type system that makes that possible. Multiple languages before Go had a solution to this that could've been used. Or, the Go team could've written said sufficiently advanced compiler themselves.
We could, but we learned from those attempts that came before Go that it is a bad idea. There is good reason why the languages newer than Go, including Rust, also keep a clear division between errors and exceptions.
We already lived through the suffering. The new age of recognizing that exceptions and errors are in no way the same thing and should not be treated as such is a breath of fresh air.
> why the languages newer than Go, including Rust, also keep a clear division between errors and exceptions.
There are also languages older than Go that make this distinction. Java for example. Like Rust (and in contrast to Go) they even have syntax for convenience, and the compiler checks that you handle them.
You know how you can immediately spot old Java APIs and code? The one that was designed and written while we lived through the suffering? Whenever you encounter checked exceptions. Turns out there is no (or even negative) value in this rather arbitrary separation.
Obviously every language can make that distinction. As far as the computer is concerned, errors are no different than email addresses and phone numbers, which are equally representable in every language. Even Java could have errors.
With respect to the bits and bytes, exceptions only stand out in that they carry stack trace information. That is useless for email addresses, phone numbers, and errors. A hard drive crash doesn't happen at line 32. It happens in the hard drive! But knowing that the fault lies at line 32 is very useful for resolving programmer mistakes. If you get a divide by zero exception at line 32, then you know that you missed a conditional on line 31. Exceptions are compiler errors that the compiler missed.
By convention, Java does not acknowledge the existence of errors. It believes that every fault is a programmer mistake, with some programming mistakes needing to be explicitly checked for some reason. Which, of course, questions why you let programmer mistakes ship given that you know that there is a programmer mistake in your code? That doesn't make any sense. But as you point out, everyone these days agrees that idea was nonsensical.
> Java does not acknowledge the existence of errors. It believes that every fault is a programmer mistake
Oh, you're distracted by the stack traces, of which you have a strong opinion, but it's exactly the opposite. What modern Java code does (in the wake of Exceptional C++) is accept failure as a given. It does not matter what's the _cause_ of a failure, be it the programmer's fault or the failure of some system it depends on. It's the job of the dev to ensure that the process can never leave a defined state. And the way to do that is to write exception-safe code, and not to handle-all-errors (TM).
Well, there is nothing else. Capturing the stack in a value along with some metadata is all that there is to an exception. Were you wanting me to say something about the weather?
> It does not matter what's the _cause_ of a failure
Except when it does. Let's say the failure is that the user is a child when your requirements demand that they are an adult. age < 18, which produces an error state, doesn't create, let alone raise, an exception. Hilariously, you have to resort to Go-style error handling:
    if (age < 18) {
        // Do something with the failure.
        // Are we writing Go now? I thought this was Java.
    }
> It's the job of the dev to ensure that the process can never leave a defined state.
If the developer ensures that the process doesn't leave a defined state, then this discussion is moot. You will never encounter an exception. Exceptions are raised when the process enters an undefined state.
> Exceptions are raised when the process enters an undefined state.
Exceptions are raised _before_ a process enters an undefined state. A thread that's unwinding the stack is still in a well-defined state.
> age < 18
In real code, the 18 is probably coming from the DB, is the result of resolving the user's location, and happens in 5 nested layers of thread pools, logging, and transaction management. None of those layers care about the age or height of the user. If it's an external API, there is a layer on top that converts some useful exceptions into error codes for the JSON response. Also, there's a catch-all that maps the rest to 500. If it's an internal API, the exception might be serialized in full, to preserve the stack trace across systems.
> Exceptions are raised _before_ a process enters an undefined state.
There is no exception to raise if the state is defined. Why bother? If you know how to divide by zero, just do it! Except you probably don't know how to divide by zero, so you have found yourself in an undefined state and need to bail.
> If it's an external API, there is a layer on top that converts some useful exceptions into error codes for the JSON response.
So you have an exception, that you convert into an error, that you then (on the Java client) handle as if you were writing Go to turn it back into an exception...? I take it that you didn't take a moment to read what you wrote? You must have misspoken, as that would be the dumbest idea ever.
Of course. And, sadly, one of those systems I once helped with was built on Javascript, which made the whole thing even sillier.
There you had an exception, resorting to as-if-Go to convert it to an error, handled as if Go to convert it back into an exception, and then, when the exception was caught, it was back to 'writing Go' again to figure out what the exception was! At least Java does a little better there, I'll give it that.
It is completely ridiculous. I guess that's what happens when you let bootcamp completionists design software.
If a language really wants to embrace the idea that errors and exceptions are the same thing, fine. Maybe it would even prove to be a good idea. But then we should expect that language to actually embrace the idea. This "errors are exceptions, but only sometimes, because we can't figure out how to represent most errors as exceptions" that we see in Java and languages that have taken a similar path is bizarre – and for developers using the language, painful. That has proven to be a bad idea.
They could. The halting problem is solved (well, worked around) by adding a complete type system, which the compiler is free to implement.
But with respect to this discussion the compiler does not need to solve the halting problem. It can assume the program always halts. It makes no difference if the program actually halts or not.
> The minimum available stack space is a known quantity.
It is. Tracking and erroring out on it to avoid the exception means replicating your runtime environment's mechanism for tracking and erroring out on stack overflow (system in a system / inner platform anti-pattern). Your runtime environment's implementors know that, so it's unlikely you'll find the APIs necessary to avoid an exception (i.e. a maxRecursion param and equivalent error result).
> Exceeding it means you made a mistake.
No, it can be just a part of processing a request. Depending on the particular runtime environment, it does not have any impact on other parts of the process.
> so it's unlikely you'll find the APIs necessary to avoid an exception
Lacking a needed API is programmer error. Better programming can avoid that kind of exception. A hypothetical, sufficiently smart compiler could fail at compile time, warning that you are missing code to handle certain states in the absence of such an API.
To reiterate, exceptions are faults which come as a result of incorrect programs. Errors are faults which come as a result of external conditions. A program that overflows the stack is an incorrect program. The stack size is known in advance. If it overflows, the programmer didn't do proper accounting and due diligence.
Whoa, easy there. We're talking about standard libraries, and the designers of those are not complete morons. The API is lacking because the runtime environment already provides a safe and defined environment for the observed behavior. It just happens to not fit your mental model, which I find too strict and off wrt reality on one hand, and infeasible on the other (Gödel wants to have a talk with you).
Don't let perfect be the enemy of good. It is quite pragmatic to make such an error.
We're ultimately talking about engineering here. Engineering is all about picking your battles and accepting tradeoffs. You go into it knowing that you will have to settle on making some mistakes. Creating an ideal world is infeasible.
Indeed, it is your mental model that is too strict. To err is fine. To err is human!
> Errors happen in response to conditions that occur during the execution of the application. Exceptions happen in response to conditions that occurred when the code was written.
wat.
You have code that ends up dividing by zero, and boom, you have an exception while the app is running.
> It is highly unlikely that you want to handle an exception.
You always want to handle an exception. That is how actual resilient systems are written.
> You have code that ends up dividing by zero, and boom, you have an exception while the app is running.
Yes, and? That problem arose when the code was written. There is no reason why a program should ever find itself in a state where division by zero can occur. A simple if statement is all it takes to avoid that. If you see a divide by zero exception, the developer fucked up, having written an incorrect program.
That's entirely different from, say, a hard drive crash causing writes to fail. Not even an ideal programmer writing an ideal piece of software completely devoid of defects can avoid an error condition.
> You always want to handle an exception.
No. You always want to ensure that you have no exceptions in the first place. They are the runtime equivalent of compiler errors. If you encounter an exception in your software, you screwed up.
There are circumstances where handling exceptions is warranted, but if you are routinely handling exceptions throughout your development, something is amiss.
Once you get outside webshit you have a world where things need to be able to get the job done even in the presence of software bugs. And software bugs are just another type of failure. Some of those involve cases where failure is expensive or life threatening.
You've got that backwards. When you need to get real shit done you use languages that have a proper type system that eliminates all possibility of exceptions. Given a sufficiently advanced compiler, the absence of exceptions can be proven at compile time.
Letting your program enter into an undefined state where it could wreak all kinds of havoc without anyone realizing beforehand is only acceptable in low-rung "webshit" development.
You can imagine an end user has some structured document created by some other program 10 years ago and they need to look at it. Unfortunately in one place it violated the spec and a field type is wrong. Two ways to deal with it.
My way. The program catches the error, notifies the user, tries to mitigate it and display the contents it can extract to the user.
You might want to read the thread again. You seem confused.
> The program catches the error, notifies the user, tries to mitigate it and display the contents it can extract to the user.
As you will see once you do, that's also my way. Only if you encounter an exception would you exit. As you point out, and to which I agree, your scenario is not exceptional. It is correctly identified as an error. You literally state as such, just as I would.
Again, exiting is reserved for exceptions. Encountering an error is not exceptional. Encountering an error is expected!
I cannot conceive of the scenario where it makes sense to recover a bounds-checking induced panic. The process should crash; the alternative is to continue operating in an unknown, irrecoverable, and potentially security compromised state.
Rust shares Go's "errors as values + panics" philosophy. Rust also has a standard library API for catching panics. Its addition was controversial, but there are two major cases that were specifically enumerated as reasons to add this API: https://github.com/rust-lang/rfcs/blob/master/text/1236-stab...
> It is currently defined as undefined behavior to have a Rust program panic across an FFI boundary. For example if C calls into Rust and Rust panics, then this is undefined behavior. Being able to catch a panic will allow writing C APIs in Rust that do not risk aborting the process they are embedded into.
> Abstractions like thread pools want to catch the panics of tasks being run instead of having the thread torn down (and having to spawn a new thread).
The latter has a few other similar examples, like say, a web server that wants to protect against user code bringing the entire system down.
That said, for various reasons, you don't see catch_unwind used in Rust very often. These are very limited cases.
> I cannot conceive of the scenario where it makes sense to recover a bounds-checking induced panic.
A bog-standard HTTP server (or likely any kind of request-serving daemon). If a client causes a bounds-checking panic, I do not want that to crash the entire server.
It's not even really particular to bounds-checking. If I push a change that causes a nil pointer dereference on a particular handler, I would vastly prefer that it 500's those specific requests rather than crashing the entire server every time it happens.
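net/http's server already recovers handler panics so that one bad request doesn't kill the process; a wrapper like the one below (names are made up) makes that behavior explicit and also returns a 500 to the client for that request only:

    import (
        "log"
        "net/http"
    )

    // recoverHandler is a hypothetical middleware: it converts a panic in the
    // wrapped handler into a logged 500 response, leaving other requests alone.
    func recoverHandler(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if v := recover(); v != nil {
                    log.Printf("panic serving %s: %v", r.URL.Path, v)
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }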
> The process should crash; the alternative is to continue operating in an unknown, irrecoverable, and potentially security compromised state.
The goroutine should probably crash, but that doesn't necessarily imply that the entire program should crash. For some applications the process and the goroutine are one and the same, but that's not universally true. A lot of applications have some kind of request scope where it's desirable to be able to crash the thread a request is running on without crashing the entire server.
That’s not true. Java has two types of exceptions: checked and unchecked. Checked exceptions are what this thread has been calling errors, and unchecked exceptions are what this thread has been calling exceptions. Maybe it was a mistake to call them both exceptions, but Java does have both kinds.
I'd say Java named them appropriately. While you are right that they almost cover the same intent, error state is not dependent, whereas checked exceptions force a dependency on the caller[1]. They are not quite the same thing.
[1] Ish. If we are to be pedantic, technically checked exceptions are checked by the exception handlers, not the exceptions themselves. If you return a 'checked' exception rather than throw it, Java won't notice. However, I expect for the purposes of discussion we are including exception handlers under the exception umbrella.
Checked exceptions are NOT like errors-as-values. Their only resemblance is that a checked exception strictly forces the caller to handle it, similar to errors-as-values. But the handling itself is still the same as with regular exceptions: out-of-band and not composable with anything else.
Only at the level of Java source. The JVM (and several other languages) doesn’t actually care or enforce which exceptions a method might throw, which is what makes tricks like https://projectlombok.org/features/SneakyThrows possible.
"Insist in what way": starts with things like the Go FAQ having a question called "Why does Go not have exceptions?".
The answer does elaborate, so it's not like they're lying, exactly. But anything and everything Go says about this also applies to C++. There's no relevant technical difference between Go and C++ exceptions, nor is there a difference in how the standard library uses them.
... except the Go standard library swallows exceptions in some cases, which is like the biggest no-no you can do.
But nobody would say that C++ doesn't have exceptions.
Go clearly has exceptions. It always has. Thus I ask what is being insisted. Like, specifically.
Is it being insisted in some tangential way, like 'Go doesn't have "try", "catch", and "finally" keywords'? Something like that would be true.
Or is the insistence straight up "Go does not have exceptions!"? If that is the case, who is saying it? Did you just read it in one of the regularly scheduled Rust advertisements that get posted here? Or did it come from someone who actually means something in the Go community?
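For anyone who hasn't seen the mechanism, here is a minimal sketch of how the pieces line up with try/catch/finally (the function names are invented; the spelling differs, the semantics don't):

    package main

    import "fmt"

    // parse is a stand-in that "throws" via panic.
    func parse(s string) int {
        if s == "" {
            panic("empty input") // roughly: throw
        }
        return len(s)
    }

    func safeParse(s string) (n int, err error) {
        defer func() { // deferred code runs on the way out, roughly: finally
            if r := recover(); r != nil { // roughly: catch
                err = fmt.Errorf("parse failed: %v", r)
            }
        }()
        return parse(s), nil // roughly: try
    }

    func main() {
        fmt.Println(safeParse("hello")) // 5 <nil>
        fmt.Println(safeParse(""))      // 0 parse failed: empty input
    }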
Huh? There is nothing in the FAQ claiming that Go doesn't have exceptions... It literally says that Go does have exceptions.
Are you referring to the frequently asked question itself? That's the only thing in there that could even possibly make you think Go doesn't have exceptions. Except, being a FAQ, you know that the question comes externally from people who don't know about Go. That is why they are asking the Go people the question! One would have to be braindead to think that is an insistence.
> There is nothing in the FAQ claiming that Go doesn't have exceptions.
Is this some sort of gaslighting? Not only is it in there, I even quoted it for you.
> the question comes externally from people who don't know about Go
And they're told it doesn't have exceptions.
In the last few years they've started correcting the FAQ and other introductory material.
I'm not going to go through all the intro material again. Maybe in the last 10 years they've fixed the misinformation. But an FAQ answer that doesn't start with "it does" is wrong.
Saying "Go doesn't have exceptions", followed by the feature it does have, which is the literal exact definition of exceptions (well, some Lisp languages have fancier exceptions, but still), is gaslighting.
Like I said elsewhere, the C++ standard library also doesn't throw willy-nilly. It's not "not exceptions" just because they boycott that name, or its intended usage.
Just like "Go doesn't have warnings. It's all errors, the equivalent of -Werror" is a lie. They just chose to put the warnings into "go vet", to cover their tracks.
It may be that they've stopped using "doesn't have exceptions" in basically all marketing material now, but when I was learning it a decade ago it was everywhere.
So now we have all this code out there that's not exception safe.
And that's why (to bring it back to the topic) it is a thing that they got wrong.
Errors are things that can fail after the program is written (hard drive crash, network failure, etc.).
Exceptions are things that were already broken when you wrote the code (null pointer access, index out of bounds, etc.)
To put it another way, exceptions are failures that a sufficiently smart compiler would have been able to catch at compile time. Of course, creating a compiler that smart is a monumental task, so we accept catching some programmer mistakes at runtime as a reasonable tradeoff.
If you are trying to suggest that the stack is runtime unpredictable like a hard drive, overflowing it would be an error, not an exception.
Therefore, a sufficiently smart compiler just needs to ensure that you have done the proper accounting of stack use and handle the error condition if you are approaching an overflow state. With that, you can prevent it becoming an exception.
If you encounter a stack overflow exception, it is because you didn't do your due diligence. Technically your program is flawed. Granted, the pragmatic engineer probably doesn't care about correctness in that area. It turns out not all programs have to be perfect to be useful.
No fame for me, I'm afraid. There is nothing here out of the ordinary.
I don't think the tooling is why there is a perceived "Go vs Rust" culture, I think that's due to (somewhat) overlapping use cases, or more probably that they were developed and came out around roughly the same time. There really doesn't need to be a "Go vs Rust" culture though.
I think it should be obvious to most people that Go had a big influence on Rust and other modern languages with regard to the benefit of having unified tooling, a formatter, a linter, etc.
> It's my understanding that the Go compiler will format your code every time you compile.
I do not believe that is the case. You have to invoke `gofmt` or `go fmt` on the project.
You may hook it to precommit, or your editor might be configured to automatically run it on save, but afaik neither `go build` nor `go run` will auto-format.
Something that I really like about go is how easy it is to make a monorepo, and how quick and easy it is to build all of the contained apps (go build ./...).
I also find it really easy to make CLI tools in Go that can form part of unix pipelines, since: you just need a single go file and app-named folder to get started, it gives you a self-contained binary as output, and the Reader/Writer interfaces make it easy to stream and handle data line-by-line. We have a couple of CLIs at work that analyze multi-gig logs in a couple of seconds for common error patterns - Go is very handy for such things.
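The skeleton of such a tool is roughly this (the "ERROR" pattern and the usage line are made up for illustration):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // A pipeline-friendly filter: reads stdin line by line and prints lines
    // that look like errors. Example: zcat big.log.gz | ./errgrep
    func main() {
        count := 0
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // tolerate long log lines
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "ERROR") { // hypothetical pattern
                fmt.Println(line)
                count++
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
        fmt.Fprintln(os.Stderr, count, "matching lines")
    }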
Maybe I'm misunderstanding here, but it sounds like he's claiming they invented "interfaces". The Go interfaces seem like the same thing as a Haskell typeclass which predates them by a long shot. Either way a great invention that should be in more languages.
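For reference, the Go flavor of the idea is purely structural and implicit: a type satisfies an interface just by having the methods, with no instance declaration tying the two together (a small made-up example):

    package main

    import "fmt"

    type Stringer interface { // same shape as fmt.Stringer
        String() string
    }

    type Point struct{ X, Y int }

    // Point never mentions Stringer; having the method is enough.
    func (p Point) String() string { return fmt.Sprintf("(%d, %d)", p.X, p.Y) }

    func describe(s Stringer) { fmt.Println(s.String()) }

    func main() {
        describe(Point{1, 2}) // (1, 2)
    }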
The early days of Go appeared to be the work of a group of people who had not ventured out of their bubble in a very long time and were unaware of several decades of PL research, so it would be somewhat surprising if any of them knew what a typeclass was at the time.
Every way I look at it, Go brings tremendous regression when it comes to "modularity & composability" in the general sense of the terms.
Package management, package definition, interface definition, exporting, protecting or hiding components, feel like they were an afterthought, and as if specified by people who had absolutely no prior experience in other languages or actively ignored it.
There is no nesting of packages in Go. There is no selective visibility protection among packages except on the very first level. Exporting is such an afterthought that it is declared by the change of the first letter to uppercase. Local development of cross-modules was only introduced recently (workspaces) and is extremely primitive (no transitive replacement -- so workspaces depending upon other workspaces is not a thing, workspace-vendoring in the next release but will essentially conflict with mod-vendoring). This probably works with skilled and disciplined teams with strict linters and other tooling, and operating in a large mono-repository or on ultra small codebase. For others, it ends up in a large plate of spaghetti code with no help to untangle, and every single newcomer shedding blood, sweat and tears to wrap their head around a codebase which inexorably became monolithic by the invitation of the very language.
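For readers outside Go, the visibility mechanism I'm referring to is just this (a made-up library package; there is nothing finer-grained than the capital letter):

    // Package geometry is a hypothetical package illustrating Go's one
    // visibility rule.
    package geometry

    // Area is exported: any importer can call geometry.Area.
    func Area(w, h float64) float64 { return w * h }

    // clamp is unexported (lowercase first letter): visible everywhere inside
    // package geometry, invisible everywhere outside it, nothing in between.
    func clamp(x, lo, hi float64) float64 {
        if x < lo {
            return lo
        }
        if x > hi {
            return hi
        }
        return x
    }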
All learnings obtained from decades of building very large-scale applications in C++, and of managing and publishing packages in Java or C#, were essentially put aside and ignored.
The sad but good news is that it's all being progressively rediscovered, but relearning from decades of nuget and maven for example, will take time, effort, exemplary humility and open mindedness.
Perhaps because there isn't just one choice? The Go team maintains two compilers, and each treats that interoperability differently. You have even more options if you reach out into the larger Go ecosystem (e.g. tinygo does things differently again).
I was surprised at the poor quality of serviceability given its enterprise deployment with k8s. No thread dumps without killing the process (or writing a SIGUSR1 handler). No heapdump reader so you have to use the memory sampler and hope you catch the problem (and that requires adding code), and viewcore is broken in new versions (and it doesn't work with a stripped binary which is most production binaries).
This is a retrospective written by Rob Pike, one of the creators of the Go language.
I worked at Google at the time Go was created and had coffee with Rob from time to time and got to understand the reasons Go was created. Rob hates Bjarne Stroustrup and everything C++ (this goes back decades). C++-as-used at Google (which used far more threads than he says) had definitely reached a point where it was extremely cumbersome to work with.
I can think of some other things that they got wrong.
For example, when I first started talking to Rob and his team about Go, I pointed out that scientific computing in FORTRAN and C++ was a somewhat painful process and they had an opportunity to make a language that was great for high performance computing. Not concurrent/parallel servers, but the sorts of codes that HPC people run: heavily multi-threaded, heavily multi-machine, sophisticated algorithms, hairy numerical routines, and typically some sort of interface to a scripting language with a REPL.
The answers I got were: Go is a systems programming language, not for scientific computing (I believe they have now relaxed this opinion but the damage was already done).
And a REPL wasn't necessary because Go compiled so quickly you could just write a program and run it (no, that misses the point of a repl, which is that it builds up state and lets you call new functions with that built-up state).
And scripting language integration was also not a desirable goal, because Go was intended for you to write all your code in Go, rather than invoking FFIs.
A number of other folks who used Go in the early days inside Google complained: it was hard to call ioctl, which is often necessary for configuring system resources. They got a big "FU" from the go team for quite some time. IIUC ioctls are still fairly messy in Go (but I'm not an expert).
I think Go spent a lot of time implying that goroutines were some sort of special language magic, but I think everybody knows now that they are basically a really good implementation of green threads that can take advantage of some internal knowledge to do optimal scheduling to avoid context switches. It took me a while to work this out, and I got a lot of pushback from the go team when I pointed this out internally.
In short, I think Go could have become a first-class HPC language, but the Go team alienated that community early on and lost the opportunity to take a large market share at a time when Python was exploding in ML.
I remember that being a meme of sorts. Most people understood that as a C/C++ replacement, with operating systems and drivers being written in Go. System programmers laughed, of course. Eventually, when it became clear that wasn't going to happen, the token reply from Go devs became "Well, not those kinds of systems, we always meant a different kind of systems programming language, not what you all thought".
> Systems:
> Operating systems, networking, languages; the things that connect programs together.
> Software:
> As you expect. (...) What is Systems Research these days? Web caches, Web servers, file systems, network packet delays, all that stuff. Performance, peripherals, and applications, but not kernels or even user-level applications.
he goes into a lot more detail there on the things he sees in 'systems software research', and it goes pretty far beyond kernels and drivers. this is not a definition he retconned onto golang in 02014 or, i would claim, a definition unique to him
Then which languages aren’t systems languages? Python can connect things together, and so can BASIC, Java, and JavaScript. I guess only some GUI DSL languages might not be.
Goroutines use thread-per-core but run stackful fibers on top of that. (A similar model is sometimes known as "Virtual Processors" or "Light-weight processes".) This is unlike the use of stackless async-await in other languages. This peculiar use of fibers in Go is also what gets in the way of conventional C FFI and leads to quirks like cgo.
I'd love to see a resource that highlights these all on a table across programming languages as well as the associated strengths and weaknesses of such threading & concurrency models.
I usually just say that Go implements "userspace threading", since that's really what it is. Some early, pre-pthreads, implementations of Linux threads worked the same way as Go does and they usually called such implementations "M:N", to indicate M userspace threads mapped onto N kernel threads, so "M:N" is a good descriptor too.
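A toy illustration of the M:N point, where the two numbers are independent knobs (the counts are arbitrary):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(2) // N: roughly, at most 2 OS threads running Go code at once

        var wg sync.WaitGroup
        for i := 0; i < 10000; i++ { // M: 10k goroutines, each with a small growable stack
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                _ = id * id // stand-in for real work
            }(i)
        }
        wg.Wait()
        fmt.Println("goroutines still alive:", runtime.NumGoroutine())
    }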
IIUC it can, sometimes, avoid a kernel transition when it knows it can schedule the recipient of a message, but I believe that golang creates a threadpool for running goroutines on platforms that use thread primitives.
Back when Java had green threads (the late 1990s), there was no such thing as "multi-core machines". Some top-of-the-line servers had SMP (i.e. two or more physical processors running together on the same bus and sharing the same memory), but very few programs were built to take advantage of that option yet.
So Java's green threads were not a stop-gap for the 90s machine which only had one core. That's preposterous. Does Go need to disable its goroutines to support the Raspberry Pi Zero that has only one core? Obviously not! The reason Java didn't support multi-core scheduling is that multi-core processors still weren't a thing, and SMP was too high-end for them to bother (and by the time they did start caring about high-end systems, they'd already moved to kernel threads).
Nothing prevents green threads from supporting multi-core (Java 21's Virtual Threads obviously do that, but Erlang's processes also had SMP support well before Go).
I think the terms "green threads" or "user-space threads" are really not that confusing. Definitely not confusing enough to warrant inventing a new term like "goroutines". THAT is confusing. I'm happy the Project Loom team resisted the urge to give the Virtual Threads a fun name like "Jorutines".
they bought that off cray, but in 01995 a few months after they released java, they (a different division) released the dual-processor ultra 2 https://en.wikipedia.org/wiki/Sun_Ultra_series
i had a dual-processor smp pentium pro under my desk by 01998, running windows nt 4.0 and occasionally java
really tho the bigger issue with java performance was that there was no supported native-code compiler until hotspot; though gcj was available, sun didn't support it, and i don't recall its performance as being that great. so by writing your code in java you were usually wasting 95% of your cpu on the bytecode interpreter, like python today. it wasn't something you'd do for things that required a lot of cpu
> I think Go spent a lot of time implying that goroutines were some sort of special language magic, but I think everybody knows now that they are basically a really good implementation of green threads that can take advantage of some internal knowledge to do optimal scheduling to avoid context switches. It took me a while to work this out, and I got a lot of pushback from the go team when I pointed this out internally.
This comment deeply fascinates me, because I get the same feeling every time I go back and read/watch early Go resources from Rob Pike and others, but I've never actually heard it articulated before. I've always thought that surely they weren't that ignorant about PL theory and history? Or maybe Rob Pike himself was, but surely his team knew better?
It really feels like they thought they were making some special hybrid between threads and coroutines, combining the advantages of both. Like this 2011 talk[1] which demonstrated a classic coroutine code pattern implemented with goroutines and channels. But as time went on, goroutines became fully pre-emptive, and their programming model is now identical to threads. Using them as coroutines only leads to misery (read: data races) in my experience. They're faster than OS threads, sure, but that's an implementation detail at this point.
When do you think they eventually realized they were re-inventing the same green threads Java had created and scrapped 10 years earlier?
> And scripting language integration was also not a desirable goal, because Go was intended for you to write all your code in Go, rather than invoking FFIs.
"just avoid cgo" is a something i've heard many times from all Go devs
This seems misdirected. Go clearly wasn't designed for scientific computing, and that's okay. I've successfully written some multi-node MPI codes in Go, but there's not much advantage over C, and likely some disadvantages relating to the Go runtime and linking behavior.
Python (largely) is the present and future of scientific computing because people realized you can write the mathy kernels in something low-level and just orchestrate it with ergonomic Python and its bountiful ecosystem/mindshare. Python adequately checks all the boxes and I don't see how a newcomer like "Scientific Go" or R or Julia will ever unseat it. Not to mention curmudgeonly researchers have little desire to learn new tricks.
But I do use Go when needed as a systems language, and it is fantastic for whipping out the occasional necessary microservice (due to network boundaries, etc).
I think the main issue with Python tends not to be performance (though it can be hard to speed up certain bottlenecks), but rather that there's a point where maintaining it goes from very easy to very difficult due to the lack of static typing. Where this occurs can be pushed further back with very careful programming discipline (or by adopting mypy et al. from the start). But I could see a world where something Go-like with a little more expressivity could've become that glue language instead of Python.
This is a software engineering practices problem rather than a Python problem. Python has great tooling and language support for type annotations. I work on large Python codebases with ease because I leverage these things. My IDE is able to do static analysis and catch potential typing errors before runtime.
The problem is we have researchers with no SWE expertise writing huge codebases in Jupyter notebooks inside a bespoke, possibly unreproducible Anaconda environment. That is going to be a maintenance disaster no matter which language is being used.
And if you force your researchers, who are used to being productive prototyping in Python, R, etc., to use a statically-typed language, they are going to complain and be a lot less productive at their job.
Good historical perspective, thanks for it. Your comments on scientific computing/HPC are interesting. Golang could indeed have solved the two-language problem and taken off like a rocket in comparison to where it is (hovering in the top 15). However, I think it would have had to tackle some concepts very orthogonal to the systems-language mindset of Rob and others on the Go team - like vectorization as a first-class concept, parallelism (not green threads), etc. - which might have limited some of the initial implementation efficiencies, not sure. There is still room for such a language (Julia is getting there...); perhaps some disgruntled FORTRAN elder who is sick of C++ will create such a new language :-)
Or is it that scientific computing is starting to realize that it can benefit from a systems programming language?
Scripting languages are great for exploratory work, but if you want to put the work into production, scripting starts to really show its limitations. There is a reason systems languages exist, and there is good reason why both kinds exist. They are different tools for different jobs.
There are 2 conflicting goals: 1) Having a language in which it is easy to express and try out ideas and 2) Producing fast and safe programs.
A scripting language would seem to be good for exploratory scientific research, because of that. Whereas when you need to create a performant library that can do heavy crunching with reproducible results on any platform, you need the other. The question is: do you know what you want to implement, or is that still an open question?
No doubt it starts as an open question and slowly moves towards knowing.
Which isn't really much of a conflict. You can prove out your thoughts in a scripting language, and after the dust has settled you can move the workload to a systems language. Different jobs, different tools.
I guess if you're one of those weird religious types that insist there is only one true God (read: programming language) you might feel conflict, but nobody else cares.
I'm sure there are exceptions, but generally speaking, do scientists ever really end up working on systems?
Anecdotally, I work with a lot of scientists and they only write scripts. When systems are needed, they hand it off to the development team. Anyone can fumble through scripting, but there is a lot more to think about when building systems, and that's probably not something most scientists want to put effort into – for the same reasons they probably don't want to learn Rust.
Realistically, learning Rust is the easiest part of becoming a systems programmer. But there is little incentive to learn any system language if your workload is always scripting in nature. You are going to, rightfully, reach for a scripting language.
Because different languages give different focus to different features. I'm not saying there can't be a language that supports multiple goals well, but it's a bit like "Jack of all trades".
The only active, related discussion I am aware of is about the high call overhead imposed by the gc compiler. Of course, other Go compilers have different calling conventions. tinygo, for example, can call C functions from Go about as fast as C can call C functions. So that isn't really a Go issue, just a specific compiler implementation issue. And as you know (it's in the link!), the Go team themselves maintain two different compilers and pride themselves on Go not being defined by one compiler. To equate gc and Go as being one and the same would be quite faulty.
So obviously you are not talking about that one. What else are people discussing?
A "specific compiler implementation issue" when said compiler is the compiler that 99% of Go users use, can just as well be called a Go issue.
Whether "the Go team themselves maintain two different compilers and pride themselves on Go not being defined by one compiler" is basically irrelevant in praxtice, since people using/interested in Go predominantly mean and use a specific compiler (unlike with C++ where they might use one of several available compilers equally likely).
>To equate gc and Go as being one and the same would be quite faulty.
No, it would be the most pragmatic thing to do. De jure and de facto and all that.
I appreciate your dedication to reminding us that your original comment was posted without any research or thought, but it remains that the Go project itself, along with other third-parties, provide different FFI solutions so that you can pick the one that best suits your circumstances. There is no one-size-fits-all solution.
99% of Go users don't need an FFI story at all. They can choose a compiler on different merits. If you have an FFI story to consider, then it is logical that you will need to evaluate your choices on different attributes, and you may very well find that the compiler chosen by the 99% of users with different problems won't match your own needs. gc is not ideally suited to FFI. But Go offers compilers that are. This is not a Go problem. Go provides solutions. What you speak of is only a gc thing.
What story do you have for us next? That your neighbourhood restaurant, with a full menu, has no hope because the one dish you tried wasn't to your existing taste – not having it occur to you that other items on the menu might be exactly what you are looking for?
But 99% (probably 100%) of scientific computing users do need FFI, hence why there's no overlap. While there may be other compilers, my impression is most Go devs, and hence most Go libraries, assume you're using the standard golang compiler, and so FFI remains a problem for expanding the Go ecosystem to specific use-cases. I'm not suggesting Go should support scientific computing (in fact, it's likely better for everyone if it doesn't), but it's likely Go will continue to be a non-entity in the scientific computing landscape, absent someone throwing loads of money at a specific use-case and effectively locking people in.
It's worth noting that other new languages (e.g. Rust) are being adopted because they have a reasonable FFI story.
99% of scientific computing tasks are script in nature, so Go is not a good fit anyway. It is decidedly a systems language, not a scripting language. Different tools for different jobs.
Yes, obviously you can write scripts in a systems language, and systems in a scripting language, but you will have a better time if you write systems in systems languages, and scripts in scripting languages. There is good reason why we make a distinction between the two.
The scientists are almost certainly using Python. If not Python, R or Julia. And that is in large part because these are scripting languages. It is true that amongst the 1% that are systems, Rust has found a place, but it too will never make any serious headway into the scripting realm. It is also a systems language. Different tools for different jobs.
>I appreciate your dedication to reminding us that your original comment was posted without any research or thought
I, on the other hand, don't appreciate the ad hominem. What happened, have manners gone out of style?
Also, what fault exactly do you find with my original comment: that Go isn't really a player in scientific computing? Does your "research or thought" suggest otherwise?
>99% of Go users don't need an FFI story at all. They can choose a compiler on different merits.
"Users of X don't need Y" is a self-fulfilling prophecy when X doesn't offer Y. Languages without Y don't tend to attract users who need Y.
We're also talking about users doing scientific computing here, where the vast majority does need an FFI story.
>gc is not ideally suited to FFI. But Go offers compilers that are. This is not a Go problem. Go provides solutions. What you speak of is only a gc thing.
And gc is 99% of what people understand and use as Go - not gccgo. Unless gc has a good support for certain features, scientific computing ain't gonna happen.
People aren't going to invest heavily in building scientific programming libs that interop with Go when those would only work fast enough in a different, less used Go compiler as opposed to the mainstream one.
>What story do you have for us next? That your neighbourhood restaurant, with a full menu, has no hope because the one dish you tried wasn't to your existing taste – not having it occur to you that other items on the menu might be exactly what you are looking for?
Yeah, because a different version of a compiler doesn't come with different maturity, different technical tradeoffs, different community using it, different support story, and so on, it's just like "picking another item from a menu".
If that's your understanding of the situation, I can see how your argument would make sense in your mind ("just change the compiler, it's easy").
Using another language as an example: If CPython didn't have a good interop story with scientific libraries, even if PyPy did, "Python" would have gone nowhere in that domain.
And people who can't understand this, would talk all day getting blue in the face about how "it's not a Python problem, it's a CPython problem", as if that would change anything.
Manners are for engagement between people. Forum-going is a solitary activity. Maybe there is a human out there twisting knobs and pulling levers to make the software work, but if so, that's just an implementation detail hidden from the user. If that were replaced with software, I wouldn't notice, or care.
> what exactly fault do you find with my original comment: that Go isn't really a player in scientific computing.
The original contextual comment, not first comment ever written...
> People aren't going to heavily invest in building scientific programming libs interop with Go
They aren't going to invest anyway, as the vast majority of scientific computing tasks are script in nature. Go is decidedly not a scripting language. It isn't trying to be, and doesn't want to be. There were already a number of good languages in the scripting space before Go came along.
This is like lamenting that wrenches aren't winning the race in nail driving dominance. Who cares?
Yes. There are long-standing feature requests for (e.g.) the reflect package that simply don't get done because they'd break this assumption and/or force further indirection in hot paths to support "no code generation at runtime, ever".
Packages like Yaegi (which offers an interpreted Go REPL) have "known limitations, won't be addressed" issues also because of these assumptions.
It has the limitations mentioned, which are necessary to make duck-typed interface calls reasonably efficient, if you assume no code generation at runtime, ever.
There's really no other way. If you don't know beforehand all the interfaces that might exist, and/or all the types that might implement them, interface method calls would necessarily need to be either JIT-generated or a string-based hash map lookup (runtime reflection, which exists, but is slow).
Go avoids both by knowing statically at compile time the list of all interface and the list of all types that can ever possibly implement them and building vtables for those.
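A rough illustration of the two dispatch paths being contrasted (the type and method names are made up):

    package main

    import (
        "fmt"
        "reflect"
    )

    type Greeter interface{ Greet() string }

    type English struct{}

    func (English) Greet() string { return "hello" }

    func main() {
        var g Greeter = English{}

        // Static path: the method table for (English, Greeter) comes from
        // compile-time knowledge of both, so this is a cheap indirect call.
        fmt.Println(g.Greet())

        // Reflection path: a string-keyed lookup at runtime; no precomputed
        // table needed, but much slower.
        m := reflect.ValueOf(g).MethodByName("Greet")
        fmt.Println(m.Call(nil)[0].String())
    }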
Often when I am developing systems code or RPC client code, I sit in a REPL and make repeated ad-hoc calls to various functions, building up various bits of state (I started using Python long before Jupyter). I find this much more intuitive than writing code, compiling it, executing it, going back to the editor, changing the code, and re-running it with something that loads a bunch of data, just so I can answer a question about the runtime behavior of the system I'm working with.
There is certainly something to be said about being able to answer questions about another system by poking at it, but that is a scripting task. You would be better served by a scripting language. And, as it happens, most scripting languages come with REPLs. That is a solved problem that was solved long before Go was ever imagined.
Just because you are building a particular systems program does not mean everything you do has to be a systems problem. And, really, if you don't exactly know what you're building, it is probably too soon to consider any of it a systems problem.
> You say systems problems like Go is for embedded work.
I'm not sure who is talking about embedded work. That didn't come from me. There is nothing in this thread about embedded. Did you accidentally press the wrong reply button?
I say that Go is for systems. Do you think Go is actually designed for scripting? If so, why?
> Half the things I have seen replaced Python scripts
Python is often used to build systems.
Python is a scripting language, yes, but that doesn't mean you can't build systems in it – just as you can build scripts in a systems language. But there are tradeoffs in crossing paths like that. Systems languages will be better for building systems, and scripting languages will be better for building scripts.
Perhaps the applications you speak of are actually systems and that is why developers saw reason to convert them into using a systems language?
> Seeing what your internal state is after taking X action is exactly what REPLs are good at...
Quite good. But that workload is not systems in nature, so why would you ever do that in a systems language? That is a scripting workload. Naturally you would use a scripting language. And, lucky for you, most scripting languages include REPLs out of the box!
Bjarne took the wonderful thing that was C and made C++. Rob is not a fan of C++: he thinks the language evolved badly and added poor concepts from the beginning (IIRC iostreams and templates were two of the concepts), and it embedded a number of design decisions that led to extremely slow compiles and links (like, 45 minutes to link a Google binary). Ian Taylor even wrote a better linker (gold) for Google to deal with that.
when I discovered STLport around 2001, it was a real revelation and very convenient because I was finally able to compile the code my coworkers wrote on "real UNIX" with "real C++ compilers" (lol cfront)
In researching my answer I came across
http://www.stlport.org/resources/StepanovUSA.html#
"putting it simply, STL is the result of a bacterial infection."
Don't know about Rob Pike in particular but Ken Thompson, who probably had the same reasons for "hating" Stroustrup, had this to say about him (from Coders at Work):
Seibel: You were at AT&T with Bjarne Stroustrup. Were you involved at all in the development of C++?
Thompson: I'm gonna get in trouble.
Seibel: That's fine.
Thompson: I would try out the language as it was being developed and make comments on it. It was part of the work atmosphere there. And you'd write something and then the next day it wouldn't work because the language changed. It was very unstable for a very long period of time. At some point I said, no, no more.
In an interview I said exactly that, that I didn't use it just because it wouldn't stay still for two days in a row. When Stroustrup read the interview he came screaming into my room about how I was undermining him and what I said mattered and I said it was a bad language. I never said it was a bad language. On and on and on. Since then I kind of avoid that kind of stuff.
Seibel: Can you say now whether you think it's a good or bad language?
Thompson: It certainly has its good points. But by and large I think it's a bad language. It does a lot of things half well and it's just a garbage heap of ideas that are mutually exclusive. Everybody I know, whether it's personal or corporate, selects a subset and these subsets are different. So it's not a good language to transport an algorithm—to say, “I wrote it; here, take it.” It's way too big, way too complex. And it's obviously built by a committee.
Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn't cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that.
Seibel: Do you think that was just because he likes all ideas or was it a way to get the language adopted, by giving everyone what they wanted?
Thompson: I think it's more the latter than the former.
Interesting opinion, it certainly shows the broad mindset behind Golang (and its predecessors Alef and Limbo). Also let's face it, it really took Cyclone and Rust to prove that a broadly C++ish language could be made both safe for large-scale systems and developer-friendly. If your only point of reference is C++ itself, these remarks are not wrong per se.
99% of programmers, but especially the brilliant/well known ones, are insufferable egotists. Many cannot have a technical disagreement without despising the person they disagree with.
Professionalism and the tech industry are just starting to get acquainted.
Most professions, quietly, are currently like this. Architects and scientists and doctors and surgeons and lawyers all develop strong opinions about each other based on their positions.
For instance: ask a lawyer what they think about the overturning of Roe v Wade. Now ask them what they think about their colleagues who disagree. Don't forget to duck.
Which isn't to say that we wouldn't all be much better off with more distance between our opinions and our identities. But the thing you're looking for is a cultivated practice that is often not at odds with 'egotism' (what is that, precisely, anyway?) but _entailed by_ it: the not-wanting-to-be-the-kind-of-person-who-does-XYZ.
Not all vanities are risible.
What you're looking for is a long-cultivated, inward practice that can be supported or hindered by all the usual forces, local context (culture, practice, etc) chief among them.
Put another way: your claim isn't that most programmers are unprofessional. It's that they're _uncollegial_. And you're right. So are a lot of other contemporary professionals. It's a shouty era.
That kind of behaviour leads to a lot of valuable people abandoning workplaces because they can't stand the abusive atmosphere. In the end all that's left are the shouty arseholes and that doesn't do anyone any favors.
99% of [people], but especially the brilliant/well known ones [in any domain], are insufferable egotists.
IMHO this is normal. When somebody puts in a lot of effort in mastering something they intrinsically know that they are better than most people in that something. As social animals, getting "noticed" is a form of being conferred "status" in the group. Thus you tend to "act out" to acknowledge/confirm that recognition. It is fine as long as it is within acceptable social bounds and not out of touch with reality.
Interesting context. I use Go a lot for enterprise backend type work and I have to say I'm glad it's not geared towards being an HPC language, but to each their own.
I can totally agree on excluding scientific computing. A lot of that simply is more cobbling something together than software engineering. And HPC wouldn't be happy with Go anyway.
> Python was exploding in ML
It's very easy to start in Python. So it's taught everywhere, and everyone knows it. Until 10 years after the majority of colleges have replaced Python with another language, Python will stay dominant. From that point of view, Mojo is quite clever.
> And a REPL wasn't necessary because Go compiled so quickly you could just write a program and run it (no, that misses the point of a repl, which is that it builds up state and lets you call new functions with that built-up state).
He'd have had a point if the Go compiler could reliably compile programs in under 50-100ms. The claims that Go is a fast compiler always seem to be relative to C++ or something. Last time I checked, just compiling hello world took over 200ms, which was shocking to me given how I'd heard that one of the language's claims to fame was a fast compiler.
Thing is, there are several REPLs for Go. That we have quite a few comments pretending that Go doesn't have a REPL comes as a direct result of nobody – that is, beyond the REPL authors wanting to scratch an itch – ever needing one. After all, if a REPL was necessary, the people commenting that Go doesn't have a REPL would know that Go has a REPL as they would be users.
I dream of a world where we have Go or something similar with the same / similar UX of Jupyter Notebooks. I keep an eye on Julia but I never make the leap to use it and still pickup Python (or Go).
It's ok that Go didn't work out for scientific crowds, since Julia works better for the scientific community as a replacement for hacky Matlab/C++/Fortran cobbled-together scripts.
PageRank was implemented as an iterative MapReduce on classic hardware (and Sibyl later adopted this model, using MR as an engine to do what is really an HPC job). Not sure I really consider it HPC, more like high throughput. However, the MR approach worked really well when Google was scaling super-fast in the early days; if they'd chosen to solve the problem using MPI and InfiniBand on expensive SGIs, they probably wouldn't have become the company they are today.
agreed about infiniband and sgis, but numerically approximating the principal eigenvalue of a large sparse matrix seems solidly in the core of traditional hpc. btw pagerank predates mr by several years
The anecdote about writing the compiler in C is very interesting. LLVM is obviously very popular these days, so it's refreshing to see a counterexample. I also love that the compiler was decidedly mediocre. It just goes to show that often the user (or developer) experience is more important than the technical merits of a product.
> It just goes to show that often the user (or developer) experience is more important than the technical merits of a product.
I think Go still won on technical merits, because it never really competed with other compiled languages, instead mostly converting Python and Java programmers. Compared to those, startup time and memory usage of go programs are leagues ahead, and the quality of the codegen doesn't change that much.
> First, he was generalizing beyond the domain he was interested in [...]
And then they proceed to dump on async/await. It's not a target concern for Go but often you want to run code specifically on a UI thread or call into a foreign function on some specific OS thread. AFAICT that's most easily done with async/await.
I'm surprised the fact that they managed to keep the language small and minimal isn't mentioned as a huge success. To me that is the number one reason to use this language: it forces you to not be distracted by language constructs (there aren't enough for that), and to focus on what it is exactly you're trying to build. Even as an educational tool, this is excellent. Maybe they don't realize it because they come from C, but when you come from more recent languages that include everything and the kitchen sink, this is a godsend.
It's now to the point that whenever I develop a feature in a language, I ask myself "how would I do that in Go" to ensure I don't go fancy with the type system for no good reason.
In my personal experience, Go language "limitations" have always forced me to clarify and simplify my design. In the end, my code is way better than what I originally intended.
I had to write seven or eight identical functions transforming some data because Go didn't have generics. Hardly "better code".
They've now released generics, thankfully, but there are many places where there's unnecessary repetition and clunkiness because the authors "know better".
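For what it's worth, the post-1.18 replacement for those near-identical copies is a single function along these lines (a sketch; the real transformations were on concrete domain types):

    package main

    import "fmt"

    // Map is one generic transformation standing in for N copies that
    // differed only in the element types.
    func Map[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func main() {
        lengths := Map([]string{"go", "generics"}, func(s string) int { return len(s) })
        fmt.Println(lengths) // [2 8]
    }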
Couldn't you make that data conform to an interface and have the algorithm work on the interface?
In those scenarios I usually work either with interfaces, or with composition (struct embedding), or as a final resort with codegen for things like generating bindings to database tables. The codegen is usually quite a good solution and makes debugging the generated code very straightforward, since it's all there, in the _generated folder, instead of in a temporary file generated by the compiler. It also makes the generator code a separate project, which IMHO is a better solution than using all kinds of macros and complex metaprogramming directly in the final project.
> Also, writing a compiler in its own language, while simultaneously developing the language, tends to result in a language that is good for writing compilers, but that was not the kind of language we were after.
I have seen this sentiment a few times recently. First of all, it raises the question: is a language that is not compiled in itself a bad language for writing compilers? My intuition is usually yes. Secondly, the implication is that a good language for compilers will not be good for other applications. I really don't understand this, because a compiler uses most of the same building blocks that are used for other programs.
I would really like more context into what the author is trying to say though.
For what it's worth, Dart's intended domain (client UI apps) is much farther from compilers than Go's intended domain (servers and "systems programming") but we write almost all of our tools and compilers in Dart.
Dart isn't always the best language for compilers (we have a lot of Visitor patterns floating around, which are always tedious), but it's plenty good enough and it keeps the whole team working in our own language eight hours a day, which I think is invaluable.
Also, it means that when we make our implementations or compilers faster, we get a compounding effect because our tools get faster too.
The ideal set of building blocks depends on the problem.
If the building blocks make it easy to write concurrent code (Go, Erlang), then it becomes easier to write a server. If they make it easy to represent "A or B or C" and pattern match on trees (ML-like languages), then it becomes easier to write a compiler.
Add to that: if you are trying to make an easy to onboard language, you want to look at how beginners use it, not experts. Someone writing a compiler for language X is certainly an expert in X.
Writing compilers is mostly aided by having a robust type-system and elegant tooling for parsers and AST transformation and so on.
Writing a compiler requires computer science knowledge, requires thought.
Haskell I think is a perfect example. It is a language that is well suited for writing compilers, but also very well suited for building services, backend applications, really anything if your developers are of average intelligence.
Go, however, wants to optimize for developers who think a for loop is easier to understand than an applicative functor, who think that generics are unnecessarily complex.
If you're trying to build a language for "the lowest common denominator, the average googler", that's the opposite of building a language for compiler writers, so in that case building a language that can represent such a hard CS problem well is counter productive.
I made some videos introducing Go. They're not important and the view count isn't impressive at all but it interested me that the one with the most hits is about "Go Project Structure".
I have to say I'm not at all a fan of how modules get imported in Go - and how it works with GitHub. It's an extremely confusing and complicated issue, and my bête noire was forking a library. The way things are exported, the paths you need to use to access them... the whole area is far worse than the problems I've ever had with C/C++ and certainly Java or Python.
> A key point was the use of a plain string to specify the path in an import statement, giving a flexibility that we were correct in believing would be important.
Wrong.
Importing a string is like a touchscreen UI in a car: it's deferring the problem. It's lazy.
> But we didn't have enough experience using a package manager with lots of versions of packages and the very difficult problems trying to resolve the dependency graph.
No one on the team had ever used Java? Really? Maven was ~8 years old when Go was released. It came from previous learnings and errors with Ant. Maven did other things too that aren't necessarily relevant or necessary (eg standardized directory structure). But the dependency system was really solid, even if it was verbose because it was XML.
> Second, the business of engaging the community to help solve the dependency management problem was well-intentioned
This feels ahistorical. It felt more like the problem wasn't understood and/or thought important. This fits in with the importing a string: it's a way of not solving the problem, of kicking the can down the street.
Because it’s neither horrible nor regrettable, it just doesn’t cater to your perfect idea of what it should be. It’s a smart way to encourage people to handle errors.
you think it's "smart" to encourage a large fraction of function calls to have
    if err != nil {
        return nil, err
    }
after it? really? you don't think it's just a very simple way to do it without having to do a lot of work in the language? and then make everyone use a linter to avoid bugs? oh and you can't always literally do 'return nil, err' because strings have to be ""?
it's certainly defensible as a minimalist approach; claiming it's actually good or smart or ideal is a pretty weird take.
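To illustrate the zero-value point, a minimal sketch (the path and function name are invented): the "same" boilerplate mutates with the return type, so it can't be pasted mechanically.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func readName(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", err // not "return nil, err": the zero value for string is ""
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readName("/etc/hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(name)
    }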
You think it's "a lot of work" to write an if statement? All of the proposed variations have been shown to be either just different syntax (a weak, valueless change) or ones that would allow people to ignore handling errors.
You can't always return nil? Is that a complaint that you are required to do things differently for data types that require it, inside of a complaint about doing things the same?
It’s not a weird take, things have tradeoffs, this is a tradeoff of using Go.
It's not a lot of work, you can just let the IDE spew it out for you and feel like a certified J2EE programmer circa 2003 who lets the IDE write everything for them. But yeah, writing code is not the problem. Reading the code is.
So then it's less readable because you have to process a simple error return? I won't even comment on the claim that 60% of code is error handling; that sounds like code that needs refactoring.
More precisely because "they" are Rob Pike, and Rob Pike doesn't seem to think error handling in Go is horrible.
Other members of the Go Team (Robert Griesemer, Russ Cox) trialed several solutions for this problem, but the issue was just too controversial within the community.
It’s a bit repetitive but so is writing `for i, v := range n {}` everywhere. Code loops and repeats syntax all the time, so what? Argument doesn’t hold water for me (not attacking).
Then people should use those languages. From the beginning Go has been about simplicity and utility, not about providing multiple patterns and alternate syntax.
I would also argue that repetitive code is a failure of the engineer who implemented it. If writing code is repetitive then people may want to look into generating it instead.
I am not a Go user and the language has never appealed to me. On paper, it offers less than more established ecosystems for generalist backend development, such as C# and Java.
The structure of that post is weird, in that it's quite difficult to figure out which parts were done right and which were done wrong.
I gotta say, reading this tells me exactly why Go struggles to evolve as a language.
Go ahead, skim the article and tell me what Go got wrong. Stumped? Yeah, well I was too. That's because a lead designer can certainly write paragraphs upon paragraphs of "Look at how amazing and kool we are for being so smart" but doesn't seem capable of writing more than half a sentence of "we got this wrong".
I saw 2 things Pike believes Go got wrong: documentation and packaging. The rest is a lot of self-congratulatory "Look at how smart we are and how dumb our critics are".
No language is perfect; every language has its own weaknesses (and Go has plenty). Yet for some reason Pike can't help but gush over how perfect it is. Even with the 2 faults listed, the documentation fault very much has an undertone of "our functions are so simple but the dumb users weren't smart enough to understand the amazingness of our elegant code and libraries."
> The key missing piece was examples of even the simplest functions. We thought that all you needed to do was say what something did; it took us too long to accept that showing how to use it was even more valuable.
Go is a language designed in the last 20 years that decided "You know what wasn't bad about C? Pointers and null". Yet the only part pike can really fault is "well, we didn't know how to do package management so we sort of messed that up".
He even goes so far as to talk about how interfaces are so good and generics are dumb, even though Go has the notorious `interface{}` used to try, in a type-unsafe way, to pass around arbitrary objects. (But hey, Pike hand-waves that away with "the dumb community just isn't smart enough to accept the brilliance of our type system".)
How can a language evolve when the lead designer puts on blinders to the community raising faults? When a "what we got right, what we got wrong" post is filled with how awesome the language is with scant reference to what is wrong? Even just openly dismissing concerns with the language?
Contrast that with a lead language architect I really like, Brian Goetz (seriously, go watch his talks about evolving Java). His approach is nearly the polar opposite. It's "Ok, we've seen that a lot of people using java run into problem X, so we are going work hard to find language or library solutions which help improve things for the developers". They very rarely just dismiss problems and when they do it's more along the lines of "Well, we'd love to have it but ultimately there's not a good way we can think of to do this which is also backwards compatible."
And you can really see that in the way java has evolved from 8->21. From adopting a faster release process, a ton of new language features, and a LOT of active development into the most painful parts of dev work. Virtual threads is a shining example of this. Java now has go like concurrency even though it took several iterations and trials to bring that in, they worked hard to evolve the language in a way that fits perfectly. Valhalla is another great example of the language working hard to solve real problems. A 10 year project open to the community working to fundamentally redefine the JVM memory model because the one designed in the 90s doesn't fit with modern hardware.
But hey, if you like go, that's great. Just don't expect even an iota of evolution. Barring a change in leadership, generics is almost certainly the last new language feature (at least in this decade). A feature that landed primarily because of over a decade of articles and community screaming about how horrible the lack of generics impacts everything.
To be fair, Go pointers do not allow pointer arithmetic and unsafe casting. They are really not much different than Java's references.
The main issue is null. And you can't even say Rob Pike is not deeply familiar with Tony Hoare, and yet the billion dollar mistake was very eagerly repeated.
Brian is great in his own way and doesn't need to be compared to Pike at all. And he has to admit a lot of problems, as Java is doing things quite the opposite of what it did a decade or so back. Go leads are not doing this because Go is not changing course; when they do, they may admit more.
The Valhalla and Loom examples you gave show how important lightweight concurrency and the memory layout of objects are. Go had this from day 1. As for faster releases, after Go 1.1/1.2 they always had a 6-month release schedule, which Java implemented a few years back. Meanwhile Java has been running impressive projects for the last 10 years to add these features.
Before Go came along, Java never really published or committed to performance numbers for GC. It was always "here are a few GCs, use whatever works for you". Now they give far more details, after Go pushed for numbers.
> Barring a change in leadership, generics is almost certainly the last new language feature
Pike retired many years back; he does not hold any official position in the Go project.
For sure, you like Java, and I have been working in Java practically forever. But you seem to be losing all sense of perspective here.
> Just don't expect even an iota of evolution. Barring a change in leadership, generics is almost certainly the last new language feature (at least in this decade)
The fact that he doesn't mention context as a huge failing of Go is very suspect...
I also find the post a little too self-congratulatory for what was essentially a reinvention of C with a GC at the right time, and not just C the language, but C's philosophy on programming.
I think as Go has become more popular, the core of C has been drowned out by people coming from other languages who insist on too many libraries, too many abstractions, generic solutions at every level, and more features. Go today has essentially moved much closer to Java, and some projects like Kubernetes are without a doubt just Java projects with a slightly different syntax.
Concurrency and interfaces I think are also a big fail in Go.
Interfaces, because they failed to add enough of them to the standard library for simple things like logging, filesystem access, etc., causing numerous incompatible implementations. That wouldn't have happened if they hadn't been so gung-ho on interfaces being defined where they are used instead of having community-wide interfaces.
Concurrency is harder to summarize, but day to day you still get locking issues, libraries that don't expose interfaces that are easy to work with via coroutines (which is ironic given Rob's finger-pointing at async/await coloring), and as I said, context is really a prime example of why CSP is a bad model compared to mailboxes and the Erlang model of concurrency. Every function has to take an extra noisy argument, and every function has to wait for a ctx cancellation, instead of just baking the semantics of cancellation into the language itself.
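A minimal sketch of the ctx plumbing being described (the function names are invented for illustration): cancellation has to be threaded through every signature and checked by hand at each layer, because it isn't part of the language.

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // Every layer must accept ctx and remember to check it; neither is enforced.
    func fetchOrder(ctx context.Context, id int) (string, error) {
        if err := ctx.Err(); err != nil {
            return "", err
        }
        return lookupCustomer(ctx, id)
    }

    func lookupCustomer(ctx context.Context, id int) (string, error) {
        select {
        case <-ctx.Done():
            return "", ctx.Err()
        case <-time.After(50 * time.Millisecond): // stand-in for real work
            return fmt.Sprintf("customer-for-order-%d", id), nil
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
        defer cancel()

        if _, err := fetchOrder(ctx, 42); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("cancelled:", err)
        }
    }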
It really is just C with a GC and slightly better support for generic programming, which is why I like it quite a lot as a default language for writing basic programs.
I like Go, but I didn't like the post for somewhat similar reasons. I felt it pats itself on the back for several language design wheel reinventions that are basically lesser versions of counterparts in other languages. For example, interfaces are touted as some brilliant solution for polymorphism, as though Haskell wasn't already doing the "sets of methods" idea (but with type arguments from the start) via type classes as early as 1988... sure, interfaces are a bit different since they are implicitly implemented, but the basic idea is the same. Erlang being a good prior in the concurrency space is another example. In general it left a bad taste in my mouth, since it sort of felt like the language was designed in ignorance of the vast field of ideas already present in programming language theory. It really comes off as though the team thought that Java, C, and C++ were the only languages that existed before Go. Wadler's (eventual) involvement suggests otherwise (but then again, maybe they only knew about his work in relation to Java), and I realize there are constraints to giving live talks that force one to trim things down, but I really dislike this tendency to think in a vacuum, celebrate your own (re)discoveries as brilliance, and then present it all with hardly a mention of the large body of research that went before you, and that you should have consulted during your creative process. Given the access we have to research today, there's little excuse for it. I doubt that Rob Pike actually falls into this camp or lacks this knowledge, but the talk (in essay form) really makes the team of language designers seem like it was a team of people that knew little about language design and happened to stumble onto analogues of good ideas that already existed in more mature form.
I will say, though, that I like that the writing of a spec is touted as one of their best decisions, as I really wish more novel languages would bother to define specs these days.
The fact that Erlang has existed since the '80s really raises the question "Why do language designers keep fucking up concurrency?". It's a feature that's really painful to tack onto a language after the fact (looking at you, Python and JavaScript), but is absolutely necessary for any programming language.
I see goroutines as a solid "next best thing" after BEAM Processes, both of which are miles ahead of async/await, which is admittedly an improvement over any lower-level thread manipulation.
Because Erlang is "weird" and not enough programmers learn it for it to have breached the cultural ramparts regarding what concurrency is in a programming language. I learned Erlang and Elixir and I've never been happy with any other language's concurrency mechanisms and primitives since then. Between multitasking features that let you avoid concurrent tasks causing CPU starvation, message passing allowing proper decoupling of concurrent tasks in both asynchronous and synchronous ways, and all the other little ways it does concurrent programming right… nothing holds a candle to it. There are some nice libraries in other languages but it's not the same, because it's not built into the language at the same level.
Async/Await, asynchronous IO, none of this is really the same kind of “concurrency”… it’s why I wish I had more chance to use the Erlang/Elixir+Rust combo… low level safety and high level concurrency are a match made in heaven.
Rust is getting async-in-traits in its newest release, which can also be seen as implementing this "message passing" model in an async context. It's super impressive how the language keeps evolving so fast and improving its relevance over time.
A huge amount of the value-add of BEAM languages is the fact that they have support for message-passing concurrency built in at a very fundamental level, which means that the "pretty path", the comfortable way to write code in those languages, simply supports the concurrency model.
You cannot achieve this by tacking half-baked support onto an already-designed language. You can tack on async/await, which is exactly why it's so popular.
What's the BEAM languages' FFI situation like? I've got the impression that the more higher-level concepts a language supports, the worse it is to do FFI.
Complicated. There are essentially 5 possibilities:
- ports are subprocesses acting as erlang processes
- erl_interface is a more efficient version of the same as the communication uses BEAM's external term format
- C nodes are what it sounds like, basically a separate program acting as a node in a beam cluster
- port drivers have a shared library acting as a process, it's way more efficient but jettisons safety as a crash in the library kills the runtime
- finally, NIFs are synchronous functions called in the context of an existing process. They are the fastest option, but not only can a crashing NIF kill the emulator, it can also screw up the VM state, and NIFs can't be managed by the scheduler without their cooperation (which can severely degrade system stability). BEAM has a concept of "dirty NIFs" which are long-running, can't be suspended, and can't be threaded; if those are appropriately flagged they are run on dedicated "dirty schedulers", which have more overhead but fewer restrictions. In essence cgo is a more impactful version of that (although I believe dirty schedulers were invented a while after cgo existed; previously "dirty NIFs" would just see you drawn and quartered)
I'd say (clarify?) that giving an answer to "What's the BEAM languages' FFI situation like?" is more complicated than actually using most of these options. Once you choose one, implementation ranges from "trivial" to "a little obscure".
Ports just wrap executables you can feed STDIN to and get STDOUT from in order to treat those streams as messages. You make one with one line of code, they behave just like any BEAM process. (you can message them, kill them, etc). If your goal is to use an existing DLL, this involves some C glue code, which looks like what you'd do if you wanted to be able to call the DLL functions in a bash script (take in STDIN, translate strings to appropriate types, call function, return strings to STDOUT).
The erl_interface option is using that library to replace the "translate to/from strings" task with "translate to/from BEAM types". Still a fair amount of glue code for "I want to call this DLL function", but I feel like it should be possible to codegen a lot of it. That might exist already, and if it doesn't it sounds like the kind of fun project I might pick up.
C nodes are using erl_interface to its fullest, defining a full blown BEAM node with one or more processes running on it in C. In practice, this means you can send messages to other BEAM processes, rather than going through an intermediary Port process. It's definitely the most involved option, but it's well documented (like everything else in OTP).
Port drivers free you from the concept of messages within the (even smaller) C code: You make a DLL (that links your target DLL) that provides some mapping information and some dispatch code, then in your BEAM language handle all the "send/receive messages" stuff associated with being a BEAM process. The BEAM node crashing if the library crashes is rightfully considered a significant issue, but it's worth noting that we only care about that so much because the BEAM spoils us with much better safety normally. In any other programming language, a crash in your linked library would probably be expected to cause your entire application to crash, whereas in BEAM land we can even mitigate this by putting our risky code on a different BEAM node, running on the same or different machine, to limit our blast radius.
NIFs allow you to present your C DLL function call as a normal, synchronous BEAM function call. They require the least C glue code, but if any of those calls take more than a millisecond or so you start getting into "thar be dragons" territory on the clean scheduler, or require the use of the dirty scheduler which slows everything else down.
Ultimately, the punchline to all this is that if you want to call an existing shared library from the BEAM, you're going to have to write some amount of C, ranging from "a couple lines per function you want to call" to "defining a small runtime that handles dispatch based on strings".
With async/await you can describe your logic with loops and conditions intermixed with async IO. In Erlang you’ll have to design a state machine, split the logic between event handlers and redesign the state machine each time the logic changes.
But that's also the gorgeous benefit. Of course it is state machines all the way down, this is Turing Completeness at its finest where various forms of Universal Turing Machines are plentiful. The beauty is that compilers are great at taking what look like iterative step-by-step, higher level descriptions and rewriting (monadically transforming, to use the fun words for it) that to state machines for us, taking most of the mental overhead out of the process of building a state machine tiny, dumb lego by tiny, dumb lego.
(That's partly why I find all the discussions about "async/await" "colors" silly: they aren't colors, they are types. You build types for other state machines, right? You don't complain about Regular Expressions [which are also often rewritten to simpler state machines] having their own types and that they can only embed other Regular Expressions and that consequent state complexity as having "colors", do you? About the only real difference between async/await and Regular Expressions is that the types for async/await implementations are all guaranteed to be monads in the async/await languages and might not be for Regular Expressions. Though monad Regular Expressions libraries exist.)
Colored functions represent exactly the failure state of types: When you've over-tuned your typing to the point that it takes substantial refactoring to make something that should just work, work.
Whether a function is called asynchronously or not should be a decision of the caller, not of the function itself. Sometimes it doesn't make sense for me to continue when I don't have an answer yet. Sometimes it does. That makes it the caller's concern.
The solution to this in Javascript would be to simply make every single function an async function, but that introduces a bunch of clutter because async/await is a feature to be bolted onto an existing language, rather than a paradigm to build your language around.
In contrast, look at Elixir: No function coloring to be found. Want to run something asynchronously?
res = Task.async(&any_function_i_like/0)
# some other stuff
out = Task.await(res)
(that's syntax sugar for spawning a process that will die and produce a result at some point, and then waiting to receive a message from it). Want to run that same function synchronously?
out = any_function_i_like()
No colors, no nonsense.
"But wait!" you might be thinking "What happens if the task fails?" Well, that's up to the caller, too. Maybe that only rarely happens, and there's no logical path forward from that failure, say, a very tiny meteor punches out the processor core that task is running on. That's okay, by default if the child process dies, so does the caller. This is passing the buck, but is the correct answer more often than not, so it doesn't make sense to make the developer do extra work to make it happen. But maybe a failure is somewhat likely, and we just want to keep trying until it works. That's easy, too! Just spawn a supervisor process that'll make a new copy of the called function if it fails. One more line of code relative to the async case. Maybe the function is just to make some side effect, and it doesn't matter if the parent crashes or not. Maybe it does.
There's a lot of different cases when it comes to concurrent programming. It does not make sense to make those decisions unilaterally for someone using your library. Leave it up to the caller to decide whether your function is synchronous, asynchronous, long-lived, independent, dependent, etc. Unfortunately, that's hard to tack onto a language as an afterthought, so we get async/await.
> The solution to this in Javascript would be to simply make every single function an async function, but that introduces a bunch of clutter because async/await is a feature to be bolted onto an existing language, rather than a paradigm to build your language around.
I completely disagree with this, and I think that's a strong summary of why our opinions are so hugely diverged here. Async/Await was decades of programming language research in the making. It's "just" Haskell's do notation but with a lot of smart research into "okay, but how do we do that for beginners and junior developers". It may seem like an overnight success, but that's not because it is "tacked on", it's because it is well thought out, well tested, and well designed and therefore has spread to a number of programming languages. (I didn't say anything about Javascript. Javascript wasn't the first to add async/await and wasn't the last.)
I'm not sure it is a "failure state" of types, but I can agree it is a hard edge case, and there have been criticisms of it for decades. The "why isn't every function in Haskell in the IO monad and written in do-notation" complaint is the same as the "why isn't every function async/await" complaint. It's the same reason not every function in a language with iterators is written as a generator with yield instead of return. It's the same reason you don't solve every problem with a single RegEx. It's the same reason you use OO classes for encapsulation (or don't). Pragmatic programming languages are never just a single paradigm.
The callee knows what resources it is encapsulating and which of those need to be asynchronous. It knows better than the caller whether there is going to be a suspend point and what state it will need to transition into next, and it can prepare its internal state machine for how it needs to operate. It's useful to bubble that library knowledge up into the type system, and async/await gives a strong, type-safe way to do that (plus it eases a lot of the trivia of building a state machine in languages that support async/await). Just like a generator function is a basic state machine for iterable results. Just like an OO library might encapsulate some amount of memory usage.
Your Elixir example is a single function, and you can do that in any of the async/await libraries too. The trick to async/await is that what you are writing isn't a single function but a complex state machine. Have you actually tried to rewrite async/await-heavy code to something like Elixir? It's not hard, but it is a lot like hand-converting a RegEx to a DFA.
That doesn't mean that callers "lose power"; the power dynamic is shifted, but it isn't as deranged as you seem to think it is. async/await just defines the state machine possibilities; it doesn't know or care whether the async/await state machines that have been defined complete synchronously or asynchronously. It doesn't care what threads/threadpools/green threads/greenlets/threadlets/whatever else those state machines run on, including the same thread as the caller. The state machines are generally agnostic to how their state transitions are triggered. Javascript doesn't give you a lot of power there, but that's because browsers have always been in charge of JS threading, and that was true before async/await. That's not a deficit of async/await, that's a deficit of Javascript. (That you can't see past Javascript may indicate why you think it is a tacked-on part of the language and not a success story from other languages modestly applied.) You can look up some of the tools that C# presents, for example, to let callers influence downstream scheduling. You can look up all the complaints that Python, despite being the "there should be one clear way to do it" language, decided that "explicit is better than implicit" was the winning mantra instead and left scheduling to a handful of different libraries with different opinions of how async/await should be scheduled by callers; the benefit is that apps have to explicitly opt in to one of those behaviors rather than getting a "works fine for most developers" out-of-the-box default. The "power" remains; you just have to learn new ways to apply it.
> I also find the post a little too self-congratulatory
Yes. I mean, the language is a huge success if you measure how far adoption has come. But I expect more from a "what we got right and what we got wrong" post.
IMHO what went right is compiler/tooling speed. The async story ultimately is just crappy; the "no colored functions" claim is a lie that blows up in production. Go's interfaces also sound great in theory but don't really work so great when you use them. They're mostly used for DI in UTs. Sometimes I wonder if I'd use any interfaces at all if there were a way to monkeypatch deps just for UTs (yes I would, but 99% less).
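To make the "DI in UTs" point concrete, here's a small sketch (Clock and greeting are invented names) of the common pattern: the interface exists mostly so a test can swap in a fake.

    package main

    import (
        "fmt"
        "time"
    )

    // Clock exists mostly so tests can substitute a fake implementation.
    type Clock interface {
        Now() time.Time
    }

    type realClock struct{}

    func (realClock) Now() time.Time { return time.Now() }

    type fakeClock struct{ t time.Time }

    func (f fakeClock) Now() time.Time { return f.t }

    func greeting(c Clock) string {
        if c.Now().Hour() < 12 {
            return "good morning"
        }
        return "good afternoon"
    }

    func main() {
        fmt.Println(greeting(realClock{}))
        // In a unit test you'd pass fakeClock to pin the behavior.
        fmt.Println(greeting(fakeClock{t: time.Date(2024, 1, 1, 9, 0, 0, 0, time.UTC)}))
    }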
And the world would be a better place with a positive bias.
Except that this article purports to be self-critical, so arguably the very topic of this thread should cover both the positive and negative of the language, and our comments are merely expanding on it.
Slightly off-topic, but the most honest programming language book that describes “what we got wrong” is “Effective Java” by Josh Bloch. It also used to be the best book to learn Java before 1.7 (maybe still the best today, but I’m not following Java anymore).
I would have more respect if they at least admitted to the flawed type system, but instead they say it is not a problem. It is disappointing to see past mistakes repeated in a new programming language. Even the Java language creator was humble enough to admit fault for the null pointer problem. The Go devs do not have such humility.
It's interesting that they brought in Phil Wadler to help retrofit polymorphism; it is literally history repeating itself (Wadler did the generics retrofit for Java over 20 years ago).
I was following the mailing list as this was being developed way back then, but it always frustrated me that the team chose such a trite, un-googleable name for the language. I mean, if you want to find the community and the docs, you have to search for 'golang' which let me tell you, was not an automatic solution to the problem. That's actually my only complaint.
100% agreed. In addition to this, due to some people not capitalizing the first letter of Go, distinguishing between the verb go and the noun go takes more time than usual when reading articles related to Go. Maybe to native speakers it doesn't matter?
I think the comment's point would be that when C came out there were no search engines, so nobody would've considered that. When Go came out, not only were search engines a critical everyday software, the Go language designers worked for a company whose original and flagship product is a search engine, so they should've known.
(I also think the same criticism could be directed to other languages that are English words such as Swift, Rust, etc. but to a lesser extent)
I really wanted to love Go: I spent time learning it, bought a few books, read them, etc.
It did not work out for me in the embedded field: binaries are too large, there's no real-time behavior because of the GC, and I still need to interface with C code here and there.
In the end, I'm back to C and C++; I consider my time on Golang basically a waste.
Go has its own uses, e.g. cloud-native work or something, but even there lots of alternatives exist.
It's pretty hard to replace existing popular languages, as those languages are also evolving fast.
Anything with a GC is an immediate killer for small embedded projects. Anything STM32 sized or less is going to struggle with GC due to lack of memory.
But in the intervening years the average embedded device has gotten larger and more powerful, so now I'm doing embedded programming on devices with Linux in languages like Python and Go. So maybe just wait and your Go learning will be useful again?
Go's version of interfaces is fairly unique; if you read that as them inventing interfaces, you misread. If you read it as them having a unique twist on interfaces they felt was powerful, then you read it correctly.
It never even occurred to me that someone would suggest they're claiming to have invented interfaces, mostly because obviously they didn't.
I was careful to try to read what was said; the language might be a bit loose because it's a transcript of a live talk. That said, the example they gave and their motivating problem of the qsort API in C don't show anything about using nominal interfaces, and instead look like a normal use of interfaces, combined with language about being wowed by how powerful they could be.
> It's clear that interfaces are, with concurrency, a distinguishing idea in Go. They are Go's answer to objected-oriented design, in the original, behavior-focused style, despite a continuing push by newcomers to make structs carry that load.
> Making interfaces dynamic, with no need to announce ahead of time which types implement them, bothered some early critics, and still irritates a few, but it's important to the style of programming that Go fostered. Much of the standard library is built upon their foundation, and broader subjects such as testing and managing dependencies rely heavily on their generous, "all are welcome" nature.
> I feel that interfaces are one of the best-designed things in Go.
> Other than a few early conversations about whether data should be included in their definition, they arrived fully formed on literally the first day of discussions.
> And there is a story to tell there.
> On that famous first day in Robert's and my office, we asked the question of what to do about polymorphism. Ken and I knew from C that qsort could serve as a difficult test case, so the three of us started to talk about how our embryonic language could implement a type-safe sort routine.
> Robert and I came up with the same idea pretty much simultaneously: using methods on types to provide the operations that sort needed. That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.
> That's something that is not often not acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.
> That idea was exciting for us, and the possibility that this could become a foundational
> programming construct was intoxicating. When Russ joined, he soon pointed out how I/O would fit beautifully into this idea, and the library took place rapidly, based in large part on the three famous interfaces: empty, Writer, and Reader, holding an average of two thirds of a method each. Those tiny methods are idiomatic to Go, and ubiquitous.
> _THE WAY INTERFACES WORKED became not only a distinguishing feature of Go, they became the way we thought about libraries, and generality, and composition. It was heady stuff.
emphasis at the end there is mine.
how the hell _anyone_ reads that and comes away with the idea that they're claiming they invented the idea of interfaces is beyond me.
my only guess here is that many people are not familiar with the sort problem they're describing.
very famously, C++'s sort is more performant than C's sort because C uses a void pointer and C++ uses templates. The extra type information allows C++ to optimize the sort in a way that C cannot, so while C is generally more performant than C++ in a lot of ways, this is one particular area where C++ shines.
So it's no surprise that Rob Pike, et al, paid close attention to sort as something to improve over C and the way to do that is having more type information available (C++ very clearly has shown this).
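For anyone who hasn't seen the pattern being quoted, here's a minimal sketch of sort-as-a-function-over-an-interface (the ByAge type is made up; sort.Interface is the real standard-library interface): any type providing the three methods can be sorted, with no declaration tying it to the interface.

    package main

    import (
        "fmt"
        "sort"
    )

    type Person struct {
        Name string
        Age  int
    }

    // ByAge satisfies sort.Interface without ever saying so.
    type ByAge []Person

    func (a ByAge) Len() int           { return len(a) }
    func (a ByAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a ByAge) Less(i, j int) bool { return a[i].Age < a[j].Age }

    func main() {
        people := []Person{{"Robert", 60}, {"Ken", 81}, {"Rob", 68}}
        sort.Sort(ByAge(people)) // sort.Sort is a plain function taking an interface
        fmt.Println(people)
    }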
> That's something that is not often not acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.
I feel like most of this is just ignoring the prior art. C#'s sort works the same way. Admittedly, this isn't obvious because of the way it's implemented. But if you're a language designer you don't have much of an excuse there.
> Robert and I came up with the same idea pretty much simultaneously: using methods on types to provide the operations that sort needed. That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.
> That's something that is not often not acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.
What he actually said is that being able to pass a value type into a sort function and have it work without needing to explicitly define and implement an interface was not a style that most people are familiar with.
and indeed, C# absolutely does NOT work that way.
It probably would have been better for you to claim that Ruby had prior art, but even that's implemented in an entirely different way; it's just that the behavior is closer to Go's behavior than C#'s is.
In his defence, their interfaces do work somewhat differently to Java's, because they don't need to be explicitly implemented. I don't know if that matters enough to make them a novel invention, though.
It sounded like it to me. I kept thinking: didn't Java have interfaces to prevent multiple inheritance? All Go did was replace inheritance with interfaces.
Go's concurrency isn't even that good. It just looks good to anyone coming from a language that doesn't have it (which is the majority).
Some of the earliest high-level languages with powerful concurrency and parallelism APIs are C# and F# (TPL and Parallel/PLINQ, some of which was available back in 2010).
They are certainly different, but to say there is nothing similar is plainly untrue - apart from nominal vs structural, they are pretty much the same.
I'm not sure how significant the nominal vs structural distinction even is. In Go, a struct can implement an interface without declaring it, but the programmer still needs to deliberately write the struct to conform to the definition of the interface, so they're still coupled, just not explicitly [1]. Yes, it is possible to define a new interface which fits existing structs which weren't designed for it - but how common is that? That is, how common is it for two or more structs to have a meaningful overlapping set of methods without being designed to conform to some pre-existing interface?
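One concrete (if modest) example of an after-the-fact interface, sketched here with an invented name: a locally defined interface that *os.File happens to satisfy even though it was never written against it, which is also what makes test doubles cheap.

    package main

    import (
        "fmt"
        "os"
    )

    // nameable is defined at the point of use; nothing in the standard
    // library declares that it implements it.
    type nameable interface {
        Name() string
    }

    func describe(n nameable) string {
        return "got: " + n.Name()
    }

    // fakeFile is a stand-in satisfying the same interface for tests.
    type fakeFile struct{}

    func (fakeFile) Name() string { return "fake" }

    func main() {
        if f, err := os.Open("/etc/hosts"); err == nil {
            defer f.Close()
            fmt.Println(describe(f)) // *os.File has a Name() string method
        }
        fmt.Println(describe(fakeFile{}))
    }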
Go's structural interface feature is definitely interesting, but that section doesn't talk about how amazing it was to implement interfaces implicitly:
> That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.
If you replace Go with Java, this would've accurately described Java ~30 years ago.
As the review Chen links discusses, it turns out that M:N threading (i.e. goroutines) and good C compatibility are mutually exclusive. Go went one way, every other language went the other way. The most common alternative is stackless coroutines, which are much more widely implemented than the Go model.
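As a rough sketch of that tension (illustrative only, not a benchmark): every cgo call has to leave the goroutine world, because a goroutine's small, growable stack can't be handed to C, so the runtime ties up an OS thread until the C function returns.

    package main

    /*
    // Trivial C function; the interesting part is the call boundary itself.
    double square(double x) { return x * x; }
    */
    import "C"

    import "fmt"

    func main() {
        // Each C.square call crosses the cgo boundary: the runtime switches
        // off the goroutine stack, and the OS thread is occupied until the
        // C function returns, which is what makes FFI comparatively costly.
        r := C.square(C.double(3))
        fmt.Println(float64(r)) // 9
    }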
To the extent that Go has features that no other popular language has, it is not influential. To the extent to which it invented those things, it's not influential. And that's why he didn't make that claim; he made a much broader one. The only problem is, if you make the broader one, it's obvious it's F#/C# that's been influential.
Erlang's processes might have similar semantics to Go's goroutines (green threads) but Erlang is a much simpler language, because it doesn't have shared state.
A lot of work went into optimizing Go's GC to be able to cope with concurrency.
The value it's brought to Google has definitely not been worth the cost. It did not really replace other languages. If you join Google now as a new engineer, you will likely be writing C++, Java, or maybe a web language.
I think this is easy to verify without resorting to anecdotes. There's an (easy to find) internal dashboard with language metrics, showing the number of CLs (changelists) broken down by language and team.
This is incorrect; I have been at Google more than 5 years and my work has been in Go. Most of the teams around me use Go as well. But this is not to say that there is less Java/C++ use.
Your experience doesn’t invalidate the likeliness part of the parent comment.
You just happen to be in the subset / cluster / areas that write in golang. Probabilistically speaking across the entire Google engineering population both you and the teams around you are outliers. That doesn’t mean golang is insignificant.
As to whether golang was worth it, I disagree with the parent: I think it was probably worth the resources invested, even if, e.g., usage has leveled off internally. But at any rate this is a hard thing to measure, so we're all just opining.
Go was designed specifically as a better C++ (that is, a better successor to C). Its use as a Sawzall replacement was a lucky accident, I think -- that just happened to be the first niche where it took hold.
Sawzall was not a widely-liked language, so people had already been trying to replace it with Python, but Go had better performance and stronger typing so it was a better fit. Go really found its footing as a better Python (for some use cases) rather than a better C++.
I still wonder how many potential competitors the MapReduce and Bigtable papers eliminated. Perhaps it doesn't matter, since the people that blindly adopted such things without enough thought might have gotten mired in some other nonsense anyway.
You might be, but as the parent said, it's not likely. I think that's a fair statement: out of 100K engineers or whatever it is, I'd estimate fewer than 10K of them are writing primarily golang code.
I didn’t pull any stats but I’ve been working at Google for most of the last 10 years (in two stints).
And actually I think even 10K is a very generous upper bound; really, if I were betting money I'd peg it at maybe 2,000 engineers that use golang as their primary language, and a big chunk of those are SWE-SREs.
There's just _way_ more Java and C++ than there is golang…
On the fact that most of the code in prod isn't Go. Anyone working there can look at the CL statistics dashboard to see what's being written, and a different dashboard to see the distribution of binary sources.
Go is very popular in some niches (a semi-mandate in SRE produced a bunch of it) and not used at all elsewhere.
Will that change?