
Okay, so there's definitely simplicity. I agree that this is in many ways a nice change from some of the more popular languages, which can get a bit complex and heavy, with a focus on features that are nice in isolation but add up to a surprisingly difficult architecture to comprehend at the global level. I just don't think Go gets the balance right.

There are some parts of the language that are such a joy to use – C interop is simple and elegant, the concurrency story is great, the standard library is great, and in contrast to some other people, I think the error handling is also a nice example of simple and effective design. The wider tooling system is decently usable, with gofmt (as mentioned in the article) standing out as what I think is the single best tool I've ever used in any language.

But "simplicity" in Go-land seems sometimes to be the enemy of expressiveness. The lack of abstractions over operations like filtering, mapping, or removing items from a collection is incredibly irritating for such a common operation – instead of immediately obvious actions, there are tedious manual loops where I end up having to read like 10 lines of code to figure out "oh this just removes something from a list". The use of zero values is a crime against nature. The type system is shameful in practice compared to other modern languages, with so much code just being "just slap an interface{} in there and we can all pretend this isn't a disaster waiting to happen". It feels like such a lost opportunity to exploit a richer type system to eliminate whole classes of common errors (which are predictably the ones I keep making in Go code.)

I guess it's frustrating that a language that is otherwise relatively well-designed, and intended to be simple, too often makes things more complicated overall by making fallible humans do things that computers are better and more reliable at. I'll keep using it, because it fills a really nice niche that no other language is quite suitable for. But I'll keep being a bit annoyed about it, too.




> C interop is simple and elegant

C interop in Go is super slow: https://github.com/dyu/ffi-overhead


That isn't a contradiction of the statement that, for the programmer, C interop is simple and elegant – which I think it mostly is. The slowness comes from what goes on behind the scenes: mostly, the different ways Go and C handle their stacks create quite a bit of work when calling into C and returning. Languages that use C-compatible stack and register layouts get much faster C calls, but that doesn't mean they can call C as easily.

So calling C from Go for small functions isn't a performance win; you should write those in Go. Calling C is great for linking against larger libraries that cannot be ported to Go, and for that it works nicely and is fast enough.
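For what it's worth, a minimal cgo sketch (the C function here is just a toy) of what the call site looks like; each C.xxx call is where the boundary-crossing overhead lives:

    package main

    /*
    #include <stdlib.h>
    #include <string.h>
    static size_t c_strlen(const char *s) { return strlen(s); }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        s := C.CString("hello from Go")
        defer C.free(unsafe.Pointer(s))

        // Each call like this crosses the Go/C boundary; fine for chunky
        // library calls, wasteful for tiny functions in a hot loop.
        n := C.c_strlen(s)
        fmt.Println(int(n))
    }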


Yes, this would be the point I would make too.

I find I use Go quite often to build relatively compact tools and services that need to use some features of a fully-featured and popular C library to perform some complex function that would be expensive and time-consuming to implement.

A recent example of this is using `libusb` and `libgphoto2` to enumerate DSLR cameras attached over USB and capture images from them in response to some other events, with a small web UI on top. It's maybe a few dozen lines of Go to call some enumeration and capture functions, and then I get a nice byte slice with all the data I want. There is minimal friction, the interaction is clear, and any performance cost is worth paying because of the simplicity of the interaction.

It's entirely true, and a well-known caveat, that the C FFI is slow. This makes it inappropriate for some use-cases, but entirely suitable for others.


> And for that it works nicely and is fast enough.

It may be fast enough for you, but it certainly isn't for many other people. Go, for example, will never grow a large math/scientific ecosystem because of it.


Note they never say it is fast, or good.


I don’t know what the Go designers think about this, but I can at least appreciate the tradeoffs of not having map/filter/etc. Memory allocations remain clear and explicit, errors are handled consistently, concurrency is explicit, and it dramatically reduces the urge to write long chains of over-complicated functional operations.

Sometimes I run into a situation where I’m like “Sigh, filter would have been nice here.” But it’s pretty rare. On the other hand, “clever” programmers love to make incomprehensible messes out of functional constructs.


That is the justification I usually hear, but I don't buy it. Like, 99+% of the time, I want to use these kinds of operations to manipulate lists which are orders of magnitude away from anything even approaching a scalability issue. I just don't care about memory allocations when trying to manipulate something like, say, a list of active users in a real-time chat or something. And often I find that the mess that comes from implementing those same operations without expressive constructs is worse than the messes people create with them (though I grant I've seen those too).

This hints at some of the ideas behind Go – it's designed, perhaps, for Google-scale software. It's dealing with problems (like memory allocation) that I don't have when working with most datasets I'm likely to need. Maybe we just have to accept that.


> I just don't care about memory allocations when trying to manipulate something like, say, a list of active users in a real-time chat or something.

This is exactly the kind of thinking that the Go language pushes back against.

> Maybe we just have to accept that.

I think so. At least for now. I recently watched a talk by Ian Lance Taylor that made it very clear to me that generics are coming (https://www.youtube.com/watch?v=WzgLqE-3IhY). When we have generics, then map/reduce/filter will absolutely be introduced as a library at the very least.

> This is dealing with problems (like e.g. memory allocation) that I don't have when working with most datasets I'm likely to need.

I don't think that's exactly it. It's more about runaway complexity. You might use these primitives to perform basic operations but other people will misuse them in extreme ways.

Consider this: suppose there were a built-in map() function like append(). Do you use a for loop or a map function? There'll be a performance trade-off. Performance-conscious people will always use a for loop. Expressiveness-conscious people will usually use map() unless they're dealing with a large dataset. This will invariably lead to arguments over style, among other things.
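To make that concrete, here is a rough sketch of the two styles, written with the generics syntax that talk describes (Map is a hypothetical helper, not a built-in or standard-library function):

    package main

    import "fmt"

    // Map is a hypothetical generic helper; it hides the allocation and
    // the iteration behind a function call.
    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3, 4}

        // Explicit for loop: the allocation and the work are visible here.
        doubled := make([]int, 0, len(nums))
        for _, n := range nums {
            doubled = append(doubled, n*2)
        }

        // Map style: shorter and clearer about intent, but the cost is hidden.
        alsoDoubled := Map(nums, func(n int) int { return n * 2 })

        fmt.Println(doubled, alsoDoubled)
    }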


For loops violate the rule of least power (https://en.wikipedia.org/wiki/Rule_of_least_power). Because they could do anything, you have to read each one very carefully to find out what it's actually doing (which may not be what was intended). Flat-map and filter are more concise and clearer, and if my platform makes them slower, that's an implementation bug that should be fixed.


In practice, reading a for-loop has been less problematic for me than reading the incantations of a functional programmer who’s been reading about category theory.

I know all about the virtues of functional programming patterns, and use them in personal projects, but in my day job working with dozens of engineers in the same codebase, I appreciate not having to decode the idiosyncrasies of how each engineer decides when and how to use higher order constructs, and the subsequent performance, operational, and maintenance implications. It’s a lot easier for me to just read a for-loop and move on with my life.


Before generics are officially supported, one can use code generation to get a similar effect.

I did that for TypeScript; it should be applicable to Go as well. Ref: https://github.com/beenotung/tsc-macro


These properties depend on the language; iterators do not have to allocate, can still expose error conditions, and make concurrency explicit.

I think this is sometimes why it’s so hard to compare and contrast languages; even surface level features in two different languages can have two very different underlying implementations, which can mean you may like a feature in one language and dislike it in another.


I don't think I've used interface{} in the last two years of writing Go; the only case was an unknown JSON object that went into a map[string]interface{}, and that's it.


I agree with you on most points, and they are working hard to fix the generics issue in a way that does not make you lose all the nice things you mentioned. The only part I didn't get is:

> The use of zero values is a crime against nature

Can you elaborate?


This one is specifically about the default values types take on when declared in a struct or as a variable – a struct with a string field, for example, gets an empty string by default. I appreciate the reason for it; it just jars horribly with my expectations.

The worst offender is `Time` – to quote from the documentation:

"The zero value of type Time is January 1, year 1, 00:00:00.000000000 UTC. As this time is unlikely to come up in practice, the IsZero method gives a simple way of detecting a time that has not been initialized explicitly."

That seems to be an absolutely baffling decision.


I think the rationale goes something like this: how do you create a fixed-size array? Of structs? Where one field is another fixed-size array of timestamps? And on the stack?

Without some kind of default value, you end up with a lot of nested constructors and loops initializing array elements and fields, versus just zeroed out memory.

C++ does the constructor thing but it seems complicated and finicky when you don't have a zero-arg constructor. When I looked at Rust array initialization it looked somewhat limited, but maybe I missed something.

I'm not sure if every type really needs a zero value like in Go, but it seems like standard types like timestamps should, so you can use them this way?
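As a small illustration (the type here is made up), everything below is usable immediately with no constructors, because every element and field starts at its zero value:

    package main

    import (
        "fmt"
        "time"
    )

    type Sample struct {
        Label  string
        Stamps [4]time.Time // nested fixed-size array of timestamps
    }

    func main() {
        // A fixed-size array of structs: no constructors or init loops,
        // just zeroed memory that is already valid to use.
        var samples [8]Sample

        fmt.Println(samples[0].Label == "")        // true: zero string
        fmt.Println(samples[0].Stamps[0].IsZero()) // true: the year-1 zero Time
    }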


Array initialization is a bit awkward in Rust right now, it’s true. It’ll get much better in the nearish future.


Initially I found it quite baffling that slices and maps don't need initialization to be valid, but long term I find it brilliant. Especially for slices: that a zeroed-out slice header is a valid empty slice is very clever and simple.
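For example, a zero-value slice is already a usable empty slice, and a zero-value map can at least be read from:

    package main

    import "fmt"

    func main() {
        // A nil slice is a valid empty slice: len is 0, range does nothing,
        // and append works without any initialization.
        var s []int
        s = append(s, 1, 2, 3)
        fmt.Println(len(s), s) // 3 [1 2 3]

        // A nil map is valid for reads (lookups return the zero value);
        // it only needs make() before the first write.
        var m map[string]int
        fmt.Println(m["missing"]) // 0
    }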

I also don't see what your problem with time is. Zero is one set point in time. I'm not sure why you would want an "invalid" timestamp; it makes as little sense to me as an "invalid" integer. Why would you need to check that a time value has been explicitly initialized? For most purposes, the zero value pretty much expresses that anyway. If not, you can wrap the time value in a struct with an unexported "initialized" bool field. But I really would like to know what your use case for this would be.
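Something like this sketch of the wrapper idea (names made up):

    package main

    import (
        "fmt"
        "time"
    )

    // Timestamp wraps time.Time with an unexported flag recording whether
    // the value was set explicitly, independent of the zero Time.
    type Timestamp struct {
        t   time.Time
        set bool
    }

    func NewTimestamp(t time.Time) Timestamp { return Timestamp{t: t, set: true} }

    func (ts Timestamp) IsSet() bool     { return ts.set }
    func (ts Timestamp) Time() time.Time { return ts.t }

    func main() {
        var unset Timestamp
        explicit := NewTimestamp(time.Now())
        fmt.Println(unset.IsSet(), explicit.IsSet()) // false true
    }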


Generics are only part of the issue. Go has a crippled type system.

JSON in Go cannot be represented as a type, and it is not type safe. This is entirely because Go is missing a basic type primitive.

JSON, contrary to what many people think, is not untyped. JSON is described by a recursive sum type, which Go has no way to represent.

I think the cause is that the designers of Go are systems people rather than language theorists.


> Json in go cannot be represented as a type and also is not type safe.

Not a Go programmer, but why don't you simply use an explicit type tag and a number of getters? They can assert that they're not called in invalid situations.
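Roughly what I have in mind (type, field, and getter names are made up for illustration):

    package main

    import "fmt"

    // JSONKind is the explicit type tag.
    type JSONKind int

    const (
        KindNull JSONKind = iota
        KindBool
        KindNumber
        KindString
        KindArray
        KindObject
    )

    // JSONValue is a tagged representation of a JSON value; only the field
    // matching Kind is meaningful.
    type JSONValue struct {
        Kind   JSONKind
        Bool   bool
        Number float64
        Str    string
        Array  []JSONValue
        Object map[string]JSONValue
    }

    // AsString asserts that it is only called on string values.
    func (v JSONValue) AsString() string {
        if v.Kind != KindString {
            panic("not a JSON string")
        }
        return v.Str
    }

    func main() {
        v := JSONValue{Kind: KindString, Str: "hello"}
        fmt.Println(v.AsString())
    }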



