As a developer I continue to be extremely happy with the Go language and the Go tools. Over the past 5 years Go has almost entirely replaced my use of C and C++. My Go programs (e.g., https://NN-512.com) are reliable, run fast, build fast, require no shared libraries, and have airtight error handling. Unfortunately, C++ is still necessary for my commercial work and I am amused daily by the absurd complexity and clumsiness of expression, slow builds, dynamic linkage problems across Linux releases, etc.


Go is mostly very good, but the module ecosystem continues to be a nightmare, and a deep dependency graph can make build times brutal. I work on a project that depends on k8s, which means a cold build takes several minutes. Not to mention trying to update dependencies and figure out which versions actually end up in the final binary.


Anyone who works on large C++ projects will laugh at your "several minutes" comment. You may not realize how bad things are on the other side of the fence.


Yep, without a build farm it takes 6-7 hours to do a full Chromium build...


Only if they insist on compiling the world from scratch, though.

In the world of DLLs and COM, those compile times are relatively OK.


ccache


Been working with go for over 5 years now, I dread every time I have to touch a project with anything related to k8s. Don't get me wrong, I think it's great what they have accomplished project wise, but what a complete mess code wise.


I don't get it; I thought k8s was like the perfect match for Go? K8s is even written in Go. I know that doesn't strictly mean it must be easy to use from Go, but it seems weird that it wouldn't be k8s's best experience.


It has a reputation for being terrible Go code.


The post is referring to the codebase of k8s itself, not the experience of deploying go code to k8s.


My read is that it just got too big and they didn't do a great job of keeping up with golang. Last I looked it was still not fully module-compatible.


It actually started as a Java project until the team got some strong advocates.


K8s is a massive outlier in Go build times. Several minutes for cold build times is a blessing. Lots of C++ builds still run overnight. Composer takes 3 minutes just to validate the dependency graph in one of our older PHP projects.


Go modules are great.

k8s has so far decided not to use module versions, which makes using k8s with Go hard.
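For anyone who hasn't hit this: depending on k8s.io/kubernetes itself means copying its replace directives into your own go.mod, because the staging repos are tagged v0.x.y while kubernetes itself tags v1.x.y. A sketch of a consumer go.mod (module paths real, versions illustrative):

```
module example.com/myapp

go 1.15

require k8s.io/kubernetes v1.19.0

// Without directives like these, the staging modules can't be resolved,
// because kubernetes pins them with replace directives of its own and
// replace directives don't propagate to consumers:
replace (
	k8s.io/api => k8s.io/api v0.19.0
	k8s.io/apimachinery => k8s.io/apimachinery v0.19.0
	k8s.io/client-go => k8s.io/client-go v0.19.0
)
```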


Who is doing dependencies right in your opinion? Because it feels like it's a complaint I hear about every language.


Might be biased/stereotypical, but Rust's cargo does dependencies really well. It's as easy as npm to add new dependencies, but there aren't thousands needed to do anything, and if you take a look at your Cargo.lock/`cargo tree` you can really get to know each of them and what they do or why they're pulled in. I'm still bloat-wary, maybe as a leftover from doing webdev, but with fewer transitive dependencies in the first place you can actually go through and prune things that aren't needed, or open PRs to transitive deps to prune from their trees or update deps to the latest version to deduplicate your tree. (If there are multiple semver-incompatible versions in a dep tree, they both get compiled in; for most apps, though, you should be able to get the number of duplicates to zero or close to it.)


I wonder how Cargo will be regarded in 2-5 years if it gets anywhere near the number of projects/libraries that, say, Python or Javascript has.


You can either go to lunch or buy a Threadripper server to be able to compile a full project from scratch, every time you want to check something.

Unless by then cargo has already learned how to deal with binary crates.


If compile time is the problem, Rust is not (yet?) the solution.

I agree that Cargo is much better than the Go build system, though.


coolreader18's points are mostly about the culture of JavaScript vs. Rust (where Go also hews much closer to the Rust side). Setting aside lockfiles vs. MVS (which won't get "solved" in an HN debate), why do you prefer Cargo to go mod?


For me, it only earns the "really well" when it also does binary dependencies vcpkg/conan style.


Having tried both, focusing on source dependencies is the only way to make sure that dependency sources are universally available and buildable, which makes a huge difference in the long run. Just look at NuGet's issues getting SourceLink adopted.

Binary caching à la Nix can work, but I can't really see that working out without Nix's commitment to environment purity.


When one is lucky to use server hardware as workstation, and works in domains that don't depend on selling binary libraries for their business.


> When one is lucky to use server hardware as workstation

Sounds like something is pretty screwed up if you're running cargo clean as part of your regular workflow.

> works in domains that don't depend on selling binary libraries for their business

Play stupid games, win stupid prizes? Meh.


IMO, Ruby Bundler was the first language / ecosystem to really get dependencies right. AFAIK, they were the first to have a lockfile locking the version of each dependency and a Gemfile with flexible version specification for dependencies, and a tool to handle installing the right version of everything and making sure you run in the context of your specified gems every time, no matter what else is installed on the machine.

Rust Cargo does about as well, probably the best for a compiled language. NPM could be about as good too - it almost feels like they deserve a point off for the ridiculously huge number of tiny packages required to do anything, though that isn't really the dependency manager's fault.


CPAN and Maven did it first.


Maven doesn't have a lockfile, from just reading the files it's difficult to find the version of a dependency you'll actually get.


It doesn't need one, when the version is already part of the dependency definition.

If you want to be sure of what version you get, use it.


Transitive dependencies are not going to be in your POM file normally, and Maven has a confusing algorithm for resolving them (essentially, first found in a BFS over the dependency tree). And if you do include them, that will silently override transitive dependencies on more recent versions, which is rarely what is wanted.

Yes, you can ask the tool to print them. This is way worse than any of the other systems being discussed, where you can read a file in the repo.


It is so hard to redirect the tool output to my-favourite-filename.lock, beyond the skills of the average developer.


One may like it or not, but defaults matter.


Defaults don't change history.


Naming the file .lock does not make it a lockfile. (I can't believe I have to write that sentence. I don't think you're discussing technical details in good faith.)


Of course not, it fixes "... where you can read a file in the repo", because Maven already does everything else anyway.


> Ruby Bundler was the first language / ecosystem to really get dependencies right. AFAIK, they were the first to have a lockfile locking the version of each dependency and a Gemfile with flexible version specification for dependencies,

So other than having a lockfile and having a flexible version specification (and a sensible way to resolve multiple version requests), Maven did "it" first!

... what was "it" again?


Everything.


I have a bunch of apps that depend on Kubernetes (consumers of client-go), and I haven't experienced long build times. I cleaned my cache ("go clean -cache") and tried building one:

  $ /usr/bin/time go build ./cmd/ekglue
  70.33user 32.72system 0:07.92elapsed 1300%CPU (0avgtext+0avgdata 
  583644maxresident)k
  333392inputs+1232496outputs (14768major+1135114minor)pagefaults 0swaps

  $ /usr/bin/time go build ./cmd/ekglue
  1.54user 0.90system 0:00.33elapsed 741%CPU (0avgtext+0avgdata 
  71952maxresident)k
  16inputs+0outputs (3major+11111minor)pagefaults 0swaps
It was 8 seconds for a clean build, and 330ms for an incremental build. I agree that building on a 1-core machine with no build cache or module cache is slow. I also realize it's a big pain to preserve the cache between docker builds, so you probably hit this with every commit in CI.

The problem is really CI, not Go, but I agree that it sucks. I use Cloud Build / Kaniko, which has decent caching, but I do have to wait 1 minute on every build for GCP to provision a new machine (since I'm using a larger-than-default machine; the time spent sleeping while a machine is provisioned is saved by parallelism in the Go compiler). Meanwhile, I run tests on CircleCI, and that is mostly bogged down by very high network contention; pulling caches is slower than rebuilding from scratch.
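For the docker-build case specifically, BuildKit cache mounts are one mitigation I've seen work, assuming your CI supports BuildKit (cache paths are the Go defaults; image tag and package paths illustrative):

```
# syntax=docker/dockerfile:experimental
FROM golang:1.15 AS build
WORKDIR /src

# Downloading modules as its own layer keeps it cached until
# go.mod/go.sum actually change:
COPY go.mod go.sum ./
RUN go mod download

COPY . .
# Cache mounts persist the Go build and module caches across builds:
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
    go build -o /out/app ./cmd/app
```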

As for modules, I like them. My biggest blocker in using other programming languages is that their package system sucks compared to Go. I do run into problems -- upstream authors don't really know how to use Go modules, and upstream applications are very quick to take on unnecessary dependencies. For example, I depend on the Loki client. The Loki client and server are the same Go module, so they pin me to a particular version of the Prometheus library (that the Loki server depends on), which then pins me to a particular version of the Kubernetes library (that Prometheus depends on). Basically, the dependency graph is hundreds of times larger than it needs to be, because it's not a problem for the upstream authors and they've never thought about it. Splitting the client and server into two modules would make life much easier for consumers, but slightly harder for the producer, so it's rare that you see it done. (My team also makes an app that makes this same mistake -- forcing users of the client to depend on things like Kubernetes. It's hard to fix, because the server uses the client internally, but I may do it in the future. Or just auto-copy the client code into a separate repo + go.mod file for consumers!)

Upstream authors are also very quick to make fixes for themselves, unaware that they don't propagate to consumers. Many libraries have "replace" directives in go.mod, but those don't propagate to consumers, causing solved problems to reoccur for each consumer. You have to manually propagate them yourself. The solution there is to be a good open-source citizen -- if you have to hack up some module, either properly fork it and depend on the fork (there should be a tool that handles this renaming for you automatically), or push your changes upstream and depend on the new release.
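Concretely, a replace directive only applies when its module is the main module being built, so something like this in a library's go.mod (paths hypothetical) silently does nothing for consumers:

```
// In the library's go.mod -- effective only when building the library itself:
replace github.com/some/dep => github.com/author/dep-fork v1.2.3

// Every consumer has to repeat the same line in their own go.mod,
// or the original (possibly broken) github.com/some/dep is used.
```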

Basically, modules involve the transitive closure of all shortcuts a bunch of people you've never met have taken, and the results are not always good. That has been true in every language I've ever used; I have 83 irrelevant Dependabot alerts that can't be fixed in most of my Javascript projects, for example. I think it's the best module / packaging system I've ever used, and I like it very much. In fact, there is little I'd change.

> figure out what versions are actually being built into the final binary

    go list -m all
Will print the versions.

    go list -m -u all
Will print what versions you could upgrade to.

I personally include this in every binary I produce: https://github.com/povilasv/prommod

This lets me monitor module versions across the fleet. If there's a security problem in a module, I can instantly see which apps are affected, and update them.


OP might want to check their security software: things that hook into every file access can really kill compilation performance.


>go mod list -m -u all

minor correction: just `go list` on these. the `-m` ensures it's in module-mode.


Thanks! I typo'd it both times! It's now corrected.


You aren't really pinned to the version of Prometheus the library depends on. Minimal Version Selection[0] means if you specify a newer minimal required version of Prometheus in your go.mod that is the version that will be used.

It's slightly annoying that your dependencies' optional dependencies pollute your go.sum, but it really doesn't matter. If they don't show up in your package import graph, they won't be included in your builds.

[0]: https://research.swtch.com/vgo-mvs
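In go.mod terms: under MVS, a direct require of a newer minimum version wins over an older transitive one (module paths and versions hypothetical):

```
module example.com/myapp

go 1.15

require (
	example.com/somelib v1.5.0
	// somelib's go.mod requires example.com/transitive v1.2.0, but a
	// direct require of a newer minimum wins under MVS:
	example.com/transitive v1.4.0
)
```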


The problem is that Kubernetes went backwards in version numbers. I want to depend on, say, v0.19.0, but through the dependency chain, we end up with a dependency on v12.0.0. (At some point, they decided to switch from "kubernetes x.y.z = client-go@vx.y.z" to "kubernetes x.y.z = client-go@v0.x.y".)

I made a test module that depends on loki and kubernetes, and updating kubernetes results in:

  $ go get k8s.io/client-go@v0.19.0
  go: downloading k8s.io/client-go v0.19.0
  go get: downgraded github.com/grafana/loki v1.5.0 => v1.0.2
  go get: downgraded github.com/thanos-io/thanos v0.12.1-0.20200416112106-b391ca115ed8 => v0.11.0
  go get: downgraded k8s.io/client-go v12.0.0+incompatible => v0.19.0
This causes loki to downgrade to a version that doesn't work.

I don't blame the module system for this, I think it's doing a great job. But when you use other people's code, you're responsible for the transitive closure of all their minor tiny mistakes, and when you depend on big codebases, the mistakes really add up. That's where the hate comes from; a lot of code was written before modules existed, and modules changed the semantics of that code.


Fair.


...you can literally go on a lunch break while doing a full C++ recompile on a moderately complex project.

Your Go project must be on an epic scale to break even 10 minutes of compile time.

Obligatory XKCD: https://xkcd.com/303/


Is it that bad compared to npm?


Calling out again, this is a fantastic program and it's remarkable that this is released pseudonymously (for whatever reason)


Thanks man


If you wanted to, couldn't you just compile your C/C++ program statically and then it would be comparable to a Go binary?


As is true of many developers, I am part of a large team working inside an ecosystem of enormous C++ programs. These programs are written in a particular way (heavily templated) and built in a particular way (dynamic linkage, CMake) and it's neither feasible nor wise to change these foundations


I guess the point I am trying to make is if you had chosen to be statically-linked from the start, then it would be comparable to a Go binary of today. If Go had many dynamic dependencies and people became "stuck" with them, and Go later introduced statically-linked binaries, then it would be the same as your C/C++ example, wouldn't it? In other words, it's not a distinguishing factor because you can be fully statically-linked in C/C++ if starting today with a new project with that goal in mind.


Sure. But here we have a typical situation where the Go tools get it right by default, and of course a sophisticated engineer can achieve the same effect with non-Go tools even though those tools get it wrong by default


Off topic: this NN-512 site has good UX for having no CSS/JS (I know, many of you would claim it's because those are omitted...). Did you use some kind of site generator?


The program generates the HTML for its own website (in DRAM) and serves that HTML over HTTPS. When you access that webpage, you are communicating with an NN-512 process that is running on a $5/month Linode cloud instance
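A minimal sketch of that pattern (render once into memory, then serve the bytes); this is my own illustration, not NN-512's actual code:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildPage renders the whole site into memory once at startup.
func buildPage(title string) []byte {
	return []byte(fmt.Sprintf("<html><body><h1>%s</h1></body></html>", title))
}

// pageHandler serves the pre-rendered bytes on every request.
func pageHandler(page []byte) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		w.Write(page)
	}
}

func main() {
	page := buildPage("NN-512")
	fmt.Println(string(page))
	// To actually serve it (the real site uses TLS):
	//   http.Handle("/", pageHandler(page))
	//   log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```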


Ok ok I see it in e.g., https://nn-512.com/browse/NN-512#64.

The anchor tags for the listings go to top instead of being self-referencing. Is that a bug?

Like

    format = `<h3 id="%d"><a href="#0">%s</a></h3>` +
Should be

    format = `<h3 id="%d"><a href="#%d">%s</a></h3>` +
And Fprintf should duplicate a parameter?


No, that's intended. There's an index at the top of each page. You use the index links to jump down. And if you want to jump back up to the index, you use any of the header links. That makes sense to me. It seems useless for a link to point to itself


> It seems useless for a link to point to itself

Hm. I guess the web has programmed me differently. Say I'm CTRL-F searching through the page and I notice an interesting bit of code and say "Hey I have a question about this". Then I start looking for a shortcut to that one file and assume that shortcut is the nearby <h3> header. I expect this because I've seen this pattern implemented a lot. You could have the <h3> be that local anchor and a short [top] link next to it.


Yeah, maybe you're right


For me the major remaining issue is a Go IDE, or lack thereof. You only have 3 options:

- GoLand from JetBrains; awesome but costly

- VSCode with the golang plugin; slow, brittle, little functionality

- vim; yuck

It'd help adoption greatly if there was a decent free option here.


considering i've spent hundreds of dollars on utter bullshit, i think buying jetbrains products is well worth the money. they're fantastic and worth every penny. i'm thankful my company has licenses, but i would still pay them


One of Go's creators did a lot of the initial coding in Acme[0] :D

[0] https://en.wikipedia.org/wiki/Acme_(text_editor)


Something tells me that a person who reacts to Vim with a “yuck!” won't like Acme either. Also, I don't know how true this is, but I've read that Acme loses some of its advantages when it's run in e.g. X11 as opposed to its native rio[1] with plumber[2].

[1]: https://en.wikipedia.org/wiki/Rio_(windowing_system)

[2]: https://9fans.github.io/plan9port/man/man4/plumber.html


I _think_ that Rob used Acme in OSX, but I can't find this source.



A couple of the Go authors did a lot of the initial coding and continue to code in Acme (rsc, r). The broader Go team use a wider variety: Vim, Emacs, Goland, and VSCode are the most popular from what I've seen.


Well, he is also the author, and Acme is even worse than classical vi in terms of usability.

The only thing it has going for it is being based on Oberon's UI workflows.


Could you expand on this? I quite like Acme and wondered what you don't like about it.


Lack of syntax highlighting, for one, and not being even half as powerful as Emacs, for another.


Goland is really good. I think the $249 or whatever for all the IntelliJ products is really good value for money.


What are the key features from GoLand you find missing in VS Code? (I don't use either, but from what I have seen anything gopls-backed is roughly the same as VS Code, and this is already much more than I need/want.)

The only GoLand feature I've seen that I'm envious of is the mode for editing template files - and even this breaks down quickly once you leave the HTML plane.


I love vim. =(

It's there everywhere you need it. If I have to use these other fancy-pants editors I always enable vim emulation when possible, but it's never the same.


Yeah, vim plus govim is amazing.


vim with the vim-go plugin is pretty good, fast, and free. Maybe you should give it a try.


> require no shared libraries

One amusing downside: the Golang hello world binary takes 2 MB; when compiled with gcc (against libgo, and therefore dynamically linked), it takes an astonishing 60 KB, not even 64 KB!


No one is actively happy about it, but when you account for garbage collection and other runtime stuff that goes into a regular go program, plus no dynamic linking... you have to be happy with the trade-off. It's the right one for most programs we will ever make.


> It's the right one for most programs we will ever make.

Unless you’re in the lucrative hello world industry. Or at least I assume it’s lucrative since so many people come here posting about hello world binary sizes—it certainly seems to employ a lot of people.


That used to bother me with Rust as well, but frankly nowadays it's really hard to argue against it. Even in the embedded world you typically have enough flash these days that you're not counting a couple MBs, and as far as RAM is concerned you'd only save some memory if several binaries run simultaneously while linking to the same shared libs.

And even then it's not a sure win, because LTO can remove a lot of useless code from dependencies, so if you only use a small part of a dependency you might still be better off with static linking regardless of other factors.

It's pretty hard to justify the complexity and overhead of dynamic linking nowadays, IMO.


Although it used to be worse, these days a standard Rust “Hello, world!” compiles to just 277 KB on Linux with ‘cargo build --release’ and ‘strip’—5× smaller than the equivalent for Go (and there are games you can play to get it down further).


Oh, that's very cool, I hadn't checked that in ages. Any idea on what Rust does better here? Better link time optimization? Or is the Go runtime just that heavier?


Rust doesn't really have a "runtime" in the same sense that Go does. For example, Rust has no GC.


Depends; dynamic linking is great for plugins. On the other hand, I guess we have learnt that IPC is a better option for security and overall application stability.


Not counting MBs!? Must be nice; haven't developed on something with 1MB or more in years.


You work on systems with 1MB of storage that support dynamic linking? Please tell me more, that sounds pretty insane to me.


... and then someone puts it in a container, probably sourced without any kind of tracking, and blows cross-app shared library page sharing and adds tons of other on-media bloat.

It's so crazy where we are.


I find it more ironic when monolithic kernels are argued for and then get placed in layers of abstraction to achieve what microkernels offer out of the box.


RAM and storage are cheap. Their cost is a mere pittance for the benefits of containerization.

It's like when people whinge about Electron. Electron gets you a runtime that's near identical across the major platforms, with a11y (something the Linux native toolkits still don't have their shit together on) essentially for free, plus you can use modern reactive component frameworks like React and Vue on the desktop. Yeah, I'll pay 300 MiB for that.


I highly doubt that the optimum for GUI development is “modern reactive component frameworks”, when decade-old desktop frameworks had better tooling than that.


I seriously doubt that. Mobile platforms and web would be moving towards "decade old desktop frameworks" if those were superior.


Mobile platforms are most definitely not web technology; they are much closer to desktop frameworks. And the only reason the web is not doing so is (was) backwards compatibility and HTML being a bad target. But with WASM and canvas-based renderers, desktop frameworks sort of make an appearance on the web (though I personally dislike the canvas-oriented design, which basically throws accessibility away entirely).


For some applications this matters, but for a typical web server or web architecture, which is what go is optimized for, rarely if ever would a couple MB of binary size make a difference.


Especially when tossed in a container for most workloads nowadays. You can have a full go binary statically linked and copied into a `FROM scratch` docker image and it just works, and is minuscule.

For example, that's all that the production image of NATS Streaming is - https://github.com/nats-io/nats-streaming-docker/blob/024b04...



