Go 1.17 Beta (golang.org)
118 points by 0xedb on June 10, 2021 | 118 comments


As a developer I continue to be extremely happy with the Go language and the Go tools. Over the past 5 years Go has almost entirely replaced my use of C and C++. My Go programs (e.g., https://NN-512.com) are reliable, run fast, build fast, require no shared libraries, and have airtight error handling. Unfortunately, C++ is still necessary for my commercial work and I am amused daily by the absurd complexity and clumsiness of expression, slow builds, dynamic linkage problems across Linux releases, etc.


Go is mostly very good, but the module ecosystem continues to be a nightmare, and having a deep dependency graph can make build times brutal. I work on a project which depends on k8s, which means a cold build takes several minutes. Not to mention trying to update dependencies and figure out what versions are actually being built into the final binary.


Anyone who works on large C++ projects will laugh at your "several minutes" comment. You may not realize how bad things are on the other side of the fence


Yep, without a build farm it takes 6-7 hours to do a full Chromium build...


Only if they insist on compiling the world from scratch, though.

In the world of DLLs and COM, those compile times are relatively OK.


ccache


I've been working with Go for over 5 years now, and I dread every time I have to touch a project with anything related to k8s. Don't get me wrong, I think it's great what they have accomplished project-wise, but what a complete mess code-wise.


I don't get it; I thought k8s was like the perfect match for Go? K8s is even written in Go. I know that doesn't strictly mean it must be easy to use from Go, but it seems weird that it wouldn't be k8s's best experience.


It has a reputation for being terrible Go code.


The post is referring to the codebase of k8s itself, not the experience of deploying go code to k8s.


My read is that it just got too big and they didn't do a great job of keeping up with Go. Last I looked it was still not fully module-compatible.


It actually started as a Java project until the team got some strong advocates.


K8s is a massive outlier in Go build times. Several minutes for cold build times is a blessing. Lots of C++ builds still run overnight. Composer takes 3 minutes just to validate the dependency graph in one of our older PHP projects.


Go modules are great.

k8s has so far decided not to use module versions, which makes using k8s with Go hard.


Who is doing dependencies right in your opinion? Because it feels like it's a complaint I hear about every language.


Might be biased/stereotypical, but Rust's cargo does dependencies really well. It's as easy as npm to add new dependencies, but there aren't thousands needed to do anything, and if you take a look at your Cargo.lock/`cargo tree` you can really get to know each of them and what they do or why they're pulled in. I'm still bloat-wary, maybe as a leftover from doing webdev, but with fewer transitive dependencies in the first place you can actually go through and prune things that aren't needed, or open PRs to transitive deps to prune from their trees or update deps to the latest version to deduplicate your tree. (If there are multiple semver-incompatible versions in a dep tree, they just both get compiled in - for most apps though, you should be able to get the number of duplicates to 0 or almost that.)


I wonder how Cargo will be regarded as in 2-5 years if it gets anywhere near the number of projects/libraries that say Python or Javascript has.


You can either go to lunch or buy a Threadripper server to be able to compile a full project from scratch every time you want to check something.

Unless by then cargo has already learned how to deal with binary crates.


If compile time is the problem, Rust is not (yet?) the solution.

I agree that Cargo is much better than the Go build system, though.


coolreader18's points are mostly about the culture of JavaScript vs. Rust (where Go also hews much closer to the Rust side). Setting aside lockfiles vs. MVS (which won't get "solved" in an HN debate), why do you prefer Cargo to go mod?


For me, it only earns the "really well" when it also does binary dependencies, vcpkg/Conan style.


Having tried both, focusing on source dependencies is the only way to make sure that dependency sources are universally available and buildable, which makes a huge difference in the long run. Just look at NuGet's issues getting SourceLink adopted.

Binary caching à la Nix can work, but I can't really see that working out without Nix's commitment to environment purity.


When one is lucky to use server hardware as workstation, and works in domains that don't depend on selling binary libraries for their business.


> When one is lucky to use server hardware as workstation

Sounds like something is pretty screwed up if you're running cargo clean as part of your regular workflow.

> works in domains that don't depend on selling binary libraries for their business

Play stupid games, win stupid prizes? Meh.


IMO, Ruby Bundler was the first language / ecosystem to really get dependencies right. AFAIK, they were the first to have a lockfile locking the version of each dependency and a Gemfile with flexible version specification for dependencies, and a tool to handle installing the right version of everything and making sure you run in the context of your specified gems every time, no matter what else is installed on the machine.

Rust Cargo does about as well, probably the best for a compiled language. NPM could be about as good too - it almost feels like they deserve a point off for the ridiculously huge number of tiny packages required to do anything, though that isn't really the dependency manager's fault.


CPAN and Maven did it first.


Maven doesn't have a lockfile; from just reading the files it's difficult to find the version of a dependency you'll actually get.


It doesn't need one when the version is already part of the dependency definition.

If you want to be sure of what version you get, use it.


Transitive dependencies are not going to be in your POM file normally, and Maven has a confusing algorithm for resolving them (essentially, first found in a BFS over the dependency tree). And if you do include them, that will silently override transitive dependencies on more recent versions, which is rarely what is wanted.

Yes, you can ask the tool to print them. This is way worse than any of the other systems being discussed, where you can read a file in the repo.


It is so hard to redirect the tool output to my-favourite-filename.lock, beyond the skills of the average developer.


One may like it or not, but defaults matter.


Defaults don't change history.


Naming the file .lock does not make it a lockfile. (I can't believe I have to write that sentence. I don't think you're discussing technical details in good faith.)


Of course not, it fixes "... where you can read a file in the repo", because Maven already does everything else anyway.


> Ruby Bundler was the first language / ecosystem to really get dependencies right. AFAIK, they were the first to have a lockfile locking the version of each dependency and a Gemfile with flexible version specification for dependencies,

So other than having a lockfile and having a flexible version specification (and a sensible way to resolve multiple version requests), Maven did "it" first!

... what was "it" again?


Everything.


I have a bunch of apps that depend on Kubernetes (consumers of client-go), and I haven't experienced long build times. I cleaned my cache ("go clean -cache") and tried building one:

  $ /usr/bin/time go build ./cmd/ekglue
  70.33user 32.72system 0:07.92elapsed 1300%CPU (0avgtext+0avgdata 
  583644maxresident)k
  333392inputs+1232496outputs (14768major+1135114minor)pagefaults 0swaps

  $ /usr/bin/time go build ./cmd/ekglue
  1.54user 0.90system 0:00.33elapsed 741%CPU (0avgtext+0avgdata 
  71952maxresident)k
  16inputs+0outputs (3major+11111minor)pagefaults 0swaps
It was 8 seconds for a clean build, and 330ms for an incremental build. I agree that building on a 1-core machine with no build cache or module cache is slow. I also realize it's a big pain to preserve the cache between Docker builds, so you probably hit this with every commit in CI. The problem is really CI, not Go, but I agree that it sucks. I use Cloud Build / Kaniko which has decent caching, but I do have to wait 1 minute on every build for GCP to provision a new machine (since I'm using a larger-than-default machine; the time spent sleeping while a machine is provisioned is saved by parallelism in the Go compiler). Meanwhile, I run tests on CircleCI, and that is mostly bogged down by very high network contention; pulling caches is slower than rebuilding from scratch.

As for modules, I like them. My biggest blocker in using other programming languages is that their package system sucks compared to Go. I do run into problems -- upstream authors don't really know how to use Go modules, and upstream applications are very quick to take on unnecessary dependencies. For example, I depend on the Loki client. The Loki client and server are the same Go module, so they pin me to a particular version of the Prometheus library (that the Loki server depends on), which then pins me to a particular version of the Kubernetes library (that Prometheus depends on). Basically, the dependency graph is hundreds of times larger than it needs to be, because it's not a problem for the upstream authors and they've never thought about it. Splitting the client and server into two modules would make life much easier for consumers, but slightly harder for the producer, so it's rare that you see it done. (My team also makes an app that makes this same mistake -- forcing users of the client to depend on things like Kubernetes. It's hard to fix, because the server uses the client internally, but I may do it in the future. Or just auto-copy the client code into a separate repo + go.mod file for consumers!)

Upstream authors are also very quick to make fixes for themselves, unaware that they don't propagate to consumers. Many libraries have "replace" directives in go.mod, but those don't propagate to consumers, causing solved problems to reoccur for each consumer. You have to manually propagate them yourself. The solution there is to be a good open-source citizen -- if you have to hack up some module, either properly fork it and depend on the fork (there should be a tool that handles this renaming for you automatically), or push your changes upstream and depend on the new release.

Basically, modules involve the transitive closure of all shortcuts a bunch of people you've never met have taken, and the results are not always good. That has been true in every language I've ever used; I have 83 irrelevant Dependabot alerts that can't be fixed in most of my JavaScript projects, for example. I think it's the best module / packaging system I've ever used, and I like it very much. In fact, there is little I'd change.

> figure out what versions are actually being built into the final binary

    go list -m all
Will print the versions.

    go list -m -u all
Will print what versions you could upgrade to.

I personally include this in every binary I produce: https://github.com/povilasv/prommod

This lets me monitor module versions across the fleet. If there's a security problem in a module, I can instantly see which apps are affected, and update them.


OP might want to check their security software - things that hook into every file access can really kill compilation performance.


>go mod list -m -u all

minor correction: just `go list` on these. the `-m` ensures it's in module-mode.


Thanks! I typo'd it both times! It's now corrected.


You aren't really pinned to the version of Prometheus the library depends on. Minimal Version Selection[0] means that if you specify a newer minimal required version of Prometheus in your go.mod, that is the version that will be used.

It's slightly annoying that your dependencies' optional dependencies pollute your go.sum, but it really doesn't matter. If those don't show up in your package import graph they won't be included in your builds.

[0]: https://research.swtch.com/vgo-mvs


The problem is that Kubernetes went backwards in version numbers. I want to depend on, say, v0.19.0, but through the dependency chain, we end up with a dependency on v12.0.0. (At some point, they decided to switch from "kubernetes x.y.z = client-go@vx.y.z" to "kubernetes x.y.z = client-go@v0.x.y".)

I made a test module that depends on loki and kubernetes, and updating kubernetes results in:

  $ go get k8s.io/client-go@v0.19.0
  go: downloading k8s.io/client-go v0.19.0
  go get: downgraded github.com/grafana/loki v1.5.0 => v1.0.2
  go get: downgraded github.com/thanos-io/thanos v0.12.1-0.20200416112106-b391ca115ed8 => v0.11.0
  go get: downgraded k8s.io/client-go v12.0.0+incompatible => v0.19.0
This causes loki to downgrade to a version that doesn't work.

I don't blame the module system for this, I think it's doing a great job. But when you use other people's code, you're responsible for the transitive closure of all their minor tiny mistakes, and when you depend on big codebases, the mistakes really add up. That's where the hate comes from; a lot of code was written before modules existed, and modules changed the semantics of that code.


Fair.


...you can literally go on a lunch break while doing a full C++ recompile on a moderately complex project.

Your Go project must be on an epic scale to break even 10 minutes of compile time.

Obligatory XKCD: https://xkcd.com/303/


Is it that bad compared to npm?


Calling out again, this is a fantastic program and it's remarkable that this is released pseudonymously (for whatever reason)


Thanks man


If you wanted to, couldn't you just compile your C/C++ program statically and then it would be comparable to a Go binary?


As is true of many developers, I am part of a large team working inside an ecosystem of enormous C++ programs. These programs are written in a particular way (heavily templated) and built in a particular way (dynamic linkage, CMake) and it's neither feasible nor wise to change these foundations


I guess the point I am trying to make is if you had chosen to be statically-linked from the start, then it would be comparable to a Go binary of today. If Go had many dynamic dependencies and people became "stuck" with them, and Go later introduced statically-linked binaries, then it would be the same as your C/C++ example, wouldn't it? In other words, it's not a distinguishing factor because you can be fully statically-linked in C/C++ if starting today with a new project with that goal in mind.


Sure. But here we have a typical situation where the Go tools get it right by default, and of course a sophisticated engineer can achieve the same effect with non-Go tools even though those tools get it wrong by default


Off topic: this NN-512 site has good UX for having no CSS/JS (I know, many of you would claim it's because those are omitted...). Did you use some kind of site generator?


The program generates the HTML for its own website (in DRAM) and serves that HTML over HTTPS. When you access that webpage, you are communicating with an NN-512 process that is running on a $5/month Linode cloud instance


Ok ok I see it in e.g., https://nn-512.com/browse/NN-512#64.

The anchor tags for the listings go to top instead of being self-referencing. Is that a bug?

Like

    format = `<h3 id="%d"><a href="#0">%s</a></h3>` +
Should be

    format = `<h3 id="%d"><a href="#%d">%s</a></h3>` +
And Fprintf should duplicate a parameter?


No, that's intended. There's an index at the top of each page. You use the index links to jump down. And if you want to jump back up to the index, you use any of the header links. That makes sense to me. It seems useless for a link to point to itself


> It seems useless for a link to point to itself

Hm. I guess the web has programmed me differently. Say I'm CTRL-F searching through the page and I notice an interesting bit of code and say "Hey I have a question about this". Then I start looking for a shortcut to that one file and assume that shortcut is the nearby <h3> header. I expect this because I've seen this pattern implemented a lot. You could have the <h3> be that local anchor and a short [top] link next to it.


Yeah, maybe you're right


For me the major remaining issue is a Go IDE, or the lack thereof. You only have 3 options:

- Goland from Jetbrains; awesome but costly

- VS Code with the Go plugin; slow, brittle, little functionality

- vim; yuck

It'd help adoption greatly if there was a decent free option here.


Considering I've spent hundreds of dollars on utter bullshit, I think buying JetBrains products is well worth the money. They're fantastic and worth every penny. I'm thankful my company has licenses, but I would still pay them.


One of Go's creators did a lot of the initial coding in Acme[0] :D

[0] https://en.wikipedia.org/wiki/Acme_(text_editor)


Something tells me that a person who reacts to Vim with a “yuck!” won't like Acme either. Also, I don't know how true this is, but I've read that Acme loses some of its advantages when it's run in e.g. X11 as opposed to its native rio[1] with plumber[2].

[1]: https://en.wikipedia.org/wiki/Rio_(windowing_system)

[2]: https://9fans.github.io/plan9port/man/man4/plumber.html


I _think_ that Rob used Acme on OS X, but I can't find the source.



A couple of the Go authors did a lot of the initial coding and continue to code in Acme (rsc, r). The broader Go team use a wider variety: Vim, Emacs, Goland, and VSCode are the most popular from what I've seen.


Well, he is also the author, and Acme is even worse than classic vi, in terms of usability.

The only thing it has going for it is being based on Oberon's UI workflows.


Could you expand on this? I quite like Acme and wondered what you don't like about it.


Lack of syntax highlighting, for one, and not being even half as powerful as Emacs, for another.


Goland is really good. I think the $249 or whatever for all the IntelliJ products is really good value for money.


What are the key features from GoLand you find missing in VS Code? (I don't use either, but from what I have seen anything gopls-backed is roughly the same as VS Code, and this is already much more than I need/want.)

The only GoLand feature I've seen that I'm envious of is the mode for editing template files - and even this breaks down quickly once you leave the HTML plane.


I love vim. =(

It's there everywhere you need it. If I have to use these other fancy-pants editors I always enable vim emulation when possible, but it's never the same.


Yeah, vim plus govim is amazing.


vim with the vim-go plugin is pretty good, fast and free. Maybe you should give it a try.


> require no shared libraries

One amusing downside: the Golang hello world binary takes 2 MB; when compiled with gcc though (against libgo, therefore, dynamically linked), it takes an astonishing 60 KB - not even 64 KB!


No one is actively happy about it, but when you account for garbage collection and other runtime stuff that goes into a regular go program, plus no dynamic linking... you have to be happy with the trade-off. It's the right one for most programs we will ever make.


> It's the right one for most programs we will ever make.

Unless you’re in the lucrative hello world industry. Or at least I assume it’s lucrative since so many people come here posting about hello world binary sizes—it certainly seems to employ a lot of people.


That used to bother me with Rust as well, but frankly nowadays it's really hard to argue against it. Even in the embedded world you typically have enough flash these days that you're not counting a couple MBs, and as far as RAM is concerned you'd only save some memory if several binaries run simultaneously while linking to the same shared libs.

And even then it's not a sure win, because LTO can remove a lot of useless code from dependencies, so if you only use a small part of a dependency you might still be better off with static linking regardless of other factors.

It's pretty hard to justify the complexity and overhead of dynamic linking nowadays, IMO.


Although it used to be worse, these days a standard Rust “Hello, world!” compiles to just 277 KB on Linux with ‘cargo build --release’ and ‘strip’—5× smaller than the equivalent for Go (and there are games you can play to get it down further).


Oh, that's very cool, I hadn't checked that in ages. Any idea on what Rust does better here? Better link-time optimization? Or is the Go runtime just that much heavier?


Rust doesn't really have a "runtime" in the same sense that Go does. For example, Rust has no GC.


Depends; dynamic linking is great for plugins. On the other hand, I guess we have learnt that IPC is a better option for security and overall application stability.


Not counting MBs!? Must be nice, I haven't developed on something with 1 MB or more in years.


You work on systems with 1MB of storage that support dynamic linking? Please tell me more, that sounds pretty insane to me.


... and then someone puts it in a container, probably sourced without any kind of tracking, and blows cross-app shared library page sharing and adds tons of other on-media bloat.

It's so crazy where we are.


I find it more ironic when monolithic kernels are argued for and then get placed in layers of abstraction to achieve what microkernels offer out of the box.


RAM and storage are cheap. Their cost is a mere pittance for the benefits of containerization.

It's like when people whinge about Electron. Electron gets you a runtime that's near identical across the major platforms, with a11y (something the Linux native toolkits still don't have their shit together on) essentially for free, plus you can use modern reactive component frameworks like React and Vue on the desktop. Yeah, I'll pay 300 MiB for that.


I highly doubt that optimal GUI development lies with “modern reactive component frameworks”, when decade-old desktop frameworks had better tooling than that.


I seriously doubt that. Mobile platforms and web would be moving towards "decade old desktop frameworks" if those were superior.


Mobile platforms are most definitely not web technology; they are much closer to desktop frameworks. And the only reason the web is not doing so is (was) backwards compatibility and HTML being a bad target. But with WASM and canvas-based renderers, desktop frameworks sort of make an appearance on the web (though I personally dislike the canvas-oriented design, which basically throws away accessibility entirely).


For some applications this matters, but for a typical web server or web architecture, which is what go is optimized for, rarely if ever would a couple MB of binary size make a difference.


Especially when tossed in a container, for most workloads nowadays. You can have a full Go binary statically linked and copied into a FROM scratch Docker image and it just works, and is minuscule.

For example, that's all that the production image of NATS Streaming is - https://github.com/nats-io/nats-streaming-docker/blob/024b04...


Work I need to do to use 1.17: None

Benefits: Improved performance and reduced hosting costs

Go is so nice for getting stuff done.


I've been using go for years now at my work, and I have to say that it does have applications where it's a perfectly capable tool.

The problems arise when you try to do something it's not good at and you start wrangling with the language. If your coworkers have strong opinions on how Go is supposed to be used, it can make matters worse.

My biggest qualm by far is the limited type system, e.g. it has no concept of immutability or non-null pointers :( Also accomplishing simple things sometimes requires surprisingly weird techniques, e.g. cloning a slice.

The lucky people can evaluate a problem and choose the right tool for it. The unlucky bastards like me get handed a language and are expected to build everything in it. It can be rough sometimes.


When I need to copy a slice it's usually a []byte. I'm looking forward to https://github.com/golang/go/issues/45038


This interestingly touches on another frustrating property of Go. Most of the people using it seem to have a different usage pattern than I do.

Because of that, the language is unlikely to fix the issues that are very real to me. The community claims they don't need them; the language designers, perhaps a bit more arrogantly, claim that I don't need them. Yet I've used other languages where I've never had to consider this. I can clone whatever I want, whenever I want.

If I only ever cloned []bytes, I'd write a helper function, keep it in a utility package and go on with my life.

This ties back to the original theme of "it works well for certain things, but if you try to do something that the designers didn't predict, good luck to you".



I've been using Go for years; of course I know about copy.

Someone did a writeup on this which is a perfect illustration of a general theme with Go: on the surface it looks fine, but the closer you look, the stranger things get:

https://github.com/go101/go101/wiki/There-is-not-a-perfect-w...


Apologies for my tone, I didn't mean to insinuate you weren't experienced. I guess I was wondering what was wrong with copy(), thanks for the link!

IMO, copy() is well-designed and the semantics are sensible. I think the other examples in that article are not clear, and the semantics have me second guessing what they're doing.

> Drawback 1: if s is nil, the result sClone is not nil.

I can see how this particular semantic seems an issue, but I think it's sensible. If your copy() returned nil and you wanted to use it, then you'd have to check that sClone is not nil first, so that if statement is unavoidable. Instead, it's often safer in practice to check ahead of time.
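
For what it's worth, the helper I'd sketch for this is tiny (hypothetical name; the nil check is only there to preserve nil-ness, which is exactly the drawback the article calls out):

    // cloneBytes copies s into a fresh backing array. Without the nil
    // check, a nil input would come back as an empty, non-nil slice
    // (the article's "Drawback 1").
    func cloneBytes(s []byte) []byte {
        if s == nil {
            return nil
        }
        out := make([]byte, len(s))
        copy(out, s)
        return out
    }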

Sorry again for my tone, it's easy to appear rude on the internet on a late night.


> Added a new testing flag -shuffle which controls the execution order of tests and benchmarks.

That's a fun one


My favorite part:

> Go 1.17 implements a new way of passing function arguments and results using registers instead of the stack. This work is enabled for Linux, MacOS, and Windows on the 64-bit x86 architecture... For a representative set of Go packages and programs, benchmarking has shown performance improvements of about 5%, and a typical reduction in binary size of about 2%.


Just buried in the middle of the release notes: Oh, your code's 5% faster.


Random Go fact I learned recently: by default, HTTP requests automatically follow redirects. If you don't want that, you can do this instead:

    res, err := new(http.Transport).RoundTrip(req)


You can tell http.Client to not redirect:

    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
    }
Then use the client as normal. You can also modify the function for very specific redirect behavior.


Another fun one, in the masochistic sense, is that the default HTTP client has no timeout, and you must remember to set one yourself if you don't want to potentially leak connections.
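
A minimal sketch of the usual fix (the duration is only illustrative; assumes net/http and time are imported):

    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Get("https://example.com/")
    if err != nil {
        return err // a hung server now surfaces as an error instead of blocking forever
    }
    defer resp.Body.Close()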


The stdlib http client is a bit absurd in some areas, yeah. Timeouts in particular are so confusing and misleading that there are quite a few lengthy blog posts about them alone, e.g. https://blog.cloudflare.com/the-complete-guide-to-golang-net...

Granted, part of it is that it's too complex to be captured by one timeout, but everyone wants one timeout. Kinda like string-sub-slicing and UTF-8 multi-byte characters - it's a Bad Idea™ because the over-simplified stuff is fundamentally wrong and will often cause problems. E.g. I routinely encounter tools with short timeouts that don't work on slow network connections (e.g. Bazel), despite actively downloading - what you generally want for user-facing tools is a timeout that ensures the download does not hang forever doing nothing, not that the download completes within a specific amount of time.

... but also it's just abnormally bad.


There are fragments of discussion about the download timeout throughout the issue tracker, which end up leading back to this still-open but seemingly forgotten issue about adding InactivityTimeout: https://github.com/golang/go/issues/22982

I'd love to see this one addressed but it's not looking too hopeful at this stage.


I used Go for an HTTP scraper a long time ago and had to go to insane* lengths to get that library to not leak and be performant. Honestly, that was about when I soured on the language and moved on.

* https://github.com/pkulak/simpletransport


How in the world does turning off connection pooling make the client more performant for high request volumes (and presumably on the same domain, if it's a scraper)?


Because when you hit a new domain on nearly every request, a pool just fills up forever with new connections. That should be fine. The pool should just start removing old connections at some point. But Go didn't do that and just filled up forever until file descriptors ran out.


Ah, today you would set MaxIdleConns on the client to avoid this. Even back then I think there was DisableKeepAlives but I would totally believe there was a leak hiding in that ca. 2013.
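
Roughly what that looks like today, as a sketch (the numbers are just illustrative; assumes net/http and time are imported):

    client := &http.Client{
        Transport: &http.Transport{
            MaxIdleConns:        100,              // bound the idle pool as a whole
            MaxIdleConnsPerHost: 2,                // a scraper only touches each host briefly
            IdleConnTimeout:     30 * time.Second, // drop stale connections
        },
    }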


Yeah, this was forever ago, so in hindsight, maybe I shouldn't have even brought it up. haha


No one in their right mind should be using the default HTTP client or server. They only exist to allow you to quick hack something together. For any serious application you should always define your own.


I think the introduction of contexts has largely resolved this footgun... As long as you actually use contexts, which many still don't. :(
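
For example, a per-request deadline is only a few lines (a sketch using just the standard library; assumes context, net/http and time are imported):

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
The request is abandoned once the context expires, regardless of the client's own timeout settings.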


It's surprising that Go 1.17 discontinued support for macOS 10.12 (or older), which was released only 4 years ago.

Along with deprecating Intel support, it seems like Apple, their users, and the ecosystem are totally fine not giving a shit about supporting aging software. It doesn't seem like anyone cares that much, either.

Even more impressive is that, on average, MacBooks have a much longer lifespan than other laptops, while the software they run is intolerant of old versions.


I really enjoy writing command-line utilities to make my system admin job more efficient:

https://github.com/jftuga?tab=repositories&q=&type=public&la...

Any feedback is most welcome!


> Conversions from slice to array pointer: An expression s of type []T may now be converted to array pointer type *[N]T. If a is the result of such a conversion, then corresponding indices that are in range refer to the same underlying elements: &a[i] == &s[i] for 0 <= i < N. The conversion panics if len(s) is less than N.

> Note that the new conversion from slice to array pointer is the first case in which a type conversion can panic at run time. Analysis tools that assume type conversions can never panic should be updated to consider this possibility.

WHY! So now we can't trust that the type *[N]T won't panic at runtime when accessed by in-range index values. This should be identified as an unsafe conversion.


Most such conversions are safe and should not panic (just like type assertions).


The panic happens during conversion before you have a chance to index into it.
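
A minimal illustration of what the release notes describe:

    s := []int{1, 2, 3, 4}
    a := (*[4]int)(s) // fine: len(s) >= 4, and &a[i] == &s[i]
    b := (*[8]int)(s) // panics here, at the conversion itself, not at a later index
    _, _ = a, b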


As someone who's been using Go for a long while now, I can't help but wonder if modules and other recent changes aren't a step backwards. Maybe I'm a change-averse geezer, but modules still seem like an overcomplicated solution for a solved problem. Needed a particular version of something? Great. It happens. Add a submodule and toss it in vendor/. Why in God's name do we need to proxy source code through Google-hosted services which they've been reluctant to even open source?

And at the root of it all, the GOPATH model simply wasn't bad enough to be worth replacing! You have the full source repo right there, in a predictable, semantically-useful location. When you `go get` something, use `-u`. If a package fails to work with the current upstream, vendor an old version, _or update the depending package_!

Ideally, I'd only ever have one copy of source code anywhere on my computer, preferably in the form of revisions in a source repo. Contrary to popular belief, disk space and bandwidth are neither free, nor infinite, and I'd like to make good use of mine. NPM and `node_modules` are NOT designs to aspire to!

If the Go tooling insists on involving itself with versioning, dependencies, and source control, it should actually USE the source control tools to manage versioning and dependencies! It isn't rocket science to make a temp directory, clone from the local repo, check out a tag, and use that for building. Git even has facilities to check out the tree at a specific commit, no full clone required. (Granted, my view is Git-centric, but I believe that mercurial and svn also allow you to clone and check out labelled versions, especially considering those are _the basic requirements_ of an SCM.) And why do we need to verify cryptographic integrity of downloaded code bundles when the SCMs typically use cryptographic primitives to describe versions? Then to mandate a gatekeeping "sum database" on top of all that?
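
To be fair, you can opt out of both, assuming the usual environment knobs still behave as documented (GOPRIVATE also exists for narrower scoping):

  $ go env -w GOPROXY=direct GOSUMDB=off

But defaults matter, and the default routes everything through Google's proxy and sum database.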

Finally, it is depressing to me how shut-in and closed-off the core team seems. The designs for big language changes are rarely ever proposed by the community, only by people already in the team. While (I hope) the community's input helps inform their decisions, the final module design was rsc's. Before he drafted the design document, dep versioning was a user-space/community problem, but after it came from him, then—and only then—was dependency version management truly considered for the toolchain. The generics design came from, and was accepted by, the core team. Rob Pike's recent d̶e̶c̶i̶s̶i̶o̶n̶ proposal to support taking the addresses of simple types is—as one might expect—unquestionably going to be added.

The greatest thing that could happen to Go would be a community-organized and -centric fork, like Gitea was to Gogs. Even if they didn't maintain strict compatibility with each other, I know which one I'd use.



