Godep wants you to mess with your GOPATH. For example, `godep save` saves the revisions of packages currently installed in your GOPATH, and `godep restore` downloads packages into your current GOPATH. If you don't want to pollute your global GOPATH, you have to change it. This works for some people, but not for others.
Goop doesn't want you to worry about GOPATH. Goop also encourages you to explicitly state which packages you are using in the Goopfile, rather than "capturing" what is already in use.
It's not really clear from the documentation/README how this works under the hood, so for now I have to just go by what you're saying here until I have time to actually take a closer look.
First off, I will say that I'm not a great fan of godep, so I'm not looking to advocate for it. That said:
> Godep wants you to mess with your GOPATH. For example, `godep save` saves the revisions of packages currently installed in your GOPATH, and `godep restore` downloads packages into your current GOPATH. If you don't want to pollute your global GOPATH, you have to change it. This works for some people, but not for others.
Your $GOPATH is a list of directories, not a single directory. This is almost never useful in practice (most people should just use a single directory as their GOPATH). However, for what you're trying to do here, that would be far simpler than introducing a third-party tool, especially one that requires adoption from all project contributors.
> Goop doesn't want you to worry about GOPATH.
The $GOPATH is technically not part of the Go language (spec), but it is a language-wide idiom respected by all build tools. I'd be very nervous about a project that eschews such firm idioms in favor of its own inventions.
> Your $GOPATH is a list of directories, not a single directory. This is almost never useful in practice (most people should just use a single directory as their GOPATH).
I disagree. I use and recommend a two-directory GOPATH setup: the first path is where all `go get` packages are installed, and the second is where you put your own packages. I've found that keeping these separated makes long-term management easier.
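As a concrete sketch of that setup (directory names are just examples; note that `go get` installs into the first entry listed in $GOPATH):

    # ~/.profile or similar: a two-entry GOPATH
    export GOPATH="$HOME/godeps:$HOME/gocode"
    export PATH="$PATH:$HOME/godeps/bin:$HOME/gocode/bin"

    go get github.com/example/somelib   # lands in ~/godeps/src/... (first entry)
    # your own packages live under the second entry:
    #   ~/gocode/src/github.com/yourname/yourproject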
Why? "my code" is in $GOPATH/src/github.com/natefinch/ ... all code that is not "my code" is in other directories. Why bother having to switch over to some other gopath to get there?
Go import directories are, by definition, unique. My work code is under github.com/juju/. Why should I put labix.org/v2/mgo under some other root path? They're unique directories either way.
Goop still sets and uses GOPATH (whenever you `goop exec` or `goop go` for example), it just eliminates the need for you to set it per directory/project that you are working on.
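For anyone who hasn't used it, a rough sketch of the workflow as I understand it from the README (the Goopfile pinning syntax and the `.vendor` directory are from memory, so treat them as approximate):

    $ cat Goopfile
    github.com/gorilla/mux #e718e9f83af7
    $ goop install              # fetch deps into a goop-managed workspace (.vendor, IIRC)
    $ goop go build             # run the go tool with GOPATH pointing at that workspace
    $ goop exec ./bin/myserver  # run an arbitrary command under the same GOPATH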
I think there is a little bit of confusion in this discussion about changing GOPATH, as in changing the value of the env var vs. changing the content of the directory tree pointed to by the env var.
It is worth noting that godep recently added an import path rewrite option, which removes the need to manipulate your workspace's packages or even run the dependency tool in order to `go get` or build your code.
Edit: Though, perhaps the final solution could be cleaner with a vendoring tool written from the ground up with the purpose of import path rewriting.
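Concretely, that's the `-r` flag to `godep save`; a sketch of the effect (project path hypothetical):

    # copy deps into Godeps/_workspace and rewrite import paths in your
    # source, so plain `go get` / `go build` work without godep installed:
    godep save -r ./...

    # imports are rewritten from, e.g.
    #   "github.com/gorilla/mux"
    # to
    #   "github.com/you/yourproject/Godeps/_workspace/src/github.com/gorilla/mux"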
From what I can tell, goop has the same GOPATH issues as godep, but godep allows you to address it in different ways. save/restore is one way, and there is also the proxy to the go tool `godep go` which looks to be the goop approach.
Thanks for answering my questions in paragraphs 2 and 3. Not sure how paragraph 1 was useful.
I see godep as a Swiss Army knife of tools, and different people use godep in completely different ways (which has its advantages and disadvantages). Personally, the only godep commands I use are:
rm ./Godeps && godep save --copy=false # Save current dep versions
rm -rf $TMPDIR/godep && godep go build/test # build/test using saved dep versions
I'm curious why so many of these Go dependency managers are popping up. Doesn't go get download all dependencies automatically? Why are these third-party dependency managers necessary?
Because 'go get' does not allow you to point to a particular revision. It downloads the latest / HEAD commit for that dependency and sticks it into ${GOPATH}. These dependency managers are all aiming to fix that problem, and some of them, I believe, handle transitive dependencies as well.
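To make that concrete: with the stock toolchain, the only way to "pin" is to check the revision out yourself, and nothing records the choice (package path and sha hypothetical):

    go get github.com/example/somelib          # fetches HEAD of the default branch
    cd "$GOPATH/src/github.com/example/somelib"
    git checkout 2a1b3c4                       # manually pin a known-good revision
    # nothing records this pin, and a later `go get -u` will silently undo it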
For git (and probably others), it actually looks for a "go1" tag, and if that's not there it uses the head of master.
I honestly don't care for all these version managers. If I didn't trust the author of whatever library I'm pulling in to keep their API stable, then I wouldn't use that library, or I'd preemptively fork. This is actually a language convention, and if more people were familiar with it there would be even fewer issues (though I've never had any personally). If you're interested in helping the community familiarize themselves with Go's language conventions, I made a package[0] that developers should read after going through the Go Tour. It's meant to teach best practices and conventions for library design in Go.
You are arguing for versionless libraries? AFAIK the Go maintainers tried that with golang itself in the beginning, and gave up on it because it's not practical in real-world use when APIs are changing quickly and outsiders want to know whether they can update without breaking. There are very few libraries which have managed this in the past and remained at version 1.x for their lifetime.
It's interesting that go get supports reading version tags, though it's a bit pointless to support them for language changes, since go1 is stable, go2 is unlikely to arrive for years, and Gofix would be best used to fix any issues with a major transition like that anyway. So in practice this feature is not used.
It'd be nice if go get instead supported reading version tags for packages, and had some simple scheme for getting the latest compatible version using semver and versioned import dirs, rather than simply pulling the latest master. I think to do that they'd have to adjust go get and go build/run though, perhaps to add a lock file and to take dirs like github.com/foo/bar-v1.2 into account. Simple versioning would not be a difficult change or an incompatible one; it just wouldn't deal with the very difficult issues of conflict resolution on larger projects, which I think was the golang team's objection (correct me if I'm wrong). I do see why they don't want to introduce a half-baked solution without dependency resolution.
At present, either library authors are expected never to break compatibility (your proposed solution), or everyone has to update their code when they do. This particular detail seems like undesirable behaviour or an unresolved problem in golang to me, rather than a carefully thought-out convention. Just because that's the way it is doesn't mean that's the way it should be.
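A sketch of how that might look (purely hypothetical; no Go tool behaves this way today):

    # hypothetical: version-suffixed import directories resolved via semver tags
    go get github.com/foo/bar-v1.2    # would fetch the newest tag matching v1.2.x
    # checked out under $GOPATH/src/github.com/foo/bar-v1.2, so v1 and v2 of
    # the same package could coexist in one workspace, with a lock file
    # recording the exact versions chosen for a build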
I don't think they've given up on the idea, considering there is still no official version management. To give up would mean that they're doing something else, and AFAIK this is still best practice. The early Go team did believe this would be an okay idea, but over time everyone has noticed that it may not be for the best. However, the Go team also punted on solving the problem while thinking that the community would figure something out. The problem is that "go get" is already built into our standard toolchain. The moment you start using a dependency manager, the official toolchain workflow is broken and you've fragmented the ecosystem. If people really want a solution to this problem, they need to prod the Go team and finally get an official decision made and merged into the go tool, rather than just creating their own binary.
> I don't think they've given up on the idea, considering there is still no official version management.
Sorry, that wasn't clear, I was talking about the language itself - the language is now versioned, but it wasn't for the initial releases - they had weekly snapshots, then moved to formal versioning and gave up on the idea of being versionless. See these slides about version 1:
What holds for the language also holds for libraries, I think - having explicit versioned releases and being able to sometimes break backwards compatibility is really useful, especially if others can easily pin whatever version they import and migrate at their own pace.
I think it's good people are experimenting with pkg versioning - if someone comes up with an elegant solution and deals with most of the edge cases, it'll probably get into the bundled tools like go get eventually, or people will stop using go get and use another better tool. go get is not essential for fetching go libraries, it's just the blessed method.
I suspect it stems more from the Google origins than anything else.
Most of Google's code base is one big repository, and everyone is working at HEAD. It's nice not having to support a matrix of dependency versions, but that really only works when you can also modify downstream dependencies with ease.
That works within Google because there's a culture of constant maintenance; but out in the real world, you can't expect OSS package maintainers to be constantly active & willing to accept patches.
Yes, you're probably right, it does sound like they work this way internally, but it's not really practical if you have an open ecosystem with lots of different packages by authors who are not paid to maintain them. It'll be interesting to see if they bend on this and adopt some versioned solution.
Interesting point. Actually you can see those dependency managers as managing your forks for you; unlike forks, there is a mismatch between import paths in your code and the location of your "fork", hence the need for custom tooling (either putting the right thing in the GOPATH tree, or by maintaining a separate GOPATH tree).
Having your own checked-in vendor dir, perhaps managed with git submodules, on the other hand moves the mapping into git and lets the import path point exactly to your "fork"[0] of choice.
A server responding at pinfor.io would just behave like a proxy for whatever comes after "/->/"[1], but you'd have a cmdline/web tool to actually override some mappings, like pinning a tag, a sha, another repo (your private fork).
The advantage is that it would be compatible with go get, meaning that you could use this for your libraries.
The disadvantage is that it depends on an external site. On the other hand, you'd already be depending on an external site to host your repo. It would be nice if this kind of redirector were actually supported by your repo host (e.g. github).
Or perhaps we could just have special support in "go get", e.g. some kind of redirects, perhaps declared as json files so it's easy to host without having access to server side software.
I guess there have already been some discussions about that. Does anybody have some pointers/thoughts on this?
[0] Here I'm broadly defining fork as any DVCS commit; that's all that matters for the build; how you advance that commit defines which "repository" you are following, whether the upstream or your fork.
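One pointer: `go get` already supports per-path redirection via an HTML meta tag served from the import path's host, which gets part of the way there with nothing but static hosting (domain and repo below are hypothetical):

    # go get fetches the import URL with ?go-get=1 and reads a meta tag:
    curl -s 'https://pin.example.org/mypkg?go-get=1'
    # the page only needs to contain something like:
    #   <meta name="go-import" content="pin.example.org/mypkg git https://github.com/you/mypkg">
    # after which `go get pin.example.org/mypkg` clones the named repo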
Because of the way go get works when people are sharing packages. As an example:
Package author blub shares his package, which depends on github.com/blab/blab, with the world (or his team). Unfortunately, a week later github.com/blab/blab goes through a major rewrite and breaks blub, but people doing `go get` on blub get the new blab, and their build is broken, so they start complaining to the blub author.
Solutions to this:
- Expect blab never to make a breaking change (somewhat optimistic)
- Vendor a specific version of blab into blub and update it manually (sketched below)
- Explicitly specify a version of blab to use with a dependency tool
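A minimal sketch of the vendoring option, assuming the common convention of a third_party tree inside the repo (paths and revision are hypothetical):

    # inside blub's repo: snapshot a known-good revision of blab
    git clone https://github.com/blab/blab /tmp/blab
    (cd /tmp/blab && git checkout 1a2b3c4)            # hypothetical revision
    mkdir -p third_party/github.com/blab
    cp -r /tmp/blab third_party/github.com/blab/blab
    rm -rf third_party/github.com/blab/blab/.git      # keep a plain copy, not a repo
    # blub's code then imports the vendored path, e.g.
    #   github.com/blub/blub/third_party/github.com/blab/blab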
It says it is inspired by Bundler, but in what ways? I'm curious, as I haven't used either Bundler or goop much, but I've heard that constraint resolution lies at the heart of Bundler.
You are right. Unfortunately, constraint resolution isn't something that can be implemented in a practical manner until there is a standard versioning scheme (there isn't much you can do with git hashes) and a standard dependency manifest file that every go project adopts.
Goop has `Goopfile.lock` and `goop exec`, inspired by Bundler's `Gemfile.lock` and `bundle exec`.
See: "and a standard dependency manifest file that every go project adopts"
You need some way to create a complete dependency graph for a given project. If your project has a list of its immediate dependencies, but those dependencies in turn require specific versions of other packages, how do you know what those versions are? You need some consistent way of getting this metadata.
There are a couple of solutions; two that come to mind: each project stores its deps metadata in a consistent location in its repo, or there's some central package repo (a la RubyGems) where such metadata can be queried.
For any solution to work, all of your dependencies (both direct and indirect) need to opt in to the same metadata scheme, or the system falls apart. Unfortunately, there isn't any consensus in the Go community on how to fix this.
This is what the parent meant by "there isn't much you can do with git hashes."
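As an example of such a manifest, godep already records per-dependency revisions in JSON (file name varies by godep version; fields abbreviated, revision hypothetical):

    $ cat Godeps/Godeps.json    # or ./Godeps with -copy=false
    {
        "ImportPath": "github.com/you/yourproject",
        "Deps": [
            { "ImportPath": "github.com/gorilla/mux", "Rev": "e718e9f83af7" }
        ]
    }
    # but unless every transitive dep ships metadata in an agreed format,
    # the full graph still can't be resolved from bare git hashes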
Goop to the rescue! The only trouble I've had with Go is dependency handling. I wish people realized how superior Erlang's way of dealing with this is.
There is no import: what you are calling is either there because you included the path to the beam file when starting up the VM, or it is not there and calling the function returns an error. All of the libs are organized into the module:function structure, and you cannot chain.