
Slicing a slice is full of gotchas. I tend to forget all the rules and avoid it whenever I can.



A slice operation s[i:] seems like it should be little more than an ADD instruction for a registerized slice where i is known to be in bounds, but a surprising little detail is that when i==cap(s) we really don't want to create an empty slice whose data pointer points one past the end of the original array, as this could keep some arbitrary object live. So the compiler generates a special branch-free sequence (NEG;SUB;ASR;AND) to compute the correct pointer increment, ((i - cap) >> 63) & (8 * i).

https://go.dev/play/p/J2U4djvMVoY


Appending a slice is also full of gotchas. Sometimes it modifies the slice in place, sometimes it reallocates and copies.


Only really a gotcha if you pass a slice into a function and expect to see modifications in that slice after the function completes. It's helpful to remember that Go passes by value, not reference.

That can be addressed by passing the slice as a pointer: https://go.dev/play/p/h9Cg8qL9kNL


> Only really a gotcha if you pass a slice into a function and expect to see modifications in that slice after the function completes. It's helpful to remember that Go passes by value, not reference.

Slices are passed partly by value (the length), partly by reference (the data).

    func takeSlice(s []int) {
      slices.Sort(s)
    }
From your explanation, you would expect that to not mutate the slice passed in, but it does.

This can have other quite confusing gotchas, like:

    func f(s []int) {
      _ = append(s, 1)
    }

    func main() {
        s := []int{1, 2, 3}
        f(s[0:2])
        fmt.Printf("%v\n", s)
    }
I'm sure the output makes perfect intuitive sense https://go.dev/play/p/79gOzSStTp4


Slices are passed only by value. It's just that the value is a struct containing a reference to the data. Once one understands that, the rest makes perfect sense.

I can see why it trips up newcomers, but it feels pretty basic otherwise.


I say this as someone working with go every day.

The fact that I can pass a slice to a func 'by value' and mutate the source slice outside the func is already surprising behavior to most people. The fact that it MIGHT mutate the source slice depending on the slice capacity is the part that really drives it home as bad ergonomics for me.

Overall I enjoy working with go, but there are a few aspects that drive me up the wall, this is one of them.


How would you have designed it? An internal byte array instead of a pointer?


I think the key thing missing from go slices is ownership information, especially around sub-slices.

Make it so you can create copy-on-write slices of a larger slice, and a huge number of bugs go away.

Or do what rust did, except at runtime, and keep track of ownership

    s := []int{1, 2, 3}
    s[0] = 0 // fine, s owns data
    s1 := s[0:2] // ownership transferred to s1, s is now read-only
    s1[0] = 1 // fine, s1 owns data

    s[0] = 1 // panic or compiler error, s1 owns data, not s
With of course functions to allow multiple mutable ownership in cases where that's needed, but it shouldn't be the default


I could have worded it better, but yes, slices have footgun potential, though they're simple to work with once you know how they work (and maps fall into this same category).


It is a surprisingly hard thing to implement well. I've lost count of how many times I implemented slice-like things in C (back in the 1990s-2000s when I mostly wrote C), and it was one of those things I never managed to be happy with.


Something that even Mesa and Modula-2 already supported by the 1980s; maybe one day C will eventually get proper slices.


Good point. As for C and slices, I doubt many care at this point. Most will either use C alternatives that have slices, or are long-time C users who just deal with it.


It can be done but that requires a better, more expressive type system.


An expressive type system also often means slower build times. I dislike working with Rust for this exact reason.

While most people highlight the difficulty of picking up the syntax, I find Rust to be an incredibly tedious language overall. Zig has a less expressive type system, but it compiles much faster (though not as fast as Go). I like what Zig and Odin folks are doing over there.

I like the balance Go strikes between developer productivity and power, though I dearly miss union types in Go.


An expressive type system absolutely, positively, unequivocally does not imply slower build times (especially with a Church-style type system). There are plenty of programming languages with advanced type systems which compile extremely quickly, even faster than Go, for example OCaml.

Don't commit the fallacy of conflating Rust's slow compile times with its "advanced" (not really, it's 80's tech) type system. Rust compilation is slow for unrelated reasons.


Old doesn't mean non-advanced. GraalVM is based on a paper (Futamura) from fifty years ago. Off the top of my head I can't think of many language features younger than the eighties—maybe green threading? That would be surprising but might fit. I suppose you could also say gradual typing. Haskell has many recent innovations, of course, but very few of those have seen much use elsewhere. Scala has its implicits, I guess, that's another one.

Personally, I write java at my day job and the type system there makes me loooong for rust.


No need for Rust, when JVM has Haskell, Scala, Kotlin, Clojure, Common Lisp.


I prefer rust to all of them, but I also come from a very systemsy background. Plus it has the benefit of being much easier to embed inside or compose around basically any runtime you'd like than managed code, which is why I chose rust rather than basically any managed language.

But, it's just a tool, and the tools I choose reflect the type of stuff I want to build. The JVM is extremely impressive in its own right. You're just not going to find any one runtime or ecosystem that hits every niche. I'm happy to leave the language favoritism to the junior devs: for the vast majority of situations, what you're building dictates which language makes the most sense, not vice versa.


As a start, Go could separate container and slice types, the way C# did it with T[]/List<T>/other and Span<T>/Memory<T>. No lengthy build process required.


I'm not deeply familiar with those C# types, but I think Go may already make that separation. Go has arrays, which include their size in the type (so a four-element array is a different type from an eight-element array). The language's affordances make it easy to work with slices alone, since bare arrays are generally not useful on their own, but you can declare arrays and slice into them if you like.


Yeah, but at the same time, I find C# code a sigil soup. Go makes a different tradeoff.

I've been involved in a few successful large scale projects and never felt like the type system of Go is holding me back too much. Sure the error handling could be better, union type would make interface munging easier, but the current state after generics isn't too bad.


> Sigil soup

Last time I checked, C# had clean and focused syntax for working with collection types. Could you provide an example?


You'd most likely be happy with Odin (https://odin-lang.org), which I find to be essentially a fixed Go with no GC.


It's possible to build languages that compile faster than Go, with a much more expressive type system.

It's just that compile times and DevEx haven't been a priority for most projects.


As proven by other languages with similar type systems and faster compile times, Rust's case is a matter of tooling, not language features.


Not sure if that's really a proof, as it could be the exact combination of language features that makes up the slowness. For example traits, non-ordered definitions in compilation units and monomorphization probably don't help. GHC also isn't a speed demon for compilation.

But sure, LLVM and interfacing with it is quite possibly a big contributor to it.


Haskell isn't the only language around with complex type systems.

However, it is actually a good example regarding tooling, as the Haskell ecosystem has interpreters and REPL environments available, for quick development and prototyping, something that is yet to be common among Rustaceans.


Rust has Cranelift: https://cranelift.dev


Indeed, but its compile times weren't much better than LLVM's, at least as of a year ago.

Ideally we would be having the F# REPL/JIT, plus Native AOT for deployment, as comparable development workflow experience.

Naturally F# was chosen as example, because that's your area. :)

Not being negative per se, I also would like to have something like Haskell GHCi, or OCaml bytecode compiler, as options on rustup, so naturally something like this might eventually come.


Based on a first-hand account I read (but cannot source offhand), Rust's compiles are slow because any time there was a tradeoff involving compile time at the expense of something else, they'd always choose that something else. Not because they hated fast compilation; it just wasn't high on their priorities.



