It's worth it, imo. I was in your boat a few months back and hated every minute of it. With that said, i'm still not 100%. I semi-regularly see syntax that makes me go "Wat is what!?", but then i sit for a moment and understand it. Rust introduces a lot of visual baggage and that seems to cause me syntax blindness.. not enjoyable.
Unfortunately though, i'm back on Go. I want to be on Rust, but i had to pick a language for work and i can't ask my Team to go through what i did. Rust, despite the safety, is too unnatural for our larger codebase.
Luckily i think Rust has seated itself as the language we will use if the need is truly there. Unfortunately though, not for everything.. just the specific things that need it.
If the algorithm could work well on a mobile device, this would make awesome camera filters for something like Instagram, likely could make some $ or get someone to buy it from you.
At the very least Nix would make your Docker builds more reliable. Granted, you could then start making arguments for "why use Docker at all", but i can't answer that.
I can't wait for someone to write a great book on Nix. I was so lost in it. I had it running on my home server, and needing to compile my own things because of what Nixpkgs lacked was a constant struggle.
It was a very frustrating experience. A frustration fed by the fact that i could tell how powerful NixOS was - if only i could grok it.
Yeah, a book on NixOS would be valuable I think. Understanding NixOS really comes down to reading all the docs and then trying to accomplish what you need to do - learning by doing and reading source code.
I love nix, my biggest gripe would be that the Nix language is dynamically typed...
If you need more complex logic, you simply wouldn't use `try!()`. `try!` is intended for those `if error then return` checks, just like Go's `if err != nil { return err }`.
If the error needs to be checked for a specific type (like a timeout) and then retried X times, you don't use the simple `if err != nil {...}`; you hand-write something:
let bar_res = try!(foo()).bar();
match bar_res {
    Err(MyError::Timeout(_)) => { /* .. some retry code */ }
}
Note that the above code is woefully incomplete (i'm not checking all possible match values, for example), but it should give you the gist of the idea.
edit: Man, the guy (or wo-guy) has a problem and we all pounce :)
So, i was a Go developer for ~4 years, then for the last 4 months or so i've been learning and using Rust. The pure joy of some things with Rust was astounding.
Now, i joined a new job and they're in need of a new language for some backend tasks - the choice was mine. Rust or Go? The backend tasks were heavy API servers - nothing amazing, we don't do groundbreaking work. Python was their existing language, but they weren't too happy with it.
For weeks (literally) i mulled it over; i really really really wanted to use Rust. Rust gives me a peace of mind that Go never came close to. Unfortunately, Rust fits neither the development speed nor the developer experience[1] that this shop has.. and i really felt Go would be the best choice for this place.
The point of this post isn't a Rust vs Go comparison.. rather, it's a wishlist for the hopefully eventual Go 2.0. I have no idea if/when that will ever be made.. but i hope it will, and i hope they adopt some things from Rust.
1. Channels without panics. Channels are awesome, but Go's design of them means that you have to learn special ways to design your usage of channels so that it does not crash your program. This is asinine to me. So much type safety exists in Go, and yet it's so easy for a developer to trip over channels and crash their program.
2. Proper error handling. I love error checking - i love it in Rust especially. It's very explicit, and most importantly, very easy to check the type of things. Recently i was reading an article about Go errors[2] and it made me realize how messy Go errors are. There are many (three in that article) ways to design your error usage (i sketch these below, after this list), and worst of all your design doesn't matter, because you have to adapt to the errors returned by others. There is no sane standard in use that accounts for the common needs of error checking.
3. Package management. It's a common complaint, i know. But Rust & Cargo is so excellently crafted.. Go just got it wrong. Recently i've been using Glide, and while it's a great tool, there is only so much it can do. It's a tool built to try and put some sanity in a world where there is next to no standardization. We need a Go package manager.. hell, just copy Cargo.
4. Enums. My god, Enums. Such a simple feature, but so insanely welcome and useful in Rust.
You'll note that i didn't list Generics. I know that's high on people's lists, but not mine. To each their own.. please don't start a holy war. (this is likely due to me using Go for ~4 years. I'm quite comfortable without Generics)
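For illustration, here's the kind of mess i mean from point 2 - three different checking styles you end up writing depending on what a library hands you. A minimal sketch; the `ErrNotFound` sentinel and the `check` helper are made up, and these aren't necessarily the article's exact three:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "os"
    )

    // Hypothetical sentinel error, in the style of io.EOF.
    var ErrNotFound = errors.New("not found")

    func check(err error) {
        // Style 1: compare against a sentinel value.
        if err == ErrNotFound {
            fmt.Println("sentinel: not found")
            return
        }
        // Style 2: type-assert to a concrete error type.
        if pe, ok := err.(*os.PathError); ok {
            fmt.Println("typed:", pe.Op, "failed")
            return
        }
        // Style 3: check behavior via an interface.
        if ne, ok := err.(net.Error); ok && ne.Timeout() {
            fmt.Println("interface: timed out, maybe retry")
            return
        }
        fmt.Println("opaque:", err)
    }

    func main() {
        check(ErrNotFound)
        _, err := os.Open("/no/such/file")
        check(err)
    }

Each style has to be discovered from the library's docs or source, which is exactly the problem.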
"Channels without panics. Channels are awesome, but Go's design of them means that you have to learn special ways to design your usage of channels so that it does not crash your program. This is asinine to me. So much type safety exists in Go, and yet it's so easy for a developer to trip over channels and crash their program."
I've solved this in my programming by finally coming to grips with the fact that channels are unidirectional, and if you want any bidirectional communication whatsoever, up to and including the client telling the source for whatever reason it has closed, even that one single bit, that must occur over another channel of its own. Clients must never close their incoming channels. This does mean that many things that Go programmers sometimes try to bash into one channel need to be two channels.
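Concretely, a minimal sketch of the two-channel shape I mean (all names hypothetical): the producer owns and closes out, the consumer owns and closes done, and neither ever touches the other's channel.

    package main

    import "fmt"

    // produce owns and closes out. It never closes done; that channel
    // belongs to the consumer, and it carries the single bit of
    // "I'm finished" flowing back the other way.
    func produce(out chan<- int, done <-chan struct{}) {
        defer close(out) // only the sender closes its own channel
        for i := 0; ; i++ {
            select {
            case out <- i:
            case <-done:
                return
            }
        }
    }

    func main() {
        out := make(chan int)
        done := make(chan struct{})
        go produce(out, done)
        for v := range out {
            if v == 3 {
                close(done) // the consumer closes the channel it owns
                break
            }
            fmt.Println(v)
        }
    }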
> I've solved this in my programming by finally coming to grips with the fact that channels are unidirectional, and if you want any bidirectional communication whatsoever, up to and including the client telling the source for whatever reason it has closed, even that one single bit, that must occur over another channel of its own. Clients must never close their incoming channels. This does mean that many things that Go programmers sometimes try to bash into one channel need to be two channels.
Erlang got it right (again): messages sent to processes can only be unidirectional.
I see Erlang and Go as duals of each other here, at least considered locally; Erlang focuses on the destination and Go focuses on the conduit of the message. Each has advantages and disadvantages. I think Erlang's approach ends up easier to use, but you lose the useful ability of channels to be multi-producer and multi-consumer.
(I'd like to see Erlang create a concept of a multi-mailbox where you can have a PID that can be picked up by a pool. Trying to create pools from within Erlang proper is quite challenging, and the runtime could do better. I acknowledge the non-trivial problems involved with clustering; I think it'd still be a big improvement even if they only work on the local node.)
The Erlang equivalent is that messages don't carry their source with them, so you must embed the source in the message if you want to send a message back. The main difference here is that Erlang doesn't offer anything that can be misunderstood as sending back a message any other way, so nobody is fooled and this never comes up as a problem once someone gets Erlang. The problem in Go is that the client end of the channel is capable of closing the channel, even though it really seriously never should.
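A toy repro of the trap, for anyone who hasn't hit it - this compiles happily and then dies at runtime with "send on closed channel":

    package main

    import "time"

    func main() {
        ch := make(chan int)
        go func() {
            for i := 0; ; i++ {
                ch <- i // panics once the receiving side closes ch
            }
        }()
        <-ch
        close(ch) // the client end closing its incoming channel: legal, fatal
        time.Sleep(100 * time.Millisecond) // give the sender time to blow up
    }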
I don't know how, but frankly i'd love to eliminate all simple panics. Nil pointers and channels seem like the two big culprits, offhand.
Granted, i left out nil pointer/interface panics because it seems unrealistic given how difficult it was for Rust to get rid of nil pointers. I'm not sure Go 2.0 could do it and still be considered Go.
> Granted, i left out nil pointer/interface panics because it seems unrealistic given how difficult it was for Rust to get rid of nil pointers.
That wasn't really the difficult part; it's just a basic application of generic enums.
Now Go doesn't have (userland) generics or enums, but they could have taken the same path as other languages (and the one Go is wont to take): special-case it. Which they probably can't anymore because of zero-valuing: you can't have a zero value for non-null pointers.
I agree; I want non-nillable types in Go. This despite the differences in Go that make nil values more "valid" than they often are in other languages.
I still faintly hold out hope. Unlike many of the complaints about Go that would require fundamental restructuring, C# showed that this actually can be retrofitted onto a language without breaking it.
> Unlike many of the complaints about Go that would require fundamental restructuring, C# showed that this actually can be retrofitted onto a language without breaking it.
C# added nullable types. Not non-nullable types.
In Go, the concept of zero values is so fundamentally baked into the semantics that non-nilable pointers can never really be added to it.
Isn't that a timing race? You're relying on the notification of "I'm done" getting there before the other thread tries to send.
Write-and-panic has the advantage of being atomic. And you can catch a panic.
If you have a stoppable reader like that, maybe have it pull using a channel of channels. That is, consumer sends the producer "here is a one use unbuffered channel, please put an item in it" and then blocks waiting for the response.
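A minimal sketch of that pull pattern (names are made up):

    package main

    import "fmt"

    // produce hands out one item per request; stopping it is simply
    // the consumer closing the requests channel it owns.
    func produce(requests <-chan chan int) {
        i := 0
        for reply := range requests {
            reply <- i
            i++
        }
    }

    func main() {
        requests := make(chan chan int)
        go produce(requests)
        for n := 0; n < 3; n++ {
            reply := make(chan int) // one-use unbuffered channel
            requests <- reply
            fmt.Println(<-reply) // consumer blocks waiting for the response
        }
        close(requests) // safe: the consumer owns this channel
    }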
I don't think panics on channels are a big problem. Mostly they are a symptom of bad architecture. In a good design the ownership of each end of the channel is exactly defined, similar to how you would have to define the ownership of all resources in Rust. As soon as that's the case, the owner can (on demand) close the channel and others can read from it and wait for the close/finish.
The only corner case is multi-producer channels, which either need additional synchronization or could be left open and garbage-collected.
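For the multi-producer case, the usual additional synchronization is a sync.WaitGroup with a single goroutine owning the close - a minimal sketch:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        out := make(chan int)
        var wg sync.WaitGroup

        for p := 0; p < 3; p++ {
            wg.Add(1)
            go func(p int) {
                defer wg.Done()
                out <- p // one of several producers
            }(p)
        }

        // Exactly one goroutine owns the close, and it only fires
        // after every producer has finished sending.
        go func() {
            wg.Wait()
            close(out)
        }()

        for v := range out {
            fmt.Println(v)
        }
    }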
I've taken a very similar path recently myself and started looking into Rust.
> 2. Proper error handling. I love error checking
Hugely agree here. I can get behind Go's overall mentality of returning errors instead of throwing exceptions, but in my mind there are not enough primitives in the language to keep this style of coding safe and sustainable.
Rust's `Result` type, `try!` macro, and pattern matching are an incredibly powerful combination, and make Go's approach look downright medieval in comparison.
Overall though, Go is probably still the more practical choice between the two languages (due to Rust's incredibly high barrier to entry).
> Overall though, Go is probably still the more practical choice between the two languages (due to Rust's incredibly high barrier to entry).
Does Rust really have an "incredibly high barrier to entry?"
I've been using Rust for a few months, and just deployed my first high-throughput application a month ago, and my experience has been the opposite. Yes, the first couple of weeks were a bit rough while I was getting used to the ownership system, but since then I have been progressing at a relatively quick pace. The package and dependency management facilities are incredibly good, and I've found high-quality libraries for nearly all my initial needs.
Compilation times could be faster, but the error messages provided by the compiler are so useful that I have come to depend on compilation errors for refactoring. The gains in predictable performance and resource utilization have far outweighed any initial cognitive overhead in the learning process. The community and the resources they provide are fantastic.
Coming from a mixed dynamic language and functional programming background, I could see room for improving certain FP aspects of the language, but am impressed with the pervasive pattern matching and collection handling.
Not a knock on Go, but rather an endorsement of Rust and its future.
> Yes, the first couple of weeks were a bit rough while I was getting used to the ownership system, but since then I have been progressing at a relatively quick pace.
This is the very definition of "high barrier to entry". Clearly it wasn't too much of a barrier for you but I can see how it'd be an issue for people. I'm expecting editor support and wider adoption (differently constructed tutorials, SO answers) to lower this barrier. I think Rust has the potential to be very popular, particularly if the reputation shifts from "high barrier to entry" to "slightly harder to get started but fewer problems in production".
It took me about a week of fairly vigorous effort to start writing it fluently, but I also have the advantage of having seen and written many other programming languages. I have a few anecdotal examples of friends who are great developers, but still have trouble with Rust's ownership system.
When I say something like "incredibly", it's after thinking about trying to teach it to someone more junior (like you'd see in a corporate environment with a mix of skill levels). I think that this would be a very difficult task.
Indeed. But looking at the GC improvements in Go 1.8 ("typical" GC pauses of less than 100us), the set of cases where you can't use the language might be shrinking significantly. Now if only they could turn that into some sort of guarantee...
Pause times are not the sole performance metric of a GC! Throughput matters just as much, if not more!
GC pauses are not the only reason to use Rust. Rust is not "little Go" that you reach for only if you don't want GC. You might want package management, data-race-free concurrency, a mature optimization framework, runtime-free operation, concurrent data structures, fast C interfacing, etc. etc.
You're right of course about the GC metrics. And there was some disappointing increase in GC CPU usage with the 1.8 changes (I don't know the current status). But some of the items you mention (package management, mature optimization framework) will hardly create cases where Go cannot be used, which was the original point. Same for concurrent data structures, which can be implemented in Go, even if the lack of generics makes it less convenient. I do agree with you regarding the other items.
Aggressive compiler optimizations are not optional in many domains.
One rule of thumb that a lot of people don't realize is that if you aren't maxing out your sequential performance, your parallel multicore algorithm usually loses to an optimized sequential one. The reason is simple: parallelism introduces overhead, leading to guaranteed sublinear speedups. Compiler optimizations, on the other hand, frequently result in multiple factors of improvement.
Right, but I think those cases are few and far between. Basically where you need blistering performance and/or deterministic resource usage. I would throw "safety" in there, but many (most?) safety-critical applications are written in C, and Go is certainly safer than C.
> Right, but I think those cases are few and far between. Basically where you need blistering performance and/or deterministic resource usage.
No, there's a lot more. See my reply to your sibling comment.
> I would throw "safety" in there, but many (most?) safety-critical applications are written in C, and Go is certainly safer than C.
This argument doesn't make any sense to me. Why is being better than a language from 1978 our sole criterion? Shouldn't we try to make our software as reliable as possible?
Besides, Go is not any safer than C when it comes to data races. I care about those a lot too.
> No, there's a lot more. See my reply to your sibling comment.
Which criteria from your post can't be rolled up into performance, deterministic resource usage, or safety?
> Why is being better than a language from 1978 our sole criterion?
I have no idea. Thankfully no one made any such argument.
> Shouldn't we try to make our software as reliable as possible?
No, we should make it sufficiently reliable. For example, many applications might not benefit from Rust's pedantic checking of data races (e.g., if the application is sequential), but the impact on development velocity could be prohibitive. By the by, I like Rust, I just think it's not well-suited for most applications.
> Besides, Go is not any safer than C when it comes to data races. I care about those a lot too.
While I absolutely agree, this isn't incompatible with my claim that Go is at least as safe as C, and thus safety alone doesn't preclude Go from safety critical applications.
> Which criteria from your post can't be rolled up into performance, deterministic resource usage, or safety?
Package management. Macros (Rust has a widely used ORM). Generics. Easier error handling. Pattern matching. Functional features, such as map(). A more flexible module system. Built-in FFI. Inline assembly. Etc, etc.
> By the by, I like Rust, I just think it's not well-suited for most applications.
I'm going to push back on this too. In terms of "all programs that anyone has ever written", scripting languages are overwhelmingly the most suitable choice. But in terms of usage, core infrastructure favors reliability, performance, and interoperability with other languages. Consider regex libraries, language runtimes, graphics libraries, codecs, text/internationalization support, windowing systems, UI libraries, browsers, and so forth. This is our core infrastructure, where performance and reliability are paramount, and these libraries have outsized importance.
> this isn't incompatible with my claim that Go is at least as safe as C, and thus safety alone doesn't preclude Go from safety critical applications.
This assumes that C is OK for safety critical applications. It's not. The status quo is bad. The bar should be much higher.
> Package management. Macros (Rust has a widely used ORM). Generics. Easier error handling. Pattern matching. Functional features, such as map(). A more flexible module system. Built-in FFI. Inline assembly. Etc, etc
Package management and easier error handling are the only ones that don't roll up into safety or performance, and I dispute that Rust's error handling is easier than Go's.
> I'm going to push back on this too ... our core infrastructure ... libraries have outsized importance.
Granted, but that's not "pushing back" on my point, because important though those libraries may be, they still don't constitute a majority of applications.
> This assumes that C is OK for safety critical applications. It's not. The status quo is bad. The bar should be much higher.
Agreed. Rust is better than Go for safety critical applications--note that I never tried to argue the opposite--only that Go is better than the status quo, terrible though it may be.
> Package management and easier error handling are the only ones that don't roll up into safety or performance, and I dispute that Rust's error handling is easier than Go's.
Generics, pattern matching, functional features, modules, and easy FFI are productivity features! They have nothing to do with safety and performance (although they are incidentally used to implement some speed/safety features, because why not). Inline assembly and FFI are useful for platform interop: again, these are not safety features.
> Granted, but that's not "pushing back" on my point, because important though those libraries may be, they still don't constitute a majority of applications.
I suspect the numbers are like: 95% of applications are best in scripting languages. 4% are best in managed languages (Java, C#, Go, etc.). 1% are best in low-level languages like Rust. If you want me to concede that point, fine. But in terms of importance of projects, which correlates with the number of developers you need on the project, the numbers look very different.
> Generics, pattern matching, functional features, modules, and easy FFI are productivity features! They have nothing to do with safety and performance
I dispute that these features improve productivity. With the exception of modules (whose purpose has always eluded me) all are available in Go (or could be trivially implemented) at the expense of safety and/or performance.
> I suspect the numbers are like: 95% of applications are best in scripting languages. 4% are best in managed languages (Java, C#, Go, etc.). 1% are best in low-level languages like Rust. If you want me to concede that point, fine.
I disagree. I think that Go beats scripting languages at their own game, in most cases. Of course this is all opinion and mine is no more valid than yours.
> But in terms of importance of projects, which correlates with the number of developers you need on the project, the numbers look very different.
Perhaps, but I'm not sure what your point is here, or how it relates to the broader topic. Are you arguing that more developers are needed on the project and thus more developers in total work on these core libraries than all other software projects?
> I dispute that these features improve productivity. With the exception of modules (whose purpose has always eluded me) all are available in Go (or could be trivially implemented) at the expense of safety and/or performance.
It's complicated, because the reflect library is very general. There's no need to repeat most of this by hand. So in fact, it's a perfect example of how functional features are a productivity feature.
My experience is the exact opposite. Most of the people who I work with now using Rust did not know any Rust at the time they were hired. There haven't been any major problems at all.
I'm a former C++ dev. I've made repeated attempts to learn Rust over the last few years. I now feel like I'm capable of building things, but with much effort and frustration and constant Googling. By comparison, I got to the same point in Go in an afternoon, with a small fraction of the effort.
I'm not interested in debating which language is easier to learn for C++ programmers. I'm pushing back against the (clearly false, IMO) idea that adopting Rust is a mistake.
In the context of the conversation, adopting Rust over Go is a mistake for the majority of applications primarily because of the difference in learning curve. Of course there are other factors, and not all applications are equal. I threw in my C++ background to demonstrate that I'm quite capable of thinking in the low-level terms demanded of Rust programmers, and my learning curve was still very large. You can disagree if you like, but my comment was related.
I don't agree that the learning curve difference (which is a temporary cost that decreases over time) is high enough to outweigh the benefits in the "majority" of cases that could benefit from Rust (and reap those benefits long-term). I respect your experience, but it doesn't invalidate mine, which has held up with many people I've seen get up to speed with Rust.
It's about the learning curve, especially for people coming from Python to build websites. I mean, seriously, would you recommend Rust over Go to someone who builds APIs / websites? It's like telling someone who's used to Ruby to go the C++ way - a terrible idea for Python/Ruby developers.
I had to pick something i was familiar with too. We need this on a shorter timeframe (when is that not true? haha), and evaluating a language is hard.
As it is, this was a partial reason against Rust for me. After my time learning Rust, i'm still not 100% confident in my ability with it. My lack of experience, and thus the productivity hit, is a real factor i had to calculate. More so with the rest of the team.
With Scala/etc i would be learning it new myself, and that was a gamble i wasn't going to take. Especially because our application doesn't hinge on the specific problems that some languages excel at (concurrency, etc).
I'm sticking my neck out choosing a language, so i wanted to be quite sure i made the right choice for us. Well, as best i can - i am new in this shop after all.. i hope i made the right one :)
Honestly, unless you need some special library or your team is already familiar with those languages, I would pretty much always recommend Go over those languages. Everything is just simpler, and it's at least as fast as any of those languages.
They're talking about sum types (aka tagged unions aka variant records), where you have a proper type with a closed set of variants (which can be checked for completeness by the compiler) and optional associated data (possibly of different types for each variant).
Iota isn't even a step up from C's enums, it doesn't come close to what you can find in ML descendants (such as Rust or Swift for the most recent ones).
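To make the contrast concrete, here's roughly all that iota buys you (a minimal sketch with made-up names) - a named integer, with no exhaustiveness checking, no associated data, and nothing stopping out-of-range values:

    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func describe(c Color) string {
        switch c {
        case Red:
            return "red"
        case Green:
            return "green"
        // forgetting Blue compiles without complaint
        }
        return "???"
    }

    func main() {
        fmt.Println(describe(Color(42))) // legal, prints "???"
    }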
Curious what they weren't so happy about with Python? Was it purely performance? If so, did they consider PyPy, or at least profile what the slowest bits are so they can evaluate whether to throw everything out or just rewrite the slow bits? Was it the language itself? Not everyone likes dynamic languages, though it's odd they started with it. Did you consider Node at all?
From my time doing server backend Python dev: everything is only caught at runtime - everything from missing arguments to typos in variable names that accidentally match another variable, turning your int into a string. Having a compiler catch these saves much time and hair-pulling. And having unit tests as a final defense, rather than the only defense, does wonders for my peace of mind.
As a Python user and fan I hear this complaint a lot. I understand but I can't really agree since things like typos and type fails pretty much never happen to me, at least in production. My secret? I use the REPL, heavily. (And not even in the grand Lisp fashion, because Python's REPL isn't very advanced, mostly I use it just off to the side and maybe or maybe not running an instance of the full program, or parts of it.) Using the REPL catches most of those things just as quickly as a compiler, plus it can catch things compilers don't, such as null pointer exceptions.
Two lesser secrets are using a linter, which catches all sorts of issues too, and second actually getting the full program locally to a state where I can have it execute (most of) the new code I just wrote that I didn't verify in the REPL, or using data sources I didn't just define temporarily in the REPL, so I can make sure it seems to do what I intended. A lot of devs don't seem to do the second bit... Checked in code for Java compiles and passes existing tests and went through a basic code review but inevitably bugs get filed because it doesn't actually do everything the story said, it's like they didn't even try out their own code, it just looked correct and the compiler/tests agreed.
I think when you're working with the REPL interactively instead of relying on the common "edit -> save -> compile -> ship it|start over" cycle you don't miss those details as much, because you're constantly trying out your own code. Maybe my experience is because I don't typically use dynamic languages as scripting languages, at least in the sense of quickly hacking up a script, saving, getting to skip the compile step (look how much faster it is to develop in dynamic languages!!!), and running it until it works. I have done that, but even then, I'm usually writing the bulk of the script in the REPL -- or rather in my editor that can send text to the REPL. It's quite different from what seems to be the thing that made these languages popular to begin with, which is not having to explicitly type everything and getting to skip a (potentially long) compile step (which also encourages more source sharing).
> I understand but I can't really agree since things like typos and type fails pretty much never happen to me, at least in production.
The typos / type fail comments are shorthand for the real complaint, which is that a long-maintained large dynamic language codebase requires continuous vigilance. I've worked in dynamic languages for most of my career (Python, JS, Clojure) and typos/type-fails are pretty rare but if you haven't spent a half day tracking down a bug that turned out to be one of these at some point in your career, I'd be quite surprised. The breakdown comes when someone not familiar with the code makes a change without fully considering the consequences and it goes through something that nil-puns so the error isn't detected immediately.
My experience with people who are really against type systems is that they haven't run into a language with a good type system. I'm a fan of the gradually typed JS dialects (Flow more than Typescript) since you get the quick hacking at the beginning combined with compiler-enforced vigilance once you switch over to maintenance mode. Type-inferred languages are also nice, particularly fully type-inferred languages. I think F# [0] is both terse and accessible, for example.
I see nothing false in rewording your statement to "a long-maintained large codebase requires continuous vigilance". Typing doesn't seem to matter with this. At least to the extent that we believe typing doesn't have a meaningful impact on the expected size, lifetime, and complexity of a codebase to solve an arbitrary problem. (With better type systems around it is neat to see statically typed languages quite significantly narrow the gaps in expressiveness though, and in some cases beat out 'trivially dynamic' languages.)
I know I've wasted time tracking down simple issues a static type system would have caught (or just due diligence by the coder -- and some of these issues I've caused myself! Though I really can't remember any insidious to find but quick to fix typo or type fail I caused, but I'm willing to admit to a possible selective memory bias), I've also wasted time tracking down simple issues a type system wouldn't catch -- even ones like Rust's, Haskell's, and dare I say maybe even Shen's? I also spend/waste a lot of time, probably the most time in total, tracking down complex issues that got past the type system and existing tests and code reviews and personal or team diligence, and these days most often in either Java or JavaScript, neither of which are particularly great poster children for their respective type systems. (I don't want to get into the strong/weak axis.)
Issues from NPEs or divide-by-zeros or undefined function calls, or stuff that goes through the type system's mechanisms to escape the type guarantees like reflection, casts to Object, void * , unsafe, serializing class names instead of values, etc., are annoying, a sudden power outage is also annoying. Some of that can be caught and prevented by more powerful languages, but still the time to fix those is nothing compared to more complex issues resulting in all sorts of incorrect behavior. There are so many more causes than type mismatches. It seems in your career the trivial bugs from typos are rare for you too. I'm not convinced the possibility of slight inconvenience those rare issues can create is worth the certain tradeoff in losing expressive power (especially if I can't use the most expressive static languages for whatever non-tech reasons) and possibly more, nor am I convinced a static approach is even the best one when you have languages like Lisp which support types well enough to error out when you call COMPILE on a bad form but still have huge flexibility.
I wonder if all this sounds like I'm a diehard C++ fan and don't need no stinking safe memory management tools because I never get segfaults or security problems. If it does I don't think it should, but it's really hard to explain why my perceived utility of static type systems is low without just appealing to preference, firsthand, or wise authority's experiences. The argument has been going on for decades by smarter people than me on both sides. In the end maybe it's just preference as arbitrary as a favorite color but rationalized to kingdom come. I at least don't draw my preference line so narrowly at static vs dynamic, there are plenty of static languages I'd use over JavaScript, and plenty of dynamic languages I'd use over C++.
I will ask about your experience though: how does it square with people like the author of Clojure? Is he just a god-tier outlier? I don't think one could argue he hasn't done his homework, or doesn't have enough real-world experience. It reminds me of a quip graphic I saw once, it was something like a venn diagram showing an empty intersection of "dynamic typing enthusiasts" and "people who have read Pierce's Types/Advanced Types books".
> I also spend/waste a lot of time, probably the most time in total, tracking down complex issues that got past the type system and existing tests and code reviews
Nobody will argue against you on this. The static/dynamic debate, as I understand it, is about whether typing and design constraints typing impose are worth the reduction in simple issues. Reasonable people can choose both and my personal take is that I think types are worth it given a good type system for a long-maintained project. You get more people coming on board and I think types are most useful in that situation.
generics and sum types
> how does it square with people like the author of Clojure? Is he just a god-tier outlier?
I happened to be in a group Rich joined for lunch at the last Clojure Conj and we talked about typed functional languages. The short story is that he doesn't think types are worth the tradeoffs. At the time Clojure had just introduced transients, and he mentioned the numerous typed functional language blog posts about the feature and that most were incorrectly typed. He gives specific examples at the end of Simple Made Easy and reiterated many of them during the discussion.
I think we'd agree that libraries and architecture matter more than language, but I do think language influences the abstractions a library can provide. My post wasn't really meant to argue the superiority of types in all situations, but rather was a specific response to the sentiment of "why do people always bring up typos when I never run into them?"
Type safety mainly, i think. Performance is a definite concern, but they have a lot of internal applications and the stability of them varies. I offered up that less dynamic languages would provide more speed and reliability to boot.
I know Python got type annotations in 3.5, though i'm not sure if it has Go-like Interfaces (Traits in Rust). If not, i think it really should.
I do firmly believe they'll be quite happy with Go though. Rust, not so much.
Seems a lot of the Go fans I read are former Python users burned by dynamic typing, so I agree they'll end up happy (or at least happier than Rust) with Go. Though one more option you might want to consider is Nim: http://nim-lang.org/ (It's pretty easy to get up to speed in it, especially for a Python user so long as they're not expecting to use fancy OO features.)
Python has always had Go-like interfaces in practice. The problem was that they were not reified into the code, so you had no easy way to know when calling a function and passing it a "file" exactly what file-like things the function was going to do with that "file" without reading the source code. You had to extract the interface yourself.
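In Go the interface is right there in the signature, so the "what will this function do with my file-like thing" question answers itself - a small sketch (countBytes is a hypothetical function):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // The signature documents exactly what "file-like" means here:
    // only Read is required, and the compiler enforces it.
    func countBytes(r io.Reader) (int, error) {
        buf := make([]byte, 512)
        total := 0
        for {
            n, err := r.Read(buf)
            total += n
            if err == io.EOF {
                return total, nil
            }
            if err != nil {
                return total, err
            }
        }
    }

    func main() {
        n, _ := countBytes(strings.NewReader("hello"))
        fmt.Println(n) // 5
    }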
Not only that, but there are simply no guarantees. You can abuse a function in any way you see fit in Python, and the only thing that suffers is your server at runtime :(
Optional types in 3.5 look awesome - but i don't want to lose duck typing. I want Go-interfaces in Python.
For me, it's the extremely straightforward conventions of Go, with its straightforward tooling, and its strong typing.
I "grew up" on Python, wrote a lot of code in it, and love it. But it doesn't feel as cohesive as Go.
As an example of cohesive tool design, let's look at Go package management. In Go, if I want to install a package, I install it with:
$ go get github.com/pkg/term
Having installed this package, I import it in my code with:
import "github.com/pkg/term"
Having imported this package, I'd like to read the documentation for it. To do that I use the command `go doc` with the package name:
$ go doc github.com/pkg/term
Now that I've read the docs, I've got a question about how some particular functionality is implemented. With Go, I happen to know exactly where I can read that code, on my own hard drive:
$ cd $GOPATH/src/github.com/pkg/term
With Python, I find that I don't have this absolute guarantee of consistency. Usually, packages will have a similar convention, but some require installing with one name and importing with another, and the local documentation viewer (pydoc) isn't installed by default, so I didn't even know about it until relatively late in my use of Python. I've had a similar experience with the rest of Python's tooling: it's as feature-complete or better than Go's, but it's not quite as consistent as Go.
It was pretty bad that easy_install came out with no easy_uninstall. Plus some packages are in your system's package manager (which I think is great because I'm sick of every language having its own package manager when my system's (Gentoo) is better) and some aren't, or the latest versions aren't. Plus there's the virtualenv stuff, or the general problem of your dev environment not matching the deploy environment. Needing to have both Python2 and Python3 on your system in some cases. Some packages have C/C++ code so you need a compiler, and all the dependencies that implies. On Windows I think Python development is a joke, last time I did anything extensive there I think I ended up installing Enthought's distribution and picked off from http://www.lfd.uci.edu/~gohlke/pythonlibs/ as needed. I don't see how the Go situation on Windows could be worse than that.
I'm not a huge stickler for non-local consistency -- one of the things I like about Nim is its apathy about naming conventions (foobar is the same symbol as foo_bar or fooBar, func(arg) is the same as arg.func()...) -- so that's probably why I don't find the consistency factor a huge issue. When a language and its ecosystem has it, it's nice, but when it doesn't, it's not really a thing that annoys me.
Can't speak for the OP, but when you go full type checking it's hard to go back. Our infrastructure has many pieces in Python (right now I'm rewriting some) but all new APIs are in Go. The amount of trouble you never even have to fight, thanks to type checking, is huge. Performance gains are also good in many cases. Slightly more verbosity is a minor price to pay.
Can you give examples, assuming you are referring to patterns that still are somewhat common? I've seen a bunch of somewhat-connected Singletons used as enums with extra functionality in Smalltalk, is that the kind of thing you were thinking of?
Most of the times we couldn't do the refactorings/changes we wanted to, it was because we could only be 99.9% sure and not 100% sure someone didn't stick some goofy value somewhere to denote something special.
Also, I'm not so sure that Smalltalk as a language community and as a programming environment did what it took to get everyone to do the right thing. In Swift, it seems like the programmer would quickly learn that an enum is the right thing to do, and management would be on board with using it, because it's obviously the way you're supposed to do the thing. I could just hear one of my bosses saying, "Do you really need that? I don't want to hear about you delving into some sort of 'science project' where you could just stick a String there."
The quick and dirty way of doing things with "somewhat-connected Singletons used as enums" would be just to use the class hierarchy, and subclasses of a particular class would constitute one particular enum. But even with that, someone would have some sort of objection to it.
You're welcome. The stuff that's actually important in production code over years is often something hard to think of ahead of time while looking at toy programming examples. And often, it involves what humans might do under deadlines.
You should definitely try it, as it's easy to interface with C/C++ and the compile times are short, maybe comparable to Go. Also, they fixed a lot of C++ annoyances, and the language is older and definitely more mature than Rust. It's garbage collected, but you can also do manual memory allocation, although standard library support for that is rather lacking. Unlike Go, it also has a package manager, called DUB.
A lot of this comment resonates with my experience and views :)
> You'll note that i didn't list Generics. I know that's high on people's lists, but not mine
This is me too.
Been programming in Rust for 3 years, and picked up Go two years ago. I like the language; I like how it feels like "C but safety net". I haven't used it for anything important (course projects a bit), but this is because so far Rust works for almost all my needs and for everything else I use Python. But I'd be happy to use Go if someone asked me to or there was a reason why Python wasn't suitable.
However, generics aren't what I want in Go. I get that a language with generics can get complex and perhaps slow to compile and has other issues too. On the other hand, enums are at the top of the list for me, especially for the message-passing programs I tend to write in Go. There have been times when I've hacked together an enum system using interfaces and hidden methods, but it's not great. I've grown to get used to error handling, but I do think it can be improved a lot. Package management was a major gripe of mine but it seems like y'all are fixing that :D
I have not written production software in Go so panic-proof channels haven't been on my radar but yeah, that one makes sense.
> didn't want to make a choice that would cause them to spend weeks/months feeling unproductive.
+1 I have often recommended Go to people who don't have time but want to learn something new. If you want to actually spend time writing software from scratch in week 1 of learning the language, Go is amazing. I recommend Rust often too, but I usually find out how much time they have and/or their background first.
> Rust gives me a peace of mind that Go never came close to.
Again, big +1. For me it is two effects -- one is that Rust feels safer, and the other is that as a Rustacean Go feels wasteful at times. After programming in Rust for that long, losing performance at any corner for no reason irks me. In Rust, for example, most folks will avoid reference counting and trait objects and heap allocation as much as possible. If you see an unnecessary trait object it _feels wrong_. This is a perfectly sensible attitude to have in Rust. For me, it often carries on into Go. But Go loves interfaces and has garbage collection (with good escape analysis, but a GC nevertheless). Every time I use an interface object in Go, it _feels wrong_. It shouldn't. And I've learned to ignore it -- if I'm writing an application in Go; perf probably didn't matter enough for this to matter. (In fact, the odd unnecessary trait object in a typical Rust lib is usually no big deal either.). But, that nagging feeling is still there :)
You can also build enums by building a struct with a `Tag` field. This works well and generally has better performance characteristics than using interfaces.
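A minimal sketch of the shape (all names hypothetical):

    package main

    import "fmt"

    type Tag int

    const (
        IntVal Tag = iota
        StrVal
    )

    // A hand-rolled tagged union: one field per variant, with Tag
    // recording which field is currently meaningful.
    type Value struct {
        Tag Tag
        I   int
        S   string
    }

    func (v Value) String() string {
        switch v.Tag {
        case IntVal:
            return fmt.Sprintf("int %d", v.I)
        case StrVal:
            return fmt.Sprintf("str %q", v.S)
        }
        return "invalid"
    }

    func main() {
        fmt.Println(Value{Tag: IntVal, I: 7})
        fmt.Println(Value{Tag: StrVal, S: "hi"})
    }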
That approach has its own problems, such as wasting memory (need to store any data for each tag value separately, rather than benefiting from overlapping storage ala Rust enums or C tagged unions), as well as losing type safety: one has to manually remember which (groups of) fields correspond to which tag values (although the Go loss of type safety is far better/more controlled than the one for C tagged unions). Using interfaces has neither of these problems.
Note that using interfaces doesn't get you all the type safety either, since there's no exhaustive matching. But it's good enough (aside from the possible perf issues) for most use cases.
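For comparison, a sketch of the interface-based version (names made up): each variant is its own type with its own fields, so storage overlaps naturally and fields can't be confused across variants - but the type switch needs a default arm, because the compiler won't flag an unhandled variant:

    package main

    import "fmt"

    type Shape interface{ isShape() }

    type Circle struct{ R float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    func area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return 3.14159 * v.R * v.R
        case Rect:
            return v.W * v.H
        default: // reachable if a new variant is added later
            panic("unhandled shape")
        }
    }

    func main() {
        fmt.Println(area(Circle{R: 1}), area(Rect{W: 2, H: 3}))
    }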
> Of course they are. What if you read from or write to the wrong field?
This falls under the rubric of "not meaningfully less type safe". This data structure is central in a large project of mine, and I have maybe 3 places where I switch on the type flag. I'm not proposing this as a general purpose replacement for interfaces, only a useful way to abstract over a few known types when you can't afford all of the allocs.
> Your example has one contained type per variant. In Rust it is common to have an enum like
enum Foo {
    Variant1(String, u8, Vec<u8>),
    Variant2(u8),
    Variant3(String),
    Variant4(f32),
    Variant5(bool, bool, String),
    Variant6(f32),
}
I agree. I don't propose this as a general purpose replacement for Rust's enums.
> I agree. I don't propose this as a general purpose replacement for Rust's enums.
Yeah, that's the thing, Rust enums used this way are very powerful. I'm fine with making tagged structy things in cases like the one mentioned, I feel Go can handle that. I'm missing out on all the useful stuff I can do with proper algebraic datatypes.
That only works well if each "variant" holds the same kind of thing. If not, you have to store them one after the other (space wastage), or use interfaces to store them in the same place (tag isn't necessary anymore, extra boxing).
Rust enums (ADTs) aren't like Java enums where each variant contains the same kind of data.
No worries, I think it's a matter of taste and I will try to find a good design that will look more modern either going in the mochila or premier pro direction.
On this note, i wonder if automated tools like this will become more commonplace. I know next to nothing about security[1], but i'd love for there to be some sort of self-updating simple service i can run that constantly updates and checks my router, home servers, IoT devices, all ports, etc. for known exploits.
Surely a lot of this stuff can be automated. The simpler the tool the better - a single binary would be great. Is this a pipe dream?
edit: I feel like part of the problem would be shipping all the exploits. Legal matters aside, it would at the very least mean having to code exploits for thousands/millions of things. Though, perhaps a pluggable/linkable framework for this security could be a sort of proof of work. Ie, whitehats could publish the exploits by writing the plugin.
edit2: I'm aware that this tool is sort of what i'm talking about, but it mainly focuses on a single unix machine, right? Nor does it support Windows. I wonder why we can't just make this ultimately simple? Ie, a single binary?
[1]: Well, i know enough to know how little i know.. which is nearly nothing heh.
OpenSCAP [0] has made a lot of progress in the last two or three years. The SCAP Security Guide [1] includes security policies for USGCB, DISA STIG, PCI-DSS, CJIS, etc. and it's really easy to get started, scan your host, and generate a nice HTML report of the results for quick consumption. They've also started including "remediation" scripts to fix any problems that are found (n.b.: that can be dangerous).
To scan remote hosts, they simply need a single package installed (I think they actually only need the oscap binary) and an SSH server running.
In recent versions of Anaconda, you can specify a security policy in your kickstart file and have the host configured in accordance with the security policy as part of the installation process. The host is in compliance before you even get that first initial "login" prompt. (For those of us who have to deal with this, this is f'ing awesome.)
Another thing you can do with it is compare a host against, say, Red Hat's security errata and get a report of which security updates a host is missing. This can be automated, run by cron, and the results e-mailed to you once a week or whatever.
All that said, OpenSCAP isn't a panacea. It's still pretty "rough around the edges", so to speak, but it's much, much better than the tools we had to deal with this stuff just two or three years ago.
Windows isn't a supported platform (yet). There's still a lot of work to do on the Linux side of things to improve the software, so I'm not sure when (if?) they'll start working on Windows.
I tried it a few months ago and as far as I could see, it's not just Windows that is unsupported, it only really supports Red Hat. It was packaged for Debian, but the policy files were absent and you could only find old unmaintained ones.
(this is not a criticism; I understand that Red Hat prefers to spend money on their own distro)
More like a vulnerability scanner. Signature based antivirus apps are mostly useless nowadays, but being able to tell me I'm running a broken version of OpenSSL is very useful.
Threatstack will do that. Their agent runs on your machine as a kernel mod and will alert you to any libs being used (e.g. openssl, libcurl) whose version matches a known CVE.
Also, beyond what Karunamon mentions, i want to scan my network, my IoT devices, etc.
Besides, virus scanners are heavy and ugly; i've always hated them. Sure, it's nice to have monitoring for a breach, but why do i have to sit with holes in my security waiting for a breach? Some virus scanners try to monitor downloaded files or weird behavior etc, but i'd much rather scan my computer for holes than for things that have already exploited the security vulnerabilities i had open.
The other option is that you use pre-built images that someone has taken the time to harden for you. The Center for Internet Security [1] has a bunch of pre-built AWS images that you can use for about 2c an hour. https://www.cisecurity.org/
I'm in the same boat as you (especially the part about knowing how little I know) and am on standby for a good tool to come about. It's hard to trust solutions given the security theater reputation in a lot of software.