Having jumped back into some green-field work in Scala in the past few days, I will say I'm quite impressed with the improvements to the sbt/IntelliJ universe. Compile time seems to have improved considerably (though the project is still quite small, so time will certainly tell).
One problem, which is also one of the advantages of Scala, is that the interop with Java often means that all your Option[...] etc. code can still get NPEed by some offending Java lib you've decided to use.
As the fidelity and ubiquity of pure Scala libs improve, this will hopefully go away to some extent.
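A common guard at that boundary is `Option.apply`, which maps a null returned by Java code to `None`. A minimal sketch (the `lookup` helper is made up for illustration):

```scala
// Option(...) wraps a possibly-null Java result as Some(value) or None,
// but only at the boundaries where you remember to apply it.
def lookup(javaMap: java.util.Map[String, String], key: String): Option[String] =
  Option(javaMap.get(key)) // java.util.Map#get returns null on a miss

val m = new java.util.HashMap[String, String]()
m.put("a", "1")
```

The catch, as noted above, is that nothing forces a Java library's internals through this guard.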
For sure you can, but the internals of the libraries can also cause NPEs because of turtles all the way down :P
We've been running Scala in production for 4 years and have had our share of NPEs, both in our own code and in the libraries we host. My point, I guess, was just that it's not entirely true that they won't boil to the surface on occasion.
Scala gets a lot of things right by default. It also has a lot of flexibility and power: almost "a framework" for writing other languages within the language.
I've used it every work day for over two years... I like it a lot, but sometimes wish it were a bit simpler and more aesthetic.
Not mentioned here: pattern matching. It certainly goes a long way to ensure all cases are being handled, since the compiler lets you know when your patterns are non-exhaustive.
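A minimal sketch of that exhaustiveness check, using a hypothetical sealed hierarchy:

```scala
// Sealing the trait lets the compiler know every possible subtype,
// so it can warn when a match doesn't cover them all.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
  // deleting either case above triggers a "match may not be exhaustive" warning
}
```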
"Yes, you can write more concise code; yes, you have a more advanced type system; yes, you can pattern match. There are hundreds of other reasons that Scala makes a great language. When a language can offer me constructs to write more correct code, I'll always be willing to deal with the learning curve."
I think the point was that these are the things that he thinks are of more value to developers than the other good things that Scala offers.
I thought it was worth mentioning as a separate point; just saying "you can pattern match" doesn't say much, when it's actually a key feature to guarantee certain correctness in a program.
Pattern matching mainly offers correctness and conciseness; maybe the correctness part should have been emphasized more.
I would pay good money to see Paul Philips' reaction to the sentence
> "the biggest benefit with Scala is correctness."
...before the author goes on to say
> "When I say correctness, I mean the ability to easily and consistently write code that works as inteded (not the academic definition of correctness)"
As someone who used to work on automatic proof of correctness systems, that irked me too.
It's quite reasonable to have a language where certain classes of errors cannot be generated from the source code. There are languages which can reliably detect or prevent subscript out of range errors, null pointer errors, dangling pointer errors, and race conditions. (C and C++ detect and prevent none of the above, which is the cause of most of the troubles in computing.) That's not full correctness; it's just language safety. It means you can't break the language model from inside the language. Most of the "scripting languages" have this property, or at least are supposed to.
Scala takes the null pointer issue a bit more seriously than most languages. That's good, but not enough to justify a claim that it offers "correctness".
Fair warning: I worked with the author for some time, and have been having an out-of-band convo with him.
One of the things I absolutely HATE about Scala is operator overloading, and the excessive abuse of it. I ran into it just now using some library that overloaded ==.
This is absolutely my #1 complaint about Scala, and really it's more a complaint about the community. It's not just the operator overloading, but the overuse of operator characters instead of meaningful function names.
All of this leads to potential newcomers thinking Scala is some inscrutable mess of a language, and even old fogeys will have to scratch their heads.
I see Scala as an alternative to Java/Go for Ruby web developers who want a statically typed language that feels more dynamic. Yes, you can go crazy with Scala, but if you are responsible, I would say that migrating from Ruby to Scala is easier than migrating to Java or Go. (I've used Scala commercially at work for 3 years now and I have my issues with it, but I don't have an alternative; I tried Kotlin, Java 8, and some Go, and couldn't give up Scala.)
> Yes, you can go crazy with Scala, but if you are responsible
The problem is not so much the code you write but the code you have to read/use. Languages that are more rigid are often easier to work with; they are predictable, as you won't have to deal with strange APIs.
I believe Java 8 or Kotlin are good enough. Scala is sometimes just unreadable when you have to read other people's source code.
Is it really worse than the pattern/OO folks that end up creating dozens of one-member interfaces, namespaces galore, even separate compilation units for absolutely no reason other than it feels "enterprisey"?
Interestingly, the Ruby developers I know seem more inclined towards Clojure than Scala. One stated he was more attracted by immutability than type safety, hence Clojure over Scala.
Another thing I like about Scala is its support for duck-typing like python, except it is actually enforced at compile time via traits and magic methods.
e.g you can define your own methods to sugar and desugar for pattern matches, define an apply method to treat a class like a function, or map() on an Option type - it simply behaves as a list of size 0 or 1 and that is all you need to map.
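A small sketch of those points (the `Point` class here is made up; case classes get `apply`/`unapply` for free):

```scala
// A case class gets apply (construction) and unapply (pattern matching) for free.
case class Point(x: Int, y: Int)

val p = Point(1, 2) // sugar for Point.apply(1, 2)
val Point(a, b) = p // destructuring via Point.unapply

// Option.map behaves like mapping over a collection of size 0 or 1.
val doubled = Some(3).map(_ * 2)             // Some(6)
val empty   = (None: Option[Int]).map(_ * 2) // stays None
```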
Apart from pattern matching, your example isn't like duck typing: it's all entirely statically typed. Scala does have structural types, which is a bit like duck typing.
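For reference, a minimal Scala 2-style sketch of a structural type (the `quack` method and classes are made up; the call is checked at compile time but dispatched reflectively at runtime):

```scala
import scala.language.reflectiveCalls

// Structural type: accepts any value that has a quack(): String method,
// without requiring a shared trait.
def makeItQuack(d: { def quack(): String }): String = d.quack()

class Duck  { def quack(): String = "quack" }
class Robot { def quack(): String = "beep" }
```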
When I say correctness, I mean the ability to easily and
consistently write code that works as inteded (not the academic
definition of correctness).
I'm curious as to what the author believes the academic definition of 'correctness' actually is. Is it something other than code "working as inteded [sic]"?
I've been working with Scala for 1.5 years and am loving it. sbt feels easy to work with, compile times have improved (and you can improve them further by modularizing your app). Scala gets a lot right. Type inference makes it feel dynamic while still being safely typed by the compiler. Pattern matching and everything-is-an-expression are the real killer features for me; they make my code much more expressive.
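A small sketch of the everything-is-an-expression point (the `describe` helper is made up):

```scala
// if/else, match, and blocks are all expressions: they yield values
// directly, so there's no need for uninitialized temporaries.
def describe(n: Int): String = {
  val parity = if (n % 2 == 0) "even" else "odd"
  val sign = n match {
    case x if x > 0 => "positive"
    case 0          => "zero"
    case _          => "negative"
  }
  s"$sign and $parity"
}
```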
The one thing that does bother me, as mentioned elsewhere, is operator overloading. There is a veritable soup of operators and you're never quite sure what an operator is actually doing. Worse, there aren't any plaintext equivalents. scala.collection.List doesn't have any "prepend/unshift" or "append/push" methods... all you have are ::, :::, +:, :+, /:, :\, ++:, :++, ++ and so on.
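For reference, a few of those operators on a toy list, as I understand them from the standard library:

```scala
val xs = List(2, 3)
val prepended = 1 :: xs           // cons: prepend an element
val appended  = xs :+ 4           // append an element
val combined  = List(0, 1) ::: xs // concatenate two lists
// (/: and :\ are the symbolic spellings of foldLeft and foldRight)
```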
I don't get the business of marking variables as const in local scope (aka "val" aka "final" for local variables). It's easy for a parser or a person to scan the local scope and see if a variable is ever possibly mutated or not. This is very different from the situation with globals where it's generally intractable to prove that something is never mutated. In local lexical scope you can see all possible mutations by the definition of "lexical scope": a const variable is one that is assigned only once, a non-const variable is one that may be assigned multiple times – this is a straightforward syntactic property. Is there some benefit to marking local lexical variables as constant that I'm missing?
At least in Java, non-final variables can't be used inside anonymous inner classes. Also, it's easier for me as a developer to read "final" and know that (referential) immutability is guaranteed by the compiler instead of having to read the local scope, which likely contains method calls that may or may not modify that variable.
val and final have stronger meanings when your objects are immutable. True immutability makes reasoning far easier than just referential immutability.
> At least in Java, non-final variables can't be used inside anonymous inner classes.
Aren't those members/fields rather than variables? If so, then they're effectively global, not local, so that's a completely different story; I'm talking strictly about local variables.
> True immutability makes reasoning far easier than just referential immutability.
Yep, immutable types and constant global bindings are great.
I still don't understand why you consider having a compiler-enforced restriction on mutation worse than letting readers figure out what was the developer's intent.
In Java, the only "downside" is having to add "final" as a modifier, which is negligible.
In Scala, the alternative is to declare that variable as "var" instead of "val". When would you ever choose var over val if your object isn't supposed to mutate?
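The difference in one tiny sketch:

```scala
val fixed = 1
// fixed = 2   // would not compile: "reassignment to val"
var counter = 1
counter += 1   // fine: var allows reassignment
```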
That's just a weird wart of Java – not allowing anonymous inner classes to close over non-final variables was just a cheat to avoid having to implement proper closures; I have no idea what the technical impediments to doing so were when that decision was made, but plenty of languages, including Scala, have real closures.
> In Scala, the alternative is to declare that variable as "var" instead of "val". When would you ever choose var over val if your object isn't supposed to mutate?
It's an extra keyword, an extra complication in the language – one more binary choice to multiply with all the other options. It would be nice to get rid of the distinction altogether.
In Scala, there are both mutable and immutable data structures. Immutable data structures are preferred, however there are some cases in which a dash of mutability can simplify the code, especially in cases where Java inter-op is a must.
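A minimal sketch of that split (names are illustrative):

```scala
import scala.collection.mutable

val xs  = List(1, 2, 3)                // default collections are immutable
val buf = mutable.ListBuffer(1, 2, 3)  // opt-in mutability, e.g. at Java inter-op boundaries
buf += 4                               // in-place update
val frozen = buf.toList                // convert back to an immutable List
```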
When it comes to helping a programmer understand code, every little bit helps. Humans are not good at parsing and keeping complex state in our minds, computers are. The less clutter I need to keep track of, the more interesting stuff about the code I can concern myself with.
As for marking local variable bindings as non-changing, I think it is tremendously helpful. In the (mostly Java) code base I work in daily we use this throughout. The net result is that I can just assume that property for everything, and whenever I see a variable not marked as non-changing I immediately know that something less than obvious is happening.
Given the above, I am naturally a big fan of making non-changing variable bindings (final/cons/val/...) the default and updatable variable bindings the case that should be marked. I would also like to work in a language where immutability of not just the variable binding but also the values themselves was better handled by the language.
In Scala, most variables should be val (i.e. constant) from a design perspective. That is, it is better, in Scala, to write code that does not have changing variables. Thus, using val instead of var is simply a check on the code, much in the same way that static typing provides a benefit over dynamic typing.
I don't think so, actually: in all the heated discussions I've had with people on the subject of static / dynamic typing, everybody agreed that static typing had significant benefits. What people don't agree on is whether these benefits are worth the cost.
It's hard to argue in good faith that having the compiler catch mistakes rather than finding about them at runtime is a bad thing. It's perfectly possible to argue that it's not worth the perceived development speed slowdown.
Keep in mind that all of this is just my opinion. I'm not going to append "IMHO" to each sentence, as to save the reader the tedium of reading it. I'm not saying that I'm correct or that people should agree with me.
I would go so far as to say that I think overly simplistic static type systems don't have much benefit. C's static typing drives me crazy, as there's almost nothing of use that I can express with it. It's the same with Go; I almost never pass an Int when I meant to pass a Bool. In exchange for thoroughly unhelpful type errors I now have to jump through flaming hoops to parse JSON.
It ends up being a bit like JavaScript or Python, where both languages lack the ability to specify that something is truly private (though in JS you can use closures to hide things). You generally just use a naming convention to mark a thing as private, and hopefully people have the decency to respect that. It's like that with types in dynamic languages; I can express pretty complex relationships with types and keep the whole thing in my head without many problems.
That said, languages with powerful type systems like Scala and Haskell are thoroughly worth the effort. I can express almost anything with these type systems, usually with a minimum of fuss. They can protect me from the dreaded NPE, and that's a bug I encounter quite often. They can help me write simpler code that deals with complex shapes of data with their support for pattern matching and TCO. This one is more Haskell related, but the guarantee that everything is immutable and lazy makes it possible for the compiler to do some insanely impressive optimizations.
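A small sketch of the TCO point above: `@tailrec` asks the compiler to verify the recursion can be compiled to a loop (the `sumList` helper is made up):

```scala
import scala.annotation.tailrec

// @tailrec makes the compiler reject this method unless the recursive
// call is in tail position, i.e. it runs in constant stack space.
@tailrec
def sumList(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil    => acc
  case h :: t => sumList(t, acc + h)
}
```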
Scala, Haskell, and Rust have taken a dyed-in-the-wool lover of dynamic languages and made a convert of me. They finally followed through on the promises of safety and productivity that other languages failed to deliver on.
In closing, I'll repeat one last time that all of these are merely the opinions of an insufferable neck beard (me). Even if we disagree, I'm sure you're a very nice person, and I approve of you using whatever languages and tools make you happy and productive.
I can't help but wonder whether you're that circumspect with everyone, or if I come off as crazy-kill-you-philistine and need to work on my communication skills?
Aside from the fact that I've never felt scarier, I agree entirely with every single point you just made and thank you for qualifying my broad generalisation.
I agree with this. I also sometimes think that I use the type system most when I am refactoring/redesigning code, and at that time I might be sending a bool instead of an int, as someone wrote, which is caught by any good type system. Regarding the cost, I think an optional type system, like Dart's, is interesting. You can do some prototyping or quick coding and then add types once you have working code in order to develop fast, or you can use types all the time in order to be correct.
Purely from a reader's perspective, yes, an IDE can probably parse and highlight mutable variables automatically.
But from a writer's, modifier's, or refactorer's perspective, what counts is the intent. Was a particular local variable meant to be mutable or immutable? Every time I write a line of code, I need to watch out for whether I mutated a variable that was not meant to be mutated, or even the case where I mutate it unintentionally (by a typo, for example).
In a non-trivial project with a code base of 100K lines, the time and effort spent on this manual analysis can be an overhead well worth avoiding.
I'm not recommending anything. I'm wondering what the point of declaring something that's obvious from a simple syntactic analysis is. The author of this post makes a big deal of it and I don't see why it's useful. If the motivation is that constness is the right default in global scope and you want to make local and global scope more similar (even though they are still radically different), that's cool, but then don't make it out like local variables defaulting to const is the best thing ever invented.
The main issue at the end of the day is compile time. They are working hard to address this, and from the first day we started using Scala to now, it's improved dramatically - but definitely lots of room for improvement where that's concerned.
> The main issue at the end of the day is compile time
For deployment, sure, but for daily dev the vast majority of one's time should be spent taking advantage of sbt's incremental build feature; there the compile hit is pretty negligible (particularly when you break out your application into sub-projects/modules).
Even after the proposed Scala overhaul (i.e. Dotty in around 4 years time) it's unlikely that clean builds will be blazing fast.
To put it in perspective, right now scalac is roughly 10x slower than javac. The Scala compiler team is banking on getting a speed-up by generating Java 8 closures under the hood in Scala 2.12; that will mean less code for scalac to generate.
Beyond that, trimming down language features and streamlining the type system will provide further compile time reduction. As you say, lots of room for improvement ;-)
I totally agree with the author. The ones mentioned in the article really are the key benefits. I really wish Scala didn't have implicits. People go crazy with implicits, resulting in very difficult-to-read code. Sometimes I feel like going back to Java just for the readability, and then I remember these nice features of Scala. We need a Scala minus implicits.
Sorry, I don't want to blame any frameworks in particular; there's already too much bad blood in Scala land. Let's just say there have been many instances where I had a really hard time understanding a piece of code, and it turned out that an implicit conversion hidden somewhere out of sight was the missing piece of the puzzle, and in most instances there would have been a simpler design without implicits.

Most great pieces of software, such as Unix, are great not because they use very complicated concepts but because of how simple they are. Finding that simplicity is the key. Very frequently people don't see the importance of simple things, like having hints in the code to where the related pieces are. Implicits break this flow. The number of places implicit conversions can be defined is mind-boggling. And even when some code doesn't use implicits, the possibility of implicits does harm: when you don't understand some code, well, who knows, there may be an implicit conversion in play.
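A contrived sketch of the kind of hidden conversion being described (the `Hidden` object and `stringToInt` are made up):

```scala
import scala.language.implicitConversions

object Hidden {
  // imagine this buried in a library's package object, far from the call site
  implicit def stringToInt(s: String): Int = s.length
}

import Hidden._
val n: Int = "hello" // compiles only because of the conversion above
```

A reader staring at that last line has no local hint of why a String type-checks as an Int, which is exactly the broken flow being complained about.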
Clojure and Scala are really the only alternative JVM languages to gain any traction since Sun/Oracle started promoting the JVM for languages other than Java.
Most of the other languages I've noticed mentioned in the comments so far (i.e. C, C++, Go, Ruby, Python, JavaScript, Haskell) don't target the JVM, and those that do just do it on the side.
Perhaps Ceylon or Kotlin will gain some traction and become a third alternative for the JVM -- all the other contenders have been around too long and lost momentum. I'd pick Kotlin over Ceylon since it can use the popular IntelliJ as a delivery platform.
Fantom: type system is unsound, far from correctness; only hard-coded generics; many basic things, like if-then-else or try-catch, are not expressions.
Ceylon: if-then-else and try-catch are not expressions; embraces null; unstable software; breaks backward compatibility in minor releases.
Kotlin: embraces null; an inexpressive type system limits what the compiler can check; unstable software; breaks backward compatibility in minor releases.
I am not sure what you mean by "embraces null". It sounds like a good thing (from what I know about the languages) but you have clubbed it in a list of cons.