What a great tool. Facebook have absolutely been killing it over the last year or so with all of their open source contributions and releases. First HHVM, Haxl, React.js (amongst other things) and now Flow; this is fantastic. I am really liking how companies like Facebook & Google are concentrating their efforts on the web language of the future: Javascript. The support for JSX alone is a MASSIVE feature (expected, given React.js and JSX).
Is this a serious post, or are you being sarcastic? I honestly can't tell. JS the language of the future? Why? It has probably the worst gotchas of any language I've coded in, it's verbose, and it has weird scoping. Nor is it especially nice to optimize for/with.
I can certainly see being stuck with Javascript (just like we're stuck with the x86 instruction set even if simpler alternatives exist), but I'm not sure it's something I rejoice about. Javascript is like anti-Batman: a language we all deserve, but not one we need.
ES6 adds lambdas, destructuring assignment, default/rest/spread arguments and template strings - all of those reduce verbosity. And there is `let` which has a "normal" scope, although I'm not really sure it needs that. Additionally, generators let you use normal control flow constructs for IO, if you prefer that to FP.
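A quick sketch of how those features cut boilerplate (ES6 semantics, written here in TypeScript-compatible syntax; names are illustrative):

    // Destructuring, default/rest parameters, template strings, arrow functions:
    function greet(
      { name, title = 'friend' }: { name: string; title?: string },
      ...rest: string[]
    ) {
      return `Hello, ${title} ${name}! (${rest.length} extras)`;
    }

    const shout = (s: string) => s.toUpperCase(); // lambda instead of function () { ... }
    console.log(shout(greet({ name: 'Ada' }, 'x', 'y'))); // HELLO, FRIEND ADA! (2 EXTRAS)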
While it's no Haskell, it certainly isn't much more verbose than other dynamic languages anymore. And a type system like TypeScript or Flow pretty much eliminates the rest of the gotchas.
Off the top of my head, there are two embarrassing holes: bigger integers and parallelism. Can't think of anything else at the moment (macros maybe, but they're a double-edged sword wrt tooling). Wonder if anything else is missing?
[Edit] Before starting, I just want to state: I know JavaScript has parallelism/concurrency in its supporting "system calls" (i.e. what would, in other languages, be blocking calls, in both the browser and Node). In fact I like this parallelism model, but I was specifically talking about parallelism built into the language (Threads, Actors, Tasks).
This is a long and complicated answer. I doubt I'll do it justice in a few paragraphs, but I'll try.
In one word: simplicity. Primarily, not having parallelism is far simpler than having it. Yes, some models of parallelism are simpler than others, but at the end of the day I think we can all agree none of these models are as simple as not having parallelism. As for "net negative": yes, of course, not having parallelism is a "negative". However, "net negative" is interestingly more linked to the applied domain than to the actual concept. Simply put, the domains that JavaScript is used in don't heavily rely on parallelism (or rather, don't require the optimization parallelism provides).
Thinking about this now, it's sort of similar in nature to why people like garbage collection. There is an inherent negative to using a garbage collector; however, (I think we can both agree) in certain (most) domains it turns out to be a net positive. Why? For the same reason: simplicity.
I mean, I could go on, but I'm trying to be as concise as possible. Hopefully, that was useful. I'm happy to elaborate if you have more specific questions about this.
* s/simplicity/developer efficiency + implicit safety guarantees/g -- Since "simplicity" is quite vague;
This is an interesting point of view. I personally find the Erlang approach (no shared memory) to be the least error prone, and as a consequence the most efficient for developers.
With Erlang you can't share memory [0], but in exchange you get sequential code (no callbacks or yields), true parallelism, and even distribution over a cluster. With node.js all the asynchronous calls live in the same memory space, but code is written with nested callbacks (or yields), and of course you get no parallelism.
[0] There are shared dictionaries for when it is really absolutely necessary.
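To make the node.js side of that comparison concrete, a minimal sketch of the nested-callback shape, using the standard Node `fs` callbacks:

    import { readFile, writeFile } from 'fs';

    readFile('in.txt', (err, data) => {
      if (err) throw err;
      writeFile('out.txt', data, (err2) => {
        if (err2) throw err2;
        // each further sequential step nests one level deeper...
      });
    });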
Agreed. This is a rarely argued but, I think, very valid view. Additionally, some languages which do have true parallelism are converging on the same model (non-blocking IO à la Node), built on top of their crude parallelism primitives.
Correct me please, but from what I understand web workers are not a part of the JavaScript language. The same way AJAX isn't a part of the JavaScript language. I would consider those (AJAX/WebWorkers) more like "system calls" to the browser. And like any system call, it has privileges the application doesn't.
- The JavaScript language is specified under the ECMAScript specification.
- Web workers are indeed not a part of the ECMAScript specification which considers them "host objects".
- WebWorkers and other browser APIs (timers, ajax etc) are called the DOM API. The DOM (document object model) is how JS interacts with the web page and what capabilities it exposes. (document.getElementById isn't any more JavaScript than web workers).
There is a problem with JavaScript in that there is no security mechanism currently in place to ensure the JS file you are running is what you expect to be running.
Imagine you're on your favourite social network that used JS-based encryption in a p2p chat with your friend. On that same page, advertisers are pushing content to you. That content could be a malicious JS file which can eavesdrop on your conversation, all the while you think it's encrypted.
> JS the language of the future? [...] I can certainly see being stuck with Javascript
I think we all agree. JS has very ugly things, but it isn't going anywhere for the foreseeable future.
If we're going to use it for at least a few more years, I'd applaud anyone making better tools and rejoice when I see better frameworks and easier-to-use libraries.
Now, by the time alternatives to Javascript become viable, we might have made Javascript something way better than what we have now. It could survive a long, long time, and with people actively making it evolve, it could be very enjoyable. Who knows.
JavaScript is changing rapidly. ES6 and TypeScript are fixing the warts and adding much-needed features. And you can use those features today, thanks to projects like Traceur, 6to5, TypeScript and Flow.
If you factor in all these improvements, and the fact that it runs brilliantly on the server, it's a vastly different situation than just a few years ago.
> JS the language of the future? Why? It has probably the worst gotchas of any language I've coded in, it's verbose, and it has weird scoping. Nor is it especially nice to optimize for/with.
You're looking at it from a technical perspective, and from a technical perspective JS is an awful language.
It may be a turd. But it's the only sandboxed-by-default, zero-install, reasonably-fast, free, preinstalled-on-every-machine turd that we have.
You are right that there are some problems in the language. However, I would rethink your argument based on the rapid development of the language over the last 5 years.
For example, yes, callbacks were very messy, but very soon we will have generators. And yes, the scoping was nasty, but soon we will have the 'let' keyword (example below). I cannot remember where I read it, but I also remember seeing a talk about proposals to extend "use strict" to let people fix some of the type-casting behaviours made infamous by "wat", too.
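For instance, `let` fixes the classic closure-in-a-loop surprise:

    // With var (function scope), every callback closes over the same i:
    for (var i = 0; i < 3; i++) setTimeout(() => console.log(i)); // 3 3 3

    // With let (block scope), each iteration gets a fresh binding:
    for (let j = 0; j < 3; j++) setTimeout(() => console.log(j)); // 0 1 2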
My point isn't that everything is fixed and we can stop complaining about the bad parts. My point is that it is very impressive how those in charge are handling the evolution of the language.
I think it's worth taking a bet on a language which improves so much every year.
There's momentum, and that's the why. Imagine trying to get Apple, MS, Google, Mozilla, et al to agree on a JavaScript replacement that was rolled out at the same time.
Then deal with the 5-10 years or so that legacy browsers hang around with significant market share.
JS was anointed long ago, and if we knew then what we know now, maybe it wouldn't have been so? I remember DHTML and how silly it was and how it was just a toy and BOOM! Ajax.
So here we are. Every browser supports it, none could agree to change, so we deal with what we got. Better tools are better than nothing.
I would consider C++ to be way more verbose than Javascript. Especially with ES6 coming (and things like Flow / Typescript allowing you to use ES6 today).
Weird scoping? What are you talking about here? It's not the same as other languages. That does not make it weird. When you understand how it works, it's not a problem. Use it to your advantage.
Not nice to optimize for? Javascript is fast enough for most tasks, provided you use best practices. You can even build AAA games with it nowadays, through asm.js. I'd like to learn more about what you mean exactly when you say it's not nice to optimize for, if you have the time.
I get that Javascript has its quirks. But so do most languages. What's awesome is that JS is easy to get started with, but can be used to build complex apps (especially with things like Flow / TS). And it works everywhere. And it has an amazing ecosystem of client-side and server-side libraries.
> I would consider C++ to be way more verbose than Javascript.
Rather damning with faint praise there.
> Weird scoping? What are you talking about here? It's not the same as other languages. That does not make it weird.
Yes it does. In the '80s this was an open research area, but a consensus was reached in favour of lexical scoping for a reason.
> I get that Javascript has its quirks. But so do most languages.
False equivalence. Python (to pick an example I'm familiar with) has some quirks, sure, but it's a million times nicer to program in than Javascript, and it has all the other advantages you list (it's easy to get started with, suitable for complex apps, cross-platform and so on). I'm sure the same could be said for Ruby or OCaml or hundreds of other languages. If it were as easy to run these in the browser, I don't think we'd see anyone choosing Javascript - it really is a worse language than so many alternatives.
(I mean, by the standards of a single-application scripting language that was written in three days, Javascript is very good - we wouldn't expect such a language to be the equal of a carefully designed general-purpose programming language)
The bad parts are there, but as of today it's a very powerful language that runs on every browser (hence every computer in practice?) with massive community support.
Oooh, there's many more computers around you than you think ;-) Nearly every modern piece of electronics. Many don't run browsers. For instance your washing machine has an OS, but it probably doesn't run Javascript. Same goes for your car, dishwasher, microwave, central heating system, stereo/hifi, etc. And of course the more obvious "non-PC computers" like routers, printers, scanners, TV (might actually run JS if it's "smart" and has a browser), computer monitor and who knows what else. Coffee machine. The "fancier" it is (for a rather low barrier of "fancy"), the more likely it is to have a chip in it that runs some firmware/OS, equal in power to what people ran as desktop personal computers a few decades ago.
No, it's not fine; it's a horrible language with a few good features that save it from being a catastrophe. Hence "Good Parts".
We wouldn't be here talking about Flow, TypeScript or others if the language was "fine". JS was clearly not designed for what we are making out of it today.
But since there is no way around Javascript in webdev, good or bad, it doesn't even matter. It exists. TC39 isn't going to fix types, so types are fixed in userland. Hence "Flow".
Nonsense. Just because people and companies have contributed new features and capabilities to the language doesn't make your point. It's not a horrible language, any more than any other language. Of course it has things that aren't ideal, but it's highly expressive, and if you know what you're doing it can be elegant.
You're criticizing the raw JS language, but that's not what most people in the industry are using. Fine, the original language design was horrible, but if you consider the typical stack used for web development (which could be any combination of transpilers like CoffeeScript/TypeScript, Flow, Promises, module systems, etc) it's not that bad.
Also we already have Strict mode; I imagine in the future it will get more and more uncompromising, so the JS subset we'll be actually using will be just fine.
I've had the exact same concerns as the person you replied to. I just want to make sure since the title differs from the one you stated. Is this the book you were talking about: http://www.amazon.com/JavaScript-Good-Parts-Douglas-Crockfor...
And how/why did the book improve your opinion about the language?
It's an opinionated book that sets out a subset of JavaScript that you should use, avoiding all the 'bad parts'. This subset is what tools like JSLint and JSHint were designed to promote - they flag you up if you use a bad part. It's a seminal work. These days I find some of its rules a little dogmatic (for instance, I like the 'new' keyword now), but I'm glad I went through a phase of sticking to it religiously for a while. It taught me to stop bitching about the weird parts and just avoid them instead. And when you do this, you end up loving the language and making cool shit with it. Which is a good situation to be in, because it's the most widely distributed runtime in the world.
But `new` is a bad part. Just an optimized bad part... You could replace every usage of it with `var instance = Object.create(Ctor.prototype);` (and change every `this` to `instance` in your code), and actually explicitly return `instance` -- and then never use `new` again. Your code will become more readable to boot, as there is less action at a distance.
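A minimal sketch of that rewrite (a hypothetical `Counter` constructor, typed loosely since constructor functions predate classes):

    function Counter(this: any) {
      this.count = 0;
    }
    Counter.prototype.increment = function (this: any) {
      this.count += 1;
    };

    // Instead of `var c = new Counter();`:
    function makeCounter(): any {
      const instance = Object.create(Counter.prototype); // same prototype chain `new` sets up
      instance.count = 0; // the constructor body, with `this` renamed to `instance`
      return instance;    // explicitly returned - no action at a distance
    }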
This is the standard advice in these threads. I've found the book falls quite short of that goal. It has a few patterns to make the stranger parts a bit more bearable, but real-world JS still fails in all sorts of non-obvious ways unless you're very familiar with it. I hope the newer JS versions have gotten better.
It used to be that building a competent web app was something you could do in a number of different languages; these days you really need to know JS or one of its skins, and it's made developing for the web much less friendly to people that don't do it every day.
It's hard for me to hand out appreciation, but while Google has traditionally been the tech leader in web technologies, Facebook is doing really awesome stuff lately. It's not just React and this - it's also Flux, HHVM, Hack, Haxl (their Haskell library), contributing to writing a spec for PHP, and other ventures.
I'm interested in who is the driving force behind this open source change at Facebook; I don't recall Facebook behaving this way 4 years ago.
Can anyone find anything on a policy change that happened? They really turned around.
Thanks for the kind words. For context, I'm an HHVM alum who has been at Facebook for almost six years now (wow time flies).
From my point of view, most of what has changed is resources and the immediacy of our survival-level concerns. Four years ago Google had declared nuclear war on us, we had far fewer users, we were not profitable, there were constant fires to put out with basic production operations stuff we've since gotten better at, and we were enormously under-staffed. I was working on HHVM already, but it was in a million little pieces spread across Drew's, Jason's, and my desks. The tools we're open sourcing over the last two years mostly did not yet exist, and where they existed, it was in some primordial form. We also have gotten much, much better imho at being good stewards of our open source projects; HHVM's predecessor system, the HipHop compiler, was also open source, but our people were spread way too thin to be able to respond to bug reports, pull requests, get FB's latest code into public hands, build binary packages for popular distros, etc. on a timely basis. Huge props are due to all of the technical people on our open source teams.
Thanks for answering, it's always good to get answers straight from the source.
It would be awesome if you blogged about this. It would be interesting to read _how_ you got better at being good stewards of your open source projects and what made you open source these. I think it's a challenge a lot of people face.
Thank you, so very very much, for HHVM. Between HHVM, Hack, and now Flow, I can write software using the languages I use most often but gain the benefits of better tooling.
As kmavm said, thanks for saying so on behalf of the people who have been working on open sourcing things (I haven't as yet).
I don't think there's been any policy changes. I think the policy has always been that we're open to open sourcing stuff, so long as it is useful to others (ie, not just a code drop that nobody can use) and someone signs up for the work and there isn't something important that's being dropped in the process.
The difference I think is people, energy, and momentum.
We've been able to find people (many already at the company) genuinely interested in some of the less glamorous parts of building a scalable program to open source things - things like sync processes and pull request management and UIs for ACLs and CLAs and so forth.
We've had people who, often due to the availability of these tools, have developed an internal energy to want to put in the extra effort to make their project ready to be open sourced (like making sure all dependencies are available already, or scoping out or stubbing out things that are Facebook-specific (our asset management flow, for example) while still keeping the software useful, and so forth).
The momentum has also made the idea of open sourcing code more top-of-mind to people, which helps to get people to rewrite their changes to accommodate a nascent open sourcing effort on a piece of code, or to get more discretionary time to investigate or work on making something open source. Or even just moral support from your team and colleagues.
Well, regardless of root cause, you all are racking up some amazing open source accomplishments. Big props to everyone working on open source at Facebook.
And I think these efforts will pay off in spades for Facebook, in terms of attracting top talent and in terms of the expertise that results from having some of the best dev tools available being developed in-house.
Facebook has been doing a great job with marketing their open source projects. It seems like every Facebook OSS project has a strong logo, beautiful landing page with good documentation, a distinct group of project champions within the company (with embedded recordings of their conference talks on the project site), mailing lists, IRC channels, and the projects are often exceedingly applicable to a large proportion of real world software projects.
Not to say that Google's OSS hasn't done much of the same, but AFAIK many of their projects fail on at least a couple of these points.
> I'm interested in who is the driving force behind this open source change at Facebook; I don't recall Facebook behaving this way 4 years ago. [..] Can anyone find anything on a policy change that happened? They really turned around.
My guess is that Facebook was just much smaller 4 years ago; over that period its staff grew very quickly. And even today it has an order of magnitude fewer employees than, say, Google, Apple, or Microsoft.
Larger, more established companies have more opportunity to open source things on this scale.
> I am really liking how companies like Facebook & Google are concentrating their efforts on the web language of the future: Javascript.
"...on the web language of the past, which we are unfortunately stuck with for the foreseeable future: JavaScript" might have been a better summation of the current (rather dismal) state of affairs with regard to web scripting.
This looks like such a better step in the right direction than the types of tools MS and Google have been putting out. Dynamically discerning the underlying code and allowing optional type annotation works _with_ javascript, as opposed to attempting to turn JS into a completely different (and weakened) language.
That said, I am curious what problems this solves that aren't already solved by enforcing good code coverage. Full disclaimer: the largest JS projects I've worked on were in the tens of thousands of lines, not hundreds of thousands, but type checking just seemed completely unnecessary provided a good coding guide and test coverage were maintained and enforced.
This feels like saying airbags and seatbelts add nothing to cars assuming good driving practices.
It solves not having to write a bunch of tests for things that can trivially be caught by a program, and never having to update or maintain those tests. It hits every code path automatically; you don't have to think up cases to try to hit edge cases.
Well, here are some things that it does that other tools don't:
- It type checks JSX
- It supports some ES6 features others don't (like destructuring) as part of the build step
- It has union types (TS will get those soon, already in master)
- It does a lot more inference and makes a lot more assumptions. It assumes you won't multiply a string by a number, for example (although technically '10' * 5 is legal in JS; see the sketch below). So it's opinionated in that it enforces checks that rule out code that is legal but not likely intended in JS (an opinion I agree with).
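For instance, a checker rejects the multiplication even though it runs fine in plain JS:

    // Plain JS evaluates '10' * 5 to 50 via implicit conversion.
    const explicit = Number('10') * 5; // 50, with the conversion spelled out
    // const implicit = '10' * 5;      // Flow/TypeScript error: arithmetic on a string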
Yes - it is possible to write tests and get good coverage and not use these tools - however, static analysis can be very valuable in finding errors early. While I'm not sure I'd bother with explicit typing in smaller projects, from the examples and unit tests it looks like it could find errors people would not easily notice otherwise. (Think JSHint, on steroids.)
That said, only time will tell how good the implementation will really get in understanding your code.
You're not wrong, but out of curiosity, I genuinely wonder how often modern programmers really hit type errors on smaller projects (obviously not FB-sized). I don't think I've ever had a type-cast bug in my JS code, provided we don't include accidental nulls in that statement. So in my experience, if I were ever told that I now had to always use annotations, I would feel like I was losing flexibility in the language for little gain. And I feel like the reason I never hit type issues is that I write full test coverage, which in turn makes it painfully obvious what is and is not expected in each method.
For the reasons above, I love the idea of this tool dynamically checking all my code paths, and looking for things that are likely mistakes or result in null exceptions.
But I'm afraid to recommend this to my boss, for fear that from now on, everything must be maximum static... everything annotated, no union types, etc.
Misspelled variable names might not be something you'd classify as a "type error", but that can't happen with static typing. "Undefined is not a function" is in essence a type error, and I really doubt you've never ever seen that.
Static typing also serves as documentation, and enables "intellisense coding" without having to google documentation every 2 minutes. Take this function signature from Python's standard library: subprocess.Popen(args, bufsize=-1, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=True, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0, restore_signals=True, start_new_session=False, pass_fds=())
Without documentation, can you answer: what is env? what is stdin? what values are allowed in creationflags? is it obvious that startupinfo should be a STARTUPINFO object?
Compare to a C# equivalent, FileStream Open(string path, FileMode mode, FileAccess access, FileShare share). Here it is obvious that FileAccess can only have one out of 3 available options in the enum since the autocomplete will only give you those three options.
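For contrast in TypeScript-style notation (a hypothetical `spawn` signature, purely illustrative), the same questions answer themselves from the types:

    type StdioOption = 'pipe' | 'inherit' | 'ignore';

    interface SpawnOptions {
      cwd?: string;                  // clearly a path
      env?: Record<string, string>;  // env is unambiguously a string map
      stdio?: [StdioOption, StdioOption, StdioOption]; // only these values are possible
    }

    // Hypothetical signature, purely for illustration:
    declare function spawn(cmd: string, args: string[], opts?: SpawnOptions): void;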
I do hobby-level small JS projects; I make type errors quite often, but they are easily found the moment I run the tests/the app. I would prefer to find them at compile time, but it's not a big deal.
Interestingly, it's almost always the same kind of type error: accessing nested maps/arrays and forgetting to go deep enough before I call some method on the elements in that collection.
I can't remember one example of a type error where I intended to call a method on MyClassA and ended up trying to call it on MyClassB. But that may be the effect of my coding style (I'm not a big fan of class hierarchies in JS).
And in a large app it is hard to know if all relevant code has run. Finding the pieces that use the code you just changed is much easier in static languages. We might not go to a fully statically typed Javascript, but supporting optional typing seems like a really good idea.
But JS does not have a compile time - you edit it, and you run it, with no steps in between. If set up right, you've got your unit tests running automatically whenever something changes.
I have written a couple of thousand lines in an app over the last year and I have already had problems with restructuring it. Creating new modules and moving "classes" into them, or splitting classes and renaming them, are very useful things, and those things get easier with types. Having tests actually doesn't help that much with this, in my opinion, since I have to rewrite the tests too. Also, reading old code gets easier with some type information in the code. Just my experience so far.
Absolutely. I used to use Hungarian notation in a static language because I was so scared of getting the wrong type. It was a huge revelation that actually this never happens.
However, whilst I never hit basic type errors such as passing a string when I need an int, I regularly come across object type errors. I'll pass an object with a project_id property to a method that actually expects an id property. Not sure if Flow can handle these - if it could, it would be pretty amazing.
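For what it's worth, structural object types can express exactly that check; a sketch in TypeScript syntax (Flow's object types are similar):

    type Row = { id: number };

    function load(row: Row) { /* ... */ }

    load({ id: 42 });         // fine
    // load({ project_id: 42 });
    // error: 'project_id' does not exist in type 'Row' (and 'id' is missing)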
Today I got the order of parameters to a callback wrong ("error" was the last rather than the first parameter). The documentation didn't say anything about parameters, it only said that the callback would be called with a result. I had to look into the source code to discover the details. I am not sure if Flow would be able to catch this kind of bug but if so, it would certainly save me a lot of time.
I don't think you understand what "type errors" means if you are making that assertion; or your multi-contributor codebase has such excellent test coverage that your tests are doing for you what a modern type system can do for you.
In a dynamically-typed codebase, it's near impossible to have rare type-error occurrences, because you're off-loading the type system into the programmer's head (which is subject to human fallibility - even when countered with excellent unit testing!)
To be specific, I was talking only of a particular sort of error that the JIT compiler allows, e.g. applying numerical operators to strings, or combinations of string and int.
In JS, I see things like methods called with an incorrect number of parameters, or wrong parameter types, on a reasonably regular basis, especially when things get refactored every now and then.
I frequently see methods incorrectly overridden. For example, the ancestor method effectively has a variadic argument list, but the descendant doesn't have as sophisticated argument handling and doesn't do the appropriate superFunc.apply(this, Array.prototype.slice.call(arguments, 0)), etc.
The main benefit of types for me is not correctness but documentation and tooling. It's fine not to have types in code I write myself or with one or two others. But if I have to deal with a large codebase that I wasn't involved in writing then types make it so much quicker to understand. Just being able to quickly find all references to a type or all a method's call sites makes it possible to "reverse engineer" the design from the code.
This tool claims it can do advanced inference. If it were possible to hook it up to an editor or IDE to analyse existing untyped js in this way it could be invaluable.
We have some basic editor support, and more is coming soon. Flow exposes several commands that are useful through an editor, like type-at-pos (give it a position, it gives you back the inferred type), suggest (give it a file, it dumps out an annotated file), autocomplete (gives you suggestions at a given position), etc. Also, we require annotations at module boundaries, so your modules are going to have well-typed interfaces that serve well for both documentation and stability.
I strongly agree with module typing (and being more relaxed within). Because of caller dependence on modules, the difficulty of changing static types becomes a benefit. Plus, they need to be documented anyway, and tooling support helps you use them as if they were primitives.
As others have said, I love the approach of inference giving static type benefits, for free. If you get tooling support, at no extra work, why not adopt it?
But static types can be a hard sell for JS programmers. What sort of reception has it gotten inside facebook?
Still early days but the reception has been strongly positive. We may be a biased bunch but we like our code mostly statically typed with the flexibility provided by dynamically typed languages. A large part of this culture is due to the immense internal success of Hack (http://hacklang.org/).
You should check out Sublime Text if you haven't already. ST3 reads your JS as you type it and allows you to navigate through the code like you would through Visual Studio, keeping a dictionary of where the words appear and guessing what they are. It's not as good as a static language's navigational abilities (you may have to choose between jumping to `Increment` the function and `Increment` the variable), but it's still pretty good.
> This looks like such a better step in the right direction than the types of tools MS and Google have been putting out. Dynamically discerning the underlying code and allowing optional type annotation works _with_ javascript, as opposed to attempting to turn JS into a completely different (and weakened) language.
TypeScript (from MS) sounds a lot like what you describe - it doesn't change Javascript, it just adds types to it (and makes some ES6 features available), and it puts a lot of work into playing nice with the wider JS ecosystem (e.g. via the DefinitelyTyped project, which integrates type definitions for most popular third-party javascript libraries).
> That said, I am curious what problems this solves that aren't already solved by enforcing good code coverage. Full disclaimer: the largest JS projects I've worked on were in the tens of thousands of lines, not hundreds of thousands, but type checking just seemed completely unnecessary provided a good coding guide and test coverage were maintained and enforced.
Perfect testing can do everything a type system can. But a type system can do it with less programmer effort, much lower maintenance overhead, and in a standard form that is easier for other developers to read. So you can do the same thing but cheaper - or, more realistically for how software is developed in most companies, you can get better reliability for the same engineering budget.
That's because it's all valid Javascript - concatenating a string with a number is normal practice even in strongly typed languages (implicit type conversion). So, I don't see your point. If the argument 'a' had an annotation saying it should be a string, then I could understand an error from some checker tool, but then it wouldn't be vanilla JS anymore.
I specifically didn't send this tool to the team I work on, because my team lead is a SQL / Java / C# guy who loves static languages, and only touches the front end with a stick if he has to, and then only some basic jQuery or Angular.
I've sold him on Jasmine and requiring front-end test coverage recently. But right before I hit the send button, I realized that if I sent him this tool, I'd never be allowed to use dynamic typing or non-annotated functions/arguments in JS ever again.
Another comment that just occurred to me: JavaScript becoming gradually typed is an interesting reflection of the recent history of JavaScript interpreter optimization, which consists of deducing where semantically dynamic objects behave like static class instances, then inlining the accessors and, where beneficial, the "class methods", and specializing and JITing the semantically dynamic functions that almost always take as argument "instances" of this "class".
It seems that adding a type system to a dynamic language has few real drawbacks compared to designing the language and type system at the same time, for both performance and type safety considerations.
Yes, so the conclusion one might draw is that if your code is implicitly typed, it will run fast as well as probably do well when run through a static type checker.
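A small sketch of that idea (names are illustrative): objects created with a consistent shape behave like instances of a static class, so the engine can specialize the access sites.

    // Every call returns the same shape {x, y}, so a JIT can treat it as a
    // hidden class and compile dist() into direct field loads instead of
    // dictionary lookups.
    function makePoint(x: number, y: number) {
      return { x, y };
    }

    function dist(p: { x: number; y: number }): number {
      return Math.sqrt(p.x * p.x + p.y * p.y); // monomorphic property access
    }

    console.log(dist(makePoint(3, 4))); // 5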
I went straight from hacking Scala and Haskell as a hobbyist to a (mostly) front-end JS job, and I've always found that my code, and a lot of good libraries I read, naturally emulate something close to Hindley-Milner typing, by using objects as tuples/records and arrays as (hopefully well-typed) lists, as well as the natural flexibility of objects as a poor substitute for Either types.
I'm definitely pleased to see that the designers of this library have also realized that strongly-typed javascript was just a few annotations and a type inference algorithm away.
I'm just wondering why nullable types are implemented as such, and not as a natural consequence of full sum types, which are inexplicably absent.
Strongly typed JS is actually pretty hard - probably not by Haskell and Scala standards - but take promises, for example: the signature of `then` is surprisingly involved.
That is - a promise's then - takes the promise (as this) and executes either a `.then` fulfillment handler or a catch handler.
If the `fulfill` handler executes, the value is unwrapped, and either a new value, or a promise over a new value and its own type of error, is returned.
Now, if the `reject` handler is executed, the error is unwrapped, and either a new value, or a promise over a new value or a new error, is returned.
This is quite simple and easy to use because it behaves like try/catch in the dynamic type system of JS with recursive unwrapping - however it is challenging to reason about when you're starting to type code and you want to actually have correct type information with promises.
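A hedged sketch of the shape being described, in TypeScript-style notation (`PromiseE` is a hypothetical two-parameter promise type, not the standard `Promise`):

    interface PromiseE<E, A> {
      then<E2, B>(
        onFulfill: (value: A) => B | PromiseE<E2, B>,  // unwrap A; return a value or a new promise
        onReject?: (error: E) => B | PromiseE<E2, B>   // unwrap E; return a value or a new promise
      ): PromiseE<E | E2, B>;
    }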
Static languages generally approach these problems with pattern matching on the type - in JS that's not common, nor is it feasible at runtime - you just expect a value of a certain type. When I implemented promises in another language (Swift) this was a lot of fun to work through and not very trivial - if their compiler can do this I'd be very impressed.
Promises are just one example.
Anyway - this looks cool. I definitely agree that full sum types would've made more sense - having explicit nullables is usually a smell (like in C#).
I don't care about the type signature of promises. I actually don't care about the type of anything which has a complicated type; I think it's just incredibly winning that I will now be able to describe the shape of the raw data circulating in my code.
I've always found it really annoying, in Haskell, to see incredibly complex type idioms emerging to allow stuff that doesn't really deserve it. And yes, I'm talking about monad transformers.
Don't get me wrong, I think that Haskell and the typing techniques and idioms it has fostered are a tremendous achievement, but right now, I'm more focused on bringing my web code, which right now is unfortunately a jungle of implicitly-typed garbage, closer to a safe and predictable better-typed form.
I tried to do that with Haskell on the back-end, but every time I tried, I lost mind-boggling amounts of time dealing with the monadic stack of whatever framework I tried.
As the saying goes, I'm not clever enough to use dynamic typing, and bugs happen. Unfortunately, I'm also not clever enough to use real strong typing, and nothing compiles, let alone gets done.
Hence why I'm immensely thankful to see Facebook embracing gradual typing in a way that lets me leverage my knowledge of algebraic typing.
I'd like to argue from a Haskell backend perspective that monad transformers deserve exactly the complexity they expose. They give you a handle to factor side effects in sensible ways and to express that code requires exactly some set of effects and no others.
It is not always clear how to factor code which does not have this rigor into effect-typed form. It can be extraordinarily difficult to recognize what effects are being carried out in which parts of code at first—especially if you haven't been forced into the discipline early. Thus, I find it unsurprising that you feel the translation effort is challenging.
But, coming from the other angle—building things with well-defined effect typing from day zero and composing pieces atop one another to reach your final complexity goal—works exceedingly well. Better, it forms a design which can be translated to untyped settings and retain its nice composition properties.
Which is to say not much more than: there is some logic to all that madness and once you're on the "other side" it's hard to judge these "incredibly complex type idioms" as anything other than useful and nearly necessary for sane code reasoning.
I miss transformer stacks a lot when using other languages.
... said the monk, sitting cross-legged with an air of zen inner peace ; )
I think it all boils down to considerations of idealism vs. realism, in the end, and yes, I have to admit, I managed to get a sort of big picture understanding of Yesod and its ORM, and it's definitely a cool design. Can't say I know as much about Happstack, but the hello worlds were smaller. And the authors didn't have to invent two dependency management tools to get it to compile... /off-topic
I actually used something akin to monads to write a quick and dirty parser combinator library... in PHP! It was really fun, but I have to admit that it got really hard to keep track of what was a function, what was a return value, and what was supposed to be passed along to the next function, without any real typing. For the first time I... I wanted monads.
But still, I have 99 problems and I'd say 97 of them are undefined, nulls, erroneous typecasts, unexpected layers of wrapping, and so on. I wrote bad code, my teammates did, and here we are. I want to get rid of those so that I can, at last, have real problems.
I'm not a big user of Yesod, so I can't speak too much to that, though it is notoriously difficult to get Yesod to compile unless you use Stackage.
I would say that types are really difficult to do in your head. It's tedious and error-prone. That said, the advantages are high, so it's valuable - e.g. for your parser combinators in PHP.
Parser combinators are actually a great example of where monad transformers shine. You can see them as nothing more than a stack of `State` atop `Maybe`, which dramatically simplifies both the presentation and the explanation of why they work. Better, you can rip out `Maybe` and replace it with `[]` to get non-deterministic parsers "for free" - all of the code remains the same.
I actually happened to write up about this recently:
https://gist.github.com/tel/df3fa3df530f593646a0
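To make the "State atop Maybe" view concrete outside Haskell, a minimal sketch in TypeScript (names hypothetical): the parser threads the remaining input (the State layer) and can fail with null (the Maybe layer).

    type Parser<A> = (input: string) => [A, string] | null;

    // Consume one expected character, or fail.
    const char = (c: string): Parser<string> => (input) =>
      input.startsWith(c) ? [c, input.slice(1)] : null;

    // Monadic bind for the combined State+Maybe stack.
    const bind = <A, B>(p: Parser<A>, f: (a: A) => Parser<B>): Parser<B> =>
      (input) => {
        const r = p(input);
        return r === null ? null : f(r[0])(r[1]);
      };

    // Parse "ab": run char('a'), thread the remaining input into char('b').
    const ab = bind(char('a'), () => char('b'));
    console.log(ab('abc')); // ['b', 'c']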
But yeah, gradual typing is seductive for existing codebases. You can just make the jump and beat them with hammers of frustration until the compiler gives you a thumbs up - but it's nobody's cup of tea. If you want discipline, you are far better off setting the rules from t=0 and going from there.
Let me know when you start porting to Haskell. That'll be an exciting time.
You make some good points here, I've also experienced the same with Haskell web frameworks. I think we can agree that the most important things to annotate are points of interaction.
If I'm part of a 5 developer team working on a code base, or I'm using a library someone made - I want the functions I'm calling to be very explicit about what they take and return and I want the functions I provide others to be very clear on what they take and return.
The problem is that even something people use every day like a Promise or an event handler creates very complex types (like the example above). I think any viable solution that expects to be type safe needs to be able to express that.
If your code does not expose any callbacks, or anything async I'd say it's simply not very typical JS code.
The alternative, of course, is to be _less_ safe about our types. We could say that we treat a promise's `then` as:
Promise<A> -> (A -> Promise<B>) -> Promise<B>
(just a `bind` from Haskell) - this would let us maintain _some_ type safety which is better than nothing.
I agree that the facilities to express function composition are woefully insufficient in nearly all languages besides ML derivatives, and that nearly all of the recent frameworks and techniques to express concurrency rely heavily on function composition.
There's certainly a hierarchy; if you don't have typed data then you won't benefit from generics. If you don't have typed containers then forget about monads. If you don't have simple monads then no point worrying about how to compose them.
And yet, as someone who's been working in Scala for nearly 5 years now, more and more of these abstractions are starting to seem "worth it". My first Scala code was quite imperative, mixing random effects left and right. But eventually I started handling async calls explicitly - or perhaps I should say, I became fluent enough in the language that I could make an explicit distinction between sync and async calls without it being too cumbersome - and then I reaped the rewards, with more reliable, more performant, more maintainable code. And then I did the same thing with error handling, replacing surprise exceptions with an explicit Either (stacking this inside the Futures), and again I found my code became clearer, easier to reason about.
I've just finished factoring out database access into a Free monad based construct, and for the first time in my life I can test database access in a smarter way than just creating a (possibly in-memory) testing database and hoping it does the same things a real database would do. The monad tools are good enough to make it easy - as easy as "magic" Spring AOP, but explicit and ordinary. I've written a library for dealing with monad stacks that I'm sure would have horrified myself of three years ago (https://github.com/m50d/scalaz-transfigure), but I've come here through small incremental steps that have made sense at every stage (it helps that I'm a big believer in Agile). If I'd been dropped in it with a language like Haskell where everything has to be monadic from day 1, I think I'd've given up. I still wouldn't use monads for I/O (at least, not yet) - the advantages don't seem worth the overhead. But I'm glad I'm working in a language where these things are possible, and where I can gradually adopt them in my own time.
Let's say a promise is a thing which can either succeed or fail eventually. If it fails, it gives a type e; if it succeeds, a type a:
Promise e a
Now, `then` operates on the successful result, transforming it into a new promise of a different kind. The result is a total promise of the new kind
then :: Promise e a -> (a -> Promise e b) -> Promise e b
I'll contend now that this is sufficient. Here's what you appear to lose:
1. No differentiation of error types
2. No explicit annotation of the ability to return constant/non-promise values
3. No tied-in error handling
That's fine, though. First, for (1), we'll note that it ought to be easy to provide an error-mapping function. This is just a continuation which gets applied to errors upon generation (if they occur)
mapError :: (e -> e') -> Promise e a -> Promise e' a
For (2) we'll note that it's always possible to turn a non-promised value into a promise by returning it immediately
pure :: a -> Promise e a
Then for (3), we can build in error catching continuations
catch :: Promise e a -> (e -> Promise e' a) -> Promise e' a
We appear to lose the ability to change the result type of the promise upon catching an error, but we can regain that by pre-composition with `then`.
So, each of these smaller types is now very nice to work with. They are equivalent in power to the fully-loaded `then` you gave, but their use is much more compartmentalized. This is how you avoid frightful types.
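Putting those smaller pieces together, a TypeScript-style sketch of the decomposed API (`PromiseE` again being a hypothetical two-parameter promise type, a sketch rather than a real library):

    interface PromiseE<E, A> {
      then<B>(f: (a: A) => PromiseE<E, B>): PromiseE<E, B>;     // transform success
      mapError<E2>(f: (e: E) => E2): PromiseE<E2, A>;           // (1) map error types
      catch<E2>(f: (e: E) => PromiseE<E2, A>): PromiseE<E2, A>; // (3) recover from errors
    }

    // (2) lift a plain value into an already-fulfilled promise
    declare function pure<E, A>(a: A): PromiseE<E, A>;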
Couldn't agree more. I see this as just another example of how ordinary function composition usually wins out over OOP.
Additionally, AFAICT Promise-like things aren't actually used very much in Haskell in practice; at least not in the code I tend to see/write. MVar is probably the nearest direct equivalent, modulo error handling (which would probably just be handled using an Either in an MVar). Given that the runtime is M:N threaded, there doesn't seem to be much need for Promise and its ilk, since you can just do ordinary function composition and HoFs (with channels for communication). If you want something fancy you just use the marvellous "async" library.
I think sometimes types lead to simpler designs. For example why can't "then" just have the following type:
Promise A E -> (A -> A') -> (E -> E') -> Promise A' E'
That would eliminate some strange corner cases and make it easier to explain the function. The special cases could just get their own functions.
Real algebraic data types would probably eliminate the need for E entirely and make it even simpler.
I would argue that `then` is better thought of as:
Promise A E -> (A -> Promise A') -> (E -> Promise E') -> Promise A' E'
, with implicit boxing of bare types and thrown exceptions, as well as flattening of superfluous Promise wrappers. The `flatMap`iness of it is what really makes it interesting, in my opinion.
You can't really use sum types as they are found in Haskell in Javascript. With Haskell option types you do pattern matching and create a new binding for the non-null value, but in Javascript you don't do that - you keep using the same object that you tested against null.
case mx of Just x -> f(x)
vs
if (x != null){ f(x) }
What you can do in a Javascript-like language is use union and intersection types. However, they can get a bit complicated (especially if you allow unions of non-primitive types) and the extra flexibility can confuse the type inference a bit, so I can understand them restricting things to the common case of handling null.
Right, but now you're not using null. The question was how to justify the use of sum types for null... not whether or not they could be emulated in some non-idiomatic manner.
> I'm just wondering why nullable types are implemented as such, and not as a natural consequence of full sum types, which are inexplicably absent.
Haskell-style sum types are a generic type with multiple values describing the alternatives.
Since the goal of Flow is to typecheck existing JS semantics, the addition of wrapper types and objects to support such sum types makes very little sense while the addition of "anonymous" union types makes a lot of sense. Dialyzer took the exact same path (except even more so as its type unions can contain values) as it tried to encode Erlang's existing semantics.
One obvious thing to try is to use Flow's type inference to emit GCC annotations and see whether those optimizations kick in. (Of course, Flow can also try to replicate whatever GCC does, but that will take some time. No reason not to, though.)
How does this compare to TypeScript? At a quick glance I noted:
- more powerful type system (union types, hurray)
- support for JSX
- no windows binaries
- supports more of ES6 stuff
- ...but has no support for modules yet
- no generics (??)
How about performance? And workflow? I didn't find this yet: does it use a normal "write then compile" model like TS, or something like Hack (if I'm not mistaken, Hack has a daemon running in the background, checking the code as you write it)?
Wonder why FB decided to roll this out instead of using TS.
- As other comments have said: TypeScript is getting union types (they're already in the master branch).
- Support for JSX isn't really a huge deal.
- Windows support will probably come - it's an open source library. I hope it's not the OCaml tooling that's in the way.
You make some great points about generics and using their own type system instead of TS, especially since TS has investment from both Microsoft and Google (with AtScript, which supersets it).
They state they use a model like Hack - and the repo also looks this way - but I'm curious too; it looks like a very peculiar choice.
Static analysis is definitely preferable to cross-compilation, and this looks like a great tool. That said, the idea that static type checking makes developers more productive and prevents tons of errors is overstated imho. Type inference is supposed to make coding simpler and more productive (particularly in functional languages) - even C++11 has added it. I'm sure static type checking can benefit some organizations, but in my experience, type related errors are usually easy to find and fix and have rarely if ever been the root cause of our most difficult problems. Dynamic type checking and implicit conversion are among the more powerful features of JavaScript, and certainly no more prone to error or counter-productive than type-casting, variadic functions or class templates are in other languages.
> the idea that static type checking makes developers more productive and prevents tons of errors is overstated
Then perhaps you are understating the importance of software correctness. While type systems can be an almost religious topic, the benefits of type checking are real -- a whole class of bugs to disregard, less testing code, and a more maintainable codebase for other developers. Moreover, for languages with ADTs and exhaustive type-checking, you are forced to reason about boundary and error conditions. All of which leads to higher quality software, at the minimal cost of up-front work when designing your types and fixing what the compiler/checker says.
> type related errors are usually easy to find and fix and have rarely if ever been the root cause of our most difficult problems.
Type related errors may be easy to diagnose and fix - although by definition, in the absence of type-checking, type errors only become apparent after the fact, e.g. after causing a crash. Hence the utility of type checkers, especially if they mean fewer avoidable errors appearing in production.
> Dynamic type checking and implicit conversion are among the more powerful features of JavaScript, and certainly no more prone to error or counter-productive than type-casting, variadic functions or class templates are in other languages.
Type checking doesn't diminish the usefulness/convenience of dynamic languages, but IMO lends more weight to the benefits of strongly static languages -- benefits which are negated by the abuse of "features" such as typecasting or variadic functions.
No, software correctness is paramount. I see your point (speaking as a C/C++ dev of realtime software)... but software correctness is independent of static vs dynamic type checking or implicit type conversion, wouldn't you say? I see companies who feel they don't need automated testing because of TypeScript, or whatever, which concerns me a bit. A trickier problem in JS, with its completely compatible number types, is floating point arithmetic, for example. Anyway, that's why I'm glad this tool is a static code analyzer; it can benefit both the strict typing folks and those who want to leverage dynamic types.
JavaScript is a bit special in this respect, I think. Because of the weird type coercion rules, how it treats null and undefined, and the presence of NaN, many common coding mistakes end up producing mysterious errors pointing somewhere far away from the place in the code that actually produced the problem in the first place. Basically, JavaScript continues propagating null/undefined/NaN in many situations where other languages, including other dynamic ones like Ruby and Python, raise an error much earlier.
There are other issues as well: JavaScript doesn't check function arity on function calls (the arguments not supplied just get the "undefined" value), control structures do not introduce a new lexical scope, etc.
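A small sketch of both problems at once (the `area` function is hypothetical):

    function area(width: number, height: number): number {
      return width * height;
    }

    // area(5);
    // Plain JS accepts this call: height becomes undefined, width * undefined
    // is NaN, and the NaN propagates silently until something far away breaks.
    // Flow/TypeScript reject the call right here instead.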
But in a rare code path, that may not be triggered. And if you write a unit test solely to catch the fact that the type is necessary, then obviously using a static type would be better, because you get the benefit without the extra code for the test.
Having optional typing does save you the trouble of overly complicated types for things where you aren't particularly concerned.
I don't understand what you mean by "But in a rare code path, that may not be triggered." Can you explain?
I wasn't suggesting using tests; I was suggesting using asserts. Directly in the function that is type sensitive. If that function is ever called with an incorrect type it will throw an exception.
I get the benefit of optional type checking without a compile step in development like with Flow.
Well, I assumed you'd want to spot that error before it happens rather than when it does. The exception being thrown by the assert (at run-time) means that the user sees the failure. That's sub-optimal, right? So you'd ensure that there is testing of all code that calls this code in order to trigger the exception ahead of time if that occurs. Or am I misunderstanding? It just seems that the assert applies the constraint too late in the process. It doesn't ensure you have the right type once you've deployed. Instead it ensures failure on wrong type, which is something different.
I'm not sure that what you are describing is that realistic. A function that is used in some weird incorrect way that won't get caught in development, QA or unit testing? Perhaps that can happen, but it's not been my experience. I use assertions liberally; they are especially helpful when refactoring and trying to figure out all of the dependent code that needs to be updated.
Sure, it's less certain than what you get from a compiler; that's the tradeoff for not having to compile in development. Since most JavaScript code is not type-sensitive, it's only important in code that you are more than likely going to be testing heavily anyways.
The point is that you substitute the extra testing and QA with types. The assertion doesn't guarantee type correctness, so you test that function and downstream functions. That's extra code burden. Instead, you could have type checking.
Yeah, that can be confusing. On the other hand, if you write unit / functional tests, then testing for invalid inputs is one of the first things you'll probably do. Your comment makes me think a fuzzer to test all those falsey values could be useful.
In general, this argument is self-defeating: "I don't need a type checker because I write unit tests." Obviously that means you need a type checker, because then you don't have to write 70-90% of your unit tests. Unit tests require time and energy to produce, run, and maintain. A type-checking compiler can remove much of this burden from the developer.
A lot of JavaScript is UI code though, so unit tests might not be possible, and a suite of comprehensive functional tests with something like Selenium gets dog slow very fast, in my experience.
I just ran Flow on our JS constraint solver and it caught a bug. The dirty checker used && instead of & to check the dirty-variables bitflag against the interested-propagators bitflag, so it was silently doing more work than it had to (sketched below). I wouldn't even have noticed that there was a problem.
We've had plenty of other bugs in the past months that took hours to track down but would have been caught in seconds in a sensible language. A lot of them won't be caught by Flow either, unfortunately. What I really want is for operations like key lookup to fail instead of just returning me gibberish.
Typecasting is at least explicit. In js every single operation is a timebomb.
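A sketch of the && vs & bug class described above (flag values hypothetical; exactly how Flow flagged it isn't shown here):

    const dirtyFlags = 0b0100;      // variable 3 changed
    const interestedFlags = 0b0010; // this propagator only watches variable 2

    // Intended: bitwise overlap test -- 0b0100 & 0b0010 is 0, nothing to do.
    if ((dirtyFlags & interestedFlags) !== 0) { /* runs only on a real overlap */ }

    // Buggy: logical && only checks that both numbers are non-zero -- truthy
    // here, so the solver silently does work it doesn't need to.
    if (dirtyFlags && interestedFlags) { /* runs whenever both flags are set at all */ }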
I like to think that static typing hasn't delivered on its promises because the current statically typed languages and tooling are inappropriate and immature, but I realize that this might be a symptom of the Smug Lisp Weenie syndrome that affects me.
> in my experience, type related errors are usually easy to find and fix and have rarely if ever been the root cause of our most difficult problems.
In a language without a strong type system you rarely appreciate quite how many of your invariants could be lifted into the type system. When I wrote Python most of my errors didn't seem like type errors (e.g. I remember forgetting to close a connection and so leaking connections), but now that I write Scala I can see how I'd structure my program using types so that that would be a type error (I'd use monads to compose the idea of an operation that uses a connection, and then execute them in one place).
(Python now has an ad-hoc fix for this specific problem in the form of the "with" statement, just as 3.4 adds an ad-hoc fix for the proliferation of different ways of doing async calls. But a good type system is a general solution to both these problems and more).
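A rough JavaScript analogue of that structure, much weaker than what Scala's type system can enforce (`openConnection` is hypothetical):

function withConnection(fn) {
  var conn = openConnection(); // hypothetical
  try {
    return fn(conn);
  } finally {
    conn.close(); // closing is now structural, not something each caller must remember
  }
}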
> Dynamic type checking and implicit conversion is one of the more powerful features of JavaScript
In a strongly typed language you can do this in a controlled way; it can be part of the language, and can apply equally well to user-defined structures. Whereas with Javascript you're stuck with those implicit conversions built into the language, and if you want to convert e.g. an address datatype, you're out of luck.
> certainly no less prone to error or counter-productive than type-casting, making variadic functions or class templates are in other languages.
Then use a language that doesn't have those problems either. There are good languages out there - if Scala isn't for you then how about OCaml or Haskell?
This is a form of cross-compilation kind of like how TypeScript is cross compilation. It does require a build step to produce JavaScript - you will not be able to enjoy fiddles as easily and so on.
Then again their rationale is very clear and pretty good: You need builds if you're using Facebook's stack anyway (for JSX) so this should not interfere with your current build - which you have to do anyway.
Ah, I see what you mean now with the type annotations. It looks like they require the jsx transpiler if you use them. Still, flow isn't doing the transcoding, just the static analysis part. I wish the type annotations could have been in the form of jsdoc-style comments though.
Try running an annotated file in the browser - it's simply not valid ECMAScript syntax and no JavaScript runtime will run it 'as is'.
If the code requires transformation by a compiler aware of the language semantics and syntax - it's transpiling in my book. Just like TypeScript is a superset of ECMAScript - so is Flow (in a much lighter, closer to source sense it seems).
> Try running an annotated file in the browser - it's simply not valid ECMAScript syntax and no JavaScript runtime will run it 'as is'.
Of course not, but the point was that it's not a different language, it simply adds type annotations. If you strip out the type annotations, it should be a valid JS file.
This is also true for TypeScript though. The only thing that's not directly JS in TypeScript is shims for new JS features (from ES6) not yet implemented.
AFAIK none of these are valid javascript, even assuming ES6:
* class-level variables (resolved as instance variables)
* field visibility
* initializer constructors (constructor without a body automatically assigning to a field)
* enum types
That is, you can't remove type annotations (and `interface` declarations which can probably get a pass) and end up with valid JS, which seems to be what flow yields.
If only to remove syntactically invalid type annotations, I guess. Strong typing is not incompatible with the absence of run-time typing information, due to the magic of type erasure.
Type erasure is only safe if the whole program is statically typed though - you are still going to need to use runtime checks if you want to interface with dynamic code.
Based on the inference, we can imagine a case where Flow would flag certain boundaries as 'unsafe', allowing a transform to inject dynamic type safety checks using the same type information Flow has available.
For instance, you could imagine a function calling into some unknown api, where we would need to constrain the type of the variable to the expected one to avoid `any` being applied to it. While the library files do provide typing of external dependencies, this doesn't protect you from bugs in those libraries!
The same applies to api's that are explicitly typed as `any`, but which should return a certain type.
Having Flow add a dynamic assertion on the type here would allow the rest of the program to remain type safe, knowing that a violation is guaranteed to halt execution.
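Hand-rolled, the idea looks something like this (`thirdParty.fetchCount` is a hypothetical untyped API):

function fetchCountChecked(): number {
  var result: any = thirdParty.fetchCount(); // hypothetical untyped call
  if (typeof result !== 'number') {
    throw new TypeError('fetchCount: expected number, got ' + typeof result);
  }
  return result; // downstream code can now trust the static type
}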
While dynamic strong typing is also used at Facebook, we're not quite ready to launch this as an extension to Flow.
Static analysis is somewhat like having very minimalistic, type-only unit tests. If you already have comprehensive unit tests, you don't benefit much; but if you don't, you really notice the quality difference in the code that ships when you run static analysis prior to shipping.
From my perspective, the static type checking is more or less the same as TypeScript's `--noImplicitAny` option. As the first example on flowtype [1] shows, the same can be achieved with
tsc --noImplicitAny hello.ts
which will result in
hello.ts(2,14): error TS7006: Parameter 'x' implicitly has an 'any' type.
TS bolts on a straightforward nominative type system without type unions (or non-nullable types), so it can't handle a variable typed as `number | string`; it'll immediately drop down to `any`. That is, flow aims to remain useful in the face of more JS idioms. TS won't make a difference between nullable and non-nullable either, so AFAIK

function length(x) {
  return x.length;
}

length(null);

passes the TypeScript checker, while Flow flags the call with null.
Union types are present in the master branch of the TS compiler. The compiler also uses `instanceof` and `typeof x === ...` checks to narrow the set of possible types inside a branch, similar to Flow.
tl;dr: they want to support both of those features, the question is what syntax to use and how to introduce those features into the existing ecosystem.
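The narrowing in question looks like this (written with Flow's union syntax; presumably TS master handles the equivalent):

function len(x: number | string): number {
  if (typeof x === 'string') {
    return x.length; // x is narrowed to string in this branch
  }
  return x; // and to number here
}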
No, instead of complaining about 'x' having the 'any' type, Flow will actually try to infer a static type for 'x'. So in the best case there would be no errors (and in the worst case there would be actual errors to fix).
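For example, with (roughly) the announcement's first snippet, no annotations are needed:

/* @flow */
function foo(x) {
  return x * 10; // Flow infers that x must be a number
}

foo('Hello, world!'); // error: string is incompatible with number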
Is there any reason we cannot use both TypeScript and Flow?
Besides perhaps the extra 'compile' time added to do both translation with TypeScript and then static analysis with Flow. Both tools have their advantages and disadvantages.
Make sure to check out http://ternjs.net/ too. It does not do type validation, I believe, but it does many other things, and in combination with eslint it allows catching most errors before packaging.
Tern.js actually detects types, and maybe it would be possible for eslint to incorporate it somehow to detect invalid use of types.
One big ternjs plus for me is the fact that tern.js knows about require.js modules and can look in other require'd files.
IMO OCaml is the wrong tool for the job in this case (even if it is a better language for this sort of tool).
JavaScript has a weird ecosystem where it is extremely helpful to have all of your tools in the same language: browser-based IDEs, Node, portability, etc., and just one fewer runtime to juggle.
Same reason why Closure is awkward as a Java program.
OCaml has excellent compile-to-JavaScript support. Facebook use this to compile their Hack type-checker for an in-browser IDE. I imagine they do something similar for Flow.
I really like how Facebook went about getting as much type information as possible without burdening the coder, not forcing her to do unnecessary stuff. Behavior design at the code level at its finest.
And on the side note, I bet Facebook did this just to make nerds install OCaml and show them the light :)
Does someone know how these types of projects come to fruition in a big company like Facebook? Are people working on them full time (with no other workload)? Do engineers build them on the weekend? How do they get 'funded'?
I think the key is that these projects actually provide value - they make things faster, more reliable, more scalable - whether that's the code's execution or the people writing the code (or debugging issues, or whatever).
They generally aren't solutions seeking problems - they are responses to problems that exist.
Engineers generally don't build things like this on the weekend - unless they like to structure their time like that, I guess. It may or may not be a full-time job, but the job, whatever it is, isn't some search for abstract perfection; it is, again, to solve real problems encountered by others in the company. Often it is a part-time component built as part of trying to solve some more direct goal - like fighting spam, or serving bits, or whatever.
Often it is something the engineers just do - it makes sense to break things up into libraries, or services, or whatever, and they do that, and then that library or service is usable/useful elsewhere, and that's it. Other times they may suggest and motivate it as a goal-in-itself in a team goal setting situation.
I doubt that's a particularly useful answer, but maybe with further questions I can make it more useful to you?
I find this type of work within companies (like Google's famous 20% rule) an interesting contrast with non-tech companies. At a "normal" company, if you attempt to spend time doing something of this sort, you'd get immediate pushback from higher-ups, who would likely say "this is not our core competency" - with the secondary excuse that they would not want to release work like this for fear that it would help the competition.
It does raise the interesting question of whether facebook employees are doing this work just to avoid the work that is the "core competency" of the company, especially given that they don't gain a competitive advantage from releasing the work that facebook paid for into the wild. By this I mean, the company benefit would seem to be attracting other talent, and the personal benefit for the devs is getting their name out there on something cool and interesting. Certainly, working on the best way to advertise to users is a lot less exciting/sexy than working on static type checking for javascript.
Different people find different things exciting/sexy to differing degrees. Some people like building stuff that others can see. Some people like building stuff that others will use. Some people like building stuff that makes other people build stuff faster. Some people like building stuff that is highly reliable, really fast, really scalable, or really efficient. Some people even find improving advertising really exciting - they expound on how ads can be win-win, and even that they can only find their full effectiveness when they are, or something like that.
I don't think anyone is "avoiding" any work - there's just tons of work that needs to get done, and people mostly choose to work on that stuff that interests them. One could say that product engineers "avoid" infrastructure work and infrastructure engineers "avoid" product, but I think that would be inaccurate in most cases.
It is a great luxury to be able to work on what you like, especially as a dev. Unless you're a well-known person, I don't think it's that easy to work on what you like, even at google. At least not anymore. Hopefully you guys at facebook can keep the MBAs off your backs long enough to do more cool stuff!
It sounds like there is enough slack in the schedule that teams can decide they want to spend non-trivial amounts of time on these projects. It's surprising to hear that anyone, even the companies with big budgets, is able to hire enough people to do this without the projects getting an official seal of approval and budget. Even just taking the time to document, package, and publish is non-trivial.
It seems like the vast majority of companies are not far enough out in front of their production issues and requests from the business side that engineers could do this sort of thing. So I guess it's impressive that Facebook (and probably Google) are in that position.
There is a degree to which scale makes it necessary to develop these sorts of projects. Losing 1% of productivity in an engineering group of 10 people might not be a big thing, but at 1000 people that's 10 full-time people worth of productivity you're losing. Dedicating 1-10 people for a few months or a year (or even two years) to remove that 1% productivity loss is clearly worth it.
Documentation costs productivity to write, but when there are many people who would be made more productive from it, it makes sense to do it.
I think "slack" is the right way to think about it. It isn't a free-for-all - the business still needs to run - but there's enough space to explore, to rewrite, to document, to polish, and so forth. That's where the magic happens - the unplanned and the unexpected things around that which you thought you were going to do.
We actually built this because we write a lot of JavaScript at Facebook, and we need a tool like Flow. So yes, we worked on it full-time, with "funding": our developers like to move fast, and Flow helps them do that.
(It's a different matter that we would have done this on our free time anyway, because it's so much fun!)
Just tried it. It's not good for projects depending heavily on 3rd-party libs: I have to define all the interfaces in an 'interface file' to keep 'flow' quiet, which seems an impossible job for our project.
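For reference, those interface files look roughly like this - a sketch declaring a single hypothetical function from underscore, following the syntax in the Flow docs:

declare module "underscore" {
  declare function map<T, U>(list: Array<T>, fn: (item: T) => U): Array<U>;
}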
I know it's very early, but just curious if anyone is working on a node binding for this, or if one exists already? Would love to try it out in our stack, but it would require a javascript interface.
The code has a build step - yes. You will not be able to run this code without one. It would be awesome if they allowed annotations in comments like other tools do - there is a GH issue on that, and it looks like it should be possible: https://github.com/facebook/flow/blob/master/src/typing/comm...
There's some tremendous effort being poured into making a crippled language like javascript usable, but when talking about solutions for maintainable frontend code, I'm more excited about compile-to-js languages like haxe [0], purescript [1] or ceylon [2].
The caveats I've heard about transpilers often boil down to difficulty of debugging and lack of libraries. But with the amazing browser dev tools we have, debugging potential issues is not that painful. Every language compiling to js provides an FFI and/or some escape hatch, so you can write javascript manually for performance tuning or for using 3rd-party libs.
Even if you do write "raw" javascript, some sort of compile step is unavoidable, for running jshint, concatenating, minifying, etc. Why not go the extra mile and use a better language?
BTW, I'm not saying a tool like this is not super-useful, especially if you already have thousands of lines of js code that you can't get rid of. Congrats to the Facebook team for the release!
If everybody agreed that most other languages are much better and more productive than javascript then we should have had one of them in the browsers already. And if the problem is just social/organizational then we might never get anything better than javascript, in which case Flow is awesome.
I think Dart seems to be a great language, and IE, Firefox and Safari should have implemented it years ago, but they didn't. Now I think TypeScript is a great addition to javascript, and I hope they build it into the browsers, but I suppose they won't (maybe EcmaScript 7 will have some parts of TypeScript in it, or parts from Google's AtScript).
By the way, you probably still want minification and concatenated files when you create js from other languages. That stops me from using them; I would have many levels of tools between my source code and the production code.
Last time I said this I was caught but I'll say it again. Nobody really wants to even touch Javascript without a 5 foot stick.
People will tell me that it is a good language if you know how to use it, comparing javascript mastery to C mastery in a sense. I think therein lies the problem.
Yes, it does. You can define object types like { x: number; y: string }, tuple types like [number, string], function types like (x:number) => string, etc.
Whatever people's thoughts on the language itself, JavaScript has built itself into a juggernaut in the amount of tooling available to fit the various opinions developers can choose from. The number of large frameworks (in terms of popularity and usage) is not really found elsewhere. The number of smaller plugins is vast.
It helps that companies like Google and Facebook have invested a significant amount of research power into designing frameworks and tooling around it. From those two companies alone, we have tools like React, Angular, Karma, JSX, Jest, and now Flow. Tooling that involves the browser more includes Polymer and Traceur (an ES6-to-ES5 transpiler).
To contrast this, I have been doing development with Cordova the past week, writing Cordova plugins to fill in missing functionality - the plugin ecosystem with Cordova is horrid, and the documentation is often awful. To compound it, Android developers don't seem to believe in documenting their libraries well.
I will take the JS ecosystem any day when confronted with a choice like that.
"Whatever people's thoughts on the language itself, Java has built itself into a juggernaut in the amount of tooling available that fit into various opinions that developers can choose from. The number of large frameworks (in terms of popularity and usage) is not really found elsewhere. The number of smaller plugins are vast.
It helps that companies like Google and Oracle have invested a significant amount of research power into designing frameworks and tooling around it. Just from there two companies alone, we have tools like GWT, Android, MySQL..."
Working on an app recently for Android & iOS, Cordova helped us leverage a lot of existing skills in our team, and definitely made a lot of things easier - changes not having to be implemented twice per platform, for example. But we would run into some really weird bugs and tricky things to debug now and then. Overall I'd say it was worth it, but hybrid apps definitely have their caveats.
My personal preference is to have annotations because they help future readers and maintainers understand the code better. Instead of looking through the function to see that the variable is in fact a number, I'd rather just read "@param {number} x". And at that point, one may as well use Closure.
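That is, something like this trivial Closure-style JSDoc example:

/**
 * @param {number} x
 * @return {number}
 */
function double(x) {
  return x * 2;
}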
Apparently this is just type checking. It's not going to do any dead code removal like Closure Compiler in advanced mode or provide a better syntax like TypeScript. Whether that's good or bad depends on what you're looking for.
It's certainly possible that these could be used in combination. GCC optimizations aren't significantly enhanced by JSDoc annotations, so presumably this could generate code for GCC to optimize without losing major benefits on either side.
The React.js team has been explicit about the library being GCC advanced mode compatible, so they certainly have awareness of its capabilities. Whether they are using them together internally or they use a different solution for tree shaking et al is another question.
I wonder, how does this compare to Google's Dart? Like Dart, it introduces a type system into JS, and like Dart, it requires a compile step between Flow code and the JS that will run in a browser. What does Flow do differently from Dart?
Dart is a different language with different semantics. It is not the same as JS.
Flow starts with JS and adds a static type system (with attempts to infer types directly from the code). With the exception of the type annotations, Flow is JavaScript. Dart is not JavaScript.
Flow doesn't have any semantic changes from JS, only semantic restrictions. For anything that is not types, Flow defers to JS. Dart defers to its own specification.
Static types without performance benefits. So far, all popular static type systems have had the performance benefits, so it's unclear how much people value the other benefits (quality and documentation).
I wonder which will have the most impact: code quality or types as documentation (esp for tooling)?
They are adapting to common idioms rather than designing it from the ground up. This ad hoc approach is a great way to build useful tools (and startups), but it also usually produces a mess. Like NN4. But they seem to be type experts - plus they're using OCaml. Maybe ad hoc by experts is the way to get these ideas adopted?
If you can feed inferred static types to something like Google Closure Compiler, you do get performance benefits.
Also, if your code is implicitly statically typed (as checked by Flow), you will likely hit all the right optimizations in the underlying JavaScript VM.
Even without the flow analysis and better typing, incremental compilation is a huge improvement over TypeScript, which re-parses type declarations on every compilation. That quickly leads to long compile times when you have type definitions for third-party components (even though you may only be using one definition from the file, the entire file is parsed).
The existing definitions available from the DefinitelyTyped project are a huge productivity booster. Apparently Flow also has similar .d.flow files, but it will probably be a while until they exist for common projects.
I was going to comment that this sounds like an opportunity to get mileage out of the huge amount of type info already provided by DefinitelyTyped, by building a tool to convert .d.ts files into .d.flow files.
After investigating this thought, it looks like this is already at the top of the list of future plans for Flow:
This looks great, in typesystem/tooling/presentation, and sounds perfect for facebook; but for mainstream adoption it needs to meet (or be closer to) the ideal of free benefits:
(1) zero-work: works instantly with existing code and esp third party libraries; and
(2) instant-benefit: provides some compelling benefit in that zero-work case above (of course, it's OK if it provides more benefit if you do more work, adding type annotations etc).
This might be a stupid question, but is there a way to leverage object annotations [1] for runtime checks of data coming from e.g. an API or FFI call (node.js module calling C++ code)?
It looks like Flow should work to the extent that it can infer types from your CS output, but it will be tricky to embed explicit types into your CoffeeScript source.
You can probably hack it using the backtick operator though.
Best tech news I've seen this year, in terms of potential to directly improve my workflow and my clients' applications.
I'm surprised I didn't hear more about this before since it was apparently unveiled at the "Flow" conference. Wasn't at the conference and somehow I missed any prior mention of it.
If you see "[dupe]" on a dead comment you can be sure that that is why it was killed.
The problem was that there were two identical comments, the software killed one as a dupe, and avik deleted the other one. The software tries to fix this very scenario—it normally would have automatically unkilled the remaining member of the pair. But there are some corner cases where that doesn't work, and avik seems to have outsmarted it.
How does static checking work with dynamic types? Can the type checker figure out if a field/method exists on a type given that it can be added dynamically?
Edit: I assume it just checks bool/number/string and doesn't care about prototypes?
It does care about prototypes. So it checks for inconsistencies between methods added to a prototype and their uses. The tradeoff is that for dynamically added properties, it doesn't always remember where they are defined and where they remain undefined (it knows they are defined "somewhere").
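A sketch of what that buys you, assuming Flow handles ES5-style prototype methods as described:

function Point(x) {
  this.x = x;
}

Point.prototype.norm = function (): number {
  return Math.abs(this.x);
};

var p = new Point(3);
p.norm(); // fine: Flow knows norm lives on the prototype
p.nrom(); // typo: flagged as a property that doesn't exist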
It looks like it's designed for the IDE use case - on cursory inspection of the code, it contains an autocomplete database and a client-server architecture designed for editor plugins (with useful interfaces like "type at character index").
I'm surprised that they don't seem to have launched with a public editor plugin and that the documentation doesn't seem to mention it.
I'm confused by the usage of the utility itself. I ran it on the hello.js, and it reported the expected type mismatch. I then tried to run it on the file in the answer sub-directory, but it kept telling me about the previous file.
That's because it checks every file that has /* @flow */. I'm guessing you're not running it on a specific file but instead on all files in the current dir and sub directories.
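In other words, the pragma is the opt-in:

/* @flow */
// files without the comment above are skipped when you run `flow check`
// from the project root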
In terms of layering a static type system on top of JavaScript, how does this interact with Coffeescript and other languages that compile to JavaScript?
It doesn't, at least not any more than TypeScript or other compile-to-js languages interact with each other.
If CoffeeScript is to support these annotations one day, the CoffeeScript compiler would need to support them itself in order to generate correctly annotated JavaScript for Flow.
JavaScript actually has a pretty small number of type primitives; I'd say that the bad rap about JS's typing is due to the absolutely mind-boggling type conversion rules (which are nevertheless part of the specification).
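The classics, for anyone who hasn't been bitten yet:

[] + []   // '' (both arrays convert to empty strings)
[] + {}   // '[object Object]'
1 + '2'   // '12' (+ prefers string concatenation)
1 - '2'   // -1  (- forces numeric conversion)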
Even if that wasn't the case, their design just avoids the issue by not trying to do any typecast, ever.
Or you can just use GWT which saves you from having to use javascript at all, and lets you write java (along with all its IDEs, type checking, code structure, and other benefits) which is compiled to highly efficient javascript: http://www.gwtproject.org/learnmore-sdk.html
All the language features of java: generics, inheritance, type safety, interfaces, and soon to come: lambdas and other Java 8 features, are all supported. Plus your code is organized in java packages. So, I'm not sure you've ever tried GWT.
Things like multi-threading, which isn't possible in javascript, aren't supported, but a lot of the JRE that's used in most code does come out of the box. It's a huge improvement over regular javascript anyway. And client + server code can be shared.
Those are features of the IDE, and of Java, certainly. But type safety doesn't really exist in javascript. Structures like interfaces and classes don't really exist. Inheritance is different. Scope is different.
What you have is a javascript framework which emulates the behavior of Java to the degree that javascript permits, but can't really implement it.
I'm not saying it isn't a useful tool, or that the javascript it generates isn't far, far superior to anything I could roll on my own - but...
>Things like multi threading which aren't possible in javascript aren't supported
> But type safety doesn't really exist in javascript. Structures like interfaces and classes don't really exist.
You know where else none of that exists? In the machine code on which the JVM rests and to which Java code can be JIT compiled by the JVM before execution.
So, if you can't have it compiling to JS, you never could have it on the JVM, either.
It's not the same thing at all. Javascript isn't machine code, and it isn't bytecode. It's its own completely separate high-level language, that executes in an unsafe, unpredictable and highly mutable environment.
You should tell that to google adwords, inbox, and AWS console, then. All of them are developed with GWT, and haven't functioned in any unsafe, unpredictable, or highly mutable way.
>type safety doesn't really exist in javascript. Structures like interfaces and classes don't really exist. Inheritance is different. Scope is different.
You are not coding in javascript. You are coding in java, and the code is then converted into highly efficient javascript by a compiler.
> What you have is a javascript framework which emulates the behavior of Java to the degree that javascript permits, but can't really implement it.
Absolutely not. You are actually coding in java, and it's then compiled to javascript. Have you ever used GWT?
You may be coding java, but none of the tangible benefits of java over javascript actually translate, because you're actually writing javascript, with all of the warts and limitations and weirdness therein.
The javascript exported will not be truly type safe, because javascript is not, and cannot be. It may appear to act like it, within the context of the code when run as predicted, but this isn't the same thing as an actual strict language in a runtime which expects and enforces those rules. Highly efficient? Doubtless. Optimized to avoid as many pitfalls of the language as possible? Certainly. But anything more capable than bog-standard javascript? No. Because it can't be.
Hmm, so assembly code doesn't have any type safety or high-level features. So when I code in a higher-level language I shouldn't be able to actually program in it, because assembly doesn't support it; all I am doing is writing in an assembly framework and can't really do more than what it permits, right? What a load of crap, and the op is down voted!!!
You are writing java and barring some features like multi-threading, you CAN write anything you want. How to translate that to JS is the compiler's job, you need not care.
>So when I code in a higher-level language I shouldn't be able to actually program in it, because assembly doesn't support it; all I am doing is writing in an assembly framework and can't really do more than what it permits, right?
Javascript is not the equivalent of assembly. Javascript is, itself, a higher-level language. You're talking about translating from one high-level language to another, and expecting the interpreter for the latter to care anything about the rules of the former.
>What a load of crap and the op is down voted!!!
The entire thread is downvoted. It's annoying but it doesn't prove anything.
> Javascript is not the equivalent of assembly. Javascript is, itself, a higher-level language. You're talking about translating from one high-level language to another, and expecting the interpreter for the latter to care anything about the rules of the former
What of it? You can write Javascript by hand which would be good strongly typed code, even if there is no compiler or runtime checks to enforce this. OCaml has js_of_ocaml, which can turn 99% of valid OCaml code into Javascript. And OCaml's type system is a hell of a lot more strict than Java's. Whether or not there are runtime checks is irrelevant. The fact that it is valid OCaml code gives you compile-time guarantees when it comes to the types involved. It's a bit like the way that type erasure works in Java. It gives you compile-time guarantees, but the runtime has no idea about it.
> You're talking about translating from one high-level language to another, and expecting the interpreter for the latter to care anything about the rules of the former.
Actually, no, you aren't. The compiler cares about the rules of the language being compiled. The interpreter of the target language doesn't have to care, because code that violates the rules isn't output by the compiler. There's nothing special about high-level languages as a target; it's exactly the same situation as any compiler for any general-purpose machine (the only thing that might be different is a compiler for a language-specialized VM, but even for those, many rules of the language aren't present in the VM and exist only in the compiler).
> You're talking about translating from one high-level language to another, and expecting the interpreter for the latter to care anything about the rules of the former.
Yes; but the Closure compiler knows about the semantics of JS and generates a shitload of disgusting low-level wrappers and checks that implement the retarded JS semantics. You're right in that there are cases where this breaks down, but Closure is not really a high-level-language to high-level-language compiler. The JS output is not in any way human readable (look, for example, at http://de.indeed.com/s/8a9f5dc/jobsearch-all-compiled_de.js).
> The javascript exported will not be truly type safe, because javascript is not, and cannot be. It may appear to act like it, within the context of the code when run as predicted, but this isn't the same thing as an actual strict language in a runtime which expects and enforces those rules.
Haskell is type safe, but the runtime doesn't enforce the rules; the compiler does. The runtime, AFAIK, is no more typesafe than JS - all the type safety comes from compilation that rejects code that isn't well typed.
So I don't see why something with a compiler that emits JS can't be just as typesafe.
The point is that you don't have to deal with any of those issues with javascript. I've been using GWT for over a year, and I've never had to dive into the compiled javascript to debug something. I only ever edit my java code and compile it; that's it. It's exactly the same as your C code compiling to assembly, or java code compiling to bytecode: you never have to look at the compiler's output.
I think Google created Angular in part because of some problems with GWT. Of course some developers still use GWT, but others have switched to AngularJS and seem to think it is much more productive. And some combine Dart with Angular, which looks really interesting.
GWT is open source and is being used and developed outside of Google, so it'll hardly be abandoned even if Google left it. But Google is still using it heavily - see Inbox, which just came out and uses GWT.
Also, angular and dart are slow as shit compared to GWT.
Static? I use my own type-checking/enforcing lib as a base for everything I write in JS or CS (https://github.com/phazelift/types.js). It's only 1.8kb, dynamic and never fails on me.
I'm wondering whether I can add it into my team's svn pre-commit hook that already does jshint checking. It's remarkable what a difference in runtime js errors and overall code quality that pre-commit hook made for us.
It probably doesn't typecheck yet though.
Will future versions of popular javascript libraries typecheck with flow, or will there be a repository of interface files so at least code using these frameworks can typecheck?
Good job Facebook.