ES6 in Depth: Destructuring (hacks.mozilla.org)
128 points by mnemonik on May 29, 2015 | hide | past | favorite | 96 comments


I'm currently building a rather large isomorphic React/Flux application in ES6 and ES7.

With our Flux implementation that we're using[0], async actions to external services become amazingly simple.

  async sendRequest(payload) {
      try {
          let result = await this.api.sendRequest(payload);
          return result;
      } catch (e) {
          if (e.name === 'ServerException') {
              this.errorActions.serverError(e);
              return null;
          }
          
          this.errorActions.genericError(e);
          return null;
      }
  }
Having a class system on top of the prototype system removes a lot of the boilerplate, which is great. Browserify/CommonJS and Babel make for a phenomenal build system, and being able to render everything correctly on the server, with few exceptions, is brilliant.

Javascript has come a long way. The best part of destructuring is simplifying `import` statements:

  import { fooFunc } from './Bar';
I highly recommend checking it out, as the confluence of ES6/7, Babel, React and Flux (with one-way data flow) feels like the future, here today. That, and I'm stoked that functional programming concepts are taking off!

[0] http://acdlite.github.io/flummox


Last time I checked, one problem with async (translated with regenerator by babel) was that the transformed code is difficult to debug even with source maps. The stack traces are often useless and breakpoints map incorrectly. Also, since the async function returns a Promise, you have to deal with the Promise's own stack-trace-eating behavior.

To summarize, the code looks neat. But debugging a non-trivial app still has some way to go.


Just use asyncToGenerator in Babel: "optional": ["asyncToGenerator"] http://babeljs.io/docs/advanced/transformers/other/async-to-...

It transforms async functions into generators, with a far better debugging experience, and generators are supported in Chrome Dev and Firefox now.
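The core idea of that transform can be sketched in a few lines: the async function becomes a generator, and a small driver steps it, resuming with each resolved value. This is a simplified illustration of the technique, not Babel's actual helper (which handles more edge cases):

```javascript
// Simplified sketch of the async-to-generator idea: a driver steps the
// generator, waiting on each yielded promise before resuming it.
function asyncToGenerator(genFn) {
  return function (...args) {
    const gen = genFn.apply(this, args);
    return new Promise((resolve, reject) => {
      function step(method, arg) {
        let result;
        try {
          result = gen[method](arg); // advance (or throw into) the generator
        } catch (err) {
          reject(err);
          return;
        }
        if (result.done) {
          resolve(result.value); // the generator returned: settle the promise
          return;
        }
        // Wait for the yielded value, then resume with it (or its error).
        Promise.resolve(result.value).then(
          value => step('next', value),
          err => step('throw', err)
        );
      }
      step('next', undefined);
    });
  };
}

// Usage: every `await` in the async version becomes a `yield` here.
const fetchTwice = asyncToGenerator(function* (x) {
  const a = yield Promise.resolve(x + 1);
  const b = yield Promise.resolve(a * 2);
  return b;
});
```

Since the generator itself runs natively in current Chrome and Firefox, breakpoints and stack traces line up much better than with the regenerator state machine.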


I don't disagree, but one nice thing with Babel is a plugin called babel-plugin-rewire; it's a clone, if you will, of rewire -- a proxy for require calls that allows you to overwrite them.

With it, I can unit test my actions and other async modules in complete isolation, so the debugging burden is lessened. I'd love to be able to remove console.trace() calls from the error-handling side of my async code, and I think we will get there soon, but things can be worked around, and the control flow becomes far easier to understand than using generators by hand, or callbacks.


> control flow becomes far easier to understand than using generators by hand.

IMO, the control flow has no advantages over generators. It is exactly the same thing; replace await with yield/yield-star. Btw, until async debugging becomes better, replacing yield with delegating yield even gives you stack traces, since the delegation is handled by the runtime (and control does not pass back to the function (like Q.async, co, spawn) that's driving the generators).

  //From my current project
  static *getInitialProps(name) {
      var project = yield* Project.getByName(name);
      return { project };
  }

async is a better way to do things, I agree. In fact, I asked this silly question once on es-discuss and got educated. https://mail.mozilla.org/pipermail/es-discuss/2014-September...


Hey, I read that thread just the other week! Fascinating discussion, though I still disagree that generators have the control-flow advantage: while I personally don't see much of a difference, the other developers who are newer to JS and async execution in general have found async/await conceptually simpler, despite the fact that it's still compiled down to generators anyway.

You're right that it makes little difference in terms of semantics per se, but having implemented both in the application we're working on (just under 10k semicolons, currently), async/await, along with how Flummox/Flux implements the dispatcher, allows you to work with async methods without propagating async through the entire call stack, and lets you think of the runtime as synchronous-ish. I'm looking forward to finishing this project, as there's a tonne of stuff I want to write up about it; I think the big issue with ES6/7 is that there aren't a lot of "in the trenches" write-ups.

Thanks so much for the discussion on esdiscuss by the way, it really helped me understand async execution and control flow in ES6/7.

PS. This is a neat project, HTTP servers with ES7 async/await. Similar in concept to Koa, but using the newer syntax! https://github.com/quinnjs/quinn


I really wanted people to figure out how to make prototype systems work for them but either the web browser isn't the best context for them or people are just too used to class-based systems. I looked at Self and it was neat but I haven't seen any larger or shorter-lived programs that take advantage of the prototype-based system.


Prototype-based systems are cool in theory, but aren't much fun to use in practice. They are incredibly powerful and easy to abuse. Basically single inheritance in disguise.

Classes aren't great either, but at least using class inheritance makes you feel icky, you're not likely to have shared state on the prototype being randomly mutated, and it's pretty readable.
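A minimal sketch of the shared-state pitfall described above, using made-up Cart/SafeCart constructors for illustration:

```javascript
// Mutable state placed on the prototype looks per-instance but is shared.
function Cart() {}
Cart.prototype.items = []; // one array, shared by every Cart

const a = new Cart();
const b = new Cart();
a.items.push('apple');
// b.items is the SAME array as a.items: it now contains 'apple' too.

// The fix: give each instance its own state in the constructor.
function SafeCart() {
  this.items = [];
}
```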


what would you recommend as the best "getting started" with Flux and/or Flummox? React is pretty damn simple to grasp but building apps with it and Flux has a steep curve.


Flummox's documentation is pretty great, so reading through its guides and its API documentation is what really made the concepts click for me. The best part of Flummox is that it abstracts away the dispatcher and the boilerplate that Facebook's Flux requires. It's also not singleton-based, which is a massive boon for rendering on the server.

The developer of Alt (http://alt.js.org), Josh, is currently working on some documentation that will be exactly what you're after -- head over to the Reactiflux community on Slack, and go check out Alt's GitHub as I believe it's being put up there!

EDIT: Oh, and I highly recommend going through this code: https://github.com/goatslacker/microflux -- it will really make how Flux works clear. Most "modern" (haha, Flux itself is what 12 months old?) Flux implementations are pretty simple to be honest, so their code is often the best place to look to see how things are supposed to work.


Happy to hear Babel is working well for you. I'm loving it too so far. Just to note, named imports aren't technically destructuring though, since the syntax is different and doesn't nest. e.g.

    import { fooFunc as otherFooFunc } from './Bar';
not

    import { fooFunc: otherFooFunc } from './Bar';


Or ToffeeScript, which has been available for years and has a better syntax...


Classifying your code is something you might do when it's finished. But while most code bases are constantly changing, classes do not make any sense! They will just make it harder to refactor the code, and you will end up with a bunch of unused code just because you decided to classify early on instead of just prototyping.

Import is another concept that will eventually make your code base unmanageable. You'll end up with imports everywhere, a weird spaghetti, when it would be much better to use the modular pattern of breaking the code up into separate reusable modules.


As to classes, there are some places where they make sense, but for the most part your code really represents workflows against objects (not necessarily classes), as your code/flow really doesn't care if it's a duck as long as it quack()'s.

As to the second part of your comment regarding imports... import really isn't any different from require, and in that vein is easy enough to reason about and work through. For the most part, your discrete modules should be hierarchical in nature, and exposed as collections/wrappers via directory/index structures... this will make it easier to avoid spaghetti.


No offense meant to anyone, but I really do not like this constant stream of new features in JavaScript. There are some core issues with the language which, of course, cannot be just fixed, and no new features will make them go away. But there is also a beautiful simplicity and power in the way this (used to be) funny little language does classes, objects, scoping... I feel like these new features are bloating it, and for what? So that you can write an assignment in 1 instead of 3 lines? Wow. It's as if, whatever programming language is trendy, JavaScript must absorb as much of its features as it can...

Don't get me wrong, destructuring is a nifty feature, but it's not really necessary. Plus, things like that usually get thought through along with everything else, when the language is designed. Not only is this increasing the conceptual weight of the language (OK, maybe I exaggerate in this example ;) but there are many more features added all the time), but now you also have a new set of potential pitfalls when doing potentially type-inappropriate destructuring. (I see from the text that you sometimes get undefined, and sometimes a TypeError?) Does JS need more of that?

Why not keep it simple? It may have its flaws, but the JavaScript mental model, once you figure out some corner cases, is really simple and powerful. I find it sometimes very elegant. This feature bloat reduces that, IMO, and could hurt understandability of JS code.


I down voted you, I owe you an explanation.

> I really do not like this constant stream of new features in JavaScript

ES6/ECMAScript Harmony was announced seven years back, and has been actively discussed for the last four years. So the constant stream of new features you see is JS engines implementing parts of the standard, which is now in its final shape. The es-discuss mailing list* is open for community participation (as someone mentioned below) and is led by people with a lot of experience in their respective focus areas. Every new feature has had to pass through many levels of debate, and most have been borrowed after being found successful either in other languages or in libraries.

> I feel like these new features are bloating it, and for what?

Destructuring is fairly easy to grasp, and is not even one of the main draws of ES6. Since the last changes to JS, the environment and therefore expectations have changed a lot: (a) JavaScript is now a server-side technology in a big way, (b) async has become increasingly important, (c) people are writing very complex apps in JS. Without the changes in ES6 (and ES7/8 going forward), developers would have hated working with JS and JavaScript could have slowly died.

If you have been following, you'd notice that JS has a very vibrant community around it. One big reason for that is that they see the language evolving with the technology that surrounds it. Finally, note that all the new enhancements have come without breaking backward compatibility.

* I lurk there, not a contributor. Have learnt a ton from just subscribing to it.


> I down voted you, I owe you an explanation.

I do think you might be breaking the guidelines of the site - downvote should not be used just for disagreement, or? - but who cares, you're certainly not the only one ;) In any case, thank you for the explanation! I appreciate it.

As for the rest, I partly agree. I am too late to voice any constructive feedback. It was just a general comment on the direction things seem to be going. And I'd rather not participate in the es-discuss, because JavaScript is not my professional focus any more, so I do not have the time or the required expertise. I understand that my comment seems kind of like taking a passing shit on someone's hard work, I'm sorry about that. I hope everyone takes it as nothing more than what it is - just an opinion ;)

All in all, I feel the language is getting overstretched and overcomplicated. And when I see destructuring, I wonder if it's really worth it. Too much complexity can be a problem.


I did not down-vote you for disagreement, I'm firmly against that. I down-voted because it is misleading (probably unintentionally) to say JS is adding a "constant stream of new features"; rather it is all committee and consensus driven work spanning many years with many, many people participating.

I should exercise more restraint next time. Thanks for your reply.


> I do think you might be breaking the guidelines of the site - downvote should not be used just for disagreement, or?

Well, it's not mentioned in the guidelines, and there's a bunch of people who do use downvote to disagree. There's a few posts from PG; one where he says that people upvote to agree so it's understandable that they downvote to disagree, and others where he sees it as a problem to be addressed.

It's probably a good idea for people to upvote any posts they think have been unfairly downvoted. This sometimes happens, but probably not often enough.


Maybe downvotes could be split in two.

1. a downvote button, which is available to everyone, but affects only the ranking on the comment list (no graying out due to too many downvotes)

2. a "flag" button, which is available to people with higher karma and serves for moderating.

...because, it seems perfectly human to want to express disagreement with a downvote. I can totally get it, and would/will probably have a hard time restraining myself ^^


> Finally, note that all the new enhancements have come without breaking backward compatibility.

Doesn't string interpolation break from ES5?

"${foo}" means something different in 6 vs 5, right?


Template strings (https://developer.mozilla.org/en/docs/Web/JavaScript/Referen...) use backticks, instead of double/single quotes.

  let a = `${foo}`; //Template string
  var b = "${foo}"; //same as in 5.


I stand corrected. Thanks.


ES6 may make the language itself more complex, but I would argue that it does make your code simpler and more concise. Destructuring and arrow functions may seem like gimmicks, but they do reduce the visual complexity of the source code and allow you to concentrate on more important things.

And stuff like the module system, class system, promises etc. really needed to be standardized because there were too many different solutions floating around and the fragmentation was hurting the ecosystem.

Personally, I think that ES6 is much more usable than any previous version of JavaScript.


So don't use the new features. Nobody is forcing you. Over time, perhaps you will come to appreciate some and bring them into your code. Or perhaps not, both are fine.

Of course, the corollary is to point out that nor should you force your approach on others. Your usage of the language is not the same as everybody else's. They might be working in different contexts to you, solving different kinds of problems to you, with different constraints.

I'd also recommend subscribing to es-discuss. All the features are discussed heavily and repeatedly by experienced, clever people in an open forum, before being standardised and implemented. Only battle-hardened ideas, usually proven in other languages or in JS libraries, make it through the process. There's solid reasoning behind all of them.


I'd like to just clarify one thing - of course everyone can ignore the new features when writing their code, but no one can ignore them when reading other people's code. Undoubtedly, a JS programmer must get a handle on them all, sooner or later.

Thanks for the es-discuss suggestion, it's cool that the process is open like that.


Understanding other people's code is never easy and I get that new language features make it seem even more daunting.

But it is always a useful process to go through, more so when it exposes you to new ideas, patterns and idioms. These features seem intimidating because they are unfamiliar to you, but they won't be unfamiliar forever. I guarantee it.


Wow, that was pretty patronizing! But I think I made a certain argument. You don't have to agree, but it would be nice if you don't just blame it all on my supposed unwillingness to learn new stuff ;)


No, but as someone who doesn't write JavaScript at all, I would like the JS engine that comes with a graphical web browser to be simpler than the expanding spec makes possible. I mostly browse with Lynx anyways, but there's a legitimate concern about how much browser developers are being expected to support here, and what the implications of that are.


Considering the additional syntax is sugar over stuff that is already possible, I'm not sure how much I agree... Anything newer than IE8 can run the transpiled ES5 code that BabelJS/Traceur and others pump out.


I agree with that. All these features are amazing, but the fact that these modifications must be purely additive makes their implementation feel clunky.

Javascript was born with enormous defects that it's too late to remove, and adding new features, regardless of their undoubted usefulness, makes the whole language more difficult to handle. The number of concepts required to read ES6 is far greater than the number required for ES5, and that number is destined only to grow, with no possibility of decreasing.


I hear this type of argument a lot... regarding "Javascript was born with enormous defects..." would you mind citing some defects that aren't possible to simply ignore in practice?

I know that Automatic Semicolon Insertion, Hoisting, Scopes and mostly automatic type casting are brought up a lot...

Regarding ASI (use a linter, and always require semicolons)

For hoisting, it's a matter of learning the language, and it isn't any different from many other languages in that regard.

As to functional closures/scoping... the new "let" allows you to move away from that in practice

For automatic type inference/casting... I think this is absolutely one of the more powerful features of JS, and it makes validating end-user input far easier and more flexible than in most other systems that would presume to convert user input to a date, or a number, etc. This is my own opinion, but I've seen few better systems, and JS doesn't require you to load up a bunch of try/catch blocks to do it.
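As a small sketch of what the above means in practice, coercion lets you probe user input without try/catch; parseQuantity and parseDate here are made-up helper names, not library functions:

```javascript
// Number() yields NaN for junk input, and an invalid Date has a NaN
// time value -- so bad input can be detected without try/catch.
function parseQuantity(input) {
  const n = Number(input);
  return Number.isNaN(n) ? null : n;
}

function parseDate(input) {
  const d = new Date(input);
  return Number.isNaN(d.getTime()) ? null : d;
}
```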

I bring these up because I don't think they are defects in the language, except maybe closures only existing at the function level and, related to this, being able to use undeclared variables, both of which are addressed in practice by newer features.


Old features can be abandoned and forgotten. For instance, I don't even know what "with" does anymore. We don't want a python 3 situation on our hands.


First, what "constant stream of new features"? Over the past 20 years it's been a fairly stagnant language.

Second, stagnation is not a good thing. I make my living writing Javascript, and being able to write ES6 instead of ES5 makes me happier and more productive. That's not a minor benefit; that's literally the most important thing you can say about a language update. And thanks to babel, it doesn't even break backwards compatibility. What possible drawback is there?

Yes, this is ruining the "funny little language" you liked, but if you want a funny toy language, go use Elm or Elixir or something. Javascript is too important to be a hipster playground.


A little meta - where can I find some written guidelines for HN commenting? It was my understanding that downvotes serve for moderating (their cumulative effect is to gray out a comment...). The comment above has been downvoted somewhat, and if that continues it could get grayed out, as if it were spam or trolling. Is it really?


I guess it's like C++. You can use the new features if you want, you don't really have to. Some teams will decide on a subset of "safe" features of the language to use. I'm waiting for something like JavaScript the Good Parts II to come out and give an indication of what I should probably avoid.


I really, really, really hope that JavaScript doesn't become like C++...


That's really funny :) I was looking at the rest of these in depth articles and this one jumped to mind as a C++'y feel to how it was implemented: https://hacks.mozilla.org/2015/04/es6-in-depth-iterators-and...

Strict backwards compatibility forcing new syntax and ugly conventions.


I'm not really into web/frontend stuff. I'm interested, but it's not part of my day job and so I dabble only, play a bit here and there.

My last experiments caused lots of browser-based problems (String.prototype.contains doesn't exist in IE9, etc.) and ES6 features were a no-go for the same reason. How do you actually use things like the fat arrow etc. in applications today?

Is that limited to server-side code for now (and for quite some time..)? Do you decide to ignore a number of ~relevant~ browsers? Build tools to generate a 'lower' dialect/subset?

Any intro to this specific topic, i.e. 'making sure your ~modern~ code works in last years browsers'?


I use BabelJS (formerly 6to5), and used Traceur before that; they transpile/morph your ES6/7 code into something usable with older browsers... Since it will probably be 5+ years until I can actually rely on async/await sugar being native and not have to worry about legacy browsers, I don't think this is going away any time soon.

I actually do a lot of server-side code in JS (via node/io.js) as well, so the build step isn't so bad... breaking your code up into separate modules that build separately is a good idea though, so your build times don't get too long.


A lot of people use transpilers like Babel or Traceur. They compile ES6 to ES5 (or lower) for use in older browsers.


For runtime features like String.prototype.contains you can use a polyfill library like es6-shim[1]. For new syntax, you can use a compiler like Babel[2].

[1] https://github.com/paulmillr/es6-shim/ [2] https://babeljs.io/


Babel comes with core-js [1] built in, which is (I believe) a better and more exhaustive shim library than es6-shim and the others.

[1] https://github.com/zloirock/core-js


Modern frontend development seems to be stabilizing on using either browserify or webpack to pull modules together, along with some transpiler such as babel (for es6), coffeescript, or typescript.

It's a bit of work to get a project up and running from scratch, but there's plenty of templates, and the result is nice.


I've been writing ES6 with Babel for a couple of months now, and I've fallen in love with destructuring - it's my second favorite part, besides arrow functions. My only complaint, having run into this a few times in real code, is that you can only use destructuring with assignment, not with existing variables. So this works:

  let myThing = {a: 4};
  let {a} = myThing;
But this does not:

  let a = 5;  
  let myThing = {a: 4};
  {a} = 5;
I understand there are some problems of ambiguity here, but it seems to me this could be made to work somehow?


Yeah, I wish that were handled better too. Since the parser is expecting a statement, it parses it like

    {
      a
    }

    = 5
As in, a block with just an "a" in it, followed by an assignment to nothing, which throws a syntax error.

You can however do

    ({a} = 5);
to make the parser expect a destructuring pattern instead of a block statement.


Whoops, my last line contains a major typo and should read `{a} = myThing;`. And you've dutifully carried over my typo to your example :D

But regardless, the corrected version of your example: `let a; ({a} = {a:5});` works like a charm and is very handy! Thanks for the tip!


At least with let you can reliably shadow the previous binding, which in most cases will be equivalent.
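A quick sketch of that shadowing (variable names are made up for illustration):

```javascript
// A new `let` binding inside a block shadows the outer one; the outer
// binding is never reassigned, which in most cases is equivalent enough.
let config = { retries: 3 };
let retries = 0;
let insideValue;
{
  let { retries } = config; // new binding, shadows the outer `retries`
  insideValue = retries;    // sees 3 inside the block
}
// Out here, `retries` is still 0.
```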


I recognize destructuring from Clojure. Did it originate there? Or is this a specific case of pattern matching?


Other languages have this too, eg. Ruby:

  a, b = [1, 2]
  a, b = {:a => 1, :b => 2}
Similarly in Python, at least for the array example (there's probably some way of destructuring a dictionary in Python, but I don't know it myself).

PHP also has `list()`...


For arrays, yes, but that won't work for Ruby hashes (which aren't ordered):

    [1] pry ~ »  a, b = {a: 1, b: 2}
    {
        :a => 1,
        :b => 2
    }
    [2] pry ~ »  a
    {
        :a => 1,
        :b => 2
    }
    [3] pry ~ »  b
    nil


You're quite right, I meant to add `values_at` in the hash example! And the syntax is still not so nice:

  irb(main):005:0> a, b = {:a => 1, :b => 2}.values_at(:a, :b)
  => [1, 2]
Or:

  irb(main):006:0> a, b = (h = {:a => 1, :b => 2}).values_at(*h.keys)
  => [1, 2]
Edit: Ruby's hashes are ordered since 1.9, incidentally.


a, b = {a: 1, b: 2}.values


Common Lisp also has destructuring-bind[0], although it's not provided with every implementation.

[0] https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node252.html


It's basically just pattern matching. Similar things have been around since at least the 70s (e.g. in SASL).


Erlang and Python have destructuring as well - it's a pretty established shorthand.


Adding to Babel and Traceur, TypeScript is catching up with all the ES6 features, so you can use destructors (discussed [here](https://github.com/Microsoft/TypeScript/issues/240)) in TypeScript v1.5-beta


You shouldn't use the term "destructor" as an alias of "destructuring expressions" - destructors are an entirely different thing[1], which is not applicable to JS.

[1] http://en.wikipedia.org/wiki/Destructor_%28computer_programm...


Does TypeScript support async/await syntax and a Promise shim yet? What about generators? These are pretty much required to get to async/await goodness.

I ask because I am genuinely interested, and haven't looked into it for a while, only passively in the context of Angular 2. The addition of attributes (@ tagging) is specifically interesting to me as I think it's a cleaner syntax visually than mounting methods onto a function after declaration.

With BabelJS, I can use flow to get type annotations, but not sure about meta annotations (like attribute declarations in .net)


Both async/await and generators are on the roadmap[0] for the next version.

[0] https://github.com/Microsoft/TypeScript/wiki/Roadmap


I love it!

As someone else also mentioned in the comments, it is unapologetically dynamic. In many ways, this reminds me of Perl. The array destructuring assignment is such a common pattern in that language (eg. http://perldoc.perl.org/functions/caller.html)

What's even more exciting/terrifying is that ES6 goes even beyond the patterns allowed in Perl!


No, it's worse. I'm talking about the part where you want to pull only parts off a compound:

ES6:

    let [,,,,,,,,, tenth_item] = somearray;
Perl:

    my (undef x 9, $tenth_item) = @somearray;
The ES language designers got that wrong. You have to carefully count the number of commas, and use an invisible nothing in between. Instead they should have made it so that you assign to undefined, or perhaps null, or make up a new special identifier named _ (as it is used in some other languages), and then the assignment operation is smart enough to discard the value.

This is not only better because now there is a visible thing to see and talk about, but it also allows the comma operator to be much less restricted and frees the programmer to be more expressive. In Perl, multiple commas are collapsed, similar to how multiple whitespace collapses in HTML. The expression a,,,,,,b is identical to a,b; it also does not matter where in the expression the multiple commas are.

In ES5 and later, multiple commas at the end are collapsed into one, but otherwise multiple commas are kept. This is lame because it's inconsistent.
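The elision rules being discussed can be checked directly:

```javascript
// A hole in the middle of an array literal counts toward length,
// but a single trailing comma is dropped.
const withHole = [1, , 3];          // length 3: 1, hole, 3
const trailing = [1, 2, ];          // length 2: trailing comma collapsed
const holeThenTrailing = [1, 2, , ]; // length 3: only the final comma collapses

// The same holes skip positions on the target side of destructuring:
const [, , third] = ['a', 'b', 'c']; // third picks up 'c'
```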


What do you think of the destructuring features in Perl 6 as discussed for example at http://perlgeek.de/blog-en/perl-6/2013-pattern-matching.html?


You could just write:

         let [tenth_item] = somearray.slice(9)


You're missing the point here. Slice only works on a contiguous part of the compound about to be ripped apart. Destructuring with commas, however, is generic. An example with no simple slice method call:

    let [,second,,fourth,,sixth] = somearray;


I think this is a great language feature, using it a lot in OCaml and Clojure. I am not sure how much this will improve JS but I guess we can just wait and see.


Benefits are already known and experienced.


I mean how much the developers are picking up this. I see it as a pattern that developers sometimes reject new features on the basis that "they don't need it".


var [wtf] = NaN; console.log(wtf); // undefined

I don't know about you, but this sensibly gives a TypeError when I did it in my console on Firefox 38. Please let this article be more dated than my firefox browser, because making NaN auto-coerce into an iterator sounds incredibly stupid.


I suspect it's just accessing NaN[0]. Either form of property access ([] or .) implies coercion to object (not iterable) in ES5. It makes sense that destructuring would build on that behavior.


Array destructuring doesn't use [0] access. It iterates the right-hand side. On the one hand, that means that trying to array-destructure NaN does in fact throw. On the other hand, it means you can array-destructure a generator, which is a very desirable thing to be able to do.
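For example (a minimal sketch), an array pattern pulls only as many values as it needs from any iterable, including a generator:

```javascript
// An infinite generator: destructuring consumes only what the pattern asks for.
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

// Iterates the right-hand side three times, then stops.
const [first, second, third] = naturals();
```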


    var temp = NaN;
    var wtf = temp[0];
    console.log(wtf); //undefined
What else would you expect it to be? JavaScript in general guards against throwing errors as much as possible... I'd much rather see this behavior in practice than having to put extra try/catch blocks everywhere... Not to mention what this would do to async code.


That looks like a typo in the example. This code:

  var {wtf} = NaN;
would set wtf to undefined, for the reasons described in the article, since it would end up doing (NaN)["wtf"]. But trying to destructure NaN into an array should throw an exception.
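Both behaviors can be checked directly (a small sketch):

```javascript
// Object pattern: NaN is boxed to a Number object, so the property
// lookup simply misses and yields undefined.
const { wtf } = NaN;

// Array pattern: NaN has no Symbol.iterator, so this throws a TypeError.
let threwTypeError = false;
try {
  const [oops] = NaN;
} catch (e) {
  threwTypeError = e instanceof TypeError;
}
```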


Wow, that's some pretty complicated stuff. I can only imagine the monstrous code that could be written using this.


As with every language these can certainly be abused (perhaps more easily than other aspects) but I also see great usefulness in these.


that's awful syntax IMHO.

like php's list/map but with several levels.

those are just asking for bugs that are hard to spot even on code reviews.


Other languages have implemented this idea for decades now. It's a great feature.

The main thing that bothers me is that the default behaviour is too permissive.

    let a = [1,2,3,4,5];
    let [b] = a; // ok.  b receives 1
The equivalent Python will fail because there are "too many values to unpack."


I prefer the JS version. If you're a dynamic language, be unapologetically so.

  /* I can write code independent of getCheapestSellers() impl */
  var [bestPrice, secondBest] = getCheapestSellers("product_name");


That has nothing to do with being dynamic or not. Besides, your example translates quite nicely to Python as well:

    bestPrice, secondBest, *_ = getCheapestSellers('product_name')


My question is, why? If one wanted this type of guarantee consistently, they might as well use a statically typed language.

Add: To clarify, the error shows up at runtime in Python. So you're going to need tests anyway. Just as in JS.

Btw (unrelated), you can do the same thing in JS as well [a, b, ...rest] = [1, 2, 3, 4, 5]
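A short sketch of that rest pattern next to the permissive default:

```javascript
// The rest element makes the leftover values explicit...
const [a, b, ...rest] = [1, 2, 3, 4, 5];
// a is 1, b is 2, rest collects [3, 4, 5]

// ...while without it, extra values are silently ignored:
const [onlyFirst] = [1, 2, 3];
// onlyFirst is 1, no error (unlike Python's strict unpacking)
```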


Yay, you now have a variable declaration you aren't even going to use... Assuming "_" in this case is a variable, and not some magic keyword in the language, which is probably more stupid than just ignoring the rest instead of throwing an error. (forgive my Python ignorance)


_ is just a name, and using it for values you are not going to use after assignment is a convention in Python, Haskell and probably other languages that support pattern matching or destructuring.

Keep in mind this also comes up in cases like:

   foo, _, bar = something()
In general you can't avoid doing something like it, why "optimize" for this one special case?
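For what it's worth, JavaScript gets the same effect without a throwaway name, via an elision:

    // An empty slot skips a position outright, so no dummy "_"
    // binding is ever created.
    const [first, , third] = [10, 20, 30];
    console.log(first, third); // 10 30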


Why throw an error for something that isn't needed? This is always something I like about JS... it doesn't throw an error in many situations, where the most likely case would be to suppress it anyway, or when there's no point in throwing one...


> Why throw an error for something that isn't needed?

Because it appears the programmer's assumptions have been violated at this point. There's a syntactical way for destructuring assignment of a variable-length list, and it wasn't used. Maybe the programmer was being lazy, or maybe the programmer's assumptions were violated.

How do you feel about

    [a, b, c, d] = [1, 2]
? Should it throw?

> it doesn't throw an error in many situations, where the most likely case would be to suppress it anyway, or when there's no point in throwing one...

PHP rightly gets lots of flak for attempting to lumber on after the programmer probably made a mistake. JavaScript isn't as bad, but it does hide a fair number of programmer mistakes. For small web apps where failure or incorrect data isn't a big deal, maybe hiding small bugs is the right thing. For large applications, hiding likely programmer laziness or minor bugs is not a good feature.


I tend to be a bit more meticulous than most when it comes to structure and organization of software projects (I'm downright anal), though I do see your point. I've always just found it easier to work with the JS behavior, and simply verify critical data at critical points.

In terms of your example, in JS, I would expect c and d to be undefined. Though I sometimes wish that JS hadn't made the distinction between undefined as a value vs. null. However, the JS flow makes it easier to do something like...

    let [foo, bar, baz, optional] = getConditionsFor(id);
Where you expect certain values to only sometimes be there conditionally... To me this is flexibility that I appreciate, and once you are used to the JS way it becomes a lot more natural.
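ES6 default values pair naturally with this style. A sketch with made-up data:

    // A default only applies when that position is absent or
    // undefined, which fits APIs that sometimes omit trailing values.
    const conditions = ['low', 'high']; // hypothetical return value
    const [min, max, step = 1, label = 'n/a'] = conditions;
    console.log(min, max, step, label); // low high 1 n/a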

At this point I've written a lot of code in JS both on the server and the client... after 3 years of mostly node/js projects, I've spent the past week in .Net land and frankly I miss JS, though the JS on the web portion of this project is almost as painful. It's amazing how many times a small piece of coupled code can be copy/pasted instead of isolating it for reuse... Spending a week cleaning up such code to get a handle on a number of bugs, not fun.


It's really valuable for mistakes to result in clear errors rather than confusing but "valid" programs.

Even worse:

    let [a, b, c, d] = [2];
a receives 2. b, c, and d receive undefined.


Wait, side note, but when you think of functional programming your first point of reference is PHP? More than JavaScript (which I'm assuming you're probably familiar with, to have such a strong opinion about its syntax)? Am I interpreting this wrong?


it's not that far away, in the sense that one started as a functional language and was made C-like, and the other started C-like and was made functional :)


I fail to see this doing anything besides turning the syntax backwards!? Maybe it's useful for those of us who are used to writing from right to left, like in Arabic, but I would still use var foo = bar[0] over var [foo] = bar any day. It gets even stupider with objects: var foo = bar.a vs. var {a: foo} = bar.

JavaScript is a good language because it's so simple and easy to learn; adding stuff like destructuring just makes it more confusing! Making a language more confusing just because of a strong preference for syntactic sugar is a mistake. I would like to see a fork of the language before this goes mainstream. Maybe called CJS, where C stands for Clean or Classic JavaScript.


The point is apparent if you use actual use cases.

  let {title, content, email} = articles[0];
  let [all, ident, domain] = email.match(/([^@]*)@(.*)/);
  return `<h1>${title} (${ident} at ${domain})</h1><p>${content}</p>`;
Secondly, there is no destructor in JS (because of dynamic memory management), and there will probably never be. Destructuring is nothing like destructors.

Thirdly, the language doesn't have to fork. You can still write ES5 code. But you probably shouldn't because, believe it or not, ES2015 is a really good thing, full of improvements. And syntactic sugar is part of what makes using it so much nicer than ES5.
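On the renaming complaint specifically, the object form earns its keep once several properties are pulled at once. A sketch with made-up data:

    // One declaration replaces three property reads, with renaming
    // (a -> foo) and a default (d -> missing) thrown in.
    const bar = { a: 1, b: 2, c: 3 };
    const { a: foo, b, d: missing = 0 } = bar;
    console.log(foo, b, missing); // 1 2 0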


Articles should already be an associative array in the first place.

I can understand why you think the email example looks better, but whoops, there's a bug: you've now got two undefined variables!


I believe in this case articles[0] is pulling a specific article (object) from an array of objects, then destructuring that instance...

   articles.forEach(article => {
     let { title, content, email } = article;
     ...
   })
Might have made that more clear... but .forEach is an ES5 addition...

Given the backlash with ES6, I wonder why we didn't see this with ES5... "OMG they added extra sugar to Array.prototype, we're all gonna die!"


So.... How would a lack of destructuring avoid this? You would still have two undefined variables, plus a bit more typing.


It would probably have looked like this:

  var email = article.email,
    at = email.indexOf("@"),
    ident = email.substr(0, at),
    domain = email.substring(at + 1, email.length);
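Note that either version misbehaves on a non-matching string; match() returns null, and destructuring null throws outright. One way to keep the failure mode down to merely-undefined variables, as a sketch:

    // match() returns null when nothing matches, so destructuring it
    // directly throws; "|| []" downgrades that to undefined variables.
    const email = 'not-an-email';
    const [all, ident, domain] = email.match(/([^@]*)@(.*)/) || [];
    console.log(ident, domain); // undefined undefined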


The part of this that is disingenuous and a little silly is that they are just slowly adding in features from CoffeeScript/IcedCoffeeScript and nobody mentions that.

And in order to do it you have to use something like Babel.

Why not just move to CoffeeScript, or IcedCoffeeScript, or ToffeeScript, or even LiveScript?

The only explanation I can see is that people just aren't capable of learning the full new languages and so the standards people are leading you by the hand like you just got off the short bus.

AND, the standards people don't want to admit that individuals did their jobs for them years ago, don't want to use someone else's design for implementing those features and so insist on slowly coming up with their own independent implementations for engines.

This is a microcosm that demonstrates the relationship between technology and society. There is just a huge lag between the newest and best ideas or systems and what the group or individuals or systems can grasp or absorb.

Take this existing many-year gap and multiply by 10X or 100X and you will start to perhaps understand the concept of the technological singularity.


The standards people very often are the same people, or represent the same organizations, that implemented the features in other places ahead of it being standardized as part of ES.



