I switched from Ruby (after ~8 years) to JavaScript at the beginning of this year, and Ruby is a dream in comparison:
- Thanks to the proliferation of Rails and similar frameworks, most Ruby apps at least have something that resembles an MVC structure. With JavaScript, once you move past the basic TodoMVC examples you are pretty much on your own. It gives you enough rope to hang yourself, all your colleagues, and everyone in the building next door.
- The expect vs should change in RSpec is nothing compared to how fast things are changing in JavaScript. I think there are now 7 different ways of just defining a module.
- The stdlib of Ruby is pretty sensible. JavaScript has many inconsistencies (take Array.prototype.slice vs Array.prototype.splice: splice modifies the original array, slice does not), and you usually need to rely on third-party libraries, or write the code yourself, to do pretty basic operations.
- The JavaScript community seems to have the opposite of NIH syndrome, so that even basic functionality is offloaded to third-party modules (see left-pad). The project I'm working on has over 1000 modules in its dependency tree.
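The slice/splice inconsistency from the list above is easy to demonstrate. A quick sketch in plain JavaScript:

```javascript
const a = [1, 2, 3, 4, 5];

// slice returns a shallow copy and leaves the original untouched
const sliced = a.slice(1, 3);
console.log(sliced); // [2, 3]
console.log(a);      // [1, 2, 3, 4, 5] (unchanged)

// splice mutates the array in place and returns the removed elements
const spliced = a.splice(1, 2);
console.log(spliced); // [2, 3]
console.log(a);       // [1, 4, 5] (modified)
```

Two methods one letter apart, one pure and one mutating, is a fair example of the naming landmines in the standard library.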
> With JavaScript, once you move past the basic TodoMVC examples you are pretty much on your own.
This is a big problem that's addressed quite rarely. There are very few open JS codebases that cover the sorts of concerns most web developers face. There are open codebases for build tools, codebases for graphics editors and web frameworks, but few relevant to writing a typical frontend app and frankly few that are well written.
As a JavaScript developer myself I often wonder if there'd be demand for a 'boring webapp' open codebase, something that could showcase perhaps two or three different ways of writing garden-variety JS applications.
Rails is crazy. If you know it well maybe it's useful, but it's very unapproachable. A new, empty project involves tens of directories and files and at least two languages (CoffeeScript). That's been off-putting for me. But Ruby the language is quite pleasant IMO. I guess people conflate their experiences of Rails and some other Ruby tooling with the language itself.
Every web application requires at least 2 languages (a server-side language and HTML). Virtually all of them also end up including JavaScript and CSS. And JavaScript as a server-side language is relatively new. That's not a great argument against Rails.
With clojure you get the frontend logic in clojurescript, backend logic in clojure (or even clojurescript itself) and due to the isomorphism between html and S-expressions, the actual page layout is also described as clojure data in a natural, unambiguous way.
I'm not really trying to make an argument against it anyway. IMO if you didn't jump in early in Rails's lifetime, it's really hard to catch up; that's what I'm saying.
I was an early Rails adopter, somewhere around 0.x and used it every day through 2.3. Somewhere just before 3.0 I stopped using it daily. I recently came back to 5.0 for a few applications and was pleasantly surprised that I was proficient in a day or so.
I wasn’t familiar with ActionCable, Asset Pipeline or CoffeeScript but felt they were all easy enough. All the basics were still there, the MVC application structure, DB migrations, etc.
I share the sentiment that JavaScript and many other languages used for web development are great but seriously lack some of the essentials Rails provides.
I realize I'm fighting an uphill battle here considering your username, but what you're saying is that it's just as easy to jump into Rails today whether you're a total n00b or you were an early adopter using it every day over ~6 years from 0.X to 3.0, skipping a couple versions, and then returning.
3 was pretty much a total rewrite of Rails, so you could argue that if you skipped 3 you are starting over from scratch. Pretty much everything changed apart from the higher-level concepts (even the opinionated stuff was diluted to a point)
Agreed, Rails 3 absorbing Merbisms introduced flexibility into the existing conventions, making it easy to stray from the opinionated stuff. Rails 3 was still Rails but definitely made it easier to get yourself into trouble. And there were definitely changes any Rails 2 developer would have to learn.
I’ve experimented with most of the alternative frameworks and micro-frameworks. I have a real fondness for Sinatra. I’m happy most modern languages have their own Sinatra but I’m sad that most modern languages lack their own Rails.
Yeah I think it certainly helps that I have Rails experience coming back to it and easily picking back up. However learning it at 0.14.3 vs 5.1.4 doesn’t really make a difference. There’s no inherent advantage in the version you started on. Sure learning 5.1.4 helps if you have Rails experience but Rails today is as easy to pick up as it has ever been.
Ha so on the name, I’m transparent at least. I’m no longer a programmer but am a business customer for teams building on JavaScript. I think they have much opportunity and suffer from lack of convention in their day to day work.
Agreed re: that last bit, as a JavaScript fan. I think we’ll see a dominating convention or two emerge over the next few years. Maybe (selfishly, hopefully) not quite as opinionated as Rails but the current landscape is a bit of a mess.
I just learned Ruby (and rails) about a year and a half ago. I found it incredibly easy to pick up. Coming from a PHP/Java background I found that most MVC based frameworks that I already knew are fashioned from Rails in some way.
I think if you're new to most server-side languages and JavaScript SPAs are your wheelhouse, your experience will be very foreign and dogmatic-feeling. I switch between the two worlds often and am still amazed at how productive I am in Ruby/Rails.
I started with Rails at 4.x, and having never written a line of Ruby prior it took me about two weeks to be fluent enough to handle most anything thrown at me in the course of "typical" webapp development.
That was about a year ago, and these days my biggest challenge is identifying and filling knowledge gaps. For instance: the mere fact that ActiveJob exists isn't obvious.
Not necessarily even html, you could write your HTML in a DSL of some sort, write functions that generate HTML, etc. Using that you could keep the backend homogeneous. For example, something like https://jaspervdj.be/blaze/tutorial.html
I think the kneejerk hate on Rails by Coffeescript proxy is a good example of why developer-hate on forums like HN is a pitiful metric for scoring an ecosystem.
It's such a non-issue.
In fact, what's impressive is how Sprockets (part of Rails asset compilation) can chain together arbitrary file transformations by looking at the extensions. `file.coffee.erb.whatever` will get processed with the tool configured for `.whatever` files, then it'll eval ERB's <%= ... %> tags, and then finally get processed as CoffeeScript before being concatenated into the bundle.
I always thought that it was a really intuitive way to specify transformations.
But the thing is that you need Rails experience to appreciate a solution like that, and it's easier to write off Rails entirely because someone says that it depends on Coffeescript. In reality, Rails just shipped with the Coffeescript handler so that Sprockets could understand .coffee extensions if you chose to write .coffee files.
Technology-hate only serves to confuse beginners by making them think everything is black and white. And by making them think that they are making a mistake by learning one ecosystem over another when in reality everything has trade-offs and it's better to learn something over wallowing in the paralysis of indecision.
Half of these comments aren't even capable of comparing Javascript to Ruby without bringing up Webpack and the complexity of client-side bundling as if Ruby is a client-side language.
Rails doesn't rely on CoffeeScript. It includes support for it by default, but you don't have to use it, and you can avoid it entirely by passing a flag to `rails new` or removing the `coffee-rails` gem from your Gemfile.
How do the defaults affect the ecosystem these days, though? I remember back when I used Rails and CoffeeScript was the standard, it took a bit of work to avoid it altogether. This wasn't difficult, to be fair, but it felt like pointless busywork nonetheless.
jQuery was removed in the latest version of Rails in favor of rails-ujs, essentially a vanilla JS replacement for the jQuery-based UJS adapter, so `remote: true` and things like that still work.
Erb / vanilla js is still the default stack for beginners. But they've started to move away from the asset pipeline and they've built in webpack.
I couldn't tell you, as I haven't been really involved in Rails for a while. I never used coffeescript for any serious project, even when this change was made back in 3.1, and I can't remember it ever causing trouble. YMMV.
Node.js was something I jumped on early. As soon as I saw the video of Ryan's talk, I tried it. I guess the first version I tried was <= v0.5. Then the community went... nuts. But it's quite different from what I view Rails to be.
Rails is like a culture. There are a lot of pre-baked things when you run "rails new blurb", but most of it is useful, and you'd create them anyway. But to a newcomer like me, that's a bit overwhelming. That's my lament about it. Wrt Node.js, that's just a product of lots of ignorance. All you actually need is a JS file, or a makefile with ".ps.js:\n\tpuppyscript -c $< > $@". But the wheel is reinvented with such pace and redundancy that it's a hell of a platform nowadays. Even its creator, who is a very smart guy, has said farewell.
Did you try a good tutorial? Rails is structured in a way that fits from very small to very big web projects, but it's not structured to be learned just from reading the automatically generated code - that for sure would make it look crazy at first.
Eg for "Hello World", you're not going to touch 99.9% of the auto-generated code. But you need to know where to look.
I did try some tutorials some years ago, but Rails is the kind of library that wants you to know it well, and doesn't lend itself to an explorer type like me. The gap between hello world and production code is wide, and I couldn't find documentation to help with that process. Probably that was my fault. Something like Flask or Sinatra does not require you to know an ORM before exploring what sort of data structures you'll want to use. I guess it's just that something like Rails does not cater to people who like to learn their tools while actually using them, and never before...
My problem is that much of the time I don't know where to look. I'm not a great programmer, so maybe that's why, but the few times I tried to grok the innards of, say, ActiveRecord I gave up fairly quickly.
I think that very few Rails programmers need to know the innards of ActiveRecord - I use and abuse it heavily, and never felt that need. When things get difficult to optimize, I fall back to hand-written SQL. I spent a thousand times more time studying the innards of PostgreSQL than AR :)
I think the point is, Active Record (like any ORM) is a leaky abstraction that covers a small but common percentage of what you can do with SQL. Any time I've run into problems fighting the "magic" of Active Record, my time was almost always better spent just falling back to hand-written SQL (which isn't terrifically difficult with Rails).
Rails absolutely is a framework where you need to abandon some control and trust the framework though. If you need to know exactly how your abstractions work under the hood, it's a poor choice IMO. I doubt there's a single person who has comprehensive knowledge of how every single part of Rails works.
Couldn't agree more. Invest a bit of your time gradually learning more SQL and your life will be easier no matter the backend / ORM.
SQL is really not going away anytime soon (its demise has been falsely predicted many, many times). It even fits the current language hype in that it's functional!
> I'm a bit of a control freak, so this lack of insight into the innards makes me (quite possibly unreasonable) uncomfortable
Ok, that could be the reason why you don't like Rails :)
Regarding postgres, its innards become relevant when you have partitioned tables with billions of rows and need to run queries on them for analytics purposes. In these cases you often need to manually create some complex queries from scratch; other times, you can use ActiveRecord "magic" up to a point, but then you either need to add special indexes (eg partial indexes, that saved my ass many times), or you need to replace the automatically-generated queries that ActiveRecord uses and which are good for small numbers, but become a bottleneck when scaling.
A while ago it was easier to get in (and the framework wasn't simpler); there was a lot of beginner-friendly material (like the "do a blog in 15 minutes" video). If you started from there it was actually hard not to understand the file structure. (Hint: the rails/rake commands are your friends to quickly scaffold your models/pages.)
I agree that a lot of people don't make the distinction between Rails and Ruby. I had interns who were quite surprised you could use Ruby without Rails, and that "it's actually a complete language!". Ruby on its own is charming indeed. Rails gets the job done: few other tools enable you to finish a functional MVP in a day or two. But if you start to add gems for everything it just becomes a pain in the ass.
edit: I was f*cking mad when they added CoffeeScript. Fortunately it's simple to disable (just remove the gem and change a setting for the file generators).
That's not true (regarding CoffeeScript): you could always use JavaScript for the front-end; they just installed the CoffeeScript compiler as a default at some point. And there are not SO MANY directories you actually care about, like 3, maybe 4 ;-) (app, db, config, lib)
>I switched from Ruby (after ~8 years) to JavaScript at the beginning of this year, and Ruby is a dream in comparison
JavaScript is a beautiful platform and an atrocious language. I do not understand how anybody can tolerate it if they are coming from any other sanely designed language.
Transpilation is what makes JavaScript tolerable. Did I ever mention I love Dart? Because I love Dart.
>With JavaScript, once you move past the basic TodoMVC examples you are pretty much on your own
Yes! Those TodoMVC examples are great! Now write 100,000 lines of code and tell me you don't want to tear your hair out.
>I think there are now 7 different ways of just defining a module.
Thank you! I don't know what the hell is wrong with the community. There is no correct language-enforced module system, so everyone rolls their own. It's a mess and nobody seems to care ... probably because if you're doing that kind of development you simply transpile from a sane language and you side-step JavaScript ugliness.
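For anyone who hasn't lived through it: before ES2015 standardized `import`/`export`, people invented their own module conventions in userland. A sketch of the pre-standard "revealing module" pattern, with the competing syntaxes noted in comments (names here are illustrative):

```javascript
// Pre-standard "revealing module" pattern: an IIFE creating a private scope
const counter = (function () {
  let count = 0; // private state, invisible outside the closure
  return {
    increment() { return ++count; },
    value() { return count; },
  };
})();

counter.increment();
counter.increment();
console.log(counter.value()); // 2

// The same module under other systems (syntax only; each wants its own file):
// CommonJS (Node): module.exports = { increment, value };
// AMD (RequireJS): define([], function () { return { increment, value }; });
// ES2015:          export function increment() { ... }
```

Add UMD wrappers, SystemJS, and globals-on-`window`, and "7 ways to define a module" stops sounding like an exaggeration.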
>The stdlib of Ruby is pretty sensible. JavaScript has many inconsistencies
Yeah. Best not to dwell on it. It is the way it is because of legacy reasons.
>The JavaScript community seems to have the opposite of NIH syndrome
I would say that the JavaScript community is paradoxically suffering from NIH _and_ IH. You roll a lot of things in-house and you also add dependencies on 'modules' with single-use functions. The community is really strange. I think its quirkiness is due to being dominated by young kids who have no experience with anything other than JavaScript, so they don't know what they don't know and they don't demand better. The rest of the people simply side-step JavaScript and use TypeScript (or Dart, a criminally underrated language).
I would say the JS community has both NIH syndrome and huge dependency trees. Everyone does things differently and there are 5 or 6 kind of decent options for any dependency you might need. Yet somehow there is rarely one great option.
Nope, NIH Syndrome[0] is when you don't use a dependency because it wasn't written by your team/company/whatever, so one would expect it to lead to smaller dependency trees.
What in the world does this have to do with the article in question? There isn't a single mention of Javascript in this entire article. Not even in passing. Jeez it's like a contest around here sometimes: who can bash Javascript the most.
I'll grant that Java / C# might be more lucrative, but I'm doubtful about the others. The main draw to JS is that if you learn it and use it regularly, you can trade as a full-stack developer.
Scala is pretty well-paying, as far as I can tell. And frankly you don't have to be a JS expert to call yourself a full-stack developer; you just have to be conversant. In my experience, anyway. And from what I've seen front-end is the least well-paying specialty.
MVC is a simple concept. You don’t need a framework to organize your modules. You can easily do MVC in JS from a vanilla setup. As another commenter suggested, maybe MVC isn’t the right abstraction for the need. That’s the balance in my experience. Sometimes it makes sense, sometimes not. In the end, make the compromise that allows you to develop with high reasoning ability and iterate from there.
>MVC is a simple concept. You don’t need a framework to organize your modules.
If you're writing 1000 lines of throwaway code I agree with you. If you're writing a 100,000 lines of code that is meant to be supported for years - then I disagree wholeheartedly.
I think the move away from MVC frameworks and the move towards microservices go hand in hand. MVC frameworks are usually better suited to monolithic apps and that's just not the way things are done at the moment, for better or worse.
Maybe we're not aligned in what I'm trying to communicate. You can create a simple MVC abstraction without a framework. It's just separation of modules and their appropriate concern.
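A toy sketch of what that separation might look like in plain JavaScript, no framework involved (all names here are made up for illustration):

```javascript
// Model: owns state and notifies observers on change
function createModel() {
  const state = { todos: [] };
  const listeners = [];
  return {
    addTodo(text) {
      state.todos.push(text);
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) { listeners.push(fn); },
    getState: () => state,
  };
}

// View: a pure function from state to markup
function render(state) {
  return `<ul>${state.todos.map((t) => `<li>${t}</li>`).join('')}</ul>`;
}

// Controller: wires user input to the model
function createController(model) {
  return { onSubmit: (text) => model.addTodo(text) };
}

const model = createModel();
const controller = createController(model);
let html = '';
model.subscribe((state) => { html = render(state); });
controller.onSubmit('buy milk');
console.log(html); // <ul><li>buy milk</li></ul>
```

That's the whole idea: three modules with clear responsibilities, and not a framework in sight.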
For at least 90% of the problems out there, it's the best approach. There are a lot of design patterns out there, but you can do a lot with MVC. From what I've seen, the reason developers tend to shy away from it is because it's TOO simple. They don't feel proud of it, because it's so cut and dried. They want to create something unique and interesting and challenging, even if a simple solution would do just fine.
But it's not a balance, or a 50/50 sort of decision making. Really, MVC should be your first choice, and you better have a really good reason for it to not be, and a really viable alternative.
This sounds a bit like having a hammer, and seeing at least 90% of everything as a nail. Not that I don't find the MVC architecture useful - but really, it is hardly the only valid or reasonable approach, and your speculations on why people choose other approaches strike me as rather ill thought out besides.
On the other hand, if you find you need a new complex tool for every job you work on, you might be choosing those tools out of self-interest rather than what's most effective.
To extend the analogy to its breaking point: just because some people treat every problem as a nail to be fixed with a hammer doesn't mean that you can't use a hammer if you genuinely need to drive in a nail.
Sure, untrammeled neophilia is a problem too, but that's not what we're discussing here, either. No one is saying that MV* is never useful, only that it's not the be-all end-all that the comment I chose to interrogate presented it as being.
If we were discussing untrammeled neophilia, though, I'd note that in a fast-moving field it merits the professional to keep up a lively familiarity with the new tools which will likely soon deprecate the old, and it likewise merits the organization invested in such a field to avoid letting that investment grow so stale that it becomes difficult to find good people to work with it. There's really nothing here that hasn't happened with almost any other software specialization over the last few decades. It's only that it happens faster now, because everything happens faster now. There can be a certain fatigue in that, and from the outside - or from the perspective of one who has suddenly noticed that the world has moved on while he has not - it can seem as though there's nothing to it but new shiny things for new shiny things' sake.
I have not found it so; instead I find that today's tools enable those who know how to use them to do more things, better, and faster, than yesterday's tools could support - and I confide that tomorrow's tools will improve the situation still further. But perhaps my own perspective is the one that's flawed.
Gotta disagree with you there, given the number of frameworks that interpret MVC dramatically differently. I actually don't think it makes sense as a paradigm for server programming to begin with, and we've been trying to shoehorn the M, V, and C on to constructs that are a different thing entirely.
Sorry, stupid question(s) here. People use mobile phones now for basically everything. JS is for building web apps. Web apps are god-awful on mobile devices. So why is everyone switching over to JS? Why not just iOS/Android native -> a backend? If you need a webpage, just a few beautiful static HTML pages is great. People like the broken scrolling, the rotating arrows, loading circles, alerts?
This is like moving from Kotlin to Java or something similar. Within the domain of languages, you haven't moved that much. You stayed with a dynamic type system and a JIT or interpreter. Besides a little change in syntax, what else is there?
I tend to like to hear about people switching languages that also cross paradigms, OO to Actor, Actor to FP, etc..
If you're distributing software, that means that you need to verify the source of each of the 1000 dependencies, check if the licence is appropriate, and verify that there aren't any published vulnerabilities in the version that you're using.
Doing this bare minimum of due diligence is easy if you have 10 dependencies, but quite costly for 1000.
I don't see how the number of dependencies matters here, as opposed to their size. You can trivially automate license checks.
The number is misleading anyway, because many "packaged" libraries that you would normally look at as a single module are split into individual modules for perf reasons.
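A sketch of what "trivially automate license checks" might look like: run an allow-list over the package metadata of everything in the tree (which you could collect via `npm ls --json` or by walking node_modules). The package names and the allow-list here are illustrative:

```javascript
// Flag every package whose declared license is not on the allow-list.
const ALLOWED = new Set(['MIT', 'ISC', 'BSD-2-Clause', 'BSD-3-Clause', 'Apache-2.0']);

function auditLicenses(packages) {
  return packages
    .filter((pkg) => !ALLOWED.has(pkg.license))
    .map((pkg) => `${pkg.name}@${pkg.version}: ${pkg.license || 'UNKNOWN'}`);
}

const sampleTree = [
  { name: 'left-pad', version: '1.3.0', license: 'WTFPL' },
  { name: 'lodash', version: '4.17.21', license: 'MIT' },
  { name: 'mystery-lib', version: '0.0.1' }, // no license field at all
];

console.log(auditLicenses(sampleTree));
// ['left-pad@1.3.0: WTFPL', 'mystery-lib@0.0.1: UNKNOWN']
```

The license check automates fine; it's the "verify the source" part of the due diligence that doesn't scale to 1000 packages.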
Again, it's not about the number of people, is it? You don't have 1000 top level dependencies, so you're not trusting 1000 entities. If you're really talking about people, companies that produce libraries have hundreds of employees - do you "trust" them all? No, what you're talking about is establishing a web of trust, and you can do that in JS as well. Use a framework, use a library that combines a set of tools together. There's nothing particularly different about JS in this regard - there are plenty of large-scale development shops(Google, Facebook, Mozilla, Yahoo, Netflix..) which offer code, alongside a healthy open source community. You choose what to use.
1000 transitive dependencies is not the same as a single dependency with 1000 employees. For one, I highly doubt each employee can `npm publish`.
Consider the destructiveness of someone malicious befriending the left-pad developer, taking the project over, and doing a malicious push.
I'd say the scale of the potential destruction in so many tiny, few-eyeballed modules is unique to the npm ecosystem for better or worse.
It's one of the security downsides of the ecosystem especially in the online casino space where I work. For example, getting an online casino's `npm ls` (full depth) would be a good place to start.
The scale of transitive deps we're talking about when compared to any other ecosystem is quite excessive, but also just how tiny many of those deps are.
> Consider the destructiveness of someone malicious befriending the left-pad developer, taking the project over, and doing a malicious push.
Not different from someone befriending a corporate employee and asking them to insert some malicious code into the codebase. You still rely on a web of trust - in the company's case, the code reviewers, in the open source example, the maintainers of the libraries that use left-pad. I don't think this argument really holds, I don't know what you're comparing it against that's different.
We'll have to agree to disagree if you see equivalence there.
Even the smallest transitive dependency in our largest dependency graph in the Java ecosystem isn't someone's 6-liner afternoon project.
I think the ease of publishing + ecosystem of small modules is a good thing, it just has what we consider an ecosystem-level security trade-off that matters for some applications.
The corporate employee is likely incentivized by a contractual obligation not to deliberately screw their employer over. There is a web of trust, sure, but don't pretend that it's somehow equivalent to betting that some rando on the internet doing whatever they feel like with their three line repo won't break your code.
1000 opportunities for semver to go wrong. And what happens if dependencies pin different versions of their dependencies? The code is duplicated in the JS bundle?
Both versions appear there, yes. Not ideal, but I'd be interested to hear what you regard as more optimal behavior in the context of a dependency graph like the one you assume.
(Dependency bloat can be a real thing, sure. But if I have to choose between adding a dependency on a solid, well-built module and spending the time to reinvent the same complex wheel all by myself, I'm going to pick the option that gets me more quickly to done, 100% of the time. I'm paid to bring business value, and while minimizing technical debt where it counts is a big part of that, spending unbounded time to satisfy my own peculiar notion of software perfectibility is most emphatically not.)
I personally ran out of reasons to prefer another dynamically-typed language over Javascript on the server. In fact, with ESLint, I'd say Javascript has some of the best static tooling among them before you even get to Typescript.
With async-everything, promises, and async/await, I think Javascript is one of the nicest dynamically-typed languages.
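A small sketch of why async/await reads so well, with `fetchUser`/`fetchPosts` as made-up stand-ins for real async calls like DB queries:

```javascript
// Stand-in data layer; in a real app these would hit a database or an API.
const fakeDb = {
  users: { 1: { id: 1, name: 'ada' } },
  posts: { 1: ['hello', 'world'] },
};

const fetchUser = (id) => Promise.resolve(fakeDb.users[id]);
const fetchPosts = (userId) => Promise.resolve(fakeDb.posts[userId]);

// Sequential async steps read as straight-line code, no callback pyramid.
async function userWithPosts(id) {
  const user = await fetchUser(id);
  const posts = await fetchPosts(user.id);
  return { ...user, posts };
}

userWithPosts(1).then((result) => {
  console.log(result); // { id: 1, name: 'ada', posts: ['hello', 'world'] }
});
```

The same logic with nested callbacks, or even raw `.then` chains, is noticeably harder to follow and to error-handle.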
Couldn't agree more. And if you need to make it more concrete, TypeScript gives you that and more. Well, except for variadic generic types, which they are planning to add.
I don't like node.js, but I occasionally use Typescript like a more powerful C#.
TypeScript’s type system is completely structural and unsound. It is very different from C#’s: it fits JavaScript well, but it is far from a better, more powerful C#.
I didn't say better. I said "more powerful" and I said that in terms of expressive power.
C#'s type system is usually better except from the times that I feel creative. Works perfectly for large codebases with many changes. Not fun for side-projects.
Also, after I stopped using Rails, I saw that they started embedding a repl on their error pages with the environment loaded at the point of failure. Pretty sexy.
I also miss being able to eval code in my buffer while writing Clojure against the same process my server is running in.
Server-side Javascript has the perk of being able to use Chrome's debugger UI. I personally haven't used it much though.
It's interesting how much your core feedback loop / workflow can change between ecosystems. Pry is definitely a highlight of the Ruby ecosystem. The craftsmanship there alone is inspiring.
The problems with Chrome's debugger UI for server-side development are:
1) It requires another window to be open, which requires screen real estate. I don't always have an external monitor attached.
2) Navigating around requires clicking buttons with a mouse. These buttons are quite small, so Fitts' law dictates that it takes longer to navigate. It is much faster to be able to type "next", "c", or "up 3" in the same text box that the output appears in than to have to keep moving the mouse back and forth between buttons and the console.
3) Because of point #1, it is harder to get as much space for the output of running some javascript.
4) It takes longer to make the connection happen and (at least in the past) has been unreliable. This is a significant annoyance if starting and closing a debugger window frequently, as one might do if trying to navigate to a breakpoint in a particular test case.
I personally would select a dynamically typed language (preferably with C-like syntax) like JS/PHP/Lua/etc. over a statically typed one for web server projects most of the time. Only for specific high-performance web server use cases would I use Go or Java. For video games and other high-performance applications I would use C++.
Dynamically-typed languages on the server also tend to work well because they are often just a thin glue layer between a strict data layer (like Postgres) and the client.
Following a GET request through a web server usually just involves some middleware, a dozen lines of route handler, a dozen lines of database queries or service calls, maybe an html template, and you're done.
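That "thin glue" shape can be sketched framework-agnostically: parse the request, ask the data layer, shape a response. Everything here (`db`, `getArticle`, the request shape) is a hypothetical stand-in, not any particular framework's API:

```javascript
// Stand-in data layer; in practice this would be Postgres or a service call.
const db = {
  async findArticle(id) {
    const articles = { 42: { id: 42, title: 'Hello' } };
    return articles[id] || null;
  },
};

// The route handler itself is a dozen lines of glue.
async function getArticle(req) {
  const id = Number(req.params.id);         // middleware would validate this
  const article = await db.findArticle(id); // the data layer does the real work
  if (!article) return { status: 404, body: { error: 'not found' } };
  return { status: 200, body: article };    // a template/serializer step in a real app
}

getArticle({ params: { id: '42' } }).then((res) => {
  console.log(res.status, res.body.title); // 200 Hello
});
```

With so little logic living in the glue itself, the lack of static types rarely bites; the database schema is doing the strict typing.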
All of these concerns are not actually concerns. They're a symptom you haven't yet become a JS dev. You have to throw yourself mentally into the ecosystem, wholeheartedly, as if you were diving into a pool. At that point every one of your concerns vanish:
- There is exactly one way to define a module. Write a file. Done. Webpack takes care of it.
- You were using Webpack, right?
- The fact that Webpack either wasn't on your radar or already solving your problems says a lot.
Look, I know this sounds like bullshit, and in fact it's very difficult to distinguish this from actual bullshit. But there are a huge number of people here -- old fogies, stuck in the past, yearning for the old days that are never coming back -- who hate JS, and that hatred manifests itself in ten thousand subtle ways.
Inventing the idea out of thin air that "there are 7 different ways to define a module" is simply false. It does not resemble reality. Yes, it's true you can point at all the dead module systems that came and went, but nobody bothers even thinking about those systems because they're dead. They're zombies. Yeah maybe some people use them, but realistically if you're getting shit done in JS that means you have webpack and all of these concerns are not actually concerns.
JS stdlib seems like nonsense? Lodash.
Feels like you're importing too much bloat? Webpack tree shaking takes care of that. Closure compiler eliminates all the code you're not using. There's no reason not to use a first-class stdlib, whether it's Lodash or whatever else you prefer.
> The JavaScript community seems to have the opposite of NIH syndrome, so that even basic functionality is offloaded to third-party modules (see leftpad). The project I'm working on has over 1000 modules in its dependency tree.
This is the power of JS. It's the thing to force yourself to embrace. You need to join the cult of bullshit and just mentally force yourself to love this rather than hate it. It's why JS won. You can hate it all you want and revel in the insanity, but there is zero reason to do that other than as a way of making yourself feel good. Which is just another way of saying we're all acting very selfish when we lash out at JS like this.
Yes, you're correct. Every one of your points is absolutely true. Yet it's all so very wrong. I wish I could put it into words better than this.
Look dude. I've been around the block. I vividly remember what this shit was like in 2008. You want pain? Holy shit, I remember trying to make a little arrow rotate on my webapp. I wanted Finder-style arrow rotations. You know how on a tree view, each folder has a little arrow, and it points to the right when it's collapsed? Click on it, it rotates 90 degrees and points down, and the tree expands. Yeah, I tried to do that in 2007 right when Rails 1 was first coming out. Talk about horror. The solution -- the only solution -- was to make an entire animation frame by frame, of the arrow rotating 90 degrees. Like 16 different images, just for this stupid arrow to rotate 90 degrees on command. Because CSS literally wasn't a thing back then. Yeah we had "CSS", but for all intents and purposes we are living in the future right now. We have tools that 2007-me could only dream of.
JS works now. That is not the expected behavior. It's difficult to convey how fucking strange this is. Anyone who lived through that 2006-era transition knows what I mean.
Right now you can drop Semantic UI into your project, hook up React to it, and have first class testing (Enzyme) and state management (Redux).
Don't like all that bullshit? No problem. Use Vue. It sidesteps all the Redux insanity. And the tooling is catching up.
But the way you know these things is by being a JS dev. Living and breathing it every day. There's no way around that. You either throw yourself wholeheartedly into the pool or keep one toe in while complaining about how big the ocean is.
This rant came out quite a lot harsher than I intended; it's just a reaction to the very common trope that HN throws around of "JS is shit, the web is shit, this is shit." Yes, this is true. And yet -- simultaneously -- no. No no no. We can do magical things now. Hooking up Vue with hot reloading is literally magic. If 2007-era me had these tools, I would have launched a startup that could dominate everyone else solely due to the advanced tooling that we now enjoy. Our tooling now vs 2007 is like Lisp vs C++ back in 1999.
No, JS won because the web won. There's a web browser on almost every computer out there. If you write your software in JS, you can run it just by following a link to a URL. That doesn't have anything to do with JavaScript's absurd module ecosystem.
You will probably not get Python or Ruby on the client with WebAssembly. Shipping their runtime is several MB, not to mention the stdlib (which is part of the appeal of those languages). Paying that cost up front, before you've shipped a framework or a single user-facing feature, is way too much. We already have bloated webpages with only JS...
Opal is actually pretty nice and usable as a JS transpiler already. I suspect they'll be targeting WASM at some point.
I do think however the current trend is OCaml / Haskell influenced languages like Elm, Purescript, Reason, etc. WASM is only going to make them better.
And then suddenly I'd have 30 GB of cache on my hard drive because of all the sites that are so smart. And limiting its size will just render it useless, as everybody will try to add to it, erasing the existing cache.
But even without that, the chances that you have exactly the same build as somebody else in cache are very slim unless it's very popular. And getting it popular means a lot of people have to download it, taking the several-MB hit on first load. And most users will just quit the page before that finishes, thinking it doesn't work.
The solution would be for browsers to stop the madness and decide by community to adopt a new standard with a decent language for the web. Be it Ruby, Lua, Python, at this point I don't care. I'm partial to Python, but I won't fight for it if it means I can get anything with a real stdlib, good builtins, namespaces and a readable syntax.
The same thing happens now with regular browser caching, it's not that big of a deal. People will design run-times targeting webassembly to be lightweight, but if they end up being too heavy, developers will use CDNs the same way they do now with other large front-end dependencies.
> The solution would be for browsers to stop the madness and decide by community to adopt a new standard with a decent language for the web
A subjective and impossible consensus to achieve. The community will never be able to agree, that much is obvious. The answer isn't "pick a language that works today and hope for the best", that is how we got where we are today, rather, we need a solution that gives the community the power to experiment with better solutions that aren't constrained to a single language.
I agree with your conclusion, but I don't think caching will save us. Currently it doesn't work. The magic of the CDN never happened. I still download every single bloated monster on the web every time I want to read it.
The joke I always hear is that people are drawn to node because after learning all of JavaScript's weird quirks the people who've mastered it assume all languages are like that and never want to learn another language again.
Y'all can thumb your nose at JS all you want. Haters gonna hate. I'm starting to question whether this HN mindset is healthy. People get trapped in this vortex of us constantly chanting about how horrible JS is, and totally ignoring all the incredible things we can do now. It's worth pushing back against this echo chamber and reminding people that you now have the power to grant wishes on command. Someone wishes X exists, and for most values of X, if it's technology and you can shove it into a web browser, you can make X appear out of thin air. Like, within two weeks. How incredible is that? And it's mostly due to JS, not in spite of it.
Let's put it this way. Think of your favorite language. Favorite paradigm, whatever. All the shit you hate about JS, picture the exact opposite of that. Now imagine that the web was entirely built around that.
Surprise: now everyone would hate exactly whatever you love. It would've grown all that hair and all those warts you currently despise about JS. It's what the world does.
So when people knock on JS, it's getting much harder to take this stuff seriously. I've been a dev for over a decade and have been down a dozen rabbit holes, from C++ template metaprogramming to .NET nonsense to elisp verbosity to hardcore functional + immutable paradigms. Loved all of them, each in their own way. What we have now with JS is simply incredible from a raw "get shit done" perspective.
It includes special cases for dates, takes O(n log n) time in the number of keys, and has suspicious comments like, "I've managed to break Object.keys through screwy arguments passing. Converting to array solves the problem." There are 15 open issues and 15 open pull requests. A sane language would have this operation built-in.
I would otherwise agree that JS's standard library is too small, but I don't think your example showcases it. Which languages offer this in their standard library? It's not in any of the languages I have used: C, C++, C#, Java, PHP, Python.
A better example would be the extremely poor Date object, which has only a couple of format functions and can't even parse a date in a specified format. There are also a bunch of holes when manipulating data, filled by libraries like Lodash (although the standard library has improved significantly in this regard).
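For illustration, here's the kind of ceremony the built-in Date forces on you just to get a YYYY-MM-DD string (a hand-rolled sketch; real projects usually reach for Moment.js or similar):

```javascript
// Date has no "format with pattern" method, so even YYYY-MM-DD is manual.
// Note getMonth() is zero-based -- itself a classic gotcha.
function toISODateString(d) {
  const pad = (n) => String(n).padStart(2, "0");
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
}

console.log(toISODateString(new Date(2017, 0, 5))); // "2017-01-05"
```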
I don't blame you. I think the only reason I ever do deep comparison is in unit tests, and every test/assertion library has a `deepEquals` assertion, so it's not something I regularly miss.
There are better examples of deficiencies in Javascript's standard library, especially when targeting browsers, but Javascript has made massive strides lately like its native Set and Map datastructures.
For example, Set doesn't come with the classic methods `.intersection()` / `.difference()` / `.union()` / etc. but it's just not a defining moment of my overall experience with Javascript.
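For the record, the workarounds are short; hand-rolled sketches of the missing set operations:

```javascript
// The built-in Set (as of ES2015) ships without these classics,
// so you end up writing one-liners like these yourself:
const union        = (a, b) => new Set([...a, ...b]);
const intersection = (a, b) => new Set([...a].filter((x) => b.has(x)));
const difference   = (a, b) => new Set([...a].filter((x) => !b.has(x)));

const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);
console.log([...intersection(a, b)]); // [2, 3]
```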
We're long past the days where you had to bring your own `Array#map`, but I don't know if all of these commenters are.
C++ has this kind of equality for STL containers. Create nested maps, vectors, lists, sets, strings, etc, and you can just compare them with "==". It will do a deep compare.
Which languages offer this in their standard library? It's not in any of the languages I have used: C, C++, C#, Java, PHP, Python..
Off the top of my head, Haskell has "deriving Eq", and Scheme has "equal?". I'm pretty sure Python uses deep equality for tuples as well. Anyway, it's just one example of something I wanted to do in JS recently and ended up disappointed in the quality of the code I saw to do it.
I'm all for bashing the insane world of JavaScript, and I think the argument the grandparent is making is quite weak, but not many languages have deep equality baked in.
Sure, you can perhaps override some methods on an object, but given two arbitrary objects (like two dicts in Python), knowing if two are 'equal' is not easy and certainly not built in.
That being said, JS doesn't make implementing this easy or simple at all.
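To be fair to that point, a naive structural comparison is short; it's the edge cases that hurt. A deliberately minimal sketch (no Dates, no cycles, no prototype handling -- exactly the cases that bloat real deep-equal libraries):

```javascript
// Minimal structural equality: primitives by ===, objects/arrays by
// recursively comparing enumerable own keys. Intentionally ignores
// Dates, RegExps, cyclic references, and prototype chains.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => deepEqual(a[k], b[k]));
}

console.log(deepEqual({ x: [1, { y: 2 }] }, { x: [1, { y: 2 }] })); // true
console.log(deepEqual({ x: 1 }, { x: "1" }));                       // false
```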
And yet, this is a perfect example of "this does not matter."
Pick a project. Think of a website you want to make. At no point during that are you going to be stonewalled by "Man, if only I wasn't forced to spend the last 4 hours debugging this weird deep equality issue. I could've gotten so much more done!"
You can take that information and use it however you want. It's the truth. It's up to you to either reject the notion or integrate it into your mindset and abandon the tendency to cling to idealisms. Ivory tower development simply does not happen in reality, and I say this as someone who spent a couple years trying to build an ivory tower Lisp.
I know it's not coming across at all, but I really identify with your mindset and acutely feel your pain. This was a major mental hurdle that I had to force myself to overcome. I'm saying, it's possible, and the only thing you have to do is to choose to do it.
Let go. Realize that it doesn't matter. It's worth the benefits. You can do so much more.
I was afraid that "letting go" would translate into "now I'm part of the problem too." But it turns out the opposite is true. It's very powerful that you have a background that most JS devs lack. Because they often rabbithole themselves into complexity, and you can come along and do something simple that no one else thought to stop and do.
But you can do all of that while still embracing the wider ecosystem. It's not an either-or. You can help bring sanity to what would otherwise be a ball of hair. But the way to be in that position is to jump in and churn churn churn until you've used Vue and React+Redux and understand the concepts and tradeoffs.
> Pick a project. Think of a website you want to make. At no point during that are you going to be stonewalled by "Man, if only I wasn't forced to spend the last 4 hours debugging this weird deep equality issue. I could've gotten so much more done!"
I have faced this on the job. About three abstraction layers deep in a Backbone app. It took me two days to trace the exact cause and find a way to implement a better check, that wouldn't make things respond in unexpected ways.
> It's up to you to either reject the notion or integrate it into your mindset and abandon the tendency to cling to idealisms.
JavaScript is terrible. It is not the product of academic research like Smalltalk or Haskell. I can accept it is the only tool available for the job, and continually look at all the solutions for that particular issue, and I will keep doing that.
I can use JavaScript, and I can write it. Usually my code is simpler and faster, because it takes me much longer to write anything, because of how aware I am of the side-effects. My employers in the past have liked the results that has produced.
But I will never enjoy writing it.
It is a language that is difficult in ways that reveal its sloppy roots and spill its implementation details. Much of the time JS feels like working with UB in C, except the spec actually requires it to act that strangely.
Just an example that has always stuck with me, this is valid JS:
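The usual suspects in that genre -- every line here is spec-mandated behavior, not a browser bug:

```javascript
typeof NaN;             // "number"
NaN === NaN;            // false
[] == ![];              // true  ([] coerces to "", ![] is false, both reach 0)
"b" + "a" + +"a" + "a"; // "baNaNa"  (the unary + turns "a" into NaN)
0.1 + 0.2 === 0.3;      // false (IEEE 754, to be fair -- but still a FAQ)
```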
Pretending its pus-filled core is wonderful will lead to nothing but burnout, but so will hating every procedure you write in it.
It's a tool. It's sharp on both sides, which can lead to a lot of blood-letting if you aren't careful. But it is still the best carving tool we have on hand.
It's true that the most well-known languages are likely to attract the most criticism by virtue of being so well-known. It's true that JavaScript and its ecosystem have improved. However, it's also true that some warts remain (for instance, modules are not there yet). It's not fair to say "all of these concerns are not concerns" just because you have grown to appreciate all the advantages that JavaScript offers. That's being silly, sillysaurus3.
> Look dude. I've been around the block. I vividly remember what this shit was like in 2008.
I remember what shit was like in 1998, which is why I get the same sort of aggressive reflex anytime someone is criticising Rails.
Back then, a web project could, and this is only slightly hyperbolic, include a file ```/project/htdocs/version-2.0-old/real/includes/templates/config.passwords.production.INC.php3~```
...and that file would start with ```<table>``` and, somewhere, include the SQL to delete a user.
That was the state of our industry when Rails was released. Which is why anybody who went through that transition weeps whenever people today regurgitate this bullshit about "magic" and "OOP considered harmful". Maybe functional programming is incrementally better. Maybe Rails is past its prime. But on a purely emotional basis, I feel defensive when some article doesn't start with a paragraph praising the genius of DHH before offering its criticism.
Yes, the state of the web in 1998 was as you describe, but by 2005-2006 there was an explosion of MVC web frameworks in many different languages, of which Rails was just one.
That's right. Even pre-mvc, Java had Java Server Pages in 1999, which was much better than "<table>...sql here".
However, Rails is what really launched the "MVC renaissance." Because of it, soon Java, C# and many other languages moved to predominantly MVC cultures.
Rails was a (positive) reaction to Struts. Struts was an MVC framework, but clunky and difficult to manage. ActiveRecord in particular took many lessons from what Struts did wrong, and fixed it.
There's a Rails way; Python famously aims for one obvious way of doing things. That just isn't true of JavaScript, precisely because there is a history of developers writing several styles of JS: the "right way" to write JS was redefined by diligent people like Douglas Crockford & John Resig, and then things changed yet again and yet again.
And look at ES6, the JS ecosystem barely resembles what was considered good, idiomatic Javascript just a few years ago. There are a bunch of devs who have managed to stay on top of what's current and I'm sure are laying down examples of good practices & building tools in that style. But you are also talking about an ecosystem that includes people that started writing JS 4-10 years ago, have adopted some of the absolute newest flavor of the month practices but also have seen JS change so much over the years that they aren't throwing themselves into everything that is now considered a best practice because they know their code works and there could be another seismic change in 8 months.
Writing JavaScript, nearly every line of code is haunted by "is this still the way I should do this?", and no other language has that much hassle.
C++ has had the same problem ever since C++11 (or even since its inception, if you consider it an evolution of C). It also has the same solution: you don't have to chase the latest standard if you don't want to, thanks to an excellent history of backwards compatibility. People do it because the new stuff is legitimately better, not just out of some fad.
> But you are also talking about an ecosystem that includes people that started writing JS 5-10 years ago, have adopted some of the absolute newest flavor of the month practices but also have seen JS change so much over the years that they aren't throwing themselves into everything that is now considered a best practice because they know their code works and there could be another seismic change in 8 months.
Again, this is a symptom of the "one toe in the pool" syndrome. You have to throw yourself in.
Yarn is a perfect example. It came out very recently, relatively speaking. And yet it's clearly the future. Like, the only reason not to use yarn is if you have a legacy app that explodes for some reason if you try to type "yarn" instead of "npm install". But the common case is that you do "git clone foo && cd foo && yarn" and everything works perfectly.
It's so much faster, and almost never causes problems.
So why embrace that? It sounds like horseshit yeah? I hate to use heavy handed metaphors like that, but that's exactly the feeling that you get when you hear "Oh, now we're supposed to use yarn? what? npm install isn't good enough now? Those fucking JS idiots have no idea what they're doing."
In reality, you're seeing evolution in action. This is what natural selection looks like. The good ideas triumph. And yes, it's subjective what constitutes a "good idea." But when yarn comes along and makes all my npm installs 3x faster, you bet I notice and you bet I embrace it right away.
That's what I mean about throwing yourself in, though. If you don't actually force yourself to love this stuff, then of course you end up hating it. It's insanity incarnate. Yet if you just embrace that fact, and take it as a given, you can actually find it quite fun. It's an adventure to get to learn these new systems, not a chore. If you run across something that doesn't seem to make sense or doesn't work for you, just ignore it.
It just seems like we as developers run the risk of having our heads stuck so far up our own rears regarding our favorite language paradigms or the lack of greenthreads in JS or whatever it is you hate about the ecosystem, that it's very easy to lose sight of the bigger picture. Right now you can build a team of 50 engineers, and if you choose React, that codebase probably won't devolve into utter chaos. React makes it possible to do large-scale coordination. Like, I can send you my React component and it'll probably work in your codebase.
Ditto for Vue. If you hate the complexity of React, and you're a small team (usually just you alone), then you can just use Vue and not have to care at all about React+Redux's monstrous complexity.
You see? You can pivot however you like. If you hate X then just avoid X! It's really hard to take these concerns seriously when there are so many options.
It's actually really simple why we can't all just throw ourselves in 100% to every new thing. Instead of learning a new tool I could be writing app code for my client or employer who needs the code for their business. I have a finite amount of time & energy so I try to use it well.
And that's fine! You can still do that. That's the thing -- you can stop climbing wherever it makes sense. If YarnMaster5000 comes out tomorrow and claims to simplify your code so much that you can write a webapp simply by pressing the ` key, you don't have to spend one second learning it. Just stick with whatever you were already doing. The time to learn it is whenever the next project kicks off.
Ah, yes, that's the crux of it. We'd love it if the world would just stop changing so we can relax and stop learning. Well, too bad! Here's the uncomfortable truth: the moment you let yourself get too comfortable, your career dries up. That fear is a powerful motivator, so I suggest internalizing it. If you can't force yourself to love learning, then become afraid that if you don't learn it you'll find yourself unemployable after 5 years.
This almost happened to me, so I'm speaking from direct experience. I came from gamedev, and low level skills like that really do not translate into "I'm getting paid $INSANE/yr in a hot new startup." In fact, you can barely pass webdev interviews because you have to go "Uhh... Yeah, I worked on Rails back in 2007 or so. I guess I have some learning to do. But just trust me, I can learn it as I go. Oh wait, you don't believe me? But... I've spent my whole life learning as I go... It's fine if I don't already know Rails even though you're hiring for a senior rails dev position."
That situation is exactly what happens when you stop being on top of all this stuff. Go to sleep and wake up 5 years later and suddenly getting a job is hard. It's easy to smirk and feel like it won't happen to you, but FWIW the way you feel about JS now is exactly how I felt about Rails in 2007.
> That's the thing -- you can stop climbing wherever it makes sense. If YarnMaster5000 comes out tomorrow and claims to simplify your code so much that you can write a webapp simply by pressing the ` key, you don't have to spend one second learning it. Just stick with whatever you were already doing.
And there you are a few comments above telling us we have to use Webpack because other older module systems are zombies that came and went.
> We'd love it if the world would just stop changing so we can relax and stop learning.
The truth is that we'll never have enough time learn everything we should learn and every minute I spend re-learning a different way to do something I already know how to do is a minute I'm not spending learning something actually new and useful.
If, for example, I'm writing a web app related to storing and processing geology related measurements (which I am) I should be spending most of my 'learning time' studying things related to geology and the processing of said measurements, not re-learning how web development is done this week.
That's not really re-learning. That's incrementally building on and evolving existing knowledge. Virtually all my HTML and HTTP knowledge from 199X is still valid.
> Ah, yes, that's the crux of it. We'd love it if the world would just stop changing so we can relax and stop learning. Well, too bad!
Something about this smells.
What is it.
Is it the way "we would love it if it stopped changing" when it's us who are changing it?
Is it that the changes are reinventing the wheel? Churn because frothy churn is easier and more profitable than meticulous, solid, long-lasting foundations?
Or is it that we still haven't managed to orient the programming industry around 'finished' software such that if we ever 'finish' software we immediately stop being employed, so we have a pathological need for software to continually need to be rewritten, at the level the average programmer can rewrite?
From your other posts, your main problem was that you didn't want to / couldn't move to where jobs were, not that low level gamedev code is no longer being written - VR and AR are testifying to that.
How is this client side rewrite every few months fundamentally different from the broken window fallacy? as in, unsustainable because it adds no value overall?
> Yarn is a perfect example. It came out very recently, relatively speaking. And yet it's clearly the future.
Node threw itself into semver, and it was supposed to be great. Nobody needs a lockfile because everyone obeys semver, so every time you do 'npm install' your app is guaranteed to work, because everything is semver'd!
Oh wait, no, that didn't work out. Better freeze every dependency and sub dependency because who knows what will break when we update!
This is why throwing yourself at whatever shiny new thing without thinking or looking at how other languages do it is not always a great idea.
To be clear though, npm and yarn have not abandoned semver. Semver is still a good way of expressing dependencies. It's just that there are enough cases where it's not sufficient, and that's why lock files are in fashion this season. The underlying semver data is still there though, which is different to the sort of package manager which only records installed versions.
Sure, I didn't mean to imply they abandoned it. I simply meant that in a perfect semver world you wouldn't need lockfiles, except maybe in special cases.
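For anyone outside the Node world: a caret range like `^1.2.3` accepts any later version with the same major number. The hypothetical checker below is illustrative only (npm's real matcher, node-semver, handles far more syntax and treats `0.x` majors specially); the point is that one range admits many versions, which is exactly why reproducible installs need a lockfile.

```javascript
// Illustrative only: roughly what "^1.2.3" promises under semver --
// any version with the same major, at or above the stated minimum.
function satisfiesCaret(version, range) {
  const parse = (v) => v.split(".").map(Number);
  const [vMaj, vMin, vPat] = parse(version);
  const [rMaj, rMin, rPat] = parse(range.slice(1)); // drop the "^"
  if (vMaj !== rMaj) return false;          // major bump = breaking change
  if (vMin !== rMin) return vMin > rMin;    // later minor is acceptable
  return vPat >= rPat;                      // same minor: need >= patch
}

console.log(satisfiesCaret("1.4.0", "^1.2.3")); // true: compatible minor bump
console.log(satisfiesCaret("2.0.0", "^1.2.3")); // false: breaking major bump
```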
> Yarn is a perfect example. It came out very recently, relatively speaking. And yet it's clearly the future. Like, the only reason not to use yarn is if you have a legacy app that explodes for some reason if you try to type "yarn" instead of "npm install". But the common case is that you do "git clone foo && cd foo && yarn" and everything works perfectly.
Last I checked its issue tracker was a pile of "hey, you, uh, didn't implement this npm feature at all, so things break" and "you're missing yet another obvious sanity check here, so things break". No wonder it's faster. The last thing I need in my JS stack is another thing that can fail under ordinary use, for marginal benefit.
If you think npm is less buggy, you haven't been paying attention after the npm 5 release. Or when npm had an obvious race condition that resulted in publish breaking at random for years. Let's not pretend npm is slowed down by being well-crafted and reliable.
I've never personally encountered an npm bug, as far as I can recall, and I hit a couple within a week of using Yarn. So I'm going to keep not using it for the time being, since there's no real difference aside from not having to spend time ruling out my package manager as a source of broken builds.
I've run into several NPM bugs (including publishing broken packages because of that race condition bug several times) and have yet to run into any problems with Yarn.
That's a very interesting metaphor you came up with.
So the JS ecosystem is a giant pool right? Everyone is swimming in all possible directions with all sorts of contraptions like rubber tires, inflatable dinghies, pieces of wood, plastic garden flamingos, etc. A lot of the swimmers naturally pee in the pool because it's more convenient than exiting (that was frowned upon at the beginning, but people kinda got used to it - and it keeps a steady development speed!).
This might look quite horrifying to those on the outside, but for the swimmers it's nice and warm, while stepping outside is cold and scary, because they would have to learn all sorts of complicated and frightening technologies.
For someone like me that got out when the pool was small around 2006, there is no incentive to jump back in. I mean yes, if I closed my eyes and jumped I would probably adjust after a while. But why jump in the JS pool when I can swim in the open sea? :)
So you "got out" of JS around 2006, before module systems, before node.js, before anything resembling modern build tools, before even jQuery was really a thing (it was created that very year), before all development tools (firebug was also originally created in 2006).
And you think you have any idea what you're talking about when you talk about modern JS development?
You know when the term "AJAX" was even coined? 2005. ES3, the latest version of JavaScript available at the time, was released all the way back in 1999; the next version (ES5) wouldn't come out until 2009. Strict mode didn't exist.
MSIE had an 80%+ market share, Firefox was hovering around 12%. 2006 was the year IE7 came out. Building a website meant you were targeting IE6 if you were lucky but had to worry about compatibility with IE5 (and also the completely different but identically named MacOS version of IE5) -- but in practice IE4 and Netscape 4 were still a concern.
You know the song "IE is being mean to me"[0]? That was first published in 2009.
Your comment is the equivalent of showing pride about "getting out of" automobiles in 1903, when Ford started working on the first mass-produced car.
Ah yes, those were the days. One could open a website and it worked. You could save it, bookmark it, share it and it worked. No invasive ads, no pervasive tracking, accessibility was starting to become an important topic.
JavaScript was used mostly for decorative purposes and AJAX where it improved experience.
As far as benefits to the users are concerned, why is today's web better? It's just a souped-up Model T with billboards on all sides, a 360-degree camera on top and more cameras on the inside.
But I get why it's awesome for front-end developers, they get to rewrite everything in the framework of the day every couple of years and they are handsomely paid. They get to invent new contraptions to keep up in the eternal race with native apps.
It does get a little annoying when someone points out that the emperor is naked, but otherwise life is good.
Yes, clearly the web was better when half the sites were broken because you didn't have the latest version of Flash/Shockwave/Java and the entire website just showed a sad rectangle.
There's nostalgia and then there's outright denial. The reason you think the web was glorious in 2006 is that you forgot everything that was shit about it.
Want to know what made those shitty parts go away?
2006 is maybe not the correct year, but I'd say there was a period around jQuery's heyday that had a lot of advantages compared to today. JavaScript, while used frequently, was mostly for progressive enhancement, CSS was beginning to be a saner language to work with (particularly with Sass), and overall I generally felt more productive than I do today in the JS-driven world of the web.
Absolutely there's a class of applications that wouldn't be possible with this approach, and I'm glad they exist. But it seems the "correct" answer to any problem to be solved on the web, no matter how straightforward, is to reach for React or a similar framework.
Java was irrelevant by then, shockwave was not really used for websites, that would be Flash, which was already installed on most clients and was working fine.
Flash was not shitty, it was quite good for writing apps, but it was proprietary and like a lot of contemporary tech it was (becoming) a security liability.
You're giving waaay too much credit to JS. Apple almost single-handedly buried Flash.
Modern JavaScript is just as bad as Flash, Shockwave & Java; indeed, it's worse: those plugins could be enabled or disabled within particular pages and everything else would continue to work just fine, but modern SPAs are utterly unusable without JavaScript. Modern web sites use JavaScript to insert text & images into the DOM; it's impossible to usefully view the web without a full-fledged JavaScript implementation. It's impossible to read a blog without exposing yourself to security & privacy risks due to JavaScript.
It used to be possible to escape; now it no longer is.
> Want to know what made those shitty parts go away?
> Modern JavaScript.
Modern JavaScript didn't make the shitty parts go away; modern JavaScript made the shitty parts universal.
It seems to me that modern JS and CSS are mostly used to make much-worse-performing and even-more-annoying replacements for frames, and a plague of worse-than-ever popups. :-/
> For someone like me that got out when the pool was small around 2006, there is no incentive to jump back in. I mean yes, if I closed my eyes and jumped I would probably adjust after a while. But why jump in the JS pool when I can swim in the open sea? :)
I think that's actually worrying. While I'm not in the 'JS sucks; avoid at all cost' camp, I do find it a bit worrying that as I'm getting better as a developer, and as I'm getting more experience with non-JS ecosystems, I find myself avoiding it whenever possible. Even to the point of going for old-fashioned server-side apps just so I can use saner language ecosystem <x>.
I'm sure the loss of me will not affect the JS ecosystem much. But I'd be surprised if the same is not the case for many other, better developers. TJ Holowaychuk left for Go? Jose Valim moved on from Rails to Elixir (even to the point of creating it to solve problems he encountered with Rails)?
I'm confident JS is important enough to keep a hold on many good devs, but I do hope we keep improving the ecosystem so that the really good developers stay on board to help improving it.
I think that 'really good developers' would better spend their time replacing the JavaScript ecosystem, not continuing to prop it up. The modern web platform is a security, privacy & performance nightmare, and there's no way to fix it; we need to burn it to the ground and start over again.
> For someone like me that got out when the pool was small around 2006, there is no incentive to jump back in. I mean yes, if I closed my eyes and jumped I would probably adjust after a while. But why jump in the JS pool when I can swim in the open sea? :)
I was right there with you.
If you can find a way to retain your career and not have to worry, maybe it's not worth it.
But for me, I came precipitously close to having no marketable dev skills, by accident. I loved gamedev, and I spent my life learning it. It's what got me into programming.
Most cities don't have gamedev shops. And if you're not at a gamedev shop, you're not really a gamedev. Yeah you can try your hand at indie stuff, but that's roulette. Wanna be forced to move to whichever city has a studio? Be a gamedev.
So... Backing out of that career trajectory led to an uncomfortable realization: I'd amassed a bunch of low-level skills, had a deep understanding of Unix/C, and could implement https://lmax-exchange.github.io/disruptor/ without breaking a sweat. (Not merely use it; implement it.)
None of that translates into dollars. If you show up to a recruiter, they don't know what to do with you. There are literally hundreds of Rails gigs.
One interview went like this: They sat me down, opened up postgres, and said "Accomplish these goals." The goals were simple things like write SQL statements to query for certain types of users, or join data together.
I never had any reason to learn any of that. I knew that I could learn it if I needed to. I understood conceptually what joins are, why to use them, and what to avoid and why. The "why" is the crucial ingredient, and I naively thought that would protect me.
No... It's not fun when you have to sheepishly admit in front of three engineers that you have no idea how to write those SQL queries.
It was the same deal with Rails. Ditto for JS. Eventually I went through 5 interviews and struck out on all 5, in a row.
That's deep shit. When you start worrying about paying the rent, you're not in a happy position.
The way out of this mess, and not to end up in it, is to own it. Embrace all the shit. Become Elvis of code. Literally throw in the kitchen sink just because you haven't tried it out yet and this new kitchen sink might help you.
This story might not seem too relatable, and you might feel like your career is impervious. But a wise person once said that we overestimate the impact of years, and underestimate the impact of decades. "One decade" is so powerful that it's hard to overstate how much the world changes during that time frame. Are you absolutely certain you can get by without swimming in that pool that gives you the heebie jeebies?
Let go. Once you start swimming a few laps, it's not so bad. And in my case, I discovered I kinda like it. You're not supposed to like it, but it really is kinda fun. What other career would get to play with an endless box of legos?
I mean I think I understand why you chose JS, but we each make decisions that can take us on different paths.
I also do systems programming and have been doing C++ for most of my career. Instead of choosing JS I chose to invest in mobile and developed for iOS, Android and Nokia. Then I got back to system programming and will probably transition to software architecture.
I'm currently very interested in safety, security and reliability, which is basically the opposite of JS development culture. It seems to me that it's a big development world out there.
I expect there were other low-level jobs out there, outside the gaming industry. Were there? Why didn't you take one? Not criticising, but it looks like it would have been the path of least resistance.
Not surprised you're being downvoted for calling out HN for its anti-JS bias.
But there are indeed still several module systems in use:
* globals (i.e. none)
* CommonJS
* AMD (just kidding, nobody uses this for new code)
* UMD (which mixes AMD, CommonJS and globals)
* SystemJS
* ES modules (via Webpack, browserify, whatever)
* ES modules in the browser (which behaves slightly differently)
That's 7 so I'll stop there. Yes, it's cheating a bit but of those I'd say AMD and UMD are the only ones that are dead as a doornail (for new code anyway -- plenty of libraries still use UMD for nostalgia). You'll only end up using one of them (or maybe two if you mix ES imports and CommonJS requires for some reason) but which one depends on which part of the ecosystem you end up with.
It seems that most of the community has adopted Webpack as the one true bundler for apps. Rollup is generally seen as better for libraries though.
Angular I think still promotes SystemJS, React is half-way between CommonJS and ES modules (via Webpack/Babel). I have no idea what Ember is doing.
If you want to add type safety, Angular strongly favors TypeScript while React is slanted towards Flow for obvious reasons.
The JS ecosystem is complex. Even if you cut out all the nostalgic stacks (MVC with Backbone and jQuery is still cool, right kids?) there are still many paths to pick from.
But I agree with your final conclusion: the reason the JS ecosystem is so complex is that it has changed a lot over the years and seen massive improvement, while still maintaining backwards compatibility at the language level with the tons of legacy code that's still out there and running in production.
> AMD (just kidding, nobody uses this for new code)
Lulz, our Oracle-owned ERP system chose AMD for its module system less than two years ago. All new code will use AMD here for, I expect, the next decade.
>They're a symptom you haven't yet become a JS dev.
I agree 100%. If you're not infected you don't get it. Just like a healthy person may not understand the babbling of a delirious patient.
>I've been around the block. I vividly remember what this shit was like in 2008.
All the way back to 2008? Quite the block.
>JS works now.
JS doesn't 'work'. The language is still a mess. But the platform is beautiful. The runtime and compiler and the JIT are great. In the same way that NAT gave IPv4 a new lease on life, transpilation is what makes JS development tolerable.
The reality was that before Microsoft unilaterally added the XMLHttpRequest object in the late 90s, there was not much you could do with JavaScript even if you wanted to. Couple that with slow JS engines, severely limited front-end (DOM) APIs, and most people connecting to the net through telephone landlines on slow (by modern standards) PCs, and it made far more sense to do purely server-side page rendering.
> All of these concerns are not actually concerns. They're a symptom you haven't yet become a JS dev. You have to throw yourself mentally into the ecosystem, wholeheartedly, as if you were diving into a pool. At that point every one of your concerns vanish:
Before I get into the details, as a general comment: while you're not wrong, I've found that I need much less 'diving into a pool' when I enter other language ecosystems. I understand why this is the case with JS, and personally I have a soft spot for the language, but even if it's not JS's 'fault', I find that it is not a good thing that it takes such a 'deep dive'. They're valid concerns.
> - There is exactly one way to define a module. Write a file. Done. Webpack takes care of it.
It's a lot better now, but this wasn't true until very recently. I've probably lost weeks of time over the years figuring out how to load module <x> using import statements where require() worked instantly, and from what I understand it can be a serious headache for module developers to support both 'ESNext' imports and CommonJS requires.
I can't count how often I still need to search whether it is import x from 'x', import {x} from 'x', or import * as x from 'x', or in some cases the only thing that works is require('x').
> - The fact that Webpack either wasn't on your radar or already solving your problems says a lot.
Not really. I've used Webpack since the early days, and I've tried switching to Rollup, Brunch, and Gulp because Webpack just didn't make any sense to me. Then came Webpack 2 and half the tutorials and documentation stopped working.
FWIW I generally go for Webpack 2 these days, and it's an improvement. But it's a solution that causes quite a few problems of its own. Plus, it's a solution that isn't even needed in anything but front-end JS. For good reasons, and I don't think JS is the cause of the problem (rather, it's bandwidth and general front-end concerns). Nonetheless, it's not even remotely simple.
> JS stdlib seems like nonsense? Lodash.
Why yes. I love lodash. But it's a huge library that I'd rather not load in its entirety if I'm not using most of it. Have you actually tried loading parts of lodash using import statements? Last I checked, a few months ago at most, this was not possible. I had to do require('lodash/blah'). That's not obvious until you've figured it out.
> Feels like you're importing too much bloat? Webpack tree shaking takes care of that. Closure compiler eliminates all the code you're not using. There's no reason not to use a first-class stdlib, whether it's Lodash or whatever else you prefer.
Webpack tree shaking is relatively new. Much of the online help (tutorials, etc.) still refers to Webpack 1.
> - The JavaScript community seems to have the opposite of NIH syndrome, so that even basic functionality is offloaded to a third-party modules (see leftpad). The project I'm working on has over 1000 modules in it's dependency tree.
> This is the power of JS. It's the thing to force yourself to embrace. You need to join the cult of bullshit and just mentally force yourself to love this rather than hate it.
I don't see how defending JS by telling someone to embrace the suck is any kind of valid argument in favor of the JS ecosystem. Could you elaborate?
> It's why JS won. You can hate it all you want and revel in the insanity, but there is zero reason to do that other than as a way of making yourself feel good. Which is just another way of saying we're all acting very selfish when we lash out at JS like this.
First off, JS won despite its quirks and the insane ecosystem. And second, selfish in relation to whom? Brendan Eich? I think complaining about the things that do suck about JS and its ecosystem is important toward improving things. As a fan of JS, I don't feel offended when people say it sucks. It sucks, and while I think that's not primarily the language's fault, I also don't get my self-worth from being (predominantly) a JS programmer.
> Yes, you're correct. Every one of your points is absolutely true. Yet it's all so very wrong. I wish I could put it into words better than this.
Without meaning to be snarky, I'd love for you to elaborate. In the grand scheme of things I'm not a great programmer, and I got started with JS, so it's quite possible that I'm overestimating how much better the non-JS ecosystems are.
> Look dude. I've been around the block. [rant that I fully agree with]
> JS works now. That is not the expected behavior. It's difficult to convey how fucking strange this is. Anyone who lived through that 2006-era transition knows what I mean.
> But the way you know these things is by being a JS dev. Living and breathing it every day. There's no way around that. You either throw yourself wholeheartedly into the pool or keep one toe in while complaining about how big the ocean is.
> This rant came out quite a lot harsher than I intended; it's just a reaction to the very common trope that HN throws around of "JS is shit, the web is shit, this is shit." Yes, this is true. And yet -- simultaneously -- no. No no no. We can do magical things now. Hooking up Vue with hot reloading is literally magic. If 2007-era me had these tools, I would have launched a startup that could dominate everyone else solely due to the advanced tooling that we now enjoy. Our tooling now vs 2007 is like Lisp vs C++ back in 1999.
This I do agree with. I'm incredibly happy that front-end development has made massive steps in being saner than it used to be. And it does often frustrate me to read low-effort criticisms that don't really add to the conversation.
But in the past year I've ventured further and further out of my JS world, and I'm starting to understand why so many people are so negative about JS. If I were an experienced dev with little JS experience, the ecosystem would strike me as insane, to the point of just giving up and doing everything with 'server-side' languages. And honestly in the past few months I've been wondering if perhaps I've underestimated how much the JS ecosystem sucks. How much perhaps I've been Stockholm Syndrome'd into thinking this is normal in any way.
All that said, the constant criticism is grating. It doesn't solve anything, and it often carries a whiff of 'if only sane people were working in this area things would be different'. Which is utter bullshit. The JS ecosystem sucks, but it's the best we have, it's not because 'JS devs r dum', and in the end it often offers enough advantages to be worth the trouble. But that doesn't mean it isn't insane. It's batshit insane even now, but for the first time in years it's at least less frustratingly insane. We're moving in a good direction.
EDIT: let me add that I do agree that comparing JS to Ruby/Rails is perhaps not the best way to argue that JS isn't good. I've encountered plenty of batshit in the Ruby ecosystem.
Just wanted to say that I appreciate this reply. I'm probably going to fall asleep soon, but FWIW we're both pretty much on the same page.
Regarding "embrace the suck," the best I can say while being brief is...
Ah man, this is such an interesting topic, and it's worthy of a blog post in its own right. You have to trust me when I say "there's something here; it's worth tugging at this thread to find out whether it might be true."
It is counterintuitive. But the fact that it's counterintuitive is a hint that it might be worth checking whether it might be true. Relativity was counterintuitive too.
If you embrace the suck, you can pull the simplicity out of the chaos. But only after you've mastered the chaos.
That sounds like some combination of naive or impractical. Who could possibly master chaos?
You can. And that's all I'm saying. Step one is to force yourself to choose to try.
I'm not sure I completely understand what you're getting at, but FWIW despite the many problems I'm incredibly excited about the front-end JS ecosystem.
I do think it's a great situation where one single programming language allows one not only to create complete applications that run on every computer under the sun (without installing Java), but also to fucking debug them in a pleasant way on pretty much each of those computers.
But all that said, I wish I hadn't started out with JS and blindly followed the 'chaos'. I've wasted so much time using React and Redux when simply React, or in some cases even good old Backbone or jQuery, would have sufficed. I wish I'd learned good programming practices from more overtly functional or perhaps more overtly OO languages before learning JS. I wish I hadn't spent many hours dealing with Grunt, no, Gulp, no, Webpack, no, Webpack 2, only to become a 'master' at something that is entirely irrelevant outside the JS ecosystem.
Sure, I learned the intricacies of the 'chaos' and I find myself going more and more for simpler solutions within the ecosystem. I'm comfortable using Baobab.js or MobX instead of Redux. I'm comfortable using only lodash instead of a whole bunch of modules that do part of what lodash does.
But I don't feel any of that made me better as a programmer. In fact, as the JS ecosystem is improving, I find that a lot of my arcane knowledge of how to, say, enable hot code reloading in Webpack 1 is entirely useless now that Webpack 2 sort of does it out of the box.
More and more I'm inclined to let the JS ecosystem do its crazy shit and wait for something semi-standard to shake out before I even bother. Because everything I learn about the specifics of this chaos will not be worth knowing in <x> months.
You're right, in a particular way that I hadn't thought of:
Most people don't have a background in programming. They weren't doing it when they were teens, and many people probably started writing code within the last year or so.
In that context, I completely agree. It's absolutely true that at the end of all that chaos, you won't end up a better programmer.
I was speaking as someone who was in the inverse situation: I'd amassed a lot of theoretical knowledge, and spent a lot of time chasing the phantom of "being a good programmer." I researched memory models, read whitepapers, explored how Google implemented bigtable, went through rtm's 6.824 Distributed Systems course just for fun (it's freely available online)... You know, a bunch of hardcore stuff that it seems like "real programmers ought to know."
Let's put it this way. If I had to do it all over again, it's very possible I would swap skillsets with you. Because you know all of the stuff you rattled off. You are now prepared to avoid it. So yes, it's easy to feel like you've wasted your time. But when it comes to employability, you are far better positioned from a career standpoint.
Stuff like this may feel "out of bounds" of traditional conversation. Normally we're supposed to make up arguments that support our points purely through reason alone, right? Like it would be much more persuasive for me to say "Well, this matters because X" where X is something related to programming. Talking about career stuff and employability may seem like punting. It might also seem like it's something you don't really need to worry too much about.
But when you relax and let yourself stop learning, and stop going through the motions of all that stuff you hated, it's merely 3-5 years before you end up in a similar position. Maybe. Or maybe you'll get lucky.
I completely agree, FWIW. I make a very decent living as a JS front-end dev using framework-du-jour. Thankfully that very often is React/Redux/React-Redux/React-Router, and a smattering of other React/Redux type stuff.
But that doesn't make it less shit. I'm actively re-schooling myself as a more backend-ish, or perhaps full-stack-ish developer that allows me to use more of Elixir, Clojure, or even Ruby/Python, and less of JavaScript.
I'd say there are basically three perspectives on this:
1. development ease/quality, where JavaScript is not the worst, but far from the best.
2. ecosystem health and best practices, where I find that JS is not doing a great job. A lot of churn, a lot of reinventing the wheel, and a lot of stockholm syndrome defending the mess.
3. money, where being a good front-ender is worth most of the effort and frustration.
I'd say 3 is the most contentious. Most people seem to agree on 1: JS and its ecosystem are not a paragon of good development. It takes work to even just get a good initial setup, which is not a problem in many other ecosystems. Transpiling, to name one example, is just not necessary elsewhere. I think a sizable portion of multi-lingual devs would agree on 2 as well.
But 3 is difficult to quantify. Is it a good thing that I can make a good chunk of money building React/Redux apps when I could've built a saner, more stable version in Django/Rails/Elixir in a fraction of the time, if the only downside is that it requires a full page reload? I'm not so sure anymore. But it's worth a discussion.
Points 1 and 2 strike me as obvious enough that bringing them up ad nauseam on HN is just pointless and frustrating.
Let me add that perhaps I do see an issue you're (possibly) raising: the need in many situations for a programmer to develop arcane knowledge that is only useful in one particular ecosystem.
I find myself often frustrated at criticism of JS that really is a criticism of the need to know the ins and outs of the ecosystem as a whole to be productive.
It's frustrating because blaming, well, anyone for that situation just seems a bit utopian and unrealistic. It implies that JS devs are not aware of the problems, when in my experience most non-fanatical ones are aware, but pragmatic.
I'm at least old enough to have been working within various ecosystems where arcane knowledge was pretty much a prerequisite to being productive in said ecosystem. One of the most common frustrations I had was people criticizing this need for arcane knowledge, where my thoughts were "sure, but that doesn't change the reality of day-to-day programming in ecosystem X, and the advantages of doing so perhaps offset the shittiness".
I've experienced this with obscure Delphi-related issues concerning the app's 'chrome' (title/task bar coloring). I've experienced this doing PHP development and being told I'm a shitty dev for even touching PHP. I've experienced this with jQuery, being told that I should use Backbone (which FWIW was a good choice moving forward, but it never really solved the fundamental issues in my previous jQuery work; if anything, functional approaches did).
There's a point where it just gets exhausting to hear people bring up the same tired old argument, again and again, for karma or whatnot, against a particular ecosystem, when the arguments are theoretically sound, but where they disregard the pragmatism of 'participating' in said ecosystem.
It's really not that different from a libertarian getting uppity about capitalism when you're busy volunteering for some non-profit civil society-related endeavor to improve things. It's not technically wrong, but it's a hell of a lot more pointless than pragmatically working within said system (and yet, confusingly, still worth pointing out).
And of course then there's a bunch of people who build something that vastly improves a possibly shitty ecosystem, using lessons they probably learned from other ecosystems. I'd say Redux as well as React are great examples of that. I'd love to go into how these are fundamentally very much about functional programming principles, and how I think all these tired discussions about languages hide the more important discussions, but that's unfortunately probably seen as tangential to this discussion.
> The majority of my job consists of maintaining about a dozen legacy Rails 2 / Ruby 1.8.7 applications, written between 2008-2010, [...]
I would be sick of that, too.
I'm also sick of criticism of Ruby when you actually want to criticize Rails. There is a significant overlap between the two communities of course, but both have a distinct profile and you can't just lump them together.
In this case, it's also not helping that they have to stay on an ancient Rails version. With Rails, it's a much smoother experience if you can follow the major releases but especially the upgrade to Rails 3 was a painful one.
Edit: I don't want to imply that you shouldn't criticize Ruby but if you do, don't confuse issues of Rails with those of Ruby. Rails and Ruby code do have quite a different feel to them. It goes this far that when I write a clever little Ruby snippet in a Rails app, I almost feel dirty as it doesn't belong in there.
I learned Ruby before I learned Rails, and the books/tutorials I followed were pretty heavy on the metaprogramming and OO usage (inheritance, etc.).
What I find fascinating is that when I read more recent articles/books, the approach is much more functional in nature, with the OO part sometimes seeming little more than namespacing (Sandi Metz, for example).
I do think Rails, for all the good stuff it did, promotes 'bad' use of Ruby. But even 'idiomatic Ruby', based on the books-you-should-read suffers from similar problems.
It's all fun and exciting to write Ruby code, but if I hadn't been subjected to the 'avoid too much inheritance' and 'write your OO code in a hybrid OO-and-functional way' advice, my output would've suffered from the same issues Rails code does.
That's not to say that I dislike Ruby. I love Ruby. But it doesn't strike me as a good thing that maintainable Ruby code means avoiding a lot of the cool stuff.
Writing maintainable code in most languages involves knowing exactly when to apply the "cool stuff". Ruby meta-programming can be incredibly powerful - e.g. I'm working on a project at the moment where we dynamically build a very large, complex REST API by introspecting the ORM models augmented with data about access control etc. The entire web API is maybe a tenth of the code it'd have been if we wrote all the code manually, and that proportion is steadily dropping as we add more models.
But the key there is that the meta-programming involved is very, very limited for a very high payback. As it is, it raises the barrier for new hires, and we want to make sure not to raise it higher.
And most of the rest of the code is taking an increasingly functional approach.
I tend to think there is "too much magic" in a lot of Ruby projects, so I try to be very careful and limit it to places where it so drastically cuts down on the amount of code that we can justify quite a lot of explanation and still come out far ahead. But when everything lines up, the benefits can be dramatic.
> That's not to say that I dislike Ruby. I love Ruby. But it doesn't strike me as a good thing that maintainable Ruby code means avoiding a lot of the cool stuff.
Ruby is in that regard a lot like C or Perl. It gives you lots of power and options to solve a problem your way but that doesn't mean you should always do so. Rails has used those options extensively and that explains a lot of the criticism you hear about it.
This also means Ruby is not for everybody but what language is? IMO you can write unmaintainable code in any language and there is none that makes that hard enough, so I'm firmly in the camp that I'd rather have the full power at my fingertips when I need it than being restricted.
Although with the renaissance of functional programming, I would also love to discover a functional language that felt as fresh, powerful, and elegant as Ruby was for OO programming.
If these programmers spent half as much time coding as they did bitching about their and others' programming environments, I think they'd get a lot more work done. Not to mention, be happier with life.
Honestly, I think part of maturing into a senior software engineer (and later, a good manager) is accepting that while things aren't perfect, they're plenty good enough to solve your problems and make money.
I'm not saying there aren't occasionally valid complaints, and sometimes these new tools (Docker, et al) are really good, but the sorts of developers I'm describing here always think the grass is greener when it's programmed in another language. No, of course they aren't factoring in the months it takes to convert over for a 1% productivity "boost". All this grumbling does is hasten your language and tools dropping out of fashion and never improving.
If you want things to get better, contribute to your language and its tooling. If you don't, just switch out and use something else. Can we skip the swan song each time you decide something isn't good enough for you?
I think this attitude results in people only ever reaching local maxima in their development environments.
Adopting something like Haskell instead of Ruby isn't just an exercise for magpies who like new shiny things (In fact, Haskell is older than Ruby, and many concepts in functional programming are far older than most OO design patterns). It's embracing a different way to think about programming which can have fundamental improvements to the maintainability of the code you write, rather than just cursory ones.
No one is saying that a business problem can't be solved, or money can't be made, using Ruby.
Sure, rewriting a project from one language to another is very expensive and usually not worth it, but does that mean that we shouldn't explore other tools at all? At some point, a new project will begin, and it'll be valuable to have a more robust decision about which tools to use than just "Rails worked ok for us last time."
I'm not saying you can't ever move on, just do it with dignity.
That is to say, picking products mature enough for production use, and executing your migration without feeling the need to justify it to the world by shitting all over your old environment.
Age has little to do with it. FWIW, Haskell is trendy right now. Probably for good reason. My issue isn't with the direction, it's the method.
When you switch to something different, you see that a bunch of your problems and annoyances simply went away. You may not see yet the new problems and annoyances that you acquired. You're in the honeymoon phase.
With time, you will see the problems with the new language/tool/environment. Is it better than what you had before? Maybe, maybe not. But you are not in a position to accurately evaluate it while you're in the honeymoon phase.
> Honestly, I think part of maturing into a senior software engineer (and later, a good manager) is accepting that while things aren't perfect, they're plenty good enough to solve your problems and make money.
I don't think this at all. I think part of maturing into a senior dev is having the experience to know what makes you happy, what makes you unhappy, and still make money.
For many the journey matters, and for the more passionate, "good enough" is a give-up mentality. And it's a good thing you don't have to keep these thoughts to yourself...you should feel free to share.
Happiness... Passion... these terms don't mean anything when it comes to maturity as a programmer.
Maturity as a programmer means being able to identify and do what it takes to get things done. Whether a developer is "happy", or "passionate" only begins to matter when they're doing what needs being done.
A true sign of maturity is when a programmer is faced with a workload that they are deeply unhappy about, but still get it done, because it needs to be done. The corresponding sign of mature leadership is acknowledging that developer's unhappiness and giving them extra money, work they enjoy, and/or time off after the crap work is done.
I guess we're just maturing in different ways. Your definition sounds like a big company doing less-rewarding work and middle management. For me, it was realizing you don't have to move towards management or dealing with lots of "crap work" (can't avoid it all of course). Also with maturity comes the realization that things that need to be done and being happy while doing it don't have to be mutually exclusive.
If you were tasked with maintaining a 10 year old codebase, your happiness really doesn't matter. If you don't find code maintenance enjoyable work, you're not going to enjoy the work. Sure, you might be able to make a game of it - to make it enjoyable - but that has no real bearing on your maturity as a programmer, just your personality.
That some programmers truly enjoy their work is a gift that they are lucky to have; not a sign of maturity.
> If you were tasked with maintaining a 10 year old codebase
Then I quit my job. As a quality senior you have the leverage to pick where and how you work. Otherwise, you may be thinking of just a tenured junior or someone who was given a title of "senior". This is why I said earlier "part of maturing into a senior dev is having the experience to know what makes you happy"...not just doing what someone tells you.
Heh. I actually enjoy maintaining old code, as long as that means I can improve it while adding new functionality.
If it means "no, we can't change that method with a 200+ cyclomatic complexity because the NY office wrote it and they would be upset about it" then yeah, that would suck. (Real story in a recent contract with a bank.)
> A true sign of maturity is when a programmer is faced with a workload that they are deeply unhappy about, but still get it done, because it needs to be done.
I’d like to pause the ruby discussion here to point out this is a deeply unhealthy viewpoint—it’s your own funeral if you knowingly go to a job that makes you “deeply unhappy” every day. That’s not maturity, that’s self harm. Maturity is responsibility, including to yourself.
Hence the directly following comment about mature management.
Leaving if management is not mature is a perfectly reasonable thing to do; but not doing the work you've been given before you leave is irresponsible at best, petulant at worst.
A personal anecdote: I've been pushed into QA for a stretch, because we needed it to be done, and I had the prior experience. I hate doing QA work. After the crunch was done, I took a break, then went back to the work I wanted to be doing. And yes, perhaps this denotes a lack of hubris, but I consider that to be a mark of maturity as an employee, both in me and in my manager.
For any given problem, I usually describe <my current favorite language> as being less crappy than everything else that I've tried. Every language has disappointed me when I try to do things those languages were never designed to do. Calling a spork a crappy spoon and/or fork ignores why one embraced the spork in the first place. Embrace the spork.
Fact is, they do spend a lot of time coding, but most of the code they write is either boilerplate, countermeasures to the language's shortcomings, or tests to keep the language from biting them.
At some point (and if the codebase is not that good to begin with) one might conclude that the language is being hostile. A bit excessive maybe, but understandable.
This happens with every language, especially when maintaining 10 year old projects. Even a 10 year old project written by middle-of-the-bell-curve programmers in Haskell would be inducing the same amount of ire. "Who writes types this way?" "Did they even think about refactoring the existing types when bolting this other crap on?"
It's the maintenance that's driving this guy nuts, not the language itself. Haskell just happens to be his side project, which will, of course, look cleaner. I felt the exact same way when moving from Perl to Python.
> It’s hard to reason about code when it does something entirely different depending on what code has executed in the runtime before it.
> What we really need are more explicit guarantees on every line of code that we write [...] than what Ruby can provide
Maintenance has a cost in any language, but in some cases you also have to make up for the lack of guarantees the platform gives you. Tooling, as you said, can help, but substantial effort is required to develop and maintain, say, a (production-grade) static type checker. You can't just tell $random_ruby_dev to "help improve tooling" if there is no tooling to begin with.
Perl is also user hostile. So is Haskell. Many languages are not, and people don't typically build and maintain software in them for a reason—it's expensive and slow compared to, say, Java. What's your point?
> If these programmers spent half as much time coding as they did bitching about their and others' programming environments, I think they'd get a lot more work done. Not to mention, be happier with life.
Without this "bitching about their and others' programming environments" we'd still be using punch cards. The author indeed does not contribute anything "new", but it's still pretty well written and to readers who are at a similar stage in their education/maturing in a more modern time it does a pretty great job of introducing concepts the typical ruby-only developer might not be familiar with.
I could have written a very similar article 15 years ago from the perspective of a black-belt Perl developer who found Lisp and realized that his code already was Lisp. Just in Perl. Would it help another 25-year-old version of me today? No, because he'd probably be proficient in Ruby and not in Perl. Nobody under 35 understands rants about LWP or Moose anymore, and thus doesn't even have a chance of getting the point without lengthy research.
You can skip it if you acknowledge the tradeoffs, including the difficulties of hiring for a language people dislike, for better or worse.
Personally I would never work with other people's Rails again without seeing the code beforehand. Rails outfits pay nowhere near enough to deal with method_missing and Rails magic.
It’s a shame that people feel this way about Ruby. The thing about Ruby and scripting languages like JavaScript is that their nature makes them very easy to dig into. That’s why you see these scripting languages being used everywhere. The same problems mentioned here plague JavaScript, yet it’s the most used language on the web.
I’ve personally built Rails apps that have been running for the last 5 years, with clients using them to process millions of USD, and I rarely have to touch them. And even now when I have to make changes, it’s easy to fix and patch up because it was built right.
Most founders who don’t understand development “need things done yesterday”. If your business doesn’t take technical debt into account, you are going to skimp on things and cut corners. It’s not Ruby’s fault. It’s the business owner’s job to understand development and the process of building resilient, long-lasting software, and not put developers into impossible timeframes.
From my experience business and marketing people who “need it now” don’t know what they are doing business wise too. Businesses take years to build. Nothing is ever needed “right now” if business is well planned and well executed. There is always enough time, and if there really isn’t technical debt is accounted for and fixed quickly.
That said it’s also important that technical leadership constantly inform and communicate with business end regarding development time and the concept of technical debt.
Don’t even get me started on business people who “need it now” based on “projections”, and then forget about everything they asked for the next day. Or “I need it now” “make it happen” to stroke their ego.
When you find yourself blaming the tool look at yourself as an individual and look at your team.
> The same problems mentioned here plague JavaScript yet it’s the most used language on the web.
JS achieved popularity because it was the ONLY choice in its domain. That's a rarity in PLs, and can't be extrapolated to any other language that I can think of offhand.
Sure you can write a Windows application in anything else, but be prepared for a 2nd class experience, writing FFI bindings on your own for the Windows API, without access to the tools provided by Visual Studio.
I'm not familiar with the older examples, but only feature phones from the first list are as tightly coupled as the Web is with JavaScript. On all other platforms you can either run native code, or code compiled for the platform's default VM, or both.
Then by that way of thinking the Web is not tightly coupled with JavaScript, because you can use JavaScript as a target language on a compiler backend.
The alternatives to the languages I listed don't enjoy the same support for 100% of platform APIs, and the respective SDK tooling.
Going outside the ecosystem means manually writing FFI bindings, not being able to use the GUI designers if available, and lacking debugging and profiling support at the same level as the platform tools.
Node is "Stockholm syndrome". Since you have to use JS for the web if you want to reduce code duplication and the number of languages you use you have to bring it on the backend.
I think Javascript is not as fundamentally unsound as, say, PHP. You can see that it was designed by people who had a clue about how languages are supposed to work.
The big problem with Javascript is that conceptually it's closer to a functional language than C or Java, yet it opted for a C-style syntax with a bunch of hacks (semicolon insertion being arguably the biggest) so I find that its syntax doesn't really match the way it's used and you end up with rather clumsy looking code even for very idiomatic constructs. Not to mention the super weird type conversions.
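To make the "super weird type conversions" concrete, here are a few standard coercion quirks that bite in practice (runnable in any JS engine):

```javascript
// + prefers string concatenation when either operand is a string,
// while - always coerces both operands to numbers.
console.log("5" + 1);   // "51"  (number coerced to string)
console.log("5" - 1);   // 4     (string coerced to number)

// Arrays and objects are coerced via toString(), with odd results.
console.log([] + []);   // ""                (both become empty strings)
console.log([] + {});   // "[object Object]"

// Loose equality (==) applies its own coercion rules,
// and is not even transitive.
console.log(0 == "");   // true
console.log(0 == "0");  // true
console.log("" == "0"); // false
```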
But that's rather subjective. IMO the biggest problem is simply the lack of a comprehensive standard library even for basic things. I'm not really a webdev, but I have to maintain a web interface for one of our products; it uses jQuery, underscore.js and d3.js for various things. There's significant overlap between these libraries when it comes to standard algorithms and DOM manipulation, so you have to learn two or three ways to do the same thing depending on the situation. I shouldn't need external dependencies to have access to underscore.js's "map" or jQuery selectors.
Coding in JS feels like coding in C if it was standardized without stdio.h and string.h. Have fun re-implementing strcat every time you need it.
I think a big standard library is a tradeoff question, but also, the standard available functionality is something I really like about Node.
It's got map, filter, reduce, just like all modern browsers, and it's also got nice and easy builtins for JSON parsing/dumping, HTTP serving, regular expressions, etc.
And still the stdlib isn't bloated with lots of weird stuff, it's kind of nicely minimal. Not perfect at all, but in a pragmatic sense it's often very satisfactory.
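For example, the everyday tasks mentioned above need no third-party modules at all; a trivial sketch using only builtins:

```javascript
// JSON parsing/serialization is built in.
const user = JSON.parse('{"name": "Ada", "visits": [1, 2, 3]}');

// So are map/filter/reduce on arrays...
const total = user.visits.reduce((sum, n) => sum + n, 0);

// ...and regular expressions.
const slug = user.name.toLowerCase().replace(/[^a-z0-9]+/g, "-");

console.log(total, slug); // 6 "ada"
```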
My real favorite language is Haskell, and that language is also both loved and hated, and one of the problems with it is the standard library. There's no regexps, no HTTP serving, no JSON, hell, people even import external dependencies to get convenient records.
I'm kind of cynical and picky but tell me a language and I will give you a litany of horrible problems with it—really.
Go's type system is stupid; Rust's compiler is slow and it's hard to learn; C is unsafe and has undefined behavior; C++ is a mess; Ruby is full of monkey patching; OCaml's syntax is idiosyncratic; Haskell has problems with laziness, records, the proliferation of language extensions, an "I'm very smart" community, etc.; Smalltalk is Unix-hostile and insular; Common Lisp is too mutable; Scheme is too fragmented; and so on and so on and so on.
So I think of choosing a programming language as answering the question "Which horrible nightmare seems least awful for the particular thing I want to do right now?" and JavaScript is, for me, quite often a reasonable answer.
> I shouldn't need external dependencies to have access to underscore.js's "map" or JQuery selectors.
It's been a while since you did. document.querySelector and .querySelectorAll are everything about jQuery that's worth having, and every browser has implemented them for years. Arrays have a map method that does what you expect it should, although do be aware of the odd calling convention. Objects don't map natively, but their keys do, which I very rarely find fails to suffice. (And for those things that really can't be expressed in stdlib without considerable reinvention, Lodash is a strictly better Underscore.)
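For anyone wondering, the "odd calling convention" is that map hands the callback three arguments (element, index, array), which is exactly how the classic parseInt trap happens:

```javascript
// parseInt takes an optional second argument (the radix), so map's
// index argument silently becomes the radix:
const trapped = ["10", "10", "10"].map(parseInt);
console.log(trapped); // [10, NaN, 2]
//   parseInt("10", 0) -> 10, parseInt("10", 1) -> NaN, parseInt("10", 2) -> 2

// Wrapping the callback pins down the arguments you actually want:
const fixed = ["10", "10", "10"].map(s => parseInt(s, 10));
console.log(fixed);   // [10, 10, 10]

// And mapping an object's values via its keys, as described above:
const prices = { apple: 1, pear: 2 };
const doubled = Object.keys(prices).map(k => prices[k] * 2);
console.log(doubled); // [2, 4]
```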
I explicitly mentioned that I wasn't a webdev, I never pretended I was some kind of authority about JS.
I maintain my point though: so far most sizeable JS projects I've seen depend on jQuery for one reason or another, so clearly even competent JS coders feel the need to supplement the functionality of the standard library. As for the built-in map, the fact that you can't use it on objects is the reason I ended up requiring underscore.js (along with the .find and .uniq methods, and probably a few others).
I could also have mentioned how hacky and messy it becomes if you decide to split your JS project in multiple files. Or the hack to create "modules", which is at the same time very clever and pretty terrifying.
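For context, I assume the "very clever and pretty terrifying" module hack refers to the pre-ES2015 IIFE pattern, which fakes private state with a closure:

```javascript
// An immediately-invoked function expression whose local variables
// act as private module state.
const counter = (function () {
  let count = 0; // invisible outside the closure

  return {
    increment() { return ++count; },
    current()   { return count; }
  };
})();

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count);     // undefined, the state stays private
```

ES2015 modules (import/export) have since made this mostly unnecessary, but a lot of code in the wild still looks like this.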
But you're also conflating server-side and client-side (jQuery) Javascript.
Client-side applications have baggage (infinite deploy targets) that you don't have with server-side applications (one deploy target), so the comparison is disingenuous.
I see your point but that's not really what I meant. I wanted to say that server-side JS is a direct consequence of client-side JS.
If Javascript had stopped being a thing in browsers do you really think Node.js would've gained significant traction despite the numerous alternatives?
As far as I can tell the main selling point of Node is that you can use the same technology on both ends and even share some code. If you remove this from consideration then Node competes against Python, Ruby, Perl, PHP... And it's not clear how it's better than these.
Is it clear that Python, Ruby, Perl or PHP are better than Node with JavaScript? I don't think so. They're all slightly idiosyncratic dynamically typed scripting languages with their own sets of quirks.
BTW, the main selling point of Node over those languages was that the runtime was faster and more capable at multiplexing I/O in server code.
I don't think you can find the stuff d3.js does in many standard libraries. The things in jQuery and underscore have gradually entered the JS standard lib, I think your information is outdated.
> Coding in JS feels like coding in C if it was standardized without stdio.h and string.h. Have fun re-implementing strcat every time you need it.
That's a really bad example; stdio.h and string.h are notoriously horrible. Should I use strcpy? That can cause buffer overflows. Maybe I should use strncpy then? Oops, that's a buffer copy function, not a string function (it can result in non-strings)[1]. What about strcat and strncat? Well, strncat actually appends a null character, so you have to keep that in mind. How do I even use gets, getchar, fscanf properly? Here's a compilation of coding standards and pitfalls, many of which involve the standard library: https://www.securecoding.cert.org/confluence/display/c/SEI+C...
Quick, what's the syntax for updating a hash table in Scheme?
"Well, it depends on if you mean SRFI-44 maps, SRFI-69 hash tables, R6RS hash tables, Racket hash tables, MIT hash tables, Scheme-48 hash tables, or . . ."
I can see the value in minimising the number of languages you need, and you're already constrained by the client, but I take your point. Node is an abomination; no one would choose it freely.
But just because it's easy doesn't mean it actually happens.
My time is pretty much 50% Ruby, 50% JavaScript. "stdlib can be patched to do random stuff" is a complaint I've often heard about Ruby, but in practice, I've encountered it precisely once in the last five years (extlib breaking with recent Ruby because of a to_h conflict, IIRC). I've lost way, way more time than that to JS idiosyncrasies.
> "stdlib can be patched to do random stuff" is a complaint I've often heard about Ruby, but in practice, I've encountered it precisely once in the last five years
We've had something like 20 of those in the last year alone; 70% of them come from Rails, and some have been quite severe. Maybe it's tied to what we do, which is definitely not a simple app. Ruby is not entirely the right tool for the job, but it's definitely not obviously the wrong one either. With years of business logic, tuning and bug fixing, there are things you should never do[0]. But even coming to terms with what Ruby is and what its limitations are, after all these years in the trenches, doesn't mean we can with absolute certainty avoid getting hit by structural damage coming from idiomatic flaws, which we have identified through proper root cause analysis to be endemic to the Ruby way (which includes a complete disregard for backwards compatibility, even when it can trivially be maintained[1]).
Rails is basically its own world, and libraries that are written predominantly to be used with Rails tend to be full of anti-patterns, in my experience.
Avoid Rails and Ruby becomes a far more pleasant place. Certainly not flawless, but you avoid a lot of the pain.
> JavaScript yet it’s the most used language on the web.
It's the ONLY language on the web. If there were alternatives with a useful standard library and a way to define functions and objects that programmers of other languages are used to then the alternative would be more popular.
I don't really see how this would be much different from language to language. The being 'sick of' part, that is, not so much the specific irritations. Every language has its issues, and from all the languages I have tried I have found Ruby to be the most pleasant to work with, but I am sure this differs from person to person.
One of the struggles that the writer seems to have is that programming applications is not science, but he seems to think it is. DHH has a great keynote on this phenomenon: "Writing Software by David Heinemeier Hansson"
As mentioned in the comments of the article, you will run into issues with any programming language. One of the tricks is to not hold on so tightly to 'rules', like (taken from the article) 'break functionality into lots of small objects'. This is horrible advice if applied to all situations; you should only do it when it makes sense. I see so many programmers breaking everything up into useless classes and methods/functions/whatever just because someone wrote a blog post about doing so. This just makes complicated code look simple at a glance, but when you want to find out what the code really does, you have to jump up and down through files looking at what each function does, open split screens between abstract classes that have some dedicated piece of code in a class that had to exist only to satisfy the 'break things up' rule, etc.
Long story short: just relax, write code (in whatever language you like, for whatever reason), go home and enjoy your family, friends and hobbies.
True, however some languages have much less of a 'sick part' than others. It is just unfortunate that we humans tend to pick up the worst of everything. There is some hope though: in recent years there have been many projects trying to provide a better programming experience and safer, faster results. Rust and ReasonML, for example.
I have yet to find a better programming experience than Ruby myself, but Rust looks fine for systems programming, etc. I don't understand however why people invest so much effort in Javascript. The amount of tooling around it just baffles me, but again, to each his own.
Ha, I can understand how you're baffled. As a primarily-front-end dev working with JS most of the time, I'd say the reason why it's worth it is that there's currently just no other language that runs in the front-end, or that is as ubiquitous a scripting language.
Much as my dislike of JS grows as I venture further into other programming ecosystems, I still love the ability to muck about with any web page I come across. I love being able to use the Chrome/Firefox devtools for debugging, and haven't found anything quite as nice outside of the browser. I love being able to package a bunch of js and a minimal html file, and just dump it anywhere that provides basic hosting.
It only barely offsets the many problems I have with JS, but because my day-job is mostly front-end, I need to deal with these problems anyways.
But man I'm looking forward to a situation where I could use <preferred language> as a drop-in replacement for javascript.
That might be more of a chicken-egg situation. One might argue that you could take even a hundredth of the effort that has been put into Javascript and put it towards an alternative and you'd come a long way.
Also, I am not against Javascript at all. I just don't understand the role it has been given on the back-end. There are better languages and on the other hand I don't agree with the amount of engineering that goes into front-end applications that are built in things like React. It seems silly to me to rebuild standard functionality of your browser in a framework in order to gain some interactivity that was already there to begin with through plain-old-javascript. Rather than invest all that time and effort into React, Redux, Redux-sagas, Ember, Vue, etc, etc, etc (there are a whole bunch of etc) people would have spent their time a lot more productively if they'd help out with making web components happen properly.
Yes! This, so much. I remember a coworker who had to make an interface or abstract class for everything in TypeScript. Just because TypeScript provides those features doesn't mean they should be used everywhere.
Saying that all languages have issues is definitely painting things with a broad brush. Why would I want to use an unmaintainable, slow language when I can use a fast, maintainable one?
I find Ruby to be a very maintainable language. I use Ruby with Ruby on Rails which is a great framework to create web applications and the like. I would not use Ruby if I were in the system programming space as it is too slow and not strict enough. Every purpose can have a different language that is great for it, but don't forget that every programmer also has a language that is good for him. I, for example, will never work with PHP again because I find it ugly and I don't get happy working with it. Rust looks great though, when I find a purpose for it I might give it a go.
Ruby's slowness is often mentioned, but Ruby is quick enough for many situations. I am very productive in Ruby and servers are cheaper than developers so for many purposes it works out just fine. I have tried Phoenix (Elixir) and I found it too verbose and complex to work with. I am sure though that others love it, so to each his own.
Saying that all languages have some sort of idiomatic issues, therefore there's no point complaining about language X, is a fallacy that attempts to completely sidestep the issue at hand. You have to admit that although they may all be Turing complete, such languages are not pragmatically equivalent, and some have significant pain points, possibly more impactful than others. Otherwise we'd all code in INTERCAL and be all the merrier.
I once wrote (by hand, over 3 or 4 pages) a program to add (I think, been >20 years) two integers using a Turing machine. I can definitely attest that not all languages are equivalent.
This statement is a platitude.
A: "Kim Jong-un is bad".
B: "If you haven't discovered issues with your government then you haven't lived there long enough".
> so many programmers breaking up everything in useless classes and methods/functions/whatever just because someone wrote a blog post about doing so. This just makes complicated code look simple at a glance
Thank you
I've seen people create classes containing only a string or a list/array of other (existing) objects, it's not even funny anymore.
> I've seen people create classes containing only a string
That can be a good way to improve type-checking in code that would otherwise pass around bare strings. It reduces the chances that e.g. a plain text string is confused with an HTML string.
If your language supports it, an opaque type alias would be more explicit.
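A sketch of what that could look like in TypeScript, using the common "branded type" idiom; the names here (HtmlString, escapeHtml) are purely illustrative:

```typescript
// A "branded" (opaque) string type: structurally still a string at
// runtime, but the phantom brand stops plain strings from being
// passed where sanitized HTML is expected.
type HtmlString = string & { readonly __brand: "html" };

// The only sanctioned way to produce an HtmlString:
function escapeHtml(raw: string): HtmlString {
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;") as HtmlString;
}

function render(html: HtmlString): string {
  return `<div>${html}</div>`;
}

render(escapeHtml("<b>hi</b>"));  // OK
// render("<b>hi</b>");           // compile error: string is not HtmlString
```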
This is one of the biggest reasons for code becoming unmaintainable. Many people got mad at DHH for speaking ill of TDD, but he is not alone in pointing fingers at blindly restructuring code for the sake of (unit) testing as mentioned in the article.
I get why people would like to pretend that everything is made up of beautiful uncomplicated small pieces, but that is not how life works. At some point you have to understand your tax returns, mortgage contracts and communications with incoherent external APIs. Making me jump to section 4.1.2 and from there to section 7.2 in order to understand the parties mentioned under 1.1a are subject to the rules set out in Appendix A section 42.1 does not make the contract easier to understand. So why would code that feels like a legal document be easier to understand?
That's a really good example of the problems of indirection. I'll use it in future if that's ok. The thing that drives me mad about the promoters of decoupling is that they rarely talk about the downsides. The truth is that every time you decouple something there is a cost in complexity. One individual step to decouple something may seem small but the cumulative effect of many indirections in code can make it extremely difficult to navigate and understand. So when people decouple something they should understand this and make sure that the benefit outweighs the cost (which of course it does in some cases). The benefit of being able to test your code at a micro level rarely outweighs the cost.
Thank you, yes please do. You nicely explained what I feel about this 'advice' when it is formulated compulsively like in the article. You made me think 'yes, exactly that' with regards to some people who seem to ignore the downsides of decoupling. Thank you for that, I'll refer to your comment in the future. Nice to come together like that.
> I don't really see how this would be much more different from language to language. The being 'sick of' part, not so much the specific irritations. Every languages has its issues and from all the languages I have tried I have found Ruby to be the most pleasant to work with, but I am sure this is different from person to person.
And yet I never see people getting sick of functional programming after a few years. People either try it and dislike it immediately, or they switch and don't go back.
There really are better and worse languages, better and worse ways to write software.
I've a love/hate relationship with Ruby. I think the language itself is one of the best incarnations of modern very-high-level programming: the way the OOP, imperative and functional primitives are put together is great. On the other hand, I don't like the programming culture that the Ruby community formed, which is on average not super focused on stability, quality, documentation, essentiality, or simple code. At the same time, coming from the Tcl interpreter, I was also not super happy in the past with the Ruby C implementation: once I rewrote certain long-running tasks from Tcl to Ruby, memory usage exploded and performance was no longer deterministic, regardless of the fact that the two languages were more or less at a similar level of abstraction and speed. So I love you, Ruby, but it's hard to use you.
Ruby is the only programming language I've ever used where I didn't feel like I was fighting it, or struggling to express what I wanted. It was easy to express complex behaviours and data structures without reams of syntactic or structural noise.
But yes – despite that, there was a lot of shoddy and poorly performing code out there. The ecosystem never saw the kind of massive investment that JS (for example) did, so performance was always lacklustre, and I'd particularly love optional typing.
Still, there are other options now for specific use-cases, which is good.
I agree about Crystal, the way it dispenses with metaprogramming in favor of macros is simple and elegant and optional type restrictions gives me most of the control I want.
Ruby is "too magical" for its own sake. And then people abuse their right to use it, things like overriding class methods, etc (the syntax doesn't help in the least and it has some weird quirks when compared to python)
I'd take a smoke test and some high-level functionality tests over the over-testing preached by the TDD "gurus" or any crap spouted by Uncle Bob. 100% code coverage is a myth; your code can still fail spectacularly with good code coverage, and guess what, most of the software in production today was not TDD'd, let alone unit tested.
People sell TDD like an OCD-inducing religion instead of something that might be a good idea in some specific cases.
Magic is extremely useful in the right place (e.g. building ORM or admin frameworks) - these can save you from writing a ridiculous amount of code.
Unfortunately since it's an 'advanced' feature lots of programmers want to shove it in places where it not only isn't necessary, but is actively harmful.
Python has all the same features and allows you to write code that is equally horrible (or powerful), but it benefits from a cultural bias in favor of simplicity.
That said, I've still seen unnecessary magic code written by people looking to prove that they are no longer "intermediate developers".
>People sell TDD like a OCD inducing religion instead of something that might be a good idea in some specific cases
I think TDD with integration testing works in almost all cases, but unit tests fail or work poorly in about 85% of cases. Unfortunately, unit test driven development is what the zealots preach.
> Python has all the same features and allows you to write code that is equally horrible (or powerful), but it benefits from a cultural bias in favor of simplicity.
It's not a cultural bias. It's a critical difference in the language designs.
In python, monkeypatching is scoped to a module.
In Ruby, monkeypatching is global to the execution environment.
So in Python, you can look at the source code for a module in isolation and deterministically reason about what it does.
In Ruby, you can't. Because you can't know what the execution environment will be.
IMO, it's the main reason why Ruby projects become harder to manage as they grow. Somewhere, someone is monkeypatching, and reasoning about the code becomes harder and less local. I spent 2 years with Ruby and will never use it again if I can help it.
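For what it's worth, the same load-order hazard is easy to reproduce in JavaScript, which also allows patching shared prototypes globally ("libA" and "libB" below are hypothetical):

```javascript
// "libA" patches the shared Array prototype...
Array.prototype.last = function () {
  return this[this.length - 1];
};

// ...and "libB", loaded later, silently overwrites it with different
// semantics (returning the last n elements as an array).
Array.prototype.last = function (n = 1) {
  return this.slice(-n);
};

// Code written against libA's version now misbehaves everywhere,
// and nothing in this file hints at why:
console.log([1, 2, 3].last()); // [3], not 3: behavior depends on load order
```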
Monkey patching can still make horrendously unreadable python code. I agree that it's better that its scope is localized, but I still think that it's more important that monkey patching is used sparingly, and that requires a cultural bias against it.
I like to think of it as the difference between magical and mechanical.
Imagine you need to know what time it is, and you're offered two options to find out.
One is to recite an incantation, a magic word, and the current time will appear in the air in front of you.
The other option you are offered is a clock or a watch. With this option, anytime you need to know the time you can simply look at the clock face and simply and immediately know, but when peeling back the surface, you can see an intricate set of gears all working together to keep track of the time.
On the surface, both of these options are equally easy to use and useful to find out the current time, but the clock will be far more fixable and extensible.
IMO we should strive to make our frameworks "mechanical" like the clock rather than magical.
I think there is a kind of Goldilocks syndrome that many programmers fall into. The opposite of "too magical" is having to sift through 48 custom implementations in StackOverflow and 192 npm packages to answer questions like "How do I deep clone a POJO?" I hear a lot of developers saying "Ruby/Rails is too constricted". At the same time, I hear a lot of developers saying "Javascript has no clear direction". Well, which bowl of oatmeal will you choose?
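The deep-clone question is a good example because every common answer carries its own caveat; a quick sketch (structuredClone assumes Node 17+ or a modern browser):

```javascript
const original = { when: new Date(0), tags: ["a", "b"] };

// Spread: shallow only. Nested objects are shared, not copied.
const shallow = { ...original };
shallow.tags.push("c");          // also mutates original.tags

// JSON round-trip: deep, but lossy. Dates become strings;
// functions and undefined are dropped entirely.
const viaJson = JSON.parse(JSON.stringify(original));
// viaJson.when is now a string, not a Date

// structuredClone: deep and Date-preserving, but still can't clone
// functions, and is relatively recent.
const deep = structuredClone(original);
deep.tags.push("d");             // original.tags is unaffected
```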
I'm curious as to what you think is "magical" in Ruby. If anything, I've found it to be reasonably un-magical as a language. Rails is certainly "magical", but that's a different thing.
method_missing is a prime example. The case of a variable indicating scope and optional brackets are both less magical but still magical in their own right.
But `method_missing` isn't magical, is it? It's entirely logical, well specified, and obviously derived from its Smalltalk message-passing history.
'Missing' brackets are merely a syntactic choice, though I will grant you that variable case is quite weird (though in that sense, I suppose Go is magical too :))
This is all true about Ruby: the things that make it feel so powerful come at a big cost. But of course what the best solution is depends on what you are trying to achieve.
I'm curious if the author pursued learning Haskell and what he thinks of it now. Side effects are inevitable in any non-trivial application that interacts with the real world. They may only happen outside the runtime, but Haskell's design makes allowances for the perfection of its type system when it comes to interacting with other systems. Ultimately, the extent and impact of side effects have everything to do with the design of the program, and your compiler won't save a poor design. The true test of the design is years of real use and evolution and maintenance, and so I'd love to hear from people maintaining a huge years-old Haskell codebase how it compares to other languages.
Very weird doing a lot of JS work after Ruby - I feel the same about Javascript, ES6, ES6++, or Typescript. All the code bases I inherit appear to be verbose, useless cruft just to overcome some limitation or to achieve some useless pattern.
While I do echo some of the sentiments of the author, I still LOVE Ruby. I use it for small-scale projects, for very quick data processing needs, or in Jupyter notebooks. I used to run a full-fledged Ruby shop a year or two ago, and I have to tell you, even today there is NO full-fledged equivalent to Rails.
Having said that, now I predominantly use Phoenix/Elixir for most new projects. And the framework is moving ahead blazing fast. It has its own pros and cons, but overall, it's been a VERY positive experience and it actually saves me a LOT of time because I'm able to find code errors at compile time.
It's almost the only alternative to Rails which seems like home if you're transitioning, but it still has its own issues. For example, they screwed up the code organization with contexts, renamed the web folder a couple of times and so on. But these are small issues and I'm amazed at the productivity I gained by using Phoenix.
I wrote a full-fledged Stripe API library in under 12 hours. Well tested and rock solid. Pattern matching is heaven, and in many ways your code is more robust. I have written libraries in Ruby too; they take more time simply because there are a lot more tests that need to be written.
I would never go as far as saying "I'm sick of Ruby", simply because it's still a great programming language if you're getting started, and also because I believe in the man behind it - Matz. He has a philosophy and believes in it. It takes enormous passion and dedication to believe in what you've created, support it over decades, and keep on improving it. I can point you to many languages that have died over the years because they lacked this passion and dedication.
Same thing goes for DHH as well. I really applaud him for patiently tolerating so many people bashing the framework he helped to create that revolutionized web development. He's also very very chill and respectful about others' opinions. [1]
Having said all this, I hope Ruby 3.0 makes the strong comeback that is much needed at the moment.
> renamed the web folder a couple of times and so on
Just to add a bit of information to this, they were all in RC or pre-releases. I was bitten by this myself, but if you choose to run RC code then you need to own that a little bit.
I don't have any idea how it looks nowadays, but Rails 1.0 wasn't that big a deal, because Rails-like application servers were already being written back in the Tcl days.
Like AOLServer, Vignette and our very own at the startup I was working on during the first .com wave.
The big deal was that many developers weren't aware of such stacks.
I think the killer feature for a lot of developers when Rails came around was the ORM. IIRC AolServer didn't have an equivalent to ActiveRecord, and never having to touch SQL or know a lot about database design was a big boon for the kind of developer who got into RoR. The rest was mostly more magical PHP.
AolServer might not have had one, but Vignette and our in-house solution surely did.
You would declare an "entity" type and all CRUD operations would be automatically generated for Informix, MS SQL Server, Sybase SQL Server, Oracle and DB 2.
With the possibility of extending with extra queries that look like entity methods.
Most ORMs I dealt with at the time were strict code generation from configuration or db introspection, so there was essentially a 'compilation step'. What AR had at the time was you extended a base class and directed it to a database and your objects magically figured everything else out. Change a table schema and refresh _only_ your browser and the new state was reflected in your application. That was a very magical in 2006 and having a fully working ORM with 2-6 lines of _code_ (not configuration) per schema+relationship was incredible.
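The "introspect at runtime instead of code generation" trick being described can be sketched in plain Ruby. The `COLUMNS` hash below is a hypothetical stand-in for what ActiveRecord reads from the live database schema; the point is that the accessor methods are defined on the fly, so "recompiling" is just re-reading the schema.

```ruby
# Simulated schema -- in real ActiveRecord this comes from introspecting
# the database tables at runtime, not from generated code.
COLUMNS = { "users" => %w[id name email] }

class Model
  def self.table_name
    name.downcase + "s" # naive pluralisation, for illustration only
  end

  # When a subclass is declared, look up "its table" and define a
  # reader/writer pair for every column found there.
  def self.inherited(subclass)
    super
    COLUMNS.fetch(subclass.table_name, []).each do |col|
      subclass.send(:attr_accessor, col)
    end
  end
end

# The whole "mapping" is this one line -- everything else is derived.
class User < Model; end

u = User.new
u.name = "Ada"
puts u.name # => "Ada"
```

Change the `COLUMNS` entry (i.e. the table schema) and the next subclass declaration picks it up, which is roughly the "refresh only your browser" experience described above.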
Everyone who has used Ruby for some time (at least one full project) will have different bad memories, as well as probably some very happy memories.
Few would dispute that Ruby is a great language for getting things done with small, expressive code. And Rails, which is what probably brought most of us to Ruby, was at its time the ideal web framework. It was able to be generations ahead of most other frameworks because of the capabilities Ruby afforded it.
However, time moves on, versions of the language, framework, and gems change. Projects grow and evolve. These are where the pains really begin. But this is not really so much about the language but about real world project lifecycles.
For me, the first real pain was performance. At some point you cross a threshold where the performance goes from "reasonably OK" to "WTF is happening?". Then you start profiling and discover that the wonderful collection and ORM operations you've been doing are extremely expensive. That's when it stops being fun.
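One common shape of that hidden cost is chained Enumerable calls, where every step allocates a full intermediate array. A small sketch of the problem and the usual lazy-evaluation fix (the numbers here are illustrative, not a benchmark):

```ruby
nums = (1..1_000_000)

# Each chained step below materialises a complete intermediate array
# before the next step runs -- cheap at 100 items, painful at millions.
eager = nums.map { |n| n * 2 }.select(&:even?).first(5)

# Enumerator::Lazy evaluates element-by-element and stops as soon as
# five results exist, never building the million-element intermediates.
lazy = nums.lazy.map { |n| n * 2 }.select(&:even?).first(5)

puts eager.inspect  # => [2, 4, 6, 8, 10]
puts (lazy == eager) # => true
```

The ORM version of the same trap (loading whole tables into memory to filter them in Ruby, N+1 queries, and so on) profiles the same way: each call looks innocent, and the cost only shows up in aggregate.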
Then, if you explore other languages, you may discover the absolute joy of Clojure (once you devote the time to think functional and appreciate Lispy syntax). Plus you get a big performance boost (as well as access to tons of Java libraries that are usually very performant). But alas, then you discover that there's no real Rails equivalent for Clojure. (I now understand that there are great options if you're willing to cobble your own together.)
Finally, you might hear of Elixir + Phoenix. Suddenly (again, if you're willing to learn and think functionally), you find joy. Performance is much better than Ruby, the Phoenix framework feels Railsy, and you get the benefit of the industrial Erlang VM underneath it. The downside, however, is that Elixir and Phoenix are young. Fortunately the community is friendly and helpful.
For me, it's not that Ruby is bad; it's just that I now realize how much better functional programming is for me personally. And when I just need something quick and dirty, Python is already installed. Ruby is like an ex-girl(or boy)friend that you still like, but who no longer fits into your life.
Yeah, I feel like everyone I worked with on that project (the original guys were long gone) who wasn't a dyed-in-the-wool Rubyist just really came to hate Ruby.
Still no types in Elixir land, nor control over effects. Dialyzer was a real disappointment: poor performance, poor error messages, lack of ecosystem support, lack of parametric polymorphism, and a failure to report obvious type errors.
Really hoping for more advances in distributed type systems in the coming years, but most folks don't really need the very specialised performance envelope that the BEAM gives, and would be better off elsewhere with a type system that caught more mistakes up-front, and provided better domain modelling tools.
My sense is that Ruby has a Perl problem. Yes, you can write very beautiful things in it, and if you follow the idioms you're generally safe. But there's way too much rope to hang yourself with, too much "magic", and no way to reliably refactor large codebases. As a language, it doesn't scale well into "programming in the large" unless you really know what you're doing, and most frankly don't.
I have a similar experience with Groovy, the "Ruby of Java".
I wanted to quickly prototype an app, so I used Groovy. It was very productive, especially since I deal with lots of JSON. Over time, the prototype has grown into a serious app, and boy, I hate refactoring it. Stupid mistakes such as method renaming can be a nightmare. Yes, I should have written more unit tests, but my excuse was that it's just a prototype.
> Yes I should have written more unit tests, but my excuse was it's just a prototype.
Never understood this point "well, you don't need types, you can just write tests for it". IMO you should never have to write tests for something a type system can find for you. Why should I waste time with something which can be found by the compiler?
I find that strict type systems are good for 'locking down' execution space and sanity checking but they're not effective at cleanly specifying higher level verification even when that's possible.
I'd rather use a combination of tests and stricter type checking. There's no sense in being a fundamentalist about either approach.
> I'd rather use a combination of tests and stricter type checking. There's no sense in being a fundamentalist about either approach.
Completely agreed. I tend to prefer pragmatic solutions over strict adherence to some methodology. As a sibling comment noted the natural end of specifying everything beforehand would be formal systems, which are usually not worth the effort. On the other hand I don't see "let's just throw everything the compiler could help us with out, because it's a bit more effort" as a sensible strategy.
This need not be the case forever. It's still an open problem, but the tooling is getting better all the time!
For now types+tests are a happy medium. No need to throw the baby out with the bathwater just because 100% correctness is still unfeasible for everyday business problems.
At least with seamless JVM interop you can gradually migrate the pieces that need to be more solid over to another language, without having to stop the world and rewrite everything.
(Protip: use Scala, it combines the expressiveness of Groovy with the safety of Java. It's often seen as Haskell-like, to come back to the article)
For me it's exactly the same.
I use Groovy to prototype small functionalities for the big Java application I'm working on, but I would never consider using it to build a full, non-toy-sized application.
I use Python and Ruby for small scripts, and I can't really see how people would ever want to use them for big applications.
The lack of static types is the problem.
For this reason I don't agree with the author when he says that OO is the problem.
While loving functional programming, I think that a well-executed application written in an OO language with static typing is not a nightmare to maintain.
You can write code impossible to maintain in Haskell and you can write pretty nice maintainable code in C#.
I probably prefer the nice balance that you can strike with F#, even with the flaws that it has. (Typeclasses, anyone?)
> Over the time, the prototype has grown to be a serious app
This is your problem, not the lack of tests. When you use Ruby or Apache Groovy to prototype something, it needs rewriting in a statically typed language as soon as it starts growing into something you want to be production quality.
Groovy's good for glue code, build scripts, and tests. Don't build actual systems in it.
>> The majority of my job consists of maintaining about a dozen legacy Rails 2 / Ruby 1.8.7 applications, written between 2008-2010, with essentially zero tests amongst them (when I started).
One can write unmaintainable code in any given language/framework. I am not a fan of OO but in this case I would not blame OO but the one who developed this messy app.
I thought the same thing. I have written a ton of Ruby: it used to be Rails, now text processing with occasional Sinatra apps. I also write a reasonable amount of Haskell.
There is no way that I am anywhere near as productive in Haskell as Ruby, but I like Haskell and for some things I very much prefer it. I would argue that it is a good thing to use two very different languages.
Ruby: the only problem I have with Ruby is that it is too easy to make a mess. And I do hate RSpec with a passion; it's a perfect example of a DSL done wrong. Otherwise, Ruby is one of the most enjoyable languages I've worked with.
Dynamic typing: an old debate, and the author contributed nothing to it.
Side effects: with enough effort and discipline, Ruby can be very much functional, and side effects are less of an annoyance.
OOP: yeah, but more specifically "what OOP has become in the past 30 years".
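The "Ruby can be very much functional" point above can be made concrete: freeze inputs, return new values, never mutate. A small sketch (the `Order` struct is hypothetical):

```ruby
# Side-effect-light Ruby: immutable values, transformations that return
# new objects instead of mutating the receiver.
Order = Struct.new(:items, keyword_init: true) do
  def with_item(item)
    # Returns a *new* Order rather than appending in place
    Order.new(items: (items + [item]).freeze)
  end

  def total
    items.sum { |i| i[:price] }
  end
end

empty = Order.new(items: [].freeze)
order = empty.with_item(price: 5).with_item(price: 7)

puts order.total      # => 12
puts empty.items.size # => 0, the original is untouched
```

Nothing in the language enforces this style, which is the "effort and discipline" part: one teammate calling `items <<` instead of `with_item` quietly reintroduces shared mutable state.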
Most Ruby apps have always been a complete pain to set up.
The idea of having a build environment and language-specific package managers for deployment, pulling in hundreds of dependencies, leads directly to dependency hell and a user-hostile setup process, and it inevitably limits the language and its apps to SaaS-type use cases.
The only major end-user-focused Ruby apps left are what, Discourse, Redmine, Jekyll? Discourse probably recognizes the complexity of its setup process and offers a Docker-only install, which just hides and postpones the complexity; Redmine tries to be available via distribution package managers; and user frustration with updating Jekyll is well known.
The people who benefit from this kind of ad hoc ecosystem of hundreds of small packages, package managers, and anything-goes culture are the completely self-interested, who jockey for influence and then move on to the next new thing. The language is left with the debt, and it's entirely the fault of the language developers.
The same thing exists in Node, and it has now been adopted by the PHP ecosystem, taking away PHP's easy-setup benefits.
There's dynamic typing, but then there's also all the extra stuff like mutable global state, gratuitous monkey patching, fad following, overindulgence in complicated implementation cleverness just to make interfaces more elegant etc etc that the Ruby/Rails community has produced and promoted.
I see the "you need to write less tests because static typing" statement thrown around a lot (including the article), but haven't seen any detailed discussion on why that would be so, could someone point me to a more in-depth look at that?
Testing attempts to pin down specific use cases to ensure that they meet certain requirements. Alas they only single out one case at a time - there could be a wide range of possible failure conditions you forgot to test. Types allow you to cut down the space of possibilities to a more manageable level to ensure that your testing can be more targeted.
To see this pushed to the extreme, and to get a glimpse of the future, check out Edwin Brady's book "Type Driven Development with Idris": https://www.manning.com/books/type-driven-development-with-i... - I don't expect this style of programming to become the norm for at least several more years, but it essentially allows you to push all behavioral specifications into the types, rendering most unit tests obsolete. Of course, I would still keep smoke and integration tests for sanity checking.
I have no links to share, but it seems obvious that a dynamically typed language will need tests to ensure that a function "behaves" correctly given incorrect types, where the statically typed language will not even allow you to run that code.
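In Ruby terms, the kind of test being described might look like this. The `add_days` helper is hypothetical; the point is that its guard clauses and the tests exercising them encode exactly the contract a static type system would enforce before the program ever ran.

```ruby
require "date"

# A helper whose argument types a compiler could check for free.
def add_days(date, n)
  raise TypeError, "expected Date"    unless date.is_a?(Date)
  raise TypeError, "expected Integer" unless n.is_a?(Integer)
  date + n
end

# Happy path works:
puts add_days(Date.new(2017, 1, 1), 3) # => 2017-01-04

# In a dynamically typed language, the bad-input behaviour needs an
# explicit runtime check and a test; statically, this call would simply
# not compile.
begin
  add_days("2017-01-01", 3)
rescue TypeError => e
  puts "rejected: #{e.message}"
end
```

Multiply this by every public function and you get the extra test volume the grandparent comments are talking about.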
I perused the author's blog and peeked at his LinkedIn profile. It looks like he continues to do Ruby development, and still calls himself a "Ruby on Rails Developer." I wonder what caused him to change his mind?
This isn't really about Ruby. It's about the other things in the title. I suppose that calling out Ruby by name was necessary to make this blog post concrete, and to make it easier to relate to. The author also mentions Haskell, but that really isn't necessary to get the point across. The comparison could have been between Python (written in an OO fashion) and ML.
You can write, 'I hate X because of dynamic typing, side effects, and object-oriented programming' for many values of X. Similarly, the reasons why the author is drawn to Haskell can be applied to a large number of other languages.
Yes and no. Ruby has a particularly dynamic, side-effecty culture, even more so than Python (where monkeypatching is less encouraged, OO is less of a focus, and, not coincidentally, unit testing is far less a source of fuss and trouble). Haskell goes in for stronger isolation of side effects than OCaml does. Any given language will be at some point on the spectrum, but Ruby and Haskell are probably the extreme ends of that spectrum as far as mainstream languages go.
- Fat models (huge number of hooks, scopes, many methods stuffed inside in the name of "domain-driven design", etc)
- Fat controllers (huge number of shared methods, hooks, shared variables and hooks, concerns, etc)
- Fat views/helpers/templates
In multiple years of working with Ruby (+/- Rails), there is one talk that changed my whole view of scaling systems, aka ways to build the majestic monolith [1]. And I believe the author of that talk would be laughing silently at this post right now, thinking, "I know what you're talking about".
Check out those slides, and check the codebase. This is not a Ruby problem. Splitting things mindlessly into microservices, trying to go "fully functional" (answer: nope, nothing in the world is pure or perfect), everything has trade-offs.
I've actually moved from ruby too (although, still doing some for contract work), but I wouldn't say the language is the problem, nor rails. I was more tired of the community, actually (not the opensource community, but the coworkers).
Ruby and Rails were awesome in the late 2000s, when most people were trying to grasp their essence and do crazy, smart things with them, always focusing on making things easier for the (dev) user.
But then, soon after the start of the 2010s, it started to get really ubiquitous, and there were a lot of people using it who weren't especially passionate about development. They made everything complicated, never bothering to make things easier, and kept talking about "good practices", following them blindly without having any clue what problems they were supposed to solve.
This is probably something that occurs naturally for any successful tool. I'm not even blaming those people. And it seems quite normal to me that after spending years among only passionate people doing cool things, we feel bored when things normalize. Time to find another community, no hard feelings.
I feel like Ruby’s sweet spot is sort of as a cleaner, more capable, more ergonomic Perl, with nice message-passing OO and a super-convenient standard library. Text processing? You bet! Command-line utilities, test harnesses for JVM-hosted APIs (using JRuby), small network services, ... in those domains I feel very productive with Ruby.
But for core business logic? Definitely not my first choice.
> But for core business logic? Definitely not my first choice.
I have an alternative perspective. Its clarity, refactorability, and testability make it a great choice for core business logic. In fact, I have yet to see a cleaner alternative. I've seen Rails apps go from monolith to Go microservices, and the clarity of what's going on and the agility for change are just gone. It becomes a nightmare to work with.
Interesting... I personally prefer C# for core business logic. OO where you need it, great libraries & interoperability, functional programming support... good stuff.
Wish I had time to look into Clojure. Excellent programming paradigm along with the interoperability, libraries, and platform support of the JVM.
Never used Ruby really, apart from some Rails programming here and there, but I completely agree: dynamic typing + side effect + OO patterns => technical debt.
On the other hand I think pure languages, like Haskell, take it too far.
My ideal language would be something like Haskell + limited side effect support(for IO) + ecosystem of something like Python or Java or Golang.
Elixir is sufficiently close. It's reasonably strongly typed under the hood, and although the spec syntax is a bit awkward, it can give you compile-time assurances of correctness in a majority of cases, if that's what you want static typing for.
There is an argument for performance, but static typing really gets the performance benefit when you have fixed size data type arrays. Elixir has powerful bit and byte manipulation in the standard library for many such operations, and if you're really looking to do mathematical transformations of arrays and matrices, you shouldn't choose elixir.
You don't have to pray. In Elixir/Erlang the motto is "let it fail". If it fails, it crashes; you have supervisors that are nearly free, and your service will resume itself non-catastrophically. You're generally free to code the happy path.
MS still gets a lot of flak, but F# is nice IMHO (as is F* for crazies like me, by the way), and it's getting better and better with open source, .NET Core, etc. For a 'normal' environment, C# also just works a lot better for teams, in my experience, than something like Ruby. It might be just taste, but I did large projects in both, and C# (also F#) lets me sleep at night, while somehow the RoR stuff always needed constant attention (lots of breakage after security-related gem updates, etc.). Nice if you have the people and the need for that kind of thing, but a lot of what we do is set-and-forget (at least for a few years), which .NET generally allows. The only thing I don't like yet about the .NET dev experience is the lack of tooling, especially on Linux. But that's rapidly getting there, and it's open source under a good license for the most part.
The thing that frustrates me about F# is it seems to be a bit of a frontier language on .NET. Theoretically you can use it anywhere you would use C#, but there's not a ton of examples or documentation out there. Googling ".NET MVC F#" returns an article from 2010 as its top result, for example.
F# has become a second class citizen on .NET with the team catching up with what the official .NET team (C# and VB.NET) is doing across all supported platforms.
Even C++ has more tooling love than what F# currently has.
Anyone that wants to be sure their code will run in whatever platform Microsoft might think of supporting next, should not focus too much on it, unless the wind changes again.
I love Ruby, and agree it's not perfect, but that's not what technical debt is. And blanket-including all "OO patterns" in your equation, while never having really used Ruby, leads me to believe you don't know what you're talking about.
I switched from Ruby to Go and could hardly be any happier. Go feels powerful and simple at the same time. I code whatever I need to code easily, instead of hunting for frameworks and evaluating dependencies (or dreading the next deployment when a dependency changes unexpectedly). The code is easy to read and maintainable due to the lack of multiple layers of abstraction and inheritance. It's refreshing.
s/Ruby/Rails/g in the article and it makes more sense. You can write pretty good Ruby code that avoids side effects and monkey-patches and is easy to test. All you need to do is abandon Rails. There are many modern Ruby projects that make this simpler; see dry-rb.org, rom-rb.org, hanamirb.org, and trailblazer.to
I'm old enough to remember sick of static typing, pointless indirection and mixed paradigm code.
And thus the centralisation/decentralisation wheel turns again because the new generation think they have discovered something new but ignore the lessons learned in the past.
If only we could fix that human tendency with a code upgrade.
No, we really have discovered something better. We oscillate but we are converging: these days all serious statically typed languages have some level of type inference, and all serious dynamically typed languages have some level of optional type checking. We end up overcorrecting each time - Ruby was an overly dynamic, unmaintainable response to the strictness of Java, and no doubt some post-Ruby languages go too far in the straitjacket direction - but at the same time languages on both sides are better than they were previously.
Ruby has optional type checking. What do you think the testing regime is about?
It's a compiler that is built for each project with types specific to the domain problem being solved.
Language type checking misses the point. The types I want to check, and the extent I want to check them, are in the spec files.
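A sketch of what "the types I want to check are in the spec files" might look like in practice, with plain assertions standing in for RSpec (the `parse_money` function is hypothetical):

```ruby
# A domain function whose contract the specs pin down to exactly the
# depth the author cares about -- the "per-project compiler" idea.
def parse_money(str)
  whole, cents = str.split(".")
  (Integer(whole) * 100) + Integer(cents || "0")
end

# Spec-style shape/type checks, enforced at test time rather than
# compile time, and no deeper than the domain requires.
raise "returns Integer"       unless parse_money("12.50").is_a?(Integer)
raise "handles missing cents" unless parse_money("12") == 1200
raise "splits on the point"   unless parse_money("12.50") == 1250
puts "specs pass"
```

The counter-argument in the sibling comments is that once these contracts live in the language itself, refactoring tools and other callers can see them too, instead of only the test runner.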
Go type inference, for example, is deeply primitive and an awful lot of Go code seems to spend its time implementing duck typing via work arounds. It looks an awful lot like dependency injection for the 2010s.
> Language type checking misses the point. The types I want to check, and the extent I want to check them, are in the spec files.
It's worth incorporating a way to express these things directly into the language proper. That way all your tools will understand them (e.g. automated refactoring will already know what needs to be updated) and you can talk about them in the language.
> Go type inference, for example, is deeply primitive and an awful lot of Go code seems to spend its time implementing duck typing via work arounds.
Yeah, Go is the exception. It's profoundly and proudly ignorant of 65 years of language design progress. If Go were the only language type system I had access to I'd think language type checking was pointless too.
Go is a terrible example of a statically typed language. Now that is an example of a bunch of people not learning from the past - ie. the entire lineage of ML-based languages!
A rich type system lets you mold and shape the types to fit nicely over your domain, effectively becoming a machine-verified DSL for your business problem. It will catch flaws in your mental model before you even begin to write tests or an implementation, and is a cheap way of sketching out your ideas and great documentation to have over the lifetime of your project. Of course you can't fit it exactly, so you need a smattering of spec tests, and probably some property-based tests for good measure to fill in the gaps.
The difference is that we know there's something better than Java, C#, or C++ this time around. We have TypeScript, Flow, Elm, PureScript, Haskell, Rust, OCaml, Idris, F*, Nim, Crystal, and more. Yes, some lessons have been forgotten that will have to be relearned, but on the whole we are improving. It's like how Andrew Wiles says[0] that it's important to be forgetful to be a good mathematician:
> It goes like this. You try one strategy on a problem. It fails. You retreat, dispirited. Later, having forgotten your bitter defeat, you try the same strategy again. Perhaps the process repeats. But eventually—again, thanks to your forgetfulness—you commit a slight error, a tiny deviation from the path you’ve tried several times. And suddenly, you succeed.
I think we're a bit better than that, but it pays to remember it the next time you see the young whippersnappers publishing an NPM package doing the same thing you did years ago and deemed to be a failure. By all means, share the war stories with them - it's important we get better at appreciating history, but do it from a position of encouragement. They may yet succeed where you failed. :)
In all fairness, this applies to any dynamic language - Especially scripting ones. Python, Ruby, PHP... It doesn't matter really, it's so easy for a project to grow out of control. I have growing insecurities when I program in these languages.
On a sort of similar note, I've been teetering between Python and Go. On one hand, clearly you can move faster in Python. On the other hand, Go is faster and I can be more sure that code that compiles and passes tests actually works.
This is part of the reason Go is gaining so much popularity. It gives you back all the power and speed of a typed, compiled language, but without the tedious parts of C like managing memory and complicated threading.
Now that .Net is cross platform, I've been switching everything over to C#. Besides the fact that it's compiled, Visual Studio is a dream to work with.
My experience with switching from static to dynamic typing (and back) is that I've always missed the other at times. The flexibility of dynamic languages is especially nice when your requirements change a lot, which is a huge factor in what I'm doing now. I probably prefer static typing generally, but I'm glad I'm primarily working with a dynamic language at the moment.
If you can’t be successful writing ruby...I don’t know what to say. I’m a manager/exec at a medium sized public company. Recently, I got to code in ruby again after a long time. It was great. It’s still my favorite language. I try to quit it, leave it for another newer, sexier language, but I just can’t. Maybe crystal, but then you can have both!
Ruby-style monkey-patching just seems like an awful idea. I cringe when I see it used in JavaScript, though it is less and less often seen in the wild now that browsers are less terrible and we tend to use babel and other transpilers instead of having to shim and polyfill around the more broken parts of the language.
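For readers who haven't seen it bite: a monkey-patch reopens a class globally, for every caller in the process. A sketch of the failure mode (the `titlecase` method is hypothetical):

```ruby
# "Library A" reopens a core class -- every String in the process
# now has this method, including inside code that never asked for it.
class String
  def titlecase
    split(" ").map(&:capitalize).join(" ")
  end
end

puts "hello there world".titlecase # => "Hello There World"

# The hazard: "library B" patches the same name. Whichever file loads
# last silently wins, with no warning at default verbosity.
class String
  def titlecase
    upcase
  end
end

puts "hello there world".titlecase # => "HELLO THERE WORLD"
```

This is why the JavaScript world moved toward transpilers and standalone polyfill modules, and why Ruby itself added refinements to scope such patches to the files that opt in.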
It’s a lot easier if you cut it out with the types everywhere, and just focus on writing functions that return values. Values! Pass around values, not weird bespoke type instances everywhere. If that’s not OO, too bad. Above all, keep it simple.