Wait, what? This is what I have to do in any language. Writing Perl? Yup, have to read my libraries. Writing Haskell? Yup, source diving is helpful! Writing Java? I have to single-step into the internals to get my programs to work right!
Understanding your libraries and their implementation is a great thing. Once you realize that library code is not a magical black box, and is instead very much like the code you are writing, you can really become a productive programmer. ("But what about abstraction! If I have to understand my libraries, they're not abstract enough!" Wrong. Abstraction prevents parts of your program from knowing about each other. You, the programmer, still have to know how each part works. Abstraction means you only have to actively think about one piece at a time -- nothing more.)
The Posterous codebase is littered with monkey patches, modifying gems to work as we need them to.
Yeah, you know that "fork me" button on Github? You click it, then you edit the code, then the bug is fixed for every single person on Earth. That's the point of free software; if there's a bug, you squish it and move on.
(I do this all the time. Does it take time to fork someone's project, fix the code a bit, and send them a pull request? Yes. But it's worth it, because now you have a robust and permanent fix... and a community!
Monkey patching is for monkeys. Collaborating is what humans do.)
What you're describing with the 'fork' button is incorrect. When you fix something on your own branch, nobody on earth but you benefits from it. For this to work, you have to contribute your work back and (the hard part) the original owners have to accept it. And that's where open source usually breaks down.
Another point of the original post is that monkey patching is the kiss of death of many Ruby and Ruby on Rails projects.
I can't think of any change I've ever made to an open source project that hasn't been accepted. This ranges from random Perl modules to Emacs and the Linux kernel. Contributing is as easy as sending an email!
The post-Rails Ruby ecosystem is definitely a high-energy place where stability is traded for rapid development. Like any community it has its idiosyncrasies. Compared to Pythonistas, Rubyists perhaps monkey patch a bit too much, are overly fond of novelty, and tend to have pretty strong yet sometimes objectively unsupportable ideas about the proper way to do testing.
All of these things have annoyed me from time to time over the past 5 years of working almost exclusively with Ruby. However the community also has its strengths which are extremely appealing for passionate programmers.
For one thing, the fact that Rails emphasizes improvement over maturity has made Rails 3 an amazingly modular framework, able to fit enterprise requirements better than 2004's DHH ever would have wanted or acknowledged the need for. Also, consider the rise of the RSpec and Cucumber frameworks despite the fact that Test::Unit and Mocha already had the bases pretty well covered. In other communities, yes, you get better maintained libraries, but you also get stagnation from having too high a bar for new ideas. In Ruby you have to deal with constant deprecation, but at least you get better chances that your entire ecosystem won't be obsoleted by the next hot language or framework.
Well, if you're asking about the ability of a language to prevent the ability of a corporate army of 1000 developers to blow its collective foot off, then no, Ruby and almost every other language will forever pale in comparison to Java.
On the other hand, if you're talking about the ability to mix, match, configure and replace components of Rails in order to meet your particular requirements, then Rails 3 shows real promise, though it's not yet officially released and a lot of the modularity needs battle-testing.
"Well, if you're asking about the ability of a language to prevent the ability of a corporate army of 1000 developers to blow it's collective foot off, then no, Ruby and almost every other language will forever pale in comparison to Java."
This is the common wisdom about java, but I don't think it's true. I have a far harder time dealing with other people's java code than I do dealing with other people's ruby code. And yes, I work in a corporate army that happens to do both java and ruby.
Libraries change over time. In java, if you realize your interface needs to change then you're fucked if someone else is using your code.
If you're using someone else's code and it doesn't do exactly what you need it to do, then in java you have to get their code, change it, compile your own custom version, and in many java shops upload it to your company's maven repo with some version like foo-lib.jshens-fork-1.0.1. In ruby you'd just monkey patch it. This has its risks, but those risks are under your control; you don't have to ask someone else for a favor to update their lib. (using other people's monkey patching is a different animal)
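To make that concrete, a monkey patch is just reopening a class a gem defines and swapping in the behavior you need. A minimal sketch (SomeGem::Client and the timeout method are made up for illustration):

# config/initializers/some_gem_patch.rb
# Hypothetical: upstream hard-codes a 5 second timeout; we need 30.
module SomeGem
  class Client
    def timeout
      30
    end
  end
end

No fork, no private build, no waiting on a maintainer; the override takes effect as soon as the file is loaded.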
Spring, need I say more? In our corporate army there are 1000's of developers with 1000 page tomes telling them how to glue their libraries together in java. Think about that for a minute. Some of the "problems" that java doesn't have are really still there. They're just pushed into things like maven, ant, and spring.
At the end of the day, bad java code makes my life much harder than bad ruby code.
On a side note, what's up with the recent spate of people complaining about rails because they used some shitty library that isn't a part of rails? Here's a hint, you should do some due diligence before using some random library.
This may well be true. I don't have any corporate Java experience.
However I do believe from my limited Java experience that it's nearly impossible for a bug to occur that the average programmer can not trace back to its source using more or less standard methods (race conditions and other concurrency nastiness notwithstanding). In Ruby on the other hand, real head scratchers can and do occur on a regular basis that require a higher level of debugging skill than some mass-educated code monkeys are capable of.
It's just a different way of doing it. In java you ctrl-b your way into method calls using your IDE. In ruby (assuming it's not C code or eval'd code) you use ruby-debug. Set a break point and start stepping. In minutes you'll stop scratching your head.
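If you haven't used ruby-debug, it's about this simple (the script and method name here are made up):

require 'rubygems'
require 'ruby-debug'

def mystery(value)
  debugger  # execution stops here; then 'step', 'next', 'p value', etc.
  value.reverse
end

mystery("head-scratcher")

Run the script, land at the breakpoint, and poke around until the head scratching stops.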
I've seen the "code monkeys" you speak of hit a brick wall with spring. Java the language simply pushes these problems into its tools. I've spent countless hours getting my java tools to play nice. I don't have that problem in ruby. There really is a law of conservation here.
No, I'm talking about the crazy bugs that happen when you do something like silently redeclare a method from deep in the framework and get no warning because Rails runs with warnings off. Or when two popular well-tested gems happen to add the same method to a core class with slightly differing implementations.
Usually once a year or so I run into one of these mind-bending bugs where something breaks in a bizarrely tangential way that simply can't happen in Java.
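A contrived sketch of that second case: two libraries each add String#to_slug with slightly different rules, and whichever is loaded last silently wins for everyone:

# somewhere inside gem_a:
class String
  def to_slug
    downcase.gsub(/\s+/, '-')
  end
end

# somewhere inside gem_b, loaded later:
class String
  def to_slug
    downcase.gsub(/[^a-z0-9]+/, '-')
  end
end

puts "Hello World!".to_slug
# => "hello-world-" (gem_a's callers expected "hello-world!")

No error, no warning; gem_a's callers just quietly get gem_b's behavior.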
yeah, that happens to me too, about once a year, and worst case I spend a few hours on it. Let's say it's 5 hours a year I spend on this sort of problem. That's nothing compared to the amount of time I spend getting maven and spring to do what I want. Hell, it's far less time than I spend getting my IDE to run my projects!
I'm confused, are you suggesting that the Rails 3 architecture is influenced by Perl and Python frameworks? Or are you just talking about modularity in general?
Because I was just talking about Rails 3 getting out in the wild and people getting to actually start using it with heavy customization to shake out some of the bugs. The re-architecting to this point is quite impressive, but it's still living in a bit of a vacuum.
Comparing the officially supported iOS SDK to 3rd party open source libraries is more than a bit disingenuous.
I wonder what 3rd party libraries they are using for the iPhone. We've had the same experience he had with ruby libraries with 3rd party iPhone libraries.
Exactly what I was thinking. He also mentioned that the code base "is littered with monkey patches". I suppose the author does not realize that monkey patches can be very costly for maintenance. Just because you can monkey patch something doesn't necessarily mean you should.
His point is that he HAS to monkey patch, but he is clearly saying that it shouldn't have to be that way, if the underlying frameworks, plugins, and gems simply worked the way they should from the beginning.
Why would they not submit bug fixes instead of monkey patching everything? If the library is supposed to work a certain way, then it is a bug and should be fixed. Bug fixes that make their way back to the main repository have a much lower maintenance cost than monkey patches.
Every time you want to write a monkey patch, you should really ask yourself if it is worth the likely future maintenance costs.
Perhaps because so many small project owners don't accept and merge in patches after people kindly submit pull requests containing proper patches. I see this in many areas of open source, not just Rails, such as ObjectiveResource, Three20, and Moonshine. I've had to wait as long as 3 or 4 months before even getting some feedback from some project admins, and that's after an email or two.
We shouldn't have to beg to get our patches committed. Some project owners are just lazy, or have a very narrow view of what they want to accept for patches. And that's fine, it's their project. But that's all the more reason why people monkey patch instead of continuing to submit patches -- which they'd need to monkey patch anyway to make the change effective immediately. After all, the projects are often still active, and do update with bug fixes, etc. And if you've only modified the tree, then you need to reapply your changes after you update. A monkey patch avoids this requirement, letting you get on to more important work.
The only way I see this as a Rails problem is that Rails makes it easy to get saddled with lots of external dependencies. Otherwise, it's a problem with how external dependencies are managed.
If it's not the number and quality of dependencies then it's the wrong type. I usually stay away from gems that provide syntactic sugar or provide an alternative to ruby / rails defaults. If I'm using 5% of some library, I'd rather write it myself and only address the 5% that I need. If it's not maintained, I stay away. Etc.
A little discipline on what gems get included would go a long way to solving the perceived problems. Gems are generally designed to solve a much more generic version of the problem you're trying to solve, and maybe not even the exact same problem; if you're messing with the innards too much, that gem is probably not a good enough fit.
Less is always better when it comes to dependencies.
"Less is always better when it comes to dependencies."
Yes but I think the point here is that the environment you choose can make it hard or easy to avoid dependencies.
I totally agree on "less is better with dependencies" but there are clearly many programmers who don't want to "reinvent the wheel" and always use libraries first. And a lot of times, the programmers willing to piece together lots of different community libraries will be a lot faster initially. They're just incurring technical debt.
If you're part of a team of these programmers, you might not actually be able to convince everyone that "less is always better" with dependencies.
So, it's not Rails that's the problem, it's the post modern software developers that would rather use a buggy, thinly supported library than try to write their own code...
One time, I went to the store and bought a bike wheel. But it didn't come with a little strip to protect the tube from the spoke nipples. So, I threw it away and built my own wheel. Actually, I first built a metal-refining plant, and then I built my own wheel.
Oh wait, I died before I built the wheel, but at least I didn't incur any 'technical debt'. If I had to buy a separate piece of rim tape, my wheel just wouldn't have been right! It may have given me years of enjoyment, but who cares? Those other wheel makers did it wrong!
(Oh, and last time I checked, 'technical debt' didn't mean, "I didn't get to write a research paper about the internal string representation." But that was a long time ago...)
I bought a wheel from a bike store without knowing anything about bike wheels or bike stores. The wheel collapsed on me 3 weeks later, I fell into traffic and a car ran over me, crushing my chest. I'm now rotting in a graveyard paying back my 'technical debt'.
Ok that's just a story. Here's an anecdote:
I spent the last 3 weeks, which I could have spent doing things far more interesting and productive, rebuilding Perl modules because so many perl programmers couldn't be fucked to write portable code and our dependencies appear to number 300 or more. (Nobody actually kept track, of course)
For example, somebody's script, somewhere, uses Array::Compare. It's a tiny module that compares arrays (length, index of different elements, etc.), which is like a first year computer science project. Version 1.18 is 500 lines (including docs+comments) and has one dependency: Carp, a core module. Version 2.01 of Array::Compare is 465 lines (including docs+comments) and requires Moose which itself has 23 other dependencies many of which are not core modules and some require compilation. If Array::Compare was part of a standard library, one that ships with the standard Perl distribution, this wouldn't have been a problem. It wouldn't use Moose unless Moose was also part of the core dist, at which point it wouldn't be a problem unless maybe I was compiling everything from scratch, which I'm not.
So I have to downrev that module and use the old version, or else introduce another new significant dependency (which did not compile successfully on my first couple attempts) or else track down the script and eliminate the dependency and check to see if the author is in punching distance.
That's just one example. There were also mismatched compiler flags, modules that didn't install on some days because tests required connections to webservers that were returning 500 errors-- all the kinds of minor problems that aren't that big of a deal when you are one developer installing only the modules you need to solve your immediate problem, but turn into a huge mess when there are 300+ of them.
That's technical debt coming due. (And I'm certainly not paying back all of it, I'm not compiling Perl from scratch I'm not manually finding every scrap of perl code on our systems to see which modules they still really need, etc.)
not an apt analogy. It's more like, I went to the store to buy a bike wheel and what I got was a bike wheel building kit that let me choose between plastic spokes, metal spokes, number of spokes, width of the rim, type of tire. But I had to figure out the magic incantation that produced the combo I wanted, which took almost as long as it would have taken me to build one wheel.
Ruby platform developers are much more likely to trade stability (edit: that's API stability) for API quality than the platform devs in other communities.
If you're an active refactorer, that's a good thing. It means those crufty APIs that you had to hack around, they get better, and you can clean up your code.
But if you're a "write it and forget about it" kind of person, that's going to sound like extra work to you.
I'm not sure I'm a "write it and forget about it" kind of person, but I don't want to have to spend time going back over an ever-increasing amount of existing code to fix stuff rather than writing new features/integrations/etc...
There's an easy solution to this: use bundler. Make your dependency a specific version of whatever gem you're using. Then you don't have to worry about it changing until you want to upgrade it.
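For example, a minimal Gemfile with pinned versions (the gem names are placeholders):

source 'http://rubygems.org'

gem 'some_gem', '1.2.3'       # exact version: nothing changes until you bump it
gem 'other_gem', '~> 2.1.0'   # or allow only patch-level updates

Run 'bundle install' once and Bundler locks the whole dependency graph in Gemfile.lock.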
The thing I love most about Java versus working in Ruby / JS / anything else, is the IDE (read: eclipse). Right now we are building an app in SproutCore and I find my life spent mostly in TextMate, which is quite nice and their JSLint integration makes me very very very happy (hunting down missing commas is much easier this way).
IMHO there is nothing like the refactoring tools in eclipse, and the code completion. Say what you want about the verbosity of Java, man is it nice cranking out 200 lines of code in 2 minutes with eclipse.
What's even better, though, is not needing 200 lines of code in the first place. The thing I hate about java is that you're required to use an IDE, or you'll never get anywhere. In Ruby, you just metaprogram in 10 lines what takes 200 in java. It's `attr_accessor :foo, :bar, :baz` in ruby, vs 8 loc for each in Java. Want to change the name? Go change the one word in ruby. No need for an IDE with refactoring support.
The biggest difference there is that Ruby programmers frequently don't think twice about exposing their object's state to anyone and everyone.
It should be one line of code for each attribute in Java, "public SomeClass foo;", if we're going for the same semantics as the Ruby example. Admittedly, if you don't want to expose that it's a field, that's not quite as easy in Java, although really, there's no reason to not just do this:
private SomeClass foo_; // Java does let a field and a method share a name, but the suffix avoids confusion
public SomeClass foo () { return foo_; }
public SomeClass foo (SomeClass aFoo) { foo_ = aFoo; return foo_; }
Yes, that's not what most Java programmers would write, and it's not too concise either, but it's not nearly as bad as you suggest.
Ruby is actually more conservative on this issue; it isn't possible to make fields public. The attr_accessor macro is actually adding getters and setters. The implementations used for the getters and setters are equivalent to those that you would write in java. As mentioned elsewhere you can switch in custom implementations as required but, as in java, the majority of the time the default implementations are sufficient. The difference is that in ruby you don't need to write the getters and setters until you need the customisation.
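To spell it out, `attr_accessor :foo` generates exactly the pair of methods you'd otherwise write by hand, and the instance variable stays private either way:

class Widget
  # what attr_accessor :foo expands to, more or less
  def foo
    @foo
  end

  def foo=(value)
    @foo = value
  end
end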
I know. That's why I include the admittedly a bit longer and boiler-plate-y-er code sample. Same effect as the ruby. Unfortunately, 3 lines per attribute. A bit more explicit and quicker to change when you make the getter/setter methods actually do something.
I prefer the Ruby. But, frankly, Ruby has too much boilerplate for attributes, too. I have to explicitly add attribute handling code in initialize? Why?
No, thanks, I'll use Perl 6:
has $.foo is rw;
There you go. An lvalue accessor method for an attribute, and the ability to set it in the constructor with the named argument ":foo(someValue)". And, if I want to add a type constraint, it's a simple matter of doing this instead:
has Int $.foo is rw;
Or even this if I only want even numbers:
has Int $.foo where { $_ % 2 == 0 } is rw;
If I really want to, I can manually implement my own constructor, but often, I won't need to.
If I want the attribute to be private, I just replace "$.foo" with "$!foo" and it's only accessible within the class.
CLOS also makes this very simple, you just add an entry in the slots list of the class like so:
(foo :accessor foo :initarg :foo)
You can also do type-checking by adding ":type some-type-specifier", specify that it's either a class or instance(the default) slot with ":allocation class" or ":allocation instance", use :reader or :writer instead of :accessor to only generate a reader or a writer accessor or :initform to give the slot a default value.
Ruby is at a nice place on the accessor definition boilerplate continuum, but it's nowhere near the bottom.
" I have to explicitly add attribute handling code in initialize? Why?"
'Attributes' are just methods like any other. What code do you think you need in initialize?
(Encouraging people to think of some methods as 'attributes' was a mistake, but also now a long-lost battle. It breaks the concept of the "messages only" Ruby object model, even though that's exactly what's happening. )
If I want them to be initialized when .new is called, I need explicit code in initialize. Yes, if the attributes are externally mutable, one can set them after initialization, but sometimes you don't want an externally mutable attribute.
Suppose I have this class (if it doesn't exactly work, sorry, I'm not fluent in Ruby):
class Complex
  attr_accessor :re, :im
  def initialize(aRe, anIm)
    @re = aRe
    @im = anIm
  end
end
In Perl 6, I get almost the same thing (with explicit attributes, as if I had done "def initialize(initargs); @re = initargs[:re]; @im = initargs[:im]; end" for the initialize method) with this:
class Complex {
has ($.re is rw, $.im is rw);
}
If I want to do as in the actual Ruby example, I just add this:
method new ($aRe, $anIm) {
self.bless(*, :re($aRe), :im($anIm));
}
"If I want them to be initialized when .new is called, I need explicit code in initialize."
Sure, but that's true of all instance variables in Ruby. That there is a method of the same name as an instance variable is an implementation choice; it shouldn't make things magical. (I don't like the idea that a public API reveals the implementation, so I don't like to encourage this automagic tying of method names and instance variables. But clearly many like to think of Ruby as a language with "public properties", perhaps because of wanting it to be like other languages they are more used to.)
If you want smarter attribute accessor code, perhaps fattr would help:
http://github.com/ahoward/fattr
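From memory of fattr's README (check the link for the real API), it lets you declare defaults right in the accessor, something like:

require 'fattr'

class Config
  fattr :port => 8080              # accessor with a default value
  fattr(:started_at) { Time.now }  # lazy default from a block
end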
" but sometimes you don't want an externally mutable attribute."
This is my point. Why would you think of Ruby as having "externally mutable attributes"? There is private data (instance vars) and code to respond to messages, which may or may not alter instance data. No data are public by default (barring the usual metaprogramming hooks to get around that). But an idiom was promoted to encourage people to think of a coincidence of method name and instance var name as being "attributes". Later, people have to unlearn stuff in order to properly grok how Ruby works.
When I refer to an externally mutable attribute, I am not making any statements about the details of the storage or interface of the "attribute". I am referring to any bit of information that can be retrieved somehow and modified somehow. That might be implemented by performing some operation on some other attribute, it might be an actual physical instance variable, it might just be a variable closed over by the accessors, it might be something else. What matters is that objects of the class present some kind of interface that allows setting a value for it and retrieving a previously set value.
With that said, on the main body of my reply:
"Sure, but that's true of all instance variables in Ruby."
Right. The question is: should that necessarily be the case? If you're going to have a way to auto-generate methods that provide an externally mutable attribute (see above for my working definition of this term) as Ruby does, why not provide at least the option of auto-generating the constructor that sets those attributes, with the option of overriding that if you don't want that behavior?
'Why would you think of Ruby as having "externally mutable attributes"?' Languages don't have "externally mutable attributes", particular interfaces within those languages do. Any Ruby class which uses attr_accessor does. Similarly, any Perl 6 class that contains "has $.foo is rw" and doesn't override the generated foo methods does. Similarly, any Java class with either a public field or a private field with corresponding getFoo and setFoo methods does.
"There is private data (instance vars) and code to respond to messages, which may or not alter instance data."
The private data is part of the implementation. The implementation is not part of what I call "externally mutable attributes". The messages may or may not alter instance data, but whether they modify instance data is not part of the interface of an externally mutable attribute either. An externally mutable attribute could look up a page on a website, and send a POST request to modify it upon setting. That's a bad idea when that's not the express purpose of the attribute, but it's one way of implementing externally mutable attributes that doesn't have to modify any instance data.
"No data are public by default (barring the usual metaprogramming hooks to get around that)."
If you use attr_accessor, then yes, there is some public data. Given that I was responding to a post comparing the Ruby way to create an instance variable and read and write accessors to the Java way, it makes sense to compare the Ruby way to other languages, no? By the way, if you don't like data defaulting to public (me either), you might be happy to know that Perl 6 has an equally convenient way of creating private attributes (they're not even available to subclasses): just replace "$.foo" with "$!foo".
'But an idiom was promoted to encourage people to think of a coincidence of method name and instance var name as being "attributes".'
I agree that an instance-var named @foo and accessor methods named foo and foo= do not necessarily comprise an externally mutable attribute "foo". However, two methods foo and foo= (or setfoo, or whatever you want) do, if the foo= method is specified as determining what all future calls of foo return until another foo= call. foo doesn't even have to return the exact value given to foo=. Its result just has to be determined by the call to foo=.
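A quick Ruby sketch of that last point (the class and method names are made up): the writer normalizes its input, so the reader's result is determined by the write without echoing it back verbatim:

class PathHolder
  def dir=(d)
    @dir = d.chomp('/')   # store a normalized form
  end

  def dir
    @dir                  # 'tmp/' in, 'tmp' out: determined by dir=, not identical to it
  end
end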
I suspect our disagreement is largely an artifact of differing definitions. If that's the case, I'm sorry for not sooner clarifying my definition for "externally mutable attribute".
'If you use attr_accessor, then yes, there is some public data ...I suspect our disagreement is largely an artifact of differing definitions. If that's the case, I'm sorry for not sooner clarifying my definition for "externally mutable attribute".'
Seems to be the case. Using attr_accessor is no different than defining methods by hand that just happen to have the same name as an instance variable. By one view, having any message that returns a value means the object has public data. I think that's your POV. Still, I think few other people think of all such methods as attributes or accessors; this may be my own skewed view of what I've seen among Rubyists.
I think the attr_* methods grew out of early Rubyists finding themselves writing the same sort of code and deciding to automate that with some metaprogramming. I don't know why they don't include a means to also set a default value, but then fattr may just be a continuation of that process: use the language to extend the language.
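That sort of automation is only a few lines. Here's a sketch of an attr_accessor variant with a default value, built with the same metaprogramming tools (attr_accessor_with_default is my own made-up name, not a standard method):

class Module
  def attr_accessor_with_default(name, default)
    define_method(name) do
      if instance_variable_defined?("@#{name}")
        instance_variable_get("@#{name}")
      else
        default
      end
    end
    define_method("#{name}=") do |value|
      instance_variable_set("@#{name}", value)
    end
  end
end

class Job
  attr_accessor_with_default :retries, 3
end

Job.new.retries  # => 3, until someone assigns it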
You need an IDE in Ruby just as much as in Java. It's the easiest way to browse the code, navigate from place to place, get some auto completion and auto imports, etc...
Of course, Ruby being dynamically typed makes it impossible to have most of the refactorings that statically typed languages take for granted, but that's a separate topic.
Look at it this way: you might think you are productive with Emacs/TextMate/vi, but you would be even more productive with an IDE of the Eclipse or IDEA caliber.
An IDE is never mandatory, it just makes you more productive.
"Of course, Ruby being dynamically typed makes it impossible to have most of the refactorings that statically typed languages take for granted, but that's a separate topic."
Most of the refactoring tools that Java IDEs have originated in Smalltalk, an extremely dynamic language.
Most of these refactorings required human intervention, even for simple refactorings like "Rename". When the IDE doesn't have types, some refactorings are simply impossible to perform automatically.
The arguments in your linked article are inaccurate. For example: the Refactoring Browser described in "A Refactoring Tool for Smalltalk" (at http://st-www.cs.uiuc.edu/users/droberts/tapos.pdf ) was capable of determining which sends of a renamed message were to the class on which it is to be renamed by analysis of the program's dynamic behavior.
There's of course a limitation on this. To quote the paper: "The major drawback to this style of refactoring is that the analysis is only as good as your test suite. If there are pieces of code that are not executed, they will never be analyzed, and the refactoring will not be completed for that particular section of code."
The impracticality of this technique notwithstanding, the article is perfectly accurate based on your own summary: in a dynamically typed language, automatic refactorings are never 100% safe.
A plain search-and-replace is usually quite sufficient for purpose if you have a reasonable idea what's going on in your codebase (and if you don't, you probably don't want to be renaming things yet).
Doesn't mean a real refactoring tool such as the ones people've been playing with for http://padre.perlide.org/ wouldn't be better, but it's amazing how well a simple brute force replace works on a reasonably tested codebase.
Coming from VS, I love NetBeans for writing Rails code in.
I became very spoiled by Visual Studio's ability to let you explore libraries via IntelliSense. NetBeans is the only Ruby IDE I've found that comes close to letting you do that.
IDEA does that rather nicely. Its maven support can reduce your box to a paperweight when it reindexes, but other than that, it's a very nice product indeed.
If you're not working on a complex codebase, it is total overkill though.
Why do you need to "standardise" on a Java IDE? Do you standardise on vi, emacs, or Notepad? Can't developers simply use whichever IDE they prefer, keeping the code formatted in the desired format?
Pair-programming teams often standardize on an IDE, so that when you hand the keyboard to your pair, they don't have to exit your IDE and start up theirs in order to type the next few lines of code. Standardizing on Notepad is a bad idea, because there's no way to enhance it to meet your team needs better.
It's just a matter of getting used to the way of doing things, like any big app it has its quirks that you just have to know. Learn the keyboard shortcuts, you'll start liking it soon enough.
What are some of your issues with Eclipse? With only 6 weeks in, it might just be a matter of being unfamiliar with certain things or not being aware of others.
As odd as it sounds, I actually enjoy being able to and sometimes having to dip into other people's code. For a bunch of jars that I had to use in Java (or C++ or VB even), there would be some feature missing and I was plumb out of luck and had to write something completely around what they were doing to make special cases.
Now, with Ruby (and Rails), 99% of the time, I just grab the latest off Github and roll with it. I need to get better at forking in my changes so that others can share, but that is a self esteem problem of a different kind. Some of the time, I have seen solutions that I would never have thought of implemented more gracefully than the brute steel glove I was going for.
As for sachin, to me it looks like now that he has hit success, he is no longer hungry, and just biting the hand that feeds him out of aimlessness... or would this be eccentricity? Looks like he updated to say how Rails allowed him to deploy as fast as they did. But still, the post sounds a lot like a duke crossing a river by running atop the heads of his subjects and then complaining that his coat tails and shoes got wet when he gets to the other side.
I completely disagree with this article. Rails and ruby have a great amount of emphasis on TDD and BDD. I would be surprised if there is another open source community that is as focused on writing tests as the ruby community. The fact that their code base is "littered with monkey patches" might have something to do with the maintenance problems they are facing.
I also don't see how this problem is specific to rails. Any project where you choose a ton of 3rd party libraries can be riddled with complexity and upgrade costs.
How this article makes it clear that objective-c, java, or ... is more maintainable or easier to work with, is beyond me.
I might be making a bit of a stretch here, but it sounds like Posterous might be complex software. Maybe the iPhone project he was working on did not have as much business logic and 3rd party libraries involved.
Tests don't make code work. The problem I've experienced with the Ruby community is a constant belief in "this can't be that hard, I'll just throw one together". One day I traced a major bug in a client's app to their Ruby webserver not handling HTTP 100 Continue correctly... that simply should not happen. Pulling apart the server I found that most of the behavior was hardcoded for common clients, and certainly no one had bothered to read the HTTP spec. That wasn't an isolated problem, either, but I feel like I need not go into every single library bug I've run into. I simply did not get that feeling from the large number of Java libraries I used to use, many of which were actually the official reference implementation for the specification. ;( Bugs, yes; completely misunderstanding the problem domain and thereby not really being worth using, no.
Thank you, I agree 100% and this is basically the gist of my post.
That's why there is so much fragmentation in ruby gems. When one gem fails to do something, a developer thinks, "well I can just do this myself and do it better!"
And so you end up with 10 gems that all do roughly the same thing, but not perfectly, each designed by and for a different person.
I know the fans will vote me down for saying this, but Github has greatly increased the level of fragmentation. With the exception of the core projects, people seem to fork more than they join forces to build a better gem.
It's a very easy problem to solve though: build fewer dependencies on thinly supported libraries. That applies to all languages.
"How this article makes it clear that objective-c, java, or ... is more maintainable or easier to work with, is beyond me."
They offer a perfectly good hypothesis, actually: Obj-C and Java are older and more mature.
That has obvious drawbacks: It would be impossible to redesign Java, now, to the extent that Rails 3 just got redesigned. But, though certain Java libraries may suck, you can rest assured that they will suck in exactly the same way in five years. Not so with Rails. The platform moves, and you have to move as well.
Another obvious point is that Rails (a) just went through a big version change, which just happens to coincide with a push toward Ruby 1.9; that double-whammy is bound to produce some yelps of pain; and (b) Rails is slowly maturing as well. Sooner or later Rails will be as mature as Java is today. How will we know when that has happened? When occasional gentle rants like this one turn into numerous, big, influential rants about the prospect of breaking backward compatibility; that will be a sign that the cost of change outweighs the benefit. And then it will be time for a new framework to inherit the role of young upstart...
At my work, I noticed a very similar issue to your point (a) when our iPhone app started to show a decent number of bugs when iOS4 dropped. 3rd party libraries may or may not be future proof and their support may or may not be available in the future.
Perl definitely focuses on tests. I can't say "more" because that is subjective (although "I" think so). I would say the testing mindset is huge in the Perl community.
With Django, I've learned to avoid anything that tries to do too much. Small apps or collections of code that aim to solve very specific problems seem to work with the least hassle. With a carefully defined and limited scope, these apps have a much easier time integrating into existing projects. You want something that handles OAuth. You want something that handles Facebook Connect. You do not want something that handles "integration with social networks" or anything so grandiose.
There's definitely a tendency among library/framework writers (not just Django) to include the kitchen sink. Ultimately it becomes problematic for the end-user and maintainer because more time is spent trying to understand everything that is going on (and often times removing unwanted functionality).
It's really hard to find functionality in the "hammer, screwdriver" size - more often than not I get a whole tool set with all sorts of things I'll never need, and end up having to devote more time figuring out how it all "hangs together" with the rest of the system.
It becomes a matter of whether you end up piecing your system together block by block or chiseling away to your end product.
I think the attitude is different. Projects with a homepage outside of github I tend to treat as black boxes. Projects that are just up there with a README, I tend to take as someone saying "here's this thing I built, use the source if you'd like." The difference with the latter is that I do not expect support or a community. I consider the plugins and gems that are like that to be under my ownership, they are just saving me the time of writing the majority of the code. While I do have to fork and monkeypatch a lot, that is still faster than DIY. When effort to "make right" exceeds the effort to rewrite, I'll rewrite, simple as that. I think the "just put it out there" attitude is useful. We should also be better as a community when we give up on a project, though the metrics that gh provides give good ideas about the activity and life around something. If anything, reading through 10 implementations of slugs lets you know the pros and cons that you may not have considered before you go on and roll your own.
In short: in ruby community, libraries are starting points, not black boxes.
I recently ditched rails for sinatra / padrino (mostly sinatra) due to the fact that if you want to do something off the beaten path, even in rails 3 it's a complete crap shoot of trying to tie together different pieces, since it seems people only work with rails specific active_* parts - you spend more time wrestling libraries in rails 2-3 than actually doing work.
Not to take anything away from those microframeworks (they are awesome), but are you sure you're giving Rails 3 a chance?
After all, they brought the merb devs into Rails core and radically re-architected the whole framework to support modularity and customization down to a very low level. From the extraction of ActiveModel, to the inclusion of Thor for generators, the decoupling of Prototype and an ambitious attempt at unobtrusive javascript, and the declaration of a publicly supported API upon which even the framework itself is built, Rails 3 is the biggest overhaul Rails has ever had.
Have they succeeded in making it truly modular? I dunno, it's not released yet for one thing, but to say it's a crap shoot is a bit premature IMO.
Yes I'm also using sinatra, I love how clean and simple it is. I was trying to decide which gem to use for authentication, but couldn't find one that was simple enough (I'm fairly stupid) so I rolled my own. Sorry this is more of a ramble, I just really like sinatra, no sinatra-language to learn as in rails.
I first used rails very early on, late 2003 no... maybe it was early 2004. Anyway I didn't like it then but it did save my ass in getting my college project done.
I don't think it is specific to Rails! I find myself thinking the same with Django and lately, with Node.js... I'm talking mostly about the abundance of 3rd party libraries, and not really knowing which one to pick, which ones are still maintained, etc...
Agree, and I'd throw in the example of Drupal: the core and a set of well written contributed modules are pretty decent. The rest of the contributed modules are mostly hacked together, poorly documented, half-working garbage.
Drupal is orders of magnitude worse owing to the fact that the design philosophy is to create a powerful CMS that works out of the box with infinite customizability for hardcore developers.
The result of these assumptions is an architecture that has a certain elegance, but is impossibly complex, where writing good modules means having tons of domain knowledge of the framework itself which does not transfer to any other form of web development.
This approach makes a viable framework for a lot of low-budget projects (i.e. clients with $5000 and a ridiculous laundry-list of general components), but compared to Rails, Django or other frameworks that focus on the basic building blocks that apply to the majority of web applications, the results are horribly baroque.
Django has a nice little saying, "Django is Python", meaning most of what is done is just the natural way to do things in Python.
Now there are some really smart people who have written excellent Drupal modules, but the majority of the modules aren't very good. Drupal provides a lot of Drupal functions that you need to know about to write good modules but technically aren't required to hack together something that kinda works.
I've found it's an even greater issue in Node, while the core library is solid, it seems like _a_lot_ of the people developing the 3rd party libraries are missing some very fundamental CS.
"_a_lot_ of the people developing ... are missing some very fundamental CS." FTFY ;)
Adequate culture can compensate for lack of fundamental knowledge, but Node doesn't have "The Node Way" (yet.) DC taught us what to avoid, but we don't have a strong body of blessed examples or expressed philosophy.
First posterous is claiming other publishing solutions are obsoleted by them. Then they trash on the framework that allowed them to build their supposedly superior solution. Maybe these guys should just keep their mouths shut and work in silence - PR doesn't seem to be their strong suit.
Looking critically at your tools is the hallmark of a good craftsman, and improving upon them is even better.
I have absolutely no problem with this post, and why should it be PR? It's not like he's speaking on behalf of the marketing department, just another hacker who produces code for a living.
It's interesting actually that you could probably replace 'rails' and 'gems' with 'drupal' and 'modules' and still end up with a valid piece.
Mark, in no way did I trash on the Rails framework. I basically said that I find 3rd party frameworks to be less reliable than I have seen on other platforms. I also said that being on Rails has still helped Posterous tremendously and I'm appreciative of that.
Ok, so maybe it came off a little harsh and I apologize for that. I'll try to address your points a little better:
I think you're focusing too much on the negative aspects of there being a lot of plugins and gems out there that are maybe not maintained that well. I run into the same issues all the time, and a lot of times I end up having to patch other people's code or roll my own solutions. But I'd rather there be a lot of somewhat-good choices out there that are written in ruby and save me a ton of time than rely on a company to provide perfect solutions when they feel like it.
Your headline was also a little sensationalist and considering how us rubyists love to preach about the joy of coding in ruby, a little on the trollish side. You can't say that coding in objective-c is as enjoyable as coding in ruby.
Anyways, I hope that explains my reaction a little better.
This post is a bit confusing. At first glance his post appears to be directed at the Rails framework itself. Reading further, he seems to be referring to the overall ecosystem rather than Rails itself.
That seems about right to me. The trunk and the major branches of the tree are generally solid. Minor branches and the leaves of the tree, not so much. I think the pace of the community creates a lot of leaves and minor branches. Anything really important will mature into a major branch and become solid.
My experience has shown if I want something in Java, I can usually find it from a source I think is reputable for stability and serving large numbers of customers: Apache, Codehaus, JBoss etc.
If I want something in Ruby, I throw my chips in with some guy on GitHub (who was nice enough to package it as a gem), and usually choose based on whether it was good enough to at least come with half-decent documentation.
I wouldn't necessarily agree that JS or non-Apple Obj-C libraries are all that much better than the Rails community effort though.
There is something to be said for the amazing contributions the Apache group has made to open source. I pick Java mainly to benefit from their brilliance (I suppose I could use Scala, Groovy, etc. instead though).
The big difference here isn't about the languages, it seems to be about the technologies. Comparing web libraries (esp. add-ons/plugins) with anything else in any language will have a pretty devastating result.
I'm not entirely sure why. It can't be just the fact that nowadays web programming is often an entry-level job, so often you have to wade through the fallout of "My first Lightbox". Maybe it's the idea of packaging something you just whipped up for a project for public consumption.
The java ecosystem is optimised for maturity, ruby for the cutting edge. That is to say that the cutting edge java libraries are considerably more painful to use than the cutting edge ruby libraries. If you're the kind of team that gravitates towards newer libraries, then ruby is going to be a better experience. If you prefer to plod along with good ol' faithful, then java is the way to go.
I think "app store" is a bit of hyperbole. What Ruby needs is some curated Gem repositories. This would be more along the lines of Apache commons or the various centralized Maven repos, or the clojure-contrib project.