While porting six jQuery plugins over, I encountered various differences in the APIs, and it was a bit of a headache to sort through them (especially the very subtle ones), but it was worth it. I'm currently writing a blog post on my porting experience, as I imagine others will take on the task and run into similar issues.
For people comparing Zepto to this offering, keep in mind that Zepto also offers mobile-specific events, so if your reason for porting is mobile latency, Zepto's a good bet.
Update: I've now written the post about porting from jQuery to Zepto which includes my workarounds for missing jQuery functionality. If you're thinking of porting from jQuery, it should give you a good idea of what sort of stuff you'll run into:
http://www.everyday.im/learning/porting-jquery-to-zepto.html
Wouldn't it be more cost-effective (rather than converting/porting a whole site to a new JS lib) to rewrite your page-init JavaScript so it doesn't require a JS lib at all (0ms)? jQuery would be async-loaded by the time the user triggers actions/buttons; if it hadn't loaded yet, you wait or show a loading icon, etc.
This porting/optimization adds no value to your users 6 months from now who are running a quad core nexus-razr-droid's browser that loads jquery in 300ms.
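The async-load idea above can be sketched as a tiny queue: user actions are deferred until the library has loaded, then flushed. (`makeQueue` and its methods are made-up names for illustration, not from any library.)

```javascript
// Queue actions until the async-loaded library is ready, then flush.
function makeQueue() {
  var ready = false;
  var pending = [];
  return {
    // run immediately if the lib is loaded, otherwise defer
    run: function (fn) { ready ? fn() : pending.push(fn); },
    // call this from the async <script>'s onload handler
    flush: function () {
      ready = true;
      while (pending.length) pending.shift()();
    }
  };
}
```

A click handler would call `queue.run(...)` instead of touching jQuery directly; before the flush you could show a loading icon as suggested above.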
Good point. For my particular app/use case, the user quickly opens the app, uses the camera, then closes the app and moves on -- which is why I'm optimizing load time.
But even for that case, that technique is a great suggestion, and probably what I should have tried first. I'll try it next. :)
Moore's law applies differently to mobile phones, as processing power is less important than battery life. And even if phones get a lot more powerful, that's secondary to poor bandwidth, a problem I bet we'll still have 4 years from now.
I use jQuery because it makes JavaScript usage sane. Without it I would hate my job.
I thought it was worth mentioning that the Google Closure Library has quite an interesting approach to this: it uses the Closure Compiler to handle it. First of all, only the pieces of code that are actually used are compiled into the final JavaScript file. You can also set compile flags if you only want to target specific browsers, like mobile WebKit, which reduces code size even further.
Use LabJS [1] or a similar library to load both CSS and JS asynchronously, without blocking the loading of other assets.
Edit: sorry, I misread; you're talking about parsing time. LabJS only helps with loading time. Does it really take that long to parse? That's not negligible.
> so if your reason for porting is mobile latency, Zepto's a good bet.
Unless you want to be compatible with WP7's IE9/IE10 or mobile Opera browsers of course, since Zepto is webkit only (for a phonegap app it probably does not matter).
I've had a very similar experience recently porting from JQuery to Zepto. The difference in load time is dramatic, even loading the js locally from the device file system.
It was at risk of ending up like PHP (in more ways than one). The core team recognizes that the API needs a trim. I also no longer use jQuery, and haven't in a while, because of the bloat.
But I think the way they are approaching it is wrong. They want to trim the API by 10% for the next -1.8- (edit: 1.7) release. This would break semantic versioning [2].
I think they should fork the code now into a new 'jQuery 2' branch and not break any backwards compatibility in the 1.x branch. The 2.x branch should be a complete re-org, with browser support as modules (e.g. if you want to support IE 6.0, you enable a module).
For now, the jQuery-light branch should be optional in 1.x. There are analogies here with what PHP did between 4 and 5, and what Python did between 2 and 3. jQuery shouldn't be stripping out API functions in point releases, but it definitely, definitely needs a trim. And I would rather this work were done on the mainline jQuery project than in forks; I think a lot of devs currently run their own forks of Zepto/jQuery (I do).
This project may be the perfect starting point for a jQuery 2 branch. I'd definitely be interested in working on that project, bringing in features from all the various forks and cleaning it up.
Actually, use the new `#on` as a replacement: it can handle the job of `#bind` and `#delegate` (and `#live`, of course), so I'd fully expect it to be the only one remaining in the fullness of time.
Warning: the first two arguments are reversed compared to `#delegate`:
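A quick sketch of the swap (jQuery assumed; `delegateArgsToOn` is a hypothetical helper just to make the reordering explicit):

```javascript
// .delegate(selector, eventType, handler)  ->  .on(eventType, selector, handler)
function delegateArgsToOn(selector, eventType, handler) {
  return [eventType, selector, handler]; // note the first two are swapped
}

// So a call like:
//   $('#nav').delegate('a', 'click', onClick);
// becomes:
//   $('#nav').on('click', 'a', onClick);
```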
The README for this desperately needs the blacklist, not the whitelist. I care a lot more about the 10% that's missing.
Unfortunately, the "limitations" section reads fairly cryptically. I'm probably just a little slow today. Can anyone explain what's actually not included? For example, are the "Valid Examples" examples of what's allowed or of what's not allowed?
From the readme, I see two major show-stoppers for me: no .delegate() (or even .live()), and no support for the 'change' event... and I can't tell whether selectors support filters like :first, :last, :checked, etc.
I think it's worth adding .delegate() to core; .live(), no.
I'll dive in to see how much code would be required to bring it along. Also note that the list of what's in core is not permanent and can be changed by popular demand :)
I still think a large part of jQuery is bloat that is rarely used.
Oh, I agree. It would be better to have .delegate() than .live(), and I'm not bothered by .live()'s absence. I just meant that .delegate() doesn't exist, and .live() isn't there in lieu of it.
"Events passed to your event handlers are the 'real' browser DOM events."
This is a real deal-breaker for me. One of the absolute best things about jQuery is how it normalizes events, so you don't have to care that, e.g., the source element is event.target in some browsers and event.srcElement in others.
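The kind of shim being described could look roughly like this (a minimal sketch, not jQuery's actual code; `normalizeEvent` is a made-up name):

```javascript
// Patch the legacy IE names onto the event so handlers can use the
// W3C properties unconditionally. Illustrative only.
function normalizeEvent(e) {
  if (!e.target) e.target = e.srcElement;    // IE < 9 uses srcElement
  if (!e.preventDefault) {
    // old IE signals cancellation via returnValue instead
    e.preventDefault = function () { e.returnValue = false; };
  }
  return e;
}
```

jQuery's real implementation copies far more properties (relatedTarget, which, pageX/pageY, ...), which is part of why it costs so much code.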
The source element is also bound to the 'this' context.
I'm curious where the other breakages are; I've normalized the events where I saw it was needed, e.g. in the custom `$.key()` function.
I think the better approach is to opt in to the cross-browser normalized event this way, i.e. via another plugin. There are a lot of cases where it doesn't matter and you don't even use the passed-in jQuery event, and so the overhead is otherwise not welcome.
I don't think that means it's removing unified properties. I think that means that srcElement won't be copied in, but target will still refer to the right thing (i.e., to srcElement if that's what the current browser is providing). It's just not also providing a copy of srcElement for the user.
If you're only targeting a subset of browsers, like say ones released in the last 4 years, it's usually fine. Though if you need to handle older versions (not just IE, all the browsers have some oddities), then it's quite helpful to have jQuery normalize them.
Perhaps this could be jQuery's Merb, encouraging the jQuery developers to reexamine the current monolithic package. I'd appreciate a modular version that can still be built as full-size jQuery for situations in which it is convenient.
Don't most apps reference jQuery from a CDN source (like Google)? In those cases jQuery is probably served from the browser cache. And if "those cases" are "most cases", is having a smaller version of a similar library particularly valuable?
The parse time is significant for jQuery, at least on some mobile devices. For example, in my tests on an Android N1 today, jQuery took 800ms to parse (even when cached) whilst Zepto took 300ms -- saving 500ms.
(I'm not an expert, but) it's more than simply code length: it also depends on whether the library runs any functions at parse time, and perhaps on how many variables it puts in the global scope.
For example, I discovered that my tiny localStorage library was taking 300ms to load because I was doing feature detection at parse time, and that feature detection actually attempted to set an item in localStorage. I postponed that and got the load time down to 20ms.
I'm considering using it because of a combination of the following:
1) You can't always count on users having cached it from the CDN.
2) If it's served from my own server, I can be sure users are getting exactly what I intend, and there will be fewer HTTP requests when I bundle it with the rest of my JS, which will only add ~4kb.
3) I'm a bit of a minimalist when it comes to code - for the web at least.
You need to be careful with your wording here. The "MAJORITY" of the top 1m websites (based on Alexa rankings) don't use Google's CDNs (i.e. load a resource from googleapis.com):
...and that's across all versions of all libraries.
Quite how that correlates with how many of your first-time visitors will already have the library cached -- because they happen to have recently visited another site that uses the same version of the same library -- depends on how much your visitor demographic intersects with the sites that use the CDN. You then need to offset that against the DNS lookup time required for the rest of your visitors, to work out whether loading the file from Google's CDN makes sense.
If you're talking about repeat visitors, it doesn't matter where the file was served from, so long as you apply the correct cache-controlling headers.
17% of the top million sites using them (and rising) is close to a guarantee that the user will have it. It's not as if they reset their browser cache every few days. Also, if their ISP does caching, it will most definitely be there too.
Besides, even when they don't have it, Google's CDN is better than a hit on your servers: for your IO load, for parallelism, for delivery speed, etc.
There's some research around suggesting that, with so many versions of jQuery in use, the chance of finding the version your site uses already in cache is quite small (can't find the article at the moment).
Components don't seem to stay in cache very long these days, because browser caches max out at around 50MB (much less on phones), and with a bit of surfing it's easy for components to get ejected.
Also, there is no guarantee that retrieving it from Google's CDN is faster than retrieving it from your own servers: there's DNS resolution, TCP connections to be set up, etc., some of which will already have been done for the main site.
...and it was used by just 2.7% (945) of the 35,204 pages in the dataset. Note that it's not just version fragmentation -- you also have to take protocol into account, since browsers cache HTTP and HTTPS separately.
At this point there really isn't much of a debate: unless you have evidence to the contrary (e.g. all your visitors come from Facebook, and Facebook uses the same version of jQuery as you do), using Google's CDN to load jQuery isn't likely to benefit the majority of your first-time visitors.
Even though the Google-hosted jQuery libraries are used by a large number of top-100 sites, I prefer to serve my own files. 100k or 200k for the foundation of functionality that jQuery provides isn't going to severely affect your visitors in any way.
Of course I'm assuming you're using common best practices with cache-control/expires headers.
I find it very improbable that Google would decide to mess with it, but as a general feature it would be nice if one could specify a hash of the file to be loaded (as a <script> attribute) and fire an error callback if verification failed.
That way you could use Google's jQuery file without being vulnerable to them messing with the file contents.
I'd say there is a 0.000% chance Google would use its CDN to compromise third-party sites. It would be discovered in a flash, and in no way would it ever justify the hit to brand confidence.
Just as absurd as Google destroying any and all confidence we tech people have in them as a company by loading malicious JavaScript instead of, or along with, jQuery from their CDNs.
The 'competitive advantage' jQuery has over the competition is that it's been so thoroughly tested, in so many browsers, across its plethora of features.
I hope this library gains some traction, as I'd love to see a jQuery clone that could be modified to pass targets in as named parameters instead of over-relying on 'this'.
Every tech seems to have an annoying something. In PHP it's the order of arguments in stdlib functions; in Rails it's indirect code ruining your day; in jQuery it's 'what does this refer to again in this context?' and 'var that = this'. Ugh!
The jQuery annoyance you describe is a trait of JavaScript and its idea of bound function contexts. I think most of the "annoyances" people encounter with jQuery amount to "it doesn't do this really cool thing my other language/framework does".
I think two features that have almost ruined the language are Function.apply and the new keyword.
Function.apply is to JS as metaprogramming and indirection are to Ruby. Features don't have a priority of use, and programmers reach for the most interesting one (and go to town misusing it).
If there were no new and no Function.apply, JS would be more coherent and remain exactly as useful. It would mean that when an event is triggered, rather than changing the meaning of 'this', the target is passed in as a named argument to the anonymous function. Which is way better!
If you constantly have to look up 'what is this set to in this callback' you're so completely doing it wrong!
It's not something that can be resolved by simply removing `Function.apply` or the `new` operator. The way the `this` keyword works is integral to how JS works. There's nothing stopping you from using `Function.prototype.bind` or defining an event mechanism that retains its assigner's `this` value. That's what's so great about JS -- you have this level of control.
As far as events go, having `this` set to the element receiving the event is, IMO, entirely logical, and has been a common pattern way before jQuery came into existence. You can always access the target as `event.target` if you'd prefer.
If you have to look up "what is this set to in this callback" constantly, then you just need to learn the API/language.
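For what it's worth, `Function.prototype.bind` (ES5) makes the "what is this set to" question moot, since you decide it once up front. A small sketch:

```javascript
// bind fixes `this` ahead of time, so the callback no longer depends
// on how the caller invokes it.
var counter = { count: 0 };
counter.increment = function () { this.count += 1; };

var bumped = counter.increment.bind(counter);
bumped(); // `this` is counter even though the call is "bare"
```

A shim is needed for pre-ES5 browsers (e.g. IE 8), which is presumably one reason jQuery-era code leaned on other patterns.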
> The way that the `this` keyword works is an integral of how JS works.
Says who? Antiquated API design by the W3C (which by the way had no reference implementation) shouldn't be the defining factor in how a language works. 'this' and 'new' were brought over to JS so people who were familiar with Java would 'get it'. JS was designed around different ideas before it was mutilated for the sake of superficial familiarity.
> If you have to look up "what is this set to in this callback" constantly, then you just need to learn the API/language.
I assert that good API design requires you to not have to look up things like this. Things that could be unambiguous. Usage of 'this' introduces ambiguity. Not to mention that in an asynchronous programming environment, if you're doing anything non-trivial and you're not copy-pasting $.ajax() code snippets, you want to have named references to things so that closures written inside the scope have access to whatever interesting thing you're working with.
var that = this is a huge indicator that API design is wrong, that this is an abomination and that people generally aren't thinking critically about the code they're writing.
Downvotes? What on earth for? Oh man you front-end guys really don't get it. If you disagree with my sentiments, that's fine. If you don't understand what I'm talking about, you're not a real programmer.
I disagree that this is not inherent to JavaScript. The way "this" is promiscuous among functions is what allows methods to be "just" function properties, and it's integral to prototypal inheritance.
The alternative closure-based OO style also works fine most of the time but it has some limitations, such as not being able to define protected properties that can be accessed in a subclass.
Of course, "this" breaking inside inner helper functions and event handlers (and having to use "var that" or Function.prototype.bind) is annoying as hell but is more of a syntax issue (fixed by things like CoffeeScript and the proposed #() lambda syntax for ES6)
If you're talking about variables in closures, a coworker of mine has a pretty clever way to do it: "var _this = this". It's just a generally useful part of JavaScript.
I find _this and that are the most common ways of referring to the enclosed this. I would avoid using self like the plague because of window.self though.
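The pattern is a one-line capture: the inner function gets its own `this`, so you stash the outer one in a plain variable it can close over. A small sketch with made-up names:

```javascript
// Capture the outer `this` so inner callbacks can reach it.
function Widget(name) {
  this.name = name;
}
Widget.prototype.labels = function (items) {
  var _this = this; // the map callback's `this` would not be the Widget
  return items.map(function (item) {
    return _this.name + ': ' + item;
  });
};
```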
I'm performance obsessed since if our biz serves an additional 2K on our widgets it's an extra 1.86TB of transfer/month.
However, for most on-site content, 33K is the size of a large image. Most of the delay in loading a web page is caused by latency: latency in the three-way handshake to establish TCP connections, and latency for each additional HTTP request, even with keepalive enabled. Payload is less of an issue because it needs fast throughput rather than fast round-trip times, which are impossible to achieve if you're far from the server.
Smaller and faster are always good and always worth striving for (in coding). So I don't want to take away from this, especially since it means more eyes on jQuery. I think it's great work. But keep in mind that the speed increase is not going to be enormous and the cost is running a non-standard jQuery that lacks .css(), .ajax() and $(function()).
It's not just code-base size that affects performance. There's the latency of the extra network hit, parse js time, etc.
$(function()) isn't included because it's generally worse practice than having JavaScript run immediately from a script placed just before the </body> tag.
If your site doesn't use $.ajax(), why include it? Many sites don't use Ajax.
$.css() isn't included by default because its implementation is huge. I prefer addClass/removeClass (which causes only one reflow) or setting element styles directly.
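The class-toggling approach can be sketched like this (a plain object stands in for a DOM element; names are illustrative, not from the library under discussion):

```javascript
// All the visual changes live in a stylesheet rule, so flipping one
// className is a single DOM mutation (and typically a single reflow)
// instead of several separate inline-style writes.
function addClass(el, cls) {
  var classes = el.className ? el.className.split(/\s+/) : [];
  if (classes.indexOf(cls) === -1) {
    classes.push(cls);
    el.className = classes.join(' '); // one write
  }
}
```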
Yes. Unfortunately this is a common misconception, and it's why the Google Closure Library intentionally leaves this feature out (so people don't use it):
>>The short story is that we don't want to wait for DOMContentReady (or worse the load event) since it leads to bad user experience. The UI is not responsive until all the DOM has been loaded from the network. So the preferred way is to use inline scripts as soon as possible.
I was looking through the Mozilla Developer Network.
There is no documented DOMContentReady event.
There is a DOMContentLoaded [1] event, which is targeted at the document object. There is also a load [2] event, fired at the window object when the document has completely loaded.
jQuery.ready will use DOMContentLoaded and load as a backup. [3]
It is common for jQuery developers to shove their whole code base into the ready callback, which I imagine would slow a page down.
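The fallback strategy described above can be sketched roughly as follows (not jQuery's actual implementation; the document and window are passed in explicitly just to keep the sketch self-contained):

```javascript
// Fire fn once the DOM is ready: run immediately if it already is,
// otherwise listen for DOMContentLoaded with load as a backup.
function bindReady(doc, win, fn) {
  var fired = false;
  function run() {
    if (!fired) { fired = true; fn(); } // guard against double fire
  }
  if (doc.readyState === 'complete') {
    run(); // DOM already available
  } else {
    doc.addEventListener('DOMContentLoaded', run, false);
    win.addEventListener('load', run, false); // backup for old engines
  }
}
```

(The real jQuery.ready also handles IE's attachEvent and a doScroll hack, which this sketch omits.)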
The problem with that is that plain script tags can start loading as soon as the first TCP packet of the HTML source has been received. Your JavaScript would only execute after all of that has loaded and the JS parser has initialized. That can be 300ms to 500ms on slow and mobile devices.
I like that MooTools lets you do this yourself. The builder lets you choose exactly what you want. IMO the only detriment to using MooTools over jQuery is that fewer people use Moo, and therefore fewer libraries are available.
Yes, that should be the way to go, rather than providing a library in lots of parts (I've yet to try it -- how well does it work in practice?). Compilers should handle this for you (linking in only the required functions) instead of distributing a library in chunks.
I would also like to see jQuery versions for different browsers (kind of like Zepto for WebKit, but with 100% of the APIs), so I could compile with Closure against different libraries and serve just the right stuff to different user agents.
It shouldn't be hard to add user-agent-specific comments to jQuery so the library could be used to produce user-agent-specific versions. I think the relative popularity of Zepto shows there's a need for that.
I am a big fan of Ender: you can build a framework out of smaller specialized libraries. The Jeesh is very close to the jQuery API; add reqwest and morpheus to the stack and you have the missing Ajax functionality (with a cleaner interface) and a very nice, fast CSS tweener based on MooTools.Tween.
What he has done is nice, but hardly what he says he achieved.
90% of the good parts of jQuery? I don't think so. The omissions (.css, abstracted events, etc.) make this unfit for the majority of jQuery-using projects out there.
So it's more like "90% of what the author wants, YMMV" than "90%, period".