Firefox 4, 5, 6 and 7 to be released before the end of 2011 (switched.com)
188 points by rkwz on Feb 7, 2011 | 112 comments



To be able to support this, they're going to need a major overhaul of how add-on versioning works. As an add-on developer (https://addons.mozilla.org/en-us/firefox/addon/font-finder/), I shudder at the thought of 10,000 different add-ons needing their supported version incremented with each upcoming release.


Hi ecaron,

I lead the add-ons team at Mozilla. Add-on compatibility was definitely a big consideration in our plans to switch to a faster release cycle, and we're working on proposing some changes to the way it works. As you mention, the current way of doing things won't work 4 times a year.

These changes will probably include automatic compatibility with certain major versions. We'll have more to share soon on the add-ons blog.


forgive me if i'm wrong, but i couldn't help but feel like the documentation for upgrading addons from 3 to 4 was extremely scarce and scattered. i pieced together several documents from MDC, the developer forum, and the Mozilla blog, and still couldn't get a full view of the changes that affected my addon.

revisiting the submission process itself is welcome and seemingly necessary; but i'd be willing to deal with an inefficient submission process if the upgrade was well documented.


Personally, I think that the compatibility should be based on feature versions. If an addon says "I use version x of feature y", and that feature doesn't get touched for 20 versions, it would be automatically compatible.


I have 21 different add-ons on AMO right now, and while it's kind of annoying to do so, it only takes me about 10 minutes to update the maxVersion on all of them every time a new version of Firefox comes out. Of course, the problem isn't that it's hard or takes a long time to do; the problem is that developers forget to do it.

Maybe it's time for Mozilla to move to using minVersion only, like Chrome does.
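
For anyone unfamiliar with the mechanism being discussed: each add-on's install.rdf declares a compatible Firefox version range per target application, and it's the maxVersion field that has to be bumped for every release. A minimal sketch (the add-on ID and version are made up):

    <?xml version="1.0" encoding="utf-8"?>
    <RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:em="http://www.mozilla.org/2004/em-rdf#">
      <Description about="urn:mozilla:install-manifest">
        <em:id>example-addon@example.org</em:id> <!-- hypothetical add-on ID -->
        <em:version>1.0</em:version>
        <em:targetApplication>
          <Description>
            <!-- Firefox's application GUID -->
            <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
            <em:minVersion>3.6</em:minVersion>
            <!-- the field developers forget to bump; a Chrome-style
                 scheme would drop it and keep only minVersion -->
            <em:maxVersion>4.0.*</em:maxVersion>
          </Description>
        </em:targetApplication>
      </Description>
    </RDF>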


That requires a major commitment to backwards compatibility, which is a massive time-suck and reduces your ability to innovate. Look at Microsoft for an example of how nasty that can be.


It's not necessarily any worse than if the developer simply doesn't care enough to test it/update it. It might be better to use minVersion, then have plugins that use deprecated functions disabled.


This announcement doesn't say anything about versioning for the underlying Mozilla platform (which moves from 1.9.2 to 2.0 between Firefox 3.6 and 4.0). The versioning for the platform only changes when the API changes and breaks binary compatibility for people who use binary components in extensions. Authors writing pure JavaScript extensions can code them to be compatible across multiple Firefox versions.


They could also pull a Windows 7, which has the underlying version number 6.1, and decouple the real version number from the marketing version number: http://www.businessweek.com/the_thread/techbeat/archives/200...


You're right. But they have been making a simultaneous push toward the Jetpack addons (as opposed to the traditional extensions), which are apparently less of a hassle to deal with in terms of versioning and compatibility.


Sounds like the main story is that they're redefining what a major version number means a la Chrome.

Chrome's version numbers aren't on the same scale as FF's and IE's, of course. Users can't tell the difference between Chrome 6 and 7 the way one can distinguish FF 2 and 3, or IE 6 and 7. I wonder, though, whether Google was just trying to catch IE's version number with Chrome, and once it's at 10, they'll stop increasing it so rapidly.


I think Mozilla is also aware of the problematic perception that this particular release generated among onlookers. They've already spun 10 betas and are going to make at least two more betas before the release candidates.

It seems like the Firefox 4 project simply got out of control, there was no clear cut-off, features were added willy-nilly throughout the beta process, and the browser (for a while) seemed stuck in perpetual development. It doesn't look good, and doesn't make Mozilla look like they can ship without the temptation to add more or tweak more.

Fx1.5 and 2 both had two beta releases. Fx3, 3.5, and 3.6 had five beta releases each. Fx4 is scheduled for at least twelve betas.

They need a significant change in their project management, and setting a fast-paced roadmap is probably an aspirational way to force themselves to ship.


> It seems like the Firefox 4 project simply got out of control, there was no clear cut-off, features were added willy-nilly throughout the beta process, and the browser (for a while) seemed stuck in perpetual development. It doesn't look good, and doesn't make Mozilla look like they can ship without the temptation to add more or tweak more.

Netscape all over again?


Firefox 4 = Netscape 4? :)


I agree. It suggests that version numbers are essentially meaningless when the consumer of the versioned item is only concerned with what is supported.

It almost seems reasonable to think any browser at level 7 should have full HTML5 compatibility. Level 8 should have IndexedDB as the built-in database.

I haven't spent a lot of time considering this idea, but I'm attracted to the stability that could be gained from having versions somewhat mapped to implementation standards.


(Disclosure: I work for Mozilla.)

Anthony Laforge, the Google program manager in charge of Chrome releases, was kind enough to give a talk at Mozilla's all-hands meeting in December. He gave a version of this presentation: http://techcrunch.com/2011/01/11/google-chrome-release-cycle...

Each Chrome release is in development for 12 weeks. There are always two 12-week cycles in progress at once, staggered by 6 weeks. So users on the stable channel see a new version every 6 weeks.
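
To make the staggering concrete, here's a toy calculation (the dates are invented, not Chrome's actual calendar):

    // Two overlapping 12-week cycles, offset by 6 weeks: each cycle ships
    // 12 weeks after it starts, so a release reaches stable every 6 weeks.
    const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
    const firstStart = new Date("2011-01-03"); // arbitrary starting point

    for (let n = 0; n < 4; n++) {
      const start = new Date(firstStart.getTime() + n * 6 * WEEK_MS);
      const ship = new Date(start.getTime() + 12 * WEEK_MS);
      console.log("cycle " + (n + 1) + ": starts " + start.toDateString()
                  + ", ships " + ship.toDateString());
    }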

This has benefits much more profound than just a large number before the dot. For example, developers working on new features actually feel less schedule pressure. If a feature misses the deadline for version N, the team knows they'll have another chance just six weeks later for version N+1. And for users, the browser becomes more like a web app. Do you know what version of GMail you use? Do you notice every time they fix bugs or improve performance on Google Maps? No - every time you open the app, you simply have the latest version.

Laforge mentioned some things about Chrome development that make this work:

- Silent background updates. When an update is available on your channel, Chrome downloads it in the background and installs it. The next time you launch the browser, it is running the new version and deletes the old one. Because of this, the majority of Chrome users are up-to-date within days of each release.

- Feature switches. Development is done on trunk, but each work-in-progress feature can be disabled with a switch (either a preference, a build option, or a command-line flag); see the sketch just after this list. This lets developers work for several cycles before turning a feature on for the stable channel. It also lets product management disable a new feature if bugs are found during beta, without changing any code. Web startup folks might recognize this as the "always ship trunk" pattern used by sites like Flickr: http://news.ycombinator.com/item?id=1463751

- No support for old versions. Instead of deciding which "version" to install, Chrome users decide which "channel" to follow. If you are a developer and want to test upcoming changes, use the canary or dev channels. If you want a more tested browser, use the beta or stable channels. New features roll out to each channel in turn. If you are on one of these four channels, you get updates. If you are not, then you don't. (Just like you can't choose to run an old version of GMail or Google Reader.)
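
As referenced in the feature-switches item above, here's a rough sketch of the pattern in JavaScript (the flag name and render functions are invented for illustration):

    // In-progress code ships on trunk but stays dark until the switch is
    // flipped; turning it back off requires no code change.
    const DEFAULT_FLAGS = {
      newTabPage: false, // off by default until the feature has baked
    };

    function isEnabled(name, overrides) {
      // a preference or command-line flag can override the default
      return name in overrides ? overrides[name] : DEFAULT_FLAGS[name];
    }

    function renderNewTabPage() { console.log("new tab page (experimental)"); }
    function renderBlankTab() { console.log("blank tab (shipping behavior)"); }

    function openTab(overrides = {}) {
      if (isEnabled("newTabPage", overrides)) {
        renderNewTabPage();
      } else {
        renderBlankTab();
      }
    }

    openTab();                     // shipping behavior
    openTab({ newTabPage: true }); // dev/beta channel turns the flag on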

Mozilla is just starting to talk about the new roadmap and how to achieve it. Chrome's experience is useful, and it was really helpful of Laforge to share it with us. But we're different both technically and organizationally from Chrome, so just blindly copying what they do would not be a good idea. The wiki shows a plan for releases every 3 months (not every 1.5 months like Chrome). We use Mercurial with long-lived project branches (unlike Chrome which uses Subversion and does all development on trunk). And our product differs in some important ways, like the ABI for binary extensions. (Even non-binary Firefox extensions may reach deeper into the browser internals than Chrome extensions.)

But we do think that the benefits for users (more frequent improvements to the browser) and developers (more predictable schedule, fewer release delays) are worth pursuing. There are benefits for web developers too, since it reduces the time from when we start implementing new web-facing features (like IndexedDB, or CSS3 transitions) to when you can deploy them to more of your users.


You can have a fast internal development cycle without devaluing the established concept of major version numbers.

I don't appreciate this because I make money from software. To me, a new major version number is:

a) an indication that there have been major changes. It's not something that increments every 12 weeks.

b) something most people reasonably expect to pay for.

I know Mozilla isn't first with this approach. It still sucks.

Edit: Interesting that it has been driven by the same company that rendered the term 'beta' meaningless.

Edit 2: Why the downvotes?


Arguably, the concept that ties major releases with major purchases is old and broken. It discourages companies from making major updates to software between such releases, as it will actually make it harder for them to make new revenue off the next major release. So you get point releases that are mostly minor bug fixes, and then a huge release every year or two that the maker hopes will squeeze you into opening your wallet, even if you're mostly happy with the last release.

Web apps are clearly going to break this version treadmill. Even if you're downloading a package to the desktop, modeling software more as a service - you pay for the right to use software for a period including all updates, rather than for a perpetual license that in reality will expire and need to be re-purchased as soon as the next major version hits.

Pay as you go is better for both users and developers. It means you can charge less up front, spend less on marketing (since the cost of trialing the software drops) and focus on delivering the _best_ experience to all your users rather than denying some features to existing customers in order to create a future revenue event.


That assumes a series of minor improvements equals a major release, but that isn't always the case. Sometimes something needs a year's work because it is a change to the core of the programme (and old features have to be re-added in a different way for the new core, and users are never happy about losing features), but in your system that gets completely disincentivised and minor improvements get prioritised. All the incentives point to bloatware: you create an expectation of constant releases that update or add features, so you have to fulfil them, but you never have long enough to change the core and get back to where you started feature-wise. That, or they cheat and don't meet expectations of what version changes are, and instead of point releases that are "mostly minor bug fixes" you get version changes that are just that, and then they need a new way of marketing that "huge release every year or two".

"...the maker hopes will squeeze you into opening your wallet, even if you're mostly happy with the last release." I don't understand, what is the problem with that? They can hope all they want, if you're happy with the last release you don't need to update in the "old fashioned" system, it's yours forever. In the "new" subscription system you could be perfectly happy with the product as it is but you are forced to keep paying for updates just to use it.


"Changing the core" isn't generally any more encouraged by either system, if it doesn't result in marketable features.

The idea of the "complete overhaul" is something engineers love to dream about, business managers hate to do and end users generally just don't care that much about. That's why they happen so seldom - and usually only if the existing package is a pile of unmaintainable crap.

The shrink-wrap model encourages adding more bullet points to the outside of the new box over actually improving the day-to-day user experience. The service model encourages making your actual customers happy with their product AFTER the purchase, so they keep buying.

The whole concept of a large, up-front fee is driven by traditional mass-media marketing strategy: spend a bunch of money making something sound really great to buy, sell it for a bunch of money up front so you can get a return on your advertising spend. Don't worry about what happens after that.

"I don't understand, what is the problem with that?"

The problem with that is if there is some small feature change that would really improve the current product (say supporting an additional new file format on import) there is little reason for the maker to add it to the old release. Instead, they lump it in with the new version and hope it forces you to buy a whole new license. So a feature that might only add a small marginal cost but would make current users happy for longer doesn't get released to them.


Firstly, I said nothing of a "complete overhaul"; core changes are, for example, the internet moving to IPv6, or OSes moving from 32-bit to 64-bit.

Anything that is constantly updated with anything more than bug fixes will become "a pile of unmaintainable crap", or just obsolete if the core isn't changed. Core changes normally happen before that point, though; that is, if business managers' incentives aren't all wrong. A fairly recent example of that was Twitterrific for iOS: they felt it was heading towards being a pile of crap, so they had to stop and go back to the core (even though it meant unhappy current customers).

Users not caring about it unless it immediately comes with new features, or at least all the old features, is exactly my point! And if what the user is paying for is updates, they are absolutely not going to accept losing any features (and not going to be happy with not updating, since that is the whole reason they are paying in your model). If the user owns a version and sees that the next version doesn't have features they need, they'll stick with the old version until the new version has those features. In your model they have already paid for that update.

"So a feature that might only add a small marginal cost but would make current users happy" Prices aren't just based on costs, they are usually(in the IT sector) based on how much more they "make users happy", in some cases they are almost entirely based on that, clothes, shoes and apple products being the most obvious examples. Which to repeat in another way is the problem with core updates in your system, they are high "cost" but they only prevent the programme from going to shit so they don't "make current users happy"(just prevent future unhappiness).

That's not to say that your model doesn't work in some cases; anti-virus programmes need constant updates but very seldom core changes (although, oddly enough, at least for Norton it's much cheaper to buy the newer version on Amazon or in a shop and get continued subscription that way than to renew a subscription). And you mentioned GPS apps; the same applies to them, no core changes needed. Angry Birds on iOS is absolutely not a case of it, though: you don't pay for new levels, new levels are free, and they're an example of the opposite of what you're saying happening. You pay upfront, and they have continued adding levels well beyond the point where it would have been reasonable for them to make an Angry Birds 2 and put the new levels in that. It is a special case, but it is the traditional model just with a new attitude (on iOS; on Android it's the ad revenue model, which is different again).


Sounds great but I am never going to pay a subscription for my IDE, or my FTP program or my Twitter client. Relationships require trust and effort from both parties and it's simply not a perfect model for every piece of software.

I don't want a relationship with a company. For most apps I want to buy and own a thing as it is now.

This "old and broken" model persists because it's something that customers get instantly and can commit to knowing in advance what their full investment will be.


With Android apps (and iPhone I presume), it seems that you pay once and automatically get any available updates. I'd guess that would work reasonably well for small utilities, so long as they continue to get new customers.


That hasn't always held true for some of the major iPhone apps, at least - when there's a major version release, they sometimes require a new purchase.

Also, iPhone and Android apps have the potential for ongoing in-app purchases by users. So you buy the GPS app once for a very low price (or free) but you have to keep buying the data updates if you're using it. Or new levels for Angry Birds.


I don't mind faster releases if they're doing things we traditionally associated with minor version numbers, such as performance improvements or slight UI tweaks.

I really hope they don't change any rendering components at the same pace, though. I already have to strike any contractual obligation to test/support Chrome in commercial web development projects, because it's a moving target and Google do release breaking changes. If Mozilla browsers go the same way, then I really will start coding for the nice, stable, standards-friendly world of IE -- and that's never a statement I thought I would write on a serious forum with a straight face!


The plan is most certainly to make Gecko updates on a 3-month cycle. That means new features, bugfixes, etc.


Can you give some examples of HTML/CSS-compliant layouts or layout features that were broken by changes in Chrome or Firefox?


Recently I worked on a website with a lot of forms, and decided to use HTML5 form validation to make my life easier, with a JS fallback for older browsers. It turns out that WebKit added support for HTML5 form validation, then found a problem, so they switched off the implementation without switching off the exposed API. So every JS fallback that tests for the presence of the HTML5 methods on form elements will assume that HTML5 form validation is implemented, and leave the validation up to the browser... which doesn't actually do it at all.

That means server-side validation, which of course one needs to do anyway. But it's not too hard to imagine somebody making a site during the brief window when Chrome supported HTML5 form validation, and then discovering their site broken in Chrome a few months later.
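
For the curious, the fallback test being described looks roughly like this (validateWithJs is an invented placeholder):

    // 2011-era pattern: if the browser exposes the constraint validation
    // API, leave validation to it; otherwise validate in JavaScript. This
    // breaks if the API is present but the browser no longer blocks
    // submission of invalid forms.
    var form = document.querySelector("form");

    function validateWithJs(f) {
      // placeholder fallback: every [required] field must be non-empty
      var fields = f.querySelectorAll("[required]");
      return Array.prototype.every.call(fields, function (el) {
        return el.value !== "";
      });
    }

    if (typeof form.checkValidity !== "function") {
      form.addEventListener("submit", function (e) {
        if (!validateWithJs(form)) {
          e.preventDefault(); // block the submit ourselves
        }
      }, false);
    }
    // else: trust the browser -- the assumption that burned people here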


They didn't "switch it off"; they merely turned off form-submission blocking when the form was invalid. It's back on in Chrome now, because they have the error notification bubble. The constraint validation API, the error pseudo-classes, and the extra attributes all work in Safari.


If the answer to "are the contents of this form valid" is always "yes", I don't really care how much of the validation API the browser supports; it's effectively going to waste.

I'm glad to hear it's turned back on; I hope the form-validation interface is as sleek and professional-looking as the one in Firefox 4.


H.264 is scheduled to be broken in Chrome in exactly 6 weeks (given that Chrome 9 came out this week).



If the software is free, b) doesn't really apply. But I agree that a major version number should mean something - see http://semver.org/

Why not do fast development cycles that result in point releases? That, combined with background updates like Chrome does, would be great.


> Do you notice every time they add fix bugs or improve performance on Google Maps? No - every time you open the app, you simply have the latest version.

Since you asked... not every time, but often. You see, it's apparent from my experiences that Google makes incremental changes to the live system.

So, somewhat frequently, I'll see mis-loaded pages. Sometimes I fire up the web inspector and see that someone's typoed a JavaScript function, or a CSS element. Very annoying. This happens with GMail and GMaps fairly often.


For clarification: are all the different channels simultaneously present in the stable release that regular Joes download and install, just hidden behind pref/build-option/command-line-flag switches?

For example, when you say developers don't have to decide which version to install, but only which channel to follow, is that how they switch among channels - just restart the stable version using a different switch?


No, the update channel determines which code is downloaded to your computer. The "stable" and "beta" channels receive builds from release branches, while the "dev" and "canary" channels receive snapshots from trunk.

Regardless of your channel, you can turn on different features in Chrome by going to "about:flags" (or sometimes by passing command-line switches) - but only if those features are in the build you are running.


Chrome auto-updates and generally always looks the same. I can't imagine why anyone -- users or developers -- would care what version number it's up to.


> I can't imagine why anyone -- users or developers -- would care what version number it's up to.

One reason is that while Chrome's UI may look much the same, its behaviour often changes significantly, and not always for the better.

They broke CSS3 rounded corners just as web designers were starting to use them instead of graphics.

They made a political decision to remove H.264 support just as the HTML5 video tag was starting to gain traction.

There is a reason technical communities develop standards. Unfortunately, the web development community has completely lost the plot in recent years. The W3C have become so slow and politicised that they are now irrelevant. The browser vendors are moving so fast that no-one can keep up.

No-one can actually use all of these state-of-the-art features in serious projects anyway, because they don't work in most browsers. As keen as web developers are for long-standing pain points to get fixed and to play with new toys, to get the job done you still have to use the old tried-and-tested techniques anyway as a fall-back for all the browsers that don't support the latest cool stuff. If you have to do that anyway, there isn't much point to using most of that cool stuff in the first place.


Can you elaborate on broken rounded corners? They seem to work just fine for me, always have.


If you search the Chromium bug database under "rounded corners", you'll find various issues related to poor antialiasing or worse, with screenshots attached.

Basically, rounded corners weren't rendering smoothly, and if there was a border applied then you could even get the wrong colours showing, which looked awful.


My theory is that they wanted to avoid PHB types saying 'Internet Explorer is at version 9 - this new Google browser, only version 3? Is it ready for the enterprise?'.


That's why Microsoft called the new Xbox the "Xbox 360" instead of the Xbox 2: because it would _seem_ a less powerful machine next to a PlayStation 3.


This is also part of why MS were obsessed with using calendar years for version numbers, Windows 95 and so on. It makes things seem obsolete automatically with the passage of time.


There's that, but it probably fits better into their development model.

If they're continually adding features and not breaking backwards compatibility on a "fixed" timeline, then the major version jumps that Firefox, IE, etc. make seem more pointless than anything.

If I've got this application that we're continually developing on and we're using a 2 month lifecycle for development ... at what point does it make sense to jump a major version? Never? What's the point of it then?

It's the fact that we've become accustomed to major versions representing periods of time.

Version numbers should be representative. If it were the case that they used the minor number to represent their two-month cycles, they'd be at 1.11. What's the point of the 1 in that scenario? It's redundant.

The major version increments fit in better with their development lifecycle, so why not use it?


I personally find larger version numbers a bit odd. Generally people tend to work better with smaller numbers. I think the Ubuntu system is pretty good, use the major version for the year and have the month as the minor version. The Chrome 2 month releases could go 1.1, 1.3, 1.5, etc. That way they wouldn't chew through so many numbers. Ultimately I guess projects must go through a reset of their versions before they get to really ridiculous numbers.


Chrome's 6-week release cycle (with no support or security updates for old versions) is probably a much harder sell to enterprise IT departments than a low version number would be.


I guess, but I don't even know what version number of Chrome I'm currently running, and I like to think I'm more tech savvy than the average PHB.


Exactly. In the ideal world, I would simply create a web app, and it would run on all browsers. No more keeping 3 VMs just to do some browser testing. Alas, until there is a high enough price for not following the standard to the letter, we won't see this happen.


I doubt that the Chromium authors are concerned about trying to catch IE by having a larger version number. Their development calendar (https://sites.google.com/a/chromium.org/dev/developers/calen...) looks as though they have a rigid two-month schedule per version number.

I myself am already running version 11 (screenshot: http://jhn.me/4RYO) via their nightly builds.


But why have they chosen to version their software like this, in contrast to every other browser? They could have gone with even numbered point releases.


As per the last paragraph of this post: http://blog.chromium.org/2010/07/release-early-release-often...

"Please don’t read too much into the pace of version number changes - they just mean we are moving through release cycles and we are geared up to get fresher releases into your hands!"


Thanks, but that actually addresses how they will be moving the whole version numbers up even faster than before, not how they've chosen to use whole version numbers to represent fairly minor milestones, in contrast to established practices.


Would you prefer Linux/SunOS-like version numbers where the major version never increments? When you do a release every 6-8 weeks, they're all going to be minor.


I guess at this point version 9 looks OK. You have MSIE 9, Opera 10 etc and we've grown accustomed to those. It's up there at the same level and not particularly out of place.

Now look back: the public stable release of Chrome 1 was on 11 December 2008. That's roughly 5 up per year. In another year it will be 15 and then 20. After a while it will start looking silly (like it did with MS Office) and I don't think the question is if they will stop with this aggressive version-numbering, but when.

They now have beaten IE to the release of version 9, they will beat Opera for version 11. I'm guessing once they have beaten all the other browsers they will stop and get back to point-releases like everyone else.

But not until they are ahead of the curve and can say "Your browser only goes to 9 or 10? My browser goes to 11!" For lack of better words: pointless or not, I'm guessing they want their browser to be the one which goes to 11, Spinal Tap style.

The number is all a mind-trick and you have to be a fool not to see the game being played here.


I disagree; the number is there to indicate a version, nothing more. The version number in Chrome is not exposed to users in a marketing way.

Google simply decided to go for big version numbers, because what does switching from 2.95 to 3.0 mean if you release new features constantly and major steps thus never happen? Sticking to decimal-point version numbers makes no sense in this case.


If you use semantic versioning (http://semver.org/), switching from 2.95 to 3.0 means you made backwards-incompatible changes. Major features could still happen in a point release.
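
As a rough sketch of that rule (simplified; real semver also handles pre-release tags and the 0.x case):

    // Under semantic versioning, versions are API-compatible only when
    // their major components match, so 2.95 -> 3.0 signals breakage.
    function sameMajor(a, b) {
      return parseInt(a, 10) === parseInt(b, 10);
    }

    console.log(sameMajor("2.95.0", "2.96.0")); // true: compatible
    console.log(sameMajor("2.95.0", "3.0.0"));  // false: breaking change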


Yeah, this is what I wrote about last week.

Until now, every version of Firefox had significant changes -- visual changes, changes to the way you interact with the browser.

It now sounds like Firefox will follow Chrome. It will keep the same UI, and simply add new features.

Which is fair enough, I guess, if the browser is to become the next platform (for Web apps). It's time for the browser to 'fade' into the background, to give Web apps focus; to empower the apps, rather than interfere with their operation -- much like any other OS!


I don't think 95% of users care or know what version their Chrome is... especially considering that it updates automatically.


Chrome is a pretty spankin' good browser, but its version-numbering system has bugged me; it's like they're only doing it to seem like they make advances quicker than other browser vendors.


Not true. The core elements of the number are major and minor. Major represents a new stable release from the trunk. Minor is a subsequent release from that branch. We almost never do minor. The notable exception was 4.1, which came off the 4.0 branch.

We don't emphasize version numbers in our marketing because consumers are almost never aware of what version of the browser they're running other than "latest". Do you know what version of GMail you're using? Neither do I. With this in mind we chose a predictable versioning scheme even if it's somewhat unconventional in the world of desktop software.


One interesting thing to note... Chrome operated on a four-releases-per-year schedule from beta launch until late 2010, before switching to the interleaved 12-week cycle mbrubeck mentions. While the quarterly release process represented a definite improvement over the one release every 12-18 months we had been familiar with before, it still had some drawbacks.

The period of time required to put together a release was still sufficiently large that both engineering and management felt pressure to shoehorn stuff into releases. The cost of missing a release was waiting another quarter; for many engineers this meant missing their quarterly objectives completely. The temptation to ship unpolished features was high, and the morale hit for missing a release was still tangible.

Moving to the interleaved cycle with 6-week updates is intended to break down these problems by making the consequences of missing a release less severe to both engineers and management. We are still in the early days of this approach and the team is getting used to the adjustment.

I'm pleased to see Firefox is going to be trying to improve the frequency of their releases. Staying fresh is critical to vitality in this modern browser landscape. When I worked on Firefox prior to 1.0 we would do releases roughly quarterly but the team size was much smaller. A high level of discipline in engineering and management is required to maintain regular releases with a large team size.


I'm pretty excited about Account Manager, the identity-management feature that's scheduled for Firefox 5. It has the potential to make the web both more secure and simpler to use. Usually those goals are opposing.

Of course, the biggest obstacle will be getting people to add support for the feature to their site.


That was a feature that was supposed to be in Firefox 4, but got pushed back at some point.


"the main focus of Firefox development in 2011, is to make sure there is no more than 50ms between any user interaction and feedback from the browser"

Quite surprised to see this, judging from the fact that empirically the average human brain-hand response is around 200 ms. Only in competitive CS have I seen <100 ms response times. Sure, for instance, 20 ms instead of 50 ms network latency in a non-LAN setting makes a difference in that scenario, but only when you are not at the very top of the relevant Bell curve.

Seems like a moot bragging point. I'd love to hear from a more knowledgeable person what i'm missing.


Brain-hand response is not the same thing as a perceived delay.

Once latency becomes greater than 50ms, it becomes noticeable to humans -- aka "not instant."

Given that UI response isn't used to trigger human reflexes, you are just misunderstanding the intent.


From my first-hand experience of FPS gaming in non-amateur settings, the vast majority of average users will not be able to notice a (say) 50 ms difference in UI responsiveness. I covered that when talking about network latency in my previous post.

Of course this is just my experience, scientific findings may very well dispute my ignorance.


Correction, the vast majority of users will not be able to explain that they feel a responsiveness issue, but they do indeed feel it. Less hardcore gamers will often just blame other things -- loose controls, or ignored button presses, or what not.

I see it pretty regularly in playtests and such. Anyway, it's not as if someone can simply say, "Hey, this app took longer than 50ms to respond!" It's more of a gut feeling that something about it is slow.

They are likely aiming at the 50ms threshold because that's the point at which you really can't notice it, most of the time. At > 50ms, it becomes more and more noticeable, and by just 100ms people actively start complaining about the slowness of response.

edit:

As for the FPS gaming issue, the reality is that you don't notice it in FPS gaming because the games are built to hide it from you -- when you press your fire button, your gun fires, regardless of net latency. And that has been true for a very long time. I think the last game to really wait for server response before doing anything was Quake 3, and probably some of the Quake 3 engine games.

I could never stand Quake 3 for that reason, if you had anything higher than 30-40ms it started feeling like you weren't controlling the game.


Old hardcore Quake 3 player here. Q3 didn't wait for a server response, but the net-code was not that great, and the interpolation did not, IIRC, really take server responses into account. You could end up with players teleporting because of network issues. OSP was the first mod, IIRC, that tried to fix it, and it was somewhat successful.

The one mod that made great strides to fix the issues with the Q3 net-code was CPM (www.promode.org). 50-70ms to the server felt like 20-30ms in Vanilla Q3, and when you hit 20-30ms in CPM it felt like LAN play in VQ3. It fixed the niggling issues of lost packets (which caused players to warp) as well. You even had some players that intentionally downloaded things in the background to cause dropped/delayed packets so that they would warp.

Eventually the net-code in CPM got so good the community even had cross-Atlantic competitions. The team I played in had 100-120 ping vs west-coast American teams on NYC servers and it was actually playable to the point where you could have fairly fair fights with them. Except against Team Abuse... :p. Man what a schooling in team-play and lock-downs of maps they gave my team.


"Correction, the vast majority of users will not be able to explain that they feel a responsiveness issue, but they do indeed feel it."

That sits well with me. Upvoted! It is true that "clunkiness" is a valid complaint, but I've always attributed it to a combination of all latency-inducing factors.


Musicians will start to notice anything greater than about 7-10ms between keypress and sound. Below the threshold of consciously noticing the delay, it's still possible to make an instrument or UI feel subjectively "snappier" by reducing the delay further. If the browser is to host any audio-related applications, the minimum latency needs to be much lower than 50ms.


The 200ms brain-hand response time you are referring to is the time it takes to see a change on the screen, the brain reacting and sending a signal to the hand, and then actually pressing the button. What the Firefox roadmap is talking about is from the time the button was pressed until something changes on the screen. These are two different things, and they actually add up. Then there's also the final delay between the changes happening on the screen and the brain recognizing this.

So the user experience delay is the sum of three different sub-delays. Firefox wants to improve it by reducing the middle part.
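
Rough arithmetic for that sum, using the figures from this thread (the 100 ms screen-to-brain term is my own illustrative guess, not from the roadmap):

$t_{\mathrm{total}} = t_{\mathrm{brain \to hand}} + t_{\mathrm{software}} + t_{\mathrm{screen \to brain}} \approx 200 + 50 + 100 = 350~\mathrm{ms}$

The roadmap's 50 ms target attacks only the middle term.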


Brain-hand response isn't what this is about: it's about synchronicity, the notion that clicking the mouse and the effects on screen occur at the same time. It's quite easy to see (and hear) relatively small differences between two events that are supposed to happen simultaneously.

For example, of my two monitors, one is lagged by one frame. I can see this easily by taking a white window against a black desktop, moving it so that it straddles both monitors, and moving it rapidly up and down: the monitor that's a frame slower makes the window look a bit like it's made of rubber, that it's bent away from the direction of motion. And that's just a 17ms difference in two events that are supposed to be simultaneous, an order of magnitude less than your factoid of 200ms.


In addition to the other points, you need some engineering buffer anyhow. Whatever system they are testing this number on, it probably won't be the bottom of the line they want Firefox to run on, and even if it is you still want good responsiveness anyhow. Because of the wide variance in the systems that run Firefox, this is probably less a principled number coming out of extensive research than a line in the sand. But a line in the sand is fine.


I think you have it backward. It's not about how long after you think of clicking the action happens; it's about how long after you click your brain can start to see the result on screen. With that 100 ms you mention, and let's add another 100 ms for the software, you get 200 ms between the time your brain wants to click and the time it starts receiving a visual response (assuming that part is immediate, which it is not). It is actually a big selling point of Google Chrome: while a lot of it isn't really that much faster, it does seem instant, and Firefox does not (at least for some people, including me).


This might be a good read for such things: http://www.useit.com/papers/responsetime.html


From the roadmap it looks like their main focus will be on product polish. They will be adding some features to the system. These features seem to cluster around "Web Platform" features (i.e. HTML5, CSS3, social stuff, etc.) and continued improvements to their add-on system.

The polish focuses on improving interaction times for all user interactions. Their goal, as previously mentioned, is to get it down to 50ms from user action to visible response. They also say they have 50 common usage paths (identified from testing) that need to be improved.

This looks like a reasonable though ambitious plan for Mozilla, and if they stick to it they will remain competitive and relevant.


It looks like Windows 7 64-bit will be officially supported with FF5, too.

You're kidding, right? Do they REALLY not support the fastest growing OS version + platform combo?

OEMs are finally paying attention to 64-bit computing, and Windows 7 has been replacing Vista like wildfire.... and FF is still two releases away from official Windows 7 64-bit support?

EDIT:

Oops. Seems stupid Aol/Weblogs/Switched/DownloadSquad (I don't even know what it's called any more!) didn't bother to fact-check, and they're referring to 64-bit native builds of Firefox.

EDIT2:

So I'm really being downvoted because the original story got it wrong? I guess that's one way to take out your anger on inaccurate writeups...


They're not talking about "Windows 7 64-bit operating system", they're talking about "64-bit Firefox" (running on Windows 7 64-bit). One of the biggest slowdowns for this hasn't been FF's ability to run in 64-bit, but plugins (e.g. Flash) not having a 64-bit version for Windows.


>> You're kidding, right?

Nope, they're just misinformed; they read the roadmap incorrectly. What the Firefox devs mean is a 64-bit version of Firefox (i.e. an x64 process). Right now Firefox runs perfectly on both Windows 7 and 64-bit versions of Windows.


There are 32-bit and 64-bit builds of Firefox. The 64-bit builds only work on 64-bit machines -- and the 32-bit builds work on both 32- and 64-bit machines.


If anyone’s interested, there are 64-bit Windows builds at the bottom of https://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/late...

The usual provisos apply, of course: these are nightly builds and come with no guarantees, nor should they be used as an indication of the quality of the release going forward.


64-bit build for Windows?

Can you please provide a link to that, I'd like to test/use it.


Author here.

There are 64-bit nightly builds of FF4, but _only_ nightlies: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/lates...

This suggests that there won't be an official 64-bit build of Firefox 4 -- we'll have to wait for Firefox 5.

As some people comment below me, only some parts of it are working in the 64-bit space (most notably, the Flash 64-bit plug-in is still beta/alpha).


Firefox is far from becoming "more and more irrelevant". Just because you've switched to an alternate browser doesn't mean the rest of the world has. Chrome may be incrementally gaining some share, but it's mostly at the expense of IE. Firefox has almost 25% of the market, while Chrome is 5%.


Hear, hear.

I've been using Chrome / Chromium exclusively since its release, primarily due to the simple, uncluttered UI and blazing speed. It was a breath of fresh air, and I switched readily.

But something changed with the recent Firefox 4 betas. The browser feels fast, again, and the UI seems more reasonable. I find Panorama useful. And slowly, Firefox is winning me back. In time, I think it may win back other developers, too. And if it does, I could easily see a fairly stable equilibrium developing between the two browsers.

Firefox needed the competition from Chrome, and in many ways, it's starting to meet that challenge.


Chrome currently has 16.5 percent global usage share; IE has 45 percent. FF's share remains stable at 31 percent. Chrome gained ten percentage points in the last year, while IE lost ten percentage points. At its current growth rate, Chrome will pass FF in 15 months.

http://gs.statcounter.com/#browser-ww-monthly-201002-201102


And just six short years after that it will eclipse 100% usage!


Chrome doesn't run in 64-bit mode on either Windows or OS X.


One subtle message I got out of this: Web applications are a good choice. Stick with it.


Honest question, can you possibly expound on how you got that message? Does it mean web apps are necessarily better than developing for browsers via extensions?


Probably because Mozilla explicitly said their focus would be on web apps, particularly javascript improvements.


Yes, that too. But even if not for the explicit mention - rapid development of a certain platform is usually a sign of a strong and lasting ecosystem of applications for that platform. And browsers are certainly an important component of the web platform.


When I read that headline, the first thing that came to my mind was http://en.wikipedia.org/wiki/Year_of_the_Four_Emperors

Presumably, the course of Firefox will run a little less bloody. =)


Does it seem like Mozilla is lowering the bar on what constitutes an integer release (4.0, 5.0, etc.)? It was June 2008 when Firefox 3.0 came out, and over two and a half years later, 4.0 still hasn't come out.


No, it seems like they want to ship major releases more often and don't want to think hard about what version increment the new release's features merit.


It's just "Firefox".

Mozilla has been talking about faster iterations for a while now. 3.6 was, I think, supposed to be the start of that initiative; 3.5 was June 2009, and 3.6 was January 2010, even after slipping by a month or two.

You could argue that the platform version change from 1.9.1 to 1.9.2 isn't that big of a difference compared to the jump from 1.9.2 to 2.0, but you have to keep in mind that the decision to dub the-platform-behind-Firefox-4 as "Gecko 2" wasn't even made until over the summer.


If version numbers weren't useful for marketing, I'd suggest using a hash of the codebase as a true version identifier. It'd be a consistent umpteen alphanumerics long (no more squabbling over who has the biggest number) and could actually serve a technical purpose under the hood.
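
A toy version of that idea in Node.js (the ./src path is made up; real tooling would essentially be a git tree hash):

    // Derive a version identifier from the code itself: hash every file,
    // in a stable order, into one digest.
    const crypto = require("crypto");
    const fs = require("fs");
    const path = require("path");

    function treeHash(dir) {
      const hash = crypto.createHash("sha1");
      for (const name of fs.readdirSync(dir).sort()) {
        const p = path.join(dir, name);
        if (fs.statSync(p).isDirectory()) {
          hash.update(treeHash(p));        // recurse into subdirectories
        } else {
          hash.update(fs.readFileSync(p)); // mix in file contents
        }
      }
      return hash.digest("hex");
    }

    console.log(treeHash("./src")); // e.g. "9f2c4b..." is the "version"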


Great, now I'm going to need to learn a bunch of new CSS selectors.


The new stuff in CSS3 is so useful that this is an opportunity, not a curse!


I find it awfully sad that they are turning Firefox into Chrome. If I wanted to use Chrome then I would have switched already.


Isn't the development cycle a bit too fast? Four major versions in a single year.


Both Firefox and Chrome are hoping to catch Emacs before 2014.


My browser hasn't even caught up with Windows 98. :(


I just spilled coffee all over myself


Not a vi user, then :P


Why? Major versions are just a number. How many major versions of Chrome were there last year? I haven't looked it up but I bet it was more than 4.


Google Chrome's version number is meaningless, as Google itself has said: http://googlesystem.blogspot.com/2010/11/google-chromes-vers...

This doesn't, however, mean that other browsers' version numbers are meaningless too.


That link is an unofficial blog. Either way, the point stands: they're still just arbitrary numbers. Back in the day, when software was paid for by version number, an increase of 1.0 was a marketing tool that usually meant a significant update. Browsers are free and are beginning to implement auto-update, which makes the version number of a browser even more insignificant.


I won't be excited until they reach 9 or 10. That's when it really gets good.


Oh. So they are going for the "completely fracture the user base" approach.

Meaning since people will learn that upgrading major versions will constantly break things, they will stop upgrading and consider other browsers like Chrome.


Yes, Chrome, to avoid changing major versions.


LoL, okay so they will go back to IE8 for infinity.

Do Chrome major version number changes break their plugins?

I don't understand this rush to double-digit version numbers.

Do they think that because IE is at version 9, newbie users will also want their other browser to be at version 9?



