Blink and WebKit are less sexy and use tetrasexagesimal, a.k.a. 1/64 (https://trac.webkit.org/wiki/LayoutUnit). 1/60 was tried originally, but it was switched to 1/64 for performance and to avoid precision loss. The reason it's faster is interesting: floating point values are also stored as a number with a base-2 exponent, so to convert to/from floats you can use shifts instead of needing division. The differences between 1/60 and 1/64 were important 10 years ago but are pretty minor now.
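If it helps to see that concretely, here's a rough sketch of the 1/64 fixed-point idea; the names are invented and this is not WebKit's actual LayoutUnit code:

```ts
// Illustrative 1/64 fixed-point conversions in the spirit of LayoutUnit
// (names invented; not WebKit's real code).
const FRACTIONAL_BITS = 6;                 // 2^6 = 64 subdivisions per pixel
const DENOMINATOR = 1 << FRACTIONAL_BITS;  // 64

// Integer px <-> layout units: pure shifts, no multiply or divide needed.
const fromIntPx = (px: number): number => px << FRACTIONAL_BITS;
const toIntPxFloor = (units: number): number => units >> FRACTIONAL_BITS;

// Float px -> layout units: scaling by a power of two only changes the
// float's base-2 exponent, so nothing is lost before the final rounding;
// a denominator of 60 would need a genuine (and inexact) multiply instead.
const fromFloatPx = (px: number): number => Math.round(px * DENOMINATOR);

console.log(fromIntPx(3));       // 192
console.log(toIntPxFloor(200));  // 3
console.log(fromFloatPx(1.5));   // 96
```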
Thanks for that. I was very confused as to why the value was about 7% larger than 16,777,216 (2²⁴), and not exactly that.
But assuming a binary representation (!), having the subdivisions be slightly fewer than a power of 2 (60 being slightly less than 64) means that the whole-number limit will be slightly over a power of two, meaning that 60 × 17,895,697 should be a power of 2 (give or take a bit of rounding).
And if you do the multiplication, it's 1,073,741,820, which is... huh. 2³⁰ (- 4).
Why not 2³¹ or 2³²? Looks like they could at least double the CSS length limit there, even while continuing to allow negative lengths.
Edit: As we can see from the parent-linked source on line 36, there is:
# define nscoord_MAX nscoord((1 << 30) - 1)
which confirms the limit, but doesn't say why 30 is used instead of 31.
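For the curious, the Firefox limit discussed above falls straight out of that define, assuming Gecko's 60 app units per CSS pixel (a quick back-of-the-envelope check, not real Gecko code):

```ts
// Back-of-the-envelope: Firefox's px limit from nscoord_MAX, assuming
// 60 app units per CSS pixel (illustrative only).
const NSCOORD_MAX = (1 << 30) - 1;   // 1,073,741,823
const APP_UNITS_PER_CSS_PX = 60;
console.log(Math.floor(NSCOORD_MAX / APP_UNITS_PER_CSS_PX)); // 17,895,697
```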
For those wondering why this is an issue: if you are using virtualised scrolling [0] to display a very long list of items, this limits the maximum length of the scrolling area, and therefore the maximum number of items.
Say you have a view that shows all items in a database and each row is 20px high; this limits the maximum number of items you can show to 894,784. That may seem absurd, but if you have a UI that allows you to jump to a specific item in the scrolling list, you would expect to be able to do it.
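(Purely illustrative arithmetic, using Firefox's limit from above:)

```ts
// Back-of-the-envelope row ceiling for a fixed-height virtualised list.
const MAX_LENGTH_PX = 17_895_697;
const ROW_HEIGHT_PX = 20;
console.log(Math.floor(MAX_LENGTH_PX / ROW_HEIGHT_PX)); // 894,784
```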
I always assumed mega-large scrolling just used an "arbitrarily large" size, and every time there was a large scroll event, it would just determine the proportional position and then display the relevant user elements.
So I understand why this would allow you to skip this "hack", but at the same time I kinda feel like absurdly large dimensions are not something it's reasonable to expect a browser to support, because the browser has to implement a limit at some point. So it definitely feels reasonable to set a limit somewhere moderately beyond whatever you think the longest infinite-scroll search-results page would reach from somebody actually scrolling for a couple of hours. E.g. maybe a kilometer long, but not 100 or 10,000 kilometers long. (By my calculations, 17,895,697 px at 96 ppi = 4.73 km, which seems pretty decent.)
> I always assumed mega-large scrolling just used an "arbitrarily large" size, and every time there was a large scroll event
Moving things, even with "transform: translate()", during an "onscroll" event has really bad performance and will always show some level of lag.
Using the massively optimised browser scrolling gives a much smoother user experience. What you do with virtualised scrolling is limit the number of DOM nodes and, where possible, recycle them during the scroll. But they are "absolutely" positioned within the scrolling area.
So yes, you could potentially go taller by manually moving elements within an arbitrarily large scroll view, but the UX will frankly be quite crap.
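For anyone who hasn't built one of these, a bare-bones sketch of what's being described, assuming fixed-height rows and a #list scroll container (names invented; a real implementation recycles nodes instead of rebuilding them):

```ts
// Minimal fixed-row-height virtualised list: a tall spacer keeps the native
// scrollbar honest, and only the visible rows exist in the DOM, absolutely
// positioned inside the scroll container.
const ROW_HEIGHT = 20;
const TOTAL_ROWS = 500_000; // TOTAL_ROWS * ROW_HEIGHT must stay under the browser limit

const container = document.getElementById("list")!; // assumed to exist in the page
container.style.position = "relative";
container.style.overflowY = "auto";

const spacer = document.createElement("div");
spacer.style.height = `${TOTAL_ROWS * ROW_HEIGHT}px`;
container.appendChild(spacer);

function renderVisibleRows(): void {
  const first = Math.floor(container.scrollTop / ROW_HEIGHT);
  const last = Math.min(
    TOTAL_ROWS - 1,
    Math.ceil((container.scrollTop + container.clientHeight) / ROW_HEIGHT),
  );
  // Rebuild for simplicity; real code would recycle these nodes.
  container.querySelectorAll(".row").forEach((el) => el.remove());
  for (let i = first; i <= last; i++) {
    const row = document.createElement("div");
    row.className = "row";
    row.style.position = "absolute";
    row.style.top = `${i * ROW_HEIGHT}px`;
    row.style.height = `${ROW_HEIGHT}px`;
    row.textContent = `Row ${i}`; // stand-in for real row content
    container.appendChild(row);
  }
}

container.addEventListener("scroll", renderVisibleRows);
renderVisibleRows();
```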
But for virtualized scrolling, the content you're scrolling to doesn't even exist before you scroll to it. There is no moving of things, so I don't understand what you're saying.
If the height of the scroll area doesn't equal the height of an item × the number of items, you will have to change the offset position of each element within the scroll area on each scroll event. That will lag behind the scroll.
To put it another way, if the total height of all items is greater than the height of the scroll view, then as you scroll from top to bottom the items need to move through the scroll view faster than the scrolling element. To do that, the items need to be moved on each "onscroll" event, even if you are only drawing the small fraction of them that are currently visible.
Sure but there's no way around that. That's the whole point of virtualized scrolling. With virtual scrolling there's always going to be a lag in displaying. But it's also infinitesimal so it's fine.
Also if you're just scrolling by tiny amounts (with trackpad or scroll wheel) then the virtualization doesn't matter, you just add new rows as needed while leaving everything in place. It won't precisely line up with the scroll bar position but the difference will be infinitesimally small, and then you can fix it once scrolling ends.
The solution is to use a virtualised scrollbar as well, so you have a div whose only purpose is to show a scroll bar, and then when that scrolls you scale the scrollTop proportionally. E.g. dragging the scroll bar by 1 pixel might equate to moving the grid by 20 items, but this is fine because in grids of that size it's impossible to precisely scroll using a scroll bar anyway. Scrolling via the scroll wheel is unaffected so the user doesn't notice.
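Roughly like this; the element ids and the row renderer are invented for illustration:

```ts
// Proportional mapping from a "scrollbar-only" element onto a virtual
// content height far larger than CSS could actually hold.
const VIRTUAL_CONTENT_HEIGHT = 2_000_000_000; // e.g. 100M rows * 20px

const scrollbarEl = document.getElementById("fake-scrollbar")!; // assumed
const viewportEl = document.getElementById("grid-viewport")!;   // assumed

scrollbarEl.addEventListener("scroll", () => {
  const scrollable = scrollbarEl.scrollHeight - scrollbarEl.clientHeight;
  const fraction = scrollable > 0 ? scrollbarEl.scrollTop / scrollable : 0;
  // One thumb pixel may map to thousands of rows; at this size nobody can
  // scroll-bar to an exact row anyway, and wheel scrolling is unaffected.
  const virtualTop = fraction * (VIRTUAL_CONTENT_HEIGHT - viewportEl.clientHeight);
  renderRowsAt(virtualTop);
});

// Hypothetical renderer: draws whichever rows overlap the virtual viewport.
declare function renderRowsAt(virtualTop: number): void;
```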
You can work around this issue by creating a bit of separation between the height of your items and the heights used in CSS. At these heights the scroll thumb is already reduced to its minimum height, so as long as you make sure that scrolling jumps to the right point in the data, you can "borrow" from the CSS height to stop it going over the maximum.
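In other words, something along these lines (fixed-height rows assumed; numbers and names purely illustrative):

```ts
// Keep the CSS height clamped below the browser maximum and scale the
// scroll position back onto the full data range.
const MAX_CSS_HEIGHT_PX = 16_000_000; // safely under the limit
const ROW_HEIGHT_PX = 20;
const totalRows = 5_000_000;          // real data size

const cssHeight = Math.min(totalRows * ROW_HEIGHT_PX, MAX_CSS_HEIGHT_PX);

function firstVisibleRow(scrollTop: number, viewportHeight: number): number {
  const scrollable = cssHeight - viewportHeight;
  const fraction = scrollable > 0 ? scrollTop / scrollable : 0;
  const rowsInView = viewportHeight / ROW_HEIGHT_PX;
  return Math.floor(fraction * (totalRows - rowsInView));
}
```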
> It is used for displaying a search result in a table which has like a million rows. Since it is not possible to actually show all the entries, the image instead is placed in a scrollable div and the scroll position is used as a reference for which part of the results to show.
And on a 300dpi screen it's a little over 1.5km only. ;)
For “deep zoom” applications, such as microscope slide images or a “google maps” app, I can imagine that developers get the idea to let the browser do the panning/zooming. Then you could run into limits like this. But he should have tested if this works before building the app.
FWIW, Cairo (a 2D drawing API in C) has long had a similar issue. Despite coordinates being represented as double precision floating point, their maximum value is approximately that of a 16-bit integer. There is (used to be?) some comment in the code about how important this was to fix, but how incredibly hard it would be.
Fastmail shows lists of messages using a progressively-loaded list, where each item is of a consistent height (88px for me, but it can be a few other values too, depending on your configuration—I think 51px is the default). This means that the scrollbar is real and accurate, and you can seek to any point in your mailbox easily (provided your platform allows interacting with the scrollbar, which largely means “on desktop platforms”). But this does cause problems for very large mailboxes, because browsers only support finite lengths.
A few years back, while I worked at Fastmail, we had a ticket come in from an IE user that they could suddenly only access the first few messages in their mailbox. Trouble was they’d gone over IE’s limit, and IE just ignored the entire height declaration in that case, and so you ended up with only the initially-rendered list items available.
The limits I found:
• Firefox: ignores declarations that resolve to a value higher than 17,895,697 pixels (which is a bit more than 2²⁴).
• IE: ignores declarations that resolve to a value equal to or higher than 10,737,418.23 pixels (2³⁰ − 1 hundredths of a pixel).
• WebKit: clamps values somewhere around 2²⁵ (~33,554,432) pixels; clamping means you don’t need to worry about it so much, since that was the best workaround in other browsers anyway.
So yeah, it actually only took about 200,000 messages in the list to hit this limit and fall over, or subsequently just make the bottom of the mailbox inaccessible. 200,000 messages in one mailbox is uncommon, but not at all unrealistic, especially in an “All mail” sort of mailbox.
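(Sanity-checking the "about 200,000" figure against the IE limit and the item heights above; illustrative arithmetic only.)

```ts
// Items that fit under IE's ~10.7M px limit at the two item heights above.
const IE_MAX_PX = 10_737_418.23;
console.log(Math.floor(IE_MAX_PX / 51)); // 210,537 items at 51px
console.log(Math.floor(IE_MAX_PX / 88)); // 122,016 items at 88px
```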
For those in this thread saying “why would you ever want to do that?”: this is why. Real scrolling is extremely useful in small cases, and still absolutely useful on occasion as things grow even extraordinarily large. This is not a degenerate case: this is very sane. And pagination of any form would be horrible. Like pagination basically just makes old emails inaccessible except by precise search in Gmail. Fastmail’s approach is very, very strongly desirable here. Pagination is the clumsy hack, not progressive loading of a list (though I will admit that you need list items to be of consistent heights to do a good job of this style of progressive loading).
I was half expecting the conversation to derail into trying to convince the reporter that their approach is completely nuts and they should just paginate or do anything else. At least I would've felt some urge to do so.
What exactly is nuts about the approach? Would you argue that we should have stuck to high RAM and low RAM because that was necessary on some computers, a long time ago? You can barely boot Firefox on anything less than a 1990 supercomputer, much less use gmail.
It's still pretty nuts today. You would never try to display millions of records like that without some kind of pagination mechanism. It's not even a useful view for humans to take in that much data at that level of detail all at once.
I think that is completely wrong, scrolling through thousands or even millions of records is vastly more usable than clicking through pages. Of course you do not load millions of records into RAM, just the ones that cross the view bounds.
This may seem like a ridiculous complaint, but I actually ran into a similar limit in Chrome with respect to the width of a canvas and had to pull all kinds of nasty tricks to get around it.
I tried to render a piece of music horizontally on one canvas. That didn't work past a very low limit. In the end I chopped it up into multiple canvases and scroll at most two of them at a time (there is some risk of artifacts at the point where they meet, unfortunately).
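Something along these lines, roughly; the tile width and the drawScore callback are invented for illustration:

```ts
// Split one very wide rendering across several canvases so no single
// canvas exceeds the browser's width limit.
const TILE_WIDTH = 16_384;   // comfortably below typical canvas limits
const totalWidth = 500_000;  // full width of the rendered piece
const height = 300;

const tiles: HTMLCanvasElement[] = [];
for (let x = 0; x < totalWidth; x += TILE_WIDTH) {
  const canvas = document.createElement("canvas");
  canvas.width = Math.min(TILE_WIDTH, totalWidth - x);
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.translate(-x, 0); // draw in the score's global coordinates
  drawScore(ctx);       // hypothetical: should only draw bars overlapping this tile
  tiles.push(canvas);
}

// Hypothetical renderer for the music; seams between tiles are where the
// artifacts mentioned above can show up.
declare function drawScore(ctx: CanvasRenderingContext2D): void;
```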
I have hit this problem in a project of mine[0]. Users can zoom the timeline in and out, and what this does is adjust the width and the scrollLeft of the zoomed element [1]. And zooming is exponential, so you very quickly get very large values.
Obviously I have feature requests for supporting both larger and smaller time scales, so I'll have to figure out something else to do. My initial inclination is to virtualize the scrolling, so the element has a minimum and maximum width, and when the user zooms in and out (beyond/close to the min/max) I reset the view and the scroll position. You definitely have to hide the scroll bar in this scenario, as it would be jumping all over the place.
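A rough sketch of that reset idea (element names and numbers invented; not the project's actual code):

```ts
// Clamp the zoomed element's CSS width and keep a separate "window start"
// time, so zooming past the limit resets the view instead of growing the
// width indefinitely.
const MAX_WIDTH_PX = 16_000_000;     // kept safely under browser limits
const timelineDurationSec = 86_400;  // total timeline length (example)

let pxPerSec = 10;                   // current zoom level
let windowStartSec = 0;              // timeline time at scrollLeft = 0

function applyZoom(track: HTMLElement, viewport: HTMLElement, cursorX: number, factor: number): void {
  // Keep the time under the cursor fixed while zooming.
  const cursorTime = windowStartSec + (viewport.scrollLeft + cursorX) / pxPerSec;
  pxPerSec *= factor; // exponential zoom

  const fullWidth = timelineDurationSec * pxPerSec;
  if (fullWidth <= MAX_WIDTH_PX) {
    // The whole timeline still fits in one element.
    windowStartSec = 0;
    track.style.width = `${fullWidth}px`;
    viewport.scrollLeft = cursorTime * pxPerSec - cursorX;
  } else {
    // Only a window around the cursor gets real CSS width; reset the
    // scroll position into the middle of that window (and hide the scroll
    // bar, since it no longer reflects the full timeline).
    windowStartSec = cursorTime - MAX_WIDTH_PX / (2 * pxPerSec);
    track.style.width = `${MAX_WIDTH_PX}px`;
    viewport.scrollLeft = (cursorTime - windowStartSec) * pxPerSec - cursorX;
  }
}
```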
If you're interested, the code is at https://github.com/mozilla/gecko-dev/blob/d36cf98aa85f24ceef... and https://source.chromium.org/chromium/chromium/src/+/main:thi...