This trend of design fashions negatively impacting usability has become so blatantly visible across the web (and apps, and desktop interfaces) during this century that I've become quite curious what psychological/organisational effects are at force.
Are there historical examples of the same tendency that can be examined? Tools, signage, forms or public spaces becoming a progressively more difficult and less usable mess under the aegis of "making things easier for users"?
Surely somebody has done a study of this effect. In web and software design it's particularly hilarious (in an I-want-to-cry way) since so much good sense has been written (Tufte etc) which is completely discarded by these trends. I expect the same is true in architecture, only I'm less familiar with the literature of that domain.
It discusses how Gropius started off with more of a continental arts-and-crafts outlook, but that increasing competition for the intellectual and political purist high ground led to what we call, despairingly, the International Style.
A similar dynamic has played out with software UX: people were producing garish (but usable) UIs with drop shadows, etc., and along came the anti-bourgeois puritans. They had a point, of course (they always do), but their solution was worse than the original problem.
> Users are forced to explore pages to determine what’s clickable. They frequently pause in their activities to hover over elements hoping for dynamic clickability signifiers, or click experimentally to discover potential links. This behavior is analogous to the behavior of laboratory rats in operant-conditioning experiments.
It amazes me how much time people spend on their mobile phones. Tiny screen, half of which gets taken up by a crappy keyboard (on-screen keyboards are technically quite amazing, but a normal keyboard is way more usable).
If I want to book something, or basically type more than a few lines, give me a desktop (or laptop) any day.
The analysis of the 'mobile footer menu' case study seems a bit misguided.
Sure, everything on that image looks about as clickable as everything else. But the user didn't click on everything -- he clicked on 'Shop', repeatedly.
Why?
The article mentions 'language' as a 'contextual clickability clue'... but language is much more powerful than the cues whose absence the page laments. The non-existent visual (un)clickability signifier doesn't help... but language is the overriding issue in that experience.
It's widely held that if something is clickable, it should have a clear & reliable 'information scent' -- it should tell you what clicking it will do. People don't click on things because they're clickable -- they click because they think it'll do what they want.
The converse is also true -- if something has a clear 'information scent', it should be clickable, and should do what it implies it will. Information scent makes people want to click on things -- and they'll be disappointed even if they immediately realise they can't click.
In the case study, the user clicks on 'SHOP' because 'SHOP' is where he wants to go. (there is a 'shop' page on the site, BTW). Clearer styling would make the experience less bad, but the only real solution is to make SHOP clickable.
This is my understanding of how we ended up with today's flat UIs with fewer affordances, even though many (most?) users dislike them. HN can tell me if I was told incorrectly.
1) GUIs in the 1980s, like MS Windows 1.0/2.0, had flat design[1]. The buttons were flat. No drop shadows. The "flatness" was not a deliberate design intention but simply the first iteration of a graphical UI to supplant the text-mode DOS console.
2) In the 1990s, MS Windows 3.0/3.1 introduced 3-dimensional sculpted buttons. A visible improvement in UI affordances. Windows 95 further extended the 3D look so that whole window edges, etc. had a sculpted look.
3) This era includes the Apple Mac OS X GUI ("Aqua") that had a 3D look and buttons with depth. It includes the Steve Jobs quote, "one of the design goals was when you saw it you wanted to lick it."
4) The zenith of affordances was reached with Windows Vista/7 "Aero Glass", where windows could cast translucent drop shadows on the desktop. Effects like that required heavier computation, such as alpha-channel compositing, and hardware assistance (a capable graphics card) was required. Desktop computing power (both CPU and graphics chip) to deliver all these GUI effects was not a big deal. This was the time period before "skeuomorphism" became persona non grata.
5) iPhone/Android mobile phones came onto the market in 2007/2008 with low-powered CPUs and precious battery life. Now things like painting heavy 3D UIs and rendering translucent drop shadows were seen as a massive extravagance -- a waste of CPU and battery power. With Windows 8 in 2012 and iOS 7 in 2013, everybody removed the last 20 years of 3D GUI affordances and made everything flat again.
6) To keep the GUI consistent between mobile phones and desktops, Microsoft made the desktop flat as well, even though there is an abundance of computing power there. Therefore Windows 8/8.1 looks like Windows 1.0 again[1]. Apple's Mac OS X is also flatter, but at least they kept windows casting drop shadows.
What's interesting is that the marketing-speak from Microsoft/Apple/Google about "flat design" talks about it being "modern", "clean", and "fresh". To me, it seems like it's really about the current limitations of mobile phone CPUs and forcing UI consistency onto desktop users. Basically, it punishes desktop users by enforcing the lowest common denominator across device platforms.
Hopefully, we'll get a new trend where everybody goes back to styling GUI elements with some hints of "clickability", without gratuitous skeuomorphism. We just need some balance.
To me, the flat UI of Windows 8 looks more like that of a Web site than a vintage Windows version (although I've only used Windows as far back as 3.1; I used AmigaOS/Workbench before then, which shifted from a flat look in v1 to a sculpted look in v2+).
In Web design, 3D buttons and rounded corners were a consistent pain to implement, at least until relatively recent CSS features (border-radius, box-shadow, gradients). The usual trick was to have a different background image for each corner, a different repeated background image for each edge, and a flat colour behind the text; all of this would be constrained to line up by a hierarchy of divs.
On the other hand, flat rectangles have always been easy, so they could be seen as the Web's "default look" (if we ignore native controls).
I see the trend towards flat (native) UIs as a way to blur the distinction between Web sites and native applications. This certainly makes sense on mobiles, where many native applications are just alternative UIs for Web sites (which might even be implemented in HTML/CSS/JS!). In that sense, it's not so much that desktops are regressing towards the battery-efficiency of mobiles, but towards the lowest common denominator of Web UI.
> 4) The zenith of affordances was reached with Windows Vista/7 "Aero Glass", where windows could cast translucent drop shadows on the desktop. Effects like that required heavier computation, such as alpha-channel compositing, and hardware assistance (a capable graphics card) was required. Desktop computing power (both CPU and graphics chip) to deliver all these GUI effects was not a big deal. This was the time period before "skeuomorphism" became persona non grata.
I would say the introduction of hardware-accelerated translucent UIs was a blessing and a curse for usability. The ability to cast shadows added a useful visual cue to the ubiquitous "2.5D" floating window manager.
Simultaneously, this new rendering ability allowed application windows themselves to become translucent, undermining the 2.5D concept and sometimes making content illegible. Another technical capability was added to "fix" this -- blurring the transparent sections -- when IMHO it would be better to avoid transparency for anything other than shadows.
You're right up to about 4). The iPhone had more power than 95-era computers and definitely could render buttons with bezels.
It's just a design fad. The iPhone removed a few decorations from the GUI to make it fit better on a small screen, then people just had to out-minimalise Apple, and it all culminates in Windows 10's display-settings dialog, where the resolution setting is hidden behind a clickable "advanced settings" label typeset in a tiny grey font on a grey background.
>You're right up to about 4). The iPhone had more power than 95-era computers
Well, Win95 computers got never-ending electricity from a wall outlet. A mobile phone expending CPU cycles on "unnecessary" GUI styling is wasting battery power.
You're right about the other justification: a mobile phone's UI buttons shouldn't waste pixels on boundaries (e.g. bounding rectangles); they just need to be spaced far enough apart for fingertip width. Unfortunately, this design makes it impossible to distinguish text that's just a "status" from a button that executes an "action".
Those pixels are always drawn anyway. And a phone's biggest battery killer is the screen which... is on if you're looking at it. So, either your biggest power draw is already on, or there's no reason for you to put any pixels on screen.
In terms of the LCD being "on" vs "off", yes, the pixels are always "on". However, the pixels for extra GUI effects must still be computed. For example, Apple recommends turning off the "parallax" feature to help conserve battery power: parallax requires CPU computation to draw, even though those pixels are always "on".
It's the same concept as disabling screensavers on the thousands of servers in datacenters. Even though those pixels are always on, computing the drawings in screensavers consumes CPU. Multiply that waste by thousands of servers and the company is paying extra electricity bills for no reason.