Don Norman's gripe applies not only to the elderly. I think good usability benefits everyone, and I do sympathise with him when it comes to the direction Apple has been taking for the past 10 years or so.
I don't think it's aesthetics vs usability that's at the core here -- I don't think at all that aesthetics and usability are somehow mutually exclusive. I think it's simply the lack of focus on the first principles outlined by Don Norman himself.
HCI used to be front and center in the collective minds of the Internet, but it slowly faded to the background. As an example, check out the dates on the articles referenced in the "Mystery Meat Navigation" Wikipedia article: https://en.wikipedia.org/wiki/Mystery_meat_navigation#Refere...
I think it's neat that our affordances are evolving (we don't need to have things looking exactly like physical buttons anymore for us to click on them). But at the same time, we should still apply ergonomic guidelines when designing interfaces, whether it's for the elderly, or not.
The problem is that on most projects, the managers end up calling the shots on what counts as good design, and the vast majority of them don't know what they're talking about -- they're just looking for whatever looks like Apple.
I used to think the designer-as-dictator was the problem, but now I believe it's the self-anointed-expert manager who believes design is merely intuition and not a rigorous field of engineering.
And IMHO Apple's design went south post-Jobs, starting with iOS 7.
The main things I didn't like:
- The flat look meant buttons didn't look like buttons anymore, so you couldn't tell what was actionable on the screen.
- Hiding stuff offscreen. This hiding of complexity also hides common, uncomplicated actions and requires extra swiping and fiddling for routine tasks.
Regardless of the merits of Apple's design, believing the discipline of design is merely converging towards it is a pretty impoverished view of the field.
-edit- Thanks for that mystery meat link, now I have a name for that.
My special gripe is removing text labels below abstract, monochrome icons. The Markup toolbar is a good example of this {expletives deleted}. That, and how Finder colour labels turned into tiny little dots.
John Siracusa's Mac OS X reviews do a good job of documenting the downfall, at least for Mac OS X (before it became the barf that is macOS).
If I understand what you're saying, it's the macOS UI that is degenerate?
I say that because the reliability of the advanced hardware (power management, performance per watt) has been increasingly celebrated with the M1 and M2 machines and macOS on them.
Personally I too get annoyed with the UI inflexibilities, and prefer Asahi so I can just control the visual aspects.
Windows became unusable for me around 8, now it's an ad-filled dumpster fire.
I'm amazed at how quickly Asahi has developed hardware support, but Linux isn't a serious option for me yet, so I'm stuck with macOS.
If somebody wants to port Haiku to the M2, with flawless hardware support, including accelerated video and power management, that would be great, and probably no more than an hour's work. Or port Mac OS X Panther/Tiger.
{Rant starts, you can stop reading now }
I offer no solution, and this isn't really a criticism, but an observation:
Linux... I'm glad it exists, but I've reached JWZ's CADT point. I'm amazed at all the hard work, but I can't use it. It's always something, and I'm too "old" or lazy to care anymore.
I've used Linux since Slackware with kernel 1.2, I remember ipfwadm, I've configured XFree86 Modelines. I bought Caldera OpenLinux, that included Word Perfect (and let you play a game while it installed). Over the years, I've used RedHat, Fedora (Core), openSuse, Debian, Ubuntu, Mandrake, and more as a daily or sometimes daily driver.
The entire concept of a "distro" is just sort of absurd, 99% the same software - Linux Kernel, GNU+misc Userland, Xorg/Wayland, some sort of desktop or GUI toolkit, and all the same open source apps.
I don't care about package managers, or init systems, or kernel versions anymore. I don't know or want to know about the merits of Flatpak or Snaps, I just want to use the damn apps.
Also a longtime, off-and-on Linux user. Using old gaming laptops instead of Apple HW, though.
My latest thing has been Solus Linux. The hardware has "just worked", the package management is genuinely original, it has a stable rolling release that feels definitely cleaner than the Debian/Red Hat landscape (I have not opened a command line for packages, not even once!), it has a MATE desktop (which they plan to switch over to XFCE, but it's basically the same to me - tried and true over bold and new), and the remaining bits of snowflake software work in a VM running whatever other OS.
So the computer has finally gotten out of the way for me, at least until I do software development. But that's one of the things I bottle up in the VM, and the only associated hassle of that is the edit/test cycle, for which I just forgo IDE functions to do fast local editing and try not to rely on a fast iteration loop otherwise. I've learned that it mostly gets you to wrong answers faster and the insights need time anyway.
I'll take your special gripe and raise you one. My gripe is not being able to get rid of the icons in favor of the text labels. There's a reason alphabetic (and similar) writing systems won out over hieroglyphics, and why we don't use both today.
And this is not just Mac OS, it's Microsoft and an increasing amount of Windows-compatible software. I wouldn't mind the Ribbon half as much if I could just turn off the hieroglyphs, I mean the icons, and keep the text labels.
I'm not sure we can say this. There are tradeoffs when you can't make assumptions about your end user. Text that is comfortably large for someone with bad eyesight might not fit enough text on screen for someone with better eyesight. Volumes for someone with poor hearing might be painful for someone with better hearing, etc. A company like Apple will always err towards the demographic that isn't likely to be on a fixed income.
We've got companies actively removing usability from their products in order to chase fads - check out Youtube Shorts and how they don't have stuff like manual tracking or volume control at all.
Maybe your argument has some merit to it, but based on where we are, I don't think it needs to be worried about too much.
Companies don't remove usability just to chase fads. They do it to exploit users.
The lack of manual tracking on YouTube shorts, or (much earlier) Instagram reels? That's not a fad, that's a "feature" - it's meant to change the way you interact with and experience the content, forcing you into a paradigm that's optimal for the vendor.
Same with other usability and accessibility features of yore - the ones that disappear first are the ones giving users flexibility and control, because the point is to funnel users into specific, optimized workflows that are most profitable for the vendor.
> Can't you just change the volume on your device?
And this is the problem with YouTube shorts—they've been designed exclusively for the mobile experience without any consideration for desktop.
On desktop it's always been customary to allow adjusting the volume on each piece of media individually, because multitasking is not uncommon. Some people will want to be able to adjust the volume of YouTube independently from the volume of the video game they're playing at the same time. Or even just turn the volume down while still having full volume alerts from Slack.
I agree, and that's what I just said. YouTube Shorts has not existed very long, but it could very well be on the roadmap to add a volume control to the desktop version.
There's not really such a thing as usability that benefits everyone. Design is defined by tradeoffs; if you think you've found a perfect solution, you probably haven't fully understood the problem.
Many blind people rely on tactile paving bumps to navigate the urban environment. Those bumps are a literal pain in the ass for wheelchair users who have to roll over them. An ATM that is low enough for a wheelchair user to reach might be too low for a tall person with a stiff back. A computer interface that seems absurdly over-simplified to a power user might still be impossibly complex for a novice.
There's a simple solution to at least the text problem: software control. On most web browsers you can embiggen or ensmallen the font with Ctrl-Plus or Ctrl-Minus. And every app has a way of controlling audio volume, and often the OS and/or hardware that the app runs on can also control the audio. So I don't think that's an issue at all.
Everyone wants good usability, but what that means varies widely. When my vision was 20/5 (really, it was amazing), I wanted small text with lots of information on the screen so I could do more without context switching. Now that my vision is a pitiful 20/20 (how do people live this way), context switching with zoom is ideal on the phone and most of the time I'd rather use a laptop.
Interfaces cannot appeal to everyone. We can do better to make magnification universal, but that's not what the article means.
It's also funny to read this article, as it assumes that older people want scooters. Most of the people who are experiencing problems are having issues with strength and balance. Scooters are the last thing they want.
Apple's switch away from the physical home button to gestures has created a usability threshold for the iPhone that many can't cross.
Intricate gestures are difficult to grok for some, and difficult to perform for others. Try using an iPhone and closing an app with shaky hands.
Currently the iPhone SE still has a physical button, but I'm worried what device I'll start recommending to older/less tech savvy people when that goes away.
iOS itself is a bit of a disaster zone too now. I see people constantly get stuck having activated the "press to edit your lock screen" by mistake, or getting confused by a constant stream of ads for iCloud, Apple Arcade etc.
It's sad because most of this poor UX is unnecessary. It feels like its origins are in Apple no longer caring, combined with running out of real ideas and getting distracted with things like widgets.
My biggest gripe is continuously changing interfaces. You've sold me. I'm a customer, I'm using your thing. Why do you want to make it difficult for me to memorize my use of your thing? Moving menus and buttons around all the time is craziness. I don't have time, cognitive capacity, or interest in finding new ways to do the same functionality from before.
Things do need to change over time, I get it, I create things too. Sometimes new functionality evolves and has to go somewhere, sometimes you find a previous design was bad and there truly is an improved layout that will help most. Fine. Those sorts of changes should converge quickly so I can memorize the result and dedicate it to muscle memory, vs having to actively look and think all the time.
> You've sold me. I'm a customer, I'm using your thing. Why do you want to make it difficult for me to memorize my use of your thing?
Typical company these days: you're who? Ah yes, you're an existing customer. You're already paying us, sunk time into learning our product, and rearranged your work or life to be at least minimally dependent on us. We can safely ignore you - it's unlikely you'll leave near-term, so our focus is much better spent on acquiring new customers.
To be clear: I hate it, but this seems to be how most software products are being developed these days - all focus is on making them dumb and pretty enough to sell to first-time users, at the expense of already onboarded users.
Yeah. You’ve chosen Apple for a reason, and since you have only one real alternative that you emphatically do not want, you’ll just have to deal with whatever we throw at you.
The two phenomena exist together. Yes, companies target Marls, but most, especially startups, seem to be more interested in acquiring new Marls than in milking the ones they already have.
This is a big part of what made me start using Linux. On Windows, there's a new way to do basic stuff all the time, and they screw around with menus that work perfectly well just to have something new.
That means any tutorials quickly get outdated, and you can spend half your mental capacity just keeping up with this crap. The number of times I've googled how to do something in MS Office, clicked an article from half a year ago, and found that one of the options it mentions doesn't exist anymore is too damn high.
Things are nicer on Linux, especially in the CLI world. You learn a little program once and use it for decades without thinking about it.
As always with Linux it depends on which distribution you're on, but that hasn't been my experience at all with distributions like Ubuntu and Mint. I used Ubuntu back when the close, maximise and minimise buttons were on the top right, and they moved them out from under me. I've tried to adjust to using Flatpaks but struggled with their very serious limitations while programs I rely on are no longer available in other package formats. I have seen them see-saw between the horrible global menu and window menus, and make wholesale changes to areas of the settings screen like display settings and mouse and trackpad settings. And I don't even know how many wildly different iterations of that horrible main menu application selector UI I have had to suffer through.
I use MATE, an old desktop environment descendant from GNOME 2. If I apply the principle of "It should last at least about as long as it's been around...", I can hopefully use it in peace :) (It'll probably look the same in another 20 years!)
I really don't see the benefit of almost anything else that came later (I did add an app launch shortcut that I rarely use). I also autohide most of the UI by default (bars and menus), so it's just there to do its basic function and allow me to focus. Performance is excellent.
That said, I think the main difference mostly comes from community-focused development, which tends to bring out genuine usability concerns (and is why I think most *nix DEs work fine).
I've been through all the ups and downs around Ubuntu and GNOME, but at least those changes generally happen at a time of my choosing. I don't go to a commonly used app one day to do something quick only to find the entire UI was updated overnight.
Of course even the CLI world has had its changes; systemd alone made decades of documentation obsolete.
As an older person, I find the CLI unusable. It is impossible to memorize all the switches; I need to consult ChatGPT at every step. I paid my dues to the CLI gods in my miniVAX days.
The recent overhaul of the Apple Watch was very egregious. Maybe I find it extra offensive since the device is physically attached to me at all times so the muscle memory is particularly strong. There are only 2 buttons on the device and they decided to completely change their behavior, throwing away 10 years of experience I had with the device. The new design isn’t even bad or anything, it’s just that the old design wasn’t bad either, so throwing it away was completely unjustifiable. It’s also offensive because I made the mistake of listening to their marketing and strapped one to everyone elderly in my life to help protect them from falls and heart failure, and now I have to help teach them a bunch of new stuff for no reason. Some of them just decided to stop using the thing instead, and I can hardly blame them.
>... getting confused by a constant stream of ads for iCloud, Apple Arcade etc. It's sad because most of this poor UX is unnecessary.
UX went from an altruistic field about making tasks easier to perform to one about tricking people into spending money, clicking away rights to their personal data, etc.
It's the logical path of end users being the product. That dynamic started with "free" services like Facebook, but now we see it even in expensive products like iPhones and Windows 11.
Not an elderly or disabled person, I probably even fit some people's definition of a touch screen wizard since I regularly use swiping keyboards without looking, but I really can't be bothered with those weird slidey gestures between apps on iOS or Android. At least on Android I can switch to regular (configurable!) bottom buttons.
Gods I miss having real customizable gestures on Android. It took me years to unlearn my gesture set, especially "double tap with three fingers" to close the current app and go home.
I used to use Xposed Edge to configure the crap out of my gestures; it was great. Now that Xposed isn't really a thing anymore, I followed the rest of the world and use my phone with two hands most of the time, using my index finger for the top of the screen when I'm one-handed.
Not terribly hard at my age, but why do I have to?
In principle, I like for example the four-finger sliding gesture to switch apps on the iPad, but it’s implemented in a way that makes it very easy to accidentally swipe to the second-next app instead of to the next app. Similarly, there’s a gesture for Speak Screen where you swipe down from the top with two fingers, but it regularly takes me more than three attempts before it doesn’t instead swipe down the lock screen or the control center. I honestly don’t understand how they think this is fine.
My mum finds the iPhone hard to use (she has early-stage dementia). She has an Apple Watch for fall detection, and I've tried turning off everything, but there's still stacks of stuff you can't turn off. I'd love to make a simple interface with "Answer Phone" and a list of people she can call, but it's not possible; once you see her use it, you realise how insanely complex the UI is. Swiping is also harder for older people because their hands are drier and less conductive. We need to keep the iPhone for the fall detection, though. I'm examining options but there's not much around.
Wow, Point and Speak looks amazing. Despite all their faults, Apple really deserves credit for bringing great accessibility stuff like this to millions of consumers.
>Detection Mode in Magnifier and Point and Speak
>In the Magnifier app, Point and Speak helps users interact with physical objects that have several text labels. For example, while using a household appliance, Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on buttons as users move their finger across the keypad.
>Point and Speak is built into the Magnifier app on iPhone and iPad, works with VoiceOver, and can be used with other Magnifier features such as People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment more effectively.
> Apple's switch away from the physical home button to gestures has created a usability threshold for the iPhone that many can't cross.
Funny you mention home buttons being more usable. I've got mild wrist pain (not RSI levels, fortunately), and I find the pressure required to press the home button disturbingly high, often resorting to the AssistiveTouch button instead.
It was better with an actual physical button. The current SE has a fake software button that doesn't move, it only vibrates at a certain pressure. It feels more difficult to press even at the easiest setting because of the way pressure detection works.
The Samsung S8 was the best compromise for this - it didn't have a physical home button, but had an area on screen where the home button would normally be that acted like the home button with a firm press of the finger.
Not sure if it had special hardware or just well written drivers, but it always worked flawlessly even with a hung app in the foreground.
IMO iOS gestures are not even that good compared to Android. The ability to go back by swiping from any edge is so much easier than reaching to the top left of the screen...
Not sure if they are to blame here though. Cars, sure. But for phones, design and software costs don't decrease with worse UX; support costs probably even rise.
Not in this article, but Don Norman also frequently rails against complex conceptual models.
The other day, my mom was complaining that her phone was not ringing and it took me forever to figure it out. I had to go to Google to find a troubleshooting guide.
The problem is that there are multiple ways to prevent a phone call from ringing. You can switch the hardware button (silent mode), you can set focus mode on (or have it set automatically), and you can mute individual people in the address book. Or you can add people to a group so that they ring even if the phone is in focus mode (but not in silent mode). There are probably other ways I've forgotten.
Already we've introduced a bunch of concepts: silent mode, focus mode, muting individual people, exceptions to focus mode, etc. And the user has to figure out these concepts just from looking at the UI. But if you don't understand the entire conceptual model, you may not know why something is not working.
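As a purely hypothetical sketch (these names and rules are made up for illustration, not Apple's actual logic), even a stripped-down version of that conceptual model shows why "why didn't it ring?" has no obvious answer:

    # Hypothetical sketch -- not Apple's actual logic -- of how layered
    # "do not disturb" settings might combine. Illustrates why the user has
    # to hold the whole model in their head to debug a silent phone.

    def will_ring(caller, phone):
        if caller in phone["muted_contacts"]:      # per-contact mute always silences
            return False
        if phone["silent_switch_on"]:              # hardware silent switch
            return False
        if phone["focus_mode_on"]:
            # focus mode blocks calls unless the caller is an allowed exception
            return caller in phone["focus_exceptions"]
        return True

    phone = {
        "silent_switch_on": False,
        "focus_mode_on": True,        # perhaps turned on automatically by a schedule
        "muted_contacts": set(),
        "focus_exceptions": {"Partner"},
    }

    print(will_ring("Mom", phone))    # False -- blocked by focus mode, with nothing visible to explain it

Four settings already give a phone that looks fine but silently drops calls; the real system has far more interacting modes.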
This problem can't be solved with better affordances or more text labels, unfortunately. Maybe LLMs will eventually save us. Instead of the user having to figure out the capabilities and UI of the device, the device tries to figure out the intent of the user.
> A feature interaction is some way in which a feature or features modify or influence another feature in defining overall system behavior. Feature interactions are especially common in telecommunications, because all features are modifying or enhancing the same basic service, which is real-time communication among people.
> Features are popular because they are easy to add and change. The dark side of features is feature interaction, which is implicit in feature composition and therefore difficult to understand. [emphasis added]
...whereas in data science or statistics, a "feature interaction" is the influence of a combination of features on the target variable, e.g. "age 55+, baseball fan and living in zipcodes X tend to exhibit behavior Y".
(Mentioning this since part of this thread is complaining about the overuse and lack of universality of jargon.)
This 'rings' with me, when you have to do root-cause-analysis to find out why your phone doesn't ring.
Conceptual complexity aside, for this case I wonder if it's almost a cultural/social problem. Before ubiquitous cell phones, you could be "disconnected", and this was normal.
Now, it may seem weird to say, 'don't call unless it's an emergency'.
Yeah, that stuff's pretty bad, and worse, on my iPhone it just appeared without warning during some updates. The phone used to work fine in a simple way, and then they nag you to update to iOS whatever, and then they slip in focus mode etc. without warning you. I'm wary of even trying to figure all the stuff out before they muck it all around again on some other update. As a workaround, I've found that putting it on 'work' mode makes it behave like a phone again.
The problem is one device responsible for too many different things. At some point it gets confusing, and sensible "affordances" become impossible with so many modes of operation.
> Designers and companies of the world, you are badly serving an ever-growing segment of your customer base, a segment that you too will one day inhabit.
That last part - "a segment that you too will one day inhabit" - is one which should be shouted from the rooftops and ingrained into folks when they are in their early teens or twenties - before they get employed as designers of any kind.
I'm deeply concerned about when my dad is forced to update to Windows 11. He's 80 years old now, bought his first computer about 40 years ago, and used it to write programs to do engineering calculations. So, a fairly technical user from the jump, and has gotten accustomed to everything from 8-bit micros, through DOS and Unix to Windows. And dammit, the shit these companies are pulling is just going to invalidate his prior knowledge and leave him confused and pissed off, calling me for help (like I know jack about how the modern hodgepodge of Windows works). When Don Norman writes that the digital realm is no country for old men, I can see it in my father's increasing bafflement, I can feel it in myself.
I think that a big part of the reason why vi (later vim) and Emacs used to enjoy dual status as the canonical hackers' text editors is because their interfaces didn't change much, so skill with them would serve you a lifetime and could be passed to upcoming generations. I recently fired up Xenix in an emulator, and found that I was quite facile in using its copy of vi to manipulate text, because the skills I'd developed on Vim on modern Linux machines translated well all the way back to that ancient editor. Vim added a lot but the fundamentals are the same.
When the interface changes, just for the sake of changing, every two years or less, how can you feel like anything you learn will be relevant?
A couple of good examples of "good usability benefits everyone":
The company OXO makes kitchen gadgets originally designed for people with reduced mobility (e.g. older people) but now popular with everyone.
The ADA: having, for example, a ramp, doesn't just help people in wheelchairs: if I have something difficult to carry (or am using a cart) or have a temporary injury that makes steps hard to navigate I'm glad there's a ramp.
In many ways I consider the vast majority of designers and architects to be working away from their putative goals, instead pursuing egotism.
Just posting this here for anyone like me that had read some of Don Norman's work, but not The Design of Everyday Things, and was confused by a certain differing use of vocabulary.
In his original coining of the term, Norman used "affordance" to mean a thing an object allowed to be done by some user, usually a human, sometimes another object. For instance, a chair affords sitting by a person. A door handle affords opening.
But in the design world "affordance" is now almost ubiquitously used to mean some visual hint added to a design element to indicate what can be done with it. For instance, in a UI, you might say that you added an "affordance" in the form of a drop shadow to show a button is clickable (probably a crappy example, me being a non-UI person).
In the later editions of Design of Everyday Things, Norman addresses this difference (perhaps we could say evolution) of his idea and term. If I remember correctly, he does not love this conflation of ideas, but has come to terms with it.
It's a bit more subtle than that. From my copy of the 2002 edition (p.9):
> Affordances provide strong clues to the operations of things. Plates are for pushing. Knobs are for turning. Slots are for inserting things into. Balls are for throwing or bouncing. When affordances are taken advantage of, the user knows what to do just by looking: no picture, label, or instruction is required.
Then on p.88
> Consider the hardware for an unlocked door. It need not have any moving parts: it can be a fixed knob, plate, handle, or groove. Not only will the proper hardware operate the door smoothly, but it will also indicate just how the door is to be operated: it will exhibit the proper affordances. Suppose the door opens by being pushed. The easiest way to indicate this is to have a plate at the spot where the pushing should be done. A plate, if large enough for the hand, clearly and unambiguously marks the proper action.
It's not just that handles afford opening, it's that they afford pulling.
I've had a quick flick through the pages on affordances, and can't see anything that stands out about the drift of the word "affordance", so that might be in a later edition than the 2002 one. (The original edition is from 1988.)
Yes, in a later version of the book, Norman differentiates between signifiers (which we might now call perceived affordances) and affordances. However, the HCI field and UX field has by and large not adopted signifier as part of its vocabulary, and most people use affordance to mean the perceived relationship rather than Norman's original definition.
It's also worth pointing out that the original definition of affordances, by Gibson, is about animals and their relationship to their environment, which can be quite broad in its totality.
Feels like there needs to be more intuitive terminology, and for it to be used only when it's helpful to the discussion, because most of the time I see the word "affordance" used, it's dropped unnecessarily into a conversation where the poster should know the other commenters aren't going to know what it means, or it devolves into a discussion about the definition (see here). I don't hear it used outside UX circles.
During UX work on projects, it's simple enough, and comes naturally to most, for everyone involved to phrase it as something like "we should add X so it's more obvious you can Y". I don't see the gain in breaking it down further outside of more academic discussions, and introducing unnecessary terminology creates a barrier to communication.
A while back someone, replying to one of my comments, mentioned affordances in C#. He simply meant procedure or function, but chose to use a fancier-sounding, more abstract word, which really confused me. There is too much misuse of pompous language where simpler, plainer descriptions work better (yes, language is also a user interface of sorts).
Right. "visual hint" is self-explanatory and cannot be misinterpreted, and seems to be a more common term. As a non-UX person I've never heard the UX jargon "affordance", but I've heard "visual hint" tons.
I don't believe this was in the original edition, but someone can correct me if I am wrong: he proposed the concept of "signifiers" to fill the definition that people were increasingly giving to "affordances".
A signifier helps to indicate the presence of an affordance that might not be immediately apparent.
I recently had occasion to reside, temporarily, and for unremarkable reasons, in a rented apartment in the Scottish town of Dundee. The lobby of the apartment building was accessed through a single door, with handles on both sides.
Adjacent to the handles, there was also, on each side of this door, a hand-scribbled sign reading “Push”.
The distinction is that in the original meaning, it meant "to make the action possible." In the new meaning, it means "to suggest the action is possible visually." The original meaning had to do with actual usability, as opposed to just visual design.
Huh, ok. If Norman said that himself then fair enough, but it surprises me. I learnt it from Bob Spence; I recall he used the example of door handles and push plates: you can pull or push the handle, but it visually invites pulling, especially in contradistinction to the plate, which clearly says 'push me' - that's essentially suggesting the action visually, not just describing what's physically possible.
I thought (not necessarily, but probably visual) design (to tell you how to use the thing) was the whole point. The original meaning, as you say, seems pretty redundant - isn't that just the same as the thing's function?
A doorknob tells you the door will open, but doesn't tell you which way it will open, so you have to add something (a signifier) to the door if you want that.
Whereas a pushbar both tells you the door will open and how to do it and it's all tied together without a sign.
It sounds like someone described a real problem (ambiguous interactions with everyday things) and someone else stole that term and said "we can apply it to the thing we add to fix the problem we created with our bad design".
Sadly, this kind of linguistic trick is quite common.
Well, with the distinction being made, it seems that a door push plate with barbed wire welded to it would be, in the current/changed meaning, an affordance that it can't be pushed (or an anti-affordance if you will), but in the original meaning... it's either neutral or an affordance that it can be pushed, because it physically can be.
Or a pull handle on a door that should actually push open - I think that was an example given when I studied it, I think there was another term for it but I can't recall, a 'misleading affordance', say. But in the supposed original meaning it's just still an affordance for pushing, because it does physically allow that, even if it looks like it should pull?
If it's correct, the original meaning just seems redundant to me, the changed one seems to make more sense and be more useful, but perhaps I still haven't understood the top level comment.
> Then there’s the aesthetic problem. When products are developed for the elderly, they tend to be ugly and an unwanted signal of fragility.
Just an observation: My main takeaway from The Design of Everyday Things was that design should make it obvious what the thing is for, and how to use it. Affordance is the big keyword. I think these mobility tools succeed in that respect. Maybe his point here is that an ugly cane makes it look like it's a tool for dying slowly, but a more likely explanation is that it's what he's saying on the surface: that aesthetics matter too. I wonder whether this is a change of heart, or just a change of emphasis for this particular article.
Making it obvious how to use it and making it look pretty are often in conflict. That doesn’t mean you can’t have both, but it’s generally hard work. The people who are nowadays being hired to design UIs tend to prioritize the latter over the former, whereas good usability requires the former but not the latter.
I've worked as a designer for a couple decades. There's a lot of bad UI out there, but it's only bad because making complex apps usable is really, really hard. And, in general, web UX and usability has come a long way in a short time. For example, small companies didn't usability test at all when I started out, unless they were way in front of the trend. Nowadays they do, unless they are way behind the trend.
I have also not observed a change that emphasizes form over function. If anything it's been the opposite, because today's product-driven world knows that websites which are easy to use make more money.
(There is this question of whether the design benefits the users, or if it is only to serve the company's bottom line, even at the expense of user happiness. These incentives lead to so-called dark patterns, but I don't call that behavior "bad" in the sense of execution, even though it is "bad" in the sense of morality.)
There's also been a ton of standardization in UI patterns, which lowers the floor on just how awful a UI can really be. Those had to be invented, and now they're relatively stable. We have good patterns now for how to make a product listing, or a detail page, or a checkout process, or an accessible form. And they are widely known. In many cases they've just been internalized by younger designers before they even start. There aren't many Kai's Power Tools style UIs out there anymore.
In general, I've observed web design getting better and better. It's easy to cherry pick counterexamples, but I would not go back to the design of the average early 00s website.
That’s not necessarily true. It’s why we have dark patterns.
> In general, I've observed web design getting better and better. It's easy to cherry pick counterexamples, but I would not go back to the design of the average early 00s website.
My point of reference is good native desktop apps that adhere to the platform conventions. Web apps typically don’t come close in usability.
> Making it obvious how to use it and making it look pretty are often in conflict.
Probably because modern design took the "less is more" mantra as its root value and thus became lazy. Less is not more; it's just easier to make look good. It's like throwing away all your furniture, painting everything white, and calling that interior design.
This is Don Norman whose work led to the term a Norman Door. A push door which has a pullable handle on it tricks the user into thinking it's a pull door. Sure, you can just push it if you know; but good design should be intuitive without prior knowledge.
Cosign on the small text thing. I had perfect eyesight and could read anything a mile away until I was 50. Now I can't even read my phone, and there's literally some manuals/printing that I still cannot read even with my reading glasses on. My eyes always used to work great, so you have no idea what a big change this is until it suddenly happens.
This is the author of Design of Everyday Things, which I have to assume is at the high end of the unread-after-buying ratio. Still a super impressive person.
Hmmm, I dunno about that assumption. Only because the writing is very engaging and the subjects are novel. It's not a dry textbook, staid non-fiction, or a doorstop...
Admittedly, I have only read specific chapters, but all were easy reading.
> Design of Everyday Things, which I have to assume is at the high end of the unread-after-buying ratio.
I don't believe that. I'd say the high end begins with Knuth's TAOCP, with the far end of the spectrum being something like the HoTT book
https://homotopytypetheory.org/book/
We read it at an HCI class I took in college, and I found it to be one of the most useful, if not THE most useful class for my software engineering career.
Those who own it and haven't yet read it are missing out. Even if your role isn't design (i.e., it's more tech / code) DoEDT is a solid foundation that anyone who works on product - digital or physical - should read.
This is the reason Apple started with a skeuomorphic UI when they released the iPhone: affordances like lifelike buttons. But as more people used it and got used to it, they could afford to get away from it (7 years later, in iOS 7). And then (iOS 13, another almost 7 years later) they moved to an all-gesture front screen. This will of course leave behind some people who have never used a full-touch device, but it's hard to move forward while still keeping classic controls. I suppose hardware buttons had the same issue when they first came out hundreds of years ago.
Those gestures that have to be memorized allow you to efficiently navigate your phone. I don't want a phone that sacrifices efficiency for ease of use.
If you're actually struggling to use your phone please just turn on assistive access.
I don't think there's any complaint about the gestures themselves. The complaint is about the discoverability of these gestures. It was practically major news when "someone discovered" that you could move the iPhone's cursor by holding down the spacebar (turning the keyboard into a trackpad). Meanwhile, I only learned yesterday that if you hold down Option on macOS when you click the Wi-Fi menu bar item, it gives you a bunch of legitimately useful info... it's the only menu bar item that does something like this as far as I can tell, although it does harken back to an era when you could almost always get denser/expert context menus by holding down Option.
There's very little natural discovery. And this is made worse when the gestures don't consistently work for you. This is common for older people who develop "zombie finger" where contact with a touchscreen only sometimes activates the capacitive screen, but if they knew they were otherwise doing the gesture "correctly" they might be okay.
Another fun one is that my Apple Watch just updated, and the swipe left/right gesture doesn't do anything anymore (it used to change watch faces). It took me longer than I care to admit to find out that I needed to force touch or long press in order to access a new menu where I can swipe left/right to change the face. Other gestures were also just straight up removed. There was no explanation and no tutorial.
These are just small examples of many. I do want power-user features. I just want them to be defined cohesively, so that once you've learned the discovery mechanism for one set of gestures, you can easily access your "cheat sheet" for deploying them in new contexts, even in Apple applications you've never used before.
I switched to macOS about 2 years ago now and still struggle to find these ostensibly hidden features.
Everything from the small font sizes, inconsistently sized window/dialog close buttons, the animations and sound effects, to the terrible text contrast in dark mode makes it a really 2nd rate UX.
Windows has its own disasters, but Windows 11 -- IMO -- is the better OS from a UX perspective.
… call me one of “today’s lucky 10,000,” but this is the first I’ve heard of the spacebar to get trackpad thing on iOS, and I’ve been using iPads (and later iPhones) since 2011 or so.
I’ve ranted here plenty of times about my late-80s aunt who will painstakingly document every tiny step to do something on her iPhone, with me patiently practicing with her, only to have something she’s gotten down change on her with the next update. It’s all very well to add new features, but for the love of older people (my mid-40s self increasingly included), do not change how common features work!
Power user features like gestures should also never be the only way to perform an action. I like to be able to do everything in a program with keyboard shortcuts, but if half of a program's features are only accessible by keyboard I get pissed.
There is actual complaint about the gestures, because they require more subtle motor control and are often more difficult to perform reliably, and/or can be more easily triggered by accident.
Yes, that's exactly what I was getting at about "gesture space" in the sibling comment and Medium article: pie menus fully cover all of gesture space, so every possible gesture has an easily distinguished, understandable meaning. With typical handwriting/graffiti/swipe gesture recognition systems, some gestures are dangerously close (like "2" and "Z"), and most gestures are wasted as syntax errors in order to maintain a wide separation between distinctive gestures (e.g. Graffiti, Unistroke).
If you scribble or your hand shakes and you mess up or brush the screen, it guesses wrong, and anything can happen, and you have no way of figuring out what happened and how to undo it, because it's invisible. The user has no intuitive understanding of how the black box of the gesture recognition system works, or how it might misinterpret their mistakes, and there is no self revealing of what gestures are possible, no prompting and leading, no feedback, no incremental disclosure of more information, no browsing, no changing, no canceling, no error correcting.
But pie menus have such an obvious simple crisp direct geometric tracking model (the direction between delimiting mouse click or touch/release events, regardless of the path between them), which users can easily understand. There is no mystery to why it picked one slice or another. Plus that also enables reselection (changing your mind or correcting misinterpretation) and browsing (pointing at successive items to highlight and reveal more information) and feedback (especially applying a preview of the item in the game or editor in real time as you browse the menu and adjust the distance parameter) and error prevention and correction, and increasing control of direction by moving out from the center to get more precise leverage, none of which gestures can support, all of which are useful.
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
The whole point of pie menus is to be "self revealing", supporting discoverability and browsing and prompting and gently training users to quickly use the gestures without looking, through rehearsal. They solve the problem of gestures being invisible and impossible to discover and learn.
>Pie menus are a self-revealing gestural interface: they display multiple options to a user and direct them to select one.
>Users operate the menu by observing the labels or icons present as options, moving the pointer in the desired direction, then clicking to make a selection. This action is called a "mark ahead" ("mouse ahead" in the case of a mouse, "wave ahead" in the case of a dataglove).
>Repetition of actions and memorization of the interface further simplify the user experience. Pie menus take advantage of the body's ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.[1]
However Apple has never adopted pie menus, Steve Jobs thought they sucked, and Donald Norman has never been a big fan, and he totally missed the point when I tried to explain it to him at Ted Selker's NPUC workshop.
I had the honor of meeting Steve Jobs at EduCom on October 25 1988, when he released the NeXT machine. Sun had lent me a workstation to demonstrate NeWS software in their booth, right across from the NeXT booth, and Ben Shneideman brought him over for a demo of the stuff we'd developed at HCIL.
So I gave Jobs a whirlwind tour of pie menus, the NeWS window system, UniPress Emacs and HyperTIES for about half an hour. Jobs was jumping up and down, pointing at the screen, and yelling "That sucks! That sucks! Wow, that's neat! That sucks!"
When I explained to him how flexible NeWS was, he replied "I don't need flexibility -- I got my window system right the first time!" But I gave him a NeRD button, anyway (which I'd made for NeWS window system NeRDs, but he liked because it had a lowercase "e" like NeXT).
Years later I gave a talk about pie menus at Ted Selker's epic (and free!) NPUC (New Paradigms for Using Computers) workshop at IBM Almaden Research Lab in the early 90's, and after giving my talk, I watched Don Norman give his. He started at the left end of the room and moved across to the right, complaining about everything he saw, from the tray of the chalk board to each cabinet to the microphone to the fluorescent lights to the wall socket.
At that point it just seemed like a contrarian schtick, reflexively taking trite cheap shots at everything. But I got in a zinger when he complained about how the pie menus in SimCity that I'd just shown were so horrible because they made it easy to build a city quickly without thinking about it.
He totally missed the point that pie menus were faster and more efficient than linear menus, and just bitched about how fast efficient menus in a game about city planning made it easy to quickly plan bad cities, as if that was what playing SimCity with pie menus taught you.
Without considering that millions of people waste millions of hours picking items from inefficient linear menus every day for all kinds of applications.
Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop:
He never has a positive word or comment about anything, no constructive suggestions, just negative complaints. This is the video he was responding to by complaining about pie menus making SimCity too easy to play:
X11 SimCity Demo with Pie Menus and Mouse Ahead Gestures:
Talks by Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop. Organized and introduced by Ted Selker. Talks and demonstrations by Don Hopkins and Don Norman.
Norman: "And then when we saw SimCity, we saw how the pop-up menu that they were doing used pie menus, made it very easy to quickly select the various tools we needed to add to the streets and bulldoze out fires, and change the voting laws, etc. Somehow I thought this was a brilliant solution to the wrong problems. Yes it was much easier to now to plug in little segments of city or put wires in or bulldoze out the fires. But why were fires there in the first place? Along the way, we had a nuclear meltdown. He said "Oops! Nuclear meltdown!" and went merrily on his way."
Hopkins: "Linear menus caused the meltdown. But the round menus put the fires out."
Norman: "What caused the meltdown?"
Hopkins: "It was the linear menus."
Norman: "The linear menus?"
Hopkins: "The traditional pull down menus caused the meltdown."
Norman: "Don't you think a major cause of the meltdown was having a nuclear power plant in the middle of the city?"
(laughter)
Hopkins: "The good thing about the pie menus is that they make it really easy to build a city really fast without thinking about it."
(laughter)
Hopkins: "Don't laugh! I've been living in Northern Virginia!"
Norman: "Ok. Isn't the whole point of SimCity how you think? The whole point of SimCity is that you learn the various complexities of controlling a city."
(My joking but also serious point was that in SimCity "Meltdown" is on the linear "Disaster" menu. So linear menus cause meltdowns. But the pie menus have bulldozers and roads that you can use to recover from meltdowns.)
My talk was about pie menus, not about SimCity, which was only an example of pie menus.
But he disregarded the point of my talk and criticized the game design of SimCity instead.
Which I wholeheartedly agreed with (that pie menus make it really easy to build a city really fast without thinking about it, and you end up with something like Northern Virginia).
But that wasn't the fucking point of my talk, it was that pie menus can make any game or application more efficient, and that they're self revealing and easier to learn than invisible gestures.
I would have been much more interested to hear if he had any criticisms or comments on the actual pie menus, the audio feedback, the design of the menu layout and icons, the mouse ahead, popup menu display pre-emption, gestural interaction, muscle memory, and so many other things that he squandered the opportunity to discuss.
(That talk was a few years before I started working on The Sims and implemented pie menus in that game too. But maybe they would have been better if Don had taken the opportunity to criticize the pie menus in SimCity, instead of the game itself.)
And then we could have had the whole interesting discussion about what and how SimCity really does teach you, constructionist education, and so on. (See the HAR talk below.)
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo:
>The first step in learning a pie menu, using it in “novice” mode, is rehearsal for using it in “expert” mode. So if you remember that you want to move the mouse down, you can press and move the mouse, then you wait, and it pops up only after you stop moving.
>Pie menus should support an important technique called “Mouse Ahead Display Preemption”. Pie menus either lead, follow, or get out of the way. When you don’t know them, they lead you. When you are familiar with them, they follow. And when you’re really familiar with them, they get out of the way, you don’t see them. Unless you stop. And in which case, it then pops up the whole tree.
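A rough sketch of that "mouse ahead display preemption" timing (my own illustration with made-up names and delay, not the actual NeWS or SimCity code): the menu draws itself only if the pointer pauses after the press, so a fast confident stroke never pops anything up.

    # Illustrative sketch of mouse-ahead display preemption; the timing
    # constant and class are invented for this example.

    POPUP_DELAY = 0.333   # seconds the pointer must be still before the menu appears

    class PieMenuPopup:
        def __init__(self):
            self.visible = False
            self.last_motion = 0.0

        def press(self, now):
            self.visible = False
            self.last_motion = now

        def motion(self, now):
            self.last_motion = now      # any movement keeps postponing the popup

        def tick(self, now):
            # Called periodically while the button is held: pop up only if the
            # user hesitates -- the novice path ("menus lead").
            if not self.visible and now - self.last_motion >= POPUP_DELAY:
                self.visible = True

        def release(self, now):
            # An expert releases before POPUP_DELAY elapses, so the selection is
            # made with the menu never drawn ("menus get out of the way").
            return self.visible

    pm = PieMenuPopup()
    pm.press(0.0); pm.motion(0.1); print(pm.release(0.2))   # False: fast stroke, menu never shown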
The Design and Implementation of Pie Menus
They’re Fast, Easy, and Self-Revealing.
By Don Hopkins. Originally published in Dr. Dobb’s Journal, Dec. 1991, cover article, user interface issue:
>For the novice, pie menus are easy because they are a self-revealing gestural interface: They show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the cursor in the desired direction, then clicking to make a selection, you learn the menu and practice the gesture to “mark ahead” (“mouse ahead” in the case of a mouse, “wave ahead” in the case of a dataglove). With a little practice, it becomes quite easy to mark ahead even through nested pie menus.
>For the expert, they’re efficient because — without even looking — you can move in any direction, and mark ahead so fast that the menu doesn’t even pop up. Only when used more slowly like a traditional menu, does a pie menu pop up on the screen, to reveal the available selections.
>Most importantly, novices soon become experts, because every time you select from a pie menu, you practice the motion to mark ahead, so you naturally learn to do it by feel! As Jaron Lanier of VPL Research has remarked, “The mind may forget, but the body remembers.” Pie menus take advantage of the body’s ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.
Micropolis: Constructionist Educational Open Source SimCity:
>We’ll go straight in, we’ll get rid of this. Oh, pie menus, right! If you click… (Dutch “Taartmenu” cursor pops up!) I’ve got to have a talk with my translator.
>You click, and you get a pie menu, which has items around the cursor in different directions. So if you click and go right, you get a road. And then you can do a little road. And if you click and go up and right, you get a bulldozer.
>And then there are submenus for zoning parks, and stuff like that. This gives you a nice quick gesture interface.
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
[...]
>DonHopkins on March 19, 2018
>There have been various implementations of pie menus for Android [1] and iOS [2]. And of course there was the Momenta pen computer in 1991 [3], and I developed a Palm app called ConnectedTV [4] in 2001 with “Finger Pies” (cf Penny Lane ;). But Apple has lost their way when it comes to user interface design, and iOS isn’t open enough that a third party could add pie menus to the system the way they’ve done with Android. But you could still implement them in individual apps, just not system wide.
>Also see my comment above about the problem of non-transparent fingers.
>Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
>They also provide the ability of “Reselection” [6], which means you as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
>Compared to typical gesture recognition systems, like Palm’s graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and they only recognizes well formed gestures.
>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
>Pie menus also support “Rehearsal” [7] — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
>Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
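For what it's worth, a minimal sketch of that direction-only selection model (illustrative only; the function and parameter names are made up, not from any shipped implementation): the chosen item depends solely on the angle from press to release, with a small dead zone in the centre for cancelling.

    # Only the direction between press and release matters, never the path taken.
    import math

    def pick_slice(press, release, n_items, dead_zone=10.0):
        """Return the selected item index, or None if released near the center
        (cancel). Item 0 is 'up'; items proceed clockwise."""
        dx = release[0] - press[0]
        dy = release[1] - press[1]
        if math.hypot(dx, dy) < dead_zone:
            return None                              # center release cancels the menu
        # Angle measured clockwise from straight up (screen y grows downward).
        angle = math.degrees(math.atan2(dx, -dy)) % 360.0
        width = 360.0 / n_items
        # Shift by half a slice so item 0 is centered on 'up'.
        return int(((angle + width / 2) % 360.0) // width)

    # A quick up-and-right flick in an 8-item menu picks item 1 (the 'NE' slice),
    # no matter how wobbly the path in between was.
    print(pick_slice((100, 100), (140, 60), 8))      # -> 1

Every direction maps to some item, so there are no syntax errors, and re-evaluating the same function as the pointer moves is all that reselection and browsing require.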
Epic post. I've always been a huge fan of pie menus, ever since I first encountered them in The Secret of Mana, and those were even a fairly inelegant version of them. I'm not sure they're better than linear menus for keyboard + mouse (emphasis on keyboard), but they seem like an obvious win for joysticks and for touch (which is really a lot like a joystick).
Pie menus present somewhat of a problem for keyboard, but they're excellent for mouse. Every button on the menu is the same very short distance from the initial click point, but it's difficult to click the wrong one because their effective size is very large, just like a button in the corner of the screen.
I played an arcade strategy game that used arrow keys to navigate through pie menus which I thought was a pretty interesting concept. It's akin to learning arcade "combos." The sequence eventually becomes muscle memory, but it's very self-revealing before then, too.
I used to have a trackball back in '98 that came with software that let you interact with things via a radial (pie) menu. I could customize the menu to a certain degree, and it was very, very handy. I loved it, but newer versions of Windows didn't support it and I never saw anything else like it. I don't remember the trackball or the software's name.
I think manuals should be written, but not required for simple usage of an OS UI.
"Tutorial" style introductions to the OS make sense.
I remember when Ubuntu first came out with Unity, and had really powerful window tiling features bound to variations of "Super" key + arrow key, as well as some other hotkeys.
The great thing about it was that you could hold down "Super" for 1 second, and a reference would show up explaining all the different keybinds.
I think the issue can be partly broken down to this: UX != HCI.
I can't remember the last time I sat in a meeting with someone with the title "interface designer". Everyone in this realm today is a "UX something" and commonly it seems these people have never:
- heard the term HCI or known what it actually stands for.
- read and/or internalized the human interface guidelines for the platform(s) they're building for (there is a lot of overlap, but still).
- thought in a way that puts ease of use/discoverability/context dependence front and center, over anything else. How to do something often seems arbitrary; there seem to be no HCI-based guide rails by which decisions are taken.
That said, there are exceptions of course, but they seem rarer by the year.
One issue is that we now have a generation of young people who just grok stuff because they grew up completely digital, with apps that already have arguably crappy interfaces.
I.e. they can and will work with even the worst interface, or with something that shuns all the standards/guidelines of the platform/OS it runs under.
When people from this generation then get jobs as "UX something", you have a self-perpetuating loop that inevitably leads to the increasing enshittification of user interfaces.
> Then there’s the aesthetic problem. When products are developed for the elderly, they tend to be ugly and an unwanted signal of fragility. As a result, people who need walkers or canes often resist. Once upon a time, a cane was stylish: Today it is seen as a medical device.
The canes didn't change. If anything they look nicer, and you have more options.
People are going to hate anything associated with being handicapped or elderly, no matter what the design is.
While I agree with him in substance, I think it's also worth pointing out that the "negative nancy" has been a rhetorical device he's been using throughout his career. So instead of "What I see today horrifies me", it should more accurately be "What I've been seeing has consistently horrified me, to this day". No real newsflash here.
I agree with Don Norman's points on product design, but I couldn't help but think that the current world is also designed to benefit those people who will never live long enough to be impacted by the consequences of our current inaction to stop/reverse/mitigate climate change.