Hacker News
How Apple Is Giving Design a Bad Name (fastcompany.com)
96 points by digitalmud on Nov 14, 2015 | 93 comments



"So often, the user has to try touching everything on the screen just to find out what are actually touchable objects."

This happened to me many times on iOS, and I'm a (still young) software developer, familiar with most user interfaces from Windows 3.11 to now. To be honest, the biggest culprits are third-party apps, but they follow the official guidelines and Apple's example. This was my biggest problem with Windows Phone too, with the whole transition to flat interfaces that give no clues about what can be tapped. If this can happen to us, then other people don't stand a chance.

But you know what's worse? Every couple of years we change the UI just for the sake of change.


Same here. I'd also add that the OS X UI has gone downhill since the good old Tiger days, backporting all the bad ideas from iOS.

Somehow, it feels like the hardware design guys at Apple took over UI design and seem to have broken all the guidelines created by their former HIG lab, which once was at the forefront of UI research [1].

Now it seems minimalism (flat design) for the sake of it trumps everything else. I miss cozier designs, and so do many other heavy users I think.

[1] http://interface.free.fr/Archives/Apple_HIGuidelines.pdf


There's an iOS option to outline clickable buttons, but I get what you mean


Am I the only one who has always thought of the Apple UI as form over function/usability, by a long shot?

I remember the first time I used an iMac: I spent about half an hour trying to figure out how to eject a CD. A context menu might not look very elegant but it's insanely useful and powerful and very easy to get the hang of. In the end I had to be told/taught that the way to eject the CD on a mac was to move the CD icon on the desktop to the trash (in the days of CD-RW drives, you'd think this would erase the CD).

That, to me, is the very definition of an undiscoverable feature.

The iPhone's lack of back button (and the fact that apps are very inconsistent in their back button placements) has brought me similar pains in getting simple things done.


That Apple was ever good at design was probably the relative side effect of other software and hardware developers being abysmally bad at it (probably due to monopolistic inefficiencies). And now that the competition is heating up and challenging Apple on the design front, the flaws that earlier would have been swept under the carpet by ardent fans, including me, are becoming difficult to disregard.


Uh, no. It's because they valued design highly, made it an indispensable part of their company, hired great designers, and gave designers like Jony Ive extraordinary power to shape products.

BTW, what flaws, precisely, have now magically become "difficult to disregard", but were entirely invisible before? Just a couple examples will do.


Nah, Jobs didn't know much about hardware or software but he was a stickler for appearance.


Yeah I remember being confused by the drag-to-trash metaphor too, except in my case we were still using floppy disks and I thought "are you sure this won't erase my term paper?"


Floppy drives had a mechanical eject push-button and there was no unix-style "mount" afaik.




To my knowledge the Mac has never had an (easily usable) eject button. I remember it being really annoying because a couple times I had to get the disk out and the computer wasn't working, so I had to push one end of a paper clip into a tiny hole next to the drive.


On Unix at least (Linux and AIX) there was mount/umount of a floppy, which should always be done before removing it from the slot. Linux couldn't force you to, though, but it was good practice.


Depends; you could mount it with the sync option, which makes all writes happen right away. Not recommended on FAT-formatted flash devices, btw, as it will cause massive amounts of writes to the allocation table.
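For what it's worth, that behavior is opt-in per mount; a quick sketch (the device and mount point names are just examples, adjust for your system):

```shell
# Mount a FAT-formatted floppy with synchronous writes, so each write
# hits the disk immediately instead of sitting in the page cache.
# (/dev/fd0 and /mnt/floppy are assumed names.)
mount -o sync -t vfat /dev/fd0 /mnt/floppy

# Equivalent /etc/fstab entry:
#   /dev/fd0  /mnt/floppy  vfat  sync,noauto,user  0 0

# Either way, unmount before physically removing the disk:
umount /mnt/floppy
```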


From the menubar: "Special" → "Eject"


Command-E


I think Android has surpassed iOS in terms of design and usability. I upgraded to Android 5.0 recently (lightly modded by OnePlus), and the design is really stunning. The icons have been redesigned to appear as though they are made of actual flat material, adding an almost hybrid 3D-flat design that looks great. The colors are vibrant without being overwhelming. Messenger and Contacts assign a color to each of your contacts, which helps you keep in mind whom you are speaking with. All the Google apps exhibit similar colors and styles for consistency.

I'm not a designer and maybe it's just personal preference, but I'm a long-time Android user and this is the first time stock Android has been pleasant to look at. I use others' iPhones and, despite the great display, the UI now appears rather dull in comparison.

https://design.google.com/


    > The production of beautiful objects is only one
    > small component of modern design…
As the authors say, and as Steve Jobs and Jony Ive have reiterated in almost every Apple product launch, how something looks is only a small part of the design Apple focuses on. Historically, the priority is how things _work_.

This is probably the most misunderstood aspect of Apple products. A screenshot of software XYZ may look just as thoughtful as an Apple product, but rarely in practice is that true.


What has OP misunderstood? That he or she _should_ prefer iOS over Android because of a demagogic quote? It's unclear what your point is here.

That said, the other day I was using Google Maps for iOS and could not figure out how to switch directions from public transit to car. It seriously took me a few minutes of playing around to work out how to do it. I feel like people implementing Material do it in a kind of perfunctory way, and that can be lossy, usability-wise. On the other hand, Google Maps has a history of myopically introducing big UX changes that make the product worse, so why would I expect anything more from the team when implementing Material on iOS?


OP starts with:

    > I think Android has surpassed iOS in terms of
    > design and usability.
But then only describes how Material _looks_. Both Windows and Android have been plagued by attempts to make things look a particular way without consideration for how they _work_. Ironically, Apple is often criticized for exactly this, when in my experience they really do sweat the _work_ details.

I'm not saying OP should prefer anything over another thing, just that only talking about how devices look is missing the broader goals of design and usability.


Well, Android with Material started focusing a LOT on how things work and how animations have to seem natural; it gives guidelines on language and explains how to structure your app.


> the priority is how things _work_

I really wish that they would apply this to iTunes and with their power/USB cables.

They've done a good job with OSX: F3, F4 and the multi-window support all work really well. Multi-windowing is an especially big improvement over Windows.

But there are easily 10x as many iTunes users as OSX users, so they should really do something about it.


I've never understood why iTunes is the way it is, except that it was already available on Windows, so it seemed like a good vector for devices. It really is a jumbled bag now, though, and seems to go through significant UI changes every release for no good reason (hearts? stars? iTunes has them both!)

OS X used to have a separate syncing application, iSync, for PDAs that was simple and generally worked. I was disappointed when they moved everything into iTunes (really, calendar and photo settings in my music player?)

What is the concern with USB and power? I really like lightning and MagSafe.


The lack of strain relief, which means that the cables break very quickly. It's a well-known design defect in Apple products.

I'm very careful with my technology, but I'm already on my 2nd MBP power supply and my wife is on her 3rd USB cable for her iPad. All this in 18 months.


Uh, there is strain relief. On every cable. You're probably doing what everyone does, which is to wind the cable as tightly as you can around the charger, which will eventually break it. Instead, make a small loop where the cable begins, then wrap gently. It will last essentially forever.

And no. This isn't a "well known design defect". In fact, Apple's cables are inevitably designed and manufactured better than competitors'. Have you even seen the awful mess that passes for PC power supplies lately? At all? Well, I have, since I administer about 200 Macs and around 75 PCs. I can tell you that there is no comparison in design or quality.


I have a MBP from 2008 with the original T-shaped MagSafe power cord, and it's still in perfect working order. Maybe you're harder on your power cables than you're giving yourself credit for.


> Historically, the priority is how things _work_.

Right, everyone agrees with this. And as users of their products, we can see that this is no longer the priority today. The question is, how and why did it change?

You can see it even in little things like Apple removing the ability to display how much battery time is remaining instead of percentage in OS X. This makes the product work less well, but makes it more consistent with iOS and "simpler".


I'd guess they removed it because it was so inaccurate when people changed their usage habits. If I'm just browsing the web my laptop might last for 7 hours, but as soon as I fire up some YouTube it drops dramatically.


The positioning of the FAB, or "Floating Action Button", on top of the content makes sense when you think about it, but in practice I find it frustrating:

- Because the way I hold the phone, most of the time my thumb is covering it and I don't see it.

- Because it is supposed to be the "main" action, it makes all other actions move around. Example: in Google Photos the FAB is search; in Gmail the FAB is "create" and search is at the top.

- It is still the last place I look on the screen to perform an action. I look at the top first, then in the hamburger menu, and only then do I realize it was beneath my thumb. Fuck.

I'm not convinced. I don't think Google is convinced either, because they keep shuffling it around from update to update and from app to app, like they don't know what to do with it.


I don't think everyone would agree with your reasoning. In fact, your reason for disliking the FAB (its position) is all the more reason for it to be there, because it's easy to tap.


Most of the Material ergonomics is laughable; lots of nice ideas and work, but I'm about 32x more productive with a BadaOS interface.


Yeah, I miss the Holo stuff. At least there, buttons etc. seemed to automatically find a finger-friendly size.

Again and again I have seen apps go Material, only to have buttons etc. become so small I have to tap 2-3+ times to hit them. And I do not have anything close to large fingers.

Frankly, I am left with the impression that with Material, Google abandoned their adaptive UI frameworks and went with "pixel counting". Thus Material interfaces only seem to work properly on 10" and 5" screens.


Material often places action items in very bad places, floating over content, far from where you were touching... like a mouse compared to the keyboard home row, i.e. non-ergonomic.

It was amazing as a design idea, having all the pieces analog, independent yet coupled when needed; it felt like the proper way to build things. But I'm starting to think this is a classic case of focusing on the wrong variable: it doesn't matter for the kind of interaction it's used for.


I also think that Google has found a sweet spot.

The implementation is far from perfect, but the use of z-position + color creates a really good UX IMO.

They took pretty much the only thing they got right design wise (a sense of depth applied to buttons & popups) with the first versions of the OS and used it very well.

There are still some things that need improvement, though:

- Consistency. The platform is slowly becoming more consistent, but there is still a way to go. We now have a design library; it is a great step in the right direction, but it needs to evolve a lot. In particular, Google apps should always use the same implementation of a widget (whether it comes from the public design lib or a private one). With the nav drawer, at first there was no common implementation in the support/design lib, so each team implemented one with small differences; for 6 months it was very maddening.

- Navigation patterns. Google encourages the use of tabs for 0-depth apps with a few options. Good, that's indeed a strong pattern. For more complex apps, though, the only solution pushed by the system is the Navigation Drawer. I have got to admit that Google uses it well, with all the important features accessible from the first screen and only the secondary use cases hidden in the drawer. Third-party apps generally get that entirely wrong. It's not always a good pattern, though, and I would really like the platform to push an alternative for complex apps. (Yes, in most cases the response is to simplify the app, but that's not always possible.)

- System navigation. The back-versus-up paradigm is almost always badly implemented, so it might be time to just abandon it. The home + drawer idea is confusing for ordinary users.


It's definitely just subjective taste. Personally, I find the colour scheme of Material Design too saturated and garish. Case in point: YouTube on iOS:

http://appleinsider.com/articles/15/10/05/youtube-for-ios-ge...


I moved from Windows to OSX a couple of years ago after hearing for so long how much better the Mac experience and usability were.

Some examples of why this was wrong:

1. Cmd-Tab lets me select a hidden window but it won't show it!

2. Many frequently used click targets are too small. For example, the close, minimise and maximise buttons.

3. The close, minimise and maximise buttons reveal their functions only when the mouse is over them.

4. There's no reliable way to maximise a window.

5. Despite really trying to go keyboard-only, it's very difficult to avoid using the mouse.

6. The file browser doesn't show the aggregate size of selected files.

7. Cmd+Down to open the selected file! Why not Enter?!

8. I still have no idea which button Enter will press on a dialog box. There seems no way to make this clearer or to activate something like Alt+underlined_letter_of_desired_button_caption.

9. There's no Cut operation, only Copy and Paste.

My MacBook Pro is awesome hardware but Windows is way ahead of OSX.


Please, not yet another "I've switched from A to B and was shocked to discover that B does not work like A!" rant. You're simply executing working patterns from Windows, and of course they don't work so well.

1. Cmd-Tab does not select windows, it selects apps. If the app is hidden (Cmd+H), it will be unhidden.

2. They are not such frequent click targets. Also, that is not a maximize button.

I see what your problem with 1+2 is: you are using the minimize function to reveal windows behind the current window, like you did in Windows. Well... don't. There are other ways to do it (Mission Control, the Dock, hide); you are not supposed to use minimize so often.

3. Seriously? It's only an issue if you're seeing a Mac for the first time in your life.

4. Because there is no maximize function.

5. Keyboard-only never was a design goal.

6. Alt+Cmd+I. Learn about the significance of the Alt key and you'll discover a lot of functions you thought were missing.

7. Because Enter renames the item.

8. The blue button. (Apple has recently broken that in some dialogs in 10.11 and hasn't fixed yet.)

9. Finder does not allow you to cut & paste files, yes. Of your whole list, this is the only valid criticism that does not stem from misusing the thing.

Trust me as a Mac user, we are not suffering, and when rebooting into Windows to play a PC-only game, we do not feel any improvement (to put it mildly). What you think is "the most intuitive way to do it" is actually just "the way most familiar to me".

There is a term for this, "baby duck syndrome": https://en.wiktionary.org/wiki/baby_duck_syndrome

> (humorous) The tendency of computer users to always think the system (software or usage paradigm) they originally started using is better.


> you are not supposed to use minimize so often

So you are saying he is "holding it wrong"...


I agree with all of your points. I have done tons of research on how to use the keyboard only.

1. If the window is minimized (minimize with Cmd-M), you can Cmd-Tab to it, let go of Cmd while holding Tab, press Option, and then let go of Tab, and the window will reappear.

2. I use Cmd-W to close windows and Cmd-M to minimize them. If you hold Shift while double-clicking the title bar, you can maximize the window in the same paradigm as Windows.

3. I use the shortcuts.

4. see 2.

5. Yes.

6. Hold Cmd-Option-I after selecting the files.

7. Cmd-O for Open. It started to make more sense to me.

8. Sometimes you can press (and this is inconsistent) Control plus the first letter of the button you want to press, so Control-C for Create.

9. Sure there is a cut operation.


Thanks for the Cmd-Option-I; I never even thought of googling for this.

I know some of the shortcuts, including how to open something, but some of the defaults here are not about the Mac being different from Windows (commenting on the parent) but stupid, at least to my understanding. No one renames files more often than they open them in a file browser, so making the most frequent operation in a file browser a two-key shortcut is irrational. Same goes for multiple files opening multiple Info windows. I think Windows/Linux now even shows you a warning when you have too many files selected and you press Return. The first time I tried Cmd+I on multiple files, OS X opened 140 Info windows. How many times in your lifetime do you want that when pressing Cmd+I on files?

And let's not talk about highly expert commenters on Apple forums explaining why cut & paste is supposedly something that can make you lose files :(


Windows has historically had really excellent keyboard support, at least back to the Win3.1 days, where driving an entire app from the keyboard was simple, easy, obvious and frequently easier than using the mouse. This slipped a bit with later versions of Windows, with too many apps having unnavigable widgets. But, despite Microsoft's many mistakes, this was something they got absolutely right back at the very beginning.

MacOS was much less good, with the only keyboard support being explicit accelerator keys. A Mac with no mouse was unusable.

How does OSX fare?


Well, that was kind of the point.

The original Mac OS was designed to be mouse-first. Keyboard shortcuts were there, but only to be used as well as, not instead of the mouse.

Windows had to be completely usable by keyboard as it began as a graphical overlay to a keyboard-only interface - and it wasn't possible to guarantee every machine it was on had a mouse.

It's a bit like now I suppose. iOS was designed to be touch-first, with limited keyboard and no pointer support, and Windows adds a touch overlay to a keyboard and pointer-first system.


The keyboard shortcuts are such a blessing. Truth be told, I learned all of them even when I used a Mac in the System 7 era. For me, the mouse [0] is not just an ergonomic problem by itself, but using it with graphical software gives me pounding eyestrain headaches and neck fatigue. The keyboard lets me divert my visual focus away from the screen, yet know "where" the commands are.

To this day, programming is my favorite design activity because it doesn't require interacting with a graphical display. I can't do CAD.

I always just assumed that Windows kept the keyboard shortcuts because they were such a good idea.

[0] or touch pad, track ball, whatever. I've tried 'em all. The best of the bunch is the touch screen.


"7. Cmd+Down to open the selected file! Why not Enter?!"

Consistency. Cmd+Up moves you up one level in the file system, Cmd+Down down one level.

Also, at least historically, there was a consistency in that a key stroke is a command if and only if it contains the command key.

The reason the Finder did not have copy-paste for years, and why it doesn't have cut to this day, is that it is hard to implement. A proper cut would immediately delete the file from the file system, and a later paste would paste the file with the contents it had when it was cut or copied. (Excel is also notoriously bad at this; cut a range, close the file, and try pasting.)


>The reason the Finder did not have copy-paste for years, and why it doesn't have cut till this day is that it is hard to implement. A proper cut would immediately delete the file from the file system, and a later paste would paste the file with the contents it had when the file was cut or copied (Excel is also notoriously bad at this; cut a range, close the file, and try pasting)

Huh? That's the strangest reasoning I've ever seen. All other file managers on other platforms (including 3rd-party file managers on OS X) handle this well and delete the original file only after the paste command is done. It works, is very useful, and doesn't confuse users any more than cut does elsewhere. So I'm not sure where your reasoning comes from.
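That delete-only-at-paste behavior is easy to sketch; a minimal illustration in Python (the class and method names are mine, not any real file manager's API):

```python
import shutil
from pathlib import Path

class FileClipboard:
    """Sketch of the 'cut' most file managers implement: nothing is
    deleted at cut time; the move happens only when the user pastes."""

    def __init__(self):
        self._cut_source = None

    def cut(self, path):
        # Just remember the file; it stays on disk untouched.
        self._cut_source = Path(path)

    def paste(self, dest_dir):
        if self._cut_source is None:
            return None
        # Only here, at paste time, does the file leave its old location.
        target = Path(dest_dir) / self._cut_source.name
        shutil.move(str(self._cut_source), str(target))
        self._cut_source = None
        return target
```

A cut that is never pasted therefore deletes nothing, which is exactly why the "cut immediately deletes" objection doesn't apply to this design.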


I know that, but it is inconsistent, and that inconsistency _is_ the reason the Finder resisted implementing it for years, and why it doesn't have cut-paste to this day but does have move.

It is hard to find things from decades ago on the web, but see https://www.quora.com/Why-does-Apple-not-provide-a-cut-funct... (2011). The answer from Eduo-Gutiérrez IMO points to the same.

The Apple Human Interface guidelines also are fairly clear on the fact that the clipboard behaves like a real object. My 1987 edition says (page 81):

> The Clipboard holds whatever is cut or copied from a document. Its contents stay intact when the user changes documents, opens a desk accessory, or leaves the application.

That certainly is not the way Excel behaves.

Also, on page 83, discussing the Cut menu item:

> When the user chooses Cut, the application removes the current selection from the document and puts it in the Clipboard, replacing the Clipboard's previous contents. [...] If the user chooses Paste immediately after choosing Cut, the document is just as it was before the Cut.


"I know that, but that is inconsistent and that inconsistency _is_ the reason the Finder resisted implementing it for years"

Inconsistent with what exactly?


1) A cut without a paste deletes stuff, except in file explorers and in ?some/most/all? cut actions in Excel.

2) A copy or cut puts stuff in the clipboard. It stays there until you copy or paste again, except in Excel, where entering new data between copy and paste clears the clipboard.


Thanks. Reminds me that various file explorers do not call it a cut, but a move. They just reuse the same keyboard shortcut as the cut found in text editors.


Cut can be done with Cmd+C; then I think it's Cmd+Alt+V to paste and remove the original.


I assume we're talking about Cut in Finder, because it works elsewhere. This has been one of my OSX bugbears for ages now, but I always assumed there was a technical/ideological reason for disallowing Cut/Paste of a file. Now you're telling me I can do it, I just need to learn another shortcut, and remember that it only applies in this situation?! What on earth could possibly be the rationale behind that?

Keyboard support in OSX is truly appalling. Window management with the keyboard is also a pain, and even tabbing through controls (even with the stupidly off-by-default setting) sometimes fails. The reason this annoys me so much is that OSX on Mac hardware is almost a dream setup apart from these flaws, but they're significant enough that, after 4 happy years, I'm actually considering switching back to Windows.


Thanks. There's no context-menu option like in Windows, so no one knows about this without googling.


Have I just walked into some alternate reality?

On my Mac, cut is command-x.


In the Finder it isn't.


Check out "rightzoom" to get a keyboard shortcut for maximize.

Also, I agree with you that finder is dumb.


My pet peeve: when I try to tap a button and, just before I touch the screen, it changes, so I hit a completely different button (this is on Android, but it's the same everywhere). So instead of cancelling a call I place another one, to a different person. No, I didn't intend to call my mother-in-law!

The solution is so simple it hurts: if a clickable area has been shown for less than X ms before the touch/click, ignore the event. Nobody can react that quickly except Superman, and he can change this via some setting if you think he'll be annoyed.

I am not sure why I haven't seen this solution anywhere yet. Do others not encounter this problem?
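The rule described above fits in a few lines; a minimal sketch in Python (GRACE_MS, the class, and its method names are all assumptions, not any platform's real API):

```python
import time

# Assumed threshold: taps arriving within this window of a layout
# change are treated as meant for the previous screen and dropped.
GRACE_MS = 300

class TouchGuard:
    """Sketch of the proposed fix: ignore a tap if the layout changed
    (e.g. a new screen or button appeared) less than GRACE_MS ago."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_layout_change = float("-inf")

    def on_layout_change(self):
        # The UI toolkit would call this whenever tap targets move.
        self._last_layout_change = self._clock()

    def should_accept_tap(self):
        elapsed_ms = (self._clock() - self._last_layout_change) * 1000
        return elapsed_ms >= GRACE_MS
```

With a stable layout every tap goes through immediately, so nothing feels laggy; only taps landing right after a surprise redraw get swallowed.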


I encounter this problem all the time, incidentally also with calls.

Often I'll be doing something at the same moment as someone calls me, and I end up hanging up on the incoming call by mistake. A delay in "screen change" would be great!


Glad I'm not alone!

Not a delay in screen change; the system can't predict when the user is going to press the button. But it is highly unlikely that the screen changed, I noticed it, found the correct button, and pressed it, all within 50 ms... The system should just disregard the touch, because it was obviously meant for the previous screen.


> The system should just disregard the touch

That's exactly what I meant, should have been a bit clearer!


Can you give a more concrete example? I'm on Android, and I've never experienced that.

Your suggestion would only work for very nondeterministic parts of the UI, though.

Good UIs are predictable, and sophisticated users "know" where to click and don't need to read buttons, so they're very fast. Enforcing some delay will make the system seem "laggy".

So now you have to make a distinction between "predictable parts of the UI" and "unpredictable parts of the UI" and everything's not all that simple anymore.

A better solution is not to pop up random things at random times in random places.


I place a call and the callee is busy. I want to tap the "end call" button, but just a bit before that, the app figures out it can stop the call, so it switches back to the main screen. A touch on the same spot now means something entirely different, so a new call is placed and I try to stop it in a fit of panic.

It has happened at least 3 times in the last two months.

Update: I am not suggesting adding a delay, that wouldn't help with anything.


Frankly, the only place I have seen that on any platform is in web browsers, in particular on sites that reformat to make room for ads after the main content has loaded.


A similar problem happens on iOS CarPlay.

If you're getting navigation directions in Maps and then go back to the home screen (say, to play music), there's a temporary notification dropdown from Maps with your next turn info. The problem is that the dropdown covers the top row of app icons, so if through muscle memory you (as I do) hit home and then hit Music, you hit the notification instead of Music and end up right back in Maps. It's infuriating.

This problem makes me question if anyone from Apple's CarPlay team has actually used their own product in a real setting.


I hate a different version of this: keyboard focus stealing on OS X by windows that pop up in front of something you were typing into.

The worst one is at login when I'm typing the password for my encrypted second drive and other apps that are starting up pop up in front, including ones that rely on that drive to be unlocked. If ever a dialog needed to be more modal, it's that one.


It's a good article.

Force Touch is a good example of this: adding a new action without any signifier for its availability or its purpose, when we already have a baffling mix of durations and gestures. Madness.

A real innovation by a platform would be to invent a new way to aid discovery/learnability of a UI, not just add a new input. Take gestures: there should be a system-standard way to be told about the available gestures and their effects on each screen, a bit like browsing a menu to learn what a tool can do. Instead it's left to each app to try to invent a tutorial mode.
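The system-standard gesture directory suggested here could be as simple as a per-screen registry that an OS-level help overlay reads; a minimal sketch in Python (all names are hypothetical, this is not any platform's real API):

```python
class GestureRegistry:
    """Sketch of the proposal: each screen declares its gestures up
    front, so a system-wide 'what can I do here?' overlay can list
    them on demand instead of every app inventing a tutorial mode."""

    def __init__(self):
        self._gestures = {}  # screen name -> {gesture: description}

    def register(self, screen, gesture, description):
        self._gestures.setdefault(screen, {})[gesture] = description

    def help_for(self, screen):
        # What the system help overlay would render for this screen.
        return dict(self._gestures.get(screen, {}))
```

The point is the contract, not the code: because apps must declare gestures to get them recognized, the OS always has an accurate list to show.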


Steve Jobs was against hiding features in modes, which is why iOS lacked any kind of context menu until version 3, and Apple mice lacked right-click for a long time.

Speaking of Apple's mice, they are some of the worst offenders in "looks good but don't try using it". There's the infamous hockey puck, and currently the Magic Mouse.

I use one when I'm desperate, but usually give up when it magically gives me hand cramps due to its shape. A bar of soap is more ergonomic, and slides around better too.

The article isn't served well by its sensational tone. If children can pick something up and use it without training, then it's already doing a better job than a mouse, or even a keyboard, which you basically have to be literate to use.

https://en.wikipedia.org/wiki/Mouse_Practice


I thought I was the only one that gets wrist pains after using the Magic Mouse. There's something really wrong with how that piece of hardware is designed it seems :/


Do you have big hands? I've used Microsoft mice since the Intellimouse Explorer 3.0 because it's easy to push around. I know people who hate it because it's too big for them though.


I'm very comfortable with the Apple mouse.

But that's a single data point. The problem seems to be that the industry has decided that everyone has the same standard pair of hands.

Which is nonsense, obviously. It's like selling shoes or clothes in a single standard size.

So what happens in practice is people keep trying different mouse models until they find one that happens to fit. For me, the Magic Mouse works fine. For someone else, it won't fit at all.

This is also a problem with design. In spite of a century of theory and history there's really no such thing as an ideal standard design, because everyone's cognitive needs/skills are different.

So what happens in practice is that "design" means "looks polished and expensive, and confers accessible social proof". This attracts a price premium, and that's where the feedback loop is in the market.

The feedback loops for usability are more diffuse and not nearly as strong.

The problem with Google's Material is that it doesn't understand the social proof element. It's purely utilitarian. (IMO it's not brilliant for usability either. But that's a different issue.)

Apple's design language has been much better at creating social proof through aesthetics. As long as it keeps doing that, usability can be good-but-not-great and it will still be a commercial success.

Never mind Windows 8/10 - the less said about that, the better.


Yeah, and I think that's the issue. The funny thing is that the Magic Mouse is the only one causing me problems; even the tiny Logitech NX Nano doesn't cause me pain. I think it's the low height coupled with the fact that it has no buttons to press.


> against non-obvious features (there's a word for this, I can't remember it)

Are you thinking of "affordance"? https://en.wikipedia.org/wiki/Affordance


I was thinking of modes! Nice guess though :)


I would say that even a tutorial mode doesn't get back all the usability, because you need to explicitly look for the tutorial. In a desktop GUI, you have buttons, and they are obviously buttons. You can figure out what they do by clicking on them. You can usually find the keyboard shortcut by hovering over the button.

Even with a tutorial, there is no smooth transition in a touch interface. Nothing to indicate that a double tap or a three-finger swipe will be recognized. Instead of being presented with new features, the user needs to go looking for them.


Yeah, I miss hover mode on touch devices.

Ironically, I see hover as the computer equivalent of touching a physical object to help work out how something might work without initiating any operation. With current touch devices, we only have "do"; we don't have "feel".


If you could detect the difference between your index finger and middle finger, that would be a fairly cool right-button emulation. Would that even be possible?


I thought this was what Force Touch was supposed to be for.


I don't disagree with you that 3D Touch is rather undiscoverable. However, Apple have provided guidance that 3D Touch should only be used as a "preview" of an action -- never to reveal new behavior unavailable in other places. We all think of it as right click, but Apple's guidance is very much against the notion.

It's possible -- unlikely, I think, but possible -- that 3D Touch will alleviate some of the concerns in this article. 3D Touch should ideally provide a preview of an action -- quite useful in a world without undo.


Force Touch is an oddity; at least for now, any Force Touch actions can only be shortcuts that are reachable in other ways too, since most iOS devices lack the hardware. IMO it's fine for shortcuts to be non-obvious (think of all the keyboard shortcuts in any decent desktop app).

The problems will emerge if/when developers rely upon force touch for primary actions.


The article is overly critical and does not acknowledge either the technical debt or the complexity inherent in making even simple changes to something as complex as iOS at this point. Still, lots of the points are valid, especially about non-obvious gestures.

In general I do everything I can to avoid using gestures. A lot of gesture-heavy apps use a one-time tutorial that explains the gestures to a new user, but to me that just admits a UX failure. I like the idea of a standard way to check all of the available gestures at any given time. Maybe there is even a way to convey that with simple font, style, or color cues, but getting everyone on board at that level of detail seems impossible. Without such a system, every gesture should be ultra obvious. Even something as simple as a partially cut-off thumbnail, signifying that more thumbnails are viewable upon a swipe, gets the job done.


Force Touch is a fascinating example because it's the equivalent of a "right click", the same right click Apple famously used to hate because of the very discoverability issues you have raised.

I think if Apple keeps Force Touch to certain specific use cases, e.g. pop or contextual menus off app icons, then it could still be very useful.


I wish the authors had spent a few hours more honing their argument, because some of the points are really good.

Where it fell flat for me:

* The post was much too long. I was intently reading but soon realized that it was all over the place and started having to skip through to try to dig out the main points.

* They admitted that they weren't even working at Apple during the time when it was designing successful products.

* At the end, they admitted that iOS 9 quickly solved many of the problems, without saying which ones or how they had failed, while introducing new issues (memory?), which only served to confuse the reader.

* The authors' argument was incoherent. It seems to be building up reasons why Apple's design was bad, but they are all over the place. "It's just bad" is not an argument. Lay the groundwork. Provide evidence for each point and evidence against arguments against those points. Tie it all up.

Some of the information in here would make great articles on its own, e.g. I'd like to read a full criticism of Apple's new font, or a look at how the Human Interface Guidelines have changed over the years and how they could be better, and the whole Dieter Rams piece would have made an excellent article even outside the context of Apple's failures.

I think few who have been using iOS and OS X over the years would dispute that Apple has dropped the ball since Jobs' death. While much of Apple's design still trumps others', others have blindly followed Apple and done an even worse job. (I'm looking at you, Windows 8, which was a reaction to both iOS's use in the iPad and the terrible App Store trend introduced in OS X.) Hopefully, Microsoft, Google, and others have realized by now that leading and innovating doesn't mean copying or reacting out of fear to Apple's direction; it means hiring extremely creative people who understand user-centric design.

But, even though there are problems at Apple and with design worldwide thanks to Apple's mistakes, the authors of this post haven't made it much clearer why Apple dropped the ball or how we can all learn to avoid those problems.


The authors, Don Norman and Bruce Tognazzini, have both been rather cranky in their writings for years. Sometimes Tognazzini seems almost too ready to criticize anything new.

cf: Tognazzini's "Top Ten Reasons the Apple Dock Sucks" (http://www.asktog.com/columns/044top10docksucks.html)


He seems to direct a lot of criticism at iOS, basing it on design principles that appear more relevant to desktop computing. Discovering new features, undo, consistency: these seem relevant when using a computer to create something, but less so on a phone, which is usually used as a media-consumption device.


I agree. Criticizing a Kindle reader because it lacks desktop functionality is comparing apples and oranges.

But we are talking about smartphones here. If you claim that a smartphone is no more than a Kindle reader with the ability to make calls, then why is it a smart-phone? How can Apple market its phones with "25 billion apps downloaded"?

And could you then insist that owning a smartphone is somehow essential for modern life? You could just settle for a dumb phone, books, and an old Walkman. If a smartphone is not essential, then you can't build apps on the assumption that "everybody will have a smartphone soon". If you don't assume that, smartphones are doomed to be just luxuries.

Then stuff like "you need to optimize your website for mobile" is pretty much like saying "we need to optimize our streets for roller skating".

If you think that, I would honestly agree. I think smartphones are just a fad. But I would not bet on it.


Strangely composed article: "Norman" is introduced from nowhere along with the multiple authors.

Some valid points but all rather overstated ("destroying design") and finishes with admitting that a lot was fixed in iOS 9. However, that's not enough because another overblown article calls new features "secret"...


You know, a lot of these features are piled on so mindlessly that, for an existing iPhone user, it does not feel like cognitive overload.

But my mom used the newest iPhone last week -- she used an iPhone 4 as her full-time phone back in the day -- and she could not navigate Safari properly. That's how unusable the iPhone is now, even for a legacy iOS user. Whatever happened to the days when toddlers could figure out iPads? Is this really the company that brought sliding, tapping, and pinching to touchscreens?

I think when it comes to Flat UI, Apple seriously lost the new design wars to Google. To add function, you need corresponding form. Previously, Apple could hide new function in a skeuomorphic wrapper and it didn't feel like too much on the plate; explicit form wasn't needed because the UI mimicked the real world. With 7.0 and beyond, they tried to use the z-axis, but willingly chose form over function over and over again, and it shows! Google, on the other hand, with Material Design and its paper elements, succeeded in wrapping functionality into three dimensions, widgets (paper elements), and feedback-based animation in such a way that the way you interact with the various functions stays consistent. If you can use the Phone app on Lollipop, you can use anything.

And all this while Apple added yet another input dimension while somehow restricting its z-axis to two layers to make sure the integrity of the Flat UI is preserved. iOS' UI isn't just flat, it's also congested -- just like my flat table.


> Whatever happened to days when toddlers could figure out iPads?

Nothing happened to those days; they're still here.


The article just vaguely describes the user experience from a "first-time user" point of view, which is clearly wrong, as if someone is going to throw away his iDevice after the first use. As a counter-point, this might be true for websites or even some kinds of apps, but not really for a device that is going to be your everyday thing.

I was a regular user of Windows and Linux until I switched to a Mac a while back. It took some time tinkering and reading to get used to it, but once I got the hang of it, it became really productive; gestures like three-finger drag, switching panes, and two-finger tap to double-click are real time savers.

"Give me six hours to chop down a tree and I will spend the first four sharpening the axe." - Abraham Lincoln


I think too much time is spent thinking only "obvious" features are well designed. If it's not obvious it's poorly designed. It doesn't have to be this rigid dichotomy. I think Jason Fried put it best here: https://medium.com/@jasonfried/the-obvious-the-easy-and-the-... Some stuff should be obvious, but some can just be easy, and some - just possible. Putting things into those three buckets vs trying to cram everything into obvious is where we make well designed things.


Agreed 100%, particularly the point about thin fonts and lack of contrast. I'm 27 years old and I had to turn on the accessibility option "Increase contrast" on newer OS X versions to avoid getting headaches, so I went back to Mavericks. My next computer won't be a Mac; I'm not interested in using an OS that causes me literal pain. Seeing that Windows is going down that route too, with terrible and blurry font rendering in Windows 10, it seems like the only option is Linux, where at least you can adjust the UI to suit your needs.


Using iOS means learning how to finger dance. Press-Press-Tap-Pause-Swipe-Pull-and-Slide!

For a laugh, I walked into a UX lab at this year's WWDC. I started to talk about my app in terms of GOMS, specifically Fitts's law. There were two Apple UX engineers. He didn't know what the hell I was referring to. She did, and really enjoyed exploring the tradeoffs from a user-centered-design perspective. We discussed personas. Gesture cost. And the tradeoff of breaking an idiom. I hope she gets promoted.

I was introduced to GOMS by Jef Raskin. He also left when Jobs came back. I wonder if there is a tradeoff between product and UX. PARC had rigorous design during its downturn. Apple had great designers during its downturn. Maybe part of the magic is hiding the rabbit under the hat.
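For anyone who hasn't met it, Fitts's law (a core GOMS tool) predicts the time to hit a target from its distance and width, which is why tiny, low-contrast tap targets are measurably slower. A minimal sketch; the constants `a` and `b` below are made up for illustration, not measured for any real device:

```python
import math

def fitts_mt(a, b, distance, width):
    """Predicted movement time (Shannon formulation of Fitts's law).

    a, b: empirically fitted device constants (seconds);
    distance: distance to the target's center;
    width: target width along the axis of motion (same units as distance).
    """
    return a + b * math.log2(distance / width + 1)

# Hypothetical constants: at the same distance, a bigger
# target is predicted to be faster to acquire.
small_target = fitts_mt(0.1, 0.15, distance=300, width=20)
large_target = fitts_mt(0.1, 0.15, distance=300, width=60)
assert small_target > large_target
```

In practice `a` and `b` are fitted per input device (mouse, trackpad, finger), which is one reason desktop idioms don't transfer directly to touch.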

And then there's Apple Music [shaking head] geez.


My foremost example of Apple's form over function is actually the form of the hardware. Every Apple product that I have touched (the phones, pads, and MacBooks) is a slippery nightmare. They are often thin enough to be impossible to grip, and they keep trying to get even thinner for no reason. They have clean slippery lines everywhere, and edges sharp enough that I think they would give you a deep cut if a device fell at the right angle, again for no reason. The only saving grace I found with iPhones is that, given the narrower width, the grip isn't as bad as it would be otherwise.

I remember my fear of dropping my office MacBook every time I had to move it during the first few weeks, and while I mostly got rid of the fear, the device didn't get any safer to use.

Apple's biggest crime, probably (if I'm attributing this right), is that it has pushed this idea so hard that it has corrupted the whole market: every other company now tries to be 1mm thinner, as if that makes a device look any nicer, when it just makes it hotter or slower or shortens the battery life. I think the Surface Book (I have only seen pictures) might have a better grip on the hinge side, but even its edges look sharp, if not as bad as Apple's.

Now, weight at least has a usability benefit when shedding it gives you a 3 lb full-power laptop. But for a phone, does it matter if it's 135 g vs. 145 g?


As someone who has always found Apple's design principles to be dumbed-down to the point of being stupid, I am not sure whether I agree or disagree with this article.



