Any implementation must accept that you will lose users this way. You can't implement every 'trivial' feature possible in order to avoid shedding users; implementing one way precludes a different way that also sheds users. Overall, you try to maximize the number of users that you don't shed, but that's it. And in fact, you're frequently better off shedding users willy-nilly, iteratively finding what the core features are that build a userbase, and forgetting about all the rest.
This post is a prescription for paralytic feature-itis.
I think this is part of the reason why developers "dogfooding" their own product is so important to some people. If you use your own product, you tend to find these sorts of important minor issues faster. I think, as developers, we've all been guilty of missing obvious and simple areas for improvement because we don't use the product every day, like users do.
Or we have worked at places where our bug reports are taken at low priority compared to real customers'. Real customers often don't bother reporting simple bugs, either assuming they will be fixed anyway or judging their value (to one user) as too small. So fit and finish suffer, because the developers who do care are systematically ignored.
Came here to say this. It pains me to think of great products not being released because they don't have the _perfect_ set of features. Until you get real-world feedback from users it's often hard to guess what those must-have features are, anyway.
As someone who sells software online, the most horrifying thing about this scenario is that the user is very unlikely to ever tell me why they moved on, and so I'm terrifyingly unlikely to ever know that this trivial feature is missing.
This literally keeps me awake at night.
Statistic: it takes roughly 200 non-converting free trial downloads of my software to get one data point of feedback telling me why they decided not to buy.
Do users have to sign up to something in some way to give you feedback? Are you jacking their attention with a web page pop-up at uninstall time?
If a user is already unhappy about dealing with you (safe to assume, since they're uninstalling your product, fair to say?), it might be worth filtering out whatever trolling you'd receive from such a feedback mechanism, if the feedback is that important to you.
Out of personal interest: what portion of those 200 non-converting trial downloads used the program up to expiry versus opening it only once?
This is why metrics and analytics have become superstars lately. Why hope for your users to tell you, when the software can do it by itself?
Following the article's use case, the software could transmit the last dozen or so commands or actions performed by the user. In a simplistic scenario, how would you interpret it if 40% of your users' last commands are "Save"? Success, surely! How about "Undo"?
The art and magic, of course, are in figuring out valuable yet inexpensive metrics for your app, capturing that data when you have thousands or millions of users, and interpreting the meaning.
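To make that concrete, here's a minimal, hypothetical sketch (nothing product-specific; all names are made up) of keeping a rolling window of the last dozen commands and summarizing it the way described above:

```python
from collections import Counter, deque

# Hypothetical sketch: keep a rolling window of the user's most recent
# commands so they can be summarized (and, with consent, reported) later.
RECENT_LIMIT = 12

recent_commands = deque(maxlen=RECENT_LIMIT)  # oldest entries fall off automatically

def record_command(name):
    """Call this wherever the app dispatches a user command."""
    recent_commands.append(name)

def summarize_recent():
    """Return each command's share of the recent window, e.g. {'Undo': 0.4}."""
    total = len(recent_commands)
    if total == 0:
        return {}
    return {cmd: n / total for cmd, n in Counter(recent_commands).items()}

if __name__ == "__main__":
    for cmd in ["Select", "Undo", "Select", "Undo", "Save"]:
        record_command(cmd)
    print(summarize_recent())  # {'Select': 0.4, 'Undo': 0.4, 'Save': 0.2}
```

A high share of "Undo" right before a user walks away would be exactly the kind of red flag worth investigating.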
Metrics are great, but they won't tell you the user passed on your software because you're missing width/height info on the selection.
Similarly, it seems easy for metrics to mislead you into thinking the most important features for your app are the ones that get the most usage. It could easily be that there are other features users would like far more, but which are currently hidden or poorly designed, so users avoid them.
Metrics can't answer every specific question directly. But if you look at the kinds of activities your lost users performed last, and find a surprising number of, for example, "Select", "Undo", "Select", etc., you will know you have a problem area.
How do you feel about properly anonymous opt-out logging?
My feeling is that anything personal, including user content and searches, should be opt-in, but opt-out is OK for monitoring feature usage that can't be linked to an individual. Obviously online/web apps reveal at least this much anyway. Does the context (web app, mobile app, PC app) make a difference?
My perspective is that you are using that person's processor time, so it's okay to have anonymous opt-out logging if they know what is happening; otherwise I don't think it's appropriate.
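For what it's worth, here's a rough, hypothetical sketch of what "feature usage that can't be linked to an individual" could look like in practice -- aggregate counts only, with the opt-out respected before anything is recorded (all names are made up):

```python
import json
from collections import Counter

# Hypothetical sketch: opt-out, anonymous feature-usage counting.
# Only aggregate counts are kept -- no user ID, no content, no timestamps.

class UsageCounter:
    def __init__(self, opted_out=False):
        self.opted_out = opted_out   # the user can flip this in settings
        self.counts = Counter()

    def record(self, feature):
        if self.opted_out:
            return                   # respect the opt-out: record nothing at all
        self.counts[feature] += 1

    def payload(self):
        """Aggregate counts only; nothing here identifies an individual."""
        return json.dumps(dict(self.counts), sort_keys=True)

usage = UsageCounter()
usage.record("selection.resize")
usage.record("undo")
print(usage.payload())  # {"selection.resize": 1, "undo": 1}
```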
The tricky part is that what some users think of as a must-have trivial-feature, others won't care about at all. Thus getting every trivial-feature for every user becomes a herculean task. (The standard "each user only needs 10% of a program's features... but they all need a /different/ 10%".)
For instance: I don't care about seeing my current cursor coordinates when using an image editor. I probably have some other must-have tiny feature that the author here wouldn't even notice.
The author was looking for a vector illustration app, not a photo editor. Vector graphics apps are often used to create diagrams and other semi-technical drawings that require some CAD-like features, the simplest of which is a usable coordinate system.
I think something gets lost when we call this kind of shortcoming a missing feature... A missing feature is too easy to dismiss, since we can mention all these scary words like "paralytic feature-itis." The fix here involves no new dialogs, no additional learning curve, nothing fancy whatsoever. It's just a case of absent-minded application design, and the point stands that it makes a tremendous difference in the end product.
Another possibility is that the developers implemented it and the UI designers/tech writers insisted that it be eliminated because it would "confuse the users".
This is why we should be using small, focused, interoperable modules instead of large monolithic frameworks and applications that try to do everything.
A plugin architecture format shared by applications would be invaluable and solve this problem.
Folks have been musing along these lines for decades. There have been a few attempts. To date, they've almost all failed. The problem domain here is a lot more complex than anything we'd normally call a "plugin architecture". IMO, the problem is that this sort of interoperability pretty much requires the power of a full-on programming environment. If it's even possible, I think it also requires an architectural organization that no one's yet divined.
The notable successes tend to be domain-specific and manifest as end-user programming environments. The spreadsheet might be considered a weak example of this. It has a clear user model that allows for powerful extensibility, but doesn't really facilitate "interoperability" in any meaningful way -- each spreadsheet tends to be a one-off.
The Max/MSP (and Pure Data/Pd) environments for audio/visual signal and event processing are perhaps more successful in that they allow the simultaneous construction of UI and logic in a visual programming environment. A program (aka "patch") can be built as a reusable sub-program that manifests the same kind of interface as the primitive (native-code) modules. But again, this is another narrow-domain application with a specific visual-semantic model that works well for that domain. That contrasts with the OP's problem of adding arbitrary UI to a piece of software whose main domain has nothing inherently to do with either the added UI or with programming at all.
I'd also say that the venerable programmable programmer's editors, Emacs and Vim, form another category of end-user programming environment with loose interoperable modules. These are certainly subject to a fairly high degree of extensibility within their domains. But these are saddled by frustrating UI constraints and architectural models inherited from their now ancient origins. Interoperability between "modules", such as it is, is largely ad-hoc and far from guaranteed.
Each of these successes can teach us something about what works for these kind of programmable-framework environments. But to achieve The One Architecture To Rule Them All goes beyond simply having the tools of design, architecture, language, and environment. It must also become a computing platform, where by "platform" I mean an ecosystem that's large enough to have the social synergies that make the above examples and other conventional software platforms successful.
Eclipse is an example of immensely successful plug-in-architecture-based software.
It's amazing how well the plug-in ecosystem in Eclipse operates. For instance, I could install the PyDev plug-in and have an intelligent Python editor -- but at the same time the EGit plug-in provides seamless git integration that works with any language!
How all these plug-ins come together and play nice with each other to create a cohesive and powerful IDE is impressive!
Much more than that, and you're back at the start. In this case, the plugin would need to interop with the app to pick up internal UI states (dragging a selection rectangle, versus just moving an item) and interface with that coordinate system.
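To illustrate the point (this is purely a hypothetical sketch, not any real application's plugin API), even a feature as small as a size readout requires the host to publish its internal selection-drag state to plugins:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the kind of hook a host app would have to expose
# for a plugin to render a width/height readout. It only illustrates how
# much internal UI state (drag-in-progress, document coordinates) the host
# must surface before such a "trivial" plugin is even possible.

@dataclass
class SelectionDrag:
    x0: float
    y0: float
    x1: float
    y1: float   # document coordinates, not screen pixels

class Host:
    def __init__(self):
        self._drag_listeners: list[Callable[[SelectionDrag], None]] = []

    def on_selection_drag(self, fn: Callable[[SelectionDrag], None]) -> None:
        self._drag_listeners.append(fn)

    def _emit_drag(self, drag: SelectionDrag) -> None:
        # Would be called from deep inside the host's selection-tool code.
        for fn in self._drag_listeners:
            fn(drag)

# The "plugin": just formats a status-bar string from the event.
def size_readout(drag: SelectionDrag) -> None:
    print(f"{abs(drag.x1 - drag.x0):.0f} x {abs(drag.y1 - drag.y0):.0f}")

host = Host()
host.on_selection_drag(size_readout)
host._emit_drag(SelectionDrag(10, 10, 250, 130))  # prints "240 x 120"
```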
I'm curious how you think you'd have "interoperable modules" that would provide such functionality, without creating a large framework to specify all this stuff in the first place.
> …without creating a large framework to specify all this stuff in the first place.
It is, in most cases, significantly more work to create a plugin architecture that could allow for some feature to be implemented than it would be to implement that feature statically — and most of that extra work is not shared.
Until you lose users because implementing that plugin infrastructure slows down your program or prevents certain features from being written easily (either directly, due to trade-offs in having the infrastructure in place, or indirectly, in not wanting to break compatibility with older plugins).
Also, as Apple showed, a nice polished product sells, and that final polish matters. It's a lot more difficult to achieve when you need to deal with an unknown quantity like plugins.
But then you have the new problems of what plugins to include by default, how users discover plugins and how they are configured.
Plugins, while valuable for people willing to take the time to learn about your product's ecosystem, are unlikely to help users who are willing to spend only 5 minutes evaluating your product.
Settings on a microwave are, I think, a terrible example. Most microwaves I've used are horribly over-encumbered.
I expect two dials on any microwave I use: time and power. Maybe a control to set the clock, if it has one.
I don't disagree with your general point about tools being able to be reprogrammed, though. One of the reasons I loved Autocad as a tool is the Lisp interpreter that allows you to script and extend it. I think that's a perfect example of a professional, mainstream tool with a good API that non-programmers find useful (if only to run scripts they find).
AutoCAD has a very interesting GUI concept, not in that it's programmable (VBA existed in CorelDraw for years), but in its dialogue mode: you start a circle, and it asks you for a center; you either click or type coordinates. Sadly, this mode is rare in non-CAD systems.
SolidWorks used to work like that too; it's a delight to just start a line, keep typing distance deltas, and see stuff emerge without a single mouse movement.
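A toy sketch of that interaction pattern (not AutoCAD's or SolidWorks' actual command syntax, just the general idea of prompts that accept either a click or typed absolute/relative coordinates):

```python
import re

# Hypothetical sketch of the CAD-style "dialogue" interaction: after a command
# is started, every prompt accepts either a click or typed coordinates/deltas.
# Clicks are simulated here; a real app would take them from the canvas.

COORD = re.compile(r"^(@?)(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)$")

def ask_point(prompt, last=(0.0, 0.0), simulated_click=None):
    """Return a point from typed input ('30,40' or relative '@10,0'),
    or from a (simulated) mouse click when the user just presses Enter."""
    text = input(f"{prompt} (x,y, @dx,dy, or Enter to click): ").strip()
    if not text and simulated_click is not None:
        return simulated_click
    m = COORD.match(text)
    if not m:
        raise ValueError(f"could not parse point: {text!r}")
    relative, x, y = m.group(1) == "@", float(m.group(2)), float(m.group(3))
    return (last[0] + x, last[1] + y) if relative else (x, y)

if __name__ == "__main__":
    center = ask_point("CIRCLE - specify center", simulated_click=(100.0, 100.0))
    edge = ask_point("Specify point on circle", last=center, simulated_click=(150.0, 100.0))
    radius = ((edge[0] - center[0]) ** 2 + (edge[1] - center[1]) ** 2) ** 0.5
    print(f"circle at {center} with radius {radius:.1f}")
```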
Do you have any current examples of consumer level microwaves with that simple interface? I've been looking for exactly that sort of simple microwave, but I haven't been able to find any in my brief searches on the internet or in the large mart style stores.
Almost all the microwaves I see in Finland just require the user to rotate the dial, and off you go. With US microwaves I spend the first 30 seconds scratching my head trying to figure out which category my food belongs to. But then again, the same happens with other devices as well: US products are full of settings and preconfigurations.
I have never found that making something a configurable setting solves any problems. All it does is push the decision forward to what the default should be, given that 99% of users don't change it.
Design is about making decisions. Trying to duck that leads nowhere.
Not sure what "settings" have to do with this. The point being made in the post is that if trivial, "expected" features are missing, users wont even bother to look further.
Everybody has different expectations. For example, maybe you don't care about the rect's size, but you do care about the ratio of its sides. So obviously you want the status line to display the ratio, and it isn't there.
Or maybe you care about the rect's area! And the status line doesn't display the area either.
You can't have all that: it won't fit and it would look like crap. So you need settings.
This is also why your settings must be absolutely simple, logical, and follow the conventions of whatever platform you are on (if any).
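A tiny hypothetical sketch of what such a setting could look like: the status line renders only the fields the user has turned on, rather than trying to show size, ratio, and area all at once (everything here is made up for illustration):

```python
# Hypothetical sketch: a simple setting decides which selection facts the
# status bar shows, instead of cramming width/height, ratio, and area in at once.

FIELDS = {
    "size":  lambda w, h: f"{w:.0f} x {h:.0f}",
    "ratio": lambda w, h: f"ratio {w / h:.2f}" if h else "ratio -",
    "area":  lambda w, h: f"area {w * h:.0f}",
}

def status_line(width, height, enabled=("size",)):
    """Render only the fields the user enabled in settings."""
    return "  |  ".join(FIELDS[name](width, height) for name in enabled if name in FIELDS)

print(status_line(240, 120))                              # "240 x 120"
print(status_line(240, 120, enabled=("size", "ratio")))   # "240 x 120  |  ratio 2.00"
```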
If I have to dig through 10 pages of arcane settings to find some "simple thing" like the one described in the OP's example, it might as well not exist (assuming I'm a new user and not already committed to using this software); I've already moved on to the next candidate app.
I doubt there is a wide choice of professional tools in most areas. You have to choose between a few suboptimal fits, and that's where you start valuing flexibility.
And they're often closed source, so there's limited chance to add the features.
You might be able to ask users on some forum.
And maybe there's something on the website about feedback or feature request.
Having said all that, how do you control the flood of "Why doesn't X do Y?" (when Y is in the menu, on the toolbar, in the splash-screen tips of the day, in the tooltip, etc.)?
Theoretically it should be solved by making everything scriptable, but in practice this solution tends to only work for a small subset of users (who could benefit from customization).
This is why I love Emacs--if there's a trivial feature missing, adding it myself is almost always easier than even thinking about installing something else. 90% of the time, somebody else has a code snippet on StackOverflow or Emacs Wiki. 9% of the time, I can code it up myself with essentially no hassle--Emacs is self-documenting and makes developing Emacs in Emacs a pleasure. The remaining 1% of problems I usually just ignore: no program is going to be perfect, and Emacs is already more than close enough.
What boggles my mind is when obvious usability problems persist over multiple versions of an application.
(A quick example: Apple's spreadsheet app Numbers has a behavior where the handle for moving a chart around the workspace disappears if the chart is moved all the way to the left. It kind of docks the window, requiring an annoying workaround to free it. There's no way anyone who uses it would not encounter this, but it's been that way for many years.)
The problem space of user-facing features is overall quite complex. A developer can fully understand neither the beginning user's experience nor the heavy user's (they usually don't have the time to use their own application as a working professional would, IDEs being an exception).
Just blindly adding features because a user requested them isn't a solution, for obvious reasons. I think a combination of user feedback, usability testing, quantitative analysis, and creative problem solving by developers and product people is needed. Not to mention the nitpicky reality of there needing to be an actual business case for the feature.
I disagree. Understanding how users use your software and what they are trying to accomplish with it is important, and it is not always obvious why a user is asking for a particular feature without asking. Furthermore, when you learn what the user is trying to accomplish with the feature they are asking for you may realize that there is a better way of providing that functionality rather than the specific approach the user requested.
I think the two of you are mostly in agreement. I believe mikecane is talking about someone that believes their mental model of users' needs, desires, and expectations is complete, and is using the question as a way to shut down a feature, with the implication being "if the reason someone might want X is not immediately obvious to me, it must not exist."
I don't think he was talking about someone that acknowledges that their understanding of users is incomplete and wants to learn more.
Yes. As an example, Jobs putting actual typefaces into the original Mac. Anyone used to the existing computing paradigm back then would have asked, "Why would a user want that?" Another infamous example is the marketeers at CompuServe thinking the CB Simulator was a bad idea -- and today we have something like it, called Twitter.
People like to chat with other people in real time. CompuServe's CB Simulator was a huge money draw for them (back then you paid per minute of connect-time).
Yet this environment created the iPad and propelled Apple to near-unprecedented success. It's not as simple as that—cutting features is often as important as adding features.
Yes, it can be tricky, but sometimes what gets cut is the stuff that professionals with a deeper understanding of their field require. Ask any pro photographer about iOS Photos not using metadata (I don't know if that's still the case, but it once was). The example in that post is a great one, it seems to me: something that seems extraneous but makes a real difference in the user's ability to get things done properly with a minimum of friction.
CLUELESS_BOB PHB "Hey, can you do this for the project?"
CLUEFUL_ANN DEV "Well, I could do that, but the question I want to ask is 'Why the hell would anyone want that?'".
--
See also the confusing mismatch between password echoing in GUI and command-line environments. Someone needs to ask "Why would a user want to do that?" and then get it done.
I'd argue this is why many open source projects whose target audience is not developers are not nearly as good as their commercial counterparts.
When the software is made for artists, there's a very different dynamic at work than when a similar piece of software is created by engineers just to scratch an itch.
This is how I feel every single time I happen to use GIMP while working on Rails in Fedora and don't want to reboot into Windows and fire up Photoshop just to change a transparency. I wonder why open source so often has to be synonymous with poor GUI ergonomics.
For starters, it's built on a volunteer basis, so there is no monetary reward for making it easier to use.
Also, some of the developers are probably actually unaware of the main concepts in UX design, and some don't even want the software to be user-friendly, because to them that is synonymous with "dumbing down" the system for beginners.
GIMP targets experienced users. If we acknowledge that GIMP is not (primarily) for beginners, we cut off a lot of problems such as “do we need to support that,” etc. Peter noted that a “GIMP Light” would not just have some options cut off from the menus: it would have a completely different user interface, even if it would use the same code under the hood.
Some developers work on GIMP to promote the Free Software movement and would probably not contribute if GIMP was not free. Others think that GIMP should provide fun for its developers, although our user base has grown a bit large for just doing fun experiments. We have to acknowledge that we address a user base that may be more experienced in image manipulation than we are, so the developers are partially out of the target group.
Before converging towards a definition of the GIMP target groups and GIMP vision, there were several discussions involving examples and use cases, whether GIMP should be the best image manipulation program in the universe (best for who?), whether those working on icons and those working on photos have the same needs (number of images open, relative sizes), whether people need to switch frequently between GIMP and other applications (browser or editor for web work), whether we will support painting with shapes and natural media, etc.
Eventually, a GIMP vision emerged...
What GIMP is:
GIMP is Free Software;
GIMP is a high-end photo manipulation application, and supports creating original art from images;
GIMP is a high-end application for producing icons, graphical elements of web pages, and art for user interface elements;
GIMP is a platform for programming cutting edge image processing algorithms, by scientists and artists;
GIMP is user-configurable to automate repetitive tasks;
GIMP is easily user-extendable, by easy installation of plug-ins.
What GIMP is not:
GIMP is not MS Paint or Adobe Photoshop.
TODO
Make it easier to perform repetitive tasks (macro recording)
Provide a UI with a low barrier to entry
GIMP should be easily extensible by the average user: one click-installation of plug-ins
Well, "a UI with a low barrier to entry" was on that TODO list at least. If they had hundreds of thousands of dollars lying around, probably someone would have been hired to focus on that one. But they have zero dollars and its not a big enough priority for most of the developers to motivate the type of changes required.
What if for every person that agrees with the author, there are 2 people that will immediately dismiss the product if it does show this extra information in the status bar? I'm not saying that is likely, but it's certainly conceivable. What is likely is that more users will be discouraged or overwhelmed as the number of onscreen data and configurable settings increases.
Chances are the OP downloaded an open source SVG program; I'd have to guess Inkscape. At this point, the right move would have been to either post a feature request on Launchpad or implement the functionality yourself.
If you don't want to get your hands dirty, pay for Illustrator.
I think this is where customer testing comes in. Watch 10 or 100 customers try to use your product, and observe the features they "reach for" that are not there, and the ones that are there but that they never use.
Your understanding of the domain changes considerably as you implement an application in that domain, reducing your ability to approach the domain as a beginner.