
I am serious. Yes, it's true it's easier than ever to create sophisticated websites. But it's also true that almost all this sophistication delivers negative value to users - it distracts them, slows them down, and forces them to keep buying faster hardware. It doesn't have to be like this - but it currently is. It's not even just a problem of explicit business incentives - the technical side of the industry has been compromised. The "best practices" in web development and user experience all deoptimize end-user ergonomics and productivity.


The mistake you are making is that you are trying to answer the question of what the average user wants by looking at what you want. Developers are not representative of users.


The average user, by Google's own studies, wants a faster experience.

By far.

Is there any evidence that today's web apps are any faster at achieving what the equivalent non-app web pages of the past managed? And that's despite those older "apps" running on computers which were orders of magnitude slower than our cell phones today.

I've worked with two companies now whose users - users who pay four-digit annual fees each - have refused to migrate to the new Web 2.0 apps these companies have tried to foist on them.

The difference in these cases is that the users, by virtue of paying, actually have a say, and so they have required the companies to maintain parity between the older and newer applications - and usage continues not just to be higher on the older versions, but to grow faster there (despite the larger base).

Regular users have no such option. Google changes Gmail, but its users still insist on using the older versions of the app, which is why Google provides the HTML mode, etc. However, its users do not pay Google, and so they are forced to go along with whatever Google wants to do - which is to keep hiding the older version and making it progressively worse to use.

It's not at all evident to me that regular users "want" to use these new Web 2.0 apps, so much as that they don't have a choice.


I find it interesting that, by default, I switch back to classic whenever possible. All newer interfaces seem to remove features and slow things down.


Yes and no: Ajax allows for interactive (snappier/faster) behavior in most cases, especially for complex interaction flows. Comparing the minimal HTML Gmail interface with the modern one, the modern one is quicker for complex interactions because I end up loading fewer pages, even if the average load is more expensive.


It doesn't feel faster to me. And that's the case with Ajax in general - in principle, it can allow for a snappier, faster experience. In practice, it rarely does.


I just measured it: switching between "drafts" and "inbox" on the new Gmail takes ~15 ms, while on the simple HTML view it takes 300-500 ms per load.

It's literally 20 times faster and, importantly, it cuts across the human visual perception boundary, which is at ~200 ms. So the old HTML version's delay is humanly perceptible, while the new version renders in ~1 frame.
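
(In case anyone wants to try a similar measurement: a rough sketch, where the selector is illustrative rather than Gmail's actual markup, and a double requestAnimationFrame is only a crude "after the next painted frame" heuristic.)

    // Rough timing sketch for an in-app view switch (illustrative only;
    // the selector is made up, and double-rAF just approximates "after the
    // next painted frame" rather than being a precise render metric).
    const link = document.querySelector<HTMLAnchorElement>('a[href$="#drafts"]');
    const t0 = performance.now();
    link?.click();
    requestAnimationFrame(() =>
      requestAnimationFrame(() =>
        console.log(`view switch took ~${(performance.now() - t0).toFixed(0)} ms`)
      )
    );

For the full-page HTML view, the per-load time can instead be read off the browser's Network panel or the Navigation Timing API.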


Fair enough - on a fast enough machine. This is off-topic, but while on my beefy dev desktop I get similar times to yours, this was absolutely not the case on a much weaker laptop I sometimes used during travel - and the latter is more representative of how regular people experience things.

(Also, FWIW, the loading delays of HTML pages on said beefy desktop, as tested right now, are consistently about 750-1500 ms for me.)

More on-topic, these times apply only when the pages are hot in the browser cache. On my desktop, the first-time switch between "drafts", "sent" and "inbox" takes around 3 seconds each, and only then does it become instantaneous. So regular users are likely to experience these long Ajax delays more often than not.


It depends on the application though. Mapping apps (e.g. Google Maps) without Ajax are awful.

Even basic UIs like filtering can be bad if you want to change multiple filters and have to wait for a whole page load between each change (page load times for pages with filters are often slow, as they're performing complex queries).
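
(A rough sketch of the alternative, with a made-up endpoint and element ids: re-fetch just the results fragment when a filter changes, rather than reloading the whole page.)

    // Hypothetical sketch: update only the result list on a filter change.
    // "/search" and the element ids are made up for illustration.
    async function applyFilters(form: HTMLFormElement): Promise<void> {
      const params = new URLSearchParams();
      new FormData(form).forEach((value, key) => params.append(key, String(value)));
      const response = await fetch(`/search?${params}`, { headers: { Accept: 'text/html' } });
      // The server returns just the results fragment; swap it in place.
      document.getElementById('results')!.innerHTML = await response.text();
    }

    document.querySelectorAll<HTMLInputElement>('#filters input').forEach((el) =>
      el.addEventListener('change', () => applyFilters(el.form!))
    );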

It's a case of different things being appropriate for different use cases I think. There definitely are still times when plain HTML is best, but it's also not always faster.

I've built a React app that's under 300 kB (cached with a service worker for subsequent page loads), loads almost instantly, and works offline. On the other hand, plenty of plain HTML sites include heavy CSS frameworks, 5 MB images, GIFs, etc., and load pretty slowly, especially on poor connections.
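
(For the curious, the caching part is roughly this pattern - a minimal cache-first service worker sketch, not the actual code; the file names are illustrative.)

    /// <reference lib="webworker" />
    // Minimal cache-first service worker sketch (sw.ts). Asset names are made up.
    declare const self: ServiceWorkerGlobalScope;

    const CACHE = 'app-shell-v1';
    const ASSETS = ['/', '/main.js', '/main.css'];

    self.addEventListener('install', (event) => {
      // Pre-cache the app shell so later visits (and offline use) skip the network.
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event) => {
      // Serve from cache first; fall back to the network for anything uncached.
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });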


TBH, that's a side effect of "EVERYthing is a webpage, because that's the way it is!" Of course a map application is SLOOOOOOW when forced to go through all the intermediate steps to resemble a web page. Mapping apps that forgo this convention are snappy; but no: a chat client is a web browser, a map client is a web browser, the "show me two lines of text" notification bar is a God-damned web browser.


Snappier! You mean "with more spinners, now with added lag". (Get off my lawn.)


That's a cop-out. Being a developer and a long-time computer user biases me, but it also gives me a more informed perspective on what's productive and ergonomic, and on what's distracting and annoying. I can name the issues I see, instead of writing the frustration off as "this computer must have viruses" or "this is just how things have to be".

Bloat caused by inefficient design isn't delivering some value to regular people that a developer is merely oblivious to; it's just bad engineering. The same goes for distractions, for abandoning all the UI features provided by default just so a control can have rounded corners, for setting the proficiency ceiling so low that no user can improve their skill in using a product over time, etc.


In my experience (years of talking IRL with thousands of users of my B2B SaaS product), there exists a large cohort of users who don't want to improve their computer skills. They want the software to make things as absolutely "user friendly" as possible.

As an example, I tried standardizing on <input type="date" /> across our product (hundreds of fields). Within 24 hours we had logged >1,000 tickets from users saying they disliked the change. They preferred the fancy datepicker because it let them see a full calendar, and that enabled a more fluid conversation with the customer (like "next Wednesday").

Yes, Chrome does offer a calendar for this field type, but Safari for desktop does not (just off the top of my head).

I'm a vim-writing, tmux-ing, bash-loving developer. If it were up to me, I'd do everything in the terminal and never open a browser.

I recognize that the world doesn't revolve around me and my skills, interests and tastes. If a large cohort of my customers tell me they don't want to improve their computer skills and want a fancy UI, who am I to tell them they're wrong? They're paying me. They get what they want.


You're conflating a couple of things here. One: it's true that users don't like change - and for good reason; messing with the UI invalidates their acquired experience, and even if you know you've changed only one small thing, they don't know that. It quite naturally stresses people out.

Two: I'll grant you that you sometimes have to use a custom control, because the web moves forward faster than browsers do, so you can't count on a browser-level or OS-level calendar widget being available. But then the issue is how you do it. Can the user type the date directly into your custom field? Is the calendar widget operable by keyboard? Such functionality doesn't interfere with first-time use, but it is important for enabling users to develop mastery.

A lot of my dissatisfaction with modern UI/UX trends comes from that last point: very low ceiling for mastery. Power users aren't born, they're made. And they aren't made in school, but through repeated use that makes them seek out ways to alleviate their frustrations. A lot of software is being used hours a day, day in, day out by people in offices. Users of such software will be naturally driven towards improving efficiency of their work (if only to have more time to burn watching cat photos). If an application doesn't provide for such improvements, it wastes the time of everyone who's forced to interact with it regularly.


Extending Web capability by building features into HTML, such as calendar-based date-pickers (sometimes useful, often tedious), is one thing. Those standards can either be retrofitted into a console-mode browser (it is possible to display a calendar in plain text, see e.g. cal(1)), or degrade gracefully to a text-based input field of, oh, take your pick: YYYY-MM-DD, YY-MM-DD, DD-MM-YYYY, MM-DD-YY, etc.

Better forms inputs could very well be useful, I agree.

The recent MSFT + GOOG snogfest announcing major improvements to HTML by ... improving form and input fields in their (GUI-only) browsers strikes me, in light of the rather ominous icebergs looming on the HTML horizon, as rather gratuitous deckchair-rearranging. No matter how fine those arrangements might be.


This seems to be exactly where progressive enhancement is preferred. If you use input type="date", it’ll probably work much better on mobile than a JavaScript calendar solution. Also, I hope your date picker allows typing by hand on desktop, otherwise may God have mercy on your soul...
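
(A rough sketch of the idea, assuming a hypothetical attachCalendarWidget custom picker: feature-detect native support and only bolt the JS calendar on where it's missing, while typed input keeps working either way.)

    // Progressive-enhancement sketch: prefer native <input type="date">,
    // attach a JS calendar only where it's unsupported. attachCalendarWidget
    // is hypothetical, not a real library call.
    function supportsNativeDateInput(): boolean {
      const probe = document.createElement('input');
      probe.setAttribute('type', 'date');
      return probe.type === 'date'; // unsupported types silently fall back to "text"
    }

    function enhanceDateField(
      input: HTMLInputElement,
      attachCalendarWidget: (el: HTMLInputElement) => void
    ): void {
      if (!supportsNativeDateInput()) {
        attachCalendarWidget(input); // the custom widget should stay keyboard-operable
      }
      // Either way, typing e.g. "2019-11-20" by hand keeps working.
    }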


[flagged]


> Greed and morality can't mix. I personally support morals.

Hold up, your position is that users that prefer something 'easy to use' as opposed to something 'powerful' are immoral? What am I missing here?


I thought the parent was referring to the developers/business choice as being immoral, not the customers.

If I choose to offer predatory loans which I would never accept for my friends or family to a community that is not financially savvy, and someone calls me out on it, it doesn't fly to say "hey, what do you have against these people taking advantage of my easy to use service?".


What you're missing is that I'm talking about developers having a choice between providing the best possible product in a technical sense, or simply going the way of profit and greed. To me, when you knowingly produce a substandard product and seek non-savvy users, that's an immoral act. I believe the goal should be to constantly raise the floor, not lower the ceiling.


How can what the parent did be considered immoral? It's not like they pushed tracking and ads down their customers' throats; they just made the interface easier for people to use.


The parent's idea, as far as I can tell (not that I lean any which way), is that if you create a deficient product because you're financially incentivized to, that's immoral.


This makes me wonder. If we'd had all of the SPA and frontend framework overcomplication, and the ability to make four asynchronous loading screens to load what could be rendered server-side as an HTML table with a navbar - if we'd had all of that technological progress two decades ago - would we have seen what benefits it would give over minimalist design, if any?

Sometimes it feels like the web of yore was so simple to use and free of unnecessary bloat simply because that was as far as the technology had progressed at that point. React didn't exist and the browser was limited from a technical perspective so the best people could manage was some clever CSS hacks to cobble together something mildly attractive. It might have taken a while to render even those simple pages on computers of the time, so back then those pages might have hit a performance ceiling of some kind.

Maybe as more and more features get added to a piece of technology, there's some kind of default instinct that some people have to always fully exercise all of it even if it's not at all necessary. Simply because you can do more with it that you couldn't do years ago, there's some assumption that it's just better for some vague reason. It's easier to overcomplicate when everyone else is doing it also, so as to not get left behind.

Then everyone who doesn't have knowledge about web technologies and the like gets used to it, and people's expectations change, so this "new Web" becomes the new standard for them and they start enjoying it in some Stockholm-syndrome manner - or not, and the product managers mistakenly come to that conclusion from useless KPIs like "time spent on our website", which will obviously increase dramatically if it takes orders of magnitude more CPU cycles just to render the first few words of an article's headline.

I'm only speculating though.

Personally, as someone with a headless server running Docker, it pains me to no end that I can't browse Docker Hub with elinks.


> Maybe as more and more features get added to a piece of technology, there's some kind of default instinct that some people have to always fully exercise all of it even if it's not at all necessary.

I suspect this is very true. It seems true in my experience. I think the reason may be just that our field is so young and dynamic, that everyone learns everything on the job. If you want to stay up to date, you have to gain experience with the new tools, and the best way to do it is to find a way to make using them a part of your job. It saves you time after work, and you even get paid for it.

It takes good judgement to experiment with new technologies on the job without compromising the product one's working on. I feel that for most developers, the concerns of end-users rarely enter the picture when making these calls.


[flagged]


It's analogous to the question of whether you should listen to parents or to childless people on the topic of parenthood and children's issues. Parents are strongly biased, but they also know much more about the issue.

Or: when seeking driving advice, would you reject a priori the opinions of professional drivers, just because they're not representative of the general population? You would be justified in rejecting their views when considering your marketing strategy. Which kind of makes me think that a lot of reasons for pushing the "devs are not representative users" thought aren't about how to best serve users.


It’s nothing like that at all. Those are opinions informed by technical expertise and experience. Which user interfaces and designs people like is based entirely on personal preferences and tastes. It’s more like a driving instructor telling the pupil that they are factually incorrect for liking Ford cars because of some technical inferiority the instructor believes they have compared to some other car.

Your technical insight doesn’t make your user experience any more or less valuable or important.


I never said that technical expertise makes my experience more (or less) important. I'm saying that my technical expertise lets me understand my experience better, reason about it better, and communicate it better. I know concepts and vocabulary that a non-tech-savvy user doesn't, which makes it easier for me to pinpoint sources of frustration.

User interfaces are an acquired taste. A shiny-looking app built on the newest design trends may look appealing at first. But once you're a couple of hours into using it, your outlook changes. It suddenly starts to matter whether the application is able to handle reasonably sized workloads without slowing to a crawl (many web applications can't). It matters whether you can use the keyboard instead of clicking on everything. It matters whether the application is fast and responsive, and doesn't. lag. on. you. every. click.

What long experience with software - both as a creator and as a consumer - gives me is the language and ability to look past the shiny facade, and spot the issues that will matter long-term.


You say:

> I never said that technical expertise makes my experience more (or less) important.

But then you immediately dive into a long-winded explanation of why your own personal opinions are superior. If you have some deeply academic reason for not liking something, but millions of users just absolutely love how shiny it is, then you can’t prove they’re wrong for doing so, nor that your opinion is in any way better or more valuable than theirs.


When I meet the user that prefers shiny over functional after working for a few hours with each, I'll change my mind. I haven't met that user yet.

What I've seen though is that users almost never have any say in the matter. Software is generally forced upon users - mandated by their employers, being the only way to access a particular service or a social network, etc. Users have little say in how the software works.


A faster horse.

People value privacy when made aware of the issue, but very few people are aware of how websites track (and manipulate) them.


I feel like this claim needs support.

All of the studies I'm aware of that ask people to choose between different price points based on privacy end up showing that people place minimal value on privacy.


It is clear that many people know that many services they enjoy compromise their privacy. These stories have been covered by mainstream media ad nauseam. Jokes about privacy compromises are part of pop-culture at this point.

Most people, however, are not enthusiasts or activists who are wholly invested in these topics. Even being aware of the issue, most act based on their immediate needs. Immediacy is an extremely important factor in decision-making.

If we were to give this a utilitarian assessment formula:

[severity of problem] * [immediacy of problem] <||> [importance of need] * [immediacy of need]

Many people using social media to communicate with their family may evaluate that as:

[100] * [1] < [100] * [100]


GP's claims and the studies you mention are about two different things. GP claimed that users aren't generally aware of the extent to which their privacy is violated, not that they would choose privacy over price if made aware.


Did those studies give the big picture about privacy?

Why is it that people working in the field tend to be paranoid about privacy, while people outside it are not? It can't be that we are just more paranoid on average. It's more likely that we know exactly how this data can be aggregated, stored forever, analysed in the future with capabilities that don't exist yet, etc. I think most people don't understand privacy in those terms.


Counterpoint: every purchasing site I've used so far tracks me (or at least tries to). I am tracked even though I pay for the site.

You have to factor in the fact that people know they are tracked whether they are paying or not. Given that fact I'd prefer not paying as well.


There's a difference between valuing privacy and being willing to pay an extortion fee.


I think there's a faulty premise in there somewhere. An architect or an engineer is not a candy salesman. A candy salesman really shouldn't have a say in what flavor, texture, sweetness, etc. is appropriate for any given client.

However, the poster you're replying to sees this issue not as a candy salesman would, but as a public engineer would. There's a problem, like "what should the web be" or "how should the web work", which is akin to "how to build a railway over this valley" or "how to minimally disturb the ecosystem". That's not a realm of likes and dislikes, but of practicality.

One of the realities of today is that the average user is extremely distanced from the technicalities of the Web, whether that's desirable or not. That puts a lot of the burden on the informed and on the developers, who are often the same people. The few are obligated to make decisions for the many.

Do you deliver a box-shadow, but increase technical debt? Do you migrate to a more energy efficient platform, but alienate some users? Do you broaden the scope of your system, in turn increasing system complexity, or do you delegate to a dedicated third-party system, having the user possibly learn to use that third-party system?

It's a question of which compromise best serves the user. It shouldn't be a question of likes and dislikes. This is a complex situation rife with miscommunication, ignorance, conflict of interests, and inertia. Any simple solution, such as disregarding the opinions of developers, should be regarded with great suspicion.


Many "average" users don't know what they want, don't even realize the options that are available to choose from (i.e. the different ways an app can be built), and/or will accept whatever is offered.

Though it wasn't patently insulting, you have gone ad hominem - "to the man" - rather than to the idea. Whether one is a developer or not has no bearing on whether economy and efficiency are worthwhile values in a computer application. It so happens that developers tend to also be users, and if they ARE users, then they represent users. Also, in the sense of acting as an advocate of sorts for the user, developers represent users.

"Average" users having never been offered the thing being proposed here (faster and ad-free versions of the same apps, with the same network effects etc.), I don't see how you can state with any confidence that they wouldn't have chosen them, were they available.


It's ironic that Google got "lucky" because of its minimalistic approach back in the day. Now, it's just a huge mess.


> The "best practices" in web development and user experience all deoptimize end-user ergonomy and productivity.

What are you seeing that leads you to think this? The ads and engagement drivers (autoplaying videos of other content on the site) on sites that need eyeballs to keep the lights on, or the articles showing how to download the minimum usable assets so you don't waste the user's bandwidth, battery, and disk space[1]? The latter is what I tend to see when I'm looking at pages describing "best practice".

1: https://alistapart.com/article/request-with-intent-caching-s...


The best practices that encourage you to minimize content and maximize whitespace on your screen. To replace text with icons. To hijack the scrollbar. To replace perfectly good default controls with custom alternatives that look prettier but lose most of the ergonomic features the defaults provided. To favor infinite scrolling. Etc.

Some "best practices" articles discourage all this, but in my experience, that's ignored. The trend is in ever decreasing density.

It all makes sense if you consider apps following the practices I mention as sales drivers and ad delivery vectors. Putting aside the ethical issues of building such things, my issue is that people take practices developed for marketing material, and reapply them to tools they build, without giving it any second thought.


This sounds like the kind of argument that would have said that the algorithm for rounded rectangles in the Mac OS toolbox was superfluous fluff.

The world is bigger and more interesting than screens and screens of uninterrupted plaintext.


Rounding rectangles is superfluous fluff, but it's also nice, and it serves a purpose in the context of the whole design language they're using. I'm not against rounded rectangles and other such UI fluff in general. But I am against throwing away a perfectly working control, with all the ergonomics it offers, and replacing it with a half-broken, slower version of that control that only works if you use it in one particular way - but hey, it has rounded rectangles now.


Sounds like you're against bad dynamic HTML then.

Fortunately, good dynamic HTML also exists.

And I wouldn't write off rounded rectangles as superfluous fluff. They're fairly ubiquitous in user interface design because round cornered structures are fairly common in nature. They make a UI look more "real". And decreasing the artificiality of a user interface isn't superfluous; it lets more users interoperate with the interface without feeling like they've strapped an alien abstraction onto themselves. A lot of people in the computer engineering space have no trouble working with alien abstractions for hours at a time, but it's an extremely self-selecting group. We are often at risk of believing that what is normal for us should feel normal for everybody.

https://bgr.com/2016/05/17/iphone-design-rounded-squares-exp...



