Perfect UX Is Impossible (2019) (itland.no)
85 points by eitland on April 25, 2020 | 66 comments


Many people forget to ask the (essential) question: "Good UX – for whom?"

For example, a tool that is exceptional to use for a professional might be very bad to use for a consumer, and vice versa.

Software could, in theory, serve multiple target demographics with good, consistent UX at once by being configurable. However, there is a clear trend toward making everything intuitive and easy for inexperienced users (like an iPad), often at the expense of people who have to use the software all day. In the worst case, the whole target demographic of a piece of software is power users, but the UX still trades off usability for a flat learning curve.

Imagine a point of sale system. Retail workers have to use that stuff the whole day. This means it nearly never makes sense to sacrifice power user efficiency just to be able to tell some manager who will never have to work with the damn thing: "Look how easy this is to understand".


> For example, a tool that is exceptional to use for a professional might be very bad to use for a consumer, and vice versa.

This was known, if not widely circulated, at least 20 years ago.

UI that is accommodating for beginners is debilitating for professionals, and I don't just mean existing professionals as the author mentions.

As a software developer, I want and have the skills to seek a solution where a set of N steps only takes a couple of keystrokes. For everyone else, the best they can get is muscle memory, and most skilled labor ends up leaning on that kind of skill heavily.

So UI that is heavy on feedback may be slow in several ways. It could literally slow down the process, or it could introduce roadblocks due to imprecision. A keyboard shortcut is almost always the same two keys in the same order. Fine motor control is the only limiter. If I substitute a mouse click, now hand-eye coordination is dominant. The guy who always hits the waste paper basket from across the room might not care, but everyone else is poorer for it.

And the thing is, video games have this problem solved, but we haven't adopted the techniques the way we have for earlier innovations. Take World of Warcraft. You start off with a few dozen activities you can perform. At intervals you add another batch of actions, and you celebrate that milestone by building up a toolbar (action bar), cherry-picking the set that speaks to you.

New actions and new ways to manage the actions you have are available as addons, and the very best players don't have the largest number of addons, but the ones they do have are reliable, and they all complement each other.

And every once in a while, an action that is very popular gets included in the base UI. Now everyone gets to use it, and 20% of the userbase was already using it before it ever launched, so you have more resources available to help you figure it out.


> Imagine a point of sale system. Retail workers have to use that stuff the whole day. This means it nearly never makes sense to sacrifice power user efficiency just to be able to tell some manager who will never have to work with the damn thing: "Look how easy this is to understand".

The problem is that the sales teams for the POS machine don't interact with the retail workers. Instead they interact with, at best, middle managers, or, at worst, the C-suite, which generally has minimal software competency and little understanding of the various software solutions serving the domain.


> In the worst case, the whole target demographic of a piece of software is power users, but the UX still trades off usability for a flat learning curve.

> Imagine a point of sale system. Retail workers have to use that stuff the whole day. This means it nearly never makes sense to sacrifice power user efficiency just to be able to tell some manager who will never have to work with the damn thing: "Look how easy this is to understand".

I’ve made comments about how frustrating this exact scenario is before. I’m glad to see someone else notice that.


We often divide UX into those two broad "inexperienced" and "power user" categories, but in my experience those are only what we see for Good Software, which at worst misses its demographic. More frequently, I see Bad Software written, where the UX targets either looking good in a sales demo or impressing an audience of 1-3 people (the CEO and/or other execs). Good sales demos may lean more to the simple side, but often they'll cram menus full of features purely to check off things on the client's list and look impressive on paper, even if in practice those features are too clunky to use. CEO-targeting software may lean to "power user", but execs may also demand that their product put the few things they personally are interested in front-and-centre, at the expense of everything else.

As a programmer, I want Good Software to align with the work people will pay me for. I'm fortunate enough that this is the case with my current work, though I recognize that it hasn't always been the case over my career.


This is why I wish software in general - especially tools - was more configurable.

The more users are able to alter UI/UX, the more they can tailor it to their use case.

There has been so much focus on consistency in UI/UX that we have thrown customization right out the window.

Consistency is valuable, but I think we ought to move its focus away from UI/UX, and down one rung on the ladder of abstraction: to functionality.

We could also make an effort to move UI/UX one rung up the ladder of abstraction, and have a high-level interface for discrete functionalities. This is something that isn't tried often, but seems to work out well when it is.


Do you have any examples?


bash, or any other shell.

In a shell, you can set aliases, and even create intermediate functionality to interface with. You can even write software that works as a front end for your shell.

The shell is so far removed from functionality, it hardly implements any at all. Instead, it uses entirely separate programs by providing the user a consistent and configurable interface to those programs.
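
For instance, a minimal sketch (the alias and function here are invented for illustration, not taken from anyone's actual config):

    # An alias re-skins an existing command without changing it:
    alias ll='ls -alh'

    # A function composes separate programs into one higher-level action;
    # the shell itself implements none of the underlying functionality:
    biggest() {
        du -sh "${1:-.}"/* 2>/dev/null | sort -rh | head -n "${2:-10}"
    }

Every user ends up with a slightly different interface to the exact same programs, which is the kind of configurability the parent comments are asking for.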


The key question is whether the tasks and users value a differentiated output from using the software or not. If not, then the best UX is as streamlined and invisible as possible. Professional or power user doesn't mean much here, because they could value both things, streamlining and an abundance of options; it's domain dependent.


intuitive and easy for inexperienced users (like an iPad)

And yet my dad is constantly confused and frustrated by all these gestures he keeps invoking by accident. In truth, iOS is loaded with power user features that constantly surprise beginners. The same is true for macOS with all of its trackpad gestures and complicated windowing features.

The last truly user friendly operating system was Classic Mac OS. It had a consistent user interface backed by extensive research into human-computer interaction. It used powerful spatial metaphors that worked with the brain’s spatial memory [1], not against it like today’s “smart” systems that try to guess what the user wants.

[1] https://arstechnica.com/gadgets/2003/04/finder/


>The last truly user friendly operating system was Classic Mac OS

This is a statement for which you're likely to find people saying the same thing for almost every operating system, inserting their own preference in place of Classic Mac OS. For your typical user, I'd wager it will often be the OS to which they were first exposed. (though certainly not always)

Personally, I rather liked Windows 3.11. It did everything I needed at the time, and it was simple enough that I could hold pretty much the entire abstraction of its configurations, settings, features, etc. in my own mental space. Windows 95 was... okay. I managed to skip ME and Vista.

In contrast, Classic Mac OS seemed like a "pretty" OS that crashed for difficult-to-troubleshoot issues that often required disabling "extensions" and re-enabling them in groups to find the offending one. Until then, you'd experience the supremely unhelpful "type 10 bomb" hard crash. Windows' BSOD was annoying, but I hated those bombs.


crashed for difficult-to-troubleshoot issues

Oh there’s no question that Classic Mac OS crashed a lot. The underlying technology left a lot to be desired. That’s really beside the point though. None of the crashes were caused by the UI design. They were due to a lack of protected memory and corruption of preference files, along with the ad hoc extensions system you mentioned. These problems could all have been solved without abandoning the truly easy-to-use UI.

Modern macOS continues to have a lot of very difficult-to-troubleshoot problems. They don’t usually cause the system to crash, but they’re very annoying to deal with nonetheless. A common one I keep having is random processes using 100% CPU for no readily apparent reason. Take one look at Activity Monitor (or Task Manager on Windows) and you’ll see dozens or even hundreds of processes running in the background. How the heck is my dad supposed to troubleshoot that? He won’t! He’ll notice his computer running hot and battery draining rapidly and ask me to figure it out for him.


With modern macOS, the battery meter shows applications using a lot of power, and it has its own tab in Activity Monitor.


I use that feature every day. It doesn't solve the problem of random services like cloudphotod occasionally deciding to use 100% CPU for a few hours in a row. Killing the process in Activity Monitor just causes it to restart immediately.


> This is a statement for which you're likely to find people saying the same thing for almost every operating system

Maybe, but they'd be wrong. I didn't grow up using classic Mac OS much, but in hindsight I'd definitely agree that it put more thought into user interaction than anything else I've ever used.


You cut off an important bit there, cherry picking a part of my comment, which makes it seem like I had made a sweeping generalization. I specifically stipulated that it didn't apply in all cases.


> you didn't put in any kind of way to configure it.

I have often found, in many projects I have worked on, that there are always UX compromises made for good reasons (typically to make the primary use cases as usable, simple, and easy as possible, often at the expense of "power users").

Configurability often never gets built, purely due to time/resource constraints rather than any maliciousness or lack of foresight. You can bet your bottom dollar that the engineers building the system will have brought up the fact that something can/should be a user preference/have a power-user shortcut/etc etc during early stages of design & development (since the engineers tend to fall into the "power user" category due to their familiarity with the software they're making).

However, I have found that once the primary use case gets marked as done, it is never updated to allow power users to shortcut/configure bits and pieces "because agile" (i.e. you are off to the next feature and there is never a time to go back and look at tech-/UX-debt).

For what it is worth, I don't think this is a bad thing. Often the "power users" with some niche workflows are typically a) a very, very small percentage of your users, and b) a very, very vocal percentage of your users. Most of the users will just get on with their life when using your software, but it is the "power users" who will make sure that you're aware of how much your team's work sucks.

When the "power users" are a significant proportion of your users, only then is it worth formally recognising their needs as part of the formal design, otherwise it generally seems to be a lot of work to support the niche needs of literally a handful of users at the opportunity cost of adding features for the other 99.99995% of your users.


> You can bet your bottom dollar that the engineers building the system will have brought up the fact that something can/should be a user preference/have a power-user shortcut/etc etc during early stages of design & development

As a developer, I’ve come to respect the curse of dimensionality w.r.t. testing and quality control: every binary option doubles the number of possible user configurations, and thus the testing load. In a very concrete sense configurability leads to more buggy code, and I usually err on the side of reliability.
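
To put numbers on it (my own back-of-the-envelope illustration, not figures from the thread): with $n$ independent binary options, the number of distinct configurations is

    $\underbrace{2 \times 2 \times \cdots \times 2}_{n\ \text{options}} = 2^n, \qquad 2^{10} = 1024, \qquad 2^{20} \approx 10^{6}$

so ten innocuous checkboxes already imply a four-digit test matrix if every combination were to be covered.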


> every binary option doubles the number of possible user configurations, and thus the testing load

Only if you assume your software is the worst possible variety of spaghetti.

Configurations do increase the software complexity. By how much is a very complicated question.


In theory, every additional binary configuration item doubles your state space, because you have all the old state space, but now both with that value on and off.

It is tempting to say that in practice the result is less than this, but it isn't. Adding a binary option may seem not to double the state space because it only modifies a small portion of the original space: imagine, e.g., an option that the user sets to force all emails to be in ALL CAPS. Strong abstractions can mostly contain that state-space increase in a principled manner to just the email code.

But if you've been doing this for ten-plus years, you've almost certainly encountered the moral equivalent of three other supposedly equally well-contained things interacting to produce a bug. Maybe if the user set this setting to true, and was using your UI in Armenian, and all the other settings on the page were in their longest possible form in Armenian, and your UI guy used a fixed-size frame ever so slightly incorrectly, your setting disappeared off the UI entirely, just to draw one example out, even though you had a "proof" that this setting couldn't affect anything but the email code. There are any number of other ways this could go wrong.

None of our tools are good enough to fully contain the exponential state-space explosion correctly, and it will sooner or later push through your best efforts to contain it. One of the best ways to prevent this is just not to feed the exponential monster.


If you look at what happens with Refactoring or Unit testing, instead of just listening to what people tell you is happening, you see that a lot of what goes on is synthesizing one state from a small group of others, over and over again.

So for instance there may be five different criteria that decide whether you are qualified to receive a 10% off discount. You make a block of code that is responsible for emitting a single boolean, and the rest of the system only ever interacts with that single value.

Instead of an upper bound of 2^n you have one that is somewhere around (n/3)!, which is still a scary-big number, but might push the dogleg out a couple of years.
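
A sketch of that shape in shell terms (the discount criteria here are hypothetical, just to make the structure concrete):

    #!/usr/bin/env bash
    # Five criteria are evaluated in one place and synthesized into a
    # single yes/no answer; the rest of the system only ever branches
    # on that one value, never on the five raw inputs.

    is_loyalty_member=1
    has_coupon=0
    is_employee=0
    order_total=120     # dollars
    items_in_cart=4

    qualifies_for_discount() {
        # Membership, a coupon, or staff status gets you in the door...
        (( is_loyalty_member || has_coupon || is_employee )) || return 1
        # ...but only on a large enough order.
        (( order_total >= 100 && items_in_cart >= 3 ))
    }

    # Downstream code sees one boolean, not 2^5 input combinations.
    if qualifies_for_discount; then
        echo "10% discount applied"
    fi

The five flags still have 2^5 internal combinations, but only this one block ever has to care about them.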


> instead of just listening to what people tell you is happening

Can you elaborate on what you’re referring to with this phrase? As far as I can tell, everyone here is reasoning from their personal experiences rather than adopting the opinions of others.


Ah, I meant 'in the literature' not in the thread. Just commenting on the frequent disconnect between 'why we do things' and why we do things.


> Only if you assume your software is the worst possible variety of spaghetti.

Gravity always wins.

The thing about exponentials is that if the base is > 1, the trend line essentially looks the same; the X axis just changes a bit. So maybe a boolean doesn't double the surface area, maybe it's only a factor of 1.5; that still results in compound interest. Even at 1.1, it only takes 7 booleans to double your surface area (1.1^7 ≈ 1.95).


Yet, everything interesting happens on the border where things are falling but still not irreversibly so.

Yes, if you leave them wild, options will overwhelm your code. The entire field of software engineering is focused on solving that exact problem; it's what people do, with varying degrees of success.


The initial cost is quite low and there are techniques to reduce the growth factor, but I’ve seen no evidence of anything that reduces this below an exponential scenario.

Over large scales, exponentials can be reasonably modeled by a piecewise function that’s 0 below some threshold and infinity above it (1). That translates roughly into a fixed number of options that can be maintained, modulated by your development practices.

Without knowing what that limit actually is, what’s the best strategy for spending this limited resource? As there’s no such thing as an average person (2), I contend that this budget should be reserved for things that materially affect accessibility.

(1) This is the standard approximation for a diode’s I-V curve, for instance

(2) https://apps.dtic.mil/dtic/tr/fulltext/u2/010203.pdf


> every binary option doubles the number of possible user configurations

This is why you distinguish UI options from functionality options.

If you properly factor functionality out by modularizing UI, you end up with an order of magnitude less complexity.
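
A shell-flavored sketch of that split (the names are made up; the point is the shape, not the tool):

    # Core: functional options only, testable on their own.
    export_report() {    # $1 = format (csv|json), $2 = destination file
        case "$1" in
            csv)  printf 'id,total\n1,120\n'        > "$2" ;;
            json) printf '[{"id":1,"total":120}]\n' > "$2" ;;
            *)    echo "unknown format: $1" >&2; return 1 ;;
        esac
    }

    # UI layer: thin wrappers expressing presentation preferences.
    # Adding a UI option multiplies only this layer, not the core.
    quick_export() { export_report csv ./today.csv; }

The two layers can then be tested independently instead of as a cross product, which is where the claimed reduction in complexity comes from.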


Ok, so you’ve successfully reduced the problem space from 2^n to 2^(kn) + 2^(n-kn), where k is the fraction of options in the UI category. That’s better, but still clearly exponential growth.


Best description of this is from Jack Dorsey: you have to get the details perfect, and the only way to do that is to limit the number of details.


> Most of the users will just get on with their life when using your software, but it is the "power users" who will make sure that you're aware of how much your team's work sucks.

It is enough for one of them to make a viral blog post detailing why they think your software sucks for a lot of people to instantly put your software in the "won't touch" category, though.


Possibly, with 'a lot of people' meaning other power users, though.

How many blog posts about software do you think regular folks read?


There is no such thing as "regular folk". People with an interest in photography will follow photography blogs, people with an interest in music will follow music websites, etc...

e.g. music-making forums such as GearSlutz and KVR have 380k and 390k members respectively. DPReview, for all things photography, has millions of posts. And only a small portion of people are active; the immense majority just reads without registering an account: for instance, the current ratio on GearSlutz is 8:1 (714 members and 5929 guests).


I was talking about this:

> It is enough for one of them to make a viral blog post detailing why they think your software sucks for a lot of people to instantly put your software in the "won't touch" category, though.

People in general just want an app that does what they need done. A lot will just install the first search result; some will look at options and comparisons. But saying one blog post will make a lot of people put a piece of software in the "won't touch" category doesn't seem that realistic.


Case in point: music notation. There's Sibelius, the most widely known software for transcribing music, and it's been on top for a long time. But since everybody in the music community dislikes its UX, many people will skip straight over it and go to the second Google result.


> How much blog posts about software do you think regular folks read?

Zero, probably, but one might argue that opinions of power users have somewhat of a disproportionate weight in that they influence close friends and family much more than those of average users.


In general I have enjoyed the sparseness of the OS X UI, and I knew that there was a key you could hold down to give you more dangerous options in the right-click menu (like run this software you just downloaded).

But only very recently did I learn that there are a bunch of hidden options scattered throughout the System Preferences that work the same way. I have mixed feelings about this, and I wonder if I would almost rather have the about: menu solution.


>> We just simplified the whole Desktop Environment Experience!

> Fine, this will be great for onboarding new users but you just made it a lot harder to use for all existing users.

This is one of my biggest pet peeves in UX. People almost always mix up user-friendly with beginner-friendly.

Those are very different, and in the majority of cases they are opposites of each other.

A good UX is one that combines the two (well, that is also under the assumption that a broad audience is targeted).


> A good UX is one that combines the two

A good metaphor helps. E.g., the desktop metaphor for a file-management tool: it works well for beginners and doesn't constrain experts.


In principle there's no reason why you couldn't cater to each user group quite differently and make one experience morph into another as users gain confidence and/or show interest in more advanced features, but in reality there's rarely the scope to even refine the basics iteratively, so everything ends up heavily compromised. (Speaking as a UX designer, 10+ years experience)


I partially agree with the article, but the examples given are very simple and don't prove anything. Saying that it's impossible to have good UX because A is good for some users and bad for others while B is the opposite doesn't prove that there is no option C which satisfies both types of users.


It’s a straw-man article. No one is saying any UX is perfect, and their analysis is so superficial as to be meaningless.


Close to “Perfect UX” is when technology is acting on your behalf, and you don’t even know it’s there.

A good example of this is the Nest thermostat. After a few weeks you just forget it exists, and it does its thing in the background.


Like any proper thermostat. Except when it loses Internet connection or Nest/Google decides it's bored with the platform and bricks your thermostat remotely :).

I think depending on third-party subscriptions should be considered a UX antipattern.


Other thermostats don’t learn your preferences.

If anything, Nest gets in the way the first few weeks, then goes to the background.

I’m talking about product UX, how a Nest thermostat usually works, independently of Google getting bored ;)


Getting caught up in trying to have the perfect UX is one of the largest causes of developer paralysis that I’ve seen. IMO, push it even if it works for 75% of people, and keep working towards 100% – UX is learned on the way.


Attempting to achieve perfection is a problem, but at a minimum I believe any feature needs a good long time spent considering pain points for a wide variety of users.

Achieving perfection is impossible, but you need to at least improve the level of the worst aspects.

A lot of the problems in open-source software user interfaces, I contend, come from people finishing a feature on the backend and then throwing together a UI that merely functions.

I, on the other hand, spend most of a feature's development time thinking about the UI, putting a lot of consideration towards how it interacts with other aspects of the program and how I can remove unnecessary mouse clicks and reduce visual surprise.


Both budget and time are the constraints for UI/UX. A bad UI/UX can be redesigned easily without affecting much, especially data. But a bad process can produce bad data, and it's both harder and more time-consuming to fix process and data.


Move away from one-size-fits all, but have the UX adapt to user behavior. Personalized UX is pretty close to perfect.


At the same time, it's way more complex to automate tests/QA, and to debug.


I read some of the other posts by OP and they mentioned that the ribbon interface was a mess. I remember I quite enjoyed it, and I'm also aware that some people didn't like it, but I never knew why they didn't like it. Anyone here care to enlighten me, from a UX standpoint?


For me, the ribbon interface makes it difficult to find things, unless they happen to be on the default ribbon.

The classical menu is a tree structure. The first level is horizontal in the menu bar, every other level is vertical and opens when you click the previous level. You can easily go inspect the entire structure if you are looking for something that you use rarely and don't remember where it is.

Also, the items are ordered by topic ("file-related things here, editing-related things here"); with ribbons it feels like "most frequently used buttons here, everything else randomly hidden somewhere else". It does not help that some ribbons are secret and only appear when you do a specific action, e.g. select an item of a given type. I'd rather have a disabled menu item that I can't click, but at least I see that it exists.

In short, with a classical menu I feel in control: everything is here, neatly organized by topic, and I choose what I want. With ribbons it feels like the system discourages me from doing anything that is not in the default ribbon, by making it more difficult to find.

I have no idea how large a part of this is simply force of habit. But from my experience, new users who have never seen a non-ribbon menu still have problems finding things. Except now I have that problem, too.

(In larger context, it feels like a typical Microsoft arbitrary change for change's sake. Just like the control panel is dramatically different in each version of Windows. It's like the more important something is, the more they love to experiment with it, where "experiment" means forcing it on all users.)


I see. I can only say that I was never advanced enough of a user that this became a problem; much of what I needed was already in the ribbon, and I could find them by just scanning the whole thing, as opposed to drilling down menus.


I was fine with the ribbon interface when I first encountered it with MS Word. I was in college then, and using that interface constantly.

Now though, I occasionally run into programs that I use once or twice per year that adopt Windows' ribbon interface, and yeah, it's definitely harder to find things than a drop-down menu interface would be.

I have to imagine that there's also a factor of context-switching friction in mixing both icons and words. I think the old approach of "common familiar icons in toolbars, those and everything else in organized drop-down menus" was superior for this reason as well.


I don't have any concrete UX recommendations, but for me (a casual Office user) it always somehow manages to show both too much and too little. I.e., there's this distracting mess of buttons and fields on the top bar, but I still have to fish around various tabs to find what I'm looking for. And I can never remember which tab stuff is in, kind of like the way they grouped things in the Win10 Settings app.


I had the same problem with menus too, so I thought ribbon was helpful because it showed most of everything I needed and I just had to give the whole ribbon a quick glance over to find what I needed. Admittedly my needs were simple enough that I never had to hunt around very much.


I agree with basically everything said in the post. I also don’t believe any sane and experienced software professional would disagree with any point made here.

I think the point is to strive for ease of use and functionality for the highest percentage of users possible. If you have multiple personas of users who use your application, the amount of time you spend on the R&D of the UX to satisfy each persona should likely be proportional to the percentage of revenue each persona brings in.

I’m a backend developer who is pretty awful at developing good UIs, but damn I appreciate a good one when it comes along and feel like the part of the brain that’s used to build a very solid UI is a special one that not many people have developed.


> I think the point is to strive for ease of use and functionality for the highest percentage of users possible.

And so, it blows my mind how Instagram and WhatsApp have continued to get their UX right despite adding so many features over the years. This is super hard considering that more than half the world, not all of whom are technology-savvy, uses WhatsApp and Instagram on a daily basis with near-flawless understanding. I mean, sure, they might struggle a bit, but it isn't anywhere close to frustrating them, unlike other popular software. Not to mention how frictionless and smooth their entire UX is.

I guess design focused on content consumption (email/IM) is easier than design for content creation (Microsoft Office)? Or maybe the value of popular software is such that people put up with it anyway and learn to live with it despite the UX?


Using a chat application or scrolling through a feed is not hard. Users can easily learn a new UX if the product really provides value. Yes, better UX is good for many reasons, but people have always learned to work around bad UX in order to achieve their goals. Facebook has a shitty UX: I never find what I'm looking for, I'm spammed with ads and fake news, loading is slow, notifications are bugged, yet it's still one of the most popular sites on the planet. Their ad manager is a joke when it comes to UX and bugs, yet people still spend billions on advertising on Facebook.


“the amount of time you spend [...] should likely be proportional to the percentage of revenue each persona brings in”

... and with that you enter the deep waters of revenue attribution. Whose needs are more important? Those of the exec signing the deal, interested in reporting and auditing features, or those of the poor schmuck using form #B67-A for hours a day to enter data? Your customers and your users are not necessarily the same people.


Agree with this article. Also, another unspoken truth about UX: it is a multiplier, not an addend. If your app provides a ton of value, people will suffer through truly awful UX. And the tightest flow can’t save an app nobody wants.


Define "perfect."

If we want an app that will work for all possible users, then, yes, it is impossible.

If, however, we want a UX that is optimized for only a certain set of users, then it may actually be entirely possible; especially if we can depend on user training.

Tufte is big on interfaces like that. Some of his ideas are weird as hell, at first blush; but, as soon as we start to grok them, they're great.

I remember the Japanese train schedules and maps in the Tokyo stations. They looked like complete monstrosities at first. However, after I'd been there a couple of days, I learned to use them to very quickly find my stations and fares.


I don't think the title is helpful, but I agree with the gist of the article. We have a long way to go to achieve human-computer interfaces for naive users that are better than human-human-computer systems, where the human in the middle is a trained expert user working with a poweruser interface. This is basically what sales and customer support are for digital services and SaaS. I think we'll eventually get there, but it's a long road, with loads of open problems. This is why I'm optimistic about the career prospects of product engineers for the foreseeable future.


Because there will always be more than one target audience, with different needs and preferred ways of doing things, it can't be done. You can do what's most common for your targeted audience, which can be a good solution, but never will everyone think it's perfect.

It's similar to the idea that someone's utopia is someone else's dystopia.


Deleted and resubmitted since I had posted a wrong URL.


At least that's Microsoft's motto


Steve Jobs: hold my beer



