Stepping on toes

As a practice, UX can be modeled as having two components: Research and Design. For any UX initiative to be successful, these components have to work together harmoniously. But a user’s experience can also be spoiled when one of these components steps on the toes of the other.

Research and Design are co-iterative as phases; they may overlap, but if UX is being done properly then Research is both the first and last phase — it doesn’t matter how many Design iterations fall in the middle.

Research/Design iterations in UX practice

What I mean by stepping on toes is when a phase that should come later starts too early. Design can step on the toes of Research, for example, by developing prototypes or even finished products that are not informed by Research findings, which is bad on its own but worse if users come into contact with those uninformed artifacts. (Indeed, in UX we often lament instances where this happens, and it happens a lot!) It affects those users’ expectations and perceptions of what the experience should be, and closes doors to what the experience might be.

When Research steps on the toes of Design, you get things like the incident that prompted me to write this blog entry today: I was on a website, entering information into a form. In fact I was typing into a field. In the middle of that task, I was interrupted by a popup asking me for feedback on the website, and I had to take my hands off my keyboard and use my mouse to take further action.

I was interrupted in a task by a popup that looked roughly like this

There are three actions a user might take in this situation:

  1. Give feedback according to how the experience has generally been up to that point
  2. Close the popup, usually without thinking about it
  3. Give feedback that is negatively biased by the recent interruption

I would bet that only a tiny fraction of people would respond with Action 1. Personally, I responded with Action 2, as I suspect most people would. (When briefly interrupted from a task, it’s normal to simply want to resume the task.) I don’t know how many people generally go with Action 3, but I would bet it is a much larger proportion than those who would go with Action 1.

The people maintaining the website should have waited until just after users completed the form to ask for feedback. There’s a good chance many people would still take Action 2 at that point, but whatever feedback was collected would not be negatively biased by a recent disruption.
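
To make the timing difference concrete, here is a minimal sketch in TypeScript. The form selector, survey markup, and function name are hypothetical stand-ins, and a real survey would be wired to whatever feedback tool the site actually uses; the only point is that the prompt fires after submission rather than on a timer that can catch users mid-task.

```typescript
// Minimal sketch: defer the feedback prompt until the user's task is done.
// "#signup-form" and the survey markup are hypothetical placeholders.

const form = document.querySelector<HTMLFormElement>("#signup-form");

function showFeedbackPrompt(): void {
  // Build a simple modal survey and show it only once the primary task is complete.
  const dialog = document.createElement("dialog");
  dialog.innerHTML = `
    <p>How was your experience with this form?</p>
    <button value="good">Good</button>
    <button value="bad">Bad</button>
  `;
  document.body.appendChild(dialog);
  dialog.showModal();
}

// Anti-pattern to avoid: setTimeout(showFeedbackPrompt, 30_000), which can fire
// while the user is still typing and bias whatever feedback they give.

form?.addEventListener("submit", () => {
  // Ask for feedback right after submission, when the experience is complete.
  // (In a real flow this might instead run on the confirmation page, or after
  // an asynchronous submit resolves.)
  showFeedbackPrompt();
});
```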

In this case, Research stepped on the toes of Design by asking users for feedback before they could complete the experience as designed. Not only will those users be unable to comment on the whole experience, but when giving feedback they will be biased against whatever came before. This is why it’s important for the two sides of UX not to step on each other’s toes.

If a user interface requires explanation, is it bad?

Some coworkers shared a meme that’s apparently been circulating on LinkedIn:

A USER INTERFACE IS LIKE A JOKE. IF YOU HAVE TO EXPLAIN IT, IT’S NOT THAT GOOD.

The meme is true for some applications, but definitely not for all. For instance, Microsoft Excel features a highly complex interface, but there’s a good reason it’s one of the most utilized business and research tools in the world, and I doubt the interface could realistically be made much simpler while retaining its capabilities.

The meme definitely doesn’t work in reverse: some interfaces require no explanation but are plenty bad regardless. For example: touchscreens are usually considered very intuitive because the user directly manipulates objects on the screen, but they can be difficult for certain people (e.g. those with vision impairment) to interact with in a physical sense rather than a cognitive one. There is an abundance of accessibility cases such as this.

Interfaces can also require no explanation yet still be bad, because their very simplicity encourages an unhealthy level of interaction, or any interaction at all with a product that is unhealthy to begin with. If not for the simple interface, users might interact with the unhealthy product less, or avoid it altogether, to their benefit. Think slot machines, social media, and the like.

For many of the products I’ve worked on, a balance must be struck between intuitiveness and accommodating power users — those who rapidly learn and integrate into their regular use the non-obvious shortcuts, hidden features, and advanced capabilities of the product. This balance is found through research: talking with stakeholders, listening to and observing users, and learning about the broader context into which the product fits.

It is fun to identify ways a product’s interface can require less explanation — this very activity is what first got me interested in UX, as I suspect is true for many others in my field — but sometimes a simplification that makes a product easier to use for one person makes it less powerful, or in some cases completely unusable, for another.

Technology Ethics: My Seven-Year Journey

In 2014 I attended a UX conference at which the closing keynote was about how the future would have no interface. The presenter gushed about the years ahead, when transhumanist devices, the Internet of Things, and ubiquitous sensors would transform our world into one in which the interface, instead of being located on a device, is the world itself:

You won’t have a light switch on your wall; you’ll turn on the light by pointing at it in a decisive way. You won’t learn to draw; you’ll wear a wristband that jiggles your hand as you hold a pencil to paper, causing you to make marks indistinguishable from those that might have been put there by a trained artist. You won’t interact with computer hardware; virtual objects and a keyboard will be projected right onto your desk and you’ll manipulate them as if they were real.

Rather than excitement, my reaction was horror and disgust. On my two-hour drive home I wondered if I was in the wrong industry, and whether I should pack up my family and start life over in some kind of ultra-primitive arrangement in the woods.

I got over the worst of it by the time I arrived home, but I was left wondering why I had had that strong reaction and what it meant. What was it about that future that felt so wrong? What could I learn from it?

Eventually I figured out that what I recoiled from was the threat to my values presented by these technologies. All the sensors and complex circuitry required to make the IoT work violated my desire for simplicity and privacy; the transhumanist devices attacked my sense of authenticity; and so on. Moreover, I feared the alienation I would feel from my children if (or when) they embraced these technologies and absorbed their attendant values.

Amish churches, I came to learn, have successfully tackled this exact problem by drafting ordnungs (written community guidelines, more or less) that include regulations about which technologies may be owned and used as normal, or else must be modified, kept outside the home, or banned entirely. As a result the Amish appear frozen in time to most outsiders, but it is hard to deny that they also enjoy tight-knit communities and lower rates of physical and mental illness, and that they have even seen a decades-long drop in attrition. Apparently, being able to control the direction and pace of change in one’s social values comes with a huge payoff.

Although the Amish do not explicitly map technologies to values, this was something I recognized as necessary. In 2015 I devised a design process model in which the values potentially supported or threatened by a proposed technology are evaluated deeply so their primary and secondary effects on users’ lives might be anticipated, and negative effects avoided. I got as far as testing this process model, but the results were unclear. Later I determined that my experimental design likely did not control variables tightly enough. Further, I conjectured that the system comprising both technology and modern Western social values is probably too complex to influence predictably with a design process model.

I was deflated by this setback, but soon began to notice lots of other people had started talking about ethics in design. It sounded like many people shared my concerns about the future of technology and its impact on society. When I gave a presentation on this topic at Midwest UX in 2017, it seemed like half the other presentations shared the same theme.

(I wrote an article describing my technology ethics journey up to this point, with more detail about the process model and how I tested it, on Pomiet’s website in 2016.)

Shortly afterward I joined a fledgling discussion group, Ethical Technology, founded by someone whose intelligence and clear writing on the subject had struck me. Many of the things he said felt like things I’d wanted to say but hadn’t found words for.

The discussion group soon grew quite sizeable, but I found the tone of the conversation had changed. We didn’t seem to be talking about technology as much as we talked about the people making it. It did not take long for the dialogue to devolve further, all the way into partisan politics. Sometimes technology was merely decoration and not relevant to what people were really talking about; the issues raised seemed purely about political ideology. Disillusioned with both its political fixation and ideological uniformity, I left the discussion group and returned to thinking and reading on my own.

Around that time, during my “day job,” I was the UX researcher on a learning management system that was eventually to be rolled out to thousands of employees at the large company where I worked. In our team meetings we frequently discussed the change management aspects of the project, and I came to see how the user experience and the change experience were intricately tied together.

I became fascinated with the field of change management. I read its foundational texts and many of its journal articles, and attended meetings of the local chapter of the ACMP. But I did all this with a critical eye: I wanted to show that those who resist technology change need to be listened to rather than persuaded. This stands as the most recent influence on my thinking about technology ethics.

The success of the Amish is ultimately attributable to the control and self-determination they are able to exercise over their technology adoption decisions. I have come to see that as the most basic truth of technology ethics. The most important aspect of a technology’s design when it comes to ethics is the degree to which the human user can control his or her relationship with that technology. This means the ethics may come from the design of the technology itself, or from the rules and customs that surround it, but will ultimately be determined by the user’s freedom to adopt or reject it.

This also means few technologies are ethically perfect. We give up some freedom over which technologies to use or avoid when we agree to work for various employers, or sometimes even just to live in certain areas. We adopt many technologies simply because they are considered normal and baseline, and we never think twice about it.

Yet awareness of this situation brings into sharper relief the opportunities to make technology more ethical. That is what I hope to do in my work these days, and going forward.

Small difference in design, big difference in user experience – water bottle edition

Sometimes a seemingly insignificant design feature can carry bigger assumptions and implications for the lifestyle of the user.

Water bottles

Consider two reusable water bottles, as pictured above. Both have a capacity of 20 ounces. Both have a small opening designed for drinking out of and a larger opening designed for adding ice or other solids. Both have about the same footprint and will fit most cars’ cup-holders. The only real difference is the shape of the upper portion.

To fill either one up to maximum capacity, you have to first secure the top part and then make sure your stream of liquid is narrow enough to fit easily through the small “drinking” opening. The alternative is to unscrew the top part and fill up the bottle through the wide opening, which is much easier, but won’t get the bottle all the way full.

The design of the bottle on the left assumes that its users have access to a steady narrow stream of water and can easily hold the bottle still long enough to fill it up through that small hole. The alternative approach—unscrewing the top part and filling it through the large opening—would end up causing users to forfeit about a quarter of the bottle’s capacity, merely because of the long slender design of the bottle’s neck.

The design of the bottle on the right mitigates most of this problem by placing only about a tenth of the volume in the top part. This means it can be filled by the easier method of unscrewing the top part, without much sacrifice in capacity. In turn, the design no longer needs to assume anything about the kind of water stream the user has access to or the user’s ability to hold the bottle steady.

It’s usually a good thing when we can tweak a design to eliminate the need to make assumptions about our users or to inadvertently force requirements on them.

These two styles of bottles are both for sale right now in many different stores, and the people who bought each one are getting very different experiences even though they bought very similar reusable water bottles.

This example does not demonstrate the most dramatic impact a design can have on users, but it shows that a design can have more of an impact than its designers may have considered. The little things matter, and design works out better if you account for that.