Technology Ethics: My Seven Year Journey

In 2014 I attended a UX conference whose closing keynote was about how the future would have no interface. The presenter gushed about the years ahead, when transhumanist devices, the Internet of Things, and ubiquitous sensors would transform our world into one in which the interface, instead of being located on a device, is the world itself:

You won’t have a light switch on your wall; you’ll turn on the light by pointing at it in a decisive way. You won’t learn to draw; you’ll wear a wristband that jiggles your hand as you hold a pencil to paper, causing you to make marks indistinguishable from those that might have been put there by a trained artist. You won’t interact with computer hardware; virtual objects and a keyboard will be projected right onto your desk and you’ll manipulate them as if they were real.

Rather than excitement, my reaction was horror and disgust. On my two-hour drive home I wondered if I was in the wrong industry, if maybe I should pack up my family and start life over in some kind of ultra-primitive arrangement in the woods.

I got over the worst of it by the time I arrived home, but I was left wondering why I had reacted so strongly and what it meant. What was it about that future that felt so wrong? What could I learn from it?

Eventually I figured out that what I recoiled from was the threat to my values presented by these technologies. All the sensors and complex circuitry required to make the IoT work violated my desire for simplicity and privacy; the transhumanist devices attacked my sense of authenticity; and so on. Moreover, I feared the alienation I would feel from my children if (or when) they embraced these technologies and absorbed their attendant values.

Amish churches, I came to learn, have successfully tackled this exact problem by drafting ordnungs — written community guidelines, more or less — that include regulations about which technologies may be owned and used as normal, and which must be modified, kept outside the home, or banned entirely. As a result the Amish appear frozen in time to most outsiders, but it is hard to deny that they also enjoy tight-knit communities and lower rates of physical and mental illness, and are even seeing a decades-long drop in attrition. Apparently, being able to control the direction and pace of change in one’s social values comes with a huge payoff.

Although the Amish do not explicitly map technologies to values, I recognized that mapping as necessary. In 2015 I devised a design process model in which the values potentially supported or threatened by a proposed technology are evaluated in depth, so that their primary and secondary effects on users’ lives might be anticipated and negative effects avoided. I got as far as testing this process model, but the results were inconclusive. Later I determined that my experimental design likely had not controlled variables tightly enough. Further, I conjectured that the system comprising both technology and modern Western social values is probably too complex to influence predictably with a design process model.

I was deflated by this setback, but I soon noticed that many other people had started talking about ethics in design; it sounded like they shared my concerns about the future of technology and its impact on society. When I gave a presentation on the topic at Midwest UX in 2017, it seemed like half the other presentations shared the same theme.

(I wrote an article describing my technology ethics journey up to this point, with more detail about the process model and how I tested it, on Pomiet’s website in 2016.)

Shortly afterward I joined a fledgling discussion group, Ethical Technology, founded by someone who had impressed me with his intelligence and clear writing on the subject. Many of the things he said felt like things I’d wanted to say but hadn’t found words for.

The discussion group soon grew quite sizeable, but I found the tone of the conversation had changed. We didn’t seem to be talking about technology so much as about the people making it. It did not take long for the dialogue to devolve further, all the way into partisan politics. Sometimes technology was merely decoration, not really relevant to what people were talking about; the issues raised seemed purely matters of political ideology. Disillusioned with both the group’s political fixation and its ideological uniformity, I left and returned to thinking and reading on my own.

Around that time, at my “day job”, I was the UX researcher on a learning management system that would eventually be rolled out to thousands of employees at the large company where I worked. In our team meetings we frequently discussed the change management aspects of the project, and I came to see how the user experience and the change experience were intricately tied together.

I became fascinated with the field of change management. I read its foundational texts and many of its journal articles, and attended meetings of the local chapter of the ACMP. But I did all this with a critical eye: I wanted to show that those who resist technology change need to be listened to rather than persuaded. This stands as the most recent influence on my thinking about technology ethics.

The success of the Amish is ultimately attributable to the control and self-determination they are able to exercise over their technology adoption decisions. I have come to see that as the most basic truth of technology ethics. The most important aspect of a technology’s design when it comes to ethics is the degree to which the human user can control his or her relationship with that technology. This means the ethics may come from the design of the technology itself, or from the rules and customs that surround it, but will ultimately be determined by the user’s freedom to adopt or reject it.

This also means few technologies are ethically perfect. We give up some freedom over which technologies to use or avoid when we agree to work for a particular employer, or sometimes even just to live in a particular area. We adopt many technologies simply because they are considered normal and baseline, and we never think twice about it.

Yet awareness of this situation brings into sharper relief the opportunities to make technology more ethical. That is what I hope to do in my work these days, and going forward.


The Vicious Cycle of Recruiting With Unpaid Work

The pandemic has left lots of people unemployed, including many UX professionals. Some companies are taking advantage of this situation to automate and scale their recruiting. (Historically this has been called “carpetbagging”, and it hasn’t gone away.) I have even seen one company make the completion of unpaid work part of its application process: after an initial screening interview with a third-party recruiter, the applicant is sent a link to Usertesting.com, where he or she is asked to watch a video of a user interacting with the company’s software and then evaluate the session. Applicants are not compensated for this work, which the recruiter told me takes about three quarters of an hour.

Forty-five minutes of uncompensated work in exchange for a shot at a steady job might seem like an okay deal for someone just entering the field, or for someone who is desperate for other reasons. But if ten people go through this process, the company has received 7.5 hours of free labor; if a hundred people go through it, the company has received nearly two weeks of free labor. It is easy to see how the incentives get misaligned.
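
For the curious, here is the back-of-the-envelope math behind those figures, sketched as a small Python snippet. The 45-minute session length is the recruiter’s estimate; the applicant counts and the 40-hour work week are illustrative assumptions on my part, not data from the company.

```python
# Back-of-the-envelope math for the unpaid-work funnel described above.
# The 45-minute session length comes from the recruiter's estimate; the
# applicant counts and the 40-hour work week are illustrative assumptions.

MINUTES_PER_EVALUATION = 45
WORK_WEEK_HOURS = 40  # assuming a standard 40-hour work week

for applicants in (10, 100):
    free_hours = applicants * MINUTES_PER_EVALUATION / 60
    free_weeks = free_hours / WORK_WEEK_HOURS
    print(f"{applicants} applicants -> {free_hours:.1f} hours "
          f"(~{free_weeks:.1f} work weeks) of unpaid labor")

# Output:
# 10 applicants -> 7.5 hours (~0.2 work weeks) of unpaid labor
# 100 applicants -> 75.0 hours (~1.9 work weeks) of unpaid labor
```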

When so many candidates are competing, each one’s prospect of being hired is drastically reduced, and going through an application process like this becomes an unambiguously negative experience. Luckily, quality UX candidates have a passion for improving experiences. They want to work somewhere they can put that passion to use, which means a place where their recommendations will be taken seriously. A company that persists in putting people through bad experiences will ultimately fail to attract quality candidates; it’s a vicious cycle.

The unpaid work I described above was ostensibly meant to show the company how the candidate evaluates a usability session. Here are two alternatives to that kind of recruiting method:

  1. Pay applicants for the time they spend evaluating your usability sessions. That at least keeps the incentives more aligned and steers clear of unethical “carpetbagging” practices.
  2. Talk to candidates instead. Quality candidates will be willing to spend time interviewing, because an interview gives them visibility into the process they’re participating in, realtime feedback about how they’re doing, and a personal sense of who they’re going to be working with.

What’s in a name?

Recently I changed my “branding” here and on LinkedIn to describe myself as an “experience researcher” — as opposed to a UX, Usability, User, Human Factors, or other kind of researcher. This reflects an evolution in my thinking that’s been going on for a couple years now, as I’ve meditated on how my strengths align with my goals and the things I’m interested in. This blog post is an attempt to summarize it, mostly for myself but also in case anyone’s curious.


For whatever reason, “UX” tends to connote users’ interactions with software systems in particular, whereas I like to take a more holistic view in my work, and generally find other kinds of systems — procedural, organizational, taxonomic, etc. — more important and interesting anyway; software systems are but one component among these. So I stopped putting the term “UX” in front of “Researcher”.

In most industries, “Human Factors” has to do with the interactions between humans and a wider set of systems than just software, but there does seem to be a bit of an emphasis on hardware, so Human Factors is often lumped in with ergonomics. At my last job my title was Human Factors Associate, which reflected both the type of work I was doing and the mindset of that company. I admire that company and am proud of the work I did there, but I see my path going forward as somewhat different, and so “Human Factors” doesn’t feel quite appropriate for me anymore.

“User” tends to imply a machine or computer technology, whether hardware or software; it doesn’t seem like the right term for someone who interacts with more nebulous types of systems such as onboarding or professional development. “User” also doesn’t describe people experiencing change in the workplace (people aren’t “users” of change), and that experience is what I see my work as anchored to.

“Usability” is usually all about making things easier, quicker, lighter, more pleasant, learnable, and understandable. This is obviously important, and it applies to everything from individual Word documents to massive interconnected software systems; usability research also makes up the bulk of my work. But like “user”, the term “usability” doesn’t seem to fit with how people experience less tangible kinds of systems or workplace change.

What I ultimately realized is that all the work I’m doing has to do with people’s experiences, and none of it doesn’t, and so simply placing the word “Experience” before “Researcher” was the most accurate and succinct way to describe my professional self. I hypothesize it’s also a fairly accessible term: people who are accustomed to thinking or reading about “UX”, “Human Factors”, “Usability”, and so on will see “Experience Researcher” and have a reasonably accurate idea of what that means. (Do you agree?)