Should we be concerned about Maria Farrell?

The title of this post is tongue-in-cheek of course, but in an article at the Conversationalist, Maria Farrell compares smartphones to abusive partners by listing a bunch of things abusive partners do and claiming smartphones do those same things (quote):

  • They isolate us from deeper, competing relationships in favour of superficial contact – ‘user engagement’ – that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.
  • They tell us the onus is on us to manage their behavior. It’s our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.
  • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we’re meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.
  • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.
  • It’s impossible to just cut them off. They’ve wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

I agree with some of these, but not with the claim that it’s impossible to stop using smartphones. As someone who doesn’t use a smartphone, I am living testimony to the contrary. (Hasn’t Farrell ever met someone who doesn’t use a smartphone?)

This article, like a lot of the criticism of technology I’ve seen, contains a recurring theme: it articulates serious concerns about the technology but then stops short of saying we should discontinue our use of it. (Another instance of this was Cathy O’Neil’s book, Weapons of Math Destruction, which presented a strong case against the use of computer algorithms in finance, hiring, criminal justice, and other areas, but dismissed the notion that we ought to abandon them.) Why?

If Farrell knows her smartphone is doing all these horrible things, why does she still have a smartphone? Why isn’t she leading the charge to go back to simple phones and leave the serious computing to laptops and desktop machines? I would happily support her if she did that, and I could provide lots of good reasons to use a simple phone as well as answers to many of the anticipated objections. I honestly do think a significant migration from smartphones to simple phones would make the world a drastically better place, even with all the benefits of smartphones considered.

It could be that Farrell is herself a victim of the abuses she warns us about: maybe she’s isolated from deep relationships, and her social life is curated by her phone; maybe she lacks the willpower to curtail her use of her phone; maybe she’s taken in by the “love-bombing” whenever she tries to cut it out of her life; maybe she really is unable to manage her life without her phone. If these things were true, it would explain why she doesn’t end her article by calling for readers to ditch their smartphones: she knows her smartphone will discover the betrayal, and abuse her even worse.

In that case we should be concerned, and maybe even intervene on her behalf. If we followed her analogy, and her phone were like an abusive partner, the right thing to do would be to take away her phone so she can be safe. And then if she says “No, give me my phone back,” we should interpret it as a kind of Stockholm syndrome and continue to withhold the phone permanently, while setting her up with a simple phone with which she can have a healthier relationship.

But no, instead she resorts to daydreaming about what a Prince Charming smartphone would be like. “We have to imagine a future we want to live in so we can build it.” Just like you have to imagine the partner you want so you can change the abusive one you’ve got? I suppose that part of the analogy isn’t totally fair, since phones really are designed from the ground up, but I think her framing hides a lot of complexity around what a smartphone is and how it’s even possible to bring one to market at an affordable price. The incentives on the part of the designers, manufacturers, businesspeople, retailers, and even consumers just aren’t lined up in a way that would make the phone “loyal” to its owner.

Farrell seems to admit this when she says that to make these utopian phones a reality “[w]e can pay the full cost of them”, but is that true? Who is “we”? I can’t imagine what the “full cost” would be, or that anyone who isn’t rich would be willing or able to pay it.

Near the end of the article she reminds us again that smartphones and the services running on them fall into the category of “life-critical public goods”, like clean drinking water.

Does this mean she thinks I need a smartphone? Maybe in some weird inversion of the scenario I described above, instead of her smartphone being taken away, she thinks somebody ought to take away my flip phone and force me to use an iPhone or Android instead. No thanks, Ms. Farrell: I am not technologically destitute, and you are not a technology victim. You have a choice.

Same goes for any smartphone user reading this.

Andrew Yang wants to reduce harm to children caused by smartphones

(Note: currently no presidential candidate reflects most of my views, and I do not yet know whether or for whom I will vote. When I do, I certainly will not write about it here! As I hope will be obvious, this blog post is not an endorsement or disavowal of anyone. Instead it is ultimately about the technology discussion itself.)

As far as I am aware, Andrew Yang is the only presidential candidate talking about the negative impact of smartphones on kids. He seems to take a research-first approach, which is encouraging to see. His goals are:

– Work to understand emerging technologies’ impact on human health and behavior
– Find a way to promote responsible smartphone usage, both within the industry and within the users
(from https://www.yang2020.com/policies/effects-smartphones-human-development/)

He refers to some statistics without citing them, and he makes some bold claims without referring to any statistics I’m aware of. Sample quote:

Teenagers are spending more time worrying about whether their online acquaintances like their recent post than they are in person with their friends hanging out and developing social skills. The average teenager spends Friday nights at home, interacting with a machine, instead of out with friends at a game or event.

But that is from his campaign website after all; he is an aspiring politician, not a researcher. He also says some things that resonate with me:

Those who have worked within the industry describe the work they’ve done in stark terms. Often relating apps to slot machines, they say that the smartest minds of a generation are spending their time getting teenagers to click on ads and obsess over social media posts to see how many acquaintances respond or react to their posts.

In short, many experts are worrying that the widespread adoption of a poorly understood technology has destroyed the psyches of a generation.

Less inspiring to me is his proposed solution to create a Department of the Attention Economy that “focuses specifically on smartphones and social media, gaming and chat apps and how to responsibly design and use them, including age restrictions and guidelines.” And he wants Tristan Harris to lead it. I’m skeptical that regulation will be effective and efficient, or produce the desired outcome. I’m also skeptical of the very concept of “the attention economy,” which is closely associated with Harris and remains contestable and unproven.

From a policy standpoint, I’d much rather see a long-term education and public service campaign that simply discourages parents from giving smartphones to their children, and perhaps even from owning them themselves without a specific compelling reason.

Still, I’m glad Yang is talking about this, and that the notion of putting restrictions around computing technology usage is on the table. (I’d prefer them to be culturally rather than legally enforced, but I guess you have to start somewhere.) My hope is it will inspire other candidates to respond, and that this topic will become part of the national conversation.

Of course, the risk is that these issues will be politicized, and that the solutions people support will be mostly predicted by which party or candidate they support, and that would be a terrible outcome. In fact, I think it’s likely to happen. So in some ways, I’m also really horrified that Andrew Yang is talking about this!

All the more reason why it should be a conversation first and foremost within the technology industry.

The elusive “questioning attitude”: What it is, Why it’s important, and How to cultivate it

An internet search for “questioning attitude” (include the quotes if you’re trying this at home) turns up article after article about this worker characteristic. It’s often stated to be a desirable trait in disciplines like nuclear power generation, construction management, and other industries where errors can be extremely costly, even deadly. You want people in those fields who are not satisfied that “everything looks OK” just because it seems so at first glance. This trait is also crucial in the social sciences, including my occupation, UX research.

When a researcher has a questioning attitude, it doesn’t just mean they “like asking lots of questions”. Asking questions is fundamental to the job, but a questioning attitude is about being aware of the assumptions one makes and then testing whether those assumptions are true; if they aren’t, they get discarded. While designing those tests and interpreting the results, one still needs to be constantly on the lookout for unfounded assumptions, in a kind of recursive pattern all the way down, so that the experimental design and the findings it generates rest on a solid bedrock of what has been established as true (as best as can be ascertained given the constraints).

A questioning attitude doesn’t stop being important just because the immediate risks seem low. Any technology built without one can have dramatic negative effects: an annoying Learning Management System can set people back in their careers; a frustrating payment workflow can cost a company millions of dollars; a facial recognition algorithm that’s relatively blind to people of certain races can cause those people to feel alienated or worse. These problems often happen because the designers of the technology work from assumptions derived from what is familiar to them, without considering that what’s familiar to them may not be representative of their users.

If a questioning attitude means being aware of one’s own assumptions, how does one gain that awareness? Based on reading accounts of people who demonstrate a strong questioning attitude, talking to senior colleagues, and drawing from my own experience, I think this awareness can be cultivated from exposure to unusual and uncomfortable situations. Getting to closely know people who are very different from yourself, living in a far-away place that’s very unlike where you’ve spent most of your life, and learning and performing a wide variety of new skills are some of the ways to expose yourself to these situations.

Part of what this provides is the ability to switch into a “man from Mars” mentality, where you can see things afresh, without value judgment or preconceived notions (similar to what I learned from searching through a hot dumpster for a pair of lost keys). Critically, it also hones a rebellious instinct to look where nobody else around you is looking, to draw connections where nobody else is drawing them, etc. Not every place you look and not every connection you draw will be valid, but without this rebellious instinct important considerations are bound to be forgotten.

Humans naturally (by virtue of genetics and formative development) have different levels of self-awareness and rebelliousness, so to some extent the amount of “questioning attitude” present among UX researchers at a given company could have to do with which researchers that company hires. But I believe it is still a skill that can be maximized for each individual, and should be to produce the best design outcomes.

Stop pathologizing change resistance!

Change Management professionals are fond of pointing out humans’ many cognitive biases, which contribute to people’s supposed resistance to various kinds of change. Reference is also often made to the fact that most categories of human emotion are negative, and that change is threatening to people for a long list of emotional reasons related to things like status, or the feeling of insecurity that comes with having to learn to perform tasks in a new way.

It’s easy to come away from these messages with a picture of change resisters as damaged, fragile victims, who respond to change only with irrational defensive emotions, and who need to be “managed”, “dealt with”, “addressed” (and compared to toddlers!), and “overcome”.

In my career I have listened to countless people within various organizations tell me about workplace changes they resisted. In every single case these accounts centered around specific, often tangible negative impacts and interactions the changes were causing: doctors were forced by a new electronic records system to interact primarily with screens instead of patients; accountants had to do double entry in a new piece of software that was confusing and error-prone; engineers found their new ordering tool required them to enter extra, redundant search information while producing results that were unhelpful and irrelevant.

Without talking to people like this and hearing their stories, one could get the impression they were just being pulled along by their familiarity bias, or that they were simply fearful of the loss of status that the newly implemented systems represented. Their condition, one might think, is unfortunate, but ultimately they need to (in the words of one change leader I overheard) “get over it.”

In reality, people seem to usually resist change for good reasons: the new thing is flawed; the new thing is incomplete; the new thing is not communicated about effectively or truthfully; the new thing is not needed; the new thing is not the right solution; the new thing provides a worse interaction experience than the old thing; no training on the new thing was provided, or it was provided at the wrong time, or the training was of low quality; no support for the new thing was offered; etc.

Furthermore, over my years of interviewing people, everyone I’ve asked about workplace change has expressed some variant of this realistic and positive attitude: “Change is inevitable, and I do my best to adapt to it even if I don’t always like it.” Most people I’ve talked to could name both positive and negative workplace technology changes they’d experienced, as well as both technology changes that were forced on them and ones they undertook of their own will.

Pathologizing change resistance is especially damaging because it gives managers and executives the idea that they ought not to question or challenge the latest trends, lest they be found to be suffering the same pathologies as their Luddite employees. This contributes to a kind of Emperor’s New Clothes problem. In the end it’s everyone — not just the “emperor” — who bears the brunt of the bad decision to adopt the change.

The way to avoid this problem is to stop treating change resisters as obstacles, and instead use them as a front-line resource. Some texts pay only the merest lip service to seriously engaging change resisters (for example, the 100-page book “The Eight Constants of Change” devotes exactly one paragraph to it), and even then the engagement is treated as an afterthought. That is a backward approach.

The people identified as change resisters are really the ones who have the answers to questions like:

  • “What change does our organization actually need to make?”
  • “What are we doing well and should keep doing?”
  • “What makes this organization a place where people want to work?”
  • “What factors go into a successful change?”

These are the kinds of questions that need to be answered before any significant workplace change is considered, which means the so-called change resisters should be engaged right at the beginning, and their considerations taken seriously.

If nothing else, giving employees the impression they are not heard is a way to ensure that a workplace change will fail.

What’s in a name?

Recently I changed my “branding” here and on LinkedIn to describe myself as an “experience researcher” — as opposed to a UX, Usability, User, Human Factors, or other kind of researcher. This reflects an evolution in my thinking that’s been going on for a couple years now, as I’ve meditated on how my strengths align with my goals and the things I’m interested in. This blog post is an attempt to summarize it, mostly for myself but also in case anyone’s curious.

For whatever reason, “UX” tends to connote users’ interactions with software systems in particular, whereas I like to take a more holistic view in my work, and generally find other kinds of systems — procedural, organizational, taxonomic, etc. — more important and interesting anyway; software systems are but one component among these. So I stopped putting the term “UX” in front of “Researcher”.

In most industries, “Human Factors” has to do with the interactions between humans and a wider set of systems than just software, but there does seem to be a bit of an emphasis on hardware, so Human Factors is often lumped in with ergonomics. At my last job my title was Human Factors Associate, which reflected both the type of work I was doing and the mindset of that company. I admire that company and am proud of the work I did there, but I see my path going forward as somewhat different, and so “Human Factors” doesn’t feel quite appropriate for me anymore.

“User” tends to imply a machine or computer technology, whether hardware or software; it doesn’t seem like the right term for someone who interacts with more nebulous types of systems such as onboarding or professional development. “User” also doesn’t describe people experiencing change in the workplace (people aren’t “users” of change), and that experience is what I see my work as anchored to.

“Usability” is usually all about making things easier, quicker, lighter, more pleasant, learnable, and understandable. This is obviously important and applies to everything from individual Word documents to massive interconnected software systems, and usability research makes up a large share of my work. But like “user”, the term “usability” doesn’t seem to fit with how people experience less tangible kinds of systems or workplace change.

What I ultimately realized is that all the work I’m doing has to do with people’s experiences, and none of it doesn’t, and so simply placing the word “Experience” before “Researcher” was the most accurate and succinct way to describe my professional self. I hypothesize it’s also a fairly accessible term: people who are accustomed to thinking or reading about “UX”, “Human Factors”, “Usability”, and so on will see “Experience Researcher” and have a reasonably accurate idea of what that means. (Do you agree?)

Data analytics, change, and ethics

Much ado is made about data-driven decision-making. Why do things the old-fashioned way with reports written by slow humans when you can make decisions based on vast quantities of realtime data compiled by automated systems, displayed in the most (ostensibly) helpful ways?

The firehose of data from which we are encouraged to drink, and to which our own activity contributes and from which others then drink and act, has a mixed reputation. Nobody would argue that informed decision-making is worse than flying blind, and in certain cases the “more data=better” curve really is a linear diagonal up and to the right. But at the same time, most people instinctively recoil from the collection and use of data in a growing set of instances where it feels invasive, unnecessary, and even “creepy.”

Take the well-known case (perhaps somewhat mythologized at this point) of the dad who found out his teenage daughter was pregnant because the big-box retailer Target tracked the daughter’s shopping habits and, identifying her as pregnant, proactively sent baby formula coupons to the household. It may be true that the dad would have had other more direct opportunities to find out about his daughter’s pregnancy eventually, but most people still see what happened as a violation of some kind.

Target was taking advantage of all the data available to them in order to maximize revenue, just as all businesses are coached to do, with the result that they intruded upon a delicate family situation and maybe even crossed a line with respect to privacy and ethics. To what extent are other companies taking notice of this and learning lessons from it?

The language of change management is often fatalistic: “This is what the future is going to look like, this is where your industry is headed, so you’d better do X or else get left behind.” This creates an environment where it’s easy to forget that even the biggest overarching changes are built from decisions made at the most granular levels, and that we actually have control over our technology choices. “No thanks” is always on the table even if we aren’t thinking about it.

The urgency with which companies are coached to adopt the latest technologies is not necessarily valid. Sometimes it’s better to hang back and wait, or at least to implement a change gradually and cautiously, so that the ethical boundaries of the new technology can be figured out and adhered to. It might be better for the bottom line to ask forgiveness rather than permission, but it isn’t always the right thing to do, and it can get you into trouble later on.

Interpreting data

I don’t know where else to put this, so here goes:

[animated image: st-pattys-shirk-21]

I came across that image in a link-dump on a blog I sometimes visit; it had been borrowed from someone who borrowed it from someone, etc., and I don’t know whom to attribute it to, but I love how the animation elegantly explains the problem of interpreting data, agreeing on facts, etc.

As researchers, we need to triangulate our conclusions with other stakeholders to be sure we aren’t focusing on the wrong patterns. I suppose that speaks to the importance of collaboration with team members, asking the right questions, etc.

I am now a certified DACUM facilitator!

Last week I completed a training course at the Ohio State University to become a certified DACUM facilitator. During the week of training, my co-learners often asked me if having observed about a dozen DACUM workshops made the training easier.

It did. A DACUM workshop provides its observers with a thorough introduction to the DACUM process: each workshop begins with a somewhat in-depth orientation, and the sheer repetitiveness and intensity of the process make it impossible not to come away with a strong impression of how a DACUM is carried out.

A careful observer can also pick up a lot of what the facilitators are doing “under the hood” to make the workshop successful. In this sense, observing DACUMs helps to make the process non-alien, and imparts at least an “academic” understanding of how they are facilitated. It’s a bit like closely watching how someone rides a bike or drives a stick shift: after enough time you can at least figure out how it’s done and start to mentally practice doing it yourself.

But there is no substitute for the experience of getting up there and actually doing the facilitating with a panel. OSU’s DACUM facilitator training program consists of one day of conventional instruction in which learners are seated before an instructor, then two days in which the learners participate as panelists in a mock DACUM (or “Facum”), taking turns one at a time as facilitator. The final two days are a sort of capstone session, spent conducting an actual DACUM with real panelists provided by government or industry organizations.

In my case, the panel I facilitated for consisted of employees from American Electric Power, where I work. In fact, their DACUM session was part of the very project I’m on, so I had the double benefit of also advancing my team’s project while I gained my DACUM certification.

Normally three or four learners share rotating facilitator duties during the capstone session, but for this one the panel was split into four mini-DACUMs consisting of two panelists and one facilitator each. This meant I facilitated a whole workshop by myself.

I can’t think of any better way to train! By the time it ended, I was eager to facilitate another. A coworker observing the workshop asked me how it felt to be DACUM-certified. I responded, “Now everything looks like a nail.” My project is slated to include another five or six DACUM workshops before the end of 2019, and I can’t wait to facilitate them.

For a slightly more in-depth explanation of what DACUM is, see what I’ve written about it before.

What searching through a hot dumpster taught me about UX research

A good user experience researcher is able to suspend judgment in the moment of data collection. I’ve heard it said that a UX researcher ideally has the ability to see with “new eyes” as if completely ignorant, like a visitor from another planet.

That is a difficult skill to acquire, but a few summers ago I found myself in a situation in which I had basically no choice but to practice it. After I told this story to some coworkers, they encouraged me to write it down so it could be easily shared, so here it is:

One evening, arriving home late from a weekend trip with my family and unloading the car, I realized my keys were missing. After much fruitless searching I determined by process of elimination that the only place my keys could be was inside the dumpster where my wife had tossed some trash soon after we’d arrived. We reasoned the keys must have been in her hand as she threw the trash in, and she must have accidentally let go of them in the same motion.

This was back when we lived in a condo complex, and there was a huge 6-yard dumpster, the kind with sliding doors on the sides, where everyone from the condo complex (and occasionally outside trespassers) deposited all kinds of junk. Since “trash day” was coming up the day after next, the dumpster was already very full.

My options were either to forget about the lost keys, cough up about $200 for a new set (one of the keys was to a car we were leasing, so not cheap), and risk a bad person finding the keys and gaining access to my home and both cars, or to crawl into the dumpster and recover the keys.

First I tried a compromise: several 20-30 minute sessions of peering inside the dumpster with a flashlight, poking at objects with a long stick, trying to look under things, hoping I’d see a glimmer of metal and be able to just fish out my keys without compromising my bodily cleanliness.

No such luck. Trash day was looming and I was running out of options. Eventually I decided that avoiding a few minutes of discomfort was not worth $200 and lingering paranoia about being burgled or having my car stolen. So the next day I resolved to travel inside the dumpster and leave no stone unturned, as it were.

I went in the daytime when I’d have the most light. I suited up in waterproof fishing boots, elbow-length kitchen gloves, a headlamp, and some old gym clothes I was prepared to throw away immediately after this excursion. I also tied a handkerchief around my face to filter out the taste of the air.

That’s right: taste. Obviously, I was resolved not to smell anything. That was Rule #1: I forswore inhalation through the nose; only oral respiration permitted. One whiff and you’re done, my inner drill sergeant barked. Knowing I’d still have to taste the air inside that dumpster, I decided I’d at least try to filter it a bit.

It was summer, and the air and the metal were both hot as I probed for hand- and footholds and hoisted myself up toward one of the open windows of the giant rusted box. One knee in, then one leg in, then both legs in. I was sitting on the ledge, facing the abyss. I leaned back and took a few deep breaths outside, then held the last one in and slid forward into the stifling darkness.

I was crouching on various kinds of trash. As I slowly let the air out of my lungs and prepared to suck in more through my bandana (which immediately tasted awful), I glanced around and was almost overwhelmed. Everywhere I looked was something horrendously nasty; things that would be unpleasant enough immediately after being thrown away, but which had by now been sweltering in what was essentially a small oven for almost a week. The refuse was haphazardly piled up around me, ready to avalanche, giving way under my feet.

That’s when I discovered Rule #2, the most important of all: judge nothing. The demands of my circumstances dictated that I internalize this rule immediately and fully, so I did. This amounted to nothing less than a new lens that materialized in front of my eyes, a new filter on existence. A whole new way of seeing things.

A leaking bag of party trash was no longer tepid beer and grease-covered empty cans and napkins that had been dragged across the sweaty faces of drunken pizza eaters. Under my left boot was no longer a torn couch cushion with questionable stains and a bewildering backstory. The three inches of opaque wet stuff sloshing around my right foot in the bottom of the dumpster was no longer a mixture of rainwater and bile and fermenting backwash. The taste in my mouth was not a flavor, it was just a pattern of molecules. The sloshing mixture was just a liquid substance. The cushion and trash were just objects.

These were the new categories of my reality: rigid materials (to be moved so as to be looked under), flexible sheet-like materials (to be drawn away or inverted so as to be looked behind or inside of), liquid materials (to be probed either by boot or by hand), and so on.

Really, there were only two kinds of objects in the universe at that moment: Things That Were My Keys, and Things That Were Not My Keys.

That was my insight. I could judge the world the way I normally would, and fail and suffer, or else suspend all judgment and use my senses only to serve the purpose of disproving my research hypothesis (in this case, “my keys are in this dumpster”). My arms and hands and fingers were now scientific instruments, gathering and testing binary data. My mind had been temporarily optimized for lost-key-finding, and importantly, nothing else.

To make a long story a bit shorter, I did not end up finding my keys in that dumpster. Eventually my Dantean tour of the Inferno was cut short when I accidentally inhaled through my nose and barely clambered out in time to avoid throwing up into the handkerchief tied around my face. Later that day I was hopelessly searching again around my car’s tailgate, and the keys dropped into my open hands. Apparently my wife had placed them on the roof of the car while she was on auto-pilot (not her fault; we had a young child at the time and were both sleep-deprived), and during the day’s driving the keys slid down the roof toward the rear of the car.

The dumpster dive would have been a most unpleasant waste of time and effort if it hadn’t taught me such a valuable lesson. Abstaining from value judgments is essential for good research, and cultivating that ability is crucial for a good researcher. There are probably less-disgusting ways to practice that abstention than crawling into a hot dumpster full of random people’s garbage; I recommend exploring them.