Social interaction and technological mediation under lockdown

The shelter-in-place orders issued in response to Covid-19 have demonstrated both the necessity and the insufficiency of virtual socialization. Technologies like Zoom and Skype (not to mention email) have allowed countless people to remain employed, to check in with each other, and to enjoy some diversion from the tedium of confinement.

But although most states are only in week three or four of “lockdown”, I am already seeing accounts of people, particularly those who live alone, suffering emotional breakdowns as a result of their physical isolation. Seeing and hearing each other in real time is a marvelous thing, but we apparently have a deeper need to be physically with other people.

Videoconferencing in 3D with goggles and headphones (i.e. VR) might one day become a viable way to fulfill that need. But it presents an additional hurdle: participants will have most of their faces covered by cumbersome equipment. One solution is for participants to represent themselves with 3D avatars. But this detracts significantly from the verisimilitude of the experience, which was supposed to be the whole point.

What is less clear is whether people will care. Maybe feeling as if you’re around someone else, even if that someone looks like an obviously fake 3D avatar, is still psychologically preferable to interacting with a video representation of a person, even a realistic one, if that representation must be mediated by a screen held at arm’s length.

“A System to Change the Culture”

The title of this blog post is a quote from Michael Corboy, the assistant commissioner of police in New South Wales, Australia. He used that phrase to describe the introduction of traffic cameras that use AI to detect when drivers are on their phones.

I think it’s a profound phrase. In one sense, it’s backward from how we normally like to think about the relationship between technology systems and culture: we want our culture to grow organically, and our technology systems to be designed around it, in a humane way that preserves and supports our values. Intuitively, the culture should shape the system rather than the other way around.

But in another sense, the phrase acknowledges a very real and basic phenomenon that occurs any time a system is introduced into a culture: the culture changes. Now, the intended changes rarely play out exactly as they’re meant to, at least not without unintended side effects, but the relationship between human culture and manmade systems is definitely a two-way street.

These traffic cameras will have some impact on traffic safety in NSW. And they will incite some backlash from people who feel intruded upon by Big Brother. But a lot of people will respond with indifference, and the cameras might even further normalize and legitimize the idea of high-tech government surveillance.

From the government’s standpoint, it will be nearly impossible to go back to a lower-tech alternative if this initiative does not succeed, so these cameras also redefine what it means for law enforcement to do its job. They signal an increasing dependence on computers and automation to replace human labor and judgment. And will the cameras actually change Australians’ culture around traffic safety? If so, how?

Time will tell whether the introduction of these cameras is a good thing in the end, but as always it is much bigger than just the adoption of one system.

Another writer exposes the terrible downsides of a new technology — but keeps using it!

Not only did Allie Conti get scammed on Airbnb, she also uncovered a large ring of scams that exploit weaknesses baked into the site’s design, its rules, and its users’ expectations. But…

Even after a month of digging through public records, scouring the internet for clues, repeatedly calling Airbnb and confronting the [scammer] who called himself Patrick, I can’t say I’ll be leaving the platform, either. Dealing with Airbnb’s easily exploitable and occasionally crazy-making system is still just a bit cheaper than renting a hotel.

Conti’s message to Airbnb is effectively “Don’t worry about fixing these problems; I’ll keep using your site anyway, so long as an Airbnb is marginally cheaper than the alternatives.” Not counting all the indirect costs, of course.

I suspect there’s something else going on under the surface: a reluctance to go back to reserving rooms in hotels, simply because that is the “old way”. It doesn’t feel as hip or fresh or exciting — or dare I say fashionable? — to book a room in a hotel, and it doesn’t fit the narrative people have told themselves about what travel is supposed to look like in 2019. But that is just a narrative, and not only is it an arbitrary one, it is a harmful one when it keeps people from changing their consumer behavior in response to serious problems.

Should we be concerned about Maria Farrell?

The title of this post is tongue-in-cheek, of course, but in an article at the Conversationalist, Maria Farrell compares smartphones to abusive partners by listing a number of things abusive partners do and claiming that smartphones do those same things. She writes:

  • They isolate us from deeper, competing relationships in favour of superficial contact – ‘user engagement’ – that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.
  • They tell us the onus is on us to manage their behavior. It’s our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.
  • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we’re meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.
  • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.
  • It’s impossible to just cut them off. They’ve wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

I agree with some of these, but not with the claim that it’s impossible to stop using smartphones. As someone who doesn’t use a smartphone, I am living testimony to the contrary. (Hasn’t Farrell ever met someone who doesn’t use a smartphone?)

This article, like a lot of the criticism of technology I’ve seen, contains a recurring theme: it articulates serious concerns about the technology but then stops short of saying we should discontinue our use of it. (Another instance of this was Cathy O’Neil’s book, Weapons of Math Destruction, which presented a strong case against the use of computer algorithms in finance, hiring, criminal justice, and other areas, but dismissed the notion that we ought to abandon them.) Why?

If Farrell knows her smartphone is doing all these horrible things, why does she still have a smartphone? Why isn’t she leading the charge to go back to simple phones and leave the serious computing to laptops and desktop machines? I would happily support her if she did that, and I could provide lots of good reasons to use a simple phone as well as answers to many of the anticipated objections. I honestly do think a significant migration from smartphones to simple phones would make the world a drastically better place, even with all the benefits of smartphones considered.

It could be that Farrell is herself a victim of the abuses she warns us about: maybe she’s isolated from deep relationships, and her social life is curated by her phone; maybe she lacks the willpower to curtail her use of it; maybe she’s taken in by the “love-bombing” whenever she tries to cut it out of her life; maybe she really is unable to manage her life without her phone. If these things were true, it would explain why she doesn’t end her article by calling on readers to ditch their smartphones: she knows her smartphone would discover the betrayal and abuse her even worse.

In that case we should be concerned, and maybe even intercede on her behalf. If we followed her analogy, and her phone really were like an abusive partner, the right thing to do would be to take away her phone so she can be safe. And then if she says “No, give me my phone back,” we should interpret that as a kind of Stockholm syndrome and continue to withhold the phone permanently, while setting her up with a simple phone with which she can have a healthier relationship.

But no, instead she resorts to daydreaming about what a Prince Charming smartphone would be like. “We have to imagine a future we want to live in so we can build it.” Just like you have to imagine the partner you want so you can change the abusive one you’ve got? I suppose that part of the analogy isn’t totally fair, since phones really are designed from the ground up, but I think it hides a lot of complexity around what a smartphone is and how it’s even possible to bring one to market at an affordable price. The incentives of the designers, manufacturers, businesspeople, retailers, and even consumers just aren’t lined up in a way that would make the phone “loyal” to its owner.

Farrell seems to admit this when she says that to make these utopian phones a reality “[w]e can pay the full cost of them”, but is that true? Who is “we”? I can’t imagine what the “full cost” would be, or that anyone who isn’t rich would be willing or able to pay it.

Near the end of the article she reminds us again that smartphones and the services running on them fall into the category of “life-critical public goods”, like clean drinking water.

Does this mean she thinks I need a smartphone? Maybe in some weird inversion of the scenario I described above, instead of her smartphone being taken away, she thinks somebody ought to take away my flip phone and force me to use an iPhone or Android instead. No thanks, Ms. Farrell: I am not technologically destitute, and you are not a technology victim. You have a choice.

Same goes for any smartphone user reading this.

Andrew Yang wants to reduce harm to children caused by smartphones

(Note: currently no presidential candidate reflects most of my views, and I do not yet know whether or for whom I will vote. When I do, I certainly will not write about it here! As I hope will be obvious, this blog post is not an endorsement or disavowal of anyone. Instead it is ultimately about the technology discussion itself.)

As far as I am aware, Andrew Yang is the only presidential candidate talking about the negative impact of smartphones on kids. He seems to take a research-first approach, which is encouraging to see. His goals are:

– Work to understand emerging technologies’ impact on human health and behavior
– Find a way to promote responsible smartphone usage, both within the industry and within the users
(from https://www.yang2020.com/policies/effects-smartphones-human-development/)

He does refer to some statistics without citing them, and he does make some bold claims without referring to any known statistics. Sample quote:

Teenagers are spending more time worrying about whether their online acquaintances like their recent post than they are in person with their friends hanging out and developing social skills. The average teenager spends Friday nights at home, interacting with a machine, instead of out with friends at a game or event.

But that is from his campaign website after all; he is an aspiring politician, not a researcher. He also says some things that resonate with me:

Those who have worked within the industry describe the work they’ve done in stark terms. Often relating apps to slot machines, they say that the smartest minds of a generation are spending their time getting teenagers to click on ads and obsess over social media posts to see how many acquaintances respond or react to their posts.

In short, many experts are worrying that the widespread adoption of a poorly understood technology has destroyed the psyches of a generation.

Less inspiring to me is his proposed solution: creating a Department of the Attention Economy that “focuses specifically on smartphones and social media, gaming and chat apps and how to responsibly design and use them, including age restrictions and guidelines.” And he wants Tristan Harris to lead it. I’m skeptical that regulation will be effective and efficient, or that it will produce the desired outcome. And the very concept of “the attention economy,” which Harris has done so much to popularize, is itself contestable and unproven.

From a policy standpoint, I’d much rather see a long-term education and public service campaign that simply discourages parents from giving smartphones to their children, and perhaps even from owning them themselves without a specific compelling reason.

Still, I’m glad Yang is talking about this, and that the notion of putting restrictions on computing technology usage is on the table. (I’d prefer those restrictions to be culturally rather than legally enforced, but I guess you have to start somewhere.) My hope is that it will inspire other candidates to respond, and that this topic will become part of the national conversation.

Of course, the risk is that these issues will be politicized, and that the solutions people support will be mostly predicted by which party or candidate they support, and that would be a terrible outcome. In fact, I think it’s likely to happen. So in some ways, I’m also really horrified that Andrew Yang is talking about this!

All the more reason why it should be a conversation first and foremost within the technology industry.

Stop pathologizing change resistance!

Change Management professionals are fond of pointing out humans’ many cognitive biases, which supposedly explain people’s resistance to various kinds of change. They also often note that most categories of human emotion are negative, and that change threatens people for a long list of emotional reasons related to things like status, or the insecurity that comes with having to learn to perform familiar tasks in a new way.

It’s easy to come away from these messages with a picture of change resisters as damaged, fragile victims, who respond to change only with irrational defensive emotions, and who need to be “managed”, “dealt with”, “addressed” (and compared to toddlers!), and “overcome”.

In my career I have listened to countless people within various organizations tell me about workplace changes they resisted. In every single case these accounts centered around specific, often tangible negative impacts and interactions the changes were causing: doctors were forced by a new electronic records system to interact primarily with screens instead of patients; accountants had to do double entry in a new piece of software that was confusing and error-prone; engineers found their new ordering tool required them to enter extra, redundant search information while producing results that were unhelpful and irrelevant.

Without talking to people like this and hearing their stories, one could get the impression that they were simply in the grip of familiarity bias, or that they were merely fearful of the loss of status that the newly implemented systems represented. Their condition, one might think, is unfortunate, but ultimately they need to (in the words of one change leader I overheard) “get over it.”

In reality, people seem to usually resist change for good reasons: the new thing is flawed; the new thing is incomplete; the new thing is not communicated about effectively or truthfully; the new thing is not needed; the new thing is not the right solution; the new thing provides a worse interaction experience than the old thing; no training on the new thing was provided, or it was provided at the wrong time, or the training was of low quality; no support for the new thing was offered; etc.

Furthermore, over my years of interviewing people, everyone I’ve asked about workplace change has expressed some variant of this realistic and positive attitude: “Change is inevitable, and I do my best to adapt to it even if I don’t always like it.” Most people I’ve talked to could name both positive and negative workplace technology changes they’d experienced, as well as both technology changes that were forced on them and ones they undertook of their own will.

Pathologizing change resistance is especially damaging because it gives managers and executives the idea that they ought not to question or challenge the latest trends, lest they be found to be suffering the same pathologies as their Luddite employees. This contributes to a kind of Emperor’s New Clothes problem. In the end it’s everyone — not just the “emperor” — who bears the brunt of the bad decision to adopt the change.

The way to avoid this problem is to stop treating change resisters as obstacles and instead use them as a front-line resource. Some texts give only the merest lip service to seriously engaging change resisters (for example, the 100-page book “The Eight Constants of Change” devotes exactly one paragraph to it), and even then it is typically treated as an afterthought. That is a backward approach.

The people identified as change resisters are really the ones who have the answers to questions like:

  • “What change does our organization actually need to make?”
  • “What are we doing well and should keep doing?”
  • “What makes this organization a place where people want to work?”
  • “What factors go into a successful change?”

These are the kinds of questions that need to be answered before any significant workplace change is considered, which means the so-called change resisters should be engaged right at the beginning, and their considerations taken seriously.

If nothing else, giving employees the impression they are not heard is a way to ensure that a workplace change will fail.

Data analytics, change, and ethics

Much ado is made about data-driven decision-making. Why do things the old-fashioned way with reports written by slow humans when you can make decisions based on vast quantities of realtime data compiled by automated systems, displayed in the most (ostensibly) helpful ways?

The firehose of data from which we are encouraged to drink, to which our own activity contributes, and from which others in turn drink and act, has a mixed reputation. Nobody would argue that informed decision-making is worse than flying blind, and in certain cases the “more data = better” curve really does run straight up and to the right. But at the same time, most people instinctively recoil from the collection and use of data in a growing set of instances where it feels invasive, unnecessary, and even “creepy.”

Take the well-known case (perhaps somewhat mythologized at this point) of the dad who found out his teenage daughter was pregnant because the big-box retailer Target tracked the daughter’s shopping habits and, having identified her as likely pregnant, proactively sent baby-product coupons to the household. It may be true that the dad would eventually have had other, more direct opportunities to find out about his daughter’s pregnancy, but most people still see what happened as a violation of some kind.

Target was taking advantage of all the data available to them in order to maximize revenue, just as all businesses are coached to do, with the result that they intruded upon a delicate family situation and maybe even crossed a line with respect to privacy and ethics. To what extent are other companies taking notice of this and learning lessons from it?

The language of change management is often fatalistic: “This is what the future is going to look like, this is where your industry is headed, so you’d better do X or else get left behind.” This creates an environment where it’s easy to forget that even the biggest overarching changes are built from decisions made at the most granular levels, and that we actually have control over our technology choices. “No thanks” is always on the table even if we aren’t thinking about it.

The urgency with which companies are coached to adopt the latest technologies is not always warranted. Sometimes it’s better to hang back and wait, or at least to implement a change gradually and cautiously, so that the ethical boundaries of the new technology can be figured out and adhered to. It might be better for the bottom line to ask forgiveness rather than permission, but it isn’t always the right thing to do, and it can get you into trouble later on.

Book review: The Driver in the Driverless Car

I got through the first four chapters of Vivek Wadhwa and Alex Salkever’s The Driver in the Driverless Car before deciding not to continue.

The Driver in the Driverless Car is appropriate for readers who have very little familiarity with emerging technology and have never thought much about technology ethics. I’m not part of the intended audience, but I was prompted to try the book anyway because it seemed to address specifically the question of how we should choose which technologies to adopt. That is a very important question, and I was excited to see it posed by a book you might find in an airport terminal store.

The authors spend a lot of time familiarizing the reader with various emerging technologies and some of the best-known ethical dilemmas those technologies pose. As they do this, the authors (who admit to being early-adopter tech enthusiasts themselves) seem unable to stifle their excitement about the new technologies long enough to offer serious criticism at the same time. So their attempt at even-handedness falls somewhat flat, though this may have been a deliberate choice.

My problem with this book is not its simplification but the authors’ rubric for assessing technology. They propose a three-question test for any new technology: Does it have the potential to benefit everyone equally? What are the risks and rewards? Does it create autonomy rather than dependence?

These are fine questions to explore, but can they serve as the basis for a useful technology-adoption assessment? Quite apart from blurring into one another, the questions encode and promote the authors’ own personal values (equality, rationalism, autonomy) rather than creating a stable framework that works for different people in different situations. Technology users and use contexts are extremely varied, so a technology that might seem to, say, promote equality for one user in one context could still raise all kinds of red flags for another user in another context.

I like that the authors are thinking about the question of how to assess technology for adoption, but there is danger in going about this the wrong way. Concepts like equality and autonomy don’t mean the same thing to everyone, and are easily politicized; it’s easy to take them and run very far and very quickly, irreversibly, in the wrong direction.

My social media quitting story

One day around 2012 I was walking to the corner store to pick up some beer. As I walked, I caught myself composing a status update (something about buying beer, probably) that I would type into Facebook when I got back home. At that moment I realized I had been doing this sort of thing for weeks or months, maybe longer: mentally translating my meatspace experiences into content to share on social media.

Facebook had influenced my day-to-day thought process in a way I hadn’t anticipated when I started using it. It had essentially rewired my brain, and I found this intrusive and disturbing. I also felt that by constantly mentally framing my experiences to support social media posting, I was diluting my connection to the world around me. It felt like a technology-induced mental disorder.

Fortunately it was a disorder I had the power to rid myself of. Right then and there on my way to the store I decided I was not going to post that status update. Instead I was going to delete my Facebook account and never look back. When I returned home, that’s exactly what I did.


Do you have a social media quitting story? Please feel free to share it and I will publish it here! Post it as a comment or email me.

Beyond the Trolley Problem: more ethical issues with driverless cars

I. The Trolley Problem

Much has been written about the ethical problems with driverless cars, but in most of those writings the emphasis is on a specific hypothetical “Trolley Problem” scenario where the car must decide between swerving to avoid a crash that would be fatal to the passenger, or allowing the passenger to die so that someone else (e.g. a pedestrian) is not killed.

The Trolley Problem itself is many decades old, and ethicists are still not close to settling it. There’s no good reason why recasting it with driverless cars should suddenly inspire a solution. Besides, people die in car crashes every day, in situations of much greater ethical clarity, and that fact doesn’t nudge our answer to the question “Should we drive cars at all?”

Furthermore, the driverless car Trolley Problem scenario is unlikely to happen much, and even then it can be somewhat mitigated with more technology: better sensors to avoid road hazards, signage or barriers to limit pedestrian road access, better airbags, better brakes, etc. So what other, deeper ethical issues are raised by driverless cars?

II. Driverful and driverless cars cannot coexist

Research suggests that driverless cars are safer than human-operated cars–so long as the driverless cars are interacting only with other driverless cars. The least safe scenario is when human drivers have to interact with driverless cars: human drivers have trouble “reading” driverless cars, and can get spooked by them, leading to accidents.

This means if driverless cars come into regular use, they may need to be mandated, at the exclusion of human-operated cars, at least within certain zones. For people living within those zones, the rule will be “driverless car or no car.”

III. “Ours,” not “yours”

Now consider the economics of owning a driverless car. The car itself will be expensive. While you are not driving it, a normal car sits in your driveway or in a parking space, basically just leaking value and gathering rust. A driverless car, on the other hand, could be out working as a taxi, making you money and helping to pay for itself.

Turning your driverless car into a taxi would require after-market alterations, some virtual hailing and payment services, and some additional legal and tax work, all with additional price tags. Driverless car manufacturers or dealers might anticipate the demand for this and price it into the vehicles as a standard feature package.

This creates a strong incentive for people who buy driverless cars to essentially start their own cab companies. Actual cab companies would probably just beat them to the punch, so long as unions don’t get involved (though if cab companies with actual unionized drivers have trouble competing with Uber, it’s not clear how they could possibly compete with Ace Driverless Taxi Service).

So in a place where driverless cars are the only cars allowed, and where owning a driverless car has considerably more up-front costs than car ownership does now, the default way for most people to get around will be by hailing a driverless cab.

IV. Cascading effects – family impact example

A generation of DOD (driverless-on-demand) transportation and you will see suburban and exurban homes built without garages, while new development will continue to be designed around cars. That will cement the DOD arrangement, because it will become even more costly to be one of those weirdos who wants a house with a garage, and even more costly to be one of those weirdos who walks or rides a bike places.

Living without a family vehicle in a city designed for cars means that having a family will become more costly too. Driverless cars might be built with fold-out kids’ seats like the newer Dodge Grand Caravans have, but anyone who has young kids and a car knows that the car is also a portable storage facility for toys, changing supplies, spare clothes, a stroller, and a bunch of other things that would be a pain to lug in and out of the DOD car every time.

Having kids might therefore mean either putting up with a bigger hassle each time you travel, or having to shell out for your own driverless car–no more getting off easy with a used minivan or SUV. This adds up to one more excuse on the “it’s too expensive to have kids these days” pile, and as a result we might see lower fertility rates (at least among people who are future-oriented and careful with their finances).

V. The end of something beautiful

There’s something about being a teenager, craving freedom, learning to drive, and finally getting your license that is an essential part of the American experience. Another part of that experience is being able to occasionally get out on the open road and command the movements of a machine that can take you across the continent. And when the machine is yours, there’s something about putting the hood up and tinkering with it, changing its brakes and oil, even vacuuming and washing it, that millions of Americans find intoxicating and are able to bond over.

OK, maybe driverless cars will bring their own set of unique rituals and beautiful experiences and rites of passage that we can find culturally unifying. (Sure, maybe.) But driverless cars definitely spell death for driverful cars as an institution. This means the death of classic cars too. You can’t have classic cars without having cars that regular people can drive and own and maintain.

If you’ve never been to a classic car show, I highly recommend it. Sometimes they are quasi-spontaneous, so that one day there will simply be a few dozen spectacularly well-maintained half-century-old cars sitting one after the other in a parking lot as you pass by. Take a stroll through one of those shows. Talk to the owners. Look at those cars, their shapes and lines, the way they were built. With permission from the owners, feel some of the materials with your hand. When people say “They don’t make ’em like they used to,” those aren’t hollow words. The whole character of those cars is different. It would be a tragedy to lose that piece of Americana, or to relegate it to museums.

In the end we are faced with a kind of Trolley Problem after all. Driverless cars do offer their advantages–any avid reader who has to drive places can understand that. By steering our technology adoption choices in that direction we assure ourselves some convenience, some safety, some freed-up time to be productive or to rest, maybe even some savings per mile traveled (averaged over our lifetimes at least). But is it worth what we would give up?