I groaned through the first four chapters of Vivek Wadhwa and Alex Salkever’s The Driver in the Driverless Car before deciding not to continue. That is both a caveat to the review below, and a mini-review in itself.
The Driver in the Driverless Car is appropriate for readers who have very little familiarity with emerging technology and have never thought much about technology ethics. As someone who does research on these issues, I’m not part of the intended audience, but I was prompted to try the book anyway because it seemed to address a specific question: how should we choose which technologies to adopt? That is a very important question, and I was excited to see it posed by a book you might find in an airport terminal store.
Just a word of caution to anyone who might pick up this book for the same reason: the authors spend a lot of time familiarizing the reader with various emerging technologies and some of the best-known ethical dilemmas those technologies pose (though not necessarily the most important ones). And as they do this, the authors, who admit to being early-adopter tech enthusiasts themselves, seem unable to stifle their excitement about the new technologies long enough to offer serious criticism alongside it.
So their attempt at even-handedness falls somewhat flat. Then again, this may have been a deliberate choice, since the authors matched the attitude their target audience likely holds.
My problem with this book is not that it’s dumbed down; it’s the authors’ rubric for assessing technology. They propose a three-question test for all new technology: Does it have the potential to benefit everyone equally? What are the risks and rewards? Does it create autonomy rather than dependence?
These are fine questions to explore, but they will never serve as the basis for a useful technology adoption assessment. Aside from overlapping and therefore being partly redundant, the questions merely encode and promote the authors’ own values (equality, rationalism, autonomy) rather than offer a framework that accommodates the values of different users in a broad range of contexts. Technology users and use contexts vary enormously, so a technology that seems fine to one user in one context might raise all kinds of red flags for another user in another context.
As I said, it’s good that people are thinking about how to assess technology for adoption. But I sense a danger in going about it the wrong way. Concepts like equality and autonomy are easily politicized; it’s easy to grab them and run very far, very quickly, and irreversibly in the wrong direction.
I wish this book had been more about simply making the adoption of technology an intentional process. There has been a flurry of writing in the past few years about technology’s implications for equality, fairness, privacy, autonomy, and so on. But very few people are saying, “Buying the latest technology is not mandatory. The newest thing running on a computer, or with a computer in it, doesn’t have to take over. We can slow down and decide whether we really want that new normal.” Maybe someday somebody will write that book.