Let me start with a little story: A couple of years ago, I gave a talk at a small conference on technology. During it, I referenced your book The Inner History of Devices. My wife was sitting at a table with a number of people who work in entertainment technology. At the sound of your name, one of them pretended to gag and the others laughed. They were, I guess, technological literalists, people who can see nothing but surfaces. When told of it, I simply laughed and dismissed them.
I was reminded of that today as I listened to your interview with Krista Tippett for the radio show On Being. You recounted encountering, with your daughter, a sleeping Galapagos turtle. She said, “For what this turtle is doing, they could have just had a robot.” You said to Tippett, “It struck me that, from her point of view, the fact that it was alive mattered not at all.”
In the interview, you go on:
That’s when I started talking about a new pragmatism among this generation of young people. This is no longer philosophical. Life becomes a pragmatic quality. Is this alive enough for this purpose? And this is important because we’re now talking about robots that will serve as companions to the elderly, robots that will serve as companions to children as kind of nanny-bots. This is the question being asked of them. Are they alive enough for this purpose? I, of course, think this is the wrong question in many cases and that moment at the museum helped me frame, you know, helped me frame my thinking.
This, it seems, is what you explore in the book you were talking about, your newest: Alone Together: Why We Expect More from Technology and Less from Each Other (I haven’t read it yet, but I will… I most certainly will… and I look forward to it). It is a topic worthy of the intelligence you show and the intellectual risks you are willing to take (risks that those staid technological literalists are not willing to contemplate in others, let alone take themselves).
In a 1978 speech, “How to Build a Universe that Doesn’t Fall Apart Two Days Later,” the science-fiction writer Philip K. Dick said (talking primarily about TV, but the point is much broader):
another way to control the minds of people is to control their perceptions. If you can get them to see the world as you do, they will think as you do. Comprehension follows perception. How do you get them to see the reality you see? After all, it is only one reality out of many. Images are a basic constituent: pictures…. We only imagine that we consciously see what is there…. Our memories are spurious, like our memories of dreams; the blank spaces are filled in retrospectively. And falsified. We have participated unknowingly in the creation of a spurious reality, and then we have obligingly fed it to ourselves. We have colluded in our own doom. (8)
He goes on to wonder just what an “authentic” human is in such a situation. Then he says something that I suspect will resonate with your own experiences:
I watch the children watching TV and at first I am afraid of what they are being taught, and then I realize, they can’t be corrupted or destroyed. They watch, they listen, they understand, and, then, where it is necessary, they reject. There is something enormously powerful in a child’s ability to withstand the fraudulent. (22)
The questions you are exploring can be illuminated, and in powerful ways, by Phil Dick’s fiction, where the question ‘What is human?’ becomes more and more complicated as he sees humans acting inhumanely and imagines androids operating in humane fashion. And he extends it further, to ‘What is real?’ The two questions, of course, are intertwined:
I consider that the matter of defining what is real–that is a serious topic, even a vital topic. And in there somewhere is the other topic, the definition of the authentic human. Because the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans–as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or fake humans will generate fake realities and then sell themselves to other humans, turning them, eventually, into forgeries of themselves. (6)
Of course, I don’t know if you have read any of Phil Dick’s work, though I do know you reference Do Androids Dream of Electric Sheep? in your new book (you call it a story rather than a novel)–but that particular page isn’t available for preview. I guess I’ll just have to wait until I have the actual book in my hand or on a screen.
The topics Phil Dick focused on throughout his career can inform contemporary discussions that have become a lot less abstract and a lot more immediate. As you are spearheading such discussions today, I hope you will allow this odd, offbeat thinker a place, so to speak, at the table as well.