Why We Do Better

From earlier this summer, what I call the prognostic treadmill:

“The hedonic treadmill describes our tendency to return to a baseline level of happiness (or curmudgeonliness) shortly after events we thought would bring us lasting joy: that new Tesla, that condo with the view, that promotion (long overdue, really).

We are less familiar with, yet no less afflicted by, the prognostic treadmill:

Our tendency to return to a level of confidence in our predictive ability, shortly after events that confirm our inability to make predictions: failure to reach last year’s growth targets, global economic recessions, populist upheavals.”

A shocking event scrambles the neat picture of the future that we held in our heads, like a child dashing a finished jigsaw puzzle to the floor.

For a fleeting instant, we see the unfolding of human history as it is: impervious to prediction. Anti-certain. Unfortunately, our brains crave closure.

Moments after the experts and pundits get it wrong, we gasp for more predictions, new predictions to settle our roiling bellies. Like salt water, bad predictions just make us thirstier for more predictions.

What would it look like to sit in uncertainty? To admit that in complex environments with interlocking dependencies, the odds are always 50-50? That it is better to have no map than a wrong map?

Might you be more cautious? Might you listen more closely? Might you work a little harder?

We can’t predict the future, but we can prepare for possibilities.

What Came Next, Why We Love

What kind of monster am I?

MIT’s Moral Machine quiz exposed my grotesque form. In the survey, I determined the path of a self-driving car that suffers brake failure. In each scenario, I decided whether to send the car careening into a barrier or to barrel through pedestrian traffic.

Luckily, I prepared for this possibility.

While I could have treated this as a purely ethical exercise, I instead imposed four strict guidelines with the goal of ensuring consumer adoption of autonomous vehicles. The prime directive: save the passengers. I applied the following ruleset to each scenario, in order of evaluation:

  1. Animals are not humans
  2. Save the passengers
  3. Follow traffic rules
  4. Do not swerve

The rationale:
You may find my first rule the most monstrous, but it is a necessary condition for all subsequent rules: animals do not count as passengers or pedestrians. That means that a car full of animals is treated as an empty car, and a crosswalk full of animals is treated as an empty crosswalk. With apologies to the ASPCA, dogs (good dogs) would happily sacrifice themselves for their best friends. And cats, well, cats already treat us like empty space.

Next, the critical mandate: save the passengers. We do not weigh the number or type of passengers against the pedestrians endangered by swerving. We will save a single criminal passenger even when that requires plowing through five children walking to school.

Third, assuming passengers are safe, we follow traffic rules. This means that given the choice between driving through a green light and driving through a red light, we always drive through the green. We assume that pedestrians are less likely to cross against traffic. This rule will become a sort of self-fulfilling prophecy, though the citizens of Manhattan will be stressed out for a while. Coincidentally, the introduction of the automobile a century ago followed a similarly lethal pattern until pedestrians smartened up.

Finally, if we can save the passengers and follow traffic rules, we opt not to swerve. The intention here is that autonomous driving should be as predictable as possible. When we see a car accelerating toward us, we should assume that it will follow its current path. This means that in some cases, a larger number of pedestrians will be struck only for the unhappy accident of legally crossing at the wrong moment. This is terrible and unfair, though the number of victims will be dwarfed by the number of people saved from accidents due to human error.
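A minimal sketch of how such a ruleset might be applied, assuming a simplified scenario representation of my own invention (the `Option` fields and function names below are not part of the quiz):

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible action the car can take: hold its path or swerve."""
    swerves: bool            # does this option change the car's current path?
    kills_passengers: bool   # are the car's occupants among the casualties?
    runs_red_light: bool     # does this option violate a traffic signal?

def choose(options: list[Option]) -> Option:
    """Apply the rules in order of evaluation.

    Rule 1 (animals are not humans) is assumed to apply upstream,
    when casualties are counted: animals are simply ignored.
    """
    candidates = options

    # Rule 2: save the passengers, whenever any option allows it.
    candidates = [o for o in candidates if not o.kills_passengers] or candidates

    # Rule 3: follow traffic rules, if still possible.
    candidates = [o for o in candidates if not o.runs_red_light] or candidates

    # Rule 4: do not swerve; prefer holding the current path.
    candidates = [o for o in candidates if not o.swerves] or candidates

    return candidates[0]
```

Notice that nothing in the procedure counts heads or reads identities; whatever demographic skew shows up in the results is produced by the scenarios, not by the rules.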

So what kind of monster am I? When these rules are implemented across scenarios, what sort of trends do we see?

[Image: Moral Machine results, first attempt (1 of 2)]

Hm. I disgust me. Clearly, I am both sexist and ageist.

Moreover, we learn about my social and physical preferences:
[Image: Moral Machine results, first attempt (2 of 2)]
Awful. Just awful.

But I wanted to know more. I wanted to know how consistently wretched I was, so I took the quiz a second time, using the same set of rules. What kind of monster am I?

[Image: Moral Machine results, second attempt (1 of 2)]
Oh god. Is this worse? It’s worse, isn’t it? What else do we see?

[Image: Moral Machine results, second attempt (2 of 2)]

Huh. That’s strange. In the first quiz I prioritized the safety of younger, fitter people. This time, they were dispensable. Confused, I took the quiz a third time. Let’s settle this: what kind of monster am I?

[Image: Moral Machine results, third attempt]

Well, that makes a bit more sense.

Over the course of a half dozen attempts, I was biased against criminals, athletic people, women, men, large people, babies, and the elderly. I implemented a ruleset that disregarded everyone’s identity, but given a limited sample size, any constituency could take me to court for discrimination.

This is the real dilemma for the trolley scenario and autonomous cars. Given indifferent rules, we will see bias. Given toast, we will see a face.

On a long enough timeline, we will be monsters to everyone.

What Came Next

What kind of monster are you?

No, really, I’d be curious to hear what kind of twisted, thoughtless human being you are. In order to find out, take MIT’s Moral Machine quiz. As we introduce autonomous machines to our roads, factories, and houses, we must consider how self-driving cars should respond in events that will result in the loss of life. MIT’s goal is to understand our individual moral judgements, as well as how we apply those judgements to autonomous machines.

The quiz presents a series of trolley scenarios, where you indicate how a self-driving car should respond to brake failure. Specifically, you must decide who will die in the inevitable collision: passengers? Pedestrians? Young? Old? Overweight? Criminals? Dogs?

Jeez.

What kind of monster are you?

What Came Next

After several generations of wearables with abysmal battery life, a breakthrough: researchers studying the effects of calisthenics on mice stumble upon an energetic process by which we can power our devices with ingested food calories.

We simply plug our devices into the USB hub on our hip and charge them with our body’s catabolic process.

Finally: lasagna and Krispy Kreme, Big Macs and burritos, chocolate shakes and triple-cream cheese: all in the service of full batteries.

Bacon-wrapped, deep-fried, fudge-stuffed, Texas-style: all to keep our phones glowing, cars rolling, keyboards tapping, apps tracking.

Best of all: we devour whatever we crave and gain no weight. Our appetite unclasps from our personal caloric requirements; we now eat for civilization’s pulse. After a long day’s work, we sit on the couch and plug into our apartment to power the lights, the TV, and the dishwasher. And, of course, we gorge. Steak and Steak’ums, Nutella and Cheez-Its, pizza and bagels and pizza bagels.

We gotta keep the lights on.

And then, a matter of some concern. Communities once suffering from staggering rates of obesity are now afflicted with chronic malnourishment. We simply can’t eat quickly enough, obscenely enough to power all of our devices.

The public begins to adopt elaborate habits and routines around unplugging. Some unplug after five in the afternoon. Others decide to only plug in with friends. A few unplug entirely, a variety of tech veganism that gains a small but vociferous group of adherents along the coasts. But many, an unfortunate many, simply can’t or won’t accept a disconnected life. For them, life goes on as normal, mostly, until one morning they begin to feel rather tired. They notice their phones won’t keep a charge even after a trip to Olive Garden. Their kitchen lights dim at increasing intervals.

It all takes about forty years. You can hover a few hundred miles above the Earth and watch. Blip by blip, each city begins to go dark.

What Came Next

When asked to imagine 100 years into the future, the first place your mind might go is to the new technology: what do we use to communicate? Are there chips implanted into our eyes? How common are personal robots and drones? What medical breakthroughs have we stumbled upon? Have we populated the solar system?

Science fiction has trained us to imagine the future first through the lens of technological advancement.

Perhaps it has limited our imagination.

Let’s travel 100 years into the past. We flag down a woman walking down 8th Avenue. We have a story to tell her about the future. We’re short on time (the time control device really eats up our phone’s battery), so we tell her two things:

  1. In one hundred years, we will have created a device that instantaneously enables you to send photographs and messages to anyone else in the world. Everybody will have this device, and it will fit in the palm of your hand.
  2. We will elect our first female president, the successor to our first black president.

Which statement will inspire more curiosity? More excitement? Which statement will compel her to see her present day in a new light? To change her mind? To take action?

Again, imagine 100 years into the future. Science fiction might be the least radical lens through which we may envision the world-to-come.

What Came Before, What Came Next

At times, say while preparing one’s morning coffee, we can be struck with the unnerving sensation that we are presently living in someone else’s past.

We are surely not the first to experience the feeling. A young man in England around the time of the Renaissance is struck by the same realization as he walks to his class on medicine at Oxford. While most of his professors continue to lecture on the ancient Greek theory of the four humors, a small, rebellious sect within the college advocates for an empirical, observation-based approach to medicine. The young man suspects that in time, careful practice and experimentation will overturn centuries of theory.

It takes nearly two hundred and fifty years before Pasteur and Koch establish the existence of germs and their role in spreading disease. The following century sees widespread vaccination and plummeting infant mortality.

The young man at Oxford, in his moment of epiphany, understands that he is presently living in the past.

I prepare my morning coffee and consider that the field of modern psychology is barely a century old. We believe our technology to be advanced, our pharmaceuticals effective, the foundations of our theories sound. In reality, our understanding of the brain remains faint. We are at sea, squinting at the hint of a coastline through a fog.

Two hundred and fifty years from now, a young man walks to a seminar at Oxford titled Neuroarchitecture and Rearchitecture. On the way, he feels the distinct sensation of presently living in someone else’s future. He makes a mental note to ask his professor about the phenomenon after class.

What Came Next

Take a prediction about the future that we might all agree on: in two hundred years, our collective opinion about which 20th century pop musician is most overrated will be different than it is today.

Sure. Makes sense. Attitudes are always changing.

What if I told you it will be Michael Jackson?

Perhaps I should have started out with a trigger warning.

To drive someone crazy, be specific about the future.

Chuck Klosterman argues in But What If We’re Wrong? that a crucial bias in how we view the present day stems from our inability to imagine it as the distant past. We agree that we think about the world differently than people living a century ago, and we can also imagine that people a century from now will view the world quite differently than today.

Until we get specific. As Klosterman observes in his interview with Marc Maron:

“Everyone in the abstract sort of accepts this, but as soon as you start talking about specific ideas that we might be wrong about, people are very uncomfortable. They need to feel a degree of certitude about specifics even if they can accept in a general sense that they might know nothing.”

We agree in the abstract. We disagree in the particular. The need for certitude casts a shadow over the sea of our beliefs:

Our moral and political opinions, the durability of our fondest memories, our affection for friends and family and total strangers, the virtue of technological progress, the safety of genome hacking, the stability of national borders, the superiority of science to mysticism, diversity to homogeneity, The Beatles to The Monkees.

Yes, you say, I know these are all subject to change. Of course they are. Except, except, except.

What Came Next

The reveal: a neutrally dressed young woman lifts a gray box from the table. Beneath, a particular brand of soda.

“You’ve actually been drinking Diet Pepsi!”

Across the table, a man’s jaw drops. “No way!” He laughs loudly, stumbles back in awe.

Cut to: The man, speaking to the woman, to the camera, to us, the national viewing audience, “This is it. WOW. You got me. I’m drinking DIET PEPSI! This is my jam!”

We’re stunned by our preferences when a box conceals the brand.

A vision of democracy:

In the year 2020, the U.S. presidential election takes place via web app. There are no candidates on the ballot. Instead, a series of questions pertaining to how you would like a hypothetical candidate to resolve ambiguous moral and political situations. How you would prefer they respond to stressors and insults. How they’ve spent their free time and their professional energy to date. How long their longest friendship has lasted. What they eat for breakfast, how frequently and intensely they deadlift.

The nation answers the questions.

Each candidate enters their own responses to the questions, which are supplemented by the evaluations of a physician and psychologist and audited by an independent legal counsel.

The app matches each voter to a candidate, based on the percentage of overlap between the voter’s preferences and the candidates’ responses.
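A sketch of the matching and tallying steps as I imagine them, assuming each ballot and each candidate profile is simply a list of answers in a fixed question order (the function names here are hypothetical, not any real system’s):

```python
from collections import Counter

def overlap(voter: list[str], candidate: list[str]) -> float:
    """Fraction of questions on which the voter and candidate agree."""
    matches = sum(v == c for v, c in zip(voter, candidate))
    return matches / len(voter)

def match(voter: list[str], candidates: dict[str, list[str]]) -> str:
    """Return the candidate whose responses overlap most with this voter."""
    return max(candidates, key=lambda name: overlap(voter, candidates[name]))

def tally(ballots: list[list[str]], candidates: dict[str, list[str]]) -> Counter:
    """One matched candidate per voter; the winner waits under the gray box."""
    return Counter(match(ballot, candidates) for ballot in ballots)
```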

The votes are tallied. On network television, a neutrally dressed young woman walks to the center of an LED-bright stage and stands behind a star-spangled folding table. On the table, a gray box.

The reveal: she lifts the box from the table. Beneath, a particular brand of candidate. Our new president.

Thousands of red, white, and blue balloons spill from the ceiling. Bruce Springsteen wails over the loudspeakers.

What Came Next

There is another reason we are anxious on planes: we are the passengers and not the pilots. As passengers, our vision is limited. We can only imagine what might be ahead of us, behind us, or underneath us. We lack an understanding of which signals are important and which are not. We think we might have information the pilot missed – a strange grinding sound coming from the wing, perhaps. And who is the pilot, anyway? He might be drowsy or drunk, or maybe he received the divorce papers right before takeoff.

As passengers, we want to be pilots. Even the illusion of control settles us in our seats.

Thus, the emergence of autonomous cars will bring with it the mortal dread we feel on airplanes.

We will sit in the passenger seat during those first rides, hands balled, stomach knotted.

Today’s question is not how we solve the technical challenge of autonomous vehicles, but how we solve the psychological one.

Perhaps we will need a tablet full of brightly colored, beautifully styled navigation data. At the top of the screen, a row of buttons to tap: CONFIRM ROUTE, CALIBRATE, UPDATE MANUALLY. A soothing British voice confirms our selections. We relax in the backseat, satisfied with our input. The buttons do nothing, but we don’t know that.

As they say, if you can’t dazzle them with brilliance, baffle them with bullshit.

Why We Create

It’s worthwhile to consider the ways the old instruct the young, and the young enlighten the old. Make no mistake: both generations have plenty to impart, and neither trusts the other.

And so, as we must, we teach with metaphor.

The old disguise their hard-earned lessons in the gloss of fiction, whether through Spiderman or Shakespeare, Jesus or John Cena. Salman Rushdie captures this dynamic in conversation with Paul Holdengraber:

“The thing about fairy tale, folk tale, and mythology is that these things in many ways contain the collective wisdom of the human race; these beautiful little things into which an enormous amount of moral and practical information is packed.”

The young teach the old through invention. New technology, new fashion, new relationships to work and to sex. Each innovation is a pocket manifesto that responds:

“Yes, and here is what you missed. This is the solution. This is what matters now.”

Story and invention, call and response. Should you need to teach a lesson, keep the age of your audience in mind.