Why We Fuck Up

In the bloodless war between computers and referees, computers win 100% of the time.

Research shows that baseball umpires are less likely to call a strike after they’ve called the previous pitch a strike.

Soccer referees flub offside calls all the time. Why wouldn’t they? They are asked to judge a split-second half-centimeter difference in player positioning from twenty meters away.

Funny thing is, after the blown call, the TV broadcast rewinds and replays the action in slow motion. Neon lines materialize and frame the play. High-definition lenses confer with powerful computer servers, and we have a near-instant, perfect judgment of what we just watched. The information is both thrilling and useless, like finding out a high school crush really liked you, 20 years later.

Why don’t we simply replace those near-sighted, weak-willed meat bags with infallible, unswayable robo-refs?

Because deep down, we like the chaos of it all. Because when shit goes sideways and our guy or gal misses the line by a nose hair (or did they?), we need a person to blame, a human voodoo doll to stick our pins in. We need an idiot.

Ideally, computers would watch Them, and nervous humans would watch Us. At the end of the day, we want a fair game. But for Them, it should be extra fair.

What Came Next, Why We Love

What kind of monster am I?

MIT’s Moral Machine quiz exposed my grotesque form. In the survey, I determined the path of a self-driving car that suffers brake failure. In each scenario, I decided whether to send the car careening into a barrier or to barrel through pedestrian traffic.

Luckily, I prepared for this possibility.

While I could have treated this as a purely ethical exercise, instead I imposed four strict guidelines with the goal of ensuring consumer adoption of autonomous vehicles. The prime directive: save the passengers. I applied the following ruleset to each scenario, in order of evaluation:

  1. Animals are not humans
  2. Save the passengers
  3. Follow traffic rules
  4. Do not swerve

The rationale:
You may find my first rule the most monstrous, but it is a necessary condition for all subsequent rules: animals do not count as passengers or pedestrians. That means that a car full of animals is treated as an empty car, and a crosswalk full of animals is treated as an empty crosswalk. With apologies to the ASPCA, dogs (good dogs) would happily sacrifice themselves for their best friends. And cats, well, cats already treat us like empty space.

Next, the critical mandate: save the passengers. We do not weigh the number or type of passengers against the pedestrians endangered by swerving. We will save one criminal passenger even when that requires plowing through five children walking to school.

Third, assuming passengers are safe, we follow traffic rules. This means that given the choice between driving through a green light and driving through a red light, we always drive through the green. We assume that pedestrians are less likely to cross against traffic. This rule will become a sort of self-fulfilling prophecy, though the citizens of Manhattan will be stressed out for a while. Coincidentally, the introduction of the automobile a century ago followed a similarly lethal pattern until pedestrians smartened up.

Finally, if we can save the passengers and follow traffic rules, we opt not to swerve. The intention here is that autonomous driving should be as predictable as possible. When we see a car accelerating toward us, we should assume that it will follow its current path. This means that in some cases, a larger number of pedestrians will be struck only for the unhappy accident of legally crossing at the wrong moment. This is terrible and unfair, though the number of victims will be dwarfed by the number of people saved from accidents due to human error.
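For the literal-minded, here is a minimal sketch of how that ordered evaluation might look in code, assuming each quiz scenario can be boiled down to two yes/no questions. The Scenario fields and choose_action function are my own invention, not anything the quiz exposes.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """A brake-failure scenario reduced to the two facts the rules care about (hypothetical)."""
    passengers_at_risk_if_straight: bool   # staying the course endangers the passengers
    straight_path_has_green_light: bool    # continuing straight obeys the traffic signal


def choose_action(scenario: Scenario) -> str:
    """Apply the rules in their order of evaluation."""
    # Rule 1 happens before this function is called: animals are not humans,
    # so a car or crosswalk full of animals is treated as empty.
    # Rule 2: save the passengers, regardless of who or how many are in the road.
    if scenario.passengers_at_risk_if_straight:
        return "swerve"
    # Rule 3: given a choice between a green light and a red light, take the green.
    if not scenario.straight_path_has_green_light:
        return "swerve"
    # Rule 4: otherwise, stay predictable and hold the current path.
    return "stay the course"
```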

So what kind of monster am I? When these rules are implemented across scenarios, what sort of trends do we see?

[Image: Moral Machine results, first attempt, part 1]

Hm. I disgust me. Clearly, I am both sexist and ageist.

Moreover, we learn about my social and physical preferences:
[Image: Moral Machine results, first attempt, part 2]
Awful. Just awful.

But I wanted to know more. I wanted to know how consistently wretched I was, so I took the quiz a second time, using the same set of rules. What kind of monster am I?

[Image: Moral Machine results, second attempt, part 1]
Oh god. Is this worse? It’s worse, isn’t it? What else do we see?

[Image: Moral Machine results, second attempt, part 2]

Huh. That’s strange. In the first quiz I prioritized the safety of younger, fitter people. This time, they were dispensable. Confused, I took the quiz a third time. Let’s settle this: what kind of monster am I?

[Image: Moral Machine results, third attempt, part 1]

Well, that makes a bit more sense.

Over the course of a half dozen attempts, I was biased against criminals, athletic people, women, men, large people, babies, and the elderly. I implemented a ruleset that disregarded everyone’s identity, but given a limited sample size, any constituency could take me to court for discrimination.

This is the real dilemma for the trolley scenario and autonomous cars. Given indifferent rules, we will see bias. Given toast, we will see a face.

On a long enough timeline, we will be monsters to everyone.

What Came Next

What kind of monster are you?

No, really, I’d be curious to hear what kind of twisted, thoughtless human being you are. To find out, take MIT’s Moral Machine quiz. As we introduce autonomous machines to our roads, factories, and houses, we must consider how self-driving cars should respond in events that will result in the loss of life. MIT’s goal is to understand our individual moral judgments, as well as how we apply those judgments to autonomous machines.

The quiz presents a series of trolley scenarios, where you indicate how a self-driving car should respond to brake failure. Specifically, you must decide who will die in the inevitable collision: Passengers? Pedestrians? Young? Old? Overweight? Criminals? Dogs?

Jeez.

What kind of monster are you?

What Came Before

I think my Spotify profile hates me.

I spent September in Spain, and the outrageous cost of data outside the U.S. was a good excuse to go on an iPhone diet. During that time I didn’t log into Spotify a single time. Before my trip, I downloaded my favorite songs from 2016 directly to the phone and used those as a soundtrack to long walks through Sevilla’s narrow streets and bustling plazas.

From Spotify’s perspective, I’d ghosted them. At no point did my account check in with their servers to let them know where I’d been and what I’d listened to.

My algorithm was starving.

Every Monday, Spotify generates a playlist called Discover Weekly, where they use your listening habits to automatically select an hour of songs that you might enjoy. The more you listen (and the more your tastes fall into a narrow and predictable set of genres), the better their recommendations become.

Spotify introduced me to many of my favorite musicians this year: NAO, Tei Shi, Gallant. Under-the-radar geniuses that I probably would have missed without a good music sherpa. In its best moments, Spotify understands my preferences better than I understand myself.

Upon my return from Europe, something had changed. I fired up Discover Weekly and… their predictions seemed desperate and inaccurate. Here was a bland funk remix. There was an atonal indie-rock dirge. Starving for data, my algorithm had started flailing.

Or. Perhaps it was passive-aggressively trying to teach me a lesson: “Here, want to disappear for a few weeks? Hey, I don’t care. Go on! While you’re at it, why don’t you check out this cool Puddle of Mudd B-side. And you like rap, right? Get on this hot Wiz Khalifa collab! Stupid idiots love it!”

I recently came across a video where kids from 2016 react to Tamagotchis, the toys from the mid-90s that tasked children with taking care of a baby alien. Every few minutes the Tamagotchi beeped to get your attention – to play, to eat, to poop. My friends and I spent about a month of fifth grade tending to our pixel babies. And then we moved on. I put my Tamagotchi in the dresser drawer next to my old folders and notebooks. It chirped, and it chirped, and then finally fell silent.

I thought, along with everyone else, that my Tamagotchi’s spirit died along with its nickel battery. Instead, like Obi-Wan Kenobi, Tamagotchis became more powerful than we could possibly imagine. They are the invisible force that surrounds us, binds our millennial technology together. The buzz in our pocket, the iPhone notification, the email that begins “We noticed you haven’t logged in for a while…”

The ghosts of our digipets, chirping away. They are hungry. We must feed them.

What Came Next

Has the progress of technology lived up to its promise? Has the last hundred years of innovation made us happier? Our lives richer or more fulfilling?

Marc Andreessen and Peter Thiel take opposing sides on this question. Thiel argues that most areas of technology, from energy to transportation to medicine, have seen only iterative progress over the last century. Furthermore, while communications and IT have transformed, evidence that they have improved our lives remains elusive.

Toward the end of their 2013 debate, he muses on which signals might indicate that technological innovation is bettering our condition.

He suggests (51:30):

“I come back to an indicator that I think is an interesting cultural one: if Hollywood started producing science fiction movies in which technology was a good thing. The only ones I can think of are the Star Trek retread movies, which are a flashback to the 60s. Everything else, it’s, ‘technology is bad, it’s going to kill you, it’s going to destroy you.’ If people here stopped hating technology and started using their imagination to produce some good science fiction movies, that would be a very good sign.”

Recall films where new technology or novel social values rescue us (e.g. Contact, ID4).

Count that against films where new technology causes the problem and traditional technology or values solve the conflict (e.g. Avatar, Terminator, Star Wars, Jurassic Park).

We will call that ratio our Faith In Technology index.
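As a sketch of the arithmetic, using only the example films above (the variable names are mine):

```python
# Films where new technology or novel social values rescue us.
tech_saves_us = ["Contact", "ID4"]

# Films where new technology causes the problem and traditional
# technology or values resolve the conflict.
tech_dooms_us = ["Avatar", "Terminator", "Star Wars", "Jurassic Park"]

# The Faith In Technology index: optimistic films per pessimistic film.
faith_in_technology = len(tech_saves_us) / len(tech_dooms_us)
print(f"Faith In Technology index: {faith_in_technology:.2f}")  # 0.50
```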

When you have an idle commute or a quiet evening, check out Thiel and Andreessen’s debate and find where your perspective hovers.

Why We Fuck Up

The irony is that even if our self-driving cars kill indiscriminately, we would read bias into their distribution of victims for the same reason we see famous faces in toast.

You will hear:

“We must take into account regional and cultural differences in adherence to pedestrian jaywalking laws!”

“Autonomous vehicles are effectively targeting specific populations, and we demand accountability!”

“Of course the cars are biased. They’re made in San Francisco.”

The car company’s public policy and legal teams debate the ethics of algorithmic vehicular manslaughter over an increasingly emotional and sleep-deprived chain of weekend emails. Philosophical battle lines are drawn. On Sunday night, the CEO informs both teams that she has arrived at a decision and has informed her head of product: to counter the perceptions of bias, we will begin adding bias to our artificial intelligence.

What Came Next

We must teach intelligent machines how to kill. Not whether. How.

This is a source of much hand-wringing for those developing self-driving cars. On their shoulders (hunched, from years of poor desk posture) lies the responsibility to engineer a solution to the trolley problem.

The trolley problem, in brief, is a morbid game of “would you rather?” As in, would you rather let a runaway trolley plow through a group of kindergartners, or would you stop the trolley by pushing a man into its path?

Would you stop the trolley by throwing yourself into its path?

Although engineers of autonomous cars have started to downplay the weight of this dilemma, the truth is that their approach to this problem will determine whether self-driving cars become a global standard or a luxury. Or banned altogether. Should engineers aim toward adoption, I’ve created a helpful guide on how to resolve the trolley problem for a variety of catastrophic scenarios:

Q. Save an adult man in the street or save the passenger of the car?
A. Save the passenger.

Q. Save an adult woman or save the passenger?
A. Save the passenger.

Q. Save a nun or save the passenger?
A. Save the passenger.

Q. Save a baby or save the passenger?
A. Save the passenger.

Q. Save 30 babies (their strollers are somehow lined up in a row) or save the passenger?
A. Save the passenger.

Q. Save a rare white leopard or save the passenger?
A. Save the passenger.

Q. Save the last white leopard or save the passenger?
A. Save the passenger.

Q. Save the last white leopard, 4 babies in divinity school, George R.R. Martin (assuming The Winds of Winter has not been released), and the original Declaration of Independence or save the passenger?
A. Save the passenger (hope that GRRM survives).

Q. What if we can assure the safety of the passenger?
A. Choose a survivor at random (unless The Winds of Winter is still unreleased).

Indeed, the only ethical answer to the trolley scenario is the same one nature itself leans toward: uncertainty. There will be some engineers who insist we can devise a complicated algorithm that will minimize the severity of accidents: by number of victims, by age, by profession (they will insist it is reasonable to prioritize saving engineers).

Do not trust them. If adoption is our goal, only one certainty matters: we must save the passenger. Leave all else to chance.
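Reduced to code, the whole guide collapses to a few lines. A sketch, with the inputs invented for illustration:

```python
import random


def resolve_trolley(potential_victims: list[str], passenger_safe_either_way: bool) -> str:
    """Decide whom the car protects when the brakes fail."""
    if not passenger_safe_either_way:
        # The one certainty: save the passenger. Babies, nuns, and
        # the last white leopard are all left to chance.
        return "the passenger"
    # If the passenger is safe either way, choose a survivor at random:
    # the same uncertainty nature itself leans toward.
    return random.choice(potential_victims)


# resolve_trolley(["30 babies", "the last white leopard"], passenger_safe_either_way=False)
# -> "the passenger"
```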

What Came Next, Why We Act

Artificial intelligence will not kill us all. Instead, it will ground us like an angry parent, send us to our room.

David Krakauer suggests that the threat of technological progress is not existential, but volitional. He asserts that algorithms that curate, like the one that recommends our next movie on Netflix, erode the concept of free will. To the algorithm, the tendency of humans to surprise, to contradict themselves, to act unpredictably is an inefficiency. By constraining our set of visible and viable options, artificial intelligence effectively negates the possibility of freely-made choice.

Krakauer provides an analog example in our choice of dress:

“I am a Western male, you are a Western male: you are probably wearing trousers and a shirt. The sartorial options available to you are extraordinarily small. If you look at world culture, and historically look at Persia or the Roman empire, or China, the way we have chosen to adorn ourselves is incredibly diverse and fascinating. And yet, now as Western men we all look like clones. I would claim you are not exercising your judgment, you’re being told precisely how to dress. And when you get to exercise your judgment, it is a very, very low dimensional space around texture and color that the manufacturers of clothing, based on purely economic efficiency, have decided to give you.”

But is his premise accurate? If, as Krakauer implies, free will is exercised via deliberation over a complete data set, then even nature itself is a kind of curator. After all, our environment constrains the availability of flora and fauna we might choose to consume or wear or convert into shelter. Do the height of trees and the tensile grip of our fingers limit our choice of recreation any less than Amazon’s recommendation engine? Are we truly exercising free will when our habitat and biology provide only a narrow band of easily-accessible options?

Of course we are. Free will is not at stake, but heroism. Heroism is finding, then crossing the boundaries of conventional wisdom and curated options. It is grasping for the uneasily-accessed branches.

Religious theocracies curtail choice, but not free will. Authoritarians build fences, heroes hop over them.

Algorithmic curation suggests we choose from a small set. Heroes venture beyond recommendation.

Artificial intelligence draws from the oldest human tradition. Every society places borders around what constitutes normal, responsible, civilized, sacred. So do our psychological biases and intuitions. If artificial intelligence further shrinks our set of choices, then the opportunity for heroism has never been greater.

What Came Next

There is another reason we are anxious on planes: we are the passengers and not the pilots. As passengers, our vision is limited. We can only imagine what might be ahead of us, behind us, or underneath us. We lack an understanding of which signals are important and which are not. We think we might have information the pilot missed – a strange grinding sound coming from the wing, perhaps. And who is the pilot, anyway? He might be drowsy or drunk, or maybe he received the divorce papers right before takeoff.

As passengers, we want to be pilots. Even the illusion of control settles us in our seats.

Thus, the emergence of autonomous cars will bring with it the mortal dread we feel on airplanes.

We will sit in the passenger seat during those first rides, hands balled, stomach knotted.

Today’s question is not how we solve the technical challenge of autonomous vehicles, but how we solve the psychological one.

Perhaps we will need a tablet full of brightly colored, beautifully styled navigation data. At the top of the screen, a row of buttons to tap: CONFIRM ROUTE, CALIBRATE, UPDATE MANUALLY. A soothing British voice confirms our selections. We relax in the backseat, satisfied with our input. The buttons do nothing, but we don’t know that.

As they say, if you can’t dazzle them with brilliance, baffle them with bullshit.

What Came Next, Why We Talk

An odd result of the explosion in communication technology, from the printing press to the typewriter to Facebook, has been the simultaneous depersonalization of the written word.

There’s a reason why receiving a postcard from a friend feels more intimate than an email: there is a physical remnant of that person conveyed through their handwriting. The imperfections and smudges, the even spacing or chaotic scrawl, the sum effect as singular as the smell of their coat.

Mass-produced writing has been standardized via typeface, and communication is deadened by the loss.

Should there be a future where we communicate with intelligent robots, I dread a similar phenomenon: a standardization (and thus a loss) of body language.

Just as with today’s typefaces, body languages will be designed and encapsulated into discrete sets. We will call them body fonts. Eye contact, stroking of the chin, fumbling with hands. A cheeky smile, a gentle smile, a sly smile, a flirtatious smile.

They’ll be given names like Draper and Marge. You’ll cycle through a list and select one for each of your assistants.

The first versions will occupy the uncanny valley, and we’ll all feel revulsion. But as the years go on, we’ll acclimate to the unnerving consistency of their gestures. Their simulated emotions will be easier to read than our human companions’ messy, ambiguous tics.

Soon, our natural body language will appear erratic and unprofessional. We will send robot surrogates to our jobs, to sit at our desks on our behalf. Your manager will tell you, via her surrogate, that the directors have moved your two engineers to a different project. Don’t worry, she says, your project is still a priority this year. Her surrogate smiles gently. Your surrogate smiles gently.

You watch on your headset, twitching from your couch. Your eyebrows want to do… something. Your fists feel hot for some reason. What is it that you are trying to do?