The One Weird Trick Behind Ford’s ‘Ghost Driver’ Test


Last month, a team from Virginia Tech’s Transportation Institute sparked a minor social-media freakout in the D.C. area when they carried out a “ghost driver” experiment. The test involved sending a self-driving van out into traffic, but the robot car was a fraud: a researcher hidden in car-seat camouflage was piloting what appeared to be a driverless vehicle around city streets (at least until an intrepid TV news reporter attempted to unmask him).

It looked, frankly, like a joke, but there was some real science going on: The researchers were testing a model for autonomous vehicle signals. The automaker Ford needs to know how AVs will communicate with the human road users they will be sharing the streets with, so the company’s Human Factors department tasked Virginia Tech with studying how people reacted to a driverless vehicle equipped with an eye-level light bar on its windshield that transmitted a series of signal patterns. The experiment was designed to help establish a car-industry standard for a whole new kind of automotive communication: How do we translate the language of nods, waves, honks, and headlight dips that drivers use to assert right-of-way into robot-ese?

This question is becoming less abstract by the day: a bill passed in the House of Representatives last week calls for putting 100,000 autonomous vehicles on the road by 2020. But at the moment, the only driverless cars on the roads of American cities are required to have licensed humans in the driver’s seat, poised to take over in case of emergency. How do you test how people react to a driverless car when there’s clearly a person behind the wheel?

The answer: Fake it. A researcher wearing a “seat suit” drove the van to simulate a driverless car.

“We wanted people to be able to look in,” says Andy Schaudt, VTTI’s project director for the experiment. “The end goal was to use psychology and misdirection, so that people could easily see inside and confirm their suspicion that there isn’t anyone there.”

Ford proposed a set of three signals for the prototype used in the experiment: The white lights mounted on the windshield bar pulse slowly for yielding, blink quickly for accelerating, and remain solid while driving. “It felt like a video game in a lot of ways,” says Schaudt. “We can activate these lights and see how people behave.”
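Ford hasn’t published the exact timing of those patterns, but as a rough illustration of the idea, a controller that maps a vehicle’s intent to a light-bar pattern might look something like the sketch below. The pattern names, on/off timings, and the `set_lamp` hook are assumptions for the sake of the example, not Ford’s specification.

```python
from enum import Enum, auto
import time

class Intent(Enum):
    YIELDING = auto()      # slow pulse: the vehicle is yielding to others
    ACCELERATING = auto()  # rapid blink: the vehicle is about to start moving
    DRIVING = auto()       # solid light: the vehicle is driving normally

# Hypothetical on/off durations in seconds; Ford's actual cadence isn't public.
PATTERNS = {
    Intent.YIELDING:     (1.0, 1.0),   # slow pulse
    Intent.ACCELERATING: (0.2, 0.2),   # rapid blink
    Intent.DRIVING:      (1.0, 0.0),   # solid (never switches off)
}

def run_light_bar(intent: Intent, set_lamp, duration: float = 5.0) -> None:
    """Flash the white light bar in the pattern matching the given intent.

    `set_lamp` stands in for whatever call actually switches the lamp hardware.
    """
    on_s, off_s = PATTERNS[intent]
    end = time.monotonic() + duration
    while time.monotonic() < end:
        set_lamp(True)
        time.sleep(on_s)
        if off_s:
            set_lamp(False)
            time.sleep(off_s)

if __name__ == "__main__":
    # Example run that prints instead of toggling real hardware.
    run_light_bar(Intent.YIELDING,
                  lambda on: print("lamp on" if on else "lamp off"),
                  duration=4.0)
```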

This is completely normal. (Ford)

The experiment was designed to provoke reactions: The van roamed suburban D.C., lurking outside Metro stations and negotiating with pedestrians, cyclists, and motorists at intersections, in parking lots and garages, and at airport drop-offs. Schaudt says his team is still digging into the 150 hours and 1,800 miles the camera- and sensor-equipped car logged on the roads of Arlington, Virginia, which produced 1,650 activations of the signals built for the car.

For Ford’s John Shutko, who has spent his career studying how the company’s customers interact with their cars, the autonomous vehicle tests represent a chance to flip that script, as he wrote in a recent Medium post.

“To date, my job has been how the driver interacts with the vehicle from the inside,” says Shutko, Ford’s human factors technical specialist for driverless cars. “When we started thinking about the future, we realized that when there’s no longer a driver, there’s a human interaction that is going to be missing.”

The fake AV’s signal sits at driver level, since that’s where one expects to see a face. (Ford)

Automakers aren’t just looking to establish an industry standard for an entirely new kind of automotive signaling; they’re also trying to re-engineer the trust that undergirds the driving ecosystem. To do that, they needed to come up with a simple and readily understandable way for robo-cars to communicate their intentions. NHTSA’s Federal Motor Vehicle Safety Standards limit the colors of front-facing lights to white or amber, which ruled out a traffic-light scheme of green for go, yellow for yield, and red for stop. Shutko says that might be for the best anyway, since white lights remain visible across different levels of daylight. “We also didn’t want to communicate caution at all times, and people typically interpret amber that way.” They located the car’s light bar at the top of the windshield, since that’s where people naturally expect to see a driver’s face.

Still, it will take time and education to acclimate road users to the new world of driverlessness: During virtual reality simulations run before the ghost driver experiment, few subjects understood what the signals meant immediately, Shutko says. “But as people watch and get exposed, they learn.”

The most important thing, says Schaudt, is establishing a standard for AV signaling, so that future AVs all speak the same language. “What would be really confusing is if you had three or four different kinds of signal systems that you then had to stop and think about,” he says. “All of us at one point had to learn that ‘green means go’ and ‘red means stop.’ So now it seems intuitive to us.”