How to Mourn a Space Robot

NASA’s Cassini probe will soon plunge into Saturn, ending its 13-year mission to the ringed planet and triggering a wave of grief among scientists, engineers, and an adoring public.  

An illustration of the Cassini spacecraft cruising through space (JPL / NASA)

Cassini, the spacecraft that has been orbiting Saturn for 13 years, is running out of fuel and nearing the end of its mission. Over the next few months, Cassini will dive into the space between Saturn and its rings, moving closer and closer to the planet until it eventually disintegrates in its atmosphere in September. This week, NASA released a short animation showing these final moments, set to a majestic, brassy overture.

“On its final orbit, Cassini will plunge into Saturn, fighting to keep its antenna pointed at Earth as it transmits its farewell,” a comforting voice narrates as the music swells. “In the skies of Saturn, the journey ends, as Cassini becomes part of the planet itself.”

The video prompted some emotional responses from viewers. “Someone must be cutting onions in here,” tweeted a scientist who watched with the audience gathered at Jet Propulsion Laboratory headquarters in California. “Crying about a spaceship,” said a user in Boulder, Colorado. “This is so sad ... I love you Cassini,” tweeted a user in Bogotá, Colombia.

This is probably the reaction NASA was going for. Cassini is one of NASA’s most successful planetary missions. It delivered a probe to the surface of Titan and discovered the moon’s lakes of liquid methane. It observed plumes of water vapor shooting out of Enceladus. And, of course, it captured stunning photos of Saturn and its rings. Its demise will elicit sadness among scientists, engineers, and space enthusiasts alike.

But the NASA video also taps into something that goes beyond scientific discovery or Saturn. The public mourning of Cassini serves as another example of the complicated relationship between humans and machines, and of the tendency of humans to anthropomorphize robots and care about them.

“That’s how we’re built,” said Doug Gillan, a psychology professor at North Carolina State University who studies human interaction with technology. “We have these social processes, and sometimes they get applied to non-social things.”

The anthropomorphism of machines appears to fall on a rather unsurprising spectrum: The more alive a robot seems, the more likely its behavior will trigger feelings of empathy, attachment, or protectiveness in humans, who are hardwired to respond in such ways to other social beings. Even the slightest perceived hints of life in a machine—like Cassini “fighting to keep its antenna pointed at Earth”—can cause people to see it as something other than just a collection of sensors and circuitry, and then react to it in an emotional way.

Research has shown that people can feel the same kind of empathy for robots as they do for humans. In one study, participants watched videos of a boy and several kinds of robots getting yelled at or being pushed around, and were then asked which they would save in the event of an earthquake. Their picks showed that participants felt more empathy for the boy and the robots that looked and talked like humans than for device-like robots, such as a Roomba—but they felt compassion for the Roomba, too. In another study, participants were hooked up to a brain-imaging machine and shown clips of a woman, a box, and a robotic toy dinosaur receiving affection, and then being physically abused. Participants showed more concern for the human than for the dinosaur, but their brains reacted in a similar way as they watched the dinosaur squirm and let out tiny electronic screams.

Here’s a prime example of a robot that exhibits enough “aliveness” to make humans burst into tears: the fictional Pixar character WALL-E is a rusty trash compactor. But his boxy frame shivers when he’s scared, his shovel hands come together when he’s hopeful, and his giant, binocular eyes tilt downward when he’s sad. R2-D2 looks like a trash can and BB-8 like a soccer ball wearing a GoPro, but the Star Wars crew gave them the ability to make expressive squeaks and act in ways that show bravery and loyalty.

The same effect holds for real-life robots. A survey of military personnel who use robots to disarm bombs found that soldiers can feel frustrated, angry, or sad when their robots are destroyed. The soldiers said they’d given the robots names and genders, which may have led them to feel affection for the machines. The DAR-1 is a six-legged, spider-like machine, but it’s designed to track human faces, lock eyes with people, and back away if they get too close, as if it feels nervous. The Blabdroid is literally a cardboard box on a set of wheels, but two holes and a thin slit cut into the material create the appearance of eyes and a mouth, making the robot appear friendly.

Machines like WALL-E, bomb-disposal robots, and Roombas, while they don’t resemble humans, possess some human qualities. On the inside, their movements are the products of a complex arrangement of sensors, motors, and other programmable hardware. To the outside world, to the people watching, their movements seem autonomous. The robots appear, at times, to show intent.

“Once it takes on autonomy and social agency and consciousness, whatever you want to call it—then we start to think of it more like another person,” explained Gillan. He cited a character in Westworld, the HBO drama about a robotic theme park whose artificial inhabitants begin to reach consciousness. “The Man in Black—all he’s doing is mistreating machines. And if he were mistreating his coffeemaker, we’d think that he’s kind of a jerk, but we wouldn’t think he was evil,” he said. But “because we can start to perceive these machines as having social agency, then somebody who behaves that way is seen as evil.”

The recognition of such autonomy could help explain why some will mourn Cassini, as it burns up in Saturn’s atmosphere, as something other than the amalgamation of metal and wires and circuits. The spacecraft has no human or animal qualities, and it is not designed to give lifelike cues, like a tilt to signal curiosity or a shudder to suggest fear. But it can communicate in its own language, transmitting data back to Earth. It has ventured into dangerous territory, all alone, to seek answers about the unknown. And it appears, as NASA suggests in its closing narrative, that Cassini has accepted its mortality. Soon its work will be over, and it will no longer exist. That, perhaps, is the most human thing of all.

Marina Koren is a staff writer at The Atlantic.