Finding Baymax
Robotics companies are hiring Pixar engineers to make their robots friendlier
Illustrations by philiplueck

He’s hellbent on killing a small child, and he’s almost impossible to stop.

They shoot him with pistols and shotguns, they ram him with a car, and he just absorbs the shocks. His liquid-metal body can reform to resemble anyone on the planet, or any object that will help him kill young John Connor before the boy grows into the adult who will lead a human rebellion against the all-powerful AI.

The robot is called the T-1000, and it was the main villain in the film Terminator 2: Judgment Day. It and its predecessor, famously played by Arnold Schwarzenegger, perfectly crystallized our fear that at some point, in the not-too-distant future, artificial intelligence will become smarter than humans, and its robots will enslave or eradicate us all.


The Terminator movies—there have been six of them now—are part of a long science-fiction tradition of depicting the destruction of humanity at the hands of robots, from HAL 9000 in 1968’s 2001: A Space Odyssey, to 2004’s I, Robot, and recent works like Ex Machina or Westworld. Many of these films and television shows are set right around the time we’re living in now, making them very relatable, just as we begin to adopt robots in our daily lives.

In film, it’s just a hop, skip, and a jump from human-shaped robots preparing dinner to rising up and killing us all. In the real world, bots can’t yet make us dinner, but we’re starting to experience how useful they can be. We’ve had automated systems in factories for years, and more recently, in warehouses. We drive cars that will tell us if we’re veering out of our lane, and we’re adopting tiny vacuum-cleaning robots in our homes. In other words, we’re at a point where science fiction is tipping into fact, as automation plays a greater and greater role in our everyday lives.

In the near future, businesses will build a brand-friendly chatbot that can interact with every customer at once instead of hiring thousands of people to man customer service lines. Companies are starting to build AI interfaces to act as conduits through which their customers use their services (think Amazon’s Alexa on the Echo), and others are even starting to tackle the challenge of building general-purpose robots that can provide companionship and help around the home. It won’t be long before automation will touch just about every aspect of our lives. And yet people still feel a lingering unease about AI, whether it’s fearing that robots will take our jobs, invade our privacy, or lead to the downfall of humanity itself.

It shouldn’t surprise us, then, that there’s a group of individuals—roboticists, artists, animators, engineers, industrial designers, neuroscientists, among others—working to make robots and AI seem less threatening. They’re taking cues from some of fiction’s friendlier robots—think the droids in Star Wars, or Wall-E—and blending them with the latest thinking on how our own brains work to create real-life robots that may make us more inclined to accept these technologies into our lives. Unlike companies such as Boston Dynamics that are focused on robots’ functionality, this cottage industry of bot-makers is concerned with what the machines look like, how they sound, and what kind of personalities they have.


Getting those things right, they’ve discovered, can mean the difference between creepy and endearing.


One of the reasons R2-D2 has become a favorite of multiple generations of Star Wars fans, while the seemingly more human C-3PO has not, is his character. He’s somehow more energetic, more emotive, and more willing to save the day for his team than C-3PO, even though he doesn’t have arms, legs, a face, or even the ability to talk.

A lot of work is going into figuring out exactly what makes a bot likable. It turns out it’s a whole series of elements—and they don’t necessarily center on language.

Personality

A new robot, called Kuri and created by the Bosch-backed startup Mayfield Robotics, channels that insight. It may be the first robot you’ll actually want to bring home. (It’s available to preorder at $799.) Standing roughly two feet tall and dressed like a waiter at a black-tie event, the diminutive robot is exceedingly cute. It has an oval-shaped body and a round head with two circles for eyes. It rolls around, bumping into new objects with surprise, blinking confusedly when something unexpected happens, or opening its eyes wide and chirping enthusiastically when it sees someone it recognizes. You can rub its head, and it’ll look up at you, with a giant light glowing from its chest, like E.T. when he phones home.

Kuri robot (Mayfield Robotics)

Kuri is a new type of device, a robot companion, sitting somewhere in the family between an Amazon Echo and a pet dog. It can play music, send messages to people dotted around the house, and patrol the property for anyone who is or is not supposed to be there. But its main selling point, Mayfield executives told Quartz when we met Kuri at this year’s Consumer Electronics Show in Las Vegas, was the “spark of life” it provides.

“We wanted to make a robot that was communicating how intelligent it was through its actions and through its design—that was a big part of why we don’t have the robot speaking,” Sarah Osentoski, Mayfield’s COO, told Quartz in a later conversation. “Because when something’s talking to you, you start to assume a certain level of intelligence.”

“We wanted to make sure that people understood that the robot is a medium-smart dog, not as smart as a three-year-old,” said Osentoski. “And so people can talk to Kuri, but Kuri just speaks robot back because she’s got her own little thing going on, and so that’s kind of a big part of how we started designing the robot and we then started thinking about the robot’s personality.”

Osentoski and her team wanted to create a personality for the robot that fit its size and its capabilities.

They landed on three character traits that they thought conveyed this: humility, earnestness, and curiosity.

To convey those traits as movement and sound, Mayfield turned to a team of animators and sound engineers who had created characters like this from scratch for movies. The team included Doug Dooley, a 13-year veteran of Pixar, and Connor Moore, who runs CMoore Sound, a San Francisco-based sound design studio.

Dooley, who worked as an animator on films including Monsters, Inc., Finding Nemo, Up, and The Incredibles, said that over his years of animating he has learned tricks for making creations feel lifelike.

There’s a lot of exaggeration, for example, to show the viewer why a character (or, in Kuri’s case, a robot) is doing something.

“The most important thing is showing the character’s thoughts if we’re going to really make it appealing,” said Dooley. People, he said, have their brains in their head, so it’s natural to represent thoughts with head movements in animations. “Robots, you know, their brains are in their butt, basically,” he said. Mayfield added in head-bobbing and eye-blinking to signal that Kuri was completing a task or curious about something in front of it, to help people understand why it’s moving around. “If the robot does something and they didn’t expect it, that’s uncomfortable and not very appealing,” he said. “The head motion is extremely important. You really need to have that anticipation before the action or it gets creepy.”

Another important lesson drawn from screen animation: Less can often be more. Dooley and the team have tried to strip Kuri down to the bare essentials, to minimize the risk that its emotions will be misconstrued. It’s part of the reason why Kuri doesn’t have a mouth. “If you have any kind of cue in there that doesn’t animate correctly and isn’t correct, people will misread it.” He compared this to the different ways that people smile: If you smile but not with your eyes, people will perceive it as insincere; if you smile very broadly, it can appear sarcastic. “Everything that’s there has to play perfectly together or people are going to start reading the wrong body cues.”

Designing a robot to act alive, but not be alive, is a daunting task. It’s something that Oren Jacob, the former chief technical director and a 20-year veteran of Pixar, is striving to solve in the bots that his startup, PullString, builds. His team develops bots that provide their clients’ audiences with interactive experiences. They might build on existing brands—for example, they’ve made bots of Doctor Strange, the characters from the video game Call of Duty, and the voice inside Hello Barbie, a doll released in 2015 that lets children converse with their Barbies—or create entirely new ones. It’s a lot easier to develop a personality for a bot when its character has a pre-existing history—Barbie has been around since the 1950s. But taking that character and turning it into something that people actually want to interact with involves making every aspect of that interaction as authentic as possible.

With bots, which don’t have physical bodies, you’re primarily designing and constructing a personality through language. Indeed, there are some skills from film and books, like dialogue writing, that transfer directly to building bots, said Jacob. But instead of creating a linear narrative around a character, writers are essentially trying to build out its life story—every moment that brought the character to this conversation. Getting there involves character exploration, just as it would in any fiction writing. “We consider specifics of what has happened to that character before they arrive at the experience, what’s their motivation, what’s their inner drive, what desire do they have they are unable to achieve in their own circumstance, as a way to position them in the conversation,” said Jacob.

“What comes out of that might be a character who is belligerent or a sycophant, or a goofball or a wisecracker, you know, lots of things can happen there, but it’s defended from a place that’s describing that character’s circumstance as they come to the experience that we’re designing.”

The bot that emerges from this process has to match both the shared knowledge of the human users, and more generally how we expect to be able to interact with someone. Jacob offers the example of the Call of Duty bot his team built for Facebook Messenger, based on Lieutenant Reyes, the game’s main character. “Reyes is a military operative 500 years in the future who’s fighting aliens with these particular seven weapons, and these particular five vehicles, in this particular battlefield in Call of Duty in the fiction of that world, and he needs to speak with that authenticity of what that character’s experience is.”

You can’t have him not know exactly what’s going on in this future war, since anyone who plays the game will already know something. It requires writing from inside the character’s world, so the bot can address any possible question.

“It’s kind of like if you were to call Darth Vader on the phone, you don’t want to speak to the abstract generalization of evil villains who have funny breathing patterns,” said Jacob, offering another example. “You’re talking to Vader about the Force, and Obi‑Wan and Yoda, and the Death Star that blew up.”

A completely rounded personality matters just as much whether you’re building an Alexa skill for a bank or a Messenger bot for a video game, Jacob has found. And this is something the tech giants are still grappling with as they try to persuade us to accept their bots into our lives.

“One of the great tensions of this field, which I think will probably drive the technology giants—Apple, Google, Facebook, Microsoft, and Amazon—a little loopy for a while is that the more specific a character you express, the higher the engagement numbers, because you’re taking a position in the conversation,” said Jacob. Apple has tried to give its voice assistant, Siri, some personality with pithy jokes and wisecracks, but they’re delivered so robotically and repeated so often that they feel insincere. Clearly, Apple et al. have a lot more work to do.

“The downside is, by taking a position in the conversation, you will alienate some percentage of your audience,” said Jacob.

With robots, physical or digital, it’s easy to fall into what is known as the “uncanny valley”—where a bot is rather lifelike, but eerily off in some jarring way. Often, it’s the way the robot moves that will freak humans out. For example, Bina48, a robot created to look like Bina Rothblatt, the wife of inventor and multimillionaire Martine Rothblatt, looks remarkably like her—until she starts moving.


The awkwardness of the robot’s motion and its dead eyes staring back at you show that it’s clearly not alive. It’s a problem that the robotics industry is still trying to solve: how human does a robot need to look for us to like it? For some, the solution is not at all.

Appearance

Although we might all like something like Rosie from The Jetsons cleaning our homes, in reality, the closest thing we have is a giant hockey puck called the Roomba. iRobot, the company behind the robot vacuum cleaners, told Quartz that some of the most common names people give their Roombas are Rosie, Wall-E, and R2-D2.

“When I was a kid thinking about how cool it would be to have a robot in the house, there was no way to avoid thinking like R2-D2,” said Patrick Evans, Mayfield’s senior visual designer, who oversaw the design of Kuri. R2-D2 was one of the first affable robots that many people were introduced to on screen.

“I mean, honestly, nobody wants C-3PO in their home,” Evans added, “but, undeniably, these sorts of pop culture references certainly informed a lot of the way we thought about [design], as well as just inspiring us to work in this space in the first place.”

Many of the home robots that companies are starting to bring to market, as well as larger commercial or research robots, may have some anthropomorphic characteristics, or might be shaped like humans, but few look exactly like us.

RoboThespian, a humanoid robot made by a British company, blushes during the opening ceremony of the Cebit technology fair in Hanover (Reuters/Wolfgang Rattay)

Until we can capture every way a face moves, every way a muscle twitches or a piece of hair might fall on a face, it’s going to be difficult to convince users that they’re not interacting with something sinister.


But robot builders are still drawn to the human form—maybe it’s because that’s what we most relate to. Mark Sagar, the founder and CEO of Soul Machines, has been working on what he calls a “biologically-based neuro-system simulation”—essentially trying to make digital avatars that look and act as a human face does. Sagar’s background is in bioengineering (rather than computer science), having spent 15 years simulating human anatomy for things like surgical simulations and physiology models used by doctors and medical students. This led him to work on human face simulations and visual effects that were lifelike enough to convince filmmakers to use Sagar’s work in big-budget films such as Avatar and 2005’s King Kong. Now, he’s applying those skills to creating lifelike faces that can be used by businesses to answer customers’ queries.

“It’s about creating this illusion of life and the illusion of intellectual connection,” said Sagar.

Whether that’s just a quick conversation over email, or a face-to-face interaction, we expect a certain amount of responsiveness and feedback from whoever we’re talking to. When someone is in front of you, there’s a gamut of things you’re subconsciously checking: “You’re getting the body language, the neural facial expressions, the look in their eyes, where their eyes are looking,” said Sagar. “What we’re trying to do is simulate all of those elements—which is what makes it so difficult, because it all has to be consistent.”

Soul Machines’ approach has been to simulate all the layers of a human face. Instead of a simple digital face on a screen, its avatars have digital skulls, muscles, tendons and skin, theoretically pulling and moving as any human’s might. Sagar says his team has tried to mimic all of the things a human would do as they speak: “For example if you’re speaking, then you’re breathing out, and then you’re breathing and you’re not speaking.” All of this would affect how the avatar looks at any given second.

“If you think about the face or any particular type of human behavior as a symphony of a whole lot of different instruments coming together to create a final result which actually has an emotional impact but also has a communicative impact,” said Sagar, “That’s what we’re doing.”

One of Soul Machines’ first projects was BabyX, a computer simulation of a baby that can learn and respond to the world much as a regular baby might. It’s unnerving to watch, because your brain knows that it’s not real—it’s a computer program on a screen—but when it reacts to everything in front of it as any small child might, it’s arresting. BabyX sits in a car seat, with wispy blond hair parted gently to the side and slightly ruddy, chubby cheeks; looking at it feels more like chatting with a friend’s child on Skype than interacting with a computer simulation. BabyX’s eyes light up when you show it something it likes, and it’ll cry when you take it away; its eyes will follow your body as you move, and its cheeks and mouth will shift from a big smile to a pensive furrow when you show it something it needs to concentrate on to figure out what it is.

Its attention span is pretty short; you can see the reflection of the computer screen it’s looking at in its big blue eyes. (Soul Machines)

BabyX shows how the human brain can be tricked into empathizing with non-living entities, and Sagar believes this will be key to making future machines convincing enough for us to actually use them.

“BabyX works because everything is about the present moment,” said Sagar. The program is constantly responding and acting as a real child might. “A lot of times when I do demos of BabyX, the audience gets really drawn into the emotion of this baby. She starts crying and the audience gets to see why.”

Sagar compares interacting with a capable AI to a jazz concert. “You’ve got the musicians playing off each other, and by simulating a lot of those reactive behaviors you start getting a direct feedback on an action.” We are all essentially always riffing on the people we interact with in our daily lives, and their responses to our actions are what draw us in.

But riffs can be the hardest things to recreate. Our involuntary movements, the mannerisms we pick up from people over the course of our lives, culturally learned actions—they all make us us, and not someone else. Which is one reason why Sagar’s team started with a baby—there are fewer of those tics to recreate.

Soul Machines is working on more complex systems now, but Sagar expects this work to be a long-term project. They’re even trying to provide some simulacrum of the same biochemicals that affect our own emotions: “We’re trying to have the avatar have a physiological emotional system so it does have things like virtual cortisol if it gets stressed. It’s got virtual dopamine. It’s got serotonin.” But recreating these chemicals’ effects isn’t simple—how stressed would a robot actually get? And even if Sagar’s team can figure out why we act the way we do at any given moment, there are still so many other variables affecting us: “In a real person, their body is flooded with these different drivers and behavior, everything’s linking to memories. Everything is the whole sum of that person.”
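The kind of system Sagar describes can be pictured as a small feedback loop: virtual hormone levels spike in response to events, decay back toward a baseline, and bias the expression the avatar displays. The toy Python sketch below illustrates only that general idea; the event names, thresholds, and decay rate are invented for this example, and Soul Machines’ actual model is far more elaborate.

# A toy sketch, not Soul Machines' actual model: "virtual hormones" that spike
# in response to events, decay back toward baseline over time, and bias the
# expression the avatar displays. Names, thresholds, and rates are invented.
from dataclasses import dataclass

@dataclass
class VirtualPhysiology:
    cortisol: float = 0.1   # rises under stress (arbitrary 0..1 units)
    dopamine: float = 0.1   # rises on reward or recognition
    decay: float = 0.95     # fraction of each level retained per time step

    def on_event(self, event: str) -> None:
        """Nudge hormone levels in response to a named (hypothetical) event."""
        if event == "loud_noise":
            self.cortisol = min(1.0, self.cortisol + 0.5)
        elif event == "familiar_face":
            self.dopamine = min(1.0, self.dopamine + 0.5)

    def step(self) -> None:
        """Let levels relax back toward baseline on each tick."""
        self.cortisol *= self.decay
        self.dopamine *= self.decay

    def mood(self) -> str:
        """Collapse the levels into a coarse expression for a face rig to display."""
        if self.cortisol > 0.5:
            return "stressed"
        if self.dopamine > 0.5:
            return "delighted"
        return "neutral"

# The avatar recognizes a familiar face, then things quiet down again.
body = VirtualPhysiology()
body.on_event("familiar_face")
print(body.mood())        # "delighted" right after the recognition
for _ in range(20):
    body.step()           # the virtual dopamine decays with time
print(body.mood())        # back to "neutral"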

Sagar’s goal is to create avatars that act as we do, as part of the intellectual quest to answer what truly makes us human. But until we can traverse the uncanny valley, it’s easy to see why companies are choosing to make their robots cute instead of perfect—it’s just easier for us to accept when they fail.

Voice

Back at Mayfield, Moore said that his team initially listened to the sounds made by famous movie robots, including R2-D2, Wall-E, and BB-8, as inspiration for Kuri’s language. “After that, we jumped into what I call a kind of tunnel exploration phase,” said Moore. “This is where I’m experimenting with tons of different instruments, objects, recording things, pulling them into synthesis, to customize these sound families that are really meant to drive home key personality traits for the robot.”

Animators know that an appealing voice is more about pitch and intonation than what words are said.

Through his fugue-state-like sound exploration, Moore identified about seven groups of sounds. He presented them to the design team at Mayfield, which chose a set created by sampling and synthesizing an African thumb piano. Moore then modeled the sounds on certain characteristics of human communication—a rising tone often signals a positive response, and a falling tone a negative one, for example.

Moore used effects like a filter envelope (similar to the wah-wah sound you might recognize from a Jimi Hendrix song) to replicate the way “the mouth opens and closes when speaking,” and attack envelopes (shaping the start of a sound) to imitate the way we pronounce the letters T, P, and S.

“Using pitch really helps with intonation, and I think pitch is kind of the cornerstone,” said Moore. “It’s an easy thing to point to for design, but I think it really helps shape the personality of Kuri, and you can get a positive reaction, a negative reaction, or, kind of, there are times where curiosity comes into play.”

“These are the different elements that we’ve been playing with, and these allow Kuri to be emotive and for her personality to really shine. But also, and very importantly, it allows her to be informative and functional to the user, so they can understand each other in the home,” added Moore.
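To make the mapping Moore describes concrete, here is a minimal Python sketch (not Mayfield’s or CMoore Sound’s actual code) that synthesizes two short non-verbal cues: a pitch that glides upward for a “positive” response and downward for a “negative” one, each shaped by an amplitude envelope so it starts and ends gently. The file names and every parameter are invented for the example.

# A minimal sketch of the pitch-contour idea: rising tone reads as positive,
# falling tone as negative. Uses only numpy and the standard-library wave module.
import wave
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def chirp(start_hz, end_hz, seconds=0.4):
    """Synthesize a short tone whose pitch glides from start_hz to end_hz."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    freq = np.linspace(start_hz, end_hz, t.size)        # the pitch contour
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE   # integrate frequency over time
    envelope = np.hanning(t.size)                       # soft attack and decay, no clicks
    return (np.sin(phase) * envelope * 0.5).astype(np.float32)

def write_wav(path, samples):
    """Write mono 16-bit PCM audio to a WAV file."""
    pcm = (samples * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm.tobytes())

# Rising contour for a "happy/affirmative" cue; falling contour for a "negative" one.
write_wav("robot_positive.wav", chirp(440, 880))
write_wav("robot_negative.wav", chirp(880, 440))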

For bots that are being built on existing characters, the task of finding a voice is a little easier. We all know what Darth Vader sounds like, for example, and although James Earl Jones may not always be available to lend his dulcet baritone to the character, we know what a bot that looks like the Sith Lord should sound like. Yet even when designers are creating characters from scratch, there are still some expectations. A bot that appears to be a middle-aged man, for example, probably shouldn’t have a high-pitched, squeaky voice, and a bank teller shouldn’t talk about farming sheep.

“Do you use emoji? Do you use leetspeak? Do you speak with an accent if you’re in Alexa? How fast do you talk? I probably talk too fast. Maybe you talk slower. Those are the design considerations that are premised in the field of computer conversation,” said Jacob. “And because of that, language is inextricably tied to the person who said it to you.”

Tomorrow’s robots

The field of commercialized AI and robotics products is still very much in its infancy. Six years ago, if you’d told someone that they could talk to their cell phone and it would find information for them, just like Captain Kirk did on Star Trek, chances are they would not have believed you. Siri, Alexa, Google, and Cortana can all do this with relative ease now. Amazon is trying to get its users to hold conversations with their Echoes, to help it learn how to actually chat with humans. In the not-too-distant future, chatting with a bot will be as easy, and likely as normal, as calling up a friend. Assuming we get over our fears that AI and robots will enslave us, there are real, useful ways that robots and artificial intelligence will make a difference in our lives. Robots will help the elderly, provide companionship, give mobility to those who have previously not had it, a voice to the mute, and ears to the deaf. AI will strip away the noise from signals, providing us with insights and information we wouldn’t have found on our own, however hard we looked.

As with all new technologies, adoption will be slow. There will always be that group of people that wants the latest gadgets or toys as soon as possible, and they will expect rough-around-the-edges products. But most of us will not.

We want shiny, well-designed, well-marketed products that just work. We want to feel welcomed and comforted by what we own, not frustrated. Tomorrow’s robots will feel familiar. You won’t be able to put your finger on it necessarily, but when you take the wrapping off your robot and set it up for the first time, it will feel like welcoming an old friend into your home. And the robots after that will be built on that same literary and visual history, evolving and refining with each new generation.

Recommended viewing on human-robot interactions

Metropolis, 1927

Forbidden Planet, 1956

The Twilight Zone, 1959

2001: A Space Odyssey, 1968

The Stepford Wives, 1975

Star Wars: A New Hope, 1977

Blade Runner, 1982

Aliens, 1986

Terminator 2: Judgment Day, 1991

Star Trek: Generations, 1994

I, Robot, 2004

The Hitchhiker’s Guide to the Galaxy, 2005

Robot & Frank, 2012

Her, 2013

Ex Machina, 2014

Interstellar, 2014

Chappie, 2015

Humans, 2015

Westworld, 2016
