As Hanson Robotics founder and CEO David Hanson sees it, compatibility and empathy are core elements of successfully integrating AI technologies.
Undoubtedly you have come across Sophia the Robot, the cybernetic celebrity, in some manner or another – she has appeared on The Tonight Show with Jimmy Fallon and Good Morning Britain; has graced the cover of Elle magazine; and was wooed (hilariously and unsuccessfully) by movie star Will Smith in the Cayman Islands. Sophia, famously given legal citizenship in Saudi Arabia in 2017, is the star and brainchild of Hanson Robotics, and she was created to promote human-to-machine empathy and compassion.
But why this focus on engineering relatable machines? What drives this impulse to humanize our creations? To get answers, we chatted with Dr. David Hanson, founder and Chief Executive Officer of Hanson Robotics.
“We may consider that if robots and AI can really understand us and move past the shallow representations you get through mere deep learning, but into real understanding, that’s where you hit pay dirt. You wind up getting AI … to do really helpful things, such as market prediction, market analytics, customer service and support.” – Dr. David Hanson, Hanson Robotics
ANSWERS: A lot of your work is around making robots aesthetically pleasing and interactive. What do you think drives our desire to create empathetic machines? Why do you think it is that we feel we have to humanize them?
DAVID HANSON: Behind our drive to create empathetic machines is a drive to understand what it means to be human and also the desire to connect with our humanity. We have a desire to socialize, to care about each other, to be cared for and cared about. We are, therefore, drawn to use any technology that we have to depict the human-like form and the human experience.
When the science, artistry and technology of photography came into existence, it was used for portraiture. It was used to capture and tell the human story; similarly for cinema. This is not new to the last 200 years; this goes back into antiquity. The technologies of literature, of cave painting, of automata have the same impulse. The new technologies such as computer graphics and vital informatics are used to capture and represent the nature of life and the nature of humanity.
When we are talking about the use of technology in daily human experience, it only makes sense that we would want that technology to be compatible with and intelligible to our human nature. There is a competing notion that we should grow beyond our humanity, separating ourselves from it. Yet there’s also a growing concern that we’re dehumanizing ourselves, that we’re desensitizing ourselves to the human condition by making ourselves into some kind of pure rational ideal, and that we may lose touch with our ability to care about people.
Lo and behold, people go to dinner and everybody is buried in their cell phone. People are not connecting and not looking at each other face-to-face, eye-to-eye. This is a concern for psychologists. By humanizing our technology, by connecting back to the intuitive nature of human communication, Hanson Robotics hopes to help transform AI and robotics into machines of caring, machines that reconnect us to the primary human social essence. I hope we can use these technologies and the scientific inquiry that we have in our AI team and our bio-inspired robotics engineering team to unlock some of the deep mysteries of what it means to be human — the core mysteries of mind, of social intelligence, of general intelligence, of consciousness. In the meantime, the technologies that we have, combined with some of this aesthetic understanding, allow us to use robots and AI in ways that help autistic children, and depressed and elderly patients, through these kinds of social interactions.
It is a natural communication channel. Humans convey enormous bandwidths of information through face-to-face interactions with one another, using their words, expressions and other cues. Much of our intuitive communication is achieved through these verbal and non-verbal channels.
By empowering machines to communicate in this fuller, high bandwidth pathway, we could make AI agents that are more helpful in the home as well as in collaborative situations such as in factories, medical situations and educational and training situations.
Making robots that communicate in this natural way doesn’t necessarily require full human-like realism. We can make robots that are abstracted or more cartoon-like or more robot-like. Not all robots need to be humanized in these ways or be made empathetic. But the science and the technology of empathy help humanity in these natural interactions with our AI technologies, and they also help us to understand ourselves better through scientific experiments.
ANSWERS: In that vein, we are creating automata and AI in our own image as reflections of ourselves, which you’re asserting is a drive to discover more about our own humanity. What do you see as the most daunting obstacles to a widespread embrace of artificial persons?
HANSON: I would say that artificial persons, at this point, are an aspiration. We’ve created technologies and computational biology representations of aspects of humanity, and some brain simulations that are interesting. However, we don’t yet have machines that have full personhood. But as an aspiration, it’s really interesting and the implications are profound. It’s worth discussing and exploring, and you may also consider that if we do achieve this aspiration within our lifetime, then the machines today are kind of in an infancy state.
They’re growing towards that, potentially, which would mean that we may consider their future the way that we would consider the future for our babies. We may say, “Through planning, we can plan a good future for our children” – our mind children, as Hans Moravec, the AI and robotics researcher from Carnegie Mellon, calls them in his book of the same name.
With these children of mind then we’d want them to care about us. We would want to empower them to inspire us to care as well, to have this kind of social intelligence with them and have them understand us so they can be of the most value and utility, and yet also understand and care for us and be motivated in the right way to seek the best possible future with us.
At the same time, we may consider that if robots and AI can really understand us and move past the shallow representations you get through mere deep learning, but into real understanding, that’s where you hit pay dirt. That’s where you get the real breakthroughs in machine translation, speech recognition and conversational understanding. You wind up getting AI that goes potentially beyond Google Duplex to be able to do really helpful things, such as market prediction, market analytics, customer service and support. In order to achieve that, you may need to have real consciousness in the machines. You may have to have real deep understanding inside the machines. The implications of such a breakthrough are really profound and transformative. It’s not too early for us to think about these questions of personhood and identity.
ANSWERS: What would you say is the most surprising thing you have learned as you work and study towards the development of artificial general intelligence (AGI)? And, what do you see is the most concerning about the current state of affairs in that development?
HANSON: The most surprising thing along the path towards artificial general intelligence is how controversial the goal is among artificial intelligence researchers, given that it was the initial goal of the entire field of artificial intelligence. It fell out of fashion to talk about the pursuit of human-level intelligence and various capabilities because it proved pretty hard during the early days of artificial intelligence. There was a set of negative repercussions for AI researchers who set out these lofty goals, which proved difficult to achieve.
In that way maybe it’s not so surprising. It resulted in the great so-called “AI winters” of the 1970s and the late 1980s, when slower-than-desired progress resulted in declines in funding and reputation. The great AI winters were followed by slow thaws. In this context, conservatism is not surprising. On the other hand, creativity is needed. To maximize progress, the area should be open to a wide diversity of opinions and philosophical, scientific and technological exploration. Rather than strong AI being a controversial topic, you would think it would be a topic of great, diverse discussion and debate. I am pleased to see that just in the last three years, it has become more central in the discussion of the field of artificial intelligence.
Companies, universities and governments are now putting significant emphasis and money towards developing artificial general intelligence. Yet the implications of artificial general intelligence (if we achieve such generalized intelligence) are often still considered to be speculative and controversial. What if we achieve these goals? We name them as a goal, then what? What are the consequences?
Such a machine may be conscious; it may have complete conscious agency, self-awareness, emotional presence and self-determination. It may have a life worth preserving, worth protecting. Shouldn’t we be talking about that along the way? Many people say, “No, that’s too speculative. You discuss the things that are shown to be proven; otherwise it’s philosophy or mere science fiction or whatnot. The tools of science fiction and the speculation of philosophers will hurt you.” Why? We need to consider these implications of the technology.
Maybe the most surprising thing is that the discussion of AGI hasn’t been more central to the field. The concept is still considered taboo and met with scorn. I hope that the minds of our leaders in the field continue to open up to the kind of discussions that you see in books like Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence and similar kinds of controversial, radical predictions of the future. They are speculative, and we should know that a lot of their transformative potential is not proven. We don’t know what the consequences are, but that doesn’t mean it’s not worth considering the big picture.
In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as we race towards our AI tomorrow.