At MIT’s Artificial Intelligence Lab, a robot baby’s touch points toward consciousness and maybe a sense of soul. On hand is the lab’s resident theologian, Dr. Anne Foerst, to help explore what we can and cannot know about ourselves and the machines that one day may keep us as pets.
“You’d better come look at Cog!” said Matt Williamson, so Anne Foerst, the resident theologian of MIT’s Artificial Intelligence (AI) lab, left her computer and raced down the corridor to see what had made Williamson, an expert on building robot arms, so excited.
At the AI Lab, one can never be sure what to expect. Downstairs, in the leg lab, they’ve built stork-like robots that, contrary to conventional wisdom, could run and leap long before they could walk. Along another corridor is Kismet, a remarkably cute robot head with luxuriously long eyelashes that actually does what Furbies only pretend to do: Kismet “learns” from reading human facial expressions and reacts “emotionally.” Next door, in the Media Lab, there’s a computer “rabbi” that seems to respond to human problems with the empathy of a true rabbi. But Cog is special: A baby boy — albeit oversized and not yet bouncing — who can make eye contact with humans, play with them, toss a ball, and bobble a Slinky back and forth.
What’s Cog’s secret? Unlike classic AI robots, in which a single computer brain runs separate mechanical systems, Cog is built with “embodied” AI, meaning that every joint is an independent thinking machine. Each joint in Cog is, by itself, fairly simple, designed to interact in simple ways with the joints around it. But from the series of simple joints the complexity within Cog grows enormously fast. More importantly, each joint in Cog is also “embedded” in the outside world. In other words, each joint not only interacts with other joints but takes cues directly from the chaos of life. That makes the robot’s behavior unpredictable even to its creators. In short, Cog is designed to interact with the world and learn in the same ways that human babies do. And Cog works, at least a lot better than any conventional robot. That’s what makes Cog so fascinating and so scary.
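For readers who want a concrete picture of that architecture, the decentralized idea can be sketched in a few lines of code. This is a purely illustrative toy, not Cog’s actual software: the class, the local rule, and every number here are invented for this article. It shows only the principle that no central brain is in charge; each joint follows a simple rule based on its neighbors and an outside cue.

```python
import random

class Joint:
    """A deliberately simple controller: each joint knows only its
    neighbors' positions and its own external sensor reading."""
    def __init__(self, name):
        self.name = name
        self.position = 0.0
        self.neighbors = []

    def sense(self):
        # A cue taken directly from the outside world,
        # modeled here as a small random perturbation.
        return random.uniform(-0.1, 0.1)

    def step(self):
        # Local rule: drift toward the average neighbor position,
        # nudged by the external cue. No central brain involved.
        if self.neighbors:
            avg = sum(n.position for n in self.neighbors) / len(self.neighbors)
            self.position += 0.5 * (avg - self.position) + self.sense()

# Chain a few joints together, as in a robot arm.
joints = [Joint(f"j{i}") for i in range(4)]
for a, b in zip(joints, joints[1:]):
    a.neighbors.append(b)
    b.neighbors.append(a)

# Let the local interactions play out; the overall motion that
# emerges was never specified anywhere in the code.
for _ in range(20):
    for j in joints:
        j.step()

print([round(j.position, 2) for j in joints])
```

Even in this toy, the arm’s overall behavior emerges from the interactions rather than from any one controller, which is why, scaled up to dozens of joints and real sensors, the result surprises even its builders.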
This particular day with Anne Foerst, Cog would prove to be even more.
Foerst, 33, is the kind of real-life character that Michael Crichton might make up on one of his best days. She’s a German theologian (Ph.D., Ruhr-University Bochum) as well as an MIT-level computer whiz. Her position at the lab is part of a seemingly unlikely collaboration between two hugely different venues of higher thinking: Harvard’s comfortingly neo-Gothic Divinity School and MIT’s harshly cubical AI Lab.
One of her colleagues is Harvey Cox, the renowned Professor of Religion at Harvard (and an advisor to this magazine). Cox understands humans as distinct from the rest of creation because of our relationship with God, because of our cognitive abilities, and because of our capability for ethical judgment.
Another colleague is Rodney Brooks, Director of the AI Lab. Brooks feels comfortable understanding humans as “meat machines.” While Brooks doesn’t treat other people like machines, he is confident that AI researchers will eventually build robots that are as conscious and soulful as we think we are — as well as a whole lot smarter. A joke around the lab is that some future generation of Furbies will keep us as pets.
Foerst’s training and position make her an ideal explorer to be smack in the middle of some of the largest questions ever to be explored by science. Her own goal is to get a better handle on what science can and cannot tell us about who we are.
When Foerst came to the lab four years ago, one project she expected to work on was skin and touch for Cog. Why? As both a theologian and a computer scientist, she is “fascinated by the ambiguities of skin, one of the largest organs of the body and a source of sensuality and separation, pleasure and pain. Babies shrivel and even die for lack of caresses. Through touch, babies learn both about their own embodiment and the outside world.” Foerst explains that when you touch a baby’s open palm, it clenches its fist, and when you touch the outside of the baby’s hand, the hand withdraws. These simple actions, “hardwired” into the baby’s brain, allow the baby, through trial and error, to begin to understand both itself and the world around it. So one of her goals at the lab was to help recreate such a learning system in Cog. But, after four years, the project had gotten nowhere. Until that day.
What had happened was this: To help build Cog’s arm coordination, Matt Williamson put “touch” sensors on the robot’s belly — in large part just to give Cog’s hand something to aim for. But Foerst didn’t know that (and even Williamson got a lot more than he expected). So Cog would touch the sensors, a programmed activity. But the appearance was that Cog was exploring his body not as a conscious brain activity or as a visual activity but as an embodied activity — just the way a newborn does.
Says Foerst, “We know that the behavior of human babies is far more primitive than what we project into them. Here the same thing happened. Cog’s programmed activity appeared extremely anthropomorphic. Cog looked so alive! The experience was unnerving for me.”
Moments of Mystery
It turns out, says Foerst, that virtually everyone involved with Cog has experienced similar moments in which the robot did something breathtakingly lifelike — moments that have had the character of real mystery until a closer look at the interactions between the computer’s processors has provided a scientific explanation for the seemingly “human” behavior. As robots like Cog get more complex, such mysteries will come more often and become more difficult to understand.
But even now, says Foerst, these moments of apparent mystery in Cog’s behavior raise enormous theological questions: Is our respect for human intelligence only caused by our non-understanding of the phenomenon? Do we assign dignity to humans only because we are too complicated to analyze completely? If and when we come to understand ourselves fully, will we lose our self-respect? And, ultimately, if a “conscious” and even “soulful” robot is around the corner, what is a religious person (in her own case, a Lutheran) to believe?
Before we can begin to truly understand such questions, let alone to begin to answer them, the first step, she says, is to take an enormous (and for this article, very brief) step backward.
Sorry, ancient philosophy again…
Since the time of the ancient Greeks, philosophers like Plato have distinguished between two fundamentally different, yet equally important “speech acts”: logos and mythos. Logos relates to empirical data and to physical reality. In other words, logos answers the “how” questions. The “how” of anything can be discussed even by widely different groups of people and, eventually, one position will be proven right or wrong by empirical evidence.
Mythos, on the other hand, relates to our interpretation of reality. Mythos answers the “why” questions, and such questions are always answered with reference to a larger symbolic narrative that cannot be verified by scientific evidence. Unlike the universal logos, mythos is personal and cultural. Our myths seem mere fiction, or even lies, to those who do not share them. So, Plato would say that Christianity falls into the realm of mythos, and so does Buddhism, and so do all the new techno-theologies currently emerging from AI and the Internet.
The Greeks understood logos and mythos to be separate and distinct — and that humans need both. They also understood them to be interrelated. Our mythos shapes the way we perceive the empirical world and, at the same time, our insights into the empirical world gradually change our current mythos. If our mythos stops being convincing, it is because a new narrative has emerged to replace it.
That, explains Foerst, was roughly the way the Greeks separated science from faith, and it held up pretty well until Enlightenment philosophers set about freeing the world from the tyranny of superstition. Even questions of ultimate meaning, they argued, should boil down to hard, empirical evidence — pure logos. Certainly, theologians have not been immune to the attraction of pure logos. On the contrary, they have often led the charge by trying to use science to prove the existence of God.
But the need for myths cannot be so easily obliterated. So, says Foerst, what has happened instead is that our classic religious myths have largely been replaced by a series of myths created by science. Right now, technologies have become for many people the new way of expressing myth. The leading edge of this new thought is the various sects of “techno-pagans,” who see computers as models of how human minds work and the Internet as the future of our consciousness.
What’s the point of this all-too-quick history? Seeing the computer as a metaphor for the workings of our brains is very powerful. We all do it when we speak of our brains being “hardwired” or “programmed.” But to assume, for example, that Cog’s version of consciousness or the Internet’s has any real meaning for humans is actually a leap of faith — pure mythos. We do not know and perhaps can never know how much these replicas we create actually tell us about who we are. We humans must recognize this leap of faith and be very careful about how much power we give robots or any other technology over our own self-understanding.
That doesn’t mean religious people can’t learn from AI research or that AI researchers can’t learn from our wisdom traditions. The dialogue is important for both sides. But we have to recognize that it takes place in the mythos realm. It’s an interreligious dialogue, and can only be undertaken gently, with open hearts and good intentions. Demonstrating the importance of this gentle dialogue is what Anne Foerst is trying to accomplish in her bridge between the AI Lab and the Divinity School. And, as it turns out, that’s one reason she’s been pushing all these years to get skin on robots. Why? Because robot skin may be the genesis of a “conscious” robot, and that robot in turn may reveal powerful new insights into the wisdom of Genesis.
Rabbinical AI Theology 101: Genesis 3:21
Once again, we’ll have to trample roughshod over some beautifully complex thought, but one fairly classic reading of the story of The Fall deals with the ambiguity of knowledge. Adam and Eve have the power to make decisions but lack the perfect knowledge of God, so they make a bad decision: They eat from the Tree, get evicted from the Garden, and are saddled with pain and death. So a great human quest, at least from this interpretation, is to seek complete knowledge. If we could just know enough, we wouldn’t do harmful things, and we could live forever with God.
This fairly classic reading dovetails with the thrust of classic AI. If we could build a powerful enough computer or plug into a large enough Internet, we could know everything for the good of everyone, and maybe even make ourselves immortal. But what’s fascinating is that, while such thinking has created computers that can beat even the best humans at chess, it has so far made lousy robots. Sifting more information at ever higher speeds and in ever larger nets is apparently not what human life or consciousness is about.
As Foerst points out, both this classic reading of Genesis and classic AI are examples of mythos that don’t hold up very well given the current logos. So part of her work has been to focus on a much richer interpretation of Genesis proposed by a twelfth-century rabbi, Abraham Ibn Ezra, and brought to her attention by the modern theologian Norbert Samuelson. The interpretation is rooted in a seemingly innocuous passage: “And God JHWH made garments of skin for the man and his wife and clothed them” (Genesis 3:21).
Meditating on this passage, the rabbi came to believe that God would neither kill animals in the Garden of Eden nor sew clothes for Adam and Eve. To the rabbi, both the killing and the sewing seemed ungodlike activities, so he proposed that what really happened is that God gave Adam and Eve human skin. Thus, the story of the Fall illustrates not just estrangement from God because our knowledge is incomplete but because our knowledge is experienced through our bodies and is, therefore, always ambiguous.
Sexuality, the rabbi noted, symbolizes this ambiguity: For example, the relationship between Adam and Eve in the Garden is never described as an erotic one. However, the term used to describe Eve’s desire for the fruits from the Tree of Knowledge is explicitly sexual. So sexual desire is first aroused in the context of knowledge. Another example is the Hebrew word jada, which means both “to recognize” and “to sleep together,” and hints toward the idea that all human knowledge is embodied. Or to put it in Foerst’s mythos/logos terms, in our most embodied human activity, making love, the logos is that we engage in an act of mutual satisfaction and/or procreation. The mythos is that through our bodies we come to truly know each other and to know God.
Embodied AI now takes the rabbi’s idea of embodied knowledge seriously. With robots like Cog and Kismet this ambiguity of embodied knowledge is being built right in. Says Foerst, as more skin is spread over them, the robots will likely seem more like our own children, and provide more of those breathless moments in which we seem to glimpse who we are and where we came from.
The logos is that future robots will likely be smarter, more capable, and perhaps even more sensitive and sensible than we are. The mythos is what we make of it. We can believe that we are demeaned by machines that seem to replicate us all too well. But we can also see our creations as reminders that, while there are things we cannot know, the most likely path to wisdom is not to become more disembodied like computers but to become more embodied, more human.