In 1997, the IBM supercomputer Deep Blue defeated world champion Garry Kasparov in a famous chess match. In the past year, computer programs named MoGo and Many Faces of Go have bested human champions in Go—an ancient Chinese board game that many people consider the most complex game people play. Although silicon-based opponents will continue to pummel humans in chess and Go, Robert Hearn says there are some games people play that no computer can solve.
Hearn is a game and puzzle fanatic. Besides being a ranked Go player, he studied the theoretical basis of how computers play games as a graduate student at the Massachusetts Institute of Technology. Hearn and his adviser, Erik Demaine, developed a mathematical model of puzzles and games to understand how hard a computer must crunch numbers to find the best game-playing strategy. With this novel model, Hearn proved that it is impossible to write a program that can crack certain team-based games, such as Rengo Kriegspiel—a Go variant where blindfolded teams play without knowing the other players’ moves.
Hearn has spent 20 years working in different spheres of the computational world. In the early 1990s, he co-developed the popular office software ClarisWorks. After leaving the software industry in 1999, he went back to school to study theoretical computer science. Hearn now works on artificial intelligence as a visiting research scholar at Dartmouth College’s Neukom Institute for Computational Science.
At the 2009 meeting of the American Association for the Advancement of Science in Chicago, Hearn organized a symposium entitled “Games People Play” to discuss computers and games. After his talk, he spoke about the difference between how computers and human brains attack games and the state of research into artificial intelligence, or AI.
How did you become interested in games computers can play?
I had been fascinated by games and puzzles from a young age. To suddenly see that you could have fun working with games and puzzles and do theoretical computer science at the same time was surprising to me. The ability to connect games and puzzles to theoretical computer science was just kind of irresistible. In the end, it turned into something a little bit deeper.
What games do you like to play?
I love Go. I spend a lot of time playing Go. But I play most board games that people play. I like to play Scrabble and different card games. I’m a big collector of mechanical puzzles.
"The AI community has this idea that computer scientists can understand the essence of intelligence at an abstract level without thinking about the messy details of neurons. I'm not tied by that idea. I want to understand thought."
As a graduate student, you developed a general mathematical model of games and puzzles called constraint logic. What did you learn from this model?
It led to very direct proofs of the mathematical limits on the abilities of people and computers to play games. You can prove that no matter how hard you look, you can never find a clever algorithm for solving a game or puzzle. That’s an interesting thing to prove—proving a negative, which you usually can’t do.
These human-defeating computers play games such as chess and Go by running through some or all of the possible outcomes of a specific move. That method seems like brute force.
It is. I suppose people do that for game-playing programs because it works so well. When theoretical tools are available, programmers use them too. For example, there’s the game Nim. The game is played with beans in five piles—one has ten, another has fifteen, and so on. On your turn, you take some number of beans from one pile. The players then alternate like that, and whoever takes the last bean wins. If you sit down and play that game, it looks complicated and sort of arbitrary. But there is an extremely simple way to analyze this game by looking at the binary representation of the number of beans in each pile. It’s easy to determine whether any position is a win and to move to a winning state. It’s that sort of analysis that we would like to apply to any game, but the computational complexity of Go, chess, or checkers prevents us.
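For readers who want to see the analysis Hearn describes, here is a short sketch of the standard Nim strategy: XOR the binary representations of the pile sizes together (the result is called the Nim-sum), and a position is a loss for the player to move exactly when that sum is zero. The pile sizes below are illustrative, since the interview only mentions piles of ten and fifteen.

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """XOR of all pile sizes; zero means the player to move loses with best play."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) that leaves a zero Nim-sum, or None if none exists."""
    s = nim_sum(piles)
    if s == 0:
        return None  # every move hands the opponent a winning position
    for i, p in enumerate(piles):
        target = p ^ s  # the size this pile would need to shrink to
        if target < p:
            return i, target

# Illustrative pile sizes (the interview only specifies "ten, fifteen, and so on")
piles = [10, 15, 7]
print(nim_sum(piles))       # 2 (nonzero), so the player to move can force a win
print(winning_move(piles))  # (0, 8): shrink the first pile to 8, restoring a zero Nim-sum
```

Whoever is left facing a zero Nim-sum loses with best play; this is the kind of closed-form analysis that, as Hearn notes, is out of reach for Go or chess.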
So a computer could never analyze a Go board and solve for the best way to win the game?
Go is in the complexity class of problems known as EXPTIME-complete. EXPTIME-complete problems require an exponential amount of time to solve. So that means for arbitrary board sizes, we can prove we’re never going to have an efficient algorithm for solving a given Go board. That doesn’t necessarily mean we won’t find one for a specific board size, like the 19-by-19 board that people play on. But the fact that the general problem is EXPTIME-complete argues strongly against that.
Is the brute force method similar to the way a human would play a game?
I would say no. The people who work in these fields do call these methods artificial intelligence. Depending on your definition of what artificial intelligence is, that’s fine. I personally don’t feel that it’s related to how people think about games.
What is the difference between how computers and people play these games?
Computers simply see where the pieces are on the board. Humans see that information, but then in their heads they build up a higher level representation of what’s happening in the game. In Go, you’ll have one position with a certain amount of influence and strength, and it radiates to another part of the board. Computers have to do a lot of analysis to come up with that, and they often get it wrong. People can also represent what their opponents are trying to do in their heads. They read the board and say “If I play here, he’ll play there, then I’ll play here, and he’ll play there.” That search is informed by a higher-level representation of the game’s state, which computers generally don’t have.
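To make the contrast concrete, here is a minimal sketch of the brute-force lookahead Hearn is contrasting with human play. The game used here (take one, two, or three counters from a pile; whoever takes the last counter wins) is invented for the example and is not from Hearn’s work; the point is that the program scores a move only by exhaustively recursing on every possible reply.

```python
# A deliberately tiny game, invented for this example: players alternately take
# one, two, or three counters from a pile, and whoever takes the last counter wins.
# The game itself is a stand-in; the point is the exhaustive, brute-force lookahead.

def best_move(counters):
    """Score every legal move by recursing on every possible opponent reply."""
    best = (None, -1)  # (move, score): 1 means the player to move can force a win
    for take in (1, 2, 3):
        if take > counters:
            break
        if take == counters:
            return take, 1                 # taking the last counter wins outright
        _, opponent_score = best_move(counters - take)
        my_score = 1 - opponent_score      # the opponent's win is my loss
        if my_score > best[1]:
            best = (take, my_score)
    return best

print(best_move(10))  # (2, 1): with ten counters, taking two forces a win
```

A human, as Hearn says, prunes most of those replies using a higher-level picture of the position rather than trying them all.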
After working on games for your doctoral thesis, you’re now working on artificial intelligence. What’s the goal of your new research?
At heart, I’m a hacker. I’m a guy who likes to build stuff, to engineer stuff. And I want to build an intelligent system. I can’t think of any greater challenge as an engineer than to build something that actually thinks. I’ve always been interested in artificial intelligence—even in high school I was reading some AI books. Between undergraduate and graduate school, I read Marvin Minsky’s book, The Society of Mind. It really grabbed me. The book puts forth the idea that you should think of a mind as a society of interacting agents—these sub-components that have their own goals and agendas. They talk with other agents and interact in special ways. They each learn and have different tasks. So I set as my goal to try to implement Minsky’s Society of Mind—to build systems that worked that way. At MIT, I got some way down that road. But I soon discovered that it was just too much. The ideas in the Society of Mind are great heuristic guiding principles, but I was trying to take them absolutely literally.
So I began to gradually think more about where these components actually exist in the brain. I began to study more neuroscience, which I was advised against by everybody in the AI community. They have this idea that computer scientists can understand the essence of intelligence at a more abstract level without thinking about the messy details of neurons. I guess for whatever reason, I’m not tied to that idea. I want to understand thought. I don’t care if I have to describe it in a LISP program or by simulating brain structures. So my most recent work in AI has been at a more computational neuroscience level than at the more abstract Society of Mind level.
What brain structures are you trying to model?
Certainly, the cortex is very important. But that’s not all there is to the brain. There is also the basal ganglia, which does all of our reinforcement learning. There are tons of models out there that people have proposed for these structures. So you pick and choose what looks plausible and what you can computationally investigate and hope that they will lead to something interesting. I’ve built these abstract models of cortical regions that talk to each other and interact with basal ganglia components. These software simulations of cortical regions talk to robots that move around and try to solve problems.
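As a rough illustration of the kind of reinforcement learning Hearn is referring to, here is a bare-bones tabular Q-learning update. It is a textbook sketch, not Hearn’s model; the states, actions, and parameters are made up for the example.

```python
import random

# A bare-bones tabular Q-learning update, the textbook form of reward-driven
# learning that computational models of the basal ganglia are often built on.
# The states, actions, and parameters here are toy values, not Hearn's model.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
ACTIONS = ["left", "right"]
q = {}                                   # (state, action) -> estimated long-term value

def choose_action(state):
    """Mostly pick the best-known action, occasionally explore a random one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Nudge the value of (state, action) toward the reward plus discounted future value."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy usage: reward the agent once for moving "right" out of the start state.
update("start", "right", 1.0, "goal")
print(q[("start", "right")])  # 0.1 after one update
```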
What do these robots do?
Right now I can teach a robot dog to learn to stand up in a way that uses hierarchical brain representations. These representations are analogous to having a higher-level representation of the state of a game in your head. The dog builds a representation of its body posture as it learns to stand up. So, it’s not only learning that from one body state it needs to move its left leg like this, but it actually has a higher level representation of “Oh, I’m lying on my side.” Each individual leg has to learn that “If I’m in this posture and I want to be in that posture, I have to go through this other intermediate posture.” But it’s not like the robot is dancing or anything. There is still a long way to go.
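One way to picture the “intermediate posture” idea is as planning over a small graph of learned postures. The sketch below is purely illustrative; the posture names and transitions are invented for the example and are not taken from Hearn’s system.

```python
from collections import deque

# An invented posture graph, not Hearn's actual model: each high-level posture
# maps to the postures reachable from it by a learned low-level motion.
POSTURE_GRAPH = {
    "lying_on_side": ["lying_prone"],
    "lying_prone": ["crouching"],
    "crouching": ["standing"],
    "standing": [],
}

def plan_postures(start, goal, graph=POSTURE_GRAPH):
    """Breadth-first search for the chain of intermediate postures between two states."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no known chain of motions connects the two postures

print(plan_postures("lying_on_side", "standing"))
# ['lying_on_side', 'lying_prone', 'crouching', 'standing']
```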
What makes artificial intelligence research so hard?
There’s no solid definition of the goals of AI research. A lot of people in AI think the end goal is to have computers think the way people do—computers that you can have a conversation with and that can solve general problems. But we don’t know what the mileposts are. That’s the most fundamental problem in AI: the inability to measure clear progress.
For me, given that I’m of the school that you want to approach AI by understanding how human minds work, the extra problem is that there is so much that is not known about how brains work. It’s a great time right now because so much is being learned, but there’s also so much that is still unknown. You have to take these little puzzle pieces that don’t really fit together. Some are missing, and you have to try to figure out what picture they form.
Michael M. Torrice, a graduate student in the Science Communication Program at UC Santa Cruz, earned an S.B. in chemistry from MIT and a Ph.D. in chemistry from Caltech. He has worked as a reporting intern at the news office of the SLAC National Accelerator Laboratory, the Santa Cruz Sentinel, and the San Jose Mercury News. He will work for six months as a science writing intern at Science in Washington, D.C.
© 2009 Michael M. Torrice