Gregory Hager, computer scientist

This technologist studies ways that robotic systems can amplify human capabilities. Interview by Bethany Augliere

April 04, 2016

Technologist Gregory Hager of Johns Hopkins University. Courtesy of Gregory Hager

Since the 1990s, computers have bested humans at checkers, chess, and even Jeopardy. Now, an artificial intelligence has defeated the reigning human champion in Go, an ancient East Asian board game that offers far more possible moves at each turn. The game requires tremendous computing power, according to computer scientist Gregory Hager of Johns Hopkins University. Google’s AlphaGo program was up to the task, beating South Korean grandmaster Lee Sedol in four out of five games in March.

The victory raises questions about the future of artificial intelligence. Inventor and futurist Ray Kurzweil predicts AI will exceed the intellectual capabilities of humans by the year 2045, which he has dubbed “the singularity.” As smart systems continue to evolve and integrate into our society, Hager believes a national conversation is key. “Technology will advance,” says Hager. “It’s not a natural force, but it’s somehow an inexorable force as we continue to look for innovations that enhance our lives, increase our safety and comfort, and enhance our productivity.”

At Johns Hopkins, Hager is founding director of the Malone Center for Engineering in Healthcare, which develops and deploys technological innovations to make healthcare more efficient and effective. He spoke about the realities and shortcomings of smart technology at the February 2016 meeting of the American Association for the Advancement of Science in Washington, D.C. SciCom’s Bethany Augliere sat down with Hager afterward to discuss how autonomous technology might change our lives.

Have you heard of the Kurzweil Singularity?

Yes. I think it's just the idea that artificial intelligence will eventually take over.

What’s your take on that?

When a machine defeats a human at chess or Go, the human has defined the problem and the solution. It’s like training a highly skilled athlete. AlphaGo is a Go-playing machine, not a generally intelligent machine that has been taught how to play Go. It’s hard for me to think that in 30 years we are going to suddenly see this convergence of capability to create an intelligence that transcends our ability now, which is creating very narrow specialized intelligences.

Why is that?

It’s an act of creative engineering to develop these systems. It’s not like discovering fusion or fission. We are creating these systems, we are controlling them, and we have the off switch. The Kurzweil notion that the “singularity is near” creates this impression that someday, inadvertently, we will release a virus into the world that will be super intelligent and take over. I just don't believe that is a correct picture to have.

In the future, we might have a system in the kitchen that is able to understand the operation of a kitchen and cooking. But you couldn't put the same system on a farm and expect it to learn farming. It wouldn't see the analogy between mixing food in a kitchen and mixing feed for animals, which would be natural for a person.

And that’s the missing piece. This perception and understanding of the world around us doesn’t exist in the machine world. It’s up to humans to find the problem, find the data, find the methods, and eventually train the machine to do it. Once you've done it, though, you can train a machine to be amazing at many of these tasks. 

What then is a more realistic vision of our future with smart machines?

I think most of the progress we will see in the next couple of decades will be centered around developing and training machines that solve very specific problems.


"We are creating these systems, we are controlling them, and we have the off switch."


Tell me about the robots you have developed with your research.

We have focused on areas where robotic systems can potentially amplify human capabilities. One area has been retinal surgery, where we developed a robot with extremely high precision.

What does the robot do in those retinal surgeries?

One thing it does is membrane peeling, which is removing a thin kind of scar tissue. If you have diabetes, you often get this condition in which your eye starts to scar up. The robot can actually peel that membrane from the retina. It is fantastically fine work because the blood vessels in the retina are about the size of a human hair, and this membrane is about one-eighth of the width of that.

But these aren’t used in actual human surgeries?

Not at this point, although they are potentially human safe. We haven't been able to demonstrate the value proposition to the point where we could actually bring it in. Really, we need funding from the National Institutes of Health to demonstrate the benefits.

And I’ve got to say, that’s one of the things about all these technology areas. We can develop beautiful technologies, but really you need private industries to scale those technologies and have them find their niche. They can explore the technology so much faster than we can as researchers.

In your talk, you focused on the intersection of robots and society. Why is that a conversation we need to have?

We really are in the wild west of many of these new frontiers. Over the next few years, we need to advance our policy and social understandings of these devices at the same rate that we advance our technology.

More and more I see how technology is bumping up against society. When people watch the robot perform surgery in an automated fashion, they immediately leap to ask, “When will a robot be my surgeon?” That’s just a fantastic leap. Technology can have a very positive impact, but it’s important to understand both its opportunities and its limitations.

I wonder if a smart physical robot is going to invade my privacy, take my job, or run over my cat. Do you think people have reason to be a little cautious or worried?

Oh, I think absolutely. There is a lot of potential for unforeseen consequences of these innovations. Manufacturers will produce products that are likely to have utility and be safe. Pure economics puts a lot of constraints on that. For example, a mower that runs over my cat probably isn't going to have many buyers. But there is a hidden piece: What data is it gathering about my home and where is that data going? What data is it acquiring about me?

That’s scary. What do you think about these issues? Is there any kind of comfort you can think of? 

It’s a policy question. I asked the freshman students I teach how many of them were concerned about data gathering. None of them were. They assumed people were gathering data on them and that it was benign. But there should be a law that if you are under 18 you can't have data collected.

Europe has a different set of right-to-privacy laws. Data cannot be collected without permission. In the U.S. you can opt out, but in Europe you have to opt in. And furthermore, in Europe you are only allowed to keep the data for a fixed period of time before you have to dump it, unless there is a reason not to dump it. If you made that policy change in the U.S., it would enhance your ability to control your data privacy.

What would you say to someone who is scared to embrace technology?

The fact is that technology is created in service of humanity. I would tell anyone who is looking at a new technology to make sure you are informed and educated.

As we move into the future, what might be the main benefits of more autonomous systems? 

I think the benefit is going to be these sorts of qualitative life enhancements. Once you have something that regulates the heat in your house, you don't think about stoking the stove. If you have an aging parent, having a system that’s able to detect behavioral changes and allow you to be in contact with that parent will have a tremendous impact on your quality of life.

The second area will be in disaster response, which is evolving very rapidly. If a disaster scatters 20 people, you can send out 200 drones to survey the area. The drones can establish a temporary communications mesh, so people’s cellphones work even though the towers are down. Suddenly, you have this ability to amplify our capacity to react to situations.

Then you are going to see productivity enhancement in certain sectors of the economy, such as transportation, manufacturing, and some of the service industries. In long-haul trucking, for example, we might see one person driving the lead truck, and the ten trucks behind are automated. But they depend on the lead person to manage that.

Do you think technology creates a bigger gap between the haves and the have nots?

There are lots of cases where technology has enabled a leapfrogging. For example, the power and communications infrastructure in Europe is far better than in the U.S. Most of it was blown up during World War II, so they completely rebuilt most of their cities in the 1950s and 1960s. Our infrastructure tends to date from the 1920s and 1930s. That’s been a tremendous advantage [in Europe]. In Africa, there are no land lines and there never will be. Cellphones came in and allowed them to create communication networks without having to run all the wires that we've had to run for our system.

So, as I start to think about the developing world and technology, I realize that they may end up ahead of us in interesting ways. If you haven't built your roads already, and you know that automated driving or augmented driving is possible, maybe you build your road system oriented to the idea that transportation can be automated.

I think an important thing to understand is that we as a society have choices to make. Clearly, these sorts of innovations have become enormous wealth concentrators. By thinking carefully about what direction we want to go, we can ensure that we don’t enhance disparity in society by locking it away to those who can afford it.

Do you think this kind of technology can help us be a more sustainable society?

It is something I have thought a lot about in the sense of smart cities. For example, this morning it took me about an hour and a half to drive from Baltimore to D.C. for the conference. Most of that time was due to stopping at traffic lights, and a full stop is the most energy-inefficient thing you can do with a car. I thought to myself: how much more efficient would it be if we had some intelligent staging of traffic?

So when you watch movies like Ex Machina or Her, are they just sci fi or do they ring true in any way?

They are very much in the realm of sci fi.

____________________

© 2016 Bethany Augliere. Bethany’s online portfolio of writing and photography is multiplying at www.bethanyaugliere.com.

