Aydogan Ozcan, biophotonics engineer

An optical entrepreneur envisions replacing large microscopes with everyday cell phones equipped to monitor human health. Interview by Stephen Tung

March 31, 2012

Photo: Ozcan Research Group/UCLA

The camera on the back of your cell phone can be transformed into a high-tech microscope with a cheap, new, easy-to-use attachment developed in the lab of UCLA associate professor Aydogan Ozcan. Using the standard parts of a cell phone—its processor and the sensor on its camera—his team developed a device that rivals conventional microscopes by replacing expensive optics with clever computing.

A user simply attaches the device to the cell phone and places a transparent sample inside the device. The camera snaps multiple pictures as the device turns different internal lights on and off. Then, Ozcan's software combines those pictures into an enormous image containing hundreds of megapixels—10 to 20 times more detailed than images produced by high-end consumer cameras.

Ozcan, a 2011 recipient of the Presidential Early Career Award for Scientists and Engineers, envisions that traveling health-care professionals in developing countries could use the portable tool to identify diseases like tuberculosis, HIV, and malaria, or to monitor water quality. “That will really help us bring more advanced technologies to global health,” he said in a session at the February 2012 meeting of the American Association for the Advancement of Science in Vancouver, Canada.

After his talk, titled “Photonics-Based Telemedicine Technologies Toward Smart Global Health Systems,” he peered into the minutiae of his work with SciCom's Stephen Tung.

So why did your group choose to focus on cell phones?

Cell phones are used almost everywhere. Close to 90 percent of the world's population lives where there is cell phone reception. In addition, we have more than five billion cell phone subscribers, and most of these cell phones are actually used in developing countries. This unique infrastructure creates a lot of opportunities, especially for computational imaging.

With so many cell phone users, how prevalent are cell phone cameras?

I don't know the penetration of camera phones specifically. But the world manufactures around 2 billion new phones every year, and they all have cameras on them.

What happens inside your device?

At the bottom is the cell phone camera's sensor. In the middle you place your sample, like a blood smear, and then on the top there is an array of light-emitting diodes. You can turn each LED on and off, one by one. Every time you turn one LED on, the shadow of the cell starts to shift. It's equivalent to the sun moving in the sky, and your shadow moves with the sun. One difference is that the light source moves very small amounts.
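To make that geometry concrete, here is a rough numerical sketch of how a small sideways move of the light source translates into a tiny shift of the shadow on the sensor. The distances, LED spacing, and pixel size below are illustrative assumptions, not the published specifications of Ozcan's device:

```python
# Illustrative lensfree-imaging geometry; all numbers are assumptions, not device specs.
led_to_sample_mm = 60.0      # z1: the LED array sits several centimeters above the sample
sample_to_sensor_mm = 1.0    # z2: the sample rests almost directly on the camera sensor
led_spacing_mm = 0.1         # assumed lateral step between neighboring light positions
pixel_pitch_um = 2.2         # a typical pixel size for a phone camera sensor

# Similar triangles: a lateral move of the source shifts the shadow by roughly z2/z1 of that amount.
shadow_shift_um = led_spacing_mm * 1000 * (sample_to_sensor_mm / led_to_sample_mm)
print(f"shadow shifts by about {shadow_shift_um:.1f} microns "
      f"(~{shadow_shift_um / pixel_pitch_um:.2f} of a pixel)")
```

Because the sample sits far closer to the sensor than to the lights, the shadow moves by only a fraction of a pixel per light position, which is what the superresolution step described later relies on.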

"Imagine if you could use your cell phone to understand the quality of water in your well, every week or every few days."

So your LEDs are packed closely together.

Yes.

How does this technology work?

We employ shadow imaging, which is based on digital holography. Each shadow of a cell that we are capturing can be considered a digital hologram, and we can then use reconstruction algorithms to remake the image.

When you say hologram, people might think of a three-dimensional image. What do you mean here?

When you shine a light through an object, it casts a shadow. As humans, we are opaque to light. Our shadows on a sunny day do not contain much information. They're dull, monotonous. But if you look at a cell or a bacterium or a parasite, they are small, on the order of a couple of microns. That's roughly a hundredfold smaller than the diameter of a hair. At that scale, cells are semitransparent. They cast shadows that are textured.

Some of the light rays penetrate the cell body and pick up the information of the cell. Some of these rays do not see the cell. These two different types of optical rays can interfere with each other. Their interference creates patterns—like oscillations, texture—that uniquely encode the information of the cell body. That encoding process is called holography, and you can decode it through holographic reconstruction.
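To make the encoding and decoding idea concrete, here is a minimal one-dimensional numerical sketch of inline holography, assuming a plane-wave source and a single faint absorbing "cell." It uses standard angular-spectrum propagation; it is an illustration of the principle, not Ozcan's reconstruction code, and every parameter is made up:

```python
import numpy as np

# 1-D sketch of inline holography: light disturbed by a semitransparent "cell" interferes
# with the undisturbed light at the sensor, and the recorded intensity pattern (the
# textured shadow) encodes the cell. Parameters are illustrative only.
wavelength = 0.5e-6           # 500 nm illumination
z = 1e-3                      # 1 mm sample-to-sensor distance
n, dx = 2048, 1e-6            # 2048 samples at 1 micron spacing
x = (np.arange(n) - n // 2) * dx

# Object transmission: mostly clear, with a 10-micron "cell" that absorbs a little light
t = np.ones(n, dtype=complex)
t[np.abs(x) < 5e-6] = 0.7

# Angular-spectrum propagation from the object plane to the sensor plane
fx = np.fft.fftfreq(n, dx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - fx**2))
field_at_sensor = np.fft.ifft(np.fft.fft(t) * np.exp(1j * kz * z))

hologram = np.abs(field_at_sensor) ** 2   # what the camera actually records

# Naive decoding: back-propagate the measured pattern by -z to refocus on the object
recovered = np.fft.ifft(np.fft.fft(np.sqrt(hologram)) * np.exp(-1j * kz * z))
print("recovered magnitude at the cell's center:", round(float(np.abs(recovered[n // 2])), 2))
```

The interference fringes in `hologram` are the "texture" Ozcan describes; back-propagating them recovers a dimmer spot where the cell sits. Real reconstructions also suppress the so-called twin image, which this toy version ignores.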

Camera phones are improving rapidly, with modern cell phones regularly offering 5 megapixels or more. But compared to other sensors, that is still small. Is that resolution limiting?

No, not at all. Five megapixels is plenty for us. If we use a higher megapixel camera, we will just improve the imaging area. In other words, if we change the imager from a 5 megapixel to a 10 megapixel camera, I will be able to look at twice the field of view with the same resolution.

That's the advantage of computational imaging: it follows the advances of the electronics industry. Improvements in digital technologies, in terms of the scale and density of transistors, will improve the performance of our microscopes. Every time there is a new sensor with more megapixels, it will improve our field of view without sacrificing anything.

How does the field of view for your microscopes compare to that in other devices?

A conventional microscope with submicron resolution has a field of view of maybe half a square millimeter or less, a couple hundred microns by a couple hundred microns. For the same submicron resolution in our devices, we can look at a field of view of 20 to 30 square millimeters.

I'm somewhat confused. For a field of view that large, but with submicron resolution, it seems like you'd have much more than 5 megapixels of information.

That is true. Another way of saying it is that you use a 5 megapixel sensor on the back of an iPhone or a BlackBerry, and from its images you create one that has 200 to 500 million pixels.
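As a quick sanity check on those numbers, a back-of-the-envelope calculation (using assumed round figures, not exact device specifications) shows why a 20 to 30 square millimeter field of view at sub-micron detail lands in the hundreds of megapixels:

```python
# Back-of-the-envelope arithmetic; both inputs are assumed round numbers.
field_of_view_mm2 = 25        # roughly the 20-30 square millimeters quoted above
effective_pixel_um = 0.25     # an assumed sub-micron effective pixel after superresolution
pixels = field_of_view_mm2 * 1e6 / effective_pixel_um**2   # 1 mm^2 = 1e6 square microns
print(f"about {pixels / 1e6:.0f} million pixels")          # ~400 million, within 200-500
```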

Right, so how are you doing that?

The technology behind this is called pixel superresolution. It's a very interesting technology, also exploited in security cameras. If an airport security camera is looking at you from a far distance, your image is pixelated. However, if you are walking while the camera is watching you, it captures a movie of your face. By putting the different frames of that movie together, the software can digitally synthesize a much smaller pixel size and resolve your face with much better resolution. We now employ the same idea to create holographic images that cover a large field of view.
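The following is a toy "shift-and-add" version of pixel superresolution, offered only to illustrate the idea of combining sub-pixel-shifted frames onto a finer grid. The function and its details are a simplified stand-in, not the algorithm actually used in the lab:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Toy pixel superresolution: combine low-resolution frames, each captured with a
    known sub-pixel (dy, dx) shift, onto a grid that is `factor` times finer."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-resolution sample at its true, sub-pixel position on the fine grid.
        rows = np.clip(np.round((np.arange(h)[:, None] - dy) * factor).astype(int), 0, h * factor - 1)
        cols = np.clip(np.round((np.arange(w)[None, :] - dx) * factor).astype(int), 0, w * factor - 1)
        acc[rows, cols] += frame
        count[rows, cols] += 1
    return acc / np.maximum(count, 1)   # average wherever the fine grid received samples
```

With enough distinct shifts, the fine grid fills in and the effective pixel size shrinks by the chosen factor; in the lensfree microscope, the frames are the holograms recorded as each LED is switched on in turn.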

Does that require the samples to be moving?

No, the light source is moving. It's equivalent to moving the sample or its shadow.

How do these devices compare to other devices in developing countries?

One of the gold-standard techniques for looking at specimens, such as pathology slides, blood smears, or Pap smears, is to use conventional microscopes. These technologies date back almost two centuries. That's why they are relatively bulky, costly, and not easy to miniaturize. What we are doing is replacing some of the components of the microscope, like lenses, with computer algorithms that can essentially create images without those expensive components.

How do you envision health care in the developing world changing with these new devices? Would people visit health care professionals in a different way? Could they do tests independently?

Initially, these devices will be used by professionals, not by the public. We envision them being used by mobile health care units to bring more advanced micro-analysis and diagnostic tools to remote locations.

In your talk, you mentioned thinking of these new devices as just better microscopes, not necessarily screening for certain diseases. It seems you might be able to build applications to do that.

One aspect is the microscope. Another aspect is the health care professional, who is supposed to read what the microscope shows. In the short term, you can replace the current microscope with a better microscope, but you still need pathologists who are trained to look at these images. Down the line, there is potential to also replace some functions of the pathologist using computation, or at least to make pathologists more efficient, spending less time per sample by helping them through the manual search of slides.

How would software help manual searching?

Take malaria, for example. For malaria, you get blood from the patient, put that blood droplet on a slide, then look at the morphology of the red blood cells to see if they're infected by the parasite. This is a tedious job, because only about 1 percent of the cells are infected. That means you have to look at around 1,000 cells before you can faithfully say the slide is negative. That's why we have to have microscopes that can look at large numbers of cells quickly, without the tedious job of scanning the microscope, realigning it and everything. That's where high-throughput widefield imaging comes in.
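A quick back-of-the-envelope check, under the simplifying assumption that infected cells are scattered randomly across the slide, shows why counting roughly a thousand cells gives a trustworthy negative:

```python
# Rough probability check; assumes infected cells are randomly distributed on the slide.
infection_rate = 0.01        # about 1 percent of red blood cells infected, per the figure above
cells_examined = 1000
miss_probability = (1 - infection_rate) ** cells_examined   # chance every examined cell is clean
print(f"chance of seeing no parasite in {cells_examined} cells: {miss_probability:.4%}")
# ~0.004 percent, so a truly infected slide is very unlikely to be called negative
```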

Are there any other high-throughput widefield imaging devices?

There are, but they are expensive. You're looking at maybe a $50,000 to $100,000 investment for a scanning optical microscope. 

It seems this device has applications in developed countries, as well.

The same platform could be used in the home. Just in the United States, we have 40 million houses with their own [water] wells. Imagine if you could use your cell phone to understand the quality of water in these wells every week or every few days. Another example could be fertility tests, by monitoring the quality of semen.

Other research groups are looking to convert cell phone cameras into microscopes. How does your technology compare with their efforts?

Our microscopes are computational and lens-free. This brings in certain advantages such as larger imaging area, lower cost, and lighter weight.

Are there any limitations?

One important limitation is that the specimens should be transparent. That means we can't image tissue that isn't transparent, for example. If you want to screen for skin cancer, that's something you have to look for with light reflecting off the surface of the skin. [Our] microscopes won't see it.

Are any of your microscopes being used in the field?

We had field visits to the Amazon in Brazil and to Turkey to test some of our microscopes. These field visits will continue this year.

How long will it take to bring to market?

There is a startup company working on it based in Los Angeles. According to their plans, some of these computational microscopes should be available in the next 12 to 18 months.

Where are you going in the future with this?

One thing is data management. With this technology, the number of personal microscopes in the world could increase by several orders of magnitude. All of that potential data can help generate statistics about what's going on at the micro level. To get the most out of so many microscopes working together, you have to have infrastructure and algorithms that can handle such large amounts of data and make meaningful predictions. That's what we're really excited about.

_________________________

Stephen Tung, a graduate student in the Science Communication Program at UC Santa Cruz, earned his bachelor's degree in mechanical engineering at Cornell University. At UCSC, he has worked as a reporting intern at the Monterey County Herald, the Stanford University News Service, and the San Jose Mercury News. He will work as a science writing intern this summer at the U.S. Department of Energy Joint Genome Institute in Walnut Creek, CA.

© 2012 Stephen Tung