By David Hambling

Troops and unmanned aircraft could be the first to benefit from a new smart, ultra-slim camera technology that combines the images from many low-resolution sensors into a single high-resolution picture. Known as Panoptes, it promises lightweight, flat cameras with the power of a big lens in a device just five millimeters thick. It’s being developed by Marc Christensen, a professor at Southern Methodist University, with funding from Darpa. Planned applications include sensors for miniature drones and helmet-cams for soldiers.

A key feature of the system is that it is made up of a large number of tiny imagers: small, simple cameras, each directed independently by a MEMS-controlled micro-mirror. Because there is no large lens, Panoptes can be made flat, unlike conventional cameras.

A central processor combines the images into a single picture, producing a higher resolution than the individual imagers. The intelligence is in the way that the system identifies areas of interest and concentrates the sub-imagers on the relevant part of the scene. Christensen gives the example of the Panoptes system looking at a building in a field.

“After a first frame or two was collected, the system could identify that certain areas, like the open field, had nothing of interest, whereas other areas, like the license plate of a car parked outside or peering in the windows, had details that were not sufficiently resolved,” he tells Danger Room. “In the next frame, subimagers that had been interrogating the field would be steered to aid in the imaging of the license plate and windows, thereby extracting the additional information.”
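Christensen’s description amounts to a feedback loop: score regions of the previous frame for unresolved detail, then retask sub-imagers that are watching empty areas. The sketch below illustrates that scheduling idea only in broad strokes; the function names are illustrative, and tile variance is a hypothetical stand-in for whatever detail metric the real system uses.

```python
import numpy as np

def assign_subimagers(frame, n_imagers, tile=16):
    """Toy scheduler in the spirit of Christensen's description: score
    each tile of the previous frame by local detail (variance here, as
    a stand-in metric) and point the available sub-imagers at the
    busiest tiles. Flat regions such as an open field score near zero
    and get no extra imagers on the next frame."""
    h, w = frame.shape
    scores = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            scores[(y, x)] = frame[y:y + tile, x:x + tile].var()
    # Greedy: highest-detail tiles claimed first
    busiest = sorted(scores, key=scores.get, reverse=True)
    return busiest[:n_imagers]

# Example: a mostly flat scene with one textured 16x16 patch
rng = np.random.default_rng(1)
scene = np.zeros((64, 64))
scene[16:32, 16:32] = rng.random((16, 16))
targets = assign_subimagers(scene, n_imagers=2)
```

In the example, the textured patch is the only tile with nonzero variance, so it is the first tile targeted on the next frame.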

As well as concentrating on areas of interest, the smart software will combine the overlapping images in a way that will give a clear image without the ‘noise’ associated with low-resolution imagers like camera phones. This sounds like it might require a tremendous amount of processing power. But it’s possible to achieve good frame rates — 30-60 per second — using a normal digital signal processor.
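The article doesn’t detail the reconstruction algorithm, but building one sharper, less noisy image out of overlapping low-resolution views is the classic multi-frame super-resolution problem. A minimal shift-and-add sketch, assuming each low-res frame is a known sub-pixel-shifted sampling of the same scene (an assumption for illustration, not the actual Panoptes processing):

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, scale=2):
    """Naive multi-frame super-resolution: place each low-res frame
    onto a finer grid at its known sub-pixel offset, then average.
    Averaging overlapping samples also suppresses sensor noise."""
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # offsets are given in high-res pixels (0 .. scale-1)
        acc[dy::scale, dx::scale] += frame
        weight[dy::scale, dx::scale] += 1
    weight[weight == 0] = 1  # avoid divide-by-zero where no samples landed
    return acc / weight

# Example: four quarter-shifted 2x2 frames recover a 4x4 scene exactly
rng = np.random.default_rng(0)
truth = rng.random((4, 4))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in shifts]
recovered = shift_and_add(frames, shifts, scale=2)
```

Real systems must also estimate the shifts and deblur, which is where the bulk of the signal-processing work goes.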

“Our defense partners tell us a typical image only contains 10-15 percent of interesting features,” says Christensen. “Non-adaptive cameras therefore waste 85-90 percent of their resources — resolution, bits, power, etc — forming detailed images of regions that are not of interest.”

The project’s name is derived from Argus Panoptes, a hundred-eyed watchman in Greek mythology. According to Christensen, Panoptes stands for “Processing Arrays of Nyquist-limited Observations to Produce a Thin Electro-optic Sensor.”

The current goal is to demonstrate an imager five millimeters thick, weighing tens of grams. Because the cameras are flat, the professor says, they can be carried where other sensors cannot, such as all over the surface of an unmanned aircraft. Because the system is adaptive and can focus on areas of interest, it combines a very wide field of vision with high resolution, without the need for a bulky zoom lens.

Panoptes builds on previous work in this field. The original inspiration comes from the compound eyes of insects. A few years ago, a German and Swiss team demonstrated a “paper thin” camera based on a compound eye. More recently, researchers from Osaka University developed a button-sized camera called Tombo (Thin Observation Module by Bound Optics); using nine lenses, it could recreate three-dimensional scenes. Panoptes goes beyond these: its individual imagers are active elements that can be trained on any part of the scene.

The other way of approaching the task is to use a single imager with variable resolution: use high resolution for the interesting part, and keep the rest fuzzy. This saves a huge amount of processing power, and it’s the approach used by many vertebrates, including humans. Only the fovea, the small area at the center of your field of vision, has high resolution. Machine vision systems can be built with a fixed fovea, or, like the Air Force-funded Variable Acuity Superpixel Technology (VAST) system, with a software-defined fovea that can be trained to follow a point of interest. This speeds up processing so much that the high-resolution “window” of the VAST camera can follow speeding bullets in flight.
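The foveal trick can be sketched directly: keep one window at full resolution and block-average everything else, so downstream processing handles far less data per frame. A toy illustration; the windowing scheme is a generic sketch, not VAST’s actual implementation.

```python
import numpy as np

def foveated(frame, center, radius=8, coarse=4):
    """Keep full resolution in a window around `center`; elsewhere,
    replace each coarse x coarse block with its mean, shrinking the
    detail that must be processed each frame."""
    h, w = frame.shape
    assert h % coarse == 0 and w % coarse == 0
    # Coarse periphery: block-average, then tile back to full size
    blocks = frame.reshape(h // coarse, coarse, w // coarse, coarse)
    out = np.kron(blocks.mean(axis=(1, 3)), np.ones((coarse, coarse)))
    # Paste the full-resolution fovea over the window of interest
    cy, cx = center
    y0, y1 = max(cy - radius, 0), min(cy + radius, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius, w)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

# Example: 32x32 frame with the fovea centered at (16, 16)
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
out = foveated(frame, (16, 16))
```

Tracking then reduces to re-centering the window each frame rather than reprocessing the whole image, which is what lets a software-defined fovea keep up with fast-moving targets.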

It’s also the approach taken by Darpa’s 1.8 gigapixel ARGUS-IS flying camera and the Air Force’s Gorgon Stare. These use wide-angle cameras to catch the whole scene, but only render small, user-selected areas of interest in high resolution – 12 windows for Gorgon Stare, 65 for the bigger and more advanced ARGUS-IS. Panoptes is aiming for the same effect, using a lot of scanning imagers that can be directed to grab more detail.

Eventually, the project could lead to a kind of wide-angle helmet cam. Linked to the right image-processing software, it could be a life-saver, keeping a constant lookout in all directions for possible threats. Darpa is already pursuing a similar approach to try to spot RPGs before they are fired.

But the first customer for Panoptes may be a robotic plane. The current stage of the program is due to be completed next year, when the system will be passed to a defense company for integration with an unmanned aerial vehicle. Who knows what the drone might see, with its new eye in the sky?