VIDEO: THE FUTURE OF CAMERAS MIGHT SEE BEHIND WALLS
The latest camera research is shifting away from increasing the number of megapixels and towards fusing camera data with computational processing – a radical new approach in which the incoming data may not look like an image at all. It becomes an image only after a series of computational steps that often involve complex mathematics and modelling of how light travels through the scene and into the camera.
The additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead, we will use light detectors that only a few years ago we would never have considered for imaging, and they will be able to do incredible things, like see through fog, inside the human body and even behind walls.
One extreme example is the single-pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels – tiny sensor elements – to capture a scene that is usually illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.
To do this you need a controlled light source – for example, a simple data projector that illuminates the scene either one spot at a time or with a series of different patterns. For each illumination spot or pattern, you measure the amount of light reflected and add everything together to create the final image.
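To make the idea concrete, here is a minimal Python sketch (using numpy and a small synthetic scene standing in for the real world) of the simplest single-pixel scheme: the "projector" lights up one spot at a time, a lone detector records the total reflected light, and the readings are laid back out as an image. Everything here is illustrative rather than any particular lab's pipeline.

```python
import numpy as np

# Synthetic scene: a bright rectangle on a dark background,
# standing in for whatever the projector illuminates.
N = 32
scene = np.zeros((N, N))
scene[10:20, 12:24] = 1.0

# Single-pixel acquisition by raster scanning: for each
# illumination spot, the single detector measures the total
# reflected light from the whole scene.
measurements = []
for i in range(N):
    for j in range(N):
        pattern = np.zeros((N, N))
        pattern[i, j] = 1.0                            # light up one spot
        measurements.append(np.sum(pattern * scene))   # one detector reading

# Reconstruction: each reading corresponds to one spot, so the
# image is simply the measurements laid back out on the grid.
image = np.array(measurements).reshape(N, N)
assert np.allclose(image, scene)
```

Real systems typically project structured patterns, such as Hadamard masks, rather than single spots; combined with compressive-sensing reconstruction, that lets them recover the image from far fewer detector readings than there are pixels.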
The disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns to produce just one image. On the plus side, these cameras could take photos through fog or thick falling snow. They could also mimic the eyes of some animals and automatically increase an image's resolution (the amount of detail it captures) depending on what's in the scene.
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology, and it still relies, on the face of it, on the traditional concept of what forms a picture. Multi-sensor imaging, by contrast, involves many different detectors pointed at the same scene. The Hubble Space Telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths.
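As a toy illustration of that kind of fusion, the sketch below stacks three synthetic narrowband exposures of the same scene into one false-colour composite, normalising each band first so no wavelength dominates. Real multi-sensor pipelines also have to register the images to each other, which is skipped here.

```python
import numpy as np

# Three synthetic narrowband exposures of the same scene,
# stand-ins for images taken at different wavelengths.
rng = np.random.default_rng(0)
bands = [rng.random((64, 64)) for _ in range(3)]

def normalise(img):
    """Stretch one band to the 0..1 range."""
    return (img - img.min()) / (img.max() - img.min())

# Assign each wavelength to a colour channel and stack them
# into a single false-colour composite image.
composite = np.dstack([normalise(b) for b in bands])
print(composite.shape)  # (64, 64, 3): one fused image
```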
You can already buy commercial versions of this kind of technology, such as the Lytro camera, which collects information about the intensity and direction of light on the same sensor to produce images that can be refocused after they have been taken.
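The refocusing trick itself is easy to sketch. A light-field camera effectively records many sub-aperture views of the scene; to focus at a chosen depth, you shift each view in proportion to its position in the aperture and average. The code below is a textbook shift-and-sum sketch on a made-up 4D light field, not Lytro's actual processing.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W) holding the
                sub-aperture views recorded by the sensor.
    alpha:      refocus parameter; each view is shifted in
                proportion to its (u, v) aperture position.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shifts for clarity; real pipelines interpolate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 views of random noise standing in for real data.
lf = np.random.default_rng(1).random((5, 5, 64, 64))
near_focus = refocus(lf, alpha=1.0)    # focus at one depth
far_focus = refocus(lf, alpha=-1.0)    # focus at another
```

Changing alpha after capture is all it takes to move the focal plane, which is why refocusing can happen long after the shutter has fired.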
Next-generation cameras might look like the Light L16 camera, which features ground-breaking technology based on 16 different sensors. Their data is combined using a computer to provide a 52-megapixel, re-focusable and re-zoomable, professional-quality image.
These are just the first steps towards a new generation of cameras that will change the way we think about and take images. Researchers are also working hard on the problems of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.
All these techniques rely on combining images with models that explain how light travels through or around different substances. Single-photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and record videos at incredibly fast speeds – up to a trillion frames per second, enough to capture images of light itself travelling across a scene.
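To give a flavour of those light-transport models: a single-photon detector timestamps when each photon comes back, and since the speed of light is known, a round-trip time converts directly into a distance. The sketch below uses made-up arrival times; each nanosecond of round trip corresponds to roughly 15cm of depth.

```python
# Converting single-photon round-trip times into distances:
# the basic building block of time-of-flight imaging.
C = 299_792_458.0  # speed of light in m/s

def depth_from_time(round_trip_seconds):
    """Distance to the surface that reflected the photon.

    The photon travels out and back, so the one-way
    distance is half the round-trip path length.
    """
    return C * round_trip_seconds / 2.0

# Made-up photon arrival times, on the order of nanoseconds.
for t in (2e-9, 10e-9, 33e-9):
    print(f"{t * 1e9:5.1f} ns -> {depth_from_time(t):5.2f} m")
```

Seeing behind walls pushes the same idea further: photons bounce off a visible wall, off the hidden object and back again, and a reconstruction algorithm inverts the timing of that multi-bounce light path.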
Have a look at the video below to learn more about the research into futuristic cameras.