Scientists have formed images of a light field without using a lens. Instead, a light-scattering diffuser was placed in the optical path of the camera, encoding information about the light field so that it could later be recovered from a two-dimensional image. This approach allowed the physicists to improve both the spatial and the angular resolution of the resulting images, and in the future the method may help overcome the fundamental limits on light-field image quality characteristic of previous generations of such devices. The paper is published in the journal Light: Science & Applications.
An ordinary photograph is a two-dimensional image that cannot convey the angular distribution of the light hitting the sensor. Light field cameras (also known as plenoptic cameras), by contrast, are designed to record information both about the spatial location of radiation sources and about the direction of light propagation. This combination opens up many possibilities: for example, one can refocus a finished photo or measure the distance to an object from the resulting image. The potential of light field cameras is already being exploited in microscopy, and commercial plenoptic cameras also exist.
To realize the full potential of the technology, it is important that plenoptic cameras have high angular resolution without sacrificing the quality of the two-dimensional image. However, in the classic approach to building light field cameras with a microlens array, physicists have to trade off spatial against angular resolution, since improving one of these characteristics degrades the other. As a result, the resolution of the output two-dimensional image ends up orders of magnitude lower than the pixel count of the camera's sensor.
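The trade-off above can be sketched with simple arithmetic. In a microlens-based plenoptic camera, the pixels behind each lenslet record direction, while the lenslets themselves record position, so the sensor's pixel budget is split between the two. The numbers below are illustrative, not taken from the paper:

```python
def plenoptic_resolution(sensor_px: int, pixels_per_lenslet: int) -> tuple[int, int]:
    """Toy model of the microlens trade-off (illustrative numbers only).

    Each lenslet covers `pixels_per_lenslet` sensor pixels along one axis:
    those pixels encode direction, and the lenslet grid encodes position.
    """
    spatial = sensor_px // pixels_per_lenslet  # positional samples per axis
    angular = pixels_per_lenslet               # directional samples per axis
    return spatial, angular

# A 4000-pixel-wide sensor with 10 pixels under each lenslet per axis:
spatial, angular = plenoptic_resolution(4000, 10)
print(spatial, angular)  # 400 10
```

Doubling the angular sampling to 20 directions per axis would halve the spatial resolution to 200 samples, which is exactly the dilemma the diffuser-based approach seeks to avoid.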
To solve this problem, scientists use light-scattering devices, diffusers, instead of a large array of lenses; the diffuser acts as an encoder of the two-dimensional photo. From the encoded image, knowing the parameters of the diffuser used, one can analytically recover an image of the original light field with high angular resolution without sacrificing two-dimensional image quality. But existing diffuser-based schemes still include lenses as part of the objective, which introduces aberrations into the system and makes it difficult to accurately reconstruct the original light field.
Now Zewei Cai from the University of Stuttgart and Shenzhen University, together with German and Chinese colleagues, has demonstrated lensless light-field imaging with a diffuser. As the diffuser, the authors used a transparent phase plate whose thickness varies strongly across its plane. With such a plate, the sensor registers a pseudo-random radiation distribution from each point in the observed scene. Given suitable optical characteristics of the diffuser and prior calibration on point sources, this approach makes it possible to recover a four-dimensional image of the studied light field from the sensor's two-dimensional image.
To implement this recovery, the physicists computed the transmission matrix of the diffuser by analyzing images of a set of calibration radiation sources. The decoding algorithm uses this matrix: because the diffuser is fixed and the responses to the template sources are known, the algorithm can recreate the full light-field image of the object from a single two-dimensional sensor image. The algorithm ran for 10 to 30 minutes depending on the angular resolution, and in tests on real samples the researchers achieved a spatial resolution of 50 micrometres.
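The idea of transmission-matrix decoding can be illustrated with a toy linear model: the sensor image is treated as y = T x, where each column of T is the calibrated response to one point source and x is the flattened light field. The sketch below recovers x by least squares on a noiseless, overdetermined toy system; it is only an illustration of the principle, not the authors' actual reconstruction algorithm, and all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 200 sensor pixels, 50 light-field coefficients (illustrative).
n_sensor, n_field = 200, 50

# Calibration step: each column of T is the sensor response to one point source.
T = rng.normal(size=(n_sensor, n_field))

x_true = rng.random(n_field)  # unknown light field, flattened to a vector
y = T @ x_true                # the single 2D sensor measurement, flattened

# Decoding step: invert the linear model by least squares.
x_rec, *_ = np.linalg.lstsq(T, y, rcond=None)

print(np.allclose(x_rec, x_true))  # True: noiseless and full-rank, so exact
```

In a real system the problem is far larger and noisier, which is why the paper's reconstruction takes tens of minutes rather than the milliseconds of this toy solve.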
According to the authors, the results matter above all as proof of the effectiveness of the proposed method of light-field imaging. Although the physicists achieved higher resolution than in previous experiments, whose schemes contained lenses alongside diffusers, the approach is still far from practical implementation, even if its capabilities already significantly exceed those of its counterparts.
Light field cameras are being developed not only by scientists but also by large IT companies: Google recently created a light field camera for shooting three-dimensional videos. And physicists do not always need many pixels for such a camera: for example, they have made a transparent single-pixel plenoptic camera out of graphene.
Photos: Zewei Cai et al. / Light: Science & Applications, 2020