UCLA Researchers Use Computer-Free Method to See Through Frosted Screens Instantly

Photo Credit: Prof. Aydogan Ozcan

Figure Caption: Computational imaging without a computer: seeing through random diffusers at the speed of light using a diffractive imager designed by deep learning

Approach can apply to biomedical imaging, atmospheric sciences and autonomous vehicles

Imaging through scattering and diffusive media has been a challenge for decades, and existing computer-aided reconstruction solutions require multiple processing steps to correct distorted images. Researchers at UCLA have now developed an approach that uses diffractive surfaces to see through random diffusive media without a computer.

In principle, images distorted by random diffusers, such as frosted glass, can be recovered using a computer. However, existing methods rely on complicated algorithms and computer code that digitally correct the distorted images.

Adaptive optics-based methods have also been applied in various scenarios to see through diffusive media. With significant advances in wavefront shaping, wide-field real-time imaging through turbid media became possible. However, in addition to digital computers, these methods require guide stars or known reference objects, adding complexity to the imaging system. Another alternative is to train deep neural networks on computers, using graphics processing units (GPUs), to reconstruct images of distorted objects.

A new paper published in eLight proposes an entirely new paradigm for imaging objects through diffusive media. In the paper, entitled “Computational Imaging Without a Computer: Seeing Through Random Diffusers at the Speed of Light,” UCLA researchers led by electrical and computer engineering professor Aydogan Ozcan presented a method to see through random diffusive media instantly, without any digital processing. The approach is computer-free: it optically reconstructs images of objects distorted by unknown, randomly generated phase diffusers.

To achieve this, Ozcan and his team used deep learning to train a set of diffractive surfaces, or transmissive layers, to optically reconstruct the image of an unknown object placed behind a random diffuser. The diffuser-distorted input optical field diffracts through these successive trained transmissive layers, so the image reconstruction is completed at the speed of light propagation through the layers, offering an ultra-fast solution to an imaging problem that has stood for decades. Each trained diffractive surface has tens of thousands of diffractive features, called neurons, that collectively compute the desired image at the output.
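To make the idea concrete, the sketch below simulates light passing through a stack of trainable phase layers using the standard angular-spectrum propagation method, written in JAX. This is a minimal illustration, not the authors' code: the grid size, feature pitch, terahertz-range wavelength, and layer spacing are all illustrative assumptions.

```python
import jax.numpy as jnp

# Illustrative parameters (assumptions, not the paper's values):
N = 128               # pixels per side of each plane
dx = 0.4e-3           # diffractive feature pitch, meters
wavelength = 0.75e-3  # ~0.4 THz, in the terahertz band used in the paper
z = 40e-3             # axial spacing between successive planes, meters

def angular_spectrum(field, z):
    """Propagate a complex field a distance z via the angular-spectrum method."""
    fx = jnp.fft.fftfreq(N, d=dx)
    FX, FY = jnp.meshgrid(fx, fx)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * jnp.pi * jnp.sqrt(jnp.where(arg > 0, arg, 0.0))
    H = jnp.where(arg > 0, jnp.exp(1j * kz * z), 0.0)  # drop evanescent components
    return jnp.fft.ifft2(jnp.fft.fft2(field) * H)

def diffractive_network(field, phase_layers):
    """Diffract the distorted field through successive trained phase layers."""
    for phase in phase_layers:               # each layer: N x N phase 'neurons'
        field = angular_spectrum(field, z)   # free-space gap to the next layer
        field = field * jnp.exp(1j * phase)  # thin transmissive phase modulation
    return jnp.abs(angular_spectrum(field, z)) ** 2  # intensity at the output plane
```

Once the phase values are fixed by training, this entire computation is carried out physically by light passing through the fabricated layers, which is why no electronic processor is needed at inference time.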

Many different, randomly selected phase diffusers were used during training to help the diffractive optical network generalize. After this one-time, deep learning-based design, the resulting layers are fabricated and assembled into a physical network positioned between an unknown, new diffuser and the output/image plane. The trained network collects the light scattered by the random diffuser and instantaneously reconstructs an image of the object, entirely optically.
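The sketch below illustrates that one-time training idea under the same assumptions as before, reusing angular_spectrum and diffractive_network from the previous snippet. At every optimization step, a fresh, randomly generated phase diffuser distorts the object field, and the layers are updated so the optical output matches the undistorted object. The loss, learning rate, number of layers, and the sample_object placeholder (a stand-in for a real training image set) are all hypothetical, not the paper's exact recipe.

```python
import jax
import jax.numpy as jnp

def sample_object(key):
    # Hypothetical stand-in for a training image (the paper trained on image
    # data); here, a sparse random binary amplitude pattern.
    return (jax.random.uniform(key, (N, N)) > 0.9).astype(jnp.float32)

def loss_fn(phase_layers, obj, diffuser_phase):
    # Object plane -> diffuser plane, then an unknown random phase distortion.
    field = angular_spectrum(obj.astype(jnp.complex64), z)
    field = field * jnp.exp(1j * diffuser_phase)
    # All-optical reconstruction by the trainable layers vs. the clean object.
    out = diffractive_network(field, phase_layers)
    return jnp.mean((out / jnp.max(out) - obj) ** 2)

phase_layers = [jnp.zeros((N, N)) for _ in range(4)]  # e.g., four trainable layers
grad_fn = jax.jit(jax.grad(loss_fn))
key, lr = jax.random.PRNGKey(0), 0.1

for step in range(2000):
    key, k_diff, k_obj = jax.random.split(key, 3)
    # A fresh, randomly generated diffuser at every step forces the layers to
    # handle the diffuser class rather than memorize one distortion. (Uniform
    # per-pixel phase is a simplification of a spatially correlated diffuser.)
    diffuser_phase = jax.random.uniform(k_diff, (N, N), maxval=2.0 * jnp.pi)
    obj = sample_object(k_obj)
    grads = grad_fn(phase_layers, obj, diffuser_phase)
    phase_layers = [p - lr * g for p, g in zip(phase_layers, grads)]
```

Because every step sees a different diffuser, the optimization cannot overfit to any single distortion; it must find layers that undo the statistics of the whole diffuser class, which is what allows the fabricated network to handle diffusers it never saw during training.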

There is no need for a computer or digital reconstruction algorithm to image through an unknown diffuser. In addition, this diffractive processor does not use any external power source, aside from the light that illuminates the object behind the diffuser.

The research team experimentally validated the approach using terahertz waves. They fabricated the designed diffractive networks with a 3D printer and demonstrated that the networks could see through randomly generated phase diffusers that were never used during training. The team also showed that object reconstruction quality improves with deeper diffractive networks, built by stacking additional fabricated layers one after another.

The all-optical image reconstruction achieved by these passive diffractive layers allowed the team to see objects through unknown random diffusers, and it offers an extremely low-power alternative to existing deep learning-based or iterative image-reconstruction methods that rely on digital computers or GPUs.

The researchers believe their method could be applied to other parts of the electromagnetic spectrum, including visible and far/mid-infrared wavelengths. The reported proof-of-concept results were obtained with a single thin, random diffuser layer, and the team believes the underlying methods can potentially be extended to see through volumetric diffusers, such as fog.

This research on diffractive networks was funded by the National Science Foundation and Fujikura, a Japan-based electrical equipment manufacturing company, and has the potential to enable significant advances in fields where imaging through diffusive media is of utmost importance. Those fields include biomedical imaging, astronomy, autonomous vehicles, robotics and defense/security applications.