George Barbastathis - 2018-Wuxi

Conference Video | Duration: 21:11
August 16, 2018

Too small, too far, too dark, too foggy: on the use of Artificial Intelligence for imaging challenging objects

Computational Imaging systems consist of two parts: the physical part, where light propagates through free space or optical elements such as lenses and prisms, finally forming a raw intensity image on the digital camera; and the computational part, where algorithms try to restore image quality or extract other types of information from the raw intensity image data. Computational Imaging promises to solve the challenge of imaging objects that are too small, i.e. of size near the wavelength of illumination or smaller; too far, i.e. imaged with extremely low numerical aperture; too dark, i.e. detected at very low photon counts; or too foggy, i.e. when the light has to propagate through a strongly scattering medium before reaching the detector. In this talk I will discuss the emerging trend in computational imaging of training deep neural networks (DNNs) to attack this quartet of challenging objects. In several imaging experiments carried out by our group, objects rendered “invisible” by adverse conditions such as extreme defocus, scatter, or very low photon counts were “revealed” after processing of the raw images by DNNs. The DNNs were trained on examples consisting of pairs of known objects and their corresponding raw images. The objects were drawn from databases of faces and natural images, with brightness converted to phase through a liquid-crystal spatial phase modulator. After training, the DNNs were capable of recovering unknown objects, i.e. objects not presented during training, from the raw images; recovery was also robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection, and may generalize priors from the training set.
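
As a rough illustration of the training setup described in the abstract (not the speaker's actual code, network, or data), the sketch below trains a small convolutional image-to-image network in PyTorch on pairs of raw intensity images and their known phase objects; the network, tensor shapes, and pixel-wise loss are all hypothetical stand-ins.

```python
# Hypothetical sketch: supervised training of an image-to-image DNN on pairs
# (raw intensity image, known phase object). Not the speaker's actual code.
import torch
import torch.nn as nn

class SimpleEncoderDecoder(nn.Module):
    """Toy stand-in for the image-to-image DNN described in the abstract."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Stand-in data: in the experiments, the raw images come from the camera in the
# physical setup and the targets are the known phase objects displayed on the
# liquid-crystal spatial phase modulator.
raw_images = torch.randn(64, 1, 64, 64)     # measured raw intensity images (placeholder)
phase_objects = torch.randn(64, 1, 64, 64)  # corresponding known phase objects (placeholder)

model = SimpleEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # simple pixel-wise loss; the actual training objective may differ

for epoch in range(5):
    optimizer.zero_grad()
    pred = model(raw_images)
    loss = loss_fn(pred, phase_objects)
    loss.backward()
    optimizer.step()

# After training, the network is applied to raw images of objects never seen
# during training, to recover ("reveal") the underlying phase object.
```

In practice the training pairs would be collected from the optical system itself, and the trained network would then be evaluated on raw images of previously unseen objects, including under the perturbed conditions (extra defocus, misalignment) mentioned in the abstract.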
