Ulugbek S. Kamilov, Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
520 Pao Yue-Kong Library
In optical tomography, an object is illuminated with various input patterns and the scattered field is holographically recorded, giving access to both the amplitude and the phase of the light field at the camera plane. The image of the object is then formed numerically from the measurements, using computational inverse-scattering methods that rely on physical models describing the object-wave interaction. In this talk, we present a new computational imaging method for imaging objects from scattered light fields. Our method is based on a nonlinear physical model that accounts for multiple scattering in a computationally efficient way. Specifically, we propose to interpret the propagation of light through the object as an artificial multi-layer neural network whose adaptable parameters correspond to the voxel values of the 3D object. Training the network to reproduce the experimentally recorded input-output light-field pairs yields the 3D image of the object. Results suggest that this learning approach yields better image quality than other tomographic reconstruction methods.
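To illustrate the layered interpretation of light propagation described above, here is a minimal sketch of a multi-slice (beam-propagation-style) forward model, where each z-slice of the object acts like one "layer" and the refractive-index voxels play the role of trainable parameters. All sizes, the wavelength, and the index values are illustrative assumptions, not the exact model of the talk.

```python
# Sketch: multi-slice forward model in which each object slice is one
# "layer" of the network; the index-contrast voxels are its parameters.
# All numerical values below are illustrative assumptions.
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum free-space propagation over a distance dz (1-D)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies
    k0 = 2 * np.pi / wavelength
    kz = np.sqrt(np.maximum(k0**2 - (2 * np.pi * fx)**2, 0.0))
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

def forward_model(input_field, delta_n, dz, wavelength, dx):
    """Diffract, then phase-modulate, once per object slice ("layer")."""
    k0 = 2 * np.pi / wavelength
    field = input_field
    for slice_dn in delta_n:                         # each row = one z-slice
        field = propagate(field, dz, wavelength, dx)
        field = field * np.exp(1j * k0 * slice_dn * dz)  # object phase shift
    return field

# Tiny 1-D example: 64 samples, 5 slices, plane-wave input.
rng = np.random.default_rng(0)
u_in = np.ones(64, dtype=complex)
dn = 0.01 * rng.random((5, 64))      # refractive-index contrast ("weights")
u_out = forward_model(u_in, dn, dz=1e-6, wavelength=0.5e-6, dx=0.5e-6)
print(u_out.shape)                   # (64,)
```

In a learning-based reconstruction, one would compare `u_out` with the recorded output field and update `dn` by gradient descent, exactly as network weights are trained from input-output pairs.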
Kamilov et al., "Learning Approach to Optical Tomography," Optica, vol. 2, no. 6, pp. 517-522, 2015.