Speaker: Yichen Wu
Affiliation: Ph.D. Candidate, University of California, Los Angeles (UCLA)
Abstract: Exponential advances in computational resources and algorithms have given rise to a new imaging paradigm that relies on computation to digitally reconstruct and enhance images. Such computational imaging modalities have enabled higher resolution, larger throughput, and/or automatic detection capabilities for optical microscopy. One example is the lens-less digital holographic microscope, which enables snapshot imaging of volumetric samples over a wide field-of-view without using imaging lenses. Recent developments in deep learning have further opened up exciting avenues for computational imaging, offering unprecedented performance thanks to the ability of deep networks to robustly learn content-specific, complex image priors.
This talk introduces a universal deep learning-based image reconstruction framework that tackles various challenges in optical microscopy, including digital holographic reconstruction and 3D fluorescence microscopy. First, auto-focusing and phase recovery in holographic reconstruction are conventionally challenging and time-consuming to perform digitally. A convolutional neural network (CNN)-based approach was developed that solves both problems rapidly and in parallel, enabling extended depth-of-field holographic reconstruction and reducing the time complexity from O(mn) to O(1). Second, to fuse the snapshot volumetric imaging capability of digital holography with the speckle- and artifact-free image contrast of bright-field microscopy, a CNN was used to transform across microscopy modalities, converting holographic image reconstructions into their equivalent high-contrast bright-field microscopic images. Third, 3D fluorescence microscopy generally requires axial scanning. A CNN was trained to learn the defocusing behavior of fluorescence and digitally refocus a single 2D fluorescence image onto user-defined 3D surfaces within the sample volume, extending the depth-of-field of fluorescence microscopy 20-fold without axial scanning, additional hardware, or any trade-off in imaging resolution or speed. This enables high-speed volumetric imaging and digital aberration correction for live samples.
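For context on the autofocusing cost mentioned above: a conventional holographic autofocus repeats free-space back-propagation of the hologram at every candidate depth and scores each result with a sharpness metric, which is what makes the search depth-dependent; the CNN replaces that search with a single forward pass. Below is a minimal, hedged sketch (not the speaker's implementation) of the standard angular spectrum propagation step that such a depth search would repeat; the function name and parameter choices are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex 2D optical field by distance dz using the
    angular spectrum method (the standard kernel in digital holography).

    field      : 2D complex array (hologram or reconstructed field)
    wavelength : illumination wavelength [m]
    dz         : propagation distance [m] (negative = back-propagation)
    dx         : pixel pitch [m]
    """
    n_rows, n_cols = field.shape
    fx = np.fft.fftfreq(n_cols, d=dx)          # spatial frequencies, x
    fy = np.fft.fftfreq(n_rows, d=dx)          # spatial frequencies, y
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components (arg < 0) are zeroed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A conventional autofocus would loop this over m candidate depths,
# scoring each reconstruction with a focus metric (e.g., Tamura coefficient).
```

Because the transfer function is a pure phase factor for propagating components, forward propagation followed by back-propagation over the same distance recovers the original field, which is a convenient sanity check for the kernel.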
Building on deep learning-powered computational microscopy, a hand-held device was also developed to measure particulate matter and bio-aerosols in the air using the lens-less digital holographic imaging geometry. This device, named c-Air, demonstrates accurate, high-throughput, and automatic detection, sizing, and classification of airborne particles, opening new opportunities in deep learning-based environmental sensing and personalized and/or distributed air quality monitoring.
Biography: Yichen Wu is a Ph.D. candidate in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). He received his Bachelor of Science in Engineering (B.S.E.) in Information Engineering (Optics) from Zhejiang University, China, in 2014. Yichen has authored or co-authored 21 journal articles and 18 conference proceedings on computational and biomedical imaging/sensing. He is a winner of the UCLA Dissertation Year Fellowship, the SPIE John Kiel Scholarship, the Vodafone Americas Wireless Innovation Award, and the UCLA EE departmental fellowship. Yichen also served as the 2017-2018 president of the UCLA chapter of OSA & SPIE.
For more information, contact Prof. Aydogan Ozcan.
Date(s) - Oct 14, 2019
11:00 am - 12:30 pm
E-IV Faraday Room #67-124
420 Westwood Plaza - 6th Flr., Los Angeles CA 90095