About Me

    I am a Ph.D. candidate in Computational Imaging at UC Berkeley and UC San Francisco, advised by Laura Waller and funded by the NSF GRFP. My work combines optical design, convex optimization, and deep learning to achieve capabilities that are not possible with conventional imaging setups. I have designed systems for single-shot 3D imaging, hyperspectral imaging, HDR imaging, and digital holographic microscopy, as well as deep learning architectures for fast spatially varying deconvolution. I have interned with Microsoft Research and Facebook Reality Labs. I graduated from UCLA with a B.S. in Bioengineering in 2016, where I worked with Aydogan Ozcan on designing digital holographic microscopes for water monitoring.

    Research Projects

    Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy 

    K. Yanny*, N. Antipa*, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, L. Waller

    Project Page / Paper (Nature LS&A)   

    In this work, we design a miniature single-shot 3D microscope by replacing the tube lens of a 2D Miniscope with an optimized phase mask, printed using a Nanoscribe 3D printer. The resulting microscope is inexpensive, tiny (the size of a quarter), and can capture 3D fluorescent volumes from a single image, with 2 micron lateral resolution and 10 micron axial resolution at 40 frames per second and no moving parts. We also develop theory for efficiently modeling field-varying aberrations and metrics to optimize the phase mask. Check out more of our 3D single-shot reconstructions here.

    Spectral DiffuserCam: lensless snapshot hyperspectral imaging

    K. Yanny*, K. Monakhova*, N. Aggarwal, L. Waller

    Project Page / Paper (Optica)

    In this work, we propose a novel, compact, and inexpensive computational camera for snapshot hyperspectral imaging. Our system consists of a repeated spectral filter array placed directly on the image sensor and a diffuser placed close to the sensor; it recovers the hyperspectral volume with good spatio-spectral resolution by solving a sparsity-constrained inverse problem. Because it uses a spectral filter array, our hyperspectral imaging framework is flexible and can be designed with contiguous or non-contiguous spectral filters chosen for a given application.
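    The sparsity-constrained recovery can be sketched as a proximal-gradient (ISTA) loop; the measurement matrix and sparse signal below are toy stand-ins for illustration, not the actual lensless forward model:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.01, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a 3-sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.01, n_iter=500)
```

With enough measurements relative to the sparsity level, the loop recovers the support and approximate values of the sparse signal.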

     

    Deep learning for fast spatially-varying deconvolution

    K. Yanny*, K. Monakhova*, R. W. Shuai, L. Waller

    Project Page / Paper (Optica)

    Deconvolution can be used to obtain sharper images or volumes from blurry or encoded measurements in imaging systems. Given knowledge of the system's point spread function (PSF) across the field-of-view, a reconstruction problem can be solved to recover a clear image or volume. In realistic systems, the PSF often varies with lateral and axial object position due to aberrations or design; however, most deconvolution algorithms assume shift-invariance. Shift-varying models can be used, but are often slow and computationally intensive. In this work, we propose a deep learning-based approach that leverages knowledge of the system's spatially-varying PSFs for fast 2D and 3D reconstructions in the presence of spatially-varying aberrations. Our approach, termed MultiWienerNet, uses multiple differentiable Wiener filters paired with a convolutional neural network to incorporate spatial variance. Trained on simulated data and tested on experimental data, our approach offers a 625–1600× speed-up over iterative methods with a spatially-varying model and outperforms existing deep learning-based methods that don't model shift-variance.
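    The Wiener-filter stage at the core of this kind of approach can be sketched in a few lines. In the actual network the filters and regularization are learned and differentiable; here (purely for illustration) they are fixed, using NumPy FFTs and assuming circular convolution:

```python
import numpy as np

def wiener_deconv(meas, psf, reg=1e-4):
    """Frequency-domain Wiener deconvolution with a scalar regularizer.

    In a MultiWienerNet-style model this step is differentiable and the
    PSFs / regularizers are trained; they are hand-set in this sketch.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))      # transfer function of the blur
    M = np.fft.fft2(meas)
    X = np.conj(H) * M / (np.abs(H) ** 2 + reg)  # regularized inverse filter
    return np.real(np.fft.ifft2(X))

# Toy example: blur an image with a known centered PSF, then invert the blur.
img = np.zeros((32, 32))
img[10:14, 10:14] = 1.0
psf = np.zeros((32, 32))
psf[16, 15:18] = 1.0 / 3.0                       # small horizontal box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
recon = wiener_deconv(blurred, psf, reg=1e-4)
```

The regularizer trades off noise amplification against sharpness; a learned, spatially-resolved version of that trade-off is what makes the filter bank worth training.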

     

    Physics-based learning for lensless imaging

    K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, L. Waller

    Project Page / Paper (Optics Express)

    Mask-based lensless imagers, like DiffuserCam, can be small and compact and can capture higher-dimensional information (3D, temporal), but reconstruction is slow and image quality is often degraded. In this work, we show that knowledge of the optical system's physics can be combined with deep learning to form an unrolled model-based network that solves the reconstruction problem, using physics and deep learning together to speed up and improve image reconstructions. Compared to traditional methods, our architecture achieves better perceptual image quality and runs 20× faster, enabling interactive previewing of the scene.
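    The idea of unrolling can be sketched as a classical iterative solver run for a fixed number of iterations, where each iteration becomes one network layer. In a real physics-based network the per-layer parameters (and typically a learned denoiser per layer) are trained end-to-end; this toy version hand-sets them to show the structure only:

```python
import numpy as np

def unrolled_gd(A, b, alphas):
    """Unrolled gradient descent: one 'layer' per entry of `alphas`.

    A is the (known) physical forward model; in a trained network the
    step sizes `alphas` would be learnable parameters.
    """
    x = np.zeros(A.shape[1])
    for a in alphas:                       # each loop body = one network layer
        x = x - a * (A.T @ (A @ x - b))    # physics-informed gradient step
    return x

# Toy example: a well-posed linear inverse problem solved by a fixed-depth unroll.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = unrolled_gd(A, b, alphas=[0.01] * 100)
```

Because the depth is fixed, the whole solver is a differentiable computation graph, which is what lets the physics model and the learned components be trained jointly.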

     

    Portable Digital Holographic Microscope for Water Monitoring

    Z. Gorocs, M. Tamamitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Rivenson, A. Ozcan

    Paper (Nature LS&A) 

    We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of micro-objects flowing inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling.
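    The numerical propagation that underlies holographic reconstruction can be sketched with the angular spectrum method (wavelength, pixel size, and distance below are illustrative placeholders, not the device's actual parameters):

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex optical field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # (k_z / 2*pi)^2 for each frequency
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H = np.where(arg > 0, H, 0.0)                  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example: propagate a Gaussian field to a "hologram plane", then
# numerically back-propagate it to recover the original field.
dx, wavelength, z = 1e-6, 0.5e-6, 100e-6
x = (np.arange(64) - 32) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (8 * dx) ** 2).astype(complex)
hologram_plane = angular_spectrum(field, z, wavelength, dx)
recovered = angular_spectrum(hologram_plane, -z, wavelength, dx)
```

Back-propagation alone leaves the missing-phase problem unsolved; the deep learning step in the paper supplies the phase recovery that this linear propagation cannot.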
