As the final project for Stanford's EE367 Computational Imaging course, a novel method for single-shot high-dynamic-range (HDR) imaging was developed. Single-shot HDR can be performed using a spatially varying exposure (SVE). The SVE image is typically decoded into an HDR image using either neural networks or standard interpolation, with some information discarded in the process. These techniques can be undesirable: neural network approaches are inflexible, and interpolation approaches lose high-frequency detail.
Here, a method for SVE-based single-shot HDR imaging that uses compressed sensing techniques as the decoder is proposed and demonstrated. This method discards no information and can be used in conjunction with any existing exposure fusion technique. We will see that it can accurately reconstruct a busy HDR scene from a single-shot image with PSNRs of up to 28 dB.
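To make the compressed sensing decoding step concrete, the sketch below shows one common way such a decoder can be posed: each exposure channel of the SVE mosaic is recovered from the subset of pixels it owns by solving an l1-regularized inpainting problem with a DCT sparsity prior, optimized via iterative soft-thresholding (ISTA). This is a minimal illustrative sketch, not the exact solver used in this project; the mask convention, regularization weight, and iteration count are all assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ista_inpaint(y, mask, lam=0.05, n_iter=200):
    """Recover one full exposure image from the pixels it owns in the SVE
    mosaic by solving  min_x 0.5*||M x - y||^2 + lam*||DCT(x)||_1
    with ISTA.  `mask` is 1 where this exposure was sampled, 0 elsewhere;
    `y` holds the sampled pixel values (zeros elsewhere).
    Note: lam and n_iter are illustrative values, not tuned settings."""
    x = y.copy()
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term.  M is a diagonal 0/1
        # sampling operator, so its Lipschitz constant is 1 and a unit
        # step size is valid.
        x = x - mask * (mask * x - y)
        # Proximal step: soft-threshold the orthonormal DCT coefficients.
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm="ortho")
    return x
```

Because each exposure is recovered at full resolution, the outputs can then be handed to any exposure fusion routine, which is what allows the approach to plug into existing fusion pipelines.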
Single-shot HDR often relies on an SVE to encode additional exposure information about the scene into a single image. This is typically implemented with a physical neutral-density filter mask, spatially varying ISO, or a variable shutter. In this work, an SVE image was simulated by splicing together pixels from shots at various exposures, as shown above.
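As a minimal sketch of that splicing step, the snippet below assembles an SVE mosaic from an aligned stack of LDR exposures using a repeating tile that assigns each pixel to one exposure. The 2x2 assignment pattern, function name, and array shapes are assumptions for illustration; the actual splicing pattern used in this work may differ.

```python
import numpy as np

def simulate_sve(exposures, tile=None):
    """Splice pixels from a stack of aligned LDR exposures into one SVE mosaic.
    `exposures` is an (N, H, W) array of shots at increasing exposure;
    `tile` is a small integer array assigning each pixel of a repeating
    block to one of the N exposures (a 2x2 pattern is assumed here)."""
    n, h, w = exposures.shape
    if tile is None:
        tile = np.array([[0, 1],
                         [2, 0]]) % n          # assumed repeating pattern
    # Expand the tile over the full image and pick each pixel from the
    # exposure that the pattern assigns to it.
    rows = np.arange(h)[:, None] % tile.shape[0]
    cols = np.arange(w)[None, :] % tile.shape[1]
    idx = tile[rows, cols]                     # (H, W) exposure index map
    sve = np.take_along_axis(exposures, idx[None, ...], axis=0)[0]
    return sve, idx
```

The returned index map doubles as the sampling mask for each exposure, which is exactly the `mask` input the decoding sketch above expects.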
HDR results using the N = 3 exposure data sets from Merianos et al. a) and d): MEF LDR data sets. b) and e): Ground-truth exposure fusion results using all three LDR images. c) and f): Single-shot SVE exposure fusion results using our technique, with PSNRs of 30.90 dB and 30.53 dB, respectively.