Supplementary Figure 1
Diagram of the noise correction algorithm
Starting with a raw sCMOS frame, the sequential steps include pre-correction using the offset and gain pixel maps, calculation of the negative log-likelihood using the variance and gain pixel maps and of the noise contribution in Fourier space, and iterative updates to minimize the pixel-wise sum of the two quantities (see Supplementary Notes 2-7).
Supplementary Figure 2
Temporal fluctuation comparison of fluorescence microscopy images.
Peroxisome membrane proteins in COS-7 cells tagged with tdEos were imaged on a conventional wide-field fluorescence microscope. (A) Temporal standard deviation (STD) map over 400 sCMOS frames (pre-corrected by gain and offset). The colormap scale is from min (STD of 2.3, black) to max (STD of 12.3) in units of effective photon count. (B) Temporal STD map over 400 NCS (noise correction for sCMOS camera) frames. The terms sCMOS frame and NCS frame will be used throughout the supplementary figures. (C) Zoomed-in regions i and ii from A and B show that pixels with high variance are effectively removed after NCS. (D) Pixel intensity traces of selected pixels from cropped regions i and ii over 50 frames. For the pixels with high readout noise (pixels 1 and 3), the value fluctuation decreases significantly after NCS, while for the pixels with low readout noise (pixels 2 and 4), the pixel value fluctuation remains the same. The mean pixel values stay the same in both the high and low readout-noise cases before and after noise correction.
Supplementary Figure 3
Pixel fluctuation comparison before and after noise correction at low photon levels.
End-binding protein 3 in COS-7 cells tagged with tdEos was imaged on a conventional wide-field fluorescence microscope. (A) A single sCMOS frame, pre-corrected for gain and offset for comparison purposes, with an exposure time of 10 ms at time point t = 0 s. (B) Time series of selected regions in A from sCMOS frames and the corresponding NCS frames, showing the significant reduction of sCMOS noise while maintaining the underlying signal.
Supplementary Figure 4
Resolution comparison using both experimental data and simulated data.
(A) 100 nm yellow-green fluorescent bead images from the sCMOS camera and NCS-corrected images. To cancel readout noise in the sCMOS frames for a fair comparison between the sCMOS frames and NCS frames, images were averaged over 200 frames in both cases. The intensity profiles were generated by averaging over the vertical dimension of each bead image and fitted with a Gaussian function to extract their widths, σsCMOS and σNCS. (B) Simulated bead images based on the parameters in Supplementary Note 14. The simulated bead images were averaged over 20 frames from sCMOS and NCS frames. In both the experimental and simulated data, σNCS is slightly larger than σsCMOS, resulting in 5.5 nm and 4 nm decreases in resolution respectively, a small decrease that is potentially negligible compared with the diffraction limit of approximately 250 nm.
Supplementary Figure 5
Comparison of NCS result using OTF weighted and noise only masks.
Peroxisome membrane proteins in COS-7 cells tagged with tdEos were imaged on a conventional wide-field fluorescence microscope. (A) Temporal standard deviation maps over 400 frames for sCMOS frames, and for NCS frames with the OTF-weighted mask and with the noise-only mask respectively. The colormap scale is from min (STD of 2.3, dark red) to max (STD of 12.3, white) in units of effective photon count. (B) Average of a sequence of sCMOS and NCS frames as in A and their corresponding amplitudes in Fourier space. Averaged images were used to cancel pixel-dependent noise for a fair comparison of sCMOS and NCS frames. (C) Radial average of the amplitude in Fourier space from the sCMOS and NCS frames.
Supplementary Figure 6
Comparison of NCS algorithm with low pass filter.
In order to illustrate the fundamental differences between low pass filters and the NCS algorithm, the simulated bead data use a simulated high-variance map (3000~6000 ADU²). (A) sCMOS frame. (B) NCS frame. (C) The sCMOS frame blurred by a 2D Gaussian kernel with a sigma equal to 1 pixel. (D) The sCMOS frame after a low pass filter with a cutoff frequency equal to the OTF radius. The cutout region in each image is a 2x zoom of the region above the white box. Neither the Gaussian blur nor the OTF filter can effectively remove the sCMOS noise (yellow boxes and yellow arrows), while the NCS algorithm significantly reduces the sCMOS noise fluctuations. Furthermore, the Gaussian blur also reduces the resolution of the original image (red circles) (Supplementary Note 12).
Supplementary Methods
Optical setup
Images were recorded on a custom-built setup based on an Olympus IX-73 microscope stand
(IX-73, Olympus America Inc., Waltham, MA) with a 100x/1.35 NA silicone oil-immersion
objective lens (FV-U2B714, Olympus America Inc.) and a Mercury lamp, 560 nm (2RU-VFL-P-
500-560-B1R, MPB Communications Inc., Pointe Claire, Quebec, Canada) or a 642 nm laser
(2RU-VFL-P-2000-642-B1R, MPB Communications Inc.) for excitation. The filter turret
contained a standard FITC filter cube (FITC-3540C-OFF, Semrock Inc., Rochester, N.Y.) for
excitation with the Mercury lamp or a quadband dichroic and quad bandpass filters (Di03-
R405/488/561/635-t1 and FF01-446/523/600/677, Semrock Inc.) for excitation with the 560 nm
and 642 nm lasers. A deformable mirror (Multi-3.5, Boston Micromachines, Cambridge, MA)
reduced optical aberrations following the procedure described previously1,2. Fluorescence
signals excited by the 642 nm laser passed through an additional bandpass filter (FF01-
731/137-25, Semrock Inc.) placed just before the camera. The fluorescence signal was
recorded on an sCMOS camera, Orca-Flash4.0 V3 (Hamamatsu, Tokyo, Japan). The overall system
magnification was ~70x, resulting in an effective pixel size of 91 nm.
COS-7 cell culture and EB3 and PMP labeling
COS-7 cells (a gift from Yale tissue culture facility) were seeded on 25 mm diameter coverslips
(CSHP-No1.5-25, Bioscience Tools) one day before transfection. Transfections of td-Eos-EB3-7
(a gift from Michael Davidson, Addgene plasmid # 57610)3 or tdEos-PMP-N-10 (a gift from
Michael Davidson, Addgene plasmid # 57658)4 were performed using Lipofectamine 2000
(Thermofisher Scientific, Waltham, MA) following the manufacturer’s protocol when cells
were >50% confluent. Cells were imaged one day after transfection. Just prior to imaging, the
cells were washed three times with 37°C DMEM with 10% glucose and imaged with 1 mL of
37°C DMEM with 10% glucose. Please see the Life Sciences Reporting Summary for additional information.
Aplysia bag cell neuron culture and F-actin labeling
Bag cell neurons were harvested from adult Aplysia (Marinus, Long Beach, CA) and plated on
Poly-L-Lysine (PLL, 70-150KDa, Sigma, St. Louis, MO) coated 35 mm glass-bottom dishes
(MatTek Corporation, Ashland, MA). Dishes were acid cleaned and washed multiple times
before Ultraviolet sterilization and coating with 20 µg/mL PLL. L15 (Invitrogen; Life
Technologies, Grand Island, NY) with artificial sea water served as culture media (L15
supplemented with 400 mM NaCl, 9 mM CaCl2, 27 mM MgSO4, 28 mM MgCl2, 4 mM L-
glutamine, 50 µg/mL gentamicin, and 5 mM HEPES, pH 7.9, osmolarity 950–1000 mOsm). Cells
were incubated at 14 °C for 20 hours to allow for the formation of growth cones5. To label actin filaments (F-actin), SiR-Actin (Cytoskeleton Inc., Denver, CO) together with Verapamil (Cytoskeleton Inc., Denver, CO) were added to the culture media to reach final concentrations of 500 nM and 1 µM, respectively. Cells were labeled at room temperature (RT) for 4 hours before imaging. Please see the Life Sciences Reporting Summary for additional information.
Data acquisition and analysis
COS-7 cells and neurons were illuminated with the mercury lamp and the 642 nm laser
respectively. The fluorescence beads were illuminated with the 560 nm laser. The exposure
time was 10-40 ms (40 ms for Figure 1 and Supplementary Figs. 2 and 4, and 10 ms for
Supplementary Fig. 3, Supplementary Videos 1 and 2, and Fig. SS1). The illumination
intensity was adjusted so that the brightest pixel value in ADU (Analog to Digital Units) was
150~200. In the camera settings, the defect correction was turned off and the hot pixel
correction was set to minimum. Data processing steps are described in detail in the
Supplementary Notes.
Supplementary Notes
1. Introduction
An image recorded on an sCMOS camera mainly contains two types of noise: the Poisson noise, which originates from the quantum nature of light, and the readout noise, which is introduced during the readout process of a camera6–8. In comparison with EMCCD cameras, which have negligible effective readout noise but significant electron multiplication noise9, scientific CMOS (sCMOS) cameras have been characterized to have pixel-dependent camera statistics – each pixel has its own offset, variance and gain4. This is largely due to independent readout units for each pixel, while in EMCCD cameras each pixel is processed serially using the same readout unit. Therefore, pixels on sCMOS cameras appear to ‘flicker’ even when there are no expected incident photons, and the level of this noise changes drastically across pixels, from 1-2 ADU² to 1000-2000 ADU² (ADU stands for analog to digital unit, the output count from the camera)4. This noise drastically reduces the image quality and makes quantitative study using sCMOS cameras a challenge, especially in applications that require efficient detection of low photon numbers, for example in fluorescence microscopy, astronomy and hyperspectral detection.
2. Outline of sCMOS noise correction algorithm
The noise correction algorithm for sCMOS camera (NCS) reduces the readout noise from a
single image of the sCMOS camera to produce a noise corrected image.
The algorithm consists of the following steps (Supplementary Fig. 1):
1. Correct offset and gain using pre-characterized values (some sCMOS camera
manufacturers now provide these parameter maps) for each pixel to obtain a pre-
corrected image 𝐷 (Supplementary Note 3). Set all pixels with non-positive values in 𝐷
to a small but non-zero value such as 10⁻⁶.
2. Segment input image into sub-images with M by M pixels. The recommended value for
M is 8~32.
3. For each segment, obtain an estimate by minimizing a cost function 𝑓 = 𝐿𝐿𝑆 + 𝛼𝜎𝑁
where LLS and 𝜎𝑁 stand for the simplified negative log-likelihood (Supplementary Note
6) and the noise contribution near and outside the OTF periphery respectively
(Supplementary Notes 4 and 5), and 𝛼 is an empirical weight factor; here we choose
𝛼 = 0.2 (Fig. SS6). One can use a generic minimization routine such as ‘fminunc’ in
MATLAB (MathWorks, Natick, MA). The initial values of the estimates are set to the
values of the segments from 𝐷.
4. For each segment, repeat step 3. All segments are independent and therefore can be
processed in parallel through GPU or CPU. The regression converges within 20
iterations.
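As a concrete illustration, the per-segment minimization of step 3 can be sketched in Python. This is a minimal sketch, not the released software: SciPy's L-BFGS-B routine stands in for a generic minimizer such as MATLAB's 'fminunc', and the segment 𝐷, the 𝛾 map and the noise mask below are toy values.

```python
import numpy as np
from scipy.optimize import minimize

def ncs_cost(mu_flat, D, gamma, mask, alpha):
    """Cost f = LL_S + alpha * sigma_N for one segment (Supplementary Notes 4-6)."""
    mu = np.maximum(mu_flat.reshape(D.shape), 1e-6)        # keep the estimate positive
    lls = np.sum(mu - (D + gamma) * np.log(mu + gamma))    # simplified neg. log-likelihood (Eq. 7)
    F = np.fft.fft2(mu) / mu.shape[0]                      # DFT with the 1/L normalization (Eq. 2)
    sigma_n = np.sum(np.abs(F * mask) ** 2)                # noise contribution (Eq. 3)
    return lls + alpha * sigma_n

rng = np.random.default_rng(0)
M = 8
D = np.maximum(rng.poisson(20.0, (M, M)).astype(float), 1e-6)  # toy pre-corrected segment
gamma = np.full((M, M), 2.0)                                   # toy var/gain^2 map
# toy noise-only mask in the (unshifted) ordering of np.fft.fft2
kx, ky = np.meshgrid(np.fft.fftfreq(M), np.fft.fftfreq(M), indexing="ij")
mask = (np.hypot(kx, ky) > 0.25).astype(float)

res = minimize(ncs_cost, D.ravel(), args=(D, gamma, mask, 0.2),
               method="L-BFGS-B", options={"maxiter": 20})
ncs_segment = res.x.reshape(M, M)
```

As in step 3, the initial value is the segment of 𝐷 itself, and the returned estimate can only lower the cost relative to that starting point.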
3. Pre-correction
Based on pre-calibrated sCMOS camera offset and gain maps, this step reduces the number of
calculations within the iterative process. Please note that, for simplicity, we assume a single gain map and a single variance map associated with an sCMOS chip. This concept can be extended to sensors with multiple amplifiers (e.g. column amplifiers) as described in Supplementary Note 10. An image 𝐴 obtained from an sCMOS camera contains 𝑁 pixels. The pixel
dependent camera offset and gain are denoted as 𝑜 and 𝑔, where 𝑜𝑖 and 𝑔𝑖 indicate the specific
value for pixel 𝑖. We obtain 𝐷𝑖 of pixel 𝑖 from4,10:
𝐷𝑖 = (𝐴𝑖 − 𝑜𝑖)/𝑔𝑖    (1)
The resulting image 𝐷 is subsequently used in the iterative process described in the following
sections. Pixel-dependent gain maps can be either characterized in the lab or obtained from the
manufacturer depending on the sensor type and in-factory calibration process.
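A minimal sketch of this pre-correction step (Eq. 1), with toy calibration maps rather than real camera characterization data:

```python
import numpy as np

def precorrect(A, offset, gain, eps=1e-6):
    """Eq. 1: D_i = (A_i - o_i) / g_i, with non-positive pixels set to a small
    non-zero value (Supplementary Note 2, step 1)."""
    D = (A - offset) / gain
    return np.where(D > 0, D, eps)

# toy raw frame and calibration maps (not real camera data)
A = np.array([[104.0, 98.0], [110.0, 95.0]])   # raw ADU counts
offset = np.full((2, 2), 100.0)                # per-pixel offset map
gain = np.full((2, 2), 2.0)                    # per-pixel gain map
D = precorrect(A, offset, gain)                # [[2.0, 1e-6], [5.0, 1e-6]]
```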
4. Noise contribution of a microscopy image
A diffraction limited imaging system has a cutoff frequency above which the higher frequency
signal cannot be collected11,12. The cutoff frequency is defined by the numerical aperture (NA),
and the detection wavelength (𝜆), of the imaging system. We used this cutoff frequency to
extract the noise part of the image. Aberrations in the microscope system might decrease the
cutoff frequency which makes the use of theoretical cutoff a conservative approach (see
Supplementary Note 9 for details). In the 2D Fourier transform of a microscopy image, the
signal from the diffraction limited system is only contained in a circular region defined by the
optical transfer function (OTF) of the imaging system. The OTF is the autocorrelation of the
pupil function of the imaging system11. For an ideal imaging system, the pupil radius is
𝑁𝐴/𝜆12,13, and the OTF radius is therefore twice this quantity, 2𝑁𝐴/𝜆.
Given an image 𝜇 with L by L (L being an even positive integer) pixels, its 2D discrete Fourier
transform is calculated from,
𝜇̂(𝑘𝑥, 𝑘𝑦) = (1/𝐿) ∑_{𝑚=−𝐿/2}^{𝐿/2−1} ∑_{𝑛=−𝐿/2}^{𝐿/2−1} 𝜇(𝑥, 𝑦) 𝑒^{−𝑖2𝜋𝑘𝑥𝑥} 𝑒^{−𝑖2𝜋𝑘𝑦𝑦},    (2)
in which 𝑥 = 𝑚Δ𝑥, 𝑦 = 𝑛Δ𝑦, 𝑘𝑥 = 𝑝/(𝐿Δ𝑥), 𝑘𝑦 = 𝑞/(𝐿Δ𝑦); 𝑚, 𝑛, 𝑝, 𝑞 are incremented by 1, with 𝑝, 𝑞 ∈ [−𝐿/2, 𝐿/2 − 1]; Δ𝑥 and Δ𝑦 represent the pixel size of the image in the two dimensions, and generally Δ𝑥 = Δ𝑦.
An OTF mask image 𝑀(𝑘𝑥, 𝑘𝑦) is generated from a raised cosine filter (as described in Supplementary Note 5) and the noise contribution in 𝜇̂ can then be calculated from,
𝜎𝑁 = ∑_{𝑘𝑥,𝑘𝑦} |𝜇̂(𝑘𝑥, 𝑘𝑦) 𝑀(𝑘𝑥, 𝑘𝑦)|²    (3)
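Equations 2 and 3 can be sketched together as follows; the image and mask here are toy inputs, and NumPy's FFT stands in for the DFT of Eq. 2, with an extra fftshift so that the zero frequency sits at the center of the grid:

```python
import numpy as np

def noise_contribution(mu, mask):
    """Eqs. 2-3: sum of |DFT(mu) * M|^2 over all frequencies, with the centered
    DFT normalized by 1/L so that p, q run over [-L/2, L/2 - 1]."""
    L = mu.shape[0]
    F = np.fft.fftshift(np.fft.fft2(mu)) / L
    return np.sum(np.abs(F * mask) ** 2)

L = 8
mu = np.ones((L, L))                     # flat image: all energy at zero frequency
mask = np.ones((L, L))
mask[L // 2, L // 2] = 0.0               # mask out only the DC term at the center
sigma_n = noise_contribution(mu, mask)   # 0.0: a flat image has no off-DC energy
```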
5. Raised cosine filter implementation on OTF mask
We implemented a high-pass raised-cosine filter to generate the OTF mask for the calculation of
the noise contribution in the NCS algorithm. The raised-cosine filter enables the user to adjust
the sharpness of the mask boundary and the degree of filtering ability. The OTF mask is
generated from,
𝑀(𝑘𝑥, 𝑘𝑦) =
1, if 𝑘𝑟 > (1 + 𝛽)/(2𝑇),
0, if 𝑘𝑟 < (1 − 𝛽)/(2𝑇),
1/2 − (1/2) cos[(𝜋𝑇/𝛽)(𝑘𝑟 − (1 − 𝛽)/(2𝑇))], otherwise.    (4)
Figure SS1: Comparison of both experimental and simulated data in Fourier space. An experimental image averaged over 400 frames was used to cancel the pixel-dependent noise and provide a clear view of its Fourier space amplitude. (A) Comparison of sCMOS frames and NCS frames of conventional fluorescence images of EB3-tdEos in COS-7 cells. The real space image was obtained by averaging over 400 sCMOS frames, and the images in Fourier space are the amplitude part of the Fourier transform of the real space images. For NCS frames, the radial average of the Fourier space amplitude shows a significant decrease beyond the OTF cutoff frequency (2𝑁𝐴/𝜆). (B) Comparison of ideal, sCMOS and NCS frames of simulated microtubule images. The real space images are a single frame of the simulated data, including the ideal case (no noise), the sCMOS frame, and the noise-corrected image, while the bottom row shows their amplitudes in Fourier space. The amplitude of the Fourier space image after NCS drops at the outer radius, with its value remaining higher over the entire frequency range in comparison with the ideal image. The color bar indicates the effective photon count.
where 𝑘𝑟 = √(𝑘𝑥² + 𝑘𝑦²). In the paper, we demonstrated two types of OTF masks. One is referred
to as the OTF-weighted mask (Supplementary Fig. 5), where 𝛽 = 1 and 𝑇 = 𝜆/(5.6𝑁𝐴), and the other is referred to as the noise-only mask (Supplementary Fig. 5), where 𝛽 = 0.2 and 𝑇 = (1 − 𝛽)𝜆/(4𝑁𝐴). We found that the OTF-weighted mask gives the best performance with regard to noise correction. However, it sacrifices a small amount of signal information (Fig. SS1), which results in a slight decrease in resolution (around 2-5 nm, Supplementary Fig. 4), while for the noise-only mask the signal part is left untouched but the noise correction efficiency is reduced. Overall, we consider the OTF-weighted mask based on the theoretical cutoff frequency the recommended and safe choice. The software package allows the user to choose from the two premade OTF masks mentioned above and two further types of adjustable OTF mask defined by the user (see Supplementary Note 9 for details).
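A sketch of how such a raised-cosine mask could be generated; the frequency units (cycles/pixel) and the values of 𝑇 and 𝛽 below are illustrative, not the calibrated values used for the two masks in the paper:

```python
import numpy as np

def raised_cosine_mask(L, T, beta):
    """High-pass raised-cosine OTF mask of Eq. 4 on an L x L centered frequency grid.
    k_r is in cycles/pixel here, so T must be chosen in the same (toy) units."""
    k = np.fft.fftshift(np.fft.fftfreq(L))
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kr = np.hypot(kx, ky)
    lo = (1 - beta) / (2 * T)                  # below lo: fully blocked (signal region)
    hi = (1 + beta) / (2 * T)                  # above hi: fully passed (noise region)
    M = 0.5 - 0.5 * np.cos(np.pi * T / beta * (kr - lo))   # smooth transition
    M[kr < lo] = 0.0
    M[kr > hi] = 1.0
    return M

mask = raised_cosine_mask(64, T=2.0, beta=0.2)   # toy T and beta
```

Increasing 𝛽 widens the transition band, which is how the user adjusts the sharpness of the mask boundary.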
6. Negative log-likelihood of the image
The pixel readout (unit: ADU) from a single pixel on an sCMOS camera is modeled as a sum of
two random variables following Poisson and Gaussian distributions respectively. Therefore, the
probability of the pixel readout can be described as a convolution of a Poisson distribution and a
Gaussian distribution4. As described in our previous work4, to reduce the computation
complexity of an infinite sum required by the convolution, we applied an analytical
approximation of this probability function4. Therefore, the likelihood between the estimated
image 𝜇 and the image 𝐷 is calculated from
𝐿(𝜇, 𝛾|𝐷, 𝛾) = ∏_{𝑖=1}^{𝑁} 𝑒^{−(𝜇𝑖+𝛾𝑖)} (𝜇𝑖 + 𝛾𝑖)^{𝐷𝑖+𝛾𝑖} / Γ(𝐷𝑖 + 𝛾𝑖 + 1)    (5)
in which 𝑖 is the pixel index, 𝑁 is the number of pixels of the image, and 𝛾𝑖 is a calibration quantity equal to 𝜎𝑖²/𝑔𝑖², where 𝜎𝑖² and 𝑔𝑖 are the variance and the gain of pixel 𝑖 on the sCMOS camera (which can be generated from single or multiple variance and gain maps, see Supplementary Note 10). For optimization, we take the negative of the log-likelihood function, and with Stirling's approximation we have,
−ln(𝐿) ≅ ∑_{𝑖=1}^{𝑁} [𝜇𝑖 − 𝐷𝑖 − (𝐷𝑖 + 𝛾𝑖) ln(𝜇𝑖 + 𝛾𝑖) + (𝐷𝑖 + 𝛾𝑖) ln(𝐷𝑖 + 𝛾𝑖)]    (6)
Because 𝐷 and 𝛾 are constant, the terms that depend only on them are omitted from the objective function, and Eq. 6 is then reduced to,
𝐿𝐿𝑠 = ∑_{𝑖=1}^{𝑁} [𝜇𝑖 − (𝐷𝑖 + 𝛾𝑖) ln(𝜇𝑖 + 𝛾𝑖)]    (7)
in which 𝐿𝐿𝑠 denotes the simplified negative log-likelihood function. Because 𝜇 is our estimate of the photo-electrons before readout, it must be non-negative. Thus, to ensure 𝜇𝑖 ≥ 0 during each iteration, we constrain 𝐷𝑖 > 0 by setting pixels with non-positive values in 𝐷 to 10⁻⁶.
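A sketch of Eq. 7 with toy values; the check at the end reflects that each per-pixel term is minimized exactly at 𝜇𝑖 = 𝐷𝑖:

```python
import numpy as np

def ll_s(mu, D, gamma):
    """Eq. 7: simplified negative log-likelihood between the estimate mu and image D."""
    return np.sum(mu - (D + gamma) * np.log(mu + gamma))

# toy pre-corrected segment and calibration map (gamma_i = sigma_i^2 / g_i^2)
D = np.array([[5.0, 1e-6], [3.0, 7.0]])
gamma = np.full((2, 2), 2.0)
base = ll_s(D, D, gamma)   # the per-pixel minimum of LL_S is at mu = D
```

Setting the per-pixel derivative 1 − (𝐷𝑖 + 𝛾𝑖)/(𝜇𝑖 + 𝛾𝑖) to zero gives 𝜇𝑖 = 𝐷𝑖, which is consistent with using the segments of 𝐷 as initial values for the minimization (Supplementary Note 2).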
To justify the performance of the NCS algorithm, we compared the images with different types of noise, and the image after noise reduction, with the ideal image (Supplementary Note 14) for simulated data based on the log-likelihood ratio test (Fig. SS2).
Figure SS2. Log-likelihood ratio (LLR) comparison of simulated data. (A) Simulated microtubule images under conditions of no noise (ideal), with readout and Poisson noise (sCMOS), after noise correction (NCS), and with Poisson noise only (Poisson). The pixel-dependent noise is effectively removed using the noise correction algorithm; the colorbar indicates the effective photon count. (B) Log-likelihood ratio map of each image shown in A. For the ideal case, the LLR is zero for each pixel. The pixel LLRs are significantly reduced after noise reduction and are slightly smaller than in the Poisson case due to the additional microscope information incorporated during the correction. (C) Histogram of LLRs of individual pixels in B under the conditions of sCMOS, NCS and Poisson.
The log-likelihood ratio is obtained from,
𝐿𝐿𝑅 = −2 ln[𝐿(𝑢|𝑋)/𝐿(𝑋|𝑋)] = 2[𝑢 − 𝑋 + 𝑋 ln(𝑋) − 𝑋 ln(𝑢)]    (8)
in which 𝑢 stands for the ideal image, which is noise free, and 𝑋 is either the estimated image, the ideal image, the image with readout and Poisson noise, or the image with only Poisson noise (Fig. SS2). The smaller the 𝐿𝐿𝑅 value, the more closely the image 𝑋 approximates the ideal image 𝑢. We found that the 𝐿𝐿𝑅 values are significantly reduced after noise reduction and are slightly smaller than those of the Poisson-noise-limited image due to the additional microscope information (Supplementary Note 4) incorporated during the correction.
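Equation 8 can be evaluated pixel-wise as below; the images are toy arrays, and the only property exercised is that the LLR vanishes when 𝑋 equals 𝑢 and is positive otherwise:

```python
import numpy as np

def llr(u, X):
    """Eq. 8: pixel-wise log-likelihood ratio of image X against the noise-free image u."""
    return 2.0 * (u - X + X * np.log(X) - X * np.log(u))

u = np.array([[10.0, 20.0], [15.0, 8.0]])         # toy 'ideal' noise-free image
noisy = u + np.array([[2.0, -3.0], [1.0, 0.5]])   # toy perturbed image
ideal_llr = llr(u, u)       # identically zero
noisy_llr = llr(u, noisy)   # positive wherever X differs from u
```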
7. Image segmentation
In the sCMOS noise correction algorithm, the number of variables is equal to the number of
pixels in the image. For an image with 256 by 256 pixels, the optimization of a total of 65536
variables is computationally intensive. To reduce the computation time, each image is
segmented into small sub-images, each containing 𝑀 by 𝑀 pixels and therefore, these sub-
images can be processed in parallel through CPU or GPU (Fig. SS3). The segmentation size is
an adjustable parameter in the software package. Due to performance reasons, we recommend
𝑀 to be 8~32 (see Supplementary Note 13 for details).
To remove the edge effect from segmentation, each segment is padded on each edge to form an (𝑀+2) x (𝑀+2) image. The padded pixels are generated as follows: 1) pixels that are not on the edge of the original image are padded with their adjacent pixels in the original image; 2) pixels on the edge of the original image are padded with themselves. This modification avoids the edge effect from the Fourier transform. The padded image is used as 𝐷 in the noise reduction algorithm, and the padded edges are discarded in the final output image. With this modification, the edge effect is eliminated except on the edge pixels of the final image.
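The padding scheme can be sketched as follows. Padding the full image by edge replication and then cutting (𝑀+2) x (𝑀+2) windows reproduces both rules: interior segment borders receive the true neighboring pixels of the original image, while image-edge pixels are padded with themselves. This is an illustrative reimplementation, not the packaged code:

```python
import numpy as np

def padded_segments(image, M):
    """Split the image into M x M segments, each padded by one pixel to (M+2) x (M+2).
    Interior segment borders take the true neighboring pixels from the full image;
    pixels on the image edge are replicated (padded with themselves)."""
    padded = np.pad(image, 1, mode="edge")
    H, W = image.shape
    return [padded[r:r + M + 2, c:c + M + 2]
            for r in range(0, H, M) for c in range(0, W, M)]

img = np.arange(64, dtype=float).reshape(8, 8)
segs = padded_segments(img, 4)      # four 6 x 6 padded segments
```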
8. Discussion on effects of pixel readout correlation
The calculation of likelihood function in the NCS algorithm assumes no correlation in the pixel
values between adjacent frames. In the case of such correlation, the algorithm could be adapted
to incorporate this additional information by modifying the cost function. To achieve this, one
would need a precise understanding of the correlation behavior of the noise and its probability
density function. By extending the current single frame based operation into global optimization
using multiple frames, we expect further enhancement of the developed algorithm.
Figure SS3. Illustration of the segmentation scheme used in NCS. Top panel: General
segmentation scheme. The input sCMOS frame is segmented into small sub-images and
then fed into the parallelized noise correction algorithm. The processed segments are then
combined together to form the entire NCS frame. Bottom panel: Detailed pre-processing
steps for each segment to avoid fast Fourier transform artifacts on image boundaries.
(Supplementary Note 7)
9. OTF mask in an aberrated microscope system
As described in Supplementary Note 5, we used the theoretical cutoff frequency to extract the
noise part of the image. However, this cutoff value is derived based on an ideal imaging system.
For a real imaging system, due to the system and sample induced aberrations, the effective
cutoff frequency may become smaller than 2𝑁𝐴/𝜆. Therefore, the NCS algorithm allows the
user to choose from the following options: OTF-weighted mask (default), noise-only OTF mask,
adjustable OTF mask and user defined OTF mask. The adjustable OTF mask requires two user
inputs: the radius, 𝑤, and the height, ℎ (as demonstrated in Fig. SS4). Those two values change
the shape of the OTF mask. For advanced users, the option of a user defined OTF mask
provides additional flexibility and will further enhance the correction performance on aberrated
systems with a user generated OTF mask.
For the NCS algorithm, rather than the complete OTF function, the algorithm only requires an
estimation of the effective cutoff frequency (OTF boundary). Here we provide a method to estimate the OTF boundary. Starting with a high signal-to-noise image of sparsely distributed
beads, this cutoff frequency can be estimated by the following steps (an example code of this
method can be found in the software package): (1) Calculate the average image of a static
image sequence from the sCMOS camera. (2) Calculate the modulus of its Fourier transform.
(3) Calculate the radial average of the logarithm of the resulting image from (2). (4) Use a
shrinking window to smooth the radial average curve (e.g. calculate the standard deviation
within the window) and find the cutoff frequency from the turning point of the smoothed curve.
To obtain an accurate estimation of the OTF boundary for aberrated systems, we recommend
imaging fluorescent beads at focus, with a diameter of 40 nm to 100 nm and an emission spectrum matching that of the biological sample of interest, at a density of 1-3 beads per μm².
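The four steps can be sketched as below. The 'shrinking window' smoothing of step (4) is replaced here by a crude first-turning-point search, and the bead stack is synthetic (a Gaussian spot plus Gaussian noise), so this only illustrates the procedure, not the packaged example code:

```python
import numpy as np

def radial_average(img):
    """Azimuthal average of a centered 2D array versus integer radius."""
    L = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - L // 2, y - L // 2).astype(int)
    return np.bincount(r.ravel(), weights=img.ravel()) / np.bincount(r.ravel())

def estimate_cutoff(frames):
    """Steps (1)-(4): average the frames, take |FFT|, radially average its logarithm,
    and take the first radius where the profile stops decreasing (the turning point
    where the spectrum hits the noise floor)."""
    avg = frames.mean(axis=0)                                # (1) average image
    amp = np.abs(np.fft.fftshift(np.fft.fft2(avg)))          # (2) Fourier modulus
    prof = radial_average(np.log(amp + 1e-12))               # (3) radial log profile
    d = np.diff(prof)                                        # (4) crude turning point
    flat = np.where(d >= 0)[0]
    return int(flat[0]) if flat.size else len(prof) - 1

# synthetic 'bead' stack: one Gaussian spot plus readout-like noise
rng = np.random.default_rng(2)
L = 64
yy, xx = np.mgrid[0:L, 0:L]
spot = 100.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 3.0 ** 2))
frames = spot + rng.normal(0.0, 0.5, (20, L, L))
cutoff = estimate_cutoff(frames)   # radius (pixels) where the spectrum meets the floor
```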
For an optimized microscope system that is diffraction limited, we recommend the OTF
weighted mask (Supplementary Note 5) with an increasing gradient from the center to the
edge. A comparison between the OTF-weighted mask and the noise-only mask is shown in
Supplementary Fig. 5. We found that the OTF-weighted mask corrects noise more effectively
while maintaining the image resolution (Supplementary Fig. 4).
10. sCMOS camera with multiple gain and variance maps
The NCS algorithm is compatible with sensors that have multiple effective readout units per
pixel (for example, the dual column amplifier in certain CMOS sensors). The algorithm allows
the inputs of the gain and the variance map(s) generated either from a single map or multiple
maps that takes the ADU-level dependent gain and variance into consideration. In the software,
we provide an example code of using multiple gain and variance maps based on the signal
levels (ADU) of each pixel. The code requires the user to provide a set of N (N>1) signal levels in ADUs, with their corresponding gain and variance maps at each ADU level.
Figure SS4. Illustration of the choices of the OTF mask. (A) Various adjustable OTF masks with widths from 10⁻⁶ to 2 (unit: 𝑁𝐴/𝜆) and heights from 0.7 to 1. The width must be greater than zero. (B) Radial average plot of the different choices of the OTF masks. (C) The OTF-weighted mask; it has a width equal to zero, and its height depends on the sampling rate of the detector (Fig. SS5). It is the default and recommended option among the OTF masks. (D) The noise-only OTF mask. It has a width equal to 2 (unit: 𝑁𝐴/𝜆), and its height also depends on the sampling rate of the detector.
For example, for
dual amplifier sensors with signal levels of 0-1500 ADU and 1500-Max ADU (user defined;
obtained from camera manufacturer), the code takes in two sets of corresponding gain and
variance maps. For each input sCMOS frame, the code determines the proper gain and
variance to use for each pixel according to its ADU count. The resulting gain and variance
stacks, having the same size as the input sCMOS frame stack, will be used as the inputs of the
NCS algorithm. Under the assumption of time-independent noise behavior (see Supplementary Note 8), the NCS algorithm will perform in the same way as with a single gain and variance map. We expect, however, that in most situations a single gain and variance map at low photon levels (e.g. <750 photons per pixel per frame) is generally applicable to studies with low light (e.g. live-cell fluorescence imaging and light-sheet microscopy), where the NCS algorithm is most effective.
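A sketch of the per-pixel map selection for the dual-amplifier example; the 1500 ADU threshold matches the example above, but the gain and variance values are toy numbers:

```python
import numpy as np

def select_maps(frame_adu, threshold, low_maps, high_maps):
    """Per pixel, use the low-ADU gain/variance maps below `threshold` ADU and the
    high-ADU maps at or above it (dual-amplifier example of this note)."""
    gain_lo, var_lo = low_maps
    gain_hi, var_hi = high_maps
    below = frame_adu < threshold
    return np.where(below, gain_lo, gain_hi), np.where(below, var_lo, var_hi)

frame = np.array([[100.0, 2000.0], [1500.0, 800.0]])   # raw ADU counts (toy)
low = (np.full((2, 2), 2.0), np.full((2, 2), 4.0))     # gain, variance for 0-1500 ADU
high = (np.full((2, 2), 2.2), np.full((2, 2), 9.0))    # gain, variance for 1500-Max ADU
gain, var = select_maps(frame, 1500.0, low, high)
```

Applied frame by frame, this yields gain and variance stacks of the same size as the input sCMOS frame stack, as described above.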
11. Discussion on sampling rate of the detector
Based on the Nyquist sampling rate, the pixel size of the image is often selected to be no more
than half of the system resolution. The system resolution can be characterized by the Rayleigh
criterion, 0.61𝜆/𝑁𝐴. We define the sampling rate of the detector to be the Rayleigh criterion
divided by the pixel size of the image (e.g. 2 is the minimum Nyquist sampling rate). We
investigated the performance of the NCS algorithm under various sampling rates. We found that
the NCS algorithm tolerates a wide range of sampling rates from 1.4 to 2.9. However, the correction power starts to decrease gradually when the sampling rate falls below 2.0. This is because, with a sampling rate smaller than 2.0, the OTF boundary is mostly outside the image boundary in Fourier space (Fig. SS5), leaving fewer pixels attributable mainly to the noise part of the image and consequently decreasing the proportion of the noise contribution in the cost function, which has a similar effect to decreasing the 𝛼 value (Fig. SS6). At a significantly low sampling rate such as 1.4 (Fig. SS5), the height of the OTF-weighted mask decreases to ~0.2, which greatly limits the optimization potential of the NCS algorithm.
Furthermore, for a sampling rate less than 1.73, there are no pixels outside the OTF boundary,
and in this case, we recommend using the OTF-weighted mask rather than the noise-only mask
as the latter will give a zero noise contribution making the cost function unminimizable. Overall,
we recommend a sampling rate greater than 2 for an efficient noise correction.
Figure SS5. Comparison of NCS results at various sampling rates of the detector. The
sampling rate of the detector is defined to be the Rayleigh criterion divided by the pixel size
of the image (e.g. 2 is the smallest Nyquist rate) and the Rayleigh criterion is 0.61𝜆/𝑁𝐴. (A)
The amplitude in Fourier space of the simulated beads imaged at various sampling rate. The
OTF radius is a constant, equal to 2𝑁𝐴/𝜆 (red circle). The number of pixels with higher
weight in the OTF-weighted mask (the gradient overlay) decreases with the sampling rate.
(B) Radial average plot of the OTF mask at various sampling rates. As the sampling rate goes down, the OTF mask is truncated more toward the center. (C) The sCMOS frame and its corresponding NCS frame at various sampling rates. The NCS algorithm performs well down to a sampling rate of 2, with performance gradually decreasing below 2.
12. Comparison of NCS algorithm and low pass filter
We demonstrated the differences between the NCS algorithm and the low pass filter, which is
used for generic noise reduction. The low pass filters are generated by either convolving the
image with a sharp 2D Gaussian or multiplying a low pass filter in the Fourier space. For the first
method, we implemented a Gaussian blurring kernel with a sigma of 1 pixel, and for the second
method, we used a low pass filter with a cutoff frequency equal to the OTF radius, 2𝑁𝐴/𝜆. We noticed that both low pass filters failed to correct the sCMOS noise (Supplementary Fig. 6). For example, rather than correcting high readout-noise pixels, the low pass filters replace them with a blurry spot. Furthermore, the low pass filters cause a significant decrease in image resolution (Gaussian blur) and introduce ringing artifacts in the filtered image (Gibbs phenomenon). In contrast, we found that the NCS algorithm effectively reduces the sCMOS noise while leaving the image resolution unchanged.
Figure SS6. Comparison of the NCS results at various 𝜶 values. In the cost function, an 𝛼 factor is multiplied with the noise contribution part. If 𝛼 = 0, the cost function is already at the minimum, so the NCS algorithm has no effect on the original image and the resulting NCS frame is the same as the sCMOS frame. As the 𝛼 value increases, so does the noise correction power. However, if the 𝛼 value is too large, the NCS algorithm will overemphasize the noise contribution, resulting in a decrease in resolution. Empirically, we found that 𝛼 = 0.2 provides the optimum performance of the NCS algorithm.
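The two generic low pass filters discussed in this note can be sketched as below; the image is a toy Poisson frame with one artificial 'hot' pixel, illustrating how both filters spread a high readout-noise pixel into a blurry spot rather than removing it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_lowpass(img, cutoff):
    """Zero out spatial frequencies beyond `cutoff` (cycles/pixel) and transform back."""
    k = np.fft.fftfreq(img.shape[0])
    kx, ky = np.meshgrid(k, k, indexing="ij")
    keep = np.hypot(kx, ky) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

rng = np.random.default_rng(1)
img = rng.poisson(50.0, (32, 32)).astype(float)
img[10, 10] += 200.0                          # one artificial high readout-noise pixel
blurred = gaussian_filter(img, sigma=1.0)     # 2D Gaussian kernel, sigma = 1 pixel
filtered = fourier_lowpass(img, cutoff=0.3)   # toy cutoff in cycles/pixel
```

In both outputs the hot pixel is attenuated but not removed: it remains brighter than its surroundings, now smeared over neighboring pixels.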
13. Speed and segmentation size
The computation speed of the NCS algorithm can be significantly increased with segmentation
(Supplementary Note 7) and parallel processing. Here we studied the performance of the NCS
algorithm under various segmentation sizes and iteration numbers on a 6-core 3.4 GHz CPU.
Under the tested conditions (Supplementary Table), we found that the speed decreases with
the increased segmentation size and as expected, with the increased iteration number. It is then
preferred to use as few iterations as possible. However, lower iteration number will decrease
noise correction performance especially in cases where readout variance is relatively high (Fig.
SS7). Overall, we recommend at least 10 iterations (default iteration number is set to 15 in the
NCS software).
We found that the noise correction performance of the NCS algorithm slowly increases with the segmentation size; however, for a segmentation size smaller than 8 x 8, the cost function reaches a local minimum within a few iterations. Therefore, we do not recommend a segmentation size smaller than 8 x 8.
As each image segment is independently processed, we expect that GPU implementation of
this algorithm will further improve the speed by generating a large number of concurrent
operations.
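The segmentation scheme can be sketched as splitting the frame into independent Rs x Rs tiles and processing each one separately; because the tiles share no state, the per-tile calls can be dispatched to multiple CPU cores or GPU threads. This is an illustrative Python sketch, not the NCS MATLAB code; `fn` stands in for the per-segment NCS update.

```python
import numpy as np

def segment_and_process(img, Rs, fn):
    """Split img into independent Rs x Rs tiles, apply fn to each,
    and reassemble the result. Each tile is independent, so the fn
    calls could run concurrently (multiprocessing, GPU kernels).
    fn is a stand-in for the per-segment NCS iteration."""
    H, W = img.shape
    out = np.empty_like(img)
    for i in range(0, H, Rs):
        for j in range(0, W, Rs):
            out[i:i+Rs, j:j+Rs] = fn(img[i:i+Rs, j:j+Rs])
    return out
```

The overhead of reassembly is negligible next to the per-tile optimization, which is why the runtime in the Supplementary Table scales primarily with segment size and iteration count.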
runtime (s)      segment size 8 x 8   segment size 16 x 16   segment size 32 x 32
5 iterations           3.93                  5.01                 12.60
10 iterations          6.41                  8.27                 21.74
15 iterations          9.03                 11.72                 30.94
20 iterations         11.52                 15.14                 40.08
Supplementary Table: Processing speed of a 256 x 256 image under various segmentation sizes and iteration numbers (CPU implementation)
14. Data simulation
In the simulated data, we use the worm-like chain (WLC) model to simulate microtubules; the
WLC model produces a set of x, y coordinates as described previously4. Based on those
coordinates, we generate a binary image, where a pixel value is set to 1 if one or more
points lie within that pixel and to 0 otherwise. The binary image
has a much finer pixel size, 8 times smaller than the output image pixel size (100 nm).
The image is then convolved with a PSF image generated with the parameters 𝑁𝐴 =
1.4 and 𝜆 = 700 nm, with a pixel size of 12.5 nm13,14. The PSF image is normalized so that the
sum of all pixel values equals 1. The resulting image is a diffraction-limited microtubule
image with a pixel size of 12.5 nm. Subsequently, the image is binned by 8 to generate an
image with a pixel size of 100 nm, denoted as 𝑆norm. Given an intensity, 𝐼 = 100, and
background, 𝑏𝑔 = 10, an ideal image from a diffraction-limited system is generated from
𝑆ideal = 𝑆norm ∙ 𝐼 + 𝑏𝑔.
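The steps above (normalize the PSF, convolve the fine-grid binary structure, bin by 8, then scale by 𝐼 and add 𝑏𝑔) can be sketched in Python as follows. This is an illustrative sketch, not the authors' MATLAB code; the binary structure and PSF images are taken as inputs rather than generated from the WLC model and pupil function.

```python
import numpy as np
from scipy.signal import fftconvolve

def ideal_image(binary, psf, bin_factor=8, I=100.0, bg=10.0):
    """From a fine-grid binary structure image to the ideal
    diffraction-limited image S_ideal = S_norm * I + bg.
    Sketch of the simulation steps in Supplementary Note 14."""
    psf = psf / psf.sum()  # normalize PSF so its pixel values sum to 1
    blurred = fftconvolve(binary.astype(float), psf, mode='same')
    h, w = blurred.shape
    # bin by bin_factor (e.g. 12.5 nm -> 100 nm output pixel size)
    s_norm = blurred.reshape(h // bin_factor, bin_factor,
                             w // bin_factor, bin_factor).sum(axis=(1, 3))
    return s_norm * I + bg
```

Binning by summation conserves the photon count of the structure, so a single fluorophore still contributes 𝐼 expected photons to the binned image.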
Figure SS7. Comparison of the NCS result with three segmentation sizes and two
iteration numbers. The simulated beads data use a simulated high-variance map
(3000–6000 ADU²). The temporal standard deviation (STD) maps are generated from 100
sCMOS frames and NCS frames. The colormap scale is from min (STD of 1.5, black) to max
(STD of 6, white). (A) Temporal STD map of 100 sCMOS frames. It shows many high
readout-noise pixels (STD > 6). (B) Temporal STD map of 100 NCS frames at various conditions.
It shows that the noise correction performance slowly improves with larger segmentation
size and quickly improves with the iteration number. However, most of the high readout-noise
pixels are removed after 5 iterations.
Then the image is corrupted with Poisson noise, denoted as 𝑆Poisson. Given a gain map, 𝑔,
variance map, 𝜎𝑟2, and offset map, 𝑜, the final image of the sCMOS camera is generated
from4,14,15,
𝑆sCMOS = 𝑆Poisson ∙ 𝑔 + 𝐺(0, 𝜎𝑟2) + 𝑜,
where 𝐺(0, 𝜎𝑟2) generates an image of random variables from Gaussian distributions with means
equal to zero and variances equal to corresponding values in 𝜎𝑟2. (Fig. SS8)
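The camera corruption step can likewise be sketched in Python. The function below implements the equation 𝑆sCMOS = 𝑆Poisson ∙ 𝑔 + 𝐺(0, 𝜎𝑟2) + 𝑜 for given per-pixel maps; it is an illustrative sketch, not the distributed MATLAB code, and the map values in any example are assumptions.

```python
import numpy as np

def simulate_scmos(s_ideal, gain, var, offset, rng=None):
    """Corrupt the ideal image with Poisson shot noise, then apply
    the per-pixel sCMOS camera model:
        S_sCMOS = S_Poisson * g + G(0, sigma_r^2) + o
    gain, var, and offset are per-pixel maps (ADU/e-, ADU^2, ADU)."""
    rng = np.random.default_rng(rng)
    s_poisson = rng.poisson(s_ideal).astype(float)
    # per-pixel Gaussian readout noise with variance sigma_r^2
    read_noise = rng.normal(0.0, np.sqrt(var))
    return s_poisson * gain + read_noise + offset
```

Because the readout noise has zero mean, the expected value of each simulated pixel is 𝑆ideal ∙ 𝑔 + 𝑜, which is what the offset/gain pre-correction step of the NCS algorithm inverts.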
Figure SS8. Illustration of the data simulation procedure. To simulate a sCMOS frame of
microtubules with 64 by 64 pixels, a binary 512 x 512 image (A) is generated from the
coordinates simulated using the worm-like chain model, where the pixel size of the binary
image is 8 times smaller than the given pixel size. The binary image is subsequently convolved
with a normalized PSF (B) and then binned by 8 to generate a 64 by 64 image, denoted as 𝑆norm
(C). The 𝑆norm image is multiplied by a given intensity 𝐼, and a given background 𝑏𝑔 is added,
to generate the ideal image, 𝑆ideal (D). The ideal image is then corrupted with Poisson noise
to form 𝑆Poisson (E). The 𝑆Poisson image is then multiplied by a sCMOS gain map 𝑔, and
Gaussian noise 𝐺(0, 𝜎𝑟2) is added in each pixel, where each pixel value is a random draw
from a Gaussian distribution with mean equal to zero and variance equal to 𝜎𝑟2,
with 𝜎𝑟2 differing from pixel to pixel. An offset map 𝑜 is then added to the image (F). The
resulting image is a simulated sCMOS frame, denoted as 𝑆sCMOS. (Supplementary Note 14)
15. Source code and usage
The demo package consists of functions and scripts written in MATLAB (MathWorks, Natick,
MA). The code has been tested in MATLAB version R2015a. To simplify coding, we use the
DIPimage toolbox (http://www.diplib.org/).
To run the demo code for simulated microtubule structure:
1. Set the current folder in MATLAB to be NCSdemo code\
2. Open script NCSdemo_simulation.m. Set the value of the following parameters: 𝑖𝑚𝑔𝑠𝑧
(image size), 𝑃𝑖𝑥𝑒𝑙𝑠𝑖𝑧𝑒, 𝑁𝐴 (numerical aperture of the objective), 𝐿𝑎𝑚𝑏𝑑𝑎 (emission
wavelength), 𝐼 (photon count of each fluorophore), 𝑏𝑔 (background photon count), 𝑁
(number of images), 𝑜𝑓𝑓𝑠𝑒𝑡 (offset ADU level of the sCMOS camera), 𝑖𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛𝑁
(number of iterations), 𝑎𝑙𝑝ℎ𝑎 (weight factor of the noise contribution), 𝑅𝑠 (segmentation
size) and the type of the OTF mask (Supplementary Note 9).
3. Run the code.
4. The output includes: 𝑖𝑚𝑠𝑑 (the sCMOS image stack) and 𝑜𝑢𝑡 (the noise-corrected image).
5. The computation time depends on the 𝑖𝑚𝑔𝑠𝑧 (image size), 𝑁 (number of images),
𝑖𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛𝑁 (number of iterations) and 𝑅𝑠 (segmentation size).
The usage of the demo code for experimental data is similar to the instructions above. Please
see the included script, NCSdemo_experiment.m, for details.
Additional references:
1. Burke, D., Patton, B., Huang, F., Bewersdorf, J. & Booth, M. J. Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy. Optica 2, 177 (2015).
2. Huang, F. et al. Ultra-High Resolution 3D Imaging of Whole Cells. Cell 166, 1028–1040 (2016).
3. Shtengel, G. et al. Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure. Proc. Natl. Acad. Sci. U. S. A. 106, 3125–3130 (2009).
4. Huang, F. et al. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat. Methods 10, 653–8 (2013).
5. Ren, Y. & Suter, D. M. Increase in Growth Cone Size Correlates with Decrease in Neurite Growth Rate. Neural Plast. 2016, 1–13 (2016).
6. Long, F., Zeng, S. & Huang, Z.-L. Localization-based super-resolution microscopy with an sCMOS camera Part II: Experimental methodology for comparing sCMOS with EMCCD cameras. Opt. Express 20, 17741 (2012).
7. Saurabh, S., Maji, S. & Bruchez, M. P. Evaluation of sCMOS cameras for detection and localization of single Cy5 molecules. Opt. Express 20, 7338 (2012).
8. Huang, Z.-L. et al. Localization-based super-resolution microscopy with an sCMOS camera. Opt. Express 19, 19156–68 (2011).
9. Robbins, M. S. & Hadwen, B. J. The noise performance of electron multiplying charge coupled devices. IEEE T. Electron. Dev. 50, 1227–1232 (2003).
10. Smith, C. S., Joseph, N., Rieger, B. & Lidke, K. A. Fast, single-molecule localization that achieves theoretically minimum uncertainty. Nat. Methods 7, 373–5 (2010).
11. Goodman, J. W. Introduction to Fourier optics. (Roberts, 2005).
12. Hanser, B. M., Gustafsson, M. G. L., Agard, D. A. & Sedat, J. W. Phase-retrieved pupil functions in wide-field fluorescence microscopy. J. Microsc. 216, 32–48 (2004).
13. Liu, S., Kromann, E. B., Krueger, W. D., Bewersdorf, J. & Lidke, K. A. Three dimensional single molecule localization using a phase retrieved pupil function. Opt. Express 21, 29462 (2013).
14. Liu, S. & Lidke, K. A. A Multiemitter Localization Comparison of 3D Superresolution Imaging Modalities. ChemPhysChem 15, 696–704 (2014).
15. Ober, R. J., Ram, S. & Ward, E. S. Localization accuracy in single-molecule microscopy. Biophys. J. 86, 1185–200 (2004).