Light field microscopy with correlated beams for high resolution volume imaging

Light-field microscopy represents a promising solution for microscopic volumetric imaging, thanks to its capability to encode information on multiple planes in a single acquisition. This is achieved through its unique ability to simultaneously retrieve the spatial distribution and the propagation direction of light. However, state-of-the-art light-field microscopes suffer from a detrimental loss of spatial resolution compared with standard microscopes. In this article, we experimentally demonstrate the working principle of a novel scheme, named Correlated Light-field Microscopy (CLM), in which the correlation between two light beams is exploited to obtain volumetric images with a resolution limited only by diffraction. In CLM, a correlation image is obtained by measuring intensity correlations between a large number of pairs of ultra-short frames, each pair of frames being illuminated by the two correlated beams and exposed for a time comparable with the source coherence time. We experimentally demonstrate the capability of CLM to recover the information contained in out-of-focus planes of three-dimensional test targets and biomedical phantoms. In particular, we show the improvement of the depth of field enabled by CLM with respect to conventional microscopes characterized by the same resolution. Moreover, the multitude of viewpoints contained in a single correlation image enables reconstructing over 50 distinct transverse planes within a \(1\,\mathrm{mm}^3\) sample.
Fast imaging of 3D specimens at the diffraction limit has been a long-standing challenge for microscopy. To meet the need for fast visualization of large volumes, many attempts have been made to reach a data-acquisition rate sufficient for the analysis of dynamic biological processes, each leading to different kinds of trade-offs. Advances in this field include extended-focus scanning with tunable lenses2,3, light-sheet illumination4, also in combination with non-diffracting beams5,6,7,8,9,10, fast STED11, fast two-photon microscopy12 and multifocal multiplexing13. Software approaches such as compressed sensing and computational imaging are also employed to improve throughput. From this perspective, light-field microscopy is one of the most promising techniques. By detecting the spatial distribution and the propagation direction of light in a single exposure, light-field imaging offers the possibility of refocusing out-of-focus parts of a 3D sample in post-processing. The depth of field (DOF) of the imaged volume can thus be extended by stacking refocused planes at different distances15,16,17,18,19,20. However, in its conventional implementation, light-field imaging suffers from a fundamental trade-off between resolution and DOF. In the microscopy regime, this trade-off is particularly unfavorable, since the required high resolution severely limits the DOF, making multiple acquisitions necessary to characterize thick samples21. In microscopy applications, light-field imaging can remove the bottlenecks associated with long acquisition times (typical of scanning techniques) and impractically large data volumes (typical of multifocal multiplexing). Nevertheless, its widespread use is limited by the degradation of resolution well above the diffraction limit16,18. Thanks to the development of image-analysis tools and deconvolution algorithms capable of partially recovering the lost resolution, light-field imaging has nonetheless shown its potential in neuroscience applications, where it is used to analyze large areas of neuronal activity25. A miniaturized light-field microscope has also recently been employed for imaging freely moving mice.
In this article, we report the experimental demonstration of a novel technique performing light-field microscopy at diffraction-limited resolution, and discuss its implementation and application on a test bench. The technique pushes the limits of classical microscopy by exploiting the statistical properties of light27,28,29,30,31,32 through the principle of Correlation Plenoptic Imaging (CPI)33,34,35,36,37,38, in which light-field imaging at the diffraction limit is achieved by measuring the correlation between the intensity fluctuations registered by two disjoint detectors39,40,41,42,43. Previous CPI architectures were limited to bright-field operation with occluding (mask-like) objects. Here, we develop a CPI architecture compatible with the microscopy modalities (fluorescence, polarization, dark field) that are critical for biological applications. To this end, the sample is illuminated44 by the entire beam from the chaotic light source, rather than by only one beam of a correlated pair33,36,37. This enables imaging of self-luminous, scattering and diffusive samples, as well as birefringence imaging, without compromising the measured correlations. A further advantage of the proposed Correlated Light-field Microscope (CLM) over previous CPI schemes is that the imaging speed is increased by more than one order of magnitude, and that the sample can be monitored with a conventional (i.e., intensity-based) diffraction-limited microscope.
The comparison presented in Table 1 illustrates the theoretical improvement offered by CLM, in terms of resolution and depth of field (DOF)44, over both standard microscopy and conventional light-field microscopy46. The first column highlights the diffraction-limited imaging capability that CLM shares with standard microscopy, as opposed to the image resolution sacrificed by conventional light-field imaging. The factor \(N_u\), which quantifies the resolution loss of conventional light-field imaging, is defined by the number of resolution cells per side dedicated to directional information, and is proportional to the achieved DOF improvement. The second and third columns of the table report the DOF of the three techniques for object details at the resolution limit and for object details of arbitrary size a, respectively. In light-field imaging, the latter represents the refocusing range, while the former determines the axial resolution around the focal plane. In conventional light-field microscopy, enlarging the refocusing range (i.e., choosing a larger \(N_u\)) entails a proportional loss of lateral resolution and an even more detrimental loss of axial resolution (scaling as \(N_u^2\)); this typically limits \(N_u\) to values smaller than 10. Moreover, for object details larger than the resolution limit, both the standard-microscope DOF and the refocusing range of conventional light-field microscopy scale linearly with the size of the object detail; this is due to the “circle of confusion” produced by the finite numerical aperture of the imaging system. In CLM, by contrast, the refocusing range scales quadratically with the size of the object detail and is limited only by diffraction at the object (see refs. 33, 37, 44 for a detailed discussion; in particular, the DOF extension of CLM follows from Eq. (23) of ref. 44). These are the key features behind the unique refocusing capabilities of CLM. The fourth column reports the axial resolution of the three techniques, as defined by the circle of confusion (see the Materials and Methods section for details). The ratio between the third and fourth columns can be read as the number of independent axial planes that each technique can provide: the axial resolution is the same for all three imaging techniques, but the DOF scales differently, so that in CLM the quadratic scaling of the DOF with the object detail size a implies a linear scaling of the number of independent axial planes with a; in standard light-field imaging, by contrast, the number of independent axial planes is fixed at \(N_u\) and is typically much smaller than \(3.3\,\mathrm{NA}_0 a/\lambda\). The last column of Table 1 also shows that the refocusing range of the two light-field microscopes is closely related to the viewpoint multiplicity, defined as the number of viewpoints available per side of the 3D sample: in terms of the size of the detail of interest, the viewpoint multiplicity is proportional to the number of independent axial planes that can be refocused. Unlike in conventional light-field microscopy, the viewpoint multiplicity of CLM can be even an order of magnitude larger than in conventional light-field imaging, without sacrificing diffraction-limited resolution; this is especially true for imaging systems with a large numerical aperture and for refocusing far from the focal plane (i.e., for larger values of the object detail size a).
Most importantly, the DOF extension of CLM is independent of the numerical aperture of the imaging system (see the Materials and Methods section for details), so that a larger numerical aperture can be chosen to maximize the volumetric resolution without sacrificing the DOF. This is very different from conventional light-field microscopy, which requires a careful choice of both the numerical aperture and \(N_u\) to reach an acceptable compromise between resolution and DOF.
The paper is structured as follows. In the “Results” section, we outline the theoretical foundations of the technique and present the experimental results. In the “Discussion” section, we discuss the results, their impact on the state of the art, and the perspectives for further improvement. In the “Materials and Methods” section, we provide a detailed description of the experimental setup and of the procedures for extracting the relevant information from the measured correlations.
The correlated light-field microscope, shown schematically in Fig. 1, is based on a conventional microscope composed of an objective (O) and a tube lens (T), which forms the image of the sample on a high-resolution sensor array (detector \(\mathrm{D}_a\)); this microscope can faithfully reconstruct only the slices of a 3D object that fall within its DOF. The capability of CLM to refocus the out-of-focus parts of a 3D sample stems from its ability to also retrieve information on the direction of the light coming from the sample. In our architecture, this is achieved by a beam splitter (BS) that reflects a small portion of the light emerging from the objective toward a further lens (L), which images the objective onto a second high-resolution sensor array (detector \(\mathrm{D}_b\)). See the Materials and Methods section for further details on the experimental setup.
Scheme of the correlated-beam light-field microscope. The light from the sample (green pyramid and yellow box) is split into two optical paths by a beam splitter (BS) placed after the objective (O). The microscope can sharply image only the part of the sample that is in focus (green), while the part outside the DOF is blurred (yellow). Along the transmission path, the tube lens (T) forms the image of the 3D sample on the detector \(\mathrm {D}_a\) (blue); the DOF is determined by the numerical aperture of the objective. Along the reflection path of the BS, the objective is imaged onto the detector \(\mathrm {D}_b\) (magenta) through the additional lens (L). The intensity patterns recorded by the two array detectors in sets of N frames are processed by a computer to reconstruct the correlation function encoding the three-dimensional plenoptic information about the sample (Eq. (1)). All the experimental results presented in the article were obtained with this setup.
The 3D sample is either a chaotic light emitter itself or a diffusive, transmissive or reflective sample illuminated by an external chaotic light source. The chaotic nature of the light is what enables light-field imaging, through the rich information encoded in the correlations between intensity fluctuations. In CLM, the correlation function is indeed estimated from the intensities collected, within the same frame, by the pixels of the two disjoint detectors \(\mathrm {D}_a\) and \(\mathrm {D}_b\):

\(\Gamma (\varvec{\rho }_a,\varvec{\rho }_b) = \langle \Delta I_{a}(\varvec{\rho }_{a})\, \Delta I_{b}(\varvec{\rho }_{b}) \rangle ,\)   (1)
where \(\langle \dots \rangle\) denotes the average over the source statistics, \(I_{a}(\varvec{\rho}_{a})\) and \(I_{b}(\varvec{\rho}_{b})\) are the intensities registered, within the same frame, at the transverse positions \(\varvec{\rho }_a\) and \(\varvec{\rho }_b\) on the detectors \(\mathrm {D}_a\) and \(\mathrm {D}_b\), respectively, and \(\Delta I_{j} (\varvec{\rho }_{j}) = I_{j}(\varvec{\rho }_{j}) - \langle I_{j} (\varvec{\rho }_{j}) \rangle\), with \(j=a,b\). The statistical reconstruction of the correlation function requires collecting a set of N independent frames, under the assumption of a stationary and ergodic source. Ideally, the exposure time of each frame should be comparable with the source coherence time.
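As an illustration of how the correlation function of Eq. (1) can be estimated in practice from the set of N frame pairs, the following minimal numpy sketch computes the 4D array \(\Gamma(\varvec{\rho}_a,\varvec{\rho}_b)\) from synchronized frame stacks; the array names, shapes and the assumption that the sensor halves are binned or cropped are illustrative choices, not the authors' actual code.

```python
import numpy as np

def correlation_function(frames_a, frames_b):
    """Estimate Gamma(rho_a, rho_b) = <Delta I_a(rho_a) Delta I_b(rho_b)> (Eq. (1)).

    frames_a: array of shape (N, Ha, Wa), intensities on detector D_a, one per frame
    frames_b: array of shape (N, Hb, Wb), intensities on detector D_b, same frames
    Returns the 4D correlation array of shape (Ha, Wa, Hb, Wb).
    """
    # Intensity fluctuations: subtract the average over the N frames (ensemble mean)
    dIa = frames_a - frames_a.mean(axis=0)
    dIb = frames_b - frames_b.mean(axis=0)
    # Frame-averaged product of fluctuations; the sensor halves are assumed to be
    # binned/cropped so that the resulting 4D array fits in memory.
    N = frames_a.shape[0]
    return np.einsum('nij,nkl->ijkl', dIa, dIb) / N
```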
In the geometrical-optics limit, the capability of CLM to perform light-field imaging becomes evident from the expression of the above correlation function44,45:
where \(F(\varvec{\rho }_s)\) is the intensity distribution of the light from the sample, \(P(\varvec{\rho }_O)\) is the intensity transfer function of the objective, f is the distance between the plane of interest and the objective, \(f_T\) is the focal length of the tube lens, and \(M_L\) is the magnification of the image of the objective on \(\mathrm {D}_b\). When the plane of interest is in focus (i.e., \(f=f_O\), with \(f_O\) the focal length of the objective), the correlation simply reproduces the same focused image acquired by the detector \(\mathrm{D}_a\). However, as shown in Fig. 2, out-of-focus parts of the 3D sample (i.e., those lying in a plane at a distance \(f \ne f_O\) from the objective) appear displaced, and their displacement depends on which pixel \(\varvec{\rho }_b\) of the sensor \(\mathrm {D}_b\), corresponding to the point \(\varvec{\rho }_O=-\varvec{\rho }_b/M_{L}\) on the objective, is selected. In other words, for 3D samples thicker than the natural DOF of the microscope, different values of \(\varvec{\rho}_b\) correspond to different choices of the viewpoint on the sample: the correlation function of Eq. (1) has the form of a four-dimensional array, characterized by the two detector coordinates \((x_a, y_a, x_b, y_b)\), which encodes all the spatial and angular information needed for refocusing and multi-perspective imaging. By fixing the coordinates \((x_b,y_b)\) of the 4D array, one obtains a “slice” of the correlation function, corresponding to the image of the sample as seen from the selected point on the objective. This property enables locating the details of the sample in three dimensions and revealing hidden parts of the sample. A refocused image of the sample plane at an arbitrary distance f from the objective can then be obtained by properly superimposing and summing these different views44,45:
where \(M=f_T/f_O\) is the natural magnification of the microscope. The refocusing process considerably increases the signal-to-noise ratio (SNR) with respect to the single-viewpoint images associated with a fixed \(\varvec{\rho}_b\). Notice that, by replacing physical distances with optical distances, the method can be generalized to samples with an axially varying refractive index. Moreover, the reconstruction of the correlation function has been shown to be robust against turbulence and scattering around the sample, within a limiting distance determined by the wavelength of the emitted light and its transverse coherence.
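To make the viewpoint-selection and refocusing steps concrete, here is a hedged numpy sketch operating on the 4D array produced by the earlier sketch: fixing \((x_b, y_b)\) slices a single view out of the correlation array, while a refocused image is built by shifting each view and summing over all of them, in the spirit of Eq. (3). The shift prescription (the factor `alpha` relating the viewpoint coordinate and the defocus \(f-f_O\) to the lateral displacement) is a placeholder for the geometrical factors of Eq. (3), not the paper's explicit formula; Eq. (3) also involves a rescaling of the image coordinates that this sketch does not capture.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def single_view(gamma, xb, yb):
    """One viewpoint image: a slice of the 4D correlation at fixed (xb, yb) on D_b."""
    return gamma[:, :, xb, yb]

def refocus(gamma, f, f_O, alpha):
    """Shift-and-sum refocusing (schematic form of Eq. (3)).

    gamma : 4D correlation array indexed as (xa, ya, xb, yb)
    f, f_O: refocusing distance and objective focal length
    alpha : geometry-dependent scale (placeholder for the factors in Eq. (3))
    """
    Ha, Wa, Hb, Wb = gamma.shape
    out = np.zeros((Ha, Wa))
    # Loop kept explicit for clarity; real data may contain ~10^5 viewpoints.
    for xb in range(Hb):
        for yb in range(Wb):
            view = gamma[:, :, xb, yb]
            # Out-of-focus views are laterally displaced in proportion to the
            # viewpoint coordinate and to the defocus (f - f_O); undo that shift.
            dx = alpha * (f - f_O) * (xb - Hb / 2)
            dy = alpha * (f - f_O) * (yb - Wb / 2)
            out += nd_shift(view, (dx, dy), order=1, mode='constant')
    return out
```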
Schematic representation of the different viewpoints on a thick 3D sample encoded in the correlation function \(\Gamma (\varvec{\rho }_a,\varvec{\rho }_b)\) of Eq. (2); here \(\varvec{\rho }_i=(x_i, y_i)\), \(i=a,b\). The thick sample is represented by two details, a (yellow) circle and a (red) arrow, placed at distances f and \(f^{\prime}\) from the objective, respectively. The two details can either overlap (upper right panel) or be completely separated (lower right panel), depending on the chosen transverse coordinate \(\varvec{\rho }_b\) on the detector \(\mathrm {D}_b\), which corresponds to the point \(-\varvec{\rho }_b/M_L\) on the objective.
Before moving to the experimental demonstration of CLM, let us discuss the benefits offered by its ability to collect a large number of viewpoints, owing to the large objective plane from which the viewpoints are chosen and to the resulting wide range of angles under which the sample can be observed. First, the refocusing range is closely related to the maximum viewing angle, and hence to the DOF achievable in 3D imaging. Second, the larger the number of viewpoints superimposed to obtain the refocused image (see the integral over \(\varvec{\rho }_b\) in Eq. (3)), the stronger the suppression of contributions from planes adjacent to the plane of interest, an effect well known in 3D imaging. In addition, summing a large number of views to form the final image is advantageous in terms of noise reduction: when views affected by statistically independent noise are summed, the point-wise SNR grows as the square root of the number of contributions.
The refocusing and depth-mapping capabilities of CLM were first tested on a simple controllable 3D object, made of two planar resolution targets (named \(3\mathrm {D}_1\) and \(3\mathrm {D}_2\)) placed at two different distances from the objective, well beyond its natural DOF. Data were acquired with the correlated light-field microscope focused on a plane containing neither of the two test targets. The correlation measurements, combined with Eq. (3), were used to refocus the two test targets separately from this single dataset. The illuminated areas of both targets contain triple slits with center-to-center distance \(d = 49.6\,\upmu\mathrm{m}\) and slit width \(a=d/2\); the total linear field of view (FOV) is \(0.54\,\mathrm {mm}\). The test targets are placed \(2.5\,\mathrm {mm}\) apart (namely, \(f_{\text {3D}_1} - f_O = -1250\,\mathrm {\upmu m}\) and \(f_{\text {3D}_2}-f_O = 1250\,\mathrm {\upmu m}\), with \(f_O\) the focal length of the objective), so that, for details of the given size, the corresponding defocus is several times larger than the natural DOF of the microscope (see the circle-of-confusion criterion in Table 1). The reported results are obtained by evaluating the correlation function over \(N=5\times 10^3\) acquired frames; further details on the dependence of the SNR on the number of collected frames are reported in the Supplementary Information.
The improvement of CLM over a standard microscope can be appreciated in Fig. 3a, where we report the trade-off between resolution and DOF for both microscopes. In particular, the curves represent the resolution limits of CLM and of a standard microscope (SM) with the same numerical aperture, as a function of the distance from the focal plane of the objective. In both cases, the resolution limit is defined as the value d of the center-to-center distance of two slits of width \(a=d/2\) whose images can be distinguished with 10% visibility; this definition generalizes the Rayleigh criterion to the resolution of out-of-focus images. For a fixed center-to-center distance d (vertical axis), our theoretical results define the longitudinal range \(f-f_O\) (horizontal axis) within which the two-slit image can be resolved. The points labeled A to E and A’ to D’ in Fig. 3a were investigated experimentally to demonstrate the agreement between the experimental results and the theoretical predictions of resolution limits and DOF. We explored the range from \(-1\) mm to \(+1\) mm along the optical axis, in steps of \(250\,\upmu\mathrm{m}\), using different three-slit masks of a resolution test target, characterized by center-to-center distances ranging from \(44\,\upmu \mathrm {m}\) (A and A’) down to \(4\,\upmu \mathrm {m}\) (E). In particular, point E is close to the diffraction limit of standard microscopy, showing that CLM has the same resolution at focus. Figure 3b shows the experimental test of the refocusing capability of CLM for cases A and D’; the complete set of refocused images obtained in all the other cases is reported in the Supplementary Information. The red dots in Fig. 3a indicate the parameters of the resolution targets \(3\mathrm {D}_1\) and \(3\mathrm {D}_2\) composing our 3D test object. The successful experimental refocusing of the two targets, shown in Fig. 3c, demonstrates that CLM achieves a DOF 6 times larger than that of a standard microscope at the given resolution or, equivalently, a 6-fold improvement in resolution at the given DOF. The leftmost panel reports, for comparison, the standard microscope image of the 3D test object, in which both target planes are significantly out of focus and the three-slit groups cannot be identified. It is worth noticing that the set of triple slits placed at \(f_{\text {3D}_2}-f_O=1250\,\upmu \mathrm {m}\) (i.e., the object farthest from the objective) is still refocused perfectly. This indicates that the resolution achieved on farther planes is not significantly affected by other features placed along the optical path, even though they perform a substantial spatial filtering.
Panel (a) compares the resolution/DOF trade-off of a standard microscope (SM) and of CLM, for numerical aperture \(\mathrm {NA}_0 = 0.23\) and illumination wavelength \(\lambda = 532\,\mathrm {nm}\). Each curve represents the limit at which a double-slit mask with center-to-center distance d (reported on the vertical axis and equal to twice the slit width) is resolved with 10% visibility, as a function of the longitudinal distance of the mask from the focal plane of the objective (\(f-f_O\)). The curves are obtained for the same illumination wavelength and numerical aperture for both microscopes. The points \(3D_1\) and \(3D_2\) represent the 3D sample made of two planar resolution targets (triple slits with center-to-center distance \(d=49.6\,\upmu \mathrm {m}\)), placed at distances \(f_1-f_O = -1250\,\upmu \mathrm {m}\) and \(f_2-f_O = 1250\,\upmu \mathrm {m}\) from the objective, respectively. Points A to E and A’ to D’ correspond to further experimental data demonstrating the expected maximum DOF achievable by CLM at different resolutions, as the object (a resolution test target) is moved out of the focal plane (A, A’: \(d= 44.2\,\upmu \mathrm {m}\) at \(f-f_O= \pm 1000\,\upmu \mathrm {m}\); B, B’: \(d= 39.4\,\upmu \mathrm {m}\) at \(f-f_O= \pm 750\,\upmu \mathrm {m}\); C, C’: \(d= 31.3\,\upmu \mathrm {m}\) at \(f-f_O= \pm 500\,\upmu \mathrm {m}\); D, D’: \(d= 22.1\,\upmu \mathrm {m}\) at \(f-f_O= \pm 250\,\upmu \mathrm {m}\); E: \(d= 4\,\upmu \mathrm {m}\) at \(f-f_O=0\,\upmu \mathrm {m}\)). The refocused images of the three slits corresponding to points A and D’ are shown in panel (b); the remaining ones are reported in the Supplementary Information. The line plots below the refocused images show the image intensity, averaged along the vertical direction and normalized to its maximum. Panel (c) shows the images of the 3D test sample described above, corresponding to the points \(3D_1\) and \(3D_2\) in panel (a): image acquired with the standard microscope (left), and CLM images refocused on the closest plane \(3\mathrm {D}_1\) (center) and on the farthest plane \(3\mathrm {D}_2\) (right), respectively. The line plots below the refocused images show the intensities within the red-circled areas, averaged along the longitudinal direction of the slits and normalized to the maximum.
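The 10% visibility criterion used in Fig. 3 can be evaluated directly on the averaged line profiles plotted below the refocused images. The short sketch below assumes the usual peak-to-valley contrast definition of visibility; since the exact estimator is not spelled out in the text, the restriction to the central portion of the profile is an illustrative choice.

```python
import numpy as np

def slit_visibility(profile):
    """Peak-to-valley visibility of a (refocused) multi-slit line profile.

    profile: 1D intensity profile, e.g. the vertically averaged, normalized
    cross-section plotted under the refocused images in Fig. 3.
    """
    i_max = profile.max()  # brightest slit peak
    # Valley between slits: minimum of the central portion of the profile,
    # excluding the dark background outside the slit group (crudely, the middle half).
    core = profile[len(profile) // 4 : 3 * len(profile) // 4]
    i_min = core.min()
    return (i_max - i_min) / (i_max + i_min)

# A pattern is considered resolved when slit_visibility(profile) >= 0.10
```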
The results in Fig. 3c also show that CLM increases the data acquisition speed by more than one order of magnitude with respect to previous correlation-based light-field imaging protocols37,38, in which \(5\times 10^4\) frames (to be compared with the current \(5\times 10^3\)) and additional Gaussian low-pass filtering in post-processing were needed to achieve a comparable SNR. This improvement is due to the elimination of ghost images from the CLM architecture, replaced by ordinary images on both sensor arrays. Indeed, correlating two real images has been shown to significantly improve the SNR with respect to ghost imaging.
After the 3D target, we tested the effectiveness of CLM in reproducing features of interest for biomedical applications in biomedical phantoms; the samples were made of birefringent starch granules randomly dispersed in a transparent, non-birefringent gel. The focal plane inside the sample was chosen arbitrarily, approximately at half of its thickness. In Fig. 4a we show the standard image of the focal plane, while Fig. 4b shows the images of four different planes refocused by CLM, at optical distances of \(-10\,\upmu\mathrm{m}\), \(-130\,\upmu \mathrm {m}\), \(-310\,\upmu \mathrm {m}\) and \(+200\,\upmu \mathrm {m}\), respectively. Some aggregates are clearly in focus in only one of the four images, which enables determining their longitudinal optical distance from the focal plane. The volumetric resolution of CLM allows refocusing 54 planes within a \(1\,\mathrm {mm}\)-thick volume, with a transverse resolution better than \(20\,\upmu \mathrm {m}\), a longitudinal resolution better than \(90\,\upmu\mathrm {m}\), and a field of view of about \(1\,\mathrm {mm}^2\) (see the Supplementary video).
Interestingly, in the current CLM architecture, the SNR is large enough to effectively inspect the images from the different viewpoints (which is useful for further data analysis, such as 3D reconstruction). Figure 5 shows the change of perspective obtained with CLM as the “viewpoint” is moved horizontally in the objective plane: in this case, the position of in-focus details does not change with the viewing angle, whereas out-of-focus starch granules move along the horizontal direction. From a single correlation measurement, 130,000 images of the sample from different viewpoints were obtained, distributed over the objective area (\(\sim 1\,\mathrm {cm}^2\)), each characterized by a diffraction-limited spatial resolution of \(40\,\upmu \mathrm {m}\). Such a large number of statistically independent viewpoints allows generating viewpoint images in which the object details can be clearly distinguished, which is particularly relevant for the implementation of 3D reconstruction algorithms based on multiple perspectives.
First-order image of the thick biomedical phantom (a), acquired at \(f-f_O=0\,\upmu \mathrm {m}\), and CLM images refocused on four different planes of the same sample (b); the exact value of \(z=f-f_O\) is indicated on top of each image. All images were obtained from the same dataset of \(N=5000\) frames. The optimization of the correlation function employed in this case is described in the Materials and Methods section.
Thanks to the refocusing capability of CLM, the DOF is 6 times larger than that of a conventional microscope with the same numerical aperture and the same (diffraction-limited) resolution in the focal plane. These results are in good agreement with the expected refocusing range of CLM44 at the given resolution (\(d=50\,\upmu \mathrm {m}\)), demonstrating the reliability of the proposed CLM architecture. Considering that the technique does not require any scanning, the volumetric resolution achieved by CLM on a complex thick sample (\(1 \times 1 \times 1 \,\mathrm {mm}^3\)) is another remarkable result. With the adopted CLM setup, we obtained 54 independent axial slices of the biological phantom. Taking into account how the axial and lateral resolutions vary along the optical axis, this corresponds to a total of \(7.21\times 10^6\) voxels within the volume of interest. Notice that the setup considered in this work serves as a proof-of-principle demonstration and its parameters have not been optimized. However, the properties summarized in Table 1 provide a guide for scaling the setup toward smaller resolutions, depending on the specific application. All these results are out of reach for standard light-field microscopy, due to its hyperbolic trade-off between spatial resolution and viewpoint multiplicity (hence maximum achievable DOF).
Both CLM and conventional light-field microscopy enable 3D imaging without moving any part of the sample or of the optics; the resolution depends on the distance from the focal plane, as shown in Fig. 3, but is uniform in the transverse directions as long as light propagation can be considered paraxial. These features compare favorably with those of elaborate techniques such as confocal microscopy, which require both longitudinal and transverse scanning to obtain an image with uniform volumetric resolution, as well as with much less demanding approaches (e.g., those based on non-diffracting beams), which require only longitudinal scanning but trade this convenience for non-uniform illumination and lateral resolution. The main drawback of CLM lies in its operating principle: whereas a first-order incoherent image only requires exposing the sensor for a time much longer than the source coherence time, so that the SNR can be built up within a single exposure in the shortest possible time, a reliable reconstruction of the correlation function (1) requires collecting a large number of different frames, each with a duration ideally comparable to the source coherence time.
Increasing the acquisition rate of CLM is the main challenge to be addressed to make it competitive with state-of-the-art light-field microscopes25. Such an acceleration is in fact critical for avoiding radiation damage of biomedical samples, for in vivo imaging, and for investigating dynamical processes. The large SNR of CLM with respect to the original CPI scheme represents a relevant step in this direction, since it enables increasing the acquisition rate by one order of magnitude while guaranteeing a higher SNR (see ref. 33). Besides the denoising method implemented here (Figs. 4, 5) and described in the Materials and Methods section, a further route toward faster data collection lies in the use of compressed sensing and deep learning techniques, which are increasingly employed in imaging tasks47,48,49,50. From the hardware standpoint, there is significant room for improving the acquisition speed of our microscope, both by exploring possible optimizations of the current acquisition procedure and by employing cameras with better timing performance. For instance, the most straightforward way to improve the timing of the current CLM is to operate the camera in rolling-shutter mode (see the Materials and Methods section) instead of the global-shutter mode we employed to guarantee that \(I_a\) and \(I_b\) (which are then correlated pixel by pixel) represent simultaneous statistical samples of the chaotic source. This condition matches the theoretical model (i.e., Eq. (1)), but it would certainly be interesting to investigate whether a slight deviation from it introduces artifacts small enough to justify the gain in speed. With our camera, this could mean up to a doubling of the frame rate, reducing the current acquisition time to about 20 seconds (from the current 43 seconds). In addition, the chaotic source can be significantly improved by replacing the currently used ground-glass disk with a digital micromirror device (DMD) (see the Materials and Methods section), which increases versatility and useful statistics while significantly reducing the source coherence time, since typical DMD frame rates are around 30 kHz. Moreover, since the DMD patterns are entirely user-controlled, they can be tailored to reach the desired SNR with the lowest possible number of frames, or even to experiment with structured illumination. In this case, the data acquisition rate would mainly be limited by the maximum frame rate of the sensor and, ultimately, by the data throughput. This issue can be addressed by replacing our current sCMOS with a faster camera, capable of 6.6k frames per second at full resolution, or with ultrafast high-resolution SPAD arrays, providing acquisition rates of \(10^5\) binary frames per second on a \(512 \times 512\) array52,53. When choosing an alternative camera, speed should not be prioritized over readout noise, dynamic range, detection efficiency, or minimum exposure time, all of which are key parameters for correlation-based imaging. In this respect, SPAD arrays are particularly interesting due to their much shorter minimum exposure times, from a few hundred ps to 10 ns52,53,54,55, although their binary nature may pose challenges. The minimum exposure time of the camera also determines the possibility of extending CLM to uncontrolled chaotic (thermal) sources, including fluorescent samples, which are at the basis of the corresponding microscopy techniques.
Due to the chaotic nature of fluorescence, CLM and fluorescence microscopy are in principle compatible, but an experimental challenge must be addressed in this case: matching the very short coherence time of fluorescence with the smallest available sensor exposure time. Similar challenges were successfully faced by Hanbury Brown and Twiss with the Narrabri stellar interferometer56 and, more recently, in related solar imaging experiments57,58. In the context of CLM, we shall address this challenge in future work.
Change of perspective on starch granules suspended in gel within a \(1\, \mathrm {mm}^3\) sample, as obtained with CLM. The viewpoint is changed by correlating the intensity recorded in each pixel of the spatial sensor (\(\mathrm{D}_a\)) with the pixel of the angular sensor (\(\mathrm{D}_b\)) centered at \(\varvec{\rho}_b = (-M_L\rho_{O,x},0)\), corresponding to the point of abscissa \(\rho_{O,x}\) on the objective lens [that is, by changing the second argument in Eq. (2)]. Two different viewpoints on the sample, taken along the horizontal diameter of the objective, are shown. The left image is the perspective from a point 3145 \(\upmu\)m to the left of the center of the objective, corresponding to an angle of \(-5^{\circ }\) with the optical axis (OA). The right image is the perspective from \(-1050\,\upmu\)m (\(-2^{\circ}\) with respect to the OA). Three different features are highlighted. The starch granule labeled 1 does not move with the viewpoint, meaning that it lies in the focal plane of the objective. As the viewpoint moves to the right, the granule labeled 2 moves to the right, meaning that it lies (out of focus) in a plane between the objective and its focal plane. The granule labeled 3, instead, is displaced in the opposite direction, meaning that its longitudinal distance from the objective is larger than the focal distance. As expected, none of the visible granules changes its position along the vertical axis as the viewpoint moves along the horizontal direction.
To conclude, we remark that the need to acquire many short-exposure frames to build a single image is shared with other well-established microscopy techniques, such as STORM59. This comparison is encouraging, considering the absence of photobleaching and photodamage in CLM, and considering that the per-frame SNR requirement of CLM is much weaker than in STORM, where the signal must be clean enough to enable centroid estimation. Exposure-related issues can also be mitigated by temporally modulating the illumination to match the camera frame acquisition.
The experimental setup used for demonstrating CLM is shown in Fig. 6. The controlled chaotic light source is made of a single-mode laser (CNI MLL-III-532-300mW), with wavelength \(\lambda=532\,\mathrm {nm}\), illuminating a rotating ground-glass disk (GGD) with diffusion angle \(\theta _d \simeq 14^{\circ }\), whose rotation speed determines the source coherence time (\(\approx 90\,\upmu\)s). The laser spot on the disk is enlarged to a diameter of 8 mm by a \(6\times\) beam expander, and the sample is placed at a distance of \(10\,\mathrm {mm}\) from the GGD; the resulting effective numerical aperture of our system is \(\mathrm {NA} = 0.23\), which determines the expected diffraction-limited resolution \(\delta = 1.6\,\upmu \mathrm {m}\). The light transmitted by the object propagates to the objective O, of focal length \(f_O = 30\,\mathrm {mm}\), and reaches the first polarizing beam splitter (PBS), where it is split into two beams. The transmitted beam reaches the tube lens T, of focal length \(f_T = 125\,\mathrm {mm}\), and impinges on the portion of the sensor labeled \(\mathrm {D}_a\). The distance between the objective O and the tube lens T is equal to the sum of the focal lengths of the two lenses, \(f_O + f_T\), and the distance between T and \(\mathrm {D}_a\) is \(f_T\); the focused-image plane is therefore at a distance \(f_O\) from the objective. The beam reflected by the PBS illuminates the lens L, of focal length \(f_L = 150\,\mathrm {mm}\), and then impinges on the portion of the sensor labeled \(\mathrm {D}_b\), after being reflected by a second PBS. The distance \(S_O\) between the objective O and the lens L and the distance \(S_I\) between L and \(\mathrm {D}_b\) are conjugate, so that the front aperture of the objective is imaged onto \(\mathrm{D}_b\). The measured magnification of this image is \(M_L = 0.31\). The two sensors \(\mathrm {D}_a\) and \(\mathrm {D}_b\) are obtained from two non-overlapping halves of the same camera (Andor Zyla 5.5 sCMOS), to guarantee synchronization. To exploit the full dynamic range of the camera and maximize the SNR, we balanced the beam intensity on the two halves of the sCMOS camera by placing a half-wave plate in the laser beam, before the GGD. The camera sensor has \(2560\times 2160\) pixels of size \(\delta _{\mathrm {p}} = 6.5\,\upmu \mathrm {m}\) and can operate at up to 50 fps at full frame in global-shutter mode (100 fps in rolling-shutter mode). Since the resolution on the object, \(\delta=1.6\,\upmu\mathrm{m}\), corresponds to a resolution cell of \(M\delta=6.7\,\upmu\mathrm{m}\) on the sensor, the data shown in Figs. 4 and 5 were acquired with \(2\times 2\) hardware binning; no binning was applied when acquiring the data corresponding to points C, D, their primed counterparts, and E in Fig. 3a. The test targets used to acquire the data of Fig. 3 are Thorlabs R3L3S1N and R1DS1N. The exposure time was set to \(\tau = 92.3\,\upmu \mathrm {s}\) to match the source coherence time, and the camera acquisition rate to \(R = 120\,\mathrm {Hz}\), the maximum rate allowed by our FOV in global-shutter mode.
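As a consistency check, the derived quantities quoted in this section (magnification, resolution cell on the sensor, acquisition time for \(N=5000\) frames) follow directly from the stated parameters; the short sketch below uses only numbers given in the text.

```python
# Consistency check of the quantities derived from the stated setup parameters.
f_O = 30.0      # objective focal length (mm)
f_T = 125.0     # tube lens focal length (mm)
delta = 1.6     # diffraction-limited resolution on the object (um)
pixel = 6.5     # sCMOS pixel pitch (um)
R = 120.0       # frame rate in global-shutter mode (Hz)
N = 5000        # frames per correlation measurement

M = f_T / f_O                 # natural magnification: ~4.2
delta_sensor = M * delta      # resolution cell on the sensor: ~6.7 um
acq_time = N / R              # ~41.7 s, consistent with the ~43 s quoted in the Discussion

print(f"M = {M:.2f}, M*delta = {delta_sensor:.1f} um, acquisition time = {acq_time:.1f} s")
```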
Experimental setup of CLM. The light from the chaotic source, made of a laser, a beam expander (BE), a half-wave plate (HWP) and a rotating ground-glass disk (GGD), illuminates the object, passes through the objective (O) and reaches the first polarizing beam splitter (PBS). The transmitted beam passes through the tube lens (T) and the second PBS and finally impinges on the detector \(\mathrm {D}_a\), namely, the (blue) half of the sCMOS camera sensor. The focused image is characterized by a magnification \(M = f_T/f_O = 4.2\). The reflected beam passes through the additional lens L and is then reflected by the second PBS toward the detector \(\mathrm {D}_b\), namely, the non-overlapping (magenta) half of the sensor of the same sCMOS camera. The magnification of the image of the objective O on \(\mathrm{D}_b\) is \(M_L=0.31\). All the experimental results shown in this article were obtained with this setup.
All the refocused images reported in the article were obtained by applying the refocusing formula of Eq. (3) to the experimental four-dimensional correlation function. We also applied corrections to suppress the edge effects due to the finite detector size, as described in ref. 45. The problem of the noisy background in the refocused images of the biomedical phantom (Figs. 4, 5) was addressed by preprocessing the correlation function. The statistical noise, quantified by the variance of the measured quantity in Eq. (1), can be reduced by introducing additional terms in the correlation function; the approach is analogous to the one of differential ghost imaging, with each pixel of \(\mathrm {D}_b\) treated as a bucket detector. The correlation function is thus modified to obtain
where \(I_a^{\mathrm {TOT}}\) is the total intensity impinging on the detector \(\mathrm {D}_a\). The free parameter K can be fixed by minimizing the variance of the modified correlation function \(\mathcal {F}(\varvec{\rho}_a,\varvec{\rho}_b)\); differentiating with respect to K immediately shows that the minimum is reached at
A comparison of the analysis results obtained with the standard correlation function of Eq. (1) and with the modified one of Eq. (4) is available in the Supplementary Information.
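To illustrate the noise-suppression strategy described above, here is a schematic numpy sketch in which each pixel of \(\mathrm{D}_b\) is correlated with \(\mathrm{D}_a\) after subtracting a fraction K of the total intensity \(I_a^{\mathrm{TOT}}\), with K fixed by a least-squares (variance-minimizing) condition. Since Eq. (4) itself is not reproduced here, this is a generic differential-ghost-imaging-style construction consistent with the description, not a verbatim transcription of the paper's formula.

```python
import numpy as np

def modified_correlation(frames_a, frames_b):
    """Differential-ghost-imaging-style variant of Eq. (1) with a single weight K.

    The total intensity on D_a acts as a bucket-like reference; K is chosen so as
    to minimize the variance of the corrected D_b fluctuations (least squares).
    """
    N = frames_a.shape[0]
    dIa = frames_a - frames_a.mean(axis=0)          # fluctuations on D_a
    dIb = frames_b - frames_b.mean(axis=0)          # fluctuations on D_b
    tot = frames_a.sum(axis=(1, 2))                 # I_a^TOT, one value per frame
    d_tot = tot - tot.mean()

    # Least-squares weight minimizing sum over frames and D_b pixels of (dIb - K*d_tot)^2
    n_pix_b = dIb.shape[1] * dIb.shape[2]
    K = np.einsum('nkl,n->', dIb, d_tot) / (n_pix_b * np.sum(d_tot ** 2))

    dIb_corr = dIb - K * d_tot[:, None, None]
    return np.einsum('nij,nkl->ijkl', dIa, dIb_corr) / N
```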
The viewpoint multiplicity is defined as the number of viewpoints available along each transverse direction. In the case of CLM, the viewpoint multiplicity is estimated as the number of resolution cells that fit within the diameter D of the objective. When evaluating this resolution cell, one must take into account that, in the correlation function, the sample acts as an effective aperture and thus determines the resolution on the objective (see ref. 44 for further details); the resulting resolution cell on the objective is
This result highlights the interesting interplay between the two apertures and the two resolution cells in the objective plane and in the object plane. If the axial position z of the object satisfies \(|z-f_O|\ll f_O\), the above evaluation, obtained for objects lying in the focal plane, remains approximately valid.
The DOF of each refocused image provides information on the axial resolution of CLM. As reported in the second column of Table 1, CLM has the same DOF at the focal plane as a standard microscope, defined by the numerical aperture and the illumination wavelength. The DOF of a refocused image, when the object is out of focus, can be determined by geometrical-optics arguments, by considering two point-like sources described by the intensity distribution
where \(\delta ^{(2)}(\varvec{\rho })\) is the two-dimensional Dirac delta and \(\hat{u}_x\) is the unit vector along the x axis. By inserting the image of the distribution in Eq. (9), as given by Eq. (2), into the refocusing algorithm of Eq. (3), we find that, whenever refocusing is performed at an axial coordinate different from the object position (\(f\ne z\)), each point source gives rise to a “circle of confusion”, as in ordinary imaging. Unlike the other features of CLM, this circle of confusion depends on the numerical aperture of the CLM device. The image of the two sources, refocused at a distance \(f\ne z\), reads
Assuming that the objective transfer function P describes a circular aperture of radius R, Eq. (10) implies that the refocused image consists of two circles of radius \(MR|f-z|/z\), centered at \(\pm (Mf/2z)\,\varvec{a}\). Therefore, there is a range of the refocusing parameter f in which the two circles are separated; beyond this range, the two circles start to overlap and, eventually, the two sources can no longer be resolved. In particular, there are two refocusing positions \(f^\prime\) and \(f^{\prime \prime}\) at which Eq. (10) describes two tangent circles. We therefore define the DOF of the refocused image as
where the approximation holds for \(|f^{\prime}-f^{\prime\prime}|\ll f_O\). Hence, depending on the size of the object detail of interest, the refocused CLM image has a DOF that depends on the numerical aperture, as in a standard microscope. However, the ratio between the extended DOF available to CLM and the DOF of a single refocused image yields the number of independent planes available for refocusing, namely
This result shows that, as in conventional light-field imaging, the number of longitudinal planes that can be refocused is proportional to the multiplicity of viewpoints in the objective plane. The advantage of CLM over conventional light-field imaging lies in the much larger number of available viewpoints, as discussed in the introduction.
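For completeness, the tangency condition behind the DOF of a refocused image can be worked out explicitly from the geometry quoted above (circles of radius \(MR|f-z|/z\) centered at \(\pm (Mf/2z)\,\varvec{a}\)); the following algebra is our reconstruction under the paraxial approximation, so numerical factors of order unity may differ from the exact expressions of Eqs. (11) and (12).

\[ \frac{Mfa}{z} = 2\,\frac{MR\,|f-z|}{z} \;\Longrightarrow\; |f-z| = \frac{fa}{2R}, \qquad f^{\prime} = \frac{z}{1-a/2R}, \quad f^{\prime\prime} = \frac{z}{1+a/2R}, \]

\[ \mathrm{DOF}_{\mathrm{ref}} = f^{\prime}-f^{\prime\prime} = \frac{z\,a/R}{1-(a/2R)^2} \simeq \frac{f_O\,a}{R} \sim \frac{a}{\mathrm{NA}_0}, \]

where a is the separation of the two point sources, and the last approximations use \(|f^{\prime}-f^{\prime\prime}|\ll f_O\) (so that \(z\simeq f_O\)) together with the paraxial estimate \(\mathrm{NA}_0 \simeq R/f_O\).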
The datasets used and/or analyzed in the current study are available from the corresponding author upon reasonable request.
Oku, H., Hashimoto, K. & Ishikawa, M. Variable-focus lens with 1-kHz bandwidth. Opt. Express 12, 2138 (2004).
Mermillod-Blondin, A., McLeod, E. & Arnold, C. B. High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens. Opt. Lett. 33, 2146 (2008).
Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. & Stelzer, E. H. K. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007 (2004).
Fahrbach, F. O., Simon, P. & Rohrbach, A. Microscopy with self-reconstructing beams. Nat. Photonics 4, 780 (2010).
Fahrbach, F. O. & Rohrbach, A. A line scanned light-sheet microscope with phase shaped self-reconstructing beams. Opt. Express 18, 24229 (2010).
Chen, B.-C. et al. Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
Chu, L.-A. et al. Rapid single-wavelength lightsheet localization microscopy for clarified tissue. Nat. Commun. 10, 4762 (2019).
Takanezawa, S., Saitou, T. & Imamura, T. Wide field light-sheet microscopy with lens-axicon controlled two-photon Bessel beam illumination. Nat. Commun. 12, 2979 (2021).
Moneron, G. et al. Fast STED microscopy with continuous wave fiber lasers. Opt. Express 18, 1302–1309 (2010).

