## Volumetric Fluorescence Imaging

### Fast Volume Imaging with Bessel Beam Tomography and Machine Learning

- Fig. 1: a) Side projections allow undistorted 3D reconstruction. b) Projections under an oblique angle lead to distorted 3D reconstructions. c) Steps in the volumetric reconstruction process.
- Fig. 2: Reconstruction of beads. a) Four projections, one recorded with each Bessel beam, from a volumetric sample composed of 1 µm diameter beads. b) Reconstruction of the sample using back projections compared to ground truth.
- Fig. 3: Reconstruction of pollen grains. a) Four projections recorded with Bessel beams from a sample of pollen grains. b) First row: reconstruction of the sample using only back projection. Second row: reconstruction of the sample after applying a U-net to the back projections to correct distortions. Third row: ground truth recorded with a two-photon microscope with a Gaussian focus.

**Fluorescence microscopy is one of the most widely used tools for investigating the dynamics of biological systems. Such dynamics, for example the propagation of neural signals in the brain, often play out in all three spatial dimensions. This is challenging for microscopy, which typically only monitors a single two-dimensional focal plane at a time. To address this problem, we developed a fluorescence tomography approach that records volume information from sparsely labeled samples in a single frame scan.**

Optical microscopes typically record from a single focal plane at a time. This can be limiting for samples that show fast dynamics across multiple focal planes, as is often the case when monitoring neural activity in the brain at cellular resolution. In this situation, neurons are arranged in three dimensions and the dynamics of these cell ensembles evolve over a volume. To monitor such dynamics it can be advantageous to sacrifice some spatial resolution for temporal resolution. In the following, we describe one such approach, which reconstructs volumetric information from images recorded in a single frame scan with multiple laser beams.

**Tomographic Imaging**

The developed approach [1] relies on tomography [2], which reconstructs volume information from multiple two-dimensional projections. Tomographic imaging in optical microscopes can be achieved using scattered light or fluorescence [3, 4]. In the latter case the sample is typically rotated with respect to the imaging path to obtain multiple projections. However, this method is not compatible with many imaging situations in which the sample is not transparent and can only be accessed from one side with a single microscope objective, as is typically the case for *in vivo* imaging in neuroscience.

**Temporally Multiplexed Bessel Beams**

For generating projections under these conditions, one can take advantage of Bessel beams. Instead of a focal spot, these beams generate a focal line that is extended along the optical axis while maintaining a lateral resolution similar to that of a focused Gaussian beam; Bessel beams have, for example, been applied for *in vivo* imaging of neural activity in sparsely labeled samples [5].

By introducing a lateral offset when passing through the microscope objective, these Bessel beams can be tilted by an angle θ with respect to the optical axis; scanning across the sample then results in a projection along this tilted beam axis. For volumetric reconstruction, multiple such projections need to be recorded from different angles, and to additionally cope with fast sample dynamics, these projections have to be recorded at the same time. This can be achieved using temporal multiplexing: with pulsed fluorescence excitation together with time-gated detection, signals generated by different beams can be sorted into different channels [6].
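A minimal sketch of this channel sorting, assuming four beams delayed by a quarter of the laser pulse period and an 80 MHz repetition rate (both illustrative values, not parameters taken from the setup):

```python
import numpy as np

period = 12.5e-9         # laser pulse period (80 MHz repetition rate, assumed)
n_beams = 4              # number of temporally multiplexed beams
gate = period / n_beams  # width of each detection time gate

# photon arrival times relative to the start of the pulse train (seconds)
arrival_times = np.array([0.1e-9, 3.3e-9, 6.6e-9, 9.9e-9, 12.6e-9])

# assign each photon to the beam whose time gate it falls into
channels = ((arrival_times % period) // gate).astype(int)
```

Each detected photon is thus attributed to one of the four beams purely by its arrival time within the pulse period.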

**Bessel Beam Tomography**

As described in more detail below, images were reconstructed in the following steps: First, four 2D projections of a volumetric sample were recorded by scanning four temporally multiplexed Bessel beams, each with a different projection angle. Then, similar to computed tomography, images were reconstructed using so-called back projection, which computationally inverts the projection signal generation process. Compared to reconstruction using side projections (fig. 1a), which would be best suited for tomography, the oblique projection angles lead to distortions of the reconstructed volume (fig. 1b). Finally, again taking advantage of image reconstruction approaches developed for computed tomography, these distortions were corrected using machine learning by applying a suitable convolutional neural network. These steps are summarized in figure 1c.

**Tomographic Reconstruction**

Bessel beam tomography first requires generating four beams with different orientations. Here, the elevation angle was kept constant for all beams (about 13 degrees, limited by the numerical aperture of the objective), and the azimuth was varied in steps of 90 degrees. In a calibration step, the exact orientation of the Bessel beams across the field of view was mapped using fluorescent beads. Using these calibration measurements, the fluorescence signal resulting from scanning the Bessel beams across any sample can then be simulated, similar to the so-called Radon transforms used in computed tomography.
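As an illustration of how this projection signal can be simulated, the sketch below implements a discrete Radon-like forward projection along a tilted beam. It assumes ideal straight beams and nearest-pixel shifts, rather than the calibrated beam map used in the actual setup:

```python
import numpy as np

def tilted_projection(volume, theta_deg, azimuth_deg):
    """Sum a volume (z, y, x) along a beam tilted by theta_deg from the
    optical axis at the given azimuth: each depth slice is shifted
    laterally in proportion to its depth, then all slices are added."""
    nz = volume.shape[0]
    theta = np.deg2rad(theta_deg)
    az = np.deg2rad(azimuth_deg)
    proj = np.zeros(volume.shape[1:])
    for z in range(nz):
        dx = int(round(z * np.tan(theta) * np.cos(az)))  # lateral beam offset
        dy = int(round(z * np.tan(theta) * np.sin(az)))  # at depth z (pixels)
        proj += np.roll(np.roll(volume[z], dy, axis=0), dx, axis=1)
    return proj
```

A point source at depth z then appears in the projection displaced by roughly z·tan(θ) pixels along the beam's azimuth.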

For the volumetric reconstruction of arbitrary samples, this signal generation process is computationally inverted: the calibration information is used to calculate the so-called back projection of each projected image, which projects the image back through the sample along the corresponding Bessel beam. The intersection of the four back projections then confines the shape and intensity of the volumetric sample. In figure 2 we show how applying back projection to four projections of a volumetric sample (fig. 2a) of 1 µm diameter fluorescent beads embedded in agar leads to a volume reconstruction (fig. 2b). The distortion along the z-axis is due to the limited projection angle, which is constrained by the numerical aperture of the microscope objective.
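The back projection step can be sketched by reversing the same geometry: each projection image is smeared back through the depth planes along the reversed beam direction. This is a simplified illustration under idealized straight-beam, nearest-pixel assumptions; the actual reconstruction uses the calibrated beam orientations:

```python
import numpy as np

def back_projection(proj, nz, theta_deg, azimuth_deg):
    """Smear a 2D projection back through nz depth planes along the
    reversed beam direction; combining several such back projections
    confines the sample volume at their intersection."""
    theta = np.deg2rad(theta_deg)
    az = np.deg2rad(azimuth_deg)
    vol = np.zeros((nz,) + proj.shape)
    for z in range(nz):
        dx = int(round(z * np.tan(theta) * np.cos(az)))
        dy = int(round(z * np.tan(theta) * np.sin(az)))
        vol[z] = np.roll(np.roll(proj, -dy, axis=0), -dx, axis=1)
    return vol

# combining the four views, e.g. for azimuths of 0, 90, 180, 270 degrees:
# recon = np.prod([back_projection(p, nz, 13.0, a)
#                  for p, a in zip(projections, (0, 90, 180, 270))], axis=0)
```

Taking the product (or mean) of the four back-projected volumes emphasizes regions where all beams agree, which is the intersection described above.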

**Improved Reconstruction Using Machine Learning**

The volume reconstructions obtained from these oblique projections are distorted along the z-axis by a factor of 1/sin(θ), as shown in figure 1b. To correct these distortions, we used machine learning: a convolutional neural network (U-net) was trained on simulated data. With a large set of simulated ground truth volumes and the corresponding simulated projection images, such a network can be trained to recover the volume at high resolution from the projection images. Figure 3 shows the reconstruction of four projections experimentally recorded from a sample of pollen grains (fig. 3a). Applying back projection to the four images leads to a distorted volume, as shown in the first row of figure 3b. Applying the trained U-net to these back projections partially restores the information along the z-axis (second row in figure 3b), as seen in the comparison with the ground truth recorded using two-photon microscopy with a Gaussian beam (third row of figure 3b).
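For the elevation angle of about 13 degrees quoted above, the axial stretch factor can be computed directly:

```python
import numpy as np

theta = np.deg2rad(13)               # elevation angle of the tilted beams
axial_stretch = 1.0 / np.sin(theta)  # 1/sin(theta) distortion factor along z
# for theta = 13 degrees this is roughly 4.4, i.e. the back-projected
# volume is stretched more than fourfold along the optical axis
```

This illustrates why the U-net correction matters: the smaller the beam tilt allowed by the objective, the stronger the axial distortion that must be undone.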

**Conclusion**

We combined scanning microscopy with tilted Bessel beams, temporal multiplexing, and tomographic reconstruction aided by machine learning to reconstruct volumetric information recorded in a single frame scan across a three-dimensional sample. In future work, optimizing the tilted beams, for example by using objectives with higher numerical aperture, as well as training neural networks with larger data sets specific to the sample of interest, will allow improving the spatial and temporal resolution of this imaging approach.

**Authors**

Andres Flores Valle^{1} and Johannes D. Seelig^{1}

**Affiliation**

^{1}Research Center caesar, Bonn, Germany

**Contact**

Dr. Johannes Seelig

Max Planck Research Group Leader

Research Center caesar

Bonn, Germany

johannes.seelig@caesar.de

**References**

[1] Flores, A.V. and Seelig, J.: Bessel beam tomography for fast volume imaging (2019)

[2] Kazemipour, A., Novak, O., Flickinger, D., Marvin, J.S., King, J., Borden, P., Druckmann, S., Svoboda, K., Looger, L.L. and Podgorski, K.: Kilohertz frame-rate two-photon tomography. bioRxiv, p. 357269 (2018)

[3] Jin, D., Zhou, R., Yaqoob, Z. and So, P.T.: Tomographic phase microscopy: principles and applications in bioimaging. JOSA B, 34(5), pp.B64-B77 (2017)

[4] Sharpe, J.: Optical projection tomography. Annu. Rev. Biomed. Eng., 6, pp.209-228 (2004)

[5] Lu, R., Sun, W., Liang, Y., Kerlin, A., Bierfeld, J., Seelig, J.D., Wilson, D.E., Scholl, B., Mohar, B., Tanimoto, M. and Koyama, M.: Video-rate volumetric functional imaging of the brain at synaptic resolution. Nature Neuroscience, 20(4), p. 620 (2017)

[6] Amir, W., Carriles, R., Hoover, E.E., Planchon, T.A., Durfee, C.G. and Squier, J.A.: Simultaneous imaging of multiple focal planes using a two-photon scanning microscope. Optics letters, 32(12), pp.1731-1733 (2007)