1 Propagation-Based Phase-Contrast Tomography

For the high photon energies E of hard X-rays, which are needed to penetrate bulk samples, absorption contrast becomes negligible and phase contrast prevails. This is simply a result of the energy dependence of the X-ray index of refraction \(n(E, \mathbf{r})= 1 -\delta (E,\mathbf{r}) + i\beta (E,\mathbf{r})\), which is well suited to describe the propagation of X-rays in matter as long as the continuum approximation holds, i.e., if scattering angles are small. One of the advantages of hard X-ray imaging is that the decrement \(\delta\) of the real part of the refractive index is proportional to the electron density \(\rho (\mathbf{r})\). The predominance of phase interaction over absorption is quantified by the ratio \(\delta /\beta \gg 1\), which is particularly large for the low-Z elements of soft (unmineralized) biological tissues. The resulting phase shift and amplitude attenuation of a beam traversing a resolution element (voxel of side length a) are \(\varDelta \phi = -k \delta a\) and \(\varDelta A = \exp ( - k \beta a)\), respectively. Even for materials where absorption contrast is still sufficient at large length scales, it becomes impossible to distinguish structural details once the side length a of the voxel is decreased. Hence, high-resolution imaging with hard X-rays always requires phase contrast. It is therefore not surprising that suitable implementations of phase-contrast radiography and tomography have been a major research goal over the last two decades, after X-ray sources with sufficient partial coherence had become available.
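To illustrate the orders of magnitude behind these formulas, the following minimal sketch evaluates the phase shift \(\varDelta \phi \) and the amplitude attenuation \(\varDelta A\) per voxel. The numerical values for \(\delta \), \(\beta \), the wavelength and the voxel size are merely assumed, order-of-magnitude figures for soft tissue in the hard X-ray range, not values taken from this chapter.

```python
import numpy as np

# Assumed, order-of-magnitude values for soft tissue at ~14 keV (illustration only)
wavelength = 0.9e-10   # m
delta = 1.0e-6         # refractive index decrement
beta = 1.0e-9          # absorption index, i.e. delta/beta = 1000
a = 100e-9             # voxel side length: 100 nm

k = 2 * np.pi / wavelength
dphi = -k * delta * a            # phase shift per voxel
dA = np.exp(-k * beta * a)       # amplitude attenuation factor per voxel

print(f"delta/beta            = {delta / beta:.0f}")
print(f"phase shift per voxel = {dphi:.2e} rad")
print(f"amplitude loss/voxel  = {1.0 - dA:.2e}")
```

For these assumed values, the phase shift per 100 nm voxel is of order \(10^{-2}\) rad, while the relative amplitude loss is only of order \(10^{-5}\), illustrating why phase contrast dominates at high resolution.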

All phase-contrast methods rely on wave-optical transformations of the phase shifts, which the sample induces in the (partially) coherent wavefront, into measurable intensity patterns by way of interference, see also [1, 2] for a review. The particular geometries and mechanisms by which waves are brought to interfere can be quite different. The methods can be classified according to the order of phase contrast. Zero-order methods such as Bonse-Hart interferometry [3] are capable of measuring the absolute phase shift \(\varDelta \phi \) between an object and an empty reference beam. First-order methods such as crystal [4] or grating (Talbot) interferometers [5, 6] are sensitive to the first spatial derivative of the phase \(\nabla \phi \) in the object plane. Finally, second-order methods are based on contrast formation proportional to \(\nabla ^2 \phi \). This is the case for propagation-based phase contrast, where the beam diffracted by the object interferes with the unattenuated or weakly attenuated primary beam to form a defocused ‘image’ behind the object. As a second-order phase-contrast method, propagation imaging is particularly well suited for high spatial resolution. Further, it does not require any optical components acting as an ‘analyzer’. First-order techniques make use of optical components which are scanned or rotated during data acquisition, such as in crystal interferometers (diffraction enhanced imaging) [4], or in grating (Talbot) interferometers [5, 6]. A particular advantage of Talbot interferometry is that along with the phase information it also generates an additional and completely separated darkfield image [5]. Furthermore, its phase sensitivity is very high. A related first-order phase-contrast technique uses edge illumination or coded apertures [7, 8]. As in Talbot interferometry, the beam is structured in the object plane by a periodic array creating many beamlets. In contrast to a phase grating, however, these beamlets are spatially separated, and small angular changes in their directions induced by the object are recorded downstream by a detector with sharp absorbing edges, without interference between the different beamlets. Hence, this technique is also applicable in the case of an incoherent source. Finally, in a more recent variant, the Talbot grating is replaced by a random speckle-producing pattern [9].

A major disadvantage of all first-order techniques is the fact that the resolution is limited, e.g., by the grating period, aperture size or speckle grain size. At the same time, the number of images to be acquired is fairly high and can pose a serious challenge in terms of acquisition time and dose. Note that tomography already requires the acquisition of hundreds or even thousands of projection images. It is therefore important to keep the number of acquisitions per projection angle at a minimum and to record a sufficient number of resolution elements in parallel, i.e., to use detectors with a large number of pixels, and an optical setup where resolution and magnification are matched. For high-resolution tomography of biological samples, phase contrast by free-space propagation thus remains the method of choice.

A central challenge in propagation-based imaging has been the formulation of accurate and efficient phase-retrieval schemes [1, 10, 11]. This has been particularly difficult in the holographic regime (small Fresnel number F), where the wave diffracted from any point in the object reaches all or a large set of detector pixels. However, high geometric magnification results in small F, since the effective pixel size in the object plane enters quadratically [12], and hence one always ends up in the holographic regime. At the same time, this holographic regime offers the highest sensitivity to small phase shifts. As we review in this chapter, quantitative phase retrieval can be accomplished with a set of four measurements per projection. The key concept, as introduced by Peter Cloetens and coworkers, is to record images at four different \(F_i\) [10], e.g., by varying the sample/detector position or the photon energy. The initial limitation to objects with weakly varying phase has meanwhile been overcome by iterative algorithms [13, 14]. In special cases, additional constraints such as compact support [15], sparsity [16], or range and dimensionality constraints [17] can reduce this set to a single acquisition.

It is also instructive to briefly compare propagation-based phase contrast to coherent diffractive imaging (CDI), which is typically carried out in the optical far field. Strictly speaking, far-field diffraction can also be counted as a ‘phase-contrast’ technique, since it is dominated by the Fourier transform of \(\delta (\mathbf{r})\), but the term ‘phase contrast’ is mostly used only in full-field radiography, and not in diffractive imaging. How, then, do CDI and propagation imaging compare, in particular if both record diffraction patterns at small F? Importantly, holographic imaging exploits the interference terms \(2\cdot \text {Re}[\psi _s \psi _0 ^*]\) between a scattered wave \(\psi _s\) and a reference wave \(\psi _0\) (enlarged primary wave), while CDI uses the far-field diffraction pattern \(\psi _s \psi _s^*\) [18,19,20,21,22,23], without additional mixing with a reference wave. This important difference changes the way in which phase information is encoded in the intensity images, including the mathematical nature of the phase problem [24]. Furthermore, in near-field imaging, a weak scattered signal is amplified by the reference wave high above background signals of residual scatter. These differences could in principle also affect the dose-resolution relationship, as we will further discuss later on. In this chapter, we show how propagation-based phase-contrast imaging has now matured into a powerful tool for 3D imaging of biological matter. We include both in-house µ-CT results, where partial coherence has become sufficient to observe edge enhancement in the direct-contrast regime of propagation imaging (large Fresnel numbers F), and high-resolution phase contrast in the holographic regime using synchrotron radiation. Recent studies on biological cells and tissues performed by our group serve as examples of the current capabilities of phase-contrast imaging and tomography.

2 Nano-CT Using Synchrotron Radiation: Optics, Instrumentation and Phase Retrieval

2.1 Cone-Beam Holography

X-ray propagation imaging with nanoscale resolution requires a correspondingly small focus of the X-ray beam, which is today only possible using synchrotron radiation. The divergent cone-beam behind the focus is then exploited for illumination and recordings in the holographic regime. We therefore use the term ‘holography’ or ‘holographic X-ray imaging’ synonymously for this type of high-resolution propagation imaging. The properties of the beam, which is also denoted as the probe in the field of coherent X-ray imaging, are hence essential for holographic imaging. High resolution and quantitative phase contrast can only be achieved with efficient nano-focusing optics, sufficient coherence and smooth wavefronts [25,26,27,28]. These properties are certainly beneficial for all coherent X-ray techniques, albeit to different degrees. Coherent diffractive imaging (CDI) in the far field, for example, is less sensitive to wavefront errors, but requires higher coherence [29].

In holography, aberration-free image formation relies on the quasi-spherical nature of the wavefronts, since in the data treatment one tacitly assumes spherical wavefronts in order to apply the Fresnel scaling theorem [2]. A small focus of a coherent probe also warrants a large numerical aperture of the illuminating wavefront. This directly affects the spatial resolution and also facilitates high geometric magnifications M, and hence small effective pixel sizes, in the case of a limited detector distance. Typical values for the focus-to-sample distance in high-resolution holography range between a few millimeters and several centimeters. Unfortunately, X-ray nano-focusing is associated with significant wavefront distortions, see also [30] for an overview of different X-ray focusing optics. These wavefront artifacts violate the idealizing assumptions made on the probe in the course of image reconstruction, such as point-source emission or a distortion-free wavefront. The validity of these assumptions has recently been investigated, showing that deviations from them lead to reduced resolution and image quality [31, 32]. To avoid this, additional optical filtering and wavefront cleaning can be used. Alternatively, phase-retrieval schemes have to be generalized to non-ideal illumination conditions. In short, either hardware or software solutions, or a combination of both, are required.

For the latter case, the ptychographic concept of simultaneous probe and object reconstruction was recently generalized to near-field (propagation) imaging [26, 33,34,35,36]. Ptychographic algorithms were initially formulated only for confined probes (but extended objects), a setting which is typical for far-field coherent diffractive imaging [23, 37,38,39,40,41]. A generalization to extended illumination wavefronts was given in [36, 42], but a wavefront diffuser was required in order to increase the diversity of the probe. Thus, only the artificially modified wavefront and not the ‘natural’ probe could be recovered. This limitation was lifted by introducing longitudinal scanning of the object in [34], as well as the combination of lateral and longitudinal scans [33] to generate diversity in the data. Reconstructions of the illumination produced by a set of Kirkpatrick-Baez (KB) mirrors in the imaging plane were presented in [35] for the upgraded beamline ID16a of the European Synchrotron Radiation Facility (ESRF) in Grenoble [43], and in [26] for the holography endstation GINIX (Göttingen Instrument for Nano-Imaging with X-rays, cf. Fig. 13.1a) [44] at the P10 beamline of PETRAIII (DESY, Hamburg). As an alternative to shifting the sample, probe reconstruction was also achieved without any object in the beam by translating the detector [26], using an improved multiple magnitude projections (MMP) scheme [31, 45, 46]. The disadvantage of these approaches is that multiple images with different sample translations have to be recorded for each tomographic projection. Further, they only work if the empty beam remains stable over time.

Fig. 13.1
figure 1

Adapted from [47]

Experimental realization of holographic imaging at the GINIX setup installed at the P10 beamline at PETRAIII (DESY, Hamburg). a The X-rays are generated in an undulator and monochromatized by a double-crystal Si(111) channel-cut monochromator. Subsequently, they are focused by a set of Kirkpatrick-Baez (KB) mirrors. An X-ray waveguide is placed in the KB focal plane as a coherence filter and to increase the numerical aperture for holographic imaging. The sample is mounted on a fully motorized sample tower at a distance \(z_{01}\) behind the waveguide and the evolving intensity distributions are recorded in the detection plane at distance \(z_{12}\) behind the sample. b A waveguide can be either realized by combining two 1d devices, consisting of a multilayer structure with carbon as guiding layer, or by etching a channel into a silicon wafer and bonding a second wafer on top, leading to a closed channel with air as guiding layer. c By introducing these waveguides into the setup, the disturbed illumination (left), caused by small sub-nanometer irregularities on the surface of the KB-mirrors, is spatially filtered, resulting in a smooth illumination in which high-frequency variations are suppressed (right). Scale bars: 0.5 \(\text {mrad}\).

2.2 Waveguide Optics and Imaging

A hardware solution to avoid artifacts due to a non-ideal illumination is given by additional optical elements for coherence and wavefront filtering. This can be accomplished by X-ray waveguides [48,49,50,51,52], positioned in the focal plane of the focusing optics. Since the beam confined at the waveguide exit is smaller than the focus in front of the waveguide, this also offers the important advantage of an increased numerical aperture. The significant challenges in fabricating two-dimensionally confining X-ray waveguides of suitable quality were solved by crossing two planar one-dimensional waveguides [53, 54], or by advanced lithographic techniques and wafer bonding [55, 56], cf. Fig. 13.1b, including advanced schemes with tapered [57] or curved waveguides [58]. As shown in [12, 14, 15, 25], X-ray waveguides provide a highly coherent, well controlled, smooth and quasi point-like illumination for nanoscale X-ray imaging. A comparison between the illumination in the imaging plane provided by a waveguide and by a set of KB mirrors is depicted in Fig. 13.1c. The 2D imaging capabilities of this filtered wavefront for biological specimens are demonstrated in Fig. 13.2 using the example of freeze-dried Deinococcus radiodurans cells. The first tomography application using waveguides was demonstrated for bacterial cells in [59]. Significantly increased 3D image quality has been achieved in the meantime owing to improvements at various levels, from the waveguide optics, the alignment and image processing procedure, and the recording and detection scheme, to the phase retrieval [14]. The current state of the art for tomography of biological cells is reported in [14] and for biological tissues in [60, 61].

Fig. 13.2
figure 2

Adapted from [25]

Waveguide holographic imaging at the single cellular level. a Normalized hologram of Deinococcus radiodurans cells (freeze-dried), obtained in a single recording with 8 s dwell time along with b the iterative mHIO phase reconstruction. c mHIO reconstruction of (initially) living cells in solution. Each frame was recorded during 80 s (every other frame is shown). Gradual changes in the densities (see arrow) are observed in response to successive irradiation. Scale bars: 4 \(\upmu \)m.

2.3 Dose-Resolution Relationship

The resolution values obtained for waveguide-based holography have reached the range of \(50\,\text {nm}\) (half-period resolution). Fitting of line cuts through lithographic test patterns has given FWHM values down to \(25\,\text {nm}\), but given the limited numerical aperture, twice this value seems a more realistic estimate of the resolution. This is still significantly lower than typical values for ptychography, e.g. 10–15 nm for the same test pattern imaged in [62, 63]. Note that resolution determination in near-field holographic imaging deserves careful consideration [64], and shows particularities not known from far-field imaging, such as a dependence of the maximum theoretical resolution on the object position in the field of view. The analysis in [64] also indicates the importance of the numerical aperture in holographic imaging. From an experimental point of view, benchmark experiments on realistic biological samples are required, beyond the typical demonstrations on specially designed test charts, which exhibit high contrast and for which high resolution is much more easily achieved.

Fig. 13.3
figure 3

Adapted from [65]

Dose-resolution relationship in holographic and coherent diffractive imaging. a Example reconstructions for 200 photons per pixel after 200 iterations of RAAR (\(\beta _0=0.99\), \(\beta _\text {max}=0.75\) and \(\beta _s=150\) iterations) for near-field holography (NFH) and coherent diffractive imaging (CDI) using a support and pure phase object constraint. b Fourier ring correlation of the reconstructions. The intersection with the 1/2-bit threshold curve determines the maximum frequency that can be resolved. c Resolution as a function of dose. Scale bar: 50 pixels.

For this reason, ptychographic and holographic reconstructions have been compared for the same objects, namely Deinococcus radiodurans bacterial cells. This bacterium had served early on as a first demonstration that ptychographic imaging with hard X-rays is possible for low contrast biological samples [66]. 3D reconstructions by ptycho-tomography were presented in [62], holographic imaging in [25], and an early holo-tomography result in [59]. In these studies the point was made that X-ray imaging yields quantitative electron density contrast. Therefore, the long debated (mass) density in bacterial nucleoids could be addressed in quantitative terms [25, 62, 66, 67].

These studies also provided a starting point for considerations of the dose-resolution relationship. Surprisingly, they pointed to an advantage of holography in terms of dose with respect to (far-field) ptychography. Since experiments are never completely free of uncontrolled parameters, this issue was also studied by analytical and numerical work. Starting with random binary bitmap patterns, the information content of near-field and far-field diffraction patterns was compared, as well as the reconstructibility as a function of fluence (see [68] for a precise definition of ‘reconstructibility’), based on a maximum likelihood approach. Earlier work had already used random bitmap patterns and mutual information theory for a one-dimensional model [69]. More realistic phantoms and reconstruction algorithms were used in [65] for numerical simulations of the dose-resolution relationship for near- and far-field coherent diffractive imaging (cf. Fig. 13.3). In this study, a dose advantage of near-field phase retrieval over CDI was found. This conclusion can, however, only be made for the particular reconstruction algorithms which have been used. Other authors who compared holographic and ptychographic reconstructions did not find any considerable differences [70]. Numerical simulations have also considered the effect of finite partial coherence, multi-modal wavefield reconstructions [71], as well as the coherence-resolution relationship [29]. Analytical scaling of the dose-resolution curves as well as experimental results on the resolution limits of biological objects due to radiation damage are given in [72, 73].

2.4 Phase Retrieval Algorithms

In many cases, phase-contrast experiments are carried out in the direct-contrast regime, where phase-contrast effects are visible as edge enhancement and phase reconstruction can be performed by linearization of the transport of intensity equation (TIE) along the propagation direction z [74, 75]. In the holographic regime, however, where the phase-contrast transfer and hence the phase sensitivity are highest, these reconstruction algorithms fail. An approach for the inversion of phase-contrast effects in this regime based on the contrast transfer function (CTF) was proposed 20 years ago by Cloetens and coworkers [10]. It relies on several measurement planes with varying Fresnel numbers and is valid for weakly absorbing or pure-phase objects with a slowly varying phase. In the next section, this so-called CTF approach and its limits will be discussed in detail, and a particular iterative approach will be presented which is well suited to replace the CTF approach, offering a larger range of applicability [13]. Before we do so, however, we give a broader overview of recent work on iterative algorithms, including those which are designed for single-distance acquisition or are based on different constraint sets (see also Chap. 6). In doing so, we assume a general understanding of how iterative projection algorithms work, see for example the tutorial chapter on basic X-ray propagation and imaging.
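As an illustration of the CTF approach discussed here and in the next section, the following sketch implements the multi-distance inversion for the simplest case of a weak, pure-phase object. The regularization parameter and the toy input data are assumptions for the purpose of illustration; the published approach additionally treats (homogeneously) absorbing objects and uses a more refined low-frequency regularization.

```python
import numpy as np

def ctf_pure_phase(holograms, fresnel_numbers, alpha=1e-3):
    """Multi-distance CTF phase retrieval for a weak, pure-phase object (sketch).

    holograms: list of flat-field corrected intensities, identical shape and effective pixel size.
    fresnel_numbers: pixel-based Fresnel numbers F_i = p_eff**2 / (lambda * z_eff_i).
    alpha: Tikhonov-type regularization stabilizing the zero crossings of sin(chi).
    """
    ny, nx = holograms[0].shape
    fy = np.fft.fftfreq(ny)[:, None]   # spatial frequencies in cycles per pixel
    fx = np.fft.fftfreq(nx)[None, :]
    num = np.zeros((ny, nx), dtype=complex)
    den = np.zeros((ny, nx))
    for I, F in zip(holograms, fresnel_numbers):
        chi = np.pi * (fx**2 + fy**2) / F          # Fresnel chirp chi_i
        num += np.sin(chi) * np.fft.fft2(I - 1.0)  # weak-object linearization
        den += 2.0 * np.sin(chi)**2
    return np.real(np.fft.ifft2(num / (den + alpha)))

# Toy call with noise-only 'holograms', just to show the interface:
holos = [1.0 + 0.01 * np.random.randn(256, 256) for _ in range(4)]
phase = ctf_pure_phase(holos, fresnel_numbers=[8e-3, 6e-3, 4e-3, 3e-3])
```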

We first want to mention the so-called Holo-TIE algorithm [12], which is a one-step direct reconstruction scheme operating on two images recorded at slightly different (defocus) distances z from the source. With Holo-TIE, the TIE approach [75, 76] is extended towards arbitrary defocus distances z, including the holographic regime addressed here. This is important since TIE phase retrieval enjoys much success in the direct-contrast regime, and can thus be seamlessly extended, given at least two recordings. Importantly, Holo-TIE relies neither on assumptions on the material composition nor on a linearization of the specimen’s optical constants, as is typically the case in conventional non-iterative reconstruction schemes. An extension of the initial Holo-TIE reconstruction was presented in [14], operating on four measurements recorded at different, well-chosen distances (or Fresnel numbers). In order to further improve reconstruction quality, if necessary, Holo-TIE reconstructions can also be used to initialize iterative algorithms.

Next, we consider iterative algorithms for single-distance acquisitions. For this case, the so-called modified hybrid-input-output (mHIO) algorithm [15] was proposed as an iterative reconstruction for (single-distance) X-ray holograms. The designation ‘modified’ refers to the fact that the HIO was well established in CDI and was adapted to the near-field case. The mHIO uses a support estimate from the holographic reconstruction to slowly push the phase outside the support to zero. Importantly, it was shown that this algorithm can fill in the information lost at the zero crossings of the oscillatory CTF. Hence, samples of arbitrary composition can be phased, overcoming the common assumption that the \(\delta (\mathbf{r})\) and \(\beta (\mathbf{r})\) components of the complex index of refraction \(n(\mathbf{r})=1-\delta (\mathbf{r})+i\beta (\mathbf{r})\) are coupled, which strictly holds only for samples consisting of a single material. Support estimation in mHIO was demonstrated in a fully automated manner for tomography in [59]. In [77], the mHIO scheme was compared to RAAR, again using the same single-distance holographic data and constraints. Significant improvements were provided by an iteratively regularized Gauss-Newton (IRGN) method, reaching higher resolution and image quality than mHIO for noisy data [78, 79]. In [14], different iterative phase-retrieval techniques were compared for the holographic regime, using both numerical simulations and experimental data of biological cells.
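The following sketch shows the generic structure of such a support-constrained alternating-projection scheme for a single hologram: propagate the current exit-wave estimate to the detector, enforce the measured magnitudes, propagate back, and apply an HIO-type feedback outside the support. It is a simplified illustration in the spirit of mHIO, not the published implementation; the propagator convention, the feedback rule and all parameters are assumptions.

```python
import numpy as np

def fresnel_propagate(u, fresnel_number):
    """Fresnel propagation on a unit-pixel grid (transfer-function method, paraxial)."""
    ny, nx = u.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    kernel = np.exp(-1j * np.pi * (fx**2 + fy**2) / fresnel_number)
    return np.fft.ifft2(np.fft.fft2(u) * kernel)

def support_constrained_retrieval(hologram, fresnel_number, support, n_iter=200, beta=0.9):
    """Single-distance near-field phase retrieval with a support constraint (sketch).

    hologram: flat-field corrected intensity; support: boolean mask, True inside the object.
    Outside the support, the exit wave is relaxed towards the empty-beam value 1 (zero phase).
    """
    amplitude = np.sqrt(hologram)
    psi = np.ones_like(hologram, dtype=complex)           # start from an empty exit wave
    for _ in range(n_iter):
        det = fresnel_propagate(psi, fresnel_number)      # to the detection plane
        det = amplitude * np.exp(1j * np.angle(det))      # magnitude constraint
        update = fresnel_propagate(det, -fresnel_number)  # back to the object plane
        # object constraint: accept the update inside the support,
        # HIO-type feedback towards the empty beam outside
        psi = np.where(support, update, psi - beta * (update - 1.0))
    return psi
```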

Finally, ptychographic algorithms should be mentioned, which offer a solution for cases where the separation between object and probe is challenging (for example if intensity minima occur in the illumination wavefront) or if the constraint set is insufficient. This can be the case, e.g., if no support constraint is available because the object covers the entire field of view, and further object constraints, such as sparsity, cannot be applied either. In this case, additional data is required, for example by translating the object with respect to the probe. For such a scan series, ptychographic algorithms can exploit the constraint of separability, which typically offers high reconstruction quality for object and probe. This, however, comes at the price of a significantly increased number of acquisitions, by a factor on the order of 10 or more. This is impractical for tomography, in particular with large fields of view, as these also require a high number of projection angles in order to fulfill the sampling criterion.

3 CTF-based Reconstruction and Its Limits

While the assumption of a weakly varying phase in CTF-based phase retrieval is a reasonable approximation for unstained biological tissues [47], samples with a larger variance in electron density lead to artifacts in the reconstructed phase distributions. An alternative phase-retrieval approach is given by iterative algorithms, in which the phase distribution is reconstructed by alternately propagating between the object and measurement plane and applying appropriate constraints, such as a compact support [80]. For many samples, however, which are not compactly supported, suitable phase-retrieval algorithms have been lacking.

In [13], a simple approach of iterative alternating projections is introduced which can fill this gap, providing superior image quality for extended samples which do not obey the assumption of a slowly varying phase. The input data corresponds to the same set of measurements which is typically used in CTF-based phase retrieval, with images acquired in \(N=4\) measurement planes. Due to the cone-beam geometry of the setup, these projection images are recorded at varying magnifications and hence with different fields of view, and have to be aligned to each other prior to the phase-retrieval step. This can be implemented according to the scheme presented in Fig. 13.4. In a first step, the different projections are scaled to the effective pixel size of the projection which was recorded at the highest magnification. Subsequently, matching fields of view are determined via a cross-correlation of the corresponding projection and its predecessor in Fourier space [81]. As the variation of the source-to-sample distance also results in a variation of the geometric magnification, the images are acquired at different Fresnel numbers and hence the interference fringes differ between the projections. Since this can affect the quality of the image alignment, it can be advantageous to use single-step CTF reconstructions instead of the raw projections for the alignment. In a last step, all images are cropped to the same field of view, leading to projection images with the same effective pixel size but varying Fresnel numbers, which are well suited for the application of multi-distance phase-retrieval approaches.
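A minimal sketch of this rescale-register-crop procedure is given below. The interpolation order, the predecessor-based registration by FFT cross-correlation and the integer-pixel shifts are simplifying assumptions; in practice, sub-pixel registration or registration on single-step CTF reconstructions (as mentioned above) would be used.

```python
import numpy as np
from scipy.ndimage import zoom

def align_multidistance(projections, magnifications):
    """Scale, register and crop cone-beam projections recorded at different magnifications (sketch)."""
    M_ref = max(magnifications)
    # 1) scale all projections to the smallest effective pixel size p / M_ref
    scaled = [zoom(img, M_ref / M, order=1) for img, M in zip(projections, magnifications)]
    # 2) register each projection against its predecessor via FFT cross-correlation
    shifts = [(0, 0)]
    for prev, cur in zip(scaled[:-1], scaled[1:]):
        ny, nx = min(prev.shape[0], cur.shape[0]), min(prev.shape[1], cur.shape[1])
        a, b = prev[:ny, :nx], cur[:ny, :nx]
        cc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
        dy, dx = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
        dy = dy - ny if dy > ny // 2 else dy       # map wrap-around peaks to negative shifts
        dx = dx - nx if dx > nx // 2 else dx
        shifts.append((shifts[-1][0] + dy, shifts[-1][1] + dx))   # accumulate w.r.t. the first image
    # 3) apply integer shifts and crop all images to a common field of view
    ny = min(im.shape[0] for im in scaled)
    nx = min(im.shape[1] for im in scaled)
    return [np.roll(im, (dy, dx), axis=(0, 1))[:ny, :nx]
            for im, (dy, dx) in zip(scaled, shifts)]
```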

Fig. 13.4
figure 4

Procedure for the alignment of projection images in multi-distance cone-beam phase-contrast imaging using a butterfly as test object. a By acquiring images at varying source-to-object distance \(z_{01}\), while the source-to-detector distance \(z_{02}\) is kept constant, different Fresnel numbers can be reached in the resulting projections. The simultaneously changing magnification, however, also alters the effective pixel size and field of view of the single images. b To account for the varying magnification, all images are scaled to the effective pixel size of the projection with the highest magnification. Subsequently, the images are aligned to each other in Fourier space to identify overlapping regions. By cropping all projections accordingly, projection images with the same field of view and effective pixel size but varying Fresnel numbers are obtained, well suited for the application of multi-distance phase-retrieval approaches

In Fig. 13.5, the results of the CTF approach for homogeneous objects [82] as well as of the iterative phase retrieval are shown for a 2D projection of polystyrene spheres with a diameter of 15 \(\upmu \text {m}\) and for a 3D reconstruction of an epon-embedded, Golgi-Cox stained brain slice of a wild-type mouse hippocampus [60]. In both examples, the CTF approach results in severe artifacts due to the violation of the underlying assumptions, whereas the iterative approach leads to superior quality of the reconstructed phase distribution, especially in the 2D case of the polystyrene spheres.

So far, the computation time of iterative algorithms has impeded their application to large data sets. However, with the advent of new computational hardware, especially GPUs, this limitation no longer applies, making iterative reconstructions feasible for a large variety of measurements in which the assumptions of the CTF approach are violated.

Fig. 13.5
figure 5

Adapted from [13]

Comparison between CTF-based and iterative phase retrieval in 2D and 3D. a, b Phase distribution of a layer of polystyrene spheres with a diameter of 15 \(\upmu \)m obtained by the CTF approach. The region marked by the red rectangle in (a) is shown at higher magnification in (b). c, d Iteratively obtained reconstruction of the same data set. The magnified part in (d) shows the unwrapped phase obtained by Matlab’s unwrap function. e, f Virtual slice through the density of a Golgi-Cox stained mouse hippocampus obtained from projections reconstructed according to the CTF (e) as well as iterative approach (f). The insets show regions marked by rectangles at higher magnification. Scale bars: 100 \(\upmu \)m (a, c, d, f) and 15 \(\upmu \)m (b, e).

4 Laboratory µ-CT: Instrumentation and Phase Retrieval

Phase-contrast imaging based on free-space propagation between object and detector requires a high degree of spatial coherence. Therefore, it was long considered to be an imaging technique which is only applicable at large-scale synchrotron facilities. With the development of microfocus X-ray sources, however, the degree of spatial coherence could be considerably increased, as the lateral coherence length \(L_\perp =\lambda z_{01}/s\) is proportional to the source-to-sample distance \(z_{01}\), and inversely proportional to the source size s. In order to observe phase-contrast effects for object features of spatial length scale d, we must have \(L_\perp \ge d\) in the object plane. The flux density on the sample \(I \propto s/z_{01}^2\) is directly proportional to the source size, since the power loading of the target is proportional to the linear dimension s of the source (and not its area!). Hence, the intensity is maximized if the coherence condition is just fulfilled, i.e., if the object is moved to the minimum distance which still fulfills the coherence constraint \(z_{01}=sd/\lambda \). Inserting this distance into the flux density expression gives \(I \propto \lambda ^2/(s d^2)\), showing that the source size ought to be reduced in order to maximize the lateral coherence (even if this reduces the power) and that the wavelength should be increased to the maximum value compatible with object transmission.
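The interplay of these quantities can be made concrete with a few lines of code; the source size, feature size and wavelength below are assumed example values for a microfocus setup, not parameters quoted in this chapter.

```python
import numpy as np

# Assumed example values for a microfocus setup
wavelength = 1.34e-10   # m (~9.25 keV)
s = 5e-6                # source size (m)
d = 2e-6                # feature size of interest (m)

# Lateral coherence length at the object for a given source-to-sample distance
z01 = 0.10              # m
L_perp = wavelength * z01 / s
print(f"L_perp = {L_perp * 1e6:.2f} um at z01 = {z01} m "
      f"(coherence condition: L_perp >= d = {d * 1e6:.0f} um)")

# Minimum distance that just fulfills the coherence condition L_perp = d
z01_min = s * d / wavelength
print(f"z01_min = {z01_min:.3f} m")

# Flux density at that distance scales as I ~ s / z01_min**2 = lambda**2 / (s * d**2)
print(f"relative flux density at z01_min: {wavelength**2 / (s * d**2):.3e} (arb. units)")
```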

The coherence length, however, is not the only factor which has to be taken into account. In [83] it was shown that diffraction effects behind the object have to be constrained to sufficiently small angles, such that the coherent wave scattered by an object feature of size d does not scatter ‘out’ of the coherent radiation cone. This sets a limit for the scattering angle \(\alpha \), or equivalently the ‘shearing length’ \(L_\text {shear} = z_{12} \alpha \). This finally results in the condition \(L_\text {shear}/L_\perp =(M-1)s/(Md)<1\) (with the geometrical magnification \(M=z_{02}/z_{01}\)), which, along with \(L_\perp /d \ge 1\), must be fulfilled in order for the associated phase-contrast effects to become measurable. Further, the visibility increases towards smaller values of \(L_\text {shear}/L_\perp \) and larger \(L_\perp /d\). In Fig. 13.6a, the ratio \(L_\text {shear}/L_\perp \) is plotted as a function of the geometrical magnification M and the object feature size d for a constant source size \(\text {FWHM}_\text {src}=10\,\upmu {\text {m}}\). For feature sizes below 10 \(\upmu \text {m}\), the lateral coherence length is thus insufficient at high magnification, even if the object were illuminated coherently, since the diffraction angle would be too large and the signal would not interfere coherently with the primary wave in the detection plane. Only when using the ‘inverse’ geometry at low M can the shearing condition \(L_\text {shear}/L_\perp <1\) be met for feature sizes smaller than 10 µm.
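The following sketch evaluates this ratio for the source size used in Fig. 13.6a and a few assumed feature sizes, showing that for d smaller than the source size the condition \(L_\text {shear}/L_\perp <1\) restricts the usable magnification to small values (inverse geometry).

```python
import numpy as np

s = 10e-6                              # source size (FWHM), as in Fig. 13.6a
M = np.linspace(1.01, 10.0, 1000)      # geometrical magnification

# For d >= s the ratio stays below 1 for every M; for d < s only small M (inverse geometry) works.
for d in (5e-6, 10e-6, 20e-6):         # assumed feature sizes of interest
    ratio = (M - 1.0) * s / (M * d)    # L_shear / L_perp
    ok = M[ratio < 1.0]
    if ok.size:
        print(f"d = {d * 1e6:4.1f} um: L_shear/L_perp < 1 up to M = {ok.max():.2f}")
    else:
        print(f"d = {d * 1e6:4.1f} um: shearing condition not met in this M range")
```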

Fig. 13.6
figure 6

Considerations of partial spatial coherence and resolution in a microfocus setup. a Effect of coherence on phase contrast. The ratio \(L_\text {shear}/L_\perp \), as defined by [83], is an indicator for the visibility of phase-contrast effects, with enhanced visibility for smaller values of \(L_\text {shear}/L_\perp \). For \(L_\text {shear}/L_\perp \ge 1\), the partial coherence is not sufficient for phase-contrast effects to be measurable. The ratio depends on the source size (here: \(10\,\, {\upmu } \text {m}\)), the geometrical magnification M as well as the size of the features of interest. Note that other important factors such as the detector resolution are not considered. b Effective propagation distance as a function of the inverse magnification 1/M. For a given source-to-detector distance \(z_{02}\), the maximum effective propagation distance is \(z_\text {eff,max}=0.25\cdot z_{02}\), reached at a geometrical magnification of \(M=2\). While it decreases symmetrically for larger and smaller values of the inverse magnification, the Fresnel number increases monotonically according to \(F_\text {eff} \lambda z_{02}/p^2 =1/(M-1)\). c System resolution given by (13.1) as a function of detector standard deviation and geometrical magnification for different typical source sizes

While a small source size s warrants sufficient partial coherence for the realization of propagation-based phase contrast, it also compromises the flux, as the power loading of the anode cannot be increased without melting the target material. To overcome this limitation, more elaborate anode schemes are required, such as rotating anodes (cf. Fig. 13.7a) [84] or liquid-metal jets consisting of the alloy Galinstan, which is liquid at room temperature (cf. Fig. 13.7c) [85].

Fig. 13.7
figure 7

Sketch of the laboratory setups. a The X-rays are generated by a microfocus source with a rotating copper anode. At a distance \(z_{01}\) behind the source, the sample is positioned on a fully motorized sample stage and the intensity distributions are recorded at a distance \(z_{12}=z_{02}-z_{01}\) behind the sample. Due to the comparably large source diameter, this setup is only operated in the inverse geometry. b In this geometry, the 700 nm lines and spaces can be resolved in both directions, as revealed by the profiles averaged over the indicated lines shown on the right. Note that in the horizontal direction also the 600 nm lines and spaces can be recognized (not shown) due to the smaller source spot in this direction. c In a different setup, the X-rays are generated by a liquid-metal jet source with Galinstan as anode material. Depending on the desired resolution and field of view, the setup can be either operated in a cone-beam (\(z_{01}\ll z_{12}\)) or inverse geometry (\(z_{01}\gg z_{12}\)). d In the inverse geometry, the 600 nm lines and spaces of a test pattern can be resolved in 2D

In the first case, a higher photon flux is enabled by the reduced local heat load, as the interaction point between the electron focus and the anode material is constantly moving. The minimum spot size, however, lies in the range of \(\sim \)70 \(\upmu \)m. In the second case, the advantage of efficient heat transport is combined with an anode that is already liquid and continuously regenerated, so that the electron-beam power density is no longer limited by melting of the anode material and may even vaporize it. Additionally, electron spot sizes well below 10 \(\upmu \)m can be reached.

The divergent beam emanating from the quasi point-like sources and the associated geometrical magnification M result in an effective pixel size \(p_\text {eff}=p/M\) in the object plane, as well as an effective propagation distance \(z_\text {eff}=(z_{02}-z_{01})/M\) and an effective Fresnel number \(F_\text {eff}=p_\text {eff}^2/(z_\text {eff}\lambda )\), based on the Fresnel scaling theorem [2]. Interestingly, the maximum effective propagation distance is given by \(0.25\cdot z_{02}\), reached at a magnification of \(M=2\), while for larger or smaller values of the magnification, the effective propagation distance decreases symmetrically, see Fig. 13.6b. The effective Fresnel number, on the other hand, also depends on the effective pixel size and therefore does not share this symmetric behavior; it varies monotonically with the magnification and can hence be adjusted freely by changing the geometric magnification.
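These relations are easily verified numerically; the detector pixel size, the source-to-detector distance and the wavelength in the sketch below are assumed example values.

```python
import numpy as np

p = 50e-6              # physical detector pixel size (m), assumed
z02 = 1.0              # source-to-detector distance (m), assumed
wavelength = 1.34e-10  # m, assumed

def effective_quantities(M):
    z01 = z02 / M                                # source-to-sample distance
    p_eff = p / M                                # effective pixel size in the object plane
    z_eff = (z02 - z01) / M                      # effective propagation distance
    F_eff = p_eff**2 / (z_eff * wavelength)      # effective (pixel-based) Fresnel number
    return p_eff, z_eff, F_eff

for M in (1.1, 2.0, 5.0, 20.0):
    p_eff, z_eff, F_eff = effective_quantities(M)
    print(f"M = {M:5.1f}: p_eff = {p_eff * 1e6:6.2f} um, "
          f"z_eff = {z_eff:6.3f} m, F_eff = {F_eff:8.3f}")
    # check of the relation quoted in Fig. 13.6b: F_eff * lambda * z02 / p**2 = 1 / (M - 1)
    assert np.isclose(F_eff * wavelength * z02 / p**2, 1.0 / (M - 1.0))
```

Running the loop confirms that \(z_\text {eff}\) peaks at \(0.25\,z_{02}\) for \(M=2\), while \(F_\text {eff}\) decreases monotonically with increasing magnification.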

Laboratory setups are often implemented in geometries which correspond to either of two limiting cases (cf. Fig. 13.7c). In the ‘cone-beam geometry’, the source-to-sample distance \(z_{01}\) is small compared to the sample-to-detector distance \(z_{12}\), leading to a large magnification \(M\gg 1\), while in the ‘inverse geometry’, the sample is moved close to the detector, resulting in a small magnification \(M\simeq 1\). The resolution of the imaging system as a function of the source standard deviation \(\sigma _\text {src}\), corresponding to a Gaussian source distribution, and the detector standard deviation \(\sigma _\text {det}\) is given by [86]

$$\begin{aligned} \sigma _\text {sys}=\sqrt{(M-1)^2M^{-2}\sigma _\text {src}^2+M^{-2}\sigma _\text {det}^2}. \end{aligned}$$
(13.1)

Hence, in the cone-beam geometry, the resolution is limited by the source size, while in the inverse geometry, the resolution of the detector is the limiting factor. In Fig. 13.6c the system resolution as a function of geometrical magnification and detector standard deviation is shown for typical source sizes of a liquid-metal jet source (\(\text {FWHM}_\text {src}=4\,\upmu \)m and \(\text {FWHM}_\text {src}= 10\,\upmu \)m) and of a source with a rotating anode (\(\text {FWHM}_\text {src}=70\,\upmu \)m). It is evident that for the smaller source sizes, resolutions well below 5 \(\upmu \)m can be reached in both geometries, while for the larger source size, only the inverse geometry provides resolutions sufficient to resolve features smaller than 10 \(\upmu \)m. Provided that a detector with a point-spread function (PSF) of standard deviation in the range of \(\sim \)1 \(\upmu \)m is available, the highest resolution for both types of X-ray sources can be reached in the inverse geometry, on the same order of magnitude as the detector resolution. This could be experimentally validated by imaging an absorbing test pattern with the XSight Micron (Rigaku, Czech Republic) [47, 87], a lens-coupled high-resolution detector, showing that half-period resolutions well below 1 \(\upmu \)m are possible (cf. Fig. 13.7b, d). Note that the constraints of shearing length and system resolution result in a similar expression for the maximum magnification.
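A short sketch evaluating (13.1) for the source sizes discussed above makes this behavior explicit; the detector PSF of 1 µm standard deviation is the assumption quoted in the text.

```python
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def sigma_sys(M, fwhm_src, sigma_det):
    """System resolution (13.1), referred to the object plane, for Gaussian source and detector PSFs."""
    sigma_src = fwhm_src * FWHM_TO_SIGMA
    return np.sqrt(((M - 1.0) / M)**2 * sigma_src**2 + (sigma_det / M)**2)

sigma_det = 1e-6                         # detector PSF standard deviation (~1 um)
for fwhm_src in (4e-6, 10e-6, 70e-6):    # typical source sizes discussed in the text
    for M in (1.05, 2.0, 10.0):          # inverse, intermediate and cone-beam geometry
        print(f"FWHM_src = {fwhm_src * 1e6:4.0f} um, M = {M:5.2f}: "
              f"sigma_sys = {sigma_sys(M, fwhm_src, sigma_det) * 1e6:6.2f} um")
```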

Fig. 13.8
figure 8

Adapted from [89]

Comparison of different phase-retrieval approaches. a Results from the raw projections. Top: Virtual slice through the reconstructed pedipalp of an iodine stained cobweb spider. Bottom: Volume rendering of its thorax, showing individual muscle strands that appear hollow due to the edge-enhancement effects caused by free-space propagation between the object and the detector. After the application of the b MBA [74], c SMO [75] and d BAC approach [88], quantitative gray values can be reconstructed at high signal-to-noise ratio, though at the cost of resolution in the case of the MBA and SMO. Only the BAC provides a reconstruction with a resolution comparable to the raw data. Scale bars: 100 \(\upmu \)m.

In order to allow for tomographic imaging at the laboratory, the setup should comprise a fully motorized sample tower, containing one rotational axis for the tomographic scans, three translations above the rotation axis for the alignment of the sample in the field of view and one translation perpendicular to the optical axis for alignment of the rotation axis. Additionally, a further translation along the optical axis can be used for varying the source-to-sample distance \(z_{01}\). By also enabling the motion of the detector perpendicular to the optical axis, all relevant degrees of freedom for a proper alignment of the rotation axis are given [47, 90, 91].

Data analysis starts with phase retrieval on the individual projections, followed by tomographic reconstruction of the 3D volume, in which the cone-beam geometry of the setup has to be taken into account, e.g., by using the implementation of the algorithm by Feldkamp, Davis and Kress (FDK) [92] provided by the ASTRA tomography toolbox [93, 94]. As shown in [89], a suitable phase-reconstruction strategy follows the Bronnikov-aided correction (BAC) [88], since this approach is robust with respect to the non-ideal beam conditions at compact X-ray sources, such as low spatial coherence and a large bandpass, providing sharp and quantitative reconstructions of the sample’s 3D density distribution (cf. Fig. 13.8). More examples of the results that can be obtained at the laboratory, both in cone-beam and inverse geometry, can be found in Sect. 13.6.
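For readers unfamiliar with the toolbox, the following minimal sketch outlines how a cone-beam FDK reconstruction can be set up with the ASTRA Python bindings. The geometry values and array shapes are placeholders, the FDK_CUDA algorithm requires a GPU, and the exact parameter conventions (units relative to the detector pixel size, data layout) should be checked against the ASTRA documentation.

```python
import numpy as np
import astra  # ASTRA tomography toolbox

# Placeholder geometry (units: mm) and data; phase-retrieved projections are expected
# in the ASTRA 3D layout (detector rows, angles, detector columns).
n_rows, n_cols, n_angles = 512, 512, 800
angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
det_px = 0.05                 # detector pixel size
z01, z12 = 100.0, 400.0       # source-to-sample and sample-to-detector distances
projections = np.zeros((n_rows, n_angles, n_cols), dtype=np.float32)

vol_geom = astra.create_vol_geom(n_cols, n_cols, n_rows)
proj_geom = astra.create_proj_geom('cone', det_px, det_px, n_rows, n_cols,
                                   angles, z01, z12)
proj_id = astra.data3d.create('-proj3d', proj_geom, projections)
rec_id = astra.data3d.create('-vol', vol_geom)

cfg = astra.astra_dict('FDK_CUDA')
cfg['ProjectionDataId'] = proj_id
cfg['ReconstructionDataId'] = rec_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)

volume = astra.data3d.get(rec_id)   # reconstructed 3D volume

astra.algorithm.delete(alg_id)
astra.data3d.delete(proj_id)
astra.data3d.delete(rec_id)
```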

5 Novel Tomography Approaches

5.1 Combined Phase Retrieval and Tomographic Reconstruction

In the classical phase-contrast tomography approach, data reconstruction is performed in a sequential manner, i.e., it starts with phase retrieval on the individual projections, followed by tomographic reconstruction. The main challenge in this reconstruction scheme is the phase-retrieval step, and many methods have been developed relying on, e.g., the linearization of the transport of intensity equation [74, 75] or an analytic form of the free-space contrast transfer functions [10, 82]. However, in most cases, phase-retrieval techniques require additional assumptions like negligible absorption, slowly varying phase, or known compact support, as well as measurements from several measurement planes. In [17], a combination of phase retrieval and tomographic reconstruction was introduced, called ‘iterative reprojection phase retrieval (IRP)’, which can overcome these limitations, providing reconstructions of the object from projections acquired in a single measurement plane without further assumptions on the phase-shifting and absorption properties of the object or its support. It relies on the Helgason-Ludwig consistency conditions, which state that tomographic projections are not independent of each other, as the finite size of the object imposes systematic correlations. This helps to phase in particular the low spatial frequencies, which pose a significant challenge in single-distance phase retrieval. The general scheme of the IRP algorithm is shown in Fig. 13.9a.

Fig. 13.9
figure 9

Adapted from [14, 17]

Combined phase retrieval and tomographic reconstruction. a Sketch of the combined iterative reconstruction algorithm called ‘iterative reprojection phase retrieval (IRP)’ [17]. \(\mathcal {P}\) denotes the tomographic projection, whereas \(\mathcal {D}\) stands for Fresnel propagation. b Virtual slice through the reconstruction obtained by the IRP scheme. c Virtual slice through the reconstruction obtained after application of the CTF approach for phase retrieval and subsequent filtered backprojection (FBP). Compared to the IRP approach, a lower signal-to-noise ratio can be observed and twin image artifacts occur due to the imperfect phase retrieval. Scale bars: 5 \(\upmu \)m.

Reconstruction starts with an initial guess for the 3D distribution of the refractive index decrement \(\delta (\mathbf{r})\) as well as for \(\beta (\mathbf{r})\). By forward projection, a first iterate of the phase and amplitude distribution of the exit wave \(\varPhi _\alpha (x,y)\propto \exp \left( -k\int _z [i\delta _\alpha (\mathbf{r})+\beta _\alpha (\mathbf{r})]\mathrm dz\right) \), i.e., the wave field directly behind the object, is obtained for each tomographic angle \(\alpha \). Subsequently, the exit wave for each angle is propagated to the detection plane (Fresnel propagator \(\mathcal {D}_z\)) and the magnitude constraint is enforced. Back propagation then yields a modified exit wave \(\varPhi '_\alpha \). In the last step, the 3D distributions of \(\delta (\mathbf{r})\) and \(\beta (\mathbf{r})\) are reconstructed from these modified exit waves in a similar fashion to the algebraic reconstruction technique (ART). By also enforcing positivity of the electron density, corresponding to \(\delta (\mathbf{r})>0\), as well as positivity of \(\beta (\mathbf{r})\) (no generation of X-rays), additional unrestrictive constraints can be implemented for the object. This basic sequence of propagators and projectors is iterated M times, leading to a consistent tomographic reconstruction. The result of the IRP algorithm for the example of a barium-stained macrophage is shown in Fig. 13.9b [14]. A higher signal-to-noise ratio is reached and twin-image artifacts are reduced, compared to the standard sequential scheme (with CTF-based reconstruction of all projections followed by a filtered backprojection), which is depicted in (c).
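The structure of this reconstruction loop is illustrated by the following toy sketch for a single 2D slice and a pure-phase object. The scikit-image radon/iradon pair stands in for the projector and its (filtered) adjoint, i.e. the ART-type update of the published algorithm is replaced by a filtered-backprojection update of the residual sinogram, and phase wrapping is ignored; all parameters are assumptions for illustration.

```python
import numpy as np
from skimage.transform import radon, iradon

def fresnel_1d(u, fresnel_number):
    """1D Fresnel propagation on a unit-pixel grid (transfer-function method)."""
    f = np.fft.fftfreq(u.shape[-1])
    return np.fft.ifft(np.fft.fft(u) * np.exp(-1j * np.pi * f**2 / fresnel_number))

def irp_2d(holograms, angles_deg, fresnel_number, k_delta=1.0, n_iter=20, relax=0.5):
    """Toy IRP-like loop: combined phase retrieval and tomography for a pure-phase 2D slice.

    holograms: measured intensities of shape (n_angles, n_pixels), single detection plane.
    The reconstructed quantity is the slice of delta (its line integrals give the exit-wave phase).
    """
    amplitude = np.sqrt(holograms)
    n_pix = holograms.shape[1]
    delta = np.zeros((n_pix, n_pix))                    # initial guess for the slice
    for _ in range(n_iter):
        sino = radon(delta, theta=angles_deg)           # line integrals, shape (n_pix, n_angles)
        phi = -k_delta * sino.T                         # exit-wave phase for every angle
        psi = np.exp(1j * phi)                          # pure-phase exit wave
        det = fresnel_1d(psi, fresnel_number)           # propagate to the detection plane
        det = amplitude * np.exp(1j * np.angle(det))    # enforce the measured magnitudes
        psi_mod = fresnel_1d(det, -fresnel_number)      # back-propagation: modified exit wave
        phi_mod = np.angle(psi_mod)                     # assumes |phi| < pi (no unwrapping)
        residual = (phi_mod - phi) / (-k_delta)         # sinogram correction
        delta += relax * iradon(residual.T, theta=angles_deg, circle=True)
        delta = np.maximum(delta, 0.0)                  # positivity constraint (delta > 0)
    return delta
```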

5.2 Tomographic Reconstruction Based on the 3D Radon Transform (3DRT)

As discussed in Sect. 13.4, a significant challenge in laboratory-based phase-contrast tomography is to reach sufficient brilliance with a laboratory source. To this end, small source sizes are required, which often entails insufficient photon flux. In [95], a novel tomographic reconstruction approach based on the 3D Radon transform (3DRT) was introduced, instead of the 2D Radon transform (2DRT) used in classical tomography. The 3DRT could help to solve this intensity/coherence dilemma, as it allows the source size to be relaxed in one of the two source dimensions, while exploiting the smaller dimension for resolution and coherence.

Fig. 13.10
figure 10

Adapted from [96]

Tomography with extended sources based on the 3D Radon transform. a Empty-beam corrected projection of a gerbil cochlea prior to tomographic reconstruction. The bar at the top indicates the effective width of the source. b Numerical reprojection of the reconstructed volume, showing a sharp image of the projected cochlea. c Slice through the reconstructed volume using the 3D Radon transform. Scale bars: 1 mm.

To this end, it was shown that by a proper extension of the data recording scheme, in particular rotation around two tomographic axes instead of one, an experimental realization of the area integrals required for the 3DRT becomes possible. Within this scheme, the recorded projections are integrated along the ‘low-resolution direction’ in which the source spot is elongated. At the same time, the resolution and contrast of the entire 3D object reconstruction are determined by the perpendicular ‘high-resolution direction’. The 3DRT filtered backprojection is performed analogously to the 2D case by filtering the 1D absorption profiles and subsequent backprojection or ‘smearing’ into the 3D space. Note, however, that the filter function in Fourier space is given by \(k^2\), as opposed to |k| in the 2D case. Figure 13.10 presents an example of a 3DRT reconstruction applied to experimental data of a gerbil cochlea, which was recorded with anisotropic source conditions [96]. In the recorded projection in (a), the blurring in the horizontal direction is clearly visible. After reconstruction with the 3DRT, the numerical reprojection under the same angle in (b) as well as the virtual slice in (c) show an isotropically sharp representation of the cochlea. Note that the sampling scheme is chosen such that the full 3D Fourier space is sampled equidistantly. However, in the practical implementation, the computation of additional 1D profiles from each of the anisotropically blurred projections in a sector of \(\pm {\varDelta \theta }\) around the high-resolution direction yielded better results. As this leads to a nonuniform sampling of the unit sphere, a partitioning of the hemisphere into Voronoi regions was used for normalization in the 3DRT reconstruction step.
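The core of the inversion, i.e. the \(k^2\) filtering of the 1D profiles and the subsequent ‘smearing’ over planes, can be sketched as follows; constant prefactors are omitted and a uniform weighting of the directions is assumed instead of the Voronoi weights mentioned above.

```python
import numpy as np

def filter_profile(profile):
    """Filter a 1D plane-integral profile with the k^2 ramp of the 3D Radon inversion (up to constants)."""
    k = np.fft.fftfreq(profile.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(profile) * k**2))

def backproject_3drt(profiles, directions, n):
    """Naive 3D Radon backprojection: smear each filtered profile over the planes x.w = p (sketch).

    profiles: array (n_dirs, n_p) of plane integrals; directions: (n_dirs, 3) unit vectors
    on the hemisphere; n: edge length of the cubic reconstruction volume.
    """
    grid = np.arange(n) - n / 2.0
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing='ij')
    p_axis = np.arange(profiles.shape[-1]) - profiles.shape[-1] / 2.0
    volume = np.zeros((n, n, n))
    for prof, w in zip(profiles, directions):
        filtered = filter_profile(prof)
        p = X * w[0] + Y * w[1] + Z * w[2]          # signed plane offset of every voxel
        volume += np.interp(p, p_axis, filtered)    # linear interpolation along the profile
    return volume / len(directions)
```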

The 3DRT can also be used for phase-contrast imaging based on, e.g., grating interferometry, edge illumination or free-space propagation. This was demonstrated in [95] for the case of propagation imaging, taking the example of a common match. Phase retrieval according to the BAC algorithm [88] was performed prior to the tomographic reconstruction. This phase-retrieval step was carried out on the sinogram and hence on the 1D profiles obtained after integration along the low-resolution direction. Note that the required spatial coherence was provided only in the high-resolution direction of the anisotropic source.

One particular motivation to develop reconstruction based on the 3DRT is related to local or region-of-interest tomography. It can be shown mathematically that the reconstruction only depends on the local values of the Radon transformed object function, so that artifacts introduced by object components outside the reconstruction volume, which often affect the image quality in local tomography of standard 2DRT, should in principle be suppressed. This, however, could not be confirmed by numerical simulations.

6 Tomography of Biological Tissues: Applications and Benchmarks

As is well known from classical histology, physiological function is enabled by the underlying tissue structure, and conversely, structural alterations lead to different pathological states, e.g., in neurodegenerative diseases. Deciphering the 3D tissue structure from the whole organ down to the cellular scale enables the quantification of these relations and the underlying mechanisms. Conventional approaches such as histological sectioning or electron microscopy (EM) are associated with serial sectioning, staining and subsequent investigation under a light or electron microscope. They provide excellent results on single 2D sections, but the 3D anatomy can only be determined after aligning the individual sections, leading to a non-isotropic resolution within the tissue. Apart from possible artifacts due to the slicing or staining procedure, they are labor-intensive and time-consuming techniques, impeding the visualization of large fields of view, e.g. entire organs, even at moderate resolution.

To this end, phase-contrast tomography based on free-space propagation offers a unique capability for high-resolution imaging of soft tissues over a cross section of several mm, and with a geometric zoom capability to visualize selected regions of interest down to 20–50 nm voxel sizes. Zoom-tomography is enabled by variation of the focus-to-sample distance, yielding 3D reconstructions at selectable magnification, resolution and field of view (FOV). The zoom capability and the dose efficiency are particularly pronounced if highly divergent and highly coherent beams with low wavefront distortions are available. Such wavefronts are provided by optimized X-ray waveguide optics, as presented in Sect. 13.2. In combination with suitable phase-retrieval algorithms, challenging radiation-sensitive and low-contrast samples can be reconstructed with minimal artifacts.

In this section, we review phase-contrast X-ray tomography of biological tissues, presenting examples and benchmark studies for two different cases: first, phase-contrast µ-CT using (in-house) laboratory sources, and second, nano-CT using synchrotron radiation (SR). The first case is illustrated by tomography on the scale of small animal organs, notably the cochlea [97, 98], as well as tomography at the small-animal scale [99]. For the second case, we present tomography of mouse nerves (optic nerve, sciatic nerve), showing each axon in the nerve with details such as the nodes of Ranvier and Schmidt-Lanterman incisures [100], lung tissue of asthma and control mice [27], and finally high-resolution reconstructions of human cerebellum, yielding the precise locations of neurons in the molecular and granular layer [61]. The last example comprises both synchrotron nano-CT and laboratory µ-CT.

6.1 3D Structure of Cochlea

Imaging of the delicate and complex anatomy of the cochlea in small animal models is required to understand malformations caused by genetic defects, to guide new treatments and to develop cochlear implants [101, 102], including novel optogenetic approaches [103]. Cochlea imaging is perfectly suited to illustrate the particular advantages of phase-contrast X-ray imaging, since soft tissues and membranes have to be visualized while surrounded by bone. Phase-contrast tomography of cochleae using synchrotron radiation can overcome the limitations of imaging approaches such as classical histology or magnetic resonance imaging [102, 104, 105]. Reaching sufficient contrast and resolution at laboratory sources, however, poses a much larger challenge. In [97], a well-chosen combination of a liquid-metal jet anode (cf. Sect. 13.4), high-resolution detectors, an optimized geometry and reconstruction algorithms was used to achieve sufficient contrast and a resolution down to 2 \({\upmu \text {m}}\), enabling the visualization of thin membranes and nerve fibers surrounded by bone. Importantly, the high data quality allowed for automatic histogram-based segmentation of bone and soft tissue. Figure 13.11 illustrates the achieved contrast, data quality and resolution for the visualization of thin membranes and nerve fibers within the cochlea.

Fig. 13.11
figure 11

Adapted from [97]

Results of in-house phase-contrast tomography on cochlea. a 3D visualization of a mouse cochlea with bone (brown, semi-transparent), basilar membrane (green), Reissner’s membrane (yellow), Rosenthal’s canal (blue) and optical fiber (gray). b Magnified view showing nerve tissue (orange). The nerve fibers of the spiral ganglion pass out between the two layers of the lamina spiralis ossea (bottom layer shown in magenta). c, d Slices through regions of interest within the cochlea, revealing scala tympani (ST), basilar membrane (BM), scala vestibuli et media (SVM) and spiral ganglion (SG) for c cone-beam and d inverse geometry. Finer nerve fibers are resolved in the inverse geometry (see inset). Scale bars: 200 \({\upmu } \text {m}\) and 20 \({\upmu \text {m}}\) (insets).

The presented results show that the polychromatic illumination of laboratory X-ray sources does not per se impede high data quality. However, the reconstructed grey values can by no means be regarded as quantitatively correct. Neither does the phase-retrieval approach of [88] properly separate phase from amplitude, nor are phase and amplitude well defined in the case of a broad bandpass. Beam hardening renders even effective values for the (mean) photon energy extremely questionable. Such problems are often particularly noticeable if strongly absorbing materials such as metals are present in the object. Cochlear implants with wires and electric components in the vicinity of soft tissues fall into this category of multi-material objects with strong differences in the \(\delta /\beta \)-ratio. To find a solution for such applications, a new class of narrow-band and compact radiation sources was evaluated in [98], based on the interaction of accelerated electrons and laser photons (inverse Compton effect) [106]. As a prototype of such sources, the Munich Compact Light Source (MuCLS) generates narrow-band X-ray photons within a continuously tunable energy spectrum [107,108,109], providing a very useful source for phase-contrast imaging and closing a performance gap between conventional laboratory instruments and synchrotron facilities. MuCLS data enabled high-quality reconstructions of the functional soft tissue within guinea pig and marmoset cochleae, even in the presence of an electrical cochlear implant with metallic components. Figure 13.12 illustrates imaging of a guinea pig cochlea at a resolution in the range of 10 µm [98]. The higher and tunable photon energy and in particular the narrow bandpass of the MuCLS allow in principle for more quantitative reconstruction values (grey levels) than possible with conventional laboratory microfocus X-ray sources.

Fig. 13.12
figure 12

Adapted from [98]

Reconstruction results for a guinea pig cochlea measured at the MuCLS. a Virtual slice through the reconstructed 3D volume, showing the typical anatomical features of the cochlea in high detail without artifacts like beam hardening. In particular, the Rosenthal’s canal (RC), the osseous spiral lamina (OSL) and the stria vascularis (STV) can be recognized. In the inset, in which contrast was optimized for the soft tissue components, also the basilar membrane (BM) and the Reissner’s membrane (RM) are visible as well as the corresponding chambers separated by these membranes, the scala tympani (ST), scala media (SM) and the scala vestibuli (SV). b 3D rendering of part of the volume with a cut revealing the inner structure of the cochlea in high detail. c, d Segmentation of typical anatomical features of the cochlea together with a volume rendering displayed semi-transparently to put it in context. Note that due to rupturing, the theoretical shape of the membranes was derived from the position of typical landmarks in the volume. The segmentation includes the ossicles (malleus, incus and stapes), the round window membrane (RWM) as well as the OSL, RC, RM and BM. Scale bar: 1 mm.

6.2 Small Animal Imaging

In the next example, we show that not only excised organs but entire small animals are amenable to propagation-based phase-contrast tomography, even at compact laboratory sources. This is important since synchrotron radiation sources are rarely in the direct vicinity of small-animal and biomedical research facilities, and beamtime scheduling constraints easily interfere with the requirements of small-animal studies. Conversely, a much wider range of preclinical research applications can be addressed after translation of phase-contrast tomography to the laboratory scale.

The chosen example concerns in situ 3D lung imaging of small animals. Phase contrast had been demonstrated earlier for this application using grating-based phase contrast [110, 111], which, however, does not reach the resolution needed to resolve small features in tissue. With the advent of improved sources, instrumentation and analysis, 2D [112] and 3D [99] propagation-based phase-contrast imaging has now become practical also at the level of small animals. A suitable strategy, demonstrated in [99], is as follows: first, large overview scans are recorded in absorption contrast. Subsequently, by adjusting the corresponding geometric parameters and increasing the magnification of the setup, a phase-contrast data set is acquired in local tomography mode. As shown in [99] and illustrated in Fig. 13.13, fine terminal airways and thousands of small alveoli of the lung can be resolved at a resolution of about 5 µm, despite the rather thick and absorbing surrounding tissue.
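The zoom step exploits the geometric magnification of the cone-beam geometry, \(M = (z_{1} + z_{2})/z_{1}\), with source-to-sample distance \(z_{1}\) and sample-to-detector distance \(z_{2}\); the effective pixel size in the object plane is the detector pixel size divided by \(M\), and the relevant propagation distance is the effective distance \(z_\mathrm{eff} = z_{1} z_{2}/(z_{1}+z_{2})\). The following minimal sketch uses purely illustrative distances and pixel size, not the parameters of [99]:

```python
def cone_beam_geometry(z1_mm, z2_mm, detector_pixel_um):
    """Geometric magnification, effective pixel size and effective
    propagation distance for a cone-beam (point-source) setup.
    z1_mm: source-to-sample distance, z2_mm: sample-to-detector distance."""
    M = (z1_mm + z2_mm) / z1_mm                 # geometric magnification
    px_eff_um = detector_pixel_um / M           # effective (object-plane) pixel size
    z_eff_mm = z1_mm * z2_mm / (z1_mm + z2_mm)  # effective propagation distance
    return M, px_eff_um, z_eff_mm

# Illustrative numbers only (not the parameters used in [99]):
for z1 in (200.0, 40.0):  # moving the sample closer to the source ...
    M, px, zeff = cone_beam_geometry(z1, 600.0, 50.0)
    print(f"z1 = {z1:5.1f} mm: M = {M:4.1f}, eff. pixel = {px:5.2f} um, z_eff = {zeff:6.1f} mm")
```

Moving the sample closer to the source increases the magnification and decreases the effective pixel size, which is precisely the parameter change used when switching from the overview configuration to local tomography.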

The results required an optimization of the X-ray spectrum by pre-hardening. Hence, for future experiments it would be useful to enrich the liquid-jet alloy with indium or to replace it with a suitable silver alloy. With further improvements reducing the acquisition time by a factor of ten, phase-contrast imaging of live animals would become possible. The high availability of laboratory sources would thus enable longitudinal studies with the necessary statistical power.
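Pre-hardening amounts to Beer-Lambert filtering of the source spectrum: a filter with energy-dependent attenuation suppresses the low-energy part and shifts the mean photon energy upwards. A minimal sketch of this effect, with a hypothetical attenuation table and filter thickness (not the actual spectrum or filter material of the experiment):

```python
import numpy as np

def pre_harden(energies_keV, spectrum, mu_per_cm, thickness_cm):
    """Beer-Lambert filtering of a source spectrum. mu_per_cm is the
    energy-dependent linear attenuation coefficient of the filter material;
    low energies (high mu) are suppressed, raising the mean photon energy."""
    transmitted = np.asarray(spectrum) * np.exp(-np.asarray(mu_per_cm) * thickness_cm)
    mean_energy = np.sum(np.asarray(energies_keV) * transmitted) / np.sum(transmitted)
    return transmitted, mean_energy

# Hypothetical numbers for illustration only:
E = np.linspace(10.0, 60.0, 6)                   # photon energies (keV)
S = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])     # relative source spectrum
mu = np.array([30.0, 10.0, 4.0, 2.0, 1.2, 0.8])  # filter attenuation (1/cm), decreasing with E
filtered, E_mean = pre_harden(E, S, mu, thickness_cm=0.1)
```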

Fig. 13.13

Adapted from [99]

a Projection of the large-FOV data set covering the thorax of the whole mouse (top), showing mostly absorption contrast, and a phase-contrast projection obtained by zooming into the region marked by the rectangle (bottom). b Virtual slice through the large-FOV measurement. The inset shows a zoom into the lung area marked by the rectangle, with adjusted contrast. c Virtual slice obtained in the zoom configuration, where the sample-detector distance was increased to obtain phase contrast. d The same slice after application of the phase-retrieval approach proposed by Paganin et al. [75]. The signal-to-noise ratio is increased while the grey values simultaneously become more quantitative. e Profiles along the 6-pixel-wide lines indicated in (c) and (d), respectively. The positive effect of phase retrieval is clearly visible. f 3D rendering of the large-FOV measurement (left), containing automatically segmented bones (gray), the heart (red) and lung tissue (pink). On the right, a 3D rendering of the zoomed reconstruction volume is shown, with orthogonal slices through the volume and a rendering of the soft-tissue structure. Scale bars: 2 mm (a, top and b), 400 \(\upmu \)m (a, bottom) and 500 \(\upmu \)m (c, d).
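For reference, the single-material approach of Paganin et al. [75] mentioned in the caption corresponds to a low-pass filter of the flat-field corrected intensity, with a strength set by the \(\delta/\beta\) ratio, the propagation distance and the wavelength (written here without geometric magnification). The sketch below uses purely illustrative parameter values and should not be read as the actual processing pipeline of [99]:

```python
import numpy as np

def paganin_filter(intensity, pixel_size, dist, wavelength, delta_beta):
    """Single-material (Paganin-type) phase retrieval for one flat-field
    corrected projection 'intensity' (2D array, values ~1 outside the sample).
    pixel_size, dist and wavelength in consistent units (e.g. metres).
    delta_beta: assumed ratio delta/beta of the (single) material.
    Returns the retrieved phase map (up to the usual sign/offset conventions)."""
    ny, nx = intensity.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)      # spatial frequencies (cycles / m)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    f2 = fy[:, None]**2 + fx[None, :]**2
    # low-pass filter: 1 / (1 + pi * lambda * z * (delta/beta) * |f|^2)
    denom = 1.0 + np.pi * wavelength * dist * delta_beta * f2
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) / denom))
    # retrieved phase: phi = (delta / (2 beta)) * ln(filtered intensity)
    return 0.5 * delta_beta * np.log(np.clip(filtered, 1e-6, None))

# Illustrative parameters only (not those of the measurement in [99]):
proj = np.ones((256, 256))   # placeholder flat-field corrected projection
phi = paganin_filter(proj, pixel_size=5e-6, dist=0.2,
                     wavelength=2.5e-11, delta_beta=1000.0)
```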

6.3 3D Virtual Histology of Nerves

Fast conduction of action potentials in specific nerves of the peripheral or central nervous system (PNS/CNS) is enabled by myelin sheaths which surround the corresponding parallel-arranged axons within the nerve, providing electrical insulation from the surrounding fluids. These myelin sheaths comprise myelin segments with a length of 150–200 \(\upmu\)m in the CNS and up to 1 mm in the PNS [113, 114], separated by myelin-free gaps called nodes of Ranvier (RN). This segmental structure results in saltatory conduction, with the action potentials propagating from one node of Ranvier to the next, where a large sodium influx leads to a regeneration of the signal [115]. At more or less regular distances within the myelin segments of the PNS, their compact structure is interrupted by clefts, the so-called Schmidt-Lanterman (SL) incisures.

3D virtual histology by phase-contrast tomography offers unique access to the spatial organization of the axon bundles within the nerve and can answer questions concerning axon organization and size distribution, as well as correlations of RN and SL positions between different axons. In [100], entire (uncut) optic, saphenous and sciatic nerves were prepared from mice using high-pressure freezing and scanned with the nanofocus KB optics at the legacy beamline ID22NI of the ESRF. In subsequent work, the recent ESRF upgrade beamline ID16A [43] as well as the upgraded GINIX endstation at beamline P10/PETRA III [44] were used to demonstrate the suitability of these novel setups for nerve tomography [116].

It was found that the intrinsic electron density, without additional labeling or staining, is sufficient to identify axonal structures. However, to specifically image the myelin sheath surrounding the axon, labeling by an osmium tetroxide stain was required. By placing the nerve at different defocus positions in the diverging waveguide beam, both overview scans of entire sciatic nerves and zoom tomograms of relevant sub-structures such as nodes of Ranvier and Schmidt-Lanterman incisures were recorded (cf. Fig. 13.14). The reconstructions were found to be very consistent with histological sections and EM micrographs, but offered the clear advantage of probing much larger volumes that could be visualized with isotropic 3D resolution.

Fig. 13.14

Adapted from [100]

Phase-contrast tomography of nerves. a Three-dimensional visualization of a mouse saphenous nerve stained with osmium and embedded in agarose (voxel size \(430\,\text{nm}\)). The nerve is rendered in blue, while an adjacent blood vessel is depicted in red. Additionally, a longitudinal virtual slice is shown, revealing the single axons within the nerve owing to the high electron density of the osmium-stained myelin sheath. b Virtual slice through an EPON-embedded, osmium-stained mouse sciatic nerve measured with a voxel size of \(430\,\text{nm}\). c Virtual slice through a zoom tomogram of the same nerve (\(50\,\text{nm}\) voxel size), recorded in the region marked in (b). d 3D rendering of the same nerve measured with a voxel size of \(100\,\text{nm}\). 20 axons (turquoise) are shown along with a virtual slice through the reconstructed volume. Nodes of Ranvier are rendered yellow, Schmidt-Lanterman incisures red. An additional rendering of 13 axons (black box) suggests a correlation between the positions of these nodes and incisures of neighbouring axons. Scale bars: 50 \(\upmu \)m (a), 100 \(\upmu \)m (b) and 10 \(\upmu \)m (c).

6.4 Macrophages in Lung Tissue

The lung is the primary organ of the respiratory system in air-breathing vertebrates. It enables the exchange of oxygen between the inhaled oxygen-rich air and the blood in the cardiovascular system of the body. The air is transported through the trachea, which branches into many bronchi and bronchioles that eventually end in the alveoli. This anatomical structure leads to a continuous enlargement of the surface area, enabling a fast exchange of oxygen between the alveoli and the surrounding blood vessels. One of the major diseases associated with the lung is asthma, with typical symptoms such as coughing or shortness of breath; its cause and progression are still not fully understood [117]. It leads to a chronic inflammation of the respiratory tract, especially of the bronchi and bronchioles. Macrophages, which are part of the immune system, are a specialized kind of phagocyte, protecting the organism by ingesting harmful pathogens and other foreign substances. They are known to be involved in processes of allergic inflammation [118], but their precise role in asthma and the underlying mechanisms are still debated [119], in particular with regard to their migration properties [120].

High-resolution X-ray phase-contrast tomography is a promising tool to visualize the 3D distribution of macrophages in situ. In [27], tissue slices from mouse lungs were imaged at the legacy beamline ID22NI of the ESRF as well as at the GINIX setup, with voxel sizes in the range of 50–430 nm. In this study, the intricate three-dimensional (3D) structure of lung tissue was visualized, with its system of bronchial tree, alveoli and blood vessels (see Fig. 13.15). In addition, the distribution of macrophages and their migration properties within the lung were investigated by 3D visualization with high resolution and contrast. Precise tracking of alveolar macrophages in relation to anatomical structures was enabled by barium labeling [27, 121]. It was shown that the intratracheally applied macrophages (MH-S cell line [122]) localize predominantly on alveoli and are able to penetrate the epithelial layer between the airway lumen and the parenchyma.
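Since the barium label produces a much stronger density contrast than the surrounding soft tissue, the labeled macrophages can in principle be located by simple thresholding and connected-component analysis of the reconstructed volume. The following sketch indicates this idea only; the threshold direction and values are illustrative and depend on the reconstruction convention, and the actual segmentation pipeline of [27] may differ:

```python
import numpy as np
from scipy import ndimage

def locate_barium_clusters(volume, threshold, min_voxels=10):
    """Locate high-density (barium-labeled) clusters in a reconstructed volume.
    volume: 3D array of reconstructed grey values (higher = denser, by assumption).
    threshold: grey value above which a voxel is considered barium.
    Returns centroids (in voxel coordinates) and sizes of all clusters
    larger than min_voxels."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)                  # 3D connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1   # discard tiny noise clusters
    centroids = ndimage.center_of_mass(mask, labels, keep)
    return np.array(centroids), sizes[keep - 1]

# Illustrative use on a random placeholder volume:
vol = np.random.rand(64, 64, 64)
centers, cluster_sizes = locate_barium_clusters(vol, threshold=0.999, min_voxels=2)
```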

Fig. 13.15

Adapted from [27]

Phase-contrast tomography on lung tissue. a Virtual slice through the reconstructed asthmatic lung tissue obtained at ID22NI (ESRF) with a voxel size of \(p = {430\,\mathrm{\text {nm}}}\). Barium sulphate particles (black) and fat (white) show a strong density contrast compared to soft tissue. A blood vessel (BV) and a bronchial tube (BT) can be identified based on their different wall morphologies. b, c Virtual slices through the reconstructed lung tissue from a healthy control measured at the GINIX setup (DESY) both in an overview scan (b, voxel size: \({245\,\mathrm{\text {nm}}}\)) and a zoom configuration (c, voxel size: \({52\,\mathrm{\text {nm}}}\)). The position of the zoom scan is indicated by a rectangle in (b). d 3D rendering of the ID22NI data set together with barium clusters (green), alveolar walls in a small ROI (yellow) and part of a blood vessel (purple). e, f 3D visualization of the tomography results obtained at the GINIX setup. The 3D renderings show barium aggregates in macrophages (green), part of a blood vessel (purple), the bronchial wall (yellow) and the outline of a single macrophage (blue). This cell is additionally shown at higher magnification in (f, bottom). Scale bars: 100 \(\upmu \)m (a), 50 \(\upmu \)m (b) and 10 \(\upmu \)m (c).

6.5 Neuron Locations in Human Cerebellum

The cerebellum, which is, among other things, important for the maintenance of upright posture and the synergy of movements [123], is located at the back of the brain of mammals. Compared to the largest part of the brain, the cerebral cortex, it has a significantly higher cell density and contains 80% of the total number of neurons of the human brain, despite accounting for only \(\sim \)10% of the total brain mass [124]. The cerebellum largely consists of the tightly folded cerebellar cortex, comprising three distinct layers, the cell-rich granular layer, the low-cell molecular layer and the intermediate mono-cellular Purkinje cell layer, located above white matter containing a large number of axon bundles.

Studying the cytoarchitecture of the cerebellum via propagation-based phase-contrast tomography requires additional contrast enhancement, as hydrated unstained tissue does not allow for an unambiguous identification of all cells in the densely packed granular layer [47]. In contrast to radiocontrast agents, which specifically increase the contrast of certain features of the sample, e.g., the myelin sheaths in mouse nerves or macrophages in the mouse lung, a global contrast enhancement can be achieved by exchanging the surrounding medium for a medium with lower electron density. This makes it possible to examine conventional neuropathological samples such as human brain tissue obtained during routine autopsy, since these are usually embedded in paraffin after fixation.
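The effect of the embedding medium can be estimated from the standard relation \(\delta = r_e \lambda^2 \rho_e/(2\pi)\), with the classical electron radius \(r_e\) and the electron density \(\rho_e\). The snippet below compares \(\delta\) of soft tissue against water and paraffin using rough, purely illustrative electron densities (not values taken from [61]); the relevant contrast scales with the difference in \(\delta\) between tissue and medium:

```python
import numpy as np

R_E = 2.818e-15  # classical electron radius (m)

def delta_from_electron_density(rho_e_per_m3, energy_keV):
    """Real refractive-index decrement: delta = r_e * lambda^2 * rho_e / (2*pi)."""
    wavelength = 12.398 / energy_keV * 1e-10   # photon energy (keV) -> wavelength (m)
    return R_E * wavelength**2 * rho_e_per_m3 / (2.0 * np.pi)

# Rough, illustrative electron densities (electrons per m^3):
rho = {"soft tissue": 3.5e29, "water": 3.34e29, "paraffin": 3.1e29}
for name, r in rho.items():
    print(f"{name:12s}: delta ~ {delta_from_electron_density(r, 15.0):.2e} at 15 keV")
```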

In [61], propagation-based phase-contrast tomography was performed on paraffin-embedded human cerebellum both at the GINIX endstation and at the laboratory setup in inverse geometry, providing insights into the 3D cytoarchitecture at the sub-cellular level (cf. Fig. 13.16). In order to fully exploit the potential of this 3D virtual histology, a workflow was developed to automatically locate the small cells in the molecular and granular layers, leading to the segmentation of several ten thousand (GINIX) up to \(\sim \)1.8 million cells (laboratory). This enabled an analysis of the spatial organization of neurons in the granular layer, e.g., based on local density estimation or pair correlation functions, pointing towards a strong short-range order of these cells, visible as local clustering (cf. Fig. 13.17). Moreover, the availability of the exact cell positions in 3D allows for a precise quantification of the cellular distributions, revealing an anisotropy in the arrangement of nearest neighbors within the granular layer which is governed by the principal directions of the Purkinje cell layer, a result which would not have been accessible by conventional 2D histology.
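A pair correlation function of the kind shown in Fig. 13.17 can be estimated directly from the list of 3D cell centroids. The short sketch below indicates the basic computation using a k-d tree; the naive normalization ignores boundary corrections, and the box size and cut-off radius are purely illustrative, not the actual analysis parameters of [61]:

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_correlation(points, box_size, r_max, n_bins=50):
    """Angularly averaged pair correlation function g(r) of 3D points in a
    rectangular box of edge lengths box_size (no boundary correction, so only
    reliable for r_max much smaller than the box)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    density = n / np.prod(box_size)
    tree = cKDTree(points)
    pairs = tree.query_pairs(r_max, output_type="ndarray")   # unique pairs within r_max
    d = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(d, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # each unique pair is counted once -> factor 2 for pairs per reference point
    g = 2.0 * counts / (n * density * shell_vol)
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, g

# Illustrative use with random (ideal-gas-like) points, for which g(r) ~ 1:
pts = np.random.rand(5000, 3) * 100.0   # positions in a 100^3 box (e.g. in um)
r, g = pair_correlation(pts, box_size=(100.0, 100.0, 100.0), r_max=10.0)
```

For clustered cell positions, as found in the granular layer, g(r) exhibits peaks near one and two cell diameters instead of the flat profile of a random point set.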

Fig. 13.16

Adapted from (a) [47] and (b–d) [61]

Phase-contrast tomography of tissue from human cerebellum, showing reconstructions obtained both at the GINIX endstation and at the laboratory setup. a The virtual slice through the reconstructed volume of the synchrotron data set reveals the interface between the low-cell molecular layer (ML) and the cell-rich granular layer (GL), including a cell of the mono-cellular Purkinje cell layer (PCL). b Corresponding slice of the laboratory data set, showing the larger volume accessible with the laboratory setup while maintaining the resolution required for single-cell identification. A magnified view of the region marked by the rectangle is shown on the right, corresponding to the FOV of the synchrotron data set in (a). c Segmentation of the cells in the granular layer (dark red), the molecular layer (light red) and the Purkinje cell layer (shades of gray), with two exemplary Purkinje cells shown separately in front and side view. The segmentation of the granular and molecular layers was performed automatically, whereas a semi-automatic approach was used for the Purkinje cell layer. d The same segmentation for the laboratory data set. Note that the individual Purkinje cells are the same as for the synchrotron data set and that the thick branches of the dendritic tree can already be resolved with the laboratory setup. Scale bars: 50 \(\upmu \)m (a and b, right) and 200 \(\upmu \)m (b, left).

Fig. 13.17

Adapted from [61]

Analysis of cell distributions in the granular layer of human cerebellum, as obtained at the synchrotron (left column) and at the laboratory setup (right column). a Local cell density distribution within the granular layer of the cerebellum. When small volumes are used to compute the local density, a clustering of cells within this layer is clearly recognizable as hotspots in the density distribution. With increasing volume, the differences in cell density vanish, leading to an almost uniform density distribution within the granular layer. b Angularly averaged pair correlation function of the cells in the granular layer, revealing two distinct peaks at approximately once and twice the cell diameter, which indicates a local clustering of the cells. c Angular distribution of nearest neighbors in the granular layer. Note that the data sets were aligned with respect to the Purkinje cell layer such that the dendritic tree lies approximately in the xy-plane and hence at \(\theta \simeq 90^{\circ}\). The majority of nearest neighbors are clearly distributed parallel to the dendritic tree of the Purkinje cells, as hotspots in the angular distribution are visible at \(\theta \simeq 90^{\circ}\).

6.6 Outlook: Time-Resolved Phase-Contrast Tomography

Together with the progress in detector technology, the current upgrades of synchrotron sources to multi-bend achromat lattices and the corresponding increase in brilliance will offer unique opportunities for time-resolved (dynamic) tomography. In other words, the data acquisition rate could become high enough to observe dynamic processes in biological matter on the micro-scale and, in some cases, even on sub-micron scales. The phase-contrast imaging capabilities presented in this chapter could thus be (at least partially) extended from 3D to 4D (time and space) imaging. The first question to be asked concerns the temporal sampling required to probe the dynamic process in the object. In [125], e.g., phase-contrast tomography was performed in vivo on Xenopus laevis embryos, revealing new aspects of their gastrulation over time. The time scale of this development was long enough for the single tomographic scans to be considered as static, enabling data reconstruction following the classical approach via simple filtered backprojection; the progression of gastrulation was monitored by recording several tomograms at a time interval of \(\sim \)10 min.

For faster processes, more elaborate and generalized recording or analysis schemes have to be developed to meet the challenges of dynamic tomography. In the special case of cyclic processes such as a beating heart, acquisitions can be gated or triggered (by hardware or a posteriori in software) to cover different phases of the considered motion. By combining projections which are recorded at the same state of the motion but at different rotation angles, static reconstructions can be generated for the different time points within one cycle, unraveling, e.g., the complex muscle movement during insect flight [126]. However, such approaches fail for non-cyclic processes.
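In retrospective (software) gating, each projection is assigned a phase within the motion cycle and the projections are then re-binned by phase before reconstruction. A minimal sketch, assuming a known cycle period and recorded acquisition times; all names and parameters are illustrative:

```python
import numpy as np

def gate_projections(acq_times, cycle_period, n_phase_bins):
    """Assign each projection (acquired at acq_times, in seconds) to one of
    n_phase_bins motion-phase bins of a cyclic process with period cycle_period.
    Returns one index array per phase bin; each bin collects projections taken
    at the same motion state but at different rotation angles."""
    phase = (np.asarray(acq_times) % cycle_period) / cycle_period     # in [0, 1)
    bins = np.minimum((phase * n_phase_bins).astype(int), n_phase_bins - 1)
    return [np.flatnonzero(bins == b) for b in range(n_phase_bins)]

# Illustrative use: 3600 projections at 100 Hz, heartbeat-like cycle of 0.25 s,
# sorted into 8 motion phases; each subset would then be reconstructed separately.
times = np.arange(3600) / 100.0
subsets = gate_projections(times, cycle_period=0.25, n_phase_bins=8)
```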

Fig. 13.18

Adapted from [127]

Four-dimensional movie of a burning match. a Exemplary sinogram extracted from all 18800 recorded projections. The bottom row depicts tomographic reconstructions of the highlighted sinogram segments, each consisting of 401 equidistant projections. While the shape, the features of the wooden structure and the stages of the burning process can be clearly identified, the reconstructions show motion artifacts such as ‘streaks’ or non-closed shapes. b By estimating the motion between successive time points via optical flow, an improved tomographic reconstruction can be carried out by backprojection along dynamically curved paths (right). Note that the motion amplitude was increased by a factor of 3 for better visibility. c The improved reconstruction quality can be observed in fine object details which can be resolved in the image on the right (red arrows), compared to the result of a conventional direct filtered backprojection, shown on the left. d Rendered 3D structure of the burning match at different time points. Scale bars: 1 mm.

In [127], an approach for the reconstruction of time-resolved processes based on filtered backprojection along dynamically curved paths was introduced. It can account for non-affine and non-cyclic motion on time scales shorter than the time needed for an entire tomogram, provided that the motion model can be estimated. The workflow is depicted in Fig. 13.18 for the example of a burning match. In order to monitor the burning process, 47 single tomograms with 401 projections each were recorded at the TOMCAT beamline (SLS, Villigen, Switzerland) while the match was continuously rotated at a rate of 1.25 Hz. In the exemplary sinogram in (a) at the top, the shrinking of the structure as well as a decrease of signal intensity due to the burning process can be observed. This process can be approximately depicted in 3D by selecting intervals of 401 projections from the sinogram and performing standard filtered backprojections, leading to the reconstructions shown in the lower row. The shape and features of the wooden structure and the stages of the burning process can be clearly identified. However, the reconstructions show motion artifacts such as ‘streaks’ or non-closed shapes. These can be reduced by estimating the motion perpendicular to the rotation axis between each subsequent pair of slices in the time series via optical flow analysis [128]. This motion model is then used to perform filtered backprojection along dynamically curved paths, as depicted in (b), accounting for the motion of certain parts of the sample during the time span of the corresponding tomogram. The comparison between the reconstructed slice using this approach and a standard filtered backprojection in (c) shows that artifacts can be significantly reduced, enabling the investigation of the dynamics of the burning process at high temporal and spatial resolution in the µm-range. The 4D nature of the data is illustrated in (d), showing the rendered 3D structure of the wooden part of the match at 5 different points in time.
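The core idea of backprojection along dynamically curved paths can be sketched for a single 2D parallel-beam slice: each voxel is displaced, per projection, according to the estimated motion field before being mapped onto the detector. The following simplified sketch assumes the per-projection displacement field has already been estimated (e.g. via optical flow); array layout, normalization and the crude ramp filter are illustrative and do not reproduce the actual implementation of [127]:

```python
import numpy as np

def ramp_filter(sinogram):
    """Crude ramp filter along the detector axis (rows: angles, columns: detector)."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject_dynamic(sinogram, angles, flow):
    """Motion-compensated filtered backprojection for a 2D parallel-beam slice.

    sinogram : (n_angles, n_det) projections
    angles   : (n_angles,) projection angles in radians
    flow     : (n_angles, 2, n_det, n_det) displacement fields (dy, dx) taking each
               grid point at the reference time to its estimated position at the
               time of projection i -- this defines the dynamically curved path.
    """
    n_angles, n_det = sinogram.shape
    filtered = ramp_filter(sinogram)
    coords = np.arange(n_det) - (n_det - 1) / 2.0
    x, y = np.meshgrid(coords, coords)               # reconstruction grid (pixel units)
    recon = np.zeros((n_det, n_det))
    for i, theta in enumerate(angles):
        xi = x + flow[i, 1]                          # displaced voxel positions ...
        yi = y + flow[i, 0]                          # ... at the time of this projection
        t = xi * np.cos(theta) + yi * np.sin(theta)  # detector coordinate they project to
        recon += np.interp(t.ravel(), coords, filtered[i],
                           left=0.0, right=0.0).reshape(n_det, n_det)
    return recon * np.pi / n_angles

# Illustrative use: a zero-motion flow field reduces to ordinary filtered backprojection.
sino = np.random.rand(401, 128)
flow = np.zeros((401, 2, 128, 128))
slice_t0 = backproject_dynamic(sino, np.linspace(0, np.pi, 401, endpoint=False), flow)
```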

As this simple example of the burning match shows, time-resolved phase-contrast tomography based on advanced reconstruction schemes allows us to observe dynamic processes in the interior of biomaterials and biological matter. With further improvements, 4D reconstructions of such processes in the interior of multi-cellular assemblies and tissues up to the level of entire organs and small animals can be anticipated.