3. Spatiotemporal field correlations

3.1. Spatiotemporal correlation function. Coherence volume.

• All optical fields in practice fluctuate randomly in both time and space and are subject to a statistical description [1]. These fluctuations depend on both the emission process (primary sources) and propagation media (secondary sources).

• Optical coherence is a manifestation of the field statistical similarities in space and time and coherence theory is the discipline that mathematically describes these similarities [2]. A deterministic field distribution in both time and space is the monochromatic plane wave, which is only a mathematical construct, impossible to obtain in practice due to the uncertainty principle.

• The formalism presented below for describing the field correlations is mathematically similar to that used for mechanical fluctuations, for example, in the case of vibrating membranes.

• The analogy between the two different types of fluctuations and their mathematical description in terms of spatiotemporal correlations has been recently emphasized [3].

• A starting point in understanding the physical meaning of a statistical optical field is the question: what is the effective (average) temporal sinusoid, ⟨e^{iωt}⟩_ω, for a broadband field? What is the average spatial sinusoid, ⟨e^{ik·r}⟩_k, for a spatially inhomogeneous field?

• A monochromatic plane wave is described by e^{i(k·r − ωt)}. These two averages can be performed using the probability densities associated with the temporal and spatial frequencies, S(ω) and P(k), which are normalized to satisfy ∫S(ω) dω = 1 and ∫P(k) d³k = 1.

• Thus, S(ω)dω is the probability of having frequency component ω in our field, or the fraction of the total power contained in the vicinity of frequency ω.

• Similarly, P(k)d³k is the probability of having component k in the field, or the fraction of the total power contained around spatial frequency k. Up to a normalization factor, S and P are the temporal and spatial power spectra associated with the field. The two “effective sinusoids” can be expressed as ensemble averages, using S(ω) and P(k) as weighting functions,

\left\langle e^{i\omega t} \right\rangle_{\omega} = \int S(\omega)\, e^{i\omega t}\, d\omega = \Gamma(t)        (1a)

\left\langle e^{i\mathbf{k}\cdot\mathbf{r}} \right\rangle_{\mathbf{k}} = \int P(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, d^3k = W(\mathbf{r})        (1b)

• Equations 1a-b establish that the average temporal sinusoid for a broadband field equals its temporal autocorrelation, Γ. The average spatial sinusoid for an inhomogeneous field equals its spatial autocorrelation, denoted by W.
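
• Because Eq. 1a identifies the spectrum-weighted average sinusoid with the temporal autocorrelation, the statement can be checked numerically. Below is a minimal sketch (Gaussian spectrum, arbitrary units; all parameters are illustrative, not taken from the text) that synthesizes a random field with spectrum S(ω) and compares its time-averaged autocorrelation with ⟨e^{iωτ}⟩_ω.

```python
# Numerical check of Eq. 1a: for a stationary random field with spectrum S(omega),
# the time-averaged autocorrelation <U*(t) U(t+tau)>_t approaches the
# spectrum-weighted average sinusoid <exp(i omega tau)>_omega.
import numpy as np

rng = np.random.default_rng(0)

# Normalized Gaussian spectrum S(omega) (illustrative center and width)
omega0, dw = 2 * np.pi * 5.0, 2 * np.pi * 0.5
omega = np.linspace(omega0 - 6 * dw, omega0 + 6 * dw, 512)
S = np.exp(-(omega - omega0) ** 2 / (2 * dw ** 2))
S /= np.trapz(S, omega)                       # so that  int S(omega) d(omega) = 1

# Synthesize one realization: random-phase spectral components weighted by sqrt(S)
t = np.linspace(0.0, 200.0, 4096)
phases = rng.uniform(0.0, 2 * np.pi, omega.size)
amps = np.sqrt(S * (omega[1] - omega[0]))
U = (amps[:, None] * np.exp(1j * (omega[:, None] * t[None, :] + phases[:, None]))).sum(axis=0)

# Time-averaged autocorrelation Gamma(tau) = <U*(t) U(t + tau)>_t
nlag = 400
tau = t[:nlag]
gamma_field = np.array([np.mean(np.conj(U[: U.size - k]) * U[k:]) for k in range(nlag)])
gamma_field /= gamma_field[0]

# Spectrum-weighted average sinusoid <exp(i omega tau)>_omega  (Eq. 1a)
gamma_spec = np.trapz(S[:, None] * np.exp(1j * omega[:, None] * tau[None, :]), omega, axis=0)

# The two agree up to finite-time averaging noise
print(np.max(np.abs(gamma_field - gamma_spec)))
```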

• Besides describing the statistical properties of optical fields, coherence theory can make predictions of experimental relevance. The general problem can be formulated as follows (Fig. 1):

Figure 1. Spatio-temporal distribution of a real optical field.

• Given the optical field distribution U(r, t) that varies randomly in space and time, over what spatiotemporal domain does the field preserve significant correlations? This translates into: combining the field U(r, t) with a replica of itself shifted in both time and space, U(r + ρ, t + τ), on average, how large can ρ and τ be and still observe “significant” interference?

• We expect monochromatic fields to exhibit infinitely broad temporal correlations and plane waves to manifest infinitely broad spatial correlations. Regardless of how much we shift a monochromatic field in time or a plane wave in space, they remain perfectly correlated with their unshifted replicas. Conversely, it is difficult to picture temporal correlations decaying over timescales shorter than an optical period, or spatial correlations decaying over scales smaller than the optical wavelength. In the following we provide a quantitative description of the spatiotemporal correlations.

• The statistical behavior of optical fields can be captured quite generally via a spatiotemporal correlation function

\Lambda(\mathbf{r}_1, \mathbf{r}_2; t_1, t_2) = \left\langle U^*(\mathbf{r}_1, t_1)\, U(\mathbf{r}_2, t_2) \right\rangle_{\mathbf{r}, t}        (2)

• The average is performed temporally and spatially, as indicated by the subscripts r and t. Because common detector arrays capture spatial intensity distributions in 2D only, we will restrict the discussion to r = (x, y), without loss of generality. These averages are defined in the usual sense as

\left\langle f(\mathbf{r}, t) \right\rangle_{\mathbf{r}, t} = \lim_{A \to \infty} \lim_{T \to \infty} \frac{1}{A\, T} \int_{A} \int_{-T/2}^{T/2} f(\mathbf{r}, t)\, dt\, d^2 r        (3)

• Often we deal with fields that are both stationary (in time) and statistically homogeneous (in space).

• If the field is stationary, its statistical properties (e.g. the average and higher-order moments) do not depend on the origin of time. Similarly, for statistically homogeneous fields, the properties do not depend on the origin of space. Wide-sense stationarity is less restrictive and defines a random process with only its first and second moments independent of the choice of origin. For the discussion here, the fields are assumed to be stationary at least in the wide sense.

• Under these circumstances, the dimensionality of the spatiotemporal correlation function Λ decreases by half,

\Lambda(\mathbf{r}_1, \mathbf{r}_2; t_1, t_2) = \Lambda(\mathbf{r}_2 - \mathbf{r}_1;\, t_2 - t_1) = \Lambda(\rho, \tau)        (4)

• The spatiotemporal correlation function becomes

\Lambda(\rho, \tau) = \left\langle U^*(\mathbf{r}, t)\, U(\mathbf{r} + \rho,\, t + \tau) \right\rangle_{\mathbf{r}, t}        (5)

• Λ(0, 0) represents the spatially averaged irradiance of the field, which is, of course, a real quantity. In general, Λ(ρ, τ) is complex. We define a normalized version of Λ, referred to as the spatiotemporal complex degree of correlation,

\lambda(\rho, \tau) = \frac{\Lambda(\rho, \tau)}{\Lambda(0, 0)}        (6)

• For stationary fields, |λ(ρ, τ)| attains its maximum at ρ = 0, τ = 0, thus

\left| \lambda(\rho, \tau) \right| \le \lambda(0, 0) = 1        (7)
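
• As a minimal numerical sketch of Eqs. 5–7, the correlation Λ(ρ, τ) of a sampled field U[x, y, t] can be estimated by averaging shifted products over space and time. The low-pass-filtered random field below is only an illustrative stand-in; array sizes and filter widths are arbitrary choices.

```python
# Estimate Lambda(rho, tau) and lambda(rho, tau) (Eqs. 5-7) from a sampled field
# U[x, y, t] by averaging shifted products over space and time.
import numpy as np

rng = np.random.default_rng(1)

# Build a correlated random test field by low-pass filtering complex white noise
# in space and time (periodic boundaries, so np.roll implements the shifts).
nx, ny, nt = 64, 64, 128
white = rng.normal(size=(nx, ny, nt)) + 1j * rng.normal(size=(nx, ny, nt))
kx = np.fft.fftfreq(nx)[:, None, None]
ky = np.fft.fftfreq(ny)[None, :, None]
ft = np.fft.fftfreq(nt)[None, None, :]
lowpass = np.exp(-(kx**2 + ky**2) / (2 * 0.05**2) - ft**2 / (2 * 0.03**2))
U = np.fft.ifftn(np.fft.fftn(white) * lowpass)

def Lambda(U, rho, tau):
    """Eq. 5: <U*(r, t) U(r + rho, t + tau)> averaged over r and t."""
    shifted = np.roll(U, shift=(-rho[0], -rho[1], -tau), axis=(0, 1, 2))
    return np.mean(np.conj(U) * shifted)

Lambda0 = Lambda(U, (0, 0), 0)        # spatially averaged irradiance (real up to rounding)

def lam(rho, tau):
    """Eq. 6: normalized spatiotemporal complex degree of correlation."""
    return Lambda(U, rho, tau) / Lambda0

print(abs(lam((0, 0), 0)))                            # equals 1, consistent with Eq. 7
print(abs(lam((3, 0), 0)), abs(lam((0, 0), 5)))       # decays with spatial and temporal shift
```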

• We define an area A_c and a length ℓ_c over which |λ(ρ, τ)| maintains a significant value, say |λ| ≥ 1/2, which together define a coherence volume

V_c = A_c\, \ell_c        (8)

• This coherence volume determines the maximum domain size over which the fields can be considered correlated. In general, an extended source, such as an incandescent filament, may have spectral properties that vary from point to point. It is therefore convenient to discuss spatial correlations at each frequency ω, as described below.

3.2. Spatial correlations of monochromatic light

3.2.1. Cross-spectral density

• Taking the Fourier transform of Eq. 2 with respect to time, we obtain the spatially averaged cross-spectral density [4]

W(\rho, \omega) = \int \Lambda(\rho, \tau)\, e^{-i\omega\tau}\, d\tau        (9)

• The cross-spectral density function was used previously by Wolf to describe the second-order statistics of optical fields; it is the Fourier transform of the temporal cross-correlation between two distinct points, W(r₁, r₂, ω) [2, 4].

• This function describes the similarity in the field fluctuations of two points, for example in the two-slit Young interferometer.

• Two points are always fully correlated if the light is monochromatic, because, at most, the field at the two points can differ by a constant phase shift.

• Across an entire plane, the phase distribution is a random variable.

• To capture the spatial correlations in an ensemble-averaged sense, most relevant to imaging, we use the spatially averaged version of W(r₁, r₂, ω), defined in Eq. 9.

Figure 2. Mach-Zehnder interferometry with spatially extended fields.

• One configuration that allows measurement of W is illustrated in Fig. 2 via an imaging Mach-Zehnder interferometer. Here the monochromatic field U(r, ω) is split into two replicas that are re-imaged at the CCD plane via two 4f lens systems, which introduce a relative spatial shift ρ between them.

• The question of practical interest is: to what extent do we observe fringes, or, more quantitatively, what is the spatially averaged fringe contrast as we vary ρ?

• For each value of ρ, the CCD records a spatially resolved intensity distribution, or an interferogram. We compute the spatial average of this quantity as

\bar{I}(\rho, \omega) = \left\langle \left| U(\mathbf{r}, \omega) + U(\mathbf{r} + \rho, \omega) \right|^2 \right\rangle_{\mathbf{r}} = 2 \left\langle I(\mathbf{r}, \omega) \right\rangle_{\mathbf{r}} + 2\, \mathrm{Re}\, W(\rho, \omega)        (10)

• Here we assume that the interferometer splits the light equally between the two arms.

• Once the average intensity of each beam, ⟨I(r, ω)⟩_r, is measured separately (e.g. by blocking one beam and measuring the other), the real part of W(ρ, ω), as defined in Eq. 9, can be determined experimentally.

• Clearly, multiple CCD exposures are necessary, one for each value of ρ.
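
• The extraction of Re W from the recorded interferograms (Eq. 10) can be sketched numerically. The simulated speckle-like monochromatic field and the pixel shifts below are purely illustrative.

```python
# For each spatial shift rho, the spatially averaged interferogram gives
#   Ibar(rho) = 2 <I>_r + 2 Re W(rho, omega)   (Eq. 10),
# so Re W follows by subtracting the single-beam intensities.
import numpy as np

rng = np.random.default_rng(2)

# A spatially correlated monochromatic field U(r, omega) (speckle-like test field)
n = 256
white = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
U = np.fft.ifft2(np.fft.fft2(white) * np.exp(-(fx**2 + fy**2) / (2 * 0.04**2)))

I_single = np.mean(np.abs(U) ** 2)          # <I(r, omega)>_r, measured with one arm blocked

def re_W(shift_x):
    """Re W(rho, omega) extracted from one CCD exposure at shift rho = (shift_x, 0)."""
    U_shift = np.roll(U, -shift_x, axis=0)  # the laterally displaced replica
    interferogram = np.abs(U + U_shift) ** 2      # what the CCD records
    Ibar = np.mean(interferogram)                 # spatial average, Eq. 10
    return (Ibar - 2 * I_single) / 2.0

for dx in (0, 2, 5, 10, 20):
    print(dx, re_W(dx) / I_single)          # Re w(rho, omega): 1 at rho = 0, decays with shift
```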

• The complex degree of spatial correlation at frequency ω is defined as

w(\rho, \omega) = \frac{W(\rho, \omega)}{W(0, \omega)}        (11)

• W(0, ω) is nothing more than the average optical spectrum of the field,

W(0, \omega) = \left\langle \left| U(\mathbf{r}, \omega) \right|^2 \right\rangle_{\mathbf{r}} = S(\omega)        (12)

• The modulus is bounded, 0 ≤ |w(ρ, ω)| ≤ 1, where the extreme values |w| = 0 and |w| = 1 correspond to complete lack of spatial correlation and to full correlation, respectively.

• The area over which |w(ρ, ω)| maintains a significant value defines the correlation area at frequency ω, e.g.

A_c(\omega) = \int_{\left| w(\rho, \omega) \right| \ge 1/2} d^2 \rho.        (13)

• Often, we refer to the coherence area of a certain field without referring to a particular optical frequency. In this case, what is understood is the frequency-averaged correlation area, ⟨A_c(ω)⟩_ω.

• In practice we often deal with fields that are characterized by a mean frequency, ω₀. In this case the spatial coherence is fully described by the behavior at this particular frequency. A broadband field is fully spatially coherent if |w(ρ, ω)| = 1 for any ω in the domain [5, 6].

3.2.2. Spatial power spectrum

• Since W(ρ, ω) is a spatial correlation function, it can be expressed via a Fourier transform in terms of a spatial power spectrum, P(k⊥, ω),

W(\rho, \omega) = \int P(\mathbf{k}_\perp, \omega)\, e^{i \mathbf{k}_\perp \cdot \rho}\, d^2 k_\perp        (14)

• The spatial correlations of fields at two different frequencies vanish,

W(\rho; \omega_1, \omega_2) = \left\langle U^*(\mathbf{r}, \omega_1)\, U(\mathbf{r} + \rho, \omega_2) \right\rangle_{\mathbf{r}} = W(\rho, \omega_1)\, \delta(\omega_1 - \omega_2),        (15)

• The meaning of Eq. 15 is the following: if the spatial correlation measurement of Fig. 2 is performed with the two arms of the interferometer carrying different optical frequencies, ω₁ ≠ ω₂, then for each spatial shift ρ the spatial integration averages the cross term to zero.

• Maximum contrast fringes in a Mach-Zehnder interferometer like in Fig. 2 are obtained by having the same spectral content on both arms of the interferometer.

Figure 3. Measuring the spatial power spectrum of the field from source S via a lens (a) and Fraunhofer propagation in free space (b).

• The spatial correlation function W(ρ, ω) can also be determined experimentally from measurements of the spatial power spectrum, as shown in Fig. 3.

• Both the far field propagation in free space and propagation through a lens can generate the Fourier transform of the source field, as illustrated in Fig. 3,

U(x', y'; \omega) \propto \int U(x, y; \omega)\, e^{-i (k_x x + k_y y)}\, dx\, dy \equiv \widetilde{U}(k_x, k_y; \omega)        (16)

• The CCD is sensitive to power and therefore detects the spatial power spectrum, P(k_x, k_y; ω) = |Ũ(k_x, k_y; ω)|².

• In Eq. 16, the spatial frequency (k_x, k_y) associated with a given detector coordinate depends either on the focal length f, for the lens transformation (Fig. 3a), or on the propagation distance z, for Fraunhofer propagation (Fig. 3b),

k_x = \frac{2\pi}{\lambda} \frac{x'}{f}, \qquad k_x = \frac{2\pi}{\lambda} \frac{x'}{z}.        (17)

• In the small-angle (Fraunhofer) regime, the ratios x′/f and x′/z describe the diffraction angle; therefore P(k_x, k_y; ω) is sometimes called the angular power spectrum.
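
• A short sketch of how Eqs. 16–17 are used in practice: the far field is computed with an FFT and each pixel is mapped to a spatial frequency and then to a CCD coordinate x′ in the focal plane of the lens. Wavelength, aperture size, and sampling are illustrative values.

```python
# The focal-plane (or far-field) intensity is the spatial power spectrum of the
# source (Eq. 16); each CCD coordinate x' maps to k_x = (2 pi / lambda) x'/f (Eq. 17).
import numpy as np

lam = 550e-9            # wavelength [m]
f = 0.10                # focal length of the Fourier lens [m]
n, dx = 1024, 2e-6      # samples and pixel size in the source plane [m]
a = 200e-6              # side of a square aperture acting as the source

x = (np.arange(n) - n // 2) * dx
U_src = ((np.abs(x[:, None]) < a / 2) & (np.abs(x[None, :]) < a / 2)).astype(complex)

# FFT gives the field versus spatial frequency (Eq. 16); the CCD sees |U|^2
U_far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_src)))
P = np.abs(U_far) ** 2

# Eq. 17: map the frequency axis to the detector coordinate x' in the focal plane
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))   # [rad/m]
x_prime = kx * lam * f / (2 * np.pi)                        # [m]

# Locate the first minimum of the sinc^2 pattern and compare with lambda*f/a
center = n // 2
profile = P[center, center:]
i0 = np.argmax((profile[1:-1] < profile[:-2]) & (profile[1:-1] < profile[2:])) + 1
print(f"first zero: expected {lam * f / a * 1e6:.0f} um, "
      f"measured {x_prime[center + i0] * 1e6:.0f} um")
```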

• For extended sources that are far away from the detection plane, as in Fig. 3b, the size of the source may have a significant effect on the Fourier transform in Eq. 16. This effect becomes obvious if we replace the source field U with its spatially truncated version, U_a, to indicate the finite size of the source,

U_a(x, y; \omega) = U(x, y; \omega)\, \Pi\!\left(\frac{x}{a}\right) \Pi\!\left(\frac{y}{a}\right)        (18)

• Π(x/a)Π(y/a) is the 2D rectangular function, a square of side a. The far field becomes

\widetilde{U}_a(k_x, k_y; \omega) = \widetilde{U}(k_x, k_y; \omega) \ast \left[ a^2\, \mathrm{sinc}\!\left(\frac{k_x a}{2}\right) \mathrm{sinc}\!\left(\frac{k_y a}{2}\right) \right]        (19)

• * denotes convolution and sinc is sin(x)/x.

• The field across the detection plane (x′, y′), Ũ_a, is smooth over scales given by the width of the sinc function.

• This smoothness indicates that the field is spatially correlated over this scale. Along x′, the correlation distance, Δx′, is obtained by writing explicitly the spatial frequency argument of the sinc function,

\mathrm{sinc}\!\left(\frac{k_x a}{2}\right) = \mathrm{sinc}\!\left(\frac{\pi a x'}{\lambda z}\right) \;\;\Rightarrow\;\; \Delta x' \simeq \frac{\lambda z}{a}        (20)

• We can conclude that the correlation area of the field generated by the source in the far zone is of the order of

A_c \simeq \left(\frac{\lambda z}{a}\right)^2 = \frac{\lambda^2}{\Omega}        (21)

• Ω ≃ a²/z² is the solid angle subtended by the source.

• This relationship allowed Michelson to measure interferometrically the angle subtended by stars.

• For example, the Sun subtends an angle of approximately 0.5°, i.e. Ω ≈ 6 × 10⁻⁵ sr. Thus, for the green radiation at the middle of the visible spectrum, λ ≈ 550 nm, the coherence area at the surface of the Earth is of the order of 5 × 10³ µm², i.e. a patch several tens of microns across. Measuring the area over which sunlight shows correlations (or generates fringes) provides information about the angular size of the source.
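
• The same order-of-magnitude numbers follow from Eq. 21 in a few lines (the 0.5° angular diameter and λ = 550 nm are standard reference values used here for illustration):

```python
# Order-of-magnitude check of the Sun example using Eq. 21, A_c ~ lambda^2 / Omega.
import numpy as np

theta = np.deg2rad(0.5)           # angular diameter of the Sun [rad]
Omega = np.pi * (theta / 2) ** 2  # solid angle subtended [sr]
lam = 550e-9                      # mean visible wavelength [m]

A_c = lam ** 2 / Omega            # coherence area at the ground, Eq. 21
print(f"Omega ~ {Omega:.1e} sr,  A_c ~ {A_c * 1e12:.0f} um^2,"
      f"  linear scale ~ {np.sqrt(A_c) * 1e6:.0f} um")
```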

• For angularly smaller sources, far field spatial coherence is correspondingly higher. This is the essence of the Van Cittert-Zernike theorem, which states that the field generated by spatially incoherent sources gains coherence upon propagation. This is the result of free-space propagation acting as a spatial low-pass filter [7].

• Zernike employed the spatial filtering concept to develop phase contrast microscopy [8, 9]. It had been known since Abbe that an image can be described as an interference phenomenon [10]. Image formation is the result of simultaneous interference processes that take place at each point in the image.

Figure 4. Spatial filtering via a 4f system.

• To render transparent specimens visible, Zernike employed spatial filtering (in a manner similar to Fig. 4) to extend the coherence area of the illuminating field beyond the field of view of the microscope. This type of illumination is called coherent.

• An average field over the entire image could be defined, because the phase relationship among different points was stable. The simultaneous interferences at all points that generate the image have a common phase reference, which is the phase associated with the mean field.

• Controlling the phase delay of the mean field adjusts the contrast of the entire image, like in typical interferometry experiments. Phase contrast microscopy is a major breakthrough in microscopy and an important precursor to QPI.

• Another major method for intrinsic contrast microscopy, Differential Interference Contrast (DIC, or Nomarski) [11, 12], which renders maps of phase gradients across a transparent sample, is categorized under “incoherent” methods. DIC does use incoherent illumination (no spatial filtering), yet the high contrast images are generated by interfering an image field with a replica of itself that is slightly shifted transversally. The numerical aperture of the microscope is finite, i.e. the imaging system itself performs spatial filtering. According to Eq. 21, the spatial coherence is of the order of

A_c \simeq \frac{\lambda^2}{\pi\, \mathrm{NA}^2},

• NA is the numerical aperture of the microscope.

• This equation states that the image field is fully correlated across a region of the order of the diffraction spot. In DIC, shifting the two replicas of the image field by less than a diffraction spot generates high contrast fringes.
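
• As an order-of-magnitude check, assume for illustration a dry objective with NA = 0.8 and green light, λ = 550 nm; with the small-angle estimate Ω ≈ π NA² in Eq. 21,

A_c \simeq \frac{\lambda^2}{\pi\,\mathrm{NA}^2} = \frac{(0.55\ \mu\mathrm{m})^2}{\pi \times 0.8^2} \approx 0.15\ \mu\mathrm{m}^2, \qquad \sqrt{A_c} \approx 0.4\ \mu\mathrm{m},

which is indeed comparable to the radius of the diffraction (Airy) spot, 0.61λ/NA ≈ 0.42 µm.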

3.2.3. Spatial filtering

• From the properties of Fourier transforms, we infer that higher spatial coherence at frequency ω, i.e. a broader W(ρ, ω), can be obtained with a narrower P(k⊥, ω).

• When dealing with extended sources, it is common practice in the laboratory to low-pass filter P(k⊥, ω) such that the coherence area extends over the desired field of view. This procedure, commonly encountered in QPI experiments, is called spatial filtering and is illustrated in Fig. 4.

• In Fig. 4, the extended source S emits light at a multitude of temporal frequencies ω and spatial frequencies k. At a given frequency ω, lens L1 performs the spatial Fourier transform. If an aperture is placed in this Fourier plane to block the high spatial frequencies, the field reconstructed by lens L2 at plane S′ (conjugate to S) approximates a plane wave propagating along the optical axis (k⊥ ≈ 0).

• With this procedure, we obtain a highly spatially coherent field from an extended source. The procedure is lossy, as the energy carried by the high spatial frequencies is lost. Asymptotically, closing down the aperture generates a field at plane S′ that approaches a plane wave.
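
• A minimal numerical sketch of this filtering operation (grid size and aperture radii are illustrative): a random-phase field is low-pass filtered at the Fourier plane, and its spatial correlation length grows as the aperture is closed down.

```python
# Spatial filtering as in Fig. 4: block high spatial frequencies of a
# delta-correlated ("speckle-like") field and watch the correlation length grow.
import numpy as np

rng = np.random.default_rng(3)
n = 512
U_src = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))    # random-phase source field

fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]

def filtered_field(f_cut):
    """Block spatial frequencies above f_cut at the Fourier plane (between L1 and L2)."""
    aperture = (fx**2 + fy**2) < f_cut**2
    return np.fft.ifft2(np.fft.fft2(U_src) * aperture)

def corr_length(U):
    """1/e half-width (in pixels) of the normalized spatial autocorrelation w(rho)."""
    w = np.fft.ifft2(np.abs(np.fft.fft2(U)) ** 2)   # autocorrelation via the power spectrum
    w = np.abs(w[0, :]) / np.abs(w[0, 0])           # profile along one axis
    return np.argmax(w < np.exp(-1))

for f_cut in (0.4, 0.1, 0.02, 0.005):
    print(f"aperture radius {f_cut:<6} -> correlation length ~ {corr_length(filtered_field(f_cut))} px")
```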

• Conversely, all sources exhibit spatial coherence at least at the scale of the wavelength. This is easily understood by noting that a δ-correlated source, W(ρ, ω) ∝ δ(ρ), requires an infinitely broad spatial power spectrum P(k⊥, ω), i.e. emission over an unbounded range of angles. This is impossible, because a planar source can only emit into a 2π solid angle.

• Thus the minimum coherence area for an arbitrary source is of the order of

A_c^{\min} \simeq \frac{\lambda^2}{2\pi}        (22)

• Spatial coherence of a field over a given plane describes how close the field is to a plane wave. Alternatively, spatial coherence describes how well the field can be focused to a point (this point corresponds to a delta-function in the frequency domain).

• Spatial coherence plays an important role in microscopy and is used to differentiate between two classes of methods: (spatially) coherent vs. incoherent. Quantitative phase imaging requires that the illumination field is spatially coherent, such that a phase shift can be properly defined over the entire field of view of interest.

3.3. Temporal correlations of plane waves

3.3.1. Temporal autocorrelation function

• We now investigate the temporal correlations of fields at a particular spatial frequency k (or a certain direction of propagation). Taking the spatial Fourier transform of Λ in Eq. 2, we obtain the temporal correlation function

\Gamma(\mathbf{k}, \tau) = \int \Lambda(\rho, \tau)\, e^{-i \mathbf{k} \cdot \rho}\, d^2 \rho        (23)

Figure 5. Michelson interferometry.

• The autocorrelation function Γ(k, τ) is relevant in interferometric experiments of the type illustrated in Fig. 5. In a Michelson interferometer, a plane wave from the source is split in two by the beam splitter and subsequently recombined after reflection off mirrors M1 and M2.

• The intensity at the detector has the form (we assume a 50/50 beam splitter)

I(\tau) = \left\langle \left| U(\mathbf{k}, t) + U(\mathbf{k}, t + \tau) \right|^2 \right\rangle_t = 2\, \Gamma(\mathbf{k}, 0) + 2\, \mathrm{Re}\, \Gamma(\mathbf{k}, \tau)        (24)

• The real part of Γ(k, τ) is obtained by varying the time delay τ between the two fields. This delay can be controlled by translating one of the mirrors. The complex degree of temporal correlation at spatial frequency k is defined as

\gamma(\mathbf{k}, \tau) = \frac{\Gamma(\mathbf{k}, \tau)}{\Gamma(\mathbf{k}, 0)}        (25)

• Γ(k, 0) represents the intensity of the field, i.e.

\Gamma(\mathbf{k}, 0) = I(\mathbf{k})        (26)

• The complex degree of temporal correlation has a property similar to that of its spatial counterpart w(ρ, ω), i.e.

\left| \gamma(\mathbf{k}, \tau) \right| \le \gamma(\mathbf{k}, 0) = 1        (27)

• The coherence time is defined as the maximum time delay between the fields for which |γ(k, τ)| maintains a significant value, say 1/2.

• If we cross-correlate temporally two plane waves of different wave vectors (directions of propagation), the result vanishes unless k₁ = k₂,

\Gamma(\mathbf{k}_1, \mathbf{k}_2; \tau) = \left\langle U^*(\mathbf{k}_1, t)\, U(\mathbf{k}_2, t + \tau) \right\rangle_t = \Gamma(\mathbf{k}_1, \tau)\, \delta(\mathbf{k}_1 - \mathbf{k}_2)        (28)

• At each moment t, the two plane waves generate fringes of spatial frequency k₂ − k₁ (period 2π/|k₂ − k₁|). If the detector (e.g. a CCD) averages the signal over scales larger than the fringe period, the temporal correlation information is lost.

• As the delay τ changes, the fringes “run” across the plane, such that the contrast averages to 0. For this reason, for example, the two beams in a typical Michelson interferometer are carefully aligned to be parallel.

3.3.2. Optical power spectrum

• The temporal correlation Γ(k, τ) is the Fourier transform of the power spectrum,

\Gamma(\mathbf{k}, \tau) = \int S(\mathbf{k}, \omega)\, e^{i \omega \tau}\, d\omega        (29)

• S(k, ω) can be determined via spectroscopic measurements, as exemplified in Fig. 6.

Figure 6. Spectroscopic measurement using a grating: G, grating; D, detector; θ, diffraction angle. The dashed line indicates the undiffracted (zeroth) order.

• By using a grating (a prism, or any other dispersive element), we can “disperse” different colors at different angles, such that a rotating detector can measure S(ω) directly.
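
• Once the spectrum has been measured, Eq. 29 gives Γ(τ), and hence the Michelson fringe visibility, by numerical Fourier transformation. The sketch below uses a synthetic sodium-like doublet as a stand-in for spectrometer data; all values are illustrative.

```python
# From a measured spectrum S(omega) to the temporal autocorrelation Gamma(tau)
# and the fringe visibility |gamma(tau)|, via Eq. 29.
import numpy as np

c = 3e8                                             # m/s
lam1, lam2, dlam = 589.0e-9, 589.6e-9, 0.05e-9      # line centers and width (illustrative)

wavelength = np.linspace(586e-9, 593e-9, 2000)
omega = 2 * np.pi * c / wavelength
S = (np.exp(-(wavelength - lam1) ** 2 / (2 * dlam ** 2)) +
     np.exp(-(wavelength - lam2) ** 2 / (2 * dlam ** 2)))

tau = np.linspace(0.0, 5e-12, 1000)                 # mirror-scan delay range [s]
Gamma = np.trapz(S[:, None] * np.exp(1j * omega[:, None] * tau[None, :]), omega, axis=0)
vis = np.abs(Gamma) / np.abs(Gamma[0])              # |gamma(tau)|, the fringe visibility

tau_half = tau[np.argmax(vis < 0.5)]
print(f"visibility first drops below 1/2 near tau ~ {tau_half * 1e12:.2f} ps "
      f"(for a doublet it then revives periodically)")
```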

• To estimate the coherence time for a broadband field, let us assume a Gaussian spectrum centered at frequency ω₀ and having the r.m.s. width Δω,

S(\omega) = S_0\, e^{-\frac{(\omega - \omega_0)^2}{2\, \Delta\omega^2}}        (30)

• S0 is a constant.

• The autocorrelation function is also a Gaussian, modulated by a sinusoidal function, as a result of the Fourier shift theorem

\Gamma(\tau) \propto e^{-\frac{\Delta\omega^2 \tau^2}{2}}\, e^{i \omega_0 \tau}        (31)

• If we define the width of |Γ(τ)| as the coherence time, we obtain

\tau_c = \frac{1}{\Delta\omega}        (32)

• and the coherence length

\ell_c = c\, \tau_c = \frac{c}{\Delta\omega}        (33)

• The coherence length depends on the spectral bandwidth in a fashion analogous to the dependence of the coherence area on solid angle (Eq. 21). This is not surprising, as both types of correlations are governed by their respective frequency bandwidths.

3.3.3. Spectral filtering

• Coherence length values vary broadly, from kilometers for a narrow-band laser to microns for LEDs and white light. Figure 7 shows qualitatively the relationship between the power spectrum S(ω) and the autocorrelation Γ(τ).
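
• These orders of magnitude follow directly from Eqs. 32–33. The bandwidths below are typical illustrative figures, not data from the text, and order-unity prefactors depend on how the spectral width is defined (r.m.s., FWHM, ...).

```python
# Quick numbers from Eqs. 32-33: tau_c = 1/Delta_omega and l_c = c/Delta_omega.
import numpy as np

c = 3e8  # m/s

# LED: center wavelength 630 nm, spectral width ~20 nm
lam0, dlam = 630e-9, 20e-9
domega = 2 * np.pi * c * dlam / lam0**2        # convert Delta_lambda to Delta_omega
print(f"LED:          tau_c ~ {1 / domega:.2e} s,  l_c ~ {c / domega * 1e6:.1f} um")

# Narrow-band laser: easier to start from the linewidth in Hz (here 1 kHz)
dnu = 1e3
domega = 2 * np.pi * dnu
print(f"narrow laser: tau_c ~ {1 / domega:.2e} s,  l_c ~ {c / domega / 1e3:.0f} km")
```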

Figure 7. a) Broad (markers) and narrow (solid line) power spectrum. b) Temporal autocorrelation functions associated with the power spectra in a.

• Of course, using narrow-band filters increases the coherence length of the field. The short coherence length of a broadband source is the starting point of low-coherence interferometry and optical coherence tomography [13], as discussed in Chapter 7.

• Phase can only be defined via correlation functions; there is no absolute origin for measuring a phase shift in time or space. To define a quantitative phase image, the phase shift itself across the image must be well defined.

• The illumination field must be spatially coherent over the field of view. This does not mean that the illumination has to be monochromatic, as long as, on average, each frequency component ω has a correlation area larger than the field of view.

• Phase contrast microscopy [8, 9] is a white light method where the phase shift is meaningful across the entire field of view [14]. QPI with white light illumination provides certain advantages over laser illumination.
