The collision–coalescence process of inertial particles in turbulence is held responsible for the rapid growth of cloud droplets from ∼15 to ∼50 µm in diameter, but it is not well understood. Turbulence has two effects on cloud droplets: (1) it brings them closer together, preferentially concentrating them in certain parts of the flow, and (2) it sporadically creates high accelerations, causing droplets to detach from the underlying flow. These turbulence–cloud droplet interactions are difficult to study numerically or in the laboratory because of the large range of scales involved in atmospheric turbulence, so in situ measurements are needed. Here, we present a Lagrangian particle tracking (LPT) experimental setup situated close to the summit of Mt. Zugspitze at an altitude of 2650 m above sea level, on top of the environmental research station Schneefernerhaus. Clouds naturally occur at this location about a quarter of the time. The LPT experiment probes a volume of ∼40 × 20 × 12 mm3, has a spatial resolution of 5 µm and a temporal resolution of 0.1 ms, and measures accelerations to within 0.1 m s−2. Furthermore, the experiment can slide over a set of rails, driven by a linear motor, at speeds of up to 7.5 m s−1 to compensate for the mean wind. This increases the average residence time of the particles in the measurement volume. The mean wind compensation allows us to study various dynamical quantities, such as the velocity autocorrelation or the dynamics of clustering. It also benefits particle tracking in general, since longer particle tracks allow better filtering and thus higher accuracy. We present the radial distribution function, which quantifies clustering, the longitudinal relative velocity distribution, and the Lagrangian velocity autocorrelation, all computed from cloud droplet trajectories.
I. INTRODUCTION
Rain formation in warm clouds proceeds in four phases: activation, condensation, collision–coalescence due to turbulence, and finally collision–coalescence due to differential settling.1,2 These processes are quite well understood, except for collision–coalescence due to turbulence. This phase is held responsible for quickly growing droplets from a radius of ∼15 to ∼50 µm, but exactly what role turbulence plays is not clear.1 It is believed that turbulence has two effects on inertial droplets: first, it brings them closer together, preferentially concentrating them in certain parts of the flow,3 but with velocities still similar to those of the flow. Second, it sporadically accelerates droplets to high velocities, causing them to detach from the underlying flow.4 Both effects individually increase the collision rate, but it has not been confirmed how these effects depend on droplet and flow properties or what their relative importance is.
Diverse methods are available to investigate turbulence–cloud interactions, including numerical simulations and laboratory and in situ (field) measurements. The two main computational approaches for studying cloud–turbulence interactions are Direct Numerical Simulation (DNS) of the Navier–Stokes equations and Large Eddy Simulation (LES). In LES, the small scales are parameterized, while the large scales are directly computed. As a result, LES can be used to simulate relatively high Reynolds numbers and can reproduce many turbulence features.5 However, the parameterization of the fine-scale characteristics of turbulence (e.g., mixing, fluctuations, and intermittency) by LES does not allow for a detailed and robust study of particle–fluid interactions. DNS, on the other hand, allows for the detailed simulation of the turbulence fine-scale, but the computational cost increases rapidly as the ratio of the largest scale (L) to the smallest scale (η) grows. As an example, for a cloud with ReL ∼ 10^8, L/η will be of the order of 10^6, since L/η ∼ ReL^(3/4). This indicates that a three-dimensional simulation of a cubic domain of size L requires ∼10^18 grid points. Solving the Navier–Stokes equations in such a domain would require many years of computation even with the best supercomputers available today.5
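A quick back-of-the-envelope sketch of this scaling, assuming only the standard inertial-range estimate L/η ∼ Re^(3/4):

```python
# Rough estimate of the DNS grid requirement for a cloud-scale flow,
# using the standard inertial-range scaling L/eta ~ Re**(3/4).
Re = 1e8
scale_separation = Re ** 0.75          # L/eta ~ 1e6
grid_points = scale_separation ** 3    # ~1e18 points for a cubic domain of size L
print(f"L/eta ~ {scale_separation:.0e}, grid points ~ {grid_points:.0e}")
```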
Similarly, it is difficult to conduct laboratory experiments at sufficiently high Reynolds numbers. To achieve a large range of scales, one has to either scale the experiments up, which is often impossible due to space and/or budget constraints, or decrease the size of the small scales. The latter, however, often implies the use of a larger dissipation rate, which causes the ratio of gravity to the characteristic turbulent acceleration, g/(ϵ^3/4 ν^−1/4), to decrease, which, in turn, affects the particles’ settling behavior.6 Furthermore, as the small scales get smaller, they become more difficult to probe. The highest-Reynolds-number experiments with resolved small scales that we know of are performed in the Max Planck Variable Density Turbulence Tunnel at the Max Planck Institute for Dynamics and Self-Organization in Göttingen,7 where Taylor Reynolds numbers up to 6000 are reached. In these experiments, the Kolmogorov scale η can be as small as 9 µm, which is up to five times smaller than the hot-wires used, so special care is taken to ensure the validity of the results.8 A particle tracking setup has recently been deployed there, but the Kolmogorov scale is smaller than a pixel and therefore remains difficult to probe. All in all, scaling to cloud-physics situations is a challenge, so in situ measurements are critical tools in this field.
Field observations using airborne setups installed on research aircraft and aerostats have been widely used in previous investigations. A key advantage of airborne units (e.g., Refs. 9–17) is the possibility of picking cloud type (e.g., marine vs continental) and region of interest (e.g., core vs boundary). However, airborne measurements are constrained by spatial resolution (sampling time × airspeed) when using aircraft and by scientific payload when using aerostats. Even though ground-based measurements do not suffer from these constraints, to our knowledge, a time-resolved Lagrangian cloud droplet tracking experiment has not been attempted yet.
Here, we present a Lagrangian particle tracking experimental setup situated close to the summit of Mt. Zugspitze at an altitude of 2650 m above the sea level on top of the environmental research station Schneefernerhaus. The turbulence statistics at this location have been shown to be representative of cloud turbulence, and moreover, clouds naturally occur there about a quarter of the time.18 The experimental setup is motorized and mounted on rails, which allows us to compensate for the mean wind. We show that the experimental setup is suitable for measuring various cloud droplet distribution statistics, such as the radial distribution function (RDF) and the fractal dimension.
II. EXPERIMENT OVERVIEW
In this section, we briefly discuss each of the main features of the experiment, starting with its location (Sec. II A). We then introduce its various components: the rail system that allows us to compensate for the mean wind (called “seesaw” for its ability to tilt), the camera box that is the heart of the particle tracking system, and the vibration dampening system (Secs. II B–II D). Finally, in Sec. II E, the control and data acquisition are discussed. A more elaborate treatment of the particle tracking system can be found in Sec. III. The mean wind compensation is treated in Sec. IV. Mechanical drawings and computer aided design (CAD) data of the setup are published separately.19
A. Location
The experimental setup is situated on top of the environmental research station “UFS Schneefernerhaus” at a height of 2650 m in the German Alps. Figure 1 shows the topography of its immediate surroundings and its location on top of the Schneefernerhaus. The Schneefernerhaus is located on a southern slope immediately south-west of Mt. Zugspitze. It looks over a relatively flat area that is bordered to the north, south, and west by a mountain ridge. Directly to the west of the Schneefernerhaus, the mountain ridge has a low spot due to wind erosion, which we shall call the wind hole. To the east, the landscape slopes down into a valley. The wind hole and the valley determine the typical wind direction at the Schneefernerhaus: either easterly or westerly, but always blowing in between the wind hole and the valley. The experimental setup is installed on top of the tower of the Schneefernerhaus and is aligned with the typical wind direction.
Topography near the Schneefernerhaus. The Schneefernerhaus is located in the German Alps, immediately south-west of Mt. Zugspitze, and looks over a large, flat area, which is bordered to the north, south, and west by a mountain ridge. Immediately to the west of the Schneefernerhaus, the mountain ridge has a low spot, called the wind hole. To the east, the landscape descends into a valley. The experimental setup is situated on top of the tower of the Schneefernerhaus and is aligned with the east-west direction. Also shown are the two LIDAR domes and the stairwell. The top two images are reprinted from www.google.com/maps, Schneefernerhaus photo credit: UFS GmbH.
Risius et al.18 investigated the suitability of this location for cloud–turbulence research. Based on data of the German Weather Service (DWD) from 2000 to 2012, they found that the wind is indeed predominantly easterly or westerly. They furthermore found that clouds are likely to be present more than 25% of the time in summer (April–September), with a peak of 30% in July. They also measured turbulence statistics using five sonic anemometers. Typical dissipation rates obtained were 1–100 cm2 s−3, the Taylor Reynolds number was ∼3000, and in terms of isotropy, the flow was found to be similar to laboratory flows. Siebert et al.20 performed further measurements to characterize the cloud–turbulence interaction at the Schneefernerhaus. They again found that the small-scale turbulence is close to homogeneous and isotropic and that the cloud–turbulence interactions are comparable to those in free-atmospheric clouds under continental conditions. They concluded that, together with the results of Risius et al.,18 the Schneefernerhaus is a suitable location for Lagrangian measurements of cloud droplets in turbulence.
B. Seesaw
The seesaw allows us to compensate for the mean wind. It supports the rails on which the table slides; see Fig. 2. It has been designed for velocities of up to 7.5 m s−1 and accelerations of up to 28 m s−2. It also withstands emergency stops, which occur, for instance, if the linear motor fails and the table runs into the endstops; in such events, the deceleration can reach up to 230 m s−2.
Top: A computer render of the experimental setup. The seesaw supports a pair of 6.5 m long rails, along which slides an aluminum table that supports the camera box. The seesaw, rails, and table together form a linear motor that is used to make the Lagrangian particle tracking (LPT) setup move with the mean wind. In case of failure of the motor or motor controller, shock absorbers at either end of the rails prevent damage to the apparatus. The seesaw can tilt by ±15° and can be locked in place using two pairs of stiff, lockable, and telescoping cylinders. Bottom left: A photo of the experimental setup. When this photo was taken, we were not using the seesaw. To protect the seesaw from unnecessary exposure to the elements, we covered most of it. Bottom right: Transport of the seesaw to the Schneefernerhaus by a helicopter. Image credit: bottom right: UFS GmbH.
The highest frame rate we typically record with our cameras is 10 kHz; at this frame rate, we can record up to 1.6 s of data before exhausting the camera buffers. During this time, the table should be moving at a constant speed. We normally limit the acceleration and deceleration to 10 m s−2 to prevent the box from shaking too much. Given that the maximum travel is 5.3 m, this gives a maximum constant velocity of 2.8 m s−1, which is in our experience sufficiently high most of the time.
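This limit follows from a symmetric trapezoidal velocity profile; a minimal sketch of the calculation, using the numbers quoted above:

```python
import math

def max_constant_speed(stroke_m, accel_m_s2, record_s):
    """Largest constant speed v such that a symmetric trapezoidal profile
    (accelerate at a, hold v for the recording time, decelerate at a)
    fits within the available travel: v**2/a + v*t <= stroke."""
    a, t, s = accel_m_s2, record_s, stroke_m
    # Solve v**2/a + v*t - s = 0 for the positive root.
    return (-t + math.sqrt(t ** 2 + 4 * s / a)) * a / 2

# Values quoted in the text: 5.3 m travel, 10 m/s^2 limit, 1.6 s of recording.
print(f"{max_constant_speed(5.3, 10.0, 1.6):.1f} m/s")  # -> 2.8 m/s
```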
Risius21 found that the inclination of the mean wind varies between −15° and 15°. The seesaw is built to account for this: using a hand crank, it can pivot around an axis close to the center of the rail. To give the structure sufficient stiffness, two lockable telescoping cylinders have been added on either side of the axis.
The seesaw’s inclination can be adjusted so that, on average, the wind moves parallel to it and we measure no vertical wind component. This process takes several minutes and is not meant to be performed in real time. Furthermore, if the inclination is changed, the vibration damping (see Sec. II D) needs to be adjusted as well. However, the seesaw’s speed can be adjusted quickly, in less than a second, and as will be shown later, adjusting the speed alone already provides a significant improvement in particle trajectory lengths. A discussion of the need for the seesaw, the effects it has on our measurements, and the challenges it poses is given in Sec. IV.
The seesaw and its supporting structure weigh ∼4400 kg. The table and camera box may weigh up to 300 kg. If, due to motor or motor controller failure, they run into the shock absorbers, a force of up to 140 kN could be exerted. To account for these forces, mounting points cast in concrete have been installed on the roof of the Schneefernerhaus.
A sliding table is installed on the seesaw and moves on a pair of rails. The sliding table is driven by a linear motor from the Bosch Rexroth IndraDyn L series, a series of industrial three-phase synchronous motors. A linear motor is particularly suitable for this purpose for two reasons: first, a linear motor can provide larger velocities and accelerations than a rotary motor and ball screw can, and second, it can be made arbitrarily long, whereas with a rotary motor, we would need a 5.3 m long ball screw. For the length that we need, no pre-fabricated linear motor exists, so we are using a kit motor. This means that the stator and the rotor are delivered separately and, when installed in the setup, form a complete motor. In our case, the stator consists of plates of permanent magnets that are mounted on the seesaw in between the rails. The rotor consists of two sets of electromagnets that are mounted on the bottom side of the table.
The actual position of the table is measured using the Bosch Rexroth integrated measuring system, which integrates a magnetic grating with a period of 40 µm in the side of one of the rails and an inductive sensor head in one of the carriages. Initially, a more precise optical grating was used to measure the position; however, under the environmental conditions most suitable for our experiments, it was found to be extremely unreliable. Cloud droplets would stick to the grating and collect into a larger drop as the sensor head traveled over it, disrupting the functionality of the optical sensor head. The magnetic system, on the other hand, is completely insensitive to the presence of water on the grating.
The motor is controlled by using a Bosch Rexroth IndraDrive C series controller, which is installed in a cabinet in the laboratory, two floors below the experiment. At its heart, it has a set of cascaded proportional‐integral‐derivative (PID) controllers that control the motor position by setting the motor current, taking maximum velocity, acceleration, and jerk (the third derivative of the position with respect to time) into account. The IndraDrive controller allows for automated real-time control, either by loading a motion control program into it or through a real-time bus such as CANopen, but at present, we control it by manually setting the operating parameters using software provided by Bosch Rexroth.
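For illustration only, a minimal sketch of a cascaded position/velocity loop with an acceleration limit; this is a generic textbook structure, not the IndraDrive firmware, and all gains and values below are made up:

```python
# Generic illustration of a cascaded controller with an acceleration limit.
# In a real drive, the inner loop would output a current setpoint; here the
# commanded acceleration is simply integrated.
def cascaded_step(x, v, x_set, dt, kp_pos=4.0, kp_vel=6.0, a_max=10.0):
    v_set = kp_pos * (x_set - x)              # outer loop: position error -> velocity setpoint
    a_cmd = kp_vel * (v_set - v)              # inner loop: velocity error -> acceleration command
    a_cmd = max(-a_max, min(a_max, a_cmd))    # respect the configured acceleration limit
    v += a_cmd * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(5000):                          # 5 s at 1 ms steps, moving to a 2.0 m setpoint
    x, v = cascaded_step(x, v, 2.0, 1e-3)
print(f"{x:.3f}")                              # settles at the setpoint, 2.000
```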
C. Camera box
The heart of our apparatus consists of a custom-built vibration-damped aluminum box housing the high-speed cameras used for Lagrangian particle tracking (see Fig. 3). The box has been designed in such a way as to minimize its total weight and the cross-sectional area exposed to the wind while being extremely rigid and able to fit three Vision Research v2511 cameras with their corresponding optics. Further streamlining of the box would have added a lot of complexity in terms of fabrication, usability, and serviceability, but if the current shape of the box proves to be problematic, lightweight streamlining elements made out of, e.g., styrofoam can be added. The box’s main components are 24 aluminum parts, providing the rigid skeleton, three window sub-assemblies through which the cameras observe the measurement volume, and six transparent polycarbonate plates that provide visual and manual access to the box in the case of malfunction. Its external dimensions are 930 × 720 × 360 mm3, and internally, the box is subdivided into three upper sections containing one camera each and three lower sections, which contain the camera power supplies, Ethernet and trigger cables, cooling hoses, an Arduino control unit, and temperature, humidity, and acceleration sensors.
Rendering of the camera box and laser beam expander. The top compartment of the box houses the three cameras, optics, mirrors, and window sub-assemblies; it is completely made of aluminum to provide stiffness. The bottom compartment houses power supplies and control electronics; it has acrylic glass windows that provide visual access and can easily be removed for manual access.
The maximum weight of the camera box is, in principle, limited by the maximum payload that can be moved by the seesaw motor. However, more restrictive practical limits were imposed by the need to carry the box up to the roof manually through a narrow staircase, which allowed only two people to carry the box at a time. For this reason, the weight of the box without cameras or the beam expander (see below) was kept below 60 kg. Even so, should the movable table crash into the emergency shock absorbers, very large forces would be generated both at the braces keeping the box above the table and at the pillars fixing the seesaw to the roof. This led to the current orientation of the box with its longest side parallel to the direction of motion. We also considered having the box stand upright with its longest side vertical; however, in such an orientation, the torque applied on the table during a crash into the emergency shock absorbers would put too large a strain on the system.
The disadvantage of the current box orientation is that the optical access windows, which separate the inside and outside of the camera box, are more exposed to the effects of precipitation. We have built a two-tier mechanism designed to lower the risk of a water droplet residing in the optical path and thus deteriorating the quality of the images obtained. Each window is mounted at the lower end of a short tube (see Fig. 4) with an inner diameter of 24 mm, just large enough not to interfere with any of the light rays between the camera apertures and the measurement volume. The tubes are oriented along the line of sight of the cameras, which is at 30° with respect to the vertical. Their upper ends are cut vertically to decrease the chance of a raindrop entering the tube and are capped by movable lids. The lids open only for the duration of each video capture and close during the much longer duration of the data transfer. Moreover, a powerful stream of air is injected at the higher end of each window to remove any droplets that might still make their way through the tube. On the lower end of each window, the air and any liquid that collects there are sucked out and removed from the camera box. To aid the water removal process, the windows are coated with a hydrophobic coating.
Side view of the camera objective held in place by the two holders, the adjustable mirror assembly, and the window tube. The top of the window tube is closed off with a motorized movable lid. Inset: a section of the bottom of the window tube. The window is located at the bottom of the tube. Small jets inject a powerful stream of air at the higher end of each window. On the lower end of each window, the air and any liquid that collects there is sucked out and removed from the camera box.
The box is supplied with a steady small stream of dry air to prevent water condensation on the underside of the windows. Currently, the box is not temperature controlled. On a typical cloudy day, when data can be taken, the outside temperature is between 0 and 5 °C and the temperature within the box equilibrates to 30–45 °C after a few hours of operation, depending on the mean wind speed. These temperatures are within the operation range of the cameras, which is from −10 to 50 °C, as is the typical minimum inside temperature at dawn, so no cooling or heating is required for operation. It is possible that on a warm day with low wind speed, the inside temperature could get too close to the upper operational limit, so there are cooling channels running through the camera base plates, which can help bring the inside temperature down; however, the use of this cooling system has not been necessary so far. The temperature fluctuations do affect the camera calibration, but the changes are small enough to be removed by self-calibration without any adverse effects on the data quality (see Sec. III E).
If desired, additional scientific instruments can be mounted immediately next to or downstream of the measurement volume. Small instruments, such as fast temperature and humidity probes, can be mounted in between the black vertical supports shown in Fig. 3. Larger instruments can be mounted on the seesaw’s table; however, they will not benefit from the box’s vibration damping, which is described next. It must be noted that if additional instruments are mounted directly downstream of the measurement volume, they must be removed and remounted every time the wind direction reverses; otherwise, they disturb the flow into the measurement volume.
D. Vibration damping
Moving the LPT setup over rails, as we do here, causes vibrations that can be detrimental to particle tracking. To prevent these vibrations from reaching the camera box, the box is suspended on four extension springs Z-195I made by Gutekunst Federn with spring constant 6.07 N/mm and unstressed length 149 mm. The height of the spring attachments to the box can be adjusted to be at the height of the box center of mass, which prevents the box from tilting during acceleration and deceleration. To limit the amplitude of the swing of the camera box during the acceleration and deceleration phase, the box is further constrained by six pairs of rubber buffers (see Fig. 5). The buffers’ height can be adjusted so that they are level with the box’s center of mass.
The camera box’s suspension and buffers. Left: the buffers are adjusted, so when accelerating or decelerating, they push the box at the height of its center of mass. Middle: overview of the springs and buffers; this viewing angle is comparable to the one in Fig. 2. Right: the camera box is suspended with extension springs. The springs are adjusted so that the camera box is level and the spring attachment points are level with the box’s center of mass.
We have measured the vibrations of the moving table and the camera box with the help of 3D accelerometers mounted in the middle of the top surface of the camera box and on top of the moving table (see Fig. 6). We measured the vibrations for translating speeds ranging from 0 to 100 m min−1 in increments of 4 m min−1. The vibrations of the moving table generally increase with increasing translation speed, but resonances occur at 0.46 and 0.92 m s−1, which lead to local maxima of vibration magnitude. The dependence of the camera box vibration level on translation speed is more complex. For speeds below 0.6 m s−1, the frequency of the table vibrations is low enough that their x component is transferred without significant dampening through the buffers onto the camera box. At a translation speed of around 0.65 m s−1, we hit the resonant frequency of the buffer–camera-box system, and beyond that, the buffers dampen the vibrations sufficiently so that the camera box vibrations stay at or below 0.1 m s−2. The y- and z-components of camera box vibrations stay below that threshold everywhere except in the narrow resonance region. These values are small compared to the typical rms accelerations in atmospheric turbulence, which are on the order of several m s−2. Since our foreseeable analysis only involves looking at relative quantities (spatial distribution, relative velocities, and collisions) for which a small apparent motion is irrelevant, the quality of damping achieved is more than satisfactory. Improvement on the vibration damping using the current passive design does not seem possible, but we are exploring a partially active approach where the box would be clamped during the acceleration phase and then released to be held only by the springs during the constant velocity phase. The results of that approach are left for future work.
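For reference, the per-axis rms values shown in Fig. 6 can be computed from the accelerometer records with a few lines of code; a minimal sketch, assuming the record is a plain (N, 3) array and that the mean (including gravity) is removed first:

```python
import numpy as np

def rms_vibration(accel_xyz, remove_mean=True):
    """Root-mean-square vibration per axis from a 3D accelerometer record of
    shape (N, 3). The mean (gravity and any constant offset) is removed so
    that only the fluctuating part is reported."""
    a = np.asarray(accel_xyz, dtype=float)
    if remove_mean:
        a = a - a.mean(axis=0)
    return np.sqrt((a ** 2).mean(axis=0))

# Hypothetical record: 1 s at 1 kHz with a small oscillation on x only.
t = np.arange(0, 1, 1e-3)
rec = np.column_stack([0.1 * np.sin(2 * np.pi * 25 * t),
                       np.zeros_like(t),
                       9.81 + np.zeros_like(t)])
print(rms_vibration(rec))  # ~ [0.07, 0.0, 0.0] m/s^2
```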
Dependence of the typical (rms) vibrations of the camera box (dark markers) and the moving table (light markers) on the translating speed of the moving table. The rms vibrations in x-(⊳), y-(◦), and z-(△) direction are shown.
E. Control and data acquisition
The experiment is controlled by a small computer cluster that is installed in a laboratory two floors below the apparatus. It consists of a main node, six compute nodes, and two storage nodes, each of which is connected to 10 Gbit Ethernet switches and contains 35 TB of storage. The cameras are also connected to the 10 Gbit network, through which they are controlled and read out. The main node runs a Python program that controls and triggers the cameras, controls the box’s window shutters, monitors the box’s environmental parameters, and controls the laser. After an experiment, the compute nodes download videos from the cameras to their internal hard drives. This takes ∼40 s, which is the limiting factor for the experiment’s repetition rate. The videos are then copied to both storage nodes. After a measurement campaign, the disks from the second storage node are taken out and transported to Göttingen, where data analysis takes place.
III. PARTICLE TRACKING
A. Optics
The ideal particles for particle tracking are so small that the multiple glare points of a single particle overlap on the camera sensors. The distance between the glare points is proportional to magnification × particle size, and since we cannot control the size of the cloud droplets, we should use an optical system with a small magnification. However, we are interested in seeing droplet pairs at distances where they nearly touch each other, which requires a large magnification. We settled on a magnification of 28 µm pixel−1, which means that if diffraction effects are ignored, a typical cloud droplet is of the order of 1 pixel on the sensor. We have calculated the Airy disk to be ∼3.2 pixel in diameter at the laser’s wavelength, so our particle tracking data do not suffer from peak-locking. To provide the desired level of magnification, each camera is equipped with two 2× teleconverters and a Nikon 200 mm objective. To prevent sagging and any lateral motion of the objectives during box movement, each objective is supported on one end by a finely adjustable clamp ring and on the other end by three plastic adjustment screws. The cameras are also firmly bolted to the base plates underneath them.
To adjust the viewing angle of the cameras, each upper section also contains an 85 × 60 mm2 mirror housed within a custom-built mirror holder (see Fig. 4). The holder can be rotated along a vertical axis, and the mirror tilt can be finely adjusted using a fine-threaded adjustment screw. Once the desired rotation and tilt of the mirror are reached, this orientation can be kept in place by tightening a set of fixing bolts. The disadvantage of this holder design is that all adjustments need to be performed manually before the box is closed off.
The resulting measurement volume is diamond-shaped and measures about 16.6 cm3; see Fig. 7. If one allows particles to be triangulated by using fewer than three cameras, the usable volume is much larger; however, in a large portion of it, the particles are so much out of focus that they are hard to detect and their positions are obtained with large uncertainty.
Measurement volume of the particle tracking setup, defined as the set of points visible by all three cameras, as seen from the top, the front (west), and the side (north). The top right view is from a general direction. The colors have no physical meaning; they are purely a visual aid.
B. Illumination
The requirement of a typical particle residence time on the order of a Kolmogorov time, whose most likely value is 30 ms, together with the typical fluctuating velocity of roughly 2 m s−1, led to the choice of a rather large measurement volume, with a largest horizontal diameter of around 35 mm.
Achieving a depth of field comparable with the dimensions of the measurement volume, while simultaneously maintaining acceptable levels of the signal-to-noise ratio even at the lower end of the droplet size range of interest across the measurement volume, requires a very powerful source of light. We use a Trumpf TruMicro 7240 laser with a wavelength of 515 nm, a maximum pulse energy of 7.5 mJ, and a pulse length of 300 ns. Although at higher repetition rates the laser can achieve a light power output of 300 W, at the lower repetition rates, it is limited by the maximum pulse energy, and thus for our purposes (i.e., a sampling rate of 10 kHz), the light power output is effectively 75 W.
The beam coming out of the laser fiber first passes through a diverging lens in order to reach the desired beam diameter within the small amount of space available in the beam expander. It then passes through a converging lens, which makes it nearly parallel with a diameter of ∼39 mm. This diameter fits the measurement volume well, i.e., only little light is wasted, illuminating droplets outside the measurement volume. The lenses were chosen such that their spherical aberrations expand the beam’s center, while compressing its edges, which helps us to illuminate the measurement volume more uniformly. The beam finally passes through an optical diffuser, the function of which is to smooth the dependence of the intensity of scattered light on the scattering angle and thus simplify the process by which the droplet size is deduced. As the beam leaves the beam expander, it is clipped from both sides, so it becomes narrower in the x-direction. By doing so, it fits the shape of the measurement volume better; in particular, the volume that is illuminated but not seen by all cameras is reduced. The beam’s profile at the measurement volume is shown in Fig. 8. There, the relative light intensity within the laser beam is plotted as a function of the position within its cross section. The intensity data were obtained from the tracking data by comparing instantaneous particle intensity with its mean value over its whole trajectory. Thus, data are available only for positions within view of at least two cameras, where triangulation is possible. The profile is nearly flat, with a slight asymmetry most likely due to a slight offset of the laser head from the optical axis of the other optics and a smooth decrease in intensity at the edges due to the diffuser.
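A minimal sketch of how such a profile can be extracted from tracking data, assuming flat arrays of per-image cross-section positions, intensities, and trajectory labels (all names below are hypothetical):

```python
import numpy as np

def relative_intensity_map(x, y, intensity, track_id, bins=40):
    """Beam profile estimate from tracking data: each sample's intensity is
    normalized by the mean intensity of its own trajectory and then averaged
    in bins over the beam cross section (x, y)."""
    intensity = np.asarray(intensity, dtype=float)
    track_id = np.asarray(track_id)
    # Per-track mean intensity, broadcast back to every sample of that track.
    means = {tid: intensity[track_id == tid].mean() for tid in np.unique(track_id)}
    rel = intensity / np.array([means[tid] for tid in track_id])
    # Bin-averaged relative intensity over the cross section.
    s, xe, ye = np.histogram2d(x, y, bins=bins, weights=rel)
    n, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    with np.errstate(invalid="ignore"):
        return s / n, xe, ye

# Tiny synthetic usage: two tracks with different brightness but a flat profile.
x = np.array([0.0, 1.0, 0.0, 1.0]); y = np.array([0.0, 0.0, 1.0, 1.0])
I = np.array([100.0, 100.0, 300.0, 300.0]); tid = np.array([0, 0, 1, 1])
profile, _, _ = relative_intensity_map(x, y, I, tid, bins=2)
print(profile)  # all occupied bins are 1.0, i.e., a flat relative profile
```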
Dependence of the laser beam intensity on the location within its cross section. The maximum extent of the measurement volume is indicated by the black dashed line. The intensity is normalized such that the average in the measurement volume is 1.
C. Camera positions and viewing angles
The amount of light is still a major limiting factor, and we had to adjust the geometry of the camera orientations with respect to the laser beam in order to obtain the most favorable image quality while keeping the camera aperture diameter constant. The decisive parameter that sets the limits on the maximum tractable seeding density and the uncertainty of the triangulated particles’ three-dimensional positions is the mean variance of the positioning error of particle images on the sensor, σ_px². For a given image, σ_px² ∝ σ_N²/I², where σ_N² is the total variance of camera thermal and shot noise and I is the total intensity of the given particle image on the sensor. In our case, the dominant source of noise is usually the shot noise, so σ_N² ∝ I. Thus, for fixed aperture size and camera orientation, σ_px² ∝ 1/I.
The image intensity is sensitively dependent on the scattering angle, which is the angle between the laser beam direction and the camera viewing axis. Although the dependence is non-monotonic for water droplets due to constructive and destructive interference between the refracted and reflected beams that reach the aperture and due to our use of a diffuser and a finite aperture size, these variations are mostly smoothed out. The dependence of the scattered light intensity on the scattering angle θ can then roughly be modeled as exponential: I(θ) ∝ exp(−c_θ θ), with c_θ ≈ 3.55 rad−1, so that the intensity roughly halves with an increase in θ by 11°. The rate constant c_θ was obtained from a fit to Mie scattering curves.
While making the scattering angle as small as possible will generally keep the particle image intensities large, it is not optimal for minimizing the triangulation error due to the resulting geometry of the camera orientations: when all camera view axes are at a small angle with respect to the beam, they are also necessarily at a small angle to each other, and a small change in the particle image position on a given sensor can lead to a large shift in the component of the triangulated 3D position along the laser beam direction. This geometric effect can be modeled by assuming that the cameras are placed symmetrically with respect to the laser beam, at equal distances from the measurement volume, so that the angle between the view axes of any camera pair is the same and that each camera view axis is at an angle of θ to the laser beam. It can be shown that the components of the three-dimensional particle position error variance are then σ_x² = σ_y² ∝ σ_px²/(1 − sin²θ/2) and σ_z² ∝ σ_px²/sin²θ. Here, the z-direction points along the laser beam. Thus, while small θ decreases the variance of the position error components perpendicular to the laser beam, in the direction along the beam, small θ leads to a catastrophic amplification of the error.
Combining the effects of the camera orientation geometry and the scattered light intensity, we obtain σ_x² = σ_y² ∝ exp(c_θ θ)/(1 − sin²θ/2) and σ_z² ∝ exp(c_θ θ)/sin²θ. Optimizing the total variance σ_x² + σ_y² + σ_z² leads to an optimal angle of θ = 22.6°.
In our case, we are particularly interested in high accuracy in the vertical direction in order to accurately measure settling effects and the vertically conditioned radial distribution function. As our laser beam points nearly vertically, we can optimize just for σ_z², which leads to an optimal angle of θ = 29.4°. We have designed the geometry of the camera box for intended camera view axes at 30° to the vertical. In reality, due to a slightly asymmetric placement of the measurement volume above the camera box, the actual view axes are at (30.4 ± 0.4)° to the vertical.
It should be noted that the considerations of position error variance apply only to the particles with image intensity low enough to not saturate the sensor pixels. In our case, the smallest drop that can possibly lead to saturation (gray scale level of near 4096) when perfectly in focus has a diameter of ∼23 µm. This number was obtained using a droplet sizing method described in Sec. III F, combined with a model for a droplet’s point spread function. For larger droplets, the optimum angle would be closer to the one obtained by geometric considerations only, which is 54.7° (cameras view axes perpendicular to each other).
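The three quoted angles can be reproduced numerically from the variance expressions above; the following minimal sketch assumes the symmetric three-camera geometry and the exponential intensity model with c_θ = 3.55 rad−1 (the proportionality constants drop out of the minimization):

```python
import numpy as np

c_theta = 3.55  # rad^-1; scattered intensity modeled as I ∝ exp(-c_theta * theta)
theta = np.radians(np.linspace(5, 85, 8001))

# Position-error variances for three cameras placed symmetrically about the
# (vertical) beam, each at angle theta to it; sigma_px^2 ∝ 1/I ∝ exp(c_theta*theta).
var_px = np.exp(c_theta * theta)
var_xy = var_px / (1 - np.sin(theta) ** 2 / 2)   # each horizontal component (x and y)
var_z = var_px / np.sin(theta) ** 2              # component along the beam

print(np.degrees(theta[np.argmin(2 * var_xy + var_z)]))             # total variance  -> ~22.6
print(np.degrees(theta[np.argmin(var_z)]))                          # vertical only   -> ~29.4
print(np.degrees(theta[np.argmin((2 * var_xy + var_z) / var_px)]))  # geometry only   -> ~54.7
```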
D. Particle tracking algorithm
Our particle tracking algorithm was inspired by the Shake-the-Box (STB) algorithm developed by Schanz, Gesemann, and Schröder22 and heavily adapted to our specific needs and challenges. A comprehensive description of the algorithm is beyond the scope of this work and will be presented in a separate publication. Here, we only briefly outline its main features and differences to the standard version of STB.
The most salient features of STB that we use are the progressive subtraction of already tracked objects and iterative improvements of their fitted parameters. These are required for successful tracking of droplets in our setup for reasons explained below. Due to the large measurement volume chosen for the aforementioned reasons, we can have up to 10^4 tracked particles in the measurement volume at a single time. In Fig. 9, we plot the mean number of tracked triangulated particles per frame and the Sauter mean droplet diameter d32 for each 3.3 s sample taken over seven days of measurement in July and August 2019. The Sauter mean diameter is defined as d32 = Σi di³ / Σi di², where the sums are taken over all droplet trajectories recorded. This particular choice of mean was made as it correlates best with the particle seeding density. Although 10^4 particles, which translates to 0.01 particles per pixel (a standard measure of tracking feasibility), is not very high by modern standards, the actual number of tracked images on each sensor is up to 2.5 times as high due to the shallow viewing angle with respect to the laser beam. Moreover, due to the short depth of field compared to the size of our measurement volume, an inevitable consequence of working with the given amount of illumination, many of the images are badly out of focus. Thus, the total area of all the tracked images may be several times larger than the total sensor area. Since the images heavily overlap and do not necessarily achieve their peak intensity at their center like well-focused images do, finding and fitting the images is a difficult and computationally expensive task. Without having a good model for the point spread function, subtracting the intensity profiles of already tracked particles, and performing several iterations of “shaking” (fitting the image parameters), we could not hope to achieve the high yield necessary to study droplet clustering.
A scatter plot of the Sauter mean droplet diameter d32 and mean number of tracked droplets per frame for each experimental sample taken over seven days of measurement in 2019. Different colors and markers differentiate different days.
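For reference, a minimal sketch of the Sauter mean computation as defined above (the example diameters are arbitrary):

```python
import numpy as np

def sauter_mean_diameter(d):
    """d32 = sum(d_i**3) / sum(d_i**2), summed over all recorded droplet trajectories."""
    d = np.asarray(d, dtype=float)
    return (d ** 3).sum() / (d ** 2).sum()

print(sauter_mean_diameter([10.0, 10.0, 30.0]))  # weighted toward the larger droplets: ~26.4
```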
Our tracking algorithm consists of the following major steps, performed in the stated order at each time step:
1. Each video frame is pre-processed in order to correspond as closely as possible to the light intensity that an ideal sensor would read out. This step, which is crucial for our particle sizing method, consists of subtracting the background intensity, correcting hardware artifacts such as image lag (where intensity values on the current frame are affected by their values on the previous frame), and correcting for smudges or scratches on the sensor and optical elements.

2. Existing 2D and 3D trajectories are extrapolated to the current time, and their expected intensity profiles are created on each sensor and optimized using the Levenberg–Marquardt algorithm with a fixed damping factor. The goal of the optimization is to minimize the difference between the fitted and actual intensity profiles on each sensor. Unlike in standard STB, where each object is optimized separately while the others are kept fixed, we optimize all particle images on each single sensor simultaneously. This is only possible to do in reasonable time when the properties of the particle images are independent across the sensors, another major difference to standard STB, where the image locations are projections of a common 3D position. An additional reason not to tie the image locations to the particle position in world coordinates is the detrimental effect of thermal gradients on the accuracy of the projection map, as explained further below.

3. The optimized particle images obtained in the previous step are subtracted, and each sensor is searched for new images. This step proceeds in several rounds, where in each round, we search for images within a certain narrow interval of defocus, starting with the well-focused ones. In each round, we first filter the frame with a Gaussian filter whose standard deviation best matches the images within this range of defocus. Local maxima of the filtered intensity are used as the initial guesses for new image locations (a minimal sketch of this candidate search is given after this list). The image properties are optimized in the same manner as in the previous step.

4. Both the extrapolated and the new images are combined into one set and again optimized several times in the same manner as above.

5. Using the images available at this time, we obtain candidate 3D positions of the particles using triangulation. Our triangulation takes into account not only the image locations but also their level of defocus and their intensity. This enables us to reliably triangulate using just two cameras and to somewhat compensate for the lower position accuracy of the defocused images. Each triangulated point is assigned a likelihood of being “real,” i.e., of being triangulated from images that indeed correspond to the same particle.

6. The existing 2D/3D trajectories are extended to the current time step by the addition of optimized images/triangulated points that lie close to their extrapolated positions. Each of the trajectories consists of a single tail and possibly multiple heads. The tail is made out of objects from the distant past, for which we are confident that they belong to this trajectory. Each head is made out of recent objects, for which the certainty level is much lower, as they could be ghosts or belong to another trajectory.

7. We trim trajectory heads, calculating a likelihood of each trajectory head being real based on its smoothness and on the product of the likelihoods of each object being real, as calculated above. Heads with low likelihood are deleted. Further trimming is achieved by ensuring that older objects are not used in more than one trajectory. Tracks with no remaining heads are deleted, but those of sufficient length are saved first.

8. We initiate new trajectories using those objects from the two most recent time steps that are not already part of a sufficiently long trajectory. Only those pairs of objects are used that could plausibly correspond to a particle moving at a velocity similar to the mean velocity of all particles in the measurement volume.

9. We update (self-calibrate) the camera models, the defocus map (the map from world coordinates to the level of image defocus), the mean velocity, and other auxiliary variables using the current ends of sufficiently long trajectories.
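As referenced in step 3, here is a minimal sketch of the candidate-image search, assuming a simple Gaussian pre-filter and local-maximum detection; the actual implementation fits full image models afterwards, and the frame below is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def candidate_image_locations(frame, sigma, threshold):
    """Candidate locations of (possibly defocused) particle images: the frame is
    smoothed with a Gaussian whose width matches the expected image size for the
    current defocus round, and local maxima above a threshold are returned as
    initial guesses for the subsequent fit ("shake") step."""
    smoothed = gaussian_filter(frame.astype(float), sigma)
    peaks = (smoothed == maximum_filter(smoothed, size=5)) & (smoothed > threshold)
    return np.argwhere(peaks)  # array of (row, col) initial guesses

# Toy usage on a synthetic frame with two blurred "droplets".
frame = np.zeros((64, 64))
frame[20, 20] = frame[40, 45] = 1000.0
frame = gaussian_filter(frame, 2.0)
print(candidate_image_locations(frame, sigma=2.0, threshold=5.0))  # [[20 20] [40 45]]
```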
The particle tracking algorithm described above addresses all of the challenges posed by the specific work environment:
Large measurement volume: The fact that most of our particle images are not in focus has an inevitably detrimental effect on the position accuracy of the tracking. However, we still achieve an accuracy of only a few micrometers, which is entirely sufficient for our purposes, and, thanks to our novel way of extracting particle images, we can still achieve a very high yield throughout the measurement volume. By using image defocus as an additional parameter, we can triangulate using only two cameras without producing many ghosts.
Large mean flow: Our particles typically stay within the measurement volume for only a few Kolmogorov times or less, which poses an additional challenge when it comes to choosing between alternative triangulations or temporal links. We cannot afford to make the wrong choice as there might not be enough time left for initiating a new correct track before the particle leaves the measurement volume again. Using our strategy of having multiple trajectory heads, this risk is minimized. As an additional benefit, this way, we can reduce the sampling rate to as low as 5 kHz without adverse effects on the temporal-linkage reliability. Doing so increases the amount of trajectory statistics collected within the limited measurement time provided by the weather conditions since we are primarily limited by the data transfer rate between the cameras and the compute nodes.
Vibrations and thermal gradients: By self-calibrating at each time step, we can effectively cancel the effects of vibrations caused by the movement of the camera box or camera fans. Thermal gradients, which arise due to thermal convection within and above the camera box, can only be partially compensated for in this manner, as they cause apparent particle image shifts that are non-uniform over the sensors. As a consequence, standard STB loses a lot of its power as the projections of three-dimensional positions cannot be made sufficiently accurate on all cameras simultaneously to allow for effective image subtraction. We “shake” the images on each sensor separately; this way, we can subtract their intensities better and can optimize all images simultaneously, achieving a faster convergence rate than the standard STB approach.
Diversity of tracked particles: The point spread function model that we use is based on the assumption that the particles can be approximated as point sources of light. This assumption works well for water droplets and ice particles up to a diameter of about 30 µm for our setup (in general, this limit depends on the wavelength of light used, aperture size, object distance, and scattering angle). For larger particles, their images become increasingly different from our point-source model and more likely to be interpreted as a close cluster of individual particle images rather than as a single entity. We have developed ways to detect such occurrences and join these clusters into single images again, but the detection of another particle image nearby becomes increasingly unlikely, with potential impacts on the clustering analysis. Droplets and ice crystals larger than about 70 µm saturate the pixels (with our chosen aperture size) to such a degree that their image subtraction becomes impossible. While we can still track their movement and also presumably their orientation (although we have not attempted this) in case their shape significantly differs from spherical, the position accuracy is diminished compared to that of the smaller particles. However, if tracking of large particles were the aim, one could change the magnification, aperture size and beam width accordingly to achieve better results.
E. Self-calibration
The camera model is self-calibrated at each frame to account for apparent motion, for which there are five possible causes: (1) thermal gradients due to thermal convection within and above the camera box; (2) thermal expansion of the camera box itself; (3) vibrations caused by the camera fans; (4) vibrations caused by the linear motor; and (5) permanent displacement of the optics due to mechanical shocks. The first three effects are always present and are discussed here. The last two only occur if the camera box is moving to compensate for the mean wind; for a discussion of these, the reader is referred to Sec. IV. We achieve the best results in terms of compromise between robustness and precision by calibrating five parameters of the model per camera at each frame, corresponding to a small shift of the apparent position of the world coordinate origin (two parameters) and a small change in the view angle (three parameters). In our experience, the shift of the apparent position of the world coordinate origin is the most sensitive to changes in calibration. In the following, we use this as an indicator of the severity of the previously mentioned effects.
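To illustrate the simplest part of this procedure, the apparent origin shift on one sensor can be estimated from the reprojection residuals of well-established tracks; a minimal sketch with hypothetical data (the full self-calibration additionally fits small view-angle changes):

```python
import numpy as np

def apparent_sensor_shift(measured_px, projected_px):
    """Least-squares estimate of a common 2D shift between where long-track
    particles are actually imaged (measured_px) and where the current camera
    model projects them (projected_px), both given as (N, 2) pixel arrays."""
    residuals = np.asarray(measured_px) - np.asarray(projected_px)
    return residuals.mean(axis=0)

# Hypothetical data: the camera model is off by (+0.2, -0.1) pixel.
rng = np.random.default_rng(0)
proj = rng.uniform(0, 800, size=(500, 2))
meas = proj + np.array([0.2, -0.1]) + rng.normal(0, 0.05, size=(500, 2))
print(apparent_sensor_shift(meas, proj))  # ~ [ 0.2, -0.1]
```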
The largest component of the camera shifts we have to deal with is caused by changes in the camera box’s temperature during the course of the day. Changes in temperature cause the box to slightly expand and/or contract, which, in turn, slightly displaces all the optics and hence influences the calibration. In Fig. 10, we show the dependence of the apparent shifts of the world coordinate origin’s sensor positions on camera box temperature over the span of two months. The initial camera model was obtained at a temperature of 30.5 °C, which is why the data seem to converge there. Even the largest apparent shifts of about 7 pixel ≈ 200 µm are well within the limits of self-calibration and small relative to the size of the sensor (1280 × 800 pixel2).
Temperature dependence of the apparent shift of the world coordinate origin’s projection onto each camera sensor. These apparent shifts are the most sensitive indicators of any change to the camera and mirror positions and orientations. The data shown here span eight days of measurement over the course of nearly two months.
The second largest contributor to changes in the calibration is thermal convection inside and above the camera box. We illustrate the problem here by again plotting the apparent shifts of the world coordinate origin’s sensor position, this time over the course of the first 1000 frames of a single high-speed video taken at the sampling rate of 5 kHz, in Fig. 11. These shifts are computed relative to the temporal mean over the entire video, so they are relatively small at ∼0.2 pixel ≈5 µm in amplitude.
Temporal dependence of the apparent shift of the world coordinate origin’s projection onto each camera sensor. A short section of a single high-speed experiment sampled at 5 kHz is shown, with the inset showing a zoomed-in part of the plot to highlight the contribution of the camera cooling fans.
The final contributor to changes in the calibration are the vibrations caused by the cooling fans built into the cameras. These, too, are visible in Fig. 11 in the form of very small high frequency oscillations and are highlighted in the inset. The shifts due to these vibrations are small, on the order of 0.02 pixel, but due to their high frequency of 340 Hz, they create an uncertainty in the acceleration of the world coordinate frame of up to 2 m s−2. The contribution from the thermal convection is of similar magnitude.
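As a rough consistency check, assuming the apparent shift oscillates sinusoidally, the implied acceleration amplitude is (2πf)² times the displacement amplitude:

```python
import math

pixel_pitch_um = 28.0          # magnification quoted in Sec. III A
shift_px = 0.02                # apparent shift amplitude due to the camera fans
f_hz = 340.0                   # fan vibration frequency

x_m = shift_px * pixel_pitch_um * 1e-6
a = (2 * math.pi * f_hz) ** 2 * x_m   # sinusoidal motion: a = (2*pi*f)^2 * x
print(f"{a:.1f} m/s^2")               # ~2.6 m/s^2, i.e., of the order quoted above
```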
It must be noted that the tolerable amount of change due to self-calibration depends on the quantity of interest. If we are interested in quantities that do not depend on an inertial frame of reference, such as the radial distribution function, relative particle velocities, or velocity structure functions, then successful self-calibration is sufficient. However, when we are interested, for example, in particle accelerations, then successful self-calibration is no longer sufficient, as it is insensitive to solid-body translations and rotations of the world coordinate system. In this case, additional constraints must be imposed to uniquely determine the self-calibration parameters, such as zero mean acceleration over all particles in view, which might affect the overall statistics.
F. Particle sizing
We infer droplet sizes from the observed brightness of each droplet via di = c√Ii, with di being the diameter of droplet i and Ii being its observed brightness. The droplet size uncertainty is computed from the brightness standard deviation using standard uncertainty propagation; the brightness standard deviation, in turn, is computed as the standard deviation of the droplet’s brightness over all frames in its track. Typical relative droplet size uncertainties are less than 10%.
The calibration constant c = 0.768 was determined in the laboratory using a calibrated droplet generator [TSI flow-focusing monodisperse aerosol generator (FMAG), model 1520]. The droplet generator was set to generate droplets of diameter d = 20, 30, …, 120 µm, which were then observed by the particle tracking experiment. For each reference droplet size, a histogram of the droplets’ square-root intensities was created, secondary peaks due to satellite drops and merged drops were masked, and the mean and standard deviation of the main peak were computed. Finally, the means were fitted with a straight line, the slope of which is the calibration constant. The histograms, means, and fits are shown in Fig. 12(a). Note that the calibration constant is dimensional (unit: µm per square root of an intensity count) and that it depends on the laser power, the scattering angle, the lens used, the aperture, and the camera type and settings.
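For illustration, a minimal sketch of this sizing step, assuming the track-mean brightness is used in d = c√I and that the uncertainty follows from first-order propagation, σ_d = c σ_I/(2√I); the example track values below are hypothetical:

```python
import numpy as np

C_UM_PER_SQRT_COUNT = 0.768  # calibration constant from the FMAG calibration

def droplet_size_and_uncertainty(intensities):
    """Droplet diameter d = c*sqrt(I) from the track-mean brightness, with the
    uncertainty propagated from the brightness scatter along the track."""
    I = np.asarray(intensities, dtype=float)
    i_mean, i_std = I.mean(), I.std(ddof=1)
    d = C_UM_PER_SQRT_COUNT * np.sqrt(i_mean)
    sigma_d = C_UM_PER_SQRT_COUNT * i_std / (2.0 * np.sqrt(i_mean))
    return d, sigma_d

# Hypothetical track: brightness fluctuating around 900 counts.
d, sd = droplet_size_and_uncertainty([880, 910, 905, 895, 915, 890])
print(f"d = {d:.1f} um +/- {sd:.2f} um")
```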
Droplet sizes di are inferred from the observed particle brightnesses Ii via di = c√Ii. (a) Calibration curves. Square-root-intensity histograms were obtained by observing droplets generated with a calibrated droplet generator. Secondary peaks were masked out (shaded gray), after which the means and standard deviations of the main peaks were computed. Finally, a straight line was fitted through the means, the slope of which is the calibration constant. [(b) and (c)] Comparison of inferred cloud droplet size distributions with those obtained with an Artium dual range flight probe phase-Doppler interferometer (PDI-FPDR). The size distributions compare well for both narrow (b) and broad (c) size distributions.
This droplet sizing method is verified in situ by comparing the size distributions thus obtained with those measured using a commercial interferometer (Artium PDI-FPDR). Examples of this are shown in Figs. 12(b) and 12(c). Our method compares well but shows some oscillations below 35 µm. We believe these oscillations are caused by the scattering pattern, which is well-described by Mie scattering theory and is known to have strong oscillations as a function of both droplet size and scattering angle. However, more work is needed to verify this and to see if we can put this to our advantage. Details on this droplet sizing method will be published in a separate article.
IV. MEAN WIND COMPENSATION
A. The need for mean wind compensation
One of the main added complications of performing particle tracking in the field compared to a laboratory setting is the lack of control over the flow velocity and the number density and size of the tracked particles—in our case, water droplets and ice crystals. It is fundamentally difficult to change any of these parameters without defeating the purpose of being in the field. As a result, the level of illumination, image magnification, or camera lens aperture is often not at the optimal setting for the instantaneous conditions and is rather set to provide satisfactory results over the whole range of expected conditions.
One can, however, in principle, compensate for the possibly large flow velocity relative to the ground by performing the measurements with an apparatus moving at a similar velocity. The underlying goal of this compensation is to maximize the residence time of the particles within the given measurement volume. There are several reasons why a longer residence time is desirable. First, the residence time sets the upper limit on the length of the temporal filter that can be applied to the raw data in an attempt to reduce the effect of measurement error on the final statistics.23,24 Thus, when one is interested in particle velocities or accelerations, for a given particle position uncertainty, there is a minimum residence time required, below which even the filtered data will be dominated by measurement noise. Second, to study the Lagrangian properties of atmospheric turbulence or particle dynamics therein, it is necessary to follow the particles for times comparable to the Kolmogorov time or longer,25,26 as that is where the interesting physics happens. For atmospheric turbulence, which typically has a Kolmogorov time on the order of tens of milliseconds, this is an especially demanding condition to satisfy.
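To make the connection between residence time and filter length concrete, the sketch below smooths a single track and differentiates it to velocity and acceleration. The Savitzky–Golay filter is merely an illustrative choice of smoothing kernel, not necessarily the one used in our processing chain, and the window length shown is ours.

```python
import numpy as np
from scipy.signal import savgol_filter

FRAME_DT = 2.0e-4  # s, frame interval of the 5 kHz recordings

def track_kinematics(positions, window_frames=51, polyorder=3):
    """Smooth one track and differentiate it to velocity and acceleration.

    positions: (n_frames, 3) array of droplet positions in meters.
    The filter window can never exceed the track length, which is why a long
    residence time is a prerequisite for low-noise filtered accelerations.
    """
    n = len(positions)
    win = min(window_frames, n if n % 2 == 1 else n - 1)  # odd window, at most the track length
    if win <= polyorder:
        raise ValueError("track too short for the requested filter")
    vel = savgol_filter(positions, win, polyorder, deriv=1, delta=FRAME_DT, axis=0)
    acc = savgol_filter(positions, win, polyorder, deriv=2, delta=FRAME_DT, axis=0)
    return vel, acc
```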
Another advantage of a longer residence time is specific to Lagrangian particle tracking (LPT) and particle tracking velocimetry (PTV). As mentioned before, as the seeding density increases, so does the number of ghost particles. To our knowledge, the only other reliable way of distinguishing ghosts from real particles is to track the particle images over time and check whether the lines of sight remain nearly intersecting over a sufficiently high number of frames. Thus, the longer the residence time of the particles, the more confidently we can tell ghosts and real particles apart, leading to more accurate results and a larger range of tractable seeding densities.
Finally, a long residence time is very useful for scenarios involving particle pairs at small separations, such as drop–drop collisions, the RDF in the limit of separations approaching the typical drop diameter, and the distribution of relative velocities in the same limit, all of which are of paramount importance to the process of cloud droplet growth. When a pair of particles is close enough, their images overlap on each camera sensor to such a degree that it becomes difficult to distinguish them from the image of a single particle. From a single snapshot, it might therefore be impossible to reliably tell whether we are seeing a particle pair or a single particle; moreover, even if we are confident that it is, in fact, a particle pair, the positioning accuracy of these two particles will be negatively impacted by the overlap of their images. When the residence time is long, however, even a small relative velocity of the two particles will ensure that, on some of the images, the particles are far enough apart to be clearly discernible as a pair. From the long available sequence of snapshots, we can then also reconstruct a more accurate time dependence of their positions (related to the earlier point about filter lengths).
B. Feasibility study
To gauge the feasibility of moving with the flow, we plot in Fig. 13 the probability distributions of all three wind speed components. The data were collected over eight days of measurements in August and September of 2019 by the stationary LPT setup, so each elementary count corresponds to the mean velocity of all tracked particles within the measurement volume at one frame of the video. Clearly, the vertical wind velocity is dominated by the horizontal components. Moreover, as shown in the inset of Fig. 13, the direction of the horizontal wind velocity is predominantly close to either easterly or westerly. This allows for some significant simplifications in the mechanical design of the moving setup since one can fix the moving direction to be aligned with the east-west line and still expect to match the three-dimensional wind velocity reasonably well. Finally, similarly to the work of Risius et al.,18 we observed a wider velocity distribution for westerly wind compared to the easterly wind, which only rarely exceeds 5 m s−1.
Probability of observing a given wind speed component, comprising data collected by using the LPT setup in August and September 2019. The east-west (solid line), north-south (dashed line), and vertical (dotted-dashed line) components are shown, with positive values indicating wind from the east, north, and bottom, respectively. Dark and light shaded areas indicate the regions containing 1/2 and 1/4 of the east-west wind velocity probability distribution, respectively. Inset: Distribution of the angle of the horizontal wind velocity relative to the east-west line.
To better illustrate what kind of translation speeds are required for the moving setup in order to match the relevant component of the wind velocity for a significant fraction of time, we indicate the regions |vx| < vlim with vlim chosen such that P(|vx| < vlim) = 0.5 and P(|vx| < vlim) = 0.75 using a dark- and light-gray rectangle, respectively. The velocity limits that give these probabilities are 1.48 and 3.12 m s−1, of which our setup is easily capable (see Sec. II B).
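The quoted velocity limits are simply quantiles of the distribution of the absolute east-west wind component; a one-line Python sketch (function name ours) makes this explicit.

```python
import numpy as np

def wind_matching_speed(vx_samples, coverage):
    """Translation speed v_lim such that P(|v_x| < v_lim) = coverage.

    coverage = 0.5 and 0.75 reproduce the 1.48 and 3.12 m/s quoted in the text
    when applied to the east-west wind component samples.
    """
    return np.quantile(np.abs(np.asarray(vx_samples, dtype=float)), coverage)
```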
Moving with the flow obviously increases the residence time of the droplets in our measurement volume and, thus, the length (temporal extent) of the trajectories we can hope to obtain. On the other hand, the rate at which droplets enter and leave the measurement volume will decrease, reducing the overall number of tracks that we observe. A natural question to ask is then: keeping the total measurement time fixed, for which trajectory lengths does the total number of trajectories of that length increase if we move with the flow, and by how much?
To answer that question, we plotted histograms of track lengths under the following circumstances: (1) with the stationary camera box; (2) with the camera box moving along the x axis with a velocity that matches the mean wind’s x component; and (3) with the camera box moving with the mean wind. Case 2 corresponds to the camera box moving along the rails, while case 3 is the best-case theoretical scenario, in which the box can move in any direction. We obtained these histograms by considering all data from August and September 2019, all of which was acquired with a stationary camera box. For cases 2 and 3, the data were processed to get estimates for the track lengths that would have been obtained if the box were moving. The details of this procedure can be found in the supplementary material. The resulting histograms are shown in Fig. 14. As shown in the inset of Fig. 14, moving the camera box is expected to increase the number of particle tracks longer than about 10 ms, compared to a stationary case. The longer the track, the more significant the relative increase becomes, with the number of 100 ms tracks being twice as high when moving in the east-west direction and more than four times as high in the hypothetical scenario that the camera box could move in any horizontal and vertical direction.
Histogram of the trajectory lengths (temporal extent) gathered over several days of measurement with a stationary box (black dashed-dotted line). For comparison, the expected histograms that would have been obtained with a box moving with the east-west component of the wind (blue solid line) and with all components of the wind (red dashed line) are shown. Inset: the increase in the number of tracks of the given length obtained with a moving box, relative to that same number obtained with a stationary box.
C. Challenges
Besides the aforementioned advantages, the moving apparatus also introduces several challenges and issues of its own. Most immediately, moving the LPT experimental setup introduces vibrations, which may be detrimental to the particle tracking and any further analysis. In our case, the vibrations are predominantly caused by the periodic layout of the permanent magnets, which causes a small position-dependent component of the horizontal force applied to the moving table. Thus, when the table is prescribed to move with a constant velocity, in reality, its velocity periodically oscillates around the intended value, inducing vibrations in the camera box with frequency proportional to the translation speed. At a prescribed speed of 1 m s−1, the speed oscillations have a typical period of 25 ms, which is very close to the time scale of interest for cloud droplet dynamics—the Kolmogorov time scale (typically 30 ms). If the period of the vibrations were significantly shorter than this, one could make the seesaw, the camera box, and their connection as stiff as possible so that any vibrations induced in the camera box would happen on similar time scales and be removable by using appropriate low-pass filters, without filtering out the particle dynamics of interest. This is, however, not the case, so we have to dampen the vibrations mechanically, which led us to the current passively damped design.
Second, either gradually due to vibrations or after a single violent event, such as an emergency stop, the positions and orientations of the cameras and mirrors could shift so far away from their initial values that self-calibration on particle images is no longer possible. It is hard to say how far is “too far” in this context, since to a large degree it is only the computational cost that limits the power of self-calibration; clearly, though, if the camera views no longer intersect, self-calibration must fail. In our case, shifts resulting in an apparent displacement of the particle images by a few tens of pixels are certainly still recoverable, and we have not experienced any irrecoverable shifts.
Finally, the size of the seesaw could potentially mean that it affects the properties of the flow that we are trying to measure. However, the measurement volume is a meter above the level of the seesaw rails and 2.5 m above the roof, and so this effect should be negligible compared to some other effects outside our control, such as the influence of the Schneefernerhaus and the surrounding topography. Further experiments need to be conducted to see how large the effects of the seesaw, the Schneefernerhaus, and the surrounding topography are and whether or not they are important.
V. RESULTS
In this section, we present some preliminary results obtained with the apparatus. In all cases, the cameras were configured to record at 5 kHz. The laser power was adjusted so that ∼0.1% of the pixels would be saturated. Particle tracking was performed with an in-house code that is based on the ideas of the Shake-The-Box algorithm.22 Finally, MATLAB was used for the various statistical analyses, which are described in their respective sections. In all of the following analyses, we restricted the interrogation volume to a slice constrained by |z| < 6 mm, where the particles are well-focused and thus most likely to be detected. We deem it premature to compare our results to the literature because the data that we show here have not been conditioned on the Stokes number and might suffer from biases due to the limited vertical extent of the measurement volume.
A. Environmental conditions
All results shown in this section were computed from the first 40 experimental samples on September 28, 2019, which were acquired between 6:44 UTC and 7:22 UTC. The flow properties were measured using an ultrasonic anemometer (part No. 4.3830.20.340, Thies Clima, Göttingen). The sonic was installed 1.5 m away from the measurement volume to the south of the camera box, so it would not influence the flow into the measurement volume. The sonic data were analyzed using a method inspired by that of Risius;21 however, we did not condition on turbulent intensity and instead split the data into fixed 2-min-long subsegments with 75% overlap so that we obtain a value for the dissipation rate every 30 s. To reduce measurement noise, the dissipation rates were smoothed using a 30-min Blackman window. This window length loosely coincides with the time scale of the large scales of the turbulent flow at the Schneefernerhaus (5 min; see Fig. 8 in the work of Risius et al.18). The measured and smoothed dissipation rates are shown in the top panel of Fig. 15.
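The segmentation and smoothing steps can be summarized as follows (a sketch; the dissipation-rate estimator itself, which operates on each sonic-anemometer segment, is not reproduced here and is passed in as a user-supplied function):

```python
import numpy as np

def dissipation_time_series(sonic_velocity, fs, estimator,
                            segment_s=120.0, overlap=0.75):
    """Split the sonic record into 2-min segments with 75% overlap (one
    dissipation estimate every 30 s) and apply the supplied estimator to each."""
    seg = int(segment_s * fs)
    step = int(seg * (1.0 - overlap))
    starts = range(0, len(sonic_velocity) - seg + 1, step)
    return np.array([estimator(sonic_velocity[i:i + seg]) for i in starts])

def smooth_dissipation(eps_30s, window_s=1800.0, sample_s=30.0):
    """Smooth the 30-s dissipation-rate series with a 30-min Blackman window."""
    w = np.blackman(int(window_s / sample_s))
    w /= w.sum()
    return np.convolve(eps_30s, w, mode="same")
```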
Summary of environmental conditions on September 28, 2019, between 6:40 and 7:30 UTC. Top: dissipation rates as measured (dots) and after smoothing (line). Smoothing is done with a 30-min Blackman window and helps reduce measurement noise. Bottom: Stokes number distributions computed from droplet sizes that were measured with the particle tracking experiment. Data from the first 40 experimental samples on September 28, 2019 are shown, which were acquired between 6:44 UTC and 7:22 UTC.
The droplet sizes were obtained directly from the particle tracking data, as described in Sec. III F. Stokes numbers were computed for each experimental sample using the smoothed dissipation rate. The distribution of droplet sizes and Stokes numbers, both as a function of time, are shown in the bottom panel of Fig. 15.
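A hedged sketch of the Stokes-number calculation is shown below. The Stokes-drag response time and the air properties at 2650 m (density and kinematic viscosity) are our assumptions, as the exact values used in the analysis are not stated here.

```python
import numpy as np

RHO_WATER = 1000.0   # kg/m^3
RHO_AIR = 0.95       # kg/m^3, rough value at 2650 m altitude (assumption)
NU_AIR = 1.7e-5      # m^2/s, kinematic viscosity of air (assumption)

def stokes_number(diameter_m, epsilon):
    """St = tau_p / tau_eta, with the Stokes drag response time
    tau_p = rho_p d^2 / (18 rho_f nu) and the Kolmogorov time
    tau_eta = sqrt(nu / epsilon)."""
    tau_p = RHO_WATER * diameter_m**2 / (18.0 * RHO_AIR * NU_AIR)
    tau_eta = np.sqrt(NU_AIR / epsilon)
    return tau_p / tau_eta
```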
B. Radial distribution function
The radial distribution function (RDF) is a measure of particle clustering. In particular, it measures the number of particle pairs at a certain separation r, relative to the number of pairs at separation r that would be found if all particles were distributed homogeneously. Here, we calculate the RDF as
g(r) = N(r) / Nhom(r),
with N(r) being the histogram of measured particle separations and Nhom(r) being the histogram of particle separations of homogeneously distributed particles. A field of homogeneously distributed particles is obtained by discarding the temporal coordinate of all measured particles (creating a “super frame”) and randomly selecting ∼30 000 particles from the super frame. This approach is insensitive both to finite-volume effects and to the varying sensitivity throughout the measurement volume. In Fig. 16, we show radial distribution functions for distances of 0.15η ≤ r ≤ 30η (0.1 ≤ r ≤ 20 mm) computed from experimental samples taken on September 28, 2019. Each of these samples is 3.3 s long, and they are spaced one minute apart. Two RDFs are shown, one for experimental samples taken between 6:44 and 6:55 UTC, for which the Stokes number is St ≈ 0.1, and one for experimental samples taken between 7:05 and 7:22 UTC, during which St ≈ 0.01. We tried to reduce the effects of particle shadowing, which decreases the likelihood of detecting a pair of nearby droplets if they are aligned with one of the camera view axes, by conditioning the radial distribution function on the angle between the vector connecting the droplets and the vertical, and only included the pairs for which this angle exceeded 60°. The values of the RDF shown here were computed as a weighted average over the experimental samples used, with the weights proportional to the total number of particles in each sample. The colored regions indicate the sample standard deviation of each bin across the experimental samples. Other than the separation in time, which loosely corresponds to a separation between large and small particles, no conditioning on particle size has been applied yet, so the functional profile is expected to follow a power law that levels off at small r.27–29 The decrease for r < 0.3η could be due to the residual effects of particle shadowing but might also be a real effect, caused by droplet charge.30 A comparison to RDFs computed on synthetic data needs to be made in order to verify the data for r < 0.3η.
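The estimator described above can be sketched as follows; normalizing each histogram by its total pair count before taking the ratio is our assumption, and the angular conditioning against shadowing as well as the per-sample weighting are omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def radial_distribution_function(frames, r_edges, n_reference=30000, seed=None):
    """RDF as the ratio of the measured pair-separation histogram N(r) to the
    histogram Nhom(r) of homogeneously distributed particles, the latter built
    by pooling all frames into a "super frame" and drawing ~30 000 particles
    at random.  Both histograms are normalized by their total pair count."""
    rng = np.random.default_rng(seed)

    def pair_histogram(points):
        tree = cKDTree(points)
        cum = tree.count_neighbors(tree, r_edges) - len(points)  # drop self-pairs
        return np.diff(cum) / 2.0  # each pair was counted twice

    n_meas = sum(pair_histogram(f) for f in frames)

    super_frame = np.vstack(frames)
    idx = rng.choice(len(super_frame),
                     size=min(n_reference, len(super_frame)), replace=False)
    n_hom = pair_histogram(super_frame[idx])

    return (n_meas / n_meas.sum()) / (n_hom / n_hom.sum())
```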
Two radial distribution functions (RDFs) of cloud droplets at Zugspitze as a function of their non-dimensional separation r/η, both computed for data taken on September 28, 2019. The purple symbols are the computed experimental samples taken between 6:44 and 6:55 UTC, for which the Stokes number is St ≈ 0.1, while the blue symbols are for the experimental samples taken between 7:05 and 7:22 UTC, during which St ≈ 0.01. The colored regions indicate the regions within one standard deviation from the mean value. The decrease for r < 0.3η (gray shaded area) could be a measurement artifact; here, further investigation is needed.
C. Relative velocities
In Fig. 17, we show the probability distribution of the longitudinal relative velocity v‖ of droplet pairs, summed over the first 40 experimental samples taken on September 28, 2019. The velocity distributions for several separation ranges are shown, with each range defined in terms of r0 = 1.2η (0.8 mm). In order to best show the data for different separation ranges in a single graph, we have nondimensionalized the relative velocity by its typical scale in the dissipative range, r/τη, with the dissipation rate ϵ = 0.0286 m2 s−3, the mode of the log-normal distribution observed by Siebert et al.20 The probability values were rescaled so that their maxima are all equal to 1.
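The quantity being histogrammed is the component of a pair’s velocity difference along its separation vector; a minimal sketch of how it could be collected for one frame and one separation range follows (names and looping strategy are ours).

```python
import numpy as np

def longitudinal_relative_velocities(positions, velocities, r_min, r_max):
    """Collect v_par = (v_j - v_i) . r_hat for all droplet pairs in one frame
    whose separation lies in [r_min, r_max).  Before histogramming, each value
    is divided by the dissipative-range scale r / tau_eta (see text)."""
    v_par, seps = [], []
    for i in range(len(positions) - 1):
        dx = positions[i + 1:] - positions[i]
        r = np.linalg.norm(dx, axis=1)
        sel = (r >= r_min) & (r < r_max)
        if not np.any(sel):
            continue
        dv = velocities[i + 1:][sel] - velocities[i]
        v_par.append(np.einsum("ij,ij->i", dv, dx[sel]) / r[sel])
        seps.append(r[sel])
    if not v_par:
        return np.array([]), np.array([])
    return np.concatenate(v_par), np.concatenate(seps)
```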
Relative probability distributions of the longitudinal velocity difference v‖ between droplet pairs at a given separation r, non-dimensionalized by the dissipative-range scaling r/τη, and rescaled to have maximum equal to 1. Distributions for several ranges of r are shown.
It is the distributions at the smallest separations that are of the most interest due to their importance to the droplet collision rate. However, for those separation ranges, a larger number of experiments have to be combined to achieve statistical convergence, and the results need to be conditioned on the drop size in order to be useful. This requires more careful analysis, and is beyond the scope of the preliminary results shown here.
D. Velocity autocorrelation
In Fig. 18, we show the per-component particle velocity autocorrelation, again computed over the first 40 experimental samples taken on September 28, 2019. The autocorrelations are normalized such that Rii(0) = 1. Since we can track particles only for as long as they reside in the measurement volume, which is much shorter than the integral time scale, it is difficult to correctly define and compute a temporal mean velocity. Hence, we computed the autocorrelation functions without subtracting the particles’ mean velocities. As can be seen in Fig. 18, all autocorrelations decay more or less exponentially, but the vertical-component autocorrelation Rzz decays much more slowly than the others. The slower decay of Rzz is due to the effects of the particles’ settling velocity. In an infinitely large measurement volume, we would observe Rzz decaying to a positive constant ⟨vs⟩²/(⟨vs⟩² + ⟨u′z²⟩), where ⟨vs⟩ is the mean (across the particle size distribution) settling velocity and ⟨u′z²⟩ is the z-component of the mean square fluctuating velocity. However, since our measurement volume is finite, for longer time lags τ, the faster-settling droplets are unlikely to be observed for that long, and the statistics become increasingly dominated by the slowest-settling particles. Thus, the experimentally observed Rzz also converges to 0, albeit at a much slower rate.
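The autocorrelation as described (no mean subtraction, averaged over tracks and start times, normalized to unity at zero lag) can be sketched as follows; the per-track weighting details are our assumption.

```python
import numpy as np

def velocity_autocorrelation(tracks, max_lag):
    """Per-component velocity autocorrelation without mean subtraction.

    tracks: iterable of (n_frames, 3) velocity arrays; only lags that fit
    inside a given track contribute to it.  Choose max_lag no longer than the
    longest track.  The returned array is normalized so that R_ii(0) = 1.
    """
    num = np.zeros((max_lag + 1, 3))
    cnt = np.zeros(max_lag + 1)
    for v in tracks:
        n = len(v)
        for lag in range(min(max_lag + 1, n)):
            num[lag] += np.sum(v[: n - lag] * v[lag:], axis=0)
            cnt[lag] += n - lag
    R = num / cnt[:, None]
    return R / R[0]
```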
Normalized per-component particle velocity autocorrelations. The vertical-component autocorrelation decays slower than the others, which is most likely caused by the limited vertical extent of the measurement volume. Inset: the same data but plotted on a logarithmic scale to emphasize the exponential decay.
VI. CONCLUSION AND OUTLOOK
The apparatus we present here allows us to perform in situ particle tracking on cloud droplets. The whole of the particle tracking setup is built into a stiff, weather-proof box that we call the camera box. The volume probed by the particle tracking setup is 16.6 cm3; however, due to its vertical extent, many particles are out of focus and therefore difficult to locate accurately, if at all. By limiting the vertical size of the measurement volume, its shape can be approximated by a cuboid of 40 × 20 × 12 mm3, particle positions can be measured with an uncertainty of 5 µm, and their accelerations can be measured with an uncertainty of 0.1 m s−2. The smallest particles that we can accurately track within this volume are 5 µm in diameter. We typically operate the experimental setup at 5 or 10 kHz; however, frame rates of up to 25 kHz are possible without losing accuracy.
We have mounted the camera box on a table that can be moved over a set of rails by an electromotor, which allows us to compensate for the mean wind. The wind at the Schneefernerhaus is predominantly east-west, so the rails are aligned in this direction. In order to also compensate for the vertical component of the mean wind, the rails are bolted onto a structure dubbed the “seesaw,” which can be tipped up to 15° east or west. Although the system has been designed for velocities of up to 7.5 m s−1, in practice, the maximum velocity we can run it at is set by the travel of the table (5.3 m), the maximum acceleration (10 m s−2), and the desired duration of the constant-velocity phase. At 10 kHz, our cameras can record for up to 1.6 s; if we require the constant-velocity phase to be at least this long, the velocity is limited to at most 2.8 m s−1. The electromotor introduces extra vibrations into the camera box, which we dampen passively by decoupling the box from the table with springs and rubber buffers. This way, we successfully reduce the rms accelerations of the camera box down to 0.1 m s−2, which is sufficient for our purposes. We are, however, looking into an actively damped approach, where we stiffly couple the camera box to the table during the acceleration and deceleration phases but almost completely decouple it otherwise. With this mean wind compensation, we expect to see fewer particles, but their average residence time in the measurement volume should increase. In particular, the number of tracks longer than 10 ms increases; the number of 100 ms tracks doubles. This is beneficial for particle tracking in general, since longer particle tracks allow us to apply better filtering to the tracks, which helps to increase accuracy, in particular of higher-order derivatives such as the droplet acceleration.
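The 2.8 m s−1 figure can be checked by assuming a symmetric trapezoidal motion profile in which each acceleration and deceleration ramp covers a distance v²/(2a):
\[
\frac{v^{2}}{a} + v\,t_{\mathrm{rec}} \le L
\;\Rightarrow\;
\frac{v^{2}}{10\ \mathrm{m\,s^{-2}}} + (1.6\ \mathrm{s})\,v \le 5.3\ \mathrm{m}
\;\Rightarrow\;
v \lesssim 2.8\ \mathrm{m\,s^{-1}}.
\]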
Particle tracking is done with an in-house code, with which we can accurately track particles that are badly illuminated and/or strongly out of focus. It is inspired by the Shake-The-Box algorithm,22 but instead of trying to use the particles to resolve the underlying flow, it puts emphasis on finding and tracking the largest possible number of droplets in view and accurately determining the amount of light associated with each particle image. We are able to track under all number concentrations encountered so far, which are up to 10⁴ droplets in simultaneous view of all cameras and up to 8 × 10⁴ droplets in total (due to the shallow view angles). By performing a self-calibration at every frame, we can deal with thermal convection and expansion and vibrations caused by the camera box movement. By extracting not only particle positions as a function of time but also their brightness as registered by each individual camera, we are able to deduce each individual droplet size. For droplets larger than 35 µm in diameter, we can estimate the size of each droplet accurately by a simple quadratic fit to the dependence of scattered light on droplet size. For smaller droplets, the effects of Mie scattering should be taken into account for better accuracy, which is left for further work.
We demonstrated our ability to measure the radial distribution function (RDF) and the longitudinal relative velocity distribution; however, more work is needed to ensure that these data are not affected by particle shadowing. Other quantities that we expect to measure are, e.g., the particle acceleration statistics and various measures for particle clustering, such as the fractal dimension and the size distribution of Voronoi cells.31 We hope the results from this experiment find application in meteorology, where they can be used to improve cloud models, in particular to improve the parameterizations that are used to predict rain formation. Furthermore, since the flow at the measurement site has a large Taylor Reynolds number of ∼3000 and an isotropy comparable to that of laboratory flows, the apparatus could help to improve our understanding of intermittency and its influence on inertial particles.
SUPPLEMENTARY MATERIAL
See the supplementary material for the method used to estimate the trajectory lengths that would be obtained with a moving camera box.
ACKNOWLEDGMENTS
Over the years, many people have contributed to the experiment that we describe here. We are, particularly, grateful to Professor Heng-Dong Xi and Steffen Risius for their work on the very first version of the camera box; Artur Kubitzek, Udo Schminke, and the machine shop at the MPIDS for designing, building, and installing the seesaw and rail system; Denny Fliegner and Hecke Degering for providing and maintaining the computational resources at the MPIDS; Markus Neumann, Till Rehm, and the staff of the Schneefernerhaus for maintaining the research station and the various experiments in our absence; and the partners of the Schneefernerhaus and the State of Bavaria for their continued effort and financial support to keep the Schneefernerhaus running. We, furthermore, would like to thank Professor Siegfried Specht for recommending the Schneefernerhaus to us and Professor Raymond Shaw for advice. Dr. Bagheri was partly funded by the Swiss National Science Foundation (Grant No. P2GEP2-165065). This work was supported by the Bavarian Ministry for Environment and Consumer Protection and by the European Union Horizon 2020 program, in particular Marie Skłodowska-Curie actions under Grant Agreement No. 675675.
AUTHOR DECLARATIONS
Conflict of Interest
We have no conflicts of interest to disclose.
DATA AVAILABILITY
Mechanical drawings and CAD data of the experimental setup are published separately.19 The data that support the findings of this study are available from the corresponding author upon reasonable request.