The typical sound of George Lucas’ laser blaster in the “Star Wars” series is well known. What does a laser blaster in “Star Wars” sound like, and why? Here we show a simple way to produce this sound by using low-cost lab material, like a spring or a Slinky (Fig. 1). Building on the work of Crawford,1 who analyzed the sound of a Slinky using oscilloscopes, we present a technique for analyzing the sound using mobile devices. For a deeper quantitative analysis, a PC with open-source software is used.
Recently, many papers have shown how to use smartphone applications effectively for hands-on experiments in education, especially in mechanics2–4 and acoustics.5–7 These papers deliver an experimental approach to several acoustical concepts and phenomena. The effect underlying the “Star Wars” sound, namely acoustic dispersion, has not yet been investigated in this context. The experimental work we present here closes this gap by offering methods of different complexity for students to study the physics of this phenomenon:
Elementary exploration of the phenomenon without any technical material.
Qualitative and basic quantitative analysis using smartphone apps—“making acoustic dispersion visible.”
Detailed analysis using a PC with Audacity and two microphones.
Experimental setup and procedure
To analyze the sound of the spring or Slinky with a smartphone qualitatively, a tin can or plastic cup can be used as a resonator. The internal microphone of an ordinary smartphone is sufficient to record the sound, if it is located close to the resonator. Depending on the operating system, different apps are available.10
For a detailed quantitative analysis, we used a laptop and the free and open-source audio software Audacity.11 To record the sound, two contact microphones are connected to the spring at the different endings. The connection to the laptop depends on the model. We recommend using an external sound card with line-in-input (details in the following section).
Experimental procedure and analysis
Elementary exploration
The spring (or Slinky) needs to be connected to a resonator (e.g., a tin can or a plastic cup). An assistant holds this output end, while the experimenter holds the input end [see Fig. 2(a)]. The experimenter stretches the spring parallel to the floor, so that it doesn’t touch the ground and adjacent turns of the spring don’t touch one another. If an assistant supports the spring in the middle, rubber bands can be used as the connection to the spring, so that damping of the vibrations is minimized at this point. With a pen or a fingernail, the spring is struck on the input end. This short impulse, produced by hitting the spring, consists of a continuous spectrum of frequencies. After the vibrations propagate down the spring, the resonator emits a significantly longer signal that changes from high to low pitch very quickly: “tiuu”— the sound of a laser blaster. When the spring is shortened by holding it at half-length, then at quarter-length, and so forth, and struck again, the sound changes with every shortening: from a drawn-out “tiuu” to a short sound that resembles the initial “tack.”
Please note: Springs or Slinkies are usually used to demonstrate the propagation of waves, as a model for the propagation of sound, by compressing the coils of the spring and letting the compression travel along the spring. Though the setting seems quite similar, this experiment does not use the longitudinal waves of the spring as a whole, but the longitudinal compressional waves traveling through the material of the wire when the spring is struck.
This exploration demonstrates that the phenomenon depends on the length of the spring and therefore the distance the sound travels through the material. The longer the material, the longer the sound lasts. From a learner perspective, two initial assumptions might be relevant:
The initial sound, produced by hitting the spring, is made of various frequencies.12
High-frequency components travel faster in the spring than lower frequencies.
As Crawford stated, “[H]igh frequency components of the ‘delta function’ excited by the pencil tap propagate down the Slinky® delay line more rapidly than do the low frequency components.”1 These hypotheses can be analyzed by using mobile devices.
In his experiments with a Slinky, Crawford further found that “the spring tension has very little to do with the whistler we hear” (Crawford calls the “tiuu” sound a “whistler”), which we could confirm in our experiments with the spring.
Qualitative analysis of the phenomenon with smartphone apps
In his experiments, Crawford used an oscilloscope and microphones to analyze the sound. For a more contemporary, hands-on classroom implementation, we show an updated version of these experiments using mobile devices with their internal microphones and appropriate applications for displaying and analyzing audio signals [see Fig. 2(b)].
To analyze the waveform of the sound we used the app phyphox,13,14 with its tool “audio oscilloscope” as shown in Fig. 3(a). When zooming in by changing the time scale [see Fig. 3(b)], it becomes obvious that the first wave packets have the shortest time period (high pitch) and the following wave packets’ time periods get consecutively longer—and therefore have a lower pitch.
This (audible) shift of pitch can be visualized in an audio spectrum, which calculates and displays the frequencies of the sound by fast Fourier transform (FFT). However, the standard FFT spectrum doesn’t display time-dependent development of the frequency components. This important information can be visualized in an audio spectrogram (we used the app SpectrumView15). Figure 4 shows the spectrogram of the blaster sound: the declining sound is clearly visible, as well as the echoes, reflected at the ends of the spring. These echoes are not only reaching the output end later but also more stretched, as they propagate for a longer time in the material. The existence of echoes might be a surprising discovery, as they are not easily audible.
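The spectrogram computation described above can be sketched in a few lines: the recording is cut into short frames, each frame is windowed and Fourier transformed, and the dominant frequency per frame traces the falling pitch. The signal below is a synthetic descending chirp standing in for the recorded “tiuu”; all parameters (sample rate, sweep range, frame size) are illustrative, not values from the experiment.

```python
import numpy as np

# Synthesize a falling chirp (a stand-in for the "tiuu" sound).
fs = 8000                        # sample rate in Hz (illustrative)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s of signal
f_start, f_end = 3000.0, 300.0   # pitch falls from high to low
# Phase is the running integral of the instantaneous frequency.
freq = f_start + (f_end - f_start) * t / t[-1]
phase = 2 * np.pi * np.cumsum(freq) / fs
signal = np.sin(phase)

# Short-time Fourier transform: window each frame, then FFT it.
frame, hop = 256, 128
window = np.hanning(frame)
n_frames = 1 + (len(signal) - frame) // hop
spectrogram = np.empty((n_frames, frame // 2 + 1))
for i in range(n_frames):
    chunk = signal[i * hop:i * hop + frame] * window
    spectrogram[i] = np.abs(np.fft.rfft(chunk))

# The dominant frequency of each frame falls over time,
# just as the declining trace in the spectrogram app does.
freq_axis = np.fft.rfftfreq(frame, 1 / fs)
peaks = freq_axis[spectrogram.argmax(axis=1)]
print(peaks[0], peaks[-1])  # first frame is high-pitched, last frame low
```

Apps like SpectrumView perform essentially this computation in real time and color-code the magnitudes.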
Such a frequency-dependent propagation speed is characteristic of dispersive media like metal.
The distance s (= 56.77 m) the signal travels along the spring was calculated by measuring the perimeter d (= 0.109 m) of one coil of the spring and multiplying it by the number of coils n (= 520).
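This path-length calculation, using the values quoted in the text, reduces to a one-line product:

```python
# Path length s along the wire = perimeter of one coil * number of coils.
# d and n are the values quoted in the text; the small difference from the
# quoted 56.77 m comes from rounding of the measured perimeter.
d = 0.109   # perimeter of one coil in m
n = 520     # number of coils
s = d * n
print(round(s, 2))  # approximately 56.7 m
```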
Using the app SpectrumView, we were able to measure the time t the sound traveled in the material. As an exact readout of time in the app is not possible (e.g., by zooming into the time axis), we took a screenshot of the recorded data and analyzed it on a computer (see Fig. 5).
The time measurements for the different frequencies are of limited accuracy—in addition to the uncertainties in reading the data, the time of the initial input event is not known very precisely. In this setting, the detected input signal (“tack”) travels through the air from the input end to the microphone—this travel time is not accounted for. This error is on the order of 5% to 10% of the measured times (Fig. 10). The problem can be solved by including the echoes in the measurement or by using a more sophisticated setup with two microphones (see next section).
Figure 6 shows the results of the calculation of the group velocities vG for different frequencies. These results support assumption 2.
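The group-velocity calculation behind Fig. 6 is simply v_G = s / t per frequency. The sketch below uses the spring length from the text, but the frequency–time pairs are purely illustrative stand-ins (the actual measured times are read off the spectrogram in Fig. 5 and are not listed in the text); they are chosen only to show the trend that higher frequencies arrive earlier and therefore travel faster.

```python
# Group velocity v_G = s / t for each frequency component.
s = 56.77  # m, signal path length along the spring (from the text)

# Hypothetical (frequency in Hz -> time of flight in s) pairs,
# illustrating that high-frequency components arrive first:
times = {7000: 0.013, 4000: 0.017, 2000: 0.024, 1000: 0.034, 500: 0.048}

v_G = {f: s / t for f, t in times.items()}
for f in sorted(v_G, reverse=True):
    print(f, "Hz ->", round(v_G[f]), "m/s")  # higher f -> higher v_G
```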
Quantitative analysis using a laptop with Audacity and two microphones
The setup with a smartphone does not allow direct information about the input event to be gathered. The time between the input event (striking the spring) and the output event (hearing the sound) can only be measured indirectly. To verify that the method used above is correct, a setup using a laptop and microphones is used. The main differences of this setup are that the signals from the two microphones are recorded on two separate channels [see Fig. 8(a)]16 and that the software Audacity is used for more accurate measurements and analysis.
To get data from the input end as well as from the output end in the same data set, two microphones were connected to the laptop [see Fig. 2(c)]. As the normal microphone jack is a mono-input, a line-in jack is needed. Some devices offer this option, but most don’t. In this case an external sound card with line-in jack is necessary.17 The two microphones are connected with a Y-cable (splitting right and left channel) to the line-in jack of the sound card, so in Audacity each microphone is on a separate audio track (see Fig. 7).
Using the external sound card, the line-in is selected as input. When the recording is started and the input end of the spring is struck, microphone 1 detects a signal, and after a short while (depending on the length of the spring) another signal can be detected on microphone 2, which looks and sounds very different from the first signal [see Fig. 8(a)]. A conversion of the recorded data into a spectrogram reveals that the input signal consists of a continuous frequency spectrum. This corresponds to assumption 1 listed under the elementary explorations. As assumption 2 stated, the frequency declines over time—in a nonlinear fashion [Fig. 8(b)].
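With both channels in one recording, the delay between input and output events can also be estimated automatically by cross-correlating the two tracks instead of reading cursor positions off the Audacity display by hand. This is an alternative we sketch here, not the method used in the article; the two signals below are synthetic stand-ins for the recorded tracks.

```python
import numpy as np

fs = 44100                           # sample rate in Hz (illustrative)
delay_samples = 700                  # true offset between the two channels
rng = np.random.default_rng(0)
pulse = rng.standard_normal(200)     # stand-in for the broadband "tack"

mic1 = np.zeros(8000)
mic2 = np.zeros(8000)
mic1[1000:1200] = pulse                                    # input event
mic2[1000 + delay_samples:1200 + delay_samples] = pulse    # delayed arrival

# Full cross-correlation; the peak position gives the lag of mic2 vs. mic1.
corr = np.correlate(mic2, mic1, mode="full")
lag = corr.argmax() - (len(mic1) - 1)
print(lag, "samples =", lag / fs, "s time of flight")
```

In a real recording the output signal is stretched by dispersion, so the correlation peak is broader, but the method still gives a usable arrival-time estimate.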
Spectrogram representation makes the reflected signals visible too: mic 1 shows a signal in the second part, which was reflected on the output end and detected at the input end (reflection 1). Note that the detected reflection on microphone 2 corresponds to echo 1 in Fig. 4.
The results of the spectrogram were verified by analyzing the waveform of different wave packets: measurements of the period of the waves (by counting the number of samples between two maxima) confirmed the frequencies in the spectrogram. As discussed above (“Elementary exploration”), varying the length s of the spring modifies the generated sound.
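The period measurement described above can be automated: find successive local maxima, count the samples between them, and convert to a frequency. The waveform below is a synthetic 2000-Hz tone standing in for one wave packet; the sample rate is an assumed typical recording rate.

```python
import numpy as np

fs = 44100        # sample rate in Hz (typical recording rate, assumed)
f_true = 2000.0   # synthetic wave-packet frequency
t = np.arange(0, 0.01, 1 / fs)
wave = np.sin(2 * np.pi * f_true * t)

# Local maxima: samples higher than both neighbors.
peaks = np.flatnonzero((wave[1:-1] > wave[:-2]) & (wave[1:-1] > wave[2:])) + 1

period_samples = np.diff(peaks).mean()   # average samples per period
f_est = fs / period_samples              # frequency from the period
print(round(f_est), "Hz")                # close to 2000 Hz
```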
To demonstrate this, Fig. 9 shows the signal of mic 2 for three different spring lengths. In the spectrogram, four different frequencies from 500 Hz to 7000 Hz are highlighted, and the dashed line represents the input signal detected on mic 1.
The results for the group velocities at different lengths were consistent and are shown in Fig. 10. The results for shorter springs (PC 14.2 m and PC 25.4 m) show much lower accuracy, as the errors in reading the data carry more weight. Higher frequencies show a higher margin of error due to the shorter “time of flight” of the wave packet.
A comparison between PC measurement (PC 56.8 m) and mobile measurement (Mobile 56.8 m) confirms the systematic error of the mobile measurement described above: The calculated speeds of sound are significantly higher (>10%) when using mobile devices. As discussed previously, this is caused by the air-conducted and therefore delayed input signal.
Theoretical analysis
For the spring we were using (d=0.17 cm), the theoretical model of Crawford does not seem to fit very accurately (see Fig. 12). As Crawford presumed, the effect of the spiraling material in the spring might cause this divergence.
A further empirical investigation of the Crawford model could be part of subsequent research, in which Slinkies with different winding diameters are examined. A hypothesis might be that the larger this diameter, the less the spiraling of the material, and the better the Crawford model with vG ∼ f0.5 fits. As the scope of this article lies in the educational aspects of the experiments, further theoretical analysis beyond this point has been omitted.
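Whether measured group velocities follow Crawford’s vG ∼ f0.5 prediction can be checked with a power-law fit: taking logarithms turns vG = a·f^b into a straight line whose slope is the exponent b. The data below are synthetic, generated with b = 0.4 purely to demonstrate that the fit recovers the exponent; they are not measurements from the experiment.

```python
import numpy as np

# Power-law fit v_G = a * f**b via linear regression in log-log space.
# Crawford's model predicts b ~ 0.5; the synthetic data use b = 0.4.
f = np.array([500.0, 1000.0, 2000.0, 4000.0, 7000.0])   # Hz
v = 60.0 * f ** 0.4                                      # m/s, synthetic

b, log_a = np.polyfit(np.log(f), np.log(v), 1)
print(round(b, 2), round(np.exp(log_a), 1))  # recovers b = 0.4, a = 60.0
```

Applied to the real measurements in Fig. 10, the fitted exponent would quantify how far the spring deviates from the Crawford model.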
Discussion and outlook
The experiments shown above offer great potential for hands-on experiments about dispersion at the high school as well as the university level. Visualizing the audible phenomenon with mobile devices helps students understand the concept of dispersion and allows cross connections to other dispersion phenomena, such as in optics (rainbow). The use of very simple, low-cost material in combination with mobile apps offers the potential for individual exploration. By developing and using a more sophisticated experimental setup with external microphones, we demonstrated the reliability of the mobile device setup. The measurement uncertainties are reasonably low for educational purposes. The systematic error in the mobile measurement can potentially be avoided by using the echoes of the signal to measure the time of flight. This will, on the other hand, make the measurement more complicated for students to understand. Therefore, we suggest discussing potential errors and more sophisticated analysis methods after conducting the experiments.
Further experimental studies should be done, for example, on the impact of the tension of the spring. Our studies showed evidence for Crawford’s assumption that tension has no influence on the dispersion. A deeper experimental review of the theoretical model that Crawford suggested for the “Slinky whistlers”1 should be part of further investigation.
The reason that metal springs show dispersion—while those made of plastic do not—lies in the properties of the material: “There is no dispersion in homogeneous amorphous solids such as glass and plastic. … In heterogeneous materials such as cast iron, granite, concrete, etc., dispersion is very large. Considerable dispersion is also observed in most metals, even at a high degree of homogeneity.”18
An alternative way to go further in the analysis of acoustic dispersion in everyday life is possible with ice: if you throw a stone onto a frozen lake, a sound similar to that of a laser blaster is audible. Furthermore, we plan to make invisible sound phenomena visible in real time by using augmented reality technologies, which have already proven suitable for physics education in other contexts.19–21
Conclusion
In order to investigate the “tiuu” sound, we presented an updated version of the “Slinky whistler”1 experiment, using mobile devices to analyze the sound produced by a metal spring. This experiment offers great potential for investigating the phenomenon of dispersion with students at the high school and college level. The easily available material, as well as the additional motivating context of “Star Wars,” fosters enthusiasm for the subject matter in class.
In this paper we discussed an experiment in which the well-known sound of the “Star Wars” laser blaster is produced. However, this analysis does not answer why laser blasters make a “tiuu” sound. The answer is quite simple: filmmaker George Lucas hired the sound designer Ben Burtt to produce the sounds for the “Star Wars” movies. Burtt is the inventor of the famous laser blaster sound, and he came up with it during a backpacking trip by hitting the guy-wire of an AM radio transmitter tower with a hammer.22 Like the metal spring, a guy-wire is a long steel cable and shows the same phenomenon of acoustic dispersion.