As you read this story, your gaze flutters about the page roughly three times a second. But the scene you perceive is fixed firmly in space. Your brain achieves this feat by multiplying the images on each retina by a function that describes where your eyes are pointing. Other examples of sensory multiplication have been found in the animal kingdom. Locusts use it to see and avoid other flying objects; humans and owls use it to localize sound.

Where and how does this multiplication take place? Neurons, it’s usually assumed, integrate inputs from other neurons. Once the combined input crosses a threshold, a neuron fires a voltage spike to communicate with whatever cell is next in line, usually another neuron.

In a landmark 1943 paper, the University of Chicago’s Warren McCulloch and Walter Pitts proved theoretically that a network of integrating neurons can perform any computational operation, including multiplication.¹ Their formalism underlies the artificial neural networks that are currently put to work to predict the weather or stock prices.

But for some time, neuroscientists have suspected that individual neurons can multiply. Like an AND gate, a multiplicative neuron fires only when all its inputs are positive. An additive neuron, by contrast, is more like an OR gate. To fire, it doesn’t require all its inputs to be positive, only that their sum be above a threshold.
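The gate analogy can be made concrete with a pair of toy threshold units. This is purely illustrative, not a model of any real neuron: the point is only that a single strong input can drive an additive unit past threshold, while a multiplicative unit stays silent unless every input is present.

```python
def additive_neuron(a, b, threshold=1.0):
    """Fires (returns 1) when the SUM of its inputs crosses threshold -- OR-like."""
    return 1 if a + b >= threshold else 0

def multiplicative_neuron(a, b, threshold=1.0):
    """Fires only when the PRODUCT crosses threshold -- AND-like:
    one strong input cannot compensate for a missing one."""
    return 1 if a * b >= threshold else 0

# One strong input, one absent input:
print(additive_neuron(2.0, 0.0))        # 1: the sum alone suffices
print(multiplicative_neuron(2.0, 0.0))  # 0: both inputs must be present
```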

Now, from Caltech’s José Luis Peña and Masakazu Konishi, comes the cleanest evidence yet in favor of neural multiplication.² In an experiment of surgical intricacy, the two researchers uncovered the neural mechanism by which barn owls combine time-difference and intensity cues to locate sound sources.

Their experiment not only bolsters the case that some neurons are more than simple adding machines; it also adds a final, physiological touch to a model developed 23 years ago to explain how humans localize sound.

Owls, humans, and other two-eared creatures locate sound sources by exploiting differences in the signals detected at each ear. A sound coming from the right will reach the right ear before the left ear, and will be less intense in the left ear because it’s been partially absorbed by the head. (For more about localizing sound, see Bill Hartmann’s article in Physics Today, November 1999, page 24.)

As figure 1 shows, the barn owl’s ear openings are not at the same level. This asymmetry heightens the barn owl’s ability to localize sound, especially in the vertical dimension. Even in total darkness, an owl can find and snatch a mouse off the ground.

Figure 1. Feathers formerly covered this barn owl’s ear openings (two per ear, above and below the eye), but they’ve been removed, revealing that the owl’s left ear is higher than its right. Barn owls exploit this difference to help them localize sound in the vertical direction.

Konishi has been studying how barn owls localize sound since the 1970s, when he first recognized their potential as experimental subjects. For the acoustical investigator, owls have a key advantage over other animals: Whenever they hear a sound, they turn their heads to face its source. By equipping owls with tiny loudspeakers placed in their ears, Konishi and his collaborators can manipulate arrival time and intensity differences to trick an owl into believing a sound comes from a direction of their choosing. Induction coils fixed to the owl’s head and coupled to an external magnetic field record the direction of the owl’s gaze.

With the loudspeaker–induction-coil approach, Konishi’s team found that owls use interaural time difference (ITD) to determine a sound’s horizontal coordinate, and interaural intensity level difference (ILD) to determine the vertical coordinate. Going deeper—uncovering the physiological mechanisms for sound localization—required looking inside the owl’s head.

In a series of physiological experiments over the past three decades, Konishi and his changing cast of collaborators have discovered that

  • Two parallel physiological pathways process ITD and ILD from the midbrain all the way back to the point where the auditory nerve enters the brain.

  • In a part of the owl’s midbrain called the inferior colliculus resides a mental map of auditory space. Particular regions of the map correspond physiologically to a set of “space-specific” neurons. That is, each neuron is sensitive to signals coming from a particular direction in space.

  • Each space-specific neuron is triggered by a particular combination of ITD and ILD. In essence, ITD and ILD determine a sound’s direction.

Peña and Konishi’s latest paper can be thought of as a logical next step in this series of experiments: identifying what happens in a space-specific neuron when it receives ITD and ILD inputs.

Most studies of individual neurons involve removing the neuron from its host and prodding it into action with external stimuli. But to see the physiological effect of ITD and ILD on a particular space-specific neuron, Peña and Konishi had to work on live owls.

Each owl in the experiment was anesthetized, fitted with loudspeakers, and immobilized. Then, in the most challenging part of the experiment, Peña guided a glass electrode, no more than 1 µm in diameter, through a hole cut in the owl’s skull and into the cell body of a space-specific neuron. A second, reference electrode was inserted under the owl’s skin. Together, the two electrodes measured the electric potential difference V across the cell membrane, which at rest is about −70 mV.

Once the electrodes were in place, the loudspeakers emitted precisely controlled broadband signals, each lasting 100 ms, to sample the ITD–ILD plane. Figure 2 shows some of the results of those trials. When both ITD and ILD are at the values that trigger the neuron under investigation, the membrane depolarizes; V becomes less negative, triggering voltage spikes that are transmitted along the neuron’s axon to the brain’s auditory processor. When either ILD or ITD is unfavorable, or when both are, the membrane is hyperpolarized; V becomes more negative and even less likely than at rest to trigger a spike.

Figure 2. The voltage V across the membrane of a space-specific neuron depends on both interaural time difference (ITD) and interaural intensity level difference (ILD). In the figure, the values of ITD and ILD that the neuron is tuned to have been subtracted. (a) V as a function of time for four ITD–ILD pairs. Only when ITD and ILD are favorable does the membrane depolarize and initiate a voltage spike. (b) The number of voltage spikes as a function of ITD and ILD.

As can be seen in figure 2, the raw data are somewhat noisy. Establishing that the neuron does actually multiply its inputs requires a statistical approach and the testing of alternative models. Following the advice of a mathematician colleague, Peña and Konishi chose a method called singular value decomposition, which quantifies the extent to which a matrix can be represented as a product. They found that the values of V (actually, V minus its resting value) could indeed be fitted with a model that multiplies ITD and ILD. A model that sought to reproduce the data by adding ITD and ILD failed, and a model that combined addition and multiplication was equivalent to just multiplication.
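The logic of that test can be sketched with synthetic data. A matrix of responses built by multiplying an ITD tuning curve by an ILD tuning curve has rank one, so its first singular value carries essentially all of the variance; an additive combination cannot be compressed that way. The Gaussian tuning curves below are invented for illustration and are not Peña and Konishi’s measurements.

```python
import numpy as np

# Hypothetical tuning curves over a grid of ITD and ILD values.
itd = np.linspace(-100, 100, 21)   # microseconds
ild = np.linspace(-20, 20, 21)     # decibels
f = np.exp(-(itd / 40.0) ** 2)     # ITD tuning curve
g = np.exp(-(ild / 8.0) ** 2)      # ILD tuning curve

V_mult = np.outer(f, g)          # multiplicative combination: rank one
V_add = f[:, None] + g[None, :]  # additive combination: rank two

def rank1_fraction(V):
    """Fraction of the matrix's energy captured by its largest singular
    value; 1.0 means V is exactly a product f(ITD) * g(ILD)."""
    s = np.linalg.svd(V, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

print(rank1_fraction(V_mult))  # essentially 1: perfectly separable
print(rank1_fraction(V_add))   # noticeably below 1
```

In the real analysis the matrix entries are the measured values of V, and the question is how close the leading singular value comes to exhausting the data.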

It’s not clear how, at the biochemical level, neurons are able to multiply. Peña speculates that the postsynaptic potentials (PSPs) that carry the ITD and ILD inputs to the neuron’s body contain both inhibitory (that is, hyperpolarizing) and excitatory (depolarizing) signals. Only when the right mix of PSPs arrives at the cell body does the neuron fire.

Determining which receptors and neurotransmitters are involved is on Peña and Konishi’s list of future experiments. But it won’t be easy. Says Konishi: “We’ll have to manipulate the membrane potential externally or inject a drug to stop ions moving in or out.”

That ITD and ILD are multiplied to localize sound was first inferred in 1978 by Richard Stern and Steve Colburn, who were at MIT at the time.³ In humans, as in owls, localization in the horizontal plane is dominated by ITD, which is processed in a system of neurons connected to delay lines. To understand how this mechanism works, suppose that signals from your left ear arrive at a particular neuron after a delay of 10 µs and from your right ear after a 100-µs delay. When those signals, internally delayed, reach the neuron simultaneously, it fires, registering an ITD of 90 µs. Other neurons, with different interaural delays, stay quiet.

This model for processing ITD was devised in 1948 by Lloyd Jeffress.⁴ At frequencies above about 1500 Hz, ITD can’t be used to localize sound because the wavelengths become comparable to or smaller than the human (or owl) head, resulting in phase ambiguities and misleading localization. At those frequencies, ITD hands over the sound-localization task to ILD.
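The Jeffress scheme amounts to an array of coincidence detectors, each with a fixed pair of internal delays; the detector whose delays exactly cancel the interaural time difference is the one that fires. A minimal sketch, with invented delay values:

```python
def registered_itd(left_delay_us, right_delay_us):
    """The ITD (arrival at left ear minus arrival at right ear) for which
    the internally delayed signals coincide at this neuron:
    arrival_left + left_delay == arrival_right + right_delay."""
    return right_delay_us - left_delay_us

def which_neuron_fires(itd_us, neurons):
    """Pick the (left_delay, right_delay) pair that best cancels the ITD."""
    return min(neurons, key=lambda d: abs(registered_itd(*d) - itd_us))

# An array of coincidence detectors with different internal delays (µs):
neurons = [(d_left, d_right) for d_left in (0, 10, 50, 100)
                             for d_right in (0, 10, 50, 100)]

print(registered_itd(10, 100))          # 90, as in the example above
print(which_neuron_fires(90, neurons))  # (10, 100)
```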

But ILD also plays a role below 1500 Hz. Headphone experiments on humans have shown that if the ITD is fixed, varying the ILD can move the sound image left or right.

Twenty-three years ago, Stern and his thesis adviser Colburn (now at Boston University) set out to determine which of several classes of models could best account for how ITD and ILD interact. The data they were trying to explain had been collected five years earlier by Bob Domnitz, another of Colburn’s students. Domnitz asked human volunteers to locate the apparent origin of sounds that had various combinations of ITD and ILD.

According to one model, the main role of ILD was to hasten the generation of neural impulses for the side of the head that receives the louder sound. Raising intensity in effect changed the delay on one side of the head. In another model, ITD and ILD were fundamentally additive. Localization would then correspond to a weighted sum of ITD and ILD in a one-dimensional auditory space. Stern and Colburn found, however, that Domnitz’s data could best be accounted for by a cross correlation function for ITD multiplied by a bell-shaped intensity function that shifts left or right depending on the strength of the ILD.
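The winning model can be caricatured in a few lines: a cross-correlation-like pattern peaked at the ITD, multiplied by a bell-shaped weighting that ILD slides along the internal-delay axis, with the perceived lateral position read off as the centroid of the product. All shapes and scale factors below are invented for illustration; they are not Stern and Colburn’s fitted functions.

```python
import numpy as np

tau = np.linspace(-1000, 1000, 2001)  # internal-delay axis, microseconds

def lateral_position(itd_us, ild_db, shift_per_db=20.0):
    cross_corr = np.exp(-((tau - itd_us) / 200.0) ** 2)  # peaks at the ITD
    # bell-shaped weight, slid along tau in proportion to the ILD;
    # at ILD = 0 it is centered at midline and pulls the estimate inward
    weight = np.exp(-((tau - shift_per_db * ild_db) / 400.0) ** 2)
    activity = cross_corr * weight  # the multiplication
    # centroid of the activity pattern = perceived position (in µs units)
    return np.sum(tau * activity) / np.sum(activity)

print(lateral_position(100, 0))  # set by ITD, pulled toward midline
print(lateral_position(100, 5))  # louder right ear shifts the image right
```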

Remarkably, the two functions that Stern and Colburn derived from Domnitz’s measurements are similar to their equivalents derived 23 years later by Peña and Konishi. “When I read the paper,” says Stern, who is now at Carnegie Mellon University, “I was struck and gratified by the similarity. But we knew it had to be multiplication.”

1. W. S. McCulloch, W. H. Pitts, Bull. Math. Biophys. 5, 115 (1943).
2. J. L. Peña, M. Konishi, Science 292, 249 (2001).
3. R. M. Stern, H. S. Colburn, J. Acoust. Soc. Am. 64, 127 (1978).
4. L. A. Jeffress, J. Comp. Physiol. Psychol. 41, 35 (1948).