One obvious characteristic of the binaural system is the distance between the ears. This distance allows the brain to compare the signal at each ear in two ways: it can measure the difference in phase between the signals at the two ears, and, because the head casts an acoustic shadow over the far ear, it can measure the difference in sound pressure at each ear.
Given perfect phase information from each ear, one could narrow the position of a pulse down to a point on the surface of a hyperboloid. Let the ears be represented by two points on an axis. The difference in phase between the signals reaching each ear tells us the difference between the distances from the source to each ear. This delay is called the interaural time difference (ITD). Recall from conic sections that the locus of points with a fixed difference in distance to two foci is a hyperbola. Rotating this hyperbola about the axis through the two ear points yields a hyperboloid.
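This constraint is easy to check numerically. The sketch below (the ear spacing and speed of sound are assumed round values, not figures from the text) shows that two sources related by a rotation about the interaural axis produce the same ITD, so the ITD alone cannot distinguish points on the same hyperboloid:

    import numpy as np

    C = 343.0    # speed of sound in air, m/s (assumed)
    EARS = 0.18  # ear spacing, m (assumed round value)

    # Represent the ears as two points on the x-axis (the foci).
    ear_l = np.array([-EARS / 2, 0.0, 0.0])
    ear_r = np.array([+EARS / 2, 0.0, 0.0])

    def itd(source):
        """Interaural time difference: the difference between the source's
        distances to each ear, divided by the speed of sound."""
        source = np.asarray(source, dtype=float)
        return (np.linalg.norm(source - ear_l) - np.linalg.norm(source - ear_r)) / C

    # Two sources with the same x-coordinate and the same distance from the
    # x-axis differ only by a rotation about the interaural axis, so they
    # produce identical ITDs:
    print(itd([1.0, 2.0, 0.0]))  # a source out to the front-right
    print(itd([1.0, 0.0, 2.0]))  # the same point rotated about the ear axis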
In practice, however, there are limits to how accurately the brain can process phase information. At high frequencies, most phase information is lost and the brain relies more on pressure differences, as explained below. For a steady, perfectly periodic signal, the brain can deduce directionality only from the initial period of the signal. Once the signal is sustained, the difference in distance from the source to each ear could be the measured phase difference (as a fraction of a cycle) times the wavelength, plus or minus any whole number of wavelengths. Thus, the brain uses phase information only at the onset of a steady signal. This is rarely a problem, however, since signal sources rarely move a wavelength of distance quickly enough to go unnoticed. The loss of phase information is particularly severe at high frequencies, where the signal wavelength approaches the size of the head. The cutoff frequency for phase perception is about 1200 Hz, a wavelength of about 27 cm (Yost 55). This loss is imposed by the brain, either because of the method by which it calculates the ITD (by head size) or because it can no longer get reliable phase information from the ear.
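The ambiguity can be made concrete. The sketch below enumerates the path-length differences consistent with a given phase measurement; the frequencies, phase value, and maximum path difference are illustrative assumptions. Well below the cutoff only one candidate fits between the ears, while well above it more than one does:

    import numpy as np

    C = 343.0        # speed of sound, m/s (assumed)
    MAX_DIFF = 0.18  # largest physically possible path difference, m (assumed)

    def feasible_path_differences(phase_diff, freq):
        """All path-length differences consistent with a measured phase
        difference (radians) for a sustained sine: the true difference is
        known only modulo one wavelength."""
        wavelength = C / freq
        base = (phase_diff / (2 * np.pi)) * wavelength
        candidates = [base + n * wavelength for n in range(-5, 6)]
        return [round(d, 3) for d in candidates if abs(d) <= MAX_DIFF]

    # Well below the cutoff, only one candidate fits between the ears:
    print(feasible_path_differences(0.4, 200.0))   # one answer

    # Well above it, several candidates fit, so the cue is ambiguous:
    print(feasible_path_differences(0.4, 2000.0))  # two answers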
Below are two wav files, each containing two tones of slightly different frequency. The first file puts each tone in a separate channel, so that one tone is heard only by the right ear and the other only by the left. If you listen carefully, you may perceive beats produced by the two off-frequency tones. However, no beats exist at either ear: perceiving them indicates that the brain is keeping phase information from the tone at each ear and combining the two to create the beats. The second file puts both tones in both ears, so that actual beats reach the eardrums. The beats in the first file may not be easy to perceive: the brain's ability to keep phase information is limited by frequency and varies between listeners. Try to hear the beats in the first file; the second is included as a reference.
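A minimal sketch of how such a pair of stereo files could be generated follows; the tone frequencies (440 Hz and 444 Hz), duration, and sample rate are assumed for illustration, not taken from the files above:

    import wave
    import numpy as np

    RATE = 44100           # sample rate, Hz (assumed)
    DURATION = 5.0         # seconds (assumed)
    F1, F2 = 440.0, 444.0  # two slightly different tones (assumed values)

    t = np.arange(int(RATE * DURATION)) / RATE
    tone1 = np.sin(2 * np.pi * F1 * t)
    tone2 = np.sin(2 * np.pi * F2 * t)

    def write_stereo(filename, left, right):
        """Interleave two float signals into a 16-bit stereo wav file."""
        stereo = np.empty(2 * len(left), dtype=np.int16)
        stereo[0::2] = (left * 32767 * 0.5).astype(np.int16)   # left channel
        stereo[1::2] = (right * 32767 * 0.5).astype(np.int16)  # right channel
        with wave.open(filename, "wb") as f:
            f.setnchannels(2)
            f.setsampwidth(2)  # 16-bit samples
            f.setframerate(RATE)
            f.writeframes(stereo.tobytes())

    # File 1: one tone per ear -- any beats heard are created by the brain.
    write_stereo("binaural_beats.wav", tone1, tone2)

    # File 2: both tones in both ears -- real beats reach each eardrum.
    mix = 0.5 * (tone1 + tone2)
    write_stereo("acoustic_beats.wav", mix, mix)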
Interference caused by the head creates a difference in sound pressure level at the two ears. Sound reaches the shadowed ear by diffraction and loses energy along the way. At long wavelengths, however, sound arrives at each ear with nearly equal pressure, because the head is not large enough to attenuate the waves.
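A quick calculation illustrates why. Using a crude rule of thumb (an obstacle casts a significant shadow only when it is comparable in size to, or larger than, the wavelength) and an assumed head diameter of about 0.18 m:

    C = 343.0    # speed of sound, m/s (assumed)
    HEAD = 0.18  # head diameter, m (assumed round value)

    # Crude rule of thumb: the head shadows a wave effectively only when
    # the wavelength is shorter than the head is wide.
    for freq in [100, 500, 1200, 4000, 8000]:
        wavelength = C / freq
        note = "strong shadow" if wavelength < HEAD else "little shadow"
        print(f"{freq:5d} Hz: wavelength {wavelength:.2f} m -> {note}")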
As noted above, the brain relies on the first period of a signal for much of its localization. This is referred to as the law of the first wavefront. It has been demonstrated using two speakers in different positions, sounding the same signal in phase toward the listener. First one speaker sounds the entire signal while the other is silent; then the first speaker's amplitude is gradually lowered to zero as the other is raised to full level. The listener hears no change in the sound's direction, even though the source has moved to another part of the room (Berkley in Yost 253). The law of the first wavefront helps the brain determine distance, and remove ambiguity, in reverberant rooms. Within a small delay, the brain does not interpret reflected or reverberant sound as a different signal source. Rather, it ignores most of the information in reverberant signals and takes its localization cues from the first signal to reach it: the direct, unreflected signal. The brain does gain some information from reverberant signals, however: it compares the sound level of the direct signal to that of the reverberant signals. Since the amplitude of the reverberant signal falls off with distance more slowly than that of the direct sound, a comparison of the two levels gives information about the distance of the auditory event (Blauert 280).
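A rough numerical illustration of this distance cue follows, under two idealizations: the direct pressure falls as 1/r (the inverse-distance law), and the reverberant level is independent of distance (a diffuse field). The reverberant level chosen here is arbitrary:

    import math

    REVERB = 0.05  # reverberant pressure, arbitrary units (assumed constant)

    # Direct pressure falls roughly as 1/r; the reverberant level in a room
    # is roughly independent of distance. Their ratio encodes distance.
    for r in [5.0, 15.0]:
        direct = 1.0 / r
        ratio_db = 20 * math.log10(direct / REVERB)
        print(f"source {r:4.1f} m away: direct-to-reverberant ratio {ratio_db:+.1f} dB")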
Below are six bassoon sounds, in pairs. They were calculated assuming the listener and the bassoon were in a room measuring 10 m x 10 m x 25 m. The listener is located 5 m from the back wall and 5 m from each side wall. The bassoon is also 5 m from each side wall, but either 5 m or 15 m from the listener, as marked. In each case, the reverberant sound was mixed from six reflections; a sketch of one way such a mix could be computed follows the sound files. The first file mixes the direct and reverberant signals; the second has only the direct signals; the third has only the reverberant signals. Notice that the direct signal dies off with distance more quickly than the reverberant signals.
The direct and reverberant signals from the bassoon, 5 m away, then 15 m away
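The exact mixing procedure for these files is not given, but a first-order image-source model is one way a six-reflection mix could be produced. The geometry below matches the text; the ear height, reflection coefficient, sample rate, and speed of sound are assumptions:

    import numpy as np

    C = 343.0     # speed of sound, m/s (assumed)
    RATE = 44100  # sample rate, Hz (assumed)
    REFL = 0.7    # wall reflection coefficient (assumed)

    ROOM = np.array([10.0, 10.0, 25.0])    # room dimensions from the text
    listener = np.array([5.0, 1.5, 5.0])   # 5 m from the back and side walls
    source = np.array([5.0, 1.5, 10.0])    # bassoon 5 m away (z=20.0 for 15 m)

    def image_sources(src):
        """Mirror the source across each of the six room surfaces, giving
        the six first-order image sources (one reflection per surface)."""
        images = []
        for axis in range(3):
            near = src.copy(); near[axis] = -src[axis]                # surface at 0
            far = src.copy(); far[axis] = 2 * ROOM[axis] - src[axis]  # opposite surface
            images += [near, far]
        return images

    def impulse_response(src, seconds=0.5):
        """Direct path plus six reflections, each delayed by its travel time
        and attenuated by 1/distance (reflections also lose energy at the wall)."""
        h = np.zeros(int(RATE * seconds))
        paths = [(src, 1.0)] + [(img, REFL) for img in image_sources(src)]
        for pos, gain in paths:
            dist = np.linalg.norm(pos - listener)
            h[int(round(RATE * dist / C))] += gain / dist
        return h

    # Convolving a dry (anechoic) bassoon recording with this response places
    # it in the room; direct-only or reverberant-only versions follow by
    # dropping terms from `paths`.
    # wet = np.convolve(dry, impulse_response(source))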