
Capturing and re-creating sound

Supplement for chapter 6

All the nuances of sound, every rustle of leaves, call of a wolf, musical passage, or "I love you" ever heard by humans, are contained in just two sound traces, one for each ear. These traces are the key to reproducing the sound electronically. You can get close to the original sound, as perceived "live", by recording pressure variations at the ear to obtain the sound trace, later playing it back using earbud headphones. (The electronics in an iPod turn a compressed version of the sound trace into a voltage trace reproducing the original sound trace. The job of the earbuds is to turn this voltage trace back into pressure variations in the ear canal.)


Archaeoacoustics
 

Some readers might be interested in the subject of archaeoacoustics: what sounds are implied by ancient objects, sound spaces, and the like? In Why You Hear What You Hear we mention two sound phenomena that residents of Chichen Itza must have heard coming from the steps of the temple.

Some people maintain that ancient "accidental" recordings of human singing or conversation may exist, say through the action of a stylus passing through wet clay on a pottery wheel, but this is so unlikely that we hope excessive focus on it does not sully the larger field of archaeoacoustics. The most plausible accidental recording is mentioned in this chapter: the screeching sound of a chisel described by Galileo, which left precise "chatter" marks on a metal plate. If only the plate still existed ... but even if it did, it still would not be a sound recording in the modern sense.

 

Stanford University, in connection with the CCRMA, supports the Chavín de Huántar Archaeological Acoustics Project.


Sound files

The earliest recording of the human voice, the 1860 "Au Clair de la Lune" phonautogram, made 17 years before Thomas Edison invented the phonograph, was rediscovered by David Giovannoni in 2008. Originally presented in the form heard here, it is now usually played back more slowly, giving the impression of a man's voice, which is probably correct.

[Sound file: 1860 Clair de la Lune, 0:57]

Reproduction and sampling fidelity


In this chapter we raise the issue of how often a digital recording method needs to sample the sound source. Clearly sampling too infrequently leads to distortion: a sinusoid cannot be rendered faithfully if it is sampled fewer than twice per period (the Nyquist criterion). This is illustrated, and can be heard, in a CDF applet from the Wolfram Demonstrations Project, found at

http://demonstrations.wolfram.com/PureTonesWithSampleRate/
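To make the effect concrete, here is a minimal sketch in Python (our own illustration, not part of the Demonstration, assuming NumPy is available) showing that a tone sampled below the Nyquist rate produces exactly the same samples as a lower-frequency "alias":

import numpy as np

f_true = 6000.0   # tone frequency, Hz
rate = 8000.0     # sampling rate, Hz; Nyquist limit is rate/2 = 4000 Hz
n = np.arange(32) # sample indices

samples = np.sin(2 * np.pi * f_true * n / rate)

# The identical samples are produced by a tone at f_true - rate = -2000 Hz,
# i.e. a 2 kHz tone (with flipped phase):
alias = np.sin(2 * np.pi * (f_true - rate) * n / rate)

print(np.allclose(samples, alias))  # True: the 6 kHz tone aliases to 2 kHz

Since 6000 Hz exceeds half the 8000 Hz sampling rate, nothing in the samples distinguishes the original tone from its 2 kHz alias, which is what the ear hears on playback.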

 

History

A good and relatively brief website with illustrations about the history of sound recording can be found at

www.recording-history.org

 

My Fair Lady and voice training


A scene from the Warner Bros. movie My Fair Lady, available on YouTube, shows an apparatus that Eliza Doolittle uses early in the film to try to match sound traces of "properly" spoken words, under the tutelage of Prof. Higgins. This is what Alexander Graham Bell did in real life for the deaf. We remark, though, that matching the waveform of a sound trace is not required for a word to be perceived as "correctly spoken"; matching a sonogram would be much better!


Loudspeakers

The subject of loudspeakers is vast, but we already know a great deal from earlier sections. One key lesson is the unique property of the horn: ideally it has no resonances of its own to color the sound, very much unlike a clarinet or a trumpet, which have strong resonances. The horn coupled to a small "reproducer" chamber is a near necessity when the source power is weak yet loud sound is desired; the prime example is the gramophone. The horn loudspeaker is not forgotten, but it is often superseded by other designs that are far less efficient, something that hardly matters when more than enough power is available.


Making a good loudspeaker requires the skills of the best acoustical engineers, but there are a few simple principles that are easily understood. One of them has to do with the monopole-dipole issue. If a speaker cone vibrates in open air, it is a dipole source: the compression in front of the cone's direction of motion is accompanied by rarefaction behind it. The two largely cancel each other, reducing the sound energy radiating away from the speaker cone and leading to poor efficiency. This is especially true in the bass, where the wavelength is longest (over 3 meters) and the front and back of the cone are separated by only a tiny fraction of a wavelength, making the cancellation nearly complete. Even at much shorter wavelengths, as seen in Ripple simulations, an undesirable interference pattern can be set up, leading to regions of low sound amplitude in certain directions. This is solved by enclosing the speaker in a housing, preventing the production of canceling, out-of-phase waves. The result is a large increase in loudness.
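As a rough illustration of the cancellation (our own sketch, with assumed dimensions, not a calculation from the book), one can model the front and back of an unbaffled cone as two point sources of opposite phase and compare their combined on-axis pressure with that of a single, baffled source:

import numpy as np

c = 343.0   # speed of sound in air, m/s
d = 0.05    # assumed front-to-back separation of the cone, m
r = 2.0     # listener distance, m

for f in (50.0, 500.0, 5000.0):
    k = 2 * np.pi * f / c                        # wavenumber
    mono = abs(np.exp(1j * k * r) / r)           # single (baffled) source
    dip = abs(np.exp(1j * k * r) / r
              - np.exp(1j * k * (r + d)) / (r + d))  # opposite-phase pair
    print(f"{f:6.0f} Hz: unbaffled/baffled amplitude = {dip / mono:.3f}")

# At 50 Hz (wavelength about 7 m) the two waves nearly cancel, leaving
# only a few percent of the baffled amplitude; by 5 kHz the separation
# is comparable to a wavelength and the cancellation disappears.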


The subject of ported loudspeakers comes up in chapter 13. Opening the inside chamber of the loudspeaker to the outside with a port would seem to bring back the short-circuiting, but in fact it turns the enclosure into a Helmholtz resonator. Operated above its resonance frequency, the air in the port oscillates in phase with the speaker cone, boosting the sound. See chapter 13, section 7.
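For readers who want numbers, here is a back-of-envelope sketch (the enclosure and port dimensions are assumptions chosen for illustration) using the standard Helmholtz resonance formula f = (c/2π)·sqrt(A/(V·L)), with A the port area, V the box volume, and L the effective port length:

import math

c = 343.0                # speed of sound, m/s
V = 0.03                 # enclosure volume, m^3 (30 liters, assumed)
radius = 0.03            # port radius, m (assumed)
L_port = 0.10            # physical port length, m (assumed)

A = math.pi * radius**2          # port cross-sectional area
L_eff = L_port + 1.7 * radius    # common end correction for a flanged tube

f_res = (c / (2 * math.pi)) * math.sqrt(A / (V * L_eff))
print(f"Port (Helmholtz) resonance ~ {f_res:.0f} Hz")
# Roughly 43 Hz for these dimensions; above this frequency the port
# output reinforces, rather than cancels, the cone's radiation.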


The body of a kettle drum performs the same baffle function: the instant the membrane is struck by the stick, the body shields the rarefaction created above the membrane from the compression formed below it, and vice versa as the membrane subsequently vibrates up and down.

 

Sonification - listening to the data

Some sound files on this site were generated by taking data and turning it into sound, for example the sound file below, from the section "Vocal tract resonances (formants) via Ripple" in chapter 17.

[Sound file: falstadvocalsim]

However, this was done with data that was specifically created to simulate a sound-generating system (the vocal tract). Sonification is the art of taking other types of data and turning them into human-audible sound, so as to take advantage of our considerable capacity for audio pattern recognition. An unusual but perfectly legitimate example was discussed in connection with the sound file below, of the hunt for individual atoms in a "quantum corral" (see chapter 9), by a method exactly analogous to using sound to detect the presence of objects or walls.

[Sound file: STM sounds, 0:39]
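Sonification of a simple kind is easy to try at home. The sketch below (our own illustration, with made-up stand-in data) maps a data series onto pitch and writes the result to a WAV file, using Python's standard wave module and NumPy:

import wave
import numpy as np

rate = 44100
# Stand-in data: a slow oscillation plus noise; substitute any 1-D series.
data = np.sin(np.linspace(0, 8, 200)) + 0.2 * np.random.randn(200)

# Map each data point to a frequency between 300 and 1200 Hz,
# holding each pitch for 30 ms.
lo, hi = data.min(), data.max()
freqs = 300 + 900 * (data - lo) / (hi - lo)
t = np.arange(int(0.03 * rate)) / rate
tone = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

with wave.open("sonified.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(rate)
    w.writeframes((tone * 0.5 * 32767).astype(np.int16).tobytes())

Trends, periodicities, and outliers in the data become pitch contours the ear picks out readily, which is exactly the capacity for audio pattern recognition that sonification exploits.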

An excellent May 2012 article from Physics Today, "Shhhh. Listen to the data", is available online. It refers to a sonification (and animation) of black hole simulations that can be found on the web in various forms. Another sonification project comes from the Large Hadron Collider, the biggest particle accelerator in the world.

 

The Moog Synthesizer

Dr. Robert Moog (May 23, 1934 – August 21, 2005) was a pioneer of electronic music. His synthesizer broke new ground; this video of him demonstrating his production synthesizer is very well suited to this book, although some of the subjects mentioned are treated in detail later in Why You Hear What You Hear.
