
The Monitor vs. The Room

May 5, 2006

Conventional wisdom dictates that the quality of our studio loudspeaker monitors is the primary measure of how well we can know what’s going on with the music — i.e., how well we can assess the tonal balance in our mixes, how much detail we can hear in order to identify problems, and so on. But we often overlook the huge effect our monitors’ interaction with the room can have on their accuracy. A good studio monitor is, by definition, one that converts electrical signals into their exact acoustical equivalents with no coloration. But having “good” studio monitors can’t alone ensure that the complex acoustical signals they’re creating will arrive at our ears with the same frequency and phase relationships they started with. Indeed, we often think of the loudspeaker as the final link in the audio reproduction chain between the recorded material and our ear-brain mechanisms; but actually, the room is the final link.

This is not new information, really: We’ve witnessed in recent years the rapid proliferation of direct-field monitors to address room interaction problems in project and commercial recording studios alike. (Direct-field monitors aim to maximize the ratio of sound that arrives at your ears directly to the sound that reflects off adjacent surfaces before arriving at your ears — ostensibly removing the effects of the room.) But direct-field monitoring isn’t a panacea for the effects of a poorly configured room, or poorly placed monitors.

You’ve probably observed that speakers can sound different in different rooms, and in different positions within the same room. There are two main causes for this variable performance: (1) at low frequencies, the room actually has a lot more to do with the way a speaker sounds than does its inherent design; (2) the spectral output from a speaker varies at different angles and causes reflections off of hard surfaces in the room, degrading the sonic image. There are plenty of well-documented methods for correcting this second issue, and most of us have absorbers and diffusers in our studios to handle echoes and ambience above 300Hz. But solutions for controlling low-frequency problems in listening rooms are more arcane and difficult to implement. So let’s address the low-frequency issues to get you thinking about ways to improve them.

The Case of the Missing Cello

Recently, I had to audition some music for a client at his home, so I lugged my trusty direct-field monitors, laptop, and Firewire audio interface over there to present my music to him using a [relatively] high-quality playback system. I set up the monitors in his den about two feet from the front wall and six feet from the back wall. When listening, my client remarked that there was no low-mid range in the music; he said the cellos were inaudible and the contrabass lacked articulation. I checked the settings on the monitors, in my DAW software, and on the interface. Everything was in order. But he was right — there was no low end. How could this be? The mix I made in my well-calibrated studio with my “very flat” monitors was balanced and even — the cellos and contrabass were clear and defined. While it’s no secret that a room can affect the tonal balance of the speakers, is it possible that the room modes of my client’s den actually caused the bass frequencies to disappear? In a word: YES.

When loudspeakers’ low-frequency drivers move back and forth, they modulate the air pressure in front of and behind them. For simplicity, let’s concentrate on just the sound waves in front of the speakers.

Sound reflects back and forth between at least two parallel surfaces in a rectangular room like this. At certain frequencies the direct and reflected sounds conspire to form standing waves in which those frequencies become amplified; at other frequencies, the direct and reflected sounds cancel each other out, creating nulls. What we hear at those frequencies depends on where we, and the speakers, are located. So when my client remarked that there was no bass in the music, what he was hearing was the effect of severe cancellation due to the room modes created by the particular dimensions of his room, the location where I had placed the speakers, and our listening position. Furthermore, the peaks created by phase-coherent buildup in the sound waves no doubt exacerbated the audible effects of the cancellation.

The relationship between frequency and wavelength is an inverse function:


λ = c / f


where λ is wavelength, f is frequency, and c is the speed of sound (1125 ft/sec at sea level). So a frequency of 80Hz corresponds to a wavelength of about 14 feet; 140Hz is about eight feet, and so on. In the case of the missing cello, we can surmise with first-order approximations that the distance between the speakers and the on-axis reflecting walls resulted in severe cancellation for some frequencies of sound in the 55–95Hz band; we heard this as “no bass.” Moreover, we can also estimate that some other frequencies of sound in the 140–190Hz range were amplified — possibly resulting in the “muddy” quality of the contrabass. So much for my “high-quality” playback system.
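As a quick sanity check, here is a minimal Python sketch of that wavelength calculation, using the same 1125 ft/sec figure for the speed of sound:

```python
# Wavelength from frequency: lambda = c / f, with c = 1125 ft/sec at sea level.
def wavelength_ft(freq_hz, c=1125.0):
    """Return the wavelength in feet of a tone of freq_hz Hertz."""
    return c / freq_hz

print(round(wavelength_ft(80), 1))   # 14.1 ft -- "about 14 feet"
print(round(wavelength_ft(140), 1))  # 8.0 ft -- "about eight feet"
```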

Sine Language

What we identify as sound is just time-varying pressure changes on our ears. Engineers often use single-frequency sine waves to analyze audio gear, even though nobody actually listens to these signals for pleasure. Since sine waves are the building blocks of all sound, musical or not (or any signal, in fact), let’s use them to analyze the problem of room resonances. We can represent a single-frequency tone as follows:


x(t) = A sin(2πf0t + φ)

where

ω0 = 2πf0

Here, A is the amplitude (>0), f0 is the frequency (in Hertz), and φ is the phase angle (in radians). With this basis, we can express any note from any musical instrument as a sum or integral of sine waves having different amplitudes, different frequencies (multiples of the fundamental angular frequency ω0), and different phase angles:


y(t) = a1 sin(ω0t + φ1) + a2 sin(2ω0t + φ2) + a3 sin(3ω0t + φ3) + …


[The pitch of the musical note is related to ω0, and the harmonics (represented by a2, φ2, a3, φ3, etc.) affect the timbre. In other words, an ω0 of 880π radians per second would correspond to concert A (440Hz), and the harmonic coefficients account for why that note sounds different on an oboe than on a trumpet.]
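A few lines of Python make this harmonic-sum idea concrete. The three-term amplitude/phase recipe below is invented purely for illustration — real instruments have far richer (and frequency-dependent) harmonic content:

```python
import math

def note(t, w0, harmonics):
    """Evaluate y(t) = sum over n of a_n * sin(n * w0 * t + phi_n),
    where harmonics is a list of (amplitude, phase) pairs for n = 1, 2, ..."""
    return sum(a * math.sin(n * w0 * t + phi)
               for n, (a, phi) in enumerate(harmonics, start=1))

w0 = 2 * math.pi * 440.0  # concert A (440Hz), i.e. w0 = 880*pi rad/sec
recipe = [(1.0, 0.0), (0.5, 0.0), (0.25, 0.0)]  # made-up harmonic recipe
print(note(0.0003, w0, recipe))  # instantaneous pressure at t = 0.3 ms
```

Swapping in a different recipe (different a_n and φ_n) while keeping w0 fixed is exactly the oboe-versus-trumpet distinction: same pitch, different timbre.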

Reflect on This

We can use simple sine waves with varying frequencies and phase angles to examine what happens when you move your speakers (or change your listening position) in a given room. Here’s an example of a speaker facing a wall 16 feet away from its front baffle. Let’s represent the variation in air pressure, which is how our ears perceive sound, using the function p(t) = A sin(2π · 140t). In other words, it’s a sine wave with an arbitrary amplitude A, a frequency of 140Hz, and a corresponding wavelength of about eight feet; note, also, that the wave front leaves the speaker with a 0-radian phase angle.

As you can see, the sound wave hits the back wall just as its pressure oscillation crosses the horizontal axis. It reflects off the wall at some lower amplitude (presumably some sound gets absorbed into the wall) and it cancels out the direct sound. This “destructive interference” results in a greatly attenuated sound level at that frequency. No matter where you are in the path of that wave, you’ll barely be able to hear it. (Could this be what happened to my cello?)

So what happens when we move the loudspeaker two feet closer to the back wall, so it’s now 14 feet away?

Here, the sound wave reaches the back wall on its bottom compression peak (3π/2 radians into its cycle) and it reverses direction in a way that combines with the direct sound — creating “constructive” interference. In this case, the signal will be louder. Note, however, that if your listening position is two feet from the back wall, you’ll be right in the dead spot — so your ears won’t detect any pressure variation and you won’t hear the sound!
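To make the two placements concrete, here is a toy phasor model in Python: the direct wave plus a single reflection off the back wall. The rigid-wall reflection (pressure reflected without sign inversion) and the 0.8 reflection coefficient are simplifying assumptions of mine, not measurements — but the model reproduces the dead spot two feet from the wall:

```python
import cmath
import math

def pressure_amplitude(freq_hz, speaker_to_wall_ft, listen_from_speaker_ft,
                       reflect=0.8, c=1125.0):
    """Steady-state amplitude at the listening point from the direct wave
    plus one reflection off a rigid back wall. reflect < 1 models some
    absorption at the wall. A sketch, not a full room-mode simulation."""
    k = 2 * math.pi * freq_hz / c                          # wavenumber, rad/ft
    d = speaker_to_wall_ft
    x = listen_from_speaker_ft
    direct = cmath.exp(1j * k * x)                         # path length: x
    reflected = reflect * cmath.exp(1j * k * (2 * d - x))  # path: to wall and back
    return abs(direct + reflected)

# Speaker 14 ft from the wall; tone with an exactly 8 ft wavelength (140.625Hz).
print(round(pressure_amplitude(140.625, 14, 12), 3))  # 0.2 -- dead spot 2 ft from wall
print(round(pressure_amplitude(140.625, 14, 14), 3))  # 1.8 -- buildup right at the wall
```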

It’s fascinating to check out what happens when you vary the sine wave’s phase angle, φ, or when you use more complex signals — the additive outcomes can be surprising. The problem becomes even more interesting when you factor in reflections off other parallel surfaces like sidewalls, ceiling, floor, and the inevitable low-frequency buildup you get from the front wall if your speakers are too close.

The Muddy Truth

What’s the solution? How do we ensure that low-frequency sound will be balanced — if not throughout the room, at least from our listening position? Judicious use of bass traps in your room can help, but the most important thing is to put serious thought into how your studio monitors’ location will influence their interaction with the room and your listening position. Simply relying on the fact that you have inherently flat speakers doesn’t mean the sound will arrive at your ears in the same proportions as the signal that you input into the speakers. You can accept the fact that room resonances are inevitable, and try to make the best of it.

The Rayleigh equation can give you a first order prediction of your room resonances:


f = (c / 2) √[ (m/L)² + (n/W)² + (o/H)² ]


where f is a resonant mode (in Hertz) and c is the speed of sound (1125 ft/sec at sea level); L, W, and H are the length, width, and height respectively of your room; and m, n, and o are non-negative integers (0, 1, 2, …), not all zero. Remember that there are millions of room modes in a typical small room — and if the length of your room is its largest dimension, the lowest frequency mode occurs for m=1, n=0, and o=0.
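The Rayleigh prediction is easy to tabulate in a few lines of Python. The 16 × 12 × 8 ft room dimensions below are hypothetical, chosen only for illustration:

```python
import itertools
import math

def room_modes(L, W, H, c=1125.0, max_index=2):
    """First-order Rayleigh prediction of room resonance frequencies (Hz)
    for a rectangular room of length L, width W, height H (in feet).
    Returns (frequency, (m, n, o)) pairs sorted from lowest to highest."""
    modes = []
    for m, n, o in itertools.product(range(max_index + 1), repeat=3):
        if (m, n, o) == (0, 0, 0):
            continue  # skip the trivial all-zero index combination
        f = (c / 2) * math.sqrt((m / L) ** 2 + (n / W) ** 2 + (o / H) ** 2)
        modes.append((f, (m, n, o)))
    return sorted(modes)

# Hypothetical 16 x 12 x 8 ft room: the lowest mode is the length mode (1, 0, 0).
for f, mno in room_modes(16, 12, 8)[:5]:
    print(f"{f:6.1f} Hz  {mno}")
```

Raising `max_index` enumerates higher-order modes; in practice the low-order ones are the troublemakers, since absorbers and diffusers handle the dense upper range.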

After a little bit of thought and preparation, you can start experimenting with different locations for your studio monitors. There are some general rules of thumb to get you started:

1) Try to counteract room resonances by not duplicating distances; in other words, if your left monitor is four feet from the back wall and three feet from the side wall, try to use different distances to adjacent walls for your right monitor.

2) By carefully evaluating the negative effects of the various room boundaries, try to position your listening spot where the bass is fairly smooth — keeping in mind that it can never be perfect. The key is to make your minimal-reflection area as large as possible. The best way to do this is to use the Rayleigh equation to calculate the theoretical resonance modes in your room and then experiment with moving your listening location using your ears as a guide.

3) Be wary of the bass buildup that inevitably occurs when you place a bass-reproducing speaker in a corner of your room; doing this will excite every resonance in the room — which is not necessarily a bad thing, but you need to recognize what you’re doing.

4) Pay attention to any peculiarities in the harmonic content of the bass clef of the music you’re mixing or recording. For example, if you know you’ve got room modes around 55Hz or 110Hz and the upright bass on your song is frequently hitting the low A: Keep in mind that the instrument’s fundamental and natural harmonics (like perfect fifths) can cause it to ring out or die a sudden death. This can be useful information to have while you’re trying to balance the levels of the different instruments in your material.

5) Some engineers like to use room equalization to correct resonant modes. This tactic can be quite successful — and the aforementioned calculations coupled with trial and error can be the ticket to minimizing an inaccurate low-frequency room response.
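Rule 4 above can be automated with a small helper: given a note’s fundamental and a list of known (or calculated) room-mode frequencies, flag the harmonics that land on a mode. The function name, the 3Hz tolerance, and the 55/110Hz mode list are my own illustrative choices:

```python
def harmonic_clashes(fundamental_hz, mode_freqs_hz, tolerance_hz=3.0, n_harmonics=6):
    """Return (harmonic number, harmonic frequency, mode frequency) for every
    harmonic of the note that lands within tolerance_hz of a room mode."""
    clashes = []
    for n in range(1, n_harmonics + 1):
        h = n * fundamental_hz
        for mode in mode_freqs_hz:
            if abs(h - mode) <= tolerance_hz:
                clashes.append((n, h, mode))
    return clashes

# Low A on an upright bass (A1 = 55Hz) against hypothetical modes at 55 and 110Hz:
print(harmonic_clashes(55.0, [55.0, 110.0]))  # fundamental and 2nd harmonic both clash
```

Depending on where you sit relative to the peaks and nulls of those modes, the flagged harmonics are the ones that will ring out or die a sudden death.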

The sad truth is that there’s no practical way to eradicate completely the low-frequency resonances inherent in any given room. All you can do is be mindful of the physics of sound and how it affects what you’re hearing. If you do that, you’ll be ahead of the game.

Vivek Maddala is a national award-winning composer, multi-instrumental performer, and producer. He also develops products for M-Audio in his spare time.
