Going Beyond Presets

July 1, 2009

In the late '60s and early '70s, there were probably fewer than 20 synthesizers on the market, all costing in the tens of thousands of dollars. Today, life is good: There are hundreds of electronic musical instruments available, both virtual and hardware-based, employing a wide variety of synthesis and sampling technologies, and costing somewhere between nothing and a few thousand dollars. Yet, with this embarrassment of riches lying at our metaphorical (and, occasionally, literal) feet, most people barely do more than choose the preset closest to their needs at the moment.

There are, of course, reasons for this. People who are impatient or under deadline pressure (or both) don't have the motivation to spend time programming sounds. With so many presets available, many people don't see a need to learn how to program an instrument themselves. And then there is the double-edged sword of flexibility: The more options that exist in an instrument, the more parameters there are that must be dealt with. Further, the functions of some parameters are not particularly intuitive, making their uses harder to grasp.

In short, taking on a new electronic musical instrument usually generates tremendous excitement at the possibilities, but that excitement can be dampened by terror at how overwhelming it all seems. Still, while good presets are critically useful, there's more fun, musical richness and distinctiveness in rolling your own sounds.

To get some tips and perspectives that you can take into your own studio and use, I spoke with five experts who make their livings using, programming and, in many cases, even designing electronic instruments. My particular interest was the intersection of theory and practice — the point at which training and knowledge come in contact with creativity and instinct.

This article takes a fairly broad view of synth programming, but much more material, including instrument-specific tips and tricks, can be found at emusician.com/bonus_material. There you'll also find “EM's Panel of Synth Experts,” which offers background on the sound designers quoted in the story: Eric Persing, Jack Hotop, Ian Boddy, John Lehmkuhl and Martin Jann.

Start From Scratch

The interviewees were unanimous on several points. The first, as Jann, the founder of Pixelsonic, puts it: “Knowledge of technical principles is important. If you have a good camera but don't know anything about photography, you can try to make pictures and may have luck shooting a good one, but you can only make good pictures at a good rate if you know the principles and techniques of photography. I think it's the same with these kinds of instruments.” Sound designer Boddy seconds the thought, saying, “Being able to dissect a sound into its constituent pieces is important to knowing how to build up patches from scratch.”

Whether you have a hardware or virtual synth, the way to really begin understanding its sonic potential is to turn off its effects, remove modulations and generally simplify the signal path to see how much you can get from its raw elements. “I think the way most people learn sound design is that they tweak presets,” explains Persing, the founder of Spectrasonics. “That's fine, but then they're working backward: They enter in from the finished sound. That's useful, but if you don't have some time where you do the ‘strip it down and see if I can re-create this sound that I heard or this sound in my head,' then it's difficult to really learn the subtleties of what an instrument does or what it's good for, or to be able to apply the sound design knowledge you get from one instrument to another.”

Persing recommends Roland's classic Juno 60 synthesizer as a good launch pad for learning to program sounds. “If you look at the spec for the Juno 60, it doesn't have anything,” Persing says. “It has one oscillator, a sub oscillator, one Moog-like filter, a preset chorus with two switches and one envelope generator. That's it. There's a finite amount of things you can do with a Juno 60, but it's much more than what most people realize. So you turn off all of the effects and see how far you can go with one oscillator. Then you start to build onto that.”
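To make the strip-it-down idea concrete, here is a minimal sketch (in Python with NumPy) of a voice reduced to the raw elements Persing lists: one sawtooth oscillator, one lowpass filter and one envelope shaping the amplitude. The implementations and constants are generic illustrations to experiment with, not Roland's circuits.

```python
# A minimal one-oscillator voice: oscillator -> filter -> envelope.
# Everything here is an illustrative stand-in, not the Juno 60 itself.
import numpy as np

SR = 44100  # sample rate in Hz

def saw_osc(freq, dur):
    """Naive sawtooth oscillator (aliased, but fine for experimenting)."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(signal, cutoff):
    """Very simple one-pole lowpass, a stand-in for a synth's filter."""
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    out, y = np.zeros_like(signal), 0.0
    for i, x in enumerate(signal):
        y = (1.0 - a) * x + a * y
        out[i] = y
    return out

def adsr(n, attack=0.01, decay=0.1, sustain=0.7, release=0.3):
    """One envelope generator, applied here to amplitude."""
    a, d, r = int(SR * attack), int(SR * decay), int(SR * release)
    s = max(n - a - d - r, 0)
    return np.concatenate([
        np.linspace(0, 1, a),
        np.linspace(1, sustain, d),
        np.full(s, sustain),
        np.linspace(sustain, 0, r),
    ])[:n]

note = saw_osc(110.0, 1.0)            # one raw oscillator, no effects
note = one_pole_lowpass(note, 800.0)  # one filter
note *= adsr(len(note))               # one envelope generator
```

Start with the raw oscillator alone, then add the filter, then the envelope, listening at each stage to what each element contributes.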

This approach stems, in large part, from these sound designers having started out when there were no presets. “Growing up in the days of the birth of analog synthesis and having modular synths was a good learning experience for me that probably helped me find the niche that I'm in,” explains Hotop, a pre-eminent programmer at Korg.

“When I started, there was no title of ‘sound designer'; you played the synthesizer,” agrees Persing. “If you played the synthesizer, there was some level of sound design knowledge because there were no presets. The idea of separating out sound design as its own discipline is very new.”

Take Stock of Your Synth

Understanding your synth's resources can point you in the right direction and help you avoid wasting time trying to make an instrument do something for which it is ill equipped.

FIG. 1: In Native Instruments' Absynth 4, the signal path is user-definable. Each of the three oscillators feeds two modules that can be defined as a filter, modulator or waveshaper. There are two definable modules in the Master section, as well.

“One of the things to look at is the instrument's architecture. Often, that can be found in the materials the manufacturer has online,” suggests programmer and PlugInGuru.com host John “Skippy” Lehmkuhl. “With Absynth 4, Native Instruments changed the instrument's signal path. It used to have an oscillator going into a filter, followed by a ring modulator, then a waveshaper at the last output stage that you could run sounds through, and finally another filter. Now it's got this modular concept in which the function of each one of these blocks can be interchanged, so that instead of being an oscillator going into a filter, you could change it to feed the waveshaper [see Fig. 1]. This is the case for each of the three oscillators in Absynth, so that gives you the idea that there's a huge amount of programming power in there.”
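As a rough illustration of that modular concept (and not NI's actual implementation), the sketch below treats each block after the oscillator as an interchangeable function, so that swapping entries in a list re-patches the signal path without changing any other code.

```python
# Interchangeable processing slots: each block in the chain can be a
# filter, ring modulator or waveshaper. All functions are illustrative.
import numpy as np

SR = 44100

def lowpass(signal, a=0.1):
    """Simple one-pole smoothing filter."""
    out, y = np.zeros_like(signal), 0.0
    for i, x in enumerate(signal):
        y += a * (x - y)
        out[i] = y
    return out

def ring_mod(signal, freq=150.0):
    """Multiply by a sine, producing sum and difference frequencies."""
    t = np.arange(len(signal)) / SR
    return signal * np.sin(2 * np.pi * freq * t)

def waveshaper(signal, drive=3.0):
    """Soft-clipping distortion via tanh."""
    return np.tanh(drive * signal)

def run_chain(signal, slots):
    """Feed the signal through whatever modules fill the slots."""
    for process in slots:
        signal = process(signal)
    return signal

osc = np.sin(2 * np.pi * 220.0 * np.arange(SR) / SR)
out_a = run_chain(osc, [lowpass, waveshaper])   # filter -> waveshaper
out_b = run_chain(osc, [waveshaper, ring_mod])  # waveshaper -> ring mod
```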

FIG. 2: In Native Instruments' FM8, a drop-down menu of column names lets you select “Author” as one of the columns. This allows you to group presets by the sound designer who created them, making it easy to audition everything made by someone whose tastes you find appealing.

Familiarizing yourself with the individual character of the components in each instrument is crucial. “One of the first things I do is listen to the oscillators — raw, on their own with no added filtering or effects,” sound programmer Boddy says. “This is one of the most difficult things to get right in soft synths. It's not essential for them to be exactly like a Moog or whatever, but I like to hear how they sound as that's the starting point and gives me a good idea of how rich and interesting I can make the sounds.

“After that, I explore the filters to see how they shape the sound,” Boddy continues. “These days, most filters in soft synths are pretty good except when it comes to self-oscillating stuff; they still seem to fall down at that hurdle. Once I know how the oscillators and filters sound, I can start to program patches.”

Persing also listens for an instrument's essential sound quality. “Is the overall character of an engine glassy or smooth sounding? Does it have an edge to it? The [Roland] D-50 would be an example where the high end of the synth is interesting: There's a raspiness to it, but it's musical. It's not a particularly fat-sounding synth, so I'm not going to spend a lot of time trying to get really fat synth basses out of a D-50; I'm going to put energy into getting the more glassy kinds of colors and ethereal things. There are other synths that really don't do that kind of stuff at all, but are great at a ‘squelchy' thing, or do something wonderful when the filters are overdriven.”

Don't take the D.I.Y. focus of this article, or even Persing's comment about learning synthesis, to mean that exploring presets has no value for learning how to make your own sounds. After all, the presets are the result of people like Persing, Hotop and the rest going through the learning process themselves with that instrument.

“Companies usually hire a number of programmers to work on a new synth, and, if it's really a great synthesizer, every programmer that sits and approaches it is going to make it speak in a different way,” reasons Lehmkuhl. “It's not documented, but there's a way to set up the columns in Native Instruments' FM8 to list the programmer for each patch. Then you can click the programmer list and it will group all of a certain programmer's patches together, so you can go through, and say, ‘Oh, this guy did this really cool sound.' Now you can go and find out what other sounds he did, and, chances are, those are the kinds of sounds that will really speak to you because they rang your bell in the first place [see Fig. 2].”

Hotop emphasizes asking yourself questions from the standpoint of usability. “Does it play well? Does it sound musical? Does it feel good playing it from a keyboard? You make the decision on what the music is calling for,” he says. “Is this sound working for me? Break it down, and say, ‘What aspect do I like about it? What works, what doesn't work?' Think in terms of the music, the part that's being played, and always try to walk a mile in the other guy's shoes if you're dealing with emulative sounds. Synth stuff is a whole other strategy.”

Add Life to Sounds

The eternal quest for synthesists is to produce sounds as responsive and expressive as those of acoustic instruments. Because those qualities don't come naturally to electronic instruments, sound designers use a multitude of techniques to add variation.

“It's important to understand the synthesizer as a musical instrument,” Jann says simply. “Music itself has been made for hundreds of years with things that generate sound by putting energy into materials that vibrate. A flute has a column of air, a guitar has a plucked string; it's always something solid and not variable. People have learned to make music with things that have a certain set of fixed parameters and very few variable parameters — to put it in the language of synthesis.

“So if I create a sound, it's very important for me to not make it too variable,” Jann continues. “I start with something fixed and add more and more parameters that I can control to make it a musical instrument that I can play. My experience is that if I have a sound that gives me the feeling I can learn to play it as a musical instrument, that helps me use it for my music. Conversely, a sound that doesn't fall into this category and makes me feel I can't control it because it has too many variable parameters is probably not as versatile or usable for me as a musician and composer.”

FIG. 3: A screen from the Korg M3 showing, in the second row of the EG Level/Time Modulation section, key tracking (in this case, filter key tracking) being applied to the decay, sustain and release of a marimba sound to give shorter decays for higher notes.

Injecting life into sampled sounds often comes down to making every note sound different. “One of the most important things is enveloping,” Lehmkuhl states. “I've always loved Korg's envelopes because they have separate modulation of the levels and times of the envelopes. For instance, if you're making a marimba sound, you can change the time of the decay [with key tracking] to be long at the bottom of the keyboard and short at the top of the keyboard, which mimics the behavior of a real marimba. What's really critical for me, also, is to apply a little bit of the velocity values to the attack time of an envelope [see Fig. 3].” This tactic involves configuring the modulation such that higher-velocity values produce shorter attack times, so that harder playing results in sharper attack transients and gentler playing makes softer attacks.
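Here is a hedged sketch of those two envelope moves, with made-up scaling constants: key tracking interpolates the decay from long at the bottom of the keyboard to short at the top, and higher velocities shorten the attack.

```python
# Per-note envelope-time modulation, in the spirit of Lehmkuhl's marimba
# example. The ranges and depths here are illustrative starting points.
def envelope_times(note, velocity, base_attack=0.020, base_decay=2.0):
    """note: MIDI note 0-127; velocity: MIDI velocity 1-127.
    Returns (attack, decay) in seconds."""
    # Key tracking: decay runs from ~2.0 s at the bottom of the keyboard
    # down to ~0.2 s at the top, mimicking a real marimba's shorter ring.
    key_pos = note / 127.0
    decay = base_decay * (1.0 - 0.9 * key_pos)

    # Velocity to attack time: harder hits get sharper attack transients
    # (20 ms at the softest down to ~4 ms at full velocity).
    vel_pos = velocity / 127.0
    attack = base_attack * (1.0 - 0.8 * vel_pos)
    return attack, decay

print(envelope_times(note=36, velocity=40))   # low, soft: slow attack, long decay
print(envelope_times(note=96, velocity=120))  # high, hard: fast attack, short decay
```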

FIG. 4: Modulating the time within a sample at which it starts playing back can add life to the sound. This drop-down menu shows the variety of sources available in Spectrasonics Omnisphere for sample start modulation. (The Sample Start parameter itself is obscured by the drop-down menu in this illustration.)

Pitch stretching, as described in the sidebar “Better Sounds Through Pitch Stretching” below, is a technique that employs timbral modulation rather than dynamics processing to make each note different.

Hotop also views envelopes as powerful tools for adding expressivity. “A lot of synths and workstations have sliders and knobs that make it easy to say things like, ‘How about if I extend the release on this patch so that I don't have to pedal as much and I can play with a legato touch.' That is often an easy parameter to find these days, and, a lot of times, just a little tweak like that can make a sound work better in a track or fit better in the sound of a band for live performance.”

Persing offers another technique to liven up a sound: “If you modulate the sample start time so that you're never hearing the same sample playing in the same place every time you hit the key, it puts just a little subtle variation in there and the sound really feels much more alive [see Fig. 4]. The musician may not be able to put his or her finger on it, but when you play the instrument, it sounds good, it sounds alive, it doesn't have that dead feeling of a ‘boring old sample.'”
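A minimal sketch of the idea, assuming a simple sample player: on each note-on, the playback start index is nudged by a small random amount, so no two key strikes begin at exactly the same spot in the sample. The 10 ms maximum offset is an illustrative guess, not Omnisphere's default.

```python
# Sample start modulation: randomize where playback begins on every note.
import random

def note_on_start(sample_length, max_offset_ms=10.0, sr=44100):
    """Return a randomized start index for this note's sample playback."""
    max_offset = int(sr * max_offset_ms / 1000.0)
    return random.randint(0, min(max_offset, sample_length - 1))

# Each key strike reads from a slightly different start point:
for _ in range(3):
    print(note_on_start(sample_length=200_000))
```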

Of course, there will always be a place for aimless experimentation and just following your muse. But understanding your instrument and how to maximize what it has to offer is a gateway to finding your individual voice on it and extracting the expressiveness we all seek in making music.


Larry the O has programmed synthesizers since the mid-'70s and has contributed to EM since 1986. His company, Toys In the Attic, provides sound design and music composition services. He wishes to give special thanks to John Lehmkuhl for the wealth of material he provided for this article.

Better Sounds Through Pitch Stretching

“The problem with a sampled sound is that it's a snapshot,” says Eric Persing. In other words, hitting the same key three times in a row produces the exact same sound three times, unlike an acoustic instrument in which every note will have at least slight differences. But Jack Hotop and Persing describe one of the methods for countering this problem.

“There's a great technique called pitch stretching, and there's a couple of ways of achieving it,” says Hotop. “If you bend or transpose the pitch of a sample up, it gets brighter in harmonic content. You might do this using an envelope, or a parameter called something like ‘pitch adjust' or ‘pitch stretching.' What it does is change the tonal and timbral character of the sound.

“If you bend the pitch down by anywhere from a half-step to an octave, you'll find that the timbre becomes darker,” Hotop continues. “It's just a way of evoking different tones. My first experience with this was back in the days of the Mellotron, where I'd use the pitch control, but then have to transpose the part I was playing and play it in a different key [to bring it back to the desired pitch].

“With synthesizers and workstations now, you have transpose functions, so raise or lower the pitch of the sample, and then use an offset for the transpose function. Some synths and workstations have a parameter you can just adjust and it does that; on others it's a programming trick you can do if you modulate the pitch with an envelope, keep it there with a high sustain level and do offsets that way. That's another way of getting different timbral characters.

“But remember that when, for instance, you lower the pitch of a sample, you're slowing the attacks of the sound down. Since you have the knowledge that the attack is going to be played at a slower rate, you can move the sample start point into the waveform a little bit past that initial attack so you don't get a klunk [see Fig. A].”
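Assuming a simple resampling sample player, the sketch below walks through Hotop's recipe: pitch the sample down for a darker timbre (in a real workstation you would offset the transpose so the note sounds at its written pitch) and move the start point past the attack so the slowed-down transient doesn't klunk. The function and its constants are illustrative, not any particular instrument's parameters.

```python
# Pitch stretching via resampling, with the start point moved past the
# attack to avoid the "klunk" of a slowed-down transient.
import numpy as np

def play_pitch_stretched(sample, semitones_down=7.0,
                         attack_skip_ms=15.0, sr=44100):
    rate = 2.0 ** (-semitones_down / 12.0)     # < 1.0: slower and lower
    start = int(sr * attack_skip_ms / 1000.0)  # skip the slowed attack
    n_out = int((len(sample) - start) / rate)
    positions = start + np.arange(n_out) * rate  # fractional read indices
    return np.interp(positions, np.arange(len(sample)), sample)

# A stand-in "sample"; in practice this would be a recorded multisample.
sample = np.sin(2 * np.pi * 440.0 * np.arange(44100) / 44100)
darker = play_pitch_stretched(sample)  # darker timbre, 7 semitones lower
```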

Persing describes the same technique as it is implemented in one of Spectrasonics' instruments. “We have a control in Omnisphere where you can shift the timbre of the sound, which works very simply by taking all of the keymaps and shifting them in one direction and then compensating for that by changing the pitch,” he says. (Omnisphere's Timbre Shift parameter can be seen in Fig. 4 just above the drop-down menu.) “What happens then is that all of the formants of the sound are changing without the pitches changing. We can modulate that parameter, too, so that every time you strike a key, you can randomize the shift (or use whatever modulation source you want to change it). That way, the instrument doesn't sound the same every time you hit a key. [see Web Clips 1a, 1b and 1c].”
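The mechanics Persing describes can be sketched like this, with a hypothetical keymap dictionary standing in for Omnisphere's internals: borrow the sample mapped a few semitones away from the played key, then repitch it back to the note, optionally randomizing the shift on every key strike so no two notes sound identical.

```python
# Timbre shift: shifted keymap lookup plus pitch compensation, so the
# formants move while the sounding pitch stays put. The keymap layout
# and modulation depth here are assumptions for illustration.
import numpy as np
import random

def resample(sample, rate):
    """Linear-interpolation rate change (rate < 1 pitches down)."""
    positions = np.arange(int(len(sample) / rate)) * rate
    return np.interp(positions, np.arange(len(sample)), sample)

def timbre_shifted(keymaps, note, shift, randomize=0):
    """Borrow the sample mapped `shift` semitones away, then repitch it
    back to `note`; `randomize` varies the shift per key strike."""
    s = shift + random.randint(-randomize, randomize)
    sample = keymaps[note + s]                   # shifted keymap lookup
    return resample(sample, 2.0 ** (-s / 12.0))  # pitch compensation

# Stand-in keymap: one sine "sample" per key at that key's frequency.
keymaps = {n: np.sin(2 * np.pi * 440.0 * 2 ** ((n - 69) / 12)
                     * np.arange(22050) / 44100) for n in range(128)}
out = timbre_shifted(keymaps, note=60, shift=5, randomize=2)
```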

Velocity Curves Ahead

Next to note timing, velocity is probably the most accessible mechanism for imparting the human touch to a synthesized or sampled part. Switchable velocity curves on a keyboard controller allow a user to contour the effect of velocity for each context. “In a keyboard, velocity is a measure of how fast you're playing the keys, and it's most often used to control volume, but it also can be used to control filtering and brightness, and envelope times, when velocity can be routed to envelope segments,” explains Jack Hotop. “When you change a global velocity curve, it can change the way a sound responds, and not just in terms of volume. It lets you take control of the sound.”
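As a simple illustration, here is what switchable velocity curves look like in code. The curve shapes and names are invented for the example, not any particular keyboard's presets.

```python
# Switchable velocity curves: the same incoming velocity is remapped
# before it drives volume, brightness or envelope times.
def apply_curve(velocity, curve="linear"):
    """Remap a MIDI velocity (1-127) through a response curve."""
    v = velocity / 127.0
    shaped = {
        "linear":     v,             # pass-through
        "soft":       v ** 0.5,      # high levels come easily
        "hard":       v ** 2.0,      # must strike hard for high levels
        "compressed": 0.4 + 0.6 * v  # narrow range, consistent level
    }[curve]
    return int(round(shaped * 127))

for curve in ("linear", "soft", "hard", "compressed"):
    print(curve, [apply_curve(v, curve) for v in (30, 64, 110)])
```

The "compressed" curve is the code analog of Hotop's trick below: choosing a curve that delivers the dynamics you want instead of reaching for a limiter or compressor afterward.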

Choosing the right curve for the part is key. “Don't get too technical,” advises Hotop. “Try playing a phrase with a couple different velocity curves, or even try recording it: Do one or two passes with two similar curves or two very extreme curves and adjust your levels accordingly. Use your ear, and ask yourself, ‘Do I need a consistent level throughout the track or do I need something with some dynamics that lets me get out of the way of a vocal or support a little stronger in a chorus?'

“Another thing I've done occasionally is vary the velocity curve according to the track or the tempo I'm playing. An alternative to controlling dynamics with a limiter or a compressor is to play the phrase I'm going to record and choose a velocity curve that gives the dynamics I want.”

But, Martin Jann notes, finding the right curve can be tricky. “[Velocity curves] are a difficult issue, and the reason is that we are trying to put many things into one, which should be separated in my view.

“If you're learning an instrument, you usually are playing the same instrument all the time,” Jann continues. “If you play violin, you may have the same violin almost all your life and you know this instrument the best. If you apply this view to our realm of using MIDI keyboards with different synthesizers and sounds, you find you can change the velocity curve on your keyboard, within your sequencer, within the sound, in fact, just about everywhere. Every keyboard's velocity response is different as well, so the same preset can sound very different when played with different keyboards or velocity curves.” The solution, says Jann, comes, as with the violin, only through intimate familiarity with your controller's response.

