Inside the Dark Side of Mastering
Phase Alignment Manifesto
10 things clients don’t care about

RA’s Peter Chambers goes behind the curtain to tell the rest of us just what a mastering engineer does—and why they’re vitally important to every note of music you hear.

“Master” is a hefty old noun. In the days of yore, masters were important men. They had control of wives and animals, they were the captains of ships, they did as they pleased. People knew better than to mess with a master. As a verb it’s scarcely better. Typically, you mastered something to subdue it, to get the better of it, to own, dominate and possess it.

In fact, if you peruse the OED, it’s only once you’ve read through eight definitions featuring silenced children, whipped livestock and highly polished weaponry that you come to mastering in the sense of audio. “To create the master copy of (something); (Sound Recording) to record the master disc or tape for (a record or recording); to make a recording of (a performance) from which a master disc or tape can be created.”

I can’t help but think that this eighth sense of “master” masks the other seven senses of the master himself, and his mastery. That’s because a mastering engineer isn’t just a highly skilled, thoroughly experienced audio craftsman. To us lay people, there seem to be weird goings-on in those anechoic rooms, something that reaches beyond skill and even art into the realms of black magic. But, as I found out after speaking to several of the world’s leading mastering engineers, it’s not like that at all. In fact, mastering has two fundamental roles in music, neither of which necessitates knowledge of arts either dark or martial. Well… only a bit.

The first of these roles is the technical one. Productions have personality: No two recordings are identical, and no two producers’ techniques follow the same route (let alone rules) to achieve their particular, peculiar sound—and this is especially the case with electronic music, which focuses so carefully on sculpting timbre. Omar-S is not Deadmau5 is not Alva Noto, and this is a good thing, surely. As a t-shirt I once saw in Japan said: “Many colours make happiness rainbow.”

Problem is, all those beautiful colours have to appear in order to be heard, and on all speakers ranging from the tinniest little earbuds and laptop speakers right up to the multi-million dollar club systems. From this point of view, mastering is just about ensuring that will happen: the technical craft of mastering is just about applying knowledge to make sure the recording translates. The mastering engineer, as AudibleOddities’ Shawn Hatfield explains, is applying an “understanding of the science of sound and how best to achieve the highest range of playability on the widest array of systems.” You can make your recording as weird (and as weirdly) as you like: Mastering makes sure you can hear as much of that weirdness as possible on any available system.

The tricky part? Although the master copy is the recording from which all subsequent copies will be made—the closest a recording comes to being an “original”—it’s never a “machine translation,” even though machines are used. Beyond the basic, technical matter of what mastering has to do, everything mastering involves is a matter of interpretation. As Prairie Cat Recordings’ Mark Richardson puts it, “With audio, there are only opinions.” Mastering engineers don’t just feed your odd-shaped sound experiment into machines, they exert a shaping influence on recorded music, and they do so with their own ideas, their own style. They’re artists in their own right.

Mark Richardson: “Imagine that the audio project you’ve worked on is a vehicle, a car. And in the studio, you and the artists and the engineers have worked to put together this car to the best of their ability. Once the car is finished, some people will always opt to have it go over to a detail shop, have some guys make it as shiny and buffed out as possible. That’s what mastering does for audio, it’s a detail shop.”

And here we move from the first, technical aspect to the second, artistic aspect. Almost everyone would agree that a recording should sound as good as possible—but who determines “good”? Which details are to be emphasised, and what constitutes “shiny and buffed”? Most engineers agree that this is absolutely a matter of the intention of the artist and the intended meaning and context of the recording they’ve produced. Dubplates & Mastering’s Christoph Grote-Beverborg elaborates on the list of questions he goes through each time he sits down to master a recording: “What does it try to achieve musically, atmospherically, sonically? Does it meet its goal? If not, where are the weaknesses and the strengths? And what listening situation is it aimed at? A mobile telephone, an mp3 player, a home stereo or the club?”

Here you can see it’s always a matter of motion, of moving between the technical/scientific and the artistic/interpretive. Technically, the mastering engineer has to enable an intelligible dialogue between the stereo and the listener. Artistically, the mastering engineer is trying to establish a better understanding between the artist, their aims and the outcome that will emerge at the other end. In this way, the mastering engineer is a kind of therapist, a person who intervenes in the recording process to improve communication where it’s sub-optimal or salvage it where it threatens to break down. For Stefan Betke, this is the part of mastering that he sees as most important. “The mastering engineer is a third person who listens to your music without being involved in it, and with a very open ear… You’re helping people make musical decisions, so it becomes a musical process as well.” It’s a matter of saying to musicians, producers and recording engineers—who may have become so close to their recording they can no longer “hear” it—this snare is very high and sibilant: Is that what you wanted?

“What people want” has recently taken on a decidedly political character in the world of mastering, as Robert Babicz tells it. “I get sent some really, really, really loud stuff on file, with the request: Can you make it shiny and loud? They think that loud is good, and they often still think so, even after I’ve tried to convince them otherwise.” The so-called “loudness wars” have come to dominate people’s awareness of mastering, and very often it’s mastering engineers who have been blamed for all this unbearably loud, fatiguing music, something that nearly every engineer I contacted really, really, really arced up about when asked.

What is loudness, and why is it such a problem? In order to get at this, it’s important to distinguish loudness from volume. Rotate the big knob clockwise and you increase the volume at which the recording is being played, pushing a larger signal through the amplifier, down the cables and driving the speakers harder. The volume knob increases the sound intensity—a physical quantity measurable in definite physical units. Volume is primarily a physical thing. Loudness is related to volume in that it is the extent to which a sound is heard as loud: Loudness is the magnitude of the auditory sensation produced by that sound. Loudness is primarily a perceptual thing. And it’s loudness that people want.

But why? Why do people want loudness? Don’t they have volume knobs? The simple culprit is compression, but even then, there’s a story to be told, and it starts thirty years or so ago. Back in the day, analogue audio formats could only be so loud: vinyl couldn’t be grooved too deep or wide without transgressing the physical limitations of the format itself. During the ’80s, tape faced similar physical limitations: Push the signal too far, and you caused unpleasant distortion. So not only was loudness considered to be undesirable, it was also constrained by the physical limitations of the format.

With digital, those physical limitations have to a great extent fallen away, which means recordings can be made to seem significantly louder without sounding distorted. Compression helps to make the music sound louder without hitting those dreaded physical limits: All of the loud stuff remains loud, and all of the soft stuff becomes louder too. But recall the point from the end of the last paragraph: Loud recordings weren’t just precluded by the limitations of the medium, they were also undesirable, and they were undesirable because they destroyed the dynamism and nuance that makes good recordings so beautiful (and relaxing) to listen to. As Christoph puts it: “When everything is loud, nothing is loud. When you allow for a certain amount of dynamism you can achieve more depth, more detail, more richness in sound. A maximised recording sounds flat, lacks depth and is tiresome to the ear.” Maximise to minimise.
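
Christoph’s point lends itself to a toy calculation. Below is a minimal Python sketch (the test signal and the crude boost-then-clip “maximizer” are invented for illustration; no real mastering chain looks like this): the crest factor, the peak-to-RMS ratio, is a rough proxy for how dynamic a recording is, and maximizing visibly shrinks it.

```python
import math

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB: a rough proxy for dynamic range."""
    peak = max(abs(s) for s in x)
    rms = math.sqrt(sum(s * s for s in x) / len(x))
    return 20 * math.log10(peak / rms)

def maximize(x, gain=4.0, ceiling=1.0):
    """Crude stand-in for a maximizer: boost, then hard-clip at the ceiling."""
    return [max(-ceiling, min(ceiling, s * gain)) for s in x]

# an invented "dynamic" signal: a quiet verse followed by a loud chorus
quiet = [0.1 * math.sin(2 * math.pi * i / 50) for i in range(500)]
loud = [0.8 * math.sin(2 * math.pi * i / 50) for i in range(500)]
sig = quiet + loud

print(round(crest_factor_db(sig), 1))            # a healthy peak-to-RMS ratio
print(round(crest_factor_db(maximize(sig)), 1))  # noticeably smaller: squashed
```

The maximized version measures “louder” at the same peak level, but only because the gap between its loudest and average moments has been flattened away.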

So why did loudness become a good thing? When did people decide it’d be a good idea to “yell” at their devoted fans? Robert Babicz thinks it comes down to confidence. “Maybe people are a little unsure about their musical ideas, so they think that, if I make it loud, then the people will ‘hear me’ and like me. Something like that.” Christoph believes that it’s partly what happens when inexperienced people use the maximising and mastering plugins included in most software production packages, and the way those plugins interfere with transients, which are (technically) the attack phase of the signal and (perceptually) what gives each sound its signature. Knock off the transients, and vocal cords and Nords, oboes and Oberheims, all begin to sound like indifferent mush: “These plug-ins, they’re transient killers, especially if used by inexperienced people. It is very easy to get cheated by the increased loudness when switching in a maximizer.”

Shawn Hatfield elaborates: “The biggest and most common mistake people make is to squash their recordings before mastering. I see it in eight out of ten projects that come to AudibleOddities… There are lots of songs that come through my studio that had dynamic range at one point, but—because they wanted their song to be as loud or louder than the next guy—they squashed it with a limiter before I’ve ever had a chance to decide what would work best. Dynamic range creates an ebb and flow of energy that can tell a powerful story.”

In short, ladies and gentlemen: please stop yelling at each other. And trust your mastering engineers. A blow up is also a breakdown: Usually, if people are yelling at one another, this is a sign of failing communication. People mostly raise their voice when they don’t think they’ve been understood; they yell ‘cos they’re afraid they won’t be heard. Either that, or they yell because they have to, because everyone around them is already shouting. The net result of this is a perfect recipe for a thumping headache: Recording and mastering engineers trying to impress record execs with loud recordings, recordings all vying for market share by trying to out-yell each other, and a culture of listening to such recordings through cheap D/A converters playing highly compressed files which are then pumped (with the digital EQs on their computer bumped up and the normalizer on) through shitty headphones or el-cheapo computer speakers.

The remedy for this is simple, and it involves calm, quiet and confidence: the characteristics of a master, in other words. Or mistress. (Cos maybe machismo is part of the problem, too.) “For me,” says Babicz, “‘educating the people’ is just a matter of being myself and doing what I do. Just complaining about what’s wrong with music—I see no reason in this. You just have to take action, do what you do, develop your own thing.”

Stefan Betke claims, emphatically, that it is a matter of moving the conversation beyond an obsession with technology. It’s a conversation in which technology is posed as the solution to problems caused by technology, one which leaves out the vital role of our understanding and use of that technology. “A purely technological point of view, pure technology without anything else, it doesn’t work. Technology on its own works against every cultural development.” The art and craft of mastering is at its most crucial right here, because mastering engineers, as people who have a highly developed, deep understanding of how to use technology to finish recordings, are our best defence against what got us into this fix in the first place—a very stupid understanding and application of technology, followed by an (at times) misguided conversation that focuses blame on technology rather than on our applied understanding of it.

Remember: mastering engineers use technology, but not in order to make technology heard, to place it centre stage. On the contrary, the point is to allow the artist’s expression to reveal itself, and the only way to do that is to operate almost inaudibly in the wings. You shouldn’t notice the technology, you should hear the music. A mastering engineer does not make music; she allows it to be heard.

Header photo credit: Brian Petersen
Foreboding machines, needle in the red and volume knob photo credits: Timothy Fellsrow


Phase Alignment Manifesto

 Written by: Ben Lindell

I don’t mind admitting it, I love the infinite possibilities of phase relationships.

If you are still stuck in the mindset that things are either in-phase or out-of-phase, black or white, then it’s time to wake up and listen closer.

There are many shades of phase.

One microphone can and will never be 100% in or out of phase with another microphone – there are simply too many variables. Since very little music is made completely from pure sine waves (thankfully!) we must think of phase as a very complex relationship that exists in several dimensions.

Warning: It’s about to get a little geeky for a minute, but stick with me and I’ll show you how to think about phase in practical terms, and ways to manipulate it for fun and profit.

For example, a bass drum with a fundamental frequency of, say, 60Hz also produces many harmonics above that frequency – that’s what defines the timbre of a bass drum. 60Hz has a wavelength of 18.27ft (5.66m). In a perfect ideal space, i.e. no walls, if you place two microphones 9.135ft (2.83m) away from each other they will cancel out 60Hz exactly.
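
You can check the arithmetic yourself with a quick Python sketch (1,126ft/sec is the round figure for the speed of sound quoted later in this article; since the speed of sound varies with temperature, the results land a hair above the figures quoted here):

```python
SPEED_OF_SOUND = 1126.0  # ft/sec in air, approximate; varies with temperature

def wavelength_ft(freq_hz, c=SPEED_OF_SOUND):
    """Wavelength of a pure tone, in feet."""
    return c / freq_hz

def cancel_spacing_ft(freq_hz, c=SPEED_OF_SOUND):
    """Mic spacing of half a wavelength, which cancels freq_hz exactly
    (for an on-axis source in an ideal, reflection-free space)."""
    return wavelength_ft(freq_hz, c) / 2

print(round(wavelength_ft(60), 2))      # 18.77
print(round(cancel_spacing_ft(60), 2))  # 9.38
```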

This phase cancelation will never happen in real-life recording situations, for a few reasons:

  1. We generally record drums indoors, and by doing so the reflections, diffusions, resonances and modes of the room all influence the propagation of sound.
  2. Bass drums output many more frequencies than just 60Hz. Even if you somehow magically phase-canceled 60Hz, the result would still sound like a bass drum, just without a solid fundamental frequency to it. (Most people would call this out-of-phase even though you can still hear the source by way of its harmonics.)
  3. The time delay from placing microphones 9.135ft apart is only about 8 milliseconds – short enough that the two arrivals fuse together rather than register as a distinct echo (read up on the Haas Effect for a refresher). 
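
That approx-8ms figure follows directly from the speed of sound (a quick Python check, again assuming ~1,126ft/sec):

```python
SPEED_OF_SOUND = 1126.0  # ft/sec in air, approximate

def delay_ms(distance_ft, c=SPEED_OF_SOUND):
    """Time-of-flight delay, in milliseconds, over a given distance."""
    return distance_ft / c * 1000.0

# the 9.135ft mic spacing from the bass-drum example above
print(round(delay_ms(9.135), 1))  # 8.1
```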

Since all recorded sounds are made up of complex harmonics which are further influenced by the room they are played in, two microphones, no matter their positioning, will never be completely phase-canceled. Got it? Good!

Phase Relationships

So if two microphones can never be phase canceled then why does every mic preamp have a phase invert switch and why do I hear a difference when I press it?

Whenever you combine multiple microphones in a recording you introduce phase relationships. Here is a situation most people run into while recording – they like how one microphone sounds, but it may be lacking sonically in some areas, so they augment the first mic with a second to capture some other details of the source. You have probably noticed that when you press the phase invert switch on one channel, the sound never magically disappears; instead, the harmonics of the sound are altered – the bottom end may become less apparent and the upper harmonics shift around.
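
You can see why with a two-sine thought experiment (a Python sketch; unit-amplitude sines are a deliberate simplification of real mic signals, but the same arithmetic applies to each frequency component of a complex sound):

```python
import math

def summed_rms(phase_deg, steps=1000):
    """RMS level of two equal, unit-amplitude sines summed together
    with a phase offset between them (sampled over one full cycle)."""
    acc = 0.0
    for i in range(steps):
        t = 2 * math.pi * i / steps
        s = math.sin(t) + math.sin(t + math.radians(phase_deg))
        acc += s * s
    return math.sqrt(acc / steps)

print(round(summed_rms(0), 2))    # 1.41 – in phase: full reinforcement
print(round(summed_rms(90), 2))   # 1.0 – partial cancellation
print(round(summed_rms(180), 2))  # 0.0 – complete cancellation
```

A real source is a stack of many such components, each arriving at each mic with its own offset, so flipping polarity reshuffles which harmonics reinforce and which cancel rather than silencing anything.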

Since there is no such thing as being perfectly in or out of phase, we have to be aware of how we can control these phase relationships. Anytime you have multiple microphones capturing a source, there are four factors at play:

  1. Timbre differences – microphones focused on different areas of the source capture different harmonics (think the edge of a guitar speaker vs. the center)
  2. Time differences – caused by varying distances from the source to the microphones (sound only travels about 1,126ft/sec)
  3. Transient response – some microphones capture transients quicker than others (think condenser vs. dynamic vs. ribbon)
  4. Frequency response – all microphones have a “sound” to them; in fact many microphones have emphasis circuits built into them (think Audix D6 and Neumann U87) 

When you add together two slightly different sounding versions of the same source, the result can sometimes get messy. However, it is the unpredictable result of combining microphones that can add depth and richness to our recordings just as easily as it can take them away.

Common Phase Mistakes

Phase issues can often occur when performers move around. Say you have two microphones on a singer/songwriter playing acoustic guitar and singing simultaneously. When the performer rotates slightly, the phase relationship between the two microphones and the sources changes and is easily heard. The mics are now receiving slightly different timbres and time differences than they were before she moved. When this happens during a take you’ll hear a “phasing” sound and when it happens between takes it makes for very obvious edits.

The other major phase issue I see from many producers and engineers (often on purpose, unfortunately) is the abuse of stereo width controls. I love width in mixes but I hate sounds that make my head spin. I’m not one to be a stickler for mono compatibility, but it’s irresponsible to leave tracks so wide that when the mix is collapsed to mono entire parts disappear.

Working with Phase While Recording

What are some ways to minimize undesirable phase relationships?

  • Try using just one microphone; adjust it until it alone captures the desired detailed sound information from the source.
  • Use phase-coherent microphone techniques. For stereo micing, use an XY pattern instead of a spaced pair. For multi-micing a single source, try to adhere to the 3:1 rule. 
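
For the curious, the logic behind the 3:1 rule is just the inverse-distance law (a Python sketch; the figure assumes a point source in a free field, which real rooms only approximate):

```python
import math

def spill_level_db(close_distance_ft, far_distance_ft):
    """Level of a source in a distant mic relative to a close mic,
    using the inverse-distance (1/r) law for a point source."""
    ratio = far_distance_ft / close_distance_ft
    return 20 * math.log10(1 / ratio)

# 3:1 rule: keep the second mic at least 3x the first mic's
# source distance away, so its pickup of that source is ~9.5 dB down
print(round(spill_level_db(1.0, 3.0), 1))  # -9.5
```

Roughly 9–10 dB of separation is usually enough that the spill no longer causes audible comb filtering when the two channels are summed.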

What are some cool ways to use phase relationships?

If I’m using more than one microphone on a source I always have a very good reason for doing so, because phase relationships can quickly become difficult to manage.

  • Kick Drum – I like the combination of an inside mic close to the beater to capture the attack of the pedal and an outside mic to capture a more developed resonance. This is a very common technique. If the mics are placed at similar distances from each head, phase isn’t usually too much of a problem.
  • Snare Drum – Most people like combining a top and bottom microphone (with the phase inverted on the bottom mic, of course), but I often like to add a third mic. I’ll use a combination of a dynamic and a condenser microphone on the top side of the drum to capture different transient responses, and then blend those with a bottom mic to add some snap.
  • Kit Mic – I love intentionally breaking the 3:1 rule with this one. When placed correctly, a single kit mic captures a great picture of the drum set that, when added to the close mics and overheads, gives me depth without too much room tone. I like a Royer 122 as the kit mic.
  • Guitar – On both acoustic and electric I like to have one close mic and one distant, generally observing the 3:1 rule – but not always ;)

These are just a few unique examples of how I use phase to my advantage to create interesting and detailed recordings of instruments. The key is to have a solid sonic vision for everything you record and place your microphones with a very clear sonic purpose in mind. If a combination of mics doesn’t sound like how you want them to sound, first try the phase invert button, if that doesn’t work then go back and move the microphones until they sound good!

Guitar Amp Mic'd Close and Far 

Manipulating Phase While Mixing

Playing around with phase relationships doesn’t have to end when tracking does.

1. Geeks love to point out that all equalizers are phase shifters.

2. The most deliberate manipulators of phase are phaser plugins and pedals. Phasers are a fantastic effect that is often underutilized while mixing – a great way to add motion and life to vocals, strings, synthesizers, etc.

3. There are also many Mid/Side (M-S) processors and plugins available these days that allow you to process the center separately from the sides, and also to increase or decrease the perceived width of a stereo sound. Some of my favorite plugins for M-S are the Brainworx plugins, and a recent addition to my arsenal of stereo field manipulators is the UAD Precision K-Stereo plugin. The only piece of Behringer gear I own is their Edison, an analog stereo image processor; it usually sits on my effects return stem going into my summing bus.


4. My favorite and most useful tool for manipulating phase while mixing these days is the Little Labs IBP, which is available both as a hardware unit and as a UAD plugin. This was the first UAD plugin I ever purchased and I use it on almost every mix. It allows you to dial in a precise amount of time delay and phase shift, and it can arguably transform the sound of multi-mic’d sources more than any combination of EQ and panning. By changing the phase relationships you also change the color, depth and punch of the drums. Here’s an example where I used the IBP on the Kick In and Snare Top tracks to change the sound of the drums when combined with the Overheads and Kit microphones. A is IBP off, B is IBP on.
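
The underlying idea – nudging one track in time, or flipping its polarity, before the channels are summed – can be sketched in a few lines of Python (whole-sample delays on invented numbers only; the real hardware offers continuously variable phase shift and far finer control):

```python
def delay(track, n_samples):
    """Shift a track later by n whole samples, padding the front
    with silence and keeping the original length."""
    return [0.0] * n_samples + track[:len(track) - n_samples]

def invert(track):
    """Polarity flip – i.e. the channel's phase-invert switch."""
    return [-x for x in track]

def mix(a, b):
    """Sum two tracks sample by sample."""
    return [x + y for x, y in zip(a, b)]

# two copies of the same "close mic" signal: summed raw they reinforce;
# inverted they cancel; nudged a couple of samples they comb-filter
close = [0.0, 1.0, 0.0, -1.0] * 4
print(mix(close, close)[:4])            # [0.0, 2.0, 0.0, -2.0]
print(mix(close, invert(close))[:4])    # [0.0, 0.0, 0.0, 0.0]
print(mix(close, delay(close, 2))[:4])  # [0.0, 1.0, 0.0, 0.0]
```

Small shifts like this change which frequencies reinforce and which cancel between the two channels, which is why a time/phase tool can recolor a multi-mic’d drum kit so dramatically.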

It’s important that we learn how to use phase to our advantage; it isn’t something we should be constantly fighting against. Once you flip your perspective around and start thinking about how phase can help you and your recordings, you’ll find that you’re no longer “fixing” phase problems – you’re creating interesting phase relationships that add richness and depth to your sounds and mixes.

I hope, if nothing else, this article has opened your mind and ears to the influence that phase has on your recordings and mixes. The common perception is that phase is just black or white – but there are so many shades and colors that phase relationships help us create in our recordings.

Does all this crazy phase talk have your head spinning? What’s your relationship with phase? 


10 things clients don’t care about

By Paul Strikwerda

Let me preface this post by saying that I feel very lucky.

Over the past 25 years I’ve been able to develop strong relationships with a number of clients. The longer we go back, the fewer words we have to waste on what each side expects from the other.

It’s almost like a marriage. And very much like a marriage, a lasting business relationship needs commitment from each partner. It can be love at first sight and it can also end in a divorce, due to unspoken expectations and unfulfilled desires.

Throughout the years I have heard colleagues complain about their clients:

“She doesn’t speak to me anymore” or “He dumped me in a heartbeat for some cheap actress. I thought that what we had was special.”

And how about this one:

“All I ever wanted was a little bit of attention. Was that too much to ask?”

It usually is.

When I was just starting out as a freelancer, one of my more cynical mentors warned me against romanticizing the relationship with my clients. His mantra:

“Business is business and the rest is bull****.”

Today, these words resonate even stronger. In these fast and furious times, online matchmaking is getting more and more popular. And nobody seems to take it slow anymore. Making small talk is so yesterday.

“I need your demo now. Are you available this afternoon?”

Before you know it, you’re off into some dark room talking to yourself, and when you’re done recording, you dump the files into a dropbox.

As one of my friends put it: “I almost feel used.”

Well, isn’t that the whole idea? We offer our services. We deliver our services. We move on. End of story.

Let’s be honest. Most times, both parties aren’t that interested in getting to know each other before the deal is sealed.

How well do you really know your clients? How well do they know you?

Does it even matter?

In most cases it doesn’t, as long as the job gets done.

That’s why it is time to take off those rose-colored glasses and get rid of your great expectations. Here’s my top ten of things most clients don’t seem to care about anymore:

1. YOU
All you are is a solution to a problem; a means to an end. It’s your job to ensure that the benefits of hiring you outweigh how much you charge. Your client doesn’t have to care about you. It’s your work that matters.

What you perceive to be the benefits of your service is not important. The question is: Do you understand and can you meet the needs of your clients?

Your take on a script (or any other freelance assignment) may be interesting, but it’s often irrelevant. You’re the stylist. The client determines how she wants her hair cut. Unless you have permission to be creative.

The fact that you’ve been at it for a certain number of years doesn’t automatically mean you’re the right person for the part. Over the years, some people have become very good at being very bad. They’re stuck in a rut.

Years of experience entitle you to nothing. In fact, they can make you look like you’re old school. The quality of your experience qualifies you. Not the length.

An impressive resume tells a client what you have done for others, usually years ago. All he really wants to know is: What can you do for ME, today?

If you can’t make that clear, why should he hire you?

Experience can also backfire.

One of my friends specializes in medical narrations. In order to impress a possible new client, he quoted a fine endorsement from a pharmaceutical company he’d been working for, for years. It was his way of saying: “See… I have a proven track record. I can easily handle your project.”

The other party was not impressed. The email he got back effectively said:

“Since you’ve established yourself as the voice of brand X, it would be unwise for us to hire you. People would automatically associate your sound with our main competitor.”

Never justify your fee by bringing up how much you have invested in your dream. That’s the price you pay for being and staying in business. After all, you don’t care about your client’s business expenses either, do you?

Clients won’t hire you because you happen to own a Steinway. They hire you because they like the way you play, or because you offer the best value for money.

You might impress your colleagues with a brand new Neumann U87 studio microphone. My last client hadn’t even heard of the brand.

It’s lame to blame technology for your lack of preparation. In voice-overs, home studios are steadily becoming the norm. Even if you record in a stuffy bedroom closet (and call it a ‘professional studio’), you’re the head of IT, audio engineering and data transmission. If you can’t handle that, don’t expect any sympathy from the client. He’ll find someone who can.

Your personal problems? Leave them at the door. Clients are clients; not friends or family. You’re hired to do a job, no matter how horrible you might feel about your dead cat or a recent break-up. Put your life on the back burner and focus on the project. Cry when the job is over.

You are hired to make your client look good and not to boost your ego. If you’re in need of praise, visit an evangelical church.

Sure, nobody talks like you or walks like you. That doesn’t make you irreplaceable. Even if you’ve been working with a client for years, don’t be surprised if they ask you to re-audition.

One of the joys of being an independent contractor is that there’s no long-term contract with severance pay, should things come to a premature end.

You’re on your own.

Never take anything for granted. Complacency will be your downfall. Be ready to prove yourself, over and over and over again.

If you don’t take care of your career, nobody else will.

Business is business. And the rest is…

Paul Strikwerda ©2012