Harriet Tubman, Notre Dame, and the Sound of History [SOTG #8]

Earlier this year, I visited the farmland where my Irish ancestors lived and worked from the mid-1850s to the early 1920s. Just a few years ago, I didn’t even know their names. And yet here I was, a century after the last Mattimoe (pronounced “Mattimer”) died, walking among a herd of cows on what was once the family’s dairy farm in County Leitrim. 

Photo of large tree, backlit by the sun, with a brown cow and white cow standing to the right. Green grass is in the foreground and blue sky in the background.

The views, the smells, the sounds, the out-building remnants, the family history – I wanted to bottle it all and bring it home.

Impossible in the literal sense, but that experience did come home, not in a bottle but in me.

These people are real to me now. I feel connected to them through their lives on that land. And although no photos of them exist, if I close my eyes, I can see them there.

“Mattimoe Fields,” County Leitrim, Ireland. May 2024. (Copyright Lori Mortimer, 2024)

As an audio person, I wondered what the farm sounded like 100–150 years ago. No airplanes, cars, or farm tractors, of course. But what about the wind, grasses, wildlife, farm animals, music, or the sound of people working hard and hopefully laughing sometimes?

Can we even know what a place sounded like in the past?

Thanks to ethnomusicologists, acoustical archaeologists, and acoustical historians, we can get reasonably close. I learned about their work from three stellar podcast episodes I listened to this year.

In “The Sound World of Harriet Tubman,” by Phantom Power podcast, ethnomusicologist Maya Cunningham reads her Ms. Magazine essay about the musical world Tubman lived in. Cunningham calls her written essay a sound collage, a description that becomes fully realized in audio.

Phantom Power producer Ravi Krishnaswami weaves Cunningham’s narration with sounds she provided him: her field recordings from Brodess Farm, where Tubman was born and raised, and recordings of African instruments and African American spirituals that research shows Tubman would have known. (Cunningham created a Spotify playlist to accompany the essay.)

Cunningham draws a direct line from Tubman’s musical environment, rooted in Christian faith, to her self-liberation and abolitionist activism. From the field workers’ songs to hymns and spirituals sung during secret Hush Arbor meetings, Tubman’s world resounded with voices singing about freedom, whether here on Earth or in the afterlife.

My favorite revelation: Tubman used her voice and this music to secure freedom. She sang “Bound for the Promised Land” to alert her family that she was about to make her escape. And she sang other spirituals and hymns as coded instructions to the Underground Railroad travelers she conducted.

This episode was one of my favorite listens of the year. Come for the gorgeous sound essay and stay for the interview with Cunningham at the end.

With last week’s reopening of the Cathedral of Notre Dame, most news stories have covered the architectural and aesthetic reconstruction.

Equally fascinating to me was the acoustic reconstruction. Would architects and artisans make the building look and sound as it once did? The cathedral was well known for its acoustics, particularly its reverberance. Acoustic archaeologists and historians didn’t want that environment, which had been altered by the 2019 fire, to be lost forever.

The Materially Speaking episode “Notre-Dame: An Acoustic Reconstruction” describes how, in 2015, a Sorbonne research group created a virtual-reality model of the cathedral’s soundscape. The acoustic measurements they took became indispensable during the post-fire reconstruction.

They used those measurements and knowledge of the restoration materials to predict the cathedral’s post-restoration acoustics. They also researched Notre Dame’s acoustic history, incorporating what they learned into their virtual-reality model. Using that VR model, while the real building was closed for restoration, singers performed in a virtual Notre Dame across different historical periods.

This episode covers much more, including a 16th-century instrument called the serpent, the placement and direction of choir-organ pipes, the effort to record the sounds of the artisans at their craft, and even the acoustic impact of cathedral tourists.

Although this episode is the most technical of the three, it points to a practical application for podcast sound design.

Have you ever wanted to create a certain sound for a scene but lacked the skills to do it? Maybe you wanted a voice to sound like it was in a cave or on the phone. Or maybe you needed a scene where you could hear, through your childhood bedroom wall, the muffled guitars of your brother’s KISS album rocking and rolling all night.

You can create these effects by noodling with things like EQ, delay, and reverb. But if, like me, you’re not an audio engineer, that can be a lengthy trial-and-error slog.

Luckily, with free DAW convolution reverb plugins like Soundly’s Place It and MeldaProduction’s MConvolutionEZ, you can simply choose the acoustic effect you want from a preset list and apply it to your clip.

Place It main interface (L) and preset menu (R)

I never understood how convolution worked until I listened to Field & Foley podcast Episode 11 – Mariana Lopez. Lopez is a researcher who recreates historical-site soundscapes with computer modeling and acoustic measurements – pretty much the same kind of work that was done at Notre Dame.

(Now, this is where things get more technical. If you like nerdy details, stay with me! If that’s not your thing, no worries. I’ll be back with another issue soon. In the meantime, treat yourself to a download of Place It or MConvolutionEZ and have fun!)

The acoustic measurements come from impulse responses recorded in a given indoor or outdoor space. To capture an impulse response, you record a sound (the impulse) – like a balloon pop – and the resulting sound reverberations (the response). That recording (usually a .wav file) can then be loaded into convolution reverb software, like the ones I mentioned above, and applied as an effect on other audio files.
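Under the hood, a convolution reverb plugin is doing exactly what the name says: convolving your dry clip with that impulse-response recording. Here is a minimal Python sketch (numpy/scipy) using synthetic stand-ins for the dry clip and the IR file, since loading real .wav files would only add boilerplate; all names and numbers here are illustrative, not taken from any particular plugin:

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 44100                            # sample rate in Hz

# Dry signal: a half-second 440 Hz tone standing in for a voice clip
t = np.arange(0, 0.5, 1 / sr)
dry = np.sin(2 * np.pi * 440 * t)

# Synthetic impulse response: exponentially decaying noise, roughly the
# shape of a balloon pop ringing out in a reverberant room
rng = np.random.default_rng(0)
ir_t = np.arange(0, 1.5, 1 / sr)
ir = rng.standard_normal(ir_t.size) * np.exp(-3 * ir_t)

# Convolve the dry clip with the IR to "place" it in the space
wet = fftconvolve(dry, ir)

# Normalize to avoid clipping on playback or export
wet /= np.max(np.abs(wet))

print(wet.size)   # full convolution length: len(dry) + len(ir) - 1
```

In a real session you would load both arrays from .wav files (for example with the soundfile library) and write `wet` back out; a plugin’s preset menu is essentially a library of pre-recorded IRs applied this same way.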

Impulse responses can tell you about a space. For example, can a person standing in a busy medieval town square clearly hear actors, 50 feet away, performing an outdoor play?

That’s the kind of thing Lopez wanted to know. She studied the acoustics of a street in York, UK, for a project on the medieval York Mystery Plays. The plays were originally performed on many streets in York from the 14th to 16th centuries.  

Lopez recorded a series of impulse responses on a street called Stonegate, the best-preserved site of the Mystery Plays. Using acoustical modeling software, she then built a model of Stonegate as it exists today.

Because all materials – stone, wood, hay, textiles, even live (or dead) bodies – affect the acoustics of an environment, Lopez next adjusted the model based on historical records of that location. For example, she changed the modern street pavement to other, more irregular surfaces to approximate what it might have been like several hundred years ago.  

Acoustic reconstruction uses detailed data to create approximations – no one can say what the exact acoustics were 600 years ago. Lopez’s work offers educated estimates of what it might have sounded like while the Mystery Plays were performed.

She also created an interactive website where you can experiment with her acoustic modeling: Soundscapes of the York Mystery Plays

I’m guessing most of us aren’t acoustic archaeologists or ethnomusicologists. But when we sound design a narrative podcast scene, we share this in common with them: we understand that it’s important to create a sense of place for our audience so they can better understand the people and characters in our stories.

My Ireland trip gave me a sense of place that I didn’t have before. And though we can’t beam our listeners to other lands, we can bring them into a scene through sound, make the scene feel real to them, and bring them there figuratively. We don’t need acoustic measurements to do that. But it helps to think carefully about the setting we’re constructing for listeners.

More on that in a future issue.

Until next time,

Lori

Atomic Units of Meaning: Resonate Podcast Festival 2024 [SOTG #7]

Workshop alert! On Saturday, November 16, from 1-5 p.m. ET, I’m leading an online workshop: Sound Design for Narrative Audio. We’ll listen to, dissect, and critique published examples to uncover and understand the sound-design choices their producers made. And we’ll explore sound-design tools, techniques, and resources that won’t break the bank. You’ll leave with the skills and knowledge you need to get started, plus an appreciation for sound design as an integral part of narrative storytelling. I’d love to see you there!


While explaining how improv comedy rules apply to audio storytelling, Davy Gardner, who helms Tribeca Audio, said that carefully chosen sounds are “atomic units of meaning” that can “quadruple your vocabulary.”

That’s how I felt after each session at last week’s 2024 Resonate Podcast Festival: every 45 minutes, a presenter dropped a bunch of atomic units on us. Each speaker distilled years of knowledge and experience into succinct yet layered truths for us to contemplate and eventually incorporate into our work. These talks will be, dare I say, resonating with me for weeks and months to come.

Celebrating the Craft

My last in-person conference was in the fall of 2019, so I was long overdue for audio-community immersion. There’s just something about being in the same room with people who do what you do, who love what you love.

At a festival, the presenter lineup draws you in, but you also find gold in the spaces in between. It’s the magic of ad-hoc conversations at the food truck, of discovering you’re standing next to someone you met in an online class, of matching a face to a voice that’s been in your ears for years.

Despite the recent industry contraction, with its layoffs and shuttering of beloved and important shows, the place positively buzzed with audio love and excitement about the work. It’s fitting that Chioke I’Anson, who founded Resonate, named it a festival.

Our field overflows with passionate, talented creators who feel called to tell stories, create audio art and journalism, center marginalized voices, expose and question power structures, and innovate to move the field forward. The creative future is bright. Resonate was proof of that.

Titled “Telling Stories,” this year’s event blended presentations about audio fiction and nonfiction because, as the website says, “they have more in common than we often think, and there is much that the genres can learn from one another.”

The event was delightfully light on business-speak. Downloads, marketing, and audience development are important, of course, but Resonate 2024 centered what is essential: craft, process, creativity, and artistry. The money-bros have wreaked their havoc, but Resonate is here to shout that audio storytelling survives and thrives.

Takeaways

You might be wondering if I did anything besides absorb the vibes for two days. Although I’m still contemplating most of the atomic units we were gifted, I did digest some sound-design observations and tips.

Sound Design & Scoring

Throughline producers Rund Abdelfatah and Ramtin Arablouei demonstrated how they score and sound-design a scene. Throughline is known for its immersive sound design. Here’s a high-level view of their process:

  • They use approximately a 60:40 mix of sound effects from sound libraries and effects they create themselves
  • In their first draft, they get the story structure in place, with sound-design cues added to the script
  • In their second draft, they start adding sound design
  • They begin with sound beds and then add texture
  • Sound before words: in a scene, they make sure their audience hears what’s going to be talked about before a character starts talking about it

Physicality of Sound

In her presentation “Sound Is Physical,” Ellen Horne reminded us that sound waves move through the air and into our ears. That bass sounds rumble in our chests. And that when two people speak in person, both bodies become resonant chambers.

When audio recordings contain these embodied experiences, listeners more easily connect emotionally and viscerally to what they’re hearing. This is especially true of and important for audio fiction.

However, in the wake of Covid-19, many shows have stopped taping in person. Remote recording saves time and money, but it also removes the physicality of people interacting in person.

And that’s a shame, Ellen says, because when people record remotely, they are literally and figuratively “distanced from others, hard to reach, and removed emotionally.” Something is missing from the tape, and listeners can sense it.

(There was so much more to Ellen’s presentation … ironically, I’m just scratching the surface.)

Ellen offered some tips for bringing more physicality into your audio:

  • Whenever possible, record in person—get out in the world
  • For an interview, sit on a sofa next to the interviewee—not across a table from them
  • Record people walking together, lying on a blanket together, using their bodies in some way
  • For narration:
    • Warm up your body—go for a walk, stretch, move—before recording
    • Stand up while recording
    • Talk with your hands
    • Tear up your script and rewrite it as bullet points; then record from that

Favorite Quotes from Resonate 2024

“If it sounds good, it is good.” – Jason Reynolds

“Sound is touch at a distance.” – Researcher Anne Fernald, quoted by Ellen Horne

“I think there is embodied authenticity. If it’s true, it’s deeper in you.” – Ellen Horne

“Sound isn’t just a visual medium. It’s a sensory medium.” – Davy Gardner

“Sound is a language.” A sound—such as the chime of a grandfather clock—can contain many meanings. In a scene, carefully chosen sounds are like “atomic units of meaning” that can “quadruple your vocabulary.” A sound can be “far more specific than words, yet be universal at the same time.”  – Davy Gardner

“We create [art/stories] to stir curiosity in someone else in the hope of becoming someone else’s muse.” – Avery Trufelman, creator of Articles of Interest

“If from year to year we’re making the same show, we’re not growing.” – Rund Abdelfatah

“Work with people who are willing to get into a pool and get murdered for you.” – Ayeesha Menon, creator of Mumbai Crime


Congratulations to Chioke I’Anson and the VPM+ICA Community Media Center team at the Institute for Contemporary Art at Virginia Commonwealth University. They curated a stellar lineup—with off-the-charts expertise—and orchestrated a seamless two-day event.

I’m already looking forward to Resonate 2025.

Sounds Complicated [SOTG #5]

As New Hampshire Public Radio’s Taylor Quimby learned, you never know when a plastic sled and a jar of marbles might come in handy.

Taylor was the sound designer on Outside/In’s Windfall series. With the sled and marbles, he created my favorite example of sound that explains complex information.

Windfall is about the history and future of offshore wind farms in the US. In the first episode, “Sea Change,” the Windfall team conveys the size of wind turbines through visual writing.

They describe blades bigger than airplanes and predict rotors the size of three football fields, with a turbine the size of “a giant, spinning sports complex.” This writing style helps listeners picture these structures in the real world.

But sometimes it’s hard to find an analogy, especially for numbers so large or small that we can’t wrap our minds around them. We need a frame of reference, and sound design can provide it.

For example, the Windfall team found it challenging at first to explain the exponential growth in wind-farm energy output.

“We knew that transmitting the scope and scale of growth was the most important thing,” Taylor said.

“Yet, especially when it comes to energy, that’s really hard because it’s a subject full of jargon. You’re talking about kilowatts, megawatts, gigawatts, and most of us don’t have a sense of how they factor into our own lives.”

Early episode drafts described power output with numbers. But the team realized numbers weren’t communicating just how significant the increase in power was.

In an editing meeting, someone asked, “How can we use sound to help transmit scale in a way that’s almost like a type of synesthesia?”

Taylor remembered an old anti-smoking public service announcement that started with a ball bearing being dropped into a bucket. The bearing represented one smoking-related death. Next, a bucket full of bearings, representing thousands of smoking-related deaths, was poured out.

“You hear the one ball bearing, and then you hear this just unimaginable staggering amount of sound that represents all the people that die of smoking,” he said.

“And the point is not that you can individually pick all those things out. The point is to be so overwhelmed by a number and the comparison between those things that you just get a sense of this as almost beyond comprehension.”

And in the spirit of stealing like an artist, the Windfall team borrowed that idea.

First, they chose a unit of measurement other than watts. The first offshore wind farm, near Vindeby (pronounced in English “vin-eh-boo”), Denmark, became their base unit of energy. Next, they chose the sound of one marble to represent one Vindeby-sized wind farm. Over time, as the wind farms grow in size, so does the number of marbles we hear.

Listen below for how the marbles create a sense of scale.

Next, wind farms continue growing, and the marble sounds juxtapose a large number with a much smaller one.

Listen below for the big sound-design payoff.

Did you notice how long it took for 7,000 marbles to pour out? Thirty-five seconds, an eternity in audio. That was intentional.

“We wanted to do it slowly and hear the movement and hear it building and clicking and clacking and feel like it’s going on forever and ever and ever and get that sense of scale, so that it feels overwhelming,” Taylor said.

After the 7,000 marbles roll, we get the big payoff: the sparse thunking of a few marbles representing the minuscule output of US wind farms. The sonic contrast tells the tale. No words needed.

Screen capture of a sound wave in a digital audio workstation. The wave is in color, with a long, dense black section in the middle that represents the sound effect of 7,000 marbles rolling.
Can you find the 7,000 marbles?

The marble concept was ingenious, as was Taylor’s method for creating the sound effect. Taylor borrowed a marble collection from NHPR Senior Producer Jack Rodolico’s son. He took the marbles home and started experimenting.

“I dropped a marble in a bucket, and it just made a thunk.”

And when Taylor dropped a bunch of marbles, “it happened super-fast, and it did not communicate what we talked about trying to communicate. You needed a sense of movement. And marbles are heavy, so they just drop.”

For a sense of movement, the marbles needed to roll. Taylor tried a few more things before grabbing his son’s sled, made of high-density foam covered in plastic.

“I propped it up at an angle and dropped a marble on it. You hear the thunk … thunk … thunk-thunk-thunk-thunk-thunk. And I thought, this is what you need because somebody can listen to this, hear the movement, and picture the marble dropping.”

Now he just needed to create the sound of 7,000 marbles.

“I counted out 100 marbles so I could record 100 falling, and then I multiplied that sound file a number of times in my DAW, over and over again to create bigger and bigger numbers,” he said.
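Taylor’s multiply-the-file trick can be sketched in code, too: take one recording of 100 marbles and mix in time-shifted copies of it until the pour sounds enormous. This is a rough sketch, not Taylor’s actual session; the clip is faked with noise, and the function name and parameters are my own illustrative inventions:

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(42)

# Stand-in for the recorded "100 marbles rolling" clip (5 seconds of noise)
hundred_marbles = rng.standard_normal(5 * sr) * 0.1

def layer_copies(clip, copies, max_offset_s, sr):
    """Mix `copies` randomly time-shifted copies of `clip` into one long bed."""
    max_offset = int(max_offset_s * sr)
    out = np.zeros(max_offset + clip.size)
    for _ in range(copies):
        start = rng.integers(0, max_offset)   # stagger each copy
        out[start:start + clip.size] += clip
    return out / np.max(np.abs(out))          # normalize to prevent clipping

# 70 staggered copies of "100 marbles" approximates 7,000 marbles,
# spread over a 30-second window plus the clip's own 5-second tail
seven_thousand = layer_copies(hundred_marbles, copies=70, max_offset_s=30, sr=sr)
print(round(seven_thousand.size / sr, 1))   # total length in seconds: 35.0
```

Duplicating and offsetting a region in a DAW does the same thing by hand; the staggering is what preserves the sense of movement Taylor describes, since 70 perfectly aligned copies would just be one louder clip.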

Taylor offered some tips for using sound design when words alone can’t paint the picture.

Example: Outside/In’s 250th episode tells the story of Earth’s largest mass extinction event, which occurred about 250 million years ago. After a series of natural disasters, the Earth’s landscape was barren. Then fungi and ferns began popping up.

“To transmit the idea of all Earth’s continents being populated by countless numbers of fungi,” Taylor said, “we used little popping sounds, like bloop… bloop … bloop … bloop bloop bloop … bloop-bloop-bloop-bloop-bloop-bloop. Nature documentaries often use little bloop sounds during time lapses because that’s what we imagine. So, I thought, let’s take that thing that people will inherently understand and make a sonic landscape that might help people imagine this world I’m describing.”

To hear the mushroom world popping up, start listening at 24:00.

Example: Episode 3 of Patient Zero, an NHPR podcast about Lyme disease, explains how the Lyme-causing bacteria get from a tick into the human body. This time, Taylor borrowed from The Magic School Bus.

“The Magic School Bus shrinks and goes into the human body. And I thought, let’s do this Magic-School-Bus style, and it’ll be really gross,” he said.

“I’m zooming into a world and imagining stuff. What does a tick thrusting its mouthparts into your skin sound like?

“I used different sound effects to imagine life at the scale of a tick. There was the classic Foley stuff, where you’re tearing or squeezing leather or cutting meat. The sound of the tick walking on skin is fingers palpating a sponge very close to a microphone.

“And I heard from people who said, ‘I think you went too far because that was deeply uncomfortable to listen to.’ But that’s what I was going for.”

To hear the tick feasting on human flesh, start listening to episode 3 of Patient Zero at 14:00. Use headphones if you dare.

Occasionally, an object’s literal sound just doesn’t work. It’s not what listeners imagine, and it can even be confusing without a visual aid. So, through trial and error, listen for the sound that people will just inherently get. Sometimes, a more cartoon-like version of the sound conjures the image better.

Example: Years ago, an Outside/In episode included an Indiana-Jones-style journey where someone got on a train and then on a plane. Taylor didn’t like the sound of a real jet for the plane. “It’s just a whine. So, I used an old prop-engine plane, which was not the actual plane, but people immediately hear it and get it.”

With factual information and real-world scenes, be transparent about what you’re doing so your listeners know when you’ve altered something, Taylor said. “If I have field tape with a reporter in an actual part of the world and then combine that with some sort of fake sound effect, that’s where you can get into some problematic areas where you’re misleading your audience and passing fake sound design off as field tape.”

And don’t forget, the biggest tip in sound design is to experiment and play. When you find yourself at a loss for words, start experimenting with sound design to help you express what you mean.


A big thank-you to Taylor Quimby for sharing his knowledge and experience with us!

Do you have a favorite sound-design example? Send a link and short description to lori@lorimortimer.com, and I’ll include it in the next issue.


And before I go, I’ve got two resources for you:

  • I’ve started uploading some of my recorded sounds to Freesound.org with Creative Commons Zero (CC0) licensing, so you can use them any way you want. I’ll add more each week until the end of the year. Get them here: https://freesound.org/people/lori.mortimer/
  • PodPeople (@podppl) is an excellent Instagram follow. They’ve been posting reels of their sound designers demoing and explaining how they sound-designed specific podcast scenes. (Thanks to Ashley Lusk for the recommendation.)

A Beginner’s Course: Five Episodes from Sound School Podcast [SOTG #4]

Before we get started: Last month, I participated in an AIR (Association of Independents in Radio) webinar about how non-musicians can learn to make their own podcast music. As one of three panelists, I focused on showing how easy it can be to use free or inexpensive iOS music apps. AIR members can watch the webinar for free. If you’re not an AIR member, you can view the demo videos I made on my YouTube channel. Also, check out the benefits of joining AIR — members receive discounts on AIR programs and trainings.

When a novice writer asks how they can improve their writing, the answer is often “read, read, read.” I feel the same way about becoming a better audio maker: I need to listen, listen, listen to other people’s work.

But in addition to listening to podcasts for their sound design, I also listen to podcasts that explain sound design. I love getting under the hood and listening to an audio maker describe their overall approach for a piece or how and why they added sound design or music to specific scenes – or why they didn’t.

Sound School, a podcast from Transom.org and PRX, is one of my favorite resources for these kinds of discussions. Host Rob Rosenthal has been teaching audio storytelling for years, including at Salt and the Transom Story Workshop (RIP) — experience that comes through in the clips he selects to examine, the questions he asks his audio-maker guests, and how he explains nuanced details in a way new audio makers can understand. 

On Sound School, Rob has covered everything from interviewing to field recording to story editing (and more). With over 200 episodes to choose from, it’s hard to know where to begin.

So I’ve curated a beginner’s sound-design course for you. Here are five of my favorite sound-design-focused Sound School episodes to help you get your sound off the ground. I’ve put them in the sequence that makes the most sense to me, but of course you can listen to them in any order you like.

Sound Design Basics

Sound designer Matt Boll explains how he and the Gimlet team developed the sound-design principles they implemented on the first season of Crimetown, which focused on crime and politics in Rhode Island.

Key takeaways: Keep it short and simple when adding sound design to a scene. Emotionally heavy moments sometimes work better without sound design – let the speaker convey their own emotion. Sound design is an iterative process: “You just have to keep trying until it works.”

Avoiding Cheesy Sound Design

Radiolab, the groundbreaking investigative journalism radio show/podcast, has its own unique production style and sound. In this episode, Jad Abumrad explains some of his sound design principles through examples from the Radiolab episode “Nukes.”

Key takeaways: Avoid overly literal sounds. Brainstorm about what you want each scene to feel like. Why do you need sound design there? What emotion or experience are you trying to evoke/create? 

She Sees Your Every Move

Musician and sound designer Jonathan Mitchell explains how he used music to help shape the story in his piece, “She Sees Your Every Move.” It’s about a photographer who takes pictures of people in their homes at night, from the street and through their windows, without their knowledge or permission. (It’s creepy af!)

Key takeaways: The music and the story are not separate from each other. They’re both equal parts of the story. The music choices inform the clip choices, and the clip choices inform the music choices, “like a soup that’s getting stirred.” 

Scoring Stories Part 2

(There is, of course, a Part 1, but it’s not necessary to listen to it before Part 2.) 

Rob makes subtle changes to the music in Tiarne Cook’s audio piece (with her permission). Through trial and error, he shows how small changes can make a big impact. He explains how to choose where to start and end music at different points in a story, as well as how to choose which music to include in an audio story.

Key takeaways: There are some basic principles about scoring that can guide you, but it’s still important to experiment and to vary how you score an individual audio piece, so that the use of music doesn’t become predictable or boring. 

Remixing the Music 

The term “wallpapering” refers to sound design or music that plays throughout most or all of a piece. In this episode, Rob makes a few narration cuts and remixes the music on a wallpapered audio story (with producer Neena Pathak’s permission) to show how a different approach, one where music comes in and out at key points, can change and, in his opinion, improve the listening experience.

Key takeaways: Try to use music strategically at different points in the story, to emphasize a change in scene or mood, or to provide a moment of reflection before moving on to another scene. Try to find places where the speaker’s words can and should stand alone, without music, for the most impact.

Go ahead and drop these episodes in your queue. Maybe listen to one and see how it resonates with the kind of work you do. Are there any takeaways that appeal to you for the kind of audio pieces you create? Which principles might you adopt or adapt for your own sound design approach? Then maybe practice a little and move on to the next episode for more ideas you can copy.

Zen Ear, Beginner’s Ear [SOTG #2]

As a rule, I don’t do rules.

That’s one of the reasons I love sound design: no rules. Not for me, anyway. Except maybe “Don’t damage anyone’s eardrums.”

Other than that, it’s principles all the way down.

Principles are far more interesting than rules. While rules involve one-size-fits-all, almost mindless application, principles provide guidance and encourage flexibility.

On their face, rules seem simple, clear, and oh, so certain. What are rules, if not a list of dos and don’ts and when to do or don’t them? (Wait … eh, you know what I mean.) You merely need to remember the rules and apply them.

In reality, rules suck. They may promise certainty, but they rarely anticipate every scenario, and that’s where they crumble. Suddenly we make new rules or create exceptions. Or worse, we start justifying why we’re not following the rules. Now the rules aren’t so simple anymore. (Shout-out to “i before e except after c,” which has so many exceptions it’s actually just wrong.)

Principles ask us to trust our judgment in new situations. Designed to be flexible, they feel open and full of possibilities because they don’t tell us what to do. They guide our thinking and decision-making.

And most importantly for beginners, principles offer more learning opportunities.

Do you have any principles?

In the first issue of SOTG, I encouraged copying ideas from other people’s sound-design work (the ideas, not the work itself!). Today I’m suggesting a follow-up: distill some of the ideas you like (including your own) into a set of sound-design principles for your show. Then you can refer to them as you produce each episode.

Let’s take a look at the sound-design principles I developed for Mementos.

Each episode, a guest tells the story behind a cherished keepsake and how it became a container of meaning, memory, emotion, and human connection for them. As a result, from a sound-design perspective, it’s a quiet, reflective show.

I created the principles in the table below to align with the show’s overall tone. They may seem familiar because I based them on some of the things I listen for in other people’s work, which I described in the last issue.

Principle: Know who your soloist is
What it means for sound design: Who or what (speaker, music, sfx) should have the audience’s focus right now, like a soloist in a band? Does the sound design reflect that? It’s more than volume. For example, does the sound design interfere with the tone or quality of the voice or conflict with what’s being said in mood, beat, or fullness/sparseness?

Principle: Cadence matters
What it means for sound design: Listen to the speaker’s voice — not just what they’re saying but also how they’re saying it. Most people have a natural cadence — a pace, plus highs and lows and points of emphasis — to the way they speak. How should the sound design work with the speaker’s cadence? Should it imitate it? Complement it? Be completely different?

Principle: Breathe
What it means for sound design: Is there enough time for the listener to reflect on something important or thought-provoking the speaker said, before starting the next scene or continuing with the story?

While making an episode, I periodically review these principles to make sure the episode aligns with my overall intent for the show.

For a different type of show, I’d have different principles. For example, Have You Heard George’s Podcast? is “a fresh take on inner-city life through a mix of storytelling, music and fiction” by George the Poet. The show sounds nothing like mine (or anyone else’s), and therefore any underlying sound-design principles would differ from mine, too.

What is your show about, who is it for, and what do you want it to sound like? The answers to those questions will help shape your show’s sound-design principles.

Questions are better than answers

You may have noticed that I described my sound-design principles with questions instead of statements (aka, rules). The questions enable me to answer differently in different situations.

A rule about letting the piece breathe might assert something like, “Leave at least four seconds between something profound the guest says and the start of the next scene.”

A principle asks, “Did I leave enough time between the profound thing the guest said and the next scene?” I won’t know what “enough” is until I’m making that episode. It might be different each time, even within the same episode. Maybe it’s four seconds. Maybe it’s six. Or two.

You might be thinking this is just semantics. I get it. But for me, the difference between rules and principles, between answers and questions, is about learning to trust my ear.

I’ll give you an example. On social media, I’ve seen a few good-natured debates about whether it’s okay to fade music in or out, or whether, instead, music should always have a “hard” start and finish.

Some folks believe that music should never fade in or out. But as a newb, why would I adopt such a rule? It takes away half of my options and leaves me with no chance to learn what sounds good to me and what doesn’t.

Instead, I would ask, How should I start the music here, and why? Asking and answering a question ensures I listen closely and decide what sounds better in this case. Other times, I might apply the same principle and arrive at a different conclusion.

An Instagram post of mine from 2017, well before I started podcasting. At least I’m consistent!

Trust your beginner’s ear

I don’t want to wander too deep into Zen philosophy, mostly because I’ll screw it up. But Shunryu Suzuki’s concept of the beginner’s mind works for sound design, with a twist.

Suzuki says, “The mind of the beginner is empty, free of the habits of the expert, ready to accept, to doubt, and open to all the possibilities [emphasis mine].”

Beginners have fewer, if any, expectations and preconceptions. They’re unobstructed by past experiences (good or bad) and lessons learned. The secret to Zen practice, Suzuki says, is the beginner’s mindset. The aim is to clear your mind and think, experience, and be like a beginner.

Being a beginner usually feels like a disadvantage, like the time my game-loving 12-year-old utterly destroyed me, the newb, at Settlers of Catan. But in sound design, we can flip that dynamic upside down. Being a beginner can be an advantage.

Borrowing from Suzuki, I think of new audio makers as having a beginner’s ear. We listen differently and hear differently than experts. Our ear is untrained and unencumbered by past experiences. It doesn’t know the rules yet, so it hasn’t “ruled out” any possibilities.

So forget about rules and dos and don’ts and always-s and nevers. Consider instead a set of sound-design principles, aligned with your show, that encourage you to ask questions and answer them yourself.

Embrace your beginner’s ear. Listen for what sounds good to you, and choose that.

Cheers,

Lori

In the next issue of Sound Off the Ground: less philosophy and more resources!

Not subscribed yet? Sign up now!

Copy That! [SOTG #1]

Years ago, one of my sons was drawing a picture after dinner. He said he was copying something his friend drew that day in preschool.

I said, “Oh, that’s nice. But why don’t you draw your own original thing?”

He said, “My own original thing is copying people.”

Turns out, he was on to something.

We learn by copying others

You can expect copying to be a recurring theme in this newsletter. As in, don’t be afraid to copy, borrow, or imitate ideas and techniques from other sound designers. That’s the best way to learn.

Color photo of tortie cat sitting and facing the camera, with a window behind her. The photo is repeated four times in a square layout. The photo is labeled "Copy cat".

Sound Off the Ground began germinating in my brain when I read the first issue of Alice Wilder’s newsletter, Starting Out. In an interview with Alice, Tobin Low said the most helpful mentors to new audio makers are often not experts or higher-ups but people just ahead of them in skill and experience.

But for independent podcasters who don’t work on a team, it can be hard to find a mentor or discover work by people who are slightly ahead of you.

Luckily, we can still learn from the work of people who are far more advanced. My favorite podcasts tend to be narrative-style shows made by experienced, professional teams with actual budgets (god love ya!). These teams make complex, rich, immersive podcasts that sound amazing. Of course, their skill sets and resources far exceed mine.

How can I learn from my favorite “higher-ups” if there’s such a big gap between them and me? How can I copy what they do with my skills still in the larval stage?

By narrowing the scope of what I’m listening for in their sound design.

Concentrate on timing and music

When learning a new skill, sometimes you get worse before you get better. But I never wanted to reveal that to my audience. I wanted each episode to sound a little better than its predecessor.

So I concentrated on what I knew I could manage — what was within my skill set or just a little past it — when listening to other audio for inspiration.

I started by giving most of my attention to timing and music because to me, they’re the core of basic sound design. I just needed the ability to:

  • make basic edits in my audio software (DAW) of choice
  • listen to and identify cadences and patterns in speech and music

Keeping in mind my preference for narrative shows, here’s what I listened for (and still do).

Timing:

  • Transitions between scenes or when music or sound effects start, end, or blend together. When is the music fading in or out? When does it start or stop suddenly, without fading? What’s happening in the story when the music starts or ends? Is that effective? Would I change it or leave it the way it is? 
  • Pacing and spacing. Does the timing sound intentional? Does the story “breathe”? If it doesn’t, is it that way for a reason? When a speaker says something important or reflective, is there time for the audience to sit with it and their own thoughts and emotions for a few seconds, or does the scene jump quickly to the next thing, killing the buzz?

Music:

  • The music or sound effects under someone speaking. Does the volume level or style of music compete with the people speaking, making it harder to hear or follow? Does it hang back a bit and help move the speech along? Does it complement what’s being said? Would another style of music have been better? 
  • Repetition. Does the same music appear more than once in the episode? When does that happen? Why do I think it’s repeated in those places? Is the repetition effective for the story or is it just … repetitive?

Timing + Music:

  • The beat or cadence of the music under narration. Does the music line up well with the phrasing and timing of the speaker’s words and pauses? Are there quiet parts in the music that allow the speaker to be more prominent? Do musical beats align with the words the speaker is emphasizing?

By listening intentionally for these sound-design choices, I stayed focused and was able to compare similar scenarios across different podcasts and episodes. Over time, I developed a sense for what sounded good to me. And then I tried to mimic it in my episodes.

Originality is nothing but judicious imitation.

Voltaire

Example: Outside/In’s “Windfall” series

Sometimes I nerd out and listen to a show or episode more than once, focusing on different things each time.

I did that with Outside/In’s “Windfall” series about the history and future of wind farms in the United States. The first time through, I listened because I wanted to learn about wind farms. But it had such great sound design that I listened again to focus on that.

The entire series is beautifully sound designed, and I’ll probably talk about it again in future issues. But focusing on timing, specifically, I learned from “Windfall” to expand the transitions between scenes in an episode. Here’s an example from “Windfall Part 1: Sea Change.” (If you’re reading the newsletter in your email, click here.)

Notice how long the musical transition is between the end of the first scene (when Sam Evans-Brown says “…to reshape the future of where our energy comes from”) and the next one (when Annie Ropeik says, “In the spring of this year….”).

It’s 16 seconds.

In audio, 16 seconds is an eternity. In many cases, including probably all of my Mementos episodes, it would be too long. But not here. 

The hosts have just spent the first seven minutes establishing the background for the series, and they’re about to delve into the details. There’s no rush to get to the next scene. Listeners are afforded the time to reflect on what they’ve just heard about the context for the series and the multiple voices they’ll hear throughout it. 

Although a 16-second musical transition would be too long for my episodes, I still took something from this “Windfall” example: I can trust my audience with longer transitions than I thought.

Conventional wisdom says audiences are busy and have short attention spans, so you better keep things moving. But the “Windfall” example showed me that’s not necessarily true and that I could play around with longer transitions between scenes. If the story was good, the listeners would still be there when the next scene started.

I wouldn’t have noticed that minor detail had I not been listening specifically to learn. It’s important, I think, because over the course of an episode, making small adjustments to timing and pacing can make a big impact on the listener’s experience. And those kinds of adjustments are often within a beginner’s skill set.

So the next time you’re listening to your favorite show, pay close attention to timing and music, and see if there’s anything you want to copy as “your own original thing” on your next piece.

Cheers,

Lori

Not subscribed yet? Sign up now!

Why Sound Off the Ground?

Why did I create Sound Off the Ground? Because just a few years ago, when I was brand new to audio, I got a lot of help from other people in the industry. And now I’ve learned enough to help newbs like you get started.

Hand-drawn illustration on an X-Y axis. Title is "My Sound Design Journey." The X axis is labeled Time. The Y axis is labeled Skill.  The illustration depicts a learning curve moving from the bottom left upward toward the top right corner. The area under the learning-curve line is divided into three sections. The first section is labeled "Steep Learning Curve." There is a stick figure climbing up to the top of that section with a microphone in her hand. The figure is labeled "Me." The figure is just about to reach the second section, labeled "Plateau of Resting and Sharing What I've Learned." And the third section on the right is labeled "Continued Lifelong Learning." 

The image depicts Lori Mortimer pausing during her learning process to share what she's learned about sound design.

Can’t Carry a Tune? No Problem!

Sound Off the Ground can help you.

Let’s get something out of the way right now. Can you: 

  • read music?
  • play an instrument?
  • sing on key?
  • write music?

Me either!

And yet I’ve learned how to make my podcast, Mementos, sound good. I get compliments all the time on my sound design. (And they’re not even from my mom, because she’s dead.)

Not too long ago, I was where you are now.

I know how overwhelming it can feel when learning this stuff. And how many mistakes you make when learning. And how *@$#&! time-consuming it can be. Plus my wallet can tell you how tempting it is to spend money on apps and sound packs that you don’t really need.

That’s why I’m focusing on sound design for new podcasters.

In Sound Off the Ground, I’ll:

  • save you time by sharing lessons I’ve learned through trial and error — I suffered so you don’t have to
  • save you money by showing you free or cost-effective sound resources and how to use them creatively
  • show you how to make your own simple music even if you know nothing about music (I swear!)
  • share new sound-design tips and resources as I discover them along the way

And you’ll be able to: 

  • make your show sound great without spending gobs of money on sound design
  • listen carefully to the sound design of other shows and borrow their ideas, putting your own unique spin on them
  • find free music, sound effects, and software, and use all of them in ways that fit your personality and show.

What do I mean by sound design?

When I say sound design, I mean the process of choosing, creating, altering, and arranging audio elements — like music and sound effects — to set the tone, create atmosphere, and enhance the story you’re trying to tell.

No matter which microphone or DAW (audio production software) you use, you will be able to use these principles, tips, and techniques. Therefore, I won’t be covering studio setup, which mic is best, or which DAW you should use. Those are all personal decisions.

So go grab a cuppa, and let’s get your sound off the ground. Subscribe here. It’s free and always will be.

Let's get your sound off the ground!

I respect your privacy.