Atomic Units of Meaning: Resonate Podcast Festival 2024 [SOTG #7]

Workshop alert! On Saturday, November 16, from 1-5 p.m. ET, I’m leading an online workshop: Sound Design for Narrative Audio. We’ll listen to, dissect, and critique published examples to uncover and understand the sound-design choices their producers made. And we’ll explore sound-design tools, techniques, and resources that won’t break the bank. You’ll leave with the skills and knowledge you need to get started, plus an appreciation for sound design as an integral part of narrative storytelling. I’d love to see you there!


While explaining how improv comedy rules apply to audio storytelling, Davy Gardner, who helms Tribeca Audio, said that carefully chosen sounds are “atomic units of meaning” that can “quadruple your vocabulary.”

That’s how I felt after each session at last week’s 2024 Resonate Podcast Festival: every 45 minutes, a presenter dropped a bunch of atomic units on us. Each speaker distilled years of knowledge and experience into succinct yet layered truths for us to contemplate and eventually incorporate into our work. These talks will be, dare I say, resonating with me for weeks and months to come.

Celebrating the Craft

My last in-person conference was in the fall of 2019, so I was long overdue for audio-community immersion. There’s just something about being in the same room with people who do what you do, who love what you love.

At a festival, the presenter lineup draws you in, but you also find gold in the spaces in between. It’s the magic of ad-hoc conversations at the food truck, of discovering you’re standing next to someone you met in an online class, of matching a face to a voice that’s been in your ears for years.

Despite the recent industry contraction, with its layoffs and shuttering of beloved and important shows, the place positively buzzed with audio love and excitement about the work. It’s fitting that Chioke I’Anson, who founded Resonate, named it a festival.

Our field overflows with passionate, talented creators who feel called to tell stories, create audio art and journalism, center marginalized voices, expose and question power structures, and innovate to move the field forward. The creative future is bright. Resonate was proof of that.

Titled “Telling Stories,” this year’s event blended presentations about audio fiction and nonfiction because, as the website says, “they have more in common than we often think, and there is much that the genres can learn from one another.”

The event was delightfully light on business-speak. Downloads, marketing, and audience development are important, of course, but Resonate 2024 centered what is essential: craft, process, creativity, and artistry. The money-bros have wreaked their havoc, but Resonate is here to shout that audio storytelling survives and thrives.

Takeaways

You might be wondering if I did anything besides absorb the vibes for two days. Although I’m still contemplating most of the atomic units we were gifted, I did digest some sound-design observations and tips.

Sound Design & Scoring

Throughline producers Rund Abdelfatah and Ramtin Arablouei demonstrated how they score and sound-design a scene. Throughline is known for its immersive sound design. Here’s a high-level view of their process:

  • They use a roughly 60:40 mix of sound effects from sound libraries and effects they create themselves
  • In their first draft, they get the story structure in place, with sound-design cues added to the script
  • In their second draft, they start adding sound design
  • They begin with sound beds and then add texture
  • Sound before words: in a scene, they make sure their audience hears what’s going to be talked about before a character starts talking about it

Physicality of Sound

In her presentation “Sound Is Physical,” Ellen Horne reminded us that sound waves move through the air and into our ears. That bass sounds rumble in our chests. And that when two people speak in person, both bodies become resonant chambers.

When audio recordings contain these embodied experiences, listeners more easily connect emotionally and viscerally to what they’re hearing. This is especially true of and important for audio fiction.

However, in the wake of Covid-19, many shows have stopped taping in person. Remote recording saves time and money, but it also removes the physicality of people interacting in person.

And that’s a shame, Ellen says, because when people record remotely, they are literally and figuratively “distanced from others, hard to reach, and removed emotionally.” Something is missing from the tape, and listeners can sense it.

(There was so much more to Ellen’s presentation … ironically, I’m just scratching the surface.)

Ellen offered some tips for bringing more physicality into your audio:

  • Whenever possible, record in person—get out in the world
  • For an interview, sit on a sofa next to the interviewee—not across a table from them
  • Record people walking together, lying on a blanket together, using their bodies in some way
  • For narration:
    • Warm up your body—go for a walk, stretch, move—before recording
    • Stand up while recording
    • Talk with your hands
    • Tear up your script and rewrite it as bullet points; then record from that

Favorite Quotes from Resonate 2024

“If it sounds good, it is good.” – Jason Reynolds

“Sound is touch at a distance.” – Researcher Anne Fernald, quoted by Ellen Horne

“I think there is embodied authenticity. If it’s true, it’s deeper in you.” – Ellen Horne

“Sound isn’t just a visual medium. It’s a sensory medium.” – Davy Gardner

“Sound is a language.” A sound—such as the chime of a grandfather clock—can contain many meanings. In a scene, carefully chosen sounds are like “atomic units of meaning” that can “quadruple your vocabulary.” A sound can be “far more specific than words, yet be universal at the same time.”  – Davy Gardner

“We create [art/stories] to stir curiosity in someone else in the hope of becoming someone else’s muse.” – Avery Trufelman, creator of Articles of Interest

“If from year to year we’re making the same show, we’re not growing.” – Rund Abdelfatah

“Work with people who are willing to get into a pool and get murdered for you.” – Ayeesha Menon, creator of Mumbai Crime


Congratulations to Chioke I’Anson and the VPM+ICA Community Media Center team at the Institute for Contemporary Art at Virginia Commonwealth University. They curated a stellar lineup—with off-the-charts expertise—and orchestrated a seamless two-day event.

I’m already looking forward to Resonate 2025.

Doing the Doog with Richard Parks III [SOTG #6]

It’s great to be back with this issue of Sound Off the Ground. I spent the spring and summer developing sound-design training programs for this fall. I made a video about sound design for narrative podcasts that will be included in a Penn State podcasting course. And I’m leading a workshop called Music for Non-Musicians: Create Music on Your iOS Device, which runs on Sept 27. Participants will learn how to compose songs on their iPad or iPhone with GarageBand and a $4 generative-music app. No musical skill required (really!). You can learn more and register here. I’d love to see you there!



My all-time favorite sound design of a narrative podcast is in “The Ballad of Mount Doogie Dowler,” an episode of Storytime with Seth Rogen.

It’s the most intense, incredible true story I’ve heard on a podcast.

In the episode, Colin Dowler tells us how he decided to “do the Doog” and summit the Canadian mountain named after his grandfather, Doogie Dowler. Colin was supposed to do the Doog with his brother but ended up going alone. And he almost didn’t come back.

All because he crossed paths with a pesky 900-pound grizzly bear who tried to eat him alive.

From the start, we know that Colin survives the bear attack. I mean, he’s alive enough to tell us the story himself. And yet, I was on the edge of my seat.

I credit Seth Rogen and Colin’s storytelling skills. But the sound design also creates suspense and tension, especially during the 22 minutes dedicated to the attack itself.  

It has a percussive, instrumental, scattered Peter and the Wolf vibe. These sounds and textures amplify Colin’s emotions and illustrate the menacing actions of the bear. From a sound-design perspective, it’s not a literal scene. Yet those sounds got my heart racing and evoked Colin’s terror without a single “realistic” bear-attack sound.

It’s brilliant, and I wanted to learn how the sound for this scene was conceived. I reached out to Richard Parks III, who produced and sound designed the episode.

You may know Richard as the creator of Richard’s Famous Food Podcast, a documentary food show that is, as Richard says, “more like Pee-wee’s Playhouse than a normal podcast.”

Since the spring, Richard has also been publishing Dodger Blue Dream, which chronicles the Los Angeles Dodgers’ 2024 baseball season. A gambling scandal broke early in the season when it was revealed that Ippei Mizuhara, the translator for the Dodgers’ star pitcher, Shohei Ohtani, had stolen millions of dollars from Ohtani to cover his own gambling debts. Mizuhara will be sentenced on October 25, which just happens to coincide with the World Series.

Dodger Blue Dream episodes “The Talented Mr. Ippei” and “The Complaint Against Ippei” cover the scandal.

Richard says the podcast is “a perfect primer for anybody casually interested in baseball and looking for a fun way to get caught up on some of the year’s biggest storylines in a tightly edited, sound-rich package.”

Now, let’s get back to our topic at hand: The Doog! Richard and I spoke for an hour, so I’ve edited and condensed our conversation below. To get started, let’s look at some key points Richard made:

  • Sound design is like writing and editing. With writing and editing, you’re arranging words in a particular order. And with sound design, you’re arranging sounds – music, narration, interview tape, sound effects – in a particular order.
  • The informational and emotional meaning of sound is always a part of whatever piece of sound you’re using.
  • The emotional tenor of the story overall, especially in the interview tape and what you know about the subject of the story, should drive your decisions about how you’re going to approach the overall sound design of the piece.
  • For “The Ballad of Mount Doogie Dowler,” Richard decided that there would be no difference between music and sound design.
  • An audio project is really a series of decisions – deciding what to put into a container, how big the container is, how much stuff you want to put in it, and the order in which you’ll put it in the container.

The first thing for me is that sound design is just a part of writing. I don’t see a line between music, sound design, or tape. The informational and emotional meaning of sound is always a part of whatever piece of sound you’re using. And so sound design is deciding what length and order those things go in, from zero minutes to the end of the piece.

And that’s also what writing is. Inevitably, what you’re doing is arranging sound in a certain order. The job is always the same, whether you’re writing, editing, sound designing, scoring. Those things have symbiotic, inevitable relationships.

The more you think of it as an integrated process, the more it opens up avenues for better ideas, in my experience.

It was a decision that came early on [to treat music and sound design as one], and I think that it just made sense for the piece because it’s man and nature. And Colin telling the story of being alone, of being attacked by a bear, just happens to lend itself to that kind of sound design.

You have to use the emotional tenor of the story overall. In this case, the interview tape, the man telling the story, what he’s like, what his experience was, and what you know about him from your interactions producing the episode – taking all that into account, to make the first decisions about how you’re going to approach how it should sound.

Other Storytime episodes we did are these kind of psychedelic multimedia collages, as opposed to the stark-landscape oil painting from, you know, 1898 that this piece was. It just made sense for this piece.

I worked with a composer named William Ryan Fritch, who I’ve worked with for eons. I come from a documentary film background, and Will contributed music [for some of those projects].

When we worked together before, he gave me music, and then I wrote and scored and sound designed with that. That’s how I like to work. I like to start with music a lot of the time.

With Doogie Dowler, because of what the story was, I came up with comps [musical examples] that I knew I wanted to talk to Will about. It was like dirty, pulsing, synth things. I put in something with a sort of acoustic, eerie Americana vibe.

I knew that he had this in his palette, because he lives in a barn and has all these antique instruments. He’s always playing and recording things. He sent me a bunch of files that were like sound effects. For example, he was making little clicky sounds with an instrument, basically. A bass clarinet.

Then I got to bounce off of the rhythms that were in those files and mess with them. And I realized the music and sound design were the same thing. I immediately took that as a rule for the piece. I’m allowed to change the rule later on, but for now, that’s the rule, and I’m going to see where that takes me just to create forward momentum.

There were a couple times that Colin made noises himself. I’m sorry, this is pretty gory stuff – but he talks about the bear chewing on him sounding like a lab chewing on a cow bone.

He also describes the bear’s nails on the gravel, and he goes like this [Richard makes scratching noises]. He even had a rhythm to it. So I took those, and they became another piece of the sound design.

Then I had a new variation to my rule, which was to use any nonverbal sound that Colin made.

In Doogie Dowler, I think that a man’s voice, along with the kind of texture and musical elements in the palette that Will gave me, is hyper-real in someone’s mind.

To me, it feels like reading a good book because it’s only descriptive to a certain point. And that really engages the mind’s imagination. And I think that’s what people refer to as cinematic in audio.

This story had a beautifully simple version of that. I had played with a few sound design pieces [realistic sounds], and I realized that it was taking me out of the cinematic world a little bit.

So I decided I wouldn’t go into my sound-design folders [of sound effects]. That decision indicated a philosophical approach to how I would work on this thing.

It’s good to think about it in terms of decisions. That’s what every creative project is like – you just need to make decisions.

That’s what editing is. You have to decide to cut, cut away. It’s like we’re deciding what to put into a container. How big the container is, how much stuff you want to put in it, and the order of it.

The whole idea of the music and sound design is to help transport you. It puts you right there. It’s like creating a proscenium for the storyteller to be heard. There’s a spotlight, it’s in the right place, and you know that the person wielding it is being intentional about it. And so, sound design is doing a lot of work in order to get out of the way.

There’s no comparison to Man Fights Off Bear. It’s the perfect distillate of high stakes, live or die. And we know that he lives, but also, we’re not thinking about that if it’s told right.

It’s hard work. And it went incredibly fast. Working on that episode was one of the most intense things I’ve done.

It all comes back to how Colin told it. I think when you go through a physical trauma like this, it’s not uncommon to have time slow down.

And Colin had all this detail. Sometimes I try to work around exhaustive detail or length. But in this case, I realized it was just part of the fact that we were talking to someone who fought off a grizzly bear, and it means that we’re gonna sit there and we’re gonna hear all about that moment because it’s embedded in his memory for very good reason.

I think it’s important to just remember that the job is different depending on what venue your work is going to be in, and what the purpose is, and what the emotional and informational value of the interview – and therefore the piece – is going to be, and to whom.

You need a lot of context, so I wouldn’t want to prescribe one thing or another. But each piece is its own movie, and this one was kind of an outdoors action-horror-real-life thriller movie. So, I made decisions around that.

In terms of how to approach things, I think the last thing I would say is just: make me listen and make me care.


You can follow Richard Parks III at @reechardparks on Instagram and X.


Sounds Complicated [SOTG #5]

As New Hampshire Public Radio’s Taylor Quimby learned, you never know when a plastic sled and a jar of marbles might come in handy.

Taylor was the sound designer on Outside/In’s Windfall series. With the sled and marbles, he created my favorite example of sound that explains complex information.

Windfall is about the history and future of offshore wind farms in the US. In the first episode, “Sea Change,” the Windfall team conveys the size of wind turbines through visual writing.

They describe blades bigger than airplanes and predict rotors the size of three football fields, with a turbine the size of “a giant, spinning sports complex.” This writing style helps listeners picture these structures in the real world.

But sometimes it’s hard to find an analogy, especially for numbers so large or small that we can’t wrap our minds around them. We need a frame of reference, and sound design can provide it.

For example, the Windfall team found it challenging at first to explain the exponential growth in wind-farm energy output.

“We knew that transmitting the scope and scale of growth was the most important thing,” Taylor said.

“Yet, especially when it comes to energy, that’s really hard because it’s a subject full of jargon. You’re talking about kilowatts, megawatts, gigawatts, and most of us don’t have a sense of how they factor into our own lives.”

Early episode drafts described power output with numbers. But the team realized numbers weren’t communicating just how significant the increase in power was.

In an editing meeting, someone asked, “How can we use sound to help transmit scale in a way that’s almost like a type of synesthesia?”

Taylor remembered an old anti-smoking public service announcement that started with a ball bearing being dropped into a bucket. The bearing represented one smoking-related death. Next, a bucket full of bearings, representing thousands of smoking-related deaths, was poured out.

“You hear the one ball bearing, and then you hear this just unimaginable staggering amount of sound that represents all the people that die of smoking,” he said.

“And the point is not that you can individually pick all those things out. The point is to be so overwhelmed by a number and the comparison between those things that you just get a sense of this as almost beyond comprehension.”

And in the spirit of stealing like an artist, the Windfall team borrowed that idea.

First, they chose a unit of measurement other than watts. The first offshore wind farm, near Vindeby, Denmark (pronounced in English “vin-eh-boo”), became their base unit of energy. Next, they chose the sound of one marble to represent one Vindeby-sized wind farm. Over time, as the wind farms grow in size, so does the number of marbles we hear.

Listen below for how the marbles create a sense of scale.

Next, wind farms continue growing, and the marble sounds juxtapose a large number with a much smaller one.

Listen below for the big sound-design payoff.

Did you notice how long it took for 7,000 marbles to pour out? Thirty-five seconds, an eternity in audio. That was intentional.

“We wanted to do it slowly and hear the movement and hear it building and clicking and clacking and feel like it’s going on forever and ever and ever and get that sense of scale, so that it feels overwhelming,” Taylor said.

After the 7,000 marbles roll, we get the big payoff: the sparse thunking of a few marbles representing the minuscule output of US wind farms. The sonic contrast tells the tale. No words needed.

Screen capture of a sound wave in a digital audio workstation. The wave is in color, with a long, dense black section in the middle that represents the sound effect of 7,000 marbles rolling.
Can you find the 7,000 marbles?

The marble concept was ingenious, as was Taylor’s method for creating the sound effect. Taylor borrowed a marble collection from NHPR Senior Producer Jack Rodolico’s son. He took the marbles home and started experimenting.

“I dropped a marble in a bucket, and it just made a thunk.”

And when Taylor dropped a bunch of marbles, “it happened super-fast, and it did not communicate what we talked about trying to communicate. You needed a sense of movement. And marbles are heavy, so they just drop.”

For a sense of movement, the marbles needed to roll. Taylor tried a few more things before grabbing his son’s sled, made of high-density foam covered in plastic.

“I propped it up at an angle and dropped a marble on it. You hear the thunk … thunk … thunk-thunk-thunk-thunk-thunk. And I thought, this is what you need because somebody can listen to this, hear the movement, and picture the marble dropping.”

Now he just needed to create the sound of 7,000 marbles.

“I counted out 100 marbles so I could record 100 falling, and then I multiplied that sound file a number of times in my DAW, over and over again to create bigger and bigger numbers,” he said.
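Taylor’s batch-and-multiply trick can be sketched outside a DAW, too. Here’s a minimal Python illustration of the idea, layering offset copies of a small recorded batch to approximate a much larger pour. The function name, the list-of-samples representation, and all parameters are my own illustration for this newsletter, not Taylor’s actual workflow or any DAW feature.

```python
import random

def layer_copies(batch, total_events, batch_events=100,
                 sample_rate=44100, spread_seconds=30.0):
    """Approximate the sound of `total_events` marbles by layering
    offset copies of a recorded batch of `batch_events` marbles.

    `batch` is a mono buffer: a plain list of float samples.
    """
    copies = total_events // batch_events          # e.g. 7000 // 100 = 70 copies
    spread = int(spread_seconds * sample_rate)     # window to scatter copies over
    out = [0.0] * (spread + len(batch))
    for _ in range(copies):
        offset = random.randrange(spread)          # each copy starts at a random point
        for i, sample in enumerate(batch):
            out[offset + i] += sample              # sum the overlapping copies
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]                 # normalize to avoid clipping
```

Stacking 70 offset copies of the 100-marble recording is what produces that dense, continuous clatter; the normalization step at the end keeps the summed signal from clipping, which a DAW’s master bus would otherwise flag.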

Taylor offered some tips for using sound design when words alone can’t paint the picture.

Example: Outside/In’s 250th episode tells the story of Earth’s largest mass extinction event, which occurred about 250 million years ago. After a series of natural disasters, the Earth’s landscape was barren. Then fungi and ferns began popping up.

“To transmit the idea of all Earth’s continents being populated by countless numbers of fungi,” Taylor said, “we used little popping sounds, like bloop… bloop … bloop … bloop bloop bloop … bloop-bloop-bloop-bloop-bloop-bloop. Nature documentaries often use little bloop sounds during time lapses because that’s what we imagine. So, I thought, let’s take that thing that people will inherently understand and make a sonic landscape that might help people imagine this world I’m describing.”

To hear the mushroom world popping up, start listening at 24:00.

Example: Episode 3 of Patient Zero, an NHPR podcast about Lyme disease, explains how the Lyme-causing bacterium gets from a tick into the human body. This time, Taylor borrowed from The Magic School Bus.

“The Magic School Bus shrinks and goes into the human body. And I thought, let’s do this Magic-School-Bus style, and it’ll be really gross,” he said.

“I’m zooming into a world and imagining stuff. What does a tick thrusting its mouthparts into your skin sound like?

“I used different sound effects to imagine life at the scale of a tick. There was the classic Foley stuff, where you’re tearing or squeezing leather or cutting meat. The sound of the tick walking on skin is fingers palpating a sponge very close to a microphone.

“And I heard from people who said, ‘I think you went too far because that was deeply uncomfortable to listen to.’ But that’s what I was going for.”

To hear the tick feasting on human flesh, start listening to episode 3 of Patient Zero at 14:00. Use headphones if you dare.

Occasionally, an object’s literal sound just doesn’t work. It’s not what listeners imagine, and it can even be confusing without a visual aid. So, through trial and error, listen for the sound that people will just inherently get. Sometimes, a more cartoon-like version of the sound conjures the image better.

Example: Years ago, an Outside/In episode included an Indiana-Jones-style journey where someone got on a train and then on a plane. Taylor didn’t like the sound of a real jet for the plane. “It’s just a whine. So, I used an old prop-engine plane, which was not the actual plane, but people immediately hear it and get it.”

With factual information and real-world scenes, be transparent about what you’re doing so your listeners know when you’ve altered something, Taylor said. “If I have field tape with a reporter in an actual part of the world and then combine that with some sort of fake sound effect, that’s where you can get into some problematic areas where you’re misleading your audience and passing fake sound design off as field tape.”

And don’t forget, the biggest tip in sound design is to experiment and play. When you find yourself at a loss for words, start experimenting with sound design to help you express what you mean.


A big thank-you to Taylor Quimby for sharing his knowledge and experience with us!

Do you have a favorite sound-design example? Send a link and short description to lori@lorimortimer.com, and I’ll include it in the next issue.


And before I go, I’ve got two resources for you:

  • I’ve started uploading some of my recorded sounds to Freesound.org with Creative Commons Zero (CC0) licensing, so you can use them any way you want. I’ll add more each week until the end of the year. Get them here: https://freesound.org/people/lori.mortimer/
  • PodPeople (@podppl) is an excellent Instagram follow. They’ve been posting reels of their sound designers demoing and explaining how they sound-designed specific podcast scenes. (Thanks to Ashley Lusk for the recommendation.)

A Beginner’s Course: Five Episodes from Sound School Podcast [SOTG #4]

Before we get started: Last month, I participated in an AIR (Association of Independents in Radio) webinar about how non-musicians can learn to make their own podcast music. As one of three panelists, I focused on showing how easy it can be to use free or inexpensive iOS music apps. AIR members can watch the webinar for free. If you’re not an AIR member, you can view the demo videos I made on my YouTube channel. Also, check out the benefits of joining AIR — members receive discounts on AIR programs and trainings.

When a novice writer asks how they can improve their writing, the answer is often “read, read, read.” I feel the same way about becoming a better audio maker: I need to listen, listen, listen to other people’s work.

But in addition to listening to podcasts for their sound design, I also listen to podcasts that explain sound design. I love getting under the hood and listening to an audio maker describe their overall approach for a piece or how and why they added sound design or music to specific scenes – or why they didn’t.

Sound School, a podcast from Transom.org and PRX, is one of my favorite resources for these kinds of discussions. Host Rob Rosenthal has been teaching audio storytelling for years, including at Salt and the Transom Story Workshop (RIP) — experience that comes through in the clips he selects to examine, the questions he asks his audio-maker guests, and how he explains nuanced details in a way new audio makers can understand. 

On Sound School, Rob has covered everything from interviewing to field recording to story editing (and more). With over 200 episodes to choose from, it’s hard to know where to begin.

So I’ve curated a beginner’s sound-design course for you. Here are five of my favorite sound-design-focused Sound School episodes to help you get your sound off the ground. I’ve put them in the sequence that makes the most sense to me, but of course you can listen to them in any order you like.

Sound Design Basics

Sound designer Matt Boll explains how he and the Gimlet team developed the sound-design principles they implemented on the first season of Crimetown, which focused on crime and politics in Rhode Island.

Key takeaways: Keep it short and simple when adding sound design to a scene. Emotionally heavy moments sometimes work better without sound design – let the speaker convey their own emotion. Sound design is an iterative process: “You just have to keep trying until it works.”

Avoiding Cheesy Sound Design

Radiolab, the groundbreaking investigative journalism radio show/podcast, has its own unique production style and sound. In this episode, Jad Abumrad explains some of his sound design principles through examples from the Radiolab episode “Nukes.”

Key takeaways: Avoid overly literal sounds. Brainstorm about what you want each scene to feel like. Why do you need sound design there? What emotion or experience are you trying to evoke/create? 

She Sees Your Every Move

Musician and sound designer Jonathan Mitchell explains how he used music to help shape the story in his piece, “She Sees Your Every Move.” It’s about a photographer who takes pictures of people in their homes at night, from the street and through their windows, without their knowledge or permission. (It’s creepy af!)

Key takeaways: The music and the story are not separate from each other. They’re both equal parts of the story. The music choices inform the clip choices, and the clip choices inform the music choices, “like a soup that’s getting stirred.” 

Scoring Stories Part 2

(There is, of course, a Part 1, but it’s not necessary to listen to it before Part 2.) 

Rob makes subtle changes to the music in Tiarne Cook’s audio piece (with her permission). Through trial and error, he shows how small changes can make a big impact. He explains how to choose where to start and end music at different points in a story, as well as how to choose which music to include in an audio story.

Key takeaways: There are some basic principles about scoring that can guide you, but it’s still important to experiment and to vary how you score an individual audio piece, so that the use of music doesn’t become predictable or boring. 

Remixing the Music 

The term “wallpapering” refers to sound design or music that plays throughout most or all of a piece. In this episode, Rob makes a few narration cuts and remixes the music on a wallpapered audio story (with producer Neena Pathak’s permission) to show how a different approach, one where music comes in and out at key points, can change and, in his opinion, improve the listening experience.

Key takeaways: Try to use music strategically at different points in the story, to emphasize a change in scene or mood, or to provide a moment of reflection before moving on to another scene. Try to find places where the speaker’s words can and should stand alone, without music, for the most impact.

Go ahead and drop these episodes in your queue. Maybe listen to one and see how it resonates with the kind of work you do. Are there any takeaways that appeal to you for the kind of audio pieces you create? Which principles might you adopt or adapt for your own sound design approach? Then maybe practice a little and move on to the next episode for more ideas you can copy.

Looperman: The Hero We Deserve [SOTG #3]

Today, I’m going to gush about a sound-design superhero: Looperman (www.looperman.com).

Animated gif of 1970s comedian Andy Kaufman lip syncing to the Mighty Mouse cartoon theme song, “here I come to save the day.”

Looperman will save us all!

Looperman has nearly 230,000 loops (short music clips) that you can use for free in non-commercial and commercial projects.

It illustrates the power of crowdsourcing. Musicians upload loops they’ve created so other people can incorporate them into their own compositions. These short clips, typically 10 to 30 seconds long, are meant to be looped (repeated) in songs, creating an extended rhythmic or melodic pattern.
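The looping mechanic itself is simple to picture in code. Here’s a tiny sketch that tiles a short clip until it fills a target length, treating audio as a plain list of samples; the function and its parameters are my own illustration, not anything Looperman provides.

```python
def tile_loop(loop, target_samples):
    """Repeat a short loop until it fills target_samples, trimming any overflow."""
    if not loop:
        raise ValueError("loop is empty")
    repeats = -(-target_samples // len(loop))  # ceiling division
    return (loop * repeats)[:target_samples]

# A 3-sample "loop" tiled to fill 7 samples:
# tile_loop([1, 2, 3], 7) -> [1, 2, 3, 1, 2, 3, 1]
```

In practice you’d repeat by whole musical bars rather than raw sample counts, which is why Looperman loops are cut to exact bar lengths at a stated BPM, but the mechanic is the same.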

Musicians are clearly Looperman’s target audience, but luckily for us, nobody checks credentials. In fact, creators may be happily surprised to hear their loop in a podcast episode instead of a song. When you use a loop, you’re asked to leave feedback with a link to your work so the loop’s creator can check it out.

Screen shot of a comment thread on a Looperman loop. 

User lmortimer wrote: I used this loop in a podcast episode at min 1:57 and again at 7:21. It was perfect for what I needed. Thank you. 

User Nightingale replied: I lvoe the idea. Excellent use at 1:44 and very sexy on 7:07. Thank you too for the feedback.

It’s all love in the comments.

I’ve also started adding Looperman to my episode end-credits and linking to the loops in my written credits. When someone allows me to use their work for free, that’s the least I can do in return.

A couple of caveats:

  • You need to create a Looperman.com account before you can download any files.
  • Looperman offers other types of music for download, such as tracks (songs) and “acapellas” (vocals with no music). But they have more stringent terms and conditions. Example: to use a track, you have to get permission from the member who created it. Please read the terms and conditions carefully.

Needles in the haystack

On a site with nearly 230K loops, how do you find just the right one? Luckily, Looperman’s search tools work pretty well.

First, be sure you’re on the Loops & Samples tab. Then you can see the list of tags, genres, and categories by selecting those options near the top of the page.

Screen capture of the Loops & Samples tab on Looperman.com. The top menu area is outlined in red to call attention to the Tags, Genres, and Categories links.

Make sure you’re on the Loops & Samples tab.

Take a gander at the variety of options for genre and category, as well as the number of loops in each one (in parentheses).

Screen capture from Looperman.com. The heading says "Find loops sorted by genre." Then there are three columns of musical genres, such as Dub, Country, and Fusion. Next to each genre name, in parenthesis, is the number of loops available in that genre. In total, there are 69 genres.

What’s in the “Weird” genre, I wonder?

Screen capture from Looperman.com. The heading says "Find loops sorted by category." Then there are three columns of categories, which are the names of instruments, such as Accordion, Harpsichord, and Strings. Next to each category listing, in parenthesis, is the number of loops available in that category. In total, there are 40 categories.

The didgeridoo needs more love.

To search using the Filter options, select Search for Free Loops on the Loops & Samples tab. The more filters you use, the more narrowly you can target the type of sound you’re looking for.

Just for kicks, I searched for mandolin loops in the fusion genre. The results: there is one fusion mandolin loop. Now that’s a needle in a haystack.

Screen capture of the Loops & Samples tab on Looperman.com. The Search Free Loops menu option is outlined in red to call attention to it. Further down, the Filter area of the page is outlined in red to call attention to the different filter options: Category, Genre, By Member, By Keyword, Key, Date, BPM/Tempo, and Order By.

Make sure you’re on the Loops & Samples tab and that you’ve selected Search Free Loops.
Then select your search criteria in the Filter area.

Search tips

When searching, start by selecting the genre and category (instrument) you want. Then try one or more of these filters to further refine your results.

  • Date filter:
    • Allows you to limit search results to loops uploaded anywhere from the past 60 days to the last 24 hours. Using any of these filter options significantly reduces the number of search results.
    • Example: In the last 30 days, 2328 loops have been uploaded to the site. That’s a much smaller pool to start with than the entire database of loops. The date filter is especially helpful when you need a loop in any of the larger genres (e.g., hip-hop and trap).
  • Key filter:
    • Allows you to select from a list of 24 major and minor keys.
    • Major keys are generally happier sounding.
    • Minor keys are generally sadder sounding (look for “m” after the key name).
  • BPM/Tempo filter:
    • Allows you to specify a tempo (speed/pace) range in bpm (beats per minute).
    • Rather than search for a bpm range, such as 100-120 bpm, search for a specific bpm, which will reduce your search results significantly, sometimes by half or more.
    • To search for a specific bpm, put that number in both the “from” and “to” fields.

Make your own luck

I won’t lie, you’ll probably find yourself listening to a lot of loops. Loopscrolling is real, I tell ya.

But it’s worth it. First of all, it’s free. (Bears repeating!) Second, thousands of loopy gems are waiting to be unearthed and used in ways their creators never imagined.

Here’s an example from Mementos: the opening scene of the “Ruth’s Poetry” episode.

This is what the scene looks like in Hindenburg. Please ignore my poor track organization.

Allow me to explain what you’re looking at:

  • I staggered and layered three different loops (numbers 1, 2, & 3) for the music in this scene.
  • Loop 1, the plucky strings, starts alone and loops (repeats) six times.
  • Loop 2 starts next and loops five times.
  • Loop 3 joins last, near the end of the scene, and does not loop.
  • It’s hard to see in the screen capture, but the loops fade out sequentially. Loop 2 fades out first, then Loop 3, and finally Loop 1, so at the end, we’re left with only the plucky strings, just like at the beginning.
  • Even though Loop 1 repeats for about 1.5 minutes and Loop 2 for almost a minute, the music never gets monotonous. A sense of movement and a growing, tongue-in-cheek seriousness develop as Loops 2 and 3 join and complement the narration.
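If it helps to see the arrangement in another form, here’s a minimal sketch of the stagger-layer-fade idea in plain Python. The loops are represented as simple lists of sample values, and the helper functions and numbers are my own illustrative stand-ins, not anything Hindenburg or Looperman actually provides:

```python
# Minimal sketch of the staggered-loop arrangement described above.
# Loops are plain lists of float "samples" (a stand-in for real audio).

def tile(loop, times):
    """Repeat a loop end-to-end, like dragging out a region in a DAW."""
    return loop * times

def offset(track, delay):
    """Delay a track's entrance by prepending silence."""
    return [0.0] * delay + track

def fade_out(track, fade_len):
    """Apply a linear fade to the last fade_len samples."""
    out = list(track)
    n = len(out)
    for i in range(max(0, n - fade_len), n):
        out[i] *= (n - i) / fade_len
    return out

def mix(*tracks):
    """Sum tracks sample by sample, treating shorter ones as silent at the end."""
    length = max(len(t) for t in tracks)
    return [sum(t[i] if i < len(t) else 0.0 for t in tracks)
            for i in range(length)]

# Three "loops" staggered like the scene above:
loop1 = tile([0.2] * 4, 6)              # plucky strings, repeats six times, starts alone
loop2 = offset(tile([0.1] * 4, 5), 8)   # joins later, repeats five times
loop3 = offset([0.3] * 4, 20)           # joins last, plays once

scene = mix(fade_out(loop1, 4), fade_out(loop2, 4), loop3)
```

The point of the sketch is the structure, not the numbers: each loop enters at a different time, overlaps the others, and exits on its own fade, which is what keeps a repeating loop from sounding monotonous.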

For this scene, I wanted classical-style music to accompany the narration, but in a playful, faux dramatic way, to match my guest’s storytelling style. Think Downton Abbey meets small-town Connecticut. At this point, I was just hoping to find one good loop that would work for the entire opening scene.

I searched for “classical” genre and “strings,” and while scanning the results, I just so happened to notice three loops with similar names:

After listening to them, I thought they sounded like a string-orchestra loop that had been disassembled into separate parts.

Once I started noodling around with them in Hindenburg, I realized that I didn’t need to start them all at the same time and, in fact, that it would sound better if I didn’t. And I figured out the rest from there.

Honestly, I was lucky that I noticed the similarity in the loop names because I never look at the names. They rarely tell you anything useful about the loops. This time, they did. Then I made the most of my good fortune by taking the time to noodle around and figure out a unique way to combine the loops with the story.

That’s the power of Looperman. With so many loops and so many ways to search them, it’s worth spending some time getting to know this superhero resource. You never know when you might make your own luck and create sound-design magic.

Get thee to Looperman!

Zen Ear, Beginner’s Ear [SOTG #2]

As a rule, I don’t do rules.

That’s one of the reasons I love sound design: no rules. Not for me, anyway. Except maybe “Don’t damage anyone’s eardrums.”

Other than that, it’s principles all the way down.

Principles are far more interesting than rules. While rules involve one-size-fits-all, almost mindless application, principles provide guidance and encourage flexibility.

On their face, rules seem simple, clear, and oh, so certain. What are rules, if not a list of dos and don’ts and when to do or don’t them? (Wait … eh, you know what I mean.) You merely need to remember the rules and apply them.

In reality, rules suck. They may promise certainty, but they rarely anticipate every scenario, and that’s where they crumble. Suddenly we make new rules or create exceptions. Or worse, we start justifying why we’re not following the rules. Now the rules aren’t so simple anymore. (Shout-out to “i before e except after c,” which has so many exceptions it’s actually just wrong.)

Principles ask us to trust our judgment in new situations. Designed to be flexible, they feel open and full of possibilities because they don’t tell us what to do. They guide our thinking and decision-making.

And most importantly for beginners, principles offer more learning opportunities.

Do you have any principles?

In the first issue of SOTG, I encouraged copying ideas from other people’s sound-design work (the ideas, not the work itself!). Today I’m suggesting a follow-up: distill some of the ideas you like (including your own) into a set of sound-design principles for your show. Then you can refer to them as you produce each episode.

Let’s take a look at the sound-design principles I developed for Mementos.

Each episode, a guest tells the story behind a cherished keepsake and how it became a container of meaning, memory, emotion, and human connection for them. As a result, from a sound-design perspective, it’s a quiet, reflective show.

I created the principles in the table below to align with the show’s overall tone. They may seem familiar because I based them on some of the things I listen for in other people’s work, which I described in the last issue.

Each principle below is followed by what it means for sound design.

  • Know who your soloist is: Who or what (speaker, music, sfx) should have the audience’s focus right now, like a soloist in a band? Does the sound design reflect that? It’s more than volume. For example, does the sound design interfere with the tone or quality of the voice or conflict with what’s being said in mood, beat, or fullness/sparseness?
  • Cadence matters: Listen to the speaker’s voice — not just what they’re saying but also how they’re saying it. Most people have a natural cadence — a pace, plus highs and lows and points of emphasis — to the way they speak. How should the sound design work with the speaker’s cadence? Should it imitate it? Complement it? Be completely different?
  • Breathe: Is there enough time for the listener to reflect on something important or thought-provoking the speaker said, before starting the next scene or continuing with the story?

While making an episode, I periodically review these principles to make sure the episode aligns with my overall intent for the show.

For a different type of show, I’d have different principles. For example, Have You Heard George’s Podcast? is “a fresh take on inner-city life through a mix of storytelling, music and fiction” by George the Poet. The show sounds nothing like mine (or anyone else’s), and therefore any underlying sound-design principles would differ from mine, too.

What is your show about, who is it for, and what do you want it to sound like? The answers to those questions will help shape your show’s sound-design principles.

Questions are better than answers

You may have noticed that I described my sound-design principles with questions instead of statements (aka, rules). The questions enable me to answer differently in different situations.

A rule about letting the piece breathe might assert something like, “Leave at least four seconds between something profound the guest says and the start of the next scene.”

A principle asks, “Did I leave enough time between the profound thing the guest said and the next scene?” I won’t know what “enough” is until I’m making that episode. It might be different each time, even within the same episode. Maybe it’s four seconds. Maybe it’s six. Or two.

You might be thinking this is just semantics. I get it. But for me, the difference between rules and principles, between answers and questions, is about learning to trust my ear.

I’ll give you an example. On social media, I’ve seen a few good-natured debates about whether it’s okay to fade music in or out, or whether, instead, music should always have a “hard” start and finish.

Some folks believe that music should never fade in or out. But as a newb, why would I adopt such a rule? It takes away half of my options and leaves me with no chance to learn what sounds good to me and what doesn’t.

Instead, I would ask, How should I start the music here, and why? Asking and answering a question ensures I listen closely and decide what sounds better in this case. Other times, I might apply the same principle and arrive at a different conclusion.

An Instagram post of mine from 2017, well before I started podcasting. At least I’m consistent!

Trust your beginner’s ear

I don’t want to wander too deep into Zen philosophy, mostly because I’ll screw it up. But Shunryu Suzuki’s concept of the beginner’s mind works for sound design, with a twist.

Suzuki says, “The mind of the beginner is empty, free of the habits of the expert, ready to accept, to doubt, and open to all the possibilities [emphasis mine].”

Beginners have fewer, if any, expectations and preconceptions. They’re unobstructed by past experiences (good or bad) and lessons learned. The secret to Zen practice, Suzuki says, is the beginner’s mindset. The aim is to clear your mind and think, experience, and be like a beginner.

Being a beginner usually feels like a disadvantage, like the time my game-loving 12-year-old utterly destroyed me, the newb, at Settlers of Catan. But in sound design, we can flip that dynamic upside down. Being a beginner can be an advantage.

Borrowing from Suzuki, I think of new audio makers as having a beginner’s ear. We listen differently and hear differently than experts. Our ear is untrained and unencumbered by past experiences. It doesn’t know the rules yet, so it hasn’t “ruled out” any possibilities.

So forget about rules and dos and don’ts and alwayses and nevers. Consider instead a set of sound-design principles, aligned with your show, that encourage you to ask questions and answer them yourself.

Embrace your beginner’s ear. Listen for what sounds good to you, and choose that.

Cheers,

Lori

In the next issue of Sound Off the Ground: less philosophy and more resources!

Not subscribed yet? Sign up now!

Copy That! [SOTG #1]

Years ago, one of my sons was drawing a picture after dinner. He said he was copying something his friend drew that day in preschool.

I said, “Oh, that’s nice. But why don’t you draw your own original thing?”

He said, “My own original thing is copying people.”

Turns out, he was on to something.

We learn by copying others

You can expect copying to be a recurring theme in this newsletter. As in, don’t be afraid to copy, borrow, or imitate ideas and techniques from other sound designers. That’s the best way to learn.

Color photo of tortie cat sitting and facing the camera, with a window behind her. The photo is repeated four times in a square layout. The photo is labeled "Copy cat".

Sound Off the Ground began germinating in my brain when I read the first issue of Alice Wilder’s newsletter, Starting Out. In an interview with Alice, Tobin Low said the most helpful mentors to new audio makers are often not experts or higher-ups but people just ahead of them in skill and experience.

But for independent podcasters who don’t work on a team, it can be hard to find a mentor or discover work by people who are slightly ahead of you.

Luckily, we can still learn from the work of people who are far more advanced. My favorite podcasts tend to be narrative-style shows made by experienced, professional teams with actual budgets (god love ya!). These teams make complex, rich, immersive podcasts that sound amazing. Of course, their skill sets and resources far exceed mine.

How can I learn from my favorite “higher-ups” if there’s such a big gap between them and me? How can I copy what they do with my skills still in the larval stage?

By narrowing the scope of what I’m listening for in their sound design.

Concentrate on timing and music

When learning a new skill, sometimes you get worse before you get better. But I never wanted to reveal that to my audience. I wanted each episode to sound a little better than its predecessor.

So I concentrated on what I knew I could manage — what was within my skill set or just a little past it — when listening to other audio for inspiration.

I started by giving most of my attention to timing and music because to me, they’re the core of basic sound design. I just needed the ability to:

  • make basic edits in my audio software (DAW) of choice
  • listen to and identify cadences and patterns in speech and music

Keeping in mind my preference for narrative shows, here’s what I listened for (and still do).

Timing:

  • Transitions between scenes or when music or sound effects start, end, or blend together. When is the music fading in or out? When does it start or stop suddenly, without fading? What’s happening in the story when the music starts or ends? Is that effective? Would I change it or leave it the way it is? 
  • Pacing and spacing. Does the timing sound intentional? Does the story “breathe”? If it doesn’t, is it that way for a reason? When a speaker says something important or reflective, is there time for the audience to sit with it and their own thoughts and emotions for a few seconds, or does the scene jump quickly to the next thing, killing the buzz?

Music:

  • The music or sound effects under someone speaking. Does the volume level or style of music compete with the people speaking, making it harder to hear or follow? Does it hang back a bit and help move the speech along? Does it complement what’s being said? Would another style of music have been better? 
  • Repetition. Does the same music appear more than once in the episode? When does that happen? Why do I think it’s repeated in those places? Is the repetition effective for the story or is it just … repetitive?

Timing + Music:

  • The beat or cadence of the music under narration. Does the music line up well with the phrasing and timing of the speaker’s words and pauses? Are there quiet parts in the music that allow the speaker to be more prominent? Do musical beats align with the words the speaker is emphasizing?

By listening intentionally for these sound-design choices, I stayed focused and was able to compare similar scenarios across different podcasts and episodes. Over time, I developed a sense for what sounded good to me. And then I tried to mimic it in my episodes.

Originality is nothing but judicious imitation.

Voltaire

Example: Outside/In’s “Windfall” series

Sometimes I nerd out and listen to a show or episode more than once, focusing on different things each time.

I did that with Outside/In’s “Windfall” series about the history and future of wind farms in the United States. The first time through, I listened because I wanted to learn about wind farms. But it had such great sound design that I listened again to focus on that.

The entire series is beautifully sound designed, and I’ll probably talk about it again in future issues. But focusing on timing, specifically, I learned from “Windfall” to expand the transitions between scenes in an episode. Here’s an example from “Windfall Part 1: Sea Change.” (If you’re reading the newsletter in your email, click here.)

Notice how long the musical transition is between the end of the first scene (when Sam Evans Brown says “…to reshape the future of where our energy comes from”) and the next one (when Annie Ropeik says, “In the spring of this year….”). 

It’s 16 seconds.

In audio, 16 seconds is an eternity. In many cases, including probably all of my Mementos episodes, it would be too long. But not here. 

The hosts have just spent the first seven minutes establishing the background for the series, and they’re about to delve into the details. There’s no rush to get to the next scene. Listeners are afforded the time to reflect on what they’ve just heard about the context for the series and the multiple voices they’ll hear throughout it. 

Although a 16-second musical transition would be too long for my episodes, I still took something from this “Windfall” example: I can trust my audience with longer transitions than I thought.

Conventional wisdom says audiences are busy and have short attention spans, so you better keep things moving. But the “Windfall” example showed me that’s not necessarily true and that I could play around with longer transitions between scenes. If the story was good, the listeners would still be there when the next scene started.

I wouldn’t have noticed that minor detail had I not been listening specifically to learn. It’s important, I think, because over the course of an episode, making small adjustments to timing and pacing can make a big impact on the listener’s experience. And those kinds of adjustments are often within a beginner’s skill set.

So the next time you’re listening to your favorite show, pay close attention to timing and music, and see if there’s anything you want to copy as “your own original thing” on your next piece.

Cheers,

Lori

Not subscribed yet? Sign up now!

Why Sound Off the Ground?

Why did I create Sound Off the Ground? Because just a few years ago, when I was brand new to audio, I got a lot of help from other people in the industry. And now I’ve learned enough to help newbs like you get started.

Hand-drawn illustration on an X-Y axis. Title is "My Sound Design Journey." The X axis is labeled Time. The Y axis is labeled Skill.  The illustration depicts a learning curve moving from the bottom left upward toward the top right corner. The area under the learning-curve line is divided into three sections. The first section is labeled "Steep Learning Curve." There is a stick figure climbing up to the top of that section with a microphone in her hand. The figure is labeled "Me." The figure is just about to reach the second section, labeled "Plateau of Resting and Sharing What I've Learned." And the third section on the right is labeled "Continued Lifelong Learning." 

The image depicts Lori Mortimer pausing during her learning process to share what she's learned about sound design.

Can’t Carry a Tune? No Problem!

Sound Off the Ground can help you.

Let’s get something out of the way right now. Can you: 

  • read music?
  • play an instrument?
  • sing on key?
  • write music?

Me either!

And yet I’ve learned how to make my podcast, Mementos, sound good. I get compliments all the time on my sound design. (And they’re not even from my mom, because she’s dead.)

Not too long ago, I was where you are now.

I know how overwhelming this stuff can feel when you’re learning it. And how many mistakes you make along the way. And how *@$#&! time-consuming it can be. Plus my wallet can tell you how tempting it is to spend money on apps and sound packs that you don’t really need.

That’s why I’m focusing on sound design for new podcasters.

In Sound Off the Ground, I’ll:

  • save you time by sharing lessons I’ve learned through trial and error — I suffered so you don’t have to
  • save you money by showing you free or cost-effective sound resources and how to use them creatively
  • show you how to make your own simple music even if you know nothing about music (I swear!)
  • share new sound-design tips and resources as I discover them along the way

And you’ll be able to: 

  • make your show sound great without spending gobs of money on sound design
  • listen carefully to the sound design of other shows and borrow their ideas, putting your own, unique spin on them
  • find free music, sound effects, and software, and use all of them in ways that fit your personality and show.

What do I mean by sound design?

When I say sound design, I mean the process of choosing, creating, altering, and arranging audio elements — like music and sound effects — to set the tone, create atmosphere, and enhance the story you’re trying to tell.

No matter which microphone or DAW (audio production software) you use, you will be able to use these principles, tips, and techniques. Therefore, I won’t be covering studio setup, which mic is best, or which DAW you should use. Those are all personal decisions.

So go grab a cuppa, and let’s get your sound off the ground. Subscribe here. It’s free and always will be.

Let's get your sound off the ground!

I respect your privacy.