Sounds Complicated [SOTG #5]

As New Hampshire Public Radio’s Taylor Quimby learned, you never know when a plastic sled and a jar of marbles might come in handy.

Taylor was the sound designer on Outside/In’s Windfall series. With the sled and marbles, he created my favorite example of sound that explains complex information.

Windfall is about the history and future of offshore wind farms in the US. In the first episode, “Sea Change,” the Windfall team conveys the size of wind turbines through visual writing.

They describe blades bigger than airplanes and predict rotors the size of three football fields, with a turbine the size of “a giant, spinning sports complex.” This writing style helps listeners picture these structures in the real world.

But sometimes it’s hard to find an analogy, especially for numbers so large or small that we can’t wrap our minds around them. We need a frame of reference, and sound design can provide it.

For example, the Windfall team found it challenging at first to explain the exponential growth in wind-farm energy output.

“We knew that transmitting the scope and scale of growth was the most important thing,” Taylor said.

“Yet, especially when it comes to energy, that’s really hard because it’s a subject full of jargon. You’re talking about kilowatts, megawatts, gigawatts, and most of us don’t have a sense of how they factor into our own lives.”

Early episode drafts described power output with numbers. But the team realized numbers weren’t communicating just how significant the increase in power was.

In an editing meeting, someone asked, “How can we use sound to help transmit scale in a way that’s almost like a type of synesthesia?”

Taylor remembered an old anti-smoking public service announcement that started with a ball bearing being dropped into a bucket. The bearing represented one smoking-related death. Next, a bucket full of bearings, representing thousands of smoking-related deaths, was poured out.

“You hear the one ball bearing, and then you hear this just unimaginable staggering amount of sound that represents all the people that die of smoking,” he said.

“And the point is not that you can individually pick all those things out. The point is to be so overwhelmed by a number and the comparison between those things that you just get a sense of this as almost beyond comprehension.”

And in the spirit of stealing like an artist, the Windfall team borrowed that idea.

First, they chose a unit of measurement other than watts. The first offshore wind farm, near Vindeby (pronounced in English “vin-eh-boo”), Denmark, became their base unit of energy. Next, they chose the sound of one marble to represent one Vindeby-sized wind farm. Over time, as the wind farms grow in size, so does the number of marbles we hear.

Listen below for how the marbles create a sense of scale.

Next, wind farms continue growing, and the marble sounds juxtapose a large number with a much smaller one.

Listen below for the big sound-design payoff.

Did you notice how long it took for 7,000 marbles to pour out? Thirty-five seconds, an eternity in audio. That was intentional.

“We wanted to do it slowly and hear the movement and hear it building and clicking and clacking and feel like it’s going on forever and ever and ever and get that sense of scale, so that it feels overwhelming,” Taylor said.

After the 7,000 marbles roll, we get the big payoff: the sparse thunking of a few marbles representing the minuscule output of US wind farms. The sonic contrast tells the tale. No words needed.

Screen capture of a sound wave in a digital audio workstation. The wave is in color, with a long, dense black section in the middle that represents the sound effect of 7,000 marbles rolling.
Can you find the 7,000 marbles?

The marble concept was ingenious, as was Taylor’s method for creating the sound effect. Taylor borrowed a marble collection from NHPR Senior Producer Jack Rodolico’s son. He took the marbles home and started experimenting.

“I dropped a marble in a bucket, and it just made a thunk.”

And when Taylor dropped a bunch of marbles, “it happened super-fast, and it did not communicate what we talked about trying to communicate. You needed a sense of movement. And marbles are heavy, so they just drop.”

For a sense of movement, the marbles needed to roll. Taylor tried a few more things before grabbing his son’s sled, made of high-density foam covered in plastic.

“I propped it up at an angle and dropped a marble on it. You hear the thunk … thunk … thunk-thunk-thunk-thunk-thunk. And I thought, this is what you need because somebody can listen to this, hear the movement, and picture the marble dropping.”

Now he just needed to create the sound of 7,000 marbles.

“I counted out 100 marbles so I could record 100 falling, and then I multiplied that sound file a number of times in my DAW, over and over again to create bigger and bigger numbers,” he said.
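Taylor’s multiply-the-layers trick is also easy to prototype outside a DAW. Here’s a minimal, hypothetical Python/NumPy sketch of the same idea: take one recording standing in for 100 marbles, stack time-shifted copies of it, and normalize so the pile-up doesn’t clip. (The synthetic “recording,” the offsets, and the copy count are placeholders for illustration, not Taylor’s actual session.)

```python
import numpy as np

def layer_marbles(base: np.ndarray, copies: int, sr: int = 44100,
                  max_offset_s: float = 2.0, seed: int = 0) -> np.ndarray:
    """Overlay `copies` time-shifted duplicates of one recording.

    `base` stands in for a recording of ~100 marbles; stacking 70
    offset copies approximates the sound of ~7,000.
    """
    rng = np.random.default_rng(seed)
    max_off = int(max_offset_s * sr)
    out = np.zeros(len(base) + max_off, dtype=np.float64)
    for _ in range(copies):
        # Random start offset so the copies smear instead of doubling up.
        off = int(rng.integers(0, max_off + 1))
        out[off:off + len(base)] += base
    # Normalize so the stacked layers don't clip at export time.
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Stand-in for a real 100-marble recording (3 seconds of noise bursts).
sr = 44100
base = np.random.default_rng(1).normal(0, 0.1, 3 * sr)
seventy_x = layer_marbles(base, copies=70, sr=sr)
```

Randomizing the offsets is what keeps the copies from landing in lockstep, which is roughly what the tilted plastic sled accomplished in the physical version.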

Taylor offered some tips for using sound design when words alone can’t paint the picture.

Example: Outside/In’s 250th episode tells the story of Earth’s largest mass extinction event, which occurred about 250 million years ago. After a series of natural disasters, the Earth’s landscape was barren. Then fungi and ferns began popping up.

“To transmit the idea of all Earth’s continents being populated by countless numbers of fungi,” Taylor said, “we used little popping sounds, like bloop… bloop … bloop … bloop bloop bloop … bloop-bloop-bloop-bloop-bloop-bloop. Nature documentaries often use little bloop sounds during time lapses because that’s what we imagine. So, I thought, let’s take that thing that people will inherently understand and make a sonic landscape that might help people imagine this world I’m describing.”

To hear the mushroom world popping up, start listening at 24:00.

Example: Episode 3 of Patient Zero, an NHPR podcast about Lyme disease, explains how the Lyme-causing bacterium gets from a tick into the human body. This time, Taylor borrowed from The Magic School Bus.

“The Magic School Bus shrinks and goes into the human body. And I thought, let’s do this Magic-School-Bus style, and it’ll be really gross,” he said.

“I’m zooming into a world and imagining stuff. What does a tick thrusting its mouthparts into your skin sound like?

“I used different sound effects to imagine life at the scale of a tick. There was the classic Foley stuff, where you’re tearing or squeezing leather or cutting meat. The sound of the tick walking on skin is fingers palpating a sponge very close to a microphone.

“And I heard from people who said, ‘I think you went too far because that was deeply uncomfortable to listen to.’ But that’s what I was going for.”

To hear the tick feasting on human flesh, start listening to episode 3 of Patient Zero at 14:00. Use headphones if you dare.

Occasionally, an object’s literal sound just doesn’t work. It’s not what listeners imagine, and it can even be confusing without a visual aid. So, through trial and error, listen for the sound that people will just inherently get. Sometimes, a more cartoon-like version of the sound conjures the image better.

Example: Years ago, an Outside/In episode included an Indiana-Jones-style journey where someone got on a train and then on a plane. Taylor didn’t like the sound of a real jet for the plane. “It’s just a whine. So, I used an old prop-engine plane, which was not the actual plane, but people immediately hear it and get it.”

With factual information and real-world scenes, be transparent about what you’re doing so your listeners know when you’ve altered something, Taylor said. “If I have field tape with a reporter in an actual part of the world and then combine that with some sort of fake sound effect, that’s where you can get into some problematic areas where you’re misleading your audience and passing fake sound design off as field tape.”

And don’t forget, the biggest tip in sound design is to experiment and play. When you find yourself at a loss for words, start experimenting with sound design to help you express what you mean.

A big thank-you to Taylor Quimby for sharing his knowledge and experience with us!

Do you have a favorite sound-design example? Send a link and short description to, and I’ll include it in the next issue.

And before I go, I’ve got two resources for you:

  • I’ve started uploading some of my recorded sounds to with Creative Commons Zero (CC0) licensing, so you can use them any way you want. I’ll add more each week until the end of the year. Get them here:
  • PodPeople (@podppl) is an excellent Instagram follow. They’ve been posting reels of their sound designers demoing and explaining how they sound-designed specific podcast scenes. (Thanks to Ashley Lusk for the recommendation.)

A Beginner’s Course: Five Episodes from Sound School Podcast [SOTG #4]

Before we get started: Last month, I participated in an AIR (Association of Independents in Radio) webinar about how non-musicians can learn to make their own podcast music. As one of three panelists, I focused on showing how easy it can be to use free or inexpensive iOS music apps. AIR members can watch the webinar for free. If you’re not an AIR member, you can view the demo videos I made on my YouTube channel. Also, check out the benefits of joining AIR — members receive discounts on AIR programs and trainings.

When a novice writer asks how they can improve their writing, the answer is often “read, read, read.” I feel the same way about becoming a better audio maker: I need to listen, listen, listen to other people’s work.

But in addition to listening to podcasts for their sound design, I also listen to podcasts that explain sound design. I love getting under the hood and listening to an audio maker describe their overall approach for a piece or how and why they added sound design or music to specific scenes – or why they didn’t.

Sound School, a podcast from and PRX, is one of my favorite resources for these kinds of discussions. Host Rob Rosenthal has been teaching audio storytelling for years, including at Salt and the Transom Story Workshop (RIP) — experience that comes through in the clips he selects to examine, the questions he asks his audio-maker guests, and how he explains nuanced details in a way new audio makers can understand. 

On Sound School, Rob has covered everything from interviewing to field recording to story editing (and more). With over 200 episodes to choose from, it’s hard to know where to begin.

So I’ve curated a beginner’s sound-design course for you. Here are five of my favorite sound-design-focused Sound School episodes to help you get your sound off the ground. I’ve put them in the sequence that makes the most sense to me, but of course you can listen to them in any order you like.

Sound Design Basics

Sound designer Matt Boll explains how he and the Gimlet team developed the sound-design principles they implemented on the first season of Crimetown, which focused on crime and politics in Rhode Island.

Key takeaways: Keep it short and simple when adding sound design to a scene. Emotionally heavy moments sometimes work better without sound design – let the speaker convey their own emotion. Sound design is an iterative process: “You just have to keep trying until it works.”

Avoiding Cheesy Sound Design

Radiolab, the groundbreaking investigative journalism radio show/podcast, has its own unique production style and sound. In this episode, Jad Abumrad explains some of his sound design principles through examples from the Radiolab episode “Nukes.”

Key takeaways: Avoid overly literal sounds. Brainstorm about what you want each scene to feel like. Why do you need sound design there? What emotion or experience are you trying to evoke/create? 

She Sees Your Every Move

Musician and sound designer Jonathan Mitchell explains how he used music to help shape the story in his piece, “She Sees Your Every Move.” It’s about a photographer who takes pictures of people in their homes at night, from the street and through their windows, without their knowledge or permission. (It’s creepy af!)

Key takeaways: The music and the story are not separate from each other. They’re both equal parts of the story. The music choices inform the clip choices, and the clip choices inform the music choices, “like a soup that’s getting stirred.” 

Scoring Stories Part 2

(There is, of course, a Part 1, but it’s not necessary to listen to it before Part 2.) 

Rob makes subtle changes to the music in Tiarne Cook’s audio piece (with her permission). Through trial and error, he shows how small changes can make a big impact. He explains how to choose where to start and end music at different points in a story, as well as how to choose which music to include in an audio story.

Key takeaways: There are some basic principles about scoring that can guide you, but it’s still important to experiment and to vary how you score an individual audio piece, so that the use of music doesn’t become predictable or boring. 

Remixing the Music 

The term “wallpapering” refers to sound design or music that plays throughout most or all of a piece. In this episode, Rob makes a few narration cuts and remixes the music on a wallpapered audio story (with producer Neena Pathak’s permission) to show how a different approach, one where music comes in and out at key points, can change and, in his opinion, improve the listening experience.

Key takeaways: Try to use music strategically at different points in the story, to emphasize a change in scene or mood, or to provide a moment of reflection before moving on to another scene. Try to find places where the speaker’s words can and should stand alone, without music, for the most impact.

Go ahead and drop these episodes in your queue. Maybe listen to one and see how it resonates with the kind of work you do. Are there any takeaways that appeal to you for the kind of audio pieces you create? Which principles might you adopt or adapt for your own sound design approach? Then maybe practice a little and move on to the next episode for more ideas you can copy.

Zen Ear, Beginner’s Ear [SOTG #2]

As a rule, I don’t do rules.

That’s one of the reasons I love sound design: no rules. Not for me, anyway. Except maybe “Don’t damage anyone’s ear drums.”

Other than that, it’s principles all the way down.

Principles are far more interesting than rules. While rules involve one-size-fits-all, almost mindless application, principles provide guidance and encourage flexibility.

On their face, rules seem simple, clear, and oh, so certain. What are rules, if not a list of dos and don’ts and when to do or don’t them? (Wait … eh, you know what I mean.) You merely need to remember the rules and apply them.

In reality, rules suck. They may promise certainty, but they rarely anticipate every scenario, and that’s where they crumble. Suddenly we make new rules or create exceptions. Or worse, we start justifying why we’re not following the rules. Now the rules aren’t so simple anymore. (Shout-out to “i before e except after c,” which has so many exceptions it’s actually just wrong.)

Principles ask us to trust our judgment in new situations. Designed to be flexible, they feel open and full of possibilities because they don’t tell us what to do. They guide our thinking and decision-making.

And most importantly for beginners, principles offer more learning opportunities.

Do you have any principles?

In the first issue of SOTG, I encouraged copying ideas from other people’s sound-design work (the ideas, not the work itself!). Today I’m suggesting a follow-up: distill some of the ideas you like (including your own) into a set of sound-design principles for your show. Then you can refer to them as you produce each episode.

Let’s take a look at the sound-design principles I developed for Mementos.

Each episode, a guest tells the story behind a cherished keepsake and how it became a container of meaning, memory, emotion, and human connection for them. As a result, from a sound-design perspective, it’s a quiet, reflective show.

I created the principles in the table below to align with the show’s overall tone. They may seem familiar because I based them on some of the things I listen for in other people’s work, which I described in the last issue.

| Principle | What it means for sound design |
| --- | --- |
| Know who your soloist is | Who or what (speaker, music, sfx) should have the audience’s focus right now, like a soloist in a band? Does the sound design reflect that? It’s more than volume. For example, does the sound design interfere with the tone or quality of the voice or conflict with what’s being said in mood, beat, or fullness/sparseness? |
| Cadence matters | Listen to the speaker’s voice — not just what they’re saying but also how they’re saying it. Most people have a natural cadence — a pace, plus highs and lows and points of emphasis — to the way they speak. How should the sound design work with the speaker’s cadence? Should it imitate it? Complement it? Be completely different? |
| Breathe | Is there enough time for the listener to reflect on something important or thought-provoking the speaker said, before starting the next scene or continuing with the story? |

While making an episode, I periodically review these principles to make sure the episode aligns with my overall intent for the show.

For a different type of show, I’d have different principles. For example, Have You Heard George’s Podcast? is “a fresh take on inner-city life through a mix of storytelling, music and fiction” by George the Poet. The show sounds nothing like mine (or anyone else’s), and therefore any underlying sound-design principles would differ from mine, too.

What is your show about, who is it for, and what do you want it to sound like? The answers to those questions will help shape your show’s sound-design principles.

Questions are better than answers

You may have noticed that I described my sound-design principles with questions instead of statements (aka, rules). The questions enable me to answer differently in different situations.

A rule about letting the piece breathe might assert something like, “Leave at least four seconds between something profound the guest says and the start of the next scene.”

A principle asks, “Did I leave enough time between the profound thing the guest said and the next scene?” I won’t know what “enough” is until I’m making that episode. It might be different each time, even within the same episode. Maybe it’s four seconds. Maybe it’s six. Or two.

You might be thinking this is just semantics. I get it. But for me, the difference between rules and principles, between answers and questions, is about learning to trust my ear.

I’ll give you an example. On social media, I’ve seen a few good-natured debates about whether it’s okay to fade music in or out, or whether, instead, music should always have a “hard” start and finish.

Some folks believe that music should never fade in or out. But as a newb, why would I adopt such a rule? It takes away half of my options and leaves me with no chance to learn what sounds good to me and what doesn’t.

Instead, I would ask, How should I start the music here, and why? Asking and answering a question ensures I listen closely and decide what sounds better in this case. Other times, I might apply the same principle and arrive at a different conclusion.
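The fade-versus-hard-start question is also cheap to prototype in code. Here’s a minimal, hypothetical NumPy sketch for A/B-ing the two options: one version of a track with a linear fade-in ramp applied, and one with a hard start. (The sample rate, fade length, and stand-in “music” are assumptions for illustration only.)

```python
import numpy as np

def fade_in(audio: np.ndarray, sr: int, seconds: float) -> np.ndarray:
    """Return a copy of `audio` with a linear fade-in over `seconds`."""
    n = min(len(audio), int(seconds * sr))
    ramp = np.linspace(0.0, 1.0, n)  # 0 -> full volume
    out = audio.astype(np.float64).copy()
    out[:n] *= ramp
    return out

sr = 44100
music = np.ones(5 * sr)          # stand-in for a real music track
faded = fade_in(music, sr, 2.0)  # gentle two-second entrance
hard = music                     # hard start: no ramp at all
```

Rendering both versions and listening back to each in context is the code equivalent of asking the question and letting your ear answer it.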

An Instagram post of mine from 2017, well before I started podcasting. At least I’m consistent!

Trust your beginner’s ear

I don’t want to wander too deep into Zen philosophy, mostly because I’ll screw it up. But Shunryu Suzuki’s concept of the beginner’s mind works for sound design, with a twist.

Suzuki says, “The mind of the beginner is empty, free of the habits of the expert, ready to accept, to doubt, and open to all the possibilities [emphasis mine].”

Beginners have fewer, if any, expectations and preconceptions. They’re unobstructed by past experiences (good or bad) and lessons learned. The secret to Zen practice, Suzuki says, is the beginner’s mindset. The aim is to clear your mind and think, experience, and be like a beginner.

Being a beginner usually feels like a disadvantage, like the time my game-loving 12-year-old utterly destroyed me, the newb, at Settlers of Catan. But in sound design, we can flip that dynamic upside down. Being a beginner can be an advantage.

Borrowing from Suzuki, I think of new audio makers as having a beginner’s ear. We listen differently and hear differently than experts. Our ear is untrained and unencumbered by past experiences. It doesn’t know the rules yet, so it hasn’t “ruled out” any possibilities.

So forget about rules and dos and don’ts and always-s and nevers. Consider instead a set of sound-design principles, aligned with your show, that encourage you to ask questions and answer them yourself.

Embrace your beginner’s ear. Listen for what sounds good to you, and choose that.



In the next issue of Sound Off the Ground: less philosophy and more resources!

Not subscribed yet? Sign up now!

Copy That! [SOTG #1]

Years ago, one of my sons was drawing a picture after dinner. He said he was copying something his friend drew that day in preschool.

I said, “Oh, that’s nice. But why don’t you draw your own original thing?”

He said, “My own original thing is copying people.”

Turns out, he was on to something.

We learn by copying others

You can expect copying to be a recurring theme in this newsletter. As in, don’t be afraid to copy, borrow, or imitate ideas and techniques from other sound designers. That’s the best way to learn.

Color photo of tortie cat sitting and facing the camera, with a window behind her. The photo is repeated four times in a square layout. The photo is labeled "Copy cat".

Sound Off the Ground began germinating in my brain when I read the first issue of Alice Wilder’s newsletter, Starting Out. In an interview with Alice, Tobin Low said the most helpful mentors to new audio makers are often not experts or higher-ups but people just ahead of them in skill and experience.

But for independent podcasters who don’t work on a team, it can be hard to find a mentor or discover work by people who are slightly ahead of you.

Luckily, we can still learn from the work of people who are far more advanced. My favorite podcasts tend to be narrative-style shows made by experienced, professional teams with actual budgets (god love ya!). These teams make complex, rich, immersive podcasts that sound amazing. Of course, their skill sets and resources far exceed mine.

How can I learn from my favorite “higher-ups” if there’s such a big gap between them and me? How can I copy what they do with my skills still in the larval stage?

By narrowing the scope of what I’m listening for in their sound design.

Concentrate on timing and music

When learning a new skill, sometimes you get worse before you get better. But I never wanted to reveal that to my audience. I wanted each episode to sound a little better than its predecessor.

So I concentrated on what I knew I could manage — what was within my skill set or just a little past it — when listening to other audio for inspiration.

I started by giving most of my attention to timing and music because to me, they’re the core of basic sound design. I just needed the ability to:

  • make basic edits in my audio software (DAW) of choice
  • listen to and identify cadences and patterns in speech and music

Keeping in mind my preference for narrative shows, here’s what I listened for (and still do).


Timing:

  • Transitions between scenes or when music or sound effects start, end, or blend together. When is the music fading in or out? When does it start or stop suddenly, without fading? What’s happening in the story when the music starts or ends? Is that effective? Would I change it or leave it the way it is?
  • Pacing and spacing. Does the timing sound intentional? Does the story “breathe”? If it doesn’t, is it that way for a reason? When a speaker says something important or reflective, is there time for the audience to sit with it and their own thoughts and emotions for a few seconds, or does the scene jump quickly to the next thing, killing the buzz?


Music:

  • The music or sound effects under someone speaking. Does the volume level or style of music compete with the people speaking, making it harder to hear or follow? Does it hang back a bit and help move the speech along? Does it complement what’s being said? Would another style of music have been better?
  • Repetition. Does the same music appear more than once in the episode? When does that happen? Why do I think it’s repeated in those places? Is the repetition effective for the story or is it just … repetitive?

Timing + Music:

  • The beat or cadence of the music under narration. Does the music line up well with the phrasing and timing of the speaker’s words and pauses? Are there quiet parts of the music that allow the speaker to be more prominent? Do musical beats align with the words the speaker is emphasizing?

By listening intentionally for these sound-design choices, I stayed focused and was able to compare similar scenarios across different podcasts and episodes. Over time, I developed a sense for what sounded good to me. And then I tried to mimic it in my episodes.

Originality is nothing but judicious imitation.


Example: Outside/In’s “Windfall” series

Sometimes I nerd out and listen to a show or episode more than once, focusing on different things each time.

I did that with Outside/In’s “Windfall” series about the history and future of wind farms in the United States. The first time through, I listened because I wanted to learn about wind farms. But it had such great sound design that I listened again to focus on that.

The entire series is beautifully sound designed, and I’ll probably talk about it again in future issues. But focusing on timing, specifically, I learned from “Windfall” to expand the transitions between scenes in an episode. Here’s an example from “Windfall Part 1: Sea Change.” (If you’re reading the newsletter in your email, click here.)

Notice how long the musical transition is between the end of the first scene (when Sam Evans-Brown says “…to reshape the future of where our energy comes from”) and the next one (when Annie Ropeik says, “In the spring of this year….”).

It’s 16 seconds.

In audio, 16 seconds is an eternity. In many cases, including probably all of my Mementos episodes, it would be too long. But not here. 

The hosts have just spent the first seven minutes establishing the background for the series, and they’re about to delve into the details. There’s no rush to get to the next scene. Listeners are afforded the time to reflect on what they’ve just heard about the context for the series and the multiple voices they’ll hear throughout it. 

Although a 16-second musical transition would be too long for my episodes, I still took something from this “Windfall” example: I can trust my audience with longer transitions than I thought.

Conventional wisdom says audiences are busy and have short attention spans, so you better keep things moving. But the “Windfall” example showed me that’s not necessarily true and that I could play around with longer transitions between scenes. If the story was good, the listeners would still be there when the next scene started.

I wouldn’t have noticed that minor detail had I not been listening specifically to learn. It’s important, I think, because over the course of an episode, making small adjustments to timing and pacing can make a big impact on the listener’s experience. And those kinds of adjustments are often within a beginner’s skill set.

So the next time you’re listening to your favorite show, pay close attention to timing and music, and see if there’s anything you want to copy as “your own original thing” on your next piece.



Not subscribed yet? Sign up now!

Why Sound Off the Ground?

Why did I create Sound Off the Ground? Because just a few years ago, when I was brand new to audio, I got a lot of help from other people in the industry. And now I’ve learned enough to help newbs like you get started.

Hand-drawn illustration on an X-Y axis. Title is "My Sound Design Journey." The X axis is labeled Time. The Y axis is labeled Skill.  The illustration depicts a learning curve moving from the bottom left upward toward the top right corner. The area under the learning-curve line is divided into three sections. The first section is labeled "Steep Learning Curve." There is a stick figure climbing up to the top of that section with a microphone in her hand. The figure is labeled "Me." The figure is just about to reach the second section, labeled "Plateau of Resting and Sharing What I've Learned." And the third section on the right is labeled "Continued Lifelong Learning." 

The image depicts Lori Mortimer pausing during her learning process to share what she's learned about sound design.

Can’t Carry a Tune? No Problem!

Sound Off the Ground can help you.

Let’s get something out of the way right now. Can you: 

  • read music?
  • play an instrument?
  • sing on key?
  • write music?

Me either!

And yet I’ve learned how to make my podcast, Mementos, sound good. I get compliments all the time on my sound design. (And they’re not even from my mom, because she’s dead.)

Not too long ago, I was where you are now.

I know how overwhelming it can feel when learning this stuff. And how many mistakes you make when learning. And how *@$#&! time-consuming it can be. Plus my wallet can tell you how tempting it is to spend money on apps and sound packs that you don’t really need.

That’s why I’m focusing on sound design for new podcasters.

In Sound Off the Ground, I’ll:

  • save you time by sharing lessons I’ve learned through trial and error — I suffered so you don’t have to
  • save you money by showing you free or cost-effective sound resources and how to use them creatively
  • show you how to make your own simple music even if you know nothing about music (I swear!)
  • share new sound-design tips and resources as I discover them along the way

And you’ll be able to: 

  • make your show sound great without spending gobs of money on sound design
  • listen carefully to the sound design of other shows and borrow their ideas, putting your own, unique spin on them
  • find free music, sound effects, and software, and use all of them in ways that fit your personality and show

What do I mean by sound design?

When I say sound design, I mean the process of choosing, creating, altering, and arranging audio elements — like music and sound effects — to set the tone, create atmosphere, and enhance the story you’re trying to tell.

No matter which microphone or DAW (audio production software) you use, you will be able to use these principles, tips, and techniques. Therefore, I won’t be covering studio setup, which mic is best, or which DAW you should use. Those are all personal decisions.

So go grab a cuppa, and let’s get your sound off the ground. Subscribe here. It’s free and always will be.

Let's get your sound off the ground!

I respect your privacy.