It was a pleasure to be on the Field & Foley podcast; we discussed field recording, game audio, experimental music, and my belief that there are no solid boundaries between any of them. Listen here!
About two years ago, I built a bodyfall dummy, inspired by one made by Mike O’Connor. Mine just had a torso and arms; plenty to generate lots of rhythmic variations. I wound up not using it then, but I recently busted the dummy back out and did a quick outdoor session for an indie sci-fi horror game I’m working on. The dummy’s name is AAE-9173, most folks’ least-favorite Pro Tools error code.
The video below contains the cleaned-up, lightly-mastered impacts, recorded with a Sennheiser MKH50 (a CO-100K was also used, but the ultrasonic content was neither extensive nor useful). Details on its extremely low-budget, DIY construction are below.
In thinking about how to build up enough mass to make it sound realistic, but be light enough to manage, it occurred to me that I was also trying to get rid of books and old magazines that local agencies didn’t want. That became the basis for the “guts” of the dummy. I found a paint-stained longsleeve shirt in my closet, and measured my own torso and arms to ensure that I built the inner parts to provide a good fit.
Most of the old books I wanted to get rid of were really thick hardbacks, and I knew those covers would sound wooden or slappy if they were just put into the dummy. I used a miter saw to cut off the book bindings, duct-taped the pages together, and that formed the torso. To make sure there was a slightly absorbent “skin” over the books, I wrapped the entire assemblage in a few layers of closed-cell foam sheets (which I had left over from my stop-motion animation days).
For the arms, each arm segment was a duct-taped bunch of rolled up magazines, also wrapped in closed-cell foam. The foam that holds the upper and lower arms together is one arm’s-length sheet, providing for elbow-like articulation…a bit stiff, but we’re not building a posable mannequin here. I sealed the wrists and waist of the shirt with duct tape. The bulk of the material within the shirt was enough to keep it all in place without falling out.
Where to do the recording was frustrating: inside, I have suspended hardwood floors (wrong resonance), and the transients would excite the room modes of any place in my home. Outside had the concrete surface I needed to record on, but there was a lot of traffic and neighborhood noise…birds, too. What saved the day was having the mics really close to the impact zone. The dummy weighs 40 pounds, so the impacts are beefy and quite loud. With some judicious edits, the signal-to-noise ratio wound up being just fine without any need for de-noising.
The best thing about this project was knowing that my back issues of Tape Op magazine would continue providing sonic goodness!
Cyan’s game Myst was the first game I ever finished more than once. That was the winter of 1993. I never would have guessed that exactly 30 years later, they’d release a game whose soundtrack I mixed and mastered. That game, scored by composer Maclaine Diemer, is Firmament.
I had worked with Maclaine before, mastering his episodic scores for Guild Wars 2 and both mixing and mastering his dark score for Salt and Sacrifice. Being asked to help work on a Cyan title was pretty exciting, especially after being thrilled by Maclaine’s desperate and grim Salt and Sacrifice material, and being told that he was going in a fairly dark emotional direction for Firmament. There was a time I’d have been nervous about such a gig, but this was about my 500th musical services project, so I was just more excited than anything.
Maclaine’s approach paid subtle homage to the original Myst score, composed by the game’s developer Robyn Miller, by using (and abusing) the synths heard on that game and effects from that era.
While not orchestral, each song was quite simple in terms of instrumentation…but the number of parallel processing effects was insane. Maclaine deep-dives into his process for creating the music in this interview, which is well worth watching. Projects that I mix don’t have to be in my favorite style or genre, of course, but this one sure hit me where I live, aesthetically speaking.
Mixing Firmament
Mixing the 25 cues we created for the game (21 of which were released on the official soundtrack) became a game in itself of figuring out which effects layer was doing which job in each cue, so the mix was more about arranging the effects as if they were parts of an orchestra. Some cues only had six tracks total; others had maybe only two or three instruments, but up to six layers of delay, in addition to other effects, per instrument.
One misstep I made was grunging Maclaine’s mixes up even further with additional saturation. When I was asked to pull back on it, it was not only the right decision, but a strange sort of victory. I found my client’s limit! From there, the mixing process was more streamlined with less guesswork.
While the track counts per mix never got that high, it was no mean feat balancing all these frequency- and time-domain effects to build “density with clarity.” Over eleven discrete rounds of mix deliveries, some cues took four or five revisions to get right. We got three nailed on the first try. The mixing process took approximately one month.
Mastering the Score
Mastering went quickly, but it’s always a challenge mastering work that you mixed yourself. I can’t do it without lots of translation tests and, frankly, time. I need to have my sense memory of the mixes fade away quite a bit. Being a game project, though, there wasn’t too much time we could take between mixing and mastering. I tested my mixes in a friend’s studio before starting the mastering process, taking notes of what I thought were issues. I listened on daily walks on AirPods Pro. I used nearfield monitors alongside my full-range mastering monitors, as well as a Bluetooth speaker.
For this project, my signal chain started with digital EQ and resonance control, and ended with all-analogue dynamics control. Tubes and transformers added a subtle mid-high sheen with no phase shift, which was a big help in a score with lots of low-end and low-mid frequencies. Mastering took about a week.
Lessons for Media Composers
One thing that’s always tricky when mixing is ensuring that all the tracks or stems are exported correctly, and this project was no different. Simply due to the number of individual files involved, I’ve never had a mixing project where all the stems or tracks were delivered perfectly the first time. Software is getting better at this, but there are so many human elements involved that it’s important to remember it’ll never be perfect. Plan for re-exports, because that’s just going to happen, especially when track counts are high or stem deliveries get wide…like, maybe don’t go on vacation the day after you send all your tracks. (That didn’t happen on this project, thankfully!)
In my role, that means it’s critical to always call and ask about anything that seems like it could be a mistake. Checking composer intent is essential at every step of the way. Composers can also proactively provide comments and notes to this effect: “Now, you’re gonna hear this sound that seems like an error, but trust me, the whole cue hinges on that sound…”
Every DAW also seems to have a different threshold at which it thinks that all effects tails have been printed fully. Speaking from experience, never trust your DAW to do this for you. Always pad each track with silence manually, or set your render region to where your meters are reading infinite negative dBFS. If you’re not sure, add two seconds to the maximum delay time in your session. That tip won’t work with high-feedback delays, though; relying on meters and monitoring the tails at very high output levels is the only way to know for certain.
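For what it’s worth, those sanity checks can be semi-automated. Below is a minimal Python sketch of the idea, not a tool from this project: it assumes a hypothetical folder of exported WAV stems and the third-party soundfile and numpy libraries, and it simply flags stems whose last second isn’t effectively silent, plus any sample-rate or length mismatches across a delivery.

```python
# Rough delivery check: flags stems whose tails may be cut off, plus
# mismatched sample rates or lengths. Assumes WAV stems in one folder.
# Requires: pip install soundfile numpy
from pathlib import Path

import numpy as np
import soundfile as sf

STEM_DIR = Path("stems_to_check")   # hypothetical folder of exported stems
TAIL_SECONDS = 1.0                  # how much of the end to inspect
SILENCE_DBFS = -90.0                # "close enough to -inf" threshold

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level of a buffer in dBFS (-inf for true digital silence)."""
    peak = float(np.max(np.abs(samples))) if samples.size else 0.0
    return 20.0 * np.log10(peak) if peak > 0.0 else float("-inf")

rates, lengths = set(), set()
for path in sorted(STEM_DIR.glob("*.wav")):
    data, rate = sf.read(path, always_2d=True)
    rates.add(rate)
    lengths.add(data.shape[0])

    tail = data[-int(TAIL_SECONDS * rate):]
    level = peak_dbfs(tail)
    if level > SILENCE_DBFS:
        print(f"WARNING: {path.name} still reads {level:.1f} dBFS in its "
              f"last {TAIL_SECONDS:.1f} s -- the tail may be clipped off.")

if len(rates) > 1:
    print(f"WARNING: mixed sample rates in this delivery: {sorted(rates)}")
if len(lengths) > 1:
    print("WARNING: stems are not all the same length -- check export regions.")
```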
This project was fun, thrilling, and an absolute honor to be involved in. Huge thanks to Maclaine for having me on his team once again, and the whole team at Cyan for being so supportive of the composer and his vision for this interactive experience.
Despite my years of field recording and sound design, it wasn’t until recently that I negotiated my first location access for a personal recording project. It was easier than I’d expected, but I also got lucky and had lots of factors in my favor. I thought I’d summarize my experiences for anyone who might want or need to negotiate access to a private or controlled location for field recording.
This article is intended for those who have never done this before; the process for much more involved locations for larger commercial projects is beyond the scope of this post.
A sound example of this outing is at the end of this article.
Lesson One: Passionate People Want to Share
My sea kayaking club had a meeting aboard the SS Red Oak Victory, a World War II ammunition freighter that’s now a floating museum and event space. The vessel is operated as a nonprofit and has a cadre of volunteers keeping her (ahem) ship-shape. That meant I could reach out to the Red Oak Victory’s crew and staff and relay that I was already familiar with the vessel, and that I had a genuine interest in her story.
What I didn’t expect, but is painfully obvious in retrospect, is that the ship’s staff are all volunteers. You don’t volunteer aboard a vessel like that if you don’t have a passion for it. And they want to show off the object of their passion! I couldn’t possibly ask enough questions of the crew and staff, and they were all incredibly supportive…and excited that anyone would “bless” their vessel with as peculiar an interest as audio recording!
Find a location with people who dearly love it, and find an authentic way to be interested not just in the location, but to understand their passion and share it.
Lesson Two: Educate Stakeholders about Audio Recording
Since the ship is run as a nonprofit, I was happy to throw some money at the Red Oak Victory in exchange for my time aboard. But how much? In my favor, the ship also acts as a performance venue and has a rate card for rentals, including for photography clubs and video production/filming.
But their director of marketing said that no one had ever just wanted to record audio there and that they had no idea what was involved, so how would we price it? Rather than make a rate suggestion, I outlined how my footprint on the vessel would differ from that of a film or video crew: one person, just moving multiple recording rigs around. They offered a rate that was a small percentage of the full video-crew rate, which seemed fair to all parties.
We also discussed noise levels; I wanted a mix of quiet and ambient activity. Learning that the crew is tools-down from noon until 1pm for lunch, we picked a time that bridged both activity levels, and the results were great.
As a recordist, it’s your job both to respect the location and to educate its gatekeepers about what your impact on it will be.
Lesson Three: Start Small
Recording an entire vessel is a daunting task, and being unsure how my presence would be received, I wanted to use that first visit to show that I could be trusted and would be respectful.
So I started small. Ambiences, small hard effects (hatches, doors, etc.), and things that would keep me out of peoples’ way. At the end of the session, it was the Red Oak Victory’s staff that asked if I wanted to come back some time; “Maybe the cargo hoist could be fired up for you, it makes cool noises.” Hey, maybe that ship’s alarm I wasn’t prepared for could be tested on cue? “Yeah, we can definitely do that.”
Another sub-lesson here was that I had not yet learned Lesson One, so I could have pushed harder than I did for more access and more involved requests, because they wanted to share their passion for the ship!
So, use your instincts and balance what you think the location’s gatekeepers will allow with what you hope to accomplish. And as I learned, sometimes there’s such a thing as starting too small.
Lesson Four: It’s All About Relationships
Like everything else in life (and certainly in professional audio), it’s all about building relationships. I had a great time, and so did the crew and the staff. We joked about how I botched a recording of the coolest sound on the ship, and they laughed knowingly when I said it was the refrigerator compressor in the officers’ mess. Everyone felt respected and heard. And all parties are already talking about future visits. Heck, it turned out that they belong to an organization of other historic vessels, and maybe they could place a few calls on my behalf…
Even if you’ll only use a location once, build that relationship. Even if all you ever need in the future is for that location’s gatekeepers to serve as character references, you never know what connections might happen, even years down the road.
Yeah, yeah, yeah…but what did it sound like?
Here’s a sample of some of the recordings I made: This was recorded with a mid-side rig in the ship’s shaft alley, a 10-foot-wide by 170-foot-long corridor that houses the shaft that runs from the ship’s engine to its single rear propeller. I loved the ambient sounds of movement and work, which almost had an organized, musique concrète rhythm and dynamism to it.
The same winter storms that I mentioned in my last post also brought down a ton of fronds off the King Palm (Archontophoenix alexandrae) in our front yard. One can never have enough foliage source recordings, so… (the video below is just camera audio, not the Sennheiser MKH 50+30 mid-side rig I used to do the recording)
Field recording greatly improves one’s motivation to do yard work!
Here is a designed vegetation movement or destruction sound effect made from the material I recorded in this session. Soundcloud’s MP3 compression won’t be kind to this sort of material, but oh well!
California’s winter of 2022-2023 was one of the stormiest and wettest on record. Near-constant atmospheric rivers and wind events for four months generally made going outside A Bad Idea.
Not great news for those of us that like to do field recording. But, as often happens, constraints do wonders for spurring inspiration.
During one especially torrential downpour, I was thinking about how loud it was on the roof…but we have an attic crawlspace between us and the roof, so how loud must it be up there? I had just created a small, compact ORTF recording setup (Sound Devices MixPre 6ii, two Schoeps MK4/CMC1U microphones, Rycote blimp), and its size allowed me to put the rig in the crawlspace, and let it record all through the night while I slept.
The results were deliciously unusual: The rain was close-miked, but it was still clearly an interior perspective. Its volume drowned out whatever neighborhood noises there might have been. The open-faced insulation batting in the crawlspace created a space with almost no reverb. The result took reverb in post very well, providing flexibly usable interior rain and wind tones.
In fact, I now have a similar setup using a pair of Line Audio CM3 microphones ready to be put up into my attic at a moment’s notice; I keep it up there most of the winter, and just plug a 5-pin XLR cable into it, running to the recorder, whenever there’s a blustery day or some rain. This “attic rig” delivers gold pretty regularly…assuming I can record at night. There’s too much noise of general house habitation during the daytime hours. The CM3s are no Schoeps, but they are a whole lot more expendable due to their lower price, in case of a leak, particulate matter, or rodents.
Here’s a composite of heavy winds, light rain and some wind, and heavy rain from a few different recording sessions.
When I released the EP A Vast Unwelcome on March 31, 2023, I said it was my first metal album, a comment that was both cheeky and 100% accurate.
It wasn’t metal in genre, but rather in timbre: Every sound on the album was made from metallic objects, metallic instruments, or a handful of virtual instruments that physically modeled complex metallic instruments. Since the timbres and process were so particular, I thought I’d discuss the album’s origin, recording process, and post-production, as this project represents my happy place of the intersection of sound design and music.
It’s also a great example of how projects can start as one thing and then morph into something quite different, and of how the creative process is usually anything but linear and predictable.
From Magic to Metal
This album started as a series of recording sessions for a crowdsourced magic sound effects sound library. I ordered a number of bells, chimes, shakers, and more, and specifically set about recording them at 192 kHz with ultrasonic-capable microphones to preserve their high-frequency content when subjected to extreme pitch shifting. I also used unusual recording techniques, like striking chimes and then dipping them into water, recording the result with a hydrophone (similar to this technique I posted about many years ago).
The objects recorded included a waterphone, a custom Tim-Kaiser-designed instrument known as the Icarus, elephant bells, sleigh bell shakers, ankle bells, tree chimes, and more.
My musical inspiration came during “stress-testing” these recordings by doing extreme pitch and time manipulation. Since ostensibly these objects were instruments, there were tones and intervals I was hearing that led me to start assembling musique concrète sketches, not unlike my process for my previous all-wind-in-wires album, The Quivering Sky. I also started to integrate some field recordings I made aboard the SS Red Oak Victory, a restored WWII ammunition freighter. (More about those recording sessions in future blog posts.)
The first track of the album, “Phase Change,” is an example of where all of the songs started.
The Borders of Sound and Music
But I kept hearing harmonies in these recordings, and they started to turn out to be more musical than expected. In 2022, I got to know the software (and people) of Physical Audio, and it struck me that their virtual instruments would complement these metallic tones perfectly…I mean, any company that makes a prepared-piano emulator is OK in my book! Derailer and Preparation could be traditionally tonal and melodic, or loaded with inharmonic partials and resonances. These two instruments wound up being good aesthetic fits for this project.
The track “Pruina” (the word for hoarfrost in Latin) is a good example of these virtual instruments integrating with other metallic tones.
Things kept progressing from there. I used techniques lifted from my own sound library, Metallitronic, to re-amp some synthesized tones through gongs placed on large, powerful transducers. I unboxed some of my own self-made instruments made of springs, and bowed and struck a suspended sheet of steel in my garage. Much fun was had, but that thin plate of steel in the garage, hung from a c-stand, started to give me an idea…
The “Only Real Reverb” Rule
I started to put the stereo spring reverb in my studio to heavy use during the mixing stage, and one day I thought to myself, “Wait a second…I’ve got all this rich reverb from the SS Red Oak Victory sessions…this spring reverb sounds great, non-linear, and chaotic…why am I using virtual reverbs and delays at all?” This led me to give myself the challenge to discard all virtual reverbs in my mixes (despite my undying love for ValhallaDSP for most uses), and only use electro-mechanical reverbs.
That instantly made me think of plate reverbs. I asked around to see if any local studios had any plate units that were functional, and much to my surprise, the Skywalker Sound Scoring Stage had not one, not two, but three functioning EMT 140 plate reverbs. After a few phone calls, I found myself in this world-class facility re-amping stems through six channels of luscious, real-steel reverbs.
While EMT 140 units ostensibly have a 3-5 second maximum decay time, I did the ol’ Walter Murch trick of bringing some stems varispeeded up by an octave, playing back twice as fast. We tracked all the EMT 140 returns at 96 kHz, so back in my own studio I varispeeded those returns back down…now I had 8-10 second reverb tails from real plate reverbs. Most of the final mixes actually have a full six channels of plate reverb on them, and there are no virtual reverbs anywhere on the album.
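For the curious, the digital half of that trick is just sample-rate reinterpretation. Here’s a hedged Python sketch of the idea (hypothetical file names, and the third-party soundfile library assumed): rewriting a 96 kHz capture with a 48 kHz header plays the very same samples back an octave lower and twice as long, which is how a 4-5 second plate tail stretches to 8-10 seconds.

```python
# Digital stand-in for the varispeed trick described above: a return
# recorded at 96 kHz, re-written with a 48 kHz header, plays back an
# octave lower and twice as long (so a ~4 s plate tail becomes ~8 s).
# Requires: pip install soundfile
import soundfile as sf

SOURCE = "plate_return_96k.wav"        # hypothetical 96 kHz plate return
DEST = "plate_return_varispeed.wav"    # same samples, half-speed playback

data, rate = sf.read(SOURCE)
assert rate == 96000, "this sketch assumes a 96 kHz capture"

# No resampling happens here -- the samples are untouched, only the
# declared playback rate changes, like slowing a tape machine to half speed.
sf.write(DEST, data, rate // 2)

print(f"{SOURCE}: {len(data) / rate:.1f} s at {rate} Hz")
print(f"{DEST}: {len(data) / (rate // 2):.1f} s at {rate // 2} Hz (one octave down)")
```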
This felt like a logical conclusion to the album’s all-metal creative constraint.
From Music to Meaning
The recording sessions happened during an unusually brutal and long winter, and the steely tones of the works started to feel like both a paean and a dirge to winter itself. This became the compositional focus of the album, which influenced the songs, their titles, and the cover art (a glacier in Iceland, photographed by me). As friends’ neighborhoods were literally crushed under the weight of snow and local areas flooded from winter rains, the music turned out dark but with a core ray of hope, at least to my ear.
But of course sound and music only have the meaning given to them by the listener. My intent is mine alone, and whether that comes across to those experiencing the work is out of my hands. That’s the essence, terror, and joy of releasing art into the world.
It’s 2023, and I’ve not posted here in four years. Let’s talk about what’s been happening, why I’m restarting this blog – my first post was July 4th, 2009! – and what’s next.
Game Audio and Sound Design
On the sound design side, I’m focusing on interactive audio, primarily working through Skywalker Sound. Through SkySound, Team Audio, and my own freelance work, I’ve designed and implemented audio for The Callisto Protocol, Pentiment, SCP: Fragmented Minds, and more (and much more coming soon) in the last two years. It’s a logical next step after having created sound for interactive installations, consumer electronics prototypes, and other non-game software since 2005. And I’m having a blast.
This is my sixth year of being a professional mastering engineer at my own studio, Obsidian Sound. To balance this work with all the game audio projects I’m doing, Obsidian Sound is increasingly focused on mastering for professional composers in games and film. I’ve mastered over 500 projects at this stage…it continues to be a blast and I learn something new on every project.
Music
My own musical practice continues to be a vital means of self-expression for me. A compulsion, at times. As of this blog post, I have released 26 EPs and full-length albums since 2015, most of which are available at nathanmoody.bandcamp.com. I continue my exploration of both organic and electronic sounds and a variety of composition methods, and there’s no sign of this slowing down. Some exciting new releases are coming up soon.
Restarting This Blog
My audio practice has changed so much that it’s time to start re-documenting my experiences, ideas, thoughts, and philosophies in a longer form than current forms of social media will allow. Blogs aren’t as popular as they once were, but screw it. I’m a creature of habit. And it’s my sandbox; no service can dictate how or when I share content here, and I vehemently feel that it should always be free of change.
What’s Next
It’s my hope to continue to post articles, ideas, and thoughts that showcase the truest breadth of my interests, and which underscore my philosophy that the borders between audio disciplines are far more porous than most are led to believe. Art, music, equipment, field recording, sound design, experiments that both succeed and fail…this will continue to be a space where I don’t present myself as any sort of expert. As always, I am just documenting my journey, with its missteps and its wins.
If this post reaches anyone, I hope you’ll rejoin me on my lifelong quest to explore the realm of sound in all of its forms.
Since my last blog post, something fairly unusual happened: I became a full-time audio professional.
I’ve been a freelancer since 2017, when I decided to follow my passion – sound – to the best of my ability. Here’s what happened.
First, I fell into becoming a mastering engineer, and opened my own practice called Obsidian Sound. Having leapt back into music in 2014, I realized that this discipline suited me well: fast project turnover, tough critical thinking, attention to detail, and being able to help guide musicians’ craft and creative development in a post-label era. I’m closing in on my 100th mastering project, and I’ve loved every second of it.
Second, I’ve started releasing my own sound libraries. With the help of A Sound Effect, I’ve created two such libraries that blend my loves of music and pure sound, which still convey a lot of the dark themes I express in my albums, but oriented towards use in film and games.
Third, I’ve been doing sound design and audio editing for podcasts and video. There’s a whole new part of this site dedicated to that work. Heck, I’m even learning Wwise.
Fourth and finally, I continue to release 3-4 full length albums a year. I’ve played live in the SF Bay Area, Salt Lake City, and across Germany. I also have been commissioned to write a video game theme as well as a documentary score. The former will be announced in a month, and the latter will probably still be years in the making. More on both as I’m able to disclose details.
This site has been online for ten years, and I’ll do my best to update it as these varied explorations of audio develop. It’s not yet lucrative, but I’ve not been this creatively fulfilled in a long time. I must thank everyone who has supported this ongoing journey into the world of sound over the years: for the kind words, the tough criticisms, the countless conversations, inspirations, and transfers of knowledge.
If your interest in sound and audio is as broad as mine, then let’s keep this train moving. (And, realizing that this is a complete career reboot, reach out if you need my services on any of your projects.) Onward!
Want a challenge? Try to play back interface sounds on the show floor at CES. [Intel Booth, CES 2012.]
For those that might not know, for the last decade I earned my living designing digital installations: Multi-touch interactive walls, interactive projection mapping, gestural interfaces for museum exhibits, that sort of thing. Sometimes these things have sound, other times they don’t. When these digital experiences are sonified, regardless of whether they are imparting information in a corporate lobby or being entertaining inside of a museum, clients always want something musical over something abstract, something tonal over something mechanical or atonal.
In my experience, there are several reasons for this. [All photos in this post are projects that I creative-directed and created sound for while I was the Design Director of Stimulant.]
Expectations and Existing Devices
It’s what people expect out of computing devices. The computing devices that surround me almost all use musical tones for feedback or information, from the Roomba to the Xbox to Windows to my microwave. It could be synthesized waveforms or audio-file playback, depending on the device, but the “language” of computing interfaces in the real world has been primarily musical, or at least tonal/chromatic. This winds up being a client expectation, even though the things I design tend not to look like any computer one uses at home or work.
Yes, I strapped wireless lavs to my Roomba. The things I do for science.
Devices all around us also use musical tropes for positive and negative message conveyance. From Roombas to Samsung dishwashers, tones rising in pitch within a major key or resolving to a 3rd, 5th, or full octave are used to convey positive status or a message of success. Falling tones within a minor key or resolving to odd intervals are used to convey negative status or a message of failure. These cues, of course, are entirely culture-specific, but they’re used with great frequency.
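As a toy illustration of that convention (the frequencies and intervals here are my own arbitrary picks, not any manufacturer’s spec), this short Python sketch renders a “success” earcon as a root rising to a perfect fifth, and a “failure” earcon as the same root falling a minor third:

```python
# Toy "success"/"failure" earcons: two sine tones each, with gentle fades
# to avoid clicks. Interval choices are illustrative, not any device's spec.
# Requires: pip install soundfile numpy
import numpy as np
import soundfile as sf

RATE = 48000
ROOT = 440.0  # A4, chosen arbitrarily

def tone(freq: float, dur: float = 0.18, fade: float = 0.02) -> np.ndarray:
    """A sine burst with short linear fades on both ends."""
    t = np.arange(int(dur * RATE)) / RATE
    x = 0.3 * np.sin(2 * np.pi * freq * t)
    n_fade = int(fade * RATE)
    ramp = np.linspace(0.0, 1.0, n_fade)
    x[:n_fade] *= ramp
    x[-n_fade:] *= ramp[::-1]
    return x

gap = np.zeros(int(0.05 * RATE))

# Positive: root rising to a perfect fifth (frequency ratio 3:2).
success = np.concatenate([tone(ROOT), gap, tone(ROOT * 3 / 2)])
# Negative: root falling a minor third (frequency ratio 5:6).
failure = np.concatenate([tone(ROOT), gap, tone(ROOT * 5 / 6)])

sf.write("earcon_success.wav", success, RATE)
sf.write("earcon_failure.wav", failure, RATE)
```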
The only times I’ve ever heard non-musical, actually annoying sound, it has been very much on purpose and always to indicate extremely dire situations. The home fire alarm is maybe the ultimate example, as are klaxons at military or utility installations. Trying to save lives is when you need people’s attention above all else. However, even excessive use of such techniques can lead to change blindness, which is a deep topic for another day. Do you really want a nuclear engineer to turn a warning sound off because it triggers too often?
The Problem with Science Fiction
Science fiction interface sounds often don’t translate well into real world usage.
This prototype “factory of the future” had to have its sound design elevated over the sounds of compressors and feeders to ensure zero defects…and had to not annoy machine operators, day in and day out. [GlaxoSmithKline, London, England]
My day job has been inexorably linked to science fiction: The visions of computing devices and interfaces that are shown in films like Blade Runner, The Matrix, Minority Report, Oblivion, and Iron Man, and even television shows like CSI, set the stage for what our culture (i.e., my client) sees as future-thinking interface design. (There’s even a book about this topic.) People think transparent screens look cool, when in reality they’re a cinematic conceit so that we can see more of the actors, their emotions, and their movement. These are not real devices – they, and the sounds they make, are props to support a story.
Audio for these cinematic interfaces – what Mark Coleran termed FUI, or Fantasy User Interfaces – may be atonal or abstract so that it doesn’t fight with the musical soundtrack of the film. If such designs are musical, they’re more about timbres than pitch, more Autechre than Arvo Pärt. This just isn’t a consideration in most real-world scenarios.
Listener Fatigue
Digital installations are not always destinations unto themselves. They are often located in places of transition, like lobbies or hallways.
I’ve designed several digital experiences for lobbies, and there’s always one group of stakeholders that I need to be aware of, but my own clients don’t bring to the table: The front desk and/or security staff. They’re the only people who need to live with this thing all day, every day, unlike visitors or other employees who’ll be with a lobby touchwall for only a few moments during the day. Make these lobby workers annoyed and you’ll be guaranteed that all sound will be turned off. They’ll unplug the audio interface from the PC powering the installation, or turn the PC volume to zero.
This lobby installation started with abstract chirps, bloops, and blurps, but became quite musical after the client felt the sci-fi sounds were far too alienating. Many randomized variations of sounds were created to lessen listener fatigue. There was also one sound channel per screen, across five screens. [Quintiles corporate lobby, Raleigh NC]
Music tends to be less fatiguing than atonal sound effects, in my experience, and triggers parts of the brain that evoke emotions rather than instinctual reactions (in ways that neuroscience is still struggling to understand). But more specifically, sounds without harsh transients and with relatively slow attacks are more calming.
Randomized and parameterized/procedural sounds really help with listener fatigue as well. If you’re in game audio, you already know the tools used in first- and third-person games to vary footsteps and gunshots; those same techniques are incredibly important for creating everyday sounds that don’t get stale and annoying.
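If you’re building this outside of middleware, the core idea is small enough to sketch. The hypothetical Python example below picks a random file from a pool of recorded variations and applies a little random gain and playback-rate jitter, plus a soft attack; the file names, ranges, and soundfile dependency are all assumptions, not code from any of these installations.

```python
# Minimal round-robin-with-randomization sketch: each trigger picks one of
# several recorded variations and nudges its gain and playback rate so the
# same event never sounds identical twice. Values/filenames are illustrative.
# Requires: pip install soundfile numpy
import random

import numpy as np
import soundfile as sf

VARIATIONS = ["tap_01.wav", "tap_02.wav", "tap_03.wav"]  # hypothetical pool
GAIN_RANGE_DB = 2.0      # +/- gain jitter
RATE_RANGE = 0.04        # +/- 4% playback-rate jitter
ATTACK_SECONDS = 0.01    # soft attack to avoid harsh transients

def render_variation() -> tuple[np.ndarray, int]:
    """Return one randomized rendering of the UI sound."""
    data, rate = sf.read(random.choice(VARIATIONS), always_2d=True)

    # Random gain in dB, converted to a linear multiplier.
    gain_db = random.uniform(-GAIN_RANGE_DB, GAIN_RANGE_DB)
    data = data * (10.0 ** (gain_db / 20.0))

    # Crude pitch/speed variation by resampling the buffer slightly.
    speed = 1.0 + random.uniform(-RATE_RANGE, RATE_RANGE)
    old_idx = np.arange(data.shape[0])
    new_idx = np.arange(0, data.shape[0], speed)
    data = np.stack([np.interp(new_idx, old_idx, data[:, ch])
                     for ch in range(data.shape[1])], axis=1)

    # Gentle attack ramp, per the listener-fatigue note above.
    n_fade = min(int(ATTACK_SECONDS * rate), data.shape[0])
    data[:n_fade] *= np.linspace(0.0, 1.0, n_fade)[:, None]
    return data, rate

if __name__ == "__main__":
    out, rate = render_variation()
    sf.write("ui_tap_rendered.wav", out, rate)
```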
The Environment
Another reality is that our digital experiences are often installed in acoustically bright spaces, and technical-sounding effects with sharp transients can really bounce around untreated spaces…especially since many corporate lobbies are multi-story interior atriums! A grab bag of ideas has evolved from years of designing sounds for such environments.
This installation had no sound at all, despite our best attempts and deepest desires. The environment was too tall, too acoustically bright, and too loud. Sometimes it just doesn’t work. [Genentech, South San Francisco, CA]
Many clients ask for directional speakers, which come with three big caveats. First, they are never as directional as the specifications indicate. A few work well, but many don’t, so caveat emptor (they also come with mounting challenges). Second, their frequency response graphs look like broken combs, partially a factor of how they work, so you can’t expect smooth reproduction of all sound. Finally, most are tuned to the human voice, so musical sound reproduction is not only compromised sonically, but anything lower than 1 kHz starts to bleed out of the specified sound cone. That’s physics, anyway – not much will stop low-frequency sound waves except large air gaps with insulation on both sides.
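That 1 kHz figure is easy to sanity-check with back-of-the-envelope math: a panel can only really steer sound whose wavelength is comparable to or smaller than its radiating surface, and wavelengths balloon as frequency drops. A quick sketch, assuming a roughly 0.3-meter panel (a made-up but plausible size):

```python
# Back-of-the-envelope check on why directional speakers lose control of
# low frequencies: wavelength = speed of sound / frequency, and once the
# wavelength dwarfs the radiating panel, the sound spreads out anyway.
SPEED_OF_SOUND = 343.0   # m/s at room temperature
PANEL_SIZE = 0.3         # metres -- an assumed, typical-ish panel dimension

for freq in (8000, 4000, 2000, 1000, 500, 250, 100):
    wavelength = SPEED_OF_SOUND / freq
    ratio = wavelength / PANEL_SIZE
    print(f"{freq:>5} Hz: wavelength {wavelength:5.2f} m "
          f"({ratio:4.1f}x the panel size)")
```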
The only consistently effective trick I’ve found for creating sounds that punch through significant background noise is rising or falling pitch, which lends itself nicely to musical tones that ascend or descend. Most background noise tends to be pretty steady-state, so this can help a sound punch through the environmental “mix.”
One cool trick is to sample the room tone and make the sounds in the same key as the ambient fundamental – it might not be a formal scale, but the intervals will literally be in harmony with one another.
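Here’s a hedged sketch of how that could be automated, assuming a hypothetical room-tone recording on disk plus the numpy and soundfile libraries: take a long FFT of the ambience, find the strongest low-frequency peak, and snap it to the nearest equal-tempered note to use as the root for your sound palette.

```python
# Estimate the dominant low-frequency tone in a room-tone recording and
# report the nearest equal-tempered note to build UI sounds around.
# File name and the 40-500 Hz search band are assumptions for illustration.
# Requires: pip install soundfile numpy
import numpy as np
import soundfile as sf

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

data, rate = sf.read("lobby_room_tone.wav", always_2d=True)
mono = data.mean(axis=1)

# Magnitude spectrum of the whole recording (a long average smooths out
# transient noise and leaves the steady hum/fundamental standing out).
spectrum = np.abs(np.fft.rfft(mono * np.hanning(len(mono))))
freqs = np.fft.rfftfreq(len(mono), d=1.0 / rate)

# Only consider plausible "room hum" fundamentals.
band = (freqs >= 40.0) & (freqs <= 500.0)
peak_freq = freqs[band][np.argmax(spectrum[band])]

# Convert to the nearest MIDI note / note name.
midi = int(round(69 + 12 * np.log2(peak_freq / 440.0)))
note = f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

print(f"Dominant ambient tone ~{peak_freq:.1f} Hz, closest note: {note}")
```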
Broadband background noise can often mask other sounds, making them harder to hear. In fact, having the audio masked by background noise when you’re not right in front of the installation itself might be a really good idea. I did a corporate lobby project where there was an always-running water feature right behind the installation we created; since it was basically a white noise generator, it completely masked the interface’s audio for passersby, keeping the security desk staff much happier without intruding on the sonic landscape for the casual visitor or the everyday employee.
Music, Music Everywhere
Of course, sometimes an installation is meant to actually create music! This was the first interactive multi-user instrument for Microsoft Surface, a grid sequencer that let up to four people play music.
These considerations require equal parts composition and sound design, and a pinch of human-centered design and empathy. It’s a fun challenge, different from sound design for traditional linear media, which usually focuses on being strictly representative or on re-contextualized sounds recorded from the real world. Listen to the devices around you in real life and notice how frequently (pun intended) musical interface sounds turn up. If you have experiences and lessons from doing this type of work yourself, please share in the comments below.