It was a pleasure to be on the Field & Foley podcast; we discussed field recording, game audio, experimental music, and my philosophy that there are no solid boundaries between any of them. Listen here!
When I released the EP A Vast Unwelcome on March 31, 2023, I said it was my first metal album, a comment that was both cheeky and 100% accurate.
It wasn’t metal in genre, but rather in timbre: Every sound on the album was made from metallic objects, metallic instruments, or a handful of virtual instruments that physically modeled complex metallic instruments. Since the timbres and process were so particular, I thought I’d discuss the album’s origin, recording process, and post-production, as this project represents my happy place: the intersection of sound design and music.
It’s also a great example of how projects can start as one thing and then morph into something quite different; the creative process is usually anything but linear and predictable.
From Magic to Metal
This album started as a series of recording sessions for a crowdsourced magic sound effects sound library. I ordered a number of bells, chimes, shakers, and more, and specifically set about recording them at 192 kHz with ultrasonic-capable microphones to preserve their high-frequency content when subjected to extreme pitch shifting. I also used unusual recording techniques, like striking chimes and then dipping them into water, recording the result with a hydrophone (similar to this technique I posted about many years ago).
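As a rough illustration of that kind of extreme pitch manipulation, here’s a minimal Python sketch using librosa. This is my choice of tool for the example, not necessarily what was used on these sessions, and the filename and two-octave shift are illustrative assumptions:

```python
# Illustrative sketch: extreme pitch-down of a 192 kHz recording.
# The filename and the -24 semitone shift are assumptions for demo purposes.
import librosa
import soundfile as sf

y, sr = librosa.load("chime_192k.wav", sr=None)  # sr=None keeps the native 192 kHz
# Shift down two octaves: ultrasonic content captured above 20 kHz
# lands back in the audible band instead of leaving a dull void.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-24)
sf.write("chime_pitched_down.wav", shifted, sr)
```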
The objects recorded included a waterphone, a custom Tim-Kaiser-designed instrument known as the Icarus, elephant bells, sleigh bell shakers, ankle bells, tree chimes, and more.
My musical inspiration came during “stress-testing” these recordings by doing extreme pitch and time manipulation. Since ostensibly these objects were instruments, there were tones and intervals I was hearing that led me to start assembling musique concrète sketches, not unlike my process for my previous all-wind-in-wires album, The Quivering Sky. I also started to integrate some field recordings I made aboard the SS Red Oak Victory, a restored WWII ammunition freighter. (More about those recording sessions in future blog posts.)
The first track of the album, “Phase Change,” is an example of where all of the songs started.
The Borders of Sound and Music
But I kept hearing harmonies in these recordings, and they started to turn out to be more musical than expected. In 2022, I got to know the software (and people) of Physical Audio, and it struck me that their virtual instruments would complement these metallic tones perfectly…I mean, any company that makes a prepared-piano emulator is OK in my book! Derailer and Preparation could be traditionally tonal and melodic, or loaded with inharmonic partials and resonances. These two instruments wound up being good aesthetic fits for this project.
The track “Pruina” (the word for hoarfrost in Latin) is a good example of these virtual instruments integrating with other metallic tones.
Things kept progressing from there. I used techniques lifted from my own sound library, Metallitronic, to re-amp some synthesized tones through gongs placed on large, powerful transducers. I unboxed some of my self-made instruments built from springs, and bowed and struck a suspended sheet of steel in my garage. Much fun was had, but that thin plate of steel in the garage, hung from a c-stand, started to give me an idea…
The “Only Real Reverb” Rule
I started to put the stereo spring reverb in my studio to heavy use during the mixing stage, and one day I thought to myself, “Wait a second…I’ve got all this rich reverb from the SS Red Oak Victory sessions…this spring reverb sounds great, non-linear, and chaotic…why am I using virtual reverbs and delays at all?” This led me to give myself the challenge to discard all virtual reverbs in my mixes (despite my undying love for ValhallaDSP for most uses), and only use electro-mechanical reverbs.
That instantly made me think of plate reverbs. I asked around to see if any local studios had any plate units that were functional, and much to my surprise, the Skywalker Sound Scoring Stage had not one, not two, but three functioning EMT 140 plate reverbs. After a few phone calls, I found myself in this world-class facility re-amping stems through six channels of luscious, real-steel reverbs.
While EMT 140 units ostensibly have a 3-5 second maximum decay time, I did the ol’ Walter Murch trick of bringing some stems varispeeded up by an octave, playing back twice as fast. We tracked all the EMT 140 returns at 96 kHz, so back in my own studio I varispeeded those returns back down…now I had 8-10 second reverb tails from real plate reverbs. Most of the final mixes actually have a full six channels of plate reverb on them, and there are no virtual reverbs anywhere on the album.
This felt like a logical conclusion to the album’s all-metal creative constraint.
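For anyone curious about the varispeed mechanics, here’s a minimal Python sketch. Sample-rate reinterpretation stands in for a DAW’s varispeed here, and the filenames and rates are placeholders, not specifics from the sessions:

```python
# Sketch of the varispeed round trip. Filenames are placeholders, and
# sample-rate reinterpretation stands in for a DAW's varispeed.
import soundfile as sf

# Step 1: varispeed the dry stem UP an octave by declaring double the rate.
stem, rate = sf.read("dry_stem_48k.wav")              # rate == 48000
sf.write("dry_stem_doublespeed.wav", stem, rate * 2)  # plays 2x fast, +1 octave

# Step 2 happens outside this script: re-amp the sped-up stem through
# the plate reverb and record the return at 96 kHz.

# Step 3: varispeed the plate return back DOWN by halving its declared rate.
ret, ret_rate = sf.read("plate_return_96k.wav")       # ret_rate == 96000
sf.write("plate_return_halfspeed.wav", ret, ret_rate // 2)
# A ~4 s plate tail now reads out as ~8 s, with the program material
# back at its original pitch and speed.
```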
From Music to Meaning
The recording sessions happened during an unusually brutal and long winter, and the steely tones of the works started to feel like both a paean and a dirge to winter itself. This became the compositional focus of the album, which influenced the songs, their titles, and the cover art (a glacier in Iceland, photographed by me). As friends’ neighborhoods were literally crushed under the weight of snow and local areas flooded from winter rains, the music turned out dark but with a core ray of hope, at least to my ear.
But of course sound and music only have the meaning given to them by the listener. My intent is mine alone, and whether it comes across to those experiencing the work is out of my hands. That’s the essence, terror, and joy of releasing art into the world.
Since my last blog post, something fairly unusual happened: I became a full-time audio professional.
I’m a freelancer now: as of 2017, I’ve decided to follow my passion – sound – to the best of my ability. Here’s what happened.
First, I fell into becoming a mastering engineer and opened my own practice, Obsidian Sound. Having leapt back into music in 2014, I realized that this discipline suited me well: fast project turnover, tough critical thinking, attention to detail, and being able to help guide musicians’ craft and creative development in a post-label era. I’m closing in on my 100th mastering project, and I’ve loved every second of it.
Second, I’ve started releasing my own sound libraries. With the help of A Sound Effect, I’ve created two such libraries that blend my loves of music and pure sound, which still convey a lot of the dark themes I express in my albums, but oriented towards use in film and games.
Third, I’ve been doing sound design and audio editing for podcasts and video. There’s a whole new part of this site dedicated to that work. Heck, I’m even learning Wwise.
Fourth and finally, I continue to release 3-4 full-length albums a year. I’ve played live in the SF Bay Area, Salt Lake City, and across Germany. I’ve also been commissioned to write a video game theme as well as a documentary score. The former will be announced in a month, and the latter will probably still be years in the making. More on both as I’m able to disclose details.
This site has been online for ten years, and I’ll do my best to update it as these varied explorations of audio develop. It’s not yet lucrative, but I’ve not been this creatively fulfilled in a long time. I must thank everyone who has supported this ongoing journey into the world of sound over the years, including the kind words, the tough criticisms, the countless conversations, inspirations, and transfers of knowledge.
If your interest in sound and audio is as broad as mine, then let’s keep this train moving. (And, realizing that this is a complete career reboot, reach out if you need my services on any of your projects.) Onward!
Want a challenge? Try to play back interface sounds on the show floor at CES. [Intel Booth, CES 2012.]
For those that might not know, for the last decade I earned my living designing digital installations: Multi-touch interactive walls, interactive projection mapping, gestural interfaces for museum exhibits, that sort of thing. Sometimes these things have sound, other times they don’t. When these digital experiences are sonified, regardless of whether they are imparting information in a corporate lobby or being entertaining inside of a museum, clients always want something musical over something abstract, something tonal over something mechanical or atonal.
In my experience, there are several reasons for this. [All photos in this post are projects that I creative-directed and created sound for while I was the Design Director of Stimulant.]
Expectations and Existing Devices
It’s what people expect out of computing devices. The computing devices that surround me almost all use musical tones for feedback or information, from the Roomba to the Xbox to Windows to my microwave. It could be synthesized waveforms or audio-file playback, depending on the device, but the “language” of computing interfaces in the real world has been primarily musical, or at least tonal/chromatic. This winds up being a client expectation, even though the things I design tend not to look like any computer one uses at home or work.
Yes, I strapped wireless lavs to my Roomba. The things I do for science.
Devices all around us also use musical tropes for positive and negative message conveyance. From Roombas to Samsung dishwashers, tones rising in pitch within a major key or resolving to a 3rd, 5th, or full octave are used to convey positive status or a message of success. Falling tones within a minor key or resolving to odd intervals are used to convey negative status or a message of failure. These cues, of course, are entirely culture-specific, but they’re used with great frequency.
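As a toy example of those tropes, here’s a hedged Python sketch that renders a rising “success” earcon and a falling “failure” earcon. The specific intervals, durations, and levels are my own illustrative picks, not anyone’s product spec:

```python
# Rising "success" vs. falling "failure" earcons; all values illustrative.
import numpy as np
import soundfile as sf

SR = 48000

def tone(freq, dur=0.12, sr=SR):
    """Sine tone with 10 ms fades to avoid clicks."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    env = np.minimum(1.0, np.minimum(t, dur - t) / 0.01)
    return 0.3 * np.sin(2 * np.pi * freq * t) * env

# Success: a rising perfect fifth (C5 up to G5).
success = np.concatenate([tone(523.25), tone(783.99)])
# Failure: a falling semitone (C5 down to B4).
failure = np.concatenate([tone(523.25), tone(493.88)])

sf.write("success.wav", success, SR)
sf.write("failure.wav", failure, SR)
```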
The only times I’ve ever heard non-musical, actually annoying sound, it was very much on purpose and always to indicate extremely dire situations. The home fire alarm is maybe the ultimate example, as are klaxons at military or utility installations. Trying to save lives is when you need people’s attention above all else. However, even excessive use of such techniques can lead to change blindness, which is a deep topic for another day. Do you really want a nuclear engineer to turn a warning sound off because it triggers too often?
The Problem with Science Fiction
Science fiction interface sounds often don’t translate well into real world usage.
This prototype “factory of the future” had to have its sound design elevated over the sounds of compressors and feeders to ensure zero defects…and had to not annoy machine operators, day in and day out. [GlaxoSmithKline, London, England]
My day job has been inexorably linked to science fiction: The visions of computing devices and interfaces that are shown in films like Blade Runner, The Matrix, Minority Report, Oblivion, Iron Man, and even television shows like CSI set the stage for what our culture (i.e., my client) sees as future-thinking interface design. (There’s even a book about this topic.) People think transparent screens look cool, when in reality they’re a cinematic conceit so that we can see more of the actors, their emotions, and their movement. These are not real devices – they, and the sounds they make, are props to support a story.
Audio for these cinematic interfaces – what Mark Coleran termed FUI, or Fantasy User Interfaces – may be atonal or abstract so that it doesn’t fight with the musical soundtrack of the film. If such designs are musical, they’re more about timbre than pitch, more Autechre than Arvo Pärt. This just isn’t a consideration in most real-world scenarios.
Listener Fatigue
Digital installations are not always destinations unto themselves. They are often located in places of transition, like lobbies or hallways.
I’ve designed several digital experiences for lobbies, and there’s always one group of stakeholders that I need to be aware of but that my own clients don’t bring to the table: The front desk and/or security staff. They’re the only people who need to live with this thing all day, every day, unlike visitors or other employees who’ll be with a lobby touchwall for only a few moments during the day. Make these lobby workers annoyed and you’re guaranteed that all sound will be turned off. They’ll unplug the audio interface from the PC powering the installation, or turn the PC volume to zero.
This lobby installation started with abstract chirps, bloops, and blurps, but became quite musical after the client felt the sci-fi sounds were far too alienating. Many randomized variations of sounds were created to lessen listener fatigue. There was also one sound channel per screen, across five screens. [Quintiles corporate lobby, Raleigh NC]
Music tends to be less fatiguing than atonal sound effects, in my experience, and triggers parts of the brain that evoke emotions rather than instinctual reactions (in ways that neuroscience is still struggling to understand). More specifically, sounds without harsh transients and with relatively slow attacks are more calming.
Randomized and parameterized/procedural sounds really help with listener fatigue as well. If you’re in game audio, the tools used in first- and third-person games to vary footsteps and gunshots are incredibly important to creating everyday sounds that don’t get stale and annoying.
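In practice, this can be as simple as a little randomization at trigger time. Here’s a hedged sketch; the file pool and variation ranges are made-up numbers, and it assumes mono files:

```python
# Round-robin sample selection plus subtle gain and pitch variation,
# the same idea game audio tools use to keep repeated sounds fresh.
import random
import numpy as np
import soundfile as sf

POOL = ["tap_01.wav", "tap_02.wav", "tap_03.wav"]  # hypothetical recorded variants

def play_variant():
    """Pick a random variant, then nudge its gain and pitch slightly."""
    data, sr = sf.read(random.choice(POOL))          # assumes mono files
    gain_db = random.uniform(-2.0, 0.0)              # subtle level variation
    rate = 2 ** (random.uniform(-50, 50) / 1200)     # +/- 50 cents of pitch
    # Crude resample-by-interpolation to apply the pitch variation:
    idx = np.arange(0, len(data) - 1, rate)
    varied = np.interp(idx, np.arange(len(data)), data)
    return varied * 10 ** (gain_db / 20), sr
```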
The Environment
Another reality is that our digital experiences are often installed in acoustically bright spaces, and technical-sounding effects with sharp transients can really bounce around untreated spaces…especially since many corporate lobbies are multi-story interior atriums! A grab bag of ideas has evolved from years of designing sounds for such environments.
This installation had no sound at all, despite our best attempts and deepest desires. The environment was too tall, too acoustically bright, and too loud. Sometimes it just doesn’t work. [Genentech, South San Francisco, CA]
Many clients ask for directional speakers, which come with three big caveats. First, they are never as directional as the specifications indicate. A few work well, but many don’t, so caveat emptor (they also come with mounting challenges). Second, their frequency response graphs look like broken combs, partially a factor of how they work, so you can’t expect smooth reproduction of all sound. Finally, most are tuned to the human voice, so musical sound reproduction is not only compromised sonically, but anything lower than 1 kHz starts to bleed out of the specified sound cone. That’s just physics – not much will stop low-frequency sound waves except large air gaps with insulation on both sides.
The only consistently effective trick I’ve found for creating sounds that punch through significant background noise is rising or falling pitch, which lends itself nicely to musical tones that ascend or descend. Most background noise tends to be pretty steady-state, so this can help a sound punch through the environmental “mix.”
One cool trick is to sample the room tone and make the sounds in the same key as the ambient fundamental – it might not be a formal scale, but the intervals will literally be in harmony with one another.
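Here’s roughly how that room-tone trick could look in Python. A crude FFT-peak estimate is my simplification (real rooms are messier and may need averaging over time), and the filename is a placeholder:

```python
# Estimate the room tone's dominant frequency, then derive UI tones
# as harmonics of it so they sit "in key" with the space.
import numpy as np
import soundfile as sf

room, sr = sf.read("room_tone.wav")     # placeholder filename
if room.ndim > 1:
    room = room.mean(axis=1)            # fold to mono

windowed = room * np.hanning(len(room))
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(room), 1 / sr)

mask = freqs > 40                       # ignore sub-40 Hz rumble
fundamental = freqs[mask][np.argmax(spectrum[mask])]

# Build interface tones as low-integer harmonics of the room:
ui_freqs = [round(fundamental * n, 1) for n in (2, 3, 4, 5)]
print(f"room fundamental ~{fundamental:.1f} Hz -> UI tones {ui_freqs}")
```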
Broadband background noise can often mask other sounds, making them harder to hear. In fact, having the audio masked by background noise unless you’re right in front of the installation itself might be a really good idea. I did a corporate lobby project where there was an always-running water feature right behind the installation we created; since it was basically a white noise generator, it completely masked the interface’s audio for passersby, which kept the security desk staff much happier and kept the sound from intruding on the sonic landscape for the casual visitor or the everyday employee.
Music, Music Everywhere
Of course, sometimes an installation is meant to actually create music! This was the first interactive multi-user instrument for Microsoft Surface, a grid sequencer that let up to four people play music.
These considerations require equal parts composition and sound design, and a pinch of human-centered design and empathy. It’s a fun challenge, different from sound design for traditional linear media, which usually focuses on being strictly representative or on re-contextualizing sounds recorded from the real world. Listen to devices around you in real life and see if you notice the frequency (pun intended) with which musical interface sounds are commonplace. If you have experiences and lessons from doing this type of work yourself, please share in the comments below.
Following 2015’s full-length album, Dissolver, I’m happy to announce the release of Dissolved, an EP with remixes by musicians from around the world. The US is represented by A Box in the Sea (WA), The Sight Below (NY), and r beny (CA); other contributors include The Heartwood Institute (aka Jonathan Sharp, UK), Hainbach (DE), and Fake Empire (NZ). The remixers’ techniques were as varied as their locations, from DAW-based arrangements to use of vintage hardware to recordings using dictaphones. The pieces exhibit a similar range of moods and styles as the original Dissolver LP, from lilting to tense, ambient to percussive, experimental to melodic.
I’m thrilled to announce the release of Dissolver, the first album under my own name. It is available now as a digital download on Bandcamp (with a PDF booklet of additional artwork and liner notes, exclusive to Bandcamp). You can also buy it as a digital album on iTunes, Amazon, and Google Play.
Sure, it’s fun to use long, non-reverb sounds as impulse responses…but what about short, percussive ones?
Convolution reverbs have been a staple of audio post-production for a good while, but as with most tools, I prefer to force them into unintended uses.
While I am absolutely not the first person to use something other than an actual spatial, reverb-oriented impulse response – bowed cymbals are amazing impulse responses, by the way – I hadn’t really looked into using very short, percussive impulse responses until recently. I mean, it’s usually the short percussive sounds you’re processing through the convolution reverb. I found that it can add a pretty unique overtone to a sound. Try it sometime!
(Coincidentally, today Diego Stocco is promoting his excellent Rhythmic Convolutions, a whole collection of impulse responses meant for just these creative purposes. Go check it out!)
Today’s sample is in three parts. First, a very bland percussion track. Then, the sound of a rusty hinge dropped from about one foot onto a rubber mat, recorded with my trusty Sony PCM-D50 field recorder. Then, the same percussion track through Logic Pro’s Space Designer (Altiverb or any other convolution reverb will do, of course) using the dropped hinge sound as an impulse response. It adds a sort of distorted gated reverb, adding some grit, clank, and muscle to an otherwise pretty weak sound.
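If you’d like to replicate the effect outside a plugin, a bare-bones Python sketch might look like this. The filenames are placeholders, mono files are assumed, and the dry/wet balance is arbitrary:

```python
# Convolve a percussion track with a short, percussive "impulse response".
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("percussion.wav")      # placeholder filenames, mono assumed
ir, ir_sr = sf.read("hinge_drop.wav")
assert sr == ir_sr, "resample one file so the sample rates match"

wet = fftconvolve(dry, ir)                            # the "reverb" pass
wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))      # match peak levels
mix = 0.6 * dry + 0.4 * wet[: len(dry)]               # arbitrary dry/wet blend
sf.write("percussion_hinged.wav", mix, sr)
```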
The joys of knobs. And patch points. And empty bank accounts.
While this may be old news to my followers on SoundCloud, Twitter, and Instagram, I’ve assembled my first Eurorack-format modular synthesizer. These amalgamations of faceplates, cables, circuits, and glowing LEDs are desirable, fetishized, addictive, and steeped in history. But, really, they’re just tools.
But what tools they are. Modular synthesizers are no longer relegated to the dustbin of history, nor an underground elite (as well documented in the excellent documentary, I Dream of Wires). They have come roaring back, arguably leading the way in technical synthesis innovation, and are a commonplace instrument in many studios. This boom has even gotten the heavyweights of mass market synthesizers, like Roland, to (re)release Eurorack modules, and pop musicians like Martin Gore to release all-modular electronic albums.
Everyone’s path to modular synthesis is different, and so is mine. But why did I go modular? How did I even know where to begin? And how can I hope to stem the addiction of constantly adding low-cost modules, a habit so common the format is nicknamed “Eurocrack”?
Embrace Limitations
It’s tempting to just buy flavor-of-the-month new products, but that way lies financial ruin and a studio full of stuff you don’t use. The way to stem the financial bleed and random module selection is to place limitations on the process. For me, the limitations were as follows.
I’ve got a significant investment in existing software and hardware that I want to honor and leverage, not duplicate. I’m designing an additional instrument, not building a new studio.
I have limited physical space in my home studio. Therefore my case will be on the small side, and that will enforce limits on the number of modules I can purchase.
I will “version” the modular synth and roadmap it, as if I were designing an actual instrument or a piece of software. I will buy modules in two initial rounds: a v0.5 to instantiate the most basic system and ensure that the workflow and gestalt of modular synthesis actually speaks to me, and then a v1.0 that I will live with for a year. Only after user testing – my own, of course – can I roadmap a meaningful path to a v1.5, v2.0, and so on.
I’ve spent my career breaking down everything, from human relationship challenges to sound design, as a set of design problems. This helps frame the real problem so that solutions are more meaningful. So, I asked myself: What’s the problem I’m trying to solve, or am I just lusting after gear? (Spoiler: It’s both!)
My current system was lacking in two key areas: complex modulation options and support for serendipity. My existing tools didn’t offer much in the way of happy accidents, randomness, or cross-modulated signals and patterns of control. When the most interesting and complex synthesized rhythms and timbres I was creating were coming from Propellerhead Reason during my morning bus commute, I knew something was missing in my main studio.
Software is an expense; hardware is an investment. Software suffers from instability and, over the long haul, a risk of incompatibility that many hardware units do not.
I’ve already been enjoying the workflow of using external hardware as a sound source and then post-processing it digitally, or the other way around.
With the above considerations, the idea of a flexible, modulation-rich instrument to add to the stable seemed to make sense.
Modular synths are, well, modular: Flexibility is what they’re all about. But you are building your own instrument. Without a sense for what you want to accomplish, you’ll overspend and not get what you really need…and, more dangerously, you won’t know when you should stop buying modules. Most of us don’t have the disposable income to buy modules willy-nilly.
Here were my rules of engagement for assembling my modular synth. These will change over time, but it helped me understand what the first iteration of this instrument would be. I wrote these down and re-read them any time I started to think about adding a new module.
No analog oscillators. While that may seem against conventional modular wisdom, I have a total of ten analog oscillators across four other devices. I’ve got this covered. Go for something really unusual as a sound source.
No effects. I know that even if I monitor a track with effects on, I always record dry and have effects as plugins or rendered to separate tracks. I use tons of plugins and stompboxes: I have effects covered already.
Go nuts with modulation. Having enough tools to generate and modify clock signals and control voltages will be critical, because I don’t have digital tools that excel at this. Get more modules that control modulation than produce sound (or, ideally, ones that can do both).
Don’t forget the DAW. I’ve got a significant investment in a computer-based audio workstation that should be leveraged, so ensuring that modulation and clock signals can drive the modular was critical.
Embrace multi-tracking. Look at the modular as a sound design station, instrument, or voice, not as a complete studio. Get enough expressive options to do drones, melodies, and unusual percussion…but I don’t have to do all these things at once. That also means no more than 2 channels in or out of the modular synth.
There’s nothing mysterious about putting a modular together or how it’s used, as long as you have a good grasp of signal flow inside a typical synthesizer. It doesn’t take any technical skills beyond using a screwdriver, reading directions, and doing simple math around power consumption.
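That power math really is simple. Here’s a sketch; the module names, current draws, and supply capacity are invented for illustration, so always check the real specs and leave healthy headroom:

```python
# The "simple math": sum each rail's draw and compare to the supply.
# Module names, currents, and supply capacity are invented for illustration.
modules = {
    "oscillator": {"+12V": 70, "-12V": 20, "+5V": 0},   # mA per rail
    "filter":     {"+12V": 45, "-12V": 45, "+5V": 0},
    "modulator":  {"+12V": 90, "-12V": 30, "+5V": 50},
}
psu = {"+12V": 1200, "-12V": 500, "+5V": 500}           # supply capacity in mA

for rail in ("+12V", "-12V", "+5V"):
    draw = sum(m[rail] for m in modules.values())
    headroom = 100 * (1 - draw / psu[rail])
    print(f"{rail}: {draw} mA of {psu[rail]} mA used ({headroom:.0f}% headroom)")
```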
I’ve got solid sync with my DAW.
I’ve got an instrument that can do things none of my other instruments can, and vice-versa.
I’ve got methods to interface with effects pedals, external semi-modular instruments (even with different interconnects), my DAW, and even my iPad. It’s deeply integrated into the rest of my studio.
It’s small. Full, but small. It’s even able to be self-contained if I decide to embrace limitations and create sounds or music only with this instrument outside of my studio or otherwise away from my DAW, even with my vintage Roland TR-606 drum machine.
It’s capable of percussion, melody, and drones that can modulate in complex and random ways over seconds or many minutes.
Modular users have a reputation for noodling and sound designing but never actually completing songs or projects. It’s like an aural sandbox. The satisfaction of signal routing is autotelic: It’s its own reward, a cycle of constant discovery and of following or rejecting conventional wisdom. It’s also extremely meditative once you’re past the initial learning curve.
I’ve already broken the “no effects” rule, but only with modules that can be “self-patched” and act as sound sources in their own right.
Even though I only purchased digital oscillator modules, analog modulators like LFOs can often be used as analog oscillators when they are pushed into the audible range, as can filters that self-oscillate when their resonance is set high. I wound up with four analog oscillators without even knowing it.
Once you realize that anything can be routed into anything, all synthesis rules go out the window. LFOs and filters can be oscillators, as mentioned above, but clocks can be triggers, envelopes can be clocks, envelopes can be LFOs, audio amplitude can modulate anything…that’s the mind implosion and creativity that modular synthesis brings.
Over time, will I jettison older gear and go all Eurorack? Will I dispense with the computer entirely for making music? Probably not. But I’m sure my system will slowly expand, change, and evolve with my interests, just as I’ve shifted from oils to acrylics to pastels to pencils to pixels in my visual arts career. The initial rules I started with will morph, get relaxed, and get updated. My initial configuration has gaps and weaknesses, but nothing’s perfect. And now I’m good to go with a new palette of sonic colors.
Now, if you’ll excuse me, I have field recordings to run through my modular.
We had two days of 15-25 knot winds, and as you might imagine, a lighthouse is a roughshod place. The winds were howling through the old windows and making amazing sounds.
Only one problem: I had a small sea kayak with no room to pack even a handheld field recorder. As I’ve said many times before, the best field recorder is the one you have with you, and in this case, my only option was my iPhone. In glorious, shimmering mono.
Today’s sounds are of these howling winds, recorded with the Voice Memos app on iOS. I’m not about to make a habit of using my iPhone as a field recorder, even with aftermarket microphones, but hopefully this goes to show that sometimes you do the best with what you have. Especially when the sounds and location are literally once-in-a-lifetime events.
Following on my last post, I’ve continued to play around with my recordings of deer antlers through a contact microphone. Today’s sound is almost entirely from that session, with only a handful of synthesized sounds, all triggered by LFOs and other random modulations. The manipulations of the deer antler sounds were done in the very weird, pretty unstable, and utterly unique Gleetchlab application, as well as iZotope Iris, which did an amazing job of figuring out the root frequency of the flute-like and cello-like bowed resonances.