It was a pleasure to be on the Field & Foley podcast; we discussed field recording, game audio, experimental music, and my philosophy that there are no solid boundaries between any of it. Listen here!
Want a challenge? Try to play back interface sounds on the show floor at CES. [Intel Booth, CES 2012.]
For those who might not know, for the last decade I earned my living designing digital installations: multi-touch interactive walls, interactive projection mapping, gestural interfaces for museum exhibits, that sort of thing. Sometimes these things have sound, other times they don’t. When these digital experiences are sonified, regardless of whether they are imparting information in a corporate lobby or being entertaining inside of a museum, clients always want something musical over something abstract, something tonal over something mechanical or atonal.
In my experience, there are several reasons for this. [All photos in this post are projects that I creative-directed and created sound for while I was the Design Director of Stimulant.]
Expectations and Existing Devices
It’s what people expect out of computing devices. The computing devices that surround me almost all use musical tones for feedback or information, from the Roomba to the Xbox to Windows to my microwave. It could be synthesized waveforms or audio-file playback, depending on the device, but the “language” of computing interfaces in the real world has been primarily musical, or at least tonal/chromatic. This winds up being a client expectation, even though the things I design tend not to look like any computer one uses at home or work.
Yes, I strapped wireless lavs to my Roomba. The things I do for science.
Devices all around us also use musical tropes for positive and negative message conveyance. From Roombas to Samsung dishwashers, tones rising in pitch within a major key or resolving to a 3rd, 5th, or full octave are used to convey positive status or a message of success. Falling tones within a minor key or resolving to odd intervals are used to convey negative status or a message of failure. These cues, of course, are entirely culture-specific, but they’re used with great frequency.
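For illustration only, here’s a minimal sketch of those two tropes: a “success” figure that rises and resolves to a perfect fifth, and a “failure” figure that sags down to a minor third below the root. None of this is from any real device; the frequencies, durations, and the gentle attack/release envelope are all my own assumptions.

```python
# Hypothetical "success" and "failure" earcons in the style described above.
import numpy as np
from scipy.io import wavfile

SR = 44100

def tone(freq, dur=0.18, sr=SR):
    """A sine tone with a soft ~30 ms attack and ~50 ms release (no harsh transients)."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    env = np.minimum(1.0, np.minimum(t / 0.03, (dur - t) / 0.05))
    return 0.4 * env * np.sin(2 * np.pi * freq * t)

root = 440.0  # arbitrary reference pitch (A4)
# Rising figure resolving to the fifth: "task complete."
success = np.concatenate([tone(root), tone(root * 5 / 4), tone(root * 3 / 2)])
# Falling figure landing a minor third below the root: "something went wrong."
failure = np.concatenate([tone(root), tone(root * 15 / 16), tone(root * 5 / 6)])

wavfile.write("success.wav", SR, (success * 32767).astype(np.int16))
wavfile.write("failure.wav", SR, (failure * 32767).astype(np.int16))
```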
The only times I’ve ever heard non-musical, genuinely annoying sound, it has been very much on purpose and always to indicate extremely dire situations. The home fire alarm is maybe the ultimate example, as are klaxons at military or utility installations. Trying to save lives is when you need people’s attention above all else. However, excessive use of such techniques can lead to change blindness, which is a deep topic for another day. Do you really want a nuclear engineer to turn a warning sound off because it triggers too often?
The Problem with Science Fiction
Science fiction interface sounds often don’t translate well into real world usage.
This prototype “factory of the future” had to have its sound design elevated over the sounds of compressors and feeders to ensure zero defects…and had to not annoy machine operators, day in and day out. [GlaxoSmithKline, London, England]
My day job has been inextricably linked to science fiction: The visions of computing devices and interfaces that are shown in films like Blade Runner, The Matrix, Minority Report, Oblivion, Iron Man, and even television shows like CSI set the stage for what our culture (i.e., my client) sees as future-thinking interface design. (There’s even a book about this topic.) People think transparent screens look cool, when in reality they’re a cinematic conceit so that we can see more of the actors, their emotions, and their movement. These are not real devices – they, and the sounds they make, are props to support a story.
Audio for these cinematic interfaces – what Mark Coleran termed FUI, or Fantasy User Interfaces – may be atonal or abstract so that it doesn’t fight with the musical soundtrack of the film. If such designs are musical, they’re more about timbres than pitch, more Autechre than Arvo Pärt. This just isn’t a consideration in most real-world scenarios.
Listener Fatigue
Digital installations are not always destinations unto themselves. They are often located in places of transition, like lobbies or hallways.
I’ve designed several digital experiences for lobbies, and there’s always one group of stakeholders that I need to be aware of, but that my own clients never bring to the table: the front desk and/or security staff. They’re the only people who have to live with this thing all day, every day, unlike visitors or other employees who’ll be with a lobby touchwall for only a few moments during the day. Annoy these lobby workers and you can guarantee that all sound will be turned off. They’ll unplug the audio interface from the PC powering the installation, or turn the PC volume to zero.
This lobby installation started with abstract chirps, bloops, and blurps, but became quite musical after the client felt the sci-fi sounds were far too alienating. Many randomized variations of sounds were created to lessen listener fatigue. There was also one sound channel per screen, across five screens. [Quintiles corporate lobby, Raleigh NC]
Music tends to be less fatiguing than atonal sound effects, in my experience, and triggers parts of the brain that evoke emotions rather than instinctual reactions (in ways that neuroscience is still struggling to understand). More specifically, sounds without harsh transients and with relatively slow attacks are more calming.
Randomized and parameterized/procedural sounds really help with listener fatigue as well. If you work in game audio, you already know this: the tools used in first- and third-person games to vary footsteps and gunshots are just as important here for creating everyday sounds that don’t get stale and annoying.
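If it helps to picture it, here’s a rough sketch of the basic recipe (my own illustration, not any particular middleware, and the file names are hypothetical): round-robin between a few recorded variations, never repeat the same one back-to-back, and nudge the pitch and level of each playback.

```python
# Hypothetical round-robin playback with randomized pitch and level per trigger.
import random

import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

VARIATIONS = ["chirp_01.wav", "chirp_02.wav", "chirp_03.wav"]  # placeholder assets
_last = None

def next_variation():
    """Pick a variation (avoiding immediate repeats), then vary pitch and gain slightly."""
    global _last
    choices = [f for f in VARIATIONS if f != _last] or VARIATIONS
    path = random.choice(choices)
    _last = path

    sr, data = wavfile.read(path)
    data = data.astype(np.float64)

    pitch = random.uniform(0.97, 1.03)              # ~+/- 3% pitch shift via resampling
    gain = 10 ** (random.uniform(-3.0, 0.0) / 20)   # 0 to -3 dB of level variation
    varied = resample(data, int(len(data) / pitch)) * gain
    return sr, varied                               # float samples, ready to play or write
```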
The Environment
Another reality is that our digital experiences are often installed in acoustically bright spaces, and technical-sounding effects with sharp transients can really bounce around untreated spaces…especially since many corporate lobbies are multi-story interior atriums! A grab bag of ideas has evolved from years of designing sounds for such environments.
This installation had no sound at all, despite our best attempts and deepest desires. The environment was too tall, too acoustically bright, and too loud. Sometimes it just doesn’t work. [Genentech, South San Francisco, CA]
Many clients ask for directional speakers, which come with three big caveats. First, they are never as directional as the specifications indicate. A few work well, but many don’t, so caveat emptor (they also come with mounting challenges). Second, their frequency response graphs look like broken combs, partly a function of how they work, so you can’t expect smooth reproduction of all sound. Finally, most are tuned to the human voice, so musical sound reproduction is not only compromised sonically, but anything lower than 1 kHz starts to bleed out of the specified sound cone. That’s just physics – not much will stop low-frequency sound waves except large air gaps with insulation on both sides.
The only consistently effective trick I’ve found for creating sounds that punch through significant background noise is rising or falling pitch, which lends itself nicely to musical tones that ascend or descend. Most background noise tends to be pretty steady-state, so this can help a sound punch through the environmental “mix.”
One cool trick is to sample the room tone and make the sounds in the same key as the ambient fundamental – it might not be a formal scale, but the intervals will literally be in harmony with one another.
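As a sketch of what I mean (the file name and the frequency band I search are just assumptions), you can grab a minute of room tone, find its strongest low-frequency component, and build your interface tones on harmonics of that fundamental:

```python
# Estimate the dominant low-frequency component of a room-tone recording,
# then derive interface tones as harmonics of it.
import numpy as np
from scipy.io import wavfile

sr, room = wavfile.read("room_tone.wav")   # hypothetical room-tone recording
room = room.astype(np.float64)
if room.ndim > 1:                          # fold stereo to mono
    room = room.mean(axis=1)

spectrum = np.abs(np.fft.rfft(room * np.hanning(len(room))))
freqs = np.fft.rfftfreq(len(room), d=1 / sr)
band = (freqs > 40) & (freqs < 500)        # assume the ambient fundamental sits below ~500 Hz
fundamental = freqs[band][np.argmax(spectrum[band])]

# Interface tones become harmonics of the ambience instead of fighting it.
ui_tones = [fundamental * n for n in (2, 3, 4, 6)]
print(f"ambient fundamental ~ {fundamental:.1f} Hz, UI tones: "
      + ", ".join(f"{f:.1f} Hz" for f in ui_tones))
```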
Broadband background noise can often mask other sounds, making them harder to hear. In fact, letting background noise mask the audio for anyone who isn’t standing right in front of the installation can actually be a good thing. I did a corporate lobby project with an always-running water feature right behind the installation we created; since it was basically a white noise generator, it completely masked the interface’s audio for passersby, which kept the security desk staff much happier without intruding on the sonic landscape for the casual visitor or the everyday employee.
Music, Music Everywhere
Of course, sometimes an installation is meant to actually create music! This was the first interactive multi-user instrument for Microsoft Surface, a grid sequencer that let up to four people play music.
These considerations require equal parts composition and sound design, and a pinch of human-centered design and empathy. It’s a fun challenge, different from sound design for traditional linear media, which usually focuses on being strictly representative or on re-contextualized sounds recorded from the real world. Listen to devices around you in real life and see if you notice the frequency (pun intended) with which musical interface sounds are commonplace. If you have experiences and lessons from doing this type of work yourself, please share in the comments below.
Sure, it’s fun to use long, non-reverb sounds as impulse responses…but what about short, percussive ones?
Convolution reverbs have been a staple of audio post-production for a good while, but as with most tools, I prefer to force them into unintended uses.
While I am absolutely not the first person to use something other than an actual spatial, reverb-oriented impulse response – bowed cymbals are amazing impulse responses, by the way – I hadn’t really looked into using very short, percussive impulse responses until recently. After all, it’s usually short percussive sounds you’re processing through the convolution reverb. I found that it can add a pretty unique overtone to a sound. Try it sometime!
(Coincidentally, today Diego Stocco is promoting his excellent Rhythmic Convolutions, a whole collection of impulse responses meant for just these creative purposes. Go check it out!)
Today’s sample is in three parts. First, a very bland percussion track. Then, the sound of a rusty hinge dropped from about one foot onto a rubber mat, recorded with my trusty Sony PCM-D50 field recorder. Then, the same percussion track through Logic Pro’s Space Designer (Altiverb or any other convolution reverb will do, of course) using the dropped hinge sound as an impulse response. It creates a sort of distorted gated reverb, adding some grit, clank, and muscle to an otherwise pretty weak sound.
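If you want to try the same experiment outside a DAW, a bare-bones version looks something like this (the file names are placeholders, and any convolution reverb plugin will do the same job without code):

```python
# Convolve a percussion loop with a short percussive recording used as the "impulse response."
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, dry = wavfile.read("percussion_loop.wav")   # the bland percussion track
_, ir = wavfile.read("hinge_drop.wav")          # the rusty hinge, used as an IR

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

dry /= np.max(np.abs(dry))                      # normalize both to +/- 1.0
wet = fftconvolve(dry, ir)                      # every hit excites the hinge's "ring"
wet /= np.max(np.abs(wet))

mix = 0.6 * dry + 0.4 * wet[: len(dry)]         # blend dry and wet to taste
wavfile.write("percussion_through_hinge.wav", sr, (mix * 32767).astype(np.int16))
```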
We had two days of 15-25 knot winds, and as you might imagine, a lighthouse is a weather-beaten place. The winds were howling through the old windows and making amazing sounds.
Only one problem: I had a small sea kayak with no room to even pack a handheld field recorder. As I’ve said many times before, the best field recorder is the one you have with you, and in this case, my only option was my iPhone. In glorious, shimmering mono.
Today’s sounds are these howling winds, recorded with the Voice Memos app on iOS. I’m not about to make a habit of using my iPhone as a field recorder, even with aftermarket microphones, but hopefully this goes to show that sometimes you do the best with what you have. Especially if the sounds and location are literally once-in-a-lifetime events.
After getting to know the Tetrax Organ, profiled in my last post, I became interested in what other devices used banana jack interfaces for control voltage (CV) modulation. The eurorack standard for modular synthesis is wildly popular, but its buzz drowns out other equally interesting platforms, like the banana-based Buchla and Serge systems.
This research led me to BugBrand, a quirky English manufacturer of both modular synths and desktop formats (who also happens to be a top-notch guy, not to be confused with The Bug, whom I’ve been following since his first release in 1997, which is based on The Conversation, which is about a sound recordist…talk about circular references…). I had heard great things about, and from, his gear, especially a well-regarded but often-overlooked device called the DRM1 Major Drum. This filled a hole in my gear list: a dedicated all-analog, super-flexible drum synthesizer. And with a Tetrax Organ and a Low Gain Electronics UTL-1/2 format converter, I could easily drive it from pretty much any source that outputs CV.
In short, I picked one up, and am thoroughly enjoying it. It mixes well with other gear, especially if I’m rolling all-analog. It overdrives naturally, aesthetically, and quickly, lending itself to aggressive styles, though it isn’t limited to them. I especially like the ability to create rising or falling triggered envelopes via the “Bend” feature. Having two trigger inputs (three if you include the big red button) and CV control of both its oscillator and filter is great. I do wish the filter were steeper for more extreme sculpting of the noise generator, but you do get the choice of bandpass or lowpass/highpass (the latter switchable with an internal jumper) via a front-panel switch.
In all my research, though, I never came across a single piece of media that really dove into its sound design abilities. While its tone can be varied a little based on the strength of the trigger signal it’s fed, it’s a single-voice synth, and no video demo or Soundcloud track really seemed to express its breadth of sound design possibilities.
So, I decided to do something about it.
The sounds in today’s track are entirely made from the BugBrand DRM1. About half of the tracks are sequenced via the EHX 8-Step Program sequencer pedal (including the dubby melodic loop), and the rest are hand-edited. One track features modulation from the Tetrax Organ’s touch pressure, and another uses the Tetrax’s oscillators to drive the DRM1’s oscillator and filter. Effects include some delays, one reverb, and a bunch of high-pass and low-pass filters and EQs, with some compression on the output bus.
The sounds all have a very strong flavor, sharing a lot of timbral qualities, regardless of the function they serve in the mix. That can be good or bad, depending on what you’re after. But still, I think it’s impressive that this is all from a device with only one oscillator, one filter, and only three CV inputs. And this thing has a truly massive frequency range: its lowest pure tones drop to at least 20 Hz, and it’s pretty easy to get spikes near or above 20 kHz!
Pro tip: BugBrand products are tough to get a hold of, as Tom Bug doesn’t hold much inventory at any one time, so when he makes a production run, they sell out in a heartbeat. If you want to get in on Tom Bug’s next manufacturing runs/releases, get on his list.
Following on my last post, I’ve continued to play around with my recordings of deer antlers through a contact microphone. Today’s sound is almost entirely from that session, with only a handful of synthesized sounds, all triggered by LFOs and other random modulations. The manipulations of the deer antler sounds were done in the very weird, pretty unstable, and utterly unique Gleetchlab application, as well as iZotope Iris, which did an amazing job of figuring out the root frequency of the flute-like and cello-like bowed resonances.
I love handmade soundmaking devices, but outside of my beloved Grendel Drone Commander, a lot of the weird noise boxes and effects I have are, well, noisy. They tend to be aggressive, loud, and blippy. Some accept MIDI, some accept CV, some accept no sync signal at all.
One evening I wondered if I could coax them into some semblance of ambient drones, to loosen myself up and not record to a fixed tempo, and to not get too “precious” with editing in post. Somehow the angry nature of these devices just seems to bleed through anyway. Or is that my angry nature?
So, the result of this cathartic experiment was “angry ambient.” Or, angrient.
This track features the following:
All takes recorded live into Logic Pro X: No sync to anything, no MIDI, no CV.
One track of a Bleep Labs Nebulophone, with its alligator clip clamped onto a key for a sustained drone, recorded through a Red Panda Particle pedal set to Reverse, both tweaked live. The dry and effected tracks were recorded simultaneously.
Another droned Nebulophone track went through the Particle set to Delay, and then through a Seppuku Memory Loss pedal, with its clean microchip inserted, all three tweaked live. The dry and effected tracks were recorded simultaneously.
One track of the RareWaves Grendel Drone Commander, recorded 100% dry. That thing needs no love, especially when its bandpass filters get overdriven at low frequencies. Yummy.
Sometimes sound design requires thinking inside of multiple boxes.
I’ve developed a small collection of handmade and boutique electronic effects and instruments over the years, like the Grendel Drone Commander, Lite2Sound PX, and many more (perhaps the subject of another post). Longtime readers may recall that I just love supporting independent makers and small cottage industries: That’s where all the weird, truly innovative stuff happens, and I (like many of you, dear readers) am more interested in cool sound design possibilities than straight-up distorted guitarrrrrrrr sounds.
Rare Waves’ Lite2Sound PX, by Eric Archer: A photonic microphone!
I’ve previously written about the heavily-built, wickedly cool Grendel Drone Commander synth from Eric Archer. I check his site, Rare Waves, from time to time for new handmade electronic toys, and I was really intrigued by his newer Lite2Sound PX unit. This small device, in Eric’s words, “extracts audio from ambient light.” It’s a photodiode amplifier. Or a photosensitive microphone. Point it at light, it makes sound. It runs off a 9-volt battery, has a volume control, and a headphone jack. Simple, exciting, and a whole new world of sonic insanity. You can buy them as kits or, as I did, fully assembled.
Sounds pretty straightforward. If you just point it at bright, broad light sources, it’s kind of disappointing. It’s when you start listening to artificial lights in otherwise dim environments that some serious magic starts to happen. My experiments were conducted in and around high tech computer equipment, running an 1/8″ mini jack from the headphone output into my Sony PCM-D50 recorder.
Lights inside of PCs, modulated by fans…and further modulated by speaker grills as I passed the Lite2Sound from side to side. Ethernet network activity lights. Server disk access indicator lights. A close up of the power button of an XBox 360 while booting up. Pulsing lights of devices in standby mode. Halogen lamps behind spinning desk fans.
Lightly armored for future fieldwork!
The resulting sounds were astounding in their range: Static, glitches, distorted synth pads, pure sinewave tones, sawtooth-like tones, and much more. You can’t control it, really. It’s a tool of discovery, and its very nature encourages constant experimentation. It was so small and so perfectly complemented a handheld field recorder that I just wanted to take it everywhere and point it at everything! It imparted the same joy as when you start recording with contact microphones or hydrophones: A new way to listen to the world around you. The more I used the Lite2Sound, the more I wanted to keep it protected, so I put it in a small plastic container (hacked with an X-Acto knife for access to the controls and headphone jack).
The Lite2Sound is a pretty narrowly focused device, and how useful it is to you depends on your taste for the unpredictable. Me, I adore this thing. Hell, I bought two (for future stereo photo-phonic insanity). It encourages constant experimentation, weighs nothing, and I can see using its output in both sound design and musical contexts. Eric Archer nails it again with an odd concept and a rock-solid, focused execution that results in a toy that just begs to be played with.
This thing was given to me as a Christmas gift. I immediately wanted to not froth milk with it, but to record it. With a hydrophone.
It was initially disappointing…until I put it into a metal pan and realized that its interaction with the pan, not the water, was far more interesting. The hydrophone was still in the water, but the frother was used in the water, inside the pan, and outside the pan as well, at varying speeds.