Community, Interviews

Making Waves: Sound Design in Modern-Day Software

Experts in sound design share insights about how to design better, more inclusive products with sound in mind

Team Stark

May 18, 2023

The portraits of the three speakers: Cat Noone, Megan Clegg and Matthew Bennett

When it comes to designing and building product experiences — online and offline — it’s easy to imagine a checklist of visual elements: logos, colors, fonts, graphics, and spacing. Video or audio might also get layered in if there’s an obvious connection, like for product demos or marketing materials.

But what about intentional sound design for software? 

We invited two phenomenal designers to talk with our community about how they approach sound design from very different perspectives. The first is Matthew Bennett, a composer and musicologist who works in sound design innovation and was the Chief Sound Designer at Microsoft. He thinks primarily about how product experiences are expressed through sound. (You’ll know his work if you’ve heard the Windows 10 calendar alert.) We also talked with Megan Clegg, a senior product designer who has worked with big media brands like Nickelodeon and iHeartRadio, as well as the real estate company Compass. She’s also hard of hearing, and she brings that perspective to designing product experiences.

Here are a few big takeaways from our conversation about the role of sound in design and user experience. (We also posted the whole session on YouTube in case you want to check it out!)

Sound is an emotion-rich area. 

Matthew’s research and experience have shown that sound is the most emotionally subjective element, and he pointed out, “It’s the absence of emotion that creates problems!”

Think about the range of emotions that are stirred up when it comes to sound. Emotion is what makes famous movie soundtracks truly iconic, driving laughter, tears, or awe. Emotion is also critically important when there’s an absence of sound. Just look at how emotions and facial expressions are a core part of communicating with sign language.

Science has shown how emotions are processed by multiple parts of the brain. We are constantly processing information, across all of our senses. If you’re hard of hearing, you might process more information through touch or vibration. This reminder about whole-brain processing goes to show how we experience communication — and the emotions that arise from communication — across many of our senses.

An intentional absence of sound is also important. 

Since sound generates emotions, the absence of sound can give a user space, much like an uncluttered graphical layout can make way for a feeling of focus or calm. 

On the flip side, too much sound can be polarizing. Megan remembered the days of MySpace, when people could put sound behind their profiles. “And people hated it!” Especially when there’s already so much sensory input in our lives, “extra” sounds can easily be overwhelming. And if a person is working on something complex, they might not want to hear extra things in the background.  

In life-critical scenarios, like healthcare settings, alert sounds can fire too frequently. This is dangerous because caregivers may start to tune out alerts and miss an important alarm. At the same time, too many sounds can drive up a patient’s anxiety, making “healing” harder to achieve.

We love how Matthew described his holy grail for sound design: to create sounds that are so fitting for the use case that people don’t even know they’re hearing the sound, until they notice that the sound is gone. (Now doesn’t that sound delightful!?!) 

Haptics are an under-leveraged communication experience. 

When it comes to tapping the senses, touch and sound are on the same emotional spectrum. Because sound is vibrational energy, that energy can be felt throughout the whole body. This shows another way that touch plays a role in our emotions.

Matthew described how sounds aren’t just about what the ear hears. Sounds are also about designing with rhythm. That’s where the power — and potential — of haptics comes in. He said, “We should think of this as a meta sense!”

Yet that game-changing experience for haptics hasn’t been developed yet. The world is still waiting for something that goes beyond a buzz. Haptics can be soft or “loud,” too (loud = a firmer touch). Matthew pointed out that someone could compose mini-rhythmic experiences and messages with touch.
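Matthew’s idea of composing mini-rhythmic messages with touch already has a rough analogue on the web: the Vibration API, where a pattern of vibration and pause durations can sketch out “softer” and “louder” rhythms. The pattern names below are purely illustrative, not from the conversation:

```javascript
// Sketch: composing "soft" vs. "loud" haptic rhythms with the Web Vibration API.
// A pattern alternates vibration and pause durations in milliseconds; longer
// pulses read as "louder," shorter ones as "softer."
const gentleNudge = [40, 80, 40];               // two brief taps: a quiet alert
const urgentAlarm = [300, 100, 300, 100, 300];  // three long pulses: a strong alert

function playHaptic(pattern) {
  // navigator.vibrate only exists in some browsers (and never in Node),
  // so guard the call before using it.
  if (typeof navigator !== "undefined" && typeof navigator.vibrate === "function") {
    return navigator.vibrate(pattern);
  }
  return false; // no haptics support in this environment
}

playHaptic(gentleNudge);
```

Even this simple duration-based vocabulary hints at how rhythm, not just intensity, could carry the emotional tone of a touch-based message.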

Among the Deaf and hard of hearing communities, Megan agreed that haptics have a lot of potential. One reason is that haptics can convey the same range of emotions that sound can. For example, a person could be alerted gently, or strongly, in accordance with the intended emotion.

Our Stark team would love to see sets of standards come together around sound and haptic design. Better implementations of haptics could bridge many accessibility needs. Even better, there are already lots of technologies in the wild that can deliver these experiences.

How are you designing for sound? 

It’s obvious that a visual-centric design approach addresses only a small part of a person’s experience. Humans are sense-rich, and there are many more dimensions to consider with how messages are sent and received.

As computing becomes even more ubiquitous, and software experiences become multi-modal by default, we also need to make experiences multi-sensory by default. This will not only result in more accessible products for people with disabilities, but also an overall richer experience for every person.

We’d love to hear about what great sound experiences you’ve come across in the world. Even better, maybe your team is the one leading the way!

Feel free to share your winning ideas in our Stark Slack Community or with us on LinkedIn.