Imagine sound as a pulsating wave traveling through the air, bridging the gap between its source and your ear canal, where it’s captured by your ear and decoded by your brain.
The science behind the sonic realm is as fascinating as it is intricate, and one of the most fundamental concepts in the field is the role of hertz and frequency in sound. These are the building blocks of not just music, but of the way we perceive audio in all its forms.
In this article, I’ll discuss the role of hertz in music, delving into the science behind this unit of measurement and how it can affect the way you create and experience music, whether you’re an artist, an audio engineer, or a listener.
About me
I’m a professional musician and producer, with over a decade of experience in producing music across a variety of genres.
I’ve been involved in the post-production process of dozens of albums, learning more and more about the importance of audio frequencies and how to tailor them to the listener’s needs.
As a result, I’ve become knowledgeable about the role of acoustics and audio frequency range in music production, and I’m excited to share everything I know about it with you in the following sections.
Given the scientific nature of the topic, and for the sake of clarity, I’ve divided this article into short sections, each focusing on one element of music acoustics and explaining its role in shaping audio.
Contents
Use the links below to navigate to the desired section of the article.
- What is hertz (Hz)?
- How do we perceive sound frequencies?
- How does frequency relate to music?
- How does frequency affect sound quality?
- Frequently asked questions
What is hertz (Hz)?
As I mentioned earlier, a sound is a wave that pulsates at regular intervals, creating predictable cycles.
If we define a cycle as the span between two successive points where the wave repeats itself, then the hertz (Hz) is the unit that measures how many of those cycles occur each second.
For instance, if a sound wave completes one whole cycle in one second, its frequency is 1 Hz. If a sound wave completes 100 cycles in a second, the frequency is 100 Hz.
It’s crucial to remember that the number of cycles per second determines the pitch of the sound. Higher frequencies correspond to higher-pitched sounds, while lower frequencies correspond to lower-pitched sounds.
This relationship between pitch and vibration speed is what allows us to associate certain frequencies with certain musical notes; for example, the A above middle C has a frequency of 440 Hz.
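As a quick illustration of that mapping, here’s a minimal Python sketch (my own example, not something you need for the rest of the article) that converts MIDI note numbers to frequencies using the standard equal-temperament formula, with A4 = 440 Hz as the reference.

```python
# A minimal sketch of the note-to-frequency relationship in twelve-tone equal
# temperament, using A4 = 440 Hz (the A above middle C) as the reference.
A4_HZ = 440.0
A4_MIDI = 69  # MIDI note number of A4, used here just to count semitones

def note_frequency(midi_note: int, reference_hz: float = A4_HZ) -> float:
    """Each semitone step multiplies the frequency by 2 ** (1 / 12)."""
    return reference_hz * 2 ** ((midi_note - A4_MIDI) / 12)

print(round(note_frequency(69), 2))  # 440.0  -> A4
print(round(note_frequency(60), 2))  # 261.63 -> C4 (middle C)
print(round(note_frequency(57), 2))  # 220.0  -> A3, one octave below A4
```

Doubling a frequency raises the note by exactly one octave, which is why A3 comes out at half of 440 Hz.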
How do we perceive sound frequencies?
Now that we know how sound waves behave, it’s time to focus on how we humans perceive those air vibrations and translate them into speech, music, and so on.
Human hearing range
Humans can hear sounds between 20 Hz and 20,000 Hz (20 kHz). Anything below 20 Hz is called infrasonic, and anything above 20 kHz is called ultrasonic.
Animals like pigeons and whales can communicate through infrasonic sounds, and bats can hear ultrasonic sounds.
The fundamental frequency of average human speech ranges from 90 to 155 Hz for an adult male and from 165 to 255 Hz for an adult female.
The music you listen to, regardless of the audio reproduction system used, falls within the 20 Hz – 20 kHz range, and most musical material sits between 40 Hz and 15 kHz.
Frequency and amplitude
We already discussed how the frequency of a sound wave determines its pitch, but what about its perceived volume? That’s where amplitude comes into play.
The amplitude determines the volume, or intensity, of a sound, and is defined by the distance between the wave’s resting position and its highest peak.
While frequency is measured in hertz (Hz), loudness is expressed in decibels (dB), a logarithmic scale. A sound wave with greater amplitude will produce a louder sound, and vice versa.
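If you like seeing the math, here’s a minimal sketch of how amplitude ratios translate into decibels; the reference amplitude of 1.0 is arbitrary and only serves the example.

```python
import math

# A minimal sketch of how amplitude ratios map to decibels (dB).
# The reference amplitude of 1.0 is arbitrary; only the ratio matters.

def amplitude_to_db(amplitude: float, reference: float = 1.0) -> float:
    """Convert an amplitude ratio to decibels (20 * log10 of the ratio)."""
    return 20 * math.log10(amplitude / reference)

print(round(amplitude_to_db(1.0), 1))  # 0.0   -> same level as the reference
print(round(amplitude_to_db(2.0), 1))  # 6.0   -> double the amplitude adds ~6 dB
print(round(amplitude_to_db(0.5), 1))  # -6.0  -> half the amplitude removes ~6 dB
```

The logarithmic scale is the reason a 6 dB boost roughly doubles the amplitude, regardless of how loud the signal already is.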
Bear in mind that frequency and amplitude are independent elements of a sound wave, meaning that changes in one don’t necessarily affect the other.
How does frequency relate to music?
How do you apply this new knowledge to music composition, production, or consumption?
What we’re discussing in this article are nothing less than the building blocks of acoustics, which means they have an impact on everything related to music.
In the sections that follow, we’ll see how the nature of sound frequencies affects music creation as much as audio gear manufacturing, and how each field can benefit from a deeper understanding of sound.
Same pitch, different sound: The role of timbre
When you play the same note on two different instruments, why is the sound they make so different? After all, they might have exactly the same pitch and amplitude, yet they sound nothing alike.
The answer is timbre: the tonal color of a voice or musical instrument that defines its character.
The timbre of an instrument is defined by the complex interplay between all the harmonics it produces when we strike a note, as well as its ADSR envelope (attack, decay, sustain, release).
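To see what that means in numbers, here’s a minimal sketch, assuming NumPy is available, that builds two tones with the same fundamental and the same peak level but different harmonic recipes; the “flute-like” and “string-like” weights are made-up values, purely for illustration.

```python
import numpy as np

# A minimal sketch of timbre: two tones with the same fundamental (220 Hz) and
# the same peak level, but different harmonic content. The harmonic weights
# below are made-up values for illustration only.
SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of sample times
FUNDAMENTAL_HZ = 220.0

def tone(harmonic_weights):
    """Sum weighted sine harmonics, then normalize so the peak level is 1.0."""
    wave = sum(w * np.sin(2 * np.pi * FUNDAMENTAL_HZ * (i + 1) * t)
               for i, w in enumerate(harmonic_weights))
    return wave / np.max(np.abs(wave))

flute_like = tone([1.0, 0.2, 0.05])            # energy mostly in the fundamental
string_like = tone([1.0, 0.7, 0.5, 0.3, 0.2])  # much richer overtone content

# Same pitch, same peak level, yet the waveforms differ: that difference is timbre.
print(np.allclose(flute_like, string_like))  # False
```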
The distribution of different frequencies plays an important role in music orchestration and post-production, where every instrument’s frequency range can be adjusted to fit perfectly within the existing soundstage.
How does frequency affect sound quality?
The way sonic information is captured, manipulated, and reproduced defines how we hear audio, and hertz is the unit we use to describe the frequencies we aim to capture or recreate.
Frequency response in recording and listening equipment
The frequency range of your recording gear needs to match that of the sound source.
Frequency has an impact on sound quality in the sense that the source’s full spectrum has to be captured in order to obtain a high-fidelity audio representation.
When recording a musical instrument, it’s crucial to use a microphone that can capture the full range of frequencies it can produce.
For example, if you’re recording a violin, whose fundamentals start at 196 Hz (the open G string) and whose harmonics extend to about 15 kHz, you’ll want a microphone whose frequency response covers that entire range, such as a small-diaphragm condenser microphone.
For more information, be sure to check out the article by my colleague Brandon Schock, where he explains the different types of microphones and how they compare in various recording applications.
As for audio reproduction, loudspeakers and headphones should cover the entire audio range from 20 Hz to 20 kHz.
Bear in mind that, once a speaker is placed in a room, the environment will greatly affect the audible frequency response. For instance, placing a speaker near a wall or the floor will boost its low-frequency response.
If you’re new to the topic, check out my recent article on speaker placement for stereo and surround sound systems.
Even though the frequency of a sound is not directly connected to its “quality,” all the elements mentioned above can have a great impact on how sound frequencies are captured and perceived.
Frequencies in music post-production
Frequencies can be altered, adjusted, attenuated, or enhanced in music post-production, bringing to life rich soundscapes in which sounds from different sources blend seamlessly.
The role of a mixing engineer is to create a coherent, cohesive tapestry in which every instrument is present and has a clear position in the sonic image.
This post-production process requires a deep understanding of how sound frequencies interact with each other and are translated by audio devices.
An equalizer (EQ) is the Swiss Army knife of the audio engineer: a tool used to boost or cut specific frequency ranges of a musical instrument or voice.
By attenuating the high frequencies of a violin, for instance, we might make room for a female voice or a piano solo, leaving the composition clearer and more polished.
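As a rough sketch of what “attenuating the high frequencies” looks like under the hood, here’s a simple low-pass filter applied to a test signal, assuming NumPy and SciPy are available; a real EQ plugin uses more refined shelving and bell filters, but the underlying principle is the same.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# A rough sketch of attenuating high frequencies, similar in spirit to pulling
# down the high end with an EQ. The cutoff and test signal are illustrative.
SAMPLE_RATE = 44_100
CUTOFF_HZ = 4_000

# Test signal: one second of a 440 Hz tone plus a bright 8 kHz component.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 8_000 * t)

# Design a 4th-order Butterworth low-pass filter and apply it.
sos = butter(4, CUTOFF_HZ, btype="lowpass", fs=SAMPLE_RATE, output="sos")
filtered = sosfilt(sos, signal)

# The peak drops once the 8 kHz component is removed; the 440 Hz tone remains.
print(round(np.max(np.abs(signal)), 2), round(np.max(np.abs(filtered)), 2))
```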
The options are endless, and having an overview of the frequencies that make up a soundstage, and of how they behave, will benefit your post-production process immensely, whether you’re an indie artist or an audio engineer working in a recording studio.
Frequently asked questions
Since this article was first published in 2020, we have consistently received questions about hertz, frequencies, and their role in music.
I’ve compiled these questions, and in this section, you will find the answers to the most frequently asked ones.
Does higher Hz mean better sound?
No, a higher frequency, by itself, only means a higher-pitched sound.
What is a good frequency response (Hz)?
When talking about high-fidelity audio, a good frequency response means natural reproduction across the audible spectrum (20 Hz to 20 kHz), without excessively coloring or attenuating any frequencies.
Can humans always hear all frequencies equally?
No. At birth we can hear frequencies between 20 Hz and 20 kHz, but over time we lose sensitivity at the higher end of the spectrum, a decline usually linked to age, genetics, and frequent exposure to loud sounds.
What happens if a frequency is below 20 Hz or above 20 kHz?
These frequencies are outside the range of human hearing, and can be felt rather than heard, especially when it comes to infrasound.
What frequency do we tune instruments to and why?
The commonly accepted tuning reference for Western music is A440, where the A above middle C is tuned to 440 Hz.
That wasn’t always the case. In the 1880s, the Italian government’s Music Commission decided that all musical instruments should be tuned using a tuning fork that vibrates at 440 Hz.
The idea was that a piece of music performed anywhere in the world would sound the same. Before then, common tuning forks vibrated at 435 Hz or 432 Hz.
Why do different speakers and headphones have different frequency responses?
When it comes to frequency range, the size of the driver is what usually makes the difference. Larger loudspeakers have larger drivers, which can move more air and therefore better reproduce low-end frequencies. Conversely, smaller loudspeakers can provide a faster response to high-frequency sounds.
Depending on what they’re designed to do, speakers and headphones come with different tonal balance, level of transparency, and dynamic range.
Some of that “coloration” defines a product’s sound signature, whereas for studio monitors and studio headphones, neutrality is paramount.
Final thoughts
I hope this guide helped clarify what hertz is and how a better understanding of frequency range can improve your music production and listening experience. Have fun!
Help, I’m trying to understand something:
Is a Solfeggio frequency at, say, 528 Hz simply the same 440 Hz “note” but sounds higher, because it is? For example, play a Middle C tuned to 440 Hz = Middle C note; change frequency of keyboard to 528 Hz = Middle C note sounds higher (because it becomes a different note X). True or False?
I’m trying to understand Solfeggio re playing “live” with my synth. Is it real or hype? I sure love lots what I’m hearing but wondering if I need new gear to play Solfeggio frequencies live, or just stick with my pad sounds on a synth tuned globally to 440 Hz
Hi Lori! Solfeggio frequencies don’t correspond exactly to notes in standard A440 tuning. For instance, 528 Hz falls between C5 (523.25 Hz) and C#5/Db5 (554.37 Hz). You have three options: you can stick with 440 Hz tuning and approximate the Solfeggio frequencies (i.e. play a C5 to get a close approximation of 528 Hz); you can tune a specific note on your synth to 528 Hz, if your synth allows microtonal tuning; or, if your synth offers a global tuning option, you can retune the whole instrument so that one of the keys plays exactly the frequency you’re looking for.
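If it helps, here’s a small Python sketch (just an illustration, nothing your synth needs) that finds the closest note in A440 equal temperament to any target frequency and tells you how far off it is in cents; running it on 528 Hz confirms that it sits roughly 16 cents above C5.

```python
import math

# A small sketch: find the closest note in standard A440 equal temperament to
# an arbitrary frequency, e.g. the 528 Hz Solfeggio tone mentioned above.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def closest_note(frequency_hz: float):
    """Return the nearest note name, its frequency, and the offset in cents."""
    midi = round(69 + 12 * math.log2(frequency_hz / 440.0))
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    note_hz = 440.0 * 2 ** ((midi - 69) / 12)
    cents_off = 1200 * math.log2(frequency_hz / note_hz)
    return name, round(note_hz, 2), round(cents_off, 1)

print(closest_note(528.0))  # ('C5', 523.25, 15.6): 528 Hz is ~16 cents above C5
```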