If you’ve been involved in music production in any way over the last couple of decades, I’m sure you’ve come across the term “normalization” many, many times.
Normalizing tracks has been common practice among artists since the dawn of DAWs, but with the advent of streaming platforms and their loudness standardization, as well as newer, less invasive ways to make tracks louder, you might wonder whether it’s still worth normalizing your tunes before publishing them.
To me, audio normalization still makes a lot of sense in certain contexts, so in this article, I’ll explain what audio normalization is, when you should use it, and how.
Behind the insights
I’m a musician and label founder. My record label has released over 50 albums, and I’ve been involved in the production and curation of each one of them.
Over the years, I’ve learned the importance of reaching industry-standard loudness levels without compromising the dynamic range of a tune.
As a result, normalization has become part of my audio post-production Swiss Army knife, a valuable tool for many jobs, from podcast production to sample creation.
Contents
Use the links below to navigate to the desired section of the article.
- What is audio normalization?
- What are the different types of normalization?
- Explaining the concept of loudness in music
- What are international standards for loudness?
- What’s the difference between normalization and compression?
- When to use audio normalization?
- How to normalize audio, step by step
- Frequently asked questions
What is audio normalization?
Audio normalization is a process that finds the track’s loudest peak and raises the volume of the entire file by one constant amount until that peak reaches a target level, which is usually just below 0 dB.
Because every sample receives the same gain, the whole song gets louder equally, without clipping.
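In code terms, peak normalization boils down to one constant gain applied to every sample. Here’s a minimal Python/NumPy sketch; the -1.0 dBFS target is just an example value, not a universal recommendation:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    """Scale the whole signal so its loudest peak sits at target_db (dBFS)."""
    peak = np.max(np.abs(samples))        # loudest absolute sample (0..1 for float audio)
    if peak == 0:
        return samples                    # silence: nothing to scale
    target_amp = 10 ** (target_db / 20)   # convert dBFS to a linear amplitude
    return samples * (target_amp / peak)  # one constant gain for every sample
```

Because the gain is constant, the distance between the loudest and quietest parts (the dynamic range) stays exactly the same.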
Why normalize audio?
There are a couple of reasons why you might want to apply normalization.
First of all, it makes your track louder without causing distortion, but it also makes volume levels more consistent across an album, leaving the dynamic range untouched.
This can make it an extremely valuable tool in post-production.
Different types of audio normalization
The two most common normalization techniques are peak and loudness normalization.
Peak normalization adjusts the amplitude of an audio signal based on its highest peak. It makes your track as loud as possible in no time. Pretty simple, right?
Loudness normalization, by contrast, focuses on perceived loudness. It’s a more complex process that adjusts the audio signal to achieve a consistent perceived loudness across different audio sources.
Loudness is measured in LUFS (Loudness Units relative to Full Scale), a scale that’s more in line with how the human ear perceives sound. We’ll talk more about this in the following section.
Finally, RMS volume normalization is a technique that adjusts the volume levels of audio based on their RMS (root mean square) amplitude.
Unlike peak normalization, RMS normalization takes into account the average amplitude of the signal over a certain period of time and then amplifies it to bring the RMS level closer to the target level.
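Here’s what that looks like as a minimal Python/NumPy sketch; the -20 dB RMS target is an arbitrary illustration, and note that nothing stops the resulting peaks from clipping:

```python
import numpy as np

def rms_normalize(samples: np.ndarray, target_db: float = -20.0) -> np.ndarray:
    """Scale the signal so its average (RMS) level sits at target_db."""
    rms = np.sqrt(np.mean(samples ** 2))  # root mean square: the signal's average level
    if rms == 0:
        return samples                    # silence: nothing to scale
    target_rms = 10 ** (target_db / 20)   # convert dB to a linear RMS target
    return samples * (target_rms / rms)   # constant gain; peaks may exceed 0 dBFS and clip
```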
The concept of loudness in music
You should know by now that loudness plays a crucial role in the music you produce and consume. What you might not be aware of, however, is that for the last couple of decades there’s been an ongoing loudness war, with artists, labels, and audio engineers all trying to make their music as loud as possible.
In the 2000s, it became common practice to apply extreme compression and EQ to make music as loud as possible at the cost of reduced dynamic range, the risk of clipping, and a duller sound.
Luckily, volume level standardization is finally bringing this loudness charade to an end.
Nowadays, each music platform has its own reference loudness level you should keep in mind when distributing your music.
Here are some examples from the most common audio streaming services:
- Spotify: -14 LUFS (-11 and -19 LUFS playback options available in Premium)
- Tidal: -14 (default) or -18 LUFS
- Apple Music: -16 LUFS
- Amazon Music: -13 LUFS
- SoundCloud: -14 LUFS
- YouTube: -14 LUFS
So, does that mean you should have different masters with different loudness levels for each streaming platform?
I’m sure some studios working with big artists do that, but my advice to you (I’m assuming you’re an indie artist) is to create one good master at around -14 LUFS, which will work nicely on all platforms.
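If you want to check or hit that target outside a DAW, here’s a minimal sketch using the open-source pyloudnorm and soundfile Python packages (the file names are placeholders):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")          # float samples in the -1..1 range
meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # measured integrated loudness in LUFS
normalized = pyln.normalize.loudness(data, loudness, -14.0)  # constant gain toward -14 LUFS
sf.write("master_-14LUFS.wav", normalized, rate)
```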
Depending on the genre you work in, the importance of loudness varies considerably. If you’re an avant-garde composer or an ambient artist, dynamic range is surely more important than loudness.
On the other hand, if you play heavy metal or trance music, you probably want your music to sound as loud as possible.
Trust your ears and the experience of your audio engineer if you have one. Don’t get overly obsessed with LUFS.
International standards for loudness
Finally, here are just a few words on the international guidelines for loudness, the most popular of which are EBU R128, ITU-R BS.1770, and ATSC A/85.
These are all standard loudness-measurement protocols for evaluating the average loudness of an audio program, whether it’s on TV, radio, in movies, and so on.
The idea is to create consistent loudness levels across different platforms, programs, and TV channels, so these standards define both a target average loudness and a maximum peak level reproduced audio should reach; EBU R128, for example, targets -23 LUFS with a true-peak ceiling of -1 dBTP.
If this is way over your head, don’t worry: you can check your loudness levels in most DAWs and mastering plugins, and there are affordable loudness-metering plugins (many with free trials) that take these standards into account.
Normalization vs compression
Normalization and compression are both common processes in post-production but lead to very different results in terms of loudness.
As I explained earlier, normalization takes the song’s loudest peak as the ceiling of your track and raises the volume until that peak reaches the desired target level.
Compression, on the other hand, adjusts the dynamic range of a song by reducing the volume of the loud parts (and, with make-up gain, effectively boosting the quiet ones). It’s a dynamic process that takes the entire signal into account and adjusts its amplitude moment by moment, based on the threshold and ratio settings you choose.
Compression gives you a more consistent volume level and, most likely, a louder song, whereas normalization maintains the original dynamic range of your composition.
When you’re not sure which of the two processes to apply, the deciding factor should be what you value most: dynamic range or loudness.
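To make the difference concrete, here’s a toy Python/NumPy sketch. Normalization applies one fixed gain; the compressor applies a gain that depends on each sample’s level. Real compressors smooth this with attack and release times, and the threshold and ratio below are arbitrary illustrations:

```python
import numpy as np

def normalize(samples: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    # One constant gain for the whole file; the dynamics are untouched.
    peak = np.max(np.abs(samples))
    return samples * (10 ** (target_db / 20) / peak)

def compress(samples: np.ndarray, threshold_db: float = -18.0, ratio: float = 4.0) -> np.ndarray:
    # Toy per-sample compressor: anything above the threshold is turned down by the ratio.
    level_db = 20 * np.log10(np.abs(samples) + 1e-12)   # each sample's level in dBFS
    over_db = np.maximum(level_db - threshold_db, 0.0)  # how far above the threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # level-dependent gain reduction
    return samples * (10 ** (gain_db / 20))
```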
When to use audio normalization?
I briefly mentioned this in the previous section, but here’s the great thing about audio normalization: it’s useful when you want to leave the dynamic range and signal-to-noise ratio untouched.
Compression, whether soft or hard, will have an impact on the perceived depth of a composition because it’ll reduce the distance between the loudest and quietest parts.
If you don’t want that to happen, then audio normalization can be an invaluable companion.
Normalization is not as popular as it used to be because, right now, there’s a certain standardization in the way audio is produced and consumed.
Plus, tools to reach high loudness levels are more affordable than ever, accessible to professionals and bedroom producers alike.
As a result, audio engineers tend to reach industry-standard volume levels by using more “sophisticated” ways to increase gain, like compression and carefully crafted equalization.
However, normalization still makes sense if you want to preserve the authenticity of your composition and a high loudness is not your main priority.
How to normalize audio?
Each DAW is different, but normalizing audio is generally quite simple, and most workstations I’ve used have an option to normalize audio automatically.
(Screenshots from a couple of projects I’m currently working on, in Studio One and Audacity, show where to find the audio normalization effect in each DAW.)
Here’s a step-by-step on how you can normalize audio on most DAWs:
- Import audio. You can import a mix or a master; the only difference is that with a mix, you’ll have to select each track you want to normalize, and normalizing individual tracks can push the summed mix into clipping, so do it carefully.
- Select track. Make sure you select the entire track.
- Find the Normalization effect. You’ll usually find it under the Effects menu bar or by right-clicking after selecting the track you want to normalize.
- Set parameters. Choose the target volume level, and if possible, choose between peak or RMS normalization.
- Apply normalization.
- Preview and save.
- Export audio.
And that’s it! As I said, audio normalization is a simple process to increase the volume of your track, and while it might not be the most effective or professional-sounding one, it’s still a great option if you want to get things done fast.
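If you’d rather script those same steps, here’s a minimal sketch using the soundfile Python package (file names are placeholders, and the -1.0 dBFS target leaves a little headroom):

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("my_mix.wav")                    # 1-2. import and "select" the whole file
peak = np.max(np.abs(data))                           # 3. find the loudest peak
gain = 10 ** (-1.0 / 20) / peak                       # 4. set the target level (-1.0 dBFS)
sf.write("my_mix_normalized.wav", data * gain, rate)  # 5-7. apply, preview, and export
```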
Frequently asked questions
Here are some of the most frequently asked questions about audio normalization.
Does audio normalization affect sound quality?
No. Audio normalization is designed simply to raise the volume of your track without affecting its dynamic range or frequency content.
The only way it can hurt sound quality is if you set the target level so high that you start to hear clipping and distortion.
Can loudness normalization be automated?
Yes, loudness normalization is an automated process in virtually every DAW.
What should I do if my audio sounds distorted after normalization?
The best thing to do in this case is to try again with a lower target volume level. If the problem persists, chances are there’s an issue with the recordings themselves, which may be distorting even without normalization.
Is there a good universal dB level to normalize to?
Anything between -1.0 dBFS and -3.0 dBFS should do. These settings will leave you enough headroom for additional effects while maximizing loudness.
Final thoughts
While nowhere near as popular as it used to be, audio normalization still serves its purpose whenever you want to maximize volume levels without touching the dynamic range.
If you do use it, make sure you leave enough headroom for effects and to let your sound “breathe.”
In my experience, audio normalization is great when you need a quick way to increase the volume of your mixes or samples, but not so much when you want to achieve industry-standard loudness levels.
Have fun!